Search Results

Search found 22711 results on 909 pages for 'amazon product api'.


  • I, Android

    - by andrewbrust
    I’m just back from the 2011 Consumer Electronics Show (CES). I go to CES to get a sense of what Microsoft is doing in the consumer space, and how people are reacting to it. When I first went to CES 2 years ago, Steve Ballmer announced the beta of Windows 7 at his keynote address, and the crowd went wild. When I went again last year, everyone was hoping for a Windows tablet announcement at the Ballmer keynote. Although they didn’t get one (unless you count the unreleased HP Slate running Windows 7), people continued to show anticipation around Project Natal (which became Xbox 360 Kinect) and around Windows Phone 7. On the show floor last year, there were machines everywhere running Windows 7, including lots of netbooks. Microsoft had a serious influence at the show both years.

    But this year, one brand, one product, one operating system evidenced itself over and over again: Android. Whether in the multitude of tablet devices shown across the show floor, the burgeoning number of smartphones (including all four forthcoming 4G-LTE handsets at Verizon Wireless’ booth), or the Google TV set top box from Logitech and the embedded implementation in new Sony TV models, Android was there. There was excitement in the ubiquity of Android 2.2 (Froyo) and the emergence of Android 2.3 (Gingerbread). There was anticipation around the tablet-optimized Android 3.0 (Honeycomb). There were highly customized skins. There was even an official CES Android app for navigating the exhibit halls and planning events. Android was so ubiquitous, in fact, that it became surprising to find a device that was running anything else. It was as if Android had become the de facto Original Equipment Manufacturer (OEM) operating system.

    Motorola’s booth was nothing less than an Android showcase. And it was large, and it was packed. Clearly Moto’s fortunes have improved dramatically in the last year and change. The fact that the company morphed from being a core Windows Mobile OEM to an Android poster child seems non-coincidental to their improved fortunes. Even erstwhile WinMo OEMs who now do produce Windows Phone 7 devices were not pushing them. Perhaps I missed them, but I couldn’t find WP7 handsets at Samsung’s booth, nor at LG’s. And since the only carrier exhibiting at the show was Verizon Wireless, which doesn’t yet have WP7 devices, this left Microsoft’s booth as the only place to see the phones.

    Why is Android so popular with consumer electronics manufacturers in Japan, South Korea, China and Taiwan? Yes, it’s free, but there’s more to it than that. Android seems to have succeeded as an OEM OS because it’s directed at OEMs who are permitted to personalize it and extend it, and it provides enough base usability and touch-friendliness that OEMs want it. In the process, it has become a de facto standard (which makes OEMs want it even more), and has done so in a remarkably short time: the OS was launched on a single phone in the US just 2 1/4 years ago.

    Despite its success and popularity, Apple’s iOS would never be used by OEMs, because it’s not meant to be embedded and customized, but rather to provide a fully finished experience. Ironically, Windows Phone 7 is likewise disqualified from such embedded use. Windows Mobile (6.x and earlier) may have been a candidate had it not atrophied so much in its final 5 years of life.

    What can Microsoft do? It could start by developing a true touch-centric OS for tablets, whether that be within Windows 8, or derived from Windows Phone 7.
It would then need to deconstruct that finished product into components, via a new or altered version of Windows Embedded or Windows Embedded Compact.  And if Microsoft went that far, it would only make sense to work with its OEMs and mobile carriers to make certain they showcase their products using the OS at CES, and other consumer electronics venues, prominently. Mostly though, Microsoft would need to decide if it were really committed to putting sustained time, effort and money into a commodity product, especially given the far greater financial return that it now derives from its core Windows and Office franchises. Microsoft would need to see an OEM OS for what it is: a loss leader that helps build brand and platform momentum for up-level products.  Is that enough to make the investment worthwhile?  One thing is certain: if that question is not acknowledged and answered honestly, then any investment will be squandered.

    Read the article

  • Grid Layouts in ADF Faces using Trinidad

    - by frank.nimphius
    ADF Faces provides a data table component but no component for defining grid layouts. Grids are common in web design, and developers often try HTML table markup wrapped in an f:verbatim tag, or added directly to the page, to build a desired layout. Usually these attempts fail, showing unpredictable results. ADF Faces does not provide a table layout component, but Apache MyFaces Trinidad does. The Trinidad trh:tableLayout component is a thin wrapper around the HTML table element and contains a series of row layout elements, trh:rowLayout. Each trh:rowLayout component may contain one or many trh:cellFormat components to format cell content.

    <trh:tableLayout id="tl1" halign="left">
      <trh:rowLayout id="rl1" valign="top" halign="left">
        <trh:cellFormat id="cf1" width="100" header="true">
          <af:outputLabel value="Label 1" id="ol1"/>
        </trh:cellFormat>
        <trh:cellFormat id="cf2" header="true" width="300">
          <af:outputLabel value="Label 2" id="outputLabel1"/>
        </trh:cellFormat>
      </trh:rowLayout>
      <trh:rowLayout id="rowLayout1" valign="top" halign="left">
        <trh:cellFormat id="cellFormat1" width="100" header="false">
          <af:outputLabel value="Label 3" id="outputLabel2"/>
        </trh:cellFormat>
      </trh:rowLayout>
      ...
    </trh:tableLayout>

    To add the Trinidad tag library to your ADF Faces project, open the Component Palette and right-click into it, choose "Edit Tag Libraries", select the Trinidad components, move them to the "Selected Libraries" section, and OK the dialog. The first time you drag a Trinidad component to a page, the web.xml file is updated with the required filters. Note: the Trinidad tags don't participate in the ADF Faces RC geometry management. However, they are JSF components that are part of the JSF request lifecycle. ADF Faces RC components work well with Trinidad layout components that don't use PPR. The PPR implementation in Trinidad is different from the one in ADF Faces, so when you mix ADF Faces components with Trinidad components, avoid Trinidad components that have integrated PPR behavior.
    Only use passive Trinidad components. See: http://myfaces.apache.org/trinidad/trinidad-api/tagdoc/trh_tableLayout.html, http://myfaces.apache.org/trinidad/trinidad-api/tagdoc/trh_rowLayout.html, and http://myfaces.apache.org/trinidad/trinidad-api/tagdoc/trh_cellFormat.html

    Read the article

  • Sun Ray Hardware Last Order Dates & Extension of Premier Support for Desktop Virtualization Software

    - by Adam Hawley
    In light of the recent announcement to end new feature development for Oracle Virtual Desktop Infrastructure Software (VDI), Oracle Sun Ray Software (SRS), Oracle Virtual Desktop Client (OVDC) Software, and Oracle Sun Ray Client hardware (3, 3i, and 3 Plus), there have been questions and concerns regarding what this means for customers with new or existing deployments. The following updates clarify some of these commonly asked questions.

    Extension of Premier Support for Software
    Though there will be no new feature additions to these products, customers will have access to maintenance update releases for Oracle Virtual Desktop Infrastructure and Sun Ray Software, including Oracle Virtual Desktop Client and Sun Ray Operating Software (SROS), until Premier Support ends. To ensure that customer investments in these products are protected, Oracle Premier Support for these products has been extended by 3 years to the following dates: Sun Ray Software - November 2017; Oracle Virtual Desktop Infrastructure - March 2017. Note that OVDC support is also extended to the above dates, since OVDC is licensed by default as part of the SRS and VDI products. As a reminder, this only affects the products listed above. Oracle Secure Global Desktop and Oracle VM VirtualBox will continue to be enhanced with new features from time to time and, as a result, they are not affected by the changes detailed in this message. The extension of support means that customers under a support contract will still be able to file service requests through Oracle Support, and Oracle will continue to provide the utmost level of support to our customers as expected, until the published Premier Support end date. Following the end of Premier Support, Sustaining Support remains available for an 'indefinite' period of time.

    Sun Ray 3 Series Clients - Last Order Dates
    For Sun Ray Client hardware, customers can continue to purchase Sun Ray Client devices until the following last order dates:

    Product             | Marketing Part Number        | Last Order Date    | Last Ship Date
    Sun Ray 3 Plus      | TC3-P0Z-00, TC3-PTZ-00 (TAA) | September 13, 2013 | February 28, 2014
    Sun Ray 3 Client    | TC3-00Z-00                   | February 28, 2014  | August 31, 2014
    Sun Ray 3i Client   | TC3-I0Z-00                   | February 28, 2014  | August 31, 2014
    Payflex Smart Cards | X1403A-N, X1404A-N           | February 28, 2014  | August 31, 2014

    Note the difference in the Last Order Date for the Sun Ray 3 Plus (September 13, 2013) compared to the other products, which have a Last Order Date of February 28, 2014. The rapidly approaching date for the Sun Ray 3 Plus is due to a supplier phasing out production of a key component of the 3 Plus. Given September 13 is unfortunately quite soon, we strongly encourage you to place your last-time buy as soon as possible to maximize Oracle's ability to fulfill your order. Keep in mind you can schedule shipments to be delivered as late as the end of February 2014, but the last day to order is September 13, 2013. Customers wishing to purchase other models - Sun Ray 3 Clients and/or Sun Ray 3i Clients - have additional time (until February 28, 2014) to assess their needs and to allow fulfillment of last-time orders. Please note that availability of supply cannot be absolutely guaranteed up to the last order dates, and we strongly recommend placing last-time buys as early as possible. Warranty replacements for Sun Ray Client hardware for customers covered by Oracle Hardware Systems Support contracts will be available beyond last order dates, per Oracle's policy found on Oracle.com here.
    Per that policy, Oracle intends to provide replacement hardware for up to 5 years beyond the last ship date, but hardware may not be available beyond the 5-year period after the last ship date for reasons beyond Oracle's control. In any case, by design, Sun Ray Clients have an extremely long lifespan and mean time between failures (MTBF) - much longer than PCs - and over the years we have continued to see first- and second-generation Sun Rays still in daily use. This is no different for the Sun Ray 3, 3i, and 3 Plus. Because of this, and in addition to Oracle's continued support for SRS, VDI, and SROS, Sun Ray and Oracle VDI deployments can continue to expand and exist as a viable solution for some time in the future.

    Continued Availability of Product Licenses and Support
    Oracle will continue to offer all existing software licenses, and software and hardware support, including: product licenses and Premier Support for Sun Ray Software and Oracle Virtual Desktop Infrastructure; Premier Support for Operating Systems (for Sun Ray Operating Software maintenance upgrades/support); Premier Support for Systems (for Sun Ray Operating Software maintenance upgrades/support and hardware warranty); and support renewals.

    For More Information
    For more information, please refer to the following documents for specific dates and policies associated with the support of these products: Document 1478170.1 - Oracle Desktop Virtualization Software and Hardware Lifetime Support Schedule; Document 1450710.1 - Sun Ray Client Hardware Lifetime Schedule; Document 1568808.1 - Support Policies for Discontinued Oracle Virtual Desktop Infrastructure, Sun Ray Software and Hardware and Oracle Virtual Desktop Client Development.

    For Sales Orders and Questions
    Please contact your Oracle Sales Representative or Saurabh Vijay ([email protected])

    Read the article

  • AWS .NET SDK v2: setting up queues and topics

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/13/aws-.net-sdk-v2-setting-up-queues-and-topics.aspx

    Following on from my last post, reading from SQS queues with the new SDK is easy stuff, but linking a Simple Notification Service topic to an SQS queue is a bit more involved. The AWS model for topics and subscriptions is a bit more advanced than in Azure Service Bus. SNS lets you have subscribers on multiple different channels, so you can send a message which gets relayed to email addresses, mobile apps and SQS queues all in one go. When you request a subscription on any channel, the owner of that endpoint needs to confirm they’re happy for you to send them messages. With email subscriptions, the user gets a confirmation request from Amazon which they need to reply to before they start getting messages. With SQS, you need to grant the topic permission to write to the queue. If you own both the topic and the queue, you can do it all in code with the .NET SDK. Let’s say you want to create a new topic, a new queue as a topic subscriber, and link the two together. Creating the topic is easy with the SNS client (which has an expanded name, AmazonSimpleNotificationServiceClient, compared to the SQS class which is just called QueueClient):

      var request = new CreateTopicRequest();
      request.Name = TopicName;
      var response = _snsClient.CreateTopic(request);
      TopicArn = response.TopicArn;

    In the response from AWS (which I’m assuming is successful), you get an ARN – Amazon Resource Name – which is the unique identifier for the topic. We create the queue using the same code from my last post, AWS .NET SDK v2: the message-pump pattern, and then we need to subscribe the queue to the topic. The topic creates the subscription request:

      var response = _snsClient.Subscribe(new SubscribeRequest
      {
          TopicArn = TopicArn,
          Protocol = "sqs",
          Endpoint = _queueClient.QueueArn
      });

    That response will give you an ARN for the subscription, which you’ll need if you want to set attributes like RawMessageDelivery. Then the SQS client needs to confirm the subscription by allowing the topic to send messages to it. The SDK doesn’t give you a nice mechanism for doing that, so I’ve extended my AWS wrapper with a method that encapsulates it:

      internal void AllowSnsToSendMessages(TopicClient topicClient)
      {
          var policy = Policies.AllowSendFormat.Replace("%QueueArn%", QueueArn).Replace("%TopicArn%", topicClient.TopicArn);
          var request = new SetQueueAttributesRequest();
          request.Attributes.Add("Policy", policy);
          request.QueueUrl = QueueUrl;
          var response = _sqsClient.SetQueueAttributes(request);
      }

    That builds up a policy statement, which gets added to the queue as an attribute, and specifies that the topic is allowed to send messages to the queue. The statement itself is a JSON block which contains the ARN of the queue, the ARN of the topic, and an Allow effect for the sqs:SendMessage action:

      public const string AllowSendFormat = @"{
        ""Statement"": [
          {
            ""Sid"": ""MySQSPolicy001"",
            ""Effect"": ""Allow"",
            ""Principal"": { ""AWS"": ""*"" },
            ""Action"": ""sqs:SendMessage"",
            ""Resource"": ""%QueueArn%"",
            ""Condition"": { ""ArnEquals"": { ""aws:SourceArn"": ""%TopicArn%"" } }
          }
        ]
      }";

    There’s a new gist with an updated QueueClient and a new TopicClient here: Wrappers for the SQS and SNS clients in the AWS SDK for .NET v2.
    Both clients have an Ensure() method which creates the resource, so if you want to create a topic and a subscription you can use: var topicClient = new TopicClient("BigNews", "ImListening"); And the topic client has a Subscribe() method, which calls into the message pump on the queue client: topicClient.Subscribe(x => Log.Debug(x.Body)); var message = ...; // etc. topicClient.Publish(message); So you can isolate all the fiddly bits and use SQS and SNS with a similar interface to the Azure SDK.
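    For readers who would rather see the whole flow in one place without the wrapper classes from the gist, here is a minimal sketch against the raw SDK v2 clients. The topic and queue names are made up for illustration, error handling is omitted, and credentials/region are assumed to come from app.config as usual.

```csharp
// Minimal sketch: create a topic and a queue, subscribe the queue to the topic,
// and grant the topic permission to send messages to the queue.
using System.Collections.Generic;
using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;
using Amazon.SQS;
using Amazon.SQS.Model;

public static class TopicQueueSetup
{
    public static void Run()
    {
        var sns = new AmazonSimpleNotificationServiceClient();
        var sqs = new AmazonSQSClient();

        // 1. Create the topic and capture its ARN.
        string topicArn = sns.CreateTopic(new CreateTopicRequest { Name = "BigNews" }).TopicArn;

        // 2. Create the queue and look up its ARN.
        string queueUrl = sqs.CreateQueue(new CreateQueueRequest { QueueName = "ImListening" }).QueueUrl;
        string queueArn = sqs.GetQueueAttributes(new GetQueueAttributesRequest
        {
            QueueUrl = queueUrl,
            AttributeNames = new List<string> { "QueueArn" }
        }).Attributes["QueueArn"];

        // 3. Subscribe the queue to the topic over the "sqs" protocol.
        sns.Subscribe(new SubscribeRequest { TopicArn = topicArn, Protocol = "sqs", Endpoint = queueArn });

        // 4. Allow the topic to send to the queue, using the same kind of policy shown above.
        const string allowSendFormat = @"{
          ""Statement"": [ {
            ""Sid"": ""MySQSPolicy001"",
            ""Effect"": ""Allow"",
            ""Principal"": { ""AWS"": ""*"" },
            ""Action"": ""sqs:SendMessage"",
            ""Resource"": ""%QueueArn%"",
            ""Condition"": { ""ArnEquals"": { ""aws:SourceArn"": ""%TopicArn%"" } }
          } ]
        }";
        string policy = allowSendFormat.Replace("%QueueArn%", queueArn).Replace("%TopicArn%", topicArn);

        sqs.SetQueueAttributes(new SetQueueAttributesRequest
        {
            QueueUrl = queueUrl,
            Attributes = new Dictionary<string, string> { { "Policy", policy } }
        });
    }
}
```

    Once this has run, anything published to the topic should be relayed into the queue, which the message-pump pattern from the earlier post can then read.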

    Read the article

  • Upgrading Agent Controllers in Oracle Enterprise Manager Ops Center 12c

    - by S Stelting
    Oracle Enterprise Manager Ops Center 12c recently released an upgrade for Solaris Agent Controllers. In this week's blog post, we'll show you how to upgrade agent controllers. Detailed instructions about upgrading Agent Controllers are available in the product documentation here. This blog post uses an Enterprise Controller which is configured for connected mode operation. If you'd like to apply the agent update in a disconnected installation, additional instructions are available here.

    Step 1: Download Agent Controller Updates
    With a connected mode Ops Center installation, you can check for product updates at any time by selecting the Enterprise Controller from the left-hand Administration navigation tab. Select the right-hand Action link "Ops Center Downloads" to open a pop-up dialog displaying any new product updates. In this example, the Enterprise Controller has already been upgraded to the latest version (Update 1, also shown as build version 2076), so only the Agent Controller updates will appear. There are three updates available: one for Solaris 10 X86, one for Solaris 8-10 SPARC, and one for all versions of Solaris 11. Note that the last update in the screen shot is the Solaris 11 update; for details on any of the downloads, place your mouse over the information icon under the details column for a pop-up text region. Select the software to download and click the Next button to display the Ops Center license agreement. Review and click the check box to accept the license agreement, then click the Next button to begin downloading the software. The status screen shows the current download status. If desired, you can perform the downloads as a background job. Simply click the check box, then click the Next button to proceed to the summary screen. The summary screen shows the updates to be downloaded as well as the current status. Clicking the Finish button will close the dialog and return to the Browser UI. The download job will continue to run in Ops Center, and progress can still be viewed from the jobs menu at the bottom of the browser window.

    Step 2: Check the Version of Existing Agent Controllers
    After the download job completes, you can check the availability of agent updates as well as the current versions of your Agent Controllers from the left-hand Assets navigation tab. Select "Operating Systems" from the pull-down list to display only OS assets. Next, select "Solaris" in the left-hand tab to display the Solaris assets. Finally, select the Summary tab in the center display panel to show which versions of agent controllers are installed in your data center. Notice that a few of the OS assets are not displayed in the Agent Controllers tab. Ops Center will not display OS instances which do not have an Agent Controller installation. This includes Enterprise Controllers and Proxy Controllers (unless the agent has been activated on the OS instance) and OS instances using agentless management. For Agent Controllers which support an update, the version of agent software (in this example, 2083) appears to the right of the currently installed version.

    Step 3: Upgrade Your Agent Controllers
    If desired, you can upgrade agent controllers from the previous screen by selecting the desired systems and clicking the upgrade button. Alternatively, you can click the link "Upgrade All Agent Controllers" in the right-hand Actions menu. In either case, a pop-up dialog lets you start the upgrade process.
    The first screen in the dialog lets you choose the upgrade method. Ops Center provides three ways to upgrade agent controllers: (1) Automatic Upgrade: if Agent Controllers are running on all assets, Ops Center can automatically upgrade the software to the latest version without requiring any login credentials to the system. (2) SSH using a single set of credentials: if all assets use the same login credentials, you can apply a single set to all assets for the upgrade process. The login credentials are the same ones used for asset discovery and management, which are stored in the Plan Management navigation tab under Credentials. (3) SSH using individual credentials: if assets use different login credentials, you can select a different set for each asset. After selecting the upgrade method, click the Next button to proceed to the summary screen. Click the Finish button to close the pop-up dialog and start the upgrade job for the agent controllers. The upgrade job runs a series of tasks in parallel, and will upgrade all agents which have been selected. Once the job completes, the OS instances in your data center will be upgraded and running the latest version of Agent Controller software.

    Read the article

  • How Estimates Became Quotes

    - by Lee Brandt
    It’s our fault. Well, not completely, but we haven’t helped the situation any. All of what follows comes from my own experiences which, from talking to lots of other developers about it, seem to be pretty much par for the course.

    Where We Started
    When we first started estimating, we estimated pretty clearly. We would try to imagine something we’d done that was similar to the project being estimated and we’d toss it about in our heads a bit and see how much bigger or smaller we thought this new thing was, and add or subtract accordingly. We wouldn’t spend too much time on it, because we wanted to get to writing the software. Eventually, we’d come across some huge problem that there was no way we could’ve known about ahead of time. Either we didn’t see this thing, or we didn’t realize that this particular version of a problem would be so… problematic. We usually call this “not knowing what we don’t know”. It’s unavoidable. We just can’t know. Until we wade in and start putting some code together, there are just some things we won’t know… and some things we don’t even know that we don’t know. Y’know? So what happens? We go over budget. Project managers scream and dance the dance of the stressed-out project manager, and there is nothing we can do (or could’ve done) about it. We didn’t know. We thought about it for a bit and we didn’t see this herculean task sitting in the middle of our nice quiet project, and it has bitten us in the rear end. We now know how to handle this in the future, though. We will take some more time to pick around the requirements and discover all those things we don’t know. We’ll do some prototyping, we’ll read some blogs about similar projects, we’ll really grill the customer with questions during the requirements gathering phase. We’ll keep asking “what else?” until they shove us down the stairs. We’ll take our time and uncover it all.

    We Learned, But Good
    The next time comes, and you know what happens? We do it. We grill the customer for weeks and prototype and read and research and we estimate everything down to the last button on the last form. Know what that gets us? It gets us three months of wasted time, and our estimate will still be off. Possibly off by a factor of four. WTF, mate? No way we could be surprised by something! We uncovered every particle. We turned every stone. How is it we still came across unknowns? Because we STILL didn’t know what we didn’t know. How could we? We didn’t know to ask. The worst part is, we’ve now convinced the product owner that this is NOT an estimate. It is a solid number based on massive research and an endless number of questions that they answered. There is absolutely no way you don’t know everything there is to know about this project now. No way there is anything you haven’t uncovered. And their faith in that “Esti-Quote” goes through the roof. When the project goes over this time, they might even begin to question whether or not you know what you’re doing. Who could blame them? You drilled them for weeks about every little thing, and when they complained about all the questions, you told them you wanted to uncover everything so there would be no surprises. So we set them up to fail.

    Guess, Then Plan
    We had a chance. At the beginning we could have just said, “That’s just a gut-feeling estimate, based on my past experience with similar projects.
    There could still be surprises.” If we spend SOME time doing SOME discovery and then bounce that against our own past experiences, we can come up with a fairly healthy estimate. We can then help the product owner understand that an estimate is a guess. Sure, it’s an educated guess, but it is still a guess. If we get it right it will be almost completely luck. Then, we help them to plan the development by taking that guess (yes, they still need the guess for planning purposes) and start measuring early and often to see if we still think we are right. We should adjust the estimate and alert the product owner as soon as we see problems (bad news does not age well), and we should be able to see any problems immediately if we are constantly measuring our pace. In lean software, we start with that guess and begin measuring cycle times immediately. Then we can make projections based on those cycle times and compare them to our estimate, as in the sketch below. This constant feedback is the best way to ensure that there are no surprises at the END of the project. There will still be surprises, but we’ll see them sooner and have a better understanding of how they will affect our overall timeline. What do you think?
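    The sketch below is purely illustrative; the cycle times, item counts, and original estimate are invented, and it only shows the kind of projection-versus-guess comparison described above.

```csharp
// Illustrative only: project remaining work from measured cycle times
// and compare against the original gut-feel estimate.
using System;
using System.Linq;

class EstimateTracker
{
    static void Main()
    {
        double[] cycleTimesDays = { 2.5, 3.0, 1.5, 4.0, 2.0 };  // days per completed work item so far
        int itemsRemaining = 40;                                 // backlog still to do
        double originalEstimateDays = 90;                        // the initial guess

        double avgCycleTime = cycleTimesDays.Average();
        double daysSpent = cycleTimesDays.Sum();
        double projectedTotal = daysSpent + avgCycleTime * itemsRemaining;

        Console.WriteLine("Projected total: {0:F0} days (original guess: {1:F0} days)",
                          projectedTotal, originalEstimateDays);
        if (projectedTotal > originalEstimateDays)
            Console.WriteLine("Flag it with the product owner now - bad news does not age well.");
    }
}
```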

    Read the article

  • Best Method For Evaluating Existing Software or New Software

    How many of us have been faced with having to decide on an off-the-shelf or a custom-built component, application, or solution to integrate into an existing system or to be the core foundation of a new system? What is the best method for evaluating existing software or new software still in the design phase? One of the industry-preferred methodologies to use is the Active Reviews for Intermediate Designs (ARID) evaluation process. ARID is a hybrid mixture of the Active Design Review (ADR) methodology and the Architectural Tradeoff Analysis Method (ATAM).

    So what is ARID? ARID’s main goal is to ensure quality, detailed designs in software. One way in which it does this is by empowering reviewers through generic, open-ended survey questions. This approach attempts to remove the possibility of standard answers such as “Yes” or “No”. The ADR process ignores “Yes”/“No” questions because they can be leading, depending on how the question is asked, and they tend to receive less thought in comparison to more open-ended questions.

    Common Active Design Review Questions: What possible exceptions can occur in this component, application, or solution? How should exceptions be handled in this component, application, or solution? Where should exceptions be handled in this component, application, or solution? How should the component, application, or solution flow based on the design? What is the maximum execution time for every component, application, or solution? What environments can this component, application, or solution run in? What data dependencies does this component, application, or solution have? What kind of data does this component, application, or solution require?

    Ok, now I know what ARID is, how can I apply it? Let’s imagine that your organization is going to purchase an off-the-shelf (OTS) solution for its customer-relationship management software. What process would we use to ensure that the correct purchase is made? If we use ARID, then we will have a series of 9 steps broken up into 2 phases in order to ensure that the correct OTS solution is purchased.

    Phase 1: Identify the Reviewers; Prepare the Design Briefing; Prepare the Seed Scenarios; Prepare the Materials.

    When identifying reviewers for a design, it is preferred that they be pulled from a candidate pool comprised of developers who are going to implement the design. The belief is that developers actually implementing the design will have more of a vested interest in ensuring that the design is correct prior to the start of code. The design briefing consists of a summary of the design with examples of the design applied to real-world scenarios, and should typically be no longer than two hours. The primary goal of this briefing is to summarize the design well enough that the review members could actually implement it. In the example of purchasing an OTS product, I would attempt to review my briefing with the review facilitator prior to its distribution, to ensure that nothing was excluded that should not have been. This practice will also allow me to test the length of the briefing to ensure that it can be delivered in an appropriate amount of time. Seed Scenarios are designed to illustrate conceptualized scenarios when applied with a set of sample data. These scenarios can then be used by the reviewers in the actual evaluation of the software. All materials needed for the evaluation should be prepared ahead of time so that they can be reviewed prior to and during the meeting.
    Materials included: the presentation, the seed scenarios, and the review agenda.

    Phase 2: Present ARID; Present the Design; Brainstorm and prioritize scenarios; Apply scenarios; Summarize.

    Prior to the start of any ARID review meeting, the facilitator should define the remaining steps of ARID so that all the participants know exactly what they are doing prior to the start of the review process. Once the ARID rules have been laid out, the lead designer presents an overview of the design, which typically takes about two hours. During this time no questions about the design or rationale are allowed to be asked by the review panel as a standard, but they are written down for use later in the process. After the presentation, the list of compiled questions is summarized and sent back to the lead designer as areas that need to be addressed further. In the example of purchasing an OTS product, issues could arise regarding security, the implementation needed, or even whether this is the correct product to solve the problem at hand. After the design presentation, a brainstorming and prioritization process begins, reducing the seed scenarios down to just the highest-priority scenarios. These will then be used to test the design for suitability. Once the selected scenarios have been defined, the reviewers apply the examples provided in the presentation to the scenarios. The intended output of this process is code or pseudocode that makes use of the examples provided while solving the selected seed scenarios. As a standard rule, the designers of the system are not allowed to help the review board unless they all become stuck. When this occurs, it is documented along with the reason why the designer needed to get the review panel back on track. Once all of the scenarios have been completed, the review facilitator reviews with the group the issues that arose during the process. Then the reviewers are polled as to the efficacy of the review experience.

    References: Clements, Paul; Kazman, Rick; Klein, Mark (2002). Evaluating Software Architectures: Methods and Case Studies. Indianapolis, IN: Addison-Wesley.

    Read the article

  • A Visual Studio Release Grows in Brooklyn

    - by andrewbrust
    Yesterday, Microsoft held its flagship launch event for Office 2010 in Manhattan. Today, the Redmond software company is holding a local launch event for Visual Studio (VS) 2010, in Brooklyn. How come information workers get the 212 treatment and developers are relegated to 718? Well, here’s the thing: the Brooklyn Marriott is actually a great place for an event, but you need some intimate knowledge of New York City to know that. NBC’s Studio 8H, where the Office launch was held yesterday (and from where SNL is broadcast) is a pretty small venue, but you’d need some inside knowledge to recognize that. Likewise, while Office 2010 is a product whose value is apparent, appreciating VS 2010’s value takes a bit more savvy. Setting aside its year-based designation, this release of VS, counting the old Visual Basic releases, is the 10th version of the product. How can a developer audience get excited about an integrated development environment when it reaches double-digit version numbers? Well, it can be tough. Luckily, Microsoft sent Jay Schmelzer, a Group Program Manager from the Visual Studio team in Redmond, to come tell the Brooklyn audience why they should be excited.

    Turns out there are a lot of reasons. Support for SharePoint development is a big one. In previous versions of VS, that support has been anemic, at best. Shortage of SharePoint developers is a huge issue in the industry, and this should help. There’s also built-in support for Windows Azure (Microsoft’s cloud platform) and, through a download, support for the forthcoming Windows Phone 7 platform. ASP.NET MVC, a “close-to-the-metal” Web development option that does away with the Web Forms abstraction layer, has a first-class presence in VS. So too does jQuery, the Open Source environment that makes JavaScript development a breeze. The jQuery support is so good that Microsoft now contributes to that Open Source project and offers IntelliSense support for it in the code editor.

    Speaking of the VS code editor, it now supports multi-monitor setups, zoom-in, and block selection. If you’re not a developer, this may sound confusing and minute. I’ll just say that for people who are developers these are little things that really contribute to productivity, and that translates into lower development costs.

    The really cool demo, though, was around Visual Studio 2010’s new debugging features. This stuff is hard to showcase, but I believe it’s truly breakthrough technology: imagine being able to step backwards in time to see what might have caused a bug. Cool? Now imagine being able to do that, even if you weren’t the tester and weren’t present while the testing was being done. Then imagine being able to see a video screen capture of what the tester was doing with your app when the bug occurred. VS 2010 allows all that. This could be the demise of the IWOMM (“it works on my machine”) syndrome.

    After the keynote, I asked Schmelzer if any of Microsoft’s competitors have debugging tools that come close to VS 2010’s. His answer was an earnest “we don’t think so.” If that’s true, that’s a big deal, and a huge advantage for developer teams who adopt it. It will make software development much cheaper and more efficient. Kind of like holding a launch event at the Brooklyn Marriott instead of 30 Rock in Manhattan!

    VS 2010 (version 10) and Office 2010 (version 14) aren’t the only new product versions Microsoft is releasing right now.
    There’s also SQL Server 2008 R2 (version 10.5), Exchange 2010 (version 8, I believe), SharePoint 2010 (version 4) and, of course, Windows 7. With so many new versions at such levels of maturity, I think it’s fair to say Microsoft has reached middle age. How does a company stave off a potential mid-life crisis, especially with young Turks like Google coming along and competing so fiercely? Hard to say. But if focusing on core value, including value that’s hard to play into a sexy demo, is part of the answer, then Microsoft’s doing OK. And if some new tricks, like Windows Phone 7, can gain some traction, that might round things out nicely. Are the legacy products old tricks, or are they revised classics? I honestly don’t know, because it’s the market’s prerogative to pass that judgement. I can say this though: based on today’s show, I think Microsoft’s been doing its homework.

    Read the article

  • SPARC T4-4 Delivers World Record Performance on Oracle OLAP Perf Version 2 Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered world record performance with subsecond response time on the Oracle OLAP Perf Version 2 benchmark, using Oracle Database 11g Release 2 running on Oracle Solaris 11. The SPARC T4-4 server achieved throughput of 430,000 cube-queries/hour with an average response time of 0.85 seconds and a median response time of 0.43 seconds. This was achieved using only 60% of the available CPU resources, leaving plenty of headroom for future growth. The SPARC T4-4 server operated on an Oracle OLAP cube with a 4 billion row fact table of sales data containing 4 dimensions. This represents as many as 90 quintillion aggregate rows (90 followed by 18 zeros).

    Performance Landscape
    Oracle OLAP Perf Version 2 Benchmark, 4 Billion Fact Table Rows

    System     | Queries/hour | Users* | Average Response Time (sec) | Median Response Time (sec)
    SPARC T4-4 | 430,000      | 7,300  | 0.85                        | 0.43

    * Users - the supported number of users with a given think time of 60 seconds

    Configuration Summary and Results
    Hardware Configuration: SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz, 1 TB memory. Data Storage: 1 x Sun Fire X4275 (using COMSTAR), 2 x Sun Storage F5100 Flash Array (each with 80 FMODs). Redo Storage: 1 x Sun Fire X4275 (using COMSTAR with 8 HDD). Software Configuration: Oracle Solaris 11 11/11, Oracle Database 11g Release 2 (11.2.0.3) with Oracle OLAP option.

    Benchmark Description
    The Oracle OLAP Perf Version 2 benchmark is a workload designed to demonstrate and stress the Oracle OLAP product's core features of fast query, fast update, and rich calculations on a multi-dimensional model to support enhanced Data Warehousing. The bulk of the benchmark entails running a number of concurrent users, each issuing typical multidimensional queries against an Oracle OLAP cube consisting of a number of years of sales data with fully pre-computed aggregations. The cube has four dimensions: time, product, customer, and channel. Each query user issues approximately 150 different queries. One query chain may ask for total sales in a particular region (e.g. South America) for a particular time period (e.g. Q4 of 2010), followed by additional queries which drill down into sales for individual countries (e.g. Chile, Peru, etc.), with further queries drilling down into individual stores, etc. Another query chain may ask for yearly comparisons of total sales for some product category (e.g. major household appliances) and then issue further queries drilling down into particular products (e.g. refrigerators, stoves, etc.), particular regions, particular customers, etc. Results from version 2 of the benchmark are not comparable with version 1. The primary difference is the type of queries along with the query mix.

    Key Points and Best Practices
    Since typical BI users are likely to issue similar queries with different constants in the where clauses, setting the init.ora parameter "cursor_sharing" to "force" will provide additional query throughput and a larger number of potential users. Except for this setting, together with making full use of available memory, out-of-the-box performance for the OLAP Perf workload should provide results similar to what is reported here. For a given number of query users with zero think time, the main measured metrics are the average query response time, the median query response time, and the query throughput. A derived metric is the maximum number of users the system can support achieving the measured response time, assuming some non-zero think time.
    The calculation of the maximum number of users follows from the well-known response-time law N = (rt + tt) * tp, where rt is the average response time, tt is the think time and tp is the measured throughput. Setting tt to 60 seconds, rt to 0.85 seconds and tp to 119.44 queries/sec (430,000 queries/hour), the above formula shows that the T4-4 server will support 7,300 concurrent users with a think time of 60 seconds and an average response time of 0.85 seconds. For more information see chapter 3 of the book "Quantitative System Performance" cited below.

    See Also: Quantitative System Performance: Computer System Analysis Using Queueing Network Models, Edward D. Lazowska, John Zahorjan, G. Scott Graham, Kenneth C. Sevcik; Oracle Database 11g – Oracle OLAP (oracle.com, OTN); SPARC T4-4 Server (oracle.com, OTN); Oracle Solaris (oracle.com, OTN); Oracle Database 11g Release 2 (oracle.com, OTN).

    Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 11/2/2012.
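    Referring back to the response-time law quoted above, here is a small illustrative sketch that reproduces the arithmetic; it is only a sanity check of the published figures, not part of the benchmark itself.

```csharp
// Quick check of the response-time law N = (rt + tt) * tp using the published figures.
using System;

class ResponseTimeLaw
{
    static void Main()
    {
        double rt = 0.85;               // average response time, seconds
        double tt = 60.0;               // think time, seconds
        double tp = 430000.0 / 3600.0;  // 430,000 queries/hour, about 119.44 queries/sec

        double n = (rt + tt) * tp;      // about 7,268, reported as roughly 7,300 users
        Console.WriteLine("Supported users = {0:F0}", n);
    }
}
```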

    Read the article

  • Xobni Plus for Outlook [Review]

    - by The Geek
    Overview
    Xobni Plus is an add-in that brings a sidebar to Outlook which allows you to search through your inbox and contacts a lot more easily. It provides the ability to search and keep track of your favorite social networks. Searching with Xobni is a lot more powerful than the default search feature in Outlook. It lets you drill down your searches to conversations, email, links, and attachments. It now supports Outlook 2010, both 32 & 64-bit versions.

    Installation & Setup
    Installation is easy following the wizard. After completing the wizard you can tell your friends on Facebook and Twitter that you are now using it. You can also decide to join their Product Improvement Program if you want. After installation, when you open Outlook, Xobni appears as a sidebar on the right side. Don’t worry about it always being in the way, as you can hide it if you need more room for other Outlook functions. After Xobni free is installed, you can upgrade to the Plus version at any time. A new window will open up and you can use your Credit Card, PayPal, or redeem a code if you have one.

    Features & Use
    Where to begin with the amount of features available in Xobni Plus? It really has an amazing amount of cool features. Of course you’ll have all of the features of the Free Version which we previously covered… and a lot more. After Xobni is installed you’ll notice a section for it on the Ribbon. From here you can search Xobni, show or hide the Sidebar, and change other options. It allows you to easily keep up with various social networks like Facebook, Twitter, and LinkedIn… Check out email analytics and contact ranks. Click on the Files Exchanged tab to search for specific attachments. Quickly search links exchanged with your contacts. Hover over a link to get a preview of what it entails. It gives you the ability to index all of your Yahoo mail as well, without the need for purchasing Yahoo Plus! Then your Yahoo messages appear in the Xobni sidebar. When you select a contact you can see related messages from your Yahoo account. Easily index all of your mail… including Yahoo mail for better organization and faster search results. There are several options you can select to change the way Xobni works, from setting up your Yahoo email, to indexing options, and much more.

    Additional Features of Xobni Plus
    Advanced Search Capabilities – Filter results, Boolean & Phrase Search, Ability to search Appointments & Tasks, Advanced Search Builder; Search unlimited PST data files; Xobni contacts in the compose screen; Find links exchanged with your contacts; View calendar appointments; One year premium tech support; No Ads!

    Performance
    We ran Xobni Plus on Outlook 2010 32-bit on a Dual-Core AMD Athlon system with 4GB of RAM and found it to run quite smoothly. However, we did notice it would sometimes slow down launching Outlook, especially if other apps are running at the same time.

    Product Support
    When you buy a license for Xobni Plus you get a full year of premium tech support. They provide a Questions and Answers page on their site where you can run a search query and answers appear instantly. You can contact support directly as a Plus member through their web form, and they advise the turnaround time is 2 business days. However, when we tested it, we received a response within 24 hours. They also provide an FAQ, a community forum, and you can download the Owner's Manual in PDF format from the support page.
    Conclusion
    Xobni Plus is a very powerful add-in for Outlook and includes a lot more features than we covered in this review. You can download the Xobni free edition, which includes an 8-day free trial of the Plus version. This provides a good way to start getting familiar with it. Then upgrade to Xobni Plus at any time for $29.95. Once you get started, you’ll find the sidebar is nicely laid out and intuitive to use. If you live out of Outlook during the day, Xobni Plus is a great addition for fast and powerful searches. It provides an easy way to keep all of your contacts and messages well organized and easy to find. Xobni Plus works with XP, Vista, and Windows 7 (32 & 64-bit editions) and Outlook 2003, 2007 and both 32 & 64-bit editions of Outlook 2010. Download Xobni Plus. Download Xobni Free Edition.

    Rating: Installation: 8; Ease of Use: 8; Features: 9; Performance: 8; Product Support: 8

    Read the article

  • History of Mobile Technology

    - by David Dorf
    Over the last ten years, mobile phones have gone through several incremental technology leaps that have added capabilities that impact the retail industry. I've listed the six major ones below, along with their long-lasting impact.

    1. Location
    In the US, the FCC required mobile phones to implement E911 (emergency calls) by 2006, requiring the caller to be located to within 300 meters. Back in 2000, GPS was opened up for civilian use, and by 2004 Qualcomm had figured out how to use GPS in mobile phones. So mobile operators moved from cell tower triangulation to GPS, principally for E911. But then lots of other uses became apparent, especially navigation. The earliest mobile apps from retailers made it easy to find nearby stores, and companies are looking at ways to use WiFi triangulation inside stores.

    2. Computer Vision
    In 1997 Philippe Kahn shared a photo of his newborn using a mobile phone, thus launching the popularity of instant visual communications. Over the years the quality of the cameras got better, reaching the point where barcodes could be read around 2008. That's when Occipital came on the scene with their RedLaser application, which was eventually acquired by eBay. This opened up the ability for consumers to easily price-compare inside stores. Other interesting apps included Tesco's Wine Finder and Amazon's Price Checker, both allowing products to be identified by picture.

    3. Augmented Reality
    Once the mobile phone had GPS, a video camera, and compass functionality, it was suddenly possible to overlay digital information on the screen in real time. Yelp, which was using GPS to find nearby merchants, created a backdoor called Monocle on the iPhone that showed nearby merchants overlaid on the video camera view. Today AR apps are mostly used by retailers for marketing, like Moosejaw's app that undresses models in their catalog.

    4. Geo-Fencing
    So if we're able to track the location of a mobile phone, why not use that context to offer timely information? My first experience with geo-fencing came courtesy of North Face, the outdoor enthusiast store. When a mobile phone enters a predetermined area, like near a store, a text message is sent to the phone with an offer or useful information. Of course retailers can geo-fence their competitors as well and find out which customers aren't so loyal. (A minimal distance-based fence check is sketched after this list.)

    5. Digital Wallet
    Mobile payments leverage different technologies such as NFC, QR codes, Bluetooth, and SMS to facilitate communication between the consumer's phone and the retailer's point-of-sale. The key here is the potential to consolidate loyalty cards, coupons, and bank cards into the mobile phone and enable faster checkout. Nobody does this better than Starbucks today, but McDonald's and Dunkin' Donuts aren't far behind. Google, Isis, PayPal, Square, and MCX are all vying for leadership in this area. If NFC does finally take off, it will be leveraged by retailers in more places than just the POS.

    6. Voice Response
    Mobile phones have had the ability to interpret simple voice commands for a while, but Google and Amazon were the first to use voice to allow searches for products. Allowing searches by text, barcode, and voice makes it easy to comparison shop in the aisles. Walmart even uses voice to build shopping lists, and if the Siri API is ever opened we could see lots more innovation in this area.
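    As a rough illustration of the geo-fencing idea in item 4, here is a minimal sketch of a distance-based fence check; the store coordinates, radius, and trigger action are invented for the example and real services would use the platform's location APIs rather than raw math.

```csharp
// Illustrative geo-fence check: is the device within `radiusMeters` of the store?
using System;

static class GeoFence
{
    const double EarthRadiusMeters = 6371000.0;

    // Great-circle (haversine) distance between two lat/long points, in meters.
    static double DistanceMeters(double lat1, double lon1, double lat2, double lon2)
    {
        double dLat = (lat2 - lat1) * Math.PI / 180.0;
        double dLon = (lon2 - lon1) * Math.PI / 180.0;
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(lat1 * Math.PI / 180.0) * Math.Cos(lat2 * Math.PI / 180.0) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return 2 * EarthRadiusMeters * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
    }

    static void Main()
    {
        // Hypothetical store location with a 200 meter fence, and a sample phone position.
        double storeLat = 41.8827, storeLon = -87.6233, radiusMeters = 200;
        double phoneLat = 41.8832, phoneLon = -87.6229;

        bool inside = DistanceMeters(storeLat, storeLon, phoneLat, phoneLon) <= radiusMeters;
        Console.WriteLine(inside ? "Send the offer text." : "Outside the fence - do nothing.");
    }
}
```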

    Read the article

  • Microsoft BUILD 2013 Day 1 – Keynote

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/microsoft-build-2013-day-1ndashkeynote.aspx

    This one is going to be a little long because the keynote was jam-packed, so bear with me. The keynote for the first day of BUILD 2013 was kicked off by Steve Ballmer. He made it very clear that Microsoft’s focus is on accelerating its time to market with products and product updates. His quote was that “Rapid release” is the new norm. He continued by showing off several new Lumias that have been buzzing around the internet for a while and announced that Sprint will now be carrying the HTC 8XT and Samsung ATIV. Ballmer is known for repeating words or phrases for effect. This time it was “Rapid release, rapid release” and “Touch, touch, touch, touch, touch, …”. This was fun, but even more fun was when he announced that all attendees would receive an Acer Iconia 8” tablet. SCORE!

    The next subject Ballmer focused on was new apps. The three new ones were Flipboard, Facebook and NFL Fantasy Football. I liked the first two because these are ones that people coming from other platforms are missing. The NFL app is great just because it targets a demographic that can be fanatical. If these types of apps keep coming, then the missing-app argument goes away.

    While many Negative Nancys are describing Windows 8.1 as Windows 180, Steve Ballmer chose to call it a “refined blend”, as in a coffee that has been improved with a new mix. This includes more multi-tasking options and leveraging Bing throughout the entire ecosystem. He ended this first section by explaining that this will also bring more Bing development opportunities to the community.

    Steve Ballmer was followed by Julie Larson-Green who, from my point of view, spent her time on stage selling us on Windows 8 all over again. Something that I would not have thought was needed until I had listened to some other attendees who had a number of concerns and complaints. She showed a number of new gestures that will come with Windows 8.1, and while they were cool I was left wondering if they really improved the experience. I guess only time will tell. I did like the fact that the UI implementation to bring up “All Apps” now mirrors that of Windows Phone. The consistency is a big step forward that I hope to see continue. The cool factor went up from there as she swiped content from a desktop (mega-tablet) to the XBox One. This seamless experience, I believe, is what is really needed for any future platform to be relevant.

    I was much more enthused by the presentation of Antoine Leblond, who humbled us by letting us know that there are 5k new APIs. How that can be, or how anyone would ever use all of them, is another question. His announcement was that the Visual Studio 2013 preview would be available today along with the Windows 8.1 bits. One of the features of VS2013 that he demonstrated is the power consumption profiler. With battery life being a key factor with consumer consumption devices this is a welcome addition. He didn’t limit his presentation to VS2013 features though. He showed how the Store has been redesigned to enable better search and discoverability of apps, and how Windows 8.1 can automatically scale across multiple screen sizes depending on the resolution of the device. The last feature he demoed was the real-time video streaming API, which he made sure we understood by attaching a Surface to a little robot. Oh, but there was one more thing.
    Antoine and Julie announced that all attendees would also be getting Surface Pros. BONUS! How much more could there be? Gurdeep Singh Pall was about to pile on. He introduced us to Bing as a platform (BaaP?). He said that if they (Microsoft) could do something good with an API, then 3rd party developers can do something that is dynamite, and he showed us some of the tools they had produced. These included natural user interface improvements such as voice commands that looked to put Siri to shame. Add to that 3D, OCR and translation capabilities and the future looks to be full of opportunities. Ballmer then came out to show us one last thing. Project Spark is a game design environment that will be available for Windows 8.1, XBox 360 and XBox One. All I can say is that if my kids get their hands on this they are going to be able to learn some of what dad does in a much more enjoyable way. At the end of it all I was both exhausted and energized by what I saw. What could they have possibly left for the day 2 keynote? I hear it will feature Scott Hanselman. If that is right we are in for a treat. See you there. del.icio.us Tags: BUILD 2013, Windows 8.1, Windows Phone, XAML, Keynote, Bing, Visual Studio 2013, Project Spark

    Read the article

  • Easiest, most fun way to program 2D games? Flash? XNA? Some other engine?

    - by Maxi
    Hi, this is a post detailing my search for the most enjoyable way for a hobbyist game programmer to sweeten his free time with making a game. My requirements: I looked at Flash first, I made a couple of small games but I'm doubtful of the performance. I would like to make a fairly large strategy game, with several hundred units fighting simultaneously, explosions and animations included. Also zoomable maps. I saw that Adobe has a new 3D API for Flash, but I don't know if that improves 2D performance as well; I couldn't find anything related to that question in their MAX10 sessions. Would you say that Flash is a good technology for making large 2D games easily? I really like Actionscript, and I love how easy everything is in Flash. There are several engines available which make it even easier. I just do this for fun, and it would be even better if there were proper animation/particle editors available and if the engine I were to use would be available for multiple platforms (so more people can play my game once it's finished). I'd like to have it available on many mobile platforms as well (because I love touch input for some reason). I do know the XNA framework pretty well, but there are no good engines available for it, and it will only run on Windows, which is a huge turn-off. Even bigger is that you need to install the XNA redistributable each time you want to give the game to someone. If I use XNA, I would have to make all the tools myself, and I'd probably have to make them with WPF. (I'd love to make tools with Adobe AIR, but unfortunately the APIs for image manipulation etc. are far worse in Flash than they are in XNA/WPF.) Now, I'm aware that I could make my own engine that supports each of those platforms, but quite frankly, that would be too much work plowing through APIs. After all, I want to make a game, not an engine. So the question becomes: Is there maybe a cross-platform (free or free to develop?) engine available that I could use for 2D development? I prefer: C#, Actionscript. I don't mind using c++ if the toolset is above average, but I highly doubt that there is something out there like that. Please prove me wrong :) So summary: I'd like to use Flash, but I don't know if it scales well enough. I'm not a scripter, I want some real APIs that I can work with inside a proper IDE. Just for information, I looked at several alternatives; I've actually been looking for a long time already. You'd help me a lot in finally making a decision. Feature-wise the Flatredball engine would be ideal. But I tried their tools, and quite frankly, they are horrible. Absolutely unusable, I'd need to make my own for sure. I didn't look at their API, but if their tools are so bad, I'm not inclined to look further. Unity3D. This one is quite nice, but I really don't need 3D, and it is quite... a lot of work to learn. I also don't like that it is so expensive to use for different platforms and that I can only code for it through scripting. You have to buy each platform separately. The editor usability is average, the product overall is good enough for most purposes, but learning it myself would be overkill. Shiva 3D. It looks good enough, but again: I don't really need 3D. The editor usability is a little worse than Unity3D in my opinion and it wasn't clear to me how to start programming. I think it requires C++ for coding, so that's a negative too. I want to have fun, and c# is fun ;) SDL. Quite frankly, I'd still need to port to all those different SDL implementations.
And I don't like OpenGL style programming, it's just plain ugly. And it needs c++, I know that there might be some wrappers available, but I don't like to use wrappers, because... Irrlicht. A lot of features, but support seems to be low and it is aimed at enthusiasts. C# bindings get dropped repeatedly. I'm not an engine enthusiast, I just want to make a game. I don't see this happening with Irrlicht. Ogre3D. Way too much work, it's just a graphics engine. Also no multiple platform support and c++. Torque2D. Costs something to use, and I didn't hear a lot of good things about support and documentation. Also costs extra for each platform.

    Read the article

  • Provocative Tweets From the Dachis Social Business Summit

    - by Mike Stiles
On June 20, all who follow social business, and how social is changing the way we do business and internal business structures, gathered in London for the Dachis Social Business Summit. In addition to Oracle SVP Product Development, Reggie Bradford, brands and thought leaders posed some thought-provoking ideas and figures. Here are some of the most oft-tweeted points, and our thoughts that they provoked. Tweet: The winners will be those who use data to improve performance. Thought: Everyone is dwelling on ROI. Why isn’t everyone dwelling on the opportunity to make their product or service better (as if that doesn’t have an effect on ROI)? Big data can improve you…let it. Tweet: High performance hinges on integrated teams that interact with each other. Thought: Team members may work well with each other, but does the team as a whole “get” what other teams are doing? That’s the key to an integrated, companywide workforce. (Internal social platforms can facilitate that, by the way.) Tweet: Performance improvements come from making the invisible visible. Thought: Many of the factors that drive customer behavior and decisions are invisible. Through social, customers are now showing us what we couldn’t see before…if we’re paying attention. Tweet: Games have continuous feedback, which is why they’re so engaging. Apply that to business operations. Thought: You think your employees have an obligation to be 100% passionate and engaged at all times about making you richer. Think again. Like customers, they must be motivated. Visible insight that they’re advancing on their goals helps. Tweet: Who can add value to the data? Data will tend to migrate to where it will be most effective. Thought: Not everybody needs all the data. One team will be able to make sense of, use, and add value to data that may be irrelevant to another team. Like a strategized football play, the data has to get sent to the spot on the field where it’s needed most. Tweet: The sale isn’t the light at the end of the tunnel, it’s the start of a new marketing cycle. Thought: Another reason the ROI question is fundamentally flawed. The sale is not the end of the potential return on investment. After-the-sale service and nurturing begins where the sales “victory” ends. Tweet: A dead sale is one that’s not shared. People must be incentivized to share. Thought: Guess what, customers now know their value to you as marketers on your behalf. They’ll tell people about your product, but you’ve got to answer, “Why should I?” And you’ve got to answer it with something substantial, not lame trinkets. Tweet: Social user motivations are competition, affection, excellence and curiosity. Thought: Your followers will engage IF: they can get something for doing it, love your culture so much they want you to win, are consistently stunned at the perfection and coolness of your products, or have been stimulated enough to want to know more. Tweet: In Europe, 92% surveyed said they couldn’t care less about brands. Thought: Oh well, so much for loving you or being impressed enough with your products & service that they want you to win. We’ve got a long way to go. Tweet: A complaint is a gift. Thought: Our instinct where complaints are concerned is to a) not listen, b) dismiss the one who complains as a kook, c) make excuses, and d) reassure ourselves with internal group-think that they’re wrong and we’re right. It’s the perfect recipe for how to never, ever grow or get better. In a way, this customer cares more than you do.
Tweet: 78% of consumers think peer recommendation is the best form of advertising. Eventually, engagement is going to eat advertising. Thought: Why is peer recommendation best? Trust. If a friend tells me how great a movie was, I believe him. He has credibility with me. He’s seen it, and he couldn’t care less if I buy a ticket. He’s telling me it was awesome because he sincerely believes that it was. That’s gold. Tweet: 86% of customers are willing to pay more for a better customer experience. Thought: This “how mad can we make our customers without losing them” strategy has to end. The customer experience has actual monetary value, money you’re probably leaving on the table. @mikestiles Photo: stock.xchng

    Read the article

  • Data Source Security Part 5

    - by Steve Felts
If you read through the first four parts of this series on data source security, you should be an expert on this focus area.  There is one more small topic to cover related to WebLogic Resource permissions.  After that comes the test, I mean example, to see with a real set of configuration parameters what the results are with some concrete values. WebLogic Resource Permissions All of the discussion so far has been about database credentials that are (eventually) used on the database side.  WLS has resource permissions to control which WLS users are allowed to access JDBC resources.  These can be defined on the Policies tab on the Security tab associated with the data source.  There are four permissions: "reserve" (get a new connection), "admin", "shrink", and "reset" (plus the all-inclusive "ALL"); we will focus on "reserve" here because we are talking about getting connections.  By default, JDBC resource permissions are completely open – anyone can do anything.  As soon as you add one policy for a permission, then all other users are restricted.  For example, if I add a policy so that "weblogic" can reserve a connection, then all other users will fail to reserve connections unless they are also explicitly added.  The validation is done for WLS user credentials only, not database user credentials.  Configuration of resources in general is described at "Create policies for resource instances" http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/security/CreatePoliciesForResourceInstances.html.  This feature can be very useful to restrict what code and users can get to your database. There are three use cases:
API                          | Use database credentials | User for permission checking
getConnection()              | True or false            | Current WLS user
getConnection(user,password) | False                    | User/password from API
getConnection(user,password) | True                     | Current WLS user
If a simple getConnection() is used or database credentials are enabled, the current user that is authenticated to the WLS system is checked. If database credentials are not enabled, then the user and password on the API are used. Example The following is an actual example of the interactions between identity-based-connection-pooling-enabled, oracle-proxy-session, and use-database-credentials. On the database side, the following objects are configured:
- Database users scott, jdbcqa, jdbcqa3
- Permission for proxy: alter user jdbcqa3 grant connect through jdbcqa;
- Permission for proxy: alter user jdbcqa grant connect through jdbcqa;
The following WebLogic Data Source objects are configured:
- Users weblogic, wluser
- Credential mapping "weblogic" to "scott"
- Credential mapping "wluser" to "jdbcqa3"
- Data source descriptor configured with user "jdbcqa"
- All tests are run with Set Client ID set to true (more about that below).
- All tests are run with oracle-proxy-session set to false (more about that below).
The test program:
- Runs in a servlet
- Authenticates to WLS as user "weblogic"
The results for each combination of Use DB Credentials and Identity based are as follows.
Use DB Credentials = true, Identity based = true:
- getConnection(scott,***): Identity scott, Client weblogic, Proxy null
- getConnection(weblogic,***): weblogic fails - not a db user
- getConnection(jdbcqa3,***): User jdbcqa3, Client weblogic, Proxy null
- getConnection(): Default user jdbcqa, Client weblogic, Proxy null
Use DB Credentials = false, Identity based = true:
- getConnection(scott,***): scott fails - not a WLS user
- getConnection(weblogic,***): User scott, Client scott, Proxy null
- getConnection(jdbcqa3,***): jdbcqa3 fails - not a WLS user
- getConnection(): User scott, Client scott, Proxy null
Use DB Credentials = true, Identity based = false:
- getConnection(scott,***): Proxy for scott fails
- getConnection(weblogic,***): weblogic fails - not a db user
- getConnection(jdbcqa3,***): User jdbcqa3, Client weblogic, Proxy jdbcqa
- getConnection(): Default user jdbcqa, Client weblogic, Proxy null
Use DB Credentials = false, Identity based = false:
- getConnection(scott,***): scott fails - not a WLS user
- getConnection(weblogic,***): Default user jdbcqa, Client scott, Proxy null
- getConnection(jdbcqa3,***): jdbcqa3 fails - not a WLS user
- getConnection(): Default user jdbcqa, Client scott, Proxy null
If Set Client ID is set to false, all cases would have Client set to null. If this were not an Oracle thin driver, the one case with the non-null Proxy in the above table would throw an exception, because proxy session is only supported, implicitly or explicitly, with the Oracle thin driver. When oracle-proxy-session is set to true, the only cases that will pass (with a proxy of "jdbcqa") are the following:
1. Setting use-database-credentials to true and doing getConnection(jdbcqa3,…) or getConnection().
2. Setting use-database-credentials to false and doing getConnection(wluser, …) or getConnection().
Summary There are many options to choose from for data source security.  Considerations include the number and volatility of WLS and database users, the granularity of data access, the depth of the security identity (property on the connection or a real user), performance, coordination of various components in the software stack, and driver capabilities.  Now that you have the big picture (remember that table in part 1), you can make a more informed choice.
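To make the two getConnection flavors above concrete, here is a minimal sketch of server-side code that reserves connections both ways from a WebLogic data source. It is illustrative only: the JNDI name jdbc/testDS, the jdbcqa3 password placeholder, and the printouts are assumptions, while the lookup and javax.sql.DataSource calls are the standard API the table above describes.

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class DataSourceSecurityProbe {

    // Assumed JNDI name; substitute whatever name your data source is actually deployed under.
    private static final String DS_JNDI_NAME = "jdbc/testDS";

    // Intended to be called from server-side code (e.g. a servlet), so the default
    // InitialContext resolves the data source and the current WLS user is known.
    public static void probe() throws NamingException {
        DataSource ds = (DataSource) new InitialContext().lookup(DS_JNDI_NAME);

        // Case 1: getConnection() - the "reserve" policy is checked against the current
        // WLS user; the credential map (e.g. weblogic -> scott) picks the database identity.
        tryReserve(ds, null, null);

        // Case 2: getConnection(user, password) - treated as WLS credentials when
        // use-database-credentials is false, or as database credentials when it is true.
        tryReserve(ds, "jdbcqa3", "jdbcqa3-password"); // placeholder credentials
    }

    private static void tryReserve(DataSource ds, String user, String password) {
        try (Connection c = (user == null) ? ds.getConnection()
                                           : ds.getConnection(user, password)) {
            System.out.println("Reserved connection as database user: "
                    + c.getMetaData().getUserName());
        } catch (SQLException e) {
            // A failed "reserve" policy check or a proxy session error surfaces here.
            System.out.println("Reserve failed: " + e.getMessage());
        }
    }
}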

    Read the article

  • Cloud to On-Premise Connectivity Patterns

    - by Rajesh Raheja
Do you have a requirement to convert an Opportunity in Salesforce.com to an Order/Quote in Oracle E-Business Suite? Or maybe you want the creation of an Oracle RightNow Incident to trigger an on-premise Oracle E-Business Suite Service Request creation for RMA and Field Scheduling? If so, read on. In a previous blog post, I discussed integrating TO cloud applications; however, the use cases above are the reverse, i.e. receiving data FROM cloud applications (SaaS) TO on-premise applications/databases that sit behind a firewall. Oracle SOA Suite is assumed to be on-premise, with Oracle Service Bus as the mediation and virtualization layer. The main considerations for the patterns are security, i.e. shielding enterprise resources, and scalability, i.e. minimizing firewall latency. Let me use an analogy to help visualize the patterns: the on-premise system is your home - with your most valuable possessions - and the SaaS app is your favorite on-line store which regularly ships (inbound calls) various types of parcels/items (message types/service operations). You need the items at home (on-premise) but want to safeguard against misguided elements of society (internet threats) who may masquerade as postal workers and vandalize property (denial of service?). Let's look at the patterns. Pattern: Pull from Cloud The on-premise system polls the SaaS apps and picks up the message instead of having it delivered. This may be done using Oracle RightNow Object Query Language or SOAP APIs. This is particularly suited for certain integration approaches wherein messages are trickling in and can be centralized and batched, e.g. retrieving event notifications on an hourly schedule from the Oracle Messaging Service. To compare this pattern with the home analogy, you are avoiding any deliveries to your home and instead go to the post office/UPS/FedEx store to pick up your parcel. Every time. Pros: On-premise assets not exposed to the Internet; firewall issues avoided by only initiating outbound connections. Cons: Polling mechanisms may affect performance, and may not satisfy near real-time requirements. Pattern: Open Firewall Ports The on-premise system exposes the web services that need to be invoked by the cloud application. This requires opening up firewall ports and routing calls to the appropriate internal services behind the firewall. Fusion Applications uses this pattern, and auto-provisions the services on the various virtual hosts to secure the topology. This works well for service integration, but may not suffice for large-volume data integration. Using the home analogy, you have now decided to receive parcels instead of going to the post office every time. A mail slot cut into the door lets the postman drop off small parcels, but there is still concern about cutting new holes for larger packages. Pros: Optimal pattern for near real-time needs; simpler administration once the service is provisioned. Cons: Needs firewall ports to be opened up for new services; may not suffice for batch integration requiring direct database access. Pattern: Virtual Private Networking The on-premise network is "extended" to the cloud (or an intermediary on-demand / managed service offering) using Virtual Private Networking (VPN) so that messages are delivered to the on-premise system in a trusted channel. Using the home analogy, you entrust a set of keys to a neighbor or property manager who receives the packages and then drops them inside your home.
Pros: Individual firewall ports don't need to be opened; more suited for high scalability needs; can support large-volume data integration; easier management of one connection vs. a multitude of open ports. Cons: VPN setup, specific hardware support, and the requirement that the cloud provider support virtual private computing. Pattern: Reverse Proxy / API Gateway The on-premise system uses reverse proxy "API gateway" software on the DMZ to receive messages. The reverse proxy can be implemented using various mechanisms, e.g. Oracle API Gateway provides firewall and proxy services along with comprehensive security, auditing and throttling benefits. If a firewall already exists, then Oracle Service Bus or Oracle HTTP Server virtual hosts can provide reverse proxy implementations on the DMZ. Custom-built implementations are also possible if specific functionality (such as message store-and-forward) is needed. In the home analogy, this pattern sits in between cutting mail slots and handing over keys. Instead, you install (and maintain) a mailbox on your premises outside your door. The post office delivers the parcels into your mailbox, from where you can securely retrieve them. Pros: Very secure, very flexible. Cons: Introduces a new software component; needs DMZ deployment and management. Pattern: On-Premise Agent (Tunneling) A lightweight "agent" sits behind the firewall and initiates the communication with the cloud, thereby avoiding firewall issues. It then maintains a bi-directional connection with either pull- or push-based approaches using (or abusing, depending on your viewpoint) the HTTP protocol. Programming protocols such as Comet, WebSockets, HTTP CONNECT, HTTP SSH tunneling, etc. are possible implementation options. In the home analogy, a resident receives the parcel from the postal worker by opening the door; however, you still take precautions with chain locks and package inspections. Pros: Lightweight software; IT doesn't need to set up anything. Cons: May bypass critical firewall checks, e.g. virus scans; separate software download; proliferation of non-IT-managed software. Conclusion The patterns above are some of the most commonly encountered ones for cloud to on-premise integration. Selecting the right pattern for your project involves looking at your scalability needs, security restrictions, synchronous vs. asynchronous implementation, near real-time vs. batch expectations, cloud provider capabilities, budget, and more. In some cases the basic "Pull from Cloud" may be acceptable, whereas in others an extensive VPN topology may be well justified. For more details on the Oracle cloud integration strategy, download this white paper.
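To ground the "Pull from Cloud" pattern in code, here is a minimal sketch of an on-premise poller that calls out to a SaaS endpoint on a schedule and hands each batch to an internal handler. The endpoint URL, bearer-token header, one-hour interval, and dispatch method are hypothetical placeholders; a real implementation would use the vendor's documented query or event API (for example Oracle RightNow Object Query Language over its web services) with proper authentication and error handling.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CloudEventPoller {

    // Hypothetical SaaS endpoint; substitute the vendor's real event/query API.
    private static final String EVENTS_URL = "https://saas.example.com/api/events";

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Outbound-only connections: no inbound firewall ports need to be opened.
        scheduler.scheduleAtFixedRate(CloudEventPoller::pollOnce, 0, 60, TimeUnit.MINUTES);
    }

    static void pollOnce() {
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(EVENTS_URL).openConnection();
            conn.setRequestProperty("Authorization", "Bearer <token>"); // placeholder auth
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
                dispatchToOnPremiseSystem(body.toString());
            }
        } catch (Exception e) {
            // Polling failures are simply retried on the next scheduled run.
            System.err.println("Poll failed: " + e.getMessage());
        }
    }

    static void dispatchToOnPremiseSystem(String payload) {
        // Placeholder: hand the batch to OSB / SOA Suite / E-Business Suite behind the firewall.
        System.out.println("Received " + payload.length() + " bytes of event data");
    }
}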

    Read the article

  • Documentation Changes in Solaris 11.1

    - by alanc
One of the first places you can see Solaris 11.1 changes is in the docs, which have now been posted in the Solaris 11.1 Library on docs.oracle.com. I spent a good deal of time reviewing documentation for this release, and thought some would be interesting to blog about, but didn't review all the changes (not by a long shot), and am not going to cover all the changes here, so there's plenty left for you to discover on your own. Just comparing the Solaris 11.1 Library list of docs against the Solaris 11 list will show a lot of reorganization and refactoring of the doc set, especially in the system administration guides. Hopefully the new breakdown will make it easier to get straight to the sections you need when a task is at hand. Packaging System Unfortunately, the excellent in-depth guide for how to build packages for the new Image Packaging System (IPS) in Solaris 11 wasn't done in time to make the initial Solaris 11 doc set. An interim version was published shortly after release, in PDF form on the OTN IPS page. For Solaris 11.1 it was included in the doc set, as Packaging and Delivering Software With the Image Packaging System in Oracle Solaris 11.1, so it should be easier to find, and easier to share links to specific pages in the HTML version. Beyond just how to build a package, it includes details on how Solaris is packaged and how package updates work, which may be useful to all system administrators who deal with Solaris 11 upgrades and installations. The Adding and Updating Oracle Solaris 11.1 Software Packages guide was also extended, including new sections on Relaxing Version Constraints Specified by Incorporations and Locking Packages to a Specified Version that may be of interest to those who want to keep the Solaris 11 versions of certain packages when they upgrade, such as the couple of packages that had functionality removed by an (unusual for an update release) End of Feature process in the 11.1 release. Also added in this release is a document containing the lists of all the packages in each of the major package groups in Solaris 11.1 (solaris-desktop, solaris-large-server, and solaris-small-server). While you can simply get the contents of those groups from the package repository, either via the web interface or the pkg command line, the documentation puts them in handy tables for easier side-by-side comparison, or for viewing the lists before you've installed the system to pick which one you want to initially install. X Window System We've not had good X11 coverage in the online Solaris docs in a while, mostly relying on the man pages and upstream X.Org docs. In this release, we've integrated some X coverage into the Solaris 11.1 Desktop Administrator's Guide, including sections on installing fonts for fontconfig or legacy X11 clients, X server configuration, and setting up remote access via X11 or VNC. Of course we continue to work on improving the docs, including a lot of contributions to the upstream docs all OSes share (more about that another time). Security One of the things Oracle likes to do for its products is to publish security guides for administrators and developers to know how to build systems that meet their security needs. For Solaris, we started this with Solaris 11, providing a guide for sysadmins to find where the security-relevant configuration options were documented.
The Solaris 11.1 Security Guidelines extend this to cover new security features, such as Address Space Layout Randomization (ASLR) and Read-Only Zones, as well as adding additional guidelines for existing features, such as how to limit the size of tmpfs filesystems to avoid users driving the system into swap-thrashing situations. For developers, the corresponding document is the Developer's Guide to Oracle Solaris 11 Security, which has been the source for years for documentation of security-relevant Solaris APIs such as PAM, GSS-API, and the Solaris Cryptographic Framework. For Solaris 11.1, a new appendix was added to start providing Secure Coding Guidelines for Developers, leveraging the CERT Secure Coding Standards and OWASP guidelines to provide the base recommendations for common programming languages and their standard APIs. Solaris-specific secure programming guidance was added via links to other documentation in the product doc set. In parallel, we updated the Solaris C Library Functions security considerations list with details of Solaris 11 enhancements such as FD_CLOEXEC flags, additional *at() functions, and new stdio functions such as asprintf() and getline(). A number of code examples throughout the Solaris 11.1 doc set were updated to follow these recommendations, changing unbounded strcpy() calls to strlcpy(), sprintf() to snprintf(), etc., so that developers following our examples start out with safer code. The Writing Device Drivers guide even had its appendix updated to list which of these utility functions, like snprintf() and strlcpy(), are now available via the Kernel DDI. Little Things Of course all the big new features got documented, and some major efforts were put into refactoring and renovation, but there were also a lot of smaller things that got fixed in the nearly a year between the Solaris 11 and 11.1 doc releases - again too many to list here, but a random sampling of the ones I know about and found interesting or useful: The Privileges section of the DTrace Guide now gives users a pointer to find out how to set up DTrace privileges for non-global zones and what limitations are in place there. A new section on Recommended iSCSI Configuration Practices was added to the iSCSI configuration section when it moved into the SAN Configuration and Multipathing administration guide. The Managing System Power Services section contains an expanded explanation of the various tunables for power management in Solaris 11.1. The sample dcmd sources in /usr/demo/mdb were updated to include ::help output, so that developers like myself who follow the examples don't forget to include it (until a helpful code reviewer pointed it out while reviewing the mdb module changes for Xorg 1.12). The README file in that directory was updated to show the correct paths for installing both kernel and userspace modules, including the 64-bit variants.

    Read the article

  • WMemoryProfiler is Released

    - by Alois Kraus
What is it? WMemoryProfiler is a managed profiling API to aid integration testing. This free library can get managed heap statistics and memory usage for your own process (remember, testing) and for other processes as well. The best thing is that it works from .NET 2.0 up to .NET 4.5, in x86 and x64. To make it more interesting, it can attach to any running .NET process. The reason I mention this is that commercial profilers support this functionality only in their professional editions, and normally only for .NET 4.0 and later, since only then does the profiling API support attaching to a running process. This library differs in many respects from "normal" profilers, because while profiling yourself you can get all objects from all managed heaps back as an object array. If you ever wanted to change the state of an object which exists only as a method local in another thread, you can get your hands on it now … Enough theory. Show me some code /// <summary> /// Show feature to not only get statistics out of a process but also the newly allocated /// instances since the last call to MarkCurrentObjects. /// GetNewObjects does return the newly allocated objects as object array /// </summary> static void InstanceTracking() { using (var dumper = new MemoryDumper()) // if you have problems, use new MemoryDumper(true, true) to see the debugger windows { dumper.MarkCurrentObjects(); Allocate(); ILookup<Type, object> newObjects = dumper.GetNewObjects() .ToLookup( x => x.GetType() ); Console.WriteLine("New Strings:"); foreach (var newStr in newObjects[typeof(string)] ) { Console.WriteLine("Str: {0}", newStr); } } } … New Strings: Str: qqd Str: String data: Str: String data: 0 Str: String data: 1 … This is really hot stuff. Not only can you get heap statistics, but you can directly examine the new objects and make queries upon them. When I find more time I can reconstruct the object root graph from it from my own process. Is this cool or what? You can also peek into the Finalization Queue to check if you accidentally forgot to dispose a whole bunch of objects … /// <summary> /// .NET 4.0 or above only. Get all finalizable objects which are ready for finalization and have no other object roots anymore. /// </summary> static void NotYetFinalizedObjects() { using (var dumper = new MemoryDumper()) { object[] finalizable = dumper.GetObjectsReadyForFinalization(); Console.WriteLine("Currently {0} objects of types {1} are ready for finalization. Consider disposing them before.", finalizable.Length, String.Join(",", finalizable.ToLookup( x=> x.GetType() ) .Select( x=> x.Key.Name)) ); } } How does it work? The W of WMemoryProfiler is a good hint. It employs Windbg and the SOS dll to do the heavy lifting, and concentrates on an easy-to-use API which completely hides Windbg. If you do not want to see Windbg, you will never see it. In my experience the most complex thing is actually downloading Windbg from the Windows 8 Standalone SDK. This is described in the Readme, and in much greater detail in the exception you are greeted with if it is missing. So I will not go into this here.   What Next? Depending on the feedback I get, I can imagine some features which might be useful as well: Calculate first-order GC roots from the actual object graph Identify global statics in types in the object graph Support reading out the finalization queue of .NET 2.0 as well.
Support memory dump analysis (again a feature only supported by commercial profilers in their professional editions, if it is supported at all) Deserialize objects from a memory dump back into a live process (this would need some more investigation, but it is doable) The last item needs some explanation. Why on earth would you want to do that? The basic idea is to store in your live process some logging/tracing data which can become quite big, but since it is never written out it is very fast to generate. When your process crashes with a memory dump, you could transfer this data structure back into a live viewer which can then nicely display your program state at the point it crashed. This is an advanced troubleshooting technique I have not seen anywhere yet, but it could be quite useful. You can have a look here at the current feature list of WMemoryProfiler with some examples.   How To Get Started? First I would download the released source package (it is tiny) and compile the complete project. Then you can compile the Example project (it has this name) and uncomment in the Main method the scenario you want to check out. If you are greeted with an exception, it is time to install the Windows 8 Standalone SDK, which is described in great detail in the exception text. That's it for the first round. I have seen something more limited in the Java world some years ago (now I cannot find the link anymore), but anyway, now we have something much better.

    Read the article

  • How You Helped Shape Java EE 7...

    - by reza_rahman
I have been working with the JCP in various roles since EJB 3/Java EE 5 (much of it on my own time), eventually culminating in my decision to accept my current role at Oracle (despite its inevitable set of unique challenges, a role I find by and large positive and fulfilling). During these years, it has always been clear to me that pretty much everyone in the JCP genuinely cares about openness, feedback and developer participation. Perhaps the most visible sign to date of this high regard for grassroots-level input is a survey on Java EE 7 conducted a few months ago. The survey was designed to get open feedback on a number of critical issues central to the Java EE 7 umbrella specification, including which APIs to include in the standard. When we started the survey, I don't think anyone was certain what the level of participation from developers would really be. I also think everyone was pleasantly surprised that a large number of developers (around 1100) took the time out to vote on these very important issues that could impact their own professional lives. And it wasn't just a matter of the quantity of responses. I was particularly impressed with the quality of the comments made through the survey (some of which I'll try to do justice to below). With Java EE 7 under our belt and the horizons for Java EE 8 emerging, this is a good time to thank everyone who took the survey once again for their thoughts and let you know what the impact of your voice actually was. As an aside, you may be happy to know that we are working hard behind the scenes to try to put together a similar survey to help kick off the agenda for Java EE 8 (although this is by no means certain). I'll break things down by the questions asked in the survey, the responses and the resulting changes in the specification. APIs to Add to Java EE 7 Full/Web Profile The first question in the survey asked which of four new candidate APIs (WebSocket, JSON-P, JBatch and JCache) should be added to the Java EE 7 Full and Web Profiles, respectively. Developers by and large wanted all the new APIs added to the full platform. The comments expressed particularly strong support for WebSocket and JCache. Others expressed dissatisfaction over the lack of a JSON binding (as opposed to JSON processing) API. WebSocket, JSON-P and JBatch are now part of Java EE 7. In addition, the long-awaited Java EE Concurrency Utilities API was also included in the Full Profile. Unfortunately, JCache was not finalized in time for Java EE 7 and the decision was made not to hold up the Java EE release any longer. JCache continues to move forward strongly and will very likely be included in Java EE 8 (it will be available much sooner than Java EE 8 to boot). An emergent standard for JSON-B is also a strong possibility for Java EE 8. When it came to the Web Profile, developers were supportive of adding WebSocket and JSON-P, but not JBatch and JCache. Both WebSocket and JSON-P are now part of the Web Profile, which also includes the already popular JAX-RS API. Enabling CDI by Default The second question asked whether CDI should be enabled in Java EE by default. The overwhelming majority of developers supported the default enablement of CDI. In addition, developers expressed a desire for better CDI/Java EE alignment (with regards to EJB and JSF in particular). Some developers expressed legitimate concerns over the performance implications of enabling CDI globally as well as the potential conflict with other JSR 330 implementations like Spring and Guice.
CDI is enabled by default in Java EE 7. Respecting the legitimate concerns, CDI 1.1 was very careful to add additional controls around component scanning. While a lot of work was done in Java EE 6 and Java EE 7 around CDI alignment, further alignment is under serious consideration for Java EE 8. Consistent Usage of @Inject The third question was around using CDI/JSR 330 @Inject consistently vs. allowing JSRs to create their own injection annotations (e.g. @BatchContext). A majority of developers wanted consistent usage of @Inject. The comments again reflected a strong desire for CDI/Java EE alignment. A lot of emphasis in Java EE 7 was put into using @Inject consistently. For example, the JBatch specification is focused on using @Inject wherever possible. JAX-RS remains an exception with its existing custom injection annotations. However, the JAX-RS specification leads understand the importance of eventual convergence, hopefully in Java EE 8. Expanding the Use of @Stereotype The fourth question was about expanding CDI @Stereotype to cover annotations across Java EE beyond just CDI. A solid majority of developers supported the idea of making @Stereotype more universal in Java EE. The comments maintained the general theme of strong support for CDI/Java EE alignment. Unfortunately, there was not enough time or resources in Java EE 7 to implement this fairly pervasive feature. However, it remains a serious consideration for Java EE 8. Expanding Interceptor Use The final set of questions was about expanding interceptors further across Java EE. Developers strongly supported the concept. Along with injection, interceptors are now supported across all Java EE 7 components including Servlets, Filters, Listeners, JAX-WS endpoints, JAX-RS resources, WebSocket endpoints and so on. I hope you are encouraged by how your input to the survey helped shape Java EE 7 and continues to shape Java EE 8. Participating in these sorts of surveys is of course just one way of contributing to Java EE. Another great way to stay involved is the Adopt-A-JSR Program. A large number of developers are already participating through their local JUGs. You could of course become a Java EE JSR expert group member or observer. You should stay tuned to The Aquarium for the progress of Java EE 8 JSRs if that's something you want to look into...
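For readers who have not tried it yet, here is a minimal sketch of what the consistent @Inject usage discussed above looks like in a Java EE 7 web application, where CDI is enabled by default. The GreetingService bean and the URL pattern are made up for illustration; the annotations are the standard javax.inject, javax.enterprise.context and javax.servlet APIs.

import java.io.IOException;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A plain CDI bean; with CDI enabled by default it is discovered without any beans.xml.
@ApplicationScoped
class GreetingService {
    String greet(String name) {
        return "Hello, " + name;
    }
}

// Servlets are injection targets in Java EE 7, so the same @Inject works here
// that works in EJBs, JAX-RS resources, WebSocket endpoints and so on.
@WebServlet("/greet")
public class GreetingServlet extends HttpServlet {

    @Inject
    private GreetingService greetingService;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String name = req.getParameter("name");
        resp.getWriter().println(greetingService.greet(name == null ? "world" : name));
    }
}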

    Read the article

  • Customer Perspectives: Oracle Data Integrator

    - by Julien Testut
The Data Integration Product Management team will be hosting a customer panel session dedicated to Oracle Data Integrator at Oracle OpenWorld. I will have the pleasure of presenting this session with three of our customers: Paychex, Ross Stores and Turkcell. In this session, you will hear how Paychex, Ross Stores and Turkcell utilize Oracle Data Integrator to meet their IT and business needs. Our customers will be able to share with you how they use ODI in their environments, best practices, lessons learned and the benefits of implementing Oracle Data Integrator. If you're interested in hearing more about how our customers use Oracle Data Integrator then I recommend attending this session: Customer Perspectives: Oracle Data Integrator, Wednesday, October 3rd, 1:15PM - 2:15PM, Marriott Marquis – Golden Gate C3. The Data Integration track at OpenWorld covers a variety of topics and speakers. In addition to product management of Oracle GoldenGate, Oracle Data Integrator, and Enterprise Data Quality presenting product updates and roadmap, we have several customer panels and stand-alone sessions featuring select customers such as St. Jude Medical, Raymond James, Aderas, Turkcell, Paychex, Comcast, Ticketmaster, Bank of America and more. You can see an overview of Data Integration sessions here. If you are not able to attend OpenWorld, please check out our latest resources for Data Integration and Oracle GoldenGate. In the coming weeks you will see more blogs about our products' new capabilities and what to expect at OpenWorld. We hope to see you at OpenWorld and stay in touch via our future blogs.

    Read the article

  • Migrating R Scripts from Development to Production

    - by Mark Hornick
“How do I move my R scripts stored in one database instance to another? I have my development/test system and want to migrate to production.” Users of Oracle R Enterprise Embedded R Execution will often store their R scripts in the R Script Repository in Oracle Database, especially when using the ORE SQL API. From previous blog posts, you may recall that Embedded R Execution enables running R scripts managed by Oracle Database using both R and SQL interfaces. In ORE 1.3.1, the SQL API requires scripts to be stored in the database and referenced by name in SQL queries. The SQL API enables seamless integration with database-based applications and ease of production deployment. Loading R scripts in the repository Before talking about migration, we’ll first introduce how users store R scripts in Oracle Database. Users can add R scripts to the repository in R using the function ore.scriptCreate, or in SQL using the function sys.rqScriptCreate. For the sample R script     id <- 1:10     plot(1:100,rnorm(100),pch=21,bg="red",cex=2)     data.frame(id=id, val=id / 100) users wrap this in a function and store it in the R Script Repository with a name. In R, this looks like ore.scriptCreate("RandomRedDots", function () {     id <- 1:10     plot(1:100,rnorm(100),pch=21,bg="red",cex=2)     data.frame(id=id, val=id / 100) }) In SQL, this looks like begin sys.rqScriptCreate('RandomRedDots',  'function(){     id <- 1:10     plot(1:100,rnorm(100),pch=21,bg="red",cex=2)     data.frame(id=id, val=id / 100)   }'); end; / The R function ore.scriptDrop and SQL function sys.rqScriptDrop can be used to drop these scripts as well. Note that the system will give an error if the script name already exists. Accessing R scripts once they’ve been loaded If you’re not using a source code control system, it is possible that your R scripts can be misplaced or files modified, making what is stored in Oracle Database the only or best copy of your R code. If you’ve loaded your R scripts into the database, it is straightforward to access these scripts from the database table SYS.RQ_SCRIPTS. For example, select * from sys.rq_scripts where name='myScriptName'; From R, scripts in the repository can be loaded into the R client engine using a function similar to the following: ore.scriptLoad <- function(name) { query <- paste("select script from sys.rq_scripts where name='",name,"'",sep="") str.f <- OREbase:::.ore.dbGetQuery(query) assign(name,eval(parse(text = str.f)),pos=1) } ore.scriptLoad("myFunctionName") This function is also useful if you want to load an existing R script from the repository into another R script in the repository – think modular coding style. Just include this function in the body of the other function and load the named script. Migrating R scripts from one database instance to another To move a set of functions from one system to another, the following script loads the functions from one R script repository into the client R engine, then connects to the target database and creates the scripts there with the same names.
scriptNames <- OREbase:::.ore.dbGetQuery("select name from sys.rq_scripts where name not like 'RQG$%' and name not like 'RQ$%'")$NAME for(s in scriptNames) { cat(s,"\n") ore.scriptLoad(s) } ore.disconnect() ore.connect("rquser","orcl","localhost","rquser") for(s in scriptNames) { cat(s,"\n") ore.scriptDrop(s) ore.scriptCreate(s,get(s)) } Best Practice When naming R scripts, keep in mind that the name can be up to 128 characters. As such, consider organizing scripts in a directory-structure manner. For example, if an organization has multiple groups or applications sharing the same database and there are multiple components, use “/” to facilitate the function organization: ore.scriptCreate("/org1/app1/component1/myFunction1", myFunction1) ore.scriptCreate("/org1/app1/component1/myFunction2", myFunction2) ore.scriptCreate("/org1/app2/component2/myFunction2", myFunction2) ore.scriptCreate("/org2/app2/component1/myFunction3", myFunction3) ore.scriptCreate("/org3/app2/component1/myFunction4", myFunction4) Users can then query for all functions using the path prefix when looking up functions.

    Read the article

  • D&rsquo;Arcy&rsquo;s Book Club - The New Strategic Selling

    - by D'Arcy Lussier
The New Strategic Selling Miller and Heiman Amazon.ca Amazon.com Chapters Everybody is a salesperson. Every day, without knowing it, we sell something to someone. Now, the typical vision people think of when they hear the word “sales” is the sleazy used car salesperson who does whatever they can to get you to buy the clunker on their lot. But selling is not an action tied to money and products. Selling is about convincing people to see your point of view and act on it. If you want your company to cover a trip to a conference, you may have to sell the idea to your boss. If you want to buy that new big screen TV, you have to sell the idea to your significant other. If you want to go on a weekend fishing trip with the boys, you might be called in to help sell the idea to your buddy's wife. We all sell, but we don’t all sell very well. So enter The New Strategic Selling, a book based on the sales course put on by the Miller-Heiman group. In fact, this isn’t really a “new” strategy to selling, as it's been around for a number of years. But the concepts they present, the ideas about selling, these are still very radical compared with what most of us have experienced. Gone are the high-pressure, win-at-all-costs, Glengarry Glen Ross style of sales…instead the book presents a framework to switch to need-based selling. It’s the idea that instead of going in raving about a product or service, you build a relationship where the buyer expresses what their needs are and your response is to present a solution that best fits that need. Instead of focusing on the amount of money you can squeeze out of a client, you focus on whether everyone wins, that they receive win-results from the engagement, and that repeat business is developed over time, delivering value over and over again. The great thing about the book is that what it teaches…things like how to identify different buying influencers, how to prepare for meetings, techniques to solicit information about what the buyer is really thinking/feeling…these things are entirely applicable in *any* situation where you need to sell to someone…and remember: selling is convincing people to see your point of view and act on it. So that new big screen TV you want to buy but need to convince your wife on? This book can help you. That training opportunity you want your company to send you on? This book can help you. The upgrade to your community park that you want to lobby the local civic authorities for? This book can help you. The book is a bit wordy. I found that the length could have been reduced and the points still would have gotten across. That’s really the only knock that I have, though; the insight it provides is so worthwhile that having to chew through extra words is well worth it. You definitely don’t have to be a professional salesperson to benefit from this book. Rating: 4/5

    Read the article

  • State of the (Commerce) Union: What the healthcare.gov hiccups teach us about the commerce customer experience

    - by Katrina Gosek
Guest Post by Brenna Johnson, Oracle Commerce Product A lot has been said about the healthcare.gov debacle in the last week. Regardless of your feelings about the Affordable Care Act, there’s a hidden issue in this story that most of the American people don’t understand: delivering a great commerce customer experience (CX) is hard. It shouldn’t be, but it is. The reality of the government’s issues getting the healthcare site up and running smoothly is something we in the online commerce community know too well.  If there’s one thing the botched launch of the site has taught us, it’s that regardless of the size of your budget or the power of an executive with a high-profile project, some of the biggest initiatives with the most attention (and the most at stake) don’t go as planned. It may even give you a moment of solace – we have the same issues! But why?  Organizations engage too many separate vendors with different technologies, running sections or pieces of a site to get it live. When things go wrong, it takes time to identify the problem – and who or what is at the center of it. Unfortunately, this is a brittle way of setting up a site, making it susceptible to breaks, bugs, and scaling issues. But it’s the reality of running a site with legacy technology constraints in today’s demanding, customer-centric market. This approach also means there are a lot of cooks in a lot of different kitchens. You’ve got development and IT, the business and the marketing team, an external Systems Integrator to bring it all together, a digital agency or consultant, QA, product experts, 3rd party suppliers, and the list goes on. To complicate things, different business units are held responsible for different pieces of the site and for managing different technologies. And again – due to legacy organizational structure and processes, this is all accepted as the normal State of the Union. Digital commerce has been commonplace for 15 years. Yet getting a site live, maintained and performing requires orchestrating a cast of thousands (or at least, dozens), big dollars, and some finger-crossing. But it shouldn’t. The great thing about the advent of mobile commerce and the continued maturity of online commerce is that it’s forced organizations to think from the outside, in. Consumers – whether they’re shopping for shoes or a new healthcare plan – don’t care about what technology issues or processes you have behind the scenes. They just want it to work.  They want their experience to be easy, fast, and tailored to them and their needs – whatever they are. This doesn’t sound like a tall order to the American consumer – especially since they interact with sites that do work smoothly.  But the reality is that it takes scores of people, teams, check-ins, late nights, testing, and some good luck to get sites to run, and even more so at Black Friday (or October 1st) traffic levels.  The last thing on a customer’s mind is making excuses for why they can’t buy a product – just get it to work. So what is the government doing? My guess is they are working day and night to get the site performing, and having to throw big money at the problem. In the meantime they’re sending frustrated online users to the call center, or even to a location where a trained “navigator” can help them in person to complete their selection. Sounds a lot like multichannel commerce (where broken communication between siloed touchpoints will only frustrate the consumer more).
One thing we’ve learned is that consumers spend their time and money with brands they know and trust. When sites are easy to use and adapt to their needs, they tend to spend more, come back, and even become long-time loyalists. Achieving this may require moving internal mountains, but there’s too much at stake to ignore the sea change in how organizations are thinking about their customer. If the thought of re-thinking your internal teams, technologies, and processes sounds like a headache, think about the pain associated with losing valuable customers – and dollars. Regardless of whether you’re in B2B or B2C, it’s guaranteed that your competitors are making CX a priority. Those early to the game who have made CX a priority have already begun to outpace their competition. So as you’re planning for 2014, look to the news this week. Make sure the customer experience is a focus at your organization. Expectations are at record highs. Map your customer’s journey, and think from the outside, in. How easy is it for your customers to do business with you? If they interact with many touchpoints across your organization, are the call center, website, mobile environment, and brick and mortar location in sync? Do you have the technology in place to achieve this? It’s time to give the people what they want!

    Read the article

  • Encrypt images before uploading to Dropbox [migrated]

    - by Cherry
    I want to encrypt a file before it is uploaded to Dropbox, so I have implemented the encryption inside the upload code. However, I get an error after integrating the two pieces. Where did I go wrong? The error is at putFileOverwriteRequest and says: "The method putFileOverwriteRequest(String, InputStream, long, ProgressListener) in the type DropboxAPI is not applicable for the arguments (String, FileOutputStream, long, new ProgressListener(){})". Another problem is this line: FileOutputStream fis = new FileOutputStream(new File("dont know what to put in this field")); I do not know where to write the file so that, after reading it back, I can upload it to Dropbox. Is anyone kind enough to help me with this? Time is running out for me and I still cannot solve the problem. Thank you in advance. The full code is below.

    public class UploadPicture extends AsyncTask<Void, Long, Boolean> {

        private DropboxAPI<?> mApi;
        private String mPath;
        private File mFile;
        private long mFileLen;
        private UploadRequest mRequest;
        private Context mContext;
        private final ProgressDialog mDialog;
        private String mErrorMsg;

        public UploadPicture(Context context, DropboxAPI<?> api, String dropboxPath, File file) {
            // We set the context this way so we don't accidentally leak activities
            mContext = context.getApplicationContext();
            mFileLen = file.length();
            mApi = api;
            mPath = dropboxPath;
            mFile = file;

            mDialog = new ProgressDialog(context);
            mDialog.setMax(100);
            mDialog.setMessage("Uploading " + file.getName());
            mDialog.setProgressStyle(ProgressDialog.STYLE_HORIZONTAL);
            mDialog.setProgress(0);
            mDialog.setButton("Cancel", new OnClickListener() {
                public void onClick(DialogInterface dialog, int which) {
                    // This will cancel the putFile operation
                    mRequest.abort();
                }
            });
            mDialog.show();
        }

        @Override
        protected Boolean doInBackground(Void... params) {
            try {
                KeyGenerator keygen = KeyGenerator.getInstance("DES");
                SecretKey key = keygen.generateKey(); // generate key

                // encrypt file here first
                byte[] plainData;
                byte[] encryptedData;
                Cipher cipher = Cipher.getInstance("DES/ECB/PKCS5Padding");
                cipher.init(Cipher.ENCRYPT_MODE, key);
                //File f = new File(mFile);

                // read file
                FileInputStream in = new FileInputStream(mFile); // obtains input bytes from a file
                plainData = new byte[(int) mFile.length()];
                in.read(plainData); // read bytes of data into an array of bytes
                encryptedData = cipher.doFinal(plainData); // encrypt data

                // upload to a path first, then call the path so that it can be uploaded to Dropbox
                FileOutputStream fis = new FileOutputStream(new File("dont know what to put in this field"));

                // save encrypted file to Dropbox
                // By creating a request, we get a handle to the putFile operation,
                // so we can cancel it later if we want to
                //FileInputStream fis = new FileInputStream(mFile);
                String path = mPath + mFile.getName();
                mRequest = mApi.putFileOverwriteRequest(path, fis, mFile.length(), new ProgressListener() {
                    @Override
                    public long progressInterval() {
                        // Update the progress bar every half-second or so
                        return 500;
                    }

                    @Override
                    public void onProgress(long bytes, long total) {
                        publishProgress(bytes);
                    }
                });

                if (mRequest != null) {
                    mRequest.upload();
                    return true;
                }
            } catch (DropboxUnlinkedException e) {
                // This session wasn't authenticated properly or user unlinked
                mErrorMsg = "This app wasn't authenticated properly.";
            } catch (DropboxFileSizeException e) {
                // File size too big to upload via the API
                mErrorMsg = "This file is too big to upload";
            } catch (DropboxPartialFileException e) {
                // We canceled the operation
                mErrorMsg = "Upload canceled";
            } catch (DropboxServerException e) {
                // Server-side exception. These are examples of what could happen,
                // but we don't do anything special with them here.
                if (e.error == DropboxServerException._401_UNAUTHORIZED) {
                    // Unauthorized, so we should unlink them. You may want to
                    // automatically log the user out in this case.
                } else if (e.error == DropboxServerException._403_FORBIDDEN) {
                    // Not allowed to access this
                } else if (e.error == DropboxServerException._404_NOT_FOUND) {
                    // path not found (or if it was the thumbnail, can't be thumbnailed)
                } else if (e.error == DropboxServerException._507_INSUFFICIENT_STORAGE) {
                    // user is over quota
                } else {
                    // Something else
                }
                // This gets the Dropbox error, translated into the user's language
                mErrorMsg = e.body.userError;
                if (mErrorMsg == null) {
                    mErrorMsg = e.body.error;
                }
            } catch (DropboxIOException e) {
                // Happens all the time, probably want to retry automatically.
                mErrorMsg = "Network error. Try again.";
            } catch (DropboxParseException e) {
                // Probably due to Dropbox server restarting, should retry
                mErrorMsg = "Dropbox error. Try again.";
            } catch (DropboxException e) {
                // Unknown error
                mErrorMsg = "Unknown error. Try again.";
            } catch (FileNotFoundException e) {
            }
            return false;
        }

        @Override
        protected void onProgressUpdate(Long... progress) {
            int percent = (int) (100.0 * (double) progress[0] / mFileLen + 0.5);
            mDialog.setProgress(percent);
        }

        @Override
        protected void onPostExecute(Boolean result) {
            mDialog.dismiss();
            if (result) {
                showToast("Image successfully uploaded");
            } else {
                showToast(mErrorMsg);
            }
        }

        private void showToast(String msg) {
            Toast error = Toast.makeText(mContext, msg, Toast.LENGTH_LONG);
            error.show();
        }
    }
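
    The compiler error indicates that the second argument of putFileOverwriteRequest must be something the SDK can read from (an InputStream), while the code hands it a FileOutputStream, which is only for writing. Below is a minimal sketch of one way the encrypt-then-upload step could be restructured, assuming the Dropbox Android Core SDK v1 classes the question already uses (com.dropbox.client2.*) and javax.crypto. The class and method names (EncryptedUploader, buildEncryptedUpload) are made up for illustration and are not part of the original code.

    import java.io.ByteArrayInputStream;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.security.GeneralSecurityException;
    import javax.crypto.Cipher;
    import javax.crypto.SecretKey;
    import com.dropbox.client2.DropboxAPI;
    import com.dropbox.client2.DropboxAPI.UploadRequest;
    import com.dropbox.client2.ProgressListener;
    import com.dropbox.client2.exception.DropboxException;

    // Hypothetical helper for illustration only; not from the original code.
    public final class EncryptedUploader {

        // Encrypts the file in memory and returns an upload request for the ciphertext.
        // The caller stores the request (e.g. in mRequest) and then calls upload() on it.
        public static UploadRequest buildEncryptedUpload(DropboxAPI<?> api, String dropboxPath,
                File file, SecretKey key, ProgressListener listener)
                throws IOException, GeneralSecurityException, DropboxException {

            // Read the plaintext image into a byte array.
            byte[] plainData = new byte[(int) file.length()];
            FileInputStream in = new FileInputStream(file);
            try {
                in.read(plainData);
            } finally {
                in.close();
            }

            // Same transformation the question uses. The key must be persisted or
            // transmitted somewhere, or the uploaded file can never be decrypted.
            Cipher cipher = Cipher.getInstance("DES/ECB/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] encryptedData = cipher.doFinal(plainData);

            // putFileOverwriteRequest needs a readable stream, so wrap the ciphertext in a
            // ByteArrayInputStream. Pass the ciphertext length, not file.length(), because
            // PKCS5 padding makes the ciphertext slightly longer than the plaintext.
            ByteArrayInputStream cipherStream = new ByteArrayInputStream(encryptedData);
            String path = dropboxPath + file.getName();
            return api.putFileOverwriteRequest(path, cipherStream, encryptedData.length, listener);
        }
    }

    Inside doInBackground the result would be assigned to mRequest (so the Cancel button can still abort it) before calling mRequest.upload(). If the images are too large to hold in memory twice, an alternative is to write encryptedData to a temporary file under mContext.getCacheDir() and open a FileInputStream on that file; either way the stream argument must be readable.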

    Read the article
