Search Results

Search found 25250 results on 1010 pages for 'oracle oracle e business suite workflow notifications worklist'.


  • SQL Modify question

    - by Jeff
    I need to update a lot of values in a table in SQL when the inactivity is greater than 30 days. I have:

        UPDATE VERSION
           SET isActive = 0
         WHERE customerNo = ( SELECT c.VersionNo
                                FROM Activity b
                               INNER JOIN VERSION c ON b.VersionNo = c.VersionNo
                               WHERE Months_between(sysdate, b.Activitye) > 30 );

    It only works for one value, though; if more than one row is returned, it fails. What am I missing here? If someone could educate me on what is going on, I'd also appreciate it.
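
    One likely fix, as a hedged sketch: a subquery compared with = must return exactly one row, so the statement fails as soon as more than one VersionNo qualifies. Comparing with IN accepts the whole set (this keeps the question's table and column names as-is, including the odd customerNo/VersionNo comparison):

        UPDATE version
           SET isActive = 0
         WHERE customerNo IN ( SELECT c.versionNo
                                 FROM activity b
                                INNER JOIN version c ON b.versionNo = c.versionNo
                                WHERE MONTHS_BETWEEN(SYSDATE, b.activitye) > 30 );

        -- Note: MONTHS_BETWEEN(...) > 30 tests 30 months; for 30 days of
        -- inactivity the condition would be SYSDATE - b.activitye > 30.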

    Read the article

  • How to check whether an exception block exists for the main PL/SQL block or routine

    - by user1297211
    I am trying to write a validator that checks whether an exception block exists for the main body of a PL/SQL block or any routine (highlighted in bold). E.g.:

        DECLARE
          some data
          PROCEDURE xyx IS
          BEGIN
            ....
          EXCEPTION
            ..
          END;
        BEGIN
          some data
          BEGIN
            ....
          EXCEPTION
            ..
          END;
        **EXCEPTION**
          some data
          BEGIN
            ....
          EXCEPTION
            ..
          END;
        END;

    This is a simple example; there can be many other scenarios, but my need is to find whether an exception block exists for the main block of the PL/SQL code. Please let me know if you have any suggestions. Thanks
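
    One possible starting point, as a rough heuristic sketch rather than a real parser (the function name and signature are hypothetical; string literals, comments, keywords embedded in identifiers, and EXCEPTION declarations in nested DECLARE sections will all fool it):

        CREATE OR REPLACE FUNCTION has_main_exception_block (
          p_name IN user_source.name%TYPE,
          p_type IN user_source.type%TYPE DEFAULT 'PROCEDURE'
        ) RETURN VARCHAR2
        IS
          v_src   CLOB;
          v_tok   VARCHAR2(100);
          v_depth PLS_INTEGER := 0;
          v_found BOOLEAN     := FALSE;
          i       PLS_INTEGER := 1;
        BEGIN
          -- Concatenate the unit's source (USER_SOURCE only covers stored
          -- units, not anonymous blocks).
          FOR r IN (SELECT text FROM user_source
                     WHERE name = p_name AND type = p_type
                     ORDER BY line) LOOP
            v_src := v_src || UPPER(r.text);
          END LOOP;
          -- Walk BEGIN/END/EXCEPTION tokens: BEGIN adds a nesting level, a
          -- bare END removes one (END IF/LOOP/CASE close statements we never
          -- counted, so they are skipped).
          LOOP
            v_tok := REGEXP_SUBSTR(v_src,
                       'BEGIN|EXCEPTION|END[[:space:]]+(IF|LOOP|CASE)|END', 1, i);
            EXIT WHEN v_tok IS NULL;
            IF v_tok = 'BEGIN' THEN
              IF v_depth = 0 THEN
                v_found := FALSE;   -- entering a new top-level block: reset
              END IF;
              v_depth := v_depth + 1;
            ELSIF v_tok = 'END' THEN
              v_depth := v_depth - 1;
            ELSIF v_tok = 'EXCEPTION' AND v_depth = 1 THEN
              v_found := TRUE;      -- handler sits directly on the main body
            END IF;
            i := i + 1;
          END LOOP;
          RETURN CASE WHEN v_found THEN 'YES' ELSE 'NO' END;
        END has_main_exception_block;
        /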

    Read the article

  • Performance: Subquery or Join?

    - by Auro
    Hello, I have a little question about the performance of a subquery versus joining another table:

        INSERT INTO Original.Person ( PID, Name, Surname, SID )
        ( SELECT ma.PID_new, TBL.Name, ma.Surname, TBL.SID
            FROM Copy.Person TBL, original.MATabelle MA
           WHERE TBL.PID = p_PID_old
             AND TBL.PID = MA.PID_old );

    This is my SQL, and it runs around 1 million times or more. Now my question is: what would be faster? If I change TBL.SID to (SELECT new FROM helptable WHERE old = tbl.sid), or if I add helptable to the FROM clause and do the joining in the WHERE? greets Auro
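
    For comparison, a hedged sketch of the join variant (assuming helptable maps old SID values to new ones via columns named old and new, as the question implies):

        INSERT INTO Original.Person ( PID, Name, Surname, SID )
        SELECT ma.PID_new, tbl.Name, ma.Surname, ht.new
          FROM Copy.Person tbl, original.MATabelle ma, helptable ht
         WHERE tbl.PID = p_PID_old
           AND tbl.PID = ma.PID_old
           AND ht.old  = tbl.SID;

    A scalar subquery in the SELECT list is evaluated per row (subject to caching), while a join lets the optimizer choose a hash join; which is faster depends on the data volumes, so test both plans.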

    Read the article

  • if not (i_ReLaunch = 1 and (dt_enddate is not null)): how will this expression be evaluated in Oracle?

    - by Phani Kumar PV
    How will if not (i_ReLaunch = 1 and (dt_enddate is not null)) be evaluated in Oracle 10g when the input value of i_ReLaunch is null and dt_enddate is not null? According to the rules in normal C# and the like, substituting the values gives: if not(false and (true)) = if not(false) = if(true), which implies it should enter the block. But that is not happening. Can someone let me know if I am wrong at any place?
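
    A hedged explanation with a sketch: in Oracle's three-valued logic, i_ReLaunch = 1 evaluates to UNKNOWN (not FALSE) when i_ReLaunch is NULL, and NOT UNKNOWN is still UNKNOWN, so the IF branch is skipped. If NULL should be treated like "not relaunched", make the test NULL-safe, e.g.:

        IF NOT ( NVL(i_ReLaunch, 0) = 1 AND dt_enddate IS NOT NULL ) THEN
          -- entered when i_ReLaunch is NULL, because NVL maps NULL to 0
          ...
        END IF;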

    Read the article

  • Delete from empty table taking forever

    - by Will
    Hello, I have an empty table that previously had a large number of rows. The table has about 10 columns and indexes on many of them, as well as indexes on multiple columns.

        DELETE FROM item WHERE 1=1

    This takes approximately 40 seconds to complete.

        SELECT * FROM item

    This takes 4 seconds. The execution plan of SELECT * FROM item shows the following:

        SQL> select * from midas_item;

        no rows selected

        Elapsed: 00:00:04.29

        Execution Plan
        ----------------------------------------------------------
           0      SELECT STATEMENT Optimizer=CHOOSE (Cost=19 Card=123 Bytes=7380)
           1    0   TABLE ACCESS (FULL) OF 'MIDAS_ITEM' (Cost=19 Card=123 Bytes=7380)

        Statistics
        ----------------------------------------------------------
                  0  recursive calls
                  0  db block gets
               5263  consistent gets
               5252  physical reads
                  0  redo size
               1030  bytes sent via SQL*Net to client
                372  bytes received via SQL*Net from client
                  1  SQL*Net roundtrips to/from client
                  0  sorts (memory)
                  0  sorts (disk)
                  0  rows processed

    Any idea why these would be taking so long, and how to fix it, would be greatly appreciated!!
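
    A likely cause, hedged: the table's high-water mark is still where the old data left it, so a full scan still reads all ~5,250 blocks below it even though they are empty. If the rows really are gone for good, resetting the high-water mark should fix both statements:

        TRUNCATE TABLE midas_item;

        -- or, to lower the high-water mark while keeping the table online
        -- (10g+, requires an ASSM tablespace):
        ALTER TABLE midas_item ENABLE ROW MOVEMENT;
        ALTER TABLE midas_item SHRINK SPACE;

    Note also that DELETE FROM item WHERE 1=1 generates undo and maintains every index row by row, which is why it is far slower than TRUNCATE.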

    Read the article

  • Replicating master-table mappings in transaction tables

    - by NoDisplay
    I have three master tables for location information:

        Country {ID, Name}
        State   {ID, Name, CountryID}
        City    {ID, Name, StateID}

    Now I have one transaction table called Person, which holds the person's name and location information. My question is: shall I have only CityID in the Person table, like this:

        Person {ID, Name, CityID}

    and a view with a join query that gives me details like Person {ID, Name, City, State, Country}? Or shall I replicate the mapping:

        Person {ID, Name, CityID, StateID, CountryID}

    Please suggest which you feel should be selected, and why. If there is any other option available, please suggest it. Thanks in advance.
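
    A sketch of the normalized option, using the table and column names from the question (the view name is made up): store only CityID on Person and derive state and country through joins.

        CREATE OR REPLACE VIEW person_location_vw AS
        SELECT p.ID, p.Name,
               ci.Name AS City,
               st.Name AS State,
               co.Name AS Country
          FROM Person  p
          JOIN City    ci ON ci.ID = p.CityID
          JOIN State   st ON st.ID = ci.StateID
          JOIN Country co ON co.ID = st.CountryID;

    Storing StateID and CountryID on Person as well is denormalization: it can speed up reporting, but it risks contradictory rows (a CityID whose city belongs to a different state than the stored StateID) unless constraints or triggers keep the columns in sync.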

    Read the article

  • Fill data gaps without UNION

    - by Dave Jarvis
    Problem

    There are data gaps that need to be filled, possibly using PARTITION BY.

    Query Statement

    The select statement reads as follows:

        SELECT count( r.incident_id ) AS incident_tally,
               r.severity_cd,
               r.incident_typ_cd
          FROM report_vw r
         GROUP BY r.severity_cd, r.incident_typ_cd
         ORDER BY r.severity_cd, r.incident_typ_cd

    Code Tables

    The severity codes and incident type codes are from:

        severity_vw
        incident_type_vw

    Actual Result Data

        36  0  ENVIRONMENT
         1  1  DISASTER
        27  1  ENVIRONMENT
         4  2  SAFETY
         1  3  SAFETY

    Required Result Data

        36  0  ENVIRONMENT
         0  0  DISASTER
         0  0  SAFETY
        27  1  ENVIRONMENT
         0  1  DISASTER
         0  1  SAFETY
         0  2  ENVIRONMENT
         0  2  DISASTER
         4  2  SAFETY
         0  3  ENVIRONMENT
         0  3  DISASTER
         1  3  SAFETY

    Any ideas how to use PARTITION BY (or JOINs) to fill in the zero counts?
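
    One hedged way to get the zero rows without UNION: build the full severity x incident-type grid by cross-joining the two code views, then outer-join the aggregate (the column names on the code views are assumed to match those used in the query):

        SELECT NVL(agg.incident_tally, 0) AS incident_tally,
               s.severity_cd,
               t.incident_typ_cd
          FROM severity_vw s
         CROSS JOIN incident_type_vw t
          LEFT OUTER JOIN ( SELECT COUNT(r.incident_id) AS incident_tally,
                                   r.severity_cd, r.incident_typ_cd
                              FROM report_vw r
                             GROUP BY r.severity_cd, r.incident_typ_cd ) agg
            ON  agg.severity_cd     = s.severity_cd
            AND agg.incident_typ_cd = t.incident_typ_cd
         ORDER BY s.severity_cd, t.incident_typ_cd;

    Oracle's partitioned outer join (report_vw r PARTITION BY (r.incident_typ_cd) RIGHT OUTER JOIN severity_vw s ...) can densify along one dimension, but the cross-join approach handles both dimensions at once.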

    Read the article

  • Create trigger for auto-increment ID and default Unix datetime

    - by user1804985
    Can anyone help me create a trigger for an auto-increment fld_id and a Unix datetime? My table fields are fld_id (int), fld_date (number), fld_value (varchar2). My insert queries are:

        insert into table (fld_value) values ('xxx');
        insert into table (fld_value) values ('yyy');

    I need the table records like this:

        fld_id  fld_date    fld_value
        1       1354357476  xxx
        2       1354357478  yyy

    Please help me create this; I am not able to do it.
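
    A hedged sketch (the table name my_table and sequence name fld_id_seq are made up; the date arithmetic converts SYSDATE to Unix seconds and ignores time zones, so adjust it if you need UTC):

        CREATE SEQUENCE fld_id_seq;

        CREATE OR REPLACE TRIGGER trg_my_table_bi
        BEFORE INSERT ON my_table
        FOR EACH ROW
        BEGIN
          -- 11g+ allows direct sequence assignment; on 10g and earlier use
          -- SELECT fld_id_seq.NEXTVAL INTO :NEW.fld_id FROM dual;
          :NEW.fld_id   := fld_id_seq.NEXTVAL;
          -- days since epoch * 86400 seconds per day
          :NEW.fld_date := ROUND((SYSDATE - DATE '1970-01-01') * 86400);
        END;
        /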

    Read the article

  • Difference between INSERT INTO and INSERT ALL INTO

    - by emily soto
    While I was inserting some records into a table, I found that:

        INSERT INTO T_CANDYBAR_DATA
        SELECT CONSUMER_ID, CANDYBAR_NAME, SURVEY_YEAR, GENDER, 1 AS STAT_TYPE, OVERALL_RATING
          FROM CANDYBAR_CONSUMPTION_DATA
        UNION
        SELECT CONSUMER_ID, CANDYBAR_NAME, SURVEY_YEAR, GENDER, 2 AS STAT_TYPE, NUMBER_BARS_CONSUMED
          FROM CANDYBAR_CONSUMPTION_DATA;

        79 rows inserted.

        INSERT ALL
          INTO t_candybar_data VALUES (consumer_id, candybar_name, survey_year, gender, 1, overall_rating)
          INTO t_candybar_data VALUES (consumer_id, candybar_name, survey_year, gender, 2, number_bars_consumed)
        SELECT * FROM candybar_consumption_data;

        86 rows inserted.

    I have read somewhere that INSERT ALL INTO automatically unions, so why is this difference showing?
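
    A hedged explanation: UNION removes duplicate rows from the combined result set, while INSERT ALL inserts one row per INTO clause per source row with no de-duplication, so the seven-row gap is presumably duplicates. Swapping UNION for UNION ALL should make the first statement insert the same 86 rows:

        INSERT INTO t_candybar_data
        SELECT consumer_id, candybar_name, survey_year, gender, 1, overall_rating
          FROM candybar_consumption_data
        UNION ALL
        SELECT consumer_id, candybar_name, survey_year, gender, 2, number_bars_consumed
          FROM candybar_consumption_data;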

    Read the article

  • Select distinct ... inner join vs. select ... where id in (...)

    - by Tonio
    I'm trying to create a subset of a table (as a materialized view), defined as those records which have a matching record in another materialized view. For example, let's say I have a Users table with user_id and name columns, and a Log table with entry_id, user_id, activity, and timestamp columns. First I create a materialized view of the Log table, selecting only those rows with timestamp some_date. Now I want a materialized view of the Users referenced in my snapshot of the Log table. I can either create it as:

        select * from Users where user_id in (select user_id from Log_mview)

    or I can do:

        select distinct u.* from Users u inner join Log_mview l on u.user_id = l.user_id

    (I need the distinct to avoid multiple hits from users with multiple log entries). The former seems cleaner and more elegant, but takes much longer. Am I missing something? Is there a better way to do this?
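
    A third variant worth testing, hedged: EXISTS expresses the same semi-join without needing DISTINCT, and the optimizer can often transform IN and EXISTS into the same plan:

        SELECT u.*
          FROM Users u
         WHERE EXISTS ( SELECT 1
                          FROM Log_mview l
                         WHERE l.user_id = u.user_id );

    If the IN version is much slower, comparing the execution plans (e.g. whether a HASH JOIN SEMI appears in both) is the quickest way to see what the optimizer actually chose.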

    Read the article

  • Calculating sum of activity

    - by Maddy
    I have a table with the following kind of information (the date and other columns are omitted here):

        activity  cost  order
        10        1     100
        20        2     100
        10        1     100
        30        4     100
        40        4     100
        20        2     100
        40        4     100
        20        2     100
        10        1     101
        10        1     101
        20        1     101

    My requirement is to get the sum over all activities of a work order, e.g. for order 100: 1+2+4+4 = 11, that is 1 (for activity 10), 2 (for activity 20), 4 (for activity 30), etc. I tried with GROUP BY, but it takes a lot of time to calculate; there are 100,000+ (1 lakh plus) records in the warehouse. Is there a more efficient way?

        SELECT SUM(MIN(cost))
          FROM COST_WAREHOUSE a
         WHERE order = 100
         GROUP BY (order, ACTIVITY)
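
    A hedged tuning sketch: the aggregation itself is cheap; the cost is usually the scan, so a composite index covering the query lets Oracle answer it without touching the table at all. (ORDER is a reserved word in Oracle, so the real column presumably has another name; order_no and the index name below are made up.)

        CREATE INDEX ix_cw_order_activity ON cost_warehouse (order_no, activity, cost);

        SELECT SUM(activity_cost) AS order_total
          FROM ( SELECT MIN(cost) AS activity_cost
                   FROM cost_warehouse
                  WHERE order_no = 100
                  GROUP BY activity );

    This is the same SUM(MIN(cost)) logic, written as an inline view for clarity.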

    Read the article

  • ATG Live Webcast Event - EBS 12 OAF Rich UI Enhancements

    - by Bill Sawyer
    The E-Business Suite Applications Technology Group (ATG) participates in several conferences a year, including Oracle OpenWorld in San Francisco and OAUG/Collaborate. We announce new releases, roadmaps, updates, and other news at these events. These events are exciting, drawing thousands of attendees, but it's clear that only a fraction of our EBS users are able to participate. We touch upon many of the same announcements here on this blog, but a blog article is necessarily different from an hour-long conference session. We're very interested in offering more in-depth technical content and the chance to interact directly with senior ATG Development staff.

    New ATG Live Webcast series -- free of charge

    As part of that initiative, I'm very pleased to announce that we're launching a new series of free ATG Live Webcasts jointly with Oracle University. Our goal is to provide solid, authoritative coverage of some of the latest ATG technologies, broadcasting live from our development labs to you.

    Our first event is titled: The Latest E-Business Suite R12.x OA Framework Rich User Interface Enhancements

    This live one-hour webcast will offer a comprehensive review of the latest user interface enhancements and updates to OA Framework in EBS 12. Developers will get a detailed look at new features designed to enhance usability, offer more capabilities for personalization and extensions, and support the development and use of dashboards and web services. Topics will include new rich user interface (UI) capabilities such as:

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today’s new capabilities include:

    • Storage: Import/Export Hard Disk Drives to your Storage Accounts
    • HDInsight: General Availability of our Hadoop Service in the cloud
    • Virtual Machines: New VM Gallery, ACL support for VIPs
    • Web Sites: WebSocket and Remote Debugging Support
    • Notification Hubs: Segmented customer push notification support with tag expressions
    • TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services
    • Developer Analytics: New Relic support for Web Sites + Mobile Services
    • Service Bus: Support for partitioned queues and topics
    • Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

    All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

    Storage: Import/Export Hard Disk Drives to Windows Azure

    I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).

    Encrypted Transport

    Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

    How to Import/Export your first Hard Drive of Data

    You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs.

    It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview). Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it. This will launch a wizard that easily walks you through the steps required.

    For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address.

    We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.

    HDInsight: 100% Compatible Hadoop Service in the Cloud

    Last week we announced the general availability release of Windows Azure HDInsight.
    HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios.

    HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost-effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal or our PowerShell and cross-platform command line tools.

    The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data. We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.

    Virtual Machines: VM Gallery Enhancements

    Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal. The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

    • Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring.
    • Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images.
    • MSDN and Supported checkboxes: With today’s update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
    • Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best.
    • Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery.

    The above improvements make it even easier to use the VM Gallery and quickly create, launch, and run Virtual Machines in the cloud.
    Virtual Machines: ACL Support for VIPs

    A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance.

    This enables you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads.

    Here are the default behaviors for ACLs in Windows Azure:

    • By default (i.e. no rules specified), all traffic is permitted.
    • When using only Permit rules, all other traffic is denied.
    • When using only Deny rules, all other traffic is permitted.
    • When there is a combination of Permit and Deny rules, all other traffic is denied.

    Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well.

    Web Sites: Web Sockets Support

    With today’s release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier). Higher level programming libraries like SignalR and socket.io are also now supported with it.

    You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”. Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.

    Web Sites: Remote Debugging Support

    The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites.

    With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

    Remote Debugging of a Windows Azure Web Site using VS 2013

    Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the “Attach Debugger” option on it. When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.
    The debugger will then stop the web site’s execution when it hits any breakpoints that you have set within your web application’s project inside Visual Studio. For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio.

    Note how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

    Tips for Better Debugging

    To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab.

    When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

    Notification Hubs: Segmented Push Notification support with tag expressions

    In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end. Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises.

    Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call each.

    New support for using tag expressions to enable advanced customer segmentation

    With today’s release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example:

    • Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
    • Push content to multiple segments in a single message—e.g.
      rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinals fan
    • Avoid presenting subsets of a segment with irrelevant content—e.g. a season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder

    To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below:

        var notification = new WindowsNotification(messagePayload);
        hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

    In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

    • Social: To “all my group except me” - group:id && !user:id
    • Events: A touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 …
    • Hours: Send notifications at specific times. E.g. tag devices with time zone, and when it is 12pm in Seattle send to: GMT8 && follows:thaifood
    • Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android

    For help on getting started with Notification Hubs, visit the Notification Hub documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.

    TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services

    With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support. It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio.

    With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services. This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required.

    The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

    Enabling Continuous Delivery to Windows Azure with Team Foundation Services

    The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.
    I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks). Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository.

    The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories.

    Connecting a Team Foundation Services project to Windows Azure

    Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass). Enabling this is now really easy.

    To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. This will bring up a creation dialog; I gave the web site a name and then made sure the “Publish from source control” checkbox was selected. When we click next we’ll be prompted for the location of the source repository. We’ll select “Team Foundation Services”. Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”).

    When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account. Once we do this we’ll be prompted to pick the source repository we want to connect to. Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the “SimpleContinuousDeploymentTest” repository we created earlier.

    Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services. Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass, the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site.

    This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.

    Developer Analytics: New Relic support for Web Sites + Mobile Services

    With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure.
    Enabling New Relic with a Windows Azure Web Site

    Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it. Clicking the “add-on” button will display some additional UI. If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything).

    Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse from a variety of great add-on services – including New Relic. Select “New Relic” within the dialog, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase. For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever.

    Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account). Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site.

    Deploying the New Relic Agent as part of a Web Site

    The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu. This will bring up the NuGet package manager. You can search for “New Relic” within it to find the New Relic agent. Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal).

    Once we install the NuGet package we are all set to go. We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application.

    Monitoring a Web Site using New Relic

    Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it. We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal. Then select the New Relic add-on we signed-up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account. When we do this a new browser tab will launch with the New Relic admin tool loaded within it. We can now see insights into how our app is performing – without having to have written a single line of monitoring code.
    The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see:

    • Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
    • Information about where in the world your customers are hitting the site from (and how performance varies by region)
    • Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc)
    • Error information including call stack details for exceptions that have occurred at runtime
    • SQL Server profiling information – including which queries executed against your database and what their performance was
    • And a whole bunch more…

    The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these. This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort.

    If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications. Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes.

    Service Bus: Support for partitioned queues and topics

    With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic. The multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic. Read this article to learn more about partitioned queues and topics and how to take advantage of them today.

    Billing: New Billing Alert Service

    Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure. This makes it easier to manage your bill and avoid potential surprises at the end of the month.

    With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert, first sign up for the free Billing Alert Service Preview. Then visit the account management page, click on a subscription you have set up, and then navigate to the new Alerts tab that is available. The Alerts tab allows you to set up email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the “add alert” button I can set up a rule to send myself email anytime my Windows Azure bill goes above $100 for the month.

    The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS. Try out the new Billing Alert Service Preview today and give us feedback.
    Summary

    Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

    If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • It's raining development VirtualBox images again!

    - by pieter.humphrey
    The cloud has burst... the forecast is looking like large amounts of VirtualBox images are coming down from OTN. Are you finding the install for Database, WebLogic, SOA or WebCenter to be complicated when your goal is simply to set up a development sandbox? Sick of giving your credit card info to cloud vendors, only to be stuck in a walled garden where you can't connect to your own internal systems? Are you new to Java and just wanted something technical to sink your teeth into? Or maybe you just want to put some stuff on that new terabyte drive you got? ;)

    Have no fear. VirtualBox 4.0 is here. We have several development (read: don't use in production) images that were designed for use at in-person events, but we're posting them for your enjoyment. Some of the images have step-by-step hands-on labs baked into them too! So get a freeware download manager like BitComet, install VirtualBox and an MD5 checksum utility (if you are on Windows), and get wet!

    Read the article

  • Oracle (XE) 10 vs 11: Have I lost the SQL tuning pages? Am I going out of my mind?

    - by Richard Green
    Ok .. so perhaps the title needs calming down a bit, but basically I am after the XE 11g equivalent of the pages that you can see here: http://docs.oracle.com/cd/B25329_01/doc/admin.102/b25107/getstart.htm#BABHJAGE from which you can then navigate to stuff like "top 50 queries" and "longest running queries" etc. For the life of me, I can't find that in the most recent XE edition. Please can someone direct me to where I might find these very useful admin pages! Or was I imagining it all along :-/ Edit: These are the pages I am after: http://docs.oracle.com/cd/B25329_01/doc/admin.102/b25107/monitoring.htm

    Read the article

  • Security in OBIEE 11g, Part 2

    - by Rob Reynolds
    Continuing the series on OBIEE 11g, our guest blogger this week is Pravin Janardanam. Here is Part 2 of his overview of Security in OBIEE 11g.

    OBIEE 11g Security Overview, Part 2
    by Pravin Janardanam

    In my previous blog on security, I discussed the OBIEE 11g changes regarding the authentication mechanism, RPD protection and encryption. This blog will include a discussion about OBIEE 11g authorization and other security aspects.

    Authorization

    Authorization in 10g was achieved using a combination of Users, Groups and the association of privileges and object permissions to Users and Groups. Two key changes to authorization in OBIEE 11g are:

    • Application Roles
    • Policies / Permission Groups

    Application Roles are introduced in OBIEE 11g. An application role is specific to the application. They can be mapped to other application roles defined in the same application scope and also to enterprise users or groups, and they are used in authorization decisions. Application roles in 11g take the place of Groups in 10g within the OBIEE application. In OBIEE 10g, any changes to corporate LDAP groups required a corresponding change to Groups and their permission assignment. In OBIEE 11g, Application Roles provide insulation between permission definitions and corporate LDAP Groups. Permissions are defined at the Application Role level, and changes to LDAP groups just require a reassignment of the Group to the Application Roles.

    Permissions and privileges are assigned to Application Roles and users in OBIEE 11g, compared to Groups and Users in 10g. The diagram below shows the relationship between users, groups and application roles. Note that the Groups shown in the diagram refer to LDAP Groups (WebLogic Groups by default) and not OBIEE application Groups. The following screenshot compares the permission windows from the Admin tool in 10g vs 11g. Note that the Groups in OBIEE 10g are replaced with Application Roles in OBIEE 11g. The same is applicable to OBIEE web catalog objects.

    The default Application Roles available after OBIEE 11g installation are BIAdministrator, BISystem, BIConsumer and BIAuthor. Application policies are the authorization policies that an application relies upon for controlling access to its resources. An Application Role is defined by the Application Policy. The following screenshot shows the policies defined for the BIAdministrator and BISystem Roles. Note that the permission for impersonation is granted to the BISystem Role. In OBIEE 10g, the permissions to manage repositories and to impersonate were assigned to the “Administrators” group, with no way to separate these permissions within the Administrators group. Hence the user “Administrator” also had the permission to impersonate. In OBIEE 11g, BIAdministrator does not have the permission to impersonate. This gives more flexibility to have multiple users perform different administrative functions.

    Application Roles, Policies, the association of Policies to Application Roles, and the association of users and groups to Application Roles are managed using Fusion Middleware Enterprise Manager (FMW EM). They reside in the policy store, identified by the system-jazn-data.xml file. The screenshots below show where they are created and managed in FMW EM. The following screenshot shows the assignment of WebLogic Groups to Application Roles, and the one after it shows the assignment of Permissions to Application Roles (Application Policies).

    Note: Object-level permission associations to Application Roles reside in the RPD for repository objects.
    Permissions and privileges for web catalog objects reside in the OBIEE Web Catalog. Wherever Groups were used in the web catalog and RPD, they have been replaced with Application Roles in OBIEE 11g.

    The following are the tools used in OBIEE 11g security administration:

    • Users and Groups are managed in the Oracle WebLogic Administration Console (by default). If WebLogic is integrated with other LDAP products, then Users and Groups need to be managed using the interface provided by the respective LDAP vendor – new in OBIEE 11g
    • Application Roles and Application Policies are managed in Oracle Enterprise Manager - Fusion Middleware Control – new in OBIEE 11g
    • Repository object permissions are managed in the OBIEE Administration tool – same as 10g, but the assignment is to Application Roles instead of Groups
    • Presentation Services Catalog permissions and privileges are managed in the OBI Application administration page – same as 10g, but the assignment is to Application Roles instead of Groups

    Credential Store

    The Credential Store is a single consolidated service provider to store and manage application credentials securely. The credential store contains credentials that are either user supplied or system generated. The credential store in OBIEE 10g is file based and is managed using the cryptotools utility. In 11g, the Credential Store can be managed directly from FMW Enterprise Manager and is stored in the cwallet.sso file. By default, the Credential Store stores passwords for deployed RPDs, BI Publisher data sources and the BISystem user. In addition, the Credential Store can be LDAP based, but only Oracle Internet Directory is supported right now.

    As you can see, OBIEE security is integrated with the Oracle Fusion Middleware security architecture. This provides a common security framework for all components of Business Intelligence and Fusion Middleware applications.

    Read the article

  • Tom Kyte is coming to Budapest!

    - by Lajos Sárecz
    I'm just wondering whether there is anyone among my blog's readers who doesn't know the asktom.oracle.com site. I suspect there are few. Tom seems quite busy these days: on his popular site he is currently asking visitors to come back with their questions later because of his backlog, so for now you can only browse the questions he has already answered. Mind you, that alone is no small gift, and the figures on the front page show how active the master is: over the past four weeks he received 47 new questions, read 532 follow-ups, and answered 380 of them. It's a wonder he has time to hop over to Europe and give a seminar for Hungarian professionals as well. As far as I know, this is such a unique opportunity that there has never been anything like it in Hungary before, and there probably won't be many more in the region either, since the trend is for him, too, to teach so-called virtual classes, which means that in person you will at most catch him for the odd session at the annual OpenWorld conference. In April in Budapest, however, you can listen to two full days of his ever-useful advice. What will the topics be?

    • Why is using bind variables important? How does it help performance, scalability, and even security?
    • How do materialized views work? When are they worth using, and how can you make them as useful as possible?
    • Which index is worth using, and when? Everyone knows that indexes are needed; it is less obvious which one to use when for optimal performance. Tom Kyte will also tell us what criteria to apply when choosing the right indexing.
    • Which data storage formats are worth choosing? At first you might not even realize how many tricks there are for storing data optimally. To mention just the most important ones: clustered data organization, index-organized tables, partitioning, and compression.
    • When does data need to be reorganized? What are the best techniques for reorganizing data, and how can you carry it out with the least impact on the application's users?

    I think these topics sound familiar to every practicing DBA and Oracle developer, but I am quite certain that everyone will pick up plenty of new information and useful advice by attending Thomas Kyte's two-day training. Oh, and not least, there will certainly be an opportunity to ask Tom questions too! Further information and registration are available on the Oracle University site.

    Read the article

  • Show Notes: Architects in the Cloud

    - by Bob Rhubart
    Stephen G. Bennett and Archie Reed, the authors of Silver Clouds, Dark Linings: A Concise Guide to Cloud Computing, discuss what’s new and what’s not so new about cloud computing, talk about how marketing hype has muddied understanding of what cloud is and what it can do, and explore other issues in the latest ArchBeat interview series.

    • Listen to Part 1
    • Listen to Part 2 (December 22)
    • Listen to Part 3 (December 29)
    • Listen to Part 4 (January 5)

    Connect

    If you have questions, comments, or would otherwise like to interact directly with Steve or Archie, you can do so through the following channels:

    Stephen G. Bennett: Blog | Twitter | LinkedIn
    Archie Reed: Blog | Twitter | LinkedIn

    Steve and Archie have also set up a Twitter account and blog specifically for their book:

    Twitter: @concisecloud
    Blog: concisecloud.com

    Of course, the book is available on Amazon: http://amzn.to/silverclouddarklinings

    Read the article

  • Grounded in Dublin

    - by Mike Dietrich
    Friday's hands-on workshop in the Oracle office in Dublin was quite good fun for everybody - except for Mick, who had just learned that his Ryanair flight back to Cork had been canceled (I hope you've made it home well!), and me, as my flights back to Munich via London City had been canceled as well. It's always good to have somebody from Aer Lingus in the workshop, so I got hourly updates on what was going on in Irish airspace (and now I know that the system dealing with such situations is a well-prepared Oracle database which runs just like a Swiss watch - thanks again for all your support!!! It was great to talk to you!!!).

    But to be honest, there are worse places to be grounded for a few days than Dublin. At least it gave me the chance to do something I never had time for on previous visits to Oracle Ireland: a bit of sightseeing.

    When I realized that nothing would be moving over the weekend, I started organizing my travel back yesterday. It was no fun at all, because there's no single system for booking such a trip. Figuring out all the possibilities and options for getting back to Munich was the first challenge. The Irish Ferries webpage was groaning under all the unexpected load (currently it's down completely). Hotel booking websites showed vacancies in Holyhead but didn't let me book, and calling them just revealed that there were no rooms left. Haven't stayed overnight in a train station for quite a while ;-) The VirginTrains website puzzled me by offering a seat at an enormous price for a train ride from Holyhead to London Euston (Thanks, Sir Richard Branson!), just to tell me after I had booked a ticket that there were no seats left (but I traveled German railways a few weeks ago from Düsseldorf to Frankfurt sitting on the floor as well). Eurostar's website let me choose tickets through the tunnel, only to tell me in the final step that the ticket could not be confirmed as there were no seats left - but the next check again showed bookable seats - must be a database from some other vendor which has no proper row-level locking ... hm ...?! Finally, the TGV page for the high-speed train to Stuttgart and then the ICE to Munich was not allowing searches for quite a while - but ultimately, after 4.5 hours of searching, waiting, and sending credit card information again and again, I got something booked. So if you have a few spare fingers, please keep them crossed :-)

    And good luck to all my colleagues traveling back from the Exadata training in Berlin. As Mike Appleyard, my colleague from the UK presales team, wrote: "Dublin and Berlin aren't too bad a place to get stuck... ;-)"

    Read the article

  • Serious about Embedded: Java Embedded @ JavaOne 2012

    - by terrencebarr
    It bears repeating: more than ever, the Java platform is the best technology for many embedded use cases. Java’s platform independence, high level of functionality, security, and developer productivity address the key pain points in building embedded solutions. Transitioning from 16 to 32 bit or even 64 bit? Need to support multiple architectures and operating systems with a single code base? Want to scale on multi-core systems? Require a proven security model? Dynamically deploy and manage software on your devices? Cut time to market by leveraging code, expertise, and tools from a large developer ecosystem? Looking for back-end services, integration, and management? The Java platform has got you covered.

    Java already powers around 10 billion devices worldwide, with traditional desktops and servers being only a small portion of that. And the “Internet of Things” is just really starting to explode ... it is estimated that within five years, intelligent and connected embedded devices will outnumber desktops and mobile phones combined, and will generate the majority of the traffic on the Internet. Is your platform and services strategy ready for the coming disruptions and opportunities?

    It should come as no surprise that Oracle is keenly focused on Java for Embedded. At JavaOne 2012 San Francisco, the dedicated track for Java ME, Java Card, and Embedded keeps growing, with 52 sessions, tutorials, Hands-on-Labs, and BOFs scheduled for this track alone, plus keynotes, demos, booths, and a variety of other embedded content. To further prove Oracle’s commitment, in 2012 for the first time there will be a dedicated sub-conference focused on the business aspects of embedded Java: Java Embedded @ JavaOne. This conference will run for two days in parallel to JavaOne in San Francisco, will have its own business-oriented track and content, and targets C-level executives, architects, business leaders, and decision makers.

    Registration and Call For Papers for Java Embedded @ JavaOne are now live. We expect a lot of interest in this new event and space is limited, so be sure to submit your paper and register soon. Hope to see you there!

    Cheers,

    – Terrence

    Read the article
