Search Results

Search found 3533 results on 142 pages for 'jms topic'.


  • How to run a C program in Ubuntu 10.10

    - by Vinay Khandalkar
    Hello. I want to run C programs in Ubuntu 10.10. At my request, my college lab replaced Windows XP with Ubuntu. All the students in the lab practice C programming daily, and now they are having problems running their C programs in Ubuntu 10.10, so please help me. Can anyone give me a solution on this topic? Thank you!

    Read the article

  • How do I add animation to an iPhone app?

    - by james p
    I come from a Flash background, where I could animate on the timeline. I've completed the Beginning iPhone Development book and just realized that I still don't know how to get animation in. I'm guessing I need to import PNG sequences? Can anyone point me to an appropriate place to learn more about this topic? I want to make a game, and my game objects need to animate. Thanks in advance!

    Read the article

  • Using an icon for links that open a lightbox

    - by Logistetica
    Hi. On one of our sites we decided to use lightboxes in two instances. I am aware of the pros and cons of using lightboxes, so that is not the topic for discussion here. But I'd like to get your opinion on using small icons to denote that clicking a link will open a lightbox, something like the example below. What do you think?

    Read the article

  • Java threads for the beginner

    - by Boba
    I've been trying to explain Java threading to a colleague who has never been exposed to multi-threaded applications, but apparently I'm not a very good teacher. Can anyone recommend a good online or offline resource that can explain threading in a simple, step-by-step manner? I know it's a complex topic, but surely there exists an article, book, or other explanation that can result in an "Aha! I get it, finally!" moment.
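    For readers who land on this question, the kind of minimal, step-by-step example such resources usually begin with looks like the sketch below (illustrative only; the class and thread names are made up for the example):

        // Two threads run the same task concurrently; main waits for both with join().
        public class HelloThreads {

            public static void main(String[] args) throws InterruptedException {
                Runnable task = new Runnable() {
                    @Override
                    public void run() {
                        for (int i = 0; i < 3; i++) {
                            System.out.println(Thread.currentThread().getName() + " says hello " + i);
                        }
                    }
                };

                Thread first = new Thread(task, "worker-1");
                Thread second = new Thread(task, "worker-2");

                first.start();   // start() runs the task on a new thread; calling run() directly would not
                second.start();

                first.join();    // block until each worker finishes
                second.join();
                System.out.println("main is done");
            }
        }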

    Read the article

  • JavaOne Latin America 2012 is a wrap!

    - by arungupta
    Third JavaOne in Latin America (2010, 2011) is now a wrap! Like last year, the event started with a Geek Bike Ride. I could not attend the bike ride because of pre-planned activities but heard lots of good comments about it afterwards. This is a great way to engage with JavaOne attendees in an informal setting. I highly recommend joining next time! The JavaOne Blog provides great coverage of the opening keynotes. I talked about the great set of functionality that is coming in the Java EE 7 Platform, and also shared details on how the Java EE 7 JSRs are willing to take help from the Adopt-a-JSR program. glassfish.org/adoptajsr bridges the gap for JUGs willing to participate and looking for areas where they can help. The different specification leads have identified the areas where they are looking for feedback. So if your JUG is interested in picking a JSR, I recommend taking a look at glassfish.org/adoptajsr and jumping on the bandwagon. The main attraction on Tuesday evening was the GlassFish Party. The party was packed with Latin American JUG leaders, execs from Oracle, and local community members. Free-flowing food and beer/caipirinhas acted as a great lubricant for great conversations. Some attendees were considering migrating from Spring to Java EE 6 and replacing their primary app server with GlassFish. Locaweb, a local hosting provider, sponsored a round of beer at the party as well. They are planning to offer Java EE hosting next year, and GlassFish would be a logical choice for them ;) I heard lots of positive feedback about the party afterwards. Many thanks to Bruno Borges for organizing a great party! Check out some more fun pictures of the party! The next day I gave a presentation on "The Java EE 7 Platform: Productivity and HTML 5", and the slides are now available. With so much new content coming in the platform - the Java Caching API (JSR 107), Concurrency Utilities for Java EE (JSR 236), Batch Applications for the Java Platform (JSR 352), Java API for JSON (JSR 353), and Java API for WebSocket (JSR 356) - and with JAX-RS 2.0 (JSR 339) and JMS 2.0 (JSR 343) getting major updates, there was definitely a lot of excitement evident amongst the attendees. The talk was delivered in the biggest hall and had about 200 attendees. I also spent a lot of time talking to folks at the OTN Lounge. The JUG leaders' appreciation dinner in the evening had its usual share of fun. Day 3 started with a session on "Building HTML5 WebSocket Apps in Java", and the slides are now available. The room was packed with about 150 attendees and there was good interaction as well. A collaborative whiteboard built using WebSocket was very well received. The following tweets made it more worthwhile: "A WebSocket speek, by @ArunGupta, was worth every hour lost in transit. #JavaOneBrasil2012, #JavaOneBr" and "@arungupta awesome presentation about WebSockets :)". The session was immediately followed by the hands-on lab "Developing JAX-RS Web Applications Utilizing Server-Sent Events and WebSocket". The lab covers JAX-RS 2.0, Jersey-specific features such as Server-Sent Events, and a WebSocket endpoint using JSR 356. The complete self-paced lab guide can be downloaded from here. The lab was planned for 2 hours but several folks finished the entire exercise in about 75 mins. The wonderfully written lab material and the added incentive of a Java EE 6 Pocket Guide did the trick ;-) I also spoke at "The Java Community Process: How You Can Make a Positive Difference". 
It was really great to see several JUG leaders talking about the Adopt-a-JSR program and other activities through which attendees can participate in the JCP. I shared details about Adopt a Java EE 7 JSR as well. The community keynote in the evening looked like fun, but I had to leave partway through to get through peak Sao Paulo traffic :) Enjoy the complete set of pictures in the album.
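    As a rough idea of what a collaborative whiteboard like the one demoed can build on, here is a minimal JSR 356 server endpoint that rebroadcasts every message it receives to all connected clients. The /whiteboard path and the fan-out logic are assumptions for illustration, not the actual demo code:

        import java.io.IOException;
        import java.util.Collections;
        import java.util.HashSet;
        import java.util.Set;

        import javax.websocket.OnClose;
        import javax.websocket.OnMessage;
        import javax.websocket.OnOpen;
        import javax.websocket.Session;
        import javax.websocket.server.ServerEndpoint;

        // Every message a client sends (e.g. a JSON-encoded shape) is rebroadcast to all peers.
        @ServerEndpoint("/whiteboard")
        public class WhiteboardEndpoint {

            private static final Set<Session> peers =
                    Collections.synchronizedSet(new HashSet<Session>());

            @OnOpen
            public void onOpen(Session session) {
                peers.add(session);
            }

            @OnMessage
            public void onMessage(String shapeJson, Session sender) throws IOException {
                synchronized (peers) {
                    for (Session peer : peers) {
                        if (peer.isOpen()) {
                            peer.getBasicRemote().sendText(shapeJson);
                        }
                    }
                }
            }

            @OnClose
            public void onClose(Session session) {
                peers.remove(session);
            }
        }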

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Running Multiple WebLogic and OSB Domains

    - by jeff.x.davies
    I have any number of OSB domains created on my machine at any point in time. For example, I have different domains for different versions of Oracle Service Bus and Oracle SOA Suite. I also have different domains for different purposes: I have a demo domain and another domain for the projects in my blog. Starting with OSB 11g and the Apache Derby server, there is a small "gotcha" if you want to create multiple domains on a development machine. When you create a new domain for OSB 11g it will use the same database info for every domain, and this will cause an error when starting the admin server of the second domain (the first domain doesn't have to be running for this error to occur). Here is an example of the error message in the server console: ####<Mar 8, 2011 2:58:48 PM PST> <Critical> <JTA> <jeff-laptop> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1299625128464> <BEA-110482> <A logging last resource failed during initialization. The server cannot boot unless all configured logging last resources (LLRs) initialize. Failing reason: weblogic.transaction.loggingresource.LoggingResourceException: java.sql.SQLException: JDBC LLR, table verify failed for table 'WL_LLR_ADMINSERVER', row 'JDBC LLR Domain//Server' record had unexpected value 'osb11gR1PS3//AdminServer' expected 'OSBCIM//AdminServer'*** ONLY the original domain and server that creates an LLR table may access it *** The solution is to create a database instance for each of your domains, and this is very simple to do. After you create a domain using the Configuration Wizard, locate the wlsbjmsrpDataSource-jdbc.xml file that is found under the DOMAIN_HOME/config/jdbc directory. Near the top of the file you will see the following entry: <url>jdbc:derby://localhost:1527/osbexamples;create=true;ServerName=localhost;databaseName=osbexamples</url> You need to modify this entry with a different and unique database name. The easiest way to do this is to substitute the name of your domain. For example: <url>jdbc:derby://localhost:1527/mydomain;create=true;ServerName=localhost;databaseName=mydomain</url> will create a database named mydomain. Now, when you restart the admin server for the domain, it will create the new database for you. Do this for each domain you create on your development machine and you'll have no troubles. The process is much simpler if you are creating a domain using the Configuration Wizard. Simply name the database when you get to the Configure JDBC Component Schema step of the Configuration Wizard: select the OSB JMS Reporting Provider and set the name in the DBMS/Service field to whatever name you like, as shown in Figure 1 below. Figure 1 – Configuring the JDBC Component Schema That is all there is to it. Now you can create as many domains on your laptop or development machine as you like and not have to worry about them conflicting with each other.

    Read the article

  • How to use Oracle AQ with Message-Driven Beans in Weblogic

    - by lukasz.romaszewski(at)oracle.com
    Welcome to the IMC blog! This post shows how to use Oracle AQ as an underlying JMS implementation with MDBs in WebLogic. MDBs can be very useful when you want to integrate your database logic with your Java application. Normally a JEE application invokes the code inside the database, but in some cases you want the DB to initiate the asynchronous call and have your Java application do the actual processing. This is also very useful when you want to integrate JEE code with an Oracle Forms application. The post is based on the following OTN documentation: http://download.oracle.com/docs/cd/E14571_01/web.1111/e13738/aq_jms.htm#CJACBCEJ Detailed instructions are here: How to connect MDB to Oracle AQ.pdf You can also download a sample JDeveloper application here: MDB_AQApplication.zip Please feel free to ask questions and post comments. Merry Christmas and Happy New Year!
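    Before diving into the PDF, here is a minimal sketch of what the consuming side can look like: a standard JMS message-driven bean, assuming the AQ queue has already been exposed to WebLogic as a foreign JMS destination as the OTN documentation describes. The JNDI name and class names are hypothetical:

        import javax.ejb.ActivationConfigProperty;
        import javax.ejb.MessageDriven;
        import javax.jms.JMSException;
        import javax.jms.Message;
        import javax.jms.MessageListener;
        import javax.jms.TextMessage;

        // The bean only sees standard JMS; the mapping to the underlying AQ queue
        // is done in the WebLogic JMS configuration, not in code.
        @MessageDriven(mappedName = "jms/aqExampleQueue", activationConfig = {
            @ActivationConfigProperty(propertyName = "destinationType",
                                      propertyValue = "javax.jms.Queue")
        })
        public class AqListenerBean implements MessageListener {

            @Override
            public void onMessage(Message message) {
                try {
                    if (message instanceof TextMessage) {
                        // Payload enqueued by PL/SQL (or any other AQ producer).
                        String body = ((TextMessage) message).getText();
                        System.out.println("Received from AQ: " + body);
                    }
                } catch (JMSException e) {
                    throw new RuntimeException("Failed to read AQ message", e);
                }
            }
        }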

    Read the article

  • ArchBeat Link-o-Rama Top 10 - September 16-22, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook Page for the week of September 16-22, 2012. The Real Architects of LA: OTN Architect Day in Los Angeles - Oct 25 No gossip. No drama. No hair pulling. Just a full day of technical sessions and peer interaction focused on using Oracle technologies in today's cloud and SOA architectures. The event is free, but seating is limited, so register now. Thursday October 25, 2012. 8:00 a.m. – 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. OIM-OAM-OAAM integration using TAP – Request Flow you must understand!! | Atul Kumar Atul Kumar's post addresses "key points and request flow that you must understand" when integrating three Oracle Identity Management products: Oracle Identity Management, Oracle Access Management, and Oracle Adaptive Access Manager. Cloud, automation drive new growth in SOA governance market | ZDNet "SOA governance tools and processes learned over the past decade are now underpinning cloud projects as they scale across enterprises," reports Joe McKendrick. But there remains a lack of understanding about SOA Governance. DevOps Basics: Track Down High CPU Thread with ps, top and the new JDK7 jcmd Tool | Frank Munz "The approach is very generic and works for WebLogic, Glassfish or any other Java application," says Frank Munz. "UNIX commands in the example are run on CentOS, so they will work without changes for Oracle Enterprise Linux or RedHat. Creating the thread dump at the end of the video is done with the jcmd tool from JDK7." Frank has captured the process in the posted video. Oracle OpenWorld 2012 Hands-on Lab: "Leading Your Everyday Application Integration Projects with Enterprise SOA" Yet another session to squeeze into your already-jammed Oracle OpenWorld schedule. This hands-on lab focuses on how "Oracle Enterprise Repository, Oracle Application Integration Architecture (AIA) Foundation Pack, and Oracle SOA Suite work together to help you drive your enterprisewide integration projects." Loving VirtualBox 4.2… | The ORACLE-BASE Blog Is it wrong for a man to love a technology? Oracle ACE Director Tim Hall has several very good reasons for his feelings… ADF Create and CreateInsert Operations for ADF Table | Andrejus Baranovskis Oracle ACE Director Andrejus Baranovskis answers the question, "What operation is best to use to insert a new row into an ADF table, Create or CreateInsert?" Fault Handling Slides and Q&A | Ronald van Luttikhuizen Oracle ACE Director Ronald van Luttikhuizen shares the slides and a Q&A transcript from a presentation he and fellow ACE Director Guido Schmutz gave at the recent Oracle OpenWorld and JavaOne preview event organized by AMIS Technology. Why IT is a profession in 'flux' | ZDNet I usually don't post two items from the same person in one day, but this post from ZDNet blogger Joe McKendrick deals with some critical issues affecting those in IT. As McKendrick puts it: "IT professionals are under considerable pressure to deliver more value to the business, versus being good at coding and testing and deploying and integrating." Running RichFaces on WebLogic 12c | Markus Eisele "With all the JMS magic and the different provider checks in the showcase this has become some kind of a challenge to simply build and deploy it," says Oracle ACE Director Markus Eisele. His detailed post will help you to meet that challenge. Thought for the Day "Less is more." — Ludwig Mies van der Rohe (March 27, 1886 – August 17, 1969) Source: BrainyQuote.com

    Read the article

  • Migrating a WebLogic Domain from a 32 to a 64 bit JVM/Architecture

    - by adejuanc
    A 32-bit OS presents a physical constraint when trying to allocate memory: by design, a 32-bit OS will not be able to allocate more than 3 GB per process in a Linux environment, or 2 GB in a Windows environment (not considering a Physical Address Extension (PAE) implementation, which can increase the maximum allocatable memory). When this limit is reached, a migration to a 64-bit architecture is recommended. Below are the steps to install a 64-bit WebLogic Server along with a 64-bit JVM, and then to migrate the domain to this new environment. WebLogic Server version: 10.3.4.0 OS: Linux adejuan-cl.cl.oracle.com 2.6.18-194.el5 #1 SMP Mon Jan 01 22:10:29 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux JVM: java version "1.6.0_22" Java(TM) SE Runtime Environment (build 1.6.0_22-b04) Oracle JRockit(R) (build R28.1.1-13-139783-1.6.0_22-20101206-0136-linux-x86_64, compiled mode) To create the migration template: 1. Execute $<WLS_HOME>/wlserver_10.3/common/bin/config_builder.sh 2. Create Domain Template. 3. Select the domain to migrate. 4. Enter the name of the template and other info. Click Next. 5. Using the Add button, select the libraries you want to add under the lib folder. Copy the JDBC data sources you would like to have under the config/jdbc folder. If you want to have a log4j configuration, copy log4j.xml under the domain_root folder. 6. Select the database for the domain and click Next. 7. Enter the Admin Server name and port numbers. Click Next. 8. Enter the username and password. Click Next. Select No if you don't want to add users/groups/roles. 9. Click Next until you reach the Create button. This will create a jar file that is needed for the next action plan. To install and migrate the domain to the 64-bit architecture: 1. Install the JRockit R28.1.1.13 JVM on the environment (uncompress the zip folder in a known location). 2. Go to JRockit_Home/bin and execute $ java -d64 -jar wls1034_generic.jar 3. When prompted, select the JRockit JVM in the browse menu, and wait for the installation to finish. 4. Go to <WLS_HOME>/wlserver_10.3/common/bin and execute $ ./config.sh 5. Select Create a new WebLogic domain. 6. Select Base this domain on an existing template and select Browse. 7. Select the jar file created in the previous action plan. 8. Select the name of the domain. Maintain in most cases. 9. Select the user name and password. Maintain in most cases. 10. Select JRockit SDK 1.6.0_22 and click Next. 11. Confirm all JMS/JDBC/security configuration. 12. On Select Optional Configuration, reconfigure if necessary. 13. Check the Configuration Summary for all domain configuration. 14. Click Create to finish the domain import. Note: when installing a 64-bit WLS, it is always necessary to install the 64-bit JVM first and then run the generic installer with the -d64 option. If this is not done, the installation will be the default 32-bit version. For more information, please refer to: http://download.oracle.com/docs/cd/E13188_01/jrockit/docs50/dev_faq.html#pae

    Read the article

  • Java EE 7 Roadmap

    - by Linda DeMichiel
    The Java EE 6 Platform, released in December 2009, has seen great uptake from the community with its POJO-based programming model, lightweight Web Profile, and extension points. There are now 13 Java EE 6 compliant appserver implementations today! When we announced the Java EE 7 JSR back in early 2011, our plans were that we would release it by Q4 2012. This target date was slightly over three years after the release of Java EE 6, but at the same time it meant that we had less than two years to complete a fairly comprehensive agenda — to continue to invest in significant enhancements in simplification, usability, and functionality in updated versions of the JSRs that are currently part of the platform; to introduce new JSRs that reflect emerging needs in the community; and to add support for use in cloud environments. We have since announced a minor adjustment in our dates (to the spring of 2013) in order to accommodate the inclusion of JSRs of importance to the community, such as Web Sockets and JSON-P. At this point, however, we have to make a choice. Despite our best intentions, our progress has been slow on the cloud side of our agenda. Partially this has been due to a lack of maturity in the space for provisioning, multi-tenancy, elasticity, and the deployment of applications in the cloud. And partially it is due to our conservative approach in trying to get things "right" in view of limited industry experience in the cloud area when we started this work. Because of this, we believe that providing solid support for standardized PaaS-based programming and multi-tenancy would delay the release of Java EE 7 until the spring of 2014 — that is, two years from now and over a year behind schedule. In our opinion, that is way too long. We have therefore proposed to the Java EE 7 Expert Group that we adjust our course of action — namely, stick to our current target release dates, and defer the remaining aspects of our agenda for PaaS enablement and multi-tenancy support to Java EE 8. Of course, we continue to believe that Java EE is well-suited for use in the cloud, although such use might not be quite ready for full standardization. Even today, without Java EE 7, Java EE vendors such as Oracle, Red Hat, IBM, and CloudBees have begun to offer the ability to run Java EE applications in the cloud. Deferring the remaining cloud-oriented aspects of our agenda has several important advantages: It allows Java EE Platform vendors to gain more experience with their implementations in this area and thus helps us avoid risks entailed by trying to standardize prematurely in an emerging area. It means that the community won't need to wait longer for those features that are ready at the cost of those features that need more time. Because we have already laid some of the infrastructure for cloud support in Java EE 7, including resource definition metadata, improved security configuration, JPA schema generation, etc., it will allow us to expedite a Java EE 8 release. We therefore plan to target the Java EE 8 Platform release for the spring of 2015. This shift in the scope of Java EE 7 allows us to better retain our focus on enhancements in simplification and usability and to deliver on schedule those features that have been most requested by developers. 
These include the support for HTML 5 in the form of Web Sockets and JSON-P; the simplified JMS 2.0 APIs; improved Managed Bean alignment, including transactional interceptors; the JAX-RS 2.0 client API; support for method-level validation; a much more comprehensive expression language; and more. We feel strongly that this is the right thing to do, and we hope that you will support us in this proposed direction.
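    To make the "simplified JMS 2.0 APIs" item concrete, here is a hedged sketch using the JMSContext-based API: a single auto-closeable context replaces the Connection/Session/MessageProducer chain of JMS 1.1. The resource names are placeholders, not part of the proposal above:

        import javax.annotation.Resource;
        import javax.ejb.Stateless;
        import javax.jms.ConnectionFactory;
        import javax.jms.JMSContext;
        import javax.jms.Topic;

        @Stateless
        public class PriceUpdatePublisher {

            @Resource(lookup = "jms/ConnectionFactory")
            private ConnectionFactory connectionFactory;

            @Resource(lookup = "jms/priceTopic")
            private Topic priceTopic;

            public void publish(String priceUpdate) {
                // JMSContext is AutoCloseable in JMS 2.0, so try-with-resources cleans up.
                try (JMSContext context = connectionFactory.createContext()) {
                    context.createProducer().send(priceTopic, priceUpdate);
                }
            }
        }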

    Read the article

  • What's new in Servlet 3.1? - Java EE 7 moving forward

    - by arungupta
    Servlet 3.0 was released as part of Java EE 6 and made huge changes focused on ease of use. The idea was to leverage the latest language features, such as annotations and generics, and modernize how Servlets are written. The web.xml was made as optional as possible. Servlet 3.1 (JSR 340), scheduled to be part of Java EE 7, is an incremental release focusing on a couple of key features and some clarifications in the specification. The main features of Servlet 3.1 are explained below: Non-blocking I/O - Servlet 3.0 allowed asynchronous request processing, but only traditional I/O was permitted, which can restrict the scalability of your applications. Non-blocking I/O allows you to build scalable applications. TOTD #188 provides more details about how non-blocking I/O can be done using Servlet 3.1. HTTP protocol upgrade mechanism - Section 14.42 in the HTTP 1.1 specification (RFC 2616) defines an upgrade mechanism that allows a transition from HTTP 1.1 to some other, incompatible protocol. The capabilities and nature of the application-layer communication after the protocol change are entirely dependent upon the new protocol chosen. After an upgrade is negotiated between the client and the server, the subsequent requests use the newly chosen protocol for message exchanges. A typical example is how the WebSocket protocol is upgraded from HTTP, as described in the Opening Handshake section of RFC 6455. The decision to upgrade is made in the Servlet.service method. This is achieved by adding a new method, HttpServletRequest.upgrade, and two new interfaces, javax.servlet.http.HttpUpgradeHandler and javax.servlet.http.WebConnection. TyrusHttpUpgradeHandler shows how the WebSocket protocol upgrade is done in Tyrus (the Reference Implementation for the Java API for WebSocket); a generic sketch follows the reference list below. Security enhancements: Applying run-as security roles to #init and #destroy methods. Protection against session fixation attacks by adding HttpServletRequest.changeSessionId and a new interface, HttpSessionIdListener; you can listen for any session id changes using these. Default security semantics for non-specified HTTP methods in <security-constraint>. Clarified semantics if a parameter is specified in both the URI and the payload. Miscellaneous: ServletResponse.reset clears any data that exists in the buffer as well as the status code and headers; in addition, Servlet 3.1 also clears the state of calling getServletOutputStream or getWriter. ServletResponse.setCharacterEncoding sets the character encoding (MIME charset) of the response being sent to the client, for example to UTF-8. A relative protocol URL can be specified in HttpServletResponse.sendRedirect; this allows a URL to be specified without a scheme. That means instead of specifying "http://anotherhost.com/foo/bar.jsp" as a redirect address, "//anotherhost.com/foo/bar.jsp" can be specified, in which case the scheme of the corresponding request will be used. Clarification of HttpServletRequest.getPart and .getParts without multipart configuration. Clarification that ServletContainerInitializer is independent of metadata-complete and is instantiated per web application. A complete replay of What's New in Servlet 3.1: An Overview from JavaOne 2012 can be seen here (click on CON6793_mp4_6793_001 in Media). Each feature will be added to the JSR subject to EG approval. You can share your feedback to [email protected]. 
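    To make the non-blocking I/O item above concrete, here is a hedged sketch of a Servlet 3.1 ReadListener: the container calls back when data is available instead of blocking a request thread on read(). The URL pattern and the processing are illustrative only:

        import java.io.IOException;

        import javax.servlet.AsyncContext;
        import javax.servlet.ReadListener;
        import javax.servlet.ServletException;
        import javax.servlet.ServletInputStream;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        @WebServlet(urlPatterns = "/upload", asyncSupported = true)
        public class NonBlockingUploadServlet extends HttpServlet {

            @Override
            protected void doPost(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                final AsyncContext asyncContext = request.startAsync();
                final ServletInputStream input = request.getInputStream();

                input.setReadListener(new ReadListener() {
                    @Override
                    public void onDataAvailable() throws IOException {
                        byte[] buffer = new byte[4096];
                        // Read only while the container says data is ready; returning
                        // here frees the thread until the next callback.
                        while (input.isReady() && input.read(buffer) != -1) {
                            // process the chunk...
                        }
                    }

                    @Override
                    public void onAllDataRead() throws IOException {
                        asyncContext.getResponse().getWriter().println("done");
                        asyncContext.complete();
                    }

                    @Override
                    public void onError(Throwable t) {
                        asyncContext.complete();
                    }
                });
            }
        }
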
Here are some more references for you: Servlet 3.1 Public Review Candidate Downloads Servlet 3.1 PR Candidate Spec Servlet 3.1 PR Candidate Javadocs Servlet Specification Project JSR Expert Group Discussion Archive Java EE 7 Specification Status Several features have already been integrated in GlassFish 4 Promoted Builds. Have you tried any of them? Here are some other Java EE 7 primers published so far: Concurrency Utilities for Java EE (JSR 236) Collaborative Whiteboard using WebSocket in GlassFish 4 (TOTD #189) Non-blocking I/O using Servlet 3.1 (TOTD #188) What's New in EJB 3.2? JPA 2.1 Schema Generation (TOTD #187) WebSocket Applications using Java (JSR 356) Jersey 2 in GlassFish 4 (TOTD #182) WebSocket and Java EE 7 (TOTD #181) Java API for JSON Processing (JSR 353) JMS 2.0 Early Draft (JSR 343) And of course, more on their way! Do you want to see any particular one first?
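    And here is a corresponding sketch of the HTTP protocol upgrade mechanism, using a made-up "echo" protocol purely for illustration; the signatures follow the released Servlet 3.1 API and may differ slightly from the public review draft discussed above:

        import java.io.IOException;

        import javax.servlet.ServletException;
        import javax.servlet.ServletInputStream;
        import javax.servlet.ServletOutputStream;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import javax.servlet.http.HttpUpgradeHandler;
        import javax.servlet.http.WebConnection;

        @WebServlet("/echo-upgrade")
        public class UpgradeServlet extends HttpServlet {

            @Override
            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                if ("echo".equals(request.getHeader("Upgrade"))) {
                    // Agree to switch protocols (101 Switching Protocols) ...
                    response.setStatus(101);
                    response.setHeader("Upgrade", "echo");
                    response.setHeader("Connection", "Upgrade");
                    // ... and hand the raw connection over to the handler.
                    request.upgrade(EchoHandler.class);
                } else {
                    response.sendError(HttpServletResponse.SC_BAD_REQUEST);
                }
            }

            public static class EchoHandler implements HttpUpgradeHandler {

                @Override
                public void init(WebConnection connection) {
                    try {
                        ServletInputStream in = connection.getInputStream();
                        ServletOutputStream out = connection.getOutputStream();
                        int b;
                        // Trivial blocking echo loop over the upgraded connection.
                        while ((b = in.read()) != -1) {
                            out.write(b);
                            out.flush();
                        }
                    } catch (IOException e) {
                        throw new RuntimeException(e);
                    }
                }

                @Override
                public void destroy() {
                    // Release any per-connection resources here.
                }
            }
        }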

    Read the article

  • JavaOne 2012 interview: Make the Future Java

    - by ???02
    (Condensed English summary of a Japanese-language interview.) JavaOne 2012 was held over four days, from September 30 to October 4, in San Francisco, under the theme "Make the Future Java". In this interview, an Oracle executive responsible for Java on the Fusion Middleware side looks back at the conference. Topics include: the roadmaps for the three platforms, Java SE, Java EE, and Java ME; Java SE 8 and the new Nashorn JavaScript engine, which replaces Rhino and builds on the InvokeDynamic support introduced in Java SE 7, with Project Jigsaw modularity deferred to Java SE 9; JavaFX as the successor to AWT and Swing for Java UIs, to be bundled with Java SE 8, along with JavaFX for ARM and JavaFX Scene Builder for Linux; Java EE 7's focus on HTML5, the simplified JMS (Java Message Service) 2.0 API, and CDI (Contexts and Dependency Injection) improvements, with the remaining cloud-oriented items deferred to Java EE 8 and the Java EE 7 release targeted for 2013; NetBeans 7.3's Project Easel for HTML5, CSS3, and JavaScript development; Project Sumatra, which aims to offload Java workloads to the GPU through the HotSpot JVM; Project Avatar, which bridges JavaScript and Java EE on the JVM alongside Node.js-style development; and Liquid Robotics' Java-powered, wave-propelled ocean robots.

    Read the article

  • Unknown Entity namespace alias in symfony2

    - by Zoha Ali Khan
    Hey, I have two bundles in my Symfony2 project: one is Bundle and the other one is PatentBundle. My app/config/route.yml file is: MunichInnovationGroupPatentBundle: resource: "@MunichInnovationGroupPatentBundle/Controller/" type: annotation prefix: / defaults: { _controller: "MunichInnovationGroupPatentBundle:Default:index" } MunichInnovationGroupBundle: resource: "@MunichInnovationGroupBundle/Controller/" type: annotation prefix: /v1 defaults: { _controller: "MunichInnovationGroupBundle:Patent:index" } login_check: pattern: /login_check logout: pattern: /logout Inside my controller I have: <?php namespace MunichInnovationGroup\PatentBundle\Controller; use Symfony\Component\HttpFoundation\Response; use Symfony\Component\HttpFoundation\Request; use JMS\SecurityExtraPatentBundle\Annotation\Secure; use Symfony\Component\Security\Core\Exception\AccessDeniedException; use Symfony\Bundle\FrameworkBundle\Controller\Controller; use Sensio\Bundle\FrameworkExtraBundle\Configuration\Method; use Sensio\Bundle\FrameworkExtraBundle\Configuration\Route; use Sensio\Bundle\FrameworkExtraBundle\Configuration\Template; use Symfony\Component\Security\Core\SecurityContext; use MunichInnovationGroup\PatentBundle\Entity\Log; use MunichInnovationGroup\PatentBundle\Entity\UserPatent; use MunichInnovationGroup\PatentBundle\Entity\PmPortfolios; use MunichInnovationGroup\PatentBundle\Entity\UmUsers; use MunichInnovationGroup\PatentBundle\Entity\PmPatentgroups; use MunichInnovationGroup\PatentBundle\Form\PortfolioType; use MunichInnovationGroup\PatentBundle\Util\SecurityHelper; use Exception; /** * Portfolio controller. * @Route("/portfolio") */ class PortfolioController extends Controller { /** * Index action. * * @Route("/", name="v2_pm_portfolio") * @Template("MunichInnovationGroupPatentBundle:Portfolio:index.html.twig") */ public function indexAction(Request $request) { $portfolios = $this->getDoctrine() ->getRepository('MunichInnovationGroupPatentBundle:PmPortfolios') ->findBy(array('user' => '$user_id')); // rest of the method } When I try to load localhost/web/app_dev.php/portfolio, it says: Unknown Entity namespace alias 'MunichInnovationGroupPatentBundle'. I am unable to figure out this error; please help me if anyone has any idea. I have googled it a lot :( Thanks in advance. 500 Internal Server Error - ORMException

    Read the article

  • Grails Mail port configuration

    - by bsreekanth
    Hello, I am trying to send mail through the Grails mail plugin. I configured it according to the documentation and also followed a few blog posts (http://blog.lourish.com/2010/04/02/sending-asynchronous-html-email-in-grails-with-activemq-jms-and-gmail/). That post mentions that the closure way of declaring the configuration overrides the others, but that is not true. Anyway, I tried both approaches, but it seems the port still uses the SMTP default. I get the exception below: org.springframework.mail.MailSendException: Mail server connection failed; nested exception is javax.mail.MessagingException: Could not connect to SMTP host: localhost, port: 25; nested exception is: java.net.ConnectException: Connection refused: connect Now, I wrote a small program directly using the JavaMail library, and I could send the mail with that. The configuration is shown below. I also tried the additional config "mail.smtp.port":"465", but no change, and I used the parameters mentioned in the above blog post with the same result. grails { mail { host = "smtp.gmail.com" port = "465" username = "[email protected]" password = "mypwd" props = ["mail.smtp.auth":"true", // "mail.smtp.port":"465", "mail.smtp.socketFactory.port":"465", "mail.smtp.socketFactory.class":"javax.net.ssl.SSLSocketFactory", "mail.smtp.socketFactory.fallback":"false"] } } Thanks in advance. Update: it is not a port or firewall configuration issue, because when I made a Grails application from scratch and tried the same config, everything works. I also asked in the Grails forum: http://grails.1312388.n4.nabble.com/grails-mail-mailSender-does-not-have-config-values-td2237704.html#a2237704 . Hope to get a lead to try.
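    Since the poster could already send mail with the plain JavaMail API, a sanity check like the following sketch can confirm that the Gmail host/port/socket-factory values themselves are right (the addresses and password are placeholders). If this works but the plugin still connects to localhost:25, the likely culprit is that the Grails mail configuration is simply not being picked up by the underlying mail sender.

        import java.util.Properties;
        import javax.mail.*;
        import javax.mail.internet.*;

        public class GmailSmtpCheck {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                props.put("mail.smtp.host", "smtp.gmail.com");
                props.put("mail.smtp.port", "465");  // must reach the mail session, not just the plugin config
                props.put("mail.smtp.auth", "true");
                props.put("mail.smtp.socketFactory.port", "465");
                props.put("mail.smtp.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
                props.put("mail.smtp.socketFactory.fallback", "false");

                Session session = Session.getInstance(props, new Authenticator() {
                    protected PasswordAuthentication getPasswordAuthentication() {
                        return new PasswordAuthentication("[email protected]", "mypwd"); // placeholders
                    }
                });

                Message msg = new MimeMessage(session);
                msg.setFrom(new InternetAddress("[email protected]"));
                msg.setRecipient(Message.RecipientType.TO, new InternetAddress("[email protected]"));
                msg.setSubject("JavaMail SSL test");
                msg.setText("If this arrives, the SMTP/SSL settings themselves are fine.");
                Transport.send(msg);
            }
        }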

    Read the article

  • Connecting to a Websphere MQ in Java with SSL/Keystore

    - by javaExpert
    I'd like to connect to a WebSphere MQ 6.0 queue manager via Java. I already have working code for a "normal" queue, but now I need to access a new queue which is SSL encrypted (keystore). I have been sent a file called something.jks, which I assume is a certificate store I need to put somewhere. I have been searching the net, but I can't find the right information. This is the code I use for the "normal" queue; I assume I need to set some property, but I am not sure which one. MQQueueConnectionFactory connectionFactory = new MQQueueConnectionFactory(); connectionFactory.setChannel(channel_); connectionFactory.setHostName(hostname_); connectionFactory.setPort(port_); connectionFactory.setQueueManager(queueManager_); connectionFactory.setTransportType(1); connectionFactory.setSSLCertStores(arg0); Connection connection = connectionFactory.createConnection(); connection.setExceptionListener(this); session_ = connection.createSession(DEFAULT_TRANSACTED, DEFAULT_ACKMODE); connection.start(); javax.jms.Queue fQueue = session_.createQueue(queue_); consumer = session_.createConsumer(fQueue);
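    One common approach is to treat the .jks file as a JSSE truststore and tell the MQ JMS client which cipher suite the channel uses. The sketch below assumes the JKS you were sent contains the queue manager's certificate, and that the SSL-enabled channel name and the cipher suite are placeholders you must get from your MQ administrator (the cipher suite has to match the SSLCIPH attribute on the server connection channel).

        import javax.jms.Connection;
        import com.ibm.mq.jms.MQQueueConnectionFactory;

        // Point the JVM's JSSE layer at the truststore you were given (password is a placeholder).
        System.setProperty("javax.net.ssl.trustStore", "/path/to/something.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
        // Only needed if the channel also requires client authentication:
        // System.setProperty("javax.net.ssl.keyStore", "/path/to/client.jks");
        // System.setProperty("javax.net.ssl.keyStorePassword", "changeit");

        MQQueueConnectionFactory connectionFactory = new MQQueueConnectionFactory();
        connectionFactory.setChannel(sslChannel_);      // an SSL-enabled SVRCONN channel (placeholder)
        connectionFactory.setHostName(hostname_);
        connectionFactory.setPort(port_);
        connectionFactory.setQueueManager(queueManager_);
        connectionFactory.setTransportType(1);          // client transport, as in the working code
        connectionFactory.setSSLCipherSuite("SSL_RSA_WITH_RC4_128_SHA"); // must match the channel's SSLCIPH

        Connection connection = connectionFactory.createConnection();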

    Read the article

  • java.io.FileNotFoundException: /target/test.log

    - by sword101
    Greetings all. I am using Apache Camel and Apache CXF in this example: http://camel.apache.org/better-jms-transport-for-cxf-webservice-using-apache-camel.data/cxfcamelexample.zip I followed the readme, and when I tried to run the client and server classes I got this exception:
    log4j:ERROR setFile(null,true) call failed.
    java.io.FileNotFoundException: /target/test.log (No such file or directory)
        at java.io.FileOutputStream.openAppend(Native Method)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:177)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:102)
        at org.apache.log4j.FileAppender.setFile(FileAppender.java:289)
        at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:163)
        at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:256)
        at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:132)
        at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:96)
        at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:654)
        at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:612)
        at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:509)
        at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:415)
        at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:441)
        at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:470)
        at org.apache.log4j.LogManager.<clinit>(LogManager.java:122)
        at org.apache.log4j.Logger.getLogger(Logger.java:104)
        at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:283)
        at org.apache.commons.logging.impl.Log4JLogger.<init>(Log4JLogger.java:108)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.commons.logging.impl.LogFactoryImpl.createLogFromClass(LogFactoryImpl.java:1040)
        at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:838)
        at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:601)
        at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:333)
        at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:307)
        at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:645)
        at org.springframework.context.support.AbstractApplicationContext.<init>(AbstractApplicationContext.java:146)
        at org.springframework.context.support.AbstractRefreshableApplicationContext.<init>(AbstractRefreshableApplicationContext.java:84)
        at org.springframework.context.support.AbstractRefreshableConfigApplicationContext.<init>(AbstractRefreshableConfigApplicationContext.java:59)
        at org.springframework.context.support.AbstractXmlApplicationContext.<init>(AbstractXmlApplicationContext.java:58)
        at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:136)
        at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:93)
        at com.example.customerservice.impl.CustomerServiceClient.main(CustomerServiceClient.java:34)
    So, any ideas how to solve this exception?
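    Judging from the trace, the example's log4j configuration points its file appender at /target/test.log, an absolute path from the filesystem root that only exists when the classes are launched from the expected working directory. Editing log4j.properties to use a relative path such as target/test.log is the simplest fix; as a hedged alternative, the sketch below (all names illustrative, assuming log4j 1.x on the classpath) shows the equivalent programmatic configuration and makes sure the directory exists first.

        import java.io.File;
        import org.apache.log4j.FileAppender;
        import org.apache.log4j.Level;
        import org.apache.log4j.Logger;
        import org.apache.log4j.PatternLayout;

        public class Log4jFallbackConfig {
            public static void main(String[] args) throws Exception {
                // Use a path relative to the working directory instead of "/target/test.log".
                File logFile = new File("target/test.log");
                logFile.getParentFile().mkdirs(); // make sure target/ exists before log4j opens the file

                FileAppender appender =
                        new FileAppender(new PatternLayout("%d %-5p %c - %m%n"), logFile.getPath(), true);
                Logger.getRootLogger().addAppender(appender);
                Logger.getRootLogger().setLevel(Level.INFO);

                Logger.getLogger(Log4jFallbackConfig.class).info("log4j is writing to " + logFile.getAbsolutePath());
            }
        }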

    Read the article

  • Character encoding issues when generating MD5 hash cross-platform

    - by rogueprocess
    This is a general question about character encoding when using MD5 libraries in various languages. My concern is: suppose I generate an MD5 hash using a native Python string object, like this: message = "hello world" m = md5() m.update(message) Then I take a hex version of that MD5 hash using: m.hexdigest() and send the message and MD5 hash over a network, let's say as a JMS message or an HTTP request. Now I receive this message in a Java program in the form of a native Java string, along with the checksum. Then I generate an MD5 hash using Java, like this (using the Commons Codec library): String md5 = org.apache.commons.codec.digest.DigestUtils.md5Hex(s) My feeling is that this is wrong because I have not specified the character encoding at either end. So the original hash will be based on the bytes of the Python version of the string, the Java one will be based on the bytes of the Java version of the string, and these two byte sequences will often not be the same - is that right? So really I need to specify "UTF-8" or whatever at both ends, right? (I am actually getting an intermittent error in my code where the MD5 checksum fails, and I suspect this is the reason - but because it's intermittent, it's difficult to say if changing this fixes it or not.) Thank you!
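    Yes - if both sides hash an explicit, agreed-upon byte encoding of the string, the digests will match regardless of each platform's default charset. A minimal Java-side sketch is below (it assumes Commons Codec on the classpath and Java 7+ for StandardCharsets); the Python side would correspondingly call m.update(message.encode('utf-8')).

        import java.nio.charset.StandardCharsets;
        import org.apache.commons.codec.digest.DigestUtils;

        public class Md5Check {
            public static void main(String[] args) {
                String message = "hello world";
                // Hash the UTF-8 bytes explicitly instead of relying on the platform default charset.
                String md5 = DigestUtils.md5Hex(message.getBytes(StandardCharsets.UTF_8));
                System.out.println(md5); // 5eb63bbbe01eeed093cb22bb8f5acdc3 for "hello world"
            }
        }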

    Read the article

  • Dealing with asynchronous control structures (Fluent Interface?)

    - by Christophe Herreman
    The initialization code of our Flex application performs a series of asynchronous calls to check user credentials, load external data, connect to a JMS topic, etc. Depending on the context the application runs in, some of these calls are not executed or are executed with different parameters. Since all of these calls happen asynchronously, the code controlling them is hard to read, understand, maintain and test. For each call, we need to have some callback mechanism in which we decide what call to execute next. I was wondering if anyone had experimented with wrapping these calls in executable units and having a Fluent Interface (FI) that would connect and control them. Off the top of my head, the code might look something like: var asyncChain:AsyncChain = execute(LoadSystemSettings) .execute(LoadAppContext) .if(IsAutologin) .execute(AutoLogin) .else() .execute(ShowLoginScreen) .etc; asyncChain.execute(); The AsyncChain would be an execution tree, built with the FI (and we could of course also build one without a FI). This might be an interesting idea for environments that run in a single-threaded model like the Flash Player, Silverlight, JavaFX, ... Before I dive into the code to try things out, I was hoping to get some feedback.
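    For what it is worth, here is a rough Java analogue of the idea (the names and structure are entirely illustrative, and it uses modern Java syntax rather than the Flex/ActionScript the question is about): each step receives a completion callback, and the fluent builder simply records steps and advances to the next one when the current step reports it is done - which is exactly how a single-threaded runtime would drive it.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.function.BooleanSupplier;

        // A toy fluent chain of asynchronous steps.
        class AsyncChain {
            interface Step { void execute(Runnable done); }

            private final List<Step> steps = new ArrayList<>();

            AsyncChain execute(Step step) {
                steps.add(step);
                return this; // fluent: execute(a).execute(b)...
            }

            AsyncChain when(BooleanSupplier condition, Step ifTrue, Step ifFalse) {
                steps.add(done -> (condition.getAsBoolean() ? ifTrue : ifFalse).execute(done));
                return this;
            }

            void run() { runFrom(0); }

            private void runFrom(int index) {
                if (index >= steps.size()) return;
                steps.get(index).execute(() -> runFrom(index + 1)); // advance only when the step signals completion
            }
        }

        public class AsyncChainDemo {
            public static void main(String[] args) {
                new AsyncChain()
                    .execute(done -> { System.out.println("load system settings"); done.run(); })
                    .execute(done -> { System.out.println("load app context"); done.run(); })
                    .when(() -> true,
                          done -> { System.out.println("auto login"); done.run(); },
                          done -> { System.out.println("show login screen"); done.run(); })
                    .run();
            }
        }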

    Read the article

  • Workflow engine BPMN, Drools, etc or ESB?

    - by Tom
    We currently have an application that is based on an in-house developed workflow engine with a YAML-based DSL. We are looking to move parts of it to Java. I have discovered a number of Java solutions like Intalio, jBPM, Drools Expert, Drools Flow, etc. They appear to be aimed at businesses where a business analyst creates the workflows using a graphical editor and submits them to the workflow engine. They seem geared towards ease of use for non-technical people rather than for developers, with a focus on human interaction. The workflows tend to look like this:
        Discover-a-file        -\
                                 -> join -> process-file -> move-file -> register-file
        Discover-some-metadata -/
    If any step fails we need to retry it X times. We also need to be able to stop the system, restart it, and have it continue from where it was (durable). Some of our workflows can be defined by a set of goals we need to achieve, so Jess's backward rule chaining sounds interesting, but it is not open source. It might be that what we are after is a Finite State Machine engine, or just an Enterprise Service Bus with everything done as JMS queues. Is there a good open source workflow engine that is both standards-based and geared towards developers? We don't particularly want to use a graphical workflow designer or write reams of XML, and it should ideally be in Java or language-agnostic (making REST/SOAP calls to external services). Thanks, Tom
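    As a point of comparison for the "maybe we just need a small state machine" option mentioned above, here is a bare-bones sketch (all names invented) of a developer-oriented step chain with per-step retry; real durability would additionally require persisting the index of the last completed step so a restart can resume from it.

        import java.util.List;

        // A minimal, illustrative step runner: each named step is retried up to maxRetries times.
        public class SimpleWorkflow {

            public interface Step {
                String name();
                void run() throws Exception;
            }

            private final List<Step> steps;
            private final int maxRetries;

            public SimpleWorkflow(List<Step> steps, int maxRetries) {
                this.steps = steps;
                this.maxRetries = maxRetries;
            }

            /** Runs steps in order, starting from startIndex (e.g. reloaded from a checkpoint). */
            public void run(int startIndex) throws Exception {
                for (int i = startIndex; i < steps.size(); i++) {
                    Step step = steps.get(i);
                    int attempt = 0;
                    while (true) {
                        try {
                            step.run();
                            // a checkpoint of i would be persisted here to make the workflow restartable
                            break;
                        } catch (Exception e) {
                            if (++attempt > maxRetries) {
                                throw new Exception("Step failed after retries: " + step.name(), e);
                            }
                        }
                    }
                }
            }
        }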

    Read the article

< Previous Page | 59 60 61 62 63 64 65 66 67 68 69 70  | Next Page >