Search Results

Search found 32453 results on 1299 pages for 'osr wls oracle service re'.


  • Oracle partners with Nokia to use its maps in its enterprise applications, good news for the Finnish company

    Oracle partners with Nokia to use its maps in its enterprise applications, good news for the Finnish company. As mobility keeps gaining ground, mapping services are becoming more and more strategic. Witness Apple's recent abandonment of Google Maps in iOS: Apple could not remain dependent on a competitor in this area for long. But this shift does not only concern the consumer market, far from it. CRM applications, for example, are increasingly being carried over to tablets, and BI tools are taking geospatial questions more and more into account (store locat...

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here are the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web-based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.
The debugger will then stop the web site’s execution when it hits any breakpoints that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g.
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
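The partitioned queues and topics described above can also be created from code rather than through the portal wizard. Below is a minimal sketch using the Microsoft.ServiceBus SDK of the same era; the connection string and queue name are placeholders, and it assumes the NuGet package that exposes QueueDescription.EnablePartitioning (SDK 2.2 or later).

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class PartitionedQueueSample
{
    static void Main()
    {
        // Placeholder connection string taken from the Windows Azure Management Portal.
        string connectionString =
            "Endpoint=sb://yournamespace.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=...";

        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

        // EnablePartitioning spreads the queue across multiple message brokers and messaging stores.
        var description = new QueueDescription("orders") { EnablePartitioning = true };
        if (!namespaceManager.QueueExists(description.Path))
            namespaceManager.CreateQueue(description);

        // Sending and receiving work exactly as with a non-partitioned queue.
        var client = QueueClient.CreateFromConnectionString(connectionString, description.Path);
        client.Send(new BrokeredMessage("hello"));
    }
}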
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • From Bluehost to WP Engine, My WordPress Story

    - by thatjeffsmith
    This is probably the longest blog post I’ve written in a LONG time. And if you’re used to coming here for the Oracle stuff, this post is not about that. It’s about my blog, and the stuff under the hood that makes it run, AKA WordPress. If you want to skip to the juicy stuff, then use these shortcuts: My Site Slowed Down How I Moved to WP Engine How WP Engine ‘Hooked’ Me Why WP Engine? I started thatJeffSmith.com on May 28th, 2010. I had been already been blogging for several years, but a couple of really smart people I respected (Andy, Brent – thanks again!) suggested that I take ownership of my content and begin building my personal brand. I thought that was a good idea, and so I signed up for service with bluehost. Bluehost makes setting up a WordPress site very, very easy. And, they continued to be easy to work with for the past 2 years. I would even recommend them to anyone looking to host their own WordPress install/site. For $83.40, I purchased a year’s worth of service and my domain name registration – a very good value. And then last year I paid $107.40 for another year’s services. And when that year expired I paid another $190.80 for an additional two year’s service in advance. I had been up to that point, getting my money’s worth. And then, just a few weeks ago… My Site Slowed to a Crawl That spike was from an April Fool's Day Post, I think Why? Well, when I first started blogging, I had the same problem that most beginner bloggers have – not many readers. In my first year of blogging, I think the highest number of readers on a single day was about 125. I remember that day as I was very excited to break 100! Bluehost was very reliable, serving up my content with maybe a total of 3-4 outages in the past 2 years. Support was usually very prompt with answers and solutions, and I love their ‘Chat now’ technology – much nicer than message boards only or pay-to-talk phone support. In the past 6 months however, I noticed a couple of things: daily traffic was increasing – woohoo! my service was experiencing severe CPU throttling – doh! To be honest, I wasn’t aware the throttling was occuring, but I did know that the response time of my blog was starting to lag. Average load times were approaching 20-30 seconds. Not good when good sites are loading in 5 seconds or less. And just this past week, in getting ready to launch a new website for work that sucked in an RSS feed from my blog, the new page was left waiting for more than a minute. Not good! In fact my boss asked, why aren’t you blogging on Blogger? Ugh. I tried a few things to fix the problem: I paid for a premium WordPress theme – Themify’s Grido (thanks to @SQLRockstar for the heads-up) I installed a couple of WP caching plugins I read every WP optimization blog post I could get my greedy little eyes on However, at the same time I was also getting addicted to WordPress bloggers talking about all the cool things you could do with your blog. As a result I had at one point about 30 different plugins installed. WordPress runs on MySQL, and certain queries running via these plugins were starving for CPU. Plugins that would be called every page load meant that as more people clicked on my site, the more CPU I needed. I’m not stupid, so I eventually figured out that maybe less plugins was better, and was able to go down to just 20. But still, the site was running like a dog. CPU Throttling, makes MySQL wait to run a query Bluehost runs shared servers. Your site runs on the same box that several hundred (or thousand?) 
other services are running on. If you take more CPU than they think you should have, they will limit your service by making you stand in line for CPU, AKA ‘throttling.’ This is not bad. This business model allows them to serve many, many users for a very fair price. It works great until, well, until it doesn’t. I noticed in the last week that for every minute of service, I was being throttled between 60 and 300 seconds. If there were 5 MySQL processes running, then every single one of them were being held in check. The blog visitor notice this as their page requests would take a minute or more to be answered. Bluehost unfortunately doesn’t offer dedicated server hosting, so there was no real upgrade path for me follow and remain one of their customers. So what was I to do? Uninstall every plugin and hope the site sped up? Ask for people to take turns on my blog? I decided to spend my way out of the problem. I signed up for service with WP Engine and moved ThatJeffSmith.com The first 2 months are free, and after that it’s about $29/month to run my site on their system. My math tells me that’s a good bit more expensive than what Bluehost was charging me – to the tune of about 300% more a month. Oh, and I should just say that my blog is a personal blog even though I talk about work stuff here. I don’t get paid for blogging, I don’t sell ads, and I don’t expense the service fees – this is my personal passion. So is it worth it? In the first 4 days, it seems to be totally worth it. Load times have gone from 20-30 seconds to less than 5 seconds. A few folks have told me via Twitter that they notice faster page loads. I anticipate this will indirectly lead to more traffic as Google penalizes you in search results if your site is too slow, and of course some folks won’t even bother waiting more than 5-10 seconds. I noticed right away that writing posts, uploading pictures, and just using the WordPress dashboard in general was much more responsive. So writing is less of a chore now, which means I won’t have a good reason not to write How I Moved to WP Engine I signed up for the service and registered my domain. I then took a full export of my ‘old’ site by doing a FTP GET of all my files, then did a MySQL database backup, exported my WordPress Theme settings to a .zip file, and then finally used the WordPress ‘Export’ feature. I then used the WordPress ‘Import’ on the new site to load up my posts. Then I uploaded the theme .zip package from Themify. Then I FTP’d the ‘wp-content’ directory up to my new server using SFTP (WP Engine only supports secure FTP – good on them!) Using a temporary URL to see my new site, I was able to confirm that everything looked mostly OK – I’ll detail the challenges and issues of fixing the content next – but then it was time to ‘flip the switch.’ I updated the IP address that the DNS lookup tables use to route traffic to my new server. In a matter of minutes the DNS servers around the world were updated and it was time to see the new site! But It Was ‘Broken’ I had never moved a website before, and in my rush to update the DNS, I had changed the records without really finding out what I was supposed to do first. After re-reading the directions provided by WP Engine and following the guidance of their support engineer, I realized I had needed to set the CNAME (Alias) ‘www’ record to point to a different URL than the ‘www.thatjeffsmith.com’ entry I had set. Once corrected the site was up and running in less than a minute. 
Then It Was Only Mostly Broken Many of my plugins weren’t working. Apparently just ftp’ing the wp-content directory up wasn’t the proper way to re-install the plugin. I suspect file permissions or file ownership wasn’t proper. Some plug-ins were working, many had their settings wiped to the defaults, and a few just didn’t work again. I had to delete the directory of the plug-in manually via SFTP, and then use the WP Dashboard to install it from scratch. And here was my first ‘lesson’ – don’t switch the DNS records until you’ve completely tested your new site. I wasn’t able to navigate the old WP console to review my plug-in settings. Thankfully I was able to use the Wayback Machine to reverse engineer some things, and of course most plug-ins aren’t that complicated to set up to begin with. An example of one that I had to redo from scratch is the ‘Twitter @Anywhere Plus’ plugin that I use to create the form that allows folks to tweet a post they enjoyed at the end of each story. How WP Engine ‘Hooked’ Me I actually signed up with another provider first. They ranked highly in Google searches and a few Tweeps recommended them to me. But hours after signing up I still didn’t have a server ready, and I was ready to give up on them. They offered no chat or phone support – only email and message boards. And the message boards were rife with posts about how the service had gone downhill in the past 6 months. To their credit, they did make it easy to cancel, although I did have to do so via email as their website ‘cancel’ button was non-existent. Within minutes of activating my WP Engine account I had received my welcome message and directions on how to get started. I was able to see my staged website right away. They also did something very cool before I even got started – they looked at my existing site and told me by how much they could improve its performance. The proof is in the web pudding. I like this for a few reasons, but primarily I liked their business model. It told me they knew what they were doing, and that they were willing to put their money where their mouth was. This was further evident by their 60-day money back guarantee. And if I understand it correctly, they don’t even take your money until after that 60 day period is over. After a day, I was welcomed by the WP Engine social media team, and was given the opportunity to subscribe to their newsletter and follow their account on Twitter. I noticed their Twitter team is sure to post regular WordPress tips several times a day. It’s not just an account that’s set up for the sake of having a Twitter presence. These little things add up and give me confidence in my decision to choose them as my hosting partner. ‘Partner’ – that’s a lot nicer word than just ‘service provider,’ isn’t it? Oh, and they offered me a t-shirt. Don’t ever doubt the power of a ‘free’ t-shirt! How awesome is this e-mail, from a customer perspective? I wasn’t really expecting any of this. Exceeding expectations before I have even handed over a single dollar seems like a pretty good business plan. This is how you treat customers. Love them to death, and they reward you with loyalty. But Jeff, You Skipped a Piece Here, Why WP Engine? I found them on one of those ‘Top 10′ list posts, and pulled up their webpage. I noticed they offered a specialized service – they host WordPress installs, and that’s it. Their servers are tuned specifically for running WordPress. They had in bolded text, things like ‘INSANELY FAST.
INFINITELY SCALABLE.’ and ‘LIGHTNING SPEED.’ And then they offered insurance against hackers and they took care of automatic backups and restores. The only drawbacks I have noticed so far relate to plugins I used that have been ‘blacklisted.’ In order to guarantee that ‘lightning’ speed, they have banned the use of the CPU-suckiest plugins. One of those is the ‘Related Posts’ plugin. So if you are a subscriber and are reading this in your email, you’ll notice there’s no links back to my blog to continue reading other related stories. Since that referral traffic is very small single-digit for my site, I decided that I’m OK with that. I’d rather have the warp-speed page loads. Again, I think that will lead to higher traffic down the road. In 50+ days I will need to decide if WP Engine is a permanent solution. I’ll be sure to update this post when that time comes and let y’all know how it turns out.

    Read the article

  • org.hibernate.MappingException: No Dialect mapping for JDBC type: 2002

    - by Moli
    Hi all, I'm having an issue trying to get a JPA native query working. I get an org.hibernate.MappingException: No Dialect mapping for JDBC type: 2002 when I run a native query that selects a geometry field. I use Oracle and org.hibernatespatial.oracle.OracleSpatial10gDialect. The geom field is mapped as: @Column(name="geometry") @Type(type = "org.hibernatespatial.GeometryUserType") private Geometry geometry; List<Object> listFeatures= new LinkedList<Object>(); Query query= entityManager.createNativeQuery( "SELECT "+ slots +" , geometry FROM edtem_features feature, edtem_dades dada WHERE" + " feature."+ tematic.getIdGeomField() +" = dada."+ tematic.getIdDataField()+ " AND dada.capesid= "+ tematic.getCapa().getId() + " AND feature.geometriesid= "+ tematic.getGeometria().getId()); listFeatures.addAll( query.getResultList()); Does anybody know a solution, or how to force the type of the geometry so this works? Many thanks in advance. Moli
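    One possible workaround, sketched below rather than taken from the original question: drop down to Hibernate's native SQLQuery and declare every projected column with addScalar, so the dialect never has to resolve JDBC type 2002 (Oracle's STRUCT) on its own. The column aliases, the simplified join condition, and the use of getDelegate() to reach the underlying Session are assumptions to be adapted to the real mapping.

    import java.util.List;
    import javax.persistence.EntityManager;
    import org.hibernate.Hibernate;
    import org.hibernate.SQLQuery;
    import org.hibernate.Session;
    import org.hibernatespatial.GeometryUserType;

    public class FeatureQueries {
        @SuppressWarnings("unchecked")
        public List<Object[]> loadFeatures(EntityManager entityManager) {
            // With Hibernate's EntityManager implementation, getDelegate() exposes the Session.
            Session session = (Session) entityManager.getDelegate();
            SQLQuery query = session.createSQLQuery(
                "SELECT dada.slot1 AS slot1, feature.geometry AS geom "
                + "FROM edtem_features feature, edtem_dades dada "
                + "WHERE feature.geom_id = dada.data_id");              // illustrative join condition
            query.addScalar("slot1", Hibernate.STRING);                         // declare each selected column...
            query.addScalar("geom", Hibernate.custom(GeometryUserType.class));  // ...including the geometry
            return query.list();
        }
    }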

    Read the article

  • Base 36 to Base 10 conversion using SQL only.

    - by EvilTeach
    A situation has arisen where I need to perform a base 36 to base 10 conversion, in the context of a SQL statement. There doesn't appear to be anything built into Oracle 9, or Oracle 10 to address this sort of thing. My Google-Fu, and AskTom suggest creating a pl/sql function to deal with the task. That is not an option for me at this point. I am looking for suggestions on an approach to take that might help me solve this issue. To put this into a visual form... WITH Base36Values AS ( SELECT '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ' myBase36 FROM DUAL ), TestValues AS ( SELECT '01Z' BASE36_VALUE, 71 BASE10_VALUE FROM DUAL ) SELECT * FROM Base36Values, TestValues I am looking for something to calculate the value 71, based on the input 01Z. As a bribe, each useful answer gets a free upvote. Thanks Evil.
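    One pure-SQL approach, sketched along the lines of the question's own WITH clause: generate one row per character position, map each character to its value with INSTR into the digit string, and sum value * 36^position. The CONNECT BY LEVEL row generator is reliable on 10g; on 9i it may need to be replaced with another row source (ROWNUM against an existing table, for instance), and the limit of 20 positions is an arbitrary assumption.

    WITH Base36Values AS (
      SELECT '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ' myBase36 FROM DUAL
    ),
    TestValues AS (
      SELECT '01Z' BASE36_VALUE, 71 BASE10_VALUE FROM DUAL
    ),
    Positions AS (
      SELECT LEVEL pos FROM DUAL CONNECT BY LEVEL <= 20  -- supports values up to 20 digits long
    )
    SELECT t.BASE36_VALUE,
           t.BASE10_VALUE,
           SUM( (INSTR(b.myBase36, SUBSTR(t.BASE36_VALUE, p.pos, 1)) - 1)
                * POWER(36, LENGTH(t.BASE36_VALUE) - p.pos) ) AS CONVERTED
      FROM Base36Values b, TestValues t, Positions p
     WHERE p.pos <= LENGTH(t.BASE36_VALUE)
     GROUP BY t.BASE36_VALUE, t.BASE10_VALUE;

    For '01Z' this yields 0*1296 + 1*36 + 35*1 = 71, matching the expected BASE10_VALUE.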

    Read the article

  • ORA-12705 with OracleXE & Windows 7 & GlassFish

    - by bao
    I hate this problem... pls help! I have: GlassFish v3 (build 74.2) Windows 7 Pro english Oracle XE 10.2.0 settings: SQL select * from nls_database_parameters; PARAMETER VALUE NLS_LANGUAGE AMERICAN NLS_TERRITORY AMERICA NLS_CURRENCY $ NLS_ISO_CURRENCY AMERICA NLS_NUMERIC_CHARACTERS ., NLS_CHARACTERSET WE8MSWIN1252 NLS_CALENDAR GREGORIAN NLS_DATE_FORMAT DD-MON-RR NLS_DATE_LANGUAGE AMERICAN NLS_SORT BINARY NLS_TIME_FORMAT HH.MI.SSXFF AM PARAMETER VALUE NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR NLS_DUAL_CURRENCY $ NLS_COMP BINARY NLS_LENGTH_SEMANTICS BYTE NLS_NCHAR_CONV_EXCP FALSE NLS_NCHAR_CHARACTERSET AL16UTF16 NLS_RDBMS_VERSION 10.2.0.1.0 20 rows selected. SQL HOST ECHO %NLS_LANG% AMERICAN_AMERICA.WE8MSWIN1252 NLS_LANG in registry is AMERICAN_AMERICA.WE8MSWIN1252 ORACLE_HOME in registry is C:\oraclexe\app\oracle\product\10.2.0\server I am creating Connection pool in Glassfish admin web GUI, trying to ping.. This error in log: [#|2010-05-15T23:19:26.958+0400|WARNING|glassfishv3.0|javax.enterprise.resource.resourceadapter.com.sun.enterprise.connectors.service|_ThreadID=29;_ThreadName=Thread-1;|RAR8054: Exception while creating an unpooled [test] connection for pool [ phut ], Connection could not be allocated because: ORA-00604: error occurred at recursive SQL level 1 ORA-12705: Cannot access NLS data files or invalid environment specified |#] HOW TO FIX??

    Read the article

  • ORA-01019 error only as an administrator

    - by Mike
    I'm having a strange problem. I've installed the Oracle 10g client on a terminal server running Windows Server 2008R2. When I try to connect to Oracle, using, say, Toad, I receive the error "ORA-01019 unable to allocate memory in the user side". But this only happens if I'm logged in as an administrator. If I connect as a normal user, I can connect without issue. Also -- if a normal user is connected, I can then connect without a problem as an administrator. Any thoughts?

    Read the article

  • Instance validation error: '2' is not a valid value for QueryType. (web service)

    - by Anthony Shaw
    I have a web service that I am passing an enum public enum QueryType { Inquiry = 1 Maintainence = 2 } When I pass an object that has a Parameter of QueryType on it, I get the error back from the web service saying that '2' is not a valid value for QueryType, when you can clearly see from the declaration of the enum that it is. I cannot change the values of the enum because legacy applications use the values, but I would rather not have to insert a "default" value just to push the index of the enum to make it work with my web service. It acts like the web service is using the index of the values rather than the values themselves. Does anybody have a suggestion of what I can do to make it work, is there something I can change in my WSDL?
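    The XmlSerializer behind ASMX services sends and validates enum members by name, not by number, so the error usually means the XML ended up carrying the literal 2 where the serializer expects a member name — often because one side's copy of the enum is out of date or the value was written into the message as a raw integer. One hedged workaround, with illustrative type names, is to keep the enum in the object model but put a plain int on the wire:

    using System;
    using System.Xml.Serialization;

    public enum QueryType
    {
        Inquiry = 1,
        Maintainence = 2
    }

    public class QueryRequest
    {
        // Used by application code; never serialized.
        [XmlIgnore]
        public QueryType QueryType { get; set; }

        // Serialized in place of the enum, so the value travels as a number, not a name.
        [XmlElement("QueryType")]
        public int QueryTypeValue
        {
            get { return (int)QueryType; }
            set { QueryType = (QueryType)value; }
        }
    }

    Regenerating the client proxy after any change to the enum also avoids the stale-names case without touching the contract.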

    Read the article

  • How to secure a WCF service using NetNamedPipesBinding so that it can only be called by the current

    - by Samuel Jack
    I'm using a WCF service with the NetNamedPipesBinding to communicate between two AppDomains in my process. How do I secure the service so that it is not accessible to other users on the same machine? I have already taken the precaution of using a GUID in the Endpoint Address, so there's a little security through obscurity, but I'm looking for a way of locking the service down using ACL or something similar.
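    NetNamedPipeBinding does not expose the pipe's ACL directly, but its default Transport security does hand the service the caller's Windows identity, so one common approach is to reject any caller whose SID differs from the account the host process runs under. A sketch, with an illustrative class name and assuming the binding's default security settings:

    using System.Security.Principal;
    using System.ServiceModel;

    public class SameUserOnlyAuthorizationManager : ServiceAuthorizationManager
    {
        protected override bool CheckAccessCore(OperationContext operationContext)
        {
            WindowsIdentity caller = operationContext.ServiceSecurityContext.WindowsIdentity;
            // Admit the caller only if it is the same Windows account as the hosting process.
            return caller != null && caller.User == WindowsIdentity.GetCurrent().User;
        }
    }

    Attach it before opening the host with host.Authorization.ServiceAuthorizationManager = new SameUserOnlyAuthorizationManager(); the GUID in the endpoint address then becomes a convenience rather than the only line of defense.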

    Read the article

  • Where/what is the specification for odata.service meta tag?

    - by Sam Saffron
    I would like to add some tags to our web app to enable auto-discovery of our odata feeds. So for example Nerd Dinner has the following tag: <link rel="odata.service" title="NerdDinner.com OData Service" href="/Services/OData.svc" /><link rel="odata.feed" title="NerdDinner.com OData Service - Dinners" href="/Services/OData.svc/Dinners" /> The trouble is that I have 4 different feeds and am unclear if I am allowed to add multiple link rel="odata.service" to the document. Where is the specification for this meta tag? (follow on question, are there any apps that take advantage of this tag that I can use to test out behavior)

    Read the article

  • rpm installation error

    - by JiminyCricket
    I'm trying to install the RPM compat-db-4.1.25-9 on Oracle Enterprise Linux, since it's required to install WebCenter... however the rpm installation throws a warning and then doesn't work: [root@devsebl downloads]# rpm -i compat-db-4.1.25-9.rpm warning: compat-db-4.1.25-9.rpm: Header V3 DSA signature: NOKEY, key ID 9b3c94f4 [root@devsebl downloads]# rpm -q compat-db-4.1.25-9.rpm package compat-db-4.1.25-9.rpm is not installed Any idea what that warning means and why it's failing there? I tried to use yum, but it's not available, I guess: [root@devsebl downloads]# yum search compat-db Loaded plugins: security Warning: No matches found for: compat-db No Matches found

    Read the article

  • Calling a GWT service in a different context than the GWT Module Base?

    - by Epaga
    I have a GWT module with the X-GWT-Module-Base http://host:8080/foo/ and would like to call a (GWT) service which is located at http://host:8080/bar/. The reason is for example that I want to be able to share a GWT service between two different GWT client projects. All I've gotten to work so far is if the service is located within the module context, i.e. http://host:8080/foo/bar works fine, using @RemoteServiceRelativePath("bar") in my service interface. It seems that the @RemoteServiceRelativePath only allows a value relative to the module base URL...so is there some other way to accomplish what I'm trying to accomplish?
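    One way to do this, assuming the usual GWT-RPC setup (the BarService/BarServiceAsync names below are illustrative): skip @RemoteServiceRelativePath, which is always resolved against the module base URL, and point the proxy at an absolute URL through ServiceDefTarget. As long as the other context lives on the same host and port, the browser's same-origin policy is not an obstacle.

    import com.google.gwt.core.client.GWT;
    import com.google.gwt.user.client.rpc.ServiceDefTarget;

    public class BarServiceFactory {
        public static BarServiceAsync create() {
            BarServiceAsync proxy = (BarServiceAsync) GWT.create(BarService.class);
            // Absolute URL into the other web application's context.
            ((ServiceDefTarget) proxy).setServiceEntryPoint("http://host:8080/bar/barService");
            return proxy;
        }
    }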

    Read the article

  • How to Maintain per-Client State in a WCF Service?

    - by tbischel
    I've created a WCF service in which I would like it to maintain state between calls from the client. I figured the easiest way to do this was to add this attribute to the service: [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)] since this is supposed to keep a separate service alive for each client over the life of the client proxy (or timeout in the extreme case). I also added a test function that tracks a list of user inputs, and spits out a concatenated string with all the inputs over the life of the service. When I run this in the test client generated by visual studio, I find that the list I was using to hold past data is reset with each call. Is there something else I need to do to maintain state per session?
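    Two usual culprits, sketched below with illustrative names: the contract does not demand a session, or the binding in use (basicHttpBinding, for example) cannot carry one, in which case every call can land on a fresh instance no matter what the ServiceBehavior says. A second thing to check is that the test client reuses a single proxy; every new proxy is a new session.

    using System.Collections.Generic;
    using System.ServiceModel;

    // Requiring a session makes WCF fail fast if the endpoint's binding cannot support one
    // (use wsHttpBinding or netTcpBinding rather than basicHttpBinding).
    [ServiceContract(SessionMode = SessionMode.Required)]
    public interface IInputCollector
    {
        [OperationContract]
        string AddInput(string value);
    }

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
    public class InputCollector : IInputCollector
    {
        private readonly List<string> inputs = new List<string>();

        public string AddInput(string value)
        {
            inputs.Add(value);                          // survives across calls on the same proxy
            return string.Join(", ", inputs.ToArray()); // everything seen so far in this session
        }
    }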

    Read the article

  • WCF Service w/ SharePoint Error: Could not find default endpoint element that references contract...

    - by Brian Clark
    Full error message: Could not find default endpoint element that references contract 'PublicationServices.IPublicationService' in the ServiceModel client configuration section. This might be because no configuration file was found for your application, or because no endpoint element matching this contract could be found in the client element. I have a SharePoint site that I have already opened a project for in Visual Studio 2010. I also created a project that contains a WCF Service Application and added it to the same solution that contains the project for my SharePoint site. I have created a visual web part in my SharePoint project that I am trying to use to consume the WCF Service. I am doing so like this, from within the user control for my web part: PublicationServiceClient proxy = new PublicationServiceClient(); Just having this line alone in OnPreRender, Page_Load, etc. will generate the above error. I've read previous posts about having to have items in the config file of the WCF service also in the config file of the consuming application. I have done this: I copied the relevant section from the Web.config file of my WCF service and placed it inside the system.serviceModel tags of my SharePoint project's app.config file. In other words, this is in both of my config files. When I add this web part to the front page of my SharePoint site though, I get the above error every time. I should also note that I have created a console app that was able to consume data from this very same WCF service with no problems. Any help would be appreciated!
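    A web part does not read the app.config of the Visual Studio project at runtime; it executes inside the SharePoint web application, so the system.serviceModel client section normally has to be pasted into that web application's web.config (the one under the IIS virtual directory). The alternative is to skip configuration lookup entirely and build the endpoint in code; a sketch, where the binding and the address are placeholders to match however the service is actually hosted:

    using System.ServiceModel;

    // Inside the user control, instead of the parameterless constructor:
    var binding = new BasicHttpBinding();
    var address = new EndpointAddress("http://server:8001/PublicationService.svc");
    PublicationServiceClient proxy = new PublicationServiceClient(binding, address);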

    Read the article

  • ADO.NET Known Issues?

    - by Israel Rodriguez
    So, I'm starting a new project in my company, and it's kinda big. We are going to use .NET 3.5, and I wish to know if there are any known bugs or performance issues that could cause weird behaviour in my project. I'm reading some things about EFv4, and all they say is that EFv3.5 has too many problems. After all, what's the best and fastest way: ADO.NET Entities, or extracting the data from my DB directly into a DataReader? Is the EF Oracle provider stable? The project will be .NET 3.5 and Oracle.

    Read the article

  • Would OpenID or OAuth work for authorization/authentication on a distributed web service?

    - by David Eyk
    We're in the early stages of designing a RESTful/resource-oriented web service API for a computational linguistics application. Because many of the resources we plan to serve are rights-encumbered, a key design decision has been to specify the platform so that each resource provider can expose their own web service that complies with the API spec. This way, the rights owner maintains control over their content (and thus the ability to throttle or deny access at will) and a direct relationship with the consumer, while still being able to participate in the collaborative network. At the same time, to simplify the job of writing a client for this service, we want to allow a client access to the distributed service through one end-point, with the server handling content negotiation and retrieval from the appropriate providers. Right now, we're at an impasse on authentication/authorization schemes. One of our number has argued for the (technical) simplicity of a central authentication registry, but others are concerned about the organizational complexity of such a scheme. It seems to me, based on an albeit limited understanding of the technologies, that a combination of OpenID and OAuth would do the trick, with a client authenticating with the end-point via OpenID, and the server taking action on the user's behalf with the various content providers using OAuth. I've only ever seen implementations (e.g. stackoverflow, twitter, etc.) where a human was present to intervene, and I still need to do more research on these technologies. Would a scheme like this work for an automated web service, or would it make the client too difficult to implement and operate?

    Read the article

  • How to use an OSGi service from a web application?

    - by Jaime Soriano
    I'm trying to develop a web application that is going to be launched from an HTTP OSGi service. This application needs to use another OSGi service (db4o OSGi), for which I need a reference to a BundleContext. I have tried two different approaches to get the OSGi context in the web application: Store the BundleContext of the Activator in a static field of a class that the web service can import and use. Use FrameworkUtil.getBundle(this.getClass()).getBundleContext() (where this is an instance of MainPage, a class of the web application). I think the first option is completely wrong, but either way I'm having problems with the class loaders in both options. In the second one it raises a LinkageError: java.lang.LinkageError: loader constraint violation: loader (instance of org/apache/felix/framework/ModuleImpl$ModuleClassLoader) previously initiated loading for a different type with name "com/db4o/ObjectContainer" I also tried with Equinox and I get a similar error: java.lang.LinkageError: loader constraint violation: loader (instance of org/eclipse/osgi/internal/baseadaptor/DefaultClassLoader) previously initiated loading for a different type with name "com/db4o/ObjectContainer" The code that provokes the exception is: ServiceReference reference = context.getServiceReference(Db4oService.class.getName()); Db4oService service = (Db4oService)context.getService(reference); database = service.openFile("foo.db"); The exception is raised in the last line; the database variable's type is ObjectContainer. If I change the type of this variable to Object, the exception is not raised, but it's not useful as an Object :)

    Read the article

  • How to avoid this very heavy query that slows down the application?

    - by Juan Paredes
    Hi, We have a web application running in a production environment and at some point the client complained about how slow the application got. When we checked what was going on with the application and the database, we discovered this "precious" query being executed by several users at the same time (thus inflicting an extremely high load on the database server): SELECT NULL AS table_cat, o.owner AS table_schem, o.object_name AS table_name, o.object_type AS table_type, NULL AS remarks FROM all_objects o WHERE o.owner LIKE :1 ESCAPE :"SYS_B_0" AND o.object_name LIKE :2 ESCAPE :"SYS_B_1" AND o.object_type IN(:"SYS_B_2", :"SYS_B_3") ORDER BY table_type, table_schem, table_name Our application does not execute this query; I believe it is a Hibernate internal query. I've found little information on why Hibernate issues this extremely heavy query, so any help on how to avoid it is very much appreciated! The production environment information: Red Hat Enterprise Linux 5.3 (Tikanga), JDK 1.5, web container OC4J (within Oracle Application Server), Oracle Database 10.1.0.4, JDBC Driver for JDK 1.2 and 1.3, Hibernate version 3.2.6.ga, connection pool library C3P0 version 0.9.1. Thank you.

    Read the article

  • ORA-12705: invalid or unknown NLS parameter value specified

    - by viky
    I have a J2EE application hosted on JBoss on a Linux platform. When I try to access the application, I see the following error in the server.log file: ORA-12705: invalid or unknown NLS parameter value specified When I point the same JBoss instance to a different schema, the application works fine. I went through a few forums and found that the NLS parameter settings are fine. Can anyone help? JBoss version = 4.0.2 DB version = Oracle 10.2 Output of the locale command on Linux: $ locale LANG=en_US.UTF-8 LC_CTYPE="en_US.UTF-8" LC_NUMERIC="en_US.UTF-8" LC_TIME="en_US.UTF-8" LC_COLLATE="en_US.UTF-8" LC_MONETARY="en_US.UTF-8" LC_MESSAGES="en_US.UTF-8" LC_PAPER="en_US.UTF-8" LC_NAME="en_US.UTF-8" LC_ADDRESS="en_US.UTF-8" LC_TELEPHONE="en_US.UTF-8" LC_MEASUREMENT="en_US.UTF-8" LC_IDENTIFICATION="en_US.UTF-8" LC_ALL=

    Read the article

  • How to call WCF Service Method Asynchronously from Class file?

    - by stackuser1
    I've added a WCF service reference to my ASP.NET application and configured that reference to support asynchronous calls. From ASP.NET code-behind files, I'm able to call the service methods asynchronously like the sample code below. protected void Button1_Click(object sender, EventArgs e) { PageAsyncTask pat = new PageAsyncTask(BeiginGetDataAsync, EndDataRetrieveAsync, null, null); Page.RegisterAsyncTask(pat); } IAsyncResult BeiginGetDataAsync(object sender, EventArgs e, AsyncCallback async, object extractData) { svc = new Service1Client(); return svc.BeginGetData(656,async, extractData); } void EndDataRetrieveAsync(IAsyncResult ar) { Label1.Text = svc.EndGetData(ar); } and in the page directive I added Async="true". In this scenario it works fine. But from the UI I'm not supposed to call the service methods directly; I need to call all service methods from a static class, and from the code-behind file I need to invoke the static method. In this scenario, what exactly do I need to do?
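    One way to keep the PageAsyncTask pattern while funneling every service call through a static class is to give that class its own Begin/End pair and carry the proxy in the async state, so the page never touches Service1Client directly. A sketch with illustrative names and minimal error handling:

    using System;

    public static class DataServiceHelper
    {
        public static IAsyncResult BeginGetData(int id, AsyncCallback callback, object state)
        {
            Service1Client svc = new Service1Client();
            // Carry the proxy along so EndGetData can find and close it.
            return svc.BeginGetData(id, callback, new object[] { svc, state });
        }

        public static string EndGetData(IAsyncResult result)
        {
            object[] wrapped = (object[])result.AsyncState;
            Service1Client svc = (Service1Client)wrapped[0];
            try
            {
                return svc.EndGetData(result);
            }
            finally
            {
                svc.Close();
            }
        }
    }

    The page's begin handler then just returns DataServiceHelper.BeginGetData(656, async, extractData) and its end handler sets Label1.Text = DataServiceHelper.EndGetData(ar); the Async="true" page directive stays as it is.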

    Read the article

  • Database server: Small quick RAM or large slow RAM?

    - by Josh Smeaton
    We are currently designing our new database servers, and have come up with a trade off I'm not entirely sure of how to answer. These are our options: 48GB 1333MHz, or 96GB 1066MHz. My thinking is that RAM should be plentiful for a Database Server (we have plenty and plenty of data, and some very large queries) rather than as quick as it could be. Apparently we can't get 16GB chips at 1333MHz, hence the choices above. So, should we get lots of slower RAM, or less faster RAM? Extra Info: Number of DIMM Slots Available: 6 Servers: Dell Blades CPU: 6 core (only single socket due to Oracle licensing).

    Read the article

  • How can I access an ASP.Net 2.0 web service using VB Script?

    - by Steve Hiner
    I'm trying to find a way to access a web service from a VB Script .vbs file running under wscript.exe. I pulled some sample code from Microsoft but it gives me an error. Dim SOAPClient, Response Set SOAPClient = createobject("MSSOAP.SOAPClient") SOAPClient.mssoapinit("https://www.domain.com/Folder/Service.asmx?WSDL") On that last line I get an error message: WSDLReader: No valid schema specification was found. This version of the SOAP Toolkit only supports 1999 and 2000 XSD schema specifications After getting that message I installed the SOAP 3.0 SDK to make sure I have the most recent version (since it's now deprecated for .Net) but I still get the same error. The reason it needs to be in VB Script is because it's going to be used in a program over which I have no control and it only supports VB Script. Is there a way to get VB Script to be able to parse a newer WSDL file? I do have the source code for the web service. Is there something I can change in the web service to make it schema compatible with the SOAP toolkit?
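    The SOAP Toolkit's WSDL reader only understands the 1999/2000 XSD namespaces, while ASP.NET 2.0 emits the 2001 schema, so mssoapinit will keep failing regardless of the toolkit version. A common fallback that stays within VBScript is to post the SOAP envelope yourself with MSXML. In the sketch below the method name GetData, its parameter, and the http://tempuri.org/ namespace are assumptions; take the real envelope from the .asmx test page or the WSDL.

    Dim http, url, envelope
    url = "https://www.domain.com/Folder/Service.asmx"

    ' Assumed envelope; copy the real one from the .asmx test page for your method.
    envelope = _
        "<?xml version=""1.0"" encoding=""utf-8""?>" & _
        "<soap:Envelope xmlns:soap=""http://schemas.xmlsoap.org/soap/envelope/"">" & _
        "<soap:Body><GetData xmlns=""http://tempuri.org/"">" & _
        "<someParameter>123</someParameter>" & _
        "</GetData></soap:Body></soap:Envelope>"

    Set http = CreateObject("MSXML2.ServerXMLHTTP.6.0")
    http.open "POST", url, False
    http.setRequestHeader "Content-Type", "text/xml; charset=utf-8"
    http.setRequestHeader "SOAPAction", "http://tempuri.org/GetData"
    http.send envelope

    WScript.Echo http.status & " " & http.statusText
    WScript.Echo http.responseText  ' parse with MSXML2.DOMDocument if individual values are needed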

    Read the article

  • How to resolve this very heavy query that slows down the application?

    - by Juan Paredes
    Hi, We have a web application running in a production environment and at some point the client complained about how slow the application got. When we checked what was going on with the application and the database, we discovered this "precious" query being executed by several users at the same time (thus inflicting an extremely high load on the database server): SELECT NULL AS table_cat, o.owner AS table_schem, o.object_name AS table_name, o.object_type AS table_type, NULL AS remarks FROM all_objects o WHERE o.owner LIKE :1 ESCAPE :"SYS_B_0" AND o.object_name LIKE :2 ESCAPE :"SYS_B_1" AND o.object_type IN(:"SYS_B_2", :"SYS_B_3") ORDER BY table_type, table_schem, table_name Our application does not execute this query; I believe it is a Hibernate internal query. I've found little information on why Hibernate issues this extremely heavy query, so any help is very much appreciated! The production environment information: Red Hat Enterprise Linux 5.3 (Tikanga), JDK 1.5, web container OC4J (within Oracle Application Server), Oracle Database 10g Release 10.0.0.1, JDBC Driver for JDK 1.2 and 1.3, Hibernate version 3.2.6.ga. Thank you.

    Read the article

  • VS 2010 Server Explorer Database Showing No Tables

    - by Andy
    I'm working on a .Net application that needs to read from an Oracle 10g database behind Siebel. In VS 2010 Server Explorer, I've created a connection using the OracleClient type connector with a reference to the Oracle TNS service name as the "server name." The "Test Connection" button shows that the connection is successful. However, in the Server Explorer, when I go to expand the Tables, no tables are shown. I know for a fact that there are 3000+ tables in the database (thanks Siebel). Anyone know what's happening here? I'd like to create an Entity Framework 4.0 Entity Data Model... Thanks for the help! Andy

    Read the article
