Search Results

Search found 23103 results on 925 pages for 'performance issues and ha'.

Page 439 of 925

  • Product Support Webcast for Existing Customers: WebCenter Content 11.1.1.8 Overview and Support Information

    - by John Klinke
    Register for our upcoming Advisor Webcast 'WebCenter Content 11.1.1.8 Overview and Support Information', scheduled for 11am Eastern, November 21, 2013 (10am Central, 9am Mountain, 8am Pacific, 17:00 Europe Time (Paris, GMT+01:00)). This 1-hour session is recommended for technical and functional users who have installed or plan to upgrade to WebCenter Content 11.1.1.8, or who would just like more information on the latest release. Topics will include: an overview of new features and enhancements; installation of the new WebCenter Content web UI; upgrading from older WebCenter Content versions; support issues, including the latest patches; and a roadmap of proposed additional features. Make sure you register and mark this date on your calendar. Register at: https://oracleaw.webex.com/oracleaw/onstage/g.php?d=590991341&t=a Once the host approves your request, you will receive a confirmation email with instructions for joining the call on November 21st.

    Read the article

  • Google I/O 2010 - What's hot in Java for App Engine

    App Engine 201 - Toby Reyelts, Don Schwarz. Learn what's new with Java on App Engine. We'll take a whirlwind tour through the changes since last year, walk through a code sample for task queues and the new blobstore service, and demonstrate techniques for improving your application's performance. We'll top it off with a glimpse into some new features that we've planned for the year ahead. For all I/O 2010 sessions, please go to code.google.com. (From GoogleDevelopers; running time 01:02:10.)
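
    The task-queue walkthrough mentioned in the description boils down to very little code. A hedged sketch against the App Engine Java task-queue API as it stabilized after this talk (the /worker URL and the key parameter are hypothetical; a servlet mapped at that URL is assumed to exist):

        import com.google.appengine.api.taskqueue.Queue;
        import com.google.appengine.api.taskqueue.QueueFactory;
        import com.google.appengine.api.taskqueue.TaskOptions;

        public class EnqueueExample {
            // Enqueue a background task; App Engine POSTs the params to the
            // handler mapped at /worker (a hypothetical servlet in this sketch).
            public static void enqueue(String entityKey) {
                Queue queue = QueueFactory.getDefaultQueue();
                queue.add(TaskOptions.Builder
                        .withUrl("/worker")
                        .param("key", entityKey));
            }
        }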

    Read the article

  • Google I/O 2010 - Writing zippy Android apps

    Android 201 - Brad Fitzpatrick. Come hear tips & war stories on making fast, responsive (aka "non-janky") Android apps. No more ANRs! Eliminate event loop stalls! Fast start-ups! Optimized database queries with minimal I/O! Also, learn about the tools and techniques we use to find performance problems across the system, and hear what's coming in the future. For all I/O 2010 sessions, please go to code.google.com. (From GoogleDevelopers; running time 57:38.)
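
    The standard "no more ANRs" technique of that era was keeping blocking work off the UI thread. A hedged sketch using Android's AsyncTask (the loadFromDatabase call and the TextView are hypothetical stand-ins for real work and real UI):

        import android.os.AsyncTask;
        import android.widget.TextView;

        // Keep slow I/O off the UI thread so the event loop never stalls
        // long enough to trigger an Application Not Responding dialog.
        class LoadTask extends AsyncTask<String, Void, String> {
            private final TextView output;

            LoadTask(TextView output) {
                this.output = output;
            }

            @Override
            protected String doInBackground(String... keys) {
                return loadFromDatabase(keys[0]); // hypothetical blocking query
            }

            @Override
            protected void onPostExecute(String result) {
                output.setText(result); // runs back on the UI thread
            }

            private String loadFromDatabase(String key) {
                // Stand-in for a real SQLite query.
                return "value-for-" + key;
            }
        }

    Usage would be a one-liner from the activity, e.g. new LoadTask(myTextView).execute("row-42");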

    Read the article

  • Advanced Analytics Oracle Data Mining - NEW 2-Day Training Course

    - by Mike.Hallett(at)Oracle-BI&EPM
    A NEW 2-Day Oracle University (OU) instructor-led course on Oracle Data Mining has been developed for partners and customers who want to learn more about data mining, predictive analytics, and knowledge discovery inside the Oracle Database. Oracle Data Mining provides data mining algorithms that run natively in the database, for high-performance in-database model building and model deployment. This OU course is a great way to learn the advantages and benefits of "big data analytics": mining data, building and deploying predictive analytics entirely inside the Oracle Database, and working with OBI. To register for a class, click here, then click on View Schedule to see the latest scheduled classes and/or submit your information expressing interest in attending a class.

    Read the article

  • Simplifying ASP.NET Demos By Switching Web Server

    Starting with the DXperience v2010.1 release, our ASP.NET demos will no longer use the IIS web server. Instead, we're switching to the built-in ASP.NET Development Server (formerly known as the Cassini web server). Why the change? During the ASP.NET European training tour, we learned that many developers had issues with their IIS installations. While some were easy to fix and some were more, um, challenging, it's still a roadblock to appreciating our ASP.NET demos. The easiest fix is to use...

    Read the article

  • Reasons why crontab does not work

    - by Adam Matan
    Many a time and oft, crontab scripts are not executed as expected. There are numerous reasons for that, for example: wrong crontab notation, permissions, environment variables and many more. This community wiki aims to aggregate the top reasons for crontab scripts not being executed as expected. Write each reason in a separate answer. Please include one reason per answer - details about why it's not executed - and fix(es) for that one reason. Please write only cron-specific issues, e.g. commands that execute as expected from the shell but execute erroneously under cron.
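
    By way of illustration, two of the most common culprits (cron's stripped-down environment and discarded output) can be addressed directly in the crontab itself. A hedged sketch of such an entry (the script and log paths are hypothetical):

        # Cron runs with a minimal environment, so set PATH explicitly
        # and capture stdout/stderr somewhere you can inspect later.
        PATH=/usr/local/bin:/usr/bin:/bin
        MAILTO=""

        # m   h  dom mon dow  command
        */5   *  *   *   *    /home/user/bin/backup.sh >> /tmp/backup.log 2>&1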

    Read the article

  • Changes to 'resolvconf' discarded when connecting to a new network

    - by sudheer
    I have upgraded to 12.10 from 12.04 recently and I am having issues with connecting to the Internet. I get an IP address and am able to ping other LAN IPs in the local network, but I am unable to connect to the Internet and even unable to ping www.google.com from a terminal. Making changes in /etc/resolv.conf, restarting the resolvconf service, and rebooting works, but I need to do this every time I connect to a new network. How do I make these changes permanent? Can someone suggest a solution to this issue?
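
    For reference, on resolvconf-managed systems of that era the usual persistent route was to put nameservers where resolvconf regenerates them from, rather than editing /etc/resolv.conf directly. A hedged sketch (8.8.8.8 is just an example server):

        # /etc/resolvconf/resolv.conf.d/head
        # Lines here are prepended to the generated /etc/resolv.conf,
        # so they survive reconnects and reboots.
        nameserver 8.8.8.8

        # Then regenerate the file:
        #   sudo resolvconf -u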

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale.

    However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members).

    The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance.

    As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). The finest granularity of this recovery is a single partition, and the single service thread cannot accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors, but in many cases this may be challenging.

    Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level.

    In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.
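
    Since disabled multicast (WKA) is called out as the most common cause, here is a hedged sketch of what the relevant operational override in tangosol-coherence-override.xml looks like (the addresses and ports are hypothetical, and a real deployment would list its actual WKA members):

        <!-- Hypothetical WKA configuration: every member unicasts to these
             addresses instead of using multicast for cluster discovery. -->
        <coherence>
          <cluster-config>
            <unicast-listener>
              <well-known-addresses>
                <socket-address id="1">
                  <address>10.0.0.1</address>
                  <port>8088</port>
                </socket-address>
                <socket-address id="2">
                  <address>10.0.0.2</address>
                  <port>8088</port>
                </socket-address>
              </well-known-addresses>
            </unicast-listener>
          </cluster-config>
        </coherence>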

    Read the article

  • MAAS fails to Commission nodes

    - by user3644848
    I'm evaluating MAAS/Juju. I followed the instructions at http://maas.ubuntu.com/docs1.5/install.html to set up MAAS. I'm running this in an Oracle VirtualBox environment, so power options are not configured for the VMs. PXE booting the VM works fine; the node shuts itself down and registers with MAAS in the Declared state. Issues: a) When I commission the VM (which is in the Declared state), it first gets "IP-Config: no response after 60 secs - giving up" errors (see link below for a screenshot). b) It fails to mount the boot device and drops into an initramfs prompt. I've copied logs here: https://www.dropbox.com/sh/5gy9nnonbnccufo/AAD9o4awSOtyaCCmRe5q7rBva Any help getting past this is greatly appreciated. Thanks!

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background

    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options: shares, for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles); dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32-CPU machine, for example); and capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets (for example, you can throttle an application to 0.125 of a CPU). (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management

    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management, one controls the resources and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art

    There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial "service units" and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management

    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications

    There's one little problem here: how do I measure application performance in a way relating to a service level? I don't want to do it based on internal resources like the number of CPU seconds it received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust.

    Summary

    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
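
    To give a flavor of the introspection described above, here is a hedged DTrace sketch measuring response time around a request handler in a target process (handle_request is a hypothetical function name standing in for whatever probe points a real application exposes):

        /* Hypothetical pid-provider probes around a request handler;
           aggregates a latency distribution in nanoseconds. */
        pid$target::handle_request:entry
        {
            self->ts = timestamp;
        }

        pid$target::handle_request:return
        /self->ts/
        {
            @latency = quantize(timestamp - self->ts);
            self->ts = 0;
        }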

    Read the article

  • Is there now any way to convert mp3 files to m4a or aac 192kbit?

    - by piedro
    For about two years now I have been trying to find a way to convert high-quality mp3 files to m4a or aac files with a fixed bitrate of 192k. Please don't suggest using another format - I thought this through as far as it goes. The problem here is: ffmpeg obviously can't convert to a higher bitrate than 152k. Even when it says it does so, the resulting files still have 152k instead of 192k. ffmpeg also has/had a bug of not writing the bitrate into the audio file tags, which means that when testing you have to calculate the bitrate manually by dividing the file size by the length of the audio in seconds (resulting in 152k - see above). Choosing faac as the converter gets me the same results. Other programs don't work reliably (see this thread: Howto convert audio files to *.m4a?). I know that this is not an original new problem, but I am wondering whether there is still no way to convert with Ubuntu/Kubuntu 12.04, now that a lot of time has passed and I can't find some of the bug issues mentioned in the other thread anymore. So: is there a solution after all?
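
    For reference, the kind of invocation being tested looks like this hedged sketch (12.04-era ffmpeg with libfaac; file names are placeholders), together with the manual bitrate check described above:

        # Attempted conversion to 192 kbit/s AAC in an .m4a container.
        ffmpeg -i input.mp3 -acodec libfaac -ab 192k output.m4a

        # Manual check, since the tags may not carry the bitrate:
        # bitrate (kbit/s) ~= (file size in bytes * 8) / duration in seconds / 1000
        ls -l output.m4a        # file size in bytes
        ffmpeg -i output.m4a    # prints the duration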

    Read the article

  • Arabic disappeared after 12.04 upgrade!

    - by Aboubakr
    Well, I was amongst the 12.04 Beta upgraders, and since then I've lost the ability to write in Arabic. I've been using Ubuntu since 2008 as my only OS without any issues, and have been upgrading since then as well, except on this machine, which received one upgrade from 11.10 to 12.04 and got messed up. I've added Arabic as usual, but it doesn't change with the keyboard shortcut, and when I do it manually with the mouse it just doesn't work, and it keeps writing in English instead. I've tried installing some iBus things, and added Arabic-kbd (m17n), but it still remains messy, let alone not having the same layout, and all I want is to get back to NORMAL. So, please, is there any way to reset or initialize these keyboard-related settings, so I can get back to normal and stop using the Mac just to type in Arabic, or so often using XP over VBox? And please, no re-install option! I just can't back up all my work right now, and there are a lot of tasks waiting for me to get them done. Thanks for any kind of support :)
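
    As one way to test whether the layouts themselves work outside the indicator applet, a hedged setxkbmap sketch (layout names as shipped in 12.04's xkb data; the change lasts only for the current session):

        # Set English + Arabic layouts with Alt+Shift to toggle between them.
        setxkbmap -layout "us,ara" -option "grp:alt_shift_toggle"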

    Read the article

  • What's the progress on Haskell records?

    - by mmh
    Recently I stumbled once again on the issues with Haskell's records, in particular the uniqueness of field names (it's a pain ...). I already read A proposal for records in Haskell by SPJ and Greg Morrisett, but its last update was in 2003. Another paper, Lightweight Extensible Records for Haskell by SPJ and Mark Jones, is even older: it's from a Haskell workshop in 1999. Now I wonder whether the process of giving Haskell new records has made any progress. Does anybody know something about it, or can point me to some further reading?
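
    For anyone who hasn't hit it, the pain point is that each record field selector is an ordinary top-level function, so field names share one namespace per module. A minimal sketch of code that GHC of that era rejects:

        -- Both records want a field called `name`, but the generated
        -- selector functions collide at the top level:
        data Person  = Person  { name :: String }
        data Company = Company { name :: String }  -- error: Multiple declarations of `name'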

    Read the article

  • Are Clojure, Scala and others restrained by the JVM (vs. the CLR)?

    - by jia93
    The Java implementors seem slow to adopt language improvements: compare C# (with full closures, expression trees, LINQ, etc.) to Java, and even the push-back of some features to Java 8 will still leave it behind the current implementation of C#. However, since I don't intend to use either Java or C#, that particular language war isn't of much interest to me; I'm more concerned with the JVM vs. the CLR. Does this lagging behind also apply to the JVM? Will Scala, Clojure, etc. be able to continue to innovate, and to achieve optimal performance, on top of a slowly progressing underlying VM such as the JVM? Is Clojure/Scala restrained at present by JVM limitations?

    Read the article

  • How to completely remove wicd from 12.04?

    - by danijelc
    I had issues with WiFi, so I removed Network Manager. I booted into Windows (dual-booting Windows 7 and 12.04) to download wicd 1.7.2.4, extracted it and installed it. However, wicd worked properly with a wired connection but could not connect to WiFi. So I re-installed Network Manager, which works properly now, and wicd connects to WiFi too. As soon as I disconnect from Network Manager, wicd connects only to wired. At this point I wanted to remove wicd, but apt-get reports no wicd installed; I can't see it in Synaptic or the Software Center, only in the Applications list under Dash home. Of course the wicd icon shows connections (and reports being connected), so my system shows both the wicd and Network Manager applets in the top panel. When running dpkg commands, wicd is not listed as an installed package. Any suggestion how to remove wicd? Consider that I started with Ubuntu just a few months ago, so my knowledge base is limited.

    Read the article

  • DotNetNuke 5.4.1 Released

    I am happy to announce the release of DotNetNuke 5.4.1, which corrects the major issues which slipped through the QA process for 5.4. While we try to do a good job in testing our releases, our recent efforts for 5.3 and 5.4 have fallen short of the mark. We are currently working with a small team of commercial module developers and the core team to put a better public beta testing process in place that will help augment our own internal testing. Ultimately, community testing is the only testing that...

    Read the article

  • Throttle and overheating on Dell XPS Studio 1645

    - by Ross
    I realise there is an older thread on this very subject, but that seems to be pretty dead. I just got a Dell Studio XPS 1645 laptop, and the fan noise and overheating are pretty ridiculous. This is actually a well-known problem with the laptop that is apparently solved by the combination of a BIOS update and the purchase of Dell's 130W charger. I plan on buying this charger as soon as possible; however, I've noticed that since installing Ubuntu the fan noise has become more constant and the overheating is quite a bit worse too. I've had to turn it off twice to let it cool down for an hour or so, because it starts seriously affecting the performance. It makes watching things, listening to music or leaving the laptop on while I sleep a real pain. If anyone has some new information on this issue or could help out in any way at all, I'd be very grateful. Thanks.

    Read the article

  • Trim on encrypted SSD--Urandom first?

    - by cb474
    My understanding (I'm not sure I'm getting this all right) is that if one uses TRIM on an encrypted SSD, it defeats some of the security benefits, because the drive will write zeros to empty space (as files are deleted). See: http://www.askubuntu.com/questions/115823/trim-on-an-encrypted-ssd and: http://asalor.blogspot.com/2011/08/trim-dm-crypt-problems.html My question is: from the perspective of SSD performance and the functioning of TRIM, would it therefore be better simply to zero out the SSD before setting up an encrypted system, rather than writing random data to the drive with urandom, as one usually does? Would this basically leave one with the same level of security anyway? And, more importantly, would it better enable TRIM to work as intended with the encrypted SSD?
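
    For concreteness, the two drive-preparation approaches being compared look like this hedged sketch (replace /dev/sdX with the actual device; both commands irreversibly wipe it):

        # Usual approach: fill the drive with random data before encrypting.
        dd if=/dev/urandom of=/dev/sdX bs=1M

        # Alternative discussed here: zero the drive instead.
        dd if=/dev/zero of=/dev/sdX bs=1M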

    Read the article

  • How does URL redirection affect SEO?

    - by Costa
    The following paragraph is from Google's SEO guide: "Google is good at crawling all types of URL structures, even if they're quite complex, but spending the time to make your URLs as simple as possible for both users and search engines can help. Some webmasters try to achieve this by rewriting their dynamic URLs to static ones; while Google is fine with this, we'd like to note that this is an advanced procedure and if done incorrectly, could cause crawling issues with your site." What makes a URL-rewriting implementation incorrect for Googlebot? I am using the ASP.NET 3.5 framework. Thanks
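
    For context, the kind of rewriting in question on ASP.NET 3.5 often looks like this hedged Global.asax sketch (the URL scheme, page name and parsing are hypothetical):

        // Global.asax.cs - map a static-looking URL onto the real dynamic page.
        void Application_BeginRequest(object sender, EventArgs e)
        {
            string path = Request.Path;               // e.g. "/products/42.html"
            var match = System.Text.RegularExpressions.Regex.Match(
                path, @"^/products/(\d+)\.html$");
            if (match.Success)
            {
                // Googlebot sees /products/42.html; the app runs Product.aspx.
                Context.RewritePath("/Product.aspx?id=" + match.Groups[1].Value);
            }
        }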

    Read the article

  • Clip Grab for Ubuntu 12.04

    - by John Bush
    Having issues downloading the newest version of ClipGrab (http://clipgrab.de/en): I tried to install it in the terminal, tried finding the software in the Ubuntu Software Center, and have even tried extracting the folder the software comes in, with no luck. When trying to install in the terminal, it gives me an error about my PPA. No luck whatsoever. I love this program; it has worked for several months, and it is annoying that I cannot use it now. Please respond; I have had no luck with responses either.

    Read the article

  • Database Insider Newsletter: February 2011 Edition Available

    - by jenny.gelhausen
    The February edition of the Database Insider Newsletter is now available. This edition covers: the upcoming IOUG Day of Real World Performance Tour; what's coming for Collaborate 2011; how Oracle helps you steer clear of security pitfalls; and much more... Enjoy!

    Read the article

  • Setting coding priorities

    - by dotnetdev
    Hi, In the dev shops I've worked in, nobody has ever mentioned "coding priorities". I read about this in a book or on a site somewhere; it sets the expectation of which priority should come first in the code. In places where this is not specified, what should the first priority be? It may sound simple to say "do what the business need requires", but that could be at the expense of performance or maintainability. Many people say maintainability first, regardless; some say fulfill the need, regardless. I am a young developer, so I am probably missing the point somewhere. Of course, programming is engineering, and it is tough because you can never have the perfect solution. Thanks

    Read the article

  • Lower your Applications Infrastructure Cost with Oracle Database 11g

    - by john.brust
    If you missed our live Oracle Database 11g Release 2 webcast last Friday, the replay is available. Join us for the on-demand free webcast, in which Mark Townsend, Vice President of Oracle Database Product Management, discusses how running your Oracle applications (Oracle E-Business Suite, Oracle's PeopleSoft, and Oracle's Siebel) on Oracle Database 11g can improve performance and scalability, eliminate downtime, and reduce IT infrastructure costs. In the Q&A segment, Mark answers questions about compression, virtual machines, Oracle Active Data Guard, online application upgrades, and much more. Note: turn off pop-up blockers if the slides do not advance automatically.

    Read the article

  • How to properly URL/domain forward

    - by NRGdallas
    No clue on a title for this; someone feel free to suggest an edit. I have a client that has a website. He owns around 200 domains and wants each domain to contain content from the main website. The header, footer, and navigation bars will remain the same for each domain, but the actual page content will vary (obviously raising duplicate-content issues; open to suggestions). He wants each individual page to be its own separate domain, rather than a URL within the main domain (page1.com, page2.com, etc. - NOT site.com/page1.html - although the file is actually hosted at site.com/page1.html; all links will direct to site.com/whatever accordingly). What would be the best place to start reading/learning on how to do this, and what concerns/considerations should be kept in mind?
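
    Mechanically, one hedged way to get "page1.com serves site.com/page1.html" is a per-domain reverse-proxy rewrite. An Apache sketch (domain and path names are hypothetical; mod_rewrite and mod_proxy must be enabled, and the duplicate-content question remains whichever way this is wired up):

        # Hypothetical vhost for one of the ~200 domains.
        <VirtualHost *:80>
            ServerName page1.com
            RewriteEngine On
            # Serve the main site's page under this domain's root.
            RewriteRule ^/$ http://site.com/page1.html [P]
        </VirtualHost>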

    Read the article

  • Tracking referrals between profiles on the same domain in Google Analytics

    - by doctororange
    I have a website at mydomain.com that uses Analytics. I have a blog that resides at mydomain.com/blog/, which also uses Analytics. They are on different profiles. The main site uses something like: _gaq.push(['_setAccount', 'UA-XXXXXXXX-6']); while the blog uses: _gaq.push(['_setAccount', 'UA-XXXXXXXX-7']); _gaq.push(['_setCookiePath', '/blog/']); My issue is that this seems not to track referrals from the blog through to the main site when, for instance, the logo which links to the main site is clicked. Ideally, I would like the clicks of this logo to report that the source was mydomain.com/blog/, but because they are on the same domain they seem to register as direct traffic. Have I missed a step in my configuration, or will I have to resort to linking to something like mydomain.com?ref=blog? Thank you.
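
    If the ?ref=blog route ends up being the fallback, classic ga.js at least understands campaign parameters natively, so the link can be tagged instead of inventing a custom parameter. A hedged sketch of the logo link (the parameter values are just examples):

        <!-- Tag the blog's logo link so the main-site profile attributes
             the visit to the blog instead of lumping it into direct traffic. -->
        <a href="http://mydomain.com/?utm_source=blog&utm_medium=referral&utm_campaign=blog-logo">
          Home
        </a>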

    Read the article
