Search Results

Search found 21813 results on 873 pages for 'oracle cloud office'.


  • October 2012 Security "Critical Patch Update" (CPU) information and downloads released

    - by user12244672
    The October 2012 security "Critical Patch Update" information and downloads are now available from My Oracle Support (MOS). See http://www.oracle.com/technetwork/topics/security/alerts-086861.html and, in particular, Document 1475188.1 on My Oracle Support (MOS), http://support.oracle.com, which includes security CVE mappings for Oracle Sun products. For Solaris 11, Doc 1475188.1 points to the relevant SRUs containing the fixes for each issue. SRU12.4 was released on the CPU date and contains the current cumulative security fixes for the Solaris 11 OS. For Solaris 10, we take a copy of the Recommended Solaris OS patchset containing the relevant security fixes and rename it as the October CPU patchset on MOS; see the link provided in Doc 1475188.1. Doc 1475188.1 also contains references for firmware, etc., and links to other useful security documentation, including information on Userland/FOSS vulnerabilities and fixes at https://blogs.oracle.com/sunsecurity/
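    On Solaris 11, picking up the SRU named in Doc 1475188.1 is a normal IPS update. A minimal sketch, assuming the system's 'solaris' publisher is already pointed at the Oracle support repository (SRUs are not published to the public release repository):

        # Verify the publisher origin, then pull the latest SRU with its cumulative security fixes
        pkg publisher solaris
        sudo pkg update --accept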

    Read the article

  • A Myriad of Options

    - by Mark Hesse
    I am currently working with a customer that is close to outgrowing their Exadata X2-2 half rack in both compute and storage capacity. The platform is used for one of their larger data warehouse applications, and the move to Exadata almost two years ago has been a resounding success, forcing them to grow the platform sooner than anticipated. At a recent planning meeting, we started looking at the options for expansion and developed five alternatives, all of which meet or exceed their growth requirements, yet have different pros and cons in terms of the impact on their production and test environments. The options include an in-rack upgrade to a full rack of Exadata using the recently released X3-2 platform (an option that even applies to an older V2 rack), multi-rack cabling the existing X2-2 to another full or half rack X2-2 (utilizing both the compute and storage capacity in the other rack), or simply adding a new X3-2 half rack (taking advantage of the added compute and flash performance in the X3-2). While the decision is yet to be made, it had me thinking that one of the benefits of Exadata over a traditional database deployment is that when the time comes to expand the platform, there is a myriad of options.

    Read the article

  • Amazon - install a complete server on EBS

    - by user1169575
    Is it possible to install a full working OS with a webserver, db, and all needed stuff on EBS storage? If so, would I immediately gain benefits by mounting this EBS on a better instance? Otherwise (if I cannot install a complete image, or if you don't think it's reasonable), can I install the software so that I only need to mount the EBS on a new instance to have it working? I purchased a Medium Reserved Instance, but when the need arises for a better instance I'd like to move the whole db/website: I'd simply buy a better instance and then attach the EBS. Is it possible? I'm imagining it like a hard drive that would be mounted on a better server. Of course, more RAM would allow me to increase caching limits, and that's OK, but I don't want to reinstall anything (the main website is a magentocommerce and it's pretty painful to move it). P.S. Is the Standard EBS (100 IOPS) valid or do I need to choose Provisioned IOPS (up to 1000 IOPS)?
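    What the question describes is essentially an EBS-backed instance, whose root volume can indeed be moved between instances. A sketch of the volume move using the modern AWS CLI (the instance and volume IDs are hypothetical placeholders; a root volume can only be detached while its instance is stopped):

        # Stop the old instance, move its root EBS volume, attach it to the bigger one
        aws ec2 stop-instances --instance-ids i-aaaa
        aws ec2 detach-volume --volume-id vol-xxxx
        aws ec2 attach-volume --volume-id vol-xxxx --instance-id i-bbbb --device /dev/sda1
        aws ec2 start-instances --instance-ids i-bbbb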

    Read the article

  • Create .exe and .msi Installers for your JavaFX Apps

    - by mikew_co
    Well, I wanted to figure out how the new JavaFX native packaging worked, so with a little work I have written up my findings: Create a Windows Native Installer and EXE with JavaFX and NetBeans 7.2. All the information is in the articles I have linked to at the end of the page. However, I hopefully have pulled together all the key facts in one place. In addition, I tried to document all the problems I ran into in the troubleshooting section. So what is the end result? With everything installed, building an application in NetBeans creates an EXE installer, an MSI installer, and an EXE file to execute your application. Really slick, professional stuff. This is a great addition to the whole Java platform.

    Read the article

  • Amazon EC2 as load balanced/failover solution

    - by sugiggs
    Hi All, I'm thinking of an idea but not sure of its pros/cons. At the moment we are hosting our website on a dedicated server. As a failover/load-balanced solution, I'm thinking of using Amazon EC2+EBS. The files can be rsynced, and MySQL can be set up as master-master replication. When the load is high, I can bring up the machine, give it some time to sync, and load-balance the traffic there. Is it doable? Any link where I can read more on this?
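    A minimal sketch of the file-sync half of that idea, assuming the web root lives in /var/www and the EC2 host is reachable over SSH with key authentication (hostname, key path, and directories are placeholders):

        # Push the web root to the standby EC2 instance over SSH; -a preserves
        # permissions/ownership, -z compresses, --delete keeps the mirror exact
        rsync -az --delete -e "ssh -i ~/.ssh/ec2-key.pem" /var/www/ user@ec2-host:/var/www/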

    Read the article

  • Logs show lots of user attempts from unknown IP

    - by rodling
    I lost access to my instance, which I host on AWS; key pair authentication stopped working. I detached the volume, attached it to a new instance, and what I found in the logs was a long list of entries like:
        Nov 6 20:15:32 domU-12-31-39-01-7E-8A sshd[4925]: Invalid user cyrus from 210.193.52.113
        Nov 6 20:15:32 domU-12-31-39-01-7E-8A sshd[4925]: input_userauth_request: invalid user cyrus [preauth]
        Nov 6 20:15:33 domU-12-31-39-01-7E-8A sshd[4925]: Received disconnect from 210.193.52.113: 11: Bye Bye [preauth]
    where "cyrus" is replaced by hundreds if not thousands of common names and terms. What could this be? A brute force attack or something else malicious? I traced the IP to Singapore, and I have no connection to Singapore. My thought is that this was a DoS attack, since I lost access and the server seemed to stop working. I'm not too versed in this, but ideas and solutions for this issue are welcome.
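    Log lines like these are the classic signature of an automated SSH brute-force scan (dictionary usernames from one source IP) rather than a DoS. A short sketch for sizing it up and shutting the door, assuming a Debian/Ubuntu-style layout (on other distros the log may be /var/log/secure and service names can differ):

        # Count failed "Invalid user" attempts per source IP, busiest first
        grep 'Invalid user' /var/log/auth.log | awk '{print $NF}' | sort | uniq -c | sort -rn | head
        # In /etc/ssh/sshd_config, remove the attack surface:
        #   PasswordAuthentication no
        #   PermitRootLogin without-password
        sudo service ssh reload
        # Optionally let fail2ban ban repeat offenders automatically
        sudo apt-get install fail2ban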

    Read the article

  • Where can someone store >100GB of pictures online? [closed]

    - by sbi
    A person who is not very computer-savvy needs to store 130GB of photos. The key parameters are:
    - a non-negligible probability that the company selling the storage will still exist, and the data still be accessible, for at least five years
    - data should be considered safe once uploaded
    - reasonable terms of service: Google Drive reserving the right to literally do anything they want with their users' data is not acceptable; the possibility that the CIA might look at those pictures is not considered a threat
    - easy to use from Windows, preferably as a drive
    - no nerve-wracking limitations ("cannot upload 10GB/day" or "files 500MB" etc.) that serve no purpose other than pushing the user to the next-higher price plan
    - some upgrade plan: there are currently 10-30GB of new photos per year, with a tendency to increase, which might bust a 150GB limit next January
    - ability to somehow sort the pictures: currently they are sorted into folders, but something alike (tags) would be just as good, if easy enough to apply
    - of course, the pricing is important (although there's a reason this is the last bullet; reasonable data safety is considered more important)
    Nice-to-have, but not necessary, features would be:
    - additional features related to photos (thumbnail generation, album sharing etc.)
    - access from the web and from platforms other than Windows (smart phones)
    Let me stress this again: the person in need of this is able to copy pictures from the camera to the computer, can copy files in Explorer, and uses a web email service. That's about it; there's almost no understanding of what happens under the hood.

    Read the article

  • Mounting an Amazon EC2 instance on Mac OS X

    - by hinghoo
    I've got public key authentication working between my Mac OS X machine and an Amazon EC2 instance, so that from the command line I can just type the following and it works:
        ssh root@[IPAddressOfEC2Instance]
    The strange thing is that I can't seem to mount the instance using "Connect to Server" in the Finder. I've tried typing the following server addresses into the "Connect to Server" dialog:
        ftps://[IPAddressOfEC2Instance]
        ftps://root@[IPAddressOfEC2Instance]
    But all I get is "You entered an invalid username or password. Please try again." The root user on the EC2 instance has a blank password and I'm wondering if it has to do with that; however, I can't change the password for the root user. I can use an SFTP client to connect to the machine, I just can't mount it with "Connect to Server". It asks for a username and password (for a registered user), and root/[blank] is not accepted. The other option is "Guest", which brings up an empty folder in the Finder.
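    One likely explanation: the Finder's "Connect to Server" speaks AFP/SMB/FTP/WebDAV, not SFTP, so ftps:// URLs never reach sshd and the working key pair never comes into play. A way to get a real mounted volume over the existing SSH access is sshfs; a sketch, assuming a FUSE for OS X (or MacFUSE) plus sshfs installation, with the same placeholder address:

        # Mount the instance's filesystem at ~/ec2 over SSH, reusing the key pair
        mkdir -p ~/ec2
        sshfs root@[IPAddressOfEC2Instance]:/ ~/ec2 -o volname=EC2
        # ... and unmount when done
        umount ~/ec2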

    Read the article

  • 12.1.3 Spares Management Enhancements Transfer of Information (TOI)

    - by Oracle_EBS
    Transfer of Information (TOI) presentation is available. It covers the following enhancements made to the EBS Spares Management product:
    - Restrict Sources with no Shipping Network definition
    - Create Internal Order when Source is Manned Warehouse
    - Display Delivery status in Parts Requirement UI
    - Order Sources by distance when Shipping cost remains same
    - Calculate Parts Shipping Distances using Navteq Data
    - Consider Warehouse Calendar to calculate Parts Arrival Date
    - Create Requisitions in Operating Unit of Destination Inventory Org
    - Uptake of HZ address structure in Parts Requirement UI

    Read the article

  • What kind of hosting is used for *tube sites?

    - by playcat
    Hello. When a site grows from a just-for-fun project to a site with a bigger load of visitors, and you want to enable them to upload videos, you might find yourself in need of better hosting, including a dedicated server and unlimited web traffic (or some reasonable limit). So, if people can upload their videos, and if the page gets around 1000-10000 visitors per day, what kind of hosting is there to choose from? What is needed in that case? Thx

    Read the article

  • Tell us what you want in 6.6

    - by Reggie
    Now that we have finished 6.5, it's time to really start gearing up for 6.6. We have many great feature ideas, but we want to hear from you. To help with that, we're running a poll on the website where you can vote for the features you would like to see in the next version of the best MySQL ADO.NET connector on the planet. You can find the poll at http://dev.mysql.com by scrolling near the bottom of the page. Thanks for your time, and please go vote.

    Read the article

  • Problems with Ranking when Using Sourcing Rules And ASLs From Blanket Agreements?

    - by LisaO
    Are you using Sourcing Rules and the Approved Supplier List with Blanket Purchase Agreements (BPA), and does it seem like Ranking is not working correctly? For example: the Sourcing Rule being used has effective dates from 01-APR to 31-MAR for 2013, 2014 and 2015. One BPA is defined for Supplier A, which was originally set to Rank 1 with 100% allocation. A new BPA was created for the same item and with the same effective dates as the current BPA, but for a different Supplier. When Generate Sourcing Rules is run, it adds the new BPA/Supplier to the Sourcing Rule, but it is added as Rank 1, with the old rule changed to Rank 2. For complete information refer to Doc ID 1678447.1, Generate Sourcing Rules And ASLs From Blanket Agreements Ranking not Behaving As Expected. Still have questions? Access the Procurement Community and, using the 'Start a Discussion' link, post your question.

    Read the article

  • Tweaking Hudson memory usage

    - by rovarghe
    Hudson 3.1 has some performance optimizations that greatly reduce its memory footprint. Prior to this, Hudson always held the entire data model (all jobs and all builds) in memory, which affected scalability. Some installations configured heap sizes in excess of 1GB to counteract this. Hudson 3.1.x maintains an MRU cache and only loads jobs and builds as they are required. Because of the inability to change existing APIs and remain backward compatible with plugins, there were limits to how far we could go with this approach. Memory optimizations almost always come with a related cost; in this case it is the additional I/O that has to be performed to load data on request. On a small site with frequent traffic this is usually not noticeable, since the MRU cache will usually hold on to all the data. A large site with infrequent traffic might experience some delays when the first request hits the server after a long gap. If you have a large heap and are able to allocate more memory, the cache settings can be adjusted to take advantage of this and even go back to pre-3.1 behavior. All the cache settings can be passed as options to the JVM container (Tomcat or the default Jetty container) using the -D option. There are two caches, independent of each other: one for jobs and the other for builds.
    For the jobs cache:
    - hudson.jobs.cache.evict_in_seconds (default=60): Seconds from last access (could be because of a servlet request or a background cron thread) before a job is purged from the cache. Set this to 0 to never purge based on time.
    - hudson.jobs.cache.initial_capacity (default=1024): Initial number of jobs the cache can accommodate. Setting this to the number of jobs you typically display on your Hudson landing page or home page will speed up consecutive access to that page. If the default is too large, you may consider downsizing and using that memory for the builds cache instead.
    - hudson.jobs.cache.max_entries (default=1024): Maximum number of jobs in the cache. The default is large enough for most installations, but if you find I/O activity whenever you access the Hudson home page, you might consider increasing this. First verify whether the I/O is caused by frequent eviction (see above) rather than by the cache not being large enough.
    For the builds cache: the builds cache is used to store Build objects as they are read from storage. Typically this happens when a user drills down into the details of a particular job from the Hudson home page. The cache is shared among builds for different jobs, since in most installations all jobs are not accessed with the same frequency, so a per-job builds cache would be a waste of memory.
    - hudson.job.builds.cache.evict_in_seconds (default=60): Same as the equivalent jobs cache setting, applied to builds.
    - hudson.job.builds.cache.initial_capacity (default=512): Same as the equivalent jobs cache setting. Note the smaller initial size. If your site stores a large number of builds and frequently accesses many of them, you might consider bumping this up.
    - hudson.job.builds.cache.max_entries (default=10240): The default maximum is large enough for most installations. The builds cache holds bigger objects, so be careful about increasing this upper limit; see the section on monitoring below.
    Sample usage:
        java -jar hudson-war-3.1.2-SNAPSHOT.war -Dhudson.jobs.cache.evict_in_seconds=300 \
             -Dhudson.job.builds.cache.evict_in_seconds=300
    Monitoring cache usage: the 'jmap' tool that comes with the JDK can be used to monitor cache performance in an indirect way, by looking at the number of Job and Build objects in each cache. Find the PID of the Hudson instance and run:
        $ jmap -histo:live <pid> | grep 'hudson.model.*Lazy.*Key$'
    Here's a sample output:
        num     #instances    #bytes  class name
        523:            28       896  hudson.model.RunMap$LazyRunValue$Key
        1200:            3        96  hudson.model.LazyTopLevelItem$Key
    These are the keys to the jobs (LazyTopLevelItem$Key) and builds (RunMap$LazyRunValue$Key) in the caches, so counting the keys is a good indicator of the number of items in each cache at any given moment. The sizes in bytes can be ignored: they are the sizes of the keys, not of the objects they hold; those sizes can only be obtained with a profiler. From the output above we can conclude that there are 3 jobs and 28 builds in memory (the 28 builds could all be from 1 job or spread across all 3). Over time on an idle system these should get evicted and the caches should be empty; in practice, because of background cron threads and triggers, jobs rarely fall to zero. Access to a job or a build by a cron thread resets its eviction timer.

    Read the article

  • Garbage Collection Basics

    - by mikew_co
    Java is an awesome programming language and platform. One of its better features is automatic garbage collection. Ever wondered how that works? I have written an online web course outlining the basics. Much of what is included has been published before in various white papers and such; however, this is updated for JDK 7 and includes some nice illustrations of the steps involved. Hope you like it: Garbage Collection Basics. A follow-on web course on the G1 garbage collector should follow in a week or so.

    Read the article

  • BIP 11g Dynamic SQL

    - by Tim Dexter
    Back in the 10g release, if you wanted something beyond the standard query for your report extract, you needed to break out your favorite text editor. You gotta love 'vi' and hate emacs, am I right? And get to building a data template; they were/are lovely to write, such fun ... not! It's no fun writing them by hand, but you do get to do some cool stuff around the data extract, including dynamic SQL. By that I mean the ability to add content dynamically to your query at runtime. With 11g, we spoiled you with a visual builder: no more vi or notepad sessions, a friendly drag-and-drop interface allowing you to build hierarchical data sets, calculated columns, summary columns, etc. You can still create dynamic SQL statements; it's just not so well documented right now, so in lieu of doc updates here's the skinny.
    If you check out the 10g process to create dynamic SQL in the docs, you need to create a data trigger function where you assign the dynamic SQL to a global variable that's matched in your report SQL. In 11g the process is really the same; BI Publisher just provides a bit more help to define what trigger code needs to be called. You still need to create the function and place it inside a package in the db. Here's a simple PL/SQL package with the 'before data' function trigger.
    Spec:
        create or replace PACKAGE BIREPORTS AS
          whereCols varchar2(2000);
          FUNCTION beforeReportTrig return boolean;
        end BIREPORTS;
    Body:
        create or replace PACKAGE BODY BIREPORTS AS
          FUNCTION beforeReportTrig return boolean AS
          BEGIN
            whereCols := ' and d.department_id = 100';
            RETURN true;
          END beforeReportTrig;
        END BIREPORTS;
    You'll notice the additional where clause (whereCols, declared as a public variable) is hard-coded. I'll cover parameterizing that in my next post; if you cannot wait, check the 10g docs for an example. I have my package compiling successfully in the db. Now, on to the BIP data model definition.
    1. Create a new data model and go ahead and create your query(s) as you would normally.
    2. In the query dialog box, add in the variables you want replaced at runtime using an ampersand rather than a colon, e.g. &whereCols:
        select d.DEPARTMENT_NAME, ...
        from   "OE"."EMPLOYEES" e,
               "OE"."DEPARTMENTS" d
        where  d."DEPARTMENT_ID" = e."DEPARTMENT_ID" &whereCols
    Note that 'whereCols' matches the global variable name in our package. When you click OK to clear the dialog, you'll be asked for a default value for the variable; just use ' and 1=1'. That leading space is important to keep the SQL valid, i.e. required whitespace. This value will be used for the where clause in case it's not set by the function code.
    3. Now click on the Event Triggers tree node and create a new trigger of the type Before Data. Type in the default package name (in my example, 'BIREPORTS'), then hit the update button to get BIP to fetch the valid functions. In my case the BEFOREREPORTTRIG function appears; select it (or your own function) and shuttle it across.
    4. Save your data model and now test it. For now, you can update the where clause via the PL/SQL package. Next time ... parameterizing the dynamic clause.

    Read the article

  • Webcast: CRM Foundations - Notes, Attachments and Folder Technology

    - by LuciaC
    Webcast: CRM Foundations - Notes, Attachments and Folder Technology
    Date: November 21, 2013 at 11am ET, 10am CT, 8am PT, 4pm GMT, 9.30pm IST
    Don't miss this webcast if you want to know how to get the most out of using Notes and learn how to leverage best practices for Folder technology and Attachments. This session will help users who are struggling with any of these topics understand how to use them better and more efficiently.
    TOPICS WILL INCLUDE:
    - Set up and use of Notes
    - Notes Security
    - Attachments and their use throughout CRM
    - Folder Technology
    - Any new functionality related to these topics in release 12.2
    For more details and how to register see Doc ID 1592459.1. Remember that you can access a full listing of all future webcasts as well as replays from Doc ID 7409661.1.

    Read the article

  • EC2 Image to start

    - by HD.
    I'm starting to test EC2 for a couple of new projects. I need to choose an AMI (Amazon Machine Image), and Amazon offered me Fedora Core 8 as the first option, which is a very old version of one of my favorite distributions. There are a lot of choices, but it's not clear to me which one is the best option. I have my own reasons for choosing a distro and a version when I need to install a new server, but I don't know if I can apply the same to EC2. I know there is a beta for RHEL; how stable is this beta? How can I choose between all the CentOS AMIs in the list? So this is my question: do you recommend an AMI to start with on EC2? Thanks
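    One way to whittle down the AMI list is to query it rather than scroll it. A sketch using the modern AWS CLI (the owner alias and filter values are illustrative; adjust them to the distro you are after):

        # List the five newest 64-bit CentOS images, showing ID, name, and creation date
        aws ec2 describe-images --owners aws-marketplace \
            --filters "Name=name,Values=*CentOS*" "Name=architecture,Values=x86_64" \
            --query 'sort_by(Images,&CreationDate)[-5:].[ImageId,Name,CreationDate]' \
            --output table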

    Read the article

  • Iterative and Incremental Principle Series 5: Conclusion

    - by llowitz
    Thank you for joining me for the final segment in the Iterative and Incremental series. During yesterday's segment, I discussed Iteration Planning, and specifically how I planned my daily exercise (iteration) each morning by assessing multiple factors, while following my overall implementation plan. As I mentioned in yesterday's blog, regardless of the type of exercise or how many increment sets I decide to complete each day, I apply 6-minute interval sets and a timebox approach. When the 6 minutes are up, I stop the interval, even if I have more to give, saving the extra energy to apply to my next interval set. Timeboxes are used to manage iterations. Once the pre-determined iteration duration is reached, whether it is 2 weeks or 6 weeks or somewhere in between, the iteration is complete. Iteration group items (requirements) not fully addressed, in relation to the iteration goal, are addressed in the next iteration. This approach helps eliminate the "rolling deadline" and better allows the project manager to assess project progress earlier and more frequently than in traditional approaches. Not only do smaller, more frequent milestones allow project managers to better assess potential schedule risks and slips, but process improvement is encouraged. Even in my simple example, I learned, after a few interval sets, not to sprint uphill! Now I plan my route more efficiently to ensure that I sprint on a level surface, to reduce the risk of not completing my increment. Project managers have often told me that they used an iterative and incremental approach long before OUM. An effective project manager naturally organizes project work consistent with this principle, but a key benefit of OUM is that it formalizes this approach so it happens by design rather than by chance. I hope this series has encouraged you to think about additional ways you can incorporate the iterative and incremental principle into your daily and project life. I further hope that you will share your thoughts and experiences with the rest of us.

    Read the article

  • KVM vs Hyper-V. Which hypervisor is best for windows guests?

    - by user198851
    I am currently testing OpenStack for Windows guests (XP and 7). I have deployed OpenStack "all in one" on a system with the following specs:
    - Processor: Core i5 (4 physical cores, 8 threads with HT technology)
    - RAM: 8 GB
    - HD: 500 GB
    I have created 4 Windows XP guests, each with 512MB RAM and 1 VCPU. On each Windows guest I have installed Visual Studio 2008 only. In nova.conf the CPU overcommit ratio is 2 for better performance (as mentioned in the OpenStack operations guide). I am using KVM as the hypervisor. I have observed poor performance when simultaneously using Visual Studio in all four Windows instances. How can I improve performance? Should I use KVM or Hyper-V? Or any other suggestion?
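    Before switching hypervisors, it is worth checking whether the guests use paravirtualized virtio disk and network devices; fully emulated IDE/e1000 devices are a common cause of sluggish Windows guests on KVM, and the guests need the Windows virtio drivers installed to use them. A quick check on the compute node (the domain name is a placeholder; list the real ones first):

        # List libvirt domains, then inspect one for virtio disk/NIC buses
        virsh list --all
        virsh dumpxml instance-00000001 | grep -i virtio   # no output => emulated devices in use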

    Read the article

  • Interesting fact #123423

    - by Tim Dexter
    A question from a customer on an internal mailing list, succinctly answered by RTF Template God Hok-Min.
    Q: What's the upper limit for a sum calculation, in terms of the largest number BIP can handle?
    A: Internally, the XSL-T processor uses double precision. Therefore the upper limit and precision will be the same as for a double (IEEE 754 double-precision binary floating-point format, binary64): approximately 16 significant decimal digits, with a maximum of 1.7976931348623157 x 10^308.
    So, now you know :)
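    The 16-significant-digit ceiling is easy to demonstrate from a shell, since awk also computes in IEEE 754 doubles (a minimal illustration of the same limit, not of BIP itself): 2^53 + 1 is the first integer a double cannot represent, so adding 1 is silently lost.

        # 9007199254740992 is 2^53; adding 1 rounds back down to 2^53
        awk 'BEGIN { printf "%.0f\n", 9007199254740992 + 1 }'
        # prints: 9007199254740992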

    Read the article

  • Building MySQL with boost on windows

    - by user13177919
    As you've probably heard already, MySQL needs Boost to build. However, in the good ol' MySQL tradition, the above link only gives you the instructions on how to build it on Linux, and completely ignores the fact that there are other OSes too that people develop on. To fill in that gap, I've compiled a small step-by-step guide on how to do it on Windows. Note that I always, as a principle, build out-of-source. The typical setup I have is:
        bzr clone lp:~mysql/mysql-server/5.7 mysql-trunk
        cd mysql-trunk
        mkdir bld
        cd bld
        cmake -DWITH_DEBUG=1 -DMYSQL_PROJECT_NAME=mysql-trunk ..
        devenv /build debug mysql-trunk.sln
    This has been tested to work for a 32-bit compile using VS2013 on a Windows 7 64-bit build. Note that you'll need other things too (bison, eventually openssl, etc.) that I will assume you already have set up.
    Steps:
    1. Download Boost 1.55.0. It's the *only* version that is known to work currently.
    2. Extract boost_1_55_0/ from the zip to c:\boost\boost_1_55_0
    3. Go to Control Panel/System/Environment variables and set WITH_BOOST=C:\boost\boost_1_55_0 in User variables. Make sure you restart your open command line terminal windows after this!
    4. If you're upgrading from a non-Boost build, remove your bld/ directory and create a new one.
    5. Run cmake as you'd typically do. You should get:
        -- Local boost dir C:/boost/boost_1_55_0
        -- Local boost zip LOCAL_BOOST_ZIP-NOTFOUND
        -- BOOST_VERSION_NUMBER is #define BOOST_VERSION 105500
        -- BOOST_INCLUDE_DIR C:/boost/boost_1_55_0
    6. Build as normal (devenv /build debug ...). It should work.

    Read the article

  • Universal Work Queue Quick Filter Examples

    - by LuciaC-Oracle
    If you use Universal Work Queue then it's likely that you will want to define and use your own Quick Filters. Quick Filters allow you to focus on specific work classes based on assigned criteria in a node. This makes it much easier for agents to view their work grouped in a meaningful way. How to create Universal Work Queue - Quick Filters (Doc ID 803163.1) gives two worked examples to help you understand how to create your own Quick Filters:
    - Adding a 'Resource Group' filter
    - Adding an 'Overdue Amounts' filter for use in Collections
    We hope you find these examples useful. Let us know by providing feedback on the document itself or, why not, post to the MOS Service Community with your experience and suggestions.

    Read the article

  • (My) Sun Ray 3i

    - by user13346636
    Last week, some Sun Ray devices were shown at the LASDEC exhibition. Afterward, they were brought back to the Aoyama Center, but not all of them found a place to be stored. So, two days ago, Iwasaki-san, one of the co-workers I've been close to (and who was at the exhibition), put a Sun Ray 3i (all-in-one with 21.5" screen) on my (shared) desk. Yay! I managed to get a Japanese keyboard, and now I can access my card and cardless sessions from Germany, and the performance is just great, as good as when I work from home in Hamburg. That's the way my desk looks now, almost as messy as my desk in Hamburg. And my back is very grateful.

    Read the article

  • Tutorial on Hudson, JUnit and Ant

    - by Grant Ronald
    Often when discussing ADF we show the features for developing applications. However, writing applications is only one part. Building in a team, integrating code, testing it... these are equally important to the success of the project. If you would like to find out how features in JDeveloper can help you build, maintain, integrate and test your application, then check out this tutorial.

    Read the article
