Search Results

Search found 27181 results on 1088 pages for 'oracle desktop virtualization'.


  • What does a VNC reflector basically do?

    - by honeybadger
     I am confused about what a VNC reflector does. From the documentation at http://sourceforge.net/projects/vnc-reflector/ I gather that it is a proxy sitting between a real VNC server (a host) and a number of VNC clients. My confusion is: 1. Does it make any changes to the incoming stream from the server? 2. Does it make one connection to the server to serve many clients, or is there one server connection per client? The documentation is not clear on this. Can anyone help?

    Read the article

  • Garbage Collection Basics

    - by mikew_co
     Java is an awesome programming language and platform. One of its better features is automatic garbage collection. Ever wondered how that works? I have written an online web course outlining the basics. Much of what is included has been published before in various white papers and such. However, this is updated for JDK 7 and includes some nice illustrations of the steps involved. Hope you like it: Garbage Collection Basics. A follow-on web course on the G1 garbage collector should follow in a week or so.

    Read the article

  • Application for generating a cross table

    - by Ajtak
     I need advice on whether there is an application for generating cross tables. I imagine it working like this: I enter the team names at the start, then type in the results, and the mirrored cell for the opposing team is filled in with the corresponding result automatically. I also want it to count the total score, etc. I hope I have explained it well. I would welcome a recommendation for any program; I don't know how to do this in Excel. Thank you a thousand times for any advice. http://imageshack.us/a/img849/9100/d70m.png

    Read the article

  • Problems with Ranking when Using Sourcing Rules And ASLs From Blanket Agreements?

    - by LisaO
     Are you using Sourcing Rules and Approved Supplier Lists with Blanket Purchase Agreements (BPAs), and does it seem like Ranking is not working correctly? For example: the Sourcing Rule being used has effective dates from 01-APR to 31-MAR for 2013, 2014 and 2015. One BPA is defined for Supplier A, which was originally set to Rank 1 with 100% allocation. A new BPA was then created for the same item and with the same effective dates as the current BPA, but for a different Supplier. When Generate Sourcing Rules is run it adds the new BPA/Supplier to the Sourcing Rule, but it's added as Rank 1, with the old rule changed to Rank 2. For complete information refer to Doc ID 1678447.1, Generate Sourcing Rules And ASLs From Blanket Agreements Ranking not Behaving As Expected. Still have questions? Access the Procurement Community and, using the 'Start a Discussion' link, post your question.

    Read the article

  • Slow Transfer Speeds from KVM host to client

    - by indian maiden
     I am trying to isolate the root cause of slow transfer speeds from my host OS to a KVM client. Both are Linux. Rsync locally on the host (192.168.1.72): rsync -auv --progress rut3.img /tmp/ [54.09MB/s]. Rsync to the client: rsync -auv --progress rut3.img 192.168.1.80:/tmp/ [25.52MB/s]. I realize that there will be some TCP overhead on the transfer, but over 50%? Can someone enlighten me on what could be slowing down the transfers so much?
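
     A quick, hedged way to narrow this down (not part of the original question; it assumes iperf is installed on both machines) is to measure the raw TCP path and the ssh path separately, so you can tell whether the loss comes from the virtual NIC or from rsync/ssh overhead:

        # On the guest (192.168.1.80): start an iperf server
        iperf -s

        # On the host: measure raw TCP throughput into the guest
        iperf -c 192.168.1.80

        # Still on the host: push the same file over plain ssh, to see how much
        # of the gap is ssh encryption overhead rather than the virtual NIC
        dd if=rut3.img bs=1M | ssh 192.168.1.80 'cat > /tmp/rut3.img'

     If iperf already tops out around 25 MB/s, the bottleneck is the virtual network device (e.g. an emulated NIC rather than virtio); if iperf is fast but the dd/ssh pipe is slow, the cost is in ssh, not in KVM.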

    Read the article

  • Webcast: CRM Foundations - Notes, Attachments and Folder Technology

    - by LuciaC
     Webcast: CRM Foundations - Notes, Attachments and Folder Technology
     Date: November 21, 2013 at 11am ET, 10am CT, 8am PT, 4pm GMT, 9.30pm IST
     Don't miss this webcast if you want to know how to get the most out of using Notes and learn how to leverage best practices for Folder technology and Attachments. This session will help users who are struggling with any of these topics understand how to use them better and more efficiently.
     TOPICS WILL INCLUDE:
     - Set up and use of Notes
     - Notes Security
     - Attachments and their use throughout CRM
     - Folder Technology
     - Any new functionality related to these topics in release 12.2
     For more details and how to register see Doc ID 1592459.1. Remember that you can access a full listing of all future webcasts as well as replays from Doc ID 7409661.1.

    Read the article

  • BIP 11g Dynamic SQL

    - by Tim Dexter
     Back in the 10g release, if you wanted something beyond the standard query for your report extract, you needed to break out your favorite text editor. You gotta love 'vi' and hate emacs, am I right? And get to building a data template; they were/are lovely to write, such fun ... not! It's not fun writing them by hand, but you do get to do some cool stuff around the data extract, including dynamic SQL. By that I mean the ability to add content dynamically to your query at runtime. With 11g, we spoiled you with a visual builder: no more vi or notepad sessions, a friendly drag and drop interface allowing you to build hierarchical data sets, calculated columns, summary columns, etc. You can still create the dynamic SQL statements; it's just not so well documented right now, so in lieu of doc updates here's the skinny.

     If you check out the 10g process to create dynamic SQL in the docs, you need to create a data trigger function where you assign the dynamic SQL to a global variable that's matched in your report SQL. In 11g the process is really the same; BI Publisher just provides a bit more help to define what trigger code needs to be called. You still need to create the function and place it inside a package in the db. Here's a simple PL/SQL package with the 'Before Data' function trigger.

     Spec:

        create or replace PACKAGE BIREPORTS AS
          whereCols varchar2(2000);
          FUNCTION beforeReportTrig return boolean;
        end BIREPORTS;

     Body:

        create or replace PACKAGE BODY BIREPORTS AS
          FUNCTION beforeReportTrig return boolean AS
          BEGIN
            whereCols := ' and d.department_id = 100';
            RETURN true;
          END beforeReportTrig;
        END BIREPORTS;

     You'll notice the additional where clause (whereCols - declared as a public variable) is hard coded. I'll cover parameterizing that in my next post. If you can't wait, check the 10g docs for an example. I have my package compiling successfully in the db. Now, onto the BIP data model definition.

     1. Create a new data model and go ahead and create your query(s) as you would normally.
     2. In the query dialog box, add in the variables you want replaced at runtime using an ampersand rather than a colon, e.g. &whereCols:

        select d.DEPARTMENT_NAME, ...
        from "OE"."EMPLOYEES" e, "OE"."DEPARTMENTS" d
        where d."DEPARTMENT_ID" = e."DEPARTMENT_ID" &whereCols

     Note that 'whereCols' matches the global variable name in our package. When you click OK to clear the dialog, you'll be asked for a default value for the variable; just use ' and 1=1'. That leading space is important to keep the SQL valid, i.e. required whitespace. This value will be used for the where clause in case it's not set by the function code.
     3. Now click on the Event Triggers tree node and create a new trigger of the type Before Data. Type in the default package name, in my example 'BIREPORTS', then hit the update button to get BIP to fetch the valid functions. In my case I get to see the BEFOREREPORTTRIG function; select it (or your own) and shuttle it across.
     4. Save your data model and now test it. For now, you can update the where clause via the PL/SQL package. Next time ... parameterizing the dynamic clause.

    Read the article

  • KVM guest storage difference with NBD and NFS

    - by WojonsTech
     I am setting up my own little private cloud for my own use, maybe for a project or two. I am using Linux KVM on Debian 6. I have 3 servers: 2 of them are compute nodes and 1 is a storage node. I have already installed KVM, made a few test machines and got my networking set up. I have 2 NICs on each server: 1 NIC is for web traffic, the other NIC is for internal network traffic. My first idea was to use NFS for storing the guest machines, which can range in size, maybe 8 GB, maybe 100 GB, it just depends. I have heard of NBD before and it seems like it could work, but I don't know what the performance differences are and whether it will affect my environment; NFS looks like it will be easier to use.
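
     For the NFS route the poster is leaning toward, the storage-node side is just an export plus a mount on each compute node. A minimal sketch (the directory, subnet and hostname are made up for illustration; no_root_squash is commonly needed so qemu running as root can reach the images):

        # On the storage node, in /etc/exports:
        /srv/vm-images 192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash)

        # reload the export table
        exportfs -ra

        # On each compute node, mount it where the guest images will live
        mount -t nfs storage-node:/srv/vm-images /var/lib/libvirt/images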

    Read the article

  • ubuntu 12.04 kvm virtual server network setup, can't get the machine to be connectable

    - by xyious
     I have worked on my Ubuntu Server host for weeks now and I just cannot manage to get the virtual machines into the network.... here's what I need to do: I need to be able to create virtual machines that have IP addresses that can be reached from the outside (192.168 network). I need to be able to connect to the virtual machines through ssh, ftp, http and preferably https; anything else doesn't matter that much. So far everything seems simple enough and I have a lot of leeway in terms of IP address range and server/client configuration. I have the option of taking part of a /24 net as most IPs aren't used, and if it's absolutely necessary I have the option of creating a new /24 subnet. I also have the option of reformatting and reinstalling the OS on the host and recreating the virtual machines, as nothing has been done other than trying to get virtual machines to work. I would prefer if the virtual machines were just part of the normal network, which would be 192.168.5.0/24. The host machine has 2 network cards so I don't even necessarily need the Host to be connectable in the same /24 network. I have tried (I think) just about everything from about 5 different tutorials on bridging (giving br0 the same IP that eth0 used to have (Host is able to connect to VM and vice versa, VM doesn't have outside network access), having eth0 set up like it always was and having br0 have a different IP (same as above), NAT with port forwarding (which I would have preferred not to use but will if it works), turning off one of the host's network cards and just using one of them, different subnets.... etc. I do know my way around iptables fairly well.... Host is 64bit Ubuntu Server 12.04, using libvirt/kvm. Edit: Local network is 192.168.5.0/24, host has static IP 192.168.5.254, GW .5.1 which is also the nameserver. We have a second local network at 192.168.10.0/24 with .10.1 GW, but both hosts and VMs were supposed to go into the .5 subnet. The .10 subnet isn't required, but it wouldn't be horrible if the Host were only accessible in the .10 subnet.
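
     For reference, the plain bridged setup most of those tutorials aim for boils down to something like this in /etc/network/interfaces on the host (a hedged sketch only; it reuses the .5 addresses given above and assumes the first NIC is eth0 and the bridge-utils package is installed):

        auto eth0
        iface eth0 inet manual

        auto br0
        iface br0 inet static
            address 192.168.5.254
            netmask 255.255.255.0
            gateway 192.168.5.1
            dns-nameservers 192.168.5.1
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0

     The guest's libvirt interface then needs to point at the bridge (interface type='bridge' with source bridge='br0') so the VM picks up a 192.168.5.x address directly on the LAN instead of going through NAT.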

    Read the article

  • Iterative and Incremental Principle Series 5: Conclusion

    - by llowitz
     Thank you for joining me in the final segment of the Iterative and Incremental series. During yesterday's segment, I discussed Iteration Planning, and specifically how I planned my daily exercise (iteration) each morning by assessing multiple factors, while following my overall Implementation plan. As I mentioned in yesterday's blog, regardless of the type of exercise or how many increment sets I decide to complete each day, I apply the 6-minute interval sets and a timebox approach. When the 6 minutes are up, I stop the interval, even if I have more to give, saving the extra energy to apply to my next interval set. Timeboxes are used to manage iterations. Once the pre-determined iteration duration is reached, whether it is 2 weeks or 6 weeks or somewhere in between, the iteration is complete. Iteration group items (requirements) not fully addressed, in relation to the iteration goal, are addressed in the next iteration. This approach helps eliminate the "rolling deadline" and better allows the project manager to assess the project progress earlier and more frequently than in traditional approaches. Not only do smaller, more frequent milestones allow project managers to better assess potential schedule risks and slips, but process improvement is encouraged. Even in my simple example, I learned, after a few interval sets, not to sprint uphill! Now I plan my route more efficiently to ensure that I sprint on a level surface, to reduce the risk of not completing my increment. Project managers have often told me that they used an iterative and incremental approach long before OUM. An effective project manager naturally organizes project work consistent with this principle, but a key benefit of OUM is that it formalizes this approach so it happens by design rather than by chance. I hope this series has encouraged you to think about additional ways you can incorporate the iterative and incremental principle into your daily and project life. I further hope that you will share your thoughts and experiences with the rest of us.

    Read the article

  • VirtualBox without X server

    - by nccc
    I want to run a guest operating system under a Linux host with VirtualBox, but I don't want to run from within X. I don't want a headless configuration, I don't want to run VirtualBox in the background, I don't want any remote protocols. I just want the guest OS to take control of my console (keyboard, mouse and monitor) and render to the framebuffer directly, not from within an X window. Is this possible?

    Read the article

  • Remote screen control software (such as VNC) with permission mechanism

    - by xuhdev
     Does anyone know of software that allows controlling another computer, but where the server can configure permissions? For example, the server could define five users, one of whom can control, while the other four can only view. Also, file transfer support is required. I've tried TightVNC; it works fine otherwise, but it just does not support per-user permission control. RAdmin can do this job, but I wish to find software that is free. Thank you!

    Read the article

  • KVM vs Hyper-V. Which hypervisor is best for Windows guests?

    - by user198851
     I am currently testing OpenStack for Windows guests (XP and 7). I have deployed OpenStack "all in one" on a system with the following specs: Core i5 processor (4 physical cores and 8 threads with HT Technology), 8 GB RAM, 500 GB HD. I have created 4 Windows XP guests with 512MB RAM and 1 VCPU each. On each Windows guest I have installed only Visual Studio 2008. In nova.conf the CPU over-commit ratio is 2 for better performance (as mentioned in the OpenStack operations guide). I am using KVM as the hypervisor. I have observed poor performance when simultaneously using Visual Studio in all four Windows instances. How can I improve performance? Should I use KVM or Hyper-V? Or is there any other suggestion?

    Read the article

  • VMware Workstation Windows 7 VM from a physical partition?

    - by rich
     Hi All, I have a machine with 2 disks. My secondary drive has two partitions, one of which is a Windows 7 64-bit boot partition. I have VMware Workstation and I would like to make a VM from the physical partition described above. Ideally this would boot from the live disk, but if I can make a vmdk from the two partitions on the secondary drive that would be fine. One issue is that the drive is a 140 GB Raptor, of which the two partitions I want are 40 GB and 30 GB; the rest of the space is unallocated. So if I make a vmdk I really need it to be fixed at, say, 80 GB. I have Converter but I don't understand how I can make the vmdk using this... Specs: Drive 1: a 120 GB SSD running the host OS (Windows 7 64-bit) - I've got 95 GB free on this. Drive 2: 140 GB Raptor; partition 1 (40 GB) is also a Windows 7 64-bit install, partition 2 (35 GB) has the Program Files folder on it... sort of needed to get the VM to work. There is 65 GB unallocated on this disk. Drive 1 will host Drive 2 as a VM... that's my hope.

    Read the article

  • Tweaking Hudson memory usage

    - by rovarghe
     Hudson 3.1 has some performance optimizations that greatly reduce its memory footprint. Prior to this, Hudson always held the entire data model (all jobs and all builds) in memory, which affected scalability. Some installations configured heap sizes in excess of 1GB to counteract this. Hudson 3.1.x maintains an MRU cache and only loads jobs and builds as they are required. Because of the inability to change existing APIs and remain backward compatible with plugins, there were limits to how far we could go with this approach. Memory optimizations almost always come with a related cost; in this case it is additional I/O that has to be performed to load data on request. On a small site that has frequent traffic, this is usually not noticeable since the MRU cache will usually hold on to all the data. A large site with infrequent traffic might experience some delays when the first request hits the server after a long gap. If you have a large heap and are able to allocate more memory, the cache settings can be adjusted to take advantage of this and even go back to pre-3.1 behavior. All the cache settings can be passed as options to the JVM container (Tomcat or the default Jetty container) using the -D option. There are two caches, independent of each other, one for Jobs and the other for Builds.

     For the jobs cache:
     - hudson.jobs.cache.evict_in_seconds (default=60): Seconds from last access (could be because of a servlet request or a background cron thread) before a job is purged from the cache. Set this to 0 to never purge based on time.
     - hudson.jobs.cache.initial_capacity (default=1024): Initial number of jobs the cache can accommodate. Setting this to the number of jobs you typically display on your Hudson landing page or home page will speed up consecutive access to that page. If the default is too large you may consider downsizing and using that memory for the Builds cache instead.
     - hudson.jobs.cache.max_entries (default=1024): Maximum number of jobs in the cache. The default is large enough for most installations, but if you find I/O activity whenever you access the Hudson home page you might consider increasing this; first verify whether the I/O is caused by frequent eviction (see above) rather than by the cache not being large enough.

     For the builds cache: The builds cache is used to store Build objects as they are read from storage. Typically this happens when a user drills down into the details of a particular Job from the Hudson home page. The cache is shared among builds for different jobs since in most installations all jobs are not accessed with the same frequency, so a per-job builds cache would be a waste of memory.
     - hudson.job.builds.cache.evict_in_seconds (default=60): Same as the equivalent Job cache setting, applied to Builds.
     - hudson.job.builds.cache.initial_capacity (default=512): Same as the equivalent Job cache setting. Note the smaller initial size. If your site stores a large number of builds and has frequent access to more builds, you might consider bumping this up.
     - hudson.job.builds.cache.max_entries (default=10240): The default max is large enough for most installations. The builds cache holds bigger objects, so be careful about increasing the upper limit on this. See the section on monitoring below.
     Sample usage:

        java -jar hudson-war-3.1.2-SNAPSHOT.war -Dhudson.jobs.cache.evict_in_seconds=300 \
            -Dhudson.job.builds.cache.evict_in_seconds=300

     Monitoring cache usage

     The 'jmap' tool that comes with the JDK can be used to monitor cache performance in an indirect way, by looking at the number of Job and Build objects in each cache. Find the PID of the Hudson instance and run:

        $ jmap -histo:live <pid> | grep 'hudson.model.*Lazy.*Key$'

     Here's a sample output:

        num     #instances    #bytes  class name
        523:            28       896  hudson.model.RunMap$LazyRunValue$Key
        1200:            3        96  hudson.model.LazyTopLevelItem$Key

     These are the keys to the Jobs (LazyTopLevelItem$Key) and Builds (RunMap$LazyRunValue$Key) in the caches, so counting the number of keys is a good indicator of the number of items in the cache at any given moment. The sizes in bytes can be ignored; they are just the sizes of the keys, not the actual sizes of the objects they hold. Those sizes can only be obtained with a profiler. With the output above we can conclude that there are 3 jobs and 28 builds in memory. The 28 builds could all be from 1 job or spread across all 3 jobs. Over time, on an idle system, these should get evicted and the memory cache should be empty. In practice, because of background cron threads and triggers, jobs rarely fall down to zero. Access of a job or a build by a cron thread resets the eviction timer.
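
     If Hudson runs inside Tomcat rather than the bundled Jetty container, the same properties can be passed through the container's JVM options. A hedged sketch (setenv.sh is the standard Tomcat hook for this; the values picked here are only illustrative):

        # $CATALINA_BASE/bin/setenv.sh
        CATALINA_OPTS="$CATALINA_OPTS \
          -Dhudson.jobs.cache.evict_in_seconds=300 \
          -Dhudson.jobs.cache.max_entries=2048 \
          -Dhudson.job.builds.cache.evict_in_seconds=300"
        export CATALINA_OPTS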

    Read the article

  • Interesting fact #123423

    - by Tim Dexter
     Question from a customer on an internal mailing list, succinctly answered by RTF Template God, Hok-Min. Q: What's the upper limit for a sum calculation, in terms of the largest number BIP can handle? A: Internally, the XSL-T processor uses double precision. Therefore the upper limit and precision will be the same as for a double (IEEE 754 double-precision binary floating-point format, binary64): approximately 16 significant decimal digits, max is 1.7976931348623157 x 10^308. So, now you know :)
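
     An easy way to see the ~16 significant digit limit for yourself (just an aside, not from the original note; bash's printf uses the same IEEE 754 doubles under the hood, and the exact trailing digits may vary slightly by platform):

        # extra digits expose the binary64 rounding of 0.1
        $ printf '%.20f\n' 0.1
        0.10000000000000000555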

    Read the article

  • Building MySQL with Boost on Windows

    - by user13177919
     As you've probably heard already, MySQL needs Boost to build. However, in the good ol' MySQL tradition, the above link only gives you the instructions on how to build it on Linux, and completely ignores the fact that there are other OSes too that people develop on. To fill in that gap, I've compiled a small step by step guide on how to do it on Windows. Note that I always, as a principle, build out-of-source. The typical setup I have is:

        bzr clone lp:~mysql/mysql-server/5.7 mysql-trunk
        cd mysql-trunk
        mkdir bld
        cd bld
        cmake -DWITH_DEBUG=1 -DMYSQL_PROJECT_NAME=mysql-trunk ..
        devenv /build debug mysql-trunk.sln

     This has been tested to work on a 32 bit compile using VS2013 on a Windows 7 64 bit build. Note that you'll need other things too (bison, eventually openssl etc.) that I will assume you already have set up.

     Steps:
     1. Download Boost 1.55.0. It's the *only* version that is known to work currently.
     2. Extract boost_1_55_0/ from the zip to c:\boost\boost_1_55_0
     3. Go to Control Panel/System/Environment variables and set WITH_BOOST=C:\boost\boost_1_55_0 in User variables. Make sure you restart your open command line terminal windows after this!
     4. If you're upgrading from a non-boost build, remove your bld/ directory and create a new one.
     5. Run cmake as you'd typically do. You should get:

        -- Local boost dir C:/boost/boost_1_55_0
        -- Local boost zip LOCAL_BOOST_ZIP-NOTFOUND
        -- BOOST_VERSION_NUMBER is #define BOOST_VERSION 105500
        -- BOOST_INCLUDE_DIR C:/boost/boost_1_55_0

     6. Build as normal (devenv /build debug ...). It should work.
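
     If you'd rather not set a user-level environment variable, the Boost location can also be handed straight to cmake on the command line; a hedged variant of step 5 using the same path as above (WITH_BOOST is the cmake option the environment variable maps to, the other flags are unchanged):

        cmake -DWITH_DEBUG=1 -DMYSQL_PROJECT_NAME=mysql-trunk -DWITH_BOOST=C:\boost\boost_1_55_0 ..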

    Read the article

  • (My) Sun Ray 3i

    - by user13346636
     Last week, some Sun Ray devices were shown at the LASDEC exhibition. Afterward, they were brought back to the Aoyama Center, but not all of them found a place to be stored. So, two days ago, Iwasaki-san, one of the co-workers I've been close to (and who was at the exhibition), put a Sun Ray 3i (all-in-one with 21.5" screen) on my (shared) desk. Yay! I managed to get a Japanese keyboard, and now I can access my card and cardless sessions from Germany, and the performance is just great, as good as when I work from home in Hamburg. That's the way my desk looks now, almost as messy as my desk in Hamburg. And my back is very grateful.

    Read the article

  • KVM XML config file

    - by awmusic12635
     I have a KVM server that I expanded the LV of, then rebooted. However, now when booting I get the error:

        error: Failed to create domain from /home/kvm/kvmx/kvmx.xml
        error: (domain_definition):1: Document is empty
        (null)
        ^

     It appears the config file still exists, however it is now empty. I attempted to replace the contents of the file with the correct previous information, however it continues to wipe the file on each attempt to boot and fails again with the same error. How would I go about solving this so the file doesn't get wiped on every reboot? OS: CentOS 6 64-bit. I would appreciate any help you may have.
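
     A hedged recovery sketch (not a confirmed fix for this case; the path comes from the error message and the domain name kvmx is assumed): rebuild the XML from a backup or from a similar guest's definition, then register it with libvirt rather than booting straight from the file in /home, since virsh define stores its own copy under /etc/libvirt/qemu/:

        # put the reconstructed definition in place
        vi /home/kvm/kvmx/kvmx.xml

        # parse it and register it with libvirt
        virsh define /home/kvm/kvmx/kvmx.xml

        # start the guest from the registered definition
        virsh start kvmx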

    Read the article

  • Universal Work Queue Quick Filter Examples

    - by LuciaC-Oracle
     If you use Universal Work Queue then it's likely that you will want to define and use your own Quick Filters. Quick Filters allow you to focus on specific work classes based on assigned criteria in a node. This makes it much easier for Agents to view their work grouped in a meaningful way. How to create Universal Work Queue - Quick Filters (Doc ID 803163.1) gives two worked examples to help you understand how to create your own Quick Filters:
     - Adding a 'Resource Group' filter
     - Adding an Overdue Amounts filter for use in Collections
     We hope you find these examples useful. Let us know by providing feedback on the document itself, or post to the MOS Service Community with your experience and suggestions.

    Read the article

  • domain screensaver control software

    - by Pec
    I'm looking to buy a screensaver control product with granular control. I have about 2000 workstations which require dozens of different timeout values, lock/not locked on resume settings, different screen saver files (that can be frequently updated depending on department), etc. It's looking to be quite an undertaking accomplishing this with domain group policies so I'm hoping you guys have some suggestions of products to use. Hopefully such products would integrate with AD. Thanks

    Read the article
