Search Results

Search found 8673 results on 347 pages for 'kvm switch'.

Page 96/347 | < Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >

  • How do I get the keyboard to write hiragana instead of katakana?

    - by Lisandro
    I added the Japanese layout in Keyboard preferences; however, all the layouts appear to be in katakana. I suspect that the key to the left of the number 1 (above Tab and below Esc) could be the one that switches from katakana to hiragana, but I don't have that key on my keyboard, and my other Toshiba laptop doesn't have it either. I really don't know what to do; I simply want to be able to write hiragana in Ubuntu. Thanks
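
    For romaji-to-hiragana input, adding the bare Japanese layout is usually not enough; an input method such as IBus with Anthy is typically used instead. A minimal setup sketch, assuming standard Ubuntu package names:

        sudo apt-get install ibus ibus-anthy   # IBus framework plus the Anthy conversion engine
        ibus-setup                             # add Anthy as an input method, then log in again

    With Anthy active, typed romaji is converted to hiragana by default, and a shortcut (often Ctrl+J in similar setups) switches back to hiragana from other kana modes.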

    Read the article

  • VoIP Phone Service - For Every Communicator

    If you have already come across the term VoIP, you may still not be aware of what it means or what it has to offer you. You may be wondering whether it is beneficial to switch over from the old tradi... [Author: Dennis Smith - Computers and Internet - April 22, 2010]

    Read the article

  • Geforce GT240M: How to disable Notebook screen and enable external Monitor

    - by bdr529
    When I attach my notebook (Nvidia GeForce GT 240M) to my Panasonic TV via HDMI, both screens are activated. I use Nvidia driver version 260.19.06. I wonder how to deactivate my notebook display, automatically or with one script, and switch to the external monitor when it is attached. The TV needs an overscan correction of 100px to display the whole screen. I also want to be able to close the notebook without deactivating the external monitor, which still happens now. Is this possible?
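
    One way this is often scripted is with xrandr, using the output names reported by xrandr -q (LVDS-0 and HDMI-0 below are placeholders; actual names vary by driver). A sketch:

        # drive only the external screen, turning the internal panel off
        xrandr --output LVDS-0 --off --output HDMI-0 --auto

    The overscan correction is separate; with the NVIDIA binary driver it is typically adjusted in nvidia-settings rather than through xrandr.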

    Read the article

  • DHCP-server doesn't start at boot because of wrong startup order

    - by stolsvik
    Apparently the isc-dhcp-server is started too early in the boot sequence; it states that it has nothing to do. If I just log in directly as root and start it using the init.d script, it starts normally. My setup is basically an utterly standard router, with eth0 on the inet side and eth1 on the lan side. However, I've defined a bridge instead of eth1 for the lan side; thus, the lan part of the network isn't up until the bridge is up. I currently believe that the DHCP server is brought up before the bridge, probably because the bridge is brought up with the 'networking' task, while the eths are brought up with the 'network-interface' tasks, which run earlier. (Also, the bridge takes a small age to come up compared to the eths.) If I take away the bridge config and use eth1 directly for the lan side, things work. (However, judging by syslog, things are still tight.) Any ideas on how to get DHCP to start later? (The reason for the bridge is to be able to use KVM with bridged networking.)
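
    Since the boot sequence described here is Upstart-based, one hedged possibility is an override that makes the job wait for the bridge device itself (br0 is an assumed bridge name):

        # /etc/init/isc-dhcp-server.override  (sketch, untested)
        start on net-device-up IFACE=br0

    net-device-up is a standard Upstart event; the .override mechanism needs a reasonably recent Upstart (1.3 or later).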

    Read the article

  • Understanding how OpenGL blending works

    - by yuumei
    I am attempting to understand how OpenGL (ES) blending works. I am finding it difficult to understand the documentation and how the results of glBlendFunc and glBlendEquation affect the final pixel that is written. Do the source and destination outputs of glBlendFunc get added together with GL_FUNC_ADD by default? This seems wrong, because "basic" blending of GL_ONE, GL_ONE would then output 2,2,2,2 (source giving 1,1,1,1 and dest giving 1,1,1,1). I have written the following pseudo-code; what have I got wrong?

        struct colour { float r, g, b, a; };

        colour blend_factor( GLenum factor,
                             colour source,
                             colour destination,
                             colour blend_colour )
        {
            colour colour_factor;
            // Only needed for GL_SRC_ALPHA_SATURATE (not handled below).
            float i = min( source.a, 1 - destination.a );
            // From http://www.khronos.org/opengles/sdk/docs/man/xhtml/glBlendFunc.xml
            switch( factor )
            {
                case GL_ZERO:
                    colour_factor = { 0, 0, 0, 0 };
                    break;
                case GL_ONE:
                    colour_factor = { 1, 1, 1, 1 };
                    break;
                case GL_SRC_COLOR:
                    colour_factor = source;
                    break;
                case GL_ONE_MINUS_SRC_COLOR:
                    colour_factor = { 1 - source.r, 1 - source.g,
                                      1 - source.b, 1 - source.a };
                    break;
                // ...
            }
            return colour_factor;
        }

        colour blend( colour & source,
                      colour destination,
                      GLenum source_factor,      // from glBlendFunc
                      GLenum destination_factor, // from glBlendFunc
                      colour blend_colour,       // from glBlendColor
                      GLenum blend_equation )    // from glBlendEquation
        {
            colour source_colour =
                blend_factor( source_factor, source, destination, blend_colour );
            colour destination_colour =
                blend_factor( destination_factor, source, destination, blend_colour );
            colour output;
            // From http://www.khronos.org/opengles/sdk/docs/man/xhtml/glBlendEquation.xml
            switch( blend_equation )
            {
                case GL_FUNC_ADD:
                    output = add( source_colour, destination_colour );
                    break;
                case GL_FUNC_SUBTRACT:
                    output = sub( source_colour, destination_colour );
                    break;
                case GL_FUNC_REVERSE_SUBTRACT:
                    output = sub( destination_colour, source_colour );
                    break;
            }
            return output;
        }

        void do_pixel()
        {
            colour final_colour;
            // Blending
            if( enable_blending )
                final_colour = blend( current_colour_output, framebuffer[ pixel ], ... );
            else
                final_colour = current_colour_output;
        }

    Thanks!

    Read the article

  • Top 10 Plugins For WordPress

    WordPress is the best content management system. It was originally used to power self-hosted blogs, but now its extended functionality motivates webmasters to switch their websites to WordPress.

    Read the article

  • Oracle E-Business Suite Tip : SQL Tracing

    - by Giri Mandalika
    Issue: Attempting to enable SQL tracing from the concurrent request form fails with the error: Function not available to this responsibility. Change Responsibilities or contact your System Administrator. Resolution: Switch responsibility to "System Administrator". Navigate to System - Profiles and query for "%Diagnostics%" to find the "Utilities : Diagnostics" profile. Once you find the profile, change its value to "Yes". Restart the web browser and try enabling SQL trace again.
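
    The profile-option route above is specific to E-Business Suite; for comparison, a plain-SQL sketch of tracing a single session with standard Oracle syntax, outside any EBS tooling:

        -- enable extended SQL trace (waits and binds) for the current session
        ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
        -- ... run the statements to be traced ...
        ALTER SESSION SET EVENTS '10046 trace name context off';

    The resulting trace file lands in the database's diagnostic trace directory and can be formatted with tkprof.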

    Read the article

  • Why is multithreading often preferred for improving performance?

    - by user1849534
    I have a question about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering two main approaches here: an async approach, basically based on signals, or just an async approach as described by many papers and languages like the new C# 5.0, with a "companion thread" that manages the policy of your pipeline; and a concurrent, or multi-threading, approach. I will just say that I'm thinking about the hardware and the worst-case scenario here, and I have tested these two paradigms myself. The async paradigm is a winner, to the point that I don't get why people talk about multi-threading 90% of the time when they want to speed things up or make good use of their resources. I have tested multi-threaded and async programs on an old machine with an Intel quad-core that doesn't have a memory controller inside the CPU; the memory is managed entirely by the motherboard. In this case performance is horrible with a multi-threaded application: even a relatively low number of threads, like 3 to 5, can be a problem; the application is unresponsive and is just slow and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either; my application just waits for the result without hanging, it's responsive, and it scales much better. I have also discovered that a context switch in the threading world is not that cheap in a real-world scenario; it's in fact quite expensive, especially when you have more than two threads that need to cycle and swap among each other to be computed. On modern CPUs the situation isn't really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine, and the memory controller works the same way as on the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated, or that newer CPUs have more than two cores, is no bargain for me. From what I have experienced, the concurrent approach is good in theory but not that good in practice; with the memory model imposed by the hardware, it's hard to make good use of this paradigm, and it introduces a lot of issues, ranging from the use of my data structures to the joining of multiple threads. Also, neither paradigm offers any guarantee about when the task or job will be done at a certain point in time, which makes them really similar from a functional point of view. Given the x86 memory model, why do the majority of people suggest using concurrency with C++ rather than just an async approach? And why not consider the worst-case scenario of a computer where the context switch is probably more expensive than the computation itself?
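
    For illustration only, a minimal C# 5.0 sketch of the two styles being contrasted, assuming a single I/O-bound operation (the method names and URL handling are placeholders, not anyone's recommended design):

        using System.Net;
        using System.Threading;
        using System.Threading.Tasks;

        class Demo
        {
            // Async style: no thread blocks while the download is in flight;
            // the method resumes as a continuation when the result arrives.
            static async Task<int> FetchLengthAsync(string url)
            {
                using (var client = new WebClient())
                {
                    string body = await client.DownloadStringTaskAsync(url);
                    return body.Length;
                }
            }

            // Thread style: a dedicated thread sits blocked in the synchronous
            // call, and resuming the caller costs a context switch.
            static int FetchLengthOnThread(string url)
            {
                int length = 0;
                var worker = new Thread(() =>
                {
                    using (var client = new WebClient())
                    {
                        length = client.DownloadString(url).Length;
                    }
                });
                worker.Start();
                worker.Join(); // caller blocks here until the worker finishes
                return length;
            }
        }

    Neither version is faster for a single operation; the difference the question describes shows up in how many blocked threads, and hence context switches, each approach accumulates under load.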

    Read the article

  • How to change Ubuntu from 64 to 32 bit [closed]

    - by Shmaleb
    Possible Duplicate: Can I switch from ubuntu amd64 to ubuntu 32bit My brother installed 64-bit Ubuntu on my mom's old PC (dual-boot with Windows XP) and it runs very poorly. It is on Ubuntu 11.10, and I was wondering if there is any easy way of moving to 32-bit 12.04. I know that on Fedora, when you insert installation media into a computer that already has Fedora, it asks if you want to install over that Fedora; is there anything like this for Ubuntu? Thanks!

    Read the article

  • Virtual environment firewall with CSF + iptables rules on VM?

    - by luison
    We are getting into virtualization with a Proxmox VE (OpenVZ + KVM) server. Our plan for the firewall is to have CSF (http://configserver.com/cp/csf.html) running on the host machine, as we've had a reasonably good experience with it in the past. Apart from that, we plan simple firewall rules on the VMs (mostly OpenVZ containers sharing the same kernel) and maybe a few simple, specific fail2ban rules. I would appreciate comments from anyone with similar experience. I understand all traffic comes via the host machine, so a combined firewall there, with specific firewalling on the VMs, should work, although some iptables rules are hard to get working in OpenVZ containers.
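
    As a sketch of the "simple firewall rules on the VM" layer, assuming a container that should accept only SSH and HTTP (the ports are illustrative):

        # default-deny inbound; allow loopback and established traffic back in
        iptables -P INPUT DROP
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT   # SSH
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT   # HTTP

    Note that state matching only works inside an OpenVZ container if the host grants the container the relevant iptables modules in its configuration.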

    Read the article

  • virtual install from ISO not getting virtual kernel

    - by Pete
    I have a KVM host (12.04.5) on which I have been installing guests in a variety of ways. I recently noticed that one of my guests was running a generic kernel, when I'm fairly certain I specified "minimal virtual machine" during install from a 12.04.2 server ISO. From what I understand, it should be running a stripped-down kernel "optimized" for VMs. I set up another server to test, this time using 14.04.1, and sure enough I ended up with uname -r returning 3.13.0-32-generic. It seems that if I use an .iso to install, I end up with generic regardless. However, building with the vmbuilder ... --flavour virtual --suite precise ... script (I don't have trusty available yet) gives me an Ubuntu 12.04.5 LTS system running kernel 3.2.0-67-virtual. The server FAQ mentions I should be getting the virtual kernel. "What are practical advantages of using linux-image-virtual kernel?" gives me the impression that it doesn't really matter functionally (in my case I only have a couple of VMs running). My first thought was that maybe I was somehow not applying the correct options, because the installer's F4 menu doesn't really give great feedback on whether the mode has been selected or not. Looking in the log /var/log/installer/syslog I see Command line: file=/cdrom/preseed/ubuntu-server-minimalvm.seed ... I know that I can install the virtual kernel package down the road, but why am I not getting (or should I be getting?) the virtual flavour of the kernel from an ISO install when doing a minimal VM install?
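
    As the question notes, the flavour can also be switched after the fact; a minimal sketch for an already-installed 12.04 guest:

        sudo apt-get install linux-image-virtual   # metapackage tracking the -virtual kernel
        # after rebooting into it, the -generic images can be removed if desired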

    Read the article

  • Step-by-Step: Implementing Hyper-V Network Virtualization with Windows Server 2012

    - by KeithMayer
    True network and virtual machine portability - that's the ultimate goal of Hyper-V Network Virtualization - allowing you, as an IT pro, to align changing business needs with the best physical resource locations for running your VMs and network services, easily and without the sweeping network, router, switch, firewall, and DNS changes that would traditionally plague us when merely attempting to relocate VMs to a new rack, subnet, or data center ... WOW!

    Read the article

  • How can I set up VLANs in a way that won't put me at risk for VLAN hopping?

    - by hobodave
    We're planning to migrate our production network from a VLAN-less configuration to a tagged VLAN (802.1q) configuration. (The original question includes a diagram summarizing the planned configuration.) One significant detail is that a large portion of these hosts will actually be VMs on a single bare-metal machine. In fact, the only physical machines will be DB01, DB02, the firewalls, and the switches; all other machines will be virtualized on a single host. One concern that has been raised is that this approach is complicated (overcomplicated, it was implied), and that the VLANs only provide an illusion of security, because "VLAN hopping is easy". Is this a valid concern, given that multiple VLANs will be used for a single physical switch port due to virtualization? How would I set up my VLANs appropriately to prevent this risk? Also, I've heard that VMware ESX has something called "virtual switches". Is this unique to the VMware hypervisor? If not, is it available with KVM (my planned hypervisor of choice)? How does that come into play?
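
    The classic VLAN-hopping mitigations are switch-side; a hedged Cisco-IOS-style sketch for the trunk port facing the virtualization host (interface and VLAN numbers are placeholders):

        interface GigabitEthernet0/1
         switchport mode trunk
         switchport nonegotiate              ! disable DTP so the port cannot be coaxed into trunking
         switchport trunk native vlan 999    ! unused native VLAN defeats double-tagging attacks
         switchport trunk allowed vlan 10,20 ! carry only the VLANs the host actually needs

    Equivalent settings exist on most managed switches, though the syntax differs.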

    Read the article

  • How to find out if my hosting's speed is good enough?

    - by Mert Nuhoglu
    There are lots of different online performance tests: Google PageSpeed Insights, iWebTool Speed Test, AlertFox Page Load Time, and WebPageTest. There are also several desktop/client tools, such as ping, YSlow, Firebug's Net console, Fiddler, and HttpWatch. I just want to determine whether my hosting provider's performance is good enough, or whether I need to switch my hosting to another provider. So, which tool should I use to compare my hosting provider with other hosting providers?
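
    Alongside those services, raw timings can be compared from any client with curl; a small sketch (the URL is a placeholder):

        # total time, time to first byte, and DNS time for one request
        curl -o /dev/null -s -w 'total: %{time_total}s  ttfb: %{time_starttransfer}s  dns: %{time_namelookup}s\n' http://example.com/

    Running this from a few locations against both providers gives a rough like-for-like comparison.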

    Read the article

  • Best Ruby Git library?

    - by Jeff Welling
    Which is the best Git library to use in Ruby: Git, Grit, Rugged, or something else? Background: I'm the current maintainer of TicGit-ng, which is a distributed offline ticket system built on git. I've read and heard over and over again that Grit is the one I should use because it supersedes the Git gem, but there seems to be either a lack of documentation or a lack of features, because I and others have failed in trying to switch from the deprecated-but-functional Git gem to the newer Grit gem.

    Read the article

  • Marshalling C# Structs into DX11 cbuffers

    - by Craig
    I'm having some issues with the packing of my structures in C# and passing them through to cbuffers I have registered in HLSL. When I pack my struct in one manner, the information passes to the shader correctly:

        [StructLayout(LayoutKind.Explicit, Size = 16)]
        internal struct TestStruct
        {
            [FieldOffset(0)] public Vector3 mEyePosition;
            [FieldOffset(12)] public int type;
        }

    This works perfectly when used against this HLSL fragment:

        cbuffer PerFrame : register(b0)
        {
            float3 eyePos;
            int type;
        }

        float3 GetColour()
        {
            float3 returnColour = float3(0.0f, 0.0f, 0.0f);
            switch (type)
            {
                case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break;
                case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break;
                case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break;
            }
            return returnColour;
        }

    However, when I use the following structure definitions...

        // Note this is 16 because HLSL packs in 4-float 'chunks'.
        // It is also simplified, but still demonstrates the problem.
        [StructLayout(LayoutKind.Explicit, Size = 16)]
        internal struct InternalTestStruct
        {
            [FieldOffset(0)] public int type;
        }

        [StructLayout(LayoutKind.Explicit, Size = 32)]
        internal struct TestStruct
        {
            [FieldOffset(0)] public Vector3 mEyePosition;
            // Missing 4 bytes here for correct packing.
            [FieldOffset(16)] public InternalTestStruct mInternal;
        }

    ... the following HLSL fragment no longer works:

        struct InternalType
        {
            int type;
        };

        cbuffer PerFrame : register(b0)
        {
            float3 eyePos;
            InternalType internalStruct;
        }

        float3 GetColour()
        {
            float3 returnColour = float3(0.0f, 0.0f, 0.0f);
            switch (internalStruct.type)
            {
                case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break;
                case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break;
                case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break;
            }
            return returnColour;
        }

    Is there a problem with the way I am packing the struct, or is it another issue? To reiterate: I can pass a struct in a cbuffer as long as it does not contain a nested struct.

    Read the article

  • How are you coping with Ubuntu's Unity app launcher? (It auto-hides, can't minimize apps)

    - by Bad Learner
    [Firstly, let me tell you that this cannot be subjective in any way, as I think at least Ubuntu beginners will have these questions boggling their minds; and yes, this is a question that has a definite answer - so, I am completely within the rules.] Okay, coming to the point: I see that Ubuntu has used Unity since v10.xx (netbook edition?) and carried it over to v11.04 and v11.10. As someone who has stuck with Windows for all these years, I find it somewhat difficult to cope with Ubuntu's Unity, for the following reasons: [1] The Unity app launcher (at the screen's left) auto-hides when a window is maximized. [2] And once launched, apps cannot be minimized by clicking the app's icon in the launcher; I have to go to the top-left of the screen and click the "_" button. I do know I can fix these issues by installing some configuration tool. But the thing is, if that's how it's meant to work, Canonical/Ubuntu would have designed it that way. But they didn't. Why? With respect to the above points [1] and [2]: [1] EDITED: So, does it mean it's good to work without maximizing windows? Because if I maximize a window, the app launcher hides, and I need to hover the mouse at the left of the screen, wait a bit (even if it's a second or less, I can still feel the lag), and then click the next app icon in the launcher to switch to it. I do know I can use Alt+Tab to switch, but I am not sure which window comes next; this, I feel, isn't productive. Also, this makes me feel Ubuntu is designed for large screens (it's nice on my 1920x1080 screen), because I can have two windows side by side on a large screen; this is not possible on smaller screens. [2] Being able to minimize an application's window by clicking its icon in the launcher (just as it works on Windows and probably elsewhere) would have been great, rather than having to go to the top-left and click the _ (minimize) button, which brings up the app launcher itself (from hiding) most of the time. It's too tiring to have these small issues in the UI. I really would like to know how you are coping with these issues as they are.

    Read the article

  • Preventing Users/Groups from accessing certain Domains

    - by ncphillips
    I have created a Study account which I use for any school-related work. Its purpose is to remove the distractions of my normal account, such as social media and news websites. I know /etc/hosts can be edited to block certain domains, but that applies to all users, and I don't want to have to switch in and out of the admin account to change it every time I want to focus. Is there any way to block these domains for specific users or groups?
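
    One commonly suggested direction is iptables' owner match, which can filter outbound traffic per Unix user; a hedged sketch (the "study" user name and the domain are placeholders, and the -d match resolves the domain only once, when the rule is added):

        # reject outbound HTTP/S to a distracting site for the study account only
        sudo iptables -A OUTPUT -m owner --uid-owner study -d www.facebook.com -p tcp -m multiport --dports 80,443 -j REJECT

    Because of the resolve-once caveat, sites served from many IPs may need a proxy-based approach instead.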

    Read the article

  • Easier way to create floppy disk images?

    - by Bryan
    I'm using Vyatta routers with KVM and want to attach a floppy drive with a config file for Vyatta when I boot the image. I'll be doing this over and over again, so I'm looking for an automated way of creating the floppy images. Right now, I'm doing the following: create the floppy image with qemu-img create; format it with mkdosfs; mount it with mount -t fat /tmp/floppy.img /media/floppy; populate it with cp -r /tmp/configs/ /media/floppy/; unmount it with umount /media/floppy; and save it with mv /tmp/floppy.img ~/floppies/. Any chance there's an easier way to do this? Perhaps a shortcut application that I can point at a directory and that will do all this for me without my having to mount the image?
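
    The six steps wrap naturally into a script; a minimal sketch using the commands from the question (the script name and arguments are invented, and vfat is used as the mount type):

        #!/bin/sh
        # make_floppy.sh -- build a FAT floppy image from a config directory
        # usage: make_floppy.sh /tmp/configs ~/floppies/router1.img
        set -e
        SRC_DIR="$1"
        OUT_IMG="$2"
        TMP_IMG=/tmp/floppy.img
        MNT=/media/floppy

        qemu-img create -f raw "$TMP_IMG" 1440k   # blank 1.44 MB image
        mkdosfs "$TMP_IMG"                        # put a FAT filesystem on it
        mount -o loop -t vfat "$TMP_IMG" "$MNT"   # loop-mount the image
        cp -r "$SRC_DIR"/. "$MNT"/                # populate it with the configs
        umount "$MNT"
        mv "$TMP_IMG" "$OUT_IMG"

    For skipping the mount step entirely, the mtools package is one option: mcopy -s -i floppy.img /tmp/configs/* ::/ copies into the image directly.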

    Read the article

  • Set up a root server using Ubuntu and Virtualization

    - by Daniel Völkerts
    Hello, I'd like to set up a fresh root server and install Linux-based virtualization on it. My thoughts are: Intel VT-capable hardware, Ubuntu 9.10, and KVM-based virtualization. Access to the root server will be SSH only, for administration. Has anybody done this before, and what gotchas did you discover in daily use? My requirements are: very secure, so the root server only exposes SSH to dom-0 and minimal ports for the guests (e.g. HTTP/S); good monitoring of host and guests (my idea is to use Zabbix for it); and easy, fast administration (how are the command-line tools working for you? Cryptic? Steep learning curve?). I'm pleased to learn from your suggestions. Regards, Daniel Völkerts
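
    On a 9.10-era Ubuntu host, the setup usually starts with the KVM and libvirt packages plus a bridge for the guests; a hedged sketch (package and group names as of that release line):

        sudo apt-get install qemu-kvm libvirt-bin bridge-utils  # hypervisor, management daemon, bridging
        sudo adduser $USER libvirtd                             # allow this account to manage VMs
        virsh -c qemu:///system list                            # sanity check against the local hypervisor

    SSH-only access then reduces to ordinary sshd hardening on the host, since guests are reached through virsh or VNC over the same tunnel.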

    Read the article

  • Some lenses not working in 12.04

    - by Jazz
    I have just started experimenting with extra lenses in Unity. I installed some that I found listed in an AskUbuntu question, and some of them don't seem to work. The "Cooking" lens is supposed to show recipes, but I just get a blank result: nothing is found. I have another lens, which I assume is the YouTube lens I installed, that sits and searches forever when I switch to it, even after I type a search into it.

    Read the article

  • How do I change from Ubuntu to Xubuntu?

    - by GUI Junkie
    I've read this Q&A and I'm ready to try it with Xubuntu. That is, I'll go from Ubuntu to Xubuntu. At this moment my laptop is slow, even after the various optimizations. My question is whether this is the correct way to proceed:

        sudo apt-get upgrade                  # upgrade all existing packages to their newest versions
        sudo do-release-upgrade               # upgrade the system (takes some hours)
        sudo apt-get install xubuntu-desktop  # install the Xfce desktop; select it at login

    Then remove the ubuntu-desktop package (which command should I use?).
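
    For the removal step the question asks about: ubuntu-desktop is only a metapackage, so removing it is quick and does not by itself uninstall GNOME. A sketch:

        sudo apt-get remove ubuntu-desktop   # drops the metapackage only
        # the individual GNOME packages would need to be removed separately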

    Read the article

  • What is wiser for me? Eclipse or IntelliJ aka not another IDE war. [closed]

    - by Xorty
    I hope you guys won't take this as an attempt at a flame war :) I am quite a happy Eclipse user at the moment, but I have a little dilemma: all the big companies in my area are using Eclipse, and I quite like Eclipse, but IntelliJ is simply smarter and more comfortable in some things. What would you do? If I decide to switch to IDEA, I am afraid I'll regret it someday... And I don't want to have to remember all the shortcuts in both Eclipse and IDEA; that's simply too much :)

    Read the article

  • How to remove outlines around windows when switching with Alt+Tab?

    - by Oxwivi
    When I switch windows in the current Unity using Alt+Tab, I occasionally get an ugly outline, the size of the selected window, showing up in random places. Is there a way to change this outline to something better looking, or to eliminate it altogether? NB: these outlines appear on the Ubuntu desktop without any effects enabled. (The original question includes screenshots of outlined Firefox, Nautilus, and Terminal windows.)

    Read the article
