Search Results

Search found 16001 results on 641 pages for 'convention over configuration'.


  • Multiple servers using same nameservers

    - by Robsimm
    I have two servers with the following OSes and control panels:

        Windows Server 2003 running Plesk 9
        CentOS 5.3 running WHM/cPanel

    The Windows Server 2003 machine hosts the two nameservers, ns1.domain.com and ns2.domain.com. I have BIND running on the CentOS 5.3 server, but I wish for my customers to use the same nameservers, ns1.domain.com and ns2.domain.com (as served by the Windows Server 2003 machine). My first question is: is this possible? And if so, how would I go about configuring both servers to enable such a configuration? Thanks very much.
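
    For anyone landing here with the same setup, a hedged sketch of the usual answer: the nameserver hostnames only need to resolve to the Windows box; customers' zones can live on those nameservers while their A records point at the CentOS server. A BIND-style zone file illustrating the idea (the customer domain and the 203.0.113.20 address are hypothetical):

        ; zone for a customer domain whose site lives on the CentOS/cPanel box,
        ; served by ns1/ns2.domain.com (hosted on the Windows server)
        $TTL 86400
        @    IN  SOA  ns1.domain.com. hostmaster.domain.com. (
                      2010032801 ; serial
                      3600       ; refresh
                      900        ; retry
                      604800     ; expire
                      86400 )    ; negative-cache TTL
             IN  NS   ns1.domain.com.
             IN  NS   ns2.domain.com.
             IN  A    203.0.113.20   ; the CentOS server's public IP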

  • How can I debug Cisco Firewall ASA "Dispatch Unit" very high CPU utilisation from ASDM?

    - by Andy
    I recently had my first firewall installed, so I am very new to this whole situation. I am finding that the Dispatch Unit is becoming overloaded, and it would appear to be the reason I get serious bouts of lag on my server. The firewall has had little configuration apart from me blocking all the ports in "Access Rules" and allowing only the ones the server needs, and only from where it needs them. I guess what I am after is assistance with locating the issue that is causing "Dispatch Unit" to take up all the CPU. Regards

    --Edit-- With ASDM statistics I found that inbound packets (peaking at 70-100k/sec from a normal <1k/sec), inbound traffic (peaking at 40-50 kbit/sec from a normal <1 kbit/sec), and CPU all peak at the same time, so I am pretty sure it is an attack of some sort, but as a beginner with the ASA I am not sure how to resolve it.
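
    A few ASA CLI commands (over SSH, or via ASDM's Tools > Command Line Interface) that are commonly used to narrow this down; this is a sketch from memory and the output varies by ASA version:

        show cpu usage              ! 5-second/1-minute/5-minute CPU averages
        show processes cpu-usage    ! per-process view; Dispatch Unit appears here
        show traffic                ! per-interface packet and byte rates
        show conn count             ! current vs. peak connection count
        show perfmon                ! connection setup/teardown rates (floods show up here)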

  • Routing Traffic on Ubuntu to give Raspberry PI Internet Access

    - by Scruffers
    I'm hoping someone can point me in the right direction for setting up my Linux (Ubuntu 12.04) box to route traffic from eth0 to wlan0. I'll try to explain the problem I am trying to solve. I currently have two separate networks:

        [RaspberryPi/eth0] 192.168.2.2 / 255.255.255.0
                 ^
                 |
                 v
        [Ubuntu/eth0] 192.168.2.1 / 255.255.255.0

    And:

        [Ubuntu/wlan0] 192.168.1.100 / 255.255.255.0
                 ^
                 |
                 v
        [ADSL router] 192.168.1.1 / 255.255.255.0

    So currently, if I want to access the RaspberryPi, I can SSH from the Ubuntu box to the Pi. And if I want to use the Internet, I have full access from the Ubuntu box, but nothing from the RaspberryPi: the two networks are partitioned. What I would like to do is configure things so that the RaspberryPi has Internet access via the Ubuntu box and out to the Internet. I tried to create a bridge, but got the message "wlan0: operation not supported" (the wireless chipset is a Ralink RT3062). I'm sure giving the Raspberry Pi Internet access should be easy to do in this configuration, but I am a bit lost. Can someone point me in the right direction please?
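
    The bridge failure is expected with many wireless drivers, so the usual route is NAT instead; a minimal sketch using the interface names and addresses from the question:

        # on the Ubuntu box: enable forwarding and masquerade eth0 traffic out of wlan0
        sudo sysctl -w net.ipv4.ip_forward=1
        sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
        sudo iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
        sudo iptables -A FORWARD -i wlan0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

        # on the Raspberry Pi: route via the Ubuntu box and give it a resolver
        sudo route add default gw 192.168.2.1
        echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolv.conf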

  • Pointing a custom domain to a Tumblr blog

    - by Julius
    My domain mydomain.com is registered with GoDaddy. I wish to host my Tumblr blog on this domain with NearlyFreeSpeech.NET hosting. My active nameservers at GoDaddy already point to my authoritative ones at NFS.net, which is working. However, I'm baffled as to the correct configuration to point to my Tumblr. My preferences, in order:

        (A) http://mydomain.com hosts the blog, and http://www.mydomain.com redirects to it.
        (B) http://www.mydomain.com hosts the blog, and http://mydomain.com redirects to it.
        (C) A subdomain like http://tumblr.mydomain.com hosts the blog, and I guess http://mydomain.com and http://www.mydomain.com both redirect to it.

    I've tried having two aliases, mydomain.com and www.mydomain.com, pointing to my permanent NFS IP at mydomain.nfshost.com, and then:

        (1) When I try to add an A record pointing mydomain.com to the IP 66.6.44.4 as per Tumblr's instructions, it tells me I already have the bare domain as an alias, so I can't do that.
        (2) I can add the A record on the www.mydomain.com alias, whether www.mydomain.com is set as an alias or not. But when I tried this with mydomain.com set as the canonical name, visiting either mydomain.com or www.mydomain.com made them redirect to each other continually until an error was thrown.

    So I was wondering if there is a ninja who could save me some hair-pulling and tell me the correct way to configure A, or else B, or else C.
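
    For reference, a sketch of the record set Tumblr's own instructions described for preference (A); the A-record target is the one from the question and the CNAME target is the one Tumblr documented at the time, but both would have to replace the NFS.net aliases rather than coexist with them:

        mydomain.com.      IN  A      66.6.44.4            ; bare domain serves the blog
        www.mydomain.com.  IN  CNAME  domains.tumblr.com.  ; www handled by Tumblr

    The custom domain also has to be entered in the Tumblr blog's settings, otherwise Tumblr answers with its own error page.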

  • VMware: guests on two vSwitches can't communicate between them

    - by Aaron R.
    I have some servers in the configuration shown below (diagram not included). From VMGuest1 I am not able to ping either VMGuest3 or VMGuest4. I can, however, ping Host1 and Host2, which are attached to pSwitch1. The behavior is the same with VMGuest3 or 4 trying to ping VMGuest1 or 2. I don't have promiscuous mode enabled for any of these switches, nor do I have a bridge set up inside ESXi for the virtual switches. I know that one of these options is usually necessary when trying to get connectivity between two virtual switches; these switches are connected, however, through their respective physical switches, which are bridged together. Ping just times out, and an ARP request looks like this:

        [root@vmguest1:~]# arp -a vmguest3
        vmguest3.example.com (1.2.3.4) at <incomplete> on eth0
        [root@vmguest1:~]# arp -a host1
        host1.example.com (1.2.3.5) at 00:0C:64:97:1C:FF [ether] on eth0

    VMGuest1 can reach hosts on pSwitch1, so why can't it get to hosts on vSwitch1 through pSwitch1 the same way?
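
    A first diagnostic step, assuming standard vSwitches: an <incomplete> ARP entry means the reply never crossed the physical path, so compare the uplink and port-group VLAN layout on both hosts:

        # on each ESX/ESXi host's console: list vSwitches, uplinks, and port groups
        esxcfg-vswitch -l

    Mismatched VLAN tags between the two hosts' port groups, or an uplink plugged into the wrong physical switch port, would produce exactly this asymmetry.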

  • Windows 7 Default Gateway problem

    - by Matt
    I have a strange problem (or at least it seems strange to me). Below are the IP configurations for two laptops on my home network, which consists of a main router at 192.168.11.1 and a connected wireless router (I know this can cause problems, but it always worked until I got the Win7 machine) at 192.168.11.2 with DHCP disabled.

        Laptop 1 - Windows XP
        IP: dynamically assigned by the main router
        Default gateway: 192.168.11.1 (main router)

    This machine gets perfect connectivity.

        Laptop 2 - Windows 7
        IP: dynamically assigned by the main router
        Default gateway: 192.168.11.2

    THIS IS THE PROBLEM. I cannot seem to get this machine to default to the main router for the gateway UNLESS I switch to a static configuration, which I would rather not do since I regularly move between my home and public networks. Why is my Win7 machine not finding the main gateway the same way the other laptop does? I believe the rest of my setup is fine, as it has always worked, and it works perfectly when set with a static IP and gateway. Please help! Thanks
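
    A sketch of how to make the symptom visible on the Windows 7 laptop; the usual suspect is the wireless router still answering DHCP, or the laptop holding a stale lease from before DHCP was disabled on it:

        rem run in an elevated command prompt on the Win7 machine
        ipconfig /all
        rem note which "DHCP Server" address handed out the lease and which gateway it pushed
        ipconfig /release
        ipconfig /renew
        rem if the gateway now shows 192.168.11.1, a stale lease was the culprit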

  • Reverse DNS for two ADs in the same subnet

    - by SpacemanSpiff
    I currently have two separate AD forests that exist within the same subnet. The two forests have independent copies of the reverse lookup zone for that subnet. Example:

        Domain A DC1:        10.1.1.1/24
        Domain A DC2:        10.1.1.2/24
        Domain A AppServer1: 10.1.1.3/24
        Domain B DC1:        10.1.1.11/24
        Domain B DC2:        10.1.1.12/24
        Domain B AppServer1: 10.1.1.13/24

    What I'm after is a configuration that allows this reverse zone to be shared between them, so that both sets of DNS servers can make updates to it. This kind of thing is a little far from my everyday work, so a kick in the right direction is a welcome suggestion as well. Decoupling one AD into new segments is a possibility I'm open to but would like to avoid if possible. If there is a DNS-related solution, I'd prefer that.
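
    One commonly sketched compromise, since two forests cannot both dynamically update one AD-integrated zone: keep the reverse zone primary in Domain A and have Domain B's DNS servers carry it as a secondary. Hypothetical dnscmd invocations using the addresses above; only Domain A's side would then accept dynamic PTR updates:

        rem on a Domain A DNS server: restrict zone transfers to Domain B's DCs
        dnscmd /ZoneResetSecondaries 1.1.10.in-addr.arpa /SecureList 10.1.1.11 10.1.1.12

        rem on each Domain B DNS server: host the zone as a secondary
        dnscmd /ZoneAdd 1.1.10.in-addr.arpa /Secondary 10.1.1.1 10.1.1.2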

  • Unable to connect to Cygwin from Mac OS X by ssh

    - by skyjack
    I've started an SSH server on Windows 7 using Cygwin, and I'm trying to connect to it over SSH from Mac OS X Mavericks. It fails with the following error:

        ./ssh username@hostname -v
        OpenSSH_6.6, OpenSSL 1.0.1g 7 Apr 2014
        debug1: Reading configuration data /usr/local/etc/ssh/ssh_config
        debug1: Connecting to hostname [my ip] port 22.
        debug1: Connection established.
        debug1: identity file /Users/skyjack/.ssh/id_rsa type -1
        debug1: identity file /Users/skyjack/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/skyjack/.ssh/id_dsa type -1
        debug1: identity file /Users/skyjack/.ssh/id_dsa-cert type -1
        debug1: identity file /Users/skyjack/.ssh/id_ecdsa type -1
        debug1: identity file /Users/skyjack/.ssh/id_ecdsa-cert type -1
        debug1: identity file /Users/skyjack/.ssh/id_ed25519 type -1
        debug1: identity file /Users/skyjack/.ssh/id_ed25519-cert type -1
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.6
        ssh_exchange_identification: read: Connection reset by peer

    Meanwhile, I can connect successfully from Red Hat. OpenSSH version on Cygwin: OpenSSH_6.4p1, OpenSSL 1.0.1f 6 Jan 2014. OpenSSH version on Mac OS X: OpenSSH_6.6p1, OpenSSL 1.0.1g 7 Apr 2014. Please advise.
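
    Since the TCP connection is established and then reset during the banner exchange, the server side is worth checking first; a sketch of the usual Cygwin-side checks (service name and log path are the Cygwin defaults, not verified against this install):

        # in a Cygwin terminal on the Windows 7 box
        cygrunsrv -Q sshd           # is the sshd service installed and running?
        net start sshd              # start it if it is stopped
        tail /var/log/sshd.log      # look for errors (privilege separation, host keys)

        # if a firewall is suspected, from an elevated cmd prompt:
        #   netsh advfirewall firewall add rule name="sshd" dir=in action=allow protocol=TCP localport=22

    It is also worth ruling out a firewall or filtering difference that only affects the Mac's source address, since Red Hat connects fine.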

  • Change permission to /proc/net/ip_conntrack on Ubuntu server 9.10

    - by bjarkef
    I have a script that needs to extract certain information from the /proc/net/ip_conntrack file once in a while. I do not wish to run this script as the root user. Default permissions for the file are:

        $ ls -lah /proc/net/ip_conntrack
        -r--r----- 1 root root 0 2010-03-28 12:18 /proc/net/ip_conntrack

    I can change them with:

        sudo chmod o+r /proc/net/ip_conntrack

    But that does not stick after a reboot. Is there some configuration file for file permissions in the /proc directory in Ubuntu Server 9.10? Or do I just have to stick a chmod line in some startup script?
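
    There is no permissions file for /proc, since procfs is recreated on every boot; the "chmod line in some startup script" is the standard answer. A minimal sketch for Ubuntu 9.10 using /etc/rc.local:

        #!/bin/sh -e
        # /etc/rc.local runs as root at the end of boot; keep 'exit 0' as the last line
        chmod o+r /proc/net/ip_conntrack
        exit 0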

  • Connecting a server to more than one switch

    - by Jake
    I have three switches: one NETGEAR GS748T and two JGS524s. Initially I wanted to connect them in a triangle loop, but I later suspected that the JGS524 does not have spanning tree support. Now I have a Dell R710 with two NICs at the back. If I connect the server in between the two JGS524s, which both in turn connect to the GS748T, will that constitute a loop? According to my limited understanding, with two NICs there will be two IPs for the server. Will the file server even work or not? Theoretically speaking, will this configuration increase the access speeds for clients? Thanks.
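
    For what it's worth, a server with two NICs does not forward frames between them by default, so it does not close a switching loop by itself; each NIC simply gets its own IP on its own segment. If the goal is speed or failover, NIC teaming is the usual approach; a sketch for a Linux host in active-backup mode, which needs no switch-side support (an assumption, since the server's OS is not stated):

        # /etc/modprobe.d/bonding.conf
        alias bond0 bonding
        options bonding mode=1 miimon=100

        # /etc/network/interfaces (Debian/Ubuntu style, ifenslave installed)
        auto bond0
        iface bond0 inet static
            address 192.168.1.10
            netmask 255.255.255.0
            slaves eth0 eth1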

  • glassfish - Unknown error when trying port 4848

    - by Majid Azimi
    I'm installing GlassFish 3.1 on Windows XP Service Pack 3, but in the configuration step it gives this error:

        PERFORMING THE REQUIRED CONFIGURATIONS
        ______________________________________

        CREATING DOMAIN
        _______________

        Executing command: C:\glassfish3\glassfish\bin\asadmin.bat --user admin
          --passwordfile C:\DOCUME~1\MAJIDA~1\LOCALS~1\Temp\glassfish-3.1-windows-ml.exe6\asadminTmp1079044298673991344.tmp
          create-domain --savelogin --checkports=false --adminport 4848 --instanceport 8080
          --domainproperties=jms.port=7676:domain.jmxPort=8686:orb.listener.port=3700:http.ssl.port=8181:orb.ssl.port=3820:orb.mutualauth.port=3920
          domain1

        Unknown error when trying port 4848. Try a different port number.
        Command create-domain failed.
        CLI130 Could not create domain, domain1

    (The installer log shows the same asadmin.bat create-domain command a second time with a different temporary password file.) I changed 4848 to other ports, but it doesn't work. The firewall is completely disabled. Could anyone help?
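
    A generic way to see what is actually wrong with port 4848, assuming a socket conflict or a name-resolution problem rather than a GlassFish bug:

        rem does anything already own the port?
        netstat -ano | findstr :4848
        rem map the PID in the last column to a process name (1234 is a placeholder)
        tasklist /FI "PID eq 1234"
        rem sanity-check that localhost resolves and answers
        ping localhost

    A broken localhost entry in C:\WINDOWS\system32\drivers\etc\hosts can produce exactly this "unknown error on every port" behavior on XP.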

  • Best practice for scaling a single application source to multiple nodes

    - by Andrew Waters
    I have an application which needs to scale horizontally to cover web and service nodes (at the moment they're all on one) but interact with the same set of databases and source files (both application code and custom assets). The database is no problem; it's handled already with replication in MongoDB. Also, the configuration of the servers is the same (100% Linux). This question is literally about sharing a filesystem between machines so that its content is always correct, regardless of the node accessing it. My two thoughts so far have been NFS and SAN; a SAN is prohibitively expensive, and NFS sees some performance issues on the second node with regard to glob()ing in PHP. Does anyone have recommended strategies or other techniques that don't involve sharding data across nodes, or any potential gotchas in NFS that may cause slow disk seek times? To give you an idea of the scale, the main node initialises its application modules in ~0.01 seconds; the secondary takes ~2.2 seconds. They're VMs inside a local virtual network in ESXi, and ping time between them is ~0.3 ms.
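
    Two knobs that commonly close exactly this kind of gap (stat()/glob() storms over NFS for PHP code); a sketch of assumptions to test rather than a recipe:

        # NFS client: relax attribute revalidation for the (mostly read-only) code tree
        mount -t nfs -o ro,noatime,actimeo=60 filer:/export/app /var/www/app

        # php.ini: let PHP cache resolved paths instead of re-stat()ing per request
        realpath_cache_size = 4M
        realpath_cache_ttl  = 300

    The trade-off is that code deployments can take up to the actimeo/TTL window to become visible on the secondary node.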

  • VMware Fusion / New Machine / verify VMs?

    - by drewk
    I just got a 15" MacBook Pro with 8GB RAM, 2.66 i7 and 500GB SSD. Very sweet and fast! My old machine was 2.66 Core 2 Duo. I used Migration Assistant to migrate my old MBP to the new. When I started VMware to run Windows and Ubuntu I got the message first This VM has been moved or copied to a new machine. Did you move or copy it? I answered Copied. Then I got the message this CPU is a different configuration than the machine that created the VM. You may get unpredictable results. I answered Open Anyway. Windows XP, Windows 7, and Ubuntu all seem to work OK, but how can I verify? Is there some form of test harness for a VM? I have had a VM degrade and become unusable with very catastrophic data loss (on Parallels...), so I am a bit paranoid. Thanks,

  • .htaccess enabled, but not working.

    - by Aristotle
    I've been having some very frustrating issues getting .htaccess to work on my new server. I'm not very experienced in managing a server, but I've spent the last three days poring over documentation and every resource I can get my hands on. I've attempted a very basic application of .htaccess, and yet it fails to give the expected results. I've set up a test over at http://173.201.185.217/test/. Immediately within this directory is an .htaccess file with the following directive:

        deny from all

    If I'm not mistaken, this should deny access to /test, but it doesn't. It just sits there allowing anybody to view the contents. What could be wrong with my server/configuration? The server is running CentOS.
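
    The classic cause is AllowOverride None in the main Apache configuration, which makes Apache ignore .htaccess files entirely; a sketch of what to look for, using CentOS's default paths:

        # /etc/httpd/conf/httpd.conf
        <Directory "/var/www/html">
            AllowOverride All    # 'None' here means every .htaccess below is skipped
        </Directory>

        # then reload Apache
        sudo service httpd reload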

  • Ubuntu firefox: some web-pages stuck on loading

    - by kristaps.skujins
    I just installed Ubuntu 10.04 beta; the installation went fine and all programs are working smoothly. There's just this one weird network problem: some sites simply do not open and load forever. For example, google.com and YouTube are working well, but ubuntu.com and many others do not open at all, or only load partly. One thing I noticed is that on all of those pages, the Firefox status bar shows the message "Looking up www.google-analytics.com" (or similar remote resources) all the time (even on this page, though it somehow loaded and works). I should mention that I tried to open those pages under Windows on this same machine, and they opened without problems. So I am guessing it has to be some sort of network configuration problem on Ubuntu. What could cause such a problem?
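
    A quick, reversible way to test whether stalled DNS lookups are the blocker (a sketch, not a diagnosis):

        # short-circuit the lookup Firefox keeps waiting on
        echo "127.0.0.1 www.google-analytics.com" | sudo tee -a /etc/hosts

    If pages then load instantly, the resolver path is the problem; in Firefox's about:config, setting network.dns.disableIPv6 to true often helps on Lucid-era setups where routers mishandle AAAA queries.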

  • SSIS Catalog: How to use environment in every type of package execution

    - by Kevin Shyr
    Here is a good blog post on how to create an SSIS catalog and set up environments: http://sqlblog.com/blogs/jamie_thomson/archive/2010/11/13/ssis-server-catalogs-environments-environment-variables-in-ssis-in-denali.aspx

    Here I will summarize the three ways I know so far to execute a package while using variables set up in an SSIS catalog environment.

    First, we have an SSIS project with a reference to an environment, and with one of the project parameters using a value set up in an environment called "Development". With this setup, you are limited to calling the packages by right-clicking on them in the SSIS catalog list and selecting Execute, but you are free to choose an absolute or relative path to the environment. (A screenshot in the original post shows the two available paths to your SSIS environments.) Personally, I use the absolute path because of option 3, just to keep everything simple for myself.

    The second option is to call the package through a SQL Agent job. This does require you to configure your project to reference an environment and use its variables. When a job step is set up, the configuration part will require you to select that reference again. This is more useful when you want to automate the same package that needs to be run in different environments.

    The third option is the most important to me, as I have an SSIS framework that calls hundreds of packages. The main part of the stored procedure is in this post (http://geekswithblogs.net/LifeLongTechie/archive/2012/11/14/time-to-stop-using-ldquoexecute-package-taskrdquondash-a-way-to.aspx), but the top part had to be modified to include the logic to use an environment reference:

        CREATE PROCEDURE [AUDIT].[LaunchPackageExecutionInSSISCatalog]
            @PackageName NVARCHAR(255)
            , @ProjectFolder NVARCHAR(255)
            , @ProjectName NVARCHAR(255)
            , @AuditKey INT
            , @DisableNotification BIT
            , @PackageExecutionLogID INT
            , @EnvironmentName NVARCHAR(128) = NULL
            , @Use32BitRunTime BIT = 0
        AS
        BEGIN TRY
            DECLARE @execution_id BIGINT = 0;

            -- Create a package execution
            IF @EnvironmentName IS NULL
            BEGIN
                EXEC [SSISDB].[catalog].[create_execution]
                    @package_name = @PackageName,
                    @execution_id = @execution_id OUTPUT,
                    @folder_name = @ProjectFolder,
                    @project_name = @ProjectName,
                    @use32bitruntime = @Use32BitRunTime;
            END
            ELSE
            BEGIN
                DECLARE @EnvironmentID AS INT

                SELECT @EnvironmentID = [reference_id]
                FROM SSISDB.[internal].[environment_references] WITH(NOLOCK)
                WHERE [environment_name] = @EnvironmentName
                    AND [environment_folder_name] = @ProjectFolder

                EXEC [SSISDB].[catalog].[create_execution]
                    @package_name = @PackageName,
                    @execution_id = @execution_id OUTPUT,
                    @folder_name = @ProjectFolder,
                    @project_name = @ProjectName,
                    @reference_id = @EnvironmentID,
                    @use32bitruntime = @Use32BitRunTime;
            END

    (The procedure continues in the linked post.)
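
    A hypothetical invocation of the third option through sqlcmd; every value below is a placeholder:

        sqlcmd -S MYSERVER -d SSISDB -Q "EXEC [AUDIT].[LaunchPackageExecutionInSSISCatalog] @PackageName=N'LoadCustomers.dtsx', @ProjectFolder=N'ETL', @ProjectName=N'WarehouseLoad', @AuditKey=1, @DisableNotification=0, @PackageExecutionLogID=0, @EnvironmentName=N'Development'"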

  • Connecting/Adding a private network on Windows Server 2008

    - by WhyMe
    Hey all, I have a dual-server configuration at a hosting provider using VPS. I was told by my host that in order to get free bandwidth between my two servers (they are in the same location), I need to add an alias "subnet" on a specific IP (a private network, VPN). How do I add an aliased IP in Windows? In Linux, the relevant command is supposed to be (according to my searching around blogs):

        ifconfig eth0:1 10.129.175.165 netmask 255.255.255.0

    They also said that another way to connect the servers (which should also be faster) is to use a "private LAN", but as it happens I don't know how to define one :(. Is there a Windows equivalent, or another way to do this? I have checked my IP configuration and found no indication of the private LAN or the VPN IP.
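
    The Windows Server 2008 equivalent of that ifconfig alias is netsh (or the extra addresses under the adapter's TCP/IPv4 Advanced properties); a sketch using the IP from the question, where the interface name is the common default and may differ:

        netsh interface ipv4 add address "Local Area Connection" 10.129.175.165 255.255.255.0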

  • Where can I find details on installing and using Solr on Tomcat using the solr-common and solr-tomcat packages?

    - by danieltalsky
    I'm seeing lots of Solr/Tomcat installation instructions, but all of them install at least one of these packages from source. Since there are tomcat6, solr-common, and solr-tomcat packages (in Ubuntu 10.04), I'd love to use them, but I can't find any kind of documentation on installing with them. I can't even tell what directory Solr is stored in. I get a "Welcome to Solr!" page at http://localhost:8080/solr/, but have no idea where the Catalina home is or where the Solr configuration files are in this case. Can anyone point me to documentation?
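
    In the absence of package documentation, the package manager can answer the "which directory" question directly; a sketch (the paths in the comments are what these packages used around that release, worth verifying on your system):

        # list every file the packages installed
        dpkg -L solr-common solr-tomcat
        # typical locations: configuration under /etc/solr/conf
        # (schema.xml, solrconfig.xml), Solr home under /usr/share/solr,
        # and Tomcat's own tree under /usr/share/tomcat6 and /var/lib/tomcat6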

  • How to Set Up a Hadoop Cluster Using Oracle Solaris (Hands-On Lab)

    - by Orgad Kimchi
    Oracle Technology Network (OTN) published the "How to Set Up a Hadoop Cluster Using Oracle Solaris" OOW 2013 hands-on lab. This hands-on lab presents exercises that demonstrate how to set up an Apache Hadoop cluster using Oracle Solaris 11 technologies such as Oracle Solaris Zones, ZFS, and network virtualization. Key topics include the Hadoop Distributed File System (HDFS) and the Hadoop MapReduce programming model. We will also cover the Hadoop installation process and the cluster building blocks: NameNode, a secondary NameNode, and DataNodes. In addition, you will see how you can combine the Oracle Solaris 11 technologies for better scalability and data security, and you will learn how to load data into the Hadoop cluster and run a MapReduce job.

    Summary of Lab Exercises

    This hands-on lab consists of 13 exercises covering various Oracle Solaris and Apache Hadoop technologies:

        1. Install Hadoop.
        2. Edit the Hadoop configuration files.
        3. Configure the Network Time Protocol.
        4. Create the virtual network interfaces (VNICs).
        5. Create the NameNode and the secondary NameNode zones.
        6. Set up the DataNode zones.
        7. Configure the NameNode.
        8. Set up SSH.
        9. Format HDFS from the NameNode.
        10. Start the Hadoop cluster.
        11. Run a MapReduce job.
        12. Secure data at rest using ZFS encryption.
        13. Use Oracle Solaris DTrace for performance monitoring.

    Read it now
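
    As a taste of the network-virtualization step (exercise 4 above), creating a VNIC per Hadoop node on Solaris 11 looks roughly like this; the link and VNIC names are illustrative:

        # one virtual NIC per cluster node, all on top of the physical link net0
        dladm create-vnic -l net0 name_node1
        dladm create-vnic -l net0 sec_name_node1
        dladm create-vnic -l net0 data_node1
        dladm show-vnic    # verify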

  • How do I mount a CIFS share via FSTAB and give full RW to Guest

    - by Kendor
    I want to create a Public folder that has full RW access. The problem with my configuration is that Windows users have no issues as guests (they can read, write, and delete), while my Ubuntu client can't do the same: we can only write and read, but not create or delete. Here is the smb.conf from my server:

        [global]
        workgroup = WORKGROUP
        netbios name = FILESERVER
        server string = TurnKey FileServer
        os level = 20
        security = user
        map to guest = Bad Password
        passdb backend = tdbsam
        null passwords = yes
        admin users = root
        encrypt passwords = true
        obey pam restrictions = yes
        pam password change = yes
        unix password sync = yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        add user script = /usr/sbin/useradd -m '%u' -g users -G users
        delete user script = /usr/sbin/userdel -r '%u'
        add group script = /usr/sbin/groupadd '%g'
        delete group script = /usr/sbin/groupdel '%g'
        add user to group script = /usr/sbin/usermod -G '%g' '%u'
        guest account = nobody
        syslog = 0
        log file = /var/log/samba/samba.log
        max log size = 1000
        wins support = yes
        dns proxy = no
        socket options = TCP_NODELAY
        panic action = /usr/share/samba/panic-action %d

        [homes]
        comment = Home Directory
        browseable = no
        read only = no
        valid users = %S

        [storage]
        create mask = 0777
        directory mask = 0777
        browseable = yes
        comment = Public Share
        writeable = yes
        public = yes
        path = /srv/storage

    The following fstab entry doesn't yield full R/W access to the share:

        //192.168.0.5/storage /media/myname/TK-Public/ cifs rw 0 0

    This doesn't work either:

        //192.168.0.5/storage /media/myname/TK-Public/ cifs rw,guest,iocharset=utf8,file_mode=0777,dir_mode=0777,noperm 0 0

    Using the following location in Nemo/Nautilus without the share being mounted does work:

        smb://192.168.0.5/storage/

    Extra info: I just noticed that if I copy a file to the share after mounting, my Ubuntu client immediately makes "nobody" the owner, and the group "no group" has read and write, with everyone else read-only. What am I doing wrong?
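
    The "nobody" ownership is expected on a guest CIFS mount: with no credentials there is nothing to map the local user to, so ownership has to be forced from the client side. A sketch of the fstab line with the desktop user's uid/gid added (1000 is the usual first-user id on Ubuntu, an assumption):

        //192.168.0.5/storage  /media/myname/TK-Public  cifs  guest,uid=1000,gid=1000,file_mode=0777,dir_mode=0777,iocharset=utf8  0  0

    Then remount and check:

        sudo mount -a
        mount | grep TK-Public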

  • Sandcastle Help File Builder - October 2012 release

    - by TATWORTH
    The latest Sandcastle Help File Builder release is available at http://shfb.codeplex.com/releases/view/92191. I am pleased to say that it incorporates the generic version of a fix I originated that allows projects including Crystal Reports to be documented. Here is the relevant passage from the help file:

        "The default configuration for MRefBuilder has been updated to ignore the Crystal Reports licensing assembly (BusinessObjects.Licensing.KeycodeDecoder) if it fails to get resolved. This assembly does not appear to be available, and ignoring it prevents projects that include Crystal Reports assemblies from failing and being unbuildable."

    There are many other fixes. Here are the release notes:

        IMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right-click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog.

        This release supports the Sandcastle October 2012 release (v2.7.1.0). It includes full support for generating, installing, and removing MS Help Viewer files. This new release supports Visual Studio 2010 and 2012 solutions and projects as documentation sources, and adds support for projects targeting the .NET 4.5 Framework, .NET Portable Library 4.5, and .NET for Windows Store Apps.

        See the Sandcastle 2.7.1.0 release notes for details on all of the changes made to the underlying Sandcastle tools and presentation styles.

        This release uses the Sandcastle guided installation package. Download and extract it to a folder, then run SandcastleInstaller.exe for the guided installation of Sandcastle, the various extra items, the Sandcastle Help File Builder core components, and the Visual Studio extension package.

        What's included:

            Help 1 compiler check and instructions on where to download it and how to install it if needed
            Help 2 compiler check and instructions on where to download it and how to install it if needed
            Sandcastle October 2012 2.7.1.0
            An option to install the MAML schemas in Visual Studio to provide IntelliSense for MAML topics
            Sandcastle Help File Builder 1.9.5.0
            SHFB Visual Studio Extension Package

        For more information about the Visual Studio extension package, see the Visual Studio Package help file topic.

  • Apache + plesk vhost problem: .htaccess ignored!

    - by DaNieL
    Hi guys, I have a problem with a simple Apache configuration. When a user asks for https://mydomain.com, I have to redirect them to https://www.mydomain.com, because my HTTPS certificate is valid only for the domain with www. I created a vhost.conf in my /var/www/vhosts/mydomain.com/conf/ directory, containing:

        <Directory /var/www/vhosts/mydomain.com/httpsdocs>
            AllowOverride All
        </Directory>

    And my .htaccess file in /var/www/vhosts/mydomain.com/httpsdocs/ is:

        RewriteEngine on
        RewriteCond %{HTTPS_HOST} ^mydomain\.com
        RewriteRule ^(.*)$ https://www.mydomain.com/$1 [R=301,L]

    But it seems like the .htaccess is completely ignored. Any idea?
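
    Two things worth checking, sketched for a Plesk 9-era setup (the tool path and flags below are the usual Plesk locations; verify against your version). First, Plesk only merges vhost.conf into the generated Apache config after its reconfiguration tool runs; second, %{HTTPS_HOST} is not a standard Apache variable, so even once the .htaccess is read, the condition never matches:

        # make Plesk pick up conf/vhost.conf, then reload Apache
        /usr/local/psa/admin/sbin/websrvmng --reconfigure-vhost --vhost-name=mydomain.com

        # .htaccess using variables Apache actually sets
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^mydomain\.com$ [NC]
        RewriteRule ^(.*)$ https://www.mydomain.com/$1 [R=301,L]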

  • How do you run XBMC on nvidia dual screen and stop it from taking over the keyboard and mouse?

    - by Paul Swartout
    I have set up dual screen under Ubuntu 12.04. I have a GeForce 8500 GT and have used the nVidia control panel to set up dual screen in "separate screen mode". Here's the resulting xorg.conf:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 295.33 (buildd@zirconium) Fri Mar 30 13:38:49 UTC 2012

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            Screen      1  "Screen1" RightOf "Screen0"
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "Maxdata/Belinea B1925S1W"
            HorizSync       31.0 - 83.0
            VertRefresh     56.0 - 75.0
            Option         "DPMS"
        EndSection

        Section "Monitor"
            # HorizSync source: builtin, VertRefresh source: builtin
            Identifier     "Monitor1"
            VendorName     "Unknown"
            ModelName      "CRT-1"
            HorizSync       28.0 - 55.0
            VertRefresh     43.0 - 72.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce 8500 GT"
            BusID          "PCI:1:0:0"
            Screen          0
        EndSection

        Section "Device"
            Identifier     "Device1"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce 8500 GT"
            BusID          "PCI:1:0:0"
            Screen          1
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "TwinView" "0"
            Option         "metamodes" "CRT-0: nvidia-auto-select +0+0"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection

        Section "Screen"
            # Removed Option "metamodes" "CRT-1: 1280x768 +0+0"
            Identifier     "Screen1"
            Device         "Device1"
            Monitor        "Monitor1"
            DefaultDepth    24
            Option         "TwinView" "0"
            Option         "metamodes" "CRT-1: 1360x768_60 +0+0"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection

    All well and good, and I have a nice blank X screen displayed on my TV (the second monitor). I then fire up XBMC from a terminal on the PC monitor using:

        DISPLAY=:0.1 xbmc

    XBMC fires up quite nicely on the TV; however, I can no longer use the main PC monitor / mouse / keyboard, as XBMC on the TV screen seems to have focus. I was hoping to have XBMC running on the TV and let the kids use the MCE remote whilst I get on with my work on the PC monitor. Does anyone have any idea how to overcome this? I'm presuming there's some xorg.conf fun and games needed, but I've no idea where to start, to be honest.
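
    One workaround pattern, offered as a sketch rather than a tested recipe: leave XBMC on :0.1 but hand keyboard focus straight back to a window on the primary screen with xdotool (the window name "Terminal" is a placeholder):

        # launch XBMC on the second X screen, then re-focus the work screen
        DISPLAY=:0.1 xbmc &
        sleep 10
        DISPLAY=:0.0 xdotool search --name "Terminal" windowactivate

    With separate (non-Xinerama) screens the pointer can still cross between them, so XBMC may re-grab input whenever the mouse strays onto the TV screen; that part genuinely is xorg.conf/window-manager fun and games.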

  • Staggering Java process startup on Linux to prevent OOM

    - by ctennis
    I am running a number of Java processes on a single Linux machine. From a memory and computing standpoint, everything is fine when things are static. However, periodically we use a configuration management package to upgrade the jar or war files and restart the Java processes. The problem is that it restarts them all relatively quickly, so we get ten or so JVMs restarting at the same time (we use daemontools for the service stops/starts), which wreaks havoc on the machine in terms of OOMs, or everything just becomes really slow, because it's spawning the JVM 10x simultaneously. Other than trying to stagger the startups, is there a smarter way of handling this? Maybe a sysctl performance-tuning parameter, or a JVM parameter?
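
    One way to stagger without hard-coding sleeps into the configuration management run: serialize the startup window itself with a host-wide lock in each daemontools ./run script. A sketch (the lock path, delay, and java invocation are illustrative):

        #!/bin/sh
        # daemontools ./run: only one JVM may be inside its "warm-up window"
        # at a time, so mass restarts space themselves out automatically
        (
            flock -x 9      # wait for our turn
            sleep 20        # warm-up window; tune to your JVMs' startup cost
        ) 9>/var/lock/jvm-start.lock
        exec java -Xmx512m -jar /opt/app/app.jar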

  • Is my XML cached by IIS 7?

    - by user22817
    Hi guys, I have a Flash player on my website. Behind it, there is an XML file which holds the configuration of the products shown in that player. I do not understand why, if I modify something in the XML config file, the player is not updated at that moment, but only after hours. Do you know why? I have not defined any rule for output caching in IIS 7, so is there some default at work? Of course, I've tried Ctrl+F5, but it does not work. Thanks.
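
    If IIS is the culprit, static-content caching can be disabled for just that file; a sketch of a web.config placed next to the XML (standard IIS 7 configuration elements; the file name is a placeholder):

        <!-- web.config in the folder holding the player's XML -->
        <configuration>
          <location path="config.xml">
            <system.webServer>
              <staticContent>
                <clientCache cacheControlMode="DisableCache" />
              </staticContent>
            </system.webServer>
          </location>
        </configuration>

    That said, the Flash plugin caches loaded XML aggressively on the client; appending a cache-busting query string (e.g. config.xml?v=123) when the player requests the file is the usual guaranteed fix.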
