Search Results

Search found 7776 results on 312 pages for 'configure in'.


  • Configuring Windows 2003 As A Router

    - by Sean M
    I am trying to configure a Windows 2003 server to act as a router, so that the two subnetworks that I'm dealing with can communicate with one another without NAT. I am mostly sure that I have configured Windows 2003 incorrectly, and I'm finding it very difficult to drill down through Google results to something helpful.

    I have a 192.168.1.0/24 network that is my "production" network (in the sense that I'm in trouble if I screw it up) and a 10.0.0.0/8 network that is my test network. The 192.168.1.0 network is ruled by a gateway whose routing table looks like this (my address redacted):

    The Windows 2003 server, "prime," is multihomed. Its network adapters are at 192.168.1.122 (as seen above), 10.0.0.1, and 10.0.0.2. I added the Routing and Remote Access role to it, and enabled LAN routing. I do not have it using RIP or other routing protocols. Its current routing table is shown below.

    To me, it looks like all of the right routes are there for traffic to pass between the 192.168.1.0 network and the 10.0.0.0 network. However, traffic does not pass. The 10.0.0.11 and .12 clients cannot be contacted from the 192.168.1.0 network. When I use traceroute to try to get to them, the trace gets to the Windows 2003 server's 192.168.1.122 address, then produces nothing but "* * *" timeouts. When I try to traceroute to 192.168.1.1 from a 10.0.0.0-network client, I get "destination host unreachable." However, I know that the routing is working at least a little, because from the 192.168.1.0 network, I can connect to the Windows server just fine by referring to it as 10.0.0.1.

    What static routes would allow me to contact 10.0.0.11 and .12 from the 192.168.1.0 network? Is it possible to tell the Windows server "since you are a DHCP/DNS server, you already know routes to get to machines that are getting IP addresses from you, please add those to your routing table"? Will using RIP or OSPF on the Windows server actually be helpful in this situation?
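
    For reference, a minimal sketch of the static-route approach being asked about (not from the original post; the addresses are taken from the question): the 192.168.1.0/24 side needs to learn that 10.0.0.0/8 is reachable via the multihomed server, either on the production gateway or, for a quick test, on an individual client:

        :: run on the production gateway or on a 192.168.1.x test client
        :: (-p makes the route persist across reboots)
        route add 10.0.0.0 mask 255.0.0.0 192.168.1.122 -p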

    Read the article

  • Link aggregation with FreeBSD 8 and a Cisco 3550: what am I doing wrong?

    - by Flamewires
    Hey, I am trying to set up link aggregation with LACP (well, anything that provides increased bandwidth and failover using my setup will work). I'm running FreeBSD 8.0 on 3 machines. M1 is running 2 10/100 Ethernet cards set up for link aggregation using lagg. For reference:

        ifconfig em0 up
        ifconfig tx0 up
        ifconfig lagg0 create
        ifconfig lagg0 laggproto lacp laggport tx0 laggport em0 192.168.1.16 netmask 255.255.255.0

    I plugged them into ports 1 and 2 of a Cisco 3550, then ran:

        configure terminal
        interface range Fa0/1 - 2
        switchport mode access
        switchport access vlan 1
        channel-group 1 mode active

    (Everything is in VLAN 1.) Now I'm able to connect the other computers to other ports on the switch, and failover works great: I can unplug cables in the middle of a transfer and the traffic gets rerouted. However, I'm not noticing any speed increase.

    My test setup: for load balancing, I tried dst and src on the switch; neither seemed to give me a speed increase. I am SCPing two 500 MB files from the lagg computer to other computers (one each) which are also running 10/100 full-duplex cards. I get transfer speeds of about 11.2-11.4 Mbps to a single host, and about half that (5.9-6.2 Mbps) when transferring to both at the same time. From what I understood, with destination load balancing the switch was supposed to balance traffic headed for one computer over one port and traffic headed for another over the other port (in this case):

        With destination-MAC address forwarding, when packets are forwarded to an EtherChannel, the packets are distributed across the ports in the channel based on the destination host MAC address of the incoming packet. Therefore, packets to the same destination are forwarded over the same port, and packets to a different destination are sent on a different port in the channel. For the 3550 series switch, when source-MAC address forwarding is used, load distribution based on the source and destination IP address is also enabled for routed IP traffic. All routed IP traffic chooses a port based on the source and destination IP address. Packets between two IP hosts always use the same port in the channel, and traffic between any other pair of hosts can use a different port in the channel. (Link)

    What am I doing wrong, and what would I need to do to see a speed increase beyond what I could do with just a single card?
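
    As a side note, a hedged sketch of making the same lagg persistent across reboots on FreeBSD 8 (interface names and the address are the ones from the question; this uses the stock rc.conf mechanism, not something from the post). Keep in mind that LACP hashes each flow to a single member port, so any one TCP stream tops out at one link's speed:

        # /etc/rc.conf
        ifconfig_em0="up"
        ifconfig_tx0="up"
        cloned_interfaces="lagg0"
        ifconfig_lagg0="laggproto lacp laggport em0 laggport tx0 192.168.1.16 netmask 255.255.255.0"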

    Read the article

  • Losing internet connectivity on server after installing LogMeIn Hamachi (with server set as gateway node)

    - by Kim Jong-Un
    Our domain controller (SBS 2003) completely lost internet and network connectivity yesterday after I remotely installed LogMeIn Hamachi on it and set it to be a gateway node, in an attempt to create a VPN link between the server and a remote site. I had to go in to the office to resolve the problem as, unsurprisingly, my own remote access to the server was also lost. I was only able to restore network connectivity by deleting a virtual network adapter Hamachi created when making the server the gateway node (called "Hamachi bridge," I believe), then rebooting the server.

    This is a repeatable problem. Every time I try to get this to work, it just takes the server offline. Why would this bridge affect regular TCP/IP connectivity on the NIC in this way?

    I have tried a "hub-and-spoke" configuration between the server and our PC at a remote site (server set as hub, remote site as spoke). This caused no such problems with general internet connectivity, and file transfer worked well between the two computers. However, there was a DNS issue with the VPN between the two sites, resulting in Active Directory not being able to communicate between them (could not log on using domain user accounts at the remote site if they were not already cached on that machine). I only tried a "gateway" network as LogMeIn support told me:

        If you can get the Active Directory to work it would only be through a "Gateway" network type with the server acting as the Gateway Node. You would configure the gateway settings on the server in the Hamachi client on that machine to push whatever IP's/DNS settings you prefer and at that point AD would be able (all things being equal) communicate to the client node when it attaches. We do not have any Active Directory configuration info as that's outside the scope of our support. I hope this helps.

    It would be fantastic if I could get Active Directory to work over a Hamachi VPN connection, without worrying about the server going offline in this way. Does anyone have any ideas how I should proceed, or any theories as to what is going on when I try to use the "gateway" network type? I want to try to narrow down what is going on here.

    Read the article

  • Ubuntu box static routing problem

    - by Rafael
    Hello, I'm trying to configure an Ubuntu server to be a router. This is my interface configuration (eth2 connects to my WAN, eth0 to my LAN):

        auto eth2
        iface eth2 inet static
            address 192.168.0.249
            netmask 255.255.255.0
            gateway 192.168.0.1
            broadcast 192.168.0.255

        auto eth0
        iface eth0 inet static
            address 192.168.100.1
            netmask 255.255.255.0

    This is the routing information:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth2
        0.0.0.0         192.168.0.1     0.0.0.0         UG    100    0        0 eth2

    And this is the DHCP configuration:

        subnet 192.168.100.0 netmask 255.255.255.0 {
            range 192.168.100.101 192.168.100.254;
            option domain-name-servers 201.70.86.133;
            option routers 192.168.100.1;
            authoritative;
        }

    I'm then connecting a Mac OS X machine by cable on eth0. This is the en0 interface configuration:

        en0: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
            ether 00:26:bb:5d:82:b0
            inet6 fe80::226:bbff:fe5d:82b0%en0 prefixlen 64 scopeid 0x4
            inet 192.168.100.101 netmask 0xffffff00 broadcast 192.168.100.255
            media: autoselect (100baseTX <full-duplex>)
            status: active

    And this is the routing table:

        Internet:
        Destination        Gateway            Flags    Refs      Use  Netif Expire
        default            192.168.100.1      UGSc      139       32    en0
        10.37.129/24       link#8             UC          2        0  vnic1
        10.37.129.2        0:1c:42:0:0:9      UHLWI       0      839    lo0
        10.37.129.255      ff:ff:ff:ff:ff:ff  UHLWbI      0        4  vnic1
        10.211.55/24       link#7             UC          2        0  vnic0
        10.211.55.2        0:1c:42:0:0:8      UHLWI       0      840    lo0
        10.211.55.255      ff:ff:ff:ff:ff:ff  UHLWbI      0        4  vnic0
        127                127.0.0.1          UCS         0        0    lo0
        127.0.0.1          127.0.0.1          UH          3   507924    lo0
        169.254            link#4             UCS         0        0    en0
        172.16.42/24       link#10            UC          2        0  vmnet8
        172.16.42.1        0:50:56:c0:0:8     UHLWI       0      839    lo0
        172.16.42.255      link#10            UHLWbI      1       24  vmnet8
        192.168.100        link#4             UC          2        0    en0
        192.168.100.1      0:e0:7c:7e:f:99    UHLWI     139        0    en0    777
        192.168.100.101    127.0.0.1          UHS         0        0    lo0
        192.168.100.255    ff:ff:ff:ff:ff:ff  UHLWbI      0        4    en0
        192.168.116        link#9             UC          2        0  vmnet1
        192.168.116.1      0:50:56:c0:0:1     UHLWI       0      839    lo0
        192.168.116.255    ff:ff:ff:ff:ff:ff  UHLWbI      0        4  vmnet1

    When I ping 192.168.100.1, it works. When I ping 192.168.0.249, it also works. However, when I try to ping 192.168.0.1 it does not. Does anyone have any way to solve this? Is there a way to debug it? Thanks,
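
    For reference, a minimal sketch of the two pieces such a setup usually needs (standard Linux commands, not from the post): the Ubuntu box must have IP forwarding enabled, and, since 192.168.0.1 has no route back to 192.168.100.0/24, either that return route must be added on 192.168.0.1 or the router must masquerade:

        # enable forwarding (persist via net.ipv4.ip_forward=1 in /etc/sysctl.conf)
        sysctl -w net.ipv4.ip_forward=1
        # NAT the LAN out the WAN interface if 192.168.0.1 can't be given a return route
        iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE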

    Read the article

  • How to diagnose and resolve: /usr/lib64/libz.so.1: no version information available

    - by matchew
    I had a hell of a time installing lxml for Python 2.7 on CentOS 5.6. For some background, Python 2.7 is an alternative installation of Python on CentOS 5.6, which comes with Python 2.4 installed. It was built from source per its instructions:

        ./configure
        make
        make altinstall

    However, after about 20 hours of trying I managed to find a workable solution and was able to install lxml. Until I noticed the following error at the top of the interpreter:

        python2.7: /usr/lib64/libz.so.1: no version information available (required by python2.7)
        Python 2.7.2 (default, Jun 30 2011, 18:55:26)
        [GCC 4.1.2 20080704 (Red Hat 4.1.2-50)] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        >>> print 'Sheeeeut!'

    This error is printed out every time I run a script. For example:

        $ ./test.py
        /usr/local/bin/python2.7: /usr/lib64/libz.so.1: no version information available (required by /usr/local/bin/python2.7)

    The script runs flawlessly, but this error is bothersome. After some digging I have come to believe I have a wrong version of libz installed; it is either an older version or built for a different platform. I'm not quite sure how. I've only installed libz through yum, as far as I know, although I can't quite remember every little thing I tried in my twenty hours of trying. You may also be interested in what my lib64 folder looks like; here is some information:

        $ ls -ltrh libz*
        -rwxr-xr-x 1 root root  84K Jan  9  2007 libz.so.1.2.3
        -rwxr-xr-x 1 root root 107K Jan  9  2007 libz.a
        -rwxr-xr-x 1 root root 154K Feb 22 23:30 libzdb.so.7.0.2
        lrwxrwxrwx 1 root root   13 Apr 20 20:46 libz.so.1 -> libz.so.1.2.3
        lrwxrwxrwx 1 root root   15 Jun 30 18:43 libzdb.so.7 -> libzdb.so.7.0.2
        lrwxrwxrwx 1 root root   13 Jul  1 11:35 libz.so -> libz.so.1.2.3
        lrwxrwxrwx 1 root root   15 Jul  1 11:35 libzdb.so -> libzdb.so.7.0.2

    Notice: the items that say Jul 1st or Jun 30th are from me. I had initially moved these files into a backup folder, as they seemed to be (1) duplicates and (2) dated after/during the problems I alluded to earlier with lxml. One inclination is to completely remove Python 2.7 and reinstall. I think having it install to /usr/local/ was a poor default choice. However, without a make uninstall option being present, it seems to be a time-consuming task for a solution I am not quite sure would solve my problem.
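
    A hedged diagnostic sketch (standard zlib/RPM commands, not from the post): versioned zlib builds export ZLIB_1.x symbol tags, so checking for them, and verifying the file against its owning package, should show whether this libz is the stock CentOS one:

        strings /usr/lib64/libz.so.1.2.3 | grep ZLIB_   # a versioned build lists ZLIB_1.x tags
        rpm -qf /usr/lib64/libz.so.1.2.3                # which package owns the file
        rpm -V zlib                                     # verify it hasn't been replaced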

    Read the article

  • Installing OpenLDAP: ldap_bind: Invalid credentials (49)

    - by Arcturus
    Hello. I've been trying to set up the OpenLDAP installed by default on Fedora 12, very unsuccessfully. My ultimate goal is to use LDAP authentication for user login and Apache, using the OpenLDAP server running on the same machine. The server is running, but the error I always get when I try to use ldapsearch or ldapadd is:

        ldap_bind: Invalid credentials (49)

    I've been following these tutorials, but none of them helped me:

    http://www.howtoforge.com/openldap_fedora7
    http://www.redhat.com/docs/manuals/linux/RHL-9-Manual/ref-guide/s1-ldap-quickstart.html
    http://www.howtoforge.com/linux_ldap_authentication
    http://docs.fedoraproject.org/deployment-guide/f12/en-US/html/s1-ldap-pam.html
    http://www.openldap.org/doc/admin24/quickstart.html

    First, some components were already installed, and I installed these with yum:

        yum install openldap-servers openldap-devel

    Then, I created a basic slapd.conf file in /etc/openldap:

        database bdb
        suffix "dc=sniejana-sandbox,dc=com"
        rootdn "cn=root,dc=sniejana-sandbox,dc=com"
        rootpw {SSHA}cxdz55ygPu4T3ykg7dgu+L0VRvsFSeom
        directory /var/lib/ldap/sniejana-sandbox.com

    I obtained the rootpw with this command:

        slappasswd -s changeme

    I also created the /var/lib/ldap/sniejana-sandbox.com directory and made sure the entire contents of /var/lib/ldap were owned by the ldap user. I found two ldap.conf files, one in /etc and one in /etc/openldap. I don't know which is the right one. If I understood correctly, this file is to configure the client. I put this in both:

        HOST localhost
        BASE dc=sniejana-sandbox,dc=com

    I then ran the server with:

        service slapd start

    It said OK. Most of the tutorials above say to use the command ldapsearch -D "cn=Manager,dc=my-domain,dc=com" -W to ensure that everything's working. When I execute this command, a password prompt appears, and after entering the password, I get the error:

        ldapsearch -D "cn=root,dc=sniejana-sandbox,dc=com" -W
        Enter LDAP password:
        ldap_bind: Invalid credentials (49)

    The same thing happens when trying to use ldapadd. I tried with an encrypted and an unencrypted password in slapd.conf; it doesn't change anything. Adding a -x for simple authentication doesn't change anything either. netstat -ap confirms the server is listening:

        tcp    0    0 *:ldap    *:*    LISTEN    4148/slapd
        tcp    0    0 *:ldap    *:*    LISTEN    4148/slapd

    ps -ef|grep slapd confirms the process is running:

        ldap    4148    1  0 15:22 ?    00:00:00 /usr/sbin/slapd -h ldap:/// -u ldap

    Running slaptest produces "config file testing succeeded". I read somewhere that the command ldapsearch -x -b '' -s base '(objectclass=*)' namingContext can confirm the server is running. It appears to work:

        # extended LDIF
        #
        # LDAPv3
        # base <> with scope baseObject
        # filter: (objectclass=*)
        # requesting: namingContext
        #

        # dn:

        # search result
        search: 2
        result: 0 Success

        # numResponses: 2
        # numEntries: 1

    I'm running out of ideas. Am I missing something obvious?
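
    A hedged sanity-check sketch (standard OpenLDAP tooling; the DN and password are the ones from the question): bind explicitly with simple auth and the password on the command line, and regenerate the hash to compare formats. SSHA hashes are salted, so the strings will differ, but the prefix and length should match what is in slapd.conf:

        slappasswd -h {SSHA} -s changeme
        ldapsearch -x -H ldap://localhost -D "cn=root,dc=sniejana-sandbox,dc=com" -w changeme -b "dc=sniejana-sandbox,dc=com"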

    Read the article

  • Checking the configuration of two systems to determine changes

    - by None
    We are standing up a replicant data center at work and need to ensure that the new data center is configured (nearly) identically to the original. The new data center will be differently addressed and named than the original and will have differing user accounts, but all the COTS, patches, and configurations should be the same.

    We would normally ghost the original servers and install those images onto the new machines; however, we have a few problematic pieces of COTS that require we install them outside of an image, due to how they capture the setup of the network during their installation and maintain it within their configuration information (in some cases storing it in various databases). We have tried multiple times, and this piece of COTS cannot be captured within a ghost image unless the destination machine will have an identical network setup (all the same IPs, hostnames, user accounts, etc. across the entire network) as the original. In truth, it is the setup of these special COTS that I want to audit the most, because they are difficult to install and configure in the first place.

    In light of the fact that we can't simply ghost, I'm trying to find a reasonable manner to audit the new data center and check to see if it is set up like the original (some sort of system-wide configuration audit or integrity check). I'm considering using something like Tripwire for Servers to capture the configuration on the source machines and then run an audit on the destination machines. I understand that it will still show some differences due to the minor config changes, but I'm hoping that it will eliminate the majority of the work.

    Here are some of the constraints I'm working under:

      - Data center is comprised of multiple Windows and Linux machines of differing versions (about 20 total)
      - I absolutely cannot ghost or snap any other type of image of these machines ... at least not in their final configuration
      - I want to audit the final configuration to ensure all of the COTS, patches, configurations, etc. are installed and set up properly (as compared to the original data center)
      - I would rather not install any additional tools on these machines ... I'd much rather run it from a standalone machine or off a DVD
      - Price of tools is important but not an impossible burden; however, getting a solution soon is important (I can't take the time to roll my own tools to do this)
      - For the COTS that stores the network information, I don't know all of the places it stores the network information ... so it would be unlikely I could find a way in the near future to adjust its setup after the installation has occurred

    Anyone have any thoughts or alternate approaches? Can anyone recommend tools that would be usable for system-wide configuration audits?

    Read the article

  • How do I combine static and dynamic DHCP leases on a Cisco router?

    - by Brad
    Basically, what I need is super similar to the unanswered Cisco forum question below:

    https://supportforums.cisco.com/message/3139749#3139749

    I have a Cisco 850 Series router. I have configured a DHCP pool for the 10.0.0.0/24 network. I have excluded 10.0.0.1 - 10.0.0.99 from the DHCP pool. I want to add a static DHCP pool for stuff, and I want DHCP to statically assign them the addresses of my choice below 100. Actually, I don't care what addresses I statically assign. They can be anything in the pool for all I care; I just want it to work.

    Why are you doing this? Just statically assign the IPs on the devices! I don't want to do this because I have some laptop users. They could obviously only use that static IP here. This isn't a problem if they could be bothered to change any location setting or something. They can't. So it HAS to be DHCP. It also has to be static IPs because I need to forward ports to them. I know, I know, this is weird, but it's an apartment LAN/WLAN, so this isn't exactly a typical use case.

    Relevant sections of config below:

        ip dhcp excluded-address 10.0.0.1 10.0.0.99
        !
        ip dhcp pool Internal-net
           import all
           network 10.0.0.0 255.255.255.0
           default-router 10.0.0.1
           domain-name 1770.local
           lease 7
        !
        ip dhcp pool static-pool
           import all
           origin file flash://staticmap
           default-router 10.0.0.1
           domain-name 1770.local

    Contents of staticmap:

        *time* Aug 5 2010 09:00 AM
        *version* 2
        !IP address     Type   Hardware address   Lease expiration
        10.0.0.100/24   1      001f.5b3e.d50a     Infinite
        *end*

    You can see here I was trying addresses outside the excluded-address range to see if that would make any difference. My testing machine's MAC:

        mainframe:~ brad$ ifconfig en1
        en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
            ether 00:1f:5b:3e:d5:0a

    What shows up in the DHCP binding table:

        basestar#show ip dhcp binding
        Bindings from all pools not associated with VRF:
        IP address      Client-ID/              Lease expiration        Type
                        Hardware address/
                        User name
        10.0.0.112      0100.1f5b.3ed5.0a       Aug 12 2010 10:06 AM    Automatic

    What's up with the funny-looking MAC in the DHCP binding table?? Is what I'm trying to accomplish basically impossible? Am I going about this the wrong way? All I want is to be able to forward some ports to specific devices. The way I would do this with a consumer router is to do what I'm trying to do here: assign static DHCP to those devices, then configure PAT for ports on those addresses.
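
    On the "funny-looking MAC": that binding-table value is the client identifier, which for Ethernet is the MAC address prefixed with 01. A hedged sketch of the per-host-pool alternative (standard IOS DHCP syntax; the pool name and address here are invented for illustration, and the client-id is derived from the question's MAC):

        ip dhcp pool MAINFRAME
           host 10.0.0.50 255.255.255.0
           client-identifier 0100.1f5b.3ed5.0a
           default-router 10.0.0.1
           domain-name 1770.local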

    Read the article

  • Authenticate to VM using vagrant up

    - by utrecht
    Authentication failure during vagrant up, while vagrant ssh and ssh vagrant@localhost -p2222 work.

    I would like to execute a shell script using Vagrant at boot. Vagrant is unable to authenticate, while the VM has been started using vagrant up:

        c:\temp\helloworld>vagrant up
        Bringing machine 'default' up with 'virtualbox' provider...
        ==> default: Importing base box 'helloworld'...
        ==> default: Matching MAC address for NAT networking...
        ==> default: Setting the name of the VM: helloworld_default_1398419922203_60603
        ==> default: Clearing any previously set network interfaces...
        ==> default: Preparing network interfaces based on configuration...
            default: Adapter 1: nat
        ==> default: Forwarding ports...
            default: 22 => 2222 (adapter 1)
        ==> default: Booting VM...
        ==> default: Waiting for machine to boot. This may take a few minutes...
            default: SSH address: 127.0.0.1:2222
            default: SSH username: vagrant
            default: SSH auth method: private key
            default: Error: Connection timeout. Retrying...
            default: Error: Authentication failure. Retrying...
            default: Error: Authentication failure. Retrying...
            default: Error: Authentication failure. Retrying...
            default: Error: Authentication failure. Retrying...
        ...

    After executing CTRL + C it is possible to authenticate to the VM using vagrant ssh and ssh vagrant@localhost -p2222.

    Vagrantfile: I use the default Vagrantfile and I only changed the hostname:

        # -*- mode: ruby -*-
        # vi: set ft=ruby :

        # Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
        VAGRANTFILE_API_VERSION = "2"

        Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
          # All Vagrant configuration is done here. The most common configuration
          # options are documented and commented below. For a complete reference,
          # please see the online documentation at vagrantup.com.

          # Every Vagrant virtual environment requires a box to build off of.
          config.vm.box = "helloworld"
        ...

    Vagrant version:

        c:\temp\helloworld>vagrant --version
        Vagrant 1.5.1

    Question: how do I authenticate to the VM using vagrant up?
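
    A hedged sketch (stock Vagrant settings; the assumption is that the box was packaged with the default "vagrant/vagrant" credentials rather than Vagrant's insecure key): telling Vagrant explicitly which credentials the box uses often clears the boot-time authentication loop:

        Vagrant.configure("2") do |config|
          config.vm.box = "helloworld"
          # assumption: the box accepts password auth with the default credentials
          config.ssh.username = "vagrant"
          config.ssh.password = "vagrant"
          # or, if the box carries a custom key:
          # config.ssh.private_key_path = "path/to/box_private_key"
        end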

    Read the article

  • Vagrant-aws not provisioning

    - by SuperCabbage
    I'm trying to spin up and provision an EC2 instance with Vagrant. It successfully creates the instance, and I can then use vagrant ssh to SSH into it, but Puppet doesn't seem to carry out any provisioning. Upon running vagrant up --provider=aws --provision I get the following output:

        Bringing machine 'default' up with 'aws' provider...
        WARNING: Nokogiri was built against LibXML version 2.8.0, but has dynamically loaded 2.9.1
        [default] Warning! The AWS provider doesn't support any of the Vagrant
        high-level network configurations (`config.vm.network`). They will be
        silently ignored.
        [default] Launching an instance with the following settings...
        [default]  -- Type: m1.small
        [default]  -- AMI: ami-a73264ce
        [default]  -- Region: us-east-1
        [default]  -- Keypair: banderton
        [default]  -- Block Device Mapping: []
        [default]  -- Terminate On Shutdown: false
        [default] Waiting for SSH to become available...
        [default] Machine is booted and ready for use!
        [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/ => /vagrant
        [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/manifests/ => /tmp/vagrant-puppet/manifests
        [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/modules/ => /tmp/vagrant-puppet/modules-0
        [default] Running provisioner: puppet...
        An error occurred while executing multiple actions in parallel.
        Any errors that occurred are shown below.

        An error occurred while executing the action on the 'default'
        machine. Please handle this error then try again:

        No error message

    I can then SSH into the instance by using vagrant ssh, but none of my provisioning has taken place, so I'm assuming that errors have occurred but I'm not being given any useful information relating to them.

    My Vagrantfile is as follows:

        Vagrant.configure("2") do |config|
          config.vm.box = "ubuntu_aws"
          config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"

          config.vm.provider :aws do |aws, override|
            aws.access_key_id = "REDACTED"
            aws.secret_access_key = "REDACTED"
            aws.keypair_name = "banderton"
            override.ssh.private_key_path = "~/.ssh/banderton.pem"
            override.ssh.username = "ubuntu"
            aws.ami = "ami-a73264ce"
          end

          config.vm.provision :puppet do |puppet|
            puppet.manifests_path = "manifests"
            puppet.module_path = "modules"
            puppet.options = ['--verbose']
          end
        end

    My Puppet manifest is as follows:

        package { [
          'build-essential',
          'vim',
          'curl',
          'git-core',
          'nano',
          'freetds-bin'
        ]:
          ensure => 'installed',
        }

    None of the packages are installed.

    Read the article

  • Avoid read-write access to bad sectors on HDD to continue working on the HDD

    - by goldenmean
    I have an HP Pavilion dv6446 notebook. It had Windows Vista Home Premium. After 4.5+ years of usage, just recently it started malfunctioning. While working fine, its screen goes white, or sometimes shows thin black horizontal lines; the laptop freezes. A hard reboot works. Again it works for some 2 hours or so, then the same error.

    To diagnose, I ran the memory and hard disk checks present in the BIOS setup. The memory test passed. The hard disk test returned an error saying something like "Replace the hard disk." Bad... some sectors or platters have gone bad on the disk. (I confirmed this later by further tests mentioned below.)

    Then I tried installing Ubuntu 11.10. It listed 3 partitions /dev/sda1, sda2, sda3. It again gave an error and could not install the GRUB loader on /dev/sda1. Bad sectors. Then I redid the Ubuntu installation, this time choosing to install Ubuntu on /dev/sda3 and keeping /dev/sda1 for /home. It installed fine, and works fine as well.

    Due to the unavailability of WiFi/Ethernet drivers for those adapters under Ubuntu (at least I could not configure them and get networking working at all), I decided to go back to reinstall Windows Vista. It did install fine. I did not have to format one data partition which has my data; I just formatted the partition onto which Windows installed, so in effect the HDD has not undergone a full format here. It worked OK for 1 day, but then the same white screen and freeze happened.

    It looks like while it is in use, it accesses the bad sectors for storing some data, and that's when it bombs. I am inclined to think the HDD has not failed fully or crashed, but has developed bad sectors. If it were a HDD crash, it would have refused to boot at all, let alone install on it.

    Questions: Is there any HDD check under Windows, or any such Windows/Linux-based tool, which can identify the bad sectors of the HDD and 'lock/isolate' them from further read-write access of any kind? If not, what are my options, if any, to salvage this laptop HDD without replacing it?

    EDIT: Would the Disk Error checking tool under Windows help in any way?
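
    A hedged sketch of the standard tooling for exactly this (device/partition names are assumptions): both major filesystems can be told to map out bad sectors so they are never allocated again:

        # Linux/ext*: scan for bad blocks and add them to the filesystem's bad-block list
        sudo e2fsck -c -y /dev/sda1
        # Windows: chkdsk locates bad sectors, recovers readable data, and marks them unusable
        chkdsk C: /R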

    Read the article

  • HAProxy causing delay

    - by user1221444
    I am trying to configure HAProxy to do load balancing for a custom webserver I created. Right now I am noticing an increasing delay with HAProxy as the size of the return message increases. For example, I ran four different tests; here are the results:

        Response 15kb through HAProxy:
          Avg. response time: .34 secs
          Transaction rate: 763 trans/sec
          Throughput: 11.08 MB/sec

        Response 2kb through HAProxy:
          Avg. response time: .08 secs
          Transaction rate: 1171 trans/sec
          Throughput: 2.51 MB/sec

        Response 15kb directly to server:
          Avg. response time: .11 secs
          Transaction rate: 1046 trans/sec
          Throughput: 15.20 MB/sec

        Response 2kb directly to server:
          Avg. response time: .05 secs
          Transaction rate: 1158 trans/sec
          Throughput: 2.48 MB/sec

    All transactions are HTTP requests. As you can see, there seems to be a much bigger difference between response times when the response is bigger than when it is smaller. I understand there will be a slight delay when using HAProxy. Not sure if it matters, but the test itself was run using siege, and during the test there was only one server behind HAProxy (the same that was used in the direct-to-server tests).

    Here is my haproxy.config file:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 10000
            user haproxy
            group haproxy
            daemon
            #debug

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            option httpclose
            maxconn 10000
            contimeout 10000
            clitimeout 50000
            srvtimeout 50000
            balance roundrobin
            stats enable
            stats uri /stats

        listen lb1 10.1.10.26:80
            maxconn 10000
            server app1 10.1.10.200:8080 maxconn 5000

    I couldn't find much in terms of options in this file that would help my problem. I have heard suggestions that I may have to adjust a few of my sysctl settings. I could not find a lot of information on this, however; most documentation on the sysctl stuff is for Linux 2.4 and 2.6, and I am running 3.2 (Ubuntu Server 12.04), which seems to auto-tune, so I have no clue what I should or shouldn't be changing. Most settings changes I tried had no effect or a negative effect on performance.

    Just a notice: this is a very preliminary test, and my hope is that at deployment time my HAProxy will be able to balance 10k-20k requests/sec to many servers, so if anyone could provide information to help me reach that goal, it would be much appreciated. Thank you very much for any information you can provide. And if you need any more information from me, please let me know; I will get you anything I can.

    Read the article

  • NX Client for Windows 7 Opens Remote Desktop in Multiple Windows

    - by Corey Kennedy
    What I'm trying to do: access my Ubuntu desktop remotely via NX Client on my Windows 7 laptop.

    My environment:

      - server: Ubuntu 10.10 on AMD 1GHz/512MB RAM PC
      - client: Windows 7 on ThinkPad SL510
      - Software: the server is running NX Server 3.4.0, using the xfce4 window manager; the laptop is using NX Client for Windows

    In my NX Client "Desktop" settings I've selected "Unix" and "Custom" for OS and environment. I've also specified "startxfce4" as the application to launch when NX connects.

    I am able to authenticate an NX session on my laptop. By this I mean I can start the client on my laptop, enter credentials for my Linux user, and NX establishes a connection to the server and attempts to open a remote desktop window. The problem, though, is that this remote desktop is "fragmented" into many windows. One window will display the bulk of my desktop (complete with desktop icons for "Home," "File System," and "Trash"), while another window will contain the taskbar, and another window will contain the application strip. I can select each of these windows individually, but I cannot click on any objects within them.

    I've searched Super User, Ubuntu Forums, NX help, and Server Fault, and tried many Google searches - none have turned up another case of this particular problem. I'm stumped. Does anyone have any suggestions for what I might try? I'm guessing the problem has to do with my xfce config files, but I've only just set up this server - it's been a long time since I've used Linux and there's a lot I just don't know.

    What I am NOT trying to do: use desktop sharing from Ubuntu, whereby I VNC into a desktop that I've already established on the server. I am trying to configure this Linux box as a headless server that I can stash someplace out-of-the-way in my house, then interact with through my laptop. I don't want to have a monitor or keyboard connected to the Linux box. Thanks for your help!

    edit: 1/19/2011 - Well, this is truly bizarre. To my knowledge I've made no changes to either system - the laptop or the server. But today, after starting up the server for the first time in a few days and making sure that nxserver was running, I was able to connect with the nxclient from my laptop with no problems. I have a full desktop in a single window and I am able to interact with it normally. This is really weird, but the problem seems to be resolved.

    Read the article

  • Trouble with Debian Lenny and Sphinx

    - by Ando
    I've a very basic understanding of Linux systems, but I've a server which was set up a while ago to host some web apps. Recently I decided to test out and implement Sphinx, but unfortunately I can't get the install to work. I'm running a Debian Lenny distro, and when I try to install Sphinx it says:

        checking MySQL include files... configure: error: missing include files.

        ******************************************************************************
        ERROR: cannot find MySQL include files.

        Check that you do have MySQL include files installed.
        The package name is typically 'mysql-devel'.

        If include files are installed on your system, but you are still getting
        this message, you should do one of the following:

        1) either specify includes location explicitly, using --with-mysql-includes;
        2) or specify MySQL installation root location explicitly, using --with-mysql;
        3) or make sure that the path to 'mysql_config' program is listed in
           your PATH environment variable.

        To disable MySQL support, use --without-mysql option.
        ******************************************************************************

    I do have MySQL 5.1 installed, but I can't find the include files. AND one more thing: I read around the net that I probably need libmysqlclient15-dev, but when I try to install that using apt-get I receive the following error:

        The following packages were automatically installed and are no longer required:
          libxcb-aux0 libts-0.0-0 libxcb-atom1 ttf-dejavu-extra hunspell-en-us g++-4.3
          libmysql++3 libnspr4-0d libdirectfb-1.0-0 libxcb-event1 libasound2
          libstdc++6-4.3-dev libhunspell-1.2-0 ttf-dejavu libmozjs2d
          conkeror-spawn-process-helper libnss3-1d
        Use 'apt-get autoremove' to remove them.
        The following NEW packages will be installed:
          libmysqlclient15-dev
        0 upgraded, 1 newly installed, 0 to remove and 276 not upgraded.
        Need to get 7590 kB of archives.
        After this operation, 26.3 MB of additional disk space will be used.
        WARNING: The following packages cannot be authenticated!
          libmysqlclient15-dev
        Install these packages without verification [y/N]? Y
        Err http://ftp.us.debian.org/debian/ lenny/main libmysqlclient15-dev amd64 5.0.51a-24+lenny5
          404 Not Found [IP: 35.9.37.225 80]
        Err http://security.debian.org/ lenny/updates/main libmysqlclient15-dev amd64 5.0.51a-24+lenny5
          404 Not Found [IP: 149.20.20.6 80]
        Failed to fetch http://security.debian.org/pool/updates/main/m/mysql-dfsg-5.0/libmysqlclient15-dev_5.0.51a-24+lenny5_amd64.deb  404 Not Found [IP: 149.20.20.6 80]
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Can you help me out by suggesting how to install the required packages and run Sphinx?
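
    A hedged sketch of the two likely follow-ups (the configure flags come straight from the quoted error text; the include path is an assumption, and the archive URL reflects the fact that Lenny's packages have long since moved to archive.debian.org, which would explain the 404s):

        # point sources.list at the Lenny archive, then retry the -dev package:
        #   deb http://archive.debian.org/debian lenny main
        apt-get update && apt-get install libmysqlclient15-dev

        # once the headers exist, tell Sphinx's configure where they live
        mysql_config --include
        ./configure --with-mysql-includes=/usr/include/mysql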

    Read the article

  • IE8 Refuses to run Javascript from Local Hard Drive

    - by Josh Stodola
    I have a problem that just started at work recently, and the network manager is certain he did not change anything with the group policy. Anyway, here is a detailed description of the problem. My machine is Windows XP SP3, and I use IE8 to browse. We have McAfee anti-virus software that I am unable to configure. I use the following file to test...

        <!DOCTYPE html>
        <html>
        <head>
          <title>Javascript Test</title>
        </head>
        <body>
          <script type="text/javascript">
            document.write("<h1>PASS</h1>");
          </script>
          <noscript>
            <h1>FAIL</h1>
          </noscript>
        </body>
        </html>

    When I open this file from the C: drive, it fails every time. If I execute it anywhere else (local/remote web server or on a mapped network drive), it works just fine. When I am simply browsing the Internet, Javascript on web sites works just fine. It is only failing on files running from my C: drive. Additionally, I have had a couple other programmers in the department try this file on their C: drives, and it works fine for them, so I don't believe it is a group policy thing. I need to fix this because I do extensive testing from my C: drive, and I am accustomed to doing so. I don't want to get into the habit of moving files to a different drive just to test.

    Things I have tried:

      - Enabled "Allow Active Content to Run Files on My Computer" in Options | Advanced | Security
      - Enabled "Allow Active Scripting" in Options | Security | Custom Level
      - Verified that "Script" was not checked as disabled in Developer Toolbar
      - Added localhost to Trusted Sites in Options
      - Disabled McAfee completely (momentarily, with help from network admin)
      - Used an older DOCTYPE in my test HTML page
      - Re-installed IE8 completely
      - Ran regsvr32 on JScript.dll
      - Slammed keyboard

    I am sure that there is a setting somewhere that will fix this problem, possibly in the registry. I would not be surprised if it was related to the developer toolbar. At this point I do not know where else to look. Can anyone help me resolve this problem?

    EDIT: Regardless of the bounty, this issue is still ongoing.
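
    One standard isolation test worth noting (a hedged sketch; the marker is IE's documented "Mark of the Web," not something from the post): adding this comment as the first line makes IE treat the local file as an internet-zone page, which sidesteps the Local Machine Zone lockdown. If the page then runs its script, the problem is zone policy rather than the script engine:

        <!-- saved from url=(0014)about:internet -->
        <!DOCTYPE html>
        ...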

    Read the article

  • After compiling PHP, I get mod_fcgid: error reading data from FastCGI server

    - by user34295
    I'm trying to add multiple PHP versions in Plesk 12. Switching my domain to the new version PHP 5.4.29 results in this error:

        (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server

    Here is phpinfo() of the compiled PHP version, obtained by running php54-cgi index.php from the terminal. The same script placed under the document root doesn't work in FastCGI. How can I debug/try to figure out what's the error?

    Currently running CentOS 6.5 x64, Plesk v12.0.18_build1200140529.2, PHP 5.5.13. I've downloaded PHP 5.4.29:

        cd /usr/local/src
        curl -O http://it1.php.net/distributions/php-5.4.29.tar.gz
        cd php-5.4.29

    And configured with:

        ./configure \
        --prefix=/usr/local/php54 \
        --with-bz2 \
        --with-config-file-path=/usr/local/php54/etc \
        --with-config-file-scan-dir=/usr/local/php54/etc/php.d \
        --with-curl \
        --with-gd \
        --with-gettext \
        --with-iconv \
        --with-layout=PHP \
        --with-libxml-dir=/usr/local/php54 \
        --with-mhash \
        --with-mysql=mysqlnd \
        --with-mysqli=mysqlnd \
        --with-openssl \
        --with-pdo-mysql=mysqlnd \
        --with-readline \
        --with-xsl \
        --with-zlib \
        --enable-calendar \
        --enable-cgi \
        --enable-exif \
        --enable-ftp \
        --enable-intl \
        --enable-mbstring \
        --enable-pcntl \
        --enable-shmop \
        --enable-sockets \
        --enable-sockets \
        --enable-sysvmsg \
        --enable-sysvsem \
        --enable-sysvshm \
        --enable-wddx \
        --enable-zip

    Then:

        make && make install

        Installing PHP CLI binary:        /usr/local/php54/bin/
        Installing PHP CLI man page:      /usr/local/php54/php/man/man1/
        Installing PHP CGI binary:        /usr/local/php54/bin/
        Installing PHP CGI man page:      /usr/local/php54/php/man/man1/
        Installing build environment:     /usr/local/php54/lib/php/build/
        Installing header files:          /usr/local/php54/include/php/
        Installing helper programs:       /usr/local/php54/bin/
          program: phpize
          program: php-config
        Installing man pages:             /usr/local/php54/php/man/man1/
          page: phpize.1
          page: php-config.1
        Installing PEAR environment:      /usr/local/php54/lib/php/
        [PEAR] Archive_Tar    - installed: 1.3.11
        [PEAR] Console_Getopt - installed: 1.3.1
        warning: pear/PEAR requires package "pear/Structures_Graph" (recommended version 1.0.4)
        warning: pear/PEAR requires package "pear/XML_Util" (recommended version 1.2.1)
        [PEAR] PEAR           - installed: 1.9.4
        Wrote PEAR system config file at: /usr/local/php54/etc/pear.conf
        You may want to add: /usr/local/php54/lib/php to your php.ini include_path
        [PEAR] Structures_Graph- installed: 1.0.4
        [PEAR] XML_Util       - installed: 1.2.1
        /usr/local/src/php-5.4.29/build/shtool install -c ext/phar/phar.phar /usr/local/php54/bin
        ln -s -f /usr/local/php54/bin/phar.phar /usr/local/php54/bin/phar
        Installing PDO headers:           /usr/local/php54/include/php/ext/pdo/

    Copied php.ini-production to /usr/local/php54/etc/php.ini and added a new handler in Plesk:

        /usr/local/psa/bin/php_handler --add -displayname 5.4.29 -path /usr/local/php54/bin/php-cgi -phpini /usr/local/php54/etc/php.ini -type fastcgi -id php54

    Symbolic linking:

        ln -s /usr/local/php54/bin/php /usr/local/bin/php54
        ln -s /usr/local/php54/bin/php-cgi /usr/local/bin/php54-cgi

    New installed version:

        php54-cgi -m

        [PHP Modules]
        bz2 calendar cgi-fcgi Core ctype curl date dom ereg exif fileinfo filter
        ftp gd gettext hash iconv intl json libxml mbstring mhash mysql mysqli
        mysqlnd openssl pcntl pcre PDO pdo_mysql pdo_sqlite Phar posix readline
        Reflection session shmop SimpleXML sockets SPL sqlite3 standard sysvmsg
        sysvsem sysvshm tokenizer wddx xml xmlreader xmlwriter xsl zip zlib

        [Zend Modules]
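
    A hedged debugging sketch (REDIRECT_STATUS is standard php-cgi behavior; the docroot path and web user are assumptions for illustration): running the new binary the way mod_fcgid would, as the web user, often surfaces the real error (missing shared library, permissions, open_basedir) that Apache only reports as a reset pipe:

        ldd /usr/local/php54/bin/php-cgi | grep "not found"    # any unresolved libs?
        cd /var/www/vhosts/example.com/httpdocs                # assumed vhost docroot
        sudo -u apache REDIRECT_STATUS=1 /usr/local/php54/bin/php-cgi index.php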

    Read the article

  • Mercurial not receiving push

    - by Jeffrey04
    I have a Mercurial web frontend (hgwebdir.cgi) installed on a server, and an installation of nginx was installed in front of it as a reverse proxy to the web frontend, as my friend suggested. However, whenever a large changeset is pushed (via a script), it fails. I found an issue ticket @google-code that describes a similar problem, and there is a solution that says (#39):

        So the server side answer is: don't send the 401 back early. Be as slow/dumb
        as 'hg serve' and make the hg client send the bundle twice.

    How do I do that? My current nginx config:

        location /repo/testdomain.com {
            rewrite ^(.*) http://bpj.kkr.gov.my$1/hgwebdir.cgi;
        }

        location /repo/testdomain.com/ {
            rewrite ^(.*) http://bpj.kkr.gov.my$1hgwebdir.cgi;
        }

        location /repo/testdomain.com/hgwebdir.cgi {
            proxy_pass http://localhost:81/repo/testdomain.com/hgwebdir.cgi;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_buffering on;
            client_max_body_size 4096M;
            proxy_read_timeout 30000;
            proxy_send_timeout 30000;
        }

    From the access log we keep seeing 408 entries:

        incoming.ip.address - - [18/Nov/2009:08:29:31 +0800] "POST /repo/testdomain.com/hgwebdir.cgi/example_repository?cmd=unbundle&heads=73121b2b6159afc47cc3a028060902883d5b1e74 HTTP/1.1" 408 0 "-" "mercurial/proto-1.0"
        incoming.ip.address - - [18/Nov/2009:08:37:14 +0800] "POST /repo/testdomain.com/hgwebdir.cgi/example_repository?cmd=unbundle&heads=73121b2b6159afc47cc3a028060902883d5b1e74 HTTP/1.1" 408 0 "-" "mercurial/proto-1.0"

    Is there anything else I can do on the server? Solving it on the server side is preferable :/

    Further findings: Bitbucket seems to have this solved (check the liquidhg Bitbucket project and the Diagnosis wiki page) on the server side; I can't find the config anywhere though :/

        What happens next varies depending on your server. Some servers refuse the
        BODY, simplying closing the pipe from the client and causing Mercurial to
        fail. Some, like Apache (at least the way I configure it, and that could be
        part of the problem) and nginx (the way BitBucket.org configures it), accept
        the BODY, though it may take a few retries. Bottom line: if Mercurial doesn't
        fail the push, it sends the changeset data at least once to a server that has
        already told it it lacks credentials (more on this at Blame). Assuming
        Mercurial is still running, it resends the "unbundle" request and data, this
        time with authentication. Finally, Apache accepts the data successfully.
        Nginx, OTOH, at least under BitBucket's configuration, seems to reassemble
        the previous body (the one that lacked authentication) and somehow keep
        Mercurial from re-sending the whole body.
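
    A hedged sketch (standard nginx directives; the values are guesses): HTTP 408 is a request-body timeout, so one thing worth trying first is giving the client more time and buffer space for the large bundle inside the hgwebdir location block:

        client_body_timeout     600;
        client_header_timeout   600;
        client_body_buffer_size 128k;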

    Read the article

  • Powershell: Cannot connect via SSL

    - by JSWork
    Am following "secrets to powershell remoting" to setup an SLL account and seem to be missing a step. I ran Winrm create winrm/config/Listener?Address=*+Transport=HTTPS @{Hostname="redacted";CertificateThumbprint="redacted"} and got PS WSMan:\localhost&gt; dir wsman:\localhost\listener\Listener_1184937132 WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\Listener\Listener_1184937132 Name Value Type ---- ----- ---- Address * System.String Transport HTTP System.String Port 5985 System.String Hostname System.String Enabled true System.String URLPrefix wsman System.String CertificateThumbprint System.String ListeningOn_756355952 10.0.0.54 System.String ListeningOn_1201550598 127.0.0.1 System.String PS WSMan:\localhost&gt; dir wsman:\localhost\listener\Listener_1187163138 WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\Listener\Listener_1187163138 Name Value Type ---- ----- ---- Address * System.String Transport HTTP System.String Port 80 System.String Hostname System.String Enabled true System.String URLPrefix wsman System.String CertificateThumbprint System.String ListeningOn_756355952 10.0.0.54 System.String ListeningOn_1201550598 127.0.0.1 System.String PS WSMan:\localhost&gt; dir wsman:\localhost\listener\Listener_220862350 WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\Listener\Listener_220862350 Name Value Type ---- ----- ---- Address * System.String Transport HTTPS System.String Port 5986 System.String Hostname redacted System.String Enabled true System.String URLPrefix wsman System.String CertificateThumbprint redacted System.String ListeningOn_756355952 10.0.0.54 System.String ListeningOn_1201550598 127.0.0.1 System.String Trouble is when i do this PS C:\Users\redacted> enter-pssession -Computername redacted -Credential redacted\redacted -UseSSL I get this Enter-PSSession : Connecting to remote server failed with the following error message : The client cannot connect to th e destination specified in the request. Verify that the service on the destination is running and is accepting requests . Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or Win RM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig". For more information, see the about_Remote_Troubleshooting Help topic. At line:1 char:16 + enter-pssession <<<< -Computername redacted -Credential redacted\redacted -UseSSL + CategoryInfo : InvalidArgument: (redacted:String) [Enter-PSSession], PSRemotingTransportException + FullyQualifiedErrorId : CreateRemoteRunspaceFailed This happens even when the firewall is off completely and when the machine tires to connect to itself locally. On top of that, despite the listners eing lsited on wsman, when I run PS WSMan:\localhost&gt; Get-PSSessionConfiguration I get Name PSVersion StartupScript Permission ---- --------- ------------- ---------- Microsoft.PowerShell 2.0 PS WSMan:\localhost&gt; Any ideas what I'm missing/doing wrong? edit: Windows 2003. Powershell v2.0

    Read the article

  • Help debugging Sendmail/Mailman configuration issue

    - by inxilpro
    Hi folks, I'm trying to configure a server with Sendmail and Mailman. I've been getting "Broken pipe" errors for a while, and have slowly been debugging. I fixed some permission issues, and changed the user that Mailman expects to be called from, among other things. Finally, when I'd gone through everything I could think of, I added a new test to see if it's the Mailman script or Sendmail that's causing the problem. Here's the error I'm getting now (stripped of timestamps and identifying information):

        <-- MAIL FROM:[email protected]
        Authentication-Warning: xxxxx.org: xxxxxxxxxxxxxx.net [xx.xx.xxx.xxx] didn't use HELO protocol
        --- 250 2.1.0 [email protected]... Sender ok
        <-- RCPT TO: [email protected]
        --- 250 2.1.5 [email protected]... Recipient ok
        <-- DATA
        --- 354 Enter mail, end with "." on a line by itself
        [email protected], size=20, class=0, nrcpts=1, msgid=<[email protected]>, proto=SMTP, relay=xxxxxxxxxxxxxx.net [xx.xx.xxx.xxx]
        --- 250 2.0.0 o6KMg2xZ025804 Message accepted for delivery
        alias [email protected] => "|/bin/echo foo"
        SYSERR(root): putbody: write error: Broken pipe
          0: fl=0x0, mode=20660: CHR: dev=0/15, ino=776, nlink=1, u/gid=0/0, size=0
          1: fl=0x1, mode=20660: CHR: dev=0/15, ino=776, nlink=1, u/gid=0/0, size=0
          2: fl=0x1, mode=20660: CHR: dev=0/15, ino=776, nlink=1, u/gid=0/0, size=0
          3: fl=0x2, mode=140777: SOCK localhost->[[UNIX: /dev/log]]
          5: fl=0x0, mode=100600: dev=8/3, ino=486765, nlink=1, u/gid=0/51, size=5
          6: fl=0x8000, mode=100640: dev=8/3, ino=65501, nlink=1, u/gid=0/0, size=12288
          7: fl=0x8000, mode=100640: dev=8/3, ino=65501, nlink=1, u/gid=0/0, size=12288
          8: fl=0x8000, mode=100640: dev=8/3, ino=65510, nlink=1, u/gid=0/0, size=12288
          9: fl=0x8000, mode=100640: dev=8/3, ino=65510, nlink=1, u/gid=0/0, size=12288
          10: fl=0x8000, mode=100640: dev=8/3, ino=64814, nlink=1, u/gid=0/51, size=12288
          11: fl=0x8000, mode=100640: dev=8/3, ino=64814, nlink=1, u/gid=0/51, size=12288
          12: fl=0x1, mode=100600: dev=8/3, ino=486767, nlink=1, u/gid=0/51, size=754
          13: fl=0x1, mode=10600: FIFO: dev=0/5, ino=7649785, nlink=1, u/gid=0/51, size=0
          14: fl=0x0, mode=10600: FIFO: dev=0/5, ino=7649786, nlink=1, u/gid=0/51, size=0
        MCI@0x0: NULL
        MCI@0x0: NULL
        to="|/bin/echo foo", [email protected] (8/0), delay=00:00:08, xdelay=00:00:00, mailer=prog, pri=30476, dsn=5.0.0, stat=Service unavailable
        o6KMsnxX025948: DSN: Service unavailable
        done; delay=00:00:08, ntries=1

    The alias in /etc/aliases is:

        cmtest: "|/bin/echo foo"

    As you can see, even when trying to pipe to /bin/echo I still get the same error. But I can't for the life of me figure out what else to check. Normal aliases work fine. Any ideas? Thanks!
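
    One classic culprit worth noting here (a hedged sketch; the smrsh path is the Red Hat default and an assumption for this system): sendmail's prog mailer typically executes pipes through smrsh, which only runs programs linked into its allowed directory, and anything else dies with "Service unavailable." If that's the case:

        # allow echo through smrsh, then reference it without the path
        ln -s /bin/echo /etc/smrsh/echo
        # /etc/aliases:  cmtest: "|echo foo"
        newaliases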

    Read the article

  • Generating/managing config files for hosted application

    - by mfinni
    I asked a question about config management, and haven't seen a reply. It's possible my question was too vague, so let's get down to brass tacks. Here's the process we follow when onboarding a new customer instance into our hosted application: how would you manage this?

    I'm leaning towards a Perl script to populate templates to generate shell scripts, config files, XML config files, etc. (see the sketch after this list). Looking briefly at CFengine and Chef, it seems like they're not going to reduce the amount of work, because I'd still have to manually specify all of the changes/edits within the tool. That doesn't seem to be much of a gain over touching the config files directly.

    1. We add a stanza to the main config file for the core (3rd-party) application. This stanza has values that define:
       - the instance (customer) name
       - the TCP listener port for this instance (not one currently used)
       - the DB2 database name (serial numeric identifier, already exists; they get prestaged for us by the DBAs)
       - three sub-config files, by name - they need to be created from 3 templates and be named after the instance
    2. The sub-config files define:
       - the filepath for the DB2 volumes
       - the filepath for the storage of objects
       - the filepath for just one of the DB2 volumes (yes, redundant to the first item)
    3. We run some application commands and start the instance.
    4. We do some LDAP thingies (make an OU for the instance, etc.).
    5. We add a stanza to the config file for our security listener that acts as a passthrough to LDAP, containing:
       - instance name
       - LDAP OU
       - TCP port for instance
       - DB2 database name
    6. We restart the security listener (off-hours), change the main config file from item 1, and stop and restart the instance. It is now authenticating via LDAP.
    7. We add the stop and start commands for this instance to the HA failover scripts.
    8. We import an XML config file into the instance that defines things for the actual application for the customer - user names, groups, permissions, and business rules. The XML is supplied by the implementation team.
    9. Now we configure the data-loading application:
       - We add a stanza to the existing top-level config file that points to a new customer-level config file.
       - The new customer-level config file includes the instance (customer) name, the DB2 database name, and an arbitrary number of sub-config files, by name.
       - Each of the sub-config files defines filepaths to the directories for ingestion, feedback, backup, and failure; those filepaths share a common path to a customer-specific folder, with one folder per sub-config file, and each of those filepaths needs to be created.
    10. We need to add this customer instance to our monitoring scripts that confirm the proper processes are running and can be logged into. Of course, those monitoring config files include the instance name, the TCP port, the DB2 database name, etc.
    11. There's also a reporting application that needs to be configured for the new instance.

    You get the idea. There's also XML that is loaded into WAS by the middleware team. We give them the values for them to plug into the XML - they could very easily hand us the template and we could give them back completed XML.
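
    A hedged sketch of the template-populating approach being considered (shell rather than Perl, purely for brevity; every name, placeholder, and path here is invented for illustration):

        #!/bin/sh
        # fill a stanza template with per-customer values and append it to the main config
        instance=acme; port=5101; db=DB0042
        sed -e "s/@INSTANCE@/$instance/g" \
            -e "s/@PORT@/$port/g" \
            -e "s/@DBNAME@/$db/g" \
            core-stanza.tmpl >> /opt/app/conf/main.conf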

    Read the article

  • AMD GPU but display on Intel integrated graphics

    - by pitseeker
    On my Ubuntu 12.04 I connected my monitor to the onboard Intel graphics. I'd like to use my ATI Radeon 6770 for OpenCL tasks (e.g. bitcoin mining). So far I couldn't figure out how to get the ATI driver working. When calling "aticonfig --initial -f" it always writes a new xorg.conf that ignores the Intel graphics. At boot time it works only when I attach the monitor to the ATI card. So I manually tampered with the xorg.conf and got this:

        Section "ServerLayout"
            Identifier  "Default Monitor"
            Screen 0    "myscreen" 0 0
            Screen 1    "deadscreen" RightOf "myscreen"
        EndSection

        Section "Module"
        EndSection

        Section "Monitor"
            Identifier  "Default Monitor"
            Option      "VendorName" "Monitor Vendor"
            Option      "ModelName" "Monitor Name"
            Option      "DPMS" "true"
        EndSection

        Section "Monitor"
            Identifier  "null Monitor"
            Option      "Enable" "false"
        EndSection

        Section "Device"
            Identifier  "Intel Integrated Graphics"
            Driver      "intel"
            BusID       "PCI:0:2:0"
            Screen      0
        EndSection

        Section "Device"
            Identifier  "aticonfig-Device[0]-0"
            Driver      "fglrx"
            BusID       "PCI:1:0:0"
            Screen      1
        EndSection

        Section "Screen"
            Identifier  "myscreen"
            Device      "Intel Integrated Graphics"
            Monitor     "Default Monitor"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth    24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier  "deadscreen"
            Device      "aticonfig-Device[0]-0"
            Monitor     "null Monitor"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth    24
            EndSubSection
        EndSection

    I think this might be the right way, since I see that X tries to start both drivers in /var/log/Xorg.0.log. However, the fglrx driver seems to crash (end of Xorg.0.log):

        Backtrace:
        [     6.625] 0: /usr/bin/X (xorg_backtrace+0x26) [0x7fb5cd41b846]
        [     6.625] 1: /usr/bin/X (0x7fb5cd293000+0x18c6ea) [0x7fb5cd41f6ea]
        [     6.625] 2: /lib/x86_64-linux-gnu/libpthread.so.0 (0x7fb5cc5b9000+0xfcb0) [0x7fb5cc5c8cb0]
        [     6.625] 3: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/drivers/fglrx_drv.so (xdl_xs111_atiddxGetGPUMapInfo+0x1b1) [0x7fb5c88e16b1]
        [     6.625] 4: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/drivers/fglrx_drv.so (atiddxGetGPUMapInfo+0xd) [0x7fb5c87bcc0d]
        [     6.625] 5: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/extensions/libglx.so (0x7fb5ca12d000+0x1ab29) [0x7fb5ca147b29]
        [     6.625] 6: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/extensions/libglx.so (0x7fb5ca12d000+0x1cf8c) [0x7fb5ca149f8c]
        [     6.625] 7: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/extensions/libglx.so (0x7fb5ca12d000+0x1ee55) [0x7fb5ca14be55]
        [     6.626] 8: /usr/bin/X (InitExtensions+0x99) [0x7fb5cd350069]
        [     6.626] 9: /usr/bin/X (0x7fb5cd293000+0x3d605) [0x7fb5cd2d0605]
        [     6.626] 10: /lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main+0xed) [0x7fb5cb44e76d]
        [     6.626] 11: /usr/bin/X (0x7fb5cd293000+0x3daad) [0x7fb5cd2d0aad]
        [     6.626] Segmentation fault at address 0x14
        [     6.626] Caught signal 11 (Segmentation fault). Server aborting
        [     6.626]

    I'd be very happy if someone can give me a hint on how to configure my ATI card while using the integrated graphics for display.

    Update: I used most of jjhughes57's config and successfully booted the X server on Intel (the keyboard layout is changed, though, funnily). Unfortunately the 2nd X server (fglrx) doesn't fully start. It shuts itself down right after starting:

        [     6.265] (II) fglrx(0): Restoring Recent Mode via PCS is not supported in RANDR 1.2 capable environments
        [     6.296] (II) UnloadModule: "mouse"
        [     6.296] (II) Unloading mouse
        [     6.296] (II) UnloadModule: "kbd"
        [     6.296] (II) Unloading kbd
        [     6.298] (II) fglrx(0): Shutdown CMMQS
        [     6.298] (II) fglrx(0): [uki] removed 1 reserved context for kernel
        [     6.298] (II) fglrx(0): [uki] unmapping 8192 bytes of SAREA 0x2000 at 0x7fbef8209000
        [     6.337] (II) fglrx(0): Interrupt handler Shutdown.
        [     6.470] ddxSigGiveUp: Closing log
        [     6.470] Server terminated successfully (0). Closing log file.

    Thanks for any hints what is wrong here.

    Read the article

  • Assign fixed IP address via DHCP by DNS lookup

    - by Janoszen
    Preface: I'm building a virtualization environment with Ubuntu 14.04 and LXC. I don't want to write my own template, since the upgrade from 12.04 to 14.04 has shown that backwards compatibility is not guaranteed. Therefore I'm deploying my virtual machines via lxc-create, using the default Ubuntu template. The DNS for the servers is provided by Amazon Route 53, so no local DNS server is needed. I also use Puppet to configure my servers, so I want to keep the manual effort on the deployment minimal. Now, the default Ubuntu template assigns IP addresses via DHCP. Therefore, I need a local DHCP server to assign IP addresses to the nodes, so I can SSH into them and get Puppet running. Since Puppet requires a proper DNS setup, assigning temporary IP addresses is not an option; the client needs to get the right hostname and IP address from the start.

    Question: what DHCP server do I use, and how do I get it to assign the IP address based only on the host-name DHCP option by performing a DNS lookup on that very host name?

    What I've tried: I tried to make it work using the ISC DHCP server; however, the manual clearly states:

        Please be aware that only the dhcp-client-identifier option and the hardware
        address can be used to match a host declaration, or the host-identifier
        option parameter for DHCPv6 servers. For example, it is not possible to match
        a host declaration to a host-name option. This is because the host-name
        option cannot be guaranteed to be unique for any given client, whereas both
        the hardware address and dhcp-client-identifier option are at least
        theoretically guaranteed to be unique to a given client.

    I also tried to create a class that matches the hostname like this:

        class "my-client-name" {
            match if option host-name = "my-client-name";
            fixed-address my-client-name.my-domain.com;
        }

    Unfortunately, the fixed-address option is not allowed in class statements. I can replace it with a 1-size pool, which works as expected:

        subnet 10.103.0.0 netmask 255.255.0.0 {
            option routers 10.103.1.1;
            class "my-client-name" {
                match if option host-name = "my-client-name";
            }
            pool {
                allow members of "my-client-name";
                range 10.103.1.2 10.103.1.2;
            }
        }

    However, this would require me to administer the IP addresses in two places (Amazon Route 53 and the DHCP server), which I would prefer not to do.

    About security: since this is only used in the bootstrapping phase on an internal network and is then replaced by a static network configuration by Puppet, this shouldn't be an issue from a security standpoint. I am, however, aware that the virtual machine bootstraps with "ubuntu:ubuntu" credentials, which I intend to fix once this is running.
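
    A hedged workaround sketch (standard "host" output parsing; every path and name here is illustrative): one way to keep Route 53 as the single source of truth is to generate the 1-size-pool stanzas from DNS at container-creation time rather than maintaining them by hand. The emitted class/pool blocks would be included inside the existing subnet declaration:

        #!/bin/sh
        # usage: ./gen-dhcp-stanza.sh my-client-name >> /etc/dhcp/generated-hosts.conf
        name="$1"
        ip=$(host "$name.my-domain.com" | awk '/has address/ {print $4}')
        cat <<EOF
        class "$name" {
            match if option host-name = "$name";
        }
        pool {
            allow members of "$name";
            range $ip $ip;
        }
        EOF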

    Read the article

  • sub domains with /etc/hosts and apache for gitorious

    - by QLands
    I managed to get a local install of Gitorious working. Now I need to finalize the Apache integration using a virtual host, but nothing seems to work. See for example my /etc/hosts file:

        127.0.0.1 localhost
        172.26.17.70 darkstar.ilri.org darkstar
        172.26.17.70 git.darkstar.ilri.org

    My vhosts.conf has the following entries:

        #
        # Use name-based virtual hosting.
        #
        NameVirtualHost *:80

        <VirtualHost *:80>
            <Directory /srv/httpd/htdocs>
                Options Indexes FollowSymLinks ExecCGI
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            ServerName darkstar.ilri.org
            DocumentRoot /srv/httpd/htdocs
            ErrorLog /var/log/httpd/error_log
            AddHandler cgi-script .cgi
        </VirtualHost>

        <VirtualHost *:80>
            <Directory /srv/httpd/git.darkstar.ilri.org/gitorious/public>
                Options FollowSymLinks ExecCGI
                AllowOverride None
                Order allow,deny
                Allow from All
            </Directory>
            AddHandler cgi-script .cgi
            DocumentRoot /srv/httpd/git.darkstar.ilri.org/gitorious/public
            ServerName git.darkstar.ilri.org
            ErrorLog /var/www/git.darkstar.ilri.org/log/error.log
            CustomLog /var/www/git.darkstar.ilri.org/log/access.log combined
            AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript text/css application/x-javascript
            BrowserMatch ^Mozilla/4 gzip-only-text/html
            BrowserMatch ^Mozilla/4\.0[678] no-gzip
            BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
            <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
                ExpiresActive On
                ExpiresDefault "access plus 1 year"
            </FilesMatch>
            FileETag None
            RewriteEngine On
            RewriteCond %{DOCUMENT_ROOT}/system/maintenance.html -f
            RewriteCond %{SCRIPT_FILENAME} !maintenance.html
            RewriteRule ^.*$ /system/maintenance.html [L]
        </VirtualHost>

    Now, when I browse to darkstar.ilri.org with Firefox, it shows the default Apache "It works!" page. But when I go to git.darkstar.ilri.org, it waits a few seconds, then falls back to darkstar.ilri.org and the default Apache page. No error is reported. If I run httpd -S I get:

        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:80 is a NameVirtualHost
            default server darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:21)
            port 80 namevhost darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:21)
            port 80 namevhost git.darkstar.ilri.org (/etc/httpd/extra/httpd-vhosts.conf:37)
        Syntax OK

    The funny thing is that if I configure Gitorious on a host called gitrepository, add "127.0.0.1 gitrepository" to /etc/hosts and browse to gitrepository, Gitorious works. But why not with git.darkstar.ilri.org? Many thanks in advance.
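    A hedged troubleshooting sketch for separating name-resolution problems from vhost problems in a case like this (172.26.17.70 is taken from the /etc/hosts above): ask the resolver and Apache independently. If the curl request with a forced Host header returns the Gitorious page, the vhost is fine and name resolution on the client is the culprit, e.g. because /etc/hosts was only edited on the server while Firefox runs on a different machine, or because an upstream DNS server answers for the real ilri.org zone first.

        # How does this machine actually resolve the name? (honours /etc/hosts)
        getent hosts git.darkstar.ilri.org

        # What does DNS alone say? (bypasses /etc/hosts)
        host git.darkstar.ilri.org

        # Ask the second vhost directly, independent of name resolution:
        curl -H 'Host: git.darkstar.ilri.org' http://172.26.17.70/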

    Read the article

  • How to get the best LINPACK result and conquer the Top500?

    - by knweiss
    Given a large Linux HPC cluster with hundreds or thousands of nodes: what are your best practices for getting the best possible LINPACK benchmark (HPL) result to submit for the Top500 supercomputer list? To give you an idea of what kind of answers I would appreciate, here are some sub-questions:

    - How do you tune the parameters (N, NB, P, Q, memory alignment, etc.) in the HPL.dat file, without spending too much time trying every possible permutation, especially with large problem sizes N? (A rule-of-thumb starting point is sketched after this list.)
    - Are there any Top500 submission rules to be aware of? What is allowed, what isn't?
    - Which MPI product, and which version? Does it make a difference?
    - Any special host order in your MPI machine file? Do you use CPU pinning?
    - How do you configure your interconnect? Which interconnect?
    - Which BLAS package do you use for which CPU model? (Intel MKL, AMD ACML, GotoBLAS2, etc.)
    - How do you prepare for the big run (on all nodes)? Do you start with small runs on a subset of nodes and then scale up? Is it really necessary to run LINPACK in one big run on all of the nodes, or is extrapolation allowed?
    - How do you optimize for the latest Intel/AMD CPUs? Hyper-threading? NUMA?
    - Is it worth recompiling the software stack, or do you use precompiled binaries? Which settings, which compiler, which compiler optimizations? (What about profile-based compilation?)
    - How do you get the best result given only a limited amount of time to do the benchmark run? (You can block a huge cluster forever.)
    - How do you prepare the individual nodes (stopping system daemons, freeing memory, etc.)?
    - How do you deal with hardware faults (which can ruin a huge run)?
    - Are there any must-read documents or websites about this topic? E.g. I would love to hear some background stories of current Top500 systems and how they did their LINPACK benchmark.

    I deliberately don't want to mention concrete hardware details or discuss hardware recommendations, because I don't want to limit the answers. However, feel free to mention hints, e.g. for specific CPU models.
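    For the first sub-question, a commonly cited rule of thumb (a sketch with hypothetical cluster figures, not a definitive recipe): size N so the HPL working set fills roughly 80% of total RAM, since each matrix element is an 8-byte double, and round down to a multiple of NB; NB is then scanned over a small candidate set (often somewhere around 128-256) rather than exhaustively.

        # Rule-of-thumb starting point for N; the node count and memory per
        # node below are assumed example values, adjust to your cluster.
        NODES=512
        MEM_PER_NODE_GB=64
        FRACTION=0.8      # leave ~20% of RAM for the OS and MPI buffers
        NB=192            # one candidate block size to start from

        # N ~= sqrt(fraction * total_bytes / 8), rounded down to a multiple of NB
        N=$(awk -v n=$NODES -v m=$MEM_PER_NODE_GB -v f=$FRACTION -v nb=$NB 'BEGIN {
            total = n * m * 1024^3;
            raw = sqrt(f * total / 8);
            print int(raw / nb) * nb;
        }')
        echo "Suggested starting N: $N (NB=$NB)"

    For P and Q, the usual guidance is P*Q equal to the total number of MPI ranks, with P <= Q and the grid as close to square as practical; that still leaves NB, broadcast and lookahead options to scan, but over a much smaller space.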

    Read the article

  • Linux filesystem with inodes close on the disk

    - by pts
    I'd like to make ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2 TB of total file size, and some directories containing as many as 10000 files. Which filesystem should I use, and how should I configure it?

    As far as I understand, the reason ls -laR is slow is that it has to stat(2) each inode (i.e. 1 million stat(2) calls), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek. Here are some solutions I had in mind, none of which I am satisfied with:

    - Create the filesystem on an SSD, because seek operations on SSDs are fast. This wouldn't work, because a 2 TB SSD doesn't exist, or is prohibitively expensive.
    - Create a filesystem which spans two block devices, an SSD and a disk: the disk contains the file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)?
    - Use find /media/myfs on ext2, ext3 or ext4 instead of ls -laR /media/myfs, because the former can take advantage of the d_type field (see the getdents(2) man page), so it doesn't have to stat. Unfortunately, this doesn't meet my requirements, because I need all the file sizes as well, which find /media/myfs doesn't print.
    - Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count greater than 1, but that's not a problem, since I have only a few dozen such files in my use case.
    - Adjust some settings in /proc or sysctl so that inodes are locked into system memory forever (see the sketch after this list). This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? I don't like this idea, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also, I'd like to lock the POSIX extended attributes into memory as well. What do I have to do for that?
    - Use a filesystem which has an online defragmentation tool that can be instructed to relocate inodes to the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device into the kernel's in-memory cache without seeking, and then the stat(2) operations would be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?
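    For the sysctl item in the list above, a hedged sketch: the closest standard knob is vm.vfs_cache_pressure, which at 0 tells the kernel never to reclaim dentry and inode caches due to memory pressure. It does not pin POSIX extended attributes, it risks out-of-memory conditions on memory-constrained machines, and it cannot avoid the slow first traversal, so it only addresses the "subsequent invocations" half of that item:

        # 0 = never reclaim dentry/inode caches under memory pressure
        # (can lead to OOM on small machines, so treat this as an experiment).
        sysctl -w vm.vfs_cache_pressure=0

        # Prewarm the caches once; the slow first pass is still paid here,
        # but later runs should be served from memory.
        ls -laR /media/myfs > /dev/null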

    Read the article
