Search Results

Search found 14187 results on 568 pages for 'dell mini 12'.


  • Dual NIC, one keeps dropping

    - by user1215018
    I'm running Windows Server 2008 R2 on a Dell PowerEdge 2850. I have two NICs: one is configured behind a firewall with a DHCP server on the main local LAN, and the other has its own dedicated connection to one of our 13 static IPs. So in a nutshell we have two of our static IPs going to this server, one indirectly through a firewall/DHCP server and the other directly. I am trying to reach IIS on ports 80 and 443. The problem is that the NIC with the direct connection (NIC2) keeps dropping and reports either "No internet connection" or "Unauthenticated". However, the NIC behind the firewall (NIC1) has no problems at all. Update: This is the second time this has happened in 3 days, and each time the fix has been enabling the DHCP client on the NIC, letting it fall back to a 169.x.x.x address, then re-enabling the NIC with its static IP assignment.
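
    If the reset has to be repeated every few days, the workaround can at least be scripted. Below is a minimal batch sketch of the same bounce using netsh; the interface name "NIC2" and the address, mask and gateway are placeholders (203.0.113.x is a documentation range), so substitute the real values before use.

        rem Hypothetical values: replace "NIC2" and the address details with the real ones
        netsh interface ipv4 set address name="NIC2" source=dhcp
        timeout /t 30
        netsh interface ipv4 set address name="NIC2" source=static address=203.0.113.10 mask=255.255.255.240 gateway=203.0.113.1

    This only automates the symptom, of course; an "Unauthenticated" status on 2008 R2 often points at Network Location Awareness or link-negotiation trouble on the direct connection, which is worth ruling out separately.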

  • Solaris LDAP Authentication

    - by Tman
    Hi everyone. I've been having trouble getting my Solaris 10 server to authenticate against an eDirectory server. I managed to set up my Linux (RHEL, SLES) servers to authenticate against the LDAP server, and that works fine. Here are my configuration files.

    ldapclient list:

        NS_LDAP_FILE_VERSION= 2.0
        NS_LDAP_BINDDN= cn=proxyuser,o=AEDev
        NS_LDAP_BINDPASSWD= {NS1}ecfa88f3a945c22222233
        NS_LDAP_SERVERS= 192.168.0.19
        NS_LDAP_SEARCH_BASEDN= ou=auth,o=AEDev
        NS_LDAP_AUTH= simple
        NS_LDAP_SEARCH_SCOPE= sub
        NS_LDAP_CACHETTL= 0
        NS_LDAP_CREDENTIAL_LEVEL= anonymous
        NS_LDAP_SERVICE_SEARCH_DESC= group:ou=Groups,ou=auth,o=AEDev
        NS_LDAP_SERVICE_SEARCH_DESC= shadow:ou=users,ou=auth,o=AEDev?sub?objectClass=shadowAccount
        NS_LDAP_SERVICE_SEARCH_DESC= passwd:ou=auth,o=AEDev?sub?objectClass=posixAccount
        NS_LDAP_BIND_TIME= 10
        NS_LDAP_SERVICE_AUTH_METHOD= pam_ldap:simple

    getent passwd works fine:

        root:x:0:0:Super-User:/:/sbin/sh
        daemon:x:1:1::/:
        bin:x:2:2::/usr/bin:
        sys:x:3:3::/:
        adm:x:4:4:Admin:/var/adm:
        lp:x:71:8:Line Printer Admin:/usr/spool/lp:
        uucp:x:5:5:uucp Admin:/usr/lib/uucp:
        nuucp:x:9:9:uucp Admin:/var/spool/uucppublic:/usr/lib/uucp/uucico
        smmsp:x:25:25:SendMail Message Submission Program:/:
        listen:x:37:4:Network Admin:/usr/net/nls:
        gdm:x:50:50:GDM Reserved UID:/:
        webservd:x:80:80:WebServer Reserved UID:/:
        postgres:x:90:90:PostgreSQL Reserved UID:/:/usr/bin/pfksh
        svctag:x:95:12:Service Tag UID:/:
        nobody:x:60001:60001:NFS Anonymous Access User:/:
        noaccess:x:60002:60002:No Access User:/:
        nobody4:x:65534:65534:SunOS 4.x NFS Anonymous Access User:/:
        tlla:x:2012:100::/home/tlla:
        test:x:2011:100::/home/test:
        thato:x:2010:100::/home/thato:

    pam.conf:

        login   auth sufficient   pam_unix_auth.so.1 #server_policy
        login   auth sufficient   /usr/lib/security/pam_ldap.so.1 try_first_pass
        login   auth required     pam_dial_auth.so.1
        rlogin  auth sufficient   pam_rhosts_auth.so.1
        rlogin  auth requisite    pam_authtok_get.so.1
        rlogin  auth required     pam_dhkeys.so.1
        rlogin  auth required     pam_unix_cred.so.1
        rlogin  auth sufficient   pam_unix_auth.so.1
        rlogin  auth sufficient   /usr/lib/security/pam_ldap.so.1 try_first_pass
        rsh     auth sufficient   pam_rhosts_auth.so.1
        rsh     auth required     pam_unix_cred.so.1
        rsh     auth sufficient   pam_unix_auth.so.1 #server_policy
        rsh     auth sufficient   /usr/lib/security/pam_ldap.so.1 try_first_pass
        other   auth requisite    pam_authtok_get.so.1
        other   auth required     pam_dhkeys.so.1
        other   auth required     pam_unix_cred.so.1
        other   auth sufficient   pam_unix_auth.so.1
        other   auth sufficient   /usr/lib/security/pam_ldap.so.1 try_first_pass
        passwd  auth required     pam_passwd_auth.so.1
        passwd  auth sufficient   pam_unix_auth.so.1
        ssh     account sufficient pam_unix.so.1
        ssh     account sufficient /usr/lib/security/pam_ldap.so.1 try_first_pass
        other   account requisite pam_roles.so.1
        other   account sufficient pam_unix_account.so.1
        other   account sufficient /usr/lib/security/pam_ldap.so.1 try_first_pass
        other   password required  pam_dhkeys.so.1
        other   password requisite pam_authtok_get.so.1
        other   password requisite pam_authtok_check.so.1
        other   password required  pam_authtok_store.so.1
        other   password sufficient pam_unix.so.1
        other   password sufficient /usr/lib/security/pam_ldap.so.1 try_first_pass

    Local authentication works, but LDAP authentication doesn't.
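
    For what it's worth, the pam.conf above differs from the layout the Solaris 10 pam_ldap(5) man page recommends: instead of stacking pam_unix_auth and pam_ldap as two sufficient lines, the documented example marks pam_unix_auth as binding with server_policy and makes pam_ldap required. A hedged sketch for the login service only (the same pattern applies to the other services):

        login   auth requisite    pam_authtok_get.so.1
        login   auth required     pam_dhkeys.so.1
        login   auth required     pam_unix_cred.so.1
        login   auth binding      pam_unix_auth.so.1 server_policy
        login   auth required     pam_ldap.so.1

    This is worth comparing against the man page on the box itself, since control-flag semantics are exactly where Solaris PAM differs from Linux PAM.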

  • Wireless and Wired Network Access at same time?

    - by grasper
    At work, I use a laptop: a Dell Latitude D630 with Windows XP. I work in a lab environment where I need to use the Ethernet port with a static IP to interact with a local network (which cannot talk to the outside world). What I would like to do is use the wireless as the internet connection, so I can check email, etc., at the same time I am using the Ethernet network. It seems like this is not possible. Is there a piece of software or a way to configure it to allow me to do this?
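
    This setup usually does work on XP as long as only the wireless adapter carries a default gateway. A hedged recipe: leave the gateway field empty in the wired connection's TCP/IP settings (static IP and mask only), and if the lab network spans more than the local subnet, add a persistent route through the lab's router. The addresses below are placeholders for illustration:

        rem Assumes the lab uses 10.10.0.0/16 behind a lab router at 10.10.0.1
        route -p add 10.10.0.0 mask 255.255.0.0 10.10.0.1

    With no gateway on the wired NIC, Windows sends lab traffic out the Ethernet port and everything else (email, web) over wireless.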

  • DSL Modem with Wireless Router

    - by David
    I have a D-Link WBR-1310 wireless router and a TP-Link TD-8616 DSL modem. My old DSL modem died recently and I got the TP-Link as a replacement. With my old DSL modem, I plugged it into the WAN port on my D-Link and I could reach the internet through wireless and through the network. However, when I plugged the new TP-Link into the WAN port, I was not able to get any internet connectivity, either on the network ports or through wireless. So I plugged my laptop directly into the TP-Link DSL modem, and I was able to get internet connectivity. I'm trying to figure out why my laptop can see the internet connection but the D-Link router cannot. I think the problem is due to the IP addressing: my D-Link was originally set to IP address 192.168.1.1, and according to the documentation, the TP-Link DSL modem uses 192.168.1.1 as its IP address too. I do not believe that my old DSL modem had an IP address. I logged into my D-Link router and changed its IP address to 192.168.1.2 and restarted it; unfortunately, I still could not see the internet from my wireless devices. I've read a few forum postings which implied that I needed to set up a "bridge" between the two networks. Does that sound correct? Why didn't my old DSL modem require a bridge? I read pg. 12-13 of my D-Link's manual, and it suggests that I need to disable UPnP and DHCP, then plug the DSL modem into one of the LAN ports on my router. I'm concerned about doing this, since I don't think the firewall will work if I plug my DSL modem into one of the LAN ports, and I also have a home NAS on my network that I wouldn't want to be available over the internet. Does anyone have any advice about how I can get my TP-Link DSL modem to work with my D-Link router? Thanks!

  • Why does Ubuntu 11.10 hang frequently?

    - by ParveenArora
    I've been using Ubuntu for a long time, and I recently moved to its newest version, 11.10. Its interface is nice, but my touchpad hangs frequently at short intervals. When that happens, no key or key combination works; the only option is to restart my laptop or log out. I am using a Dell Inspiron N4010. I have tried a lot to solve this problem but haven't found any satisfactory help so far. Most of my friends who are using Ubuntu 11.10 have reported the same to me, so it seems like a bug. How can I get out of this problem?

  • ColdFusion autorestart

    - by Comcar
    ColdFusion is automatically restarting, a lot. It comes in waves: everything seems fine for a while, then the server struggles for a few minutes, restarts repeatedly, then settles down again. I have FusionReactor installed, but when CF goes down FR stops logging, so it's not really helping; looking through the archived logs just shows gaps. These are all the occurrences of the phrase "ColdFusion started" today:

        [root@server2 logs]# grep -i "Coldfusion started" server.log | grep "11/27/12"
        "Information","main","11/27/12","01:49:35",,"ColdFusion started"
        "Information","main","11/27/12","01:50:46",,"ColdFusion started"
        "Information","main","11/27/12","01:52:39",,"ColdFusion started"
        "Information","main","11/27/12","01:54:08",,"ColdFusion started"
        "Information","main","11/27/12","01:55:12",,"ColdFusion started"
        "Information","main","11/27/12","01:56:29",,"ColdFusion started"
        "Information","main","11/27/12","01:57:36",,"ColdFusion started"
        "Information","main","11/27/12","01:58:57",,"ColdFusion started"
        "Information","main","11/27/12","01:59:56",,"ColdFusion started"
        "Information","main","11/27/12","02:01:38",,"ColdFusion started"
        "Information","main","11/27/12","02:03:11",,"ColdFusion started"
        "Information","main","11/27/12","02:04:41",,"ColdFusion started"
        "Information","main","11/27/12","02:07:53",,"ColdFusion started"
        "Information","main","11/27/12","02:10:45",,"ColdFusion started"
        "Information","main","11/27/12","02:11:49",,"ColdFusion started"
        "Information","main","11/27/12","02:13:09",,"ColdFusion started"
        "Information","main","11/27/12","02:14:18",,"ColdFusion started"
        "Information","main","11/27/12","02:15:44",,"ColdFusion started"
        "Information","main","11/27/12","02:17:06",,"ColdFusion started"
        "Information","main","11/27/12","02:34:19",,"ColdFusion started"
        "Information","main","11/27/12","03:01:20",,"ColdFusion started"
        "Information","main","11/27/12","05:25:59",,"ColdFusion started"
        "Information","main","11/27/12","06:30:48",,"ColdFusion started"
        "Information","main","11/27/12","06:36:20",,"ColdFusion started"
        "Information","main","11/27/12","09:34:07",,"ColdFusion started"
        "Information","main","11/27/12","09:35:39",,"ColdFusion started"
        "Information","main","11/27/12","09:36:41",,"ColdFusion started"
        "Information","main","11/27/12","09:39:15",,"ColdFusion started"
        "Information","main","11/27/12","09:40:42",,"ColdFusion started"
        "Information","main","11/27/12","09:42:55",,"ColdFusion started"
        "Information","main","11/27/12","09:44:23",,"ColdFusion started"
        "Information","main","11/27/12","09:46:18",,"ColdFusion started"
        "Information","main","11/27/12","09:47:35",,"ColdFusion started"
        "Information","main","11/27/12","09:48:53",,"ColdFusion started"
        "Information","main","11/27/12","09:50:04",,"ColdFusion started"
        "Information","main","11/27/12","09:51:51",,"ColdFusion started"
        "Information","main","11/27/12","09:53:05",,"ColdFusion started"
        "Information","main","11/27/12","09:54:24",,"ColdFusion started"
        "Information","main","11/27/12","09:55:28",,"ColdFusion started"
        "Information","main","11/27/12","09:56:38",,"ColdFusion started"
        "Information","main","11/27/12","09:58:03",,"ColdFusion started"
        "Information","main","11/27/12","09:59:03",,"ColdFusion started"
        "Information","main","11/27/12","10:04:37",,"ColdFusion started"
        "Information","main","11/27/12","12:04:02",,"ColdFusion started"

    I've been looking at the live server metrics in FR on a second screen all day; the CPU, memory and requests all seemed fine around 12 midday, then the server rebooted. Looking at the logs for the hour between 9am and 10am (more than 15 restarts in that hour), the CPU never went over 44% usage and the memory never exceeded 53% usage, in the recorded stats at least. There is no JDBC tracking at the moment, so I'll add that and see if MySQL is causing a problem. But can anyone help me narrow this down: what would cause ColdFusion to auto-restart? (I'm assuming the auto-restart only happens because FusionReactor is installed.) It's a Red Hat 5 LAMP stack running ColdFusion 9 and FusionReactor 4.5.2.
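
    Since FR goes quiet at exactly the interesting moment, the JVM's own artifacts may say more than the ColdFusion logs. A hedged place to look (paths assume a default CF9 install under /opt/coldfusion9; adjust to the actual root):

        grep -i "outofmemory" /opt/coldfusion9/logs/*.log
        find /opt/coldfusion9 -name "hs_err_pid*.log"

    hs_err_pid files are HotSpot crash dumps written to the JVM's working directory; repeated OutOfMemoryError entries or crash dumps timed with the restart waves would point at heap or native-memory exhaustion rather than anything FusionReactor is doing.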

  • Basic Auth on DirectoryIndex Only

    - by Brad
    I am trying to configure basic auth for my index file, and only my index file. I have configured it like so:

        <Files index.htm>
            Order allow,deny
            Allow from all
            AuthType Basic
            AuthName "Some Auth"
            AuthUserFile "C:/path/to/my/.htpasswd"
            Require valid-user
        </Files>

    When I visit the page, 401 Authorization Required is returned as expected, but the browser doesn't prompt for the username/password. Some further inspection has revealed that Apache is not sending the WWW-Authenticate header:

        GET http://myhost/ HTTP/1.1
        Host: myhost
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.100 Safari/534.30
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

        HTTP/1.1 401 Authorization Required
        Date: Tue, 21 Jun 2011 21:36:48 GMT
        Server: Apache/2.2.16 (Win32)
        Content-Length: 401
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>401 Authorization Required</title>
        </head><body>
        <h1>Authorization Required</h1>
        <p>This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.</p>
        </body></html>

    Why is Apache doing this? How can I configure it to send that header appropriately? It is worth noting that this exact same set of directives works fine if I set it for a whole directory; it is only when I apply it to a directory index that it does not work. This is how I know my .htpasswd and such are fine. I am using Apache 2.2 on Windows. On another note, I found this listed as a bug in Apache 1.3, which leads me to believe this is actually a configuration problem on my end.
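
    Whatever the root cause turns out to be, one workaround worth testing is keying the protection off the URL instead of the file name, so the original request for / is covered directly rather than via the index-file mapping. A sketch (this has to live in the server config or vhost, since LocationMatch is not allowed in .htaccess):

        <LocationMatch "^/(index\.htm)?$">
            AuthType Basic
            AuthName "Some Auth"
            AuthUserFile "C:/path/to/my/.htpasswd"
            Require valid-user
        </LocationMatch>

    If this prompts correctly, the problem is specific to how the <Files> section interacts with the DirectoryIndex subrequest on this build.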

  • VirtualBox guest responds to ping but all ports closed in nmap

    - by jeremyjjbrown
    I want to set up a test database on a VM for development purposes, but I cannot connect to the server via the network. I've got an Ubuntu 12.04 VM installed on a 12.04 host in VirtualBox 4.2.4, set to Bridged network mode with Promiscuous mode set to Allow All. When I try to ping the virtual guest from any network client I get the expected result:

        PING 192.168.1.209 (192.168.1.209) 56(84) bytes of data.
        64 bytes from 192.168.1.209: icmp_req=1 ttl=64 time=0.427 ms
        ...

    Internet access inside the VM is normal. But when I nmap it I get nothing!

        jeremy@bangkok:~$ nmap -sV -p 1-65535 192.168.1.209
        Starting Nmap 5.21 ( http://nmap.org ) at 2012-11-15 18:39 CST
        Nmap scan report for jeremy (192.168.1.209)
        Host is up (0.0032s latency).
        All 65535 scanned ports on jeremy (192.168.1.209) are closed
        Service detection performed. Please report any incorrect results at http://nmap.org/submit/
        Nmap done: 1 IP address (1 host up) scanned in 0.88 seconds

    ufw and iptables on the VM:

        jeremy@jeremy:~$ sudo service ufw stop
        [sudo] password for jeremy:
        ufw stop/waiting
        jeremy@jeremy:~$ sudo iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    I have scanned around and have no reason to believe that my router is blocking internal ports:

        jeremy@bangkok:~$ nmap -v 192.168.1.2
        Starting Nmap 5.21 ( http://nmap.org ) at 2012-11-15 18:44 CST
        Initiating Ping Scan at 18:44
        Scanning 192.168.1.2 [2 ports]
        Completed Ping Scan at 18:44, 0.00s elapsed (1 total hosts)
        Initiating Parallel DNS resolution of 1 host. at 18:44
        Completed Parallel DNS resolution of 1 host. at 18:44, 0.03s elapsed
        Initiating Connect Scan at 18:44
        Scanning 192.168.1.2 [1000 ports]
        Discovered open port 445/tcp on 192.168.1.2
        Discovered open port 139/tcp on 192.168.1.2
        Discovered open port 3306/tcp on 192.168.1.2
        Discovered open port 80/tcp on 192.168.1.2
        Discovered open port 111/tcp on 192.168.1.2
        Discovered open port 53/tcp on 192.168.1.2
        Discovered open port 5902/tcp on 192.168.1.2
        Discovered open port 8090/tcp on 192.168.1.2
        Discovered open port 6881/tcp on 192.168.1.2
        Completed Connect Scan at 18:44, 0.02s elapsed (1000 total ports)
        Nmap scan report for 192.168.1.2
        Host is up (0.0017s latency).
        Not shown: 991 closed ports
        PORT     STATE SERVICE
        53/tcp   open  domain
        80/tcp   open  http
        111/tcp  open  rpcbind
        139/tcp  open  netbios-ssn
        445/tcp  open  microsoft-ds
        3306/tcp open  mysql
        5902/tcp open  vnc-2
        6881/tcp open  bittorrent-tracker
        8090/tcp open  unknown
        Read data files from: /usr/share/nmap
        Nmap done: 1 IP address (1 host up) scanned in 0.08 seconds

    Answer: it turns out all of the ports were open to the network. I installed OpenSSH and confirmed it, then edited my DB conf to listen on external IPs and all was well.
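
    For MySQL specifically, "listen to external IPs" means the bind-address setting in the server config, which stock Ubuntu installs pin to 127.0.0.1 so the server never appears on the bridged interface at all. A minimal sketch, assuming the default 12.04 path:

        sudo sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/my.cnf
        sudo service mysql restart

    This matches the behaviour seen above: a daemon bound only to loopback leaves its port closed to the LAN even with the firewall wide open.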

  • Veewee, Vagrant, Puppet, Erlang and RabbitMQ

    - by Tobias
    I am kinda stuck with a problem I have been trying to wrap my head around for days now. Here is what I am doing: using Veewee, I create a VirtualBox image and then a Vagrant box from it (see here, here). Finally I run Puppet from Vagrant to install RabbitMQ (see here). Veewee, Vagrant and VirtualBox all run on Mac OS X 10.7.4; the Vagrant box itself is CentOS 6.2. This worked fine for quite some time, until I recreated the VirtualBox image a couple of days ago. During installation of the rabbitmq-plugins during my Puppet run I now get the following error:

        /Stage[main]/Rabbitmq/Exec[rabbitmq-plugins]/returns: erlexec: HOME must be set

    My RabbitMQ Puppet configuration can be found in my GitHub repo for the project, but here is the most important part:

        $version = "2.8.7"
        $url = "http://www.rabbitmq.com/releases/rabbitmq-server/v${version}/rabbitmq-server-${version}-1.noarch.rpm"

        package{"erlang":
          ensure => "present",
        }

        package{"rabbitmq-server":
          provider => "rpm",
          source   => $url,
          require  => Package["erlang"]
        }

        exec{"rabbitmq-plugins":
          path    => "/usr/bin:/usr/sbin:/bin",
          command => "rabbitmq-plugins enable rabbitmq_management",
          require => Package["rabbitmq-server"]
        }

    My additional repositories, e.g. epel, are defined in Veewee's postinstall.sh, right at the top of the file. Finally, this is what I get from '/etc/init.d/rabbitmq-server status':

        [{pid,2834},
         {running_applications,[{rabbit,"RabbitMQ","2.8.7"},
                                {ssl,"Erlang/OTP SSL application","4.1.6"},
                                {public_key,"Public key infrastructure","0.13"},
                                {crypto,"CRYPTO version 2","2.0.4"},
                                {mnesia,"MNESIA CXC 138 12","4.5"},
                                {os_mon,"CPO CXC 138 46","2.2.7"},
                                {sasl,"SASL CXC 138 11","2.1.10"},
                                {stdlib,"ERTS CXC 138 10","1.17.5"},
                                {kernel,"ERTS CXC 138 10","2.14.5"}]},
         {os,{unix,linux}},
         {erlang_version,"Erlang R14B04 (erts-5.8.5) [source] [64-bit] [rq:1] [async-threads:30] [kernel-poll:true]\n"},
         {memory,[{total,24993120},
                  {processes,10328496},
                  {processes_used,10321296},
                  {system,14664624},
                  {atom,1175905},
                  {atom_used,1143841},
                  {binary,17192},
                  {code,11416020},
                  {ets,766168}]},
         {vm_memory_high_watermark,0.4},
         {vm_memory_limit,205851852},
         {disk_free_limit,1000000000},
         {disk_free,7089795072},
         {file_descriptors,[{total_limit,924},
                            {total_used,4},
                            {sockets_limit,829},
                            {sockets_used,2}]},
         {processes,[{limit,1048576},{used,131}]},
         {run_queue,0},
         {uptime,6}]

    Sources on the web suggest that I have to set HOME. Of course HOME was set when I logged into the box: for user vagrant it was '/home/vagrant' and for root it was '/root'. As always, any hints/ideas/suggestions/assumptions are more than welcome. Thanks a lot! Cheers, Tobi
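
    Puppet execs do not inherit a login environment, so HOME is typically unset inside the exec even though it is set in every interactive shell you check. A hedged fix is to declare it on the resource itself; the value /root is an assumption (any directory the run can write to should satisfy erlexec):

        exec{"rabbitmq-plugins":
          path        => "/usr/bin:/usr/sbin:/bin",
          environment => "HOME=/root",
          command     => "rabbitmq-plugins enable rabbitmq_management",
          require     => Package["rabbitmq-server"]
        }

    That would also explain why the problem only appeared after rebuilding the base box: a change in how the provisioner invokes commands can alter what little environment gets passed through.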

  • WD1000FYPS hard drive is marked 0 MB in 3ware (and no SMART)

    - by osgx
    After a reboot, my 1 TB SATA WD1000FYPS (which previously showed "Drive error") is marked 0 MB in the 3ware web GUI. Complete message:

        Available Drives (Controller ID 0)
        Port 1  WDC WD1000FYPS-01ZKB0  0.00 MB  NOT SUPPORTED  [Remove Drive]

    SMART gives me only the device model and ATA protocol version 1 (not 7-8, as it should be for SATA). What does this mean? Just before the reboot, when it was marked only with "Device Error", SMART was:

        Device Model:     WDC WD1000FYPS-01ZKB0
        Serial Number:    WD-WCASJ1130***
        Firmware Version: 02.01B01
        User Capacity:    1,000,204,886,016 bytes
        Device is:        Not in smartctl database [for details use: -P showall]
        ATA Version is:   8
        ATA Standard is:  Exact ATA specification draft version not indicated
        Local Time is:    Sun Mar 7 18:47:35 2010 MSK
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled
        SMART overall-health self-assessment test result: PASSED

        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x000f 200   200   051    Pre-fail Always  -           0
          3 Spin_Up_Time            0x0003 188   186   021    Pre-fail Always  -           7591
          4 Start_Stop_Count        0x0032 100   100   000    Old_age  Always  -           229
          5 Reallocated_Sector_Ct   0x0033 199   199   140    Pre-fail Always  -           3
          7 Seek_Error_Rate         0x000e 193   193   000    Old_age  Always  -           125
          9 Power_On_Hours          0x0032 078   078   000    Old_age  Always  -           16615
         10 Spin_Retry_Count        0x0012 100   100   000    Old_age  Always  -           0
         11 Calibration_Retry_Count 0x0012 100   253   000    Old_age  Always  -           0
         12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           77
        192 Power-Off_Retract_Count 0x0032 198   198   000    Old_age  Always  -           1564
        193 Load_Cycle_Count        0x0032 146   146   000    Old_age  Always  -           164824
        194 Temperature_Celsius     0x0022 117   100   000    Old_age  Always  -           35
        196 Reallocated_Event_Count 0x0032 199   199   000    Old_age  Always  -           1
        197 Current_Pending_Sector  0x0012 200   200   000    Old_age  Always  -           0
        198 Offline_Uncorrectable   0x0010 200   200   000    Old_age  Offline -           0
        199 UDMA_CRC_Error_Count    0x003e 200   200   000    Old_age  Always  -           0
        200 Multi_Zone_Error_Rate   0x0008 200   200   000    Old_age  Offline -           0

    What could be wrong with it? Can it be restored?

    PS: the new SMART output is:

        === START OF INFORMATION SECTION ===
        Device Model:     WDC WD1000FYPS-01ZKB0
        Serial Number:    [No Information Found]
        Firmware Version: [No Information Found]
        Device is:        Not in smartctl database [for details use: -P showall]
        ATA Version is:   1
        ATA Standard is:  Exact ATA specification draft version not indicated
        Local Time is:    Mon Mar 8 00:29:44 2010 MSK
        SMART is only available in ATA Version 3 Revision 3 or greater.
        We will try to proceed in spite of this.
        SMART support is: Ambiguous - ATA IDENTIFY DEVICE words 82-83 don't show if SMART supported.
        Checking for SMART support by trying SMART ENABLE command.
        Command failed, ata.status=(0x00), ata.command=(0x51), ata.flags=(0x01)
        Error SMART Enable failed: Input/output error
        SMART ENABLE failed - this establishes that this device lacks SMART functionality.
        A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.

    PPS: there was rapid growth of attribute 192 (Power-Off_Retract_Count) before it died: almost linear growth from 300 to 1700 in 6-7 hours. The drive was used in a RAID with several drives from the same factory packaging box (close serial IDs), all placed identically. The maximum temperature was 41C. (Thanks to munin's SMART monitoring.)

  • tproxy squid bridge very slow when cache is full

    - by Roberto
    I have installed a tproxy bridge proxy on a fast server with 8 GB of RAM. Traffic is around 60 Mb/s. When I first start the proxy (with the cache empty) it works very well, but when the cache becomes full (a few hours later) the bridge becomes very slow: traffic drops below 10 Mb/s and the proxy server becomes unusable. Any hints as to what may be happening? I'm using linux-2.6.30.10, iptables-1.4.3.2 and squid-3.1.1, compiled with these options:

        ./configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --datadir=/usr/share --localstatedir=/var/lib --sysconfdir=/etc/squid --libexecdir=/usr/libexec/squid --localstatedir=/var --datadir=/usr/share/squid --enable-removal-policies=lru,heap --enable-icmp --disable-ident-lookups --enable-cache-digests --enable-delay-pools --enable-arp-acl --with-pthreads --with-large-files --enable-htcp --enable-carp --enable-follow-x-forwarded-for --enable-snmp --enable-ssl --enable-async-io=32 --enable-linux-netfilter --enable-epoll --disable-poll --with-maxfd=16384 --enable-err-languages=Spanish --enable-default-err-language=Spanish

    My squid.conf:

        cache_mem 100 MB
        memory_pools off
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        acl localhost src ::1/128
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
        acl to_localhost dst ::1/128
        acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
        acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
        acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
        acl localnet src fc00::/7       # RFC 4193 local private network range
        acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
        acl net-g1 src xxx.xxx.xxx.xxx/24
        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT
        http_access allow manager localhost
        http_access deny manager
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow net-g1
        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost
        http_access deny all
        http_port 3128
        http_port 3129 tproxy
        hierarchy_stoplist cgi-bin ?
        cache_dir ufs /var/spool/squid 8000 16 256
        access_log none
        cache_log /var/log/squid/cache.log
        coredump_dir /var/spool/squid
        refresh_pattern ^ftp:             1440  20%  10080
        refresh_pattern ^gopher:          1440   0%   1440
        refresh_pattern -i (/cgi-bin/|\?)    0   0%      0
        refresh_pattern .

    I have this issue when the cache is full, but I do not really know if that is the cause. Thanks in advance, and sorry for my English. roberto
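
    One hedged suspect: the cache_dir above uses the synchronous ufs store, so once the cache fills and most requests involve disk reads and evictions, Squid's single main thread blocks on disk I/O. Since this build already includes --enable-async-io=32, switching the store type to the threaded aufs variant is a low-risk experiment (same spool path and size, assuming the cache stays where it is):

        cache_dir aufs /var/spool/squid 8000 16 256

    If throughput holds up after the cache fills again, disk latency was the bottleneck; if not, the next things to profile are file-descriptor usage and the tproxy iptables path.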

  • Enabling gzip with .htaccess... why is it hit or miss?

    - by adam-asdf
    I have shared hosting through Justhost. I use the HTML5 Boilerplate .htaccess (and have tried other methods from here and there without luck); the compression part is as follows:

        <IfModule mod_deflate.c>

            # Force deflate for mangled headers developer.yahoo.com/blogs/ydn/posts/2010/12/pushing-beyond-gzipping/
            <IfModule mod_setenvif.c>
                <IfModule mod_headers.c>
                    SetEnvIfNoCase ^(Accept-EncodXng|X-cept-Encoding|X{15}|~{15}|-{15})$ ^((gzip|deflate)\s*,?\s*)+|[X~-]{4,13}$ HAVE_Accept-Encoding
                    RequestHeader append Accept-Encoding "gzip,deflate" env=HAVE_Accept-Encoding
                </IfModule>
            </IfModule>

            # Compress all output labeled with one of the following MIME-types
            <IfModule mod_filter.c>
                AddOutputFilterByType DEFLATE application/atom+xml \
                                              application/javascript \
                                              application/json \
                                              application/rss+xml \
                                              application/vnd.ms-fontobject \
                                              application/x-font-ttf \
                                              application/xhtml+xml \
                                              application/xml \
                                              font/opentype \
                                              image/svg+xml \
                                              image/x-icon \
                                              text/css \
                                              text/html \
                                              text/plain \
                                              text/x-component \
                                              text/xml
            </IfModule>

        </IfModule>

    However, it isn't working, at least I don't think so: my home page (HTML) isn't compressing, and the CSS and some of the JS aren't gzipped. It is failing on HTML, CSS and JS, yet some things are (or were, who knows what it will look like when you check) gzipped. My domain is http://adaminfinitum.com/

    What is weird is that the (Google) PageSpeed browser extension for Firefox (whatever the current version is [Nov. 2012]) gives me a 95% speed rating and no warnings about compression, yet YSlow and the Chrome developer tools both flag me about gzip, as does a tool I found on here while researching this. To reduce cookies I set up a subdomain on my site, and I thought maybe that was the issue, so I added an .htaccess there also, but no luck. To reduce HTTP requests I embedded some of the webfonts and images in CSS (HTML5 BP stipulates not to compress images, and apparently '.woff' files are already compressed), so I thought maybe that was it and I spent all day separating and asynchronously loading those portions (via Modernizr.load), but that hasn't helped either; if anything it made things worse by increasing HTTP requests (I realize speed scores of async resources may be misleading). Researching this, it seems to be a fairly common issue, but I haven't found an explanation/solution. I don't think it is a MIME-type issue; I have quadruple-checked (and thrice edited) my .htaccess files. My hosting company said they run Apache 2.2.22, and I have looked at everything I can find. What gives?
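
    A quick way to take the browser extensions out of the loop is to request compression explicitly and read the raw response headers; a sketch using curl against the home page:

        curl -s -o /dev/null -D - -H "Accept-Encoding: gzip,deflate" http://adaminfinitum.com/

    If no Content-Encoding: gzip line comes back for text/html, the likely explanation is that mod_deflate (or mod_filter, which the HTML5 BP block depends on) is not loaded on the shared host, and no amount of .htaccess editing will change that; the host would have to confirm which modules are compiled in.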

  • External Hard-Drive Randomly Ejects; Stays On

    - by Kaleb F.
    My 250 GB I/O Magic USB external hard drive randomly disconnects/ejects from the computer after 2-30 minutes of use. When this happens, the blinking activity light on the front of the drive turns off; however, the disks can still be heard spinning. Unplugging and replugging the USB cable does not reconnect the device, and the activity light remains unlit. The only way to continue using it is to flip the drive's power switch off and on. The drive was formatted with an MBR partition table and two NTFS volumes; I recently tried switching to GUID with two Mac OS Extended (Journaled) volumes, but the problem remains. This error occurs with my new MacBook Pro running Snow Leopard as well as with my Dell E520 running Windows 7 Ultimate.

  • Flash Player problems in Windows 7 x64

    - by Flupkear
    I'm having a lot of problems with Flash Player on my Windows 7 x64 laptop. On YouTube the videos are completely black; you can only hear the sound. On Vimeo the videos play fine at normal size but are completely black in fullscreen. Also, some days ago YouTube played normal-size videos but with the controls not visible. I'm sure the problem isn't the browser, since this happens in Chrome, Firefox and IE 8. My laptop is a Dell XPS L501X, the Flash version is 10,2,152,26 and the OS is Windows 7 Home Premium with SP1. I'll appreciate any help.

  • Why does limiting my virtual memory to 512MB with ulimit -v crash the JVM?

    - by Narinder Kumar
    I am trying to enforce the maximum memory a program can consume on a Unix system. I thought ulimit -v should do the trick. Here is a sample Java program I wrote for testing:

        import java.util.*;
        import java.io.*;

        public class EatMem {
            public static void main(String[] args) throws IOException, InterruptedException {
                System.out.println("Starting up...");
                System.out.println("Allocating 128 MB of Memory");
                List<byte[]> list = new LinkedList<byte[]>();
                list.add(new byte[134217728]); //128 MB
                System.out.println("Done....");
            }
        }

    By default, my ulimit settings are (output of ulimit -a):

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 31398
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 31398
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    When I execute my Java program (java EatMem), it runs without any problems. Now I try to limit the maximum memory available to any program launched in the current shell to 512 MB:

        ulimit -v 524288

    ulimit -a shows the limit is set correctly (I suppose):

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 31398
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 31398
        virtual memory          (kbytes, -v) 524288
        file locks                      (-x) unlimited

    If I now try to execute my Java program, it gives me the following error:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    Ideally this should not happen, as my Java program only takes around 128 MB of memory, which is well within my specified ulimit parameters. If I change the arguments to my Java program as below:

        java -Xmx256m EatMem

    the program works fine again, while trying to allow more memory than the limit, e.g.:

        java -Xmx800m EatMem

    results in the expected error. Why does the program fail to execute in the first case after setting ulimit? I have tried the above test on Ubuntu 11.10 and 12.04 with Java 1.6 and Java 7.
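
    A plausible explanation (an inference, not from the original post): ulimit -v caps virtual address space, not actual usage, and the HotSpot JVM reserves its entire default maximum heap up front, typically a quarter of physical RAM on server-class machines, plus space for code, thread stacks and the permanent generation. That reservation alone can exceed a 512 MB address-space cap, which is why the VM dies before main() runs, while an explicit -Xmx256m shrinks the reservation to fit. On JVMs recent enough to support the flag, the default can be inspected directly:

        java -XX:+PrintFlagsFinal -version | grep -i maxheapsize

    If that prints a value larger than the ulimit, the failure in the first case is exactly the up-front reservation, not the 128 MB the program actually allocates.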

  • Automatically Applying Security Updates for AWS Elastic Beanstalk

    - by Eric Anderson
    I've been a fan of Heroku since its earliest days, but I like the fact that AWS Elastic Beanstalk gives you more control over the characteristics of the instances. One thing I love about Heroku is that I can deploy an app and not worry about managing it; I am assuming Heroku ensures all OS security updates are applied in a timely fashion, so I just need to make sure my app is secure. My initial research on Beanstalk shows that although it builds and configures the instances for you, after that it moves to a more manual management process, and security updates won't automatically be applied to the instances. There seem to be two areas of concern:

    1. New AMI releases: as new AMI releases hit, it seems we would want to run the latest (presumably most secure). But my research seems to indicate you need to manually launch a new setup to see the latest AMI version and then create a new environment to use it. Is there a better, automated way of rotating your instances onto new AMI releases?

    2. In between releases there will be security updates released for packages, and it seems we would want to apply those as well. My research indicates people install commands to occasionally run a yum update (a sketch of this approach follows below). But since new instances are created and destroyed based on usage, new instances would not always have the updates (i.e., in the window between instance creation and the first yum update), so occasionally you will have unpatched instances, and you will also have instances constantly patching themselves until the new AMI release is applied. My other concern is that these security updates may not have gone through Amazon's own review (as the AMI releases do), and automatically applying them might break my app. I know Dreamhost once had a 12-hour outage because they applied Debian updates completely automatically without any review; I want to make sure the same thing doesn't happen to me.

    So my question is: does Amazon provide a way to offer a fully managed PaaS like Heroku? Or is AWS Elastic Beanstalk really more of an install script, after which you are on your own (apart from the monitoring and deployment tools they provide)?
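
    On the package-update side, Beanstalk does provide a hook that approximates unattended patching: configuration files in the application's .ebextensions directory run commands during provisioning of every instance the environment launches. A hedged sketch (the file name is arbitrary; --security assumes the yum security plugin present on Amazon Linux):

        # .ebextensions/00-security-updates.config
        commands:
          security_updates:
            command: yum -y update --security

    Because this runs at instance provisioning time, newly scaled-out instances come up patched instead of waiting for a cron pass, though it does not address the AMI-rotation concern above.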

  • Apache2 & .htaccess: Apache ignoring AccessFileName

    - by Elyx0
    Hi there, here is my server configuration: Debian 32-bit / PHP 5 / Apache 2.2.3 (server built: Mar 22 2008 09:29:10).

    The AccessFileName directives (grep -ni AccessFileName *):

        apache2.conf:134:AccessFileName .htaccess
        apache2.conf:667:AccessFileName .httpdoverride

    All the AllowOverride statements in my apache2/ folder:

        mods-available/userdir.conf:6:  AllowOverride Indexes AuthConfig Limit
        mods-available/userdir.conf:16: AllowOverride FileInfo AuthConfig Limit
        mods-enabled/userdir.conf:6:    AllowOverride Indexes AuthConfig Limit
        mods-enabled/userdir.conf:16:   AllowOverride FileInfo AuthConfig Limit
        sites-enabled/default:8:        AllowOverride All
        sites-enabled/default:14:       AllowOverride All
        sites-enabled/default:19:       AllowOverride All
        sites-enabled/default:24:       AllowOverride All
        sites-enabled/default:42:       AllowOverride All

    The sites-enabled/default file:

        <VirtualHost *>
            ServerAdmin [email protected]
            ServerName mysite.com
            ServerAlias mysite.com
            DocumentRoot /var/www/mysite.com/
            <Directory />
                Options FollowSymLinks
                AllowOverride All
                Order Deny,Allow
                Deny from all
            </Directory>
            <Directory /var/www/mysite.com/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            <Directory /var/www/mysite.com/test/>
                AllowOverride All
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride All
                Options ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog /var/log/apache2/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog /var/log/apache2/access.log combined
            ServerSignature Off

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride All
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    If I change any "Allow from all" to "Deny from all" in the vhost file, it works wherever I put it. I've got one .htaccess at /mysite.com/.htaccess and one at /mysite.com/test/.htaccess, each containing:

        Order Deny,Allow
        Deny from all

    Neither of them works; I can still see my website. I've got mod_rewrite enabled, but I don't think it does anything here. I've tried almost everything :/ It works in my local environment (MAMP) but fails on my Debian server.
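
    Before digging further into the override settings, a hedged sanity check is worth running: confirm that requests are actually hitting this vhost and DocumentRoot at all, since .htaccess files are only consulted under the directory tree Apache maps the request to:

        apache2ctl -S
        apache2ctl -V | grep -i server_config

    The first shows which vhost wins for a given request; the second confirms which main config file this Apache binary actually reads, which matters here because two different AccessFileName lines appear in it.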

  • What influences the kind of RAM a desktop or laptop can support?

    - by Albert Iordache
    What exactly influences the kind of RAM a desktop or laptop can support, apart from the clock speed, the maximum amount of RAM the motherboard can handle, the DDR generation (1/2/3) and the module form factor (DIMM for desktops, SO-DIMM for laptops)? I see that in certain cases, such as with the Kingston 4GB DDR3 1333MHz CL9 (on the Kingston KTD-L3B/4G page), the page displays a set of laptop product numbers. Does the actual model of the laptop also influence the models of RAM it can support? Could, for instance, an Asus K52 work with that particular RAM module, even if it specifies Dell models?

  • Cannot run a VM with more than three network interfaces with KVM

    - by Bostonvaulter
    I'm running KVM on top of Ubuntu 10.10 Server. I can create VMs (virtual machines) and network interfaces fine, but I cannot seem to add more than three network interfaces to a guest. As soon as a VM has four network interfaces, it gets stuck on startup at the SeaBIOS page with this message:

        Starting SeaBIOS (version pre-0.6.1-20100702_143500-palmer)

    So far I've verified this with two VMs: an Ubuntu 10.10 desktop and a Vyatta router. The specific network hardware I assign to the VMs doesn't seem to matter. I'm trying to have one bridged interface and three private networks, using Vyatta to route between them. Does anyone know why I can't run a VM with more than three network interfaces?

    Edit: Additionally, the KVM thread responsible for the specific VM hangs at ~100% CPU (i.e. one core). Here's the command line of the process that is hanging:

        /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name vyatta -uuid 6dff7c94-6810-423e-5fea-fec10da0e9b7 -nodefaults -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/vyatta.monitor,server,nowait -mon chardev=monitor,mode=readline -rtc base=utc -boot c -drive file=/home/rams/virtual-machines/vyatta.img,if=none,id=drive-ide0-0-0,boot=on,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -device rtl8139,vlan=0,id=net0,mac=00:54:00:be:cc:4b,bus=pci.0,addr=0x3 -net tap,fd=97,vlan=0,name=hostnet0 -device rtl8139,vlan=1,id=net1,mac=52:54:00:da:59:ed,bus=pci.0,addr=0x5 -net tap,fd=98,vlan=1,name=hostnet1 -device rtl8139,vlan=2,id=net2,mac=52:54:00:ce:22:b6,bus=pci.0,addr=0x6 -net tap,fd=99,vlan=2,name=hostnet2 -device rtl8139,vlan=3,id=net3,mac=52:54:00:1e:bc:46,bus=pci.0,addr=0x7 -net tap,fd=101,vlan=3,name=hostnet3 -chardev pty,id=serial0 -device isa-serial,chardev=serial0 -usb -vnc 127.0.0.1:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4

    Edit: I've also found an error in dmesg that might be related (it also shows up when running virtd in verbose mode):

        14:47:24.399: warning : qemudParsePCIDeviceStrs:1422 : Unexpected exit status '1', qemu probably failed

    I've also tried disabling AppArmor, but that doesn't seem to make a difference.

  • MySQL daemon keeps terminating unexpectedly

    - by Yehia A.Salam
    The MySQL daemon on my CentOS server keeps crashing. I got the logs from /var/logs/mysqld, but I am still not sure how to fix this:

        121114 16:22:56 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
        121114 21:55:11 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        121114 21:55:11 [Note] Plugin 'FEDERATED' is disabled.
        121114 21:55:11 InnoDB: The InnoDB memory heap is disabled
        121114 21:55:11 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121114 21:55:11 InnoDB: Compressed tables use zlib 1.2.3
        121114 21:55:11 InnoDB: Using Linux native AIO
        121114 21:55:11 InnoDB: Initializing buffer pool, size = 128.0M
        121114 21:55:11 InnoDB: Completed initialization of buffer pool
        121114 21:55:11 InnoDB: highest supported file format is Barracuda.
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        121114 21:55:11 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        121114 21:55:12 InnoDB: Waiting for the background threads to start
        121114 21:55:13 InnoDB: 1.1.6 started; log sequence number 77177262
        121114 21:55:13 [Note] Event Scheduler: Loaded 0 events
        121114 21:55:13 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.5.12' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Server (GPL) by Remi
        121115 00:19:44 mysqld_safe Number of processes running now: 0
        121115 00:19:44 mysqld_safe mysqld restarted
        121115  0:19:47 [Note] Plugin 'FEDERATED' is disabled.
        121115  0:19:47 InnoDB: The InnoDB memory heap is disabled
        121115  0:19:47 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121115  0:19:47 InnoDB: Compressed tables use zlib 1.2.3
        121115  0:19:47 InnoDB: Using Linux native AIO
        121115  0:19:47 InnoDB: Initializing buffer pool, size = 128.0M
        InnoDB: mmap(137363456 bytes) failed; errno 12
        121115  0:19:47 InnoDB: Completed initialization of buffer pool
        121115  0:19:47 InnoDB: Fatal error: cannot allocate memory for the buffer pool
        121115  0:19:47 [ERROR] Plugin 'InnoDB' init function returned error.
        121115  0:19:47 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        121115  0:19:47 [ERROR] Unknown/unsupported storage engine: InnoDB
        121115  0:19:47 [ERROR] Aborting

    Edit #1: output of free -m:

                     total       used       free     shared    buffers     cached
        Mem:           496        370        126          0         24        110
        -/+ buffers/cache:        234        261
        Swap:         1023          9       1014

    Edit #2: Also, the largest table in my MySQL instance is 20 MB, so memory use should be pretty moderate.

        SELECT CONCAT(table_schema, '.', table_name),
               CONCAT(ROUND(table_rows / 1000000, 2), 'M') rows,
               CONCAT(ROUND(data_length / ( 1024 * 1024 * 1024 ), 2), 'G') DATA,
               CONCAT(ROUND(index_length / ( 1024 * 1024 * 1024 ), 2), 'G') idx,
               CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2), 'G') total_size,
               ROUND(index_length / data_length, 2) idxfrac
        FROM   information_schema.TABLES
        ORDER  BY data_length + index_length DESC
        LIMIT  10;
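
    The decisive line is "InnoDB: mmap(137363456 bytes) failed; errno 12" (ENOMEM): when mysqld is restarted, the box no longer has enough free memory to reserve the 128 MB InnoDB buffer pool, so the crash-restart loop is a symptom of overall memory pressure on a ~500 MB server rather than of table size. A hedged mitigation is to shrink the pool in /etc/my.cnf and restart:

        [mysqld]
        innodb_buffer_pool_size = 64M

    Longer term, adding swap or capping whatever else on the host is consuming memory (commonly Apache/PHP workers) addresses the cause rather than the symptom.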

  • Linux driver for Intel ICH10R

    - by Pablo Santa Cruz
    Hi! I would like to know which 64-bit Linux distribution I can use with an Intel ICH10R RAID. I have tried CentOS 5.2 and Ubuntu Server 8.04; neither supports that RAID controller. Also, I would like to know where to set up the RAID-5 I want to use. I am using a TYAN S7002 motherboard, and the BIOS software does not include an option to configure the RAID the way I am used to doing on Dell servers... Thanks a lot in advance.
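
    Worth noting (an addition, not from the original question): ICH10R is firmware-assisted "fake RAID", so Linux support means either dmraid picking up the BIOS metadata or, as is more commonly recommended, ignoring the BIOS RAID entirely and building a native software array with mdadm. A sketch, assuming three data disks at sdb, sdc and sdd:

        mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
        mdadm --detail /dev/md0

    With mdadm the distribution choice stops mattering for the controller, since the disks appear to the kernel as plain AHCI devices.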

  • NRPE Warning threshold must be a positive integer

    - by Frida
    OS: Ubuntu 12.10 Server 64-bit. I've installed Icinga with ido2db, pnp4nagios and icinga-web (latest release, following the instructions given in the documentation, installation with apt, etc.). I am using icinga-web to monitor my hosts. For the moment I have just my localhost, and all is perfect. I am trying to add a host and monitor it with NRPE (version 2.12):

        root@server:/etc/icinga# /usr/lib/nagios/plugins/check_nrpe -H client
        NRPE v2.12

    The configuration looks good. I've created the file /etc/icinga/objects/client.cfg on the server as below:

        root@server:/etc/icinga/objects# cat client.cfg
        define host{
                use             generic-host    ; Name of host template to use
                host_name       client
                alias           client.toto
                address         xx.xx.xx.xx
        }

        # Service Definitions
        define service{
                use                     generic-service
                host_name               client
                service_description     CPU Load
                check_command           check_nrpe_1arg!check_load
        }

        define service{
                use                     generic-service
                host_name               client
                service_description     Number of Users
                check_command           check_nrpe_1arg!check_users
        }

    and added to my /etc/icinga/commands.cfg:

        # this command runs a program $ARG1$ with no arguments
        define command {
                command_name    check_nrpe
                command_line    /usr/lib/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ -a $ARG2$
        }

        # this command runs a program $ARG1$ with no arguments
        define command {
                command_name    check_nrpe_1arg
                command_line    /usr/lib/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
        }

    But it does not work. These are the logs from the client:

        Dec  3 19:45:12 client nrpe[604]: Connection from xx.xx.xx.xx port 32641
        Dec  3 19:45:12 client nrpe[604]: Host address is in allowed_hosts
        Dec  3 19:45:12 client nrpe[604]: Handling the connection...
        Dec  3 19:45:12 client nrpe[604]: Host is asking for command 'check_users' to be run...
        Dec  3 19:45:12 client nrpe[604]: Running command: /usr/lib/nagios/plugins/check_users -w -c
        Dec  3 19:45:12 client nrpe[604]: Command completed with return code 3 and output: check_users: Warning threshold must be a positive integer#012Usage:check_users -w -c
        Dec  3 19:45:12 client nrpe[604]: Return Code: 3, Output: check_users: Warning threshold must be a positive integer#012Usage:check_users -w -c
        Dec  3 19:44:49 client nrpe[32582]: Connection from xx.xx.xx.xx port 32129
        Dec  3 19:44:49 client nrpe[32582]: Host address is in allowed_hosts
        Dec  3 19:44:49 client nrpe[32582]: Handling the connection...
        Dec  3 19:44:49 client nrpe[32582]: Host is asking for command 'check_load' to be run...
        Dec  3 19:44:49 client nrpe[32582]: Running command: /usr/lib/nagios/plugins/check_load -w -c
        Dec  3 19:44:49 client nrpe[32582]: Command completed with return code 3 and output: Warning threshold must be float or float triplet!#012#012Usage:check_load [-r] -w WLOAD1,WLOAD5,WLOAD15 -c CLOAD1,CLOAD5,CLOAD15
        Dec  3 19:44:49 client nrpe[32582]: Return Code: 3, Output: Warning threshold must be float or float triplet!#012#012Usage:check_load [-r] -w WLOAD1,WLOAD5,WLOAD15 -c CLOAD1,CLOAD5,CLOAD15
        Dec  3 19:44:49 client nrpe[32582]: Connection from xx.xx.xx.xx closed.

    Have you any ideas?
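
    The client logs give the answer away: the plugins are executed as check_users -w -c and check_load -w -c, with no threshold values at all. Those values come from the command definitions in the client's nrpe.cfg, so the likely fix is to fill them in there. A hedged sketch (the thresholds are example numbers; the stock Debian/Ubuntu file ships similar ones):

        command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
        command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20

    Then restart the daemon on the client (sudo service nagios-nrpe-server restart on Ubuntu) and re-run check_nrpe from the Icinga server.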

  • No sound (USB speakers) in Ubuntu 9.04 Jaunty Jackalope

    - by Mike
    I just installed Ubuntu 9.04 and have been unable to get sound working. I have a set of USB speakers (not USB-powered with a stereo plug; they are totally USB). I've tried plugging them directly into the tower as well as into the USB port on my Dell 2405FPW monitor. Both USB ports are functioning correctly (I tested by sticking a flash drive in there; they both read it), and the speakers function correctly in Windows. If it's relevant, I have an SB Audigy 2 sound card that came with the computer, but it is not being used. Any ideas? Thanks! EDIT: These are the speakers: Logitech S-150 USB Speakers.
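
    A hedged first check is whether ALSA enumerates the USB device at all and which card index it received; USB audio commonly lands at card 1 while output defaults to card 0:

        aplay -l
        speaker-test -D plughw:1,0 -c 2

    If speaker-test produces noise from the Logitech set, pointing the default device at that card (via a ~/.asoundrc with the USB card as the slave, or simply selecting it under System > Preferences > Sound) should bring back desktop audio. The card number 1 above is an assumption to verify against the aplay output.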

  • Windows 7 Easy Transfer does not start (Windows XP)

    - by Gerhard Weiss
    Windows 7 Easy Transfer does not start on my Windows XP system. I installed Windows 7 Easy Transfer, and when I click to start it, nothing happens, not even an error message. I uninstalled and reinstalled a few times with the same results. I even tried starting it from C:\Program Files\Windows Easy Transfer 7\migwiz.exe, with the same result: nothing happens! Has anyone else run into this issue? Any suggestions on how to start it up differently? This is my work system, so I do not know if they have it locked down. It is a Dell OptiPlex 755 running Windows XP, built about 3 years ago. I downloaded the 32-bit version from this website: http://windows.microsoft.com/en-us/windows7/products/features/windows-easy-transfer (and yes, my system is 32-bit ;)

  • Guests can't access KVM host server by name although nslookup and dig return the correct record

    - by user190196
    So I have a KVM host that also runs an Apache server with some yum repos. The VM guests are connected to the default virtual network, which is configured to offer DHCP and forwarding with NAT on virbr0 (192.168.122.1). The guests can successfully access the yum repos on the host by IP address; for example, curl 192.168.122.1/repo1 returns the content without problems. But I'd like the guests to be able to reach the web server on the host by name rather than by IP address. I added the desired name record to the host's /etc/hosts file, and libvirt's dnsmasq service seems to be serving it correctly to the guests, since nslookup and dig successfully resolve the name on the guests:

        [root@localhost ~]# nslookup repo
        Server:         192.168.122.1
        Address:        192.168.122.1#53

        Name:   repo
        Address: 192.168.122.1

        [root@localhost ~]# dig repo

        ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6 <<>> repo
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55938
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;repo.                          IN      A

        ;; ANSWER SECTION:
        repo.                   0       IN      A       192.168.122.1

        ;; Query time: 0 msec
        ;; SERVER: 192.168.122.1#53(192.168.122.1)
        ;; WHEN: Tue Sep 17 02:10:46 2013
        ;; MSG SIZE  rcvd: 38

    But curl/ping/etc. still fail:

        [root@localhost ~]# curl repo
        curl: (6) Couldn't resolve host 'repo'

    while a request via IP address works:

        [root@localhost ~]# curl 192.168.122.1
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
        <html>
        <head>
        <title>Index of /</title>
        [...]

    Same with ping:

        [root@localhost ~]# ping repo
        ping: unknown host repo

        [root@localhost ~]# ping 192.168.122.1
        PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
        64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.110 ms
        64 bytes from 192.168.122.1: icmp_seq=2 ttl=64 time=0.146 ms
        64 bytes from 192.168.122.1: icmp_seq=3 ttl=64 time=0.191 ms
        ^C
        --- 192.168.122.1 ping statistics ---
        3 packets transmitted, 3 received, 0% packet loss, time 2298ms
        rtt min/avg/max/mdev = 0.110/0.149/0.191/0.033 ms

    I tried adding "repo 192.168.122.1" to the guests' /etc/hosts files, but still no dice. I also tried changing the guests' /etc/nsswitch.conf with both:

        hosts: files dns

    and:

        hosts: dns files

    I've read the relevant libvirt documentation, and I'm not sure where else to learn more about this and be able to move forward with it.
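
    If hosts-file tweaks on the guests stay unreliable, libvirt can serve the record straight from the dnsmasq instance it already runs for the default network. A hedged sketch: edit the network definition (virsh net-edit default), add a <dns> block like the one below, then recreate the network (virsh net-destroy default; virsh net-start default):

        <dns>
          <host ip='192.168.122.1'>
            <hostname>repo</hostname>
          </host>
        </dns>

    Separately, single-label names can trip up some resolvers regardless of what DNS answers, so testing with a trailing dot (curl http://repo./repo1) is a cheap way to tell resolver-side search/ndots handling apart from a genuine lookup failure.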
