Search Results

Search found 1157 results on 47 pages for 'latency'.

Page 18/47

  • different nmap results

    - by aasasas
    Hello, I have scanned my server from outside and from inside. Why are the results different? [root@xxx ~]# nmap -sV -p 0-65535 localhost Starting Nmap 5.51 ( http://nmap.org ) at 2011-02-16 07:59 MSK Nmap scan report for localhost (127.0.0.1) Host is up (0.000015s latency). rDNS record for 127.0.0.1: localhost.localdomain Not shown: 65534 closed ports PORT STATE SERVICE VERSION 22/tcp open ssh OpenSSH 4.3 (protocol 2.0) 80/tcp open http Apache httpd 2.2.3 ((CentOS)) Service detection performed. Please report any incorrect results at http://nmap.org/submit/ . Nmap done: 1 IP address (1 host up) scanned in 7.99 seconds AND sh-3.2# nmap -sV -p 0-65535 xxx.com Starting Nmap 5.51 ( http://nmap.org ) at 2011-02-16 00:01 EST Warning: Unable to open interface vmnet1 -- skipping it. Warning: Unable to open interface vmnet8 -- skipping it. Stats: 0:07:49 elapsed; 0 hosts completed (1 up), 1 undergoing SYN Stealth Scan SYN Stealth Scan Timing: About 36.92% done; ETC: 00:22 (0:13:21 remaining) Stats: 0:22:05 elapsed; 0 hosts completed (1 up), 1 undergoing Service Scan Service scan Timing: About 75.00% done; ETC: 00:23 (0:00:02 remaining) Nmap scan report for xxx.com (x.x.x.x) Host is up (0.22s latency). Not shown: 65528 closed ports PORT STATE SERVICE VERSION 21/tcp open tcpwrapped 22/tcp open ssh OpenSSH 4.3 (protocol 2.0) 25/tcp open tcpwrapped 80/tcp open http Apache httpd 2.2.3 ((CentOS)) 110/tcp open tcpwrapped 143/tcp open tcpwrapped 443/tcp open tcpwrapped 8080/tcp open http-proxy?
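
    One quick way to see what evidence produced each port state in the two scans is nmap's --reason option, run from outside against the ports that differ (the hostname and port list below are copied from the scans above):

      nmap --reason -sV -p 21,22,25,80,110,143,443,8080 xxx.com

    Ports that show up as tcpwrapped or filtered from outside but closed from localhost usually point at an intermediate firewall, or at tcp_wrappers/xinetd answering for services that are bound but access-controlled.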

    Read the article

  • Rails/Mongo across multiple different geo-regions

    - by wmarbut
    I have a system that by necessity requires physical presence in three or more different locations and I need advice on structuring in such a way that my database stays replicated in a timely manner without horrible latency. I've seen mysql access and replication be incredibly slow when the application server was trying to talk to a node that wasn't physically collocated. In this case I am using mongodb. The stack is linux/passenger/ruby/rails/mongodb. The database is write heavy and read light. The infrastructure is Amazon EC2 The application layer must be physically located in 3 or more different locations. I can't justify this requirement further than it is a requirement. The database, however needn't be located in more than one location if it can be written to quickly from other locations. From reading mongo's documentation, mongo replication seems like more of a candidate than sharding b/c my datastore is not huge. However I don't see anything that addresses the issue of speed for servers communicating across large distances with potentially high latency.
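
    A minimal sketch of the replication side, assuming each mongod was started with --replSet rs0 (the hostnames below are placeholders): keep the candidate primary in the write-heavy region and give the distant members priority 0 so they replicate asynchronously but never take over as primary.

      mongo --host us-east-db.example.com --eval 'rs.initiate({ _id: "rs0", members: [
        { _id: 0, host: "us-east-db.example.com:27017", priority: 2 },
        { _id: 1, host: "eu-west-db.example.com:27017", priority: 0 },
        { _id: 2, host: "ap-south-db.example.com:27017", priority: 0 } ] })'

    If writes only wait for the primary's acknowledgement (the usual default), cross-region distance then shows up as replication lag on the remote members rather than in write response times.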

    Read the article

  • Filtered Router Interface

    - by jviotti
    I'm having some problems with a Scientific-Atlanta DPR2320R2, specifically with the WiFi. A few months ago I changed its username and password and now I can't remember them. I tried cracking it with Hydra, but that made things worse: the webadmin content rendered only partially and threw a lot of errors. I then reset the router and found myself able to browse the web from an ethernet-connected PC. WiFi is configured by registering each device's MAC address, and since the router was reset, the registered MAC addresses were lost. No device could connect to the WiFi; in fact, the devices do not even recognize the network. I tried pointing to 192.168.0.1 to re-establish the MACs, but I couldn't connect to the router's access point. Tried listing hosts: $ nmap -sP 192.168.0.0/24 Starting Nmap 5.00 ( http://nmap.org ) at 2012-12-11 01:18 ART Host 192.168.0.1 is up (0.0018s latency). Host 192.168.0.11 is up (0.00025s latency). Nmap done: 256 IP addresses (2 hosts up) scanned in 59.62 seconds Then I checked that 192.168.0.1 was really up by sending pings; it responded to all of them. I quick-scanned the access point: $ nmap 192.168.0.1 Starting Nmap 5.00 ( http://nmap.org ) at 2012-12-11 01:08 ART Interesting ports on 192.168.0.1: Not shown: 999 closed ports PORT STATE SERVICE 80/tcp filtered http Nmap done: 1 IP address (1 host up) scanned in 6.73 seconds Look at the state of port 80: FILTERED. I'm pretty confused now. Any suggestion would be appreciated. Thanks in advance.

    Read the article

  • Basic multicast network performance problems

    - by davedavedave
    I've been using mpong from 29west's mtools package to get some basic idea of multicast latency across various Cisco switches: 1Gb 2960G, 10Gb 4900M and 10Gb Nexus N5548P. The 1Gb is just for comparison. I have the following results for ~400 runs of mpong on each switch (sending 65536 "ping"-like messages to a receiver which then sends back -- all over multicast). Numbers are latencies measured in microseconds.
      Switch          Average      StdDev     Min        Max
      2960 (1Gb)      109.68463    0.092816   109.4328   109.9464
      4900M (10Gb)    705.52359    1.607976   703.7693   722.1514
      NX 5548 (10Gb)  58.563774    0.328242   57.77603   59.32207
    The result for the 4900M is very surprising. I've tried unicast ping and I see the 4900 has ~10us higher latency than the N5548P (average 73us vs 64us). Iperf (with no attempt to tune it) shows both 10Gb switches give me 9.4Gbps line speed. The two machines are connected to the same switch and we're not doing any multicast routing. OS is RHEL 6. 10Gb NICs are HP 10GbE PCI-E G2 Dual-port NICs (I believe they are rebranded Mellanox cards). The 4900 switch is used in a project with tight access control so I'm waiting for approval before I can access it and check the config. The other two I have full access to configure. I've looked at the Cisco document [1] detailing differences between NX-OS and IOS w.r.t. multicast, so I've got some ideas to try out, but this isn't an area where I have much expertise. Does anyone have any idea what I should be looking at once I get access to the switch? [1] http://docwiki.cisco.com/wiki/Cisco_NX-OS/IOS_Multicast_Comparison
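
    One hedged place to start once access to the 4900M is approved is its IGMP snooping state: without an active querier on the VLAN, a snooping switch can age out group state and handle the traffic sub-optimally, which is the kind of thing that could show up as a multicast-only latency gap (IOS-style commands; the VLAN number is a placeholder):

      show ip igmp snooping
      show ip igmp snooping querier
      show ip igmp snooping groups vlan 100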

    Read the article

  • Using Round Robin DNS on simple VPN setup

    - by dannymcc
    We have two internet connections which are load balanced to share the load between them. We set this up after one of the internet providers proved to be less than reliable, although speed- and latency-wise it is great when it is working. We'd rather utilise both connections as much as possible than leave one idle until the other drops out. We have a number of remote workers who occasionally need to connect via VPN from their laptops or iPads, and we also have a small number of permanent LAN-to-LAN tunnels running from smaller branches. Originally we only had one internet connection and used one of our static IP addresses for all VPN users. Now that we have two internet connections running all of the time, I am trying to make sure that the VPN is available to our team regardless of which connection drops. So my solution is to create two A records for the name vpn. under our domain, one for each peer's static IP address. Is this a sensible way of achieving this? Should I expect higher latency due to packets being lost if one peer fails and some packets still get routed to it anyway? A brief mockup of the setup I have:
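
    As for the two A records themselves, a minimal BIND-style zone fragment would look something like this (the name and addresses are placeholders):

      ; zone file fragment for the company domain
      vpn   IN  A   203.0.113.10    ; static IP on connection A
      vpn   IN  A   198.51.100.20   ; static IP on connection B

    Resolvers will rotate between the two answers, but they will not notice a dead peer, so clients may still need a retry (or a failover-aware VPN client) while one line is down.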

    Read the article

  • Feasibility of Windows Server 2008 DFS replication over WAN link

    - by CesarGon
    We have just set up a WAN link that connects two buildings in our organisation. The link is provided by a 100-Mbps point to point line. We have a Windows Server 2008 R2 domain controller on each side of the link. Now we are planning to set up DFS for file services across the organisation. The estimated data volume is over 2 TB, and will grow at approximately 20% annually. My idea is to set up a file server in each building and install DFS so that all the contents stay replicated over the 100-Mbps link. I hope that this will ensure that any user will be directed to the closest (and fastest) server when requesting a file from the DFS folders. My concern is whether a 100-Mbps WAN link is good enough to guarantee DFS replication. I've no experience with DFS, so any solid advice is welcome. The line is reliable (i.e. it doesn't crash often) and our data transfer tests show that a 5 MB/sec transfer rate is easily achieved. This is approximately 40% of the nominal bandwidth. I am also concerned about the latency. I mean, how long will users need to wait to see one change on one side of the link after the change has been made on the other side. My questions are: Is this link between networks a reliable infrastructure on which to set up DFS replication? What latency times would be typical (seconds, minutes, hours, days)? Would you recommend that we go for DFS in this scenario, or is there a better alternative? Many thanks.
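
    As a rough worked figure using the measured 5 MB/sec: the initial replication of the 2 TB data set would keep the link busy for a little under five days, while the ~20% annual growth works out to roughly 1 GB of new data per day, i.e. only a few minutes of link time (a back-of-envelope sketch that ignores DFS-R compression and protocol overhead):

      echo "scale=2; 2*1024*1024/5/3600/24" | bc    # ~4.85 days for the initial 2 TB seed at 5 MB/s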

    Read the article

  • Nagios check_host_alive and check_ping not showing host as down

    - by Kyle
    I am using the check_host_alive command to send 5 packets every minute to all my routers at remote locations. Today I received a notification from the AT&T Global Client Support Center that a router was down (they can take 5-30 minutes to send these notices out) but never received a notice from Nagios. I went onto Nagios and it was showing the host as alive with a latency of 0ms. This tells me it is treating the automated "TTL expired in transit" response from my router in the data center as a reply from the remote router. Is there any way for me to tell Nagios to check where the reply is coming from? I feel like other people have to have had this issue... I tested it with the check_ping command and it produced the same results. I have the command defined with %hostname% and the proper IP in the host definition, and it works fine for telling me when the latency is high. Any ideas are welcome, I have already exercised my Google skills with no results. EDIT: root@IM-UBTU:/# /usr/local/nagios/libexec/check_ping -H 192.168.250.1 -w 100.0,10% -c 200.0,20% -vvv CMD: /bin/ping -n -U -w 10 -c 5 192.168.250.1 Output: PING 192.168.250.1 (192.168.250.1) 56(84) bytes of data. Output: From 10.69.10.2 icmp_seq=1 Time to live exceeded It knows something is wrong, so why doesn't it give me a warning?
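
    One hedged workaround is to wrap check_ping in a small script that treats a "Time to live exceeded" reply as CRITICAL before falling through to the normal check, since that reply comes from an intermediate router rather than the monitored host (paths, ping flags and thresholds are copied from the output above; the wrapper name is made up):

      #!/bin/sh
      # check_ping_strict (hypothetical): fail hard on TTL-expired replies
      HOST="$1"
      OUT=$(/bin/ping -n -U -w 10 -c 5 "$HOST" 2>&1)
      if echo "$OUT" | grep -q "Time to live exceeded"; then
          echo "PING CRITICAL - TTL expired in transit for $HOST"
          exit 2
      fi
      exec /usr/local/nagios/libexec/check_ping -H "$HOST" -w 100.0,10% -c 200.0,20%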

    Read the article

  • Bad Mumble control channel performance in KVM guest

    - by aef
    I'm running a Mumble server (Murmur) on a Debian Wheezy Beta 4 KVM guest which runs on a Debian Wheezy Beta 4 KVM hypervisor. The guest machines are attached to a bridge device on the hypervisor system through Virtio network interfaces. The hypervisor is attached to a 100Mbit/s uplink and does IP routing between the guest machines and the rest of the Internet. In this setup we're experiencing a clearly recognizable lag between double-clicking a channel in the client and the channel join actually happening. This happens with a lot of different clients between 1.2.3 and 1.2.4 on Linux and Windows systems. Voice quality and latency seem to be completely unaffected by this. Most of the time the client's information dialog states a 16ms latency for both the voice and control channels. The deviation for the control channel is usually a lot higher than that of the voice channel. In some situations the control channel is displayed with a 100ms ping and a deviation of about 1000. It seems TCP performance is the problem here. We had no problems on an earlier setup which was in principle quite like the new one: a Debian Lenny based Xen hypervisor with a software-virtualised guest machine and an earlier version of the Mumble 1.2.3 series. The current murmurd --version says: 1.2.3-349-g315b5f5-2.1
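
    A hedged thing to try on the guest's virtio interface: segmentation and receive offloads occasionally interact badly with virtio/bridged setups and can show up as TCP jitter while UDP voice stays clean (the interface name is an assumption, and the change is easy to revert):

      ethtool -k eth0                           # list current offload settings
      ethtool -K eth0 gso off tso off gro off   # temporarily disable offloads to test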

    Read the article

  • What to look for in a switch with LAN/WAN verses an iSCSI SAN?

    - by Luke
    I'm setting up a VMWare ESXi 5 environment with 3 server nodes. Dell recommended 2x Force10 S60 switches shared (iSCSI SAN, LAN/WAN). The S60 switches are extremely powerful. They have 1.25 GB of buffer cache, < 9us latency. But they are very expensive (online price ~$15k per switch, actual quote a little less). I've been told that "by the book" you should at least have 2 internal switches for SAN, and 2 switches for LAN/WAN (each with a redundant). I know some of the pros and cons of each approach. What I'm wondering is, would it be more cost effective to disjoin the SAN from LAN with less expensive switches? The answer to this question highlights what I should be looking for in a switch for the SAN. What should I be looking for in a LAN/WAN switch, in comparison to the SAN? With the above linked question for the SAN: How is buffer latency measured? When you see 36 MB of buffer cache, is that shared or per port? So 36 MB would be 768kb or 36MB per port? With 3 to 6 servers how much buffer cache do you really need? What else should I be looking at? Our application will be heavily using HTML5 websockets (high number of persistent connections). The amount of data being sent is small; Data sent between client <- server isn't broadcasted (not a chat/IM service). We will be doing some database reporting too (csv export, sums, some joins). We are a small business and on a budget. We'd probably only be able to spend no more than $20k on switches total (2 or 4).
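
    On the 768 KB figure in the question: that number only follows if the 36 MB buffer pool is shared across all ports of a 48-port switch; if the spec is per port, each port simply gets 36 MB (a quick sanity check):

      echo "36*1024/48" | bc    # 768 KB per port if 36 MB is shared across 48 ports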

    Read the article

  • How to prevent asymmetric routing with multiple eBGP routers?

    - by Andy Shinn
    I have 2 routers announcing a /22 subnet to different providers (one provider connects to each of the 2 routers). I have split the /22 into two /23s to announce one /23 on each of the routers plus the /22 (the providers will take the more specific route). This allows me to fail over and keep traffic inside each /23 going in and out through the same provider. What are other ways in which I could announce just the /22 with both routers and have packets from servers on the network behind the routers go back out the same router they came in on? EDIT: The main problem I come across, which end users and clients complain about the most, is that the least-hop route is sometimes not the "optimal" route. In my case, I know that Provider B may have better latency to X nation. But when packets come in from Provider B, they may go out Provider A or Provider B. The reverse is also true. If I send a packet to X nation out Provider A, even though it may have more hops back, the packet will likely come in from Provider B (which may have higher latency, packet loss, etc. to this nation).
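
    If both routers advertise only the /22, inbound path selection is ultimately up to the providers' peers, but it can be biased; one common lever is AS-path prepending toward the provider to de-prefer, sketched below in IOS-style syntax with a placeholder ASN, prefix and neighbor address:

      ip prefix-list OUR-AGG seq 5 permit 203.0.112.0/22
      !
      route-map TO-PROVIDER-A permit 10
       match ip address prefix-list OUR-AGG
       set as-path prepend 64500 64500
      !
      router bgp 64500
       neighbor 192.0.2.1 route-map TO-PROVIDER-A out

    Pairing that with a matching local-preference policy for outbound traffic keeps most flows symmetric, though with two independent upstreams per-flow symmetry can never be fully guaranteed.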

    Read the article

  • Netgear CG3000D new Modem/Router - Random High Ping

    - by justin.chmura
    Cox recently came out, looked at my internet, and decided that the modem I had was causing the high latency issue. The speed was fine but the ping would spike to around 100 and over when gaming or putting more load than browsing on the line. After they replaced it, it seems like I get better latency, but when it spikes, I get upwards of 300 ping with something like 500 jitter. I figured I would hit the Server Fault universe before sending another email to Cox. I opted not to do the Cox setup as it was an extra $20 which I thought would have just set up the wireless (which I can handle). Is there a setting or something that I missed that needs to be set up? The firmware for the CG3000D is awful and not fun to use. I did change some hidden settings on the RgServices.asp page (I'll attach a screenshot). I've also heard that the router/modem combos are awful and that I should go back and just ask for a stand-alone modem. Any input is helpful. All screenshots: http://imgur.com/a/JX6qu#0
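
    Before the next email to Cox it may help to attach hard numbers; a long mtr report shows per-hop loss, average and worst-case latency, which makes it easier to pin the spikes on the modem, the first hop, or something further out (the target below is just a well-known public address used as an example):

      mtr --report --report-cycles 300 8.8.8.8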

    Read the article

  • To what extent is size a factor in SSD performance?

    - by artif
    To what extent is the size of an SSD a factor in its performance? In my mind, correct me if I'm wrong, a bigger SSD should be, everything else being equal, faster than a smaller one. A bigger SSD would have more erase blocks and thus more leeway for the FTL (flash translation layer) to do garbage collection optimization. Also there would be more time before TRIM became necessary. I see on Wikipedia that it remarks that "The performance of the SSD can scale with the number of parallel NAND flash chips used in the device" so it seems throughput also increases significantly. Also many SSDs contain internal caches of some sort and presumably those caches are larger for correspondingly large SSDs. But supposing this effect exists, I would like a quantitative analysis. Does throughput increase linearly? How much is garbage collection impacted, if at all? Does latency stay the same? And so on. Would the performance of a 8 GB SSD be significantly different from, for example, an 80 GB SSD assuming both used high quality chips, controllers, etc? Are there any resources (webpages, research papers, presentations, books, etc) that discuss correlations between SSD performance (4 KB random write speed, latency, maximum sequential throughput, etc) and size? I realize this does not really sound like a programming question but it is relevant for what I'm working on (using flash for caching hard drive data) which does involve programming. If there is a better place to ask this question, eg a more hardware oriented site, what would that be? Something like the equivalent of stack overflow (or perhaps a forum) for in-depth questions on hardware interfaces, internals, etc would be appreciated.
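
    For the quantitative side, the practical answer is usually to benchmark the sizes in question under identical settings; a hedged fio sketch for the 4 KB random-write case (the device path is a placeholder and the run is destructive to whatever is on it):

      fio --name=randwrite --filename=/dev/sdX --direct=1 --ioengine=libaio \
          --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based --group_reporting

    Running the same job against the 8 GB and 80 GB drives, plus a sequential variant (--rw=write --bs=1m), gives the throughput and latency comparison the question asks about.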

    Read the article

  • Ubuntu 10.04 network manager issues

    - by Shark
    I was using the default network manager to connect to my wi-fi network, but if the connection drops or the router restarts, the network manager won't reconnect automatically after (I guess) a couple of tries and just gives a pop-up to connect manually. To avoid this annoyance I installed WICD, but though it does try to reconnect to the network after a dropped connection, it is unable to resolve the IP address and I am left with an even bigger annoyance. 1. Is there a way to counter either of these issues? 2. Something like a background process that will check network status periodically and then try to connect to a favored network? Edit - output of lshw -C network *-network description: Wireless interface product: Broadcom Corporation vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:12:00.0 logical name: eth1 version: 01 serial: c0:cb:38:18:9b:7f width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=wl0 driverversion=5.60.48.36 ip=192.168.11.2 latency=0 multicast=yes wireless=IEEE 802.11 resources: irq:17 memory:fbc00000-fbc03fff *-network description: Ethernet interface product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:13:00.0 logical name: eth0 version: 02 serial: f0:4d:a2:94:2d:74 size: 10MB/s capacity: 100MB/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10MB/s resources: irq:29 ioport:e000(size=256) memory:d0b10000-d0b10fff(prefetchable) memory:d0b00000-d0b0ffff(prefetchable) memory:fb200000-fb21ffff(prefetchable)
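
    For question 2, a minimal sketch of such a watchdog, assuming it runs from cron every minute and that restarting NetworkManager is acceptable when the link dies (the gateway address is an assumption based on the 192.168.11.x address in the lshw output):

      #!/bin/sh
      # hypothetical wifi watchdog: re-kick NetworkManager if the gateway stops answering
      GW=192.168.11.1
      ping -c 3 -W 5 "$GW" > /dev/null 2>&1 || service network-manager restart

    Saved as /usr/local/bin/wifi-watchdog.sh (the path is a placeholder), it can be run from a root crontab line such as * * * * * /usr/local/bin/wifi-watchdog.sh.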

    Read the article

  • Measuring performance indicators on a cluster

    - by Aditya Singh
    My architecture is based on Amazon. An ELB load balancer balances POST requests among m1.large instances. Every instance has an nginx server on port 80 which distributes the requests to 4 python-tornado servers on the backend which handle them. These tornado servers take about 5 - 10ms to respond to one request, but this is only the internal compute time of each request. I want to put this to the test and measure the response time from ELB to upstream and back, see how it varies when the QPS throughput is increased, and plot a graph of Time vs. QPS vs. Latency and other factors like CPU and memory. Is there software to do that, or should I log everything somewhere with latency checks and then analyze the whole log to get the data out? I would also need to write a self-monitor which keeps checking the whole response time. Is it possible to do that with a script from within the server? If so, will it be accurate?
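
    nginx itself can log per-request timing, which covers the nginx-to-tornado leg and gives raw data for a Time vs. QPS vs. latency plot; a hedged config sketch using the standard $request_time and $upstream_response_time variables (the format name and log path are placeholders):

      log_format timing '$remote_addr [$time_local] "$request" $status '
                        'rt=$request_time urt=$upstream_response_time';
      access_log /var/log/nginx/timing.log timing;

    Measuring the ELB-to-nginx leg still needs a client-side timer (for example a load generator that records full round-trip times), since this log only starts the clock when the request reaches nginx.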

    Read the article

  • Bonnie does not provide speed for Sequential Input / Block

    - by Lqp1
    I'm using ProxmoxVE and I would like to run some benchmarks regarding the performance of this product. One of these benchmarks is bonnie++; it runs very well in a VM (qemu-kvm) but when I run it in a container (openVZ), it does not provide a reading speed (only writing). I don't understand why... Does anyone know what's happening? VMs and Containers are Debian 7.4. Here's the output of bonnie in the container: root@ct2:/# bonnie++ -u root Using uid:0, gid:0. Writing a byte at a time...done Writing intelligently...done Rewriting...done Reading a byte at a time...done Reading intelligently...done start 'em...done...done...done...done...done... Create files in sequential order...done. Stat files in sequential order...done. Delete files in sequential order...done. Create files in random order...done. Stat files in random order...done. Delete files in random order...done. Version 1.96 ------Sequential Output------ --Sequential Input- --Random- Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP ct2 1G 843 99 59116 8 60351 4 4966 99 +++++ +++ 2745 8 Latency 9558us 3582ms 527ms 1672us 936us 5248us Version 1.96 ------Sequential Create------ --------Random Create-------- ct2 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ Latency 19567us 358us 368us 107us 59us 25us 1.96,1.96,ct2,1,1401810323,1G,,843,99,59116,8,60351,4,4966,99,+++++,+++,2745,8,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,9558us,3582ms,527ms,1672us,936us,5248us,19567us,358us,368us,107us,59us,25us The filesystem for / is of type "simfs", which is a pseudo filesystem for openVZ. Maybe it's related to this issue but I can't find anyone with the same issue with bonnie and openVZ... Thanks for your help. Regards, Thomas.
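
    A hedged guess at the cause: bonnie++ prints +++++ whenever a phase finishes too quickly to time, and inside the container the sequential reads are probably being served from the host's page cache. Forcing a file set well beyond the RAM the container can cache usually brings the Sequential Input figures back (the sizes below are examples):

      bonnie++ -u root -d /tmp -r 2048 -s 4096    # pretend 2 GB of RAM, use a 4 GB file set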

    Read the article

  • Not able to connect to port different than 22 - OpenVPN

    - by t8h7gu
    I have an OpenVPN network with 5 clients. A computer with Arch Linux hosts the OpenVPN server; it also hosts a virtual machine running CentOS which is also connected to the OpenVPN subnet. A Windows 8 machine hosts another virtual machine with CentOS; both of them are connected to OpenVPN. The last machine is a virtual machine with CentOS hosted by a computer with Ubuntu 14 (which is not connected to OpenVPN). All machines in the OpenVPN subnet are bolded. All physical computers are in different networks. The problem is that when I use nmap to scan the Windows machine and its guest virtual machine, it says that the host seems down. When I force nmap to scan a specific port it shows a filtered state: nmap -Pn -p 50010 n3 Starting Nmap 6.46 ( http://nmap.org ) at 2014-06-07 19:49 CEST Nmap scan report for n3 (10.8.0.3) Host is up (0.11s latency). rDNS record for 10.8.0.3: node3.com PORT STATE SERVICE 50010/tcp filtered unknown Telnet also cannot connect to this port: telnet n3 50010 Trying 10.8.0.3... telnet: Unable to connect to remote host: No route to host But ss on this host shows the proper state of this port: ss -anp | grep 50010 LISTEN 0 50 10.8.0.3:50010 *:* users:(("java",12310,271)) What might be the possible reason for this, and how can I fix it? EDIT I've found that I am able to connect via telnet to the ssh port: telnet n3 22 Trying 10.8.0.3... Connected to n3. Escape character is '^]'. SSH-2.0-OpenSSH_5.3 So it seems that it's not a problem with the Windows firewall. But I have no idea what it might be. Also the nmap result for the first thousand ports: nmap -Pn -p 1-1000 n3 Starting Nmap 6.46 ( http://nmap.org ) at 2014-06-07 20:08 CEST Nmap scan report for n3 (10.8.0.3) Host is up (0.49s latency). rDNS record for 10.8.0.3: node3.com Not shown: 999 filtered ports PORT STATE SERVICE 22/tcp open ssh Nmap done: 1 IP address (1 host up) scanned in 77.87 seconds
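
    Since ssh on the same host answers, a hedged next step is to watch the VPN interface on n3 while the connection attempt runs (the interface name tun0 is an assumption):

      tcpdump -ni tun0 'tcp port 50010'

    If the SYNs arrive but the port still behaves as filtered, n3's own firewall is the likely culprit (CentOS ships a default iptables REJECT rule that produces exactly the "No route to host" seen above while leaving port 22 open); if they never arrive, the drop is upstream of the guest, for example in the Windows host's firewall or the VPN/NAT path.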

    Read the article

  • What differences are there between "home" switches and "professional" switches?

    - by pjreddie
    Our radio station uses a PtP wireless system to stream our radio and TV signals from our studio up a hill to our transmitter. We have been having problems with warbly sound and drop outs that come from some point in this system. An engineer that occasionally visits the station thinks it could be the switches we use on each side of the PtP wireless system to connect the PtP devices to the encoders and decoders and wants us to get two of these switches: http://www.amazon.com/Netgear-JGS516-ProSafe-16-Port-Ethernet/dp/B0002CWPOK/ref=dp_return_1 The encoder/decoder setup only streams 8Mbps total so it seems like the switches we have should not be stressed out, unless they are causing sufficient latency to degrade the performance of the encoder/decoder. At each end of the connection we only have 4 connections, is there any reason we couldn't get a cheaper, "home" quality switch like this: http://www.amazon.com/D-Link-DGS-1005G-5-Port-Gigabit-Desktop/dp/tech-data/B003X7TRWE/ref=de_a_smtd Is there a significant difference that we would notice in terms of latency between these two switches? How much does the quality of the switch actually matter in this scenario? Any help is appreciated, feel free to ask questions if anything needs clarification. Thanks

    Read the article

  • Internet compression proxy for low speed broadband?

    - by user23150
    I live in a rural location, using high-latency wireless off a local ISP's tower. My speed tests vary day to day, but I can get around 1Mb up/down. The problem is, I work with large files, uploading and downloading (HD videos, development software, etc.). It can be painful to wait sometimes. Plus I do some side contract game development, and it can be very difficult to playtest with other developers (200ms ping is a good day for me). Now, obviously it's not going to be easy to solve the latency problem without different wireless hardware. But speedwise, I am wondering if I can use some kind of compression technology on a proxy. For instance, my work computer has full access to a 26Mb down, 10Mb up connection, that is totally unused at night and the weekends. If I could run some kind of compression technology on our server, and use it as a proxy to route to my home computer, I could stand to gain some major speed. I realize that by bogging down a system with compression, I could potentially lose whatever speed gain I had. But the proxy server is a quad core xeon, and the receiving computer is a pretty decent i7 computer, so that shouldn't be a concern. I found http://toonel.net/ but it seems more geared toward very slow narrowband users, like dial-up. Plus, I would prefer to just be able to point my browser to a proxy server, rather then install software on my client machine. EDIT I thought about my question a little more, and realize I am going to need to install software on my client in order to decompress, and possible compress (for uploading). That's not a huge deal.
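
    Since the EDIT concedes that some client-side software is acceptable, one low-effort sketch is an SSH tunnel with compression to the idle work machine, exposed locally as a SOCKS proxy the browser can point at (the hostname is a placeholder; compression mainly helps text-like traffic, not already-compressed video):

      ssh -C -D 1080 -N user@work-server.example.com
      # then set the browser's SOCKS proxy to localhost:1080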

    Read the article

  • Can not connect remotely to MySQL Server on Ubuntu 10.10

    - by BobFranz
    OK, I have searched Google for two days trying to get this to work. Here are the steps I have taken so far: Clean install of Ubuntu 10.10 Install mysql 5.1 as well as admin Comment out the bind address in the config file Create a new database Create a new user that is username@% to allow remote connections Grant all access to this user to the new database EXCEPT the grant option Login on the server is ok using this new user and database on the localhost Login on the server is ok using this new user and database on the server internal network ip Login from a remote computer is ok using this new user and database using the internal network ip Login is not working when logging in with this username and database using the external ip address from the server or the remote computer. I have port forwarding enabled for this port and it is viewable from outside as confirmed by canyouseeme.org I have nmap'd using the following command on the internal ip and get the below result: nmap -PN -p 3306 192.168.1.73 Starting Nmap 5.21 ( http://nmap.org ) at 2011-02-19 13:41 PST Nmap scan report for computername-System-Name (192.168.1.73) Host is up (0.00064s latency). PORT STATE SERVICE 3306/tcp open mysql Nmap done: 1 IP address (1 host up) scanned in 0.23 seconds I have nmap'd using the following command on the external ip and get the below result (I have hidden the ip for obvious reasons): nmap -PN -p 3306 xxx.xxx.xx.xxx Starting Nmap 5.21 ( http://nmap.org ) at 2011-02-19 13:42 PST Nmap scan report for HOSTNAME (xxx.xxx.xx.xxx) Host is up (0.00056s latency). PORT STATE SERVICE 3306/tcp closed mysql Nmap done: 1 IP address (1 host up) scanned in 0.21 seconds I am completely stuck here and need some help. I have tried everything under the moon and still can not connect from a remote external ip address. Any help is greatly appreciated; if I need to do anything to help find the problem, let me know and I will post the results here.
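
    A hedged way to narrow this down: watch port 3306 on the server while attempting the external-IP connection from the remote machine. If no packets show up, the router's port forward (or its lack of hairpin/loopback NAT when testing from inside the LAN) or the ISP is dropping the traffic before MySQL ever sees it; if packets do arrive, the problem is on the server side (the interface name is an assumption):

      tcpdump -ni eth0 'tcp port 3306'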

    Read the article

  • Can't connect to wi-fi hotspot in Ubuntu 11.10

    - by ht3t
    I'm new to Ubuntu. I'm having a wireless network problem in Ubuntu 11.10. I made a hotspot using Connectify from a computer which is running Windows 7. I can access it in Windows 7 but not in Ubuntu 11.10. Every time I access it,I get a message "disconnected". I'm using msi fx 400 notebook with Intel Centrino wireless -N 1000 wireless card. Ubuntu version is 11.10 with KDE desktop. $ sudo lshw -c network [sudo] password for ht3t: *-network description: Wireless interface product: Centrino Wireless-N 1000 vendor: Intel Corporation physical id: 0 bus info: pci@0000:06:00.0 logical name: wlan0 version: 00 serial: 00:26:c7:56:b8:f0 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlagn driverversion=3.0.0-12-generic firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:44 memory:e7400000-e7401fff *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:07:00.0 logical name: eth0 version: 06 serial: 40:61:86:b6:b1:a2 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw IP=192.168.21.107 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s resources: irq:41 ioport:9000(size=256) memory:e6004000-e6004fff memory:e6000000-e6003fff I can't do anything without internet connection. How can I fix this?

    Read the article

  • Integrating external computer into a domain - some recommendations please

    - by TomTom
    Given: * A multi-location company. Every office has local routers that connect to a central VPN-capable router in a data center. All fine so far. We now need to move a computer off site into a hosting center across the globe, to get it closer to some supplier computers we work for. It will run limited logic but latency is important, and our latency so far is too large. This computer will be in a data center and does not require incoming connections except for administrative purposes, although it needs outgoing connections. I have no real chance to put one of my VPN routers there, sadly - otherwise I would have no problem. Usage of RRAS is not recommended (we had various problems there over time), but I could deal with it. The computer MUST integrate into the corporate structure via VPN, join the domain and be fully "tracked" (controlled for performance). What is the best suggestion? So far it looks like my best bet would be to log in via RRAS and deal with whatever issues arise there, plus use the local firewall to limit incoming connections to this computer to what is needed (which comes down to an emergency RDP connection allowance). Does anyone have a better idea?

    Read the article

  • Commit in SQL

    - by PRajkumar
    SQL Transaction Control Language Commands (TCL)                                           (COMMIT) Commit Transaction As a SQL language we use transaction control language very frequently. Committing a transaction means making permanent the changes performed by the SQL statements within the transaction. A transaction is a sequence of SQL statements that Oracle Database treats as a single unit. This statement also erases all save points in the transaction and releases transaction locks. Oracle Database issues an implicit COMMIT before and after any data definition language (DDL) statement. Oracle recommends that you explicitly end every transaction in your application programs with a COMMIT or ROLLBACK statement, including the last transaction, before disconnecting from Oracle Database. If you do not explicitly commit the transaction and the program terminates abnormally, then the last uncommitted transaction is automatically rolled back.   Until you commit a transaction: ·         You can see any changes you have made during the transaction by querying the modified tables, but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit ·         You can roll back (undo) any changes made during the transaction with the ROLLBACK statement   Note: Most of the people think that when we type commit data or changes of what you have made has been written to data files, but this is wrong when you type commit it means that you are saying that your job has been completed and respective verification will be done by oracle engine that means it checks whether your transaction achieved consistency when it finds ok it sends a commit message to the user from log buffer but not from data buffer, so after writing data in log buffer it insists data buffer to write data in to data files, this is how it works.   Before a transaction that modifies data is committed, the following has occurred: ·         Oracle has generated undo information. The undo information contains the old data values changed by the SQL statements of the transaction ·         Oracle has generated redo log entries in the redo log buffer of the System Global Area (SGA). The redo log record contains the change to the data block and the change to the rollback block. These changes may go to disk before a transaction is committed ·         The changes have been made to the database buffers of the SGA. These changes may go to disk before a transaction is committed   Note:   The data changes for a committed transaction, stored in the database buffers of the SGA, are not necessarily written immediately to the data files by the database writer (DBWn) background process. This writing takes place when it is most efficient for the database to do so. It can happen before the transaction commits or, alternatively, it can happen some times after the transaction commits.   When a transaction is committed, the following occurs: 1.      The internal transaction table for the associated undo table space records that the transaction has committed, and the corresponding unique system change number (SCN) of the transaction is assigned and recorded in the table 2.      The log writer process (LGWR) writes redo log entries in the SGA's redo log buffers to the redo log file. It also writes the transaction's SCN to the redo log file. This atomic event constitutes the commit of the transaction 3.      Oracle releases locks held on rows and tables 4.      
Oracle marks the transaction complete   Note:   The default behavior is for LGWR to write redo to the online redo log files synchronously and for transactions to wait for the redo to go to disk before returning a commit to the user. However, for lower transaction commit latency application developers can specify that redo be written asynchronously and that transaction do not need to wait for the redo to be on disk.   The syntax of Commit Statement is   COMMIT [WORK] [COMMENT ‘your comment’]; ·         WORK is optional. The WORK keyword is supported for compliance with standard SQL. The statements COMMIT and COMMIT WORK are equivalent. Examples Committing an Insert INSERT INTO table_name VALUES (val1, val2); COMMIT WORK; ·         COMMENT Comment is also optional. This clause is supported for backward compatibility. Oracle recommends that you used named transactions instead of commit comments. Specify a comment to be associated with the current transaction. The 'text' is a quoted literal of up to 255 bytes that Oracle Database stores in the data dictionary view DBA_2PC_PENDING along with the transaction ID if a distributed transaction becomes in doubt. This comment can help you diagnose the failure of a distributed transaction. Examples The following statement commits the current transaction and associates a comment with it: COMMIT     COMMENT 'In-doubt transaction Code 36, Call (415) 555-2637'; ·         WRITE Clause Use this clause to specify the priority with which the redo information generated by the commit operation is written to the redo log. This clause can improve performance by reducing latency, thus eliminating the wait for an I/O to the redo log. Use this clause to improve response time in environments with stringent response time requirements where the following conditions apply: The volume of update transactions is large, requiring that the redo log be written to disk frequently. The application can tolerate the loss of an asynchronously committed transaction. The latency contributed by waiting for the redo log write to occur contributes significantly to overall response time. You can specify the WAIT | NOWAIT and IMMEDIATE | BATCH clauses in any order. Examples To commit the same insert operation and instruct the database to buffer the change to the redo log, without initiating disk I/O, use the following COMMIT statement: COMMIT WRITE BATCH; Note: If you omit this clause, then the behavior of the commit operation is controlled by the COMMIT_WRITE initialization parameter, if it has been set. The default value of the parameter is the same as the default for this clause. Therefore, if the parameter has not been set and you omit this clause, then commit records are written to disk before control is returned to the user. WAIT | NOWAIT Use these clauses to specify when control returns to the user. The WAIT parameter ensures that the commit will return only after the corresponding redo is persistent in the online redo log. Whether in BATCH or IMMEDIATE mode, when the client receives a successful return from this COMMIT statement, the transaction has been committed to durable media. A crash occurring after a successful write to the log can prevent the success message from returning to the client. In this case the client cannot tell whether or not the transaction committed. The NOWAIT parameter causes the commit to return to the client whether or not the write to the redo log has completed. This behavior can increase transaction throughput. 
With the WAIT parameter, if the commit message is received, then you can be sure that no data has been lost. Caution: With NOWAIT, a crash occurring after the commit message is received, but before the redo log record(s) are written, can falsely indicate to a transaction that its changes are persistent. If you omit this clause, then the transaction commits with the WAIT behavior. IMMEDIATE | BATCH Use these clauses to specify when the redo is written to the log. The IMMEDIATE parameter causes the log writer process (LGWR) to write the transaction's redo information to the log. This operation option forces a disk I/O, so it can reduce transaction throughput. The BATCH parameter causes the redo to be buffered to the redo log, along with other concurrently executing transactions. When sufficient redo information is collected, a disk write of the redo log is initiated. This behavior is called "group commit", as redo for multiple transactions is written to the log in a single I/O operation. If you omit this clause, then the transaction commits with the IMMEDIATE behavior. ·         FORCE Clause Use this clause to manually commit an in-doubt distributed transaction or a corrupt transaction. ·         In a distributed database system, the FORCE string [, integer] clause lets you manually commit an in-doubt distributed transaction. The transaction is identified by the 'string' containing its local or global transaction ID. To find the IDs of such transactions, query the data dictionary view DBA_2PC_PENDING. You can use integer to specifically assign the transaction a system change number (SCN). If you omit integer, then the transaction is committed using the current SCN. ·         The FORCE CORRUPT_XID 'string' clause lets you manually commit a single corrupt transaction, where string is the ID of the corrupt transaction. Query the V$CORRUPT_XID_LIST data dictionary view to find the transaction IDs of corrupt transactions. You must have DBA privileges to view the V$CORRUPT_XID_LIST and to specify this clause. ·         Specify FORCE CORRUPT_XID_ALL to manually commit all corrupt transactions. You must have DBA privileges to specify this clause. Examples Forcing an in doubt transaction. Example The following statement manually commits a hypothetical in-doubt distributed transaction. Query the V$CORRUPT_XID_LIST data dictionary view to find the transaction IDs of corrupt transactions. You must have DBA privileges to view the V$CORRUPT_XID_LIST and to issue this statement. COMMIT FORCE '22.57.53';

    Read the article

  • Using NServiceBus behind a custom web service

    - by Michael Stephenson
    In this post I'd like to talk about an architecture scenario we had recently and how we were able to utilise NServiceBus to help us address this problem. Scenario Cognos is a reporting system used by one of my clients. A while back we developed a web service façade to allow line of business applications to be able to access reports from Cognos to support their various functions. The service was intended to provide access to reports which were quick running reports or pre-generated reports which could be accessed real-time on demand. One of the key aims of the web service was to provide a simple generic interface to allow applications to get any report without needing to worry about the complex .net SDK for Cognos. The web service also supported multi-hop kerberos delegation so that report data could be accesses under the context of the end user. This service was working well for a period of time. The Problem The problem we encountered was that reports were now also required to be available to batch processes. The original design was optimised for low latency so users would enjoy a positive experience, however when the batch processes started to request 250+ concurrent reports over an extended period of time you can begin to imagine the sorts of problems that come into play. The key problems this new scenario caused are: Users may be affected and the latency of on demand reports was significantly slower The Cognos infrastructure was not scaled sufficiently to be able to cope with these long peaks of load From a cost perspective it just isn't feasible to scale the Cognos infrastructure to be able to handle the load when it is only for a couple of hour window each night. We really needed to introduce a second pattern for accessing this service which would support high through-put scenarios. We also had little control over the batch process in terms of being able to throttle its load. We could however make some changes to the way it accessed the reports. The Approach My idea was to introduce a throttling mechanism between the Web Service Façade and Cognos. This would allow the batch processes to push reports requests hard at the web service which we were confident the web service can handle. The web service would then queue these requests and process them behind the scenes and make a call back to the batch application to provide the report once it had been accessed. In terms of technology we had some limitations because we were not able to use WCF or IIS7 where the MSMQ-Activated WCF services could have helped, but we did have MSMQ as an option and I thought NServiceBus could do just the job to help us here. 
The flow of how this would work was as follows: The batch applications would send a request for a report to the web service The web service uses NServiceBus to send the message to a Queue The NServiceBus Generic Host is running as a windows service with a message handler which subscribes to these messages The message handler gets the message, accesses the report from Cognos The message handler calls back to the original batch application, this is decoupled because the calling application provides a call back url The report gets into the batch application and is processed as normal This approach looks something like the below diagram: The key points are an application wanting to take advantage of the batch driven reports needs to do the following: Implement our call back contract Make a call to the service providing a call back url Provide a correlation ID so it knows how to tie each response back to its request What does NServiceBus offer in this solution So this scenario is not the typical messaging service bus type of solution people implement with NServiceBus, but it did offer the following: Simplified interaction with MSMQ Offered the ability to configure the number of processes working through the queue so we could find a balance between load on Cognos versus the applications end to end processing time NServiceBus offers retries and a way to manage failed messages NServiceBus offers a high availability setup The simple thing is that NServiceBus gave us the platform to build the solution on. We just implemented a message handler which functionally processed a message and we could rely on NServiceBus to do all of the hard work around managing the queues and all of the lower level things that would have took ages to write to any kind of robust level. Conclusion With this approach we were able to deal with a fairly significant performance issue with out too much rework. Hopefully this write up gives people some insight into ideas on how to leverage the excellent NServiceBus framework to help solve integration and high through-put scenarios.

    Read the article

  • Using the Onboard VGA output with a PCIe video card. Both nVidia

    - by sebikul
    I have 2 video cards, one On board, a nVidia 6150SE nForce 430 and a PCIe nVidia GeForce GT 220 1GB DDR2 RAM I have already configured the PCIe card to use the dual monitor feature, using the VGA and HDMI ports, but now I want to add a third monitor, using the On board VGA port I have managed to enable the On board graphics processor, which is taking 400MB of ram, but I cant manage to use it, nvidia-settings does not detect it, like it's not usable (but is there) My questions are the following: How can I manage to get the On board VGA display to work together with the PCIe graphics card? If possible, how can I recover those 400 MB the on board card is taking (even without being used) or how can I get it to use the PCIe card available memory? System Details: Linux 2.6.35-28-generic i686 Ubuntu 10.10 (All updates installed) NVIDIA Driver Version: 260.19.06 (Official) If more info is needed please let me know. Here is the lspci output when the On board card is disabled: 00:00.0 RAM memory: nVidia Corporation MCP61 Memory Controller (rev a1) 00:01.0 ISA bridge: nVidia Corporation MCP61 LPC Bridge (rev a2) 00:01.1 SMBus: nVidia Corporation MCP61 SMBus (rev a2) 00:01.2 RAM memory: nVidia Corporation MCP61 Memory Controller (rev a2) 00:01.3 Co-processor: nVidia Corporation MCP61 SMU (rev a2) 00:02.0 USB Controller: nVidia Corporation MCP61 USB Controller (rev a3) 00:02.1 USB Controller: nVidia Corporation MCP61 USB Controller (rev a3) 00:04.0 PCI bridge: nVidia Corporation MCP61 PCI bridge (rev a1) 00:05.0 Audio device: nVidia Corporation MCP61 High Definition Audio (rev a2) 00:06.0 IDE interface: nVidia Corporation MCP61 IDE (rev a2) 00:07.0 Bridge: nVidia Corporation MCP61 Ethernet (rev a2) 00:08.0 IDE interface: nVidia Corporation MCP61 SATA Controller (rev a2) 00:09.0 PCI bridge: nVidia Corporation MCP61 PCI Express bridge (rev a2) 00:0b.0 PCI bridge: nVidia Corporation MCP61 PCI Express bridge (rev a2) 00:0c.0 PCI bridge: nVidia Corporation MCP61 PCI Express bridge (rev a2) 00:0d.0 VGA compatible controller: nVidia Corporation C61 [GeForce 6150SE nForce 430] (rev a2) 00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration 00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control 01:09.0 Ethernet controller: Intel Corporation 82557/8/9/0/1 Ethernet Pro 100 (rev 08) 02:00.0 VGA compatible controller: nVidia Corporation GT216 [GeForce GT 220] (rev a2) 02:00.1 Audio device: nVidia Corporation High Definition Audio Controller (rev a1) And this is when both are enabled: 00:00.0 RAM memory: nVidia Corporation MCP61 Memory Controller (rev a1) 00:01.0 ISA bridge: nVidia Corporation MCP61 LPC Bridge (rev a2) 00:01.1 SMBus: nVidia Corporation MCP61 SMBus (rev a2) 00:01.2 RAM memory: nVidia Corporation MCP61 Memory Controller (rev a2) 00:01.3 Co-processor: nVidia Corporation MCP61 SMU (rev a2) 00:02.0 USB Controller: nVidia Corporation MCP61 USB Controller (rev a3) 00:02.1 USB Controller: nVidia Corporation MCP61 USB Controller (rev a3) 00:04.0 PCI bridge: nVidia Corporation MCP61 PCI bridge (rev a1) 00:05.0 Audio device: nVidia Corporation MCP61 High Definition Audio (rev a2) 00:06.0 IDE interface: nVidia Corporation MCP61 IDE (rev a2) 00:07.0 Bridge: nVidia Corporation MCP61 Ethernet (rev a2) 00:08.0 IDE interface: nVidia Corporation 
MCP61 SATA Controller (rev a2) 00:09.0 PCI bridge: nVidia Corporation MCP61 PCI Express bridge (rev a2) 00:0b.0 PCI bridge: nVidia Corporation MCP61 PCI Express bridge (rev a2) 00:0c.0 PCI bridge: nVidia Corporation MCP61 PCI Express bridge (rev a2) 00:0d.0 VGA compatible controller: nVidia Corporation C61 [GeForce 6150SE nForce 430] (rev a2) 00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration 00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control 01:09.0 Ethernet controller: Intel Corporation 82557/8/9/0/1 Ethernet Pro 100 (rev 08) 02:00.0 VGA compatible controller: nVidia Corporation GT216 [GeForce GT 220] (rev a2) 02:00.1 Audio device: nVidia Corporation High Definition Audio Controller (rev a1) Output of lshw -class display: *-display description: VGA compatible controller product: GT216 [GeForce GT 220] vendor: nVidia Corporation physical id: 0 bus info: pci@0000:02:00.0 version: a2 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vga_controller bus_master cap_list rom configuration: driver=nvidia latency=0 resources: irq:18 memory:df000000-dfffffff memory:c0000000-cfffffff memory:da000000-dbffffff ioport:ef80(size=128) memory:def80000-deffffff *-display description: VGA compatible controller product: C61 [GeForce 6150SE nForce 430] vendor: nVidia Corporation physical id: d bus info: pci@0000:00:0d.0 version: a2 width: 64 bits clock: 66MHz capabilities: pm msi vga_controller bus_master cap_list rom configuration: driver=nvidia latency=0 resources: irq:22 memory:dd000000-ddffffff memory:b0000000-bfffffff memory:dc000000-dcffffff memory:deb40000-deb5ffff If what I'm looking for is not possible, please tell me, so I can disable the On board card and recover those 400MB of wasted RAM Thanks for your help!

    Read the article

  • Slow wifi on Ubuntu 12.04 wifi driver ath9k

    - by lunar
    For the last couple of days my wifi connection is extremely slow. I am pretty sure that it is caused by the driver. Can this be improved? lo no wireless extensions. wlan0 IEEE 802.11bgn ESSID:"MyWiFi" Mode:Managed Frequency:2.437 GHz Access Point: 00:18:68:FE:7B:C7 Bit Rate=58.5 Mb/s Tx-Power=15 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=48/70 Signal level=-62 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:6960 Missed beacon:0 eth0 no wireless extensions. sudo lshw -class network *-network description: Wireless interface product: AR9285 Wireless Network Adapter (PCI-Express) vendor: Atheros Communications Inc. physical id: 0 bus info: pci@0000:03:00.0 logical name: wlan0 version: 01 serial: 74:f0:6d:34:c2:4e width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.2.0-31-generic-pae firmware=N/A ip=192.168.1.2 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn resources: irq:17 memory:d7400000-d740ffff *-network description: Ethernet interface product: AR8131 Gigabit Ethernet vendor: Atheros Communications Inc. physical id: 0 bus info: pci@0000:06:00.0 logical name: eth0 version: c0 serial: 48:4b:38:78:f6:ae capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI firmware=N/A latency=0 link=no multicast=yes port=twisted pair resources: irq:51 memory:d3800000-d383ffff ioport:8000(size=128) lsusb Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 003: ID 04f2:b1bb Chicony Electronics Co., Ltd Bus 001 Device 004: ID 0b05:1788 ASUSTek Computer, Inc. 
lspci 00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 18) 00:01.0 PCI bridge: Intel Corporation Core Processor PCI Express x16 Root Port (rev 18) 00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 18) 00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06) 00:1a.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06) 00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06) 00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 06) 00:1c.1 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 (rev 06) 00:1c.3 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 4 (rev 06) 00:1c.4 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 5 (rev 06) 00:1c.5 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 6 (rev 06) 00:1d.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a6) 00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 06) 00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 4 port SATA AHCI Controller (rev 06) 00:1f.6 Signal processing controller: Intel Corporation 5 Series/3400 Series Chipset Thermal Subsystem (rev 06) 01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [GeForce GT 425M] (rev a1) 03:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01) 04:00.0 USB controller: Fresco Logic Device 1400 (rev 01) 06:00.0 Ethernet controller: Atheros Communications Inc. AR8131 Gigabit Ethernet (rev c0) ff:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-core Registers (rev 05) ff:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 05) ff:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 05) ff:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 05) ff:02.2 Host bridge: Intel Corporation Core Processor Reserved (rev 05) ff:02.3 Host bridge: Intel Corporation Core Processor Reserved (rev 05) rfkill list all 0: hci0: Bluetooth Soft blocked: yes Hard blocked: no 1: phy0: Wireless LAN Soft blocked: no Hard blocked: no
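
    Two hedged tweaks that often help sluggish ath9k links are disabling wireless power management for the session and turning off the driver's hardware crypto via a module option; both are easy to revert if they make no difference:

      sudo iwconfig wlan0 power off
      echo "options ath9k nohwcrypt=1" | sudo tee /etc/modprobe.d/ath9k.conf
      sudo modprobe -r ath9k && sudo modprobe ath9k    # or simply reboot to pick up the option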

    Read the article
