Search Results

Search found 3463 results on 139 pages for 'physical'.


  • Ubuntu 10.04 network manager issues

    - by Shark
    I was using the default network manager to connect to my wi-fi network, but if the connection drops or the router is restarted, the network manager won't reconnect automatically; after what I guess is a couple of tries it just shows a pop-up asking me to connect manually. To avoid this annoyance I installed WICD, but although it does try to reconnect after a drop, it is unable to resolve an IP address, and I am left with an even bigger annoyance. 1. Is there a way to counter either of these issues? 2. Is there something like a background process that will check the network status periodically and then try to connect to a favored network? Edit: output of lshw -C network:
        *-network
             description: Wireless interface
             product: Broadcom Corporation
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:12:00.0
             logical name: eth1
             version: 01
             serial: c0:cb:38:18:9b:7f
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=wl0 driverversion=5.60.48.36 ip=192.168.11.2 latency=0 multicast=yes wireless=IEEE 802.11
             resources: irq:17 memory:fbc00000-fbc03fff
        *-network
             description: Ethernet interface
             product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:13:00.0
             logical name: eth0
             version: 02
             serial: f0:4d:a2:94:2d:74
             size: 10MB/s
             capacity: 100MB/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10MB/s
             resources: irq:29 ioport:e000(size=256) memory:d0b10000-d0b10fff(prefetchable) memory:d0b00000-d0b0ffff(prefetchable) memory:fb200000-fb21ffff(prefetchable)
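    A minimal watchdog along the lines of question 2 could look like the sketch below. It assumes a NetworkManager version that ships nmcli, and the connection name and gateway address are placeholders, not values taken from the question:

        #!/bin/bash
        # Hypothetical reconnect watchdog: ping the gateway every 30 s and,
        # if it is unreachable, ask NetworkManager to bring the favored connection back up.
        SSID="MyHomeWifi"        # assumed connection name, replace with your own
        GATEWAY="192.168.11.1"   # assumed gateway for the 192.168.11.x network shown above
        while true; do
            if ! ping -c 1 -W 2 "$GATEWAY" > /dev/null 2>&1; then
                nmcli con up id "$SSID"
            fi
            sleep 30
        done

    Running it from /etc/rc.local or a @reboot cron entry would give the "background process" behavior described, at the cost of a roughly 30-second detection delay.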

    Read the article

  • Assigning IPs to OpenVZ containers

    - by Vojtech
    I have recently bought a physical server and I am trying to create containers which have their own IPs. The physical machine has both IPv4 and IPv6 addresses. I also have another IPv4 address and some additional IPv6 addresses available which I would like to assign to the container. I managed to assign the address as follows:
        # vzctl set 101 --ipadd 144.76.195.252 --save
    I can ping the container from the physical machine, but not from the outside world. The same applies to the IPv6 address I assigned. This is the ifconfig of the physical machine:
        eth0      Link encap:Ethernet  HWaddr d4:3d:7e:ec:e0:04
                  inet addr:144.76.195.232  Bcast:144.76.195.255  Mask:255.255.255.224
                  inet6 addr: 2a01:4f8:200:71e7::2/64 Scope:Global
                  inet6 addr: fe80::d63d:7eff:feec:e004/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:217895 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:16779 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:322481419 (307.5 MiB)  TX bytes:1672628 (1.5 MiB)

        venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet6 addr: fe80::1/128 Scope:Link
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
                  RX packets:12 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:12 errors:0 dropped:3 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:1108 (1.0 KiB)  TX bytes:1108 (1.0 KiB)
    This is the ifconfig of the OpenVZ container:
        # ifconfig
        venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:127.0.0.2  P-t-P:127.0.0.2  Bcast:0.0.0.0  Mask:255.255.255.255
                  inet6 addr: 2a01:4f8:200:71e7::3/64 Scope:Global
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
                  RX packets:12 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:1108 (1.0 KiB)  TX bytes:1108 (1.0 KiB)

        venet0:0  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:144.76.195.252  P-t-P:144.76.195.252  Bcast:144.76.195.252  Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
    What do I need to do to make the container reachable from the outside world? What could I have forgotten? Thanks.
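    Assuming the extra address is actually routed to the host (with some providers it must first be ordered as a routed or failover IP, or bound to a virtual MAC, which is outside the host's control), two things worth checking on the hardware node are IP forwarding and proxy ARP on the public interface, since venet traffic leaves through the host:

        # On the hardware node
        sysctl net.ipv4.ip_forward                 # should be 1 for venet routing
        sysctl net.ipv4.conf.eth0.proxy_arp        # often needs to be 1 when the CT IP is from the host's subnet
        sysctl -w net.ipv4.ip_forward=1            # enable if needed (persist in /etc/sysctl.conf)
        sysctl -w net.ipv4.conf.eth0.proxy_arp=1
        # Watch whether traffic for the container IP reaches eth0 at all while pinging from outside:
        tcpdump -ni eth0 host 144.76.195.252

    If tcpdump shows nothing arriving, the problem is upstream routing at the provider rather than the OpenVZ configuration.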

    Read the article

  • disk partition centos

    - by FlourishDNA
    I am setting up a server to host two WordPress sites which together are around 70 GB in size. I have already installed CentOS as the OS and I would like to partition the disk. Is there any tool which can help me, or can someone guide me through the process? I am not an expert with shell commands. Here is some output that might help.
    OS: CentOS release 6.3
    fdisk -l
        Disk /dev/xvdb: 214.7 GB, 214748364800 bytes
        255 heads, 63 sectors/track, 26108 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000b91e0

           Device Boot      Start         End      Blocks   Id  System

        Disk /dev/xvda: 21.5 GB, 21474836480 bytes
        255 heads, 63 sectors/track, 2610 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e542c

           Device Boot      Start         End      Blocks   Id  System
        /dev/xvda1   *           1          64      512000   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/xvda2              64        2611    20458496   8e  Linux LVM

        Disk /dev/mapper/vg_flourish-lv_root: 16.7 GB, 16718495744 bytes
        255 heads, 63 sectors/track, 2032 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/mapper/vg_flourish-lv_swap: 4227 MB, 4227858432 bytes
        255 heads, 63 sectors/track, 514 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
    df
        Filesystem                        1K-blocks    Used Available Use% Mounted on
        /dev/mapper/vg_flourish-lv_root    16070076  758184  14495560   5% /
        tmpfs                                958500       0    958500   0% /dev/shm
        /dev/xvda1                           495844   31926    438318   7% /boot
    df -h
        Filesystem                       Size  Used Avail Use% Mounted on
        /dev/mapper/vg_flourish-lv_root   16G  741M   14G   5% /
        tmpfs                            937M     0  937M   0% /dev/shm
        /dev/xvda1                       485M   32M  429M   7% /boot
    Thanks
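    If the goal is simply to put the so-far-unused 200 GB /dev/xvdb behind LVM for the sites' data, a rough command-line sketch looks like the following. It assumes /dev/xvdb is empty, and the volume group, logical volume, and mount point names are only examples:

        parted -s /dev/xvdb mklabel msdos mkpart primary 1MiB 100% set 1 lvm on   # create one LVM partition
        pvcreate /dev/xvdb1
        vgcreate vg_data /dev/xvdb1
        lvcreate -n lv_www -L 150G vg_data          # leave some free space in the VG for later growth
        mkfs.ext4 /dev/vg_data/lv_www
        mkdir -p /srv/www
        mount /dev/vg_data/lv_www /srv/www
        echo '/dev/vg_data/lv_www /srv/www ext4 defaults 0 2' >> /etc/fstab

    The graphical tool system-config-lvm, shipped with many CentOS 6 installs, covers the same steps if you prefer to avoid the shell.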

    Read the article

  • Can a VM perform better when only two cores instead of four cores are presented to it?

    - by arcain
    We had a VMware VM at work with two cores allocated to it that ran a pretty heinous process in IIS. Under load the process was maxing out the CPU usage on both cores, so we asked our system engineers to present the other two cores of the physical processor to the VM. The engineer immediately said that this would not improve performance at all, but would make the VM perform worse. That statement didn't make much sense to me, and I'm wondering how what the engineer said could be true. Are there actually cases where four cores presented to a VM would cause worse performance than two cores on the same physical hardware? Let's assume an ideal situation where there's only one VM on the host server, so nothing is being shared with other OS instances. I believe the physical server had a single quad-core processor and was most likely hosting multiple VMs. I don't really know what version of ESX was running on the host, nor do I know with certainty what the physical processor configuration was, but from within the VM I had access to, I saw two 3.33 GHz AMD processors. In the end, I never got to test the engineer's assertion because 1) while we were trying to get the VM upgraded, we were able to optimize the process and reduce its CPU consumption, and 2) we ended up migrating to a different VM on another ESX server which had four cores presented to it.

    Read the article

  • SQL Server 2005 Disk Configuration: Single RAID 1+0 or multiple RAID 1+0s?

    - by mfredrickson
    Assuming that the workload for the SQL Server is just a normal OLTP database, and that there are a total of 20 disks available, which configuration would make more sense? A single RAID 1+0 containing all 20 disks. This physical volume would contain both the data files and the transaction log files, but two logical drives would be created from this RAID: one for the data files and one for the log files. Or... Two RAID 1+0s, each containing 10 disks. One physical volume would contain the data files, and the other would contain the log files. The reason for this question is a disagreement between me (SQL developer) and a co-worker (DBA). For every configuration that I've done, or seen others do, the data files and transaction log files were separated at the physical level and placed on separate RAIDs. However, my co-worker's argument is that by placing all the disks into a single RAID 1+0, any IO done by the server is potentially shared between all 20 disks, instead of just 10 disks in my suggested configuration. Conceptually, his argument makes sense to me. Also, I've found some information from Microsoft that seems to support his position: http://technet.microsoft.com/en-us/library/cc966414.aspx In the section titled "3. RAID10 Configuration", showing a configuration in which all 20 disks are allocated to a single RAID 1+0, it states: "In this scenario, the I/O parallelism can be used to its fullest by all partitions. Therefore, distribution of I/O workload is among 20 physical spindles instead of four at the partition level." But... every other configuration I've seen suggests physically separating the data and log files onto separate RAIDs, and everything I've found here on Server Fault suggests the same. I understand that log files will be write-heavy and that data files will be a combination of reads and writes, but does this require that the files be placed onto separate RAIDs instead of a single RAID?

    Read the article

  • When using software RAID and LVM on Linux, which IO scheduler and readahead settings are honored?

    - by andrew311
    In the case of multiple layers (physical drives -> md -> dm -> LVM), how do the schedulers, readahead settings, and other disk settings interact? Imagine you have several disks (/dev/sda - /dev/sdd), all part of a software RAID device (/dev/md0) created with mdadm. Each device (including the physical disks and /dev/md0) has its own setting for the IO scheduler (changed like so) and readahead (changed using blockdev). When you throw in things like dm (crypto) and LVM, you add even more layers with their own settings. For example, if the physical device has a readahead of 128 blocks and the RAID has a readahead of 64 blocks, which is honored when I do a read from /dev/md0? Does the md driver attempt a 64-block read which the physical device driver then translates to a read of 128 blocks? Or does the RAID readahead "pass through" to the underlying device, resulting in a 64-block read? The same kind of question holds for the schedulers: do I have to worry about multiple layers of IO schedulers and how they interact, or does the scheduler on /dev/md0 effectively override the underlying schedulers? In my attempts to answer this question, I've dug up some interesting data on schedulers and tools which might help figure this out: Linux Disk Scheduler Benchmarking from Google; blktrace - generate traces of the I/O traffic on block devices; a relevant Linux kernel mailing list thread.
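    For the readahead half of the question, the per-layer values can at least be inspected and changed directly, which makes it straightforward to benchmark one layer at a time. A quick inspection sketch, with device names following the example in the question (the LVM device name is hypothetical):

        blockdev --getra /dev/sda /dev/sdb /dev/sdc /dev/sdd     # readahead per physical disk, in 512-byte sectors
        blockdev --getra /dev/md0                                # readahead of the md device
        blockdev --getra /dev/mapper/vg0-lv0                     # readahead of the LVM logical volume (assumed name)
        cat /sys/block/sda/queue/scheduler                       # scheduler of an underlying disk
        cat /sys/block/md0/queue/scheduler 2>/dev/null \
            || echo "md0 exposes no scheduler"                   # md/dm devices often expose none

    Pairing that with blktrace on /dev/sda while reading from /dev/md0 shows directly what request sizes actually reach the physical disk.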

    Read the article

  • Can't connect to wi-fi hotspot in Ubuntu 11.10

    - by ht3t
    I'm new to Ubuntu. I'm having a wireless network problem in Ubuntu 11.10. I made a hotspot using Connectify on a computer running Windows 7. I can access it from Windows 7 but not from Ubuntu 11.10; every time I try, I just get a "disconnected" message. I'm using an MSI FX400 notebook with an Intel Centrino Wireless-N 1000 card. The Ubuntu version is 11.10 with the KDE desktop.
        $ sudo lshw -c network
        [sudo] password for ht3t:
        *-network
             description: Wireless interface
             product: Centrino Wireless-N 1000
             vendor: Intel Corporation
             physical id: 0
             bus info: pci@0000:06:00.0
             logical name: wlan0
             version: 00
             serial: 00:26:c7:56:b8:f0
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=iwlagn driverversion=3.0.0-12-generic firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
             resources: irq:44 memory:e7400000-e7401fff
        *-network
             description: Ethernet interface
             product: RTL8111/8168B PCI Express Gigabit Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:07:00.0
             logical name: eth0
             version: 06
             serial: 40:61:86:b6:b1:a2
             size: 100Mbit/s
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw IP=192.168.21.107 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
             resources: irq:41 ioport:9000(size=256) memory:e6004000-e6004fff memory:e6000000-e6003fff
    I can't do anything without an internet connection. How can I fix this?
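    One way to narrow this down from the Ubuntu side is to confirm the card can see the hotspot at all, note what mode, channel and security it advertises, and watch the logs while the association fails (the commands below only read state; the interface name wlan0 is taken from the lshw output above):

        rfkill list                                             # make sure the radio is not soft- or hard-blocked
        sudo iwlist wlan0 scan | grep -E "ESSID|Mode|Channel"   # is the Connectify SSID visible, and in what mode?
        tail -f /var/log/syslog | grep -iE "NetworkManager|wpa|iwlagn"   # watch the connection attempt in real time

    The syslog lines around the "disconnected" event usually say whether it is an authentication failure, a DHCP timeout, or the driver dropping the association, and those point to very different fixes.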

    Read the article

  • What are the ways to build a failover cluster?

    - by light
    I have a task where I need to build a failover cluster in two cases: first with servers on Red Hat Enterprise Linux 5.1, and second with SUSE Linux Enterprise 11 SP1. Both cases have a SAN. I know there are many ways to build a failover cluster, but I haven't been able to find out much more, so I need the following: What are the ways to build one? The only approach I know is virtualization. Any good book or resource to broaden my mind? I'll be glad to hear any suggestion. Thanks! EDIT #1: I mean failover of servers with a business application running on them. EDIT #2: It would be great to hear a summary of the solutions available for SLES servers. EDIT #3: So if I understand correctly, in my cases the main options are to use built-in (internal) solutions or virtualization. Now I have additional questions: Do blade manufacturers, for example HP or IBM, provide their own solutions? (Without virtualization) Do I need an additional server to monitor the "heartbeat" between the main and redundant servers? (Virtualization) For example, I have several physical servers running VMs; do I need an additional server to monitor the availability of the VMs and to move them to another physical server if their physical server fails? Sorry for my poor English. EDIT #4: I mean failover of a VM, or of the OS on a physical server. In both cases a SAN will be used; how is not specified, but I think with a file system image on it. I'm starting to think my question is badly posed and I need to rework it.
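    For the SLES side (EDIT #2), the SUSE High Availability Extension for SLES 11 SP1 is built around Pacemaker and its crm shell, where a clustered service is described as resources that the cluster moves between nodes. Purely as a flavor of what that looks like, with made-up resource names, IP and devices:

        # Sketch of a two-node active/passive group in the crm shell (SLES 11 HAE)
        crm configure primitive app_ip ocf:heartbeat:IPaddr2 \
            params ip=192.168.1.50 cidr_netmask=24 op monitor interval=10s
        crm configure primitive app_fs ocf:heartbeat:Filesystem \
            params device=/dev/sdb1 directory=/srv/app fstype=ext3 op monitor interval=20s
        crm configure group app_group app_fs app_ip

    No separate "heartbeat server" is needed for this style of cluster; the nodes monitor each other over corosync, though fencing (STONITH) is strongly recommended, especially for two-node setups.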

    Read the article

  • Importing Multiple Schemas to a Model in Oracle SQL Developer Data Modeler

    - by thatjeffsmith
    Your physical data model might stretch across multiple Oracle schemas. Or maybe you just want a single diagram containing tables, views, etc. spanning more than a single user in the database. The process for importing a data dictionary is the same, regardless if you want to suck in objects from one schema, or many schemas. Let’s take a quick look at how to get started with a data dictionary import. I’m using Oracle SQL Developer in this example. The process is nearly identical in Oracle SQL Developer Data Modeler – the only difference being you’ll use the ‘File’ menu to get started versus the ‘File – Data Modeler’ menu in SQL Developer. Remember, the functionality is exactly the same whether you use SQL Developer or SQL Developer Data Modeler when it comes to the data modeling features – you’ll just have a cleaner user interface in SQL Developer Data Modeler. Importing a Data Dictionary to a Model You’ll want to open or create your model first. You can import objects to an existing or new model. The easiest way to get started is to simply open the ‘Browser’ under the View menu. The Browser allows you to navigate your open designs/models You’ll see an ‘Untitled_1′ model by default. I’ve renamed mine to ‘hr_sh_scott_demo.’ Now go back to the File menu, and expand the ‘Data Modeler’ section, and select ‘Import – Data Dictionary.’ This is a fancy way of saying, ‘suck objects out of the database into my model’ Connect! If you haven’t already defined a connection to the database you want to reverse engineer, you’ll need to do that now. I’m going to assume you already have that connection – so select it, and hit the ‘Next’ button. Select the Schema(s) to be imported Select one or more schemas you want to import The schemas selected on this page of the wizard will dictate the lists of tables, views, synonyms, and everything else you can choose from in the next wizard step to import. For brevity, I have selected ALL tables, views, and synonyms from 3 different schemas: HR SCOTT SH Once I hit the ‘Finish’ button in the wizard, SQL Developer will interrogate the database and add the objects to our model. The Big Model and the 3 Little Models I can now see ALL of the objects I just imported in the ‘hr_sh_scott_demo’ relational model in my design tree, and in my relational diagram. Quick Tip: Oracle SQL Developer calls what most folks think of as a ‘Physical Model’ the ‘Relational Model.’ Same difference, mostly. In SQL Developer, a Physical model allows you to define partitioning schemes, advanced storage parameters, and add your PL/SQL code. You can have multiple physical models per relational models. For example I might have a 4 Node RAC in Production that uses partitioning, but in test/dev, only have a single instance with no partitioning. I can have models for both of those physical implementations. The list of tables in my relational model Wouldn’t it be nice if I could segregate the objects based on their schema? Good news, you can! And it’s done by default Several of you might already know where I’m going with this – SUBVIEWS. You can easily create a ‘SubView’ by selecting one or more objects in your model or diagram and add them to a new SubView. SubViews are just mini-models. They contain a subset of objects from the main model. This is very handy when you want to break your model into smaller, more digestible parts. 
The model information is identical across the model and subviews, so you don’t have to worry about making a change in one place and not having it propagate across your design. SubViews can be used as filters when you create reports and exports as well. So instead of generating a PDF for everything, just show me what’s in my ‘ABC’ subview. But, I don’t want to do any work! Remember, I’m really lazy. More good news – it’s already done by default! The schemas are automatically used to create default SubViews Auto-Navigate to the Object in the Diagram In the subview tree node, right-click on the object you want to navigate to. You can ask to be taken to the main model view or to the SubView location. If you haven’t already opened the SubView in the diagram, it will be automatically opened for you. The SubView diagram only contains the objects from that SubView Your SubView might still be pretty big, many dozens of objects, so don’t forget about the ‘Navigator‘ either! In summary, use the ‘Import’ feature to add existing database objects to your model. If you import from multiple schemas, take advantage of the default schema based SubViews to help you manage your models! Sometimes less is more!

    Read the article

  • Implementing a Custom Coherence PartitionAssignmentStrategy

    - by jpurdy
    A recent A-Team engagement required the development of a custom PartitionAssignmentStrategy (PAS). By way of background, a PAS is an implementation of a Java interface that controls how a Coherence partitioned cache service assigns partitions (primary and backup copies) across the available set of storage-enabled members. While seemingly straightforward, this is actually a very difficult problem to solve. Traditionally, Coherence used a distributed algorithm spread across the cache servers (and as of Coherence 3.7, this is still the default implementation). With the introduction of the PAS interface, the model of operation was changed so that the logic would run solely in the cache service senior member. Obviously, this makes the development of a custom PAS vastly less complex, and in practice does not introduce a significant single point of failure/bottleneck. Note that Coherence ships with a default PAS implementation but it is not used by default. Further, custom PAS implementations are uncommon (this engagement was the first custom implementation that we know of). The particular implementation mentioned above also faced challenges related to managing multiple backup copies but that won't be discussed here. There were a few challenges that arose during design and implementation: Naive algorithms had an unreasonable upper bound of computational cost. There was significant complexity associated with configurations where the member count varied significantly between physical machines. Most of the complexity of a PAS is related to rebalancing, not initial assignment (which is usually fairly simple). A custom PAS may need to solve several problems simultaneously, such as: Ensuring that each member has a similar number of primary and backup partitions (e.g. each member has the same number of primary and backup partitions) Ensuring that each member carries similar responsibility (e.g. the most heavily loaded member has no more than one partition more than the least loaded). Ensuring that each partition is on the same member as a corresponding local resource (e.g. for applications that use partitioning across message queues, to ensure that each partition is collocated with its corresponding message queue). Ensuring that a given member holds no more than a given number of partitions (e.g. no member has more than 10 partitions) Ensuring that backups are placed far enough away from the primaries (e.g. on a different physical machine or a different blade enclosure) Achieving the above goals while ensuring that partition movement is minimized. These objectives can be even more complicated when the topology of the cluster is irregular. For example, if multiple cluster members may exist on each physical machine, then clearly the possibility exists that at certain points (e.g. following a member failure), the number of members on each machine may vary, in certain cases significantly so. Consider the case where there are three physical machines, with 3, 3 and 9 members each (respectively). This introduces complexity since the backups for the 9 members on the the largest machine must be spread across the other 6 members (to ensure placement on different physical machines), preventing an even distribution. For any given problem like this, there are usually reasonable compromises available, but the key point is that objectives may conflict under extreme (but not at all unlikely) circumstances. 
The most obvious general purpose partition assignment algorithm (possibly the only general purpose one) is to define a scoring function for a given mapping of partitions to members, and then apply that function to each possible permutation, selecting the most optimal permutation. This would result in N! (factorial) evaluations of the scoring function. This is clearly impractical for all but the smallest values of N (e.g. a partition count in the single digits). It's difficult to prove that more efficient general purpose algorithms don't exist, but the key take away from this is that algorithms will tend to either have exorbitant worst case performance or may fail to find optimal solutions (or both) -- it is very important to be able to show that worst case performance is acceptable. This quickly leads to the conclusion that the problem must be further constrained, perhaps by limiting functionality or by using domain-specific optimizations. Unfortunately, it can be very difficult to design these more focused algorithms. In the specific case mentioned, we constrained the solution space to very small clusters (in terms of machine count) with small partition counts and supported exactly two backup copies, and accepted the fact that partition movement could potentially be significant (preferring to solve that issue through brute force). We then used the out-of-the-box PAS implementation as a fallback, delegating to it for configurations that were not supported by our algorithm. Our experience was that the PAS interface is quite usable, but there are intrinsic challenges to designing PAS implementations that should be very carefully evaluated before committing to that approach.

    Read the article

  • Diving into OpenStack Network Architecture - Part 2 - Basic Use Cases

    - by Ronen Kofman
    In the previous post we reviewed several network components including Open vSwitch, Network Namespaces, Linux Bridges and veth pairs. In this post we will take three simple use cases and see how those basic components come together to create a complete SDN solution in OpenStack. With those three use cases we will review almost the entire network setup and see how all the pieces work together. The use cases we will use are:
    1. Create network – what happens when we create a network, and how we can create multiple isolated networks.
    2. Launch a VM – once we have networks we can launch VMs and connect them to networks.
    3. DHCP request from a VM – OpenStack can automatically assign IP addresses to VMs. This is done through a local DHCP service controlled by OpenStack Neutron. We will see how this service runs and what a DHCP request and response look like.
    In this post we will show connectivity: we will see how packets get from point A to point B. We first focus on what a configured deployment looks like, and only later discuss how and when the configuration is created. Personally I found it very valuable to see the actual interfaces and how they connect to each other through examples and hands-on experiments. Once the end state is clear and we know how the connectivity works, in a later post we will take a step back and explain how Neutron configures the components to provide such connectivity. We are going to get pretty technical shortly, and I recommend trying these examples on your own deployment or using the Oracle OpenStack Tech Preview. Understanding these three use cases thoroughly, and how to inspect them, will be very helpful when trying to debug a deployment if something does not work. Use case #1: Create Network. Creating a network is a simple operation; it can be performed from the GUI or the command line. When we create a network in OpenStack, the network is only available to the tenant who created it, or it can be defined as "shared" and then used by all tenants. A network can have multiple subnets, but for demonstration purposes and for simplicity we will assume that each network has exactly one subnet.
Creating a network from the command line will look like this: # neutron net-create net1 Created a new network: +---------------------------+--------------------------------------+ | Field                     | Value                                | +---------------------------+--------------------------------------+ | admin_state_up            | True                                 | | id                        | 5f833617-6179-4797-b7c0-7d420d84040c | | name                      | net1                                 | | provider:network_type     | vlan                                 | | provider:physical_network | default                              | | provider:segmentation_id  | 1000                                 | | shared                    | False                                | | status                    | ACTIVE                               | | subnets                   |                                      | | tenant_id                 | 9796e5145ee546508939cd49ad59d51f     | +---------------------------+--------------------------------------+ Creating a subnet for this network will look like this: # neutron subnet-create net1 10.10.10.0/24 Created a new subnet: +------------------+------------------------------------------------+ | Field            | Value                                          | +------------------+------------------------------------------------+ | allocation_pools | {"start": "10.10.10.2", "end": "10.10.10.254"} | | cidr             | 10.10.10.0/24                                  | | dns_nameservers  |                                                | | enable_dhcp      | True                                           | | gateway_ip       | 10.10.10.1                                     | | host_routes      |                                                | | id               | 2d7a0a58-0674-439a-ad23-d6471aaae9bc           | | ip_version       | 4                                              | | name             |                                                | | network_id       | 5f833617-6179-4797-b7c0-7d420d84040c           | | tenant_id        | 9796e5145ee546508939cd49ad59d51f               | +------------------+------------------------------------------------+ We now have a network and a subnet, on the network topology view this looks like this: Now let’s dive in and see what happened under the hood. Looking at the control node we will discover that a new namespace was created: # ip netns list qdhcp-5f833617-6179-4797-b7c0-7d420d84040c   The name of the namespace is qdhcp-<network id> (see above), let’s look into the namespace and see what’s in it: # ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00     inet 127.0.0.1/8 scope host lo     inet6 ::1/128 scope host        valid_lft forever preferred_lft forever 12: tap26c9b807-7c: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN     link/ether fa:16:3e:1d:5c:81 brd ff:ff:ff:ff:ff:ff     inet 10.10.10.3/24 brd 10.10.10.255 scope global tap26c9b807-7c     inet6 fe80::f816:3eff:fe1d:5c81/64 scope link        valid_lft forever preferred_lft forever   We see two interfaces in the namespace, one is the loopback and the other one is an interface called “tap26c9b807-7c”. This interface has the IP address of 10.10.10.3 and it will also serve dhcp requests in a way we will see later. 
Let’s trace the connectivity of the “tap26c9b807-7c” interface from the namespace.  First stop is OVS, we see that the interface connects to bridge  “br-int” on OVS: # ovs-vsctl show 8a069c7c-ea05-4375-93e2-b9fc9e4b3ca1     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2"                 type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"     Bridge br-ex         Port br-ex             Interface br-ex                 type: internal     Bridge br-int         Port "int-br-eth2"             Interface "int-br-eth2"         Port "tap26c9b807-7c"             tag: 1             Interface "tap26c9b807-7c"                 type: internal         Port br-int             Interface br-int                 type: internal     ovs_version: "1.11.0"   In the picture above we have a veth pair which has two ends called “int-br-eth2” and "phy-br-eth2", this veth pair is used to connect two bridge in OVS "br-eth2" and "br-int". In the previous post we explained how to check the veth connectivity using the ethtool command. It shows that the two are indeed a pair: # ethtool -S int-br-eth2 NIC statistics:      peer_ifindex: 10 . .   #ip link . . 10: phy-br-eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 . . Note that “phy-br-eth2” is connected to a bridge called "br-eth2" and one of this bridge's interfaces is the physical link eth2. This means that the network which we have just created has created a namespace which is connected to the physical interface eth2. eth2 is the “VM network” the physical interface where all the virtual machines connect to where all the VMs are connected. About network isolation: OpenStack supports creation of multiple isolated networks and can use several mechanisms to isolate the networks from one another. The isolation mechanism can be VLANs, VxLANs or GRE tunnels, this is configured as part of the initial setup in our deployment we use VLANs. When using VLAN tagging as an isolation mechanism a VLAN tag is allocated by Neutron from a pre-defined VLAN tags pool and assigned to the newly created network. By provisioning VLAN tags to the networks Neutron allows creation of multiple isolated networks on the same physical link.  The big difference between this and other platforms is that the user does not have to deal with allocating and managing VLANs to networks. The VLAN allocation and provisioning is handled by Neutron which keeps track of the VLAN tags, and responsible for allocating and reclaiming VLAN tags. In the example above net1 has the VLAN tag 1000, this means that whenever a VM is created and connected to this network the packets from that VM will have to be tagged with VLAN tag 1000 to go on this particular network. This is true for namespace as well, if we would like to connect a namespace to a particular network we have to make sure that the packets to and from the namespace are correctly tagged when they reach the VM network. In the example above we see that the namespace interface “tap26c9b807-7c” has vlan tag 1 assigned to it, if we examine OVS we see that it has flows which modify VLAN tag 1 to VLAN tag 1000 when a packet goes to the VM network on eth2 and vice versa. 
We can see this using the dump-flows command on OVS for packets going to the VM network we see the modification done on br-eth2: #  ovs-ofctl dump-flows br-eth2 NXST_FLOW reply (xid=0x4):  cookie=0x0, duration=18669.401s, table=0, n_packets=857, n_bytes=163350, idle_age=25, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:1000,NORMAL  cookie=0x0, duration=165108.226s, table=0, n_packets=14, n_bytes=1000, idle_age=5343, hard_age=65534, priority=2,in_port=2 actions=drop  cookie=0x0, duration=165109.813s, table=0, n_packets=1671, n_bytes=213304, idle_age=25, hard_age=65534, priority=1 actions=NORMAL   For packets coming from the interface to the namespace we see the following modification: #  ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4):  cookie=0x0, duration=18690.876s, table=0, n_packets=1610, n_bytes=210752, idle_age=1, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL  cookie=0x0, duration=165130.01s, table=0, n_packets=75, n_bytes=3686, idle_age=4212, hard_age=65534, priority=2,in_port=1 actions=drop  cookie=0x0, duration=165131.96s, table=0, n_packets=863, n_bytes=160727, idle_age=1, hard_age=65534, priority=1 actions=NORMAL   To summarize we can see that when a user creates a network Neutron creates a namespace and this namespace is connected through OVS to the “VM network”. OVS also takes care of tagging the packets from the namespace to the VM network with the correct VLAN tag and knows to modify the VLAN for packets coming from VM network to the namespace. Now let’s see what happens when a VM is launched and how it is connected to the “VM network”. Use case #2: Launch a VM Launching a VM can be done from Horizon or from the command line this is how we do it from Horizon: Attach the network: And Launch Once the virtual machine is up and running we can see the associated IP using the nova list command : # nova list +--------------------------------------+--------------+--------+------------+-------------+-----------------+ | ID                                   | Name         | Status | Task State | Power State | Networks        | +--------------------------------------+--------------+--------+------------+-------------+-----------------+ | 3707ac87-4f5d-4349-b7ed-3a673f55e5e1 | Oracle Linux | ACTIVE | None       | Running     | net1=10.10.10.2 | +--------------------------------------+--------------+--------+------------+-------------+-----------------+ The nova list command shows us that the VM is running and that the IP 10.10.10.2 is assigned to this VM. Let’s trace the connectivity from the VM to VM network on eth2 starting with the VM definition file. The configuration files of the VM including the virtual disk(s), in case of ephemeral storage, are stored on the compute node at/var/lib/nova/instances/<instance-id>/. 
Looking into the VM definition file ,libvirt.xml,  we see that the VM is connected to an interface called “tap53903a95-82” which is connected to a Linux bridge called “qbr53903a95-82”: <interface type="bridge">       <mac address="fa:16:3e:fe:c7:87"/>       <source bridge="qbr53903a95-82"/>       <target dev="tap53903a95-82"/>     </interface>   Looking at the bridge using the brctl show command we see this: # brctl show bridge name     bridge id               STP enabled     interfaces qbr53903a95-82          8000.7e7f3282b836       no              qvb53903a95-82                                                         tap53903a95-82    The bridge has two interfaces, one connected to the VM (“tap53903a95-82 “) and another one ( “qvb53903a95-82”) connected to “br-int” bridge on OVS: # ovs-vsctl show 83c42f80-77e9-46c8-8560-7697d76de51c     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2"                 type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"     Bridge br-int         Port br-int             Interface br-int                 type: internal         Port "int-br-eth2"             Interface "int-br-eth2"         Port "qvo53903a95-82"             tag: 3             Interface "qvo53903a95-82"     ovs_version: "1.11.0"   As we showed earlier “br-int” is connected to “br-eth2” on OVS using the veth pair int-br-eth2,phy-br-eth2 and br-eth2 is connected to the physical interface eth2. The whole flow end to end looks like this: VM è tap53903a95-82 (virtual interface)è qbr53903a95-82 (Linux bridge) è qvb53903a95-82 (interface connected from Linux bridge to OVS bridge br-int) è int-br-eth2 (veth one end) è phy-br-eth2 (veth the other end) è eth2 physical interface. The purpose of the Linux Bridge connecting to the VM is to allow security group enforcement with iptables. Security groups are enforced at the edge point which are the interface of the VM, since iptables nnot be applied to OVS bridges we use Linux bridge to apply them. In the future we hope to see this Linux Bridge going away rules.  VLAN tags: As we discussed in the first use case net1 is using VLAN tag 1000, looking at OVS above we see that qvo41f1ebcf-7c is tagged with VLAN tag 3. The modification from VLAN tag 3 to 1000 as we go to the physical network is done by OVS  as part of the packet flow of br-eth2 in the same way we showed before. To summarize, when a VM is launched it is connected to the VM network through a chain of elements as described here. During the packet from VM to the network and back the VLAN tag is modified. Use case #3: Serving a DHCP request coming from the virtual machine In the previous use cases we have shown that both the namespace called dhcp-<some id> and the VM end up connecting to the physical interface eth2  on their respective nodes, both will tag their packets with VLAN tag 1000.We saw that the namespace has an interface with IP of 10.10.10.3. Since the VM and the namespace are connected to each other and have interfaces on the same subnet they can ping each other, in this picture we see a ping from the VM which was assigned 10.10.10.2 to the namespace: The fact that they are connected and can ping each other can become very handy when something doesn’t work right and we need to isolate the problem. In such case knowing that we should be able to ping from the VM to the namespace and back can be used to trace the disconnect using tcpdump or other monitoring tools. 
To serve DHCP requests coming from VMs on the network Neutron uses a Linux tool called “dnsmasq”,this is a lightweight DNS and DHCP service you can read more about it here. If we look at the dnsmasq on the control node with the ps command we see this: dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap26c9b807-7c --except-interface=lo --pid-file=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host --dhcp-optsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/opts --leasefile-ro --dhcp-range=tag0,10.10.10.0,static,120s --dhcp-lease-max=256 --conf-file= --domain=openstacklocal The service connects to the tap interface in the namespace (“--interface=tap26c9b807-7c”), If we look at the hosts file we see this: # cat  /var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host fa:16:3e:fe:c7:87,host-10-10-10-2.openstacklocal,10.10.10.2   If you look at the console output above you can see the MAC address fa:16:3e:fe:c7:87 which is the VM MAC. This MAC address is mapped to IP 10.10.10.2 and so when a DHCP request comes with this MAC dnsmasq will return the 10.10.10.2.If we look into the namespace at the time we initiate a DHCP request from the VM (this can be done by simply restarting the network service in the VM) we see the following: # ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c tcpdump -n 19:27:12.191280 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:fe:c7:87, length 310 19:27:12.191666 IP 10.10.10.3.bootps > 10.10.10.2.bootpc: BOOTP/DHCP, Reply, length 325   To summarize, the DHCP service is handled by dnsmasq which is configured by Neutron to listen to the interface in the DHCP namespace. Neutron also configures dnsmasq with the combination of MAC and IP so when a DHCP request comes along it will receive the assigned IP. Summary In this post we relied on the components described in the previous post and saw how network connectivity is achieved using three simple use cases. These use cases gave a good view of the entire network stack and helped understand how an end to end connection is being made between a VM on a compute node and the DHCP namespace on the control node. One conclusion we can draw from what we saw here is that if we launch a VM and it is able to perform a DHCP request and receive a correct IP then there is reason to believe that the network is working as expected. We saw that a packet has to travel through a long list of components before reaching its destination and if it has done so successfully this means that many components are functioning properly. In the next post we will look at some more sophisticated services Neutron supports and see how they work. We will see that while there are some more components involved for the most part the concepts are the same. @RonenKofman
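    Pulling the individual checks from the three use cases together, an end-to-end sanity pass on a deployment like this one boils down to a handful of commands (the IDs shown are the ones used in this post's example):

        ip netns list                                                        # is the qdhcp-<network id> namespace there?
        ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c ip addr     # does it hold the 10.10.10.3 DHCP interface?
        ovs-vsctl show                                                       # are the tap/qvo ports on br-int with the expected tags?
        ovs-ofctl dump-flows br-eth2 | grep mod_vlan_vid                     # is the internal tag rewritten to the real VLAN (1000)?
        brctl show                                                           # is the VM's tap device on its qbr Linux bridge?
        ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c tcpdump -n  # watch the DHCP request/reply pair arrive

    Each command maps to one hop of the path described above, so the first one that shows something unexpected tells you which layer to dig into.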

    Read the article

  • Reset DRAC/MC password on Dell BladeCentre 1855

    - by Farseeker
    I have a Dell Blade Centre 1855, and nobody knows what the root password for the DRAC/MC card in the blade chassis is (I tried root/calvin). I do not have IP access to the DRAC/MC, nor do I have physical access to the back of the blade centre to reach the DRAC/MC module. I do have serial access (and can see the login prompt in HyperTerminal). I do have physical access to the FRONT of the chassis (the back of the cabinet is locked and, lo and behold, the key cannot be found). Does anyone know how to reset the password? Every piece of literature I find on the internet tells me I need to log in, or to run racadm on the host machine (which I can't, because it's inside a blade chassis). If someone does know how to do it with physical access to the back of the blade centre, please post it anyway, as I'm sure I'll get access to the back of the cabinet one day.

    Read the article

  • How can I unfreeze an SSD connected to a remote server?

    - by chmac
    I don't have physical access to the machine, so I can't unplug the drive.
        # hdparm -I /dev/sda | grep frozen
               frozen
    The advice I've read elsewhere is to hot-plug the drive: pull the power/SATA cables while the machine is running. That is not possible in this situation, as I don't have physical access. I've tried power-cycling the machine through the host's control panel a few times, but that hasn't worked. Is there any way I can unfreeze the drive without physical access?
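    Short of physical access, it costs nothing to look at the full security section the drive reports, and if the platform supports suspend-to-RAM, a suspend/resume cycle re-initializes the SATA link and on many boards clears the frozen state. This is a commonly suggested workaround rather than a guarantee, and it carries obvious risk on a box you cannot reach if it fails to wake:

        hdparm -I /dev/sda | grep -A10 "Security:"   # see the whole security state, not just the frozen line
        rtcwake -m mem -s 60                         # suspend to RAM and let the RTC wake the machine after ~60 s
        hdparm -I /dev/sda | grep frozen             # re-check after resume

    If the server or its firmware does not implement S3 at all, rtcwake should simply fail with an error, which at least answers the question without risking the box.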

    Read the article

  • MySQL 5.5 on Windows server is horribly slow

    - by Brad
    I have had no luck getting MySQL 5.5 to be as fast as 5.1 or MariaDB on the exact same hardware/database/environment under Windows server 2003R2 or 2008R2. My benchmarks from our application: MySQL 5.5 + CentOS 5.2 (XenServer Virtual) = 28 seconds (box is "busy" not buried) MariaDB (5.1) + Windows 2003 (Physical box) = 130 seconds (box is 2% busy) MySQL 5.1 + Windows 2003 (Physical box) = 170 seconds (box is 2% busy) MySQL 5.5 + Windows 2003 (Physical box) = 305 seconds (As high as 600 seconds...) (box is 2% busy) The only difference between these runs is the removal of skip-locking and the running of mysql_upgrade.exe to update some tables for stored procs on 5.5. Yes, I know it's a release candidate, I'm feeding that back to MySQL as well. No slow queries are logged, it doesn't think it's being slow, it just is. I'm going to start tearing into the queries themselves to see if the INSERT/SELECT plans have gone buggo on 5.5. Any help would be appreciated! Thanks
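    Before tearing into individual plans, it may be worth diffing the effective server variables between a fast and a slow instance, since several InnoDB and sync-related defaults changed around 5.5. This is a generic comparison sketch (host names and credentials are placeholders), not a known fix for this particular slowdown:

        mysql -h fast-host -u root -p -e "SHOW GLOBAL VARIABLES" | sort > vars_fast.txt
        mysql -h slow-host -u root -p -e "SHOW GLOBAL VARIABLES" | sort > vars_slow.txt
        diff vars_fast.txt vars_slow.txt | grep -iE "innodb|flush|sync|buffer|thread"

    A handful of differing lines is usually enough to spot a buffer pool, log file, or flush setting that silently reverted to a default during the upgrade.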

    Read the article

  • Windows 7 using exactly HALF the installed memory

    - by Nathan Ridley
    I've taken this directly from System Information:
        Installed Physical Memory (RAM)   4.00 GB
        Total Physical Memory             2.00 GB
        Available Physical Memory         434 MB
        Total Virtual Memory              5.10 GB
        Available Virtual Memory          1.19 GB
        Page File Space                   3.11 GB
    The BIOS also reports the full 4 GB as available. Note the 4 GB installed, yet 2 GB total. I understand that on a 32-bit operating system you'll never get the full 4 GB of RAM; however, you typically get in the range of 2.5-3.2 GB. I have only 2 GB available! My swap file goes nuts when I do anything. Note that I have dual SLI NVIDIA video cards, each with 512 MB of onboard RAM, though I have the SLI feature turned off. Anybody know why Windows might claim that I have exactly 2 GB of RAM in total? Note: previously asked on SuperUser, but closed as "belongs on superuser" before this site opened: http://serverfault.com/questions/39603/windows-7-using-exactly-half-the-installed-memory (I still need an answer!)

    Read the article

  • SQL Performance Problem IA64

    - by Vendoran
    We’ve got a performance problem in production. The QA and DEV environments are two instances on the same physical server: Windows 2003 Enterprise SP2, 32 GB RAM, 1 quad-core 3.5 GHz Intel Xeon X5270 (4 cores, x64), SQL 2005 SP3 (9.0.4262), SAN drives. Prod: Windows 2003 Datacenter SP2, 64 GB RAM, 4 dual-core 1.6 GHz Intel Family 80000002 Model 6 Itanium (8 cores, IA64), SQL 2005 SP3 (9.0.4262), SAN drives, Veritas Cluster. I am seeing excessive signal wait percentages (250%), and Page Reads/sec (50) and Page Writes/sec (25) are both occasionally high. I did test this query on both QA and Prod, and it has the same execution plan and even the same stats:
        SELECT top 40000000 * INTO dbo.tmp_tbl FROM dbo.tbl
        GO
        Scan count 1, logical reads 429564, physical reads 0, read-ahead reads 0,
        lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    As you can see it's just logical reads; however: QA: 0:48, Prod: 2:18. So it seems like a processor-related issue, but I'm not sure where to go next. Any ideas? Thanks, Aaron

    Read the article

  • Two Network Adapters on Hyper-V Host - Best way to configure?

    - by GoNorthWest
    Hi, I have two physical network adapters installed in my Hyper-V host. I want one to be dedicated to the host, and the other to provide external network services to the VMs. Would the appropriate configuration be as follows: Leave the first physical network adapter alone, assigning it the host IP but not using it to create any virtual networks. For the second physical adapter, create an External Network along with a Microsoft virtual switch, and use that to provide network services to the VMs. Each VM's virtual NIC would be associated with that External Network. A static IP would be assigned to this adapter, and each VM would be assigned a static IP as well. The above seems reasonable to me, but I'm not sure if it's correct. Does anyone have any thoughts? Thanks! Mark

    Read the article

  • Can you use a USB dongle inside a VMWare ESX Virual Machine?

    - by Keith Sirmons
    Howdy, I need to know whether a USB dongle that is required as a license key for a piece of software will be accessible to a virtual machine when it is plugged into the physical host. This will be a small vSphere 4 installation targeting the quick backup and system restore capabilities of VMware, not specifically HA, so I am not too worried about the virtual machine automatically failing over to another physical machine and the dongle not being accessible. Does ESX have the capability to map a physical USB port or device to a specific virtual machine? I believe this is the dongle: Sentinel SuperPro USB dongle by Rainbow Technologies. Thank you, Keith

    Read the article

  • Running bridged-networking vmware player on a Linux machine with 2 interfaces

    - by Roman D
    I have got a laptop running Arch Linux with 2 interfaces: wireless (wlan0) and ethernet (eth0). I use wlan0 to access internet (static IP, networking is configured using netcfg), and I connect a second PC to the eth0. Now, whenever I start vmware player (v. 4.0.4), it chooses wlan0 to connect its bridged virtual NIC to, but I need it to connect to eth0 (I want my guest machine to be able to talk to the second physical PC on eth0). So, I disable the wlan0 interface (netcfg -d wireless) and restart vmware. Now, it connects to eth0, and everything works fine; I can ping the host PC from the virtual one, and I can ping the virtual PC from the second physical PC connected to eth0. Then, if I try to reenable the wlan0 interface (netcfg -u wireless), all of the connectivity between the host and the guest (and between the second physical PC and the guest) gets lost, until I disable wlan0 again. Can someone please give me a hint on what's going on?
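    One thing that may be worth trying is pinning the bridged network (vmnet0) to eth0 explicitly instead of letting Player auto-bridge to whichever interface happens to be up. On Linux installs the network definition lives under /etc/vmware; the exact key below is an assumption to verify against your Player version's documentation rather than something confirmed here:

        # /etc/vmware/networking -- assumed entry pinning bridged vmnet0 to eth0
        #   answer VNET_0_INTERFACE eth0
        sudo /etc/init.d/vmware restart    # restart VMware's services so the bridge is rebuilt

    If that key is not honored by your version, the fallback is the ordering you already discovered: keep wlan0 down while the bridge attaches to eth0, then bring it back up and test whether the bridge stays on eth0.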

    Read the article

  • Classic ASP on large memory server

    - by Steve Evans
    I have a client with a large classic ASP app that apparently is fairly memory-intensive. I'm helping them migrate to new hardware running Windows Server 2008 R2. They have 4 physical servers with 32 GB of RAM each. I'm making the assumption that ASP apps run as a 32-bit process. So I see that we have two options: On the application pool, enable web gardens. Or use the physical servers as VM hosts and split each box into, say, 4 web servers. Any thoughts on which path will provide better performance? I'm just not really sure how ASP will handle a machine with lots of memory, and I'm worried it won't really be able to address the memory well. (You can ignore all the obvious stuff like the increased maintenance of 16 web servers vs 4, or the flexibility virtualization gives us over physical servers, etc.)

    Read the article

  • Why does my SQL Server use AWE memory? and why is this not visible in RAMMap?

    - by Marnix Klooster
    We have a Windows Server 2008 R2 (64-bit) 8 GB server where, according to Sysinternals RAMMap, 2 GB of memory is allocated using AWE. As far as I understand, this means that these pages stay in physical memory and are never paged out, which pushes other apps out of physical memory. In RAMMap, on the Physical Pages tab, the Process column is empty for all of the AWE pages. We run SQL Server on that box, but (through SQL Server Management Studio, under Server Properties - Memory, under Server memory options) it says it is configured not to use AWE. However, when SQL Server is stopped, the AWE pages suddenly disappear, so it really is the culprit. So I have three questions: Why does RAMMap not know/show that a SQL Server process is responsible for that AWE memory? Why does SQL Server Management Studio say that AWE memory is not used? How do we configure SQL Server to really not use AWE memory?

    Read the article

  • Routing a PPTP client and VMware Server instance running on the same box

    - by servermanfail
    I have a Windows 2003 SBS box. It has 2 physical NIC's: WAN and LAN. The WAN is a public IP. The LAN is a simple 192.168.2.x subnet with Microsoft DHCP Server. Microsoft Routing and Remote Access Service is used to provide NAT to LAN. The box also runs VMware Server with a virtual machine running Windows XP. I want people to be able to VPN into the box, and connect to these virtual machines on the MSRDP port. I can VPN (PPTP) into the 2003 SBS box fine, as well as ping other machines on the LAN. I can ping the VM from a physical workstation on the LAN and vice-versa. I can ping the VPN client from the a physical workstation on the LAN and vice-versa. I can ping the VPN client from the Server console and vice-versa. I can ping the VM client from the Server console and vice-versa. But I cannot ping the VPN client from the VM and vice-versa. I was hoping to set up 2 or 3 Windows XP virtual machines on our only server, so that a couple of people can remote in to work without having to leave a physical machine on in the office. You could this attempted set up a "poor mans terminal server". On the 2003 SBS Server:- C:\Documents and Settings\Administrator>route print IPv4 Route Table =========================================================================== Interface List 0x1 ........................... MS TCP Loopback interface 0x2 ...00 50 56 c0 00 08 ...... VMware Virtual Ethernet Adapter for VMnet8 0x3 ...00 50 56 c0 00 01 ...... VMware Virtual Ethernet Adapter for VMnet1 0x10004 ...00 53 45 00 00 00 ...... WAN (PPP/SLIP) Interface 0x10005 ...00 11 43 d4 69 13 ...... Broadcom NetXtreme Gigabit Ethernet 0x10006 ...00 11 43 d4 69 14 ...... Broadcom NetXtreme Gigabit Ethernet #2 =========================================================================== =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 81.123.144.22 81.123.144.21 1 81.123.144.20 255.255.255.252 81.123.144.21 81.123.144.21 1 81.123.144.21 255.255.255.255 127.0.0.1 127.0.0.1 1 81.255.255.255 255.255.255.255 81.123.144.21 81.123.144.21 1 86.135.78.235 255.255.255.255 81.123.144.22 81.123.144.21 1 109.152.62.236 255.255.255.255 81.123.144.22 81.123.144.21 1 127.0.0.0 255.0.0.0 127.0.0.1 127.0.0.1 1 192.168.2.0 255.255.255.0 192.168.2.3 192.168.2.3 1 192.168.2.3 255.255.255.255 127.0.0.1 127.0.0.1 1 192.168.2.26 255.255.255.255 192.168.2.32 192.168.2.32 1 192.168.2.28 255.255.255.255 192.168.2.32 192.168.2.32 1 192.168.2.32 255.255.255.255 127.0.0.1 127.0.0.1 50 192.168.2.50 255.255.255.255 127.0.0.1 127.0.0.1 1 192.168.2.255 255.255.255.255 192.168.2.3 192.168.2.3 1 192.168.10.0 255.255.255.0 192.168.10.1 192.168.10.1 20 192.168.10.1 255.255.255.255 127.0.0.1 127.0.0.1 20 192.168.10.255 255.255.255.255 192.168.10.1 192.168.10.1 20 192.168.96.0 255.255.255.0 192.168.96.1 192.168.96.1 20 192.168.96.1 255.255.255.255 127.0.0.1 127.0.0.1 20 192.168.96.255 255.255.255.255 192.168.96.1 192.168.96.1 20 224.0.0.0 240.0.0.0 81.123.144.21 81.123.144.21 1 224.0.0.0 240.0.0.0 192.168.2.3 192.168.2.3 1 224.0.0.0 240.0.0.0 192.168.10.1 192.168.10.1 20 224.0.0.0 240.0.0.0 192.168.96.1 192.168.96.1 20 255.255.255.255 255.255.255.255 81.123.144.21 81.123.144.21 1 255.255.255.255 255.255.255.255 192.168.2.3 192.168.2.3 1 255.255.255.255 255.255.255.255 192.168.10.1 192.168.10.1 1 255.255.255.255 255.255.255.255 192.168.96.1 192.168.96.1 1 Default Gateway: 81.123.144.22 
=========================================================================== Persistent Routes: None C:\Documents and Settings\Administrator>ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : 2003server Primary Dns Suffix . . . . . . . : mycompany.local Node Type . . . . . . . . . . . . : Unknown IP Routing Enabled. . . . . . . . : Yes WINS Proxy Enabled. . . . . . . . : Yes DNS Suffix Search List. . . . . . : mycompany.local gateway.2wire.net Ethernet adapter VMware Network Adapter VMnet8: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : VMware Virtual Ethernet Adapter for VMnet 8 Physical Address. . . . . . . . . : 00-50-56-C0-00-08 DHCP Enabled. . . . . . . . . . . : No IP Address. . . . . . . . . . . . : 192.168.10.1 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : Ethernet adapter VMware Network Adapter VMnet1: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : VMware Virtual Ethernet Adapter for VMnet 1 Physical Address. . . . . . . . . : 00-50-56-C0-00-01 DHCP Enabled. . . . . . . . . . . : No IP Address. . . . . . . . . . . . : 192.168.96.1 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : PPP adapter RAS Server (Dial In) Interface: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : WAN (PPP/SLIP) Interface Physical Address. . . . . . . . . : 00-53-45-00-00-00 DHCP Enabled. . . . . . . . . . . : No IP Address. . . . . . . . . . . . : 192.168.2.32 Subnet Mask . . . . . . . . . . . : 255.255.255.255 Default Gateway . . . . . . . . . : NetBIOS over Tcpip. . . . . . . . : Disabled Ethernet adapter LAN: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Broadcom NetXtreme Gigabit Ethernet Physical Address. . . . . . . . . : 00-11-43-D4-69-13 DHCP Enabled. . . . . . . . . . . : No IP Address. . . . . . . . . . . . : 192.168.2.50 Subnet Mask . . . . . . . . . . . : 255.255.255.0 IP Address. . . . . . . . . . . . : 192.168.2.3 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : DNS Servers . . . . . . . . . . . : 192.168.2.3 Primary WINS Server . . . . . . . : 192.168.2.3 Ethernet adapter WAN: Connection-specific DNS Suffix . : gateway.2wire.net Description . . . . . . . . . . . : Broadcom NetXtreme Gigabit Ethernet #2 Physical Address. . . . . . . . . : 00-11-43-D4-69-14 DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes IP Address. . . . . . . . . . . . : 81.123.144.21 Subnet Mask . . . . . . . . . . . : 255.255.255.252 Default Gateway . . . . . . . . . : 81.123.144.22 DHCP Server . . . . . . . . . . . : 10.0.0.1 DNS Servers . . . . . . . . . . . : 10.0.0.1 Primary WINS Server . . . . . . . : 192.168.2.3 NetBIOS over Tcpip. . . . . . . . : Disabled Lease Obtained. . . . . . . . . . : 25 February 2011 22:56:59 Lease Expires . . . . . . . . . . : 25 February 2011 23:06:59 C:\Documents and Settings\Administrator>ping 192.168.2.11 Pinging 192.168.2.11 with 32 bytes of data: Reply from 192.168.2.11: bytes=32 time<1ms TTL=128 Reply from 192.168.2.11: bytes=32 time<1ms TTL=128 Reply from 192.168.2.11: bytes=32 time<1ms TTL=128 Reply from 192.168.2.11: bytes=32 time<1ms TTL=128

    Read the article

  • HD latency measurement using bonnie++ on different machines with different RAM size

    - by j0nes
    Hello, I have run bonnie++ v1.96 on two different servers without any additional load. One server is a "physical" Dell server with 32 GB of RAM; the other is a virtual instance with 14 GB of RAM. I have read in the bonnie++ manual that I should use twice the size of RAM for my bonnie++ runs, so I used 64 GB on the physical machine and 28 GB on the virtual machine. Now I want to compare the results, and I am wondering whether they are comparable at all. The most interesting part is the latency: on the physical machine, the values are about 10 times higher than on the virtual machine! Can I take these results seriously (i.e. the virtual machine's disk really is much, much faster), or does the different RAM size skew the results? Thanks! Jonas
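    To make the two runs more directly comparable, bonnie++ lets you state the RAM size explicitly instead of letting it autodetect, so both machines are tested at the same file-size-to-RAM ratio (the test directory and user below are placeholders):

        bonnie++ -d /mnt/test -s 64g -r 32768 -n 0 -u nobody   # physical box: 32 GB RAM, 64 GB of test data
        bonnie++ -d /mnt/test -s 28g -r 14336 -n 0 -u nobody   # virtual box: 14 GB RAM, 28 GB of test data

    Even then, latency numbers from a physical RAID controller and from a virtualized block device measure rather different stacks, so treat cross-machine comparisons as indicative rather than exact.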

    Read the article

  • Is it possible to convert striped logical volume to linear logical volume?

    - by JooMing
    I have a logical volume that is striped across three physical volumes. I had to move this logical volume to another physical volume, which worked nicely with the pvmove command. However, I discovered later that the logical volume is still striped, and now all three stripes are on the same physical volume. Is there any way to convert striped logical volumes to linear logical volumes? I'm using LVM2 on Linux. The obvious possibility is to rename the striped logical volume, create a new linear logical volume, and then copy the data over, but that requires taking the filesystem offline for some time, and unfortunately I can't do that before next week. Is there a better alternative?
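    If a maintenance window does open up next week, the rename-and-copy route sketched in the question needs only standard LVM commands; a rough outline, with the volume group, LV names, mount points and filesystem type as examples:

        umount /srv/data
        lvrename vg0 lv_data lv_data_striped            # keep the old striped LV around until verified
        lvcreate -n lv_data -L 100G vg0 /dev/sdb1       # new linear LV, optionally pinned to one PV
        mkfs.ext4 /dev/vg0/lv_data
        mount /dev/vg0/lv_data /srv/data
        mount -o ro /dev/vg0/lv_data_striped /mnt/old
        rsync -aHAX /mnt/old/ /srv/data/
        # after checking the copy: umount /mnt/old && lvremove vg0/lv_data_striped

    The offline window shrinks to a final short rsync pass if you do a first copy while the old volume is still mounted and in use, then unmount and sync only the changes.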

    Read the article

  • Why does 1GBit card have output limited to 80 MiB ?

    - by Gallus
    I'm trying to get the maximum bandwidth out of my 1 Gbit network card, but throughput is always limited to 80 MiB/s (real megabytes, not megabits). What can be the reason? Card description (lshw output):
        *-network
             description: Ethernet interface
             product: DGE-530T Gigabit Ethernet Adapter (rev 11)
             vendor: D-Link System Inc
             physical id: 0
             bus info: pci@0000:03:00.0
             logical name: eth1
             version: 11
             serial: 00:22:b0:68:70:41
             size: 1GB/s
             capacity: 1GB/s
             width: 32 bits
             clock: 66MHz
             capabilities: pm vpd bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
    The card is placed in the following PCI slot:
        *-pci:2
             description: PCI bridge
             product: 82801 PCI Bridge
             vendor: Intel Corporation
             physical id: 1e
             bus info: pci@0000:00:1e.0
             version: 92
             width: 32 bits
             clock: 33MHz
             capabilities: pci subtractive_decode bus_master cap_list
    That isn't PCI Express, right? It's a legacy PCI slot? So maybe this is the reason? The OS is Linux.
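    The suspicion about the slot is easy to sanity-check with arithmetic: a conventional 32-bit/33 MHz PCI bus tops out at roughly 32 bits x 33 MHz, about 133 MB/s of shared, half-duplex bandwidth, and after protocol overhead a single device typically gets 80-100 MB/s, which lines up with the observed ceiling. Two quick read-only checks (interface name and bus address taken from the lshw output above):

        ethtool eth1 | grep -E "Speed|Duplex"                          # confirm the link negotiated 1000Mb/s full duplex
        sudo lspci -vv -s 03:00.0 | grep -iE "66MHz|status|latency"    # conventional-PCI status bits for the D-Link card

    Note that the card reports a 66MHz clock capability, but it can only run as fast as the slot and bridge behind it (the 82801 bridge above reports 33MHz), so a PCIe or PCI-X NIC, or an onboard controller, is usually the practical fix if the bus really is the bottleneck.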

    Read the article
