Search Results

Search found 1733 results on 70 pages for 'isc dhcp'.

Page 64/70 | < Previous Page | 60 61 62 63 64 65 66 67 68 69 70  | Next Page >

  • Windows 7 using LLT for IPv6

    - by Seoman
    The question asked below is based on the specific implementations of the Os not the RFC. Looking on a way to be able to assign a fixed ip address to a host, before it boots I found that Centos 6 works fine with no modifications and Windows 7 does not work at all. As defined in enter link description here exists 3 valid ways of generate a DUID: 1 Link-layer address plus time 2 Vendor-assigned unique ID based on Enterprise Number 3 Link-layer address Looking at the centos, that works fine, I can see the following autogenerated DUID: option dhcp6.client-id 0:1:0:1:19:60:25:f1:52:54:0:6b:b9:9e; and the MAC address for this host is: ifconfig eth1 | grep HWaddr eth1 Link encap:Ethernet HWaddr 52:54:00:6B:B9:9E As you can see, the DUID containts the MAC address. I can assign a fixed ip address to this host by including an entry on my dhcp server similar to: host vm { hardware ethernet 52:54:00:6B:B9:9E; fixed-address6 2001:db8:0:1::200; if packet(0,1) = 1 { log(debug,"VM Request match!"); } } And the Centos 6 gets his ip. On the windows side, I faced a common problem explained on this other link enter link description here As summary, Win7 uses the option 2 of the DUID generation or a variation of this one. On the link explains how to move it to a llt (link layer + time) but is not working fine. If I modify the DUID to one that looks like the one generated on Centos (but with the right MAC) it works as expected. Question 1 How Can I change the DUID generation for Windows 7 to be based on MAC as Centos 6 does? Thanks
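
    For reference, ISC dhcpd can also match a DHCPv6 client by its full DUID rather than by MAC, which sidesteps the Windows DUID-generation question entirely. A minimal sketch of such a host entry (the DUID and address below are made-up placeholders; the real DUID would be read from a packet capture or the server's lease file):

      host win7-client {
          host-identifier option dhcp6.client-id 00:01:00:01:19:60:25:f1:52:54:00:6b:b9:9f;
          fixed-address6 2001:db8:0:1::201;
      }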

    Read the article

  • Router(s) issue: DNS queries sporadically fail with multiple computers hooked in

    - by bob-the-destroyer
    Basically, after anywhere from 5-60 minutes, DNS queries fail for a few minutes, then slowly begin to resolve correctly. Then the cycle repeats. This occurs only when more than one computer is on the network. All computers on the network experience the same sporadic DNS outage at the same time. Wireless or wired, Linux or Windows, fresh OS install or old, browser or ping, same symptoms. Duplicated on 3 routers (not chained together, mind you), 3 ISPs, and 3 separate locations over the past several months. The only common theme is a single 5-year-old Windows XP laptop which has been in use on the network throughout all this. There may also be anywhere between 1 and 10 devices hooked up, wired or wirelessly, at a time. The only reprieve I have from this torture is by using any VPN to an outside source - always smooth sailing. I typically set up any router to a) use WPA2/etc security; b) MAC whitelist; c) UPNP OFF (if available); d) always update firmware when available; e) obtain DNS from the ISP automatically; f) act as DHCP server for the internal network. Adjusting channels has no effect. Any ideas?
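
    One way to narrow this down is to log lookups against the router and an outside resolver side by side while the outage happens, which shows whether only the router's DNS forwarding is failing. A rough sketch, run from any Linux box on the LAN (192.168.1.1 stands in for the router address and example.com for any test name):

      #!/bin/bash
      # Log DNS lookups against the router and an external resolver every 30 seconds.
      while true; do
          for server in 192.168.1.1 8.8.8.8; do
              if dig +time=2 +tries=1 @"$server" example.com > /dev/null 2>&1; then
                  echo "$(date '+%F %T') $server OK"
              else
                  echo "$(date '+%F %T') $server FAIL"
              fi
          done >> dns-watch.log
          sleep 30
      done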

    Read the article

  • 2 Computers, same network, different outgoing speeds when uploading to internet?

    - by user117339
    I have 2 work machines in my office, a PowerMac G5 and a MacBook Air, both behind an IPCop firewall. The PowerMac is connected through a gigabit switch; the MacBook Air is connected through a Netgear 802.11g access point that is then plugged into the gigabit switch. There is also a FreeNAS box, and both machines are able to read and write files to it at close to their pipe speeds. The main problem is when I am trying to upload files to the internet at large. The G5 is only hitting 0.1 - 0.25 Mbps. The MacBook is able to hit 2-3 Mbps. The setup (G5 / IPCop / network) has been the same for 5 years. The issues with the internet speed started about 3 months ago; I hadn't tested on the MacBook at that point. I complained to the ISP, they said their modem needed a firmware update, did that, nothing changed. Reset IPCop, turned off Squid, etc. No changes. The ISP switched the office over to a better plan with a theoretical 6 Mbps up, still no change. At this point I tried testing the MacBook, and lo and behold there's the speed. But why? I have tried changing out everything: cables, switches, using another ethernet port on the G5, wiping the system, using DHCP, using manual IPs, changing DNS servers, etc. Nothing works. I figured that if there was something horribly wrong with the network, then internally I would find a similar issue, but that is perfect. iperf, ping, etc. show no dropped packets and near saturation of the internal network. I'm at a loss as to what the heck is going on. Any ideas would be appreciated! The original post included speedtest.net screenshots for the G5 and the MacBook Air.
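
    Since iperf is already in the toolbox here, one more data point would be to run it between the G5 and the IPCop box (or another wired machine) in both directions; that separates a duplex or negotiation problem on the G5's wired path from a problem on the WAN side. A minimal sketch with iperf 2 (the host name is a placeholder):

      # On the IPCop box or another wired machine on the LAN:
      iperf -s

      # On the G5: measure toward the server, then repeat with the reverse direction
      iperf -c ipcop.local -t 30
      iperf -c ipcop.local -t 30 -r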

    Read the article

  • How do I set up routing for two companies with different Internet connections on the same LAN?

    - by Clint Miller
    Here's the setup: Two companies (A & B) share office space and a LAN. A 2nd ISP is brought in and company A wants its own Internet connection (ISP A) and company B wants its own Internet connection (ISP B). VLANs are deployed internally to separate the two companies' networks (company A: VLAN 1, company B: VLAN 2, shared VOIP: VLAN 3). With separate VLANs it's simple enough to use separate DHCP servers (or separate scopes on the same server) to assign the default gateway to each company's gateway for their Internet connection. Static routes can be created on each gateway to point traffic destined for the other company's VLAN or the voice VLAN so that all nodes are reachable as expected. However, I think this is a form of asymmetrical routing, right? (The path from node A1 to node B1 is not the same as the path back from node B1 to node A1). Can I set up policy-based routing to correct this? In that case, can I assign the same default gateway to every device on all VLANs and create a routing policy on a L3 switch to look at the source address and forward traffic to the appropriate next hop? In that case, I want the routing logic to go like this: If the destination address is known, forward the traffic (traffic destined for a different VLAN). If the destination address is unknown, forward the traffic to ISP A's gateway if the source address is on VLAN A; or forward the traffic to ISP B's gateway if the source address is VLAN B. Am I thinking about this problem in the correct way? Is there another way to solve this problem that I am overlooking?
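
    The source-based logic described above is exactly what policy-based routing implements; whether it can be expressed depends on the L3 switch in use. Purely as an illustration, if the L3 hop were a Linux router, the same policy could be sketched with iproute2 (subnets, gateways and interface names below are assumptions: VLAN 1 on 10.0.1.0/24 with ISP A's gateway at 10.0.1.254, VLAN 2 on 10.0.2.0/24 with ISP B's gateway at 10.0.2.254, VLAN 3 on 10.0.3.0/24):

      # Table 101 = company A: routes for all internal VLANs plus ISP A as default.
      # Table 102 mirrors this for company B with ISP B's gateway as the default.
      ip route add 10.0.1.0/24 dev vlan1 table 101
      ip route add 10.0.2.0/24 dev vlan2 table 101
      ip route add 10.0.3.0/24 dev vlan3 table 101
      ip route add default via 10.0.1.254 table 101

      # The policy step: pick the routing table by source address.
      ip rule add from 10.0.1.0/24 lookup 101
      ip rule add from 10.0.2.0/24 lookup 102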

    Read the article

  • Cannot connect to my VPN Server from another network

    - by SantaC
    OK, here is the deal. I have a Windows 2008 R2 server with RRAS installed and configured for VPN. I also have DHCP running. On my DC I have AD running, and they're joined to my domain. I am only using one NIC. As a client I have Windows 7. I tried connecting to my VPN server from my own network, which worked fine, so the setup is correct. However, when I tried connecting to my VPN server from another network, it does not work. I went to my brother's home and tried connecting to my server, but it did not get through. On my VPN server I have IP: 192.168.2.99. At my brother's house, I did the configuration on his Windows 7 machine and it cannot connect to that IP. I am operating on the 192.168.2.1 network and he is operating on the 192.168.0.1 network. So how do I configure his client in order to get it to work? I tried changing his IP to the 192.168.2.x network, but I am not sure you can do that. I need some help here on what to do.

    Read the article

  • Logging communication between two VMs

    - by sYnfo
    Hi, I'm trying to set up the "malware lab" described in this paper. So far, I've set up a Windows guest system, adding one host-only network adapter and setting the following (sorry if the names aren't exactly correct, I don't have an English language version): - IP address - 10.0.0.3 - Subnet mask - 255.255.255.0 - Default gateway - not set - Preferred DNS - 10.0.0.4 - Alternate DNS - not set And a Linux guest system - Ubuntu 9.04 - with two network adapters - Bridged (eth0) and Host-only (eth1) - setting the eth1 IP address to 10.0.0.4 and leaving eth0 to be set by DHCP. Then, I have configured iptables as described in the paper, i.e.: iptables -F -t nat iptables -F -t mangle iptables -t mangle -P PREROUTING ACCEPT iptables -t mangle -P OUTPUT ACCEPT iptables -t nat -P PREROUTING ACCEPT iptables -t nat -P POSTROUTING ACCEPT iptables -t nat -P OUTPUT ACCEPT iptables -t mangle -A PREROUTING -i eth0 -j ACCEPT iptables -t mangle -A PREROUTING -p udp -i eth1 -d 10.0.0.3 --dport 53 -j ACCEPT iptables -t mangle -A PREROUTING -p tcp -i eth1 --dport 80 -j ACCEPT iptables -t mangle -A PREROUTING -p tcp -i eth1 -d 10.0.0.3 --dport 6000:7000 -j ACCEPT iptables -t mangle -A PREROUTING -i eth1 -j ULOG iptables -t mangle -A PREROUTING -i eth1 -j DROP Now, when I try to ping the Windows system from within the Linux system, it does not reply; I guess that's perfectly normal, because iptables is blocking the ping response. Same when I try to ping the Linux system from within Windows. But when I try to access any web page from within the Windows system, I would expect that action to get logged by iptables. The thing is, I don't see any lines of that kind in the log file (if I am looking in the right place, that is. :) It is at /var/log/messages, isn't it?). So, what do you think might be the problem here? I should note that this is the first time I'm using Linux, so don't expect ANY working knowledge of Linux at all... :) Also, since English is not my mother tongue, feel free to point out any grammatical mistakes... :) Thanks for any advice.
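
    One detail worth checking: the ULOG target only produces log entries if the ulogd daemon is installed and running; it does not write to /var/log/messages by itself. As a quick sanity check, the plain LOG target goes through the kernel log instead, which on Ubuntu usually lands in /var/log/syslog or /var/log/kern.log. A sketch (the prefix is arbitrary):

      # Temporarily log all eth1 traffic via the kernel log, inserted at the top of the chain
      iptables -t mangle -I PREROUTING -i eth1 -j LOG --log-prefix "LAB: "

      # Watch for the entries while generating traffic from the Windows guest
      tail -f /var/log/syslog | grep "LAB: "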

    Read the article

  • Trying to diagnose network problem: ping 127.0.0.1 (or any address) results in error code 1

    - by Mnebuerquo
    NIC seems to be working, as windows detects the hardware and has a driver and reports success. DHCP seems to have gotten an ip address, 192.168.1.101. I released and renewed it and it seemed to work normally. I tried ping 127.0.0.1 as first step of testing network configuration. Pinging 127.0.0.1 with 32 bytes of data: PING: transmit failed, error code 1. I read somewhere that net helpmsg [error code] would give a human readable name for the error code. net helpmsg 1 says "Incorrect function" I've tried disabling the firewall and antivirus in McAfee SecurityCenter and I still get the same error. Could the firewall/antivirus be breaking it even if disabled? Broadcom Advanced Control Suite 2 is installed, and its network test passes all tests, including ping 192.168.1.1 which is the default gateway. If I try ping 192.168.1.1 from the command prompt I get the error code 1 again. So does anyone have any theories that would explain this problem? Other tests I should try? Thanks!

    Read the article

  • Directory service unavailable, new hardware, same settings

    - by Alex
    I'm working on a project with 2 sites connected by a VPN. Site 1 has the main server, and there is a secondary server at site 2 which I am trying to replace. The current setup works perfectly, but I can't for the life of me get the replacement server at site 2 up and running. I'm replacing like for like, just with upgraded hardware. I have installed the OS (all Server 2003 Standard SP2) and used exactly the same settings as the old server. I have Active Directory, DNS Server, DHCP Server and WINS Server set up and configured, all with the same settings as the old server (except IP address and name). I can access Active Directory, but I can't do anything; add, edit, and delete all return "the directory service is unavailable". No one can log in on any of the computers at site 2 and the internet is down. Plugging the old server back in and connecting it to the network rectifies the issue (so both new and old are connected at site 2): everyone can log in and the internet is back (curious, since the modem connects directly to the switch, and even with the new server online I can connect to the router via IP but not the net). I really don't have much experience, but I've been roped into doing this because my company is too cheap to hire a real network admin. Any suggestions on where I can start to troubleshoot this? It's driving me crazy and I only have a day before all the users are back on site.

    Read the article

  • Network outage caused by SMC8013WG Cable Modem/Router ?

    - by mkocubinski
    At work, we have a basic Class C network. The gateway/router is an SMC8013WG (stock Comcast commercial cable modem), plus a simple unmanaged switch (HP ProCurve 1400 24G). The SMC8013WG is our default gateway as well as the DHCP server. Periodically, I'd say almost every other day, the entire network will just stop responding. I won't be able to ping/see the gateway, any computers on our local network, or anything on the internet. The only way to fix this is to unplug the Comcast cable modem, wait, and plug it back in. This unfailingly fixes the problem. But this doesn't make much sense to me: shouldn't the network still be fine locally, since everything is plugged into the switch anyway? Why would resetting the router fix this? Can anyone suggest anything to check in order to narrow this problem down? Just to be clear, here is the basic topology: { Internet } -- (12.345.67.89) Comcast Cable Modem (192.168.1.1) -- Switch -- 192.168.1.2-254 P.S. Our IT guy is in about 3 hours a day every other week or so, so we're kind of on our own most of the time.

    Read the article

  • VirtualBox bridged network not working as expected

    - by iby chenko
    I am having a hard time getting bridged networking to work with VirtualBox. The idea is to have the host as well as one or more guests on the same LAN. Using NAT (the default), I do get access to the internet and any node on the LAN when working from one of the VM guests. However, no LAN node, including the host, can access (or ping) a guest in a VM. I need to be able to use any guest as if it were a physical computer on the network (it needs to be accessible from any machine on the LAN). According to my understanding of the VirtualBox documentation, this should be Bridged mode. I think I set it correctly; well, actually there is not much to it: 1. select Bridged mode in the VM network setup 2. select the physical NIC of the host to connect the bridge to 3. start the VM. When I do this, each VM does get a new IP address that corresponds to the LAN settings: 192.168.1.100 192.168.1.102 192.168.1.103 etc., where the host is 192.168.1.80 / 255.255.255.0 (IP addresses above 100 are served by the DHCP server). This seems to be correct based on what I know about ethernet. From a VM I can ping other nodes like 192.168.1.50 etc. and I still get ethernet access. So far so good... But I STILL cannot ping any of the other VMs (running ones, of course). I cannot ping them from other VMs, from the host, or from other nodes on the LAN. Aside from the fact that the IP addresses handed to guests are now local, this still acts the same as NAT. What is going on? What am I missing? Regards, I
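
    For completeness, the same bridged setup can be inspected or applied from the command line, which makes it easy to confirm each VM really is bridged to the wired NIC rather than to some other host adapter, and that all guests are bridged to the same one. A sketch (the VM name and interface name are placeholders):

      # Show which adapter each NIC of the VM is attached to
      VBoxManage showvminfo "GuestVM" | grep -i nic

      # Attach NIC 1 in bridged mode to the host's wired interface (VM must be powered off)
      VBoxManage modifyvm "GuestVM" --nic1 bridged --bridgeadapter1 eth0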

    Read the article

  • Incorrect Internal DNS Resolution

    - by user167016
    I'm having a DNS issue on Server 2008 R2. The first clue was that after being off the network for a month, I could no longer Remote Desktop into my workstation by name - it wouldn't be found, both via VPN and internally - but connecting by its IP works. Now I notice in the server's Share and Storage Management, under Manage Sessions, it's displaying the incorrect computer name for some users. So I try, for one example: Ping -a 192.168.16.81 Pinging BOBS_COMPUTER.ourdomain.local [192.168.16.81] with 32 bytes of data: - replies all successful Then I try Ping RICHARDS_COMPUTER Pinging RICHARDS_COMPUTER.ourdomain.local [192.168.16.81] with 32 bytes of data: - all replies successful In DHCP, .81 belongs to RICHARDS_COMPUTER. I did try flushdns. Not sure if this is related (apologies if it's not), but when I try to connect I also get prompted: "The identity of the remote computer cannot be verified. Do you want to connect anyway? The remote computer could not be authenticated due to problems with its security certificate. It may be unsafe to proceed.." It then lists the correct name as the name in the certificate from the remote computer, but claims that the certificate is not from a trusted authority. Any thoughts are most appreciated!
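
    A quick way to see whether the stale data lives in the forward record, the reverse (PTR) record, or both is to query the DNS server directly for each, bypassing the client's resolver cache. A sketch, using the names from the example above and a placeholder DNS server address of 192.168.16.10:

      # Forward lookup: what address does the zone hold for the name?
      nslookup RICHARDS_COMPUTER.ourdomain.local 192.168.16.10

      # Reverse lookup: which name is registered for the address?
      nslookup 192.168.16.81 192.168.16.10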

    Read the article

  • Variable host IP address in iptables rule

    - by DrakeES
    I am running CentOS 6.4 with OpenVZ on my laptop. In order to provide Internet access for the VEs I have to apply the following rule on the laptop: iptables -t nat -A POSTROUTING -j SNAT --to-source <LAPTOP_IP> It works fine. However, I work in different places - office, home, partner's office, etc. The IP of my laptop is different in those places, so I have to alter the rule above each time I change location. I have created a workaround which basically determines the IP and applies the rule: #!/bin/bash IP=$(ifconfig | awk -F':' '/inet addr/&&!/127.0.0.1/{split($2,_," ");print _[1]}') iptables -t nat -A POSTROUTING -j SNAT --to-source $IP The workaround above works; I only still have to execute it manually. Perhaps I could make it a hook that executes whenever my laptop obtains an IP address from DHCP - how can I do that? Also, I am wondering if there is an elegant way of getting this done in the first place with iptables. Maybe there is a syntax that allows specifying "the current interface IP address" in the rule?
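
    On the "elegant way" question: the MASQUERADE target exists for exactly this case. It behaves like SNAT but always uses the current address of the outgoing interface, so the rule does not need to change when DHCP hands the laptop a new IP. A minimal sketch (wlan0 is an assumed uplink interface; the -o match could also be dropped to mirror the original rule):

      # Source-NAT VE traffic to whatever address the uplink currently has
      iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE

    If SNAT is still preferred, the workaround script could instead be wired into a dhclient or NetworkManager hook so it runs on every new lease; the hook location varies by distribution.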

    Read the article

  • Protocol (or service publish/discovery) to detect devices in a network

    - by Gobliins
    We connect some embedded devices in a network. What I am looking for now is a way to find the devices' IPs and identify them. We work with Windows PCs and I am about to write a C# tool that should do this. One idea was to send a UDP broadcast where the reply (ACK) contains the device's IP, which would mean the device needs a daemon running to assign an IP to itself. Another is running a service (like a printer does) on the device, and having the PC simply look the service up. I read about things like APIPA, Zeroconf, IPv4 link-local addressing, Bonjour, DNS-SD, and mDNS; they can automatically assign IPs and publish services in a network. My question is, can someone recommend what would be good for my task? - The protocol or service should be light on resources (memory/CPU usage). - Are there standard protocols to use? - Is DNS a good idea, or would it be too resource-consuming just for finding a device's IP? - It should also work when no DHCP servers are around. Edit: To clarify a bit: the IP configuration is automatic. The problem to focus on is how to tell the PC which IP in the network (or on a direct connection - in that case there would only be one) belongs to the device (identity).
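
    To get a feel for how lightweight DNS-SD/mDNS is in practice, the Avahi tools can publish and browse a service from the shell; the C# side on Windows would do the equivalent lookup through the Bonjour SDK or a DNS-SD library. A sketch (the service name, type, port and TXT record are made up):

      # On the embedded device (assuming avahi-daemon is running):
      avahi-publish -s "sensor-01" _mydevice._tcp 8080 "serial=1234"

      # On a Linux test box: list all instances of that service type, resolved to addresses
      avahi-browse -r -t _mydevice._tcp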

    Read the article

  • Adding user groups from a remote domain server to permissions of a remote desktop terminal server fails. why?

    - by doveyg
    I have 3 computers: two are servers running Windows Server 2008 and another runs Windows 7. One of the servers has the following roles installed: Active Directory, DHCP and DNS. The other server has the Terminal Server role installed. I am trying to log on to the Terminal Server via Remote Desktop using the Windows 7 machine with credentials from the Active Directory server. Sounds simple enough, right? Well, no. Whenever I try to add users or groups from the Active Directory domain server to the Terminal Server's permissions for RDP, it seems to ignore, or forget, them. With the various methods I was able to find, it either adds a strange string of numbers after the user group, or the icon to the left gets a question mark on it, and reopening the dialogue box replaces the user group with the name of the domain. I am confident I have the domain set up correctly, as I am able to log on as users from Active Directory on other computers I have put in the domain, and when I attempt to browse the user objects from the domain I am prompted with a username/password field and am able to view the structure of Active Directory objects. Please advise.

    Read the article

  • Mail server DNS name fails to resolve for Mac clients

    - by Concordus Applications
    We have two internal DNS servers. One is located on a Linux server box and the other is the router's DNS management. We set the Linux box as primary DNS via DHCP and the router as secondary. We have a few Mac clients that are accessing our internal mail server (hostnamed "mail" internally). When using IMAP or SMTP against the mail server internally, the Mac boxes will sometimes fail to locate the server. If I use nslookup I can see that "mail" is pointed to the correct IP address and is being resolved via the correct DNS server, but if I ping "mail" it fails. ~ (bash)$ nslookup mail Server: 254.254.254.206 Address: 254.254.254.206#53 Name: mail.example.com Address: 254.254.254.205 Note: I replaced our actual internal IP addresses with 254.254.254.* If I wait a few minutes (3-5 minutes), somehow it resolves itself and mail sends successfully. This happens multiple times a day. The /etc/hosts file on the Mac boxes is the default config. ## # Host Database # # localhost is used to configure the loopback interface # when the system is booting. Do not change this entry. ## 127.0.0.1 localhost 255.255.255.255 broadcasthost ::1 localhost fe80::1%lo0 localhost Is there something about Mac clients I should know to prevent this failed DNS resolution? Client boxes are: OS X 10.7.4, 8GB RAM, i5 MacBooks. Server is: Ubuntu 12.04 Server.
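
    One thing worth checking on the 10.7 clients: nslookup queries the DNS server directly, while Mail and other applications go through the system resolver and its cache, so the two can disagree. Comparing them when the failure happens, and flushing the cache as a test, can show whether the stale answer is local to the Mac. A sketch (the host name follows the example above):

      # What the system resolver (the path IMAP/SMTP actually uses) returns:
      dscacheutil -q host -a name mail.example.com

      # Flush the local resolver cache on OS X 10.7, purely as a hypothesis test:
      sudo killall -HUP mDNSResponder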

    Read the article

  • Methods and practices for managing a network that has no internet connection

    - by FaultyJuggler
    Originally asked on Super User but realized this belongs here. Long story short, I am setting up a network with 32 servers of varying specs that will be used for testing and development. We will be using Red Hat Linux; we also do not have a router as of yet and were looking into making one of the servers act as our router/DHCP server, etc. The small cluster will be on an isolated network with no internet. I can use external hard drives and discs to transfer anything from external sources onto machines on the network, so this isn't a locked-down secure network, it just won't have a direct connection to the outside world. I've worked on such setups before, but always long after they were set up. So I'm reaching out to see how groups have handled initial setup and maintenance of such a situation. What is the best way to get them all configured and up to date? What are the best ways to automate updates, network-wide installs, etc.? Given that I have large multi-terabyte external hard drives that would be used to drop whatever files are needed onto a central server, how do I then distribute those files and install their contents? I've done Perl scripting, and some teammates have played with Puppet, so we aren't completely in the dark; I just wanted to avoid reinventing the wheel since this is a common challenge.
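
    For the update and distribution part, one common pattern on Red Hat systems is to copy RPMs from the external drive onto the central server, turn that directory into a yum repository, and point every node at it over the isolated network. A rough sketch (paths and the host name are examples):

      # On the central server: build repository metadata over the copied packages,
      # then serve /srv/repo over HTTP with Apache or any simple web server.
      createrepo /srv/repo/rhel/

      # On each node, drop a repo definition into /etc/yum.repos.d/internal.repo:
      # [internal]
      # name=Internal offline repository
      # baseurl=http://repo-server/repo/rhel/
      # enabled=1
      # gpgcheck=0

      # After that, yum update can be pushed network-wide over ssh or via Puppet.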

    Read the article

  • VLAN across a router to give wireless access to remote sites?

    - by Don
    I've been looking online for this answer, but getting conflicting information. I was under the impression that you couldn't use a VLAN across a router, but maybe it's possible (according to some documentation I see online)? I was hoping someone could clear it up for me. Here's what I'm working with: We have a remote site with a handful of users. We recently gave them an access point (Cisco 1142N) for internal wireless. It's plugged into a switch and working fine (getting IPs from the same DHCP scope as the wired users). Private wireless is set on VL50. At the home office we have private wireless for our internal network working and on VL50, with a test VLAN set up for VL60, which points to our DSL line for the time being. Both private and public wireless work fine internally (not crossing a router). VL50 is named the same at both sites for consistency. If we wanted to give the remote site access to the public wireless (VL60), would that be possible across the routers? For more information, currently the site is connected to the home office via a T1 connection, with Cisco routers on both ends. I didn't think it was possible due to the nature of VLANs being layer 2, but I am far from an expert on this and would appreciate any instruction as to the actual truth of the matter. The end result I'm going for is: how do we get our remote sites access to a public (outside) connection along with their private connection, without actually having a DSL (or similar) line dropped at their location? Thanks in advance for your thoughts.

    Read the article

  • 3 servers, is this a cluster scenario?

    - by HornedBeast
    Hello, at the moment I have one Ubuntu server, 9.10, running a simple Samba share, a mail server, a DNS server and a DHCP server. Mostly it's just there for file sharing and email. I also have 2 other servers that are exactly the same hardware and spec as the first, which have an rsync job set up to retrieve the shared folders and back them up. However, if the first server goes down, all of our shares disappear along with our mail, and the system must be rebuilt. I also tend to find that if people are downloading a large amount from the file server, no one can access their email - especially in the morning when everyone is signing in at once. Would it be more beneficial for me to have all 3 servers running the same services, doing the same thing, in some sort of cluster with load balancing? In short, how can I get the best out of my 3 hardware servers? I'm not really sure where to begin looking, or how to go about a setup where 3 servers are all identical, but perhaps one acts as the main load balancer? If someone can point me in the right direction, or if this simply sounds like one of those Enterprise Clouds that is now a default setup in Ubuntu Server 9.10+, then I'll go down that route. Cheers in advance. Andy

    Read the article

  • Wireless connection silently dies randomly

    - by Force Flow
    I have two WAP4410N wireless access points powered using Power over Ethernet. They are both connected to the same LAN and broadcasting the same SSID with a WPA2 password. One is using channel 1, while the other is using channel 11. There is coverage overlap where the signal from both access points hovers around -75 dB to -85 dB while standing in the same physical location. DHCP is disabled on the access points and is being provided by another network device. Every day or so, devices can connect and authenticate to the access points but are not granted an IP address (and subsequently are unable to access the LAN or Internet). Devices that had already obtained an IP address before the issue appears simply stop communicating with the LAN and Internet. However, I can still access each access point's web admin interface from the LAN. If I reboot both devices, the problem vanishes and devices are once again able to get an IP address and connect to the LAN and Internet. Are these symptoms of signal interference between the two WAPs, or is this a completely different issue?

    Read the article

  • What is the best server or IP address to use for prolonged testing?

    - by eldorel
    I usually run uptime/latency tests against (and from) two servers that we own at different sites and until recently I've used the google dns servers as a control group. However, I've realized there is a potential problem with monitoring latency over extended periods of time. Almost all of the major service providers are using ANYCAST. For short tests this doesn't matter, but I need to run a set of tests for at least a week to try and catch an intermittent problem, and a change in the anycast priority while trying to test latency will cause the latency values for that server to change accordingly. Since I'm submitting graphs of this data to the ISP, I need to avoid/account for as many variables as possible. Spikes in the data for only one of the tested servers will only cause headaches. So can anyone recommend servers that: are not using anycast are owned by an entity that has a good uptime reputation (so they can't claim that the problem is server-side) will respond to ICMP requests Have an available service that runs on TCP/UDP (http or dns preferably) Wont consider an automated request every 10 minutes to be abuse Are accessible from anywhere in the world Are not local to the isp ( consider this an investigation of a hostile party ) Thanks in advance. Edit: added #6 and #7 above. More info: I am attempting to demonstrate a network problem for an entire node of our local ISP's network. They are actively blaming the issue on the equipment installed at the customer sites (our backup site is one of these), and refuse to escalate the problem. (even though 2 of these businesses have ISP provided modems, and all of us have completely different routers/services running) I am already quite familiar with the need to test an isp controlled IP, but they are actively dropping all packets targeted at gateway ip addresses and are only passing traffic addressed beyond the gateways. So to demonstrate the issue, I am sending packets to other systems in the same node, systems one hop away from the affected node, and systems completely outside the network. Unfortunately, all of the systems I have currently are either administered directly by myself, or by people who are biased enough to assist me. I need to have several systems included in the trace/log/graphs that are 100% not in the control of either myself or the isp so that the graphs have a stable/unbiased control group. These requirements are straight from legal, I'm just trying to make sure that everything that could be argued to invalidate the data is already covered. In Summary: I need to be able to show tcp/udp/icmp as 3 separate data points, and I need to be able to show the connections inside the local node, from local node to another nearby node, from those 2 nodes to the internet, and through the internet to both verifiable servers and a control group that I have no control over whatsoever. Again, Google/opendns/yahoo/msn/facebook/etc all use anycast, which throws the numbers off every time the anycast caches expire, so I need suggestions of an IP or server that is available for this type of testing. I was hoping someone knew of a system run by someone such as ISC or ICANN, or perhaps even a .gov server (fcc or nsa maybe?) setup for this type of testing. Thanks again.
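
    On the requirement to show TCP, UDP and ICMP as three separate data points, the three can be sampled in one pass with ordinary tools so the graphs stay comparable: ping for ICMP, a DNS query for UDP, and an HTTP connect timing for TCP. A sketch of one measurement round (the target name is a placeholder for whichever non-anycast host is chosen, and it assumes that host answers DNS and HTTP as listed in the requirements):

      #!/bin/bash
      # One measurement round against a single target; run from cron every 10 minutes.
      target="test-host.example.net"
      ts=$(date '+%F %T')

      icmp=$(ping -c 4 -q "$target" | awk -F/ '/^rtt|^round-trip/ {print $5}')
      udp=$(dig +time=2 +tries=1 @"$target" example.net | awk '/Query time:/ {print $4}')
      tcp=$(curl -o /dev/null -s -w '%{time_connect}' "http://$target/")

      echo "$ts icmp_avg_ms=$icmp udp_dns_ms=$udp tcp_connect_s=$tcp" >> latency.log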

    Read the article

  • April Oracle Database Events

    - by Mandy Ho
    April 17-18, 2012 - Moscow, Russia: Oracle Develop Conference. The Oracle database developer conference, Oracle Develop, will visit Moscow, Russia and Hyderabad, India this spring. Oracle Develop includes a database development track, which contains .NET sessions and a hands-on lab led by an Oracle .NET product manager. Register today before the event fills up. http://www.oracle.com/javaone/ru-en/index.html April 24 & 26, 2012 - San Diego, CA & San Jose, CA: (ISC)2 Security Leadership Regional Event Series. Oracle at (ISC)2 Security Leadership Series: Herding Clouds -- Managing Cloud Security Concerns and Expectations. Join us for this interactive day-long session where industry leaders, including Oracle solution experts, will talk about how to: dispel the top Cloud security myths; minimize "rogue Cloud" implementations; identify potential compliance pitfalls and how to avoid them; develop contract language for your cloud providers; manage users across your fractured datacenter; and leverage existing technologies to protect data as it moves from the enterprise to the cloud. http://www.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=146972&src=7239493&src=7239493&Act=228 April 22-26, 2012 - Las Vegas, NV: IOUG COLLABORATE 12. From April 22-26, 2012, Oracle takes Las Vegas. Thousands of Oracle professionals will descend upon the Mandalay Bay Convention Center for a week's worth of education sessions, networking opportunities and more, at the only user-driven and user-run Oracle conference - COLLABORATE 12. Your COLLABORATE 12 - IOUG Forum registration comes complete with a bonus: a full-day Deep Dive education program on Sunday! Choose from numerous hot topics, including Real World Performance, High Availability and more, presented by legendary and seasoned Oracle pros, including Tom Kyte and Craig Shallahamer.
    http://www.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=143637&src=7360364&src=7360364&Act=5 Independent Oracle User Group (IOUG) Regional Events: April 16, 2012 - Columbus, Ohio: Ohio Oracle Users Group. Higher Performance PL/SQL and Oracle Database 11g PL/SQL New Features, featuring Steven Feuerstein. http://www.ooug.org/ April 26, 2012 - Irving, TX: Dallas Oracle Users Group Oracle Database Forum, 5-7 PM, Oracle Corporation, 6031 Connection Drive, Irving, TX. http://memberservices.membee.com/538/irmevents.aspx?id=64 April 30, 2012 - Los Angeles, CA: Real World Performance Tour. A full day of real-world database performance with Tom Kyte, author of the ever-popular AskTom blog; Andrew Holdsworth, head of Oracle's Real World Performance Team; and Graham Wood, renowned Oracle Database performance architect. Through discussion, debate and demos, they'll show you how to master performance engineering topics like: best practices for designing hardware architectures and how to spot and fix bad design; how to develop applications that deliver the fastest possible performance without sacrificing accuracy. New for 2012! Updates on Enterprise Manager, Exadata, and what these technologies mean to your current systems. http://www.ioug.org/Events/ADayofRealWorldPerformance/tabid/194/Default.aspx

    Read the article

  • MacGyver Moments

    - by Geoff N. Hiten
    Denny Cherry tagged me to write about my best MacGyver moment. Usually I ignore blogosphere fluff and just use this space to write what I think is important. However, #MVP10 just ended and I have a stronger sense of community. Besides, where else would I mention that my second-best MacGyver moment was making a BIOS jumper out of a soda can? Aluminum is conductive and I didn't have any real jumpers lying around. My best moment is probably my entire home computer network. Every system but one is hand-built, usually cobbled together out of spare parts and 'adapted' from its original purpose. My primary domain controller is a Dell 2300. The Service Tag indicates it was shipped to the original owner in 1999. The box has a PERC/1 RAID controller. I acquired this from a previous employer for $50. It runs Windows Server 2003 Enterprise Edition and does DNS, DHCP, and RADIUS services as a bonus. RADIUS authentication is used for VPN and wireless access. It is nice to sign in once and be done with it. The secondary domain controller is an old desktop: dual P-III 933 with some extra drives. My VPN box is a P-II 250 with 384MB of RAM and a 21 GB hard drive. I did a P-to-V to my Hyper-V box a year or so ago and retired the hardware again. Dynamic DNS lets me connect no matter how often Comcast shuffles my IP. The Hyper-V box is a desktop system with 8GB RAM and an AMD Athlon 5000+ processor. It cost me less than $500 to put together nearly two years ago. I reasoned that if Vista and Windows 2008 were the same code, then Vista 64-bit certification meant the drivers for Vista would load into Windows 2008. Turns out I was right. Later I added three 1TB drives but wasn't too happy with how that turned out. I recovered two of the drives yesterday and am building an iSCSI storage unit. (Much thanks to StarWind. Great product.) I am using an old AMD 1.1GHz box with 1.5 GB RAM (cobbled together from three old PCs) as my storage server. The Hyper-V box is slated for an OS rebuild to 2008 R2 once I get the storage system worked out - maybe in a week or two. A couple of D-Link gigabit switches tie everything together. Add in the Vonage box, the three PCs, the Wireless-N access point, the two notebooks and the Xbox, and you have gone from MacGyver to darn near Rube Goldberg. The only thing I really spend money on is power supplies and fans; I buy top-of-the-line for both. I even pull and crimp my own cables. Oh, and if my kids hose up a PC, I have all of their data on a server elsewhere. Every PC and laptop is pretty much interchangeable for email and basic workstation tasks. That helps a lot too. Of course I will tag SQLVariant.

    Read the article

  • Free and Open Source Software in Oracle Solaris 11.1

    - by user13277799
    Oracle Solaris 11.1 contains number of Free and Open Source packages. The following table contains important FOSS packages with their versions available in this latest Oracle Solaris release. a2ps 4.14 aalib 1.4.0 pmtools 20071116 apache-ant 1.7.1 httpd 2.2.22 mod_dtrace 0.3.1 mod_fcgid 2.3.6 tomcat-connectors 1.2.28 mod_perl 2.0.4 mod_proxy_html 3.1.1 modsecurity-apache 2.5.9 mod_wsgi 3.3 apr 1.3.9 apr-util 1.3.9 areca 7.1 autoconf 2.68 autogen 5.9 automake 1.10 automake 1.11.2 automake 1.9.6 bash 4.1 bcc 0.16.17 beanshell 2.0b4 db 5.1.25 bind 9.6-ESV-R7-P2 binutils 2.21.1 bison 2.3 bzip2 1.0.6 cdrtools 3.00 clisp 2.47 cmake 2.8.6 gnu 0.5.11 conflict 20100627 convmv 1.15 coreutils 8.5 cups 1.4.5 curl 7.21.2 cvs 1.12.13 diffutils 2.8.7 doxygen 1.7.6.1 ejabberd 2.1.8 elinks 0.11.7 emacs 23.4 otp_src R12B-5 fcgi 2.4.0 fetchmail 6.3.22 flex 2.5.35 foomatic-db 20080903 foomatic-db-engine 3.0-20080903 foomatic-filters 4.0.15 foomatic-filters-ppds 20080818 fping 2.4b2_to gawk 3.1.8 gcc 3.4.3 gcc 4.5.2 gd 2.0.35 gdb 6.8 gdbm 1.8.3 gettext 0.16.1 grep 2.10 ghostscript 9.00 git 1.7.9.2 gnu-gs-fonts-other 6.0 gnu-gs-fonts-std 6.0 gmp 4.3.2 gnupg 2.0.17 gnuplot 4.6.0 pth 2.0.7 gocr 0.48 gperf 3.0.3 gpgme 1.1.8 grails 1.0.3 graphviz 2.28.0 tar 1.26 guile 1.8.6 gutenprint 5.2.7 gzip 1.4 hal-cups-utils 0.6.19 hexedit 1.2.12 hplip 3.10.9 httping 1.4.4 hwdata 0.5.11 iftop 0.17 ilmbase 1.0.1 ImageMagick 6.3.4 iperf 2.0.4 ipmitool 1.8.11 ircii 20060725 dhcp 4.1-ESV-R7 junit 4.10 INIT 2011-02-08 lcms 1.19 less 436 lftp 4.3.1 libassuan 2.0.1 confuse 2.6 libedit 20110802-3.0 libee 0.3.2 libestr 0.1.2 libevent 1.4.14b expat 2.1.0 libidn 1.19 libksba 1.1.0 libmcrypt 2.5.8 libmemcached 0.16 libmng 1.0.10 neon 0.29.5 libnet 1.1.5 libpcap 1.1.1 librsync 0.9.7 libsigsegv 2.6 libsndfile 1.0.23 libtecla 1.6.1 libtool 2.4.2 libtorrent 0.12.2 libusbugen 0.1.8 libusb 0.1.8 libxml2 2.7.6 libxslt 1.1.26 lighttpd 1.4.23 links 1.03 logilab-astng 0.19.0 logilab-common 0.40.0 lua 5.1.4 m4 1.4.12 make 3.82 mc 4.7.5.2 meld 1.4.0 memcached 1.4.5 memcached-java 2.0.1 mercurial 2.2.1 mpc 0.9 mpfr 2.4.2 mutt 1.5.21 mysql 5.1.37 ncftp 3.2.3 net-snmp 5.4.1 nethack 3.4.3 nmap 5.51 ntp-dev 4.2.5 open-fabrics 1.5.3 openexr 1.6.1 openldap 2.4.30 openscap 0.8.1 openssl 0.9.8q openssl 1.0.0j libopenusb 1.0.1 p7zip 9.20.1 pam_pkcs11 0.6.0 patch 2.5.9 pconsole 1.0 pcre 8.21 perl 5.12.4 DBI 1.58 Net-SSLeay 1.36 pmtools 1.10 XML-Parser 2.36 XML-Simple 2.18 PHP 5.2.17 PHP 5.3.14 pinentry 0.7.6 privoxy 3.0.17 proftpd 1.3.3 psutils p17 pv 1.2.0 pwgen 2.06 pylint 0.18.0 CherryPy 3.1.2 coverage 3.5 jsonrpclib 0.1.3 ldtp 2.1.1 M2Crypto 0.21.1 Mako 0.4.1 nose 1.1.2 ply 3.1 pybonjour 1.1.1 pycups 1.9.46 pycurl 7.19.0 lxml 2.3.3 pyOpenSSL 0.11 Python 2.6.8 Python 2.7.3 setuptools 0.6 quagga 0.99.19 quilt 0.60 rdiff-backup 1.3.3 readline 5.2 rpm2cpio 0.5.11 rsync 3.0.8 rsyslog 6.2.0 rtorrent 0.8.2 ruby 1.8.7 samba 3.6.6 sane-backends 1.0.19 sane-frontends 1.0.14 screen 4.0.3 sed 4.2.1 sendmail 8.14.5 slang 2.2.4 slib 3b1 slrn 0.9.9 snort 2.8.4.1 sox 14.3.2 spawn-fcgi 1.6.3 squid 3.1.18 stdcxx 4.2.1 subversion 1.7.5 sudo 1.8.4.5 swig 1.3.35 expect 5.45 tcl 8.5.9 tk 8.5.9 tls 1.6 tcpdump 4.1.1 tcsh 6.17.00 texinfo 4.7 tidy 1.0.0 timezone apache-tomcat 6.0.35 top 3.8beta1 trousers 0.3.6 unixODBC 2.3.0 unrar 4.1.4 unzip 6.0 vim 7.3 visual-panels wget 1.12 which 2.16 wireshark 1.8.2 wxGTK 2.8.12 xorriso 0.6.0 xz 5.0.1 zip 3.0 zlib 1.2.3 zsh 4.3.17

    Read the article

  • Hyper-V Live Migration across Sites!

    - by Ryan Roussel
    One of the great sessions I sat in on at TechEd this week was on stretching a Windows 2008 R2 Hyper-V failover cluster across sites. With this ability, you could actually implement a Hyper-V cluster where you could migrate or even Live Migrate VMs across sites. With this area's propensity for hurricanes, this will be a very popular topic for me over the next few months. While this technology is possible today, it's also very complicated and can be very expensive to implement. First, your WAN connection has to support the ability to trunk your VLAN across both sites in order to Live Migrate. This means you can't use a Layer 3 routed connection like MPLS. It has to be a Metro Ethernet connection or "dark fiber". Dark fiber is unused fiber already in the ground that can be leased from various providers. Both of these connections would allow you to trunk layer 2 across your WAN. Cisco does have the ability to trunk layer 2 across a routed connection by muxing the traffic, but this is only available in their Nexus product line, which has a very steep price tag. If you are stuck with MPLS or the like and Nexus switching is not a realistic possibility, you will have to implement a multi-subnet cluster, in which case Live Migration won't be possible. However, you can still fail over VMs to the remote site with some planning and manual intervention. The consideration here is that the VMs will be on a different subnet once migrated, so you will have to change the IP addressing of your VMs. This also has ramifications for DNS and name resolution when controlling your downtime. DHCP with reservations for your VMs is the preferred method to achieve the IP changes, as this will automate that part of the process. Secondly, you will have to have a mechanism to replicate your storage across both sites. Many SAN vendors natively support hardware-based synchronous and asynchronous replication. Some even support Cluster Shared Volumes, which were introduced in 2008 R2. If your SANs do not support this natively, there are alternative file-based replication products, either software-based like Double-Take or a hardware appliance like EMC's. Be sure to check with your vendor on support for Disk Majority if you're replicating your quorum disk between SANs. The last consideration is the ability to maintain quorum for your cluster. If your replication provider does not support Disk Majority through replication, you will have to explore Node Majority with File Share Witness. This will affect your design, as a 3-node cluster with 1 node at the remote site and the FSW at the production site would not be able to maintain quorum if the production site were lost. MS best practice for this would be to implement an even-node cluster with 2 nodes at each site and the FSW at a third site. And there you have it. While some considerations and research go into implementing this solution, even a multi-subnet solution would be invaluable to organizations in the implementation of "warm" DR sites.
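
    On the DHCP-reservation point: each VM keeps a static virtual MAC address, and that MAC gets a reservation in the DR site's scope, so the readdressing happens automatically when the VM comes up on the new subnet. Purely as an illustration, if the DR scope were served by ISC dhcpd the reservation would look like the sketch below (subnet, MAC and addresses are made up); on a Windows DHCP server the equivalent is a reservation in the DR scope.

      subnet 10.20.0.0 netmask 255.255.255.0 {
          range 10.20.0.100 10.20.0.200;
          option routers 10.20.0.1;
      }

      # Fixed address for a replicated VM, keyed on its static virtual MAC
      host app-vm1 {
          hardware ethernet 00:15:5d:01:02:03;
          fixed-address 10.20.0.50;
      }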

    Read the article

  • Session Sharing with another User on *NIX and Windows

    - by Giri Mandalika
    Oracle Solaris Since Solaris is not widely known for its graphical interface, let's just focus on sharing a terminal session in read-only mode with another user on the same system. Here is an example. e.g., % finger Login Name TTY Idle When Where root Super-User pts/1 Sat 16:57 dhcp-amer-vpn-rmdc-a sunperf ??? pts/2 4 Sat 16:41 pitcher.sfbay.sun.com In this example, two users, root and sunperf, are connected to the same system from two different terminals, pts/1 and pts/2 respectively. If the root user wants to show something to the sunperf user -- what s/he is doing in her/his terminal, for example -- it can be accomplished with the following command. script -a /dev/null | tee -a <target_terminal> e.g., # script -a /dev/null | tee -a /dev/pts/2 Script started, file is /dev/null # # uptime 5:04pm up 1 day(s), 2:56, 2 users, load average: 0.81, 0.81, 0.81 # # isainfo -v 64-bit sparcv9 applications crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc 32-bit sparc applications crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc v8plus div32 mul32 # # exit Script done, file is /dev/null After the script .. | tee .. command, the sunperf user should be able to see the root user's stdin and stdout contents in her/his own terminal until the script session exits in the root user's terminal. Since this kind of sharing is based on capturing and redirecting the contents to the target terminal, the users on the receiving end won't be able to see whatever is being edited on the initiator's terminal [using editors such as vi]. Also it is not possible to share the session with any connected user on the system unless the initiator has the necessary permissions and privileges. The script utility records everything printed in a terminal session, while the tee utility replicates the contents of the screen capture onto the standard output of the target terminal. The tee utility does not buffer the output, so the screen capture from the initiator's terminal appears almost right away in the target terminal. Though I never tested it, this technique may work on all *NIX and Linux flavors with little or no changes. Also there might be other ways to accomplish this. [Thanks to Sujeet for sharing this tip] Microsoft Windows Most Windows users may rely on VNC services to share a desktop session. Another way to share the desktop session is to use the Remote Desktop Connection (RDC) client. Here are the steps. Connect to the target Windows system using the Remote Desktop Connection client. Launch Windows Task Manager. Navigate to the "Users" tab. Find the user session that you want to connect to and have full control over as the other user who is currently holding that session. Select the user name in Windows Task Manager, right click and choose the option "Remote Control". A window pops up in the other user's session with the message "<USER> is requesting to control your session remotely. Do you accept the request?" Once the other user says "Yes", you will be granted access to that session. From then on, both users should be able to see the same screen and even control the session from their respective workstations.

    Read the article
