Search Results

Search found 6887 results on 276 pages for 'internal'.

Page 206/276 | < Previous Page | 202 203 204 205 206 207 208 209 210 211 212 213  | Next Page >

  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple - my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collection of performance data and displaying trends) will work just fine.

    However, as part of this application we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again. What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them - if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising the flag so an administrator can take a look at things. However, it would still be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, etc. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling.

    Munin doesn't seem like a great candidate, as it requires maintaining a list of Munin nodes in a text file - far from ideal for an environment with a high amount of churn - and for the short amount of time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that uses a database (MySQL, SQLite, etc.) for configuration and data storage, and exposes an API for adding/removing hosts and services. Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination; a sketch of that approach is below.
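
    A minimal sketch of that sar-based approach, assuming sysstat is installed and the AWS CLI is configured on the instances (the bucket name and paths are placeholders, not part of the original setup):

        #!/bin/bash
        # Hypothetical collection script for a short-lived EC2 worker, run just before termination.
        # Assumes a cron entry such as  */1 * * * * /usr/lib64/sa/sa1 1 1  has been writing
        # 1-minute samples into the sysstat data directory since boot.
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        SA_DIR=/var/log/sa                 # default sysstat data directory on most distros
        BUCKET=s3://example-perf-data      # placeholder bucket

        sar -A > "/tmp/${INSTANCE_ID}-sar.txt"                      # human-readable dump of everything collected
        aws s3 cp "$SA_DIR" "$BUCKET/$INSTANCE_ID/" --recursive     # ship the binary sa files, keyed by instance ID
        aws s3 cp "/tmp/${INSTANCE_ID}-sar.txt" "$BUCKET/$INSTANCE_ID/"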

    Read the article

  • Office 365 domain federation conversion failed

    - by Matt Bear
    We're doing things backwards: we have an established O365 domain with 400+ users, and are just now deploying local AD and ADFS for SSO. Last night, after configuring my servers, I ran the PowerShell command Convert-MsolDomainToFederated to convert the xxx.com vanity domain to federated. It errored out with an unspecified error (Microsoft ADFS support said the error has to do with the default password settings being changed). When I run Convert-MsolDomainToStandard, it comes back saying the domain is already standard. Also, in the O365 portal it shows the domain as standard; however, it is trying to process login attempts as if it were a federated domain. I've spent 5 hours total on the phone with Microsoft, and it has been escalated to their engineering department for resolution, sometime within the next few days... I need it yesterday. From what we can gather, the conversion process started, errored out, changed some of the internal configurations to federated, but left the description as standard (if that makes sense). So it's in a weird limbo, where it's in both modes but neither at the same time. Currently, the only way to fix it is to remove the vanity domain and re-add it. I need a way to dissociate the user accounts from the xxx.com domain to allow its removal. Removal of the users themselves is not an option.
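
    One hedged sketch of dissociating the accounts, assuming the MSOnline module and that falling back to the tenant's onmicrosoft.com domain is acceptable (the tenant name below is a placeholder, and proxy addresses on the mailboxes may need the same treatment):

        # Move every UPN off the vanity domain so xxx.com can be removed.
        Connect-MsolService
        Get-MsolUser -DomainName "xxx.com" -All | ForEach-Object {
            $newUpn = $_.UserPrincipalName -replace "@xxx\.com$", "@tenant.onmicrosoft.com"   # placeholder tenant
            Set-MsolUserPrincipalName -UserPrincipalName $_.UserPrincipalName -NewUserPrincipalName $newUpn
        }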

    Read the article

  • Powershell script to delete secondary SMTP addresses of Exchange 2010 Mail Contacts

    - by Zero Subnet
    I have a few thousand Exchange 2010 Mail Contacts who get erroneously assigned internal SMTP addresses by the default recipient policy. I'm trying to use the following command to delete these addresses (keeping the primary SMTP) and to disable the automatic update from the recipient policy so the SMTP addresses don't get recreated again:

        Get-MailContact -OrganizationalUnit "domain.local/OU" -Filter {EmailAddresses -like *@domain.local -and name -notlike "ExchangeUM*"} -ResultSize unlimited -IgnoreDefaultScope | foreach {$contact = $_; $email = $contact.emailaddresses; $email | foreach {if ($_.smtpaddress -like *@domain.local) {$address = $_.smtpaddress; write-host "Removing address" $address "from Contact" $contact.name; Set-Mailcontact -Identity $contact.identity -EmailAddresses @{Remove=$address}; $contact | set-mailcontact -emailaddresspolicyenabled $false} }}

    I'm getting the following error though:

        You must provide a value expression on the right-hand side of the '-like' operator.
        At line:1 char:312
        + Get-MailContact -OrganizationalUnit "domain.local/testou" -Filter {EmailAddresses -like "@domain.local" -and name -notlike "ExchangeUM"} -ResultSize unlimited -IgnoreDefaultScope | foreach {$contact = $_; $email = $contact.emailaddresses; $email | foreach {if ($_.smtpaddress -like <<<< *@domain.local) {$address = $_.smtpaddress; write-host "Removing address" $address "from Contact" $contact.name; Set-Mailcontact -Identity $contact.identity -EmailAddresses @{Remove=$address}; $contact }}
            + CategoryInfo          : ParserError: (:) [], ParentContainsErrorRecordException
            + FullyQualifiedErrorId : ExpectedValueExpression

    Any help as to how to fix this?
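
    For what it's worth, the parser error comes from the unquoted wildcards on the right-hand side of -like; quoting the patterns should get past it. A lightly cleaned-up (but untested) sketch of the same command:

        Get-MailContact -OrganizationalUnit "domain.local/OU" `
            -Filter {EmailAddresses -like "*@domain.local" -and Name -notlike "ExchangeUM*"} `
            -ResultSize Unlimited -IgnoreDefaultScope | ForEach-Object {
                $contact = $_
                # Remove the erroneously assigned @domain.local proxy addresses
                $contact.EmailAddresses | ForEach-Object {
                    if ($_.SmtpAddress -like "*@domain.local") {
                        Write-Host "Removing address" $_.SmtpAddress "from contact" $contact.Name
                        Set-MailContact -Identity $contact.Identity -EmailAddresses @{Remove = $_.SmtpAddress}
                    }
                }
                # Stop the recipient policy from re-adding the addresses
                $contact | Set-MailContact -EmailAddressPolicyEnabled $false
            }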

    Read the article

  • All application passwords lost on Windows 7

    - by Rynardt
    A couple of days ago I changed my Windows 7 login password. My laptop is on my company's domain, so password changes are done over the internal network. Since changing the password I noticed that all my saved Chrome passwords are missing. Skype, Windows Live, Internet Explorer and Outlook have also lost their saved passwords. I guess there could be more applications with lost passwords, but I have not opened them yet. This makes me think that most applications save their passwords to a general password vault on the Windows system, and this vault somehow got corrupted when I changed my domain login password for Windows. Does anyone have any idea of how to fix this and prevent it from happening again?

    EDIT - more info: I do development work at the office, so most of the time I bypass the firewall and connect directly to the internet gateway. Now and then I connect to the company wifi network to do printing and access files on a NAS, so by default my laptop does not connect to the wifi hotspot. On this occasion, to update the password, I had to connect to the wifi. So, referring to the comment by OmnipotentEntity below: could this have happened because the system rebooted without a connection to the network, as the laptop does not auto-connect to the wifi hotspot?

    Read the article

  • IPTABLE & IP-routed netwok solution for HOST net and VM's subnet

    - by Daniel
    I've got a Proxmox VE 2.1 KVM node on Debian and a bunch of guest VMs. This is what my networking looks like:

        # network interface settings
        auto lo
        iface lo inet loopback

        # device: eth0
        auto eth0
        iface eth0 inet static
            address 175.219.59.209
            gateway 175.219.59.193
            netmask 255.255.255.224
            post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

    And I've got two working subnet solutions. The first:

        auto vmbr0
        iface vmbr0 inet static
            address 10.10.0.1
            netmask 255.255.0.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up ip route add 10.10.0.1/24 dev vmbr0

    This way I can reach the internet, resolve outside hosts, and update and download everything I need, but I can't reach one guest VM from any other VM inside my network. The second solution allows me to communicate between VMs:

        auto vmbr1
        iface vmbr1 inet static
            address 10.10.0.1
            netmask 255.255.255.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up echo 1 > /proc/sys/net/ipv4/ip_forward
            post-up iptables -t nat -A POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE
            post-down iptables -t nat -D POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE

    I can even NAT internal addresses:

        -t nat -I PREROUTING -p tcp --dport 789 -j DNAT --to-destination 10.10.0.220:345

    My inexperienced mind is ready to give the VMs two network adapters each - one for the first solution and another for the second (with slightly different addresses) - but I'm pretty sure that's a dumb way to resolve the problem, and that everything can be resolved via iptables/ip route rules that I haven't been able to come up with. I've tried a dozen "wizard manuals" and "howtos" to mix both solutions, but without success. Looking for advice (and good reading links for networking beginners).
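
    A hedged sketch of how the two might be combined on a single bridge: guests attached to the same bridge reach each other at layer 2, and NATing out of eth0 (rather than out of the bridge itself) gives them internet access. Addressing kept from the question; this is an untested example of /etc/network/interfaces, not a verified Proxmox recipe:

        auto vmbr0
        iface vmbr0 inet static
            address 10.10.0.1
            netmask 255.255.255.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            # enable forwarding and NAT guest traffic out of the physical interface
            post-up echo 1 > /proc/sys/net/ipv4/ip_forward
            post-up iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -o eth0 -j MASQUERADE
            post-down iptables -t nat -D POSTROUTING -s 10.10.0.0/24 -o eth0 -j MASQUERADE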

    Read the article

  • Router vs switch in a LAN [closed]

    - by servernewbie
    If I have a LAN and connect it with a switch, I understand it uses a CAM table to forward frames at layer 2 (by saving MAC-to-port relations). So far all good. However, when using a router for a LAN (ONLY for a LAN, not to connect it to "the outside" WAN/internet/etc.) I get a bit confused as to how it internally processes packets. I would first split this into two router scenarios:

    Router with built-in switch. In this scenario, I would expect that it acts exactly as a switch with a CAM table internally. This would probably give a bit of a speed benefit (guessing here?) compared to the next option.

    Router without built-in switch. Here is where I get confused. If hostA wants to send a packet to hostB, it will ARP to find hostB's MAC address and send it there. Now, if we had a switch (above scenario) this would be easy. But how does it work in a router WITHOUT a switch? If I had to guess, hostA would send an Ethernet frame with hostB's MAC address onto the line. The router would fetch the packet (even though the router has another MAC address, it would still fetch this packet even if it only contains hostB's MAC address). It would strip the Ethernet frame header, check the IP, and then check its own internal ARP table again for the MAC address. Now, this would seem like a waste of resources compared to a router with a built-in switch. But maybe it does not work like that at all. Does it also contain a CAM table? If that were true, what would the difference between these two routers really be?

    Read the article

  • Cannot copy files after installing Windows

    - by Ali
    I am experiencing a weird problem. I was running Xubuntu on my laptop until yesterday, when I had to delete Xubuntu and install Windows. I had an NTFS partition on my Xubuntu install that I kept some files on. Today, after installing Windows, I wanted to move all the files from that partition to an external HDD. I selected all files and folders and clicked Copy, then I went to the HDD and clicked Paste, but nothing happened. I cannot do it, and I do not know why: I copy the files, and wherever I click Paste, nothing happens. If I try to copy the files and folders one by one, I can copy some of them, but some of them do not move. The other problem I have is that I cannot open some files, in particular PDF files. When I click on PDF files I get this error: "There was an error opening this document. This file cannot be found." Also, I cannot play some MP4 files, and I cannot open some JPG and TXT files; for those I get the error "The directory name is invalid." So in summary, after removing Xubuntu and installing Windows 7 I have the following problems with one of the NTFS partitions on my internal drive: I cannot copy or cut all folders and files from that partition to any other partition (and I do not get any errors); I can copy some folders and files; and I cannot access some PDF, JPEG, TXT and MP4 files, getting the above errors. I should also mention I did not change anything on this partition during the installation or while formatting the other partitions.

    Read the article

  • Are SATA II and SATA 3.0 Gbps compatible?

    - by Johnny Maelstrom
    I am trying to check that if I buy a new internal HDD it will work in the NAS I am buying. Currently I'm confused about naming schemes and, once that is resolved, about whether there is compatibility. I will gladly rework this question to be more general if there is not already an article helping with the confusion around SATA naming and standards. I see similar, but not identical, questions and will accept this as a duplicate if it is judged as such. The specifications on the eCommerce site for the NAS say "Controller Interface Type Serial ATA-150", while the product home page for the manufacturer says "Compatible with SATA and SATA II HDD". The specifications on the eCommerce site for the hard drives say "Interface Type Serial ATA-300", and the product home page for the manufacturer says "Interface SATA 3.0 Gbps". Wikipedia says many things about different naming conventions, the closest being: "SATA II 3.0 Gbit/s, which was colloquially referred to as 'SATA 3G' [bps] or 'SATA 300' [MB/s], since 1.5 Gbit/s SATA I and 1.5 Gbit/s SATA II were referred to as both 'SATA 1.5G' [b/s] or 'SATA 150' [MB/s]. Therefore, they will operate with negligible differences between them." Are SATA II and SATA 3.0 Gbps the same? I feel I'm tantalisingly close to getting a definitive answer here before I purchase, but really want to clear up these naming schemes.

    Read the article

  • Emails not sending from Outlook / OWA - not even hitting the mail queue in Exchange

    - by webnoob
    We are having an issue this morning where we can receive external emails but cannot send internal or external ones from Outlook or OWA. If I use:

        Send-MailMessage -From <[email protected]> -To <[email protected]> -Subject "Test #01" -Body "Just a test message." -SmtpServer <Server-Name> -Credential <domain\user>

    the email is sent correctly, which makes me think there is a connection issue with OWA and Outlook. However, Outlook is reporting as connected to Exchange. I have checked the message tracking in the Exchange tools, and emails sent via Outlook and OWA do not appear. Nothing has changed on the server over the weekend, so I don't really know where to start debugging this issue. We are using Windows SBS 2011. We only have one send connector, which isn't using smart hosts and is set to use DNS MX records. "Use external DNS" is not checked, and I can ping google.com etc., so it doesn't appear to be a DNS issue (plus the email sends from the console anyway).

    EDIT: It appears that users using IMAP can send emails correctly; it's only the ones that rely on the normal Exchange connection type that don't work.

    EDIT: Emails from IMAP are hitting the email queues, whereas emails from the normal Exchange accounts aren't.

    EDIT: It seems that some of the emails we tried to send yesterday went out at about 1am, but now it won't work again.
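
    A hedged diagnostic sketch from the Exchange Management Shell on the SBS box (server, mailbox and address names below are placeholders); since IMAP submissions reach the queues but MAPI/OWA ones never do, checking the submission path and service health is a reasonable first step:

        Test-ServiceHealth                                           # are all required Exchange services (incl. Mail Submission) running?
        Get-Queue -Server SBSSERVER                                  # confirm what is and isn't reaching the transport queues
        Test-MailFlow -TargetEmailAddress [email protected]       # placeholder external address
        Test-MapiConnectivity -Identity "someuser"                   # placeholder mailbox
        Get-MessageTrackingLog -Sender "[email protected]" -Start (Get-Date).AddHours(-4)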

    Read the article

  • Make UEFI, GPT, Bootloader, SSD, USB, Linux and Windows work together

    - by user129552
    I like to use the latest hardware and the latest software; thus I have a laptop (Lenovo X220) with UEFI instead of BIOS, an SSD instead of an HDD, a GPT partitioning scheme instead of MBR, and USB to boot from instead of optical disks. I need to use both Windows and Linux. I tried to make them work alongside each other, but I didn't succeed. Most Linux distribution ISOs don't even really work on UEFI systems booted from USB (not even the self-proclaimed cutting-edge Fedora; I also tried Linux Mint Debian Edition and Sabayon Linux, according to this guide, and they did not work; only Ubuntu worked for me). I first installed Windows 8, which created sda1: Recovery, sda2: EFI system, sda3: msftres, sda4: NTFS Windows. Windows worked without a problem. I then created sda5: linux-swap and installed Ubuntu into sda6: btrfs. After rebooting, I was not presented with GRUB2 as expected; instead my system just booted into Ubuntu, and I could no longer access Windows. After fixing dpkg in the btrfs Ubuntu, I followed the Ubuntu documentation on UEFI booting. The result left me with a broken GRUB2, but interestingly, when I wanted to select the device to boot from, I was presented not only with the internal SSD, an attached USB device and LAN, but also Grub2 (broken), Ubuntu and Windows. The result is not very satisfying to me. What would I have to do to fix everything? Or, asked differently: what operating system should I install at what point, given my possibilities and requirements, so that I have a working bootloader in my UEFI GPT system which presents me with a working Linux and Windows?
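
    For what it's worth, reinstalling GRUB's EFI build from the working Ubuntu and letting os-prober pick up the Windows Boot Manager is often enough; a hedged sketch, assuming Ubuntu's default paths and that sda2 is the EFI system partition created by Windows:

        sudo mount /dev/sda2 /boot/efi        # the EFI system partition
        sudo apt-get install grub-efi-amd64   # make sure the EFI (not BIOS) GRUB package is installed
        sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ubuntu
        sudo update-grub                      # os-prober should add a "Windows Boot Manager" entry
        sudo efibootmgr -v                    # check the firmware boot entries and their order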

    Read the article

  • How to import a text file into PowerShell and email it, formatted as HTML

    - by Don
    I'm trying to get a list of all Exchange accounts, format them in descending order starting with the largest mailbox, and put that data into an email in HTML format to send to myself. So far I can get the data, push it to a text file, and create an email and send it to myself; I just can't seem to get it all put together. I've been trying to use ConvertTo-Html, but it just seems to return data via email like "pageFooterEntry" and "Microsoft.PowerShell.Commands.Internal.Format.AutosizeInfo" instead of the actual data. I can get it to send me the right data if I don't tell it to ConvertTo-Html and just have it pipe the data to a text file and pull from it, but then it's all run together with no formatting. I don't need to save the file; I'd just like to run the command, get the data, put it in HTML and mail it to myself. Here's what I have currently:

        #Connects to Database and returns information on all users, organized by Total Item Size, User
        $body = Get-MailboxStatistics -database "Mailbox Database 0846468905" | where {$_.ObjectClass -eq "Mailbox"} | Sort-Object TotalItemSize -Descending | ft @{label="User";expression={$_.DisplayName}},@{label="Total Size (MB)";expression={$_.TotalItemSize.Value.ToMB()}} -auto | ConvertTo-Html

        #Pause for 5 seconds for Exchange
        write-host -foregroundcolor Green "Pausing for 5 seconds for Exchange"
        Start-Sleep -s 5

        $toemail = "[email protected]"    # Emails report to this address.
        $fromemail = "[email protected]"  # Emails from this address.
        $server = "Exchange.company.com"  # Exchange server - SMTP.

        #Email the report.
        $email = New-Object System.Net.Mail.MailMessage
        $email.IsBodyHtml = $True
        $email.To.Add($toemail)
        $email.From = $fromemail
        $email.Subject = "Exchange Mailbox Sizes"
        $email.Body = $body
        $client = New-Object System.Net.Mail.SmtpClient $server
        $client.UseDefaultCredentials = $true
        $client.Send($email)

    Any thoughts would be helpful, thanks!
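
    The usual cause is that Format-Table (ft) emits formatting objects rather than data, and ConvertTo-Html then renders those. A hedged sketch of the fix, swapping ft for Select-Object with the same calculated properties (untested here):

        $body = Get-MailboxStatistics -Database "Mailbox Database 0846468905" |
            Where-Object { $_.ObjectClass -eq "Mailbox" } |
            Sort-Object TotalItemSize -Descending |
            Select-Object @{Name="User"; Expression={$_.DisplayName}},
                          @{Name="Total Size (MB)"; Expression={$_.TotalItemSize.Value.ToMB()}} |
            ConvertTo-Html | Out-String     # Out-String collapses the HTML lines into one body string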

    Read the article

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer using Ubuntu. The problem is, we also have developer tools that only work with 'officially' supported Linux distributions that use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.) We could of course just tell them to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :) Which brings me to some questions - is there any good/easy/recommended way to run something like Ubuntu as a host OS and CentOS 5.x as a guest OS (and with which system - Xen, KVM, VMware, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option just recently included in RHEL 5.4, but if hardware support for virtualization like Intel VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. I would be very interested to hear how anyone else has addressed this kind of issue.

    EDIT: The target audience / users of this kind of system would be developers; each one needs to run locally licensed commercial software, so building out some separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there was some step-by-step guide out there for building your own pre-built distribution that had e.g. CentOS 5.x and Ubuntu Desktop as a guest.

    Read the article

  • Moving Microsoft Exchange server to the private network.

    - by Alexey Shatygin
    In one of the offices, we have a 50-computer network which had only one server machine, running Windows Server 2003, Microsoft ISA Server and Microsoft Exchange 2003. This server worked as a gateway (proxy server), mail server, file server, firewall and domain controller. It had two network interfaces, one for the WAN (let's say 222.222.222.222) and one for the LAN (192.168.1.1). I set up a Linux box to be the gateway (without a proxy), so the Linux box now has the following interfaces: 222.222.222.222 (our external IP, which we removed from the Windows machine) and 192.168.1.100 (internal IP). However, we need to keep the old Windows server as a mail server and as a proxy for some of our users until we prepare another Linux machine for that, so I need the mail server on that machine to be available from the Internet. I set up iptables rules to redirect all incoming connections on ports 25 and 110 of our external IP to 192.168.1.1:25 and 192.168.1.1:110. When I telnet to our SMTP service (telnet 222.222.222.222 25), I get the greeting from our Windows server's (192.168.1.1) SMTP service, and that works fine. But when I telnet to the POP3 service (telnet 222.222.222.222 110), I only get a blank black screen, and the connection seems to disappear if I press any key. I've checked the ISA rules - everything seems to be the same for ports 110 and 25. When I telnet to port 110 of our Windows server from our new gateway machine (telnet 192.168.1.1 110), I get access to its POP3 service: "+OK Microsoft Exchange Server 2003 POP3 server version 6.5.7638.1 (...) ready." What should I do to make the POP3 service available through our new gateway?
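
    For comparison, a hedged sketch of a complete forwarding rule set on the Linux gateway (interface names are assumptions: eth0 = 222.222.222.222, eth1 = 192.168.1.100); the MASQUERADE line at the end is only needed if replies from the Windows box take an asymmetric path or ISA rejects the untranslated source:

        # DNAT incoming SMTP and POP3 to the Windows/Exchange box
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 25  -j DNAT --to-destination 192.168.1.1:25
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 110 -j DNAT --to-destination 192.168.1.1:110
        # Allow the forwarded traffic through
        iptables -A FORWARD -i eth0 -p tcp -d 192.168.1.1 --dport 25  -j ACCEPT
        iptables -A FORWARD -i eth0 -p tcp -d 192.168.1.1 --dport 110 -j ACCEPT
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        # Optional: source-NAT the forwarded POP3 traffic so replies come back via this gateway
        iptables -t nat -A POSTROUTING -o eth1 -p tcp -d 192.168.1.1 --dport 110 -j MASQUERADE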

    Read the article

  • Error compiling PHP 5.5.9 on CentOS 6.5 during make command

    - by Chris Mancini
    Here is the error message:

        cc: internal compiler error: Killed (program cc1)
        Please submit a full bug report,
        with preprocessed source if appropriate.
        See <file:///usr/share/doc/gcc-4.6/README.Bugs> for instructions.
        make: *** [ext/fileinfo/libmagic/apprentice.lo] Error 1

    The very last thing make was processing is apprentice.lo, which appears to be part of the image manipulation libraries (maybe?). I am using Ansible to provision my instance. It is a Digital Ocean single-core 512MB VM. I have been using Vagrant / Ansible with the same config locally for dev and it has compiled fine; this is the first cloud VM I am attempting to provision. The only difference is that the base image for my DO server comes from DO, while for my local dev I built my own Vagrant box via VirtualBox from a stock CentOS basic server install (I pull it down from my Dropbox). The problem has been experienced by others and reported as a PHP bug. My PHP Ansible role up to the error:

        ---
        - name: Download php source
          get_url: url={{ php_source_url }} dest=/tmp
          register: get_url_result

        - name: untar the source package
          command: tar -xvf php-{{ php_version }}.tar.gz chdir=/tmp
          when: get_url_result.changed or php_reinstall

        - name: configure php 5.5
          command: >
            ./configure --prefix={{ php_prefix }} --with-config-file-path={{ php_config_file_path }}
            --enable-fpm --enable-ftp --enable-mbstring --enable-pdo --enable-soap --enable-sockets=shared
            --enable-zip --with-curl --with-fpm-group={{ nginx_group }} --with-fpm-user={{ nginx_user }}
            --with-freetype-dir=/usr/lib64/ --with-gd --with-jpeg-dir=/usr/lib64/ --with-libdir=lib64
            --with-mcrypt --with-openssl --with-pdo-mysql --with-pear --with-readline --with-tidy
            --with-xsl --with-zlib --without-pdo-sqlite --without-sqlite3
            chdir=/tmp/php-{{ php_version }}
          when: get_url_result.changed or php_reinstall

        - name: make clean when reinstalling
          command: make clean chdir=/tmp/php-{{ php_version }}
          when: php_reinstall

        - name: make php
          command: make chdir=/tmp/php-{{ php_version }}
          when: get_url_result.changed or php_reinstall

    Thanks in advance for any help. :)
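
    A hedged note: on 512MB droplets, "internal compiler error: Killed" during a build is typically the kernel OOM killer ending cc1 mid-compile, and a temporary swap file is a common workaround (sizes and paths below are just examples):

        sudo dd if=/dev/zero of=/swapfile bs=1M count=1024    # 1 GB swap file
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile
        sudo swapon /swapfile
        # ...re-run the Ansible play / make...
        sudo swapoff /swapfile && sudo rm /swapfile           # optional cleanup once the build succeeds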

    Read the article

  • Log shipping on select tables.

    - by Scott Chamberlain
    I know I am most likely using incorrect terminology, so please correct me if I use the wrong terms so I can search better. We have a very large database at a client's site, and we would like to have up-to-date copies of some of the tables sent across the internet to our servers at our office. We would like to copy only a few of the tables, because the bandwidth requirement to do log shipping of the entire database (our current solution) is too high. Also, replication directly to our servers is out of the question, as our servers are not accessible from the internet and management does not want to do replication (more on that later). One possible idea we had is to do some form of replication of the tables we need into a second database on the same server and do log shipping of that second, smaller database, but management is concerned because the clients have broken replication on us in the past (it was between two servers on their internal network, however) and would like to stay away from it if possible. Any recommendations would be greatly appreciated. If using some form of replication is the only solution, I am not against replication; I just need compelling arguments to convince management to do it. This is to be set up on multiple sites that are running either SQL 2005 or SQL 2008; we will have both versions on our end to restore the data to, so that is not an issue. Thank you.

    Read the article

  • What's the best way to clone multiple PCs from one machine?

    - by Jason T.
    Where I work we have dozens and dozens of old ThinkPad laptops. A lot of these can be reused, but not for our needs - they have long since been replaced. The higher-ups have decided to donate them to charity, and for better or for worse I have been tasked with reimaging them. I took a laptop, installed the factory copy of Windows, updated it, and configured it appropriately. Now I'm trying to reimage it to dozens of other laptops. What is some good software to do this? First I used Clonezilla to clone the HDD in the laptop to an internal drive in an external enclosure, and it worked. Then I tried taking the base image out, connecting it externally to a laptop that needed to be imaged, and I got it to work a few times. So far so good, right? Well, once I informed my boss of my findings and what I would want to do, the images started to not work on new laptops. One of three things would happen: the ThinkPads would just blink at me and Windows wouldn't load; or Windows would load but freeze within two minutes; or the laptops would BSOD during the Windows XP bootup. These laptops are not going to be used by the company - they're going to charity. So can anyone else recommend a way to reimage multiple laptops?
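
    One hedged aside: boot failures and BSODs after cloning XP onto different units are often down to the image not being generalized before capture; running Sysprep on the reference machine before taking the Clonezilla image is the usual counter-measure. A sketch, assuming the XP deployment tools (DEPLOY.CAB from SUPPORT\TOOLS on the XP CD) have been extracted to C:\sysprep:

        REM Run on the reference laptop immediately before capturing the image.
        REM -mini runs Mini-Setup on the next boot, -pnp forces full Plug and Play redetection,
        REM -reseal clears logs/identifiers so each clone comes up as a fresh install.
        C:\sysprep\sysprep.exe -reseal -mini -pnp -quiet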

    Read the article

  • Router reporting failed admin login attempts from home server

    - by jeffora
    I recently noticed in the logs of my home router that it relatively regularly lists the following entry:

        [admin login failure] from source 192.168.0.160, Monday, June 20,2011 18:13:25

    192.168.0.160 is the internal address of my home server, running Windows Home Server 2011. Is there any way I can find out what specifically is trying to log in to the router? Or is there some explanation for this behaviour? (Not sure if this belongs here or on Super User...)

    [Update] I've run both Wireshark and Network Monitor (netmon) for a while on my home server. Wireshark captured the traffic but didn't really show anything useful (or nothing I could make use of). A simple HTTP GET request is sent from the server (192.168.0.160) to the router (192.168.0.1), from a seemingly random port (I've seen examples from 50068 and 52883), and it appears to do it twice in quick succession (incrementing the port by 1), about every hour. Running netstat around the time of the failure didn't show anything (probably too long after, anyway). I tried using netmon as it categorises traffic by process, so I thought it might show a corresponding process for the port. Unfortunately, this comes in under the 'unknown' category, meaning it's basically just a slower, less useful Wireshark. I know there's not much to go on here, but does this help in any way?
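
    A hedged suggestion only: since the connection is short-lived and roughly hourly, running netstat in an elevated prompt on the server around the top of the hour might catch the owning process (this is the stock Windows netstat, nothing WHS-specific):

        REM -n numeric addresses, -o owning PID, -b owning executable (requires elevation)
        netstat -n -o -b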

    Read the article

  • dhcp3-server (dhcpd) is tampering with host NIC

    - by user61000
    Hi all, I have a Debian box that is serving as a router (using iptables NAT). When first turned on, everything works fine for a few minutes. Then the DHCP server assigns an IP (other than 192.168.0.1) to its host NIC, eth0. This is NOT what I want. I just want dhcp3-server to listen on eth0, not assign it an IP or change the kernel routing table. This of course ruins the NAT capabilities of the box. How can I tell dhcp3-server NOT to do this? Thanks.

    Before dhcp3-server tampers with eth0, the IP is 192.168.0.1 and the routing table looks like this:

        ~# netstat -r
        Kernel IP routing table
        Destination     Gateway         Iface
        192.168.0.0     *               eth0
        173.33.220.0    *               eth1
        default         173.33.220.1    eth1

    After dhcp3-server tampers with eth0, the IP is 192.168.0.3 and the routing table looks like this:

        ~# netstat -r
        Kernel IP routing table
        Destination     Gateway         Iface
        192.168.0.0     *               eth0
        173.33.220.0    *               eth1
        default         192.168.0.1     eth0
        default         173.33.220.1    eth1

    SETUP: the outbound NIC is eth1 and the internal NIC is eth0.

        /etc/network/interfaces
        ...
        iface eth0 inet static
            address 192.168.0.1
            netmask 255.255.255.0

        /etc/default/dhcp3-server
        INTERFACES="eth0"
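
    A hedged observation: dhcpd itself only hands out leases and does not reconfigure its host's interfaces, so the usual culprit for this symptom is a DHCP client (dhclient/dhcpcd) also running against eth0 and taking a lease from the local server. A quick check, as a sketch:

        ps ax | grep -E '[d]hclient|[d]hcpcd'              # is a DHCP client running on this box?
        dhclient -r eth0                                   # if so, release its lease and stop it on eth0
        grep -A4 'iface eth0' /etc/network/interfaces      # confirm eth0 is still declared static
        ifdown eth0 && ifup eth0                           # re-apply the static 192.168.0.1 config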

    Read the article

  • PC -> TV HDMI suddenly doesn't work, any other option does

    - by XSlicer
    So my parents wanted to connect their PC to their TV, so they bought a 10m (35 ft) cable. It all worked perfectly. Then my mom tripped over the cable, breaking it. She somehow got a free replacement, and at the same time a more expensive one from an uncle. Now the problem is that this suddenly doesn't work anymore (the TV says "Format not supported"). I've tried several other things to pinpoint the actual problem, but none brought me anything conclusive:

    PC (HDMI) to TV doesn't work (both cables)
    PC (HDMI) to TV doesn't work (short cable)
    Laptop (HDMI) to TV works (same cables)
    BluRay (HDMI) to TV works (same cables)
    PC (HDMI) to monitor works (same cables)
    PC (DVI with HDMI adapter) to TV works (same cables)

    I've tried different ports, tried Linux, and reinstalled drivers (the graphics card is integrated, using an i5 Sandy Bridge; normally it's running Windows 7 Home Premium, and the TV is a Philips 37PFL7605H). So basically everything works except the one thing we want. The only things I can think of are that the sound is interfering or that the HDMI output is somehow broken. Or is it anything else? I'm kind of lost.

    Read the article

  • How can I monitor VNC via Nagios?

    - by atroon
    I have a number of remote sites which have VNC running on a few computers for support purposes. They are (obviously) only available on our internal network. I am using Nagios to keep track of all the systems in the network and I want to have it check to make sure the VNC server is running on the appropriate hosts. There is a 'check_vnc' plugin available here but it relies on VNC Snapshot which I don't want to use. Certainly I could use it, but it adds more complexity and dependency, which I want to avoid. It seems simpler to just use check_tcp to make sure I get the proper response to a connection request for VNC, e.g. port 5900, send a connect string, get back framebuffer info. My real question, I suppose, is this: What is the 'proper' generic connect string for VNC (I use both UltraVNC and RealVNC) and what is the expected response? If it's really easier to use the VNC Snapshot and check_vnc, let me know. I just can't imagine that a string of text isn't easier, faster, and less bandwidth intensive to monitor.
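
    For what it's worth, an RFB (VNC) server speaks first after the TCP handshake, sending its protocol version string (e.g. "RFB 003.008"), so there is no connect string to send - expecting "RFB" back on port 5900 is usually enough. A hedged Nagios sketch using the stock check_tcp plugin (object names are placeholders):

        # commands.cfg
        define command{
            command_name    check_vnc_tcp
            command_line    $USER1$/check_tcp -H $HOSTADDRESS$ -p 5900 -e "RFB"
            }

        # a service applied to the hosts running VNC
        define service{
            use                     generic-service
            host_name               support-pc-01
            service_description     VNC
            check_command           check_vnc_tcp
            }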

    Read the article

  • Set up Windows 2012 AD in Hyper-V for a test environment

    - by hub
    I'm trying to set up a Windows 2012 R2 test environment on my work computer (a laptop). I have AD, DHCP and DNS on server A, and a client connecting to the domain, and that works. The client can ping the AD server and gets a valid IP address. If I ping google.com from the client, I get the IP address but I don't get any responses (request timed out). If I ping google.com from server A, it works as it should. Server A has a connection to the Internet through an "external network switch" in Hyper-V, which gets its internet from a router, and the client is connected to an "internal network switch". Might the problem be that server A is behind a router? Can I make this solution work regardless of the network my laptop is connected to? At home I have one IP address; at work it's a totally different range. What I would like is to use my laptop's internet connection, regardless of whether it is wifi or wired, as the incoming internet - is this possible?

    Read the article

  • Windows shutting down, CPU maxing out in Windows-7 32-bit? [closed]

    - by Vivek Sharma
    I have no idea what is happening to my laptop. The last 5-6 times it has shut down automatically while running, without doing anything serious. I even get the message during boot that says "Start Windows normally / Start in Safe Mode"; it just boots up normally. A few days ago my screen blinked and the PC shut down immediately. After that I was not able to get it started. I got it repaired; the repairer said he had replaced the display chip (nVidia NVS 140), but I seriously doubt that. Now it has started working, but it shuts down every 20 minutes or so. I have a dual boot, and Ubuntu 11.10 works just fine. A virus scan on Windows shows nothing. I am pasting my perfmon output: my CPU for some reason is maxing out at 100% continuously, on internal Windows processes. Please have a look at the attached file. Strangely, for the last 2 hours it has been working fine, but I am only writing emails and reading Excel files. ThinkPad T61 | T9300 | 3GB | nVidia 140 NVS Quadro (latest driver 296.10). What do you suggest?

    Read the article

  • Painless deployment of a Django app (port from Drupal). Do I have to switch to a VPS?

    - by Monden
    I'm about to complete porting my Drupal-based community site to Django. My Drupal site has been hosted on shared hosting (Dreamhost) for the last 4 years, and stability and performance have been satisfactory. The site gets around 5k unique visitors with 70-80k page views a day. This will be my first deployment of a Django application, and I'm not comfortable with managing my own VPS. I use Ubuntu as a dev server, but I don't have experience with it in a production environment. I have an unrelated internal CRM app (Django) that I host with WebFaction; however, security and performance aren't an issue there, as it's only accessed by 5 people. Unfortunately, I don't have much time to learn and maintain a VPS at the moment. I would like to know if I can host a site with this much traffic in WebFaction's shared environment. How would performance differ in comparison to Linode or Slicehost? Google App Engine isn't an option at the moment, as I'll be using my current PostgreSQL database.

    Read the article

  • System occasionally hangs during boot with SLES 11

    - by ThaMe90
    I have several (new) systems on which I had to install SLES 11. However, after a few (though not all) reboots, the system hangs during the boot sequence. It will only continue after I physically press a key on the keyboard. Here is what I've found in the dmesg log from a failed boot:

        [   22.170276] sd 0:0:0:0: [sda] Mode Sense: b7 00 00 08
        [   22.171155] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        [   22.182760] sda: sda1 sda2 sda3
        [   22.383424] sd 0:0:0:0: [sda] Attached SCSI disk
        [   22.545372] PM: Marking nosave pages: 000000000009a000 - 0000000000100000
        [   22.545377] PM: Marking nosave pages: 00000000bf780000 - 0000000100000000
        [   22.546217] PM: Basic memory bitmaps created
        [   22.590380] PM: Basic memory bitmaps freed
        [   22.596284] PM: Starting manual resume from disk
        [   22.602319] PM: Resume from partition 8:1
        [   22.602321] PM: Checking hibernation image.
        [   22.602479] PM: Error -22 checking image file
        [   22.602481] PM: Resume from disk failed.
        [   22.718727] kjournald starting. Commit interval 15 seconds
        [   22.718960] EXT3-fs (sda3): using internal journal
        [   22.718964] EXT3-fs (sda3): mounted filesystem with ordered data mode
        [ 1555.644404] udevd version 128 started
        [ 1555.697664] input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
        [ 1555.707961] ACPI: Power Button [PWRB]

    I've looked around the internet for the "PM: Resume from disk failed." message, but this only seems to matter when restoring the system after a hibernate, i.e. a restore from the HDD. That is not my situation; I only get this after a reboot, as I said before. The jump to the [ 1555.xxxxxx ] timestamps is only the result of me pressing a key on the keyboard. Any suggestions on how to proceed? I am getting stuck on this issue.
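
    A hedged test only, not a confirmed fix: since the gap in the log starts right after the resume probe, it may be worth ruling that probe out by booting once with it disabled, i.e. appending noresume to the kernel line (SLES 11 uses GRUB legacy; the device names below are taken from the log, not verified):

        # /boot/grub/menu.lst (excerpt) - add "noresume" to the kernel line for one test boot,
        # or type it at the GRUB boot prompt instead of editing the file.
        title SUSE Linux Enterprise Server 11
            root (hd0,0)
            kernel /boot/vmlinuz root=/dev/sda3 splash=silent showopts noresume
            initrd /boot/initrd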

    Read the article

  • Mass-migrating from POP3 to Exchange 2010, how do I copy mailboxes?

    - by Erik P. Skaalerud
    I'm in the process of planning our migration from an internally hosted POP3 server (Dovecot) to Exchange 2010. We're using Outlook 2003 for the moment, but will soon upgrade to Outlook 2010. The big problem is that we have about 50 computers here in our HQ, plus ~30 clients in branch offices (which will get their Exchange migration sometime later). I'm the only IT person, and having to go around and manually set up Outlook and copy over the PST contents is not an option I'm looking for. Some users have set Outlook to keep messages for X number of days on the POP3 server, others have not, so using a POP3 connector to transfer the mail over is not a viable option. Here is what I've done so far: created a transform for the Office 2003 administrative installation point; created a .PRF file to modify any existing e-mail account to switch over to Exchange (including the RPC-encryption hotfix described in MSKB 2006508); tested both the transform and the PRF, and both work; and created a test OU and GPO containing the Office 2003 installation with the transform applied, which also works. My big question is: how can I force Outlook to import any existing .PST into the new Exchange mailbox when the user starts Outlook for the first time after the MST/PRF has been applied? Is this possible?
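
    A hedged alternative sketch: if the .PST files can be gathered onto a share, Exchange 2010 SP1's New-MailboxImportRequest can pull them in server-side instead of relying on Outlook's first run. Paths and the file-naming convention below are assumptions, and the admin account needs the "Mailbox Import Export" role assigned first:

        # Assumes each PST is named after the mailbox alias and sits on a UNC share
        # readable by the Exchange Trusted Subsystem group.
        Get-ChildItem \\fileserver\pst\*.pst | ForEach-Object {
            $alias = $_.BaseName
            New-MailboxImportRequest -Mailbox $alias -FilePath $_.FullName
        }
        Get-MailboxImportRequest | Get-MailboxImportRequestStatistics   # monitor progress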

    Read the article
