Search Results

Search found 13516 results on 541 pages for 'common dialog'.


  • Asus WL-520GU conflicting subnet (and/or IP) with 2Wire DSL

    - by Paula
    I have an Asus wireless router (WL-520GU) and an AT&T 2Wire for my DSL connection. When I try to browse anywhere, I just get an odd message from the Asus router (in the usual Asus broken English, bad formatting, and awful spelling): http://postimage.org/image/upxrjflcj

    I guess it's trying to say: your Asus router and your 2Wire have the same subnet mask. (It doesn't say if that's good or bad... but it sounds like they must be different.) For the "solution", it looks like it's trying to say: your Asus router and your 2Wire have the same IP address.

    My Asus has the defaults: 192.168.1.1 and 255.255.255.0. My 2Wire has: 192.168.1.66. I'm not seeing where the conflict(s) could be. The Asus firmware is v3.0.0.14; none of these problems occur with the old v3.0.0.8 firmware.

    Any ideas on how to fix this? (PLEASE don't say to run a totally different DD-WRT/Tomato firmware because it's "better". I need to fix THIS one problem, not try to convince my company to switch everything to an entirely different set of problems.)
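
    A quick way to see the overlap: with the default 255.255.255.0 mask, both 192.168.1.1 (the Asus LAN side) and 192.168.1.66 (the 2Wire) reduce to the same 192.168.1.0/24 network, so the Asus is being asked to route between a WAN side and a LAN side that live on one subnet. A hedged check with the ipcalc utility (output format varies by distribution):

        ipcalc -n 192.168.1.1/24     # NETWORK=192.168.1.0
        ipcalc -n 192.168.1.66/24    # NETWORK=192.168.1.0  <- same subnet

    One common remedy, offered as a sketch rather than a guaranteed fix, is to move the Asus LAN to a different range (e.g. 192.168.2.1/24) so the two sides no longer overlap.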

    Read the article

  • Login box not shown on Ubuntu

    - by Alexandre
    I've installed Ubuntu 10.04 (64-bit) as a guest OS in VirtualBox, using Windows 7 Professional (64-bit) as host. After the Ubuntu install, I installed Xfce4 (sudo apt-get install xfce4), logged in using an Xfce session, and when I logged out I couldn't see the login box anymore, only the regular GNOME background of the login screen. Then I restarted the virtual machine, and now I can't see the login box at all, only the GNOME background. Does anyone know how to solve this? Thanks in advance.

    Update: I've tried Xubuntu, which comes with Xfce, and I'm facing the same problem. As a common denominator between the two cases, I now see the problem arises after I've updated the system and then installed curl (via apt-get), zlib (make process), and git (make process). But I'm not sure this can be the cause...

    Update: I isolated the problem. The bad guy is zlib, but I don't have a clue why. In fact, it crashes not only Ubuntu and Xubuntu but also Debian (yes, I've tested all of them). I'll keep this question up in case someone has the same problem, but I think my initial question is no longer valid.
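
    Since zlib was built from source ("make process"), one hedged explanation is that make install replaced the distribution's libz, which nearly everything in userland (including X and the display manager) links against; an ABI mismatch there would take the login screen down with it. A sketch of a safer build that leaves the system copy alone:

        # keep the source build out of the distro's library path
        ./configure --prefix=/usr/local   # default prefix; /usr or /lib would shadow the system libz
        make
        sudo make install
        sudo ldconfig                     # refresh the linker cache

    Booting a recovery console and reinstalling the distro package (sudo apt-get install --reinstall zlib1g) might confirm or rule this out.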

    Read the article

  • Windows 7 migration led to crashdump and hibernate problems

    - by MartyMacGyver
    Note: I'm using a Samsung 830 SSD (OS migrated from a defunct PC) and, other than these two (interrelated?) problems, it's working fine. Surprisingly well, actually. The motherboard is an ASUS P8Z77-V Deluxe.

    Problem 1: Crashdumps are not working. volmgr throws an event 45 ("The system could not successfully load the crash dump driver.") whenever you modify crashdump settings, or if a crashdump occurs. diskpart says "Crashdump disk = no", which is peculiar.

    Problem 2: Hibernation isn't working. Again, volmgr throws the same event 45 if you try to hibernate. The screen blanks, then you're at the password prompt. No hibernation occurs. (Yes, I know I should avoid hibernation on SSDs, but it's enabled and the hibernation file is definitely there, so I'd like to know why it's failing.) diskpart claims "Hibernation file = no", which is again peculiar... the file is plainly there and getting created by the system.

    The common factor appears to be volmgr and/or the crashdump "service" (if that's what it is). I'd much rather get this working than spend days reinstalling and reconfiguring the entire system, especially when it's working perfectly otherwise. Sleep works as well (as long as it's not hybrid sleep).

    So, what defines the flags "Crashdump disk" and "Hibernation file disk" in diskpart's output? And what might be going wrong that's breaking crashdumps in particular?
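
    A few hedged checks from an elevated prompt; these inspect state rather than fix it, and the registry paths are the standard ones, but after a disk migration the root cause is often the storage driver the crash dump stack is still bound to:

        rem lists which sleep/hibernate states the machine currently supports
        powercfg /a
        rem recreates hiberfil.sys in case the copy from the old PC is stale
        powercfg /h on
        reg query HKLM\SYSTEM\CurrentControlSet\Control\CrashControl /v CrashDumpEnabled
        reg query HKLM\SYSTEM\CurrentControlSet\Services\volmgr /v Start

    Since the OS came from different hardware, re-selecting the paging/dump disk or reinstalling the disk controller driver is a commonly suggested, if unguaranteed, next step.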

    Read the article

  • Cannot join additional domain controllers

    - by Hosm
    Hi all, I had a dead PDC and another, not fully synced, domain controller for my domain. Using the comments here (link), the so-called secondary domain controller has now seized the domain roles, and I can verify from dsa.msc that it is a domain controller. I set up another domain controller (win2003SRV) and am about to promote it to a domain controller for my domain. When I try to join the new domain controller to the domain, I face a DNS problem. Here is some more detail:

        DNS was successfully queried for the service location (SRV) resource record used to locate a domain controller for domain DOMNAME.A.B:
        The query was for the SRV record for _ldap._tcp.dc._msdcs.DOMNAME.A.B
        The following domain controllers were identified by the query: update.DOMNAME.A.B
        Common causes of this error include:
        - Host (A) records that map the name of the domain controller to its IP addresses are missing or contain incorrect addresses.
        - Domain controllers registered in DNS are not connected to the network or are not running.
        For information about correcting this problem, click Help.

    It is worth noting that update.DOMNAME.A.B is the current domain controller, to which I'd like to add another controller named PDC.DOMNAME.A.B. The IP address of update.DOMNAME.A.B is 192.168.200.1 and that of pdc.DOMNAME.A.B is 192.168.200.100. Querying DNS on both machines returns correct results. Any ideas?
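
    Some hedged verification steps from the server being promoted (DOMNAME.A.B as in the question):

        rem should return update.DOMNAME.A.B, with its A record pointing at 192.168.200.1
        nslookup -type=SRV _ldap._tcp.dc._msdcs.DOMNAME.A.B
        nslookup update.DOMNAME.A.B
        rem on the existing DC, if the Windows Support Tools are installed
        dcdiag /test:dns

    Also make sure the new server's NIC uses 192.168.200.1 as its only DNS server while joining; pointing at an outside resolver is a common cause of exactly this error. And if records for the dead PDC still linger in DNS and the directory, a metadata cleanup with ntdsutil may be needed as well.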

    Read the article

  • What's the risk of running a Domain Controller so that it is accessible from the internet?

    - by Adrian Grigore
    I have three remote dedicated web servers at different web hosts. Adding them to a common domain would make a lot of administration tasks much easier. Since two of the servers are running Windows 2008 R2 Standard, I thought about promoting them to Domain Controllers in order to set up the Windows domain. There's another thread on Serverfault that recommends this. At the same time, I've read in many places that this is not a good idea, because a domain controller should always be behind a firewall on a LAN. But I can't set up something like that, because I don't have a LAN with a static IP accessible from the internet; in fact I don't even have a Windows server in my LAN.

    What I have not found out is why exposing a DC to the internet would be a bad idea. The only risk I can see is that if someone penetrates one of my web servers, it should be much easier to penetrate the others as well. But as far as I can see, that's the worst-case scenario, since I am only joining my web servers to that domain, not any computers from my local network. Is this the only downside, or does it also make it easier to penetrate one of my web servers in the first place? Thanks, Adrian
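
    Part of the exposure is that a DC must listen on a wide set of ports (DNS, Kerberos, LDAP, SMB, RPC), each of which becomes internet-facing attack surface. If the setup is attempted anyway, one hedged mitigation sketch is to scope those ports to the peer servers' addresses only (the IP below is a placeholder, and the full AD port list is longer than this single rule):

        netsh advfirewall firewall add rule name="LDAP from peer DCs only" dir=in action=allow protocol=TCP localport=389 remoteip=203.0.113.10

    A commonly suggested alternative is a VPN or tunnel between the servers, so the domain traffic never rides the public interface at all.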

    Read the article

  • USB Wifi will not connect on Windows 7 (Even though the driver installs OK)

    - by Pete Roberts
    Windows 7 will not connect to a WiFi network using a USB network adapter. I have 3 adapters: a Senoa SUB 364 (EXT), a Repeatit SU2410 USB V2 and a ZyXEL G202. All of these devices install OK on Windows 7 Home Premium, on my desktop PC (64-bit) and on my Asus netbook (32-bit). In each case the adapter can be enabled/disabled, and the driver properties say it is working correctly. But when I try to connect to a network, Windows 7 behaves as though the adapter does not exist and reports no networks.

    The netbook has an integrated adapter which works perfectly under Windows and connects to any of the 3 networks available to me. I have done all the checks I can on the configuration. What seems odd to me is that this happens with all 3 devices on 2 different Windows 7 PCs, both of which are working perfectly in every other respect. That suggests the common denominator is me, and I must be doing something wrong... What's also strange is that I cannot find any similar problems reported on any of the forums. From what reading I've been able to do, it seems like the WiFi virtualisation layer in Windows 7 is not recognising the adapters, which suggests I'm missing a configuration option somewhere. Looking forward to finding out if I'm not alone or just being stupid. Pete
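
    A few hedged checks from an elevated command prompt; if the WLAN AutoConfig service isn't claiming the adapter, the symptoms would look exactly like this (driver "working", yet no networks listed):

        rem does Windows register the USB device as a WLAN adapter at all?
        netsh wlan show drivers
        rem should list nearby SSIDs if the radio is actually scanning
        netsh wlan show networks
        rem WLAN AutoConfig must be running for Windows to manage wireless
        sc query wlansvc

    If show drivers comes back empty while Device Manager says the device is fine, the vendor driver may be installing only its own (non-native-WiFi) connection utility, which Windows 7's built-in wireless UI ignores.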

    Read the article

  • What is the best keyboard for typing speed (not layouts)

    - by Gapton
    So I am a programmer, and I like playing typing-speed games. My typing speed is, for common English words, 85 to 90 wpm, max 95. I type on various devices (my laptop, desktop, office PC...) and they all have slightly different keyboards. Being a curious programmer, I wonder what type of keyboard allows the highest possible typing speed. Or let me phrase it another way: what type of keyboard do people use in typing-speed contests?

    Here is something I know, and feel I can share: it must be a wired keyboard. I can feel the lag as I type this on my wireless keyboard, even though it is a slightly more expensive model which claims to have zero lag. I know people prefer a mechanical keyboard for the haptic feedback, though I have not tried one. It lasts longer and is noisy, and it also does not have the problem of normal keyboards where pressing many keys at once jams the signals so the computer only receives one or two of them.

    I personally prefer those "thin profile" keyboards. I type a lot, and 95 wpm puts me in the top 5% (this is of course just on a gaming site). When I type on fat keyboards, my fingers have to travel a much longer distance before the keys actually click. I find myself typing much faster on the thin-profile keyboards found on my laptop: my fingers only hover over the keys and need to press a short distance, each stroke takes less force, and light rapid strokes are what make me type fast. On a fat keyboard I am forced to use heavy strokes, and this slows me down.

    There must be some people out there who are keyboard scientists, who actually do experiments and user tests with different setups. It would be interesting to understand more about the things we use every day, not just for work but for a majority of our communications.

    P.S. This is about hardware, not about switching keyboard layouts to Dvorak.

    Read the article

  • Cacti not working for SNMP data sources

    - by lorenzo-s
    I installed the cacti and snmpd packages on a Debian server. I'm able to display common graphs in Cacti (such as memory usage, load average, logged-in users, etc.) using the data templates listed as Unix. Now I want to replace these graphs with new ones using SNMP data sources, because I see there is also CPU usage there, and because I may well have to manage multiple hosts in the future. So I installed snmpd on the machine and left snmpd.conf as it is.

    In Cacti, I created three new data sources from SNMP templates for the host 127.0.0.1:

        ucd/net - CPU Usage - Nice
        ucd/net - CPU Usage - System
        ucd/net - CPU Usage - User

    Then I created a new graph from the template ucd/net - CPU Usage, and selected the three data sources in the Graph Item Fields section. The graph is now enabled and running, but empty: no data has been collected.

    Under Console - Devices my SNMP host is listed as up and running:

        System: Linux ip-xx-xx-xxx-xxx 3.2.0-23-virtual #36-Ubuntu SMP Tue Apr 10 22:29:03 UTC 2012 x86_64
        Uptime: 929267 (0 days, 2 hours, 34 minutes)
        Hostname: ip-xx-xx-xxx-xxx
        Location: Sitting on the Dock of the Bay
        Contact: Me [email protected]

    In SNMP Options I left everything as it is:

        SNMP Version: Version 1
        SNMP Community: public
        SNMP Timeout: 500 ms
        Maximum OID's Per Get Request: 10

    In Console - Utilities - Cacti Log I get multiple warnings (two for each data source) every 5 minutes:

        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[2] DS[18] WARNING: Result from SNMP not valid. Partial Result: U
        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.15.0'
        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[1] DS[9] WARNING: Result from SNMP not valid. Partial Result: U
        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.11.52.0'
        10/29/2012 01:40:01 PM - CMDPHP: Poller[0] Host[2] DS[19] WARNING: Result from SNMP not valid. Partial Result: U
        10/29/2012 01:40:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.6.0'
        [...]

    I have the feeling I'm missing something, but I cannot get it...
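
    A hedged observation: the failing OIDs live under the UCD tree (.1.3.6.1.4.1.2021), which Debian's stock snmpd.conf does not expose; the default view is limited to a small system subtree. Querying one of the OIDs from the log directly should confirm it:

        snmpget -v1 -c public 127.0.0.1 .1.3.6.1.4.1.2021.4.15.0

    If that times out while sysDescr works, widening access in /etc/snmp/snmpd.conf is one illustrative fix (then restart snmpd):

        rocommunity public 127.0.0.1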

    Read the article

  • EasyPHP Web Setup

    - by Dominique
    I've tried to set up EasyPHP locally and make it visible on the web via DynDNS, which I've done successfully many times before, but now it just doesn't work; maybe I've forgotten something... (The "server" is a common workstation.) Here is what I have done:

        1) Installed EasyPHP (with an index.php/html file in the WWW folder)
        2) Changed the port in the config to port 80
        3) Forwarded port 80 to the server IP in my router configuration
        4) Added the server to the router DMZ

    I also tried removing the antivirus/firewall. I've installed PortListener, pointed it at port 80, and when I access "myname.dyndns.com" it says:

        Client connected
        GET / HTTP/1.1
        Host: xyz.dyndns-remote.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; fr; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12 (.NET CLR 3.5.30729)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive

    So the server is reachable from the web and receives the connection successfully, but my browser says the connection failed and shows nothing...
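
    Since PortListener (standing in for the web server) answers fine, one hedged suspicion is that EasyPHP's Apache is bound to the loopback interface only, so it never sees connections arriving on the LAN address the router forwards to. A sketch of what to look for in EasyPHP's Apache httpd.conf (the path varies by version):

        # change a loopback-only binding such as
        #   Listen 127.0.0.1:80
        # to all interfaces:
        Listen 0.0.0.0:80

    Then restart Apache and verify from a Windows command prompt that it now listens on the LAN side:

        netstat -an | findstr :80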

    Read the article

  • Set default MySQL connect charset for PHP (in RHEL)?

    - by Martijn Heemels
    We're running a hundred or so legacy PHP websites on an older server which runs Gentoo Linux. When these sites were built, latin1 was still the common charset, both in PHP and MySQL. To make sure those older sites used latin1 by default, while still allowing newer sites to use utf8 (our current standard), we set the default connect charset in php.ini:

        mysql.connect_charset = latin1
        mysqli.connect_charset = latin1
        pdo_mysql.connect_charset = latin1

    Specific, more modern sites could override this in their bootstrapping code with:

        <?php mysql_set_charset("utf8", $dsn );

    ...and all was well. Now the server is overloaded and we're no longer with that hoster, so we're moving all these sites to a faster server at our standard hoster, which uses RHEL 5 as their OS of choice. In setting up this new server I discover to my surprise that the *.connect_charset directives are a Gentoo-specific patch to PHP, and RHEL's version of PHP doesn't recognize them!

    Now how do I set PHP to connect to MySQL with the latin1 charset? I thought about setting a default in my.cnf but would prefer not to force every app and client to default to latin1. Our policy is to use utf8, and we'd like to restrict the exception to PHP only. Also, converting every legacy site to properly use utf8 is not doable, since many are of the touch 'm and you break 'm kind. We simply don't have the time to go fix them all.

    How would I set a default mysql/mysqli/pdo_mysql connection charset to latin1 for PHP, while still allowing individual scripts to override this to utf8 with mysql_set_charset()?
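
    Without the Gentoo patch there is no php.ini-wide knob for this, but a hedged per-connection sketch (connection names are illustrative) shows the idea for sites that can be touched at a single bootstrap point:

        <?php
        // PDO: issue SET NAMES the moment the connection opens
        $pdo = new PDO('mysql:host=localhost;dbname=legacy', $user, $pass,
            array(PDO::MYSQL_ATTR_INIT_COMMAND => 'SET NAMES latin1'));

        // mysqli equivalent
        $db = mysqli_connect('localhost', $user, $pass, 'legacy');
        mysqli_set_charset($db, 'latin1');

    A server-side alternative sometimes suggested is init_connect = "SET NAMES latin1" in my.cnf, but note it applies to every non-SUPER client, not just PHP, and a later mysql_set_charset('utf8') in a modern site would still override it per connection.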

    Read the article

  • bind9 "error sending response: host unreachable"

    - by wolfgangsz
    I have a number of DNS servers, all running bind9 (9.5.1, to be specific) under Fedora. Four of them are slaves, fed by a common master for our public DNS. They are located on the public gateways of our various offices. One of them has tons of messages in its log files similar to these:

        Jul 21 17:26:18 gateway named[3487]: client 10.171.3.8#52500: view internal: error sending response: host unreachable

    I wonder where that comes from. The firewall is open on port 53 between the two machines (10.171.3.8 is an internal DNS server located on a Windows domain controller). The internal domains do NOT list the gateway as a name server (so there should not be any attempts at replicating the zones), and the gateway does not handle any internal DNS. The clients in these messages vary between the two domain controllers on the internal network and a third internal name server (running bind9 on Debian in a different segment of the network). Any pointers are highly welcome.

    In response to the first reply: the issue with this really is that tcpdump doesn't show any problems. Here is an extract from "tcpdump -i any port 53":

        09:13:38.283308 IP valine.aminocom.com.61815 > ns-pri.ripe.net.domain: 14075 PTR? 166.225.58.95.in-addr.arpa. (44)
        09:13:42.007410 IP gateway-eng.aminocom.com.37047 > alanine.aminocom.com.domain: 35410+ PTR? 12.3.172.10.in-addr.arpa. (42)

    At the same time, the DNS log shows:

        Jul 22 09:13:38 gateway named[3487]: client 10.171.3.6#61300: view internal: error sending response: host unreachable
        Jul 22 09:13:40 gateway named[3487]: client 10.172.3.12#56230: view internal: error sending response: host unreachable
        Jul 22 09:13:40 gateway named[3487]: client 10.171.3.8#55221: view internal: error sending response: host unreachable
        Jul 22 09:13:49 gateway named[3487]: client 10.171.3.8#51342: view internal: error sending response: host unreachable

    So clearly at 09:13:40 there were two unsuccessful attempts to respond to internal machines (10.172.3.12 and 10.171.3.8, both DNS servers), but nothing in the tcpdump output.
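
    A hedged way to widen the capture: "host unreachable" surfaces in named when the kernel gets an ICMP error (or has no route) while sending the UDP reply, and a port-53-only filter will never show the ICMP leg. Something like:

        tcpdump -ni any 'port 53 or icmp'

    might catch "destination unreachable" packets arriving from an intermediate router or from a stateful firewall timing out its UDP state. Checking the route the kernel would pick at the moment the errors appear (ip route get 10.171.3.8) is another cheap test.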

    Read the article

  • Wrong CSS MIME type with Roundcube 0.5 beta and nginx

    - by Julien Vehent
    I'm running into a CSS problem. This is a setup based on Debian Squeeze (nginx/0.7.67, php5/cgi) on which I installed the latest Roundcube 0.5 beta. PHP is properly processed and login works fine, but the CSS files are not loaded, and Firefox is throwing the following errors:

        Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/common.css?s=1290600165 was not loaded because its MIME type, "text/html", is not "text/css".
        Source File: https://webmail.example.net:10443/roundcube/?_task=login
        Line: 0
        Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/mail.css?s=1290156319 was not loaded because its MIME type, "text/html", is not "text/css".
        Source File: https://webmail.example.net:10443/roundcube/?_task=login
        Line: 0

    As far as I understand, nginx doesn't see the .css extension (because of the ?s= argument) and thus sets the MIME type to the default value, text/html. Should I fix this in nginx (and how?), or is it Roundcube-related?

    Edit: It seems that it's nginx-related. The content type isn't set for any type other than text/html. I had to manually include the following declarations to force the CSS and JS content types. That's ugly, and I never had the problem before... any idea?

        location ~ \.css {
            add_header Content-Type text/css;
        }
        location ~ \.js {
            add_header Content-Type application/x-javascript;
        }
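
    For what it's worth, nginx matches locations against the URI path only, with the query string stripped, so the ?s=... argument by itself shouldn't change the type. A more likely culprit, offered as a hedged guess, is a missing MIME map in this server's configuration; the usual shape is:

        http {
            include       /etc/nginx/mime.types;
            default_type  application/octet-stream;
            # ... rest of the configuration
        }

    With the map loaded, the add_header workarounds should become unnecessary (and they are slightly off anyway: add_header appends a second Content-Type header rather than replacing the one nginx computed).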

    Read the article

  • Clicking a link in IE6 doesn't load page (internal DNS entry on our intranet)

    - by Callum
    I have a very strange problem that only affects some versions of IE6. The problem does affect IE 6.0.2900.5512, but does not seem to affect 6.0.3790.3959.

    Basically, I work for a company and we have an intranet. While I'm not an expert on "internal DNS pointers", what I was able to do was create a website (let's say about football), so that when an employee sitting behind the company firewall types the word "football" into the address bar of their web browser, they get redirected to a particular server. I am told this is some kind of "internally pointing DNS entry". So, I've set one of these up, and I have placed a link to it on our company intranet page.

    However, when the link is clicked in IE 6.0.2900.5512, the page goes blank. Clicking "refresh" then loads the correct page (the one specified in the link). Can anyone help me out here? I have tried changing the way the URL is formed, everything from //football to http://football/ etc. The link works fine in every other browser and in IE7+, but unfortunately IE6 is still the most common browser in use at my organisation.
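
    A couple of hedged sanity checks from an affected client, just to separate name resolution from browser behaviour:

        rem does the single-label name resolve via the DNS suffix search list?
        nslookup football
        rem rules DNS in or out while bypassing IE entirely
        ping football

    If both succeed instantly, the blank-then-works-on-refresh pattern looks like a client-side IE6 quirk rather than the DNS entry itself; automatic proxy detection racing the first request is one frequently blamed cause.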

    Read the article

  • Memory leak in Google Chrome

    - by jasondavis
    As a developer it is very common for me to have 2-3 different IDEs open; 10-15 Google Chrome windows, which can hold up to 200 open tabs (I know I get out of hand sometimes); Photoshop; a couple of Twitter bots for promo; and a few other programs. Even so, my system still runs fast and smooth. I have an i7 processor with 12 GB of RAM.

    With all my usual stuff running, my physical memory usage is around 50-60%, but over the course of the day (or much less, even) it gradually grows to 98%. The highest memory-usage processes are from Google Chrome. If I sort the Task Manager by memory usage and end the single highest process, which is always a Chrome one, memory usage jumps back down to about 60%. Ending that one process leaves all my Chrome windows open and usable, so it doesn't affect me at all.

    Based on this, I am assuming that the one runaway process is likely Adobe Flash, since memory also climbs to 98% much faster when I am using Flash content like video or music players. But even without using any of them it still climbs up to that high number eventually. Has anyone else experienced similar results?

    Read the article

  • Postfix installation error on Ubuntu

    - by kgpdeveloper
    How do I fix this error on Ubuntu 10.04?

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        postfix is already the newest version.
        The following packages were automatically installed and are no longer required:
          libaprutil1-dbd-sqlite3 libcap2 apache2.2-bin libapr1 libaprutil1-ldap libaprutil1 php5-common
        Use 'apt-get autoremove' to remove them.
        0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
        1 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        Setting up postfix (2.7.0-1) ...
        Postfix configuration was not changed.  If you need to make changes, edit
        /etc/postfix/main.cf (and others) as needed.  To view Postfix configuration
        values, see postconf(1).
        After modifying main.cf, be sure to run '/etc/init.d/postfix reload'.
        Running newaliases
        newaliases: warning: valid_hostname: numeric hostname: 202002
        newaliases: fatal: file /etc/postfix/main.cf: parameter myhostname: bad parameter value: 202002
        dpkg: error processing postfix (--configure):
         subprocess installed post-installation script returned error exit status 75
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
         postfix
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Even if I reboot, the same error shows up. Thanks for the help.
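
    The log itself names the problem: myhostname in /etc/postfix/main.cf is the purely numeric value 202002, which Postfix rejects. A hedged fix (the hostname below is a placeholder; use the machine's real FQDN):

        sudo postconf -e 'myhostname = mail.example.com'
        sudo dpkg --configure -a          # lets the failed postinst finish
        sudo /etc/init.d/postfix restart

    If /etc/hostname (or a DHCP-supplied name) is also 202002, renaming the host keeps the bad value from coming back on the next reconfigure.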

    Read the article

  • iPhone Remote with iTunes Library via VPN

    - by sudo work
    Alright, so I'm currently behind a network router (not under my control). The router performs NAT and somehow prevents a computer from scanning other nodes; at least, you're unable, in this instance, to locate an iTunes library. You can, however, communicate with a node's open ports if the local IP address and port are known. I haven't actually tried port scanning a specific IP using nmap or another tool yet.

    I've tried one solution that removes the router from the picture entirely (to verify that everything works without its influence): I set up an access point using my iPhone and tethered the computer with the library to it. From there, I was able to pair my library and the iPhone Remote application, and control of the library worked normally. This solution is not ideal, however, because my computer actively uses bandwidth and cannot stay tethered to my 3G connection.

    A viable solution for me is to use a common VPN connection, which I have set up on a remote Ubuntu (Intrepid) server. Both my computer and iPhone are able to access the VPN via PPTP. The server runs pptpd as the VPN server, and I'm using iptables to perform IP masquerading and traffic forwarding. However, I still cannot connect the library to the phone, even though I can see both devices on the VPN subnet (192.168.0.0/24); SSH and such work fine.

    What settings on the VPN server must I change to get this to work? Also, how can I assign static IP addresses to the various PPTP clients, based on MAC addresses?
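
    Worth knowing: Remote discovers iTunes libraries via Bonjour (multicast DNS), and multicast doesn't traverse a routed PPTP tunnel, which would explain why SSH works while the library stays invisible. A hedged sketch of two server-side pieces (both are standard options, but untested in this exact setup; names, secrets and addresses are placeholders):

        # /etc/avahi/avahi-daemon.conf -- repeat mDNS between the VPN interfaces
        [reflector]
        enable-reflector=yes

        # /etc/ppp/chap-secrets -- fixed IPs are assigned per PPP username
        # (pptpd has no notion of client MAC addresses)
        # client    server    secret       IP addresses
        laptop      pptpd     "secret1"    192.168.0.10
        iphone      pptpd     "secret2"    192.168.0.11

    The fourth chap-secrets column is the documented way to pin a client's tunnel IP, which answers the last question, albeit by username rather than MAC.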

    Read the article

  • How do I install the pdo_mysql driver on Red Hat Enterprise Linux 6.1?

    - by Will Martin
    I have a RHEL box running PHP 5.3.3, which was installed using the binary packages provided by yum. I have installed the php-pdo package:

        # yum info php-pdo
        Loaded plugins: product-id, rhnplugin, subscription-manager
        Updating Red Hat repositories.
        Installed Packages
        Name        : php-pdo
        Arch        : x86_64
        Version     : 5.3.3
        Release     : 3.el6_1.3
        Size        : 168 k
        Repo        : installed
        From repo   : rhel-x86_64-server-6
        Summary     : A database access abstraction module for PHP applications
        URL         : http://www.php.net/
        License     : PHP
        Description : The php-pdo package contains a dynamic shared object that will add
                    : a database access abstraction layer to PHP. This module provides
                    : a common interface for accessing MySQL, PostgreSQL or other
                    : databases.

    It appears to be working correctly for SQLite databases, but not MySQL. There's no file including pdo_mysql.so in /etc/php.d, and there is no copy of pdo_mysql.so in /usr/lib64/php/modules. I'm pretty sure I just need the driver file and a line in the PHP configuration. A yum search pdo mysql didn't turn up any useful packages, and Google has failed me. If I were on Ubuntu or Debian, I'd apt-get install php5-mysql and be done with it. So... where in Red Hat land do I get a copy of pdo_mysql.so, and how do I install it properly?
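
    A hedged pointer: on RHEL the MySQL drivers (mysql, mysqli and pdo_mysql) ship together in the php-mysql package rather than with php-pdo, which is why searching for "pdo" finds nothing. Something like:

        yum install php-mysql
        php -m | grep -i pdo_mysql     # the module should now be listed
        service httpd restart          # reload PHP if it runs under Apache

    The package drops pdo_mysql.so into /usr/lib64/php/modules and a matching .ini into /etc/php.d, so no manual configuration line should be needed.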

    Read the article

  • How to list rpm packages/subpackages sorted by total size

    - by smci
    Looking for an easy way to postprocess rpm -q output so it reports the total size of all subpackages matching a regexp; e.g. see the aspell* example below. (Short of scripting it with Python/Perl/awk, which is the next step.)

    Motivation: I'm trying to remove a few GB of unnecessary packages from a CentOS install, so I'm trying to track down things that are a) large, b) unnecessary, and c) not dependencies of anything useful like GNOME. Ultimately I want to pipe the output through sort -n to see what the space hogs are, before doing rpm -e.

    My reporting command looks like this [1]:

        cat unwanted | xargs rpm -q --qf '%9.{size} %{name}\n' > unwanted.size

    and here's just one example where I'd like to see rpm's total for all aspell* subpackages:

        root# rpm -q --qf '%9.{size} %{name}\n' `rpm -qa | grep aspell`
          1040974 aspell
         16417158 aspell-es
          4862676 aspell-sv
          4334067 aspell-en
         23329116 aspell-fr
         13075210 aspell-de
         39342410 aspell-it
          8655094 aspell-ca
         62267635 aspell-cs
         16714477 aspell-da
         17579484 aspell-el
         10625591 aspell-no
         60719347 aspell-pl
         12907088 aspell-pt
          8007946 aspell-nl
          9425163 aspell-cy

    Three extra nice-to-have things:

        1) List the dependencies/depending packages of each group (so I can figure out the uninstall order).
        2) Group them by package group; that would be totally neat.
        3) Human-readable size units like 'M'/'G' (as ls -h does); can be done with a regexp and rounding on the size field.

    Footnote: I'm surprised up2date and yum don't add this sort of intelligence. Ideally you would want to see a tree of group-package-subpackage, with rolled-up sizes.

    Footnote 2: I see yum erase aspell* does actually produce this summary - but not in a query command.

    [1] where unwanted.txt is a text file of unnecessary packages, obtained by diffing the output of:

        yum list installed | sed -e 's/\..*//g' > installed.txt
        diff --suppress-common-lines centos4_minimal.txt installed.txt | grep '>'

    and centos4_minimal.txt came from the Google doc given by that helpful blogger.
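
    Since awk was already on the table, a hedged one-liner for the aspell example, summing the size column for names matching a pattern and printing a human-readable total:

        rpm -qa --qf '%{size} %{name}\n' | awk '/aspell/ { sum += $1 } END { printf "%.1f MB total\n", sum / 1048576 }'

    Swapping the /aspell/ pattern for another regexp generalises it, and piping the same rpm -qa output through sort -rn gives the per-package space-hog view mentioned above.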

    Read the article

  • Auto-archive IMAP mail folders on OS X

    - by Pradeep
    Hi, I am trying to achieve the following:

        1) Download all messages from the mail server (and remove downloaded messages from the server).
        2) Downloaded messages should go into a local mailbox, preserving the folder structure as it was defined on the server.
        3) The download process should be automatic and shouldn't create duplicates.

    I am on OS X and looking for solutions using Apple Mail, Thunderbird or similar. So far I have found that POP is not the way to go, as it loses the folder structure and can potentially cause duplicates. The solution described here seems very good but isn't yet available for Thunderbird or Apple Mail: http://getsatisfaction.com/mozilla_messaging/topics/auto_archive_and_keep_folder_structure. Another alternative is Outlook, which has auto-archive, but it is paid and I think exports to pst instead of the more common mbox format. Yet another alternative is http://www.pop4.org/, which adds folder management to POP, but I don't think that is going to become usable soon. Any other, better solutions? Thank you.
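
    One hedged possibility is OfflineIMAP (installable via Homebrew or MacPorts): it mirrors the server's folder tree into a local Maildir, tracks UIDs so repeated runs don't create duplicates, and can be run from cron or launchd for the automatic part. A minimal ~/.offlineimaprc sketch (host and names are placeholders; note it synchronises rather than deletes, so emptying the server afterwards is a separate step):

        [general]
        accounts = Archive

        [Account Archive]
        localrepository = Local
        remoterepository = Remote

        [Repository Local]
        type = Maildir
        localfolders = ~/MailArchive

        [Repository Remote]
        type = IMAP
        remotehost = imap.example.com
        remoteuser = you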

    Read the article

  • Basic IP address structure

    - by dannymcc
    We currently have a few servers, around 30-40 workstations and 16 phones. Each device has a static IP address. As an example, the standard settings for a new workstation are:

        IP: 192.168.1.XXX
        Subnet: 255.255.255.0
        Gateway: 192.168.1.99
        DNS: 192.168.1.50

    As I slowly explore new server OSes, virtualisation, etc., I am getting close to wanting a wider range of IP addresses. What I would like to do is separate the devices by IP as follows:

        Servers       192.168.1.XXX
        Workstations  192.168.2.XXX
        Printers      192.168.3.XXX
        Phones        192.168.4.XXX
        VMs           192.168.5.XXX

    Is this a bad idea, or is this a common way of doing things? My biggest concern is the phones and subnet masks. The phones are managed by our provider, although I have access to the server that runs them. Would I need to change the subnet mask to 255.255.0.0 on all devices, or only on those that change? For example, the phones don't need to connect to any devices other than the other phones and the phone server. So could I keep the phones on 192.168.1.XXX with a subnet mask of 255.255.255.0, and move everything I have complete ownership/control of to 192.168.X.XXX with a new subnet mask of 255.255.0.0? Would that work?
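
    The mask decides the answer: two hosts can talk directly only if they compute the same network address. A hedged illustration with the ipcalc utility (output format varies by distribution):

        ipcalc 192.168.1.50/24    # network 192.168.1.0/24 -- a /24 host sees only 192.168.1.x as local
        ipcalc 192.168.1.50/16    # network 192.168.0.0/16 -- a /16 host treats all 192.168.x.x as local

    Mixing masks on one wire tends to be fragile: a /16 workstation will ARP directly for a /24 phone, while the phone tries to reply via its gateway. The commonly recommended shapes are either one flat 192.168.0.0/16 (every device re-masked, phones included) or true /24 subnets with a router between them, avoiding the mismatch entirely.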

    Read the article

  • Sticky connection and HTTPS support for HAProxy

    - by Saif
    We have 2 HTTP load balancers running HAProxy and Heartbeat, in front of 4 Apache nodes, doing round-robin load balancing. The HTTP cluster is working fine. We are having a problem with our portal because it uses SSO: we need sticky-connection support in our HAProxy, and we also need load balancing for HTTPS traffic. Here's our HAProxy conf file:

        global
            # to have these messages end up in /var/log/haproxy.log you will
            # need to:
            #
            # 1) configure syslog to accept network log events. This is done
            #    by adding the '-r' option to the SYSLOGD_OPTIONS in
            #    /etc/sysconfig/syslog
            #
            # 2) configure local2 events to go to the /var/log/haproxy.log
            #    file. A line like the following can be added to
            #    /etc/sysconfig/syslog
            #
            #    local2.*    /var/log/haproxy.log
            #
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            chroot /var/lib/haproxy
            pidfile /var/run/haproxy.pid
            maxconn 4000
            user haproxy
            group haproxy
            daemon

            # turn on stats unix socket
            stats socket /var/lib/haproxy/stats

        #---------------------------------------------------------------------
        # common defaults that all the 'listen' and 'backend' sections will
        # use if not designated in their block
        #---------------------------------------------------------------------
        defaults
            mode http
            log global
            option httplog
            option dontlognull
            option http-server-close
            option forwardfor except 127.0.0.0/8
            option redispatch
            retries 3
            timeout http-request 10s
            timeout queue 1m
            timeout connect 10s
            timeout client 1m
            timeout server 1m
            timeout http-keep-alive 10s
            timeout check 10s
            maxconn 3000

        #---------------------------------------------------------------------
        # main frontend which proxys to the backends
        #---------------------------------------------------------------------
        frontend main *:5000
            acl url_static path_beg -i /static /images /javascript /stylesheets
            acl url_static path_end -i .jpg .gif .png .css .js
            use_backend static if url_static
            default_backend app

        #---------------------------------------------------------------------
        # static backend for serving up images, stylesheets and such
        #---------------------------------------------------------------------
        backend static
            balance roundrobin
            server static 127.0.0.1:4331 check

        #---------------------------------------------------------------------
        # round robin balancing between the various backends
        #---------------------------------------------------------------------
        backend app

        listen ha-http 10.190.1.28:80
            mode http
            stats enable
            stats auth admin:xxxxxx
            balance roundrobin
            cookie JSESSIONID prefix
            option httpclose
            option forwardfor
            option httpchk HEAD /haproxy.txt HTTP/1.0
            server apache1 portal-04:80 cookie A check
            server apache2 im-01:80 cookie B check
            server apache3 im-02:80 cookie B check
            server apache4 im-03:80 cookie B check

    Please advise. Thanks for your help in advance.
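
    A hedged sketch of the two missing pieces (names and addresses follow the config above; adjust before use). For stickiness, inserting a dedicated cookie (cookie SERVERID insert indirect nocache) usually pins SSO sessions more reliably than prefixing JSESSIONID, and each server line needs a unique cookie value; apache2-4 currently share "B", so they are indistinguishable to the stickiness logic. For HTTPS, HAProxy builds of this era cannot terminate SSL themselves, so the common pattern is TCP-mode balancing with source-IP affinity:

        listen ha-https 10.190.1.28:443
            mode tcp
            option ssl-hello-chk
            balance source
            server apache1 portal-04:443 check
            server apache2 im-01:443 check
            server apache3 im-02:443 check
            server apache4 im-03:443 check

    Source-IP affinity keeps each client on one node for the whole SSL session, at the cost of uneven balancing behind large NATs.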

    Read the article

  • SQL Developer cannot establish connection to Oracle DB with listener running

    - by lostinthebits
    I am working from home, connected to my work's VPN. I have tried to connect to the work DB with SQL Developer (both the latest version and the previous version) in the following environments:

        1) Mac OS X 10.8.5, with SQL Developer installed and launched directly on the iMac
        2) A VM on the same computer (guest Ubuntu 12.04 LTS), with SQL Developer installed and launched in the guest
        3) A VM on the same computer (guest Windows 7 Professional), with SQL Developer installed and launched in the guest

    In every case I get:

        Status: Failure - Test Failed: IO Error - The Network Adapter could not establish the connection.

    I have read DBA forums and googled, and the most common suggestion is that the Oracle listener is not up and running. I can conclusively say this is not the case, because I have the option of using Remote Desktop and accessing the Oracle DB in question from my work computer. If the listener were down, according to my DBA, no one would be able to connect. My sysadmin and DBA are stumped, so I assume it is something unique to my home system.

    The reason I do not want to continue with the Remote Desktop workaround is that Remote Desktop has an annoying (often infuriating) lag.
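
    A hedged way to split the problem: test plain TCP reachability of the listener port from the home machine while on the VPN, independent of SQL Developer (the hostname is a placeholder; 1521 is the usual listener port):

        nc -vz db.example.internal 1521    # does anything answer on the listener port?
        ping db.example.internal           # does the VPN resolve and route to the host at all?

    If the port is unreachable while RDP works, the VPN is likely split-tunnelling only certain subnets or filtering ports; comparing netstat -rn with the VPN up and down would show what actually gets routed through the tunnel.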

    Read the article

  • Favorite tricks with Linux kernel boot parameters?

    - by ~drpaulbrewer
    Most Linux bootloaders let you edit the kernel boot command line before booting. There are often lots of parameters available (Knoppix, for instance, has a list on their Knoppix Cheat Codes page), but most are applicable only to compatibility and special situations. A few are hidden gems. Common uses of these codes are to boot to single-user mode, alter screen modes or drivers, or specify an alternative root directory. Other, more exotic uses are possible: some Linux distributions let you copy the boot CD into RAM; others (e.g. Ubuntu) let you use preseed files to clone installs when setting up multiple systems, which is useful when installing a lab full of computers without having to babysit each install.

    What other tricks have you found useful in system installs, repairs, backups, restores, establishing temporary servers, or other tasks? To add your favorite trick to the list (as much of the code for these options runs either in initrd or in a service handler that detects the kernel parameters), please give: (1) the kernel boot line parameter, (2) what it does, and (3) the Linux distribution and any packages required to activate the feature. Thanks.
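
    One hedged example in the requested format, to seed the list:

        (1) Parameter:    toram
        (2) What it does: copies the live CD image into RAM at boot, so the disc can be
                          ejected and the whole system runs at memory speed
        (3) Distribution: Knoppix (documented among its cheat codes); Ubuntu live CDs
                          accept the same word via the casper boot scripts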

    Read the article

  • What used the Linux memory? Low cache, low buffer, not a VM

    - by Jason
    First of all, yes, I have read LinuxAteMyRAM, which doesn't explain my situation.

        # free -tm
                     total       used       free     shared    buffers     cached
        Mem:         48149      43948       4200          0          4         75
        -/+ buffers/cache:      43868       4280
        Swap:        38287          0      38287
        Total:       86436      43948      42488

    As shown above, the -/+ buffers/cache line indicates that the used-memory figure is very high. However, from the output of top, I don't see any process using more than 100 MB of memory. So, what used the memory?

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        28078 root      18   0  327m  92m  10m S    0  0.2   0:25.06 java
        31416 root      16   0  250m  28m  20m S    0  0.1  25:54.59 ResourceMonitor
        21598 root     -98   0 26552  25m 8316 S    0  0.1  80:49.54 had
        24580 root      16   0 24152  10m  760 S    0  0.0   1:25.87 rsyncd
         4956 root      16   0 62588  10m 3132 S    0  0.0  12:36.54 vxconfigd
        26703 root      16   0  139m 7120 2900 S    1  0.0   4359:39 hrmonitor
        21873 root      15   0 18764 4684 2152 S    0  0.0  30:07.56 MountAgent
        21883 root      15   0 13736 4280 2172 S    0  0.0  25:25.09 SybaseAgent
        21878 root      15   0 18548 4172 2000 S    0  0.0  52:33.46 NICAgent
        21887 root      15   0 12660 4056 2168 S    0  0.0  25:07.80 SybaseBkAgent
        17798 root      25   0 10652 4048 1160 S    0  0.0   0:00.04 vxconfigbackupd

    This is an x86_64 machine (not a common-brand server) running x86_64 Linux, not a container in a virtual machine. Kernel (uname -a):

        Linux 2.6.16.60-0.99.1-smp #1 SMP Fri Oct 12 14:24:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    Content of /proc/meminfo:

        MemTotal:     49304856 kB
        MemFree:       4066708 kB
        Buffers:         35688 kB
        Cached:         132588 kB
        SwapCached:          0 kB
        Active:       26536644 kB
        Inactive:     17296272 kB
        HighTotal:           0 kB
        HighFree:            0 kB
        LowTotal:     49304856 kB
        LowFree:       4066708 kB
        SwapTotal:    39206624 kB
        SwapFree:     39206528 kB
        Dirty:             200 kB
        Writeback:           0 kB
        AnonPages:      249592 kB
        Mapped:          52712 kB
        Slab:          1049464 kB
        CommitLimit:  63859052 kB
        Committed_AS:   659384 kB
        PageTables:       3412 kB
        VmallocTotal: 34359738367 kB
        VmallocUsed:    478420 kB
        VmallocChunk: 34359259695 kB
        HugePages_Total:     0
        HugePages_Free:      0
        HugePages_Rsvd:      0
        Hugepagesize:     2048 kB

    df reports no large consumption of memory from tmpfs filesystems.
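
    Some hedged places to look when neither processes nor page cache account for the usage (on a kernel this old the accounting itself can be misleading, so treat these as narrowing steps rather than answers):

        slabtop -o | head -20      # kernel slab caches; meminfo already shows ~1 GB of Slab
        egrep 'Slab|PageTables|Vmalloc|Committed' /proc/meminfo
        ps -eo rss= | awk '{ sum += $1 } END { printf "total RSS: %.0f MB\n", sum / 1024 }'

    If the process total lands near the 0.25 GB of AnonPages rather than 43 GB, the memory is being held inside the kernel, and a third-party module becomes the prime suspect; the Veritas components visible in top (vxconfigd and the other vx* daemons) maintain caches outside the normal page-cache accounting, so their tuning documentation may be worth checking.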

    Read the article

  • Cannot access the administrative interface of the SMC8014WG-SI provided by TimeWarner/RoadRunner

    - by Matt Rogish
    I just had RoadRunner internet/TV/voice installed, and I was given a WiFi router by the TimeWarner folks. The model is an SMC SMC8014WG-SI. Unfortunately, the wireless security it uses is WEP, which is, as we all know, completely insecure. The tech who was here didn't know how to change it to something like WPA2 with TKIP, and I was on hold for 20 minutes with the TimeWarner folks before I gave up.

    My problem is that the default web interface (http://192.168.0.1) isn't responding. I can ping it, and I can access the internet through it, but I can't get to the admin interface. I did a "hard reset" of the device, but still no dice. My suspicion is that the WiFi admin interface is disabled (a common setting), but the wired interface isn't working on either of my two laptops (I've tried two laptops with two different cables; no link light comes on).

    Am I SOL? Did they lock this down so I can't do what I want to do? Worst case, I just hook up my go-to WRT54G router to the other modem and leave this one turned off, but I'd rather use their hardware to avoid any "it's not our problem" in the future. Any thoughts? Thanks!!
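
    A couple of hedged probes from a wireless client (since the wired ports show no link), just to check whether the admin UI is closed or merely moved:

        nmap -p 1-1024 192.168.0.1      # is an admin port open anywhere in the low range?
        curl -I http://192.168.0.1/     # does the web server answer with any status at all?

    ISP-managed gateways of this kind are frequently administered from the provider's side, so a locked-down local UI is a real possibility; the probes above at least distinguish "disabled" from "listening on a nonstandard port".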

    Read the article
