Search Results

Search found 6180 results on 248 pages for 'max van der heijden'.


  • Thunderbird Reply2All mistake when having two addresses

    - by Bart van Heukelom
    I have a Gmail account and a second email address. Mail sent to the second address is forwarded to the Gmail address. I use Thunderbird to read my email, but there's a little problem with the Reply to All feature. I have only one of the addresses registered as my email address in Thunderbird. If somebody sends me an email on the other address and I click Reply to All, Thunderbird doesn't recognize that address as mine, so it adds it to the recipient list and I end up mailing myself. Is there anything I can do to fix this? Some way to let Thunderbird recognize both addresses?


  • How to do a quick find with forward slash in Chrome?

    - by Ton van den Heuvel
    In Firefox, the forward slash is mapped to Quick Find. Is it possible to make the forward slash behave the same in Google Chrome as it does in Firefox? To find a link and follow it on a page in Google Chrome, I now have to type Ctrl+F, <search query>, Esc, Enter. In Firefox this is just /, <search query>, Enter. Not being able to use the forward slash to find in page has been a real show-stopper for me, as I use it all the time in Firefox to browse documentation.


  • Why do users get an HTTP 404 error when attempting to clone a Mercurial repository over HTTP?

    - by Geoffrey van Wyk
    The repository is hosted on my PC. I use Apache with WAMP and TortoiseHg. I have set up users and passwords, and they are able to browse the repository in their browsers after entering their usernames and passwords. The problem is that when they try to clone the repository, they get an HTTP 404 file not found error. However, I can clone the repository on my own PC using their credentials. The problem must lie somewhere in the Mercurial setup.
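
    A first step that may help narrow this down (a sketch; the repository URL and log location below are placeholders, not taken from the question): run the clone with Mercurial's debug output and compare it against Apache's access log to see exactly which request is answered with the 404.

        # on a client machine: verbose output shows each HTTP request Mercurial makes
        hg --debug clone http://username@your-pc-hostname/hg/myrepo
        # then check Apache's access.log in the WAMP logs directory to find the
        # request URL that Apache answers with 404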


  • Cannot reach munin port on other AWS instance

    - by Amedee Van Gasse
    2 AWS instances, in the same region but different availability zones; one is in regular EC2 and the other is in a VPC. Both have an Elastic IP and both are 64-bit Amazon Linux AMI 2014.03.1. Both are running munin-node, and the instance in the VPC is also running munin-cron. I have added incoming TCP and UDP port 4949 to the security groups of both instances. On the munin node, I added an allow line with the IP address (as a regular expression) of the munin server to /etc/munin/munin-node.conf, and I bind munin-node to any interface using host *. Then I did sudo service munin-node restart and ran netstat:

        $ sudo netstat -at | grep munin
        tcp    0    0    *:munin    *:*    LISTEN

    So the port is open there. On the munin server AND on the munin node:

        $ nmap AMAZON-IP -p 80,4949 | grep tcp
        80/tcp   open   http
        4949/tcp closed munin

    On the munin node:

        $ nmap localhost -p 80,4949 | grep tcp
        80/tcp   open http
        4949/tcp open munin

    So from the outside, the HTTP port is open (Apache is running) but the munin port is closed. The node can't even reach the munin port on its own public IP address, but it can on localhost. I added port 80 as a sanity check, to be sure that there is network connectivity at all. So what am I overlooking here?
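
    A few checks that may help isolate where the block is (a sketch; the placeholder address must be replaced with the real one): confirm nothing on the node itself filters port 4949, confirm the allow pattern in munin-node.conf matches the address the munin server actually connects from, and test the port directly from the munin server.

        # on the munin node: is anything local filtering 4949?
        sudo iptables -L -n | grep 4949
        # which allow/host lines is munin-node actually using?
        grep -E '^(allow|host)' /etc/munin/munin-node.conf
        sudo service munin-node restart
        # from the munin server: talk to the node directly on the munin port
        telnet <node-ip-as-seen-by-the-server> 4949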


  • Removing files on a limit access backup server

    - by Bart van Heukelom
    I have an account on a backup server, but it's full, so I need to clear it. The problem is that it's only accessible via FTP, SFTP and rsync (no shell). Deleting lots of small files (as in, multiple full Linux installations), which I have to do, is not feasible over FTP/SFTP because those protocols cannot recursively delete directories in one command. (Yes, most clients will fake this by issuing all the separate commands for you, but the overhead is huge and the process takes several days - well, it crashes before that.) What do I do?
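
    A workaround that is often used when only rsync access is available (a sketch; the host and path are placeholders): sync an empty local directory over the directory that needs to go, and let rsync's --delete do the recursive removal on the server side in a single session.

        mkdir empty
        # mirroring the empty directory removes everything under the remote one
        rsync -a --delete empty/ user@backupserver:/path/to/directory-to-clear/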


  • SMTP IP - bad reputation, how do I work around it?

    - by Louis van Tonder
    I recently had a spamming incident and got listed on a blacklist. I have rectified the issue and been removed from the blacklist, but my IP reputation is now classified as a high-volume sender. What is the best way to rectify this? I have an additional IP address, and I am thinking of configuring my server to make outbound SMTP connections using that other IP. My questions are: How long does it take for my reputation to stabilize again? How do I configure my server/mail server to use a specified outbound IP? Setup: Server 2008 Web, hMailServer, 2 IPs configured on one NIC, cloud-based server. Your urgent help would be greatly appreciated. Cheers
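
    Whichever IP ends up sending, its DNS needs to be in order before the reputation can recover; a quick sanity check (a sketch; the IP and domain below are examples, not the asker's): confirm the reverse DNS of the new outbound address points at the mail host, and publish an SPF record that includes it.

        # the PTR record for the new outbound IP should resolve to the mail host name
        nslookup 203.0.113.25
        # example SPF TXT record for the sending domain
        # example.com.  IN  TXT  "v=spf1 ip4:203.0.113.25 -all"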


  • Mac OS X: Why does chown report "Operation not permitted"?

    - by josef.van.niekerk
    I am trying to do the following on my Mac (10.6.7):

        sudo chown myusername:wheel ./entries

    but Unix/Mac is returning "Operation not permitted". When I ls -lash the culprit file, it looks as follows:

        8 -rwxrwxrwx  1 myusername  staff  394B Apr 26 23:26 entries

    I've tried sudo, I've tried sudo su, nothing works. Any ideas what's up? The files I'm trying to chown were copied from my old Ubuntu box; most of them have been chowned recursively without problems, just this one is stuck and I don't understand why.
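
    On Mac OS X this error from chown, even as root, is often caused by BSD file flags such as the user-immutable flag rather than by permissions; a sketch of how one might check for and clear it on the file in question:

        # show BSD file flags (look for uchg or schg in the output)
        ls -lO entries
        # clear the user-immutable flag, then retry the chown
        sudo chflags nouchg entries
        sudo chown myusername:wheel entries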


  • How to maintain an EXT3 filesystem

    - by Reinoud van Santen
    Lately I have had several servers encounter a write error on an EXT3 filesystem and, as a result, remount the filesystem read-only. Understandably, on a production server this causes severe problems. On a reboot the filesystems were fixed, but on large partitions this takes a lot of time. After the filesystem was repaired, correcting several errors, the server ran well again. What can I do to minimize the rate at which this happens? I can't seem to find much information on periodically checking the filesystem(s) on a running server. Is it possible to change the way in which EXT3 / the system handles write errors? What would be a sane solution? All the servers this concerns are running CentOS Linux 5.4 or 5.5.
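
    Two tunables that are often relevant here (a sketch; the device name is an example): tune2fs controls both what the kernel does when it detects an error and how often a full check is forced at mount time, so the lengthy fsck happens on a schedule of your choosing rather than only after a failure.

        # show the current error behaviour and forced-check settings
        sudo tune2fs -l /dev/sda1 | grep -iE 'errors behavior|mount count|check'
        # remount read-only on errors (the conservative choice); "continue" and "panic" also exist
        sudo tune2fs -e remount-ro /dev/sda1
        # force a fsck every 30 mounts or every month, whichever comes first
        sudo tune2fs -c 30 -i 1m /dev/sda1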


  • EC2 Apache Server not reflecting changes to PHP files

    - by Josef van Niekerk
    We're running a Laravel application on an Amazon EC2 server with Apache installed. I've noticed on multiple occasions that the server doesn't respond to changes in PHP files, even after restarting Apache. For example, if I edit a file that I'm accessing through a URL and I break the syntax, I don't even get a PHP exception thrown. This is really strange, and has been sticking its head out more frequently these days. Is it possible Apache is caching the PHP files somewhere? Opcode caching, perhaps?
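
    A quick way to test the opcode-caching theory (a sketch; the settings shown are the usual suspects, not taken from the question): check whether APC or OPcache is loaded and whether it has been configured not to re-check file timestamps, which would explain edits being ignored until the cache is cleared.

        # is an opcode cache loaded at all?
        php -m | grep -iE 'apc|opcache'
        # typical culprits: stat/timestamp checks disabled, so edited files are never re-read
        php -i | grep -iE 'apc.stat|opcache.validate_timestamps|opcache.revalidate_freq'
        # after adjusting the relevant php.ini setting, restart Apache
        sudo service apache2 restart   # or httpd, depending on the distribution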


  • Route return traffic to correct gateway depending on service

    - by Marnix van Valen
    On my office network I have two internet connections and one CentOS server running a website (HTTPS on port 443). The website should be publicly accessible through the public IP of the first internet connection (ISP-1). The other internet connection, ISP-2, is the default gateway on the network. Both internet connections have routers (the household kind) with NAT, SPI firewalls, etc. The router on ISP-2 is a Netgear WNDR3700 (aka N600) with original firmware. The problem is that the website is unreachable. It looks like incoming traffic on ISP-1 reaches the server, but the return traffic is routed through ISP-2, effectively making the site unreachable. As far as I can tell, I can't do port-based routing on the WNDR3700. What are my options to make this work? I've been looking at implementing an iptables/routing-based solution on the server itself but haven't been able to make that work. Update: Note that the server has one network interface connecting it to both routers.
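
    For the routing-based approach on the server itself, source-based policy routing is the usual pattern (a sketch with assumed addresses: ISP-1's router at 192.168.1.1, the server's address on that subnet 192.168.1.10, and interface eth0; substitute the real values): replies that leave from the ISP-1 address go back through ISP-1's router instead of the default gateway.

        # add a second routing table for ISP-1 (one-time)
        echo "200 isp1" | sudo tee -a /etc/iproute2/rt_tables
        # that table's default route points at ISP-1's router
        sudo ip route add default via 192.168.1.1 dev eth0 table isp1
        # traffic sourced from the ISP-1 address uses that table
        sudo ip rule add from 192.168.1.10 lookup isp1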


  • USB seems to pause system

    - by Marco van de Voort
    I have an application that does some simple measuring, for which it polls a few hundred KB of data several times a second (8-25 times). The behaviour is not really dependent on the chipset (it happens on several mobos, Intel 965 to P55) or the OS (XP SP3 and Win7), and the make of the USB keyboard doesn't seem to matter either. I notice that sometimes when a USB keyboard is plugged in, the system pauses for say 500-1000 ms (about 900-1000 ms on disconnect, and 400-500 ms on the subsequent connect). It also happens for other USB devices (most notably mice and mass-storage devices), but only the first time such a device is connected to an installation. This disrupts the measurement and I would really like to get rid of it. I already tried to disable as much as possible (power saving, teletubby mode (*), etc.), and while this helped with the non-USB-related disruptions of the measurement, it doesn't help with the USB-related ones. (*) FYI: turning off themes (to classic/non-Aero) and turning off effects in System solved problems that occurred when minimizing/maximizing the app. Any pointers to look into? I'm a bit stuck with this.


  • Tar dereference only 1 level

    - by Bart van Heukelom
    I use the following pseudo-script to create a tar of my installed software:

        mkdir tmp
        ln -s /path/to/app1/bin tmp/app1
        ln -s /and/path/going/to/the-app-2 tmp/app2
        tar -c --dereference -f apps.tar tmp

    I need the --dereference option here to follow the links I just made in tmp. The reason I make the links in the first place is to store the directories in the archive under a different name than they have on the filesystem. Until now this has worked fine. However, I now have the situation that /path/to/app1 also contains links, and those I don't want to follow. Is this possible with some changes to the tar command? Or do I need to completely switch around the way I build the archive?
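
    One way to drop the symlink layer entirely, assuming GNU tar (a sketch using the same example paths): archive the real directories and rename them inside the archive with --transform, so --dereference is no longer needed and links inside the apps are stored as links.

        # store /path/to/app1/bin as "app1" and the-app-2 as "app2" without dereferencing
        tar -cf apps.tar \
            --transform='s,^bin,app1,' \
            --transform='s,^the-app-2,app2,' \
            -C /path/to/app1 bin \
            -C /and/path/going/to the-app-2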


  • Two SSL certs for a domain in DirectAdmin

    - by Bart van Heukelom
    If I were to get 2 SSL certificates, one for example.com and one for www.example.com, is there a way to install them both on the site example.com in DirectAdmin? The default interface only allows installing one for both versions. If not, can I separate the 2 domains into 2 sites? One of them would only be a redirection, so there wouldn't be any duplication of site files. (Please don't answer with "one certificate should work for both". It doesn't always. This is a DirectAdmin question)


  • rsync & rdiff-backup combination giving errors

    - by Maikel van Leeuwen
    On the server I make a backup every day with rdiff-backup, like:

        rdiff-backup /home/ /backup/home

    Then every week I want to make an offsite rsync backup over sshfs, like:

        rsync -avz /home/server/backup/home /backup/server-home/

    This is giving me the following error:

        Fatal Error: Previous backup to /backup/server-home/. seems to have failed.
        Rerun rdiff-backup with --check-destination-dir option to revert directory
        to state before unsuccessful session.

    Does anybody have a good solution to deal with this error/situation?
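
    The error message itself names the usual remedy (a sketch; the directory is the one from the error, and the --delete flag is a suggested addition, not part of the original commands): repair the destination it complains about, and keep the offsite mirror exact so a half-finished rdiff-backup session isn't preserved alongside newer data.

        # revert the destination named in the error to its last consistent state
        rdiff-backup --check-destination-dir /backup/server-home/
        # mirror exactly: files removed at the source are also removed offsite
        rsync -avz --delete /home/server/backup/home /backup/server-home/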


  • VNC unattended server (no user interaction)

    - by Louis van Tonder
    I worked on a proof of concept a while ago whereby I managed to get VNC going in full "unattended" mode, i.e. the VNC server dials in to the viewer, which is running in listening mode - the same concept as Single Click, but without the user interaction. I can't seem to locate my source files for this concept, although I have found my shortcut that worked on the viewer side to listen:

        "C:\Program Files\UltraVNC\vncviewer.exe" -listen 5007 /noauto /256colors

    I cannot, however, remember or locate my demo of what the server is doing and how to configure it. If I remember correctly, the server was also started with command-line parameters that "dialed" in to a remote IP/port that the viewer is listening on. Any ideas? Thanks
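
    For what it's worth, UltraVNC's server side can make an outbound ("add new client") connection from the command line; a sketch of what the server-side shortcut might have looked like, matching the listening port above (the viewer host is a placeholder, and the exact switches should be verified against the documentation of the installed UltraVNC version):

        "C:\Program Files\UltraVNC\winvnc.exe" -connect viewer-host:5007 -run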


  • sbs-server with 2 nics and 2 connections to the internet with different providers not working as it

    - by erik-van-gorp
    We have the following configuration: an SBS 2003 server in a domain (mydomain.com) with 2 network cards, each connected to a different network (provider), with different gateways - one for web and one for mail and clients. (We do this because the bandwidth we get from our providers is too small to handle all the mail (+spam) traffic and web services, so we took 2 providers.) DNS is as follows:

        www.mydomain.com   1.2.3.4
        mail.mydomain.com  5.6.7.8

    NIC 1 (192.168.1.3) is connected to the internet through a firewall at 192.168.1.1, with WAN address 1.2.3.4. NIC 2 (10.0.0.3) is connected to the internet through a firewall at 10.0.0.1, with WAN address 5.6.7.8. Both NICs have their default gateway set to their corresponding routers, and the metrics are set equal. (I know this isn't a supported config, but it works more or less.) In this configuration I can use RDP on both WAN addresses, and telnet to port 25 works as well on both. The issue now is that, since a few weeks, we get regular disconnections and website hiccups (timeouts), several per hour. If we set one router to a higher metric, that route no longer works. In short, I want the mail to route through NIC 2 and the web through NIC 1. Any better configuration (without installing a second mail server)?


  • Adobe software does not save to network share

    - by Bart van Heukelom
    I'm running Windows 7 inside VirtualBox on a Linux host. I have shared my Linux filesystem so it's accessible in Windows under \\vboxsvr\sharename, and I've mounted this share on S:. For most software it works fine, but Adobe software like Photoshop has problems with it: I can read from S: just fine, but if I try to save something it gives me the message "There are no more files". How can I make it able to write to the share?


  • Force ID of user created by apt-get

    - by Bart van Heukelom
    Context: I'm automatically installing postgresql-9.1 on an Ubuntu server with apt-get. This creates the required postgres user. The Postgres data is on an external volume that survives reinstalls. This data is obviously owned by the postgres user. The problem I'm having is that the ownership is not recorded under the name postgres, but under the UID that postgres had at creation time. When the server is reinstalled, postgres sometimes gets a different UID, and no longer owns the data directory, and thus does not work. Question: Can I force the UID of the user postgres created by apt-get to something fixed? Or is there another way to solve my problem? (As you may have deduced, this is on Amazon EC2 with the data on an EBS volume)
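
    One way to pin the UID, sketched below (120 is an arbitrary example for the UID/GID; any fixed, unused system ID works): create the postgres group and user yourself before apt-get runs, so the package's maintainer scripts should find the account already present and reuse it, and the data on the EBS volume keeps a stable owner across reinstalls.

        # pick a fixed, unused system UID/GID (120 is only an example)
        sudo addgroup --system --gid 120 postgres
        sudo adduser --system --uid 120 --ingroup postgres \
             --home /var/lib/postgresql --shell /bin/bash postgres
        # the package install can now reuse the existing account
        sudo apt-get install -y postgresql-9.1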

