Search Results

Search found 15423 results on 617 pages for 'uses clause'.

Page 429 of 617

  • Using 1and1.com Servers, SMTP Mail is Limited - Local XAMPP Server Works As Expected

    - by nicorellius
    I'm starting to not like 1and1.com that much. I've used them for years, but mainly for simple sites without much need for configuration. I know there are better hosting companies out there and I may go looking for them. The problem here is that on my local XAMPP server (sitting on a network with Comcast as the ISP), I have a PHP script that uses PEAR::Mail to send mail using MIME. The script works fine locally with either smtp.1and1.com or smtp.gmail.com, using the corresponding credentials, appropriate ports, etc. 1and1 tells me that I have to change the MX record on the domain where this script runs in order to make this work. That doesn't make sense to me. Now, I'm pretty new to all this, but how can that be the case? Why does my local server work just fine, out of the box, but their servers do not? I have asked them these questions, but they are very vague and I cannot get any good answers from them. Versions: PEAR 1.5.0, PHP 4.4.9, Zend Engine 1.3.0. My apologies in advance for my ignorance, and thanks for the help.
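
    (For anyone comparing notes: one quick way to see whether 1and1's mail server is rejecting the authentication from a given host is to test SMTP AUTH by hand. A minimal sketch, assuming smtp.1and1.com on submission port 587; swap in whatever host and port the script actually uses.)

        # from the machine that runs the script
        openssl s_client -starttls smtp -crlf -connect smtp.1and1.com:587
        # after the banner, see what the server advertises:
        EHLO example.com
        # then try AUTH LOGIN / AUTH PLAIN with the same credentials the script uses;
        # a 535 reply means the credentials or the sending host are being rejected,
        # while a timeout suggests the port is blocked on the hosting side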

    Read the article

  • Bypass proxy authentication [closed]

    - by Diego Queiroz
    My scenario: my network has a proxy that requires interactive authentication. When I access any URL, a username and password are requested to enable navigation. I do have a valid username/password (meaning I have permission to access external content). I do not have access to the proxy server (changing the proxy server is not an option).
    What I need: I need to bypass the interactive authentication process and make it automated.
    What I do NOT need/want: I do not want to hack the network, and I do not want to access unauthorized content. In other words, I just need a way to "save" my password on the computer (security is not a problem) so that applications that do not support this kind of interactive authentication can still access the internet (non-browser software that also uses the HTTP port, for example).
    My guess: my guess is to develop a new proxy server that runs on the local machine (e.g., a proxy for the network proxy). This local proxy would connect to the network proxy, authenticate, and forward the content. Of course this is a last resort; I would prefer not to have to write a proxy server. Does someone know another solution? (Any operating system.)
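
    (If the proxy accepts Basic authentication, the simplest workaround is often just to embed the credentials in the standard proxy environment variables, which most non-browser HTTP clients honour; if it insists on NTLM, a local authenticating forwarder such as Cntlm does essentially what is described above. A minimal sketch, with proxy.example.com:8080 as a stand-in for the real proxy address; special characters in the password need URL-encoding.)

        # put these in ~/.bashrc or the system-wide profile
        export http_proxy="http://username:password@proxy.example.com:8080/"
        export https_proxy="$http_proxy"
        export ftp_proxy="$http_proxy"
        # quick test with a client that honours the variables
        curl -I http://www.example.com/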

    Read the article

  • Compiled git from source, cannot access subversion repositories using git-svn

    - by haydenmuhl
    I'm setting up a CentOS dev box and need git. At first I tried to install git using yum, but I could not connect to a yum repository that had git. Next I downloaded the git source (version 1.7.1) and compiled it. When I run the following command:
        git svn clone svn+ssh://...
    I get the following error:
        Initialized empty Git repository in /root/main_ec/.git/
        Can't locate SVN/Core.pm in @INC (@INC contains: /usr/local/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi /usr/local/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi /usr/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl /usr/lib/perl5/vendor_perl/5.8.8/i386-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.8 /usr/lib/perl5/vendor_perl /usr/lib/perl5/5.8.8/i386-linux-thread-multi /usr/lib/perl5/5.8.8 .) at /usr/local/libexec/git-core/git-svn line 41.
    It looks like git-svn uses perl in some manner, but I don't know which packages I'm missing. Can anyone help?
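
    (SVN/Core.pm comes from the Subversion Perl bindings, which a source-built git does not provide. A rough sketch of the usual fix on CentOS; the package name is the stock one, and the PERL5LIB path is only an example to adjust.)

        # install the Subversion Perl bindings that git-svn loads at run time
        yum install subversion-perl
        # if the bindings land somewhere outside the @INC shown in the error,
        # point perl at them explicitly
        export PERL5LIB=/usr/lib/perl5/vendor_perl/5.8.8/i386-linux-thread-multi
        git svn clone svn+ssh://...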

    Read the article

  • How can I write automated tests for iptables?

    - by Phil Frost
    I am configuring a Linux router with iptables. I want to write acceptance tests for the configuration that assert things like: traffic from some guy on the internet is not forwarded, and TCP to port 80 on the webserver in the DMZ from hosts on the corporate LAN is forwarded. An ancient FAQ alludes to an iptables -C option which allows one to ask something like, "given a packet from X, to Y, on port Z, would it be accepted or dropped?" Although the FAQ suggests it works like this, for iptables (though maybe not for ipchains, which the FAQ's examples use) the -C option does not seem to simulate a test packet running through all the rules, but rather checks for the existence of an exactly matching rule. This has little value as a test. I want to assert that the rules have the desired effect, not just that they exist. I've considered creating yet more test VMs and a virtual network, then probing with tools like nmap for effects. However, I'm avoiding this solution due to the complexity of creating all those additional virtual machines, which is really quite a heavy way to generate some test traffic. It would also be nice to have an automated testing methodology that can also work on a real server in production. How else might I solve this problem? Is there some mechanism I might use to generate or simulate arbitrary traffic, then know whether it was (or would be) dropped or accepted by iptables?
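
    (One middle ground between grepping the rules and building a rack of test VMs is to fire real packets from a network namespace on the router itself. A rough sketch, assuming a reasonably recent iproute2 with namespace support; addresses and ports are placeholders, and this only exercises the INPUT path, so a second namespace on the far side would be needed to assert on FORWARD rules.)

        # build a throwaway "client" attached to the router by a veth pair
        ip netns add client
        ip link add veth0 type veth peer name veth1
        ip link set veth1 netns client
        ip addr add 10.99.0.1/24 dev veth0 && ip link set veth0 up
        ip netns exec client ip addr add 10.99.0.2/24 dev veth1
        ip netns exec client ip link set veth1 up
        # now send real traffic through the ruleset and assert on the outcome
        ip netns exec client nc -z -w 2 10.99.0.1 80 && echo "80 open" || echo "80 blocked"
        ip netns exec client nc -z -w 2 10.99.0.1 23 && echo "23 open" || echo "23 blocked"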

    Read the article

  • Windows 7 - User profile corrupted on standby/hibernate

    - by Dogbert
    I have a friend who uses Windows 7 for her home PC. She has a RAID1 array that is using up-to-date Intel Matrix storage drivers, and the entire array is backed up to a separate internal SATA HDD via Acronis True Image every night. Over the weekends, she lets her machine go into suspend after 4 hours of inactivity, and then later into hibernation after 6 hours of inactivity. Her Acronis backup system does nightly incremental backups, and full backups every Saturday night. She also has AVG Free Antivirus installed, which does full scans every Monday. So far, on two occasions, on Sunday morning, her user profile has been corrupt. I couldn't find any solution that allowed me to repair her profile, so I end up having to recreate her profile (as suggested by the MS Knowledge Base, http://windows.microsoft.com/en-us/windows7/fix-a-corrupted-user-profile; yep, no way to fix it, just clobber the whole thing), then copy over her data, recreate her Outlook profile, reconfigure all third-party applications, etc. It's a real nightmare, and takes 4 hours to do each time. Are there any suggestions on how to resolve this profile corruption? This was happening even before the RAID/Acronis solution was in place, but I thought I'd provide as much information as possible.

    Read the article

  • Script launching 3 copies of rsync

    - by organicveggie
    I have a simple script that uses rsync to copy a Postgres database to a backup location for use with Point In Time Recovery. The script is run every 2 hours via a cron job for the postgres user. For some strange reason, I can see three copies of rsync running in the process list. Any ideas why this might be the case? Here's the cron entry:
        # crontab -u postgres -l
        PATH=/bin:/usr/bin:/usr/local/bin
        0 */2 * * * /var/lib/pgsql/9.0/pitr_backup.sh
    And here's the ps list, which shows two copies of rsync running and one sleeping:
        # ps ax | grep rsync
        9102 ?  R  2:06 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        9103 ?  S  0:00 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        9104 ?  R  2:51 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
    And here's the uber-simple script that seems to be the cause of the problem:
        #!/bin/sh
        LOG="/var/log/pgsql-pitr-backup.log"
        base_backup_dir="/var/lib/pgsql/9.0/backups"
        wal_archive_dir="$base_backup_dir/wal_archives"
        pitr_archive_dir="$base_backup_dir/pitr_archives"
        timestamp=`date +%Y%m%d%H%M%S`
        backup_dir="$pitr_archive_dir/$timestamp"
        mkdir -p $backup_dir
        echo `date` >> $LOG
        /usr/bin/psql -U postgres -c "SELECT pg_start_backup('$backup_dir');"
        rsync -avW /var/lib/pgsql/9.0/data/ $backup_dir/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        /usr/bin/psql -U postgres -c "SELECT pg_stop_backup();"
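
    (For what it's worth, a single rsync invocation normally forks into several cooperating processes, roughly a sender, a receiver and a generator, so three entries per run is not by itself a sign that the script fired more than once. Checking the parent/child relationship makes that easy to confirm; a quick sketch:)

        # show the rsync processes as a tree; children of one parent mean one invocation
        ps axf | grep [r]sync
        # or print the parent PIDs explicitly
        ps -o pid,ppid,stat,etime,cmd -C rsync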

    Read the article

  • GRUB reporting wrong partition type

    - by plok
    It all started when I had to replace one of the disks that the software RAID 1 on this machine uses. From that moment on, I have not been able to boot into the Windows XP that is installed on the fourth hard drive, /dev/sdd. I am almost positive that the problem is related not to Windows but to GRUB: if I unplug all the other hard drives so that the Windows XP disk becomes /dev/sda, it boots with no problem. The problem seems to be that GRUB detects the wrong partition type, which I understand suggests that something is really messed up. This is what I get when I try to follow the steps that until now had worked like a charm:
        grub> map (hd0) (hd3)
        grub> map (hd3) (hd0)
        grub> root (hd3,0)
        Filesystem type is ext2fs, partition type 0xfd
    0xfd? That doesn't make sense. /dev/sdb and /dev/sdc are 0xfd (Linux raid), but not /dev/sdd:
        edel:~# fdisk -l
        [...]
        Disk /dev/sdd: 250.0 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x00048d89
           Device Boot    Start       End      Blocks   Id  System
        /dev/sdd1   *         1     30400   244187968+    7  HPFS/NTFS

        edel:/boot/grub# cat device.map
        (hd0)   /dev/sda
        (hd1)   /dev/sdb
        (hd2)   /dev/sdc
        (hd3)   /dev/sdd
    I have been trying to work this out for hours, to no avail. Can anyone point me in the right direction?
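
    (One thing worth checking from the GRUB shell is whether (hd3) is really the disk you think it is: after swapping a drive, the BIOS ordering and device.map can disagree, so the partition type GRUB prints may simply belong to a different disk. A rough sketch, using GRUB legacy's geometry command:)

        grub> geometry (hd0)
        grub> geometry (hd1)
        grub> geometry (hd2)
        grub> geometry (hd3)
        # the drive whose first partition reports partition type 0x7 is the real
        # Windows disk; point map/root/chainloader at that BIOS drive number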

    Read the article

  • Why am I unable to reach local network computers, but able to browse the web?

    - by Igor Zinov'yev
    I have a weird problem. Today, after turning my Ubuntu 9.10 PC on, I can't connect to my local network, but I can use the Internet. We have a single Windows 2003 server machine that acts as the local main DNS server, DHCP server and domain controller. Although it seems to give me the local IP address, I can not ping it, nor any other machine on the net. I have tried all of the below and it didn't help: rebooting; reconnecting to the network; forcing dhclient to renew the IP address; deleting and creating new connection profiles; plugging my machine into another network outlet. Maybe it has something to do with routing, because I had tampered with the routing tables the day before, but the tables seem OK to me:
        $ route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.0.0     0.0.0.0         255.255.255.0   U     1      0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 vboxnet0
        0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth0
    Our LAN uses a D-Link DI-604 router, and it looks to me as if I am connected to the network outside the router. I can not even access its administration page. Please at least suggest what I can do to solve this. P.S. What seems strangest to me is that I can access the PC in question from outside the network by opening a port on the router. I have managed to ssh to it from outside, but I still can't ping anything on the inside. P.P.S. Today I tried reinstalling network-manager with the --purge option, but it did no good. After that I created a new DHCP reservation for my PC in order to change my local IP, but that didn't change anything either. My PC is able to get a DHCP offer, but then it's unable to connect to any local computers. I am desperate.
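
    (Since the box can reach the outside world but not its neighbours, the first thing worth pinning down is whether ARP to the local hosts works at all. A short diagnostic sketch, with the gateway address taken from the routing table above:)

        # does the kernel ever learn a MAC address for the gateway or any LAN host?
        ping -c 3 192.168.0.1
        arp -n
        # query the gateway directly at layer 2, bypassing IP routing entirely
        sudo arping -I eth0 -c 3 192.168.0.1
        # and make sure nothing is being dropped locally
        sudo iptables -L -n -v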

    Read the article

  • Managing multiple IMAP accounts in Thunderbird

    - by baritoneuk
    I've been using Thunderbird for years without issues with 20+ POP3 accounts. I'm moving over to IMAP, which will let me keep copies of the emails locally and on the server while keeping everything synchronised. However, I'm looking for the best way to manage multiple IMAP accounts in Thunderbird. Currently I have a filter that copies all the emails into a central inbox and into separate local folders. The reason for this is that I go through my inbox daily and delete all emails that don't require any action. I move any emails that require action to my "action" IMAP account folder. This way I can synchronise all the emails that require action across multiple computers (and mobile devices). This technique is my implementation of the GTD, or Getting Things Done, philosophy. I also copy each email into separate local folders. The reason I do this is just in case any emails on the IMAP accounts get deleted, or something drastic happens on the server which means I lose all the emails. My business partner has access to some of these emails and still uses POP3 (with "leave copy on server" checked), but I know Thunderbird can still sometimes delete emails off the server. The problem with the above is that Thunderbird gives me the dreaded error dialogue saying that the emails cannot be filtered due to another process. I also find the folder list in Thunderbird hard to manage. Here is a screenshot of part of my folder list; as you can see, it's a bit of a complicated list and not easy to manage. What would be the best way of managing multiple IMAP accounts while allowing me to have copies put in a central folder and emails in local folders? Perhaps there is a better way entirely, if people think this setup isn't necessary? How do people manage multiple IMAP accounts in a way that allows them to keep on top of actionable emails? I'd be interested in how others manage this. I've never used the Thunderbird-based client "Postbox"; does it handle multiple IMAP accounts better?

    Read the article

  • How do I install git/git-svn on RHEL5 with a custom perl install?

    - by kbosak
    I've had nothing but trouble trying to install Git on RHEL5. First I tried from source, but ran into several issues with installing the docs. There appeared to be missing libs and such for parsing xml that I couldn't figure out how to get installed and recognized. Then I tried using the EPEL yum repository and was able to install git and its docs but now git-svn is not working. It complains about not finding the perl modules Git.pm and SVN/Core.pm. When I set the GITPERLLIB environment variable to the location of those libs it seg faults. Some background: RHEL5 came with perl 5.8.8, but we wanted to use 5.10 so I installed that from source (to a custom location). Someone then symlinked the system perl binary to this newer version of Perl to make sure nobody uses the wrong version. Each developer also has their own build of Perl. So I'm wondering what's the best way to install Git on this system and have both the docs and git-svn working correctly for each user. Unfortunately I'm a developer and not as good with system administration so take it easy on me.
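
    (When building git from source you can tell it exactly which perl to compile and install against, which usually sorts out both the Git.pm location and the segfaulting bindings mismatch. A minimal sketch, with /usr/local/perl-5.10 as a stand-in for wherever the 5.10 build actually lives:)

        # build git against the custom perl rather than whatever 'perl' resolves to
        make prefix=/usr/local PERL_PATH=/usr/local/perl-5.10/bin/perl all doc
        make prefix=/usr/local PERL_PATH=/usr/local/perl-5.10/bin/perl install install-doc
        # git-svn additionally needs the Subversion perl bindings (SVN/Core.pm)
        # built for that same perl; mixing 5.8.8 bindings with a 5.10 interpreter
        # is a classic cause of the segfault described above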

    Read the article

  • Typical outbound port list for guest access?

    - by Steve
    I manage a weekly rental house that includes wireless Internet access. I've allowed all outbound ports on my router but my ISP has disabled my Internet access twice now because guests have downloaded (or served up) copyrighted content. So I'd like to institute some port filtering to discourage p2p sharing (see disclaimer below). But I don't want to inconvenience the 99.9% of folks who keep things above-board. My question is, what outbound ports are typically open for rental/hotel wireless Internet access, or where can I find such a list? TCP 80,443,25,110 at a minimum. Though my own email service uses 995 and 465 for SSL, some may use IMAP, I personally use SSH and FTP, so I'll open those. Roughly I figure I need to open access to privileged ports, and close 1024 & above. Is there a whitelist I should institute for commonly used high ports? And does it make sense to block UDP 1024 ? Disclaimer: I realize anyone replying to this message could circumvent the port filtering and share content to their heart's content. I do not need comprehensive p2p blocking, which requires more than a port whitelist. Anyone staying at the house shoulders the responsibility for their Internet use, per the rental contract. Also anyone savvy enough to circumvent the port filters would hopefully be savvy enough to use some sort of peer blocking, thereby preventing the ISP from taking down the service.
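
    (If the router, or a Linux box sitting in front of it, speaks iptables, a whitelist along these lines captures the usual web/mail/ssh/ftp set and drops everything else outbound. This is a sketch only; the port list and the eth0 interface name are assumptions to adjust for the actual setup.)

        # allow replies to established sessions first
        iptables -A FORWARD -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        # common guest traffic: web, mail submission/retrieval, ssh, ftp
        iptables -A FORWARD -o eth0 -p tcp -m multiport \
            --dports 80,443,25,465,587,110,995,143,993,22,21 -j ACCEPT
        # DNS
        iptables -A FORWARD -o eth0 -p udp --dport 53 -j ACCEPT
        iptables -A FORWARD -o eth0 -p tcp --dport 53 -j ACCEPT
        # everything else from the guest network gets rejected
        iptables -A FORWARD -o eth0 -j REJECT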

    Read the article

  • Apache crashes every 5min

    - by Simon
    I'm relatively new to server issues, having a site of mine that I started early in the year grow beyond my capabilities of managing it. I need help. I recently moved out of my shared hosting environment onto a dedicated virtual server from Mediatemple. Each week, I run a script that fetches data from my DB, fetches data from last.fm's API and then tweets information to Twitter. My server uses Virtuozzo, and when the script runs, Apache crashes every 5 minutes. I checked and saw that the 'kmemsize' parameter reaches its cap (it's 13 MB). I realise my problem: the MySQL process stays open for a long time while Apache has to handle lots of incoming traffic (about 200,000 pageviews for that day according to my previous host's AWSTATS). Yes, I'm quite inexperienced in this, and I'm clearly killing the server with too much incoming traffic while it has to manage the updating of the DB. So that's the background; I have a few questions. 1) Why did my shared hosting environment not crash Apache every 5 minutes? It ran fine, the site only slowed down a lot. Clearly, it must be the virtual container and the kmemsize limit? 2) Where do I go from here? Would a physical server (not a virtual container) encounter the same problems? I sent a support request to Mediatemple as well. I need all the help I can get.
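
    (On a Virtuozzo/OpenVZ container you can watch the kmemsize counter, and its failure count, directly, which makes it easy to confirm that the barrier really is what is killing Apache during the script run. A quick sketch:)

        # held / maxheld / barrier / limit / failcnt for the interesting counters
        grep -E 'uid|kmemsize|privvmpages|numproc' /proc/user_beancounters
        # a rising failcnt while the script runs means the container limit is being
        # hit; lowering MaxClients / KeepAlive in Apache reduces the per-request
        # kernel memory and usually stops the crashes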

    Read the article

  • ESX hosts lose connectivity with iSCSI SAN LUNs

    - by Themist
    I've been experiencing this issue for a couple of months now: my ESX hosts lose connectivity with my iSCSI SAN VMFS volumes. As a result, the ESX hosts enter a nonresponsive mode, the associated VMs disconnect, and the only remedy is to reboot the host. This issue happens randomly. I have escalated it with VMware but I haven't had any solution yet. I see no errors on my switches and there are no hardware issues either. My SAN infrastructure is solid and there are 2 paths for every VMFS volume. Has anybody else experienced a similar issue? Edit: here are some more details. The iSCSI SAN software is DataCore SANmelody 2.0.4.2 running on 2 HP ProLiant G5 servers. The storage attached to each of the servers is an HP MSA70, and all the iSCSI SAN volumes presented to my 4 ESX hosts are mirrored. I have two iSCSI switches, HP ProCurve 1800G-24, that are trunked together. My SANmelody servers use NC360T NICs. I team two NICs and have one cable connecting to each iSCSI switch. Each ESX server also uses two NICs for the iSCSI network.
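
    (When the drops happen it can help to confirm, from the ESX service console, that the host can still reach the SAN targets over the vmkernel iSCSI interface and what the path state looks like. A rough sketch, assuming classic ESX with a service console; the target IP is a placeholder.)

        # test the vmkernel iSCSI interface, not the service console NIC
        vmkping 10.0.10.20
        # list paths for the iSCSI LUNs and their states (on/dead/standby)
        esxcfg-mpath -l
        # recent SCSI/iSCSI errors around the time of the disconnect
        grep -i scsi /var/log/vmkernel | tail -50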

    Read the article

  • Is Gmail Being Blocked by my ISP (wait till you read this)?

    - by James
    This is the strangest thing I have ever encountered. I have a desktop on which I cannot access Gmail or the YouTube sign-in (I believe that since YouTube is owned by Google, they both use the same sign-in system). So okay, maybe my ISP is blocking these for some reason, or maybe my firewall is, or maybe there is something wrong with my connectivity, right? No. On other computers that use the same connection via a wireless router, I can access both Gmail and the YouTube sign-in just fine. On this computer, which doesn't have a wireless card (so I have to connect via an Ethernet cable attached to a USB converter, since the Ethernet port doesn't work anymore), I can access all sites and services, including things like AOL and Hotmail. But when it comes to Gmail, I get complete and utter throttling. I even turned off my AV and firewall momentarily, and no luck. The Gmail page starts to load and by the mid point it just sits there loading and loading and loading... it never ends. I tried everything: I reset the modem and router multiple times. I reinstalled my operating system, going from Vista to Windows 7, hoping a complete reinstall would solve the issue, but no luck. So can anyone for the life of them figure out why this could be? And yes, I am going to call my ISP, but not to solve this issue, rather to cancel them. I want to upgrade from DSL to cable anyway. I didn't mention my ISP because I'm not sure if that is within the rules (if it's okay, someone let me know and I will). P.S. All this happened in one day; before, Gmail was perfectly accessible on this computer. I can't remember anything special that happened on that day prior to this. The only thing I can think of is that my ISP or Google itself is blocking this computer based on its MAC address, but I don't know if that's even done. Additional info: PC: Windows 7 Ultimate 32-bit. Connection type: DSL. Connecting medium: Ethernet cable via USB converter.

    Read the article

  • Test server on a local network with XAMPP

    - by hopscotch1978
    Hi, I'm not very proficient with networks and could use some help. I've got a Win 7 desktop with XAMPP which acts as my local dev machine. I've configured a virtual host on the desktop which I'm able to access fine. If I'm understanding things correctly, the virtual host uses port 80 (<VirtualHost 127.0.0.1:80>). I've just tried to configure a separate Win XP laptop on the local wireless network to connect to the main desktop for testing purposes. I've added the IP address and virtual host name to my Hosts file on the laptop. My virtual host is imaginatively named "virtualhost1". When I type this into my laptop browser, it connects correctly to the main desktop and I get the XAMPP welcome screen. But I can't seem to get to the actual site, just the XAMPP welcome screen. It kind of jumps the browser to http://virtualhost1/xampp/. I think it's a port issue of some sort but I have no idea how to resolve it. I would get the same XAMPP welcome screen on my desktop if I omitted ":80" from the virtual host declaration. On my main desktop, typing "virtualhost1" to the browser address bar gives me the site correctly, not the XAMPP welcome screen. Help would be appreciated. Thank you.
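
    (What usually causes the "XAMPP welcome screen instead of my site" symptom is that the vhost is declared as <VirtualHost 127.0.0.1:80>, so it only matches requests arriving on the loopback address; requests coming in on the LAN address fall through to the default XAMPP document root. A minimal sketch of a name-based vhost that answers on any interface; the DocumentRoot path is an example to adjust, and the NameVirtualHost line is only needed on Apache 2.2-era XAMPP builds.)

        # conf/extra/httpd-vhosts.conf, then restart Apache from the XAMPP panel
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName virtualhost1
            DocumentRoot "C:/xampp/htdocs/mysite"
        </VirtualHost>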

    Read the article

  • Need help with custom init script

    - by churnd
    I'm trying to set up an init script for a process on Red Hat Linux:
        #!/bin/sh
        #
        # Startup script for Conquest
        #
        # chkconfig: 345 85 15    (start or stop process definition within the boot process)
        # description: Conquest DICOM Server
        # processname: conquest
        # pidfile: /var/run/conquest.pid

        # Source function library. This creates the operating environment for the process to be started
        . /etc/rc.d/init.d/functions

        CONQ_DIR=/usr/local/conquest

        case "$1" in
        start)
                echo -n "Starting Conquest DICOM server: "
                cd $CONQ_DIR && daemon --user mruser ./dgate -v   # starts only one process of a given name
                echo
                touch /var/lock/subsys/conquest
                ;;
        stop)
                echo -n "Shutting down Conquest DICOM server: "
                killproc conquest
                echo
                rm -f /var/lock/subsys/conquest
                rm -f /var/run/conquest.pid   # only if the process generates this file
                ;;
        status)
                status conquest
                ;;
        restart)
                $0 stop
                $0 start
                ;;
        reload)
                echo -n "Reloading process-name: "
                killproc conquest -HUP
                echo
                ;;
        *)
                echo "Usage: $0 {start|stop|restart|reload|status}"
                exit 1
        esac

        exit 0
    However, the cd $CONQ_DIR is getting ignored, because the script errors out:
        # ./conquest start
        Starting Conquest DICOM server: -bash: ./dgate: No such file or directory
        [FAILED]
    For some reason, I have to run dgate as ./dgate; I cannot specify the full path /usr/local/conquest/dgate. The software came with an init script for a Debian system, so that script uses start-stop-daemon with the --chdir option to change to the directory where dgate lives, but I haven't found a way to do this with the Red Hat daemon function.
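
    (The Red Hat daemon function has no --chdir equivalent, but it ultimately hands the command string to a shell via runuser, so the cd can be folded into that string instead of relying on the cd done by the init script's own shell. A hedged sketch of the start branch:)

        start)
            echo -n "Starting Conquest DICOM server: "
            # run the cd and the program in the same shell that daemon/runuser spawns,
            # so dgate starts from its own directory regardless of mruser's $HOME
            daemon --user mruser "cd $CONQ_DIR && ./dgate -v"
            echo
            touch /var/lock/subsys/conquest
            ;;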

    Read the article

  • Cannot connect with Cisco VPN but can connect with ShrewSoft VPN

    - by rodey
    EDIT: We connected an air card to the computer to use a different Internet connection and using the Cisco software, we were able to successfully connect to our VPN server. I just don't understand why the ShrewSoft VPN client would connect but the Cisco connection won't. I'm not our network admin so sorry if I butcher some of the terminology. I have a computer at remote site that connects to our network through Cisco VPN. It uses the Cisco VPN software to do so. The problem is that the computer at this site cannot connect to our VPN because it is getting error "Reason 412: The remote peer is no longer responding." To see if perhaps something on their network was blocking the connection, I installed the ShrewSoft VPN client on the computer, imported our .pcf file and connected with no problem. I have tried two different versions of the Cisco VPN software (4.8.0.* and 5.0.03.*) and have the same problem. I installed Wireshark on the computer and have confirmed (while trying to connect through Cisco) that the computer is trying to contact the VPN server but is not receiving a response. We are not having any other problems regarding users not being able to connect. I'm at a loss at what else to check. I'll be monitoring this and have access to the computer at any time.

    Read the article

  • Asus MyCinema U3100mini Choppy

    - by dsimcha
    I'm running an Asus MyCinema U3100mini ATSC on Windows 7 64-bit. When I play live TV in Windows Media Center, it's very choppy and uses 500+ MB of RAM, I'm guessing due to the hard drive buffering functionality. Is there any way to disable the live TV pause buffer completely? If not, can anyone recommend alternative software that: Works with the MyCinema. Is lightweight and not horribly bloated with features I'll never use like Windows Media Center is. Edits: This is a dual boot system. I've discovered that the tuner actually works fine on XP. It also works fine on my other computer, which has slower hardware and also runs Windows 7 64-bit. The problem actually seems to be with playback at large screen sizes, not with hard drive buffering. Everything works fine below a certain window size and fails for large windows or full screens. Also, the same thing seems to happen whether playing live or recorded TV. As far as the obvious stuff goes, I have the latest video drivers from ATI for my Radeon x1050.

    Read the article

  • IIS ASP Redirect Removal

    - by Kim L
    We have a website that is setup on IIS 7 and are trying to replace it with a new site, but need a redirect that is in place removed. The old site used a custom file as the homepage (WN-main.asp). We removed all the old site files, including web.config, and placed them in a subdirectory for safe keeping. The new site no longer uses ASP, and we'd like to use a regular index.html as the default. However, when we go to the website, it keeps trying to redirect our .com to .com/WN-main.asp -- and that gives us a 404 Error in the Application for "Default Web Site" because we removed that page. In the IIS "Default Document" settings we have index.html at the top, and WN-main.asp is nowhere to be found in the list (it never was there). We've also removed the web.config file from the root directory, and put the entire old website in a subdirectory. As well as restarted IIS. We're assuming that the redirect is setup somewhere in IIS because if I navigate to .com/index.html which is our new site, it works. Our problem is that oursite.com redirects to oursite.com/WN-main.asp. Grr. If you go to www.worzalla.com you can see how it redirects to the WN-main.asp page right now as the homepage. Any ideas where this redirect could have been setup so we can remove it? Thanks!
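
    (IIS 7 keeps HTTP-redirect and default-document settings in applicationHost.config as well as in per-site web.config files, so even with the old web.config gone the redirect may be baked into the server-level configuration. A quick way to inspect and clear it with appcmd, run from an elevated command prompt; the site name is assumed to be "Default Web Site".)

        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" -section:system.webServer/httpRedirect
        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" -section:system.webServer/defaultDocument
        rem disable the redirect if it shows up at the site or server level
        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" -section:system.webServer/httpRedirect /enabled:false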

    Read the article

  • Wastage of resources in Virtualization

    - by Sabeen Malik
    I have asked this question on SO, but it was suggested that I ask it here on SF, so here it goes: http://stackoverflow.com/questions/3010753/wastage-of-resources-in-virtualization. I am not sure if this is the right place to ask the question, however I hope it is. When looking for a VPS earlier today, I was trying to understand how each container would work in the background. Keeping in mind the fact that the operating system uses most of the memory and power on a system, wouldn't having multiple operating systems on the same machine mean more wastage of resources? For instance, if I was running CentOS on a dedicated box and it was running, let's say, 20 background OS-level processes, and then I go and install a virtualization platform and add 5 more CentOS virtual machines on the same system which are exactly the same as the host operating system, doesn't this mean duplication of those 20 processes 6 times? So internally the context switching is happening between 120 processes instead of 20? Further notes, here is an example of what I am thinking: I have a master-slave configuration for a long-running, CPU- and memory-intensive process, which can be distributed to 4 machines. Let's say that when the process runs on these 4 machines, each with roughly a 1 GHz CPU and 1 GB of RAM, I get 400 results per hour from the cluster (assuming 100 results from one machine). Now I get a bigger machine (let's say 4 GHz and 4 GB of RAM) and put 4 virtual hosts on it, each with a 1 GHz CPU and 1 GB of RAM. Will this configuration give me the same 400 results per hour from these 4 virtual hosts?

    Read the article

  • Debian DNSSEC - howto secure a domain?

    - by Daniel Marschall
    I have a beginner question about DNSSEC. I have a lot of experience with TLS and cryptography and would like to try out this new technology. I have googled a lot about this but haven't found information that is useful to me. I think one source of confusion when gathering information is that "Debian howto DNSSEC setup" can mean either "how to USE DNSSEC for resolving" OR "how to secure your domain with DNSSEC". I am looking for the second. I am running a Debian Squeeze server with root privileges which has a domain name ending in ".de" (a TLD which is already signed by the root zone). The network interface on this server uses the gateway IP (DNS resolver?) of the datacentre the server is running in. My domain is hosted at freedns.afraid.org, where I can add DNS RRs for my domain. They are currently NOT capable of adding DNSSEC RRs, but I am bugging them to support this soon. ;-) My simple question is: how do I set up DNSSEC on Debian? Or rather, whom do I have to ask? As far as I understand it, all I have to do is run dnssec-keygen on my Debian server and then add the key to my DNS provider as a DNSSEC RR. (And change it every 30 days?) I have looked at http://www.isc.org/files/DNSSEC_in_6_minutes.pdf but it looks like you have to be the owner of a ZONE, so I don't think this applies to me. Who needs to sign my domain? My DNS provider, or the parent zone (DeNIC), or can I do it myself? Any help is very appreciated!
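
    (Roughly speaking: whoever publishes the zone's records has to serve the signed records, and the DS record then has to reach DENIC through the registrar, so signing only helps once freedns.afraid.org can publish DNSSEC RRs. The signing itself can be done locally with the BIND dnssec tools; a rough sketch, with example.de standing in for the real domain and db.example.de for a local copy of the zone file.)

        # generate a zone-signing key and a key-signing key (algorithm 8 = RSASHA256)
        dnssec-keygen -a RSASHA256 -b 1024 -n ZONE example.de
        dnssec-keygen -a RSASHA256 -b 2048 -n ZONE -f KSK example.de
        # sign the zone; the K* names below are whatever dnssec-keygen just printed,
        # not literal file names
        dnssec-signzone -o example.de -k Kexample.de.+008+KSKID.key db.example.de Kexample.de.+008+ZSKID.key
        # this writes db.example.de.signed plus dsset-example.de.; the DS record in
        # the latter is what the registrar has to pass up to DENIC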

    Read the article

  • Getting Apache to serve same directory with different authentication over SSL?

    - by Lasse V. Karlsen
    I have set up VisualSVN Server, a Subversion server that internally uses Apache, to serve my Subversion repositories. I've managed to integrate WebSVN into it as well, and just now was able to get it to serve my repositories through WebSVN without having to authenticate, i.e. no username or password prompt comes up. This is good. However, with this setup there is apparently no way for me to authenticate to WebSVN at all, which means all my private repositories are now invisible as far as WebSVN goes. I noticed there is a "Listen 81" directive in the .conf file, since I'm running the server on port 81 instead of 80, so I was wondering if I could set up an https:// connection on a different port that did require authentication? The reason I need access to my private repositories is that I have linked my bug-tracking system to the Subversion repositories, so if I click a link in the bug-tracking system, it will take me to diffs for the relevant files in WebSVN, and some products are in private repositories. Here's my Location section for WebSVN:
        <Location /websvn/>
          Options FollowSymLinks
          SVNListParentPath on
          SVNParentPath "C:/Repositories/"
          SVNPathAuthz on
          AuthName "Subversion Repository"
          AuthType Basic
          AuthBasicProvider file
          AuthUserFile "C:/Repositories/htpasswd"
          AuthzSVNAccessFile "C:/Repositories/authz"
          Satisfy Any
          Require valid-user
        </Location>
    Is there any way I can set up a separate section for a different port, say 8100, that does not have the Satisfy Any directive, which is what enables anonymous access? A different sub-directory on the server would be acceptable as well: if I can make a Location section for that and effectively serve the same content, only without the Satisfy Any directive, that'd be good too.
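
    (One way to keep anonymous browsing on /websvn/ and still reach the private repositories is to publish the same content a second time under a path whose Location block omits Satisfy Any, so it always demands credentials. A sketch only; depending on how VisualSVN lays out WebSVN, a matching Alias for the new path pointing at the same WebSVN directory would also be needed.)

        <Location /websvn-secure/>
          Options FollowSymLinks
          SVNListParentPath on
          SVNParentPath "C:/Repositories/"
          SVNPathAuthz on
          AuthName "Subversion Repository"
          AuthType Basic
          AuthBasicProvider file
          AuthUserFile "C:/Repositories/htpasswd"
          AuthzSVNAccessFile "C:/Repositories/authz"
          Require valid-user
        </Location>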

    Read the article

  • Looking for a recommendation on measuring a high availability app that is using a CDN.

    - by T Reddy
    I work for a Fortune 500 company that struggles with accurately measuring performance and availability for high-availability applications (i.e., apps that are up 99.5% of the time with 5-second page-to-page navigation). We factor in both scheduled and unscheduled downtime to determine this availability number. However, we recently added a CDN into the mix, which complicates our metrics a bit. The CDN now handles about 75% of our traffic, while sending the remainder to our own servers. We attempt to measure what we call a "true user experience" (i.e., our testing scripts emulate a typical user clicking through the application). These monitoring scripts sit outside of our network, which means we're hitting the CDN about 75% of the time. Management has decided that we take the worst-case scenario to measure availability. So if our origin servers are having problems, but the CDN is serving content just fine, we still take a hit on availability. The same is true the other way around. My thought is that as long as the "user experience" is successful, we should not unnecessarily punish ourselves. After all, a CDN is there to improve performance and availability! I'm just wondering if anyone has any knowledge of how other Fortune 500 companies calculate their availability numbers? I look at apple.com, for instance, as an example of a storefront that uses a CDN and never seems to be down (unless there is about to be a major product announcement). It would be great to have some hard, factual data, because I don't believe that we need to unnecessarily hurt ourselves on these metrics. We are making business decisions based on these numbers. I can say, however, that given these metrics are visible to management, issues get addressed and resolved pretty fast (read: we cut through the red tape pretty quickly). Unfortunately, as a developer, I don't want management to think that the application is up or down because some external factor (i.e., the CDN) is influencing the numbers. Thoughts? (I mistakenly posted this question on StackOverflow; sorry in advance for the cross-post.)
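
    (One concrete alternative to "worst case of either side" is to weight each side's measured availability by the share of traffic it actually serves, which is much closer to what users experience. A small worked example with made-up numbers:)

        origin availability      = 99.50%   (serves 25% of requests)
        CDN availability         = 99.95%   (serves 75% of requests)
        blended user experience  = 0.25 x 99.50 + 0.75 x 99.95 = 99.84%
        worst-case-of-either     = 99.50%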

    Read the article

  • How to pipe internet radio into a tuner?

    - by JW
    UPDATE: Thanks everyone for the ideas! This was an area I knew very little about, but now I can talk about it with a little more expertise. Much appreciated! I visited my dad this weekend and he wants to pipe some internet radio he's found down to a tuner quite a distance away in the house. He uses computers for only very basic things: e-mail, getting the Post crossword, checking Yahoo!, checking recipes, etc. There's currently one computer in the house (no router). My initial suggestion (without any research whatsoever) was to get a wireless router and a netbook for downstairs near the tuner, but he initially wasn't too keen on having another computer down there. Anyway, is there any computer hardware that could magically pipe the audio output from the computer down to one set of (RCA) audio inputs on the tuner? Wireless isn't necessary, but it probably would be easier. Anyway, thanks for your suggestions! UPDATE: Thanks everyone! Voted up all of your suggestions now that I have 15 rep. Much appreciated.

    Read the article

  • Postfix spool on ext3 optimiziations in >=linux-2.6.34 days

    - by Luke404
    Given the very specific nature of the subject (we're not talking about mailboxes, just the spool; we're not talking about other filesystems, just ext3; and so on) and the maturity of the software involved (the Linux kernel, ext3fs, Postfix), I'd think there should be a more or less agreed-on set of best practices for filesystem-related tuning. I'm trying to get a roundup of them:
    - data=journal became the default in recent kernels (somewhere around 2.6.30 IIRC), so we should be OK with that.
    - Wietse Venema says atime must be on, but the Postfix documentation recommends noatime while talking about the Incoming Queue. Does that mean that Postfix needs atime on just for some queue directories and will benefit from noatime on the others? Can we use noatime if we just don't use ETRN?
    - The filesystem can be mounted nodev,noexec,nosuid; the no* options won't prevent you from setting attributes (Postfix uses the exec attribute), they just won't have any effect (we don't run anything from the spool).
    - The fsync() issue cited by Wietse and/or the chattr -S are probably linked to the sync/async options of ext3fs, but I do not understand them well enough. Is mounting the filesystem with the async option equivalent to chattr -R -S on the whole fs? It seems like it will increase performance, but will that pose a risk of "loss of mail after a system crash", or is it really "safe on /var/spool/postfix"?
    - Would you tune anything else on postfix-2.6.x to work better on ext3, or do you leave the defaults everywhere?
    - Is there a "best" Linux I/O scheduler for this kind of workload (namely CFQ or deadline?), or is that something that will vary too much based on hardware configuration?
    - Would you tune anything else in the filesystem or in the kernel? Anything else?
    References: Postfix Performance here on SF; Postfix documentation about the Incoming Queue; Wietse Venema in Best file system on [email protected] here; Postfix and ext3 on [email protected] here and there.
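
    (For the mount-option side of the list, the uncontroversial pieces can at least be captured in fstab. A hedged sketch, assuming the queue lives on its own small ext3 filesystem; if it shares /var you obviously cannot apply noexec/nodev/nosuid that selectively, and the device name below is a placeholder.)

        # /etc/fstab entry for a dedicated spool filesystem
        /dev/vg0/pfspool  /var/spool/postfix  ext3  noatime,nodev,nosuid,noexec  0 2

        # apply without a reboot
        mount -o remount /var/spool/postfix
        # and, only if you decide the fsync/async trade-off is acceptable for the queue:
        chattr -R -S /var/spool/postfix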

    Read the article
