Search Results

Search found 26004 results on 1041 pages for 'debian based'.


  • Best practice, or generally best way to set up web-hosting server, permissions, etc.

    - by Jagot
    Hi, I'm about to set up a server on which a friend and I will be hosting web sites, and I'll be using Debian. I've set up a LAMP stack many times for local testing purposes, but never for actual production use. I was wondering what the best practices are in terms of setting the server up, specifically with regard to access to the web root directory. A couple of the options I have seen: (1) set up a single user account on the server for us both to use, and point a virtual host to somewhere in its home directory, e.g. /home/webdev/www; (2) set up a user account for each of us and grant permissions in some way to /var/www (what would be the best way? A new group?). I want to get this right the first time, as there won't be any going back for a while once our first site is up and running. I appreciate any guidance in advance.
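
    A minimal sketch of the second option, assuming a shared group named 'webdev' and accounts 'alice' and 'bob' (all names are illustrative):

        groupadd webdev
        usermod -aG webdev alice
        usermod -aG webdev bob
        chgrp -R webdev /var/www
        chmod -R g+rwX /var/www
        # setgid bit: new files and directories inherit the 'webdev' group
        find /var/www -type d -exec chmod g+s {} \;

    Members may need to log out and back in before the new group membership takes effect.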

    Read the article

  • How to resolve all externally unresolved DNS queries?

    - by red eyes dev
    I am using PowerDNS on a Linux box (Debian 6). I would like to set up the PowerDNS server to resolve all externally unresolved DNS queries to a given internal host. Is this possible? How is it done? I think it's necessary to use pdns-recursor, but my configuration file doesn't work! I use MySQL as the backend. If I add google.com manually it works, but if I delete the entry I get "server failed"; the root DNS servers (or the ISP's DNS) don't answer me.

    Read the article

  • mod_rewrite not working for subdomain in Apache2

    - by Matt
    Hi, I'm having some trouble with mod_rewrite. So I'm implementing it through .htaccess, and I can get it working on my main vhost, domain.com - what I want it to do is rewrite http:// domain.com to force it to https:// domain.com, which it does well. I want to have name-based vhosts for the one IP with the following redirects: (I'm breaking up domain names with a space because otherwise serverfault recognises them as links) http:// domain.com -- https:// domain.com http:// staging.domain.com -- https:// staging.domain.com http:// test.domain.com -- https:// test.domain.com http:// beta.domain.com -- https:// beta.domain.com domain.com redirects to https:// domain.com, but staging.domain.com doesn't, although I can access https:// staging.domain.com. The .htaccess is identical for both, just with the domain name different. It doesn't seem to do any rewriting at all for staging.domain.com, I've tested this by trying to get it to rewrite to www.google.com. I have a wildcard DNS record, *.domain.com which points to the domain IP. Is there a particular way I should have the virtualhosts configured to allow this? I keep reading in the Apache documentation that it doesn't support multiple SSL name-based vhosts. But I can access both https:// domain.com and https:// staging.domain.com just fine. Any thoughts? Thanks to everyone for your help with this.
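
    For reference, a typical force-HTTPS rule of the kind described, written so the same .htaccess works unchanged on every name-based vhost (a sketch; adjust flags as needed):

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]

    Because it rewrites to %{HTTP_HOST}, staging.domain.com is redirected to its own HTTPS counterpart rather than a hard-coded hostname; RewriteLog/RewriteLogLevel (set in the virtual host, not in .htaccess) can show whether the rules are being evaluated at all for the staging vhost.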

    Read the article

  • Windows Server 2003: Remapping external domain

    - by Chuck Harmston
    We're playing a going-away prank on a coworker, and would like to use a rule in our internal DNS server to redirect techcrunch.com to point at one of our internal development servers. Basically, I'd like to accomplish the same thing as adding a line to a Linux /etc/hosts file, only for the entire network. I have access to our DNS server. How would you go about doing this? I created an entry in the reverse lookup subnet with the 'Host Name' of techcrunch.com and the 'Host IP' of our development server, a Linux box running Debian on which I've created a virtualhost to handle requests to techcrunch.com. It doesn't appear to be working, however, and my expertise has reached its limit. Thanks!
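
    The usual approach is a forward lookup zone for the domain (not a reverse lookup entry), with an A record at the zone root pointing at the development box. A sketch using dnscmd on the DNS server, assuming the development server sits at 10.0.0.50 (address is illustrative):

        dnscmd /zoneadd techcrunch.com /primary /file techcrunch.com.dns
        dnscmd /recordadd techcrunch.com @ A 10.0.0.50
        dnscmd /recordadd techcrunch.com www A 10.0.0.50

    Clients may also need an ipconfig /flushdns (and a cleared server cache) before the override takes effect.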

    Read the article

  • Corrupt mysql system tables

    - by psynnott
    I am having issues with the columns_priv table in the mysql system database. I cannot add new users currently. I have tried repairing it using mysqlcheck --auto-repair --all-databases --password but I get the following output:
        mysql.columns_priv
        Error : Incorrect file format 'columns_priv'
        error : Corrupt
    Is there any other way to repair this table, or how do I go about replacing it with a blank table? What would I lose by doing that? Thank you
    Edit (Additional Info): mysqld is currently using 100% CPU constantly. Looking at show processlist, I get:
        mysql> show processlist;
        +-----+------------------+-----------+-------+---------+------+----------------+-----------------------------------------------------------------------------------------------+
        | Id  | User             | Host      | db    | Command | Time | State          | Info                                                                                          |
        +-----+------------------+-----------+-------+---------+------+----------------+-----------------------------------------------------------------------------------------------+
        |   5 | debian-sys-maint | localhost | mysql | Query   | 1589 | Opening tables | ALTER TABLE tables_priv MODIFY Column_priv set('Select','Insert','Update','References') COLL |
        | 752 | root             | localhost | NULL  | Query   |    0 | NULL           | show processlist                                                                              |
        +-----+------------------+-----------+-------+---------+------+----------------+-----------------------------------------------------------------------------------------------+
        2 rows in set (0.00 sec)
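
    For a MyISAM grant table like this, a hedged repair sketch (take a filesystem copy of /var/lib/mysql/mysql first; paths are the Debian defaults):

        -- from the mysql client: try a rebuild from the .frm definition
        REPAIR TABLE mysql.columns_priv USE_FRM;

    or, with mysqld stopped, repair the underlying MyISAM files directly:

        myisamchk -r -f /var/lib/mysql/mysql/columns_priv.MYI

    If the table is beyond repair, mysql_install_db should be able to recreate an empty columns_priv; what is lost is only column-level privileges, which most setups never grant.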

    Read the article

  • Server suddenly running out of entropy

    - by Creshal
    Since a reboot yesterday, one of our virtual servers (Debian Lenny, virtualized with Xen) is constantly running out of entropy, leading to timeouts etc. when trying to connect over SSH / TLS-enabled protocols. Is there any way to check which process(es) is (or are) eating up all the entropy?
    Edit: what I tried:
    - Adding additional entropy sources: time_entropyd, rng-tools feeding urandom back into random, pseudorandom file accesses. This netted about 1 MiB of additional entropy per second, but the problems still persisted.
    - Checking for unusual activity via lsof, netstat and tcpdump: nothing. No noticeable load or anything.
    - Stopping daemons, restarting permanent sessions, rebooting the entire VM: no change in behaviour.
    What in the end worked: waiting. Since about yesterday noon there are no connection problems anymore. Entropy is still somewhat low (128 bytes peak), but TLS/SSH sessions have no noticeable delay anymore. I'm slowly switching our clients back to TLS (all five of them!), but I don't expect any change in behaviour now.
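
    To watch the pool drain in real time and catch a consumer in the act, something along these lines (a sketch; run as root):

        # sample the kernel entropy pool once a second
        watch -n 1 cat /proc/sys/kernel/random/entropy_avail
        # list processes holding the blocking device open; heavy readers
        # of /dev/random are the usual culprits
        lsof /dev/random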

    Read the article

  • rsync OS X to Linux

    - by Nick
    I did a backup to a remote NFS folder with rsync, from a Mac to a remote Debian box. The final backup is 58GB smaller than the original. Rsync says that everything was OK and there is nothing to update.
        Macintosh:/Volumes/Data1 root# du -sh Produccion/
        319G    Produccion/
        root@Disketera:/mnt/soho_storage/samba/shares# du -sh Produccion/
        260G    Produccion/
    Can I trust rsync? I'm using rsync -av --stats /Volumes/Data1/Produccion/ /mnt/red/ (/mnt/red is my Samba mountpoint). Some of the differing folders:
        root@Disketera:/mnt/soho_storage/samba/shares/Produccion/tiposok# du -sh *
        0       IndoSanBol
        0       IndoSans-Bold
        0       IndoSans-Italic
        0       IndoSans-Light
        0       IndoSans-Regular
        40K     PalatinoLTStd-Black.otf
        40K     PalatinoLTStd-BlackItalic.otf
        40K     PalatinoLTStd-Bold.otf
        44K     PalatinoLTStd-BoldItalic.otf
        44K     PalatinoLTStd-Italic.otf
        40K     PalatinoLTStd-Light.otf
        40K     PalatinoLTStd-LightItalic.otf
        40K     PalatinoLTStd-Medium.otf
        40K     PalatinoLTStd-MediumItalic.otf
        56K     PalatinoLTStd-Roman.otf
        12K     TCL IndoSans_mac
        Macintosh:/Volumes/Data1/Produccion/tiposok root# du -sh *
        36K     IndoSanBol
        40K     IndoSans-Bold
        36K     IndoSans-Italic
        36K     IndoSans-Light
        36K     IndoSans-Regular
        40K     PalatinoLTStd-Black.otf
        40K     PalatinoLTStd-BlackItalic.otf
        40K     PalatinoLTStd-Bold.otf
        44K     PalatinoLTStd-BoldItalic.otf
        44K     PalatinoLTStd-Italic.otf
        40K     PalatinoLTStd-Light.otf
        40K     PalatinoLTStd-LightItalic.otf
        40K     PalatinoLTStd-Medium.otf
        40K     PalatinoLTStd-MediumItalic.otf
        56K     PalatinoLTStd-Roman.otf
        160K    TCL IndoSans_mac
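
    The zero-byte entries on the Debian side are a strong hint: extension-less old-style Mac font files like these typically keep all their data in the resource fork, and rsync to a Linux filesystem has nowhere to put resource forks or other HFS+ metadata. A quick, non-destructive way to double-check what actually differs (a sketch):

        # dry run with checksums: lists files whose contents differ, copies nothing
        rsync -avnc --stats /Volumes/Data1/Produccion/ /mnt/red/

    If the only differences are resource forks, everything Linux can represent has been copied; keeping a second backup on an HFS+ volume or in a disk image is the usual way to preserve the forks as well.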

    Read the article

  • Keepalived with apache unable to bind interface on Backup server

    - by davideagle
    I have two Debian 6 servers running keepalived 1.1.20, with one server acting as the master and the other as the backup. Both servers run Apache 2.4 with a global listener on all interfaces on port 80 (Listen *:80); however, I have some sites that require a listener for port 443 (SSL), and that is configured per VirtualHost in the Apache config, since I do not want every VirtualHost to listen on port 443. The problem is that when I try to start Apache on the backup machine, which does not currently hold the virtual (VRRP) IP address the VirtualHost is supposed to be listening on, I get AH00072: make_sock: could not bind to address 1.1.1.1:443. I know this is expected behavior for Apache. The real question is: are there any known workarounds or solutions to this scenario?
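
    The usual workaround is to let the kernel bind to addresses it does not currently own, so the backup's Apache can start with the VRRP address configured. A sketch (the address 1.1.1.1 just mirrors the error message above):

        # allow binding to non-local addresses; persist it in /etc/sysctl.conf
        sysctl -w net.ipv4.ip_nonlocal_bind=1

    An alternative is to Listen on the wildcard for 443 as well (Listen *:443) and let the per-site certificates live inside the VirtualHost blocks, with SNI selecting the right one.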

    Read the article

  • Can My Personal GMail Query A Remote LDAP Server?

    - by Maarx
    I have a personal GMail account from which I frequently send e-mail to a great many users of a specific business. The corporation has been kind enough to provide me with the credentials to access their LDAP server, with which I would like my GMail web client to be able to auto-complete partial addresses or names for which that LDAP server has an entry. Is there any way I can get a personal GMail account (or its corresponding Google account) to incorporate an LDAP server into its Contacts? If I cannot get it to query dynamically and on demand, is there an idiot-proof way (assuming the client permits it, which they may not) to query the LDAP server for its entire database, save it, and bulk-import it into GMail? Perhaps even something I could set to repeat periodically (weekly, perhaps) without human interaction? If I did the latter, I assume it would be trivial to import all of these contacts under a single category that could be easily manipulated from within the GMail web-based client. I have been a staunch user and supporter of the GMail web-based client since its inception, but this one is kind of a deal-breaker for me. If it's impossible, what do you suggest I do?
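
    GMail itself has no LDAP hook, so the periodic-export route usually comes down to a scheduled ldapsearch dump that gets massaged into a CSV for the Contacts importer. A sketch, where every hostname, bind DN and base DN is a placeholder:

        ldapsearch -x -H ldap://ldap.example.com \
            -D "cn=me,ou=people,dc=example,dc=com" -W \
            -b "ou=people,dc=example,dc=com" "(objectClass=person)" cn mail \
            > contacts.ldif

    From there a small script turns the cn/mail pairs into a CSV with name and e-mail columns, which GMail's contact import accepts and can file under a single group.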

    Read the article

  • Throttling apache downloads selectively

    - by Synchro
    I have a linux box running Debian Sarge (old I know) and apache 2.0.54. It serves two kinds of files - regular web pages and small images, and a lot of large podcast mp3s. The podcast downloads swamp the connection and make the rest of the site unresponsive, so I'm looking to throttle the data transfer rate (not the request rate) of just the podcasts. I've set up haproxy using this technique which does what it says it will, but solves a different problem - even only 5 simultaneous podcast downloads is enough to saturate the link. In a perfect world, haproxy would support per-connection throttling, but it doesn't. So far I've looked at mod_bw (won't compile for me, seems unsupported), mod_cband (unsupported, widely reported as problematic) and iptables using tc. The iptables approach would allow me to throttle things, but would not be at all selective, slowing down everything on the server, not just the podcasts, so would just move the bottleneck without changing overall behaviour. Ideas?
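
    If the podcasts can be served from their own vhost or port (a hypothetical port 8080 below), the kernel can do the selective shaping that Apache 2.0 cannot: mark only the podcast traffic with iptables and steer that mark into a throttled tc class. A sketch:

        # mark outbound traffic originating from the podcast vhost's port
        iptables -t mangle -A OUTPUT -p tcp --sport 8080 -j MARK --set-mark 10

        # root HTB qdisc: unmarked traffic goes to the default class 1:20
        tc qdisc add dev eth0 root handle 1: htb default 20
        tc class add dev eth0 parent 1: classid 1:20 htb rate 90mbit
        # podcast class capped well below the uplink
        tc class add dev eth0 parent 1: classid 1:10 htb rate 2mbit ceil 2mbit
        tc filter add dev eth0 parent 1: protocol ip handle 10 fw flowid 1:10

    Attaching an sfq qdisc to class 1:10 adds per-connection fairness inside the capped class, so no single download starves the others.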

    Read the article

  • How to configure installed Ruby and gems?

    - by NARKOZ
    Hi. My current gem env returns:
        RubyGems Environment:
          - RUBYGEMS VERSION: 1.3.6
          - RUBY VERSION: 1.8.7 (2008-08-11 patchlevel 72) [x86_64-linux]
          - INSTALLATION DIRECTORY: /home/USERNAME/.gems
          - RUBYGEMS PREFIX: /home/narkoz
          - RUBY EXECUTABLE: /usr/bin/ruby1.8
          - EXECUTABLE DIRECTORY: /home/USERNAME/.gems/bin
          - RUBYGEMS PLATFORMS:
            - ruby
            - x86_64-linux
          - GEM PATHS:
             - /home/USERNAME/.gems
             - /usr/lib/ruby/gems/1.8
          - GEM CONFIGURATION:
             - :update_sources => true
             - :verbose => true
             - :benchmark => false
             - :backtrace => false
             - :bulk_threshold => 1000
             - "gempath" => ["/home/USERNAME/.gems", "/usr/lib/ruby/gems/1.8"]
             - "gemhome" => "/home/USERNAME/.gems"
          - REMOTE SOURCES:
             - http://rubygems.org/
    How can I change path /home/USERNAME/ to my own without uninstalling? OS: Debian Linux
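
    RubyGems derives those paths from the GEM_HOME and GEM_PATH environment variables (and from ~/.gemrc), so pointing them somewhere else does not require reinstalling anything. A sketch, assuming the new location is ~/gems (the name is arbitrary):

        # in ~/.bashrc (or the shell's profile)
        export GEM_HOME="$HOME/gems"
        export GEM_PATH="$HOME/gems:/usr/lib/ruby/gems/1.8"
        export PATH="$HOME/gems/bin:$PATH"

    Existing gems can simply be moved (mv ~/.gems ~/gems); re-running gem env afterwards should show the new INSTALLATION DIRECTORY.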

    Read the article

  • Very high memory usage, but not claimed by any process?

    - by SharkWipf
    While stress-testing LVM on one of our Debian servers, I came across this issue where memory usage climbs to the point where the server runs out of memory, yet no process claims the memory. See http://i.imgur.com/cLn5ZHS.png, and see http://serverfault.com/a/449102/125894 for an explanation of the colors used in htop. Why is this happening? And is there any way to see what is using the memory? Htop is configured not to hide any processes, so what is it that htop is missing? In this particular case I can fairly certainly say that it is caused, directly or indirectly, by lvcreate, lvremove or dmsetup, as that is what I was stress-testing. Do note that this question is not about solving the LVM problem, but about why the memory isn't claimed by any process. Stopping all LVM commands does bring the memory back down to <600MB.
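
    Memory that no process owns usually lives in the kernel: slab caches, device-mapper structures and the page cache are all invisible to htop's per-process view. Two quick ways to look at it (a sketch):

        # kernel-side totals: slab, reclaimable vs. unreclaimable
        grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo
        # live view of the individual slab caches, sorted by cache size
        slabtop -o -s c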

    Read the article

  • Qemu in an SSH session, or the quest for the nographic option?

    - by LB
    Hi, I SSH to a machine and I would like to start a QEMU session inside this SSH session. I thought that the -nographic option would do the trick: "Normally, QEMU uses SDL to display the VGA output. With this option, you can totally disable graphical output so that QEMU is a simple command line application. The emulated serial port is redirected on the console. Therefore, you can still use QEMU to debug a Linux kernel with a serial console." But unfortunately I don't see any output. The command line that I'm using once I've SSHed to the machine is: qemu-system-x86_64 -hda debian.img -nographic
    Any ideas? Thanks.
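
    -nographic only redirects the emulated serial port to the terminal; it shows nothing unless the guest actually talks to that serial port. A sketch of the two usual fixes (the kernel/initrd file names below are placeholders):

        # boot the guest kernel directly and point its console at ttyS0
        qemu-system-x86_64 -hda debian.img -nographic \
            -kernel vmlinuz-guest -initrd initrd-guest \
            -append "root=/dev/sda1 console=ttyS0"

        # or keep the VGA console but render it as text in the terminal
        qemu-system-x86_64 -hda debian.img -curses

    If the image must boot through its own bootloader, adding console=ttyS0 to the guest's kernel command line and enabling a getty on ttyS0 inside the guest achieves the same thing.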

    Read the article

  • SBS2011 Standard DNS suddenly not resolving some domains

    - by Matt
    Suddenly today I am unable to resolve common domains like serverfault.com and facebook.com, but other domains like google.com and cnn.com work fine. This is on a client machine (Win7 Pro) connected to an SBS2011 Standard domain. The only DNS server is the SBS2011 server. The behaviour is identical on all client PCs I have tried: the same domains work and the same ones fail. Using nslookup, I get 'no such domain' errors for facebook.com, and the correct DNS entries for the ones that do work. When I add Google's Public DNS to my client PC as a backup (primary = local SBS server, secondary = 8.8.8.8), everything works fine for my client PC, but querying from the SBS server directly or from other client PCs is still broken (so I don't believe it's a firewall issue). My main question is: how can I see which servers the SBS2011 server queries when it doesn't know about a domain? There is nothing in our firewall logs saying that any DNS-based packets were blocked, but I also wanted to check, by IP/FQDN, which servers the SBS server would be likely to contact to find out about facebook.com, for example.
    Update 23/05/2012: It appears DNS is working again this morning for the affected websites. Both the DC on its own and all client PCs can once again access the websites that were not loading last night, as well as the websites that were working. I haven't changed anything overnight, so it appears there was some kind of temporary glitch, but I can't understand what would have caused it on the network.
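
    To see where the SBS box goes when it has no cached answer, the main things to check are its forwarders versus root hints and what its own resolver returns. A sketch, run on the server itself:

        rem what does the local DNS service answer for the failing name?
        nslookup -debug facebook.com 127.0.0.1

        rem flush server-side and client-side caches before retesting
        dnscmd /clearcache
        ipconfig /flushdns

    The Forwarders tab of the server's properties in the DNS console shows whether it forwards (in which case the forwarder is what to test) or walks the root hints itself; a packet capture filtered on port 53 confirms which upstream servers are actually being queried.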

    Read the article

  • Central log server with audispd

    - by johan
    I want to set up a central log server. The log server is running Debian 6.0.6, and the audit daemon is installed in version 1.7.13-1. The clients are running Red Hat 5.5 and they connect to the log server via audispd. The connection works fine and I get all messages from each node. My question is: is it possible for the auditd daemon on the log server to write the messages from each node to a separate file? I tried transferring the messages via the syslog daemon; that works, but then I cannot use tools like ausearch to analyze these log files.

    Read the article

  • Apache worker is crashing after 3,000 users

    - by user1618606
    I activated the Apache worker MPM on my VPS and I'm having problems, because the website crashes when 3,000 users are accessing it. I'm using http://whos.amung.us/stats/2jzwlvbhvpft/ as the counter. My Apache worker configuration:
        KeepAlive On
        MaxKeepAliveRequests 0
        KeepAliveTimeout 1
        <IfModule mpm_worker_module>
            ServerLimit          20000
            StartServer           8000
            MinSpareThreads      10400
            MaxSpareThreads      14200
            ThreadLimit              5
            ThreadsPerChild          5
            MaxClients           20000
            MaxRequestsPerChild      0
        </IfModule>
    The VPS runs Debian 64-bit with LAMP, has 14 GB of memory and 24 GHz of CPU. What can I do to get better performance?
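
    For comparison, a more conventional worker sketch in which ServerLimit x ThreadsPerChild equals MaxClients and the directive is spelled StartServers (the values are illustrative starting points, not tuned for this particular VPS):

        <IfModule mpm_worker_module>
            ServerLimit            250
            StartServers            10
            MinSpareThreads         75
            MaxSpareThreads        250
            ThreadLimit             64
            ThreadsPerChild         64
            MaxClients           16000
            MaxRequestsPerChild  10000
        </IfModule>

    With the original settings, 20000 server slots at only 5 threads each means thousands of heavyweight processes; fewer processes with many threads each is what the worker MPM is designed for.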

    Read the article

  • HFS partition mounting read-only

    - by Sid
    Hey, I have an external Western Digital hard drive with two HFS partitions with journaling disabled. When I connect it to a computer running Linux (Debian or Ubuntu), both partitions are frequently mounted read-only. In the past, mounting them on my MacBook and executing the command to disable journaling often worked (even though it would tell me that journaling was already disabled), but I would love a solution that works every time. Thanks!
    Edit: In light of Chris Johnsen's comment below, my question is how to mount the filesystem read/write on Linux, since it is not doing so automatically.
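
    The hfsplus driver falls back to read-only whenever the volume is journaled or was not cleanly unmounted. A sketch of the usual remedies (device and mountpoint names are placeholders):

        # check and clear the dirty state left by an unclean unmount
        fsck.hfsplus /dev/sdb2                          # from the hfsprogs package

        # remount read-write; 'force' overrides the journal/dirty check
        mount -t hfsplus -o remount,force,rw /dev/sdb2 /media/wd

    Ejecting the drive cleanly on the Mac side (rather than just unplugging it) avoids the dirty flag in the first place.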

    Read the article

  • Linux - Block ssh users from accessing other machines on the network

    - by Sam
    I have set up a virtual machine on my network for uni project development. I have 6 team members and I don't want them to SSH in and start sniffing my network traffic. I have already set the firewall on my Windows 7 PCs to ignore any connection attempts from the virtual machine, but I would like to go a step further and not allow any network access from the VM to other machines on my network. Team members will access the VM by SSH. The only external port forwarded is to vm:22. The VM is running in VirtualBox on a bridged network connection and runs the latest Debian. If someone could tell me how to do this I would be much obliged.
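
    A minimal iptables sketch on the VM itself: let it reach its gateway (and therefore the outside world) but drop everything else aimed at the local subnet. The addresses below are placeholders for your LAN:

        # allow traffic to the router/gateway
        iptables -A OUTPUT -d 192.168.1.1 -j ACCEPT
        # drop everything else destined for the local subnet
        iptables -A OUTPUT -d 192.168.1.0/24 -j DROP

    Since the team only gets unprivileged SSH accounts, they cannot simply flush these rules; persist them with iptables-save/iptables-restore (or the iptables-persistent package) so they survive reboots.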

    Read the article

  • WebSVN accept untrusted HTTPS certificate

    - by Laurent
    I am using WebSVN with a remote repository. This repository uses the https protocol. After having configured WebSVN I get on the WebSVN web page:
        svn --non-interactive --config-dir /tmp list --xml --username '***' --password '***' 'https://scm.gforge.....'
        OPTIONS of 'https://scm.gforge.....': Server certificate verification failed: issuer is not trusted
    I don't know how to tell WebSVN to run the svn command so that it accepts and stores the certificate. Does someone know how to do it?
    UPDATE: It works! In order to have something which is well organized, I have updated the WebSVN config file to relocate the Subversion config directory to /etc/subversion, which is the default path for Debian:
        $config->setSvnConfigDir('/etc/subversion');
    In /etc/subversion/servers I have created a group and associated the certificate to trust:
        [groups]
        my_repo = my.repo.url.to.trust

        [global]
        ssl-trust-default-ca = true
        store-plaintext-passwords = no

        [my_repo]
        ssl-authority-files = /etc/apache2/ssl/my.repo.url.to.trust.crt

    Read the article

  • Compilation of Etherpad fails in an OpenVZ VE

    - by ulf
    Hi everyone. I'm almost giving up; this will be my last try. I am trying to compile Etherpad on my OpenVZ server. It's running Debian 5.0 as the host system, and in the VE I've got Ubuntu 10.04. I installed Etherpad in this VE with the instructions from the official Ubuntu wiki: https://wiki.ubuntu.com/Etherpad. Everything runs fine until it comes to compilation. After calling bin/build.sh as described in the wiki, the first steps run fine. But then I run into a memory error:
        java.io.IOException: Cannot run program "cp": java.io.IOException: error=12, Cannot allocate memory
    Well, I understand the error message but don't see the cause. The command free tells me that there's plenty of memory left in this VE:
                     total       used       free     shared    buffers     cached
        Mem:       2415236    1140872    1274364          0          0          0
        -/+ buffers/cache:    1140872    1274364
        Swap:            0          0          0
    Beautiful. But even repeating the compilation process doesn't get me any further. Any help would be appreciated.
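
    On OpenVZ, error=12 from a fork usually means the container hit its privvmpages (or vmguarpages) barrier rather than running out of free RAM: Java's Runtime.exec has to fork, which momentarily doubles the JVM's committed address space. A quick check inside the VE (a sketch):

        # a non-zero failcnt in the last column means a limit was hit
        grep -E 'uid|privvmpages|vmguarpages' /proc/user_beancounters

    Raising those beancounters on the host (vzctl set CTID --privvmpages ... --save) or capping the build's JVM heap with a smaller -Xmx are the usual fixes.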

    Read the article

  • Remote Software Solution that Acts as a Client

    - by Richard
    I am looking for something that I am not sure exists. I have a remote computer that will not accept incoming traffic because the ISP blocks the ports (basically a double-NAT situation that I am unable to get around). I am wondering: if the computer acts only as a client, is there any solution out there that will still allow remote access to it? I do have other servers on the net with static IPs that the computer could initiate a connection with. I am thinking of using Debian Linux; however, the computer is not built yet, so the OS is not overly important at this point.
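
    This is exactly what a reverse SSH tunnel does: the unreachable box dials out to one of the static-IP servers and carries its own SSH port along. A sketch (host names and ports are placeholders):

        # on the remote (double-NATed) computer: keep a tunnel open to the static server
        autossh -M 0 -N -R 2222:localhost:22 tunnel@static-server.example.com

        # from anywhere: hop through the static server, then into the tunnel
        ssh tunnel@static-server.example.com
        ssh -p 2222 localhost

    A VPN (e.g. OpenVPN) initiated from the remote side accomplishes the same thing if more than SSH access is needed.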

    Read the article

  • How to setup a fast VPN server

    - by Saif Bechan
    I am trying to set up a VPN that has a fast download speed. The server I have is a Linux server, and from it I can download at 2 megabytes a second. At home I can also download at 2 megabytes a second. All the downloads I do are from the same source, not a different server. Now I have set up a VPN connection between my home and the server, and through it I am only downloading at 64 kilobytes a second! The connection I have created is a PPTP server on a Debian machine. My question is whether it is possible to optimize this connection. Should I maybe switch to OpenVPN, or change operating systems? Or are there settings to tweak to make the connection optimal? PS: the server is running on a Xen node, and I have done the proper IP forwarding.
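
    A drop from 2 MB/s to 64 KB/s over PPTP is very often fragmentation: the GRE tunnel eats into the MTU and large packets get fragmented or dropped. Two things worth trying on the Debian server before switching software (a sketch):

        # clamp TCP MSS on traffic forwarded through the tunnel
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
            -j TCPMSS --clamp-mss-to-pmtu

        # or lower the ppp interface MTU (1400 is a guess; tune per link)
        ip link set dev ppp0 mtu 1400

    If that does not help, OpenVPN over UDP is usually the better-performing and easier-to-tune option, and it avoids GRE entirely.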

    Read the article

  • The cache is filling up very quickly

    - by CompilingCyborg
    The memory and the cache fill up quite quickly under my Linux Mint 9 (Isadora) system. I used Ubuntu and Debian before, and they did not cause this issue at all. At the moment I am frequently typing the following command to empty the cache: echo 3 > /proc/sys/vm/drop_caches. Is there any way around this, or do you know what's going wrong? I am only programming on this machine; no graphics, no games, nothing. Thanks in advance for your help!
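
    A filesystem cache that grows to fill otherwise-free RAM is normal Linux behaviour and is released automatically under memory pressure; the number to watch is the "-/+ buffers/cache" used figure, not the top line. A sketch (the figures shown are purely illustrative):

        free -m
        #              total   used   free  shared  buffers  cached
        # Mem:          3954   3820    134       0      180     2700
        # -/+ buffers/cache:    940   3014   <-- memory actually used by programs

    If that second line stays low, dropping the caches gains nothing and only slows down the next disk access.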

    Read the article

  • Why is the size of windows off by 226x238 if defined via the Window Rules?

    - by Bobby
    I have installed Sawfish 1.8.2 from source on my new Ubuntu 12.04 installation following the Debian instructions, but I had this problem also with the stock 1.5.3. Whenever I define dimensions in the Window Rules for a window, the size is off by exactly 226x238 pixels, which means that 100x100 turns into 326x338. That's very odd behavior, given that Sawfish is saving and loading the dimensions of the windows correctly (if saved via the window menu). Some additional system information:
        $ uname -a
        Linux Dagon 3.2.0-24-generic-pae #39-Ubuntu SMP Mon May 21 18:54:21 UTC 2012 i686 i686 i386 GNU/Linux
        $ sawfish --version
        sawfish version 1.8.2
    nvidia proprietary driver, 9600GT; two monitors, 1920x1080 + 1440x900 in one session. Positioning the windows works fine; only the dimensions are off by that odd amount. Does somebody have an idea why?

    Read the article

  • Using slapcat to backup LDAP

    - by rsw
    I'm running an OpenLDAP directory on a Debian server, using the hdb backend. I've been wondering about backups and did some reading on the net. Slapcat seems to be the way to go, but I keep seeing posts saying that it is dangerous to use while slapd is running. In what way is this dangerous? I'm planning to run these backups during the night, and no writing will be done to the database during the night; reads will probably occur, though. If there's any other backup solution better suited for this, I'd gladly hear about it.
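
    For reference, the kind of nightly job being discussed is just a one-liner around slapcat; the database number and path below are illustrative:

        # dump database 1 to a dated LDIF for backup
        slapcat -n 1 -l /var/backups/ldap/ldap-$(date +%F).ldif

    Restoring is done with slapadd against a stopped slapd, which is where the usual "stop the daemon first" warnings genuinely apply.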

    Read the article
