Search Results

Search found 10556 results on 423 pages for 'practical approach'.


  • Adding MySQL servers/data nodes to a cluster without restarting MySQL Cluster

    - by Dwayne Johnson
    I currently have MySQL clustering up and running. For high scalability, is there a way to add SQL nodes, data nodes, or management nodes without restarting the entire cluster? I would like to understand how this is implemented, or find documentation I can read. I believe only the latest version supports this; I am running NDB 7.0. I am aware that I can add nodes online, but that requires performing a rolling restart. What other approach can I take to implement this in my network without restarting?
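
    For reference, here is a sketch of the online add-data-node flow as documented for MySQL Cluster NDB 7.0 (node IDs, hostnames, and the table name are examples). A rolling restart of the existing nodes is still part of the procedure, but the cluster keeps serving throughout, because nodes restart one at a time:

        # 1. add [ndbd] sections for the new nodes to config.ini, then restart the
        #    management node so it re-reads the config (stop/start ndb_mgmd manually
        #    if your version can't restart it from the client)
        ndb_mgm -e "1 RESTART"
        # 2. rolling-restart the existing data and SQL nodes so they pick up the new config
        ndb_mgm -e "3 RESTART" && ndb_mgm -e "4 RESTART"
        # 3. start each new data node for the first time (run on the new hosts)
        ndbd --initial
        # 4. put the new nodes into a node group (management client command)
        ndb_mgm -e "CREATE NODEGROUP 5,6"
        # 5. rebalance existing data onto the new nodes, table by table, from any SQL node
        mysql -e "ALTER ONLINE TABLE mydb.mytable REORGANIZE PARTITION"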

  • Pros/Cons of switching from Exchange to GMail

    - by Brent
    We are a medium-large non-profit company, with around 1000 staff and volunteers, and have been using MS Exchange (currently 2003) for our mail system for years. I recently attended a Google conference where they were positing that "cloud computing is the way of the future", and encouraging us to switch from running our own email on Exchange to using Gmail and Google Apps for everything. Additionally, one of our departments has been pushing from inside to make this transition within their own department, if not throughout the entire organization. I can definitely see some benefits, such as:
    - Archive space: we never seem to have the space our users want, and of course, the more we get, the more we have to back up.
    - OS agnosticism: Exchange is definitely built for Windows, and with Mac and Linux users on the rise, these users increasingly demand better tools and support. Google offers this.
    - Better archiving: the potential for e-discovery, which doesn't exist in a practical way with our current setup.
    - Switching would relieve us of a fair bit of server administration, give more options to our end users, and free up the server resources we are now using for Exchange.
    - Our IT department wants to be perceived as providing up-to-date solutions to technical problems, and this change would definitely project such an image.
    - Google's infrastructure is obviously much more robust than ours, and they employ some of the world's best security and network experts.
    However, there are also some serious drawbacks:
    - We would essentially be outsourcing one of our mission-critical systems to a third party.
    - The switch would inevitably involve Google Apps and perhaps more as well. That means we would have a lot more at the mercy of a single (potentially weak) password. (Is there a way to make this more secure using a password plus a physical key of some sort?)
    - Our data would not remain under our roof, or even in our country (Canada). This obviously has plusses on the disaster recovery side, but I think there are potential negatives on the legal side.
    - I can't imagine that somebody as large as Google would be as responsive as we would want with regard to non-critical issues such as tracing missing emails (I'm not sure how much access we would have to basic mail logs, for instance).
    Can anyone help me evaluate this decision? What issues am I overlooking? What experiences have you had with this transition (or the opposite: Gmail to Exchange)? Can you add to the points I have already outlined?

  • GAC locking problem when running deployment

    - by Kieran
    We have a NAnt script that uses MSBuild to compile our Visual Studio solutions and deploys the DLLs into the GAC. This works well on our integration/test servers as part of continuous integration: CruiseControl uses the NAnt scripts, and every time the DLLs are put into the GAC without problems. On our local development machines, where we use Subversion, Visual Studio, etc. for development, certain DLLs frequently do not make it into the GAC when we run the build. We think we have narrowed this down to Visual Studio and/or a plug-in locking the GAC or the DLLs for some reason. Strangely, if we run the build a second time, all the DLLs make it into the GAC. We have added various iisreset calls to the NAnt script in the hope of releasing the lock, but to no avail. Can anyone suggest a good approach to attack this problem? All the best

  • Preserving URLs after CMS migration with redirect database and apache http?

    - by stwissel
    We will migrate an entire intranet from one CMS to another. All URLs will change in a non-predictable pattern, but I can capture a file of original,new URL pairs that I can feed into anything. I have hundreds of thousands of URLs, not just a few hundred. What I would like to do: every URL that is not found (404) should be checked against the database, and if a new URL is found, a 301/308 should be issued instead. Some trickery to suggest similar pages in the 404 message when the lookup is unsuccessful would be an added bonus. Is that the right approach, or should I check for a redirect first every time? How would I do that in Apache 2? Would that be a custom 404 module?
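
    For what it's worth, checking the map up front is usually cleaner than hooking the 404 handler, and Apache's mod_rewrite can consult an external lookup table directly. A minimal sketch, assuming the original,new pairs end up in /etc/apache2/legacy-urls.txt (the paths and the map name are examples):

        # build a hashed map once from the plain-text "old-path new-url" pairs:
        #   httxt2dbm -i /etc/apache2/legacy-urls.txt -o /etc/apache2/legacy-urls.map
        RewriteEngine On
        RewriteMap legacy "dbm:/etc/apache2/legacy-urls.map"
        # redirect only when the lookup actually finds the old path
        RewriteCond ${legacy:%{REQUEST_URI}|NOT_FOUND} !=NOT_FOUND
        RewriteRule .* ${legacy:%{REQUEST_URI}} [R=301,L]

    A dbm map keeps lookups fast even with hundreds of thousands of entries, and a custom ErrorDocument 404 can still sit behind it for the "suggest similar pages" bonus.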

  • Bash: Reset and Clear Commands

    - by sixtyfootersdude
    I have been using the command reset to clear my terminal, although I am pretty sure this is not what I should be doing. reset, as the name suggests, resets your entire terminal (it changes lots of stuff). Here is what I want: I basically want to use the command clear. However, if you clear and then scroll up, you still get tonnes of stuff from before. In general this is not a problem, but I am looking at gross logs that are long, and I want to make sure that I am viewing only the most recent one. I know that I could use more or something like that, but I prefer this approach.
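
    A lighter-weight middle ground, assuming an xterm-compatible terminal emulator (the "erase scrollback" escape, CSI 3 J, is an xterm extension and support varies by emulator):

        # clear the visible screen *and* the scrollback buffer, without reset's re-initialisation
        clear && printf '\033[3J'
        # handy as an alias:
        alias cls="clear && printf '\033[3J'"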

  • KVM online disk resize?

    - by Eil
    We're evaluating KVM for Linux virtualization on a few projects. All is going well so far. But one of our requirements is the ability to add disk space to a running guest without rebooting it or taking it offline. Is this possible with KVM? The only thing I've found so far (but have not tested yet) is the ability to hotplug disks into the machine. If I go this route, I could always add the new disk to an LVM volume group on the guest and then extend the chosen logical volume. The biggest downside to this approach is that over time we might end up with guests having variable numbers of virtual disks. The "real" disk space would be provided to the host over a SAN, so we can always add more space to the host whenever needed.
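
    For what it's worth, newer libvirt/qemu releases can also grow an existing disk under a running guest, which avoids the disk-sprawl problem. A sketch, assuming a domain named guest1 whose disk is attached as vda (all names and sizes are examples, and your libvirt/qemu must be recent enough to support blockresize):

        # on the host: grow the disk live; libvirt resizes the backing image and notifies the guest
        virsh blockresize guest1 vda 60G
        # inside the guest, the kernel sees the new size (check dmesg), then grow what's on it,
        # e.g. for an LVM PV on the second partition (-r also grows the filesystem on recent lvm2):
        pvresize /dev/vda2
        lvextend -r -L +20G /dev/vg0/data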

  • OpenVPN performance: how many concurrent clients are possible?

    - by Steffen Müller
    I am evaluating a system for a client in which many OpenVPN clients connect to one OpenVPN server. "Many" means 50,000 - 1,000,000. Why am I doing this? The clients are distributed embedded systems, each sitting behind the system owner's DSL router, and the server needs to be able to send commands to the clients. My first naive approach is to make the clients connect to the server via an OpenVPN network. This way, the secure communication tunnel can be used in both directions. This means that all clients are always connected to the server, and the number of clients adds up over the years. The question is: does the OpenVPN server explode when it reaches a certain number of clients? I am already aware of the maximum TCP connection limit; for that (and other reasons) the VPN would have to use UDP transport. OpenVPN gurus, what is your opinion?
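
    Not an authoritative answer, but worth noting: a single OpenVPN process is single-threaded, so deployments anywhere near that size typically shard clients across many UDP instances (on different ports, hosts, or both). A rough sketch of one such instance's server config (all numbers are illustrative):

        proto udp
        port 1194
        dev tun0
        server 10.8.0.0 255.255.0.0   # a /16 pool caps one instance at ~65k addresses anyway
        max-clients 10000             # explicit per-process cap; scale out with more instances
        keepalive 30 120              # longer keepalive intervals cut idle-traffic load

    Each client is then assigned an instance (port/host) at provisioning time, and crypto load spreads across cores by running one instance per core.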

  • Configuring and managing Windows web server

    - by Mike C.
    Hello, I run a few websites, and I was thinking of paying for a dedicated Windows web server from GoDaddy instead of paying for each site's hosting individually. I know enough about IIS to configure host headers and the like, but I'm a little fuzzy on the email portion of the hosting. I have a few questions:
    1. Do I need to install an SMTP server on the web server to allow emails to be sent/received at a website email address? Or is there another approach that I'm unaware of?
    2. Are there tools that monitor the amount of bandwidth used by the server? GoDaddy charges for bandwidth, and I want to make sure I don't go over. (One built-in option is sketched below.)
    3. Am I opening a can of worms that I don't really want to open by going the dedicated-server route - things like server updates, security, etc.?
    Thanks!
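
    On the bandwidth question, a minimal sketch using the performance-counter logging built into Windows Server 2003 (the counter path, interval, and output path are just examples):

        rem log total NIC throughput every 60 seconds to a binary perf log
        logman create counter NetBytes -c "\Network Interface(*)\Bytes Total/sec" -si 60 -o C:\perflogs\netbytes
        logman start NetBytes

    On the mail question: Server 2003 does ship with a basic SMTP service (and a simple POP3 service) that can cover sending and simple mailboxes, though many people on a single dedicated box find hosted mail less hassle than running their own.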

  • Is RAID 0 or JBOD better for home media server?

    - by Donald Hughes
    I have an external two-bay drive enclosure (the OWC Mercury Elite-AL Pro) connected to a Mac Mini (my home media server) over FireWire 800, and I'm streaming media to other computers in the house over wired gigabit. I have two 1.5 TB drives that I'm using independently right now: the media is on one, and I'm mirroring the files to the other drive at night as a backup. But as I approach filling up the drive, I'm wanting to span those two drives together to give me a total of about 3 TB, and then buy another drive for backups. The external enclosure supports both RAID 0 and JBOD, but I'm not clear on which would be better in this situation. Would RAID 0 provide any performance improvement over JBOD for streaming video (possibly several streams at once)? How does each affect the MTBF of the drives? In general, should I choose RAID 0, JBOD, or keep them independent?
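
    A rough throughput sanity check, using typical ballpark numbers (a high-bitrate 1080p stream is on the order of 2-5 MB/s; a 1.5 TB SATA drive of that era sustains somewhere near 100 MB/s sequentially):

        5 MB/s per stream x 4 simultaneous streams = 20 MB/s, far below one drive's ~100 MB/s

    So for streaming, RAID 0's extra sequential speed buys little, while either spanning scheme means one drive failure takes down the whole volume (two drives roughly halve the effective MTBF of the set). The difference is that when a JBOD member dies, files that lived entirely on the surviving drive are sometimes recoverable, whereas striped data never is.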

  • Forward Shibboleth Environment Variables to Tomcat via Apache

    - by Deepak Singh Rawat
    I am using Shibboleth v2.3 with the Apache web server and the Tomcat application server, with Apache acting as a reverse proxy via mod_proxy.so. I am not able to forward the Shibboleth environment variables from Apache to Tomcat. I am able to forward the attributes in headers, but as already mentioned in the wiki, that approach is not safe. I have tried forwarding the environment variables with the following directive: SetEnv AJP_username ${username} Then on the Java side I can access the attribute with: request.getAttribute("username"); The strange thing here is that I get a different value instead of the one set by Shibboleth - I get the Windows account name as a result. If I use any other attribute name, I get a null value. I have searched a lot and have run out of options. Please guide me towards the right solution. My setup details: Shibboleth version: 2.3; OS: Windows XP SP3; Web server: Apache 2.2; Application server: Tomcat 6; Proxy module: mod_proxy.so
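
    For what it's worth, SetEnv is evaluated at configuration time, so ${username} never expands to the per-request Shibboleth attribute, which would explain the odd values. The approach Shibboleth's own documentation describes for AJP is to have the SP prefix the attributes itself and to proxy over AJP rather than plain HTTP. A sketch (the entityID and paths are examples):

        <!-- shibboleth2.xml: prefix every attribute so mod_proxy_ajp carries it to Tomcat -->
        <ApplicationDefaults entityID="https://sp.example.org/shibboleth"
                             attributePrefix="AJP_" ...>

        # httpd.conf: forward the protected path over AJP, not HTTP
        ProxyPass /secure ajp://localhost:8009/secure

    Tomcat then exposes the attribute without the AJP_ prefix, so request.getAttribute("username") should return the Shibboleth value.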

  • MacPorts, how to run "post-destroot" script

    - by Potatoswatter
    I'm trying to install MacPorts gdb; it seems to be poorly supported… Running "port install" installs it to /opt/local/libexec/gnubin/gdb, but the intent doesn't seem to be to add that directory to $PATH. The Portfile doesn't define any parameters for port select, which is typically how a MacPorts installation is set to handle default Unix commands. But it does include these lines:

        foreach binary [glob -tails -directory ${destroot}${prefix}/bin g*] {
            ln -s ${prefix}/bin/${binary} ${destroot}${prefix}/libexec/gnubin/[string range $binary 1 end]
        }

    This is buried under an action labeled post-destroot. destroot is a MacPorts command, but post-destroot is not. The script is apparently not run by port install or port activate - or, if it's failing, it's doing so silently. Is there a better approach than creating the links manually?
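
    Worth checking before linking by hand: pre-/post- blocks are standard Portfile phase hooks (post-destroot runs right after the destroot phase), so if the symlinks are missing, the block may be running but not doing what you expect. MacPorts' debug mode will show it executing:

        # rebuild just the destroot phase with debug output and watch for the post-destroot block
        sudo port clean gdb
        sudo port -d destroot gdb
        # then see what the installed port actually contains
        port contents gdb | grep gnubin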

  • What steps can you take to ensure sane build environments when compiling software?

    - by Chris Adams
    Hi guys, I've been stuck with a compilation problem when building a standardised virtual machine on CentOS 5.4, and I'm in the dark here as to a) why this error is occurring and b) how to fix it. In the hope that someone else stumbles across this problem too, I'm hoping someone can help me find the solution. I'm getting a "configure: error: newly created file is older than distributed files!" error when trying to compile Ruby Enterprise, as below, when I run the installer, and the solutions offered on the forums (checking the time, and touching the files to update the timestamps associated with them) don't seem to be helping. What steps can I take to work out the cause of this problem?

        [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo ./installer
        Welcome to the Ruby Enterprise Edition installer
        This installer will help you install Ruby Enterprise Edition 1.8.7-2009.10.
        Don't worry, none of your system files will be touched if you don't want them to,
        so there is no risk that things will screw up.

        You can expect this from the installation process:

          1. Ruby Enterprise Edition will be compiled and optimized for speed for this system.
          2. Ruby on Rails will be installed for Ruby Enterprise Edition.
          3. You will learn how to tell Phusion Passenger to use Ruby Enterprise Edition
             instead of regular Ruby.

        Press Enter to continue, or Ctrl-C to abort.

        Checking for required software...
         * C compiler... found at /usr/bin/gcc
         * C++ compiler... found at /usr/bin/g++
         * The 'make' tool... found at /usr/bin/make
         * Zlib development headers... found
         * OpenSSL development headers... found
         * GNU Readline development headers... found

        --------------------------------------------
        Target directory

        Where would you like to install Ruby Enterprise Edition to?
        (All Ruby Enterprise Edition files will be put inside that directory.)
        [/opt/ruby-enterprise] :

        --------------------------------------------
        Compiling and optimizing the memory allocator for Ruby Enterprise Edition
        In the mean time, feel free to grab a cup of coffee.

        ./configure --prefix=/opt/ruby-enterprise --disable-dependency-tracking
        checking build system type... i686-pc-linux-gnu
        checking host system type... i686-pc-linux-gnu
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... configure: error: newly created file is older than distributed files!
        Check your system clock

    This is a virtual machine running on VirtualBox, and the time of the host and the virtual machine are identical and up to date. I've also tried running this after updating the time with an NTP client, to no avail. I tried this after reading a post by someone having a similar problem:

        [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ date
        Tue Apr 27 08:09:05 BST 2010

    The other approach I've tried is to touch the top-level files in the build folder, as suggested elsewhere, but this hasn't worked either (and to be honest, I'm not sure why it would have):

        [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo touch ruby-enterprise-1.8.7-2009.10/*

    I'm not sure what I can do next here - the problem seems to be the bash configure script, which returns this error at line 2214:

        { echo "$as_me:$LINENO: checking whether build environment is sane" >&5
        echo $ECHO_N "checking whether build environment is sane... $ECHO_C" >&6; }
        # Just in case
        sleep 1
        echo timestamp > conftest.file
        # Do `set' in a subshell so we don't clobber the current shell's
        # arguments. Must try -L first in case configure is actually a
        # symlink; some systems play weird games with the mod time of symlinks
        # (eg FreeBSD returns the mod time of the symlink's containing
        # directory).
        if (
           set X `ls -Lt $srcdir/configure conftest.file 2> /dev/null`
           if test "$*" = "X"; then
              # -L didn't work.
              set X `ls -t $srcdir/configure conftest.file`
           fi
           rm -f conftest.file
           if test "$*" != "X $srcdir/configure conftest.file" \
              && test "$*" != "X conftest.file $srcdir/configure"; then
              # If neither matched, then we have a broken ls. This can happen
              # if, for instance, CONFIG_SHELL is bash and it inherits a
              # broken ls alias from the environment. This has actually
              # happened. Such a system could not be considered "sane".
              { { echo "$as_me:$LINENO: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&5
              echo "$as_me: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&2;}
              { (exit 1); exit 1; }; }
           fi

           ### PROBLEM LINE ####
           # this line is the problem line - this is returned true, sometimes it isn't and I can't
           # see a pattern that determines when this test will pass or not.
           test "$2" = conftest.file
           )
        then
           # Ok.
           :
        else
           { { echo "$as_me:$LINENO: error: newly created file is older than distributed files!
        Check your system clock" >&5
           echo "$as_me: error: newly created file is older than distributed files!
        Check your system clock" >&2;}
           { (exit 1); exit 1; }; }
        fi

    The thing that makes this really frustrating is that the script works sometimes: when the VM has been running for an hour or so it works, but not at boot. There's nothing I see in the crontab that suggests any hourly tasks are run that might change the state of the system enough to make a difference to this script working. I'm totally at a loss when it comes to debugging beyond here. What's the best approach to take here? Thanks

  • List all devices in a system

    - by pareshverma91
    In my understanding, Linux can only list those devices it understands, i.e., those for which drivers have been installed; I think lspci is the command for that. But how can one find out whether some device exists in the system for which no driver is installed, perhaps with some hint about what that device is for and which driver would suffice for it? I would like to know this so that I can recompile my Linux kernel down to a minimum, and I'd like to avoid a trial-and-error approach.
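
    For what it's worth, lspci enumerates the PCI bus itself, so devices show up whether or not a driver is bound. A few standard commands that help map hardware to drivers (the -k flag needs a reasonably recent pciutils; lspci -v shows similar detail on older versions):

        lspci -k     # each PCI device plus the kernel driver/module currently bound, if any
        lspci -nn    # adds vendor:device IDs, handy for looking up unknown hardware
        lsusb        # the same enumeration idea for the USB bus
        cat /proc/bus/input/devices   # input devices the kernel knows about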

  • How to protect my VPS from winlogon RDP spam requests

    - by Valentin Kuzub
    I have attackers constantly hitting my RDP service and generating thousands of audit failures in the event log. The password is pretty elaborate, so I don't think brute-forcing will get them anywhere. I am using a VPS, and I am pretty much a noob at Windows Server security (I am a programmer myself, and it's the web server for my site). What is the recommended approach for dealing with this? I would rather block IPs after some number of failures, for example. Sorry if the question is not appropriate.
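
    Two low-effort mitigations, assuming Server 2008 or later (the firewall syntax differs on 2003, and the admin IP below is a placeholder for your own):

        rem allow RDP only from a known admin address instead of the whole internet
        rem (then disable the default "Remote Desktop" allow rule so this is the only path in)
        netsh advfirewall firewall add rule name="RDP admin only" dir=in action=allow protocol=TCP localport=3389 remoteip=203.0.113.5
        rem lock accounts after repeated failures (the built-in Administrator is exempt by default)
        net accounts /lockoutthreshold:10 /lockoutduration:30

    If you need dynamic block-after-N-failures behaviour, that generally means a third-party tool or a scheduled script that parses the Security log and adds firewall rules.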

  • Launch an arbitrary application with a specific icon

    - by Camilo Martin
    I'm thinking about customizing my Windows 7 application icons so that they all follow a specific theme (Token-like icons). This would involve using Resource Hacker (or similar) to patch each .exe. Not only would this be tiresome, it would also have to be redone after each update of each application (never mind that some applications could break just because they were tampered with). Instead, is there any way to launch an application with a specific icon? Ideally it would be something from the command line (so I can make a shortcut), like this: launchwithicon.exe --app C:\myapp.exe --icon C:\myicon.ico Note that while it is possible to do something similar by setting the taskbar to "always combine, hide labels", I do not like this approach and am instead looking for something that works without combining the taskbar icons.
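
    Shortcuts already carry their own icon independent of the target, and they can be created from the command line via PowerShell; a sketch (the paths are examples). The caveat is that this only themes the shortcut itself: a running, uncombined taskbar button still shows the executable's own icon, which is exactly the case you care about:

        # create a shortcut to C:\myapp.exe that displays C:\myicon.ico
        $shell = New-Object -ComObject WScript.Shell
        $lnk = $shell.CreateShortcut("$env:USERPROFILE\Desktop\MyApp.lnk")
        $lnk.TargetPath = "C:\myapp.exe"
        $lnk.IconLocation = "C:\myicon.ico,0"
        $lnk.Save()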

  • Redirect users to internal webpage on first visit

    - by Sihan Zheng
    I have a LAN of maybe 20-30 computers and a Windows Server 2003 server on hand (I can also run any x86 Linux distro). What I am trying to do is redirect users to a web server inside the LAN the first time they visit certain domains. For example, the first time a user visits google.com, they will be redirected to 192.168.1.2 (a web server, where they will be shown a custom web page); attempts after that will go to Google. Essentially, I am trying to provide a captive-portal-like service, showing people a custom web page the first time they try to access certain websites (but not others). I'm pretty flexible on how this can be done, as long as it works. Can you give me an idea of how to approach this problem? I am (hopefully) looking for a free solution. Thanks
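
    One way to approximate this is an intercepting proxy that remembers who has already seen the splash page; Squid ships a session helper for exactly this splash-page pattern. A rough squid.conf sketch (the helper path and timings vary by version/distro, the domain list file is an example, and you would still have to steer the LAN's web traffic through the proxy, e.g. by making it the gateway or using policy routing):

        # mark a client as "seen" for 2 hours after their first request
        external_acl_type session ttl=60 %SRC /usr/lib/squid/squid_session -t 7200
        acl existing_users external session
        # only trigger the splash for the domains you care about
        acl watched dstdomain "/etc/squid/watched-domains.txt"
        http_access deny watched !existing_users
        deny_info http://192.168.1.2/splash.html existing_users

    Note the session is per client, not per domain: after the first splash, all watched domains pass for that client until the session expires.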

  • How to debug xsane "no devices available" error on 64 bit Ubuntu 10.04

    - by BD at Rivenhill
    I have a Brother MFC-J615W printer/scanner, and I wish to use the scanning feature across a network with a computer running 64-bit Ubuntu 10.04. I have installed the drivers from the Brother website and followed all of the instructions, and printing works fine, but xsane (installed from the repositories) produces a popup with the message "no devices available" on startup. I recently had success with a similar approach when installing drivers for a Brother MFC-495CW on 32-bit Ubuntu 9.10, so I am aware of issues such as needing root access if the driver permissions are not set correctly - but running xsane as root does not solve this problem. Are there any tools available to debug this further, or does anyone have advice on how to proceed?
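
    xsane just sits on top of SANE, so SANE's own diagnostics are the place to start. A sketch (the Brother backend name is a guess: recent Brother drivers register as brother2 or brother3, so check /etc/sane.d/dll.conf for the exact name):

        scanimage -L                           # does SANE itself see any scanner?
        sane-find-scanner -q                   # low-level probe (mostly useful for USB)
        SANE_DEBUG_BROTHER3=255 scanimage -L   # verbose output from the Brother backend

    On 64-bit installs, Brother's 32-bit backend libraries are a common culprit, so debug output complaining about a library that fails to load would point that way.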

  • Is there a Windows or Linux equivalent of Soulver calculator application?

    - by Shevek
    I've just been shown a brilliant calculator app called Soulver, which is only available on Mac OS X ("maths on a Mac as it should be"). Soulver is a new kind of calculator application which uses a simple yet powerful word-processor-style interface instead of the traditional "button" approach to doing math. Main features:
    - No equals button: Soulver instantly calculates as you type.
    - Multiple lines: Soulver lets you do math over multiple lines and edit previous expressions.
    - Flexible with words: Soulver doesn't mind if you include words or labels between numbers.
    - Basic functions: Soulver includes every standard calculator function, like sin(), cos() & tan().
    - Clever English functions: Soulver includes some "English" math functions. For instance, you can type "10% off $200" and get $180.
    - Floating palettes: Soulver's answer & stats palettes give you conversions and statistics on your work as you type.
    - Save your work: like a word processor, Soulver allows you to save and reopen your work.
    This is a fantastic concept, and I would really like to find its equivalent for Windows and/or Linux. Any suggestions?

  • Building php-devel package from source (php 5.3)

    - by BajaBob
    I am building PHP 5.3 RPM packages for our custom CentOS 5 yum repo. I am fairly new to building RPMs, to be honest, but I have had moderate success downloading the SRPMs for a given package and repackaging them using the "rpmbuild --rebuild" command. One thing that is throwing me off, though, is how to satisfy the php-devel package. I obviously have the PHP 5.3 source files, as I was able to build my php-common and other packages with them, but I am not sure how to actually build the devel package! From what I understand, I already have most of what I need - the latest PHP 5.3.5 source tarball. However, I am not sure how to build the correct .spec file to satisfy what I need. If you are knowledgeable in this area, would you mind helping a fellow sysadmin out - sharing a spec file, or at least giving me some pointers on how to approach it? Thanks much, serverfault community! -BajaBob
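
    For what it's worth, php-devel isn't built from its own spec: it's a subpackage declared inside the main php.spec, and rpmbuild emits it alongside php, php-common, etc. in one pass. A skeletal sketch of the relevant sections (the file lists are illustrative; crib the real ones from an existing php SRPM's spec):

        %package devel
        Summary: Files needed for building PHP extensions
        Group: Development/Libraries
        Requires: php = %{version}-%{release}

        %description devel
        Header files plus the phpize and php-config scripts for compiling PHP extensions.

        %files devel
        %{_bindir}/phpize
        %{_bindir}/php-config
        %{_includedir}/php

    A single "rpmbuild -ba SPECS/php.spec" then produces all of the subpackage RPMs at once.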

  • VoIP Tunnel Implementation for SIP Client

    - by Mahendra
    I am planning to provide an option for tunneling in my SIP client. I have tried searching the web for an open-source implementation of this, but couldn't find one. My questions are:
    1. If I go about writing my own custom code to implement the feature, what are the different parameters/cases I should consider, and what should my approach be to start?
    2. Is there an open-source implementation of SIP tunneling already available?
    Any input is appreciated. Thanks.

  • Log application changes made to the system

    - by Maxim Veksler
    Hello, Windows 7, 64-bit. I have an application which I don't trust but still need to run. I would like to run the installer of this application, and later the installed executable, under some kind of "strace" for Windows which will record what the application did to the system. Mainly:
    - What files have been created/edited?
    - What registry changes have been made?
    - Which network hosts did the application try to communicate with?
    Ideally I would also be able to generate an "undo" action to revert all the changes. Please don't suggest full virtualization solutions such as VirtualBox, VMware, and co., because the application should run on the host system (a "sandbox" approach will, OTOH, be accepted, IMHO). Do you know of any such utility I can use? Thank you, Maxim.
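
    The usual "strace for Windows" for this job is Sysinternals Process Monitor, which logs file, registry, network, and process activity (it won't produce an undo script, but the trace tells you exactly what to revert). A sketch of driving it around an install, using its documented command-line switches (the installer name is a placeholder):

        procmon.exe /AcceptEula /BackingFile C:\trace.pml /Quiet /Minimized
        untrusted-installer.exe
        procmon.exe /Terminate
        rem then open C:\trace.pml in the Process Monitor GUI and filter by process name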

  • Disable .htaccess in Apache: AllowOverride None set, but .htaccess files are still read

    - by John Magnolia
    I have moved all of our .htaccess config into <Directory> blocks and set AllowOverride None in the default and default-ssl. Although after restarting apache it is still reading the .htaccess files. How can I completely turn off reading these files? Update of all files with "AllowOverride" /etc/apache2/mods-available/userdir.conf <IfModule mod_userdir.c> UserDir public_html UserDir disabled root <Directory /home/*/public_html> AllowOverride FileInfo AuthConfig Limit Indexes Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec <Limit GET POST OPTIONS> Order allow,deny Allow from all </Limit> <LimitExcept GET POST OPTIONS> Order deny,allow Deny from all </LimitExcept> </Directory> </IfModule> /etc/apache2/mods-available/alias.conf <IfModule alias_module> # # Aliases: Add here as many aliases as you need (with no limit). The format is # Alias fakename realname # # Note that if you include a trailing / on fakename then the server will # require it to be present in the URL. So "/icons" isn't aliased in this # example, only "/icons/". If the fakename is slash-terminated, then the # realname must also be slash terminated, and if the fakename omits the # trailing slash, the realname must also omit it. # # We include the /icons/ alias for FancyIndexed directory listings. If # you do not use FancyIndexing, you may comment this out. # Alias /icons/ "/usr/share/apache2/icons/" <Directory "/usr/share/apache2/icons"> Options Indexes MultiViews AllowOverride None Order allow,deny Allow from all </Directory> </IfModule> /etc/apache2/httpd.conf # # Directives to allow use of AWStats as a CGI # Alias /awstatsclasses "/usr/share/doc/awstats/examples/wwwroot/classes/" Alias /awstatscss "/usr/share/doc/awstats/examples/wwwroot/css/" Alias /awstatsicons "/usr/share/doc/awstats/examples/wwwroot/icon/" ScriptAlias /awstats/ "/usr/share/doc/awstats/examples/wwwroot/cgi-bin/" # # This is to permit URL access to scripts/files in AWStats directory. # <Directory "/usr/share/doc/awstats/examples/wwwroot"> Options None AllowOverride None Order allow,deny Allow from all </Directory> Alias /awstats-icon/ /usr/share/awstats/icon/ <Directory /usr/share/awstats/icon> Options None AllowOverride None Order allow,deny Allow from all </Directory> /etc/apache2/sites-available/default-ssl <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. 
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown </VirtualHost> </IfModule> /etc/apache2/sites-available/default <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options -Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> Alias /delboy /usr/share/phpmyadmin <Directory /usr/share/phpmyadmin> # Restrict phpmyadmin access Order Deny,Allow Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> /etc/apache2/conf.d/security # # Disable access to the entire file system except for the directories that # are explicitly allowed later. # # This currently breaks the configurations that come with some web application # Debian packages. 
# #<Directory /> # AllowOverride None # Order Deny,Allow # Deny from all #</Directory> # Changing the following options will not really affect the security of the # server, but might make attacks slightly more difficult in some cases. # # ServerTokens # This directive configures what you return as the Server HTTP response # Header. The default is 'Full' which sends information about the OS-Type # and compiled in modules. # Set to one of: Full | OS | Minimal | Minor | Major | Prod # where Full conveys the most information, and Prod the least. # #ServerTokens Minimal ServerTokens OS #ServerTokens Full # # Optionally add a line containing the server version and virtual host # name to server-generated pages (internal error documents, FTP directory # listings, mod_status and mod_info output etc., but not CGI generated # documents or custom error documents). # Set to "EMail" to also include a mailto: link to the ServerAdmin. # Set to one of: On | Off | EMail # #ServerSignature Off ServerSignature On # # Allow TRACE method # # Set to "extended" to also reflect the request body (only for testing and # diagnostic purposes). # # Set to one of: On | Off | extended # TraceEnable Off #TraceEnable On /etc/apache2/apache2.conf # # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2. Directives that define the parameters of the 'main' or 'default' server, # which responds to requests that aren't handled by a virtual host. # These directives also provide default values for the settings # of all virtual hosts. # 3. Settings for virtual hosts, which allow Web requests to be sent to # different IP addresses or hostnames and have them handled by the # same Apache server process. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "foo.log" # with ServerRoot set to "/etc/apache2" will be interpreted by the # server as "/etc/apache2/foo.log". # ### Section 1: Global Environment # # The directives in this section affect the overall operation of Apache, # such as the number of concurrent requests it can handle or where it # can find its configuration files. # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE! If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the LockFile documentation (available # at <URL:http://httpd.apache.org/docs/2.2/mod/mpm_common.html#lockfile>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # #ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. 
# LockFile ${APACHE_LOCK_DIR}/accept.lock # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. # KeepAliveTimeout 4 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 500 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadLimit: ThreadsPerChild can be changed to this maximum value during a # graceful restart. ThreadLimit can only be changed by stopping # and starting Apache. # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. 
If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog ${APACHE_LOG_DIR}/error.log # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn # Include module configuration: Include mods-enabled/*.load Include mods-enabled/*.conf # Include all the user configurations: Include httpd.conf # Include ports listing Include ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # Include of directories ignores editors' and dpkg's backup files, # see README.Debian for details. # Include generic snippets of statements Include conf.d/ # Include the virtual host configurations: Include sites-enabled/
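
    A couple of hedged observations on the config above. With AllowOverride None in effect for a path, Apache should not even open .htaccess there (the AccessFileName line is harmless on its own), so if the files are still honoured, some merged section is probably re-enabling overrides. Note, for instance, that mods-enabled/userdir.conf above still grants "AllowOverride FileInfo AuthConfig Limit Indexes" for /home/*/public_html. Some commands for hunting stragglers down:

        # every AllowOverride the live config tree still contains, minus the None lines
        grep -rn 'AllowOverride' /etc/apache2 | grep -vi 'AllowOverride None'
        # which vhosts and config files is this apache actually parsing?
        apache2ctl -S
        # make sure the running processes actually picked up the change
        sudo apache2ctl restart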

  • Doubts about Cloud Infrastructure

    - by Pravin
    Maybe this is a little more of the same questions others have asked, but I wanted to clear up my doubts. I have run my hosting company (a reseller of ESDS) for some years and have done well so far, but I am determined to take the quality and server technology we offer to another level. So far I have understood that there is a difference between cluster and cloud servers: a cluster functions with load balancers that distribute roles across different servers and use the least-overloaded ones, while a cloud is the union of multiple servers which is then virtualized - unlike a cluster, the virtualized environment is allowed to use the CPU and RAM resources of all the servers. My plan is to use 3 dedicated servers to create a cloud server. My doubts:
    - Is this type of cloud server reserved only for big companies (whether because the union of the servers is done in hardware, or in software at a high price)?
    - What characteristics should these servers meet?
    - Which software could possibly be used, and is it available?
    Thanks for your time, Cheers!

  • zero-config CGI enabled web server

    - by halp
    To serve the static content of a directory over HTTP, one can simply navigate to that directory and type:

        python -m SimpleHTTPServer 11111

    which will start an HTTP server on port 11111. This hack is nice because it requires zero configuration: no stand-alone web server, no config files at all. Is it possible to extend this example, or is there an alternate way to achieve this goal, but with CGI support as well? The final goal is to have a quick and lazy way of serving a website from a certain directory. The site has static content (HTML pages, images) but also a CGI script, and the CGI script must work properly when accessed via a browser. Of course I could set up a virtual host in Apache, allow CGI inside it, etc. But that's not a zero-config approach.
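
    The standard library has a CGI-enabled sibling of the same trick; it executes scripts under ./cgi-bin (they need to be executable) and serves everything else statically:

        python -m CGIHTTPServer 11111
        # Python 3 spelling of the same thing:
        python3 -m http.server --cgi 11111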

  • ZFS replication between 2 ZFS file systems

    - by XO01
    I initially replicated tank/storage1 to usb1/storage1-slave (depicted below), and then (deliberately) destroyed the snapshot I replicated from. By doing this, did I lose the ability to replicate incrementally (zfs send -i) between these two file systems? What's the best way to approach syncing these file systems after destroying that snapshot?

        # zfs list
        NAME                  USED  AVAIL  REFER  MOUNTPOINT
        tank                  128G   100G    23K  /tank
        tank/storage1         128G   100G   128G  /tank/storage1
        usb1                  122G   563G    24K  /usb1
        usb1/storage1-slave   122G   563G   122G  /usb1/storage1-slave
        usb1/storage2          21K   563G    21K  /usb1/storage2

    What if I had initially rsync'd tank/storage1 to usb1/storage1-slave and then decided to replicate incrementally via zfs send -i?
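
    Yes: zfs send -i needs a snapshot that exists on both sides, so destroying the last common one forfeits the incremental path - and an rsync'd copy doesn't help either, because replication tracks snapshots, not file contents. A sketch of re-seeding so future syncs can be incremental again (the snapshot names are examples; the receive side is rebuilt from scratch):

        zfs snapshot tank/storage1@base
        zfs destroy -r usb1/storage1-slave
        zfs send tank/storage1@base | zfs receive usb1/storage1-slave
        # keep @base on both sides; the next sync can then be incremental:
        zfs snapshot tank/storage1@next
        zfs send -i @base tank/storage1@next | zfs receive usb1/storage1-slave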
