Search Results

Search found 3477 results on 140 pages for 'vps administration'.

Page 106/140 | < Previous Page | 102 103 104 105 106 107 108 109 110 111 112 113  | Next Page >

  • Oracle 11g connection prob.

    - by Kuldeep dwivedi
    Our application is based on an Oracle 11g database. Its drivers are already installed, but the application throws an error at runtime: "AppliMSP.ADOcommands.GetConnected Error while connecting, Provider cannot be found, It may not be properly installed." I am using the OraOLEDB.Oracle provider. This provider works properly in another module (Administration) of this application, but when I try to connect as a client with the same name and password I get the above error. I have also tried MSDAORA (Oracle) without success. Can anyone help me?
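
    Since the same provider string works from the Administration module, one way to check whether the problem is the client machine rather than the application is to open the provider from a tiny standalone test. This is a hedged sketch only - the data source, user and password are placeholders, not values from the question. If it also reports that the provider cannot be found, the Oracle client / OraOLEDB registration (or a 32-bit vs 64-bit mismatch) on that machine is the likely cause:

        using System;
        using System.Data.OleDb;

        class ProviderCheck
        {
            static void Main()
            {
                // Data source and credentials below are placeholders
                using (var conn = new OleDbConnection(
                    "Provider=OraOLEDB.Oracle;Data Source=ORCL;User Id=scott;Password=tiger;"))
                {
                    conn.Open();   // fails with "Provider cannot be found" if OraOLEDB isn't registered
                    Console.WriteLine("Connected via " + conn.Provider);
                }
            }
        }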

    Read the article

  • high virtual memory usage in openvz?

    - by freedrull
    We're having a lot of memory problems on a new OpenVZ box. It is supposed to have 1 GB of memory; I'm not sure how much of that is burstable or guaranteed memory. Programs in general seem to take up more virtual memory than they do on my box at home and on our other OpenVZ box. I wrote this simple C program:

        #include <stdio.h>
        #include <stdlib.h>

        int main() {
            char *thingy = malloc(500);   /* allocate 500 bytes */
            getchar();                    /* pause so memory usage can be inspected */
            return 0;
        }

    So it simply allocates 500 bytes, waits for input, and then returns. I ran the program on 3 computers. On my home machine and our other OpenVZ box it shows about 1k bytes of virtual memory being used. On the new, problematic machine it's about 3k. I know this is just virtual memory and not resident memory, but why is this machine allocating so much virtual memory? Are there some OpenVZ memory settings I need to adjust? I tried changing the stack size with ulimit -s 256 and restarting some daemons, but I still saw the same results. I'm doing all of my monitoring with htop; is that even a good program to use on an OpenVZ VPS? I've read I should be parsing the output of /proc/user_beancounters instead.
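
    Inside an OpenVZ container, memory is accounted through beancounters rather than the usual host metrics, so reading /proc/user_beancounters directly is generally more telling than htop. A minimal sketch, assuming the standard beancounters layout (a header line followed by one row per resource, with failcnt as the last column):

        # Show the counters most relevant to "virtual memory looks inflated"
        grep -E 'failcnt|privvmpages|physpages|oomguarpages|vmguarpages' /proc/user_beancounters

        # Any row whose last column (failcnt) is non-zero has hit its limit at some point
        awk 'NR>2 && $NF>0 {print}' /proc/user_beancounters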

    Read the article

  • Software distribution from web server to client using PHP/FTP

    - by Jenolan
    I develop and maintain a number of add-ons and utilities for various widgets (mainly aMember), which generally means I need to install PHP-based code onto other people's systems. Whilst I have a VPS and have access to rsync and all sorts of yummy tools, most of the people I deal with have basic FTP access and that's all, folks. Uploading from my local system is also a problem, as I am satellite based (two-way), so it is fairly slow and expensive, and in any case the files are already on my server. So there is no rsync, FXP or SSH, and I can't really install anything, as it is obviously not my system; they would be justifiably miffed if I started installing file managers or other things onto their sites. What I have been trying to find is a utility that I can run on my server from the web, preferably PHP based, that works like a file manager but a bit different: two panels, the left-hand side showing the local server (pretty much like a standard file manager application) and the right-hand side able to log in via FTP to the client's system. Then I can fiddle as required. The closest thing I have found is net2ftp, but it doesn't have that GUI interface. At the moment I simply SSH into my server, power up ncftp and run that way, but something easier to use would be mucho niceness. Thanks in advance! Larry
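
    For the server-to-client push half of that idea, PHP's built-in ftp_* functions already cover the basics. This is a hedged sketch, not a finished tool - the host, credentials and paths are placeholders, not anything from the question:

        <?php
        // Minimal one-way push from this VPS to a client's FTP account
        $conn = ftp_connect('ftp.client-site.example');            // hypothetical client host
        if (!$conn || !ftp_login($conn, 'clientuser', 'secret')) {
            die("FTP login failed\n");
        }
        ftp_pasv($conn, true);                                      // passive mode works through most firewalls

        $local  = '/home/me/builds/addon.zip';                      // file already sitting on the VPS
        $remote = 'public_html/amember/addon.zip';
        if (ftp_put($conn, $remote, $local, FTP_BINARY)) {
            echo "Uploaded $local -> $remote\n";
        }
        ftp_close($conn);

    Wrapping that in a small web form (pick local file, enter client FTP details) would give the two-panel behaviour without installing anything on the client's system.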

    Read the article

  • Linux: Managing users, groups and applications

    - by RN
    I am fairly new to Linux admin, so this may sound quite a noob question. I have a VPS account with root access. I need to install Tomcat and Java on it, and later other open-source applications as well. Installation for all of these is as simple as unzipping the .gz in a folder. My questions are: A) Where should I keep all these programs? In Windows, I typically have a folder called programs under c:\ where I unzip all applications. I plan to have something similar here as well. Currently, I have all these under an apps folder under /root, which I am guessing is a bad idea. B) To what group should Tom belong? I would need a user, say Tom, who can simply execute these programs. Do I need to create a new group, or just add Tom to some existing group? C) Finally, am I doing something really stupid by installing all these applications by simply unzipping them? I mean, an alternative would be to use Yum or RPM or something like that to install them. Given my familiarity (and tight budget) that seems too much to me. I feel uncomfortable running commands which I don't understand too well.
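
    As a hedged illustration of one common convention (the paths, group name and Tomcat version below are illustrative choices, not the only correct ones): self-contained, unzipped applications usually live under /opt rather than /root, owned and run by a dedicated unprivileged user:

        # Create a group and an ordinary user for running the services
        groupadd appusers
        useradd -m -g appusers -s /bin/bash tom

        # Unpack Tomcat somewhere system-wide instead of under /root
        mkdir -p /opt
        tar xzf apache-tomcat-7.0.x.tar.gz -C /opt
        chown -R tom:appusers /opt/apache-tomcat-7.0.x

        # Start it as that user, not as root
        su - tom -c '/opt/apache-tomcat-7.0.x/bin/startup.sh'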

    Read the article

  • Failed to start apache Can't open /etc/apache2/envvars

    - by bumperbox
    I have had this problem a couple of times now and I am not sure what is causing it:

        Failed to start apache : .: 45: Can't open /etc/apache2/envvars

    When I look at a directory listing, I get these question marks next to envvars. Does anyone know what that means? The OS is Ubuntu 10, if that helps.

        drwxr-xr-x  7 root root 4096 Jan 29 11:56 .
        drwxr-xr-x 83 root root 4096 Feb  4 10:34 ..
        -rw-r--r--  1 root root 8113 Sep 29 01:52 apache2.conf
        -rw-r--r--  1 root root 8027 Oct  3 22:26 apache2.conf.dpkg-old
        drwxr-xr-x  2 root root 4096 Jan 29 11:56 conf.d
        ??????????  ? ?    ?       ?            ? envvars
        -rw-r--r--  1 root root    0 Oct  3 22:25 httpd.conf
        ??????????  ? ?    ?       ?            ? magic
        drwxr-xr-x  2 root root 4096 Jan 29 11:56 mods-available
        drwxr-xr-x  2 root root 4096 Jan 29 10:18 mods-enabled
        ??????????  ? ?    ?       ?            ? ports.conf
        drwxr-xr-x  2 root root 4096 Jan 29 11:56 sites-available
        drwxr-xr-x  2 root root 4096 Jan 29 11:55 sites-enabled

    UPDATE: Just heard back from the hosting company. They moved my VPS to a new hardware node last night, and something at their end wasn't quite right, which caused the issue.

    Read the article

  • IIS 7 Serving all pages with an injected iframe [closed]

    - by Andre Carlucci
    Possible Duplicate: My server's been hacked EMERGENCY

    My VPS just got hacked and all my pages are being served with a malicious iframe injected just before the html tag. The code is like this:

        <iframe src= http://117.21.247.171:700/1.htm width=0 height=0></iframe>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" dir="ltr" lang="pt-BR">
        ...

    At first I thought it could be something related to WordPress, but my ASP.NET sites are also infected, and even if I create a static HTML file with nothing inside, the iframe is injected. I'm using Windows Server 2008 R2 Standard with IIS7.5 7600. Please, I've been trying to find the source of this for hours now; any help would be really appreciated.

    EDIT: Hey, why was this closed? I'm very interested to know how that can be done in IIS instead of simply re-installing everything. Andre

    Read the article

  • nginx + @font-face + Firefox / IE9

    - by Philip Seyfi
    Just transferred my site from shared hosting to a Linode VPS, and I'm also completely new to nginx, so please don't be harsh if I missed something evident ^^ I've got my WordPress site running pretty well on nginx & MaxCDN, but my @font-face fonts (served from cdn.domain.com) stopped working in IE9 and FF (@font-face failed cross-origin request. Resource access is restricted.) I've googled for hours and tried adding all of the following to my config files:

        location ~* ^.+\.(eot|otf|ttf|woff)$ {
            add_header Access-Control-Allow-Origin *;
        }

        location ^/fonts/ {
            add_header Access-Control-Allow-Origin *;
        }

        location / {
            if ($request_filename ~* ^.*?/([^/]*?)$) {
                set $filename $1;
            }
            if ($filename ~* ^.*?\.(eot)|(otf)|(ttf)|(woff)$) {
                add_header 'Access-Control-Allow-Origin' '*';
            }
        }

    With all of the following combinations:

        add_header Access-Control-Allow-Origin *;
        add_header 'Access-Control-Allow-Origin' *;
        add_header Access-Control-Allow-Origin '*';
        add_header 'Access-Control-Allow-Origin' '*';

    Of course, I've restarted nginx after every change. The headers just don't get sent at all, no matter what I do. I have the default Ubuntu apt-get build of nginx, which should include the headers module by default. How do I check what modules are installed, or what else could be causing this error?
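
    One thing worth checking (hedged, since the full server block isn't shown): the add_header lines only take effect if they sit in the server block that actually answers the CDN's origin pulls, and a regex font location must not be shadowed by another location that already handles those files. A minimal sketch of what the font handling could look like inside that origin server block - the server_name and root are placeholders:

        server {
            listen 80;
            # hypothetical origin host that MaxCDN pulls from
            server_name cdn-origin.domain.com;
            root /var/www/wordpress;

            location ~* \.(eot|otf|ttf|woff)$ {
                add_header Access-Control-Allow-Origin *;
                expires max;
            }
        }

    Running nginx -V prints the configure arguments, which shows which optional modules the package was compiled with.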

    Read the article

  • Apache2 - 500 internal server error

    - by Lucio Coire Galibone
    I'm running a VPS with Linux CentOS 6, 4 GB of RAM, 10 GB of HD and 2 virtual CPUs (Intel(R) Xeon(R) CPU L5640 @ 2.27GHz). As my host says, each virtual CPU must be at least 0.5 physical CPU. At certain times of the day, those with more traffic, when accessing my PHP script I intermittently receive "500 Internal Server Error". I turned Apache's logging up to debug level and enabled PHP logging with E_ALL, but I can't find any reference to the 500 error in any of the logs (I checked the right logs!). I haven't got any .htaccess file in the script's path. The strange thing is that the error starts at the first PHP line in the script (the preceding HTML displays correctly, but at the first PHP line the script sends the 500 error). The CPU load is always fine (max 0.15 0.08 0.01) and RAM is close to 95%, but it has hit swap just twice in a month, with 2-5 MB. Apache runs with prefork and these values:

        <IfModule prefork.c>
            StartServers          8
            MinSpareServers       5
            MaxSpareServers      20
            ServerLimit         280
            MaxClients          280
            MaxRequestsPerChild 4000
        </IfModule>

    Everything works correctly and I don't get any errors in quiet times, but I start receiving errors when traffic rises (6-9000 visits per hour). Can I solve the problem by increasing resources? (I can upgrade RAM up to 16 GB.) Could it be caused by reaching MaxClients (but Apache should write that to the log, right?)? If I upgrade RAM to 6 or 8 GB, should I calculate the MaxClients value with this?

        MaxClients = Total RAM dedicated to the web server / Max child process size

    Max child process size is around 20 MB. What else could be causing the problem? Thanks in advance
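
    A hedged worked example of that formula (the figures are illustrative, not measurements from this server): with an upgrade to 8 GB and roughly 2 GB left for MySQL and the OS,

        MaxClients ~= (8192 MB - 2048 MB) / 20 MB per child ~= 307

    which is only slightly above the current setting of 280. When the prefork limit is actually hit, Apache normally writes "server reached MaxClients setting, consider raising the MaxClients setting" to the main error_log, so if that line never appears the cause is more likely on the PHP side (its own limits or a backend timeout) than in the process pool.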

    Read the article

  • duplicity fail: not prompting for password: "Running 'sftp user@host' failed"

    - by Thr4wn
    I have two Linode VPS accounts and I want to back up one onto the other (the reasons are mainly for fun and to practice server administration).

    The short version: duplicity isn't even asking for my password, but immediately says "Invalid SSH password" (yet I can ssh into the other server). Why?

    The long version: when I run

        duplicity /home/me scp://[email protected]//root/backup

    I get

        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #1)
        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #2)
        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #3)

    And it says "Invalid SSH password" immediately, with no opportunity for me to actually type the password. When I type

        duplicity full -v9 --num-retries 4 /home/me scp://[email protected]//root/backup

    I get

        Main action: full
        Running 'sftp [email protected]' (attempt #1)
        State = sftp, Before = 'Connecting to 97.107.129.67... [email protected]'s'
        State = sftp, Before = ''
        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #1)

    I can ssh into [email protected] fine, and in fact had the IP in known_hosts before I tried any of this. Server 1 (from which I'm running the duplicity command) is Linode's default Ubuntu 8 setup with only a handful of programs installed via apt-get. Server 2 (represented by x.x.x.x) is literally only Linode's default Ubuntu 8 setup. I previously tried using SystemImager -- would that have changed settings in a destructive way? (I have removed it and rebooted since then.) Isn't duplicity supposed to prompt for a password? Am I using it wrong? Are there common mistakes/dependencies I need to know about? Is there any way that x.x.x.x could be set up that would make this not work (I used Linode's default Ubuntu 8 setup and barely )?
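
    A common way to sidestep the password prompt entirely (a hedged suggestion for unattended backups, not a diagnosis of the failure above) is to give the backup user key-based SSH access, which duplicity's scp/sftp backend can then use without asking for anything:

        # On server 1: create a key pair (empty passphrase if the backup must run unattended)
        ssh-keygen -t rsa -f ~/.ssh/id_rsa

        # Copy the public key to server 2 (x.x.x.x is the placeholder used in the question)
        ssh-copy-id [email protected]

        # Verify that ssh now logs in without a password, then run duplicity as before
        ssh [email protected] true
        duplicity /home/me scp://[email protected]//root/backup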

    Read the article

  • How do I determine the cause of a sustained spike in mysql queries/activity?

    - by mattmcmanus
    So this is more of an "I'm trying to learn how this works" question than a "there is a serious problem I can't figure out!" question. I'm setting up a VPS and have been tweaking and changing things here and there. I recently installed Munin (like two days ago) and yesterday I noticed a significant increase in MySQL activity, so now my curiosity is going crazy. How do I set up/access MySQL's query log? I have about 5 databases on the server, so I want to see which one is getting all the action. Is there anything else I can do to keep a better eye on what's going on? Here are the graphs. As you can tell, it's not that much activity at all, but I'm just curious about the change. The sites on the server right now do not get a lot of traffic. It's running a couple of Drupal sites, only one of which is live. The live one hasn't had a spike in traffic, and its last spike was 250 visitors, so it's barely a spike at all.
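
    A hedged sketch of turning on MySQL's general query log to see which database the activity is hitting - the log path is a placeholder, and on MySQL versions older than 5.1 the equivalent settings have to go in my.cnf followed by a restart:

        # Enable the general query log at runtime, watch it for a while, then switch it off
        mysql -u root -p -e "SET GLOBAL general_log_file = '/var/log/mysql/query.log';
                             SET GLOBAL general_log = 'ON';"
        tail -f /var/log/mysql/query.log
        mysql -u root -p -e "SET GLOBAL general_log = 'OFF';"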

    Read the article

  • ValidateCredentials() returns FALSE on First Call but TRUE on Subsequent Calls

    - by Nick Gotch
    I'm using the following code to authenticate users on my web service:

        using (PrincipalContext context = new PrincipalContext(ContextType.Domain, domain))
        {
            return context.ValidateCredentials(userName, password);
        }

    The obstacle I'm running into is that the first call to ValidateCredentials() returns false but subsequent calls return true. I placed a breakpoint at this line, and in the Immediate window I see the same results: the first call returns false, the second returns true, even though nothing was changed (by me) between calls. The 'domain' is String.Empty, but I've also tried it with the actual domain name and get the same results. I'm not that well versed in network administration, so any help would be appreciated.
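
    One variation worth trying (a hedged sketch, not a confirmed fix): ValidateCredentials has an overload that takes a ContextOptions value, and pinning the authentication mechanism explicitly sometimes gives consistent results across calls instead of relying on the default negotiation:

        using System.DirectoryServices.AccountManagement;

        bool IsValid(string domain, string userName, string password)
        {
            using (var context = new PrincipalContext(ContextType.Domain, domain))
            {
                // Explicitly pick the mechanism instead of the default for this context type
                return context.ValidateCredentials(userName, password,
                                                   ContextOptions.Negotiate);
            }
        }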

    Read the article

  • PHP scripts randomly becoming really slow to respond - Database lockup?

    - by webnoob
    Hi all, I wasn't sure whether to post this here or on Stack Overflow, so apologies if it's in the wrong place. I have about 7 PHP scripts running on a CentOS VPS. Each of these scripts contacts a game server and processes its logs; with the logs it either does some database queries or sends info back to the game server. I am having an issue where some of the scripts will randomly become REALLY slow to respond, and I don't know where to start with my debugging. Each script connects to its own database schema, but on the same MySQL server. Each script does about 4 inserts per second and twice as many select statements on its respective database. I thought a database lockup might cause the issue, but console messages that are read from the database are sent to the game server's console without issue every 30 seconds, even when the script is slow to respond to other commands. None of the scripts are using a lot of memory or CPU power - about 0.1% each. I know this information is really vague, but I don't know Linux very well at all (in fact, top is about my limit) and I really don't know where to start debugging this. Thanks.
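
    Two hedged starting points for this kind of intermittent stall - seeing what MySQL is doing at the moment a script hangs, and logging anything that runs long. The log path and threshold below are placeholders, and the runtime variables assume MySQL 5.1 or later (older versions need the settings in my.cnf):

        # Snapshot what every connection is doing right now; long-running or 'Locked' queries stand out
        mysql -u root -p -e "SHOW FULL PROCESSLIST\G"

        # Log statements slower than 2 seconds for later review
        mysql -u root -p -e "SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
                             SET GLOBAL long_query_time = 2;
                             SET GLOBAL slow_query_log = 'ON';"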

    Read the article

  • iptables port forwarding works only for localhost

    - by Venki
    Below is my iptables config. I use it to access a Node.js website running on port 9000 through port 80. This works fine only if I access the website through localhost / loopback. When I try to use the IP of eth0, which is assigned by my router through DHCP, it does not work - for example, using an IP like 192.168.0.103 to access the website fails. I am not able to figure out what is wrong here; I've already burnt a day on this and still can't figure it out :(

    Edit (more information): Earlier, I was using this configuration to develop the website; I had configured the domain name to point to 127.0.0.1 in the /etc/hosts file. It was working fine, but now I am trying to deploy the website on a VPS with a static IP, and this configuration does not work with the static IP either.

        # redirect port 80 to port 9000
        *nat
        :PREROUTING ACCEPT [57:3896]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [4229:289686]
        :POSTROUTING ACCEPT [4239:290286]
        -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9000
        -A OUTPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9000
        COMMIT

        # Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL).
        -A INPUT -p tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j ACCEPT
        -A INPUT -p tcp --dport 9000 -j ACCEPT
        -A INPUT -j REJECT
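
    A hedged observation about that ruleset: connections made from the box itself to its own eth0 (or static) address never traverse PREROUTING - locally generated traffic goes through the nat OUTPUT chain, and the OUTPUT rule above only matches a destination of 127.0.0.1/32. So testing from the same machine against 192.168.0.103 or the static IP bypasses both redirects, while requests from other hosts should still hit the PREROUTING rule. One way to cover the local case, assuming that address should also be redirected:

        # Redirect locally generated traffic aimed at the machine's own address as well
        iptables -t nat -A OUTPUT -p tcp -d 192.168.0.103 --dport 80 -j REDIRECT --to-ports 9000

        # For remote clients, check that the PREROUTING rule's packet counters actually increase
        iptables -t nat -L PREROUTING -n -v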

    Read the article

  • exim4 redirect mail sent to *@domain1.example.com to *@domain2.example.com

    - by nightcoder
    Current situation: We have a VPS that hosts a website, example.org. Exim is configured to work as a smarthost. All emails sent through Exim are successfully relayed to another mail server (running on example.com).

    Goal: To forward mail sent to *@example.org to *@example.com, i.e. change the recipient's address from *@example.org to *@example.com.

    Problem: If I send email to an address *@example.org, it seems Exim doesn't change the address; it still relays the message to the other mail server, but the recipient is still *@example.org. Maybe the redirect is not applied for some reason.

    Configuration and logs:

    /etc/exim4/update-exim4.conf.conf:

        dc_eximconfig_configtype='smarthost'
        dc_other_hostnames=''
        dc_local_interfaces=''
        dc_readhost='example.org'
        dc_relay_domains='example.org'
        dc_minimaldns='false'
        dc_relay_nets='0.0.0.0/32'
        dc_smarthost='example.com::26'
        CFILEMODE='644'
        dc_use_split_config='false'
        dc_hide_mailname='true'
        dc_mailname_in_oh='true'
        dc_localdelivery='maildir_home'

    /etc/exim4/conf.d/router/999_exim4-config_redirect (created by me):

        domain_redirect:
          debug_print = "R: forward for $local_part@$domain"
          driver = redirect
          domains = example.org
          data = [email protected]

    (For now, data is set to a specific address for simplicity and testing.)

    Exim log when sending email to [email protected] (it should be redirected to [email protected]):

        2012-03-20 19:40:07 1SA4ud-0005Dw-7k <= [email protected] U=www-data P=local S=657
        2012-03-20 19:40:08 1SA4ud-0005Dw-7k => [email protected] R=smarthost T=remote_smtp_smarthost H=domain2.com [184.172.146.66] X=TLS1.0:RSA_AES_256_CBC_SHA1:32 DN="C=US,2.5.4.17=#13053737303932,ST=TX,L=Houston,STREET=Suite 400,STREET=11251 Northwest Freeway,O=HostGator.com,OU=HostGator.com,OU=Comodo PremiumSSL Wildcard,CN=*.hostgator.com"
        2012-03-20 19:40:08 1SA4ud-0005Dw-7k Completed

    So, the address is not changed :( Please help! I've been trying to make it work for half a day already :(
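
    Two hedged things to check with that setup: with dc_use_split_config='false', Debian's exim4 normally builds its configuration from /etc/exim4/exim4.conf.template rather than the conf.d/ tree, so a file added under conf.d/router/ may never be read at all; and even with split config enabled, routers run in filename order, so a 999_ prefix puts the new router after the smarthost router that is currently grabbing the mail (the log line R=smarthost fits that). A sketch of the switch - the new filename just has to sort before the router that currently handles the domain:

        # Use the split configuration so conf.d/ is actually consulted
        sed -i "s/dc_use_split_config='false'/dc_use_split_config='true'/" /etc/exim4/update-exim4.conf.conf

        # Make the redirect router run before the smarthost router (lower prefix = earlier)
        mv /etc/exim4/conf.d/router/999_exim4-config_redirect \
           /etc/exim4/conf.d/router/175_exim4-config_redirect

        update-exim4.conf && /etc/init.d/exim4 restart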

    Read the article

  • Hiding a Website from Search Engine Bots and Viewers by Disabling Default VirtualHost

    - by Basel Shishani
    When staging a website on a remote VPS, we would like it to be accessible to team members only, and we would also like to keep the search engine bots off until the site is finalized. Access control by host, whether in iptables or Apache, is not desirable, as the accessing hosts can vary. After some reading of the Apache config and other SF postings, I settled on the following design, which relies on restricting access to specific domain names only. The default virtual host would be disabled in the Apache config as follows, relying on Apache's behaviour of using the first virtual host as the site default:

        <VirtualHost *:80>
            # Anything matching this should be silently ignored.
        </VirtualHost>

        <VirtualHost *:80>
            ServerName secretsiteone.com
            DocumentRoot /var/www/secretsiteone.com
        </VirtualHost>

        <VirtualHost *:80>
            ServerName secretsitetwo.com
            ...
        </VirtualHost>

    Then each team member can add the domain names to their local /etc/hosts:

        xx.xx.xx.xx secrethostone.com

    My question is: is the above technique good enough to achieve the stated goals, especially restricting SE bots, or is it possible that bots would work around it?

    Note: I understand that mod_rewrite rules can be used to achieve a similar effect, as discussed here: How to disable default VirtualHost in apache2?, so the same question applies to that technique too. Also please note: the content is not highly secretive - the idea is not to devise something that is hack proof, so we are not concerned about traffic interception or the like. The idea is to keep competitors and casual surfers from viewing the content before it's released, and to prevent SE bots from indexing it.
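
    One hedged refinement: an empty catch-all vhost still serves whatever DocumentRoot it inherits from the main server config, so making the rejection explicit removes any ambiguity about what a bot hitting the bare IP (or an unknown Host header) gets back. A sketch in Apache 2.2 syntax - the ServerName and the empty directory are placeholders:

        <VirtualHost *:80>
            # Catch-all: any Host header that doesn't match a named vhost lands here
            ServerName default.invalid
            DocumentRoot /var/www/empty
            <Directory /var/www/empty>
                # Every request to the catch-all is answered with 403
                Order deny,allow
                Deny from all
            </Directory>
        </VirtualHost>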

    Read the article

  • Wordpress Directory Permission to allow uploads, plugin folders, etc

    - by user1015958
    I have a pre-made WordPress site that was developed on my local machine, and I uploaded it to a VPS running Debian 6 with nginx, MySQL and PHP, following this guide:

    1) Create an unprivileged user, this could be say 'karl' or whatever, and make them belong to the www-data group, so that if I were to log in as karl and create a web root in say /home/karl/www/, all the files will be owned by karl:www-data

    2) Set up nginx as the user www-data in nginx.conf

    3) Set up PHP-FPM to run as www-data

    4) Place your files in /home/karl/www/[domain name maybe]/public_html/, and upload as 'karl' so you don't have to chown everything again.

    When I type ls -l inside public_html/, it shows that all the files inside are owned by karl:karl, but the public_html directory itself is owned by karl:www-data. I chmod 0755 the folder wp-content, but I still get the error:

        ERROR: Path ../wp-content/connection_images does not seem to be writeable.

    I know I shouldn't set it to 777 for security reasons, so how should I set the proper permissions? And what should I set to allow my users to upload, write posts and edit articles? Sorry for my English, by the way.
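
    A hedged sketch of the usual fix when PHP-FPM runs as www-data but the files are owned karl:karl - give the www-data group ownership plus group write on the directories WordPress needs to write into (the "domain" path segment is a placeholder for the layout described above, and 0755 on wp-content is not enough because it grants no group write):

        # Hand the tree to karl:www-data so nginx/PHP-FPM (www-data) can read everything
        chown -R karl:www-data /home/karl/www/domain/public_html

        # Directories 775, files 664: group www-data may write, everyone else read-only
        find /home/karl/www/domain/public_html/wp-content -type d -exec chmod 775 {} \;
        find /home/karl/www/domain/public_html/wp-content -type f -exec chmod 664 {} \;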

    Read the article

  • Redis 2.0.3 would not let go of deleted appendonly.aof file after BGREWRITEAOF

    - by Alexander Gladysh
    Ubuntu 10.04.2, Redis 2.0.3 (more details at the end of the question). My AOF file for Redis is getting too large, to the point where it will soon threaten to take up all the free disk space on my small-HDD VPS box:

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvda              32G   24G  6.7G  78% /

        $ ls -la
        total 3866688
        drwxr-xr-x  2 redis redis       4096 2011-03-02 00:11 .
        drwxr-xr-x 29 root  root        4096 2011-01-24 15:58 ..
        -rw-r-----  1 redis redis 3923246988 2011-03-02 00:14 appendonly.aof
        -rw-rw----  1 redis redis   32356467 2011-03-02 00:11 dump.rdb

    When I run BGREWRITEAOF, the AOF file shrinks, but disk space is not freed:

        $ ls -la
        total 95440
        drwxr-xr-x  2 redis redis     4096 2011-03-02 00:17 .
        drwxr-xr-x 29 root  root      4096 2011-01-24 15:58 ..
        -rw-rw----  1 redis redis 65137639 2011-03-02 00:17 appendonly.aof
        -rw-rw----  1 redis redis 32476167 2011-03-02 00:17 dump.rdb

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvda              32G   24G  6.7G  78% /

    Sure enough, Redis is still holding the deleted file:

        $ sudo lsof -p6916
        COMMAND     PID  USER  FD  TYPE DEVICE   SIZE/OFF   NODE NAME
        ...
        redis-ser  6916 redis   7r  REG  202,0 3923957317 918129 /var/lib/redis/appendonly.aof (deleted)
        ...
        redis-ser  6916 redis  10w  REG  202,0   66952615 917507 /var/lib/redis/appendonly.aof
        ...

    How can I work around this issue? I can restart Redis this time, but I would really like to avoid doing that on a regular basis. Note that I cannot upgrade to 2.2 (an upgrade to 2.0.4 is feasible though). More information on my system:

        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 10.04.2 LTS
        Release:        10.04
        Codename:       lucid

        $ uname -a
        Linux my.box 2.6.32.16-linode28 #1 SMP Sun Jul 25 21:32:42 UTC 2010 i686 GNU/Linux

        $ redis-cli info
        redis_version:2.0.3
        redis_git_sha1:00000000
        redis_git_dirty:0
        arch_bits:32
        multiplexing_api:epoll
        process_id:6916
        uptime_in_seconds:632728
        uptime_in_days:7
        connected_clients:2
        connected_slaves:0
        blocked_clients:0
        used_memory:65714632
        used_memory_human:62.67M
        changes_since_last_save:8398
        bgsave_in_progress:0
        last_save_time:1299014574
        bgrewriteaof_in_progress:0
        total_connections_received:17
        total_commands_processed:55748609
        expired_keys:0
        hash_max_zipmap_entries:64
        hash_max_zipmap_value:512
        pubsub_channels:0
        pubsub_patterns:0
        vm_enabled:0
        role:master
        db0:keys=1,expires=0
        db1:keys=18,expires=0

    Read the article

  • URLCallback with JAAS on WAS?

    - by Dean J
    I extended the JAAS javax.security.auth.spi.LoginModule, and installed it into a WAS server. It works; all logins go through the code in this new class, and if it says to not let them login, they're prevented from logging in. The root problem: I don't want it to filter logins for the admin console (/ibm/console), but I do want it to filter logins for other things on the server. I think that with the available setup, the login module applies to everything installed on the server, including the administration screens. I'd like to solve that by getting the URL of the page that triggered the call to the LoginModule. If I were using WebLogic, I'd use a URLCallback to get the URL. Does anyone know if Websphere Application Server has any parallel functionality to that, or if there's another workaround?

    Read the article

  • Custom Application Page does not recognize tagPrefix defined in custom web.config

    - by Hasan Khan
    I have a custom application page integrated into Central Administration. My application pages are placed in a subfolder of the Template\Admin folder. I placed my own web.config in my subfolder and added tagPrefixes in the controls section. However, when I open my application page, ASP.NET throws an exception saying that the tag SharePoint:InputFormTextBox was not recognized. What am I doing wrong? Or does it not work this way? What can I do to achieve this?

    Read the article

  • Custom flash uploader breaks only on Media Temple

    - by LaserWolf
    I've built a Flash uploader to upload files up to 100MB using a PHP backend. It works wonderfully on our dev server, on a HostGator VPS, and on one of our clients' servers running Debian. It will not work on our Media Temple (dv) 3.5 and I don't know why. The upload will start but will choke after a few seconds with this Flash error message:

        ioerror: [IOErrorEvent type="ioError" bubbles=false cancelable=false eventPhase=2 text="Error #2038: File I/O Error. URL: http://..._upload.php"]

    The problem seems to be specific to the asynchronous nature of the Flash uploader, because if I try a straight PHP upload it works fine. The php.ini settings are set to allow such a large upload as well. Also, I've thoroughly googled Flash, 2038, I/O error, etc., but have yet to find anything that helped. Here's the weird part though: we work in Seattle. It won't work from the office. It won't work from home. But while on the phone with MT's support, they were able to upload a file through our Flash uploader just fine. I'm not sure where their office was located, but I think it was Atlanta. So the problem also seems specific to physical location. Has anyone run into this sort of problem before?

    Read the article

  • GlassFish v3: Security related updates + Repository/Publisher?

    - by chris_l
    I've used GlassFish v3.0 as my main development application server for a few weeks now. Now that I want to install it on my VPS, I'd like to get the latest security updates, because GlassFish v3 Release 3.0 (Open Source Edition or not) is already a few months old, and v3.1 is only available as "early access" nightlies (see https://glassfish.dev.java.net/public/downloadsindex.html). GlassFish offers an update mechanism (via pkg or updateTool), but when I simply try to get the latest updates (pkg image-update), it finds nothing. However, when I change the preferred publisher to dev.glassfish.org, I get a list with lots of updates. The interesting thing is that I haven't been able to find any description of the contents of the various publishers/repositories (release, stable, contrib and dev) anywhere on the web, most importantly one answering the question: am I supposed to use the dev repository for security updates, or does it contain unstable updates? (The name suggests unstable updates, but version numbers like "3.0.1,0-11:20100331T082227Z" leave me guessing. The build is more than a week old, so it's obviously not "nightly" or "weekly", but what is it?) Where do I get security updates from, then? Or are there simply no security updates yet? Asking on the GlassFish forum resulted in 56 views but 0 answers.

    Read the article

  • Syntax Error when setting up Domain Alias

    - by Poundtrader
    I'm attempting to set up a domain alias via a Pre VirtualHost Include and I'm receiving the following error:

        Error: An error occurred while running:
        /usr/local/apache/bin/httpd -DSSL -t -f /usr/local/apache/conf/httpd.conf
        Exit signal was: 0
        Exit value was: 1
        Output was:
        ---
        Syntax error on line 15 of /usr/local/apache/conf/includes/pre_virtualhost_global.conf:
        CustomLog takes two or three arguments, a file name, a custom log format string or format name, and an optional "env=" clause (see docs)
        ---

    Basically I have two domains. The main domain has an OpenCart installation within a directory (/buy), and I'm attempting to use the multi-store function, which allows you to administer multiple stores on multiple domains via the one OpenCart dashboard. My issue is that I have the OpenCart installation within the /buy directory, so I have been given the following code, which should allow me to use this functionality across the multiple cPanel accounts within the same VPS:

        <VirtualHost 87.117.239.29:80>
            ServerName newdomain.co.uk
            ServerAlias www.newdomain.co.uk
            Alias /buy /home/originaldomain/public_html/buy/
            DocumentRoot /home/newdom/public_html
            ServerAdmin [email protected]
            ## User newdom # Needed for Cpanel::ApacheConf
            <IfModule mod_suphp.c>
                suPHP_UserGroup newdom newdom
            </IfModule>
            <IfModule mod_ruid2.c>
                RUidGid newdom newdom
            </IfModule>
            CustomLog /usr/local/apache/domlogs/newdomain.co.uk-bytes_log “%{%s}t %I .\n%{%s}t %O .”
            CustomLog /usr/local/apache/domlogs/newdomain.co.uk combined
            ScriptAlias /cgi-bin/ /home/newdom/public_html/cgi-bin/
        </VirtualHost>

    Does anyone know how to get this code to work? Thanks,
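
    A hedged observation: the failing CustomLog line is quoted with curly "smart" quotes (“ ”) rather than straight ASCII double quotes, so Apache does not see the format string as a single argument and reports the two-or-three-arguments error. Re-typing that line with plain quotes should satisfy the syntax check:

        CustomLog /usr/local/apache/domlogs/newdomain.co.uk-bytes_log "%{%s}t %I .\n%{%s}t %O ."
        CustomLog /usr/local/apache/domlogs/newdomain.co.uk combined

    (The %I and %O format specifiers also depend on mod_logio being loaded; if it is not, Apache rejects the format for a different reason.)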

    Read the article

  • WCSF vs MEF, What's Best For Me?

    - by Galilyou
    Hey fellows, I'm working on a large web application that includes many modules (CRM, Inventory, Administration, etc.). What I want to accomplish is to be able to develop each of these modules independently (the UI, core logic, data-access logic, and all) and then integrate them all into a core module (this integration should only be a change in the configuration file). For example, if I have a core module named Host, I should be able to add the CRM module to the Host module by simply adding this line to the Host's configuration file:

        <module name="CRM" />

    I did some reading on the WCSF, and found that it can help integrate some modules together, but it really doesn't offer much help in terms of integrating the UI elements. Some friends suggested MEF for the job, but I haven't looked at that yet. What do you guys think? Is it possible to achieve this level of modularity, and how much work do I have to put in to get it working?
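
    For a sense of what the MEF side of this looks like, here is a hedged, minimal sketch - the IModule interface and the Modules folder are illustrative names, not part of either framework. Each module assembly exports its entry point and the host composes whatever it finds, so adding a module can be as simple as dropping in a DLL, or, with a small wrapper, filtering that list against the entries in the configuration file:

        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;

        // Hypothetical contract shared by Host, CRM, Inventory, ...
        public interface IModule
        {
            string Name { get; }
            void Initialize();
        }

        [Export(typeof(IModule))]
        public class CrmModule : IModule
        {
            public string Name { get { return "CRM"; } }
            public void Initialize() { /* register routes, menus, services ... */ }
        }

        public static class Host
        {
            public static void LoadModules()
            {
                // Pick up every [Export(typeof(IModule))] found in the Modules folder
                var catalog = new DirectoryCatalog("Modules");
                using (var container = new CompositionContainer(catalog))
                {
                    foreach (var module in container.GetExportedValues<IModule>())
                        module.Initialize();
                }
            }
        }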

    Read the article

  • Virtualmin & git integration

    - by weby3456
    I've installed Virtualmin on my VPS to manage my websites. It has been working perfectly, as expected, for nearly a year now. Recently I wanted to add some features to one of my sites, and I need git integration. I've correctly installed git & gitweb on my server, and I can create repositories and view them under http://sub.domain.com/git/gitweb.cgi. Here is the current relevant directory tree:

        /home/user/domains/sub.domain.com/public_html/git/
        drwxr-sr-x user   user .
        drwxr-x--- user   user ..
        -rw-r--r-- user   user git-favicon.png
        -rw-r--r-- user   user git-logo.png
        -rwxr-xr-x user   user gitweb.cgi
        -rw-r--r-- user   user gitweb.css
        drwxrwx--- apache user reponame.git

        /home/user/domains/sub.domain.com/public_html/git/reponame.git/
        drwxrwx--- apache user .
        drwxr-sr-x user   user ..
        drwxrwx--- apache user branches
        -rwxrwx--- apache user config
        -rwxrwx--- user   user description
        -rwxrwx--- apache user HEAD
        drwxrwx--- apache user hooks
        drwxrwx--- apache user info
        drwxrwx--- apache user objects
        drwxrwx--- apache user refs

    But I have some questions:

    When I visit http://sub.domain.com/git/gitweb.cgi, the owner is listed as 'Apache'. Why? How can I change that?

    Usually, to create a new git repository, I'll do something like:

        $ mkdir proj
        $ cd proj
        $ git init
        Initialized empty Git repository in /home/user/proj/.git/
        // here I'm creating the files or copying them from somewhere else
        $ git add *.php
        $ git add README
        $ git commit -m 'initial version'

    But after creating the repository in Virtualmin, I can find a new directory named 'reponame.git' but not the '.git' dir. When I try to run any git command (e.g. git status) I receive "fatal: This operation must be run in a work tree". How can I work with that repository?

    Currently I need to explicitly grant access for users to be able to view the repositories via gitweb. How can I make certain repositories public?
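
    On the second question, a hedged sketch of the usual workflow: reponame.git is a bare repository (no working tree), so it is meant to be cloned and pushed to rather than worked in directly. The paths follow the tree shown above; the ~/proj location is just an example:

        # Clone the bare repository into a working copy somewhere under your home
        git clone /home/user/domains/sub.domain.com/public_html/git/reponame.git ~/proj
        cd ~/proj

        # Work as usual, then publish the commits back to the bare repository
        git add *.php README
        git commit -m 'initial version'
        git push origin master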

    Read the article

  • Secondary backup server

    - by verdy
    I've been given the task of implementing a backup solution for the event that our website goes down. It is a dedicated server running CentOS 6. From what I've experienced on our server, it may go down because of a PHP application crash or a hardware failure. I have a couple of questions:

    In the first case, is it possible to have the server restart PHP automatically, and how can I do that? In my mind, if it is only the application that goes down, I can probably still make use of the server itself.

    In the second case, can I redirect requests to a secondary server? How can I do that? What do I need other than another server? For now it will be a simple server which shows the user a static landing page, and the system will notify us via email that the primary server went down so that we can restart it manually. Is it possible to set up just a VPS or even a shared server as the secondary server, since I think it will only serve a static page?

    Thanks. Any help would be much appreciated.
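
    For the first case, a hedged sketch of a very small watchdog - the service name assumes Apache with mod_php on CentOS 6 (swap in php-fpm or whatever actually serves the site), the URL and email address are placeholders, and the mail step assumes mailx is installed:

        #!/bin/bash
        # /usr/local/bin/web-watchdog.sh - restart the web stack if the site stops answering
        URL="http://www.example.com/"
        if ! curl -fsS --max-time 10 "$URL" > /dev/null; then
            /sbin/service httpd restart
            echo "Site check failed at $(date), httpd restarted" | mail -s "Watchdog" [email protected]
        fi

    A crontab entry such as * * * * * /usr/local/bin/web-watchdog.sh runs the check every minute. For the hardware-failure case, the usual minimal approach without extra infrastructure is a low-TTL DNS record (or a DNS provider's failover feature) that can be repointed to the static-page VPS when the primary dies.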

    Read the article

< Previous Page | 102 103 104 105 106 107 108 109 110 111 112 113  | Next Page >