Search Results

Search found 17501 results on 701 pages for 'stored functions'.

Page 514/701 | < Previous Page | 510 511 512 513 514 515 516 517 518 519 520 521  | Next Page >

  • BSOD trying to migrate Windows XP from a physical to a virtual machine

    - by pauldoo
    I am attempting to migrate a Windows XP Home installation from a physical machine to a virtual machine. The physical machine has two hard disks; the first is 250GB containing "C:", the second is 1TB containing "D:". I'd like to create a new virtual machine stored on the D:, which is a copy of the Windows XP Home installation that is currently on the C:. (This will leave the 250GB drive clear for me to install a fresh copy of Windows 7, and still be able to access the old XP installation if necessary.)

    The first method I tried was to follow the instructions here: http://www.virtualbox.org/wiki/Migrate_Windows. I booted up from an Ubuntu Live CD in order to execute the Linux commands whilst the Windows system wasn't running. With this method the virtual machine would always blue-screen on startup with a "STOP 0x0000007B" message. The instructions above say to try a "repair install" using the Windows XP disc. Unfortunately my XP disc is scratched and will not boot, so I was unable to try a repair install.

    The second method I tried was "VMware Converter Standalone Client". This tool executed without any errors, but again produced a virtual machine that blue-screens on startup with the same "STOP" message.

    Are there any other methods to move the Windows XP installation into a virtual machine? I think next I will try a more manual process to create the cloned virtual machine: install a fresh copy of Windows XP in a virtual machine, then once that is booting OK, ntfsclone the source C: partition over the top. Perhaps this will fix the booting problems if the issue is related to the MBR or partition table in some way.
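    For the ntfsclone idea at the end, here is a minimal sketch from the Ubuntu Live CD, assuming the physical XP partition is /dev/sda1 and the 1TB D: drive is mounted at /mnt/d (both device names are assumptions, not taken from the post):

        # save the source C: partition to a sparse image on the D: drive
        sudo ntfsclone --save-image --output /mnt/d/xp_c.img /dev/sda1

        # later, restore it over the system partition of the freshly installed virtual XP,
        # with that partition exposed as a block device (the target name below is hypothetical)
        sudo ntfsclone --restore-image --overwrite /dev/sdb1 /mnt/d/xp_c.img

    Note that a restored clone keeps the old installation's storage driver configuration, and STOP 0x0000007B is typically that boot-time driver mismatch rather than an MBR or partition table problem, so the blue screen may well survive the clone.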


  • How to disable 3rd party cookies in Chrome?

    - by David Nordvall
    I have both the "stop websites from storing local data" and the "block all third-party cookies without exception" settings enabled in Chrome 12 (I'm not sure what the exact names of these settings are in English, as I run Chrome with Swedish localization). I do however have two problems.

    My first problem is that when I'm visiting one of my local newspapers' sites (and surely others), cookies from www.facebook.com are allowed for some reason. I suspect the reason is that I have added an exception for the www.facebook.com domain, but as the setting "block all third-party cookies without exception" implies, that shouldn't matter.

    My second problem is that if I check what cookies are stored on my computer after browsing for a while, I have tons of cookies that are not on my whitelist, primarily from ad services.

    My expectation from enabling the above-mentioned settings was that only cookies fulfilling the two following requirements would be accepted:

        the cookies must be from the domain in my address bar
        the cookies must be from a domain on my whitelist

    Apparently this isn't the case. The question is, have I completely misunderstood the settings or is this a bug? And, either way, is there a way to accomplish my desired behavior?


  • Vserver: secure mails from a hacked webservice

    - by lukas
    I plan to rent and set up a vServer with Debian xor CentOS. I know from my host that the vServers are virtualized with linux-vserver. Assume there is a lighttpd and some mail transfer agent running, and we have to ensure that if lighttpd is hacked, the stored e-mails are not easily readable. To me this sounds impossible, but maybe I missed something, or at least you guys can validate the impossibility... :)

    I think there are basically three obvious approaches. The first is to encrypt all the data. Nevertheless, the server would have to store the key somewhere, so an attacker (w|c)ould figure that out. Secondly, one could isolate the critical services like lighttpd. Since I am not allowed to do 'mknod' or remount /dev in a linux-vserver, it is not possible to set up a nested vServer with lxc or similar techniques. The last approach would be a chroot, but I am not sure if it would provide enough security. Further, I have not tried yet whether I am even able to chroot inside a linux-vserver...?

    Thanks in advance!
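    For the chroot option, lighttpd can confine itself at startup through its own configuration, which sidesteps the mknod/remount limitations of building a nested container; a minimal sketch with an illustrative chroot path (the mail spool would live outside that tree):

        # illustrative layout: only the web content lives inside the jail
        mkdir -p /srv/web/var/www

        # in /etc/lighttpd/lighttpd.conf:
        #   server.username      = "www-data"
        #   server.groupname     = "www-data"
        #   server.chroot        = "/srv/web"
        #   server.document-root = "/var/www"    # resolved inside the chroot

        /etc/init.d/lighttpd restart

    A compromised lighttpd then has no filesystem path to the mail store, though any PHP/CGI backends it talks to would need the same treatment.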


  • How to use a local Leopard Server Mail server acting "like" an Exchange mail server

    - by Richard Chevre
    We have a local Exchange 2003 server (company.local) which collects POP3 mail accounts from a distant (company.com) mail server. The mails are collected by the Exchange server every 5-10 minutes and stored locally (on company.local), so the users can read them without going to the "real" mail server (company.com). What was explained to me is that the mail collection is done with POP.

    Now we are migrating to Snow Leopard Server. We have chosen to use a new extension for our local domain: .leo. So our mail server's FQDN is mail.company.leo, and the users have a mail address of the form [email protected].

    A) All works fine except that I can't find how to tell mail.company.leo that it must retrieve the mails from the "real" public server (mail.company.com). I'm hoping to use IMAP and not POP.

    B) I can send mail using SMTP relay from mail.company.leo, but (I know it's trivial) answering is not possible, even if I specify the reply-to as [email protected] (this seems to be related to A).

    I don't know if it's very complicated to achieve what I want to do (I suspect not, but...), and I'm not a genius. But as I'm a little bit lost, I hope somebody can or will help me. Solving this will allow us to use iCal invitations too, so a lot of services depend on these mail server settings.

    Some of you discussed the fact that we chose to use a "new" TLD with the .leo extension. We have no problem with that; we could use .local, no problem ;) We used .leo instead of .local just to differentiate the two systems (Exchange and Snow Leopard Server). The question was not about that; it was just to know if we can set up a Snow Leopard mail server to act like an Exchange server.

    Again, thank you for your advice and help. Richard
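    For A), one common way to make a local server pull mail from a remote IMAP mailbox is a fetchmail job on mail.company.leo that collects from mail.company.com and hands the messages to local delivery. Fetchmail is not mentioned in the post, so treat this whole sketch as an assumption; accounts and options are placeholders:

        # sketch of /etc/fetchmailrc on mail.company.leo
        cat > /etc/fetchmailrc <<'EOF'
        set daemon 300
        poll mail.company.com protocol IMAP
            user "remoteuser" password "secret"
            is "localuser" here
            ssl keep
        EOF
        chmod 600 /etc/fetchmailrc
        fetchmail -f /etc/fetchmailrc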


  • netlogon errors

    - by rorr
    I have two instances of MS SQL 2005 and am using CA XOSoft replication. The master is a failover cluster and the replica is a standalone server. They are all running Server 2003 SP2 x64, with the same patch levels on all servers. This setup worked great for several months until we recently restricted the RPC ports on both nodes of the master (5000-6000, using rpccfg.exe). We have to implement egress filtering, thus the limiting of the ports.

    We began receiving login errors for SQL Windows authentication, and NETLOGON Event ID 5719:

        This computer was not able to set up a secure session with a domain controller in domain due to the following: Not enough storage is available to process this command. This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator.

    We also see group policies failing to update and cluster file shares going offline at the same time. The RPC ports were set back to default when we started seeing these problems and the servers were rebooted, but the problems persist. The domain controllers are not showing any errors; running dcdiag and netdiag shows everything is fine.

    We have noticed that the XOSoft service ws_rep.exe is using a lot of handles (8-9k), about the same number that sqlserver is using. As soon as XOSoft replication is stopped, the login errors cease and everything functions correctly. I have opened a ticket with CA for XOSoft, but I'm not sure that the problem is actually XOSoft; it may just be the thing bringing the problem to light. I'm looking for tips on debugging RPC problems, specifically on limiting the ports and then reverting the changes.


  • sudo ./starling start works well but sudo service starling start fails

    - by Keating Wang
    sudo ./starling start works well but sudo service starling start fails:

        $ sudo ./starling start
         * Starting Starling Server...                                   [ OK ]
        $ sudo ./starling stop
         * Stop Starling Server...                                       [ OK ]
        $ sudo service starling stop
         * Starting Starling Server...
        /home/keating/.rvm/rubies/ruby-1.9.2-p290/lib/ruby/site_ruby/1.9.1/rubygems/dependency.rb:247:in `to_specs': Could not find starling (>= 0) amongst [minitest-1.6.0, rake-0.8.7, rdoc-2.5.8] (Gem::LoadError)
            from /home/keating/.rvm/rubies/ruby-1.9.2-p290/lib/ruby/site_ruby/1.9.1/rubygems/dependency.rb:256:in `to_spec'
            from /home/keating/.rvm/rubies/ruby-1.9.2-p290/lib/ruby/site_ruby/1.9.1/rubygems.rb:1229:in `gem'
            from /home/keating/.rvm/gems/ruby-1.9.2-p290/bin/starling:18:in `<main>'

    The error above is 'cannot find gem starling'.

    Following is the starling file (located in /etc/init.d, rwxrwxrwx):

        set -e

        LOGFILE=/var/log/starling/starling.log
        SPOOLDIR=/var/spool/starling
        PORT=22122
        LISTEN=127.0.0.1
        PIDFILE=/var/run/starling.pid

        NAME=starling
        DESC="Starling"
        INSTALL_DIR=/home/keating/.rvm/gems/ruby-1.9.2-p290/bin/
        DAEMON=$INSTALL_DIR/$NAME
        SCRIPTNAME=/etc/init.d/$NAME
        OPTS="-d"

        . /lib/lsb/init-functions

        d_start() {
            log_begin_msg "Starting Starling Server..."
            start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- $OPTS || log_end_msg 1
            log_end_msg 0
        }

        d_stop() {
            log_begin_msg "Stopping Starling Server..."
            start-stop-daemon --stop --quiet --pidfile $PIDFILE || log_end_msg 1
            log_end_msg 0
        }

        case "$1" in
          start)
            d_start
            ;;
          stop)
            d_stop
            ;;
          restart|force-reload|reload)
            d_stop
            sleep 2
            d_start
            ;;
          *)
            echo "Usage: $SCRIPTNAME {start|stop|restart|force-reload}"
            exit 3
            ;;
        esac

        exit 0
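    The traceback suggests an environment problem rather than a script problem: service runs the script with a stripped-down environment, so the RVM paths that sudo ./starling inherits from the login shell are missing and the system Ruby cannot see the starling gem. A common workaround (a sketch under that assumption, not something from the post) is to generate an RVM wrapper and point DAEMON at it:

        # creates ~/.rvm/bin/bootup_starling with the right GEM_HOME/GEM_PATH baked in
        rvm wrapper ruby-1.9.2-p290 bootup starling

        # then, in /etc/init.d/starling, use the wrapper instead of the bare gem binary:
        #   DAEMON=/home/keating/.rvm/bin/bootup_starling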


  • DRBD on a disk with existing file system that takes all the place

    - by Karolis T.
    I'm currently trying to simulate the environment via XEN. I have installed two Debian systems with the following FS layout:

        cltest1:/etc# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvda2            6.0G  417M  5.2G   8% /
        tmpfs                 257M     0  257M   0% /lib/init/rw
        udev                   10M   16K   10M   1% /dev
        tmpfs                 257M  4.0K  257M   1% /dev/shm

    Host cltest2 is identical. Here's my drbd.conf:

        global { minor-count 1; }

        resource mysql {
            protocol C;
            syncer {
                rate 10M; # 10 Megabytes
            }
            on cltest1 {
                device    /dev/drbd0;
                disk      /dev/xvda2;
                address   192.168.1.186:7789;
                meta-disk internal;
            }
            on cltest2 {
                device    /dev/drbd0;
                disk      /dev/xvda2;
                address   192.168.1.187:7789;
                meta-disk internal;
            }
        }

    I have not created a filesystem on drbd0. Starting DRBD via the init.d script errors out with:

        Starting DRBD resources:    [ d(mysql) /dev/drbd0: Failure: (114) Lower device is already claimed. This usually means it is mounted.
        [mysql] cmd /sbin/drbdsetup /dev/drbd0 disk /dev/xvda2 /dev/xvda2 internal --set-defaults --create-device failed - continuing!

    Running drbdadm create-md mysql gives:

        cltest1:/etc# drbdadm create-md mysql
        md_offset 6442446848
        al_offset 6442414080
        bm_offset 6442217472

        Found ext3 filesystem which uses 6291456 kB
        current configuration leaves usable 6291228 kB

        Device size would be truncated, which would corrupt data and result in 'access beyond end of device' errors.
        You need to either
           * use external meta data (recommended)
           * shrink that filesystem first
           * zero out the device (destroy the filesystem)
        Operation refused.

        Command 'drbdmeta /dev/drbd0 v08 /dev/xvda2 internal create-md' terminated with exit code 40
        drbdadm aborting

    As I understand it, all of my problems come from the fact that I don't have unallocated disk space on xvda2. What are my options besides shrinking the FS and connecting a separate physical disk? Can't the meta-data be stored in a file on the local filesystem?
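    Two separate things seem to be going on here. DRBD expects its metadata on a block device, so it cannot live in a plain file on the local filesystem (short of a losetup loop device over a file), but it does not have to be internal: a small extra volume attached to each DomU is enough, which is cheap in a Xen test rig. Separately, the "Lower device is already claimed" message is because xvda2 is the mounted root filesystem (see the df output), which DRBD cannot take over while it is in use, regardless of where the metadata lives. A hedged sketch of the external-metadata form, with /dev/xvdb1 as an assumed extra Xen volume:

        # drbd.conf, per-node section (illustrative): keep the data disk, move the metadata
        #   on cltest1 {
        #       device    /dev/drbd0;
        #       disk      /dev/xvda2;
        #       address   192.168.1.186:7789;
        #       meta-disk /dev/xvdb1[0];    # indexed external metadata on the spare volume
        #   }

        drbdadm create-md mysql    # now writes to /dev/xvdb1 rather than the tail of the ext3 on xvda2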


  • SOGo installation on Mail Server

    - by i.h4d35
    We run a normal mail server on cPanel for web-based email. We've just got a request to add calendar, address book, and tasks functions; mobile capabilities (I'm guessing access via a mobile client/app); public folders, etc. On the client side, we have some people using webmail, some using MS Outlook, and some others using Mozilla Thunderbird.

    Having looked around, I zeroed in on SOGo, Citadel and Kolab as options for this. I read through SOGo's official install guide and also checked here and here. However, I see most of the HowTos ask for installation of MySQL/PgSQL, LDAP, Samba, etc. While I can manage installation of Samba (if required), I have no idea if installing LDAP, MySQL etc. is really required. Also, any guidance as to how to install this on a regular mail server would be appreciated.

    Sorry if this sounds vague. If any more information is required, I'll be happy to give it. Thanks in advance.

    Edit: The server in question has always been governed via cPanel (to install PHP, MySQL, configure DNS etc.), so I am confused about whether I really need LDAP.


  • Cisco ASA and static IPv6 tunnel endpoint?

    - by Martijn Heemels
    I recently installed a Cisco ASA 5505 firewall on the edge of our LAN. The setup is simple: Internet <-- ASA <-- LAN. I would like to provide the hosts in the LAN with IPv6 connectivity by setting up a 6in4 tunnel to SixXS. It would be nice to have the ASA as the tunnel endpoint so it can firewall both IPv4 and IPv6 traffic.

    Unfortunately the ASA apparently can't create such a tunnel itself, and can't port-forward protocol 41 traffic, so I believe I would have to do one of the following instead:

        1. Set up a host with its own IP outside the firewall, and have that function as the tunnel endpoint. The ASA can then firewall and route the v6 subnet to the LAN.
        2. Set up a host inside the firewall that functions as the endpoint, separated via VLAN or whatever, and loop the traffic back into the ASA where it can be firewalled and routed. This seems contrived, but would allow me to use a VM instead of a physical machine as the endpoint.

    Any other way? What would you suggest as the optimal way to set this up?

    P.S. I do have a spare public IP address available if needed, and can spin up another VM in our VMware infrastructure.
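    For either option, a Linux host or VM can terminate a static SixXS 6in4 tunnel with plain iproute2; a minimal sketch with placeholder addresses (the real PoP address and prefixes come from the tunnel details in the SixXS portal):

        # protocol-41 tunnel: 192.0.2.1 = SixXS PoP, 198.51.100.10 = local public IP (placeholders)
        ip tunnel add sixxs mode sit local 198.51.100.10 remote 192.0.2.1 ttl 64
        ip link set dev sixxs up
        ip -6 addr add 2001:db8:a::2/64 dev sixxs             # your side of the tunnel subnet
        ip -6 route add default via 2001:db8:a::1 dev sixxs   # PoP side as default gateway

    The routed subnet for the LAN would then be routed via this box, with the ASA (or the endpoint itself) doing the IPv6 filtering.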


  • Cannot run logwatch due to Date::Manip issue

    - by Quintin Par
    I tried to run logwatch as follows:

        [root@machine cron.daily]# ./0logwatch
        ERROR: Date::Manip unable to determine TimeZone.
        Execute the following command in a shell prompt:
            perldoc Date::Manip
        The section titled TIMEZONES describes valid TimeZones and where they can be defined.

    My date is as follows:

        [root@machine cron.daily]# date
        Thu Aug 23 06:25:21 GMT 2012

    Now, based on details in various forums, I tried to fix this by setting /etc/timezone to "+0800", but it didn't work. My /etc/localtime points to /usr/share/zoneinfo/GMT and is managed by Puppet. How do I go about fixing this? I still want all my machines to be in the GMT timezone.

    EDIT: Sadly, both of the suggested changes are not working:

        [root@machine cron.daily]# cat /etc/TIMEZONE
        UTC

    Quanta's suggestion:

        [root@machine cron.daily]# cat ~/.bash_profile
        # .bash_profile

        # Get the aliases and functions
        if [ -f ~/.bashrc ]; then
                . ~/.bashrc
        fi

        # User specific environment and startup programs
        PATH=$PATH:$HOME/bin
        export TZ=GMT
        export PATH

        [root@machine cron.daily]# source ~/.bash_profile
        [root@machine cron.daily]# ./0logwatch
        ERROR: Date::Manip unable to determine TimeZone.
        Execute the following command in a shell prompt:
            perldoc Date::Manip
        The section titled TIMEZONES describes valid TimeZones and where they can be defined.
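    Date::Manip honours the TZ environment variable, and nothing under /etc/cron.daily ever reads ~/.bash_profile, which is likely why the export above helps an interactive run but not the scheduled one. Two hedged places to set it instead (the zone name Etc/GMT is illustrative; plain GMT also works):

        # option 1: set TZ for this job only -- add right after the shebang in /etc/cron.daily/0logwatch:
        #   export TZ=Etc/GMT

        # option 2: fix the system-wide hints Date::Manip falls back to
        echo 'Etc/GMT' > /etc/timezone   # a zone *name*, not an offset such as "+0800"
        ls -l /etc/localtime             # already points at /usr/share/zoneinfo/GMT per the post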


  • Why won't Google Chrome install on my Windows 7?

    - by quakkels
    I can tell something is wrong with my computer. A little history: Google Chrome was not running correctly. When I was using Facebook, any link that ran an AJAX JavaScript function would not work. No error was being thrown; it just wasn't working.

    I uninstalled Chrome and reinstalled it. On re-install, a toolbar automatically installed with Chrome: BitTorrentBar. I did not install BitTorrentBar; it installed on its own. Bad. When BitTorrentBar was active, the AJAX functions on Facebook worked. When I deactivated or uninstalled BitTorrentBar from Chrome extensions, AJAX stopped working.

    I scanned the computer with Avast (free version) and Spybot Search and Destroy (also free). They found nothing wrong. I uninstalled Chrome using Control Panel and then went through my file system deleting anything that had "google", "BitTorrent", "chrome", or "conduit" in it. Conduit seems to distribute BitTorrentBar. After doing all that, I tried to reinstall Chrome. I got this error:

        System.ComponentModel.Win32Exception: The system cannot find the file specified
            at System.Diagnostics.Process.StartWithShellExecuteEx(ProcessStartInfo startInfo)
            at System.Diagnostics.Process.Start()
            at System.Diagnostics.Process.Start(ProcessStartInfo startInfo)
            at ClickOnceBootStrap.ClickOnceEntry.Main()

    This error interrupts the Chrome install every time! Computer scans don't show anything wrong. So then I thought this might be a .NET Framework error; perhaps I deleted something I wasn't supposed to. So I reinstalled .NET Framework 4 with the repair option checked, and I also ran Windows Updates. When I tried to install Chrome again, I got the same error as listed above. What should I do?


  • tomcat 5.5 startup script on Ubuntu server

    - by Registered User
    Can anyone share their Tomcat startup script? I am looking for a Tomcat startup script on an Ubuntu machine. My Ubuntu is 10.04 server. The Tomcat is 5.5.30. It is in /opt/apache-tomcat-5.5.31. I tried this script:

        #!/bin/bash
        #
        # tomcat
        #
        # chkconfig:
        # description: Start up the Tomcat servlet engine.

        # Source function library.
        . /lib/lsb/init-functions

        RETVAL=$?
        CATALINA_HOME="/opt/apache-tomcat-5.5.31"

        case "$1" in
          start)
            if [ -f $CATALINA_HOME/bin/startup.sh ]; then
              echo $"Starting Tomcat"
              /opt/apache-tomcat-5.5.31/bin/startup.sh
            fi
            ;;
          stop)
            if [ -f $CATALINA_HOME/bin/shutdown.sh ]; then
              echo $"Stopping Tomcat"
              /opt/apache-tomcat-5.5.31/bin/shutdown.sh
            fi
            ;;
          *)
            echo $"Usage: $0 {start|stop}"
            exit 1
            ;;
        esac

        exit $RETVAL

    but it did not work after a reboot. The same script works if I do /etc/init.d/tomcat start or /etc/init.d/tomcat stop. I have done update-rc.d tomcat defaults, as it is an Ubuntu server, but on reboot all of this fails to work.
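    Two things commonly separate "works when run from /etc/init.d by hand" from "works at boot" on Ubuntu 10.04: the script has no LSB header for update-rc.d/insserv to order it by, and the boot environment has no JAVA_HOME for catalina.sh to find a JDK. Both additions below are assumptions to try, not a diagnosis (the JDK path is illustrative):

        ### BEGIN INIT INFO
        # Provides:          tomcat
        # Required-Start:    $local_fs $remote_fs $network
        # Required-Stop:     $local_fs $remote_fs $network
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Tomcat 5.5 servlet engine
        ### END INIT INFO

        # catalina.sh needs a JDK; there is no login environment at boot, so set it in the script
        export JAVA_HOME=/usr/lib/jvm/java-6-sun     # adjust to the JDK actually installed

    After adding the header, re-register the script with update-rc.d -f tomcat remove followed by update-rc.d tomcat defaults, and check that an S20tomcat (or similar) link exists under /etc/rc2.d.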


  • Why does MySQL in XAMPP restart only when I run the mysqld.exe file manually?

    - by Ranjit Kumar
    I am using MySQL in XAMPP v3.0.2. When restarting the MySQL server, it first shows me the running status, and after 2 or 3 seconds it stops running automatically. As of now I have a temporary solution: going into the XAMPP installation folder (xampp\mysql\bin) and running the mysqld.exe file by hand. I don't know whether that is the correct solution or whether there is an alternative; please advise. Error log:

        120629 15:29:59 [Note] Plugin 'FEDERATED' is disabled.
        120629 15:29:59 InnoDB: The InnoDB memory heap is disabled
        120629 15:29:59 InnoDB: Mutexes and rw_locks use Windows interlocked functions
        120629 15:29:59 InnoDB: Compressed tables use zlib 1.2.3
        120629 15:29:59 InnoDB: Initializing buffer pool, size = 16.0M
        120629 15:29:59 InnoDB: Completed initialization of buffer pool
        InnoDB: The first specified data file D:\xampp\xampp\mysql\data\ibdata1 did not exist:
        InnoDB: a new database to be created!
        120629 15:29:59 InnoDB: Setting file D:\xampp\xampp\mysql\data\ibdata1 size to 10 MB
        InnoDB: Database physically writes the file full: wait...
        120629 15:29:59 InnoDB: Log file D:\xampp\xampp\mysql\data\ib_logfile0 did not exist: new to be created
        InnoDB: Setting log file D:\xampp\xampp\mysql\data\ib_logfile0 size to 5 MB
        InnoDB: Database physically writes the file full: wait...
        120629 15:30:00 InnoDB: Log file D:\xampp\xampp\mysql\data\ib_logfile1 did not exist: new to be created
        InnoDB: Setting log file D:\xampp\xampp\mysql\data\ib_logfile1 size to 5 MB
        InnoDB: Database physically writes the file full: wait...
        InnoDB: Doublewrite buffer not found: creating new
        InnoDB: Doublewrite buffer created
        InnoDB: 127 rollback segment(s) active.
        InnoDB: Creating foreign key constraint system tables
        InnoDB: Foreign key constraint system tables created
        120629 15:30:02 InnoDB: Waiting for the background threads to start


  • Google Chrome does not launch after some months without using it in Windows XP/Seven

    - by kokbira
    Well, I have more than one browser installed on the computers I use with Windows XP or Seven, generally Internet Explorer 8, Firefox 4, Opera 11 and Google Chrome. I often use Firefox, but I want to use Google Chrome sometimes because I have a lot of addons and web apps on it.

    The issue is: when I try to execute Chrome after some months without using it, it does not work. Using Process Explorer or Task Manager, I can see that there are no Google processes running. Then I reinstall it and everything works. But if I do not use it for some months again, it will stop working again... Is it an update problem? Must I use Chrome every day, or is there another way to avoid that issue?

    PS: I installed the latest English and Portuguese versions (how do I get the version numbers when it does not execute?), not at the same time, and it still does not launch...

    PS2: There is a Google Update process that is launched at startup:

        HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run
            Name: Google Update
            Type: REG_SZ
            Data: "C:\Users\Ubirajara\AppData\Local\Google\Update\GoogleUpdate.exe" /c


  • Intel z77 vs h77 for intensive compiling, gaming [closed]

    - by Bilal Akhtar
    I'm in the market for a desktop motherboard (preferably ATX) that works well with the Intel i7-3770 Ivy Bridge processor at 3.4 GHz on the LGA1155 socket. That processor is very fast, and it should handle all my tasks. My question is about the type of motherboard chipset I should choose to accompany it. I plan to use my rig for compiling and developing Debian packages and other OS components, web development, occasional Android apps, chroots, VMs, FlightGear, other gaming but nothing serious, and heavy multitasking, all on Ubuntu. I do NOT plan to overclock, and I never will, so that's not a concern for me. That said, I'm down to three chipset choices:

        Intel H77
        Intel Z68
        Intel Z77

    I'm planning to go for the H77 since I don't need any of the new features in the Z77. I don't plan to use a second GPU and I will never overclock my CPU/GPU. My question is, will H77-based motherboards handle all my tasks well? Intel advertises that chipset for "everyday computing", but other sites say its base functionality is the same as the Z77's. Intel rather advertises the Z77 for "serious multitaskers, hardcore gamers and overclocking enthusiasts". The problem with all the Z77 motherboards I've seen is that they're way too expensive, and their main feature seems to be overclocking, which won't be useful to me.

    Will I lose any raw CPU/GPU performance or HDD read/write speed with the H77 when comparing it to a Z77? Will heat be an issue? From what I've seen, Z77 motherboards have larger heat sinks than H77 ones. Will that be an issue if I go with an H77 motherboard with no heat sinks on the chipset? The CPU will have a fan in both cases, of course.

    tl;dr: When it comes to CPU/GPU performance and HDD read/write, is the Intel H77 chipset slower than the Z77? I don't care about overclocking or multiple GPUs, and for the processor, I'm set on the Ivy Bridge i7-3770.


  • Lock down Wiki access to password only but remain open to a subnet via .htaccess

    - by Treffynnon
    Basically we have a Wiki that has some sensitive information stored in it - not the best, I know, but my predecessor set it up. I want to request password access from anyone who is not on the local network subnet. Those on the local subnet should be able to proceed without entering a password. The following .htaccess does not seem to work any more, as it is letting non-local access through without requiring the password:

        AuthName "Our Wiki"
        AuthType Basic
        AuthUserFile /path/to/passwd/file
        AuthGroupFile /dev/null
        Require valid-user
        Allow from 192.168
        Satisfy Any
        order deny,allow

    And I cannot work out why. The WikkaWiki it is supposed to be protecting was recently upgraded, which clobbered the .htaccess file, so I restored the above from memory/googling. Maybe I am missing an important directive? The full .htaccess is as follows:

        AuthName "Our Wiki"
        AuthType Basic
        AuthUserFile /path/to/passwd/file
        AuthGroupFile /dev/null
        Require valid-user
        Allow from 192.168
        Satisfy Any

        SetEnvIfNoCase Referer ".*($LIST_OF_ADULT_WORDS).*" BadReferrer
        order deny,allow
        deny from env=BadReferrer

        <IfModule mod_rewrite.c>
            # turn on rewrite engine
            RewriteEngine on
            RewriteBase /

            # if request is a directory, make sure it ends with a slash
            RewriteCond %{REQUEST_FILENAME} -d
            RewriteRule ^(.*/[^/]+)$ $1/

            # if not rewritten before, AND requested file is wikka.php
            # turn request into a query for a default (unspecified) page
            RewriteCond %{QUERY_STRING} !wakka=
            RewriteCond %{REQUEST_FILENAME} wikka.php
            RewriteRule ^(.*)$ wikka.php?wakka= [QSA,L]

            # if not rewritten before, AND requested file is a page name
            # turn request into a query for that page name for wikka.php
            RewriteCond %{QUERY_STRING} !wakka=
            RewriteRule ^(.*)$ wikka.php?wakka=$1 [QSA,L]
        </IfModule>
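    One directive looks missing rather than wrong: with "order deny,allow" and no Deny line, Apache 2.2 allows every host, and "Satisfy Any" then lets everyone skip the password. A hedged sketch of the auth portion with an explicit deny-by-default added (paths unchanged from the post):

        AuthName "Our Wiki"
        AuthType Basic
        AuthUserFile /path/to/passwd/file
        AuthGroupFile /dev/null
        Require valid-user

        Order Deny,Allow
        Deny from all
        Allow from 192.168
        Satisfy Any

    With that in place, hosts outside 192.168.0.0/16 fail the host check and fall through to the password prompt, while local addresses pass the Allow rule and skip it. The BadReferrer rules may need revisiting separately, since a blanket Deny changes how the env-based deny combines with the Allow line.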


  • Plesk + Apache + PHP (FastCGI): Constant session permissions problems, conflicts between HTTP / HTTPS

    - by Hans Engel
    I've just moved a collection of sites over to a brand-new server, running Apache 2.2.3, PHP 5.3, and Plesk 10.1.1. I am having problems with file permissions on PHP sessions, which are being stored in /var/lib/php/session. I originally set the permissions like so for this folder:

        drwxrwx--- 2 apache psacln 8192 Mar 22 23:25 session

    This worked fine for HTTP sessions. Files were being saved in that folder with these permissions:

        -rw------- 1 client1 psacln 0 Mar 22 23:24 sess_507...
        -rw------- 1 client2 psacln 0 Mar 22 23:25 sess_8o1...

    The problem, however, is that PHP scripts accessed via HTTPS do not seem to be run by the same client1 or client2 user. I deleted the files in the session directory and accessed a login page via HTTPS to see how sessions were being saved when initiated via this protocol:

        -rw------- 1 apache apache 0 Mar 22 23:25 sess_507...

    So, for whatever reason, sessions initiated by clients browsing with HTTPS were being saved by apache:apache, while sessions from HTTP clients were saved with someclient:psacln.

    What I'd like to ask:

        1. How can I avoid this problem with session permissions? When sessions are created via unencrypted HTTP and a client then visits an HTTPS portion of the site, permission errors are shown, since apache:apache tries to access the session file created by someclient:psacln. The converse is also true.
        2. Can I change the user which runs the Apache HTTPS server, via Plesk or the command line?
        3. If not, can I have PHP sessions saved with rw-rw---- permissions, and then add apache to the psacln group?

    Any other suggestions on how to fix this issue?
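    A quick way to pin down the "which user serves HTTPS" half of question 2 is to probe it directly from PHP; a small sketch (the vhost path and file name are illustrative, the posix extension is assumed to be enabled, and the file should be removed when done):

        cat > /var/www/vhosts/example.com/httpdocs/whoami.php <<'EOF'
        <?php
        // prints the effective user this request runs as
        $u = posix_getpwuid(posix_geteuid());
        echo $u['name'], "\n";
        EOF

        # compare the two schemes
        curl http://example.com/whoami.php
        curl -k https://example.com/whoami.php

    If HTTPS comes back as apache while HTTP comes back as the subscription user, the SSL vhost is most likely running PHP as mod_php or a shared pool instead of the per-subscription FastCGI wrapper, and aligning the two handlers is cleaner than loosening permissions on /var/lib/php/session.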


  • Running CGI With Perl under Apache Permission Problem

    - by neversaint
    I have the following entry under apache2.conf in my Debian box:

        AddHandler cgi-script .cgi .pl
        Options +ExecCGI
        ScriptAlias /cgi-bin/ /var/www/mychosendir/cgi-bin/
        <Directory /var/www/mychosendir/cgi-bin>
            Options +ExecCGI -Indexes
            allow from all
        </Directory>

    Then I have a perl cgi script stored under these directories and permissions:

        nvs@somename:/var/www/mychosendir$ ls -lhR
        .:
        total 12K
        drwxr-xr-x 2 nvs nvs 4.0K 2010-04-21 13:42 cgi-bin

        ./cgi-bin:
        total 4.0K
        -rwxr-xr-x 1 nvs nvs 90 2010-04-21 13:40 test.cgi

    However when I tried to access it in the web browser:

        http://myhost.com/mychosendir/cgi-bin/test.cgi

    They gave me this error:

        [Wed Apr 21 15:26:09 2010] [error] [client 150.82.219.158] (8)Exec format error: exec of '/var/www/mychosendir/cgi-bin/test.cgi' failed
        [Wed Apr 21 15:26:09 2010] [error] [client 150.82.219.158] Premature end of script headers: test.cgi

    What's wrong with it?

    Update: I also have the following entry in my apache2.conf:

        <Files ~ "^\.ht">
            Order allow,deny
            Deny from all
        </Files>

    And the content of test.cgi is this:

        #!/usr/bin/perl -wT
        print "Content-type: text/html\n\n";
        print "Hello, world!\n";
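    "(8)Exec format error" from Apache usually means the kernel refused to execute the script rather than anything permission-related, and the classic cause for a CGI with a correct-looking shebang is Windows (CRLF) line endings, which turn the interpreter line into "/usr/bin/perl -wT\r". A quick check and fix, using the paths from the post:

        file /var/www/mychosendir/cgi-bin/test.cgi    # reports "... with CRLF line terminators" if affected
        head -n 1 /var/www/mychosendir/cgi-bin/test.cgi | od -c | head -n 2   # look for \r before \n

        # strip the carriage returns in place (dos2unix works too, if installed)
        sed -i 's/\r$//' /var/www/mychosendir/cgi-bin/test.cgi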


  • How can I cache a Subversion password on a server, without storing it in unencrypted form?

    - by Zilk
    My Subversion server only provides access via HTTPS; support for svn+ssh has been dropped because we wanted to avoid creating system users on that machine just for SVN access. Now I'm trying to provide a way for users to cache their passwords for a while, without leaving them stored on the filesystem in unencrypted form. This is no problem for Gnome or KDE users, because they can use gnome-keyring and kwallet, respectively. IIRC, TortoiseSVN has a similar caching mechanism, too. But what about users on a non-GUI system? Some context: in this case, we have a development/testing server where one project has been checked out into the Apache htdocs directory. Development for this project is almost complete, and only minor text/layout changes are performed directly on this server. Nevertheless, the changes should be checked into the repository. There's no kwallet and no gnome-keyring on this system, and the ssh-agent can't help because the repository is accessed via https instead of svn+ssh. As far as I know, that leaves them the choice of entering the password every time they talk to the SVN server, or storing it in an insecure way. Is there any way to get something like what gnome-keyring and kwallet provide in a non-GUI environment?
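    There is no headless counterpart to gnome-keyring/kwallet in the Subversion 1.6/1.7 era, but if a newer client can be installed on that server, Subversion 1.8 and later can cache HTTPS credentials in a running gpg-agent instead of writing them to disk in plain text; a sketch under that version assumption:

        # ~/.subversion/config for the committing user (svn >= 1.8, gpg-agent running, pinentry-curses for prompts):
        #   [auth]
        #   password-stores = gpg-agent

        svn ls https://svn.example.com/repo/trunk   # first use prompts once, then the agent holds the password in memory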


  • Are there any Microsoft Exchange Clients for iOS and Android that store their local data in an encrypted manner?

    - by Zac B
    I don't feel like this is a product recommendation question, more of a "does this tech even exist and is it feasible" question, but if I'm wrong, feel free to give this question the boot.

    Context: Our company has a bunch of traveling employees who access the company's Exchange server via their iDevices or Android phones, but because of the data protection laws in the state where our company is based (and the nature of the data our company works with), a recent security audit found that all mobile devices (laptops, phones, etc.) operated by our company need to have all company correspondence and related data encrypted all the time. For laptops, that was easy: BitLocker or TrueCrypt, problem solved. For phones and tablets, however, I'm stumped. Sure, you can put lock screens/passwords on the phones, but the data is still accessible via external extraction, as law enforcement authorities already know.

    Question: Are there any clients for Microsoft Exchange that run on iOS or Android which store local data encrypted? The people using our mobile devices do a lot of their work while offline, so just giving them OWA access with SSL connection security isn't enough. Are there apps/technologies that present an additional login credential prompt to decrypt locally stored data in the app's storage area on the phone? My gut reaction when I started looking into this was "that doesn't sound like something Apple would allow into the App Store", but I've been wrong before...


  • Intermittent PHP error: Undefined function <core function>

    - by Daniel
    In the last week I've been coming across an incredibly annoying error on one of our Slicehost slices. It appears that every now and then PHP will fail with a fatal error, saying a certain function is undefined. The function changes, but it is always a core PHP function, e.g. defined(), version_compare(), etc. This problem has occurred while using several different PHP applications (phpMyAdmin, my own custom-built apps, etc.), leading me to believe that the problem is not specific to the running code. Here are some details:

        Debian Lenny
        Apache 2.2.9
        PHP 5.2.6-1+lenny4 with Suhosin-Patch (running eAccelerator 0.9.6)

    Error logs show nothing out of the ordinary. I thought memory might be an issue, but free -m reports upwards of 100MB free almost all the time. Another thing I'm trying to investigate is whether the problem might be related to eAccelerator, but testing this theory out is incredibly hard because the issue doesn't appear very often, and I've been using eAccelerator for months on this install without any problems up until now. Has anyone ever come across anything like this? Why would PHP report undefined core functions?
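    One way to make the eAccelerator theory testable is to switch the extension off and clear its cache, then watch whether the undefined-function errors ever come back; a sketch assuming Debian-style paths (verify both paths before deleting anything):

        # in the eAccelerator ini, e.g. /etc/php5/conf.d/eaccelerator.ini (path is an assumption):
        #   eaccelerator.enable    = "0"
        #   eaccelerator.optimizer = "0"

        rm -rf /var/cache/eaccelerator/*    # the configured cache_dir; check eaccelerator.cache_dir first
        /etc/init.d/apache2 restart

    Corrupted cache entries are a plausible fit for intermittent "undefined core function" failures, so clearing the cache alone is worth trying even if disabling the extension long-term is not an option.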


  • NetBackup prefers "Scratch" tapes over dedicated tapes

    - by wfaulk
    I have a NetBackup 6.0MP7 installation running on Windows Server 2003. It functions as the only Master Server and Media Server. I swap a full set of tapes in and out every week, but leave a set of tapes with their Volume Pool set to "Scratch" in all the time. The weekly tape sets then get rotated back in after a period of time. Largely, this works fine. I seldom actually need the scratch tapes, but every once in a while a backup will run over what I have dedicated to the task.

    However, one week's set of tapes consistently gets declined in favor of the scratch pool. The backup policies are the same for every week, they all have "Policy Volume Pool" set to "NetBackup", and all of the tapes for every week (besides the scratch tapes) have had their pools assigned as "NetBackup", definitely including the week that always gets ignored. That said, it doesn't ignore all of the NetBackup pool tapes for that week: it does usually write to two or three of them, but it writes to something like 20 of the scratch tapes. (I haven't thought to look to see if it's always the same two or three tapes.) And this problem never seems to occur for any other week. It doesn't load the tapes and then reject them; it never seems to try to use them at all. They are not flagged as frozen. They are all active and unassigned when I swap them in.

    The tapes are in a Quantum PX510 tape library. The NetBackup server is attached to the library/robot via Fibre Channel going through an HP-branded Brocade switch. I'm not an expert on NetBackup at all, and I don't really even know where to look. Any advice on logs to look at, logging to enable, or really anything at all would be appreciated. I'll keep an eye on the question and update it if anyone needs any more info to help.


  • Need help tuning MySQL and Linux server

    - by Newtonx
    We have a multi-user application (like MailChimp or Constant Contact). Each of our customers has its own contact list (from 5 to 100,000 contacts). Everything is stored in one BIG database (currently 25G). Since we released our product we have the following data history over 5 years:

        - users/customers (200+)
        - contacts (40 million records)
        - campaigns
        - campaign_deliveries (73.843.764 records)
        - campaign_queue (8 million currently)

    As we get more users and table records increase, our system/web app is getting slower and slower. Some queries take too long to execute.

    SCHEMA

    Table contacts:

        +--------------+------------------+------+-----+---------+----------------+
        | Field        | Type             | Null | Key | Default | Extra          |
        +--------------+------------------+------+-----+---------+----------------+
        | contact_id   | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | client_id    | int(10) unsigned | YES  |     | NULL    |                |
        | name         | varchar(60)      | YES  |     | NULL    |                |
        | mail         | varchar(60)      | YES  | MUL | NULL    |                |
        | verified     | int(1)           | YES  |     | 0       |                |
        | owner        | int(10) unsigned | NO   | MUL | 0       |                |
        | date_created | date             | YES  | MUL | NULL    |                |
        | geolocation  | varchar(100)     | YES  |     | NULL    |                |
        | ip           | varchar(20)      | YES  | MUL | NULL    |                |
        +--------------+------------------+------+-----+---------+----------------+

    Table campaign_deliveries:

        +---------------+------------------+------+-----+---------+----------------+
        | Field         | Type             | Null | Key | Default | Extra          |
        +---------------+------------------+------+-----+---------+----------------+
        | id            | int(11)          | NO   | PRI | NULL    | auto_increment |
        | newsletter_id | int(10) unsigned | NO   | MUL | 0       |                |
        | contact_id    | int(10) unsigned | NO   | MUL | 0       |                |
        | sent_date     | date             | YES  | MUL | NULL    |                |
        | sent_time     | time             | YES  | MUL | NULL    |                |
        | smtp_server   | varchar(20)      | YES  |     | NULL    |                |
        | owner         | int(5)           | YES  | MUL | NULL    |                |
        | ip            | varchar(20)      | YES  | MUL | NULL    |                |
        +---------------+------------------+------+-----+---------+----------------+

    Table campaign_queue:

        +---------------+------------------+------+-----+---------+----------------+
        | Field         | Type             | Null | Key | Default | Extra          |
        +---------------+------------------+------+-----+---------+----------------+
        | queue_id      | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | newsletter_id | int(10) unsigned | NO   | MUL | 0       |                |
        | owner         | int(10) unsigned | NO   | MUL | 0       |                |
        | date_to_send  | date             | YES  |     | NULL    |                |
        | contact_id    | int(11)          | NO   | MUL | NULL    |                |
        | date_created  | date             | YES  |     | NULL    |                |
        +---------------+------------------+------+-----+---------+----------------+

    Slow queries LOG:

        Query_time: 350  Lock_time: 1  Rows_sent: 1  Rows_examined: 971004
        SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 70 AND contacts.verified = 1);

        Query_time: 235  Lock_time: 1  Rows_sent: 1  Rows_examined: 4455209
        SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 2);

    How can we optimize it? Queries should take no more than 30 seconds to execute. Can we optimize it and keep all data in one BIG database, or should we change the app's structure and set up one database per user? Thanks
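    Both slow queries filter contacts on owner (and verified), and the schema shows only single-column indexes, so each COUNT(*) ends up examining hundreds of thousands to millions of rows. A hedged sketch (the database name is a placeholder, and the ALTER will lock the table while it builds on MySQL of this vintage) of a composite index that lets the count be answered from the index alone, plus the check that it is actually used:

        # composite index covering both WHERE clauses
        mysql -e "ALTER TABLE contacts ADD INDEX idx_owner_verified (owner, verified);" yourdb

        # EXPLAIN should now report key: idx_owner_verified and 'Using index' in Extra
        mysql -e "EXPLAIN SELECT COUNT(*) AS total FROM contacts WHERE owner = 70 AND verified = 1\G" yourdb

    Whether to stay with one big schema or split per customer is a separate call; with indexes matching the WHERE clauses and enough buffer/key cache for the hot index pages, 25G in a single database is not unusual.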


  • Can't Login to phpPgAdmin

    - by Devin
    I'm trying to set up phpPgAdmin on my test machine so that I can interface with PostgreSQL without always having to use the psql CLI. I have PostgreSQL 9.1 installed via the RPM repository, while I installed phpPgAdmin 5.0.4 "manually" (by extracting the archive from the phpPgAdmin website). For the record, my host OS is CentOS 6.2. I made the following configuration changes already:

    PostgreSQL:

        - Inside pg_hba.conf, I changed all METHODs to md5.
        - I gave the postgres account a password.
        - I added a new account named webuser with a password (note that I did not do anything else to the account, so I can't exactly say that I know what permissions it has and all).

    phpPgAdmin config.inc.php:

        - Changed the line $conf['servers'][0]['host'] = ''; to $conf['servers'][0]['host'] = '127.0.0.1'; (I've also tried using localhost as the value there).
        - Set $conf['extra_login_security'] to false.

    Whenever I try to log in to phpPgAdmin, I get "Login failed", even if I use credentials that work in psql. I've tried to go through some of the steps noted in Question 3 of the FAQ, but it hasn't worked out well so far. It likely does not help that this is my first day working with PostgreSQL; I'm fairly familiar with MySQL, but I have to use PostgreSQL for the project I'm working on.

    Could anyone offer some help on how to set up phpPgAdmin on CentOS 6.2? If I've done something terribly wrong in my configuration so far, it's no big deal to blow something/everything away, as it's not like I've stored any data there yet! I appreciate any insight you may have!
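    Two things worth ruling out first, since both produce a bare "Login failed": the webuser role has to be allowed to log in with a password, and pg_hba.conf edits only take effect after a reload. A sketch, assuming the PGDG service name for 9.1 on CentOS 6:

        # make sure the role can log in and has a password
        sudo -u postgres psql -c "ALTER ROLE webuser WITH LOGIN PASSWORD 'changeme';"

        # pick up the pg_hba.conf change (all METHODs set to md5)
        sudo service postgresql-9.1 reload

        # phpPgAdmin connects over TCP to 127.0.0.1, so test that exact path with psql too
        psql -h 127.0.0.1 -U webuser -d postgres -W

    If the psql test over 127.0.0.1 works but phpPgAdmin still refuses the same credentials, the problem is on the phpPgAdmin side rather than in PostgreSQL.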

