Search Results

Search found 16797 results on 672 pages for 'directory traversal'.


  • OS X: Storing MySQL data securely, on an encrypted FileVault image using a soft link

    - by GJ
    I am trying to get a MacPorts-installed MySQL to use a data directory stored inside my FileVault-protected home dir. I used

        sudo cp -a /opt/local/var/db/mysql5 ~/db/

    (the -a to ensure file permissions remain intact) and then replaced the original mysql5 directory with a soft link:

        sudo ln -s ~/db/mysql5 /opt/local/var/db/mysql5

    However, when I now try to start MySQL it fails. It follows the soft link at least to the extent that it modifies some files in the ~/db/mysql5 dir, notably the error log, which gets this appended to it:

        110108 15:33:08 mysqld_safe Starting mysqld daemon with databases from /opt/local/var/db/mysql5
        110108 15:33:08 [Warning] '--skip-locking' is deprecated and will be removed in a future release. Please use '--skip-external-locking' instead.
        110108 15:33:08 [Warning] '--log_slow_queries' is deprecated and will be removed in a future release. Please use ''--slow_query_log'/'--slow_query_log_file'' instead.
        110108 15:33:08 [Warning] '--default-character-set' is deprecated and will be removed in a future release. Please use '--character-set-server' instead.
        110108 15:33:08 [Warning] Setting lower_case_table_names=2 because file system for /opt/local/var/db/mysql5/ is case insensitive
        110108 15:33:08 [Note] Plugin 'FEDERATED' is disabled.
        110108 15:33:08 [Note] Plugin 'ndbcluster' is disabled.
        /opt/local/libexec/mysqld: Table 'mysql.plugin' doesn't exist
        110108 15:33:08 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        110108 15:33:09 InnoDB: Started; log sequence number 4 1596664332
        110108 15:33:09 [ERROR] /opt/local/libexec/mysqld: Can't create/write to file '/opt/local/var/db/mysql5/mac.local.pid' (Errcode: 13)
        110108 15:33:09 [ERROR] Can't start server: can't create PID file: Permission denied
        110108 15:33:09 mysqld_safe mysqld from pid file /opt/local/var/db/mysql5/gPod.local.pid ended

    I can't see why MySQL can't create the pid file, since manually creating it as the _mysql user succeeds (sudo -u _mysql touch mac.local.pid from inside ~/db/mysql5). Any ideas how to resolve this?
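
    A likely culprit (an assumption, not stated in the question): a FileVault-protected home directory is typically mode 700, so _mysql can create a file when the working directory is already inside ~/db/mysql5, but cannot traverse the home directory when resolving the full path that mysqld uses. A minimal sketch to test and fix, with "yourname" as a placeholder username:

        # Can _mysql reach the datadir via an absolute path?
        sudo -u _mysql ls /Users/yourname/db/mysql5

        # If that fails with "Permission denied", grant traverse (execute)
        # permission on each directory along the path:
        chmod o+x /Users/yourname /Users/yourname/db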

    Read the article

  • Can I use a single SSLCertificateFile for all my VirtualHosts instead of creating one for each VirtualHost?

    - by user65567
    I have many Apache VirtualHosts, and for each of them I use a dedicated SSLCertificateFile. This is a configuration example of one VirtualHost:

        <VirtualHost *:443>
          ServerName subdomain.domain.localhost
          DocumentRoot "/Users/<my_user_name>/Sites/users/public"
          RackEnv development
          <Directory "/Users/<my_user_name>/Sites/users/public">
            Order allow,deny
            Allow from all
          </Directory>
          # SSL Configuration
          SSLEngine on
          # Self-signed certificates
          SSLCertificateFile /private/etc/apache2/ssl/server.crt
          SSLCertificateKeyFile /private/etc/apache2/ssl/server.key
          SSLCertificateChainFile /private/etc/apache2/ssl/ca.crt
        </VirtualHost>

    Since I am maintaining several Ruby on Rails applications using the Passenger Preference Pane, this is part of the apache2 httpd.conf file:

        <IfModule passenger_module>
          NameVirtualHost *:80
          <VirtualHost *:80>
            ServerName _default_
          </VirtualHost>
          Include /private/etc/apache2/passenger_pane_vhosts/*.conf
        </IfModule>

    Can I use a single SSLCertificateFile for all my VirtualHosts (I have heard of wildcards) instead of creating one for each VirtualHost? If so, how can I change the files listed above?
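
    A single certificate can cover all the hosts only if its name matches them all, which is what a wildcard certificate does for one level of subdomain (*.domain.localhost covers a.domain.localhost but not a.b.domain.localhost). A minimal self-signed sketch, with paths and the domain as placeholders:

        openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
          -keyout /private/etc/apache2/ssl/wildcard.key \
          -out /private/etc/apache2/ssl/wildcard.crt \
          -subj "/CN=*.domain.localhost"

    Every VirtualHost can then point SSLCertificateFile and SSLCertificateKeyFile at this same pair, and the SSLCertificateChainFile line can be dropped for a self-signed certificate.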

    Read the article

  • Mongrel Cluster on Ubuntu Server Karmic

    - by trobrock
    I am trying to get Mongrel Cluster working on my Ubuntu Server Karmic box in preparation for setting up Capistrano. I've been trying to get the two to work all day and finally decided to completely remove Capistrano and see if I can just get Mongrel Cluster to work. I ran this to install it:

        gem install mongrel mongrel_cluster

    Everything installed fine, but when I change into my app's directory:

        # mongrel_rails
        -bash: mongrel_rails: command not found

    I can run it from its install location:

        # /var/lib/gems/1.8/bin/mongrel_rails
        Usage: mongrel_rails <command> [options]
        Available commands are:
        ...

    It lets me build the cluster configuration file fine, but when I run the cluster::start command:

        # /var/lib/gems/1.8/bin/mongrel_rails cluster::start
        starting port 8000
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8000 -P tmp/pids/mongrel.8000.pid -l log/mongrel.8000.log
        starting port 8001
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8001 -P tmp/pids/mongrel.8001.pid -l log/mongrel.8001.log
        starting port 8002
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8002 -P tmp/pids/mongrel.8002.pid -l log/mongrel.8002.log

    It seems it isn't calling it from the right directory after that command. What can I do to fix this? I tried setting the path previously when trying to set up Capistrano, but the path didn't stay set when Capistrano used ssh to run the commands.
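
    The gem executables live in /var/lib/gems/1.8/bin, which Ubuntu does not put on the PATH, and cluster::start shells out to a bare mongrel_rails for each port — hence the "command not found". Two hedged fixes, the second of which also survives non-interactive ssh sessions such as Capistrano's:

        # Option 1: extend the PATH for interactive shells
        echo 'export PATH=$PATH:/var/lib/gems/1.8/bin' >> ~/.bashrc
        . ~/.bashrc

        # Option 2: symlink the one command somewhere already on the PATH
        sudo ln -s /var/lib/gems/1.8/bin/mongrel_rails /usr/local/bin/mongrel_rails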

    Read the article

  • Google Chrome warning that Javascript is disabled

    - by kirchoffs415
    I hope somebody can help. I keep getting the following message when I log on:

        Your Javascript is disabled. Limited functionality is available.

    It will stay for maybe a day, sometimes two. I have uninstalled JavaScript and reinstalled, but still the same. I am using Chrome. Any help would be gratefully received. Many thanks, Dominic.

    My system spec is as follows:

        OS Name: Microsoft® Windows Vista™ Home Premium
        Version: 6.0.6002 Service Pack 2 Build 6002
        Other OS Description: Not Available
        OS Manufacturer: Microsoft Corporation
        System Name: DOM-PC
        System Manufacturer: Dell Inc.
        System Model: Inspiron 1545
        System Type: X86-based PC
        Processor: Pentium(R) Dual-Core CPU T4200 @ 2.00GHz, 2000 Mhz, 2 Core(s), 2 Logical Processor(s)
        BIOS Version/Date: Dell Inc. A05, 25/02/2009
        SMBIOS Version: 2.4
        Windows Directory: C:\Windows
        System Directory: C:\Windows\system32
        Boot Device: \Device\HarddiskVolume3
        Locale: United Kingdom
        Hardware Abstraction Layer: Version = "6.0.6002.18005"
        User Name: DOM-PC\DOM
        Time Zone: GMT Standard Time
        Installed Physical Memory (RAM): 3.00 GB
        Total Physical Memory: 2.96 GB
        Available Physical Memory: 1.38 GB
        Total Virtual Memory: 5.89 GB
        Available Virtual Memory: 4.25 GB
        Page File Space: 3.00 GB
        Page File: C:\pagefile.sys

    Read the article

  • Estimating compressed file size using a list parameter

    - by Sai
    I am currently compressing a list of files from a directory in the following format:

        tar -cvjf test_1.tar.gz -T test_1.lst --no-recursion

    The above command will compress only those files mentioned in the list. I am doing this because the list is generated such that it fits a DVD. However, compression shrinks the data below the estimated size, so there is abundant space left on the DVD. This is something like a knapsack problem: I would like to estimate the compressed file size and add some more files to the list. I found that it is possible to estimate the size using the following command:

        tar -cjf - Folder/ | wc -c

    This command does not take a list parameter. Is there a way to estimate the compressed file size from a list? I am also looking into options like Perl scripts etc.

    Edit: I think I should provide more information, since I have been doing a lot of web searching. I came across a Perl script (link) that sort of emulates the knapsack algorithm. The current problem with that script is that it splits the files in their original state. When I compress the files after splitting them, there is room left for adding more files, which I consider inefficient. There are two ways I could resolve the inefficiency: a) compress individual files and save them in a directory using a script; the compressed files would provide a good estimate, and I could generate the lists from the folder of compressed files and apply them to the uncompressed ones; b) check whether the compressed file's size is less than the required size, and if so, keep adding files until I meet the requirement. However, the addition of new files to the compressed file is an optimization problem by itself.
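
    tar will happily combine -T with output to stdout, which gives the estimate directly for the same list (a sketch using the filenames from the question):

        tar -cjf - -T test_1.lst --no-recursion | wc -c

    This compresses the listed files, discards the archive, and prints the byte count, so nothing is written to disk while measuring.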

    Read the article

  • Move 53,800+ files into 54 separate folders with ~1000 files each?

    - by ane
    Trying to import 53,800+ individual files (messages) using Gmail's POP fetcher. Gmail understandably refuses, giving the error: "Too many messages to download. There are too many messages on the other server." The folder in question looks similar to:

        /usr/home/customer/Maildir/cur/1203672790.V57I586f04M867101.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203676329.V57I586f22M520117.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203677194.V57I586f26M688004.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203679158.V57I586f2bM182864.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203680493.V57I586f33M740378.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203685837.V57I586f0bM835200.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203687920.V57I586f65M995884.mail.net:2,S
        ...

    Using the shell (tcsh, sh, etc. on FreeBSD), what one-line command can I type to split this directory full of files into separate folders so Gmail only sees 1000 messages at a time? Something with find or ls | xargs mv, maybe — whatever is fastest. The desired output would look something like:

        /usr/home/customer/Maildir/cur/1203672790.V57I586f04M867101.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203676329.V57I586f22M520117.mail.net:2,S
        ...
        /usr/home/customer/set1/ (contains messages 1-1000)
        /usr/home/customer/set2/ (contains messages 1001-2000)
        /usr/home/customer/set3/ (etc.)

    Ideally, cron could run another command to automatically reverse the process in 1000-message increments every hour, so Gmail only sees and downloads 1000 at a time.
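
    Not quite a one-liner, but a small POSIX sh sketch of the usual counter-based approach (run from /usr/home/customer; directory names follow the setN scheme described above):

        i=0
        for f in Maildir/cur/*; do
          d="set$((i / 1000 + 1))"
          mkdir -p "$d"
          mv "$f" "$d/"
          i=$((i + 1))
        done

    The reverse step for cron is the same mv in the other direction, one setN directory per run.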

    Read the article

  • php on ubuntu 13.10 won't get parsed

    - by fefe
    I'm facing a strange issue with a freshly installed Ubuntu 13.10 apache2 with MySQL and PHP. My PHP won't get parsed, and I have tried every change I found while researching.

        PHP 5.5.3-1ubuntu2 (cli) (built: Oct 9 2013 14:49:12)

        sudo a2enmod php5

    apache2.conf:

        <FilesMatch \.php$>
          SetHandler application/x-httpd-php
        </FilesMatch>

    /etc/apache2/mods-enabled/php5.conf (partial):

        </FilesMatch>
        # Deny access to files without filename (e.g. '.php')
        <FilesMatch "^\.ph(p[345]?|t|tml|ps)$">
          Order Deny,Allow
          Deny from all
        </FilesMatch>
        # Running PHP scripts in user directories is disabled by default
        #
        # To re-enable PHP in user directories comment the following lines
        # (from <IfModule ...> to </IfModule>.) Do NOT set it to On as it
        # prevents .htaccess files from disabling it.
        #<IfModule mod_userdir.c>
        #    <Directory /home/*/public_html>
        #        php_admin_value engine Off
        #    </Directory>
        #</IfModule>
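
    One cause consistent with a fresh 13.10 install (an assumption, not confirmed above): Ubuntu 13.10 ships Apache 2.4 with the event MPM by default, while mod_php requires prefork, so the PHP module never actually loads and .php files are served raw. A hedged sequence to check:

        sudo apt-get install libapache2-mod-php5
        sudo a2dismod mpm_event
        sudo a2enmod mpm_prefork php5
        sudo service apache2 restart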

    Read the article

  • Hiera datatypes won't load in Puppet

    - by Cole Shores
    I have spent a couple of days on this, followed the instructions on http://downloads.puppetlabs.com/docs/puppetmanual.pdf and even the Puppet Training Advanced Puppet manual. When I run a test against it, the results always come back as 'nil' and I'm not sure why. I am running Puppet 3.6.1 Community Edition, with Hiera 1.2.1, on SLES 11.

    My /etc/puppet/puppet.conf consists of:

        [main]
        # The Puppet log directory.
        # The default value is '$vardir/log'.
        logdir = /var/log/puppet
        # Where Puppet PID files are kept.
        # The default value is '$vardir/run'.
        rundir = /var/run/puppet
        # Where SSL certificates are kept.
        # The default value is '$confdir/ssl'.
        ssldir = $vardir/ssl
        certificate_revocation = false

        [master]
        hiera_config=/etc/puppet/hiera.yaml
        reporturl = http://puppet2.vvmedia.com/reports/upload
        ssl_client_header = SSL_CLIENT_S_DN
        ssl_client_verify_header = SSL_CLIENT_VERIFY
        # certname = dev-puppetmaster2.vvmedia.com
        # ca_name = 'dev-puppetmaster2.vvmedia.com'
        # facts_terminus = rest
        # inventory_server = localhost
        # ca = false

        [agent]
        # The file in which puppetd stores a list of the classes
        # associated with the retrieved configuration. Can be loaded in
        # the separate ``puppet`` executable using the ``--loadclasses``
        # option.
        # The default value is '$confdir/classes.txt'.
        classfile = $vardir/classes.txt
        # Where puppetd caches the local configuration. An
        # extension indicating the cache format is added automatically.
        # The default value is '$confdir/localconfig'.
        localconfig = $vardir/localconfig

    My /etc/puppet/hiera.yaml consists of:

        :backends: yaml
        :yaml:
          :datadir: /etc/puppet/hieradata
        :hierarchy:
          - common
          - database

    I have a directory created in /etc/puppet/hieradata, and within it:

    /etc/puppet/hieradata/common.yaml:

        :nameserver: ["dnsserverfoo1", "dnsserverfoo2"]
        :smtp_server: relay.internalfoo.com
        :syslog_server: syslogfoo.com
        :logstash_shipper: logstashfoo.com
        :syslog_backup_nfs: nfsfoo:/vol/logs
        :auth_method: ldap
        :manage_root: true

    /etc/puppet/hieradata/database.yaml:

        :enable_graphital: true
        :mysql_server_package: MySQL-server
        :mysql_client_package: MySQL-client
        :allowed_groups_login: extranet_users

    Does anyone have any idea what could be causing Hiera to not load the requested values? I have even tried restarting the master. Thanks in advance, Cole
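
    One detail worth testing (a hypothesis based on the data files above): the keys in common.yaml and database.yaml start with a colon, which YAML parses as Ruby symbols, while lookups from Puppet use plain string keys — that mismatch returns nil. A sketch of the check, dropping the colons and querying Hiera directly:

        # common.yaml with string keys
        smtp_server: relay.internalfoo.com
        syslog_server: syslogfoo.com

        # query outside Puppet to isolate the data layer
        hiera -c /etc/puppet/hiera.yaml smtp_server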

    Read the article

  • knife on Windows inconsistently reads ~\.chef\knife.rb on Management Workstation

    - by gWaldo
    I am implementing a new instance of (open-source v10.12) Chef in an existing environment. Currently the environment is mostly Windows, but more Linux is being introduced. I have used Chef in a previous gig, but that was a *nix-only environment. Because this is a primarily-Windows environment, my main workstation is Windows 7 (x64), and I use PowerShell as my main terminal. I created a ~\.chef directory, populated with a knife.rb and my client.pem file. When I run knife client list from ~, I get the expected results.

    I keep my work in Dropbox just in case my laptop should fail or be stolen. When I run knife client list from the repo directory (C:\Users\waldo\Dropbox\_company\projects\chef), I get:

        ERROR: Your private key could not be loaded from C:/home/waldo/.chef/waldog.pem
        Check your configuration file and ensure that your private key is readable

    (Note that the path is incorrect.) This is the progression as I walk up the tree towards ~, running knife client list:

        C:\Users\waldo\Dropbox\_company\projects\ => Above error
        C:\Users\waldo\Dropbox\_company\          => Above error
        C:\Users\waldo\Dropbox\                   => It works! (Expected results)
        C:\Users\waldo\                           => Expected results
        C:\Users\waldo\Documents\                 => Expected results
        C:\Users\waldo\Documents\GitHub           => Expected results
        C:\Users\waldo\Documents\GitHub\aProject\ => Expected results

    What. The. Eff! Now, I know that I can add -c path\to\knife.rb, but that's a HUGE PITA. The question is: why is knife inconsistently reading my ~\.chef\knife.rb, and how can I get around that without incurring carpal tunnel?
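
    knife looks for a .chef directory in the current working directory and each parent before falling back to ~\.chef, so a stray .chef\knife.rb somewhere under Dropbox\_company (one still pointing at the old C:/home/waldo path) would produce exactly this pattern. A hedged way to hunt for it from PowerShell:

        Get-ChildItem C:\Users\waldo\Dropbox -Recurse -Force -Filter knife.rb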

    Read the article

  • Mac OS X Lion Apache Server not Found

    - by Burak Erdem
    After upgrading to Lion 10.7.2 today, Apache virtual hosts are not working anymore. When I go to http://XYZ.localhost, it says "server not found". I am using Apache on my Mac OS X Lion box, and until today it was working fine. I can access http://localhost but I can't access http://XYZ.localhost.

    My /etc/hosts file is like below:

        127.0.0.1 XYZ.localhost

    My /etc/apache2/extra/httpd-vhosts.conf file is like below:

        <VirtualHost *:80>
          ServerName XYZ.localhost
          DocumentRoot /Library/WebServer/Documents/XYZ
          <Directory /Library/WebServer/Documents/XYZ>
            DirectoryIndex index.php
            AllowOverride All
            Order allow,deny
            Allow from all
          </Directory>
        </VirtualHost>

    I think I once had this problem too, after another OS X update, but I can't remember how I solved it. Is it a user permission issue? Or is there something wrong with Apache or any other setting?

    EDIT: It seems like my /etc/hosts file is not working correctly. Even if I add something like "127.0.0.1 apple.com", it still goes to the real apple.com. Maybe this might help to solve the problem.
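
    Given the EDIT, the hosts file being ignored entirely points at the resolver rather than Apache. Two hedged checks on Lion:

        # Ask Directory Services how the name actually resolves right now:
        dscacheutil -q host -a name apple.com

        # Flush the resolver cache (the Lion-era command):
        sudo killall -HUP mDNSResponder

    It is also worth confirming /etc/hosts is still plain text with Unix line endings and readable (ls -l /etc/hosts), since an upgrade or an editor can silently change that.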

    Read the article

  • ProxyPass for specific vhost with mod_rewrite

    - by Steve Robbins
    I have a web server that is set up to dynamically serve different document roots for different domains:

        <VirtualHost *:80>
          <IfModule mod_rewrite.c>
            # Stage sites :: www.[document root].server.company.com => /home/www/[document root]
            RewriteCond %{HTTP_HOST} ^www\.[^.]+\.server\.company\.com$
            RewriteRule ^(.+) %{HTTP_HOST}$1 [C]
            RewriteRule ^www\.([^.]+)\.server\.company\.com(.*) /home/www/$1/$2 [L]
          </IfModule>
        </VirtualHost>

    This makes it so that www.foo.server.company.com will serve the document root of server.company.com:/home/www/foo/. For one of these sites, I need to add a ProxyPass, but I only want it applied to that one site. I tried something like:

        <VirtualHost *:80>
          <Directory /home/www/foo>
            UseCanonicalName Off
            ProxyPreserveHost On
            ProxyRequests Off
            ProxyPass /services http://www-test.foo.com/services
            ProxyPassReverse /services http://www-test.foo.com/services
          </Directory>
        </VirtualHost>

    But then I get these errors:

        ProxyPreserveHost not allowed here
        ProxyPass|ProxyPassMatch can not have a path when defined in a location.

    How can I set up a ProxyPass for a single virtual host?
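
    Both errors follow from scope rules: ProxyPreserveHost is valid only at server or virtual-host level, and ProxyPass may take a path argument only outside <Location>/<Directory> sections. A sketch that gives the one site its own name-based VirtualHost, which Apache matches by ServerName ahead of the generic rewrite host:

        <VirtualHost *:80>
            ServerName www.foo.server.company.com
            DocumentRoot /home/www/foo
            ProxyPreserveHost On
            ProxyRequests Off
            ProxyPass /services http://www-test.foo.com/services
            ProxyPassReverse /services http://www-test.foo.com/services
        </VirtualHost>

    The generic rewrite VirtualHost keeps serving every other www.*.server.company.com name.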

    Read the article

  • Add folder name to beginning of filename

    - by shekhar
    I have a directory structure as below:

        Folder > SubFolder1 > FileName1.abc
                            > FileName2.abc
                            > ...
               > SubFolder2 > FileName11.abc
                            > FileName12.abc
                            > ...
               > ...

    I want to rename the files inside the subfolders as:

        SubFolder1_FileName1.abc
        SubFolder1_FileName2.abc
        SubFolder2_FileName11.abc
        SubFolder2_FileName12.abc

    i.e. add the folder name at the beginning of the file name, with the delimiter "_". The directory structure should remain unchanged. Note: the beginning of each file name is the same (File* in the example above). I made the script below:

        for /r "PATH" %%G in (.) do (
            pushd %%G
            for %%* in (.) do set MyDir=%%~n*
            FOR %%v IN (File*.*) DO REN %%v "%MyDir%_%%v"
            popd
        )

    The problem with the above script is that it takes only one subfolder name and puts it at the beginning of every file name, irrespective of the folder.
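
    That symptom is the classic batch expansion pitfall: %MyDir% inside a parenthesized block is expanded once, when the block is parsed, so every REN sees the same value. A hedged fix using delayed expansion (note the ! syntax and the quotes around the set):

        @echo off
        setlocal EnableDelayedExpansion
        for /r "PATH" %%G in (.) do (
            pushd "%%G"
            for %%* in (.) do set "MyDir=%%~n*"
            for %%v in (File*.*) do ren "%%v" "!MyDir!_%%v"
            popd
        )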

    Read the article

  • IIS 8 URL Redirect on site level

    - by jackncoke
    I am trying to do a simple 301 permanent redirect to another URL in IIS 8. The end result would be that if I navigated to domain2.com, I would end up on domain1.com. We are moving from IIS 6 to a new server and have approximately 600+ sites that will be configured on this IIS 8 box. All of these sites run a proprietary CMS and look at the same directory for source code.

    In IIS 6 I would just go to the Home Directory tab of each site, check the box that says "Permanent Redirect", and provide a URL. In IIS 8 there is "HTTP Redirect", which looks like it would do the trick, but it is being applied to all the sites in IIS 8, not at the site level like it used to be in IIS 6. I also looked into the URL Rewrite module for IIS 8, but it seems to take rules in the style of a firewall, and I am not sure I could effectively create rules that would cater to 600+ sites. I am looking for the easiest way to have redirects at the site level, so that customers with multiple domains can have their sites redirect to their main domain for SEO purposes. I feel like this was so easily achieved in IIS 6 that I must be overlooking something in the new version.
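
    The likely mechanism behind the "applies to everything" behavior (an inference from the shared-directory setup): the HTTP Redirect feature writes to web.config in the content directory, and all 600+ sites share that directory. Committing the setting to applicationHost.config instead scopes it to one site. A sketch with placeholder site name and target:

        %windir%\system32\inetsrv\appcmd.exe set config "domain2.com" ^
            -section:system.webServer/httpRedirect /enabled:"true" ^
            /destination:"http://domain1.com" ^
            /httpResponseStatus:"Permanent" /commit:apphost

    /commit:apphost stores the redirect in a <location path="domain2.com"> block, so the shared web.config is never touched.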

    Read the article

  • Supervisord doesn't stop nginx process

    - by Lennart Regebro
    I'm using Supervisor a lot, and in this project I have an nginx process managed by Supervisord. The relevant parts of the configuration are:

        [supervisord]
        logfile=/home/projects/eceee-web/prod/var/log/supervisord.log
        logfile_maxbytes=5MB
        logfile_backups=10
        loglevel=info
        pidfile=/home/projects/eceee-web/prod/var/supervisord.pid
        ; childlogdir=/home/projects/eceee-web/prod/var/log
        nodaemon=false ; (start in foreground if true;default false)
        minfds=1024 ; (min. avail startup file descriptors;default 1024)
        minprocs=200 ; (min. avail process descriptors;default 200)
        directory=/home/projects/eceee-web/prod

        [program:nginx]
        command = /home/projects/eceee-web/prod/bin/nginx
        redirect_stderr = true
        autostart= true
        autorestart = true
        directory = /home/projects/eceee-web/prod
        stdout_logfile = /home/projects/eceee-web/prod/var/log/nginx-stdout.log
        stderr_logfile = /home/projects/eceee-web/prod/var/log/nginx-stderr.log

    The /home/projects/eceee-web/prod/bin/nginx command will start nginx in the foreground; it does not daemonize itself. Still, stopping it will fail:

        supervisorctl stop nginx

    This gives no answer, and the process continues. Any idea why? This is on OS X Darwin, with Supervisor 3.0a9 and nginx 0.7.65.
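
    If bin/nginx is a generated wrapper that spawns the real nginx master (an assumption — common in buildout-style deployments like this one appears to be), supervisord's TERM may be going to a process that is not the master. A hedged variant that removes the wrapper from the equation and uses nginx's own graceful-stop signal:

        [program:nginx]
        command = /usr/sbin/nginx -c /home/projects/eceee-web/prod/etc/nginx.conf -g "daemon off;"
        stopsignal = QUIT

    The binary and config paths are placeholders; the point is to launch the nginx executable directly with daemon off, so the pid supervisord tracks is the master it signals.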

    Read the article

  • Create "raw disk file" from WIM file

    - by Joe Baltimore
    First timer here. I've searched around but haven't found a question like the one I have; apologies if I missed it. The challenge at hand: produce a "raw disk image file" from a given WIM file.

    What I am pursuing so far is to use imagex.exe with the /apply operation to take the WIM and lay it down in a directory on a server. That seems to produce all the necessary "stuff" I need in that directory:

        imagex.exe /apply image.wim 1 R:\WimImagePoint

    How would I take that content and produce a "raw disk image file"? I'm told the definition of "raw disk image file" is a block-by-block copy of the disk, which I hope is the output of the imagex.exe /apply command I use currently, but stored in a single file I can hand back to another system in our solution. I would like to take the contents of R:\WimImagePoint and produce the elusive (to me) "raw disk image file". ISO is not what they want, nor is anything requiring WinPE.

    Any pointers? References to external utilities are welcome. I would like to avoid unmanaged-code solutions as much as possible, but will entertain them if that's the only route. Also, I am not married to the idea of imagex /apply as the starting point; it's just the comfort zone so far.
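
    One route that stays close to the current comfort zone (a sketch, not a tested recipe): /apply extracts files rather than blocks, so the missing piece is a mountable block device to apply into. A fixed-size VHD is a raw sector image with a 512-byte footer appended, so applying into a VHD and trimming the footer yields a raw disk file:

        rem createvhd.txt -- diskpart script (sizes, paths, and letters illustrative)
        rem   create vdisk file=C:\images\out.vhd maximum=20480 type=fixed
        rem   attach vdisk
        rem   create partition primary
        rem   format fs=ntfs quick
        rem   assign letter=V

        diskpart /s createvhd.txt
        imagex.exe /apply image.wim 1 V:\

    After detaching, out.vhd minus its final 512 bytes is the raw image; whether a boot sector must also be written depends on how the consuming system uses the file.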

    Read the article

  • Why won't IIS serve my website? - 404 Page Not Found

    - by Giffyguy
    Built a brand new server, with a fresh copy of Windows Server 2003 Enterprise x86 Edition:

        - Installed the .NET Framework 1.1, 2.0, 3.5, and 4.0
        - Added the "Domain Controller" and "Application Server" roles
        - Created a new website, pointed it to a local directory: C:\Inetpub\angryoctopus.net\
        - Added the appropriate headers: angryoctopus.net, www.angryoctopus.net, TCP port 80, all IPs
        - Moved the website content into the local directory
        - Configured the default document in IIS: Default.aspx
        - Enabled ASP.NET for this website, and set it to the correct version: 2.0.50727
        - Configured the zone angryoctopus.net in DNS, and tested the lookup to ensure DNS was functional
        - Opened the website in VS 2008 and rebuilt (and debugged) to ensure the content was functional

    I can clearly see that IIS is responding normally by browsing directly to my server's IP address. Since this does not use the angryoctopus host header, the default website is displayed instead: the "Under Construction" page. And yet, after all of this, angryoctopus.net still returns 404. Does anybody know what could be wrong? What troubleshooting steps have I forgotten? Is there a command-line diagnostic that might provide more information?
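
    Two hedged diagnostics for exactly this setup: check the site's own log to see whether requests arrive and with what substatus, and verify the ASP.NET Web Service Extension, since on Server 2003 a prohibited extension returns a plain 404 even when the file exists:

        rem Substatus codes live in the site log (the site ID varies):
        type C:\WINDOWS\system32\LogFiles\W3SVC1\ex*.log

        rem If ASP.NET v2.0.50727 shows as Prohibited, allow it:
        rem IIS Manager -> Web Service Extensions -> ASP.NET v2.0.50727 -> Allow

    A 404 with substatus 2 (404.2) in the log would confirm the Web Service Extension lockdown rather than a missing file.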

    Read the article

  • Lost Permission on Files using wrong chmod syntax Centos 5.5

    - by alloutfallout
    Hello, I was trying to remove write permissions on an entire directory, and I used the incorrect command:

        chmod 644 -r sites/default

    I meant to type:

        chmod -R 644 sites/default

    The result was this:

        chmod: cannot access `644': No such file or directory
        $ ls -als sites
        total 24
        4 drwxr-xr-x  5 user group 4096 Jan 11 10:54 .
        4 drwxrwxr-x 14 user group 4096 Jan 11 10:11 ..
        4 drwxr-xr-x  4 user group 4096 Jan  5 01:25 all
        4 d-w-------  3 user group 4096 Jan 11 10:43 default
        4 -rw-r--r--  1 user group 1849 Apr 15  2010 example.sites.php

    I fixed the permissions on the default folder with:

        $ chmod 644 sites/default

    But the following ls shows all the files with red backgrounds and question marks, and I can't access any files unless I am root:

        $ ls -als sites/default
        total 0
        ? ?--------- ? ? ? ? ? .
        ? ?--------- ? ? ? ? ? ..
        ? ?--------- ? ? ? ? ? default.settings.php
        ? ?--------- ? ? ? ? ? files
        ? ?--------- ? ? ? ? ? settings.php

    When I log in as root, I can edit all of the files, and their permissions appear correctly. I do not know how to undo the damage caused by using -r with chmod instead of -R. Any suggestions?
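
    What happened: chmod parsed -r as the symbolic mode "remove read" (which is why 644 was then treated as a file operand and triggered the cannot-access error), and the follow-up chmod 644 stripped the directory's execute (search) bit — a directory without x cannot be entered or stat'ed, which is exactly the ?--------- display. The standard repair gives directories 755 and files 644 (a sketch, run as root):

        find sites/default -type d -exec chmod 755 {} +
        find sites/default -type f -exec chmod 644 {} +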

    Read the article

  • RAM question in VMware Server 2

    - by ToreTrygg
    Hi, I understand from the VMware Server 2 documentation that it is capable of running a 64-bit guest OS underneath a 32-bit host OS, as long as the hardware in the box is 64-bit capable.

    Here's my situation. We currently have an underutilized Xeon X3220 quad-core 64-bit server, running Server 2003 32-bit with 2 GB of RAM (the motherboard is capable of 8 GB). The server is currently used mainly for file and print services. It is also running Active Directory, Novell eDirectory, and GroupWise 6.5. We are planning a migration to Microsoft Exchange, so the Novell eDirectory and GroupWise services will eventually be purged from this box, leaving only Active Directory and file and print services. Since this server is underutilized, we are hoping to save hardware costs and virtualize our new Exchange investment.

    My question is this: will VMware allow access to the "invisible" extra memory that 32-bit Windows won't see? Meaning, if we increase the system RAM to the full 8 GB (yes, I know the 32-bit host OS will only see a maximum of 4 GB), will I be able to assign maybe 5 GB to the new Server 2008 64-bit guest running Exchange and leave 3 GB for the host OS (or maybe even a 6/2 split)?

    The second part: would it be better to convert the main OS currently running to an image, convert the machine itself to ESXi, and run both OSes as images under ESXi? Downtime for this box is critical, so my preference is definitely the first option because it presents very minimal downtime. Doing the second would mean quite a few hours of downtime to image the machine and then convert the image to a VMware image.

    Read the article

  • Tomcat Custom MBean

    - by Darran
    Does anyone know how to deploy a custom MBean to Tomcat? So far I've found this: http://www.junlu.com/list/3/8871.html. I copied the jar with my MBean to Tomcat's lib directory, so the common class loader should pick it up. I then followed the instructions, but I kept getting the exception below. My MBean definitely has a public constructor. If I remove the jar from the Tomcat lib directory, I get the same message, which suggests it is not picking up my jar, or my jar is being loaded after the Apache MBean modeler runs in Tomcat.

        06-Aug-2010 12:14:23 org.apache.tomcat.util.modeler.modules.MbeansSource execute
        SEVERE: Error creating mbean Bean:type=Bean
        javax.management.NotCompliantMBeanException: MBean class must have public constructor
            at com.sun.jmx.mbeanserver.Introspector.testCreation(Introspector.java:127)
            at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.createMBean(DefaultMBeanServerInterceptor.java:2
            at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.createMBean(DefaultMBeanServerInterceptor.java:1
            at com.sun.jmx.mbeanserver.JmxMBeanServer.createMBean(JmxMBeanServer.java:393)
            at org.apache.tomcat.util.modeler.modules.MbeansSource.execute(MbeansSource.java:207)
            at org.apache.tomcat.util.modeler.modules.MbeansSource.load(MbeansSource.java:137)
            at org.apache.catalina.core.StandardEngine.readEngineMbeans(StandardEngine.java:517)
            at org.apache.catalina.core.StandardEngine.init(StandardEngine.java:321)
            at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:411)
            at org.apache.catalina.core.StandardService.start(StandardService.java:519)
            at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
            at org.apache.catalina.startup.Catalina.start(Catalina.java:581)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)

    (Two of the stack-trace lines are truncated as pasted.)
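
    For reference, the shape the JMX introspector expects for a Standard MBean — a public concrete class with a public constructor, implementing a same-package interface named <ClassName>MBean (names here are invented for illustration):

        // BeanMBean.java
        public interface BeanMBean {
            int getCount();
        }

        // Bean.java
        public class Bean implements BeanMBean {
            public Bean() { }                    // public no-arg constructor
            public int getCount() { return 0; }
        }

    Since the error is identical with the jar removed, it is also worth confirming the class is actually reachable — e.g. that jar tf yourbean.jar shows the class at the package path named in the MBean descriptor.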

    Read the article

  • Problems compiling coreutils-8.5 on Solaris 5.10 on Intel platform

    - by PP
    I am having trouble compiling coreutils-8.5 on Solaris 5.10 on the Intel platform using cc. Firstly, I had the following error during ./configure:

        checking whether <wchar.h> uses 'inline' correctly... no
        configure: error: <wchar.h> cannot be used with this compiler (/tool/sunstudio12.1/bin/cc -xc99=all -g -D_REENTRANT).

    This seemed similar to the problem in this question. The solution was to edit configure and replace the reference to -xc99=all with -xc99=all,no_lib. This permitted the configure to complete. Then I ran /usr/sfw/bin/gmake and it progressed until I received the following message:

        Making all in src
        gmake[2]: Entering directory `/home/peterp/src/coreutils-8.5/src'
        gmake all-am
        gmake[3]: Entering directory `/home/peterp/src/coreutils-8.5/src'
        CCLD   chroot
        Undefined            first referenced
         symbol                  in file
        eaccess                  ../lib/libcoreutils.a(euidaccess.o)
        ld: fatal: Symbol referencing errors. No output written to chroot

    What could cause this problem? PS: I was only compiling coreutils because I wanted colour ls.

    Read the article

  • How to make rewrite rules relative to the .htaccess file

    - by Kendall Hopkins
    Currently I have an .htaccess file like this:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f [OR]
        RewriteCond %{REQUEST_URI} ^/(always|rewrite|these|dirs)/ [NC]
        RewriteRule ^(.*)$ router.php [L,QSA]

    It works great when the site files are in the document root of the webserver (i.e. domain.com/abc.php -> /abc.php). But in our current setup (which isn't changeable), this isn't ensured. We can sometimes have an arbitrary folder in between the document root and the folder of the .htaccess file (i.e. domain.com/something/abc.php -> /something/abc.php). The only problem is that the second RewriteCond no longer works then. Is there any way to match the accessed path relative to the .htaccess file's directory?

    For example, if I have a site where domain.com/rewrite/ is the directory of the .htaccess file:

        NOT FORCED TO REWRITE -> domain.com/rewrite/index.php
        FORCED TO REWRITE     -> domain.com/rewrite/rewrite/index.php

    If I have a site where domain.com/ is the directory of the .htaccess file:

        NOT FORCED TO REWRITE -> domain.com/index.php
        FORCED TO REWRITE     -> domain.com/rewrite/index.php
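
    In per-directory (.htaccess) context, mod_rewrite strips the directory prefix before matching RewriteRule patterns, so the rule pattern itself is already relative to the .htaccess location — only %{REQUEST_URI} is absolute. A sketch that moves the directory test into a rule pattern (same intent as above, under that assumption):

        RewriteEngine On
        # Requests under these directories are always routed:
        RewriteRule ^(always|rewrite|these|dirs)/ router.php [L,QSA,NC]
        # Everything else is routed only when no real file matches:
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ router.php [L,QSA]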

    Read the article

  • localhost/127.0.0.1 not working, "Unable to connect"

    - by redconservatory
    I am running some pretty basic PHP sites on Snow Leopard. Usually I just go to my browser and type anything like:

        localhost
        http://localhost
        127.0.0.1
        mycomputername.local

    But suddenly, after installing a gem (compass), none of this is working. I tried:

        sudo apachectl restart

    thinking that I just needed to restart Apache, but no luck. My error log looks like:

        [Mon Mar 26 09:39:08 2012] [warn] child process 45443 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45223 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45043 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45438 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45049 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45439 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45224 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45440 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45441 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45442 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45443 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:11 2012] [notice] caught SIGTERM, shutting down

    I also tried:

        sudo apachectl -k start

    and I got the error:

        Syntax error on line 182 of /private/etc/apache2/httpd.conf: Illegal option

    When I look at the code around that line, I see:

        <Directory />
            Options Indexes MultiViews + FollowSymLinks
            AllowOverride All
            Order allow, deny
            Allow from all
        </Directory>
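
    That <Directory> block has two syntax problems: "+ FollowSymLinks" has a space after the +, and Apache in any case rejects mixing prefixed (+/-) options with unprefixed ones; "Order allow, deny" also must not contain a space. A corrected block with the same intent:

        <Directory />
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>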

    Read the article

  • What could cause random files being downloaded without permission?

    - by Dustin
    I have been having issues lately with a certain directory. It seems someone is placing files into it, or something of that sort, and any attempt to delete them is successful; HOWEVER, they reappear over time (maybe not the exact same ones, but random files). I will provide the information I can and several pictures of my problem:

        sandbox.mys4l.com/visual/files/b1.jpg
            Files like this have been appearing in my /visual/ folder, and I have no clue where they are coming from.
        sandbox.mys4l.com/visual/files/b2.jpg
            This is what is inside one of those weird files; it appears to be nothing problematic.
        sandbox.mys4l.com/visual/files/b4.jpg
            As you can see, in the time it took me to take the first picture, more odd files showed up. These .log files are also being uploaded to this directory, and I know I didn't put them there.
        sandbox.mys4l.com/visual/files/b7.jpg
            This is inside one of these mysterious .log files; I'm not sure what it's about.

    These files only appear to be going into this specific area, and I'm not sure of their origin, only that they will not go away. I have done a full system scan at least twice with an up-to-date virus scanner, and have looked for an unknown script which may be writing them there. Nothing has come up, so I come to you guys as I hear this is the best place to find answers. I hope this problem has a solution!
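
    If this is a Linux host with inotify-tools available (an assumption about the environment), watching the directory records exactly which files appear and when, which can then be correlated with web-server access logs to find the uploading script:

        inotifywait -m -e create,moved_to --timefmt '%F %T' --format '%T %w%f %e' /path/to/visual/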

    Read the article

  • CentOS inode usage

    - by MSTF
    We are using a CentOS & cPanel server, but we have an important problem with inode usage. "df -i" shows the / filesystem using 6.5 million inodes (100%!), yet when I count the files in /, I find only a few thousand:

        df -i
        Filesystem            Inodes   IUsed    IFree IUse% Mounted on
        /dev/sda4            6578176 6567525    10651  100% /
        tmpfs                8238094       1  8238093    1% /dev/shm
        /dev/sdi1           61054976     169 61054807    1% /backup
        /dev/sda1              51296      38    51258    1% /boot
        /dev/sda2                  0       0        0     - /boot/efi
        /dev/sdc1            7290880    1252  7289628    1% /database
        /dev/sdb2            4096000   53258  4042742    2% /home
        /dev/sdd1            7290880    3500  7287380    1% /home2
        /dev/sde1            7290880   68909  7221971    1% /home3
        /dev/sdg1            7290880   68812  7222068    1% /home5
        /dev/sdh1            7290880  695076  6595804   10% /home6
        /dev/sdf1            7290880   58658  7232222    1% /tmp

        df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda4              99G   30G   65G  32% /
        tmpfs                  32G     0   32G   0% /dev/shm
        /dev/sdi1             917G  270G  601G  32% /backup
        /dev/sda1             788M   80M  669M  11% /boot
        /dev/sda2             400M  296K  400M   1% /boot/efi
        /dev/sdc1             110G  1.5G  103G   2% /database
        /dev/sdb2              62G  1.1G   58G   2% /home
        /dev/sdd1             110G   79G   26G  76% /home2
        /dev/sde1             110G  3.9G  101G   4% /home3
        /dev/sdg1             110G   51G   54G  49% /home5
        /dev/sdh1             110G   64G   41G  62% /home6
        /dev/sdf1             110G  611M  104G   1% /tmp

    The sda disk has just the operating system and cPanel. There are no accounts, databases, or tmp directories on it. Why is sda using so many inodes? Note: all disks are 120 GB SSDs. Thanks.
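
    A sketch for locating the inode consumers: count files per containing directory, staying on the root filesystem only (-xdev), and show the worst offenders:

        find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head -20

    On cPanel systems, mail queues, session directories, and places like /var/spool or /var/cpanel are common suspects (which one applies here is an assumption); a simple file count misses them unless it is run recursively across the whole filesystem.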

    Read the article

  • VirtualHost on WAMPSERVER not working

    - by Martin C
    I currently have WampServer 2.2 set up on my PC. I'm trying to set up a new host called pplocal.local. I made the change in httpd.conf to uncomment this:

        Include conf/extra/httpd-vhosts.conf

    Then I edited httpd-vhosts.conf and added the following:

        NameVirtualHost 127.0.0.1
        <VirtualHost 127.0.0.1>
            DocumentRoot "E:/wamp2/www/"
            ServerName localhost
        </VirtualHost>
        <VirtualHost 127.0.0.1>
            DocumentRoot "E:/wamp2/www/pp/"
            ServerName pplocal.local
            <Directory "E:/wamp2/www/pp/">
                Options Indexes FollowSymLinks MultiViews
                AllowOverride all
                Order Deny,Allow
                Deny from all
                Allow from 127.0.0.1
            </Directory>
            CustomLog "E:\wamp2\logs\pplocal-access.log" common
            ErrorLog "E:\wamp2\logs\pplocal-error.log"
        </VirtualHost>

    In my Windows hosts file I added:

        127.0.0.1 localhost
        127.0.0.1 pplocal.local

    Then I restarted Apache. If I type localhost in my browser, I get the files at E:/wamp2/www/. If I type pplocal.local in my browser, I get the files at E:/wamp2/www/ instead of those at E:/wamp2/www/pp/. I have followed several tutorials and can't see what I'm doing wrong. I'm new to editing the files associated with Apache, so any advice is appreciated. Thanks.
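
    Name-based matching only engages when the NameVirtualHost argument and every <VirtualHost> opening agree exactly, port included, and when nothing else (such as WampServer's default httpd.conf) already defines a conflicting *:80 set; otherwise every request falls through to the first vhost, which is the behavior described. The conventional fix is to use *:80 everywhere (a sketch of the skeleton only):

        NameVirtualHost *:80

        <VirtualHost *:80>
            DocumentRoot "E:/wamp2/www/"
            ServerName localhost
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot "E:/wamp2/www/pp/"
            ServerName pplocal.local
            ...
        </VirtualHost>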

    Read the article
