Search Results


  • How do I share a PHP 5.4 server between OS X and VMWare?

    - by Ben
    I'm running PHP 5.4 on OS X, which lets me spin up a built-in web server for any directory using this Terminal command:

        php -S localhost:8000

    This sets up http://localhost:8000, which works great, but what I would like to do is share this server with the instance of Windows that I have running through VMware in order to test in Internet Explorer. Is this possible, and if it is, how do I go about setting it up? Currently, trying to visit http://localhost:8000 gives me 'This page cannot be displayed'. I'd really appreciate any help you can give me on this, as I don't have much experience with virtual machines/networking. Thanks in advance.
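
    A minimal sketch of one likely fix: localhost binds the server to the Mac's loopback interface only, which the VMware guest cannot reach, so bind to all interfaces and browse from Windows using the host's IP (the address below is hypothetical):

        # Bind the built-in server to every interface, not just loopback:
        php -S 0.0.0.0:8000

        # Find the Mac's LAN/NAT address to use from the Windows guest:
        ifconfig en0
        # ...then browse to e.g. http://192.168.1.10:8000 from Internet Explorer.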

    Read the article

  • What You Said: How You Set Up a Novice-Proof Computer

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your tips and tricks for setting up a novice-proof computer; read on to see how your fellow readers ensure friends and relatives have a well-protected computer. If you only take a single bit of advice from your fellow readers, let it be the importance of separate, non-administrative user accounts. Grant writes: I have two boys, now 8 and 10, who have been using the computer since age 2. I set them up on Linux (Debian first, now Ubuntu) with a limited-rights account. They can only make a mess of their own area. Worst case, empty their home directory and let them start over. I have to install software for them, but they can't break the machine without causing physical damage (hammers, water, etc.). My wife was on Windows, and I was on Debian, and before they had their own, they knew they could only use my computer, and only logged in as themselves. All accounts were password protected, so that was easy to enforce.
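
    A minimal sketch of Grant's approach on Ubuntu (the account name is hypothetical): a user created this way is a standard account with no sudo rights, so any breakage stays inside their own home directory.

        # Create a standard (non-administrator) account:
        sudo adduser kidaccount

        # Confirm it is not in the sudo (admin) group:
        groups kidaccount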

    Read the article

  • YUM and RPM crash due to the liblua-5.1 library being missing

    - by A troubled linux newbie.
    I've been playing around with a LiveUSB install of basic Fedora with persistence. I attempted to install moonscript, which requires Lua and LuaRocks. After installing Lua and discovering flaws in the install that prevented LuaRocks from working, I used rpm to force Lua off so I could re-install it with yum. The result was an error of this sort from both rpm and yum:

        There was a problem importing one of the Python modules required to run yum. The error leading to this problem was: liblua-5.1.so: cannot open shared object file: No such file or directory

    I've concluded from this that my Lua install provided a library which both yum and rpm now depend on. Is there any way to fix this without reformatting my drive and installing everything from scratch?
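
    One hedged recovery sketch: since rpm and yum are both unusable, fetch the stock lua package from a mirror and unpack just the shared library with rpm2cpio, which depends on neither tool's Python bindings. The mirror URL, package version and library path below are hypothetical; match them to your Fedora release and architecture.

        curl -O http://dl.fedoraproject.org/pub/fedora/linux/releases/17/Everything/x86_64/os/Packages/l/lua-5.1.4-12.fc17.x86_64.rpm
        rpm2cpio lua-5.1.4-12.fc17.x86_64.rpm | cpio -idmv './usr/lib64/liblua-5.1.so'
        sudo cp usr/lib64/liblua-5.1.so /usr/lib64/
        sudo ldconfig    # refresh the shared-library cache, then retry yum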

    Read the article

  • Expression Web FTP: stuck at "Listing subsites"

    - by FrankPython
    When I try to use the Expression Web 4 built-in FTP, I see the message "Listing subsites in..." and soon afterward "passive ftp not available". If I switch to active, I get "active ftp not available". There are no subsites; it is a simple directory with one HTML page. The backend is a normal IIS 6 server. FTP to the same IP with other FTP clients works fine! Any idea whether Expression Web has some specific requirements? It is our own dedicated server. (Please, no tips to use another tool; for this specific project Expression Web is a requirement.)
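
    A hedged guess, assuming the failure is blocked passive-mode data connections rather than Expression Web itself: IIS 6 FTP can be given an explicit passive port range (the range below is the example from Microsoft's documentation), which then has to be opened on the server's firewall.

        cd %systemroot%\Inetpub\AdminScripts
        cscript adsutil.vbs set /MSFTPSVC/PassivePortRange "5500-5600"
        net stop msftpsvc && net start msftpsvc
        rem Then allow inbound TCP 5500-5600 in the firewall and retry passive FTP.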

    Read the article

  • CentOS cron job running PHP file

    - by user50946
    I have a PHP file (test.php) set to run on the 5th minute of every hour. Whenever I run the file manually (by visiting its path in a web browser) it works fine, but when the cron job tries to run it I get an error message. My cron job is:

        #### Delete Records
        5 * * * * /var/www/html/phpsysinfo/cronUpdateLeadBucketOnEnergycAlliance.php

    My PHP file is (path: /var/www/html/phpsysinfo/phpfile):

        <?php
        require("dbconnect.php");
        $sql = mysql_query("DELETE FROM list where status <> 'LEAD'") or die(mysql_error());
        ?>

    And the error that I get is:

        /var/www/html/phpsysinfo/phpFile.php: line 1: ?php: No such file or directory
        /var/www/html/phpsysinfo/phpFile.php: line 2: syntax error near unexpected token `"dbconnect.php"'
        /var/www/html/phpsysinfo/phpFile.php: line 2: `require("dbconnect.php");

    Thanks
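
    Those errors are the shell trying to execute PHP source as a shell script. A sketch of the two usual fixes (the interpreter path is an assumption; check it with `which php`):

        # 1) Invoke the PHP CLI explicitly in the crontab:
        5 * * * * /usr/bin/php /var/www/html/phpsysinfo/cronUpdateLeadBucketOnEnergycAlliance.php

        # 2) Or give the script a shebang as its very first line:
        #        #!/usr/bin/php
        #    and make it executable:
        chmod +x /var/www/html/phpsysinfo/cronUpdateLeadBucketOnEnergycAlliance.php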

    Read the article

  • Secure against c99 and similar shells

    - by Amit Sonnenschein
    I'm trying to secure my server as much as I can without limiting my options, so as a first step I've disabled dangerous PHP functions:

        disable_functions = "apache_child_terminate, apache_setenv, define_syslog_variables, escapeshellarg, escapeshellcmd, eval, exec, fp, fput, ftp_connect, ftp_exec, ftp_get, ftp_login, ftp_nb_fput, ftp_put, ftp_raw, ftp_rawlist, highlight_file, ini_alter, ini_get_all, ini_restore, inject_code, mysql_pconnect, openlog, passthru, php_uname, phpAds_remoteInfo, phpAds_XmlRpc, phpAds_xmlrpcDecode, phpAds_xmlrpcEncode, popen, posix_getpwuid, posix_kill, posix_mkfifo, posix_setpgid, posix_setsid, posix_setuid, posix_setuid, posix_uname, proc_close, proc_get_status, proc_nice, proc_open, proc_terminate, shell_exec, syslog, system, xmlrpc_entity_decode"

    But I'm still fighting directory traversal: using a PHP shell script like c99, I can travel from my /home directory to anywhere on the disk. I can't seem to be able to limit it. How can I limit it once and for all?
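
    A minimal sketch of the usual containment for this, assuming Apache with mod_php: open_basedir restricts every PHP file operation to the listed trees, so a c99-style shell in one account cannot read outside them. The paths are examples; set them per virtual host.

        <VirtualHost *:80>
            DocumentRoot /home/username/public_html
            php_admin_value open_basedir "/home/username/public_html:/tmp"
        </VirtualHost>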

    Read the article

  • Error 404 when accessing newly created ASP.NET website on IIS 7.0

    - by Wodzu
    I've created an ASP.NET website and published it to a file from Visual Studio. Then I copied my folder to the inetpub\wwwroot directory. Next, under IIS, I converted this folder to an application. Unfortunately, when I try to access it as http://localhost/myappname I get a 404 error. I was thinking that maybe IIS is not configured to process aspx files, but at http://localhost/default.aspx there is a working SharePoint website. Have you got any ideas where the problem might be?
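
    Two hedged checks from an elevated command prompt. First, make sure ASP.NET is registered with IIS (the path below is for 64-bit .NET 4; use Framework instead of Framework64 for 32-bit); second, if this is an MVC app, confirm its application pool runs in Integrated pipeline mode, since extensionless MVC routes return 404 under Classic mode.

        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i
        %windir%\system32\inetsrv\appcmd.exe list apppool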

    Read the article

  • What to do when 'dpkg --configure -a' fails with too many errors?

    - by rudivonstaden
    During an upgrade from lucid (10.04) to precise (12.04), the X session froze, and I have been trying to recover the upgrade to get a stable system. I have performed the following steps:

    - Used ssh to log in to the stalled system over the network.
    - Checked the contents of the /var/log/dist-upgrade directory. There was no activity on main.log, apt.log or term.log.
    - top showed that process 'precise' was using about 3% CPU, but I could find no evidence that the upgrade process was still doing anything. 'dpkg' did not show up in top, but it came up with pgrep dpkg | xargs ps
    - Killed the 'dpkg' and 'precise' processes.
    - Tried to recover the upgrade by running sudo fuser -vki /var/lib/dpkg/lock; sudo dpkg --configure -a. This was partially successful (some packages were configured), but failed with the message "Processing was halted because there were too many errors." I ran the same command a few times, and each time some packages were configured but others failed.
    - Tried running sudo apt-get -f install. It fails with similar errors to dpkg.

    The current situation is that dpkg --configure -a and sudo apt-get -f install fail with two kinds of error. Dependency issues, e.g.:

        dpkg: dependency problems prevent configuration of cifs-utils:
         cifs-utils depends on samba-common; however:
          Package samba-common is not configured yet.
        dpkg: error processing cifs-utils (--configure):
         dependency problems - leaving unconfigured

    And a resource conflict, e.g.:

        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable

    Additionally, the output hints at potential boot problems, so I'm not keen to reboot without fixing the install first:

        dpkg: too many errors, stopping
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic
        cryptsetup: WARNING: failed to detect canonical device of /dev/sda1
        cryptsetup: WARNING: could not determine root device from /etc/fstab

    So my question is: how do I get a working install when dpkg --configure -a fails?
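
    A hedged recovery sketch: the debconf lock message suggests a leftover debconf/frontend process still holds config.dat, and the dependency errors often clear if the configure pass is repeated once the lock is gone.

        # Kill whatever still holds the debconf database lock:
        sudo fuser -vk /var/cache/debconf/config.dat

        # Loop these until they stop making progress; each pass configures more packages:
        sudo dpkg --configure -a
        sudo apt-get -f install

        # If a pass stalls on a dependency cycle, configure the blocking package
        # by name (samba-common in the example above):
        sudo dpkg --configure samba-common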

    Read the article

  • How did we get saddled with the (hierarchical) filesystem as the basic data structure?

    - by user1936
    I'm self-taught and I don't have a CS degree. The more I learn about data structures, the more I wonder, in this day and age, how we are still saddled with the filesystem, with directories and files, as the basic data storage structure on the OS. I understand the simplicity of it, but it seems nowadays that there could be more options available natively. As far as I'm aware, the only project to improve the basic functionality of the filesystem was ReiserFS, where you could tell what line of a file was changed by whom, and when. For instance, if I could have native tagging for files, where I could tag images, diagrams, word-processing documents, an entire code repository, all as belonging to a single project, that would really be helpful to me. Since I'm stuck in the filesystem paradigm, I know I could put all those into a single folder/directory, but what if they already exist in disparate directories and need to stay there? I know there are programs out there that can do this, but why aren't they on the filesystem? Something that would be nice to have is some kind of relational feature, like you get with an RDBMS. I understand that was supposed to be part of Vista/7, but that fell off the feature list too. Sure, any program can store a binary file with any data structure it wants in it, but why couldn't the OS offer more complex ways of storing data, beyond the simple hierarchy of the filesystem?
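
    A hedged aside: a crude form of native tagging does exist today via extended attributes, though nothing indexes them by default and the filesystem must support xattrs. A sketch on Linux (the attribute name and value are arbitrary):

        setfattr -n user.project -v "alpha" diagram.png    # tag a file
        getfattr -n user.project diagram.png               # read the tag back

        # Finding everything with a given tag still means a full scan:
        find . -type f -exec sh -c \
            'getfattr -n user.project --only-values "$1" 2>/dev/null | grep -q alpha && echo "$1"' _ {} \;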

    Read the article

  • How to upgrade to Windows 8.1 on a machine with a Users folder on a separate drive?

    - by ahsteele
    I tried to upgrade from Windows 8 to Windows 8.1. Unfortunately, during the upgrade process I received the following error:

        Sorry, it looks like this PC can't run Windows 8.1. This might be because the Users or Program Files folder is being redirected to another partition.

    The message is accurate in that I have my Users directory on my D: drive and Windows installed on my C: drive. I do this because my C: drive is an SSD and my D: drive is a spinning-rust drive where I keep my data. Is it possible to upgrade to Windows 8.1 from a Windows 8 install with a redirected Users folder? I do not consider a full reinstall of Windows 8 with a non-mapped Users folder, followed by upgrading that installation, to be "upgrading."
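
    A hedged starting point: the redirection the installer objects to is usually the ProfilesDirectory value in the registry. From an admin prompt you can at least confirm where it points and back the key up; one commonly reported (at-your-own-risk) workaround is to point it back at %SystemDrive%\Users just for the upgrade, then restore it.

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /v ProfilesDirectory
        reg export "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" profilelist-backup.reg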

    Read the article

  • How to deny access to disabled AD accounts via kerberos in pam_krb5?

    - by Phil
    I have a working AD/Linux/LDAP/KRB5 directory and authentication setup, with one small problem. When an account is disabled, SSH public-key authentication still allows user login. It's clear that Kerberos clients can identify a disabled account, as kinit and kpasswd return "Clients credentials have been revoked" with no further password interaction. Can PAM be configured (with "UsePAM yes" in sshd_config) to disallow logins for disabled accounts, where authentication is done by public key? This doesn't seem to work:

        account [default=bad success=ok user_unknown=ignore] pam_krb5.so

    Please don't introduce winbind in your answer; we don't use it.

    Read the article

  • I just recursively chmod'd everything under / to 750. Any tips?

    - by Ouairz
    I won't be the first and I won't be the last, I suppose. While playing around with the find command, I made a whoops: it would appear that instead of changing the permissions of the ~/web directory to 750, it changed the permissions of the entire filesystem (/) to 750. I'm not certain, though; any attempt to investigate is thwarted by Permission denied messages. For everything. This was the offending command:

        sudo find ~/web . -type d -exec chmod 750 {} \;

    (note the stray . argument, which made find walk the current directory as well). If I'm not mistaken, the Ubuntu team disabled root logins as a safety precaution, so I'm out of ideas. I'm (obviously) a total newbie when it comes to file permissions, so I was wondering if anyone had some good, or even some bad, advice to share. I've mentally prepped myself for losing everything on the computer, which is only of mild consequence since I have backups, but I did do a bit of work on this box over the week and it would be a shame to lose it all to a boneheaded mistake. If you are reading this message, ask yourself: have you backed up any of your work recently? Thanks in advance for any insights. Feel free to scold me for using sudo carelessly.
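
    A hedged recovery sketch: boot from a live CD/USB (root works there), mount the damaged disk, and restore the permissions that matter most so sudo and logins work again; a complete fix generally means reinstalling packages or copying modes from an identical system. The device name is hypothetical.

        sudo mount /dev/sda1 /mnt
        sudo chmod 755 /mnt /mnt/etc /mnt/bin /mnt/usr /mnt/usr/bin
        sudo chmod 4755 /mnt/usr/bin/sudo    # sudo must be setuid root
        sudo chmod 1777 /mnt/tmp             # /tmp needs the sticky bit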

    Read the article

  • Batch edit (not rename) file properties in Windows

    - by Jay
    I have a large directory of downloaded shareware. I keep track of what I have by individually editing the properties of each program. However, some of the programs are multi-part .rar types, and I have at least a few hundred programs so far. I am looking for a utility that will let me batch-edit file properties such as Title, Author, Summary, and Comments, so I don't have to edit each file or file part individually. Windows doesn't let me do this in Explorer. PowerDesk has a proprietary system, but it isn't preserved when moving or copying files. Any suggestions?

    Read the article

  • Add entire 300 GB filesystem to Git Annex repository?

    - by Ryan Lester
    By default, I get an error from the process that I have too many open files. If I lift the limit manually, I get an error that I'm out of memory. For whatever reason, it seems that Git Annex in its current state is not optimised for this sort of task (adding thousands of files to a repository at once). As a possible solution, my next thought was to do something like:

        cd /
        find . -type d | git annex add --$NONRECURSIVELY
        find . -type f | git annex add
        # Need to add parent directories of each file first or adding files fails

    The problem with this solution is that, from the documentation, there doesn't seem to be a way to non-recursively add a directory in Git Annex. Is there something I'm missing, or a workaround for this? If my proposed solution is a dead end, how have other people solved this problem?
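
    A hedged workaround: raise the descriptor limit for the session and feed files to git-annex in bounded batches, so no single invocation touches thousands of files at once. Whether parent directories really must be added first is worth re-testing; batch adds of plain file paths generally work.

        ulimit -n 4096
        cd /
        find . -type f -print0 | xargs -0 -n 500 git annex add
        git commit -m "add filesystem contents"    # annex stages; commit at the end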

    Read the article

  • Using slapcat to backup LDAP

    - by rsw
    I'm running an OpenLDAP directory on a Debian server, using the hdb backend. I've been wondering about backups and did some reading on the net. Slapcat seems to be the way to go, but I keep seeing posts saying it is dangerous to use while slapd is running. In what way is this dangerous? I'm planning to run these backups during the night, and no writing will be done to the database then, though reads will probably occur. If there's another backup solution better suited to this, I'd gladly hear about it.
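
    For what it's worth, the OpenLDAP documentation allows slapcat while slapd is running for the bdb/hdb backends; the caveat is only that a dump taken mid-write reflects whichever changes had been committed at that moment. A minimal nightly sketch (the suffix and path are examples):

        slapcat -b "dc=example,dc=com" -l /var/backups/ldap-$(date +%F).ldif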

    Read the article

  • How does CloudFront work?

    - by Dharmik Bhandari
    I'm planning to implement Amazon's CDN (Content Delivery Network), known as CloudFront, in ASP.NET MVC3 with C#. I've googled it but am a little confused about a few things, mentioned below:

    - Is it compulsory to upload all static resources to the CDN network first before we can use them, or can Amazon crawl the site's static resources from a predefined folder or directory?
    - Does Amazon automatically update its copies when we change anything in the static resources, or do we have to upload updated resources to the CDN network every time?
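
    A hedged sketch of the usual answers: with a "custom origin" distribution, CloudFront pulls each file from your site on first request (no upload step) and re-fetches it only when the cached copy expires, per your Cache-Control headers; it does not notice edits sooner on its own. To push an update before expiry, issue an invalidation, e.g. with the AWS CLI (the ID and paths are hypothetical):

        aws cloudfront create-invalidation --distribution-id E2EXAMPLE --paths "/css/site.css" "/img/*"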

    Read the article

  • Cannot chown my own files from NFS

    - by valpa
    We have an NFS server providing home directories for many accounts, which are supplied by a NIS server. I have accounts A and B. As user A, I copy with "cp -a /home/B/somedir ~/". I then found that in /home/A/somedir all files are owned by user A; cp -a did not preserve the original owner (B). If I then do "chown -R B:B somedir", I get an "Operation not permitted" error, so I cannot chown my own files. Any suggestions? I worked around the issue with "chmod 777 /home/A", "su - B" and "cp -a somedir /home/A/", then "su - A" and "chmod 755 /home/A", but that is not a good solution.
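
    A hedged explanation and sketch: only root may chown files to another user, and the NFS default root_squash maps even root to nobody on the mount, so neither A nor root-on-the-client can hand the files back to B. A cleaner workaround than chmod 777 is to grant B temporary write access and do the copy as B, so ownership is correct from the start (assumes the NFS mount supports POSIX ACLs):

        setfacl -m u:B:rwx /home/A                  # run as A, who owns /home/A
        su - B -c 'cp -a ~/somedir /home/A/'        # files are created owned by B
        setfacl -x u:B /home/A                      # revoke the grant afterwards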

    Read the article

  • Nginx HTTPS redirects causing loop

    - by Ben Chiappetta
    I've been banging my head against the wall trying to figure this out, so if anyone can help I'd appreciate it. My Nginx conf has three different redirect loops; I haven't been able to get any of the three to work right. The three problem areas are:

    - Redirecting the memcache directory to SSL
    - Redirecting the accounts directory to SSL
    - Redirecting SSL to www if non-www

    nginx.conf:

        user nginx;
        worker_processes 1;
        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            error_log /var/log/nginx/error.log notice;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            proxy_set_header X-Url-Scheme $scheme;
            #gzip on;
            rewrite_log on;
            include /etc/nginx/conf.d/*.conf;
        }

    conf.d/default.conf:

        server {
            listen 80;
            server_name <redacted>.net;
            rewrite ^(.*) http://www.<redacted>.net$1;
        }
        server {
            listen 80;
            server_name www.<redacted>.net;
            set_real_ip_from 192.168.30.4;
            set_real_ip_from 192.168.30.5;
            set_real_ip_from 192.168.30.10;
            real_ip_header X-Forwarded-For;
            #charset koi8-r;
            access_log /var/log/nginx/host.access.log main;
            root /var/www/html;
            index index.php index.html index.htm;
            location =/memcache {
                rewrite ^/(.*)$ https://$server_name$request_uri? permanent;
            }
            location /accounts {
                rewrite ^/(.*)$ https://$server_name$request_uri? permanent;
            }
            #error_page 404 /404.html;
            # redirect server error pages to the static page /50x.html
            #
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
            }
            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            #
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
                try_files $uri = 404;
            }
            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            location ~ /\.ht {
                deny all;
            }
        }

    conf.d/ssl.conf:

        # HTTPS server
        #
        server {
            listen 443;
            server_name <redacted>.net;
            rewrite ^(.*) https://www.<redacted>.net$1;
        }
        server {
            listen 443 default_server ssl;
            server_name www.<redacted>.net;
            set_real_ip_from 192.168.30.4;
            set_real_ip_from 192.168.30.5;
            set_real_ip_from 192.168.30.10;
            real_ip_header X-Forwarded-For;
            proxy_set_header X-Forwarded_Proto https;
            proxy_set_header Host $host;
            proxy_redirect off;
            proxy_max_temp_file_size 0;
            proxy_set_header X-Forwarded-Ssl on;
            set $https_enabled on;
            ssl_certificate <redacted>.crt;
            ssl_certificate_key <redacted>.key;
            ssl_session_timeout 5m;
            ssl_protocols SSLv2 SSLv3 TLSv1;
            ssl_ciphers HIGH:!aNULL:!MD5;
            ssl_prefer_server_ciphers on;
            root /var/www/html;
            index index.php index.html index.htm;
            location /memcache {
                auth_basic "Restricted";
                auth_basic_user_file $document_root/memcache/.htpasswd;
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS on;
                include /etc/nginx/fastcgi_params;
                try_files $uri = 404;
            }
        }
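
    A hedged guess at the loops: the set_real_ip_from / X-Forwarded-* lines suggest a proxy in front of nginx. If that proxy terminates SSL and forwards everything to port 80, an https request for /accounts still lands in the port-80 server block and gets redirected again, forever. Keying the redirect on the forwarded protocol rather than the listening port avoids that (a sketch; the header name depends on what your proxy actually sends):

        location /accounts {
            if ($http_x_forwarded_proto != "https") {
                return 301 https://$host$request_uri;
            }
        }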

    Read the article

  • Windows 7 blocks network access to network-installed apps

    - by VokinLoksar
    Windows 2008 R2 domain. Users, running Windows 7 Enterprise, are trying to run some software from a network share. Specifically, I've tested this with MATLAB and PuTTY. When starting, MATLAB has to contact a licensing server to get its license. This action fails for regular users when they start MATLAB from the network share. However, if they copy the installation directory to a local disk everything works fine. Running MATLAB as an admin user from the network share also works. Same story with PuTTY. If the executable is launched from the share, regular users cannot connect to any servers. Something is blocking network communications for programs that are launched from a network drive. Here's the only other mention I could find of the same problem: https://social.technet.microsoft.com/Forums/en-US/w7itpronetworking/thread/4504b192-0bc0-4402-8e00-a936ea7e6dff It's not the Windows firewall or the IE security settings. Does anyone have any clue as to what this is?

    Read the article

  • JDBC CLASSPATH Not Working

    - by AeroDroid
    I'm setting up a simple JDBC connection to my working MySQL database on my server, using the Connector/J driver provided by MySQL. According to their documentation, I'm supposed to create the CLASSPATH variable to point to the directory where mysql-connector-java-5.0.8-bin.jar is located. I used:

        export set CLASSPATH=/path/mysql-connector-java-5.0.8-bin.jar:$CLASSPATH

    When I type echo $CLASSPATH to see if it exists, everything seems fine. But then when I open a new terminal and type echo $CLASSPATH, it's no longer there. I think this is the main reason why my Java server won't connect over JDBC: it isn't saving the CLASSPATH variable I set. Does anyone have suggestions or fixes for how to set up JDBC in the first place?
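
    That is expected shell behaviour: an exported variable lives only in the session that set it. A sketch of two fixes (adjust the jar path; the class name is hypothetical):

        # Make the setting permanent for future terminals:
        echo 'export CLASSPATH=/path/mysql-connector-java-5.0.8-bin.jar:$CLASSPATH' >> ~/.bashrc
        source ~/.bashrc

        # Or skip CLASSPATH entirely and pass the jar when launching the server:
        java -cp /path/mysql-connector-java-5.0.8-bin.jar:. MyServer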

    Read the article

  • How to package static content outside of web application?

    - by chinto
    Our web application has static content packaged as part of the WAR. We have been planning to move it out of the project and host it directly on Apache to achieve the following objectives:

    - It's getting too big and bloats the EAR, resulting in slower deployment across nodes.
    - Faster deployment times.
    - Take the load off the application server.
    - Host the static content on a subdomain, allowing some browsers (IE) to load resources simultaneously.
    - Give us the option of further caching, such as Apache mod_cache, apart from the cache headers we send out to browsers.

    We use yuicompressor-maven-plugin to aggregate and minimize JS files. My question is: how do we package and manage this static content outside of the web application? My current options are:

    - A new Maven war project, still using the same plugin for aggregation and compression.
    - Just a plain directory in SVN, using the YUI/Google compressor directly.

    Or is there a better technology out there to manage static content as a project?

    Read the article

  • emacs does not load (.emacs) configuration file

    - by ant2009
    I am using emacs on Ubuntu 9.04. I have my emacs configuration file, called .emacs, in the ~/.emacs.d directory. I have some basic configuration; however, every time I start emacs it never loads my configuration, and I have to keep applying it manually, e.g. M-x transient-mark-mode. My emacs file is listed below:

        ;; Emacs customization file path
        (add-to-list 'load-path "~/emacs.d")
        ;; Use font lock mode
        (global-font-lock-mode t)
        ;; Highlight cursor line
        (global-hl-line-mode t)
        ;; Highlight selected region
        (transient-mark-mode t)

    I want to add to this configuration instead of manually adding entries. Many thanks for any advice.
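
    A likely cause: emacs never reads a file named ~/.emacs.d/.emacs. It looks for ~/.emacs, ~/.emacs.el, or (in emacs 22 and later) ~/.emacs.d/init.el. A minimal fix is to move the file to a name emacs actually loads:

        mv ~/.emacs.d/.emacs ~/.emacs.d/init.el
        # or: mv ~/.emacs.d/.emacs ~/.emacs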

    Read the article

  • Tomcat / Railo stop responding with no error output

    - by andrewdixon
    This is going to sound very vague, and I'm sure it will be voted down for not giving enough information, but I don't really have any to give, as you will see. We have an AWS instance running Amazon Linux, Apache, Tomcat and Railo, and from time to time Tomcat/Railo simply stops responding to requests, with no errors output to the catalina.out file or any of the other log files in the Tomcat logs directory. When I issue the command to restart Tomcat/Railo, the restart script sits there for a while, then says that Tomcat has not responded so it has killed it off, and then it starts up again and everything is fine until it happens again, anything from a couple of minutes to a couple of days later. I have done my best to check other logs on the server but have found no messages at all to indicate why Tomcat/Railo has given up and stopped responding. Can anyone suggest any reason why it might be doing this, and/or any other log file(s) we could check to see what is happening? Thanks. Andrew.
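
    A hedged diagnostic sketch: next time it hangs, capture a JVM thread dump before restarting; it usually shows what every request thread is blocked on (a full connection pool, a deadlock, runaway GC, and so on), which is exactly the evidence the logs are not giving you.

        pgrep -f tomcat                              # find the Tomcat JVM's pid
        kill -3 <pid>                                # thread dump appears in catalina.out
        jstack -l <pid> > /tmp/railo-threads.txt     # alternative, if a full JDK is installed
        jstat -gcutil <pid> 5000                     # watch for GC/OutOfMemory pressure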

    Read the article

  • How do you configure IIS 7 to use a subdirectory as the default document?

    - by Mark Rogers
    So I have a website running on a discount ASP.NET account, and I put an ASP.NET MVC app in a subdirectory. If my URL is 'www.website.com' and my app is in the directory 'sample', then 'www.website.com/sample' will execute the MVC app. My problem is that I want the app to be shown when you go to 'www.website.com', not just 'www.website.com/sample'. I have access to IIS Manager, and I'm sure there are many ways to do this. What's the best way?
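
    One hedged approach is the IIS 7 HTTP Redirect feature, pointing the site root at the application path (the site name is an example; the same setting lives in IIS Manager under "HTTP Redirect"). Take care not to let the redirect apply to /sample itself, or requests will loop.

        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" -section:system.webServer/httpRedirect /enabled:"True" /destination:"/sample"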

    Read the article

  • tar incremental backup is backing everything up, every time

    - by Cyclic
    I made an incremental backup about 10 months ago (on Jan 27, 2013), creating a .snar metadata file. Now, when I try to make an incremental backup using

        tar --create --file=dropbox_incremental_1.tar --listed-incremental=dropbox_0.snar Dropbox

    the command just re-backs up everything. I'm not an expert on Unix timestamps, but I noticed that virtually all of my directory timestamps are way more recent than the last time they changed. For my actual files, they look like this:

        Access: 2013-03-12 19:04:51.000000000 -0500
        Modify: 2012-09-30 15:10:47.000000000 -0500
        Change: 2013-03-12 19:04:51.306209672 -0500

    The 'Modify' timestamp seems correct, but the files were definitely not changed (at least not by anything that I know of) at the time they say they were. These files still go into the incremental archive. What's happening here? Is there a way to tell tar to look at the 'Modify' timestamp? Isn't that what it's supposed to be doing?
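
    A hedged explanation: GNU tar's snapshot logic compares ctime (the "Change" stamp) as well as mtime, and it also records device numbers in the .snar file. A mass chown/chmod or a sync tool touching metadata bumps ctime without changing content, and a remounted or USB disk can change device numbers; either makes everything look new. The device case has a flag; there is no supported switch to ignore ctime, so if ctime churn is the cause, a fresh level-0 snapshot is the practical reset.

        tar --create --file=dropbox_incremental_1.tar \
            --listed-incremental=dropbox_0.snar \
            --no-check-device Dropbox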

    Read the article
