Search Results

Search found 42468 results on 1699 pages for 'default program'.


  • dnsmasq Client TTL

    - by user548971
    I have a situation where my hosts file is constantly changing. Because of this I don't want clients to cache IP addresses resolved using the hosts file. Here is the command that starts dnsmasq for me:

        /usr/sbin/dnsmasq -K -R -y -Z -b -E -S 8.8.8.8 -l /tmp/dhcp.leases -r /tmp/resolv.conf.auto --stop-dns-rebind --rebind-localhost-ok --dhcp-range=lan,192.168.2.2,192.168.2.249,255.255.255.0,12h -2 eth0

    Looking at this site: http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html I see that the -T option has this description:

        -T, --local-ttl=<time>
        When replying with information from /etc/hosts or the DHCP leases file dnsmasq by default sets the time-to-live field to zero, meaning that the requester should not itself cache the information. This is the correct thing to do in almost all situations. This option allows a time-to-live (in seconds) to be given for these replies. This will reduce the load on the server at the expense of clients using stale data under some circumstances.

    My command doesn't have the -T option. Do I need it, or does dnsmasq default the TTL to zero without it?
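    A minimal sketch of the same invocation with the TTL made explicit (the 60-second value is purely illustrative; per the man page excerpt above, leaving -T out already gives a TTL of zero):

        # Same dnsmasq invocation as in the question, with an explicit client TTL.
        # Without -T/--local-ttl, dnsmasq already answers hosts-file and lease
        # queries with a TTL of 0, so clients should not cache them.
        /usr/sbin/dnsmasq -K -R -y -Z -b -E -S 8.8.8.8 \
          -l /tmp/dhcp.leases -r /tmp/resolv.conf.auto \
          --stop-dns-rebind --rebind-localhost-ok \
          --dhcp-range=lan,192.168.2.2,192.168.2.249,255.255.255.0,12h \
          -2 eth0 \
          --local-ttl=60   # only useful if you DO want clients to cache for 60s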


  • Desktop applications are unable to launch my browser in Windows 8 [migrated]

    - by Alex Ford
    I have a fresh copy of Windows 8 Pro installed from MSDN. I have Google Chrome installed (stable channel) and it is set as my default browser. I even went into Control Panel > Default Programs to ensure that Chrome had all its defaults. When other desktop applications try to launch my browser they always fail. For example, while trying to install the Android SDK for Windows, the installer accurately detected that I did not have the JDK installed and provides a friendly button to visit java.oracle.com. When pressing this button, nothing happens at all. You can see that here: http://youtu.be/XXL8GhuWWg0 If it were only that application having issues I wouldn't think anything of it, but I have been encountering similar issues all over the place. Probably the most irritating one is when Visual Studio has updates; clicking the update button does nothing: http://www.youtube.com/watch?v=zwd1mn3TId0 You can see in that screencast that Visual Studio is not able to launch the browser no matter what I click. The update button doesn't do anything and neither do the two links in the update's description. Any suggestions? I'm assuming it's a Windows issue since it is happening in multiple applications.


  • Start Chrome from the command line, adding some arguments to make it log into your Google account automatically

    - by jim
    Is there a way to start Chrome from the command line (using Linux), providing it some argument to make it log into a Google account automatically? I'm looking for something like google-chrome -account foo -pass bar that I can easily put in a bash script later.

    A little background: I have a laptop connected to my TV, which is currently using just a mouse for user interaction. There's no Google account logged in by default, and that's the way I want to keep it, so my kids can't come across videos and pictures on Google and YouTube that they are not supposed to see (e.g. adult content, or anything marked as not appropriate for kids by Google's safe search filters). The bad thing about this is that there are some music videos on YouTube that require you to be logged in to see, usually those we (the adults) used to sing when playing karaoke. As the only input available is a mouse, I'm looking for a way to start with my Google account without having to type the whole thing using the on-screen keyboard. You may think, "Why can't you use the keyboard, if the laptop is right there?" Well, it's in a kind of uncomfortable position - too high for me without a chair or something, as it's right above the furniture the TV sits on.

    Is there a way to make this scriptable? If not, do you know any other workaround? Note: "remember me after logging off" and similar options are ruled out, as the safe-search Chrome setup must always be the default one to run.
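    As far as I know Chrome does not take Google credentials as command-line flags, so here is a sketch of a workaround using a separate, already-signed-in profile directory (the path is a made-up example; you would sign in to that profile once by hand):

        #!/bin/bash
        # Launch Chrome with a dedicated profile that stays signed in to the
        # adult Google account, leaving the default (safe-search) profile
        # untouched. Sign in to this profile manually one time first.
        PROFILE_DIR="$HOME/.config/google-chrome-karaoke"   # hypothetical path
        google-chrome --user-data-dir="$PROFILE_DIR" "https://www.youtube.com/"

    Because the kids' setup keeps using the normal profile, the signed-in session only appears when this script is launched deliberately.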


  • 85 Hz on the new driver looks the same as 75 Hz on the previous one?

    - by jon
    I have an old Philips 107T5 CRT and an Nvidia graphics card. I used an old Nvidia driver (though it wasn't a 'legacy' one when I installed it) for a few years, but recently I decided to install another Linux distribution. I used a 75 Hz refresh rate and 1024x768 resolution on my previous distribution. After I installed the new distribution I had to install an Nvidia driver, so I downloaded one from the Nvidia site (this time only the legacy driver supported my card, so I downloaded and installed that). It didn't automatically update xorg.conf, but I had a copy of my previous xorg.conf and used it. When I ran X I could only choose 85 and 75 Hz, with 85 checked as default. And now what shocks me: that default 85 Hz looks identical to how 75 Hz looked on the previous driver (at least to me). I tried 75 Hz out of curiosity and it's too bright, hurts, etc. But on the previous driver 75 Hz wasn't hurting my eyes. Why is it different? It's the same number after all, so it should always give the same results, right? That's my first question. Second question: is 85 Hz OK for that monitor model? Would it break it? I tried to find the optimal refresh rate for this model but couldn't find it.
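    One way to check what the monitor itself claims to support, sketched below (assumes the read-edid package is installed; whether EDID can actually be read depends on the card/driver combination):

        # Dump the monitor's EDID block, which includes the vertical refresh
        # range the Philips itself advertises (read-edid package).
        sudo get-edid | parse-edid
        # Show which mode/refresh rate the X server is actually using right now:
        xrandr -q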


  • Can I restore one of my user's profiles in Vista?

    - by Rod
    My youngest daughter uses my 4-year-old laptop, which has Windows Vista installed. Somehow she got a Trojan (Vista Internet Security). (I'd love to know how that happened, seeing as she is a standard user and I have VIPRE as my AV.) Anyway, I ran a deep anti-virus scan using VIPRE, which identified it. I decided to delete everything that it identified. Now she cannot use anything in her profile. If she tries to bring up the browser, a dialog box asking which program to use comes up over and over again. If I try to run any program at all, it doesn't know what to do. For example, it is totally lost trying to run the command line. If I bring up Windows Explorer, navigate to Windows\System32 and try to run the command line, or anything at all from there, it goes "Huh?" What the heck has happened? Is it possible to fix this, and if so, how? As an aside, I can log into my account (my account on that machine is an administrator) and it works fine.


  • Nginx: Loopback connection via PHP's getimagesize() crashes server (Magento's CMS)

    - by Alex
    We were able to trace a problem that is crashing our NGINX server running Magento down to the following point.

    Background info: the Magento backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a controller in Magento (cms/directive). When we set the NGINX error_log level to info, we get the following lines (line break inserted for better readability):

        2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection, so upstream connection is closed too while sending request to upstream, client: XXXXXXXXX, server: test.local,
        request: "GET index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9024", host: "test.local"

    When checking the code in the debugger, the following call never returns (in Varien_Image_Adapter_Abstract::getMimeType()):

        # $this->_fileName is http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif
        # $_SERVER['REQUEST_URI'] = http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL
        list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) = getimagesize($this->_fileName);

    The filename requested is a URL on the same server that is requesting the script: a link to a static .gif that does not exist. Sample URL:

        http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif

    When the above line is executed, any subsequent request to the NGINX server does not get a response any more. After waiting for around 10 minutes, the NGINX server starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL - but that does not crash. It simply leads to an exception saying that the URL could not be loaded (which is fine, as the URL is wrong).


  • SharePoint: authenticating users via forms authentication

    - by sbee
    My problem is the following (SharePoint newbie): I want to change the default zone from a Windows-authenticated zone to a forms-authenticated zone, thereby forcing the site collection administrator to log in via forms authentication and not Windows. The SharePoint users will also be accessing the site internally. My goal is to effectively replace Windows authentication with forms authentication, as my company does not have Active Directory installed.

    So far I have created an ASP.NET application that adds the users to the database; the database was created via the .NET Framework ASP.NET SQL registration tool (aspnet_regsql). However, when I change the default zone to the AspNetSqlMembershipProvider (Forms) and attempt to add my site collection administrator via Central Administration, I get the error "No exact match found", as shown on the screenshot. My inkling is that somehow the people picker is failing to read the users from the database, but research on correcting that has so far proved fruitless.

    I have made all the relevant changes to the config files of these sites (Central Administration site, my test site & Add Users site). The changes are the following: membership provider, connection string, people picker. I left out the role provider for now as it is optional. Help on this would be highly appreciated.


  • How does rsync --daemon know which way it is being run?

    - by Skaperen
    I want to run rsync over an SSL/TLS-encrypted connection. It does not do this directly, so I am exploring options. The stunnel program looks promising, although more complicated than I'd like due to the need to hop connections with the -r option. However, I do find there is a -l option to run a program. I am assuming this works by having two processes: one to carry out the SSL/TLS work, and one to be the worker the client is communicating with. These would then communicate over a pipe pair or a two-way socket between them.

    What struck me as odd when I surveyed a number of web pages to see how to properly set this up is that whether rsync runs as a standalone daemon or under a super-daemon like inetd, the arguments for rsync are the same. How does rsync --daemon know whether it should open a socket and listen on it for many connections, or just service one connection by communicating over the stdin/stdout descriptors it has when it starts up (which really would go through the extra process handling the encryption, decryption, and SSL/TLS protocol layer)? And then I need to find a way to wrap the client so it does SSL/TLS in one simple command (as opposed to the connection hopping that stunnel seems to favor).
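    For what it's worth, my reading is that rsync decides by checking whether its standard input is already a connected socket (an assumption on my part, not something stated in the question). A sketch of the two invocation styles, using socat to stand in for stunnel/inetd handing rsync an accepted connection on stdin:

        # 1) Standalone: rsync opens its own listening socket (port 873 by
        #    default) and forks a handler per client.
        rsync --daemon --config=/etc/rsyncd.conf

        # 2) "inetd style": something else owns the listening socket and starts
        #    rsync once per connection with the accepted socket as stdin/stdout;
        #    rsync notices stdin is a socket and serves just that one client.
        socat TCP-LISTEN:873,reuseaddr,fork EXEC:"rsync --daemon --config=/etc/rsyncd.conf"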


  • FTP account ownership on vhost directory stops Apache from serving the website correctly

    - by CodeShining
    I've purchased a virtual server, where I'm given a non-root sudo-enabled user. I need to create an FTP account that is not that sudo-able account, so I created a no-login account just for that purpose. I've set up VSFTPd correctly, also enabling the "userlist" feature to specify which users are permitted to use FTP. Then I created an empty directory under my sudo-able account and gave ownership of it to the second account. To make it easier to understand: let's say the main account (the one I use to manage my VPS) is called ubuntu and the FTP user is named ftpuser; I created a directory /home/ubuntu/mywebsite and gave its ownership to ftpuser:ftpuser. Then I uploaded a WordPress website, whose default permissions are 755 and 644. The issue is that Apache is not given any privilege to run the website.

    How can I make the website run properly, and which way is the most secure? Should I run that virtual host as another user (if that's possible)? Should I force the FTP user to use the www-data group (if that's possible) and run with permissions like 775 and 664? How can I solve this issue? Any help is appreciated. I'd like to run it using the default permissions, so any update won't break anything (because of a permissions reset).
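    A sketch of the www-data-group variant raised above (user, group and path come from the question; treat the exact modes as an assumption rather than a recommendation):

        # Give the site the www-data group so Apache can read it, and make new
        # directories created by ftpuser inherit that group.
        sudo chown -R ftpuser:www-data /home/ubuntu/mywebsite
        sudo find /home/ubuntu/mywebsite -type d -exec chmod 775 {} \;
        sudo find /home/ubuntu/mywebsite -type f -exec chmod 664 {} \;
        sudo find /home/ubuntu/mywebsite -type d -exec chmod g+s {} \;
        # Apache also needs execute (search) permission on every parent
        # directory, so check the whole path as well:
        namei -l /home/ubuntu/mywebsite/index.php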


  • Get Matrox Millenium video card working in Ubuntu 9.10

    - by wcoenen
    I have installed Ubuntu 9.10 on an old PC and it is mostly working, except for some heavy drawing defects that show up whenever I start dragging a window or scrolling inside a window or menu. It looks like the video driver copies the rectangle being moved to the wrong location. I have taken a look in /var/log/Xorg.0.log and the following lines show the detected video card:

        (--) PCI:*(0:0:8:0) 102b:0519:0000:0000 Matrox Graphics, Inc. MGA 2064W [Millennium] rev 1, Mem@ 0xf9800000/16384, 0xfb000000/8388608, BIOS @0x????????/65536
        (==) Using default built-in configuration (30 lines)
        (==) --- Start of built-in configuration ---
        Section "Device"
            Identifier "Builtin Default mga Device 0"
            Driver "mga"
        EndSection

    How do I fix the drawing defects?

    It turned out that the 24-bit color depth (automatically selected by Ubuntu 9.10) was the problem; apparently the mga driver doesn't handle it well for cards with little memory. I took the following steps to resolve the issue (you can skip the first three steps if you already have a semi-working xorg.conf file):

    1. Reboot Ubuntu in recovery mode, to get a root console without X running.
    2. Run Xorg -configure to generate an xorg.conf.new file.
    3. Copy the file to /etc/X11/xorg.conf with cp xorg.conf.new /etc/X11/xorg.conf (assuming it didn't exist yet; that's why I generated it).
    4. Open the new config file with sudo nano /etc/X11/xorg.conf and make sure the screen section is configured for 16-bit color depth, like this:

        Section "Screen"
            Identifier "Screen0"
            Device "Card0"
            Monitor "Monitor0"
            DefaultDepth 16
            SubSection "Display"
                Viewport 0 0
                Depth 16
                Modes "1024x768"
            EndSubSection
        EndSection

    I can't guarantee those were the only important changes I made - I tried a few things in my attempts to create a valid xorg.conf file. But I'm pretty sure that the screen section was the important part.


  • How to configure my 404 response

    - by Evylent
    How would I be able to correctly redirect a person who visits my site to my 404 page? I have already created my 404.php file as:

        <!DOCTYPE html>
        <html><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        <title>Page not found | Twilight of Spirits</title>
        <link rel="stylesheet" href="http://forum.umbradora.net/template/default/css/404.css">
        <link rel="icon" type="image/x-icon" href="/favicon.png">
        </head>
        <body>
        <div id="error">
          <a href="http://forum.umbradora.net/">
            <img src="/forum/template/default/images/layout/404.png" alt="404 page not found" id="error404-image">
          </a>
        </div>
        <div id="mixpanel" style="visibility: hidden; "></div></body></html>

    My .htaccess file is:

        ErrorDocument 404 http://forum.umbradora.net/404.php

    Now when I go to my site and enter a false link such as mack.php or total.html, I get this error:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable to complete your request.
        Please contact the server administrator, [email protected] and inform them of the time the error occurred, and anything you might have done that may have caused the error.
        More information about this error may be available in the server error log.
        Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.

    Any ideas on how to solve this? I have tried switching from subdomain to my normal path, still get errors.
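    A sketch of the more commonly used form of that directive, plus two quick checks (the error-log path is a guess for a typical Apache host - adjust it to the real one):

        # ErrorDocument usually takes a path relative to the site root rather
        # than a full URL; a full http:// URL makes Apache redirect to the
        # error page instead of serving it in place:
        #
        #   ErrorDocument 404 /404.php
        #
        # After changing .htaccess, look at what actually comes back and what
        # the error log records for the failing request:
        curl -sI http://forum.umbradora.net/definitely-not-here.html | head -n 1
        sudo tail -n 20 /var/log/apache2/error.log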


  • Apache Virtual Hosts behind Cisco Router

    - by Theo
    I'm setting up an Apache 2.2 Ubuntu web server for internal services that is also supposed to be accessed from outside our LAN. Our LAN has a single external IP, which is the external IP of our RV042 Cisco router. We have set up several A records on our external DNS server that point to this IP. Our internal DNS server resolves the same records to the internal IP of our web server, so computers inside the network can access them using the same address as if they were outside. We forwarded the router's external port 80 to our web server's port 80. I have set up one virtual host for each domain name in our list, and my httpd.conf is something like this:

        ServerName web.domain.com
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName alfresco.domain.com
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass /alfresco http://localhost:8080/alfresco
            ProxyPassReverse /alfresco http://localhost:8080/alfresco
            ProxyPass /share http://localhost:8080/share
            ProxyPassReverse /share http://localhost:8080/share
        </VirtualHost>

        <VirtualHost *:80>
            ServerName crm.domain.com
            DocumentRoot /var/www/sugarcrm
        </VirtualHost>

    Now, this works if we are in our LAN. However, if we are outside of our LAN we reach our web server's default page saying:

        It Works! This is the default web page for this server.

    But we can't reach the virtual hosts, as if the domain name is not being preserved when the router forwards the packets to the web server. Am I doing something wrong? How can I check what is going on? What should the settings be to make this work from outside?
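    Two quick checks, sketched below (the IP is a placeholder for the real external address; the curl should be run from a machine outside the LAN):

        # 1) Ask Apache how it parsed the name-based vhosts; the first vhost
        #    listed for *:80 is the one that answers when the Host header
        #    matches nothing.
        apache2ctl -S

        # 2) Hit the external IP directly with an explicit Host header. If this
        #    returns the right site, name-based matching works and the problem
        #    lies with DNS or the router rather than Apache.
        curl -s -H "Host: crm.domain.com" http://203.0.113.10/ | head -n 20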


  • Server 2008 R2 PowerShell script runs manually, but not as a scheduled task

    - by Aeisor
    I have a PowerShell script that runs manually using the PowerShell ISE; however, when run as a scheduled task using an administrator's credentials, the task does not run with the expected results. The script:

        $request = new-object System.Net.WebClient
        $request.DownloadFile("...url...", "C:\path\to\file.csv")

    The administrator user has Full Control of both the script and the folder it is writing to. The URL exists and responds in a reasonable time (<1s). If I run the task manually the status is 0x41301 ("Currently Running") until I eventually end it. I have set the task up using both of these methods:

    1. Start a Program: C:\path\to\PS.PS1
    2. Start a Program: C:\windows\system32\WindowsPowerShell\v1.0\powershell.exe with additional options -noninteractive -command "C:\path\to\PS.PS1"

    Using option 1, the task history shows it has opened an instance of notepad.exe but never terminates it. Using option 2 it completes the task but doesn't download / create the file. I have used Set-ExecutionPolicy Unrestricted as this is not a signed script. Any ideas?


  • Exchange 2003 Internet Mail Size Limits

    - by scampbell
    I have unsuccessfully tried to increase per-user incoming mail size settings by editing their user account settings on our Exchange server, but large incoming mail from external domains is still blocked using the default global settings. After reading here: http://support.microsoft.com/default.aspx?scid=kb;en-us;322679 I see that:

        All Internet e-mail messages use the global setting for limits on sending and on receiving. The message categorizer evaluates the sender's sending limit and the recipient's receiving limit. In example 2 earlier, a user with a user mailbox limit of 3 MB could receive messages from another user with a 3-MB sending limit. Because Internet users use the global setting, they can send only a 2-MB message.

    Which to me is madness! Surely if I want to allow a user to receive mail up to a certain size then I should be able to set it as such? Is there a specific way of getting round this? Would setting the global defaults high and setting a lower, say 10 MB, limit on the SMTP connector do the trick? Thanks.


  • Packet flooding while configuring a Debian L2TP/IPSec client?

    - by Joseph B.
    I'm currently at my wits' end trying to configure an L2TP over IPSec VPN connection on my Debian box using openswan and xl2tpd, connecting to a server of unknown configuration. I've managed to successfully establish the connection and everything appears to be working well until I attempt to set the VPN connection as my default route, at which point I see a massive flood of packets simultaneously being transmitted (to the tune of ~1.5 GB in about 2 min) until the server drops my connection. Prior to this, network traffic on all my interfaces is minimal. According to iftop the majority of this traffic appears to be coming out of port 12, although I can't seem to figure out how to pin it on a specific process. If I instead just route traffic destined for 74.0.0.0/8 through it, I'm able to access Google's servers through the VPN without issue.

    My xl2tpd.conf file is:

        [lac vpn-nl]
        lns = example.vpn.com
        name = myusername
        pppoptfile = /etc/ppp/options.l2tpd.client

    My options.l2tpd.client file is:

        ipcp-accept-local
        ipcp-accept-remote
        refuse-eap
        require-mschap-v2
        noccp
        noauth
        idle 1800
        mtu 1410
        mru 1410
        usepeerdns
        lock
        name myusername
        password mypassword
        connect-delay 5000

    And my routing table looks like:

        Destination     Gateway         Genmask          Flags Metric Ref    Use Iface
        10.5.2.1        *               255.255.255.255  UH    0      0        0 ppp0
        10.0.50.0       *               255.255.255.0    U     0      0        0 eth0
        10.50.0.0       *               255.255.0.0      U     0      0        0 eth0
        10.0.0.0        *               255.255.0.0      U     0      0        0 eth0
        192.168.0.0     *               255.255.0.0      U     0      0        0 eth0
        loopback        *               255.0.0.0        U     0      0        0 lo
        default         *               0.0.0.0          U     0      0        0 ppp0

    I'm seeing absolutely nothing in auth.log and syslog during this time and can't seem to find any other log files it might be writing to. Any suggestions would be appreciated!
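    For reference, a sketch of the usual way to avoid a routing loop when pointing the default route at the tunnel (the server address is a placeholder; the idea is to pin the encapsulated IPSec/L2TP traffic to the physical gateway before swapping the default route - whether this is actually the cause of the flood here is an assumption):

        # Placeholder for the VPN server's public IP
        VPN_SERVER=198.51.100.20
        # Read the current physical default gateway before touching the routes
        OLD_GW=$(ip route show default | awk '/via/ {print $3; exit}')
        # Keep traffic to the tunnel endpoint on the physical path so the
        # tunnel's own packets never get routed back into ppp0
        ip route add "$VPN_SERVER"/32 via "$OLD_GW" dev eth0
        # Only then make the tunnel the default route
        ip route replace default dev ppp0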


  • Cannot configure hostname - it keeps changing after reboot (CentOS 6 + nginx) [on hold]

    - by The Wolf
    I just finished this tutorial I found online: http://www.unixmen.com/install-lemp-nginx-with-mariadb-and-php-on-centos-6/

    Now I am having trouble setting a hostname; you can see the result at http://www.intodns.com/busilak.com. Here are my confs.

    /etc/hosts:

        127.0.0.1 localhost.localdomain localhost localhost4.localdomain4 localhost4
        # Auto-generated hostname. Please do not remove this comment.
        198.49.66.204 host.busilak.com busilak.com host
        ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

    /etc/sysconfig/network:

        NETWORKING="yes"
        GATEWAYDEV="venet0"
        NETWORKING_IPV6="yes"
        IPV6_DEFAULTDEV="venet0"
        HOSTNAME="host.busilak.com"

    /etc/nginx/conf.d/default.conf:

        server {
            #listen       80;
            #server_name  host.busilak.com;
            #charset koi8-r;
            #access_log  logs/host.access.log  main;

            location / {
                root  /usr/share/nginx/html;
                index index.html index.htm;
            }

            error_page 404 /404.html;
            location = /404.html {
                root /usr/share/nginx/html;
            }

    Question: Is there anything I should have done differently? I just want to use my domain busilak.com as the default domain for my server, such that when I open busilak.com it points straight to my VPS's IP address.
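    A small sketch for checking where the hostname is actually coming from on CentOS 6 (note: GATEWAYDEV="venet0" suggests an OpenVZ container, where the provider's panel can overwrite the hostname at every boot - that is an assumption, not something stated in the question):

        # Set the hostname for the running system and confirm what the boot
        # configuration says
        hostname host.busilak.com
        grep HOSTNAME /etc/sysconfig/network     # should read host.busilak.com
        hostname -f                              # checks that /etc/hosts resolves the FQDN
        # If it still reverts after a reboot on an OpenVZ VPS, the hostname set
        # in the provider's control panel usually wins, so change it there too.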


  • Apache and Virtual Hosts Problem on OS X

    - by Charles Chadwick
    I recently formatted and reinstalled my iMac. I am running 10.6.5. Prior to this format, I had the default Apache web server up and running with several virtual hosts, and everything ran beautifully. After formatting, I set everything back up again, and now Apache is acting funny. Here is a description of what I have going on. The default root directory for my Apache web server points to an external hard drive. In my httpd.conf, here is what I have:

        DocumentRoot "/Storage/Sites"

    Then a few lines beneath that:

        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>

    And then beneath that:

        <Directory "/Storage/Sites">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from All
        </Directory>

    At the end of this file, I have commented out the user dir include conf file:

        Include /private/etc/apache2/extra/httpd-userdir.conf

    and uncommented the virtual hosts conf file:

        Include /private/etc/apache2/extra/httpd-vhosts.conf

    Moving on, I have the following entry in my vhosts file:

        <VirtualHost *:80>
            DocumentRoot "/Storage/Sites/mysite"
            ServerName mysite.dev
        </VirtualHost>

    I also have a host record in my /etc/hosts file that points mysite.dev to 127.0.0.1 (I also tried using my router IP, 192.168.1.2). The problem I am coming across is that, despite having PHP files in /Storage/Sites/mysite, the server is still looking at /Storage/Sites. I know this because the DocumentRoot contains a PHP file with phpinfo() (whereas the index.php file in mysite has different code). I have tried setting up other virtual hosts, but they are still doing the same thing. Also, "NameVirtualHost *:80" is in my vhosts file; I saw that suggested as a solution on another thread here, but it doesn't seem to make a difference. Any ideas on this? Let me know if this is not enough information.
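    A few quick checks, sketched below (paths and commands are the stock OS X Apache ones):

        # Any syntax error stops the vhosts include from being honoured:
        apachectl configtest
        # List the vhosts Apache actually parsed and which one is the default:
        apachectl -S
        # Confirm mysite.dev really resolves to 127.0.0.1 on this machine:
        dscacheutil -q host -a name mysite.dev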


  • Excel controls not visible for certain users

    - by Nossidge
    One of the users of an Excel program I've written is having a weird problem: none of the control objects (Command Button, ComboBox, etc.) are visible to him when he opens the file on his laptop. He is using Excel 2003, the same version I used to create the program, and enables macros using the pop-up when the file loads. I have Googled this and have found these people who seem to be having the exact same problem, with various versions of Excel. Unfortunately, none of their questions were answered. I can't really explain it any better than this user:

        If I enter design mode and pull a control from the control toolbar onto a sheet all I see are the drag handles. When not in design mode I have to feel around with the mouse and can click the button which executes the button click code correctly and opens another sheet where again I have to feel around for the buttons to return me to the original sheet. The button I managed to click is now visible but as soon as I click anywhere on the sheet it disappears. I have verified that the visible property of the buttons is set and that Show All Objects on the Options View tab is selected. If I pull buttons from the Forms toolbar onto a sheet they are visible. If I try to find Objects using F5 when not in design mode Excel reports no objects on the sheet.

    So, Super Users, can you help?

    UPDATE: Thanks for your replies, but much like the person in the ozgrid link, the problem has gone away. Not sure why it went, but I can confirm that the user rebooted again and also started up other Excel files that didn't contain controls in the interim. Perhaps that fixed it, or maybe it'll be back again. I'll keep updating with progress, and close if the problem doesn't reoccur for the next few days. Thanks again.


  • Vagrant VM Fails to Boot

    - by Rob Wilkerson
    I have a Vagrant environment that requires me to forward port 80, so I bring it up under sudo on an OS X machine. This has always been fine until I recently upgraded to Vagrant 1.2.2. Now it fails to boot:

        [default] Waiting for VM to boot. This can take a few minutes.
        [default] Failed to connect to VM!
        Failed to connect to VM via SSH. Please verify the VM successfully booted by looking at the VirtualBox GUI.

    Because I'm running under sudo, the machine never gets added to the VirtualBox GUI, but that's always been the case for this environment. I don't get any indication that there was a problem with the additions - a potential source of this error, from what I've read. I can bring things up just fine if I change to using port 8080 on the host machine. I can't use the app, but the VM itself loads up and provisions nicely. As far as I can tell, the only things that have changed are:

    - I upgraded my Vagrant version
    - I updated the project's Vagrantfile to use v2 syntax.

    Anyone have any idea what I might be missing? I thought I'd be able to find this pretty easily, but it's quickly becoming a very real problem.
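    A sketch for getting more detail out of the failed boot (VAGRANT_LOG is Vagrant's debug-logging variable; sudo -E keeps it and your own HOME when running as root):

        # Capture a full debug log of the failing boot
        VAGRANT_LOG=debug sudo -E vagrant up 2>&1 | tee vagrant-boot.log
        # If the log shows SSH authentication failures, check whose insecure key
        # Vagrant is using -- under sudo it may look in root's .vagrant.d
        # instead of yours (an assumption, not verified):
        ls -l ~/.vagrant.d/insecure_private_key /var/root/.vagrant.d 2>/dev/null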


  • LightTPD and PHP not working if htdocs is outside the LightTPD folder

    - by Marco83
    I need to set up a simple web server with PHP on Windows XP that a number of different people will use for local testing. I'm using LightTPD 1.4.30-4-IPv6-Win32-SSL and PHP 5.2. So far I've created this folder structure:

        tools/
            LightTPD/
                htdocs/
            PHP/

    I set up PHP as CGI and the document root as server_root + "/htdocs". It works fine (well, it's slow, but I don't want to bother with FastCGI for now :) ). My problem is when I try to put htdocs outside of the LightTPD folder, like this:

        htdocs/
        tools/
            LightTPD/
            PHP/

    I update the document root to server_root + "/../../htdocs", and while static HTML pages work fine, PHP pages stop working (they return "No input file specified"). I literally just change the document root; I didn't change anything in the php.ini or anywhere else. Please also note that I left doc_root, user_dir and cgi.force_redirect at the default values in php.ini, and it works when htdocs is inside LightTPD, but not when I move it outside. Any idea why it's breaking?

    Here's my lightTPD.conf:

        server.modules = (
            "mod_access",
            "mod_accesslog",
            "mod_alias",
            "mod_cgi",
            "mod_status",
        )

        include "variables.conf"
        include "mimetype.conf"

        # THIS WORKS
        server.document-root = server_root + "/htdocs"
        # THIS DOESN'T
        #server.document-root = server_root + "/../../htdocs"

        server.upload-dirs = ( temp_dir )

        index-file.names = ( "index.php", "index.pl", "index.cgi", "index.cml", "index.html", "index.htm", "default.htm" )

        server.event-handler = "libev"

        url.access-deny = ( "~", ".inc" )

        $HTTP["url"] =~ "\.pdf$" {
            server.range-requests = "disable"
        }

        static-file.exclude-extensions = ( ".php", ".pl", ".cgi" )

        server.errorlog = server_root + "/logs/error.log"

        ######### Options that are good to be but not neccesary to be changed #######
        dir-listing.activate = "enable"

        #### CGI module
        cgi.assign = ( ".php" => server_root + "/../PHP/php-cgi.exe" )

        status.status-url = "/server-status"
        status.config-url = "/server-config"


  • Basics about file/folder permissions on Win 7

    - by Altar
    Hi. Under Windows XP I never touched the permissions of a file/folder. I was happy with the way it worked. But recently I installed Windows 7 on a drive that previously hosted Windows XP. Now some programs do not have 'read' and/or 'write' access to their own folders - and I am not talking about system folders like 'Program Files' but normal folders like 'C:\my data\my own folder\program folder'. I see that folders created under Windows XP have some user groups that do not exist for 'normal' folders (folders created by me recently under Windows 7). For example, for a Windows XP folder I have:

        Creator owner
        System
        Account unknown(S-1-5-21 blablabla...
        Admins
        Users

    For Windows 7 folders I have:

        Authenticated users
        System
        Admins
        Users

    How should I proceed? Should I give the "Users" account the right to write to the XP folders? Should I make the old folders (the XP folders) have the same groups of users as the normal (Win7) ones by adding the "Authenticated users" account to those folders? Should I delete the "Account unknown" account from my system? (If so, how?) Many thanks.


  • RDP suggesting a user name when connecting to a server

    - by Neolisk
    Prior to Event X, RDPing to Server 2003 always caused the user name field to appear blank and the "Log in to" selection to be enabled, so you could pick which domain you would log in to. For us it's either local or our domain. Since a recent Event X, a domain + user name is being suggested for every server, and it's not the most recently used user name. If you remove it manually from the RDP dialog, it is still pre-populated for you, and at the next available opportunity it returns to the General > User name option of the RDP dialog. So the user name field comes pre-populated and you cannot change it to log in locally (unless you manually erase the domain specifier - everything before the \) - the "Log in to" option is disabled by default.

    We did not make any changes to our domain or client machines, so I suspect some Windows update caused it (this being Event X). Interesting fact: it does not happen consistently on all machines, and some can log in to some servers fine, while other servers keep suggesting a default user name. What could that Event X be, and is there a way to fix it?

    EDIT: I tried this - How to clear remote desktop connections history - and specifically this part of it:

        reg delete "HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client\UsernameHint" /f

    The problem still persists.


  • Dual boot Windows 8 and Ubuntu 12.10 across a reboot

    - by AK4749
    My Setup: I have two separate SSDs, and each contains an independently bootable OS - W8 and U12.10. From my extremely limited knowledge, this means each has a functioning EFI partition(?). My default boot order (GA-Z68XP-UD3P mobo with UEFI firmware update) boots the UEFI partition containing Windows first, but if I enter the BIOS I can select the "ubuntu" entry to successfully boot Ubuntu. Both drives are GPT, and both are EFI boots.

    What I want to do: Reboot Windows 8, re-enter W8 (this is happening now due to the default boot order). What I want to change, however, is to boot into Ubuntu if I reboot from Ubuntu. Essentially, I would like to stay within one OS across reboots unless I consciously choose otherwise. Normally, I would not even ask for something I thought was impossible, but...

    Why I think this is possible: When trying EasyBCD to add Ubuntu to the W8 UEFI bootloader, I noticed an "iReboot" add-on or something that allows you to select which OS to boot into from within the OS. Note that I ended up not using the NeoGrub entry to chain Ubuntu off the W8 bootloader because I couldn't get much help with it.

    Is this possible? Have I had too much coffee and gone insane? Thank you all for your time, AK
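    From the Ubuntu side, a sketch using efibootmgr's one-shot BootNext field (the entry number is an example - read the real ones from the efibootmgr listing first):

        # List the firmware boot entries (Boot0000, Boot0001, ...) and their names
        sudo efibootmgr
        # Tell the firmware to use the "ubuntu" entry on the NEXT boot only,
        # then reboot; the permanent BootOrder (Windows first) stays untouched.
        sudo efibootmgr --bootnext 0003    # 0003 is a placeholder entry number
        sudo reboot

    A plain reboot from Windows still follows the normal boot order and lands back in W8, which matches the behaviour asked for on that side.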


  • How to find which files / directories have not been copied yet?

    - by user8676
    Hi all, I found the following 'nice' situation:

    1. An archive of a few disks (actually three disks) which has a bunch of photos, more or less organized. Well, this is good.
    2. A big disk shared on a network which has a bunch of photos with a different folder structure (even if it is somewhat recognizable to a human being) from the archive described above, but some of the files on this big network share are the same as the files in the archive. Well, this is bad.

    What we need is to move the different (new) files from the network share into the archive (perhaps we'll use a new disk added to the archive for this). The program that we need is different from a regular file duplicate finder, because a duplicate finder usually finds duplicates across all sources by comparing each file with the others; we want to find the differences between the two sources. It is fine for us to have a report generated as a text file, which we'll then use to do our move. A Windows solution will be preferred. Any ideas? TIA


  • DVD/CD burning .zip: is it more reliable, faster, longer lasting to burn a zip of files rather than the files as a folder?

    - by Rob
    Is it more reliable, faster, or longer lasting to burn to CD/DVD a zip (or a few large zips) of files rather than the files as a folder? I'm just thinking that thousands of small files might not be recorded as efficiently as one or a few large zips. Also, even after the burning program verifies the disc, I use Beyond Compare to compare the files with those on the disc. It always binary-compares as identical, but I hear the drive stuttering, presumably as the head is shifted only slightly each time to seek the next file, which leads me to think that it's best to make one or more zips and copy those locally to compare. Or is it that individual files burned to the disc are not as readable, which causes the head to stutter? There aren't any problems - my disc burns are reliable - I'm just thinking more about efficiency and longevity; the discs burn and verify fast enough on my 18x DVD burner. I'm using ImgBurn mostly, and have used Nero in the past. I burn whole discs, closed, finalised. I'm not sure which write mode, but I would think Disc At Once from a temporary cached image made by the burning program would be the most reliable.

