Search Results

Search found 16602 results on 665 pages for 'directory'.

  • 500 Internal Server Error when setting up Apache on localhost

    - by Martin Hoe
    I downloaded and installed XAMPP, and to keep my projects nicely separated I want to create a VirtualHost for each one based on its future domain name. For example, for my first project (we'll say it's project.com) I've put this in my Apache configuration:

        NameVirtualHost 127.0.0.1

        <VirtualHost 127.0.0.1:80>
            DocumentRoot C:/xampp/htdocs/
            ServerName localhost
            ServerAdmin admin@localhost
        </VirtualHost>

        <VirtualHost 127.0.0.1:80>
            DocumentRoot C:/xampp/htdocs/sub/
            ServerName sub.project.com
            ServerAdmin [email protected]
        </VirtualHost>

        <VirtualHost 127.0.0.1:80>
            DocumentRoot C:/xampp/htdocs/project/
            ServerName project.com
            ServerAdmin [email protected]
        </VirtualHost>

    And this in my hosts file:

        # development
        127.0.0.1 localhost
        127.0.0.1 project.com
        127.0.0.1 sub.project.com

    When I go to project.com in my browser, the project loads up successfully; same if I go to sub.project.com. But if I navigate to http://project.com/register (one of my site pages), I get this error:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable to complete your request.

    The error log shows this (repeated twice):

        [Sun May 20 02:05:54 2012] [error] [client 127.0.0.1] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://project.com/

    Any idea what config items I got wrong, or how to get this working? It happens on any page that's not in the root directory of project.com. Thanks.
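
    A redirect-loop error like this usually comes from a rewrite rule in the site's .htaccess rather than from the VirtualHost blocks themselves. As a hedged sketch (it assumes a front-controller rewrite to index.php, which the question doesn't show), the standard guard is to rewrite only when the request doesn't map to a real file or directory:

        # C:/xampp/htdocs/project/.htaccess -- hypothetical front controller
        RewriteEngine On
        RewriteBase /
        # without these two conditions the rule re-matches its own target and loops
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^ index.php [L]

    Raising LimitInternalRecursion, as the log message suggests, only postpones the failure; the loop itself needs a guard like the conditions above.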

  • Understanding how IE's SmartScreen works

    - by Kevin Donn
    Today I downloaded an update to our mail server on my dev machine, using IE9 on Win7 Pro. I directed IE to save the file on our server's shared drive so I could install it later. When the download finished, IE showed a red banner at the bottom and said that ".exe is not commonly downloaded and could harm your computer." There were three buttons: "Delete", "Actions", and "View downloads".

    I selected "Actions" just because I had never seen this before. It showed a "SmartScreen Filter" dialog giving essentially three choices: "Don't run this program (recommended)", "Delete program", and "Run anyway". I just cancelled the dialog, because I didn't want to run it in the first place; I just wanted to download it so I could run it later on the server.

    So when I did try to run it, it would blow up immediately saying, "Setup was unable to create the directory - Error 5: Access is denied." I tried unblocking the file, "Run as Administrator" even though I already was Administrator, turning off UAC, etc.

    Cutting to the chase, I finally downloaded the file again, ran WinMerge on the two, and it showed they were identical; yet the new one ran fine. I went back to my dev machine, downloaded the file through Firefox, and then ran it on the server; again, fine. But when I tried again through IE, SmartScreen again showed its red banner and somehow clobbered the file, even though it was stored on another machine and WinMerge can't tell the difference between it and a good file.

    I've looked around on the web for how SmartScreen works, but the results all give user-level descriptions of it. What I want to know is: what does it do to that file to make it unrunnable on another machine? Thanks

  • How to merge arbitrary snapshot into base vdi in Virtualbox

    - by jmathew
    I botched a transfer of a VM from one hard disk to the other. Now I'm left with the base vdi and a whole bunch of snapshots. My steps:

        1. Copied the old VM directory over to the new HDD.
        2. Deleted the old VM and added the new VM using Machine > Add, providing the old XML file.
        3. Couldn't add the base vdi file due to a conflict, so changed the UUID of the base vdi with:

               VBOXMANAGE.EXE internalcommands sethduuid <path/to/vdi>

        4. Attempted to roll back to a snapshot, but it seems the VM is looking for the snapshots on the old HDD (which is formatted and gone).

    This is the error ("networked" is the snapshot name):

        Failed to restore the snapshot networked of the virtual machine lfs.
        Could not open the medium 'H:\vm\ft.vdi'.
        VD: error VERR_PATH_NOT_FOUND opening image file 'H:\vm\ft.vdi' (VERR_PATH_NOT_FOUND).
        Result Code: E_FAIL (0x80004005)
        Component: Medium
        Interface: IMedium {53f9cc0c-e0fd-40a5-a404-a7a5272082cd}

    The old HDD was drive H:; the new one is drive N:. How can I modify the snapshots/VM to look in N:\vm\ft.vdi for the base vdi? I've already set the defaults in VirtualBox's general settings (default VM/VM snapshot location). Or, failing that, how can I merge the old snapshot with the base vdi, given that the only thing that has changed is the base vdi's UUID?
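
    Worth noting when untangling this: snapshots are differencing images whose headers record both the parent's location and its UUID, so changing the base vdi's UUID with sethduuid breaks the parent link even after the path problem is fixed. A hedged sketch of inspecting and flattening the chain (the snapshot filename below is a placeholder):

        # show each image's UUID and the parent it expects
        VBoxManage showhdinfo "N:\vm\ft.vdi"
        VBoxManage showhdinfo "N:\vm\Snapshots\{snapshot-uuid}.vdi"

        # if the chain still resolves, cloning the newest snapshot reads through
        # its parents and writes a single standalone, merged disk
        VBoxManage clonehd "N:\vm\Snapshots\{snapshot-uuid}.vdi" "N:\vm\ft-merged.vdi"

    If clonehd refuses because the parent can't be found, the usual fallback is editing the MediaRegistry section of the machine's .vbox file (or VirtualBox.xml) so each hard-disk entry carries the new location and the current UUIDs, then re-opening the machine.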

  • Setting up xpra for client use in OS X

    - by Jonathan
    I've been trying to get xpra to run on OS X for the last few days, to connect to my Ubuntu server. Note that there's a GUI for it called Shifter, but that (at least on OS X) is still far too buggy. For those who don't know what xpra is: if you know what screen is, xpra is like screen for GUI X Windows apps, tunneled over ssh. You can render a remote X app locally, so it's faster than sending a series of compressed screenshots (like VNC), but with xpra you can disconnect and reconnect on different computers. To get the basic functionality you can just type "ssh -X server.location" and any GUI app you open from the command line will open locally.

    I've been able to get xpra to build by doing the following:

        1. Download parti-all-0.0.6.tar.gz from the xpra site listed under upstream and untar it.
        2. Issue the following MacPorts command (dependencies thanks to RogBlog):

               sudo port install python25 python26 py26-pyrex py26-gtk xorg-libXtst py25-gobject py25-gtk py25-nose py26-nose xorg-libXdamage xorg-libXcomposite xorg-libXtst xorg-libXfixes

        3. In the upstream list of v0.0.06 patches (NOT 0.0.8pre!) on the xpra site listed above, download mswindows-conditional-pyrex.patch.
        4. Open the patch with your favorite text editor and change the single occurrence of "win" in it to "darwin".
        5. Apply the patch to setup.py.
        6. Run do-build in the command line.

    Now where I'm stumped: how do I run xpra? The build produces a subdirectory called install/bin in which xpra is located, but when I try to run it I get the following error:

        Traceback (most recent call last):
          File "./xpra", line 4, in <module>
            import xpra.scripts.main
        ImportError: No module named xpra.scripts.main

    There is a file called main.py under xpra/scripts, but I don't know any Python and I'm not sure if this is what it's looking for, or what to do with it even if it is. My goal is to set up xpra so I can install it into /usr/bin (or some other common path for executables) and execute it whenever I please. What do I do next?
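
    That traceback is the launcher script failing to find the xpra Python package on its import path, not a broken build. A hedged sketch of running it straight from the build tree (the lib path is an assumption; check what do-build actually produced under install/):

        cd install
        export PYTHONPATH="$PWD/lib/python:$PYTHONPATH"
        ./bin/xpra attach ssh:user@server.location:100   # client usage; the display number is an example

    If that works, installing is just a matter of reproducing the same layout: bin/xpra somewhere on $PATH, and the xpra package directory somewhere Python's import path can see (or exported in PYTHONPATH by a small wrapper script).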

  • First web server questions

    - by Graeme
    Hi there, just looking for some help/suggestions with this. I require my own server for an upcoming project that will be hosting users' websites. I want to build a control panel the user can log into and use to modify their website, which will be stored elsewhere on the server. This all seems easy enough; it's just managing domains and email that confuses me.

    What should I look for to manage domain names and point them to the correct website? And what would be the best way to manage email accounts and set up new ones? I want to avoid cPanel/WHM if possible; I'm looking to control most things through the control panel I will be building, so any suggestions on this would be useful as well, as I will be wanting to add email accounts through PHP (can be done using a shell, I assume?).

    I will also be wanting to measure the bandwidth used by the websites contained in each user's directory; any suggestions on making this possible? I'm really looking for suggestions on what software to use to set this up. Any advice would be really helpful! Thanks, Graeme
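
    On the bandwidth question: if each user's site writes its own Apache access log in common or combined log format, the bytes-sent field can simply be summed. A minimal sketch (the log path and format are assumptions):

        # Sum response bytes for one site's access log (field 10 in common/combined format)
        awk '{ sum += $10 } END { printf "%.1f MB\n", sum / 1048576 }' /var/log/apache2/user1-access.log

    Run from cron per user site, this gives a rough per-site HTTP transfer figure; it won't count non-HTTP traffic such as mail.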

  • legitimacy of the tasks in the task scheduler

    - by Eyad
    Is there a way to know the source and legitimacy of the tasks in the Task Scheduler in Windows Server 2008 and 2003? Can I check whether a task was added by Microsoft (i.e., from SCCM) or by a third-party application? For each task in the Task Scheduler, I want to verify that the task has not been created by a third-party application; I only want to allow standard Microsoft tasks and disable all other, non-standard tasks.

    I have created a PowerShell script that goes through all the XML files in the C:\Windows\System32\Tasks directory, and I was able to read all the XML task files successfully, but I am stuck on how to validate the tasks. Here is the script for your reference:

        Function TaskSniper() {
            # Getting all the files in the Tasks folder
            $files = Get-ChildItem "C:\Windows\System32\Tasks" -Recurse | Where-Object {!$_.PSIsContainer};
            [Xml] $StandardXmlFile = Get-Content "Edit Me";

            foreach ($file in $files) {
                # Constructing the file path
                $path = $file.DirectoryName + "\" + $file.Name

                # Reading the file as an XML doc
                [Xml] $xmlFile = Get-Content $path

                # DS SEE: http://social.technet.microsoft.com/Forums/en-US/w7itprogeneral/thread/caa8422f-6397-4510-ba6e-e28f2d2ee0d2/
                # (Get-AuthenticodeSignature C:\Windows\System32\appidpolicyconverter.exe).Status -eq "valid"

                # Display something
                $xmlFile.Task.Settings.Hidden
            }
        }

    Thank you

  • Sync desktop Mac environment to laptop

    - by Andrew Vit
    I spend the majority of my time working at my desktop Mac, which I have configured for my web development environment. My spouse has a MacBook for casual use, and I occasionally steal it back when I need to work off-site, or when travelling. The question is how to best synchronize the two so I can switch between them more readily. I've solved a few obvious things by using online services:

        - Email is hosted on IMAP.
        - Working files are in Dropbox.
        - Source code is managed in git.

    However, the following are things I always miss when jumping on the laptop:

        - Installed applications (current versions)
        - Installed libraries & utilities (/usr/local)
        - Apache VirtualHosts & other configurations (/etc)
        - Disk image files for VMs

    My current method is to connect the MacBook via Firewire target mode and rsync the /Users/me home directory, and then cherry-pick the other items I need from Applications, /etc and /usr/local. The problem with this method is that it can be very time-consuming due to things like my virtual machine image files, cached emails, etc. How can I make this faster and easier? Can you recommend a solution for configuration management (so I can repeatably install and configure the same software on both), or synchronization (so I can bring the MacBook up to date nightly, over our home network)?
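
    A nightly one-way push over the home network can come close to the Firewire ritual without the cable. A hedged sketch (hostname, user and exclusions are assumptions; -E here is Apple's shipped rsync flag for extended attributes/resource forks):

        #!/bin/bash
        # Push the desktop home directory to the MacBook, skipping heavy, volatile items
        rsync -aE --delete \
            --exclude 'Library/Caches/' \
            --exclude 'Library/Mail/' \
            --exclude 'Documents/VMs/' \
            /Users/me/ macbook.local:/Users/me/

    For the machine-level pieces (/usr/local, /etc, applications), a declarative list of packages and configs kept in git is usually less fragile than rsyncing system paths between two different machines.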

  • What should be monitored to troubleshoot file sharing problems?

    - by RyanW
    I'm running into some problems with a file share used by an ASP.NET web application. In this configuration, there are two web servers (Win2k8 Web) that connect to a file server (Win2k8 Enterprise), reading and writing files over a file share. Recently, one of the web servers has begun encountering an error accessing the file share:

        IOException: The specified network name is no longer available.

    There does not appear to be much info on the web explaining what causes this and how best to fix it, so I'm looking at what I can monitor in order to get clues. I'm not sure if it's hardware, just a load issue, file size, frequency, etc. With Windows perfmon, what can I monitor on the file server side? There's the "Files Open" object; any other good ones? What can I monitor on the web server side?

    EDIT: I'll add that the UNC path uses the IP address of the file server, not a name to resolve. Also, the share is a single, flat directory with over 100K files.

  • Logging won't stop on log file after renaming/moving it... how do I stop it?

    - by Jakobud
    Just discovered that logrotate is not rotating our firewall log, so it's now up to 12 GB in size. I need to split the file into smaller chunks and start manually rotating them so I can get things back on track. However, before I start splitting the log up, I need to stop the firewall from logging to the current firewall log file and force it to start logging to a new, empty file. That way I'm not trying to split up or rotate a log file that is still constantly growing. I tried to simply do this:

        mv firewall firewall.old
        touch firewall

    I expected the new, empty firewall file to start growing in size, but no: firewall.old is still being logged to. Then I tried to stop and start iptables; no change, firewall.old is still the log file. I tried moving it to another directory; that didn't help. I tried stopping iptables, renaming the file, creating a new firewall file, and starting iptables again, but no change. How do I stop the logging to this file and force it to start logging to a new file?
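
    What keeps writing to firewall.old is almost certainly the syslog daemon, not iptables itself: the kernel hands log entries to syslog, and syslog holds an open file descriptor that follows the inode through a rename. The fix is to make syslog reopen its output files after the rename. A hedged sketch (the pid file path varies by distro and daemon; this assumes classic sysklogd):

        mv /var/log/firewall /var/log/firewall.old
        touch /var/log/firewall
        kill -HUP "$(cat /var/run/syslogd.pid)"   # tell syslogd to close and reopen its logs

    This reopen step is exactly what logrotate's postrotate script normally does, which is also why a working logrotate config for this file will need one.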

  • How do I rename my old Program Files folder?

    - by SteveJ
    I installed a new SSD as my boot drive (C:), installed a fresh version of Windows 7 64-bit, and kept my existing SATA drive in the system (D:). I want to keep using my D: drive for file storage (no sense filling up the SSD with stuff that isn't performance-critical), and I haven't formatted the D: drive because there's stuff on there I want to keep. I also want to create a new "D:\Program Files" folder so I can install apps that aren't performance-critical there.

    So I decided I'd rename the existing "D:\Program Files" from my old Windows install to "D:\Old Program Files" and then create a new "D:\Program Files" directory. Easy, right?

    I can see "D:\Program Files" just fine in Explorer. I right-click, select Rename, and type "Old Program Files". I get the alert that says I need Admin permission to do this, so I press the confirm button with the shield. But the folder still appears as "Program Files" in Explorer. I jump out to the command line, and it appears as "Old Program Files" when I do a dir. I can even do mkdir "Program Files", and when I do a dir they both appear. But in the Explorer GUI, it looks like I have two "Program Files" folders. This will be confusing during app installation, because I won't be able to tell which one is which.

    I've tried poking around in the properties tab of the old folder, but can't find anything that would explain what's causing the issue. How do I rename the old Program Files folder?

  • mount multiple folders with nfs4 on centos

    - by microchasm
    I'm trying to get nfs4 working here. On machine 1 (the server) I have a folder, and in it two other folders I'm trying to share independently:

        /shared/folder1
        /shared/folder2

    Problem is, I can't seem to figure out how to mount the folders independently on the client.

    Machine 1 (server), /etc/exports:

        /var/shared/folder1 192.168.200.101(rw,fsid=0,sync)
        /var/shared/folder2 192.168.200.101(rw,fsid=0,sync)

    ...followed by exportfs -ra.

    Machine 2 (client), /etc/fstab:

        192.168.200.201:/folder1/ /home/nfsmnt/folder1 nfs4 rw 0 0

    ...and then:

        mount /home/nfsmnt/folder1
        mount.nfs4: 192.168.200.201:/folder1/ failed, reason given by server: No such file or directory

    The folder is there; I'm positive. I think there is something simple I'm missing, but I'm totally missing it. It seems like there should be a way in fstab to tell nfs which folder on the server I want to mount, but I can only find references to what looks like a root mount point (e.g. 192.168.1.1:/), which I assume is handled by exports on the server. But even with the folders set up in exports, there doesn't seem to be an apparent way to pick and choose which gets mounted. Is it not possible to mount separate folders from the same server to different mount points on the client? Any help appreciated.
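
    It is possible; the catch is NFSv4's pseudo-filesystem. fsid=0 marks an export as the v4 root, so it can appear on at most one export, and every client path is then resolved relative to that root. A hedged sketch of the usual layout (options trimmed to the essentials; no_subtree_check is an assumption):

        # /etc/exports on the server: the parent is the v4 root, children are plain exports
        /var/shared         192.168.200.101(rw,fsid=0,sync,no_subtree_check)
        /var/shared/folder1 192.168.200.101(rw,sync,no_subtree_check)
        /var/shared/folder2 192.168.200.101(rw,sync,no_subtree_check)

        # on the client: paths are relative to the fsid=0 root, not to /
        mount -t nfs4 192.168.200.201:/folder1 /home/nfsmnt/folder1
        mount -t nfs4 192.168.200.201:/folder2 /home/nfsmnt/folder2

    With two exports both claiming fsid=0, as in the config above, the server's v4 root is ambiguous, and a lookup of /folder1 fails with exactly the "No such file or directory" shown.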

  • EventID 1058 Code 5, Sysvol is subdir of Sysvol - how to fix?

    - by nulliusinverba
    I have been trying to resolve this error, like many others:

        The processing of Group Policy failed. Windows attempted to read the file \\domain.local\SysVol\domain.local\Policies\{3EF90CE1-6908-44EC-A750-F0BA70548600}\gpt.ini from a domain controller and was not successful. Group Policy settings may not be applied until this event is resolved. This issue may be transient and could be caused by one or more of the following:
        a) Name Resolution/Network Connectivity to the current domain controller.
        b) File Replication Service Latency (a file created on another domain controller has not replicated to the current domain controller).
        c) The Distributed File System (DFS) client has been disabled.
        Error code: 5 = Access Denied.

    An incredibly helpful post is this one: http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/2003_Server/A_1073-Diagnosing-and-repairing-Events-1030-and-1058.html

    Quoting from that post, here is a list of potential problems that can lead to 1030 and 1058 event errors:

        - Sometimes the permissions of the file folders that contain Group Policies (the Sysvol folder) can be corrupted.
        - Sometimes you have problems with NetBIOS.
        - Sometimes the GPO itself is corrupt, or you have a partial set of data for that GPO.
        - Sometimes you may have problems with File Replication Services, which almost always indicates a problem with DNS.
        - Sysvol may be a subfolder of itself: Sysvol/Sysvol

    I have that last problem, where Sysvol is a subfolder of Sysvol. The directory structure is:

        sysvol
            domain
            staging
            staging areas
            sysvol          (shared as "\\server\sysvol")
                domain.local
                    ClientAgent
                    Policies
                    scripts

    Interestingly, the second sysvol folder is the one that is shared as "\\server\sysvol". This makes me confident this is the issue with the permissions and error code 5. Also interestingly, my Server 2008 R2 servers can see it fine; my Server 2008 servers cannot, and get the error. This is consistent across all my servers. This latter fact makes me uncertain what I need to do to fix this. Do I, e.g., simply move the shared sysvol folder up a level to replace the non-shared one? Any help greatly appreciated. Cheers, Tim.

  • Understanding !d command in sed with respect to saves

    - by richardh
    I have a directory of tab-delimited text files, and some have comments in the first few lines that I would like to delete. I know that the first good line starts with "Mark", so I can use /^Mark/,$!d to delete these comments. After this deletion I make several other replacements in the (new) first line, which holds the variable names. My question is: why do I have to save sed's output to a file to get my script to work? I understand that if a line is deleted, then there is no output to proceed downstream. But if I don't delete (i.e., !d), why do I have to save to file? Thanks! Here is my shell script. (I'm a sed newbie, so any other feedback is also appreciated.)

        #!/bin/bash
        for file in *.txt; do
            mv $file $file.old1
            sed -e '/^Mark/,$!d' $file.old1 > $file.old2
            sed -e '1s/\([Ss]\)hareholder/\1hrhldr/g'\
                -e '1s/\([Ii]\)mmediate/\1mmdt/g'\
                -e '1s/\([Nn]\)umber/\1o/g'\
                -e '1s/\([Cc]\)ompany/\1o/g'\
                -e '1s/\([Ii]\)nformation/\1nfo/g'\
                -e '1s/\([Pp]\)ercentage/\1ct/g'\
                -e '1s/\([Dd]\)omestic/\1om/g'\
                -e '1s/\([Gg]\)lobal/\1lbl/g'\
                -e '1s/\([Cc]\)ountry/\1ntry/g'\
                -e '1s/\([Ss]\)ource/\1rc/g'\
                -e '1s/\([Oo]\)wnership/\1wnrshp/g'\
                -e '1s/\([Uu]\)ltimate/\1ltmt/g'\
                -e '1s/\([Ii]\)ncorporation/\1ncorp/g'\
                -e '1s/\([Tt]\)otal/\1ot/g'\
                -e '1s/\([Dd]\)irect/\1ir/g'\
                $file.old2 > $file
            rm -f $file.old*
        done
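
    The reason the two passes can't simply be folded into one sed command: numeric addresses like the 1 in 1s/.../ refer to input line numbers, not to the lines remaining after the d. In a combined script, 1s would target the original first line, which is usually a comment the d has already thrown away. The second pass therefore has to be a separate sed process, but it doesn't need the intermediate file; a pipe does the same job. A minimal sketch of the same loop (untested against your data; quoting added around $file in case of odd filenames, and only the first two substitutions shown):

        #!/bin/bash
        for file in *.txt; do
            mv -- "$file" "$file.old1"
            sed '/^Mark/,$!d' "$file.old1" \
                | sed '1s/\([Ss]\)hareholder/\1hrhldr/g; 1s/\([Ii]\)mmediate/\1mmdt/g' \
                > "$file"
        done

    The remaining substitutions chain on in the same way, separated by semicolons or further -e options.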

  • Node.js Build failed: -> task failed (error#2)?

    - by Richard Hedges
    I'm trying to install Node.js on my CentOS server. I run ./configure and it runs perfectly fine. I then run the 'make' command and it produces the following:

        [5/38] libv8.a: deps/v8/SConstruct -> out/Release/libv8.a
        /usr/local/bin/python "/root/node/tools/scons/scons.py" -j 1 -C "/root/node/out/Release/" -Y "/root/node/deps/v8" visibility=default mode=release arch=ia32 toolchain=gcc library=static snapshot=on
        scons: Reading SConscript files ...
        ImportError: No module named bz2:
          File "/root/node/deps/v8/SConstruct", line 37:
            import js2c, utils
          File "/root/node/deps/v8/tools/js2c.py", line 36:
            import bz2
        Waf: Leaving directory `/root/node/out'
        Build failed: -> task failed (err #2):
            {task: libv8.a SConstruct -> libv8.a}
        make: *** [program] Error 1

    I've done some searching on Google, but I can't seem to find anything to help. Most of what I've found is for Cygwin anyway, and I'm on CentOS 4.9. Like I said, ./configure went through perfectly fine with no errors, so there's nothing there that I can see.

    EDIT: I've got a little further. Now I just need to upgrade g++ to version 4 (or higher). I tried yum update gcc but no luck, so I tried yum install gcc44, which resulted in no luck either. Has anyone got any ideas as to how I can update g++?
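
    Both failures trace back to the build toolchain rather than Node itself: the ImportError means the Python running scons was built without its bz2 module, and v8 separately wants a reasonably modern g++. A hedged sketch of the usual repair on an old RHEL/CentOS (package names vary by release and repository; gcc44 shipped in later CentOS updates, so on 4.9 it may need an extra repo):

        # bz2 headers, so a Python rebuild can generate the bz2 module
        yum install bzip2-devel
        # newer compiler for v8
        yum install gcc44 gcc44-c++
        # point the build at it and retry
        export CC=gcc44 CXX=g++44
        ./configure && make

    If /usr/local/bin/python was compiled by hand, it has to be rebuilt after bzip2-devel is installed; the module is only generated when the headers are present at build time.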

  • Missing access log for virtual host on Plesk

    - by Cummander Checkov
    For some reason I don't understand, after creating a new virtual host / domain in Plesk a few months back, I cannot seem to find the access log. I noticed this when running:

        /usr/local/psa/admin/sbin/statistics

    The host in question is being scanned:

        Main HTML page is 'awstats.<hostname_masked>-http.html'.
        Create/Update database for config "/opt/psa/etc/awstats/awstats.<hostname_masked>.com-https.conf" by AWStats version 6.95 (build 1.943)
        From data in log file "-"...
        Phase 1 : First bypass old records, searching new record...
        Searching new records from beginning of log file...
        Jumped lines in file: 0
        Parsed lines in file: 0
        Found 0 dropped records,
        Found 0 corrupted records,
        Found 0 old records,
        Found 0 new qualified records.

    So basically no access logs have been parsed/found. I then went on to check whether I could find the log myself. I looked in /var/www/vhosts/<hostname_masked>.com/statistics/logs, but all I find there is error_log. Does anybody know what is wrong here, and perhaps how I could fix it?

    Note: in the <hostname_masked>.com/conf/ folder I keep a custom vhost.conf file, which contains only some rewrite conditions plus a directory statement with php_admin_flag and php_admin_value settings. None of them are related to logging, though.
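
    One tell in the AWStats output above is From data in log file "-": the stats run has an empty LogFile setting for this vhost, so it parsed nothing. A hedged first diagnostic (paths are typical Plesk-era defaults and may differ on this install):

        # Is Apache even writing an access log for this vhost?
        grep -ri customlog /var/www/vhosts/<hostname_masked>.com/conf /etc/httpd/conf.d

        # What does the generated AWStats config think it should read?
        grep -i '^LogFile' /opt/psa/etc/awstats/awstats.<hostname_masked>.com-http*.conf

    If the CustomLog directive is missing from the generated vhost include, rebuilding the domain's web configuration from Plesk is the usual way to regenerate it.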

  • User authentication -- username mismatch in IIS in ASP.NET application

    - by Cory Larson
    Last week, an employee's Active Directory username was changed (or a new one was created for them). For the purposes of this example, let's assume these usernames:

        Old: Domain\11111
        New: Domain\22222

    When this user now logs in using their new username and attempts to browse to any one of a number of ASP.NET applications using only Windows Authentication (no Anonymous enabled), the system authenticates, but our next layer of database-driven permissions prevents them from being authorized. We tracked it down to a mismatch of usernames between their logon account and who IIS thinks they are. Below are the outputs of several ASP.NET variables from apps running in a Windows 2008 IIS 7.5 environment:

        Request.ServerVariables["AUTH_TYPE"]:                    Negotiate
        Request.ServerVariables["AUTH_USER"]:                    Domain\11111
        Request.ServerVariables["LOGON_USER"]:                   Domain\22222
        Request.ServerVariables["REMOTE_USER"]:                  Domain\11111
        HttpContext.Current.User.Identity.Name:                  Domain\11111
        System.Threading.Thread.CurrentPrincipal.Identity.Name:  Domain\11111

    From the above, I can see that only the LOGON_USER server variable has the correct value, which is the account the user used to log on to their machine. However, we use the AUTH_USER variable for looking up the database permissions. In a separate testing environment (a completely different server: Windows 2003, IIS 6), all of the above variables show Domain\22222. So this seems to be a server-specific issue, like the credentials are somehow getting cached either on their machine or on the server (the former seems more plausible).

    So the question is: how do I confirm whether it's the user's machine or the server that is botching the request? How should I go about fixing this? I looked at the following two resources and will be giving the first one a try shortly:

        http://www.interworks.com/blogs/jvalente/2010/02/02/removing-saved-credentials-passwords-windows-xp-windows-vista-or-windows-7
        http://stackoverflow.com/questions/2325005/classic-asp-request-servervariableslogon-user-returning-wrong-username/5299080#5299080

    Thanks.

  • Office365 Exchange: Cannot open two shared calendars in Outlook

    - by Mark Williams
    The problem: Outlook won't open the calendars on another user's mailbox and a room mailbox, even when users have permission. Note: this problem is affecting more than one account on more than one machine.

    So, I have a room mailbox and a personal mailbox on Exchange, both with shared calendars. There is a security group called "Scheduling Users" that has editor rights on both of these calendars. The room mailbox was created using PowerShell, per the instructions posted online (http://help.outlook.com/140/ee441202.aspx). Sharing worked on both of these folders initially, and users can still access these folders using OWA.

    So, on to the problem. When users try to open these calendars in Outlook, they receive one of the following messages:

        The set of folders cannot be opened. Microsoft Exchange is not available. Either there are network problems or the Exchange server is down for maintenance.

        Cannot open this item. Cannot open the free/busy information. The attempt to log on to Microsoft Exchange has failed.

    What I have tried so far:

        - Resetting the permissions on both of the mailboxes. I deleted the security group permissions on both mailboxes, applied the change, then waited a bit and gave the permissions back.
        - Deleting the OST file of the shared calendar from the Outlook data directory.

    That is all I have been able to find online. Any thoughts? I have been going back and forth with the Office365 support folks for a while and they seem stumped too.

  • Find and Replace String in filenames

    - by shekhar
    I have thousands of files with no specific extensions. What I need to do is search for a string in the filename and replace it with another string, then search for a second string and replace it with some other string, and so on. That is, I have multiple strings to replace with other multiple strings. It may be like:

        "abc" in a filename replaced with "def"   (the string "abc" may be in many files)
        "jkl" in a filename replaced with "srt"   (the string "jkl" may be in many files)
        "pqr" in a filename replaced with "xyz"   (the string "pqr" may be in many files)

    I am currently using an Excel macro to get the file names into Excel, preserving the original names in one column and building the desired names in another column; then I create a batch file for the renames, like:

        rename Path\OriginalName1 NewName1
        rename Path\OriginalName2 NewName2

    The problem with this procedure is that it takes a lot of time, as there are many files, and as I am using Excel 2003 there is a limitation on the number of rows as well. I need a script that works like:

        replacestr abc with def
        replacestr pqr with xyz

    in a single directory. Would it be better to do this in a Unix script?
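
    A Unix shell does make this short: a loop plus bash's ${var/pattern/replacement} expansion covers the whole batch-file detour. A minimal sketch (the pairs and directory are assumptions; the echo makes it a dry run, remove it to actually rename):

        #!/bin/bash
        # each line: <search> <replace>
        while read -r from to; do
            for f in *"$from"*; do
                [ -e "$f" ] || continue           # pattern matched no files
                echo mv -- "$f" "${f/$from/$to}"  # drop "echo" once the preview looks right
            done
        done <<'EOF'
        abc def
        jkl srt
        pqr xyz
        EOF

    ${f/$from/$to} replaces the first occurrence in each name; ${f//$from/$to} replaces every occurrence, if a string can appear more than once in one filename.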

  • Reading log files from web application

    - by Egorinsk
    I want to write a small PHP application for monitoring logs on a Debian server, including syslog logs and Apache/PHP messages. The problem here is that the Apache user (www-data) has no access to the /var/log directory. What would be the best way to grant the PHP application access to the logs? Let's assume that the log files can be really large, like hundreds of megabytes. I have some ideas:

        - Write a shell script that would be run via sudo and tail the last 512 KB of log into a separate file that can be read by the application. That's ineffective, because it forks a new process and the data has to be read twice.
        - Add www-data to the adm group (which can read logs). That's insecure.
        - Start a PHP process via cron every minute to read the logs. That's not very good, because it doesn't allow real-time monitoring; also, the script would run even when I'm not reading logs and consume CPU time (the server is in the cloud, and I'll have to pay for it).
        - Create hardlinks for all log files with lowered permissions. I guess that won't work, because logrotate could recreate the log files and they'll change inode number.
        - Start a separate nginx/Apache server under a privileged user that may read logs.

    Maybe anyone has a better solution?
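
    A middle path between "add www-data to adm" and a sudo wrapper is a POSIX ACL scoped to just the log tree, read-only. A hedged sketch (it assumes the filesystem is mounted with ACL support; the default ACL covers files logrotate creates later):

        # read/traverse on what exists now
        setfacl -R -m u:www-data:rX /var/log
        # and on files created there in the future
        setfacl -R -d -m u:www-data:rX /var/log

    This grants no write access, survives rotation, and is revocable with setfacl -R -x u:www-data /var/log. For the "hundreds of megabytes" concern, the PHP side can fseek() to an offset it remembers between requests instead of re-reading the whole file.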

  • 2 Printers 1 Queue

    - by Shazburg
    My issue: when an order is processed, the same document needs to be printed on two printers.

    My proposed solution: create a single queue in CUPS with a backend script that spits the job out to the two real printer queues.

    My problem: documentation. Maybe I'm looking at every ring around the bullseye, but I can't find anything that lays out the rules for writing a CUPS backend script. In the end, I have several questions:

        1. Is there already an option to do this in CUPS that I've missed?
        2. The line I use to add my queue is "lpadmin -p MultiPass -E -v multipass -P Generic PostScript Printer". But the DeviceURI is bad unless I specify a directory, like "-v multipass:/tmp". Why is this?
        3. For testing, my script does nothing but capture ARGV and write it out to a text file, one line per argument. Problem is, I'm getting nothing. Logs show the job as successful, but I'm pretty sure my meager attempt at a backend isn't even being run.

    I've tried to keep this question brief, so please ask for more info, as I'm sure I've left out the most important part in all this. Honestly, I'm just done chasing my own tail. Thank you for your time.
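
    For reference, the backend contract is small: invoked with no arguments, a backend must print a discovery line; invoked with 5 or 6 arguments (job-id user title copies options [file]), it processes one job, reading from the file if given and stdin otherwise, and exits 0 on success. A hedged sketch of a fan-out backend (queue names are placeholders; a production version would check each lp call's exit status):

        #!/bin/bash
        # /usr/lib/cups/backend/multipass  (owned by root, executable)
        if [ $# -eq 0 ]; then
            # discovery mode: advertise ourselves so a multipass: URI validates
            echo 'direct multipass "Unknown" "Fan-out to two queues"'
            exit 0
        fi

        title="$3"; copies="$4"
        spool="$(mktemp)"
        cat "${6:-/dev/stdin}" > "$spool"    # 6th argument is the job file, if any

        # hand the identical document to both real queues
        lp -d printer1 -n "$copies" -t "$title" "$spool"
        lp -d printer2 -n "$copies" -t "$title" "$spool"

        rm -f "$spool"
        exit 0

    The DeviceURI behavior in point 2 likely falls out of URI syntax: a device URI has to parse as a scheme plus some path, so a bare "multipass" is rejected while "multipass:/tmp" parses.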

  • setting up tracd behind mod_proxy?

    - by FilmJ
    I'm having trouble setting up mod_proxy and tracd. Almost all the search results for this problem take me to the built-in trac documentation page that mentions it as an option.

    I have several VirtualServers already running on the box in question, so running tracd on port 80 or 443 is not an option, but I do want to make my trac server accessible on this machine without exposing an additional port via the firewall. Making things even more complicated, I have multiple trac repositories being served by the same instance of tracd, so I want to set it up so that http://trac.abc.com is proxied to localhost:8000/projects/abcproject and http://trac.def.com is proxied to localhost:8000/projects/defproject.

    Currently, the setup I have below results in 100% 403 errors. The server is running as www-data, the directory where all trac files are stored is owned by www-data, AND tracd (as shown below) is running as www-data, so I'm not sure where it's getting hung up.

    The relevant configuration in /var/apache2/sites-enabled/trac.abc.com:

        ProxyPass / http://localhost:8000/abcproject
        ProxyPassReverse / http://localhost:8000/abcproject

    The relevant configuration in /var/apache2/sites-enabled/trac.def.com:

        ProxyPass / http://localhost:8000/defproject
        ProxyPassReverse / http://localhost:8000/defproject

    The command used to instantiate tracd:

        tracd -a defproject,/var/www/vhosts/trac-common/users.htdigest,DEFProject \
              -a abcproject,/var/www/vhosts/trac-common/users.htdigest,ABCProject \
              -p 8000 -b localhost -e /var/www/vhosts/trac-common/projects

    If I access the site at http://localhost:8000/ everything works fine, but if I try to access it via any of the proxied hosts I end up with 403 at every turn. I've used mod_proxy successfully as described above for other servers, such as couchdb, so maybe this has to do with the headers sent by tracd?
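
    One mechanical thing worth ruling out first: without trailing slashes, ProxyPass does plain prefix substitution, so a request for /login is forwarded as /abcprojectlogin rather than /abcproject/login. A hedged correction, plus the matching check from the shell:

        ProxyPass / http://localhost:8000/abcproject/
        ProxyPassReverse / http://localhost:8000/abcproject/

        # compare what tracd serves directly with what arrives through the proxy
        curl -i http://localhost:8000/abcproject/login
        curl -i -H 'Host: trac.abc.com' http://localhost/login

    If the 403s persist with the slashes fixed, the next suspect is Apache itself (a Proxy or Location block denying access), since tracd would log the request if it ever arrived.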

  • NFS: Server says "authenticated mount request", but client sees "access denied"

    - by zigdon
    I have two machines, an NFS server (RHEL) and a client (Debian). The server has NFS set up, exporting a particular directory:

        server:~$ sudo /usr/sbin/rpcinfo -p localhost
           program vers proto   port
            100000    2   tcp    111  portmapper
            100000    2   udp    111  portmapper
            100024    1   udp    910  status
            100024    1   tcp    913  status
            100021    1   udp  53391  nlockmgr
            100021    3   udp  53391  nlockmgr
            100021    4   udp  53391  nlockmgr
            100021    1   tcp  32774  nlockmgr
            100021    3   tcp  32774  nlockmgr
            100021    4   tcp  32774  nlockmgr
            100007    2   udp    830  ypbind
            100007    1   udp    830  ypbind
            100007    2   tcp    833  ypbind
            100007    1   tcp    833  ypbind
            100011    1   udp    999  rquotad
            100011    2   udp    999  rquotad
            100011    1   tcp   1002  rquotad
            100011    2   tcp   1002  rquotad
            100003    2   udp   2049  nfs
            100003    3   udp   2049  nfs
            100003    4   udp   2049  nfs
            100003    2   tcp   2049  nfs
            100003    3   tcp   2049  nfs
            100003    4   tcp   2049  nfs
            100005    1   udp   1013  mountd
            100005    1   tcp   1016  mountd
            100005    2   udp   1013  mountd
            100005    2   tcp   1016  mountd
            100005    3   udp   1013  mountd
            100005    3   tcp   1016  mountd

        server$ cat /etc/exports
        /dir *.my.domain.com(ro)

        client$ grep dir /etc/fstab
        server.my.domain.com:/dir /dir nfs tcp,soft,bg,noauto,ro 0 0

    All seems well, but when I try to mount, I see the following:

        client$ sudo mount /dir
        mount.nfs: access denied by server while mounting server.my.domain.com:/dir

    And on the server I see:

        server$ tail /var/log/messages
        Mar 15 13:46:23 server mountd[413]: authenticated mount request from client.my.domain.com:723 for /dir (/dir)

    What am I missing here? How should I be debugging this?
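
    When mountd logs "authenticated mount request" but the client still sees "access denied", the usual culprit is the hostname match on the export: a wildcard like *.my.domain.com only matches if the client's IP reverse-resolves, from the server's point of view, to a name with that suffix (and forward-resolves back consistently). Hedged checks on the server:

        # what is actually exported, with the effective options
        exportfs -v
        showmount -e localhost

        # does the client's address resolve to something the wildcard matches?
        host <client-ip>

    A quick way to test the theory is to temporarily export to the client's IP address instead of the wildcard and retry the mount; if that works, the fix is in DNS or /etc/hosts, not in NFS.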

  • On Windows 2008 R2, how do I back up DHCP if the DHCP .mdb database is always busy?

    - by johnny
    I get this from my backup software:

        C:\WINDOWS\system32\dhcp\dhcp.mdb : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\j50.log : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\j50tmp.log : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\tmp.edb : The process cannot access the file because it is being used by another process.

    My questions:

        1. Should I be doing a manual backup of DHCP via command-line tools, or maybe with MMC (Action > Backup), before I run my backup?
        2. Is the %SystemRoot%\System32\DHCP\Backup directory always kept up to date? (That directory does get backed up by the backup software.)

    I'm answering my own question here, but the registry key below is set to 0x3c, i.e. 60 minutes, I believe:

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DHCPServer\Parameters\BackupInterval

    This is not the backup software included with Windows. It is another product, but I have seen this with every backup software I've ever used.

  • Execute encrypted files but don't let anybody read them.

    - by Stebi
    I want to provide a virtual machine image with an installed web application. The user should be able to boot the VM (not log in, just boot it), and a webserver should start automatically. The point is that I want to hide the (Ruby) source code of the web application from everyone, as there is no obfuscator for Ruby. I thought I could use filesystem encryption to encrypt the directory with the source code (or even a whole partition), but the webserver user must be able to read it automatically after booting. Nobody is allowed to log in as the webserver user (or any other user), so no one else can read the contents. My questions are now:

        1. Is this possible? Because I give away the whole VM, everybody could mount its virtual discs and read them (except the encrypted one).
        2. Is it then possible to find the key the webserver user needs to decrypt the files, and decrypt them manually? Or is it safe to give such a VM away?

    The problem is that everything needed to decrypt must be included somewhere in the VM, or else the webserver cannot start automatically. Maybe I'm completely wrong and you have another tip for me for securing the source code.

  • Java website on Tomcat PHP website on Apache - how to get PHP web pages into Java web pages?

    - by Venkat
    We have a Java web application deployed on Tomcat. We have also set up Apache with mod_proxy_ajp to route web requests (ports 80/443) to Tomcat. We would now like to deploy a PHP application on the same Apache server, probably under a subdirectory (/var/www/ourapp), and we would like to access and display web pages from the PHP application within web pages generated by the Java application. We are planning to implement single sign-on as well.

    Example: a web page generated by Java has jQuery tabs, and we'd like to display the PHP web page within a tab while all the other HTML comes from the Java application.

    Can you please give an overall picture of how to proceed with this? Mainly:

        1. How should we install/set up our PHP application on the same Apache server that is used to route web requests to Tomcat? (I.e., set up a subdomain, or install it in a subdirectory?)
        2. How do we bring the PHP pages into the present web pages (generated by Java)? Can we use AJAX requests, or should we go for applications such as the Java PHP Bridge or Quercus?

    Thank you for your time in advance. Regards.
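
    If the PHP app lives on the same Apache that already proxies to Tomcat, the key is to exempt its path from the AJP forwarding before the catch-all rule. A hedged sketch (the path and AJP port are assumptions; ProxyPass exclusions must precede the general rule):

        # serve the PHP app directly from Apache
        ProxyPass /ourapp !
        Alias /ourapp /var/www/ourapp

        # everything else continues to Tomcat
        ProxyPass / ajp://localhost:8009/
        ProxyPassReverse / ajp://localhost:8009/

    With that in place, the Java pages can pull the PHP pages in client-side, e.g. loading /ourapp/view.php into a jQuery tab via $('#tab').load('/ourapp/view.php'); since both apps then share one origin, cookies for single sign-on also become simpler.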
