Search Results

Search found 14344 results on 574 pages for 'path'.


  • Lotus Domino - DAOS not reducing file size?

    - by SydxPages
    I have implemented DAOS on a Lotus Domino server (8.5.3 FP2) as follows, in the server document:

        Store file attachments in DAOS: Enabled
        Minimum size of object before Domino will store in DAOS: 64000 bytes
        DAOS base path: E:\DAOS
        Defer object deletion for: 30 days

    Transaction logging is running, and the specific test database has the following advanced properties set:

        Domino Attachment and Object Service (ticked)
        Use LZ1 compression for attachments
        Compress Database Design
        Compress Data

    I have restarted the server. When I run a compact -c, it compacts the database but does not reduce its size: the database is 60 GB in Windows Explorer both before and after. The DAOS directory (E:\DAOS) is 35 GB in size. When I run the command 'Tell DAOSMgr Status tmp\test.nsf', I get the following response. From looking around on the net, I believe a ticket count of 0 means that the db is not really DAOS'ed?

        Admin Process: Searching Administration Requests database
        DAOSMGR: Status tmp\test.nsf started
        DAOS database status:
        Database: E:\Lotus\Domino\Data\tmp\test.nsf
        Database state = Synchronized
        Last resynchronized: 03/09/2012 02:49:13 PM
        Ticket count: 0
        DAOSMGR: Status tmp\test.nsf completed

    I have run fixup on the database. Whenever I have tried to run the DAOS estimator, it has crashed; that was a known problem with larger databases on earlier versions of Domino, but is supposedly fixed now. Can anyone tell me why the size has not reduced? Am I missing anything?
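    A ticket count of 0 suggests no attachments have actually been moved into DAOS for this database yet. One thing worth trying (a sketch, not a confirmed fix): DAOS only pulls attachments out during a copy-style compact with DAOS explicitly enabled, so forcing that from the server console may populate the ticket count:

        load compact tmp\test.nsf -c -DAOS ON
        tell daosmgr status tmp\test.nsf

    If the ticket count rises after this, the earlier compacts were simply not DAOS-enabled; the 35 GB already in E:\DAOS would then belong to other databases on the server.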

    Read the article

  • Reverse proxy apache to weblogic problem

    - by Zlatoroh
    Hello! I have an Apache 2.2 server and WebLogic 11g running on a web server. Apache is set up as a reverse proxy on port 8080; WebLogic serves two web pages on port 7001:

        first page: localhost:7001/e-SPP/app
        second page: localhost:7001/e-sprejem/app

    I would like to access these two pages through Apache like so: localhost:8080/e-SPP/app and localhost:8080/e-sprejem/app. My configuration:

        Listen 8080
        ServerName localhost:8080
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyRequests Off
        ProxyPreserveHost On
        RewriteEngine On
        <Location /e-SPP/app>
            ProxyPass http://localhost:7001/e-SPP/app
            ProxyPassReverse http://localhost:7001/e-SPP/app
        </Location>
        <Location /e-sprejem/app>
            ProxyPass http://localhost:7001/e-sprejem/app
            ProxyPassReverse http://localhost:7001/e-sprejem/app
        </Location>

    This configuration opens my pages, but they are black and white because the CSS and JS aren't loaded. The path to the CSS over the proxy looks like this: localhost:8080/e-SPP/css/style.css, which doesn't open the CSS; if I change the port to 7001 then it works: localhost:7001/e-SPP/css/style.css. What should I do so that the CSS and JS are loaded? Interestingly, the favicon is being loaded: http://localhost:8080/e-SPP/images/new/favicon.gif. Thanks for your help!
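    One likely cause: only /e-SPP/app and /e-sprejem/app are proxied, so requests for /e-SPP/css/... never reach WebLogic at all. A sketch of a wider mapping, assuming all static assets live under the same context roots on WebLogic:

        <Location /e-SPP>
            ProxyPass http://localhost:7001/e-SPP
            ProxyPassReverse http://localhost:7001/e-SPP
        </Location>
        <Location /e-sprejem>
            ProxyPass http://localhost:7001/e-sprejem
            ProxyPassReverse http://localhost:7001/e-sprejem
        </Location>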

    Read the article

  • Windows 7 Can't Connect to Network Drive on Windows XP

    - by Alex Yan
    I have a Windows XP desktop and a Windows 7 laptop, both connected to a TRENDnet TEW-432BRP router, which is connected to the Internet. They both have static IPs. The desktop is wired and has an external hard drive connected to it; the laptop is wireless. I enabled sharing on the external hard drive about two years ago when I bought it and mapped it as a network drive on the laptop. I think it was yesterday that the laptop just stopped recognizing any of the computers on my network (when I open Network, my laptop is the only one in it). I also get an error message when I try to connect to the network drive: "An error occurred while connecting A: to \\CERTIFIED-DATA\Expansion. Microsoft Windows Network: The network path was not found. The connection has not been restored." Both computers run Avast, and there haven't been any problems with it. This has happened before, but I never figured out why or how to fix it; it is usually fixed when I reinstall the OS of the affected system. Edit:

        I can't navigate to the computer using \\CERTIFIED-DATA. I get a message saying "Windows cannot access \\CERTIFIED-DATA. Check the spelling of the name. Otherwise, there might be a problem with your network."
        I clicked diagnose on the message and it failed to find anything wrong.
        I clicked diagnose on my wireless connection, and it just keeps trying to check if something is wrong with the connection.
        I can ping it successfully.
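    If the successful ping was by IP address rather than by name, one thing worth checking (a diagnostic sketch only, not a fix) is whether NetBIOS name resolution is the part that broke:

        ping CERTIFIED-DATA
        nbtstat -a CERTIFIED-DATA
        net view \\CERTIFIED-DATA

    If ping by name fails while ping by IP succeeds, the share can usually still be reached by IP in the meantime (e.g. net use A: \\192.168.x.x\Expansion, substituting the desktop's static address) while the name-resolution problem is tracked down.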

    Read the article

  • Are Windows Domain Service Accounts Really Necessary?

    - by Zach Bonham
    One of the biggest problems we have in automating application deployments is the idea that running IIS AppPools and Windows Services under domain service accounts is a 'best practice'. Unfortunately, this best practice sometimes causes deployment headaches: either we need to provision a new domain-level service account quickly, or, once we have the account, we need to manage the account credentials. I had a great conversation about not making domain-level service accounts a requirement and effectively taking one of two approaches:

        1. Secure at the node level using the machine account (domain\machine$) and add the node to the appropriate ActiveDirectory/Sql groups/roles.
        2. Create local app-specific accounts on each machine (machine\myapp) and add that account to the appropriate ActiveDirectory/Sql groups/roles (the password here can change per deployment; it doesn't need to be stored).

    In both cases, it seems easier to add an account to the appropriate group/role, or even stand up a new local account, than to provision a new domain-level account and manage those credentials. This would hopefully ease the management burden on the ActiveDirectory, Sql Server and Operations teams, as there would be no more password management. We've not actually been able to implement this in practice yet. I am coming from a development background, so I'm curious: how many ways could this approach go wrong? Can we really get rid of domain-level service accounts with this direction? I'd appreciate any thoughts from anyone who has taken this path! Thanks! Zach
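    For what the first approach looks like concretely on the SQL Server side, here is a sketch (the domain, server and database names are hypothetical):

        -- grant the web node's machine account access, instead of a domain service account
        CREATE LOGIN [CORP\WEBNODE01$] FROM WINDOWS;
        USE MyAppDb;
        CREATE USER [CORP\WEBNODE01$] FOR LOGIN [CORP\WEBNODE01$];
        EXEC sp_addrolemember 'db_datareader', 'CORP\WEBNODE01$';

    Nothing to rotate or store: the machine account's password is managed by Active Directory itself.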

    Read the article

  • Powershell script to delete sub folders and files if creation date is >7 days but maintain parent folders of sub folders and files <7 days old

    - by Mark
    I'm currently using the PowerShell script below to delete all files, directories and subdirectories of "$dump_path" that are seven days or older, based upon the creation date and not the modified date. The problem with this script is this: if folder "A" is seven (or more) days old, it will be deleted even if its subfolders and files are less than seven days old. What I would like this script to do is this: delete all files from the root and in all subfolders of "$dump_path" that are seven or more days old, but keep the parent folder(s) of any files and folders that are less than seven days old, even if that means the parent folders themselves are more than seven days old. Once all of a parent folder's subfolders and files are seven days or older, the parent can be deleted. A slightly obscure problem, I know, but the intention is to have a 7-day retention period for all data in a 'sandbox' location of our shared areas. Also, as an added bonus, it would be nice if it could generate a log of what it deletes and e-mail it out post-deletion. Thank you for reading and I hope that all makes sense! Mark

        # set folder path
        $dump_path = "c:\temp"

        # set minimum age of files and folders
        $max_days = "-7"

        # get the current date
        $curr_date = Get-Date

        # determine how far back we go based on current date
        $del_date = $curr_date.AddDays($max_days)

        # delete the files and folders
        Get-ChildItem $dump_path | Where-Object { $_.CreationTime -lt $del_date } | Remove-Item -Recurse
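    One way to get that behaviour (a sketch, assuming PowerShell 3.0+ for the -File/-Directory switches, and untested against this share): delete old files first, then prune only the directories that end up empty, so any folder still holding recent files survives regardless of its own age:

        # remove files older than the cutoff, logging what goes
        $deleted = Get-ChildItem $dump_path -Recurse -File |
            Where-Object { $_.CreationTime -lt $del_date }
        $deleted | Remove-Item -Force
        $deleted | Select-Object -ExpandProperty FullName | Out-File C:\temp\deletion_log.txt

        # prune directories that are now empty, deepest first
        Get-ChildItem $dump_path -Recurse -Directory |
            Sort-Object { $_.FullName.Length } -Descending |
            Where-Object { -not (Get-ChildItem $_.FullName -Force) } |
            Remove-Item -Force

    The log file could then be handed to Send-MailMessage for the e-mail step.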

    Read the article

  • How to set up a serial connection to a Windows 7 computer

    - by oli_arborum
    I need to set up a "dial-in" connection to a Windows 7 (Ultimate) computer via a serial null-modem cable, to be able to connect from a Windows XP client to that computer and exchange data over IP. Question 1: How do I do that? I found the information neither via Google nor in MSDN. Seems like no one has ever tried this before... ;-) I already managed to install a legacy modem device called "Communications cable between two computers" and found the menu entry "New Incoming Connection..." under Network and Internet > Network Connections. When I finish this wizard, I get the message that the "Routing and Remote Access service" cannot be started. In the event viewer I see the following error messages:

        "The currently configured authentication provider failed to load and initialize successfully. The requested name is valid, but no data of the requested type was found." (Source: RemoteAccess, EventID: 20152)
        "The Routing and Remote Access service terminated with service-specific error The requested name is valid, but no data of the requested type was found." (Source: Service Control Manager, EventID: 7024)

    The Windows 7 installation is "naked", i.e. no additional software or services are installed. Question 2: Am I on the right path to set up the connection? Question 3: How can I get the Routing and Remote Access service running?
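    Before digging into the authentication-provider error itself, it may be worth confirming from an elevated command prompt how the service and its dependencies are configured (a diagnostic sketch only):

        sc qc RemoteAccess
        sc query RemoteAccess
        net start RemoteAccess

    Starting the service with net start directly sometimes yields a more specific error code than the wizard does, which can then be looked up with net helpmsg <code>.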

    Read the article

  • How to configure a Web.Config file to allow custom 404 handling while still displaying on-page 500 errors

    - by Mark
    To customize 404 handling, and based on the hosting company's suggestion, we are currently using the web.config setup below. However, we quickly realized that with this configuration, any page error (500 error) also gets redirected to this custom error page. How can I modify this config file so that we continue to handle 404s with the custom file while still being able to view on-page errors?

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.webServer>
            <httpErrors errorMode="DetailedLocalOnly" defaultPath="/Custom404.html" defaultResponseMode="ExecuteURL">
              <remove statusCode="404" subStatusCode="-1" />
              <error statusCode="404" prefixLanguageFilePath="" path="/Custom404.html" responseMode="ExecuteURL" />
            </httpErrors>
          </system.webServer>
          <system.web>
            <customErrors mode="On">
              <error statusCode="404" redirect="/Custom404.html" />
            </customErrors>
          </system.web>
        </configuration>
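    One likely culprit is the defaultPath/defaultResponseMode pair on <httpErrors>, which makes Custom404.html the fallback for every status code, not just 404. A sketch that scopes the custom page to 404 only (untested, and assuming the host allows these settings to be overridden):

        <system.webServer>
          <httpErrors errorMode="DetailedLocalOnly">
            <remove statusCode="404" subStatusCode="-1" />
            <error statusCode="404" path="/Custom404.html" responseMode="ExecuteURL" />
          </httpErrors>
        </system.webServer>

    On the ASP.NET side, <customErrors mode="On"> also hides error details even locally; mode="RemoteOnly" would keep on-page errors visible from the server itself.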

    Read the article

  • Symbolic links not working in MySQL

    - by Eno
    I'm having an issue; I searched a lot, but I'm not sure if it's related to a previous security patch. On the latest version of MySQL on Debian Lenny (5.0.51a-24) I need to share one table between two databases. The two databases are in the same path (/var/lib/mysql/db1 and db2), and I created symbolic links in db2 pointing to the table files in db1. When I query the shared table from db2 I get this: 'ERROR 1030 (HY000): Got error 140 from storage engine'. This is how it looks:

        test-lan:/var/lib/mysql/test3# ls -alh
        drwx------ 2 mysql mysql 4.0K 2010-08-30 13:28 .
        drwxr-xr-x 6 mysql mysql 4.0K 2010-08-30 13:29 ..
        lrwxrwxrwx 1 mysql mysql   28 2010-08-30 13:28 blbl.frm -> /var/lib/mysql/test/blbl.frm
        lrwxrwxrwx 1 mysql mysql   28 2010-08-30 13:28 blbl.MYD -> /var/lib/mysql/test/blbl.MYD
        lrwxrwxrwx 1 mysql mysql   28 2010-08-30 13:28 blbl.MYI -> /var/lib/mysql/test/blbl.MYI
        -rw-rw---- 1 mysql mysql   65 2010-08-30 13:24 db.opt

    I really need those symlinks. Is there a way to make them work like before? (The old MySQL server is fine.) Thanks,
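    One thing worth ruling out first (a guess, since the error appeared after a package upgrade): whether symlink support is still enabled at all in the new server. MySQL can be configured with symlinks disabled, which is checkable and, if off, switchable in my.cnf:

        mysql> SHOW VARIABLES LIKE 'have_symlink';

        # /etc/mysql/my.cnf
        [mysqld]
        symbolic-links = 1

    Note that the security fixes around CVE-2008-4097/4098 specifically tightened MyISAM symlink handling, so even with have_symlink = YES the patched server may refuse table files linked into another database's directory.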

    Read the article

  • OpenVPN & iptables -- port forwarding and gateway

    - by Smith.Lai
    The problem is similar to this scenario: "iptables rule still take effect after deleted". Scenario: there are several clients (C1~C10) providing some services, such as SSH, HTTP... The clients are actually personal computers behind NAT; their IPs might be 192.168.0.x. To easily access these machines through the internet, I built an OpenVPN server (S1). All of C1~C10 connect to S1 with VPN addresses 10.8.0.x. If a user (U1) wants to access C1's SSH through the internet, he can connect to S1 with port "55555", and S1 port-forwards 55555 to 10.8.0.6:22:

        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 55555 -j DNAT --to-destination 10.8.0.6:22

    It worked well until I commented out the following in the OpenVPN server.conf. I commented it out because I think it makes all connections go through S1:

        ;push "redirect-gateway"

               |-------(NAT)--------|
        (C1)---|                    (INTERNET)----(U1)
               |-----(VPN)----(S1)--|

    The C1~C10 have their own paths to internet resources through NAT. The server load would be heavy if all C1~C10 connections went through S1 (for example, C1 sending data to C2, or C1 downloading data from an FTP site). Is there a way to solve this quandary?
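    A common explanation for this setup breaking without redirect-gateway: the DNAT'ed packets reach C1 through the tunnel, but C1's replies to U1's public address leave via its own NAT'ed default route instead of back through S1, so the connection never completes. A sketch of one standard workaround: source-NAT the forwarded traffic on S1 so replies must return through the tunnel (tun0 and 10.8.0.1 are assumed interface/address names):

        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 55555 -j DNAT --to-destination 10.8.0.6:22
        iptables -t nat -A POSTROUTING -o tun0 -d 10.8.0.6 -p tcp --dport 22 -j SNAT --to-source 10.8.0.1

    The cost is that C1's sshd sees connections as coming from 10.8.0.1 rather than U1's real address; only the port-forwarded traffic goes through S1, so normal client traffic keeps its direct NAT path.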

    Read the article

  • IIS 7.0 404 Custom Error Page and web.config

    - by Colin
    I am having trouble with a custom 404 error page. I have a domain running a .NET project with its own error handling, and a web.config running for the domain which contains:

        <customErrors mode="RemoteOnly">
          <error statusCode="500" redirect="/Error"/>
          <error statusCode="404" redirect="/404"/>
        </customErrors>

    In a subdirectory of that domain I am ignoring all routes (by doing routes.IgnoreRoute("Assets/{*pathInfo}"); in the .NET project), and I want to put a custom 404 error page on that and any subdirectories of Assets. The subdirectory contains static content like images, css, js, etc. So in the Error Pages section of IIS I put a redirect to an absolute URL. The web.config for that directory looks like the following:

        <system.webServer>
          <httpErrors>
            <remove statusCode="404" subStatusCode="-1" />
            <error statusCode="404" prefixLanguageFilePath="" path="http://mydomain.com/404" responseMode="Redirect" />
          </httpErrors>
        </system.webServer>

    But when I navigate to an unknown URL under that directory, I still see the default IIS 404 page. I am also seeing an alert in IIS that reads: "You have configured detailed error messages to be returned for both local and remote requests. When this option is selected, custom error configuration is not used." Does this have anything to do with the customErrors mode="RemoteOnly" in the site web.config? I have tried to override the customErrors in the subdirectory web.config but nothing changes. Any help would be appreciated. Thanks.
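    That IIS alert usually means httpErrors is running with errorMode="Detailed", in which case custom error pages are skipped entirely; this is a separate setting from ASP.NET's customErrors. A sketch of the subdirectory web.config with the mode set explicitly, assuming overriding it is allowed at this level:

        <system.webServer>
          <httpErrors errorMode="Custom">
            <remove statusCode="404" subStatusCode="-1" />
            <error statusCode="404" path="http://mydomain.com/404" responseMode="Redirect" />
          </httpErrors>
        </system.webServer>

    If IIS rejects this with a locking error, the httpErrors section may need to be unlocked at the server level first (appcmd unlock config /section:system.webServer/httpErrors).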

    Read the article

  • Upgrading Visio 2000 to Visio 2007

    - by dirtside
    I have Microsoft Visio 2000 SR 1, and recently purchased Microsoft Office Visio Standard 2007 with the understanding (supported by the product info and some other research) that I'd be able to upgrade. However, when I install 2007, it tells me it can't find a previous install of Visio, but... it's right there! Here's the exact message:

        "Setup can't find a version of Microsoft Office on your computer. If Office is installed on a disk or network share, click the browse button to select the appropriate disk or share... (etc.)"

    No matter which directory or drive I pick (various Office installs, the old Visio install, various subdirectories), it gives the following message:

        "The path you have chosen does not point at a qualifying upgradeable product. Click 'Retry' to try again or 'Cancel' to quit setup."

    Any ideas? This is a legit copy of Visio 2007 (purchased from Amazon) and the copy of Visio 2000 is legit as well. I'm not sure what exactly the installer is looking for that it would consider a "qualifying upgradeable product". A specific file?

    Read the article

  • samba "username map" stopped working

    - by Kris_R
    It was time to upgrade our group server (new HDs, problems with the old installation of DRBD, etc.). Going as usual for CentOS, I upgraded the whole system from 6.3 to 6.4. The latter came with Samba 3.6, where the old one had 3.5. I transferred most of the users by copying /etc/passwd, /etc/shadow and the Samba accounts with pdbedit. Homes were on an NFS drive. The translations of Unix accounts to Samba accounts are located in /etc/samba/smbusers. Strangely enough, on some Windows clients there were problems connecting to the Samba shares. In one case the only thing that worked was to use the Unix account name instead of the Windows name. In another, it was possible to mount the network drive and open it in Windows Explorer, but other applications like Total Commander, on attempting to open the drive, gave the message "Cannot connect to z:" (sometimes at this moment user/pass were requested). The smb.conf has the following entries:

        [global]
        security = user
        passdb backend = tdbsam
        username map = /etc/samba/smbusers
        ...

        [Kris]
        comment = Kris's Private
        path = /SMB/Users/Kris
        writeable = yes
        read only = no
        browseable = yes
        users = krisr
        printable = no
        security mask = 0777
        force security mode = 0
        directory security mask = 0777
        force directory security mode = 0
        force create mode = 0775
        force directory mode = 6775

    The smbusers file:

        # Unix_name = SMB_name1 SMB_name2 ...
        krisr = Kris

    Of course testparm runs without any errors. With Samba 3.5 I was used to log output of the form "Mapped user Kris to krisr"; nothing like this happens now, just the message "check_sam_security: Couldn't find user Kris in passdb". I read on the web that some people had problems with 3.6 and security = ADS, but that wasn't helpful in my case. I'm seriously thinking about downgrading back to Samba 3.5, but before that step I wanted to ask if somebody knows a solution to these problems. P.S. I asked this question at Server Fault but no answer came; maybe I'll have more luck on this forum. Sorry for the duplicate if any of you read both.
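    To narrow this down, it may help to watch a mapping attempt directly at a higher debug level (a diagnostic sketch; the share and user names follow the config above):

        smbclient //localhost/Kris -U Kris -d 3

    If the log shows the literal name Kris being looked up in tdbsam with no mapping message at all, the username map file is being ignored rather than mis-parsed. Also worth checking: pdbedit -L, to confirm which account names actually exist in the new tdbsam, since the map goes from Windows name (Kris) to the passdb name (krisr), and krisr must be present there.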

    Read the article

  • hardlinking takes a lot of space

    - by mr_schlomo
    I made an rsync incremental backup script for my server that will copy a MySQL database backup and a specified folder path to a remote server. Here's the code on Github. Code excerpt from lines 53-57:

        ############### Create most current hard link
        echo "Creating most current hard link on backup server $most_recent_backup_link"
        ssh $remote_backup_server rm -rf ${most_recent_backup_link}
        ssh $remote_backup_server cp -alv ${remote_backup_folder}/backup-${backup_folder_name}/ ${most_recent_backup_link}

    I'm having a problem with creating the most current hard links on the backup server (lines 53-57 in the program). Everything works, and rsync only copies about 1-2MB of data, but the hard-link copy process uses about 30MB of data. I get a huge laundry list of files that haven't changed, and the only ones that have changed are very small in size. Normally this isn't a problem, but when you back up every hour, the backup should be as small as possible. For example, on the last backup I did, rsync transferred 1.3MB, but the backup directory grew by 35MB. Why are the hard links taking up so much hard drive space?
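    One detail worth noting when sizing this up: cp -al hard-links file contents, but directories can never be hard-linked, so every cp -al pass creates a complete fresh copy of the directory tree itself (one inode plus directory entries per folder), which for a deep tree can easily run to tens of MB. If most_recent_backup_link only needs to point at the latest snapshot, a symlink sidesteps that overhead entirely (a sketch against the variable names above, untested):

        ssh $remote_backup_server "rm -rf ${most_recent_backup_link} && ln -s ${remote_backup_folder}/backup-${backup_folder_name} ${most_recent_backup_link}"

    The symlink costs a few bytes per run instead of a full directory skeleton, and anything reading the "most current" path still lands in the newest backup.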

    Read the article

  • Update git on mac

    - by Meltemi
    I can't remember how I installed git a while back... but now it's living in /usr/bin/git and needs to be updated. I don't care how (pre-compiled or build my own), but what I don't want is another version existing somewhere else. I vaguely remember curl(ing) down the source and compiling it, but I'm not positive. Anyway, what's the easiest way to keep Git up to date under Mac OS X? Side question: I'm not that familiar with git. Once it's installed, is it entirely contained within its directory? So, in my case, is everything about git on my machine (excluding the actual code repositories, of course) in /usr/bin/git? If so, can I just move git around with a simple mv /usr/bin/git /opt/git, then update my $PATH, and everything should work as before? If so, I suppose I could just install again by any method and to any directory... and then move the new one into /usr/bin, replacing the old version? Or is this bad?
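    On the side question: /usr/bin/git is a single binary, not a directory, and a source install typically also drops helper files under places like /usr/share/git-core, so git is not fully self-contained there. One low-friction way to stay current (an option, assuming Homebrew is acceptable) is to let a package manager own the install and keep /usr/bin out of it:

        brew install git
        # make sure the brew-installed git wins in PATH
        export PATH="/usr/local/bin:$PATH"
        which git && git --version

    The old /usr/bin/git can then be left alone (or removed) rather than moved around by hand.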

    Read the article

  • Win7 Command Prompt drives not available

    - by jmerrill
    I have the opposite problem compared to the author of this question: Hard drive access denied from Windows Explorer (but works from command prompt as Admin). I can see all the drive letters for a particular server in Windows Explorer, and can navigate through them exactly as would be expected. The drive letters are displayed in Explorer in parens to the right of the path info -- finalpathportion (\\server\otherpathportions) (driveletter:), e.g. jmerrill (\\server\users) (H:). But the drive letters are not usable in a "Run as Administrator" command prompt. They have worked in the past, but I have since rebooted. I thought that perhaps I had to start a new command prompt after visiting them in Explorer, but that did not help. "net use" in the command prompt shows

        Unavailable     H:    \\server\users\jmerrill    Microsoft Windows Network

    with similar info for the other drives. I can do

        net use h: /d
        net use h: \\server\users\jmerrill

    for each drive to get the letters to be available in the command prompt. It is perhaps obvious that I don't think it should be necessary to do that. Does anyone have any ideas?
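    This is consistent with UAC's split token: drives mapped in the normal (filtered) session aren't automatically visible in the elevated session, so after a reboot the elevated prompt sees them as Unavailable until they're re-mapped there. A frequently cited workaround (a registry sketch; weigh the security implications before using it) is to let Windows link the two sessions' drive mappings:

        reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v EnableLinkedConnections /t REG_DWORD /d 1

    followed by a reboot.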

    Read the article

  • Virtualhost setup for Ruby on Rails application (mod passenger)

    - by Ingo86
    Hi all, I'm trying to install Redmine under Apache. The Apache server works on a local network, and my setup consists of a single virtual host; I can get into different projects simply by using the corresponding path, e.g. http://ip_address/folder_of_the_project_1. How can I set up the virtual host to make Redmine work in this situation? Here is my current virtual host setup:

        NameVirtualHost *
        <VirtualHost *>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/
            RailsBaseURI /redmine
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            <Directory /var/www/redmine/public>
                Options FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    Thank you, Ingo86
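    For Passenger to serve an app under a sub-URI like /redmine, the documented pattern is a symlink from the DocumentRoot into the app's public directory, plus the matching RailsBaseURI (a sketch; /opt/redmine is an assumed install path):

        ln -s /opt/redmine/public /var/www/redmine

        # and in the vhost:
        RailsBaseURI /redmine
        <Directory /var/www/redmine>
            Options -MultiViews
            AllowOverride None
        </Directory>

    The key detail is that /var/www/redmine must resolve to the app's public folder, not to the application root, or Passenger won't pick the app up.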

    Read the article

  • How do I run a stable Windows XP kvm guest on Ubuntu 10.04?

    - by Jean-Paul Calderone
    I have three Windows XP guests running on a recently upgraded 64-bit Ubuntu 10.04 system. Occasionally (on the order of once every few days), one of the guests will become non-responsive and the kvm process on the host which is running that guest will start consuming 100% CPU. It will continue to do so until it is killed. When restarted, it will be fine for a while, and then the issue repeats. The kvm command line used to run all three guests is this:

        /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 1024 -smp 1 -name bigdog21vmxp1 \
            -uuid ea47ff84-125b-16f7-9a4d-a6d0d8bab46a \
            -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/bigdog21vmxp1.monitor,server,nowait \
            -monitor chardev:monitor \
            -localtime \
            -boot c \
            -drive file=/var/lib/libvirt/images/windowsxp-1.qcow2,if=ide,index=0,boot=on,format=qcow2 \
            -net nic,macaddr=54:52:00:02:06:0e,vlan=0,name=nic.0 \
            -net tap,fd=58,vlan=0,name=tap.0 \
            -chardev pty,id=serial0 \
            -serial chardev:serial0 \
            -parallel none \
            -usb \
            -usbdevice tablet \
            -vnc 127.0.0.1:1 \
            -k en-us \
            -vga cirrus \
            -soundhw es1370

    Why do the systems misbehave this way sometimes? And what configuration can I change in order to fix it? Or, if the problem is due to a bug in kvm, what is the process for isolating a kvm failure so that the developers have a chance of fixing it?
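    On the isolation question: the monitor socket already present in that command line can be used to inspect a stuck guest without restarting it (a diagnostic sketch; socat must be installed):

        socat - UNIX-CONNECT:/var/lib/libvirt/qemu/bigdog21vmxp1.monitor
        # then at the (qemu) prompt:
        info status
        info registers

    Capturing info registers a few times while the process is spinning shows whether the guest vCPU is stuck at one instruction, which is exactly the sort of data a kvm bug report can use; strace -p <pid> on the host-side kvm process is a useful complement.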

    Read the article

  • Database types for customer analytics

    - by Drewdavid
    I am exploring a paid solution to start providing better embedded, dashboard-style analytics information to our website customers/account holders, but would like to also offer an in-house development option to our team. The more equipped I am with specifics (such as the subject of this question), the better the adoption rate from the team (or so I have found), regardless of the path we choose. Would anyone care to summarize a couple of options for a fast and scalable database type through which we would provide the following:

        • Daily pageviews of a user's account pages (users have between 1 and 1000 pages)
        • Some calculated/compounded metrics (such as conversion rate, i.e. the ratio of views of a certain page type to views of the contact-form thank-you page)
        • We have about 1,500 members (will need room to grow); the number of concurrently logged-in users will, for the question's sake, be 50

    I ask because our developer has balked at providing this level of "over time" granularity (i.e. daily) due to the amount of space it would take up in a MySQL database. To avoid a downvote, I have asked specifically for more than one option, realizing that different people will have different solutions. I will make amendments to my question if so guided by answering parties. Thank you for sharing your valued answers :)
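    For scale context on the MySQL-space concern, a daily rollup table is fairly compact (a sketch, with hypothetical names):

        -- one row per user page per day, incremented on each view
        CREATE TABLE page_views_daily (
            user_id   INT UNSIGNED NOT NULL,
            page_id   INT UNSIGNED NOT NULL,
            view_date DATE         NOT NULL,
            views     INT UNSIGNED NOT NULL DEFAULT 0,
            PRIMARY KEY (user_id, page_id, view_date)
        );

    Even assuming 1,500 users with an average of 100 pages each, 365 days a year is roughly 55 million small fixed-width rows per year, on the order of a couple of GB with the index, and pages with zero views on a given day need no row at all.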

    Read the article

  • Setup.exe called from a batch file crashes with error 0x0000006

    - by Alex
    We're going to be installing some new software on pretty much all of our computers, and I'm trying to set up a GPO to do it. We're running a Windows Server 2008 R2 domain controller and all of our machines are Windows 7. The GPO calls the following script, which sits on a network share on our file server. The script itself calls an executable that sits on another network share on another server. The executable immediately crashes with an error 0x0000006. The event log just says this:

        Windows cannot access the file for one of the following reasons: there is a problem with the network connection, the disk that the file is stored on, or the storage drivers installed on this computer; or the disk is missing. Windows closed the program Setup.exe because of this error.

    Here's the script (which is stored on \\WIN2K8R2-F-01\Remote Applications):

        @ECHO OFF
        IF DEFINED ProgramFiles(x86) (
            ECHO DEBUG: 64-bit platform
            SET _path="C:\Program Files (x86)\Canam"
        ) ELSE (
            ECHO DEBUG: 32-bit platform
            SET _path="C:\Program Files\Canam"
        )
        IF NOT EXIST %_path% (
            ECHO DEBUG: Folder does not exist
            PUSHD \\WIN2K8R2-PSA-01\PSA Data\Client
            START "" "Setup.exe" "/q"
            POPD
        ) ELSE (
            ECHO DEBUG: Folder exists
        )

    Running the script manually as administrator also results in the same error. Setting up a shortcut with the same target and parameters works perfectly. Manually calling the executable also works. Not sure if it matters, but the installer is based on dotNETInstaller; I don't know what version, though. I'd appreciate any suggestions on fixing this. Thanks in advance! UPDATE: I highly doubt it matters, but the network share the script is hosted on is a shared drive, while the network share the script references for the executable is a shared folder. Also, both shares have Domain Computers listed with full access on both the sharing and security tabs. And PUSHD works without wrapping the path in quotes.
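    That event-log text is the message Windows shows when an executable's backing file over the network becomes unreachable while the program is running. One hedged workaround to test, since running it interactively works: copy the installer to the local machine before launching, so nothing is demand-loaded over SMB mid-run (a sketch only; the temp folder name is an assumption):

        XCOPY "\\WIN2K8R2-PSA-01\PSA Data\Client\*" "%TEMP%\CanamSetup\" /E /I /Y
        START "" /WAIT "%TEMP%\CanamSetup\Setup.exe" /q

    If that runs cleanly under the GPO, the issue is network- or context-related (e.g. the computer account's access timing at startup) rather than the installer itself.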

    Read the article

  • How do I handle a request for "index.php/somepath" in Nginx config?

    - by Arne
    I'm currently attempting to serve Icinga from nginx. The new Icinga-Web UI attempts to load a file from the URL http://SERVER/icinga-web/index.php/appkit/squishloader/javascript (leading to a 404). How do I set up a location for this scenario? How do I get nginx to serve index.php and pass the full request path in a way Icinga understands? This is my current nginx config for Icinga (adapted from the default Apache config):

        server {
            server_name SERVER;
            root /opt/icinga/icinga-1.2.1;
            location /icinga-web/js/ext3 {
                alias /opt/icinga/icinga-1.2.1/lib/ext3;
            }
            location /icinga-web {
                alias /opt/icinga/icinga-1.2.1/pub;
            }
            location /icinga/cgi-bin {
                alias /opt/icinga/icinga-1.2.1/sbin;
            }
            location /icinga {
                alias /opt/icinga/icinga-1.2.1/share;
            }
            location ~ \.php([\?/].*)?$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    Any thoughts? Thanks!
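    URLs of the form /index.php/appkit/... carry the trailing segment as PATH_INFO, and nginx has a directive for splitting it off (a sketch adapted to the config above; untested against this Icinga layout):

        location ~ ^(.+\.php)(/.*)?$ {
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
        }

    fastcgi_split_path_info sets $fastcgi_script_name to the part ending in .php and $fastcgi_path_info to the trailing /appkit/squishloader/javascript, which is what PHP applications expecting PATH_INFO look for.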

    Read the article

  • Nginx + PHP5-FPM repeated cut outs 502

    - by James
    I've seen a number of questions here that highlight random 502s (Nginx + PHP-FPM = "Random" 502 Bad Gateway) and similar timeouts when using Nginx + PHP-FPM. Even with all those questions, I'm still unable to find a solution. Using Ubuntu 10.10 + Nginx + PHP5-FPM + APC, every 1 out of 4 requests ends in a timeout and failure. This isn't a load issue or large traffic; it happens even in a dev environment with one person. I am seeing this across three 1GB machines, each with the same configuration and the same problems.

    fastcgi_params:

        fastcgi_param QUERY_STRING      $query_string;
        fastcgi_param REQUEST_METHOD    $request_method;
        fastcgi_param CONTENT_TYPE      $content_type;
        fastcgi_param CONTENT_LENGTH    $content_length;
        fastcgi_param SCRIPT_NAME       $fastcgi_script_name;
        fastcgi_param REQUEST_URI       $request_uri;
        fastcgi_param DOCUMENT_URI      $document_uri;
        fastcgi_param DOCUMENT_ROOT     $document_root;
        fastcgi_param SERVER_PROTOCOL   $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE   nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR       $remote_addr;
        fastcgi_param REMOTE_PORT       $remote_port;
        fastcgi_param SERVER_ADDR       $server_addr;
        fastcgi_param SERVER_PORT       $server_port;
        fastcgi_param SERVER_NAME       $server_name;
        fastcgi_param REDIRECT_STATUS   200;

    /etc/php5/fpm/main.conf:

        ; FPM Configuration ;
        ;include=/etc/php5/fpm/*.conf

        ; Global Options ;
        pid = /var/run/php5-fpm.pid
        error_log = /var/log/php5-fpm.log
        ;log_level = notice
        ;emergency_restart_threshold = 0
        ;emergency_restart_interval = 0
        ;process_control_timeout = 0
        ;daemonize = yes

        ; Pool Definitions ;
        include=/etc/php5/fpm/pool.d/*.conf

    /etc/php5/fpm/pool.d/www.conf:

        [www]
        listen = 127.0.0.1:9000
        ;listen.backlog = -1
        ;listen.allowed_clients = 127.0.0.1
        ;listen.owner = www-data
        ;listen.group = www-data
        ;listen.mode = 0666
        user = www-data
        group = www-data
        ;pm.max_children = 50
        pm.max_children = 15
        ;pm.start_servers = 20
        pm.min_spare_servers = 5
        ;pm.max_spare_servers = 35
        pm.max_spare_servers = 10
        ;pm.max_requests = 500
        ;pm.status_path = /status
        ;ping.path = /ping
        ;ping.response = pong
        request_terminate_timeout = 30
        ;request_slowlog_timeout = 0
        ;slowlog = /var/log/php-fpm.log.slow
        ;rlimit_files = 1024
        ;rlimit_core = 0
        ;chroot =
        chdir = /var/www
        ;catch_workers_output = yes
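    Two low-risk experiments often suggested for this exact symptom (sketches, not confirmed fixes): switch the FPM listener from TCP to a unix socket, which takes the local TCP accept queue out of the picture, and turn on the slowlog to see whether requests are actually stalling inside PHP:

        ; in /etc/php5/fpm/pool.d/www.conf
        listen = /var/run/php5-fpm.sock
        request_slowlog_timeout = 5s
        slowlog = /var/log/php-fpm.log.slow

        # and in the nginx vhost
        fastcgi_pass unix:/var/run/php5-fpm.sock;

    If the 502s persist on the socket as well, the nginx error-log entry for a failed request (connect() failed vs. upstream timed out) narrows down which side is dropping it.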

    Read the article

  • Sharepoint Workflow "Failed on Start" only when powershell import script is called from task scheduler

    - by Matt Keller
    I created a simple PowerShell script that takes an XML file from a local directory on our SharePoint server and imports it into a specific SharePoint form library (a content-management-enabled library, if that makes any difference). This script works flawlessly if I run it from the PowerShell command line manually; I call it like this: ".\script_name.ps1". It completes without error, the item is imported into the form library successfully, the workflow begins on the item, and everything is happy dandy. However, I run into issues when I set up a scheduled task using Windows Server 2008 R2's Task Scheduler. The task runs the script without error, and it does actually import the XML into the form library; it looks perfectly normal, just as if I had run the script manually. However, after about 10 or 20 minutes, the workflow status for that item changes from "In progress" to "Failed on Start (Retrying)". The scheduled task in question is a basic task and has only one action (Start a program). The "Program/script" box is set to "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" and the "Add arguments" box is set to the path of the actual ps1 script (C:\scripts\sharepoint_import.ps1). I've tried running the task as various users. I've also tried with and without the "Run with highest privileges" check box. Nothing seems to work. For reference, here is the script i am using to import items into the form library.

    Read the article

  • SVNParentPath directory authorization

    - by James
    The question is a bit stupid, but I can't get it sorted. I have a server with SVN that uses the SVNPath directive in httpd.conf, and all works fine with path authorizations. Now I'm installing a second server where I'm going to use the SVNParentPath directive, and I've got it all running except that I can't get the authorization part quite right. From what I understand, it's the same as when you use SVNPath, but you need to specify the repo name before the folder name. My SVNParentPath is /srv/svn/, and I created a directory /srv/svn/testproj and then ran svnadmin create /srv/svn/testproj. Now I'm configuring my authorization file:

        [/]
        * =
        svnadmin = rw
        adusgi = rw

        [testproj:/svn/testproj]
        demada = rw
        degari = rw
        scarja = rw

    Now if I try to commit to /svn/testproj using user svnadmin or adusgi, all is fine. If I try, for example, demada, it doesn't work. (I've run the htpasswd2 commands for the users, obviously.) The directory is correct, or at least that's how I use the directory with the SVNPath server that's already running; the part I think I'm getting wrong is the repo name. I just used the directory name, but what am I really supposed to put there? Thank you, James
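    A hedged pointer, since this matches the documented authz format: the part after the colon is a path inside the named repository, not the URL path under the parent directory. For the repository created at /srv/svn/testproj, the section for its root would be:

        [testproj:/]
        demada = rw
        degari = rw
        scarja = rw

    [testproj:/svn/testproj], by contrast, grants rights on a directory literally named /svn/testproj inside the testproj repository, which presumably doesn't exist, so demada never gains access.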

    Read the article

  • 2nd instance of mysql closes/doesn't start, no warnings/errors?

    - by acidzombie24
    I have an external HD and I'd like to run a second MySQL instance on it. I used the Windows installer to install/configure mysqld as a service on Windows 7. I took the my.ini from C:\Program Files\MySQL\MySQL Server 5.5\my.ini, then edited the port (client and mysqld), datadir and innodb_data_home_dir. After that I ran this command:

        "C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld" --defaults-file="f:/dev/my.ini"

    The first run produced an error about the innodb_data_home_dir directory not existing. After fixing that and running the command again, mysqld simply starts up for a second and then immediately closes, with no message in my command prompt. I know the command-line args are correct, as I see the mysqld service using the same ones except for a different my.ini path; and it did tell me about the missing directory, so I know it is reading the new ini file. How do I figure out why this second instance of mysqld is closing? How do I get two instances running? I'm using v5.5.
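    On Windows, mysqld logs startup failures to an error log in the datadir by default rather than to the console, so the silent exit usually isn't actually silent. Two standard ways to surface the reason:

        "C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld" --defaults-file="f:/dev/my.ini" --console

    --console forces errors to the command prompt; alternatively, look for the <hostname>.err file in the new datadir on F:. A common culprit for a second instance (a guess, not a diagnosis) is a port or named-pipe collision with the instance that's already running as a service.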

    Read the article

  • Setting SVN permissions with Dav SVN Authz

    - by Ken
    There seems to be a path-inheritance issue which is boggling me over access restrictions. For instance, if I grant rw access to one group/user, and wish to restrict some nested path like /../../secret to none, it promptly spits in my face. Here is an example of what I'm trying to achieve in dav_svn.authz:

        [groups]
        grp_W = a, b, c, g
        grp_X = a, d, f, e
        grp_Y = a, e

        [/]
        * =
        @grp_Y = rw

        [somerepo1:/projectPot]
        @grp_W = rw

        [somerepo2:/projectKettle]
        @grp_X = rw

    What is expected: grp_Y has rw access to all repositories, while grp_W and grp_X only have access to their respective repositories. What occurs: grp_Y has access to all repositories, while grp_W and grp_X have access to nothing. If I flip the access ordering, giving everyone access and restricting it in each repository, it promptly ignores the invalidation rule (stripping of rights) and gives everyone the access granted at the root level. Forgoing groups, it performs the same with user-specific provisions, even fully defined such as:

        [/]
        a = rw
        b =
        c =
        d =
        e =
        f =
        g = rw

        [somerepo1:/projectPot]
        a = rw
        b = rw
        c = rw
        d =
        e = rw
        f =
        g = rw

        [somerepo2:/projectKettle]
        a = rw
        b =
        c =
        d = rw
        e = rw
        f = rw
        g =

    which yields the exact same result. According to the documentation I'm following all protocols, so this is insane. Running on Apache2 with dav_svn.
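    One documented behaviour that fits these symptoms: section names are repository-qualified paths, and the unqualified [/] applies to the root of every repository under SVNParentPath. So * = at [/] denies grp_W and grp_X at each repository root, and their grants exist only on the /projectPot and /projectKettle subtrees; any operation against a repository root (such as checking out the whole repository) is refused before the subtree grants ever apply. A sketch of one arrangement matching the stated intent, assuming checkouts happen at the repo roots:

        [somerepo1:/]
        @grp_Y = rw
        @grp_W = rw

        [somerepo2:/]
        @grp_Y = rw
        @grp_X = rw

    Checking out the subtree URLs directly (e.g. .../somerepo1/projectPot) should also work with the original file, which is a quick way to confirm this is the issue.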

    Read the article
