Search Results

Search found 19093 results on 764 pages for 'max path'.


  • optimize mod_rewrite in htaccess

    - by clarkk
    I have some mod_rewrite conditions in a .htaccess file which I have extended from time to time, but I don't think it's very well written (I'm still quite new to mod_rewrite). Sometimes requests end up in infinite loops, and just now I added SSL to the file. When requesting https:// I get a 404 error: "The requested URL /_secure/_secure/ was not found on this server." Somehow it adds an extra _secure to the path? The .htaccess:

        # set language
        RewriteCond %{HTTP_HOST} ^www\. [NC]
        RewriteCond %{REQUEST_URI} ^/(da|en)/(.*)(\?%{QUERY_STRING})?$ [NC]
        RewriteRule ^(.*)$ /%2?%{QUERY_STRING}&set_lang=%1 [L]

        # put 'www' as subdomain if none is given
        RewriteCond %{HTTP_HOST} ^([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^(.*)$ http://www.%1/$1 [L,R=301]

        # rewrite subdomain
        RewriteCond %{HTTP_HOST} ^(admin|files)\.[^\.]+\.[^\.]+$ [NC]
        RewriteCond %{REQUEST_URI} !^/_(admin|files)/ [NC]
        RewriteRule ^(.*)$ /_%1/$1 [L]

        # redirect to subdomain
        RewriteCond %{HTTP_HOST} ^www\.([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^_([^/]+)/ http://$1.%1/ [L,R=301]

        # start SSL on 'secure' subdomain if not started
        RewriteCond %{HTTPS} !=on
        RewriteCond %{HTTP_HOST} ^(secure)\.([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^(.*)$ https://%1.%2/$1 [L,R=301]

        # rewrite 'secure' subdomain
        RewriteCond %{HTTP_HOST} ^(demo|secure)\.[^\.]+\.[^\.]+$ [NC]
        RewriteCond %{REQUEST_URI} !^/_secure/ [NC]
        RewriteRule ^(.*)$ /_secure/$1 [L]

        # rewrite 'api' subdomain
        RewriteCond %{HTTP_HOST} ^api\.[^\.]+\.[^\.]+$ [NC]
        RewriteCond %{REQUEST_URI} !^/_api/ [NC]
        RewriteRule ^(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)? /_api/?%{QUERY_STRING}&v=$1&i=$2&k=$3&a=$4&t=$5&f=$6 [L]

        # redirect non-active subdomain to 'www'
        RewriteCond %{HTTP_HOST} !^(admin|api|demo|files|secure|www)\.([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^(.*)$ http://www.domain.com [L,R=301]

        # hide file extensions
        RewriteCond %{HTTP_HOST} ^www\. [NC]
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !\.php$ [NC]
        RewriteCond %{REQUEST_URI} ^/([^/]*)/(?:([^/]*)/)?(?:([^/]*)/)?$ [NC]
        RewriteRule ^(.*)$ /%1.php?%{QUERY_STRING}&subpage=%2&subsection=%3 [L]
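
    One way to see exactly which rules fire on the failing request, assuming Apache 2.2 (where RewriteLog is available) and that the log path and hostname below are placeholders, is to enable the rewrite trace in the vhost (not in .htaccess) and replay the request:

        # after adding to the vhost:
        #   RewriteLog /var/log/apache2/rewrite.log
        #   RewriteLogLevel 5
        # replay the failing request and read the trace
        curl -skI https://secure.example.com/ >/dev/null
        tail -n 100 /var/log/apache2/rewrite.log

    Each internal rewrite pass is logged, so the point where /_secure/ gets prepended a second time shows up directly in the trace.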

    Read the article

  • Separate domains vs. one domain with alias-domains

    - by Quasdunk
    I tried to ask this question a few days ago but I'm afraid it was not clear enough, so here's another try. I have set up a LAMP server using ISPConfig 3 for the administration. PHP is running over FastCGI. I have several domains, like my_site.com, my_site.net and my_site.org, but they all point to the same application/website. Each domain has its own web root folder and runs under its own user. The application itself is in a common directory which is owned by another user, like so:

        # path to my_application (owned by web1)
        /var/www/clients/client1/web1/web/my_application/

        # sym-link to my_application from my_site.com web root (owned by web5)
        /var/www/my_site.com/web -> /var/www/clients/client1/web1/web/

        # sym-link to my_application from my_site.net (owned by web4)
        /var/www/my_site.net/web -> /var/www/clients/client1/web1/web/

    With a setup like this I have encountered a few problems concerning permissions when performing filesystem operations with PHP. For instance, if the application is called via my_site.com, the user web5 tries to write something to the application folder. But the application folder is owned by the user web1, so web5 is not allowed to write there. As far as I understand, this is how FastCGI works. After some research and asking a few people, the solution seems to be to break it all down to one domain (e.g. my_site.com) and define the other domains (my_site.org, my_site.net) as aliases for this one domain. That way, there would be only one user who has all the necessary permissions. However, this would mean that we'd have to buy a multidomain SSL certificate - but we already have an SSL certificate for each domain. We were able to use them with our previous provider (managed hosting), and there we also had only one web directory and multiple domains. So if this was possible, I wonder: Is putting all the domains together into one vhost with one main domain and several alias domains the right approach in this case? Or may I have misunderstood something?
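
    One way to confirm it really is an ownership problem along the symlinked path, assuming root shell access; the user and path names are taken from the question and may differ on the actual box:

        # Show owner/permissions of every component the PHP process has to traverse
        namei -l /var/www/clients/client1/web1/web/my_application/

        # Check whether the FastCGI user of my_site.com (web5 here) can actually write there
        sudo -u web5 test -w /var/www/clients/client1/web1/web/my_application/ \
            && echo "web5 can write" || echo "web5 cannot write"

    If the write test fails for web5, FastCGI is behaving as described and the fix has to happen at the ownership/group level or via the single-vhost-with-aliases approach from the question.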

    Read the article

  • Hyper-V vss-writer not making current copies [migrated]

    - by Martinnj
    I'm using diskshadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of 3 scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location, and the third unexposes the shadow copies again. The first script – the one that runs correctly but fails to do what it's supposed to:

        # DiskShadow script file to backup VM from a Hyper-V host
        # First, delete any shadow copies of the drives. System Drives needs to be included.
        Delete Shadows volume C:
        Delete Shadows volume D:
        Delete Shadows volume E:
        # Ensure that shadow copies will persist after DiskShadow has run
        set context persistent
        # make sure the path already exists
        set verbose on
        begin backup
        add volume D: alias VirtualDisk
        add volume C: alias SystemDrive
        # verify the "Microsoft Hyper-V VSS Writer" writer will be included in the snapshot
        # NOTE: The writer GUID is exclusive for this install/machine, must be changed on other machines!
        writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
        create
        end backup
        # Backup is exposed as drive X: make sure your drive letter X is not in use
        Expose %VirtualDisk% X:
        Exit

    The next script is just a robocopy and then an unexpose. Now, when I run the above script, I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay because I really only need the Hyper-V writer. I also double-checked the GUID for the writer; it's correct. During the time when the Hyper-V writer becomes active, two things happen on the guest machines: the Debian/Linux machine goes to a saved state and restores when done, all fine; the Windows guests show "creating VSS snapshot sets" or something similar. Then X: gets exposed and I can copy the .vhd files over. The problem is, for some reason, the VHD files I get over seem to be old copies; they miss files, users and updates that are on the actual machines. I also tried putting the machines in a saved state manually, which didn't change the outcome. I hope someone here has an idea of how to solve this.

    Read the article

  • A failed disk (Pay for professional service or SpinRite?)(new edit)

    - by huggie
    EDIT: After much negotiating and begging and seeing through the promotion smoke screen, thanks to the nice representative who took my case, I now know that the engineer has already fixed my NTFS partition (I guess it might have been a bad block in the partition table?). She told me that the problem was considered minor, and that I should be able to boot normally and just copy stuff out. Whew.. I'm glad I didn't agree to the NTD $16,000 deal.

    New question (should this be in a new thread?): is it safer to use the Linux "dd" command, or is it better to boot normally into Windows XP and just copy stuff out?

    EDIT2: Thanks for all the help. I give the best answer to Console as it's most directly related to my question. But many suggestions are helpful and informative.

    ---- ORIGINAL POST BELOW ----

    Hi, in my previous post (you don't need to read it, but it's at http://superuser.com/questions/48838/windows-xp-a-disk-read-error-occurred), I said that my hard disk was not booting and was showing "a disk read error occurred". I took it to a recovery professional. A representative responded today and told me that the NTFS partitions have a "NTFS partition system crash". I have no idea what that means. The engineer handling my drive will not be available for contact till tomorrow.

    Now the company charges me NTD (New Taiwan Dollar) $16,000 to recover the lost data; that's kind of a lot considering that my graduate student monthly stipend is currently NTD $32,000 (the max allowed by regulation; it may be lower and may change depending on funding). Now I'm weighing the options:

    Option A: let the professional recover it for half of my monthly stipend. If the files/directories I designated are not recovered, I don't pay a penny (other than the initial examination fee of NTD $1000 which I've already paid).

    Option B: let me try SpinRite; if that fails, back to Option A. I spoke to the representative at the company and they recommended I not handle it on my own (yeah, of course that's what they all want to say, right?), and that at that price tag the disk error is probably relatively minor and the data recoverable. But the representative really did not have detailed information about the disk failure, so I didn't take her recommendation readily. One thing I did note was that she said what they would do is duplicate the disk before attempting recovery, so there would be no data loss (Is this true? Can't duplicating cause further data loss?). That sounds very good to me.

    Or maybe a third option:

    Option C: negotiate with them to pay them just to duplicate the disk, hopefully for a much smaller price tag. Let me try SpinRite; if that fails, back to Option A.

    This is a difficult decision. Ultimately I want my data back, but if a cheaper way is available to achieve the same thing... Can operating with SpinRite also corrupt data in some way? I've no idea what happened to my drive. I'll attempt to contact the engineer and hope to get it clarified, and make an edit here.
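
    For the dd-versus-booting question: reading the disk from a Linux live environment never writes to it, while booting Windows from it will. A hedged sketch, assuming the failing disk shows up as /dev/sdb and a healthy destination is mounted at /mnt/backup (both placeholders):

        # Image the whole disk read-only, skipping unreadable sectors instead of aborting
        sudo dd if=/dev/sdb of=/mnt/backup/disk.img bs=4M conv=noerror,sync

        # GNU ddrescue (if installed) is gentler on a marginal drive and keeps a map of bad areas
        sudo ddrescue -n /dev/sdb /mnt/backup/disk.img /mnt/backup/disk.map

    Working from the image afterwards means the original drive only ever gets read once.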

    Read the article

  • Must have local user to authenticate Samba to AD?

    - by Phil
    I've got a CentOS 5.3 server with Samba running. I've joined this server to my domain in the hopes of allowing AD users some access to my Samba shares. I've found that this works, but only as long as the AD username that I'm trying to authenticate with is also a local user on the server. In other words, if I'm trying to access a share and try to authenticate with the AD username "joe", I get errors unless I create a user named 'joe' on the server. I don't have to create a matching password or anything... the local user's password is always blank, so I do know that the authentication is actually happening against the AD. Here's my smb.conf file:

        [global]
            workgroup = <mydomain>
            server string = <snip>
            netbios name = HOME
            security = ADS
            realm = <mydomain.com>
            password server = <snip>
            auth methods = winbind
            log level = 1
            log file = /var/log/samba/%m.log

        [amore]
            path = /var/www/amore
            browseable = yes
            writable = yes
            valid users = DOMAIN\user1 DOMAIN\user2 DOMAIN\user3 DOMAIN\user4

    I would assume that my Kerberos settings are fine, as I've joined the domain and can use wbinfo to see users and groups. However, I can provide that info if necessary. Anyone have any ideas?
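
    A few checks that usually narrow this down, assuming winbind is meant to be the source of accounts; "joe" is the example name from the question:

        # Can winbind itself resolve the AD account to a uid?
        wbinfo -i joe

        # Does the same lookup work through nsswitch (this is the path smbd actually uses)?
        getent passwd joe          # may need 'DOMAIN\joe' depending on 'winbind use default domain'

        # Is winbind wired into nsswitch, and is an idmap range configured?
        grep winbind /etc/nsswitch.conf
        testparm -s 2>/dev/null | grep -i idmap

    If wbinfo resolves the user but getent does not, the missing piece is usually the idmap range or the winbind NSS library rather than Kerberos or the domain join, which would explain why a blank local account "fixes" it.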

    Read the article

  • 403 Forbidden on Apache (CentOS) Server

    - by pouya
    This is my VM setup:

        HOST: Windows 7 Ultimate 32-bit
        GUEST: CentOS 6.3 i386
        Virtualization soft: Oracle VirtualBox 4.1.22
        Networking: NAT -> (PORT FORWARD: HOST:8080 => GUEST:80)
        Shared Folder: centos

    All the project files go into the shared folder, and for each project a virtual host conf file is created in /etc/httpd/conf.d/, like /etc/httpd/conf.d/$domain. I wasn't able to see anything in my browser before disabling both the Windows firewall and iptables in CentOS. After that, if I type for example http://www.$domain:8080/, all I see is:

        Forbidden
        You don't have permission to access / on this server.
        Apache/2.2.15 (CentOS) Server at www.$domain.com Port 8080

    A sample virtual host conf file:

        <VirtualHost *:80>
            #General
            DocumentRoot /media/sf_centos/path/to/public_html
            ServerAdmin webmaster@$domain
            ServerName www.$domain
            ServerAlias $domain *.$domain

            #Logging
            ErrorLog /var/log/httpd/$domain-error.log
            CustomLog /var/log/httpd/$domain-access.log combined

            #mod rewrite
            RewriteEngine On
            RewriteLog /var/log/httpd/$domain-rewrite.log
            RewriteLogLevel 0
        </VirtualHost>

    The centos shared folder is available to the guest at /media/sf_centos. These are the file permissions for sf_centos: drwxrwx--- root vboxsf (the vboxsf group includes apache and root). So these are my questions:

    1. How do I solve the Forbidden problem?
    2. How do I set up both host and guest firewalls?
    3. How can I improve this development environment to simulate the production environment as much as possible, especially regarding security?
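
    A short permission/SELinux checklist for question 1, assuming the paths from the question; on CentOS the Apache user is normally called apache:

        # Is apache really in the vboxsf group, and can it read the shared folder at all?
        id apache
        sudo -u apache ls /media/sf_centos/path/to/public_html

        # A 403 often comes from a parent directory missing the execute bit somewhere along the path
        namei -m /media/sf_centos/path/to/public_html

        # SELinux also blocks httpd from serving content outside the standard contexts
        getenforce

    If getenforce reports Enforcing, a 403 on content served out of /media is a common outcome until the files get an httpd-readable context or SELinux is switched to permissive for testing.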

    Read the article

  • Strange ssh login

    - by Hikaru
    I am running a Debian server and I have received a strange email warning about an ssh login. It says that the user mail logged in using ssh from a remote address:

        Environment info:
        USER=mail
        SSH_CLIENT=92.46.127.173 40814 22
        MAIL=/var/mail/mail
        HOME=/var/mail
        SSH_TTY=/dev/pts/7
        LOGNAME=mail
        TERM=xterm
        PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games
        LANG=en_US.UTF-8
        SHELL=/bin/sh
        KRB5CCNAME=FILE:/tmp/krb5cc_8
        PWD=/var/mail
        SSH_CONNECTION=92.46.127.173 40814 my-ip-here 22

    I looked in /etc/shadow and found that no password is set for mail:

        mail:*:15316:0:99999:7:::

    I found these lines for the login in auth.log:

        Jun 3 02:57:09 gw sshd[2090]: pam_winbind(sshd:auth): getting password (0x00000388)
        Jun 3 02:57:09 gw sshd[2090]: pam_winbind(sshd:auth): pam_get_item returned a password
        Jun 3 02:57:09 gw sshd[2091]: pam_winbind(sshd:auth): user 'mail' granted access
        Jun 3 02:57:09 gw sshd[2091]: Accepted password for mail from 92.46.127.173 port 45194 ssh2
        Jun 3 02:57:09 gw sshd[2091]: pam_unix(sshd:session): session opened for user mail by (uid=0)
        Jun 3 02:57:10 gw CRON[2051]: pam_unix(cron:session): session closed for user root

    and lots of auth failures for this user. There are no lines with a COMMAND string for this user. Nothing was found with rkhunter or by inspecting processes with ps aux, and no suspicious connections were found with netstat (as far as I can see). Can anyone tell me how this is possible and what else should be done? Thanks in advance.
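
    Note that the Accepted password line came through pam_winbind, so the local shadow entry is not the whole story. A sketch of the first containment and inspection steps, assuming a standard Debian layout and treating the box as possibly compromised until proven otherwise:

        # Stop further ssh logins for the account, then reload sshd
        echo 'DenyUsers mail' | sudo tee -a /etc/ssh/sshd_config
        sudo /etc/init.d/ssh reload

        # Look for planted keys or other droppings in the account's home (/var/mail here)
        sudo ls -la /var/mail/.ssh 2>/dev/null

        # Login history and every auth.log entry touching the account
        last -aw | grep -w mail
        sudo grep -w mail /var/log/auth.log*

    After that, the interesting question is why winbind is willing to authenticate "mail" at all, which points at the PAM/winbind configuration rather than the local password file.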

    Read the article

  • Restricting access to one controller of an MVC app with Nginx

    - by kgb
    I have an MVC app where one controller needs to be accessible only from several IPs (this controller is an OAuth token callback trap - for Google/FB API tokens). My conf looks like this:

        geo $oauth {
            default 0;
            87.240.156.0/24 1;
            87.240.131.0/24 1;
        }

        server {
            listen 80;
            server_name some.server.name.tld default_server;
            root /home/user/path;
            index index.php;

            location /oauth {
                deny all;
                if ($oauth) {
                    rewrite ^(.*)$ /index.php last;
                }
            }

            location / {
                if ($request_filename !~ "\.(phtml|html|htm|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|xlsx)$") {
                    rewrite ^(.*)$ /index.php last;
                    break;
                }
            }

            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }

    It works, but does not look right. The following seems logical to me:

        location /oauth {
            allow 87.240.156.0/24;
            deny all;
            rewrite ^(.*)$ /index.php last;
        }

    But this way the rewrite happens all the time, and the allow and deny directives are ignored. I don't understand why...

    Read the article

  • How to configure a Web.Config file to allow custom 404 handling while still displaying on-page 500 error detail?

    - by Mark
    To customize 404 handling, and based on the hosting company's suggestion, we are currently using the following web.config setup. However, we quickly realized that with this configuration, any page error (500) also gets redirected to this custom error page. How can I modify this config file so we can continue to handle 404s with the custom file while still being able to view on-page error detail?

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.webServer>
            <httpErrors errorMode="DetailedLocalOnly" defaultPath="/Custom404.html" defaultResponseMode="ExecuteURL">
              <remove statusCode="404" subStatusCode="-1" />
              <error statusCode="404" prefixLanguageFilePath="" path="/Custom404.html" responseMode="ExecuteURL" />
            </httpErrors>
          </system.webServer>
          <system.web>
            <customErrors mode="On">
              <error statusCode="404" redirect="/Custom404.html" />
            </customErrors>
          </system.web>
        </configuration>

    Read the article

  • How to automatically start VM created by virt-manager?

    - by Jeff Shattock
    I have created a virtual machine with virt-manager that runs on kvm/qemu. The machine works well when started through virt-manager. However, I would like to be able to start and stop the VM through a script in init.d, so that it comes up and down along with the host. I need to have virt-manager show that the machine is running, and to be able to connect to its console through there. When I use the command line that is produced by running ps -eaf | grep kvm after starting the VM through virt-manager, I get some console messages about redirected character devices, but the machine does start and runs properly. However, I do not get any indication from virt-manager that it has started. How can I modify the command line to get virt-manager to pick up the running VM? Is there anything else about the command line that should change when starting outside of virt-manager? The command line is (slightly reformatted for readability):

        /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1 -name BORON \
          -uuid fa7e5fbd-7d8e-43c4-ebd9-1504a4383eb1 \
          -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/BORON.monitor,server,nowait \
          -monitor chardev:monitor -localtime -boot c \
          -drive file=/dev/FS1/BORON,if=ide,index=0,boot=on,format=raw \
          -net nic,macaddr=52:54:00:20:0b:fd,vlan=0,name=nic.0 \
          -net tap,fd=41,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0 \
          -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:1 -k en-us -vga cirrus
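
    Rather than reproducing the qemu command line by hand, the usual route is to let libvirt start the domain, which keeps virt-manager aware of it; a sketch, assuming the domain name BORON from the command line above and that libvirtd itself starts at boot:

        # Start the domain through libvirt (virt-manager will show it as running)
        virsh start BORON

        # Have libvirtd bring it up automatically whenever the host boots
        virsh autostart BORON

        # Check state
        virsh list --all

    An init.d script then only needs to call virsh start/shutdown instead of carrying a copy of the kvm invocation.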

    Read the article

  • How do I configure Jetty (via jettyrunner) so that it names a character set in the Content-Type response header?

    - by Pointy
    I use Jetty (via the oh-so-handy Jetty Runner) for day-to-day web application testing. One thing I've recently stumbled on is the fact that I don't get a character set called out in the "Content-Type" response header all the time. I do get it in response to my application's XMLHttpRequest transactions, but not for plain old pages loaded by <a> links or whatever. I've read a little bit about how to set up a Jetty config file, but I've never been able to completely understand that; all servlet containers are complicated, and while Jetty is pretty simple it's just weird enough that I don't grok the overall idea. Thus, all I do to launch my app is to run the Jetty Runner .jar file with a couple of simple arguments to set up the port number and logfile path, and then I just give it the .war file to run. It works great — except for the missing character set :-) Anybody have a quick sample config file that might fix this? edit — oh if it matters, I'm running Jetty 7.0.0 RC3; I've also tried with a slightly newer version (still 7.something) with exactly the same issue. All my testing is on Ubuntu.
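
    Before touching Jetty config it may be worth confirming exactly what is sent for both kinds of request; a quick check, assuming the app answers on localhost:8080 and that the two paths below are placeholders:

        # Compare the Content-Type header for a plain page vs. an XHR endpoint
        curl -sI http://localhost:8080/somepage.html | grep -i '^content-type'
        curl -sI http://localhost:8080/app/ajax-endpoint | grep -i '^content-type'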

    Read the article

  • Last step in HDD Recovery (fixing windows)

    - by Atom Computing
    My dad's hard drive got corrupted as a result of many bad sectors. Anyway, I made a clone of the drive and have now repaired it totally (recreating the MBR and MFT) and run a series of ChkDsks on it. I can now see all the files and folders on it and it is all intact. I currently have it as a slave in my computer (where I was doing all the repairs).

    When putting it back into its own computer, it comes up with "A disk read error occurred: Press Ctrl + Alt + Del to Restart". I don't know why this is happening but think it might have something to do with file permissions. I have tried a start-up recovery with the Vista boot CD and it found no problems. When trying to apply file permissions (and creating file permissions for the SYSTEM group, as it didn't have any for the SYSTEM group), it couldn't apply them for some of the System32 folder files. I have tried applying them as admin and with as powerful privileges as I can get, all to no avail.

    When it is in my PC I can boot it up (I added it to my bootloader) and it boots fine, except that when it logs in it comes up with the error "Rundll32.exe - Windows cannot access the specified device, path or file. You may not have the appropriate permissions to access the item". This message keeps coming back and nothing loads at all.

    Any help would be greatly received as I have got so far with the data recovery and want to avoid a reformat at all costs due to the vast number of programs installed, and I don't have much time on my hands! Thanks

    Read the article

  • Microsoft Server 2003 Explorer shows duplicate local shares

    - by user52167
    Hi folks, I am new here and I could really use some advice please. I am having a problem with our file server. When I try to browse the shared folders using explorer, several of the shared folders all appear to have the same name. Whenever I attempt to rename one of the affected folders, all the affected folders name also change. Our File Server is Windows Server 2003 R2. I am logged on directly to the server using remote desktop. When I open the folder all is as it should be, the proper content is there and the address bar displays the correct folder name and path. The share names are correct, so everything that needs to access the shared folder/files can do so. Also when I browse to the folder using the command-line all it as it should be there too. The only issue seems to be the incorrect display name when browsing using explorer. Can anyone offer any advice or help as to how to resolve this issue please? It would be most appreciated. Thanks

    Read the article

  • How to add a writable folder to the PHP document root on linux

    - by Ron Whites
    We are building an example bash script for using our PHP TestCoverage Tool on Linux. The development environment is Ubuntu 12.04.1, but we intend for the Linux example to work across as many Linux versions as possible without modification. The example script requires a variable to be set to the PHP document root path and by default uses a small PHP example source to show the user how our GUI and text report show the covered and uncovered PHP code areas. The script is also intended to be easily alterable by the user to automate the TestCoverage display of their own PHP code.

    The problem we are having with Ubuntu 12.04 (any Linux?) is that the PHP Apache2 document root is defined in /etc/apache2/sites-available/default as /var/www, and /var/www defaults to "drwxr-xr-x" read-only access. So in order to add our own folder as /var/www/SDTestCoverage we must change /var/www to "drwxrwxrwx" read-write access. It seems our script (at least on Ubuntu) will need to:

    1. acquire and save the /var/www permissions, then
    2. sudo chmod 777 /var/www (to make it writable)
    3. mkdir -p /var/www/SDTestCoverage (create our folder under the document root)
    4. sudo chmod 777 /var/www/SDTestCoverage (make our subfolder writable)
    5. and finally restore the /var/www permissions

    Thanks, and our questions are:

    1. Is this the standard way (on Ubuntu) to add a writable folder under the PHP document root?
    2. Is this the most general-purpose way to add a writable folder under the PHP document root on other versions of Linux?
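
    A sketch of an alternative that avoids opening /var/www itself to 777, assuming Ubuntu's www-data Apache user (on other distros the account name differs, e.g. apache on RHEL/CentOS):

        # Create the folder without touching /var/www's own permissions
        sudo mkdir -p /var/www/SDTestCoverage

        # Give it to the web server user instead of making it world-writable
        sudo chown www-data:www-data /var/www/SDTestCoverage
        sudo chmod 775 /var/www/SDTestCoverage

        # Or, where ACLs are available, grant only the web user write access and leave ownership alone
        sudo setfacl -m u:www-data:rwx /var/www/SDTestCoverage

    Because mkdir runs as root, steps 1, 2 and 5 of the list above become unnecessary: /var/www is never modified.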

    Read the article

  • TCP/UDP hole punching from and to the same NAT network

    - by Luc
    I was wondering if tcp/udp hole punching would still work when you are in the same network (behind a NAT), and what the packet's path would be. What happens when using hole punching on the same network, is that it will send a packet out with the same destination and source address. Only the source and destination port would differ. I imagine a router with NAT loopback enabled will handle this as it should, but how about other routers? Would they drop the packet, or would a router (the first?) from the ISP bounce the packet back after which it gets handled okay? I'm wondering because I was thinking about using this technique to circumvent a block between peers in a network (like a school network where clients can only access the internet, but any contact with each other is blocked). The only other option is to use a man in the middle as proxy (tunnel?). The disadvantage of this is that you have to have a server with significantly more bandwidth than one that would only do hole punching. Also the latency would increase significantly.

    Read the article

  • Manual Http error response code in non-existent folder via routing

    - by Slytherin
    Apache server running on an Ubuntu-like Linux. I am getting unexpected behaviour when I try to manually send an error response. If my .htaccess is responsible for the error response, then the appropriate error document is loaded and displayed, with the corresponding response code in the browser console. However, if my router is the origin of the response code, then I get a blank screen, but the correct response code. The .htaccess looks like this:

        RewriteEngine On
        # RewriteBase /

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule !\.(css|js|icon|zip|rar|png|jpg|gif|pdf)$ index.php [L]

        ErrorDocument 404 /err/404.html
        ErrorDocument 403 /err/403.html
        ErrorDocument 500 /err/500.html

    The part of my router that sends the response is the following:

        header("HTTP/1.1 403 Forbidden");

    Trying this format didn't help either:

        header("HTTP/1.1 403 Forbidden", TRUE, 403);

    I also tried HTTP/1.0. Furthermore I was thinking that maybe the relative path to the error page might be an issue, but discarded this idea after attempting to access a document that is forbidden via .htaccess.

    EDIT: I should also point out that this scenario happens when a URL for a non-existent article is requested. Is it possible that the server is looking for a .htaccess file in a folder based on the URL? E.g. for domain/blog/non-existent, is the server looking for a blog folder? I am specifically asking this because there is no blog folder.

    Read the article

  • Running Solr on VPS problem

    - by Camran
    I have a VPS with Ubuntu OS. I run Solr on my local machine (a Windows XP laptop) just fine. I have configured Jetty and Solr on the server exactly the same way as on my computer. I have also downloaded the JRE and installed it on the server. However, whenever I try to run the start.jar file, the PuTTY terminal shows a bunch of text but gets stuck. I could paste the text here but it is very long, so unless somebody wants to see it I won't. Also, I can't view the Solr admin page at all. Does anybody have experience with this kind of problem? Maybe Java isn't correctly installed? It is a VPS so maybe the installation is different. Thanks

    UPDATE: These are the last lines from the terminal; in other words, this is where it stops every time:

        INFO: [] webapp=null path=null params={event=firstSearcher&q=static+firstSearcher+warming+query+from+solrconfig.xml} hits=0 status=0 QTime=9
        May 28, 2010 8:58:42 PM org.apache.solr.core.QuerySenderListener newSearcher
        INFO: QuerySenderListener done.
        May 28, 2010 8:58:42 PM org.apache.solr.handler.component.SpellCheckComponent$SpellCheckerListener newSearcher
        INFO: Loading spell index for spellchecker: default
        May 28, 2010 8:58:42 PM org.apache.solr.core.SolrCore registerSearcher
        INFO: [] Registered new searcher Searcher@63a721 main

    Also, you should know that I installed Jetty by just dragging the folders from my HD to the VPS server.
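
    The "Registered new searcher" lines are normally the tail end of a successful startup; Jetty simply stays in the foreground, which is why the terminal looks stuck. A sketch, assuming Solr's default example port 8983 and that start.jar sits in the usual example/ directory (placeholder path below):

        # Run Jetty in the background and keep its output in a log instead of the terminal
        cd /path/to/solr/example
        nohup java -jar start.jar > solr.log 2>&1 &

        # Verify it is listening and answering locally
        curl -sI http://localhost:8983/solr/admin/

    If the local curl works but the admin page is still unreachable from outside, the remaining problem is the VPS firewall or the port not being forwarded, not Java or Solr.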

    Read the article

  • Samba/Winbind issues joining an Active Directory domain

    - by Frap
    I'm currently in the process of setting up winbind/samba and am running into a few issues. I can test connectivity with wbinfo fine:

        [root@buildmirror ~]# wbinfo -u
        hostname
        username
        administrator
        guest
        krbtgt
        username
        [root@buildmirror ~]# wbinfo -a username%password
        plaintext password authentication succeeded
        challenge/response password authentication succeeded

    However, when I do a getent I don't get any AD accounts returned:

        [root@buildmirror ~]# getent passwd
        root:x:0:0:root:/root:/bin/bash
        bin:x:1:1:bin:/bin:/sbin/nologin
        daemon:x:2:2:daemon:/sbin:/sbin/nologin
        adm:x:3:4:adm:/var/adm:/sbin/nologin
        lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
        sync:x:5:0:sync:/sbin:/bin/sync
        shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
        halt:x:7:0:halt:/sbin:/sbin/halt
        mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
        uucp:x:10:14:uucp:/var/spool/uucp:/sbin/nologin
        operator:x:11:0:operator:/root:/sbin/nologin
        puppet:x:52:52:Puppet:/var/lib/puppet:/sbin/nologin

    My nsswitch looks like this:

        passwd: files winbind
        shadow: files winbind
        group: files winbind
        #hosts: db files nisplus nis dns
        hosts: files dns

    and I'm definitely joined to the domain:

        [root@buildmirror ~]# net ads info
        LDAP server: 192.168.4.4
        LDAP server name: pdc.domain.local
        Realm: domain.local
        Bind Path: dc=DOMAIN,dc=LOCAL
        LDAP port: 389
        Server time: Sun, 05 Aug 2012 17:11:27 BST
        KDC server: 192.168.4.4
        Server time offset: -1

    So what am I missing?
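
    wbinfo talking straight to winbind while getent stays silent usually points at the NSS/enumeration layer rather than the domain join. A few checks, assuming smb.conf is editable and substituting a real account for the placeholder below:

        # getent only enumerates domain users when enumeration is switched on in smb.conf
        testparm -sv 2>/dev/null | grep -i 'winbind enum'

        # Even without enumeration, a single known account should resolve through NSS
        getent passwd 'DOMAIN\username'

        # Make sure the winbind NSS library is actually installed (the nsswitch entry alone is not enough)
        ls /lib/libnss_winbind* /lib64/libnss_winbind* 2>/dev/null

    The usual culprits on this kind of setup are winbind enum users/groups being off, a missing idmap uid/gid range, or the libnss_winbind library not being present.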

    Read the article

  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean URLs and it's mostly working, except for some weird redirects. If you have two sites where one is a substring of the other, then you'll randomly be redirected to the other site. I tracked the problem down to how mod_rewrite does path matching, so with a config file like this:

        RewriteCond %{REQUEST_URI} ^/drupal
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    /drupaltest will match the /drupal line, and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end-of-string character ($) at the end of each rewrite condition, then it will always match the correct site and the links will always be rewritten correctly. That breaks down as soon as a user logs in, though, because the query string is appended to the URL, so the base URL alone will no longer match. You can also fix the problem by ordering the sites in the config file so that the smallest substring always comes last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested so that we could guarantee the order. The system administrator thought that was kludgy and didn't address the root problem. Disabling clean URLs should also fix the problem, but the users really want them so I'd prefer to keep them if possible. I think we could also fix it by using an .htaccess file in each site to handle the clean URL rewriting, but that also seems suboptimal since it will generate a higher load on the server, and the server is intended to host the majority of the university's external-facing web content. Is there some magic I can do with mod_rewrite to get it to work? Would another solution be better? Am I doing something the wrong way to begin with?

    Read the article

  • Using screen to monitor non-interactive scripts (or some other solution)

    - by Michael
    I have some autonomous scripts that run commands on remote machines over ssh. These scripts rely on getting stdout, stderr, and the return code of each command run. I want to be able to monitor the progress of the scripts on each target machine so that I can see if something has hung and possibly intervene if necessary. My initial idea was to have the scripts run commands in a screen session, so that the person monitoring could simply attach to the session with screen -x. However, it was hard to do that from a script since screen is an interactive program. I can send a command to the screen session with screen -S session -X stuff "command^M", but then I don't get the output and return code that I need back. My second idea was to put script /path/to/log in ~/.bash_profile and log the entire session to a file. Then the monitoring person could simply tail the log file. However, this doesn't provide the interactivity that I was looking for. Any ideas on how to solve this problem?
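
    One screen-free approach is to leave the interactivity to the person monitoring and have the script mirror everything into per-host logs while keeping the real exit code; a minimal bash sketch, with placeholder host and log names:

        #!/bin/bash
        # Run a remote command, mirror stdout/stderr into log files a human can tail,
        # and still keep the real exit code for the script's own logic.
        log=/var/log/deploy/host1.out
        err=/var/log/deploy/host1.err

        ssh user@host1 'some_command --with-args' \
            > >(tee -a "$log") \
            2> >(tee -a "$err" >&2)
        rc=$?

        echo "host1 exited with $rc" >> "$log"

        # Whoever is watching just runs:
        #   tail -F /var/log/deploy/host1.out /var/log/deploy/host1.err

    This keeps stdout, stderr and the return code exactly as the script sees them, and anyone can follow progress with tail -F without attaching to anything.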

    Read the article

  • sub application and virtual directory file permissions

    - by Zeus
    I have a website setup in IIS7, exampledomain.com. Under the application exampledomain.com lives a sub application cms. In a rather convoluted way, we have content in our cms system in this sub-app, under cms\content\{generatedfoldername}. So to access an image in this content, the full URL would be http://www.exampledomain.com/cms/cms/content/{generatedfoldername}/image.jpg, (yes, cms twice...) and this works just fine. Now, we have a virtual directory under the parent website, called stuff which points at the content of the cms. So I should be able to get to the image using the url http://www.exampledomain.com/stuff/{generatedfoldername}/image.jpg. Unfortunately this gives a server 500 error "There is a problem with the resource you are looking for, and it cannot be displayed." Whilst you do have to log into the cms system to access any of the admin pages within, I don't think the image files are protected by login, or else the first example URL wouldn't work, right? Also it's a server 500 error, rather than a 403. I'm sure I must be missing something obvious here- will the virtual directory be using the permissions defined in the parent application, or the subapplication to which it is pointing? Or is there some other permissions I may have missed? Sorry, that was a bit long, thanks for reading all the way down here! (I also must point out that I'm pretty new to the server management stuff.) edit: also, we have <location path="." inheritInChildApplications="false"> specified in the webconfig of the parent app, so it's hopefully not the issue described in this config file hierarchy article.

    Read the article

  • Alias for Drupal "Sites" folder with Apache on Windows Server 2008

    - by sgtbeano
    I'm having to move a number of sites from a LAMP stack to a WAMP one, provided by Zend, and I've hit a problem. Our architecture is a number of load-balanced web servers which have their own local webapp drives, kept in sync with one server acting as the master copy. There is then a separate DFS share provided to all web servers from our pillar SAN. Usually a Drupal install under our LAMP cluster has the main Drupal web app in a local HTDOCS mount for each server, and the SITES directory within Drupal is then symlinked out to the DFS or NFS share so that there is a common FILES and TMP directory. The problem I'm having is that there seems to be no equivalent of symlinks on Win Server 2008; shortcuts have a .lnk extension, making Apache see them as a distinct file. So I've tried using an Alias in the vhost file like this:

        <Location /drupal-626/sites>
            Order deny, allow
            Allow from all
        </Location>
        Alias /drupal-626/sites "Z:\Path to alternate sites directory"

    The root for this test is http://main-domain-url/drupal-626/. Unfortunately this isn't working, so I'm wondering if any of you have a solution that would work? Many thanks for taking the time to read this.

    Read the article

  • screen scraper templates for various websites

    - by intuited
    I'm looking specifically for a convenient way to locally archive posts from this and other similar sites. I'd like to separate the question itself from the answers, or maybe crop the question and store it, keeping the page title. Obviously I don't need to store the menu or the various other site interface chrome. The best way to do this would seem to be to associate an XSLT template with a match on the URL and use that template to pull the various relevant informations and format them. My two-part question: Is there a tool specifically built for this task? I.E. something that takes a URL and checks it against a map of path-matching expressions to templates, and outputs the result of applying the template to that resource? xmlto seems to be most of the way there, and could probably just be called from a script that does the pattern-matching, but something already integrated would be more convenient. Is such a URL_pattern-to-XSLT_template map publicly available somewhere? Question 2.5: Is it legal to do this with sites like this one that have public licenses on their content?
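
    There doesn't have to be a dedicated tool for the first part; a small wrapper around xsltproc covers the URL-pattern-to-template mapping. A sketch, assuming HTML Tidy and xsltproc are installed and that the .xsl template files are ones you write yourself (the names here are hypothetical):

        #!/bin/bash
        # scrape.sh URL  - pick an XSLT template by URL pattern and apply it
        url="$1"

        case "$url" in
          *superuser.com/questions/*)     xsl=templates/superuser.xsl ;;
          *stackoverflow.com/questions/*) xsl=templates/stackoverflow.xsl ;;
          *)  echo "no template for $url" >&2; exit 1 ;;
        esac

        # tidy turns real-world HTML into XHTML that xsltproc can parse
        curl -sL "$url" | tidy -q -asxhtml -numeric 2>/dev/null | xsltproc "$xsl" -

    The case table is the pattern-to-template map; extending it to read the patterns from a config file is straightforward. Whether such a map already exists publicly for specific sites, and the licensing question, remain open.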

    Read the article

  • Fix elements within objects at their position? (Prevent movement when resizing)

    - by Skadier
    I would like to know if it is possible to fix elements at their absolute position within custom elements in InDesign CS5? I created a kind of speech bubble and I would like to place a stripline within this bubble to separate two content areas. Just a little scheme to show the desired layout as Pseudo-Markup :D <speech-bubble> <textbox>HEADER SECTION</textbox> <stripline> <textbox>Some other text</textbox> </speech-bubble> I created something like this but with two separate elements which aren't connected. So I have to select both of them in order to move the whole bubble. Then I tried to connect them using Object->Paths->Create linked path but then the stripline moves and the HEADER SECTION moves too. All in all I would like to have a speech bubble which can be resized in order to hold more text but it shouldn't make the HEADER_SECTION larger or move the stripline. Hope you understand what I mean :D Thanks in advance!

    Read the article

  • Audit file removal (auditctl)

    - by user1513039
    For some reason, some script or program is removing a pid file for a service on a Linux server (CentOS 5.4 / 2.6.18-308.4.1.el5xen). I suspect a faulty cron script, but manual investigation did not lead me to it, and I still want to track it down. I have been using this auditctl rule:

        auditctl -w /var/run/some_service.pid -p w

    which helped me to see something, but not quite what I wanted:

        type=PATH msg=audit(11/12/2013 09:07:43.199:432577) : item=1 name=/var/run/some_service.pid inode=12419227 dev=fd:00 mode=file,644 ouid=root ogid=root rdev=00:00
        type=SYSCALL msg=audit(11/12/2013 09:07:43.199:432577) : arch=x86_64 syscall=unlink success=yes exit=0 a0=7fff7dd46dd0 a1=1 a2=2 a3=127feb90 items=2 ppid=3454 pid=6227 auid=root uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts0 ses=38138 comm=rm exe=/bin/rm key=(null)

    The problem is that I can see the ppid of the process that removed the file, but by the time I analyse it the (p)pids are already invalid, since the scripts/programs have probably exited. Imagine a cron script deleting the file. So I need some way to expand or add audit rule(s) to be able to trace the parents of the /bin/rm at the time of removal. I have been thinking of adding a rule to monitor all process creation, something like:

        auditctl -a task,always

    But this happens to be very resource intensive. So I need help or advice on how to combine these rules, or how to expand any of them to help track down the script/program. Thanks.
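
    Instead of auditing every task, it is often enough to also watch executions of /bin/rm and key both rules, so ausearch can pull the EXECVE record (which carries the full argv and the ppid) alongside the unlink; a sketch using the paths from the question:

        # Keep the existing watch on the pid file, but give it a key
        auditctl -w /var/run/some_service.pid -p w -k pidfile_rm

        # Also record every execution of rm; the EXECVE record includes argv and ppid,
        # which usually identifies the calling script even after it has exited
        auditctl -w /bin/rm -p x -k rm_exec

        # Later, pull and correlate the two by timestamp
        ausearch -k pidfile_rm -i
        ausearch -k rm_exec -i

    The rm_exec watch is much cheaper than auditing all task creation, since it only triggers on that one binary.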

    Read the article
