Search Results

Search found 26555 results on 1063 pages for 'active directory explorer'.


  • Can't configure frame relay T1 on Cisco 1760

    - by sonar
    For the past few days I've been trying to configure a data T1 via frame relay. I've been pretty unsuccessful at it, and it's been a while since I've done this, so please bear with me. The ISP provided me the following information:

    1. IP address
    2. Gateway address
    3. Encapsulation: Frame Relay
    4. DLCI 100
    5. BZ8 ESF (I think the BZ8 was supposed to be B8ZS)
    6. Time slots 1 to 24

    What I have configured up until now is the following:

        interface Serial0/0
         ip address <ip address> 255.255.255.252
         encapsulation frame-relay
         service-module t1 timeslots 1-24
         frame-relay interface-dlci 100

    sh service-module s0/0 outputs:

        Module type is T1/fractional
        Hardware revision is 0.128, Software revision is 0.2,
        Image checksum is 0x73D70058, Protocol revision is 0.1
        Receiver has no alarms.
        Framing is ESF, Line Code is B8ZS, Current clock source is line,
        Fraction has 24 timeslots (64 Kbits/sec each), Net bandwidth is 1536 Kbits/sec.
        Last module self-test (done at startup): Passed
        Last clearing of alarm counters 00:17:17
          loss of signal : 0, loss of frame : 0, AIS alarm : 0,
          Remote alarm : 2, last occurred 00:10:10
          Module access errors : 0
        Total Data (last 1 15 minute intervals):
          0 Line Code Violations, 0 Path Code Violations
          0 Slip Secs, 0 Fr Loss Secs, 0 Line Err Secs, 0 Degraded Mins
          0 Errored Secs, 0 Bursty Err Secs, 0 Severely Err Secs, 0 Unavail Secs
        Data in current interval (138 seconds elapsed):
          0 Line Code Violations, 0 Path Code Violations
          0 Slip Secs, 0 Fr Loss Secs, 0 Line Err Secs, 0 Degraded Mins
          0 Errored Secs, 0 Bursty Err Secs, 0 Severely Err Secs, 0 Unavail Secs

    sh int outputs:

        FastEthernet0/0 is up, line protocol is up
          Hardware is PQUICC_FEC, address is 000d.6516.e5aa (bia 000d.6516.e5aa)
          Internet address is 10.0.0.1/24
          MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
            reliability 255/255, txload 1/255, rxload 1/255
          Encapsulation ARPA, loopback not set
          Keepalive set (10 sec)
          Full-duplex, 100Mb/s, 100BaseTX/FX
          ARP type: ARPA, ARP Timeout 04:00:00
          Last input 00:20:00, output 00:00:00, output hang never
          Last clearing of "show interface" counters never
          Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
          Queueing strategy: fifo
          Output queue: 0/40 (size/max)
          5 minute input rate 0 bits/sec, 0 packets/sec
          5 minute output rate 0 bits/sec, 0 packets/sec
          0 packets input, 0 bytes
          Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
          0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
          0 watchdog
          0 input packets with dribble condition detected
          191 packets output, 20676 bytes, 0 underruns
          0 output errors, 0 collisions, 1 interface resets
          0 babbles, 0 late collision, 0 deferred
          0 lost carrier, 0 no carrier
          0 output buffer failures, 0 output buffers swapped out
        Serial0/0 is up, line protocol is down
          Hardware is PQUICC with Fractional T1 CSU/DSU
          MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec,
            reliability 255/255, txload 1/255, rxload 1/255
          Encapsulation FRAME-RELAY, loopback not set
          Keepalive set (10 sec)
          LMI enq sent 157, LMI stat recvd 0, LMI upd recvd 0, DTE LMI down
          LMI enq recvd 23, LMI stat sent 0, LMI upd sent 0
          LMI DLCI 1023  LMI type is CISCO  frame relay DTE
          FR SVC disabled, LAPF state down
          Broadcast queue 0/64, broadcasts sent/dropped 2/0, interface broadcasts 0
          Last input 00:24:51, output 00:00:05, output hang never
          Last clearing of "show interface" counters 00:27:20
          Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
          Queueing strategy: weighted fair
          Output queue: 0/1000/64/0 (size/max total/threshold/drops)
            Conversations 0/1/256 (active/max active/max total)
            Reserved Conversations 0/0 (allocated/max allocated)
            Available Bandwidth 1152 kilobits/sec
          5 minute input rate 0 bits/sec, 0 packets/sec
          5 minute output rate 0 bits/sec, 0 packets/sec
          23 packets input, 302 bytes, 0 no buffer
          Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
          1725 input errors, 595 CRC, 1099 frame, 0 overrun, 0 ignored, 30 abort
          246 packets output, 3974 bytes, 0 underruns
          0 output errors, 0 collisions, 48 interface resets
          0 output buffer failures, 0 output buffers swapped out
          4 carrier transitions
          DCD=up  DSR=up  DTR=up  RTS=up  CTS=up
        Serial0/0.1 is down, line protocol is down
          Hardware is PQUICC with Fractional T1 CSU/DSU
          MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec,
            reliability 255/255, txload 1/255, rxload 1/255
          Encapsulation FRAME-RELAY
          Last clearing of "show interface" counters never
        Serial0/0.100 is down, line protocol is down
          Hardware is PQUICC with Fractional T1 CSU/DSU
          Internet address is <ip address>/30
          MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec,
            reliability 255/255, txload 1/255, rxload 1/255
          Encapsulation FRAME-RELAY
          Last clearing of "show interface" counters never

    Everything seems to be accounted for to me, but apparently I'm missing something. My issue is that I'm stuck on "interface up, line protocol down", so the T1 doesn't come up. Any ideas? Thank you.
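    One detail in that output worth chasing: 157 LMI enquiries sent with 0 status replies received usually means the LMI type doesn't match what the carrier is running. A hedged sketch (the ANSI choice is a guess; cycle through the three types, or let IOS autosense, and watch "LMI stat recvd" climb):

        interface Serial0/0
         encapsulation frame-relay
         frame-relay lmi-type ansi    ! try ansi, then q933a, then cisco
        ! or remove the explicit setting to re-enable LMI autosense:
        ! no frame-relay lmi-type

    Also note that the quoted config and the show output disagree about where the IP lives (the physical Serial0/0 versus subinterface Serial0/0.100): the DLCI and address normally go on one or the other, not both, so that is worth reconciling too.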


  • Make nginx config like apache2 virtualhosts

    - by user2104070
    I have a web server running Apache2 with many subdomains on it: domain.com, abc.domain.com, def.domain.com, etc. Now I have a new nginx server and want to set it up like the apache2 one, so to test I created two configs (files in /etc/nginx/sites-available/, linked to from sites-enabled/) as shown below.

    domain config:

        server {
            listen 80 default_server;
            listen [::]:80 default_server ipv6only=on;

            root /srv/www/;
            index index.html index.htm;

            # Make site accessible from http://localhost/
            server_name domain.com;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
            }
        }

    abc-domain config:

        server {
            listen 80;
            listen [::]:80;

            root /srv/www/tmp1/;
            index index.html index.htm;

            # Make site accessible from http://localhost/
            server_name abc.domain.com;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
            }
        }

    But when I access domain.com I get the index.html from /srv/www/tmp1/ only. Is there something I'm doing wrong in the nginx config?
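    Symptoms like this usually mean the request isn't matching the server_name you expect and is falling into another block. A hedged first step is to confirm both files really are linked into sites-enabled and dump what nginx actually loaded (nginx -T needs nginx 1.9.2 or newer; on older builds, inspect the files by hand):

        ls -l /etc/nginx/sites-enabled/              # both symlinks present?
        sudo nginx -t                                # syntax check
        sudo nginx -T | grep -E 'server_name|root'   # effective config, if supported
        sudo service nginx reload                    # reload after any change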


  • Is it logical that file system acls would be corrupted in a way that adds permission for another user?

    - by wilbbe01
    I was having issues on a shared hosting provider with the host's web server instance not serving some files. I asked the company's support about the issue; they responded with the results of getfacl on my home directory and added the necessary line to allow their web server to obtain the needed permissions. All is working happily now, but I noticed a line in the getfacl output for what appeared to be another username, to which I have no relation. I asked them about this, and their response was that it was likely some minor corruption and that I could remove the unwanted line with the setfacl -x option. I know I never added that user to my home directory, and I find it hard to believe that could truly happen through corruption. So now that it is fixed, I'm a little bit wary: were they trying to cover up accidentally giving someone else permissions to my account, or can this kind of thing really get corrupted in that way? Especially when that user is a real user on the same server. Any thoughts? Thanks.
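    For reference, the inspect-and-remove cycle the support team described looks like this (a hedged sketch; "otheruser" stands in for the unexpected entry):

        getfacl ~                    # list the current ACL entries
        setfacl -x u:otheruser ~     # drop the named user's entry
        getfacl ~                    # confirm it is gone

    Genuine on-disk corruption would be more likely to make the filesystem complain (or fsck flag the ACL block) than to produce a syntactically valid entry naming a real local user, which is why some skepticism about the "corruption" explanation seems warranted.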


  • Server redirect

    - by Tyy
    I have an ASP.NET app, XYZ, which requires SSL. I have to work with an IIS instance that has only one web site - DefaultWebSite - containing multiple (3rd-party) apps and virtual directories. So my app is located at domain.com/XYZ. I have to meet these conditions:

    1. Requesting DefaultWebSite (domain.com) will run my app. It could redirect to ../XYZ, but I would rather it didn't. If it has to be done this way, requesting DefaultWebSite or my app over either HTTP or HTTPS should always end up redirecting to https://domain.com/XYZ.
    2. I can't touch any other apps or virtual directories, can't create an additional web site, and can't set DefaultWebSite to require SSL.
    3. (EDIT) Transmitted data (GET or POST) must be preserved.

    I tried to:

    - Set the web site root directory to my app, but this caused the other apps to crash because of my Web.config (not sure why).
    - Set up an HTTP redirect on DefaultWebSite to https://domain.com/XYZ. This seems to work correctly, but not if a user requests my app directly (redirected to domain.com/XYZ/XYZ, or a redirect loop).
    - Set up a Default Document, but this seems to work only if it is located in the web site root directory.

    I know I could write a simple .aspx with Response.Redirect, but... is there any better solution? Am I missing something?
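    If the IIS URL Rewrite module is available, one hedged option is a rule in the DefaultWebSite root that only fires on requests for the site root, so the other apps stay untouched; redirectType="Temporary" issues a 307, the variant that asks clients to re-send POST bodies. The whole snippet is a sketch, not a tested config:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="RootToXYZ" stopProcessing="true">
                  <match url="^$" />
                  <action type="Redirect" url="https://domain.com/XYZ/"
                          redirectType="Temporary" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    A redirect of any kind drops POST data unless the client honors 307 semantics, so condition 3 is the part to test first.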


  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host A that has a directory NFS-mounted from server B. A process on A writes to two files, F1 and F2, in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks to the head of the files, writes data, and flushes. Process B seeks to the head of the files and reads.

    Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2?

    I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, because NFS caches some of A's actions before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and the reading and writing processes were running on the same real file system.

    The reason I'm even putting the question up here is that it seems like most of the time the setup described above does detect the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?
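    The standard knobs here are the caching-related mount options on the writer's side - a hedged sketch, with the caveat that they shrink the caching window rather than create an ordering guarantee, and they cost performance:

        # on host A: write synchronously and disable attribute caching
        mount -t nfs -o sync,noac serverB:/export /mnt/shared

    noac forces attribute revalidation on every access and makes application writes synchronous; actimeo=0 is a weaker variant that only shortens the attribute-cache timeout. Close-to-open consistency is the only ordering NFS actually promises, so an explicit notification channel between the two processes is the robust fix.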


  • PHP pages are not parsed by Apache on CentOS

    - by Ram
    I have installed CentOS 5.x, Apache 2.2, PHP 5.3 and MySQL 5.5. I also installed phpMyAdmin. I am able to access phpMyAdmin through the browser without any issues. However, when I create a simple index.php with a phpinfo() call in the default directory, that page is served without PHP parsing it. As we all know, phpMyAdmin is a PHP application, and it works fine from the same server - but a simple PHP page in the doc root directory doesn't?! Of course, I tried moving this page into the phpMyAdmin folder and accessing it there, but no success.

    Please note that I updated the httpd.conf file with the appropriate directives based on the PHP installation guide. The following directives were added to httpd.conf:

        AddTyoe application/x-httpd-php
        LoadModule php5_module /usr/lib/httpd/modules/libphp5.so
        <FilesMatch "\.php$">
            SetHandler application/x-httpd-php
        </FilesMatch>

    File locations are:

        docroot           - /var/www/html
        phpMyAdmin folder - /var/www/html/phpMyAdmin

    File privileges are:

        [root@linuxdev1 html]# ls -Z
        -rwxr-xr-x  root root index.php
        drwxr-xr-x  root root phpMyAdmin
        -rw-r--r--  root root phpMyAdmin-3.4.3.2-english.tar.gz
        drwxr-xr-x  root root test1

    Any help is appreciated.
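    Two things stand out in the quoted directives: "AddTyoe" is a typo Apache would refuse to start on (so if httpd restarted cleanly, that line may not be in the config actually loaded), and AddType needs a file-extension argument. A hedged sketch of the usual mod_php stanza:

        LoadModule php5_module /usr/lib/httpd/modules/libphp5.so
        AddType application/x-httpd-php .php
        <FilesMatch "\.php$">
            SetHandler application/x-httpd-php
        </FilesMatch>
        DirectoryIndex index.php index.html

    After editing, run apachectl configtest and then service httpd restart. phpMyAdmin working while plain .php files don't can also mean the handler is only defined in a config fragment (e.g. /etc/httpd/conf.d/php.conf) whose scope doesn't cover the docroot.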


  • How do I set up Tomcat 7's server.xml to access a network share with a different URL?

    - by jneff
    I have Apache Tomcat 7.0 installed on a Windows 2008 R2 server. Tomcat has access to a share \\server\share that has a documents folder I want to expose as /foo/Documents in my web application. My application can access the documents when I set the file path to //server/share/documents/doc1.doc, but I don't want the file server's path to be exposed in my application's links to the files; I want to be able to set the path to /foo/Documents/doc1.doc.

    Under "Setting the Context Root Directory and Request URL of a Webapp", item two of http://www3.ntu.edu.sg/home/ehchua/programming/howto/Tomcat_More.html says that I can rename the path by putting a context in the server.xml file. So I added the Context element at the bottom of my Host:

        <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">

            <!-- SingleSignOn valve, share authentication between web applications
                 Documentation at: /docs/config/valve.html -->
            <!--
            <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
            -->

            <!-- Access log processes all example.
                 Documentation at: /docs/config/valve.html
                 Note: The pattern used is equivalent to using pattern="common" -->
            <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                   prefix="localhost_access_log." suffix=".txt"
                   pattern="%h %l %u %t &quot;%r&quot; %s %b" />

            <Context path="/foo" docBase="//server/share" reloadable="false"></Context>

        </Host>

    Then I tried to pull the file using /foo/Documents/doc1.doc and it didn't work. What do I need to do to get this working correctly? Should I be using an alias instead? Are there other security issues that this may cause?
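    Since the webapp itself lives elsewhere, an alias is indeed the more natural fit in Tomcat 7: the Context element's aliases attribute grafts an extra directory into an existing context without replacing its docBase. A hedged sketch - the UNC form and whether the Tomcat service account can read the share are the assumptions to verify, and in Tomcat 8+ this attribute was replaced by nested <Resources> elements:

        <Context path="/foo" docBase="foo"
                 aliases="/Documents=//server/share/documents" />

    Tomcat's documentation also recommends putting per-app Context elements in conf/Catalina/localhost/foo.xml rather than in server.xml, so changes don't require editing the global config.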


  • Connecting PC to TV via HDMI/DVI: Windows XP doesn't allow the appropriate screen resolution

    - by Jørgen
    I have a computer that is connected to the living room TV (a Panasonic) via HDMI. There is no other monitor connected. My problem is that the computer, which is running Windows XP, does not allow me to set the proper resolution for the TV. Both the graphics adapter and the TV should support 1280x720, but it cannot be selected - the only available options are 1280x600 and 800x600, both in the "native" Windows dialog box and the custom Intel graphics options dialog box. Does anyone have a suggestion for a solution? Things I've thought of:

    - Setting the resolution directly in the registry (where?)
    - Installing some "custom" monitor driver (the TV manufacturer does not appear to provide any; currently the "generic" one is used)

    Details on the setup:

    Connection: DVI output on the computer via a passive DVI-HDMI adapter to the HDMI input on the TV. Audio is run on a separate link; the TV combines video and audio without any problem, and the resolution problem is there regardless of whether the audio is connected. The connection is several meters long through some walls, so using a VGA cable instead is not an option.

    Other: The computer is running Windows XP. More recent versions of Windows are not an option (I have no licence). Linux is probably not an option (some of the video streaming sites I plan to use do not support it, I think).

    TV: Panasonic TX-L32X10Y, European version; a 720p 32" quite "regular" LCD TV. Allowed resolutions for this input according to the manual (PC via D-SUB/VGA and "regular" HDMI sources have more alternatives):

        Signal name: 640x480 @ 60Hz        Horizontal frequency: 31.47 kHz   Vertical frequency: 60 Hz
        Signal name: 750 (720) / 60p       Horizontal frequency: 45.00 kHz   Vertical frequency: 60 Hz
        Signal name: 1,125 (1,080) / 60p   Horizontal frequency: 67.50 kHz   Vertical frequency: 60 Hz

    Messing with the "zoom" settings on the TV does not affect the available resolution options on the computer.

    Computer: a Dell. The following printout from one of the graphics adapter option pages covers most of it. Note that the report explicitly says the TV supports 1280x720; still, I am not allowed to select it in Graphics Options, only 1280x600 and 800x600. For 800x600 there is a lot of black around the edges; for 1280x600 the screen is "zoomed" so the edges of the monitor image (like the taskbar) are not visible.

        INTEL(R) EXTREME GRAPHICS 2 REPORT
        Report Date: 04/17/2011
        Report Time[hr:mm:ss]: 20:18:02
        Driver Version: 6.14.10.4396
        Operating System: Windows XP* Professional, Service Pack 3 (5.1.2600)
        Default Language: English
        DirectX* Version: 9.0
        Physical Memory: 1021 MB
        Minimum Graphics Memory: 1 MB
        Maximum Graphics Memory: 96 MB
        Graphics Memory in Use: 6 MB
        Processor: x86
        Processor Speed: 2593 MHZ
        Vendor ID: 8086
        Device ID: 2572
        Device Revision: 02

        * Accelerator Information *
        Accelerator in Use: Intel(R) 82865G Graphics Controller
        Video BIOS: 2972
        Current Graphics Mode: 1280 by 600 True Color (60 Hz)

        * Devices Connected to the Graphics Accelerator *
        Active Digital Displays: 1

        * Digital Display *
        Monitor Name: Plug and Play Monitor
        Display Type: Digital
        Gamma Value: 2.20
        DDC2 Protocol: Supported
        Maximum Image Size: Horizontal: Not Available
                            Vertical: Not Available
        Monitor Supported Modes:
          1280 by 720 (50 Hz)
          1280 by 720 (60 Hz)
        Display Power Management Support:
          Standby Mode: Not Supported
          Suspend Mode: Not Supported
          Active Off Mode: Not Supported

    Thanks for any help!

    (Disclaimer: this question was also asked at the Wikipedia Reference Desk some time ago and might show up in a Google search. I got no useful answers there.)


  • Cron job checking for changes in Git repository

    - by HNygard
    We have just moved our server configs to a Git repository, so there should not be any changes in any of the repository folders. I was thinking about how I could set up a cron job to check for any uncommitted changes. How could a cron job be set up to check for changes in a Git repository? Grepping the output of the git status command might just do it, but grep and cron jobs are not my strong side.

    Here are some sample outputs from git status. Standing in the folder containing the git repository (e.g. /path/gitrepo/) with changed files:

        $ git status
        # On branch master
        # Changes not staged for commit:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   apache2/sites-enabled/000-default
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       apache2/conf.d/test
        no changes added to commit (use "git add" and/or "git commit -a")

    Standing in the folder when there are no changes:

        $ git status
        # On branch master
        nothing to commit (working directory clean)

    Update: Being synced up with origin is not important. There should be no local changes; local files that must be in place go into the .gitignore file. In addition to the server configs there are also git repos for content (static web sites, web apps, WordPress, etc.), and none of the repositories should have local changes. We might use Puppet in the long run, since it's being used for development of one of the web apps.
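    Rather than grepping the human-readable output, git has a stable machine-readable mode for exactly this: git status --porcelain prints nothing at all when the tree is clean. A hedged sketch (repo path, mail recipient and schedule are placeholders):

        #!/bin/sh
        # warn if a repository has uncommitted or untracked changes
        REPO=/path/gitrepo
        cd "$REPO" || exit 1
        if [ -n "$(git status --porcelain)" ]; then
            git status --porcelain | mail -s "Uncommitted changes in $REPO on $(hostname)" root
        fi

    And a crontab entry to run it every 15 minutes:

        */15 * * * * /usr/local/bin/check-git-clean.sh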


  • Apache access and error logs written to the same file

    - by user196075
    I have an issue where access and error logs are written to the same file! The configuration in virtualhosts.conf is the following:

        <VirtualHost *:80>
            ServerName ************
            ServerAdmin support@************8
            DocumentRoot /var/www/html/*********.com
            ErrorLog /var/log/httpd/********/********.com_error_log
            CustomLog /var/log/httpd/********/********.com_access_log combined
            <Directory /var/www/html/***********.com>
                Options -Indexes FollowSymLinks
                AllowOverride All
            </Directory>
        </VirtualHost>

    As you can see from the configuration, the access and error logs should be saved separately, but both are written to *.com_access_log. I have double-checked all permissions, group and owner... and can't find anything wrong.

    A previous error in the log file:

        [Thu Sep 19 14:15:02 2013] [error] [client 192.168.10.54] client denied by server configuration: /var/www/html/**********/show_has_offers.php

    I have tried to generate the same error again, and I can find the hit in the access log only, as the following:

        192.168.10.75 - - [24/Oct/2013:08:11:14 +0000] "GET /show_has_offers.php HTTP/1.1" 404 1586 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:22.0) Gecko/20100101 Firefox/22.0" 0 17332

    and nothing in the error log! Please advise.
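    One note on reading those logs: a 404 hit showing up in the access log is normal - every request is recorded there with its status code - so the real question is which ErrorLog the running server is actually using. A hedged diagnostic pass:

        apachectl -S                                  # vhost/log settings actually in effect
        grep -Rn "ErrorLog\|CustomLog" /etc/httpd/    # duplicate or overriding definitions?
        lsof -p $(pgrep -o httpd) | grep _log         # log files the running parent holds open

    If lsof shows the error log open under a different path than expected, some other config fragment is winning.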


  • Passenger and ServerAlias not cooperating

    - by Pyzo
    I have a Ruby application that runs on a server with multiple IP addresses and multiple vhosts. Here is the configuration of the problematic virtual host:

        <VirtualHost 10.0.0.10:80>
            ServerName realname.example.com
            ServerAlias alias.example.com

            DocumentRoot /var/www/sites/example/current/public
            <Directory /var/www/sites/example/current/public>
                AllowOverride all
                Options -MultiViews
            </Directory>

            ErrorLog /var/log/httpd/example_error_log
            CustomLog /var/log/httpd/example_access_log common

            RailsEnv production
            RackEnv production
        </VirtualHost>

    When I pull up realname.example.com, the Ruby on Rails application works correctly. On the other hand, alias.example.com just gives me "Not Found: /". I'm fairly certain the correct vhost is getting used, because alias.example.com produces a 404 in the correct log file. I've tried adding logging to the Passenger config, and it seems to indicate that Passenger is getting the request.

    Note: I can't redirect alias.example.com to realname.example.com. realname is accessed through a CDN, whereas alias is accessed directly.

    Anyone have any ideas why this isn't working? I've been banging my head for days, and I've got a similar configuration in QA that works as expected.


  • ProxyPass for specific vhost

    - by Steve Robbins
    I have a web server that is set up to dynamically serve different document roots for different domains:

        <VirtualHost *:80>
            <IfModule mod_rewrite.c>
                # Stage sites :: www.[document root].server.company.com => /home/www/[document root]
                RewriteCond %{HTTP_HOST} ^www\.[^.]+\.server\.company\.com$
                RewriteRule ^(.+) %{HTTP_HOST}$1 [C]
                RewriteRule ^www\.([^.]+)\.server\.company\.com(.*) /home/www/$1/$2 [L]
            </IfModule>
        </VirtualHost>

    This makes www.foo.server.company.com serve the document root /home/www/foo/. For one of these sites I need to add a ProxyPass, but I only want it applied to that one site. I tried something like:

        <VirtualHost *:80>
            <Directory /home/www/foo>
                UseCanonicalName Off
                ProxyPreserveHost On
                ProxyRequests Off
                ProxyPass /services http://www-test.foo.com/services
                ProxyPassReverse /services http://www-test.foo.com/services
            </Directory>
        </VirtualHost>

    But then I get these errors:

        ProxyPreserveHost not allowed here
        ProxyPass|ProxyPassMatch can not have a path when defined in a location.

    How can I set up a ProxyPass for a single virtual host?
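    Those errors are Apache saying the proxy directives don't belong in a <Directory> block: ProxyPreserveHost is only valid at server or virtual-host scope, and the path form of ProxyPass can't appear inside a container. A hedged sketch of the usual fix - give that one site its own name-based vhost and put the proxy directives at vhost level:

        <VirtualHost *:80>
            ServerName www.foo.server.company.com
            DocumentRoot /home/www/foo

            ProxyPreserveHost On
            ProxyRequests Off
            ProxyPass        /services http://www-test.foo.com/services
            ProxyPassReverse /services http://www-test.foo.com/services
        </VirtualHost>

    Apache prefers an exact ServerName match over the default (first-listed) vhost, so the generic rewrite vhost keeps handling all the other stage sites.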


  • How do I set up an Alias on Apache with XAMPP on Linux? (Permission problem)

    - by knarf
    XAMPP works fine, but I want http://localhost/f to point to /home/knarf/prog/php/fwyxz. I've run chmod -R 777 /home/knarf/prog/php/fwyxz and added this at the end of httpd.conf:

        Alias /f /home/knarf/prog/php/fwyxz

    When I try to access it, I get a 403. From the Apache error_log:

        [error] [client 127.0.0.1] (13)Permission denied: access to /f denied.

    I've already tried several solutions (userdir and symlinks), but they both failed with the same error. I've also tried adding this after the Alias:

        <Directory "/home/knarf/prog/php/fwyxz">
            Order allow,deny
            Allow from all
        </Directory>

    But again, permission denied. Now, if I change the user/group under which Apache runs from nobody to knarf, it seems to work (static files are OK), but PHP can't initialize sessions:

        [error] [client 127.0.0.1] PHP Warning:  session_start() [function.session-start]: open(/tmp/sess_r5nrmu4ugqguqqe83rs53lq6k0, O_RDWR) failed: Permission denied (13) in /home/knarf/prog/php/fwyxz/index.php on line 3
        [error] [client 127.0.0.1] PHP Warning:  Unknown: open(/tmp/sess_r5nrmu4ugqguqqe83rs53lq6k0, O_RDWR) failed: Permission denied (13) in Unknown on line 0
        [error] [client 127.0.0.1] PHP Warning:  Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct () in Unknown on line 0

    This is really frustrating.
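    A (13)Permission denied under the nobody user is classically about search (execute) permission on the parent directories, not on the target itself - chmod -R on fwyxz never touches /home/knarf and friends. A hedged sketch of the check and the fix:

        namei -l /home/knarf/prog/php/fwyxz    # show permissions of every path component
        chmod o+x /home /home/knarf /home/knarf/prog /home/knarf/prog/php

    The session errors under the knarf user are a second, separate problem: the existing session files in /tmp are still owned by nobody. Removing them (rm /tmp/sess_*) or pointing session.save_path at a directory knarf owns should clear that up - though fixing the parent permissions and keeping Apache as nobody is the cleaner route.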


  • How to run a restricted set of programs with Administrator privileges without giving up Admin access (Win7 Pro)

    - by frLich
    I have a shared system running Windows 7 x64, restricted to a 'standard user' account with no password. Not everyone who has access to the system has the administrator password. This works rather well, except for some applications - especially the unlock applications for encrypted hard drives/USB flash drives. The specific ones either require administrator access (e.g. Seagate BlackArmor) or simply fail without it; since these programs are sending raw commands to a device, this is to be expected.

    I would like to be able to add the hashes of these particular programs to a whitelist and have them run as administrator without any prompts. Since they are by definition on removable media, I can't simply use a filename or even a path. One of the users who shares the system can be considered 'crafty', so anything that temporarily grants administrator rights to a user account is certain to cause problems.

    What I'd like to be able to do:

    1. Create an admin account that can only run programs from a whitelist (or, failing that, from a directory). I can't find a good way to do this: as far as I can tell, SRP applies equally to ALL users. Even if I put a "deny" token on all directories on the system, such that new directories would inherit it, it could still potentially run things from the mounted USB devices. I also don't know whether it's possible to create a new directory that DOESN'T inherit from the parent, which would lack the deny token and provide admin access.

    2. Find a lightweight service that will run these programs in its local context. Windows 7 seems to block cross-privilege-level communication by default, and I haven't found such a service for Windows 7. One example seems to be "sudo" (http://pages.cpsc.ucalgary.ca/~nfriess/sudo/), but because it uses a WLNOTIFY hook, it won't work under Vista or Windows 7.

    Non-solutions:

    - RunAs: requires the administrator password! (But everyone calls it "sudo" anyway.)
    - SuRun: from Google: "Surun uses its own Windows service that adds the user to the group of administrators during program start and removes him automatically from that group again".


  • Cloned CentOS 6.4 webserver for test purposes. Virtual host, .htaccess, redirecting URL issue

    - by Shogoot
    I see similar questions, but not my exact challenge. What I have done so far: I cloned a prod server over to a VMware VM to use as a test server for new functionality I'm going to write. I'm not a sysadmin by trade, but I'm new to this company and I have to do some things that are outside of my comfort zone (that's a good thing :)).

    The prod server has two sites on it, s1.com and s2.com. In /html/s1/ and /html/s2/ there's an .htaccess file under each s*/, looking like this:

        RewriteEngine ON
        RewriteBase /
        RewriteCond %{QUERY_STRING} id=([0-9]+)
        RewriteRule ^.* %1.htm
        RewriteCond %{QUERY_STRING} page=modules/checkout
        RewriteRule ^.* order.php
        RewriteCond %{QUERY_STRING} page=pages/sidekart
        RewriteRule ^.* pages/sidekart.htm

    The issue is that s1 has a lot of pages that really belong under a third domain, s3; the rules in lines 4 and 5 redirect them into /html/s1/. An example of such a URL is:

        s3.com/?page=modules/product&id=521614

    I'm trying to get those URLs (without modifying them) to serve from s3's own /html/s3/ structure. On the test server I set up a new virtual host for s3 in httpd.conf, with tests3.com as the ServerName (renaming the other sites to tests1.com and tests2.com), added an .htaccess to the s3 root directory as well, and made an /html/s3/ directory structure populated with an index.html, etc.

    But when I take the same URL, changed to tests3.com/?page=modules/product&id=521614, I get s1's index page in my browser. I've poked around for about a day now and I can't figure out why this happens.
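    When a name-based vhost "falls through" to the first site like this, the usual suspects are name resolution (the test names must point at the VM) and the vhost definitions themselves. A hedged checklist (the IP is a placeholder):

        # on your workstation: make the test names resolve to the VM
        echo "192.168.1.50 tests1.com tests2.com tests3.com" >> /etc/hosts

        # on the test server: dump the parsed vhost list and match order
        apachectl -S

    If apachectl -S doesn't list a vhost whose ServerName is tests3.com - or NameVirtualHost is missing, which Apache 2.2 on CentOS 6 still requires - every unmatched request lands in the first-defined vhost, which would produce exactly the s1 index page you're seeing.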


  • Virtual folder for multiple sites

    - by Cups
    I am creating a very simple flat-file CMS for small (multilingual) websites. The little file writing that goes on is handled by four scripts in a publicly available folder named /edit in each site. Given that I now have two websites on this simple system:

        websiteA/index.php (etc.)
        websiteA/edit/
        websiteB/index.php (etc.)
        websiteB/edit/

    What is the best way of making that /edit folder "virtual", so that these and each subsequent website owner can log in to their own view of /edit while the code exists in only one place? I do not want the website owners to have to log in from a central website, but from their own /edit directory. I have already read about different solutions, seemingly using the <Directory> directive in my httpd.conf declaration for each website, and also using straight mod_rewrite, but I admit to becoming confused by some of the terminology. Each website has its own config file containing path settings and so on. What, in your opinion, is the best way to handle this?

    EDIT: In light of a reply - given a virtual host directive such as this:

        <VirtualHost 00.00.00.00:80>
            DocumentRoot /var/www/html/websitea.com
            ServerName www.websitea.com
            ServerAlias websitea.com
            DirectoryIndex index.htm index.php
            CustomLog logs/websitea combined
        </VirtualHost>

    Is it possible to create an alias inside that directive for the folder websitea.com/edit?
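    Yes - Alias is valid inside a <VirtualHost>, which is exactly the "one copy of the code, one URL per site" shape being asked for. A hedged sketch (the shared path and the Apache 2.2-style access directives are assumptions):

        <VirtualHost 00.00.00.00:80>
            DocumentRoot /var/www/html/websitea.com
            ServerName www.websitea.com
            ServerAlias websitea.com
            DirectoryIndex index.htm index.php
            CustomLog logs/websitea combined

            Alias /edit /var/www/shared/edit
            <Directory /var/www/shared/edit>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Each site's vhost gets the same Alias, and the shared scripts can tell the sites apart at runtime from HTTP_HOST (or from each site's own config file).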


  • Script apparently changing file permissions on Mac OS to 000

    - by half_bit
    I wrote a little shell script that helps install a web application. The script just downloads a zip archive, extracts it, and changes the permissions of the extracted files to those needed to run the webapp. The problem is that some users reported that after running my script, the permissions of every file in their home directory - or even on their whole computer - changed to 000 (except the actual unzipped files, which do have the correct permissions). The only lines in my script actually doing I/O are these:

        URL="http://foo.com/"
        FILENAME="some.zip"

        curl --silent "$URL$FILENAME" -o $FILENAME > /dev/null

        echo "Unzipping...\c"
        if unzip -oqq $FILENAME > /dev/null
        then
            chmod -R 777 app/tmp app/webroot app/Config/database* app/configuration*
            chown -R www:www *
            rm $FILENAME
            echo "\t\t\tOK"
            exit 0
        else
            echo "\t\t\tERROR"
            exit 1
        fi

    I seriously can't explain this to myself. How can this even be possible? It is entirely possible that the users accidentally ran the script in their home directory, but that still wouldn't explain why the permissions were set to 000, not www/777.
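    Whatever sets the 000 bits, the structural hazard in the script is that every path is relative to wherever the user happens to be standing, so "chown -R www:www *" re-owns the entire current directory - run in a home directory, that alone makes everything inaccessible to the user. A hedged hardening sketch (the mktemp workflow and the final install location are assumptions):

        #!/bin/sh
        set -e                        # stop at the first failure
        URL="http://foo.com/"
        FILENAME="some.zip"

        WORKDIR=$(mktemp -d)          # operate only inside a fresh directory
        cd "$WORKDIR"

        curl --silent "$URL$FILENAME" -o "$FILENAME"
        unzip -oqq "$FILENAME"
        rm "$FILENAME"

        chmod -R 777 app/tmp app/webroot app/Config/database* app/configuration*
        chown -R www:www "$WORKDIR"   # never a bare '*' in an unknown cwd

    Quoting $FILENAME and anchoring chown to an absolute path removes the two ways the original could lash out at unrelated files.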


  • What could cause random files being uploaded without permission?

    - by Dustin
    I have been having issues lately with a certain directory. It seems someone or something is placing files into it, and while any attempt to delete them succeeds, they reappear over time (maybe not the exact same ones, but random files). Here is the information I can provide, with several pictures of the problem:

    - sandbox.mys4l.com/visual/files/b1.jpg - files like this have been appearing in my /visual/ folder, and I have no clue where they are coming from.
    - sandbox.mys4l.com/visual/files/b2.jpg - this is what is inside one of those weird files; it appears to be nothing problematic.
    - sandbox.mys4l.com/visual/files/b4.jpg - as you can see, in the time it took me to take the first picture, more odd files showed up. Log files are also being uploaded to this directory, and I know I didn't put them there.
    - sandbox.mys4l.com/visual/files/b7.jpg - this is inside one of those mysterious .log files; I'm not sure what it's all about.

    These files only appear in this specific area, and I'm not sure of their origin, only that they will not go away. I have done a full system scan at least twice with an up-to-date virus scanner and have looked for an unknown script which may be writing them there. Nothing has come up, so I come to you guys, as I hear this is the best place to find answers. I hope this problem has a solution!


  • Is it possible to use different zsh menu selection behaviour for different commands?

    - by kine
    I'm using the menu-select behaviour in zsh, which invokes a menu below the cursor where you can see the various completion possibilities. The .zshrc option I have set for this is:

        zstyle ':completion:*' menu select=2

    By default, pressing Return to select a possibility in this menu only completes the word - it does not actually send the command. For example, I might get a menu like this:

        ~ % cd de<TAB>
        completing directory:
        [Desktop/]  Development/

    Pressing Return here will result in:

        ~ % cd Desktop/

    I then have to press Return a second time to actually send the command. I can modify this behaviour so that pressing Return both selects the completion and sends the command by doing this:

        bindkey -M menuselect '^M' .accept-line

    However, there's a problem with this: sometimes I need to complete a file or directory without sending the command. For example, I might need to do ln -s Desktop Desktop2 - with this bindkey behaviour, trying to complete Desktop will result in ln -s Desktop/ being sent as the command, and obviously I don't want that. I'm aware that just pressing space will let me get on with the command, but the Return press is now a habit. Given this, is there a way to make it so that only some commands let you press Return once (like cd), but all other commands require pressing it twice?
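    Short of writing a custom widget that inspects the command word, a common compromise is per-keystroke rather than per-command: keep Return as "insert the match only" and bind a second key in the menuselect keymap to "insert the match and run the line". A hedged sketch for .zshrc (the key choice is a matter of taste; ^[^M is Alt-Return in most terminals):

        zmodload zsh/complist                      # makes the menuselect keymap available
        bindkey -M menuselect '^M'   accept-line   # Return: just take the match
        bindkey -M menuselect '^[^M' .accept-line  # Alt-Return: take the match and execute

    With this, cd Desktop<Alt-Return> runs immediately, while ln -s Desktop<Return> only completes the word.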


  • Set document root for external subdomain (A Record) via htaccess

    - by 1nsane
    I have a managed server (so I am unable to control the Apache settings) with a default document root of /var/www, and a web app running in /var/www/subdomains/app/webroot. A dedicated domain managed by the host points at that webroot and works perfectly.

    I would like to allow externally provisioned domains to point to the server/web app via an A record. If I access the site by IP, it takes me to the index located in /var/www, so I configured the .htaccess in /var/www to rewrite requests from the external subdomain into the /var/www/subdomains/app/webroot directory:

        RewriteCond %{HTTP_HOST} external\.domain\.com$ [NC]
        RewriteRule ^(.*)$ /subdomains/app/webroot/index.php?url=$1 [L,QSA]

    When accessing external.domain.com, the app loads properly, but the paths to things like CSS files and images are prefixed with /subdomains/app/, causing broken links. I've tried changing the RewriteBase (both in /var/www and /var/www/subdomains/app/webroot), as I believe that's what it's designed for, but no luck. Any ideas? FYI, the app is built on CakePHP. Thanks.
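    One hedged refinement is to send requests for real files (CSS, images) straight into the webroot and only funnel the rest through index.php - the filesystem check keeps the first rule from looping:

        RewriteCond %{HTTP_HOST} external\.domain\.com$ [NC]
        RewriteCond %{DOCUMENT_ROOT}/subdomains/app/webroot%{REQUEST_URI} -f
        RewriteRule ^(.*)$ /subdomains/app/webroot/$1 [L]

        RewriteCond %{HTTP_HOST} external\.domain\.com$ [NC]
        RewriteRule ^(.*)$ /subdomains/app/webroot/index.php?url=$1 [L,QSA]

    Whether the generated HTML then stops emitting the /subdomains/app/ prefix depends on CakePHP's idea of its base path (App.base / App.baseUrl in its config), which is the part to verify - the rewrite above only fixes the requests, not the links the app writes out.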


  • Cannot browse Visual SVN Repositories

    - by user1783560
    I went through "Unable to browse repository after setting up Visual SVN Server" and tried to implement the suggested answer, but even using the IP address does not solve my problem. I have:

    1. Turned off the firewall.
    2. Set the directory C:\Program Files (x86)\VisualSVN Server security rights to "full control" for all groups, including the "Network Service" group.
    3. Set the directory D:\Repositories security rights to "full control" for most groups, including the "Network Service" group.
    4. Tried switching between the secure and non-secure connection options in the Visual SVN Server manager.
    5. Tried different port numbers, including 443 (with https) and 80 (http); also tried random ports with and without https.

    Using the computer name or the IP address in the URL doesn't help. The Windows Network Diagnostics troubleshooting report has the following message for me:

        Security policy settings or firewall settings on this computer might be blocking the connection. = Detected

    Can someone guide me, please? I have spent more than two weeks on this now without any solution. Thanks in advance! I appreciate your responses.


  • Accessing CIFS shares from an OS X machine incredibly slow

    - by Aron Rotteveel
    This is a long shot, because this issue seems widely reported and unanswered on the internet (see the references below), but it is about time it was permanently solved.

    The facts:

    - Server: Windows Server 2008, acting as a file server.
    - Client: OS X Lion 10.7.3.
    - Method of connecting: directly via IP through Finder: smb://192.168.1.100/share

    The problem:

    - The initial connection attempt takes about a minute.
    - After the connection is made, it takes one more minute to show the directories in Finder.
    - After navigating to any other directory, it takes several seconds to minutes to parse the directory, seemingly based on the size of its contents.
    - Actually, my entire Finder has this problem after connecting: when using Finder to show my desktop, it can literally take up to a minute to load.

    Obviously, this is not right. I have no clue how to fix this and would appreciate any help I can get. I am unsure what other relevant information I can provide, but if there is any, please let me know and I will update the post.

    I seem to be not the only one having this problem: most importantly, an apple.stackexchange.com entry, unsolved and unanswered; several users on the Apple support forums; and users on EduGeek.


  • File system damage

    - by jffrs
    I'm trying to recover using the backup superblock on /dev/sda2, which contains Ubuntu 12.04 LTS on an ext4 partition, from an Ubuntu 10.04 live CD. The output is below:

        root@ubuntu:/home/ubuntu# fsck.ext4 -b 163840 -B 4096 /dev/sda2
        e2fsck 1.41.11 (14-Mar-2010)
        /dev/sda2 was not cleanly unmounted, check forced.
        Resize inode not valid.  Recreate? yes

        Pass 1: Checking inodes, blocks, and sizes
        Programming error?  block #7963637 claimed for no reason in process_bad_block.
        Programming error?  block #11240437 claimed for no reason in process_bad_block.
        Root inode is not a directory.  Clear? yes

        Inode 712 is in extent format, but superblock is missing EXTENTS feature
        Fix? yes

        Inode 98519 has compression flag set on filesystem without compression support.  Clear? yes

        Inode 98519 has INDEX_FL flag set but is not a directory.
        Clear HTree index?

    What's the correct procedure?
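    A hedged note on procedure: prompts like "Root inode is not a directory" and "superblock is missing EXTENTS feature" suggest the superblock passed via -b doesn't actually describe this filesystem (wrong location or block size), and answering "yes" lets e2fsck rewrite metadata based on that bad assumption. Before repairing, it is worth listing where the backups really live:

        # print the filesystem's own record of its backup superblocks
        dumpe2fs /dev/sda2 | grep -i superblock

        # or simulate mkfs to see where they would be (-n writes nothing, but the
        # listed locations are only accurate if the options match the original mkfs)
        mke2fs -n /dev/sda2

    Then retry against one of the reported backups, e.g. fsck.ext4 -b 32768 /dev/sda2. An e2fsck newer than 10.04's 1.41.11 (say, from a 12.04 live image) is also a safer match for an ext4 volume created by 12.04.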


  • Escaping %’s in file-/folder-names at the command-line

    - by Synetech
    Does anybody know of a way to access files and directories that have a % in their name (which is valid) from the command line? Specifically, when there are two %'s and the text between them happens to correspond to an environment variable. For example, if there is a file called C:\blah\%temp%.txt or a folder called C:\Program Files\%temp%\, none of the following works, because the variable gets expanded:

        > dir "c:\blah\%temp%.txt"
        > dir "c:\blah\^%temp^%.txt"
        > dir "c:\blah\%%temp%%.txt"
        > dir "c:\blah\\%temp\%.txt"
        > dir "c:\program files\%temp%"
        > dir "c:\program files\^%temp^%"
        > dir "c:\program files\%%temp%%"
        > dir "c:\program files\\%temp\%"

    Using wildcards will work, but does not uniquely select the file/folder and may include others:

        > dir "c:\blah\?temp?.txt"        (also shows ztempz.temp, 1tempa.txt, etc.)
        > dir "c:\program files\?temp?"   (likewise)

    (This is frustrating because every now and then - usually when Explorer is restarted for whatever reason - the environment variables stop expanding, and some places where they are used end up creating files or directories with the environment variable in the name. For example, because I configured Chromium to store its cache in a subdirectory of %temp%, when the variable expands it is fine, but when it doesn't, Chromium creates a directory called %temp% under its own directory and stores the cache - which can get large - there. I want to add a line to my temp-/junk-file cleaning script to automatically delete that folder if it exists, but I cannot figure out how to access it from the command line without resorting to wildcards.)
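    Since the goal is a line in a cleaning script anyway, the batch-file escape is probably enough: inside a .cmd/.bat file (unlike at the interactive prompt), %% is a literal percent sign, so the variable name is never expanded. A hedged sketch with the paths from the question:

        @echo off
        rem %%temp%% reaches the parser as the literal string %temp%
        if exist "C:\blah\%%temp%%.txt" del "C:\blah\%%temp%%.txt"
        if exist "C:\Program Files\%%temp%%\*" rd /s /q "C:\Program Files\%%temp%%"

    The "\*" in the if exist test is the usual batch idiom for "this directory exists".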


  • What software will tell me if I've already downloaded a video? [closed]

    - by dave
    (I use Linux KNOPPIX (distro 7.0.2, ver 3.3.7) on a hard drive.)

    I download videos of TV programs from the 60s and 70s (mainly from YouTube). I copy the YouTube URL, then paste it into www.keepvid.com to download the video (usually in .mp4 format). Having now got dozens of such video files (and growing) on my hard drive, I'd like to organise them.

    What I'd like to do is this: say I find a new video of a TV programme on YouTube (or another site) and I'm about to download it. It's possible that I've watched it before but have forgotten. Is there software out there that will:

    1. Check for me whether I've already watched the video I'm about to download. At the moment, once I've watched a video on my hard drive, I move the file to another directory called "Watched", but this of course doesn't alert me in the immediate way I want.
    2. Failing that, it would crudely suffice if the software told me after I've downloaded the video that I've already watched it (i.e. that it's already in my "Watched" directory, or perhaps in a "watched" list). Note that I sometimes alter the filename of the original video file on the hard drive, so a plain filename comparison might fail.

    If the software alerts me to the fact that I've already watched the video (preferably before I download it), then I can confidently download only new videos that I haven't watched before and save myself duplicated effort. I'd be most grateful if anyone can suggest such a piece of software, or an alternative solution.

    I'll be honest: I avoid software that infringes your privacy and control - you know, software that automatically does things behind your back, like upgrading itself over the internet, putting things on your hard drive that you didn't ask for, or sending information from your hard drive to websites.
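    Absent a dedicated tool, a small shell script covers the "check before downloading" case (item 1 above), and it survives renamed files if you key on the video ID inside the YouTube URL instead of the filename. A hedged sketch - the watched.list location is an assumption, and you'd append each ID to it when moving a file to "Watched":

        #!/bin/sh
        # usage: ./watched-check.sh 'http://www.youtube.com/watch?v=XXXXXXXXXXX'
        URL="$1"
        LIST="$HOME/watched.list"

        # pull the 11-character video ID out of the URL
        ID=$(printf '%s\n' "$URL" | sed -n 's/.*[?&]v=\([A-Za-z0-9_-]\{11\}\).*/\1/p')

        if [ -n "$ID" ] && grep -qx "$ID" "$LIST" 2>/dev/null; then
            echo "Already watched: $ID"
        else
            echo "Not seen before: $ID - safe to download"
        fi

    Everything here is stock KNOPPIX tooling (sh, sed, grep), so nothing needs to phone home.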

