Search Results

Search found 59526 results on 2382 pages for 'chris kawalek(at)oracle com'.


  • Regular expression htaccess RewriteRule

    - by Rick
    I am new to using regular expressions for rewriting URLs in .htaccess. I need to redirect mysite.com/123 to mysite.com/, IF a cookie named 'ref' is set. My current .htaccess is:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{HTTP_COOKIE} ref=true [NC]
        RewriteRule ^/([0-9]+)/$ http://www.mysite.com
        </IfModule>

    The goal is that when someone enters the site with mysite.com/111 (some number), they are redirected to the home page of the site after the cookie is set. Be nice... I'm new! ;o)
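
    One hedged sketch of a fix, untested: in a per-directory .htaccess context, mod_rewrite strips the leading slash before matching, so a pattern starting with ^/ never matches, and mysite.com/123 has no trailing slash either. The [R=302,L] flags are an assumption about the intended redirect behavior:

        RewriteEngine On
        RewriteCond %{HTTP_COOKIE} ref=true [NC]
        RewriteRule ^[0-9]+/?$ http://www.mysite.com/ [R=302,L]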

    Read the article

  • Get Jira to run on shared Windows server on port 80

    - by codeulike
    I know this can be done on Linux with JIRA, using mod_proxy, but I'm not sure if it's possible on Windows. Say we have a Windows server running IIS 7.0 and serving up pages on port 80, via an address like http://twiddle.something.com. We then install JIRA on the server; it uses its bundled Apache web server to serve stuff up on port 8080, like this: http://twiddle.something.com:8080. Is there a way to configure IIS and Apache so that JIRA runs off a port 80 folder, as in:

        http://twiddle.something.com       still hits IIS
        http://twiddle.something.com/Jira  hits JIRA on Apache?

    Thanks. Edit: I guess we might also want to throw SSL into the mix for JIRA too...
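
    One hedged possibility, assuming the IIS URL Rewrite module and the Application Request Routing (ARR) extension are installed and ARR's proxy mode is enabled: a reverse-proxy rule in the site's web.config (inside the system.webServer section) could forward /Jira to the local port 8080. A sketch, untested; the rule name is illustrative, and it assumes JIRA's context path and base URL are set to /Jira so the links it generates match the proxied path:

        <rewrite>
          <rules>
            <rule name="Jira reverse proxy" stopProcessing="true">
              <match url="^Jira(/.*)?$" />
              <action type="Rewrite" url="http://localhost:8080/Jira{R:1}" />
            </rule>
          </rules>
        </rewrite>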

    Read the article

  • How to connect to a remote IIS with INETMGR on Windows 7

    - by Chris Marisic
    I can't seem to find any way to connect to a remote IIS instance with my local INETMGR on my Windows 7 machine. It only shows the settings for my local IIS. I've tried clicking various places on the connections panel, changing the address in the address bar, and checking the various menu-bar menus, but none of them seem to offer a way to connect to another machine.

    Read the article

  • SSL connection hangs at client hello (curl, openssl client, apt-get, wget, everything)

    - by Niklas B
    Hi, I've run into a problem on my Debian VPS (a Xen domU) regarding SSL: almost all SSL connections hang at the client hello. For example:

        # curl -vI https://graph.facebook.com
        About to connect() to graph.facebook.com port 443 (#0)
        Trying 66.220.146.48... connected
        Connected to graph.facebook.com (66.220.146.48) port 443 (#0)
        successfully set certificate verify locations:
        CAfile: none
        CApath: /etc/ssl/certs
        SSLv3, TLS handshake, Client hello (1):

    It's the same when using the openssl client. However, some SSL traffic works (for example https://www.nordea.se).

    Server:

        # uname -a
        Linux server.com 2.6.26-1-xen-amd64 #1 SMP Fri Mar 13 21:39:38 UTC 2009 x86_64 GNU/Linux

    It does, however, work on my dom0 (the main Xen host).

    Apt-get: I can't even run apt-get update with the Debian security sources (it hangs on reading headers).

    OpenSSL: At the beginning I thought I had an old OpenSSL client (0.9.8o-4), since I appeared to have a newer one on the dom0 (0.9.8g-15+lenny8), but doing a manual update of the openssl deb didn't help.

    OpenSSL client: this is the full output of when the openssl client hangs: http://pastebin.com/PAjwMap9

    Closing thoughts: I've Googled the crap out of this, and I'm not getting any further. I've seen problems with curl, apt-get etc., but they are all specific to the particular application, not general for the system. Any thoughts?
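
    A hedged diagnostic idea, not from the original post: a handshake that stalls right after Client Hello on a Xen guest is consistent with the server's larger reply packets (the certificate chain) being dropped, typically an MTU or checksum/segmentation-offload issue on the virtual interface. Two illustrative checks, assuming the interface is eth0:

        ip link set dev eth0 mtu 1400      # temporarily lower the MTU, then retry curl
        ethtool -K eth0 tx off tso off     # temporarily disable offloading, then retry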

    Read the article

  • Require and Includes not Functioning Nginx Fpm/FastCGI

    - by Vince Kronlein
    I've split up my FPM pools so that PHP will run under each individual user, and set the routing correctly in my vhost.conf files to pass the proper port number. But I must have something incorrect in my environment, because on this new domain I set up, require, require_once, include, and include_once do not function; or rather, they may not be getting passed up to the interpreter to be rendered as PHP. Since I already have a WordPress install on this server that runs perfectly, I'm pretty sure the error is in my server block for nginx:

        server {
            server_name www.domain.com;
            rewrite ^(.*) http://domain.com$1 permanent;
        }
        server {
            listen 80;
            server_name domain.com;
            client_max_body_size 500M;
            index index.php index.html index.htm;
            root /home/username/public_html;
            location / {
                try_files $uri $uri/ index.php;
            }
            location ~ \.php$ {
                if (!-e $request_filename) {
                    rewrite ^(.*)$ /index.php?name=$1 break;
                }
                fastcgi_pass 127.0.0.1:9002;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
            location ~ /\.ht {
                deny all;
            }
        }

    The problem, I think, is that there are dynamic calls to the doc-root index file, while all calls to anything within a sub-folder should be routed as normal, i.e. NOT passed to index.php. I can't seem to find the right mix here. It should run like so:

        domain.com/cindy (file doesn't exist)         --> index.php?name=$1
        domain.com/admin/anyfile.php (files DO exist) --> admin/anyfile.php?$args
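
    A hedged sketch of one common arrangement, untested here: let try_files serve real files and directories first and fall back to a named location, so existing files under sub-folders are never rewritten. The parameter name follows the post's own scheme:

        location / {
            try_files $uri $uri/ @missing;
        }
        location @missing {
            rewrite ^/(.*)$ /index.php?name=$1 last;
        }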

    Read the article

  • Hosting several domains on one server using IIS 7

    - by Øyvind Knobloch-Bråthen
    I have created several web sites inside IIS 7 on my server. All of them use the same IP and port, but different host names. Currently I have set the host name to www.mydomain.com. Now my question is: how do I get my actual domains to target the different sites on my server? Second question: can I set my host name to only mydomain.com to make sure that all requests to that domain are handled by the same application? Primarily, I want both www.mydomain.com and mydomain.com to work when the user types the address in their browser.
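
    For illustration, a hedged sketch using the appcmd tool that ships with IIS 7 (the site name is a placeholder): each host name a site should answer to gets its own binding, so one site can serve both the bare domain and the www form:

        appcmd set site /site.name:"MySite" /+bindings.[protocol='http',bindingInformation='*:80:mydomain.com']
        appcmd set site /site.name:"MySite" /+bindings.[protocol='http',bindingInformation='*:80:www.mydomain.com']

    DNS for each domain still has to point at the server's IP for the host-header match to happen.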

    Read the article

  • Postfix and Google Apps email limit

    - by moolagain
    I installed Postfix on my Ubuntu VPS. The server is hosting mysite.com. I set the MX records for mysite.com to use Google Apps for email: ASPMX.L.GOOGLE.COM ... I am using PHP's mail() function, and emails are working. Google Apps has a 500-emails-per-day limit. Am I using up this limit? I'm not sure what Postfix is doing exactly. Thanks
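
    A hedged way to check, for illustration: if Postfix has no relayhost configured, PHP's mail() hands messages to the local Postfix, which delivers them directly to each recipient's MX rather than through Google's servers, so Google's sending limit would not apply (MX records only govern inbound mail). Two commands that show this, assuming a standard Ubuntu layout:

        postconf relayhost                        # an empty value means direct delivery, not via Google
        grep 'relay=' /var/log/mail.log | tail    # shows which server accepted each outbound message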

    Read the article

  • Postfix "mail-to-script" pipe only delivers empty messages

    - by user68202
    I have a problem here. I want an incoming email to be piped to a PHP script on the system through Postfix. My system is running ISPConfig 3, Postfix and Dovecot (virtual mailbox users are saved in MySQL). I already looked into this one: How to configure postfix to pipe all incoming email to a script? ... the script is executed, but no message is delivered to the script. My setup so far: in ISPConfig 3 I have set up the following email route:

        Active  Server       Domain            Transport  Sort by
        Yes     example.com  pipe.example.com  piper:     5

    Excerpt from my Postfix master.cf:

        piper unix - n n - - pipe
          user=piper:piper directory=/home/piper argv=php -q /home/piper/mail.php

    So far it is working great (mail sent to [email protected]) (mail.log):

        Jun 21 16:07:11 example postfix/pipe[10948]: 235CF7613E2: to=<[email protected]>, relay=piper, delay=0.04, delays=0.01/0.01/0/0.02, dsn=2.0.0, status=sent (delivered via piper service)

    ... and there are no errors in mail.err. The mail.php is successfully executed (it's chmod 777 and chown'ed to piper), but it creates an empty .txt file (normally it should contain the email message):

        -rw------- 1 piper piper 0 Jun 21 16:07 mailtext_1340287631.txt

    The mail.php script I've used is the one from http://www.email2php.com/HowItWorks. If I use their (commercial) service to pipe an email to the mail.php (in an Apache 2 environment) through a provided "pipe-email", the message is saved successfully and complete:

        -rw-r--r-- 1 web2 client0 1959 Jun 21 16:19 mailtext_1340288377.txt

    But as you can see, I don't want to use external services. So, what's wrong here? I think it has something to do with the delivery configuration in my system...
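
    For illustration, a hedged minimal script: when Postfix's pipe daemon runs a command, it writes the full message to the command's standard input, so the script has to read php://stdin rather than any web-request variable (which is what a script written for an Apache environment would expect). A sketch, with an illustrative output path:

        <?php
        // read the complete message (headers + body) that postfix pipes in
        $message = file_get_contents('php://stdin');
        file_put_contents('/home/piper/mailtext_' . time() . '.txt', $message);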

    Read the article

  • Apache mod_proxy or mod_rewrite to hide the root of a webserver behind a path

    - by Giovanni Nervi
    I have two Apache 2.2.21 servers, one external and one internal. I need to map the internal Apache behind a path on the external Apache, but I have some problems with absolute URLs. I tried these configurations:

        RewriteEngine on
        RewriteRule ^/externalpath/(.*)$ http://internal-apache.test.com/$1 [L,P,QSA]
        ProxyPassReverse /externalpath/ http://internal-apache.test.com/

    or

        <Location /externalpath/>
            ProxyPass http://internal-apache.test.com/
            ProxyPassReverse http://internal-apache.test.com/
        </Location>

    My internal Apache uses absolute paths to find resources such as images, CSS and HTML, and I can't change it now. Any suggestions? Thank you
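
    One hedged avenue: ProxyPassReverse only rewrites HTTP headers (redirects, cookies), not absolute links inside HTML bodies. The mod_proxy_html module (a separate, third-party install for Apache 2.2) can rewrite those links on the fly. An illustrative sketch, assuming the module is loaded:

        ProxyPass        /externalpath/ http://internal-apache.test.com/
        ProxyPassReverse /externalpath/ http://internal-apache.test.com/
        ProxyHTMLEnable  On
        ProxyHTMLURLMap  / /externalpath/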

    Read the article

  • Cannot connect to WEBrick on home network

    - by Chris Stewart
    I'm an Android developer, and often my applications require server-side code. I typically use Ruby on Rails for the web app, and during development will run the server on my local machine (Mac OS X) with WEBrick. In the morning when I get to the office, I'll run ifconfig in the console to see what IP my laptop has been given that day. I'll use that IP in my Android app when making requests to the web app in question. This all works fine when I'm in my office.

    When I get home, I attempt to do the same thing: find my laptop's IP via ifconfig and set it in my app's config file, but the destination can never be found. To rule my app out, I attempt to visit the web server's IP (e.g., http://192.168.1.4:3000) from my phone's browser, and it cannot connect. If I try from my laptop, which is running the web server, it works fine. If I try from another machine on the same network, it also is unable to connect. Given this, I think I've narrowed it down to some kind of configuration in my home network, but I frankly have no idea what the cause could be. I don't have anything special at home: your basic Verizon FiOS router/modem with everything connected via Wi-Fi (Wi-Fi for both phone and laptop at work as well, FYI). I've tried disabling the firewall on my Verizon router, enabling port forwarding, and just about everything else I could do for port 3000, and nothing has changed. Dear Server Fault geniuses, please help a poor developer out. :)

    Edit: Some follow-up items to add. My Mac's firewall is not active, and all incoming requests are allowed. I've also verified on my phone and laptop that they're on the same network (192.168.1.4 Mac, 192.168.1.9 phone). I have no idea why this isn't working.

    Edit 2: I went into System Preferences, enabled Web Sharing, and tried to view the website from my phone, and it didn't connect. So it's not WEBrick or related to Rails. The firewall on my machine is off and the firewall on my router is off.

    Edit 3: Some progress. I set up port forwarding for port 3000 to my laptop, found the external IP, and used that, and it connected fine. So there's definitely something not quite set up correctly on my internal network.

    Read the article

  • How to remove the $DATA stream from a file in Windows 8

    - by chris.w.mclean
    Windows has for a while now added an additional hidden stream to files that were downloaded from the internet. If you attempt to use these files, you get all kinds of odd behavior, as Windows detects this additional stream and then prevents the app/exe from getting all sorts of security clearance. But in previous versions of Windows you could right-click on a file, go to Properties, then click 'Unblock', which removed the extra stream. Windows 8 seems to be doing the additional-streams trick, but I haven't yet found a way to remove them using the Windows 8 UI. Anyone know how to do this?
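
    For illustration, a hedged alternative to the Properties dialog: PowerShell 3.0, which ships with Windows 8, can remove the stream (conventionally named Zone.Identifier) directly. The file name below is a placeholder:

        Unblock-File -Path .\downloaded.exe
        # or, removing the alternate data stream explicitly:
        Remove-Item -Path .\downloaded.exe -Stream Zone.Identifier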

    Read the article

  • SCCM SP2 - OOB Management Certificates Problems

    - by Achinoam
    Hi experts, I have a vPro client computer with AMT 4.0. It was imported successfully via the Import OOB Computers wizard, and after sending a "hello" packet it became provisioned (the SCCM GUI displays AMT Status: Provisioned). But when I try to perform power operations on this machine, they always fail with the following lines in the log:

        AMT Operation Worker: Wakes up to process instruction files  7/29/2009 10:59:29 AM  2176 (0x0880)
        AMT Operation Worker: Wait 20 seconds...  7/29/2009 10:59:29 AM  2176 (0x0880)
        Auto-worker Thread Pool: Work thread 3884 started  7/29/2009 10:59:29 AM  3884 (0x0F2C)
        session params : https://amt4.domaindemo.com:16993 , 11001  7/29/2009 10:59:29 AM  3884 (0x0F2C)
        ERROR: Invoke(invoke) failed: 80020009argNum = 0  7/29/2009 10:59:31 AM  3884 (0x0F2C)
        Description: A security error occurred  7/29/2009 10:59:31 AM  3884 (0x0F2C)
        Error: Failed to Invoke CIM_BootConfigSetting::ChangeBootOrder_INPUT action.  7/29/2009 10:59:31 AM  3884 (0x0F2C)
        AMT Operation Worker: AMT machine amt4.domaindemo.com can't be waken up. Error code: 0x80072F8F  7/29/2009 10:59:31 AM  3884 (0x0F2C)
        Auto-worker Thread Pool: Warning, Failed to run task this time. Will retry(1) it  7/29/2009 10:59:31 AM  3884 (0x0F2C)

    After investigation, I've seen that the problem already occurs in the 2nd stage of provisioning:

        Start 2nd stage provision on AMT device amt4.domaindemo.com.  8/2/2009 4:55:12 PM  2944 (0x0B80)
        session params : https://amt4.domaindemo.com:16993 , 11001  8/2/2009 4:55:12 PM  2944 (0x0B80)
        Delete existing ACLs...  8/2/2009 4:55:12 PM  2944 (0x0B80)
        ERROR: Invoke(invoke) failed: 80020009argNum = 0  8/2/2009 4:55:14 PM  2944 (0x0B80)
        Description: A security error occurred  8/2/2009 4:55:14 PM  2944 (0x0B80)
        Error: Cannot Enumerate User Acl Entries.  8/2/2009 4:55:14 PM  2944 (0x0B80)
        Error: CSMSAMTProvTask::StartProvision Fail to call AMTWSManUtilities::DeleteACLs  8/2/2009 4:55:14 PM  2944 (0x0B80)
        Error: Can not finish WSMAN call with target device.
          1. Check if there is a winhttp proxy to block connection.
          2. Service point is trying to establish connection with wireless IP address of AMT firmware but wireless management has NOT enabled yet. AMT firmware doesn't support provision through wireless connection.
          3. For greater than 3.x AMT, there is a known issue in AMT firmware that WSMAN will fail with FQDN longer than 44 bytes. (MachineId = 17)  8/2/2009 4:55:14 PM  2944 (0x0B80)
        STATMSG: ID=7208 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_AMT_OPERATION_MANAGER" SYS=JE-DEV-MS0 SITE=JR1 PID=1756 TID=2944 GMTDATE=Sun Aug 02 14:55:14.281 2009 ISTR0="amt4.domaindemo.com" ISTR1="amt4.domaindemo.com" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=0  8/2/2009 4:55:14 PM  2944 (0x0B80)

    This error is consistent across all the other 2nd-stage provisioning tasks (Add ACLs, Enable Web UI, etc.). I've opened the certification authority, and I see that the certificates were issued to the SCCM site server instead of the AMT client! What could be the reason for this failure? What is the problematic definition for the certificate? Thank you in advance!!!

    Read the article

  • Chrome developer tools - network panel gaps

    - by Chris Nicholson
    In the Chrome developer tools, under the Network tab, I'm curious to know what is happening during the gaps. If you look at my image below, I have highlighted in orange the areas where these gaps exist. Where I'm able to load a lot of my page from cache, it's a shame these large gaps occur, as they make up most of my page load time. What exactly is happening in this time? EDIT: Okay, I found this answer, which essentially sums up my question, so a different question: does anyone know a good method to reduce the length of these gaps? Presumably (albeit rather extreme), if I inlined all my CSS on the page there wouldn't be a delay after loading the CSS file before the images were loaded.

    Read the article

  • Is it possible to restrict fileserver access to domain users using computers that are members of the domain?

    - by Chris Madden
    It seems domain isolation can be used to accomplish this, but I'd like a solution that doesn't require IPsec, or more accurately, doesn't require IPsec on the fileserver. IPsec done in software has a large CPU overhead, and our NAS boxes don't support any kind of offload. The goal is to prevent authenticated users from using non-managed machines to access network resources. Network Access Protection (NAP) and the various enforcement points looked promising, but I couldn't find a bulletproof way to use them that doesn't require IPsec on the fileserver. I was thinking that when a domain user accesses the NAS box it will first need a Kerberos ticket from AD, so if AD could somehow verify that the computer requesting the ticket was in the domain, I'd have a solution.

    Read the article

  • Apache Configuration Issue - website without www going to default site

    - by Brian
    I have included a copy of my virtual host file for Apache below (however, I have hidden the IP address and domain name for now). My problem is that the following work:

        www.mydomainnamehere.org
        www.mydomainnamehere.com
        mydomainnamehere.com

    This one doesn't work:

        mydomainnamehere.org

    Instead of going to the document root listed below, it goes to the document root of the default site. What could be causing this?

        <VirtualHost [ipaddresshidden]:80>
            ServerAdmin [email protected]
            ServerName mydomainnamehere.org
            ServerAlias www.mydomainnamehere.org
            ServerAlias mydomainnamehere.com
            ServerAlias www.mydomainnamehere.com
            DocumentRoot /home/www/mydomainnamehere.org/html/
            ErrorLog /home/www/mydomainnamehere.org/logs/error.log
            CustomLog /home/www/mydomainnamehere.org/logs/access.log combined
        </VirtualHost>
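
    A hedged first check, for illustration: Apache can dump its parsed virtual-host mapping, which shows exactly which vhost (or default) answers each name. If mydomainnamehere.org lands on the default, another vhost or a NameVirtualHost/Listen mismatch is usually to blame:

        apachectl -S    # or: httpd -S / apache2ctl -S, depending on the distribution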

    Read the article

  • How to add an exception for a backup MX to tumgreyspf?

    - by Waleed Hamra
    I have an Ubuntu Raring server running Postfix/Dovecot as an email server, with tumgreyspf doing greylisting and SPF checks. My problem is that I also have a backup MX server that is supposed to store my emails temporarily should my main server ever fail. It usually declines to receive emails if it finds the main server online and functional. The problem is that when it does need to do its job, tumgreyspf rejects all emails from the backup MX with an error like this:

        Jun 27 16:18:13 hamra postfix/smtpd[28732]: NOQUEUE: reject: RCPT from mxbackup.mydomain.com[x.x.x.x]: 550 5.7.1 <[email protected]>: Recipient address rejected: QUEUE_ID="" SPF Reports: 'SPF fail - not authorized'; from=<[email protected]> to=<[email protected]> proto=SMTP helo=<mxbackup.mydomain.com>

    Any ideas?
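
    A hedged sketch of one common workaround, with illustrative file and service names: exempt the backup MX's address before the policy check runs, so relayed mail from it skips the SPF test (the backup MX legitimately fails SPF because it is not the original sender's authorized host):

        # main.cf: whitelist entry placed before the tumgreyspf policy service
        smtpd_recipient_restrictions =
            permit_mynetworks,
            check_client_access hash:/etc/postfix/mx_whitelist,
            # ...your existing restrictions...
            check_policy_service unix:private/tumgreyspf

        # /etc/postfix/mx_whitelist (then run: postmap /etc/postfix/mx_whitelist)
        x.x.x.x    OK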

    Read the article

  • PC power supply & normal range for voltages reported in BIOS hardware monitor?

    - by Chris W. Rea
    I'm trying to diagnose whether my computer has an ample power supply. Sometimes when I play a video-intensive game, both monitors lose the video signal, even though the computer remains on and sound keeps playing. One theory I have: the video card isn't getting sufficient power. I can't imagine it's overheating, because the machine is well-ventilated and the video card isn't hot to the touch when this happens. Anyway, in my PC's BIOS there's a Hardware Monitor page, and among other voltages reported (such as CPU, DRAM, South Bridge, etc.) I can see the following values:

        3.3V    3.152V
        5V      4.944V
        12V     11.872V

    Are those the voltages used by peripherals? What voltage should I be referencing if I want to know what my video card (PCI Express) is consuming? What is the normal range of values reported for those? My values above appear to be under by approximately 4.5%, 1.1%, and 1.1% respectively. Is that cause for concern? How else should I be determining if my power supply is right-sized for my PC and video card, or am I perhaps barking up the wrong tree?

    Read the article

  • Does private browsing prevent socket events?

    - by Chris Cirefice
    I usually do my work in Google Chrome (v36.0.1985.143) with private browsing enabled, and I use Firefox to browse Stack Exchange in normal mode so that all my logins are persisted. Sometimes I accidentally open an SE question in Chrome, so I copy-paste the URL into Firefox and get on with reading. I left the SE tab in Chrome open and up-voted a question in Firefox. Normally, you immediately see other users' voting activity via socket events (possibly Socket.IO; I don't know SE's back end). I noticed that in my Chrome tab, I didn't see the upvote that I had cast in Firefox; I had to refresh to see the vote count change. So, as the title of the question states: does private browsing prevent socket events?

    Read the article

  • How can I get JSON from Nginx Autoindex?

    - by Devrim
    How can I modify Nginx's autoindex so that it will generate a JSON version of the index instead of HTML? Or is there a module that already does that? I want to have this: http://u.kodingen.com/ClrG instead of this: http://u.kodingen.com/Cls2N. This blog writer seems to have done it (http://u.kodingen.com/Clz3F, http://lamsonproject.net/blog/2009-08-03.html), but he didn't mention how.
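
    Worth noting as a hedged aside: newer nginx releases (1.7.9 and later) gained native support for this via the autoindex_format directive, so no custom module is needed there. The location path is illustrative:

        location /files/ {
            autoindex on;
            autoindex_format json;   # also accepts html, xml and jsonp
        }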

    Read the article

  • Setting up httpd-vhosts.conf for multiple virtual hosts

    - by Chris Sobolewski
    I have a simple test setup using XAMPP at home, and I am getting really weird behavior when I attempt to set up multiple virtual hosts on this box. Here is my vhosts file:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName foo
            DocumentRoot "D:\wamp\xampp\htdocs\foo"
            ErrorLog logs/foo-error_log
            CustomLog logs/foo-access_log common
            <Directory "D:\wamp\xampp\htdocs\foo">
                Options Indexes FollowSymLinks Includes execCGI
                AllowOverride All
                Order Allow,Deny
                Allow From All
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName bar
            DocumentRoot "D:\wamp\xampp\htdocs\bar"
            ErrorLog logs/bar-error_log
            CustomLog logs/bar-access_log common
            <Directory "D:\wamp\xampp\htdocs\bar">
                Options Indexes FollowSymLinks Includes execCGI
                AllowOverride All
                Order Allow,Deny
                Allow From All
            </Directory>
        </VirtualHost>

    When I attempt to visit the first site, it works as expected. When I attempt to visit the second site, I get a weird hybrid mishmash of both sites. It's the weirdest thing.
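
    A hedged reminder for local testing: single-label names like foo and bar only resolve if they are mapped in the Windows hosts file, and since both vhosts share *:80, Apache falls back to the first vhost whenever the Host header doesn't exactly match a ServerName. A sketch of matching hosts entries:

        # C:\Windows\System32\drivers\etc\hosts
        127.0.0.1    foo
        127.0.0.1    bar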

    Read the article

  • What are these? Are they broken?

    - by Chris Nielsen
    Please excuse the poor image quality: What are the components that I've circled in red? The ones on the left look whole and solid. The ones on the right have cracked tops, and although this picture doesn't show it, there are small brown threads coming out of the top. Are the cracked ones broken, or is that supposed to happen? If they ARE broken, is this something I should worry about? This is a video card, and it appears to be fully functional: I'm using it while writing this post.

    Read the article

  • How to use wget with an input file and filenames

    - by Matt
    I have a text file that contains 10,000 URLs, each with a unique number I want to save the file as. Each line has a 10-character code, then the URL of the image to retrieve. How can I make the input file use the first 10 characters as the wget filename? This is an example of the input file (input.txt):

        x100083590http://image.allmusic.com/13/adg/cov200/drt200/t291/t29123q8m19.jpg
        b200149548http://ecx.images-amazon.com/images/I/41DoH%2BAWKEL.jpg
        z100151855http://image.allmusic.com/13/amg/cov200/dri400/i450/i45035hxdrb.jpg
        p400171646http://ecx.images-amazon.com/images/I/61cH4n34IhL.jpg

    wget -i input.txt would get the files, but not with the preceding unique number. I want t29123q8m19.jpg (the first line) to be saved as x100083590.jpg. If there is a better way to write out the input file, say with the URL first, then I can do that too, but I will never know the length of the first field. Right now the first 10 characters will always be what I want to save the wget image as. Edit: This is being done in a Windows environment.
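
    A hedged sketch for the Windows environment mentioned in the edit, assuming wget is on the PATH: split each line at the fixed 10-character boundary and pass wget an explicit output name with -O (calling wget.exe explicitly, since newer PowerShell versions alias wget to Invoke-WebRequest):

        # save as fetch.ps1 and run from the folder containing input.txt
        Get-Content input.txt | ForEach-Object {
            $id  = $_.Substring(0, 10)    # the 10-character code
            $url = $_.Substring(10)       # everything after it is the URL
            & wget.exe $url -O "$id.jpg"
        }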

    Read the article
