Search Results

Search found 16539 results on 662 pages for 'persistence xml'.


  • Why Does Dreamweaver CS5 Discriminate Between File Extensions, Even After Modding MIME Types?

    - by Sam
    Hi folks, even after I forced Dreamweaver CS5 to open the .ast extension as a MIME type of php5, which Dreamweaver now opens and colors correctly as described here, I still have trouble figuring out why it discriminates between the two file extensions.

    Symptoms: external files and Design view. I have a file foo.php which includes other PHP files (e.g. the PHP-combined css.php and js.php). When opening foo.php, all functions work perfectly: the external (included) PHP files are all recognised correctly. However, when I rename foo.php to foo.ast and open it again, it no longer recognises the file extensions in the top bar, and I lose the Design/Live View functionality. When I rename foo.ast back to foo.php, all works again! Anyone have any clues why there remains a difference between one extension and the other?

    Note 1: I have added the .ast extension, next to .php, to these four files:
    1. C:\Users\Sam\AppData\Local\VirtualStore\Program Files (x86)\Adobe\Adobe Dreamweaver CS5\configuration\DocumentTypes\MMDocumentTypes.xml
    2. C:\Program Files (x86)\Adobe\Adobe Dreamweaver CS5\configuration\DocumentTypes\MMDocumentTypes.xml
    3. C:\Users\Sam\AppData\Roaming\Adobe\Dreamweaver CS5\en_US\Configuration\Extensions.txt
    4. C:\Program Files (x86)\Adobe\Adobe Dreamweaver CS5\configuration\Extensions.txt

    Note 2: sometimes even .php files do not want to show in Design view or Live View. Could this be caused by a corrupted installation?
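
    For what it's worth, a sketch of the kind of entry involved (attribute values here are illustrative, not copied from a real install): in MMDocumentTypes.xml each documenttype element lists its extensions in the winfileextension/macfileextension attributes, and Design/Live View eligibility presumably follows the document type a file is mapped to rather than its coloring alone, so it is worth checking that .ast landed on the same documenttype as .php in all four files:

        <!-- hypothetical fragment of MMDocumentTypes.xml; "ast" appended to the PHP type's lists -->
        <documenttype id="PHP_MySQL" internaltype="Dynamic" file_type="PHP"
                      winfileextension="php,php3,php4,php5,ast"
                      macfileextension="php,php3,php4,php5,ast">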

    Read the article

  • Can I make a TCP/IP session last less than 60 seconds?

    - by par
    Our server is overloaded with TCP/IP sessions; we have 1200-1500 of them, most hanging in the TIME_WAIT state. It turns out that a connection in TIME_WAIT occupies a socket until the 60-second timeout elapses. The problem is that the server becomes unresponsive and many clients are not getting served. I made a simple test: download an XML file from the server with Internet Explorer 8.0. The download finishes in a fraction of a second, but then I see the TCP/IP connection hanging in TIME_WAIT for 60 seconds. Is there any way to get rid of the TIME_WAIT delay, or to shorten it, to free the socket for new connections? I understand why a TCP/IP connection enters TIME_WAIT, but I don't understand why Internet Explorer does not close the connection after the XML file download is over. The details: our server runs a web service written in Perl (mod_perl). The service provides weather data to clients. The client is a Flash application (actually a Flash ActiveX control embedded in a Windows application). OS: Ubuntu. The Apache "KeepAlive" option is set to 0.
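
    For what it's worth, Linux offers a couple of mitigations, though the TIME_WAIT interval itself is a kernel compile-time constant and cannot be lowered by a setting; a hedged sketch:

        # TCP_TIMEWAIT_LEN (60s) is compiled into the kernel; these settings only
        # mitigate socket exhaustion, they do not shorten the wait itself.
        sysctl -w net.ipv4.tcp_tw_reuse=1           # reuse TIME_WAIT sockets for new outbound connections
        sysctl -w net.ipv4.tcp_max_tw_buckets=65536 # cap how many TIME_WAIT sockets are kept at once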

    Read the article

  • Replacing compiz/metacity with openbox reduces workspaces to 1

    - by Brian
    I like to use the GNOME desktop, but I prefer to replace its window manager with openbox, with 4 workspaces. However, when I run openbox --replace, the number of workspaces available drops to 1. If I go into obconf, workspaces are still configured to be 4 (~/.config/openbox/rc.xml shows the same). I can get the workspaces to reappear by changing the value in obconf to anything else, and then back to 4. I have been dealing with this problem since Ubuntu 9.04 (now up to 10.10), since I don't reboot very often, but it's really annoying to have to reset my workspaces whenever I do have to reboot. Changing the value in rc.xml and running openbox --reconfigure does not seem to have any effect. So what is obconf doing that I'm not? (Does it send a D-Bus message, perhaps? [EDIT: watching with dbus-monitor, I see no messages when changing the workspaces value in obconf.]) I was hoping there would be a cleaner way to change the window manager than just running openbox --replace at login. So my questions are: Is there a better way to specify an alternate window manager (i.e. a way that doesn't cause the workspaces to break)? If not, how can I automatically set the number of workspaces back to 4? Update: I finally got around to trying what I commented on MrShunz's answer (adding WINDOW_MANAGER=/usr/bin/openbox to ~/.gnomerc), but the effect is the same as openbox --replace.
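
    One workaround for the second question, assuming the standard wmctrl tool is installed (untested against this exact setup), is to reassert the desktop count from a startup script right after the replacement settles:

        # startup-script sketch: replace the WM, then force the EWMH desktop count back to 4
        openbox --replace &
        (sleep 2 && wmctrl -n 4) &    # wmctrl -n N sets _NET_NUMBER_OF_DESKTOPS to N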

    Read the article

  • Configuring dnsmasq to handle mx records on pfsense 2.0.1

    - by Bob B.
    I know from dnsmasq's man page that it is capable of handling MX records, but I can't seem to find anything in pfsense's web GUI, or anywhere online, that talks about how to include them. I'm running pfsense 2.0.1 on a turnkey hardware appliance and I have root shell access. I would prefer not to move away from DNS Forwarder/dnsmasq if I can help it. I've searched for a dnsmasq.conf file, but none exists; pfsense handles everything through a centralized XML config file. That file merely opens the dnsmasq section with a <dnsmasq> tag, then drops immediately into listings for each host override you define. My understanding of pfsense's implementation: in the GUI, you can only define an override using the host, domain, IP and description. In the XML that translates to:

        <hosts>
          <host>foo</host>
          <domain>foo.com</domain>
          <ip>127.0.0.1</ip>
          <descr/>
        </hosts>

    The above example results in foo.foo.com resolving to 127.0.0.1, for instance. But that's it: no ability to select a record type with which to define things like MX. Anyone had any luck with this? Thank you for any insights you might have.
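
    dnsmasq itself takes MX records through its mx-host option, so one route, assuming your pfsense build gives you any way to pass custom dnsmasq options (an advanced-options field, or a shell-side tweak to how dnsmasq is launched), would be a line like:

        # dnsmasq syntax: mx-host=<mx name>[,<target host>[,<preference>]]
        mx-host=foo.com,mail.foo.com,10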

    Read the article

  • What are some of the best answer file settings for a WDS Deployment?

    - by drpcken
    I've had my head buried in answer files for days now and have gotten quite comfortable setting them up, testing, etc. I use a handful of components to help my migrations. For my unattend.xml I like:

    Windows-International-Core-WinPE -- good for setting locales in the preboot environment (en-US for US English speakers). Keeps me from having to set these on the initial image boot.

    Windows-Setup_neutral -- I like the WindowsDeploymentServices -> ImageSelection setting, especially if I'm only pushing a single image. This keeps me from having to select it each time.

    My OOBE_Unattend.xml is really useful, and I barely have to touch anything during this part of the installation:

    Windows-Shell-Setup_neutral -- lets me put in a ProductKey for my MAK volume license (very useful and time-saving). I can also set the TimeZone for the installation.

    Windows-UnattendedJoin_neutral -- I couldn't live without this component. It joins the machine to my domain before logging in as a domain administrator. I would hate to lose this ability.

    Windows-International-Core -- again, this component really speeds up the OOBE process. I configure my locales and time zone so I don't have to do it by hand when the machine enters OOBE.

    Windows-Shell-Setup -- allows you to configure an autologon when the new machine is finished. I like to log on as a domain admin automatically for customizing and troubleshooting the new machine immediately after it is imaged. Also, the OOBE component under here lets me skip the EULA, hide wireless setup, and set my default NetworkLocation. All of this makes the entire OOBE totally automated.

    What are some other good components I am missing, as far as helping me get these images pushed and configured as quickly as possible?
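
    For anyone assembling a similar file, a trimmed sketch of the domain-join component described above (all names and the architecture are placeholders; validate against your own WSIM catalog):

        <component name="Microsoft-Windows-UnattendedJoin" processorArchitecture="amd64"
                   publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
          <Identification>
            <JoinDomain>corp.example.com</JoinDomain>
            <Credentials>
              <Domain>corp.example.com</Domain>
              <Username>joinaccount</Username>
              <Password>placeholder</Password>
            </Credentials>
          </Identification>
        </component>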

    Read the article

  • UACCEEventLog 301 Filling Event Logs

    - by rjt
    After pushing out clients for the MS Application Compatibility Toolkit on our domain via GPO, UACCEEventLog 301 occurs a few times per second in the event log: several thousand per hour. One test I need to do is to log on as Administrator and see whether these events go away, but of course that is not a fix. This is only part of the event log entry, but it is the most readable part, and it clearly indicates yet another problem with antivirus software. But still no fix. Originally I posted this in words and bytes, but then edited it to make it much easier to read. LocalMachine\Users do have Read access to this key. As a test I added "Domain Users", but there are many more events for other parts of the registry and for Administrators.

        <XML>
          <TYPE>UacceRegistryVirtualization</TYPE>
          <EXENAME>smcgui.exe</EXENAME>
          <EXEPATH>c:\program files\symantec\symantec endpoint protection</EXEPATH>
          <APINAME>RegOpenKeyA</APINAME>
          <REGKEYNAME>HKEY_LOCAL_MACHINE\SOFTWARE\Symantec\Symantec Endpoint Protection\AV\Storages\SymHeurProcessProtection\RealTimeScan\0</REGKEYNAME>
          <RESTRICTEDBYACL>FALSE</RESTRICTEDBYACL>
          <DESIREDACCESS>MAXIMUM_ALLOWED</DESIREDACCESS>
          <REGVALUENAME></REGVALUENAME>
          <REGVALUETYPE>0x00000000</REGVALUETYPE>
          <REGVALUEDATA></REGVALUEDATA>
          <CURRENTGROUP>Users</CURRENTGROUP>
        </XML>
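
    If the ACL theory is right, one way to test it without logging on as Administrator is to grant Users read access on the key named in the event; a hedged PowerShell sketch (adjust the key path to your log, and note the antivirus may simply re-tighten the ACL):

        $key  = "HKLM:\SOFTWARE\Symantec\Symantec Endpoint Protection\AV\Storages\SymHeurProcessProtection\RealTimeScan\0"
        $acl  = Get-Acl $key
        $rule = New-Object System.Security.AccessControl.RegistryAccessRule(
                    "Users", "ReadKey", "ContainerInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        Set-Acl -Path $key -AclObject $acl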

    Read the article

  • Is this "cache administrator" error my server's problem?

    - by Eoin
    Hey, I have a CentOS VPS running Apache with a phpBB installation. One specific user has received errors when posting a message or logging in to the forum. The issue has arisen in parallel with installing nginx, which serves only the static files of my site; not sure if this is only coincidence. Furthermore, my setup uses redirects (in some cases, double redirects) to point the user to a different virtual folder: the forum is seen to be at /translation/ but the actual files are found in /phpbb/. I'm at a loss as to what the underlying issue may be. My server? The person's ISP? She has tested both at home and at work, with similar issues. The error page reads:

        While trying to process the request:

        GET /phpbb/index.php?sid=f62c927e7eb8f1d60a92dcc6fd918112 HTTP/1.1
        Host: www.irishgaelictranslator.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-za
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://www.irishgaelictranslator.com/phpbb/ucp.php?mode=login
        Cookie: phpbb3_cipi4_u=96645; phpbb3_cipi4_k=; phpbb3_cipi4_sid=f62c927e7eb8f1d60a92dcc6fd918112; __utma=153470688.1232378553.1294664234.1294664234.1294664234.1; __utmb=153470688.9.10.1294664234; __utmc=153470688; __utmz=153470688.1294664235.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); style_cookie=null

        The following error was encountered: Invalid Response.
        The HTTP Response message received from the contacted server could not be understood or was otherwise malformed. Please contact the site operator. Your cache administrator may be able to provide you with more details about the exact nature of the problem if needed.

    Read the article

  • Why does Wireshark not recognize this HTTP response?

    - by Alois Mahdal
    I have a trivial CGI script that outputs simple text content. It's written in Perl, uses the CGI module, and specifies only the most basic headers:

        print $q->header(
            -type           => 'text/plain',
            -Content_length => $length,
        );
        print $stuff;

    There's no apparent issue with functionality, but I'm confused by the fact that Wireshark does not recognize the HTTP response as HTTP: it's marked as TCP. Here are the request and response:

        GET /cgi-bin/memfile/memfile.pl?mbytes=1 HTTP/1.1
        Host: 10.6.130.38
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: cs,en-us;q=0.7,en;q=0.3
        Accept-Encoding: gzip, deflate
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Thu, 05 Apr 2012 18:52:23 GMT
        Server: Apache/2.2.15 (Win32) mod_ssl/2.2.15 OpenSSL/0.9.8m
        Content-length: 1048616
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/plain; charset=ISO-8859-1

        XXXXXXXX...

    And here is the packet overview (the full packet is here on pastebin):

        No.  Time      Source       srcp  Destination  dstp   Protocol  Info                                tcp.stream  abstime
        5    0.112749  10.6.130.38  80    10.6.130.53  48072  TCP       [TCP segment of a reassembled PDU]  0           20:52:23.228063

        Frame 5: 1514 bytes on wire (12112 bits), 1514 bytes captured (12112 bits)
        Ethernet II, Src: Dell_97:29:ac (00:1e:4f:97:29:ac), Dst: Dell_3b:fe:70 (00:24:e8:3b:fe:70)
        Internet Protocol Version 4, Src: 10.6.130.38 (10.6.130.38), Dst: 10.6.130.53 (10.6.130.53)
        Transmission Control Protocol, Src Port: http (80), Dst Port: 48072 (48072), Seq: 1, Ack: 330, Len: 1460

    Now when I look at this in Wireshark: there's the usual TCP handshake, then the GET request shown as HTTP with a preview, then the next packet contains the response, but it is not marked as an HTTP response, just a generic "[TCP segment of a reassembled PDU]", and it is not caught by the "http.response" filter. Can somebody explain why Wireshark does not recognize it? Is there something wrong with the response?
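
    This is almost certainly Wireshark's TCP reassembly rather than a malformed response: the 1 MB body spans hundreds of segments, each carrying segment is labelled "[TCP segment of a reassembled PDU]", and the HTTP summary line (which http.response matches) only appears on the frame that completes the message. If the capture ends before the body does, no frame ever gets the HTTP label. The behaviour can be flipped here:

        Edit -> Preferences -> Protocols -> TCP
            [x] Allow subdissector to reassemble TCP streams    (on by default)

    With that box unchecked, the frame carrying the status line is dissected as HTTP immediately.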

    Read the article

  • IIS7: How to block access with a web.config file?

    - by neves
    I know that IIS7 allows me to have a per-directory configuration with the web.config XML file. I have a directory with some configuration files that I don't want to be web accessible. A local web.config file forbidding read access to it would be a nice solution. What should the contents of a web.config file be to forbid web access to the files? Edit: I'm trying to put a web.config file with these contents in the directory:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.web>
            <authorization>
              <deny users="*" /> <!-- Denies all users -->
            </authorization>
          </system.web>
        </configuration>

    But I can still directly access a file inside the directory. What's wrong with it? How do I debug what's happening?
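
    A likely explanation: <system.web><authorization> is enforced by ASP.NET, so files served directly by IIS's static handler never hit it. A sketch of an IIS-level alternative using request filtering (the segment name is an example; it makes any URL containing that path segment return 404):

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <system.webServer>
            <security>
              <requestFiltering>
                <hiddenSegments>
                  <add segment="ProtectedConfig" />
                </hiddenSegments>
              </requestFiltering>
            </security>
          </system.webServer>
        </configuration>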

    Read the article

  • Apache reverse proxy with VirtualHost not serving a page

    - by Mr Aleph
    I have an Apache reverse proxy set up to forward requests to a Tomcat applet. The config is similar to:

        <VirtualHost 100.100.100.100:80>
            ProxyPass        /AppName/App http://1.1.1.1/AppName/App
            ProxyPassReverse /AppName/App http://1.1.1.1/AppName/App
        </VirtualHost>

    I also have a page called summary.html that exists on 1.1.1.1 as http://1.1.1.1/AppName/summary.html. When I browse directly to it I have no problem viewing it, but if I try to get there via the reverse proxy I get a blank page. Wireshark shows me a 503, but this one is coming from the Apache reverse proxy (IP 100.100.100.100) and not from Tomcat (IP 1.1.1.1). Should I add http://1.1.1.1/AppName/ to the config? How? I tried it, but I get a blank page, and this time the browser's URL bar shows the internal IP of the Tomcat, so, no go. Help is appreciated. Thanks. EDIT: this is the dump from Wireshark:

        GET /AppName/ HTTP/1.1
        Host: 100.100.100.100
        User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Cache-Control: max-age=0
        Accept-Language: en-us
        Accept-Encoding: gzip, deflate
        Connection: keep-alive

        HTTP/1.1 404 Not Found
        Date: Tue, 30 Jan 2012 09:08:51 GMT
        Server: Apache
        Content-Length: 1
        Connection: close
        Content-Type: text/html; charset=iso-8859-1
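
    Note that the dump shows the browser requesting /AppName/, which the config never proxies (only /AppName/App), so Apache answers the 404 itself. A hedged fix is to proxy the whole context rather than a single path:

        <VirtualHost 100.100.100.100:80>
            # forward everything under the context, not just the servlet path
            ProxyPass        /AppName/ http://1.1.1.1/AppName/
            ProxyPassReverse /AppName/ http://1.1.1.1/AppName/
        </VirtualHost>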

    Read the article

  • How do I move a linked file on Unix?

    - by r3mbol
    I have a bunch of files in one directory and links to each of those files in another directory, so ls -l looks something like this:

        lrwxrwxrwx 1 rembol rembol   89 Jan 25 10:00 copyright.txt -> /home/rembol/solr/target/deploy/data/core/copyright.txt
        lrwxrwxrwx 1 rembol rembol   92 Jan 25 10:00 jar-versions.xml -> /home/rembol/solr/target/deploy/data/core/jar-versions.xml
        lrwxrwxrwx 1 rembol rembol   85 Jan 25 10:00 lgpl.html -> /home/rembol/solr/target/deploy/data/core/lgpl.html
        lrwxrwxrwx 1 rembol rembol   79 Jan 25 10:00 lib -> /home/rembol/solr/target/deploy/data/core/lib
        lrwxrwxrwx 1 rembol rembol   87 Jan 25 10:00 readme.html -> /home/rembol/solr/target/deploy/data/core/readme.html
        drwxr-xr-x 3 rembol rembol 4096 Jan 25 10:00 server
        drwxr-xr-x 2 rembol rembol 4096 Jan 25 10:00 startup

    Now I want to move those linked files from /home/rembol/solr/target/deploy to /home/rembol/output/. If I do that by simply calling mv, the links will break. I don't want to re-link each file separately, because there are hundreds of them (they are generated automatically). Is there some clever way to move linked files, rather than writing a script that unlinks, moves and relinks recursively for each file in each subdirectory?
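
    There is no single command for this, but it scripts compactly; a sketch, assuming the link farm lives in /home/rembol/links (a placeholder; adjust the paths, and test on a copy first):

        #!/bin/bash
        OLD=/home/rembol/solr/target/deploy
        NEW=/home/rembol/output

        mv "$OLD"/* "$NEW"/                     # relocate the real files once

        find /home/rembol/links -type l | while read -r link; do
            target=$(readlink "$link")
            case "$target" in
                "$OLD"/*) ln -sfn "$NEW${target#$OLD}" "$link" ;;  # rewrite the prefix in place
            esac
        done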

    Read the article

  • WordPress Forbidden page

    - by ffffff
    When I load a preview page without logging in (so without authorization), the response HTML comes back with an empty body. The response HTML is this:

        <html xmlns="http://www.w3.org/1999/xhtml" lang="ja" xml:lang="ja">
        <head profile="http://purl.org/net/ns/metaprof">
          <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
          <meta http-equiv="Content-Script-Type" content="text/javascript" />
          <meta name="generator" content="WordPress 2.9.2" />
          <meta name="author" content="blog" />
          <link rel="alternate" type="application/atom+xml" href="http://blog.example.com/feed/atom/" title="Atom cite contents" />
          <link rel="start" href="http://blog.example.com" title="blog Home" />
          <link rel="stylesheet" type="text/css" href="http://blog.example.com/wp-content/themes/blog/style.css" />
          <meta name="description" content="blog" />
          <title>blog - </title>
        </head>
        <body class="individual single">
        </div>
        </body>
        </html>

    Do you have any solutions?

    Read the article

  • iTunes copy just metadata (song and album ratings, playlists) from iPod

    - by Jared Updike
    I have an iPod touch that I synced with my Windows computer (iTunes 9.0, I think) until my hard drive failed and I lost my entire library. I rebuilt the library (songs) from a year-old backup (and various other sources for songs), but my playlists and ratings are of course a year old. My iPod itself has most of the playlists and ratings I care about most (favorite songs and albums, rated 4 and 5, for example). I have a catch-22 situation: I feel nervous that I haven't backed up my iPod in around 4 months (since my drive failed), so I'd like to back it up as soon as possible... but if I back it up, I have to clear all the songs and playlists and copy them back, which I can't really do, since I need to rebuild my playlists on my computer first (using the data only available on my iPod!). The question: is there a better way to READ the information off my iPod than doing it manually, song by song, album by album and playlist by playlist (XML, text dump, database, spreadsheet, anything)? In other words, I mostly want the information (metadata like ratings and playlists, not songs) copied off the iPod so I can more quickly rebuild my iTunes library ratings and playlists (manually), so I can finally wipe the music and back up my apps, etc. Then I'd like to copy the music back immediately. The part I'd like to avoid is manually navigating everything on my iPod to read through all the playlists and ratings (50 GB, 6,000+ songs) as I re-enter all of that data by hand. I've done a few dozen albums and it's pretty time-consuming having to tap around on the iPod. Reading from a spreadsheet (for example, or XML, which I could write a script to get into spreadsheet form) would probably help tremendously, plus then I'd have a backup of that information somewhere besides just my iPod.

    Read the article

  • How to debug modsecurity_audit_log

    - by max87
    I was accessing www.example.com/RestAPI/index.php/tweets.json on my server. The modsec_audit.log showed the following entry, but there are no related errors or warnings in modsec_debug.log. I can see the Internal Server Error logged in example-error_log. How can I debug this Internal Server Error?

        --8560e90b-A--
        [21/Mar/2012:07:01:52 +0000] T2l84H8AAAEAAGxPZ@QAAAAG x.x.x.x 33101 x.x.x.x 80
        --8560e90b-B--
        GET /RestAPI/index.php/tweets.json HTTP/1.1
        Host: www.example.com
        User-Agent: Mozilla/5.0 (X11; Linux i686; rv:11.0) Gecko/20100101 Firefox/11.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        DNT: 1
        Cookie: __utma=159129855.1463065063.1331789485.1331789485.1331789485.1; __utmz=159129855.1331789485.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); 8cb6a414cf5ec1919864de0e80bea4da=0es7dcu0p10cocfpferb2lddi0; 8926e4f3c475bb6fcacb409299f1bd27=53cf8c5e6bf78ea45096945377e6d609
        Connection: keep-alive
        Cache-Control: max-age=0
        --8560e90b-F--
        HTTP/1.0 500 Internal Server Error
        X-Powered-By: PHP/5.3.5
        Content-Length: 0
        Connection: close
        Content-Type: text/html; charset=UTF-8
        --8560e90b-H--
        Apache-Handler: php5-script
        Stopwatch: 1332313312358005 130428 (- - -)
        Producer: ModSecurity for Apache/2.5.12 (http://www.modsecurity.org/); core ruleset/2.0.5.
        Server: Apache
        --8560e90b-Z--
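
    If modsec_debug.log stays empty, it is usually just logging at too low a level; the directives below are standard ModSecurity 2.x (paths are examples). Since the 500 comes from PHP, the PHP and Apache error logs deserve the same attention:

        # in the ModSecurity config, then reload Apache; level 9 is very verbose, use briefly
        SecDebugLog      /var/log/apache2/modsec_debug.log
        SecDebugLogLevel 9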

    Read the article

  • SVG doesn't work on subdomain - some browsers try to download rather than display

    - by John Catterfeld
    I have a site with a 'development' subdomain, which displays my SVG file exactly as intended. However, when I copy it across to www, or any other subdomain (e.g. 'test'), some browsers try to open the file in an external editor, and therefore ask me to download the file rather than displaying it. For example:

        http://development.mysite.com/test.svg - works
        http://www.mysite.com/test.svg - doesn't work

    The SVG file:

        <?xml version="1.0" standalone="no"?>
        <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
        <svg xmlns="http://www.w3.org/2000/svg" version="1.1">
          <circle cx="100" cy="50" r="40" stroke="black" stroke-width="2" fill="red" />
        </svg>

    This happens in Firefox, Chrome and Safari; however, IE9 and above display the file as intended. It is Windows hosting, and I have <mimeMap fileExtension=".svg" mimeType="image/svg+xml" /> in my web.config file. My hunch is that there must be some setting on the server which I need my hosting company to make. Can anyone suggest what might cause this issue?
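
    If the host defines a conflicting server-level mapping, an explicit remove-then-add in web.config sometimes wins; a sketch (the <remove> line is the part worth trying):

        <system.webServer>
          <staticContent>
            <remove fileExtension=".svg" />   <!-- clear any inherited mapping first -->
            <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
          </staticContent>
        </system.webServer>

    Comparing curl -I output for the two URLs will also show whether the Content-Type header actually differs between subdomains.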

    Read the article

  • Nginx order of servers

    - by scrat
    I have 3 sites on my server. All run on gunicorn and use unix sockets to communicate with nginx, which routes requests. I have three entries in nginx.conf like this one:

        server {
            listen 80;
            server_name site1.com;
            location / {
                proxy_pass http://unix:/tmp/site1.sock;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    There is one each for site1, site2 and site3. If they are ordered so that the config for site1 goes first, followed by site2 and site3, everything works fine. But when I change the order, for example to site2, site1, site3, then site1 gets routed to site2. What am I doing wrong? The full nginx.conf before the server configs:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;
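
    That symptom usually means the request's Host header matches none of the server_name values, so nginx falls back to the first server block as the implicit default. It is worth checking that each server_name covers the exact names used (including www. variants); pinning an explicit default also makes misrouted requests obvious instead of silently serving site2:

        server {
            listen 80 default_server;   # catches any Host that matches no server_name
            server_name _;
            return 444;                 # drop the connection rather than serve the wrong site
        }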

    Read the article

  • Redmine does not return the web page

    - by m0skit0
    I migrated a Redmine installation from an Ubuntu machine to a Debian one (both 32-bit), and now, for some reason, for some users it doesn't return the page, only a 200 OK message. Here is the flow (from Wireshark):

        GET /issues/142 HTTP/1.1
        Host: debian:3000
        User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:17.0) Gecko/20100101 Firefox/17.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-US,en;q=0.5
        Accept-Encoding: gzip, deflate
        Connection: keep-alive
        Cookie: _redmine_session=BAh7DCIQX2NzcmZfdG9rZW4iMStIM1RBNTlNelZVUXlUazgrR1pUNGUvNGdEbytUZzRyMVFSUnBvNGhlSDg9Ihd0aW1lbG9nX2luZGV4X3NvcnQiEnNwZW50X29uOmRlc2MiD3Nlc3Npb25faWQiJThiMDk0MzVhOTEzYTI0MzVjOGEzYTRmNDU0NzcwMTAwIgx1c2VyX2lkaQoiFmlzc3Vlc19pbmRleF9zb3J0IgxpZDpkZXNjIg1wZXJfcGFnZWlpIgpxdWVyeXsHOg9wcm9qZWN0X2lkaQc6B2lkaQo%3D--8588c221c0642a12f396239455fb702aec14c9c9; my_wiki_session=f70ae11e1c533c86f0e039d63cf3f69c; my_wikiUserID=1; my_wikiUserName=Yasin
        Cache-Control: max-age=0

        HTTP/1.1 200 OK
        Connection: Keep-Alive
        Date: Wed, 12 Dec 2012 16:30:16 GMT
        Server: WEBrick/1.3.1 (Ruby/1.8.7/2010-08-16)
        Content-Length: 0

    As you can see, I get nothing from the server. This is mostly random: the blank page happens sometimes for some users, and for other users it almost never returns the page... I'm absolutely lost here. Any idea what the cause could be? Thanks in advance!
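
    A first debugging step, assuming a standard Rails-era Redmine layout, is to watch the application log while an affected user reproduces the blank response; an exception or an empty render should show up there:

        tail -f /path/to/redmine/log/production.log    # path is an example; use your install's log directory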

    Read the article

  • Managing Internal Yum Repository Groups

    - by elmt
    What is the best method for handling yum group dependencies? For example, take this comps.xml file:

        <comps>
          <group>
            <id>production</id>
            <name>Production</name>
            <default>true</default>
            <description>Packages required to run</description>
            <uservisible>true</uservisible>
            <packagelist>
              <packagereq type="default">ssh</packagereq>
            </packagelist>
          </group>
          <group>
            <id>development</id>
            <name>Development</name>
            <default>false</default>
            <description>Packages required to develop</description>
            <uservisible>true</uservisible>
            <packagelist>
              <packagereq type="default">gcc</packagereq>
            </packagelist>
          </group>
        </comps>

    It is packaged with createrepo -g comps.xml x86_64. The ssh and gcc rpms are not present in the x86_64 directory. If I run yum groupinstall development, yum is smart enough to pull the gcc package from the RHEL repo even though the groups are defined in my internal repository. Is this the proper way of doing things, or should I copy the rpms to my local repository and recreate the repo?
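
    For what it's worth, this looks like intended behaviour rather than a problem: a comps group only lists package names, and yum resolves each name against every enabled repo, so gcc coming from the RHEL repo is normal and copying the rpms locally is unnecessary. A group can be sanity-checked without installing anything, e.g.:

        yum groupinfo development    # lists the group's packages as yum sees them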

    Read the article

  • Empty file from Tomcat HTTPS redirect?

    - by amertune
    I am using Tomcat 6.0.20 with JDK 1.6.0_18 on 64-bit Linux (Tomcat downloaded from tomcat.apache.org, not installed from repositories). I have iptables redirects from port 80 to 8080 and from 443 to 8443. In server.xml, the connector for port 8080 redirects to 443, and the 8443 connector has proxyPort="443". In conf/web.xml, I have added this bit at the end of the file (but still inside the <web-app> tags):

        <security-constraint>
          <web-resource-collection>
            <web-resource-name>Protected Context</web-resource-name>
            <url-pattern>/*</url-pattern>
          </web-resource-collection>
          <user-data-constraint>
            <transport-guarantee>CONFIDENTIAL</transport-guarantee>
          </user-data-constraint>
        </security-constraint>

    I also have two contexts, ROOT and mywebapp. If I go to http://myurl.com, then I get redirected to https://myurl.com. If I go to http://myurl.com/mywebapp/, then I get redirected to https://myurl.com/mywebapp/. The problem I'm having is when I go to http://myurl.com/mywebapp (no trailing slash). When I do this, I get a prompt to download an empty file that has an empty name. Going to https://myurl.com/mywebapp works. I would think that a user typing myurl.com/mywebapp is far from rare. Is there something I'm missing?
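
    One guess worth checking (not a confirmed fix): Tomcat itself issues the redirect that adds the trailing slash, and it builds that URL from the connector's proxyPort/redirectPort values. With iptables in front, both connectors need to describe the outside ports; a sketch with other attributes omitted:

        <!-- server.xml: describe the external ports the iptables redirects present -->
        <Connector port="8080" protocol="HTTP/1.1" proxyPort="80"  redirectPort="443" />
        <Connector port="8443" protocol="HTTP/1.1" proxyPort="443"
                   SSLEnabled="true" scheme="https" secure="true" />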

    Read the article

  • Apache forwarding to tomcat shows a blank page

    - by MNS
    I have an application running on Tomcat at http://www.example.com:9090/mycontext. The host name in server.xml points to www.example.com; I do not have localhost any more. I am using Apache to forward requests to Tomcat with mod_proxy. Things work fine as long as the proxy path is /mycontext: the server name set up in the virtual host is www.abc.com, and http://www.abc.com/mycontext works fine. However, I would like to drop the context path and simply have http://www.abc.com/ forward requests to http://www.example.com:9090/mycontext. When I do this, Apache shows me a blank page. What am I missing here? I have not changed anything in server.xml except the default host, which is www.example.com.

        <VirtualHost *:80>
            ServerName www.abc.com
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / http://www.example.com:9090/mycontext
            ProxyPassReverse / http://www.example.com:9090/mycontext
        </VirtualHost>

    Thanks
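
    Two details matter when collapsing a context onto /: trailing slashes on both sides of ProxyPass, and cookie paths, which ProxyPassReverse does not rewrite; a hedged sketch:

        ProxyPass        / http://www.example.com:9090/mycontext/
        ProxyPassReverse / http://www.example.com:9090/mycontext/
        # session cookies are set for /mycontext; map them back to /
        ProxyPassReverseCookiePath /mycontext /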

    Read the article

  • How to create location blocks in nginx for a single file but have it follow the rules of another location block in addition to its own?

    - by Ryan Detzel
    I have a location block for / that does all of my fastcgi stuff and has a normal timeout of 10s. I want different timeouts for certain locations (/admin, /sitemap.xml). Is there an easy way to do this without copying the entire location block for each one?

        location /admin {
            fastcgi_read_timeout 5m;
            # also use the location info below.
        }

        location /sitemap.xml {
            fastcgi_read_timeout 5m;
            # also use the location info below.
        }

        location / {
            fastcgi_pass 127.0.0.1:8014;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
            fastcgi_pass_header Authorization;
            fastcgi_intercept_errors off;
            fastcgi_param SERVER_ADDR $server_addr;
            fastcgi_param SERVER_PORT $server_port;
            fastcgi_param SERVER_NAME $server_name;
            fastcgi_param SERVER_PROTOCOL $server_protocol;
            fastcgi_param REMOTE_ADDR $remote_addr;
            fastcgi_param REMOTE_PORT $remote_port;
            fastcgi_param HTTP_X_FORWARDED_FOR $http_x_forwarded_for;
        }
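
    The usual pattern is to move the shared fastcgi lines into a separate file and include it from each location, so only the timeout differs per block; a sketch (the include path is an example, and it would hold the shared lines from the location / block above):

        location /admin {
            fastcgi_read_timeout 5m;
            include /etc/nginx/fastcgi_app.conf;
        }

        location /sitemap.xml {
            fastcgi_read_timeout 5m;
            include /etc/nginx/fastcgi_app.conf;
        }

        location / {
            include /etc/nginx/fastcgi_app.conf;   # keeps the default 10s timeout
        }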

    Read the article

  • Strange issue in header location redirect

    - by hd01
    I have three websites hosted (example1.com, example2.com, example3.com) on a server. There is a page (test.php) on example1.com containing just this code:

        <?php header('Location: http://example2.com/a.php'); ?>

    When I browse to test.php, it goes to http://example1.com/a.php: the host in the Location header has been replaced with the original domain, so the browser looks for the page on the same site. But when I put http://google.com instead of http://example2.com/a.php, it works correctly. I'm really confused. What is the problem? Should I set some configuration on the server? (I am the administrator of the hosting server.) PS: the server is behind a Pound load balancer. Here's the Firebug Net output for example1.com/test.php:

        Response headers:
        HTTP/1.1 302 Found
        Date: Tue, 09 Oct 2012 09:03:34 GMT
        Server: Apache/2.2.16 (Debian)
        Location: http://example1.com/a.php
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 21
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=utf-8

        Request headers:
        Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Encoding gzip, deflate
        Accept-Language en-us,en;q=0.5
        Connection keep-alive
        Cookie mycookie
        Host example1.com
        User-Agent Mozilla/5.0 (X11; Linux i686; rv:14.0) Gecko/20100101 Firefox/14.0.1
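
    Given the Pound front end, the prime suspect is Pound's Location-header rewriting: with RewriteLocation at its default, a Location that resolves to the same backend or listener gets rewritten onto the requested host, which matches the symptom exactly (google.com is untouched because it is foreign). A hedged sketch for pound.cfg:

        # inside the relevant ListenHTTP (or Service) block
        RewriteLocation 0    # 0 = leave Location headers alone; the default of 1 rewrites them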

    Read the article

  • How to avoid duplicates when copying files that have been renamed at the destination

    - by Benoitt
    I have to pull pictures from a folder (with subfolders that are updated automatically), keeping their extensions. These files have to be copied into a folder where a PHP-based website will edit them (renaming them and creating an XML file) so they are downloadable and integrated into an XML feed. Because the script renames the files at the destination, every time I perform the copy again all the files are duplicated: the script has already renamed the originals. I've tried a few things with rsync, but I'm looking for something more powerful, because I can't copy files without some external "history".

        #!/bin/bash
        find '/home/name/picture' -name '*.jpg' | while read FILE ; do
            rsync --backup --backup-dir=incremental --suffix=.old "$FILE" /var/www/media ;
        done
        wget --spider 'http://myscript.php' ;
        #exit 0

    PS: As a little addition, I'd like to replace '.' with a space just after the *.jpg copy. My PHP script has trouble identifying files when the name contains extra dots, because of the extension. I'm thinking about a find command, like I used before, combined with a sed function. Is that a good idea?
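
    Since the destination names change, comparison-based syncing cannot work; the copy step has to remember what it has already delivered. A sketch keeping a simple ledger of source paths (the ledger location is an example):

        #!/bin/bash
        SEEN=/var/lib/pic-sync/copied.list            # one already-copied source path per line
        mkdir -p "$(dirname "$SEEN")" && touch "$SEEN"

        find /home/name/picture -name '*.jpg' | while read -r FILE; do
            grep -qxF "$FILE" "$SEEN" && continue     # delivered once already; skip it
            rsync "$FILE" /var/www/media/ && printf '%s\n' "$FILE" >> "$SEEN"
        done
        wget --spider 'http://myscript.php'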

    Read the article

  • Why is "start in" needed for Windows scheduled tasks?

    - by GomoX
    We develop a web application that can be deployed on Windows or Linux. The Linux implementation uses cron, and the Windows one uses a scheduled task, to run a single PHP script that processes all scheduled tasks for our system. The task is created with schtasks during the install process. This has always worked under both W2003 and W2008. A week ago a customer reported that the scheduled task was not running. He is running Windows 2008. We checked over and over, and finally solved the issue by entering the folder that contains the .vbs script as the "start in" folder for the scheduled task. That said, there is no way to set the "start in..." value from schtasks without using an XML definition of the task. XML definitions don't work in Windows 2003, so I would have to add Windows version detection to the installer, additional testing, etc. (I'd like to avoid this if at all possible). The only atypical thing I noticed about the install is that the system is installed in D:\ as opposed to the default C:\Program Files (x86)\, but I don't see how this would matter; all the paths are absolute in all the scripts. Can anyone suggest a reasonable solution for this?
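
    If you want to avoid the XML format entirely, one workaround that works on both Windows versions is to bake the working directory into the command itself, since schtasks /TR accepts an arbitrary command line (names and paths here are placeholders):

        schtasks /Create /TN "MyAppTasks" /SC MINUTE /MO 5 ^
            /TR "cmd /c cd /d D:\MyApp && cscript //nologo runtasks.vbs"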

    Read the article
