Search Results

Search found 893 results on 36 pages for 'gzip'.

Page 27/36

  • Uploading to another domain gives HTTP code 405

    - by dragon112
    I'm trying to upload a file (which can be quite large) from the website of one server to the backend of another server using plupload. Let's say: domain 1 = http://www.websitedomain.com/uploadform domain 2 = http://www.backenddomain.com/uploadhandler When trying to upload, I send the following: OPTIONS /main/uploadnetwork.php HTTP/1.1 Host: backenddomain.com Connection: keep-alive Access-Control-Request-Method: POST Origin: http://www.websitedomain.com User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.79 Safari/537.4 Access-Control-Request-Headers: origin, content-type Accept: */* Referer: http://www.websitedomain.com/uploadform Accept-Encoding: gzip,deflate,sdch Accept-Language: nl-NL,nl;q=0.8,en-US;q=0.6,en;q=0.4 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 DNT: 1 But when I try to start the upload, the server returns the following: HTTP/1.1 405 Method Not Allowed Allow: GET, HEAD, OPTIONS, TRACE Content-Type: text/html Server: Microsoft-IIS/7.5 X-Powered-By: ASP.NET X-Powered-By-Plesk: PleskWin Date: Mon, 01 Oct 2012 12:41:57 GMT Content-Length: 999 After doing some research I found out that a browser does this to check whether the server will accept the intended message. It looks like my server doesn't feel like accepting a simple POST call, even though I use POST all the time. The Google Chrome console gives the following error: XMLHttpRequest cannot load http://www.backenddomain.com/uploadhandler. Origin http://www.websitedomain.com is not allowed by Access-Control-Allow-Origin. Does anyone know how to stop the browser from checking, or how I can tell my server to just accept the POST?
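
    A rough way to take plupload and the browser out of the picture is to replay that preflight by hand. This is only a diagnostic sketch; it assumes curl is available and reuses the hostnames from the question:

      # Replay the browser's CORS preflight and see what IIS actually answers.
      curl -i -X OPTIONS 'http://www.backenddomain.com/uploadhandler' \
           -H 'Origin: http://www.websitedomain.com' \
           -H 'Access-Control-Request-Method: POST' \
           -H 'Access-Control-Request-Headers: origin, content-type'
      # A CORS-enabled endpoint would reply 200 with Access-Control-Allow-Origin
      # and Access-Control-Allow-Methods headers; a 405 here confirms the
      # preflight itself is being rejected before the POST is ever attempted.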

    Read the article

  • PXELinux and compressed kernels/images

    - by Yvan JANSSENS
    Is it possible to boot compressed kernels with a compressed initrd with PXELinux? First, a little background: we created a custom Linux distro for diskless OpenCL computing nodes, and we want those nodes to fetch their OS from the network. Our distro is composed of a kernel (duh) and a large initrd which is loaded into RAM; everything is executed from there. We chose to run everything off the initrd for two reasons: NFS was not an option to serve the filesystem's extra contents, and we wanted fast file access from RAM. No persistent storage is needed; data and config are pulled dynamically through a SOAP service. Now our initrd is about 450M in size. At our network speeds, it takes about two to three minutes to load a single client. Will compression speed up the download, and if so, which one should be used? Is LZMA supported by PXELinux, or do we need to stick to bzip2 or gzip? Because of the 2-3 minute loading time, booting 15 nodes over the same network link takes quite a lot of time. We decided not to use hard drives or CD/DVD drives for financial reasons (the cheapest HDD @ €30, times 15, is a lot of money saved ;-) ). So, our question is: what compression options are available for this setup? And how do we do this? Thank you for your time! Yvan Janssens
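
    As a rough way to see what each compressor would buy before changing the PXE setup, the raw initrd can simply be recompressed offline; this is just a sketch, and the file name initrd.img is illustrative:

      # Compress the same initrd with each candidate and compare the sizes.
      for tool in gzip bzip2 xz; do
          $tool -9 -c initrd.img > initrd.img.$tool
      done
      ls -lh initrd.img.*
      # xz produces LZMA2 output; whether the deployed pxelinux/kernel pair
      # accepts it depends on their versions, so test-boot a single node first.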

    Read the article

  • How to back up initial state of external backup drive?

    - by intuited
    I've picked up an HP Simplesave external drive. It comes with some fancy software that is of no use to me because I don't use Windows. Like many current consumer-targeted backup drives, the backup software is actually contained on the drive itself. I'd like to save the drive's initial state so that I can restore it if I decide to sell it. The backup box itself is somewhat customized: in addition to the hard drive device, it presents a CDROM-like device on /dev/sr0. I gather that the purpose of this cdrom device is to bootstrap, via Windows autoplay, the backup application which lives on the disk itself. I wouldn't assume any guarantees about how it does this, so it seems important to preserve the exact state of the disk. The drive is formatted with a single 500GB NTFS partition. My initial thought was to use dd to dump the disk (/dev/sdb) itself, but this proved impractical, as the resulting file was not sparse. This seemed to be because the NTFS empty space is not filled with zeroes, but with a repeating series of 16 bytes. I tried gzipping the output of dd. This reduced the file to a manageable size (the first 18GB compressed to 81MB, versus 47MB to tarball the contents of the mounted filesystem), but it was very slow on my admittedly somewhat derelict Pentium M processor; that first 18GB took about 30 minutes. So I've resorted to dumping the disk state and partition data separately. I've dumped the partition state with sfdisk -d /dev/sdb > sfdisk.-d.out I've also created a compressed image of the NTFS partition (the only one on the disk) with ntfsclone --save-image --output - /dev/sdb1 | gzip -c > ntfsclone.img.gz Is there anything else I should do to ensure that I can restore the precise original state of the drive?
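
    For reference, a hedged sketch of the matching restore, built only from the two dumps described above (the device names are the ones from the question):

      # Put the partition table back, then replay the ntfsclone image.
      sfdisk /dev/sdb < sfdisk.-d.out
      gzip -dc ntfsclone.img.gz | ntfsclone --restore-image --overwrite /dev/sdb1 -
      # The first sector (MBR boot code) is covered by neither dump; if the
      # vendor relies on anything there, it could be saved separately, e.g.:
      # dd if=/dev/sdb of=mbr.bin bs=512 count=1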

    Read the article

  • Setting up logging for a remote backup script

    - by Brian Dainis
    So I wrote up a short script that I am planning to run via a cron job daily to package up my site files and send them to a remote location. I also plan to incorporate DB dumps, but I have not gotten that far yet. My issue today, however, is that I am uncertain how to log the output of each command for errors, warnings, or other pertinent information the command may produce. I would also like to install some type of fail-safe so that if something goes horribly wrong the script will stop dead in its tracks and notify me via email or something. OK, the email thing is not as critical, but it would be nice. Does anybody have any ideas for that? Here is what I have so far. By the way, both servers are CentOS 6.2 running standard LAMP. #!/bin/sh ################################# ### Set Vars ################################# THEDATE=`date +%m%d%y%H%M` ################################# ### Create Archives ################################# tar -cf /root/backups/files/server_BAK_${THEDATE}.tar -C / var/www/vhosts gzip /root/backups/files/server_BAK_${THEDATE}.tar ################################# ### Send Data to Remote Server ################################# scp /root/backups/files/server_BAK_${THEDATE}.tar.gz user@host:/home/bak1/ftp/backups/ ################################# ### Remove Data from this Server ################################# rm -rf /root/backups/files/server_BAK_${THEDATE}.tar.gz
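
    One common pattern for the logging and fail-safe part, as a sketch only; it assumes bash and a working mail/mailx on the box, and the log path and address are illustrative:

      #!/bin/bash
      LOG=/root/backups/logs/backup_$(date +%m%d%y%H%M).log
      exec >>"$LOG" 2>&1      # everything the commands print, stdout and stderr, goes to the log
      set -e                  # stop dead at the first command that exits non-zero
      trap 'echo "Backup failed, see $LOG" | mail -s "backup FAILED" admin@example.com' ERR
      # ... the existing tar / gzip / scp / rm commands go here unchanged ...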

    Read the article

  • nginx logrotate config

    - by TomOP
    What's the best way to rotate nginx logfiles? In my opinion, I should create a file "nginx" in /etc/logrotate.d/, fill it with the following code, and do a /etc/init.d/syslog restart after that. This would be my config (I haven't tested it yet): /usr/local/nginx/logs/*.log { #rotate the logfile(s) daily daily # adds extension like YYYYMMDD instead of simply adding a number dateext # If log file is missing, go on to next one without issuing an error msg missingok # Save logfiles for the last 49 days rotate 49 # Old versions of log files are compressed with gzip compress # Postpone compression of the previous log file to the next rotation cycle delaycompress # Do not rotate the log if it is empty notifempty # create mode owner group create 644 nginx nginx #after logfile is rotated and nginx.pid exists, send the USR1 signal postrotate [ ! -f /usr/local/nginx/logs/nginx.pid ] || kill -USR1 `cat /usr/local/nginx/logs/nginx.pid` endscript } I have both the access.log and error.log files in /usr/local/nginx/logs/ and want to rotate both daily. Can anyone please tell me if "dateext" is correct? I want the log filename to be something like "access.log-2010-12-04". One more thing: can I do the log rotation every day at a specific time (e.g. 11 pm)? If so, how? Thanks.
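
    To check such a rule without touching any logs, logrotate can be run in debug mode against just this file; and if rotation has to happen at a fixed time rather than whenever cron.daily runs, a dedicated cron entry is one option. A sketch, using the paths from the question:

      logrotate -d /etc/logrotate.d/nginx      # dry run: prints what would be rotated, changes nothing
      # Illustrative crontab line to force rotation at 23:00 instead of relying on cron.daily:
      # 0 23 * * *  /usr/sbin/logrotate -f /etc/logrotate.d/nginx
      # Note: plain "dateext" appends -YYYYMMDD; a dashed name like access.log-2010-12-04
      # needs an additional "dateformat -%Y-%m-%d" line, if the installed logrotate
      # is new enough to support that directive.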

    Read the article

  • Unusual HEAD requests to nonsense URLs from Chrome

    - by JeremyDWill
    I have noticed unusual traffic coming from my workstation the last couple of days. I am seeing HEAD requests sent to random character URLs, usually three or four within a second, and they appear to be coming from my Chrome browser. The requests repeat only three or four times a day, but I have not identified a particular pattern. The URL characters are different for each request. Here is an example of the request as recorded by Fiddler 2: HEAD http://xqwvykjfei/ HTTP/1.1 Host: xqwvykjfei Proxy-Connection: keep-alive Content-Length: 0 User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 The response to this request is as follows: HTTP/1.1 502 Fiddler - DNS Lookup Failed Content-Type: text/html Connection: close Timestamp: 08:15:45.283 Fiddler: DNS Lookup for xqwvykjfei failed. No such host is known I have been unable to find any information through Google searches related to this issue. I do not remember seeing this kind of traffic before late last week, but it may be that I just missed it before. The one modification I made to my system last week that was unusual was adding the Delicious add-in/extension to both IE and Chrome. I have since removed both of these, but am still seeing the traffic. I have run virus scan (Trend Micro) and HiJackThis looking for malicious code, but I have not found any. I would appreciate any help tracking down the source of the requests, so I can determine if they are benign, or indicative of a bigger problem. Thanks.

    Read the article

  • How to flash Dell Precision 390 from linux (debian)

    - by malat
    I am trying to update my BIOS: $ sudo dmidecode -s bios-version 2.1.2 With a newer one: 2.6.0. I went to this page Dell Precision System BIOS, 2.6.0 After downloading the file WS390-020600.BIN, here is what it states: $ ./WS390-020600.BIN --help Usage: WS390-020600.BIN [options] Options: --help Print this text. --version Print package versions. If no options, update the BIOS. and $ ./WS390-020600.BIN --version Dell BIOS Update Installer 1.2 Copyright 2006 Dell Inc. All Rights Reserved. ./WS390-020600.BIN: 60: ./WS390-020600.BIN: ./flash: not found Does anyone know where this flash command can be found? Update: it looks like this is a self-extracting archive (it needs bash, as per the comment in the header). $ head -30 WS390-020600.BIN [...] Extract() { tail -n +`awk '/^__ARC__/ { print NR + 1; exit 0; }' $0` $0 | gzip -cd >$_PRG So the flash command should have been auto-generated; however, the above command does not appear to be running as the original author intended. I do not see anything wrong with the command though.
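
    A hedged way to see what ./flash was supposed to be is to run the quoted Extract() logic by hand and inspect the payload. This only mirrors the snippet above; what the payload actually contains is an assumption to be checked with file and tar:

      # Pull out everything after the __ARC__ marker and decompress it, exactly
      # as the installer's Extract() function does.
      tail -n +"$(awk '/^__ARC__/ { print NR + 1; exit 0 }' WS390-020600.BIN)" WS390-020600.BIN \
          | gzip -cd > payload
      file payload                                      # see what the payload really is
      mkdir extracted && tar -xf payload -C extracted   # if it turns out to be a tar archive
      ls extracted                                      # look for the generated ./flash helper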

    Read the article

  • How should an experienced Windows SysAdmin learn Linux? [closed]

    - by Systemspoet
    I have a new hire starting in a few weeks who is an experienced Windows SysAdmin. I think he's fairly senior on the Windows side, with a pretty deep AD understanding and experience with Exchange 2007, 2010, and Exchange migrations. He's done a little PowerShell, but I suspect more of the "run this command to do this" variety than the "write a script to do this" sort. However, we are a mixed shop and (he knows this) I expect him to become a reasonably competent Linux SysAdmin over time. I'm looking for good starting points to bring him along. I have over ten years of Linux/UNIX experience, so it all sort of seems intuitive to me, but I've been thinking about the toolkit you actually need to be productive in the Linux CLI world. Just to be able to use the machines at all, off the top of my head... vi Basic CLI stuff -- move around, rename files, copy files, tar, gzip, changing passwords, finding relevant manpages, keeping track of where you are, finding things in your history, etc., etc. More advanced things that I take for granted but are actually pretty hard -- doing things with 'find', extracting relevant text via 'awk' and/or 'cut', knowing when to use 'grep' and when to use 'grep -e' or 'egrep'. Distribution-specific stuff... compiling software, rpm, yum, apt-get, you name it. This all seems pretty basic to me, but when I think back to 1995 when I was first learning my way, some of those things took me years to master. So my question is -- where should I send him to pick up those skills? I'm not just thinking of classes, but also websites and books. Where do you all suggest as a starting point for picking up Linux skills?
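
    A few one-liners of the kind listed above can double as practice material; these are generic examples rather than anything tied to a particular distribution:

      find /var/log -name '*.gz' -mtime +7 -print            # find: rotated logs older than a week
      awk -F: '$3 >= 500 { print $1 }' /etc/passwd           # awk: print users with a UID of 500 or more
      cut -d: -f1,7 /etc/passwd | grep -E '(bash|zsh)$'      # cut plus extended grep: who gets a real shell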

    Read the article

  • nginx root directory not forwarding correctly

    - by user66700
    The server files are stored in /var/www/. Everything was working perfectly; then I started getting the following errors: 2011/01/28 17:20:05 [error] 15415#0: *1117703 "/var/www/https:/secure.domain.com/index.html" is not found (2: No such file or directory), client: 119.110.28.211, server: secure.domain.com, request: "HEAD /https://secure.domain.com/ HTTP/1.1", host: "secure.domain.com" Here's my config: server { server_name secure.domain.com; listen 443; listen [::]:443 default ipv6only=on; gzip on; gzip_comp_level 1; gzip_types text/plain text/html text/css application/x-javascript text/xml text/javascript; error_log logs/ssl.error.log; gzip_static on; gzip_http_version 1.1; gzip_proxied any; gzip_disable "msie6"; gzip_vary on; ssl on; ssl_ciphers RC4:ALL:-LOW:-EXPORT:!ADH:!MD5; keepalive_timeout 0; ssl_certificate /root/server.pem; ssl_certificate_key /root/ssl.key; location / { root /var/www; index index.html index.htm index.php; } }

    Read the article

  • Linux Has Become Very Slow Dealing With Large Data

    - by Kohjah Breese
    Last year I bought a computer for around $1,800, so it is relatively high-end. When I first got it I was particularly pleased at how quickly it dealt with large MySQL queries, imports and exports. But somewhere along the way something has gone wrong and I am not sure how to diagnose the problem. Any job that involves processing large amounts of data, e.g. gzipping a file of c. 1GB+, UPDATEs on large MySQL tables, etc., has become very slow. I just performed an intensive ALTER statement on a 240,000,000 row table on a remote server, which is lower spec. This took about 10 minutes. However, performing the same query on a 167,000,000 row table on my computer went fine until it hit 860MB. Now it is only writing about 1MB every 15 seconds. Does anyone have any advice as to debugging what the issue is? I am using Linux Mint (based on Ubuntu 12.04). The home partition is encrypted, which really slows down gzip. I have noticed the swap is barely used, but am not sure if that is because there is more than enough RAM. The filesystem is ext4. The MySQL server is on a separate hard drive, but it was fine when I first installed it. Other than the above issues, there are no other problems with it. I am going to install a fresh Ubuntu on the 4th hard drive to see if that is any different.
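
    A hedged sketch for narrowing down whether the encrypted home directory or the disk itself is the bottleneck; it assumes /tmp sits on an unencrypted filesystem and that sysstat/iostat is installed:

      dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync     # raw write speed outside $HOME
      dd if=/dev/zero of=$HOME/ddtest bs=1M count=1024 conv=fdatasync    # the same write through the encrypted home
      time gzip -c /tmp/ddtest > /dev/null                               # CPU-side cost of gzip alone
      iostat -x 5                                                        # watch %util and await while the slow ALTER runs
      rm -f /tmp/ddtest $HOME/ddtest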

    Read the article

  • Can I remove the original file while running "sort"?

    - by Spaceman
    I'm sorting a huge file, around 400 gigabytes. I'm running out of disk space and I must do something quickly. Let's assume the original file is called original_file. So I execute it (simplified) as "sort original_file | gzip -c > output_file". I use /home/tmp as a temporary dir. From what I see, there are a lot of intermediate files, like so: tmpA465 tmpB154 ... and so on. The smallest ones are 12 megabytes; the largest are ~182 megabytes. So, it seems that the "sort" command has already split the original file into small pieces, has sorted them, and is now merging them into bigger parts (which will, eventually, be sorted as well). Please correct me if I'm wrong. Can I remove the original file right now without terminating the sort process? I've been waiting for a few days for that, and it's important that the "sort" command does not fail and that I finally get the result file. The OS is Ubuntu Server 13.04, x64. Thanks!
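
    Whether the running sort still needs original_file can be checked rather than guessed, and for a future run GNU sort can be pointed at the temp dir and told to compress its own spill files. A sketch, assuming the GNU coreutils sort that Ubuntu ships:

      lsof original_file       # if sort (or anything else) still holds it open, deleting frees no space until that handle closes
      # For the next run: spill to /home/tmp, gzip the temporaries, and write the result explicitly.
      sort -T /home/tmp --compress-program=gzip original_file | gzip -c > output_file.gz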

    Read the article

  • How to simulate browser form POST method using PHP/cURL

    - by user283266
    I'm trying to simulate browser with POST method using PHP/cURL. When I looked at that live Http header it shows Content-Type: multipart/form-data. I checked on the internet where it was suggested that cURL will send multipart/form-data when a custom headers is specified to Content-Type: multipart/form-data. $headers = array( 'Content-Type' => 'multipart/form-data; boundary='.$boundary ); This didn't work for me either when I print_r(curl_getinfo()) it showed [content_type] => text/html; charset=UTF-8 Which means cURL sent a default headers I also read that sending/uploading a file with cURL will cause data to be send as multipart/form-data. I created a file which curl uploaded but again when I ran curl_getinfo I got [content_type] => text/html; charset=UTF-8 $data_array = array("field" => "@c:\file_location.txt"); I also tried to read a file content so that the only thing sent would be content NOT ATTACHED FILE but this didn't work for me curl_getinfo shows [content_type] => text/html; charset=UTF-8. $data_array = array("field" => "<c:\file_location.txt"); // note @ replaced with < Do I miss somthing here? This is the referer url POST somepath HTTP/1.1 Host: www(dot)domain(dot)com User-Agent: Mozilla/5.0 (Windows) Gecko/13081217 Firefox/3 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive Referer: url/some-file.php Content-Type: multipart/form-data; boundary=--------------------------$boundary Content-Length: $some_number ----------------------------$boundary Content-Disposition: form-data; name="$some_Value1" $some_text1 ----------------------------$boundary Content-Disposition: form-data; name="$some_Value2" $some_text2 ----------------------------$boundary Content-Disposition: form-data; name="$some_Value3" $some_text3 ----------------------------$boundary Content-Disposition: form-data; name="$some_Value4" $some_text4 ----------------------------$boundary Content-Disposition: form-data; name="$some_Value5" $some_text5 ----------------------------$boundary Content-Disposition: form-data; name="$some_Value6" $some_text6 ----------------------------$boundary Content-Disposition: form-data; name="$some_Value7" $some_text7 ----------------------------$boundary Content-Disposition: form-data; name="$some_Value8" $some_text8 ----------------------------$boundary Content-Disposition: form-data; name="$some_Value9" ----------------------------$boundary Content-Disposition: form-data; name="$some_Value10" ----------------------------$boundary-- Here is a piece of code. <? 
//Include files set_time_limit(0); include'body.php'; include'keyword.php'; include'bio.php'; include'summary.php'; include'headline.php'; include'category.php'; include'spin.php'; include'random-text.php'; $category = category(); $headline = headline() ; $summary = summary(); $keyword = keyword(); $body = body(); $bio = bio(); $target="url"; $ref ="url_ref"; $c = "Content-Disposition: form-data; name="; $boundary = "---------------------------".random_text(); $category = category(); $headline = headline() ; $summary = summary(); $keyword = keyword(); $body = body(); $bio = bio(); // emulating content form as it appears on livehttp header $data = "\r\n".$boundary."\r\n".$c."\"pen_id\"\r\n\r\n".$Auth_id."\r\n".$boundary."\r\n".$c."\"cat_id\"\r\n\r\n".category()."\r\n".$boundary."\r\n".$c."\"title\"\r\n\r\n".headline()."\r\n".$boundary."\r\n".$c."\"meta_desc\"\r\n\r\n".summary()."\r\n".$boundary."\r\n".$c."\"meta_keys\"\r\n\r\n".keyword()."\r\n".$boundary."\r\n".$c."\"content\"\r\n\r\n".body()."\r\n".$boundary."\r\n".$c."\"author_bio\"\r\n\r\n".bio()."\r\n".$boundary."\r\n".$c."\"allow_comments\"\r\n\r\ny\r\n".$boundary."\r\n".$c."\"id\"\r\n\r\n\r\n".$boundary."\r\n".$c."\"action\"\r\n\r\n\r\n".$boundary."--\r\n"; // inserting content into a file $file = "C:\file_path.txt"; $fh = fopen($file, 'w+') or die("Can't open file"); fwrite($fh,$data); fclose($fh); // pulling out content from a file as multipart/form-data $data_array = array ("field" => "<C:\file_path.txt"); $headers = array ( 'POST /myhome/article/new HTTP/1.1', 'Host: url', 'User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.20) Gecko/20081217 Firefox/2.0.0.20 (.NET CLR 3.5.30729)', 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9;q=0.8', 'Accept-Language: en-us,en;q=0.5', 'Accept-Encoding: gzip,deflate', 'Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7', 'Keep-Alive: 300', 'Connection: keep-alive', 'Content-Type: multipart/form-data; boundary='.$boundary, 'Content-Length: '.strlen($data), ); # Create the cURL session $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $target); // Define target site curl_setopt($ch, CURLOPT_POST,1); curl_setopt($ch, CURLOPT_HEADER, $headers); // No http head //curl_setopt($ch, CURLOPT_REFERER, $ref); curl_setopt($ch, CURLOPT_NOBODY, FALSE); curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE); // Return page in string curl_setopt($ch, CURLOPT_COOKIEJAR, "c:\cookie\cookies.txt"); // Tell cURL where to write curl_setopt($ch, CURLOPT_COOKIEFILE, "c:\cookie\cookies.txt"); // Tell cURL which cookies //curl_setopt($ch, CURLOPT_USERAGENT, $agent); curl_setopt($ch, CURLOPT_POST, TRUE); curl_setopt($ch, CURLOPT_POSTFIELDS, "$data_array"); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE); // Follow redirects curl_setopt($ch, CURLOPT_MAXREDIRS, 4); # Execute the PHP/CURL session and echo the downloaded page $page = curl_exec($ch); $err = curl_error($ch); $info =curl_getinfo($ch); # Close the cURL session curl_close($ch); print_r($err); print_r($info); ?>
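
    For comparison, the same kind of multipart POST can be produced from the shell, since curl -F always sends Content-Type: multipart/form-data; the field names and URLs below are placeholders, not the real ones:

      curl -v -e 'http://example.com/referer-page' \
           -F 'pen_id=123' \
           -F 'title=Some headline' \
           -F 'content=<body.txt' \
           'http://example.com/myhome/article/new'
      # In the PHP above, the equivalent switch happens when CURLOPT_POSTFIELDS is
      # given an actual array; passing the quoted string "$data_array" sends the
      # literal text "Array" as application/x-www-form-urlencoded instead. The
      # custom request headers would also need CURLOPT_HTTPHEADER, not CURLOPT_HEADER.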

    Read the article

  • httpd high cpu usage slowing down server response

    - by max
    my client has a image sharing website with about 100.000 visitor per day it has been slowed down considerably since this morning when i checked processes i've notice high cpu usage from http .... some has suggested ddos attack ... i'm not a webmaster and i've no idea whts going on top top - 20:13:30 up 5:04, 4 users, load average: 4.56, 4.69, 4.59 Tasks: 284 total, 3 running, 281 sleeping, 0 stopped, 0 zombie Cpu(s): 12.1%us, 0.9%sy, 1.7%ni, 69.0%id, 16.4%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 16037152k total, 15875096k used, 162056k free, 360468k buffers Swap: 4194288k total, 888k used, 4193400k free, 14050008k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 4151 apache 20 0 277m 84m 3784 R 50.2 0.5 0:01.98 httpd 4115 apache 20 0 210m 16m 4480 S 18.3 0.1 0:00.60 httpd 12885 root 39 19 4296 692 308 S 13.0 0.0 11:09.53 gzip 4177 apache 20 0 214m 20m 3700 R 12.3 0.1 0:00.37 httpd 2219 mysql 20 0 4257m 198m 5668 S 11.0 1.3 42:49.70 mysqld 3691 apache 20 0 206m 14m 6416 S 1.7 0.1 0:03.38 httpd 3934 apache 20 0 211m 17m 4836 S 1.0 0.1 0:03.61 httpd 4098 apache 20 0 209m 17m 3912 S 1.0 0.1 0:04.17 httpd 4116 apache 20 0 211m 17m 4476 S 1.0 0.1 0:00.43 httpd 3867 apache 20 0 217m 23m 4672 S 0.7 0.1 1:03.87 httpd 4146 apache 20 0 209m 15m 3628 S 0.7 0.1 0:00.02 httpd 4149 apache 20 0 209m 15m 3616 S 0.7 0.1 0:00.02 httpd 12884 root 39 19 22336 2356 944 D 0.7 0.0 0:19.21 tar 4054 apache 20 0 206m 12m 4576 S 0.3 0.1 0:00.32 httpd another top top - 15:46:45 up 5:08, 4 users, load average: 5.02, 4.81, 4.64 Tasks: 288 total, 6 running, 281 sleeping, 0 stopped, 1 zombie Cpu(s): 18.4%us, 0.9%sy, 2.3%ni, 56.5%id, 21.8%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 16037152k total, 15792196k used, 244956k free, 360924k buffers Swap: 4194288k total, 888k used, 4193400k free, 13983368k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 4622 apache 20 0 209m 16m 3868 S 54.2 0.1 0:03.99 httpd 4514 apache 20 0 213m 20m 3924 R 50.8 0.1 0:04.93 httpd 4627 apache 20 0 221m 27m 4560 R 18.9 0.2 0:01.20 httpd 12885 root 39 19 4296 692 308 S 18.9 0.0 11:51.79 gzip 2219 mysql 20 0 4257m 199m 5668 S 18.3 1.3 43:19.04 mysqld 4512 apache 20 0 227m 33m 4736 R 5.6 0.2 0:01.93 httpd 4520 apache 20 0 213m 19m 4640 S 1.3 0.1 0:01.48 httpd 4590 apache 20 0 212m 19m 3932 S 1.3 0.1 0:00.06 httpd 4573 apache 20 0 210m 16m 3556 R 1.0 0.1 0:00.03 httpd 4562 root 20 0 15164 1388 952 R 0.7 0.0 0:00.08 top 98 root 20 0 0 0 0 S 0.3 0.0 0:04.89 kswapd0 100 root 39 19 0 0 0 S 0.3 0.0 0:02.85 khugepaged 4579 apache 20 0 209m 16m 3900 S 0.3 0.1 0:00.83 httpd 4637 apache 20 0 209m 15m 3668 S 0.3 0.1 0:00.03 httpd ps aux [root@server ~]# ps aux | grep httpd root 2236 0.0 0.0 207524 10124 ? Ss 15:09 0:03 /usr/sbin/http d -k start -DSSL apache 3087 2.7 0.1 226968 28232 ? S 20:04 0:06 /usr/sbin/http d -k start -DSSL apache 3170 2.6 0.1 221296 22292 ? R 20:05 0:05 /usr/sbin/http d -k start -DSSL apache 3171 9.0 0.1 225044 26768 ? R 20:05 0:17 /usr/sbin/http d -k start -DSSL apache 3188 1.5 0.1 223644 24724 ? S 20:05 0:03 /usr/sbin/http d -k start -DSSL apache 3197 2.3 0.1 215908 17520 ? S 20:05 0:04 /usr/sbin/http d -k start -DSSL apache 3198 1.1 0.0 211700 13000 ? S 20:05 0:02 /usr/sbin/http d -k start -DSSL apache 3272 2.4 0.1 219960 21540 ? S 20:06 0:03 /usr/sbin/http d -k start -DSSL apache 3273 2.0 0.0 211600 12804 ? S 20:06 0:03 /usr/sbin/http d -k start -DSSL apache 3279 3.7 0.1 229024 29900 ? S 20:06 0:05 /usr/sbin/http d -k start -DSSL apache 3280 1.2 0.0 0 0 ? Z 20:06 0:01 [httpd] <defun ct> apache 3285 2.9 0.1 218532 21604 ? 
S 20:06 0:04 /usr/sbin/http d -k start -DSSL apache 3287 30.5 0.4 265084 65948 ? R 20:06 0:43 /usr/sbin/http d -k start -DSSL apache 3297 1.9 0.1 216068 17332 ? S 20:06 0:02 /usr/sbin/http d -k start -DSSL apache 3342 2.7 0.1 216716 17828 ? S 20:06 0:03 /usr/sbin/http d -k start -DSSL apache 3356 1.6 0.1 217244 18296 ? S 20:07 0:01 /usr/sbin/http d -k start -DSSL apache 3365 6.4 0.1 226044 27428 ? S 20:07 0:06 /usr/sbin/http d -k start -DSSL apache 3396 0.0 0.1 213844 16120 ? S 20:07 0:00 /usr/sbin/http d -k start -DSSL apache 3399 5.8 0.1 215664 16772 ? S 20:07 0:05 /usr/sbin/http d -k start -DSSL apache 3422 0.7 0.1 214860 17380 ? S 20:07 0:00 /usr/sbin/http d -k start -DSSL apache 3435 3.3 0.1 216220 17460 ? S 20:07 0:02 /usr/sbin/http d -k start -DSSL apache 3463 0.1 0.0 212732 15076 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3492 0.0 0.0 207660 7552 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3493 1.4 0.1 218092 19188 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3500 1.9 0.1 224204 26100 ? R 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3501 1.7 0.1 216916 17916 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3502 0.0 0.0 207796 7732 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3505 0.0 0.0 207660 7548 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3529 0.0 0.0 207660 7524 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3531 4.0 0.1 216180 17280 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3532 0.0 0.0 207656 7464 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3543 1.4 0.1 217088 18648 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3544 0.0 0.0 207656 7548 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3545 0.0 0.0 207656 7560 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3546 0.0 0.0 207660 7540 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3547 0.0 0.0 207660 7544 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3548 2.3 0.1 216904 17888 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3550 0.0 0.0 207660 7540 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3551 0.0 0.0 207660 7536 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3552 0.2 0.0 214104 15972 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3553 6.5 0.1 216740 17712 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3554 6.3 0.1 216156 17260 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3555 0.0 0.0 207796 7716 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3556 1.8 0.0 211588 12580 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3557 0.0 0.0 207660 7544 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3565 0.0 0.0 207660 7520 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3570 0.0 0.0 207660 7516 ? S 20:08 0:00 /usr/sbin/http d -k start -DSSL apache 3571 0.0 0.0 207660 7504 ? 
S 20:08 0:00 /usr/sbin/http d -k start -DSSL root 3577 0.0 0.0 103316 860 pts/2 S+ 20:08 0:00 grep httpd httpd error log [Mon Jul 01 18:53:38 2013] [error] [client 2.178.12.67] request failed: error reading the headers, referer: http://akstube.com/image/show/27023/%D9%86%DB%8C%D9%88%D8%B4%D8%A7-%D8%B6%DB%8C%D8%BA%D9%85%DB%8C-%D9%88-%D8%AE%D9%88%D8%A7%D9%87%D8%B1-%D9%88-%D9%87%D9%85%D8%B3%D8%B1%D8%B4 [Mon Jul 01 18:55:33 2013] [error] [client 91.229.215.240] request failed: error reading the headers, referer: http://akstube.com/image/show/44924 [Mon Jul 01 18:57:02 2013] [error] [client 2.178.12.67] Invalid method in request [Mon Jul 01 18:57:02 2013] [error] [client 2.178.12.67] File does not exist: /var/www/html/501.shtml [Mon Jul 01 19:21:36 2013] [error] [client 127.0.0.1] client denied by server configuration: /var/www/html/server-status [Mon Jul 01 19:21:36 2013] [error] [client 127.0.0.1] File does not exist: /var/www/html/403.shtml [Mon Jul 01 19:23:57 2013] [error] [client 151.242.14.31] request failed: error reading the headers [Mon Jul 01 19:37:16 2013] [error] [client 2.190.16.65] request failed: error reading the headers [Mon Jul 01 19:56:00 2013] [error] [client 151.242.14.31] request failed: error reading the headers Not a JPEG file: starts with 0x89 0x50 also there is lots of these in the messages log Jul 1 20:15:47 server named[2426]: client 203.88.6.9#11926: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 20:15:47 server named[2426]: client 203.88.6.9#26255: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 20:15:48 server named[2426]: client 203.88.6.9#20093: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 20:15:48 server named[2426]: client 203.88.6.9#8672: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:07 server named[2426]: client 203.88.6.9#39352: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:08 server named[2426]: client 203.88.6.9#25382: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:08 server named[2426]: client 203.88.6.9#9064: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:09 server named[2426]: client 203.88.23.9#35375: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:09 server named[2426]: client 203.88.6.9#61932: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:09 server named[2426]: client 203.88.23.9#4423: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:09 server named[2426]: client 203.88.6.9#40229: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#46128: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#62128: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#35240: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#36774: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#28361: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#14970: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#20216: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.10#31794: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#23042: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#11333: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 
203.88.23.10#41807: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.23.9#20092: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:14 server named[2426]: client 203.88.6.10#43526: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.9#17173: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.9#62412: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.10#63961: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.10#64345: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:15 server named[2426]: client 203.88.23.10#31030: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:45:16 server named[2426]: client 203.88.6.9#17098: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:16 server named[2426]: client 203.88.6.9#17197: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:16 server named[2426]: client 203.88.6.9#18114: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:16 server named[2426]: client 203.88.6.9#59138: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:45:17 server named[2426]: client 203.88.6.9#28715: query (cache) 'www.xxxmaza.com/A/IN' denied Jul 1 15:48:33 server named[2426]: client 203.88.23.9#26355: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:34 server named[2426]: client 203.88.23.9#34473: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:34 server named[2426]: client 203.88.23.9#62658: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:34 server named[2426]: client 203.88.23.9#51631: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:35 server named[2426]: client 203.88.23.9#54701: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:36 server named[2426]: client 203.88.6.10#63694: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:36 server named[2426]: client 203.88.6.10#18203: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:37 server named[2426]: client 203.88.6.10#9029: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:38 server named[2426]: client 203.88.6.10#58981: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:48:38 server named[2426]: client 203.88.6.10#29321: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:47 server named[2426]: client 119.160.127.42#42355: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:49 server named[2426]: client 119.160.120.42#46285: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:53 server named[2426]: client 119.160.120.42#30696: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:54 server named[2426]: client 119.160.127.42#14038: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:55 server named[2426]: client 119.160.120.42#33586: query (cache) 'xxxmaza.com/A/IN' denied Jul 1 15:49:56 server named[2426]: client 119.160.127.42#55114: query (cache) 'xxxmaza.com/A/IN' denied
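
    A few hedged starting points for telling organic load from an attack on a box like this; these are standard CentOS tools, and nothing here is specific to the site:

      # Top remote IPs by number of current connections.
      netstat -ntu | awk 'NR > 2 { split($5, a, ":"); print a[1] }' | sort | uniq -c | sort -rn | head
      # Watch what the busy httpd children are actually serving.
      tail -f /var/log/httpd/access_log
      # Also worth noting: the long-running tar/gzip pair (PIDs 12884/12885 in the
      # top output) is competing for the same disk, which fits the high %wa figures.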

    Read the article

  • Problems with jQuery getJSON using local files in Chrome

    - by Tauren
    I have a very simple test page that uses XHR requests with jQuery's $.getJSON and $.ajax methods. The same page works in some situations and not in others. Specifically, it doesn't work in Chrome on Ubuntu. I'm testing on Ubuntu 9.10 with Chrome 5.0.342.7 beta and Mac OSX 10.6.2 with Chrome 5.0.307.9 beta. It works correctly when the files are installed on a web server, from both Ubuntu/Chrome and Mac/Chrome (try it out here). It works correctly when the files are installed on the local hard drive in Mac/Chrome (accessed with file:///...). It FAILS when the files are installed on the local hard drive in Ubuntu/Chrome (accessed with file:///...). The small set of 3 files can be downloaded in a tar/gzip file from here: http://issues.tauren.com/testjson/testjson.tgz When it works, the Chrome console will say: XHR finished loading: "http://issues.tauren.com/testjson/data.json". index.html:16Using getJSON index.html:21 Object result: "success" __proto__: Object index.html:22success XHR finished loading: "http://issues.tauren.com/testjson/data.json". index.html:29Using ajax with json dataType index.html:34 Object result: "success" __proto__: Object index.html:35success XHR finished loading: "http://issues.tauren.com/testjson/data.json". index.html:46Using ajax with text dataType index.html:51{"result":"success"} index.html:52undefined When it doesn't work, the Chrome console will show this: index.html:16Using getJSON index.html:21null index.html:22Uncaught TypeError: Cannot read property 'result' of null index.html:29Using ajax with json dataType index.html:34null index.html:35Uncaught TypeError: Cannot read property 'result' of null index.html:46Using ajax with text dataType index.html:51 index.html:52undefined Notice that it doesn't even show the XHR requests, although the success handler is run. I swear this was working previously in Ubuntu/Chrome, and am worried something got messed up. I already uninstalled and reinstalled Chrome, but that didn't help. Can someone try it out locally on your Ubuntu system and tell me if you have any trouble? Note that it seems to be working fine in Firefox.
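
    Two shell-level workarounds that keep the same three files testable locally; both are generic suggestions rather than anything from the question, and the flag and port are only examples:

      google-chrome --allow-file-access-from-files index.html     # relax Chrome's file:// XHR restriction for one test session
      # ...or avoid file:// entirely by serving the directory over HTTP (Python 2 assumed):
      cd testjson && python -m SimpleHTTPServer 8000               # then open http://localhost:8000/index.html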

    Read the article

  • Portletfaces Bridge, Null pointer exception

    - by Moayad Abu Jaber
    I faced problem in Icefaces portlet using portletfaces bridge inside liferay. the problem is when I open the browser for the first time I got null pointer exception. for example i opened the portal through chrome browser then open firefox, my portlet I made in ICEfaces throw null pointer exception. below you will find full stack trace: java.lang.NullPointerException at org.icefaces.impl.push.servlet.ProxyHttpServletRequest.getCookies(ProxyHttpServletRequest.java:307) at org.icepush.PushContext.getBrowserIDFromCookie(PushContext.java:89) at org.icepush.PushContext.createPushId(PushContext.java:46) at org.icefaces.impl.push.servlet.ICEpushResourceHandler$ICEpushResourceHandlerImpl.beforePhase(ICEpushResourceHandler.java:172) at org.icefaces.impl.push.servlet.ICEpushResourceHandler.beforePhase(ICEpushResourceHandler.java:92) at com.sun.faces.lifecycle.Phase.handleBeforePhase(Phase.java:228) at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:99) at com.sun.faces.lifecycle.RestoreViewPhase.doPhase(RestoreViewPhase.java:116) at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118) at org.portletfaces.bridge.BridgeImpl.doFacesRequest(BridgeImpl.java:391) at org.portletfaces.bridge.GenericFacesPortlet.doView(GenericFacesPortlet.java:181) at javax.portlet.GenericPortlet.doDispatch(GenericPortlet.java:328) at javax.portlet.GenericPortlet.render(GenericPortlet.java:233) at com.liferay.portlet.FilterChainImpl.doFilter(FilterChainImpl.java:101) at com.liferay.portal.kernel.portlet.PortletFilterUtil.doFilter(PortletFilterUtil.java:64) at com.liferay.portal.kernel.servlet.PortletServlet.service(PortletServlet.java:92) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:646) at org.apache.catalina.core.ApplicationDispatcher.doInclude(ApplicationDispatcher.java:551) at org.apache.catalina.core.ApplicationDispatcher.include(ApplicationDispatcher.java:488) at com.liferay.portlet.InvokerPortletImpl.invoke(InvokerPortletImpl.java:638) at com.liferay.portlet.InvokerPortletImpl.invokeRender(InvokerPortletImpl.java:723) at com.liferay.portlet.InvokerPortletImpl.render(InvokerPortletImpl.java:425) at org.apache.jsp.html.portal.render_005fportlet_jsp._jspService(render_005fportlet_jsp.java:1440) at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:377) at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:313) at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:260) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:646) at org.apache.catalina.core.ApplicationDispatcher.doInclude(ApplicationDispatcher.java:551) at org.apache.catalina.core.ApplicationDispatcher.include(ApplicationDispatcher.java:488) at com.liferay.portal.util.PortalImpl.renderPortlet(PortalImpl.java:3740) at 
com.liferay.portal.util.PortalUtil.renderPortlet(PortalUtil.java:1180) at com.liferay.portlet.layoutconfiguration.util.RuntimePortletUtil.processPortlet(RuntimePortletUtil.java:160) at com.liferay.portlet.layoutconfiguration.util.RuntimePortletUtil.processPortlet(RuntimePortletUtil.java:94) at com.liferay.portlet.layoutconfiguration.util.RuntimePortletUtil.processTemplate(RuntimePortletUtil.java:256) at com.liferay.portlet.layoutconfiguration.util.RuntimePortletUtil.processTemplate(RuntimePortletUtil.java:181) at org.apache.jsp.html.portal.layout.view.portlet_jsp._jspService(portlet_jsp.java:821) at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:377) at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:313) at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:260) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:646) at org.apache.catalina.core.ApplicationDispatcher.doInclude(ApplicationDispatcher.java:551) at org.apache.catalina.core.ApplicationDispatcher.include(ApplicationDispatcher.java:488) at com.liferay.portal.action.LayoutAction.includeLayoutContent(LayoutAction.java:370) at com.liferay.portal.action.LayoutAction.processLayout(LayoutAction.java:629) at com.liferay.portal.action.LayoutAction.execute(LayoutAction.java:232) at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:431) at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:236) at com.liferay.portal.struts.PortalRequestProcessor.process(PortalRequestProcessor.java:153) at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1196) at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at com.liferay.portal.servlet.MainServlet.callParentService(MainServlet.java:508) at com.liferay.portal.servlet.MainServlet.service(MainServlet.java:485) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:126) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:126) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.servlet.filters.strip.StripFilter.processFilter(StripFilter.java:309) at 
com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:126) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.servlet.filters.gzip.GZipFilter.processFilter(GZipFilter.java:121) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.servlet.filters.secure.SecureFilter.processFilter(SecureFilter.java:182) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.servlet.filters.autologin.AutoLoginFilter.processFilter(AutoLoginFilter.java:254) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:646) at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:436) at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:374) at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:302) at com.liferay.portal.servlet.FriendlyURLServlet.service(FriendlyURLServlet.java:134) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:126) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:126) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at 
com.liferay.portal.servlet.filters.strip.StripFilter.processFilter(StripFilter.java:261) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:126) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.jav a:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.servlet.filters.gzip.GZipFilter.processFilter(GZipFilter.java:110) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.servlet.filters.secure.SecureFilter.processFilter(SecureFilter.java:182) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.servlet.filters.i18n.I18nFilter.processFilter(I18nFilter.java:222) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:126) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:126) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.servlet.filters.etag.ETagFilter.processFilter(ETagFilter.java:45) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.servlet.filters.autologin.AutoLoginFilter.processFilter(AutoLoginFilter.java:254) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123) at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:646) at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:436) at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:374) at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:302) at com.liferay.portal.servlet.filters.virtualhost.VirtualHostFilter.processFilter(VirtualHostFilter.java:311) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:126) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:126) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.tuckey.web.filters.urlrewrite.UrlRewriteFilter.doFilter(UrlRewriteFilter.java:738) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.liferay.portal.kernel.servlet.BaseFilter.processFilter(BaseFilter.java:196) at com.liferay.portal.servlet.filters.threadlocal.ThreadLocalFilter.processFilter(ThreadLocalFilter.java:35) at com.liferay.portal.kernel.servlet.BaseFilter.doFilter(BaseFilter.java:123) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:470) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:857) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:619) I post this in portletfaces JIRA and their forum, no response so far. hope find the solution here, but I guess this bug in portletfaces! thanks

    Read the article

  • ASP.NET Create zip file for download: the compressed zipped folder is invalid or corrupted

    - by Jason Braswell
    string fileName = "test.zip"; string path = "c:\\temp\\"; string fullPath = path + fileName; FileInfo file = new FileInfo(fullPath); Response.Clear(); Response.ClearContent(); Response.ClearHeaders(); Response.Buffer = true; Response.AppendHeader("content-disposition", "attachment; filename=" + fileName ); Response.AppendHeader("content-length", file.Length.ToString()); Response.ContentType = "application/x-compressed"; Response.TransmitFile(fullPath); Response.Flush(); Response.End(); The actual zip file c:\temp\test.zip is good, valid, whatever you want to call it. When I navigate to the directory c:\temp\ and double-click on the test.zip file; it opens right up. My problem seems only to be with the download. The code above executes without any issue. A file download dialog is presented. I can chose to either save or open. If I try to open the file from the dialog, or save it and then open it. I get the following dialog message: The Compressed (zipped) Folder is invalid or corrupted. For Response.ContentType I've tried: application/x-compressed application/x-zip-compressed application/x-gzip-compresse application/octet-stream application/zip The zip file is being created with some prior code (that I'm sure is working fine due to my ability to open the created file directly) using: Ionic.zip http://www.codeplex.com/DotNetZip

    Read the article

  • How to best transfer large payloads of data using wsHttp with WCF with message security

    - by jpierson
    I have a case where I need to transfer large amounts of serialized object graphs (via NetDataContractSerializer) using WCF with wsHttp. I'm using message security and would like to continue to do so. With this setup I would like to transfer a serialized object graph that can sometimes approach around 300MB or so, but when I try to do so I've started seeing an exception of type System.InsufficientMemoryException appear. After a little research, it appears that by default in WCF the result of a service call is contained within a single message holding the serialized data, and this data is buffered on the server until the whole message is completely written. Thus the memory exception is caused by the fact that the server is running out of the memory it is allowed to allocate, because that buffer is full. The two main recommendations I've come across are to use streaming or chunking to solve this problem; however, it is not clear to me what that involves or whether either solution is possible with my current setup (wsHttp/NetDataContractSerializer/message security). So far I understand that streaming would not work with message security, because message encryption and decryption need to work on the whole set of data and not a partial message. Chunking, however, sounds like it might be possible, but it is not clear to me how it would be done with the other constraints I've listed. If anybody could offer some guidance on what solutions are available and how to go about implementing them, I would greatly appreciate it. Related resources: Chunking Channel How to: Enable Streaming Large attachments over WCF Custom Message Encoder Another spotting of InsufficientMemoryException I'm also interested in any type of compression that could be done on this data, but it looks like I would probably be best off doing this at the transport level once I can transition to .NET 4.0, so that the client will automatically support the gzip headers, if I understand this properly.

    Read the article

  • MacPorts - Installing Port, Dependencies Failed

    - by Louis
    I am attempting to install xulrunner on OSX 10.6.3 using the following: sudo port install xulrunner However, I am receiving the following error: nat-10-200-136-126:phoneyc-new $ sudo port install xulrunner ---> Computing dependencies for xulrunner ---> Activating zlib @1.2.5_0 Error: The following dependencies failed to build: gconf dbus-glib glib2 zlib gtk-doc docbook-xml docbook-xml-4.1.2 xmlcatmgr docbook-xml-4.2 docbook-xml-4.3 docbook-xml-4.4 docbook-xml-4.5 docbook-xml-5.0 docbook-xsl gnome-doc-utils iso-codes libxslt libxml2 p5-xml-parser py26-libxml2 python26 bzip2 db46 gdbm openssl readline sqlite3 tk Xft2 fontconfig freetype xrender xorg-libX11 xorg-bigreqsproto xorg-inputproto xorg-kbproto xorg-libXau xorg-xproto xorg-libXdmcp xorg-util-macros xorg-xcmiscproto xorg-xextproto xorg-xf86bigfontproto xorg-xtrans xorg-renderproto tcl xorg-libXScrnSaver xorg-libXext xorg-scrnsaverproto rarian getopt intltool gnome-common p5-getopt-long p5-pathtools p5-scalar-list-utils gtk2 atk cairo libpixman libpng jasper jpeg pango shared-mime-info tiff xorg-libXcomposite xorg-compositeproto xorg-libXfixes xorg-fixesproto xorg-libXcursor xorg-libXdamage xorg-damageproto xorg-libXi xorg-libXinerama xorg-xineramaproto xorg-libXrandr xorg-randrproto orbit2 libidl policykit heimdal lcms libcanberra gstreamer bison flex gzip texinfo lzmautils libvorbis libogg libnotify nss xorg-libXt xorg-libsm xorg-libice Error: Status 1 encountered during processing. Before reporting a bug, first run the command again with the -d flag to get complete output. nat-10-200-136-126:phoneyc-new$ I am unsure how to correct this issue, so any help would be much appreciated!

    Read the article

  • Working with a large data object between ruby processes

    - by Gdeglin
    I have a Ruby hash that reaches approximately 10 megabytes if written to a file using Marshal.dump. After gzip compression it is approximately 500 kilobytes. Iterating through and altering this hash is very fast in Ruby (fractions of a millisecond). Even copying it is extremely fast. The problem is that I need to share the data in this hash between Ruby on Rails processes. In order to do this using the Rails cache (file_store or memcached) I need to Marshal.dump the hash first; however, this incurs a 1000 millisecond delay when serializing it and a 400 millisecond delay when deserializing it. Ideally I would want to be able to save and load this hash from each process in under 100 milliseconds. One idea is to spawn a new Ruby process to hold this hash and provide an API for the other processes to modify or process the data within it, but I want to avoid doing this unless I'm certain that there are no other ways to share this object quickly. Is there a way I can more directly share this hash between processes without needing to serialize or deserialize it? Here is the code I'm using to generate a hash similar to the one I'm working with: @a = [] 0.upto(500) do |r| @a[r] = [] 0.upto(10_000) do |c| if rand(10) == 0 @a[r][c] = 1 # 10% chance of being 1 else @a[r][c] = 0 end end end @c = Marshal.dump(@a) # 1000 milliseconds Marshal.load(@c) # 400 milliseconds Update: Since my original question did not receive many responses, I'm assuming there's no solution as easy as I would have hoped. Presently I'm considering two options: Create a Sinatra application to store this hash with an API to modify/access it. Create a C application to do the same as #1, but a lot faster. The scope of my problem has increased such that the hash may be larger than my original example, so #2 may be necessary. But I have no idea where to start in terms of writing a C application that exposes an appropriate API. A good walkthrough of how best to implement #1 or #2 may receive best answer credit.

    Read the article

  • Using Zend_HTTP_Client instead of CURL

    - by Vincent
    All, I want to use Zend_Http_Client instead of cURL, as there are issues with using cURL on Solaris. I have the following cURL code. How would this code be written if I want to use Zend_Http_Client instead? $ch = curl_init(); $devnull = fopen('/tmp/cookie.txt', 'w'); curl_setopt($ch, CURLOPT_STDERR, $devnull); curl_setopt($ch, CURLOPT_POST, 1); curl_setopt($ch, CURLOPT_URL, $desturl); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false); curl_setopt($ch, CURLOPT_HEADER, false); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 0); curl_setopt($ch, CURLOPT_CONNECTTIMEOUT,800); curl_setopt($ch, CURLOPT_AUTOREFERER, true); curl_setopt($ch, CURLOPT_ENCODING, 'gzip,deflate'); curl_setopt($ch, CURLOPT_POSTFIELDS, $postdata); $retVal = curl_exec($ch); print_r(curl_error($ch)); curl_close($ch); if ($devnull) { fclose($devnull); } Thanks
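    For reference, a rough Zend_Http_Client (ZF1) equivalent might look like the sketch below. This is untested and only approximates the cURL options above; $desturl and $postdata are assumed to be the same variables as in the original code, and the stream-context settings assume the default socket adapter:

        // Untested sketch of an equivalent POST with Zend_Http_Client (ZF1).
        // $desturl and $postdata are assumed to be defined as in the cURL version.
        require_once 'Zend/Http/Client.php';
        require_once 'Zend/Http/Client/Adapter/Socket.php';

        $adapter = new Zend_Http_Client_Adapter_Socket();
        // Roughly mirrors CURLOPT_SSL_VERIFYPEER / CURLOPT_SSL_VERIFYHOST = false.
        $adapter->setStreamContext(array('ssl' => array(
            'verify_peer'       => false,
            'allow_self_signed' => true,
        )));

        $client = new Zend_Http_Client($desturl, array(
            'adapter'      => $adapter,
            'timeout'      => 800,   // stands in for CURLOPT_CONNECTTIMEOUT
            'maxredirects' => 0,     // CURLOPT_FOLLOWLOCATION was 0
        ));
        $client->setCookieJar();                                  // replaces the cookie file
        $client->setHeaders('Accept-Encoding', 'gzip, deflate');  // CURLOPT_ENCODING
        $client->setRawData($postdata);                           // or setParameterPost() for key/value pairs
        $response = $client->request(Zend_Http_Client::POST);
        $retVal   = $response->getBody();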

    Read the article

  • Web scraping etiquette

    - by Ash
    I'm considering writing a simple web scraping application to extract information from a website that does not seem to specifically prohibit this. I've checked for other alternatives (e.g. RSS, a web service) to get this information, but none are available at this stage. I've also developed/maintained a few websites myself, so I realize that if web scraping is done naively/greedily it can slow things down for other users and generally become a nuisance. So, what etiquette is involved in terms of: Number of requests per second/minute/hour. HTTP User-Agent content. HTTP Referer content. HTTP cache settings. Buffer size for larger files/resources. Legalities and licensing issues. Good tools or design approaches to use. Robots.txt: is this relevant for web scraping or just for crawlers/spiders? Compression such as gzip in requests. Update: Found this relevant question on Meta: Etiquette of Screen Scraping StackOverflow. Jeff Atwood's answer has some helpful recommendations. Other related StackOverflow questions: Options for HTML scraping, Legalities of screen scraping.
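    As a rough illustration of a few of those points (rate limiting, an honest User-Agent, and compressed transfers), a throttled fetch loop in PHP might look like the sketch below. The delay, User-Agent string, and URLs are made-up placeholders, not recommendations for any particular site:

        // Sketch of a "polite" fetch loop; the delay, User-Agent and URLs are placeholders.
        $urls = array('http://example.com/page1', 'http://example.com/page2');
        foreach ($urls as $url) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            // Identify the bot honestly and give the site owner a way to reach you.
            curl_setopt($ch, CURLOPT_USERAGENT, 'ExampleScraper/1.0 (+http://example.com/bot-info)');
            // Accept compressed responses to reduce bandwidth on both ends.
            curl_setopt($ch, CURLOPT_ENCODING, 'gzip,deflate');
            curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
            $html = curl_exec($ch);
            curl_close($ch);
            // ... parse $html here ...
            // Pause between requests so the site is not hammered.
            sleep(5);
        }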

    Read the article

  • Java (Tomcat): how to configure a cookieless subdomain to serve static content

    - by Webinator
    One of the tips given by both Google and Yahoo! to speed up web page loading is to configure a cookieless subdomain to serve static content. How do you configure a "cookieless subdomain" using Tomcat in standalone mode (this question is not about how to use Apache to serve static content in a cookieless way, but about how to do it in Tomcat standalone mode)? Note that I don't care about filters supporting If-Modified-Since, nor about filters supporting gzipping: the static content I'm serving is forever cacheable (or its name will change) and it is already compressed data (so gzip would only slow down the transfer). Do I need two different Tomcat webapps (one "cookiefull" and one "cookieless")? Do I need two different servlets (as of now I've got only one dispatcher/controller servlet)? Why would a "regular" link to, say, a static image be requested in a cookiefull way when it is on the same domain as the main webapp, and then be requested in a cookieless way when it is on a subdomain? I don't understand exactly what is going on: is it the browser that decides whether or not to append cookies to the request? If so, why would it not append the cookies to a request for static content on a "cookieless" subdomain? Any example of what is going on behind the scenes is most welcome :)

    Read the article

  • CURL - https - solaris

    - by Vincent
    All, I am receiving the following error when I use PHP to cURL to an https site. Both PHP and the https site are hosted on Solaris. The error occurs intermittently, but frequently. error:80089077:lib(128):func(137):reason(119) This is the cURL code I am using: $ch = curl_init(); $devnull = fopen('/tmp/cookie.txt', 'w'); curl_setopt($ch, CURLOPT_STDERR, $devnull); curl_setopt($ch, CURLOPT_POST, 1); curl_setopt($ch, CURLOPT_URL, $desturl); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false); curl_setopt($ch, CURLOPT_HEADER, false); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 0); curl_setopt($ch, CURLOPT_CONNECTTIMEOUT,800); curl_setopt($ch, CURLOPT_AUTOREFERER, true); curl_setopt($ch, CURLOPT_ENCODING, 'gzip,deflate'); curl_setopt($ch, CURLOPT_POSTFIELDS, $postdata); $retVal = curl_exec($ch); print_r(curl_error($ch)); curl_close($ch); if ($devnull) { fclose($devnull); } How can I fix this error? If it can't be fixed, is there an alternative to cURL?
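    For what it's worth, the options sketched below are sometimes suggested for intermittent OpenSSL handshake failures (pinning the SSL version, and verifying against an explicit CA bundle instead of disabling verification). This is only something to experiment with, not a confirmed fix, and the CA bundle path is a placeholder:

        // Experimental variant for intermittent SSL handshake errors; a sketch to try,
        // not a confirmed fix. The CA bundle path is a placeholder.
        $ch = curl_init($desturl);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $postdata);
        // Pin the SSL version instead of letting the handshake auto-negotiate.
        curl_setopt($ch, CURLOPT_SSLVERSION, 3);
        // Verify against a known CA bundle rather than turning verification off.
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
        curl_setopt($ch, CURLOPT_CAINFO, '/path/to/ca-bundle.crt');
        $retVal = curl_exec($ch);
        if ($retVal === false) {
            error_log('curl error: ' . curl_error($ch));
        }
        curl_close($ch);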

    Read the article

  • Creating customized .dmg files upon download

    - by Marten
    I want to distribute a cross-platform application whose executable file is slightly different depending on the user who downloaded it. This is done by having a placeholder string somewhere in the executable that is replaced with something user-specific upon download. The web server that has to do these string replacements is a Linux machine. For Windows, the executable is not compressed in the installer .exe, so the string replacement is easy. For uncompressed Mac OS X .dmg files, this is also easy. However, .dmg files compressed with either gzip or bzip2 are not so easy. For example, in the latter case, the compressed .dmg is not one big bzip2-compressed disk image; instead it consists of a few different bzip2-compressed parts (with different block sizes) and a plist suffix. Also, decompressing and recompressing the different parts with bzip2 does not reproduce the original data, so I'm guessing Apple passes different parameters to bzip2 than the command-line tool does. Is there a way to generate a compressed .dmg from an uncompressed one on Linux (which does not have hdiutil)? Or does anyone have another suggestion for creating customized applications without pregenerating them? It should work without any input from the user.

    Read the article

  • Changing URI suffix in Joomla when adding child php pages

    - by Sleem
    I have added a new directory in my Joomla website: http://sitedomain.tld/xxx/ Then I added an index.php in that directory; here is the code: define( '_JEXEC', 1 ); define('JPATH_BASE', '..' ); define( 'DS', DIRECTORY_SEPARATOR ); require_once ( '../includes/defines.php' ); require_once ( '../includes/framework.php' ); //JDEBUG ? $_PROFILER->mark( 'afterLoad' ) : null; /** * CREATE THE APPLICATION * * NOTE : */ $mainframe =& JFactory::getApplication('site'); $template_name = $mainframe->getTemplate(); $mainframe->initialise(); JPluginHelper::importPlugin('system'); /** * ROUTE THE APPLICATION * * NOTE : */ $mainframe->route(); // authorization $Itemid = JRequest::getInt( 'Itemid'); $mainframe->authorize($Itemid); // trigger the onAfterRoute events //JDEBUG ? $_PROFILER->mark('afterRoute') : null; //$mainframe->triggerEvent('onAfterRoute'); /** * DISPATCH THE APPLICATION * * NOTE : */ $option = JRequest::getCmd('option'); //$mainframe->dispatch($option); // trigger the onAfterDispatch events //JDEBUG ? $_PROFILER->mark('afterDispatch') : null; //$mainframe->triggerEvent('onAfterDispatch'); /** * RENDER THE APPLICATION * * NOTE : */ $mainframe->render(); /** * RETURN THE RESPONSE */ $document =& JFactory::getDocument(); var_dump($document->getHeadData()); echo JResponse::toString($mainframe->getCfg('gzip')); When I view this page in the browser, all the dynamic links like CSS, JS and images are suffixed by the /xxx/ path, which makes them broken! How can I drop this suffix, or how do I change it from /xxx to / so that the links point to the original file locations? I have tried setting JDocument::setBase and also tried playing with the JURI object, changing its _path and _uri, without any change. Thanks

    Read the article

< Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >