Search Results

Search found 34826 results on 1394 pages for 'valid html'.


  • Multiple Internet connections, multiple networks and split access in Linux

    - by Swapneel Patnekar
    I am having trouble setting up multiple internet connections for split access in Linux. We have 3 internet connections from 3 different ISPs, and we want to configure our Linux gateway machine so that our three internal networks 10.2.1.0/24, 192.168.20.0/24 and 192.168.2.0/24 use ISP1, ISP2 and ISP3 respectively, in a split-access manner. Outlined below are the layout/settings.

    Interfaces of the Linux gateway connected to the routers:

        eth0: 10.1.1.2     <----> 10.1.1.1     (internal interface of ADSL router) [ISP1]
        eth1: 192.168.15.2 <----> 192.168.15.1 (internal interface of 3G router)   [ISP2]
        eth3: 192.168.1.2  <----> 192.168.1.1  (internal interface of ADSL router) [ISP3]

    Kindly note that none of the interfaces on the Linux gateway has a public static IP address. The routers for ISP1 and ISP2 get assigned a dynamic public IP address when connected to the Internet; the router for ISP3 has been assigned a public static IP address.

    Interfaces of the Linux gateway connected to a switch:

        eth4:   10.2.1.1      (LAN interface for ISP1)
        eth4:0  192.168.20.1  (LAN interface for ISP2)
        eth4:1  192.168.2.1   (LAN interface for ISP3)

    eth4:0 and eth4:1 are virtual interfaces, with eth4 being the physically connected interface.

    Based on http://linux-ip.net/html/adv-multi-internet.html I've set the following routes:

        ip route flush table 4
        ip route show table main | grep -Ev ^default | while read ROUTE ; do
            ip route add table 4 $ROUTE
        done
        ip route add table 4 default via 192.168.15.1
        ip rule add fwmark 4 table 4
        ip route flush cache

    Additionally, I am using the following iptables rules to mark and route packets as per the guide mentioned above: http://pastebin.com/KzWHFGJA

    At this point, computers on the 192.168.2.0/24 network are successfully able to reach the Internet through ISP3, but 10.2.1.0/24 and 192.168.20.0/24 are unable to access the Internet through ISP1 and ISP2 respectively. Any inputs will be much appreciated!
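
    For anyone comparing notes: the guide's approach needs one routing table and one fwmark per uplink. A minimal sketch of what the missing pieces might look like; this is not from the original post, and I am assuming table 1/mark 1 for ISP1 (via eth0) alongside the existing table 4/mark 4 for ISP2, so the pastebin rules would need matching entries:

        # ISP1: mirror of the table-4 setup above (table/mark numbers are my choice)
        ip route flush table 1
        ip route show table main | grep -Ev ^default | while read ROUTE ; do
            ip route add table 1 $ROUTE
        done
        ip route add table 1 default via 10.1.1.1
        ip rule add fwmark 1 table 1

        # mark traffic from each LAN so the ip rules pick the right table
        iptables -t mangle -A PREROUTING -s 10.2.1.0/24     -j MARK --set-mark 1
        iptables -t mangle -A PREROUTING -s 192.168.20.0/24 -j MARK --set-mark 4
        ip route flush cache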

    Read the article

  • Setting up Mako with Cherrypy on nginx through FastCGI

    - by xuniluser
    I'm trying to use TemplateLookup from Mako, but can't seem to get it to work. The layout of the test site is:

        /var/www
            main.py
            templates/
                index.html

    Nginx's config is set up as:

        location / {
            fastcgi_pass 127.0.0.1:8080;
            fastcgi_param SERVER_NAME $server_name;
            fastcgi_param SERVER_PORT $server_port;
            fastcgi_param SERVER_PROTOCOL $server_protocol;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
            fastcgi_pass_header Authorization;
            fastcgi_intercept_errors off;
        }

    Cherrypy's config has:

        [global]
        server.socket_port = 8080
        server.thread_pool = 10
        engine.autoreload_on = False
        tools.sessions.on = True

    A simple cherrypy setup in main.py seems to work fine:

        import cherrypy

        class Main:
            @cherrypy.expose
            def index(self):
                return 'Hello'

        cherrypy.tree.mount(Main(), '/', config='config')

    Now, if I modify this to use Mako's template lookup, I get a 500 error. I know it has something to do with serving static files, and I've tried over a dozen different configurations according to the cherrypy wiki, but none of them work. Here's the bare setup I have for the templates:

        import cherrypy
        from mako.template import Template
        from mako.lookup import TemplateLookup

        templates = TemplateLookup(directories=['templates'], output_encoding='utf-8')

        class Main:
            @cherrypy.expose
            def index(self):
                return templates.get_template('index.html').render(msg='hello')

        cherrypy.tree.mount(Main(), '/', config='config')

    Does anyone know how I can get this to work?
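
    One thing worth ruling out (my suggestion, not part of the original question): directories=['templates'] is a relative path, which Mako resolves against the process's current working directory, not against /var/www. If the FastCGI process starts somewhere else, every lookup raises TemplateLookupException and surfaces as a 500. A sketch that anchors the lookup to the script's own directory:

        import os
        from mako.lookup import TemplateLookup

        # resolve templates/ relative to this file, not the process cwd
        BASE_DIR = os.path.dirname(os.path.abspath(__file__))
        templates = TemplateLookup(
            directories=[os.path.join(BASE_DIR, 'templates')],
            output_encoding='utf-8',
        )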

    Read the article

  • How to Upload a file from client to server using OFBIZ?

    - by SIVAKUMAR.J
    Hi all, I'm new to OFBiz, so please forgive any mistakes in my question; I don't know all the terminology yet, and some of the question may be unclear because of that. Please try to understand it and give me a solution at my level, with good examples, because very high-level answers are hard for me to follow.

    My problem: I created a project inside the ofbiz/hot-deploy folder named "productionmgntSystem". Inside the folder "ofbiz\hot-deploy\productionmgntSystem\webapp\productionmgntSystem" I created a template file named "app_details_1.ftl" with the following contents:

        <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
            <title>Insert title here</title>
            <script type="text/javascript">
                function uploadFile() {
                    //alert("Before calling upload.jsp");
                    window.location='<@ofbizUrl>testing_service1</@ofbizUrl>'
                }
            </script>
        </head>
        <!-- <form action="<@ofbizUrl>testing_service1</@ofbizUrl>" enctype="multipart/form-data" name="app_details_frm"> -->
        <form action="<@ofbizUrl>logout1</@ofbizUrl>" enctype="multipart/form-data" name="app_details_frm">
        <center style="height: 299px;">
        <table border="0" style="height: 177px; width: 788px">
            <tr style="height: 115px;">
                <td style="width: 103px;">
                <td style="width: 413px;"><h1>APPLICATION DETAILS</h1>
                <td style="width: 55px;">
            </tr>
            <tr>
                <td style="width: 125px;">Application name :</td>
                <td><input name="app_name_txt" id="txt_1" value=" " /></td>
            </tr>
            <tr>
                <td style="width: 125px;">Excell sheet &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;:</td>
                <td><input type="file" name="filename"/></td>
            </tr>
            <tr>
                <td>
                    <!-- <input type="button" name="logout1_cmd" value="Logout" onclick="logout1()"/> -->
                    <input type="submit" name="logout_cmd" value="logout"/>
                </td>
                <td>
                    <!-- <input type="submit" name="upload_cmd" value="Submit" /> -->
                    <input type="button" name="upload1_cmd" value="Upload" onclick="uploadFile()"/>
                </td>
            </tr>
        </table>
        </center>
        </form>
        </html>

    The relevant parts of "ofbiz\hot-deploy\productionmgntSystem\webapp\productionmgntSystem\WEB-INF\controller.xml" are:

        ...
        <request-map uri="testing_service1">
            <security https="true" auth="true"/>
            <event type="java" path="org.ofbiz.productionmgntSystem.web_app_req.WebServices1" invoke="testingService"/>
            <response name="ok" type="view" value="ok_view"/>
            <response name="exception" type="view" value="exception_view"/>
        </request-map>
        ...
        <view-map name="ok_view" type="ftl" page="ok_view.ftl"/>
        <view-map name="exception_view" type="ftl" page="exception_view.ftl"/>
        ...
    The following is the code in the file "ofbiz\hot-deploy\productionmgntSystem\src\org\ofbiz\productionmgntSystem\web_app_req\WebServices1.java":

        package org.ofbiz.productionmgntSystem.web_app_req;

        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import java.io.DataInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;

        public class WebServices1 {

            public static String testingService(HttpServletRequest request, HttpServletResponse response) {
                String result = "ok";
                System.out.println("testingService - Start");
                String contentType = request.getContentType();
                System.out.println("testingService - contentType : " + contentType);
                if ((contentType != null) && (contentType.indexOf("multipart/form-data") >= 0)) {
                    try {
                        DataInputStream in = new DataInputStream(request.getInputStream());
                        int formDataLength = request.getContentLength();
                        byte dataBytes[] = new byte[formDataLength];
                        int byteRead = 0;
                        int totalBytesRead = 0;
                        // read the whole upload into a byte array
                        while (totalBytesRead < formDataLength) {
                            byteRead = in.read(dataBytes, totalBytesRead, formDataLength);
                            totalBytesRead += byteRead;
                        }
                        String file = new String(dataBytes);
                        // extract the file name from the multipart headers
                        String saveFile = file.substring(file.indexOf("filename=\"") + 10);
                        saveFile = saveFile.substring(0, saveFile.indexOf("\n"));
                        saveFile = saveFile.substring(saveFile.lastIndexOf("\\") + 1, saveFile.indexOf("\""));
                        int lastIndex = contentType.lastIndexOf("=");
                        String boundary = contentType.substring(lastIndex + 1, contentType.length());
                        int pos;
                        // locate the start of the file body
                        pos = file.indexOf("filename=\"");
                        pos = file.indexOf("\n", pos) + 1;
                        pos = file.indexOf("\n", pos) + 1;
                        pos = file.indexOf("\n", pos) + 1;
                        int boundaryLocation = file.indexOf(boundary, pos) - 4;
                        int startPos = ((file.substring(0, pos)).getBytes()).length;
                        int endPos = ((file.substring(0, boundaryLocation)).getBytes()).length;
                        // create a new file with the same name and write the content into it
                        FileOutputStream fileOut = new FileOutputStream("/" + saveFile);
                        fileOut.write(dataBytes, startPos, (endPos - startPos));
                        fileOut.flush();
                        fileOut.close();
                    } catch (IOException ioe) {
                        System.out.println("testingService - caught IOException");
                        return ("exception");
                    } catch (Exception ex) {
                        System.out.println("testingService - caught Exception");
                        return ("exception");
                    }
                } else {
                    System.out.println("testingService - request is not multipart/form-data");
                    result = "exception";
                }
                System.out.println("testingService - End");
                return (result);
            }
        }

    I want to upload a file to the server: the file comes from the user via the <input type="file"> tag in "app_details_1.ftl" and should be written to the server by the testingService(HttpServletRequest, HttpServletResponse) method of the class WebServices1. But the file is never uploaded. Please give me a good solution for uploading a file to the server.

    Thanks & Regards,
    Sivakumar.J

    Read the article

  • Is it worthwhile to block malicious crawlers via iptables?

    - by EarthMind
    I periodically check my server logs and I notice a lot of crawlers searching for the locations of phpMyAdmin, Zen Cart, RoundCube, administrator sections and other sensitive data. Then there are also crawlers under the name "Morfeus Fucking Scanner" or "Morfeus Strikes Again" searching for vulnerabilities in my PHP scripts, and crawlers that perform strange (XSS?) GET requests such as:

        GET /static/)self.html(selector?jQuery(
        GET /static/]||!jQuery.support.htmlSerialize&&[1,
        GET /static/);display=elem.css(
        GET /static/.*.
        GET /static/);jQuery.removeData(elem,

    Until now I've always stored these IPs manually and blocked them using iptables. But as these requests are only performed a small number of times from the same IP, I'm having my doubts about whether blocking them provides any security advantage.

    I'd like to know whether it does any good to block these crawlers in the firewall, and if so, whether there's a (not too complex) way of doing this automatically. And if it's wasted effort, perhaps because these requests come from new IPs after a while, I'd appreciate it if anyone can elaborate on this and maybe suggest more efficient ways of denying/restricting malicious crawler access.

    FYI: I'm also already blocking w00tw00t.at.ISC.SANS.DFind:) crawls using these instructions: http://spamcleaner.org/en/misc/w00tw00t.html
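
    One common way to automate exactly this (my suggestion, not from the original question) is fail2ban, which tails the web server log and inserts and expires iptables rules for you, so short-lived scanner IPs age out on their own. A rough sketch, assuming an Apache-style access log; the paths, jail name and regex are illustrative and would need tuning:

        # /etc/fail2ban/jail.local
        [apache-probes]
        enabled  = true
        port     = http,https
        filter   = apache-probes
        logpath  = /var/log/apache2/access.log
        maxretry = 2
        bantime  = 86400

        # /etc/fail2ban/filter.d/apache-probes.conf
        [Definition]
        failregex = ^<HOST> .* "GET /(phpmyadmin|pma|administrator|roundcube)[^"]*"
        ignoreregex =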

    Read the article

  • Drobo FS vs Lime Technology unRAID vs FreeNAS

    - by elluca
    I had already decided to buy a Drobo FS until I found these two reviews:

        http://www.digitalversus.com/data-robotics-drobo-fs-p889_9543_487.html
        http://www.digitalversus.com/lime-technology-unraid-p889_8992_473.html

    The two cons against the Drobo for me are loudness and price. What disadvantages does the unRAID setup have against the Drobo FS? Does it have the same ease of use: swapping drives on the go, simply extending capacity by plugging in new drives, notifying me of drive errors, disk failure protection, dynamic space for "partitions"? Is its effective capacity better or worse? Which is more secure? Am I able to simply replace a bad drive with a new one on unRAID? And what happens if my PC fails, say the CPU overheats? Since I have a complete PC which is going to be replaced, I would only have to pay for the software to use unRAID.

    I am going to use the NAS, with a MacBook Pro, for:

        - a music library (how well does it integrate with iTunes?)
        - a picture library
        - a movie library
        - development (I need to be able to use Time Machine)

    My current disks: 2x 500 GB, 1x 1.5 TB, 1x 2 TB. On a Drobo FS I would have 2.26 TB of space; what would it be on unRAID? And is FreeNAS also an alternative?

    Read the article

  • Nginx Slower than Apache??

    - by ichilton
    Hi, I've just set up two identical Rackspace Cloud instances and am doing some comparisons and benchmarks between Apache and Nginx. I'm testing with a 3.4k png file, initially on 512 MB server instances but now on 1024 MB instances. I'm very surprised to see that whatever I try, Apache seems to consistently outperform Nginx. What am I doing wrong?

    Nginx:

        Server Software:        nginx/0.8.54
        Server Port:            80
        Document Length:        3400 bytes
        Concurrency Level:      100
        Time taken for tests:   2.320 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      3612000 bytes
        HTML transferred:       3400000 bytes
        Requests per second:    431.01 [#/sec] (mean)
        Time per request:       232.014 [ms] (mean)
        Time per request:       2.320 [ms] (mean, across all concurrent requests)
        Transfer rate:          1520.31 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0   11   15.7      3     120
        Processing:     1   35   76.9     20    1674
        Waiting:        1   31   73.0     19    1674
        Total:          1   46   79.1     21    1693

        Percentage of the requests served within a certain time (ms)
          50%     21
          66%     39
          75%     40
          80%     40
          90%     98
          95%    136
          98%    269
          99%    334
         100%   1693 (longest request)

    And Apache:

        Server Software:        Apache/2.2.16
        Server Port:            80
        Document Length:        3400 bytes
        Concurrency Level:      100
        Time taken for tests:   1.346 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      3647000 bytes
        HTML transferred:       3400000 bytes
        Requests per second:    742.90 [#/sec] (mean)
        Time per request:       134.608 [ms] (mean)
        Time per request:       1.346 [ms] (mean, across all concurrent requests)
        Transfer rate:          2645.85 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    1    3.7      0      27
        Processing:     0    3    6.2      1      29
        Waiting:        0    2    5.0      1      29
        Total:          1    4    7.0      1      29

        Percentage of the requests served within a certain time (ms)
          50%      1
          66%      1
          75%      1
          80%      1
          90%     17
          95%     19
          98%     26
          99%     27
         100%     29 (longest request)

    I'm currently using worker_processes 4; and worker_connections 1024; but I've tried and benchmarked different values and see the same behaviour with all of them. I just can't get it to perform as well as Apache, and from what I've read previously, I'm shocked about this! Can anyone give any advice?

    Thanks, Ian
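
    For a small static file like this, a few nginx settings are worth verifying before drawing conclusions. A hedged sketch of common static-file tuning (defaults differ between versions, so treat these as things to test rather than a guaranteed fix; also make sure both servers are benchmarked identically, e.g. ab -k only rewards a server with keepalive enabled):

        worker_processes  4;

        events {
            worker_connections  1024;
        }

        http {
            sendfile           on;    # hand the png to the kernel instead of copying
            tcp_nopush         on;
            keepalive_timeout  65;
            open_file_cache    max=1000 inactive=20s;
            access_log         off;   # per-request logging can dominate at this rate
        }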

    Read the article

  • django : Serving static files through nginx

    - by PlanetUnknown
    I'm using apache+mod_wsgi for django, and all css/js/images are served through nginx. For some odd reason, when others (friends/colleagues) try accessing the site, jquery/css is not getting loaded for them, hence the page looks jumbled up.

    My html files use code like this:

        <link rel="stylesheet" type="text/css" href="http://x.x.x.x:8000/css/custom.css"/>
        <script type="text/javascript" src="http://1x.x.x.x:8000/js/custom.js"></script>

    My nginx configuration in sites-available is like this:

        server {
            listen 8000;
            server_name localhost;

            access_log /var/log/nginx/aa8000.access.log;
            error_log /var/log/nginx/aa8000.error.log;

            location / {
                index index.html index.htm;
            }

            location /static/ {
                autoindex on;
                root /opt/aa/webroot/;
            }
        }

    There is a directory /opt/aa/webroot/static/ which has the corresponding css & js directories. The odd thing is that the pages show fine when I access them: I have cleared my cache etc., but the page loads fine for me from various browsers. Also, I don't see any 404 errors in the nginx log files. Actually, the logs for nginx are not getting refreshed at all. I restarted the nginx server as root; is that incorrect? There is a user www-data defined in the nginx configuration file.

    Any pointers would be great.
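
    One mismatch stands out (my reading, not a confirmed diagnosis): the assets are requested as /css/... and /js/..., but the only static location nginx knows about is /static/, so those URLs fall into location /. The fact that the logs never update also suggests your own requests may not be reaching this server block at all (a cache or a different vhost/port). If the files really live under /opt/aa/webroot/static/, a sketch of one way to line things up, matching any Host header rather than just localhost:

        server {
            listen 8000;
            server_name _;                   # match any host, not only localhost

            # /css/... and /js/... map to /opt/aa/webroot/static/css/... etc.
            location ~ ^/(css|js)/ {
                root /opt/aa/webroot/static;
            }

            location /static/ {
                autoindex on;
                root /opt/aa/webroot/;
            }
        }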

    Read the article

  • VirtualBox problems writing to shared folders (Guest Additions installed)

    - by vincent
    I am trying to set up a shared folder from the host (Ubuntu 10.10) to mount on a virtualized CentOS 5.5 with Guest Additions (4.0.0) installed (Guest Additions features are working, i.e. seamless mode etc.). I am able to successfully mount the share with:

        mount -t vboxsf -o rw,exec,uid=48,gid=48 sf_html /var/www/html/

    (uid and gid belong to the apache user/group). The only problem is that once it's mounted and I try to write/create directories and files, I get the following:

        mkdir: cannot create directory `/var/www/html/test': Protocol error

    I am using the proprietary version of VirtualBox, version 4.0.0 r69151. Has anyone had the same problem and been able to fix it, or has any idea how to potentially fix this?

    Another question: the reason for setting this up is this. Our production servers are on CentOS 5.5, however I am a great fan of Ubuntu and would like to develop on Ubuntu rather than CentOS. To stay as close to the production environment as possible, I would like to virtualize CentOS as a web server and use the shared folder as the web root. Does anyone know whether this is a bad idea? Has anyone successfully been able to set this up?

    Thanks guys, your help is always much appreciated, and if you need any more information please let me know.
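
    For what it's worth, once the mount itself behaves, the equivalent /etc/fstab entry (same options as the manual mount above; the share name sf_html is taken from the question) makes it persistent across reboots:

        # /etc/fstab -- VirtualBox shared folder as the web root
        sf_html  /var/www/html  vboxsf  rw,exec,uid=48,gid=48  0  0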

    Read the article

  • How do I create a "here document" within a shell function?

    - by BenU
    I'm working my way through William Shotts Jr.'s great The Linux Command Line on my Mac OS X 10.7.5 system. 90% of the Linux that Shotts covers is close enough to Darwin that I can figure out, or GTEM to figure out, what's going on.

    I've made it to chapter 27 on "Writing Shell Scripts" and am getting hung up creating "here documents" within a function. I get a syntax error: unexpected end of file error when I include the following function:

        report_uptime () {
            cat <<- _EOF_
                <H2>System Uptime</H2>
                <PRE>$(uptime)</PRE>
            _EOF_
            return
        }

    The error goes away if I use the following function placeholder:

        report_uptime () {
            return
        }

    Also, elsewhere in the script, outside of a function, I use the cat << _EOF_ format to create a "here document" with no trouble:

        cat << _EOF_
        <HTML>
            <HEAD>
                <TITLE>$TITLE</TITLE>
            </HEAD>
            <BODY>
                <H1>$TITLE</H1>
                <P>$TIME_STAMP</P>
                $(report_uptime)
                $(report_disk_space)
                $(report_home_space)
            </BODY>
        </HTML>
        _EOF_

    If anyone has any idea what I'm doing wrong I would be grateful!
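
    A likely culprit (the classic heredoc gotcha; I can't see the invisible whitespace in your actual file, so this is a guess): with <<- the shell strips leading tab characters only. If the function body is indented with spaces, the closing _EOF_ line is never recognized as the terminator, the heredoc swallows the rest of the script, and you get exactly "unexpected end of file". Using plain << inside the function and starting the terminator at the beginning of its line sidesteps the issue entirely:

        report_uptime () {
            # plain << has no tab-stripping rules to trip over; the terminator
            # line below must begin at the start of the line
            cat << _EOF_
        <H2>System Uptime</H2>
        <PRE>$(uptime)</PRE>
        _EOF_
            return
        }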

    Read the article

  • HTTP responses curl and wget different results

    - by Fab
    To check the HTTP response headers for a set of urls, I send the following request headers with curl:

        foreach ( $urls as $url ) {
            // Setup headers - I used the same headers from Firefox version 2.0.0.6
            $header[] = "Accept: text/xml,application/xml,application/xhtml+xml,";
            $header[] = "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
            $header[] = "Cache-Control: max-age=0";
            $header[] = "Connection: keep-alive";
            $header[] = "Keep-Alive: 300";
            $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
            $header[] = "Accept-Language: en-us,en;q=0.5";
            $header[] = "Pragma: "; // browsers keep this blank.

            curl_setopt( $ch, CURLOPT_URL, $url );
            curl_setopt( $ch, CURLOPT_USERAGENT, 'Googlebot/2.1 (+http://www.google.com/bot.html)');
            curl_setopt( $ch, CURLOPT_HTTPHEADER, $header);
            curl_setopt( $ch, CURLOPT_REFERER, 'http://www.google.com');
            curl_setopt( $ch, CURLOPT_HEADER, true );
            curl_setopt( $ch, CURLOPT_NOBODY, true );
            curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
            curl_setopt( $ch, CURLOPT_FOLLOWLOCATION, true );
            curl_setopt( $ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY );
            curl_setopt( $ch, CURLOPT_TIMEOUT, 10 ); // timeout 10 seconds
        }

    Sometimes I receive 200 OK, which is good; other times 301, 302 or 307, which I consider good as well. But other times I receive weird statuses such as 406, 500 or 504, which should identify an invalid url, yet when I open them in the browser they are fine. For example, the script returns:

        http://www.awe.co.uk/  =>  HTTP/1.1 406 Not Acceptable

    while wget returns:

        wget http://www.awe.co.uk/
        --2011-06-23 15:26:26--  http://www.awe.co.uk/
        Resolving www.awe.co.uk... 77.73.123.140
        Connecting to www.awe.co.uk|77.73.123.140|:80... connected.
        HTTP request sent, awaiting response... 200 OK

    Does anyone know which request header I am missing or adding in excess?
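
    One concrete problem is visible in the snippet (my observation, worth testing): the Accept header is split across two array elements, so curl sends a truncated "Accept: ...," line followed by a colon-less junk line, and 406 Not Acceptable is precisely how some servers respond to a malformed or unsatisfiable Accept header. Note also that $header is never reset inside the loop, so every iteration re-appends the same lines, and spoofing Googlebot's User-Agent can trip anti-bot filters into 4xx/5xx answers as well. A sketch of the corrected array, one element per complete header line:

        $header = array();
        $header[] = "Accept: text/xml,application/xml,application/xhtml+xml,"
                  . "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
        $header[] = "Cache-Control: max-age=0";
        $header[] = "Connection: keep-alive";
        $header[] = "Keep-Alive: 300";
        $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
        $header[] = "Accept-Language: en-us,en;q=0.5";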

    Read the article

  • Git fails to push with error 'out of memory'

    - by jwir3
    I'm using gitosis on a server that has a low amount of memory, specifically around 512 MB. When I try to push a large folder (it happens to be a backup from an Android phone), I get:

        me@corellia:~/Configs/$ git push origin master
        Counting objects: 18, done.
        Delta compression using up to 8 threads.
        Compressing objects: 100% (14/14), done.
        fatal: Out of memory, malloc failed MiB | 685 KiB/s
        error: pack-objects died of signal 13
        error: failed to push some refs to 'git@dagobah:Configs'

    I've been searching the web, and notably found:

        http://www.mail-archive.com/[email protected]/msg01747.html
        http://git.661346.n2.nabble.com/Out-of-memory-error-during-git-push-td5443705.html

    but these don't seem to help me, because I am not actually out of memory when I push. When I run top during the push, I get:

        24262 git  18  0  16204  6084  1096 S  2  1.2  0:00.12 git-unpack-obje

    Also, during the push, if I run head /proc/meminfo, I get:

        MemTotal:     524288 kB
        MemFree:      289408 kB
        Buffers:           0 kB
        Cached:            0 kB
        SwapCached:        0 kB
        Active:            0 kB
        Inactive:          0 kB
        HighTotal:         0 kB
        HighFree:          0 kB
        LowTotal:     524288 kB

    So it seems that I have enough memory free, but it's still failing, and I'm not enough of a git guru to figure out what is happening. I would appreciate it if someone could give me a hand here and tell me what could be causing this problem, and what I can do to solve it. Thanks!

    EDIT: The output of running the ulimit -a command:

        scottj@dagobah:~$ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 204800
        max locked memory       (kbytes, -l) 32
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 10240
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 204800
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited
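
    A common workaround on low-memory git servers (my suggestion; the numbers are starting points to tune, not known-good values): cap how much memory pack-objects may use. Run these as the git/gitosis user on the server, and possibly on the client too, since "Delta compression using up to 8 threads" hints that 8 threads' worth of delta windows are being held at once:

        git config --global pack.windowMemory   "32m"
        git config --global pack.packSizeLimit  "32m"
        git config --global pack.deltaCacheSize "16m"
        git config --global pack.threads        "1"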

    Read the article

  • Raid 1 array won't assemble after power outage. How do I fix this ext4 mirror?

    - by Forkrul Assail
    Two ext4 drives in RAID 1 with mdadm won't reassemble after the power went out for an extended period (UPS drained). After turning the machine back on, mdadm said that the array was degraded, after which it took about 2 days for a full resync, which completed without problems. On trying to remount the array I get:

        mount: you must specify the filesystem type

    The relevant line of /etc/fstab:

        /dev/md127  /media/mediapool  ext4  defaults  0  0

    dmesg | tail (on trying to mount) says:

        [ 1050.818782] EXT3-fs (md127): error: can't find ext3 filesystem on dev md127.
        [ 1050.849214] EXT4-fs (md127): VFS: Can't find ext4 filesystem
        [ 1050.944781] FAT-fs (md127): invalid media value (0x00)
        [ 1050.944782] FAT-fs (md127): Can't find a valid FAT filesystem
        [ 1058.272787] EXT2-fs (md127): error: can't find an ext2 filesystem on dev md127.

    cat /proc/mdstat says:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md127 : active (auto-read-only) raid1 sdj[2] sdi[0]
              2930135360 blocks super 1.2 [2/2] [UU]

        unused devices: <none>

    fsck /dev/md127 says:

        fsck from util-linux 2.20.1
        e2fsck 1.42 (29-Nov-2011)
        fsck.ext2: Superblock invalid, trying backup blocks...
        fsck.ext2: Bad magic number in super-block while trying to open /dev/md127

        The superblock could not be read or does not describe a correct ext2
        filesystem.  If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>

    mdadm -E /dev/sdi gives me:

        /dev/sdi:
                  Magic : a92b4efc
                Version : 1.2
            Feature Map : 0x0
             Array UUID : 37ac1824:eb8a21f6:bd5afd6d:96da6394
                   Name : sojourn:33
          Creation Time : Sat Nov 10 10:43:52 2012
             Raid Level : raid1
           Raid Devices : 2
         Avail Dev Size : 5860271016 (2794.40 GiB 3000.46 GB)
             Array Size : 2930135360 (2794.39 GiB 3000.46 GB)
          Used Dev Size : 5860270720 (2794.39 GiB 3000.46 GB)
            Data Offset : 262144 sectors
           Super Offset : 8 sectors
                  State : clean
            Device UUID : 3e6e9a4f:6c07ab3d:22d47fce:13cecfd0
            Update Time : Tue Nov 13 20:34:18 2012
               Checksum : f7d10db9 - correct
                 Events : 27
            Device Role : Active device 0
            Array State : AA ('A' == active, '.' == missing)

    and mdadm -E /dev/sdj:

        /dev/sdj:
                  Magic : a92b4efc
                Version : 1.2
            Feature Map : 0x0
             Array UUID : 37ac1824:eb8a21f6:bd5afd6d:96da6394
                   Name : sojourn:33
          Creation Time : Sat Nov 10 10:43:52 2012
             Raid Level : raid1
           Raid Devices : 2
         Avail Dev Size : 5860271016 (2794.40 GiB 3000.46 GB)
             Array Size : 2930135360 (2794.39 GiB 3000.46 GB)
          Used Dev Size : 5860270720 (2794.39 GiB 3000.46 GB)
            Data Offset : 262144 sectors
           Super Offset : 8 sectors
                  State : clean
            Device UUID : 7fb84af4:e9295f7b:ede61f27:bec0cb57
            Update Time : Tue Nov 13 20:34:18 2012
               Checksum : b9d17fef - correct
                 Events : 27
            Device Role : Active device 1
            Array State : AA ('A' == active, '.' == missing)

    dmesg | tail also shows:

        [   61.785866] init: alsa-restore main process (2736) terminated with status 99
        [   68.433548] eth0: no IPv6 routers present
        [  534.142511] EXT4-fs (sdi): ext4_check_descriptors: Block bitmap for group 0 not in group (block 2838187772)!
        [  534.142518] EXT4-fs (sdi): group descriptors corrupted!
        [  546.418780] EXT2-fs (sdi): error: couldn't mount because of unsupported optional features (240)
        [  549.654127] EXT3-fs (sdi): error: couldn't mount because of unsupported optional features (240)

    Since this is RAID 1, it was suggested that I try to mount or fsck the drives separately. After a long fsck on one drive, it ended with this as the tail:

        Illegal double indirect block (2298566437) in inode 39717736.  CLEARED.
        Illegal block #4231180 (2611866932) in inode 39717736.  CLEARED.
        Error storing directory block information (inode=39717736, block=0, num=1092368):
            Memory allocation failed
        Recreate journal? yes

        Creating journal (32768 blocks):  Done.

        *** journal has been re-created - filesystem is now ext3 again ***

    The drive however still doesn't want to mount; dmesg | tail:

        [  170.674659] md: export_rdev(sdc)
        [  170.675152] md: export_rdev(sdc)
        [  195.275288] md: export_rdev(sdc)
        [  195.275876] md: export_rdev(sdc)
        [ 1338.540092] CE: hpet increased min_delta_ns to 30169 nsec
        [26125.734105] EXT4-fs (sdc): ext4_check_descriptors: Checksum for group 0 failed (43502!=37987)
        [26125.734115] EXT4-fs (sdc): group descriptors corrupted!
        [26182.325371] EXT3-fs (sdc): error: couldn't mount because of unsupported optional features (240)
        [27083.316519] EXT4-fs (sdc): ext4_check_descriptors: Checksum for group 0 failed (43502!=37987)
        [27083.316530] EXT4-fs (sdc): group descriptors corrupted!

    Please help me fix this. I never in my wildest nightmares thought a complete mirror would die this badly. Am I missing something? Any suggestions on fixing this? Could someone explain why it would resync after the power outage, only to seemingly nuke the drive? Thanks for reading; any help is much appreciated. I've tried everything I can think of, including booting and filesystem-checking with SystemRescue and Ubuntu live discs.
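
    Two hedged observations for anyone in a similar spot (not a guaranteed recovery path). First, the fsck and mount attempts above were pointed at raw member disks (/dev/sdi, /dev/sdc) rather than at the assembled array: with a 1.2 superblock the filesystem starts 262144 sectors into each disk, so a bare member looks like garbage to fsck, and letting it "repair" that view can do real damage. Second, further checks should therefore target /dev/md127 only, for example via a backup superblock as the e2fsck message itself suggests:

        # list where the backup superblocks would be, WITHOUT writing anything
        mke2fs -n /dev/md127

        # then, ideally against a dd image of the array rather than the array
        # itself, try e2fsck with one of the reported backups (32768 is typical
        # for 4k-block filesystems)
        e2fsck -b 32768 /dev/md127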

    Read the article

  • Shrink a Volume Group in LVM / Linux in order to install Windows on the freed space

    - by Stephan Kristyn
    I have a Volume Group with unused space. This 40 GB should become its own entity so that I can install Microsoft Windows 7 on it. I do not have extra space on the drive; that is why I want to shrink the VG.

    The LVG berta resides on sda2 and consists of:

        lv_root
        lv_swap
        unused_space

    I want it to become lv_root and lv_swap, with a separate entity made out of unused_space, and Windows 7 installed on that entity. I do not understand why Linux made simple things complicated; I utterly hate LVM and think it's absolute bollocks.

    Useful source: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-system-config-lvm.html

    Edit: I found the answer. The necessary steps show how complicated LVM really is; in my opinion it is best to avoid LVM until pvresize matures as promised in its man page. Answer: http://fedorasolved.org/Members/zcat/shrink-lvm-for-new-partition

    If you run into problems when you want to remove lv_swap, even in rescue mode, then try:

        swapoff /dev/vg_1/lv_swap
        lvchange -an /dev/vg_1/lv_swap
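
    For readers who just want the shape of the procedure, here is a hedged outline of the shrink, not a tested recipe: every step can destroy data if the sizes are wrong, so take a backup first and run it from a rescue/live system, since lv_root cannot be shrunk while mounted. All sizes below are examples only.

        # 1. shrink the filesystem inside lv_root, leaving slack below the LV size
        e2fsck -f /dev/berta/lv_root
        resize2fs /dev/berta/lv_root 20G

        # 2. shrink the LV down to match the filesystem
        lvreduce -L 20G /dev/berta/lv_root

        # 3. shrink the PV so free space collects at the end of sda2
        pvresize --setphysicalvolumesize 25G /dev/sda2   # must cover all LVs

        # 4. shrink the sda2 partition with fdisk/parted and create a new
        #    partition in the freed space for the Windows installer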

    Read the article

  • nginx error page and internal directives not working as expected

    - by Romain
    I'd like to set up my nginx server to return a specific error page on HTTP 50x status codes, and I'd like this page to be unavailable to a direct request from users (e.g. http://mysite/internalerror). For that I'm using nginx's internal directive, but I must be missing something: when I put that directive on my /internalerror location, nginx returns a custom 404 error (which isn't even my own 404 error page) when a page crashes. So, to summarize, here's what seems to happen:

        1. GET /Home
        2. nginx passes the query to Python
        3. I'm simulating an application bug to get the 502 error code
        4. nginx tries to return /InternalError from its error_page rule
        5. because of the internal rule, it finally falls back to a custom 404 error

    Why? The documentation says error_page directives are not affected by internal: http://wiki.nginx.org/HttpCoreModule#internal

    Here's an extract from nginx.conf with a few comments to point things out:

        error_page 404 /NotFound;
        error_page 500 502 503 504 =500 /InternalError;  # HTTP 500 error page declaration

        location / {
            try_files /Maintenance.html $uri @pythonbackend;
        }

        location @pythonbackend {
            include uwsgi_params;
            uwsgi_pass unix:///tmp/uwsgi.sock;
        }

        location ~* \.(py|pyc)$ {
            # This internal location works OK and returns my own 404 error page
            internal;
        }

        location /__Maintenance.html {
            # This one also works fine
            internal;
        }

        location ~* /internalerror {
            # This one doesn't work and returns nginx's 404 error page
            # when I trigger an error somewhere on my site
            internal;
        }

    Thanks very much for your help!!
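
    A hedged guess at the 404 (I can't verify it against your full config): internal only restricts who may enter the location; it doesn't give the location anything to serve. As written, the ~* /internalerror block defines no root, proxy or file, so even the legitimate internal redirect from error_page finds nothing and degrades to 404. A minimal shape that should work, using an exact-match location backed by a real file:

        error_page 500 502 503 504 =500 /InternalError;

        location = /InternalError {
            internal;                    # direct client requests still get 404
            root /var/www/errorpages;    # assumption: this directory contains
                                         # a file literally named "InternalError"
        }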

    Read the article

  • Is browser based wireless authentication secure?

    - by johnnyb10
    Our wireless network previously used a preshared WPA/WPA2 key for guest access, which allows them access to the Internet. (Our employee access uses 802.1x authentication). We just had a wireless consultant come in to fix various wireless issues we had; one of the things he wound up doing was changing our guest access to HTML-based instead of the preshared key. So now that guest SSID is open (instead of using WPA) and users are presented with a browser-based login screen before they can get on the Internet. My question is: Is this an acceptable method from a security standpoint? I would assume that having an open network is necessarily a bad idea, but the consultant said that the traffic is still using PEAP, so it's secure. I didn't get a chance to question him further on this because we ran late and a bunch of other things came up. Please let me know what you think about the advantages/disadvantages of using HTML-based wireless authentication as opposed to using a preshared WPA key. Thanks...

    Read the article

  • Network to network VPN Centos 5

    - by Atul Kulkarni
    I am trying to follow "http://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-vpn.html#s1-ipsec-net2net" and have come up with the following.

    On the local router machine, my ifcfg-ipsec0:

        ONBOOT=yes
        IKE_METHOD=PSK
        DSTGW=10.5.27.1
        SRCGW=10.6.159.1
        DSTNET=10.5.27.0/25
        SRCNET=10.6.159.0/24
        DST=205.X.X.X
        TYPE=IPSEC

    I have the /etc/sysconfig/network-scripts/keys-ipsec0 file in place. On the remote machine in the cloud I have /etc/sysconfig/network-scripts/ifcfg-ipsec1:

        TYPE=IPSEC
        ONBOOT=yes
        IKE_METHOD=PSK
        SRCGW=10.5.27.1
        DSTGW=10.6.159.1
        SRCNET=10.5.27.124/25
        DSTNET=10.6.159.0/24
        DST=38.x.x.x

    with its respective /etc/sysconfig/network-scripts/keys-ipsec1 file. The DST in both cases are NAT'd external IPs; is that a problem? I have made the changes for port forwarding as well. When I try to bring the interfaces up, I get the output "RTNETLINK answers: Invalid argument". I am confused now and don't know what more to do. Is there any place I can dig up which parameters were wrong? I really appreciate any help I can get.

    Thanks and Regards,
    Atul.
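
    One hedged observation rather than a confirmed fix: on the remote side, SRCNET=10.5.27.124/25 combines a host address with a /25 prefix. The network address for that range is 10.5.27.0/25, it should mirror the local DSTNET exactly, and the initscripts handing a host-with-prefix to ip/setkey is a plausible source of "RTNETLINK answers: Invalid argument". A sketch of the corrected stanza (NAT-wise, UDP 500 and 4500/ESP must also be forwarded to both endpoints):

        # remote: /etc/sysconfig/network-scripts/ifcfg-ipsec1
        TYPE=IPSEC
        ONBOOT=yes
        IKE_METHOD=PSK
        SRCGW=10.5.27.1
        DSTGW=10.6.159.1
        SRCNET=10.5.27.0/25      # was 10.5.27.124/25; host bits must be zero
        DSTNET=10.6.159.0/24
        DST=38.x.x.x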

    Read the article

  • How can I get a list of directories with ack?

    - by KPthunder
    I have a directory listing as follows (given by ls -la):

        total 8
        drwxr-xr-x   6 <user> <group>  204 Oct 18 12:13 .
        drwxr-xr-x   7 <user> <group>  238 Oct 18 11:29 ..
        drwxr-xr-x  14 <user> <group>  476 Oct 18 12:31 .git
        -rw-r--r--   1 <user> <group>  601 Oct 18 12:03 index.html
        drwxr-xr-x   2 <user> <group>   68 Oct 18 12:13 test
        drwxr-xr-x   2 <user> <group>   68 Oct 18 12:13 test2

    Running ack . -f prints out the files in the directory:

        index.html

    How can I get ack to print out the directories in the directory? I want to ignore the .git directory (which I understand is default behavior for ack). On that note, how can I ignore certain directories? I am using ack 1.9.6 on Mac OS X 10.8.2.
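
    As far as I know, ack's file-listing mode (-f) deliberately walks files only, so listing directories is really a job for find. A sketch that should behave the same on GNU find and the BSD find shipped with OS X:

        # list directories under the current tree, pruning .git entirely
        find . -type d -name .git -prune -o -type d -print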

    Read the article

  • Choice and setup of version control

    - by Peter M
    I am about to set up a new laptop and in the process transition to a new version control system as part of a general cleanup. Currently I use a centralized version control system (yes it is VSS, and yes I know all the pros and cons of that system, but as a single-user system it works well for me). I have very few requirements for a new system and I am free to choose among any of the current mainstream players, but cost constraints will push me towards OSS. Some of my requirements are:

        - Runs on a single machine (i.e. the laptop in question) under Windows
        - I am not sharing things with other developers or workers; this is more for my own historical benefit
        - I want to version source code, documentation and binary files
        - I have a large hierarchy of projects that are unrelated (see below)
        - I have files within the hierarchy that don't need to be controlled (but could be)
        - Some projects use Visual Studio, so some integration there could be nice
        - There could be some sharing of files between jobs
        - I generally only need a small amount of branching in code files

    The directory hierarchy that I have at the moment is somewhat like:

        Root
        |
        |--Customer #1
        |  |
        |  |--Job #1
        |  |  |
        |  |  |--Data files received from Customer for Job (not controlled)
        |  |  |--Documentation files (controlled)
        |  |  |--Project information files (not controlled - but could be)
        |  |  |--Software Project Files (controlled)
        |  |  |--Scratch dir for job (not controlled)
        |  |
        |  |--Job #2
        |     (same structure as above)
        |
        |--Customer #2
        |  |..
        |
        |--Customer #n
        |..

    Currently I have about 22 customers with differing numbers of projects underneath them. At the moment I have a single VSS repository based at the root of the directory structure. If I kept with a centralized system (i.e. SVN) I believe that I should keep the same approach and continue with a single repository based at the root dir. Is this a valid approach?

    However, if I move to a distributed tool then I am unsure of how I should handle the situation. My initial guess is that I should not have a repository based on the root of my entire directory structure, but that is a guess, so I really don't know how valid it is. Should I pitch a distributed approach at the Root, Customer, Job or sub-Job directory level?

    Also, what I am not clear on with distributed tools (and perhaps with SVN as well) is whether I can branch parts of a repository. For example, I can see branching source code in software projects as being useful, but branching my documentation as not being useful. So if I pitch a repository at the Job level, can I branch just the Software Project Files? Or would all files in that Job be branched?

    Every time I look at distributed tools I get a nagging feeling that they are not suited to my style of setup. I am uncomfortable with the idea of having to manually set up something like 50 to 80 separate repositories (if I pitch at the Job level, or 20+ at the Customer level) within my directory hierarchy. This feeling also extends to having all those repositories scattered around; however, I do have a backup strategy that I trust, so this latter feeling is pretty well unfounded.

    So what advice can you all give me? Thanks in advance!

    Read the article

  • How to configure IIS for SVG and web testing with Visual Studio?

    - by macias
    Let's say I have a simple web page with an svg image in it:

        <img src="foobar.svg" alt="not working" />

    If I make this page a static html page and view it directly, the svg is displayed. If I type the address of the svg, it is displayed. But when I make this an .aspx page and launch it dynamically from Visual Studio, I get the alt text. If I type the address of the svg (from localhost, not as a local file), the browser tries to download it instead of displaying it. I already defined the mime type in IIS (for the entire server: "image/svg+xml") and restarted IIS, with the same effect as before.

    Question: what should I do next?

    Update: WireShark won't work (it is in the documentation), and I also tried RawCap, but it cannot trace my connection (odd). Luckily Fiddler worked. The request from the client:

        GET http://127.0.0.1:1731/svg/document_edit.svg HTTP/1.1
        Host: 127.0.0.1:1731
        User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        Connection: keep-alive

    The answer from the server:

        HTTP/1.1 200 OK
        Server: ASP.NET Development Server/10.0.0.0
        Date: Thu, 16 Feb 2012 11:14:38 GMT
        X-AspNet-Version: 4.0.30319
        Cache-Control: private
        Content-Type: application/octet-stream
        Content-Length: 87924
        Connection: Close

        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <!-- Created with Inkscape (http://www.inkscape.org/) -->
        <svg xmlns: *** FIDDLER: RawDisplay truncated at 128 characters. Right-click to disable truncation. ***

    For the record, here is a useful Q&A for Fiddler: http://stackoverflow.com/questions/826134/how-to-display-localhost-traffic-in-fiddler-while-debugging-an-asp-net-applicati
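
    A key detail in the capture (my reading): the response is served by "ASP.NET Development Server", i.e. the Visual Studio development server (Cassini), which does not consult IIS's MIME map at all. That would explain why the server-wide IIS change had no effect when launching from Visual Studio. Two hedged options: switch the project to use local IIS or IIS Express, and/or declare the mapping in the application's own web.config, which IIS 7+ and IIS Express honor:

        <!-- web.config: register the SVG MIME type for IIS 7+ / IIS Express -->
        <configuration>
          <system.webServer>
            <staticContent>
              <remove fileExtension=".svg" />
              <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
            </staticContent>
          </system.webServer>
        </configuration>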

    Read the article

  • gitweb- fatal: not a git repository

    - by Robert Mason
    So I have set up a simple server running Debian stable (squeeze) and have configured git. Using gitolite, I have all the basic functionality (clone/push/pull/commit) working. Installation of gitweb went without any issues. However, when I access gitweb, I get a gitweb screen without any repos listed.

        # tail -n 1 /var/log/apache2/error.log
        [DATE] [error] [client IP_ADDRESS] fatal: Not a git repository: '/var/lib/gitolite/repositories/testrepo.git'

        # cd /var/lib/gitolite/repositories/testrepo.git
        # ls
        branches  config  HEAD  hooks  info  objects  refs

    Here is what I see in /var/lib/gitolite/projects.list:

        testrepo.git

    And in /etc/gitweb.conf:

        # path to git projects (<project>.git)
        $projectroot = "/var/lib/gitolite/repositories";

        # directory to use for temp files
        $git_temp = "/tmp";

        # target of the home link on top of all pages
        #$home_link = $my_uri || "/";

        # html text to include at home page
        $home_text = "indextext.html";

        # file with project list; by default, simply scan the projectroot dir.
        $projects_list = "/var/lib/gitolite/projects.list";

        # stylesheet to use
        $stylesheet = "gitweb.css";

        # javascript code for gitweb
        $javascript = "gitweb.js";

        # logo to use
        $logo = "git-logo.png";

        # the 'favicon'
        $favicon = "git-favicon.png";

    What is missing?
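
    Since the repository clearly exists and projects.list is populated, the usual suspect (a hedged guess, not a confirmed diagnosis) is permissions: gitolite keeps its repositories readable only by the gitolite user, while gitweb runs as the web server user, and a bare repo that gitweb cannot read surfaces as exactly this "Not a git repository" error. One common fix on Debian, sketched:

        # let the web server user traverse gitolite's repositories
        usermod -a -G gitolite www-data
        chmod -R g+rX /var/lib/gitolite/repositories

        # also tell gitolite to create future repos group-readable via the
        # umask setting in ~gitolite/.gitolite.rc (the exact variable name
        # depends on the gitolite version)

        service apache2 restart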

    Read the article

  • Truncated content with Apache on Vagrant VM

    - by Nev Stokes
    I'm using Vagrant to run a CentOS VM in order to try to achieve local development parity with our live servers. I've symlinked /var/www/html with the /vagrant shared directory and am forwarding port 80 for viewing at http://localhost:4567. I'm developing using SublimeText 2 on OS X Mountain Lion.

    Once I figured out that iptables was tripping me up, all was well and good, until I noticed something strange. I have a sample HTML page consisting of several paragraphs of lorem copy. I can view this fine in a browser on OS X. But when I make an edit, for example removing a paragraph, and refresh, the content is truncated, with the paragraph I deleted still visible. When I cat the files on the server I can see the changes I made, but these aren't even reflected when I curl localhost.

    I strongly suspect it's a problem with my Apache settings (with which I didn't really tinker), as the issue doesn't arise when I stop Apache and run sudo python -m SimpleHTTPServer 80 in the directory to view pages instead. What gives?
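
    This matches the well-known sendfile-on-vboxsf problem (hedged, but the symptoms line up exactly): when Apache serves files from a VirtualBox shared folder with sendfile/mmap enabled, it can keep returning stale lengths and contents after edits made on the host. The standard workaround is to disable both in the Apache config and restart httpd:

        # httpd.conf (or a conf.d snippet): avoid stale reads on vboxsf shares
        EnableSendfile Off
        EnableMMAP Off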

    Read the article

  • Nginx Tries to download file when rewriting non-existent url

    - by Vince Kronlein
    All requests for a non-existent file should be rewritten to index.php?name=$1. All other requests should be processed as normal. With this server block, the server instead tries to download all non-existent urls:

        server {
            server_name www.domain.com;
            rewrite ^(.*) http://domain.com$1 permanent;
        }

        server {
            listen 80;
            server_name domain.com;
            client_max_body_size 500M;
            index index.php index.html index.htm;
            root /home/username/public_html;

            location ~ /\.ht {
                deny all;
            }

            location ~ \.php$ {
                try_files $uri = 404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9002;
            }

            location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
                access_log off;
                log_not_found off;
                expires max;
            }

            location /plg {
            }

            location / {
                if (!-f $request_filename) {
                    rewrite ^(.*)$ /index.php?name=$1 break;
                }
            }
        }

    I've checked that my default_type is text/html instead of octet-stream, so I'm not sure what the deal is.
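
    The download behaviour is consistent with the rewrite's break flag (my diagnosis, worth testing): break stops rewriting but keeps the request in the current location, so the rewritten /index.php?name=... never re-enters location matching, never reaches the \.php$ fastcgi block, and is served by the static handler instead; a raw .php file served statically arrives as a download. Replacing the if/rewrite with try_files both fixes that and avoids the usual if-inside-location pitfalls:

        location / {
            # re-run location matching for missing files, so index.php is
            # picked up by the \.php$ fastcgi block
            try_files $uri $uri/ /index.php?name=$uri&$args;
        }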

    Read the article

  • Nginx wont send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but nginx is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), yet simply resorts to the filesystem if the request is a POST. Relevant parts of nginx.conf:

        location / {
            root /var/www/htdocs/;
            index index.html;
            autoindex on;
        }

        location /hg {
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;
            include fastcgi_params;

            if ($request_uri ~ ^/hg([^?#]*)) {
                set $rewritten_uri $1;
            }

            limit_except GET {
                allow all;
                deny all;
                auth_basic "hg secured repos";
                auth_basic_user_file /var/trac.htpasswd;
            }

            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_param PATH_INFO $rewritten_uri;

            # for authentication
            fastcgi_param AUTH_USER $remote_user;
            fastcgi_param REMOTE_USER $remote_user;
            #fastcgi_pass_header Authorization;
            #fastcgi_intercept_errors on;
        }

    GETs work fine, but a POST delivers this error to the error_log:

        2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory),
        client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com"

    What could possibly be the issue? I'm trying to allow read-only access via GETs to the page, but require authorization when using hg push to the same url, which sends a POST request.
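
    A hedged restructuring to try (I have not verified it against hgwebdir): if inside a location is notorious for leaving requests half-configured, and the error shows POSTs falling through to the static handler (hence the default root /usr/html) as though fastcgi_pass were not in effect. A regex location with a named capture makes the if unnecessary:

        location ~ ^/hg(?<hgpath>.*)$ {
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;
            include fastcgi_params;
            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_param PATH_INFO $hgpath;
            fastcgi_param AUTH_USER $remote_user;
            fastcgi_param REMOTE_USER $remote_user;

            # auth for everything except read-only GETs
            limit_except GET {
                auth_basic "hg secured repos";
                auth_basic_user_file /var/trac.htpasswd;
            }
        }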

    Read the article

  • For enabling SSL for a single domain on a server with muliple vhosts, will this configuration work?

    - by user1322092
    I just purchased an SSL certificate to secure/enable only ONE domain on a server with multiple vhosts. I plan on configuring as shown below (non-SNI). In addition, I still want to access phpMyAdmin securely via my server's IP address. Will the configuration below work? I have only one shot to get this working in production. Are there any redundant settings?

    The apache ssl.conf file:

        Listen 443
        SSLCertificateFile /home/web/certs/domain1.public.crt
        SSLCertificateKeyFile /home/web/certs/domain1.private.key
        SSLCertificateChainFile /home/web/certs/domain1.intermediate.crt

    The apache httpd.conf file:

        ...
        DocumentRoot "/var/www/html"    # currently exists
        ...
        NameVirtualHost *:443           # new - is this really needed if "Listen 443" is in ssl.conf?
        ...

        # below vhost currently exists; this is the domain I wish to enable SSL for
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName domain1.com
            ServerAlias 173.XXX.XXX.XXX
            DocumentRoot /home/web/public_html/domain1.com/public
        </VirtualHost>

        # below vhost currently exists
        <VirtualHost *:80>
            ServerName domain2.com
            ServerAlias www.domain2.com
            DocumentRoot /home/web/public_html/domain2.com/public
        </VirtualHost>

        # new - I plan on adding this vhost block to enable SSL for domain1.com
        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.domain1.com
            ServerAlias 173.203.127.20
            SSLEngine on
            SSLProtocol all
            SSLCertificateFile /home/web/certs/domain1.public.crt
            SSLCertificateKeyFile /home/web/certs/domain1.private.key
            SSLCACertificateFile /home/web/certs/domain1.intermediate.crt
            DocumentRoot /home/web/public_html/domain1.com/public
        </VirtualHost>

    As previously mentioned, I want to be able to access phpMyAdmin via "https://173.XXX.XXX.XXX/hiddenfolder/phpmyadmin", which is stored under "/var/www/html/hiddenfolder".
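
    A few hedged notes on the draft (worth validating on a staging vhost first, given the one-shot constraint). The SSLCertificate* lines in ssl.conf duplicate the vhost's and can be dropped (keep Listen 443); NameVirtualHost *:443 only matters for name-based selection, and with a single :443 vhost it is harmless either way; and for an intermediate bundle on Apache 2.2, SSLCertificateChainFile is usually the right directive rather than SSLCACertificateFile. Finally, https://173.XXX.XXX.XXX/... will be answered by your only :443 vhost, whose DocumentRoot is domain1's, so /var/www/html/hiddenfolder is unreachable there unless you alias it in. A sketch:

        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.domain1.com
            ServerAlias domain1.com 173.XXX.XXX.XXX
            SSLEngine on
            SSLCertificateFile      /home/web/certs/domain1.public.crt
            SSLCertificateKeyFile   /home/web/certs/domain1.private.key
            SSLCertificateChainFile /home/web/certs/domain1.intermediate.crt
            DocumentRoot /home/web/public_html/domain1.com/public

            # keep phpMyAdmin reachable at https://<server-ip>/hiddenfolder/...
            Alias /hiddenfolder /var/www/html/hiddenfolder
        </VirtualHost>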

    Read the article
