Search Results

Search found 19385 results on 776 pages for 'canonical link'.


  • Disabling the right-click sub menu using JQuery

    - by nikolaosk
    Recently I needed to disable the right-click context menu in an HTML page for a very simple HTML application I was creating for a friend. This is going to be a short post where I will demonstrate how to disable the right-click context menu. I will use the very popular jQuery library; please download the library (minified version) from http://jquery.com/download. Please find here all my posts regarding jQuery. In this hands-on example I will be using Expression Web 4.0. This application is not free, but you can use any HTML editor you like, e.g. Visual Studio 2012 Express edition, which you can download here. I am going to create a very simple HTML 5 page with some text and an image. The HTML markup for the page follows:

        <!DOCTYPE html>
        <html lang="en">
        <head>
          <title>HTML 5, CSS3 and JQuery</title>
          <meta http-equiv="Content-Type" content="text/html;charset=utf-8">
          <link rel="stylesheet" type="text/css" href="style.css">
          <script type="text/javascript" src="jquery-1.8.2.min.js"></script>
          <script type="text/javascript">
            (function ($) {
              $(document).bind('contextmenu', function () { return false; });
            })(jQuery);
          </script>
        </head>
        <body>
          <div id="header">
            <h1>Learn cutting edge technologies</h1>
            <h2>HTML 5, JQuery, CSS3</h2>
          </div>
          <figure>
            <img src="html5.png" alt="HTML 5">
          </figure>
          <div id="main">
            <h2>HTML 5</h2>
            <article>
              <p>HTML5 is the latest version of HTML and XHTML. The HTML standard defines a single language that can be written in HTML and XML. It attempts to solve issues found in previous iterations of HTML and addresses the needs of Web Applications, an area previously not adequately covered by HTML.</p>
            </article>
          </div>
        </body>
        </html>

    This is the jQuery code I use:

        (function ($) {
          $(document).bind('contextmenu', function () { return false; });
        })(jQuery);

    I simply disable/cancel the contextmenu event. When I load the page in the browser and right-click, the context menu does not appear. Hope it helps!
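
    A note in passing: the post's technique is just cancelling the contextmenu event, so the same idea can be written with .on(), which supersedes .bind() as of jQuery 1.7. A minimal sketch, assuming a jQuery 1.7+ build is loaded (the page above uses jquery-1.8.2.min.js, so it qualifies):

        // Minimal sketch: same technique as above with .on() instead of the
        // older .bind(); preventDefault() cancels the browser's context menu.
        (function ($) {
          $(document).on('contextmenu', function (e) {
            e.preventDefault();
          });
        })(jQuery);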

    Read the article

  • Is this spaghetti code already? [migrated]

    - by hephestos
    I post the following code, written all by hand. Why do I have the feeling that it is a spaghetti western all of its own? Second, could it be written better?

        <div id="form-board" class="notice" style="height: 200px; min-height: 109px; width: auto; display: none;">
        <script type="text/javascript">
        jQuery(document).ready(function(){
            $(".form-button-slide").click(function(){
                $( "#form-board" ).dialog();
                return false;
            });
        });
        </script>
        <?php
        echo $this->Form->create('mysubmit');
        echo $this->Form->input('inputs', array('type' => 'select', 'id' => 'inputs', 'options' => $inputs));
        echo $this->Form->input('Fields', array('type' => 'select', 'id' => 'fields', 'empty' => '-- Pick a state first --'));
        echo $this->Form->input('inputs2', array('type' => 'select', 'id' => 'inputs2', 'options' => $inputs2));
        echo $this->Form->input('Fields2', array('type' => 'select', 'id' => 'fields2', 'empty' => '-- Pick a state first --'));
        echo $this->Form->end("Submit");
        ?>
        </div>
        <div style="width:100%"></div>
        <div class="form-button-slide" style="float:left;display:block;">
        <?php echo $this->Html->link("Error Results", "#"); ?>
        </div>
        <script type="text/javascript">
        jQuery(document).ready(function(){
            $("#mysubmitIndexForm").submit(function() {
                // we want to store the values from the form input box, then send via ajax below
                jQuery.post("Staffs/view", {
                    data1: $("#inputs").attr('value'),
                    data2: $("#inputs2").attr('value'),
                    data3: $("#fields").attr('value'),
                    data4: $("#fields2").attr('value')
                });
                // Close the dialog
                $( "#form-board" ).dialog('close')
                return false;
            });
            $("#inputs").change(function() {
                // we want to store the values from the form input box, then send via ajax below
                var input_id = $('#inputs').attr('value');
                $.ajax({
                    type: "POST",
                    // The controller who listens to our request
                    url: "Inputs/getFieldsFromOneInput/" + input_id,
                    data: "input_id=" + input_id, //+ "&amp; lname=" + lname,
                    success: function(data){ // function on success with returned data
                        $('form#mysubmit').hide(function(){});
                        data = $.parseJSON(data);
                        var sel = $("#fields");
                        sel.empty();
                        for (var i = 0; i < data.length; i++) {
                            sel.append('<option value="' + data[i].id + '">' + data[i].name + '</option>');
                        }
                    }
                });
                return false;
            });
            $("#inputs2").change(function() {
                // we want to store the values from the form input box, then send via ajax below
                var input_id = $('#inputs2').attr('value');
                $.ajax({
                    type: "POST",
                    // The controller who listens to our request
                    url: "Inputs/getFieldsFromOneInput/" + input_id,
                    data: "input_id=" + input_id, //+ "&amp; lname=" + lname,
                    success: function(data){ // function on success with returned data
                        $('form#mysubmit').hide(function(){});
                        data = $.parseJSON(data);
                        var sel = $("#fields2");
                        sel.empty();
                        for (var i = 0; i < data.length; i++) {
                            sel.append('<option value="' + data[i].id + '">' + data[i].name + '</option>');
                        }
                    }
                });
                return false;
            });
        });
        </script>
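
    One concrete observation on the "could it be written better" question: the two change handlers are identical except for their selectors. A sketch of how the duplication could be factored out; the helper name bindFieldLoader is hypothetical, while the controller URL is taken from the code above:

        // Sketch: one helper replaces the two copy-pasted change handlers.
        function bindFieldLoader(inputSel, fieldSel) {
            $(inputSel).change(function () {
                var inputId = $(inputSel).val();
                $.post("Inputs/getFieldsFromOneInput/" + inputId, { input_id: inputId },
                    function (data) {
                        data = $.parseJSON(data);
                        var sel = $(fieldSel);
                        sel.empty();
                        for (var i = 0; i < data.length; i++) {
                            sel.append('<option value="' + data[i].id + '">' + data[i].name + '</option>');
                        }
                    });
                return false;
            });
        }
        bindFieldLoader("#inputs", "#fields");
        bindFieldLoader("#inputs2", "#fields2");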

    Read the article

  • Reading graph inputs for a programming puzzle and then solving it

    - by Vrashabh
    I just took a programming competition question and I absolutely bombed it. I had trouble right at the beginning, with reading the input set itself. The question was basically a variant of this puzzle: http://codercharts.com/puzzle/evacuation-plan but it also had an hour component in the first line (say, 3 hours after the start of the evacuation). It reads like this:

        This puzzle is a tribute to all the people who suffered from the earthquake in
        Japan. The goal of this puzzle is, given a network of roads and locations, to
        determine the maximum number of people that can be evacuated. The people must be
        evacuated from evacuation points to rescue points. The list of roads and the
        number of people they can carry per hour is provided.

        Input Specifications
        Your program must accept one and only one command line argument: the input file.
        The input file is formatted as follows:
        - the first line contains 4 integers n r s t
          - n is the number of locations (each location is given by a number from 0 to n-1)
          - r is the number of roads
          - s is the number of locations to be evacuated from (evacuation points)
          - t is the number of locations where people must be evacuated to (rescue points)
        - the second line contains s integers giving the locations of the evacuation points
        - the third line contains t integers giving the locations of the rescue points
        - the r following lines contain the road definitions. Each road is defined by 3
          integers l1 l2 width, where l1 and l2 are the locations connected by the road
          (roads are one-way) and width is the number of people per hour that can fit on
          the road

    Now look at the sample input set:

        5 5 1 2 3
        0
        3 4
        0 1 10
        0 2 5
        1 2 4
        1 3 5
        2 4 10

    The 3 in the first line is the additional component, defined as the number of hours since the rescue started, which is 3 in this case. Now, my solution was to use Dijkstra's algorithm to find the shortest path between each of the rescue and evacuation nodes. My problem started with how to read the input set. I read the first line in Python and stored the values in variables, but then I did not know how to store the distances between the nodes, what data structure to use, and how to feed it to, say, a standard implementation of Dijkstra's algorithm. So my question is twofold:

    1.) How do I take the input of such problems? I have faced this problem in quite a few competitions recently, and I hope I can get a simple code snippet or an explanation in Java or Python for reading the data input set in such a way that I can feed it as a graph to graph algorithms like Dijkstra and Floyd-Warshall. A solution to the above problem would also help.

    2.) How do I solve this puzzle? My algorithm was: find the shortest path between evac points (in the above example it is 14, from 0 to 3), then multiply it by the number of hours to get the maximal number of saves. But the answer given for this variant of the input set was 24, which I don't understand. Can someone explain that as well?

    UPDATE: I get how the answer is 14 in the given problem link - it seems to be just the shortest path between node 0 and 3.
    But with the 3-hour component, how is the answer 24? UPDATE: I get how it is 24 - it is a complete graph traversal at every hour, and this is how I solve it:

        Hour 1
        Node 0 to Node 1 - 10 people
        Node 0 to Node 2 - 5 people
        TotalRescueCount = 0
        Node 1 = 10
        Node 2 = 5

        Hour 2
        Node 1 to Node 3 = 5 (Rescued)
        Node 2 to Node 4 = 5 (Rescued)
        Node 0 to Node 1 = 10
        Node 0 to Node 2 = 5
        Node 1 to Node 2 = 4
        TotalRescueCount = 10
        Node 1 = 10
        Node 2 = 5 + 4 = 9

        Hour 3
        Node 1 to Node 3 = 5 (Rescued)
        Node 2 to Node 4 = 5 + 4 = 9 (Rescued)
        TotalRescueCount = 9 + 5 + 10 = 24

    It is hard enough for this case; for multiple evacuation and rescue points, how in the world would I write a program for this?
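
    For the input-reading half of the question, here is a minimal parsing sketch in Python, written against the format quoted above; the function name and the use of a token iterator are my own choices, not part of the puzzle:

        import sys
        from collections import defaultdict

        def read_evacuation_input(path):
            """Parse the puzzle input into (n, hours, evac, rescue, adjacency)."""
            with open(path) as f:
                tokens = iter(f.read().split())
            n = int(next(tokens))
            r = int(next(tokens))
            s = int(next(tokens))
            t = int(next(tokens))
            hours = int(next(tokens))  # the extra hour component in this variant
            evac = [int(next(tokens)) for _ in range(s)]
            rescue = [int(next(tokens)) for _ in range(t)]
            adj = defaultdict(list)    # node -> list of (neighbour, people per hour)
            for _ in range(r):
                l1, l2, width = (int(next(tokens)) for _ in range(3))
                adj[l1].append((l2, width))  # roads are one-way
            return n, hours, evac, rescue, adj

        if __name__ == "__main__":
            print(read_evacuation_input(sys.argv[1]))

    An adjacency list like this feeds directly into Dijkstra or Floyd-Warshall implementations; for the solving half, the hour-by-hour traversal worked out above is effectively pushing a flow through the network each hour, so a maximum-flow formulation is the natural direction to explore.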

    Read the article

  • Why doesn't my register form accept Spry validation in Dreamweaver? [migrated]

    - by shaghayeh
    Why doesn't my register form accept Spry validation in Dreamweaver? Here is my code:

        <?php
        if(isset($_POST['go_register'])) {
            $last_login = jdate('Y/n/j ,H:i:s');
            $firstname = trim($_POST['firstname']);
            $lastname = trim($_POST['lastname']);
            $email = trim($_POST['username']);
            $password = sha1($_POST['password']);
            $register_date = $_POST['register_date'];
            if(!empty($firstname) && !empty($lastname) && !empty($email) && !empty($password) && !empty($register_date)){
                $reg_query = "insert into users(firstname,lastname,email,password,register_date,last_login) values('$firstname','$lastname','$email','$password','$register_date','$last_login')";
                $reg_result = mysqli_query($connection, $reg_query);
                if($reg_result){
                    echo "ok";
                }
            } // end of not empty
        } // end of isset
        ?>
        <div class="sectionclass">
          <script src="../SpryAssets/SpryValidationTextField.js" type="text/javascript"></script>
          <link href="../SpryAssets/SpryValidationTextField.css" rel="stylesheet" type="text/css">
          <h2>??? ??? ???</h2>
          <form method="post" action="<?php echo $_SERVER['PHP_SELF']."?catid=2"; ?>">
            <div>
              <span style="color:#F00">*</span>
              <label>??? </label>
              <span id="sprytextfield1">
                <input type="text" name="firstname" />
                <span class="textfieldRequiredMsg">A value is required.</span>
              </span>
            </div>
            <div>
              <span style="color:#F00">*</span>
              <label>??? ????????</label>
              <input type="text" name="lastname" />
            </div>
            <div>
              <span style="color:#F00">*</span>
              <label>??? ??????(?????)</label>
              <input dir="ltr" type="text" name="username" />
            </div>
            <div>
              <span style="color:#F00">*</span>
              <label>??? ????</label>
              <input type="password" name="password" />
            </div>
            <input type="hidden" name="register_date" value="<?php echo jdate('Y/n/j'); ?>">
            <div>
              <input type="submit" name="go_register" value="??? ???"/>
            </div>
          </form>
        </div>
        <script type="text/javascript">
          var sprytextfield1 = new Spry.Widget.ValidationTextField("sprytextfield1");
        </script>

    Read the article

  • Looking for home networking hardware and software advice

    - by phobos7
    Note: I originally wrote this up in a blog post. I've removed any affiliate links that I put in my original post to ensure I don't annoy anybody. I've recently moved home and I now need to go to the trouble of sorting out my home network yet again. We had Virgin broadband in Hertford, but you can't get Virgin in the street we've moved to, so I've had to go with O2 Broadband. Normally I prefer to use my own hardware, and I previously used the D-Link DIR-655 router, which was great, but in this situation I am using the O2 Wireless Box III, since I only have an old Netgear DG834PN Wireless G modem router and I'd rather be using Wireless N.

    Anyway, the place we have moved into has only one phone point in the hallway, the best TV point in one room, and the best place to put the TV and other entertainment stuff in yet another room. So, networking the house for Internet and TV is required. The diagram below shows the things that I'll have in my home network, but there are three points where I'm not quite sure what hardware to use:

    1) A Wireless Access Point/Bridge that acts only as a wireless-to-wire bridge and not an AP, linking a Media Centre/PC and a couple of consoles to the network. I'm pretty much settled on an Acer Aspire Revo R3600 as my media PC, probably with Ubuntu or Windows and XBMC installed.
    2) A Wireless Access Point/Bridge that acts only as a wireless-to-wire bridge and not an AP, linking a device that can decode and stream TV from a TV aerial across the network.
    3) The device that is connected to 2). At the moment I'm considering a HDHomeRun by SiliconDust.

    For the bridges, at the moment I'm considering either the TP-LINK TL-WA701ND 150Mbps Wireless Lite N Access Point (very cheap at Amazon) or the Netgear 5 GHz Wireless-N HD Access Point/Bridge. I'd love to get some insight into what you would do in my situation. What Wireless Access Point/Bridge should I put at points 1) and 2)? What device should I choose for point 3) that can decode and stream a TV signal? Is the Acer Aspire Revo R3600 a good choice?

    Note 2: I've also posted this question on AVForums.

    Read the article

  • Install MegaCli to Monitor Perc 5/i in Nexentastor 3

    - by Peter Valadez
    I have a Dell 2950 with a Perc 5/i RAID controller that we've already installed NexentaStor 3 Community Edition on. We set up a RAID-10 array and put a ZFS pool on top of the hardware. As I understand it, in this configuration ZFS/NexentaStor will not be able to tell when a disk fails in the array. Obviously, this is not optimal. Since the Dell Perc 5/i controller is a rebranded LSI controller, you should be able to use the MegaCli utility to manage the array and monitor its condition. I had seen in a separate forum that the Perc 5/i is very similar to the LSI MegaRAID 8480E, so I tried installing the MegaCli utility from the link below. However, I have not been able to install the utility successfully.

    http://www.lsi.com/support/products/Pages/MegaRAIDSAS8480E.aspx

    Here is what happened when I tried to install MegaCli:

        root@Nexenta2:/files# pkgadd -d MegaCli.pkg
        Warning: unable to relocate '$BASEDIR'
        mv: cannot move `solmegacli-8.02.16/' to a subdirectory of itself, `solmegacli-8.02.16//var/lib/dpkg/alien/solmegacli/reloc/solmegacli-8.02.16'
        mv: cannot move `solmegacli-8.02.16/' to a subdirectory of itself, `solmegacli-8.02.16//opt/solmegacli-8.02.16'
        822-date: warning: This program is deprecated. Please use 'date -R' instead.
        822-date: warning: This program is deprecated. Please use 'date -R' instead.
        solmegacli_8.02.16-1_all.deb generated
        (Reading database ... 41397 files and directories currently installed.)
        Preparing to replace solmegacli 8.02.16-1 (using solmegacli_8.02.16-1_all.deb) ...
        Unpacking replacement solmegacli ...
        Setting up solmegacli (8.02.16-1) ...

    In /var/logs/dpkg.log:

        2012-03-23 20:40:19 status unpacked solmegacli 8.02.16-1
        2012-03-23 20:40:19 configure solmegacli 8.02.16-1 8.02.16-1
        2012-03-23 20:40:19 status unpacked solmegacli 8.02.16-1
        2012-03-23 20:40:19 status half-configured solmegacli 8.02.16-1
        2012-03-23 20:40:19 status installed solmegacli 8.02.16-1

    So... I've got three questions:

    1. Is it possible to install and use MegaCli in NexentaStor 3?
    2. If so, how can I install MegaCli on NexentaStor 3? Suggestions welcome!!!
    3. If not, is there a better way to monitor the condition of the Perc 5/i hardware RAID? Our 2950 does have a DRAC card, so can I use that to monitor the RAID condition?

    Read the article

  • Lion built-in VPN client times out connecting to Windows 2003 PPTP server

    - by beporter
    I have a new iMac with OS X 10.7 (Lion) on it that refuses to connect to a PPTP-based VPN server (running Windows 2003 SBS). To shortcut past a lot of questions: there is a Dell workstation running Windows 7 on the same LAN as the Mac that is able to establish a PPTP connection to the same VPN server using the same credentials. That would seem to rule out any possible problems with the server, the port forwards on the server's firewall, the internet connection between the two, and the router local to the Dell and the iMac. Here's a "verbose" dump of the PPP log from the iMac:

        Tue Sep  6 10:13:11 2011 : using link 0
        Tue Sep  6 10:13:11 2011 : Using interface ppp0
        Tue Sep  6 10:13:11 2011 : Connect: ppp0 socket[34:17]
        Tue Sep  6 10:13:11 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep  6 10:13:11 2011 : PPTP port-mapping for en0, interfaceIndex: 0, Protocol: None, Private Port: 0, Public Address: 45f6f181, Public Port: 0, TTL: 0.
        Tue Sep  6 10:13:11 2011 : PPTP port-mapping for en0 inconsistent. is Connected: 1, Previous interface: 4, Current interface 0
        Tue Sep  6 10:13:11 2011 : PPTP port-mapping for en0 initialized. is Connected: 1, Previous publicAddress: (0), Current publicAddress 45f6f181
        Tue Sep  6 10:13:11 2011 : PPTP port-mapping for en0 fully initialized. Flagging up
        Tue Sep  6 10:13:14 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep  6 10:13:17 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep  6 10:13:20 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep  6 10:13:23 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep  6 10:13:26 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep  6 10:13:29 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep  6 10:13:32 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep  6 10:13:35 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep  6 10:13:38 2011 : sent [LCP ConfReq id=0x1 ]
        Tue Sep  6 10:13:41 2011 : LCP: timeout sending Config-Requests
        Tue Sep  6 10:13:41 2011 : Connection terminated.
        Tue Sep  6 10:13:41 2011 : PPTP disconnecting...
        Tue Sep  6 10:13:41 2011 : PPTP clearing port-mapping for en0
        Tue Sep  6 10:13:41 2011 : PPTP disconnected

    The error seems to centre on the line "LCP: timeout sending Config-Requests", but I haven't had any luck finding troubleshooting information for this. I've tried completely deleting the entire VPN "connection" from the Network prefpane and recreating it from scratch. I am certain the connection details are correct because they exactly match what successfully connects from the Win7 machine sitting next to the iMac. Any suggestions?

    Read the article

  • My current iptables configuration doesn't work [on hold]

    - by Brad
        sudo chkconfig iptables off
        /etc/init.d/iptables on

        ### Clear/flush iptables
        sudo iptables -F
        sudo iptables -P INPUT ACCEPT
        sudo iptables -P OUTPUT ACCEPT
        sudo iptables -P FORWARD ACCEPT

        ### Allow SSH
        iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT

        ### Allow YUM updates
        sudo iptables -A OUTPUT -o eth0 -p tcp --dport 80 --match owner --uid-owner 0 --state NEW,ESTABLISHED -j ACCEPT
        sudo iptables -A OUTPUT -o eth0 -p tcp --dport 443 --match owner --uid-owner 0 --state NEW,ESTABLISHED -j ACCEPT

        ### Add your rules from the link above, here
        # ftp,smtp,imap,http,https,pop3,imaps,pop3s
        sudo iptables -A INPUT -i eth0 -p tcp -m multiport --dports 21,25,143,80,443,110,993,995 -m state --state NEW,ESTABLISHED -j ACCEPT
        sudo iptables -A OUTPUT -o eth0 -p tcp -m multiport --sports 21,25,143,80,110,443,993,995 -m state --state NEW,ESTABLISHED -j ACCEPT

        ## allow dns
        sudo iptables -A OUTPUT -p udp -o eth0 --dport 53 -j ACCEPT && sudo iptables -A INPUT -p udp -i eth0 --sport 53 -j ACCEPT

        # handling pings
        sudo iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT && sudo iptables -A OUTPUT -p icmp --icmp-type echo-reply -j ACCEPT
        sudo iptables -A OUTPUT -p icmp --icmp-type echo-request -j ACCEPT && sudo iptables -A INPUT -p icmp --icmp-type echo-reply -j ACCEPT

        # manage ddos attacks
        sudo iptables -A INPUT -p tcp --dport 80 -m limit --limit 25/minute --limit-burst 100 -j ACCEPT

        ## Implement some logging so that we know what's getting dropped
        sudo iptables -N LOGGING
        sudo iptables -A INPUT -j LOGGING
        sudo iptables -A LOGGING -m limit --limit 2/min -j LOG --log-prefix "IPTables Packet Dropped: " --log-level 7
        sudo iptables -A LOGGING -j DROP

        # once a rule affects traffic then it is no longer managed
        # so if the traffic has not been accepted, block it
        sudo iptables -A INPUT -j DROP
        sudo iptables -I INPUT 1 -i lo -j ACCEPT
        sudo iptables -A OUTPUT -j DROP

        # allow only internal port forwarding
        sudo iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
        sudo iptables -P FORWARD DROP

        # create an iptables config file
        sudo iptables-save > /root/dsl.fw

        ### Append the following to the rc.local file
        sudo nano /etc/rc.local
        ####--- /sbin/iptables-restore < sudo /root/dsl.fw
        ####--- /etc/init.d/iptables save

        ## check to see if this setting is working great.
        sudo service iptables restart

        ## log out/in testing
        sudo chkconfig iptables on

    What is the problem with this setup? If I restart the server it doesn't allow me back in over SSH, and there may be a problem with yum. Original source of information: https://gist.github.com/Jonathonbyrd/1274837#file-instructions

    Read the article

  • iSCSI timeouts under high load

    - by Antonio
    I have two servers connected via Gigabit Ethernet. One is the iSCSI target, the second one is the initiator. When I run mkfs.ext4 on the initiator, after a while disk IO slows down critically. On the target host I can see the following in syslog:

        Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119668c 0
        Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119668c 6
        Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119668d 0
        Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119668d 6
        Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119668e 0
        Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119668e 6
        Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 1196696 0
        Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 1196696 6
        Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119669e 0
        Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119669e 6
        Sep 14 09:40:04 sh11 tgtd: abort_task_set(1139) found 119669f 0
        Sep 14 09:40:04 sh11 tgtd: abort_cmd(1115) found 119669f 6

    And the load average grows to 12 or even more:

        # uptime
        12:37:00 up 23 days, 13:25, 1 user, load average: 12.00, 7.00, 4.00

    The target host:

        CentOS 6.3
        tgtd 1.0.24
        Intel Pentium 4 2.4GHz
        1Gb RAM
        2Tb WD Caviar Green SATA 2.0

        # lspci
        00:00.0 Host bridge: Intel Corporation 82845G/GL[Brookdale-G]/GE/PE DRAM Controller/Host-Hub Interface (rev 02)
        00:01.0 PCI bridge: Intel Corporation 82845G/GL[Brookdale-G]/GE/PE Host-to-AGP Bridge (rev 02)
        00:1d.0 USB controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #1 (rev 02)
        00:1d.1 USB controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #2 (rev 02)
        00:1d.2 USB controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #3 (rev 02)
        00:1d.7 USB controller: Intel Corporation 82801DB/DBM (ICH4/ICH4-M) USB2 EHCI Controller (rev 02)
        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 82)
        00:1f.0 ISA bridge: Intel Corporation 82801DB/DBL (ICH4/ICH4-L) LPC Interface Bridge (rev 02)
        00:1f.1 IDE interface: Intel Corporation 82801DB (ICH4) IDE Controller (rev 02)
        00:1f.3 SMBus: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) SMBus Controller (rev 02)
        00:1f.5 Multimedia audio controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) AC'97 Audio Controller (rev 02)
        01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV200 QW [Radeon 7500]
        02:01.0 Ethernet controller: D-Link System Inc DGE-530T Gigabit Ethernet Adapter (rev 11) (rev 11)
        02:02.0 RAID bus controller: VIA Technologies, Inc. VT6421 IDE/SATA Controller (rev 50)
        02:03.0 RAID bus controller: VIA Technologies, Inc. VT6421 IDE/SATA Controller (rev 50)
        02:04.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller (rev 02)
        02:08.0 Ethernet controller: Intel Corporation 82801DB PRO/100 VE (CNR) Ethernet Controller (rev 82)

    Is there a way to tune the target host to avoid these timeouts?

    Read the article

  • How to know the source of certain TCP traffic on AIX

    - by A.Rashad
    We have two AIX boxes, one for the production system and another for testing. Both systems run ATM machine switches, where the ATM device is connected via a TCP socket. We had an issue on the production system where the machine would power off or get disconnected, but netstat -na | grep <IP of machine > would still report that the socket was up. When we simulated that case in the UAT environment, the problem did not happen: the socket would terminate in 3 to 5 minutes. When we sniffed the traffic between the machine and the ATM, we found that no traffic takes place on production, while there is some sort of heartbeat on UAT - but it is not initiated by the application.

        $> tcpdump | grep -v "10.2.2.71" | grep -v "HSRP" | grep "10.3.1.30"
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on en6, link-type 1, capture size 96 bytes
        09:08:13.323421 IP server073.afs3-callback > 10.3.1.30.impera: . 278204201:278204202(1) ack 3307884029 win 164
        09:08:13.335334 IP 10.3.1.30.impera > server073.afs3-callback: . ack 1 win 64180
        09:08:23.425771 IP 10.3.1.30.impera > server073.afs3-callback: . 1:2(1) ack 1 win 64180
        09:08:23.425789 IP server073.afs3-callback > 10.3.1.30.impera: . ack 2 win 65535
        09:09:13.628985 IP server073.afs3-callback > 10.3.1.30.impera: . 0:1(1) ack 1 win 164
        09:09:13.633900 IP 10.3.1.30.impera > server073.afs3-callback: . ack 1 win 64180
        09:09:23.373634 IP 10.3.1.30.impera > server073.afs3-callback: . 1:2(1) ack 1 win 64180
        09:09:23.373647 IP server073.afs3-callback > 10.3.1.30.impera: . ack 2 win 65535

    On production, that traffic is not there. We want to know where this traffic is initiated from, so we can implement the same on production to sense disconnection. Our comms parameters are:

        tcp_keepcnt = 2
        tcp_keepidle = 100
        tcp_keepinit = 150
        tcp_keepintvl = 150
        tcp_finwait2 = 1200

    Can anyone help?

    Edit: one point I missed because I was rushing to a meeting: the difference in setup between production and UAT is that in production we have an F5 appliance working as a load balancer between the ATMs and the AIX box, while it is a direct connection through MPLS in the case of UAT. Note: we had one MPLS-connected and one GPRS-connected ATM on UAT, and both connections terminated when unplugged in about 4 minutes.

    Edit 2: the command no -o tcp_timewait returns 1 on both production and UAT.
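
    For reference, the tunables listed above can be inspected or adjusted with the same AIX no command the post already uses for tcp_timewait; a sketch (runtime only, and note that kernel keepalives only apply to sockets opened with the SO_KEEPALIVE option set):

        # Inspect the keepalive-related tunables
        no -o tcp_keepidle
        no -o tcp_keepintvl
        no -o tcp_keepcnt

        # Set one at runtime (AIX keepalive intervals are in half-second units)
        no -o tcp_keepidle=100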

    Read the article

  • XenServer 6.0.2 path to installation media contains non-ascii characters

    - by cmaduro
    XenServer 6.0.2 install fails no matter what I do. I have confirmed that the md5 checksum on my ISO file is good. I tried installing from a mounted ISO file (remotely via iKVM), from physical media, and from a bootable USB stick (using syslinux + the contents of the ISO). All attempts have yielded the same result: when verifying the installation media, at 0% initializing, the following is reported: "Some packages appeared to be damaged.", followed by a list of pretty much all the gz2 and rpm packages. If I skip the media verification, the installer proceeds and then gives me an error when it reaches "Installing from base pack" at 0%, which states:

        An unrecoverable error has occurred. The error was:
        'ascii' codec can't decode byte 0xff in position 20710: ordinal not in range(128)
        Please refer to your user guide, or contact a Technical Support Representative,
        for further details

    There is one option left, which is to reboot. Apparently at some point during the processing of the repositories on the installation media, non-ASCII characters are found, which causes the installer to quit. How do I fix this? Here are my specs:

        TYAN S8236 motherboard
        2 AMD Opteron 6234 processors
        LSI2008 card connected to 2 1TB Seagate Constellation SATA drives, 1 500GB Corsair m4 SSD (SATA) and 1 Corsair Force 3 64GB SSD (SATA)
        Onboard SATA connected to a slim DVD-+RW
        Onboard SAS connected to 2 IBM ESX 70GB 10K SAS drives (for XenServer)
        256GB memory

    Comments:

    - According to pylonsbook.com, "chances are you have run into a problem with character sets, encodings, and Unicode" – cmaduro 10 hours ago
    - A clue is provided by vmware.com/support/vsphere5/doc/…: "Data migration fails if the path to the vCenter Server installation media contains non-ASCII characters. When this problem occurs, an error message similar to: 'ascii' codec can't decode byte 0xd0 in position 30: ordinal not in range(128) appears, and the installer quits unexpectedly during the data migration process." – cmaduro 10 hours ago
    - This is an error that Python throws. And guess what, the .py extension of the file you have to edit in this link community.spiceworks.com/how_to/show/1168 means the installer is written in Python. Python is interpreted, so now to find the install file responsible for this error. – cmaduro 6 hours ago
    - The file that generates the error upon verification is /opt/xensource/installer/tui/repo.py. The error message appears around line 359. – cmaduro 2 hours ago
    - I am fairly sure that the install error is generated somewhere in repository.py, as the backend.py file throws errors while methods in that file are being called. Perhaps all errors can be traced back to this file. – cmaduro 1 hour ago

    Read the article

  • Supervisor sentry-web exit status 1

    - by rockingskier
    I'm having problems getting Sentry (https://www.getsentry.com - not enough rep for a link) running as a service using supervisor. I can run Sentry from the command line and view it correctly in the browser, but when it comes to supervisor I am completely in the dark. I shall try to give all the details I can.

    Initial user warning: I am by no means a server admin, just playing/learning in VirtualBox. I literally only just discovered supervisor from reading the Sentry documentation, so I may well be making some obvious mistakes here.

    The setup:
    - Ubuntu Server 11.10 (fresh install, VirtualBox)
    - virtualenv with Sentry and its dependencies
    - supervisor

    Instructions followed:
    - Supervisor with vanilla ini file
    - Sentry/supervisor instructions

    My supervisor ini (Sentry section):

        [program:sentry-web]
        directory=/root/.virtualenvs/sentry/
        command= start http /root/.virtualenvs/sentry/bin/sentry
        autostart=true
        autorestart=true
        redirect_stderr=true

    OK so here we go: when I run supervisord -n I get the following messages rather than a nice web interface to play with.

        2012-04-12 23:48:09,024 CRIT Supervisor running as root (no user in config file)
        2012-04-12 23:48:09,097 INFO RPC interface 'supervisor' initialized
        2012-04-12 23:48:09,099 CRIT Server 'unix_http_server' running without any HTTP authentication checking
        2012-04-12 23:48:09,100 INFO supervisord started with pid 17813
        2012-04-12 23:48:10,126 INFO spawned: 'sentry-web' with pid 17816
        2012-04-12 23:48:10,169 INFO exited: sentry-web (exit status 1; not expected)
        2012-04-12 23:48:11,199 INFO spawned: 'sentry-web' with pid 17817
        2012-04-12 23:48:11,238 INFO exited: sentry-web (exit status 1; not expected)
        2012-04-12 23:48:13,269 INFO spawned: 'sentry-web' with pid 17818
        2012-04-12 23:48:13,309 INFO exited: sentry-web (exit status 1; not expected)
        2012-04-12 23:48:16,343 INFO spawned: 'sentry-web' with pid 17819
        2012-04-12 23:48:16,389 INFO exited: sentry-web (exit status 1; not expected)
        2012-04-12 23:48:17,394 INFO gave up: sentry-web entered FATAL state, too many start retries too quickly

    "CRIT Supervisor running as root (no user in config file)" suggests a big problem - I probably shouldn't be running this as root? "CRIT Server 'unix_http_server' running without any HTTP authentication checking" - surely authentication is optional? "INFO exited: sentry-web (exit status 1; not expected)" - *sad face* here; Google hasn't been much help yet. Anyway, that is it as far as I know. If anyone can help me, that would be greatly appreciated. Thanks in advance.

    Read the article

  • Linux HA cluster w/Xen, Heartbeat, Pacemaker. domU does not failover to secondary node

    - by Kendall
    I am having the following problem with an OpenSuSE + Heartbeat + Pacemaker + Xen HA cluster: when the node a Xen domU is running on is "dead", the Xen domU running on it is not restarted on the second node. The cluster is set up with two nodes, each running OpenSuSE 11.3, Heartbeat 3.0, and Pacemaker 1.0 in CRM mode. For storage I am using a LUN on an iSCSI SAN device; the LUN is formatted with OCFS2 and managed with LVM. The Xen domU has two logical volumes, one for root and the other for swap. I am using IPMI cards for STONITH devices, and a dedicated ethernet link for heartbeat communications. The ha.cf file is as follows:

        keepalive 1
        deadtime 10
        warntime 5
        udpport 694
        ucast eth1
        auto_failback off
        node dhcp-166
        node stage
        use_logd yes
        crm yes

    My resources look as follows:

        shocrm(live)configure# show
        node $id="5c1aa924-bba4-4f95-a367-6c9a58ac4a38" dhcp-166
        node $id="cebc92eb-af24-4833-aaf0-672adf80b58e" stage
        primitive Xen-Util ocf:heartbeat:Xen \
            meta target-role="Started" \
            operations $id="Xen-Util-operations" \
            op start interval="0" timeout="60" start-delay="0" \
            op stop interval="0" timeout="120" \
            params xmfile="/etc/xen/vm/xen-util"
        primitive my-stonith stonith:external/ipmi \
            params hostname="dhcp-166" ipaddr="192.168.3.106" userid="ADMIN" passwd="xxx" \
            op monitor interval="2m" timeout="60s"
        primitive my-stonith2 stonith:external/ipmi \
            params hostname="stage" ipaddr="192.168.3.105" userid="ADMIN" passwd="xxx" \
            op monitor interval="2m" timeout="60s"
        property $id="cib-bootstrap-options" \
            dc-version="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" \
            cluster-infrastructure="Heartbeat"

    The Xen domU config file is as follows:

        name = "xen-util"
        bootloader = "/usr/lib/xen/boot/domUloader.py"
        #bootargs = "xvda1:/vmlinuz-xen,/initrd-xen"
        bootargs = "--entry=xvda1:/boot/vmlinuz-xen,/boot/initrd-xen"
        memory = 4096
        disk = [ 'phy:vg_xen/xen-util-root,xvda1,w',
                 'phy:vg_xen/xen-util-swap,xvda2,w', ]
        root = "/dev/xvda1"
        vif = [ 'mac=00:16:3e:42:42:06' ]
        #vfb = [ 'type=vnc,vncunused=0,vnclisten=192.168.3.172' ]
        extra = ""

    Say domU "Xen-Util" is running on node "stage"; if "stage" goes down, "Xen-Util" does not restart on node "dhcp-166". It seems to want to try, as an "xm list" will show it for a few seconds, and if you "xm console xen-util" it will give a message like "copying /boot/kernel.gz from xvda1 to /var/lib/xen/tmp/kernel.a53gs for booting". However, it never gets past that, eventually gives up, and no longer appears in "xm list". Now, when node "stage" comes back online after being power cycled, it detects that "Xen-Util" isn't running, and starts it (on stage). I've tried starting "Xen-Util" on node "dhcp-166" without the cluster running, and it works fine. No problems. So I know it works in that respect. Any ideas? Thanks!

    Read the article

  • How to allow local LAN access while connected to Cisco VPN?

    - by Ian Boyd
    How can I maintain local LAN access while connected to Cisco VPN? When connecting using Cisco VPN, the server has the ability to instruct the client to prevent local LAN access. Assuming this server-side option cannot be turned off, how can I allow local LAN access while connected with a Cisco VPN client? I used to think it was simply a matter of routes being added that capture LAN traffic with a higher metric, for example:

        Network Destination    Netmask        Gateway          Interface        Metric
        10.0.0.0               255.255.0.0    10.0.0.3         10.0.0.3         20    <-- Local LAN
        10.0.0.0               255.255.0.0    192.168.199.1    192.168.199.12   1     <-- VPN Link

    But trying to delete the 10.0.x.x -> 192.168.199.12 route doesn't have any effect:

        >route delete 10.0.0.0
        >route delete 10.0.0.0 mask 255.255.0.0
        >route delete 10.0.0.0 mask 255.255.0.0 192.168.199.1
        >route delete 10.0.0.0 mask 255.255.0.0 192.168.199.1 if 192.168.199.12
        >route delete 10.0.0.0 mask 255.255.0.0 192.168.199.1 if 0x3

    And while it still might simply be a routing issue, attempts to add or delete routes fail. At what level in the networking stack is the Cisco VPN client driver doing what, such that it overrides a local administrator's ability to administer their machine? The Cisco VPN client cannot be employing magic; it's still software running on my computer. What mechanism is it using to interfere with my machine's network? What happens when an IP/ICMP packet arrives on the network? Where in the networking stack is the packet getting eaten?

    See also:
    - No internet connection with Cisco VPN
    - Cisco VPN Client interrupts connectivity to my LDAP server
    - Cisco VPN stops Windows 7 Browsing
    - How can I prohibit the creation of a route in Windows XP upon connection to Cisco VPN?
    - Rerouting local LAN and Internet traffic when in VPN
    - VPN Client "Allow local LAN Access"
    - Allow Local LAN Access for VPN Clients on the VPN 3000 Concentrator Configuration Example
    - LAN access gone when I connect to VPN
    - Windows XP Documentation: Route

    Edit: Things I've not yet tried:

        >route delete 10.0.*

    Update: Since Cisco has abandoned their old client in favour of AnyConnect (HTTP SSL based VPN), this question, unsolved, can be left as a relic of history. Going forward, we can try to solve the same problem with their new client.

    Read the article

  • Accidentally Removed Permissions for the XP SP3 Registry key HKEY_CLASSES_ROOT! Workaround Please!

    - by Praveen Kumar
    This is Praveen and I am using Microsoft Windows XP SP3, Build 2600. I had problems using Microsoft Office 2010: it kept saying, "Please wait while windows configures Microsoft Office Professional Plus 2010". After seeing this link: http://social.answers.microsoft.com/Forums/en-US/outlookcontact/thread/6e9c3f2e-010a-4b74-b433-0c41548ee468?prof=required I thought of giving the registry key HKEY_CLASSES_ROOT full access permissions for Everyone. I went to the registry, right-clicked on HKEY_CLASSES_ROOT and clicked on Permissions. I added Everyone, gave Full Control in the ACL and clicked on Apply. I also checked "Replace permission entries on all child objects with entries shown here that apply to child objects." After a long time it failed, saying it could not replace permissions for a few entries.

    Now the key HKEY_CLASSES_ROOT has no access for any user, and my system is not starting up. Some of my friends asked me to try the Last Known Good Configuration; even that did not work. When I tried to open in Safe Mode, I got only one driver to load, and even that didn't load well. Another friend suggested that I put in the Setup disk and reinstall the OS. I tried that, and after completing the "Installing Device Drivers" part, when it started "Installing Network", the system restarted. Now the setup is also only half way through, and even if I open in Safe Mode to try System Restore, it pops up a message saying, "Setup cannot run under Safe Mode. Setup will restart now." and the system restarts.

    All my official files, and the software which I developed for Registry Security, reside on my system now. I am unable to access the system, and I need it working to submit the project, as the deadline is this week. I had no better solutions from elsewhere. Can anyone please help me out with this issue? If it is possible to open the registry editor's stored file from another system and restore the access permissions, I hope it would solve the problem. Please do help me. Thanking you, Praveen.

    Read the article

  • Will this RAID5 setup work (3TB Seagate Barracudas + Adaptec RAID 6405)?

    - by Slayer537
    As the title states: will this RAID combo work, and if not, what needs to be changed? Overall opinions would be most helpful. I currently run a small file server of about 5TB or so. I keep outgrowing my needs and need to build a RAID setup that will allow me to expand as needed. I am new to RAID setups, especially one of the scale I have currently planned out, but I have been doing research for the past couple of weeks and have come up with a build. Ideally, I'd have the setup completely built, but I'd like to keep the total cost around $1k and can't afford to go above $1.5k, so unfortunately that's not an option.

    Two of my current drives are WD Caviar Black 2TB; however, I have recently learned that due to the lack of TLER those drives are awful for any RAID setup other than 0 or 1. That being said, my third drive is a Seagate Barracuda 3TB (ST3000DM001) and I have found a RAID controller that states it supports it, so I'd like to use this same type of drive, if possible. Have any of you had any experience using this drive, or a similar one, in a RAID5 configuration? The manufacturer states that it supports it, but knowing that it is not an enterprise drive, I am slightly concerned that it could drop out of the array. I would just go with enterprise drives, but those are about double the cost...

    Parts list:
    - Storage rack: http://www.ebay.com/itm/SGI-3U-Media-Storage-Server-16-Hard-Drive-Bay-SATA-SAS-Expander-Omnistor-SE3016-/140735776937?pt=LH_DefaultDomain_0&hash=item20c48188a9
    - 3 more HDs (for now..): http://www.amazon.com/Seagate-Barracuda-3-5-Inch-Internal-ST3000DM001/dp/B005T3GRLY/ref=dp_return_2?ie=UTF8&n=172282&s=electronics
    - Adaptec RAID 6405: http://www.newegg.com/Product/Product.aspx?Item=N82E16816103224 (here's a link to the compatibility sheet if that helps: http://download.adaptec.com/pdfs/compatibility_report/arc-sas_cr_03-27-12_series6.pdf)
    - SAS expander cable: http://www.pc-pitstop.com/sas_cables_adapters/8887-2M.asp

    My plan is to install the RAID card in my computer and then route the SAS cable to the rack: set up a RAID5 on 3 drives, transfer my data over from my other drive, and then add that drive to the array. Eventually I'd like to get a 2U unit, run the file server on that, and move the RAID card over to it, but that will have to happen later on. Side note: the computer the card would be going into will be running Windows 7 Pro with 24GB of DDR3-1600 and an i7-930.

    Read the article

  • Error codes 80070490 and 8024200D in Windows Update

    - by Sammy
    How do I get past these stupid errors? The way I have set things up is that Windows Update tells me when there are new updates available and then I review them before installing them. Yesterday it told me that there were 11 new updates. So I reviewed them and saw that about half of them were security updates for Vista x64 and .NET Framework 2.0 SP2, and half of them were just regular updates for Vista x64. I checked them all and hit the Install button. It seemed to work at first - updates were being downloaded and installed - but then at update 11 of 11 total it got stuck and gave me the two error codes you see in the title. Here are some screenshots to give you an idea of what it looks like: this is what it looks like when it presents the updates to me; this is how it looks when the installation fails; and (I'm not sure if you're gonna see this very well) these are the updates it's trying to install.

    Update: This is on Windows Vista Ultimate 64-bit with integrated SP2, installed only two weeks ago on 2012-10-02. Aside from this, the install is working flawlessly. I have not made any major changes to the system, like installing new devices or drivers.

    What I have tried so far: I installed the System Update Readiness Tool (the correct one for Vista x64) from Microsoft. This did not solve the issue. Microsoft resource links:

    - Solutions to 80070490: Windows Update error 80070490; System Update Readiness Tool fixes Windows Update errors in Windows 7, Windows Vista, Windows Server 2008 R2, and Windows Server 2008
    - Solutions to 8024200D: Windows Update error 8024200d

    Essentially both solutions tell you to install the System Update Readiness Tool for your system. As I have done so, and it didn't solve the problem, the next step would be to try to repair Windows. Before I do that, is there anything else I can try?

    Microsoft automatic troubleshooter: If I click the automatic troubleshooter link available on the solution web page above, it directs me to download a file called windowsupdate.diagcab. But after download, this file is not associated with any Windows program. Is this the so-called Microsoft Fix It program? It doesn't have its icon; it's just a blank file. Does it need to be associated? And with what Windows program?

    Read the article

  • Secure ldap problem

    - by neverland
    I have tried to configure my OpenLDAP server for secure connections using OpenSSL on Debian 5. However, I ran into trouble with the command below.

        ldap:/etc/ldap# slapd -h 'ldap:// ldaps://' -d1
        >>> slap_listener(ldaps://)
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_read(15): unable to get TLS client DN, error=49 id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        ber_get_next
        ber_get_next on fd 15 failed errno=0 (Success)
        connection_closing: readying conn=7 sd=15 for close
        connection_close: conn=7 sd=15

    I have searched for "unable to get TLS client DN, error=49 id=7", but no good solution to this seems to exist anywhere yet. Please help. Thanks.

    Well, I tried to fix something to get it to work, but now I get this:

        ldap:~# slapd -d 256 -f /etc/openldap/slapd.conf
        @(#) $OpenLDAP: slapd 2.4.11 (Nov 26 2009 09:17:06) $
            root@SD6-Casa:/tmp/buildd/openldap-2.4.11/debian/build/servers/slapd
        could not stat config file "/etc/openldap/slapd.conf": No such file or directory (2)
        slapd stopped.
        connections_destroy: nothing to destroy.

    What should I do now? Log:

        ldap:~# /etc/init.d/slapd start
        Starting OpenLDAP: slapd - failed.
        The operation failed but no output was produced. For hints on what went wrong
        please refer to the system's logfiles (e.g. /var/log/syslog) or try running the
        daemon in Debug mode like via "slapd -d 16383" (warning: this will create
        copious output).
        Below, you can find the command line options used by this script to run slapd.
        Do not forget to specify those options if you want to look to debugging output:
        slapd -h 'ldaps:///' -g openldap -u openldap -f /etc/ldap/slapd.conf

        ldap:~# tail /var/log/messages
        Feb 8 16:53:27 ldap kernel: [  123.582757] intel8x0_measure_ac97_clock: measured 57614 usecs
        Feb 8 16:53:27 ldap kernel: [  123.582801] intel8x0: measured clock 172041 rejected
        Feb 8 16:53:27 ldap kernel: [  123.582825] intel8x0: clocking to 48000
        Feb 8 16:53:27 ldap kernel: [  131.469687] Adding 240932k swap on /dev/hda5. Priority:-1 extents:1 across:240932k
        Feb 8 16:53:27 ldap kernel: [  133.432131] EXT3 FS on hda1, internal journal
        Feb 8 16:53:27 ldap kernel: [  135.478218] loop: module loaded
        Feb 8 16:53:27 ldap kernel: [  141.348104] eth0: link up, 100Mbps, full-duplex
        Feb 8 16:53:27 ldap rsyslogd: [origin software="rsyslogd" swVersion="3.18.6" x-pid="1705" x-info="http://www.rsyslog.com"] restart
        Feb 8 16:53:34 ldap kernel: [  159.217171] NET: Registered protocol family 10
        Feb 8 16:53:34 ldap kernel: [  159.220083] lo: Disabled Privacy Extensions

    Read the article

  • tproxy squid bridge very slow when cache is full

    - by Roberto
    I have installed a bridged tproxy proxy on a fast server with 8GB RAM. The traffic is around 60Mb/s. When I start the proxy for the first time (with the cache empty) it works very well, but when the cache becomes full (a few hours later) the bridge becomes very slow: the traffic drops below 10Mb/s and the proxy server becomes unusable. Any hints on what may be happening? I'm using:

        linux-2.6.30.10
        iptables-1.4.3.2
        squid-3.1.1

    compiled with these options:

        ./configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --datadir=/usr/share --localstatedir=/var/lib --sysconfdir=/etc/squid --libexecdir=/usr/libexec/squid --localstatedir=/var --datadir=/usr/share/squid --enable-removal-policies=lru,heap --enable-icmp --disable-ident-lookups --enable-cache-digests --enable-delay-pools --enable-arp-acl --with-pthreads --with-large-files --enable-htcp --enable-carp --enable-follow-x-forwarded-for --enable-snmp --enable-ssl --enable-async-io=32 --enable-linux-netfilter --enable-epoll --disable-poll --with-maxfd=16384 --enable-err-languages=Spanish --enable-default-err-language=Spanish

    My squid.conf:

        cache_mem 100 MB
        memory_pools off
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        acl localhost src ::1/128
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
        acl to_localhost dst ::1/128
        acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
        acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
        acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
        acl localnet src fc00::/7       # RFC 4193 local private network range
        acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
        acl net-g1 src xxx.xxx.xxx.xxx/24
        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT
        http_access allow manager localhost
        http_access deny manager
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow net-g1        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost
        http_access deny all
        http_port 3128
        http_port 3129 tproxy
        hierarchy_stoplist cgi-bin ?
        cache_dir ufs /var/spool/squid 8000 16 256
        access_log none
        cache_log /var/log/squid/cache.log
        coredump_dir /var/spool/squid
        refresh_pattern ^ftp:           1440    20%     10080
        refresh_pattern ^gopher:        1440    0%      1440
        refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
        refresh_pattern .

    I have this issue when the cache is full, but I do not really know if that is the cause. Thanks in advance, and sorry for my English.

    Read the article

  • Bidirectional real-time sync of large file tree between two distant linux servers

    - by dlo
    By large file tree I mean about 200k files, growing all the time, though a relatively small number of files change in any given hour. By bidirectional I mean that changes may occur on either server and need to be pushed to the other, so rsync on its own doesn't seem appropriate. By distant I mean that the servers are both in data centers, but geographically remote from each other. Currently there are only 2 servers, but that may expand over time. By real-time, it's OK for there to be a little latency between syncs, but running a cron every 1-2 minutes doesn't seem right, since only a very small fraction of files may change in any given hour, let alone minute.

    EDIT: This is running on VPSes, so I might be limited in the kinds of kernel-level stuff I can do. Also, the VPSes are not resource-rich, so I'd shy away from solutions that require lots of RAM (like Gluster?).

    What's the best / most "accepted" approach to get this done? This seems like it would be a common need, but I haven't been able to find a generally accepted approach yet, which was surprising. (I'm seeking the safety of the masses. :)

    I've come across lsyncd, which triggers a sync at the filesystem-change level. That seems clever though not super common, and I'm a bit confused by the various lsyncd approaches. There's just using lsyncd with rsync, but it seems this could be fragile for bidirectionality, since rsync doesn't have a notion of memory (e.g. to know whether a deleted file on A should be deleted on B, or whether it's a new file on B that should be copied to A). lipsync appears to be just an lsyncd+rsync implementation, right?

    Then there's using lsyncd with csync2, like this: http://www.axivo.com/community/threads/lightning-fast-synchronization-with-csync2-and-lsyncd.121/ ... I'm leaning towards this approach, but csync2 is a little quirky, though I did do a successful test of it. I'm mostly concerned that I haven't been able to find a lot of community confirmation of this method.

    People on here seem to like Unison a lot, but it seems that it is no longer under active development, and it's not clear that it has an automatic trigger like lsyncd. I've seen Gluster mentioned, but maybe it's overkill for what I need?

    UPDATE: FYI - I ended up going with the original solution I mentioned: lsyncd+csync2. It seems to work quite well, and I like the architectural approach of having the servers very loosely joined, so that each server can operate indefinitely on its own regardless of the link quality between them.

    Read the article

  • Apache Solr Admin on Tomcat Deployed in WebApps Directory

    - by KM01
    I am trying to get Apache Solr to work on RedHat 6 and Tomcat 6 (using these instructions), but I get this error when browsing to the admin section, http://localhost:8080/solr-example/admin:

        HTTP Status 404 - missing core name in path
        type Status report
        message missing core name in path
        description The requested resource (missing core name in path) is not available.

    http://localhost:8080/solr-example loads fine, with a link to "Solr Admin." My setup is as follows:

        tomcat6: /etc/tomcat6
        Solr: /app/solr/example

    I have a solr-example.xml in /etc/tomcat6/Catalina/localhost/, which reads:

        <?xml version="1.0" encoding="utf-8"?>
        <Context docBase="/app/solr/example/apache-solr-3.4.0.war" debug="0" crossContext="true">
          <Environment name="solr/home" type="java.lang.String" value="/app/solr/example" override="true"/>
        </Context>

    I don't see anything in the logs (/var/log/tomcat6)... the only entries in catalina.out are about the starting and stopping of tomcat6. My questions are:

    1. What else do I need to do to get "Solr Admin" to work under Tomcat?
    2. Where are these "cores" supposed to be specified? I see an entry in /app/solr/example/solr/solr.xml:

        <solr persistent="false">
          <!-- adminPath: RequestHandler path to manage cores.
               If 'null' (or absent), cores will not be manageable via request handler -->
          <cores adminPath="/admin/cores" defaultCoreName="collection1">
            <core name="collection1" instanceDir="." />
          </cores>
        </solr>

    3. How do I go about ensuring that logging is working correctly? I can't find logs that contain any mention of the 404 above.

    Update in response to @quanta's comment: I downloaded the former (apache-solr-3.4.0.tgz). dataDir was not set; it is now set to:

        <dataDir>${solr.data.dir:../solr/data}</dataDir>

    JAVA_OPTS:

        /usr/lib/jvm/java/bin/java -classpath :/usr/share/tomcat6/bin/bootstrap.jar:/usr/share/tomcat6/bin/tomcat-juli.jar:/usr/share/java/commons-daemon.jar -Dcatalina.base=/usr/share/tomcat6 -Dcatalina.home=/usr/share/tomcat6 -Djava.endorsed.dirs= -Djava.io.tmpdir=/var/cache/tomcat6/temp -Djava.util.logging.config.file=/usr/share/tomcat6/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager org.apache.catalina.startup.Bootstrap start

    catalina.out contains no indication of the above error.

    Read the article

  • HTTP not working EC2 instance with own domain name

    - by bogdanvursu
    I have this problem I've already posted on the Amazon AWS forum. Unfortunately I haven't got a clear answer, and I was hoping you guys could help. Here's the link: http://developer.amazonwebservices.com/connect/thread.jspa?messageID=198238#198207

    Basically, I don't know why, after associating an Elastic IP address and mapping it to one of my domains, FTP and ping work fine but HTTP does a 302 redirect to the Amazon AWS hostname I had before associating the Elastic IP address. Here's the question from the AWS forum:

    I have an EC2 instance with HTTP and FTP installed. They both worked. Then I associated an Elastic IP address with that instance, and mapped that IP address to a name which is a subdomain of a domain I own. I think it's an A record (I didn't do the mapping personally). Now FTP works and HTTP doesn't.

        The AWS host name before the Elastic IP association: ec2-184-73-27-8.compute-1.amazonaws.com
        The AWS IP address and host name after the association: 174.129.7.254 and ec2-174-129-7-254.compute-1.amazonaws.com
        The domain mapped to 174.129.7.254 using an A record:  demo.flashxml.net

    "FTP works" means that I can connect to 174.129.7.254, ec2-174-129-7-254.compute-1.amazonaws.com and demo.flashxml.net. "HTTP doesn't work" means that an HTTP request to 174.129.7.254, ec2-174-129-7-254.compute-1.amazonaws.com or demo.flashxml.net returns a 302 redirect to ec2-184-73-27-8.compute-1.amazonaws.com. Here is my VirtualHost file:

        <VirtualHost *:80>
            DocumentRoot /home/ec2-user/public_html/wordpress
            ServerName demo.flashxml.net
            ErrorLog logs/ec2-user-error_log
            <Directory /home/ec2-user/public_html/wordpress>
                AllowOverride FileInfo
                Order Deny,Allow
                Allow from All
            </Directory>
        </VirtualHost>

    I finally figured out what was wrong: I had installed WordPress on the server using the hostname provided by Amazon. After associating the Elastic IP and updating the DNS records, the server was reachable - FTP working was the proof of that. The 302 redirect when accessing via HTTP was caused by WordPress's hostname settings. So, what I've learned from all this is that I should set up my IP and DNS first, and only after that install WordPress or any other web app(s).
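
    For reference, WordPress stores the install-time hostname in its home/siteurl settings, which is what produces a redirect like the one above. A sketch of the usual fix; whether the poster changed the database options or used wp-config.php constants is not stated, so treat this as one possible approach only:

        // Sketch: pin WordPress to the new hostname in wp-config.php so it
        // stops redirecting to the hostname it was installed under.
        define('WP_HOME',    'http://demo.flashxml.net');
        define('WP_SITEURL', 'http://demo.flashxml.net');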

    Read the article

  • Subversion vision and roadmap

    - by gbjbaanb
    Recently C. Michael Pilato of the core Subversion team posted a mail to the Subversion dev mailing list suggesting a vision and roadmap for the future of Subversion. Naturally, he wanted as much feedback and response as possible, which is why I'm posting this here: to elicit some suggestions and contributions from you, the administrators of Subversion. Any comments are welcome, and I shall feed back a synopsis with a link to this question to the dev mailing list. Similarly, I've created a post on StackOverflow to get feedback from the programmer/user side of things too. So, without further ado:

    Vision

    The first thing in his "vision statement" is:

    "Subversion has no future as a DVCS tool. Let's just get that out there. At least two very successful such tools exist already, and to squeeze another horse into that race would be a poor investment of energy and talent."

    There's no need to suggest distributed features for Subversion. If you want a DVCS, there should be no ill feeling if you migrate to Git, Mercurial or Bazaar. As he says, it's pointless trying to make SVN like them when they already exist, especially when there are different usage patterns that SVN should be targeting. The vision for Subversion is:

    "Subversion exists to be universally recognized and adopted as an open-source, centralized version control system characterized by its reliability as a safe haven for valuable data; the simplicity of its model and usage; and its ability to support the needs of a wide variety of users and projects, from individuals to large-scale enterprise operations."

    Roadmap

    Several ideas were suggested as being "very nice to have" and are offered as the starting point of a future roadmap:

    - Obliterate
    - Shelve/Checkpoint
    - Repository-dictated Configuration
    - Rename Tracking
    - Improved Merging
    - Improved Tree Conflict Handling
    - Enterprise Authentication Mechanisms
    - Forward History Searching
    - Log Message Templates

    If anyone has suggestions to add, or comments on these, the Subversion community would welcome all of them.

    Community

    And lastly, there was a call for more people to become involved with Subversion development. As with most OSS projects it can be daunting to join, but there is now a push for more to be done to help. If you feel you can contribute, please do so.

    Read the article

  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this:

        Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed
        Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory
        Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory
        Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory

    Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server; not sure if the Linux flavor matters):

    - current values: 45984 61312 91968
    - new values: 183936 245248 367872

    After that, we started seeing a bizarre error message:

        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long!
        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval!

    Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval, which defaults to 600 and controls periodic flushing of the route cache. The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad: the CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However, we set this to run once a day to keep memory leaks at bay (though we've never had one).

    While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity, which seems to be a better option for keeping the route table size in check: gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes.

    We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here:

        /proc/sys/net/ipv4/route/gc_elasticity (8)
        /proc/sys/net/ipv4/route/gc_interval (60)
        /proc/sys/net/ipv4/route/gc_min_interval (0)
        /proc/sys/net/ipv4/route/gc_timeout (300)
        /proc/sys/net/ipv4/route/secret_interval (600)
        /proc/sys/net/ipv4/route/gc_thresh (?)
        rhash_entries (kernel parameter, default unknown?)

    We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune for a high-traffic HAProxy instance?
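    For reference, a minimal sketch of how these tunables are applied and persisted on Ubuntu; the values shown are the ones discussed above, not a recommendation, and whether they suit a given load is exactly the open question:

        # apply at runtime (lost on reboot)
        sysctl -w net.ipv4.tcp_mem="183936 245248 367872"
        sysctl -w net.ipv4.route.gc_elasticity=4

        # persist across reboots
        cat >> /etc/sysctl.conf <<'EOF'
        net.ipv4.tcp_mem = 183936 245248 367872
        net.ipv4.route.gc_elasticity = 4
        EOF
        sysctl -p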

    Read the article

  • VMWare-Mount not recognizing virtual disks

    - by user36175
    I have two disks as .vmdk files, and four as .vdi files. I can boot virtual machines on them with Sun xVM VirtualBox, and they work just fine. However, I want to mount them on my local computer so I can read some files off of them without starting a virtual machine. I downloaded the vmware-mount utility, but I get this error even when mounting .vmdk files, which should be VMware images:

        Unable to mount the virtual disk. The disk may be in use by a virtual
        machine, may not have enough volumes or mounted under another drive
        letter. If not, verify that the file is a valid virtual disk file.

    Thinking it was a problem with the utility, I downloaded the SDK and made my own simple program in C to try to mount a disk. It just initializes the API, connects to it, then attempts to open the disk. I get this error, once again claiming it is not a virtual disk:

        **LOG: DISKLIB-DSCPTR: descriptor above max size: I64u
        **LOG: DISKLIB-LINK : "f:\programming\VMs\windowstrash.vdi" : failed to open (The file specified is not a virtual disk).
        **LOG: DISKLIB-CHAIN : "f:\programming\VMs\windowstrash.vdi" : failed to open (The file specified is not a virtual disk).
        **LOG: DISKLIB-LIB : Failed to open 'f:\programming\VMs\windowstrash.vdi' with flags 0x1e (The file specified is not a virtual disk).
        ** FAILURE ** : The file specified is not a virtual disk

    The files are clearly virtual disks, though, since I can actually mount and use them with a virtual machine. I tried detaching them from any VMs and trying again, but got the same results. Any ideas? Maybe the "descriptor above max size" is a hint?

    Some more info: the .vmdk disks were created on other computers; I just copied them to mine and created new VMs around them, and they work fine. All the .vdi files were created on my machine. Not sure if that affects anything.

    Update: WinMount can mount the file, so the problem seems to be with vmware-mount.
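    Worth noting: .vdi is VirtualBox's native format, which vmware-mount does not understand, so the .vdi failures are expected; only the .vmdk rejections are surprising. If the goal is just to read files off the .vdi disks, one hedged workaround is converting them to VMDK with VBoxManage first (clonehd is the subcommand from VirtualBox builds of that era; newer releases call it clonemedium, and the filename below is taken from the log output above):

        # convert a VirtualBox disk to a VMDK that vmware-mount can try to open
        VBoxManage clonehd windowstrash.vdi windowstrash.vmdk --format VMDK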

    Read the article
