Search Results

Search found 11888 results on 476 pages for 'hero vs zero'.


  • Why doesn't the number 2 work in this for-loop?

    - by Emil
    Hello. I have a function that runs through each element in an array. It's hard to explain, so I'll just paste in the code here:

        NSLog(@"%@", arraySub);
        for (NSString *string in arrayFav) {
            int favoriteLoop = [string intValue] + favCount;
            NSLog(@"%d", favoriteLoop);
            id arrayFavObject = [array objectAtIndex:favoriteLoop];
            [arrayFavObject retain];
            [array removeObjectAtIndex:favoriteLoop];
            [array insertObject:arrayFavObject atIndex:0];
            [arrayFavObject release];
            id arraySubFavObject = [arraySub objectAtIndex:favoriteLoop];
            [arraySubFavObject retain];
            [arraySub removeObjectAtIndex:favoriteLoop];
            [arraySub insertObject:arraySubFavObject atIndex:0];
            [arraySubFavObject release];
            id arrayLengthFavObject = [arrayLength objectAtIndex:favoriteLoop];
            [arrayLengthFavObject retain];
            [arrayLength removeObjectAtIndex:favoriteLoop];
            [arrayLength insertObject:arrayLengthFavObject atIndex:0];
            [arrayLengthFavObject release];
        }
        NSLog(@"%@", arraySub);

    The array arrayFav contains these strings: "3", "8", "2", "10", "40". The array array contains 92 strings with a name. The array arraySub contains the numbers 0 to 91, each representing a filename with a title from array. The array arrayLength contains 92 strings representing the size of each file from arraySub. Now, the first NSLog shows, as expected, the numbers 0 to 91. The NSLog calls in the loop show the numbers 3, 8, 2, 10, 40, also as expected. But here's the odd part: the last NSLog shows these numbers: 40, 10, 0, 8, 3, 1, 2, 4, 5, 6, 7, 9, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91 - that is, 40, 10, 0, 8, 3, and so on. There was not supposed to be a zero in there; it was supposed to be a 2. Do you have any idea why this is happening, or a way to fix it? Thank you.
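
    One way around the index shifting that the loop above runs into (a sketch only, reusing the variable names from the question) is to look the favourite objects up before the arrays are mutated, and then move them by identity rather than by index:

        NSMutableArray *favObjects = [NSMutableArray array];
        for (NSString *string in arrayFav) {
            NSUInteger idx = [string intValue] + favCount;
            [favObjects addObject:[arraySub objectAtIndex:idx]];
        }
        for (id obj in favObjects) {
            [[obj retain] autorelease];              // keep the object alive across the remove
            [arraySub removeObjectIdenticalTo:obj];  // no stale index involved
            [arraySub insertObject:obj atIndex:0];
        }

    The same pattern would apply to array and arrayLength. The stray zero appears because each remove/insert-at-0 pair shifts the elements sitting in front of the not-yet-processed indexes, so by the time favourite "2" is handled, index 2 no longer holds the element it held at the start.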


  • Maybe an IE z-index bug?

    - by baikaishiuc
    Below is my code. when open the page in ie browser, then select the text in div, the text will be replaced by some shadow quad blank . If you delete a line z-index:0, in css class test1, the ie will perform correctly. In my project , the z-index must be set greater than zero, so I couldn't delete the line. I found a solution is to set bg_img.filter = "" when pannel.z-index greater than 0, then ie will also working good. But unfortunately, the bg_img.filter.alpha must be set, too. So how could I do? test .test1 { position:absolute; background:#ffffff; left:20px; top:20px; border:1px solid; width:198px; height:500px; filter: progid:DXImageTransform.Microsoft.Shadow(color="#999999", Direction=135, Strength=5); z-index:0; } </style> <script> function init () { var pannel = document.createElement ('div'); var bg_img = document.createElement ('div'); var head = document.createElement ('div'); pannel.setAttribute('class', 'test1'); pannel.setAttribute('className', 'test1'); bg_img.style.cssText = "position:relative;left:0px;top:0px;" + "width:198px;" + "height:500px;" + "filter:alpha(opacity=100);"; head.style.cssText = "position:absolute;" + "left:0px;" + "top:0px;" + "width:180px;" + "height:20px;"; document.body.appendChild (pannel); pannel.appendChild(bg_img); pannel.appendChild(head); head.innerHTML = "<div>yusutechasdf</div><div>innerhtml</div>" } </script>


  • How can I obtain the IP address of my server program?

    - by Dr Dork
    Hello! This question is related to another question I just posted. I'm prepping for a simple work project and am trying to familiarize myself with the basics of socket programming in a Unix dev environment. At this point, I have some basic server side code and client side code setup to communicate. Currently, my client code successfully connects to the server code and the server code sends it a test message, then both quit out. Perfect! That's exactly what I wanted to accomplish. Now I'm playing around with the functions used to obtain info about the two environments (server and client). I'd like to obtain my server program's IP address. Here's the code I currently have to do this, but it's not working... int sockfd; unsigned int len; socklen_t sin_size; char msg[]="test message"; char buf[MAXLEN]; int st, rv; struct addrinfo hints, *serverinfo, *p; struct sockaddr_storage client; char s[INET6_ADDRSTRLEN]; char ip[INET6_ADDRSTRLEN]; //zero struct memset(&hints,0,sizeof(hints)); hints.ai_family = AF_UNSPEC; hints.ai_socktype = SOCK_STREAM; hints.ai_flags = AI_PASSIVE; //get the server info if((rv = getaddrinfo(NULL, SERVERPORT, &hints, &serverinfo ) != 0)){ perror("getaddrinfo"); exit(-1); } // loop through all the results and bind to the first we can for( p = serverinfo; p != NULL; p = p->ai_next) { //Setup the socket if( (sockfd = socket( p->ai_family, p->ai_socktype, p->ai_protocol )) == -1 ) { perror("socket"); continue; } //Associate a socket id with an address to which other processes can connect if(bind(sockfd, p->ai_addr, p->ai_addrlen) == -1){ close(sockfd); perror("bind"); continue; } break; } if( p == NULL ){ perror("Fail to bind"); } inet_ntop(p->ai_family, get_in_addr((struct sockaddr *)p->ai_addr), s, sizeof(s)); printf("Server has TCP Port %s and IP Address %s\n", SERVERPORT, s); and the output for the IP is always empty... server has TCP Port 21412 and IP Address :: any ideas for what I'm missing? thanks in advance for your help! this stuff is really complicated at first.
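
    Because getaddrinfo() is called with a NULL node and AI_PASSIVE, the address that gets bound (and later printed) is the wildcard address, which renders as "::" - so that output is actually expected. A rough sketch of one workaround (it reuses the get_in_addr() helper referenced in the code above and assumes <unistd.h> is included for gethostname()) is to resolve the machine's own hostname instead:

        char hostname[256];
        char ipstr[INET6_ADDRSTRLEN];
        struct addrinfo lookup_hints, *res;
        memset(&lookup_hints, 0, sizeof(lookup_hints));
        lookup_hints.ai_family   = AF_UNSPEC;
        lookup_hints.ai_socktype = SOCK_STREAM;
        if (gethostname(hostname, sizeof(hostname)) == 0 &&
            getaddrinfo(hostname, NULL, &lookup_hints, &res) == 0) {
            inet_ntop(res->ai_family,
                      get_in_addr((struct sockaddr *)res->ai_addr),
                      ipstr, sizeof(ipstr));
            printf("Server has IP Address %s\n", ipstr);
            freeaddrinfo(res);
        }

    Note this prints whatever the hostname resolves to, which may be 127.0.0.1 on some setups; enumerating interfaces with getifaddrs() is the more thorough route.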


  • MySQL Database Query - Codeigniter

    - by user2450349
    I am building an application with Codeigniter and need some help with a DB query. I have a table called users with the following fields: user_id, user_name, user_password, user_email, user_role, user_manager_id In my app, I pull all records from the user table using the following: function get_clients() { $this->db->select('*'); $this->db->where('user_role', 'client'); $this->db->order_by("user_name", "Asc"); $query = $this->db->get("users"); return $query->result_array(); } This works as expected, however when I display the results in the view, I also want to display a new column called Manager which will display the managers user_name field. The user_manager_id is the id of the user from the same table. Im guessing you can create an outer join on the same table but not sure. In the view, I am displaying the returned info as follows: <table class="table table-striped" id="zero-configuration"> <thead> <tr> <th>Name</th> <th>Email</th> <th>Manager</th> </tr> </thead> <tbody> <?php foreach($clients as $row) { ?> <tr> <td><?php echo $row['user_name']; ?> (<?php echo $row['user_username']; ?>)</td> <td><?php echo $row['user_email']; ?></td> <td><?php echo $row['???']; ?></td> </tr> <?php } ?> </tbody> </table> Any idea of how I can form the query and display the manager name is the view? Example: user_id user_name user_password user_email user_role user_manager_id 1 Ollie adjjk34jcd [email protected] client null 2 James djklsdfsdjk [email protected] client 1 When i query the database, i want to display results like this: Ollie [email protected] James [email protected] Ollie
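
    A self-join is indeed one way to do it. A sketch using CodeIgniter's query builder (the manager_name alias is an invented name; table and column names come from the question):

        function get_clients()
        {
            $this->db->select('c.*, m.user_name AS manager_name');
            $this->db->from('users c');
            $this->db->join('users m', 'm.user_id = c.user_manager_id', 'left');
            $this->db->where('c.user_role', 'client');
            $this->db->order_by('c.user_name', 'asc');
            return $this->db->get()->result_array();
        }

    In the view the extra column is then available as $row['manager_name'], and it is simply empty when user_manager_id is null (as for Ollie in the example data).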


  • Efficient method of finding database rows that have *one or more* qualities from a list of seven qualities

    - by hithere
    Hello! For this question, I'm looking to see if anyone has a better idea of how to implement what I'm currently planning on implementing (below): I'm keeping track of a set of images, using a database. Each image is represented by one row. I want to be able to search for images, using a number of different search parameters. One of these parameters involves a search-by-color option. (The rest of the search stuff is currently working fine.) Images in this database can contain up to seven colors: -Red -Orange -Yellow -Green -Blue -Indigo -Violet Here are some example user queries: "I want an image that contains red." "I want an image that contains red and blue." "I want an image that contains yellow and violet." "I want an image that contains red, orange, yellow, green, blue, indigo and violet." And so on. Users make this selection through the use of checkboxes in an html form. They can check zero checkboxes, all seven, and anything in between. I'm curious to hear what people think would be the most efficient way to perform this database search. I have two possible options right now, but I feel like there must be something better that I'm not thinking of. (Option 1) -For each row, simply have seven additional fields in the database, one for each color. Each field holds a 1 or 0 (true/false) value, and I SELECT based on whatever the user has checked off. (I didn't like this solution so much, because it seemed kind of wasteful to add seven additional fields...especially since most pictures in this table will only have 3-4 colors max, though some could have up to 7. So that means I'm storing a lot of zeros.) Also, if I added more searchable colors later on (which I don't think I will, but it's always possible), I'd have to add more fields. (Option 2) -For each image row, I could have a "colors" text field that stores space-separated color names (or numbers for the sake of compactness). Then I could do a fulltext match against search through the fields, selecting rows that contain "red yellow green" (or "1 3 4"). But I kind of didn't want to do fulltext searching because I already allow a keyword search, and I didn't really want to do two fulltext searches per image search. Plus, if the database gets big, fulltext stuff might slow down. Any better options that I didn't think of? Thanks! Side Note: I'm using PHP to work with a MySQL database.
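
    A third option worth weighing (a sketch; the column name, table name and flag values are placeholders): pack the seven flags into one integer column and filter with bitwise operators, which avoids both the seven boolean fields and the fulltext index.

        -- red=1, orange=2, yellow=4, green=8, blue=16, indigo=32, violet=64
        ALTER TABLE images ADD COLUMN colors TINYINT UNSIGNED NOT NULL DEFAULT 0;

        -- images containing red AND blue (1 | 16 = 17):
        SELECT * FROM images WHERE (colors & 17) = 17;

        -- images containing at least one of red, blue:
        SELECT * FROM images WHERE (colors & 17) <> 0;

    The trade-off is that a bitwise WHERE clause cannot use an ordinary index, so it still scans the candidate rows, much like option 1.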


  • Counting runs of identical elements (0,1) in a 1-dim array

    - by Chris
    Hello, I have a 1-dim array filled with 40 random elements (all values are either 0 or 1). I want to count the longest run of equal values next to each other. For example, in 111100101 the longest run would be 1111 (4 elements of the same kind in a row), and 011100 would give a longest run of 3 elements (3 x 1). My problem is that I have no idea how to look at the "next element" and check whether it is a 0 or a 1. For instance, the first run would be 1111 (count 4), but the next value is a 0, meaning I have to stop counting. My idea was to place this value (4) in another array and reset the counter for the 1s back to zero, then start the process all over again. To find the largest value I have already written another method that looks up the biggest value in the array that keeps track of the counts of 0s and 1s; that part is not the problem. But I cannot find a way to fill the array tabelLdr with the lengths of each group of equal elements (0 or 1). In the code below I have two ifs, and of course it never enters the second if (which was meant to check whether the next value in the array differs from the current state, 0 or 1). Best regards.

        public void BerekenDeelrij(byte[] tabel, byte[] tabelLdr)
        {
            byte LdrNul = 0, Ldréén = 0;
            //byte teller = 0;
            for (byte i = 0; i < tabel.Length; i++)
            {
                if (tabel[i] == 0)
                {
                    LdrNul++;
                    // this 2nd if clearly does not work, but I have no idea how to implement this sort of idea in my program
                    if (tabel[i] == 1) // if value != 0 then the total value gets put in the second array tabelLdr
                    {
                        tabelLdr[i] = LdrNul;
                        LdrNul = 0;
                    }
                }
                if (tabel[i] == 1)
                {
                    Ldréén++;
                    if (tabel[i] == 0)
                    {
                        tabelLdr[i] = Ldréén;
                        Ldréén = 0;
                    }
                }
            } /*for*/
        }
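
    For what it is worth, the run can be tracked with a single counter by comparing each element with the one before it; a minimal sketch (method and variable names are made up, and it returns the longest run rather than filling tabelLdr):

        public static int LangsteDeelrij(byte[] tabel)
        {
            int best = 0, run = 0;
            for (int i = 0; i < tabel.Length; i++)
            {
                // same value as the previous element? extend the run, otherwise restart at 1
                run = (i > 0 && tabel[i] == tabel[i - 1]) ? run + 1 : 1;
                if (run > best) best = run;
            }
            return best;
        }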


  • MySQL search using a for loop in PHP

    - by deb
    hi, i am a beginner. but I'm practicing a lot for few days with php mysql, and I am trying to use for loop to search an exploded string, one by one from mysql server. Till now I have no results. I'm giving my codes, <?php // Example 1 $var = @$_GET['s'] ; $limit=500; echo " "; echo "$var"; echo " "; $trimmed_array = explode(" ", $var); echo "$trimmed_array[0]"; // piece1 echo " "; $count= count($trimmed_array); echo $count; for($j=0;$j<$count;$j++) { e cho "$trimmed_array[$j]";; echo " "; } echo " "; for($i=0; $i<$count ; $i++){ $query = "select * from book where name like \"%$trimmed_array[$i]%\" order by name"; $numresults=mysql_query($query); $numrows =mysql_num_rows($numresults); if ($numrows == 0) { echo "<h4>Results</h4>"; echo "<p>Sorry, your search: &quot;" . $trimmed_array[i] . "&quot; returned zero results</p>"; } if (empty($s)) { $s=0; } $query .= " limit $s,$limit"; $result = mysql_query($query) or die("Couldn't execute query"); echo "<p>You searched for: &quot;" . $var . "&quot;</p>"; echo "Results<br /><br />"; $count=1; while ($row= mysql_fetch_array($result)) { $name = $row["name"]; $publisher=$row["publisher"]; $total=$row["total"]; $issued=$row["issued"]; $available=$row["available"]; $category=$row["category"]; echo "<table border='1'><tr><td>$count)</td><td>$name&nbsp;</td><td>$publisher&nbsp;</td><td>$total&nbsp;</td><td>$issued&nbsp;</td><td>$available&nbsp;</td><td>$category&nbsp;</td></tr></table>" ; $count++ ; } } ?>


  • Users and roles in context

    - by Eric W.
    I'm trying to get a sense of how to implement the user/role relationships for an application I'm writing. The persistence layer is Google App Engine's datastore, which places some interesting (but generally beneficial) constraints on what can be done. Any thoughts are appreciated. It might be helpful to keep things very concrete. I would like there to be organizations, users, test content and test administrations (records of tests that have been taken). A user can have the role of participant (test-taker), contributor of test material or both. A user can also be a member of zero or more organizations. In the role of participant, the user can see the previous administrations of tests he or she has taken. The user can also see a test administration of another participant if that participant has given the user authorization. The user can see test material that has been made public, and he or she can see restricted content as a participant during a specific administration of a test for which that user has been authorized by an organization. As a member of an organization, the user can see restricted content in the role of contributor, and he or she might or might not also be able to edit the content. Each organization should have one or more administrators that can determine whether a member can see and edit content and determine who has admin privileges. There should also be one or more application-wide superusers that can troubleshoot and solve problems. Members of organizations can see the administrations of tests that the participants concerned have authorized them to see, and they can see anonymous data if no authorization has been given. A user cannot see the test results of another user in any other circumstances. Since there are no joins in the App Engine datastore, it might be necessary to have things less normalized than usual for the typical SQL database in order to ensure that queries that check permissions are fast (e.g., ones that determine whether a link is to be displayed). My questions are: How do I move forward on this? Should I spend a lot of time up front in order to get the model right, or can I iterate several times and gradually roll in additional complexity? Does anyone have some general ideas about how to break things up in this instance? Are there any GAE libraries that handle roles in a way that is compatible with this arrangement?
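
    As a starting point only - a sketch of one way the entities could be laid out with the old google.appengine.ext.db API the question implies (every class and property name here is an assumption, not an established design):

        from google.appengine.ext import db

        class Organization(db.Model):
            name = db.StringProperty(required=True)
            admin_keys = db.ListProperty(db.Key)        # organization administrators

        class Account(db.Model):                        # avoids clashing with users.User
            org_keys = db.ListProperty(db.Key)          # zero or more organizations
            roles = db.StringListProperty()             # e.g. ["participant", "contributor"]

        class TestAdministration(db.Model):
            participant = db.ReferenceProperty(Account)
            viewer_keys = db.ListProperty(db.Key)       # denormalized ACL for fast checks

    Denormalizing the authorized-viewer keys onto each administration keeps the permission check down to a single equality filter on a list property, which fits the no-join constraint mentioned above.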


  • IO::Pipe - close(<handle>) does not set $?

    - by danboo
    My understanding is that closing the handle for an IO::Pipe object should be done with the method ($fh->close) and not the built-in (close($fh)). The other day I goofed and used the built-in out of habit on a IO::Pipe object that was opened to a command that I expected to fail. I was surprised when $? was zero, and my error checking wasn't triggered. I realized my mistake. If I use the built-in, IO:Pipe can't perform the waitpid() and can't set $?. But what I was surprised by was that perl seemed to still close the pipe without setting $? via the core. I worked up a little test script to show what I mean: use 5.012; use warnings; use IO::Pipe; say 'init pipes:'; pipes(); my $fh = IO::Pipe->reader(q(false)); say 'post open pipes:'; pipes(); say 'return: ' . $fh->close; #say 'return: ' . close($fh); say 'status: ' . $?; say q(); say 'post close pipes:'; pipes(); sub pipes { for my $fd ( glob("/proc/self/fd/*") ) { say readlink($fd) if -p $fd; } say q(); } When using the method it shows the pipe being gone after the close and $? is set as I expected: init pipes: post open pipes: pipe:[992006] return: 1 status: 256 post close pipes: And, when using the built-in it also appears to close the pipe, but does not set $?: init pipes: post open pipes: pipe:[952618] return: 1 status: 0 post close pipes: It seems odd to me that the built-in results in the pipe closure, but doesn't set $?. Can anyone help explain the discrepancy? Thanks!


  • % or _ in the search form displays all results

    - by fusion
    If the search form is blank, it should display an error telling the user to enter something, and it should only show results that contain the keywords the user entered in the search textbox. However, if the user enters %, _ or +, it displays all results. How do I display an error when the user enters these wildcard characters? My search PHP code:

        $search_result = "";
        $search_result = $_GET["q"];
        $search_result = trim($search_result);
        if ($search_result == "") {
            echo "<p>Search Error</p><p>Please enter a search...</p>";
            exit();
        }

        $result = mysql_query('SELECT cQuotes, vAuthor, cArabic, vReference FROM thquotes WHERE cQuotes LIKE "%'
            . mysql_real_escape_string($search_result)
            . '%" ORDER BY idQuotes DESC', $conn)
            or die('Error: ' . mysql_error());

        // there's either one or zero records. Again, no need for a while loop
        function h($s) {
            echo htmlspecialchars($s, ENT_QUOTES);
        }
        ?>
        <div class="caption">Search Results</div>
        <div class="center_div">
        <table>
        <?php while ($row = mysql_fetch_array($result)) { ?>
        <tr>
            <td style="text-align:right; font-size:15px;"><?php h($row['cArabic']); ?></td>
            <td style="font-size:16px;"><?php h($cQuotes); ?></td>
            <td style="font-size:12px;"><?php h($row['vAuthor']); ?></td>
            <td style="font-size:12px; font-style:italic; text-align:right;"><?php h($row['vReference']); ?></td>
        </tr>
        <?php } ?>
        </table>
        <?php ?>
        </div>
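
    Rather than rejecting those characters outright, another option (a sketch built on the code above) is to escape them so LIKE treats them literally; addcslashes() applied after mysql_real_escape_string() covers % and _:

        $term = mysql_real_escape_string($search_result);
        $term = addcslashes($term, '%_');   // % and _ no longer act as wildcards
        $sql  = 'SELECT cQuotes, vAuthor, cArabic, vReference FROM thquotes'
              . ' WHERE cQuotes LIKE "%' . $term . '%" ORDER BY idQuotes DESC';
        $result = mysql_query($sql, $conn) or die('Error: ' . mysql_error());

    If an error message is preferred instead, a simple strpbrk($search_result, '%_') check before querying will catch those characters.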


  • How to run VisualSVN Server on port 443 running IIS on same server?

    - by Metro Smurf
    Server 2008 R2 SP1 VisualSVN Server 2.1.6 The IIS server has about 10 sites. One of them uses https over port 443 with the following bindings: http x.x.x.39:80 site.com http x.x.x.39:80 www.site.com https x.x.x.39:443 VisualSVN Server Properties server name: svn.SomeSite.com server port: 443 Server Binding: x.x.x.40 No sites on IIS are listening to x.x.x.40. When starting up VisualSVN server, the following errors are thrown: make_sock: could not bind to address x.x.x.40:443 (OS 10013) An attempt was made to access a socket in a way forbidden by its access permissions. no listening sockets available, shutting down When I stop Site.com on IIS, then VisualSVN Server starts up without a problem. When I bind VisualSVN server to port 8443 and start Site.com, then VisualSVN Server starts without a problem. My goal is to be able to access the VisualSvn Server with a normal url, i.e., one that does't use a port number in the address: https://svn.site.com vs https://svn.site.com:8443 What needs to be configured to allow VisualSVN Server to run on port 443 with IIS running on the same server? Edit / Answer The answer provided by Ivan did point me in the right direction. For anyone else running into this, here is a bit more information. Even though my IIS had no bindings set to the IP address I am using for VisualSvn, IIS will still take the IP address hostage unless IIS is explicitly told which IP addresses to listen to. There is no GUI in Win Server 2k8 to configure the IP addresses for IIS to listen; by default, IIS listens to all IP addresses assigned to the server. The following will help configure IIS to only listen to the IP addresses you want: open a command prompt enter: netsh enter: http enter: show iplisten -- this will show a table of the IP addresses IIS is listening to. By default, the table will be empty (I guess this means IIS listens to all IP's) For each IP address IIS should listen to, enter: add iplisten ipaddress=x.x.x.x enter: show iplisten -- you should now see all the IP addresses added to the listening table. Exit and then reset IIS. Each of these commands can also be run directly, i.e., netsh http show iplisten If you need to delete an IP address from the listening table: open a command prompt enter: netsh enter: http enter: delete iplisten ipaddress=x.x.x.x Exit and then reset IIS.
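
    For reference, the interactive netsh session described above can also be run as one-shot commands (the IP is a placeholder for whichever addresses IIS should keep listening on):

        netsh http add iplisten ipaddress=x.x.x.39
        netsh http show iplisten
        iisreset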


  • How to install ffmpeg in cPanel

    - by Ajay Chthri
    i'm using dedicated server(linux) so i need to install ffmpeg in cpanel so here ffmpeg i found in Main Software Install a Perl Module but i writing script in php so how can i install ffmpeg phpperl when i'am trying to install ffmpeg in perl module i get this response Checking C compiler....C compiler (/usr/bin/cc) OK (cached Tue Jan 17 19:16:31 2012)....Done CPAN fallback is disabled since /var/cpanel/conserve_memory exists, and cpanm is available. Method: Using Perl Expect, Installer: cpanm You have make /usr/bin/make Falling back to HTTP::Tiny 0.009 You have /bin/tar: tar (GNU tar) 1.15.1 You have /usr/bin/unzip You have Cpanel::HttpRequest 2.1 Testing connection speed...(using fast method)...Done Ping:2 (ticks) Testing connection speed to cpan.knowledgematters.net using pureperl...(28800.00 bytes/s)...Done Ping:2 (ticks) Testing connection speed to cpan.develooper.com using pureperl...(22233.33 bytes/s)...Done Ping:2 (ticks) Testing connection speed to cpan.schatt.com using pureperl...(32750.00 bytes/s)...Done Ping:3 (ticks) Testing connection speed to cpan.mirror.facebook.net using pureperl...(14050.00 bytes/s)...Done Ping:2 (ticks) Testing connection speed to cpan.mirrors.hoobly.com using pureperl...(5150.00 bytes/s)...Done Five usable mirrors located Ping:0 (ticks) Testing connection speed to 208.109.109.239 using pureperl...(28950.00 bytes/s)...Done Ping:2 (ticks) Testing connection speed to 208.82.118.100 using pureperl...(19300.00 bytes/s)...Done Ping:1 (ticks) Testing connection speed to 69.50.192.73 using pureperl...(19300.00 bytes/s)...Done Three usable fallback mirrors located Mirror Check passed for cpan.schatt.com (/index.html) Searching on cpanmetadb ... Fetching http://cpanmetadb.cpanel.net/v1.0/package/Video::FFmpeg?cpanel_version=11.30.5.6&cpanel_tier=release (connected:0).......(request attempt 1/12)...Using dns cache file /root/.HttpRequest/cpanmetadb.cpanel.net......searching for mirrors (mirror search attempt 1/3)......5 usable mirrors located. (less then expected)......mirror search success......connecting to 208.74.123.82...@208.74.123.82......connected......receiving...100%......request success......Done Searching Video::FFmpeg on cpanmetadb (http://cpanmetadb.cpanel.net/v1.0/package/Video::FFmpeg?cpanel_version=11.30.5.6&cpanel_tier=release) ... Fetching http://cpanmetadb.cpanel.net/v1.0/package/Video::FFmpeg?cpanel_version=11.30.5.6&cpanel_tier=release (connected:1).......(request attempt 1/12)[email protected]%......request success......Done Source: fastest CPAN mirror ... --> Working on Video::FFmpeg Fetching http://cpan.schatt.com//authors/id/R/RA/RANDOMMAN/Video-FFmpeg-0.47.tar.gz ... 
Fetching http://cpan.schatt.com/authors/id/R/RA/RANDOMMAN/Video-FFmpeg-0.47.tar.gz (connected:1).......(request attempt 1/12)...Resolving cpan.schatt.com...(resolve attempt 1/65)......connecting to 66.249.128.125...@66.249.128.125......connected......receiving...25%...50%...75%...100%......request success......Done OK Unpacking Video-FFmpeg-0.47.tar.gz Video-FFmpeg-0.47/ Video-FFmpeg-0.47/Changes Video-FFmpeg-0.47/FFmpeg.xs Video-FFmpeg-0.47/MANIFEST Video-FFmpeg-0.47/META.yml Video-FFmpeg-0.47/Makefile.PL Video-FFmpeg-0.47/README Video-FFmpeg-0.47/lib/ Video-FFmpeg-0.47/lib/Video/ Video-FFmpeg-0.47/lib/Video/FFmpeg/ Video-FFmpeg-0.47/lib/Video/FFmpeg/AVFormat.pm Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/ Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/Audio.pm Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/Subtitle.pm Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/Video.pm Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream.pm Video-FFmpeg-0.47/lib/Video/FFmpeg.pm Video-FFmpeg-0.47/ppport.h Video-FFmpeg-0.47/t/ Video-FFmpeg-0.47/t/Video-FFmpeg.t Video-FFmpeg-0.47/test Video-FFmpeg-0.47/test.mp4 Video-FFmpeg-0.47/typemap Entering Video-FFmpeg-0.47 Checking configure dependencies from META.yml META.yml not found or unparsable. Fetching META.yml from search.cpan.org Fetching http://search.cpan.org/meta/Video-FFmpeg-0.47/META.yml (connected:1).......(request attempt 1/12)...Resolving search.cpan.org...(resolve attempt 1/65)......connecting to 199.15.176.161...@199.15.176.161......connected......receiving...100%......request success......Done Configuring Video-FFmpeg-0.47 ... Running Makefile.PL Perl v5.10.0 required--this is only v5.8.8, stopped at Makefile.PL line 1. BEGIN failed--compilation aborted at Makefile.PL line 1. N/A ! Configure failed for Video-FFmpeg-0.47. See /home/.cpanm/build.log for details. Perl Expect failed with non-zero exit status: 256 All available perl module install methods have failed guide me how can i install ffmpeg in cPanel Thanks for advance.


  • Automating the choice between JPEG and PNG with a script

    - by MHC
    Choosing the right format to save your images in is crucial for preserving image quality and reducing artifacts. Different formats follow different compression methods and come with their own set of advantages and disadvantages. JPG, for instance is suited for real life photographs that are rich in color gradients. The lossless PNG, on the other hand, is far superior when it comes to schematic figures: Picking the right format can be a chore when working with a large number of files. That's why I would love to find a way to automate it. A little bit of background on my particular use case: I am working on a number of handouts for a series of lectures at my unversity. The handouts are rich in figures, which I have to extract from PDF-formatted slides. Extracting these images gives me lossless PNGs, which are needlessly large at times. Converting these particular files to JPEG can reduce their size to up to less than 20% of their original file size, while maintaining the same quality. This is important as working with hundreds of large images in word processors is pretty crash-prone. Batch converting all extracted PNGs to JPEGs is not an option I am willing to follow, as many if not most images are better suited to be formatted as PNGs. Converting these would result in insignificant size reductions and sometimes even increases in filesize - that's at least what my test runs showed. What we can take from this is that file size after compression can serve as an indicator on what format is suited best for a particular image. It's not a particularly accurate predictor, but works well enough. So why not use it in form of a script: I included inotifywait because I would prefer for the script be executed automatically as soon as I drag an extracted image into a folder. This is a simpler version of the script that I've been using for the last couple of weeks: #!/bin/bash inotifywait -m --format "%w%f" --exclude '.jpg' -r -e create -e moved_to --fromfile '/home/MHC/.scripts/Workflow/Conversion/include_inotifywait' | while read file; do mogrify -format jpg -quality 92 "$file" done The advanced version of the script would have to be able to handle spaces in file names and directory names preserve the original file names flatten PNG images if an alpha value is set compare the file size between the temporary converted image and its original determine if the difference is greater than a given precentage act accordingly The actual conversion could be done with imagemagick tools: convert -quality 92 -flatten -background white file.png file.jpg Unfortunately, my bash skills aren't even close to advanced enough to convert the scheme above into an actual script, but I am sure many of you can. My reputation points on here are pretty low, but I will gladly award the most helpful answer with the highest bounty I can set. References: http://www.formortals.com/introducing-cnb-imageguide/, http://www.turnkeylinux.org/blog/png-vs-jpg Edit: Also see my comments below for some more information on why I think this script would be the best solution to the problem I am facing.
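
    A rough sketch of the comparison step (untested; the 30% threshold and the keep-the-PNG-on-a-tie behaviour are assumptions to be tuned):

        #!/bin/bash
        # usage: ./png-or-jpg.sh image.png
        threshold=30                      # required saving, in percent
        file="$1"
        tmp="$(mktemp --suffix=.jpg)"
        convert -quality 92 -flatten -background white "$file" "$tmp"
        png_size=$(stat -c%s "$file")
        jpg_size=$(stat -c%s "$tmp")
        if [ $(( jpg_size * 100 )) -lt $(( png_size * (100 - threshold) )) ]; then
            mv "$tmp" "${file%.png}.jpg" && rm "$file"    # JPEG wins
        else
            rm "$tmp"                                     # keep the lossless PNG
        fi

    The same test would slot into the inotifywait loop above in place of the unconditional mogrify call.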


  • What are the pitfalls of hardlinked files on my desktop PC?

    - by MountainX
    All the identical-content files on my PC are now hardlinked. (My data is completely de-duplicated. It is a consequence of the way I copied my data from my old computer.) What pitfalls do I need to be aware of now that certain actions on one file could silently affect a number of other files? I know that deleting the file I'm working on is not a problem (assuming I deleted it on purpose). It doesn't affect any of the other hardlinked files and I don't see that the delete action would lead to unexpected side effects. Moving or renaming the file is not a problem. I don't see any unexpected consequences. I don't think copying hardlinked files is a problem, but I'm not as confident about any unexpected consequences in this regard. What I have seen is that making a copy (to the same disk) of a hardlinked file with cp keeps the copy hardlinked (i.e., inode number doesn't change in the copy). Copying to another filesystem obviously breaks the hardlink. (I guess one pitfall is forgetting this fact, given that my PC has 3 hard disks.) Changing permissions does affect all linked files. So far this has proven handy. (I made a large number of the hardlinked files read-only.) None of the operations above seem to produce any major unexpected consequences. However, as was pointed out to me by Daniel Beck in a comment, editing or modifying a file can sometimes be a problem. It depends on the tool and maybe the type of edit. (For example, editing small text files using sed seems to always break the link while using nano doesn't.) This introduces the chance that editing one file could affect all the hardlinked files (i.e., alter the original inode). My proposed solution to this is to make all hardlinked files read-only (and that is already mostly the case). If I can't do that for some files, I will unlink those particular files. Is there any problem with this read-only approach? I'm assuming that if I go to edit a file and find it to be read-only, I'll remember to unlink that filename while making it writable. So one pitfall might be forgetting this rule. In that case, I'll have to rely on my backups. Am I correct in the above statements? And what else do I need to know? BTW, I'm running Kubuntu 12.04. I'm also using btrfs. (I have 2 SSD's and 1 HDD in the PC. I will also be adding an external USB HDD. I'm also connected to a network and I mount some NFS shares. I don't assume any of these last bits are relevant to the question, but I'm adding them just in case.) BTW, since I have more than one drive (with separate file systems), to unlink any file all I have to do is copy it to another drive, then move it back. However, using sed also works (in my testing). Here's my script: sed -i 's/\(.\)/\1/' file1 Surprisingly, this even unlinks zero byte files. In my testing it also appears to work on non-text files without any special options. (But I understand that the --binary option might be needed on Windows, MS-DOS and Cygwin.) However, copying to another disk and moving back may be the best way to unlink. For my use-case, unlink command doesn't really "unlink", rather it "removes".
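
    A quick way to verify whether a given edit or copy actually broke a link (GNU stat; %h is the hard-link count and %i the inode number):

        stat -c '%h links, inode %i' file1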


  • DNS problems with Google on Windows 7

    - by awishformore
    Hello dear superuser community. I had no idea where to post this because it is a problem that completely baffles me. I have a lot of experience with network configuration, but I am completely out of ideas on how to fix this. I have a Fritz!Box branded router on my ISP 1&1 in Germany. My computer is connected to it with a normal Ethernet cable. I always manually set my IP on the computer and use the Google DNS servers for name resolution. I also tried OpenDNS and the result is the same. With that configuration the following happens: Google search responds with big delay Gmail, Google Calendar & Google Drive requests time out the majority of the time In order to troubleshoot, I set the network connection to DHCP for both IP & DNS. At that point, what happens is the following: Google search times out most of the time Gmail, Google Calendar & Google Drive work most of the time Sometimes, it happens that the sites that time out will come up, but weirdly enough, the pictures on the buttons will be missing. For instance, the magnifying glass on Google will be gone or the circle arrow on Gmail (but all buttons of course). All other websites load just fine - and very quickly. All other network functionality is completely unimpacted. The behaviour of fixed IP & Google DNS vs automatic IP & DNS is easily reproducible. I am going crazy trying to fix this as I have no idea what the hell is going on at this point. Here a list of the things I have tried thus far: Flushed the DNS Tried on different browsers (Works fine on my laptop by the way) Tried disabling Teredo & IPv6 stack Emptied all caches Checked the HOSTS file Rebooted the router Reset the router Reinstalled the network adapter Tracert displayes normal route until timing out at one point Ping usually doesn't work for the unreachable sites either Ran both complete Norton 360 & Kaspersky 2012 scan Ran Kaspersky Virus Removal Tool in safe mode Tried connection in safe mode & networking enabled If you have any ideas, please let me know. I'm getting desperate...


  • MySQL my.cnf file not being read, Ubuntu 10.04 64bit

    - by reallyordinary
    I've been researching this for a few hours with no luck. Basically it looks like my server's my.cnf file isn't being read at all. I've searched my server, and there's only one my.cnf file on it, located at /etc/mysql/my.cnf. Its ownership is root:root. I'm running Ubuntu 10.04 64bit on a Linode.com server. I have the latest versions of MySQL and PHP installed. I've edited the my.cnf file, commented out "skip-innodb", and have set innodb to be the default storage engine using default-storage-engine = innodb And then restarted mysql. But when I do show engines, MyISAM is still coming up as the default engine. Also - none of the innodb settings I've added to the my.cnf file are being read. For example, I have this in my.cnf: innodb_buffer_pool_size=4G But in phpmyadmin, InnoDB is showing as having a buffer pool size of 8,192 KiB. Similarly, I have this in the my.cnf: innodb_data-file_path = ibdata1:500M:autoextend But in phpmyadmin, it's reading as ibdata1:10M:autoextend. It doesn't look like MyISAM info is being read from the my.cnf file either. The my.cnf file has skip-external-locking queried out, but it's showing as "on" in phpmyadmin. So - yeah, it looks like nothing in the my.cnf file is being read at all. But the server still works. I'm running a Drupal site on it and it seems to operate fine. So mysql seems to be drawing default settings from... some mysterious secret location. Any idea how I can make mysql see and use this my.cnf file? Actually, wait - it looks like it may be being read, not sure. I checked the error.log and found this: 101128 4:28:52 [ERROR] Cannot find or open table databasename/cache_apachesolr from the internal data dictionary of InnoDB though the .frm file for the table exists. Maybe you have deleted and recreated InnoDB data files but have forgotten to delete the corresponding .frm files of InnoDB tables, or you have moved .frm files to another database? or, the table contains indexes that this version of the engine doesn't support. See http://dev.mysql.com/doc/refman/5.1/en/innodb-troubleshooting.html how you can resolve the problem. InnoDB: Error: auto-extending data file ./ibdata1 is of a different size InnoDB: 640 pages (rounded down to MB) than specified in the .cnf file: InnoDB: initial 32000 pages, max 0 (relevant if non-zero) pages! InnoDB: Could not open or create data files. InnoDB: If you tried to add new data files, and it failed here, InnoDB: you should now edit innodb_data_file_path in my.cnf back InnoDB: to what it was, and remove the new ibdata files InnoDB created InnoDB: in this failed attempt. InnoDB only wrote those files full of InnoDB: zeros, but did not yet use them in any way. But be careful: do not InnoDB: remove old data files which contain your precious data! 101128 4:28:52 [ERROR] Plugin 'InnoDB' init function returned error. 101128 4:28:52 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 101128 4:28:52 [ERROR] /usr/sbin/mysqld: unknown variable 'innodb_lock_wait_timout=50' 101128 4:28:52 [ERROR] Aborting 101128 4:28:52 [Note] /usr/sbin/mysqld: Shutdown complete
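
    One quick sanity check (standard mysqld behaviour, not specific to this box) is to ask the server binary which option files it reads and in which order; the binary may live at /usr/sbin/mysqld:

        mysqld --verbose --help 2>/dev/null | grep -A1 "Default options"

    Incidentally, the tail of that error log suggests the file is being parsed: the misspelled innodb_lock_wait_timout (missing an "e") makes that startup attempt abort, so the instance actually serving may have been started earlier, without those settings.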


  • DNS request timed out. timeout was 2 seconds

    - by sahil007
    i had setup bind dns server on centos. from local lan it will work fine but from remote when i tried to nslookup ..it will give reply like "DNS request timed out...timeout was 2 seconds." what is the problem? this is my bind config---- // Red Hat BIND Configuration Tool options { directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; query-source address * port 53; }; controls { inet 127.0.0.1 allow {localhost; } keys {rndckey; }; }; acl internals { 127.0.0.0/8; 192.168.0.0/24; 10.0.0.0/8; }; view "internal" { match-clients { internals; }; recursion yes; zone "mydomain.com" { type master; file "mydomain.com.zone"; }; zone "0.168.192.in-addr.arpa" { type master; file "0.168.192.in-addr.arpa.zone"; }; zone "." IN { type hint; file "named.root"; }; zone "localdomain." IN { type master; file "localdomain.zone"; allow-update { none; }; }; zone "localhost." IN { type master; file "localhost.zone"; allow-update { none; }; }; zone "0.0.127.in-addr.arpa." IN { type master; file "named.local"; allow-update { none; }; }; zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa." I N { type master; file "named.ip6.local"; allow-update { none; }; }; zone "255.in-addr.arpa." IN { type master; file "named.broadcast"; allow-update { none; }; }; zone "0.in-addr.arpa." IN { type master; file "named.zero"; allow-update { none; }; }; }; view "external" { match-clients { any; }; recursion no; zone "mydomain.com" { type master; file "mydomain.com.zone"; // file "/var/named/chroot/var/named/mydomain.com.zone"; }; zone "0.168.192.in-addr.arpa" { type master; file "0.168.192.in-addr.arpa.zone"; }; }; include "/etc/rndc.key";


  • Using Upstart to manage Unicorn w/ rbenv + bundler binstubs w/ ruby-local-exec shebang

    - by codykrieger
    Alright, this is melting my brain. It might have something to do with the fact that I don't understand Upstart as well as I should. Sorry in advance for the long question. I'm trying to use Upstart to manage a Rails app's Unicorn master process. Here is my current /etc/init/app.conf: description "app" start on runlevel [2] stop on runlevel [016] console owner # expect daemon script APP_ROOT=/home/deploy/app PATH=/home/deploy/.rbenv/shims:/home/deploy/.rbenv/bin:$PATH $APP_ROOT/bin/unicorn -c $APP_ROOT/config/unicorn.rb -E production # >> /tmp/upstart.log 2>&1 end script # respawn That works just fine - the Unicorns start up great. What's not great is that the PID detected is not of the Unicorn master, it's of an sh process. That in and of itself isn't so bad, either - if I wasn't using the automagical Unicorn zero-downtime deployment strategy. Because shortly after I send -USR2 to my Unicorn master, a new master spawns up, and the old one dies...and so does the sh process. So Upstart thinks my job has died, and I can no longer restart it with restart or stop it with stop if I want. I've played around with the config file, trying to add -D to the Unicorn line (like this: $APP_ROOT/bin/unicorn -c $APP_ROOT/config/unicorn.rb -E production -D) to daemonize Unicorn, and I added the expect daemon line, but that didn't work either. I've tried expect fork as well. Various combinations of all of those things can cause start and stop to hang, and then Upstart gets really confused about the state of the job. Then I have to restart the machine to fix it. I think Upstart is having problems detecting when/if Unicorn is forking because I'm using rbenv + the ruby-local-exec shebang in my $APP_ROOT/bin/unicorn script. Here it is: #!/usr/bin/env ruby-local-exec # # This file was generated by Bundler. # # The application 'unicorn' is installed as part of a gem, and # this file is here to facilitate running it. # require 'pathname' ENV['BUNDLE_GEMFILE'] ||= File.expand_path("../../Gemfile", Pathname.new(__FILE__).realpath) require 'rubygems' require 'bundler/setup' load Gem.bin_path('unicorn', 'unicorn') Additionally, the ruby-local-exec script looks like this: #!/usr/bin/env bash # # `ruby-local-exec` is a drop-in replacement for the standard Ruby # shebang line: # # #!/usr/bin/env ruby-local-exec # # Use it for scripts inside a project with an `.rbenv-version` # file. When you run the scripts, they'll use the project-specified # Ruby version, regardless of what directory they're run from. Useful # for e.g. running project tasks in cron scripts without needing to # `cd` into the project first. set -e export RBENV_DIR="${1%/*}" exec ruby "$@" So there's an exec in there that I'm worried about. It fires up a Ruby process, which fires up Unicorn, which may or may not daemonize itself, which all happens from an sh process in the first place...which makes me seriously doubt the ability of Upstart to track all of this nonsense. Is what I'm trying to do even possible? From what I understand, the expect stanza in Upstart can only be told (via daemon or fork) to expect a maximum of two forks.


  • Slow Windows Explorer on Windows 7

    - by MadBoy
    I have Laptop with i7 (4 cores), 8GB ram and SSD OCZ Vertex 3 MaxIOPS which in testing that I did just now does 400mb/s+ read/write. However the responsiveness of Windows Explorer is far from being perfect. Opening up Computer, Documents, going into folders is very slow (1-5seconds). I don't have any viruses or spyware and I have tried changing properties to optimize view for General Items. I tried disabling Search Indexer but it made search in Outlook 2010 crawl and didn't bring any other effect. Even double clicking on file takes some time to open things up (like clicking a Word document). I don't have any drives mapped, my computer is not joined to domain. I have multiple VPN connections that I connect to but they all have disabled default gateways. I tried using CC Cleaner or some Windows 7 Tweaks app to disable some things. I am power user using Visual Studio, Tortoise SVN and other developer/administration apps. Any non obvious ideas? Edit: So I've been trying to pinpoint where the issue comes from and it seems that straight after reboot Windows Explorer opens very fast, when I load 3-4 programs (Royal TS, Visual Studio, Outlook) it's noticeably slower and the more programs I have it gets worse. After I start closing programs it starts working better and if I leave 2 open it's fast again. I tried doing some research with DiskMon and other programs from sysinternals but couldn't find anything suspicious. Below are stats during normal usage with a lots of programs open: - Ram usage with a lot of programs open and no swap file (i disabled it for testing): 6.95GB - CPU usage: 15%, none of the cores takes more then 50% (I have VS 2010 open x 4) HD Tune Pro: OCZ-VERTEX3 MI Benchmark Test capacity: full Read transfer rate Transfer Rate Minimum : 363.9 MB/s Transfer Rate Maximum : 505.5 MB/s Transfer Rate Average : Access Time : Burst Rate : CPU Usage : HD Tune Pro: OCZ-VERTEX3 MI File Benchmark Drive C: Transfer rate test File Size: 500 MB Sequential read 484102 KB/s Sequential write 444714 KB/s Random read 7779 IOPS Random write 16888 IOPS Random read (queue depth = 32) 73007 IOPS Random write (queue depth = 32) 69790 IOPS HD Tune Pro: OCZ-VERTEX3 MI Random Access Test capacity: full Read test Transfer size operations / sec avg. access time max. access time avg. speed 512 bytes 3260 IOPS 0.306 ms 2.106 ms 1.592 MB/s 4 KB 4161 IOPS 0.240 ms 2.006 ms 16.256 MB/s 64 KB 2382 IOPS 0.419 ms 2.367 ms 148.934 MB/s 1 MB 449 IOPS 2.225 ms 4.197 ms 449.407 MB/s Random 809 IOPS 1.235 ms 6.551 ms 410.527 MB/s HD Tune Pro: OCZ-VERTEX3 MI Extra Tests Test capacity: full Random seek 3975 IOPS 0.252 ms 1.941 MB/s Random seek 4 KB 4245 IOPS 0.236 ms 16.583 MB/s Butterfly seek 4086 IOPS 0.245 ms 1.995 MB/s Random seek / size 64 KB 3812 IOPS 0.262 ms 58.606 MB/s Random seek / size 8 MB 120 IOPS 8.348 ms 485.737 MB/s Sequential outer 4524 IOPS 0.221 ms 282.721 MB/s Sequential middle 4429 IOPS 0.226 ms 276.818 MB/s Sequential inner 5504 IOPS 0.182 ms 344.000 MB/s Burst rate 4472 IOPS 0.224 ms 279.475 MB/s


  • Kernel panic when bringing up DRBD resource

    - by sc.
    I'm trying to set up two machines synchonizing with DRBD. The storage is setup as follows: PV - LVM - DRBD - CLVM - GFS2. DRBD is set up in dual primary mode. The first server is set up and running fine in primary mode. The drives on the first server have data on them. I've set up the second server and I'm trying to bring up the DRBD resources. I created all the base LVM's to match the first server. After initializing the resources with `` drbdadm create-md storage I'm bringing up the resources by issuing drbdadm up storage After issuing that command, I get a kernel panic and the server reboots in 30 seconds. Here's a screen capture. My configuration is as follows: OS: CentOS 6 uname -a Linux host.structuralcomponents.net 2.6.32-279.5.2.el6.x86_64 #1 SMP Fri Aug 24 01:07:11 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux rpm -qa | grep drbd kmod-drbd84-8.4.1-2.el6.elrepo.x86_64 drbd84-utils-8.4.1-2.el6.elrepo.x86_64 cat /etc/drbd.d/global_common.conf global { usage-count yes; # minor-count dialog-refresh disable-ip-verification } common { handlers { pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b /proc/sysrq-trigger ; reboot -f"; pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b /proc/sysrq-trigger ; reboot -f"; local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o /proc/sysrq-trigger ; halt -f"; # fence-peer "/usr/lib/drbd/crm-fence-peer.sh"; # split-brain "/usr/lib/drbd/notify-split-brain.sh root"; # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root"; # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k"; # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh; } startup { # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb become-primary-on both; wfc-timeout 30; degr-wfc-timeout 10; outdated-wfc-timeout 10; } options { # cpu-mask on-no-data-accessible } disk { # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes # disk-drain md-flushes resync-rate resync-after al-extents # c-plan-ahead c-delay-target c-fill-target c-max-rate # c-min-rate disk-timeout } net { # protocol timeout max-epoch-size max-buffers unplug-watermark # connect-int ping-int sndbuf-size rcvbuf-size ko-count # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri # after-sb-1pri after-sb-2pri always-asbp rr-conflict # ping-timeout data-integrity-alg tcp-cork on-congestion # congestion-fill congestion-extents csums-alg verify-alg # use-rle protocol C; allow-two-primaries yes; after-sb-0pri discard-zero-changes; after-sb-1pri discard-secondary; after-sb-2pri disconnect; } } cat /etc/drbd.d/storage.res resource storage { device /dev/drbd0; meta-disk internal; on host.structuralcomponents.net { address 10.10.1.120:7788; disk /dev/vg_storage/lv_storage; } on host2.structuralcomponents.net { address 10.10.1.121:7788; disk /dev/vg_storage/lv_storage; } /var/log/messages is not logging anything about the crash. I've been trying to find a cause of this but I've come up with nothing. Can anyone help me out? Thanks.


  • I cut-to-move my DCIM folder to the external SD card when an automatic Android OS update popped up before I could choose the target - cannot recover 200+ photos

    - by ZeroG
    I was downloading my Exhibit II's DCIM camera folder (with month's of photos inside) to its external SD card, in order to transfer them into my laptop. In my overconfidence, I hurriedly chose cut-to-move (rather than copy-to-move) when KABOOM! —an automatic Android OS update popped up before I could choose the target!!! I figured everything was in cache & calmly tried to go through with the update. But that was not a typically seamless event. It showed downloading icon but hmm… since I rooted the phone it brought the command line up & recovery sequence. But neither Android nor I had yet downloaded any alternate custom ROM Files to internal SD to update from! So were they trying to make me unroot my phone by giving me some bogus update on the fly or just give me a hard time in trying to hand me down an unrooted ROM that I'd have to figure out how to root again? Yes, I know there was that blurb about overwriting a file of the same name but I was trying to shake the darn stubborn update being forced on my phone during this precarious moment. I thought I had frozen or turned off all those auto-updates previously. Anyway, phones are small & fingers are big (sigh)... I tried to reboot into safe mode but the resultant photo file was partially overwritten (200 files had names but Zero bytes in them). I thought maybe it was still hung in cache or deposited somewhere else but I have searched everywhere with file managers. Since I did not have Titanium backing up camera, photo folder or gallery, I cannot recover 200+ photos. Dumb. You can understand my dilemma as I am involved in the arts & although just a camera phone, most of these photos were historic & aesthetic or at least as to subject matter. Photo-ops don't reoccur. I have tried a couple of recovery apps from the market like Search Duplicates & Recover to no avail. I was only able to salvage stuff I'd sent out in messages. I've got several decades in computers & this is such a miserable beginner's piece of bad luck I can't believe it happened to me. They were precious photos! Yes, I turned on Titanium since & yes I even tried USB to laptop recoveries. Being on a MacBookPro I'm trying androidfiletransfer.dmg, but I'd have to upgrade to Peach Sunrise to get above Android 3.0 for that App to recognize the phone via USB & the programmer says installation zeros your data, so that pretty much toasts any secret hidden places where these photos may have been deposited. Don't want to do that & am still trying to find them. They certainly didn't make it to my external SD Card. If any of you techies out there know anything, please help & thanks. Despite decades of being in computing, unfamiliar & ever-changing hard or software can humble even the most seasoned veterans.


  • DNS lookups failing somewhere between firewall and router

    - by TessellatingHeckler
    we have a setup of ADSL line - Cisco 837 ADSL router - Zyxel ZyWall 35 firewall/NAT - Switch == Intel load balanced NICS in a server. It has been fine for years, suddenly DNS resolution stopped working on the server. No changes that I know of, so I can't work backwards from there. It was configured with the ISP's DNS servers, neither network device does DNS relaying. Wireshark shows the request go out but nothing comes back. The server networking stack seems OK though, because if we query an internal DNS server on a remote site, that works. I can logon to the Cisco, and DNS resolves OK from the command line. I can logon to the ZyWall, and DNS does not resolve from the command line. So the problem seems to be the firewall, patch cable or router, yes? On the router: interface Ethernet0 ip address aaa.bbb.ccc.ddd 255.255.255.ddd ip tcp adjust-mss 1450 hold-queue 100 out On the firewall: DNS server set to 8.8.8.8 (Google's), DNS traffic allowed LAN-WAN. What else should I look for? Update: Following This guide I've got traffic logging on the Cisco. I have also got access to a public DNS server which I can run tcpdump on to see things from the other side. And as per the below comments, I've tested with Dig and see that DNS over TCP works, and over UDP does not. Currently: DNS request from the server using TCP shows up in the firewall log, and in the Cisco log, and in tcpdump on the DNS server, the answer comes back, it works fine. DNS request from the server using UDP shows up in the firewall log, and in the Cisco log, does NOT show in tcpdump on the DNS server, times out. DNS request from the cisco (using UDP) does show up in tcpdump on the DNS server, answer received, works fine. Ping requests from the server and the cisco to the DNS server show up in tcpdump on the DNS server. DNS request from the server using UDP does show up on the firewall. Summary: TCP seems fine throughought. UDP works over the ADSL and to the Cisco, and it works from the server to the Cisco, but it doesn't cross the Cisco properly, it seems. I did see the Cisco showing as connected at 10Mb/full-duplex internally, and the firewall showing as 100Mb/full-duplex externally. I have forced the firewall to 10Mb and rebooted both devices. That seemed to help get UDP traffic (server-firewall-cisco) instead of (server-firewall), but did not fix it. Update: Sanitized Cisco config: version 12.2 no service pad service timestamps debug datetime msec service timestamps log datetime msec service password-encryption ! hostname cisco ! logging queue-limit 100 enable secret 5 {password} enable password 7 {password} ! ip subnet-zero ip domain name example.org ip name-server {nameserver_IP} ! ! ip audit notify log ip audit po max-events 100 no ftp-server write-enable ! interface Ethernet0 ip address {Inside_public_IP} 255.255.255.248 ip tcp adjust-mss 1460 hold-queue 100 out ! interface ATM0 no ip address no atm ilmi-keepalive pvc 0/38 encapsulation aal5mux ppp dialer dialer pool-member 1 ! dsl operating-mode auto ! interface Dialer1 ip unnumbered Ethernet0 encapsulation ppp dialer pool 1 dialer idle-timeout 0 dialer persistent no cdp enable ppp chap hostname {ADSL_Username} ppp chap password 7 {ADSL_Password} ! ip classless ip route 0.0.0.0 0.0.0.0 Dialer1 no ip http server no ip http secure-server ! access-list 23 permit {IP} dialer-list 1 protocol ip permit no cdp run snmp-server enable traps tty ! {con, vty} end
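
    For anyone reproducing the TCP-vs-UDP split described above, it can be demonstrated from the server with dig against the resolver configured on the firewall (8.8.8.8 here):

        dig @8.8.8.8 www.example.com          # plain UDP - times out past the Cisco
        dig @8.8.8.8 www.example.com +tcp     # same query over TCP - gets an answer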


  • Ubuntu 8.04 LTS + rdiff-backup: should I install from source instead of using the apt repositories?

    - by egarcia
    I'm trying to use rdiff-backup in order to make backup copies of some folders inside an Ubuntu 8.04LTS server. I'm attempting to do the backup on another server with a more modern Ubuntu distro (9.10). I'll call this one the "client". rdiff-backup needs to be installed on both the client and the server. It is available on the apt repositories on both machines, so I installed it using sudo apt-get install rdiff-backup. The problem is that the version installed on the server is older than the one on the client (1.1.15 vs 1.2.8). Thus I get errors when I try do make them work together. So I need both versions to be the same. What is the standard procedure in these cases? Should I attempt to upgrade the version on the server, or downgrade the version on the client? And how whould I do that? In case it is useful, I'd like to point out that the rdiff-backup apt-package has some dependencies - librsync1 & python-support Attaching the errors I got in case they help: rdiff-backup egarcia@test::/var/rails/ohwr/backup /home/kikito/backup/files Warning: Local version 1.2.8 does not match remote version 1.1.15. Exception ' Warning Security Violation! Bad request for function: rpath.make_file_dict with arguments: ['/var/rails/ohwr/backup'] ' raised of class '<class 'rdiff_backup.Security.Violation'>': File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 304, in error_check_Main try: Main(arglist) File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 321, in Main rps = map(SetConnections.cmdpair2rp, cmdpairs) File "/usr/lib/pymodules/python2.6/rdiff_backup/SetConnections.py", line 78, in cmdpair2rp return rpath.RPath(conn, filename).normalize() File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 884, in __init__ else: self.setdata() File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 908, in setdata self.data = self.conn.rpath.make_file_dict(self.path) File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 450, in __call__ return apply(self.connection.reval, (self.name,) + args) File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 370, in reval if isinstance(result, Exception): raise result Traceback (most recent call last): File "/usr/bin/rdiff-backup", line 30, in <module> rdiff_backup.Main.error_check_Main(sys.argv[1:]) File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 304, in error_check_Main try: Main(arglist) File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 321, in Main rps = map(SetConnections.cmdpair2rp, cmdpairs) File "/usr/lib/pymodules/python2.6/rdiff_backup/SetConnections.py", line 78, in cmdpair2rp return rpath.RPath(conn, filename).normalize() File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 884, in __init__ else: self.setdata() File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 908, in setdata self.data = self.conn.rpath.make_file_dict(self.path) File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 450, in __call__ return apply(self.connection.reval, (self.name,) + args) File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 370, in reval if isinstance(result, Exception): raise result rdiff_backup.Security.Violation: Warning Security Violation! Bad request for function: rpath.make_file_dict with arguments: ['/var/rails/ohwr/backup']
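
    Whichever side ends up being changed, it is worth confirming the result from the client before the next backup run; rdiff-backup can report its own version and test the client/server pairing (the host and path below are the ones from the question):

        rdiff-backup --version                                    # run on both machines
        rdiff-backup --test-server egarcia@test::/var/rails/ohwr/backup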

    Read the article

  • Since upgrading to Solaris 11, my ARC size has consistently targeted 119MB, despite having 30GB RAM. What? Why?

    - by growse
    I ran a NAS/SAN box on Solaris 11 Express before Solaris 11 was released. The box is an HP X1600 with an attached D2700. In all, 12x 1TB 7200 SATA disks and 12x 300GB 10k SAS disks in separate zpools. Total RAM is 30GB. Services provided are CIFS, NFS and iSCSI. All was well, and I had a ZFS memory usage graph looking like this: a fairly healthy ARC size of around 23GB - making use of the available memory for caching. However, I then upgraded to Solaris 11 when that came out. Now, my graph looks like this. Partial output of arc_summary.pl is:
    System Memory:
        Physical RAM: 30701 MB
        Free Memory : 26719 MB
        LotsFree: 479 MB
    ZFS Tunables (/etc/system):
    ARC Size:
        Current Size: 915 MB (arcsize)
        Target Size (Adaptive): 119 MB (c)
        Min Size (Hard Limit): 64 MB (zfs_arc_min)
        Max Size (Hard Limit): 29677 MB (zfs_arc_max)
    It's targeting 119MB while sitting at 915MB. It's got 30GB to play with. Why? Did they change something?
    Edit: To clarify, arc_summary.pl is Ben Rockwood's, and the relevant lines generating the above stats are:
    my $mru_size = ${Kstat}->{zfs}->{0}->{arcstats}->{p};
    my $target_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c};
    my $arc_min_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c_min};
    my $arc_max_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c_max};
    my $arc_size = ${Kstat}->{zfs}->{0}->{arcstats}->{size};
    The Kstat entries are there; I'm just getting odd values out of them.
    Edit 2: I've just re-measured the ARC size with arc_summary.pl, and I've verified these numbers with kstat:
    System Memory:
        Physical RAM: 30701 MB
        Free Memory : 26697 MB
        LotsFree: 479 MB
    ZFS Tunables (/etc/system):
    ARC Size:
        Current Size: 744 MB (arcsize)
        Target Size (Adaptive): 119 MB (c)
        Min Size (Hard Limit): 64 MB (zfs_arc_min)
        Max Size (Hard Limit): 29677 MB (zfs_arc_max)
    The thing that strikes me is that the Target Size is 119MB. Looking at the graph, it has targeted the exact same value (124.91M according to Cacti, 119M according to arc_summary.pl - I think the difference is just 1024/1000 rounding) ever since Solaris 11 was installed. It looks like the kernel is making zero effort to shift the target size to anything different. The current size fluctuates as the (large) needs of the system fight with the target size, and equilibrium appears to be between 700 and 1000MB. So the question is now a little more pointed - why is Solaris 11 hard-setting my ARC target size to 119MB, and how do I change it? Should I raise the min size to see what happens? I've stuck the output of kstat -n arcstats over at http://pastebin.com/WHPimhfg
    Edit 3: OK, weirdness now. I know flibflob mentioned that there was a patch to fix this. I haven't applied this patch yet (still sorting out internal support issues) and I've not applied any other software updates. Last Thursday, the box crashed - as in, completely stopped responding to everything. When I rebooted it, it came back up fine, but here's what my graph now looks like. It seems to have fixed the problem. This is proper la-la land stuff now. I've literally no idea what's going on. :(
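    [Editor's note] The same arcstats counters that arc_summary.pl reads can be cross-checked without Perl. The following is only a sketch, not from the original question: it assumes Solaris's /usr/bin/kstat is available and that kstat -p prints one "module:instance:name:statistic  value" pair per line.

    #!/usr/bin/env python
    # Cross-check the ARC kstat counters (size, c, c_min, c_max) in MB.
    import subprocess

    raw = subprocess.Popen(["kstat", "-p", "zfs:0:arcstats"],
                           stdout=subprocess.PIPE).communicate()[0].decode()

    stats = {}
    for line in raw.splitlines():
        if not line.strip():
            continue
        name, value = line.split(None, 1)          # e.g. "zfs:0:arcstats:c   124911616"
        stats[name.split(":")[-1]] = value.strip()

    for key, label in (("size", "Current Size"),
                       ("c", "Target Size (c)"),
                       ("c_min", "Min Size (c_min)"),
                       ("c_max", "Max Size (c_max)")):
        print("%-18s %6d MB" % (label, int(stats[key]) // (1024 * 1024)))

    If these numbers agree with arc_summary.pl (as they did in Edit 2 above), the oddity is in the values the kernel is setting, not in how the script reads them.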

    Read the article

  • IE and Google Chrome time out on an IIS6-hosted SSL page that Firefox handles well.

    - by Thomas
    Ok, here's the scenario: up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me too. But it only timed out in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same page over http works fine. That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the Request and Response headers from the page, and they all follow the correct formats.
    Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based. They never time out on static images or HTML files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner.
    My setup is: Server: Win2k3, IIS6, PHP 5.2.9-1. Clients: IE7, IE8, Chrome (regular and dev channel): frequent timeouts on PHP pages. Firefox 2, Firefox 3: no timeouts. Firebug shows no errors or even lengthy periods serving the pages.
    I've spent 2 days searching for any tech knowledge that I can find, and my search parameters are all too general. Everyone has problems loading SSL pages in IE and Chrome for a wide variety of reasons. The infrequent nature of the timeouts and the fact that there are no errors being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?
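    [Editor's note] Because the stalls are intermittent, it can help to measure them rather than click around in browsers. This is only a sketch, not from the original question: the URL is the placeholder from the post, the form field is invented for illustration, and the request count and timeout are arbitrary.

    #!/usr/bin/env python3
    # Time repeated POSTs to the problem page over HTTPS and plain HTTP,
    # to see whether only the SSL side stalls.
    import time
    import urllib.parse
    import urllib.request

    URLS = ["https://www.mysite.com/changes.php",
            "http://www.mysite.com/changes.php"]
    DATA = urllib.parse.urlencode({"field1": "x" * 2000}).encode()   # hypothetical large form body

    for url in URLS:
        for attempt in range(10):
            start = time.time()
            try:
                resp = urllib.request.urlopen(url, data=DATA, timeout=30)
                resp.read()
                resp.close()
                outcome = "ok"
            except Exception as exc:
                outcome = "failed: %s" % exc
            print("%-45s try %2d  %6.2fs  %s" % (url, attempt + 1, time.time() - start, outcome))

    Running this from a couple of the affected machines gives hard numbers for how often and how long the stalls last, which is easier to compare across clients than manual testing.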

    Read the article
