Search Results

Search found 10693 results on 428 pages for 'max requests'.

Page 378/428

  • recursive wget with hotlinked requisites

    - by dongle
    I often use wget to mirror very large websites. Sites that contain hotlinked content (be it images, video, CSS, or JS) pose a problem, as I seem unable to specify that I would like wget to grab page requisites that are on other hosts without having the crawl also follow hyperlinks to other hosts. For example, let's look at this page: https://dl.dropbox.com/u/11471672/wget-all-the-things.html Let's pretend that this is a large site that I would like to completely mirror, including all page requisites, including those that are hotlinked.

        wget -e robots=off -r -l inf -pk

    gets everything but the hotlinked image.

        wget -e robots=off -r -l inf -pk -H

    gets everything, including the hotlinked image, but goes wildly out of control, proceeding to download the entire web.

        wget -e robots=off -r -l inf -pk -H --ignore-tags=a

    gets the first page, including both the hotlinked and local images, and does not follow the hyperlink to the site outside of scope, but obviously it also does not follow the hyperlink to the next page of the site.

    I know that there are various other tools and methods of accomplishing this (HTTrack and Heritrix allow the user to make a distinction between hotlinked content on other hosts and hyperlinks to other hosts), but I'd like to see if this is possible with wget. Ideally this would not be done in post-processing, as I would like the external content, requests, and headers to be included in the WARC file I'm outputting.
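    One workaround worth sketching, though it is an untested assumption rather than a known wget recipe, is a two-pass crawl: recurse within the origin host first, then re-fetch each saved page with -p -H, which downloads that page's requisites across hosts without following its <a> links (no -r on the second pass). It also assumes wget's default behavior of saving pages under a directory named after the host:

        # pass 1: recursive crawl confined to the origin host;
        # gets every page plus all same-host requisites
        wget -e robots=off -r -l inf -pk --warc-file=pass1 \
            https://dl.dropbox.com/u/11471672/wget-all-the-things.html

        # pass 2: -p without -r fetches only each page's requisites,
        # and -H lets those requisites span hosts without the crawl escaping
        i=0
        find dl.dropbox.com -name '*.html' | while read -r page; do
            i=$((i+1))
            wget -e robots=off -p -H -k --warc-file="pass2-$i" "https://$page"
        done

    The obvious cost is that each page is fetched twice and the WARC output is split across files.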


  • Given a vector of maximum 10 000 natural and distinct numbers, find 4 numbers (a, b, c, d) such that a + b + c = d

    - by king_kong
    Hi, I solved this problem by following a straightforward but not optimal algorithm: I sorted the vector in descending order and after that subtracted numbers from max to min to see if I get a + b + c = d. Notice that I haven't used anywhere the fact that the elements are natural, distinct, and 10 000 at most; I suppose these details are the key. Does anyone here have a hint toward an optimal way of solving this? Thank you in advance!

    Later edit: my idea goes like this:

        <<quicksort in descending order>>
        for i := 0 to count {              // after sorting, loop through the array
            int d := v[i];
            for j := i+1 to count {
                int dif1 := d - v[j];
                int a := v[j];
                for k := j+1 to count {
                    if (v[k] > dif1) continue;
                    int dif2 := dif1 - v[k];
                    b := v[k];
                    for l := k+1 to count {
                        if (dif2 = v[l]) {
                            c := dif2;
                            return {a, b, c, d}
                        }
                    }
                }
            }
        }

    What do you think?
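    For the record, the standard way to beat the quadruple loop, sketched below in Python as an assumption about the intended answer rather than a known reference solution, is to rewrite a + b + c = d as a + b = d - c: hash every pairwise sum, then for each (c, d) pair look up d - c, giving O(n^2) time at the cost of O(n^2) memory (at the full 10 000 elements that is ~5*10^7 stored pairs, so this is a complexity sketch more than a drop-in fix):

        from collections import defaultdict

        def find_abc_d(v):
            """Find a, b, c, d in v (distinct values) with a + b + c = d."""
            pair_sums = defaultdict(list)              # sum -> list of index pairs
            for i in range(len(v)):
                for j in range(i + 1, len(v)):
                    pair_sums[v[i] + v[j]].append((i, j))
            for k in range(len(v)):                    # candidate d
                for l in range(len(v)):                # candidate c
                    if k == l:
                        continue
                    for i, j in pair_sums.get(v[k] - v[l], ()):
                        if len({i, j, k, l}) == 4:     # four distinct positions
                            return v[i], v[j], v[l], v[k]   # a, b, c, d
            return None

        print(find_abc_d([2, 3, 5, 10]))   # (2, 3, 5, 10), since 2 + 3 + 5 = 10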


  • jQuery Code Only Fires On Hard Refresh?

    - by Rob Vanders
    The XFBML version of the Facebook Registration plugin only loads over HTTPS. I need it to load over HTTP so my form does not trigger a security mismatch error between domains. I wrote this code to get the src and rewrite it without HTTPS. It works fine on the first load; however, on Chrome and Safari it only runs the first time and on hard refreshes. It does not run on standard reloads or when pressing Enter in the address bar. Here is the code:

        $(window).load(function () {
            // Replace HTTPS with HTTP when frame has loaded
            $(".subscribe iframe").each(function(){
                var source = $(this).attr("src");
                //alert(source);
                var sourceNew = source.replace("https", "http"); // change https to http
                alert(sourceNew);
                $(this).attr("src", sourceNew);
            });
        });

    I have .htaccess set to disable server caching:

        <Files *>
            Header set Cache-Control: "private, pre-check=0, post-check=0, max-age=0"
            Header set Expires: 0
            Header set Pragma: no-cache
        </Files>

    What is causing this to not fire reliably? Thanks
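    One guess, assuming the Facebook SDK injects the iframe asynchronously: $(window).load can fire before the frame exists, and whether it does depends on cache timing, which would explain why only cold loads work. A hedged sketch that polls until the frame appears instead (the selector, interval, and retry cap are all assumptions):

        (function () {
            var tries = 0;
            var timer = setInterval(function () {
                var frame = $(".subscribe iframe");
                if (frame.length) {
                    clearInterval(timer);
                    frame.each(function () {
                        // rewrite https -> http only at the start of the URL
                        $(this).attr("src", $(this).attr("src").replace(/^https:/, "http:"));
                    });
                } else if (++tries > 50) {     // give up after ~5 seconds
                    clearInterval(timer);
                }
            }, 100);
        })();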


  • Need some help setting up subdomains for my site

    - by KarimSaNet
    I'm setting up my website and want all subdomain requests rewritten to the corresponding subdirectory. For example:

        http://projects.karimsa.net/ -> http://karimsa.net/projects/

    But I want to use the Apache rewrite module to do this so that the URL in the browser stays the same. Here is what my config looks like at the moment:

        ## rewrite subdomains
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(.*).karimsa.net
        RewriteCond %{HTTP_HOST} !^www.karimsa.net [NC]
        RewriteRule ^(.*)$ http://karimsa.net/%1/$1 [R=301,L]

    And my CNAME record for 'projects.karimsa.net':

        Domain                 TTL    Data         Type
        projects.karimsa.net   14400  karimsa.net  CNAME

    Theoretically, I feel this should work, but when I go to the URL it gives me a server misconfiguration error, my provider's default webpage. What I should see is the index.php under /projects/. What am I doing wrong? Any help would be appreciated; thanks for reading.

    Addition: I realized I forgot to mention part of the problem. The domain 'karimsa.net' is parked at 'karimsa.x10.mx'. If I set up the same configuration on 'projects.karimsa.x10.mx', the rewrite and CNAME work, but on the parked domain I still get the default webpage.
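    Worth flagging: the [R=301] flag forces an external redirect, which changes the URL in the browser, the opposite of what is wanted here; an internal rewrite has no R flag at all. A hedged sketch, assuming per-directory (.htaccess) context, with a guard against the rule re-matching its own output (Apache sets REDIRECT_STATUS on the internal re-run):

        RewriteEngine On
        # skip the re-run after an internal rewrite, to avoid /projects/projects/... loops
        RewriteCond %{ENV:REDIRECT_STATUS} ^$
        RewriteCond %{HTTP_HOST} ^(.+)\.karimsa\.net$ [NC]
        RewriteCond %{HTTP_HOST} !^www\.karimsa\.net$ [NC]
        # no [R] flag: internal rewrite, so the browser URL stays the same
        RewriteRule ^(.*)$ /%1/$1 [L]

    None of this helps, though, if the parked domain never reaches this vhost in the first place, which the x10.mx observation suggests.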


  • In Maven 2, is it possible to specify a mirror for everything, but allow for failover to a direct repo

    - by Justin Searls
    I understand that part of the appeal of setting up a Maven mirror such as the following:

        <mirror>
          <id>nexus</id>
          <name>Maven Repository</name>
          <mirrorOf>*</mirrorOf>
          <url>http://server:8081/nexus/content/groups/public</url>
        </mirror>

    ...is that the documentation states, "You can force Maven to use a single repository by having it mirror all repository requests." However, is this also an indication that by having a * mirror set up, each workstation must be forced to go through the mirror? I ask because I would like each workstation to fail over and connect directly to whatever public repositories it knows about in the event that Nexus can't resolve a dependency or plugin. (In a perfect world, each developer has the access necessary to add additional proxy repositories as needed. However, sometimes that access isn't available; sometimes the Nexus server goes down; sometimes it suffers a Java heap error.) Is this "mirror, but go ahead and connect directly to public repos" failover configuration possible in Maven 2? Will it be in Maven 3?
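    For what it's worth, <mirrorOf> accepts patterns beyond a bare *, so one hedged option is to stop short of mirroring everything rather than to fail over; to my knowledge a matching mirror is treated as authoritative, so a true automatic fallback is not expressible here. A sketch that leaves central reachable directly:

        <mirror>
          <id>nexus</id>
          <name>Maven Repository</name>
          <!-- mirror every repository except central, which stays direct -->
          <mirrorOf>*,!central</mirrorOf>
          <url>http://server:8081/nexus/content/groups/public</url>
        </mirror>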


  • need help understanding a function.

    - by Adam McC
    I had previously asked for help writing/improving a function that I need to calculate a premium based on differing values for each month. The premium is split into 12 months and earned as a percentage for each month, so if the policy starts in March and we are in January, we will have earned 10 months' worth; I need to add up the monthly earnings to give the total earned. Each company will have differing earnings values for each month. My original code is here; it's ghastly and slow, hence the request for help. I was presented with the following code. The code works but returns stupendously large figures.

        begin
            set @begin = datepart(month, @outdate)
            set @end = datepart(month, @experiencedate)
            ;with a as (
                select *,
                    case calmonth
                        when 'january'   then 1
                        when 'february'  then 2
                        when 'march'     then 3
                        when 'april'     then 4
                        when 'may'       then 5
                        when 'june'      then 6
                        when 'july'      then 7
                        when 'august'    then 8
                        when 'september' then 9
                        when 'october'   then 10
                        when 'november'  then 11
                        when 'december'  then 12
                    end as Mnth
                from tblearningpatterns
                where clientname = @client
                  and earningpattern = @pattern
            )
            , b as (
                select earningvalue, Mnth, earningvalue as Ttl
                from a
                where Mnth = @begin
                union all
                select a.earningvalue, a.Mnth,
                       cast(b.Ttl * a.earningvalue as decimal(15,3)) as Ttl
                from a
                inner join b on a.Mnth = b.Mnth + 1
                where a.Mnth <= @end
            )
            select @earningvalue = Ttl
            from b
            inner join (
                select max(Mnth) as Mnth from b
            ) c on b.Mnth = c.Mnth
            option (maxrecursion 12)

            SET @earnedpremium = @earningvalue * @premium
        end

    Can someone please help me out?
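    A guess at the stupendously large figures: the recursive member of b multiplies the running total by each month's value (b.Ttl * a.earningvalue) instead of adding it, so the values compound. Since the stated goal is to add up the monthly earnings, the hedged one-line fix would be:

        -- recursive member of b: accumulate by addition, not multiplication
        select a.earningvalue, a.Mnth,
               cast(b.Ttl + a.earningvalue as decimal(15,3)) as Ttl
        from a
        inner join b on a.Mnth = b.Mnth + 1
        where a.Mnth <= @end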


  • CakePHP pagination with HABTM models

    - by nickf
    I'm having some problems creating pagination with a HABTM relationship. First, the tables and relationships:

        requests (id, to_location_id, from_location_id)
        locations (id, name)
        items_locations (id, item_id, location_id)
        items (id, name)

    So a Request has a Location the request is coming from and a Location the Request is going to. For this question, I'm only concerned with the "to" location:

        Request --belongsTo--> Location* --hasAndBelongsToMany--> Item    (* as "ToLocation")

    In my RequestsController, I want to paginate all the Items in a Request's ToLocation:

        // RequestsController
        var $paginate = array(
            'Item' => array(
                'limit' => 5,
                'contain' => array("Location")
            )
        );

        // RequestsController::add()
        $locationId = 21;
        $items = $this->paginate('Item', array("Location.id" => $locationId));

    This is failing because it generates this SQL:

        SELECT COUNT(*) AS count FROM items Item WHERE Location.id = 21

    I can't figure out how to make it actually use the "contain" argument of $paginate... Any ideas?


  • What's wrong with this inner query (MySQL)...

    - by stuboo
    ...besides the fact that I am a total amateur? My table is set up like this:

        CREATE TABLE `messages` (
          `id` int(6) unsigned NOT NULL AUTO_INCREMENT,
          `patient_id` int(6) unsigned NOT NULL,
          `message` varchar(255) NOT NULL,
          `savedate` int(10) unsigned NOT NULL,
          `senddate` int(10) unsigned NOT NULL,
          `SmsSid` varchar(40) NOT NULL COMMENT 'where we store the cookies from twilio',
          `sendorder` tinyint(3) unsigned NOT NULL COMMENT 'the order we want the msg sent in',
          `sent` tinyint(1) NOT NULL COMMENT '0=queued, 1=sent, 2=sent-unqueued, 4=rec-unread, 5=recd-read',
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=143 ;

    I need a query that will:

        SELECT * FROM `messages`
        WHERE `senddate` < $now
          AND `sent` = 0
        (AND LIMIT TO ONLY ONE RECORD PER `patient_id`)

    I've tried the following:

        SELECT * FROM `messages`
        WHERE `senddate` IN
              (SELECT `patient_id`, max(`senddate`) GROUP by `patient_id`)
          AND `senddate` < $now
          AND `sent` = 0;

    But I get this error (MySQL client version 5.1.37):

        #1064 - You have an error in your SQL syntax; check the manual that corresponds
        to your MySQL server version for the right syntax to use near
        'GROUP by patient_id) AND senddate < 1270093898 AND sent = 0 LIMIT 0, 30' at line 5
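    For reference, the subquery as written has no FROM clause (hence the syntax error), and an IN over the single column senddate can't match a two-column select anyway. A hedged rewrite using MySQL's row constructor against the same table:

        SELECT *
        FROM `messages`
        WHERE (`patient_id`, `senddate`) IN (
                -- newest senddate per patient
                SELECT `patient_id`, MAX(`senddate`)
                FROM `messages`
                GROUP BY `patient_id`
              )
          AND `senddate` < $now
          AND `sent` = 0;

    One subtlety: this computes the per-patient maximum before the sent filter, so a patient whose newest message is already sent drops out; if that is not the intent, the inner SELECT needs the same WHERE conditions.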


  • Sending an AJAX Request - Can't Get It to Work

    - by user357944
    I'm trying to make an AJAX GET request, but I simply cannot get it to work. I want to retrieve the HTML source of example.com. I've previously used jQuery to send AJAX requests, but I use jQuery only for its AJAX capabilities, so it's a waste to include the 30KB file for one task. What is it that I'm doing wrong?

        <script type="text/javascript">
        var XMLHttpArray = [
            function() {return new XMLHttpRequest()},
            function() {return new ActiveXObject("Msxml2.XMLHTTP")},
            function() {return new ActiveXObject("Msxml2.XMLHTTP")},
            function() {return new ActiveXObject("Microsoft.XMLHTTP")}
        ];

        function createXMLHTTPObject(){
            var xmlhttp = false;
            for(var i=0; i<XMLHttpArray.length; i++){
                try{
                    xmlhttp = XMLHttpArray[i]();
                }catch(e){
                    continue;
                }
                break;
            }
            return xmlhttp;
        }

        function AjaxRequest(url, method){
            var req = createXMLHTTPObject();
            req.onreadystatechange = function(){
                if(req.readyState != 4) return;
                if(req.status != 200) return;
                return req.responseText;
            }
            req.open(method, url, true);
            req.send(null);
        }

        function MakeRequst(){
            var result = AjaxRequest("http://example.com", "get");
            alert(result);
        }
        </script>
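    Two likely issues worth flagging: the request is asynchronous, so AjaxRequest returns before onreadystatechange ever runs and MakeRequst alerts undefined; and the same-origin policy will block a plain XMLHttpRequest to another domain such as example.com regardless. A callback-style sketch of the first fix:

        function AjaxRequest(url, method, callback) {
            var req = createXMLHTTPObject();
            req.onreadystatechange = function () {
                // hand the response to the caller instead of returning it
                if (req.readyState === 4 && req.status === 200) {
                    callback(req.responseText);
                }
            };
            req.open(method, url, true);
            req.send(null);
        }

        // usage: the alert fires when the response actually arrives
        AjaxRequest("/some-page-on-this-origin", "GET", function (html) {
            alert(html);
        });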


  • Storing large json strings to database + hash

    - by Guy
    I need to store quite large JSON data strings in the database. I am using gzip to compress the string and therefore the BLOB MySQL data type to store it. However, only 5% of all requests contain unique data, and only unique data ought to be stored in the database. My approach is as follows:

    1. array_multisort the data (the array [a, b, c] is virtually the same as [a, c, b]).
    2. json_encode the data (json_encode is faster than serialize, and we need a string representation of the array for step 3).
    3. sha1 the data (slower than md5, but collisions are less likely).
    4. Check if the hash exists in the database.
    5. If yes, do not insert the data; if no, gzip the data and store it along with the hash.

    Is there anything about this (apart from storing JSON data in the database in the first place) that sounds fishy or should be done a different way?

    P.S. We are talking about a database with roughly 1kk unique records being created every month.
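    The flow is easy to sanity-check in any language; here is a minimal Python sketch of the canonicalize-hash-dedupe pipeline. The table name and the db.exists/db.insert helpers are made up for illustration, and the list-order normalization (the array_multisort step) is left to the caller:

        import gzip, hashlib, json

        def canonical_hash(data):
            # stable key order so logically equal structures serialize identically;
            # sort any order-insensitive lists before calling this
            blob = json.dumps(data, sort_keys=True, separators=(",", ":"))
            return hashlib.sha1(blob.encode("utf-8")).hexdigest(), blob

        def store_if_unique(db, data):
            digest, blob = canonical_hash(data)
            # db.exists / db.insert are hypothetical helpers, not a real driver API
            if not db.exists("SELECT 1 FROM payloads WHERE hash = %s", digest):
                db.insert("INSERT INTO payloads (hash, body) VALUES (%s, %s)",
                          digest, gzip.compress(blob.encode("utf-8")))

    One design note: a UNIQUE index on the hash column plus INSERT IGNORE (or catching the duplicate-key error) closes the race between the existence check and the insert.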


  • Find existence of number in a sorted list in constant time? (Interview question)

    - by Rich
    I'm studying for upcoming interviews and have encountered this question several times (written verbatim): find or determine the non-existence of a number in a sorted list of N numbers, where the numbers range over M, M > N, and N is large enough to span multiple disks. Algorithm to beat O(log n); bonus points for a constant-time algorithm.

    First of all, I'm not sure this is a question with a real solution. My colleagues and I have mused over this problem for weeks and it seems ill-formed (of course, just because we can't think of a solution doesn't mean there isn't one). A few questions I would have asked the interviewer: Are there repeats in the sorted list? What's the relationship between the number of disks and N?

    One approach I considered was to binary search the min/max of each disk to determine the disk that should hold the number, if it exists, and then binary search on the disk itself. Of course this is only an order-of-magnitude speedup if the number of disks is large and you also have a sorted list of disks. I think this would yield some sort of O(log log n) time. As for the M > N hint, perhaps if you know how many numbers are on a disk and what the range is, you could use the pigeonhole principle to rule out some cases some of the time, but I can't figure out an order-of-magnitude improvement. Also, "bonus points for constant time algorithm" makes me a bit suspicious. Any thoughts, solutions, or relevant history of this problem?
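    On the beat-O(log n) part: the textbook candidate is interpolation search, which makes expected O(log log n) probes when the values are roughly uniformly distributed over their range (and degrades to O(n) on adversarial data). A Python sketch, with an in-memory list standing in for the disk-resident one:

        def interpolation_search(a, x):
            """Return True iff x occurs in the sorted list a."""
            lo, hi = 0, len(a) - 1
            while lo <= hi and a[lo] <= x <= a[hi]:
                if a[hi] == a[lo]:                 # flat range: avoid division by zero
                    return a[lo] == x
                # probe where x "should" sit proportionally between a[lo] and a[hi]
                mid = lo + (x - a[lo]) * (hi - lo) // (a[hi] - a[lo])
                if a[mid] == x:
                    return True
                if a[mid] < x:
                    lo = mid + 1
                else:
                    hi = mid - 1
            return False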


  • Storing data in XML or MongoDB

    - by user766473
    Here is my use case:

    1. I have some data which I am storing for now in XML files. The data I am storing is not persistent, i.e., I will be deleting the user data once the user logs out.
    2. My server communicates with the client using XML requests and responses. So initially we decided, since we are sending XML as the response, to store it as XML so that the conversion time from database to XML format is saved.
    3. The client will request XML based on some filter conditions, so we will have to use XQuery.
    4. A maximum of 100 entries will be in an XML file, at least as of now.

    Now I would like to hear some advice on whether I should use XML or MongoDB. My concerns:

    1. How good is it to store temporary data in MongoDB and delete it / take a backup once done with the session?
    2. Conversion from MongoDB's JSON format to XML.
    3. Handling changes in the schema design.

    We can't use any DB other than MongoDB, as some persistent operations are still done on MongoDB. Thanks in advance.


  • Dynamic Programming Approach - Knapsack Puzzle

    - by idalsin
    I'm trying to solve the Knapsack problem with the dynamic programming (DP) approach, with Python 3.x. My TA pointed us towards this code for a head start. I've tried to implement it, as below:

        def take_input(infile):
            f_open = open(infile, 'r')
            lines = []
            for line in f_open:
                lines.append(line.strip())
            f_open.close()
            return lines

        def create_list(jewel_lines):
            # turns the jewels into a list of lists
            jewels_list = []
            for x in jewel_lines:
                weight = x.split()[0]
                value = x.split()[1]
                jewels_list.append((int(value), int(weight)))
            jewels_list = sorted(jewels_list, key = lambda x : (-x[0], x[1]))
            return jewels_list

        def dynamic_grab(items, max_weight):
            table = [[0 for weight in range(max_weight+1)] for j in range(len(items)+1)]
            for j in range(1, len(items)+1):
                val = items[j-1][0]
                wt = items[j-1][1]
                for weight in range(1, max_weight+1):
                    if wt > weight:
                        table[j][weight] = table[j-1][weight]
                    else:
                        table[j][weight] = max(table[j-1][weight],
                                               table[j-1][weight-wt] + val)
            result = []
            weight = max_weight
            for j in range(len(items), 0, -1):
                was_added = table[j][weight] != table[j-1][weight]
                if was_added:
                    val = items[j-1][0]
                    wt = items[j-1][1]
                    result.append(items[j-1])
                    weight -= wt
            return result

        def totalvalue(comb):
            # total of a combo of items
            totwt = totval = 0
            for val, wt in comb:
                totwt += wt
                totval += val
            return (totval, -totwt) if totwt <= max_weight else (0, 0)

        # required setup of variables
        infile = "JT_test1.txt"
        given_input = take_input(infile)
        max_weight = int(given_input[0])
        given_input.pop(0)
        jewels_list = create_list(given_input)

        # test lines
        print(jewels_list)
        print(greedy_grab(jewels_list, max_weight))
        bagged = dynamic_grab(jewels_list, max_weight)
        print(totalvalue(bagged))

    The sample case is below; line[0] is bag_max, and each following line is of the form (weight, value):

        575
        125 3000
        50 100
        500 6000
        25 30

    I'm confused as to the logic of this code in that it returns me a tuple, and I'm not sure what the output tuple represents. I've been looking at this for a while and just don't understand what the code is pointing me at. Any help would be appreciated.
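    On the tuple: totalvalue returns (total value, -total weight), with the weight negated so that tuple comparison prefers higher value and breaks ties toward lower weight. Working the sample case through by hand (my arithmetic, not output I have actually run):

        # bag_max = 575; items as (value, weight) after create_list:
        # (6000, 500), (3000, 125), (100, 50), (30, 25)
        # best subset within 575: 6000 + 100 + 30 = 6130 at weight 500 + 50 + 25 = 575
        # so print(totalvalue(bagged)) should show (6130, -575)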


  • limiting mysql results by range of a specific key INCLUDING DUPLICATES

    - by aVC
    I have a query:

        SELECT p.*, m.*,
               (SELECT COUNT(*) FROM newPhotoonAlert n
                WHERE n.userIDfor='$id' AND n.threadID=p.threadID AND n.seen='0') AS unReadCount
        FROM posts p
        JOIN myMembers m ON m.id = p.user_id
        LEFT JOIN following f ON (p.user_id = f.user_id AND f.follower_id='$id'
                                  AND f.request='0' AND f.status='1')
        JOIN myMembers searcher ON searcher.id = '$id'
        WHERE ((f.follower_id = searcher.id) OR m.id='$id')
          AND p.flagged < '5'
        ORDER BY p.threadID DESC, p.positionID

    It brings results as expected, but I want to add another clause to limit the results. Say a sample (minimal) set of data looks like this with the above query:

        threadID  postID  positionID  url
        564       1254    2           a.com
        564       1245    1           a1.com
        541       1215    3           b1.com
        541       1212    2           b2.com
        541       1210    1           b3.com
        523       745     1           c1.com
        435       689     2           d2.com
        435       688     1           a4.com
        256       345     1           s3.com
        164       316     1           f1.com

    I want to get the rows corresponding to two distinct threadIDs, starting from the max, but I want to include the duplicates as well. Something like:

        AND p.threadID IN (select just two of all the threadIDs currently selected,
                           but include duplicate rows)

    So my result should be:

        threadID  postID  positionID  url
        564       1254    2           a.com
        564       1245    1           a1.com
        541       1215    3           b1.com
        541       1212    2           b2.com
        541       1210    1           b3.com
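    One hedged way to express "the two highest distinct threadIDs, duplicates included" in MySQL; the LIMIT has to sit inside a derived table, because MySQL rejects LIMIT directly inside an IN subquery:

        AND p.threadID IN (
            SELECT threadID FROM (
                -- the two largest distinct thread ids
                SELECT DISTINCT threadID
                FROM posts
                ORDER BY threadID DESC
                LIMIT 2
            ) AS top_threads
        )

    If the "top two" must respect the outer query's follower filters, the inner SELECT would need those same joins and conditions as well.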


  • make it simpler and efficient

    - by gcc
        temp1 = *tutar[1];                  /* i hold input in char *tutar[] */
        if (temp1 != 'x' || temp1 != 'n')
            arrays[1] = malloc(sizeof(int) * num_arrays);   /* if second input is int */
        a = 0;
        n = i;
        for (i = 1; i < n; ++i) {
            temp1 = *tutar[i];
            if (temp1 == 'd') {
                ++i;
                j = atoi(tutar[i]);
                free(arrays[j]);
                continue;
            }
            if (temp1 == 'x')
                break;
            if (temp1 == 'n') {             /* if it is n */
                a = 0;
                ++j;
                arrays[j] = malloc(sizeof(int) * num_arrays);   /* create and allocate */
                continue;
            }
            ++a;
            if (a > num_arrays)             /* resize the array */
                arrays[j] = realloc(arrays[j], sizeof(int) * (num_arrays + a));
            *(arrays[j] + a - 1) = atoi(tutar[i]);
            printf("%d", arrays[1][1]);
        }

    arrays is a pointer. When you see x, exit; when you see n, create a new array (the old one is array[a], the new one is array[i+1]); when you see d, delete the array whose index is the int after d. The first number in the input is the maximum size of the arrays. Where is the error in this code? The input is composed of ints and the commands n, d, and x. I am making a program that takes input (the first input must be an int), and that, according to the commands in the input (n, d, or j), fills the arrays with numbers while using memory efficiently; j means jump to array[x] (x is the int coming after j in the input).
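    One concrete bug worth flagging: temp1 != 'x' || temp1 != 'n' is true for every character, since any character differs from at least one of the two, so the guard filters nothing. The intended condition is presumably a conjunction:

        /* allocate only when the token is neither 'x' nor 'n' */
        if (temp1 != 'x' && temp1 != 'n')
            arrays[1] = malloc(sizeof(int) * num_arrays);

    Also note that, within the snippet as posted, n = i reads i before the loop has given it a value; n presumably should be the count of input tokens.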


  • Prevent cached objects from ending up in the database with Entity Framework

    - by Dirk Boer
    We have an ASP.NET project with Entity Framework and SQL Azure. A big part of our data only needs to be updated a few times a day; other data is very volatile. The data that barely changes we cache in memory at startup, detach from the context, and then use mainly for reading, drastically lowering the number of database requests we have to make. The volatile data is requested on every HTTP request through its own DbContext.

    When we update the cached data, we send a message to all instances to fetch a fresh version of all the data from the SQL server. So far, so good. Until we introduced a bug that linked one of these 'cached' objects to the 'volatile' data and did a SaveChanges. Well, that was quite a mess: the whole data tree was added again and again by every update, corrupting the whole database with a great deal of duplicated data.

    As a complete hack, I added a completely arbitrary column with a UniqueConstraint and some gibberish data on one of the root tables, hopefully failing the SaveChanges() the next time we introduce such a bug, because it will violate the unique constraint. But it is of course hacky, and I'm still pretty scared ;P

    Are there any better ways to prevent whole trees of cached objects from ending up in the database?

    More information: the project is ASP.NET MVC. I cache this data because it is mainly read-only, and this saves tons of extra database calls per HTTP request.


  • Dynamic Image Caching with Java

    - by zteater
    I have a servlet with an API that delivers images from GET requests. The servlet creates a data file of CAD commands based on the parameters of the GET request. This data file is then handed to an image parser, which creates an image on the file system. The servlet reads the image and returns the bytes in the response. All of the IO and the calling of the image parser program can be very taxing, and images of around 80KB are rendering in 3000-4000ms on a local system.

    There are roughly 20 parameters that make up the GET request, each correlating to a different portion of the image, so the number of possible image combinations is extremely large. To alleviate the loading time, I plan to store BLOBs of rendered images in a database. If a GET request matches one previously executed, I will pull from the cache; otherwise I will render a new image. This does not fix the "first-time" run, but will help runs n+1. Any other ideas on how I can improve performance?
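    A minimal Java sketch of the cache-key step under discussion: since the ~20 GET parameters define the image, hash a canonical (sorted) rendering of them and use the digest as the BLOB lookup key. The class and the surrounding servlet wiring are invented for illustration:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.util.Map;
        import java.util.TreeMap;

        public final class ImageCacheKey {

            /** Canonical key: parameters sorted by name, so ?a=1&b=2 and ?b=2&a=1 match. */
            public static String keyFor(Map<String, String> params) throws Exception {
                StringBuilder canonical = new StringBuilder();
                for (Map.Entry<String, String> e : new TreeMap<>(params).entrySet()) {
                    canonical.append(e.getKey()).append('=').append(e.getValue()).append('&');
                }
                byte[] digest = MessageDigest.getInstance("SHA-1")
                        .digest(canonical.toString().getBytes(StandardCharsets.UTF_8));
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) hex.append(String.format("%02x", b & 0xff));
                return hex.toString();
            }
        }

    The servlet would then SELECT the BLOB by this key and invoke the CAD parser only on a miss, writing the rendered bytes back under the same key.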


  • grails services :: multiple projects

    - by naveen
    PROBLEM: I have multiple Grails projects (let's say appA, appB, and appC), services to be precise. I want to run them in a single grails-app, probably as a WAR deployment. How can I do this?

    REQUIREMENTS: I want this to be a single app, since I am deploying it on a cloud and I don't have enough memory to hold all of these service instances individually. The reason for multiple Grails projects is scalability: so that if later on I want to run 10 instances of appA, 3 instances of appB, and 1 instance of appC, I should be able to do that.

    EDIT: Can I use something like 0MQ? Will that be helpful in keeping the services separated from each other, and how would I package my services? Reading the 0MQ docs, it seems it can work both in-process and with external processes. Will async Grails HTTP requests work with 0MQ in-process/external MQ calls? I haven't used 0MQ, but from the initial docs it seems workable; I need input from someone with experience in this scenario. Are there any other alternatives, or MQ alternatives?


  • How to limit EditText lines to 1 by coding (ignoring enter)?

    - by Vahe Musinyan
    I am trying to ignore the Enter key, but I do not want to use the onKeyDown() function. There are ways to do this in XML:

        1. android:maxLines="1"
        2. android:lines="1"
        3. android:singleLine="true"

    I actually want to do the last one in code. Does anyone know how to do that?

        for (int i = 0; i < numClass; i++) {
            temp_ll = new LinearLayout(this);
            temp_ll.setOrientation(LinearLayout.HORIZONTAL);

            temp1 = new EditText(this);
            InputFilter[] FilterArray = new InputFilter[1];
            FilterArray[0] = new InputFilter.LengthFilter(12);
            temp1.setFilters(FilterArray);    // set edit text length to max 12
            temp1.setHint(" class name ");
            temp1.setSingleLine(true);

            temp_ll.addView(temp1);
            frame.addView(temp_ll);
        }
        ll.addView(frame);


  • Textbox time validations (javascript)

    - by unos
    I have a textbox in which the user can enter a time (e.g., 01:00) and also a dropdown box for selecting AM/PM (since the AM/PM field is used, the 12-hour time format applies). The textbox allows a max entry of 5 characters (e.g., 01:00). Please let me know:

    1. How can I set the 3rd character to a default colon (:), so that the user simply has to enter only the time digits?
    2. How do I check whether the time entered by the user is numeric?
    3. Autocompletion: e.g., if the user enters 1, it would automatically be set to 01:00.
    4. JavaScript validation for the 12-hour format: e.g., if the user enters 13:00, it should change to 01:00.
    5. How can I append the AM/PM value selected in the dropdown box to the textbox time value? Once the values are appended, automatically populate another textbox (textbox 2) with the result, e.g., 01:00 + PM should be set as 01:00p in textbox 2.

    Any help would be appreciated.
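    A hedged JavaScript sketch covering the numeric check, the 12-hour clamping, and the AM/PM suffix in one normalize step; the element ids are made up, and wiring this to blur/change events is left out:

        // normalize "1", "1:00", "13:00" etc. into "hh:mm" on a 12-hour clock
        function normalizeTime(raw) {
            var m = /^(\d{1,2})(?::(\d{1,2}))?$/.exec(raw.replace(/\s/g, ""));
            if (!m) return null;                      // not numeric hh[:mm]
            var h = parseInt(m[1], 10) % 12 || 12;    // 13 -> 1, 0 -> 12
            var min = Math.min(parseInt(m[2] || "0", 10), 59);
            return ("0" + h).slice(-2) + ":" + ("0" + min).slice(-2);
        }

        function updateResult() {
            var t = normalizeTime(document.getElementById("time").value);
            if (t === null) return;                   // leave invalid input alone
            var ampm = document.getElementById("ampm").value;   // "am" or "pm"
            document.getElementById("result").value = t + ampm.charAt(0);
        }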


  • Aborting $.post() / responsive search results

    - by Emphram Stavanger
    I have the following kludgey code.

    HTML:

        <input type="search" id="search_box" />
        <div id="search_results"></div>

    JS:

        var search_timeout, search_xhr;
        $("#search_box").bind("textchange", function(){
            clearTimeout(search_timeout);
            search_xhr.abort();
            search_term = $(this).val();
            search_results = $("#search_results");
            if(search_term == "") {
                if(search_results.is(":visible"))
                    search_results.stop().hide("blind", 200);
            } else {
                if(search_results.is(":hidden"))
                    search_results.stop().show("blind", 200);
            }
            search_timeout = setTimeout(function () {
                search_xhr = $.post("search.php", { q: search_term }, function(data){
                    search_results.html(data);
                });
            }, 100);
        });

    (This uses the textchange plugin by Zurb.) The problem I had with my original, simpler code was that it was horribly unresponsive: results would appear seconds later, especially when typing slowly, or when Backspace was used, etc. I wrote all this, and the situation isn't much better; requests pile up. My intention is to use .abort() to cancel whatever previous request is still running when the textchange event fires again (as per question 446594). This doesn't work, as I get repeated errors like this in the console:

        Uncaught TypeError: Cannot call method 'abort' of undefined

    How can I make .abort() work in my case? Furthermore, is this approach the best way to fetch 'realtime' search results? Much like Facebook's search bar, which gives results as the user types and seems to be very quick on its feet.
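    The TypeError itself is just the first event firing before any request exists, so search_xhr is still undefined; guarding the abort fixes that much:

        $("#search_box").bind("textchange", function () {
            clearTimeout(search_timeout);
            if (search_xhr) {        // only abort once a request has actually been made
                search_xhr.abort();
                search_xhr = null;
            }
            // ... rest of the handler unchanged
        });

    Whether 100ms of debounce is enough to keep requests from piling up is a separate tuning question.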


  • Pagination in Java

    - by user569125
    I wrote paging logic. My requirement: display 100 elements per page; if I click Next it should display the next 100 records, and Previous the previous 100. Initial variable values: showFrom = 1, showTo = 100; max elements depends on the size of the data; pageSize = 100.

    Code:

        if(paging.getAction().equalsIgnoreCase("Next")){
            paging.setTotalRec(availableList.size());
            showFrom = (showTo + 1);
            showTo = showFrom + 100 - 1;
            if(showTo >= paging.getTotalRec())
                showTo = paging.getTotalRec();
            paging.setShowFrom(showFrom);
            paging.setShowTo(showTo);
        } else if(paging.getAction().equalsIgnoreCase("Previous")){
            showTo = showFrom - 1;
            showFrom = (showFrom - 100);
            paging.setShowTo(showTo);
            paging.setShowFrom(showFrom);
            paging.setTotalRec(availableList.size());
        }

    Here I can remove elements from and add elements to the existing data. The above code works fine if I add and remove a few elements, but if I remove or add 100 elements at a time, the counts are not displayed properly.
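    A hedged sketch of a sturdier approach: derive the window from a page index and the live list size on every render, instead of incrementally nudging showFrom/showTo, so bulk adds and removes cannot leave stale bounds. The class and method names are invented:

        public final class Pager {
            private final int pageSize;
            private int page;                 // zero-based page index

            public Pager(int pageSize) { this.pageSize = pageSize; }

            public void next(int totalRec) {
                if ((page + 1) * pageSize < totalRec) page++;
            }

            public void previous() {
                if (page > 0) page--;
            }

            /** 1-based lower bound, clamped if rows were removed since the last click. */
            public int showFrom(int totalRec) {
                if (page * pageSize >= totalRec) page = Math.max(0, (totalRec - 1) / pageSize);
                return page * pageSize + 1;
            }

            /** 1-based upper bound, never past the end of the data. */
            public int showTo(int totalRec) {
                return Math.min((page + 1) * pageSize, totalRec);
            }
        }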


  • pxe boot fails with message: no DEFAULT or UI configuration directive found

    - by spockaroo
    I am trying to PXE-boot a machine (client), and in the process I am setting up a TFTP server that this machine can boot from. The server runs Ubuntu 10.10, and on it I have set up DHCP, DNS, NFS, and tftp-hpa servers. All the servers/daemons start fine. I tested the TFTP server by using a TFTP client and downloading a file that the server directory hosts.

    My /etc/xinetd.d/tftp looks like this:

        service tftp
        {
            disable     = no
            socket_type = dgram
            wait        = yes
            user        = nobody
            server      = /usr/sbin/in.tftpd
            server_args = -v -s /var/lib/tftpboot
            only_from   = 10.1.0.0/24
            interface   = 10.1.0.1
        }

    My /etc/default/tftpd-hpa looks like this:

        RUN_DAEMON="yes"
        OPTIONS="-l -s /var/lib/tftpboot"
        TFTP_USERNAME="tftp"
        TFTP_DIRECTORY="/var/lib/tftpboot"
        TFTP_ADDRESS="0.0.0.0:69"
        TFTP_OPTIONS="--secure"

    My /var/lib/tftpboot/ directory looks like this:

        initrd.img-2.6.35-25-generic-pae
        vmlinuz-2.6.35-25-generic-pae
        pxelinux.0
        pxelinux.cfg/
            default

    I did:

        sudo chmod 644 /var/lib/tftpboot/pxelinux.cfg/default
        chmod 755 /var/lib/tftpboot/initrd.img-2.6.35-25-generic-pae
        chmod 755 /var/lib/tftpboot/vmlinuz-2.6.35-25-generic-pae

    /var/lib/tftpboot/pxelinux.cfg/default has the following contents:

        SERIAL 0 19200 0
        LABEL linux
            KERNEL vmlinuz-2.6.35-25-generic-pae
            APPEND root=/dev/nfs initrd=initrd.img-2.6.35-25-generic-pae nfsroot=10.1.0.1:/nfsroot ip=dhcp console=ttyS0,19200n8 rw

    I copied /var/lib/tftpboot/pxelinux.0 from /usr/lib/syslinux/ after installing the package syslinux-common. Also, just for completeness, /etc/dhcp3/dhcpd.conf has the following lines (relevant to this interface):

        subnet 10.1.0.0 netmask 255.255.255.0 {
            range 10.1.0.100 10.1.0.240;
            option routers 10.1.0.1;
            option broadcast-address 10.1.0.255;
            option domain-name-servers 10.1.0.1;
            filename "pxelinux.0";
        }

    When I boot the client machine and watch the output over the serial port, I notice that the client requests an IP address from the server and gets it. Then I see TFTP displayed, indicating that it is trying to connect to the TFTP server. This succeeds, and I see TFTP.|, which returns immediately, displaying the following message:

        PXELINUX 4.01 debian-20100714  Copyright (C) 1994-2010 H. Peter Anvin et al
        No DEFAULT or UI configuration directive found!
        boot:

    /var/log/syslog shows:

        Feb 20 15:24:05 ch in.tftpd[2821]: tftp: client does not accept options

    What option is it talking about in the syslog? I assume it is referring to OPTIONS or TFTP_OPTIONS, but what am I doing wrong?
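    Reading the error against the config: the file defines LABEL linux but never names a default, and PXELINUX prints exactly this complaint when no DEFAULT (or UI) directive is present, then drops to the boot: prompt. A hedged version of the same file with one added:

        SERIAL 0 19200 0
        DEFAULT linux
        LABEL linux
            KERNEL vmlinuz-2.6.35-25-generic-pae
            APPEND root=/dev/nfs initrd=initrd.img-2.6.35-25-generic-pae nfsroot=10.1.0.1:/nfsroot ip=dhcp console=ttyS0,19200n8 rw

    The "client does not accept options" line in syslog concerns TFTP option negotiation (RFC 2347 blksize/tsize), and to my knowledge it is usually harmless noise rather than the cause.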


  • IIS6 Log time recording problems

    - by Hafthor
    On three separate occasions, on two separate servers, at nearly the same times, 6.9 hours seemingly went by without any data being written to the IIS logs; but on closer inspection, it appears that it was all recorded at once. Here are the facts as I know them:

    - Windows Server 2003 R2 with IIS6
    - Logging uses GMT; server local time is GMT-7
    - The application was still operating, and I have SQL data to prove it
    - The time gaps appear within one log file, not across two
    - #headers appear at the gap
    - The load balancer pings every 30 seconds
    - No caching

    Here's the info on a particular case: an entry appears for 2009-09-21 18:09:27, then #headers; the next entry is for 2009-09-22 01:21:54, and so are the next 1600 entries in this log file and 370 in the next log file. About half of the ~2000 entries at that timestamp are load balancer pings (estimated at 2/min for 6.9 hrs = 828 pings); then entries are recorded as normal. I believe these events may coincide with me deploying an ASP.NET application update onto those machines. Here's some relevant content from the logs in question:

    ex090921.log, from line 3684:

        2009-09-21 17:54:40 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:55:11 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:55:42 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:56:13 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:56:45 GET /ping.aspx - 80 404 0 0 3733 122 0
        #Software: Microsoft Internet Information Services 6.0
        #Version: 1.0
        #Date: 2009-09-21 18:04:37
        #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
        2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 3078
        2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 109
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 3828
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0
        ... continues until line 5449
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        <eof>

    ex090922.log:

        #Software: Microsoft Internet Information Services 6.0
        #Version: 1.0
        #Date: 2009-09-22 00:00:16
        #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        ... continues until line 367
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        2009-09-22 01:04:30 GET /ping.aspx - 80 200 0 0 277 122 0
        ... back to normal behavior

    Note the seemingly correct date/time written to the #header of the new log file. Also note that /ping.aspx returned 404 and then switched to 200 just as the problem started: I rename the "I'm alive" page so the load balancer stops sending requests to the server while I'm working on it, and what you see here is me renaming it back so the load balancer will use the server again. So this problem definitely coincides with me re-enabling the server. Any ideas?


  • Lighttpd 403 Errors on HTML and PHP pages

    - by Brian
    I installed lighttpd on CentOS 5.5 64-bit. Everything seems fine and running, except I cannot get past 403 errors on both HTML and PHP pages. I have used chmod and chown, changed ownership in the config file, and done everything possible, but have been stuck for 2 days. I'd appreciate any help, and here's hoping it's a stupid error on my part. Here is the log file with debug options on:

        2011-02-21 11:23:13: (request.c.304) fd: 7 request-len: 408
        GET /index.html HTTP/1.1
        Host: 10.0.1.8
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Cache-Control: max-age=0
        2011-02-21 11:23:13: (response.c.241) run condition
        2011-02-21 11:23:13: (response.c.300) -- splitting Request-URI
        2011-02-21 11:23:13: (response.c.301) Request-URI : /index.html
        2011-02-21 11:23:13: (response.c.302) URI-scheme : http
        2011-02-21 11:23:13: (response.c.303) URI-authority: 10.0.1.8
        2011-02-21 11:23:13: (response.c.304) URI-path : /index.html
        2011-02-21 11:23:13: (response.c.305) URI-query :
        2011-02-21 11:23:13: (response.c.349) -- sanatising URI
        2011-02-21 11:23:13: (response.c.350) URI-path : /index.html
        2011-02-21 11:23:13: (response.c.470) -- before doc_root
        2011-02-21 11:23:13: (response.c.471) Doc-Root : /srv/www/lighttpd
        2011-02-21 11:23:13: (response.c.472) Rel-Path : /index.html
        2011-02-21 11:23:13: (response.c.473) Path :
        2011-02-21 11:23:13: (response.c.521) -- after doc_root
        2011-02-21 11:23:13: (response.c.522) Doc-Root : /srv/www/lighttpd
        2011-02-21 11:23:13: (response.c.523) Rel-Path : /index.html
        2011-02-21 11:23:13: (response.c.524) Path : /srv/www/lighttpd/index.html
        2011-02-21 11:23:13: (response.c.541) -- logical -> physical
        2011-02-21 11:23:13: (response.c.542) Doc-Root : /srv/www/lighttpd
        2011-02-21 11:23:13: (response.c.543) Rel-Path : /index.html
        2011-02-21 11:23:13: (response.c.544) Path : /srv/www/lighttpd/index.html
        2011-02-21 11:23:13: (response.c.561) -- handling physical path
        2011-02-21 11:23:13: (response.c.562) Path : /srv/www/lighttpd/index.html
        2011-02-21 11:23:13: (response.c.608) -- access denied
        2011-02-21 11:23:13: (response.c.609) Path : /srv/www/lighttpd/index.html
        2011-02-21 11:23:13: (response.c.128) Response-Header:
        HTTP/1.1 403 Forbidden
        Content-Type: text/html
        Content-Length: 345
        Date: Mon, 21 Feb 2011 16:23:13 GMT
        Server: lighttpd/1.4.28

    Here is the directory listing; I used chown to set ownership to lighttpd:lighttpd:

        [root@localhost lighttpd]# ls -al
        total 40
        drwxrwxrwx 2 lighttpd lighttpd 4096 Feb 21 10:48 .
        drwxrwxrwx 3 lighttpd lighttpd 4096 Feb 21 10:57 ..
        -rwxrwxrwx 1 lighttpd lighttpd   10 Feb 20 08:32 index.html
        -rwxrwxrwx 1 lighttpd lighttpd   20 Feb 21 10:48 index.php
        -rwxrwxrwx 1 lighttpd lighttpd   20 Feb 21 10:39 info.php

    Requested commands:

        [root@localhost lighttpd]# ls -ld / /srv /srv/www
        drwxr-xr-x 22 root     root     4096 Feb 21 04:39 /
        drwxrwxrwx  3 lighttpd lighttpd 4096 Feb 20 07:38 /srv
        drwxrwxrwx  3 lighttpd lighttpd 4096 Feb 21 10:57 /srv/www

        [root@localhost lighttpd]# ps auxZ | grep lighttpd
        root:system_r:httpd_t lighttpd 3842 0.0 0.2 48368 896 ? S 12:24 0:00 /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf
        root:system_r:unconfined_t:SystemLow-SystemHigh root 3845 0.0 0.2 61152 764 pts/0 R+ 12:24 0:00 grep lighttpd
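    One hedged observation from the ps auxZ output: lighttpd is running confined as httpd_t under SELinux, and a confined web server will return 403 even on world-readable files if they carry the wrong SELinux label, which chmod/chown never touch. Checking and restoring the expected label would look like:

        # show the SELinux labels on the documents (-Z)
        ls -lZ /srv/www/lighttpd
        # relabel the tree to the default web-content type
        restorecon -Rv /srv/www/lighttpd
        # or set it explicitly
        chcon -R -t httpd_sys_content_t /srv/www/lighttpd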

