Search Results

Search found 10693 results on 428 pages for 'max requests'.

Page 363/428 | < Previous Page | 359 360 361 362 363 364 365 366 367 368 369 370  | Next Page >

  • nginx multiple domain virtual host configuration

    - by Poe
    I'm setting up nginx with multiple domain or wildcard support for convenience's sake, rather than setting up 50+ different sites-available/* files. Hopefully this is enough to show you what I'm trying to do. Some are static sites, some are dynamic, usually with WordPress installed. If an index.php exists, everything works as expected. If a file is requested that does not exist (missing.html), a 500 error is given due to the rewrite. The logged error is:

        *112 rewrite or internal redirection cycle while processing "/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/missing.html"

    The basic nginx configuration I'm currently using is:

        listen 80 default;
        server_name _;
        ...
        location / {
            root /var/www/$host;
            if (-f $request_filename) {
                expires max;
                break;
            }
            # problem: what if index.php does not exist?
            if (!-e $request_filename) {
                rewrite ^/(.*)$ /index.php/$1 last;
            }
        }
        ...

    If an index.php does not exist, and the requested file also does not exist, I would like it to return a 404. Currently nginx does not support multiple-condition ifs or nested ifs, so I need a workaround.

    Read the article

  • Altering an embedded truetype font so it will be useable by Windows GDI

    - by Ritsaert Hornstra
    I am trying to render PDF content to a GDI device context (a 24-bit bitmap, to be exact). Parsing the PDF stream into PDF objects and rendering the PDF commands from the content dictionary works well, including font rendering. Embedded fonts are decompressed from their FontFile streams and "loaded" using AddFontMemResourceEx. Now, some embedded fonts omit TrueType tables that are needed by GDI, like the NAME table. Because of this, I tried to modify the font by parsing the TrueType subset font into its tables and fixing the tables that have data missing; missing tables are regenerated with as much correct information as possible. I use the Microsoft Font Validator tool to see how "correct" the generated font is. I still get a few errors, for example that the max values in the maxp table are usually too large (it is a subset), or that the xAvgCharWidth field of the OS/2 table does not equal the calculated value, but such errors do not stop other embedded fonts from being usable. The fonts embedded using PDFCreator are the ones that are problematic. Questions:

    - How can I determine what I need to change in the font file in order for GDI to be able to use it?
    - Are there any other font validation tools that might give me insight into what is still wrong with the font file?

    If needed, I can make an original font file and an altered font file available for download somewhere.

    Read the article

  • Possibility of a custom Contacts field with a set list of values and Contacts lookup performance

    - by areyling
    I'm pretty sure it's not viable to do what I'd like to, based on some initial research, but I figured it couldn't hurt to ask the community of experts here in case someone knows a way. I'd like to create a custom field for contacts that the user is able to edit from the main Contacts app; however, the user should only be allowed to select from a list of four specific values. A short list of string values would be ideal, but an int with a min/max range would suffice. I'm interested in knowing whether it's possible either way, but also wondering if it makes sense to go this route performance-wise. More specifically, would it be better to look up a contact (based on a phone number) each time a call or SMS message is received, or better to store my own set of data (consisting of name, numbers, and the custom field) and just sync contact info in a thread every so often? Or sync contacts the first time the app is run and then register for changes using a ContentObserver? Here is a similar question with an answer that explains how to add a custom field to a contact. Thanks in advance.

    Read the article

  • Piping EOF problems with stdio and C++/Python

    - by yeus
    I've got some problems with EOF and stdio, and I have no idea what I am doing wrong. When I see an EOF in my program I clear stdin, and on the next round I try to read a new line. The problem is: for some reason getline immediately returns an EOF (from the second iteration onward; the first works just as intended) instead of waiting for new input from the Python process. Any idea? Here is the code:

        #include <string>
        #include <iostream>
        #include <iomanip>
        #include <limits>
        using namespace std;

        int main(int argc, char **argv) {
            for (;;) {
                string buf;
                if (getline(cin, buf)) {
                    if (buf == "q") break;
                    // do some stuff with the input (my actual filter program)
                    cout << buf;
                } else {
                    if ((cin.rdstate() & istream::eofbit)  != 0) cout << "eofbit"  << endl;
                    if ((cin.rdstate() & istream::failbit) != 0) cout << "failbit" << endl;
                    if ((cin.rdstate() & istream::badbit)  != 0) cout << "badbit"  << endl;
                    if ((cin.rdstate() & istream::goodbit) != 0) cout << "goodbit" << endl;
                    cin.clear();
                    cin.ignore(numeric_limits<streamsize>::max());
                    // not using break here, because I want more input
                    // when the parent process puts data into stdin
                }
            }
            return 0;
        }

    and in Python:

        from subprocess import Popen, PIPE
        import os
        from time import sleep

        proc = Popen(os.getcwd() + "/Pipingtest", stdout=PIPE, stdin=PIPE, stderr=PIPE)
        while(1):
            sleep(0.5)
            print proc.communicate("1 1 1")
            print "running"
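    (As an aside, not part of the question: subprocess.communicate() writes its input, closes the child's stdin and waits for the process to exit, so calling it in a loop cannot keep a conversation going. A rough Python 3 sketch of line-by-line interaction, assuming the child writes one newline-terminated, flushed line back per input line:)

        # Hypothetical sketch: talk to a line-oriented child process one line at a time.
        # Assumes the child ("./Pipingtest") replies with one flushed line per input line.
        import subprocess

        proc = subprocess.Popen(
            ["./Pipingtest"],
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            text=True,      # work with str instead of bytes
            bufsize=1,      # line-buffered on our side
        )

        for msg in ["1 1 1", "2 2 2", "q"]:
            proc.stdin.write(msg + "\n")
            proc.stdin.flush()          # make sure the child actually sees the line
            if msg == "q":
                break
            reply = proc.stdout.readline()
            print("child said:", reply.rstrip())

        proc.wait()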

    Read the article

  • Help me to refactor this F# code to tail recursion

    - by Kev
    I'm writing some code to learn F#. Here is an example:

        let nextPrime list =
            let rec loop n =
                match n with
                | _ when (list |> List.filter (fun x -> x <= (n |> double |> sqrt |> int))
                               |> List.forall (fun x -> n % x <> 0)) -> n
                | _ -> loop (n + 1)
            loop (List.max list + 1)

        let rec findPrimes num =
            match num with
            | 1 -> [2]
            | n -> let temp = findPrimes <| n - 1
                   (nextPrime temp) :: temp

        // find 10 primes
        findPrimes 10 |> printfn "%A"

    I'm very happy that it just works! I'm a total beginner at recursion, and recursion is a wonderful thing. I think findPrimes is not efficient, though. Could someone help me refactor findPrimes into a tail-recursive form, if possible? BTW, is there some more efficient way to find the first n primes?
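    (For contrast, and not the author's code: the accumulator that a tail-recursive findPrimes would carry plays the same role as a list built up in a loop. A rough Python sketch of first-n-primes with trial division, purely to show that idea:)

        import math

        def first_n_primes(n):
            """Return the first n primes (n >= 1) by trial division against earlier primes."""
            primes = [2]            # accumulator, plays the role of the tail-recursion parameter
            candidate = 3
            while len(primes) < n:
                limit = math.isqrt(candidate)
                if all(candidate % p != 0 for p in primes if p <= limit):
                    primes.append(candidate)
                candidate += 2      # even numbers > 2 are never prime
            return primes

        print(first_n_primes(10))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]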

    Read the article

  • What type of web service should I put together?

    - by Jake
    I want to write a web service using Visual Studio. The service needs to support some type of authentication, and should be able to receive commands via simple HTTP GET requests. The input would only be a method call with some parameters, and the responses will be simple status/error codes. My instinct would be to go with an ASP.NET Web Service, but this isn't an option in C# 4.0 and it makes me wonder if I should be using something that's more up-to-date. I've looked into WCF, but it seems like this requires a running application on the client-side - is there a way to query a WCF host by just accessing a URL? The authentication is also an important piece. Developing my own little authentication system seems like a bad idea - I've read that it's too easy to mess up. What would be the standard way of authenticating with a web service like this? I'd love to look up all of the specifics on this and learn it myself, but I really don't even know where to begin. Some direction would be greatly appreciated!
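    (Not from the question, just to illustrate the kind of interaction described: a GET-style service with simple status/error responses can be exercised by any HTTP client. A rough Python sketch; the URL, parameter names and API-key header are invented:)

        # Hypothetical client for a GET-style web service; endpoint and header names are made up.
        import json
        import urllib.parse
        import urllib.request

        def call_service(method, api_key, **params):
            query = urllib.parse.urlencode(params)
            url = f"https://example.com/api/{method}?{query}"
            req = urllib.request.Request(url, headers={"X-Api-Key": api_key})
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read().decode("utf-8"))

        # e.g. call_service("setStatus", "secret-key", userId=42, status="done")
        # might return {"code": 0} or {"code": 17, "error": "..."} (status/error codes, as in the question)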

    Read the article

  • Memorystream and Large Object Heap

    - by Flo
    I have to transfer large files between computers over unreliable connections using WCF. Because I want to be able to resume the transfer, and I don't want to be limited in my file size by WCF, I am chunking the files into 1MB pieces. These "chunks" are transported as streams, which works quite nicely so far. My steps are:

    1. open a FileStream
    2. read a chunk from the file into a byte[] and create a MemoryStream from it
    3. transfer the chunk
    4. back to 2. until the whole file is sent

    My problem is in step 2. I assume that when I create a MemoryStream from a byte array, it will end up on the LOH and ultimately cause an OutOfMemoryException. I could not actually reproduce this error, so maybe I am wrong in my assumption. Now, I don't want to send the byte[] in the message, as WCF will tell me the array size is too big. I can change the max allowed array size and/or the size of my chunks, but I hope there is another solution. My actual questions:

    - Will my current solution create objects on the LOH, and will that cause me problems?
    - Is there a better way to solve this?

    Btw.: On the receiving side I simply read smaller chunks from the arriving stream and write them directly into the file, so no large byte arrays are involved.
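    (Aside, unrelated to the LOH specifics: the chunk-and-resume idea itself is just "read fixed-size pieces starting from a byte offset". A rough Python sketch, names invented:)

        # Illustrative only: stream a file in fixed-size chunks, resumable from an offset.
        CHUNK_SIZE = 1024 * 1024  # 1 MB, as in the question

        def read_chunks(path, start_offset=0, chunk_size=CHUNK_SIZE):
            """Yield (offset, chunk) pairs beginning at start_offset."""
            with open(path, "rb") as f:
                f.seek(start_offset)
                offset = start_offset
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    yield offset, chunk
                    offset += len(chunk)

        # resume a transfer that already confirmed the first 5 chunks:
        # for offset, chunk in read_chunks("big.bin", start_offset=5 * CHUNK_SIZE):
        #     send(chunk)   # hypothetical transport call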

    Read the article

  • Save result of for loop in a vector

    - by hendrik
    I think I'm just too tired to see the mistake. I wrote a function to get the maximum value for two data sets from a for loop:

        plot_zu <- function(x) {
          for (i in 1:x) {
            z <- data_raw[grep(a[i], data_raw$Gene.names), ]
            b <- data_raw_ace[grep(a[i], data_raw_ace$Gene.names), ]
            p <- vector("numeric", length(1:length(a)))
            p[i] <- max(z$t_test_diff)
            return(p)
          }
        }

    To give you the picture: a is a vector of names, and the data sets (data_raw and data_raw_ace) are filtered by it. In the end I would like to have all maximum values of the t_test_diff column in a vector. After that I want to add the t_test_diff column values from data_raw_ace as well. The problem is that I get this:

        [1] 1.210213 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
        [8] 0.000000 0.000000

    So there is a problem with the brackets or something, but I cannot see it (the first value fits). Sorry for not giving a good example, but I think it is understandable and an easy question to solve. If I can't explain my problem well enough, I will add an example. Thx a lot! grateful Hendrik
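    (Aside, not from the question: the general shape of "one maximum per group" is to create the result container once, fill it inside the loop, and return only after the loop. Sketched in Python rather than R, with made-up data:)

        # Illustrative pattern only: accumulate one maximum per group, return after the loop.
        groups = {                      # stands in for filtering data_raw by each name in `a`
            "geneA": [1.2, 0.7, 1.21],
            "geneB": [0.3, 0.9],
            "geneC": [2.5, 2.1, 0.4],
        }

        def max_per_group(groups):
            result = []                 # created once, outside the loop
            for name, t_test_diff in groups.items():
                result.append(max(t_test_diff))
            return result               # returned only after every group was processed

        print(max_per_group(groups))    # [1.21, 0.9, 2.5]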

    Read the article

  • PHP Socket Server vs node.js: Web Chat

    - by Eliasdx
    I want to program an HTTP web chat using long-held HTTP requests (Comet), Ajax and WebSockets (depending on the browser used). The user database is in MySQL. The chat is written in PHP, except maybe the chat stream itself, which could also be written in JavaScript (node.js). I don't want to start a PHP process per user, as there is no good way to send the chat messages between these PHP children. So I thought about writing my own socket server in either PHP or node.js, which should be able to handle more than 1000 connections (chat users). As a purely web developer (PHP) I'm not very familiar with sockets, as I usually let the web server take care of connections. The chat messages won't be saved on disk nor in MySQL, but in RAM as an array or object, for best speed. As far as I know there is no way to handle multiple connections at the same time in a single PHP process (socket server); however, you can accept a great number of socket connections and process them successively in a loop (read and write; incoming message - write to all socket connections). The problem is that there will most likely be a lag with ~1000 users, and MySQL operations could slow the whole thing down, which would then affect all users. My question is: can node.js handle a socket server with better performance? Node.js is event-based, but I'm not sure whether it can process multiple events at the same time (wouldn't that need multi-threading?) or whether there is just an event queue. With an event queue it would be just like PHP: process user after user. I could also spawn a PHP process per chat room (far fewer users), but AFAIK there are single-threaded IRC servers which are capable of handling thousands of users (written in C++ or whatever), so maybe it's also possible in PHP. I would prefer PHP over node.js because then the project would be PHP-only and not a mixture of programming languages. However, if Node can process connections simultaneously I'd probably choose it.
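    (For what it's worth, the single-threaded event-loop model the question asks about looks roughly like the sketch below, written with Python's asyncio rather than PHP or node.js, and as a plain TCP broadcast loop rather than HTTP/Comet. Purely illustrative:)

        # Minimal single-threaded broadcast server sketch (asyncio); not production chat code.
        import asyncio

        clients = set()          # connected client writers, kept in RAM like the messages

        async def handle(reader, writer):
            clients.add(writer)
            try:
                while True:
                    line = await reader.readline()      # the event loop services other clients here
                    if not line:
                        break
                    for other in list(clients):
                        if other is not writer:
                            other.write(line)
                            await other.drain()
            finally:
                clients.discard(writer)
                writer.close()

        async def main():
            server = await asyncio.start_server(handle, "0.0.0.0", 8888)
            async with server:
                await server.serve_forever()

        # asyncio.run(main())

    One thread serves many sockets because every await hands control back to the event loop; node.js relies on the same principle.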

    Read the article

  • How can I differentiate a manual scroll (via mousewheel/scrollbar) from a Javascript/jQuery scroll?

    - by David Murdoch
    UPDATE: Here is a jsbin example demonstrating the problem. Basically, I have the following JavaScript which scrolls the window to an anchor on the page:

        // get anchors with href's that start with "#"
        $("a[href^=#]").live("click", function(){
            var target = $($(this).attr("href"));
            // if the target exists: scroll to it...
            if(target[0]){
                // If the page isn't long enough to scroll to the target's position
                // we want to scroll as much as we can. This part prevents a sudden
                // stop when window.scrollTop reaches its maximum.
                var y = Math.min(target.offset().top, $(document).height() - $(window).height());
                // also, don't try to scroll to a negative value...
                y = Math.max(y, 0);
                // OK, you can scroll now...
                $("html,body").stop().animate({ "scrollTop": y }, 1000);
            }
            return false;
        });

    It works perfectly... until I manually try to scroll the window. When the scrollbar or mousewheel is scrolled, I need to stop the current scroll animation, but I'm not sure how to do this. This is probably my starting point:

        $(window).scroll(function(e){
            if(IsManuallyScrolled(e)){
                $("html,body").stop();
            }
        });

    ...but I'm not sure how to write the IsManuallyScrolled function. I've checked out e (the event object) in Google Chrome's console and AFAIK there is no way to differentiate between a manual scroll and jQuery's animate() scroll. How can I differentiate between a manual scroll and one triggered via jQuery's $.fn.animate function?

    Read the article

  • Why are illegal cookies sent by the browser and received by web servers (RFC 2109)?

    - by Artyom
    Hello. According to RFC 2109, a cookie's value can be either an HTTP token or a quoted string, and a token can't include non-ASCII characters. Cookie RFC 2109: http://tools.ietf.org/html/rfc2109#page-3 HTTP RFC 2068 token definition: http://tools.ietf.org/html/rfc2068#page-16 However, I have found that the Firefox browser (3.0.6) sends cookies with a UTF-8 string as-is, and the three web servers I tested (apache2, lighttpd, nginx) pass this string as-is to the application. For example, the raw request from the browser:

        $ nc -l -p 8080
        GET /hello HTTP/1.1
        Host: localhost:8080
        User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.9) Gecko/2009050519 Firefox/2.0.0.13 (Debian-3.0.6-1)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: windows-1255,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive
        Cookie: wikipp=1234; wikipp_username=??????
        Cache-Control: max-age=0

    And the raw HTTP_COOKIE CGI variable that apache, nginx and lighttpd hand to the application:

        wikipp=1234; wikipp_username=??????

    What am I missing? Can somebody explain this to me?
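    (Not an answer from the page, but the usual workaround for non-ASCII cookie values is to encode them into the allowed character set yourself, e.g. percent-encoding on write and decoding on read. A rough Python sketch, with a made-up Hebrew username:)

        # Illustrative: keep cookie values ASCII-safe by percent-encoding them yourself.
        from urllib.parse import quote, unquote

        def encode_cookie_value(value: str) -> str:
            # safe="" so that ';', ',', '=' and spaces are all escaped too
            return quote(value, safe="")

        def decode_cookie_value(raw: str) -> str:
            return unquote(raw)

        name = "\u05e9\u05dc\u05d5\u05dd"            # a non-ASCII username
        cookie = "wikipp_username=" + encode_cookie_value(name)
        print(cookie)                                # wikipp_username=%D7%A9%D7%9C%D7%95%D7%9D
        print(decode_cookie_value(cookie.split("=", 1)[1]) == name)   # True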

    Read the article

  • Rails Tableless Model

    - by mplacona
    I'm creating a tableless Rails model and am a bit stuck on how I should use it. Basically I'm trying to create a little application using Feedzirra that scans an RSS feed every X seconds and then sends me an email with only the updates. I'm actually trying to use it as an ActiveRecord model, and although I can get it to work, it doesn't seem to "hold" data as expected. As an example, I have an initialize method that parses the feed for the first time. On subsequent requests I would like to simply call the get_updates method, which, according to Feedzirra, updates the existing object (created during initialize) with only the differences. I'm finding it really hard to understand how this all works, as the object created in the initialize method doesn't seem to persist across all the methods of the model. My code looks something like:

        def initialize
          # feed parse here
        end

        def get_updates
          # feedzirra update, passing the feed object here
        end

    Not sure if this is the right way of doing it, but it all seems a bit confusing and not very clear. I could be over- or under-doing things here, but I'd like your opinion about this approach. Thanks in advance

    Read the article

  • What is happening in this T-SQL code? (Concatenating the results of a SELECT statement)

    - by Ben McCormack
    I'm just starting to learn T-SQL and could use some help in understanding what's going on in a particular block of code. I modified some code in an answer I received in a previous question, and here is the code in question:

        DECLARE @column_list AS varchar(max)

        SELECT @column_list = COALESCE(@column_list, ',') +
               'SUM(Case When Sku2=' + CONVERT(varchar, Sku2) + ' Then Quantity Else 0 End) As [' +
               CONVERT(varchar, Sku2) + ' - ' + Convert(varchar, Description) + '],'
        FROM OrderDetailDeliveryReview
        Inner Join InvMast on SKU2 = SKU and LocationTypeID = 4
        GROUP BY Sku2, Description
        ORDER BY Sku2

        Set @column_list = Left(@column_list, Len(@column_list) - 1)

        Select @column_list
        ----------------------------------------
        1 row is returned:
        ,SUM(Case When Sku2=157 Then Quantity Else 0 End) As [157 -..., SUM(Case ...

    The T-SQL code does exactly what I want, which is to build a single result from the rows of a query, which will then be used in another query. However, I can't figure out how the SELECT @column_list = ... statement puts multiple values into a single string of characters by being inside a SELECT statement. Without the assignment to @column_list, the SELECT statement would simply return multiple rows. How is it that, by having the variable within the SELECT statement, the results get "flattened" down into one value? How should I read this T-SQL to properly understand what's going on?
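    (Aside, as intuition only: the SELECT @var = @var + ... form evaluates the assignment once per result row, carrying the variable's value from row to row. Roughly the same idea in Python, with invented rows:)

        # Conceptual analogy of SELECT @column_list = COALESCE(@column_list, ',') + '...' FROM ...
        rows = [(157, "Widget"), (201, "Gadget"), (305, "Sprocket")]   # made-up (Sku2, Description) rows

        column_list = None
        for sku2, description in rows:                 # SQL Server runs the assignment for each row
            prefix = column_list if column_list is not None else ","  # COALESCE(@column_list, ',')
            column_list = (prefix +
                           f"SUM(Case When Sku2={sku2} Then Quantity Else 0 End)"
                           f" As [{sku2} - {description}],")

        column_list = column_list[:-1]                 # Left(@column_list, Len(@column_list) - 1)
        print(column_list)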

    Read the article

  • How frequently are IP packets fragmented at the source host?

    - by Methos
    I know that if the IP payload exceeds the MTU, routers usually fragment the IP packet. Finally, all the fragments are reassembled at the destination using the IP-ID field, the IP fragment offsets and the fragmentation flags. The max length of an IP payload is 64K, so it is quite plausible for L4 to hand over a payload as large as 64K. If the L2 protocol is Ethernet, which often is the case, then the MTU will be about 1600 bytes. Hence the IP packet will be fragmented at the source host itself. However, a quick search about the IP implementation in Linux tells me that in recent kernels, L4 protocols are fragment-friendly, i.e. they try to save IP the fragmentation work by handing over buffers whose size is close to the MTU. Considering these two facts, I am wondering how frequently an IP packet gets fragmented at the source host itself. Does it happen sometimes/rarely/never? Does anyone know whether there are exceptions to the fragmentation rule in the Linux kernel (i.e. are there situations where L4 protocols are not fragment-friendly)? How is this handled in other common OSes like Windows? In general, how frequently are IP packets fragmented?

    Read the article

  • Accessing global variable in multithreaded Tomcat server

    - by jwegan
    I have a singleton object that I construct like this:

        private static volatile KeyMapper mapper = null;

        public static KeyMapper getMapper() {
            if (mapper == null) {
                synchronized (Utils.class) {
                    if (mapper == null) {
                        mapper = new LocalMemoryMapper();
                    }
                }
            }
            return mapper;
        }

    The class KeyMapper is basically a synchronized wrapper around a HashMap with only two functions, one to add a mapping and one to remove a mapping. When running in Tomcat 6.24 on my 32-bit Windows machine everything works fine. However, when running on a 64-bit Linux machine (CentOS 5.4 with OpenJDK 1.6.0-b09), I add one mapping and print out the size of the HashMap used by KeyMapper to verify the mapping got added (i.e. verify size = 1). Then I try to retrieve the mapping with another request and I keep getting null, and when I checked the size of the HashMap it was 0. I'm confident the mapping isn't accidentally being removed, since I've commented out all calls to remove (and I don't use clear or any other mutators, just get and put). The requests are going through Tomcat 6.24 (configured to use 200 threads with a minimum of 4 threads) and I passed -Xnoclassgc to the JVM to ensure the class isn't inadvertently getting garbage collected (the JVM is also running in -server mode). I also added a finalize method to KeyMapper to print to stderr if it ever gets garbage collected, to verify that it wasn't being garbage collected. I'm at my wits' end and I can't figure out why one minute the entry in the HashMap is there and the next it isn't :(

    Read the article

  • How to load an external div that has dynamic content using Ajax and JSP/servlets?

    - by A.S al-shammari
    I need to use Ajax to load an external div element (an external JSP file) into the current page. That JSP page contains dynamic content, e.g. content that is based on values received from the current session. I solved this problem, but I'm in doubt because I think that my solution is bad, or maybe there is a better solution, since I'm not an expert. I have three files:

    1. A JavaScript function that is triggered when a table row is clicked; it requests HTML data from a servlet:

        $("#inboxtable tbody tr").click(function(){
            var trID = $(this).attr('id');
            $.post("event?view=read", {id: trID}, function(data){
                $("#eventContent").html(data); // load external file
            }, "html"); // type
        });

    2. The "event" servlet loads the data and generates HTML content using the include method:

        String id = request.getParameter("id");
        if (id != null) {
            v.add("Test");
            v.add(id);
            session.setAttribute("readMessageVector", v);
            request.getRequestDispatcher("readMessage.jsp").include(request, response);
        }

    3. The readMessage.jsp file looks like this:

        <p> Text: ${readMessageVector[0]} </p>
        <p> ID: ${readMessageVector[1]} </p>

    My questions: Is this solution good enough to solve this problem, i.e. loading an external JSP that has dynamic content? Is there a better solution?

    Read the article

  • Building asynchronous cache pattern with JSP

    - by merweirdo
    I have a JSP that will take some 8 minutes to render. The code logic itself cannot be made more efficient (it will be updated often, and by basically a pointy-haired boss). I tried wrapping it with a caching layer like

        <%@ taglib uri="/WEB-INF/classes/oscache.tld" prefix="oscache" %>
        <oscache:cache time="60">
            <div class="pagecontent">
                ..... my logic
            </div>
        </oscache:cache>

    This is nice until the 60 seconds are over. The next request after that again blocks until the 8 minutes of rendering are done. What I would need is a way to build a pattern something like:

    1. If there is no version of the dynamic content in the cache, run the actual logic (and populate the cache for subsequent requests).
    2. If there is a non-expired version of the dynamic content in the cache, serve the output of the JSP logic from the cache.
    3. If there is an expired version of the dynamic content in the cache, still serve the output of the JSP logic from the cache AND run the JSP logic in the background so that the cache gets updated transparently to the user, avoiding making the user wait for 8 minutes.

    I found out that at least EHCache might be able to do some asynchronous cache updating, but sadly that did not seem to apply to the JSP tags... Also, I have to take in 10-20 parameters for the actual logic of the JSP, and some of them should be used as a key for caching. A code example and/or pointers would be greatly appreciated. I frankly do not care if the solution provided is extremely ugly. I just want simple 5-minute caching with asynchronous cache updates, taking some parameters into account as a key.
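    (Aside, not from the question: the three cases above are the "serve stale while revalidating" pattern. A rough Python sketch of that pattern, purely illustrative and not JSP/OSCache:)

        # Illustrative serve-stale-while-revalidate cache; not JSP/OSCache, just the pattern.
        import threading
        import time

        class StaleWhileRevalidateCache:
            def __init__(self, ttl_seconds):
                self.ttl = ttl_seconds
                self.lock = threading.Lock()
                self.entries = {}        # key -> (value, stored_at)
                self.refreshing = set()  # keys currently being recomputed in the background

            def get(self, key, compute):
                with self.lock:
                    entry = self.entries.get(key)
                    if entry is not None:
                        value, stored_at = entry
                        if time.time() - stored_at >= self.ttl and key not in self.refreshing:
                            self.refreshing.add(key)
                            threading.Thread(target=self._refresh, args=(key, compute),
                                             daemon=True).start()
                        return value                  # fresh or stale, but never blocks the caller
                # no cached version at all: the first caller(s) pay the full cost
                return self._refresh(key, compute)

            def _refresh(self, key, compute):
                value = compute()                     # the slow 8-minute render goes here
                with self.lock:
                    self.entries[key] = (value, time.time())
                    self.refreshing.discard(key)
                return value

        # usage sketch (render_page and params are hypothetical):
        # cache.get(("report", tuple(sorted(params.items()))), lambda: render_page(params))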

    Read the article

  • gen_server with a dict vs mnesia table vs ets

    - by pablo
    Hi, I'm building an Erlang server. Users send HTTP requests to the server to update their status. The HTTP request process on the server saves the user's status message in memory. Every minute the server sends all messages to a remote server and clears the memory. If a user updates his status several times in a minute, the last message overrides the previous one. It is important that between reading all the messages and clearing them, no other process is able to write a status message. What is the best way to implement this?

    1. gen_server with a dict. The key will be the user id. dict:store/3 will update or create the status. The gen_server solves the 'transaction' issue.
    2. mnesia table with ram_copies. Handles transactions, and I don't need to implement a gen_server. Is there too much overhead with this solution?
    3. ETS table, which is more lightweight, plus a gen_server. Is it possible to do the transaction in ETS? To lock the table between reading all the messages and clearing them?

    Thanks
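    (Aside, not from the question: whatever option is chosen, the core requirement is an atomic "take everything and clear" step so writers can never interleave with the flush; the options above get it from a single gen_server process, mnesia transactions, or ETS ownership. The invariant itself, sketched in Python with a lock and made-up data:)

        # Illustrative only: last-write-wins status store with an atomic flush.
        import threading

        class StatusStore:
            def __init__(self):
                self._lock = threading.Lock()
                self._statuses = {}                  # user_id -> latest status message

            def update(self, user_id, message):
                with self._lock:                     # writers wait while a flush is in progress
                    self._statuses[user_id] = message

            def flush(self):
                with self._lock:                     # nothing can be written between read and clear
                    snapshot = self._statuses
                    self._statuses = {}
                return snapshot                      # send this batch to the remote server

        store = StatusStore()
        store.update("alice", "at lunch")
        store.update("alice", "back")                # overrides the previous status
        print(store.flush())                         # {'alice': 'back'}
        print(store.flush())                         # {}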

    Read the article

  • Performance impact when using XML columns in a table with MS SQL 2008

    - by Sam Dahan
    I am using a simple table with 6 columns, 3 of which are of XML type, not schema-constrained. When the table reaches a size of around 120,000 or 150,000 rows, I see a dramatic performance cost in doing any query on the table. For comparison, I have another table which grows in size at about the same rate but only contains scalar types (int, datetime, a few float columns). That table performs perfectly fine even after 200,000 rows. And by the way, I am not using XQuery on the XML columns; I am only using regular SQL query statements. Some specifics: both tables contain a datetime field called SampleTime. A statement like the following (it's in a stored procedure, but I'm showing you the actual statement)

        SELECT MAX(sampleTime) SampleTime
        FROM dbo.MyRecords
        WHERE PlacementID = @somenumber

    takes 0 seconds on the table without XML columns, and anything from 13 to 20 seconds on the table with XML columns. That depends on which drive I put my database on. At the moment it sits on a different spindle (not C:) and it takes 13 seconds. Has anyone seen this behavior before, or have any hint at what I am doing wrong? I tried this with SQL Server 2008 Express and the full-blown SQL Server 2008; that made no difference. Oh, one last detail: I am doing this from a C# application, .NET 3.5, using SqlConnection, SqlReader, etc. I'd appreciate some insight into that, thanks! Sam

    Read the article

  • How to specify a column type in Fluent NHibernate?

    - by Bipul
    I have a class CaptionItem:

        public class CaptionItem
        {
            public virtual int SystemId { get; set; }
            public virtual int Version { get; set; }
            protected internal virtual IDictionary<string, string> CaptionValues { get; private set; }
        }

    I am using the following code for the NHibernate mapping:

        Id(x => x.SystemId);
        Version(x => x.Version);
        Cache.ReadWrite().IncludeAll();
        HasMany(x => x.CaptionValues)
            .KeyColumn("CaptionItem_Id")
            .AsMap<string>(idx => idx.Column("CaptionSet_Name"),
                           elem => elem.Column("Text"))
            .Not.LazyLoad()
            .Cascade.Delete()
            .Table("CaptionValue")
            .Cache.ReadWrite().IncludeAll();

    So two tables get created in the database, CaptionValue and CaptionItem. The CaptionValue table has three columns:

    1. CaptionItem_Id int
    2. Text nvarchar(255)
    3. CaptionSet_Name nvarchar(255)

    Now, my question is: how can I make the column type of Text nvarchar(max)? Thanks in advance.

    Read the article

  • Powershell scripts to backup SQL, SVN

    - by bszom
    I'm trying to use PowerShell to create some backups, and then to copy these to a web folder (or, in other words, upload them to a WebDAV share). At first I thought I'd do the WebDAV stuff from within PowerShell, but it seems this still requires a fair amount of "manual labour", ie: constructing HTTP requests. I then settled for creating a web folder from the script and letting Windows handle the WebDAV stuff. It seems that all it takes to create a web folder is to create a standard shortcut, as described here. What I can't figure out is how to actually copy files to the shortcut's target..? Maybe I'm going about this the wrong way. It would be ideal if I could somehow encrypt the credentials for the WebDAV in the script, then have it create the web folder, shunt over the files, and delete the web folder again. Or even better, not use a web folder at all. Third option would be to just create the web folder manually and leave it there, though I'd rather not. Any ideas/pointers/tips? :)

    Read the article

  • Animation is slow on iPhone

    - by Anthony Chan
    I'm developing an app that displays images and changes them according to the user's actions. I've created a subclass of UIView to contain an image, an index number and an array of image names. The code is like this:

        @interface CustomPic : UIView {
            UIImageView *pic;
            NSInteger index;
            NSMutableArray *picNames; //<-- an array of NSString
        }

    And in the implementation part, it has a method to change the image using a dissolve effect:

        - (void)nextPic {
            index++;
            if (index >= [picNames count]) {
                index = 0;
            }
            UIImageView *temp = pic;
            pic.image = [UIImage imageNamed:[picNames objectAtIndex:index]];
            temp.alpha = 0;
            [UIView beginAnimations:nil context:nil];
            [UIView setAnimationCurve:UIViewAnimationCurveEaseOut];
            [UIView setAnimationDuration:0.25];
            pic.alpha = 0;
            temp.alpha = 1;
            [UIView commitAnimations];
        }

    In the view controller there are several CustomPic instances which change their images depending on the user's choice. The images change as expected with the fade in/out effect, but the animation performance is really bad. I've tested it on an iPhone 3G; Instruments shows that the animation runs at only 2-3 FPS! I tried many things to simplify and modify the code, but with no luck. Is there something wrong in my code or in my concept? Thanks for any help. P.S. all the images are 320*480 PNGs with a max size of 15KB.

    Read the article

  • Doctrine - get the offset of an object in a collection (implementing an infinite scroll)

    - by dan
    I am using Doctrine and trying to implement an infinite scroll on a collection of notes displayed in the user's browser. The application is very dynamic, so when the user submits a new note, the note is added to the top of the collection straight away, besides being sent to (and stored on) the server. This is why I can't use a traditional pagination method, where you just send the page number to the server and the server figures out the offset and the number of results from that. To give you an example of what I mean, imagine there are 20 notes displayed, then the user adds 2 more notes, so 22 notes are displayed. If I simply request "page 2", the first 2 items of that page will be the last two items of the page currently displayed to the user. This is why I am after a more sophisticated method, which is the one I am about to explain. Please consider the following code, which is part of the server code serving an AJAX request for more notes:

        // $lastNoteDisplayedId is coming from the AJAX request
        $lastNoteDisplayed = $repository->findBy($lastNoteDisplayedId);
        $allNotes = $repository->findBy($filter, array('createdAt' => 'desc'));
        $offset = getLastNoteDisplayedOffset($allNotes, $lastNoteDisplayedId);

        // retrieve the page to send back so that it can be appended to the listing
        $notesPerPage = 30;
        $notes = $repository->findBy(
            array(),
            array('createdAt' => 'desc'),
            $notesPerPage,
            $offset
        );
        $response = json_encode($notes);
        return $response;

    Basically I would need to write the method getLastNoteDisplayedOffset, which, given the whole set of notes and one particular note, gives me its offset, so that I can use it for the pagination in the previous Doctrine statement. I know a possible implementation would be:

        getLastNoteDisplayedOffset($allNotes, $lastNoteDisplayedId) {
            $i = 0;
            foreach ($allNotes as $note) {
                if ($note->getId() === $lastNoteDisplayedId->getId()) {
                    break;
                }
                $i++;
            }
            return $i;
        }

    I would prefer not to loop through all the notes, because performance is an important factor. I was wondering if Doctrine has a method for this itself, or if you can suggest a different approach.
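    (Aside, a suggestion rather than something from the page: one way to avoid computing an offset at all is to page by the sort key itself, i.e. ask for notes created strictly before the last one already shown. The predicate, sketched in Python over an already-sorted, made-up list:)

        # Illustrative keyset-style paging: page by the sort key instead of a numeric offset.
        # Each note is a (created_at, note_id, text) tuple, newest first, as in the question.
        notes = [
            ("2010-05-04 10:05", 7, "newest"),
            ("2010-05-04 10:01", 6, "..."),
            ("2010-05-04 09:58", 5, "..."),
            ("2010-05-04 09:40", 4, "last one the client already shows"),
            ("2010-05-04 09:33", 3, "..."),
            ("2010-05-04 09:21", 2, "..."),
        ]

        def next_page(notes, last_created_at, last_id, per_page):
            """Return up to per_page notes strictly older than the last displayed note."""
            older = [n for n in notes
                     if (n[0], n[1]) < (last_created_at, last_id)]   # tie-break on id
            return older[:per_page]

        print(next_page(notes, "2010-05-04 09:40", 4, 30))   # the two notes below the cutoff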

    Read the article

  • How do I make divs go into another row when full?

    - by acidzombie24
    My code is something like the below. When there are 3 images everything is fine; once there are 4, the row gets full and the entire div.top moves onto another row. How do I make the divs inside top start a new row instead? I tried giving .top a width of 500px, but once it hits or passes that, the images inside are squeezed together instead of each being 150x150. I tried max-width on top instead, and in Opera and Chrome I see the border of top at 500px width but the images continue to render past it. (I have a Firefox problem with my div, so the width looks fixed to something else.) So how do I make these divs go into another row and not try to squeeze together?

        <div class="top">
            <div><a href><img/></a></div>
            <div><a href><img/></a></div>
            <div><a href><img/></a></div>
        </div>

    Read the article

  • Application Design: Single vs. Multiple Hits to the DB

    - by shyneman
    I'm building a service that performs a set of configured activities based on the type of request that it receives. Each activity involves going to the database and retrieving/updating some kind of information. The logic for each activity can be generalized and re-used across different request types. The activities may need to participate in a transaction for the duration of the servicing the request. One option, I'm considering is having each activity maintain its own access to DAL/database. This fully encapsulates the activity into a stand-alone re-usable piece, but hitting the database multiple times for one request doesn't seem like a viable option. I don't really know how to easily implement the concept of a transaction across the multiple activities here either. The second option is to encapsulate ALL the activities into one big activity and hit the database once. But this does not allow re-use and configuration of these activities for different requests. Does anyone have any suggestions and input about what should be the best way to approach my problem? Thanks for any help.
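    (Aside, not from the question: one common middle ground is to keep each activity self-contained while handing all of them the same connection/transaction, unit-of-work style. A rough Python sketch with sqlite3; the table names, activity names and request types are invented:)

        # Illustrative unit-of-work: independent, reusable activities share one transaction.
        import sqlite3

        def reserve_stock(conn, order_id):          # activity 1: its own logic, no own connection
            conn.execute("UPDATE stock SET reserved = reserved + 1 WHERE order_id = ?", (order_id,))

        def record_audit(conn, order_id):           # activity 2
            conn.execute("INSERT INTO audit(order_id, action) VALUES (?, 'reserve')", (order_id,))

        ACTIVITIES_BY_REQUEST = {                   # configured set of activities per request type
            "reserve": [reserve_stock, record_audit],
        }

        def handle_request(db_path, request_type, order_id):
            conn = sqlite3.connect(db_path)
            try:
                with conn:                          # one transaction spanning every activity
                    for activity in ACTIVITIES_BY_REQUEST[request_type]:
                        activity(conn, order_id)
            finally:
                conn.close()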

    Read the article
