Search Results

Search found 1282 results on 52 pages for 'overhead'.


  • Database abstraction/adapters for ruby

    - by Stiivi
    What database abstractions/adapters are you using in Ruby? I am mainly interested in data-oriented features, not in object mapping (like ActiveRecord or DataMapper). I am currently using Sequel. Are there any other options? I am mostly interested in:
    - a simple, clean and non-ambiguous API
    - data selection (obviously), filtering and aggregation
    - raw value selection without field mapping: SELECT col1, col2, col3 => [val1, val2, val3], not a hash of { :col1 => val1, ... }
    - an API that takes table schemas ('some_schema.some_table') into account in a consistent (and working) way, plus reflection for this (get the schema from a table)
    - database reflection: get the list of table columns, their database storage types and perhaps the adapter's abstracted types
    - table creation and deletion
    - being able to work with other tables (insert, update) in a loop while enumerating a selection from another table, without having to fetch all records from the table being enumerated
    The purpose is to manipulate data whose structure is unknown at the time of writing the code, which is the opposite of object mapping, where the structure (or most of it) is usually well known. I do not need the object-mapping overhead. What are the options, including the back-ends of object-mapping libraries?

  • Which technology should I use to transform my latex documents into html documents

    - by Matthias Günther
    Hey, I want to write a little program that transforms my TeX files into HTML. I want to parse the documents and turn the macros (the built-in ones and of course my own) into HTML pieces. Here are my requirements:
    - predefined rules (e.g. \begin{itemize} \item text \end{itemize} => <br> <p>text</p> <br/>)
    - defining my own CSS style
    - ability to convert formulas (extract the formulas, load them into an image creator and then save the jpg/png)
    - easy to maintain and concise
    I know there are several technologies out there, but I don't know exactly which is the best for me. Here are the ones that come to mind:
    - Ruby (I/O is easy, formula loading via webrat)
    - XML/XSLT (I don't think I need it; just overhead)
    - Perl (there are many libs out there but I'm not quite familiar with it)
    - bash (I worked with sed and was surprised how easy it was to work with regular expressions)
    - latex2html etc. (these converters won't work for me and they don't give me freedom in parsing)
    Any suggestions, hints and comments are welcome. Thanks for your time, folks.

  • Is XSLT worth investing time in and are there any actual alternatives?

    - by Keeno
    I realize there have been a few other questions on this topic, and people say to use your language of choice to manipulate the XML etc., but none quite fit my question exactly. Firstly, the scope of the project: we want to develop platform-independent e-learning. Currently it's a bunch of HTML pages, but as they grow and develop they become hard to maintain. The idea: generate an XML file + schema, then produce some XSLT files that process the XML into the e-learning modules - XML to HTML via XSLT. Why:
    - We would like the flexibility to easily reformat the content (I realize CSS is a viable alternative here).
    - If we decide to alter the pages' layout or functionality in any way, I'm guessing altering the "shared" XSLT files would be easier than updating the HTML files. So far, we have about 30 modules, with up to 10-30 pages each.
    - Depending on some "parameters" we could output drastically different page layouts/structures, above and beyond what CSS can do.
    Now, all this has to be platform independent, and able to run "offline", i.e. without a server powering the HTML. Negatives I've read so far for XSLT:
    - Overhead? Not exactly sure why... is it the compute power needed to convert to HTML?
    - Difficult to learn
    - Better alternatives exist
    Now, what I would like to know exactly is: are there actually any viable alternatives for this "offline" use? Am I going about it in the correct manner? Do you have any advice or alternatives? Thanks!
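
    Since running "offline" just means no server at transform time, below is a minimal sketch of doing the XML-to-HTML step with the standard javax.xml.transform (JAXP) API; the file names and the "layout" stylesheet parameter are made up for illustration, and the transform could equally be run as a build step or on a student's machine with only a JRE.

        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;
        import java.io.File;

        public class OfflineXsltDemo {
            public static void main(String[] args) throws Exception {
                // Hypothetical file names; any XML + XSLT pair works the same way.
                File xml  = new File("module01.xml");
                File xslt = new File("elearning-page.xslt");

                Transformer transformer = TransformerFactory.newInstance()
                        .newTransformer(new StreamSource(xslt));
                // Stylesheet parameters are one way to get the "drastically different
                // layouts from one shared stylesheet" behaviour described above.
                transformer.setParameter("layout", "two-column");
                transformer.transform(new StreamSource(xml),
                                      new StreamResult(new File("module01.html")));
            }
        }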

  • How can I help fellow students struggling in programming classes?

    - by David Barry
    I'm a computer science student finishing up my second semester of programming classes. I've enjoyed them quite a bit and learned a lot, but it seems other students are struggling with the concepts and assignments more than I am. When an assignment is due, the inevitable group email comes out a day or two before, with people needing help either with a specific part of the problem or, sometimes, just seeming to have a hard time knowing where to start.

    I'd really like to be able to help out, but I have a hard time thinking of the right way to give them help without giving them the answer. When I'm having trouble understanding a concept, a code snippet can go a long way toward helping me, but at the same time, when something already makes sense to me, it can be difficult to think of another way to go about explaining it. Plus, the Academic Integrity section of each assignment is always looming overhead, warning against sharing code with others. I've tried using pseudocode to give others an idea of the program flow, leaving them to figure out how to implement certain aspects of it, but I didn't get much feedback and don't know how much it actually helped them, or whether it just confused them further.

    So I'm basically looking to see if anyone has experience with this, or good ways to nudge other students in the right direction or help them think about the problem in the right way.

  • Multi-reader IPC solution?

    - by gct
    I'm working on a framework in C++ (just for fun for now) that lets the user write plugins that use a standard API to stream data between each other. There are going to be three basic transport mechanisms for the data: files, sockets, and some kind of IPC piping system. The system is set up so that, for the non-file transports, each stream can have multiple readers, i.e. once a server socket is set up, multiple computers can connect and stream the data. I'm a little stuck on the multi-reader IPC system, though.

    All my plugins run in threads, so they live in the same address space, and some kind of shared-memory system would work fine. I was thinking I'd write my own circular buffer with a write pointer and read pointers chasing it around the buffer, but I have my doubts that I can achieve the same performance as something like Linux pipes. I'm curious what people would suggest for a multi-reader solution to something like this. Is the overhead for pipes or domain sockets low enough that I could just open a connection to each reader and issue separate writes to each reader? This is intended to handle significant volumes of data (tens of mega-samples/sec), so performance is a must.

  • Threading calls to web service in a web service - (.net 2.0)

    - by Ryan Ternier
    Got a question regarding best practices for doing parallel web service calls from within a web service. Our portal will get a message, split it into 2 messages, and then make 2 calls to our broker. These need to be on separate threads to lower the timeout. One solution is to do something similar to this (pseudo code):

        XmlNode DNode = GetaGetDemoNodeSomehow();
        XmlNode ENode = GetAGetElNodeSomehow();
        XmlNode elResponse;
        XmlNode demResponse;

        Thread dThread = new Thread(delegate {
            // Web service call
            GetDemographics d = new GetDemographics();
            demResponse = d.HIALRequest(DNode);
        });

        Thread eThread = new Thread(delegate {
            // Web service call
            GetEligibility ge = new GetEligibility();
            elResponse = ge.HIALRequest(ENode);
        });

        dThread.Start();
        eThread.Start();
        dThread.Join();
        eThread.Join();

        // Combine the resulting XML and return it.
        // Maybe throw a bit of logging in to make architecture happy.

    Another option we thought of is to create a worker class, pass it the service information and have it execute. This would give us a bit more control over what is going on, but could add additional overhead. Another option brought up was 2 asynchronous calls whose returns are managed through a loop; when the calls are completed (success or error), the loop picks that up and ends. The portal service will be called about 50,000 times a day. I don't want to gold-plate this sucker; I'm looking for something lightweight. The services being called on the broker have timeout limits set and are already heavily logged and audited, so I'm not worried about that part. This is .NET 2.0, and as much as I would love to upgrade, I can't right now - so please leave out anything beyond 2.0.

  • Calculating and saving space in Postgresql

    - by punkish
    I have a table in Pg like so:

        CREATE TABLE t (
            a BIGSERIAL NOT NULL, -- 8 b
            b SMALLINT,           -- 2 b
            c SMALLINT,           -- 2 b
            d REAL,               -- 4 b
            e REAL,               -- 4 b
            f REAL,               -- 4 b
            g INTEGER,            -- 4 b
            h REAL,               -- 4 b
            i REAL,               -- 4 b
            j SMALLINT,           -- 2 b
            k INTEGER,            -- 4 b
            l INTEGER,            -- 4 b
            m REAL,               -- 4 b
            CONSTRAINT a_pkey PRIMARY KEY (a)
        )

    The above adds up to 50 bytes per row. My experience is that I need another 40% to 50% for system overhead, without even any user-created indexes on the above. So, about 75 bytes per row. I will have many, many rows in the table, potentially upward of 145 billion rows, so the table is going to be pushing 13-14 terabytes. What tricks, if any, could I use to compact this table? My possible ideas are below:
    - Convert the REAL values to INTEGERs. If they can be stored as SMALLINT, that is a saving of 2 bytes per field.
    - Convert the columns b .. m into an array. I don't need to search on those columns, but I do need to be able to return one column's value at a time. So, if I need column g, I could do something like SELECT a, arr[5] FROM t;
    Would I save space with the array option? Would there be a speed penalty? Any other ideas?

  • What is a reasonable OSGi development workflow?

    - by levand
    I'm using OSGi for my latest project at work, and it's pretty beautiful as far as modularity and functionality go. But I'm not happy with the development workflow. Eventually, I plan to have 30-50 separate bundles, arranged in a dependency graph - supposedly, this is what OSGi is designed for. But I can't figure out a clean way to manage dependencies at compile time. Example: you have bundles A and B. B depends on packages defined in A. Each bundle is developed as a separate Java project. In order to compile B, A has to be on the javac classpath. Do you:
    1. Reference the file system location of project A in B's build script?
    2. Build A and throw the jar into B's lib directory?
    3. Rely on Eclipse's "referenced projects" feature and always use Eclipse's classpath to build (ugh)?
    4. Use a common "lib" directory for all projects and dump the bundle jars there after compilation?
    5. Set up a bundle repository, parse the manifest from the build script and pull down the required bundles from the repository?
    No. 5 sounds the cleanest, but also like a lot of overhead.

  • Speed of QHash lookups using QStrings as keys.

    - by Ryan R.
    I need to draw a dynamic overlay on a QImage. The component parts of the overlay are defined in XML and parsed out to a QHash<QString, QPicture>, where the QString is the name (such as "crosshairs") and the QPicture is the resolution-independent drawing. I then draw components of the overlay as they are needed, at a position determined at runtime.

    Example: I have 10 pictures in my QHash composing every possible element in a HUD. During a particular frame of video I need to draw 6 of them at different positions on the image. During the next frame something has changed and now I only need to draw 4 of them, but 2 of those positions have changed.

    Now to my question: if I am trying to do this quickly, should I redefine my QHash as QHash<int, QPicture> and enumerate the keys to counteract the overhead caused by string comparisons, or are the comparisons not going to make a very big impact on performance? I can easily make the conversion to integer keys, as the XML parser and overlay composer are completely separate classes, but I would like to use a consistent data structure across the application. Should I overcome my desire for consistency and re-usability in order to increase performance? Will it even matter very much if I do?

  • Call .NET Webservice with Android

    - by Lasse P
    Hi, I know this question has been asked here before, but I don't think those answers were adequate for my needs. We have a SOAP web service that is used for an iPhone application, but it is possible that we will need an Android-specific version or a proxy of the service, so we have the option to go with either SOAP or JSON. I have a few concerns about both methods.
    SOAP solution:
    - Is it possible to generate Java source code from a WSDL file? If so, will it include some kind of proxy class to invoke the web service, and will it work in the Android environment at all?
    - Google has not provided any SOAP library in Android, so I need to use a 3rd-party one - any suggestions?
    - What about the performance/overhead of parsing and transmitting SOAP XML over the wire versus the JSON solution?
    JSON solution:
    - There are a few classes in the Android SDK that will let me parse JSON, but do they support generic parsing, as in, if I want the result to be parsed as a complex type? Or would I need to implement that myself?
    - I have read about 2 libraries before here on Stack Overflow, GSON and Jackson. What is the difference performance- and usability-wise (from a developer's perspective)? Do you have any experience with either of those libraries?
    So I guess the big question is: which method should I go with? I hope you can help me out. Thanks in advance :-)
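
    On the JSON side, generic parsing into a complex type is what GSON's TypeToken is for: it captures the generic type information that erasure would otherwise drop. A small sketch, assuming GSON is on the classpath; the Customer class and the JSON string are invented for illustration.

        import com.google.gson.Gson;
        import com.google.gson.reflect.TypeToken;
        import java.lang.reflect.Type;
        import java.util.List;

        public class GsonDemo {
            // Hypothetical DTO matching the JSON the service is assumed to return.
            static class Customer {
                String name;
                int id;
            }

            public static void main(String[] args) {
                String json = "[{\"name\":\"Alice\",\"id\":1},{\"name\":\"Bob\",\"id\":2}]";

                // TypeToken preserves the List<Customer> type so GSON can build it directly.
                Type listType = new TypeToken<List<Customer>>() {}.getType();
                List<Customer> customers = new Gson().fromJson(json, listType);

                System.out.println(customers.get(0).name); // Alice
            }
        }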

  • Is there an IDE/compiler PC benchmark I can use to compare my PC's performance?

    - by RickL
    I'm looking for a benchmark (and results on other PCs) which would give me an idea of the development performance gain I could get by upgrading my PC; the benchmark could also be used to justify the upgrade to my boss. I use Visual Studio 2008 for my development, so I'd like to get an idea of by what factor the build times would be improved, and it would also be good if the benchmark could incorporate IDE performance (i.e. when editing, using IntelliSense, opening code files etc.) into its result. I currently have an AMD 3800x2 with 2GB RAM on Vista 32. For example, I'd like to know what kind of performance gain I'd see in Visual Studio 2008 with a Q6600 and 4GB RAM on Vista 64 - and also with other processors and other RAM sizes - and see whether hard disk performance is a big factor.
    EDIT: I mentioned Vista 64 because I'm aware that Vista 32 can only use 3GB RAM at most. So I'd presume that wanting to use more RAM would require Vista 64, but perhaps it could still be slower overall if there is a large overhead in running the 32-bit VS 2008 on a 64-bit OS.

  • Why do I get errors when using unsigned integers in an expression with C++?

    - by neuviemeporte
    Given the following piece of (pseudo-C++) code:

        float x = 100, a = 0.1;
        unsigned int height = 63, width = 63;
        unsigned int hw = 31;
        for (int row = 0; row < height; ++row) {
            for (int col = 0; col < width; ++col) {
                float foo = x + col - hw + a * (col - hw);
                cout << foo << " ";
            }
            cout << endl;
        }

    The values of foo are screwed up for half of the array, in places where (col - hw) is negative. I figured that because col is int and comes first, this part of the expression would be converted to int and become negative. Unfortunately, apparently it isn't; I get an overflow of an unsigned value and I've no idea why. How should I resolve this problem? Use casts for the whole or part of the expression? What type of casts (C-style or static_cast<...>)? Is there any overhead to using casts (I need this to work fast!)?
    EDIT: I changed all my unsigned ints to regular ones, but I'm still wondering why I got that overflow in this situation.

  • finding common prefix of array of strings

    - by bumperbox
    I have an array like this:

        $sports = array(
            'Softball - Counties',
            'Softball - Eastern',
            'Softball - North Harbour',
            'Softball - South',
            'Softball - Western'
        );

    and I would like to find the longest common part of the strings, so in this instance it would be 'Softball - '. I am thinking that I would follow this process:

        $i = 1;
        // loop to the length of the first string
        while ($i < strlen($sports[0])) {
            // grab the left-most part up to $i in length
            $match = substr($sports[0], 0, $i);
            // loop through all the values in the array, and compare whether they match
            foreach ($sports as $sport) {
                if ($match != substr($sport, 0, $i)) {
                    // didn't match, return the part that did match
                    return substr($sport, 0, $i - 1);
                }
            }
            // increase string length
            $i++;
        }
        // if you got to here, then all of them must be identical

    Questions: Is there a built-in function or a much simpler way of doing this? For my 5-line array that is probably fine, but if I were to do several-thousand-line arrays, there would be a lot of overhead, so I would have to be more calculated with my starting value of $i, e.g. start with $i at half the string length and halve it until it works, then increment $i by 1 until we succeed, so that we are doing the least number of comparisons to get a result. Is there a formula/algorithm already out there for this kind of problem? Thanks, alex
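
    For reference, the straightforward approach is already linear in the total number of characters: keep a running prefix and shrink it against each string, no binary-search tricks needed. A sketch of that idea (written in Java rather than PHP, purely to illustrate the algorithm):

        import java.util.Arrays;
        import java.util.List;

        public class CommonPrefix {
            // Returns the longest prefix shared by every string in the list.
            static String longestCommonPrefix(List<String> items) {
                if (items.isEmpty()) return "";
                String prefix = items.get(0);
                for (String s : items) {
                    // Shrink the candidate prefix until the current string starts with it.
                    while (!s.startsWith(prefix)) {
                        prefix = prefix.substring(0, prefix.length() - 1);
                        if (prefix.isEmpty()) return "";
                    }
                }
                return prefix;
            }

            public static void main(String[] args) {
                List<String> sports = Arrays.asList(
                    "Softball - Counties", "Softball - Eastern", "Softball - North Harbour",
                    "Softball - South", "Softball - Western");
                System.out.println("'" + longestCommonPrefix(sports) + "'"); // prints 'Softball - '
            }
        }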

  • Accurately and securely measure the time spent viewing a web page

    - by balpha
    Suppose the following: you have a web page that presents a simple game to a user (e.g. a quiz, a puzzle, etc.). The user solves the puzzle, submits the result, and you want to measure as precisely as possible how long they took to solve it. Assume it's quite simple, so we're talking seconds, not hours. Also assume JavaScript is required anyway, so there's no need to think of JS-disabled browsers. Finally, assume we don't want to use anything like Flash, Silverlight, or the like. I can think of several techniques:
    1. Simply take the time between the point when the data was sent from the server and the point when the submission arrives. Since this is exclusively server-side, there's no chance of cheating. However, issues like network latency and page rendering time might make this unfair for users with slow computers / browsers / internet connections.
    2. On the first request, just send the page without the actual game data. When everything is loaded so far, retrieve the game data through an AJAX call and populate it into the page. This is similar to 1., but reduces some of the caveats introduced through time spent on overhead.
    3. Have the time measured on the client side using JavaScript and submitted alongside the solution. This would theoretically be the most accurate, but it introduces the possibility of cheating, because you're relying on client data.
    4. Use the request time headers of a "ready to play" AJAX call and the result submission request. Same caveat as 3., as it is still client data.
    5. A combination of server-side and client-side measuring with some kind of plausibility analysis. I can't think of a good way, but maybe you can.
    Thoughts? Other ideas?
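
    Below is a minimal sketch of technique 2, with the clock started on the server when the game data is handed out (via the AJAX call) and stopped on the server when the submission arrives; the servlet, the session attribute name and the JSON shapes are all hypothetical.

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class PuzzleServlet extends HttpServlet {

            // GET hands out the puzzle data and starts the clock, server-side.
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                req.getSession().setAttribute("puzzleStart", System.nanoTime());
                resp.setContentType("application/json");
                resp.getWriter().write("{\"puzzle\": \"...\"}"); // the actual game data
            }

            // POST receives the solution and measures the elapsed time, server-side.
            protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                Long start = (Long) req.getSession().getAttribute("puzzleStart");
                if (start == null) {
                    resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "No puzzle issued");
                    return;
                }
                double seconds = (System.nanoTime() - start) / 1e9;
                resp.setContentType("application/json");
                resp.getWriter().write("{\"seconds\": " + seconds + "}");
            }
        }

    The measured time still includes one network round trip, which is exactly the caveat noted for techniques 1 and 2 above.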

  • Building static (but complicated) lookup table using templates.

    - by MarkD
    I am currently in the process of optimizing a numerical analysis code. Within the code, there is a 200x150-element lookup table (currently a static std::vector<std::vector<double>>) that is constructed at the beginning of every run. The construction of the lookup table is actually quite complex - the values in the table are computed using an iterative secant method on a complicated set of equations. Currently, for a simulation, the construction of the lookup table is 20% of the run time (run times are on the order of 25 seconds, lookup table construction takes 5 seconds). While 5 seconds might not seem to be a lot, when running our MC simulations, where we are running 50k+ simulations, it suddenly becomes a big chunk of time. Along with some other ideas, one thing that has been floated is: can we construct this lookup table using templates at compile time? The table itself never changes. Hard-coding a large array isn't a maintainable solution (the equations that go into generating the table are constantly being tweaked), but it seems that if the table could be generated at compile time, it would give us the best of both worlds (easily maintainable, no overhead during runtime). So, I propose the following (much simplified) scenario. Let's say you wanted to generate a static array (use whatever container suits you best - 2D C array, vector of vectors, etc.) at compile time. You have a function defined:

        double f(int row, int col);

    where the return value is the entry in the table, row is the lookup table row, and col is the lookup table column. Is it possible to generate this static array at compile time using templates, and how?

  • How to decide between using PLINQ and LINQ at runtime?

    - by Hamish Grubijan
    Or how to decide between a parallel and a sequential operation in general. It is hard to know without testing whether a parallel or sequential implementation is best, due to overhead. Obviously it will take some time to train "the decider" on which method to use. I would say that this method cannot be perfect, so it is probabilistic in nature. The x, y, z do influence "the decider". I think a very naive implementation would be to give both a 1/2 chance at the beginning and then start favoring them according to past performance. This disregards x, y, z, however. I suspect that this question would be better answered by academics than practitioners. Anyhow, please share your heuristics, your experience if any, and your tips on this. Sample code:

        public interface IComputer {
            decimal Compute(decimal x, decimal y, decimal z);
        }

        public class SequentialComputer : IComputer {
            public decimal Compute( ... // sequential implementation
        }

        public class ParallelComputer : IComputer {
            public decimal Compute( ... // parallel implementation
        }

        public class HybridComputer : IComputer {
            private SequentialComputer sc;
            private ParallelComputer pc;
            private TheDecider td; // Helps to decide between the two.

            public HybridComputer() {
                sc = new SequentialComputer();
                pc = new ParallelComputer();
                td = new TheDecider();
            }

            public decimal Compute(decimal x, decimal y, decimal z) {
                decimal result;
                decimal time;
                if (td.PickOneOfTwo() == 0) {
                    // Time this and save result into time.
                    result = sc.Compute(...);
                } else {
                    // Time this and save result into time.
                    result = pc.Compute(...);
                }
                td.Train(time);
                return result;
            }
        }
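
    TheDecider is the interesting part, so here is one way it could work, sketched in Java rather than C# just to keep the added examples in one language: an epsilon-greedy chooser tracking an exponentially weighted average of observed times per implementation. It ignores x, y and z, as the naive version described above does; a bucketed variant could keep one pair of averages per input-size bucket.

        import java.util.Random;

        // Epsilon-greedy "decider": usually picks whichever implementation has the
        // lower running-average time, but keeps exploring occasionally so it can adapt.
        public class TheDecider {
            private final double[] avgNanos = {Double.NaN, Double.NaN}; // 0 = sequential, 1 = parallel
            private final double alpha = 0.2;    // weight given to the newest sample
            private final double epsilon = 0.1;  // exploration rate
            private final Random rng = new Random();
            private int lastPick;

            public int pickOneOfTwo() {
                if (Double.isNaN(avgNanos[0])) return lastPick = 0;   // try each arm once first
                if (Double.isNaN(avgNanos[1])) return lastPick = 1;
                if (rng.nextDouble() < epsilon) return lastPick = rng.nextInt(2);
                return lastPick = (avgNanos[0] <= avgNanos[1]) ? 0 : 1;
            }

            public void train(long elapsedNanos) {
                double old = avgNanos[lastPick];
                avgNanos[lastPick] = Double.isNaN(old)
                        ? elapsedNanos
                        : (1 - alpha) * old + alpha * elapsedNanos;
            }
        }

    The caller times each Compute call (e.g. with System.nanoTime() before and after) and feeds the elapsed time back through train(), exactly as the HybridComputer sketch above intends.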

  • Connection Pool Strategy: Good, Bad or Ugly?

    - by Drew
    I'm in charge of developing and maintaining a group of web applications that are centered around similar data. The architecture I decided on at the time was that each application would have its own database and web-root application. Each application maintains a connection pool to its own database and to a central database for shared data (logins, etc.).

    A co-worker has been positing that this strategy will not scale, because having so many different connection pools is not scalable, and that we should refactor the database so that all of the different applications use a single central database, with any modifications that may be unique to a system reflected in that one database, and then use a single pool powered by Tomcat. He has posited that there is a lot of "metadata" that goes back and forth across the network to maintain a connection pool.

    My understanding is that, with proper tuning so that only as many connections as necessary are used across the different pools (low-volume apps getting fewer connections, high-volume apps getting more, etc.), the number of pools doesn't matter compared to the number of connections - or, more formally, that the difference in overhead required to maintain 3 pools of 10 connections is negligible compared to 1 pool of 30 connections.

    The reasoning behind initially breaking the systems into a one-app-one-database design was that there are likely to be differences between the apps and that each system could make modifications to its schema as needed. Similarly, it eliminated the possibility of system data bleeding through to other apps. Unfortunately there is not strong leadership in the company to make a hard decision. Although my co-worker is backing up his worries only with vagueness, I want to make sure I understand the ramifications of multiple small databases/connection pools versus one large database/connection pool.

  • Cache layer for MVC - Model or controller?

    - by Industrial
    Hi everyone, I am having some second thoughts about where to implement the caching part. Where is the most appropriate place to implement it, do you think: inside every model, or in the controller?

    Approach 1 (pseudo-code):

        // mycontroller.php
        MyController extends Controller_class {
            function index() {
                $data = $this->model->getData();
                echo $data;
            }
        }

        // myModel.php
        MyModel extends Model_Class {
            function getData() {
                $data = memcached->get('data');
                if (!$data) {
                    $query->SQL_QUERY("Do query!");
                }
                return $data;
            }
        }

    Approach 2:

        // mycontroller.php
        MyController extends Controller_class {
            function index() {
                $dataArray = $this->memcached->getMulti('data', 'data2');
                foreach ($dataArray as $key) {
                    if (!$key) {
                        $data = $this->model->getData();
                        $this->memcached->set($key, $data);
                    }
                }
                echo $data;
            }
        }

        // myModel.php
        MyModel extends Model_Class {
            function getData() {
                $query->SQL_QUERY("Do query!");
                return $data;
            }
        }

    Thoughts:
    Approach 1:
    - No multi-get/multi-set; if a high number of keys were returned, overhead would be caused.
    - Easier to maintain; all database/cache handling is in each model.
    Approach 2:
    - Better performance-wise: multi-set/multi-get is used.
    - More code required.
    - Harder to maintain.
    Tell me what you think!

  • Queueing method calls - any idea how?

    - by TomTom
    I'm writing a heavily asynchronous application. I am looking for a way to queue method calls, similar to what BeginInvoke / EndInvoke does... but on my OWN queue. The reason is that I have my own optimized message queueing system using a thread pool, while at the same time making sure every component is single-threaded with respect to requests (i.e. one thread only handles messages for a component). I have a lot of messages going back and forth. For limited use, I would really love to be able to just queue a method call with parameters, instead of having to define my own parameter and method wrapping / unwrapping just for the sake of doing a lot of administrative calls. I also do not always want to bypass the queue, and I definitely do not want the sending service to wait for the other service to respond. Does anyone know of a way to intercept a method call? Some way to utilize TransparentProxy / Virtual Proxy for this? ;) ServicedComponent? I would like this to have as little overhead as possible ;)

  • Is there any point in using a volatile long?

    - by Adamski
    I occasionally use a volatile instance variable in cases where I have two threads reading from / writing to it and don't want the overhead (or potential deadlock risk) of taking out a lock; for example, a timer thread periodically updating an int ID that is exposed via a getter on some class:

        public class MyClass {
            private volatile int id;

            public MyClass() {
                ScheduledExecutorService execService = Executors.newScheduledThreadPool(1);
                execService.scheduleAtFixedRate(new Runnable() {
                    public void run() {
                        ++id;
                    }
                }, 0L, 30L, TimeUnit.SECONDS);
            }

            public int getId() {
                return id;
            }
        }

    My question: given that the JLS only guarantees that 32-bit reads will be atomic, is there any point in ever using a volatile long (i.e. 64-bit)?
    Caveat: please do not reply saying that using volatile over synchronized is a case of premature optimisation; I am well aware of how / when to use synchronized, but there are cases where volatile is preferable. For example, when defining a Spring bean for use in a single-threaded application I tend to favour volatile instance variables, as there is no guarantee that the Spring context will initialise each bean's properties in the main thread.
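
    For what it's worth, a common alternative for a 64-bit counter is java.util.concurrent.atomic.AtomicLong: it gives volatile-style visibility plus an atomic increment (which ++id on a volatile field does not, if more than one thread ever writes it). A sketch of the same example rewritten that way:

        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicLong;

        public class MyClass {
            // The 64-bit value can never be seen "torn" and the increment cannot race.
            private final AtomicLong id = new AtomicLong();

            public MyClass() {
                ScheduledExecutorService execService = Executors.newScheduledThreadPool(1);
                execService.scheduleAtFixedRate(new Runnable() {
                    public void run() {
                        id.incrementAndGet();
                    }
                }, 0L, 30L, TimeUnit.SECONDS);
            }

            public long getId() {
                return id.get();
            }
        }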

  • Randomly sorting an array

    - by Cam
    Does there exist an algorithm which, given an ordered list of symbols {a1, a2, a3, ..., ak}, produces in O(n) time a new list of the same symbols in a random order without bias? "Without bias" means the probability that any symbol s will end up in some position p in the list is 1/k. Assume it is possible to generate an unbiased integer from 1 to k inclusive in O(1) time. Also assume that O(1) element access/mutation is possible, and that it is possible to create a new list of size k in O(k) time. In particular, I would be interested in a 'generative' algorithm, that is, an algorithm that has O(1) initial overhead and then produces a new element for each slot in the list, taking O(1) time per slot. If no solution exists to the problem as described, I would still like to know about solutions that fail to meet my constraints in one or more of the following ways (and/or in other ways if necessary):
    - the time complexity is worse than O(n)
    - the algorithm is biased with regard to the final positions of the symbols
    - the algorithm is not generative
    I should add that this problem appears to be the same as the problem of randomly sorting the integers from 1 to k, since we can sort the list of integers from 1 to k and then, for each integer i in the new list, produce the symbol ai.
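
    The standard unbiased O(n) answer is the Fisher-Yates shuffle; the "inside-out" variant below fills one output slot per step in O(1) time using only the assumed unbiased integer source, although earlier slots can still be swapped later, so it is not strictly generative in the sense described. A sketch in Java:

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;
        import java.util.Random;

        public class InsideOutShuffle {
            // Builds an unbiased random permutation of the input in O(n).
            static <T> List<T> shuffledCopy(List<T> input, Random rng) {
                List<T> out = new ArrayList<T>(input.size());
                for (int i = 0; i < input.size(); i++) {
                    int j = rng.nextInt(i + 1);   // unbiased index in [0, i]
                    if (j == i) {
                        out.add(input.get(i));    // the new element takes the new slot
                    } else {
                        out.add(out.get(j));      // move the earlier pick out of the way
                        out.set(j, input.get(i)); // and drop the new element into slot j
                    }
                }
                return out;
            }

            public static void main(String[] args) {
                List<String> symbols = Arrays.asList("a1", "a2", "a3", "a4", "a5");
                System.out.println(shuffledCopy(symbols, new Random()));
            }
        }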

  • How to combine apache requests?

    - by Bruce
    To give you the situation in the abstract: I have an AJAX client that often needs to retrieve 3-10 static documents from the server. Those 3-10 documents are selected by the client out of about 100 documents in total. I have no way of knowing in advance which 3-10 documents the client will require. Additionally, those 100 documents are generated from database content, and so change over time. It seems messy to me to have to make 10 AJAX requests for 10 separate documents. My first thought was to write a JSP that could use the include action, i.e. in pseudo code:

        for (param in params) {
            <jsp:include page="[param]" />
        }

    But it turns out that Tomcat doesn't just include the HTML resource; it recompiles it, generating a class file every time, which also seems wasteful. Does anyone know of a neat solution for combining requests to static files so as to make one request rather than several, but without the overhead of, for example, Tomcat generating extra class files for each static file and regenerating them each time the static file changes? Thanks! Hopefully my question is clear - it's a bit long-winded.
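
    One way to keep the documents as plain static files and still answer in a single round trip is a small servlet that concatenates the requested documents into one response. A sketch under the assumption that the documents live under /docs inside the web app; the servlet name, parameter name and path are made up.

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Hypothetical mapping: GET /combine?names=doc1.html,doc2.html,doc3.html
        public class CombineDocsServlet extends HttpServlet {
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                String names = req.getParameter("names");
                if (names == null) {
                    resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "names parameter required");
                    return;
                }
                resp.setContentType("text/html");
                OutputStream out = resp.getOutputStream();
                for (String name : names.split(",")) {
                    // Basic sanity check so callers cannot walk the file system.
                    if (name.contains("..") || name.contains("/")) continue;
                    InputStream in = getServletContext().getResourceAsStream("/docs/" + name);
                    if (in == null) continue;
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                    }
                    in.close();
                }
            }
        }

    Since the underlying files are served as-is, nothing gets compiled per document, and the client makes one request however many of the 100 documents it needs.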

  • What runs faster? Wordpress or Drupal 6.x?

    - by electblake
    So... I run a pretty large Wordpress blog. Currently it gets around 20k+ pageviews a day, and it's always a struggle to keep the bad boy running quickly - I currently run it on vps.net with CentOS 5.3. I am also a Drupal developer by trade, so I love the CMS framework for its versatility and portability (I can take work from one site and implement it on another with great ease).
    MY QUESTION IS: which is faster, Wordpress 3.x or Drupal 6.x? I'd love to migrate my site to Drupal to be able to roll out new features etc. (which I find awkward to do in Wordpress), but I am scared that Drupal may not be able to handle the traffic. Any opinions? I know that some major players use Drupal - as Dries documents well on his blog - but I'm not under any illusions either: Drupal can be a real hog.
    Thanks for any/all help! Please try to avoid server-optimization talk unless it pertains to Wordpress or Drupal 6.x specifically; I love to learn more about optimizations, but I do want to sort out which platform is quicker :)
    p.s. - I realize the fastest option is to use a lower-level framework (with less overhead) like CakePHP etc., but assume that isn't an option ;)

  • Invoking an overloaded method where all arguments implement the same interface

    - by double07
    Hello, my starting point is the following:
    - I have a method, transform, which I have overloaded to behave differently depending on the type of arguments that are passed in (see transform(A a1, A a2) and transform(A a1, B b) in my example below).
    - All these arguments implement the same interface, X.
    I would like to apply that transform method to various objects all implementing the X interface. What I came up with was to implement transform(X x1, X x2), which checks the instance of each object before applying the relevant variant of my transform. Though it works, the code seems ugly, and I am also concerned about the performance overhead of evaluating these various instanceof checks and casts. Is that transform the best I can do in Java, or is there a more elegant and/or efficient way of achieving the same behavior? Below is a trivial, working example printing out BA. I am looking for examples of how to improve that code. In my real code, I naturally have more implementations of transform, and none are trivial like below.

        public class A implements X { }

        public class B implements X { }

        interface X { }

        public A transform(A a1, A a2) {
            System.out.print("A");
            return a2;
        }

        public A transform(A a1, B b) {
            System.out.print("B");
            return a1;
        }

        // Isn't there something better than the code below???
        public X transform(X x1, X x2) {
            if ((x1 instanceof A) && (x2 instanceof A)) {
                return transform((A) x1, (A) x2);
            } else if ((x1 instanceof A) && (x2 instanceof B)) {
                return transform((A) x1, (B) x2);
            } else {
                throw new RuntimeException("Transform not implemented for " + x1.getClass() + "," + x2.getClass());
            }
        }

        @Test
        public void trivial() {
            X x1 = new A();
            X x2 = new B();
            X result = transform(x1, x2);
            transform(x1, result);
        }
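
    One instanceof-free alternative is double dispatch (the visitor idea): the first virtual call resolves the runtime type of x1 and the second resolves x2, so ordinary overriding picks the right combination. A minimal sketch of the same A/B example; the extra method names are illustrative, and the trade-off is that adding a new X implementation means extending the interface.

        // Prints "BA", like the original trivial() test.
        interface X {
            X transform(X second);       // dispatch #1: resolves the runtime type of x1
            X transformAfterA(A first);  // dispatch #2: x1 turned out to be an A
            X transformAfterB(B first);  // dispatch #2: x1 turned out to be a B
        }

        class A implements X {
            public X transform(X second) { return second.transformAfterA(this); }
            public X transformAfterA(A first) { System.out.print("A"); return this; }   // transform(A, A)
            public X transformAfterB(B first) { throw new UnsupportedOperationException("B,A"); }
        }

        class B implements X {
            public X transform(X second) { return second.transformAfterB(this); }
            public X transformAfterA(A first) { System.out.print("B"); return first; }  // transform(A, B)
            public X transformAfterB(B first) { throw new UnsupportedOperationException("B,B"); }
        }

        public class DoubleDispatchDemo {
            public static void main(String[] args) {
                X x1 = new A();
                X x2 = new B();
                X result = x1.transform(x2);  // prints "B", returns the A
                x1.transform(result);         // prints "A"
            }
        }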

  • How to control virtual memory management in linux?

    - by chmike
    I'm writing a program that uses an mmap'ed file to hold a huge buffer organized as an array of 64 MB blocks. The blocks are used to aggregate data received from different hosts over the network. As a consequence, the total data size written to each block is not known in advance. Most of the time it is only 2 MB, but in some cases it can be up to 20 MB or more. The data doesn't stay long in the buffer: 90% is deleted after less than a second, and the rest is transmitted to another host. I would like to know if there is a way to tell the virtual memory manager that RAM pages are not dirty anymore once the data is deleted. Should I mmap and munmap each block as it is used and released, to control the virtual memory? What would be the overhead of doing this? Also, some colleagues have expressed concerns about the performance impact of allocating such a big mmap space. I expect it to behave like a swap file, so that only dirty pages need to be considered.
