Search Results

Search found 3912 results on 157 pages for 'distributed caching'.

Page 136/157

  • Set a datetime for next or previous sunday at specific time

    - by Marc
    I have an app where there is always a current contest (defined by start_date and end_date datetime). I have the following code in application_controller.rb as a before_filter:

        def load_contest
          @contest_last = Contest.last
          @contest_last.present? ? @contest_leftover = (@contest_last.end_date.utc - Time.now.utc).to_i : @contest_leftover = 0
          if @contest_last.nil?
            Contest.create(:start_date => Time.now.utc, :end_date => Time.now.utc + 10.minutes)
          elsif @contest_leftover < 0
            @winner = Organization.order('votes_count DESC').first
            @contest_last.update_attributes!(:organization_id => @winner.id, :winner_votes => @winner.votes_count) if @winner.present?
            Organization.update_all(:votes_count => 0)
            Contest.create(:start_date => @contest_last.end_date.utc, :end_date => Time.now.utc + 10.minutes)
          end
        end

    My questions:

    1) I would like to change the :end_date to something that signifies next Sunday at a certain time (e.g. next Sunday at 8pm). Similarly, I could then set the :start_date to the previous Sunday at a certain time. I saw that there is a sunday() method (http://api.rubyonrails.org/classes/Time.html#method-i-sunday), but I'm not sure how to specify a certain time on that day.

    2) For this situation of always wanting the current contest, is there a better way of loading it in the app? Would caching it be better, reloading only when a new one is created? I'm not sure how this would be done, but it seems more efficient. Thanks!
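
    One way to answer question 1 without framework helpers is plain Time arithmetic; a minimal sketch in bare Ruby, assuming UTC and a hard-coded 8pm, not wired into the app's models:

        now = Time.now.utc
        days_ahead = (7 - now.wday) % 7       # wday is 0 for Sunday
        days_ahead = 7 if days_ahead.zero?    # already Sunday: jump a full week
        next_sunday_8pm = Time.utc(now.year, now.month, now.day) + (days_ahead * 86_400) + (20 * 3600)
        prev_sunday_8pm = next_sunday_8pm - (7 * 86_400)   # one week earlier

    With ActiveSupport loaded, combining sunday with change(:hour => 20) should express the same idea more readably, though that combination is untested here.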

    Read the article

  • Fast path cache generation for a connected node graph

    - by Sukasa
    I'm trying to get a faster pathfinding mechanism in place in a game I'm working on for a connected node graph. The nodes are classed into two types, "Networks" and "Routers." In the accompanying picture, the blue circles represent routers and the grey rectangles networks. Each network keeps a list of which routers it is connected to, and vice-versa. Routers cannot connect directly to other routers, and networks cannot connect directly to other networks.

    I need to get an algorithm that will map out a path, measured in the number of networks crossed, for each possible source and destination network, excluding paths where the source and destination are the same network. I have one right now; however, it is unusably slow, taking about two seconds to map the paths, which becomes incredibly noticeable for all connected players. The current algorithm is a depth-first brute-force search (it was thrown together in about an hour just to get the path caching working) which returns an array of networks in the order they are traversed, which explains why it's so slow. Are there any algorithms that are more efficient?

    As a side note, while these example graphs have four networks, the in-practice graphs have 55 networks and about 20 routers in use. Paths which are not possible can also occur, and at any time the network/router graph topology can change, requiring the path cache to be rebuilt. What approach/algorithm would likely provide the best results for this type of graph?
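
    Since every hop costs the same (one network entered), breadth-first search gives shortest paths in a graph like this, and running it once per source network yields the whole cache. A sketch in Python, where network_routers and router_networks are hypothetical adjacency maps (network -> routers, router -> networks):

        from collections import deque

        def paths_from(source, network_routers, router_networks):
            """BFS over the bipartite network/router graph; dist counts
            networks entered after leaving the source."""
            dist = {source: 0}
            prev = {}
            queue = deque([source])
            while queue:
                net = queue.popleft()
                for router in network_routers[net]:
                    for nxt in router_networks[router]:
                        if nxt not in dist:
                            dist[nxt] = dist[net] + 1
                            prev[nxt] = net
                            queue.append(nxt)
            return dist, prev  # unreachable networks never appear in dist

    At 55 networks, 55 BFS runs is cheap enough to rebuild the whole cache on every topology change, and impossible destinations fall out naturally as missing keys.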

    Read the article

  • Process data BEFORE a 301 Redirect?

    - by Jesse
    So, I've been working on a PHP link shortener (I know, just what the world needs). Basically, when the page loads, PHP determines where it needs to go and sends a 301 header to redirect the browser, like so:

        header("HTTP/1.1 301 Moved Permanently");
        header("Location: http://newsite.com");

    Now, I'm trying to add some tracking to my redirects and insert some custom analytics data into a MySQL table before the redirect happens. It works perfectly if I don't specify a redirect type and just use:

        header("Location: http://newsite.com");

    But, of course, as soon as you add in the 301 header, nothing else gets processed. Actually, on the first request, it sends the data to MySQL, but on any subsequent requests there's no communication with the database. I assume it's a browser caching issue: once it's seen the 301, the browser decides there's no reason to request anything on future visits. But does anyone know if there's any way to get around this? I'd really like to keep it as a 301 for SEO purposes (I believe if you don't specify it sends a 404 by default?). I thought about using .htaccess to prepend a file to the page that would do the MySQL work, but with the 301, wouldn't that just get ignored as well? Anyway, I'm not sure if there's any solution other than using a different type of redirect, but I'm not ready to give up just yet. So, any suggestions would be much appreciated. Thanks!
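
    For what it's worth, the caching suspicion matches how browsers treat permanent redirects: a 301 is cached, so repeat visitors never hit the script again, while a bare Location header makes PHP send a 302, which is not cached (not a 404). A sketch of the log-then-redirect ordering, assuming a PDO handle in $db, with $code and the clicks table invented for illustration:

        <?php
        // Log the hit first, while this script still controls the response.
        $referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
        $stmt = $db->prepare('INSERT INTO clicks (short_code, referer, created_at) VALUES (?, ?, NOW())');
        $stmt->execute(array($code, $referer));

        // The third argument sets the status code, so no manual status line is needed.
        // Note: the browser may cache this 301 and skip the script on later clicks.
        header('Location: http://newsite.com', true, 301);
        exit;

    Under a cached 301 the analytics insert can only ever fire once per browser, so the real choice is between 301-for-SEO and 302-for-complete-tracking.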

    Read the article

  • Repeated host lookups failing in urllib2

    - by reve_etrange
    I have code which issues many HTTP GET requests using Python's urllib2, in several threads, writing the responses into files (one per thread). During execution, it looks like many of the host lookups fail (causing a name or service unknown error; see the appended error log for an example).

    Is this due to a flaky DNS service? Is it bad practice to rely on DNS caching, if the host name isn't changing? I.e. should a single lookup's result be passed into the urlopen?

        Exception in thread Thread-16:
        Traceback (most recent call last):
          File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner
            self.run()
          File "/home/da/local/bin/ThreadedDownloader.py", line 61, in run
            page = urllib2.urlopen(url) # get the page
          File "/usr/lib/python2.6/urllib2.py", line 126, in urlopen
            return _opener.open(url, data, timeout)
          File "/usr/lib/python2.6/urllib2.py", line 391, in open
            response = self._open(req, data)
          File "/usr/lib/python2.6/urllib2.py", line 409, in _open
            '_open', req)
          File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain
            result = func(*args)
          File "/usr/lib/python2.6/urllib2.py", line 1170, in http_open
            return self.do_open(httplib.HTTPConnection, req)
          File "/usr/lib/python2.6/urllib2.py", line 1145, in do_open
            raise URLError(err)
        URLError: <urlopen error [Errno -2] Name or service not known>
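
    DNS hiccups under concurrent lookups are common enough that a retry wrapper is a reasonable first defence; a sketch (Python 2, matching the traceback) that also separates validating the lookup from performing the fetch:

        import socket
        import time
        import urllib2
        from urlparse import urlsplit

        def urlopen_with_retry(url, tries=3, delay=1.0):
            """Retry transient 'Name or service not known' failures."""
            host = urlsplit(url).hostname
            for attempt in range(tries):
                try:
                    socket.gethostbyname(host)   # fail fast on resolution alone
                    return urllib2.urlopen(url)
                except (socket.gaierror, urllib2.URLError):
                    if attempt == tries - 1:
                        raise
                    time.sleep(delay * (attempt + 1))

    This doesn't fix a flaky resolver, it just rides it out; whether the OS caches the result between the gethostbyname call and urlopen depends on the platform's resolver, so a local caching daemon (e.g. nscd or dnsmasq) may help more.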

    Read the article

  • A generic C++ library that provides QtConcurrent functionality?

    - by Lucas
    QtConcurrent is awesome. I'll let the Qt docs speak for themselves: QtConcurrent includes functional programming style APIs for parallel list processing, including a MapReduce and FilterReduce implementation for shared-memory (non-distributed) systems, and classes for managing asynchronous computations in GUI applications.

    For instance, you give QtConcurrent::map() an iterable sequence and a function that accepts items of the type stored in the sequence, and that function is applied to all the items in the collection. This is done in a multi-threaded manner, with a thread pool equal to the number of logical CPUs on the system. There are plenty of other functions in QtConcurrent, like filter(), filteredReduced(), etc.: the standard CompSci map/reduce functions and the like.

    I'm totally in love with this, but I'm starting work on an OSS project that will not be using the Qt framework. It's a library, and I don't want to force others to depend on such a large framework as Qt. I'm trying to keep external dependencies to a minimum (it's the decent thing to do). I'm looking for a generic C++ framework that provides me with the same/similar high-level primitives that QtConcurrent does. AFAIK Boost has nothing like this (I may be wrong though); boost::thread is very low-level compared to what I'm looking for. I know C# has something very similar with their Parallel Extensions, so I know this isn't a Qt-only idea. What do you suggest I use?
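
    For a sense of how small the core of map() is without Qt, here is a minimal C++ sketch (assuming C++11 and random-access iterators; no load balancing, futures, or result collection, which is most of what a real library adds):

        #include <algorithm>
        #include <cstddef>
        #include <thread>
        #include <vector>

        template <typename Iter, typename Fn>
        void parallel_map(Iter first, Iter last, Fn fn) {
            const std::size_t n = std::distance(first, last);
            const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
            const std::size_t chunk = (n + workers - 1) / workers;
            std::vector<std::thread> pool;
            for (std::size_t i = 0; i < workers && i * chunk < n; ++i) {
                Iter begin = first + i * chunk;
                Iter end = first + std::min<std::size_t>(n, (i + 1) * chunk);
                // Each worker applies fn in place over its own slice.
                pool.emplace_back([begin, end, &fn] { std::for_each(begin, end, fn); });
            }
            for (auto& t : pool) {
                t.join();
            }
        }

    Intel TBB (parallel_for_each) and OpenMP cover similar ground with far lighter dependencies than Qt, so they may be worth weighing before rolling your own.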

    Read the article

  • Scalability 101: How can I design a scalable web application using PHP?

    - by Legend
    I am building a web application and have a couple of quick questions. From what I've learnt, one should not worry about scalability when initially building the app and should only start worrying when the traffic increases. However, this being my first web application, I am not quite sure if I should take an approach where I design things in an ad-hoc manner and later "fix" them. I have been reading stories about how people start off with an app that gets millions of users in a week or two. Not that I will face the same situation, but I can't help but wonder: how do these people do it?

    Currently, I bought a shared hosting account on Lunarpages and that got me started in building and testing the application. However, I am interested in learning how to build the same application in a scalable manner using the cloud, for instance, Amazon's EC2. From my understanding, I can see a couple of components:

    - A load balancer that first receives requests and then decides where to route each request
    - A server replica that handles the request, updates the database (if required), and sends back the response to the client
    - A caching mechanism like memcached that kicks in when a similar request comes in, returning objects from the cache
    - A black box that handles database replication

    Specifically, I am trying to do the following:

    - Setting up a load balancer (my homework revealed that HAProxy is one such load balancer)
    - Setting up replication so that databases can be synchronized
    - Using memcached (a read-through sketch follows below)
    - Configuring Apache to work with multiple web servers
    - Partitioning the application to use Amazon EC2 and Amazon S3 (my application is something that will need a great deal of storage)

    Finally, how can I avoid burning myself when using Amazon services? Because this is just a learning phase, I can probably do with 2-3 servers, a simple load balancer, and replication, but I want to avoid paying loads of money accidentally. I am able to find resources on individual topics but am unable to find something that starts off from the big picture. Can someone please help me get started?
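
    To make the memcached item concrete, the read-through pattern below (pecl Memcached client; fetch_profile_from_db and the key scheme are invented) is usually the first caching layer added:

        <?php
        $mc = new Memcached();
        $mc->addServer('127.0.0.1', 11211);

        $key = 'profile:' . $userId;
        $profile = $mc->get($key);
        if ($profile === false) {            // cache miss: hit MySQL once
            $profile = fetch_profile_from_db($userId);
            $mc->set($key, $profile, 300);   // then cache for five minutes
        }

    The same shape works behind any of the other components: the load balancer and replicas stay stateless while repeated reads concentrate in the cache.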

    Read the article

  • Why is there so much XML in Java these days?

    - by BD at Rivenhill
    This is really more of a philosophy/design issue. I did some work in Java back in the middle 90's and again in the early 2000's and now I'm coming back to it after spending a lot of time in C/C++ and it seems like there was an explosion of XML dependency while I was gone. Major build system tools like ant and maven depend on XML for their configuration, but I'm actually more concerned with all the frameworks, such as Spring, Hibernate, etc. My experience is that powerful supporting libraries like these are where a developer can really get leverage for building programs with lots of features without writing a lot of code, but it really seems like I'm getting one language for the price of two here. I write a bunch of Java classes, but then I also write a bunch of XML files to glue them together. The things that get done in the XML are things that I can see reasonable ways of doing in straight code without the middleman, and they don't really seem to be treated exactly like configuration files: they change rarely and they end up getting committed to source code control like the Java code itself, but they are distributed with the resulting application and need to be unpacked and installed in the classpath in order to get the application to work. I'm working with server applications that are not web-based, so maybe the domain is a bit different from what most people are doing, but I just can't help feeling that I must be doing something wrong here. Can someone point me to a good source of information for why these design choices were made and what problems they are meant to solve so that I can analyze my own experiences in this context?
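
    For anyone who hasn't seen the glue in question, a typical fragment of it looks roughly like this (Spring-style XML, class names invented):

        <bean id="orderService" class="com.example.OrderService">
            <property name="orderDao" ref="orderDao"/>
        </bean>
        <bean id="orderDao" class="com.example.JdbcOrderDao"/>

    Everything in that fragment could be two constructor calls in Java, which is exactly the trade being questioned; the usual defence is that XML lets you rewire without recompiling, and later framework versions moved much of this back into annotations in the code.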

    Read the article

  • First Time Architecturing?

    - by cam
    I was recently given the task of rebuilding an existing RIA. The new RIA that I've designed is based on Silverlight, with a WCF service to connect to MS SQL Server. This is my first time doing something like this, so I'm not sure how to design the entire thing. Basically, the client can look through graphs of "stocks" (allowing the client to choose different time periods, settings, etc). I've essentially written the whole application, but I'm not sure how to put it together. The graphs are supposed to be based directly on the database, and to create the data points on the graph, some calculations need to be done (not very expensive ones). The problem I'm having is deciding where to put the calculations (client or server side? Or half and half?). What factors should I look at to help me decide where the calculations should be done? And how can I go about optimizing this (caching, etc)? Obviously this is a very broad subject, so I'm not expecting an immediate answer, but any help/pointing in the right direction/resources would be appreciated.

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' that has a directory NFS-mounted on server 'B'. A process on A writes to two files, F1 and F2, in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks to the head of the files, writes data, and flushes. Process B seeks to the head of the files and does reads.

    Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2?

    I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system.

    The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?
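
    On the tuning question: NFS clients can be told to give up most of their client-side caching, at a performance cost; a sketch of the relevant Linux mount options (availability varies by NFS version and platform):

        # noac disables attribute caching (actimeo=0 is the same idea);
        # sync asks for synchronous rather than delayed writes.
        mount -t nfs -o sync,noac serverB:/export /mnt/shared

    Even then NFS only promises close-to-open consistency, not ordered event delivery, so occasional transposed observations remain possible; a pipe, socket, or message queue is the tool that actually guarantees ordering.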

    Read the article

  • How do software projects go over budget and under-deliver?

    - by Carlos
    I've come across this story quite a few times here in the UK: NHS Computer System. Summary: we're spunking £12 billion on some health software with barely anything working. I was sitting in the office discussing this with my colleagues, and we had a little think about it. From what I can see, all the NHS needs is a database + middle tier of drugs/hospitals/patients/prescriptions objects, and various GUIs for doctors and nurses to look at. You'd also need to think about security and scalability. And you'd need to sit around a hospital/pharmacy/GP's office for a bit to figure out what they need.

    But, all told, I'd say I could knock together something with that kind of structure in a couple of days, and maybe throw in a month or two to make it work at scale. If I had a few million quid, I could probably hire some really excellent designers to make a maintainable codebase, and also buy appropriate hardware to run the system on. I hate to trivialize something that seems to have caused so much trouble, but to me it looks like just a big distributed CRUD + UI system.

    So how on earth did this project bloat to £12B without producing much useful software? As I don't think the software sounds that complicated, I can only imagine that something about how it was organised caused this mess. Is outsourcing the problem? Is it that the software designers weren't made to understand the medical business? What are your experiences with projects that went over budget and under-delivered? What are best practices for large projects? Have you ever worked on such a project?

    Read the article

  • Are there any changes in the licensing of Visual Studio 2013 Express editions?

    - by Ramón García-Pérez
    While going through the license.htm file provided as part of the VS2013_RTM_WebExp_ENU.iso offline installation media for Visual Studio 2013 Express for Web, I noticed that section 6 reads as follows:

        6. PACKAGE MANAGER AND THIRD PARTY SOFTWARE INSTALLATION FEATURES. The software includes the following features (each a “Feature”), each of which enables you to obtain software applications or packages through the Internet from other sources: Extension Manager, New Project Dialog, Web Platform Installer, and Microsoft NuGet-Based Package Manager. Those software applications and packages are offered and distributed in some cases by third parties and in some cases by Microsoft, but each such application or package is under its own license terms. Microsoft is not developing, distributing or licensing any of the third-party applications or packages to you, but instead, as a convenience, enables you to use the Features to access or obtain those applications or packages directly from the third-party application or package providers. By using the Features, you acknowledge and agree that: you are obtaining the applications or packages from such third parties and under separate license terms applicable to each application or package (including, with respect to the package-manager Features, any terms applicable to software dependencies that may be included in the package); MICROSOFT MAKES NO REPRESENTATIONS, WARRANTIES OR GUARANTEES AS TO THE FEED OR GALLERY URL, ANY FEEDS OR GALLERIES FROM SUCH URL, THE INFORMATION CONTAINED THEREIN, OR ANY SOFTWARE APPLICATIONS OR PACKAGES REFERENCED IN OR ACCESSED BY YOU THROUGH SUCH FEEDS OR GALLERIES. MICROSOFT GRANTS YOU NO LICENSE RIGHTS FOR THIRD-PARTY SOFTWARE APPLICATIONS OR PACKAGES THAT ARE OBTAINED USING THE FEATURES.

    Are there any changes in the licensing of the Visual Studio 2013 Express editions? If so, does this mean that installing Visual Studio extensions in the Express editions is now allowed?

    PS: Previous versions of the Express editions did not allow the installation of extensions as per the EULA/TOS discussed here: Limitations of Visual Studio 2012 Express Desktop

    Read the article

  • Entity framework 4.0 compiled query with Where() clause issue

    - by Andrey Salnikov
    Hello, I encountered some strange behavior of the System.Data.Objects.CompiledQuery.Compile function. Here is my code for compiling a simple query:

        private static readonly Func<DataContext, long, Product> productQuery =
            CompiledQuery.Compile((DataContext ctx, long id) =>
                ctx.Entities.OfType<Data.Product>()
                   .Where(p => p.Id == id)
                   .Select(p => new Product { Id = p.Id })
                   .SingleOrDefault());

    where DataContext inherits from ObjectContext and Product is a projection of the POCO Data.Product class. My data context in the first run contains Data.Product {Id == 1L} and in the second Data.Product {Id == 2L}. The first use of the compiled query, productQuery(dataContext, 1L), works perfectly: the result is Product {Id == 1L}. But the second run, productQuery(dataContext, 2L), always returns null, even though the context in the second run contains a single product with id == 2L. If I remove the Where clause I get the correct product (with id == 2L).

    It seems that the first id value is cached during the first run of productQuery, and therefore all further calls are valid only when dataContext contains Data.Product {id == 1L}. This issue can't be reproduced if I use a direct query instead of its precompiled version. All tests were performed on a test .mdf database using SQL Server 2008 Express and Visual Studio 2010 final, from my ASP.NET application.

    Read the article

  • Why doesn't String's hashCode() cache 0?

    - by polygenelubricants
    I noticed in the Java 6 source code for String that hashCode only caches values other than 0. The difference in performance is exhibited by the following snippet:

        public class Main {
            static void test(String s) {
                long start = System.currentTimeMillis();
                for (int i = 0; i < 10000000; i++) {
                    s.hashCode();
                }
                System.out.format("Took %d ms.%n", System.currentTimeMillis() - start);
            }
            public static void main(String[] args) {
                String z = "Allocator redistricts; strict allocator redistricts strictly.";
                test(z);
                test(z.toUpperCase());
            }
        }

    Running this on ideone.com gives the following output:

        Took 1470 ms.
        Took 58 ms.

    So my questions are:

    1) Why doesn't String's hashCode() cache 0?
    2) What is the probability that a Java string hashes to 0?
    3) What's the best way to avoid the performance penalty of recomputing the hash value every time for strings that hash to 0?
    4) Is this the best-practice way of caching values? (i.e. cache all except one?)

    For your amusement, each line here is a string that hashes to 0:

        pollinating sandboxes
        amusement & hemophilias
        schoolworks = perversive
        electrolysissweeteners.net
        constitutionalunstableness.net
        grinnerslaphappier.org
        BLEACHINGFEMININELY.NET
        WWW.BUMRACEGOERS.ORG
        WWW.RACCOONPRUDENTIALS.NET
        Microcomputers: the unredeemed lollipop...
        Incentively, my dear, I don't tessellate a derangement.
        A person who never yodelled an apology, never preened vocalizing transsexuals.
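
    On question 4, the standard alternative is to spend one extra field and cache every value, 0 included; a sketch of that trade-off:

        public final class CachedHash {
            private final String value;
            private int hash;              // cached hash code
            private boolean hashComputed;  // distinguishes "not computed" from "hash is 0"

            public CachedHash(String value) { this.value = value; }

            @Override
            public int hashCode() {
                if (!hashComputed) {
                    hash = value.hashCode();   // stand-in for any expensive computation
                    hashComputed = true;
                }
                return hash;
            }
        }

    String's single-field version keeps object size down and stays safe under the benign race of several threads recomputing the same int; the two-field version above would need synchronization (or careful field ordering) to be equally safe, which is likely part of why String does not bother for the rare hashes-to-0 case.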

    Read the article

  • Response time increasing (worsening) over time with consistent load

    - by NJ
    Ok. I know I don't have a lot of information. That is, essentially, the reason for my question. I am building a game using Flash/Flex and Rails on the back-end. Communication between the two is via WebORB. Here is what is happening. When I start the client an operation calls the server every 60 seconds (not much, right?) which results in two database SELECTS and an UPDATE and a resulting response to the client. This repeats every 60 seconds. I deployed a test version on heroku and NewRelic's RPM told me that response time degraded over time. One client with one task every 60 seconds. Over several hours the response time drifted from 150ms to over 900ms in response time. I have been able to reproduce this in my development environment (my Macbook Pro) so it isn't a problem on Heroku's side. I am not doing anything sophisticated (by design) in the server app. An action gets called, gets some data from the database, performs an AR update and then returns a response. No caching, etc. Any thoughts? Anyone? I'd really appreciate it.

    Read the article

  • [FLEX] Images won't dynamically refresh

    - by Bridget
    I am writing a Flex application to receive XML from an HTTPService. That works, because I can populate a datagrid with the information. The XML sends image pathnames. A combobox sends a new HTTPService call onChange. This repopulates the datagrid and puts new images in the folder that Flex is accessing. I want to dynamically change the image without changing the pathname of the image. This is my component:

        <mx:Canvas id="borderCanvas">
          <mx:Canvas id="dropCanvas">
            <mx:Tile id="adTile">
              <mx:Image></mx:Image>
            </mx:Tile>
          </mx:Canvas>
        </mx:Canvas>

    I assign my Image sources using this code:

        var i:Number = 0;
        while (i <= dg_conads.rowCount) {
            var img:Image = new Image();
            img.source = null;
            img.source = imageSource + i + ".jpg";
            adTile.addChild(img);
            i++;
        }

    My biggest problem is that the images are not refreshing. I get the same image even though I've prevented caching from the HTML wrapper and the ASP.NET website. The image automatically loads in the folder and refreshes in the folder, but I can't get the image to refresh in the application. I've tried removeAllChildren(); and delete(adTile.getChildAt(0)); and neither worked.
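
    A common workaround when the pathname must stay stable is to make each request URL unique instead, so no cache layer between Flex and the server can serve the stale bytes; a sketch:

        var img:Image = new Image();
        // The throwaway query parameter changes on every call; the server
        // ignores it, but caches treat it as a brand-new resource.
        img.source = imageSource + i + ".jpg?nocache=" + new Date().getTime();
        adTile.addChild(img);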

    Read the article

  • Web services or shared database for (game) server communication?

    - by jaaronfarr
    We have 2 server clusters: the first is made up of typical web applications backed by SQL databases. The second is highly optimized multiplayer game servers which keep all data in memory. Both clusters communicate with clients via HTTP (Ajax with JSON). There are a few cases in which we need to share data between the two server types; for example, reporting back and storing the results of a game (which should ultimately end up in the database).

    We're considering several approaches for inter-server communication:

    - Just share the MySQL databases between clusters (introduce SQL to the game servers)
    - Sharing data in a distributed key-value store like Memcache, Redis, etc.
    - Use an RPC technology like Google ProtoBufs or Apache Thrift
    - Using RESTful web services (the game server would POST back to the web servers, for example)

    At the moment, we're leaning towards web services or just sharing the database. Sharing the database seems easy, but we're concerned this adds extra memory and a new dependency into the game servers. Web services provide good separation of concerns and fit with the existing Ajax we use, but add complexity, overhead, and many more ways for communication to fail. Are there any other good reasons not to use one or the other approach? Which would be easier to scale?

    Read the article

  • Ajax return string links not working

    - by Andreas Lympouras
    I have this Ajax function:

        function getSearchResults(e) {
            var searchString = e.value;
            /*var x = e.keyCode;
            var searchString = String.fromCharCode(x);*/
            if (searchString == "") {
                document.getElementById("results").innerHTML = "";
                return;
            }
            var xmlhttp;
            if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp = new XMLHttpRequest();
            } else { // code for IE6, IE5
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            xmlhttp.onreadystatechange = function () {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    document.getElementById("results").innerHTML = xmlhttp.responseText;
                }
            }
            xmlhttp.open("GET", "<%=Page.ResolveUrl("~/template/searchHelper.aspx?searchString=")%>" + searchString, true);
            xmlhttp.setRequestHeader("If-Modified-Since", "Sat, 1 Jan 2000 00:00:00 GMT"); // solve any AJAX caching problem (not refreshing)
            xmlhttp.send();
        }

    and here is where I call it:

        <input class="text-input" value="SEARCH" id="searchbox" onkeyup="javascript:getSearchResults(this);" maxlength="50" runat="server" />

    All the links in the searchHelper.aspx file (which returns them as a string) look like this:

        <a class="item" href='../src/actors.aspx?id=77&name=dasdassss&type=a'>
          <span class="item">dasdassss</span></a>

    When I click this link I want my website to go to ../src/actors.aspx?id=77&name=dasdassss&type=a but nothing happens. When I hover over the link, it also shows me where the link is about to redirect! Any help?

    Read the article

  • How can I initialize an ActiveX control from a URL?

    - by Peter Ruderman
    I have an MFC ActiveX control embedded in a web page. Some of the parameters for this control are very large. I don't know what these values will be at compile time, but I do know that once retrieved, they will almost certainly never change. Currently, I embed the parameters like so:

        <object name="MyActiveX">
          <param name="param" value="<%= GetData() %>" />
        </object>

    I want to do something like this:

        <object name="MyActiveX">
          <param name="param" value="content/data" valuetype="ref" />
        </object>

    The idea is that the browser would retrieve the resource from the web server and pass it on to the control. The browser's own caching would then take care of the unnecessary downloads. Unfortunately, ref parameters don't work like this; the browser just passes the URL along to the control (which strikes me as utterly useless, but I digress). So, is there some way I can make this work? Alternatively, is there an easy way in MFC to instruct the control's host container to retrieve a URI-identified resource? Any better ideas?
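
    On the MFC side of the question: inside an IE-hosted control, the URL moniker API fetches through the same WinINet cache the browser uses, so repeat loads come from disk while still fresh. A sketch without error handling (URL invented):

        #include <tchar.h>
        #include <urlmon.h>
        #pragma comment(lib, "urlmon.lib")

        IStream* pStream = NULL;
        HRESULT hr = URLOpenBlockingStream(NULL, _T("http://example.com/content/data"),
                                           &pStream, 0, NULL);
        if (SUCCEEDED(hr)) {
            char buf[4096];
            ULONG read = 0;
            // Read returns S_FALSE with read == 0 at end of stream.
            while (SUCCEEDED(pStream->Read(buf, sizeof(buf), &read)) && read > 0) {
                // ... feed buf into the control's parameter state ...
            }
            pStream->Release();
        }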

    Read the article

  • Fastest reliable way for Clojure (Java) and Ruby apps to communicate

    - by jkndrkn
    Hi there, we have cloud-hosted (RackSpace cloud) Ruby and Java apps that will interact as follows:

    1) The Ruby app sends a request to the Java app. The request consists of a map structure containing strings, integers, other maps, and lists (analogous to JSON).
    2) The Java app analyzes the data and sends a reply to the Ruby app.

    We are interested in evaluating both messaging formats (JSON, Protocol Buffers, Thrift, etc.) as well as message transmission channels/techniques (sockets, message queues, RPC, REST, SOAP, etc.). Our criteria:

    - Short round-trip time.
    - Low round-trip-time standard deviation. (We understand that garbage collection pauses and network usage spikes can affect this value.)
    - High availability.
    - Scalability (we may want to have multiple instances of the Ruby and Java apps exchanging point-to-point messages in the future).
    - Ease of debugging and profiling.
    - Good documentation and community support.
    - Bonus points for Clojure support.

    What combination of message format and transmission method would you recommend? Why? I've gathered here some materials we have already collected for review:

    - Comparison of various Java serialization options
    - Comparison of Thrift and Protocol Buffers (old)
    - Comparison of various data interchange formats
    - Comparison of Thrift and Protocol Buffers
    - Fallacies of Protocol Buffers RPC features
    - Discussion of RPC in the context of AMQP (message queueing)
    - Comparison of RPC and message-passing in distributed systems (pdf)
    - Criticism of RPC from the perspective of a message-passing fan
    - Overview of Avro from a Ruby programmer's perspective

    Read the article

  • In maven2, how do I assemble bits and pieces of different modules to create final distributions?

    - by Carcassi
    I have four Maven projects:

    1) client api jar
    2) web service war
    3) ui jar
    4) web interface war

    The service war will need to be packaged to include the client api jar, together with javadocs (so that each version is distributed with a matching client and documentation). The web interface war will need the ui jar and all the dependencies (webstart/applet deployment). So I need a fifth project that does all the packaging. How to do this with Ant or a script is perfectly clear to me, but not in Maven. I tried the following:

    - Having the javadocs included as part of the war packaging: this requires the execution of the javadocs goal in project 1 before execution of package in project 2. I haven't found a way to bind plugins/goals across different projects. Using the assembly plugin in project 2 had the same problem.
    - Creating a fifth project and using the assembly plugin. Still the same problems as before, with the added problem that, since I need different pieces from each sub-project, I do not understand how this can be done using the assembly.

    Is this too hard to do in Maven, and should I just give up? Or am I looking at it wrong, in which case, how should I be looking at it? Thanks!

    Upon further reflection, here is a partial answer: each project should build all its artifacts. This is done by having the plugins configured to run as part of the prepare-resources and package phases. So, in my case, I prepare all that needs to be generated (jar, javadocs, xsd documentation, ...) as different artifacts so that a single "package" goal execution creates all of them. So it's not "how does project 2 force project 1 to run different goals", but "make project 1 create all of its artifacts as part of the normal lifecycle".
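
    To make "a single package execution creates all artifacts" concrete, attaching javadocs in project 1 is one standard maven-javadoc-plugin execution (the jar goal binds into the package phase by default):

        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-javadoc-plugin</artifactId>
            <executions>
                <execution>
                    <id>attach-javadocs</id>
                    <goals>
                        <goal>jar</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>

    The packaging project can then declare ordinary dependencies on the sibling artifacts, using the javadoc classifier where needed, and let the assembly plugin gather them; the reactor takes care of build order.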

    Read the article

  • when to use Hibernate vs. Simple ResultSets for small application

    - by luke
    I just started working on upgrading a small component in a distributed Java application. The main application is a rather complicated applet/servlet combo running on JBoss, and it extensively uses Hibernate for its data access. The component I am working on, however, is a very straightforward data-importing service. Basically, the workflow is:

    1) Listen for a network event
    2) Parse the data packet, extract a set of identifiers
    3) Map the identifier set to a primary key in our database
    4) Parse the rest of the packet and insert items in a related table using the foreign key found in step 3
    5) Repeat

    In the previous version of this component it used a Hibernate-based DAL that is no longer usable for a variety of reasons (in particular, it is EOL), so I am in charge of replacing the data access layer for this component. So on the one hand I think I should use Hibernate because that's what the rest of the application does, but on the other I think I should just use regular java.sql.* classes because my requirements are really straightforward and aren't expected to change any time soon. So my question is (and I understand it is subjective): at what point do you think that the added complexity of using an ORM tool (in terms of configuration, dependencies, ...) is worth it?

    UPDATE: due to the way the DataAccessLayer for the main application was written (weird dependencies), I cannot easily use it; I would have to implement it myself.
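
    For scale, the plain java.sql version of steps 3 and 4 is short enough to weigh directly against the ORM setup cost; a sketch with invented table and column names (statements and result sets would be closed in finally blocks in real code):

        // Step 3: map the identifier set to a primary key.
        PreparedStatement find = conn.prepareStatement(
                "SELECT id FROM devices WHERE serial = ? AND site = ?");
        find.setString(1, serial);
        find.setString(2, site);
        ResultSet rs = find.executeQuery();
        long deviceId = rs.next() ? rs.getLong("id") : -1L;

        // Step 4: insert related items under that foreign key.
        PreparedStatement insert = conn.prepareStatement(
                "INSERT INTO readings (device_id, name, value) VALUES (?, ?, ?)");
        insert.setLong(1, deviceId);
        insert.setString(2, name);
        insert.setString(3, value);
        insert.executeUpdate();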

    Read the article

  • PHP, MySQL, Memcache / Ajax Scaling Problem

    - by Jeff Andersen
    I'm building a ajax tic tac toe game in PHP/MySQL. The premise of the game is to be able to share a url like mygame.com/123 with your friends and you play multiple simultaneous games. The way I have it set up is that a file (reload.php) is being called every 3 seconds while the user is viewing their game board space. This reload.php builds their game boards and the output (html) replaces their current game board (thus showing games in which it is their turn) Initially I built it entirely with PHP/MySQL and had zero caching. A friend gave me a suggestion to try doing all of the temporary/quick read information through memcache (storing moves, and ID matchups) and then building the game boards from this information. My issue is that, both solutions encounter a wall when there is roughly 30-40 active users with roughly 40-50 games running. It is running on a VPS from VPS.net with 2 nodes. (Dedicated CPU: 1.2GHz, RAM: 752MB) Each call to reload.php peforms 3 selects and 2 insert queries. The size of the data being pulled is negligible. The same actions happen on index.php to build the boards for the initial visit. Now that the backstory is done, my question is: Would there be a bottleneck in that each user is polling the same file every 3 seconds to rebuild their gameboards, and that all users are sitting on index.php from which the AJAX calls are made within the HTML. If so, is it possible to spread the users' calls out over a set of files designated to building the game boards (eg. reload1.php 2, 3 etc) and direct users to the appropriate file. Would this relieve the pressure? A long winded explanation; however, I didn't have anywhere else to ask. Thanks very much for any insight.

    Read the article

  • Behavior of Struts2 and convention-plugin when there is Index(extends ActionSupport)

    - by hanishi
    We have an action class named 'Index' immediately under com.example.common.action. It is annotated @ParentPackage('default'), which is declared in a package directive in struts.xml, has "/" for its namespace, and extends "struts-default". It also declares @Result so that it responds with the JSP files corresponding to the string values returned by its execute() method. In our struts.xml, the following setting is configured along with the other configuration needed for the convention plugin:

        <constant name="struts.action.extension" value=","/>

    When accessing /my_context/none_existing_path, the request apparently hits this Index class and the contents of the JSP declared in the Index's @Result section get returned. However, if we request /my_context/, we receive the following error:

        HTTP Status 404 - There is no Action mapped for namespace [/] and action name [] associated with context path [/my_context].

    We want to know why accessing /my_context/none_existing_path, where none_existing_path has no matching action, can fall back to the Index class, but an error is returned when the URL requested is just /my_context/. Currently, our convention-plugin settings are declared as follows:

        <constant name="struts.convention.package.locators.basePackage" value="com.example"/>
        <constant name="struts.convention.package.locators" value="action"/>

    Strangely, if we change the value of struts.convention.package.locators.basePackage to com.example.common, in which the aforementioned Index class can be found immediately by narrowing the search scope, requesting /my_context/ displays the content of the JSPs declared in the @Result section of the Index class. However, as our action classes are distributed throughout the com.example.[a-z].action packages, where [a-z] represents the large number of directories in our package structure, we cannot use this trick as a workaround. We have also tried placing index.jsp at the top level of the classpath and having it redirect to /my_context/index, which worked but is not what we want. Could this be a bug? We appreciate your responses. Thank you in advance.

    EDIT: JIRA issue registered; problem solved (from Struts 2.3.12 up).

    Read the article

  • Hadoop File Read

    - by user3684584
    This is the Hadoop Distributed Cache wordcount example in Hadoop 2.2.0. I copied a file into the HDFS filesystem to be used inside the setup of the mapper class:

        protected void setup(Context context) throws IOException, InterruptedException {
            Path[] uris = DistributedCache.getLocalCacheFiles(context.getConfiguration());
            cacheData = new HashMap();
            for (Path urifile : uris) {
                try {
                    BufferedReader readBuffer1 = new BufferedReader(new FileReader(urifile.toString()));
                    String line;
                    while ((line = readBuffer1.readLine()) != null) {
                        System.out.println("**************" + line);
                        cacheData.put(line, line);
                    }
                    readBuffer1.close();
                } catch (Exception e) {
                    System.out.println(e.toString());
                }
            }
        }

    Inside the driver main class:

        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 3) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word_count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        Path outputpath = new Path(otherArgs[1]);
        outputpath.getFileSystem(conf).delete(outputpath, true);
        FileOutputFormat.setOutputPath(job, outputpath);
        System.out.println("CachePath****************" + otherArgs[2]);
        DistributedCache.addCacheFile(new URI(otherArgs[2]), job.getConfiguration());
        System.exit(job.waitForCompletion(true) ? 0 : 1);

    But I am getting the exception:

        java.io.FileNotFoundException: file:/home/user12/tmp/mapred/local/1408960542382/cache (No such file or directory)

    So the cache functionality is not working properly. Any ideas?
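
    One hedged suggestion, since the failure isn't reproduced here: the exception shows a local file path being assumed, and in Hadoop 2.x the Job/context API resolves cache files as URIs instead (DistributedCache itself is deprecated there). A sketch of that variant:

        // Driver (Hadoop 2.x): add the cache file through the Job object.
        job.addCacheFile(new URI(otherArgs[2]));

        // Mapper setup: read the URIs back and open them through the FileSystem,
        // which keeps the hdfs:// scheme instead of assuming a local file.
        URI[] cacheFiles = context.getCacheFiles();
        if (cacheFiles != null) {
            for (URI uri : cacheFiles) {
                BufferedReader reader = new BufferedReader(new InputStreamReader(
                        FileSystem.get(context.getConfiguration())
                                  .open(new org.apache.hadoop.fs.Path(uri))));
                // ... read lines into cacheData as before ...
                reader.close();
            }
        }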

    Read the article

  • Major JQuery bug on IE not reproducible - What can i do in this situation to solve this bug?

    - by ming yeow
    I am hoping to get some help on this issue. Some users on IE have been reporting this JavaScript issue, but I have been unable to reproduce it. In essence, for some class of Windows IE users, the game doesn't work (or $.ajax() is not working). What I know:

    - I swapped out an Ajax call (ajax_init_trainer) and used a standard link with some request parameters to do the initialization, and people seemed to get past the problem until they hit the next Ajax call.
    - I read somewhere that IE does crazy caching, so you need to make the URLs unique, which is why I added the _requestno parameter. However, setting cache:false is also said to do this. This didn't fix it for someone who was complaining.

        function done(res, status) {
            var data = JSON.parse(res.responseText);
            hide_loading();
            if (status == "success") {
                window.location.href = "/bamo/battle/?{{ fb_sig}}";
            } else {
                display_alert("Problem!", data.msg, $("#notifications"));
            }
        };

        $(".monster_select_class").click(function() {
            $(this).attr("src", "{{MEDIA_URL}}/bamo/button_select_click.png");
            monster_class = $(this).attr("monster_class");
            monster_type = $(this).attr("monster_type");
            ajax_init_trainer(monster_class, monster_type);
        });

        function ajax_init_trainer(trainer_class, monster_type) {
            var data = {trainer_class: trainer_class, monster_type: monster_type};
            var d = new Date();
            var args = {
                type: "POST",
                url: "/bamo/api/init_trainer/?_requestno=" + d.getTime(),
                data: data,
                contentType: "application/json;",
                dataType: "json",
                cache: false,
                complete: done
            };
            $.ajax(args);
            return false;
        };
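
    A hedged debugging step, since the failure only shows on some IE installs: IE before version 8 has no native JSON.parse, so the done() handler above would throw before display_alert ever runs on those browsers. Letting jQuery parse the response (dataType "json" hands success an already-parsed object) and adding an error callback would at least surface what those users hit; a sketch:

        function ajax_init_trainer(trainer_class, monster_type) {
            var d = new Date();
            $.ajax({
                type: "POST",
                url: "/bamo/api/init_trainer/?_requestno=" + d.getTime(),
                data: {trainer_class: trainer_class, monster_type: monster_type},
                dataType: "json",
                cache: false,
                success: function (data) {
                    hide_loading();
                    window.location.href = "/bamo/battle/";  // template params omitted
                },
                error: function (xhr, status, err) {
                    // Report the status IE actually saw instead of failing silently.
                    display_alert("Problem!", status + " (" + xhr.status + ")", $("#notifications"));
                }
            });
            return false;
        }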

    Read the article
