Search Results

Search found 7719 results on 309 pages for 'impress js'.


  • JS closures - Passing a function to a child, how should the shared object be accessed

    - by slicedtoad
    I have a design and am wondering what the appropriate way to access variables is. I'll demonstrate with this example, since I can't seem to describe it better than the title.

    Term is an object representing a bunch of time data (a repeating duration of time defined by a bunch of attributes). Term has some print functionality but does not implement the print functions itself; rather, they are passed in as anonymous functions by the parent. This is similar to how shaders can be passed to a renderer rather than defined by the renderer. A container (let's call it Box) has a Schedule object that can understand and use Term objects. Box creates Term objects and passes them to Schedule as required. Box also defines the print functions stored in Term. A print function usually takes an argument and uses it to return a string based on that argument and Term's internal data. Sometimes the print function could also use data stored in Schedule, though. I'm calling this data "shared".

    So, the question is: what is the best way to access this shared data? I have a lot of options, since JS has closures, and I'm not familiar enough to know whether I should be using them or avoiding them in this case. Options:

    1. Create a local "reference" (term used lightly) to the shared data (the data is not a primitive) when defining the print function, by accessing the shared data through Schedule from Box. Example:

            var schedule = function(){
                var sched = Schedule();
                var t1 = Term( function(x){ // Term.print()
                    return (x + sched.data).format();
                });
            };

    2. Bind it to Term explicitly (pass it in Term's constructor or something), or bind it in Schedule after Box passes it, and then access it as an attribute of Term.

    3. Pass it in at the same time x is passed to the print function (from Schedule). This is the most familiar way for me, but it doesn't feel right given JS's closure ability.

    4. Do something weird like bind some context and arguments to print.

    I'm hoping the correct answer isn't purely subjective. If it is, then I guess the answer is just "do whatever works". But I feel like there are some significant differences between the approaches that could have a large impact when stretched beyond my small example.
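
    For what it's worth, here is a small sketch contrasting options 1-3; the Schedule/Term names follow the question, and everything else (including how Term invokes its print function) is assumed:

        var sched = Schedule();

        // Option 1: the print function closes over sched, so shared data is reached via closure.
        var t1 = Term(function (x) {              // Term.print()
            return (x + sched.data).format();
        });

        // Option 2: bind the shared data onto the Term and read it as an attribute
        // (assumes Term calls the print function with the Term itself as 'this').
        var t2 = Term(function (x) {
            return (x + this.shared).format();
        });
        t2.shared = sched.data;

        // Option 3: Schedule passes the shared data alongside x when it calls print.
        var t3 = Term(function (x, shared) {
            return (x + shared).format();
        });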

    Read the article

  • Which game engine for HTML5 + Node.js

    - by Chrene
    I want to create a realtime multiplayer game using HTML5. I want to use node.js as the server, and I only need to be able to render images in a canvas, play some sounds, and do some basic animations. The game loop should run on the server, and the client should receive callbacks via sockets to render the canvas. I am not going to spend any money on the engine, and I don't want to use cocos2d-javascript.
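
    A minimal sketch of the server-driven loop described above, assuming Socket.IO as the transport (the question only says "sockets", so the library choice, tick rate, event names and element ids are placeholders):

        // server.js
        var http = require('http').createServer();
        var io = require('socket.io')(http);

        var state = { x: 0, y: 0 };

        setInterval(function () {           // authoritative game loop runs on the server
            state.x += 1;                   // ...update the game state here...
            io.emit('state', state);        // push the new state to every connected client
        }, 50);

        io.on('connection', function (socket) {
            socket.on('input', function (input) {
                // apply the player's input to the server-side state
            });
        });

        http.listen(3000);

        // client.js - only renders what the server sends
        var socket = io();
        var ctx = document.getElementById('game').getContext('2d');
        socket.on('state', function (state) {
            ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
            ctx.fillRect(state.x, state.y, 10, 10);
        });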

    Read the article

  • ember.js on Symfony2 dev environment doesn't work properly

    - by rkmax
    I've built an app with Symfony2; the app exposes a REST API. Now I'm building a simple client to consume it.

    app.coffee - app.js

        App = Em.Application.create
          ready: ->
            @.entradas.load()
          Entrada: Em.Object.extend()
          entradas: Em.ArrayController.create
            content: []
            load: ->
              url = 'http://localhost/api/1/entrada'
              me = @
              $.ajax(
                url: url,
                method: 'GET',
                success: (data) ->
                  me.set('content', [])
                  for entrada in data.data.objects
                    me.pushObject DBPlus.Entrada.create(entrada)
              )

    MyBundle:Home:index.html.twig

        <script type="text/x-handlebars" src="{{ asset('js/templates/entradas.hbs') }}"></script>
        <script src="{{ asset('js/libs/jquery-1.7.2.min.js') }}"></script>
        <script src="{{ asset('js/libs/handlebars-1.0.0.beta.6.js') }}"></script>
        <script src="{{ asset('js/libs/ember-1.0.pre.min.js') }}"></script>
        <script src="{{ asset('js/app.js') }}"></script>

    The problem is that when I run in the dev environment and link the template like <script type="text/x-handlebars" src="{{...}}">, the app doesn't work; nothing shows. It works fine in the prod environment. The only way it works in the dev environment is with an inline template:

    MyBundle:Home:index.html.twig

        <script type="text/x-handlebars">
          {% raw %}
          <ul class="entradas">
            {{#each App.entradas}}
              <li class="entrada">{{nombre}}</li>
            {{/each}}
          </ul>
          {% endraw %}
        </script>

    Can anyone explain this behaviour? Note: I disabled the debug profiler toolbar, and nothing changed.

    Read the article

  • How to filter Backbone.js Collection and Rerender App View?

    - by Jeremy H.
    This is a total Backbone.js noob question. I am working off of the ToDo Backbone.js example, trying to build out a fairly simple single-app interface. While the todo project is more about user input, this app is more about filtering the data based on user options (click events). I am completely new to Backbone.js and Mongoose and have been unable to find a good example of what I am trying to do. I have been able to get my API to pull the data from the MongoDB collection and drop it into the Backbone.js collection, which renders it in the app. What I cannot for the life of me figure out is how to filter that data and re-render the app view. I am trying to filter by the "type" field in the document. Here is my script (I am totally aware of some major refactoring needed, I am just rapid prototyping a concept):

        $(function() {

          window.Job = Backbone.Model.extend({
            idAttribute: "_id",
            defaults: function() {
              return { attachments: false }
            }
          });

          window.JobsList = Backbone.Collection.extend({
            model: Job,
            url: '/api/jobs',
            leads: function() {
              return this.filter(function(job){ return job.get('type') == "Lead"; });
            }
          });

          window.Jobs = new JobsList;

          window.JobView = Backbone.View.extend({
            tagName: "div",
            className: "item",
            template: _.template($('#item-template').html()),

            initialize: function() {
              this.model.bind('change', this.render, this);
              this.model.bind('destroy', this.remove, this);
            },

            render: function() {
              $(this.el).html(this.template(this.model.toJSON()));
              this.setText();
              return this;
            },

            setText: function() {
              var month = new Array();
              month[0]="Jan"; month[1]="Feb"; month[2]="Mar"; month[3]="Apr";
              month[4]="May"; month[5]="Jun"; month[6]="Jul"; month[7]="Aug";
              month[8]="Sep"; month[9]="Oct"; month[10]="Nov"; month[11]="Dec";

              var title = this.model.get('title');
              var description = this.model.get('description');
              var datemonth = this.model.get('datem');
              var dateday = this.model.get('dated');
              var jobtype = this.model.get('type');
              var jobstatus = this.model.get('status');
              var amount = this.model.get('amount');
              var paymentstatus = this.model.get('paymentstatus')

              var type = this.$('.status .jobtype');
              var status = this.$('.status .jobstatus');

              this.$('.title a').text(title);
              this.$('.description').text(description);
              this.$('.date .month').text(month[datemonth]);
              this.$('.date .day').text(dateday);
              type.text(jobtype);
              status.text(jobstatus);

              if(amount > 0) this.$('.paymentamount').text(amount)
              if(paymentstatus) this.$('.paymentstatus').text(paymentstatus)

              if(jobstatus === 'New') {
                status.addClass('new');
              } else if (jobstatus === 'Past Due') {
                status.addClass('pastdue')
              };

              if(jobtype === 'Lead') {
                type.addClass('lead');
              } else if (jobtype === '') {
                type.addClass('');
              };
            },

            remove: function() {
              $(this.el).remove();
            },

            clear: function() {
              this.model.destroy();
            }
          });

          window.AppView = Backbone.View.extend({
            el: $("#main"),

            events: {
              "click #leads .highlight" : "filterLeads"
            },

            initialize: function() {
              Jobs.bind('add', this.addOne, this);
              Jobs.bind('reset', this.addAll, this);
              Jobs.bind('all', this.render, this);
              Jobs.fetch();
            },

            addOne: function(job) {
              var view = new JobView({model: job});
              this.$("#activitystream").append(view.render().el);
            },

            addAll: function() {
              Jobs.each(this.addOne);
            },

            filterLeads: function() {
              // left here, this event fires but i need to figure out how to filter the activity list.
            }
          });

          window.App = new AppView;
        });
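
    One hedged sketch of filterLeads, reusing the leads() helper already defined on the collection (untested against the rest of the app):

        filterLeads: function() {
            this.$("#activitystream").empty();          // drop the currently rendered items
            _.each(Jobs.leads(), this.addOne, this);    // re-render only models whose type is "Lead"
        }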

    Read the article

  • min.js to clear source

    - by dole
    Hi there. As far as I know, the "min" version of a .js (JavaScript) file is obtained by removing the unnecessary blank spaces and comments, in order to reduce the file size. My questions are:

    1. How can I convert a min.js file into a clear, easily readable .js file?
    2. Besides size (and speed), are there any other advantages of the min.js file?
    3. Can the .js files be encrypted?
    4. Can .js files be infected? I think the answer is yes, so the question is how to protect .js files from infections?

    Only the first question is the most important one, and I'm looking for help on it. TY

    Read the article

  • How to integrate history.js to my ajax site?

    - by vzhen
    I have left and right divs in my website, along with navigation buttons - let's say button_1 and button_2. Clicking one of these buttons changes the right div's content via Ajax, so the left div does nothing. That part I have done, but I have a problem: the URL doesn't change. I found that history.js can solve this issue, but I cannot find any tutorial about history.js + Ajax. Can anyone help me out? Thanks in advance.
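
    A rough sketch of how history.js is commonly wired up alongside Ajax content loading; the selectors and the way the right div gets filled are assumptions, not code from the question:

        // When a navigation button is clicked, update the address bar instead of navigating.
        $('.nav-button').on('click', function (e) {
            e.preventDefault();
            var url = $(this).attr('href');
            History.pushState({ url: url }, document.title, url);
        });

        // Fires on pushState and on back/forward, so the Ajax load lives in one place.
        History.Adapter.bind(window, 'statechange', function () {
            var state = History.getState();
            $('#right-div').load(state.url + ' #right-div > *');   // refresh only the right div
        });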

    Read the article

  • How can I return JSON from node.js backend to frontend without reloading or re-rendering the page?

    - by poleapple
    I am working with node.js. I want to press a search button, make some REST API calls to another server in the backend, return the JSON to the front end, and reload a div in the front end so that I won't have to refresh the page. Now, I know I can reload just a div with jQuery or plain JavaScript DOM manipulation. But how do I make it call a method on the server side? I could have a submit button that makes a POST request, and I could catch it and make my API calls from there; however, on the node.js side, when I return, I will have to render the page again. How do I go about returning JSON from the back end to the front end without re-rendering or refreshing my page? Thanks.
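
    A minimal sketch of the round trip, assuming Express on the server and jQuery on the client; the route name, element ids and the upstream REST call are placeholders:

        // server (Express): a route that answers with JSON instead of rendering a page
        var express = require('express');
        var app = express();

        app.get('/api/search', function (req, res) {
            // ...call the other server's REST API here and build `results`...
            var results = [{ title: 'example result for ' + req.query.q }];
            res.json(results);              // no template, no page re-render
        });

        app.listen(3000);

        // client (jQuery): fetch the JSON and refresh only the results div
        $('#search-button').on('click', function () {
            $.getJSON('/api/search', { q: $('#query').val() }, function (data) {
                var html = data.map(function (item) { return '<p>' + item.title + '</p>'; }).join('');
                $('#results').html(html);   // the page itself never reloads
            });
        });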

    Read the article

  • How to implement CSRF protection in Ajax calls using express.js (looking for complete example)?

    - by Benjen
    I am trying to implement CSRF protection in an app built on node.js using the express.js framework. The app makes abundant use of Ajax POST calls to the server. I understand that the Connect framework provides CSRF middleware, but I am not sure how to implement it in the scope of client-side Ajax POST requests. There are bits and pieces about this in other questions posted here on Stack Overflow, but I have yet to find a reasonably complete example of how to implement it from both the client and server sides. Does anyone have a working example they care to share?
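
    Not a complete answer, but a hedged sketch of the usual wiring with the CSRF middleware bundled into Express 3.x / Connect; the session secret, meta tag and route are placeholders, and depending on the Connect version the token is read via req.csrfToken() or req.session._csrf:

        // server
        var express = require('express');
        var app = express();

        app.use(express.cookieParser());
        app.use(express.session({ secret: 'replace-me' }));
        app.use(express.bodyParser());
        app.use(express.csrf());                      // rejects unsafe requests without a valid token

        app.use(function (req, res, next) {
            res.locals.csrfToken = req.csrfToken();   // expose the token to the views
            next();
        });

        app.post('/api/items', function (req, res) {
            res.json({ ok: true });                   // only reached if the token checked out
        });

        // view (any template engine): <meta name="csrf-token" content="<token value here>">

        // client: send the token with every Ajax request
        $.ajaxSetup({
            headers: { 'X-CSRF-Token': $('meta[name="csrf-token"]').attr('content') }
        });
        $.post('/api/items', { name: 'test' }, function (resp) { console.log(resp); });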

    Read the article

  • Using npm install as a MS-Windows system account

    - by Guss
    I have a node application running on Windows, which I want to be able to update automatically. When I run npm install -d as the Administrator account it works fine, but when I try to run it through my automation software (which is running as Local System), I get errors when I try to install a private module from a private git repository:

        npm ERR! git clone [email protected]:team/repository.git fatal: Could not change back to 'C:/Windows/system32/config/systemprofile/AppData/Roaming/npm-cache/_git-remotes/git-bitbucket-org-team-repository-git-06356f5b': No such file or directory
        npm ERR! Error: Command failed: fatal: Could not change back to 'C:/Windows/system32/config/systemprofile/AppData/Roaming/npm-cache/_git-remotes/git-bitbucket-org-team-repository-git-06356f5b': No such file or directory
        npm ERR!
        npm ERR!     at ChildProcess.exithandler (child_process.js:637:15)
        npm ERR!     at ChildProcess.EventEmitter.emit (events.js:98:17)
        npm ERR!     at maybeClose (child_process.js:735:16)
        npm ERR!     at Socket.<anonymous> (child_process.js:948:11)
        npm ERR!     at Socket.EventEmitter.emit (events.js:95:17)
        npm ERR!     at Pipe.close (net.js:451:12)
        npm ERR! If you need help, you may report this log at:
        npm ERR!     <http://github.com/isaacs/npm/issues>
        npm ERR! or email it to:
        npm ERR!     <[email protected]>
        npm ERR! System Windows_NT 6.1.7601
        npm ERR! command "C:\\Program Files\\nodejs\\\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "install" "-d"
        npm ERR! cwd D:\nodeapp
        npm ERR! node -v v0.10.8
        npm ERR! npm -v 1.2.23
        npm ERR! code 128

    Just running git clone as the same account works fine. Any ideas?

    Read the article

  • How to fetch and populate backbone model for Google Places JS API?

    - by code-gijoe
    I'm implementing a system that requires access to the Google Places JS API. I've been using Rails for most of the project, but now I want to inject a bit of Ajax into one of my views. Basically it is a view that displays places near your location, and for this I'm using the JS API of Google Places. A quick workflow would be:

    1. The user inputs a text query and hits enter.
    2. There is an Ajax call to request data from the Google Places API.
    3. The successful result is presented to the user.

    The problem is primarily in step 2. I want to use Backbone for this, but when I create a Backbone model, it makes its request to the 'rootURL'. This wouldn't be a problem if the requests to Places were done from the server, but they are not. A Places call is done like this:

        service = new google.maps.places.PlacesService(map);
        service.nearbySearch(request, callback);

    passing a callback function:

        function callback(results, status) {
          if (status == google.maps.places.PlacesServiceStatus.OK) {
            for (var i = 0; i < results.length; i++) {
              var place = results[i];
              createMarker(results[i]);
            }
          }
        }

    Is it possible to override the 'fetch' method in the Backbone model and populate the model with the successful Places result? Is this a bad idea?
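
    One hedged way to do it is to override sync on a collection so that fetch() goes to the Places library instead of the server; the names below and the assumption that a `map` instance already exists are mine, not Google's or Backbone's documented pattern:

        var PlaceModel = Backbone.Model.extend({});

        var PlacesCollection = Backbone.Collection.extend({
            model: PlaceModel,

            sync: function (method, collection, options) {
                if (method !== 'read') return;        // only reads make sense for this source
                var service = new google.maps.places.PlacesService(map);
                service.nearbySearch(options.request, function (results, status) {
                    if (status === google.maps.places.PlacesServiceStatus.OK) {
                        options.success(results);     // Backbone resets/sets the collection with these
                    } else if (options.error) {
                        options.error(status);
                    }
                });
            }
        });

        // Usage:
        // var places = new PlacesCollection();
        // places.fetch({ request: { location: loc, radius: 500, keyword: query } });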

    Read the article

  • Is this method of static file serving safe in node.js? (potential security hole?)

    - by MikeC8
    I want to create the simplest node.js server to serve static files. Here's what I came up with:

        fs = require('fs');
        server = require('http').createServer(function(req, res) {
          res.end(fs.readFileSync(__dirname + '/public/' + req.url));
        });
        server.listen(8080);

    Clearly this would map http://localhost:8080/index.html to project_dir/public/index.html, and similarly for all other files. My one concern is that someone could abuse this to access files outside of project_dir/public. Something like this, for example:

        http://localhost:8080/../../sensitive_file.txt

    I tried this a little bit, and it wasn't working. But it seems like my browser was removing the ".." itself, which leads me to believe that someone could still abuse my poor little node.js server. I know there are npm packages that do static file serving, but I'm actually curious to write my own here. So my questions are: Is this safe? If so, why? If not, why not? And, further, if not, what is the "right" way to do this? My one constraint is that I don't want an if clause for each possible file; I want the server to serve whatever files I throw in a directory.
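
    A sketch of one common safeguard - resolve the requested path and refuse anything that escapes the public root - rather than a complete or hardened static file server:

        var fs = require('fs');
        var path = require('path');

        var root = path.join(__dirname, 'public');

        var server = require('http').createServer(function (req, res) {
            // decode %2e-style encodings and strip any query string before resolving
            var urlPath = decodeURIComponent(req.url.split('?')[0]);
            var filePath = path.normalize(path.join(root, urlPath));

            if (filePath.indexOf(root) !== 0) {       // the path escaped the public directory
                res.statusCode = 403;
                return res.end('Forbidden');
            }

            fs.readFile(filePath, function (err, data) {
                if (err) {
                    res.statusCode = 404;
                    return res.end('Not found');
                }
                res.end(data);
            });
        });

        server.listen(8080);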

    Read the article

  • Microsoft Sql Server driver for Nodejs - Part 2

    - by chanderdhall
    Nodejs, Sql server and Json response with Rest. This post is part 2 of Microsoft Sql Server driver for Node js. In this post we will look at the JSON responses from the Microsoft Sql Server driver for Node js.

    Pre-requisites: If you have read Part 1 of the series, you should be good. We will be using a framework for REST within Nodejs - Restify - but that needs no prior learning.

    Restify: Restify is a simple node module for building RESTful services. It is slimmer than Express. Express is a complete module that has everything you need to create a full-blown browser app. However, Restify does not have the additional overhead of templating, rendering etc. that would be needed if your app has views. So, as the name suggests, it's an awesome framework for building RESTful services and is very light-weight.

    Set up: You can continue with the same directory or project structure we had in the previous post, or start a new one. Install restify using npm and you are good to go.

        npm install restify

    Go to Server.js and include Restify in your solution. Then create the server object using restify.createServer() - SLICK - ha?

        var restify = require('restify');
        var server = restify.createServer();
        server.listen(8080, function () {
            console.log('%s listening at %s', server.name, server.url);
        });

    Make sure you provide a port for the server to listen at. The callback function is optional but helps you for debugging purposes. Once you are done, save the file, go to the command prompt, hit 'node server.js' and you should see the server start up.

    To test the server, go to your browser and type the address 'http://localhost:8080/' and - oops - you will see an error. Why is that? Well, because we haven't defined any routes. Let's go ahead and create a route. To begin with, I'd like to return whatever is typed in the url after my name, and the following code should do it:

        server.get('/ChanderDhall/:name', function respond(req, res, next) {
            res.end("hello " + req.params.name + "");
        });

    You can also avoid writing callbacks inline. Something like this:

        function respond(req, res, next) {
            res.end("Chander Dhall " + req.params.name + "");
        }
        server.get('/hello/:name', respond);

    Now if you go ahead and type http://localhost:8080/ChanderDhall/LovesNode you will get the response 'Chander Dhall loves node'. NOTE: Make sure your url has the right case, as it's case-sensitive. You could have also typed it in as 'server.get('/chanderdhall/:name', respond);'.

    Stored procedure: We've talked a lot about Restify now, but keep in mind this post is about being able to use Sql server with Node and return JSON. To see this in action, let's go ahead and create another route to return a list of Employees from a stored procedure.

        server.get('/Employees', Employees);

    The following code will return a JSON response:

        function Employees(req, res, next) {
            res.header("Content-Type: application/json"); // Need to specify the Content-Type, which is JSON in our case.
            sql.open(conn_str, function (err, conn) {
                if (err) {
                    // Logs an error
                    console.log("Error opening the database connection!");
                    return;
                }
                console.log("before query!");
                conn.queryRaw("exec sp_GetEmployees", function (err, results) {
                    if (err) {
                        // Connection is open but an error occurs while

    What else can be done? Maybe create a formatter, or maybe even come up with a hypermedia type, but that may upset some pragmatists. Well, that's going to be a totally different discussion and is really not part of this series.
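
    A sketch of how the truncated Employees handler might be completed - this is not the author's original code; it assumes the msnodesql result shape of { meta: [...], rows: [...] }, where each meta entry carries the column name, and that sql and conn_str are set up as in part 1:

        function Employees(req, res, next) {
            sql.open(conn_str, function (err, conn) {
                if (err) {
                    console.log("Error opening the database connection!");
                    return next(err);
                }
                conn.queryRaw("exec sp_GetEmployees", function (err, results) {
                    if (err) {
                        // Connection is open but an error occurs while running the query
                        console.log("Error running the query!");
                        return next(err);
                    }
                    // Map each row to an object keyed by column name, then send it as JSON.
                    var employees = results.rows.map(function (row) {
                        var employee = {};
                        results.meta.forEach(function (col, i) {
                            employee[col.name] = row[i];
                        });
                        return employee;
                    });
                    res.header('Content-Type', 'application/json');
                    res.send(employees);    // restify serializes the object to JSON
                    return next();
                });
            });
        }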
    Summary: We've discussed how to execute a stored procedure using the Microsoft Sql Server driver for Node. We have also discussed how to format and send out a clean JSON response to the app calling this API.

    Read the article

  • 2 Servers 1 Database - Can I use Redis?

    - by Aust
    Ok, I have a couple of questions here. First let me give you some background information. I'm starting a project where I have a node.js server running my application, and my website running on another, normal server. My application will allow multiple users simultaneous connections and updates to the database, so Redis seemed like a good fit there because of its speed and atomic functions. For someone to access my application, they have to log in with an account. To get an account, they have to sign up for one through my website. So my website needs a database, but it's not important to have a database like Redis here, because it doesn't need it. Which leads me to my first question:

    1. Can Redis even be used without node.js? It seems like it would be convenient if both of my servers were using the same database to keep track of information. In some cases they will keep track of the same information (such as user information), and in other cases they will keep track of separate information. So even if the website wouldn't be taking full advantage of all that Redis has to offer, it seems like it would be more convenient. Assuming Redis could be used in this situation, that leads to my next question:

    2. Since Redis is accessed through JavaScript, how would I handle security for my website users? What would stop my website users from opening Firebug or Chrome's inspector and making changes to the database? Maybe I could design my site with a layout like this: apply.php - update.php - home.php, where after they submitted their form it would redirect them to the update page where the JavaScript would run, and then redirect them to the home page after the database updated. I don't really know; I'm just taking shots in the dark at this point. :)

    Maybe a better alternative would be to have my node.js application access its own Redis database and also have access to another MySQL database that my website also has access to. Or maybe there is another database better suited for this situation than Redis. Anyway, any direction on this matter would be greatly appreciated. :)

    Read the article

  • Rotate sphere in Javascript / three.js while moving on x/z axes

    - by kaipr
    I have a sphere/ball in three.js which I want to "roll" around on the x/z axes. For the z axis I could simply do this, no matter what the current x and y rotation is:

        sphere.roll_z = function(distance) {
          sphere.position.z += distance;
          sphere.rotation.x += distance > 0 ? 0.05 : -0.05;
        }

    But how can I roll it along the x axis? And how could I do roll_z properly? I've found a lot about quaternions and matrices, but I can't figure out how to use them properly to achieve my (rather simple) goal. I'm aware that I have to update multiple rotations and that I have to calculate how far to rotate the sphere to match the distance, but the "how" is the question. It's probably just a lack of mathematical skills, which I should train, but a working example/short explanation would help a lot to start with.
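
    A hedged sketch of both roll directions, assuming the ground is the x/z plane (y up) and the sphere's radius is known - rolling without slipping means the rotation angle is distance / radius:

        var radius = 1;  // assumed

        sphere.roll_x = function (distance) {
            sphere.position.x += distance;
            sphere.rotation.z -= distance / radius;   // rolling along +x spins the ball about -z
        };

        sphere.roll_z = function (distance) {
            sphere.position.z += distance;
            sphere.rotation.x += distance / radius;   // rolling along +z spins the ball about +x
        };

    Euler angles start to fight each other once the ball rolls in arbitrary directions; a quaternion applied about the world axis perpendicular to the motion avoids that (exact API names vary a little between three.js releases):

        sphere.rollOnGround = function (dx, dz) {
            var distance = Math.sqrt(dx * dx + dz * dz);
            if (distance === 0) return;
            sphere.position.x += dx;
            sphere.position.z += dz;
            // rotation axis lies in the ground plane, perpendicular to the motion
            var axis = new THREE.Vector3(dz, 0, -dx).normalize();
            var q = new THREE.Quaternion().setFromAxisAngle(axis, distance / radius);
            sphere.quaternion.multiplyQuaternions(q, sphere.quaternion);
        };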

    Read the article

  • mocha testing for the lazies, single key-press for all possible tests

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like:

        Test. [U]nit, [I]ntegration : i                (user input)
        Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u
        2 User Interaction. Running "mocha integration\2userint.js" ...

    So essentially I have configured a batch "option" for each test file I have, which I can choose to run individually or all together. But adding and removing tests is a pain. Is there something that does this automatically? Like something that reads all the files and asks me which file(s) I want to test. A GUI with checkboxes would be ultimate! But I'll take anything. I'm working in node.js.

    Read the article

  • Software, script or a tool to automate managing which tests to run

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like:

        Test. [U]nit, [I]ntegration : i                (user input)
        Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u
        2 User Interaction. Running "mocha integration\2userint.js" ...

    So essentially I have configured a batch "option" for each test file I have, which I can choose to run individually or all together. But adding and removing tests is a pain: I have to update the batch file every time a new file is added or changed. Is there a software, script or tool that does this automatically, or makes it easier for me to do so? I basically need it to be aware of the files and ask me which file(s) I want to test. A GUI with checkboxes would be ultimate! But I'll take anything. I'm working in node.js.
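
    A hedged sketch of a self-maintaining runner in node.js: it lists whatever test files exist, prompts once, and spawns mocha. The directory layout and file-name filter are assumptions, and on Windows the spawned command may need to be "mocha.cmd":

        var fs = require('fs');
        var path = require('path');
        var readline = require('readline');
        var spawn = require('child_process').spawn;

        var dir = path.join(__dirname, 'integration');
        var files = fs.readdirSync(dir).filter(function (f) { return /\.js$/.test(f); });

        files.forEach(function (f, i) {
            console.log('[' + (i + 1) + '] ' + f);
        });

        var rl = readline.createInterface({ input: process.stdin, output: process.stdout });
        rl.question('Which test? ([a]ll or a number) ', function (answer) {
            rl.close();
            var targets = answer === 'a' ? files : [files[parseInt(answer, 10) - 1]];
            var args = targets.map(function (f) { return path.join(dir, f); });
            spawn('mocha', args, { stdio: 'inherit' });   // use 'mocha.cmd' on Windows
        });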

    Read the article

  • Today is my first day in the land of backbone.js

    - by Andrew Siemer - www.andrewsiemer.com
    I am semi-excited to say that today is my first day in the land of backbone.js. This will of course take me into many other new javascript-y areas. As I have primarily been focused on building backend systems for the past many years - with no focus on client side bits - this will be all new ground for me. Very exciting! I am sure that this endeavor will lead to writing about many new findings along the way. Expect the subject of near-future postings not to be related to MVC or server side code. I am starting this journey by reading through the online book "Backbone Fundamentals": http://addyosmani.com/blog/backbone-fundamentals/. Has anyone read this yet? Any feedback on that title? I have read through Derrick Bailey's thoughts here and here - also very good. Any suggestions on other nuggets for learning backbone?

    Read the article

  • Why do node packages put a comma on a newline?

    - by SomeKittens
    I'm learning node.js and am trying out Express. My first app had this code:

        var express = require('express')
          , routes = require('./routes')
          , user = require('./routes/user')
          , http = require('http')
          , path = require('path');

    Reading through the mongoose tutorial gives me this:

        var mongoose = require('mongoose')
          , db = mongoose.createConnection('localhost', 'test');

    In strict mode, JSHint gives me:

        app.js: line 6, col 32, Bad line breaking before ','.

    Which shows that I'm not the only one out there who's bugged by this syntax. Is there any reason to declare vars this way instead of adding the comma at the end of the line?
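
    For contrast, a sketch of the same declarations in the trailing-comma style that JSHint accepts without the laxcomma option; the usual argument for comma-first is that a forgotten comma is easier to spot at the start of a line:

        var express = require('express'),
            routes = require('./routes'),
            user = require('./routes/user'),
            http = require('http'),
            path = require('path');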

    Read the article

  • JS / HTML 5 Compatibility issue on iOS 6

    - by Dhaval
    I'm using HTML 5 to play video, and there is some content before the video, so I'm using flexroll to scroll the whole window. I'm checking it on an iPad. Now the problem is that on iOS 5 it works fine, but when I update to iOS 6 the screen does not scroll: only the video scrolls up and down, and the content stays where it is. I can't understand what the exact problem is. Is it a JS compatibility issue or an HTML 5 video compatibility issue? Can anyone please help me figure it out? Your help will really be appreciated.

    Read the article

  • 3d js map rendering

    - by gotha
    In the past I've done a 2D tile map using HTML, CSS and Javascript. Now I have the task of creating a 3D version using the same technologies - think of it like a space map where all planets have x/y/z positions. Currently, I have no idea how to do this. Is there an existing library or something I can modify to do the job? If not, what method of rendering the map should I use? It needs to be as browser-independent as possible, so I can't use WebGL, Flash or canvas. I'm considering plain JS & HTML, or SVG (using Raphael for compatibility).

    Read the article

  • Using Minified Page Specific JS [migrated]

    - by Mike C
    I've been working on a rather large-scale project which makes use of a number of different pages with some very specific Javascript for each of them. To lessen load times, I plan to minify it all into one file before deploying. The problem is this: how should I avoid launching page-specific JS on pages which don't require it? So far my best solution has been to wrap each page in an additional container:

        <div id='some_page'> ...everything else... </div>

    and I extended jQuery so I can do something like this:

        // If this element exists when the DOM is ready, execute the function
        $('#some_page').ready(function() { ... });

    Which, while kind of cool, just rubs me the wrong way.
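
    One hedged alternative is to key the page-specific setup off a data attribute on the body element instead of wrapper divs; the attribute name and page keys here are placeholders:

        <!-- in the page markup -->
        <body data-page="reports">

        // in the single minified file
        var pageInits = {
            reports:  function () { /* reports-only wiring */ },
            settings: function () { /* settings-only wiring */ }
        };

        $(function () {
            var page = $('body').data('page');
            if (pageInits[page]) {
                pageInits[page]();
            }
        });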

    Read the article

  • JS framework with conditionally loaded fragments

    - by kjs3
    I'm doing a single-page, responsive, mobile-first design. I found this article about conditionally loaded fragments, but am wondering what the different JS frameworks have built in to handle this. I'm imagining the mobile version with a list view fragment that transitions to a show view fragment. A larger portal could just show both fragments and change the show fragment when items in the list are clicked. I'd love thoughts on what is available from the various frameworks, not an argument. Ember? Angular? etc. Maybe I'm missing it, but I'm not seeing the responsive-fragments issue brought up in the various demos I've found so far.

    Read the article

  • Using Node.js as an accelerator for WCF REST services

    - by Elton Stoneman
    Node.js is a server-side JavaScript platform "for easily building fast, scalable network applications". It's built on Google's V8 JavaScript engine and uses an (almost) entirely async event-driven processing model, running in a single thread. If you're new to Node and your reaction is "why would I want to run JavaScript on the server side?", this is the headline answer: in 150 lines of JavaScript you can build a Node.js app which works as an accelerator for WCF REST services*. It can double your messages-per-second throughput, halve your CPU workload and use one-fifth of the memory footprint, compared to the WCF services direct.

    Well, it can if: 1) your WCF services are first-class HTTP citizens, honouring client cache ETag headers in request and response; 2) your services do a reasonable amount of work to build a response; 3) your data is read more often than it's written.

    In one of my projects I have a set of REST services in WCF which deal with data that only gets updated weekly, but which can be read hundreds of times an hour. The services issue ETags and will return a 304 if the client sends a request with the current ETag, which means in the most common scenario the client uses its local cached copy. But when the weekly update happens, then all the client caches are invalidated and they all need the same new data. Then the service will get hundreds of requests with old ETags, and they go through the full service stack to build the same response for each, taking up threads and processing time. Part of that processing means going off to a database on a separate cloud, which introduces more latency and downtime potential.

    We can use ASP.NET output caching with WCF to solve the repeated processing problem, but the server will still be thread-bound on incoming requests, and getting the current ETags reliably needs a database call per request. The accelerator solves that by running as a proxy - all client calls come into the proxy, and the proxy routes calls to the underlying REST service. We could use Node as a straight passthrough proxy and expect some benefit, as the server would be less thread-bound, but we would still have one WCF and one database call per proxy call. But add some smart caching logic to the proxy, and share ETags between Node and WCF (so the proxy doesn't even need to call the service to get the current ETag), and the underlying service will only be invoked when data has changed, and then only once - all subsequent client requests will be served from the proxy cache.

    I've built this as a sample up on GitHub: NodeWcfAccelerator on sixeyed.codegallery. Here's how the architecture looks:

    The code is very simple. The Node proxy runs on port 8010 and all client requests target the proxy. If the client request has an ETag header, the proxy looks up the ETag in the tag cache to see if it is current - the sample uses memcached to share ETags between .NET and Node. If the ETag from the client matches the current server tag, the proxy sends a 304 response with an empty body to the client, telling it to use its own cached version of the data. If the ETag from the client is stale, the proxy looks for a local cached version of the response, checking for a file named after the current ETag. If that file exists, its contents are returned to the client as the body in a 200 response, which includes the current ETag in the header.
    If the proxy does not have a local cached file for the service response, it calls the service, and writes the WCF response to the local cache file and to the body of a 200 response for the client. So the WCF service is only troubled if both client and proxy have stale (or no) caches.

    The only (vaguely) clever bit in the sample is using the ETag cache so the proxy can serve cached requests without any communication with the underlying service, which it does completely generically, so the proxy has no notion of what it is serving or what the services it proxies are doing. The relative path from the URL is used as the lookup key, so there's no shared key-generation logic between .NET and Node, and when WCF stores a tag it also stores the "read" URL against the ETag so it can be used for a reverse lookup, e.g.:

        Key                                                  Value
        /WcfSampleService/PersonService.svc/rest/fetch/3     "28cd4796-76b8-451b-adfd-75cb50a50fa6"
        "28cd4796-76b8-451b-adfd-75cb50a50fa6"               /WcfSampleService/PersonService.svc/rest/fetch/3

    In Node we read the cache using the incoming URL path as the key, and we know that "28cd4796-76b8-451b-adfd-75cb50a50fa6" is the current ETag; we look for a local cached response in /caches/28cd4796-76b8-451b-adfd-75cb50a50fa6.body (and the corresponding .header file, which contains the original service response headers, so the proxy response is exactly the same as the underlying service). When the data is updated, we need to invalidate the ETag cache - which is why we need the reverse lookup in the cache. In the WCF update service, we don't need to know the URL of the related read service - we fetch the entity from the database, do a reverse lookup on the tag cache using the old ETag to get the read URL, update the new ETag against the URL, store the new reverse lookup and delete the old one.

    Running Apache Bench against the two endpoints gives the headline performance comparison. Making 1000 requests with a concurrency of 100, and not sending any ETag headers in the requests: with the Node proxy I get 102 requests handled per second, an average response time of 975 milliseconds, with 90% of responses served within 850 milliseconds; going direct to WCF with the same parameters, I get 53 requests handled per second, a mean response time of 1853 milliseconds, with 90% of responses served within 3260 milliseconds. Informally monitoring server usage during the tests, Node maxed at 20% CPU and 20Mb memory; IIS maxed at 60% CPU and 100Mb memory.

    Note that the sample WCF service does a database read and sleeps for 250 milliseconds to simulate a moderate processing load, so this is *not* a baseline Node-vs-WCF comparison, but for similar scenarios where the service call is expensive but applicable to numerous clients for a long timespan, the performance boost from the accelerator is considerable.

    * - actually, the accelerator will work nicely for any HTTP request where the URL (path + querystring) uniquely identifies a resource. In the sample, there is an assumption that the ETag is a GUID wrapped in double-quotes (e.g. "28cd4796-76b8-451b-adfd-75cb50a50fa6") - which is the default for WCF services. I use that assumption to name the cache files uniquely, but it is a trivial change to adapt to other ETag formats.
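
    A condensed sketch of the proxy flow described above - not the code from the GitHub sample; the memcached module, ports and cache-file layout are assumptions:

        var http = require('http');
        var fs = require('fs');
        var Memcached = require('memcached');

        var tags = new Memcached('localhost:11211');        // shared ETag cache
        var backend = { host: 'localhost', port: 80 };      // the WCF host

        http.createServer(function (req, res) {
            tags.get(req.url, function (err, currentTag) {
                // 1. Client already has the current version: short-circuit with a 304.
                if (currentTag && req.headers['if-none-match'] === currentTag) {
                    res.writeHead(304, { 'ETag': currentTag });
                    return res.end();
                }
                // 2. Proxy has a cached body for the current tag: serve it without calling WCF.
                var cacheFile = __dirname + '/caches/' + String(currentTag).replace(/"/g, '') + '.body';
                if (currentTag && fs.existsSync(cacheFile)) {
                    res.writeHead(200, { 'ETag': currentTag, 'Content-Type': 'application/json' });
                    return fs.createReadStream(cacheFile).pipe(res);
                }
                // 3. Fall through to the real service and cache whatever comes back.
                http.get({ host: backend.host, port: backend.port, path: req.url, headers: req.headers },
                    function (serviceRes) {
                        var body = '';
                        serviceRes.on('data', function (chunk) { body += chunk; });
                        serviceRes.on('end', function () {
                            var tag = serviceRes.headers.etag;
                            if (tag) {
                                tags.set(req.url, tag, 0, function () {});
                                fs.writeFile(__dirname + '/caches/' + tag.replace(/"/g, '') + '.body', body, function () {});
                            }
                            res.writeHead(serviceRes.statusCode, serviceRes.headers);
                            res.end(body);
                        });
                    });
            });
        }).listen(8010);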

    Read the article

  • How do I change hyperlink colours in LibreOffice Impress?

    - by Marita Moll
    I have a lot of PowerPoint slides that I converted to LibreOffice Impress. The resulting hyperlinks are very faded and very hard to see. I can't seem to find any way to change the colour of hyperlinks as a whole, and any colour change I make on an individual URL link does not hold when converted back to .ppt, which is sometimes necessary. I have tried the Tools > Options > LibreOffice > Appearance route, but it only seems to affect the very first hyperlink in the slide set.

    Read the article

  • Rendering Flickr Cats Via Backbone.js

    - by Geertjan
    Create a JavaScript file and refer to it inside an HTML file. Then put this into the JavaScript file:

        (function($) {
            var CatCollection = Backbone.Collection.extend({
                url: 'http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&tagmode=any&format=json&jsoncallback=?',
                parse: function(response) {
                    return response.items;
                }
            });
            var CatView = Backbone.View.extend({
                el: $('body'),
                initialize: function() {
                    _.bindAll(this, 'render');
                    carCollectionInstance.fetch({
                        success: function(response, xhr) {
                            catView.render();
                        }
                    });
                },
                render: function() {
                    $(this.el).append("<ul></ul>");
                    for (var i = 0; i < carCollectionInstance.length; i++) {
                        $('ul', this.el).append("<li>" + i + carCollectionInstance.models[i].get("description") + "</li>");
                    }
                }
            });
            var carCollectionInstance = new CatCollection();
            var catView = new CatView();
        })(jQuery);

    Apologies for any errors or misused idioms. It's my second day with Backbone.js - in fact, my second day with JavaScript. I haven't seen a full example like the above anywhere online so far, though plenty that do pieces of it, or explain it in text, without an actual complete example. The next step, and the only reason for the above experiment, is to create some JPA entities and expose them via RESTful web services created on EJB methods, for consumption into an HTML5 application via a Backbone.js script very similar to the above.

    Read the article
