Search Results

Search found 7724 results on 309 pages for 'backbone js'.

Page 17 of 309

  • Getting Classic ASP to work in .js files under IIS 7

    - by Abdullah Ahmed
    I am moving a client's classic ASP webapp to a new IIS7-based server. The site contains some .js files which hold JavaScript but also classic ASP in <% %> tags, with a bunch of conditional statements designed to spit out pieces of JavaScript based on session state variables. Here's a brief example of what such a file might look like: var arrHOFFSET = -1; var arrLeft ="<"; var arrRight = ">"; <% If ((Session("dashInv") = "True") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4"))) Then %> addMainItem("/MgmtTools/WelcomeInventory.asp?wherefrom=salesMan","",81,"center","","",0,0,"","","","",""); <% Else %> <% If (Session("dashInv") = "False") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4")) Then %> <% Else %> addMainItem("/calendar/welcome.asp","",81,"center","","",0,0,"","","","",""); <% End If %> <% End If %> defineSubmenuProperties(135,"center","center",-3,0,"","","","","","",""); Currently this file (named custom.js, for example) will start throwing JS errors, because the server doesn't recognize the ASP code in it and therefore does not parse it. I know I need to somehow specify that a .js file should also be treated like an .asp file and run through the ASP parser, but I am not sure how to go about doing this. Here is what I've tried so far. Under the server node in IIS, under Handler Mappings, I created a new Script Map with the following settings: Request Path: *.js; Executable: C:\Windows\System32\inetsrv\asp.dll; Name: ASPClassicInJSFiles; Mapping: Invoke handler only if request is mapped to: File; Verbs: All verbs; Access: Script. I also created a similar handler under the site node itself. Under MIME Types, .js is defined as application/x-javascript. None of these work. If I simply rename the file to have an .asp extension then things work, however this app is poorly coded and has literally hundreds of files that include the .js files under various names and locations, so rename/search-and-replace is the last option I have.
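
    A handler mapping like the one described above can also be added declaratively in web.config. The sketch below is an assumption of what that entry would look like with the settings from the question (it is not taken from the original post), and it only helps if the Classic ASP role service is actually installed on the new server, which is worth double-checking first.

    ```xml
    <!-- Sketch only: web.config equivalent of the Script Map described above. -->
    <!-- Assumes the Classic ASP feature is installed and asp.dll is at its default path. -->
    <configuration>
      <system.webServer>
        <handlers>
          <add name="ASPClassicInJSFiles"
               path="*.js"
               verb="*"
               modules="IsapiModule"
               scriptProcessor="%windir%\system32\inetsrv\asp.dll"
               resourceType="File"
               requireAccess="Script" />
        </handlers>
      </system.webServer>
    </configuration>
    ```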

    Read the article

  • Compress with Gzip or Deflate my CSS & JS files

    - by muhammad usman
    I have a fashion website using WordPress. I want to compress (gzip or deflate) my CSS & JS files. I have tried many snippets in .htaccess but none of them work. Would anybody help me please? My phpinfo is http://deemasfashion.co.uk/1.php and below are the snippets I have tried, none of which work. A few of them might look the same, but there are differences in the syntax. <ifModule mod_gzip.c> mod_gzip_on Yes mod_gzip_dechunk Yes mod_gzip_item_include file .(html?|txt|css|js|php|pl)$ mod_gzip_item_include handler ^cgi-script$ mod_gzip_item_include mime ^text/.* mod_gzip_item_include mime ^application/x-javascript.* mod_gzip_item_exclude mime ^image/.* mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.* </ifModule> Another snippet I have tried, with no luck: <files *.css> SetOutputFilter DEFLATE </files> <files *.js> SetOutputFilter DEFLATE </files> I have also tried this one, with no success: <ifModule mod_gzip.c> mod_gzip_on Yes mod_gzip_dechunk Yes mod_gzip_item_include file \.(html?|txt|css|js|php|pl)$ mod_gzip_item_include handler ^cgi-script$ mod_gzip_item_include mime ^text/.* mod_gzip_item_include mime ^application/x-javascript.* mod_gzip_item_exclude mime ^image/.* mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.* </ifModule> This one is also not working: <FilesMatch "\.(html?|txt|css|js|php|pl)$"> SetOutputFilter DEFLATE </FilesMatch> Here is another that is not working: <ifmodule mod_deflate.c> AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript </ifmodule> Here is another that is not working: <IFModule mod_deflate.c> <filesmatch "\.(js|css|html|jpg|png|php)$"> SetOutputFilter DEFLATE </filesmatch> </IFModule> Here is another that is not working: <IfModule mod_deflate.c> AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/x-javascript text/javascript application/javascript application/json <FilesMatch "\.(css|js)$" > SetOutputFilter DEFLATE </FilesMatch> </IfModule> Here is another that is not working: #Gzip - compress text, html, javascript, css, xml <ifmodule mod_deflate.c> AddOutputFilterByType DEFLATE text/plain AddOutputFilterByType DEFLATE text/html AddOutputFilterByType DEFLATE text/xml AddOutputFilterByType DEFLATE text/css AddOutputFilterByType DEFLATE application/xml AddOutputFilterByType DEFLATE application/xhtml+xml AddOutputFilterByType DEFLATE application/rss+xml AddOutputFilterByType DEFLATE application/javascript AddOutputFilterByType DEFLATE application/x-javascript </ifmodule> #End Gzip Here is another that is not working: <Location /> SetOutputFilter DEFLATE SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary SetEnvIfNoCase Request_URI \.(?:exe|t?gz|zip|gz2|sit|rar)$ no-gzip dont-vary </Location>
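
    Every variant above depends on a server module (mod_gzip, mod_deflate, mod_setenvif) that shared hosts often leave disabled, so the usual reason "nothing works" is that the module is simply not loaded, which only the host or httpd.conf can change. A minimal consolidated attempt, assuming mod_deflate and mod_setenvif are available, might look like this (a sketch, not taken from the question):

    ```apache
    # Sketch only: requires mod_deflate (and mod_setenvif for the exclusion line).
    <IfModule mod_deflate.c>
      AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css
      AddOutputFilterByType DEFLATE text/javascript application/javascript application/x-javascript
      # Skip content that is already compressed
      SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png|zip|gz)$ no-gzip dont-vary
    </IfModule>
    ```

    Whether a response is actually compressed is easy to verify from the Content-Encoding: gzip response header in the browser's network panel, which is a more reliable check than page speed alone.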

    Read the article

  • Introducing ClassObject.js: a JavaScript class-construction framework, by Abraham Tewa

    Hello, I would like to share an article on ClassObject, a JavaScript class-construction framework developed by yours truly. This framework makes it simple to create classes with public, protected and private attributes and methods, static (or not), constant (or not), while also supporting inheritance. You can post your comments about the article ClassObject.js: a JavaScript class-construction framework in this discussion. Thanks to all....

    Read the article

  • JS closures - Passing a function to a child, how should the shared object be accessed

    - by slicedtoad
    I have a design and am wondering what the appropriate way to access variables is. I'll demonstrate with this example since I can't seem to describe it better than the title. Term is an object representing a bunch of time data (a repeating duration of time defined by a bunch of attributes). Term has some print functionality but does not implement the print functions itself; rather, they are passed in as anonymous functions by the parent. This would be similar to how shaders can be passed to a renderer rather than defined by the renderer. A container (let's call it Box) has a Schedule object that can understand and use Term objects. Box creates Term objects and passes them to Schedule as required. Box also defines the print functions stored in Term. A print function usually takes an argument and uses it to return a string based on that argument and Term's internal data. Sometimes the print function could also use data stored in Schedule, though. I'm calling this data shared. So, the question is, what is the best way to access this shared data? I have a lot of options since JS has closures, and I'm not familiar enough to know if I should be using them or avoiding them in this case. Options: (1) Create a local "reference" (term used lightly) to the shared data (the data is not a primitive) when defining the print function, by accessing the shared data through Schedule from Box. Example: var schedule = function(){ var sched = Schedule(); var t1 = Term( function(x){ /* Term.print() */ return (x + sched.data).format(); }); }; (2) Bind it to Term explicitly (pass it in Term's constructor or something), or bind it in Sched after Box passes it, and then access it as an attribute of Term. (3) Pass it in at the same time x is passed to the print function (from sched). This is the most familiar way for me, but it doesn't feel right given JS's closure ability. (4) Do something weird like bind some context and arguments to print. I'm hoping the correct answer isn't purely subjective. If it is, then I guess the answer is just "do whatever works". But I feel like there are some significant differences between the approaches that could have a large impact when stretched beyond my small example.
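
    The core trade-off between option 1 and option 3 is where the dependency on the shared data is visible: captured silently by a closure, or spelled out in the print function's signature. A minimal, self-contained sketch of the two (the names below are made up for illustration, not the question's real API):

    ```javascript
    // Option 1: the print function closes over the shared data when it is created.
    function makeClosurePrinter(shared) {
      return function (x) {
        return x + shared.offset;              // `shared` is captured by the closure
      };
    }

    // Option 3: the caller (the Schedule, in the question's terms) passes the shared data on each call.
    function explicitPrinter(x, shared) {
      return x + shared.offset;
    }

    var shared = { offset: 10 };
    var print1 = makeClosurePrinter(shared);

    console.log(print1(5));                    // 15 - shared data came from the closure
    console.log(explicitPrinter(5, shared));   // 15 - shared data supplied at call time
    ```

    Because objects are passed by reference, both versions see later changes to shared.offset; the closure version just hides who depends on it, while the explicit version keeps Term's print functions testable in isolation.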

    Read the article

  • Which game engine for HTML5 + Node.js

    - by Chrene
    I want to create a realtime multiplayer game using HTML5. I want to use node.js as the server, and I only need to be able to render images on a canvas, play some sounds, and do some basic animations. The game loop should run on the server, and the client should call back via sockets to render the canvas. I am not going to spend any money on the engine, and I don't want to use cocos2d-javascript.
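
    Independently of which engine is chosen, the server-authoritative loop described above usually boils down to the server ticking the simulation and broadcasting state over WebSockets, with the browser only drawing. A sketch using socket.io's 0.9-era API (the event names, tick rate and canvas size are illustrative, not from the question):

    ```javascript
    // server.js - the authoritative game loop lives here
    var io = require('socket.io').listen(8080);          // npm install socket.io
    var state = { x: 0, y: 0 };

    setInterval(function () {
      state.x += 1;                                       // advance the simulation
      io.sockets.emit('state', state);                    // push the new state to every client
    }, 1000 / 30);

    io.sockets.on('connection', function (socket) {
      socket.on('input', function (input) { /* apply player input to the simulation */ });
    });

    // client-side (after loading /socket.io/socket.io.js in the page):
    //   var socket = io.connect('http://localhost:8080');
    //   var ctx = document.getElementById('game').getContext('2d');
    //   socket.on('state', function (s) {
    //     ctx.clearRect(0, 0, 800, 600);
    //     ctx.fillRect(s.x, s.y, 10, 10);                // draw whatever the server says
    //   });
    ```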

    Read the article

  • Ember.js on Symfony2 dev environment does not work properly

    - by rkmax
    I've built an app with Symfony2; the app exposes a REST API. Now I am building a simple client to consume it. app.coffee - app.js App = Em.Application.create ready: -> @.entradas.load() Entrada: Em.Object.extend() entradas: Em.ArrayController.create content: [] load: -> url = 'http://localhost/api/1/entrada' me = @ $.ajax( url: url, method: 'GET', success: (data) -> me.set('content', []) for entrada in data.data.objects me.pushObject DBPlus.Entrada.create(entrada) ) MyBundle:Home:index.html.twig <script type="text/x-handlebars" src="{{ asset('js/templates/entradas.hbs') }}"></script> <script src="{{ asset('js/libs/jquery-1.7.2.min.js') }}"></script> <script src="{{ asset('js/libs/handlebars-1.0.0.beta.6.js') }}"></script> <script src="{{ asset('js/libs/ember-1.0.pre.min.js') }}"></script> <script src="{{ asset('js/app.js') }}"></script> The problem here is that when I run in the dev environment and link the template like <script type="text/x-handlebars" src="{{...}}">, the app doesn't work (nothing shows), but it works fine in the prod environment. The only way it works in the dev environment is an inline template: MyBundle:Home:index.html.twig <script type="text/x-handlebars"> {% raw %} <ul class="entradas"> {{#each App.entradas}} <li class="entrada">{{nombre}}</li> {{/each}} </ul> {% endraw %} </script> Can anyone explain this behaviour? Note: I disabled the debug profiler toolbar, and nothing.

    Read the article

  • min.js to clear source

    - by dole
    Hi there. As far as I know, the min version of a .js (JavaScript) file is obtained by removing the unnecessary blank spaces and comments, in order to reduce the file size. My questions are: How can I convert a min.js file into a clear, easily readable .js file? Besides size (and speed), are there any other advantages of the min.js file? Can .js files be encrypted? Can .js files be infected? I think the answer is yes, so the question is how to protect the .js files from infections? Only the first question is really important to me and I'm looking for help on it. TY
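
    On the first question: minifiers strip whitespace and comments, and most of them also rename local variables, so a beautifier can restore readable indentation but never the original names or comments. A sketch using the js-beautify npm package (the file names are placeholders):

    ```javascript
    // Sketch only: re-indent a minified file with js-beautify (npm install js-beautify).
    var fs = require('fs');
    var beautify = require('js-beautify').js;

    var minified = fs.readFileSync('some.min.js', 'utf8');
    fs.writeFileSync('some.readable.js', beautify(minified, { indent_size: 2 }));
    ```

    On the last question: a .js file is plain text, so it is "infected" only in the sense that an attacker who can write to your server or your build output can append malicious code to it, which is a file-permissions and deployment-integrity problem rather than a JavaScript one.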

    Read the article

  • Is it hacky to manually construct JSON and manually handle GET, POST instead of using a proper RESTful API for AJAX functionality?

    - by kliao
    I started building a Django app, but this probably applies to other frameworks as well. In Backbone.js methods that call the server (fetch(), create(), destroy(), etc.), should you be using a proper RESTful API such as one provided by Tastypie or Django-Piston? I've found it easier and more flexible to just construct the JSON in my Django views, which are mapped to some URLs that Backbone.js can use. Then again, I'm probably not leveraging Tastypie/Django-Piston functionality to the fullest. I'm not ready to make a full-fledged RESTful API for my app yet; I simply would like to use some of the AJAXy functionality that Backbone.js supports. What are the pros/cons of doing this?
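
    Backbone itself only cares that the URLs it hits return and accept JSON; it has no idea whether Tastypie, Piston or a hand-written Django view is on the other end. A sketch of the Backbone side talking to plain views (the /api/tasks/ URL and field names are hypothetical):

    ```javascript
    // Sketch only: Backbone models pointed at hand-written Django JSON views.
    var Task = Backbone.Model.extend({
      urlRoot: '/api/tasks/'          // fetch/save/destroy build their URLs from this (id appended)
    });

    var Tasks = Backbone.Collection.extend({
      model: Task,
      url: '/api/tasks/'
    });

    var tasks = new Tasks();
    tasks.fetch();                    // GET  /api/tasks/   -> the view must return a JSON array
    tasks.create({ title: 'hi' });    // POST /api/tasks/   -> the view should echo the created object back
    ```

    The main cons of hand-rolled views are the boilerplate that grows with each resource (serialization, pagination, auth, PUT/DELETE handling) and the lack of a consistent contract, which is exactly what Tastypie/Piston automate; for a handful of endpoints the hand-rolled approach is not hacky, just less scalable.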

    Read the article

  • How to integrate history.js to my ajax site?

    - by vzhen
    I have left and right divs on my website, with navigation buttons; let's say button_1 and button_2. Clicking these buttons changes the right div's content using AJAX, so the left div does nothing. The above is what I have done, but I have a problem here: the URL doesn't change. I found that history.js is able to solve this issue, but I cannot find any tutorial about history.js + AJAX. Can anyone help me out? Thanks in advance.
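
    The usual wiring is: on a navigation click, push the new URL into the history instead of letting the browser navigate, and do the actual AJAX load inside the statechange handler so that clicks and the back/forward buttons go through the same code path. A sketch assuming jQuery plus the jQuery-adapter build of History.js, where loadRightDiv() stands in for the AJAX code that already updates the right div:

    ```javascript
    // Sketch only: element selectors and loadRightDiv() are placeholders for the existing code.
    function loadRightDiv(url) {
      $('#right-div').load(url + ' #right-div > *');   // the existing AJAX update, keyed off the URL
    }

    // Navigation clicks update the address bar instead of loading a new page.
    $('.nav-button').on('click', function (e) {
      e.preventDefault();
      History.pushState(null, document.title, $(this).attr('href'));
    });

    // Fires after pushState above AND on back/forward, so every URL change loads content the same way.
    History.Adapter.bind(window, 'statechange', function () {
      loadRightDiv(History.getState().url);
    });
    ```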

    Read the article

  • How can I return JSON from node.js backend to frontend without reloading or re-rendering the page?

    - by poleapple
    I am working with node.js. I want to press a search button, make some rest api calls to another server in the backend and return the json back to the front end, and reload a div in the front end so that I won't have to refresh the page. Now I know I can reload just a div with jQuery or just Javascript dom manipulation. But how do I make it call a method on the server side? I could have a submit button and it will make a post request and I can catch it and make my api calls from there, however from the node.js side, when I return, I will have to render the page again. How do I go about returning JSON from the back end to the front end without re-rendering or refreshing my page? Thanks.
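
    The usual pattern is a dedicated route that answers with res.json() instead of res.render(), and a client-side handler that requests it and updates the div itself, so nothing on the page is re-rendered by the server. A sketch assuming Express on the server and jQuery on the client (callOtherRestApi and the element ids are hypothetical placeholders):

    ```javascript
    // --- server side: a JSON-only route, no res.render() involved ---
    app.get('/search', function (req, res) {
      callOtherRestApi(req.query.q, function (err, results) {   // placeholder for the backend REST calls
        if (err) return res.status(500).json({ error: 'upstream call failed' });
        res.json(results);                                       // send JSON back; the page is untouched
      });
    });

    // --- client side: fetch the JSON and update the div in place ---
    $('#search-button').on('click', function () {
      $.getJSON('/search', { q: $('#query').val() }, function (results) {
        var list = $('#results').empty();
        results.forEach(function (r) {
          list.append($('<li>').text(r.title));
        });
      });
    });
    ```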

    Read the article

  • How to implement CSRF protection in Ajax calls using express.js (looking for complete example)?

    - by Benjen
    I am trying to implement CSRF protection in an app built using node.js using the express.js framework. The app makes abundant use of Ajax post calls to the server. I understand that the connect framework provides CSRF middleware, but I am not sure how to implement it in the scope of client-side Ajax post requests. There are bits and pieces about this in other Questions posted here in stackoverflow, but I have yet to find a reasonably complete example of how to implement it from both the client and server sides. Does anyone have a working example they care to share?
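
    The overall shape is: the middleware puts a token in the session, the server renders that token once into the page (a meta tag works well), and every AJAX request sends it back in a header the middleware checks. A sketch of that flow; the exact token accessor differs between Connect/Express versions (older builds expose req.session._csrf, newer csurf-based setups use req.csrfToken()), so treat this as an outline rather than the one true API:

    ```javascript
    // server: enable the middleware and expose the token to the views (Express 3-style, sketch only)
    app.use(express.cookieParser());
    app.use(express.session({ secret: 'keyboard cat' }));
    app.use(express.csrf());
    app.use(function (req, res, next) {
      res.locals.csrfToken = req.session._csrf;   // or req.csrfToken() on newer versions
      next();
    });

    // in the layout template, render it once, e.g.:
    //   <meta name="csrf-token" content="{{ csrfToken }}">

    // client: attach the token to every AJAX request as a header the middleware recognises
    $.ajaxSetup({
      headers: { 'X-CSRF-Token': $('meta[name="csrf-token"]').attr('content') }
    });
    ```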

    Read the article

  • Using npm install as a MS-Windows system account

    - by Guss
    I have a node application running on Windows, which I want to be able to update automatically. When I run npm install -d as the Administrator account - it works fine, but when I try to run it through my automation software (that is running as local system), I get errors when I try to install a private module from a private git repository: npm ERR! git clone [email protected]:team/repository.git fatal: Could not change back to 'C:/Windows/system32/config/systemprofile/AppData/Roaming/npm-cache/_git-remotes/git-bitbucket-org-team-repository-git-06356f5b': No such file or directory npm ERR! Error: Command failed: fatal: Could not change back to 'C:/Windows/system32/config/systemprofile/AppData/Roaming/npm-cache/_git-remotes/git-bitbucket-org-team-repository-git-06356f5b': No such file or directory npm ERR! npm ERR! at ChildProcess.exithandler (child_process.js:637:15) npm ERR! at ChildProcess.EventEmitter.emit (events.js:98:17) npm ERR! at maybeClose (child_process.js:735:16) npm ERR! at Socket.<anonymous> (child_process.js:948:11) npm ERR! at Socket.EventEmitter.emit (events.js:95:17) npm ERR! at Pipe.close (net.js:451:12) npm ERR! If you need help, you may report this log at: npm ERR! <http://github.com/isaacs/npm/issues> npm ERR! or email it to: npm ERR! <[email protected]> npm ERR! System Windows_NT 6.1.7601 npm ERR! command "C:\\Program Files\\nodejs\\\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "install" "-d" npm ERR! cwd D:\nodeapp npm ERR! node -v v0.10.8 npm ERR! npm -v 1.2.23 npm ERR! code 128 Just running git clone using the same system works fine. Any ideas?
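
    The path git cannot return to lives under the LocalSystem profile's roaming AppData, where npm keeps its cache of git remotes. One workaround worth trying (an assumption, not a verified fix) is to point npm's cache and temp directories somewhere the service account can reliably reach, and give the git step a real HOME, before running the install:

    ```bat
    :: Sketch only - run under the same account the automation uses.
    npm config set cache D:\npm-cache --global
    npm config set tmp D:\npm-tmp --global

    :: some git-backed installs also want HOME/USERPROFILE to point at a real directory
    set HOME=D:\nodeapp
    set USERPROFILE=D:\nodeapp
    npm install -d
    ```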

    Read the article

  • Is this method of static file serving safe in node.js? (potential security hole?)

    - by MikeC8
    I want to create the simplest node.js server to serve static files. Here's what I came up with: fs = require('fs'); server = require('http').createServer(function(req, res) { res.end(fs.readFileSync(__dirname + '/public/' + req.url)); }); server.listen(8080); Clearly this would map http://localhost:8080/index.html to project_dir/public/index.html, and similarly so for all other files. My one concern is that someone could abuse this to access files outside of project_dir/public. Something like this, for example: http://localhost:8080/../../sensitive_file.txt I tried this a little bit, and it wasn't working. But it seems like my browser was removing the ".." itself, which leads me to believe that someone could still abuse my poor little node.js server. I know there are npm packages that do static file serving, but I'm actually curious to write my own here. So my questions are: Is this safe? If so, why? If not, why not? And, further, if not, what is the "right" way to do this? My one constraint is that I don't want to have an if clause for each possible file; I want the server to serve whatever files I throw in a directory.
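
    The concern is justified: the browser normalizes "../" before sending the request, but a raw client (curl, netcat) will happily send the dot segments, or URL-encode them as %2e%2e. The standard defence is to resolve the requested path against the public root and refuse anything that ends up outside it, and to use the async readFile so a missing file becomes a 404 instead of an exception. A sketch of that shape:

    ```javascript
    // Sketch only: resolve the requested path and reject anything that escapes the public root.
    var fs = require('fs');
    var path = require('path');
    var http = require('http');

    var root = path.join(__dirname, 'public');

    http.createServer(function (req, res) {
      var raw;
      try {
        raw = decodeURIComponent(req.url.split('?')[0]);   // handles %2e%2e-style encodings of ".."
      } catch (e) {
        res.writeHead(400);
        return res.end('Bad request');
      }

      var filePath = path.join(root, path.normalize(raw));

      // path.join/normalize collapse "..", so anything still outside root is an escape attempt
      if (filePath !== root && filePath.indexOf(root + path.sep) !== 0) {
        res.writeHead(403);
        return res.end('Forbidden');
      }

      fs.readFile(filePath, function (err, data) {
        if (err) {
          res.writeHead(404);
          return res.end('Not found');
        }
        res.end(data);
      });
    }).listen(8080);
    ```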

    Read the article

  • Microsoft Sql Server driver for Nodejs - Part 2

    - by chanderdhall
    Node.js, SQL Server and JSON responses with REST. This post is part 2 of Microsoft SQL Server driver for Node.js. In this post we will look at the JSON responses from the Microsoft SQL Server driver for Node.js.

    Pre-requisites: If you have read Part 1 of the series, you should be good. We will be using a framework for REST within Node.js - Restify - but that needs no prior learning.

    Restify: Restify is a simple node module for building RESTful services. It is slimmer than Express. Express is a complete module that has everything you need to create a full-blown browser app. However, Restify does not have the additional overhead of templating, rendering, etc. that would be needed if your app had views. So, as the name suggests, it's an awesome framework for building RESTful services and is very light-weight.

    Set up - You can continue with the same directory or project structure we had in the previous post, or start a new one. Install Restify using npm and you are good to go: npm install restify. Go to Server.js and include Restify in your solution, then create the server object using restify.createServer() - SLICK, ha? var restify = require('restify'); var server = restify.createServer(); server.listen(8080, function () { console.log('%s listening at %s', server.name, server.url); }); Then make sure you provide a port for the server to listen at. The callback function is optional but helps for debugging purposes. Once you are done, save the file, go to the command prompt, run 'node server.js' and you should see the server's listening message in the console. To test the server, go to your browser and type the address 'http://localhost:8080/' and, oops, you will see an error. Why is that? Well, because we haven't defined any routes. Let's go ahead and create a route. To begin with, I'd like to return whatever is typed in the URL after my name, and the following code should do it: server.get('/ChanderDhall/:name', function respond(req, res, next) { res.end("hello " + req.params.name + "") }); You can also avoid writing callbacks inline, something like this: function respond(req, res, next) { res.end("Chander Dhall " + req.params.name + ""); } server.get('/hello/:name', respond); Now if you go ahead and type http://localhost:8080/ChanderDhall/LovesNode you will get the response 'Chander Dhall loves node'. NOTE: Make sure your URL has the right case as it's case-sensitive. You could have also typed it in as 'server.get('/chanderdhall/:name', respond);'

    Stored procedure: We've talked a lot about Restify now, but keep in mind the post is about being able to use SQL Server with Node and return JSON. To see this in action, let's go ahead and create another route that returns a list of Employees from a stored procedure: server.get('/Employees', Employees); The following code will return a JSON response: function Employees(req, res, next) { res.header("Content-Type", "application/json"); // Need to specify the Content-Type, which is JSON in our case sql.open(conn_str, function (err, conn) { if (err) { // Logs an error console.log("Error opening the database connection!"); return; } console.log("before query!"); conn.queryRaw("exec sp_GetEmployees", function (err, results) { if (err) { // Connection is open, but an error occurred while running the query console.log("Error running the stored procedure!"); return; } // (rest of the handler reconstructed: serialize the result set and finish the response) res.end(JSON.stringify(results.rows)); }); }); }

    What else can be done? Maybe create a formatter, or maybe even come up with a hypermedia type, but that may upset some pragmatists. Well, that's going to be a totally different discussion and is really not part of this series.

    Summary: We've discussed how to execute a stored procedure using the Microsoft SQL Server driver for Node. Also, we have discussed how to format and send out clean JSON to the app calling this API.
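
    Since the handler above arrives in fragments (and is cut off mid-callback in the source), here is the same idea pulled together as one runnable sketch. The module name, connection string and stored procedure are assumptions carried over from the series rather than quotes from the post; with this driver, queryRaw hands back an object whose rows array is what gets serialized:

    ```javascript
    // Sketch only: Restify route returning the result of a stored procedure as JSON.
    var restify = require('restify');
    var sql = require('msnodesql');   // assumed module name for the Microsoft SQL Server driver from part 1

    var conn_str = "Driver={SQL Server Native Client 11.0};Server=(local);Database=Test;Trusted_Connection={Yes};";

    var server = restify.createServer();

    server.get('/Employees', function (req, res, next) {
      sql.open(conn_str, function (err, conn) {
        if (err) {
          console.log('Error opening the database connection!');
          return next(err);
        }
        conn.queryRaw('exec sp_GetEmployees', function (err, results) {
          if (err) {
            console.log('Error running the stored procedure!');
            return next(err);
          }
          res.header('Content-Type', 'application/json');
          res.end(JSON.stringify(results.rows));   // queryRaw exposes column metadata in .meta and data in .rows
          return next();
        });
      });
    });

    server.listen(8080, function () {
      console.log('%s listening at %s', server.name, server.url);
    });
    ```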

    Read the article

  • 2 Servers 1 Database - Can I use Redis?

    - by Aust
    Ok, I have a couple of questions here. First let me give you some background information. I'm starting a project where I have a node.js server running my application and my website running on another normal server. My application will allow multiple users simultaneous connections and updates to the database, so Redis seemed like a good fit there because of its speed and atomic functions. For someone to access my application they have to log in with an account. To get an account, they have to sign up for one through my website. So my website needs a database, but it's not important to have a database like Redis here because it doesn't need it. Which leads me to my first question: 1. Can Redis even be used without node.js? It seems like it would be convenient if both of my servers were using the same database to keep track of information. In some cases, they will keep track of the same information (as in user information) and in other cases, they will be keeping track of separate information. So even if the website wouldn't be taking full advantage of all that Redis has to offer, it seems like it would be more convenient. So assuming Redis could be used in this situation, that leads to my next question: 2. Since Redis is linked with JavaScript, how would I handle the security from my website users? What would be stopping my website users from opening Firebug or Chrome's inspector and making changes to the database? Maybe if I designed my site with a layout like this: apply.php - update.php - home.php, where after they submitted their form it would redirect them to the update page where the JavaScript would run, and then redirect them to the home page after the database updated. I don't really know; I'm just taking shots in the dark at this point. :) Maybe a better alternative would be to have my node.js application access its own Redis database and also have access to another MySQL database that my website also has access to. Or maybe there is another database that would be better suited for this situation than Redis. Anyway, any direction on this matter would be greatly appreciated. :)
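
    On both questions: Redis is a standalone server that speaks its own protocol over TCP (port 6379 by default), so it is not tied to node.js and it is never exposed to browser JavaScript. Only server-side code connects to it: the node app through a client such as node_redis, and a PHP site through a client such as Predis or the phpredis extension, so Firebug users can only ever reach whatever endpoints your servers choose to expose. A sketch of the node side using the classic node_redis API (the key layout is hypothetical):

    ```javascript
    // Sketch only: server-side access to Redis from node (npm install redis).
    var redis = require('redis');
    var client = redis.createClient(6379, '127.0.0.1');

    client.on('error', function (err) {
      console.log('Redis error: ' + err);
    });

    // store and read back a user record under a hypothetical key scheme
    client.hmset('user:1001', { name: 'Aust', plan: 'free' }, function () {
      client.hgetall('user:1001', function (err, user) {
        console.log(user);              // { name: 'Aust', plan: 'free' }
        client.quit();
      });
    });
    ```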

    Read the article

  • Rotate sphere in Javascript / three.js while moving on x/z axes

    - by kaipr
    I have a sphere/ball in three.js which I want to "roll" around on the x/z axes. For the z axis I could simply do this, no matter what the current x and y rotation is: sphere.roll_z = function(distance) { sphere.position.z += distance; sphere.rotation.x += distance > 0 ? 0.05 : -0.05; } But how can I roll it along the x axis? And how could I do the roll_z properly? I've found a lot about quaternions and matrices, but I can't figure out how to use them properly to achieve my (rather simple) goal. I'm aware that I have to update multiple rotations and that I have to calculate how far to rotate the sphere to match the distance, but the "how" is the question. It's probably just a lack of mathematical skills which I should train, but a working example/short explanation would help a lot to start with.
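
    For rolling without slipping, the rotation axis is the world up-vector crossed with the direction of travel, and the angle is the distance divided by the sphere's radius; applying that as a world-space quaternion sidesteps the problems of adding to rotation.x/z once the sphere is already turned. A sketch assuming a reasonably recent three.js build (one that has Quaternion#premultiply):

    ```javascript
    // Sketch only: roll a sphere of the given radius by (dx, dz) on the ground plane.
    var UP = new THREE.Vector3(0, 1, 0);

    function rollSphere(sphere, radius, dx, dz) {
      var direction = new THREE.Vector3(dx, 0, dz);
      var distance = direction.length();
      if (distance === 0) return;

      sphere.position.x += dx;
      sphere.position.z += dz;

      var axis = new THREE.Vector3().crossVectors(UP, direction).normalize();
      var q = new THREE.Quaternion().setFromAxisAngle(axis, distance / radius);

      // premultiply applies the rotation about a WORLD axis, independent of the current orientation
      sphere.quaternion.premultiply(q);
    }

    // usage: roll 0.3 units along +z, then 0.2 units along -x
    // rollSphere(sphere, 1, 0, 0.3);
    // rollSphere(sphere, 1, -0.2, 0);
    ```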

    Read the article

  • mocha testing for the lazies, single key-press for all possible tests

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like Test. [U]nit, [I]ntegration : i (user input) Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u 2 User Interaction. Running "mocha integration\2userint.js" ... So essentially I have configured a batch "option" for each test file I have, which I can choose to run individually or all together. But adding and removing tests is a pain. Is there something that does this or anything like this automatically? Like reads all the files and asks me which file(s) I want to test. A GUI with checkboxes would be ultimate! but I'll take anything. I'm working in node.js

    Read the article

  • Software, script or a tool to automate managing which tests to run

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like Test. [U]nit, [I]ntegration : i (user input) Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u 2 User Interaction. Running "mocha integration\2userint.js" ... So essentially I have configured a batch "option" for each test file I have, which I can choose to run individually or all together. But adding and removing tests is a pain. I have to update the batch file every time a new file is added or changed. Is there software, a script or a tool that does this automatically, or makes it easier for me to do so? I basically need it to be aware of the test files and ask me which file(s) I want to test. A GUI with checkboxes would be ultimate! but I'll take anything. I'm working in node.js
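
    Short of a dedicated tool, a few lines of node can replace the hand-maintained batch file: list whatever is in the test directory at run time, prompt for a choice, and spawn mocha on the selection, so new files show up automatically. A sketch (the directory layout and the "a = all" convention are assumptions):

    ```javascript
    // Sketch only: node pick-test.js [dir]  - lists dir (default "integration"), asks, then runs mocha.
    var fs = require('fs');
    var path = require('path');
    var readline = require('readline');
    var spawn = require('child_process').spawn;

    var dir = process.argv[2] || 'integration';
    var files = fs.readdirSync(dir).filter(function (f) { return /\.js$/.test(f); });

    files.forEach(function (f, i) { console.log('[' + (i + 1) + '] ' + f); });
    console.log('[a] all');

    var rl = readline.createInterface({ input: process.stdin, output: process.stdout });
    rl.question('Which test? ', function (answer) {
      rl.close();
      var targets = (answer === 'a') ? [dir]
                                     : [path.join(dir, files[parseInt(answer, 10) - 1])];
      // mocha must be on the PATH (npm install -g mocha); on Windows the executable is mocha.cmd
      spawn(process.platform === 'win32' ? 'mocha.cmd' : 'mocha', targets, { stdio: 'inherit' });
    });
    ```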

    Read the article

  • Why do node packages put a comma on a newline?

    - by SomeKittens
    I'm learning node.js and am trying out Express. My first app had this code: var express = require('express') , routes = require('./routes') , user = require('./routes/user') , http = require('http') , path = require('path'); Reading through the mongoose tutorial gives me this: var mongoose = require('mongoose') , db = mongoose.createConnection('localhost', 'test'); On strict mode, JSHint gives me app.js: line 6, col 32, Bad line breaking before ','. Which shows that I'm not the only one out there who's bugged by this syntax. Is there any reason to declare vars this way instead of adding the comma at the end of the line?

    Read the article

  • JS / HTML5 compatibility issue on iOS 6

    - by Dhaval
    I'm using HTML5 to play video, and there is some content before the video, so I'm using flexroll to scroll the whole window. I'm checking it on an iPad. Now the problem is that on iOS 5 it works fine, but when I update to iOS 6 the screen does not scroll; only the video scrolls up and down, and the content stays where it is. I can't understand what the exact problem is. Is it a JS compatibility issue or an HTML5 video compatibility issue? Can anyone please help me figure it out? Your help will really be appreciated.

    Read the article

  • 3d js map rendering

    - by gotha
    In the past I've done a 2D tile map using HTML, CSS and JavaScript. Now I have the task of creating a 3D version using the same technologies; think of it like a space map where all planets have x/y/z positions. Currently I have no idea how to do this. Is there an existing library or something I can modify to do the job? If not, what method of rendering the map should I use? It needs to be as browser-independent as possible, so I can't use WebGL, Flash or canvas. I'm considering plain JS & HTML, or SVG (using Raphael for compatibility).

    Read the article

  • JS framework with conditionally loaded fragments

    - by kjs3
    I'm doing a single-page, responsive, mobile-first design. I found this article about conditionally loaded fragments but am wondering what the different JS frameworks have built in to handle this. I'm imagining the mobile version with a list-view fragment that transitions to a show-view fragment. A larger portal could just show both fragments and change the show fragment when items in the list are clicked. I'd love thoughts on what is available from the various frameworks, not an argument. Ember? Angular? etc. Maybe I'm missing it, but I'm not seeing the responsive-fragments issue brought up in the various demos I've found so far.

    Read the article

  • Using Minified Page Specific JS [migrated]

    - by Mike C
    I've been working on a rather large-scale project which makes use of a number of different pages with some very specific JavaScript for each of them. To lessen load times, I plan to minify it all into one file before deploying. The problem is this: how should I avoid launching page-specific JS on pages which don't require it? So far my best solution has been to wrap each page in some additional container <div id='some_page'> ...everything else... </div> and I extended jQuery so I can do something like this: // If this element exists when the DOM is ready, execute the function $('#some_page').ready(function() { ... }); Which, while kind of cool, just rubs me the wrong way.
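
    A common alternative that avoids probing the DOM for wrapper divs is to have the server stamp the page name onto the body tag and dispatch from a single ready handler in the combined file. A sketch (the page names and init functions are illustrative):

    ```javascript
    // Each page renders something like <body data-page="checkout"> server-side.
    var pageInit = {
      home:     function () { /* home-only wiring */ },
      checkout: function () { /* checkout-only wiring */ },
      profile:  function () { /* profile-only wiring */ }
    };

    $(function () {
      var page = $('body').data('page');   // reads the data-page attribute
      if (pageInit[page]) {
        pageInit[page]();
      }
    });
    ```

    The dispatch table also makes it obvious, in one place, which code belongs to which page once everything is minified together.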

    Read the article

  • Using Node.js as an accelerator for WCF REST services

    - by Elton Stoneman
    Node.js is a server-side JavaScript platform "for easily building fast, scalable network applications". It's built on Google's V8 JavaScript engine and uses an (almost) entirely async event-driven processing model, running in a single thread. If you're new to Node and your reaction is "why would I want to run JavaScript on the server side?", this is the headline answer: in 150 lines of JavaScript you can build a Node.js app which works as an accelerator for WCF REST services*. It can double your messages-per-second throughput, halve your CPU workload and use one-fifth of the memory footprint, compared to the WCF services direct.

    Well, it can if: 1) your WCF services are first-class HTTP citizens, honouring client cache ETag headers in request and response; 2) your services do a reasonable amount of work to build a response; 3) your data is read more often than it's written. In one of my projects I have a set of REST services in WCF which deal with data that only gets updated weekly, but which can be read hundreds of times an hour. The services issue ETags and will return a 304 if the client sends a request with the current ETag, which means in the most common scenario the client uses its local cached copy. But when the weekly update happens, all the client caches are invalidated and they all need the same new data. Then the service will get hundreds of requests with old ETags, and they go through the full service stack to build the same response for each, taking up threads and processing time. Part of that processing means going off to a database on a separate cloud, which introduces more latency and downtime potential.

    We can use ASP.NET output caching with WCF to solve the repeated-processing problem, but the server will still be thread-bound on incoming requests, and getting the current ETags reliably still needs a database call per request. The accelerator solves that by running as a proxy - all client calls come into the proxy, and the proxy routes calls to the underlying REST service. We could use Node as a straight passthrough proxy and expect some benefit, as the server would be less thread-bound, but we would still have one WCF and one database call per proxy call. But add some smart caching logic to the proxy, and share ETags between Node and WCF (so the proxy doesn't even need to call the service to get the current ETag), and the underlying service will only be invoked when data has changed, and then only once - all subsequent client requests will be served from the proxy cache.

    I've built this as a sample up on GitHub: NodeWcfAccelerator on sixeyed.codegallery. Here's how the architecture looks (the diagram is in the original post). The code is very simple. The Node proxy runs on port 8010 and all client requests target the proxy. If the client request has an ETag header then the proxy looks up the ETag in the tag cache to see if it is current - the sample uses memcached to share ETags between .NET and Node. If the ETag from the client matches the current server tag, the proxy sends a 304 response with an empty body to the client, telling it to use its own cached version of the data. If the ETag from the client is stale, the proxy looks for a local cached version of the response, checking for a file named after the current ETag. If that file exists, its contents are returned to the client as the body in a 200 response, which includes the current ETag in the header.

    If the proxy does not have a local cached file for the service response, it calls the service, writes the WCF response to the local cache file and to the body of a 200 response for the client. So the WCF service is only troubled if both client and proxy have stale (or no) caches.

    The only (vaguely) clever bit in the sample is using the ETag cache, so the proxy can serve cached requests without any communication with the underlying service, which it does completely generically, so the proxy has no notion of what it is serving or what the services it proxies are doing. The relative path from the URL is used as the lookup key, so there's no shared key-generation logic between .NET and Node, and when WCF stores a tag it also stores the "read" URL against the ETag so it can be used for a reverse lookup, e.g.:

    Key                                                  | Value
    /WcfSampleService/PersonService.svc/rest/fetch/3     | "28cd4796-76b8-451b-adfd-75cb50a50fa6"
    "28cd4796-76b8-451b-adfd-75cb50a50fa6"                | /WcfSampleService/PersonService.svc/rest/fetch/3

    In Node we read the cache using the incoming URL path as the key and we know that "28cd4796-76b8-451b-adfd-75cb50a50fa6" is the current ETag; we look for a local cached response in /caches/28cd4796-76b8-451b-adfd-75cb50a50fa6.body (and the corresponding .header file which contains the original service response headers, so the proxy response is exactly the same as the underlying service). When the data is updated, we need to invalidate the ETag cache - which is why we need the reverse lookup in the cache. In the WCF update service, we don't need to know the URL of the related read service - we fetch the entity from the database, do a reverse lookup on the tag cache using the old ETag to get the read URL, update the new ETag against the URL, store the new reverse lookup and delete the old one.

    Running Apache Bench against the two endpoints gives the headline performance comparison. Making 1000 requests with concurrency of 100, and not sending any ETag headers in the requests: with the Node proxy I get 102 requests handled per second, average response time of 975 milliseconds, with 90% of responses served within 850 milliseconds; going direct to WCF with the same parameters, I get 53 requests handled per second, mean response time of 1853 milliseconds, with 90% of responses served within 3260 milliseconds. Informally monitoring server usage during the tests, Node maxed at 20% CPU and 20Mb memory; IIS maxed at 60% CPU and 100Mb memory.

    Note that the sample WCF service does a database read and sleeps for 250 milliseconds to simulate a moderate processing load, so this is *not* a baseline Node-vs-WCF comparison, but for similar scenarios where the service call is expensive but applicable to numerous clients for a long timespan, the performance boost from the accelerator is considerable.

    * - actually, the accelerator will work nicely for any HTTP request where the URL (path + querystring) uniquely identifies a resource. In the sample, there is an assumption that the ETag is a GUID wrapped in double-quotes (e.g. "28cd4796-76b8-451b-adfd-75cb50a50fa6") – which is the default for WCF services. I use that assumption to name the cache files uniquely, but it is a trivial change to adapt to other ETag formats.
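
    To make the flow concrete, here is a stripped-down sketch of the proxy's decision path (current ETag from memcached, then 304, then local cache file, then forward to WCF and cache the result). Module names, ports and paths are placeholders and plenty of detail is omitted; the real implementation is in the NodeWcfAccelerator sample linked above.

    ```javascript
    // Sketch only - not the sample's actual code.
    var http = require('http');
    var fs = require('fs');
    var Memcached = require('memcached');      // npm install memcached

    var tags = new Memcached('localhost:11211');
    var backend = { host: 'localhost', port: 80 };   // the WCF host

    http.createServer(function (req, res) {
      tags.get(req.url, function (err, currentEtag) {
        // 1. Client already holds the current version: empty 304, no WCF call.
        if (currentEtag && req.headers['if-none-match'] === currentEtag) {
          res.writeHead(304, { 'ETag': currentEtag });
          return res.end();
        }
        // 2. Proxy has a cached body for the current ETag: serve it from disk.
        var cacheFile = __dirname + '/caches/' + String(currentEtag || '').replace(/"/g, '') + '.body';
        fs.readFile(cacheFile, function (fileErr, body) {
          if (!fileErr) {
            res.writeHead(200, { 'ETag': currentEtag });
            return res.end(body);
          }
          // 3. Nothing cached: forward to WCF, stream the response back, and cache it for next time.
          var headers = {};
          for (var h in req.headers) { if (h !== 'if-none-match') headers[h] = req.headers[h]; }
          http.get({ host: backend.host, port: backend.port, path: req.url, headers: headers }, function (wcfRes) {
            res.writeHead(wcfRes.statusCode, wcfRes.headers);
            var chunks = [];
            wcfRes.on('data', function (c) { chunks.push(c); res.write(c); });
            wcfRes.on('end', function () {
              res.end();
              var etag = String(wcfRes.headers.etag || '').replace(/"/g, '');
              if (etag) fs.writeFile(__dirname + '/caches/' + etag + '.body', Buffer.concat(chunks));
            });
          });
        });
      });
    }).listen(8010);
    ```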

    Read the article
