Search Results

Search found 31762 results on 1271 pages for 'js future software'.

  • Java Future and infinite computation

    - by Chris
    I'm trying to optimize an (infinite) computation algorithm. I have an infinite sum to calculate ( Sum_{n -> infinity} (....) ). My idea was to create several threads using the Future<T> construct, then combine the intermediate results together. My problem, however, is that I need a certain precision, so I need to constantly evaluate the current result while the other threads keep calculating. My question is: is there some sort of result queue where each finished thread can put its results, while a main thread receives those results and then either lets the computation continue or terminates the whole ExecutorService? Any help would really be appreciated! Thanks!

    Read the article

  • Inconsistency in java.util.concurrent.Future?

    - by loganj
    For the sake of argument, let's say I'm implementing Future for a task which is not cancelable. The Java 6 API doc says: After [cancel()] returns, subsequent calls to isDone() will always return true. [cancel()] returns false if the task could not be cancelled, typically because it has already completed normally It also says: [isDone()] returns true if this task completed. But what if my cancellation fails not because the task is already completed, but because it simply cannot be cancelled? Is there a way out of this contradiction (other than making my uncancelable task cancelable and sidestepping it altogether)?

    Read the article

  • Future of web services

    - by Landon Ashes
    I want to know what the possible future research areas are regarding "Web Services", and in what direction "Web Services" are moving. I am not talking about "Microsoft Web Services"; I am talking about "Web Services" in general. I did Google this, but whatever I found was a couple of years old and obsolete, and I couldn't get any direction from IEEE either. Could some expert in this area please guide me? I would be much obliged. Thanks in advance.

    Read the article

  • node.js ./configure warnings

    - by microspino
    I'm trying to install node.js and I get this list of warnings when I run ./configure. I don't know which libraries/packages I'm missing on my Debian installation (Linux 2.6.32.6 ppc GNU/Linux on a Bubba2 server): Checking for gnutls >= 2.5.0 : fail --- libeio --- Checking for pread(2) and pwrite(2) : fail Checking for sync_file_range(2) : fail --- libev --- Checking for header sys/inotify.h : not found Checking for header port.h : not found Checking for header sys/event.h : not found Checking for function kqueue : not found Checking for header sys/eventfd.h : not found Can you tell me what I need to install?

    Read the article

  • History.js not working in Internet Explorer

    - by Wilcoholic
    I am trying to get history.js to work in Internet Explorer because I need history.pushState() to work. I have read over the instructions on GitHub (https://github.com/balupton/History.js/) and have tried implementing it, but haven't had any success. Here's what I have: <!DOCTYPE html> <html> <head> <!-- jQuery --> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js"></script> <!-- History.js --> <script defer src="http://balupton.github.com/history.js/scripts/bundled/html4+html5/jquery.history.js"></script> <script type="text/javascript"> function addHistory(){ // Prepare var History = window.History; // Note: We are using a capital H instead of a lower h // Change our States History.pushState(null, null, "?mylink.html"); } </script> </head> <body> <a href="mylink.html">My Link</a> <a href="otherlink.html">Other Link</a> <button onclick="addHistory()" type="button">Add History</button> </body> Not sure what I'm doing wrong, but it's definitely not working in IE. Any help is appreciated.
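    A related detail worth noting, shown below as a minimal sketch based on the usage documented in the History.js README (this is not the poster's code): pushState() is usually paired with a listener for the statechange event so the page can react when states are pushed or the user navigates back and forward. History.enabled, History.Adapter.bind and History.getState are part of the documented API; the handler body here is only illustrative.

    // Minimal sketch (would go in a script tag after jquery.history.js has loaded).
    (function (window) {
        var History = window.History;                   // capital H, as in the code above
        if (!History || !History.enabled) { return; }   // bail out if History.js is unavailable
        History.Adapter.bind(window, 'statechange', function () {
            var State = History.getState();             // the state that was just pushed/popped
            History.log(State.data, State.title, State.url);  // illustrative: log it somewhere
        });
    })(window);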

    Read the article

  • Passing ViewModel for backbone.js from MVC3 Server-Side

    - by Roman
    In ASP.NET MVC there are Model, View and Controller. The MODEL represents entities which are stored in the database and is essentially all the data used in an application (for example, generated by EntityFramework, "DB First" approach). Not all data from the model is something you want to show in the view (for example, hashes of passwords). So you create a VIEW MODEL, one for every strongly-typed Razor view you have in the application. Like this: using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace MyProject.ViewModels.SomeController.SomeAction { public class ViewModel { public ViewModel() { Entities1 = new List<ViewEntity1>(); Entities2 = new List<ViewEntity2>(); } public List<ViewEntity1> Entities1 { get; set; } public List<ViewEntity2> Entities2 { get; set; } } public class ViewEntity1 { //some properties from original DB-entity you want to show } public class ViewEntity2 { } } When you create complex client-side interfaces (I do), you use some pattern for the JavaScript on the client, MVC or MVVM (I know only these). So, with MVC on the client you have yet another model (Backbone.Model, for example), which is a third model in the application. It is a bit much. Why don't we use the same ViewModel on the client (in backbone.js or another framework)? Is there a way to transfer the C#-coded model to a JS-coded one? In the MVVM pattern, with knockout.js, you can do this in SomeAction.cshtml: <div style="display: none;" id="view_model">@Json.Encode(Model)</div> after that, in JavaScript code: var ViewModel = ko.mapping.fromJSON($("#view_model").get(0).innerHTML); now you can extend your ViewModel with some actions, event handlers, etc.: ko.utils.extend(ViewModel, { some_function: function () { //some code } }); So we are not building the same view model on the client again, we are transferring the existing view model from the server. At least, the data. But knockout.js is not suitable for me; you can't build a complex UI with it, it is just data-binding. I need something more structural, like backbone.js. The only way I can see now to build a ViewModel for backbone.js is to re-write the same server ViewModel in JS by hand. Is there any way to transfer it from the server, and reuse the same view model in the server view and the client view?
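    One possible way to do the same transfer with backbone.js, sketched below: serialize the server-side ViewModel to JSON in the Razor view exactly as in the knockout.js example above, then use that JSON as the initial attributes of a Backbone model instead of re-writing the view model by hand. This is only a minimal sketch under those assumptions; the element id, the model name and some_function mirror the example above and are illustrative, not an established API.

    // Minimal sketch (assumes jQuery and Backbone are loaded, and that the Razor view
    // still renders the ViewModel as JSON into the hidden #view_model element as above).
    var ViewModel = Backbone.Model.extend({
        // client-side behaviour added here, mirroring the ko.utils.extend example
        some_function: function () {
            // some code
        }
    });

    // Transfer the server-built view model: parse the embedded JSON and use it as the
    // model's initial attributes, so the data is not rebuilt on the client by hand.
    var serverData = JSON.parse($("#view_model").get(0).innerHTML);
    var viewModel = new ViewModel(serverData);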

    Read the article

  • Personal search – the future of search

    - by jamiet
    [Four months ago I wrote a meandering blog post on another blogging site entitled Personal search – the future of search. The points I made therein are becoming more relevant to what I'm reading about and hoping to get involved in in the future, so I'm re-posting here to a wider audience to hopefully get some more feedback and gauge reaction to it. This has been prompted by the book Pull by David Siegel that is forming my current holiday reading (recommended to me by a commenter on my previous post Interesting things – Twitter annotations and your phone as a web server) and in particular by Siegel's notion of us all in the future having a personal online data vault.] My one-time colleague Paul Dawson recently wrote an article called The Future of Search and in it he proposed some interesting ideas. Some choice quotes: The growth of Chinese search giant Baidu is an indicator that fully localised and tailored content and offerings have great traction with local audiences This trend is already driving an increase in the use of specialist searches … Look at how Farecast is now integrated into Bing for example, or how Flightstats is now integrated into Google. Search does not necessarily have to begin with a keyword, but could start instead with a click or a touch. Take a look at Retrievr. Start drawing a picture in the box and see what happens. This is certainly search without the need for typing in keywords search technology has advanced greatly in recent years. The recent launch of Microsoft Live Labs’ Pivot has given us a taste of what we can expect to see in the future This really got me thinking about where search might go in the future and as my mind wandered I realised that as the amount of data that we collect about ourselves increases so too will the need and the desire to search it. The amount of electronic data that exists about each and every person is increasing and in the near future I fully expect that we are going to be able to store personal data such as: A history of our location (in fact Google Latitude already offers this facility) Recordings of all our phone conversations Health information history (weight, blood pressure etc…) Energy usage Spending history What films we watch, what radio stations we listen to Voting history Of course, most of this stuff is already stored somewhere but crucially we don’t have easy access to it. My utilities supplier knows how much electricity I’m using but if I want to know for myself I have to go and dig through my statements (assuming I have kept them). Similarly my doctor probably has ready access to all of my health records, my bank knows exactly what I have spent my money on, my cable supplier knows what I watch on TV and my mobile phone supplier probably knows exactly where I am and where I've been for the past few years. Strange then that none of this electronic information is available to me in a way that I can really make use of it; after all, it's MY information. It's MY data. I created it. That is set to change. As technologies mature and customers become more technically cognizant they will demand more access to the data that companies hold about them. The companies themselves will realise the benefit that they derive from giving users what they want and will embrace ways of providing it. 
As a result the amount of data that we store about ourselves is going to increase exponentially and the desire to search and derive value from that data is going to grow with it; we are about to enter the era of the “personal datastore” and we will want, and need, to search through it in order to make sense of it all. It's interesting then that today when we think of search we think of search engines, and yet in these personal datastores we’re referring to data that search engines can’t touch because WE own it and we (hopefully) choose to keep it private. Someone, I know not who, is going to lead in this space by making it easy for us to search our data and retrieve information that we have either forgotten or maybe didn’t even know in the first place. We will learn new things about ourselves and about our habits; we will share these findings with whomever we choose; we will compare what we discover with others; we will collaborate for mutual benefit and, most of all, we will educate ourselves as to how to live our lives better. Search will be the means to that end; it will enable us to make sense of the wealth of information that we will collect day in day out. The future of search is personal; why would we be interested in anything else? @Jamiet

    Read the article

  • Cross platform mobile development VS Native Mobile Development: Present And Future.

    - by MobileDev123
    I just completed one year in smartphone development, working on BlackBerry and Android, and also developed one application exclusively targeted at Nokia feature phones. Just a month ago I came to know about the Titanium Appcelerator tool that enables cross-platform development, but there are some developers who complain about its sub-par functionality. Even my little bit of experience says that developing in a native environment rather than with these cross-platform tools gives you more advantages, by giving a developer a chance to add more features with better performance. Do you have the same experience? Or do you find such cross-platform tools really useful with regard to advanced functionality and performance? As porting (or co-developing) the same application to different mobile platforms is a common thing nowadays, what do you think: will these cross-platform tools evolve and force developers to take a hands-on approach to them, or will the majority stick to the native development environment?

    Read the article

  • Developing browser plugins, especially for Chrome and Firefox: what is the future?

    - by MobileDev123
    I am interested in learning to develop some plug-ins for the Chrome and Firefox browsers as hobby projects. However, I know nothing about it, and I don't know whether it is valuable as professional experience or not. Do you know anybody who develops this kind of plug-in professionally? In case I do develop a plug-in, does it have any significance on my resume? (Extra info: I have 1.5 years of experience in various Java technologies, from servers to mobile.) How do companies and other employers view this kind of plug-in development professionally?

    Read the article

  • Can I distribute software with the following permission notice?

    - by Parham
    I've recently written a piece of software (without any other contributors) for a company which I part own. I was wondering if I could distribute it with the following permission notice, which is a modified version of the MIT License. Are there any obvious risks if I do distribute with this licence and does it give me the right to reuse the code in other projects? Permission is hereby granted, to any person within CompanyName (the "Company") obtaining a copy of this software and associated documentation files, excluding any third party libraries (the "Software"), to deal with the Software, with limitations restricted to use, copy, modify and merge, the Software may not be published, distributed, sublicensed and/or sold without the explicit permission from AuthorName (the "Author"). This notice doesn't apply to sections of the Software where copyright is held by any persons other than the Author. The Author remains the owner of the Software and may deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software. The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

    Read the article

  • Ensure Future Browser Compatibility - SharePoint Branding

    - by KunaalKapoor
    HTML and Future Internet Explorer Compatibility with SharePoint: As new versions of Internet Explorer are released, the way HTML is rendered by the browser could change over time. To address the possibility of changes, Microsoft uses the X-UA-Compatible META tag that targets HTML markup to a specific version of Internet Explorer. The default SharePoint 2010 master pages are set to force current and future versions of Internet Explorer to render HTML in Internet Explorer 8 mode with the following markup: <meta http-equiv="X-UA-Compatible" content="IE=IE8" /> The Adventure Works Travel HTML includes the META tag to help ensure future Internet Explorer versions will display the SharePoint HTML properly. For more information about the Internet Explorer Standards Mode, see META Tags and Locking in Future Compatibility.

    Read the article

  • What is the Future of Search Engine Optimisation?

    Those who are into Internet marketing would like to know what the future holds for them, but frankly, it is very difficult to predict this accurately. Forget about the future of SEO; it is very difficult to even predict the future of the Internet and computers in general. For instance, if anyone had predicted 40 years ago that a computer would be sitting on a table in almost every home in the country, everyone would have thought that he or she was crazy.

    Read the article

  • Confused about my future. Doubt about .Net or Java way.

    - by dotNET
    I'm very confused about choosing the programming language to follow in the next step of my life. I'm quite familiar with C++, VB.NET and PHP, but to jump to a higher level I must choose between JEE (JSP, Servlets, JSF, Spring, EJB, Struts, Hibernate, ...) and .NET (ASP.NET, C#), because I can't learn them at the same time. And as you can see, when I mention JEE a lot of things come to mind. From my personal experience I prefer .NET, but Java seems to be a better choice. Believe me, I'm not writing a subjective topic; I just want to know what I should follow to succeed. The questions here are: Is there anything that can be done with Java and cannot be done with .NET? Is there any chance I can keep up with the countless frameworks that are always in development? And is there anything else I haven't mentioned?

    Read the article

  • Node.js Production Server and Ubuntu Users

    - by baffonero
    I'm setting up a production server on Ubuntu 10.04 using this technology stack: Node.js; Nginx to serve static content; Mongo; Redis; Upstart for running applications as services; Monit for monitoring the node applications and the nginx server. The server will host only 5 applications of this type, nothing else. How would you set up Ubuntu users? Is it a good idea to create a user per application? Would you install the software (node, mongo, ...) as root or as user(s)? Thanks in advance.

    Read the article

  • What's wrong with the analogy between software and building construction?

    - by kuosan
    Many people like to think of building software as constructing a building, so we have terms like building blocks and architecture. However, lately I've been to a couple of talks where most people say this analogy is wrong, especially around the idea of having a non-coding software architect on a project. In my experience, good software architects are those who also write code, so they won't design things that only look good on paper. I've worked with several Architecture Astronauts who have either limited or outdated experience in programming. These architecture astronauts quite often missed critical details in their designs and caused more harm than good on a project. This makes me wonder: what are the differences between constructing software and constructing a building? How come the building industry can have architects who have probably never built a house in their lives and purely handle design work, but the software development field cannot?

    Read the article

  • Node.js server crashes, database operations halfway?

    - by Ranadeep
    I have a node.js app with a mongodb backend going to production in a week, and I have a few doubts about how to handle app crashes and restarts. Say I have a simple route /followUser in which I have two database operations: /followUser -----> Update User1 Document.followers = User2 -----> Update User2 Document.followers = User1 -----> Some other mongodb (via mongoose) operation What happens if there is a server crash (due to a power failure, or maybe because the remote mongodb server is down) in a scenario like this: -----> Update User1 Document.followers = User2 SERVER CRASHED, FOREVER RESTARTS NODE What happens to the operations below? The system is now in an inconsistent state and I may get an error every time I ask for User2's followers: -----> Update User2 Document.followers = User1 -----> Some other mongodb (via mongoose) operation Also, please recommend good logging and restart/monitor modules for apps running on Linux.
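    As an illustration of one way to make that route easier to recover after a crash, here is a minimal sketch (not from the question): if each write is idempotent, the whole /followUser request can simply be retried after forever restarts node, because re-applying the first update does no harm. An Express app object, the User model, the field names and the request shape are all assumptions for illustration.

    // Minimal sketch: $addToSet will not add the same follower twice, so a retried
    // request cannot corrupt the data even if the process died between the two updates.
    var mongoose = require('mongoose');
    var User = mongoose.model('User');   // assumes a User schema with a "followers" array

    app.post('/followUser', function (req, res) {
        User.update({ _id: req.body.userId },
                    { $addToSet: { followers: req.body.targetId } },
                    function (err) {
            if (err) { return res.send(500); }       // nothing applied yet, safe to retry
            User.update({ _id: req.body.targetId },
                        { $addToSet: { followers: req.body.userId } },
                        function (err) {
                if (err) { return res.send(500); }   // first update applied; a retry is still safe
                res.send(200);
            });
        });
    });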

    Read the article

  • Routing to various node.js servers on same machine

    - by Dtang
    I'd like to set up multiple node.js servers on the same machine (but listening on different ports) for different projects (so I can pull any one down to edit code without affecting the others). However, I want to be able to access these web apps from a browser without typing in the port number, and instead map different URLs to different ports: e.g. 45.23.12.01/app -> 45.23.12.01:8001. I've considered using node-http-proxy for this, but it doesn't yet support SSL. My hunch is that nginx might be the most suitable. I've never set up nginx before - what configuration do I need? The examples of config files I've seen only deal with subdomains, which I don't have. Alternatively, is there a better (stable, hassle-free) way of hosting multiple apps under the same IP address?
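    For comparison, here is a minimal sketch (not from the question) of what the node-http-proxy option mentioned above might look like, using the proxy API that library exposed at the time: a single front-end process maps URL prefixes to the per-project ports. The prefixes and ports are illustrative, and as noted this option lacks SSL support, which is why nginx may still be the better fit.

    // Minimal sketch: route by URL prefix to whichever project listens on that port.
    var httpProxy = require('http-proxy');

    var routes = {
        '/app':   8001,   // first project
        '/other': 8002    // second project
    };

    httpProxy.createServer(function (req, res, proxy) {
        var port = 8000;                                  // default backend
        for (var prefix in routes) {
            if (req.url.indexOf(prefix) === 0) { port = routes[prefix]; break; }
        }
        proxy.proxyRequest(req, res, { host: '127.0.0.1', port: port });
    }).listen(80);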

    Read the article

  • Install NPM Packages Automatically for Node.js on Windows Azure Web Site

    - by Shaun
    In one of my previous post I described and demonstrated how to use NPM packages in Node.js and Windows Azure Web Site (WAWS). In that post I used NPM command to install packages, and then use Git for Windows to commit my changes and sync them to WAWS git repository. Then WAWS will trigger a new deployment to host my Node.js application. Someone may notice that, a NPM package may contains many files and could be a little bit huge. For example, the “azure” package, which is the Windows Azure SDK for Node.js, is about 6MB. Another popular package “express”, which is a rich MVC framework for Node.js, is about 1MB. When I firstly push my codes to Windows Azure, all of them must be uploaded to the cloud. Is that possible to let Windows Azure download and install these packages for us? In this post, I will introduce how to make WAWS install all required packages for us when deploying.   Let’s Start with Demo Demo is most straightforward. Let’s create a new WAWS and clone it to my local disk. Drag the folder into Git for Windows so that it can help us commit and push. Please refer to this post if you are not familiar with how to use Windows Azure Web Site, Git deployment, git clone and Git for Windows. And then open a command windows and install a package in our code folder. Let’s say I want to install “express”. And then created a new Node.js file named “server.js” and pasted the code as below. 1: var express = require("express"); 2: var app = express(); 3: 4: app.get("/", function(req, res) { 5: res.send("Hello Node.js and Express."); 6: }); 7: 8: console.log("Web application opened."); 9: app.listen(process.env.PORT); If we switch to Git for Windows right now we will find that it detected the changes we made, which includes the “server.js” and all files under “node_modules” folder. What we need to upload should only be our source code, but the huge package files also have to be uploaded as well. Now I will show you how to exclude them and let Windows Azure install the package on the cloud. First we need to add a special file named “.gitignore”. It seems cannot be done directly from the file explorer since this file only contains extension name. So we need to do it from command line. Navigate to the local repository folder and execute the command below to create an empty file named “.gitignore”. If the command windows asked for input just press Enter. 1: echo > .gitignore Now open this file and copy the content below and save. 1: node_modules Now if we switch to Git for Windows we will found that the packages under the “node_modules” were not in the change list. So now if we commit and push, the “express” packages will not be uploaded to Windows Azure. Second, let’s tell Windows Azure which packages it needs to install when deploying. Create another file named “package.json” and copy the content below into that file and save. 1: { 2: "name": "npmdemo", 3: "version": "1.0.0", 4: "dependencies": { 5: "express": "*" 6: } 7: } Now back to Git for Windows, commit our changes and push it to WAWS. Then let’s open the WAWS in developer portal, we will see that there’s a new deployment finished. Click the arrow right side of this deployment we can see how WAWS handle this deployment. Especially we can find WAWS executed NPM. And if we opened the log we can review what command WAWS executed to install the packages and the installation output messages. As you can see WAWS installed “express” for me from the cloud side, so that I don’t need to upload the whole bunch of the package to Azure. 
Open this website and we can see the result, which proved the “express” had been installed successfully.   What’s Happened Under the Hood Now let’s explain a bit on what the “.gitignore” and “package.json” mean. The “.gitignore” is an ignore configuration file for git repository. All files and folders listed in the “.gitignore” will be skipped from git push. In the example below I copied “node_modules” into this file in my local repository. This means,  do not track and upload all files under the “node_modules” folder. So by using “.gitignore” I skipped all packages from uploading to Windows Azure. “.gitignore” can contain files, folders. It can also contain the files and folders that we do NOT want to ignore. In the next section we will see how to use the un-ignore syntax to make the SQL package included. The “package.json” file is the package definition file for Node.js application. We can define the application name, version, description, author, etc. information in it in JSON format. And we can also put the dependent packages as well, to indicate which packages this Node.js application is needed. In WAWS, name and version is necessary. And when a deployment happened, WAWS will look into this file, find the dependent packages, execute the NPM command to install them one by one. So in the demo above I copied “express” into this file so that WAWS will install it for me automatically. I updated the dependencies section of the “package.json” file manually. But this can be done partially automatically. If we have a valid “package.json” in our local repository, then when we are going to install some packages we can specify “--save” parameter in “npm install” command, so that NPM will help us upgrade the dependencies part. For example, when I wanted to install “azure” package I should execute the command as below. Note that I added “--save” with the command. 1: npm install azure --save Once it finished my “package.json” will be updated automatically. Each dependent packages will be presented here. The JSON key is the package name while the value is the version range. Below is a brief list of the version range format. For more information about the “package.json” please refer here. Format Description Example version Must match the version exactly. "azure": "0.6.7" >=version Must be equal or great than the version. "azure": ">0.6.0" 1.2.x The version number must start with the supplied digits, but any digit may be used in place of the x. "azure": "0.6.x" ~version The version must be at least as high as the range, and it must be less than the next major revision above the range. "azure": "~0.6.7" * Matches any version. "azure": "*" And WAWS will install the proper version of the packages based on what you defined here. The process of WAWS git deployment and NPM installation would be like this.   But Some Packages… As we know, when we specified the dependencies in “package.json” WAWS will download and install them on the cloud. For most of packages it works very well. But there are some special packages may not work. This means, if the package installation needs some special environment restraints it might be failed. For example, the SQL Server Driver for Node.js package needs “node-gyp”, Python and C++ 2010 installed on the target machine during the NPM installation. If we just put the “msnodesql” in “package.json” file and push it to WAWS, the deployment will be failed since there’s no “node-gyp”, Python and C++ 2010 in the WAWS virtual machine. For example, the “server.js” file. 
1: var express = require("express"); 2: var app = express(); 3: 4: app.get("/", function(req, res) { 5: res.send("Hello Node.js and Express."); 6: }); 7:  8: var sql = require("msnodesql"); 9: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:tqy4c0isfr.database.windows.net,1433;Database=msteched2012;Uid=shaunxu@tqy4c0isfr;Pwd=P@ssw0rd123;Encrypt=yes;Connection Timeout=30;"; 10: app.get("/sql", function (req, res) { 11: sql.open(connectionString, function (err, conn) { 12: if (err) { 13: console.log(err); 14: res.send(500, "Cannot open connection."); 15: } 16: else { 17: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 18: if (err) { 19: console.log(err); 20: res.send(500, "Cannot retrieve records."); 21: } 22: else { 23: res.json(results); 24: } 25: }); 26: } 27: }); 28: }); 29: 30: console.log("Web application opened."); 31: app.listen(process.env.PORT); The “package.json” file. 1: { 2: "name": "npmdemo", 3: "version": "1.0.0", 4: "dependencies": { 5: "express": "*", 6: "msnodesql": "*" 7: } 8: } And it failed to deploy to WAWS. From the NPM log we can see it’s because “msnodesql” cannot be installed on WAWS. The solution is, in “.gitignore” file we should ignore all packages except the “msnodesql”, and upload the package by ourselves. This can be done by use the content as below. We firstly un-ignored the “node_modules” folder. And then we ignored all sub folders but need git to check each sub folders. And then we un-ignore one of the sub folders named “msnodesql” which is the SQL Server Node.js Driver. 1: !node_modules/ 2:  3: node_modules/* 4: !node_modules/msnodesql For more information about the syntax of “.gitignore” please refer to this thread. Now if we go to Git for Windows we will find the “msnodesql” was included in the uncommitted set while “express” was not. I also need remove the dependency of “msnodesql” from “package.json”. Commit and push to WAWS. Now we can see the deployment successfully done. And then we can use the Windows Azure SQL Database from our Node.js application through the “msnodesql” package we uploaded.   Summary In this post I demonstrated how to leverage the deployment process of Windows Azure Web Site to install NPM packages during the publish action. With the “.gitignore” and “package.json” file we can ignore the dependent packages from our Node.js and let Windows Azure Web Site download and install them while deployed. For some special packages that cannot be installed by Windows Azure Web Site, such as “msnodesql”, we can put them into the publish payload as well. With the combination of Windows Azure Web Site, Node.js and NPM it makes even more easy and quick for us to develop and deploy our Node.js application to the cloud.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • What's the best Wireframing tool?

    - by DaNieL
    I'm looking for something similar to iPlotz or Mockup. I've found Pencil Project, but it requires xulrunner-1.9 (which seems to be incompatible with xulrunner-1.9.2) to run as a standalone application; however, it can be used as a Firefox plugin... but that is a bit slower. The error on my desktop (ubuntu 10.04) is: Could not find compatible GRE between version 1.9.1 and 1.9.2.* Does anyone know of other software? Edit: open-source software is preferred; it doesn't matter whether it is free.

    Read the article

  • Software for monitoring internal software?

    - by Tyler Eaves
    Is there any good software for monitoring the health of a collection of related software? Requirements are as follows: Web-based, deployable on standard Linux/BSD software. Configurable to support a variety of processes, scheduled at various intervals. Some sort of dashboard interface, for monitoring status, viewing errors, etc. As an example, suppose we have a daily export that's scheduled to run at 6AM each morning. After the export completes, it would POST a status message, saying it had completed, passing in some sort of application key to identify the export. If that status message hadn't come in, by, say, 6:30AM, an e-mail might be sent, that application should go red on the dashboard, etc. Applications should also be able to post error/warning messages. Basically the goal is to be able to monitor all of our internal projects from one system, rather than a multitude of e-mails, log files, etc. I suspect that I'll probably have to end up writing this from scratch, but I just thought I'd ask.
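    If this does end up being written from scratch, the status-message half of the scheme described above is small; below is a minimal sketch assuming Node.js with Express (the route, field names and in-memory store are illustrative assumptions, not a recommendation of a particular product).

    // Minimal sketch: each internal job POSTs a completion message with its application
    // key; the dashboard can then flag any job whose expected check-in never arrived.
    var express = require('express');
    var app = express();
    app.use(express.bodyParser());

    var lastCheckin = {};   // appKey -> { status, at }

    // e.g. the 6AM export POSTs { appKey: "daily-export", status: "completed" } when done
    app.post('/status', function (req, res) {
        lastCheckin[req.body.appKey] = { status: req.body.status, at: new Date() };
        res.send(200);
    });

    // simple dashboard/health endpoint: shows the last check-in per application
    app.get('/dashboard', function (req, res) {
        res.json(lastCheckin);
    });

    app.listen(process.env.PORT || 3000);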

    Read the article

  • Is there any reason to install software as root as opposed to sudo installing software as a sudoer?

    - by Tchalvak
    I'm setting up a new server running Ubuntu Server Edition, and I'm not certain what the difference would be between installing most of the basic software as root, vs installing the basic software as an admin user using sudo apt-get install . For one thing, I'm not sure whether after installing the software as root, I'll need sudo access when running the software as a user (e.g. if I install git as root). On the other hand, if I install software as a user, I could conceive of it not being available to other users that I create in the future. What's the best practice here?

    Read the article

  • Software for scanning installation process?

    - by no name
    I forgot the name of a very good piece of software which makes some kind of restore point (saves the registry, the Program Files folder, etc.) before installing any software. After you install some program (e.g. "Notepad++") you can easily see what registry data the newly installed program uses, where its files are stored, and more. The reason I'm asking for help is that I have to automate the installation of some public software, and after installation I need to uninstall it, so I have to delete all the junk files. The software is called something like "install wizard" or "wizard install", I forget. If you have any idea of another application that does the same thing, or you know the exact name of that software, or you have a good idea of how to easily handle many installations and uninstallations, please let me know. OS: Win7/XP 64/32.

    Read the article
