Search Results

Search found 16491 results on 660 pages for 'root node'.

  • Change default global installation directory for node.js modules in Windows?

    - by gremo
    In my Windows installation, PATH includes C:\Program Files\nodejs, where the executable node.exe is. I'm able to launch node from the shell, as well as npm. I'd like new executables to be installed in C:\Program Files\nodejs as well, but it seems impossible to achieve. Setting the NODE_PATH and NODE_MODULES variables doesn't change anything: things are still installed in %appdata%\npm by default. How can I change the global installation path?

    Read the article

  • Converting "A* Search" code from C++ to Java [on hold]

    - by mr5
    Updated! I get this code from this site It's A* Search Algorithm(finding shortest path with heuristics) I modify most of variable names and some if conditions from the original version to satisfy my syntactic taste. It works in C++ (as I can't see any trouble with it) but fails in Java version. Java Code: String findPath(int startX, int startY, int finishX, int finishY) { @SuppressWarnings("unchecked") LinkedList<Node>[] nodeList = (LinkedList<Node>[]) new LinkedList<?>[2]; nodeList[0] = new LinkedList<Node>(); nodeList[1] = new LinkedList<Node>(); Node n0; Node m0; int nlIndex = 0; // queueList index // reset the node maps for(int y = 0;y < ROW_COUNT; ++y) { for(int x = 0;x < COL_COUNT; ++x) { close_nodes_map[y][x] = 0; open_nodes_map[y][x] = 0; } } // create the start node and push into list of open nodes n0 = new Node( startX, startY, 0, 0 ); n0.updatePriority( finishX, finishY ); nodeList[nlIndex].push( n0 ); open_nodes_map[startY][startX] = n0.getPriority(); // mark it on the open nodes map // A* search while( !nodeList[nlIndex].isEmpty() ) { LinkedList<Node> pq = nodeList[nlIndex]; // get the current node w/ the highest priority // from the list of open nodes n0 = new Node( pq.peek().getX(), pq.peek().getY(), pq.peek().getIterCount(), pq.peek().getPriority()); int x = n0.getX(); int y = n0.getY(); nodeList[nlIndex].pop(); // remove the node from the open list open_nodes_map[y][x] = 0; // mark it on the closed nodes map close_nodes_map[y][x] = 1; // quit searching when the goal state is reached //if((*n0).estimate(finishX, finishY) == 0) if( x == finishX && y == finishY ) { // generate the path from finish to start // by following the directions String path = ""; while( !( x == startX && y == startY) ) { int j = dir_map[y][x]; int c = '0' + ( j + Node.DIRECTION_COUNT / 2 ) % Node.DIRECTION_COUNT; path = (char)c + path; x += DIR_X[j]; y += DIR_Y[j]; } return path; } // generate moves (child nodes) in all possible directions for(int i = 0; i < Node.DIRECTION_COUNT; ++i) { int xdx = x + DIR_X[i]; int ydy = y + DIR_Y[i]; // boundary check if (!(xdx >= 0 && xdx < COL_COUNT && ydy >= 0 && ydy < ROW_COUNT)) continue; if ( ( gridMap.getData( ydy, xdx ) == GridMap.WALKABLE || gridMap.getData( ydy, xdx ) == GridMap.FINISH) && close_nodes_map[ydy][xdx] != 1 ) { // generate a child node m0 = new Node( xdx, ydy, n0.getIterCount(), n0.getPriority() ); m0.nextLevel( i ); m0.updatePriority( finishX, finishY ); // if it is not in the open list then add into that if( open_nodes_map[ydy][xdx] == 0 ) { open_nodes_map[ydy][xdx] = m0.getPriority(); nodeList[nlIndex].push( m0 ); // mark its parent node direction dir_map[ydy][xdx] = ( i + Node.DIRECTION_COUNT / 2 ) % Node.DIRECTION_COUNT; } else if( open_nodes_map[ydy][xdx] > m0.getPriority() ) { // update the priority info open_nodes_map[ydy][xdx] = m0.getPriority(); // update the parent direction info dir_map[ydy][xdx] = ( i + Node.DIRECTION_COUNT / 2 ) % Node.DIRECTION_COUNT; // replace the node // by emptying one queueList to the other one // except the node to be replaced will be ignored // and the new node will be pushed in instead while( !(nodeList[nlIndex].peek().getX() == xdx && nodeList[nlIndex].peek().getY() == ydy ) ) { nodeList[1 - nlIndex].push( nodeList[nlIndex].pop() ); } nodeList[nlIndex].pop(); // remove the wanted node // empty the larger size queueList to the smaller one if( nodeList[nlIndex].size() > nodeList[ 1 - nlIndex ].size() ) nlIndex = 1 - nlIndex; while( !nodeList[nlIndex].isEmpty() ) { nodeList[1 - nlIndex].push( 
nodeList[nlIndex].pop() ); } nlIndex = 1 - nlIndex; nodeList[nlIndex].push( m0 ); // add the better node instead } } } } return ""; // no route found } Output1: Legends . = PATH ? = START X = FINISH 3,2,1 = OBSTACLES (Misleading path) Output2: Changing these lines: n0 = new Node( a, b, c, d ); m0 = new Node( e, f, g, h ); to n0.set( a, b, c, d ); m0.set( e, f, g, h ); I get (I'm really confused) C++ Code: std::string A_Star::findPath(int startX, int startY, int finishX, int finishY) { typedef std::queue<Node> List_Container; List_Container nodeList[2]; // list of open (not-yet-tried) nodes Node n0; Node m0; int pqIndex = 0; // nodeList index // reset the node maps for(int y = 0;y < ROW_COUNT; ++y) { for(int x = 0;x < COL_COUNT; ++x) { close_nodes_map[y][x] = 0; open_nodes_map[y][x] = 0; } } // create the start node and push into list of open nodes n0 = Node( startX, startY, 0, 0 ); n0.updatePriority( finishX, finishY ); nodeList[pqIndex].push( n0 ); open_nodes_map[startY][startX] = n0.getPriority(); // mark it on the open nodes map // A* search while( !nodeList[pqIndex].empty() ) { List_Container &pq = nodeList[pqIndex]; // get the current node w/ the highest priority // from the list of open nodes n0 = Node( pq.front().getX(), pq.front().getY(), pq.front().getIterCount(), pq.front().getPriority()); int x = n0.getX(); int y = n0.getY(); nodeList[pqIndex].pop(); // remove the node from the open list open_nodes_map[y][x] = 0; // mark it on the closed nodes map close_nodes_map[y][x] = 1; // quit searching when the goal state is reached //if((*n0).estimate(finishX, finishY) == 0) if( x == finishX && y == finishY ) { // generate the path from finish to start // by following the directions std::string path = ""; while( !( x == startX && y == startY) ) { int j = dir_map[y][x]; char c = '0' + ( j + DIRECTION_COUNT / 2 ) % DIRECTION_COUNT; path = c + path; x += DIR_X[j]; y += DIR_Y[j]; } return path; } // generate moves (child nodes) in all possible directions for(int i = 0; i < DIRECTION_COUNT; ++i) { int xdx = x + DIR_X[i]; int ydy = y + DIR_Y[i]; // boundary check if (!( xdx >= 0 && xdx < COL_COUNT && ydy >= 0 && ydy < ROW_COUNT)) continue; if ( ( pGrid->getData(ydy,xdx) == WALKABLE || pGrid->getData(ydy, xdx) == FINISH) && close_nodes_map[ydy][xdx] != 1 ) { // generate a child node m0 = Node( xdx, ydy, n0.getIterCount(), n0.getPriority() ); m0.nextLevel( i ); m0.updatePriority( finishX, finishY ); // if it is not in the open list then add into that if( open_nodes_map[ydy][xdx] == 0 ) { open_nodes_map[ydy][xdx] = m0.getPriority(); nodeList[pqIndex].push( m0 ); // mark its parent node direction dir_map[ydy][xdx] = ( i + DIRECTION_COUNT / 2 ) % DIRECTION_COUNT; } else if( open_nodes_map[ydy][xdx] > m0.getPriority() ) { // update the priority info open_nodes_map[ydy][xdx] = m0.getPriority(); // update the parent direction info dir_map[ydy][xdx] = ( i + DIRECTION_COUNT / 2 ) % DIRECTION_COUNT; // replace the node // by emptying one nodeList to the other one // except the node to be replaced will be ignored // and the new node will be pushed in instead while ( !( nodeList[pqIndex].front().getX() == xdx && nodeList[pqIndex].front().getY() == ydy ) ) { nodeList[1 - pqIndex].push( nodeList[pqIndex].front() ); nodeList[pqIndex].pop(); } nodeList[pqIndex].pop(); // remove the wanted node // empty the larger size nodeList to the smaller one if( nodeList[pqIndex].size() > nodeList[ 1 - pqIndex ].size() ) pqIndex = 1 - pqIndex; while( !nodeList[pqIndex].empty() ) { 
nodeList[1-pqIndex].push(nodeList[pqIndex].front()); nodeList[pqIndex].pop(); } pqIndex = 1 - pqIndex; nodeList[pqIndex].push( m0 ); // add the better node instead } } } } return ""; // no route found } Output: Legends . = PATH ? = START X = FINISH 3,2,1 = OBSTACLES (Just right) From what I read about Java's documentation, I came up with the conclusion: C++'s std::queue<T>::front() == Java's LinkedList<T>.peek() Java's LinkedList<T>.pop() == C++'s std::queue<T>::front() + std::queue<T>::pop() What might I be missing in my Java version? In what way does it became different algorithmically from the C++ version?

    Read the article

  • JavaScript Data Binding Frameworks

    - by dwahlin
    Data binding is where it’s at nowadays when it comes to building client-centric Web applications. Developers experienced with desktop frameworks like WPF or web frameworks like ASP.NET, Silverlight, or others are used to being able to take model objects containing data and bind them to UI controls quickly and easily. When moving to client-side Web development, the data binding story hasn’t been great, since neither HTML nor JavaScript natively supports data binding. This means that you have to write code to place data in a control and write code to extract it. Although it’s certainly feasible to do it from scratch (many of us have done it this way for years), it’s definitely tedious and not exactly the best solution when it comes to maintenance and re-use. Over the last few years several different script libraries have been released to simplify the process of binding data to HTML controls. In fact, the subject of data binding is becoming so popular that it seems like a new script library is being released nearly every week. Many of the libraries provide MVC/MVVM pattern support in client-side JavaScript apps and some even integrate directly with server frameworks like Node.js. Here’s a quick list of a few of the available libraries that support data binding (if you like any others please add a comment and I’ll try to keep the list updated): AngularJS MVC framework for data binding (although it closely follows the MVVM pattern). Backbone.js MVC framework with support for models, key/value binding, custom events, and more. Derby Provides a real-time environment that runs in the browser and in Node.js. The library supports data binding and templates. Ember Provides support for templates that automatically update as data changes. JsViews Data binding framework that provides “interactive data-driven views built on top of JsRender templates”. jQXB Expression Binder Lightweight jQuery plugin that supports bi-directional data binding. KnockoutJS MVVM framework with robust support for data binding. For an excellent look at using KnockoutJS check out John Papa’s course on Pluralsight. Meteor End-to-end framework that uses Node.js on the server and provides support for data binding on the client. Simpli5 JavaScript framework that provides support for two-way data binding. WinRT with HTML5/JavaScript If you’re building Windows 8 applications using HTML5 and JavaScript there’s built-in support for data binding in the WinJS library. I won’t have time to write about each of these frameworks, but in the next post I’m going to talk about my (current) favorite when it comes to client-side JavaScript data binding libraries, which is AngularJS. AngularJS provides an extremely clean way – in my opinion – to extend HTML syntax to support data binding while keeping model objects (the objects that hold the data) free from custom framework method calls or other weirdness. While I’m writing up the next post, feel free to visit the AngularJS developer guide if you’d like additional details about the API and want to get started using it.
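
    To make the “write code to place data in a control and write code to extract it” point concrete, here is a minimal sketch (not from the article) of the manual wiring these libraries are designed to remove. It assumes a browser page containing an <input id="name"> and a <span id="greeting">; both element ids are made up for illustration.

        var model = { name: '' };

        var input = document.getElementById('name');
        var greeting = document.getElementById('greeting');

        // view -> model: pull the value out of the control on every keystroke
        input.addEventListener('input', function () {
            model.name = input.value;
            render();
        });

        // model -> view: push the model back into the UI
        function render() {
            greeting.textContent = 'Hello, ' + model.name;
        }

        render();

    A binding library replaces this plumbing with a declaration in the markup and keeps the two sides in sync for you.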

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 3.5: Node.js relay

    - by Elton Stoneman
    This is an extension to Part 3 in the IPASBR series, see also: Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer In Part 3 I said “there isn't actually a .NET requirement here”, and this post just follows up on that statement. In Part 3 we had an ASP.NET MVC Website making a REST call to an Azure Service Bus service; to show that the REST stuff is really interoperable, in this version we use Node.js to make the secure service call. The code is on GitHub here: IPASBR Part 3.5. The sample code is simpler than Part 3 - rather than code up a UI in Node.js, the sample just relays the REST service call out to Azure. The steps are the same as Part 3: REST call to ACS with the service identity credentials, which returns an SWT; REST call to Azure Service Bus Relay, presenting the SWT; request gets relayed to the on-premise service. In Node.js the authentication step looks like this: var options = { host: acs.namespace() + '-sb.accesscontrol.windows.net', path: '/WRAPv0.9/', method: 'POST' }; var values = { wrap_name: acs.issuerName(), wrap_password: acs.issuerSecret(), wrap_scope: 'http://' + acs.namespace() + '.servicebus.windows.net/' }; var req = https.request(options, function (res) { console.log("statusCode: ", res.statusCode); console.log("headers: ", res.headers); res.on('data', function (d) { var token = qs.parse(d.toString('utf8')); callback(token.wrap_access_token); }); }); req.write(qs.stringify(values)); req.end(); Once we have the token, we can wrap it up into an Authorization header and pass it to the Service Bus call: token = 'WRAP access_token=\"' + swt + '\"'; //... var reqHeaders = { Authorization: token }; var options = { host: acs.namespace() + '.servicebus.windows.net', path: '/rest/reverse?string=' + requestUrl.query.string, headers: reqHeaders }; var req = https.request(options, function (res) { console.log("statusCode: ", res.statusCode); console.log("headers: ", res.headers); response.writeHead(res.statusCode, res.headers); res.on('data', function (d) { var reversed = d.toString('utf8') console.log('svc returned: ' + d.toString('utf8')); response.end(reversed); }); }); req.end(); Running the sample Usual routine to add your own Azure details into Solution Items\AzureConnectionDetails.xml and “Run Custom Tool” on the .tt files. Build and you should be able to navigate to the on-premise service at http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 and get a string response, going to the service direct. Install Node.js (v0.8.14 at time of writing), run FormatServiceRelay.cmd, navigate to http://localhost:8013/reverse?string=abc123, and you should get exactly the same response but through Node.js, via Azure Service Bus Relay to your on-premise service. The console logs the WRAP token returned from ACS and the response from Azure Service Bus Relay which it forwards:
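
    The snippets above use Node’s core https module and the querystring module (referenced as qs) without showing the setup, and they call into an acs helper that comes from the sample’s configuration. A minimal sketch of what they assume is below; the acs object here is a hypothetical stand-in with placeholder values, not the actual IPASBR code.

        var https = require('https');
        var qs = require('querystring');   // the "qs" used to build and parse the WRAP payloads

        // hypothetical stand-in for the sample's ACS configuration helper
        var acs = {
            namespace:    function () { return 'your-sb-namespace'; },
            issuerName:   function () { return 'serviceidentity'; },
            issuerSecret: function () { return 'your-issuer-secret'; }
        };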

    Read the article

  • 2 Servers 1 Database - Can I use Redis?

    - by Aust
    Ok I have a couple of questions here. First let me give you some background information. I'm starting a project where I have a node.js server running my application and my website running on another normal server. My application will allow multiple users simultaneous connections and updates to the database so Redis seemed like a good fit there because of its speed and atomic functions. For someone to access my application they have to login with an account. To get an account, they have to signup for one through my website. So my website needs a database, but its not important to have a database like Redis here because it doesn't need it. Which leads me to my first question: 1. Can Redis even be used without node.js? It seems like it would be convenient if both of my servers were using the same database to keep track of information. In some cases, they will keep track of the same information (as in user information) and in other cases, they will be keeping track of separate information. So even if the website wouldn't be taking full advantage of all that Redis has to offer it seems like it would be more convenient. So assuming Redis could be used in this situation that leads to my next question: 2. Since Redis is linked with JavaScript, how would I handle the security from my website users? What would be stopping my website users from opening firebug or chrome's inspector and making changes to the database? Maybe if I designed my site with the layout like this: apply.php-update.php-home.php. Where after they submitted their form it would redirect them to the update page where the JavaScript would run and then redirect them after the database updated to the home page. I don't really know I'm just taking shots in the dark at this point. :) Maybe a better alternative would be to have my node.js application access its own Redis database and also have access to another MySQL database that my website also has access to. Or maybe there is another database that would be better suited for this situation other than Redis. Anyways any direction on this matter would be greatly appreciated. :)
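
    For reference, a minimal sketch of what the node.js side of this can look like with the callback-style node_redis client of that era (npm install redis), assuming a Redis server on localhost and a made-up key name:

        var redis = require('redis');
        var client = redis.createClient();   // defaults to 127.0.0.1:6379

        client.on('error', function (err) {
            console.log('redis error: ' + err);
        });

        // an atomic counter shared by every connected user ('visits' is a hypothetical key)
        client.incr('visits', function (err, count) {
            if (err) throw err;
            console.log('visit number ' + count);
            client.quit();
        });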

    Read the article

  • Sync a CSV file using Node.js

    - by Amit Dugar
    There is a remote CSV file that gets updated every second or so. I need to download it (on a Windows machine) ONCE and then always keep the local file in sync with the remote one. Obviously, downloading the whole file every time is not an option; I need to download only the changes (something like rsync or rdiff-backup). I searched quite a bit but could not find how I can do this. I am sort of new to Node.js and am using this app as an opportunity to expand my Node.js skills. Also, I am planning to use Node.js and to package it using node-webkit (https://github.com/rogerwang/node-webkit).
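
    One hedged sketch of the “download only the changes” idea, assuming the remote server honours HTTP Range requests, the CSV only ever grows by appended rows, and a reasonably recent Node (for the url-plus-options form of https.get); the URL and file names are placeholders:

        var fs = require('fs');
        var https = require('https');

        var LOCAL = 'local.csv';                           // placeholder file name
        var REMOTE = 'https://example.com/remote.csv';     // placeholder URL

        function syncOnce() {
            // ask only for the bytes we do not have yet
            var offset = fs.existsSync(LOCAL) ? fs.statSync(LOCAL).size : 0;

            https.get(REMOTE, { headers: { Range: 'bytes=' + offset + '-' } }, function (res) {
                if (res.statusCode === 206) {                // partial content: append the new rows
                    res.pipe(fs.createWriteStream(LOCAL, { flags: 'a' }));
                } else if (res.statusCode === 200) {         // server ignored Range: take the whole file
                    res.pipe(fs.createWriteStream(LOCAL));
                }
            });
        }

        setInterval(syncOnce, 1000);   // naive polling, roughly once a second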

    Read the article

  • Linux automatically changing permissions on /etc/resolv.conf

    - by rikr
    In various linux servers I see how the permissions of the /etc/resolv.conf file change automatically. In state normal: -r--r--r-- 1 root root 103 Jul 4 11:50 resolv.conf In changed state: -r--r----- 1 root root 103 Jul 4 11:50 resolv.conf I installed auditd for monitoring it, and these are the two entries between the change: type=PATH msg=audit(07/04/2012 12:20:02.719:303) : item=0 name=/etc/resolv.conf inode=137102 dev=fe:00 mode=file,644 ouid=root ogid=root rdev=00:00 type=CWD msg=audit(07/04/2012 12:20:02.719:303) : cwd=/ type=SYSCALL msg=audit(07/04/2012 12:20:02.719:303) : arch=x86_64 syscall=open success=yes exit=3 a0=7feeb1405dec a1=0 a2=1b6 a3=0 items=1 ppid=1585 pid=3445 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=4294967295 comm=hostid exe=/usr/bin/hostid key=(null) type=PATH msg=audit(07/04/2012 12:50:03.727:304) : item=0 name=/etc/resolv.conf inode=137102 dev=fe:00 mode=file,440 ouid=root ogid=root rdev=00:00 type=CWD msg=audit(07/04/2012 12:50:03.727:304) : cwd=/ type=SYSCALL msg=audit(07/04/2012 12:50:03.727:304) : arch=x86_64 syscall=open success=yes exit=3 a0=7f2bcf7abdec a1=0 a2=1b6 a3=0 items=1 ppid=1585 pid=3610 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=4294967295 comm=hostid exe=/usr/bin/hostid key=(null) any ideas?

    Read the article

  • Should a complete newbie use mongoose js? [on hold]

    - by Squirrl
    I drank from the koolaid and jumped aboard the node.js bandwagon even though I barely know javascript. That said, I have the opportunity to work with one of 2 templates. One is just node, express and mongodb, and the second includes mongoose and jade with the other 3 and is easier for me to understand. Yet I'm concerned that if I begin with mongoose, I'll be too high level and miss some of the fundamentals. Is my concern warranted? Should I work my way up or should I just start playing with all the toys from day one?
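
    For what it’s worth, here is a minimal sketch of the kind of code mongoose encourages, using the callback-style API of that era and assuming a local MongoDB plus a made-up User model (illustrative only, not taken from either template):

        var mongoose = require('mongoose');
        mongoose.connect('mongodb://localhost/test');   // placeholder connection string

        // a schema gives documents a declared shape, which the raw driver does not enforce
        var userSchema = new mongoose.Schema({
            name: String,
            email: String
        });
        var User = mongoose.model('User', userSchema);

        // create and save a document through the model
        User.create({ name: 'Ada', email: 'ada@example.com' }, function (err, user) {
            if (err) throw err;
            console.log('saved user ' + user.name);
            mongoose.disconnect();
        });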

    Read the article

  • Reading graph inputs for a programming puzzle and then solving it

    - by Vrashabh
    I just took a programming competition question and I absolutely bombed it. I had trouble right at the beginning itself from reading the input set. The question was basically a variant of this puzzle http://codercharts.com/puzzle/evacuation-plan but also had an hour component in the first line(say 3 hours after start of evacuation). It reads like this This puzzle is a tribute to all the people who suffered from the earthquake in Japan. The goal of this puzzle is, given a network of road and locations, to determine the maximum number of people that can be evacuated. The people must be evacuated from evacuation points to rescue points. The list of road and the number of people they can carry per hour is provided. Input Specifications Your program must accept one and only one command line argument: the input file. The input file is formatted as follows: the first line contains 4 integers n r s t n is the number of locations (each location is given by a number from 0 to n-1) r is the number of roads s is the number of locations to be evacuated from (evacuation points) t is the number of locations where people must be evacuated to (rescue points) the second line contains s integers giving the locations of the evacuation points the third line contains t integers giving the locations of the rescue points the r following lines contain to the road definitions. Each road is defined by 3 integers l1 l2 width where l1 and l2 are the locations connected by the road (roads are one-way) and width is the number of people per hour that can fit on the road Now look at the sample input set 5 5 1 2 3 0 3 4 0 1 10 0 2 5 1 2 4 1 3 5 2 4 10 The 3 in the first line is the additional component and is defined as the number of hours since the resuce has started which is 3 in this case. Now my solution was to use Dijisktras algorithm to find the shortest path between each of the rescue and evac nodes. Now my problem started with how to read the input set. I read the first line in python and stored the values in variables. But then I did not know how to store the values of the distance between the nodes and what DS to use and how to input it to say a standard implementation of dijikstras algorithm. So my question is two fold 1.) How do I take the input of such problems? - I have faced this problem in quite a few competitions recently and I hope I can get a simple code snippet or an explanation in java or python to read the data input set in such a way that I can input it as a graph to graph algorithms like dijikstra and floyd/warshall. Also a solution to the above problem would also help. 2.) How to solve this puzzle? My algorithm was: Find shortest path between evac points (in the above example it is 14 from 0 to 3) Multiply it by number of hours to get maximal number of saves Also the answer given for the variant for the input set was 24 which I dont understand. Can someone explain that also. UPDATE: I get how the answer is 14 in the given problem link - it seems to be just the shortest path between node 0 and 3. 
But with the 3 hour component how is the answer 24 UPDATE I get how it is 24 - its a complete graph traversal at every hour and this is how I solve it Hour 1 Node 0 to Node 1 - 10 people Node 0 to Node 2- 5 people TotalRescueCount=0 Node 1=10 Node 2= 5 Hour 2 Node 1 to Node 3 = 5(Rescued) Node 2 to Node 4 = 5(Rescued) Node 0 to Node 1 = 10 Node 0 to Node 2 = 5 Node 1 to Node 2 = 4 TotalRescueCount = 10 Node 1 = 10 Node 2= 5+4 = 9 Hour 3 Node 1 to Node 3 = 5(Rescued) Node 2 to Node 4 = 5+4 = 9(Rescued) TotalRescueCount = 9+5+10 = 24 It hard enough for this case , for multiple evac and rescue points how in the world would I write a pgm for this ?
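
    On the input-reading part, one common pattern is to read the whole file, split it into integer tokens, and then fill an adjacency list keyed by location number. The asker mentions Java or Python, but the structure is the same in any language; a sketch in JavaScript follows, with the file name and variable names made up for illustration (the extra hours value matches the variant described above).

        var fs = require('fs');

        // read the whole input file into a flat array of integers
        var tokens = fs.readFileSync('input.txt', 'utf8').trim().split(/\s+/).map(Number);
        var pos = 0;
        function next() { return tokens[pos++]; }

        // first line: n locations, r roads, s evacuation points, t rescue points, plus the hours value
        var n = next(), r = next(), s = next(), t = next(), hours = next();

        var evacPoints = [], rescuePoints = [];
        for (var i = 0; i < s; i++) evacPoints.push(next());
        for (var j = 0; j < t; j++) rescuePoints.push(next());

        // adjacency list: adj[l1] holds the one-way roads leaving location l1
        var adj = [];
        for (var k = 0; k < n; k++) adj.push([]);
        for (var e = 0; e < r; e++) {
            var l1 = next(), l2 = next(), width = next();
            adj[l1].push({ to: l2, width: width });
        }

    From there the adjacency list (or an equivalent capacity matrix) can be fed to whichever graph algorithm the puzzle calls for.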

    Read the article

  • How do I close a socket after a timeout in node.js?

    - by rramsden
    I'm trying to close a socket after a connection times out after 1000ms. I am able to set a timeout that gets triggered after a 1000ms but I can't seem to destroy the socket... any ideas? var connection = http.createClient(80, 'localhost'); var request = connection.request('GET', '/somefile.xml', {'host':'localhost'}); var start = new Date().getTime(); request.socket.setTimeout(1000); request.socket.addListener("timeout", function() { request.socket.destroy(); sys.puts("socket timeout connection closed"); }); request.addListener("response", function(response) { var responseBody = []; response.setEncoding("utf8"); response.addListener("data", function(chunk) { sys.puts(chunk); responseBody.push(chunk); }); response.addListener("end", function() { }); }); request.end(); returns socket timeout connection closed node.js:29 if (!x) throw new Error(msg || "assertion error"); ^ Error: assertion error at node.js:29:17 at Timer.callback (net:152:20) at node.js:204:9
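
    For comparison, here is a sketch of the same timeout handling against the newer http.request API (http.createClient was eventually removed from Node); request.setTimeout and request.abort are real calls on reasonably recent versions, but the host, port and path are placeholders:

        var http = require('http');

        var req = http.request({ host: 'localhost', port: 80, path: '/somefile.xml' }, function (res) {
            res.setEncoding('utf8');
            res.on('data', function (chunk) { console.log(chunk); });
            res.on('end', function () { console.log('response finished'); });
        });

        // give up if the connection (or the response) takes longer than 1000ms
        req.setTimeout(1000, function () {
            console.log('socket timeout, aborting request');
            req.abort();   // req.destroy() on newer Node versions
        });

        req.on('error', function (err) {
            console.log('request error: ' + err.message);   // aborting typically surfaces here
        });

        req.end();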

    Read the article

  • How to use Node.js to build pages that are a mix between static and dynamic content?

    - by edt
    All pages on my 5 page site should be output using a Node.js server. Most of the page content is static. At the bottom of each page, there is a bit of dynamic content. My node.js code currently looks like: var http = require('http'); http.createServer(function (request, response) { console.log('request starting...'); response.writeHead(200, { 'Content-Type': 'text/html' }); var html = '<!DOCTYPE html><html><head><title>My Title</title></head><body>'; html += 'Some more static content'; html += 'Some more static content'; html += 'Some more static content'; html += 'Some dynamic content'; html += '</body></html>'; response.end(html, 'utf-8'); }).listen(38316); I'm sure there are numerous things wrong about this example. Please enlighten me! For example: How can I add static content to the page without storing it in a string as a variable value with += numerous times? What is the best practices way to build a small site in Node.js where all pages are a mix between static and dynamic content?
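
    One common low-tech pattern, sketched under the assumption that the static markup lives in a template file (page.html here is a made-up name) containing a placeholder token for the dynamic part:

        var http = require('http');
        var fs = require('fs');

        // read the static shell once at startup; '{{dynamic}}' is an arbitrary placeholder token
        var template = fs.readFileSync('page.html', 'utf8');

        http.createServer(function (request, response) {
            var dynamic = 'Generated at ' + new Date().toISOString();
            var html = template.replace('{{dynamic}}', dynamic);

            response.writeHead(200, { 'Content-Type': 'text/html' });
            response.end(html, 'utf-8');
        }).listen(38316);

    In practice most people reach for a templating module (Jade, EJS and friends) together with something like Express and its static file middleware rather than rolling this by hand.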

    Read the article

  • Can't install NPM after installing Node on EC2 Linux instance?

    - by frequent
    I'm trying my first attempt on getting a node server set up on an amazon ec2 linux instance. I think I made it quite far. First problem I ran into was when trying to make Node the connection timed out after a while, so I need three attempts until I got this: LINK(target) /home/ec2-user/node/out/Release/node: Finished touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_header.stamp touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_provider.stamp touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_ustack.stamp touch /home/ec2-user/node/out/Release/obj.target/node_etw.stamp make[1]: Leaving directory `/home/ec2-user/node/out' ln -fs out/Release/node node Which tells me, "Node is done", although I'm not sure it is also working as it should. Following this,this and this tutorial, I'm now stuck at installing npm. I think I first cloned into the wrong folder, which always gave me error 127, but even if I'm doing this: cd ~ git clone git://github.com/isaacs/npm.git cd npm sudo -s PATH=/usr/local/bin:$PATH make install I'm still getting this: #after cloning# make[1]: Entering directory `/root/npm' node cli.js install bash: node: command not found make[1]: *** [node_modules/.bin/ronn] Error 127 make[1]: Leaving directory `/root/npm' make: *** [man/man3/start.3] Error 2 Question:: Since I'm pretty much a newby at everything I'm trying here, can someone please tell me what I'm doing wrong and how to get npm to install? Also, in case I cloned into the wrong folder, is there a way to remove the "false clone" or is this not written to disk until I call make install and I don't need to worry? Thanks for helping out!

    Read the article

  • nginx proxying websockets, must be missing something

    - by CodeMonkey
    I have a basic chat app written in node.js using express and socket.io; it works fine when connecting directly to node on port 3000 But doesn't work when I try to use nginx v1.4.2 as a proxy. I start off using the connection map map $http_upgrade $connection_upgrade { default upgrade; '' close; } Then add the locations location /socket.io/ { proxy_pass http://node; proxy_redirect off; proxy_http_version 1.1; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Request-Id $txid; proxy_set_header X-Session-Id $uid_set+$uid_got; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_buffering off; proxy_read_timeout 86400; keepalive_timeout 90; proxy_cache off; access_log /var/log/nginx/webservice.access.log; error_log /var/log/nginx/webservice.error.log; } location /web-service/ { proxy_pass http://node; proxy_redirect off; proxy_http_version 1.1; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Request-Id $txid; proxy_set_header X-Session-Id $uid_set+$uid_got; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_buffering off; proxy_read_timeout 86400; keepalive_timeout 90; access_log /var/log/nginx/webservice.access.log; error_log /var/log/nginx/webservice.error.log; rewrite /web-service/(.*) /$1 break; proxy_cache off; } These are built up using all of the tips to get it working that I could find. The error log does not show any errors. (except when I stop node to test the error logging is working) When through nginx I do see a websocket connection in the dev tools, with the status of 101; but the frames tab under the resuects is empty. The only differnece I can see in the response headers is a case difference - "upgrade" vs "Upgrade" - through nginx : Connection:upgrade Date:Fri, 08 Nov 2013 11:49:25 GMT Sec-WebSocket-Accept:LGB+iEBb8Ql9zYfqNfuuXzdzjgg= Server:nginx/1.4.2 Upgrade:websocket direct from node Connection:Upgrade Sec-WebSocket-Accept:8nwPpvg+4wKMOyQBEvxWXutd8YY= Upgrade:websocket output from node (when used through nginx) debug - served static content /socket.io.js debug - client authorized info - handshake authorized iaej2VQlsbLFIhachyb1 debug - setting request GET /socket.io/1/websocket/iaej2VQlsbLFIhachyb1 debug - set heartbeat interval for client iaej2VQlsbLFIhachyb1 debug - client authorized for debug - websocket writing 1:: debug - websocket writing 5:::{"name":"message","args":[{"message":"welcome to the chat"}]} debug - clearing poll timeout debug - jsonppolling writing io.j[0]("8::"); debug - set close timeout for client 7My3F4CuvZC0I4Olhybz debug - jsonppolling closed due to exceeded duration debug - clearing poll timeout debug - jsonppolling writing io.j[0]("8::"); debug - set close timeout for client AkCYl0nWNZAHeyUihyb0 debug - jsonppolling closed due to exceeded duration debug - setting request GET /socket.io/1/xhr-polling/iaej2VQlsbLFIhachyb1?t=1383911206158 debug - setting poll timeout debug - discarding transport debug - cleared heartbeat interval for client iaej2VQlsbLFIhachyb1 debug - setting request GET /socket.io/1/jsonp-polling/iaej2VQlsbLFIhachyb1?t=1383911216160&i=0 debug - setting poll timeout debug - discarding transport debug - clearing poll timeout debug - clearing poll timeout debug - jsonppolling writing io.j[0]("8::"); debug - set close timeout for client iaej2VQlsbLFIhachyb1 debug - jsonppolling closed due to exceeded duration debug - setting request GET 
/socket.io/1/jsonp-polling/iaej2VQlsbLFIhachyb1?t=1383911236429&i=0 debug - setting poll timeout debug - discarding transport debug - cleared close timeout for client iaej2VQlsbLFIhachyb1 when direct to node, the client does not start polling. The normal http stuff node outputs works fine with nginx. Clearly something I am not seeing, but I am stuck, thanks :)

    Read the article

  • Why is 50.22.53.71 hitting my localhost node.js in an attempt to find a php setup

    - by laggingreflex
    I just created a new app using angular-fullstack yeoman generator, edited it a bit to my liking, and ran it with grunt on my localhost, and immediately upon starting up I get this flood of requests to paths that I haven't even defined. Is this a hacking attempt? And if so, how does the hacker (human or bot) immediately know where my server is and when it came online? Note that I haven't made anything online, it's just a localhost setup and I'm merely connected to the internet. (Although my router does allow 80 port incoming.) Whois shows that the IP address belongs to a SoftLayer Technologies. Never heard of it. Express server listening on 80, in development mode GET / [200] | 127.0.0.1 (Chrome 31.0.1650) GET /w00tw00t.at.blackhats.romanian.anti-sec:) [404] | 50.22.53.71 (Other) GET /scripts/setup.php [404] | 50.22.53.71 (Other) GET /admin/scripts/setup.php [404] | 50.22.53.71 (Other) GET /admin/pma/scripts/setup.php [404] | 50.22.53.71 (Other) GET /admin/phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other) GET /db/scripts/setup.php [404] | 50.22.53.71 (Other) GET /dbadmin/scripts/setup.php [404] | 50.22.53.71 (Other) GET /myadmin/scripts/setup.php [404] | 50.22.53.71 (Other) GET /mysql/scripts/setup.php [404] | 50.22.53.71 (Other) GET /mysqladmin/scripts/setup.php [404] | 50.22.53.71 (Other) GET /typo3/phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other) GET /phpadmin/scripts/setup.php [404] | 50.22.53.71 (Other) GET /phpMyAdmin/scripts/setup.php [404] | 50.22.53.71 (Other) GET /phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other) GET /phpmyadmin1/scripts/setup.php [404] | 50.22.53.71 (Other) GET /phpmyadmin2/scripts/setup.php [404] | 50.22.53.71 (Other) GET /pma/scripts/setup.php [404] | 50.22.53.71 (Other) GET /web/phpMyAdmin/scripts/setup.php [404] | 50.22.53.71 (Other) GET /xampp/phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other) GET /web/scripts/setup.php [404] | 50.22.53.71 (Other) GET /php-my-admin/scripts/setup.php [404] | 50.22.53.71 (Other) GET /websql/scripts/setup.php [404] | 50.22.53.71 (Other) GET /phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other) GET /phpMyAdmin/scripts/setup.php [404] | 50.22.53.71 (Other) GET /phpMyAdmin-2/scripts/setup.php [404] | 50.22.53.71 (Other) GET /php-my-admin/scripts/setup.php [404] | 50.22.53.71 (Other) GET /phpMyAdmin-2.5.5/index.php [404] | 50.22.53.71 (Other) GET /phpMyAdmin-2.5.5-pl1/index.php [404] | 50.22.53.71 (Other) GET /phpMyAdmin/ [404] | 50.22.53.71 (Other) GET /phpmyadmin/ [404] | 50.22.53.71 (Other) GET /mysqladmin/ [404] | 50.22.53.71 (Other)

    Read the article

  • Is anyone using Node.js as an actual web server?

    - by Jeremy
    I am trying to convince myself to pick it up and start developing with it, but I want to know if anyone has experienced stability issues or anything of the sort. I understand it isn't "production" quality, like Apache or IIS. I figure for a small site, it should be fine (max of 200 concurrent connections). Should I assume this?

    Read the article

  • Active DFS node did not restore after failure

    - by Mark Henderson
    On Tuesday we had a Server 2008 R2 DFS-R node go offline unexpectedly. DFS did the right thing and started routing requests to a different node, which was in a remote site. This is by design, because even though it's slow, at least it's still working. We had the local DFS-R node back online within an hour, and it had synced all its changes 10 minutes after that. 3 of the 5 terminal servers reset themselves to the local DFS node, but the other two stayed pointing at the remote DFS node for three days, until someone finally piped up about how slow requests were. What reasons could there be why some, but not all, of the servers reverted? Is the currently active DFS node for a namespace exposed anywhere in the OS (WMI, or even scripts) so that we can monitor the active nodes?

    Read the article

  • Elastic beanstalk access private git repo

    - by user221676
    I am currently trying to add an ssh key to my Elastic Beanstalk instances using .ebextensions commands. The keys I have stored are in my application code and I try to copy them to the root .ssh folder so I can access them when doing a git+ssh clone later. Here is an example of the config file in my .ebextensions folder: packages: yum: git: [] container_commands: 01-move-ssh-keys: command: "cp .ssh/* ~root/.ssh/; chmod 400 ~root/.ssh/tca_read_rsa; chmod 400 ~root/.ssh/tca_read_rsa.pub; chmod 644 ~root/.ssh/known_hosts;" 02-add-ssh-keys: command: "ssh-add ~root/.ssh/tca_read_rsa" The problem is that I get a Host key verification failed error when attempting to clone the repo. I have tried many ways of trying to add the host to the known_hosts file but none have worked! The command that is doing the clone is npm install, as the repo points to a node module.

    Read the article

  • With a node.js powered server on EC2, how can I decrease the TCP connection time?

    - by talentedmrjones
    While profiling my application I've noticed that in the Firebug Net panel, the "Connecting" time—that is, the time waiting for a TCP connection—is consistently around 70–100ms. Of course in the grand scheme of things, 100ms is not long, but I have seen other services that respond with a 0ms Connect time. So if other servers can, I should be able to as well. Any thoughts on how I might even begin to troubleshoot this?
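
    One way to start isolating where the 70–100ms goes is to time a bare TCP connection on its own, separately from the HTTP request and response; a small sketch (the host name is a placeholder, and note the timing includes a DNS lookup when a hostname rather than an IP is given):

        var net = require('net');

        var start = process.hrtime();
        var socket = net.connect(80, 'your-ec2-host.example.com', function () {
            var diff = process.hrtime(start);                 // [seconds, nanoseconds]
            var ms = diff[0] * 1000 + diff[1] / 1e6;
            console.log('TCP connect took ' + ms.toFixed(1) + ' ms');
            socket.end();
        });

        socket.on('error', function (err) {
            console.log('connect failed: ' + err.message);
        });

    If that number is small from a machine close to the server but large from far away, the cost is mostly network round-trip time rather than anything node is doing.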

    Read the article

  • Ubuntu, User Accounts messed up

    - by Vor
    I need to fix Ubuntu Accounts some how but don't really see how it could be done. The problem is: files /etc/passwd and /etc/hostname and /etc/hosts where changed. /etc/passwd After John:x:1000:1000:John,,,:/home/serg:/bin/bash Befoure serg:x:1000:1000:John,,,:/home/serg:/bin/bash /etc/hosts After 127.0.0.1 localhost 127.0.1.1 John-The-Rippe Befoure 127.0.0.1 localhost 127.0.1.1 serg-Protege /etc/hostname After John-The-Ripper Befoure serg-PORTEGE-Z835 I was trying to simply changed this files but can not do this because permission denied. When I'm trying to login as a root I got this message: John@John-The-Ripper:~$ sudo -s [sudo] password for John: John is not in the sudoers file. This incident will be reported The file sudoers is empty: John@John-The-Ripper:~$ vi /etc/sudoers When I type users in cp: John@John-The-Ripper:~$ users John John When I type id, I got this: John@John-The-Ripper:~$ id uid=1000(John) gid=1000(serg) groups=1000(serg) This doesn't work eather: John@John-The-Ripper:~$ usermod -l John serg usermod: user 'serg' does not exist John@John-The-Ripper:~$ adduser serg adduser: Only root may add a user or group to the system. ater. Then I tried to go to the GRUB menu and from there log in as a root. I did this, but however When I tryed to create user serg, It gave me an error that group already exist. When I tried to change /etc/passwd it said 'permission denied' And this doens't do the trick: John@John-The-Ripper:~$ visudo visudo: /etc/sudoers: Permission denied visudo: /etc/sudoers: Permission denied Also The last thing I tried to do is to create a bootable USB and reinstall ubuntu, however I can not open USB-Creator because it asked me a root passwd. But it doesn't work. HELP ME PLEASE =)))

    Read the article

  • How to get rid of grub menu after boot?

    - by umpirsky
    Here is my /etc/default/grub: # If you change this file, run 'update-grub' afterwards to update # /boot/grub/grub.cfg. GRUB_DEFAULT=0 GRUB_HIDDEN_TIMEOUT=0 GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TIMEOUT=10 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" GRUB_CMDLINE_LINUX="" # Uncomment to disable graphical terminal (grub-pc only) #GRUB_TERMINAL=console # The resolution used on graphical terminal # note that you can use only modes which your graphic card supports via VBE # you can see them in real GRUB with the command `vbeinfo' #GRUB_GFXMODE=640x480 # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux #GRUB_DISABLE_LINUX_UUID=true # Uncomment to disable generation of recovery mode menu entries #GRUB_DISABLE_LINUX_RECOVERY="true" # Uncomment to get a beep at grub start #GRUB_INIT_TUNE="480 440 1" I tried various things including: How do I hide the GRUB menu showing up in the beginning of boot? How to disable Grub's menu from showing up after failed boot http://www.itworld.com/software/306238/disable-grub-boot-menu-ubuntu-1210 But I still get grub menu each time I boot. My generated /boot/grub/grub.cfg: # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi if [ "${next_entry}" ] ; then set default="${next_entry}" set next_entry= save_env next_entry set boot_once=true else set default="0" fi if [ x"${feature_menuentry_id}" = xy ]; then menuentry_id_option="--id" else menuentry_id_option="" fi export menuentry_id_option if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { if [ x$feature_all_video_module = xy ]; then insmod all_video else insmod efi_gop insmod efi_uga insmod ieee1275_fb insmod vbe insmod vga insmod video_bochs insmod video_cirrus fi } if [ x$feature_default_font_path = xy ] ; then font=unicode else insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi font="/usr/share/grub/unicode.pf2" fi if loadfont $font ; then set gfxmode=auto load_video insmod gfxterm set locale_dir=$prefix/locale set lang=en_US insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ] ; then set timeout=-1 else if [ x$feature_timeout_style = xy ] ; then set timeout_style=hidden set timeout=0 # Fallback hidden-timeout code in case the timeout_style feature is # unavailable. 
elif sleep --interruptible 0 ; then set timeout=0 fi fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 45,51,53; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="${1}" if [ "${1}" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ "${recordfail}" != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-ed6b32bc-ec1d-444c-a000-282fddd6d460' { recordfail load_video gfxmode $linux_gfx_mode insmod gzio insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro quiet splash $vt_handoff initrd /boot/initrd.img-3.13.0-29-generic } submenu 'Advanced options for Ubuntu' $menuentry_id_option 'gnulinux-advanced-ed6b32bc-ec1d-444c-a000-282fddd6d460' { menuentry 'Ubuntu, with Linux 3.13.0-29-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-29-generic-advanced-ed6b32bc-ec1d-444c-a000-282fddd6d460' { recordfail load_video gfxmode $linux_gfx_mode insmod gzio insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi echo 'Loading Linux 3.13.0-29-generic ...' linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro quiet splash $vt_handoff echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.13.0-29-generic } menuentry 'Ubuntu, with Linux 3.13.0-29-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-29-generic-recovery-ed6b32bc-ec1d-444c-a000-282fddd6d460' { recordfail load_video insmod gzio insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi echo 'Loading Linux 3.13.0-29-generic ...' linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.13.0-29-generic } menuentry 'Ubuntu, with Linux 3.13.0-24-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-24-generic-advanced-ed6b32bc-ec1d-444c-a000-282fddd6d460' { recordfail load_video gfxmode $linux_gfx_mode insmod gzio insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi echo 'Loading Linux 3.13.0-24-generic ...' 
linux /boot/vmlinuz-3.13.0-24-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro quiet splash $vt_handoff echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.13.0-24-generic } menuentry 'Ubuntu, with Linux 3.13.0-24-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-24-generic-recovery-ed6b32bc-ec1d-444c-a000-282fddd6d460' { recordfail load_video insmod gzio insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi echo 'Loading Linux 3.13.0-24-generic ...' linux /boot/vmlinuz-3.13.0-24-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.13.0-24-generic } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry 'Ubuntu 14.04 LTS (14.04) (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' --class gnu-linux --class gnu --class os $menuentry_id_option 'osprober-gnulinux-simple-ed6b32bc-ec1d-444c-a000-282fddd6d460' { insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro splash quiet quiet splash $vt_handoff initrd /boot/initrd.img-3.13.0-29-generic } submenu 'Advanced options for Ubuntu 14.04 LTS (14.04) (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' $menuentry_id_option 'osprober-gnulinux-advanced-ed6b32bc-ec1d-444c-a000-282fddd6d460' { menuentry 'Ubuntu (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' --class gnu-linux --class gnu --class os $menuentry_id_option 'osprober-gnulinux-/boot/vmlinuz-3.13.0-29-generic.efi.signed--ed6b32bc-ec1d-444c-a000-282fddd6d460' { insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro splash quiet quiet splash $vt_handoff initrd /boot/initrd.img-3.13.0-29-generic } menuentry 'Ubuntu, with Linux 3.13.0-29-generic (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' --class gnu-linux --class gnu --class os $menuentry_id_option 'osprober-gnulinux-/boot/vmlinuz-3.13.0-29-generic.efi.signed--ed6b32bc-ec1d-444c-a000-282fddd6d460' { insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro splash quiet quiet splash $vt_handoff initrd /boot/initrd.img-3.13.0-29-generic } menuentry 'Ubuntu, with Linux 3.13.0-29-generic (recovery mode) (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' --class gnu-linux --class gnu --class os $menuentry_id_option 
'osprober-gnulinux-/boot/vmlinuz-3.13.0-29-generic.efi.signed-root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro recovery nomodeset splash quiet-ed6b32bc-ec1d-444c-a000-282fddd6d460' { insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro recovery nomodeset splash quiet initrd /boot/initrd.img-3.13.0-29-generic } menuentry 'Ubuntu, with Linux 3.13.0-24-generic (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' --class gnu-linux --class gnu --class os $menuentry_id_option 'osprober-gnulinux-/boot/vmlinuz-3.13.0-24-generic.efi.signed--ed6b32bc-ec1d-444c-a000-282fddd6d460' { insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi linux /boot/vmlinuz-3.13.0-24-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro splash quiet quiet splash $vt_handoff initrd /boot/initrd.img-3.13.0-24-generic } menuentry 'Ubuntu, with Linux 3.13.0-24-generic (recovery mode) (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' --class gnu-linux --class gnu --class os $menuentry_id_option 'osprober-gnulinux-/boot/vmlinuz-3.13.0-24-generic.efi.signed-root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro recovery nomodeset splash quiet-ed6b32bc-ec1d-444c-a000-282fddd6d460' { insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi linux /boot/vmlinuz-3.13.0-24-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro recovery nomodeset splash quiet initrd /boot/initrd.img-3.13.0-24-generic } } set timeout_style=menu if [ "${timeout}" = 0 ]; then set timeout=10 fi ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/30_uefi-firmware ### menuentry 'System setup' $menuentry_id_option 'uefi-firmware' { fwsetup } ### END /etc/grub.d/30_uefi-firmware ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f ${config_directory}/custom.cfg ]; then source ${config_directory}/custom.cfg elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ###

    Read the article

  • Removing 301 redirect from site root

    - by Jon Clements
    I'm having a look at a friends website (a fairly old PHP based one) which they've been advised needs re-structuring. The key points being: URLs should be lower case and more "friendly". The root of the domain should be not be re-directed. The first point I'm happy with (and the URLs needed tidying up anyway) and have a draft plan of action, however the second is baffling me as to not only the best way to do it, but also whether it should be done. Currently http://www.example.com/ is redirected to http://www.example.com/some-link-with-keywords/ using the follow index.php in the root of the Apache2 instance. <?php $nextpage = "some-link-with-keywords/"; header( "HTTP/1.1 301 Moved Permanently" ); header( "Status: 301 Moved Permanently" ); header("Location: $nextpage"); exit(0); // This is Optional but suggested, to avoid any accidental output ?> As far as I'm aware, this has been the case for around three years -- and I'm sorely tempted to advise to not worry about it. It would appear taking off the 301 could: Potentially affect page ranking (as the 'homepage' would disappear - although it couldn't disappear because of the next point...) Introduce maintainance issues as existing users would still have the re-directed page in their cache Following the above, introduce duplicate content Confuse Google/other SE's as to what the homepage actually is now I may be over-analysing this but I have a feeling it's not as simple as removing the 301 from the root, and 301'ing the previous target to the root... Any suggestions (including it's not worth it) are sincerely appreciated.

    Read the article

  • Ubuntu 10.04: boot error for custom compiled kernel - gave up waiting for root device

    - by atharva
    Hi, I have installed lucid on my Lenevo Laptop (Y 410 series , x86 platoform) and it is working fine. Now I have compiled kernel 2.6.37 from the downloaded from the kernel tree. I followed usual procedure of compileing kernel (make menuconfig,make. make modules etc). Then I created the initrd image using mkinitramfs and updated my grub using upadate grub command. Update-grub detects the initrd image of the compiled kernel. However when I boot from this kernel it gives me following error: Gave up waiting for root device. Common problems: -Boot args (cat /proc/cmdline) -Check rootdelay= (did the system wait long enough?) -Check root= (did the system wait for the right device?) -Missing modules (cat /proc/modules; ls /dev) ALERT! root=UUID=/... does not exist and then it falls onto initramfs prompt. I have tried following solutions discussed in different ubuntu forums: 1. disable uuid and point root=/dev/sda8 (sda8 is where my kernele image resides (both default kernel and compiled one) from /etc/default/grub 2. compile kernel using CONFIG_DEVTMPFS=y suggested here Still I am unable to boot from the compile kernel. Could someone please suggest me the solution ?

    Read the article

  • Ubuntu 10.04: boot error for custom compiled kernel - gave up waiting for root device

    - by atharva
    I have installed lucid on my Lenevo Laptop (Y 410 series , x86 platform) and it is working fine. Now I have compiled kernel 2.6.37 downloaded from the kernel tree. I followed usual procedure of compiling kernel (make menuconfig, make, make modules etc). Then I created the initrd image using mkinitramfs and updated my grub using update-grub command. update-grub detects the initrd image of the compiled kernel. However when I boot from this kernel it gives me following error: Gave up waiting for root device. Common problems: -Boot args (cat /proc/cmdline) -Check rootdelay= (did the system wait long enough?) -Check root= (did the system wait for the right device?) -Missing modules (cat /proc/modules; ls /dev) ALERT! root=UUID=/... does not exist and then it falls onto initramfs prompt. I have tried following solutions discussed in different Ubuntu forums: disable uuid and point root=/dev/sda8 (sda8 is where my kernel image resides (both default kernel and compiled one) from /etc/default/grub compile kernel using CONFIG_DEVTMPFS=y suggested here Still I am unable to boot from the compile kernel. Could someone please suggest me the solution?

    Read the article

  • Finding the XPath with the node name

    - by julien.schneider(at)oracle.com
    A function that i find missing is to get the Xpath expression of a node. For example, suppose i only know the node name <theNode>, i'd like to get its complete path /Where/is/theNode.   Using this rather simple Xquery you can easily get the path to your node. declare namespace orcl = "http://www.oracle.com/weblogic_soa_and_more"; declare function orcl:findXpath($path as element()*) as xs:string { if(local-name($path/..)='') then local-name($path) else concat(orcl:findXpath($path/..),'/',local-name($path)) }; declare function orcl:PathFinder($inputRecord as element(), $path as element()) as element(*) { { for $index in $inputRecord//*[local-name()=$path/text()] return orcl:findXpath($index) } }; declare variable $inputRecord as element() external; declare variable $path as element() external; orcl:PathFinder($inputRecord, $path)   With a path         <myNode>nodeName</myNode>  and a message         <node1><node2><nodeName>test</nodeName></node2></node1>  the result will be         node1/node2/nodeName   This is particularly useful when you use the Validate action of OSB because Validate only returns the xml node which is in error and not the full location itself. The following OSB project reuses this Xquery to reformat the result of the Validate Action. Just send an invalid xml like <myElem http://blogs.oracle.com/weblogic_soa_and_more"http://blogs.oracle.com/weblogic_soa_and_more">      <mySubElem>      </mySubElem></myElem>   you'll get as nice <MessageIsNotValid> <ErrorDetail  nbr="1"> <dataElementhPath>Body/myElem/mySubElem</dataElementhPath> <message> Expected element 'Subelem1@http://blogs.oracle.com/weblogic_soa_and_more' before the end of the content in element mySubElem@http://blogs.oracle.com/weblogic_soa_and_more </message> </ErrorDetail> </MessageIsNotValid>   Download the OSB project : sbconfig_xpath.jar   Enjoy.            

    Read the article
