Search Results

Search found 16491 results on 660 pages for 'root node'.

Page 16/660 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • Sorted exsl:node-set. Return node by its position.

    - by kalininew
    Good afternoon, gentlemen. Help me solve a very simple task. I have a set of nodes <menuList> <mode name="aasdf"/> <mode name="vfssdd"/> <mode name="aswer"/> <mode name="ddffe"/> <mode name="ffrthjhj"/> <mode name="dfdf"/> <mode name="vbdg"/> <mode name="wewer"/> <mode name="mkiiu"/> <mode name="yhtyh"/> and so on... </menuList> I currently have it sorted this way <xsl:variable name="rtf"> <xsl:for-each select="//menuList/mode"> <xsl:sort data-type="text" order="ascending" select="@name"/> <xsl:value-of select="@name"/> </xsl:for-each> </xsl:variable> Now I need to get an arbitrary element of the sorted array by its position number. I write the code <xsl:value-of select="exsl:node-set($rtf)[position() = 3]"/> and get an error in response. How do I do this correctly?

    Read the article

  • express+jade: provided local variable is undefined in view (node.js + express + jade)

    - by Jake
    Hello. I'm implementing a webapp using node.js and express, using the jade template engine. Templates render fine, and can access helpers and dynamic helpers, but not local variables other than the "body" local variable, which is provided by express and is available and defined in my layout.jade. This is some of the code: app.set ('view engine', 'jade'); app.get ("/test", function (req, res) { res.render ('test', { locals: { name: "jake" } }); }); and this is test.jade: p hello =name when I remove the second line (referencing name), the template renders correctly, showing the word "hello" in the web page. When I include the =name, it throws a ReferenceError: 500 ReferenceError: Jade:2 NaN. 'p hello' NaN. '=name' name is not defined NaN. 'p hello' NaN. '=name' I believe I'm following the jade and express examples exactly with respect to local variables. Am I doing something wrong, or could this be a bug in express or jade?
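    A minimal sketch of what may be going wrong (hypothetical code, not the asker's app): in newer Express releases the locals are passed at the top level of the render options rather than nested under a "locals" key, so whether the nested form reaches the template depends on the Express version in use.

        // hypothetical sketch -- adjust to the Express version actually installed
        var express = require('express');
        var app = express.createServer();            // Express 2.x constructor, as in the question
        app.set('view engine', 'jade');

        app.get('/test', function (req, res) {
          // Express 3.x and later: pass locals at the top level of the options object
          res.render('test', { name: 'jake' });
          // Express 2.x also accepts the nested form:
          // res.render('test', { locals: { name: 'jake' } });
        });

        app.listen(3000);

    In test.jade, "p= name" (or "=name" on its own line) should then render the value once the local actually reaches the template.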

    Read the article

  • Azure SDK causes Node.js service bus call to run slow

    - by PazoozaTest Pazman
    I am using this piece of code to call the service bus queue from my node.js server running locally using WebMatrix; I have also uploaded it to Windows Azure "web sites" and it still performs slowly. var sb1 = azure.createServiceBusService(config.serviceBusNamespace, config.serviceBusAccessKey); sbMessage = { "Entity": { "SerialNumbersToCreate": '0', "SerialNumberSize": config.usageRates[3], "BlobName": 'snvideos' + channel.ChannelTableName, "TableName": 'snvideos' + channel.ChannelTableName } }; sb1.getQueue('serialnumbers', function(error, queue){ if (error === null){ sb1.sendQueueMessage('serialnumbers', JSON.stringify(sbMessage), function(error) { if (!error) res.send(req.query.callback + '({data: ' + JSON.stringify({ success: true, video: newVideo }) + '});'); else res.send(req.query.callback + '({data: ' + JSON.stringify({ success: false }) + '});'); }); } else res.send(req.query.callback + '({data: ' + JSON.stringify({ success: false }) + '});'); }); It can be up to 5 seconds before the server responds back to the client with the result. When I comment out the sb1.getQueue('serialnumbers', function(error, queue){ and just have it return without sending a queue message, it performs in less than 1 second. Why is that? Is my approach to using the Azure SDK service bus correct? Any help would be appreciated.
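    One hedged guess at the latency: every request performs a getQueue round trip before the send, and if sb1 is created per request there is extra handshake cost on top of that. sendQueueMessage does not need a prior getQueue call, so a minimal sketch (hypothetical code, reusing the asker's config object and queue name) is to build the client once and send directly:

        // hypothetical sketch: one shared client, no getQueue round trip per request
        var azure = require('azure');

        var serviceBus = azure.createServiceBusService(config.serviceBusNamespace,
                                                       config.serviceBusAccessKey);

        function enqueue(message, done) {
          // the 'serialnumbers' queue is assumed to exist already
          serviceBus.sendQueueMessage('serialnumbers', JSON.stringify(message), done);
        }

    If the HTTP client should not wait on the service bus at all, the JSONP response could also be sent first and the queue message posted afterwards, accepting that a failed send is then only logged rather than reported back.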

    Read the article

  • node.js UDP data loss at high packet rates

    - by koleto
    I am observing significant data loss on a UDP connection with node.js 0.6.18 and 0.8.0. It appears at high packet rates, about 1200 packets per second with frames close to the 1500-byte limit. Each data packet has an incrementing number, so it is easy to track the number of lost packets. var server = dgram.createSocket("udp4"); server.on("message", function (message, rinfo) { //~processData(message); //~ writeData(message, null, 5000); }).bind(10001); In the receiving callback I tested two cases. First I saved 5000 packets to a file; the result was no dropped packets. Then I included a data processing routine and got about a 50% drop rate. What I expected was that the processing routine would be completely asynchronous and would not introduce dead time into the system, since it is a simple parser that processes the binary data in the packet and emits events to a further processing routine. It seems that the parsing routine introduces dead time in which the event handler is unable to handle each packet. At low packet rates (< 1200 packets/sec) there is no data loss observed! Is this a bug or am I doing something wrong?
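    One thing worth noting: a parser that runs inside the "message" callback is not asynchronous just because it emits events; it occupies the event loop for its full duration, during which the socket is not drained and the kernel receive buffer can overflow. A minimal sketch (hypothetical code) that keeps the callback cheap and parses in batches:

        // hypothetical sketch: queue raw datagrams, parse them outside the handler
        var dgram = require('dgram');

        // stand-in for the asker's parser so the sketch runs on its own
        function processData(buf) { /* parse binary data, emit events */ }

        var server = dgram.createSocket('udp4');
        var backlog = [];

        server.on('message', function (message) {
          backlog.push(message);               // no parsing inside the handler
        });

        setInterval(function () {
          var batch = backlog;
          backlog = [];
          for (var i = 0; i < batch.length; i++) {
            processData(batch[i]);
          }
        }, 10);

        server.bind(10001);

    Whether this removes all loss at 1200 packets/sec depends on the parser's cost and on the OS receive buffer size, so treat it as an experiment rather than a fix.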

    Read the article

  • What is the most common way to use a middleware in node with express and connect

    - by Bernhard
    I'm thinking about the correct way to make use of middleware in a growing node.js web project that uses express and connect. Of course there is middleware that has to pass through or extend requests globally, but in a lot of cases there are special jobs, like preparing incoming data, where the middleware should only apply to a set of HTTP methods and routes. I have a component-based architecture, and each component brings its own middleware layer, which it can implement for the requests that component can handle. On app startup any required component is loaded and prepared. Is it a good idea to bind the middleware execution to URLs to keep CPU load lower, or is it better to use middleware only for global purposes? Here's a dummy example of what a URL-related middleware looks like. app.use(function(req, res, next) { // Check if requested route is a part of the current component // or if the middleware should be passed on any request if (APP.controller.groups.Component.isExpectedRoute(req) || APP.controller.groups.Component.getConfig().MIDDLEWARE_PASS_ALL === true) { // Execute the middleware code here console.log('This is a route which should be affected by middleware'); ... next(); }else{ next(); } });
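    A minimal sketch of the route-scoped alternative (hypothetical code): Connect and Express both treat the first argument of app.use() as a path prefix, and route-level middleware can be listed before the handler, so the URL check does not have to live inside the middleware body at all.

        var express = require('express');
        var app = express.createServer();

        // runs only for requests whose path starts with /component
        app.use('/component', function (req, res, next) {
          // prepare incoming data for this component
          next();
        });

        // runs only for this route and HTTP method
        function prepareItems(req, res, next) { next(); }
        app.get('/component/items', prepareItems, function (req, res) {
          res.send('ok');
        });

        app.listen(3000);

    Middleware mounted this way is simply skipped for non-matching requests, which keeps the per-request overhead close to a string comparison.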

    Read the article

  • node.js / socket.io, cookies only working locally

    - by Ben Griffiths
    I'm trying to use cookie based sessions, however it'll only work on the local machine, not over the network. If I remove the session related stuff, it will however work just great over the network... You'll have to forgive the lack of quality code here, I'm just starting out with node/socket etc etc, and finding any clear guides is tough going, so I'm in n00b territory right now. Basically this is so far hacked together from various snippets with about 10% understanding of what I'm actually doing... The error I see in Chrome is: socket.io.js:1632GET http://192.168.0.6:8080/socket.io/1/?t=1334431940273 500 (Internal Server Error) Socket.handshake ------- socket.io.js:1632 Socket.connect ------- socket.io.js:1671 Socket ------- socket.io.js:1530 io.connect ------- socket.io.js:91 (anonymous function) ------- /socket-test/:9 jQuery.extend.ready ------- jquery.js:438 And in the console for the server I see: debug - served static content /socket.io.js debug - authorized warn - handshake error No cookie My server is: var express = require('express') , app = express.createServer() , io = require('socket.io').listen(app) , connect = require('express/node_modules/connect') , parseCookie = connect.utils.parseCookie , RedisStore = require('connect-redis')(express) , sessionStore = new RedisStore(); app.listen(8080, '192.168.0.6'); app.configure(function() { app.use(express.cookieParser()); app.use(express.session( { secret: 'YOURSOOPERSEKRITKEY', store: sessionStore })); }); io.configure(function() { io.set('authorization', function(data, callback) { if(data.headers.cookie) { var cookie = parseCookie(data.headers.cookie); sessionStore.get(cookie['connect.sid'], function(err, session) { if(err || !session) { callback('Error', false); } else { data.session = session; callback(null, true); } }); } else { callback('No cookie', false); } }); }); var users_count = 0; io.sockets.on('connection', function (socket) { console.log('New Connection'); var session = socket.handshake.session; ++users_count; io.sockets.emit('users_count', users_count); socket.on('something', function(data) { io.sockets.emit('doing_something', data['data']); }); socket.on('disconnect', function() { --users_count; io.sockets.emit('users_count', users_count); }); }); My page JS is: jQuery(function($){ var socket = io.connect('http://192.168.0.6', { port: 8080 } ); socket.on('users_count', function(data) { $('#client_count').text(data); }); socket.on('doing_something', function(data) { if(data == '') { window.setTimeout(function() { $('#target').text(data); }, 3000); } else { $('#target').text(data); } }); $('#textbox').keydown(function() { socket.emit('something', { data: 'typing' }); }); $('#textbox').keyup(function() { socket.emit('something', { data: '' }); }); });
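    A hedged observation (hypothetical snippets, not a confirmed diagnosis): the connect.sid cookie only exists in the browser for a host it has actually been set on. If the page is served from one host name while io.connect() points at another (for example localhost versus 192.168.0.6), the handshake request carries no cookie and the server logs exactly "handshake error No cookie". Serving the page from the same Express app and deriving the host from the page's own location keeps both on one origin:

        // server side (sketch): let the same Express app serve the page
        app.use(express.static(__dirname + '/public'));

        // client side (sketch): no hard-coded IP, so the cookie host always matches
        var socket = io.connect('http://' + window.location.hostname + ':8080');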

    Read the article

  • How to tell gnome which file to use to change backlight brightness?

    - by cebe
    I have a Dell Inspiron M5010 and I am unable to change my backlight brightness with the F-Keys and it also does not work with gnome brightness widget. I am able to change backlight brightness manually on the terminal with $ sudo -s $ cd /sys/devices/virtual/backlight/dell_backlight $ ls -la insgesamt 0 drwxr-xr-x 3 root root 0 2012-04-06 13:03 . drwxr-xr-x 3 root root 0 2012-04-06 13:03 .. -r--r--r-- 1 root root 4096 2012-04-06 13:17 actual_brightness -rw-r--r-- 1 root root 4096 2012-04-06 13:17 bl_power -rw-r--r-- 1 root root 4096 2012-04-06 13:03 brightness -r--r--r-- 1 root root 4096 2012-04-06 13:03 max_brightness drwxr-xr-x 2 root root 0 2012-04-06 13:17 power lrwxrwxrwx 1 root root 0 2012-04-06 13:03 subsystem -> ../../../../class/backlight -rw-r--r-- 1 root root 4096 2012-04-06 13:03 uevent $ echo 8 > brightness Can I configure gnome power manager to use the right files somehow?

    Read the article

  • Got a problem with installation. "No root file system is defined."

    - by user92322
    I'm very new to Ubuntu and to Linux in general. Ubuntu seems like a really good and stable OS, so I decided to install it alongside my Windows 7 OS. I have a few problems with the installation. Here is what I did: I downloaded the 64-bit version from the official Ubuntu website and burned it to a DVD. I set the boot sequence to load from my CD-ROM first. The Ubuntu installation started, and I chose "Install Ubuntu" in the menu (where there is also a "Try Ubuntu" option). I clicked forward until I got to the installation type screen, but the installer won't show the actual details of my hard drive! I have one 750 GB hard drive: 80 GB - my main drive with the Windows 7 OS, 600 GB - all of my stuff, 20 GB - free space that I saved for Ubuntu. But the installer won't show that!

    Read the article

  • C++ linked list based tree structure. Sanely move nodes between lists.

    - by krunk
    The requirements: Each Node in the list must contain a reference to its previous sibling Each Node in the list must contain a reference to its next sibling Each Node may have a list of child nodes Each child Node must have a reference to its parent node Basically what we have is a tree structure of arbitrary depth and length. Something like: -root(NULL) --Node1 ----ChildNode1 ------ChildOfChild --------AnotherChild ----ChildNode2 --Node2 ----ChildNode1 ------ChildOfChild ----ChildNode2 ------ChildOfChild --Node3 ----ChildNode1 ----ChildNode2 Given any individual node, you need to be able to either traverse its siblings. the children, or up the tree to the root node. A Node ends up looking something like this: class Node { Node* previoius; Node* next; Node* child; Node* parent; } I have a container class that stores these and provides STL iterators. It performs your typical linked list accessors. So insertAfter looks like: void insertAfter(Node* after, Node* newNode) { Node* next = after->next; after->next = newNode; newNode->previous = after; next->previous = newNode; newNode->next = next; newNode->parent = after->parent; } That's the setup, now for the question. How would one move a node (and its children etc) to another list without leaving the previous list dangling? For example, if Node* myNode exists in ListOne and I want to append it to listTwo. Using pointers, listOne is left with a hole in its list since the next and previous pointers are changed. One solution is pass by value of the appended Node. So our insertAfter method would become: void insertAfter(Node* after, Node newNode); This seems like an awkward syntax. Another option is doing the copying internally, so you'd have: void insertAfter(Node* after, const Node* newNode) { Node *new_node = new Node(*newNode); Node* next = after->next; after->next = new_node; new_node->previous = after; next->previous = new_node; new_node->next = next; new_node->parent = after->parent; } Finally, you might create a moveNode method for moving and prevent raw insertion or appending of a node that already has been assigned siblings and parents. // default pointer value is 0 in constructor and a operator bool(..) // is defined for the Node bool isInList(const Node* node) const { return (node->previous || node->next || node->parent); } // then in insertAfter and friends if(isInList(newNode) // throw some error and bail I thought I'd toss this out there and see what folks came up with.

    Read the article

  • XML: remove child node of a node

    - by nebenmir
    I want to find all nodes in an XML file that have a certain tag name, let's say "foo". If those foo tags themselves have child nodes with the node name "bar", then I want to remove those nodes. The result should be written to a file. // remove this one // don't remove this one Thanks for any hints. As the tag indicates, I would like to do this with Python.

    Read the article

  • MongoDB: why is my mongo server using two PIDs?

    - by Lucas
    I started my mongo with the following command: [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data 2014-06-07T08:46:30.507+0000 [initandlisten] MongoDB starting : pid=6409 port=27017 dbpat h=/home/lucas/node/nodetest2/data 64-bit host=ecoinstance 2014-06-07T08:46:30.508+0000 [initandlisten] db version v2.6.1 2014-06-07T08:46:30.508+0000 [initandlisten] git version: 4b95b086d2374bdcfcdf2249272fb55 2c9c726e8 2014-06-07T08:46:30.508+0000 [initandlisten] build info: Linux build14.nj1.10gen.cc 2.6.3 2-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49 2014-06-07T08:46:30.509+0000 [initandlisten] allocator: tcmalloc 2014-06-07T08:46:30.509+0000 [initandlisten] options: { storage: { dbPath: "/home/lucas/n ode/nodetest2/data" } } 2014-06-07T08:46:30.520+0000 [initandlisten] journal dir=/home/lucas/node/nodetest2/data/ journal 2014-06-07T08:46:30.520+0000 [initandlisten] recover : no journal files present, no recov ery needed 2014-06-07T08:46:30.527+0000 [initandlisten] waiting for connections on port 27017 It appears to be working, as I can execute mongo and access the server. However, here are the process running mongo: [lucas@ecoinstance]~/node/testSite$ ps aux | grep mongo root 6540 0.0 0.2 33424 1664 pts/3 S+ 08:52 0:00 sudo mongod --dbpath /ho me/lucas/node/nodetest2/data root 6541 0.6 8.6 522140 52512 pts/3 Sl+ 08:52 0:00 mongod --dbpath /home/lu cas/node/nodetest2/data lucas 6554 0.0 0.1 7836 876 pts/4 S+ 08:52 0:00 grep mongo As you can see, there are two PID's for mongo. Before I ran sudo mongod --dbpath /home/lucas/node/nodetest2/data, there were none (besides the grep of course). How did my command spawn two PID's, and should I be concerned? Any suggestions or tips would be great. Additional Info In addition, I may have other issues that might suggest a cause. I tried running mongo with --fork --logpath /home/lucas..., but it did not work. More information below: [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data --fork --logpath /home/lucas/node/nodetest2/data/ about to fork child process, waiting until server is ready for connections. forked process: 6578 ERROR: child process failed, exited with error number 1 [lucas@ecoinstance]~/node/nodetest2$ ls -l data/ total 163852 drwxr-xr-x 2 mongodb nogroup 4096 Jun 7 08:54 journal -rw------- 1 mongodb nogroup 67108864 Jun 7 08:52 local.0 -rw------- 1 mongodb nogroup 16777216 Jun 7 08:52 local.ns -rwxr-xr-x 1 mongodb nogroup 0 Jun 7 08:54 mongod.lock -rw------- 1 mongodb nogroup 67108864 Jun 7 02:08 nodetest1.0 -rw------- 1 mongodb nogroup 16777216 Jun 7 02:08 nodetest1.ns Also, my db path folder is not the original location. It was originally created under the default /var/lib/mongodb/ and moved to my local data folder. This was done after shutting down the server via /etc/init.d/mongod stop. I have a Debian Wheezy server, if it matters.

    Read the article

  • sudo in Debian squeeze inside linux-vserver always wants password

    - by mark
    Every since I upgraded all my linux-vserver Debian guests from Lenny to Squeeze I've the apparent problem that whenever I want to use sudo it asks me for my password. Every time. I've configured sudo to have a timeout of 30 minutes: Defaults timestamp_timeout=30 . This has been configured when it was still Lenny (note: as suggested by EightBitTony I've also tried without this setting - no change). I've a hard time figuring out what the problem here is, since I think my configuration is right. I thought about it being a problem with the file used to record the timestamp, maybe a permission issue, but was unlucky to find any hard evidence. I've compared the contents of /var/lib/sudo/ between a working and a non-working system but couldn't spot any difference. The version of sudo used in both environments is 1.7.4p4-2.squeeze.3. My non-working system(s): find /var/lib/sudo/ -ls 17319289 4 drwx------ 4 root root 4096 Jan 1 1985 /var/lib/sudo/ 17319286 4 drwx------ 2 root mark 4096 Jan 1 1985 /var/lib/sudo/mark 17319312 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/6 17319361 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/9 17319490 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/10 17319326 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/4 17319491 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/2 A working system: find /var/lib/sudo -ls 2598921 4 drwx------ 5 root root 4096 Jan 1 1985 /var/lib/sudo 1999522 4 drwx------ 2 root mark 4096 Jan 1 1985 /var/lib/sudo/mark 2000781 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/8 1998998 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/17 1999459 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/26 1998930 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/24 2000771 4 -rw------- 1 root mark 40 Jun 25 11:39 /var/lib/sudo/mark/4 2000773 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/5 1999223 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/0 1998908 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/14 2000769 4 -rw------- 1 root mark 40 Jul 9 13:30 /var/lib/sudo/mark/2 2000770 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/3 2000782 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/9 2000778 4 -rw------- 1 root mark 40 Jul 8 00:11 /var/lib/sudo/mark/7 1998892 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/19 1999264 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/23 2000789 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/12 1999093 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/25 1998880 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/18 1998853 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/20 2000790 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/15 1998878 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/16 1998874 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/13 2000774 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/6 2000786 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/11 1998893 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/22 2000783 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/10 1998949 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/1 Despite the obvious (some up2date timestamps on the working system) I don't see anything wrong here, so it could be as well be a wrong track. 
Here's my current /etc/sudoers: # /etc/sudoers # # This file MUST be edited with the 'visudo' command as root. # # See the man page for details on how to write a sudoers file. # Defaults env_reset # Host alias specification # User alias specification User_Alias FULLADMIN = user1, user2, user3 # Cmnd alias specification # User privilege specification root ALL=(ALL) ALL FULLADMIN ALL = (ALL) ALL # Allow members of group sudo to execute any command # (Note that later entries override this, so you might need to move # it further down) %sudo ALL=(ALL) ALL # #includedir /etc/sudoers.d #Defaults always_set_home,timestamp_timeout=30

    Read the article

  • cf3 Can't stat ... in files.copyfrom promise

    - by Xerxes
    On the client: # cf-agent -KIv ... cf3 -> Handling file existence constraints on /etc/cfengine3 cf3 -> Copy file /etc/cfengine3 from /srv/cfengine/sysconf/server/inputs check cf3 No existing connection to 172.31.69.83 is established... cf3 Set cfengine port number to 5308 = 5308 cf3 -> Connect to 172.31.69.83 = 172.31.69.83 on port 5308 cf3 LastSaw host 172.31.69.83 now cf3 Loaded /var/lib/cfengine3/ppkeys/root-172.31.69.83.pub cf3 .....................[.h.a.i.l.]................................. cf3 Strong authentication of server=172.31.69.83 connection confirmed cf3 Server returned error: Unspecified server refusal (see verbose server output) cf3 Can't stat /srv/cfengine/sysconf/server/inputs in files.copyfrom promise cf3 ?> defining promise result class Cfengine_Inputs_Updated_Failed .... cf3 ......................................................... cf3 Promise handle: cf3 Promise made by: [cf-agent.cf ] FAILED 172.31.69.83:///srv/cfengine/sysconf/server/inputs -> localhost:///etc/cfengine3 However, on the server (172.31.69.83), there's no reason why it can't stat the directory: cyrus:/srv/cfengine/sysconf/server# ls -l /srv/cfengine/sysconf/server/inputs total 52 -rw-r--r-- 1 root root 2142 Sep 6 21:54 cf-agent.cf -rw-r--r-- 1 root root 831 Sep 6 18:31 cf-execd.cf -rw-r--r-- 1 root root 4517 Sep 6 21:44 cf-serverd.cf -rw-r--r-- 1 root root 3082 Sep 6 21:44 dns.cf -rw-r--r-- 1 root root 2028 Sep 6 15:12 failsafe.cf -rw-r--r-- 1 root root 5966 Sep 6 21:44 ldap-masters.cf -rw-r--r-- 1 root root 4380 Sep 6 18:31 ldap-security.cf -rw-r--r-- 1 root root 2735 Sep 6 08:21 lib-core.cf -rw-r--r-- 1 root root 1506 Sep 6 21:45 lib-utils.cf -rw-r--r-- 1 root root 2635 Sep 6 20:27 lib-vars.cf -rw-r--r-- 1 root root 2057 Sep 3 17:46 nss.cf -rw-r--r-- 1 root root 1472 Sep 6 18:31 packages.cf -rw-r--r-- 1 root root 1257 Sep 6 18:01 pam-security.cf -rw-r--r-- 1 root root 4019 Sep 6 19:32 promises.cf -rw-r--r-- 1 root root 2808 Sep 3 17:22 site.cf -rw-r--r-- 1 root root 1670 Sep 6 18:31 sudo-security.cf -rw-r--r-- 1 root root 831 Sep 6 18:31 sys-security.cf -rw-r--r-- 1 root root 890 Sep 6 18:31 sys-users.cf cyrus:/srv/cfengine/sysconf/server# I don't see anything interesting server side either when running: /usr/sbin/cf-serverd -d4 --verbose --no-fork And the following does not have any complaints: /usr/sbin/cf-promises -v Any ideas? I'm running cfengine3 on debian, v3.0.5+dfsg-1 - and the cf-agent.cf file is as follows: bundle agent Update { files: linux:: "${cf3.path[inputs]}" action => immediate, move_obstructions => "true", depth_search => Recursive, copy_from => MirrorFrom( "${cf3.host[server]}", "${cf3.path[scm-inputs]}", "true", "0400" ), classes => DefineSoftClass("Cfengine_Inputs_Updated") ; "${cf3.path[sbin]}" comment => "Setting cf3 client sbin scripts: ${cf3.path[sbin]}/", action => immediate, depth_search => Recursive, copy_from => MirrorFrom( "${cf3.host[server]}", "${cf3.path[scm-cnt-scripts]}", "false", "0555" ) ; reports: Cfengine_Inputs_Updated:: "[cf-agent.cf ] Services:CFAgent:Inputs:Updated"; Cfengine_Inputs_Updated_Failed:: "[cf-agent.cf ] FAILED ${cf3.host[server]}://${cf3.path[scm-inputs]} -> localhost://${cf3.path[inputs]}"; } I lie, there is something interesting with a little more debugging... AccessControl(/srv/cfengine/sysconf/server/inputs) AccessControl, match(/srv/cfengine/sysconf/server/inputs,client.com.au) encrypt request=1 Examining rule in access list (/srv/cfengine/sysconf/server/inputs,/home/cfengine)? 
cf3 Host client.com.au denied access to /srv/cfengine/sysconf/server/inputs Unappending Host client.com.au denied access to /srv/cfengine/sysconf/server/inputs cf3 Access control in sync Unappending Access control in sync Transaction Send[t 59][Packed text] Attempting to send 67 bytes SendSocketStream, sent 67 cf3 From (host=client.com.au,user=root,ip=172.31.69.3) Unappending From (host=client.com.au,user=root,ip=172.31.69.3) cf3 REFUSAL of request from connecting host: (SYNCH 1283777156 STAT /srv/cfengine/sysconf/server/inputs) Unappending REFUSAL of request from connecting host: (SYNCH 1283777156 STAT /srv/cfengine/sysconf/server/inputs) RecvSocketStream(8) cf3 -> Accepting a connection I'll keep looking.

    Read the article

  • Installing rtorrent on my ubuntu server

    - by Shishant
    Hello, I am try to install rtorrent on my ubuntu server. I ran these commands and they worked fine. ./autogen.sh ./configure --with-xmlrpc-c make and then when i tried to use make install i guess it didnt get install because no .rtorrent.rc' was created in home directory and running rtorrent returned this error rtorrent: error while loading shared libraries: libtorrent.so.11: cannot open shared object file: No such file or directory below is the log of my make install. root@ubuntu:~/rtorrent-0.8.6# make install Making install in doc make[1]: Entering directory `/root/rtorrent-0.8.6/doc' make[2]: Entering directory `/root/rtorrent-0.8.6/doc' make[2]: Nothing to be done for `install-exec-am'. test -z "/usr/local/share/man/man1" || /bin/mkdir -p "/usr/local/share/man/man1" /usr/bin/install -c -m 644 './rtorrent.1' '/usr/local/share/man/man1/rtorrent.1 ' make[2]: Leaving directory `/root/rtorrent-0.8.6/doc' make[1]: Leaving directory `/root/rtorrent-0.8.6/doc' Making install in src make[1]: Entering directory `/root/rtorrent-0.8.6/src' Making install in core make[2]: Entering directory `/root/rtorrent-0.8.6/src/core' make[3]: Entering directory `/root/rtorrent-0.8.6/src/core' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/core' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/core' Making install in display make[2]: Entering directory `/root/rtorrent-0.8.6/src/display' make[3]: Entering directory `/root/rtorrent-0.8.6/src/display' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/display' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/display' Making install in input make[2]: Entering directory `/root/rtorrent-0.8.6/src/input' make[3]: Entering directory `/root/rtorrent-0.8.6/src/input' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/input' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/input' Making install in rpc make[2]: Entering directory `/root/rtorrent-0.8.6/src/rpc' make[3]: Entering directory `/root/rtorrent-0.8.6/src/rpc' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/rpc' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/rpc' Making install in ui make[2]: Entering directory `/root/rtorrent-0.8.6/src/ui' make[3]: Entering directory `/root/rtorrent-0.8.6/src/ui' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/ui' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/ui' Making install in utils make[2]: Entering directory `/root/rtorrent-0.8.6/src/utils' make[3]: Entering directory `/root/rtorrent-0.8.6/src/utils' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. 
make[3]: Leaving directory `/root/rtorrent-0.8.6/src/utils' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/utils' make[2]: Entering directory `/root/rtorrent-0.8.6/src' make[3]: Entering directory `/root/rtorrent-0.8.6/src' test -z "/usr/local/bin" || /bin/mkdir -p "/usr/local/bin" /bin/bash ../libtool --mode=install /usr/bin/install -c 'rtorrent' '/usr/loc al/bin/rtorrent' libtool: install: /usr/bin/install -c rtorrent /usr/local/bin/rtorrent make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src' make[2]: Leaving directory `/root/rtorrent-0.8.6/src' make[1]: Leaving directory `/root/rtorrent-0.8.6/src' make[1]: Entering directory `/root/rtorrent-0.8.6' make[2]: Entering directory `/root/rtorrent-0.8.6' make[2]: Nothing to be done for `install-exec-am'. make[2]: Nothing to be done for `install-data-am'. make[2]: Leaving directory `/root/rtorrent-0.8.6' make[1]: Leaving directory `/root/rtorrent-0.8.6' Thank You.

    Read the article

  • Oracle Solaris: Zones on Shared Storage

    - by Jeff Victor
    Oracle Solaris 11.1 has several new features. At oracle.com you can find a detailed list. One of the significant new features, and the most significant new feature releated to Oracle Solaris Zones, is casually called "Zones on Shared Storage" or simply ZOSS (rhymes with "moss"). ZOSS offers much more flexibility because you can store Solaris Zones on shared storage (surprise!) so that you can perform quick and easy migration of a zone from one system to another. This blog entry describes and demonstrates the use of ZOSS. ZOSS provides complete support for a Solaris Zone that is stored on "shared storage." In this case, "shared storage" refers to fiber channel (FC) or iSCSI devices, although there is one lone exception that I will demonstrate soon. The primary intent is to enable you to store a zone on FC or iSCSI storage so that it can be migrated from one host computer to another much more easily and safely than in the past. With this blog entry, I wanted to make it easy for you to try this yourself. I couldn't assume that you have a SAN available - which is a good thing, because neither do I! What could I use, instead? [There he goes, foreshadowing again... -Ed.] Developing this entry reinforced the lesson that the solution to every lab problem is VirtualBox. Oracle VM VirtualBox (its formal name) helps here in a couple of important ways. It offers the ability to easily install multiple copies of Solaris as guests on top of any popular system (Microsoft Windows, MacOS, Solaris, Oracle Linux (and other Linuxes) etc.). It also offers the ability to create a separate virtual disk drive (VDI) that appears as a local hard disk to a guest. This virtual disk can be moved very easily from one guest to another. In other words, you can follow the steps below on a laptop or larger x86 system. Please note that the ability to use ZOSS to store a zone on a local disk is very useful for a lab environment, but not so useful for production. I do not suggest regularly moving disk drives among computers. In the method I describe below, that virtual hard disk will contain the zone that will be migrated among the (virtual) hosts. In production, you would use FC or iSCSI LUNs instead. The zonecfg(1M) man page details the syntax for each of the three types of devices. Why Migrate? Why is the migration of virtual servers important? Some of the most common reasons are: Moving a workload to a different computer so that the original computer can be turned off for extensive maintenance. Moving a workload to a larger system because the workload has outgrown its original system. If the workload runs in an environment (such as a Solaris Zone) that is stored on shared storage, you can restore the service of the workload on an alternate computer if the original computer has failed and will not reboot. You can simplify lifecycle management of a workload by developing it on a laptop, migrating it to a test platform when it's ready, and finally moving it to a production system. Concepts For ZOSS, the important new concept is named "rootzpool". You can read about it in the zonecfg(1M) man page, but here's the short version: it's the backing store (hard disk(s), or LUN(s)) that will be used to make a ZFS zpool - the zpool that will hold the zone. This zpool: contains the zone's Solaris content, i.e. 
the root file system does not contain any content not related to the zone can only be mounted by one Solaris instance at a time Method Overview Here is a brief list of the steps to create a zone on shared storage and migrate it. The next section shows the commands and output. You will need a host system with an x86 CPU (hopefully at least a couple of CPU cores), at least 2GB of RAM, and at least 25GB of free disk space. (The steps below will not actually use 25GB of disk space, but I don't want to lead you down a path that ends in a big sign that says "Your HDD is full. Good luck!") Configure the zone on both systems, specifying the rootzpool that both will use. The best way is to configure it on one system and then copy the output of "zonecfg export" to the other system to be used as input to zonecfg. This method reduces the chances of pilot error. (It is not necessary to configure the zone on both systems before creating it. You can configure this zone in multiple places, whenever you want, and migrate it to one of those places at any time - as long as those systems all have access to the shared storage.) Install the zone on one system, onto shared storage. Boot the zone. Provide system configuration information to the zone. (In the Real World(tm) you will usually automate this step.) Shutdown the zone. Detach the zone from the original system. Attach the zone to its new "home" system. Boot the zone. The zone can be used normally, and even migrated back, or to a different system. Details The rest of this shows the commands and output. The two hostnames are "sysA" and "sysB". Note that each Solaris guest might use a different device name for the VDI that they share. I used the device names shown below, but you must discover the device name(s) after booting each guest. In a production environment you would also discover the device name first and then configure the zone with that name. Fortunately, you can use the command "zpool import" or "format" to discover the device on the "new" host for the zone. The first steps create the VirtualBox guests and the shared disk drive. I describe the steps here without demonstrating them. Download VirtualBox and install it using a method normal for your host OS. You can read the complete instructions. Create two VirtualBox guests, each to run Solaris 11.1. Each will use its own VDI as its root disk. Install Solaris 11.1 in each guest.Install Solaris 11.1 in each guest. To install a Solaris 11.1 guest, you can either download a pre-built VirtualBox guest, and import it, or install Solaris 11.1 from the "text install" media. If you use the latter method, after booting you will not see a windowing system. To install the GUI and other important things, login and run "pkg install solaris-desktop" and take a break while it installs those important things. Life is usually easier if you install the VirtualBox Guest Additions because then you can copy and paste between the host and guests, etc. You can find the guest additions in the folder matching the version of VirtualBox you are using. You can also read the instructions for installing the guest additions. To create the zone's shared VDI in VirtualBox, you can open the storage configuration for one of the two guests, select the SATA controller, and click on the "Add Hard Disk" icon nearby. Choose "Create New Disk" and specify an appropriate path name for the file that will contain the VDI. The shared VDI must be at least 1.5 GB. Note that the guest must be stopped to do this. 
Add that VDI to the other guest - using its Storage configuration - so that each can access it while running. The steps start out the same, except that you choose "Choose Existing Disk" instead of "Create New Disk." Because the disk is configured on both of them, VirtualBox prevents you from running both guests at the same time. Identify device names of that VDI, in each of the guests. Solaris chooses the name based on existing devices. The names may be the same, or may be different from each other. This step is shown below as "Step 1." Assumptions In the example shown below, I make these assumptions. The guest that will own the zone at the beginning is named sysA. The guest that will own the zone after the first migration is named sysB. On sysA, the shared disk is named /dev/dsk/c7t2d0 On sysB, the shared disk is named /dev/dsk/c7t3d0 (Finally!) The Steps Step 1) Determine the name of the disk that will move back and forth between the systems. root@sysA:~# format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c7t0d0 /pci@0,0/pci8086,2829@d/disk@0,0 1. c7t2d0 /pci@0,0/pci8086,2829@d/disk@2,0 Specify disk (enter its number): ^D Step 2) The first thing to do is partition and label the disk. The magic needed to write an EFI label is not overly complicated. root@sysA:~# format -e c7t2d0 selecting c7t2d0 [disk formatted] FORMAT MENU: ... format fdisk No fdisk table exists. The default partition for the disk is: a 100% "SOLARIS System" partition Type "y" to accept the default partition, otherwise type "n" to edit the partition table. n SELECT ONE OF THE FOLLOWING: ... Enter Selection: 1 ... G=EFI_SYS 0=Exit? f SELECT ONE... ... 6 format label ... Specify Label type[1]: 1 Ready to label disk, continue? y format quit root@sysA:~# ls /dev/dsk/c7t2d0 /dev/dsk/c7t2d0 Step 3) Configure zone1 on sysA. root@sysA:~# zonecfg -z zone1 Use 'create' to begin configuring a new zone. zonecfg:zone1 create create: Using system default template 'SYSdefault' zonecfg:zone1 set zonename=zone1 zonecfg:zone1 set zonepath=/zones/zone1 zonecfg:zone1 add rootzpool zonecfg:zone1:rootzpool add storage dev:dsk/c7t2d0 zonecfg:zone1:rootzpool end zonecfg:zone1 exit root@sysA:~# oot@sysA:~# zonecfg -z zone1 info zonename: zone1 zonepath: /zones/zone1 brand: solaris autoboot: false bootargs: file-mac-profile: pool: limitpriv: scheduling-class: ip-type: exclusive hostid: fs-allowed: anet: ... rootzpool: storage: dev:dsk/c7t2d0 Step 4) Install the zone. This step takes the most time, but you can wander off for a snack or a few laps around the gym - or both! (Just not at the same time...) root@sysA:~# zoneadm -z zone1 install Created zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T163634Z.zone1.install Image: Preparing at /zones/zone1/root. AI Manifest: /tmp/manifest.xml.RXaycg SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: zone1 Installation: Starting ... Creating IPS image Startup linked: 1/1 done Installing packages from: solaris origin: http://pkg.us.oracle.com/support/ DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 183/183 33556/33556 222.2/222.2 2.8M/s PHASE ITEMS Installing new actions 46825/46825 Updating package state database Done Updating image state Done Creating fast lookup database Done Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. Done: Installation completed in 1696.847 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process. 
Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T163634Z.zone1.install Step 5) Boot the Zone. root@sysA:~# zoneadm -z zone1 boot Step 6) Login to zone's console to complete the specification of system information. root@sysA:~# zlogin -C zone1 Answer the usual questions and wait for a login prompt. Then you can end the console session with the usual "~." incantation. Step 7) Shutdown the zone so it can be "moved." root@sysA:~# zoneadm -z zone1 shutdown Step 8) Detach the zone so that the original global zone can't use it. root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 installed /zones/zone1 solaris excl root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - zone1_rpool 1.98G 484M 1.51G 23% 1.00x ONLINE - root@sysA:~# zoneadm -z zone1 detach Exported zone zpool: zone1_rpool Step 9) Review the result and shutdown sysA so that sysB can use the shared disk. root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysA:~# init 0 Step 10) Now boot sysB and configure a zone with the parameters shown above in Step 1. (Again, the safest method is to use "zonecfg ... export" on sysA as described in section "Method Overview" above.) The one difference is the name of the rootzpool storage device, which was shown in the list of assumptions, and which you must determine by booting sysB and using the "format" or "zpool import" command. When that is done, you should see the output shown next. (I used the same zonename - "zone1" - in this example, but you can choose any valid zonename you want.) root@sysB:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysB:~# zonecfg -z zone1 info zonename: zone1 zonepath: /zones/zone1 brand: solaris autoboot: false bootargs: file-mac-profile: pool: limitpriv: scheduling-class: ip-type: exclusive hostid: fs-allowed: anet: linkname: net0 ... rootzpool: storage: dev:dsk/c7t3d0 Step 11) Attaching the zone automatically imports the zpool. root@sysB:~# zoneadm -z zone1 attach Imported zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T184034Z.zone1.attach Installing: Using existing zone boot environment Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris Cache: Using /var/pkg/publisher. Updating non-global zone: Linking to image /. Processing linked: 1/1 done Updating non-global zone: Auditing packages. No updates necessary for this image. Updating non-global zone: Zone updated. Result: Attach Succeeded. Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T184034Z.zone1.attach root@sysB:~# zoneadm -z zone1 boot root@sysB:~# zlogin zone1 [Connected to zone 'zone1' pts/2] Oracle Corporation SunOS 5.11 11.1 September 2012 Step 12) Now let's migrate the zone back to sysA. Create a file in zone1 so we can verify it exists after we migrate the zone back, then begin migrating it back. 
root@zone1:~# ls /opt root@zone1:~# touch /opt/fileA root@zone1:~# ls -l /opt/fileA -rw-r--r-- 1 root root 0 Oct 22 14:47 /opt/fileA root@zone1:~# exit logout [Connection to zone 'zone1' pts/2 closed] root@sysB:~# zoneadm -z zone1 shutdown root@sysB:~# zoneadm -z zone1 detach Exported zone zpool: zone1_rpool root@sysB:~# init 0 Step 13) Back on sysA, check the status. Oracle Corporation SunOS 5.11 11.1 September 2012 root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - Step 14) Re-attach the zone back to sysA. root@sysA:~# zoneadm -z zone1 attach Imported zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T190441Z.zone1.attach Installing: Using existing zone boot environment Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris Cache: Using /var/pkg/publisher. Updating non-global zone: Linking to image /. Processing linked: 1/1 done Updating non-global zone: Auditing packages. No updates necessary for this image. Updating non-global zone: Zone updated. Result: Attach Succeeded. Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T190441Z.zone1.attach root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - zone1_rpool 1.98G 491M 1.51G 24% 1.00x ONLINE - root@sysA:~# zoneadm -z zone1 boot root@sysA:~# zlogin zone1 [Connected to zone 'zone1' pts/2] Oracle Corporation SunOS 5.11 11.1 September 2012 root@zone1:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 1.98G 538M 1.46G 26% 1.00x ONLINE - Step 15) Check for the file created on sysB, earlier. root@zone1:~# ls -l /opt total 1 -rw-r--r-- 1 root root 0 Oct 22 14:47 fileA Next Steps Here is a brief list of some of the fun things you can try next. Add space to the zone by adding a second storage device to the rootzpool. Make sure that you add it to the configurations of both zones! Create a new zone, specifying two disks in the rootzpool when you first configure the zone. When you install that zone, or clone it from another zone, zoneadm uses those two disks to create a mirrored pool. (Three disks will result in a three-way mirror, etc.) Conclusion Hopefully you have seen the ease with which you can now move Solaris Zones from one system to another.

    Read the article

  • Debugging in XCode as root

    - by Anton
    In my program I need to create sockets and bind them to listen HTTP port (80). The program works fine when I launch it from command line with sudo, escalating permissions to root. Running under XCode gives a 'permission denied' error on the call to binding function (asio::ip::tcp::acceptor::bind()). How can I do debugging under XCode? All done in C++ and boost.asio on Mac OS X 10.5 with XCode 3.1.2.

    Read the article

  • Grails default root path

    - by srinath
    Hi, is there a variable where we can find out the root directory of my Grails application? I tried request.getSession().getServletContext().getRealPath("/"), but it shows tmp/App-Test-0.1/. My app is located in Tomcat at "/home/srinath/work/projects/tomcat-6.0.18/webapps/App-Test-0.1". Could anyone help me? Thanks in advance, sri.

    Read the article

  • What build tools do not depend on java (or Ruby)?

    - by Mohamed Meligy
    I'm wondering which generic build tools ship with their own binary runtimes and do not depend on another environment that isn't shipped with them. For example, Ant requires Java, Rake requires Ruby, etc. It would also be great to hear about target-platform-agnostic tools, where I'd just give whatever command for building, whatever command for testing, etc., and can then define my artifacts in CI or so. I would find something like that useful for building .NET projects (say, on both Windows .NET and Mono), and Node.js projects especially. I do not want to install Java and/or Ruby if what I want is a .NET build or a Node.js build. This is a bit of a general-awareness question, not an exact problem I'm facing; that's why it's here and not on StackOverflow. Update: To explain a bit more, what I'm after is a build script that would run MSBuild for compiling (in .NET, and maybe several Node/npm commands in Node, etc.) and then run the remaining build/test steps, instead of setting all of these up in MSBuild (in the .NET case; I'm also wondering whether there is an equivalent story in Node).

    Read the article

  • How to get a random node from a tree?

    - by ooboo
    It looks easy, but I found the implementation tricky. I need this for a simple genetic programming problem I'm trying to implement. The function should, given a node, return the node itself or any of its children such that the probability of choosing a node is normally distributed relative to its depth (so the function should return mostly middle nodes, but sometimes the root itself or the lowest ones - but that's not really necessary if it makes things significantly more complex; if any node is chosen with equal probability, that's good enough). Thanks
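    A minimal sketch of one way to do it (hypothetical JavaScript, since the question does not name a language; the { children: [...] } node shape is an assumption): collect every node with its depth, draw a target depth from a roughly bell-shaped distribution, then pick uniformly among the nodes at that depth.

        function collect(node, depth, out) {
          out.push({ node: node, depth: depth });
          (node.children || []).forEach(function (c) { collect(c, depth + 1, out); });
          return out;
        }

        function randomNode(root) {
          var all = collect(root, 0, []);
          var maxDepth = all.reduce(function (m, e) { return Math.max(m, e.depth); }, 0);
          // the mean of three uniform draws approximates a bell curve over [0, maxDepth]
          var target = Math.round((Math.random() + Math.random() + Math.random()) / 3 * maxDepth);
          var candidates = all.filter(function (e) { return e.depth === target; });
          if (candidates.length === 0) candidates = all;   // fallback for sparse depths
          return candidates[Math.floor(Math.random() * candidates.length)].node;
        }

    The simpler "equal probability" variant the asker mentions is just all[Math.floor(Math.random() * all.length)].node after the collect step.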

    Read the article

  • Insert XML node before specific node using c#

    - by sam
    This is my XML file <employee> <name ref="a1" type="xxx"></name> <name ref="a2" type="yyy"></name> <name ref="a3" type="zzz"></name> </employee> Using C#, I need to insert this node <name ref="b2" type="aaa"></name> between the "a2" and "a3" nodes. Any pointers on how to sort this out?

    Read the article

  • How to Populate a 'Tree' structure 'Declaratively'

    - by mackenir
    I want to define a 'node' class/struct and then declare a tree of these nodes in code in such a way that the way the code is formatted reflects the tree structure, and there's not 'too much' boiler plate in the way. Note that this isn't a question about data structures, but rather about what features of C++ I could use to arrive at a similar style of declarative code to the example below. Possibly with C++0X this would be easier as it has more capabilities in the area of constructing objects and collections, but I'm using Visual Studio 2008. Example tree node type: struct node { string name; node* children; node(const char* name, node* children); node(const char* name); }; What I want to do: Declare a tree so its structure is reflected in the source code node root = node("foo", [ node("child1"), node("child2", [ node("grand_child1"), node("grand_child2"), node("grand_child3" ]), node("child3") ]); NB: what I don't want to do: Declare a whole bunch of temporary objects/colls and construct the tree 'backwards' node grandkids[] = node[3] { node("grand_child1"), node("grand_child2"), node("grand_child3" }; node kids[] = node[3] { node("child1"), node("child2", grandkids) node("child3") }; node root = node("foo", kids);

    Read the article

  • Adding a child node to a JSON node dynamically

    - by Sai
    I have to create a nested, multi-level JSON structure depending on the result set that I get from MySQL. I created a JSON object initially. Now I want to add child nodes to the existing child nodes in the object. d = collections.OrderedDict() jsonobj = {"test": dict(updated_at="today", ID="ID", ads=[])} for rows1 in rs: jsonobj['list']["ads"].append(dict(unit = "1", type ="ad_type", id ="123", updated_at="today", x_id="111", x_name="test")) cur.execute("SELECT * from f_test") rs1 = cur.fetchall() for rows2 in rs1: propertiesObj = [] d["name"]="propName" d["type"]="TypeName" d["value"]="Value1" propertiesObj.append(d) jsonobj['play_list']["ads"].append() In the last line above I want to add another child node to [play_list].[ads], which is again an array list. The output should look like the following: [list].[ads].[preferences].

    Read the article

  • Why can a local root turn into any LDAP user?

    - by Daniel Gollás
    I know this has been asked here before, but I am not satisfied with the answers and don't know if it's ok to revive and hijack an older question. We have workstations that authenticate users on an LDAP server. However, the local root user can su into any LDAP user without needing a password. From my perspective this sounds like a huge security problem that I would hope could be avoided at the server level. I can imagine the following scenario where a user can impersonate another and don't know how to prevent it: UserA has limited permissions, but can log into a company workstation using their LDAP password. They can cat /etc/ldap.conf and figure out the LDAP server's address and can ifconfig to check out their own IP address. (This is just an example of how to get the LDAP address, I don't think that is usually a secret and obscurity is not hard to overcome) UserA takes out their own personal laptop, configures authentication and network interfaces to match the company workstation and plugs in the network cable from the workstation to their laptop, boots and logs in as local root (it's his laptop, so he has local root) As root, they su into any other user on LDAP that may or may not have more permissions (without needing a password!), but at the very least, they can impersonate that user without any problem. The other answers on here say that this is normal UNIX behavior, but it sounds really insecure. Can the impersonated user act as that user on an NFS mount for example? (the laptop even has the same IP address). I know they won't be able to act as root on a remote machine, but they can still be any other user they want! There must be a way to prevent this on the LDAP server level right? Or maybe at the NFS server level? Is there some part of the process that I'm missing that actually prevents this? Thanks!!

    Read the article

  • How to let a non-root user use chown on any user's or group's files?

    - by user1877716
    I would like to make a user super powerful, with almost all root rights but unable to touch the root user (i.e. unable to change root's password). My goal is for user "B" to manage my web server. The problem is that user B needs to be able to run the chown and chmod commands on some files belonging to other users. I tried putting B in the root group and using visudo, but it's not enough. I'm working on a CentOS 6 system. Does anybody have ideas?

    Read the article
