Search Results

Search found 12077 results on 484 pages for 'node js'.


  • Allow InfiniBand for non-root users

    - by user1219721
    I have InfiniBand running on RHEL 6.3:

        [root@master ~]# ibv_devinfo
        hca_id: mthca0
            transport:        InfiniBand (0)
            fw_ver:           4.7.927
            node_guid:        0017:08ff:ffd0:6f1c
            sys_image_guid:   0017:08ff:ffd0:6f1f
            vendor_id:        0x08f1
            vendor_part_id:   25208
            hw_ver:           0xA0
            board_id:         VLT0060010001
            phys_port_cnt:    2
                port: 1
                    state:        PORT_ACTIVE (4)
                    max_mtu:      2048 (4)
                    active_mtu:   2048 (4)
                    sm_lid:       2
                    port_lid:     3
                    port_lmc:     0x00
                    link_layer:   InfiniBand
                port: 2
                    state:        PORT_DOWN (1)
                    max_mtu:      2048 (4)
                    active_mtu:   512 (2)
                    sm_lid:       0
                    port_lid:     0
                    port_lmc:     0x00
                    link_layer:   InfiniBand

    but it only works as root. When I try from a non-root user, I get nothing:

        [nicolas@master ~]$ ibv_devices
            device                 node GUID
            ------              ----------------
            mthca0              001708ffffd06f1c

    So how do I allow regular users to use InfiniBand?
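    A common cause on RHEL is the per-user locked-memory limit (and, less often, the permissions on the /dev/infiniband device nodes). A minimal sketch of the usual workaround, assuming PAM reads /etc/security/limits.conf, is:

        # /etc/security/limits.conf -- raise the locked-memory limit the verbs
        # library needs; a low memlock limit is a common reason the ibv_* tools
        # only work for root (log in again for the new limit to take effect)
        *    soft    memlock    unlimited
        *    hard    memlock    unlimited

        # also verify the device nodes are readable/writable by ordinary users
        ls -l /dev/infiniband/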

    Read the article

  • How should I use LVM with Ganeti?

    - by javano
    I am building a small Ganeti cluster on some low-end hardware (it's the only hardware I have, sadly). I am confused about the use of LVM with DRBD. I have two instances and three nodes. What I want is instance1 replicated between nodes 1 & 2, and instance2 replicated between nodes 3 & 2 (so node2 does nothing except wait for either node1 or node3 to fail, as it is the secondary node for both instances). This is because node2 has a lower hardware spec than nodes 1 and 3, so I just want it as a hot spare. How can I achieve this? I don't want instance1 replicated to node3, for example, nor instance2 replicated to node1.

    Nodes 1 & 2 have /dev/sda5, which is 150 GB (for example). Nodes 2 & 3 have /dev/sda6, which is 75 GB (for example). Using just nodes 1 & 2, after looking at the Ganeti docs I would run:

        vgcreate my-vg

    Next I would create the cluster via gnt-cluster with VG = "my-vg". It is here that I believe I am missing some knowledge. I believe that what I need to do is create the same Logical Volume on nodes 1 & 2 in Volume Group "my-vg", consisting solely of /dev/sda5, and call it "lv1"; then create a Logical Volume on nodes 2 & 3 consisting solely of /dev/sda6 in "my-vg", called "lv2". When creating instance1 I would then use "-vg=lv1 -n node1:node2", and when creating instance2 I would use "-vg=lv2 -n node3:node2".

    I briefly had a go at this today and I'm dubious that it is possible. When trying to create instance2, "lv2" won't exist on node1 (the cluster master), so I don't believe it will allow the instance creation. Could I create a 1 KB partition (/dev/sda6) on node1 and put it into an LV called "lv2", or is that too flaky? Is this setup possible? Thank you.
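    For reference, a rough sketch of how such a layout is often expressed (assuming DRBD disk templates and placeholder OS/disk-size arguments): placement is controlled with -n primary:secondary at instance creation, while the volume group name is set once, cluster-wide, rather than per instance.

        # node1:               vgcreate my-vg /dev/sda5
        # node2 (backs both):  vgcreate my-vg /dev/sda5 /dev/sda6
        # node3:               vgcreate my-vg /dev/sda6
        # (the VG name must be identical on every node)

        gnt-cluster init --vg-name my-vg cluster1.example.com

        # instance1 replicated between node1 (primary) and node2 (secondary)
        gnt-instance add -t drbd --disk 0:size=50g -o debootstrap+default \
            -n node1:node2 instance1
        # instance2 replicated between node3 (primary) and node2 (secondary)
        gnt-instance add -t drbd --disk 0:size=50g -o debootstrap+default \
            -n node3:node2 instance2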

    Read the article

  • Cassandra on heterogeneous servers

    - by happy-coding
    I am currently running an Apache Cassandra cluster of 4 nodes, each with the following hardware: AMD Athlon 64 X2 6000+, 8 GB RAM, 750 GB hard disk. Write performance is not great, and read performance is really bad, sometimes with timeouts. I was wondering if it makes sense to add 2 nodes with different hardware (8 CPUs and more RAM) to improve this, or does a Cassandra cluster work best with the same hardware in every node? Thanks & best regards

    Read the article

  • Dell PowerEdge R720 - Corrupted RAID

    - by BT643
    Apologies in advance for the lengthy question. We have a Dell PowerEdge R720 server with:

        2 x 136 GB SAS drives in RAID 1 for the OS (Ubuntu Server 12.04)
        6 x 3 TB SATA drives in RAID 5 for data

    A few days ago we were getting errors when trying to access files on the large RAID 5 partition. We rebooted the server and got a message saying the RAID controller had found a foreign config. We've had this before, and just needed to use Dell's RAID configuration utility to import the foreign config. Last time this worked, but this time it started doing a disk check and then we got this:

        FSCK has returned the following:
        "/dev/sdb1 inode 364738 has a bad extended attribute block 7
        /dev/sdb1 unexpected inconsistency run fsck manually (i.e without -a or -p options)
        MOUNTALL fsck /ourdatapartition [1019] terminated with status 4
        MOUNTALL filesystem has errors /ourdatapartition
        errors where found while checking the disk drive for /ourdatapartition
        Press F to fix errors, I to Ignore or M for Manual Recovery"

    We pressed F to try to fix the errors, but it eventually failed with:

        Inode 275841084, i_blocks is 167080, should be 0.  Fix? yes
        Inode 275841141 has an invalid extend node (blk 2206761006, lblk 0)  Clear? yes
        Inode 275841141, i_blocks is 227872, should be 0.  Fix? yes
        Inode 275842303 has an invalid extend node (blk 2206760975, lblk 0)  Clear? yes
        ....
        Error storing directory block information (inode=275906766, block=0, num=2699516178): Memory allocation failed
        /dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
        e2fsck: aborted
        /dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
        mountall: fsck /ourdatapartition [1286] terminated with status 9
        mountall: Unrecoverable fsck error: /ourdatapartition

    We noticed one of the drive lights was not lit at all, and thought this drive may have failed and be the problem. We replaced the drive with a spare and pressed "F" to repair again, but we keep getting the same error as above. In the RAID configuration utility, all drives show as "online" and "optimal". We do have this data on another replicated server, so we're not worried about "recovering" anything; we just want to get the system back online ASAP. The server has 64 or 32 GB of memory, I can't remember off the top of my head, but either way, with a 14 TB RAID, I think it may still not be enough. Thanks

    EDIT - I checked the memory usage while fsck was running, as suggested, and after 2 or 3 minutes it was using up nearly all of the server's memory. When it failed after 5 minutes or so with the error in my post, the memory immediately freed up again.
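    For the memory-exhaustion side of this (not the underlying corruption), one hedged option is to let e2fsck spill its working tables to disk instead of RAM via /etc/e2fsck.conf, assuming an e2fsprogs build with scratch_files support:

        # /etc/e2fsck.conf
        [scratch_files]
        directory = /var/cache/e2fsck

        # create the directory (on a disk other than the one being checked)
        # and re-run the check manually
        mkdir -p /var/cache/e2fsck
        e2fsck -fy /dev/sdb1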

    Read the article

  • Thinkpad speaker turns mute - Linux Codec issue?

    - by Curlew
    At some point a few days ago, the speakers on my Lenovo ThinkPad T410 (model number 2537A11) suddenly stopped working. The failure happens every time I watch a video or listen to a music file: the sound just abruptly stops. At the moment I can't produce a single sound no matter what I do. I am using Debian GNU/Linux on this laptop and there doesn't appear to be anything else wrong (the fan is working, no abnormal heat (staying around ~40°C), no other obvious errors or problems). Here is the output of a nice program someone pointed me to:

        martin@martin:~/Downloads$ sudo python run.py --monitor
        Using temporary directory: /dev/shm/hda-analyzer
        You may remove this directory when finished or if you like to download
        the most recent copy of hda-analyzer tool.
        Downloading file hda_analyzer.py
        Downloading file hda_guilib.py
        Downloading file hda_codec.py
        Downloading file hda_proc.py
        Downloading file hda_graph.py
        Downloading file hda_mixer.py
        Downloaded all files, executing hda_analyzer.py
        Watching 1 cards
        ======================================

    Sound works normally for a while, then it stops and the following lines appear:

        Diff for codec 0/0 (0x14f15069):
        ---
        +++
        @@ -164,17 +164,17 @@
           Power: setting=D0, actual=D0
         Node 0x1f [Pin Complex] wcaps 0x400501: Stereo
           Pincap 0x00000010: OUT
           Pin Default 0x901701f0: [Fixed] Speaker at Int N/A
             Conn = Analog, Color = Unknown
             DefAssociation = 0xf, Sequence = 0x0
             Misc = NO_PRESENCE
           Pin-ctls: 0x40: OUT
        -  Power: setting=D0, actual=D0
        +  Power: setting=D3, actual=D3
           Connection: 2
              0x10* 0x11
         Node 0x20 [Pin Complex] wcaps 0x400781: Stereo Digital
           Pincap 0x00000010: OUT
           Pin Default 0x40f001f0: [N/A] Other at Ext N/A
             Conn = Unknown, Color = Unknown
             DefAssociation = 0xf, Sequence = 0x0
             Misc = NO_PRESENCE

    And now there is also an error in the dmesg output:

        hda-intel: IRQ timing workaround is activated for card #0. Suggest a bigger bdl_pos_adj.

    I changed bdl_pos_adj to various values (-1, 0, 64, 1024) and either there is no change at all, or dmesg reports that the adjustment is too big. I wonder if this bdl_pos_adj is the real reason for the error. My hardware information, as produced by alsa-info.sh, is on the usual website.

    Okay, I did some serious testing, even installing Windows, and I now officially conclude that this is a hardware issue with my laptop speakers. Reasons:

    - The error occurs in my installed Debian Linux, in an Ubuntu live distribution, and in Windows XP.
    - No error message appears in any of the OSes; the sound just keeps "playing" and I can't hear a thing.
    - I tested different setups, including OSS, ALSA, and the PulseAudio server on top.
    - If I use my new USB headphones, I can hear sound all the time, without any sudden silences.

    So obviously, although hard to believe, my laptop speakers are not okay (I have never heard of a similar case). I'll award the bounty to anyone who can point me to good tutorials on, or the procedure for, exchanging the T410 speakers (I still have warranty; the laptop was bought in Germany, but now I am in Denmark), or to someone who can explain the output from hda-analyzer (big log above).

    Read the article

  • puppet variables

    - by Joey Bagodonuts
    I am trying to use variables in my module's manifest.pp, with little luck:

        class mysoftware($version="dev-2011.02.04b") {
            File { links => follow }

            file { "/opt/mysoftware":
                ensure => directory
            }

            file { "/opt/mysoftware/share":
                source  => "puppet://puppet/mysoftware/air/$version",
                recurse => "true",
            }
        }

    This does not seem to work when I assign the class to a node via the nodes.pp file. I am running puppetmaster 2.6.4; the puppetd clients are 0.25.
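    One hedged sketch, assuming Puppet 2.6 syntax (the node name is a placeholder): declare the parameterized class with the resource-like syntax in nodes.pp, interpolate the variable with braces, and serve module files from the standard puppet:///modules/... path unless a custom fileserver mount called "mysoftware" really exists.

        # nodes.pp -- resource-like declaration so the parameter can be set per node
        node "myserver.example.com" {
            class { "mysoftware":
                version => "dev-2011.02.04b",
            }
        }

        # in the module, braces make the interpolation unambiguous
        file { "/opt/mysoftware/share":
            source  => "puppet:///modules/mysoftware/air/${version}",
            recurse => true,
        }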

    Read the article

  • Run WMIC command across network

    - by C-dizzle
    Instead of typing this in a command prompt one machine at a time:

        wmic /node:ipaddress /user:administrator /password:mypassword bios get serialnumber

    how can I run it against an entire subnet and output the results to a text document? Since I do this every couple of months to verify our inventory of computers, I assume there must be a much easier way to put this in a batch script instead of doing it manually.
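    A minimal batch-file sketch, assuming a /24 subnet (192.168.1.0/24, the credentials, and the output path are all placeholders); unreachable addresses simply log an error and the loop moves on:

        @echo off
        del C:\inventory\serials.txt 2>nul
        for /L %%i in (1,1,254) do (
            echo 192.168.1.%%i >> C:\inventory\serials.txt
            wmic /node:192.168.1.%%i /user:administrator /password:mypassword bios get serialnumber >> C:\inventory\serials.txt
        )
        rem note: wmic writes UTF-16 text, so the file may need converting before
        rem further processing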

    Read the article

  • Is it safe to run two instances of svnserve on one repository, or only one?

    - by fredden
    We've two nodes running heartbeat/drbd, and one of the services we're using is subversion. What I want to know is: is it safe to run svnserve on both nodes all the time, or should it only run on the active node? Does svnserve use file-level locking, or is it all in memory? What are the implications of running svnserve without its repositories accessible? Please let me know if this isn't clear, and I'll try my best to rephrase/clarify. :)

    Read the article

  • Where are my Git/Ungit packages?

    - by T?n Tri?n Nguy?n
    I've installed the following packages:

        node --version : v0.10.4
        npm --version  : 1.2.18
        git --version  : 1.7.1

    and I used this command:

        npm install -g ungit

    I want to use Ungit/Git via Apache, but I don't know where the Git/Ungit DocumentRoot is, so that I can define it in the port-80 virtual host. I've tried searching for a folder named git or ungit, but nothing looks quite right. Can anybody help me with this? Many thanks.
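    A small sketch of where to look, with the caveat (an assumption worth checking against the ungit README) that ungit is not a set of static files Apache can serve from a DocumentRoot; it runs its own Node web server, which an Apache virtual host would have to reverse-proxy to:

        # where npm put the globally installed package and its launcher
        npm root -g        # e.g. /usr/lib/node_modules (ungit lives under here)
        which ungit

        # start ungit; by default it serves its own web UI on localhost:8448,
        # which Apache could proxy to rather than serving files directly
        ungit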

    Read the article

  • Opscode Chef Ohai plugin - How to get a custom plugin to run automatically?

    - by JDS
    The Ohai docs are incomplete. Here's what I've been able to do so far:

    - I've created a custom plugin that adds one piece of node data called "my_custom_data".
    - It works when I load it manually in IRB.
    - I've used the Ohai cookbook to get it onto the servers that need it.

    However, Ohai doesn't load it, neither during Chef runs nor when I run ohai manually. The docs at http://docs.opscode.com/ohai.html are of little use in answering this question.
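    One hedged thing to check, assuming Ohai 6.x-era tooling and the default directory used by the Ohai cookbook: custom plugins are only loaded from directories on Ohai's plugin path, so the Chef client config usually needs a line like the following.

        # /etc/chef/client.rb -- add the directory the ohai cookbook deploys
        # plugins to (adjust the path if the cookbook was configured differently)
        Ohai::Config[:plugin_path] << "/etc/chef/ohai_plugins"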

    Read the article

  • Automated GUI tests fail when running from Jenkins

    - by adm
    Jenkins (master) is installed on a Linux system and runs automated tests on a slave node (Windows XP) via an SSH connection. All the GUI tests fail when run this way; when the GUI tests are run locally on the Windows XP system, they pass. I tried

        tscon.exe 0 /dest:console

    to redirect the session to the console, but I am getting the error:

        Could not connect sessionID 0 to sessionname console, Error code 7045
        Error [7045]: The requested session access is denied.

    Thanks

    Read the article

  • What DNS server to use for dynamic load-balancing of website?

    - by Marki555
    I will have 2 servers in different datacenters (different countries) and I want to use DNS load balancing, mainly for high availability of the website hosted on those 2 servers. It is just an ad-tracking site, which records each hit in a local database and returns a few lines of HTML. I want to return 2 A records each time because of DNS pinning in browsers (if one server fails, the browser will try the second A record, which it has already cached). Both servers will also act as DNS servers, for redundancy. Now my proposed solution: I will use BIND and have both servers as masters for the zone. On each server a script will run periodically, testing the availability (HTTP) of both servers and removing an IP from DNS in case of failure. Now the questions :)

    1) Is BIND suitable for this solution? I think BIND performance is good and it is easy to manipulate the zone file via a script. And as I will modify the zone only in case of failure/maintenance, the modifications (and thus BIND reloads) won't be frequent.

    2) I plan to use a TTL of 5 minutes. The website will get about 1000-3000 req/s, but from distinct clients (each IP only 1-3 requests), so I think the DNS load won't be too high. I suppose their ISPs will cache the responses for those 5 minutes. Is there any reason to lower the TTL even more?

    3) Is my master-master approach good, or should I make one of the servers a master and the other one a slave? Right now each server can monitor both itself and the other one. If only the web service fails, both DNS nodes will notice it. If the whole server fails, then the remaining DNS node will notice it, and the failed node will not answer DNS queries anyway.

    4) Is it a big issue when one NS server does not respond to queries? If yes, I can add a third DNS server, so that at any time at least 2 of them would accept queries...

    5) Should I rewrite the zone file via a script, or just use dynamic DNS updates (for example via the nsupdate utility)?
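    On question 5, a minimal sketch of the dynamic-update route (key path, zone, and addresses are placeholders; the zone would need allow-update for the key, and a dynamically updated zone should no longer be hand-edited):

        # remove the failed server's A record and leave the healthy one in place
        nsupdate -k /etc/bind/ddns.key <<EOF
        server 127.0.0.1
        zone example.com.
        update delete www.example.com. A 192.0.2.10
        send
        EOF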

    Read the article

  • Running Fedora 8, never upgraded. How to do so?

    - by TreyK
    Hey all, I'm a student working on a website for my robotics team. I've recently decided to experiment with a node.js/CouchDB setup instead of our current LAMP configuration. While trying to install these systems, I was appalled to discover that our current version of Fedora (version 8) is almost two years past EOL. If I were to upgrade our server, what version of Fedora should I install, and how should I do this? Thanks, -Trey

    Read the article

  • Set LD_LIBRARY_PATH and CLASSPATH on cluster nodes before running a hadoop job

    - by Ashish Sharma
    I need to set LD_LIBRARY_PATH and CLASSPATH before running a job on the cluster. In LD_LIBRARY_PATH I need to add the location of some jars that are required while running the job, as these jars are already available on my cluster; similarly for CLASSPATH. I have a 3-node cluster, and I need to set LD_LIBRARY_PATH and CLASSPATH on all 3 data nodes so that the jars are available while the job runs.
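    A hedged sketch of the usual places these get set on a classic Hadoop 1.x-style cluster (the paths are placeholders, and the files live under $HADOOP_HOME/conf on each of the three nodes):

        # hadoop-env.sh -- picked up by the Hadoop daemons and the `hadoop` launcher
        export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:/opt/myjars/*"
        export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/opt/mynative/lib"

        # for the task JVMs themselves, mapred-site.xml can carry the same setting:
        #   <property>
        #     <name>mapred.child.env</name>
        #     <value>LD_LIBRARY_PATH=/opt/mynative/lib</value>
        #   </property>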

    Read the article

  • Accountability in a cloud infrastructure (Amazon, etc)

    - by WinkyWolly
    I was curious how companies such as Amazon would handle some sort of investigation that needed to look into data potentially stored on one of their on-demand nodes. What typically happens to data in an environment like this after the VM is destroyed (literally what happens on the disk / FS)? Would it actually be possible to recover data from a destroyed node? Just a curiosity :)

    Read the article

  • Creating a file with Puppet with facts from multiple hosts

    - by Belly
    I'm trying to have Puppet build a configuration file that looks like this:

        [All]
        Hosts=apt-dater@puppetmaster;apt-dater@blaster; (etc...)

    Basically, this file needs an entry for each node that includes the apt-dater class. I've been experimenting with exported resources, but I can't find a clean way of putting it together. How should I go about creating this file?
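    One hedged way to sketch this, assuming storeconfigs/PuppetDB is enabled and the puppetlabs-concat module is available (the file path, tag, and order values are placeholders): every apt-dater node exports a fragment, and the host that owns the file collects them.

        # on every node that includes the apt-dater class: export one fragment
        @@concat::fragment { "apt-dater-host-${::fqdn}":
          target  => '/etc/apt-dater/hosts.conf',
          content => "apt-dater@${::fqdn};",
          order   => '10',
          tag     => 'apt_dater_host',
        }

        # on the node that builds the file: the header plus all collected fragments
        concat { '/etc/apt-dater/hosts.conf': }

        concat::fragment { 'apt-dater-header':
          target  => '/etc/apt-dater/hosts.conf',
          content => "[All]\nHosts=",
          order   => '01',
        }

        Concat::Fragment <<| tag == 'apt_dater_host' |>>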

    Read the article

  • How to configure `lxc` like `openvz`

    - by cam
    I was going to fire up an OpenVZ node to test out some software, but it looks like OpenVZ is no longer supported in Ubuntu (deprecated in favor of lxc). It looks like lxc can do more than simply virtualize an entire system, and I'm having trouble finding good documentation that explains how I can start a full virtualized system (using an OpenVZ template or something similar). Could someone give me some pointers or direct me to some good documentation?
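    For the OpenVZ-style "whole system in a container" case, a minimal sketch with the stock lxc tools (the container name and template are placeholders; the templates shipped with the lxc package vary by release):

        sudo apt-get install lxc
        sudo lxc-create -t ubuntu -n mycontainer    # build a full-system container from a template
        sudo lxc-start -n mycontainer               # boot it (add -d to run in the background)
        sudo lxc-console -n mycontainer             # attach to its console (Ctrl-a q to detach)
        sudo lxc-stop -n mycontainer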

    Read the article

  • Nginx refuses to bind to 8080

    - by Stofke
    I have set up Varnish to run on port 80, which seems to work fine:

        COMMAND    PID   USER  FD  TYPE DEVICE SIZE/OFF NODE NAME
        varnishd  8005 nobody   7u IPv4  14055      0t0  TCP *:http (LISTEN)
        varnishd  8005 nobody   8u IPv6  14056      0t0  TCP *:http (LISTEN)

    Under the available sites in /etc/nginx I have the file default with:

        server {
            listen 8080;
            ....

    But nginx still fails with:

        nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)

    Why is it still trying to bind to port 80?
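    A quick sketch of how to find the offending directive; the usual culprit is another enabled server block (often the stock default site) that still listens on 80, and a server block with no listen directive at all also defaults to port 80:

        grep -rn "listen" /etc/nginx/nginx.conf /etc/nginx/conf.d/ /etc/nginx/sites-enabled/
        sudo nginx -t          # re-test the configuration after editing
        sudo lsof -i :80       # confirm only varnishd holds port 80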

    Read the article

  • Proper setup of shared folders for users

    - by user221486
    First I would like to say thanks for helping. I have a huge problem with setting up proper permissions on shared folders. I have:

    - Windows 7 x64 Enterprise, name: backupfb, joined to the domain, with a shared folder on drive E: (E:\backup)
    - 50 clients/laptops with Tivoli Storage Manager FastBack for Workstations, which save files to the shared folder

    I need to configure the permissions on my shared folders so that only the owner of a folder can access it. The folder structure is:

        E:\backup                 <- shared as the "backup" folder (\\backupfb\backup\)
        E:\backup\BackupAdmin     <- used by the Tivoli Storage Manager FastBack for Workstations client to download
                                     revisions and configurations; nodes require read-only access to these directories
        E:\backup\RealTimeBackup  <- client accounts create directories here that should be accessible only by the
                                     account that created them; as a result, the directory that contains data for a
                                     node is not created until that node connects to the server

    So the permissions should look like this (taken from the instructions), with inheritable permissions from the object's parent DISABLED.

    Permission entries for \\backupfb\backup\BackupAdmin:

        Allow  Users           Read, Execute   This folder, subfolders, and files
               (Traverse Folder / Execute: Allow
                List Folder / Read Data: Allow
                Read Attributes: Allow
                Read Extended Attributes: Allow
                Delete Subfolders and Files: Allow
                Delete: Allow
                Read Permissions: Allow)
        Allow  Administrators  Full Control    This folder, subfolders, and files

    Both folders have the option "Apply these permissions to objects and/or containers within this container only" enabled. Here everything works fine.

    Permission entries for \\backupfb\backup\RealTimeBackup:

        Allow  Administrators        Full Control   This folder, subfolders, and files
        Allow  CREATOR OWNER         Full Control   This folder, subfolders, and files
        Allow  Users (from domain)   Special        This folder only
               (Traverse Folder / Execute: Allow
                List Folder / Read Data: Allow
                Read Attributes: Allow
                Read Extended Attributes: Allow
                Create Files / Write Data: Allow
                Create Folders / Append Data: Allow
                Delete Subfolders and Files: Allow
                Read Permissions: Allow)
        Allow  OWNER RIGHTS*          Full Control  This folder, subfolders, and files

    Here I have a huge problem with CREATOR OWNER: I am able to set Full Control, but I can only apply it to "Subfolders and files only". When I change the scope to "This folder, subfolders and files" and save, it changes back to "Subfolders and files only". So I tried using icacls to set up the permissions:

        @echo off
        takeown /F E:\backup\ /R /A
        for /D %%i IN (E:\backup\RealTimeBackup\*) DO icacls E:\backup\RealTimeBackup\%%~nxi /grant:r cloud\%%~nxi:F /T /C
        pause

    but after that, users are able to create just one folder in \\backupfb\backup\RealTimeBackup\userfolder; the problem is with subfolders. In the log I have:

        FBW5022E Unable to access the specified file
        Explanation: The file specified is unable to be accessed. Possibly spelled incorrectly, or bad path, or permissions.
        User response: Ensure the user has the proper permissions for the file and directories involved and that the file and directory exist

    Any ideas? Please help ;-) Thanks
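    For what it's worth, the GUI reverting CREATOR OWNER from "This folder, subfolders and files" to "Subfolders and files only" is normal behaviour: CREATOR OWNER is an inherit-only placeholder that is stamped onto new children. A hedged icacls sketch of the usual pattern (the share path and group names are taken from the question; everything else is an assumption):

        :: reset inheritance on the top-level folder, then: admins get everything,
        :: ordinary users may only create/traverse at the top level, and whoever
        :: creates a folder receives full control of it via the inherit-only
        :: CREATOR OWNER entry
        icacls E:\backup\RealTimeBackup /inheritance:r
        icacls E:\backup\RealTimeBackup /grant:r "Administrators":(OI)(CI)F
        icacls E:\backup\RealTimeBackup /grant:r "Users":(X,RD,RA,REA,AD,RC)
        icacls E:\backup\RealTimeBackup /grant:r "CREATOR OWNER":(OI)(CI)(IO)F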

    Read the article

  • What kind of scaling method is it, when you add new software to a single server to handle more users? [on hold]

    - by Phil
    I have read about scaling (in terms of terminology and methods), and this got me confused about the following. On a single computer running a web server (say Apache), the system administrator adds a caching reverse proxy in front, such as Varnish, which in that scenario increases the number of requests the server is able to handle. My question: setting up such a cache increases the capacity of the server to handle work, and hence scales it, but without increasing either the number of nodes or the node's hardware capacity. What is the name for this type of scaling?

    Read the article

  • How to set up GIMP add-ons

    - by Juza
    I'm trying to set up the androidicon.py file after downloading it from the internet, but I cannot find the Android Icon batch mode and Android Icon menu items, even after restarting GIMP. What I did is as follows:

    - Downloaded it from http://registry.gimp.org/node/25274
    - Ctrl+clicked on the androidicon.py.txt link and saved it as the file "androidicon.py"
    - Copied it to the plug-ins folder
    - Restarted GIMP
    - Confirmed that the menu items "Android Icon batch mode" and "Android Icon" are not shown

    Could you tell me how to fix this?
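    A short sketch of the usual checks on Linux (assuming GIMP 2.8 and that Python-Fu support is installed; adjust the version in the path to match your install):

        # the plug-in must end in .py (not .py.txt), live in the user's plug-ins
        # directory, and be executable
        cp androidicon.py ~/.gimp-2.8/plug-ins/
        chmod +x ~/.gimp-2.8/plug-ins/androidicon.py
        # then restart GIMP; check that Filters -> Python-Fu exists, otherwise
        # install the gimp-python / python-fu package for your distribution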

    Read the article

  • XSL testing empty strings with <xsl:if> and sorting

    - by AdRock
    I am having trouble with a template that has to check 3 different nodes and, if they are not empty, print the data I am using for each node and then do the output, but it is not printing anything. It is as if the test returns zero. I have selected the parent node of each node I want to check the length of as the template match, but it still doesn't work. Another thing: how do I sort the list using <xsl:sort>? I tried using this, but I get an error about loading the stylesheet; if I take out the sort it works:

        <xsl:template match="folktask/member">
          <xsl:if test="user/account/userlevel='3'">
            <xsl:sort select="festival/event/datefrom"/>
            <div class="userdiv">
              <xsl:apply-templates select="user"/>
              <xsl:apply-templates select="festival"/>
            </div>
          </xsl:if>
        </xsl:template>

        <xsl:template match="festival">
          <xsl:apply-templates select="contact"/>
        </xsl:template>

    This should hopefully finish all my stylesheets. This is the template I am calling:

        <xsl:template match="contact">
          <xsl:if test="string-length(contelephone)!=0">
            <div class="small bold">TELEPHONE:</div>
            <div class="large">
              <xsl:value-of select="contelephone/." />
            </div>
          </xsl:if>
          <xsl:if test="string-length(conmobile)!=0">
            <div class="small bold">MOBILE:</div>
            <div class="large">
              <xsl:value-of select="conmobile/." />
            </div>
          </xsl:if>
          <xsl:if test="string-length(fax)!=0">
            <div class="small bold">FAX:</div>
            <div class="large">
              <xsl:value-of select="fax/." />
            </div>
          </xsl:if>
        </xsl:template>

    And here is a section of my XML. If you need me to edit my post so you can see the full code I will, but the rest works fine.

        <folktask>
          <member>
            <user id="4">
              <personal>
                <name>Connor Lawson</name>
                <sex>Male</sex>
                <address1>12 Ash Way</address1>
                <address2></address2>
                <city>Swindon</city>
                <county>Wiltshire</county>
                <postcode>SN3 6GS</postcode>
                <telephone>01791928119</telephone>
                <mobile>07338695664</mobile>
                <email>[email protected]</email>
              </personal>
              <account>
                <username>iTuneStinker</username>
                <password>3a1f5fda21a07bfff20c41272bae7192</password>
                <userlevel>3</userlevel>
                <signupdate>2010-03-26T09:23:50</signupdate>
              </account>
            </user>
            <festival id="1">
              <event>
                <eventname>Oxford Folk Festival</eventname>
                <url>http://www.oxfordfolkfestival.com/</url>
                <datefrom>2010-04-07</datefrom>
                <dateto>2010-04-09</dateto>
                <location>Oxford</location>
                <eventpostcode>OX1 9BE</eventpostcode>
                <coords>
                  <lat>51.735640</lat>
                  <lng>-1.276136</lng>
                </coords>
              </event>
              <contact>
                <conname>Stuart Vincent</conname>
                <conaddress1>P.O. Box 642</conaddress1>
                <conaddress2></conaddress2>
                <concity>Oxford</concity>
                <concounty>Bedfordshire</concounty>
                <conpostcode>OX1 3BY</conpostcode>
                <contelephone>01865 79073</contelephone>
                <conmobile></conmobile>
                <fax></fax>
                <conemail>[email protected]</conemail>
              </contact>
            </festival>
          </member>
        </folktask>
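    On the sort error, a hedged sketch (element and attribute names taken from the XML above): xsl:sort is only valid as a child of xsl:apply-templates or xsl:for-each, so one option is to move it out of the member template and into the instruction that selects the members:

        <!-- select only userlevel-3 members and hand them to the member template,
             sorted by their festival start date -->
        <xsl:template match="folktask">
          <xsl:apply-templates select="member[user/account/userlevel='3']">
            <xsl:sort select="festival/event/datefrom"/>
          </xsl:apply-templates>
        </xsl:template>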

    Read the article
