Search Results

Search found 5157 results on 207 pages for 'node'.


  • How to parse HTML with TouchXML or some other alternative.

    - by 0SX
    Hi, I'm trying to parse the HTML presented below with TouchXML, but it keeps crashing when I try to extract certain attributes. I'm totally new to the parser world, so I apologize for being a complete idiot. I need help to parse this HTML. What I'm trying to accomplish is to parse each attribute and value and copy them into a string. I've been trying to find a good parser for HTML, and I believe TouchXML is the best I've seen because of Tidy. Speaking of Tidy, how could I run this HTML through Tidy first and then parse it? I'm not sure how to do this. Here is the code I have so far; it doesn't work because it isn't pulling everything I need from the HTML. Any help or advice would be much appreciated. Thanks. My current code:
        NSMutableArray *res = [[NSMutableArray alloc] init];
        // using local resource file
        NSString *XMLPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"example.html"];
        NSData *XMLData = [NSData dataWithContentsOfFile:XMLPath];
        CXMLDocument *doc = [[[CXMLDocument alloc] initWithData:XMLData options:0 error:nil] autorelease];
        NSArray *nodes = NULL;
        nodes = [doc nodesForXPath:@"//div" error:nil];
        for (CXMLElement *node in nodes) {
            NSMutableDictionary *item = [[NSMutableDictionary alloc] init];
            [item setObject:[[node attributeForName:@"id"] stringValue] forKey:@"id"];
            [res addObject:item];
            [item release];
        }
        NSLog(@"%@", res);
        [res release];
    HTML file that needs to be parsed:
        <html> <head> <base target="_blank" /> </head> <body style="margin:2;">
        <div id="group">
        <div id="groupURL"><a href="http://www.example.com/groups">Group URL</a></div>
        <img id="grouplogo" src="http://images.example.com/groups/image.png" />
        <div id="groupcomputer"><a href="http://www.example.com/groups/page" title="Group Title">Group title this would be here</a></div>
        <div id="groupinfos">
        <div id="groupinfo-l">Person</div><div id="groupinfo-r">Ralph</div>
        <div id="groupinfo-l">Years</div><div id="groupinfo-r">4 years</div>
        <div id="groupinfo-l">Salary</div><div id="groupinfo-r">100K</div>
        <div id="groupinfo-l">Other</div><div id="groupoth" style="width:15px">other info</div>
        </body> </html>
    EDIT: I could use Element Parser, but I need to know how to extract the person's name from the following example, which would be Ralph in this case: <div id="groupinfo-l">Person</div><div id="groupinfo-r">Ralph</div>
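
    For the EDIT at the end: since TouchXML's -nodesForXPath:error: is already being used above, one option (a sketch, assuming the document has been run through Tidy first, because the <div id="groupinfos"> block in the sample is never closed) is to select the label div by its text and then take its following sibling:

        //div[@id='groupinfo-l'][text()='Person']/following-sibling::div[@id='groupinfo-r'][1]

    Calling stringValue on the element returned by that XPath should give "Ralph" for the sample above.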

    Read the article

  • linux audit - exclude a process that updates the time

    - by user185704
    I have set my auditd rules to log when the system time is changed However, our servers are VMs and thus have problems with the time drifting out. We needed to solve this issue so we used a VMware tool to regularly synchronize the time. My problem now is that my audit logs are overwhelmed with time change entries like this: Jun 1 15:08:39 ***** audispd: node=****** type=SYSCALL msg=audit(1338559719.053:344291): arch=c000003e syscall=159 success=yes exit=5 a0=7ffff2084050 a1=0 a2=144b a3=485449575f4c4c55 items=0 ppid=1 pid=1348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="vmtoolsd" exe="/usr/lib/vmware-tools/bin64/appLoader" key="time_change" How can I exclude this vmware tool from the audit, but still capture a user changing the time? Here are my current audit rules to capture time changes: -a always,exit -F arch=b32 -S adjtimex -S settimeofday -k time_change -a always,exit -F arch=b32 -S clock_settime -k time_change
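
    One possible refinement (a sketch, not tested against this setup): newer audit releases accept an exe field in rules, which would let the time_change rules skip events generated by the vmtoolsd binary shown in the log while still recording everyone else. On older auditd versions the exe field is not supported and auditctl will reject these lines:

        -a always,exit -F arch=b64 -S adjtimex -S settimeofday -F exe!=/usr/lib/vmware-tools/bin64/appLoader -k time_change
        -a always,exit -F arch=b64 -S clock_settime -F exe!=/usr/lib/vmware-tools/bin64/appLoader -k time_change

    (The logged event is a 64-bit syscall, arch=c000003e, so b64 rules are the ones that matter for these entries; the existing b32 rules can keep their current form.)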

    Read the article

  • WebSphere Application Server EJB Optimization

    - by Chris Aldrich
    We are working on developing a Java EE based application. Our application is Java 1.5 compatible and will be deployed to WAS ND 6.1.0.21 with the EJB 3.0 and Web Services feature packs. The configuration is currently one cell with two clusters, and each cluster will have two nodes. Our application, or our system as I should rather say, comes in two or three parts. Part 1: an EAR deployed to one cluster that contains 3rd party vendor code combined with customization code. Their code is EJB 2.0 compliant and has a lot of remote home interfaces. Part 2: an EAR deployed to the same cluster as the first EAR. This EAR contains EJB 3's that make calls into the EJB 2's supplied by the vendor and the custom code. These EJB 3's are used by the JSF UI also packaged with the EAR, and some of them are also exposed as web services (JAX-WS 2.0 with SOAP 1.2 compliance) for other clients. Part 3: there may be other services that do not depend on our vendor/custom code app. These services will be EJB 3.0's and web services that are deployed to the other cluster.
    Per a recommendation from some IBM staff on site here, communication between nodes in a cluster can be EJB RMI, but if we are going across clusters and/or other cells, then the communication should be web services. That said, some of us are wondering about performance and optimizing communication for speed in the applications that will use our web services and EJBs. Right now most EJBs are exposed as remote (and our vendor set theirs up that way, rather than also exposing local home interfaces). We are wondering if WAS does any optimizations between apps in the same node/cluster node space. If two apps are installed in the same area and they call each other via a remote home interface, is WAS smart enough to make it a local home interface call? Are there other optimization techniques? Should we consider them? Should we not? What are the costs/benefits?
    Here is the question from one of our team members, as sent in their email: Supposing we develop our EJBs as remote EJBs, where our UI controller code is talking to our EXT Java services via EJB3...what are our options for performance optimization when both the EJB server and client are running in the same container? As one point of reference, Google has given me some very old WebSphere performance tuning documentation from 2000 that explains a tuning configuration you can set to enable Call By Reference for EJB communication when they're in the same application server JVM. It states the following:
        Because EJBs are inherently location independent, they use a remote programming model. Method parameters and return values are serialized over RMI-IIOP and returned by value. This is the intrinsic RMI "Call By Value" model. WebSphere provides the "No Local Copies" performance optimization for running EJBs and clients (typically servlets) in the same application server JVM. The "No Local Copies" option uses "Call By Reference" and does not create local proxies for called objects when both the client and the remote object are in the same process. Depending on your workload, this can result in a significant overhead savings. Configure "No Local Copies" by adding the following two command line parameters to the application server JVM:
            -Djavax.rmi.CORBA.UtilClass=com.ibm.CORBA.iiop.Util
            -Dcom.ibm.CORBA.iiop.noLocalCopies=true
        CAUTION: The "No Local Copies" configuration option improves performance by changing "Call By Value" to "Call By Reference" for clients and EJBs in the same JVM. One side effect of this is that the Java object derived (non-primitive) method parameters can actually be changed by the called enterprise bean. Consider Figure 16a.
    We will also be using Process Server 6.2 and WESB 6.2 in the future. Any ideas or recommendations? Thanks
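
    Since the vendor beans only expose remote home interfaces, "No Local Copies" is the main lever for that layer, but for the EJB 3's the team controls there is another option: expose a local business interface alongside the remote one, so co-located callers (the JSF layer packaged in the same EAR) get pass-by-reference semantics without a JVM-wide switch. A minimal sketch with hypothetical names, not taken from the application:

        import javax.ejb.EJB;
        import javax.ejb.Local;
        import javax.ejb.Remote;
        import javax.ejb.Stateless;

        @Local
        interface PricingServiceLocal { String quote(String sku); }

        @Remote
        interface PricingServiceRemote { String quote(String sku); }

        // One bean, two views: in-JVM callers inject the local view,
        // external clients keep using the remote view / web service.
        @Stateless
        public class PricingServiceBean implements PricingServiceLocal, PricingServiceRemote {
            public String quote(String sku) { return "price-for-" + sku; }
        }

        // Co-located client (e.g. a JSF managed bean in the same EAR):
        class PricingController {
            @EJB
            private PricingServiceLocal pricing; // no RMI-IIOP serialization on this path
        }

    Unlike "No Local Copies", this changes behaviour only for the beans that opt in, which keeps the call-by-reference side effect described in the quoted documentation contained.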

    Read the article

  • Linux HA - Best Heartbeat hardware solution

    - by Martino Dino
    Hi all, I would like to ask what the best layer-2 medium for Heartbeat on Linux is, and how it is best configured. More precisely, I've been thinking about a dedicated NIC for that purpose, but then I thought that if a switch breaks I would lose the heartbeat connection for most of the cluster and STONITH 'BUM'!!! I will probably lose my job after that :) Distributing the heartbeat onto the main NICs of every node through a VIF sounds reasonable, but I'm not sure if this is the best option (at least the switches are redundant to some extent). Is it possible to use Heartbeat over a bonded interface, and does that sound reasonable? Do you have any other tips or solutions for this issue?
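
    A sketch of the bonded-interface idea in ha.cf, assuming a bond0 made of two NICs cabled to two different switches, plus a serial link as an extra heartbeat path that is independent of the switching layer (interface names, addresses and timings are examples only):

        # /etc/ha.d/ha.cf (fragment)
        ucast bond0 10.0.0.2        # peer's address on the bonded interface
        serial /dev/ttyS0           # optional second path, independent of the network
        baud 19200
        keepalive 2
        warntime 10
        deadtime 30
        initdead 120
        node nodeA nodeB

    Heartbeat itself is happy to run over a bond; the bonding mode (e.g. active-backup across the two switches) is configured at the OS level, not in ha.cf.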

    Read the article

  • Can't set ACLs for AFS even though I'm in the right group

    - by Torandi
    I've got a directory that I've lost control over in an AFS system. According to the system administrators, my admin subgroup (dsekt:admin) has rlidwka on the directory. I'm a member of this group (and I can list the members of the group and see my nick there), yet I can't set the ACLs for the directory. The pts output:
        $ pts membership dsekt:admin
        Members of dsekt:admin (id: -6813) are:
        /.../
        taran
    And my klist:
        $ klist
        Credentials cache: FILE:/tmp/krb5cc_56782
        Principal: [email protected]
    Both dsekt:admin and the directory are on the NADA.KTH.SE node.
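
    One thing worth checking before blaming the ACL itself (a guess based on the klist output): the Kerberos ticket alone is not enough for AFS, because the fileserver needs an AFS token derived from it. A sketch of the sequence, with a placeholder path:

        aklog                                       # turn the krb5 ticket into an AFS token for the cell
        tokens                                      # confirm a token for nada.kth.se is now present
        fs listacl /afs/nada.kth.se/path/to/dir
        fs setacl -dir /afs/nada.kth.se/path/to/dir -acl dsekt:admin all

    If tokens already shows a valid token for the cell, the group membership may have been added after the token was issued, in which case re-authenticating (kinit followed by aklog) and retrying fs setacl is the next step.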

    Read the article

  • NFS denies mount, even though the client is listed in exports

    - by ajdecon
    We have a couple of servers (part of an HPC cluster) in which we're currently seeing some NFS behavior which is not making sense to me. node1 exports its /lscratch directory via NFS to node2, mounted at /scratch/node1. node2 also exports its own lscratch, which is correspondingly mounted at /scratch/node2 on node1. Unfortunately, whenever I attempt to mount either NFS export on the opposite node, I get the following error: mount: node1:/lscratch failed, reason given by server: Permission denied This despite the fact that I have included first the IP range (10.6.0.0) and then the specific IPs (10.6.7.1, 10.6.7.2) in /etc/exports. Any suggestions? Edit to remove ambiguity: I've made sure that exports only contains either the range, or the specific IPs, not both at the same time.
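
    A detail that commonly produces exactly this "Permission denied" (a guess, since the exports file itself isn't shown): an address range in /etc/exports needs a netmask, so a bare 10.6.0.0 entry will not match the clients. A sketch of the export line with example mount options, followed by re-exporting and checking what the server is actually offering:

        # /etc/exports on node1 (and the equivalent on node2)
        /lscratch 10.6.0.0/16(rw,sync,no_root_squash)

        exportfs -ra          # re-read /etc/exports without restarting nfsd
        showmount -e node1    # run from node2: lists the exports the server is offering

    If showmount shows /lscratch with the expected range and the mount still fails, hosts.allow/hosts.deny and any firewall between the nodes are the next suspects.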

    Read the article

  • WMIC returns error when querying product

    - by Stu
    I'm trying to automate the installation of an MSI on my server; however, before the installation can go ahead I need to uninstall the previous version from the server. Searching on the internet I've found that WMIC is the tool required, but there seems to be a problem with the setup of WMI on the server. Running the following gives errors. From a command prompt:
        wmic
    then, inside the tool:
        /trace:on
        product get name
    This returns a long string of successes and one failure:
        FAIL: IEnumWbemClassObject->Next(WBEM_INFINITE, 1, -, -)
        Line: 396 File: d:\nt\admin\wmi\wbem\tools\wmic\execengine.cpp
        Node - ENTECHORELDEV
        ERROR: Code = 0x80041010 Description = The specified class is not valid. Facility = WMI
    I'm trying to run this on a standard install of Windows Server 2003 R2 with administrator privileges. Thanks, Stu
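
    Error 0x80041010 ("The specified class is not valid") while enumerating product usually means the class behind it, Win32_Product from the MSI provider, is not registered on that box. A commonly suggested repair on Server 2003 (a sketch, worth verifying on a test machine first):

        mofcomp %windir%\system32\wbem\msi.mof
        winmgmt /verifyrepository

    Once product get name succeeds, the uninstall itself can be driven the same way (the product name below is a placeholder):

        wmic product where name="Exact Product Name" call uninstall /nointeractive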

    Read the article

  • Distributed file systems

    - by Neeraj
    I need to implement a distributed storage system for a set of nodes (devices) connected in a mesh network. Basically, my design goals are:
        - The storage system should be capable of handling dynamic entry and exit of nodes.
        - Replication (for fault tolerance).
    For this I am thinking of using a distributed file system. Every node could access data in the other nodes in a transparent manner. Are there some simple, easily pluggable open-source implementations? Thanks for your thoughts!

    Read the article

  • Using AsyncTask to display data in ListView, but onPostExecute not being called

    - by sumisu
    I made a simple AsyncTask class to display data in ListView with the help of this stackoverflow question. But the AsyncTask onPostExecute is not being called. This is my code: public class Start extends SherlockActivity { // JSON Node names private static final String TAG_ID = "id"; private static final String TAG_NAME = "name"; // category JSONArray JSONArray category = null; private ListView lv; @Override public void onCreate(Bundle savedInstanceState) { setTheme(SampleList.THEME); //Used for theme switching in samples super.onCreate(savedInstanceState); setContentView(R.layout.test); new MyAsyncTask().execute("http://...."); // Launching new screen on Selecting Single ListItem lv.setOnItemClickListener(new OnItemClickListener() { @Override public void onItemClick(AdapterView<?> parent, View view, int position, long id) { // getting values from selected ListItem String name = ((TextView) view.findViewById(R.id.name)).getText().toString(); String cost = ((TextView) view.findViewById(R.id.mail)).getText().toString(); // Starting new intent Intent in = new Intent(getApplicationContext(), SingleMenuItemActivity.class); in.putExtra("categoryname", name); System.out.println(cost); in.putExtra("categoryid", cost); startActivity(in); } }); } public class MyAsyncTask extends AsyncTask<String, Void, ArrayList<HashMap<String, String>> > { // Hashmap for ListView ArrayList<HashMap<String, String>> contactList = new ArrayList<HashMap<String, String>>(); @Override protected ArrayList<HashMap<String, String>> doInBackground(String... params) { // Creating JSON Parser instance JSONParser jParser = new JSONParser(); // getting JSON string from URL category = jParser.getJSONArrayFromUrl(params[0]); try { // looping through All Contacts for(int i = 0; i < category.length(); i++){ JSONObject c = category.getJSONObject(i); // Storing each json item in variable String id = c.getString(TAG_ID); String name = c.getString(TAG_NAME); // creating new HashMap HashMap<String, String> map = new HashMap<String, String>(); // adding each child node to HashMap key => value map.put(TAG_ID, id); map.put(TAG_NAME, name); // adding HashList to ArrayList contactList.add(map); } } catch (JSONException e) { Log.e("log_tag", "Error parsing data "+e.toString()); } return contactList; } @Override protected void onPostExecute(ArrayList<HashMap<String, String>> result) { ListAdapter adapter = new SimpleAdapter(Start.this, result , R.layout.list_item, new String[] { TAG_NAME, TAG_ID }, new int[] { R.id.name, R.id.mail }); // selecting single ListView item lv = (ListView) findViewById(R.id.ListView); lv.setAdapter(adapter); } } } Eclipse: 11-25 11:40:31.896: E/AndroidRuntime(917): java.lang.RuntimeException: Unable to start activity ComponentInfo{de.essentials/de.main.Start}: java.lang.NullPointerException
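
    The stack trace points at onCreate() rather than at the task: lv is only assigned inside onPostExecute(), but lv.setOnItemClickListener(...) runs in onCreate() right after execute(), while lv is still null. A sketch of the likely fix, reusing the ids from the question: look the ListView up before starting the task, and let onPostExecute() just set the adapter.

        @Override
        public void onCreate(Bundle savedInstanceState) {
            setTheme(SampleList.THEME);
            super.onCreate(savedInstanceState);
            setContentView(R.layout.test);

            lv = (ListView) findViewById(R.id.ListView); // assign before it is used
            new MyAsyncTask().execute("http://....");

            lv.setOnItemClickListener(new OnItemClickListener() {
                @Override
                public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                    // ... same click handling as in the question ...
                }
            });
        }

    onPostExecute() can then drop its own findViewById call and simply do lv.setAdapter(adapter). The reported symptom ("onPostExecute not being called") is most likely a side effect of this crash rather than a separate problem.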

    Read the article

  • How to create a <tr> row and append/insert it into a table at run time in a web page with MSHTML

    - by madhu
    Hi I'm using IHTMLdocument2 to create Element This is my code: IHTMLdocument2 pDoc2;//it is initialized in ma code BSTR eTag = SysAllocString(L"TR"); IHTMLElement *pTRElmt = NULL; hr = pDoc2->createElement(eTag,&pTRElmt); if(FAILED(hr)) return hr; IHTMLDOMNode *pTRNode = NULL; hr = pTRElmt->QueryInterface(IID_IHTMLDOMNode, (void **)&pTRNode); if(FAILED(hr)) return hr; // create TD node IHTMLElement *pTDElmt = NULL; hr = pDoc2->createElement(L"TD",&pTDElmt); if(FAILED(hr)) return hr; IHTMLDOMNode *pTDNode = NULL; hr = pTDElmt->QueryInterface(IID_IHTMLDOMNode,(void **)&pTDNode); if(FAILED(hr)) return hr; IHTMLDOMNode *pRefNode = NULL; hr = pTRNode->appendChild(pTDNode,&pRefNode); if(FAILED(hr)) return hr; // create TEXT Node IHTMLDOMNode *pTextNode = NULL; hr = pDoc3->createTextNode(L"madhu", &pTextNode); if(FAILED(hr)) return hr; IHTMLDOMNode *pRefNod = NULL; hr = pTDNode->appendChild(pTextNode,&pRefNod); if(FAILED(hr)) return hr; //********* setting attributes for <tr> /* VARIANT bgclor; bgclor.vt = VT_I4; bgclor.lVal =0xC0C0C0; hr = newElem->setAttribute(L"bgcolor",bgclor,1); if(FAILED(hr)) return hr; VARIANT style; style.vt = VT_BSTR; style.bstrVal = SysAllocString(L"display: table-row"); hr = newElem->setAttribute(L"style",style,1); if(FAILED(hr)) return hr; VARIANT id; id.vt = VT_BSTR; id.bstrVal = SysAllocString(L"AttrRowMiddleName"); hr = newElem->setAttribute(L"id",id,1); if(FAILED(hr)) return hr; */ //create <td> for row <tr> /* VARIANT Name; Name.vt = VT_BSTR; Name.bstrVal = SysAllocString(L"MiddleName"); hr = newElem->setAttribute(L"name",Name,1); if(FAILED(hr)) return hr; VARIANT Type; Type.vt = VT_BSTR; Type.bstrVal = SysAllocString(L"text"); hr = newElem->setAttribute(L"type",Type,1); if(FAILED(hr)) return hr; VARIANT Value; Value.vt = VT_BSTR; Value.bstrVal = SysAllocString(L"button"); hr = newElem->setAttribute(L"value",Value,1); if(FAILED(hr)) return hr; */ //IHTMLDOMNode *pReturn = NULL; //hr = pParentNode->replaceChild(pdn,pFirstchild,&pReturn); //if(FAILED(hr)) // return hr; VARIANT refNode; refNode.vt = VT_DISPATCH; refNode.pdispVal = pDomNode; IHTMLDOMNode *pREfTochild = NULL; hr = pParentNode->insertBefore(pTRNode,refNode,&pREfTochild); if(FAILED(hr)) return hr; This is inserting something but not visible and inserting as and when tr tag comes I even tried with clone but same problem. pls anybody give right code for this

    Read the article

  • Cannot connect puppet agent to puppet master

    - by u123
    I have installed puppet 3.3.1 on a debian 7 machine (test-puppet-master) and the puppet agent on another debian 7 machine (test-puppet-agent/192.11.80.246) acting as a client. I start the master with: puppet master --verbose --no-daemonize And I start the agent with: puppet agent --server=test-puppet-master --no-daemonize --verbose Notice: Did not receive certificate which gives the following output on the master: Notice: Starting Puppet master version 3.3.1 Error: Could not resolve 192.11.80.246: no name for 192.11.80.246 Info: Inserting default '~ ^/catalog/([^/]+)$' (auth true) ACL Info: Inserting default '~ ^/node/([^/]+)$' (auth true) ACL Info: Inserting default '/file' (auth ) ACL Info: Inserting default '/certificate_revocation_list/ca' (auth true) ACL Info: Inserting default '~ ^/report/([^/]+)$' (auth true) ACL Info: Inserting default '/certificate/ca' (auth any) ACL Info: Inserting default '/certificate/' (auth any) ACL Info: Inserting default '/certificate_request' (auth any) ACL Info: Inserting default '/status' (auth true) ACL Info: Not Found: Could not find certificate test-puppet-agent Error: Could not resolve 192.11.80.246: no name for 192.11.80.246 Info: Not Found: Could not find certificate test-puppet-agent Error: Could not resolve 192.11.80.246: no name for 192.11.80.246 Info: Not Found: Could not find certificate test-puppet-agent Any ideas why the agent cannot connect?
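
    The "no name for 192.11.80.246" lines are just failed reverse lookups; the part that blocks the run is "Could not find certificate test-puppet-agent", meaning the master has no signed certificate (and apparently no pending request) for that node name. A sketch of the usual sequence, assuming name resolution is fixed first (e.g. /etc/hosts entries on both machines so test-puppet-master and test-puppet-agent resolve each other):

        # on the agent: (re)submit a certificate request and wait for it to be signed
        puppet agent --test --server=test-puppet-master --waitforcert 60

        # on the master: list pending requests, then sign the agent's
        puppet cert list
        puppet cert sign test-puppet-agent

    If a stale request or certificate is in the way, puppet cert clean test-puppet-agent on the master (plus removing /var/lib/puppet/ssl on the agent) lets the exchange start over.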

    Read the article

  • Difference between VMWare tools?

    - by tore-
    I'm currently writing a module for Puppet which installs VMware Tools on virtual nodes. I want to do this via yum and a yum repo. VMware has its own repo (http://packages.vmware.com/tools/esx/3.5latest/rhel5/x86_64/index.html) which I thought I could use, rather than creating my own. But then I noticed that their repo packages are a lot different from the RPM file used when installing VMware Tools on the node via "Install/Upgrade VMware Tools" in vSphere. Does anyone know what the real difference is? Does anyone have any preferences?
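
    For the repo half of the question, a sketch of a .repo file pointing at the URL above (the gpgcheck choice is a placeholder; in practice you would import VMware's packaging key and turn it on):

        # /etc/yum.repos.d/vmware-tools.repo
        [vmware-tools]
        name=VMware Tools OSP packages for RHEL5 x86_64
        baseurl=http://packages.vmware.com/tools/esx/3.5latest/rhel5/x86_64/
        enabled=1
        gpgcheck=0

    As for the difference: packages.vmware.com hosts the OSP builds of the tools, packaged natively for the distribution's package manager, while "Install/Upgrade VMware Tools" in vSphere mounts the ISO bundled with ESX and runs its own installer. They are meant to provide the same drivers and services but are maintained and upgraded through different channels, so the usual advice is to pick one mechanism per VM and stick with it.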

    Read the article

  • Setting up an Active-Active IIS Cluster with ARR - is it possible?

    - by Ahmed Zubair
    I would like to know if we can set up an active-active IIS cluster using Windows Cluster services that shares common storage for web content, WITHOUT the use of Windows NLB. I'm aware that this may not be a best practice or a recommended setup; however, the setup is to be configured as below: two web servers running IIS 7.5 (needing common storage for web content) for HA, and another set of two servers for a SQL cluster in active-passive mode for HA. Also, is it possible to enable ARR on a 2-node active-active IIS cluster for load balancing HTTP requests? I'd appreciate it if someone could reply with both the pros and cons of the setup.

    Read the article

  • AWS EC2 and build-essential

    - by Randy Hartmen
    Hi, I am trying to compile Node.js on Amazon EC2, but I can't even install "build-essential". Where's the problem? Thanks.
        sudo yum install build-essential
        Loaded plugins: fastestmirror, security
        Loading mirror speeds from cached hostfile (...)
        No package build-essential available.
        Error: Nothing to do
        ./configure
        Checking for program g++ or c++ : not found
        Checking for program icpc : not found
        Checking for program c++ : not found
        error: could not configure a cxx compiler! could not configure a cxx compiler!
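
    build-essential is a Debian/Ubuntu metapackage; the yum-based Amazon Linux and CentOS-style AMIs use package groups instead. A rough equivalent (package names may vary slightly by AMI release):

        sudo yum groupinstall "Development Tools"
        sudo yum install gcc-c++ openssl-devel    # g++ plus the OpenSSL headers Node's build typically wants

    After that, ./configure should be able to find a C++ compiler.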

    Read the article

  • netsh wlan add profile does not import the passphrase

    - by sirlancelot
    I exported a wireless network connection profile from a Windows 7 machine correctly connected to a WiFi network with a WPA-TKIP passphrase. The exported xml file shows the correct settings and a keyMaterial node which I can only guess is the encrypted passphrase. When I take the xml to another Windows 7 computer and import it using netsh wlan add profile filename="WiFi.xml", it correctly adds the profile's SSID and encryption type, but a balloon pops up saying that I need to enter the passphrase. Is there a way to import the passphrase along with all other settings?
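
    That is expected with a default export: the keyMaterial element is the passphrase encrypted with a key tied to the exporting machine, so another computer cannot decrypt it. One workaround (a sketch; it requires admin rights and writes the key in plain text, so protect or delete the file afterwards) is to export with the key in the clear and import that profile instead:

        netsh wlan export profile name="YourSSID" key=clear folder=C:\temp
        netsh wlan add profile filename="C:\temp\Wi-Fi-YourSSID.xml"

    The SSID, folder and resulting file name above are placeholders; the exported XML should then contain <protected>false</protected> and the plaintext key in keyMaterial.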

    Read the article

  • vim NERDTree shortcut to an existing function

    - by Ned Batchelder
    I want to use right-arrow to open a node in NERDtree. I see there is NERDTreeAddKeyMap, but I'm too much of a vimscript newb to know how to invoke it properly. I want right-arrow to invoke activateNode. I've done it by adding this line into NERD_tree.vim itself: exec "nnoremap <silent> <buffer> <Right> :call <SID>activateNode(0)<cr>" but I want to do it the right way in my .vimrc
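
    NERDTree exposes its default keys as g: variables, so the mapping can live in .vimrc without touching the plugin file. A sketch, assuming a NERDTree version that honours g:NERDTreeMapActivateNode (note this replaces the default 'o' binding rather than adding a second one):

        " in ~/.vimrc
        let g:NERDTreeMapActivateNode = '<Right>'

    If both 'o' and <Right> are wanted, NERDTreeAddKeyMap (mentioned in the question) called from a file under ~/.vim/after/plugin/ is the other route, but the variable above is the simpler fix.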

    Read the article

  • SQL Server 2008 Replication Promotion

    - by Stefan Mai
    I have a 4-node cluster, 1 subscriber and 3 publishers, all running SQL Server 2008 R2 Enterprise. The intention is that if the subscriber goes down, we can use one of the publishers to quickly build up its replacement. Our testing reveals a problem, though: the subscriber databases all have Not For Replication set to Yes on the identity columns so that they can maintain the identity set in the subscriber. This causes a problem when they become subscribers because now we don't have identity insert functionality: we get a primary key error. Any way to "promote" a subscriber to publisher?

    Read the article

  • Disable mouse wakeup from suspend on Ubuntu

    - by Shadyabhi
    When I suspend Ubuntu, I can just move the mouse and the computer will wake up. But I don't want the computer to wake up when I move the mouse. How can I do that? My /proc/acpi/wakeup file:
        shadyabhi@shadyabhi-desktop:~$ cat /proc/acpi/wakeup
        Device  S-state  Status    Sysfs node
        SLPB    S4       *enabled
        P32     S4       disabled  pci:0000:00:1e.0
        UAR1    S4       disabled  pnp:00:09
        ILAN    S4       disabled  pci:0000:00:19.0
        PEGP    S4       disabled
        PEX0    S4       disabled  pci:0000:00:1c.0
        PEX1    S4       disabled  pci:0000:00:1c.1
        PEX2    S4       disabled  pci:0000:00:1c.2
        PEX3    S4       disabled  pci:0000:00:1c.3
        PEX4    S4       disabled  pci:0000:00:1c.4
        PEX5    S4       disabled
        UHC1    S3       disabled  pci:0000:00:1d.0
        UHC2    S3       disabled  pci:0000:00:1d.1
        UHC3    S3       disabled  pci:0000:00:1d.2
        UHC4    S3       disabled
        EHCI    S3       disabled  pci:0000:00:1d.7
        EHC2    S3       disabled  pci:0000:00:1a.7
        UH42    S3       disabled  pci:0000:00:1a.0
        UHC5    S3       disabled  pci:0000:00:1a.1
        UHC6    S3       disabled  pci:0000:00:1a.2
        AZAL    S3       disabled  pci:0000:00:1b.0
        shadyabhi@shadyabhi-desktop:~$
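
    Since every USB controller (UHC*/EHC*) in that table is already disabled, the wake is probably being armed at the USB device level rather than through /proc/acpi/wakeup. A sketch of turning it off per device via sysfs (the 2-1 path is an example; find the entry whose product/manufacturer matches the mouse first):

        grep . /sys/bus/usb/devices/*/power/wakeup          # see which devices have wakeup enabled
        echo disabled | sudo tee /sys/bus/usb/devices/2-1/power/wakeup

    The setting does not survive a reboot, so once the right device is known, the echo line usually goes into /etc/rc.local or a udev rule.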

    Read the article

  • How to set up JBoss with S3_Ping on AWS?

    - by Jonik
    I'm looking into running clustered JBoss on Amazon Web Services (AWS). I'd like to try out S3_PING, i.e. making JBoss use an S3 bucket for dynamic node discovery etc, since no multicast is available. I found a piece of example config XML related to S3_Ping, but I'm not sure where in JBoss installation you're supposed to configure this. So, what JBoss config files would I need to tweak to get S3_PING working? Can anyone point me to a more complete example? JBoss 5.1.0 GA. (This is probably more a JGroups/JBoss question than anything else. I've already got the S3 bucket for this set up, so no problem there.)
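
    For orientation, the S3_PING element itself looks like the sketch below (bucket name and credentials are placeholders); it replaces the PING/MPING discovery protocol inside a TCP-based stack, since there is no multicast on EC2:

        <S3_PING location="my-jboss-ping-bucket"
                 access_key="AKIAXXXXXXXXXXXXXXXX"
                 secret_access_key="..."
                 timeout="3000"
                 num_initial_members="2"/>

    In JBoss 5.1.0.GA the clustering stacks are defined in the JGroups channel factory configuration of the all profile (typically deploy/cluster/jgroups-channelfactory.sar/META-INF/jgroups-channelfactory-stacks.xml; verify the exact path against your install), and the bundled JGroups may predate the release that introduced S3_PING, in which case a newer jgroups.jar is needed as well.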

    Read the article

  • How do I remove Slony from a restored PostgreSQL database?

    - by Scott Herbert
    I've restored a database which came from a server on which Slony was running. The server on which the database has been restored does not have Slony installed. When the database was restored, there were a lot of errors reported, with Slony-related objects not getting created because the Slony-related logins were missing. This I thought was not a problem, as losing the Slony objects didn't seem to matter, and in fact seemed desirable. However, now I've got an annoying, if not critical, problem. Whenever one clicks on a table in the newly restored DB in pgAdmin, a Slony-related error popup ... pops up. The first one reads: "An error has occured: ERROR: function _rmscl.getlocalnodeid(unknown) does not exist". I notice that under the Replication node in pgAdmin there is a Slony replication cluster. Trying to drop this cluster results in more missing-object errors. Does anyone have any ideas about how we can remove the last vestiges of Slony from this database?
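
    Given that the error references _rmscl, the leftovers are presumably the Slony cluster schema (Slony keeps all of its objects under _<clustername>). A sketch of removing it directly with psql, after taking a fresh backup, since a schema drop is not reversible:

        -- confirm the schema really only contains Slony objects
        \dn
        \df _rmscl.*

        DROP SCHEMA "_rmscl" CASCADE;

    After reconnecting, pgAdmin should stop calling _rmscl.getlocalnodeid() and the phantom entry under the Replication node should disappear.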

    Read the article

  • Manually accessing GMail via IMAP

    - by Jeff Mc
    I'm trying to connect to Gmail IMAP, but I am unable to execute any commands after login. I'm running openssl s_client -connect imap.gmail.com:993 to connect, then:
        * OK Gimap ready for requests from 128.146.221.118 42if6514983iwn.40
        . CAPABILITY
        * CAPABILITY IMAP4rev1 UNSELECT IDLE NAMESPACE QUOTA XLIST CHILDREN XYZZY SASL-IR AUTH=XOAUTH
        . OK Thats all she wrote! 42if6514983iwn.40
        . LOGIN {email removed} {password removed}
        * CAPABILITY IMAP4rev1 UNSELECT LITERAL+ IDLE NAMESPACE QUOTA ID XLIST CHILDREN X-GM-EXT-1 UIDPLUS COMPRESS=DEFLATE
        . OK {email removed} authenticated (Success)
        . CAPABILITY
    at which point it simply hangs with the connection open. I'm guessing Gmail pushes you off to a node in a cluster after it authenticates me?
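
    One client-side cause worth ruling out before blaming Gmail (a guess, not a confirmed diagnosis): by default openssl s_client sends bare LF line endings and treats interactive lines starting with R or Q as renegotiate/quit commands, either of which can make an IMAP session appear to hang after a command. Reconnecting with both behaviours disabled looks like this:

        openssl s_client -connect imap.gmail.com:993 -crlf -quiet

    -crlf makes the client send the CRLF terminators IMAP expects, and -quiet (which implies -ign_eof) stops the R/Q interpretation, so a post-login CAPABILITY should come back immediately if the connection itself is healthy.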

    Read the article

  • Chef bootstrap giving 401 unauthorized

    - by loddy1234
    I'm trying to bootstrap a new chef node by running:
        knife bootstrap <server ip> -x lewis -N gitlab --sudo
    But I get the following output:
        [Mon, 03 Sep 2012 14:45:17 +0000] INFO: *** Chef 10.12.0 ***
        [Mon, 03 Sep 2012 14:45:17 +0000] INFO: Client key /etc/chef/client.pem is not present - registering
        [Mon, 03 Sep 2012 14:45:17 +0000] INFO: HTTP Request Returned 401 Unauthorized: Failed to authenticate. Ensure that your client key is valid.
        [Mon, 03 Sep 2012 14:45:17 +0000] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
        [Mon, 03 Sep 2012 14:45:17 +0000] FATAL: Net::HTTPServerException: 401 "Unauthorized"
    My chef server is running Ubuntu 12.04 x32 and the machine I'm trying to bootstrap is running CentOS 6.3 x64. Any idea what's going wrong?
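
    The two usual culprits for a 401 during bootstrap are a validation key mismatch (the validation.pem the workstation supplies must be the one the Chef server currently knows) and clock skew, since Chef signs each request with a timestamp. A sketch of what to check, with example paths and URL:

        # ~/.chef/knife.rb on the workstation -- make sure these point at the right server and key
        chef_server_url        'http://chef-server.example.com:4000'
        validation_client_name 'chef-validator'
        validation_key         '/etc/chef/validation.pem'

        # on the CentOS node (and the server): bring the clocks into agreement, then re-run the bootstrap
        sudo ntpdate pool.ntp.org

    If the validation key has ever been regenerated on the server, copy the new validation.pem to the workstation before bootstrapping again.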

    Read the article

  • GNS3 Cannot ping/resolve DNS record

    - by Eldad Cohen
    I set up an internet lab with GNS3 that has 3 routers, with a computer directly connected to each node. One of the hosts is a DNS server running Windows Server 2003; the other is a Windows XP machine. Ping works between the routers and machines, but I cannot ping the domain.com record held on the Windows 2003 DNS server. I set up a static NAT on the router to route all traffic from the gateway to the DNS server's internal IP address, but there is still no answer to the DNS request. Any ideas or thoughts would be most welcome.

    Read the article

  • Puppet: how to use data from a MySQL table in Puppet 3.0 templates?

    - by Luke404
    I have some data whose source of truth is in a MySQL database; its size is expected to max out at a few thousand rows (in a worst-case scenario), and I'd like to use Puppet to configure files on some servers with that data (mostly iterating through those rows in a template). I'm currently using Puppet 3.0.x, and I cannot change the fact that MySQL will be the authoritative source for that data. Please note, the data comes from external sources and not from Puppet or from the managed nodes. What possible approaches are there? Which one would you recommend? Would External Node Classifiers be useful here? My "last resort" would be regularly dumping the table to a YAML file and reading that through Hiera into a Puppet template, or directly dumping the table into one or more pre-formatted text files ready to be copied to the nodes. There is an unanswered question on SF about system users, but the fundamental issue is probably similar to mine - he's trying to get data out of MySQL.
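
    A sketch of the "last resort" wiring, in case it helps to see it end-to-end (paths, key names and the cron-generated YAML are all examples): a scheduled job dumps the MySQL table into Hiera's datadir, and the manifests read it with hiera() and hand it to a template.

        # /etc/puppet/hiera.yaml
        :backends:
          - yaml
        :yaml:
          :datadir: /etc/puppet/hieradata
        :hierarchy:
          - common

        # /etc/puppet/hieradata/common.yaml  (regenerated from MySQL by cron)
        my_rows:
          - { name: 'alpha', value: '1' }
          - { name: 'beta',  value: '2' }

        # in a manifest
        $rows = hiera('my_rows')
        file { '/etc/myapp.conf':
          content => template('mymodule/myapp.conf.erb'),  # the ERB iterates over @rows
        }

    An ENC is less of a fit here: it classifies nodes and sets top-scope parameters, but it is not really designed to feed bulk row data into templates.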

    Read the article

  • puppet service resource not stopping the service

    - by Gregg Leventhal
    notice ("This should be echoed") service { "iptables": ensure => "stopped", } This does not stop iptables, I am not sure why. service iptables stop works fine. Puppet 2.6.17 on CentOS 6.3. UPDATE: /etc/puppet/manifests/nodes.pp node 'linux-dev' { include mycompany::install::apache::init include mycompany::config::services::init } /etc/puppet/modules/mycompany/manifests/config/services/init.pp class mycompany::config::services::init { if ($::id == "root") { service { 'iptables': #name => '/sbin/iptables', #enable => false, #hasstatus => true, ensure => stopped } notice ("IPTABLES is now being stopped...") file { '/tmp/puppet_still_works': ensure => 'present', owner => root } else { err("Error: this manifest must be run as the root user!") } }

    Read the article
