Search Results

Search found 1888 results on 76 pages for 'definitions'.

  • Name lookup for names not dependent on template parameter in VC++2008 Express. Is it a bug?

    - by Maciej H
    While experimenting a bit with C++ templates I managed to produce this simple code, whose output differs from what I expected according to my understanding of the C++ rules:

        void bar(double d) { std::cout << "bar(double) function called" << std::endl; }

        template <typename T>
        void foo(T t) { bar(3); }

        void bar(int i) { std::cout << "bar(int) function called" << std::endl; }

        int main() {
            foo(3);
            return 0;
        }

    When I compile this code in VC++2008 Express, the function bar(int) gets called. That would be the behaviour I would expect if the call bar(3); in the template body were dependent on the template parameter. But it is not. The rule I found here says: "The C++ standard prescribes that all names that are not dependent on template parameters are bound to their present definitions when parsing a template function or class." Am I wrong that the "present definition" of bar when parsing the template function foo is the definition of void bar(double d)? If I am wrong, why is that not the case? There are no forward declarations of bar in this compilation unit.
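    For contrast, here is a minimal sketch (not from the question; the Widget type is made up) of the one case where a later-declared overload legitimately participates in a dependent call - argument-dependent lookup at instantiation time, which never applies to a fundamental type like int:

        #include <iostream>

        struct Widget {};  // user-defined type, so ADL applies to it

        void bar(double) { std::cout << "bar(double)" << std::endl; }

        template <typename T>
        void foo(T t) { bar(t); }  // dependent call: ADL runs at instantiation

        void bar(Widget) { std::cout << "bar(Widget)" << std::endl; }

        int main() {
            foo(Widget()); // finds bar(Widget), declared after the template
            foo(3);        // int has no associated namespaces: bar(double)
            return 0;
        }

    Under two-phase lookup the question's non-dependent bar(3) should likewise bind to bar(double); VC++2008 is known to postpone the lookup to instantiation time, which is why it finds bar(int).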

  • Trouble using 'eval' to define a toplevel function when called from within an object.

    - by mschaef
    I've written (in JavaScript) an interactive read-eval-print loop that is encapsulated within an object. However, I recently noticed that top-level function definitions given to the interpreter do not appear to be "remembered" by it. After some diagnostic work, I've reduced the core problem to this:

        var evaler = {
            eval: function (str) { return eval(str); },
        };

        eval("function t1() { return 1; }");        // GOOD
        evaler.eval("function t2() { return 2; }"); // FAIL

    After running this script, I have a definition for t1 and no definition for t2. The act of calling eval from within evaler is sufficiently different from the top-level call that the global definition does not get recorded. What does happen is that the call to evaler.eval returns a function object, so I'm presuming that t2 is being defined and stored in some other set of bindings that I don't have access to. (It's not defined as a member in evaler.) Is there any easy fix for this? I've tried all sorts of fixes and haven't stumbled upon one that works. (Most of what I've done has centered around putting the call to eval in an anonymous function and altering the way that's called, changing __parent__, etc.) Any thoughts on how to fix this?

  • Templated method on TT inside a class templated on T: is that possible/correct?

    - by paercebal
    I have a class MyClass which is templated on typename T. But inside, I want a method which is templated on another type TT (which is unrelated to T). After reading/tinkering, I found the following notation:

        template <typename T>
        class MyClass
        {
        public:
            template <typename TT>
            void MyMethod(const TT & param);
        };

    For stylistic reasons (I like to have my templated class declaration in one header file, and the method definitions in another header file), I won't define the method inside the class declaration. So I have to write it as:

        template <typename T>   // this is the type of the class
        template <typename TT>  // this is the type of the method
        void MyClass<T>::MyMethod(const TT & param)
        {
            // etc.
        }

    I knew I had to "declare" the typenames used in the method, but didn't know exactly how, and found out through trial and error. The code above compiles on Visual C++ 2008, but: Is this the correct way to have a method templated on TT inside a class templated on T? As a bonus: Are there hidden problems/surprises/constraints behind this kind of code? (I guess the specializations can be quite amusing to write.)
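    As a sketch of the "amusing" specialization syntax alluded to above (assuming the MyClass definition given): an explicit specialization of the member template must pin down the enclosing class's argument as well - you cannot specialize MyMethod for all T at once.

        template <>                  // for the enclosing class
        template <>                  // for the member template
        void MyClass<int>::MyMethod<double>(const double & param)
        {
            // special handling for MyClass<int>::MyMethod(double)
        }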

  • What is the linkage of the following functions?

    - by Derui Si
    When I was reading the C++03 standard (7.1.1 Storage class specifiers [dcl.stc]), I came across the examples below, and I'm not able to tell how the linkage of each successive declaration is determined. Could anyone help here? Thanks in advance!

        static char* f();            // f() has internal linkage
        char* f() { /* ... */ }      // f() still has internal linkage

        char* g();                   // g() has external linkage
        static char* g() { /* ... */ } // error: inconsistent linkage

        void h();
        inline void h();             // external linkage

        inline void l();
        void l();                    // external linkage

        inline void m();
        extern void m();             // external linkage

        static void n();
        inline void n();             // internal linkage

        static int a;                // a has internal linkage
        int a;                       // error: two definitions

        static int b;                // b has internal linkage
        extern int b;                // b still has internal linkage

        int c;                       // c has external linkage
        static int c;                // error: inconsistent linkage

        extern int d;                // d has external linkage
        static int d;                // error: inconsistent linkage

    UPD: Additionally, how should I understand this statement in the standard: "The linkages implied by successive declarations for a given entity shall agree. That is, within a given scope, each declaration declaring the same object name or the same overloading of a function name shall imply the same linkage. Each function in a given set of overloaded functions can have a different linkage, however."
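    A sketch reading of that last sentence, using made-up overloads rather than the standard's examples: overloads are distinct functions, so the "shall agree" rule applies per overload, not per name.

        static void f(int);     // f(int): internal linkage
        void f(double);         // f(double): external linkage - fine, a different overload
        void f(int);            // f(int) keeps internal linkage, as with f() above
        static void f(double);  // error: f(double) already has external linkage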

  • LNK4221 and LNK4006 Warnings!

    - by user295030
    Hi, I am basically making a static library of my own. I have taken my code, which works, and put it into a static library for another program to use. In my library I am using another static library, which I don't want the people using my API to know about. Since I want to hide that information from them, I can't tell them to install the other static library. Anyway, I used the command-line tool Lib.exe to extract the .obj files I used and create a smaller .lib file from just those. However, I get a bunch of "LNK4006: second definition ignored" linker warnings for each .obj I use, followed by "LNK4221: no public symbols found; archive member will be inaccessible". I am doing this work in VS2008 and I am not sure what I am doing wrong. I am using the #pragma comment line in my .cpp file, and I have also modified the librarian settings to add my smaller .lib along with its location. My code simply makes calls to a couple of functions which it should be able to get from those .obj files in the smaller lib. All my functions are implemented in the .cpp file, and my headers just contain the includes of the third-party header files and some standard C++ header files. Nothing fancy; I actually have no function definitions in there at the moment. I was going to put the API definition in the header and implement it in the .cpp for this static lib I was going to make, but I wanted to build my code before I added more to it. Any help would be appreciated. Is this a VS2008 configuration issue, or a program issue? I am not sure. Thanks for the help!
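    For context, LNK4221 means an object file in the archive defines no public symbols, and LNK4006 means a symbol is defined by more than one archive member, so the second copy is ignored - together they typically indicate the same object files ended up in the combined archive twice. The #pragma mentioned above presumably looks something like this sketch ("thirdparty.lib" is a placeholder name):

        // In the library's .cpp file (MSVC-specific). Note this only embeds
        // a /DEFAULTLIB:"thirdparty.lib" directive in the object file; nothing
        // is actually linked until a final .exe/.dll is built, so also merging
        // the third-party objects in with Lib.exe can duplicate definitions.
        #pragma comment(lib, "thirdparty.lib")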

  • What is wrong with this append function in C?

    - by LuckySlevin
    My struct definitions:

        typedef struct inner_list {
            char word[100];
            inner_list *next;
        } inner_list;

        typedef struct outer_list {
            char word[100];
            inner_list *head;
            outer_list *next;
        } outer_list;

    And the problem part:

        void append(outer_list **q, char num[100], inner_list *p)
        {
            outer_list *temp, *r;
            temp = *q;
            char *str;
            if (*q == NULL) {
                temp = (outer_list *)malloc(sizeof(outer_list));
                strcpy(temp->word, num);
                temp->head = p;
                temp->next = NULL;
                *q = temp;
            } else {
                temp = *q;
                while (temp->next != NULL) {
                    temp = temp->next;
                }
                r = (outer_list *)malloc(sizeof(outer_list));
                strcpy(r->word, num);
                temp->head = p;
                r->next = NULL;
                temp->next = r;
            }
        }

    I don't know what I'm doing wrong in this append function. I'm sending a char array and a linked list to be stored in another linked list, but I can't store the linked list in the other linked list. I couldn't figure out the problem. Any ideas?
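    For comparison, a sketch of the else branch with the suspect line moved onto the new node (assuming the intent is that each appended outer node owns the inner list passed in with it; as written, temp->head = p overwrites the head of the last existing node and leaves r->head uninitialized):

        r = (outer_list *)malloc(sizeof(outer_list));
        strcpy(r->word, num);
        r->head = p;      /* was: temp->head = p; */
        r->next = NULL;
        temp->next = r;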

  • How to document an existing small web site (web application), inside and out?

    - by Ricket
    We have a "web application" which has been developed over the past 7 months. The problem is, it was not really documented. The requirements consisted of a small bulleted list from the initial meeting 7 months ago (it's more of a "goals" statement than software requirements). It has collected a number of features which stemmed from small verbal or chat discussions. The developer is leaving very soon. He wrote the entire thing himself and he knows all of the quirks and underlying rules to each page, but nobody else really knows much more than the user interface side of it; which of course is the easy part, as it's made to be intuitive to the user. But if someone needs to repair or add a feature to it, the entire thing is a black box. The code has some minimal comments, and of course the good thing about web applications is that the address bar points you in the right direction towards fixing a problem or upgrading a page. But how should the developer go about documenting this web application? He is a bit lost as far as where to begin. As developers, how do you completely document your web applications for other developers, maintainers, and administrative-level users? What approach do you use, where do you start, do you have a template? An idea of magnitude: it uses PHP, MySQL and jQuery. It has about 20-30 main (frontend) files, along with about 15 included files and a couple folders of some assets. So overall it's a pretty small application. It interfaces with 7 MySQL tables, each one well-named, so I think the database end is pretty self-explanatory. There is a config.inc.php file with definitions of consts like the MySQL user details, some from/to emails, and URLs which PHP uses to insert into emails and pages (relative and absolute paths, basiecally). There is some AJAX via jQuery. Please comment if there is any other information that would help you help me and I will be glad to edit it in.

  • Best practices to deal with "slightly different" branches of source code

    - by jedi_coder
    This question is version-control agnostic rather than tied to a particular program. Assume there is a source code tree under some distributed version control. Let's call it A. At some point somebody else clones it and gets their own copy. Let's call it B. I'll call A and B branches, even if some version control tools have different definitions for branches (some might call A and B repositories). Let's assume that branch A is the "main" branch. In the context of distributed version control this only means that branch A is modified much more actively, and the owner of branch B periodically syncs (pulls) new updates from branch A.

    Let's consider that a certain source file in branch B contains a class (again, this is language agnostic). The owner of branch B considers that some class methods are more appropriate grouped together, and moves them around inside the class body. Functionally nothing has changed - this is a very trivial refactoring of the code. But the change gets reflected in diffs. Now, assuming that this change from branch B will never get merged into branch A, the owner of branch B will always get this difference when pulling from branch A and merging into his own workspace. Even if there's only one such trivial change, the owner of branch B needs to resolve conflicts every time he pulls from branch A. As long as branches A and B are modified independently, more and more conflicts like this appear.

    What is the workaround for this situation? Which workflow should the owner of branch B follow to minimize the effort of periodically syncing with branch A?

  • Accessing elements of this XML

    - by csU
    Given this XML:

        <wsdl:definitions targetNamespace="http://www.webserviceX.NET/">
          <wsdl:types>
            <s:schema elementFormDefault="qualified" targetNamespace="http://www.webserviceX.NET/">
              <s:element name="ConversionRate">
                <s:complexType>
                  <s:sequence>
                    <s:element minOccurs="1" maxOccurs="1" name="FromCurrency" type="tns:Currency"/>
                    <s:element minOccurs="1" maxOccurs="1" name="ToCurrency" type="tns:Currency"/>
                  </s:sequence>
                </s:complexType>
              </s:element>
              <s:simpleType name="Currency">
                <s:restriction base="s:string">
                  <s:enumeration value="AFA"/>
                  <s:enumeration value="ALL"/>
                  <s:enumeration value="DZD"/>
                  <s:enumeration value="ARS"/>

    I am trying to get at all of the elements in the enumeration but can't seem to get it right. This is homework, so please no full solutions - just guidance if possible.

        $feed = simplexml_load_file('http://www.webservicex.net/CurrencyConvertor.asmx?WSDL');
        foreach ($feed->simpleType as $val) {
            $ns_s = $val->children('http://www.webserviceX.NET/');
            echo $ns_s->enumeration;
        }

    What am I doing wrong? Thanks

  • [Django] One single page to create a Parent object and its associated child objects

    - by ahmoo
    Hi all, this is my very first post on this awesome site, from which I have been finding answers to a handful of challenging questions. Kudos to the community! I am new to the Django world, so I am hoping to find help from some Django experts here. Thanks in advance.

    Item model:

        class Item(models.Model):
            name = models.CharField(max_length=50)

    ItemImage model:

        class ItemImage(models.Model):
            image = models.ImageField(upload_to=get_unique_filename)
            item = models.ForeignKey(Item, related_name='images')

    As you can tell from the model definitions above, every Item object can have many ItemImage objects. My requirements are as follows: a single web page that allows users to create a new Item while uploading the images associated with the Item. The Item and the ItemImage objects should be created in the database all together, when the "Save" button on the page is clicked. I have created a variable in a custom config file, called NUMBER_OF_IMAGES_PER_ITEM; it is based on this variable that the system generates the number of image fields per item.

    Questions: What should the forms and the template be like? Can ModelForm be used to achieve the requirements? For the view function, what do I need to watch out for, other than making sure to save the Item before the ItemImage objects?

  • C# language questions

    - by Water Cooler v2
    1) What is int? Is it any different from the struct System.Int32? I understand that the former is a C# alias (typedef or #define equivalent) for the CLR type System.Int32. Is this understanding correct?

    2) When we say:

        IComparable x = 10;

    Is that like saying:

        IComparable x = new System.Int32();

    But we can't new a struct, right? Or, in C-like syntax:

        struct System.Int32 *x;
        x->someThing = 10;

    3) What is String, with a capitalized S? I see in Reflector that it is the sealed String class, which, of course, is a reference type, unlike the System.Int32 above, which is a value type. What is string, with an uncapitalized s, though? Is that also the C# alias for this class? Why can I not see the alias definitions in Reflector?

    4) Try to follow me down this subtle train of thought, if you please. We know that a storage location of a particular type can only access properties and members on its interface. That means:

        Person p = new Customer();
        p.Name = "Water Cooler v2"; // legal, because Name is defined on Person

        // Illegal without an explicit cast: even though the backing
        // store is a Customer, the storage location is of type Person,
        // which doesn't support the member/method being accessed/called.
        p.GetTotalValueOfOrdersMade();

    Now, with that inference, consider this scenario:

        int i = 10; // obvious

        // System.Object defines no member to store an integer value
        // (or any other value) in. So, my question really is: when the
        // integer is boxed, what is the *type* it is actually boxed to?
        // In other words, what is the type that forms the backing
        // store on the heap for this operation?
        object x = i;

  • Scalable way to store files on server (PHP)?

    - by Nathaniel Bennett
    I'm creating my first web application - a really simplistic online text editor. What I need to do is find the best way to store text-based files - a lot of them. These text files can be over 10,000 words in size (text words, not computer words); in essence I want the text documents to be limitless in size. I was thinking about storing the text files in my MySQL database, but thought there was a better way. Instead I'm planning on storing the text files in XML-based format in a directory on my server. The rows in the database define the name of the XML-based text file and the user who created the text, along with basic metadata. An ID is generated using a V4 GUID generator, which gives the text an id, and the text is stored in the "/store" directory on my server. The text definitions on my server contain this id, and the Android app I'm developing gets the contents of the text file by retrieving the text definition and then downloading the text to the local device using the GUID in the text definition.

    I just think this is a botch job - how can I improve this system? There have been cases of GUIDs colliding. I don't want this to happen. A "slim" possibility isn't good enough - I need to make sure there is absolutely no chance of a GUID collision. I was planning on checking the database for texts that have the same id before storing the text with a particular id; however, I believe that with over 20,000 pieces of text in my database this would take a long time and put unneeded stress on the server. How can I make GUIDs safe? What happens when a GUID collides? The server backend is going to be written in PHP.
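    For scale, a birthday-bound sketch (assuming the ids come from a well-seeded version-4 generator, which is the real requirement): a V4 GUID carries 122 random bits, so for $n = 20{,}000$ ids the collision probability is roughly

        p \approx \frac{n(n-1)}{2 \cdot 2^{122}} \approx \frac{(2 \times 10^{4})^{2}}{2^{123}} \approx 4 \times 10^{-29},

    i.e. far below the chance of a silent disk or memory error. The collision cases seen in the wild come from buggy or poorly seeded generators, and a UNIQUE constraint on the id column turns even that failure mode into a retryable insert error rather than silent data loss.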

  • How to use a custom font with SimpleCursorAdapter for a list view in a Searchable Dictionary app? Please help me

    - by user1877275
    I am new to Android; this is my first project. I am trying to make a name dictionary in Bangla, so I need to change the font. I have already added the font to the assets folder.

        private void showResults(String query) {
            Cursor cursor = managedQuery(DictionaryProvider.CONTENT_URI, null, null,
                    new String[] {query}, null);

            if (cursor == null) {
                // There are no results
                mTextView.setText(getString(R.string.no_results, new Object[] {query}));
            } else {
                // Display the number of results
                int count = cursor.getCount();
                String countString = getResources().getQuantityString(R.plurals.search_results,
                        count, new Object[] {count, query});
                mTextView.setText(countString);

                // Specify the columns we want to display in the result
                String[] from = new String[] { DictionaryDatabase.KEY_WORD,
                                               DictionaryDatabase.KEY_DEFINITION };

                // Specify the corresponding layout elements where we want the columns to go
                int[] to = new int[] { R.id.word, R.id.definition };

                // Create a simple cursor adapter for the definitions and apply them to the ListView
                SimpleCursorAdapter words = new SimpleCursorAdapter(this,
                        R.layout.result, cursor, from, to);
                mListView.getAdapter();

                // Define the on-click listener for the list items
                mListView.setOnItemClickListener(new OnItemClickListener() {
                    public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                        // Build the Intent used to open WordActivity with a specific word Uri
                        Intent wordIntent = new Intent(getApplicationContext(), WordActivity.class);
                        Uri data = Uri.withAppendedPath(DictionaryProvider.CONTENT_URI,
                                String.valueOf(id));
                        wordIntent.setData(data);
                        startActivity(wordIntent);
                    }
                });
            }
        }

  • Automatically Persisting a Complex Java Object

    - by VeeArr
    For a project I am working on, I need to persist a number of POJOs to a database. The POJO class definitions are sometimes highly nested, but they should flatten okay, as the nesting is tree-like and contains no cycles (and the base elements are eventually primitives/Strings). It is preferred that the solution create one table per data type and that the tables have one field per primitive member in the POJO. Subclassing and similar problems are not issues for this particular project. Does anybody know of any existing solutions that can: 1. automatically generate a CREATE TABLE definition from the class definition; 2. automatically generate a query to persist an object to the database, given an instance of the object; 3. automatically generate a query to retrieve an object from the database and return it as a POJO, given a key. Solutions that can do this with minimal modifications/annotations to the class files and minimal external configuration are preferred. Example Java classes:

        // Class to be persisted
        class TypeA {
            String guid;
            long timestamp;
            TypeB data1;
            TypeC data2;
        }

        class TypeB {
            int id;
            int someData;
        }

        class TypeC {
            int id;
            int otherData;
        }

    could map to:

        CREATE TABLE TypeA (
            guid CHAR(255),
            timestamp BIGINT,
            data1_id INT,
            data1_someData INT,
            data2_id INT,
            data2_otherData INT
        );

    or something similar.

  • Is there command-line tool to extract typedef, structure, enumeration, variable, function from a C or C++ file?

    - by FooF
    I am looking for a command-line tool to extract a definition or declaration (typedef, structure, enumeration, variable, or function) from a C or C++ source file. A way to replace an existing definition/declaration would also be handy (after transforming the extracted definition with a user-submitted script). Is there such a generic tool available, or some reasonably close approximation of one? Scriptability and the ability to hook up with user-created scripts or programs is important here, although I am academically curious about GUI programs too. Open source solutions for the Unix/Linux camp are preferred (although I am curious about Windows and OS X tools too). The primary languages of interest are C and C++, but a more generic solution would be even better (I think we do not need super-accurate parsing capabilities just to find, extract, and replace a definition in a program source file).

    Sample use cases (extra - for the curious mind): Given deeply nested structs and variable (array) initializations of these types, suppose there is a need to change a struct definition by adding or reordering fields, or to rewrite the variable/array definitions in a more readable format without introducing errors through manual labor. This would work by extracting the old initializations, then using a script/program to write the new initializations to replace the old ones. Implementing a code-browsing tool - extract a definition. Decorative code generation (e.g. logging function entries/returns). Scripted code structuring (e.g. extract this and that thing and put them in a different place without change - a version control commit comment could document the command used, to make it evident and verifiable that nothing changed). Note about tags: a more accurate tag than code-generation would be code-transformation, but it does not exist.

  • Possible solutions for an "incompatible pointer type" warning

    - by lego69
    Hello, I have a very large codebase, which is why I can't post all my code here. Can somebody explain what the problem might be if I get an "incompatible pointer type" error, and give me several ways to solve it? Thanks in advance. Just a small clarification: I'm working with pointers to functions.

        ptrLine createBasicLine() {
            DECLARE_RESULT_ALLOCATE_AND_CHECK(ptrLine, Line);
            result->callsHistory = listCreate(copyCall, destroyCall);          // <- here
            result->messagesHistory = listCreate(copyMessage, destroyMessage); // <- and here
            result->linesFeature = NULL;
            result->strNumber = NULL;
            result->lastBill = 0;
            result->lineType = MTM_REGULAR_LINE;
            result->nCallTime = 0;
            result->nMessages = 0;
            result->rateForCalls = 0;
            result->rateForMessage = 0;
            return result;
        }

    copyCall and destroyCall are pointers to functions:

        /**
         * Allocates a new List. The list starts empty.
         *
         * @param copyElement
         *   Function pointer to be used for copying elements into the list or when
         *   copying the list.
         * @param freeElement
         *   Function pointer to be used for removing elements from the list
         * @return
         *   NULL - if one of the parameters is NULL or allocations failed.
         *   A new List in case of success.
         */
        List listCreate(CopyListElement copyElement, FreeListElement freeElement);

    Definitions of the functions:

        ptrCall (*createCall)() = createNumberContainer;
        void (*destroyCall)(ptrCall) = destroyNumberContainer;
        ptrCall (*copyCall)(ptrCall) = copyNumberContainer;
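    A sketch of the likely mismatch (the typedef names below are guesses, not taken from the poster's headers): if the list is generic, its element type is probably void*, so a ptrCall-typed function pointer has a different, incompatible type, and the usual fix is a thin wrapper with the exact expected signature.

        typedef void* ListElement;
        typedef ListElement (*CopyListElement)(ListElement);
        typedef void (*FreeListElement)(ListElement);

        /* Wrappers adapting the ptrCall functions to the generic signatures: */
        static ListElement copyCallElement(ListElement element)
        {
            return copyNumberContainer((ptrCall)element);
        }

        static void destroyCallElement(ListElement element)
        {
            destroyNumberContainer((ptrCall)element);
        }

        /* ... then: listCreate(copyCallElement, destroyCallElement); */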

  • Make Dell's OpenManage 6.2 information available through SNMP

    - by tronda
    I have successfully installed OpenManage on a CentOS 5.4 server and I'm able to use OpenManage through the web interface running on port 1311, but I would like to expose this information through the SNMP server. I don't know SNMP particularly well, so the configuration is a result of trial and error. I've tried to follow the description in the OpenManage Server Administrator User Guide and its documentation on SNMP configuration, but without success. I've created a small snmpd.conf file:

        com2sec notConfigUser default public
        group notConfigGroup v1 notConfigUser
        group notConfigGroup v2c notConfigUser
        view systemview included .1.3.6.1.2.1.1
        view systemview included .1.3.6.1.2.1.25.1.1
        access notConfigGroup "" any noauth exact all all none
        view all included .1
        rwcommunity public 10.200.26.50
        syslocation "Somewhere"
        syscontact [email protected]
        pass .1.3.6.1.4.1.4413.4.1 /usr/bin/ucd5820stat
        smuxpeer .1.3.6.1.4.1.674.10892.1

    When I try to fetch SNMP information using snmpwalk I get the following output:

        SNMPv2-MIB::sysDescr.0 = STRING: Linux myserver.test.com 2.6.18-164.15.1.el5 #1 SMP Wed Mar 17 11:30:06 EDT 2010 x86_64
        SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10
        DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (1180389) 3:16:43.89
        SNMPv2-MIB::sysContact.0 = STRING: [email protected]
        SNMPv2-MIB::sysName.0 = STRING: myserver.test.com
        SNMPv2-MIB::sysLocation.0 = STRING: "Somewhere"
        SNMPv2-MIB::sysORLastChange.0 = Timeticks: (0) 0:00:00.00
        SNMPv2-MIB::sysORID.1 = OID: SNMPv2-MIB::snmpMIB
        SNMPv2-MIB::sysORID.2 = OID: TCP-MIB::tcpMIB
        SNMPv2-MIB::sysORID.3 = OID: IP-MIB::ip
        SNMPv2-MIB::sysORID.4 = OID: UDP-MIB::udpMIB
        SNMPv2-MIB::sysORID.5 = OID: SNMP-VIEW-BASED-ACM-MIB::vacmBasicGroup
        SNMPv2-MIB::sysORID.6 = OID: SNMP-FRAMEWORK-MIB::snmpFrameworkMIBCompliance
        SNMPv2-MIB::sysORID.7 = OID: SNMP-MPD-MIB::snmpMPDCompliance
        SNMPv2-MIB::sysORID.8 = OID: SNMP-USER-BASED-SM-MIB::usmMIBCompliance
        SNMPv2-MIB::sysORDescr.1 = STRING: The MIB module for SNMPv2 entities
        SNMPv2-MIB::sysORDescr.2 = STRING: The MIB module for managing TCP implementations
        SNMPv2-MIB::sysORDescr.3 = STRING: The MIB module for managing IP and ICMP implementations
        SNMPv2-MIB::sysORDescr.4 = STRING: The MIB module for managing UDP implementations
        SNMPv2-MIB::sysORDescr.5 = STRING: View-based Access Control Model for SNMP.
        SNMPv2-MIB::sysORDescr.6 = STRING: The SNMP Management Architecture MIB.
        SNMPv2-MIB::sysORDescr.7 = STRING: The MIB for Message Processing and Dispatching.
        SNMPv2-MIB::sysORDescr.8 = STRING: The management information definitions for the SNMP User-based Security Model.
        SNMPv2-MIB::sysORUpTime.1 = Timeticks: (0) 0:00:00.00
        SNMPv2-MIB::sysORUpTime.2 = Timeticks: (0) 0:00:00.00
        SNMPv2-MIB::sysORUpTime.3 = Timeticks: (0) 0:00:00.00
        SNMPv2-MIB::sysORUpTime.4 = Timeticks: (0) 0:00:00.00
        SNMPv2-MIB::sysORUpTime.5 = Timeticks: (0) 0:00:00.00
        SNMPv2-MIB::sysORUpTime.6 = Timeticks: (0) 0:00:00.00
        SNMPv2-MIB::sysORUpTime.7 = Timeticks: (0) 0:00:00.00
        SNMPv2-MIB::sysORUpTime.8 = Timeticks: (0) 0:00:00.00

    I suspect that I should get some Dell-specific information when I use the snmpwalk utility. Is there a configuration in the snmpd.conf file which is wrong, or do I have to configure something on the OpenManage side in order to make the hardware information accessible from SNMP?

  • PHP Error / Mk-livestatus in Nagvis

    - by tod
    I have Nagios and Nagvis installed via Debian packages, but when I run Nagvis and try to get into the "General Configuration" menu I get this error:

        Error: (0) Array to string conversion (/usr/share/nagvis/share/server/core/classes/WuiViewEditMainCfg.php:126)
        #0 /usr/share/nagvis/share/server/core/classes/WuiViewEditMainCfg.php(126): nagvisExceptionErrorHandler(8, 'Array to string...', '/usr/share/nagv...', 126, Array)
        #1 /usr/share/nagvis/share/server/core/classes/WuiViewEditMainCfg.php(44): WuiViewEditMainCfg->getFields()
        #2 /usr/share/nagvis/share/server/core/classes/CoreModMainCfg.php(56): WuiViewEditMainCfg->parse()
        #3 /usr/share/nagvis/share/server/core/functions/index.php(120): CoreModMainCfg->handleAction()
        #4 /usr/share/nagvis/share/server/core/ajax_handler.php(63): require('/usr/share/nagv...')
        #5 {main}

    I'm also having an issue with backends in Nagvis. check-mk-livestatus is installed, but I get this error when hovering over items:

        Problem (backend: live_1): Unable to connect to the /var/lib/nagios3/rw/live in backend live_1: Connection refused

    Or when trying to add things:

        Unable to fetch data from backend - falling back to input field.

    /var/lib/nagios3/rw/ exists, but there is no "live" file. I'm really not sure what is going on, especially since these were all Debian packages... Here is the most relevant part of the nagvis.ini.php:

        ; ----------------------------
        ; Backend definitions
        ; ----------------------------

        ; Example definition of a livestatus backend.
        ; In this case the backend_id is live_1
        ; The path /usr/local/nagios/var/rw has to exist
        [backend_live_1]
        backendtype="mklivestatus"

        ; The status host can be used to prevent annoying timeouts when a backend is not
        ; reachable. This is only useful in multi backend setups.
        ;
        ; It works as follows: The assumption is that there is a "local" backend which
        ; monitors the host of the "remote" backend. When the remote backend host is
        ; reported as UP the backend is queried as normal.
        ; When the remote backend host is reported as "DOWN" or "UNREACHABLE" NagVis won't
        ; try to connect to the backend anymore until the backend host gets available again.
        ;
        ; The statushost needs to be given in the following format:
        ; "<backend_id>:<hostname>" -> e.g. "live_2:nagios"
        ;statushost=""
        socket="unix:/var/lib/nagios3/rw/live"

    There is nothing relating to 'backends' or 'mklivestatus' in /var/log/nagios3/nagios.log. Any help would be much appreciated.

  • Multiple syslog-ng destination loghosts

    - by pablo808
    I am currently forwarding logs to one remote destination loghost:

        filter f_windows { program("Security-Audit*"); };
        log { source(r_sys); filter(f_windows); destination(d_windows); };
        log { source(r_sys); filter(f_windows); destination(d_loghost); };

    I would like to forward these logs to two additional remote destination loghosts. The manual defines destination syntax as:

        destination <identifier> {
            destination-driver(params); destination-driver(params); ...
        };

    I tried two different configs. First, defining the additional destination hosts inside d_loghost:

        destination d_loghost {
            udp("server1" port(514));
            udp("server2" port(514));
            udp("server3" port(514));
        };
        filter f_windows { program("Security-Audit*"); };
        log { source(r_sys); filter(f_windows); destination(d_loghost); };

    Second, defining the additional destination hosts in their own d_loghost definitions:

        destination d_loghost1 { udp("server1" port(514)); };
        destination d_loghost2 { udp("server2" port(514)); };
        destination d_loghost3 { udp("server3" port(514)); };
        filter f_windows { program("Security-Audit*"); };
        log { source(r_sys); filter(f_windows); destination(d_loghost1); };
        log { source(r_sys); filter(f_windows); destination(d_loghost2); };
        log { source(r_sys); filter(f_windows); destination(d_loghost3); };

    Both fail, unfortunately. What am I missing? Thanks.

  • NRPE Warning threshold must be a positive integer

    - by Frida
    OS: Ubuntu 12.10 Server 64-bit. I've installed Icinga, with ido2db, pnp4nagios and icinga-web (latest release, following the instructions given in the documentation, installation with apt, etc.). I am using icinga-web to monitor my hosts. For the moment I have just my localhost, and all is perfect. I am trying to add a host and monitor it with NRPE (version 2.12):

        root@server:/etc/icinga# /usr/lib/nagios/plugins/check_nrpe -H client
        NRPE v2.12

    The configuration looks good. I've created a file /etc/icinga/objects/client.cfg on the server, as below:

        define host{
            use generic-host ; Name of host template to use
            host_name client
            alias client.toto
            address xx.xx.xx.xx
        }

        # Service Definitions
        define service{
            use generic-service
            host_name client
            service_description CPU Load
            check_command check_nrpe_1arg!check_load
        }

        define service{
            use generic-service
            host_name client
            service_description Number of Users
            check_command check_nrpe_1arg!check_users
        }

    And added this to my /etc/icinga/commands.cfg:

        # this command runs a program $ARG1$ with arguments $ARG2$
        define command {
            command_name check_nrpe
            command_line /usr/lib/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ -a $ARG2$
        }

        # this command runs a program $ARG1$ with no arguments
        define command {
            command_name check_nrpe_1arg
            command_line /usr/lib/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
        }

    But it does not work. These are the logs from the client:

        Dec 3 19:45:12 client nrpe[604]: Connection from xx.xx.xx.xx port 32641
        Dec 3 19:45:12 client nrpe[604]: Host address is in allowed_hosts
        Dec 3 19:45:12 client nrpe[604]: Handling the connection...
        Dec 3 19:45:12 client nrpe[604]: Host is asking for command 'check_users' to be run...
        Dec 3 19:45:12 client nrpe[604]: Running command: /usr/lib/nagios/plugins/check_users -w -c
        Dec 3 19:45:12 client nrpe[604]: Command completed with return code 3 and output: check_users: Warning threshold must be a positive integer#012Usage:check_users -w -c
        Dec 3 19:45:12 client nrpe[604]: Return Code: 3, Output: check_users: Warning threshold must be a positive integer#012Usage:check_users -w -c
        Dec 3 19:44:49 client nrpe[32582]: Connection from xx.xx.xx.xx port 32129
        Dec 3 19:44:49 client nrpe[32582]: Host address is in allowed_hosts
        Dec 3 19:44:49 client nrpe[32582]: Handling the connection...
        Dec 3 19:44:49 client nrpe[32582]: Host is asking for command 'check_load' to be run...
        Dec 3 19:44:49 client nrpe[32582]: Running command: /usr/lib/nagios/plugins/check_load -w -c
        Dec 3 19:44:49 client nrpe[32582]: Command completed with return code 3 and output: Warning threshold must be float or float triplet!#012#012Usage:check_load [-r] -w WLOAD1,WLOAD5,WLOAD15 -c CLOAD1,CLOAD5,CLOAD15
        Dec 3 19:44:49 client nrpe[32582]: Return Code: 3, Output: Warning threshold must be float or float triplet!#012#012Usage:check_load [-r] -w WLOAD1,WLOAD5,WLOAD15 -c CLOAD1,CLOAD5,CLOAD15
        Dec 3 19:44:49 client nrpe[32582]: Connection from xx.xx.xx.xx closed.

    Have you any ideas?

  • 403 Forbidden Error when trying to view localhost on Apache

    - by misbehavens
    I think my Apache must be all screwed up. I don't know if it ever worked. I just upgraded to Snow Leopard, and the first step in this tutorial is to start Apache and check that it's working by opening http://localhost. It starts fine, but when I go to localhost I get a 403 Forbidden error. I don't know where to start figuring out how to fix it, so I wonder if a fresh install of Apache would do the trick. What do you think?

    Update: I found some error logs in /private/var/log/apache2/. Found this in one of the logs. Not sure what it means:

        [Tue Nov 10 17:53:08 2009] [notice] caught SIGTERM, shutting down
        [Tue Nov 10 21:49:17 2009] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
        Warning: DocumentRoot [/usr/docs/dummy-host.example.com] does not exist
        Warning: DocumentRoot [/usr/docs/dummy-host2.example.com] does not exist
        httpd: Could not reliably determine the server's fully qualified domain name, using Andrews-Mac-Pro.local for ServerName
        mod_bonjour: Skipping user 'andrew' - cannot read index file '/Users/andrew/Sites/index.html'.
        [Tue Nov 10 21:49:19 2009] [notice] Digest: generating secret for digest authentication ...
        [Tue Nov 10 21:49:19 2009] [notice] Digest: done
        [Tue Nov 10 21:49:19 2009] [notice] Apache/2.2.11 (Unix) mod_ssl/2.2.11 OpenSSL/0.9.8k DAV/2 PHP/5.3.0 configured -- resuming normal operations

    Update: I also found something in the dummy-host.example.com-error_log file. I didn't set these dummy-host things up, by the way. Is this the default configuration?

        [Tue Nov 10 21:59:57 2009] [error] [client ::1] client denied by server configuration: /usr/docs

    Update: Woohoo! I found the file that had the virtual host definitions. It was in /etc/apache2/extra/httpd-vhosts.conf, and it had those two dummy virtual host settings in there. I added a localhost virtual host. Not sure if this is necessary, but since it wasn't working before, I decided to do it anyway. After removing the old virtual hosts, adding my new localhost virtual host, and restarting Apache, it seems to work. So I guess whenever I want to add a virtual host, I only need to add it to this file? Or is there a hosts file somewhere, like there is on Linux?

    Update: Yes, there is an /etc/hosts file that needs to be changed too. Add the virtual host name to that file.

  • Can Vagrant point to a directory of Puppet manifests for execution?

    - by SeligkeitIstInGott
    I am using Vagrant to jump-start some initial Puppet config and am confused about how to include/run multiple manifests (other than just site.pp) in the Puppet execution workflow, without turning the extra manifests into modules and including them that way. In the Puppet manifests directory that I point Vagrant to (see below) I have two manifests that I want executed: site.pp and hierasetup.pp.

        config.vm.provision "puppet" do |puppet|
            puppet.manifests_path = "puppet_files/manifests"
            puppet.module_path = "puppet_files/modules"
            puppet.manifest_file = "site.pp"
            puppet.options = "--verbose --debug"
        end

    Currently I have site.pp be the manifest that calls hierasetup.pp. My site.pp looks like this:

        File {
            owner => 'root',
            group => 'root',
            mode  => '0644',
        }

        import "hierasetup.pp"

        include jboss

    But I get this error about the deprecation of "import":

        Warning: The use of 'import' is deprecated at /tmp/vagrant-puppet-1/manifests/site.pp:33. See http://links.puppetlabs.com/puppet-import-deprecation (at grammar.ra:610:in `_reduce_190')

    According to the referenced URL, under "Things to try instead" it says: "To keep your node definitions in separate files, specify a directory as your main manifest". Further, the Puppet doc on main manifests says: "Recommended: If you're using the main manifest heavily instead of relying on an ENC, consider changing the manifest setting to $confdir/manifests. This lets you split up your top-level code into multiple files while avoiding the import keyword. It will also match the behavior of simple environments."

    It appears that Puppet can reference an entire directory instead of just a specific manifest file, so I would expect Vagrant to make a provision for this and allow me to drop the puppet.manifest_file = "site.pp" line and point to the parent directory instead, in which case all the *.pp files there would be executed. However, removing that line in Vagrant merely generates a complaint about an expected "default.pp" in its stead:

        puppet provisioner:
        * The configured Puppet manifest is missing. Please specify a path to an existing manifest:
          /some/path/puppet_files/manifests/default.pp

    So: Firstly, do I understand the "new" (non-import) way of calling multiple manifests correctly, in that a directory is to be pointed to, in which all the *.pp files inside it will be executed? And secondly, has Vagrant "caught up" with this change to accommodate referencing directories, in line with Puppet's deprecation of "import"?

    Update: Thanks to Shane, the issue with #2 (Vagrant's code not being caught up to allow pointing to Puppet manifest directories) was reported on Vagrant's GitHub issue tracker and has since been patched: https://github.com/mitchellh/vagrant/issues/4169

  • How to get an inactive RAID device working again?

    - by Jonik
    After booting, my RAID1 device (/dev/md_d0 *) sometimes goes into some funny state and I cannot mount it.

    * Originally I created /dev/md0, but it has somehow changed itself into /dev/md_d0.

        # mount /opt
        mount: wrong fs type, bad option, bad superblock on /dev/md_d0,
               missing codepage or helper program, or other error
               (could this be the IDE device where you in fact use
               ide-scsi so that sr0 or sda or so is needed?)
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    The RAID device appears to be inactive somehow:

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md_d0 : inactive sda4[0](S)
              241095104 blocks

        # mdadm --detail /dev/md_d0
        mdadm: md device /dev/md_d0 does not appear to be active.

    The question is: how do I make the device active again (using mdadm, I presume)? (At other times it's alright (active) after boot, and I can mount it manually without problems. But it still won't mount automatically even though I have it in /etc/fstab:

        /dev/md_d0 /opt ext4 defaults 0 0

    So a bonus question: what should I do to make the RAID device automatically mount at /opt at boot time?) This is an Ubuntu 9.10 workstation. Background info about my RAID setup is in this question.

    Edit: My /etc/mdadm/mdadm.conf looks like this. I've never touched this file, at least by hand.

        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        DEVICE partitions

        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes

        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>

        # instruct the monitoring daemon where to send mail alerts
        MAILADDR <my mail address>

        # definitions of existing MD arrays

        # This file was auto-generated on Wed, 27 Jan 2010 17:14:36 +0200

    In /proc/partitions the last entry is md_d0, at least now, after reboot, when the device happens to be active again. (I'm not sure if it would be the same when it's inactive.)

    Resolution: as Jimmy Hedman suggested, I took the output of mdadm --examine --scan:

        ARRAY /dev/md0 level=raid1 num-devices=2 UUID=de8fbd92[...]

    and added it to /etc/mdadm/mdadm.conf, which seems to have fixed the main problem. After changing /etc/fstab to use /dev/md0 again (instead of /dev/md_d0), the RAID device also gets automatically mounted!

  • How to Configure Source NAT (Private IP => Public IP Outbound)

    - by DavidScherer
    I'm running VMware ESXi Free and have Zentyal SBS 3.2 running as a gateway. I have 5 public IPs (a CIDR /29 block; let's call them 69.1.1.1 - 69.1.1.5), and currently Zentyal is bound to 69.1.1.1 as the gateway, with the other 4 public IPs set as virtual interfaces in Zentyal (wan2-wan5). I have machines sitting on the private network (10.34.251.x) that, when going outbound (to Google, for instance), should be seen by the Internet as an IP other than the gateway (69.1.1.1). This is because our machines need to communicate with third-party APIs that expect these requests to come from a specific IP.

    From what I could find, SNAT (Source NAT) in Zentyal is used to achieve this, but I'm not sure how to configure it and cannot find specific documentation for it at Zentyal. I've tried setting this up a couple of different ways, with no results, and at this point I have no idea whether I'm going about this completely wrong, or whether my lack of experience with networking and the associated terminology is preventing me from placing the correct values in the correct fields. Zentyal gives me a form for setting up "SNAT" rules with the following fields; perhaps someone can offer some guidance and definitions for them?

    SNAT Address: Is this the public IP I want to masquerade as?
    Outgoing Interface: Should this be my external NIC (the one connected to the public 'net), or is it the "private" interface? It sounds as though this should be the external interface, as I want the traffic from the internal network sent out over this interface (using a different IP than normal, anyway).
    Source: Is this the source on the internal network (one of the private IPs?), a public IP I want to masquerade as, or something else entirely?
    Destination: Is this a place on the Internet (e.g. "only do this for Google.com"/an IP), or am I allowing myself to become confused again?
    Service: I'm assuming this allows me to restrict which services this rule applies to, but is it a service on the internal network or a service being accessed on the external network?

    If I can offer any further details or information to make what I'm trying to do clearer, I will happily do so. Honestly, any kind of help here would be very appreciated. I'm not a NetOps person or anything even close; I spend most of my day writing code, and my entire "team" at this company consists of "me, myself, and I", so while I try to broaden my knowledge base at every possible opportunity, I can only learn so much so fast. I feel like with networking especially there's just so much, coupled with a learning curve for each solution that likes to (from my limited perspective) use slightly different terminology than what I'm used to (and I don't exactly have the necessary experience to cross-reference this stuff with the stuff I already know in context).

  • How to temporarily disable a mirror video driver in the Windows XP registry

    - by happy clicker
    Because this involves a lot of text, I will first ask my question and then explain the underlying problem; perhaps someone can give me a solution to the base problem instead. Is there a way to temporarily disable mirror video drivers (through the registry or otherwise) without uninstalling the corresponding software? I tested changing the enumeration in LocalMachine\Hardware\DeviceMap\Video, but after a reboot the old configuration is always restored.

    Explanation of the base problem: We are working on a WPF project for a department of a big company. There we have the problem that WPF renders only in software mode, although the hardware they have must support hardware rendering (Tier 2). After searching for a solution to the problem, we found out that Direct3D does not work properly, and we think that's why WPF can only use SW rendering. In dxdiag.exe the Direct3D acceleration is enabled, but if we start the test routine it always fails, saying that it does not have enough memory (it says memory, not video memory!). I have seen 3 different types of PCs there (they have some hundreds of each type) and every type shows exactly the same behavior. We tried to update all the drivers, including DirectX (version 9.0c), and we searched a lot on the web but could not find a solution. All the PCs have Intel Dual-Core processors or better; one type has an Intel GMA 9000 graphics card, and the other two types have recent ATI and NVidia graphics cards with 256MB onboard memory. The system memory is at least 2GB, and Windows is XP SP3. The PCs are from two different manufacturers. Because we see exactly the same behavior on every computer of these three very different types, we don't think this is a driver or a DirectX problem.

    What we found in other newsgroups is that DirectX can be disturbed by mirror video drivers such as those installed by NetMeeting, VNC and other remote-desktop installations. In the registry, we see a lot of such mirror entries under LocalMachine\Hardware\DeviceMap\Video, and we also find the definitions in the CurrentControlSet\Control\Video section (however, these drivers are not shown in the hardware panel of the OS). We can get admin rights on one of these computers to test whether disabling these drivers would help, but we must not change the configuration in a way that breaks other software after the tests. Therefore I cannot uninstall any software, because I have neither the media, the licenses, nor the know-how to reinstall those apps. The support department of this company, however, will only begin to work once I can tell them what the real problem is. That's why we are searching for a way to disable these mirror drivers (or a hint for solving the DX problem, if we are on a false trace).
