Search Results

Search found 5157 results on 207 pages for 'checking'.

  • FileVersionInfo.GetVersionInfo getting old version of an exe swapped at runtime

    - by Richard
    I have a program written in C# that is sometimes updated while it is running by swapping the exe for a new one. I want the program to routinely check if it has been updated and if so, restart. I use the following function to do this. public static bool DoINeedToRestart(string exe_name) { Version cur_version = new Version(MainProgram.StartVersion); Version file_version = new Version(GetProductVersion(exe_name)); MessageBox.Show("Comparing cur_version " + cur_version.ToString() + " with " + file_version.ToString()); if (file_version > cur_version) { return true; } return false; } public static string GetProductVersion(string path_name) { FileVersionInfo myFI = FileVersionInfo.GetVersionInfo(path_name); return myFI.FileVersion; } MainProgram.StartVersion is set when the program is started to the current version, using GetProductVersion(exe_name). exe_name is set to be the name of the executable that is being updated. The problem I have is that once the MainProgram.exe file has been updated (I verify this manually by looking at the file properties and checking the file version), GetProductVersion still returns the old file version and I have no idea why! Any help is greatly appreciated.
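
    A possible workaround, shown as an untested sketch: rather than comparing version strings, compare the file's last-write timestamp, which is re-read from the file system on every call. startTimeUtc is a hypothetical value captured at startup with File.GetLastWriteTimeUtc(exe_name); it is not part of the original program.

      // Sketch: detect that the exe on disk has been replaced since startup.
      public static bool DoINeedToRestart(string exe_name, DateTime startTimeUtc)
      {
          // Re-reads file system metadata on each call, so it sees the swapped file.
          DateTime lastWrite = System.IO.File.GetLastWriteTimeUtc(exe_name);
          return lastWrite > startTimeUtc;
      }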

  • $(selector).text() equivalent in c# (Revised)

    - by Ian Jasper Bardoquillo
    Hi, I am trying to check if the inner HTML of the element is empty, but I want to do the validation on the server side, so I'm treating the HTML as a string. Here is my code: public string HasContent(string htmlString){ // this is the expected value of the htmlString // <span class="spanArea"> // <STYLE>.ExternalClass234B6D3CB6ED46EEB13945B1427AA47{;}</STYLE> // </span> // From this jquery code--------------> // if($('.spanArea').text().length>0){ // // } // <------------------ // I wanted to convert the jquery statement above into c# code. /// c# code goes here return htmlString; } Using this line, $('.spanArea').text(), I will know whether .spanArea really has something to display in the UI or not; what is the equivalent of this line in C#? I want to do the checking on the server side. No need to worry about how I managed to access the DOM, I have already taken care of it. Consider htmlString to be the HTML string shown above. My question is whether there is any equivalent for this jQuery line in C#? Thanks in advance! :)
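
    One hedged way to approximate $('.spanArea').text() on the server, assuming the third-party HtmlAgilityPack library is available (the method name and the decision to ignore <style> contents are assumptions, not part of the original question):

      // Sketch: parse the HTML string and test whether .spanArea has visible text.
      using HtmlAgilityPack;

      public static bool SpanAreaHasContent(string htmlString)
      {
          var doc = new HtmlDocument();
          doc.LoadHtml(htmlString);
          var span = doc.DocumentNode.SelectSingleNode("//span[contains(@class,'spanArea')]");
          if (span == null) return false;
          // Remove <style> children so their CSS text is not counted as content.
          var styles = span.SelectNodes(".//style");
          if (styles != null)
              foreach (var style in styles) style.Remove();
          return span.InnerText.Trim().Length > 0;
      }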

  • JSON element detection

    - by user3614570
    I’ve created a string… {"atts": [{"name": "wedw"}, {"type": "---"}]} I pile a bunch of these together in an array based on user input and attach them to another string to complete a JSON object that tests out as valid. So I end up with a global array called fields with a bunch of these little snippets. So how do I replace the name "wedw" with a new name? I’ve tried... function changefieldname(pos){ var obj = JSON.parse(jsonstring); var oldname = obj.tracelog.fields[pos].atts[0]["name"]; var newname = document.getElementById("newlogfieldname"+pos).value; fields[pos].replace(oldname, newname); //writejson(); } And a bunch of variations. I know everything is checking out correct in terms of the variables pos, oldname, and newname. I also know that fields[pos] returns the string in the array I want to correct, but it’s not happy. I also tried converting fields[pos] to a string, but the replace function doesn't work on it. I’m sure there is a good reason.
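
    For what it's worth, two things in the snippet above work against it: String.replace() returns a new string rather than modifying the one stored in the array, and editing the string text leaves the parsed object untouched. A sketch of the usual approach (assuming obj.tracelog.fields[pos] corresponds to the snippet stored in fields[pos]) is to mutate the parsed object and re-serialise it:

      function changefieldname(pos) {
          var obj = JSON.parse(jsonstring);
          var newname = document.getElementById("newlogfieldname" + pos).value;
          // Change the value on the parsed object, not on the JSON text.
          obj.tracelog.fields[pos].atts[0]["name"] = newname;
          // Strings are immutable, so the result must be assigned back.
          fields[pos] = JSON.stringify(obj.tracelog.fields[pos]);
          jsonstring = JSON.stringify(obj);
      }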

  • Changing Value of Array Pointer When Passed to a Function

    - by ZAX
    I have a function which receives both the array, and a specific instance of the array. I try to change the specific instance of the array by accessing one of its members "color", but it does not actually change it, as can be seen by debugging (checking the value of color after function runs in the main program). I am hoping someone can help me to access this member and change it. Essentially I need the instance of the array I'm specifying to be passed by reference if nothing else, but I'm hoping there is an easier way to accomplish what I'm trying to do. Here's the structures: typedef struct adjEdge{ int vertex; struct adjEdge *next; } adjEdge; typedef struct vertex{ int sink; int source; int color; //0 will be white, 1 will be grey, 5 will be black int number; adjEdge *nextVertex; } vertex; And here is the function: void walk(vertex *vertexArray, vertex v, int source, maxPairing *head) { int i; adjEdge *traverse; int moveVertex; int sink; traverse = vertexArray[v.number-1].nextVertex; if(v.color != 5 && v.sink == 5) { sink = v.number; v.color = 5; addMaxPair(head, source, sink); } else { walk(vertexArray, vertexArray[traverse->vertex-1], source, head); } } In particular, v.color needs to be changed to a 5, that way later after recursion the if condition blocks it.
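
    A minimal sketch of the pass-by-reference fix: because walk() receives vertex v by value, v.color = 5 only changes a local copy. Passing a pointer to the element that actually lives in vertexArray makes the change visible after the call (maxPairing and addMaxPair are taken from the original code and assumed to be declared elsewhere):

      /* Sketch: take a pointer so the colour is written into the array element. */
      void walk(vertex *vertexArray, vertex *v, int source, maxPairing *head)
      {
          adjEdge *traverse = vertexArray[v->number - 1].nextVertex;

          if (v->color != 5 && v->sink == 5) {
              int sink = v->number;
              v->color = 5;               /* modifies the caller's array element */
              addMaxPair(head, source, sink);
          } else {
              walk(vertexArray, &vertexArray[traverse->vertex - 1], source, head);
          }
      }

    Call sites would then pass &vertexArray[i] (or simply an index) instead of a vertex copy.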

  • c# getting file version of a swapped exe at runtime

    - by Richard
    I have a program executing in c# that is sometimes updated while it is running by swapping the exe to a new one. I want the program to routinely check if it has been updated and if so, restart. I use the following function to do this. public static bool DoINeedToRestart(string exe_name) { Version cur_version = new Version(MainProgram.StartVersion); Version file_version = new Version(GetProductVersion(exe_name)); MessageBox.Show("Comparing cur_version " + cur_version.ToString() + " with " + file_version.ToString()); if (file_version > cur_version) { return true; } return false; } public static string GetProductVersion(string path_name) { FileVersionInfo myFI = FileVersionInfo.GetVersionInfo(path_name); return myFI.FileVersion; } StartVersion is set when the program is started to be the current version using the GetProductVersion(exe_name). exe_name is set to be the name of the executable that is being updated. The problem I have is once the MainProgram.exe file has been updated (I verify this manually by looking at the file properties and checking the file version), the GetProductVersion still returns the old file version and I have no idea why! Any help is greatly appreciated.

  • Calling a constructor from method within the same class

    - by Nathan
    I'm new to Java and I'm learning about creating object classes. One of my homework assignments requires that I call the constructor at least once within a method of the same object class. I'm getting an error that says The method DoubleMatrix(double[][]) is undefined for the type DoubleMatrix Here's my constructor: public DoubleMatrix(double[][] tempArray) { // Declaration int flag = 0; int cnt; // Statement // check to see if doubArray isn't null and has more than 0 rows if(tempArray == null || tempArray.length < 0) { flag++; } // check to see if each row has the same length if(flag == 0) { for(cnt = 0; cnt <= tempArray.length - 1 || flag != 1; cnt++) { if(tempArray[cnt + 1].length != tempArray[0].length) { flag++; } } } else if(flag == 1) { makeDoubMatrix(1, 1);// call makeDoubMatrix method } }// end constructor 2 Here's the method where I try to call the constructor: public double[][] addMatrix(double[][] tempDoub) { // Declaration double[][] newMatrix; int rCnt, cCnt; //Statement // checking to see if both are of same dimension if(doubMatrix.length == tempDoub.length && doubMatrix[0].length == tempDoub[0].length) { newMatrix = new double[doubMatrix.length][doubMatrix[0].length]; // for loop to add matrix to a new one for(rCnt = 0; rCnt <= doubMatrix.length; rCnt++) { for(cCnt = 0; cCnt <= doubMatrix.length; cCnt++) { newMatrix[rCnt][cCnt] = doubMatrix[rCnt][cCnt] + tempDoub[rCnt][cCnt]; } } } else { newMatrix = new double[0][0]; DoubleMatrix(newMatrix) } return newMatrix; }// end addMatrix method Can someone point me in the right direction and explain why I'm getting an error?
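
    The error happens because DoubleMatrix(newMatrix) on its own is parsed as a call to a method named DoubleMatrix, which does not exist; a constructor is only invoked through new (or via this(...)/super(...) on the first line of another constructor). A small illustrative sketch, not the assignment's actual class:

      class Example {
          Example(double[][] data) { /* ... */ }

          double[][] demo() {
              double[][] m = new double[0][0];
              // Example(m);              // compile error: method Example(double[][]) is undefined
              Example e = new Example(m); // correct: a constructor is invoked with 'new'
              return m;
          }
      }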

  • How do I trust a self signed cert using https?

    - by dave
    Edit: I originally thought the server's certificate was self-signed. Turns out it was signed by a self-signed CA certificate. I'm trying to write a Node.js application that accesses an HTTPS site that's protected using a certificate signed by a private, self-signed CA certificate. I'd also like to not completely disable certificate checking. I tried putting the server's certificate in the request options, but that doesn't seem to be working. Anyone know how to do this? I expect the following code to print statusCode 200, but instead it prints [Error: SELF_SIGNED_CERT_IN_CHAIN]. I've tried similar code with the request module, with the same results. var https = require('https'); var fs = require('fs'); var opts = { hostname: host, port: 443, path: '/', method: 'GET', ca: fs.readFileSync(serverCertificateFile, 'utf-8') }; var req = https.request(opts, function (res) { console.log('statusCode', res.statusCode); }); req.end(); req.on('error', function (err) { console.error(err); });
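
    SELF_SIGNED_CERT_IN_CHAIN generally means Node does not trust the CA that signed the server's certificate. A hedged sketch of the usual fix is to pass the CA certificate (not the server's own certificate) in the ca option; caCertificateFile below is a hypothetical path to the private CA's PEM file:

      var https = require('https');
      var fs = require('fs');

      var opts = {
          hostname: host,
          port: 443,
          path: '/',
          method: 'GET',
          // Trust the private CA that signed the server certificate.
          ca: [fs.readFileSync(caCertificateFile, 'utf-8')]
      };

      var req = https.request(opts, function (res) {
          console.log('statusCode', res.statusCode);
      });
      req.end();
      req.on('error', function (err) {
          console.error(err);
      });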

  • Networking issues with Linux server (CentOS 5.3)

    - by sxanness
    I have a Linux server hosting our bug tracking software (CentOS 5.2 Kernel 2.6.18-128.4.1.el5) that I am having some strange network problems with. The machine is configured with two NICs, one for the public interface and the other for our server back end network. The problem is that after doing a service network restart I can ping the public interface and it sends anywhere from 200-500 ICMP packets, and then all of a sudden I start getting a request timed out error. Strange, but as soon as I connect to the private interface the ping starts working again to the public interface. I clearly have a routing issue somewhere. I have a Juniper Router with the following configuration. Interface 0/0 -- Connect subnet to the ISP at our co-location Interface 0/2 -- For our DRAC network Interface 0/3 -- The Server-backend network (plugs directly into a switch that feeds all the NICs that are on the 10.3.20.x network) Interface 0/4 -- Plugs directly into another switch that feeds our public interfaces; that interface has all the gateways from our public IP ranges as secondary IP addresses. I hope that someone can ask the right questions that can lead me to check things and figure out what is going on. Has anyone had similar problems, and what kind of things should I be checking? Routing issue or something even more complicated? [root@fogbugz ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 # Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ DEVICE=eth0 BOOTPROTO=static IPADDR=72.249.134.98 NETMASK=255.255.255.248 BROADCAST=72.249.134.103 HWADDR=00:16:3E:AA:BB:EE ONBOOT=yes [root@fogbugz ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1 # Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ DEVICE=eth1 BOOTPROTO=static BROADCAST=10.3.20.255 HWADDR=00:17:3E:AA:BB:EE IPADDR=10.3.20.25 NETMASK=255.255.255.0 NETWORK=10.3.20.0 ONBOOT=yes [root@fogbugz ~]# cat /etc/sysconfig/network NETWORKING=yes NETWORKING_IPV6=no HOSTNAME=fogbugz.dfw.hisg-it.net GATEWAY=72.249.134.97 [root@fogbugz ~]# route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 72.249.134.96 0.0.0.0 255.255.255.248 U 0 0 0 eth0 10.3.20.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1 10.0.0.0 10.3.20.1 255.0.0.0 UG 0 0 0 eth1 0.0.0.0 72.249.134.97 0.0.0.0 UG 0 0 0 eth0

  • `svn checkout` on the SVN server causes the repo to break with a 301 error

    - by Phillip Oldham
    We have an nginx server which proxies to a standard set-up of Apache+SVN. The nginx set-up is a very simple proxy: server { server_name svn.ourdomain.tld; location / { proxy_pass http://localhost:8080; } } Apache is set-up as follows: <Location /> DAV svn SVNParentPath /var/svn AuthType Basic AuthName "Authentication Required" AuthUserFile /var/svn/.auth Require valid-user </Location> ...which allows us to access repositories using something like http://svn.ourdomain.tld/repo. We've been running this set-up now for about 2 years without issue. Recently we've found that we need to check out one of the repositories onto the server itself, however whenever we do so it seems to break the repo. From that point on, it will only respond with a 301 Moved Permanently error. We've tried: svn co file:///path/to/repo svn co svn://localhost/repo svn co svn://svn.ourdomain.tld/repo svn co svn+ssh://localhost/repo svn co svn+ssh://svn.ourdomain.tld/repo svn co http://localhost/repo svn co http://svn.ourdomain.tld/repo Also tried bypassing nginx, and get the same error: svn co http://localhost:8080/repo svn co http://svn.ourdomain.tld:8080/repo Checking out from a different machine works as expected until we attempt to check out on the server; after that it refuses with the same 301 error. What is more confusing is that this repository server also hosts our HudsonCI server, which pulls and builds our projects hourly. This leads us to suspect that it's the svn client which is causing an error in communication. It's also very confusing that removing then re-creating the repo using svnadmin doesn't reset the error - the repo is still unavailable even though it's "new"! Restarting apache and subversion (svnserve) has no effect on this, or the original error. Version information: OS: 64-bit CentOS 4.2, 2.6.27 kernel svn client: 1.4.2 (same for both server and remote clients) svn server: 1.4.2 httpd: 2.2.3 UPDATE: This also happens with svn export when run on the repo server. Ran from any other box/client, there isn't a problem. Here's the workflow, to help clarify the error: [~repo-server~]# svnadmin create {repo}; chown -Rf www:www {repo} [remote-client]# svn checkout http://svn.ourdomain.tld/repo [remote-client]# svn add file; svn ci -m '' [~repo-server~]# cd /var/www; svn export file:///path/to/repo/trunk ourproject [remote-client]# svn update fails with 301 error I can also confirm that the hostname of the box doesn't have an effect here, which is very odd: whether or not svn.ourdomain.tld is added to /etc/hosts it still breaks - we thought it could be an issue with localhost routing, but that doesn't seem to be the case. Are we missing something in the documentation which states you can't check out a repo when the server is on the same box? How can we stop the repos becoming corrupt when we check out locally?

  • Creating a dynamic lacp trunk from HP Procurve 2412zl to Proliant DL380 G7

    - by Maalobs
    I'm configuring an IEEE 802.3ad (LACP) dynamic trunk from a HP Procurve 2412zl (firmware version K.15.07) switch to a HP Proliant DL380 G7 server. The DL380 has 4 NICs and is running Win2008 R2, so I'm teaming the NICs together and leaving everything on the recommended "automatic" setting in the HP NIC configuration tool. The server is one of two, they'll be connected on interfaces F17-F20 and F21-F24 respectively on the switch. I need the servers in a separate VLAN, here is the configuration for the VLAN: vlan 10 name "Lab_Mgmt" untagged B2,F17-F24 ip address 172.22.71.3 255.255.255.0 tagged B21 exit There is a DHCP-relay into the VLAN 10 from another device beyond interface B21. The Advanced Traffic Management Guide says that in order to run a dynamic LACP trunk on another VLAN besides the DEFAULT_VLAN, you need to first enable GVRP and then use "forbid" to stop the interfaces from automatically joining DEFAULT_VLAN when the dynamic trunk is created. GVRP brings some other stuff with it that I don't want or need, so I disable it with "unknown-vlans disable" on all other interfaces. Here is how I do it: procurve-5412zl-1(config)# gvrp procurve-5412zl-1(config)# interface A1-A24,B1-B24,C1-C24,D1-D10,D13-D24,E1-E24, F1-F16,K1,K2 unknown-vlans disable procurve-5412zl-1(config)# vlan 1 forbid F17-F24 procurve-5412zl-1(config)# interface F17-F20 lacp active The result afterwards looks all successful: procurve-5412zl-1(config)# show trunks Load Balancing Method: L3-based (Default), L2-based if non-IP traffic Port | Name Type | Group Type ---- + -------------------------------- --------- + ------ -------- F17 | XYZTEAM3_NIC1 100/1000T | Dyn2 LACP F18 | XYZTEAM3_NIC2 100/1000T | Dyn2 LACP F19 | XYZTEAM3_NIC3 100/1000T | Dyn2 LACP F20 | XYZTEAM3_NIC4 100/1000T | Dyn2 LACP procurve-5412zl-1(config)# vlan 10 procurve-5412zl-1(vlan-10)# show lacp LACP LACP Trunk Port LACP Admin Oper Port Enabled Group Status Partner Status Key Key ---- ------- ------- ------- ------- ------- ------ ------ F17 Active Dyn2 Up Yes Success 0 0 F18 Active Dyn2 Up Yes Success 0 0 F19 Active Dyn2 Up Yes Success 0 0 F20 Active Dyn2 Up Yes Success 0 0 On the Proliant server, the NIC configuration Tool is also indicating that a 802.3ad dynamic trunk has been established. Everything should be good, but it isn't. The server is not getting an IP-address from the DHCP, which it does if I'm not enabling LACP. If I configure the server to a static IP-address on the VLAN 10 subnet, it can't even ping the switch IP-address, much less anything outside of the VLAN. The switch can't ping the server either. I did another attempt with F17-F20 tagged, and checking the box "Default Native Tag (VLAN 10)" in the NIC configuration tool on the server, but there was no difference. Does anyone have any idea what I might have missed?

  • Sendmail - Multiple Domains, One Box - Blocking One Or Two Domains

    - by TangoOversway
    I have a number of domains hosted at a web hosting service. They use sendmail to handle incoming email. I have six domains on this service (which we can call aaa.com, bbb.com and so on). Each email account has the same name and one email box. In other words, tango@aaa.com, tango@bbb.com, tango@ccc.com and all the others go into one box, /var/spool/mail/tango, where my email program on my desktop picks it up. I have done very little work in sendmail. I haven't had to, and I've been warned it's a steep learning curve. But now I'm running into an issue. I was in a business situation where, for years, my email address was on the website for aaa.com. (We won't go into why this was necessary - it wasn't my preference and it's in the past.) Now I'm using tango@ddd.com instead of tango@aaa.com. I was getting about 1,000 or more pieces of spam a day, but SpamAssassin and my own email program caught about 75% of that. (Which still left stuff to delete.) Now, after checking, I see that 90% or more goes to tango@aaa.com, the one that was on the web for years. I'd like to deactivate tango@aaa.com and possibly tango@bbb.com and tango@ccc.com, but want to keep using tango@ddd.com. Remember, email to tango at any of these domains will go into one email box. I've had people tell me that sendmail can be configured so I can deactivate tango@aaa.com (and other domains) and still use tango@ddd.com (and others, if I want to). In other words, I can configure sendmail to use this account on some domains and not others. One of the people who was telling me this was in tech support at the hosting service. But I wrote to tech support with a work order to do this and now I'm told it can't be done. I can modify config files myself on this account if needed, but I was hoping to just let them do it. (I love delegation -- it means I spend more time doing my stuff.) Is it possible to keep an email account active on one domain and not others with sendmail, when all domains are hosted on the same server? Is there a name for this process or setting? Any information would be helpful - either pointers to instructions so I can do it, or enough info so I can tell tech support, "This is where to look, and it can be done, so please pass my request on to someone who works with sendmail and knows how to do it." Is this something sendmail can do?
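
    If the hosting box uses the stock sendmail virtusertable feature (an assumption; it requires FEATURE(`virtusertable') in the sendmail configuration), per-address control like this is possible without touching the other domains. A sketch, using the placeholder domains from above:

      # /etc/mail/virtusertable (sketch)
      tango@aaa.com    error:nouser User unknown
      tango@bbb.com    error:nouser User unknown
      tango@ccc.com    error:nouser User unknown
      tango@ddd.com    tango

      # rebuild the map afterwards, e.g.:
      makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable

    "Virtual user table" (virtusertable) is also a useful term to hand to tech support when asking them to set this up.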

  • WD1000FYPS harddrive is marked 0 mb in 3ware (and no SMART)

    - by osgx
    After reboot my SATA 1TB WD1000FYPS (previously is was "Drive error") is marked 0 mb in 3ware web gui. Complete message: Available Drives (Controller ID 0) Port 1 WDC WD1000FYPS-01ZKB0 0.00 MB NOT SUPPORTED [Remove Drive] SMART gives me only Device Model and ATA protocol version 1 (not 7-8 as it must be for SATA) What does it mean? Just before reboot, when is was marked only with "Device Error", smart was: Device Model: WDC WD1000FYPS-01ZKB0 Serial Number: WD-WCASJ1130*** Firmware Version: 02.01B01 User Capacity: 1,000,204,886,016 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Sun Mar 7 18:47:35 2010 MSK SMART support is: Available - device has SMART capability. SMART support is: Enabled SMART overall-health self-assessment test result: PASSED SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 200 200 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0003 188 186 021 Pre-fail Always - 7591 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 229 5 Reallocated_Sector_Ct 0x0033 199 199 140 Pre-fail Always - 3 7 Seek_Error_Rate 0x000e 193 193 000 Old_age Always - 125 9 Power_On_Hours 0x0032 078 078 000 Old_age Always - 16615 10 Spin_Retry_Count 0x0012 100 100 000 Old_age Always - 0 11 Calibration_Retry_Count 0x0012 100 253 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 77 192 Power-Off_Retract_Count 0x0032 198 198 000 Old_age Always - 1564 193 Load_Cycle_Count 0x0032 146 146 000 Old_age Always - 164824 194 Temperature_Celsius 0x0022 117 100 000 Old_age Always - 35 196 Reallocated_Event_Count 0x0032 199 199 000 Old_age Always - 1 197 Current_Pending_Sector 0x0012 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 200 200 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0 What can be wrong with he? Can it be restored? PS new smart is === START OF INFORMATION SECTION === Device Model: WDC WD1000FYPS-01ZKB0 Serial Number: [No Information Found] Firmware Version: [No Information Found] Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 1 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Mon Mar 8 00:29:44 2010 MSK SMART is only available in ATA Version 3 Revision 3 or greater. We will try to proceed in spite of this. SMART support is: Ambiguous - ATA IDENTIFY DEVICE words 82-83 don't show if SMART supported. Checking for SMART support by trying SMART ENABLE command. Command failed, ata.status=(0x00), ata.command=(0x51), ata.flags=(0x01) Error SMART Enable failed: Input/output error SMART ENABLE failed - this establishes that this device lacks SMART functionality. A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options. PPS There was a rapid grow of " 192 Power-Off_Retract_Count " before dying. The hard was used in raid, with several hards from the same fabric packaging box (close id's). The hard drives were placed identically. Rapid means almost linear grow from 300 to 1700 in 6-7 hours. Maximal temperature was 41C. (thanks to munin's smart monitoring)

  • Cannot login to SQL Server 2008 R2 with Windows authentication

    - by Ian Boyd
    When i try to connect to SQL Server (2008 R2) using Windows authentication: i cannot: Checking the Windows Application event log, i find the error: Login failed for user 'AVATOPIA\ian'. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: ] Log Name: Application Source: MSSQLSERVER Event ID: 18456 Level: Information User: AVATOPIA\ian OpCode: Task Category: Logon i can login to the computer itself using Windows authentication. i can log into SQL Server using the local Windows Administrator account. We can connect to 8 other SQL Servers on the domain using Windows Authentication. Just this one, whitch is the only one that is 2008 R2 is failing. So i assume it's a bug with *2008 R2. Note: i cannot logon locally, or remotely, using Windows authentication. i can login locally and remotely using SQL Server Authentication. Update Note: It's not limited to SQL Server Management Studio, standalone applications that connect using Windows authentication: fail: Note: It's not a client problem, as we can connect fine to other (non-SQL Server 2008 R2 machines): i'm sure there's a technote or knowledge base article describing why SQL Server 2008 R2 is broken by default, but i can't find it. Update 2 Matt figure out the change that Microsoft made so that SQL Server 2008 R2 is broken by default: Administrators are no longer administrators All that remains is to figure out how to make Administrators administrators. One of these days i'm going to start a list of changes around Microsoft's "broken by default" initiative. Steps to reproduce the problem How do i add a group to the sysadmin fixed server role? Here's the steps i try, that don't work: Click Add: Click Object Types: Ensure that you have no ability to add groups: and click OK. Under Enter the object names to select, enter Administrators: Click Check Names, and ensure that you are not allowed to add groups: and click Cancel. Click Browse..., and ensure that you have no ability to add groups: You should now still not have added any group to the sysadmin role. Additional information SQL Server Management Studio is being run as an administrator: SQL Server is set to use Windows Authentication: tried while logged into SQL with both sa and the only other sysadmin domain account (screenshot can be supplied for those who don't believe)
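
    Since SQL Server authentication still works, one hedged workaround (a sketch, run from an sa or other SQL-authenticated sysadmin session) is to grant the Windows login sysadmin membership with T-SQL instead of the Object Explorer dialogs; the login names below are taken from the question, and the CREATE LOGIN lines should be skipped if the logins already exist:

      -- Grant the domain account sysadmin directly
      CREATE LOGIN [AVATOPIA\ian] FROM WINDOWS;
      EXEC sp_addsrvrolemember @loginame = N'AVATOPIA\ian', @rolename = N'sysadmin';

      -- Or restore the pre-2008-R2 behaviour of local Administrators being sysadmins
      CREATE LOGIN [BUILTIN\Administrators] FROM WINDOWS;
      EXEC sp_addsrvrolemember @loginame = N'BUILTIN\Administrators', @rolename = N'sysadmin';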

  • mint linux, DVD drive keeps randomly being accessed. unsure how to find culprit

    - by juicebox
    I have a workstation with mint linux 12. It seems like the DVD drive on the machine keeps randomly "activating". By activating it makes noise, the light turns on, and it seems like it is checking if a disk is in it. At first I thought I was being hacked and someone/something was trying to check if I had media in the DVDRom drive. I ruled that out with netstat and rkhunter. I checked my logs and the only thing I can find that might help point out the problem are these repeated chunks in syslog: Mar 24 17:47:31 rich-MINT kernel: [ 9846.551422] ata2.00: cmd a0/00:00:00:08:00/00:00:00:00:00/a0 tag 0 pio 16392 in Mar 24 17:47:31 rich-MINT kernel: [ 9846.551424] res 51/40:01:00:00:00/00:00:00:00:00/a0 Emask 0x10 (ATA bus error) Mar 24 17:47:31 rich-MINT kernel: [ 9846.551427] ata2.00: status: { DRDY ERR } Mar 24 17:47:31 rich-MINT kernel: [ 9846.551433] ata2.00: hard resetting link Mar 24 17:47:32 rich-MINT kernel: [ 9846.868012] ata2.01: hard resetting link Mar 24 17:47:32 rich-MINT kernel: [ 9847.344054] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 310) Mar 24 17:47:32 rich-MINT kernel: [ 9847.344067] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Mar 24 17:47:32 rich-MINT kernel: [ 9847.376118] ata2.00: configured for PIO0 Mar 24 17:47:32 rich-MINT kernel: [ 9847.393047] ata2.01: configured for UDMA/133 Mar 24 17:47:32 rich-MINT kernel: [ 9847.397046] ata2: EH complete and again Mar 24 17:55:28 rich-MINT kernel: [10323.633268] sr 1:0:0:0: ioctl_internal_command return code = 8000002 Mar 24 17:55:28 rich-MINT kernel: [10323.633270] : Sense Key : Aborted Command [current] [descriptor] Mar 24 17:55:28 rich-MINT kernel: [10323.633275] : Add. Sense: No additional sense information Mar 24 17:55:11 rich-MINT kernel: [10306.640009] ata2.00: link is slow to respond, please be patient (ready=0) Mar 24 17:55:16 rich-MINT kernel: [10310.840009] ata2.00: SRST failed (errno=-16) Mar 24 17:55:16 rich-MINT kernel: [10310.840016] ata2.00: hard resetting link Mar 24 17:55:16 rich-MINT kernel: [10311.160013] ata2.01: hard resetting link Mar 24 17:55:16 rich-MINT kernel: [10311.636061] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 310) Mar 24 17:55:16 rich-MINT kernel: [10311.636075] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Mar 24 17:55:16 rich-MINT kernel: [10311.668122] ata2.00: configured for PIO0 Mar 24 17:55:16 rich-MINT kernel: [10311.684854] ata2.01: configured for UDMA/133 Mar 24 17:55:17 rich-MINT kernel: [10312.105473] ata2: EH complete (Copied from Pastebin - http://pastebin.com/YNDrnyzH) If any linux masters could take a quick look at these log outputs and help me understand what is going on , much appreciated.
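
    A hedged first step is to catch the polling process in the act; the device name /dev/sr0 is an assumption (check with ls -l /dev/cdrom or /dev/dvd):

      # Show which process currently has the optical drive open
      sudo lsof /dev/sr0
      sudo fuser -v /dev/sr0

    If the udisks daemon turns out to be the poller, udisks 1.x also offers an inhibit switch (sudo udisks --inhibit-polling /dev/sr0, which holds polling off while the command keeps running), though whether that applies to this setup is untested.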

  • Supervisord appears to be running, but monitored programs aren't launched

    - by Brad Montgomery
    I've got supervisord 3.0a8 installed from the system package on ubuntu 10.04 (64bit). The supervisor service appears to be running, but it's not launching the configured programs. Interestingly enough, this exact configuration is running on another system, and is working as expected. The main config file looks like this: ; /etc/supervisor/supervisord.conf [unix_http_server] chmod=0700 file=/var/run/supervisor.sock [supervisord] logfile=/var/log/supervisor/supervisord.log childlogdir=/var/log/supervisor pidfile=/var/run/supervisord.pid [rpcinterface:supervisor] supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface [supervisorctl] serverurl=unix:///var/run/supervisor.sock [include] files = /etc/supervisor/conf.d/*.conf A sample program config looks like this: ; /etc/supervisor/conf.d/sample.conf [program:sample] directory=/opt/sample command=/opt/sample/run.sh Where, the /opt/sample/run.sh is: #!/bin/bash while true; do T=`date` echo "[$T] Running!" >> /var/log/sample.log sleep 1 done And, here's some additional information regarding the running instance of supervisord: root@myhost:~# supervisorctl version 3.0a8 root@myhost:~# which supervisorctl /usr/bin/supervisorctl root@myhost:~# which supervisord /usr/bin/supervisord root@myhost:~# supervisorctl status # NOTE that there's no output! root@myhost:~# supervisorctl avail root@myhost:~# service supervisor status is running root@myhost:~# ps aux | grep supervisor root 21740 0.1 0.4 40772 10056 ? Ss 11:28 0:00 /usr/bin/python /usr/bin/supervisord root 21749 0.0 0.0 7624 932 pts/2 S+ 11:28 0:00 grep --color=auto supervisor root@myhost:~# cat /var/log/supervisor/supervisord.log 2012-04-26 11:28:22,483 CRIT Supervisor running as root (no user in config file) 2012-04-26 11:28:22,536 INFO RPC interface 'supervisor' initialized 2012-04-26 11:28:22,536 WARN cElementTree not installed, using slower XML parser for XML-RPC 2012-04-26 11:28:22,536 CRIT Server 'unix_http_server' running without any HTTP authentication checking 2012-04-26 11:28:22,539 INFO daemonizing the supervisord process 2012-04-26 11:28:22,539 INFO supervisord started with pid 21740 root@myhost:~# ll /etc/supervisor/conf.d/ total 28 drwxr-xr-x 2 root root 4096 2012-04-26 11:31 ./ drwxr-xr-x 3 root root 4096 2012-04-25 18:38 ../ -rw-r--r-- 1 root root 66 2012-04-26 11:31 sample.conf root@myhost:~# ll /opt/sample/ total 12 drwxr-xr-x 2 root root 4096 2012-04-26 11:32 ./ drwxr-xr-x 4 root root 4096 2012-04-26 11:31 ../ -rwxr-xr-x 1 root root 97 2012-04-26 11:32 run.sh* root@myhost:~# python Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> Any help is greatly appreciated!
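
    As a hedged first check, the supervisorctl commands below show whether supervisord has actually read the conf.d include and registered the program (reread/update are standard supervisorctl commands; if this particular build lacks them, a full service restart is the fallback):

      # Ask the running supervisord to re-read the include files and apply changes
      supervisorctl reread
      supervisorctl update
      supervisorctl avail
      supervisorctl status

    Adding explicit autostart=true and autorestart=true lines to [program:sample] is a harmless extra check, on the assumption that the include file is being parsed at all and only the start behaviour differs between the two boxes.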

  • apache mod_cache in v2.2 - enable cache based on url

    - by Janning
    We are using apache2.2 as a front-end server with application servers as reverse proxies behind apache. We are using mod_cache for some images and enabled it like this: <IfModule mod_disk_cache.c> CacheEnable disk / CacheRoot /var/cache/apache2/mod_disk_cache CacheIgnoreCacheControl On CacheMaxFileSize 2500000 CacheIgnoreURLSessionIdentifiers jsessionid CacheIgnoreHeaders Set-Cookie </IfModule> The image urls vary completely and have no common start pattern, but they all end in ".png". Thats why we used the root in CacheEnable / If not served from the cache, the request is forwarded to an application server via reverse proxy. So far so good, cache is working fine. But I really only need to cache all image request ending in ".png". My above configuration still works as my application server send an appropriate Cache-Control: no-cache header on the way back to apache. So most pages send a no-cache header back and they get not cached at all. My ".png" responses doesn't send a Cache-Control header so apache is only going to cache all urls with ".png". Fine. But when a new request enters apache, apache does not know that only .png requests should be considered, so every request is checking a file on disk (recorded with strace -e trace=file -p pid): [pid 19063] open("/var/cache/apache2/mod_disk_cache/zK/q8/Kd/g6OIv@woJRC_ba_A.header", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) I don't want to have apache going to disk every request, as the majority of requests are not cached at all. And we have up to 10.000 request/s at peak time. Sometimes our read IO wait spikes. It is not getting really slow, but we try to tweak it for better performance. In apache 2.4 you can say: <LocationMatch .png$> CacheEnable disk </LocationMatch> This is not possible in 2.2 and as I see no backports for debian I am not going to upgrade. So I tried to tweak apache2.2 to follow my rules: <IfModule mod_disk_cache.c> SetEnvIf Request_URI "\.png$" image RequestHeader unset Cache-Control RequestHeader append Cache-Control no-cache env=!image CacheEnable disk / CacheRoot /var/cache/apache2/mod_disk_cache #CacheIgnoreCacheControl on CacheMaxFileSize 2500000 CacheIgnoreURLSessionIdentifiers jsessionid CacheIgnoreHeaders Set-Cookie </IfModule> The idea is to let apache decide to serve request from cache based on Cache-control header (CacheIgnoreCacheControl default to off). And before simply set a RequestHeader based on the request. If it is not an image request, set a Cache-control header, so it should bypass the cache at all. This does not work, I guess because of late processing of RequestHeader directive, see https://httpd.apache.org/docs/2.2/mod/mod_headers.html#early I can't add early processing as "early" keyword can't be used together with a conditional "env=!image" I can't change the url requesting the images and I know there are of course other solutions. But I am only interested in configuring apache2.2 to reach my goal. Does anybody has an idea how to achieve my goal?

  • Windows 7 x64 "upgrade" repair fails

    - by Polynomial
    I've been running into issues with Windows Update, which I can't seem to fix. The hotfixes don't work, nor does the Windows update readyness tool, or the manual SP1 upgrade. I get various esoteric errors which nobody seems to have a fix for. Looks like some of the update cache is corrupt and digital signatures seem to be broken on some packages / Windows Update components. Long story short, I have discovered the only option is to do a repair operation on the OS, to repair everything. It's so corrupt that only a complete replacement will fix it. According to various sources (including MSKB) one can perform a repair by running an in-place upgrade. I've got the Windows 7 Ultimate retail disc, which I've inserted into my machine. I ran setup.exe and went through in the following order: Install now Go online to get latest updates (I've also tried not getting updates) Wait for updates to be downloaded Select Windows 7 Ultimate (x64 architecture) and click next Accept the T&Cs, click next Click Upgrade At this point it spends a minute on the "checking compatibility" screen, after which I get the following error: The following issues are preventing Windows from upgrading. Cancel the upgrade, complete each task, and then restart the upgrade to continue. You can’t upgrade 64-bit Windows to a 32-bit version of Windows. To upgrade, obtain a 64-bit version of the installation disc, or go online to see how to install Windows 7 and keep your files and settings. 32-bit Windows cannot be upgraded to a 64-bit version of Windows. To upgrade, obtain a 32-bit version of the Windows installation disc. It also mentions a warning about potential conflicts with a storage driver and VS2010, but that doesn't seem to be the blocking issue. My currently installed version of Windows is Ultimate 64-bit (absolutely sure of this) and the disc is definitely a x86 / x64 combined Ultimate retail disc. There seem to be a few people who have run into this (e.g. this question), but I've not seen any answers. I've checked the event viewer, but can't spot anything in there that's related. Any idea how I can get this working? P.S: Just to pre-empt the inevitable "are you suuuuuuuuuuuuure it's x64 Ultimate?" questions:

  • Splitting an internet connection between multiple separate subnetworks

    - by pythonian4000
    Problem I have an internet connection that I want to split between four separate networks. My requirements are: I need to be able to monitor the amount of bandwidth and data being used by each network, and notify or control as necessary. The four networks should only be able to connect to the internet, not each other. My parents need to be able to operate it, so it needs a simple, preferably Windows-based GUI. Progress so far Server I have a mini-ITX server with six Gigabit ethernet ports - one for the ethernet internet connection, one for each of the four networks, and one for remote access to the server for administration. Bandwidth control I spent a long time researching solutions here. The majority of the control systems/software I found could control bandwidth usage via QOS, but could not monitor or control the amount of data being used. Eventually I found the SoftPerfect Bandwidth Manager, which has everything I need in terms of monitoring and control - per-interface quota management, usage statistics, a web interface for checking usage, and email notifications when quotas are exceeded. It is also Windows-based and has a simple GUI. Internet sharing This is where I am having issues. I am currently using Windows XP Pro SP2 for the server (yes, I know this is far from ideal, but it's the only spare Windows OS I currently have). I can't use the built-in Internet Connection Sharing for several reasons: The upstream internet router has an IP of 192.168.0.1 which ICS clashes with, and I cannot change the router settings. ICS can only share an internet connection with a single interface, but I have four. I have tried bridging the four network cards, but then the Bandwidth Manager cannot see the four individual interfaces - it only sees the bridge. I have tried setting up Dual DHCP DNS server (and am having issues getting DHCP offers to be received by clients), but that would still require gateway software of some sort, which I have been unable to find. My current attempt is to use OpenVPN, with a server for the internet NIC and a separate client for each of the four networks. My thought is that I could bridge the OpenVPN TAP devices to each NIC, meaning that the Bandwidth Manager would control traffic from the bridge instead of the interface. I have not made much progress here though - I've never used OpenVPN before. Questions Is there a Windows software package that does everything I need? (Unlikely, I know) Is there a Windows software package that will share internet between multiple NICs without bridging? Are either of my about attempts feasible? Would it help to have a newer/server version of Windows? Is there a non-Windows alternative that is easy to use?

  • Is dual-booting an OS more or less secure than running a virtual machine?

    - by Mark
    I run two operating systems on two separate disk partitions on the same physical machine (a modern MacBook Pro). In order to isolate them from each other, I've taken the following steps: Configured /etc/fstab with ro,noauto (read-only, no auto-mount) Fully encrypted each partition with a separate encryption key (committed to memory) Let's assume that a virus infects my first partition unbeknownst to me. I log out of the first partition (which encrypts the volume), and then turn off the machine to clear the RAM. I then un-encrypt and boot into the second partition. Can I be reasonably confident that the virus has not / cannot infect both partitions, or am I playing with fire here? I realize that MBPs don't ship with a TPM, so a boot-loader infection going unnoticed is still a theoretical possibility. However, this risk seems about equal to the risk of the VMWare/VirtualBox Hypervisor being exploited when running a guest OS, especially since the MBP line uses UEFI instead of BIOS. This leads to my question: is the dual-partitioning approach outlined above more or less secure than using a Virtual Machine for isolation of services? Would that change if my computer had a TPM installed? Background: Note that I am of course taking all the usual additional precautions, such as checking for OS software updates daily, not logging in as an Admin user unless absolutely necessary, running real-time antivirus programs on both partitions, running a host-based firewall, monitoring outgoing network connections, etc. My question is really a public check to see if I'm overlooking anything here and try to figure out if my dual-boot scheme actually is more secure than the Virtual Machine route. Most importantly, I'm just looking to learn more about security issues. EDIT #1: As pointed out in the comments, the scenario is a bit on the paranoid side for my particular use-case. But think about people who may be in corporate or government settings and are considering using a Virtual Machine to run services or applications that are considered "high risk". Are they better off using a VM or a dual-boot scenario as I outlined? An answer that effectively weighs any pros/cons to that trade-off is what I'm really looking for in an answer to this post. EDIT #2: This question was partially fueled by debate about whether a Virtual Machine actually protects a host OS at all. Personally, I think it does, but consider this quote from Theo de Raadt on the OpenBSD mailing list: x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection. Then running your operating system on the other side of this brand new pile of shit. You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes. -http://kerneltrap.org/OpenBSD/Virtualization_Security By quoting Theo's argument, I'm not endorsing it. I'm simply pointing out that there are multiple perspectives here, so I'm trying to find out more about the issue.

  • SQL Server 2008 R2 upgrade fails on upgrade rule check

    - by Tim
    I'm trying to upgrade an evaluation instance of SQL Server 2008 to a fully licensed instance of SQL Server 2008 R2. I made it most of the way through the installer, but I'm getting stopped at the Upgrade Rules page - the SQL Server Analysis Services Upgrade Service Functional Check is failing. The specific error I get: Rule "SQL Server Analysis Services Upgrade Service Functional Check" failed. The current instance of the SQL Server Analysis Services service cannot be upgraded because the Analysis Services service is disabled or not online. Please start the service and then run the upgrade rules check again. Simple enough - just need to start the service. Here's where it gets troublesome. When I open Services and go to start the SQL Server Analysis Services (MSSQLSERVER) service, it provides me the following message: The SQL Server Analysis Services (MSSQLSERVER) service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs. Trying from the command line as Administrator yields: PS C:\Windows\System32 net start MSSQLServerOLAPService The SQL Server Analysis Services (MSSQLSERVER) service is starting... The SQL Server Analysis Services (MSSQLSERVER) service could not be started. The service did not report an error. More help is available by typing NET HELPMSG 3534. I've tried changing the logon setting of this service to Administrator, a user with admin privileges, and both the Local System and Network Service accounts - nothing works. In addition, when I look at the service through the SQL Server Configuration Manager (also run as Administrator), attempting to change the logon setting for the service results in the message: The server threw an exception. [0x80010105] I have no need for analysis services themselves - all I need is for this one service to be running long enough to do the R2 upgrade, then it can shut down again. Any thoughts on how to get the Analysis Services service running? Update: Checking the event log, I found an error logged to the Application log from the MSSQLServerOLAPService. It has event ID 0, task category (289), and says: The service cannot be started: XML parsing failed at line 1, column 4: Unrecognized input signature.

  • Open ports broken from internal network

    - by ksvi
    Quick summary: Forwarded port works from the outside world, but from the internal network using the external IP the connection is refused. This is a simplified situation to make the explanation easier: I have a computer that is running a service on port 12345. This computer has an internal IP 192.168.1.100 and is connected directly to a modem/router which has internal IP 192.168.1.1 and external (public, static) IP 1.2.3.4. (The router is TP-LINK TD-w8960N) I have set up port forwarding (virtual server) at port 12345 to go to port 12345 at 192.168.1.100. If I run telnet 192.168.1.100 12345 from the same computer everything works. But running telnet 1.2.3.4 12345 says connection refused. If I do this on another computer (on the same internal network, connected to the router) the same thing happens. This would seem like the port forwarding is not working. However... If I run a online port checking service on my external IP and the service port it says the port is open and I can see the remote server connecting and immediately closing connection. And using another computer that is connected to the internet using a mobile connection I can also use telnet 1.2.3.4 12345 and I get a working connection. So the port forwarding seems to be working, however using external IP from the internal network doesn't. I have no idea what can be causing this, since another setup very much like this (different router) works for me. I can access a service running on a server from inside the network both through the internal and external IP. Note: I know I could just use the internal IP inside of the network to access this service. But if I have a laptop that must be able to do this both from inside and outside it would be annoying to constantly switch between 1.2.3.4 and 192.168.1.100 in the software configuration. Router output: > iptables -t nat -L -n Chain PREROUTING (policy ACCEPT) target prot opt source destination ACCEPT all -- 0.0.0.0/0 224.0.0.0/3 DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 to:192.168.1.101 DNAT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:25 to:192.168.1.101 DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:110 to:192.168.1.101 DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:12345 to:192.168.1.102 DNAT udp -- 0.0.0.0/0 192.168.1.1 udp dpt:53 to:217.118.96.203 Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- 192.168.1.0/24 0.0.0.0/0 Chain OUTPUT (policy ACCEPT) target prot opt source destination
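
    The symptom (forwarded port reachable from outside but not via the external IP from inside) is the classic missing NAT loopback, or "hairpin NAT", case. Many consumer TP-LINK firmwares simply do not support it; where custom iptables rules are possible on the router, the usual shape is roughly the untested sketch below, with the addresses taken from the simplified example:

      # Redirect inside clients that hit the external IP back to the internal server
      iptables -t nat -A PREROUTING -s 192.168.1.0/24 -d 1.2.3.4 \
               -p tcp --dport 12345 -j DNAT --to-destination 192.168.1.100
      # Rewrite the source so the server's replies go back through the router
      iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 192.168.1.100 \
               -p tcp --dport 12345 -j SNAT --to-source 192.168.1.1

    The common alternative is split-horizon DNS: have internal clients resolve the service hostname to 192.168.1.100 so no hairpin is needed at all.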

  • DNS and DHCP dies after ~2 days of use on ClearOS

    - by TheLQ
    I'm using ClearOS (based on CentOS, so any info specific to it should apply here) as a gateway, DHCP, and DNS server. I had this server running perfectly for a month or two before replacing it with another server. However, due to DNS and DHCP failing 2 days in and a host of other performance issues (the box was a little underpowered), I changed back to the original server. However, 2 days in DHCP and DNS are failing again, and I'm out of ideas as to why. In both cases, to my knowledge no network or server changes occurred after installation. Right after installing (and at least a day in) DNS and DHCP were working just fine. However later (Day 2) I get a call saying their internet is down (translation: Nobody can get to websites because DNS is down). I've tried to fix the problem by checking if dnsmasq is even running (it is), restarting the service, and restarting the server, to no effect. I do have two internal servers that have static DHCP leases, but one's lease must have expired as I can't connect to it anymore. I'm hesitant to do any DHCP testing on the last server as I'll not be able to connect to it anymore. Is there anything anyone can think of as to why DNS and DHCP would fail 2 days in after running perfectly? More info: Running dnsmasq in debug mode. This is all that's displayed, even when running nslookup quackwall. I'm not sure though if nslookup commands should show up in the log [root@quackwall ~]# /usr/sbin/dnsmasq -dq dnsmasq: started, version 2.49 cachesize 150 dnsmasq: compile time options: IPv6 GNU-getopt no-DBus no-I18N DHCP TFTP dnsmasq-dhcp: DHCP, IP range 10.0.0.100 -- 10.0.0.254, lease time 12h dnsmasq: reading /etc/resolv.conf dnsmasq: using nameserver 74.128.17.114#53 dnsmasq: using nameserver 74.128.19.102#53 dnsmasq: read /etc/hosts - 5 addresses dnsmasq-dhcp: read /etc/ethers - 2 addresses On the other server DNS and the Gateway are all configured correctly (10.0.0.2 is quackwall) lordquackstar@quackgame:~$ netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 10.0.0.0 0.0.0.0 255.255.240.0 U 0 0 0 eth0 0.0.0.0 10.0.0.2 0.0.0.0 UG 0 0 0 eth0 lordquackstar@quackgame:~$ cat /etc/resolv.conf nameserver 10.0.0.2 domain highwow.lan search highwow.lan
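
    One hedged way to catch the failure in the act is to turn on dnsmasq's own logging (standard dnsmasq options; the config path and log location are assumptions, since ClearOS may generate its dnsmasq configuration for you):

      # /etc/dnsmasq.conf -- verbose logging sketch
      log-queries
      log-dhcp
      log-facility=/var/log/dnsmasq.log

    After roughly two days the tail of that log should show whether dnsmasq stops answering, stops receiving queries, or keeps working while something upstream fails.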

  • Nagios3: Conditional operators for service checks?

    - by Dave
    I'm trying to set up Nagios to monitor my various servers, using hostgroups to define 'machine roles', against which I run services to check the machines by role. However, I'd like to use conditional operators that would enable me to run the service check against an intersection of two host groups, rather than their union... i.e. using &&, ||, or () operators. For example, imagine I have the following servers: www-eu: Linux WWW (Apache) server, in the EU www-us: Windows WWW (IIS) server, in the US (West coast) ftp-eu: Linux FTP server, in the EU ftp-us: Windows FTP server, in the US I would want to create the following host groups: US-Servers: www-us, ftp-us EU-Servers: www-eu, ftp-eu WWW-Servers: www-us, www-eu FTP-Servers: ftp-us, ftp-eu Now say I'm interested in checking the HTTP response time for my web servers. Then let's say this particular Nagios service is running from the US (West Coast), and that I have a command called *check_http_response_time*. This command will check the responsiveness of the HTTP server, and I can provide an argument that defines the max response time before raising a critical alert. My command might look like: check_http_response_time $HOSTNAME$ 50 Now traditionally, I can run my checks by specifying a list of hosts or hostgroups. define service{ use local-service hostgroup_name WWW-Servers # Servers = www-us, www-eu servicegroups WWW Checks service_description Check HTTP Response Time check_command check_http_response_time!50 } However, with the above service definition, given my Nagios service is in US West, I could reasonably expect that my EU server will return critical. Really, I want different thresholds for each region (50 for US West, 200 for EU). I would have to duplicate my service for each host and set its custom threshold, or alternatively split my service groups out by role & region (e.g. WWW-Servers-EU), and run my specific thresholds against those. Though the latter is better, both are much messier than I'd like... What I would love, and what this post is asking for, is a way to use hostgroups to perform an intersection using conditional logic, rather than a simple union. It might look like: define service{ use local-service hostgroup_name WWW-Servers && US-Servers servicegroups WWW Checks service_description Check HTTP Response Time check_command check_http_response_time!50 } It then would run the check only against servers that are in both WWW-Servers and US-Servers, in my example, just www-us. The benefits of such a feature would be significant for Nagios services configured at large scale. Is this feature available? If it isn't, will it be available in the future? Is there an alternative way to accomplish this given the most recent Nagios version? Any tips/suggestions are most appreciated! Dave
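
    As far as I know, Nagios 3 has no hostgroup intersection operator, but custom object variables can carry the per-region threshold on each host, so a single service definition covers both regions. A sketch (the _HTTP_WARN_MS variable name and the addresses are made up for illustration):

      define host{
          use              generic-host
          host_name        www-us
          address          192.0.2.10
          _HTTP_WARN_MS    50     ; exposed to commands as $_HOSTHTTP_WARN_MS$
      }

      define host{
          use              generic-host
          host_name        www-eu
          address          192.0.2.20
          _HTTP_WARN_MS    200
      }

      define service{
          use                   local-service
          hostgroup_name        WWW-Servers
          service_description   Check HTTP Response Time
          check_command         check_http_response_time!$_HOSTHTTP_WARN_MS$
      }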

  • Apache/Jboss Issue - is this connection timeout?

    - by user115391
    We have an application. The architecture is as below 1 load balancer (apache), which redirects to 2 app servers (jboss). The site is working fine and I am able to access it fine. But sometimes, randomly the homepage takes a while (like 30-40 secs) to load. I tried checking the logs but could not figure out why. I used the httptraffic analyzer, fiddler to see the traffic, but it just says the request/response took 30 secs or so. I checked the apache access logs, mod_jk.log. My configurations are below mod-jk.conf LoadModule jk_module modules/mod_jk.so JkWorkersFile conf/workers.properties JkLogFile logs/mod_jk.log #JkLogLevel info #JkLogLevel debug JkLogLevel error # Select the log format JkLogStampFormat "[%a %b %d %H:%M:%S %Y]" JkOptions +ForwardKeySize +ForwardURICompatUnparsed -ForwardDirectories JkRequestLogFormat "%w %V %T %P %{tid}P %D" JkMount /__application__/* loadbalancer JkUnMount /__application__/images/* loadbalancer <VirtualHost *:8080 > JkMountFile conf/uriworkermap.properties </VirtualHost> JkShmFile run/jk.shm <Location /jkstatus> JkMount status Order deny,allow Deny from all Allow from 127.0.0.1 </Location> ----------------------------- uriworkermap.properties Simple worker configuration file # Mount the Servlet context to the ajp13 worker /=loadbalancer /*=loadbalancer ----------------------------- workers.properties worker.list=loadbalancer,status worker.template.port=8009 worker.template.type=ajp13 worker.template.lbfactor=1 worker.template.prepost_timeout=10000 worker.template.connect_timeout=10000 worker.template.ping_mode=A worker.worker1.reference=worker.template worker.worker1.host=hostname1 worker.worker2.reference=worker.template worker.worker2.host=hostname2 worker.loadbalancer.type=lb worker.loadbalancer.balance_workers=worker1,worker2 worker.status.type=status ----------------------------- my jboss server.xml - $JBOSS_HOME/server/default/deploy/jbossweb.sar/server.xml --------------------------------- The logs from access log is below The issue where it took time - look at the seconds column [23/Mar/2012:12:10:38 -0400] "GET / HTTP/1.1" 200 138 x.x.x.x - - [23/Mar/2012:12:10:49 -0400] "GET /index.jsp HTTP/1.1" 302 - x.x.x.x - - [23/Mar/2012:12:11:10 -0400] "GET /home.jsp HTTP/1.1" 200 936 x.x.x.x - - [23/Mar/2012:12:11:31 -0400] "POST /login/ HTTP/1.1" 200 8895 x.x.x.x - - [23/Mar/2012:12:11:52 -0400] "GET /login/includes/login-style.css HTTP/1.1" 304 - The one after the issue x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET / HTTP/1.1" 200 138 x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET /index.jsp HTTP/1.1" 302 - x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET /home.jsp HTTP/1.1" 200 936 x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "POST /login/ HTTP/1.1" 200 8895 x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET /login/includes/login-style.css HTTP/1.1" 304 - Would it be a cache or timeout issue? Any help is appreciated. Thanks.
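
    For intermittent 30-40 second stalls with mod_jk, a hedged place to look is connection health between Apache and JBoss: an AJP connection that the JBoss side has silently dropped is only discovered when a request hangs on it. The worker properties below are standard mod_jk directives; the values are illustrative guesses, not tuned numbers:

      # workers.properties -- sketch: make dead backend connections fail fast
      worker.template.socket_timeout=10
      worker.template.socket_keepalive=true
      worker.template.reply_timeout=30000
      worker.template.connection_pool_timeout=60

    The usual pairing is to set connectionTimeout (around 60000 ms) on the JBoss AJP connector so both sides agree on when idle connections are dropped.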
