Search Results

Search found 10536 results on 422 pages for 'dan course'.


  • Notify user of message arrival in another mailbox

    - by Tim Alexander
    This is very similar to this question but has a few differences. Basically, we have a user dealing with a conflict-of-interest case. To separate the mail from prying eyes (and the draconian routing system we have in place), the user has been granted access to a second conflicts mailbox that is only accessible to him via OWA. This has worked fine for years, but now the user would like a notification to be sent to him when a message arrives in his conflicts mailbox.

    Initially I thought an Outlook rule would work, but of course the client is never logged in, so the Outlook rules are never processed. This led me to think that an Exchange transport rule might work, but the only options I can see are to forward or copy the message to another user, which would bypass the conflicts setup. All I really need is a notification, not the actual message, to be sent. Is this at all possible with Exchange 2007? Or, if not, is there any third-party add-on or workaround that anyone has come across?
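
    One workaround sketch (untested; the mailbox, SMTP server, and addresses below are all placeholders): since no Outlook client ever logs in, a scheduled task on a machine with the Exchange 2007 Management Shell could poll the conflicts mailbox's item count and send a bare notification when it grows, so the actual message content never leaves the restricted mailbox.

        # Hypothetical EMS polling script; all names are examples.
        $stateFile = "C:\Scripts\conflicts-count.txt"
        $stats = Get-MailboxStatistics -Identity "ConflictsMailbox"
        $lastCount = 0
        if (Test-Path $stateFile) { $lastCount = [int](Get-Content $stateFile) }
        if ($stats.ItemCount -gt $lastCount) {
            # Send a notification only; no message content is forwarded.
            $smtp = New-Object Net.Mail.SmtpClient("smtp.example.local")
            $smtp.Send("notify@example.local", "user@example.local",
                       "New mail in the conflicts mailbox",
                       "Item count is now $($stats.ItemCount).")
        }
        $stats.ItemCount | Out-File $stateFile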


  • Hard Disk based storage library

    - by Ryan M.
    We have a Tandberg T24 tape device to handle all of our long-term backups right now. We decided that we're not backing up nearly everything that we would like to and that we still have a lot of vulnerabilities. To get to where we want to be, we're going to have to back up a lot more servers than we're currently doing. All of our internal servers have some sort of directly attached drive (e.g. a LaCie RAID box or a simple portable hard drive) doing backups, but what we want to do is get those backups off-site. The current tape drive is directly attached via SCSI to a Windows Server 2008 file server, so to back up anything to tape, it has to be funneled through the file server. With the increase that we have planned, I don't think that funneling everything through the file server is the right course of action, and I'm thinking that maybe a second backup device would be more appropriate. I would like your input on a couple of ideas:

    1) Doing HDD instead of tape. Tape is hard to deal with. We have a regular rotation cycle, so the media don't need years and years of shelf life, and I'm wondering if something HDD-based would be better.

    2) Something accessible over the network. Instead of having the device directly attached to one specific machine, have it available to all the servers over the network. Our file server is a 12-disk RAID 6 setup; I was thinking something like that, but with no RAID involved, all disks stand-alone so they can be used/installed/removed on an individual basis.

    Does any such thing exist? Thanks for your ideas. I'm really interested to hear about some of the solutions you guys are using.


  • Linux RAID: Replacing Failed Drive...permanently

    - by user137519
    Okay, odd question here. I have a server with RAID 5. A drive failed, physically, in a really odd way: on the machine it boots and is seen by the BIOS, but no partition can be seen on the drive consistently (it comes and goes). With 2 out of 3 drives working, I made a new spare disk and added it, and the RAID 5 rebuilt clean. All appears well, but when I reboot it keeps trying to use the second drive, which doesn't give any partition data, so of course the RAID 5 gets 2 out of 3 again. The status of my drives is as follows:

        /dev/sda2: good
        /dev/sdb2: bad (drive has a physical problem, so no partition data)
        /dev/sdc2: good
        /dev/sdd2: good

    Every time I reboot, the mdadm system seems to keep trying to use /dev/sdb, which has a physical failure (although it spins and is detected). /dev/sdd is the new drive I created. I added /dev/sdd to the RAID and it rebuilds the array, but this action isn't remembered upon reboot, so it keeps listing /dev/sda and /dev/sdc but doesn't use the perfectly good /dev/sdd until I re-add it manually. I've tried removing the dead drive with the mdadm tool, but as it cannot see /dev/sdb's partitions it will not fail or remove it (it says the partition doesn't exist). The /etc/mdadm.conf was automatically made on the original OS install and only lists:

        DEVICE partitions
        MAILADDR root
        ARRAY /dev/md2 super-minor=2
        ARRAY /dev/md0 super-minor=0
        ARRAY /dev/md1 super-minor=1

    Basically just the arrays to use on boot. I need to remove this semi-dead drive (/dev/sdb), but I'd prefer to know why this is happening before I do. Any ideas or suggestions? I suppose I could attempt to clone/replace /dev/sdb (the partitions on the drive show up, then disappear shortly after), but given the partition's "Cheshire cat" behaviour this seems risky to me, and as I have a working spare it seems unnecessary. Thanks in advance for your insight.
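
    One hedged way out (a sketch; verify the device names against your own arrays before running anything): tell mdadm to forget the half-dead member explicitly, wipe its RAID metadata so it can never be auto-assembled again, and then regenerate the ARRAY lines in mdadm.conf from the arrays as they are actually running, so the membership survives a reboot.

        # tell mdadm to forget the half-dead member wherever it is still listed
        mdadm /dev/md2 --fail /dev/sdb2 --remove /dev/sdb2
        # wipe the RAID superblock so sdb2 is never auto-assembled again
        mdadm --zero-superblock /dev/sdb2
        # print ARRAY lines for the arrays as currently running; replace the
        # old ARRAY lines in /etc/mdadm.conf with this output
        mdadm --detail --scan
        # on initramfs-based distros, rebuild the initramfs so boot-time
        # assembly sees the new config: update-initramfs -u, or dracut -f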


  • Puppet: is it ok to "force" certname when you expect to shuffle nodes around?

    - by Luke404
    We all know (good example on SF) that Puppet hostname detection could be... fun. At our company (and I guess we're not alone in this) we usually pre-configure servers at our offices and test them before bringing the gear to a remote datacenter and racking it. Of course the reverse DNS will change when doing that, even if we don't change the actual hostname of the system. We're slowly drafting our puppet setup and I'd like to be sure those moves won't create problems. My idea is to explicitly configure the desired full FQDN of the system as certname in puppet.conf at server provision time (before the very first puppet run). My process would look something like this:

    1. basic OS installation
    2. basic network configuration, enough to reach the internet and resolve DNS
    3. install puppet and set up certname
    4. start puppet and let it manage the whole configuration
    5. test, fix problems in config (via puppet), re-test, and so on...
    6. manually stop puppet
    7. set up the new network configuration for the datacenter network
    8. move the machine to the DC and turn it on; puppet should automatically start and keep on doing its job

    The process is supported by detecting the environment in puppet's manifests (e.g. based on subnet, like they do at Wikimedia) and modifying configuration as needed (e.g. resolv.conf contents appropriate for each network). Each node's certname will never change for the whole system life cycle. Is there any problem with this approach? Could it be improved?
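
    A minimal sketch of step 3 (hostnames are examples; depending on the puppet version the setting can live in [main] or [agent]):

        # /etc/puppet/puppet.conf on the node, set before the first run
        [agent]
        certname = web01.dc1.example.com
        server   = puppet.example.com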


  • What's Keeping My Computer Awake?

    - by phantomdata
    First, the question: how do I figure out what is preventing my Windows 7 computer from going into sleep mode? Second, some background... I've been struggling with this for a few days and am utterly perplexed. I set up sleep mode on my Windows 7 PC a few weeks ago, and all was well. The PC would sleep as expected, and I was snug in the knowledge that my computer was saving power and some wear and tear on the components (we'll leave the 'is it better to sleep' debate for another thread/day; please don't start it).

    Well, I noticed the other night that my system stopped ever going to sleep. I set the sleep time down to 1 minute and wandered fully away from the PC (ensuring that no errant mouse or keyboard movements would occur), and the PC never went to sleep. I've observed this over longer intervals as well, such as overnight. I have sleep mode enabled, and of course "Multimedia settings - When sharing media" is set to allow the computer to sleep. "powercfg -lastwake" shows nothing of interest, since the machine never goes to sleep and so can't wake up. "powercfg /requests" shows 3 entries, all "[DRIVER] ?". I assume that 2 of these are my mouse and keyboard, as I've recently used them to run the powercfg command, but I'm at a loss for the third. I've unhooked all USB peripherals save for my keyboard and mouse. Wake-on-LAN is disabled in my BIOS. I know that you can block all apps from waking/preventing sleep, but I want that ability to remain for the apps that legitimately need to keep the system awake.

    So: does anyone know of a way to figure out what the third phantom "[DRIVER] ?" entry in powercfg /requests is?
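
    A hedged way to dig further (run from an elevated prompt; the driver name in the override line is only an example): powercfg can generate a fuller energy report that often names the driver behind an anonymous request, and once a culprit is identified its requests can be overridden individually, leaving legitimate apps free to keep the system awake.

        powercfg /requests
        powercfg /energy
        REM the report lands in energy-report.html in the current directory
        powercfg /requestsoverride DRIVER "Example HD Audio Driver" SYSTEM
        powercfg /requestsoverride
        REM the bare form lists the overrides currently in place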


  • Scripted redirection for Outlook 2003

    - by John Gardeniers
    We have a staff member in sales who has gone onto a 4-day week (getting ready for retirement), so each Thursday afternoon her email needs to be forwarded to another user, and each Friday afternoon it needs to be set back. I'm using the VBS script below to do this, run via the Task Scheduler. Although the script appears to do its job, based on what I see when I view the user's Exchange settings, Exchange doesn't always recognise that the setting has changed. For example, last Thursday the forwarding was enabled and worked correctly; on Friday the script did its thing to clear the forwarding, but Exchange continued to forward messages all weekend. I found that I can force Exchange to honour the changed setting by merely opening and closing the user's properties in ADUC. Of course I don't want to have to do that. Is there a non-manual way I can have Exchange read and honour the setting?

    The script (VBS):

        ' Call this script with the following parameters:
        '
        ' SrcUser - the logon ID of the user whose account is to be modified
        ' DstUser - the logon ID of the person to whom mail is to be forwarded;
        '           use "reset" to clear the email forwarding

        SrcUser = WScript.Arguments.Item(0)
        DstUser = WScript.Arguments.Item(1)

        SourceUser = SearchDistinguishedName(SrcUser) ' the user login name
        Set objUser = GetObject("LDAP://" & SourceUser)

        If DstUser = "reset" Then
            objUser.PutEx 1, "altRecipient", ""   ' 1 = ADS_PROPERTY_CLEAR
        Else
            ForwardTo = SearchDistinguishedName(DstUser) ' the contact common name
            objUser.Put "altRecipient", ForwardTo
        End If
        objUser.SetInfo

        Public Function SearchDistinguishedName(ByVal vSAN)
            Dim oRootDSE, oConnection, oCommand, oRecordSet
            Set oRootDSE = GetObject("LDAP://rootDSE")
            Set oConnection = CreateObject("ADODB.Connection")
            oConnection.Open "Provider=ADsDSOObject;"
            Set oCommand = CreateObject("ADODB.Command")
            oCommand.ActiveConnection = oConnection
            oCommand.CommandText = "<LDAP://" & oRootDSE.Get("defaultNamingContext") & _
                ">;(&(objectCategory=User)(samAccountName=" & vSAN & "));distinguishedName;subtree"
            Set oRecordSet = oCommand.Execute
            On Error Resume Next
            SearchDistinguishedName = oRecordSet.Fields("distinguishedName")
            On Error GoTo 0
            oConnection.Close
            Set oRecordSet = Nothing
            Set oCommand = Nothing
            Set oConnection = Nothing
            Set oRootDSE = Nothing
        End Function


  • (Windows 7) Shared External Drive Permission Issues

    - by connec
    So, say I share my system (C) drive through Windows (e.g. Properties -> Sharing -> Advanced Sharing -> Share this folder). I can then access this drive at \\Comp\C from another networked computer; all is well. However, if I insert a removable (USB) disk, say "E", and proceed to share it the same way, when I attempt to access \\Comp\E (either directly or through browsing) I get an error:

        Windows cannot access \\Comp\E
        You do not have permission to access \\Comp\E. Contact your network administrator to request access.

    Now, the share permissions (Advanced Sharing -> Permissions) are set with "Everyone" having read access (same as the internal drive), so this doesn't make a lot of sense. Also of note, I have an SSH server on my computer (through Cygwin), and even through SSH (logging in as an administrator user) I cannot access /cygdrive/e (although /cygdrive/c is accessible). As a final note, the drive is of course accessible on the host machine (E:\), and also at \\Comp\E on the host machine.
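
    Worth checking (a sketch; the share name and paths are examples): share permissions and NTFS permissions are evaluated together, and removable volumes often carry NTFS ACLs written on another machine, whose SIDs mean nothing locally (or are FAT-formatted with no ACLs at all). Recreating the share and granting the local volume ACL explicitly can rule that out:

        net share E /delete
        net share E=E:\ /GRANT:Everyone,READ
        icacls E:\ /grant "Authenticated Users":(OI)(CI)RX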


  • How do I connect remotely to SQL Server from Windows client?

    - by humble_coder
    Hi All, I'm having a bit of an issue connecting to SQL Server remotely from Windows. I've verified that all of my settings are correct via SQL Server Management Studio Express and SQL Server Configuration Manager. I can connect remotely using ODBC drivers from other OSes (e.g. OS X, Linux, etc.). However, when I connect with the same credentials from a remote Windows machine using "SQL Server" as the driver, I am told that the system cannot connect. I've tried creating an ODBC data source and I get the same error:

        Connection failed: SQLState: '01000'
        SQL Server Error: 14
        [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]ConnectionOpen(InvalidInstance()).
        Connection failed: SQLState: '08001'
        SQL Server Error: 14
        [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]Invalid Connection

    From the non-Windows machines I can use the IP address of the SQL Server just fine. However, on the remote Windows machine, neither the IP address nor the named instance works. FYI, I can create an ODBC data source using the named instance on the machine actually running SQL Server (but this is, of course, nothing special; just proof that it isn't completely hosed). One interesting note: if I use SQL Studio 2005 from a Windows client machine, I can use the IP address to connect remotely.

    Still, the whole reason I bring this up is that I need to use a software package I've written to connect to SQL Server remotely from Windows machines as well. Previously the solution was only needed to transfer data from SQL Server into a Postgres or MySQL database on non-Windows machines (due to DBA preference). However, now they also want to move the data from the legacy software to MySQL even on Windows. Any assistance would be most appreciated. Feel free to provide a full example connection string. Best
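
    Since a full example connection string was requested, here is a hedged one for the legacy "SQL Server" ODBC driver (the address, port, database, and credentials are placeholders). Network=DBMSSOCN forces the TCP/IP sockets library and Address pins the host and port explicitly, which takes instance-name resolution (the SQL Browser service on UDP 1434) out of the equation; that resolution step is a common failure point when only the Windows client misbehaves.

        Driver={SQL Server};Network=DBMSSOCN;Address=192.168.1.50,1433;Database=MyDb;Uid=appuser;Pwd=secret;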


  • Remote Desktop leaves host unresponsive

    - by Jeff Dalley
    I have my desktop PC at home set up to accept remote connections, and I often connect to it from work on my laptop via mstsc.exe. However, every time I remote into it, I find when I go home that despite the monitor being on, it's not receiving an image, and it looks as though the computer is hibernating/asleep. I basically have to restart it whenever I get home, and I know there's an answer for why it's doing this.

    More details: when exiting the remote session, I have tried both logging off the account and closing the RDP window without logging off; both give the same result. When I get home to the desktop I of course try moving the mouse, ctrl+alt+del to see if it's responsive enough to restart, and multiple key-presses to see if I can get any audio out of it. Nothing happens in any of these cases, and a physical restart is necessary, so it seems pretty obvious it's sleeping/hibernating in some way.

    Both desktop and laptop are running Windows 7 Ultimate. I'm thinking it really is sleeping/hibernating, and I'm not sure why, because left alone my desktop's power options are set to never turn off the HDD or change its state; I leave it on 24/7. This could be a stupid error on my part, but I just can't see it! Thanks.
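
    One workaround sometimes suggested for hosts that stay blank after an RDP session (a guess at the cause, not a diagnosis: the physical console never gets the session handed back): redirect the session to the console before disconnecting, instead of just closing the RDP window. The session ID 1 below is an example; read the real one from the query output first.

        REM inside the RDP session, from an elevated prompt:
        query session
        tscon 1 /dest:console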


  • Enterprise Redirection Services?

    - by Aaron Alton
    This is probably a case of "if I knew what it was called, I could google it in 5 minutes", but I don't know what it's called. It's probably best to explain the requirement using an example. We have a number of services (VPN, OWA, etc.) which we host from one of our datacenters. We have a number of datacenters, and we technically already have the infrastructure in place to support these services at several of them. To provide access to these services, I would create an external DNS entry (e.g. VPN.MyCompany.com pointing at the gateway IP of one of my DCs), and clients would connect to it via the DNS entry. Since I have multiple datacenters that can support this service, I could theoretically offer a "highly available, geographically dispersed" solution if I could point this DNS entry to some sort of third party that offers highly available "redirection" services. If my primary site went down, I could just make a change via some management console and configure the redirector to point to a different DC. Of course, it would be fairly straightforward to set this sort of thing up on one of our servers, but that would kinda defeat the purpose of a highly available third party. Is anyone familiar with a service like this? I'm thinking something like DynDNS, but with enterprise availability guarantees.
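
    At its core, the mechanism being described is managed DNS with a short TTL (a sketch; the name, TTL, and address are examples), so that repointing the record at another datacenter's gateway takes effect quickly. Managed DNS / GSLB providers wrap exactly this in a hosted console, usually adding health checks and automatic failover on top.

        ; zone-file sketch: 60-second TTL so a manual repoint propagates fast
        vpn.mycompany.com.  60  IN  A  203.0.113.10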


  • Never getting a JSON response when running server-side PHP proxy script but I do with others

    - by Dohk
    I'm on PHP 5.3.4 and Apache 2.2, by the way. So I'm using (or trying to use) Simple PHP Proxy. I enter a URL at his example page (the SPP example page) and it works fine: I see the JSON response and all the headers. However, when I copy the exact same request, only changing the host to localhost, I get both empty headers and no JSON. Assuming that the script on his site is the same one I downloaded, could this be due to a multitude of things, or to a setting in Apache and/or php.ini? So for example:

        benalman.com/code/projects/php-simple-proxy/ba-simple-proxy.php?url=http://github.com/&full_headers=1&full_status=1

    That will get me a ton of info back. Now changing to localhost:

        http://localhost/ba-simple-proxy.php?url=http://github.com/&full_headers=1&full_status=1

        {"headers":[],"status":{"url":"https:\/\/github.com\/","content_type":"text\/html","http_code":301,"header_size":194,"request_size":182,"filetime":-1,"ssl_verify_result":0,"redirect_count":1,"total_time":0.094,"namelookup_time":0,"connect_time":0.047,"pretransfer_time":0,"size_upload":0,"size_download":185,"speed_download":1968,"speed_upload":0,"download_content_length":185,"upload_content_length":0,"starttransfer_time":0,"redirect_time":0.047,"certinfo":[]},"contents":null}

    I even went basic and just used some plain curl, and of course got empty objects back: nothing but false for my content and the URL I set in my JSON. Any help is deeply appreciated, as are any ideas.
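
    One thing the local output hints at (a guess, not a diagnosis): "http_code":301 with "redirect_count":1 and "contents":null means the request reached github.com, was redirected to HTTPS, and came back with no body, which on a stock local PHP install is frequently an SSL/CA-bundle problem. A minimal standalone probe, separate from the proxy script, shows whether this PHP install can fetch HTTPS URLs at all:

        <?php
        // Standalone curl probe to test HTTPS fetching on this PHP install.
        $ch = curl_init('https://github.com/');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        $body = curl_exec($ch);
        if ($body === false) {
            // e.g. "SSL certificate problem" points at a missing CA bundle
            echo 'curl error: ' . curl_error($ch);
        } else {
            echo 'fetched ' . strlen($body) . " bytes\n";
        }
        curl_close($ch);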


  • How do I make a PPT file as small as possible?

    - by grunwald2.0
    Currently I am agonizing over several large presentation files, which I happened to reprint to PDFs... One thing I wondered: do PPTs (from Microsoft PowerPoint) always have to be that big? And what would be the strategies for making a PPT smaller? (Say, ceteris paribus, at 25 slides, and assuming that one isn't allowed to use a cloud-based service like GDocs, rocketslide or Prezio.)

    Of course there are the obvious "bad guys": images and graphics. But how about roll-over animations etc.; who knows how much space they take? How about SmartArt? Could one save file size by using OpenOffice or LibreOffice Impress? (I didn't try it yet.) And "what if": if we need to include, say, five images (or charts that can't be remade in Excel in time), how would we best reduce the file-size impact of those five images, if we needed to?

    I ask all this from an honest "business" perspective. I am no nerd or "Microsoft MVP" and I don't intend on delving into LaTeX or similar yet. But that doesn't mean that I am not curious and very willing to learn. I am basically interested in (proven) best practices. Yes, I know this question is lacking "initial research", but I think the perspective of my question is interesting and unique to a lot of people, and if we intend to make SE a "Q&A"/wiki kind of reference site, this question might be a good way to collect advice on a question that has a very defined goal: minimum file size.


  • No partition on USB Flash Drive?

    - by Skytunnel
    A friend gave me a corrupted USB memory stick to try to recover data from. But I've had some unusual results, so I thought I'd share to see if anyone is familiar with this problem...

    First off, I just tried opening it from my own PC. Windows prompted to format the drive, which I of course declined. I downloaded TestDisk to analyse the drive, and right away I noticed something strange: in the listed drives it comes up as

        Disk /dev/sdc - 6144 B - USB Flash Drive

    That's right: the first USB flash drive smaller than a floppy disk!? Moving on anyway... the first analysis comes up with:

        Partition sector doesn't have the endmark 0xAA55

    TestDisk's Quick Search gave no results, so I moved on to Deeper Search:

        No partition found or selected for recovery

    This left me stumped. I tried a couple of other programs with no success. I did manage to get a backup image, but it was just as small as TestDisk indicated, so there was nothing of use on it. After a few hours trying various suggestions from other sources, I gave in and just tried formatting the drive, but that returned the message:

        Windows was unable to complete the format.

    From googling that, the suggestion was to delete the partition. But there is no partition to delete in this case. Most recently I've tried formatting from cmd and got this result:

        Format D: /FS:FAT32
        The type of the file system is RAW
        The new file system is FAT32
        Verifying 0M
        11 bad sectors were encountered during the format. These sectors cannot be guaranteed to have been cleaned
        The volume is too small for FAT32

    Anyone got any suggestions?

    UPDATE: As per a suggestion from @Karen, I tried running a CLEAN from DISKPART; results as follows:

        DiskPart has encountered an error: The request could not be performed because of an I/O device error.


  • Verification of downloaded package with rpm

    - by moooeeeep
    I wanted to install a package on CentOS 6 via rpm (e.g., the current epel-release). EDIT: Of course I would always prefer installation via yum, but somehow I failed to get that specific package installed using the normal approach. For such cases, the EPEL FAQ recommends Version 2 below. As I'm downloading the package through an insecure channel (http), I want to make sure that the integrity of the file is verified using information that is not provided with the downloaded file itself. Does that actually hold for all of these approaches? I've seen various approaches to this on the internet:

    Version 1:

        rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm

    Version 2:

        rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm

    Version 3:

        wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm
        rpm --import https://fedoraproject.org/static/0608B895.txt
        rpm -K epel-release-6-7.noarch.rpm
        rpm -i epel-release-6-7.noarch.rpm

    I do not know rpm very well, so I wondered how they might differ. My guess (after reading the man page) is that the first should only be used when the package is not previously installed, that the second would additionally remove previous versions of the package after installation, and that the first two omit some verification steps, done by rpm -K, before the actual installation. So my main questions at this point are:

    1. Are my guesses correct, or am I missing something?
    2. Is the rpm --import ... implicitly done for the first two approaches as well, and if not, isn't it necessary to do so after all?
    3. Are the additional checks performed by rpm -K ... at all relevant?
    4. What is the best (most secure, most reliable, most maintainable, ...) way of installing packages via rpm in general?
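
    For reference, what a successful explicit check looks like (the output below is illustrative; exact wording varies across rpm versions): rpm -K validates the package's digests and GPG signature against the imported keys, which is exactly the out-of-band information an http download lacks. The first two variants skip that explicit step; rpm will still warn at install time about signatures it cannot verify, but it proceeds anyway.

        $ rpm -K epel-release-6-7.noarch.rpm
        epel-release-6-7.noarch.rpm: rsa sha1 (md5) pgp md5 OK
        # without the key imported, the line would instead end in
        # something like "(MD5) NOT OK (MISSING KEYS: ...)"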


  • iptables blank after reboot

    - by theillien
    We've started encountering an issue with iptables on our RHEL 6.3 systems: after a reboot, when the service starts, the rules are not loaded. We get the empty ruleset:

        [msnyder@matt-test ~]$ sudo iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    This is in spite of the fact that we have rules defined and the service is, indeed, running. I know that because when I run service iptables start it simply drops back to the prompt, if I run service iptables restart it actually stops and then restarts the service, and, of course, if I run service iptables stop it indicates that iptables is actually stopping. Knowing that I need to restart the service, I do so, and the rules load up properly; they simply don't get loaded after a reboot. Unless they get loaded differently during a reboot, I don't see how our rules could be wrong; if they were, they wouldn't load during a service restart either. Has anyone else ever encountered this?

    EDIT: The rules are already saved in /etc/sysconfig/iptables. They are not added on the fly from the command line, so service iptables save is unnecessary.
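
    A hedged first check on RHEL 6 (stock initscripts assumed): confirm the service is actually registered to start in the runlevels the machine boots into, and that the saved file really contains the expected rules, since at boot the initscript does nothing more than load that file.

        chkconfig --list iptables       # expect 2:on 3:on 4:on 5:on
        chkconfig iptables on           # register it for boot if it is not
        cat /etc/sysconfig/iptables     # the ruleset the initscript loads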


  • Puppet: variable overriding best practices

    - by rvs
    I'm wondering what the best practices are for overriding variables in puppet. I'd like to have all my nodes (which are located in different places; some of them are QA, some live) use the same classes. Right now I have something like this:

        class base_linux {
            <...>  # something which requires $env and $relayhost variables
        }

        class linux_live {
            $relayhost = "1.1.1.1"
            $env = "prod"
            include base_linux
        }

        class linux_qa {
            $relayhost = "2.2.2.2"  # override relayhost
            include base_linux
        }

        class linux_trunk {
            $env = "trunk"  # override env
            include linux_qa
        }

        node "trunk01" {
            include linux_trunk
            include <something else>
        }

        node "trunk02" {
            $relayhost = "3.3.3.3"  # override relayhost
            include linux_trunk
            include <something else>
        }

        node "prod01" {
            include linux_prod
        }

    So, I'd like to have some defaults in base_linux which can be overridden by linux_qa/linux_live, which in turn can be overridden in higher-level classes and node definitions. Of course, it does not work (and is not expected to work), and it does not work with class inheritance either. I could probably achieve this using global-scope variables, but that doesn't seem like a good idea to me. What is the best way to solve this problem?
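
    One direction worth considering (a sketch using the question's own names; parameterized classes need Puppet 2.6 or later): make the defaults explicit parameters of base_linux and pass overrides where needed, instead of relying on dynamic variable scoping.

        class base_linux($env = "prod", $relayhost = "1.1.1.1") {
            # resources here use $env and $relayhost directly
        }

        node "trunk02" {
            class { "base_linux":
                env       => "trunk",
                relayhost => "3.3.3.3",
            }
        }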


  • Free Hosting control panel

    - by John Maxim
    Hello All, I'm in the middle of researching the best hosting control panels. The server I run is Ubuntu, and I have some experience with ISPConfig 2 & 3. Since I haven't explored any others available, what are the recommended ones for an Ubuntu server? I ask because ISPConfig seems to require disabling and modifying a number of things on an Ubuntu server, which changes the way the server actually runs. It's quite good, but are there any other recommended ones? Something more organic, which doesn't require much breaking and changing? I'm not asking for the simplest one; I don't mind going the extra mile to install a powerful one, but something that sticks with most of Ubuntu's conventions would be ideal for me. And of course, if there happens to be something that meets the "Ubuntu conventions" requirement and is also simple to install, that'd be a bonus. Thanks in advance.


  • In need of assistance for recovering a lost partition

    - by Tek
    The program that has worked for the most part is Active@ Partition Recovery. I'm so close, yet so far, from recovering my data. Okay, so here's what happens. In the following screenshot (I blanked out a folder and filename containing profanity, in case some of you guys are at work :P), it detects the partition I accidentally deleted, with ALL 100% of my data listed. Of course, I didn't write ANY data to that drive after I did this. But when I click "Recover" and it finishes the recovery process, in Windows I click on the partition that was just recovered... and it's EMPTY. The program seems to be able to see my lost files, but when I recover the partition, Windows doesn't seem to think the same =(

    Things I tried: I ran chkdsk /r /f after I recovered the partition; apparently it couldn't find any errors. I tried other software like TestDisk to recover the partition, but they (all) act similarly to Windows in that they detect the (missing) partition, but when I browse it there are no files. The partition is there, along with all the file and data information. The sector information is also in the screenshot; is there any way I can use this to my advantage in recovering my data?

    Other information: dual-boot Win8 / Ubuntu 12.10 x64; 1TB internal desktop drive, GPT layout, NTFS-formatted, 64K allocation size.


  • Zabbix doesn't update value from file neither with log[] nor with vfs.file.regexp[] item

    - by tymik
    I am using Zabbix 2.2. I have a very specific environment, where I have to generate the desired data to a file via a script, then upload that file to FTP from the host and download it to the Zabbix server from FTP. After the file is downloaded, I check it with log[] and vfs.file.regexp[] items. I use these items as below:

        log[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,"all",\1]
        vfs.file.regexp[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,,\1]

    The line I am parsing looks like this:

        C: 8195Mb 5879Mb 2316Mb 28.2

    The value I want to extract is the 28.2 at the end of the line. The problem I am currently trying to solve is that when I update the file (upload from the host to FTP, then download from FTP to the Zabbix server), the value does not update.

    I was trying only log[] at the start, but I suspect that log[] treats the file as a real log file and doesn't re-check lines it has already seen (although, following the documentation, it should with the "all" value), so I added a vfs.file.regexp[] item too. The log[] item received a value in the past, but it doesn't update. The vfs.file.regexp[] item hasn't received any value so far. file.txt has been re-uploaded and re-downloaded several times and the situation doesn't change. It seems that log[] reads only new lines in the file; it doesn't re-check lines already caught, even if there are changes.

    The zabbix_agentd.log file doesn't report any problem with access to the file, nor with the regexp construction (it did report "unsupported" for the log[] key when I had something set up wrong). I use debug logging level for the agent; I haven't found any interesting info about this problem there.

    I have no idea what I might be doing wrong or what I don't know about how Zabbix performs these checks. I see two fallbacks: appending lines to the file instead of replacing it, or creating new files and checking them with logrt[], but neither satisfies my requirements. Any help is greatly appreciated. Of course I will provide additional information if requested; for now I don't know what else might be useful.
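
    One way to narrow it down (a sketch; the agent address is an example): query the item key directly with zabbix_get against the agent that reads the file, which evaluates the check immediately and bypasses the server-side scheduler and cache, so you can at least tell whether the agent itself can extract the value from the current file contents.

        zabbix_get -s 192.0.2.20 -p 10050 \
            -k 'vfs.file.regexp[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,,\1]'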


  • Cannot change power button or lid close action

    - by Mark Henderson
    I have a Samsung 900x laptop, and I want to change it so that when I close the lid, nothing happens (I often close the lid to carry it somewhere 10 seconds away, and by putting the machine into suspend it cancels any active downloads, etc.). Easy, right? Go to Power Options and change it there, just like on every other laptop in the world. Not so fast: Saywhat?! That message only shows up for the nodes for Lid Close Action, Power Button and Sleep Button; I can change every other setting except for those three. I'm definitely an Administrator on the computer, and I've googled the error and found dozens of hits on other crappy forums, but of course nothing from those worked (otherwise, I wouldn't be here). And as usual the "Why can't..." hyperlink gives no useful information whatsoever (just a generic Help document). So: how can I change what closing the lid does? I will modify the registry directly if I have to.
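
    A hedged command-line end-run (the aliases below are the standard powercfg GUID aliases; confirm with "powercfg -aliases" on the machine): the lid action can be set directly per power scheme, bypassing whatever is greying out the Power Options page. The value 0 means "do nothing".

        powercfg -setacvalueindex SCHEME_CURRENT SUB_BUTTONS LIDACTION 0
        powercfg -setdcvalueindex SCHEME_CURRENT SUB_BUTTONS LIDACTION 0
        powercfg -setactive SCHEME_CURRENT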


  • Why is MySQL unable to open hosts.allow/hosts.deny?

    - by HonoredMule
    I have a storage server running Nexenta (OpenSolaris kernel, Ubuntu userspace) with MySQL on top of a ZFS storage array, using innodb_file_per_table and ulimit -n set to 8K. mysqltuner.pl confirms the file limit and claims there are 169 files. The following command:

        pfiles `fuser -c / 2>/dev/null`

    indicates one mysqld process having 485 file/device descriptors (and they're almost all for files), so I don't know how reliable the tuning script is, but it is still way less than 8K, and this list also finds no other process which is close to its limit. The global total number of descriptors in use is around 1K. So what can cause mysqld to be constantly streaming the following errors?

        [date] [host] mysqld[pid]: warning: cannot open /etc/hosts.allow: Too many open files
        [date] [host] mysqld[pid]: warning: cannot open /etc/hosts.deny: Too many open files

    Everything appears to actually be operating fine, but the issue is constantly flooding the admin console and starts right away on a fresh boot (not only reproducible, but always from mysqld and always the hosts files, whose permissions are the default -rw-r--r-- 1 root root). I could, of course, suppress it from the admin console, but I'd rather get to the bottom of it and still allow mysqld warnings/errors to reach the admin console.

    EDIT: Not only is the actual descriptor count well within sane limits, the issue also persists (appearing immediately) even with the file limit raised to 65535, and always only for hosts.allow/deny.
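
    Since pfiles already works on this box, the rest of the Solaris proc tools are presumably available too. A hedged check: compare the limit the running mysqld actually inherited with the shell's ulimit, because a daemon started by an init/SMF script may never see the login shell's settings.

        plimit `pgrep -x mysqld`    # look at the nofiles(descriptors) line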


  • powershell vs GPO for installation, configuration, maintenance

    - by user52874
    My question is about using powershell scripts to install, configure, update and maintain Windows 7 Pro/Ent workstations in a 2008R2 domain, versus using GPO/ADMX/msi.

    Here's the situation: because of a comedy of cumulative corporate bumpfuggery, we suddenly found ourselves having to design, configure and deploy a full Windows Server 2008R2 and Windows 7 Pro/Enterprise environment on very short notice and a tight delivery schedule. Of course, I'm not a Windows expert by any means, and we're so understaffed that our buzzword bingo includes 'automate', 'one-button' and 'it needs to Just Work'. (FWIW, I started with DEC, then moved on to Solaris and Cisco, and now Linux of various flavors with a smattering of BSD. I use Windows for email and to fill out forms.) So we decided to bring in a contractor to do this for us, and they met the deadline. The system is up and mostly usable, and this is good; we would not have been able to do it ourselves. But it's the 'mostly' part that is proving to be the PIMA now, and I'm having to learn Microsoft stuff anyway until/if we can get a new contract with these guys for ongoing operations.

    Here's my question. The contractor used powershell almost exclusively for deployment, configuration and updating. My intensive reading over the last week leads me to think that generally accepted practice for deploying, configuring and updating Microsoft stuff uses elements of GPOs and ADMX templates, along with maybe some third-party tools like PolicyPak. Are there solid reasons, which I've not found yet, that powershell scripts would be preferred over the GPO methods? I'm going to discuss this with the contractor lead when he gets back from his vacation, and he'll be straight with me (nor do I think they set us up). But I can also see this might be a religious issue, so I would still like some background on it. Thoughts? Or weblinks? Thanks!


  • How to set an executable white list?

    - by izabera
    Under Linux, is it possible to set a white-list of executables for a certain group of users? I need them to be unable to use, for example, make, gcc and executables on removable disks. How can this be done?

    Edit: let me explain better. I'm dealing with a high-school IT system: young geeks who (during lessons) want to play, surf the net, and damage those computers however they can. The major step towards fixing this was to remove the system they're familiar with and install Ubuntu on all the computers. This actually works quite well, but recent events proved that it is not enough. I want to allow them to execute certain safe programs, like OpenOffice, and to deny any other program, whether it is preinstalled software, something they carry on USB drives, a downloaded program or a script they write on site. It's possible to remove the 'x' permission from any file on the PC, but of course that would be impractical, and furthermore they would still be able to run anything they download. I thought the best solution would be a white-list of safe programs, denying anything else, but I don't really know how to do it. Any idea is helpful.
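
    Not a full white-list, but a hedged sketch of two common building blocks (the device, path, compiler version and group name are all examples): mount every user-writable or removable filesystem noexec so binaries brought in from outside won't run directly, and use dpkg-statoverride to strip execute permission on developer tools for everyone outside a trusted group.

        # /etc/fstab: user-writable and removable mounts get noexec
        /dev/sda3   /home   ext4   defaults,noexec,nosuid,nodev   0  2
        # make gcc runnable only by root and the 'staff' group:
        dpkg-statoverride --update --add root staff 0750 /usr/bin/gcc-4.6

    Note that noexec stops direct execution but not invocations like "bash script.sh", so a real white-list needs something stronger on top, such as a confining security module.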


  • Windows 7 explorer crashing trying to read external hard disk

    - by Mario De Schaepmeester
    I have a 1TB Western Digital hard drive which is almost full, and the last time I tried to plug it into my laptop I got a Windows dialog saying "this hard drive needs to be formatted". I did not panic, because I have experienced things like this before and I know it's often solved by simply re-inserting the drive. Now, however, whenever I plug it in and try to browse it in Explorer by going to "Computer", the Explorer process crashes after a while. I simply close Explorer, since it takes ages trying to read the disk and nothing happens.

    After searching on the internet, the best thing to do seemed to be a chkdsk. I tried it via Properties in Explorer (which also took a good 5 minutes to open up); that locks up as well, and after waiting a couple of minutes it says there's no access to the disk, so a chkdsk is not possible...

    I want to make clear that I always use safe removal before pulling out the USB cable. Last time, however, safe removal just would not work, and when trying to shut down Windows the logoff screen just would not disappear (I waited at least 10 minutes or so), so I powered off the PC by force. This may be the cause of the problems, but the disk was still recognised immediately after that.

    I really don't want to format this thing, because it contains C: clones of 3 computers and a lot of other stuff that I don't want to re-copy. What would be the best course of action?

    Update: I got chkdsk working via the command line, using the /F and /R options. I'm already getting a bunch of lines saying "file record segment X is unreadable", or whatever it is in English; my OS is Dutch. It looks bad... Will chkdsk repair these errors?


  • Windows VPN client connect on different port

    - by John Gardeniers
    Scenario: two Windows Server 2003 machines running RRAS VPNs. The firewall port-forwards 1723 to one of those machines for normal remote access. I'd like to find a way to connect to the second machine as well; not because I need to, but because it's the sort of thing I reckon should be possible but can't figure out how to do. Is it possible to have the Windows PPTP VPN client (on XP in this instance) connect on a port other than 1723? If so, I can simply forward another port to the second server. I've done a fair bit of Googling over the last few days and have only found others asking the same question, but no answers. I have of course tried adding a port number to the host name or IP in the connection box, in various formats, but to no avail. While this might be possible with a third-party client, I'm really only interested in whether or not it can be done with the Windows built-in client, and if so, how. Perhaps there's a registry hack I'm not aware of?

