Search Results

Search found 10463 results on 419 pages for 'required'.

Page 172/419

  • Dell Multi-Monitor Hub: true DisplayPort splitting?

    - by thepurplepixel
    In my search for a new display, I came across the Dell Multi-Monitor Hub MMH11, which seemed to be an alternative to my search for daisy-chainable DisplayPort displays. However, before I cave and spend $179 on this device, I am wondering if this will be similar to other splitting devices where it appears to the computer as one big monitor and the device does the splitting (which I don't want). Or, does this use the packet-based nature of DisplayPort to present two/three separate displays to the computer? Also, would this device work on my MacBook Pro? (I know the Dell site says it's for Windows, but it also says that no driver installation is required. I'd assume since the MBP supports DP 1.2 it would work, but it's better to ask). Thanks!

    Read the article

  • memcached install issues with lib event on server

    - by albert N
    I've installed libevent on my server under root/data/ and I'm about to install memcached with ./configure --with-lib-event=/data/; make; make install. However, after running for a bit I get this error saying I'm pointing to the wrong directory for libevent:

        checking for libevent directory... configure: error: libevent is required. You can get it from http://www.monkey.org/~provos/libevent/
        If it's already installed, specify its path using --with-libevent=/dir/
        make: *** No targets specified and no makefile found. Stop.
        make: *** No rule to make target `install'. Stop.

    Any suggestions? I am not experienced with the CLI, so any help is appreciated. Thanks!
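
    A minimal sketch of the usual fix, assuming the error comes from the misspelled flag and from pointing at a directory libevent was not actually installed into (all paths below are placeholders, not taken from the question):

        # build libevent into a known prefix first (placeholder path)
        cd libevent-*/ && ./configure --prefix=/data/libevent && make && make install

        # then point memcached at that prefix; note the flag is --with-libevent, not --with-lib-event
        cd ../memcached-*/ && ./configure --with-libevent=/data/libevent && make && make install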

    Read the article

  • Umbraco on Windows 7 64-bit

    - by HeavyWave
    I'm trying to install Umbraco CMS on Windows 7 64-bit and I get the following exception: [HttpException (0x80004005): Could not load file or assembly 'ImageManipulation, Version=1.0.2105.41209, Culture=neutral, PublicKeyToken=null' or one of its dependencies. Failed to grant minimum permission required.] The application pool's trust mode is set to 'Full', and all the user permissions are the same as on other sites hosted on the same machine. I went through all the relevant topics on Umbraco's forum, but all the advice is about the trust level. How do I fix this?

    Read the article

  • Creating a file server - How can I use a large VHD file in Hyper-V? (700GB)

    - by barfoon
    Hey everyone, after a few discussions (here, here, and here), I am still unable to create a simple VM that will be used as a file server hosted on my Hyper-V box. I have created a fixed 700GB SCSI drive (.vhd file), since I've learned an IDE drive of this size is not possible. Not to sound too cynical, but it's blown me away how much trouble it's been to create a large amount of space and start using it. What is the best way to create a file server with a drive of this size hosted on Hyper-V Server 2008, and how can I get it going? Details on the OS, drivers, integration tools, etc. (anything you feel is required) would be greatly appreciated. Extra information: I am using the stand-alone version of Hyper-V Server, not Windows Server 2008. I have tried loading the Linux Integration Tools (linked in the comments of the last link above) onto a SUSE 11 VM, but the installation fails and the machine cannot see the VHD at all. Thanks very much,

    Read the article

  • Adjusting color on one monitor in Nvidia Surround

    - by Chris Stauffer
    I'm currently running three 2560x1440 screens using Nvidia Surround. Two monitors are Yamakasi Catleaps (cheap Korean jobs) and the third is the Achievia Shimian (also Korean). The Catleaps have great color reproduction; the Shimian, however, is heavily blue-tinted. With normal monitors this correction would take minimal effort, but these Korean monitors have no hardware controls to do it. For those who are unfamiliar, Nvidia Surround basically takes all three monitors and makes one big "monitor" out of them (like Xinerama for GNU/Linux folk), at a resolution of 7680x1440 in my case. Adjusting the color profile in the Nvidia control panel therefore changes the settings for ALL of the monitors simultaneously. So I am looking for software that can adjust only the Shimian (perhaps by selecting just the pixels that monitor covers). Does anyone know of such a program?

    Read the article

  • AVG Installer Error

    - by the curious one
    My computer is running Windows XP Home Edition. The AVG version I am using is expiring soon, so a pop-up asked me to install AVG Anti-Virus Free Edition 2011. When the download was about to complete, I was asked to reboot the computer, and I clicked "OK". After my computer restarted, a pop-up appeared saying "AVG Installer - Error: A system restart is required in order to continue with the installation. Please restart your system and try again." Since the only option was "OK", I clicked it and restarted my computer. After restarting, the same pop-up appeared again. I cannot install AVG Anti-Virus Free Edition 2011 because the pop-up keeps appearing whether I run the installer or try to uninstall. Can anyone help me? btw my browser is int

    Read the article

  • How can Nagios handle non-threshold based plugins?

    - by FliesLikeABrick
    I am writing a Nagios plugin to monitor trends in a certain storage resource's utilization (e.g. gradual increases are fine, but an instantaneous/sudden increase or decrease in resource usage may indicate a problem). For what it's worth, it reviews the last N entries in an RRD file generated by a custom Cacti data source/template. What is the "right" way to handle the Nagios notification config/implementation for this? The problem is that the plugin would exit as warning/critical for one polling period, but in the next it would be fine (or 3 polling periods later, if I look at 3 polling periods' worth of data). I guess the question is: should I write it in such a way that it will alert for X polling periods, or should I find a way to write it such that manual intervention is required for it to clear (such as logging into the monitoring server or hitting a URL to run a script that submits a passive result)? Your input is appreciated, and if you have any tips for how to implement the latter I'm open to them (I can think of a few ways it could possibly be implemented).
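
    If the first option (alert only once the breach has persisted for X polling periods) is the route taken, one low-tech way is to wrap the real check and keep a tiny state file counting consecutive bad polls. A rough bash sketch; the wrapper idea, paths, and threshold are assumptions, not part of the existing setup:

        #!/bin/bash
        # usage: check_trend_wrapper.sh <real_check_command> [args...]
        STATE_FILE=/var/tmp/storage_trend.state   # consecutive-failure counter (placeholder path)
        REQUIRED=3                                 # bad polls in a row before the alert passes through

        OUTPUT=$("$@")                             # run the real plugin
        RC=$?

        COUNT=$(cat "$STATE_FILE" 2>/dev/null || echo 0)
        if [ "$RC" -ne 0 ]; then
            COUNT=$((COUNT + 1))
        else
            COUNT=0
        fi
        echo "$COUNT" > "$STATE_FILE"

        if [ "$COUNT" -ge "$REQUIRED" ]; then
            echo "$OUTPUT"
            exit "$RC"                             # breach persisted long enough: report the real status
        fi
        echo "OK: breach not persistent yet ($COUNT/$REQUIRED polls) - $OUTPUT"
        exit 0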

    Read the article

  • How can I shut down all my Virtual Machines when my UPS kicks in?

    - by Tim
    I have a Dell T610 running ESXi 4, an APC Smart-UPS 1000VA, and a local "console" machine running Vista and the vSphere 4 Essentials pack. A dedicated management network is in place between the T610 and the Vista machine. We have 4 VMs: SBS 2003, Server 2003 running Terminal Services, and two XP machines. Ideally, when the UPS is forced to use battery power (for a set number of minutes), I would like to gracefully shut down all the VMs, then the ESXi host, then the console machine. The latter two are not strictly a priority, but the VMs within ESXi are. Google turned up lots of deprecated scripts that used to work on ESXi 3.x or similar, but I am unable to find what replaced them. What do I need to be able to do this? I have PowerChute Express as supplied with the UPS, but would be willing to pay for software if required.
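
    For the guest-shutdown part specifically, one approach that does not rely on the old 3.x scripts is to have the console machine trigger a script on the ESXi host when the UPS event fires (for example over SSH with Tech Support Mode enabled, which is an assumption about this setup) and walk the VM list with vim-cmd. A rough sketch; graceful shutdown also assumes VMware Tools / integration components are installed in each guest:

        # run on the ESXi host when the UPS goes on battery
        for vmid in $(vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}'); do
            state=$(vim-cmd vmsvc/power.getstate "$vmid" | tail -1)
            if [ "$state" = "Powered on" ]; then
                vim-cmd vmsvc/power.shutdown "$vmid"   # graceful guest OS shutdown via VMware Tools
            fi
        done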

    Read the article

  • Shared storage solution for our sql server backups

    - by Gokhan
    We have 3 clustered SQL Servers with 5+ multi-terabyte databases, and their backup files (compressed using Quest LiteSpeed) are hitting over 600 GB each. We are required to keep at least one or two weeks of weekly full backups, then 6 days of differential backups, and one or two weeks' worth of log backups locally. We are currently limited to 2 TB volumes from our SAN team; we can have multiple volumes, but they are expensive ($200 per raw TB per month), and having to deal with many backup volumes instead of a single big volume is difficult. I think it would be good if we could have shared network storage of 20 TB+ (RAID 10 or so) for all our servers to keep the backups on; another department would then copy them to tape from the network storage and delete files according to the retention period. If this box came with a built-in operating system (even a Unix-based complete file storage system), that would be fine. What do you think, does this make sense, and is there any manufacturer that sells a storage product like this that works in a clustered environment? Thank you

    Read the article

  • Tunneling over HTTP

    - by Morgan
    Hello, I have a network at work that is locked behind a firewall, and Internet access is available only through a proxy server. At work, I can connect to databases that are distributed across the network. At home, however, I cannot connect to the proxy server or the databases. How can this be done? I can access my workstation via LogMeIn, so I can install anything on it. I thought of installing some kind of tunneling mechanism on my workstation; then, at home, I could connect to this mechanism, which would in turn make the required connections. Essentially, what I'd like to do can be represented by the following diagram: Home = Workstation = Database. For example, whenever I connect to, say, 10.140.0.1:1234 at home, this would be redirected to 10.140.0.1:1234 as reached from my workstation, because 10.140.0.1:1234 is only available through the corporate network. NOTE: I'm using Windows XP.
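
    Since inbound connections from home are blocked, the usual trick is to have the workstation open the tunnel outwards to a machine you control and ride a reverse port forward back in. A sketch assuming an SSH client on the workstation (installed via LogMeIn) and an SSH server at home it can reach; if outbound SSH is itself blocked, the connection may need to be wrapped through the corporate HTTP proxy (PuTTY's proxy settings or corkscrew can do this):

        # run on the workstation: publish 10.140.0.1:1234 on the home machine's port 1234 (placeholder hosts/users)
        ssh -N -R 1234:10.140.0.1:1234 homeuser@home.example.com

        # then, at home, point the database client at localhost:1234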

    Read the article

  • Multiple VMs for Tomcat cluster vs Multiple Tomcat instances on one physical box

    - by Greymeister
    I'm working on a project that will go into production using a cluster of Apache Tomcat instances, and I'm looking for the best hardware/OS solution; VMs have come up as one option. I have run ESXi/ESX instances before for development and testing, but for a hosting environment I'm curious whether having multiple VMs is actually worse than just configuring one server to host multiple instances of Tomcat. These are my guesses:

    Pros for VMware:
    - Easier maintenance/backup for individual VMs (VMware makes this easy)
    - Can remote login to individual VMs without having to give host access (security?)
    - Easier way to re-purpose the machine for OS/hardware changes

    Pros for running on one physical machine:
    - Overhead of only one OS (also no VMware footprint)
    - Apply OS/security updates once
    - One less administrative layer (no VM expertise required)

    I'm curious if anyone has any other ideas about what the benefits would be for either option.

    Read the article

  • How Is A SAN Storage Device Managed? [closed]

    - by slickboy
    Apologies if this is a stupid question, but my only previous experience with SAN technology is using a SAN virtualization tool (StarWind iSCSI SAN Free). This tool comes with a management interface that allows iSCSI targets to be configured on the simulated SAN and then accessed. My question is basically: how is a physical SAN device managed? Does an operating system have to be installed on the SAN device, which is then managed via software? Or can it be managed remotely via an application with no OS on the device? Thanks for any help.

    Read the article

  • Upload a directory recursively to an FTP server

    - by Nicolas Raoul
    I am writing a Linux shell script to copy a local directory to a remote server (removing any existing files). Local server: ftp and lftp commands are available, no ncftp or any graphical tools. Remote server: only accessible via FTP. No rsync nor SSH nor FXP. I am thinking about listing local and remote files to generate a lftp script and then run it. Is there a better way? Note: Uploading only modified files would be a plus, but not required
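
    Rather than generating a script from directory listings, lftp's mirror command in reverse mode can handle the whole upload in one shot; a sketch with placeholder credentials and paths:

        # -R uploads the local tree to the server; --delete removes remote files that no longer exist locally;
        # by default mirror skips files whose size and timestamp look unchanged, so mostly only modified files are re-sent
        lftp -u user,password -e "mirror -R --delete /local/dir /remote/dir; quit" ftp.example.com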

    Read the article

  • Prevent machine in a LAN from receiving a remote shutdown

    - by WebDevHobo
    I'm probably just overreacting, but I recently came across a LAN scanner that showed a "remote shutdown" option for all computers found on the scanned network. Now, how exactly does this work? If I send such a message, will the shutdown happen no matter what, or does it require the username/password of the user of that other computer? Mostly I'm wondering: can this be done to me, and how do I prevent it? EDIT: what's more, I had the scanner check for shares. The result being this: double-clicking the links opens them in Explorer, basically meaning my entire C and F drives (the only 2 HDs I have) are completely exposed to anyone on my LAN. Or can I open these only because it's my own machine?

    Read the article

  • vsftpd with pam_winbind.so

    - by David
    I'm trying to set up vsftpd to use logins from our domain. I want the FTP users to be able to log in with their Active Directory username/password and have full access to /media/storage/ftp/username. I set up PPTP using winbind and it is working fine, so I believe the issue is with vsftpd and PAM. The FTP server runs and gives 530 for the login. I turned on debug for the PAM module, but I see nothing in syslog. vsftpd only logs a failed login in its own log.

    /etc/pam.d/vsftpd:

        auth required pam_winbind.so debug

    /etc/vsftpd.conf:

        listen=YES
        listen_ipv6=NO
        connect_from_port_20=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        xferlog_enable=YES
        idle_session_timeout=600
        data_connection_timeout=120
        nopriv_user=ftp
        ftpd_banner=Welcome to Scantiva! Authorized access only!
        local_umask=022
        local_root=/media/storage/ftp/$USER
        user_sub_token=$USER
        chroot_local_user=YES
        secure_chroot_dir=/var/run/vsftpd/empty
        pam_service_name=vsftpd
        guest_enable=YES
        guest_username=ftp
        ssl_enable=YES
        allow_anon_ssl=NO
        force_local_data_ssl=NO
        force_local_logins_ssl=NO
        ssl_tlsv1=YES
        ssl_sslv2=YES
        ssl_sslv3=YES
        rsa_cert_file=/etc/ssl/private/vsftpd.pem
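
    One thing worth checking (an assumption based on the single-line PAM file above, not a confirmed fix): vsftpd runs the PAM account phase as well as auth, so a winbind-only stack usually needs an account entry too. A minimal sketch of /etc/pam.d/vsftpd for domain logins:

        auth     required  pam_winbind.so  debug
        account  required  pam_winbind.so  debug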

    Read the article

  • What has changed to make it possible to develop USB 3.0?

    - by RoboShop
    I know the transfer speeds are vastly different, but what I don't understand is why they are faster. And why couldn't they have implemented USB 3.0 when they released 1.0? What technical breakthrough was required to get transfer speeds that fast? Was it cost? The capabilities of computers, i.e. they couldn't read the data fast enough (although USB is still well below hard drive transfer speeds)? An engineering breakthrough? Did they find some new material that could transfer at a faster rate? Was it in the cable itself? In the hardware?

    Read the article

  • COM+ automatic collection of a process dump file and process termination at high call time

    - by immi
    Hi, I want to configure my machine for automatic collection of a process dump file and process termination as described in http://support.microsoft.com/kb/910904. However, after setting the registry values according to the KB article, I am not getting the required behaviour: only a warning is being logged when the call time goes high (which is the default behaviour). I am running Windows Server 2003 with SP2. Is there anything I am missing, for example restarting the COM+ runtime, etc.? Regards

    Read the article

  • Securing a persistent reverse SSH connection for management

    - by bVector
    I am deploying demo Ubuntu 10.04 LTS servers in environments I do not control, and I would like an easy and secure way to administer these machines without having the destination firewall forward port 22 for SSH access. I've found a few guides to do this with a reverse port forward (e.g. the howtoforge reverse SSH tunneling guide), but I'm concerned about the security of the stored SSH credentials required for the tunnel to be opened automatically. If the machine is compromised (my primary concern is that physical access to the machine is out of my control), how can I stop someone from using the stored credentials to poke around in the reverse SSH tunnel's target machine? Is it possible to secure this setup, or would you suggest an alternate method?
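
    One way to limit what stolen tunnel credentials are worth is to make the key usable only for the tunnel itself: a dedicated unprivileged account on the management server plus a restricted authorized_keys entry, with the demo box connecting via ssh -N so no remote command is ever requested. A sketch (the option names are standard OpenSSH; the account and key comment are assumptions, and it should be tested against the exact tunnel setup):

        # on the management server, in the dedicated tunnel account's ~/.ssh/authorized_keys
        no-pty,no-X11-forwarding,no-agent-forwarding,command="/bin/false" ssh-rsa AAAA...demo-key... tunnel@demo-box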

    Read the article

  • Active Directory: trouble adding new DC

    - by ethrbunny
    I have a domain with 3 DCs. One is starting to fail, so I brought up a new one. All are running Windows 2003. Problem: there appear to be replication issues between the 4 machines, but I can't figure out what's causing them. All are registered in DNS as identically as I can make them. How do I know there is a problem? Nagios is telling me that the other 3 DCs are having KCCEvent errors and the new machine is reporting "failed connectivity" errors. Running dcdiag on the new machine reports that the host could not be resolved to an IP address. This seems crazy, as I log into it using the DNS name, and I can ping it from the other three machines using this DNS name as well. repadmin /showreps from the new machine says it's seeing the other 3 machines; doing the same from one of the older machines doesn't show the new machine. I've tried netdiag /repair numerous times with no luck. There are no firewalls running on any of the machines. If I look at the domain info via MMC (on the new machine), all the information appears current: users, computers, DCs, it's all there. I'm puzzled as to what step(s) I've missed in adding this new machine. Suggestions?

    EDIT: dcdiag from the non-working DC:

        C:\Documents and Settings\Administrator.BME>dcdiag
        Domain Controller Diagnosis
        Performing initial setup: Done gathering initial info.
        Doing initial required tests
          Testing server: Default-First-Site-Name\YELLOW
            Starting test: Connectivity
              The host 312ce6ea-7909-4e15-aff6-45c3d1d9a0d9._msdcs.server.edu could not be resolved to an IP address. Check the DNS server, DHCP, server name, etc
              Although the Guid DNS name (312ce6ea-7909-4e15-aff6-45c3d1d9a0d9._msdcs.server.edu) couldn't be resolved, the server name (yellow.server.edu) resolved to the IP address (10.127.24.79) and was pingable. Check that the IP address is registered correctly with the DNS server.
              ......................... YELLOW failed test Connectivity
        Doing primary tests
          Testing server: Default-First-Site-Name\YELLOW
            Skipping all tests, because server YELLOW is not responding to directory service requests
          Running partition tests on : Schema
            Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation
            Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom
          Running partition tests on : Configuration
            Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation
            Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom
          Running partition tests on : bme
            Starting test: CrossRefValidation ......................... bme passed test CrossRefValidation
            Starting test: CheckSDRefDom ......................... bme passed test CheckSDRefDom
          Running enterprise tests on : server.edu
            Starting test: Intersite ......................... server.edu passed test Intersite
            Starting test: FsmoCheck ......................... server.edu passed test FsmoCheck

    dcdiag from a working DC:

        P:\>dcdiag
        Domain Controller Diagnosis
        Performing initial setup: Done gathering initial info.
        Doing initial required tests
          Testing server: Default-First-Site-Name\AD1
            Starting test: Connectivity ......................... AD1 passed test Connectivity
        Doing primary tests
          Testing server: Default-First-Site-Name\AD1
            Starting test: Replications ......................... AD1 passed test Replications
            Starting test: NCSecDesc ......................... AD1 passed test NCSecDesc
            Starting test: NetLogons ......................... AD1 passed test NetLogons
            Starting test: Advertising ......................... AD1 passed test Advertising
            Starting test: KnowsOfRoleHolders ......................... AD1 passed test KnowsOfRoleHolders
            Starting test: RidManager ......................... AD1 passed test RidManager
            Starting test: MachineAccount ......................... AD1 passed test MachineAccount
            Starting test: Services ......................... AD1 passed test Services
            Starting test: ObjectsReplicated ......................... AD1 passed test ObjectsReplicated
            Starting test: frssysvol ......................... AD1 passed test frssysvol
            Starting test: frsevent ......................... AD1 passed test frsevent
            Starting test: kccevent ......................... AD1 passed test kccevent
            Starting test: systemlog ......................... AD1 passed test systemlog
            Starting test: VerifyReferences ......................... AD1 passed test VerifyReferences
          Running partition tests on : Schema
            Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation
            Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom
          Running partition tests on : Configuration
            Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation
            Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom
          Running partition tests on : bme
            Starting test: CrossRefValidation ......................... bme passed test CrossRefValidation
            Starting test: CheckSDRefDom ......................... bme passed test CheckSDRefDom
          Running enterprise tests on : server.edu
            Starting test: Intersite ......................... server.edu passed test Intersite
            Starting test: FsmoCheck ......................... server.edu passed test FsmoCheck
        P:\>

    Read the article

  • WSS "Cannot connect to the configuration database"

    - by Tim
    I have 64-bit WSS 3.0 installed on a 64-bit Windows 2003 Server. After installing WSS 3.0, I switched IIS to run in 32-bit emulation mode, as we have some applications that require this. I'm getting a "Cannot connect to the configuration database" error when trying to reach the Central Admin page, and wondered if: a) the setup I have won't work and I'm wasting my time trying to figure this out, or b) anyone has any suggestions for resolving the database connection issue? The identity of the app pool that WSS runs under has all the required permissions in SQL, as far as I can tell. Any help would be appreciated!

    Read the article

  • Can Solaris RBAC roles be ported to Linux using SELinux only?

    - by Jimmy
    We are migrating an application from Solaris to Linux, and the main user is allowed, through RBAC roles, to run a few system commands like svccfg/svcadm (chkconfig on Red Hat). Is it possible, using only SELinux (no sudo), to allow a normal user to run chkconfig off/on (basically give it the ability to add and remove services)? My approach was to try to create an SELinux user with a corresponding SELinux role that manages the app's domain/type and is allowed to transition to all other domains required to run chkconfig, tcpdump or any other system utility usually restricted to root access only. All my attempts so far have failed, so my second question would be: where could I find good documentation that applies to this specific problem?

    Read the article

  • Mount encrypted hfs in ubuntu

    - by pagid
    I'm trying to mount an encrypted HFS+ partition in Ubuntu. An older post described quite well how to do it, but lacks information on how to handle encrypted partitions. What I have found so far is:

        # install required packages
        sudo apt-get install hfsprogs hfsutils hfsplus loop-aes-utils

        # try to mount it
        mount -t hfsplus -o encryption=aes-256 /dev/xyz /mount/xyz

    But once I run this I get the following error: "Error: Password must be at least 20 characters." So I tried to type it in twice, but that results in: "ioctl: LOOP_SET_STATUS: Invalid argument, requested cipher or key (256 bits) not supported by kernel". Any suggestions? Thx. Edit: One thing I'm not sure about is whether I'm using the right password. My assumption is that it is my default one for these situations, but I'm not sure whether Mac OS X chose another password (internally) for that.

    Read the article

  • install filezilla error, Depends: libatk1.0-0 (>= 1.29.3) but 1.28.0-0ubuntu1 is to be installed

    - by solomongaby
    I am trying to install FileZilla from this repo: https://launchpad.net/~yofel/+archive/ppa. After sudo apt-get update I tried to install it, but I get this error:

        Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
          filezilla: Depends: libatk1.0-0 (>= 1.29.3) but 1.28.0-0ubuntu1 is to be installed

    Do you have any idea what is happening?
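
    The dependency line suggests the PPA build targets a newer Ubuntu release than the one on this machine (it wants libatk1.0-0 >= 1.29.3 while the archive only offers 1.28.0). A quick way to check that theory, as a sketch:

        lsb_release -sc                 # which Ubuntu release this machine is running
        apt-cache policy libatk1.0-0    # which libatk1.0-0 versions are actually available
        apt-cache policy filezilla      # which repository the candidate filezilla package comes from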

    Read the article

  • Warning in Apache log: Cannot get media type from 'x-mapp-php5'

    - by IronGoofy
    I have no idea what is causing this issue, but it seems to be related to the displayed file (just a simple index.php that prints phpinfo) being in an aliased directory. Any suggestions on what I can do to avoid the warning? Here's an excerpt from my httpd.conf:

        <Directory "<dir with broken php>">
            Options Indexes FollowSymLinks ExecCGI Includes
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

        Alias /smartersoftware/ "<broken dir>"

        <FilesMatch \.php$>
            SetHandler application/x-httpd-php
        </FilesMatch>

    The last three lines were required to make PHP work at all (which I found a bit strange, and it may or may not be related to my problem). Adding AddType application/x-mapp-php5 .php didn't change anything.
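
    For what it's worth, x-mapp-php5 is the PHP handler name used on 1&1-style shared hosting, so the warning usually means mod_mime is seeing an AddHandler/AddType x-mapp-php5 line somewhere, commonly an .htaccess that the AllowOverride All above lets take effect in the aliased tree. An untested sketch of overriding any inherited mapping for that directory (the path is a placeholder):

        <Directory "/path/to/aliased/dir">
            # drop any inherited .php type/handler mapping before re-adding the local one
            RemoveType .php
            RemoveHandler .php
            <FilesMatch \.php$>
                SetHandler application/x-httpd-php
            </FilesMatch>
        </Directory>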

    Read the article

  • SSL client auth in nginx with multiple server section

    - by Bastien974
    I want to implement ssl_verify_client in nginx. This works perfectly when I have only one server section listening on 443. In my case I have multiple, all listening on 443 but with different server_name values. For one particular server (proxy.mydomain.com) I'm adding the SSL client verification, but when I test connectivity with

        openssl s_client -connect proxy.mydomain.com:443 -cert xxx.crt -key xxx.key

    and then do

        GET / HTTP/1.1
        host: proxy.mydomain.com

    it's not working: 400 No required SSL certificate was sent. I think nginx is not receiving the proper server name and is directing the request to the first server listening on 443. I tried listening on another port and it worked right away. What's the issue, and how can I fix it?
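
    One likely culprit to rule out: by default openssl s_client does not send SNI, so nginx cannot pick the proxy.mydomain.com server block during the TLS handshake and falls back to the default server on 443 (the Host header arrives too late to matter). A sketch of the same test with SNI enabled:

        # -servername sends the SNI extension so nginx can select the right server block
        openssl s_client -connect proxy.mydomain.com:443 -servername proxy.mydomain.com -cert xxx.crt -key xxx.key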

    Read the article
