Search Results

Search found 20099 results on 804 pages for 'virtual host'.


  • APIPA IP address in Server 2003 DNS for (same as parent folder) record - can anyone suggest why this is?

    - by dasko
    I have a Server 2003 domain controller with Active Directory-integrated DNS installed. Under the forward lookup zone for domain_name.local I see an APIPA IP address set for (same as parent folder), with an IP number of 169.x.x.x. It looks like: (same as parent folder) Host A 169.x.x.x (APIPA subnet range). The problem is that, according to other forums I have read, this is due to dual NICs where one of them is not getting a dynamic or static IP address - but I only have one NIC in this server. Where could this record be coming from, and could it mess up other settings or prevent the DC from being contacted? I am just wondering what symptoms could arise from the record being there. Any help would be greatly appreciated, thanks.
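
    A minimal diagnostic sketch using the dnscmd tool from the Server 2003 support tools (the zone name is taken from the question; the 169.254.x.x value is a placeholder - confirm the exact record data before deleting anything):

        REM list the A records registered at the zone root
        dnscmd /EnumRecords domain_name.local @ /Type A

        REM delete only the stray APIPA record (substitute the real 169.254.x.x address)
        dnscmd /RecordDelete domain_name.local @ A 169.254.x.x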

    Read the article

  • Why can't Windows see mmcblk0p3? [closed]

    - by jacknad
    The partition is created on the embedded Linux target like this:

        # n - new
        # p - partition
        # 3 - partition 3
        # 66 - starting cylinder
        # <blank> - maximum size for the ending cylinder
        # t - set file system type
        # 3 - partition 3
        # c - set to Windows VFAT
        # w - write partition table and exit
        echo -e "n\np\n3\n66\n\nt\n3\nc\nw" | fdisk /dev/mmcblk0

    The file system is then formatted on the embedded Linux target as MS-DOS like this:

        # -n volume-name
        # -F FAT-size
        mkfs.vfat -n DB -F 32 /dev/mmcblk0p3

    A Linux host can mount and access files in mmcblk0p3 without issue. Why can't Windows?

    Edit: Although the default number of FATs is 2, I tried adding -f 2 [number-of-FATs] (since this is actually being done by BusyBox on an embedded platform), but this didn't help. I understand the Linux MS-DOS file system does not support more than 2 FATs, but there are only 2 on this target (the boot partition is also FAT, and it is visible), along with an EXT3 (on p2) for the root file system.
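
    A quick sanity-check sketch to run on the Linux side before digging deeper (device names follow the question; fsck.vfat is part of dosfstools):

        # confirm the partition ID that Windows will see in the partition table
        fdisk -l /dev/mmcblk0

        # verify the FAT32 layout (number of FATs, boot signature, etc.) without modifying it
        fsck.vfat -v -n /dev/mmcblk0p3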

    Read the article

  • Limit HTTP VERBS on Apache2

    - by user72295
    I am trying to limit the use of certain HTTP verbs on my site. I entered the following into my VirtualHost config file, within the Directory element:

        <Limit GET POST HEAD>
            Allow from all
        </Limit>
        <Limit PUT DELETE OPTIONS>
            Deny from all
        </Limit>

    This seemed to work, but with unexpected results. I ran the following telnet/HTTP commands before and after this change:

        open server 80
        OPTIONS server/abs_path HTTP/1.1
        User-Agent: Telnet/1.0
        Host: server

    Before the change I received a successful response with the Allow header. After the change, however, I was expecting a 405 'Method Not Allowed' response, but instead I received a 403 'Forbidden' response. What do I need to change in Apache to return the 405 HTTP response? Many thanks
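
    One hedged way to get the 405 (rather than the 403 that Deny produces) is to answer the unwanted methods explicitly with mod_rewrite, assuming that module is enabled - a sketch, not the only possible approach:

        RewriteEngine On
        # return 405 Method Not Allowed for the methods we refuse to serve
        RewriteCond %{REQUEST_METHOD} ^(PUT|DELETE|OPTIONS)$
        RewriteRule .* - [R=405,L]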

    Read the article

  • Is a matching entry in /etc/hosts required for hostname?

    - by JohnWoltman
    I was installing a Tomcat webapp that refused to work until I stumbled on someone else's issue with an unrelated product. The solution was to add the machine's name to /etc/hosts, to match the name returned by hostname. Is this required for general Linux networking to function correctly? My webapp is running in a virtual machine so that I can test it, and I don't normally bother with the /etc/hosts file on VMs - I just shook my fist and cursed Tomcat and the webapp's behavior. I read http://serverfault.com/questions/118823, but that doesn't say whether it's required or not.
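
    For reference, a minimal sketch of the kind of entry that usually satisfies software that needs to resolve the local hostname (the hostname below is a placeholder - it must match what the hostname command prints):

        # /etc/hosts
        127.0.0.1    localhost
        127.0.1.1    mywebapp-vm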

    Read the article

  • How can I make gitosis distinguish between two users with the same username

    - by bryan kennedy
    I have a gitosis system that seems to be working correctly, except for a common problem we run into where I can't distinguish permissions between two users who have the same username but different hosts. For example: [email protected]'s SSH key is in the key folder, and [email protected]'s SSH key is also in the key folder. These two jsmiths are two different people on two different computers. However, when I configure them in the gitosis.conf file with the usernames jsmith@computer or jsmith@machine, it seems like each user just gets the same permissions. Can gitosis not distinguish the full username (name and host)? If not, how do I deal with multiple users accessing our system with common usernames? Thanks for any help.
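
    As I understand it, gitosis identifies users by the key file name in keydir/, so a hedged sketch of keeping the two people separate might look like this (key names and repository names are made up for illustration):

        # keydir/ contains two distinctly named public keys:
        #   keydir/jsmith@desktop.pub
        #   keydir/jsmith@laptop.pub

        # gitosis.conf
        [group team-a]
        writable = project1
        members = jsmith@desktop

        [group team-b]
        writable = project2
        members = jsmith@laptop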

    Read the article

  • Other users on my Server?

    - by Jennifer Weinberg
    I'm buying a server from a person (whom I don't know very well) and I want to make sure that the previous owner no longer has any access. It's an Ubuntu virtual server and I have already received admin access (via shell). How can I find out whether there are still other accounts left that are able to access my server (e.g. with a still-existing shell account, FTP, or another type of user account)? And how can I delete them if such accounts exist? Best regards, Jennifer
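
    A hedged checklist sketch to run as root (the account name in the last two lines is an example only - double-check before deleting anything):

        # accounts that still have a usable login shell
        awk -F: '$7 !~ /(nologin|false)$/ {print $1, $7}' /etc/passwd

        # leftover SSH keys that would allow password-less logins
        ls -la /root/.ssh/authorized_keys /home/*/.ssh/authorized_keys 2>/dev/null

        # recent logins, to see who has actually been connecting
        last -a | head

        # lock, then remove, an account you do not recognise
        passwd -l olduser
        deluser --remove-home olduser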

    Read the article

  • Can I create an SSH user which can access only certain directory?

    - by RiMMER
    I have a Virtual Private Server which I can connect to over SSH with my root account, being able to execute any Linux command and access the whole disk, obviously. I would like to create another user account which would also be able to access this server over SSH, but only a certain directory, for example /var/www/example.com/. For example, imagine this user has a HUGE error.log file (500 MB) located at /var/www/example.com/logs/error.log. When accessing this file over FTP, the user has to download 500 MB just to view the last lines of the log, but I'd like him to be able to execute something like: tail error.log. Therefore I need him to be able to access the server over SSH, but I don't want to grant him access to all server areas. How can I do this?
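
    A minimal sketch of one way to do this with an ordinary account plus filesystem permissions (the user name is illustrative, and setfacl assumes ACL support on the filesystem; a full chroot via sshd_config's ChrootDirectory is the stricter alternative):

        # create the restricted user and point its home at the site directory
        adduser logviewer
        usermod -d /var/www/example.com logviewer

        # give that user read access to the logs, relying on normal permissions elsewhere
        setfacl -R -m u:logviewer:rX /var/www/example.com/logs

        # the user can then run, for example:
        #   ssh logviewer@server tail /var/www/example.com/logs/error.log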

    Read the article

  • Recover data from a corrupted VirtualBox vmdk file?

    - by Neth
    The power went out while I was doing a build on a VirtualBox machine. When I restarted, the vmdk for the disk the VM was using was corrupted, apparently irrecoverably. I have been able to grep the 66 GB vmdk file, and it finds strings from the code I was working on that hadn't gotten into Subversion yet (yeah, yeah, I know). But the strings are either in the shell history or look to be strings inside object files. Any ideas for finding/recovering the source code? If it helps, the VM was Linux, Fedora Core 10, on an ext3 filesystem. The host is Ubuntu 10.04 amd64 with an ext4 filesystem.
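
    A hedged carving sketch for pulling the bytes around a grep hit out of the image (file names, the search string, and the offset are placeholders):

        # print decimal byte offsets of a distinctive snippet from the lost source
        strings -t d disk.vmdk | grep 'unique_function_name'

        # carve a few MB around a promising offset for manual inspection
        OFFSET=123456789
        dd if=disk.vmdk of=chunk.bin bs=1M skip=$((OFFSET / 1048576)) count=8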

    Read the article

  • Connection timed out exception, why?

    - by Dheeraj Kumar Aggarwal
    I am developing an application which uses an embedded Tomcat 7 server and deploys a web application on it. My application accesses the embedded webapp through REST APIs, but my clients are getting Connection Timed Out exceptions, even though the port is not blocked. I never get this exception when I install the application on my local machine. Some points:

    - The IP address is used in the host name part (they are able to access this IP address on another port)
    - The port is not blocked
    - We are using the Apache HttpClient library to access the URL
    - The timeout interval does not seem to be the issue

    What are the possible reasons for this Connection Timed Out exception, and how can I simulate the problem on my local machine? Any pointers would be helpful.
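
    A hedged way to tell whether the TCP connection or the HTTP layer is timing out, run from an affected client machine (the IP, port, and path are placeholders):

        # does the TCP handshake complete at all?
        nc -vz 192.0.2.10 8080

        # does an HTTP request come back within a sane time?
        curl -v --connect-timeout 10 --max-time 30 http://192.0.2.10:8080/myapp/rest/ping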

    Read the article

  • Can connect to shared folder on Windows Server 2012, but access denied when accessing

    - by Cylindric
    I have a Windows Server 2012 (non-domain) with a folder that's shared out as TestShare. The share permissions give Everyone full access, and there is a local user TestUser that has full access to the actual folder. On GuestServer I can connect and/or map a drive to \\HostServer\TestShare, specifying the username and password for TestUser. (Screenshots of the NTFS permissions, the share permissions, and the Effective Access report were attached to the original post.) The problem is that when I try to access the folder, I get an "access denied" message. On the host server I can see the user connected to the share in the Sessions manager, so the password is correct and being recognised. If I use an incorrect password I don't get the "completed successfully" message, nor the open session. What else can be blocking access to the shared files, when the share seems to be set, the folder permissions seem to be set, and the connection seems to be okay? The network is recognised as "public", and the relevant firewall rules seem to be enabled - even disabling the firewall doesn't help.
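
    A hedged sketch of checks that separate share-level from NTFS-level denial (the drive letter and the host-side folder path are placeholders; share and user names follow the question):

        REM on the guest: map the share explicitly as TestUser and try to list it
        net use Z: \\HostServer\TestShare /user:HostServer\TestUser
        dir Z:\

        REM on the host: confirm the effective NTFS ACL on the shared folder itself
        icacls C:\Path\To\TestShare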

    Read the article

  • Hyper-V error starting VM after reboot

    - by mamu
    Hi, I rebooted a Hyper-V host running Server 2008 R2. After the reboot, one of my VMs is throwing the following error: 'The version does not support this version of file format'. Of all my VMs, this was the only one set to save state on shutdown. I tried deleting the saved state and starting it again - still the same error. I tried Inspect Disk as well as Edit Disk; both throw the same error when trying to open the disk. What could it be? Any way to resolve it?

    Read the article

  • Nginx & PHP-FPM, random 502s

    - by pestaa
        2010/09/19 14:52:07 [error] 1419#0: *10220 recv() failed (104: Connection reset by peer)
        while reading response header from upstream, client: [...], server: [...],
        request: "POST /[...] HTTP/1.1", upstream: "fastcgi://unix:/server/php-fpm.sock:",
        host: "[...]", referrer: "[...]"

    This is the error I'm receiving randomly. 95% of the time my setup works perfectly, but once in a while I get a 502 for 3-4 subsequent requests. I'm using a Unix socket between the server and the PHP process, as you can see, and I have also set up the FastCGI params (SCRIPT_FILENAME, etc.) correctly. What can I do to strengthen the connection between these services? Thank you very much in advance.
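
    The usual suspects are PHP-FPM workers dying or being recycled mid-request, so a hedged tuning sketch might look like this (values are illustrative, not recommendations; the fastcgi_* lines go in the nginx location that proxies PHP, the others in the FPM pool config):

        # nginx
        fastcgi_connect_timeout 60s;
        fastcgi_read_timeout    120s;

        ; php-fpm pool
        pm.max_children = 20
        pm.max_requests = 500
        request_terminate_timeout = 120s

    Checking the PHP-FPM log for child-exit messages around the same timestamps usually shows whether workers are actually crashing.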

    Read the article

  • Purpose of LAN Domain?

    - by Leonard Thieu
    What is the purpose of creating a domain name for your LAN? I'm using DD-WRT on my router and assigned local.moofz.com as the LAN domain. I set up Apache HTTP servers on two of the computers on my LAN to test it out. I could reach them at oneil.local.moofz.com and vala.local.moofz.com, but I found out that I could also reach them via their hostnames oneil and vala. If I can reach them through their host names, then what would be the purpose of having a domain name for my LAN?

    Read the article

  • Cropping a PDF File's Margin During Printing

    - by JavaMan
    I'm using the free Acrobat Reader to print out some PDF documents that have very large top/bottom/left/right margins. I want to remove the margins (which waste too much space and make the fonts too small). I used to use Acrobat (the paid version with editing features) to crop the source PDF manually, but since it is an old version it does not support the newer PDF format, and I don't want to upgrade for such a simple use. Is there any free way to crop/remove unwanted white margins from the printed PDF? I am thinking of printing the PDF files to a PDF printer like the Bullzip PDF Printer and enlarging the output manually so as to remove the white margins, but there does not seem to be such a feature in Bullzip PDF Printer. Is there any other virtual printer software that can be used for this purpose?
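
    If installing a TeX distribution is an option, a hedged alternative to a virtual printer is the pdfcrop utility that ships with TeX Live/MiKTeX (file names are placeholders):

        # trim the white margins, keeping a small 5 pt border
        pdfcrop --margins 5 input.pdf output-cropped.pdf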

    Read the article

  • VMware Workstation on Linux: Dropping core files in a shared folder...

    - by kclittle
    I'm using VMware 6.0.2 on a RHEL 4.6 host. The VMs are MontaVista CGE 5.0 (2.6.21 kernel). I'm trying to get applications running in the VMs to drop any core files on an HGFS volume, i.e. in a "shared folder". The core files get created as per the path and format given in /proc/sys/kernel/core_pattern, but they are always zero length. If I change the path to a local path (on a virtual disk in the VM), all is well. Any ideas what I have to do to get the core files written into a shared folder? Thanks for your help!
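
    A hedged workaround sketch, if writing cores straight to HGFS keeps producing empty files: have the kernel pipe the core to a small helper that writes locally and then copies to the shared folder (the pipe form of core_pattern needs a 2.6.19+ kernel, which 2.6.21 satisfies; all paths are illustrative). Save something like this as /usr/local/bin/core-to-hgfs.sh and make it executable:

        #!/bin/sh
        # called by the kernel as: core-to-hgfs.sh <program> <pid>; the core arrives on stdin
        f=/tmp/core.$1.$2
        cat > "$f"
        cp "$f" /mnt/hgfs/shared/

    Then point core_pattern at it inside the guest:

        echo '|/usr/local/bin/core-to-hgfs.sh %e %p' > /proc/sys/kernel/core_pattern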

    Read the article

  • VirtualBox FTP hangs on LIST command

    - by Tiddo
    Hi all, I have VirtualBox installed on a Windows 7 64-bit computer, with CentOS 5.5 as the guest OS. I want to be able to use FTP between them. I've installed vsftpd on the guest, and the guest uses a NAT connection with the host for internet access. So far I am able to connect to the guest with FTP (in FileZilla), but after the LIST command is executed nothing happens until the command times out. This happens in both active and passive mode. I have set pasv_min_port/pasv_max_port in the vsftpd.conf file, listing is enabled, and the ports are forwarded in VirtualBox. Also, ftp_data_port is set to 20. I also tried setting pasv_address, but I had to set it to 127.0.0.1, and then FileZilla gives me this:

        Command:  PASV
        Response: 500 OOPS: bad family
        Command:  PORT 127,0,0,1,139,204
        Response: 500 OOPS: child died

    Can someone help me with this?
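
    A hedged sketch of a passive-mode setup that works with VirtualBox NAT: pasv_address should be the address FileZilla actually connects to (the Windows host's address when the ports are NAT-forwarded), and every passive port must be forwarded. The VM name, address, and port range are examples, and --natpf1 needs VirtualBox 4.0 or later (older releases used VBoxManage setextradata for NAT forwarding):

        # vsftpd.conf (guest)
        pasv_enable=YES
        pasv_min_port=4242
        pasv_max_port=4245
        pasv_address=192.168.1.50

        # on the Windows host: forward the control port and the passive range
        VBoxManage modifyvm "CentOS55" --natpf1 "ftp,tcp,,21,,21"
        VBoxManage modifyvm "CentOS55" --natpf1 "ftp-pasv1,tcp,,4242,,4242"
        VBoxManage modifyvm "CentOS55" --natpf1 "ftp-pasv2,tcp,,4243,,4243"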

    Read the article

  • FTP - 530 Sorry, the maximum number of clients...?

    - by aSeptik
    Hi all! I know this is not exactly a code question, but who among you doesn't use an FTP client? ;-) My problem is that my FTP works great, except when I upload files to one particular client's server. On this server some files upload fine and others don't: they stop halfway through the upload, and then this error is displayed:

        530 Sorry, the maximum number of clients (4) from your host are already connected.
        Unable to make a connection. Please try again.

    Obviously this is not true - I'm the only one uploading! Has anyone had the same experience? PS: I have tried many different FTP clients; they all display the same error or just hang. Thanks

    Read the article

  • Tomcat DNS forwarding to multiple applications

    - by basis vasis
    I recently installed BusinessObjects software on Tomcat 6. I have two domains - domain1 and domain2. The software allows access to two of its applications via these URLs: http://myservername.domain1:8080/BO/APP1 and http://myservername.domain1:8080/BO/APP2. Instead of these URLs, I would like end users to access the apps via something like http://bobj.domain2.com:8080/BO/APP1 and http://bobj.domain2.com:8080/BO/APP2, and I cannot figure out how to accomplish that. I have looked into an HTTP redirect (not good, because the destination address shows up in the address bar), domain forwarding (not sure it would work with multiple applications and forwarding from one domain to another), and also fronting Tomcat with Apache and mod_jk using virtual hosts (not sure it is possible when forwarding from one domain to a subdomain of another domain). Please advise as to my best option and how to accomplish it. Thanks a bunch
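
    One hedged approach is a reverse proxy in front of Tomcat. A sketch with Apache httpd and mod_proxy/mod_proxy_http (a mod_jk worker behind a matching virtual host would work similarly); it assumes bobj.domain2.com resolves in DNS to the Apache box and serves on port 80 - adjust the VirtualHost port if the :8080 URLs must stay:

        <VirtualHost *:80>
            ServerName bobj.domain2.com
            ProxyPreserveHost On
            ProxyPass        /BO/ http://myservername.domain1:8080/BO/
            ProxyPassReverse /BO/ http://myservername.domain1:8080/BO/
        </VirtualHost>

    Because the proxying happens server-side, the address bar keeps showing bobj.domain2.com, unlike an HTTP redirect.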

    Read the article

  • ESXi ftpput fails - syntax problem

    - by Datapimp23
    I'm trying to ftpput my virtual machine directories to our NAS, which doesn't support NFS - only FTP and Samba. So I'm in the ESXi console and enter the following command:

        ftpput ipaddress /vmfs/volumes/4a1157e1-be81171a-1b39-001d09080124/VMNAME /Backup

    /Backup is a public share on the NAS; I can access it through any FTP client. After I hit enter I get the following:

        ftpput: can't open 'Backup': No such file or directory

    I'm kind of in the dark here. Any suggestions?
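
    In the BusyBox ftpput used by the ESXi console, the remote name comes before the local name (which matches the "can't open 'Backup'" error above - it is treating /Backup as the local file), and it only copies single files. A hedged sketch of pushing a VM directory file by file (credentials and the NAS address are placeholders):

        cd /vmfs/volumes/4a1157e1-be81171a-1b39-001d09080124/VMNAME
        for f in *; do
            # usage: ftpput [-u USER] [-p PASS] HOST REMOTE_FILE LOCAL_FILE
            ftpput -u ftpuser -p secret nas_address "/Backup/$f" "$f"
        done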

    Read the article

  • Limit disk I/O one program creates?

    - by Posipiet
    Hardware: one virtualization server - dual Nehalem, 24 GB RAM, 2 TB mirrored HD.

    Software: Debian, KVM, and virt-manager on the server, with several virtual machines that also run Linux. The 2 TB disk is one big LVM; each VM gets a logical volume and makes its own partitions in it.

    Problem: one of the programs that runs in one of the VMs creates a huge disk load. This was never an issue before, because the program never ran on such powerful hardware. Now the CPUs are fast, and lots of I/O is the result. We can't do much about that at the moment, because the tool is a black box. On the other hand, the speedy computation is welcome. The program creates about 5 GB of temp files which get overwritten during the next iteration.

    Question: how can we limit the disk I/O for the process?
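
    A hedged first step from the host side is to deprioritise the qemu/kvm process backing the noisy VM with ionice (this only takes effect when the host disks use the CFQ I/O scheduler; the VM name is a placeholder):

        # find the qemu/kvm process backing the noisy VM
        ps aux | grep '[k]vm.*NOISY_VM_NAME'

        # drop it to the idle I/O class so the other VMs win contention for the disks
        ionice -c3 -p $(pgrep -f 'NOISY_VM_NAME')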

    Read the article

  • What are the benefits of using conforming certificates?

    - by zneak
    Recently, my web host started sending my mail client a self-signed root certificate with no fields filled in (everything says "Unknown") when connecting via SSL. I'm pretty sure this is not a good thing, but since it works, the tech support guy says it's fine. I'm not a certificate guru, so I'm turning to you people. What purpose do certificates serve? Is it really okay that the certificate has every field set to "Unknown"? I don't check certificates often, but I don't recall ever being sent a root one; what's the difference between a root certificate and, err, the other kind of certificate?
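
    For what it's worth, a hedged way to see exactly what the mail server presents, independent of any mail client (substitute the real mail host, and the port for the protocol in use - 993 for IMAPS, 995 for POP3S, 465 for SMTPS):

        openssl s_client -connect mail.example.com:993 -showcerts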

    Read the article

  • How to set up external mail addresses without external autodiscover attempts?

    - by Tarnschaf
    We have a little Exchange/Outlook installation here that fetches mail from our provider with POP3. To be able to send emails outside our organisation, I added another SMTP address to the Exchange user: [email protected] (default/reply address) and [email protected]. Sending email works using the default address, but now there is an error message each time we start Outlook: Outlook tries autodiscover against autodiscover.ourcompany.com, which doesn't exist. Our autodiscover files are placed on our local server, and I think all the servers are discovered correctly, because everything works as expected - everything except the error message on each Outlook start. (The error message is actually about an invalid certificate, but I don't see why Outlook should contact an external host at all!) So how can I solve this? By forcing autodiscover on every Outlook client to use the local hosts? Or is there an even better way?
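
    A hedged sketch of the client-side registry switch commonly used to stop Outlook from probing the HTTPS root/autodiscover domains (the 14.0 key applies to Outlook 2010 - adjust it to your Outlook version, and test on a single client first):

        reg add "HKCU\Software\Microsoft\Office\14.0\Outlook\AutoDiscover" /v ExcludeHttpsRootDomain /t REG_DWORD /d 1 /f
        reg add "HKCU\Software\Microsoft\Office\14.0\Outlook\AutoDiscover" /v ExcludeHttpsAutoDiscoverDomain /t REG_DWORD /d 1 /f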

    Read the article

  • Bad disks in ancient server

    - by Joel Coel
    I have a 1998-era Netware 3.12 server that runs everything on our campus: general ledger, purchasing, payroll, student information, grades, you name it. The server has an Adaptec RAID controller with two volumes: RAID 1, 2 17GB SCSI disks, Seagate ST318417W; RAID 5, 3 4GB SCSI disks, 2 Seagate ST34573W and 1 ST34572W.

    We are currently in the early stages of a project to replace this system, but you don't just jump into a new system like that, and so I need to keep this server running until at least November 2011. This week we had not one but two hard drives fail. Thankfully they are from different volumes and we're able to keep running for the moment, but given the close nature of these failures I have serious doubts that I'll be able to avoid catastrophic failure on this server through the November target as is, without restoring the RAID redundancy - it'll only take one more drive failure anywhere and I'm completely hosed.

    We are fortunate enough to have exact-match "spares" lying around for both drives, but the spares are in unknown condition. I tried swapping just them in, but the RAID controller isn't smart enough to handle this and it renders the system unbootable. As for the RAID controller itself, there is a utility I can get into during POST via a Ctrl-A shortcut, but I can't do much useful from there. To actually manage volumes I must first boot into Netware, at which point I can use CI/O Array Management Software Version 2.0 to look at volume information. I suspect that the normal way to manage things is to boot from a special floppy with the controller software on it, but that floppy is long gone.

    Going through the options in the RAID software, I think the only supported way to replace a disk in an existing RAID volume is to physically add the disk, boot up and configure it as a "spare" for a volume, force the volume to use the spare to replace an existing down disk (and at this point I'm only guessing) so that the down disk becomes the spare, repair the volume, remove the spare from the volume, and then shut down and remove the disk. Then start all over for the other failed disk. All this amounts to a lot of downtime, assuming I can even make it work and that my spares are any good.

    As for finding reliable spares, I have no clue where to even begin looking for a new 4GB SCSI drive, or even which exact SCSI system I'm looking for, as it's gone through a few different iterations over time. Another option is to migrate this to a virtual machine (Hyper-V), but all previous attempts we've made in this area have failed to get very far. When this machine was installed I was just graduating from high school, and so it requires lower-level knowledge of Netware and DOS than I ever developed, or have since forgotten if I did have it (I'm not exactly a DOS neophyte, either). Part of my problem is that this is a high-use server, and taking it down for a few days to figure things out isn't gonna fly very well.

    As for the question, I'm looking for anything that might be helpful in this situation: a recommendation on a place to find good spares from this era, personal experience repairing RAID volumes using a similar controller or building a Hyper-V VM from an old Netware server, a line on a floppy with better software for the RAID controller, a recommendation for a good Novell consultant in Nebraska who would be able to put things right, a whole other option I haven't considered yet, etc.
    Update: For backups, we have good (recently verified via restore) backups of the data only - nothing for the software that actually runs things.

    Update 2: Just a progress report: I currently have a working Netware 3.12 install in VMware Server 2.0, thanks largely to the guide I found here: http://cerbulescubogdan.blogspot.com/2010/11/novell-netware-312-on-vmware.html The next steps are preparing empty Netware volumes to match the additional volumes on my existing server, taking a dump of everything on the C:\ drive and the Netware volumes of my existing server, figuring out from that information what modules need to be added to Netware, installing my licenses (we do still have that disk, if it's any good), and moving the data over. I have approval to bring the server down for a week after the first of the year (sadly not before), so, aside from creating empty volumes, the rest of the work will have to wait until then.

    Final Update (Jan 5, 2011): I was able to get spares working in both RAID arrays without data loss this week. Both are now listed by the controller as "FAULT TOLLERANT" (yay!). I was also able to build on the progress from my last update and now have a functional "spare" server in VMware Server 2.0. The spare can run and use our ERP software, but I can't put it into production because I can't (yet) print from that box (and I have no idea why). Even so, this VM will do in a pinch if I have no other choice, and between it and the repaired RAID arrays I'm comfortable pushing on until I can junk the machine in November.

    Read the article

  • Microsoft Forefront TMG 2010 - which topology to choose for a monitoring-only server?

    - by MadBoy
    Hello, I've installed Forefront and want to use it as a traffic-monitoring solution until we decide to deploy it as a router. I have two NICs assigned to this virtual machine. One NIC is connected to a port that is a "mirror port" of our WAN link on the switch, so it sees all the network traffic flying by. The other NIC provides internet access. This server is located inside our LAN. What topology should I choose, and which options should I look at, to be able to see which traffic is used (SMTP, WWW, etc.) and who does what? We have had cases of infected machines sending spam, and we want to be able to see that some machine is sending large amounts of mail. Is that possible?

    Read the article

  • Problem installing Windows Server 2008 R2 on Xen 3.0

    - by GodEater
    Hi there folks, I've been googling this for a few hours now and not really getting anywhere. We have a Xen 3.0 host onto which I'm trying to install a copy of Windows Server 2008 R2 Standard Edition as a guest OS, but the install hangs at the "Starting Windows" screen when it starts running the installer. Is this a known issue with the version of Xen we're running (I know it's positively ancient)? Is there a workaround for it at all? We've successfully got a great number of vanilla 2008 servers running on it; it appears to be an issue specific to R2. Bryan

    Read the article
