Search Results

Search found 16587 results on 664 pages for 'virtual hardware'.


  • Host system resets (crashes) when using VMWare or VirtualBox and 64-bit guest systems

    - by sinni800
    I have been trying to install virtual systems on VMware for a while now and encountered strange behaviour from my PC:

    In "automatic" virtualization mode it either outputs a cryptic error message right on startup, before even the BIOS (I can maybe reproduce it and post it later), or it resets the complete HOST system (black screen, BIOS...). If I install Windows XP it works well in "binary translation" mode. If I try installing Linux in "binary translation" mode, it crashes 1 or 2 seconds after I hit Enter on the GRUB selection screen, after the first page of kernel messages rolls in. Using VirtualBox it crashes right in the BIOS. It did give me a bluescreen though: 0x00000101 CLOCK_WATCHDOG_TIMEOUT, "a clock interrupt was not received on a secondary processor within the allocated time interval".

    News: I tried VirtualBox again and it did not completely crash the computer this time. It gave me a critical error and a log file: http://pastebin.com/yKZSDs91

    In conclusion, it crashes instantly if VT-x is activated. If VT-x is off, it seemingly only crashes if I try to install something 64-bit. Another update: yes, it ONLY crashes when the guest is 64-bit!

    What I tried: reinstalling Windows (my installation was quite broken, so it seemed natural; it didn't help though) and a new BIOS.

    What I am certain of: virtualization extensions are activated in the BIOS.

    My computer specs: ASUS P8P67 LE mainboard with the newest BIOS/EFI firmware, Intel Core i5 2500K, ATI Radeon HD 5770, and 16 GB (4 x 4 GB) Corsair 1333 MHz DDR3 RAM.
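
    Since "VT-x enabled in the BIOS" is the central claim here, it is worth verifying from software that the CPU actually exposes VT-x and that the firmware has not locked it off. A minimal sketch, assuming a Linux live CD on the host and the msr-tools package for rdmsr:

        # non-zero count means the CPU advertises VT-x to the OS
        grep -c vmx /proc/cpuinfo
        # inspect IA32_FEATURE_CONTROL: bit 0 = locked, bit 2 = VT-x enabled outside SMX
        sudo modprobe msr
        sudo rdmsr 0x3a

    If rdmsr prints 1 (lock bit set, VT-x bit clear), the BIOS setting is not actually taking effect despite what the setup screen says.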


  • Migrating WebLogic 10.3.0 to new host. Slow managed server startup times

    - by wadevondoom
    We are migrating our Blue Martini Commerce application (only supported on WebLogic 10.3.0) to a new host (Red Hat 6.3 on a VMware ESX VM). We are seeing extremely slow startup times for our managed servers, roughly 20x slower than current production. For instance, the Publish managed server takes ~30-45 seconds in current production; in the new environment it takes ~10 minutes.

    The setup uses the same domain structure and JVM as the current production environment, and the same setup files. We use jdk1.6.0_33 on a 64-bit architecture. We used the generic 64-bit WebLogic installer and the pack/unpack utilities to migrate the domain. The JAVA_OPTS to start this server are "-d64 -Xms256m -Xmx512m -XX:PermSize=48m -XX:MaxPermSize=256m".

    The sysadmins have checked /etc/sysctl.conf and /etc/limits.conf to ensure we are not hitting some kind of process limit. As I am not sure what this managed server does from a Blue Martini perspective during startup, I also had the DBA check that Oracle RAC (11.2.0.3) isn't hitting a process limit either, and that there is no TNS listener issue.

    The new host is quite a bit stricter with its server lockdowns, so there are a few differences:

    - Red Hat 6.3 in the new environment, RHEL 5.7 in current
    - SELinux is targeted in the new environment and disabled in current
    - VM in the new environment, dedicated hardware in current
    - iptables disabled in current; it was enabled in the new environment, but I had them disable it just in case

    I apologize for not being more specific; I am mostly hoping for some tips. I do not have the typical root access I would normally have in this environment, and I am just hoping for a path forward. I did a few 'kill -3' to see if there are blocked threads and I got nada. The service works for all intents and purposes; it is just painfully slow. Thank you all in advance for reading, and best regards. Wade
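
    One note on the 'kill -3' attempt: the thread dump does not go to your terminal, it goes to the JVM's stdout, which for a WebLogic managed server usually lands in its .out file under the domain's servers/<name>/logs directory. A hedged sketch for catching where the startup time goes, assuming jstack from the same JDK and that the process is findable by its server name (the pgrep pattern is an example):

        # take a few thread dumps while startup is crawling
        PID=$(pgrep -f "weblogic.Name=Publish")
        for i in 1 2 3; do
            "$JAVA_HOME/bin/jstack" "$PID" > /tmp/publish_threads_$i.txt
            sleep 30
        done

    If the same threads sit in the same stacks across dumps (for example in SecureRandom seeding, or in a socket read toward the database), that points at the actual wait rather than a process limit.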


  • Transfer disk contents *without* cloning tools

    - by Chris Cummins
    Is it possible to "clone" a disk which contains programs by performing a copy of all the disk contents (preserving file attributes) from source to destination disk, and unplugging the source disk and changing the drive letter of the destination disk to match that of the source? Context I have a two disk Windows 8 system with a system drive and a data drive. Recently, the data drive developed a number of bad sectors leading to IO errors. I have been sent a replacement drive so I simply need to clone the contents of this data drive onto the replacement. The drive contents include documents & media, user folders (My Documents and related), and some programs (games etc). Problem The problem is that the bad sectors on the source disk causes most disk cloning tools to fail with read errors. Attempted approaches include: Disk clone from live boot environment with Acronis True Image. Fails due to read errors. Disk clone from live boot environment with Clonezilla. Fails due to read errors. Disk clone using Roadkil's Unstoppable Copier. Fails due to hardware timeouts in the HDD (application hangs indefinitely). A straightforward copy from source to destination disk using FreeFileSync (preserving file attributes and metadata). This succeeds. So at the moment I have a replacement disk which contains all of the data from the original disk. Now all I need to is somehow get Windows to replace all references to the old disk to the new one. Is this possible by simply swapping the assigned drive letters? Any help would be greatly appreciated, thanks!


  • Server name resolution not consistent

    - by bobthemac
    I am having some weird issues with my web server. It has a public IP address and is set up on an OpenVZ virtual machine. Accessing the site from outside works fine every time, but when trying to connect outward from the server I can't always get through. Sometimes I can connect out and resolve addresses, sometimes I can't.

    The issue is visible in SSH when trying to do a wget against Google: sometimes it works and I get the index.html page, and sometimes I get nothing. The issue is more visible in WordPress, where you can't view themes, but after a few presses of the "try again" button you can then view them. I have searched Google and found nothing about this issue. Does anyone here have any ideas what could be causing this strange behaviour? Ports 80 and 2222 are open for web and SSH.

    Failed:

        17:26:33.398412 IP 86.148.184.124.38445 > 176.9.36.252.http: Flags [.], ack 98383, win 632, options [nop,nop,TS val 3070086 ecr 323106946], length 0

    Passed:

        17:30:00.179630 IP 146.90.206.241.50091 > 176.9.36.252.http: Flags [F.], seq 1, ack 1, win 115, options [nop,nop,TS val 13740559 ecr 323308537], length 0

    Thanks in advance
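
    Since "can't connect out" and "can't resolve" are different failures, it helps to test them separately. A hedged sketch; the target name and IP are examples:

        # test resolution alone, repeatedly
        for i in $(seq 1 20); do
            dig +short +time=2 +tries=1 google.com || echo "lookup $i failed"
        done
        # test outbound TCP with no DNS involved (IP is an example)
        wget -O /dev/null http://173.194.35.103/

    If wget to a bare IP always succeeds while dig is intermittent, the problem is the resolvers in the container's /etc/resolv.conf (or how the OpenVZ host forwards UDP), not the web stack.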


  • Why is ssh agent forwarding not working?

    - by J. Pablo Fernández
    On my own computer, running Mac OS X, I have this in ~/.ssh/config:

        Host *
          ForwardAgent yes
        Host b1
          ForwardAgent yes

    b1 is a virtual machine running Ubuntu 12.04. I ssh to it with ssh pupeno@b1 and I get logged in without being asked for a password, because I already copied my public key. Due to forwarding, I should be able to ssh to pupeno@b1 from b1 and it should work without asking me for a password, but it doesn't; it asks me for a password. What am I missing? This is the verbose output of the second ssh:

        pupeno@b1:~$ ssh -v pupeno@b1
        OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug1: Connecting to b1 [127.0.1.1] port 22.
        debug1: Connection established.
        debug1: identity file /home/pupeno/.ssh/id_rsa type -1
        debug1: identity file /home/pupeno/.ssh/id_rsa-cert type -1
        debug1: identity file /home/pupeno/.ssh/id_dsa type -1
        debug1: identity file /home/pupeno/.ssh/id_dsa-cert type -1
        debug1: identity file /home/pupeno/.ssh/id_ecdsa type -1
        debug1: identity file /home/pupeno/.ssh/id_ecdsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: sending SSH2_MSG_KEX_ECDH_INIT
        debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
        debug1: Server host key: ECDSA 35:c0:7f:24:43:06:df:a0:bc:a7:34:4b:da:ff:66:eb
        debug1: Host 'b1' is known and matches the ECDSA host key.
        debug1: Found key in /home/pupeno/.ssh/known_hosts:1
        debug1: ssh_ecdsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: /home/pupeno/.ssh/id_rsa
        debug1: Trying private key: /home/pupeno/.ssh/id_dsa
        debug1: Trying private key: /home/pupeno/.ssh/id_ecdsa
        debug1: Next authentication method: password
        pupeno@b1's password:
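
    Notably, the debug output shows the client on b1 falling back to local key files and never offering an agent identity, which is what happens when no agent socket reached b1. A hedged check, assuming the key is supposed to live in the Mac's agent:

        # on the Mac: confirm the agent holds the key at all
        ssh-add -l
        # then confirm the agent socket actually arrives on b1
        ssh pupeno@b1 'echo SSH_AUTH_SOCK=$SSH_AUTH_SOCK; ssh-add -l'

    If SSH_AUTH_SOCK comes back empty, forwarding never happened: either the config block isn't applied to this connection, or b1's sshd has AllowAgentForwarding no. Also note that a key which only exists as a file on disk is not forwarded; it has to be loaded into the agent (ssh-add or Keychain) first.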


  • Recover data from quick formatted DVD-R

    - by Andrii Kalytiiuk
    I need to recover data from a quick-formatted DVD-R. Please advise a free-of-charge option (cheap commercial tools will be OK too). The disc was partially recorded with the Windows built-in disc recorder, and the recording most likely was not complete. Afterwards I inserted the partially recorded DVD again and, on the Windows recorder's "How to use this disc?" message box, selected "use for CD/DVD player", and the data was completely lost, as a new recording session was started. The files recorded on the disc were photos.

    What I have tried so far:

    - DiskInternals CD-DVD Recovery: sees 5 JPG files but can't show a preview. The tool is commercial; the trial version does not allow recovering files.
    - CDCheck: doesn't see any files and reports errors when attempting to scan the DVD.
    - CD Recovery Toolbox Free: does not even recognize the DVD drive.
    - ISO Buster: recognizes two files, one MP3 file covering 99% of the recorded size and one ARC file of about 100 KB.
    - MiniTool Power Data Recovery, Free Edition: does not see any files on the DVD.
    - Stellar Phoenix CD DVD Data Recovery: does not see any files.
    - BinaryBiz Virtual Lab: sees the DVD but needs a license to browse content.

    Please advise how it is possible to recover files from this DVD.
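
    Whatever tool ends up reading the filesystem, it is kinder to a marginal disc to image it once and point every recovery attempt at the image. A minimal sketch, assuming a Linux environment with GNU ddrescue and the drive at /dev/sr0:

        # 2048-byte sectors, 3 retry passes, resumable via the map file
        ddrescue -b 2048 -r 3 /dev/sr0 dvd.img dvd.map

    Tools such as ISO Buster can then open dvd.img directly, and re-running ddrescue later with more retries can only add sectors to the same image.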


  • Why is my cron daemon being killed every few minutes?

    - by user113215
    As of about a week ago, my cron daemon refuses to stay running. I'm using Debian 6 x64 on an OpenVZ virtual machine. Running something like pgrep cron shows that the daemon isn't running. I start the service with service cron start or /etc/init.d/cron start and it launches, but it disappears from the running process list after a few minutes (anywhere between 1 and 30 minutes before the process is killed again).

    Using strace -f service cron start, I can see that the process is being killed for some reason:

        nanosleep({60, 0}, <unfinished ...>
        +++ killed by SIGKILL +++

    There's nothing relevant in /var/log/syslog, /var/log/messages, /var/log/auth.log, or /var/log/kern.log to explain why the process is dying. The system has at least 800 MB of free memory, and cat /proc/loadavg returns 0.22 0.13 0.04, so resources shouldn't be the issue. With cron running, free -m reports:

                     total       used       free     shared    buffers     cached
        Mem:          1024        211        812          0          0          0
        -/+ buffers/cache:        211        812
        Swap:            0          0          0

    I also tried removing and reinstalling the cron package using apt-get.

    Update: I initially thought the problem was a resource issue. I erased my entire VPS and started from a fresh Debian image. There is now nothing else running on the system, but even from a clean install my cron daemon is still being killed at random. What else should I check? How do I find out what's killing my crond?
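
    To catch the sender of the SIGKILL, the audit subsystem can log every kill() syscall. A hedged sketch; note that inside an OpenVZ container auditd may be unavailable if the host kernel does not expose audit, in which case the same question has to be asked on the hardware node (where the provider's resource-limit and OOM tooling also lives):

        apt-get install auditd
        auditctl -a always,exit -F arch=b64 -S kill -k cron-kill
        service cron start
        # ...wait for cron to die, then:
        ausearch -k cron-kill
        # look for syscall=kill records with a1=9 (SIGKILL) and note the sender's pid/comm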


  • OpenVZ: Choosing right MySQL-Server depending on host

    - by Scheintod
    What I have: two servers running Wheezy/OpenVZ, with

    - one MySQL container on each host, master/master replicated (mysql1/mysql2)
    - replicated DNS on each host (dns1/dns2)
    - different web containers on each host, but regularly backed up to the other

    What I want:

    - Each container should use the "local" MySQL server (the one which runs on the same hardware node).
    - I'd like to be able to move the web containers between the two hosts.
    - Each container should choose the MySQL server (semi-)automatically.
    - This scheme should continue working if one host is down.

    What I tried: currently I keep track of which container should run on which host via DNS entries, which are queried by scripts, e.g. for questions like "which container should be backed up on/to which host". For choosing the right MySQL server I have one extra entry like "mysql.container_abc" which resolves to either mysql1 or mysql2. So in the applications in the container I can use "mysql.container_abc" for e.g. mysql_connect, and if I want to move the container around I just need to change the DNS. Now I noticed one problem with this approach: every mysql_connect generates one DNS query, because the DNS is not cached, and this slows the request down unnecessarily.

    What I would like better: some way of passing the information about which host we are running on into the container and using it directly instead of going through DNS, e.g. some way of setting a custom /etc/hosts entry in the container. Or any other great idea. It doesn't have to involve DNS, but it shouldn't require too much special "magic" inside the container.
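
    Since an OpenVZ container's filesystem is visible from the hardware node, one low-magic option is to have each node stamp a hosts entry into its own containers. A hedged sketch; the CT ID, hostname, and IP are placeholders, and /vz/root is the usual (but configurable) VE_ROOT:

        CTID=101
        LOCAL_MYSQL=10.0.0.5    # IP of the MySQL container on THIS node
        HOSTS=/vz/root/$CTID/etc/hosts
        sed -i '/mysql\.local/d' "$HOSTS"
        echo "$LOCAL_MYSQL  mysql.local" >> "$HOSTS"

    Hooked into the per-container mount/start scripts (e.g. /etc/vz/conf/$CTID.mount), a migrated container would be repointed to its new local MySQL server the moment it starts, and applications simply connect to mysql.local with no DNS round trip.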


  • Vmware Workstation, Win7 host, Ubuntu guests with Nat + Host-only networks but they cannot connect to the Internet

    - by Ikon
    I have a Win7 host machine with VMware Workstation. In Workstation I have 3 Ubuntu guests installed. All 3 Ubuntu guests have a NAT network, to access the internet without asking the router for a local address, and a Host-only network, to connect all Ubuntu guests and the host in a private network for internal communication without touching the router.

    When I try to make any of the Ubuntu guests fetch data from the internet, assuming they would figure out that the NAT-ed interface can access the requested data, they fail and report that there is no route for my query. If I disconnect the second interface (the Host-only network) on the Ubuntu guests and restart networking, they start to know the route to the internet. Oddly, during installation the guests asked which of the two given interfaces (NAT and Host-only) should be used to get updates, and they managed to get the updates fine. Not so after the installation finished and the machines rebooted.

    I have checked in the Virtual Network Editor that the NAT interface uses my real network card to access the net, so there should be no problem there. I do not wish to use the router's DHCP service to give the Ubuntu guests addresses, and I also don't want the guests to be accessible from the local network directly, only by the host; that is what the Host-only network is for. Any suggestions?

    Edit: 192.168.189.0 is the NAT network and 192.168.7.0 is the Host-only network.

        $ route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.7.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
        192.168.189.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
        0.0.0.0         192.168.189.2   0.0.0.0         UG    100    0        0 eth0
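
    The posted routing table actually looks healthy (a single default route via the NAT gateway), so the suspect is what happens to the routes when both interfaces come up together. A hedged sketch of /etc/network/interfaces on the guests that makes the intent explicit: only the NAT interface may ever supply a gateway, and the host-only interface is configured statically with an address and nothing more (the address is an example):

        auto eth0
        iface eth0 inet dhcp
            # NAT: VMware's DHCP supplies 192.168.189.x and the default gateway

        auto eth1
        iface eth1 inet static
            # host-only: deliberately NO gateway and no dns options here
            address 192.168.7.11
            netmask 255.255.255.0

    If the host-only side is currently on DHCP too, both interfaces can race to install a default route, which would match "no route until I disconnect eth1".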


  • Snapshotting single disk of running Hyper-V VM

    - by modelnine
    I'm currently somewhat at a loss as to how to create a snapshot of a single virtual hard disk of a running Hyper-V VM. Creating a differencing disk while a server is shut down is no problem (i.e., call the New-VHD cmdlet with a ParentPath, then update the VHD binding of the respective VM device). But while the host is running, all I can find is checkpointing the VM as a whole, which creates snapshots of all attached disks and leaves the VM state in a form that isn't easily processed by external tools (i.e., it requires reading additional metadata from the VM).

    What I'd like to happen for a single-disk snapshot (in my understanding) is:

    - Pause the VM
    - Rename the current disk to some other name which marks it as a base snapshot
    - Create a new VHD which has the renamed VHD as parent path and is marked as "current"
    - Swap the VHD binding of the snapshotted hard disk to the newly created differencing VHD
    - Resume the VM

    Is there any means to do this programmatically?

    Update: I've found that this is actually possible with SCSI disks: pause the VM, remove the SCSI disk, make the snapshot, reattach the SCSI disk at the same position, resume the VM. And the VM resumes properly. But is something similar also possible for the boot disk of Generation 1 machines, which is always IDE?
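
    The SCSI flow from the update can be scripted with the Hyper-V PowerShell module. A hedged sketch; the VM name, paths, and controller positions are examples:

        $vm = "MyVM"
        Suspend-VM -Name $vm
        Remove-VMHardDiskDrive -VMName $vm -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1
        Rename-Item D:\VHDs\data.vhdx D:\VHDs\data-base.vhdx
        New-VHD -Path D:\VHDs\data.vhdx -ParentPath D:\VHDs\data-base.vhdx -Differencing
        Add-VMHardDiskDrive -VMName $vm -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1 -Path D:\VHDs\data.vhdx
        Resume-VM -Name $vm

    For a Generation 1 boot disk this does not translate: IDE devices cannot be added or removed unless the VM is off, so an online snapshot of the boot disk is back to whole-VM checkpoints or host-level VSS.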


  • VMWare use of Gratuitous ARP REPLY

    - by trs80
    I have an ESXi cluster that hosts several Windows Server VMs and around 30 Windows workstation VMs. Packet captures show a high number of ARP replies of the form:

        sender_ip:  VM IP
        sender_mac: VM virtual MAC
        target_ip:  0.0.0.0
        target_mac: switch interface MAC

    The specific addresses aren't really a concern; they're all legitimate, and we're not having any problems with communications (most of the questions surrounding GARP and VMware have to do with ping issues, a problem we don't have). I'm looking for an explanation of this traffic pattern in an environment that otherwise functions as expected. So the question is: why would I see a high number of unsolicited ARP replies? Is this a mechanism VMware uses for some purpose? What is it? Is there an alternative?

    Edit: quick diagram:

        [esxi]--[switch vlan]--[inline IDS]--[fw]--(rest of network)

    The IDS is complaining about these unsolicited ARPs. Several IDS vendors trigger on ARP replies without a prior request, or on ARP replies that have a target IP of 0.0.0.0. The target MAC in these replies is the VLAN interface on the switch. Capture points:

    - The IDS grabs the offending packets
    - The FW can see the same ones
    - A VM on the ESXi host does not see these, although there is an ARP request for a specific IP on the ESXi host that has source_ip=0.0.0.0 and source_mac=[switch vlan interface]

    I can't share the captures, unfortunately. Really I'm interested in finding out whether this is normal for an ESXi deployment.
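
    For quantifying the pattern at any capture point, a BPF filter can match exactly these frames. A hedged sketch (the interface name is an example; in the ARP header, offset 6 holds the opcode and offset 24 the target protocol address):

        # opcode 2 = reply, target IP 0.0.0.0
        tcpdump -eni eth0 'arp[6:2] == 2 and arp[24:4] == 0'

    Counting matches per source MAC over time makes it easier to tell a periodic keepalive/notification mechanism apart from one misbehaving guest.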


  • hyper-v cluster behavior when losing network connectivity

    - by ChristopheD
    Setup: a rather new Hyper-V R2 cluster with 2 nodes in failover configuration. Physical host OS: Windows Server 2008. About eight VMs (mixed Windows Server 2008 and Linux).

    Yesterday we had a power outage of about 15 minutes. Our blades are on UPS, so the physical host machines (Windows Server 2008) never went down. Our main switches are not on UPS (yet), and we saw behaviour similar to the following (as distilled from the event logs):

    - The nodes in the cluster lost their means of communication (because the external switches went down).
    - The cluster wanted to bring down one (the first) of the nodes (to start failover?).
    - The previous step impacted the clustered storage where the virtual machines' VHDs are located.
    - All VMs were brutally terminated and were found in a failed state in Failover Manager on the host OSes. The Linux VMs were kernel panicking and looked like they had had their disks ripped out.

    This whole setup is rather new to us, so we are still learning about it. The question: we are putting the switches on UPS soon, but we were wondering whether the above is expected behavior (it seems rather fragile), or whether there are obvious configuration improvements to handle such scenarios. I can upload an evtx file showing what exactly was going on, in case that's necessary.


  • Enter network credentials as part of batch script

    - by Michael
    WinXP: I have several system services that are needed to run some machinery in my lab. The machine these services run on uses a lab login that has administrator rights. Our IS department, unfortunately, has it set up so that at some point during the night the login "loses" the privilege level needed to start/stop these services. The account stays logged in, but the software controlling my hardware becomes unresponsive.

    In order to get things back up and running, I have to stop the services and restart them. Because of the security settings, however, I have to re-enter the user password to start each service (even though the user was never logged out); without it I get "This service cannot be started due to a logon failure" and have to enter the password.

    What would be ideal is to have a batch script run before anyone gets in to work that stops all of the necessary services, enters the user credentials where prompted, and then restarts them so that everything is ready for first shift. I assumed that the Task Scheduler in Windows would work, since it allows you to run batch files with a user's name and password, but this didn't do the trick: with that setup I would arrive to find all the services stopped but not started again (presumably because the authentication failed). The batch file is about as simple as it gets:

        net stop "Service1"
        net stop "Service2"

    etc., then restart in reverse order based on dependency:

        net start "Service2"
        net start "Service1"

    What would it take to accomplish what I'm trying to do and restart the services?
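
    Since the symptom is specifically a logon failure on the service account, one hedged approach is to have the script re-assert the service's logon credentials before starting it. A sketch using sc (the account and password are placeholders; note that sc's syntax requires a space after each "option="):

        sc config "Service1" obj= ".\labuser" password= "S3cret!"
        net start "Service1"

    The password ends up readable in the script, so the file needs tight NTFS permissions. It also only helps if the nightly policy job resets the service logon; if it instead strips the account's "Log on as a service" right, that right has to be re-granted rather than the password re-entered.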


  • Dead Linux server - need help and options

    - by Choi S.
    All, I have a Dell PE 1950 with 2 SATA drives in a software RAID 1. The OS is CentOS 5.5 (2.6.18.x). Starting this afternoon we received hardware errors (something on the bus is bad, E171F) and the machine became unresponsive. We hard-booted it and it came back up for about 5 hours, but then it happened again. I'm trying to figure out our options. Unfortunately we do not have similar hardware, but I have a small desktop that I can use.

    I was contemplating putting one of the drives into the desktop and then starting it up. My goal was then to P2V it using VMware Converter, but apparently the free v5.x doesn't support hot cloning/converting on a RAID volume; only the Enterprise 4.x version of Converter does. My questions are:

    1. Is putting a single drive out of a RAID 1 pair into another piece of hardware safe? Based on my research and understanding it appears to be, but I would like confirmation.
    2. Is there any workaround for VMware Converter not supporting RAID volumes during a hot clone/convert session?
    3. Are there other options I'm overlooking?

    Thanks in advance for reading and responding. --Choi S.
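
    On question 1: with Linux md software RAID 1, each member carries a complete copy of the filesystem plus md metadata, so a single drive is readable in other hardware. A hedged sketch, assuming the drive appears as /dev/sdb in the desktop:

        mdadm --examine /dev/sdb1                   # confirm it is an md member and looks healthy
        mdadm --assemble --run /dev/md0 /dev/sdb1   # --run starts the array degraded
        mount -o ro /dev/md0 /mnt                   # read-only sanity check of the data

    Mounting read-only keeps the two halves of the mirror from diverging, in case the original server can still be revived and resynced.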


  • Recommended Win2k8 Server software to fix my RAID-0 issue

    - by Jason Kealey
    I'm running an ASUS P6T V2 Deluxe. It has six SATA ports and supports onboard RAID. I am using two of those ports for a RAID 0 array of 1.5TB Seagate drives on the onboard RAID controller. One of them is giving me SMART warnings and I want to preemptively replace it. I pulled two other 1.5TB drives from another computer and am ready to use one or both if necessary. I can't run any SMART diagnostic software from within Windows because it only sees the hardware RAID 0 array, not each individual drive.

    The first thing I tried was a slow sector-by-sector copy using a free tool called EASEUS Disk Copy. I used the boot disk, copied (took about 16 hours), unplugged the defective drive and plugged the new one in its place. The motherboard didn't recognize the new drive as being part of the known setup, so it did not want to boot.

    The second thing I tried was using other software (I forget the name) to copy the partition from within Windows. The first program failed because I have a server operating system. I found another which supported a server OS and did a partition copy onto the new drive. This seemed to work and the OS started to boot, but it blue-screened and entered a reboot cycle. I'm assuming that software was no good either, as it was trying to copy the boot disk while it was in use.

    I am looking for recommendations on what software to use to fix my problem without doing a reinstall. Everything is backed up, but my computer works fine otherwise and I'd like to avoid reinstallation when possible. However, my system would be back up by now if I had just started over on a second RAID array. :)
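
    A side note on the SMART visibility problem: onboard "RAID" on boards like this is firmware/driver RAID, so an OS that ignores the RAID metadata still sees the member disks individually. A hedged sketch from any Linux live CD (device names are examples):

        smartctl -a /dev/sda | grep -iE 'health|reallocated|pending'
        smartctl -a /dev/sdb | grep -iE 'health|reallocated|pending'

    That at least confirms which physical drive is failing before another 16-hour clone is attempted against it.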


  • Steps to deploy a custom routing protocol

    - by user134589
    I'm a Ph.D. student researching a service-centric networking architecture with resource allocation on a large scale. What I'm looking to do is extend an existing routing protocol like OSPF with extra fields and some new message types that I need for communication between nodes. I want to manipulate the cost of a network link, and I want paths to be calculated as in OSPF v2/v3, but using the cost that my algorithms have computed.

    What I have: the source code of OSPF from Quagga. I am assuming I can edit this code however I want, including packet structures and creating new types. Yes, I am aware it won't be easy, but this is a 6-year research project and I am eager to develop something new, to move forward.

    What I need: I would like to know how I can deploy the edited OSPF source files I have (written in C) on any type of server. I have a large testbed environment available with hundreds of virtual nodes and pretty much any OS out there. So if I want to test my extended protocol, how do I make all the nodes in a network use it to communicate? I do not understand what parts of the kernel I would need to edit here. I have searched for days now and I am unable to find how to deploy a non-existing routing protocol without the use of an application-level framework. If somebody could push me in the right direction, that would be awesome.

    Note: I need this to be a routing protocol and not an application, since I want it to work on top of the network layer for performance reasons.
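
    The reassuring part is that no kernel changes are needed: Quagga's daemons are ordinary userspace processes, and zebra installs whatever routes they compute into the kernel forwarding table via netlink. Deploying the modified protocol is therefore, per node, roughly this sketch (paths are examples from a default source build):

        ./configure --prefix=/usr/local/quagga && make && make install
        /usr/local/quagga/sbin/zebra -d -f /usr/local/quagga/etc/zebra.conf
        /usr/local/quagga/sbin/ospfd -d -f /usr/local/quagga/etc/ospfd.conf   # the modified daemon
        vtysh -c "show ip route"    # confirm the computed routes reached the kernel

    Every node in the testbed runs the same modified ospfd, so they all speak the extended message format; unmodified OSPF speakers would simply not interoperate with the new message types.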


  • How to redirect a name-based VirtualHost to a different port?

    - by Andra
    I have a Virtuoso SPARQL endpoint installed which I want to make available through a hostname (e.g. www.virtuosoexample.com). The thing with Virtuoso is that there is no document root; the endpoint is started by the daemon and made available on a port (e.g. localhost:1234/). I know how to set up a virtual host pointing to a document root, but I don't know how to do this for a server with a port number. Any advice would be appreciated.

    Below is how I would do it with a document root. I tried to change that (naively) into localhost:1234/sparql, but that didn't work:

        <VirtualHost *:80>
            ServerName www.virtuosoexample.com
            ServerAlias www.virtuosoexample.com
            ErrorLog /var/log/apache2/error.wp-sparql.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.wp-sparql.log combined
            DocumentRoot /var/www/endpoint/sparql/
            <Directory /var/www/endpoint/sparql>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>
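
    The usual way to put a name in front of a service that only listens on a local port is a reverse proxy rather than a DocumentRoot. A hedged sketch, assuming mod_proxy and mod_proxy_http are enabled and the endpoint really answers on localhost:1234:

        <VirtualHost *:80>
            ServerName www.virtuosoexample.com
            ProxyPass        /sparql http://localhost:1234/sparql
            ProxyPassReverse /sparql http://localhost:1234/sparql
            ProxyPreserveHost On
        </VirtualHost>

    ProxyPassReverse rewrites redirects coming back from Virtuoso so clients never see the internal port.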


  • Google Drive terminates without error on startup

    - by Iszi
    I've used Google Drive for a while now, but it won't start up after installing on my latest system rebuild. I'm still using the same OS, hardware, and basic software load (antivirus, firewall, etc.) that I have for years, during which I had no problems with Drive.

    OS: Windows 7 Ultimate x64
    Google Drive version: 1.12.5329.1887

    Now, whenever I try to run Google Drive, it just spawns two instances of the executable which die shortly after. No error messages are posted to the desktop, and nothing indicating any problem is written to the Event Log. After some research, I've yet to find anyone with the same problem who's found an answer. I did find out how to run Google Drive in diagnostic mode, using the --vv parameter at the command line. After that, I opened up the sync log and got this:

        2013-10-31 17:11:24,039 INFO pid=3664 1892:MainThread logging:1600 OS: Windows/6.1.7601-SP1
        2013-10-31 17:11:24,039 INFO pid=3664 1892:MainThread logging:1600 Google Drive (build 1.12.5329.1887)
        2013-10-31 17:11:24,039 DEBUG pid=3664 1892:MainThread logging:1608 DEBUGGING DUMP is ON.
        2013-10-31 17:11:24,051 ERROR pid=3664 1892:MainThread logging:1575 ERROR, UNEXPECTED EXCEPTION
        2013-10-31 17:11:24,051 ERROR pid=3664 1892:MainThread logging:1575 [Error 5] Access is denied
        Traceback (most recent call last):
          File "<string>", line 232, in Main
          File "<string>", line 118, in RegisterCustomFileTypes
          File "P:\p\agents\hpal4.eem\recipes\353983091\base\b\drb\googleclient\apps\webdrive_sync\windows\build\pyi.win32\main\outPYZ1.pyz/windows.registry", line 62, in GetValue
        WindowsError: [Error 5] Access is denied
        2013-10-31 17:11:24,052 INFO pid=3664 1892:MainThread logging:1600 Crash reporting disabled. Ignoring report.
        2013-10-31 17:11:24,052 INFO pid=3664 1892:MainThread logging:1600 Exiting with error code: 0

    I'm running an account with Administrator-level permissions and have even tried "Run As Administrator" on the EXE. I'm not sure why it's looking for a P:\ drive, as no such volume has ever been mounted on this system. What should I do to further troubleshoot, and resolve, this issue?


  • Faster secure protocol/code required for long-distance transfer

    - by Chopper3
    I've run into a problem, and I'm looking for a new secure protocol/client/server that's faster over a 1Gb/s fibre link. Let me tell you the story...

    I have a pair of redundant, diversely-routed 1Gb/s links over a distance of around 250 miles or so (not dark fibre, but a dedicated point-to-point link, not a mesh). At the 'client' end I have an HP DL380 G5 (2 x dual-core 2.66GHz Xeons, 4GB, Windows 2003EE 32-bit); at the 'server' end an HP BL460c G6 (2 x quad-core 2.53GHz Xeons, 48GB, Oracle Linux 5.3 64-bit). I need to transfer around 500 x 2GB files per week from the client to the server machine, but the transfer NEEDS to be secure.

    Using either iperf or regular FTP I can get ~80MB/s of transfer pretty consistently, which is great. Using WinSCP or Windows SFTP I can't seem to get more than ~3-4MB/s; at that point the server's CPU is 3% busy while CPU0 of the client goes to ~30% utilised. We've tried editing various TCP window sizes with little success. Both ends are connected to quite low-usage Cisco Cat6509s with Sup720s. I can replace the client machine with a newer machine and/or move it to Linux, but this will take time. Clearly these single-threaded secure Windows clients are introducing too much latency doing their encryption. So a few questions/thoughts:

    - Are there any higher-performing secure protocols or client software for Windows that I could try? I'm pretty protocol-agnostic, so long as it'll work between Windows and Linux.
    - Should I be using hardware to do the encryption, either in the client or the network parts? If so, what would you recommend?
    - I'm not convinced that just swapping the server would be that much faster; the CPU was only at 30%, but then again that's higher than I'd have expected given the load. Moving to Linux at the client end may be a better idea, but would be quite disruptive.
    - Am I missing a trick?

    Thanks in advance.
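
    Since the bottleneck is one client core doing crypto on a single stream, two levers usually help before any hardware change: pick a cheaper cipher/MAC that both ends accept, and run several transfers in parallel. A hedged sketch from the OpenSSH side (file and host names are examples; arcfour was the classic low-cost cipher in that OpenSSH 5.x era, and is weak by modern standards):

        scp -o Compression=no -c arcfour -o MACs=hmac-md5 bigfile.dat user@server:/data/

    With 500 independent files the batch parallelises trivially: a handful of concurrent scp or WinSCP sessions lets several client cores share the encryption load instead of pinning one.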


  • dd-wrt client bridge access point lost

    - by llazzaro
    OK, I have an AP with dd-wrt firmware (I know it's not the best, but continue reading!). The AP is configured to work as a "transparent" wifi bridge, and it also has a virtual wifi interface to expand the radius of the wifi signal from that same AP. The bridge is working; computers behind the AP get IPs from the main router, which shares the internet... BUT I can't access the web GUI of the bridge AP.

    Main problem: the AP is lost. It is still working as a bridge, but I can't find it on the network (it doesn't answer on any IP!), so I can't change any configuration.

    First solution: reset the AP. But that cannot be done; the reset button doesn't work due to a bug in the dd-wrt micro firmware my Linksys WAP54G has installed (I really hate this firmware; I like the OpenWrt my main router runs much more).

    Second solution: arp -a from the main router and from computers behind the AP. It doesn't appear in the list.

    Any more ideas? The AP must be there at some level; the bridge is working. I suspect it may be sitting on an IP like 192.168.100.2, while my subnet is actually 172.16.x.x. :) Thanks!
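
    If the AP kept an address in a foreign subnet, it becomes reachable as soon as some machine has a leg in that subnet. A hedged sketch from a Linux box on the LAN (the interface name and suspected subnet are assumptions):

        ip addr add 192.168.100.5/24 dev eth0   # temporary alias in the suspected subnet
        ping -c 2 192.168.100.2                 # the address you suspect the AP kept
        nmap -sn 192.168.100.0/24               # or sweep the whole candidate subnet
        ip addr del 192.168.100.5/24 dev eth0

    Since the bridge forwards frames transparently, the probe reaches the AP's management interface as long as the guessed subnet is right; if not, repeat with other likely candidates (192.168.1.0/24 is dd-wrt's default).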


  • Network structure --> Server 2k8r2 <--> Livebox <--> Router <--> Other PCs

    - by Yusuf
    I have a Livebox connection to the Internet and I have set up my network as follows:

    - Livebox <--> Win2k8R2 server
    - Livebox <--> Netgear N150 router
    - Router <--> other PCs

    Therefore, in my LAN:

    - the Livebox has IP address 192.168.1.1
    - the router is 192.168.1.12 (when accessed from the Livebox or the server)
    - the router is 10.0.0.1 (when accessed from the PCs connected to it)
    - the server is 192.168.1.2
    - the PCs are 10.0.0.x

    I was previously using this configuration:

    - Livebox <--> Netgear N150 router
    - Router <--> Win2k8R2 server
    - Router <--> other PCs

    Everything was simple: I just had to forward all ports for incoming connections on the Livebox to the router, and then forward the specific ports to the server as needed (it must be noted that any server I run is on the Win2k8R2 machine itself). In that configuration the IP addresses were:

    - Livebox 192.168.1.1
    - Router 192.168.1.12 (when seen from the Livebox)
    - Router 10.0.0.1 (when seen from the server and the PCs connected to it)
    - Server 10.0.0.2
    - PCs 10.0.0.x

    So now, of course, my port forwarding does not work any more, since the server is not connected (directly) to the router. What I would like to know is how to configure the Livebox and router to keep the features I had before. From what I understand of networks (which is very limited, btw), I see these options:

    1. Make the router assign IPs like 192.168.1.x (but then I want the forwarding to be done from the router itself; is that possible?)
    2. The forwarding on the router to the server uses IP address 10.0.0.2. I could change it to 192.168.1.2 (is that even possible? does it work?)
    3. Forward all ports from the Livebox itself to the server, and then manage them there (is software-based port forwarding as secure as hardware-based?)


  • Offloading backups to secondary network

    - by user1467163
    I'm trying to solve a problem. Currently we are constantly backing up and have no budget for additional servers. Our production network is still 10/100 and carries VoIP, SQL, plus our backup traffic, and I'd like to offload the backup traffic onto a secondary network. All of our servers have secondary NICs that are not in use, and all support gigabit (our switching hardware does not; a topic for another day). I'd like to move my backups off the production network, but I am having a hard time getting the computers to communicate.

    I am using a Netgear GS724T switch for the backup network, chosen for cost and because I have used them extensively on networks saturated with ghosting traffic, so I know it's up to the task. I have defined a VLAN with ports that are not members of any other VLAN. All traffic is untagged on the VLAN. I have set the servers to 192.168.1.10 and 192.168.1.11 with a 255.255.255.0 netmask, and I have tried a blank gateway, using the server's own 192.168.1.x address as the gateway, and using the switch's production-side IP as the gateway. The machines cannot find each other. DNS addresses are blank because I am going purely by IP for now...

    Any ideas how to get these machines to talk? They are Windows machines, running Server 2008 R2 and 2003 R2. Thanks!
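
    For a flat two-host backup LAN the addressing side is genuinely minimal; a hedged sketch with netsh (the adapter name is a placeholder; rename the NICs first so the command hits the right one):

        netsh interface ip set address name="Backup" static 192.168.1.10 255.255.255.0
        ping 192.168.1.11

    A blank gateway is correct here, since same-subnet traffic never uses one. If ping still fails with this in place, the usual suspects are the Windows Firewall profile on the new interface and the switch's VLAN membership/PVID settings on those ports, rather than the IP configuration.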


  • Why not block ICMP?

    - by Agvorth
    I think I almost have my iptables setup complete on my CentOS 5.3 system. Here is my script...

        # Establish a clean slate
        iptables -P INPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -F    # Flush all rules
        iptables -X    # Delete all chains

        # Disable routing. Drop packets if they reach the end of the chain.
        iptables -P FORWARD DROP

        # Drop all packets with a bad state
        iptables -A INPUT -m state --state INVALID -j DROP

        # Accept any packets that have something to do with ones we've sent on outbound
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

        # Accept any packets coming or going on localhost (this can be very important)
        iptables -A INPUT -i lo -j ACCEPT

        # Accept ICMP
        iptables -A INPUT -p icmp -j ACCEPT

        # Allow ssh
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT

        # Allow httpd
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT

        # Allow SSL
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT

        # Block all other traffic
        iptables -A INPUT -j DROP

    For context, this machine is a Virtual Private Server hosting a web app. In a previous question, Lee B said that I should "lock down ICMP a bit more." Why not just block it altogether? What would happen if I did that (what bad thing would happen)? If I shouldn't block ICMP entirely, how could I go about locking it down more?
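
    For reference, the concrete harm of dropping ICMP wholesale is that error signalling breaks: in particular "fragmentation needed" messages, which path-MTU discovery depends on, so connections through smaller-MTU paths can silently hang. A hedged sketch of the middle ground, replacing the blanket accept above with per-type rules:

        # allow pings, but rate-limited
        iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 2/s -j ACCEPT
        # error types that TCP needs (includes fragmentation-needed for PMTU discovery)
        iptables -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT
        iptables -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT
        # drop the rest (redirects, timestamps, etc.)
        iptables -A INPUT -p icmp -j DROP

    With the RELATED,ESTABLISHED rule already near the top, most legitimate ICMP errors for existing connections are accepted there anyway, which is why the explicit type list can stay short.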


  • Decrease in disk performance after partitioning and encryption, is this much of a drop normal?

    - by Biohazard
    I have a server that I only have remote access to. Earlier in the week I repartitioned the two-disk RAID as follows:

        Filesystem              Size  Used Avail Use% Mounted on
        /dev/mapper/sda1_crypt  363G  1.8G  343G   1% /
        tmpfs                   2.0G     0  2.0G   0% /lib/init/rw
        udev                    2.0G  140K  2.0G   1% /dev
        tmpfs                   2.0G     0  2.0G   0% /dev/shm
        /dev/sda5               461M   26M  412M   6% /boot
        /dev/sda7               179G  8.6G  162G   6% /data

    The RAID consists of 2 x 300GB SAS 15k disks. Prior to the changes I made, it was being used as a single unencrypted root partition, and hdparm -t /dev/sda was giving readings around 240MB/s, which I still get if I run it now:

        /dev/sda:
         Timing buffered disk reads: 730 MB in 3.00 seconds = 243.06 MB/sec

    Since the repartition and encryption, I get the following on the separate partitions.

    Unencrypted /dev/sda7:

        /dev/sda7:
         Timing buffered disk reads: 540 MB in 3.00 seconds = 179.78 MB/sec

    Unencrypted /dev/sda5:

        /dev/sda5:
         Timing buffered disk reads: 476 MB in 2.55 seconds = 186.86 MB/sec

    Encrypted /dev/mapper/sda1_crypt:

        /dev/mapper/sda1_crypt:
         Timing buffered disk reads: 150 MB in 3.03 seconds = 49.54 MB/sec

    I expected a drop in performance on the encrypted partition, but not that much, and I didn't expect any drop in performance on the other partitions at all. The other hardware in the server is 2 x quad-core Intel Xeon E5405 @ 2.00GHz and 4GB RAM.

        $ cat /proc/scsi/scsi
        Attached devices:
        Host: scsi0 Channel: 00 Id: 32 Lun: 00
          Vendor: DP       Model: BACKPLANE        Rev: 1.05
          Type:   Enclosure                        ANSI SCSI revision: 05
        Host: scsi0 Channel: 02 Id: 00 Lun: 00
          Vendor: DELL     Model: PERC 6/i         Rev: 1.11
          Type:   Direct-Access                    ANSI SCSI revision: 05
        Host: scsi1 Channel: 00 Id: 00 Lun: 00
          Vendor: HL-DT-ST Model: CD-ROM GCR-8240N Rev: 1.10
          Type:   CD-ROM                           ANSI SCSI revision: 05

    I'm guessing this means the server has a PERC 6/i RAID controller? The encryption was done with the default settings during the Debian 6 installation. I can't recall the exact specifics and am not sure how to go about finding them. Thanks
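
    Two checks narrow this down. First, E5405 Xeons predate AES-NI, so dm-crypt does AES in software on a single core, and ~50MB/s is in the plausible range for that. Second, hdparm against a partition reads that partition's region of the platters, and inner tracks are slower than the outer tracks a whole-disk test starts from, which can account for much of the drop on the unencrypted partitions. A hedged sketch (cryptsetup benchmark needs cryptsetup >= 1.6, which stock Debian 6 may not ship):

        grep -m1 -o aes /proc/cpuinfo   # empty output = no AES-NI on this CPU
        lsmod | grep aesni              # is the accelerated module even loaded?
        cryptsetup benchmark            # raw cipher throughput, no disk involved

    If the benchmark's AES line sits near 50MB/s, the encrypted partition is simply CPU-bound, and only a cheaper cipher choice or newer hardware changes that.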


  • What are good systems for managing PHP/MySQL infrastructure?

    - by sbrattla
    I work at a company which is about to migrate most applications from in-house custom-built Java/Tomcat applications to Drupal. Due to company policies, applications and websites need to run on in-house servers, which means we need infrastructure for Drupal (PHP/MySQL) applications.

    This must have been solved a million times already; I believe it is what web-hosting companies do every day. Even though we work on a much smaller scale than web-hosting companies, I assume it makes sense to look at the task as if we were going to run an internal small-scale web-hosting company: the guys in IT operations would be "responsible" for "offering" web hosting, while developers would use these "services".

    We have three environments: dev(elopment), test and prod(uction). It would make sense for developers to be able to log in to a system and create/edit/delete dev and test sites as they like. Production sites should be available through the same system, but only to IT ops. We need to work with clusters of web servers, meaning that an administration system should be capable of creating/editing/deleting sites across multiple servers.

    I know there's no "this is it" answer to my question, but what would be a good place to start? Apart from the actual hardware, what would be a good administration system for this?

