Search Results



  • How to handle server failure in an n-tier architecture?

    - by andy
    Imagine I have an n-tier architecture in an auto-scaled cloud environment with, say: a load balancer in a failover pair, a reverse proxy tier, a web app tier, and a db tier. Each tier needs to connect to the instances in the tier below.

    What are the standard ways of connecting tiers so that they are resilient to failure of nodes in each tier? That is, how does each tier get the IP addresses of the nodes in the tier below? For example, if all reverse proxies should route traffic to all web app nodes, how can they be set up so that they don't send traffic to dead web app nodes, and so that they start sending traffic to new web app nodes when those are brought online?

    I could run an agent that updates the configs on all the nodes, but that seems inefficient. I could put an LB pair between each tier, so the tier above only needs to connect to the load balancers, but how do I handle the LBs themselves dying? That just shunts the problem from every node in tier A needing to know the IPs of all nodes in tier B to every node in tier A needing to know the IPs of all LBs between tiers A and B.

    Some applications can implement retry logic when they contact a node in the tier below that doesn't respond, but is there middleware that can direct traffic only to live nodes in the following tier? If I were hosting on AWS I could use an ELB between tiers, but I want to know how I could achieve the same functionality myself. I've read (briefly) about heartbeat and keepalived - are they relevant here? What are the virtual IPs they talk about, and how are they managed? Are there still single points of failure when using them?
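
    The usual building blocks are active health checks plus a floating virtual IP: each load balancer probes the nodes below it and routes only to responders, while keepalived moves a shared "virtual" IP between the LB pair via VRRP, so the tier above only ever needs one address and neither LB is a single point of failure. As a minimal sketch of the health-check half (the node IPs and /health endpoint are hypothetical, not from the question):

        # Hedged sketch: an active health-check loop of the kind an LB or
        # config-updating agent might run. Node IPs and the /health URL
        # are illustrative placeholders.
        import urllib.request

        WEB_NODES = ["10.0.1.11", "10.0.1.12", "10.0.1.13"]

        def live_nodes(nodes, port=80, timeout=2):
            """Return only the nodes that answer an HTTP health probe."""
            alive = []
            for ip in nodes:
                try:
                    urllib.request.urlopen(f"http://{ip}:{port}/health", timeout=timeout)
                    alive.append(ip)
                except OSError:
                    pass  # HTTP error, timeout, or refused: keep it out of rotation
            return alive

        print(live_nodes(WEB_NODES))

    haproxy and nginx implement the same idea natively with health-checked upstream pools, which is essentially what an ELB provides as a service.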

    Read the article

  • Graphics and USB devices freezing soon after OS loads

    - by Andrew
    I run an Ubuntu/Windows dual boot. Last night I started the upgrade to Ubuntu 12.04, and my computer has not worked since, in either Windows or Ubuntu. Here's what I got when I rebooted after the upgrade, and continue to get every time I boot:

    It gets to the GRUB screen OK. If I choose Ubuntu, I get a black screen or crazy purple lines; at first I assumed something went wrong with the upgrade (often happens). If I choose Windows, it works fine and I log in, but soon after that the graphics freeze (sometimes with purple artifacts). The keyboard and mouse (both USB) lose power at the same instant, and none of the USB ports have power to them. This happens sooner or later every time I boot. Update: the HDD also appears to lose power at the same point.

    I have tried a live CD, but my computer refuses to boot any CD, even after disabling all other boot options in the BIOS. I have disconnected everything except the keyboard, mouse, graphics card with one monitor, one RAM stick and the HDD; no change. I also took the little battery out to reset the CMOS.

    I am pretty sure that no matter how wrong the Ubuntu upgrade went, it wouldn't cause the above symptoms in Windows. So the only explanation I can think of is that a hardware failure occurred at the same time. Possible causes I can think of: a couple of days before this, I added a third screen (which worked fine); about a week before, my house lost power in a storm (with no ill effects over the following days, though). What can I do, other than buy a new motherboard/CPU and hope it works? Unfortunately I don't have another box to swap parts into to test at the moment.

    Read the article

  • How to bypass resume from hibernate [closed]

    - by Daniel Trebbien
    I am attempting to resume a Windows Vista laptop from hibernate, but the resume process seems to be stuck in an endless loop in which Windows repeatedly tries to read from the optical drive. When I press the Power On button, the screen stays black (not even the backlight turns on) and the following occurs in a loop: five seconds pass and I hear the optical drive being accessed (there's no disk in the drive, so it sounds like a short buzzing noise); two seconds pass and I hear the drive accessed again; two seconds pass and I hear it accessed a third time. So it's three short buzzing noises in a row, over and over again. Eventually I have to abruptly power off the machine.

    I have tried inserting a data CD into the drive as well as a bootable CD (a live Linux distro boot disk). For both, the optical drive spins up for a bit, but stops after Windows decides that the disk is not what it is looking for. I have since lost the Windows Vista recovery DVD, and I don't know whether inserting the recovery disk would have a different effect than the bootable CD. I have also tried pressing F8 immediately after pressing the Power On button (hoping to enter System Restore), but that had no effect.

    Is there a special key sequence that will cause Windows to bypass resuming from hibernate, effectively ignoring hiberfil.sys?

    Read the article

  • PHP/MySQL Performance Testing with Just PHP

    - by Mike Gifford
    I'm trying to diagnose a server where the website is loading very slowly, but unfortunately my client has only provided me with FTP access. I can upload PHP scripts, but can't set up any other server-side tools. I have access to phpMyAdmin, but not direct access to the MySQL server. It is also, unfortunately, a Windows server (and we've been a Linux shop for over a decade now).

    So, if I want to evaluate MySQL and disk performance through PHP on a generic server, what is the best way to do it? There are already tools like https://github.com/raphaelm/php-benchmark and https://github.com/InfinitySoft/php-benchmark, but I'm surprised there isn't something that someone has already set up and configured to just run through and do some basic testing of a server's responsiveness. Every time we evaluate a new server environment, it's handy to be able to compare it to an existing one quickly to see if there are any anomalies. I guess I'd just hoped that someone else had written up a script to do this already. I know I have, but that was before GitHub existed as a handy place to post scraps of code like this.

    Originally posted in http://stackoverflow.com/questions/12321498/php-mysql-performance-testing-with-just-php but it was recommended that I re-post it here.
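
    Absent a ready-made harness, the core of such a probe fits in a single FTP-uploadable file. A hedged sketch (the DSN, credentials, and iteration counts are placeholders; run it a few times and compare the numbers between servers rather than reading them as absolutes):

        <?php
        // Minimal timing probe: DB connect, query round-trips, disk writes.
        // Credentials below are placeholders to fill in for the target box.
        $start = microtime(true);
        $pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
        $connect = microtime(true) - $start;

        $start = microtime(true);
        for ($i = 0; $i < 100; $i++) {
            $pdo->query('SELECT 1')->fetchAll();
        }
        $queries = microtime(true) - $start;

        $start = microtime(true);
        for ($i = 0; $i < 20; $i++) {
            file_put_contents('bench.tmp', str_repeat('x', 1024 * 1024)); // 1MB each
        }
        $disk = microtime(true) - $start;
        unlink('bench.tmp');

        printf("connect: %.4fs | 100 queries: %.4fs | 20x1MB writes: %.4fs\n",
               $connect, $queries, $disk);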

    Read the article

  • VMware Virtual Infrastructure Web Access not starting on Fedora 14

    - by FusionHammer
    I am running a Fedora 14 server with VMware Server 2. When I initially installed VMware Server, everything worked fine, including accessing Web Access on ports 8333 (https) and 8222 (http). I then had to restart the server, and afterwards Web Access does not start when I run:

        /etc/init.d/vmware-mgmt start

    The VMware Server Host Agent starts [OK], but the Web Access line has no [OK] or [Failed] next to it. I also checked ports 8333 and 8222 with netstat, but neither shows up. In addition, I checked the proxy.log and client.log files in the following directory:

        /usr/lib/vmware/webAccess/tomcat/apache-tomcat-6.0.16/logs/

    The proxy.log has no entries past 9-18-2012, which is the date of the server restart, and the client.log is empty. I'm not sure what is causing this issue or which log files would help narrow down the cause. I could re-install VMware Server, but I would like to find the cause, as the same thing could happen again in the re-installed environment.

    Read the article

  • Samba and Windows 7

    - by John Gaughan
    I built a new computer with the intention of it being primarily a home file server. Here is my setup: one desktop with Windows 7 64 HP, one laptop with Windows 7 64 HP, and one desktop with Kubuntu 11.10 (the server). The two desktops use static IPs, I have hostnames mapped in the HOSTS files on all three systems, and I have the same username/password combo on all three.

    I have been trying for a while now to set up Samba so the Windows 7 systems can see and use the server. Even when I can get the server to show up, Windows is unable to log in. One of the first things I did was enable LMv2 authentication, which this version of Samba (3.5.11) supports. The workgroup is set correctly. I can normally see the server, but cannot authenticate. Windows homegroup is turned off. Pinging between machines works fine, and the two Windows 7 systems work together flawlessly.

    What I am trying to do is set up Samba for peer-to-peer networking using NTLM security and user-mode authentication. According to the documentation this is possible, but I could find no examples. In all the googling I have done, I see a lot of people asking how to set this up, and either it works for someone else and not for me (no idea what I'm missing), or it doesn't work at all. Has anyone gotten this to work? Is there a place I could download a smb.conf that is set up to work in this environment?
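
    For reference, a hedged sketch of the kind of standalone-server smb.conf this calls for (the share path and user name are placeholders, and the exact option set should be checked against the Samba 3.5.x smb.conf man page):

        [global]
            workgroup = WORKGROUP
            security = user
            # Windows 7 sends NTLMv2 by default; leave old LM auth off
            ntlm auth = yes
            lanman auth = no

        [data]
            path = /srv/samba/data
            valid users = john
            writable = yes

    The matching Unix account still needs a Samba password set with smbpasswd -a; that is a frequently missed step even when the same username/password already exists on both sides.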

    Read the article

  • windows 7 64 bit visual studio 2008 libtiff build nmake error

    - by user1244539
    I am trying to build tiff 4.0.2 on my Windows 7 x64 system with Visual Studio 2008, but the build shows errors:

        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2347) : error C2061: syntax error : identifier 'QINT'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2362) : error C2059: syntax error : '}'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2397) : error C2061: syntax error : identifier 'JOYCAPS'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2397) : error C2059: syntax error : ';'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2398) : error C2061: syntax error : identifier 'PJOYCAPS'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2398) : error C2059: syntax error : ';'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2399) : error C2061: syntax error : identifier 'NPJOYCAPS'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2399) : error C2059: syntax error : ';'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2400) : error C2061: syntax error : identifier 'LPJOYCAPS'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2400) : error C2059: syntax error : ';'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2534) : error C2146: syntax error : missing ')' before identifier 'pjc'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2534) : error C2081: 'LPJOYCAPSA' : name in formal parameter list illegal
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2534) : error C2061: syntax error : identifier 'pjc'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2534) : error C2059: syntax error : ';'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2534) : error C2059: syntax error : ','
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2534) : error C2059: syntax error : ')'
        NMAKE: fatal error U1077: "c:\program files(x86)\microsoft visual studio 9.0\vc\bin\cl.exe": return code '0x2'

    This is what I was doing:

    1. Extracted tiff 4.0.2.
    2. In a VS 2008 command prompt on x64 Win 7, set the environment for x86 by running vcvars32.bat.
    3. Changed to the tiff 4.0.2/libtiff folder.
    4. Ran nmake /f makefile.vc to create a static libtiff library.

    Following these steps on Windows XP generates the .lib file, but on Windows 7 it fails. This is the first time I'm making any .lib files.

    Read the article

  • Win7 Aero disabled due to mirror drivers following TightVNC use, cannot re-enable

    - by me_and
    I recently installed TightVNC and used it to access my desktop from my phone. When I did so, my desktop reported that Aero had been disabled because an application that didn't support it was running. I now cannot re-enable Aero, despite having closed the TightVNC server (and indeed having rebooted a number of times since). Windows 7 troubleshooting reports the following:

        Close programs using mirror drivers
        To allow Aero effects to be displayed, close any programs that use mirror
        drivers (a type of display driver), such as Windows Remote Assistance and
        Windows Live Mesh.

    The troubleshooting system appears to attempt to fix this automatically, but fails. I can see nothing in the task manager that obviously uses mirror drivers; in particular, I can see nothing that looks like TightVNC in there. Starting and exiting TightVNC makes no difference. Google has turned up nothing that looks useful so far, either, although my Google-fu isn't great and I may just be searching for the wrong things. There's another question that reports the same problem, caused by LogMeIn rather than TightVNC, but the solution listed there doesn't work: TightVNC doesn't install LogMeIn mirror drivers, for obvious reasons.

    Read the article

  • Is it possible to mount a disk image, created with dd, to a directory on a mounted external usb hdd?

    - by Keeper Hood
    I have an image of my home (/dev/sda3) partition, which I created with the dd command:

        dd if=/dev/sda3 of=/path/to/disk.img

    I deleted the home partition via GParted in order to enlarge my /dev/root partition, then recreated /dev/sda3, smaller than the partition I had backed up to the image. Since I have a 2TB external HDD, I was wondering whether I could mount the backed-up image while it sits on the external HDD and then copy the files into the /home directory. Since the external HDD would already be in a mounted state, I'm unsure whether mounting an image that lives on a mounted device is a good idea.

    I'm running Slackware 13.37 (64-bit) with ext4 on all partitions, and I resized the root partition with a GParted live CD. I tried:

        mount -t ext4 /path/to/disk.img /mnt/image -o loop

    It gave me an fs error (wrong fs type, bad option, bad superblock on dev/loop/0). Then dmesg | tail outputs:

        EXT4-fs (loop0): bad geometry: block count 29009610 exceeds size of device (1679229 blocks)

    I have no idea what to do; I want to restore my /home data from the image I backed up.

    Read the article

  • Mac Security - Which one?

    - by Bob Rivers
    Hi,

    Recently I had my credit card cloned. A few hours after shopping at an online store (one I trust and have bought from since 2006), I received a call from my bank asking if I recognized a $5,000 debt to a store(?!) called Church of Christ...

    I'm a Mac user (OS X 10.6.3). I always keep my system updated and I have the firewall enabled (on my Mac and on my broadband router), but I have decided to adopt some kind of protection. I don't want to start a passionate discussion: real threat or not, snake oil or not, I need to get my peace of mind back...

    I read this post and this one. I selected two products that I think could help me (both have more features than just an antivirus). Does anyone have feedback on Intego's VirusBarrier X6 or Trend Micro's Smart Surfing? Intego's solution seems better, but Trend Micro's name is stronger in the corporate environment, so their solution should be good too. Both have 30-day free trials, but I would like to hear something from you first. Is there any other solution I should look at?

    TIA, Bob

    Read the article

  • Best practices for settings for Oracle database creation

    - by Gary
    When installing an Oracle database, what non-default settings would you normally apply (or consider applying)? I'm not after hardware-dependent settings (e.g. memory allocation) or file locations, but more general items. Similarly, anything that is a particular requirement for a specific application rather than generally applicable isn't really useful.

    Do you separate code/API schemas (PL/SQL owners) from data schemas (table owners)? Do you use default or non-default roles, and if the latter, do you password-protect the role? I'm also interested in whether there's anywhere you REVOKE a GRANT that is installed by default. That may be version-dependent, as 11g seems more locked down in its default install.

    These are the ones I used in a recent setup. I'd like to know whether I missed anything or where you disagree (and why).

    Database parameters:

        Auditing: AUDIT_TRAIL to DB and AUDIT_SYS_OPERATIONS to YES
        DB_BLOCK_CHECKSUM and DB_BLOCK_CHECKING both to FULL
        GLOBAL_NAMES to true
        OPEN_LINKS to 0 (did not expect them to be used in this environment)
        Character set: AL32UTF8

    Profiles: I created an amended password-verify function that uses the APEX dictionary table (FLOWS_030000.wwv_flow_dictionary$) as an extra check to prevent simple passwords.

    Developer logins:

        CREATE PROFILE profile_dev LIMIT
          FAILED_LOGIN_ATTEMPTS 8
          PASSWORD_LIFE_TIME 32
          PASSWORD_REUSE_TIME 366
          PASSWORD_REUSE_MAX 12
          PASSWORD_LOCK_TIME 6
          PASSWORD_GRACE_TIME 8
          PASSWORD_VERIFY_FUNCTION verify_function_11g
          SESSIONS_PER_USER unlimited
          CPU_PER_SESSION unlimited
          CPU_PER_CALL unlimited
          PRIVATE_SGA unlimited
          CONNECT_TIME 1080
          IDLE_TIME 180
          LOGICAL_READS_PER_SESSION unlimited
          LOGICAL_READS_PER_CALL unlimited;

    Application login:

        CREATE PROFILE profile_app LIMIT
          FAILED_LOGIN_ATTEMPTS 3
          PASSWORD_LIFE_TIME 999
          PASSWORD_REUSE_TIME 999
          PASSWORD_REUSE_MAX 1
          PASSWORD_LOCK_TIME 999
          PASSWORD_GRACE_TIME 999
          PASSWORD_VERIFY_FUNCTION verify_function_11g
          SESSIONS_PER_USER unlimited
          CPU_PER_SESSION unlimited
          CPU_PER_CALL unlimited
          PRIVATE_SGA unlimited
          CONNECT_TIME unlimited
          IDLE_TIME unlimited
          LOGICAL_READS_PER_SESSION unlimited
          LOGICAL_READS_PER_CALL unlimited;

    Privileges for a standard schema-owner account:

        CREATE CLUSTER, CREATE TYPE, CREATE TABLE, CREATE VIEW,
        CREATE PROCEDURE, CREATE JOB, CREATE MATERIALIZED VIEW,
        CREATE SEQUENCE, CREATE SYNONYM, CREATE TRIGGER

    Read the article

  • How to solve SocketException: Permission denied: connect

    - by luxinxian
    I recently encountered a problem that is giving me a headache, and I need help. The system consists of two subsystems, A and B, each running on a standalone Tomcat instance, currently on the same machine. A invokes B's services via Spring HttpInvoker (i.e. over HTTP), and B likewise invokes another system's services via HTTP.

    Symptoms:

    1. The system starts and appears to work normally for around 10-15 days.
    2. After running for that period, it starts throwing: org.springframework.remoting.RemoteAccessException: Could not access HTTP invoker remote service at [http://xxx.xxx.xxx.xxx/remoting/call]; the nested exception is java.net.SocketException: Permission denied: connect
    3. Once the exception starts occurring, it happens every time, not just occasionally. (It looks as if some resource is exhausted, but CPU is < 5%, memory < 15%, network < 5%.)
    4. When the call from A to B fails, B's calls over HTTP to the external service fail too, with the same exception.
    5. Restarting both Tomcat instances makes the whole system work properly again.

    Repeatedly going through steps 1-5, I have not found the root cause.

    Environment: Windows 2008 R2, Tomcat 7.0.42, x86_64, oracle-jdk-1.7.0_40.

    Any ideas?

    Read the article

  • Improving abysmal 802.11n wireless network

    - by concept
    I am in desperate need of help to improve the abysmal performance of my 802.11n wireless network. At best I get 30Mb/s (on an internet download) from a technology that boasts 300Mb/s; even worse is the LAN, where the best I have ever gotten is 1Mb/s. It is literally quicker to copy a file to a USB stick and walk it to the other computer.

    The infrastructure is this: the AP is 802.11n-only, broadcasting at both 2.4GHz and 5GHz; a Mac with an 802.11a/b/g/n card is connected to the AP via 5GHz; a Linux box with an 802.11a/b/g/n card is connected via 2.4GHz. I have conducted the following tests: internet-based speed tests, wired and wireless, and LAN file copies, wired and wireless.

    I have read:

    http://nutsaboutnets.com/troubleshooting-wi-fi-problems/
    http://www.smallnetbuilder.com/wireless/wireless-basics/30664-5-ways-to-fix-slow-80211n--speed
    http://www.wi-fiplanet.com/tutorials/7-tips-to-increase-wi-fi-performance.html
    Slow file transfer on network between two 802.11n laptops (connected directly together via access point)
    Wireless Network Performance Issues
    Slower than expected 802.11n wireless network speeds

    I have made the following optimizations: the AP broadcasts only 802.11n on both 2.4GHz and 5GHz; the 2.4GHz radio is on the channel with the least interference (I live in an apartment with lots of APs), which did yield a 10Mb/s improvement; our AP is the only one transmitting on the 5GHz band; security is WPA Personal with WPA2 AES encryption; bandwidth is 20MHz / 40MHz (which I take to be channel bonding).

    I have tried the following with zero improvement: dropping the fragment threshold to 512; dropping the request-to-send (RTS) threshold to 512 and then to 1. I even thought of buying a frequency spectrum analyzer, until I saw what they cost!

    Speed test results:

    Linux wired: 128.40Mb/s down, 10.62Mb/s up - http://www.speedtest.net/my-result/2948381853
    Mac wired: 118.02Mb/s down, 10.56Mb/s up - http://www.speedtest.net/my-result/2948384406
    Linux wireless: 23.99Mb/s down, 10.31Mb/s up - http://www.speedtest.net/my-result/2948394990
    Mac wireless: 22.55Mb/s down, 10.36Mb/s up - http://www.speedtest.net/my-result/2948396489

    LAN NFS copy of a 53,345,087-byte (51MB) file:

    Linux-Mac NFS wired: 65.6959 Mb/sec
    Linux-Mac NFS wireless: 0.9443 Mb/sec

    All help is appreciated; even testing methods are welcome.
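
    One way to separate raw wireless throughput from NFS (and disk) overhead is a bare TCP stream test between the two machines; iperf does exactly this, and a minimal stand-in looks like the following sketch (the port and transfer size are arbitrary choices). Run it with "server" on one host, then "client <server-ip>" on the other:

        # Hedged sketch: a bare TCP throughput test, to isolate the network
        # from the filesystem. Port and sizes are illustrative placeholders.
        import socket
        import sys
        import time

        PORT, CHUNK, TOTAL = 5001, 64 * 1024, 64 * 1024 * 1024  # 64 MB transfer

        def server():
            s = socket.socket()
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(("", PORT))
            s.listen(1)
            conn, _ = s.accept()
            received, start = 0, time.time()
            while received < TOTAL:
                data = conn.recv(CHUNK)
                if not data:
                    break  # client finished early or disconnected
                received += len(data)
            secs = time.time() - start
            print(f"{received * 8 / secs / 1e6:.1f} Mbit/s")

        def client(host):
            s = socket.create_connection((host, PORT))
            buf, sent = b"x" * CHUNK, 0
            while sent < TOTAL:
                s.sendall(buf)
                sent += len(buf)
            s.close()

        if sys.argv[1] == "server":
            server()
        else:
            client(sys.argv[2])

    If this bare stream is also stuck around 1Mb/s, the problem is in the radio link or drivers; if it runs much faster, look at the NFS mount options instead.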

    Read the article

  • What could cause a huge packet loss in Ubuntu 9.10, for both wired and wireless?

    - by xzenox
    I was previously using 9.04 fine (and in fact, I am posting this from my old 9.04 live CD). I tested the following install steps in a VirtualBox VM prior to following the same ones to upgrade my laptop:

    1. Download/burn the Ubuntu minimal CD (the 12MB one)
    2. Install Ubuntu minimal
    3. sudo apt-get update
    4. sudo apt-get upgrade
    5. sudo apt-get install ubuntu-desktop ubuntu-standard

    In the VM this worked fine and I found myself with a working 9.10 Ubuntu; the network worked and I was able to test my backups and Dropbox without a hitch (the host was 9.04). When I followed the same steps on my laptop, everything worked up to the point of 9.10 being installed. As far as I can tell, everything besides eth0/wireless works, but for some reason I am unable to access the internet. Ping reports that over 99% of packets get lost (over an hour or so of pinging). This means, for example, that if I try hard enough I can load a webpage, but only at the cost of much patience...

    This happens for both a wired and a wireless connection to my WRT310N (updated with the latest firmware). At first I thought it could be related to the IPv6 issues people have been experiencing, but even after disabling IPv6 at the kernel level (through GRUB), I still get the issue. I don't think this is related to DNS issues or the like, since I get the same amount of packet loss even when I ping my ISP's gateway IP, and no DNS resolving should be required there. Access to my router works peachy, with no packet loss at all. I've tried different MTU values, to no avail.

    Note that this issue affects every web-enabled application: Firefox, ping, Synaptic, etc. The same hardware/router combo works with 9.04 but not with 9.10. In fact, when I ran step 5 after 9.10 minimal was installed, it downloaded over 400MB of packages without a hitch, so my guess is that one of the packages in ubuntu-desktop or ubuntu-standard is causing havoc. Thoughts?

    Read the article

  • Need clear steps on how to convert a Windows 2000 Server to a XenServer VM

    - by Jay
    The source system is not local. The target host running XenServer is not local. The source system is running Windows 2000 Server SP4 and has 1 disk split into 6 partitions, all NTFS:

    C: 6 GB (boot)
    D: 15 GB
    E: 6 GB
    F: 6 GB
    G: 5 GB
    H: 26 GB

    Most of the partitions are mostly full (>60%). What is the most straightforward way to do a P2V migration of this server? I can do minor database and data syncs after the P2V is successful and running as a VM within XenServer; it's just getting to that point that is unclear. Installing a Windows 2000 Server from scratch is not an option: I need to convert the existing physical server as-is into a VM hosted within a XenServer environment. I've looked at XenConvert, but it maxes out at converting 4 partitions in one shot, and I'm not certain how to account for the 2 extra partitions. I'm not familiar with XenServer, but it's my only option right now to go P2V.

    Read the article

  • How to troubleshoot connectivity when curl gets an *empty response*

    - by chad
    I want to know how to proceed in troubleshooting why a curl request to a webserver doesn't work. I'm not looking for help that depends on my particular environment; I just want to know how to collect information about exactly what part of the communication is failing, port numbers, etc.

        chad-integration:~ # curl -v 111.222.159.30
        * About to connect() to 111.222.159.30 port 80 (#0)
        *   Trying 111.222.159.30... connected
        * Connected to 111.222.159.30 (111.222.159.30) port 80 (#0)
        > GET / HTTP/1.1
        > User-Agent: curl/7.19.0 (x86_64-suse-linux-gnu) libcurl/7.19.0 OpenSSL/0.9.8h zlib/1.2.3 libidn/1.10
        > Host: 111.222.159.30
        > Accept: */*
        >
        * Empty reply from server
        * Connection #0 to host 111.222.159.30 left intact
        curl: (52) Empty reply from server
        * Closing connection #0

    So, I understand that an empty reply means curl didn't get any response from the server. No problem; that's precisely what I'm trying to figure out. But what more specific information can I derive from curl here? It was able to connect successfully, so doesn't that involve some bidirectional communication? If so, why does the response not come as well?

    Note that I've verified my service is up and returning responses, and that I'm a bit green at this level of networking, so feel free to provide some general orientation material.
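
    Reading the trace: the TCP handshake succeeded (the "connected" line), the request bytes went out, and the peer then closed the connection without sending anything back. So the failure is after connection setup, in whatever accepts the request on that port (a proxy, a filtering device, or the service itself); running tcpdump or wireshark on both ends is the standard next step to see which side closes first. A hedged sketch of reproducing the same evidence at socket level (the IP is the placeholder from the post):

        # Hedged sketch: replay curl's request with a raw socket to see
        # exactly where the exchange dies.
        import socket

        s = socket.create_connection(("111.222.159.30", 80), timeout=5)
        s.sendall(b"GET / HTTP/1.1\r\n"
                  b"Host: 111.222.159.30\r\n"
                  b"Accept: */*\r\n"
                  b"Connection: close\r\n\r\n")
        data = s.recv(4096)
        # b"" here means the peer accepted the TCP connection and then closed
        # it without sending a single byte -- the socket-level equivalent of
        # curl's "(52) Empty reply from server".
        print(repr(data))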

    Read the article

  • LSI 9260-8i w/ 6 256gb SSDs - RAID 5, 6, 10, or bad idea overall?

    - by Michael Pearson
    We're provisioning a new production server for our reasonably busy website. Our choice of host has available a 6-drive configuration with an LSI 9260-8i card. My initial thought was to fill all six bays with SSDs (Intel 520 256GB) and set them up in RAID. Good, bad, or terrible idea? Can the card handle it? Should we be using RAID 5, 6 or 10?

    This would be the first time the provider has filled all six slots of this rackmount with SSDs, so they're a bit hesitant, and I'm wondering whether somebody else with this card has done something similar in a production environment. We do about 43GB of writes per day and currently use about 300GB of storage. The server acts as webserver, database, and image store for approximately 1 million files. The plan is to underprovision the SSDs by approximately 10% to 20% to increase their overall lifespan and performance.

    The fallback option is 2x480GB SSDs in RAID 1 plus 2x1TB HDDs in RAID 1. The motivation behind the all-SSD setup is that the rental cost difference between 2 SSDs and 6 SSDs is minimal compared to the overall cost of the rental. We do not have any special high-IOPS requirements, but if the configuration is known to work, I don't see a good reason not to use it and stop worrying about separate 'fast and small' and 'slow and large' disks.

    Read the article

  • Separate computers in my apartment can't communicate to each other?

    - by Razor Storm
    In my apartment building, the management provides the network connection. I have my computer plugged into the ethernet coming out of the wall, and my friend, who also lives in the building, has his computer connected to a separate ethernet jack. As far as I know our two computers are not within a LAN, and ipconfig shows that we only have external IP addresses.

    The problem appears when we attempt direct communication between our computers. I have a hosting server set up on my machine, and my friend is unable to connect to it via my IP address; other people who do not live in the building can connect fine.

        Ethernet adapter Local Area Connection:
           IPv4 Address. . . . . . . . . . . : 204.29.113.41
           Subnet Mask . . . . . . . . . . . : 255.255.254.0
           Default Gateway . . . . . . . . . : 204.29.112.1

    His IP: 204.29.113.104. Using a full-tunnel VPN doesn't help.

    Read the article

  • Temporary boot problem after thunder storm - likely causes?

    - by alastairs
    The village where I live sat under a thunder cloud for most of Friday, and we suffered a few power fluctuations (specifically, what seemed to be split-second outages). When I got back home from work, I found that my PCs had shut down during one of these outages. When I went to boot one of them back up, I couldn't get anything to display on screen, nor did the boot seem to complete correctly. I tried a number of things - unplugging different bits of hardware, swapping graphics adaptors, etc. - to no avail, and I thought I was looking at a fried motherboard or CPU. Power seemed to be distributed correctly to the peripherals (the drives all appeared to be working), so I figured it couldn't be the PSU. Eventually I unplugged the machine from the mains and left it overnight (approx 12hrs unplugged). I tried it again this morning, and it booted up correctly. Woo-hoo!

    I have all my equipment protected by surge-protected power strips, so I don't think a spike caused these problems. Obviously it has something to do with the power fluctuations, and maybe the PSU in the problem machine got itself confused somehow. For future reference, and to help people with similar problems:

    1. What are the likely causes of the boot failure I experienced?
    2. Is a UPS a simple and cost-effective solution, or might other things help prevent this happening in future?
    3. What UPS can you recommend (my budget is limited)?

    Read the article

  • Are applications optimized for XenApp?

    - by damitamit
    Our IT dept are about to deploy virtualized apps using Citrix XenApp. One of these apps will be Dynamics AX 4.0 SP2, an ERP client (which I develop on). They have supposedly hit a roadblock because an external 'Dynamics AX consultant' has told our IT dept that Dynamics AX 4 will not work optimally on Citrix and will run very slowly, because it is not optimized for Citrix. We have it running in a test environment now and it seems OK. They have been told that the only 'solution' is to upgrade to Dynamics AX 2009, where this problem has supposedly been fixed (not a small task for my team!).

    When I heard this, I was quite surprised. From my brief knowledge of Citrix, I thought it would be application-independent. How does Citrix app virtualization work, such that one particular app would run better than others on it? Wouldn't the speed of the virtualized app just depend on the resources and network connection of the Citrix server? FYI, Dynamics AX is a 3-tier client/server system, so the client accesses an AOS application server, which in turn accesses the database.

    Please enlighten me :)

    Read the article

  • How can I have puppet deploy ssh keys for virtual users?

    - by Pheezy
    I am trying to get puppet to assign authorized ssh keys for virtual users, but I keep getting the following error:

        err: Could not retrieve catalog: Could not parse for environment production: Syntax error at 'user'; expected '}' at /etc/puppet/modules/users/manifests/ssh_authorized_keys.pp:9

    I believe my configuration is correct (listed below), but is there a syntax error or scoping issue I am missing? I would simply like to assign users to nodes and have those users automagically have their ssh keys installed. Is there maybe a better way to do this and I'm just overthinking it?

        # /etc/puppet/modules/users/virtual.pp
        class user::virtual {
          @user { "user":
            home       => "/home/user",
            ensure     => "present",
            groups     => ["root","wheel"],
            uid        => "8001",
            password   => "SCRAMBLED",
            comment    => "User",
            shell      => "/bin/bash",
            managehome => "true",
          }

        # /etc/puppet/modules/users/manifests/ssh_authorized_keys.pp
        ssh_authorized_key { "user":
          ensure => "present",
          type   => "ssh-dss",
          key    => "AAAAB....",
          user   => "user",
        }

        # /etc/puppet/modules/users/init.pp
        import "users.pp"
        import "ssh_authorized_keys.pp"

        class user::ops inherits user::virtual {
          realize(
            User["user"],
          )
        }

        # /etc/puppet/manifests/modules.pp
        import "sudo"
        import "users"

        # /etc/puppet/manifests/nodes.pp
        node basenode {
          include sudo
        }

        node 'testbox' inherits basenode {
          include user::ops
        }

        # /etc/puppet/manifests/site.pp
        import "modules"
        import "nodes"

        # The filebucket option allows for file backups to the server
        filebucket { main: server => 'puppet' }

        # Set global defaults - including backing up all files to the main
        # filebucket and adding a global path
        File { backup => main }
        Exec { path => "/usr/bin:/usr/sbin/:/bin:/sbin" }
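
    Two things are visible in the paste as given: the class user::virtual in virtual.pp opens a brace that is never closed, and the resource in ssh_authorized_keys.pp sits outside any class. A hedged sketch of one conventional arrangement (names kept from the question; whether the key resource should itself be virtual depends on how it is realized):

        # virtual.pp -- note the extra closing brace for the class itself
        class user::virtual {
          @user { "user":
            home       => "/home/user",
            ensure     => "present",
            groups     => ["root", "wheel"],
            uid        => "8001",
            password   => "SCRAMBLED",
            comment    => "User",
            shell      => "/bin/bash",
            managehome => "true",
          }

          @ssh_authorized_key { "user":
            ensure  => "present",
            type    => "ssh-dss",
            key     => "AAAAB....",
            user    => "user",
            require => User["user"],
          }
        }

        # realize the account and its key together
        class user::ops inherits user::virtual {
          realize(User["user"], Ssh_authorized_key["user"])
        }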

    Read the article

  • script calling script as other user

    - by viktor tron
    Using CentOS, I want to run a script as the user 'training', as a system service. I use daemontools to monitor the process, which needs a launcher script that is run as root:

        #!/bin/bash
        exec >> /var/log/training_service.log 2>&1
        setuidgid training training_command

    This last line is not good enough, since training_command needs the training user's environment to be set up.

        su - training -c 'training_command'

    gives 'standard in must be a tty', as su wants a tty to be present so that it could potentially accept a password. I know I could make this disappear by modifying /etc/sudoers, a la "Bash & 'su' script giving an error 'standard in must be a tty'", but I am reluctant and unsure of the consequences.

        runuser - training -c 'training_command'

    gives "runuser: cannot set groups: Connection refused". I have found no explanation of, or resolution for, this message. I am stuck. Is this really so hard to achieve? I appreciate all insight and guidance to best practice.
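
    For what it's worth, what setuidgid omits (and what "su -" adds) is the login environment, and the whole drop can be done directly in the launcher, which daemontools can exec as an interpreter script. A hedged sketch in Python of the same sequence - supplementary groups first, then gid, then uid, then environment (training_command is the placeholder from the question):

        # Hedged sketch: drop root privileges to 'training' and build a
        # minimal login environment before exec'ing the real command.
        import os
        import pwd

        rec = pwd.getpwnam("training")

        # order matters: set groups while still root, then gid, then uid
        os.initgroups("training", rec.pw_gid)
        os.setgid(rec.pw_gid)
        os.setuid(rec.pw_uid)

        # the minimal "login environment" that su - would have provided
        os.environ.update(
            HOME=rec.pw_dir,
            USER="training",
            LOGNAME="training",
            SHELL=rec.pw_shell,
        )
        os.chdir(rec.pw_dir)

        # replace this process; daemontools keeps supervising it
        os.execvp("training_command", ["training_command"])

    daemontools' setuidgid performs the uid/gid switch but deliberately leaves the environment untouched, which matches the behavior observed above.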

    Read the article

  • How to use Python to read the physical address (MAC ID) [closed]

    - by getjoefree
    I want to read the physical address (MAC) of a specific NIC. I used to get the result I wanted with SED.EXE, but SED.EXE is not available in my current environment, while Python is. How can this be done? The general situation (with no network cable plugged in, so no IP address can be obtained):

        Ethernet adapter:
           Connection-specific DNS Suffix . : Chianet
           Description . . . . . . . . . . . : Marvell Yukon 88E8040 PCI-E Fast Ethernet Controller
           Physical Address. . . . . . . . . : A4-BA-DB-9D-1E-8E
           Dhcp Enabled. . . . . . . . . . . : Yes
           Autoconfiguration Enabled . . . . : Yes

        Ethernet adapter 3:
           Media State . . . . . . . . . . . : Media disconnected
           Description . . . . . . . . . . . : Dell Wireless 1510 Wireless-N WLAN Mini-Card
           Physical Address. . . . . . . . . : 00-23-4D-D9-C0-28

    The adapters' descriptions differ, so I can use the description to fetch the corresponding physical address; going by "Physical Address" alone does not work, because the computer also has a WLAN card. I want Python to read the card information, process it, and write an output file in this format:

        SET MAC=A4BADB9D1E8E

    The old sed version was:

        ipconfig -all | sed -nrf getmac.sed | sed -e "s/-//g" > WINMAC.BAT

    getmac.sed:

        /Marvell Yukon 88E8040/ {
          n;
          s/.*: ([-0-9A-F]+)/set winmac=\1/p;
        }
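
    A hedged sketch of the same match-then-extract logic in Python (it assumes an English-language ipconfig layout like the sample above, with the Physical Address line shortly after the Description line):

        # Find the adapter whose Description matches, then emit its MAC with
        # the dashes stripped, in the SET MAC=... format.
        import re
        import subprocess

        TARGET = "Marvell Yukon 88E8040"
        MAC_RE = re.compile(r"([0-9A-F]{2}(?:-[0-9A-F]{2}){5})", re.I)

        lines = subprocess.check_output(["ipconfig", "/all"], text=True).splitlines()
        for i, line in enumerate(lines):
            if TARGET in line:
                # the Physical Address line follows within a few lines
                for follow in lines[i + 1:i + 4]:
                    m = MAC_RE.search(follow)
                    if m:
                        mac = m.group(1).replace("-", "").upper()
                        with open("WINMAC.BAT", "w") as f:
                            f.write("SET MAC=%s\n" % mac)
                        print("SET MAC=%s" % mac)
                        raise SystemExit

    Python's uuid.getnode() also returns a MAC, but you cannot choose which adapter it reads, which is why parsing ipconfig output (or querying WMI) is the usual route when the adapter matters.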

    Read the article

  • Is the sysadmin/netadmin the defacto project planner at your organization?

    - by gft74
    At my company it has somehow, over the past few years, slowly become my job to come up with a project plan, milestones and timelines for the deployment of developer applications. A typical scenario: my team receives a request for a new website/db combo and a date for deployment. I send back a questionnaire for the developer to fill out covering all the requirements for the site (SSL? db? growth projections? etc.). After I get back all the information, the head of development wants a well-developed document covering:

    - what servers it will live on
    - why those servers
    - the timeline for creating the resources
    - a step-by-step SOP for getting the application onto the server, with all related resources created (DNS, firewall, load balancer, etc.)

    Maybe I'm just whining, but it feels like this is something better suited to our project management staff (which we have) or to the developer. I understand that I need to give them a timeline for creating the resources, but this still feels like overkill; we already produce documentation on where everything lives and track configuration changes to equipment. How do other sysadmin folks handle this?

    Read the article

  • windows 2003 under Hyper-V - can't send/receive ping

    - by glaucon
    I've installed Windows 2003 x64 R2 SP2 under Hyper-V (on Windows 8 Pro). I have a NIC configured, but I can't move any traffic over it; in particular, I can't send or receive pings.

    Scoreboard:

    - A second VM running Ubuntu under the same Windows 8 host is able to send and receive pings to and from the host OS.
    - Pinging from the Windows 2003 guest to the Windows 8 host gives 'Request Timed Out'.
    - Pinging from the Windows 8 host to the Windows 2003 guest gives 'Reply from 192.168.10.107: Destination Host Unreachable'.
    - There's no problem pinging in either direction between the Ubuntu guest and the Windows 8 host.

    Environment: Integration Services are installed on the Windows 2003 guest, which needs a static IP address of 192.168.10.15. (Screenshots of the guest and host ipconfig output accompanied the original post.)

    Event logs: the only entry I can see that (a) looks significant and (b) is not related to the lack of networking was likewise attached as a screenshot; I'm not sure whether it is significant.

    Hyper-V and NICs: when the Windows 2003 guest was first booted it had no NIC. I then added a 'Legacy Network Connector', which I couldn't get Windows 2003 to recognise, so I removed it and added a 'Standard Network Connector', and at least on the surface this works... only it doesn't. The virtual network type is external. Although I've only mentioned ping, there is no other evidence of network activity either. 'Allow incoming echo request' is enabled on the Windows 2003 guest.

    HELP? What else should I look at or do to resolve this problem?

    EDIT 1: I should have said that I turned off the firewall on the W2003 server for a while and retested the pings; same result.

    Read the article
