Search Results

Search found 11093 results on 444 pages for 'issues'.

  • Can I regenerate the rsa key for SSH access to a Cisco router? Or should I completely erase the SSH config?

    - by Josh
    I have a production 2691 that I administer via telnet. I'd like to change that to SSH. Looking at the config, it looks like there have been keys generated in the past. I think the history here is SSH was set up, they had issues connecting, and fell back to telnet. There are a number of crypto entries, including the following:

      crypto pki trustpoint Gateway-2691.xxx.com
       enrollment selfsigned
       subject-name cn=IOS-Gateway-2691.xxx.com
       revocation-check none
       rsakeypair Gateway-2691.xxx.com

    I've also got this going on:

      Gateway-2691#sh ip ssh
      SSH Disabled - version 1.99
      %Please create RSA keys (of atleast 768 bits size) to enable SSH v2.
      Authentication timeout: 120 secs; Authentication retries: 3
      Gateway-2691#

    My question is simply: can I run "crypto key generate rsa" again to set it up again? Is there a way to negate or "no" all of the previous SSH config so that I can start fresh there? I may be asking the wrong questions, as I'm learning here. As for the SSH how-to, I'm sure I can find information in many places. I'm just basically wondering if I need to start fresh, or if I can pick up where the last attempt at SSH config left off.
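
    For reference, a minimal sketch of regenerating from scratch, assuming a 2048-bit modulus is acceptable (these are standard IOS commands; the key size is illustrative, not taken from the question):

      ! wipe any existing RSA key pairs, then generate a fresh one and force SSHv2
      Gateway-2691(config)#crypto key zeroize rsa
      Gateway-2691(config)#crypto key generate rsa modulus 2048
      Gateway-2691(config)#ip ssh version 2
      ! verify
      Gateway-2691#show ip ssh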

  • Upgrade or replace?

    - by Felix
    My current PC is about four years old, although I have made upgrades to it throughout its existence. The current specs are:

    - (old) Intel Pentium D 2.80GHz (32K L1 / 2M L2), Gigabyte 945GCMX-S2 motherboard
    - (old) 2.5GB DDR2 (slot0: 512MB @ 533MHz; slot1: 2GB @ 667MHz)
    - (new) HIS Radeon HD 4670 - I think this is limited by the motherboard not supporting PCIe 2.0 (?)
    - (old) WD Caviar 160GB - pretty slow
    - (new) WD Caviar Black 640GB

    (If any more specs are relevant, let me know and I'll add them.) Now, on to my question. I've been having performance issues lately, both in video games and in intensive applications. A couple of examples:

    - Android application development (running Eclipse and the Android emulator) is painfully slow (on Linux). I only realized this when, at my new job as an Android dev, both tools are MUCH quicker. (I'm not sure what CPU I have there.)
    - The guys at my new job got me NFS Hot Pursuit, in which I barely get 5-10 FPS, even with graphics options turned all the way down.

    My guess is that the bottleneck in my system is my CPU, so I'm thinking of upgrading to a quad-core i5 + new motherboard + 4GB DDR3 (or more, 'cause I know you'll all jump in and say 8GB minimum). Now:

    - Is that a good idea? Is my CPU really a bottleneck, or is the whole system too old, meaning I should replace it?
    - I run Windows 7 on the old 160GB HDD (which is on IDE, by the way). Could this slow down games as well? Should I get a new drive for Windows if I want to play new games?
    - I know nothing about power supplies. Could that be a problem / will it be a problem if I upgrade to an i5?
    - How come DiRT 2 works on full graphics settings (pretty amazing graphics, by the way) while NFS Hot Pursuit pulls only 5-10 FPS?

  • Why is my computer randomly restarting? How can I fix it?

    - by kinglime
    I have a custom-built desktop computer that I've been using for about a year. The main specs are:

    - ASUS P8Z68-V LE motherboard
    - Intel Core i7 2600k
    - Corsair Vengeance 16GB RAM
    - ASUS ENGTX570 GPU
    - Corsair TX650M PSU

    I was running an overclock of 4.4GHz on my CPU (I have a Hyper 212 EVO) and 1600MHz on my RAM, but currently have it turned off due to my issues. I am also running Windows 8, but this problem occurred in Windows 7 too.

    Basically, my issue is that, seemingly at random and with no pattern, my PC will reset itself and ASUS Anti-Surge will alert me that something went wrong. This issue is not related to system stress: I can run the machine fine for an hour maxed out on Prime95, then later be watching a mere YouTube video when it will randomly reset. This has been occurring for about the last two weeks and it seems to be getting worse. I believe it might be related to the power supply, but when I monitor it in the BIOS and in Windows it appears to be putting out the proper voltages. Also, possibly related or not, my Nvidia drivers frequently fail temporarily and then warn me of some kind of kernel error.

    If I have to buy a new power supply, that is what I'll do, but I want to make damn sure that's the only issue at hand. Thank you everyone in advance; please help me diagnose the issue and tell me what I can do to fix it. If you need any additional info about my setup, please ask.
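
    As a first diagnostic step, the unexpected resets are already being logged; Event ID 41 (Kernel-Power) is Windows' generic "rebooted without cleanly shutting down" record. A sketch of pulling the recent ones from a stock event log:

      :: show the five most recent unexpected-restart events, newest first
      wevtutil qe System /q:"*[System[(EventID=41)]]" /f:text /c:5 /rd:true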

  • How do I automatically connect my client to an ODBC data source on another machine with dynamic IP?

    - by Kdansky
    At the customer's place, we've got a Postgres DB on a server and a few clients. We connect them through ODBC drivers, and all machines run Windows (usually XP). Now we have a few annoying issues:

    - The client "forgets" some flags in the ODBC drivers, such as "ByteA as LO". Every time anything changes, we have to reset that, and type in the password, and sometimes even the IP of the server.
    - On x64 machines running Windows 7, configuring this is a pain, as the system settings dialogue will only show 64-bit connections by default.
    - Most importantly: if the server changes IP because the customer restarts or replaces a switch, all connections are lost. Annoyingly, this cannot be fixed by just correcting the IP; rather, we have to check every single place (even hba_conf), because all the settings magically disappear.

    Our customers are often very small companies, where "server" means "that one PC in the other room" rather than "Oracle mainframe in the dungeon", so we don't want to rely on them not restarting switches. Is there a better way than relying on these really unstable settings? Are these settings somewhere in a file which I could edit manually, to make fixing them easier?
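
    On the last point, a sketch of where Windows actually keeps system DSNs, assuming standard ODBC administration (these paths are stock Windows, not taken from the question):

      :: 32-bit DSNs on 64-bit Windows are managed with this dialog, not the default one:
      C:\Windows\SysWOW64\odbcad32.exe

      :: system DSNs live in the registry and can be exported, edited, and re-imported:
      reg query "HKLM\SOFTWARE\ODBC\ODBC.INI" /s
      reg query "HKLM\SOFTWARE\Wow6432Node\ODBC\ODBC.INI" /s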

  • Clone a Windows Installation to a 3TB Hard Drive; MBR to GPT

    - by DanBlakemore
    I have Windows 7 Professional 64-bit installed on my desktop. Unfortunately for me and my wallet, my hard drive is failing. I have purchased a 3TB hard drive as a replacement for my current 2TB drive. I would like to avoid as much hassle as possible in moving to this new drive, so I would like to copy my current partition to the new drive using GParted. The problem is that I suspect my current disk is MBR, and I need GPT on the new drive since it is 3TB.

    Can I simply copy the MBR partition onto the new disk and then convert the disk to GPT after the fact (can you even convert the partitioning scheme)? Or would I need to somehow copy the contents of the partition into a GPT partition on the new drive? How do I go about making this transition? Also, are there any issues I should be wary of when booting from a GPT disk? If it matters, my motherboard is 1 year old as of May, 2012.

    Edit: My motherboard is now 1 day old. My old one does not have UEFI compatibility, so I decided to upgrade to Intel today, given that I would need a UEFI motherboard to boot from my new HDD. How much can I use a dying hard drive (bad sectors according to Hitachi Drive Fitness Test)? I have assumed not at all, to be safe.
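
    On the conversion question: gdisk can rewrite an MBR partition table as GPT in place, so the clone-first-convert-after route is possible from any Linux live environment. A sketch, with /dev/sdb standing in for the new disk and a backup assumed:

      # convert the disk's MBR table to GPT in place
      sudo sgdisk -g /dev/sdb
      # print the resulting GPT table to verify
      sudo sgdisk -p /dev/sdb

    Note that Windows 7 x64 will only boot from a GPT disk on UEFI firmware, which matches the motherboard upgrade mentioned in the edit.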

  • Ubuntu + LigHTTPd: Server requests taking ages

    - by ctrl_freak
    I've had an issue since upgrading my distro a couple of weeks ago from Hardy: receiving data after making a request has increasing intervals of nothing, as you can see from the picture below.

    http://i49.tinypic.com/2w5lvr9.png

    I have since reinstalled fresh from an Ubuntu 10.04 Server (i386) disk, but am still having the same issues. I'm running a LigHTTPd, MySQL, PHP5 stack. The surprising thing is that local browsing using lynx is super fast, as expected.

    Initially, after reinstalling, I copied over the old configuration files from the previous installation, but I have since reinstalled LigHTTPd and rebuilt the config file from scratch. The only correlation I could find was that I had attempted installation of ionCube and Zend Optimizer for a script I was testing; however, I would think that could no longer have an impact, seeing as I had reinstalled the OS. I have also removed Suhosin just in case, but it made no difference.

    I'm thinking it possibly has something to do with networking, but I wouldn't know where to start. The server is manually assigned an IP by its MAC address on the router. The fact that the delay seems to grow exponentially (to a point) worries me. I've tried strace'ing the LigHTTPd and MySQL processes, but I couldn't see anything obvious, not that I'd really know what I'm looking for. RAM and CPU usage don't seem to be out of the ordinary, but I can't say they're perfect. I'm hoping someone has experienced the same, or can point me in a direction, as searching has proved fruitless since I don't know anything specific. Config files can be posted, if requested.
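
    One cheap way to narrow this down is to time the phases of a single request from another machine; if the name-lookup or time-to-first-byte figure carries the delay, that separates DNS/network stalls from lighttpd/PHP. A sketch, with the URL as a placeholder:

      curl -o /dev/null -s -w 'dns:%{time_namelookup}s connect:%{time_connect}s ttfb:%{time_starttransfer}s total:%{time_total}s\n' http://your-server/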

  • Server format & Reinstall while keeping Server & domain ID

    - by Chris
    Hi everyone, I want to reinstall my 2008 R2 server from scratch, due to multiple Active Directory issues. I have only one server running AD, and a spare machine to use if necessary. Is there a way to save just the user accounts and the domain SID, so that I can start with a clean server that uses the same name as before? I can reassign file security, but I do not want to have to rejoin all the users to a new domain. Also, all users are mapped to folders on the server. What I hope to do is a clean install of the server without having to mess with the users' machines. Can someone please tell me the procedure to accomplish this? Any help appreciated!

    Update: Thanks guys, but I could be here all day telling you every error I am getting. Can we please keep this to the question of how to do a reinstall and keep the same SID? I just want to start over without having to rejoin all the clients to a new domain. Is there a tool that can back up the server SID and the AD domain name so that I could restore them, without restoring any other data? I might not be using the correct terminology here, but hopefully you understand what I am asking. Thanks
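
    For reference, the supported way to carry a domain's identity (SID, accounts, domain name) across a rebuild is a system state backup and restore rather than copying the SID by hand; a sketch, assuming a spare volume at E: (the drive letter and version timestamp are placeholders). The caveat is that this restores the existing AD database wholesale, which may bring the current AD issues along with it:

      wbadmin start systemstatebackup -backupTarget:E:
      :: after the reinstall, boot into Directory Services Restore Mode, then:
      wbadmin start systemstaterecovery -version:MM/DD/YYYY-HH:MM -backupTarget:E: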

  • Windows Server 2008 R2 Standard to Enterprise Problems

    - by boburob
    A few months ago I set up a Citrix XenApp cluster running on Windows Server 2008 R2 Standard Edition using the temporary 180-day license key. Recently the company bought a Windows Server 2008 R2 Enterprise DataCenter license, which means I need to upgrade the Windows edition from Standard to Enterprise.

    I attach the disk to the VM and start the upgrade process through XenCenter; it runs through all checks, unpacks all Windows files, and seems to create a Windows Setup partition. It then reboots, tries to boot into this partition, and I get a blue screen telling me to CHKDSK the hard drive, with the following error message:

      STOP: 0x0000007B

    As XenApp is already set up and working, I really do not want to go down the route of rebuilding this server (I already had to do that once, due to issues with XenApp). The server had 8GB of RAM assigned to it; I have tried reducing this to 2GB, as I read that can cause an issue. Also, I can boot back into the Windows Server 2008 R2 Standard partition without any problems.

    Update: I have managed to get around the urgency by re-arming the license, giving me another 180-day trial, but it would be nice to work out why this is happening!
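
    As an aside, Standard-to-Enterprise on 2008 R2 can also be done in place with DISM, with no setup media or reboot into an installer, which sidesteps the 0x7B boot problem entirely. A sketch from an elevated prompt (the product key below is a placeholder):

      DISM /online /Get-CurrentEdition
      DISM /online /Get-TargetEditions
      DISM /online /Set-Edition:ServerEnterprise /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX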

  • Linksys SFE2000 Interface

    - by boburob
    I have a real problem with a layer 3 Linksys switch. First, every time it loses power, it seems to reset back to an older config. Not only this, but when this happens it loses the interface settings on one subnet. This would not be a problem, except that I am completely unable to get to the web interface on the working side: it allows me to log on and then just displays a blank screen. I have tried this in:

    - IE6, IE7, IE8, IE9
    - Firefox 3.5
    - Opera
    - Chrome

    All with the same results, except for Opera, which loads half the interface but nothing I can really use. I really need to get onto this switch so I can sort out routing and VLAN-tagged ports, so if anyone has any ideas on either of these issues please let me know ASAP! Thanks! Also, due to its location and my lack of laptops with serial ports, I cannot PuTTY into it.

    Update: Looked into this a bit more, and it appears this model of switch does not save the current config across a reboot unless you explicitly save it yourself, which explains the first issue; the broken web interface is more worrying, however!
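
    On the first issue, if the CLI is reachable over telnet/SSH (no serial cable needed), persisting the running config on these small-business switches is typically a one-liner; sketched here on the assumption that the SFE2000 follows the usual Cisco/Linksys CLI:

      copy running-config startup-config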

  • Query specific nameserver for a particular domain upon VPN connect

    - by MT
    Some background: I have a work laptop with Ubuntu 9.10 on it. I have a small network at home where I've been running some basic services (for myself/my family) for some 10 years. On my home network there is a nameserver (Fedora) running BIND 9 with two "views". One view is the "outside" view, and it provides name resolution (to the Internet at large) for email, a wiki, and a couple of blogs. The "inside" view provides name resolution (to the internal RFC1918 addresses of these servers) as well as for all the inside hosts, network equipment, etc. I connect with an OpenVPN client to my home network from outside (such as from work).

    What I'd like to be able to do is resolve names on my internal network across this VPN (so I get the RFC1918 "inside" responses) without fully changing my resolver to the DNS server at my house. For example, if I connect to the VPN from work, I can change my resolver (by editing resolv.conf) to the DNS server at my house (across the VPN) and then successfully resolve all of the inside DNS names on my home network. The issue is that I'm then no longer able to resolve "inside" names provided by my work's DNS servers (because I'm using my home DNS server). Alternatively, I can connect to the VPN and access my home servers via IP addresses directly, but this is inconvenient and causes issues with Apache name-based hosting (among other things).

    In the end, the effect I'm trying to achieve is as follows:

    - When I connect to the VPN, I automatically start sending DNS requests for *.myhomedomain.com to my home nameserver, but any other requests continue to go to the nameserver I was using before (the one I received on my company LAN via DHCP).
    - When I disconnect the VPN, requests for *.myhomedomain.com go back to the local LAN DNS server (i.e. all requests go there, as they do now).

    I'm looking for suggestions as to how this can be accomplished.
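
    A local dnsmasq instance handles exactly this kind of per-domain forwarding; a minimal sketch, with 10.0.0.1 standing in for the home nameserver's VPN-reachable address:

      # /etc/dnsmasq.conf
      # send only this domain to the home nameserver...
      server=/myhomedomain.com/10.0.0.1
      # ...everything else follows the normal (DHCP-provided) servers, which
      # dnsmasq reads from a resolv file the system keeps up to date

    With resolv.conf pointing at 127.0.0.1, the VPN's up/down scripts only need to add or remove that server= line (or start/stop dnsmasq) to get the connect/disconnect behavior described above.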

  • InstallShield or Windows installer corrupted

    - by Bobby S
    Just recently I've been unable to install any software on my Windows 7 machine. Anything that uses InstallShield or the Windows Installer will just hang or give a weird error. I've noticed there will be many duplicate isbew64.exe processes (like 25) that launch and then just sit there, or else a lot of msiexec.exe *32 processes, depending on what I'm trying to install.

    One piece of software specifically is the Logitech Harmony software. It gives me an *is_string_not_defined* error, saying:

      c:\program files (x86)\:\ the filename, directory name, or volume label syntax is incorrect

    The other thing I was trying to install was Battlefield: Bad Company 2; that just hangs as well, and then leaves all the Windows Installer processes running in the background after I quit the install process. Very odd. I've checked well and googled these issues; it doesn't appear to be any sort of malware problem. I feel like it's related to some kind of corrupted installer application. I've rebooted and deleted the InstallShield folder in Program Files/Common Files, as some places online suggested, but to no avail. I have no idea what to do. Any ideas?
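
    For the Windows Installer half of the problem, one low-risk step worth trying is re-registering the service (these are standard msiexec switches; run from an elevated prompt):

      msiexec /unregister
      msiexec /regserver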

  • Linux iptables / conntrack performance issue

    - by tim
    I have a test setup in the lab with 4 machines:

    - 2 old P4 machines (t1, t2)
    - 1 Xeon 5420 DP 2.5GHz, 8GB RAM (t3), Intel e1000
    - 1 Xeon 5420 DP 2.5GHz, 8GB RAM (t4), Intel e1000

    The goal is to test Linux firewall performance, since we got bitten by a number of SYN-flood attacks in the last months. All machines run Ubuntu 12.04 64-bit. t1, t2, and t3 are interconnected through a 1Gb/s switch; t4 is connected to t3 via an extra interface. So t3 simulates the firewall, t4 is the target, and t1 and t2 play the attackers, generating a packet storm through t3 (192.168.4.199 is t4):

      hping3 -I eth1 --rand-source --syn --flood 192.168.4.199 -p 80

    t4 drops all incoming packets to avoid confusion with gateways, performance issues on t4, etc. I watch the packet stats in iptraf. I have configured the firewall (t3) as follows:

    - stock 3.2.0-31-generic #50-Ubuntu SMP kernel
    - rhash_entries=33554432 as a kernel parameter
    - sysctl as follows:

      net.ipv4.ip_forward = 1
      net.ipv4.route.gc_elasticity = 2
      net.ipv4.route.gc_timeout = 1
      net.ipv4.route.gc_interval = 5
      net.ipv4.route.gc_min_interval_ms = 500
      net.ipv4.route.gc_thresh = 2000000
      net.ipv4.route.max_size = 20000000

    (I have tweaked a lot to keep t3 running while t1+t2 send as many packets as possible.) The results of these efforts are somewhat odd:

    - t1+t2 manage to send about 200k packets/s each. t4 in the best case sees around 200k packets/s in total, so half of the packets are lost.
    - t3 is nearly unusable on the console, even though packets are flowing through it (high numbers of soft-irqs).
    - The route cache garbage collector is nowhere near predictable, and in the default settings it is overwhelmed by very few packets/s (<50k packets/s).
    - Activating stateful iptables rules makes the packet rate arriving on t4 drop to around 100k packets/s, effectively losing more than 75% of the packets.

    And this - here is my main concern - with two old P4 machines sending as many packets as they can, which means nearly everyone on the net should be capable of this. So here goes my question: did I overlook some important point in the config or in my test setup? Are there any alternatives for building a firewall system, especially on SMP systems?
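
    Two mitigations that commonly come up for this workload, sketched with illustrative values (both assume the nf_conntrack module is loaded; measure before and after):

      # grow the conntrack table and its hash so stateful rules stop thrashing
      sysctl -w net.netfilter.nf_conntrack_max=1048576
      echo 262144 > /sys/module/nf_conntrack/parameters/hashsize

      # or exempt the flooded port from connection tracking altogether
      iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK
      iptables -t raw -A OUTPUT -p tcp --sport 80 -j NOTRACK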

  • Glassfish and SSL [closed]

    - by Richard
    I'm struggling to get SSL working on Glassfish 3.1.1. I've been following tutorials like http://javadude.wordpress.com/2010/04/06/getting-started-with-glassfish-v3-and-ssl/ and SO posts like "Issues with setting up SSL on Glassfish v3". The above links are for information only; I've summarised what I've done below. As far as I can tell I'm doing everything correctly, but I'm getting this error:

      SSL configuration is invalid due to No available certificate or key corresponds to the SSL cipher suites which are enabled

    Some background on what I have done:

    - My cert is from GoDaddy. I generated the CSR from a new keystore (keystore.jks), then imported the resulting certs back into the same keystore, and set the keystore password to the same password as the GF master password.
    - I then created a new SSL listener in GF and pointed it at my keystore file (which I copied into domains/domain1/config). I set the Nickname to the alias of my cert (which is something like 'mydomain.org', i.e. the name I get when I run keytool -list).
    - In the ciphers section of the network listeners page, I left the defaults in place (empty, which means all ciphers are available, I think).
    - In domain.xml I replaced all instances of s1as with 'mydomain.org'.

    This is the question: what exactly is causing the error highlighted above? I'm guessing it's a mismatch between my listener config and the aliases in my keystore, or something similar, but I'm not really sure what. Thanks
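
    A quick way to test the mismatch theory is to list the keystore and check two things: that the alias exactly matches the listener's Nickname, and that the entry type is PrivateKeyEntry (an imported cert without its private key shows up as trustedCertEntry and cannot serve SSL):

      keytool -list -v -keystore domains/domain1/config/keystore.jks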

  • What can be done to improve time synchronization on networks with sporadic internet access?

    - by anregen
    I'm looking for advice setting up time servers for a very atypical network:

    - I support many closed networks that have occasional access to the internet. A network would get access most days for a few hours, but would frequently go 1-3 weeks blacked out.
    - The computers/servers on this network are mostly *nix-based, but not all the same flavor.
    - The entire network is mobile, so when it connects, it will have very different hops/latency to internet time servers.
    - The servers on the closed network are powered off frequently (at least daily).

    Right now, my gut tells me to use NTP (because I hate re-learning all the stuff that someone else already got working pretty well), but I have several issues and am looking for someone with experience in this type of strange situation. I currently have no solution in place; I'm simply letting the internal clocks drift. This results in errors of ~600s in a majority of networks. I have seen mismatches worse than 10,000s.

    - Is there something "better" than NTP in this situation? I know NTP likes to have very frequent, consistent access to servers that give nearly identical answers. I won't have that.
    - How many internal NTP servers should I configure, so that during periods of internet blackout I have internal time that is consistent within the closed network?
    - There is no human access. No matter how large the mismatch, the servers must attempt to correct themselves.
    - Discrete steps are very bad. No matter how large the mismatch, the correction must be "slewed", not "stepped". I understand that this could take many hours to correct.
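
    For the slew-only requirement specifically, ntpd can be told never to step, whatever the offset; a fragment of ntp.conf sketching that (the pool hostnames stand in for whatever servers are reachable during the online windows):

      # a step threshold of 0 disables stepping entirely; all corrections are slewed
      tinker step 0
      server 0.pool.ntp.org iburst
      server 1.pool.ntp.org iburst
      driftfile /var/lib/ntp/ntp.drift

    The trade-off is slew rate: at the standard maximum of 500 ppm, working off a 600 s error takes roughly two weeks, so expect long convergence after a blackout.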

  • ASA access lists and Egress Filtering

    - by Nate
    Hello. I'm trying to learn how to use a Cisco ASA firewall, and I don't really know what I'm doing. I'm trying to set up some egress filtering, with the goal of allowing only the minimal amount of traffic out of the network, even if it originated from within the inside interface. In other words, I'm trying to set up dmz_in and inside_in ACLs as if the inside interface is not too trustworthy. I haven't fully grasped all the concepts yet, so I have a few issues.

    Assume that we're working with three interfaces: inside, outside, and DMZ. Let's say I have a server (X.Y.Z.1) that has to respond to PING, HTTP, SSH, FTP, MySQL, and SMTP. My ACL looks something like this:

      access-list outside_in extended permit icmp any host X.Y.Z.1 echo-reply
      access-list outside_in extended permit tcp any host X.Y.Z.1 eq www
      access-list outside_in extended permit tcp any host X.Y.Z.1 eq ssh
      access-list outside_in extended permit tcp any host X.Y.Z.1 eq ftp
      access-list outside_in extended permit tcp any host X.Y.Z.1 eq ftp-data established
      access-list outside_in extended permit tcp any host X.Y.Z.1 eq 3306
      access-list outside_in extended permit tcp any host X.Y.Z.1 eq smtp

    and I apply it like this:

      access-group outside_in in interface outside

    My question is: what can I do for egress filtering? I want to allow only the minimal amount of traffic out. Do I just "reverse" the rules (i.e. the smtp rule becomes "access-list inside_out extended permit tcp host X.Y.Z.1 any eq smtp") and call it a day, or can I further cull my options? What can I safely block? Furthermore, when doing egress filtering, is it enough to apply "inverted" rules to the outside interface, or should I also look into making dmz_in and inside_in ACLs? I've heard the term "egress filtering" thrown around a lot, but I don't really know what I'm doing. Any pointers towards good resources and reading would also be helpful; most of the ones I've found presume that I know a lot more than I do.
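
    To make the "minimal egress" idea concrete, a deny-by-default inside_in sketch might look like the following; the 10.0.0.0/24 inside network and 10.0.0.2 DNS server are placeholders, not taken from the question:

      access-list inside_in extended permit udp 10.0.0.0 255.255.255.0 host 10.0.0.2 eq domain
      access-list inside_in extended permit tcp 10.0.0.0 255.255.255.0 any eq www
      access-list inside_in extended permit tcp 10.0.0.0 255.255.255.0 any eq https
      access-list inside_in extended deny ip any any log
      access-group inside_in in interface inside

    The explicit deny-with-log at the end is what makes culling practical: anything legitimate that breaks shows up in the logs and can be permitted one rule at a time.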

  • What are the best configuration settings for WordPress and MySQL on a Win2008 + IIS7 stack?

    - by holiveira
    I currently have four blogs that use WordPress, running at a shared hosting company. These blogs get a considerable number of visits, and I'm constantly receiving warnings from the hosting company saying that I'm consuming too much server CPU. Considering that I have a dedicated server at another company with plenty of idle resources (a quad-core Xeon 2.5GHz and 8GB of RAM, running Win2008), I'm planning to move the blogs to this server in order to have some more freedom. I currently use this server to host some web applications using ASP.Net and SQL Express. I've installed a blog to test and it worked fine, but some issues appeared and raised some questions in my mind:

    - How do I properly set the permissions on the folders used by WordPress plugins? I mean, what permissions should I set for the IIS_User in some folders so that the plugins work correctly?
    - What's the best caching plugin to use, considering this is a Windows server? At the previous hosting company I used WP Super Cache, but that was a Linux stack. Or should I ignore the caching plugins and use the Dynamic Caching feature of IIS7?
    - How can I optimize the MySQL server running on this machine (especially the settings regarding memory and caching)?
    - How can I protect the admin folders against hacker attacks?

    I know some people will advise me not to run WordPress on a Windows stack, but that's my only choice. I wouldn't even know where to start managing a LAMP stack, and I have neither the time to do so nor the money to rent another server.
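
    On the MySQL question, a starting-point fragment for my.ini on a box with memory to spare might look like this (the values are illustrative guesses for an 8GB machine serving mostly-read blog traffic, not tested recommendations):

      [mysqld]
      key_buffer_size=256M
      query_cache_type=1
      query_cache_size=64M
      table_open_cache=512
      tmp_table_size=64M
      max_heap_table_size=64M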

  • Revamping an old and unstable office IT-solution using Windows Server and OpenVPN

    - by cmbrnt
    I've been given the cumbersome task of totally redoing the IT infrastructure for a customer's office. They are currently running Windows XP all over, with one computer acting as a file server with no control over which users have access to which files, and so on. To top it off, this file server also functions as a workstation, which means it gets rebooted every time the user notices some sluggish behavior or has problems with Flash games. To say the least, this isn't working for them.

    Now, I've got a very slim budget, but I need to set up a new server, and I wish to run Windows Server 2008 on it. I also need the ability to access the network remotely via VPN. Would it be a good idea to install VMware ESXi 4.1 onto the new server, and then run Windows Server 2008 as well as a separate Debian install for OpenVPN on it? I don't like the domain controller for the future AD also running a VPN server, because of stability issues when something goes to hell with either of them. There will be no redundancy, though. However, I'm not sure if there is something to gain by installing a VPN solution on the Windows Server itself when it comes to accessing file shares on the network via VPN.

    I don't know how to enable users logging in via the VPN to access the remote files, since they will be accessing the network from their own home computers (which is indeed a really bad idea, but this is what I've got to work with). They won't be logged in to the Windows domain, but rather their home workgroups. I need to be able to grant access to files in certain directories based on the logged-in AD user, but not every computer will necessarily be configured to log into the domain. I'm not sure how to explain this in a good way, but I'd be happy to clarify if something's not clear. Any help would be great, because I've got a feeling that I can't do this without introducing a bunch of costly new rules when it comes to their IT solution. I'd rather leave that untouched and go on my merry way to the next assignment.
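
    For the separate Debian VPN VM, the OpenVPN side is only a handful of directives; a minimal server.conf sketch (the addresses, certificate paths, and internal subnet are all placeholders):

      port 1194
      proto udp
      dev tun
      ca ca.crt
      cert server.crt
      key server.key
      dh dh1024.pem
      server 10.8.0.0 255.255.255.0
      # push the office subnet and the internal DNS to VPN clients
      push "route 192.168.0.0 255.255.255.0"
      push "dhcp-option DNS 192.168.0.10"
      keepalive 10 120
      persist-key
      persist-tun

    File-share access then reduces to the Windows side of the question: clients reach the server's IP over the tunnel, and SMB will prompt for AD credentials even from workgroup machines, so per-directory NTFS permissions granted to AD users still apply.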

  • 2 Computers, same network, different outgoing speeds when uploading to internet?

    - by user117339
    I have 2 work machines in my office, a PowerMac G5 and a MacBook Air, both behind an IPCop firewall. The PowerMac is connected through a gigabit switch; the MacBook Air is connected through a Netgear 802.11g access point that is then plugged into the gigabit switch. There is also a FreeNAS box; both machines are able to read and write files to it at close to their pipe speeds.

    The main problem is when I am trying to upload files to the internet at large. The G5 only hits 0.1-0.25 Mbps, while the MacBook is able to hit 2-3 Mbps. The setup (G5 / IPCop / network) has been the same for 5 years; the issues with the internet speed started about 3 months ago. I hadn't tested on the MacBook at that point. I complained to the ISP; they said their modem needed a firmware update, did that, and nothing changed. I reset IPCop, turned off Squid, etc. No change. The ISP switched the office over to a better plan with a theoretical 6 Mbps up; still no change. At that point I tried testing the MacBook, and lo and behold, there's the speed. But why?

    I have tried changing out everything: cables, switches, using another ethernet port on the G5, wiping the system, using DHCP, using manual IPs, changing DNS servers, etc. Nothing works. I figured that if there were something horribly wrong with the network, then I would find a similar issue internally, but that side is perfect: iperf, ping, etc. show no dropped packets and near saturation of the internal network. I'm at a loss as to what the heck is going on. Any ideas would be appreciated!

    (Screenshots of speedtest.net results for the G5 and the MacBook Air were attached here.)

  • HSphere - Only sees Apache 2 Test Page after forced shutdown?

    - by Darkwoof
    Hi, I have a dedicated server running on a Dell PowerEdge 850 with CentOS 4.4 and HSphere 3.0 Patch 6, colocated at a datacenter. Last night my hosting company had to make a scheduled change to the power bar, and I gave them the go-ahead to shut down the server and bring it up when they were done. Since they do not have admin access to the machine, I suppose they did a forced shutdown.

    When the machine was brought up, I found that all my domains (and sub-domains) now point to an "Apache 2 Test Page" instead of the pre-configured sites that were running prior to the shutdown. This apparently only affects the standard sites running on port 80; my Webmin instance running at port 1000 is still accessible, for example, as is my HSphere control panel running at port 8080. I've checked the config settings for each of the sites using the HSphere UI and didn't find anything wrong. I've also tried rebooting the server via SSH, which does not rectify the problem. I've previously done reboots with no issues; the sites would just come right back up afterwards, but not this time. I'm guessing some configuration file got corrupted or overwritten this time?

    Anyone with experience with HSphere who can offer some advice on what happened and how to solve it? Thanks. (I do not have an active support agreement for HSphere since Parallels took over and increased the minimum license to 200; I only had a 25 license, for use by family and friends.) Thanks in advance.
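
    Since the stock test page usually means the per-site virtual host configs never got loaded, one quick check (standard Apache, nothing HSphere-specific) is to list what the running binary actually parsed:

      # dump the parsed virtual host configuration; sites missing from this
      # output have config files that are not being included at startup
      httpd -S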

  • Installing a new Motherboard in a HP xw6200 case

    - by thing2k
    I have an HP xw6200 workstation that is rather long in the tooth, and with 2 physical CPUs it is quite inefficient. So the plan was to upgrade the internals. Nothing special:

    - AMD Athlon II X4 640
    - ASUS M4A78KT-M LE (mATX)
    - 2x 2GB DDR3 1333MHz RAM

    3p under my £150 budget. The issues:

    - The pin connector for the front panel isn't a good fit, but I can trim it to size.
    - The PSU has an 8-pin power connector; unsurprisingly, the new board has a 4-pin socket. The pins do line up, but I would have to cut the connector in half to fit.
    - Finally, due to the weight of the heat-sinks, they are screwed directly into the case. It turns out that these screws also lock the motherboard in place: to remove it, you remove the heat-sinks, slide the motherboard across, and lift it out. I tested the new board for fit, and while it slots in fine, it's not secure. There is nowhere to screw the board down; it is just held in place with plastic standoffs.

    The only idea I had was wedging something between the side of the motherboard and part of the case. Any suggestions?

  • Windows mounted network drives slow after upgrading switch

    - by Kver
    On our small business network, our old 10/100 consumer-grade switch gave up the ghost, and we replaced it with a proper business-grade gigabit switch. After wiring it in, our Linux and Mac users immediately got back to working off of network drives, but 2 of our 3 Windows 7 PCs have suddenly experienced a tremendous slowdown with mapped network drives: Windows becomes stuck "discovering" a folder, causing applications to freeze when trying to open files. It will instantly display and browse files, but the moment you try to open one, the bug hits. (The third Windows machine has no issues.)

    To work around this, we have our users copying files to the desktop, but it can take a few minutes while Windows is stuck "calculating" the time the copy will take. These aren't big files, mostly Excel sheets under 500KB; these operations are instant on Linux and Mac.

    I've tried remapping the drives, mapping to different drive letters, rebooting, etc. I'm at a loss, because switches are mostly transparent, and it's only after the switch was replaced that the Windows PCs started acting up. What black-magic voodoo am I missing to make Windows work? Thank you.
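
    Two Windows 7 settings frequently implicated in exactly this symptom after a gigabit upgrade are TCP receive-window auto-tuning and receive-side scaling; disabling them is reversible, so a cautious test (elevated prompt, one change at a time, retest in between) would be:

      netsh interface tcp set global autotuninglevel=disabled
      netsh interface tcp set global rss=disabled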

  • NAS for Mac OS X Server

    - by SamAdmin
    I'm using Mac OS X Server and want to allow users who connect to their network accounts to store their data on a NAS drive. I want the users to authenticate against the Lion server, as this allows for better policies and management for me, and for their AFP share to be located on the NAS drive. I've looked into home directories and network logins, but I don't want the users to land in a different login environment: just authentication against their account on the Lion server, with their Finder taking them to their own storage area, located on the NAS drive.

    Currently I am using FreeNAS for both authentication and storage, but there are getting to be far too many people for me to manage each AFP share and account, and using only FreeNAS is extremely limiting for expansion; if something goes wrong with that one box, the entire system goes down. Using the Lion server for user accounts and policies will be much better for this expanding business.

    I have looked into LDAP, using the Lion server as an LDAP server for FreeNAS to authenticate against, but I have had issues with that, and thought a different approach could be better, coming from the other side of the situation: providing the account with somewhere to store data, rather than the AFP share authenticating against an LDAP server. Am I wrong to try it this way? Is it possible to logically add storage to a Mac OS X Server so that it is recognised as a local drive, and so can be used for network accounts?
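
    On the last question, one way to see whether the idea holds up is simply to mount the NAS volume on the server over AFP and point a test user's folder at it; a sketch, with the server and share names as placeholders:

      # mount the FreeNAS share on the Lion server
      mkdir -p /Volumes/userdata
      mount_afp "afp://admin:password@freenas.local/userdata" /Volumes/userdata

    The caveat is that a network mount is not quite a local drive: it has to be mounted (and its credentials available) before user logins occur, which is worth testing before committing to this layout.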

  • Developing a high-performance and scalable Zend Framework website

    - by Daniel
    We are going to develop a classified-ads website like http://www.gumtree.com/ (it will not be exactly like it; that's just to give you an idea), and we are having some issues regarding performance and scalability. We are planning to use Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application.

    I was thinking of making this a RESTful application (with Zend Framework) + NGINX (or maybe MongoDB) + Memcache (or eAccelerator, though I understand this creates problems with scalability on multiple servers) + jQuery, with a CDN for static content, a server for images, and a scalable server for the requests and the rest. My questions are:

    - What do you think about my approach?
    - What solutions would you recommend on the server side (MySQL, NGINX, MongoDB, or pgsql) for a scalable application, expected to have a lot of traffic, using PHP? I would be interested in your approach.

    Note: I'm a Zend Framework developer and don't have much experience with the server side (to determine what would be the best solution for my scalable application).

  • Emails sent from Coldfusion using the same SMTP/Exchange server works from one machine but fails for another

    - by Peter Herdenborg
    First, apologies if this question is too vague or has too little information to really be answerable. I am not normally the one working on these issues, and I don't have full access to the environment. However, the hosting provider seems to be having a hard time tracking down the issue, so I am hoping that someone can at least provide me with some qualified guesses about the most likely problem. Here goes:

    A client I work for has a hosted IT environment based on virtual machines running Windows 2008 R2 Standard. Our website, based on ColdFusion 9, was recently migrated from one virtual machine to another, and though ColdFusion is configured in exactly the same way, using the same SMTP server (i.e. the client's Exchange server, hosted in the same environment and in the same AD as both web servers), sending emails to external recipients is no longer working. It still works fine when testing from the old machine. This is what I've learnt so far (all emails are sent using a valid from-address on the client's domain):

    - Emails sent to other recipients on the same domain are delivered without any problem.
    - Emails sent to external recipients on other domains are never delivered.
    - When sending emails to both internal and external recipients, no emails are delivered.
    - When receiving one of these emails at an internal address, the sender is now indicated as "[email protected]", while when sent from the old machine it used to say just "sender". This hints, to me, that the Exchange machine "recognizes" the old web server, while the new one is a stranger to it.
    - In ColdFusion's mail log, all messages appear to be successfully delivered to the SMTP server.

    Any ideas on what settings to look at, what log entries to search for, or how to compare the old web server with the new one will be highly appreciated.
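
    Those symptoms usually point at relay permissions on the Exchange receive connector covering the new server's IP; a first look from the Exchange Management Shell (a read-only, standard cmdlet) would be:

      Get-ReceiveConnector | Format-List Name,Bindings,RemoteIPRanges,PermissionGroups

    If the old web server's IP appears in a connector's RemoteIPRanges but the new one's does not, adding it (or granting the same relay permission) is the likely fix.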

  • How to track things that SHOULD happen, but might not have

    - by Kamiel Wanrooij
    I am running into a couple of issues with some applications we've deployed and maintain. I have the feeling we have approached this with some anti-patterns up to now, but I would like to see how to make this more flexible and stable.

    In one situation, we have a server at a client which pushes data to us to parse every night (yes, Windows Task Scheduler). This is highly unstable, however, so once a month or so this doesn't happen, for reasons out of our control. This heavily impacts our business, since we run with stale data in that situation. In another scenario, we have a lot of background job processes that should be running. We already keep them up using bluepill ( http://www.github.com/arya/bluepill ), but obviously restarts happen, both automatic and manual, and people forget things or systems mess up.

    What I would like to track is events that should occur or things that should exist - like the existence of a process, the execution of a program, or the creation/age of a file - and raise alerts when they don't happen or don't exist. We develop most things in Ruby on Rails, use NewRelic, bluepill and Munin, and run on Ubuntu. I've been toying around with counting "ps aux | grep processname | wc -l" in Munin scripts, or capturing the age of a file and raising alerts over 24-26 hours, stuff like that. Is there better tooling to track things that should happen, and raise alerts if they don't?

    P.S. I know some things are suboptimal, like manually having to define bluepill for applications and then forgetting to do so. The same goes for the push-based approach of the first application; a dedicated daemon on the client side that manages the push, which we control and whose connection to us we can track, might be a much better solution.
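
    Absent better tooling, the two checks described above fit in a few lines of shell that Munin or cron can run; a minimal sketch (the paths and process names are placeholders):

      #!/bin/sh
      # alert if the nightly data push has not produced a file in the last 26 hours
      if ! find /data/incoming -name '*.csv' -mmin -1560 | grep -q .; then
          echo "CRITICAL: no fresh data file in /data/incoming"
      fi
      # alert if a required background worker is not running
      if ! pgrep -f delayed_job >/dev/null; then
          echo "CRITICAL: delayed_job worker not running"
      fi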
