Search Results

Search found 2804 results on 113 pages for 'jon story'.

Page 89/113

  • Can I use a CNAME with an IP address? Why does it work (sometimes)?

    - by Maciek Sawicki
    I believe the easiest answer to the first question is "No, you have 'A' records for this", but I accidentally set up a subdomain using a CNAME pointing to an IP address and it worked on a few computers in my office. I wonder how that was possible? Now, when I check it from home, I get the following error:

        beast:~ viroos$ host somesubdomain.somedomain.com
        Host somesubdomain.somedomain.com not found: 3(NXDOMAIN)

    I'm 100% sure it used to work at my office (currently it looks like it doesn't, but I'm checking on a different machine), so I'm not sure whether it worked because of some special network setup or because I tested it just after adding the DNS entry. I know this story sounds a little crazy/incredible, but can someone help me solve this puzzle?

    Edit: I'm adding the dig output:

        ; <<>> DiG 9.6-ESV-R4-P3 <<>> somesubdomain.somedomain.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 60224
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;somesubdomain.somedomain.com.     IN   A
        ;; ANSWER SECTION:
        somesubdomain.somedomain.com. 67   IN   CNAME   xxx.xxx.xxx.xx1.
        ;; AUTHORITY SECTION:
        . 1800 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2012040901 1800 900 604800 86400
        ;; Query time: 72 msec
        ;; SERVER: 8.8.8.8#53(8.8.8.8)
        ;; WHEN: Tue Apr 10 00:11:01 2012
        ;; MSG SIZE  rcvd: 136
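
    For context, the dig output above already tells the story: the resolver follows the CNAME and then tries to look up "xxx.xxx.xxx.xx1." as a hostname, which is why the query ends in NXDOMAIN. A small check sketch (the 192.0.2.1 records below are illustrative assumptions, not the real zone):

        # broken:  somesubdomain  IN CNAME  192.0.2.1.   ; target is parsed as a hostname, not an address
        # correct: somesubdomain  IN A      192.0.2.1
        dig somesubdomain.somedomain.com A     @8.8.8.8   # status: NXDOMAIN while the CNAME is in place
        dig somesubdomain.somedomain.com CNAME @8.8.8.8   # shows the stray CNAME record itself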

    Read the article

  • Website Use Monitoring for 3 People

    - by linkedlinked
    I work in an IT startup with 2 partners, and I'm the programmer/IT guy -- in other words, the workhorse. To make a long story short, I'm doing most of the work right now while they spend all day on Facebook. That's OK, because they're paying my salary, but if the project fails I'm sure they'll blame me for it (I'm doing my best to make sure that doesn't happen!), and I want some sort of recourse. I already have an app that blocks time-wasters on my local PC and keeps logs of when the app is enabled (so I can say "I had Facebook blocked from 9am-5pm today.").

    Is there any way I can get a brief summary of the most heavily visited sites, split up by client PC? At the end of the month, I want to be able to say "You both load Facebook, on average, every 10 minutes. You spend hours a day on YouTube, and haven't opened up our bugtracker in weeks", and maybe have a nifty chart or graph to match it.

    We have a crappy D-Link router and no IT budget. They are both on Windows Vista; I run Ubuntu Linux. I don't want to install any monitoring software on their PCs, but I'm totally fine with, say, routing all the network traffic through my machine. I can think of lots of ways to accomplish this (telnet into JSSH and list open tabs? log all the DNS requests, per domain? I've even thought of setting up a webcam on my desk and just keeping 5-minute snapshots...), I just don't really know where to start. Any advice is appreciated, thanks!
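
    If the traffic does end up routed through the Linux box, the "log all the DNS requests" idea is probably the lightest-weight option. A rough sketch, assuming the interface name and the reporting cut-off (the awk field positions depend on tcpdump's exact output format, so treat this as a starting point rather than a finished tool):

        # count DNS A-record lookups per client IP and per domain
        sudo tcpdump -l -n -i eth0 'udp dst port 53' 2>/dev/null \
          | grep ' A? ' \
          | awk '{sub(/\.[0-9]+$/, "", $3); print $3, $(NF-1)}' \
          | sort | uniq -c | sort -rn | head -20

    Feeding the same log into a spreadsheet once a month would give the per-person, per-site chart described above.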

    Read the article

  • My PC is powercycling, what's going wrong?

    - by Renai
    So here is my sorry story of woe. My PC had been functioning normally for some time. Last week I bought a cheapish powered USB hub for my home PC, which runs Windows 7, and this weekend I plugged the hub in. At some stage I hibernated the PC, and later on I plugged my Kobo eReader into the hub to charge. Later still I started the PC up.

    Only thing is, it now won't start up. It just power-cycles on for a second and a half with the fans at full, then powers off. Then back on for two seconds, then back off. There's no display at all and it won't get to the BIOS screen. It looks like anything USB is not functioning: the keyboard and mouse are not lighting up, etc. I've taken out the BIOS battery, reseated the RAM, reseated the graphics card and so on, but my suspicion is that I have somehow blown the USB section of the motherboard.

    Suggestions? All else failing, where is the best place to take this machine in Sydney to get evaluated? It's a fairly powerful beast; all up it has cost me about $2500 over the years, including upgrades and a recent new graphics card, so I can't just start from scratch.

    Read the article

  • How to Re-install an App that Shows up in the Appstore as 'Update' Instead of 'Buy App'

    - by Craig Reville
    So, long story short: I dropped the wrong app into 'clean my mac' and hit 'cancel', but it was too late by that point. I rebooted, and the App Store said it had an update; when I opened the App Store it was showing an update for the app I had just uninstalled. I tried clicking 'update' but it gives me an error saying it's unable to install after 'downloading'. When I go into 'purchased apps' it shows the app as uninstalled, so I click 'install' and get an error saying it's already installed. I'm running OS X Lion, latest version, fully updated; the MacBook Pro is only a few months old.

    I tried searching through the entire system to remove all traces of the app. After rebooting, the App Store no longer shows the app and no longer shows the update, but on the apps page it still says 'Update'. I tried reinstalling the app from the desktop, outside the App Store, and again it says the app is 'already installed'. Reading more about Lion, I found an article about 'BundleID' being the thing that tells the App Store what is installed and what needs updating, but I can't find where the BundleID would be located. Any thoughts? I've tried CCleaner, AppCleaner etc. and none of them show the app, mainly because it is uninstalled.

    Update: I've spoken to Apple Support, who confirmed that there is a file in the system that separately tells the system whether updates are available, but they declined to give me any further details. Apple also referred me from technical support to iTunes App Store support (as opposed to Mac App Store support), and from there I have been referred to AppleCare, who are currently 'investigating' the issue. Hopefully there will be a fix that's simple to implement for people having similar issues; this appears to be a more common issue than I previously thought.
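
    On the BundleID question: each app's identifier normally lives in the Info.plist inside its .app bundle, and Mac App Store installs also carry a receipt folder inside the bundle. A quick way to inspect both, assuming a copy of the bundle still exists somewhere (the path below is a placeholder, not the actual app):

        # read the CFBundleIdentifier the App Store keys on
        defaults read "/Applications/SomeApp.app/Contents/Info" CFBundleIdentifier
        # Mac App Store installs also carry a receipt inside the bundle
        ls "/Applications/SomeApp.app/Contents/_MASReceipt/"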

    Read the article

  • Exclude list of specific files in wget

    - by nanker
    I am trying to download a lot of pages from a website on dial-up and it can be brutally slow. I have almost got the perfect wget command, but because I'm downloading pages from the same site, wget wastes time downloading the same standard images for each page. If I know the names of the default page images, is there any way to have wget ignore them, and thus avoid downloading them for each and every page?

    Here is an example of one of the wget commands that my shell script generates into another shell script to download all of the pages:

        mkdir candy-canes-on-the-flannel-board-in-preschool
        cd candy-canes-on-the-flannel-board-in-preschool
        wget -p -nd -A jpg,html -k http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/
        wget -c --random-wait --timeout=30 --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092416 Firefox/3.0.3" http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/ -O "candy-canes-on-the-flannel-board-in-preschool"
        rm Baby-and-Toddler.jpg Childrens-Books.jpg Creative-Art.jpg Felt-Fun.jpg Happy_Rainbow-e1338766526528.jpg index.html Language-and-Literacy.jpg Light-table-Button.jpg Math.jpg Outdoor-Play.jpg outer-jacket1-300x153.jpg preschoolspot-button-small.jpg robots.txt Science-and-Nature.jpg Signature-2.jpg Story-Telling.jpg Tags-on-Preschool.jpg Teaching-Two-and-Three-Year-olds.jpg
        cd ../

    I realize the script is probably not as savvy as it could be, but it is doing what I need at the moment -- except that, as you can see from the rm command, I would like to prevent wget from downloading those files in the first place if possible.

    I almost forgot to mention: there are two wget commands because the first one downloads the page as index.html and for some reason it does not open in my browser. However, when I open it and look at it in vim, all of the page's content is there, so I am not sure why it does not open. If I just issue the second wget command on its own, that page (the same file, really, with an alternate name) opens up fine. Fixing that would also help streamline the process.
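
    GNU wget has a reject list for exactly this situation. A sketch, assuming the filenames from the rm line above are the ones worth skipping (-R takes a comma-separated list of suffixes or patterns, so a wildcard such as "*Button*.jpg" can cover several names at once):

        wget -p -nd -A jpg,html -k \
             -R "Baby-and-Toddler.jpg,Childrens-Books.jpg,Creative-Art.jpg,Felt-Fun.jpg,Math.jpg,Signature-2.jpg" \
             http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/

    The list shown is abbreviated; include every known filename or a pattern that matches them. Depending on the wget version, rejected page requisites may be skipped outright or fetched and then deleted, so it is worth testing against one page before running the whole batch.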

    Read the article

  • Windows 7 boot problem on a Lenovo Thinkpad Z61m 9450HAG

    - by Matt Taylor
    I recently did a full upgrade to Windows 7 on my ThinkPad. Everything worked fine up until the second reboot (the first reboot, after some updates installed, worked OK). At the second reboot the system would just show a black screen before the Windows logo appears. The disk/wireless/power/battery lights are all lit and the disk light is active (flickering). However, if I remove the battery and boot on mains power only, it boots fine and quickly, and everything is OK. Any help on why this won't boot with the battery plugged in is greatly appreciated; I need to take this battery out on the road/trains, etc.

    A little more detail on this story. The battery I had inserted when doing the (failed) boot was a long-life battery. I have not tried inserting that battery while Windows is logged in. I have another (normal-life) battery that I have charged up within Windows. It has just reached 100% and I am about to reboot with it in. I am using the Lenovo power manager to diagnose the battery; all seems OK. I will report back shortly as to the outcome.

    OK, so I chose the reboot option from within Windows. The machine seemed to shut down okay, but then stalled: it didn't turn off completely and didn't reboot, but just sat somewhere in between, with the fan humming. I had to hold the power button in for a few seconds until the fan stopped, then hit the power button again to boot the machine from fresh. One good thing: with this battery (the normal one) it booted into Windows 7 the first time with a battery!

    So, now I have rebooting issues. I have 3 errors in the event log:

        A timeout was reached (30000 milliseconds) while waiting for the lxdxCATSCustConnectService service to connect.
        The lxdxCATSCustConnectService service failed to start due to the following error: The service did not respond to the start or control request in a timely fashion.
        The following boot-start or system-start driver(s) failed to load: cdrom

    Any thoughts?

    Read the article

  • postfix takes 60-90ms to queue email -- normal?

    - by Jeff Atwood
    We're seeing some (maybe?) strange delays when submitting individual emails to our local Postfix server. To help diagnose the issue, I wrote a little test program which sends 5 emails:

        get smtp     1ms (  1 ms)
        email 0    677ms (676 ms)
        email 1    802ms (125 ms)
        email 2    890ms ( 88 ms)
        email 3    973ms ( 83 ms)
        email 4   1088ms (115 ms)

    Discounting the handshaking in the first email, that's about 90ms per email. These timings have also been corroborated with another test app written by someone else using a different codepath, so it appears to be server related. I turned on detailed logging and I can see that the delay is between the end-of-message terminator (\r\n.\r\n) and the response:

        [16:31:29.95] [SEND] \r\n.\r\n
        [16:31:30.05] [RECV] 250 2.0.0 Ok: queued as B128E1E063\r\n
        [16:31:30.08] [SEND] \r\n.\r\n
        [16:31:30.17] [RECV] 250 2.0.0 Ok: queued as 4A7DE1E06E\r\n
        [16:31:30.19] [SEND] \r\n.\r\n
        [16:31:30.27] [RECV] 250 2.0.0 Ok: queued as 68ACC1E072\r\n
        [16:31:30.28] [SEND] \r\n.\r\n
        [16:31:30.34] [RECV] 250 2.0.0 Ok: queued as 7EFFE1E079\r\n
        [16:31:30.39] [SEND] \r\n.\r\n
        [16:31:30.45] [RECV] 250 2.0.0 Ok: queued as 9793C1E07A\r\n

    The time intervals tell the story (discounting the handshaking required for the initial email): each email is waiting about 60-90 milliseconds for Postfix to queue it! This seems .. excessive .. to me. Is it "normal" for Postfix to take 60-90 ms for every email you send it? Or do I just have unreasonable expectations? I would expect the local Postfix server to queue the email in about 20ms, tops!
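
    One way to take the test program out of the equation is Postfix's own smtp-source benchmarking tool, which submits messages as fast as the server will accept them. A sketch (the sender and recipient addresses are placeholders):

        # 100 messages of ~1KB over a single session; divide the wall time by 100 for a per-message figure
        time smtp-source -s 1 -m 100 -l 1024 -c -f test@example.com -t postmaster@localhost localhost:25

    If smtp-source shows the same 60-90 ms per message, the time is being spent in the server's queue-commit path (often dominated by syncing the queue file to disk) rather than in the client code, so the latency of the disk holding the Postfix queue is worth a look.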

    Read the article

  • Generating new SID for Windows 7 cloned partition in Linux?

    - by Jack
    So I've read that the proper way to clone a Windows 7 partition is to run Sysprep after the clone is complete. For MANY reasons, this is not possible the way we are cloning these drives (long story short: the drive should be fully up and running after we clone it, with all the settings already there and requiring no user intervention; and no, not even an answer file would work, because the way we customize all the Win7 settings is complex and we do not want the user touching the settings). I understand Microsoft will not support Windows 7 clones that have not been sysprepped, and that is fine for us.

    Acronis recovery tools get around this by ticking an option called "Create new NT signature", which resets the SID and GUID on any restore. Symantec has a tool called Ghostwalker which does the same thing. However, we are looking for a way to do this in Linux, because we want to use open source tools to do the imaging (fsarchiver, partclone, etc. -- basically the same tools Clonezilla uses internally to clone NTFS partitions).

    The question is: if we clone using these tools in Linux, how would we generate a new SID afterwards (without using Sysprep)? Is there any way to do it within a Linux environment? The whole imaging process is automated, so if it is a simple command that I can just throw into my shell script, that would be even better. Of course, it would be nice to know whether this is even possible. Any ideas?

    EDIT: Forgot to mention that the target machines we are restoring the image onto are EXACTLY the same.
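
    For reference, the capture/restore pair the question alludes to typically looks something like this with partclone (a sketch only; the device names and image path are placeholders, and it says nothing about changing the SID itself):

        partclone.ntfs -c -s /dev/sda2 -o /images/win7.img    # capture the NTFS partition to an image file
        partclone.ntfs -r -s /images/win7.img -o /dev/sdb2    # restore the image onto the target partition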

    Read the article

  • Vista won't boot. BSOD: Page fault in nonpaged area

    - by user31576
    Here's the story: I let Windows Update install the updates it wanted to, then rebooted the computer. The updating process was taking time, so I went away. When I came back, my computer was rebooting. It got as far as the Windows logo with the loading bar, then BSOD'd and rebooted, and I've been stuck in this loop ever since.

    Looking it up on the net, the "Page fault in nonpaged area" error seems to be linked to faulty RAM or drivers. So I ran a memory test; it found no errors. When I try safe mode (with command prompt) I can see a list of drivers being loaded, then I get the same BSOD. I tried to repair using the Vista DVD; it says "nothing to repair". I tried to restore to a previous state; it says "no restore point found".

    So my guess is it has something to do with the drivers. How can I identify the one causing the BSOD? If you have any other leads, what can I do? By the way, I'm writing from this very computer, running a Linux distro I installed after the BSOD loop started, so I guess it's not a hardware issue. I have backed up the important data and will format and reinstall Windows if I must, but I'd like to avoid that. Thanks in advance for any help you can give me.

    Read the article

  • Security implications of adding www-data to /etc/sudoers to run php-cgi as a different user

    - by BMiner
    What I really want to do is allow the 'www-data' user to launch php-cgi as another user; I just want to make sure that I fully understand the security implications. The server should support a shared hosting environment where various (possibly untrusted) users have chroot'ed FTP access to the server to store their HTML and PHP files. Since PHP scripts can be malicious and read/write other users' files, I'd like to ensure that each user's PHP scripts run with that user's permissions (instead of running as www-data).

    Long story short, I have added the following line to my /etc/sudoers file, and I wanted to run it past the community as a sanity check:

        www-data ALL = (%www-data) NOPASSWD: /usr/bin/php-cgi

    This line should only allow www-data to run a command like this (without a password prompt):

        sudo -u some_user /usr/bin/php-cgi

    ...where some_user is a user in the group www-data. What are the security implications of this? This should then allow me to modify my Lighttpd configuration like this:

        fastcgi.server += ( ".php" =>
            ((
                "bin-path" => "sudo -u some_user /usr/bin/php-cgi",
                "socket" => "/tmp/php.socket",
                "max-procs" => 1,
                "bin-environment" => (
                    "PHP_FCGI_CHILDREN" => "4",
                    "PHP_FCGI_MAX_REQUESTS" => "10000"
                ),
                "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
                "broken-scriptfilename" => "enable"
            ))
        )

    ...allowing me to spawn new FastCGI server instances for each user.
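
    A quick way to confirm the rule behaves as intended before wiring it into Lighttpd is to exercise it directly as www-data (a sketch; some_user is the placeholder from the question, and the last check assumes root is not a member of the www-data group):

        sudo -l -U www-data                                        # list what www-data may run via sudo
        sudo -u www-data sudo -u some_user /usr/bin/php-cgi -v     # should print the PHP version without a password prompt
        sudo -u www-data sudo -u root /usr/bin/php-cgi -v          # should be refused, since root is outside %www-data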

    Read the article

  • ssd firmware, linux: updating large batch of drives

    - by wryfi
    I was recently hit by a fatal firmware bug that affected dozens of Crucial SSDs deployed in my datacenter. Many of the affected machines use LSI or other proprietary SAS controllers, which Crucial's bootable ISO does not recognize, and none of the affected machines has a Windows license. The story is roughly similar for other SSD manufacturers, including Samsung and Intel.

    To resolve the issue, I was forced to stop each machine, remove the affected SSD, remove the SSD from its hotswap caddy, install it temporarily in my ThinkPad, flash the firmware, then reverse, rinse, repeat. It took the better part of a day to get through all the affected devices.

    I am looking for hardware, software, and/or purchasing strategies to ease this pain, as SSD firmware bugs seem inevitable and our SSD footprint is growing. My first thought is to get a laptop with eSATA and one of these cables (http://www.newegg.com/Product/Product.aspx?Item=N82E16812311004). That should at least mean I don't have to remove the drives from their caddies. Surely others have run into this. Any novel solutions?

    Read the article

  • Why does my DSL modem now need a reboot each time for my laptop to connect?

    - by msorens
    I have a rather peculiar home networking issue. For some time my home network was purring along fine: I could turn on either of my laptops and they would quickly find and connect to my DSL modem (and thence the internet). Several days ago I unplugged the DSL modem for the first time in months. Upon turning it back on and waiting for the boot to finish, the lights on the panel indicated the modem was fully operational, just as before. But that's not what happened. Not at all.

    Now when I turn on my Win7 laptop, the network icon in my system tray shows a small starburst; hovering over it, the tooltip states "Not connected; connections are available". Clicking it lists several nearby networks, including my own network showing a strong signal. If I click to connect, it attempts a connection but then I get a dialog stating "Windows was unable to connect to MyNet." Turning wireless off on my laptop and back on makes no difference. Running the network troubleshooter (which includes doing a repair on the network connection) makes no difference. The only remedy is to reboot the DSL modem (i.e. unplug it, wait a few seconds, then plug it back in). As soon as it comes online, my laptop finds it and connects properly.

    To add one more twist to the story, this happened to me once before, several months ago. After a couple of weeks the situation resolved itself(!): everything started working properly again, due to nothing I did. Final note: this problem only affects the wireless connection to the DSL modem. My desktop computer, connected via hardline to the DSL modem, connects fine when I turn it on. Any thoughts on why this is happening or how to fix it?

    Read the article

  • Emptying Windows temp folder is a good idea?

    - by Siva Charan
    I am using a DELL Inspiron with Windows 7. As far as I know, emptying the Windows temp folder should be fine, but I ran into some strange behaviour around 8 months back when I cleared it: from the next day onwards my laptop started displaying one error or another every day, and one day the OS crashed. To this day I am not sure whether the OS crashed because I cleared the Windows temp folder or because of something else. By "Windows temp folder" I mean C:\Windows\Temp. That is the back story.

    Today this temp folder contains 102 GB, with most of the space occupied by files whose names start with etilqs_*.*. I have learned that these files are generated by WD SmartWare. Now my problem is: I want to clean up this folder, since it occupies a lot of space. If I clean up C:\Windows\Temp, will my laptop face the same kind of problem I faced earlier, or any new problems? Please suggest a good solution.

    Read the article

  • Sign multiple domains with single Domain Key (dk-filter)

    - by Lashae
    Motivation: the private shopping website Gilt sends periodic update emails from giltgroupe.bounce.ed10.net, yet all of the mails are signed with the domain keys of giltgroupe.com:

        mailed-by    giltgroupe.bounce.ed10.net
        signed-by    giltgroupe.com

    My story: I couldn't manage to sign x.com with y.com's domain key using dk-filter under Debian Lenny with Postfix. If I start the dk-filter service with the following arguments:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com,y.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    then dk-filter signs with domain x.com (d=x.com). If I change the daemon arguments to:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    then emails sent from y.com are not signed at all. The dk-keys.conf file is as follows:

        *:/var/dk-filter/y.com/mail

    I managed to do the same thing with DKIM, and it works perfectly; DK, however, doesn't seem to work. I have no problem signing y.com's emails with y.com's key and x.com's emails with x.com's key, which indicates there is no configuration problem. Do you have any experience/advice that would make it possible to sign emails from multiple domains with a single chosen domain's key?
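
    Whichever signing setup ends up working, receivers only care that the d= domain publishes the selector's public key in DNS, so it is worth confirming that record resolves. A one-line check sketch, assuming the selector is "mail" (inferred from the key path /var/dk-filter/y.com/mail above):

        dig +short TXT mail._domainkey.y.com    # should return the "k=rsa; p=..." public key record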

    Read the article

  • Most scalable way of serving a small set of static HTTP content

    - by Ekevoo
    The story: Hi guys. I'm among the people responsible for serving the results of the most anticipated (by number of people participating) annual entrance exam in my state. As such, when our results are published the interest is overwhelming. In the past we delegated the responsibility of serving the results to the media, but that spoils the officialness of the results a little. This year we went with a little (long overdue) experiment of using lighttpd instead of Apache, as well as other physical network optimizations I wasn't directly involved with. The results were very satisfactory: the server didn't choke even once, nor did we see any of the usual Twitter complaints about unavailability and/or slowness that were previously common. However, because we still delegated the first publication of the results to the media, I'm still not 100% sure we can handle the load of actually publishing the results first.

    The question: because these files are only about 14MB in total, and a truly lightweight Linux distribution isn't that big either, I'm thinking: what if next year we run entirely from a RAM drive? Is there such a thing? Is it useful? Is it worth it for a team that uses Debian almost exclusively? Are there other optimizations I should be focusing on instead?
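
    On the RAM-drive idea: Linux can do this without a special distribution via tmpfs, which keeps a filesystem entirely in memory. A minimal sketch for Debian (the mount point, size and source path are assumptions):

        mkdir -p /var/www/results
        mount -t tmpfs -o size=64m,mode=0755 tmpfs /var/www/results
        cp -a /srv/results-release/. /var/www/results/    # repopulate after every reboot, e.g. from an init script

    With only ~14MB of static content, though, the kernel page cache will normally keep everything in RAM after the first read anyway, so tmpfs mostly buys predictability rather than raw speed; connection handling and bandwidth are more likely to be the real limits.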

    Read the article

  • xauth, ssh and missing home directory

    - by flolo
    We have several servers, and normally everything works fine, except now... we are getting new air conditioning installed. This takes 36 hours, and for that time almost all servers are shut down; only 2 servers keep running for the most important tasks (i.e. accepting incoming email, serving some important websites, login server). Everybody was informed that if they needed data from their home directories, they should fetch it before the shutdown.

    Long story short: someone realized that he has to run a certain program on one of the servers. No problem -- he can log in remotely to our login server and run the program there without a home directory (the binaries are local and the necessary information can be copied to /tmp). That works like a charm until... the user needs to run a GUI program.

    I can find no easy way to make it work. Usually ssh -Y honk@loginserver is enough, but now the home directory is missing and ssh is not able to copy the cookies into ~/.Xauthority (as the file server with the home directories is down). Paranoid as all sysadmins are, the X servers only listen locally, not on TCP ports, so no remote X connection is possible, and the SSH config is waterproof -- i.e. no way to set environment variables.

    My problem is that the proxy MIT cookie generated by ssh gets lost because ~/.Xauthority doesn't exist. If I could retrieve it somehow, I could re-enter it into an .Xauthority file in /tmp. The only other option (besides changing the config) which came to my mind is making a tunnel (netcat, or better ssh) from the remote host to the login server and copying the cookie manually (I'm not sure the TCP/Unix-domain-socket part works as expected). Any good suggestions (for the future -- by now our servers are already up again)?
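
    For what it's worth, the "copy the cookie manually" idea can be sketched with xauth's extract/merge subcommands plus an XAUTHORITY file in /tmp; whether the forwarded display then accepts the copied cookie still depends on how sshd set up its proxy, so treat this only as an illustration (host names and the display number are placeholders):

        # on the login server, point xauth at a writable file instead of the missing ~/.Xauthority
        export XAUTHORITY=/tmp/honk.Xauthority
        touch "$XAUTHORITY" && chmod 600 "$XAUTHORITY"
        # pull the cookie for the forwarded display from a host where it exists and merge it in
        ssh otherhost 'xauth nextract - otherhost/unix:10' | xauth nmerge -
        xauth list    # verify the entry arrived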

    Read the article

  • BIOS not detecting working SATA hard drive.

    - by Evan
    Some time ago my power supply died. It's a long story from then till now, but the important bit is that I ended up with a new hard drive and a new power supply. I tested whether my original hard drive was still alive, and it booted and worked perfectly until I turned it off; when I started it again, it would not boot. I bought new SATA cables, assuming that the old one was not seating properly (it was cheap and wobbly), but no dice.

    On start-up I am presented with a message telling me to insert boot media into the selected drive, or add a drive and restart. Neither the new nor the old drive is detected by the BIOS, by my Vista install disk, or by my bootable Linux USB drive. When I remove all of the RAM the computer stops outputting any video, and after reinstalling the RAM and starting up again it gives me a "failed overclock" error. So, does anyone have an idea as to what might be going on? I'm completely lost at this point.

    Read the article

  • MacOS creates a new mount on AFP path calls

    - by jAndy
    Hi folks, here is the scenario: in my webapp, my customers are using Firefox as the target browser. They need to open afp:// folders via JavaScript. To make a long story short, this really works: you set up Firefox via about:config and set the value network.protocol-handler.external.afp to true. What happens then is that the operating system (OS X) takes care of the path and correctly opens a Finder window.

    The problem: OS X creates a new mount every time. It cannot distinguish between afp://host/path/111 and afp://host/path/222, for instance; furthermore, even if the afp path is 100% identical, a new mount is created. This looks like the default behaviour of OS X, regardless of Firefox. So, is there any chance I can tell OS X not to create a new mount for subdirectories which are accessed over afp:// ?

    Update: it looks like there are OS X applications which can change the default behaviour for network protocols, i.e. you can change "somewhere" which application OS X should call for a protocol. If that is true, wouldn't it be possible to create a script which just opens the local path without the afp:// prefix? The question then is where that configuration lives, to tell OS X which application to use for a specific protocol. Any help welcome!
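
    The "open the local path instead" idea from the update can be sketched in a couple of lines of shell: once the share is mounted, an afp://host/share/111 URL corresponds to a directory somewhere under /Volumes, and Finder will open that directly without creating a second mount (the paths below are placeholders, and the exact /Volumes layout depends on the share name):

        mount | grep afpfs              # list AFP volumes that are already mounted
        open "/Volumes/path/111"        # opens a Finder window on the existing mount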

    Read the article

  • Is there a better way to do bonded vlan tagged interfaces with XEN

    - by AJ01
    We have a number of Xen servers, all running CentOS or RHEL. The VMs they run are all required to be on their own VLANs, for no other reason than that the customer expects them to be; long story short, I can't change this right now. We are also required to have bonding enabled on the interfaces. So to accommodate this we enslave eth1 and eth2 to bond0, then create a separate interface called bond0.VLANID, where VLANID corresponds to the correct VLAN; e.g. ifcfg-bond0.204:

        DEVICE=bond0.204
        BOOTPROTO=static
        ONBOOT=yes
        VLAN=yes
        BRIDGE=xenvlan204

    Bridge to Xen: as you will see, we eventually have to bridge this out to Xen, and we do this by adding another interface, called xenvlan204 in this instance (ifcfg-xenvlan204):

        DEVICE=xenvlan204
        BOOTPROTO=none
        ONBOOT=yes
        TYPE=bridge

    Xen VM config: finally, in our Xen config for each VM, we add

        vif = [ "bridge=xenvlan204" ]

    which allows the VM to access that particular VLAN.

    The problem: we've noticed a few problems with this setup. One is that we currently create the interfaces manually, which means that if we add more VLAN-enabled interfaces and bridges we usually have to restart xend, something I'm not so keen on, and lower-level staff have their heads melted by the number of interfaces, so the risk of a mistake is high. Secondly, it can take some time for a host to come up if it has a number of VLAN-tagged interfaces. Thirdly, it just doesn't scale well on the management side.

    The question: is there a better, more flexible way to do this (in particular with the Xen that ships with CentOS 5.3, 5.4 and 5.5, as we have to support all three) that leverages scripting or other solutions to allow an arbitrary number of interfaces to be created when a VM is instanced? Your advice and expertise are more than welcome.
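
    One direction for the scripting side: the same tagged interfaces and bridges can be brought up at runtime with vconfig/brctl instead of hand-written ifcfg files, which makes it easy to loop over an arbitrary list of VLAN IDs. A rough sketch only (the IDs are examples and nothing here persists across reboots or touches xend):

        for id in 204 205 206; do
            vconfig add bond0 "$id"              # create the tagged sub-interface bond0.$id
            brctl addbr "xenvlan$id"             # create the bridge the VM's vif will attach to
            brctl addif "xenvlan$id" "bond0.$id"
            ip link set "bond0.$id" up
            ip link set "xenvlan$id" up
        done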

    Read the article

  • Windows is very slow with my new SSD

    - by Maksym H.
    I have an HP ProBook 4520s laptop with a Core i5 M480 @ 2.67GHz, 4GB RAM, a 640GB HDD and a Radeon HD 6370M 1GB video card. That would seem like a good enough stack for work, right? But my HDD crashed after about a year of carrying the laptop around every day. After buying a new SSD (Patriot Torqx II 128GB SATA II) and installing Windows 8 from scratch, it was amazingly fast. But by the time I had done nothing more than install the Windows updates, I felt the speed become the same as my old HDD, and after installing the other software I need for work it became so slow that my old, lower-spec PC genuinely works better than my awesome laptop.

    I checked that TRIM and AHCI mode are turned on. So why is this happening? I asked Patriot Memory support for help; they suggested sending them ATTO test results, which I did. Here is the response: "Thank you very much for the attached results. Looking at the results, I can see that your SSD speed is a lot lower than it should be. Can you tell me your system specs?" While waiting for them to check my email, I re-installed from Windows 8 back to Windows 7 and again it was perfect at first, but the story repeats: it becomes slower and slower after every new software installation. (I took some screenshots; apologies that the Task Manager in them is in Russian.)

    The main issue is that something keeps the disk loaded at 100% and the response time jumps around 1000-3000 ms. Why am I asking about Windows? Because I tried installing Linux Mint (x86) and it just flies: great performance regardless of how many programs I have installed. Only Windows (7 or 8) has this problem. So, guys, I'd appreciate any ideas about how to fix this, and maybe an answer to the main question: why is it so? Thanks!

    Read the article

  • Server 2008 RAID 5 Write Speeds

    - by Solipsism
    I recently configured a RAID 5 partition in Server 2008 with 4 RAID 5 disks. The disks are connected through a SATA expansion card that uses PCIe. This morning I checked and they had finally finished synchronizing, so I tried some speed tests. Copying off the disks started pretty much fine: speeds began at 125MB/s, then trailed down to about 70MB/s, which I found odd but not worrying.

    Writing TO the disks, however, is a completely different story. I attempted to copy some of my VM host ISOs onto the disks (~2-4 GB apiece) and this resulted in speeds of approximately 10MB/s. I tried copying both from a local disk (connected directly to the motherboard) and from another server through the gigabit network, and the results were the same. I checked Performance Monitor while transferring the files, and the only thing that stuck out was that memory hard faults shot up to 6,000 per minute (spiking around 200/s), caused by explorer.exe. The system is running 2GB of DDR667 ECC RAM and a quad-core 2.3GHz Opteron.

    Is there anything I can do to fix this performance issue (buy more RAM? move the drives to a faster box? etc.), or am I just screwed so long as I stick with Windows?

    Read the article

  • Upgrade SQLServer 2008 hardware

    - by John
    Forgive me if I'm not able to be totally clear here. It is not intentional; I'm a senior-level developer in a very small company, having to act like a manager at the moment. Anyway, the story is that we have 2 older Dell servers with SQL Server 2008 Standard in a "cluster". I put that in quotes because I'm still not 100% clear what that means. We have 2 brand new blade servers and want to move the existing databases to the new hardware.

    OK, so here is the gotcha: we need to do this with little or no downtime. I'm being told that we can evict the passive node and then pull in one of the new servers, but also that this is a dangerous step, because something could go wrong that would cause the cluster to fail, and then we would be left with nothing because the active server would not be able to come back up. Does anyone have any thoughts on how to handle this? I'm being told that the only way to ensure success is to have at least a day of downtime during which we bring up a new cluster on the new hardware and then migrate the databases one by one.

    Read the article

  • Why is writing to my external hard drive slow, while benchmarks show fast writing?

    - by matix2267
    I have an iOmega eGo 320GB portable drive connected through USB 2.0 to my laptop running Windows Vista. It had been working fine for quite some time until recently, when writing to it became very slow. For example, when copying a ~300MB movie over to the drive, at first it is extremely fast, but it isn't actually writing -- it only puts the data in the cache -- and then it hangs on the last 10-20MB for about a minute. When copying larger files it's the same story: it starts fast but then slows down to ~5MB/s (sometimes even slower, down to 2MB/s). The strange thing is that I have always had caching disabled for this drive (it was disabled by default and I never bothered changing it).

    At first I thought the disk was dying, so I checked the S.M.A.R.T. values and everything is fine there. I ran chkdsk and it seemed to fix the problem -- it worked fast for a few minutes but then slowed down again. I also tried plugging the drive into another USB port; no difference. Additionally, I noticed that reading is sometimes slower under certain circumstances, e.g. loading times for some games are ~10 times longer, whereas simply copying files from this drive to my internal HDD is fast.

    I ran a speed benchmark using CrystalDiskMark with a 5x100MB run and, strangely, got these results:

                   read      write   (MB/s)
        Seq       33.05      28.25
        512k      17.30      15.27
        4k         0.267      0.372
        4kQD32     0.510      0.260

    This is different from what most other people report (I've found many threads about slow disk writes while googling, but all of them were slow in benchmarks too), which is why I decided to post the problem here.

    By the way, most of the time when writing (or sometimes reading), the activity LED is mostly idle: it blinks for a while and then stops for longer, sometimes with slower ~1-second blinks, and sometimes it goes off for a few seconds (an extremely long blink :) ). But when benchmarking, defragmenting or just reading (copying from this drive, installing apps from installers stored there, watching HD videos) it blinks really fast, like it should, and there are no slowdowns. It shouldn't be a driver issue, unless the stock Windows drivers have some problem I'm not aware of.

    Read the article

  • secure user-authentication in squid

    - by Isaac
    Once upon a time there was a beautiful, warm virtual jungle in South America, and a squid server lived there. Here is a rough picture of the network:

                        <the Internet>
                              |
                              |
                   A          |          B
        Users <----------> [squid-Server] <-----> [LDAP-Server]

    When the users request access to the Internet, squid asks for their name and passport, authenticates them against LDAP, and grants access if LDAP approves them. Everyone was happy until some sniffers stole passports on the path between the users and squid [path A]. This disaster happened because squid used the Basic authentication method.

    The people of the jungle gathered to solve the problem. Some bunnies offered to use the NTLM method. The snakes preferred Digest authentication, while Kerberos was recommended by the trees. In the end, many solutions were offered by the people of the jungle and everyone was confused! The lion decided to end the situation and shouted out the rules for a solution:

        - It shall be secure!
        - It shall work with most browsers and software (e.g. download managers).
        - It shall be simple and not require another huge subsystem (like a Samba server).
        - It shall not depend on a special domain (e.g. Active Directory).

    Then a very reasonable, comprehensive and clever solution was offered by a monkey, making him the new king of the jungle! Can you guess what the solution was?

    Tip: the path between squid and LDAP is protected by the lion, so the solution does not have to secure it.

    Note: sorry for this boring and messy story! Thanks!

    Read the article

  • S.M.A.R.T. broken sectors

    - by Jeffrey Vandenborne
    I recently got back my hard drive (a LaCie) that I had sent away for warranty. The disk had failed, and I had used the Palimpsest disk utility to check whether anything was wrong in the S.M.A.R.T. status; it said there were a few broken sectors. So the next day I went to the store and told the story.

    Four weeks later I actually got the drive back. The first thing I did was plug it in and start the disk utility, and weirdly it showed me pretty much the same thing: even the values of most tests were the same as they were before, when my drive broke. The serial number is different, though, but it does show a very peculiar value. Now I'm wondering: I'm almost sure it's the exact same drive, and it still says I've got broken sectors. Does it just say that because the result has been cached somewhere on the drive while LaCie DID actually fix it? Or should I first run the extended self-test (which seems to take hours)?

    I also tried the smartctl command-line tool. It says the drive has SMART support, but it doesn't show anything useful: it says that SMART is enabled, but then it says that it's disabled (I attached screenshots of the smartctl output and of the disk utility). Thanks in advance.
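
    For a second opinion that bypasses any GUI caching, smartctl can read the reallocation counters straight from the drive and start the extended self-test mentioned above. A sketch (/dev/sdb is a placeholder for the LaCie's device node, and USB enclosures sometimes need an extra option such as -d sat before they answer):

        sudo smartctl -a /dev/sdb            # full attribute dump; look at Reallocated_Sector_Ct and Current_Pending_Sector
        sudo smartctl -t long /dev/sdb       # start the extended self-test
        sudo smartctl -l selftest /dev/sdb   # read the result once the test has finished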

    Read the article
