Search Results

Search found 17406 results on 697 pages for 'option explicit'.


  • Can you set up a gaming LAN using OpenVPN installed in a VMware guest OS and play the game on the host OS?

    - by Coder
    I would like to set up a gaming VPN. That is, I have some games that work over LAN and I would like to play them with people who are not on my LAN. I know I can do this with OpenVPN.

    My ultimate goal would be to run OpenVPN portably on my host OS without any virtualization at all. I don't want to install it on my host, but I'm fine with running it portably, and even with temporarily adding registry keys and then running a .reg file to remove those entries once I'm done. To that end I installed OpenVPN on a virtual machine and diffed the registry, then manually (using a .reg file) added all the keys that seem important to my host OS and copied the OpenVPN installation folder onto my host machine. When I run OpenVPN GUI 1.0.3 as a test, it says "Error opening registry for reading (HKLM\SOFTWARE\OpenVPN). OpenVPN is probably not installed." I verified that that key is indeed in the registry with all its subkeys, and it looks correct. I have tried running the GUI as an administrator and in compatibility mode with no success. I am running Windows 7.

    If this fails, I would be happy with installing OpenVPN on a virtual machine in VMware, but the key point is that the game itself is installed on my host machine. The first question for this option is whether that is even possible. The second is that I can't get the VM to have internet access if I use bridging, but I can if I use NAT. Is it possible to do this gaming VPN setup with the VMware guest OS running NAT?

    Summary of questions:
    - Is it possible to run OpenVPN portably, and if so, what did I miss above?
    - If it's not possible to run it portably, can I set up a gaming LAN by installing OpenVPN in a guest OS with NAT, and how?
    - If that is not possible, can I install OpenVPN in a guest using bridging, and if so, how do I set this up with a Windows 7 host and a Windows XP guest? Currently the guest cannot access the internet in bridging mode, though it works in NAT mode.
    - In general, is there any good documentation on setting up a gaming LAN with OpenVPN (I am using 2.1.4)? I have never set up a VPN of any sort before, so any help would be much appreciated. Thanks!
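
    For what it's worth, the portable approach usually trips over two things besides the registry keys: if the Windows 7 install is 64-bit, a 32-bit OpenVPN GUI will look under HKLM\SOFTWARE\Wow6432Node\OpenVPN rather than HKLM\SOFTWARE\OpenVPN, and OpenVPN also needs the TAP virtual network adapter driver, which only the installer (or tapinstall.exe) sets up. Below is a hedged sketch of the simplest LAN-style tunnel once the binaries and TAP driver are in place; the key file name and the host player's address are made up:

        rem generate a shared static key once and give a copy to the other player
        openvpn --genkey --secret static.key

        rem player hosting the game ("server"); tap mode so LAN broadcasts and game discovery work
        openvpn --dev tap --ifconfig 10.8.0.1 255.255.255.0 --secret static.key

        rem the other player, pointing at the host player's public IP (assumption)
        openvpn --remote 203.0.113.10 --dev tap --ifconfig 10.8.0.2 255.255.255.0 --secret static.key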

    Read the article

  • UNIX questions to be answered [closed]

    - by Nits
    1. Create a tree structure named ‘training’ in which there are 3 subdirectories – ‘level 1’, ‘level2’ and ‘cep’. Each one is again further divided into 3; ‘level 1’ is divided into ‘sdp’, ‘re’ and ‘se’. From the subdirectory ‘se’, how can one reach the home directory in one step, and how can one navigate to the subdirectory ‘sdp’ in one step? Give the commands that do the above actions.
    2. How will you copy a directory structure dir1 to dir2 (with all the subdirectories)?
    3. How can you find out if you have the permission to send a message?
    4. Find the space occupied (in bytes) by the /home directory, including all its subdirectories.
    5. What is the command for printing the current time in 24-hour format?
    6. What is the command for printing the year, month, and date with a horizontal tab between the fields?
    7. Create the following files: chapa, chapb, chapc, chapd, chape, chapA, chapB, chapC, chapD, chapE, chap01, chap02, chap03, chap04, chap05, chap11, chap12, chap13, chap14, and chap15.
    8. With reference to question 7, what is the command for listing all files ending in small letters?
    9. With reference to question 7, what is the command for listing all files ending in capitals?
    10. With reference to question 7, what is the command for listing all files whose last but one character is 0?
    11. With reference to question 7, what is the command for listing all files which end in small letters but not ‘a’ and ‘c’?
    12. In an organisation one wants to know how many programmers there are. The employee data is stored in a file called ‘personnel’ with one record per employee, and every record has a field for designation. How can grep be used for this purpose?
    13. In the organisation mentioned in question 12, how can sed be used to print only the records of all employees who are programmers?
    14. In the organisation mentioned in question 12, how can sed be used to change the designation ‘programmer’ to ‘software professional’ everywhere in the ‘personnel’ file?
    15. Find out about the sleep command and start five jobs in the background, each one sleeping for 10 minutes.
    16. How do you get the status of all the processes running on the system, i.e. using what option?
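
    For reference, most of these map onto one-liners with standard commands; a hedged sketch (directory names are simplified to avoid the spaces, and GNU versions of du/date are assumed):

        mkdir -p training/level1/{sdp,re,se} training/level2 training/cep   # 1. build the tree
        cd ~                                  # from training/level1/se straight to the home directory
        cd ../sdp                             # from se to its sibling sdp in one step

        cp -r dir1 dir2                       # 2. copy a directory structure with all subdirectories
        mesg                                  # 3. shows whether your terminal accepts messages (y/n)
        du -sb /home                          # 4. space used by /home in bytes, subdirectories included
        date +%H:%M                           # 5. current time in 24-hour format
        date +"%Y%t%m%t%d"                    # 6. year, month and date separated by tabs

        touch chap{a..e} chap{A..E} chap0{1..5} chap1{1..5}     # 7. create the chap* files

        ls chap[a-z]                          # 8. files ending in small letters
        ls chap[A-Z]                          # 9. files ending in capitals
        ls chap*0?                            # 10. files whose last-but-one character is 0

        grep -c 'programmer' personnel                          # 12. count the programmer records
        sed -n '/programmer/p' personnel                        # 13. print only the programmer records
        sed 's/programmer/software professional/g' personnel    # 14. change the designation everywhere

        for i in 1 2 3 4 5; do sleep 600 & done                 # 15. five background jobs sleeping 10 min
        ps -e                                                   # 16. every process on the system (-e option)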

    Read the article

  • How to disable or tune filesystem cache sharing for OpenVZ?

    - by gertvdijk
    For OpenVZ, an example of container-based virtualization, it seems that the host and all guests share the filesystem cache. This sounds paradoxical when talking about virtualization, but it is actually a feature of OpenVZ, and it makes sense: because only one kernel is running, it's possible to benefit from sharing the same pages of filesystem cache in memory.

    While that sounds beneficial, I think my setup here actually suffers in performance from it. Here's why: my machines aren't actually sharing any files on disk, so I can't benefit from this feature of OpenVZ. Several OpenVZ machines are running MySQL with MyISAM tables, and MyISAM relies on the system's filesystem cache for caching data files, unlike InnoDB's buffer pool. Also, some virtual machines are known to do heavy and large I/O operations on the same filesystem in the host. For example, when running cat *.MYD > /dev/null on a large database in one machine, I saw the filesystem cache shrink in another, monitored with htop. This essentially flushes all the useful filesystem cache in the guests (FIFO), and with it the MySQL caches in the guests. Now users are complaining that MySQL is very slow. And it is: some simple SELECT queries take several seconds at times when disk I/O is heavily used by other machines.

    So, simply put: is there a way to avoid the filesystem cache being wiped out by other virtual machines in container-based virtualization?

    Some thoughts:
    - Choosing the algorithm for flushing the filesystem cache in the kernel (possible? how?).
    - Reserving a certain number of pages for a single VM (there seems to be no such option for filesystem-cache pages, from reading man vzctl).
    - Will running MySQL on another filesystem get me anywhere?

    If not, I think my alternatives are:
    - Use KVM for the VMs running MySQL/MyISAM. KVM actually assigns memory to the VM and does not allow swapping out caches unless using a balloon driver.
    - Move to InnoDB and tune the buffer pools, dirty pages, etc. This is currently considered 'nice to have' for the long term, as not everyone responsible for administration of the system understands InnoDB.
    - More suggestions welcome.

    System software: Proxmox (now 1.9, could be upgraded to 2.x). One big LV assigned to the VMs.
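
    For what it's worth, two small things that are easy to try while deciding: watching the host's cache and I/O while the big read runs, and moving a single hot table off MyISAM so it lives in InnoDB's buffer pool instead of the shared page cache. A hedged sketch; the database and table names are made up:

        # on the host, watch cache size and I/O while cat *.MYD > /dev/null runs in a container
        vmstat -S M 5

        # inside the affected container, convert one table and check the buffer pool it will use
        mysql -e "ALTER TABLE mydb.mytable ENGINE=InnoDB;"
        mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"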

    Read the article

  • Too many Tunnel Adapter Interfaces

    - by Tomas Lycken
    If I open a command prompt on my machine and type ipconfig /all, I see lots of

        Tunnel adapter Local Area Connection* 9:
           Media State . . . . . . . . . . . . . : Media disconnected
           Connection-specific DNS Suffix  . . . :
           Description . . . . . . . . . . . . . : Microsoft 6to4 Adapter #5
           Physical Address. . . . . . . . . . . : 00-00-00-00-00-00-00-E0
           DHCP Enabled. . . . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . . . : Yes

    In fact, there are so many that my "real" adapters are pushed out of view and can't be seen anymore. Is there any flag I can use on ipconfig to hide all virtual interfaces? Or is there some other way around this problem? Since they always say "Media disconnected" I suppose disabling them could be an option, but if possible I'd rather not turn any functionality off; I just want to control what output I get from ipconfig. Also, I know these are related to IPv6. However, most of what I find on Google merely states what these adapters are and that they're harmless - nothing about hiding or removing them.
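
    If disabling them does turn out to be acceptable after all, the usual way is through netsh rather than ipconfig (a hedged sketch - run from an elevated prompt; the same commands with default instead of disabled turn the adapters back on):

        netsh interface 6to4 set state disabled
        netsh interface teredo set state disabled
        netsh interface isatap set state disabled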

    Read the article

  • Squid on Windows load balancing only to one server

    - by Martin L.
    After thousands of Google searches and days of trying, I can't get the load balancer/failover in Squid on Windows to work. I am using Squid 2.7. My webservers are two single-NIC lighttpd boxes and one dual-NIC lighttpd box. server1 in this example is running Squid on port 80 and lighttpd on port 8080 (just to test).

    Requirements:
    - All 3 webservers running lighttpd should be balanced. Two acceptable options for load balancing: best would be if server1 is busy, server2 takes over; if server2 is busy, server3 takes over, etc. Otherwise round-robin style with evenly distributed load, e.g. server1 takes the first call, server2 the second, etc.
    - All requests should be treated the same way (no URL rewriting or so on).
    - The host header sent by the client has to be passed to every server as the HTTP Host header, whether it is "server1", "server1.company.internal" or "10.211.1.1".

    My approach:

        acl all src all
        acl manager proto cache_object
        http_port 80 accel defaultsite=server1.company.internal vhost

        #reverse proxy entries
        cache_peer 10.211.2.1 parent 8080 0 no-query originserver round-robin login=PASS name=server1_nic1
        cache_peer 10.211.1.2 parent 80 0 no-query originserver round-robin login=PASS name=server2_nic1
        cache_peer 10.211.2.3 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic1
        cache_peer 10.211.2.4 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic2

        #decl of names of squid host
        acl registered_name_hostdomain dstdomain server1.company.internal
        acl registered_name_host dstdomain server1

        #ip of squid host
        acl registered_name_ip dstdomain 10.211.2.1

        # access: redirects the correct squid hostname
        http_access allow registered_name_hostdomain
        http_access allow registered_name_host
        http_access allow registered_name_ip
        http_access deny all

        cache_peer_access server1_nic1 allow registered_name_hostdomain
        cache_peer_access server1_nic1 allow registered_name_host
        cache_peer_access server1_nic1 allow registered_name_ip
        cache_peer_access server2_nic1 allow registered_name_hostdomain
        cache_peer_access server2_nic1 allow registered_name_host
        cache_peer_access server2_nic1 allow registered_name_ip
        cache_peer_access server3_nic1 allow registered_name_hostdomain
        cache_peer_access server3_nic1 allow registered_name_host
        cache_peer_access server3_nic1 allow registered_name_ip
        cache_peer_access server3_nic2 allow registered_name_hostdomain
        cache_peer_access server3_nic2 allow registered_name_host
        cache_peer_access server3_nic2 allow registered_name_ip
        cache_peer_access server1_nic1 deny all
        cache_peer_access server2_nic1 deny all
        cache_peer_access server3_nic1 deny all
        cache_peer_access server3_nic2 deny all

        never_direct allow all

    Problems:
    - The load balancer does not balance to anything other than the first server. Only if the first server is killed in some way does the second take over. I have seen the others working at some point, but definitely not the intended load balancing described above.
    - If cache_peer_access is not defined, sometimes the wrong hostname is sent to the backend webserver, and this always depends on the defaultsite= parameter - probably because the Host header on the request to Squid is not set and gets replaced by defaultsite. Leaving out defaultsite didn't solve the problem. The only workaround I found for this is the current approach with cache_peer_access.

    Questions:
    - Does cache_peer_access influence the round-robin?
    - Is there a better workaround to pass the Host header to the backend webservers?
    - Which parameters increase the speed of load balancing, or does anyone have a better approach?

    -Martin

    Read the article

  • joining text files with 600M+ lines

    - by dnkb
    I have two files, huge.txt and small.txt. huge.txt has around 600M rows and is 14 GB; each line has four space-separated words (tokens) and finally another space-separated column with a number. small.txt has 150K rows and a size of ~3 MB, with a space-separated word and a number per line.

    Both files are sorted using the sort command, with no extra options. The words in both files may include apostrophes (') and dashes (-). The desired output would contain all columns from huge.txt plus the second column (the number) from small.txt wherever the first word of huge.txt and the first word of small.txt match.

    My attempts failed miserably with the following error:

        cat huge.txt | join -o 1.1 1.2 1.3 1.4 2.2 - small.txt > output.txt
        join: memory exhausted

    What I suspect is that the sorting order isn't right somehow, even though the files are pre-sorted using:

        sort -k1 huge.unsorted.txt > huge.txt
        sort -k1 small.unsorted.txt > small.txt

    Problems seem to appear around words that have apostrophes (') or dashes (-). I also tried dictionary sorting using the -d option, running into the same error at the end.

    I see two ways out of this but don't know how to implement either of them:
    1. Any tips on how to sort the files in a way that the join command considers them to be sorted properly?
    2. I was thinking of calculating MD5 or some other hashes of the strings to get rid of the apostrophes and dashes, doing the sorting and joining with the hashes instead of the strings themselves, and at the end "translating" the hashes back to strings, while leaving the numbers intact at the end of the lines. Since there would be only 150K hashes, it's not that bad. What would be a good way to calculate individual hashes for each of the strings? Some AWK magic?

    See file samples below. Thank you!

    Sample of huge.txt:

        had stirred me to 46
        had stirred my corruption 57
        had stirred old emotions 55
        had stirred something in 69
        had stirred something within 40

    Sample of small.txt:

        caley 114881
        calf 2757974
        calfed 137861
        calfee 71143
        calflora 154624
        calfskin 148347
        calgary 9416465
        calgon's 94846
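
    In case it helps, the usual culprit for this kind of join failure is locale-aware collation: in a UTF-8 locale, sort orders words while largely ignoring punctuation such as ' and -, and join's key comparison doesn't agree with that ordering, so it decides the input is out of order and gives up. A hedged sketch of the byte-order approach (file names as above; note huge.txt has five columns, so 1.5 is its number column):

        export LC_ALL=C
        sort -k1,1 huge.unsorted.txt  > huge.txt
        sort -k1,1 small.unsorted.txt > small.txt

        join -1 1 -2 1 -o 1.1,1.2,1.3,1.4,1.5,2.2 huge.txt small.txt > output.txt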

    Read the article

  • Network Services disabled (not starting) on Windows XP

    - by Rickesh John
    I am currently running Windows XP Service Pack 3 on my system. Today, when I failed to connect to the internet via a LAN cable, I realized that almost all of the vital network services had stopped functioning. Any attempt to start them through services.msc gives me the following message:

        Could not start the DNS Client Service on Local Computer
        Error 1068: The dependency service group failed to start

    All my software and services related to networking have stopped functioning. For example, Windows Firewall is turned off permanently, and so are my Avast Anti-Virus Real Time Shields and Web Shield. When I plug the LAN cable into my laptop it registers, but this is what I get when I ping localhost:

        C:\>ping localhost
        Unable to contact IP driver, error code 2

    Moreover, with ipconfig I get this:

        Windows IP Configuration
        An internal error occurred: The request is not supported.
        Please contact Microsoft Product Support Services for further help.
        Additional Information: Unable to query host name

    On some further poking around, I saw that none of the "NETWORK SERVICE" processes in Task Manager, except svchost.exe, were running. Also, when I first opened Task Manager, I saw some 20 processes running with the username column empty for most of them. With some searching on Google, I found out that these services are important:

    - DHCP Client
    - DNS Client
    - Net Logon
    - Network Connections
    - Network Location Awareness
    - TCP/IP NetBIOS Helper

    None of them, except Network Connections, are working; they do not start. The Event Viewer on my system shows a bunch of 7000 and 7001 event errors. I have tried reinstalling the network driver, booting into safe mode with networking, and trying to enable the services mentioned above. I had disabled System Restore some time back, so I have no restore points for my system. I tried a lot of things from Google searches but none of them worked. Also, with such a long list of issues, I am a little confused as to what I should even search for. :(

    One more thing I would like to mention: the previous morning, my anti-virus Avast detected a rootkit buried deep in my system folders. It was removed, but maybe this problem was caused by the rootkit. I did run a boot-time scan but no viruses were found.

    Please advise. Is formatting and reinstalling Windows my only option?
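
    Before reinstalling, it may be worth rebuilding the Winsock catalog and the TCP/IP stack, which are the usual casualties of a rootkit (and of its removal); a hedged suggestion - run from a command prompt, then reboot (the log file name is arbitrary):

        netsh winsock reset
        netsh int ip reset c:\resetlog.txt
        sfc /scannow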

    Read the article

  • nginx rewrite for /blah/(.*) /$1

    - by skrewler
    I'm migrating from mod_php to nginx. I got everything working except for this rewrite - I'm just not familiar enough with nginx configuration to know the correct way to do this. I came up with this by looking at a sample on the nginx site:

        server {
            server_name test01.www.myhost.com;
            root /home/vhosts/my_home/blah;
            access_log /var/log/nginx/blah.access.log;
            error_log /var/log/nginx/blah.error.log;
            index index.php;

            location / {
                try_files $uri $uri/ @rewrites;
            }

            location @rewrites {
                rewrite ^ /index.php last;
                rewrite ^/ht/userGreeting.php /js/iFrame/index.php last;
                rewrite ^/ht/(.*)$ /$1 last;
                rewrite ^/userGreeting.php$ /js/iFrame/index.php last;
                rewrite ^/a$ /adminLogin.php last;
                rewrite ^/boom\/(.*)$ /boom/index.php?q=$1 last;
                rewrite ^favicon.ico$ favico_ry.ico last;
            }

            # This block will catch static file requests, such as images, css, js
            # The ?: prefix is a 'non-capturing' mark, meaning we do not require
            # the pattern to be captured into $1 which should help improve performance
            location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
                # Some basic cache-control for static files to be sent to the browser
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
            }

            include php.conf;
        }

    The issue I'm having is with this rewrite:

        rewrite ^ht\/(.*)$ /$1 last;

    99% of requests that will hit this rewrite are static files, so I think maybe they're getting sent to the static files section and that's where things are being messed up. I tried adding this, but it didn't work:

        location ~* ^ht\/.*\.(?:ico|css|js|gif|jpe?g|png)$ {
            # Some basic cache-control for static files to be sent to the browser
            expires max;
            add_header Pragma public;
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }

    Any help would be appreciated. I know the best thing to do would be to just change the references of /ht/whatever.jpg to /whatever.jpg in the code, but that's not an option for now.

    Read the article

  • Logitech Optical Mouse Frozen In Middle of Windows XP Pro Screen

    - by Code Sherpa
    Hi. I have a Logitech optical mouse/keyboard. I have been using them just fine with the system drivers for almost a year now. I recently updated my Kaspersky software and rebooted. Now the mouse is frozen in the middle of my screen. I am not able to log in to the Windows XP Pro box that has the frozen mouse (because I can't work the mouse), but I am able to remote desktop to this computer.

    Things I know / have tried:
    - When I boot the problem computer, I am able to use the keyboard, but not the mouse.
    - I have installed the latest version of Logitech's SetPoint (with the updated drivers) on the problem computer (via remote desktop) and that didn't seem to matter.
    - I bought new batteries for the mouse and that didn't matter.
    - I have tried the mouse/keyboard on another computer and the mouse works just fine there.

    My suspicion is that the Kaspersky install has overwritten a driver of some sort.

    Things I have not done (and would appreciate detailed steps for if you feel this is the way to go):

    1. Uninstall all the mouse drivers on the machine, reboot, then reinstall. Note: when I get to Device Manager I don't see an option for Human Interface Devices (where the mouse device is). Here are my options: Computer, Disk drives, DVD/CD-ROM drives, Floppy controllers, IDE ATA/ATAPI, Imaging devices, Network adapters, Other devices, Ports, Processors, Sound, video and gaming, System devices, USB controllers. Also, I should point out that Video Controller is the only thing under Other devices and it has a yellow exclamation mark. The same is true for all the items under Universal Serial Bus controllers. I think this means I have to update my BIOS, but since my mouse was working just fine without doing that, I don't think that is my problem. So, how do I get to my mouse device?

    2. Update my BIOS. Note: as pointed out above, I don't think this matters, as my mouse was working just fine under my computer's current BIOS version.

    Thanks for your help.

    Read the article

  • How do you back up 40+ CentOS 5.5 servers?

    - by John Little
    We are embarrassed to ask this question. Apologies for our lack of UNIX expertise. We have inherited 40+ CentOS 5.5 servers and don't know how to back them up. We need low-level, clone-type images so that we could restore the servers from scratch if we had to replace the HDs, etc.

    We have used the dd command, but we assume this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached. All 40 servers have a pair of mirrored disks (we don't know if it's HW or SW RAID). Most have only 100 MB used. The servers are running Apache, Zend, Tomcat, MySQL, etc. Ideally we don't want to have to shut them down to back them up (but we could). We assume that standard UNIX commands like tar, cpio, rsync, scp etc. are of no use, as they only copy files, not partitions, attributes, groups etc. - i.e. they do not produce a result which can simply be re-imaged onto a new HD to bring the server back from the dead.

    We have a large SAN, a spare Windows box and spare UNIX boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no software or documentation for it. We have a copy of Symantec Backup Exec, but we have no budget for UNIX client licenses. (The company has negative amounts of money.) We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore).

    Googling returns some applications to do this, e.g. Clonezilla - looks difficult to install and invasive. Mondo - only seems to support backup if you are local to the machine. Amanda might be an option, but looks like days or weeks of work to learn and set up. Is there anything built into CentOS, or do we have to go the route of installing, learning and configuring a set of backup tools?

    Any ideas? This must be a pretty standard problem to which Googling doesn't give an obvious answer.
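
    For what it's worth, dd is not limited to local disk-to-disk copies; it can be streamed over SSH to one central box, which covers the "initiate remotely, image the whole disk" requirement. A hedged sketch - host names, IPs and device names are made up, and imaging a mounted, running disk only gives a crash-consistent copy:

        # run on the backup server: pull a compressed raw image of server01's first disk
        ssh root@server01 "dd if=/dev/sda bs=4M conv=sync,noerror | gzip -c" > server01-sda.img.gz

        # restore later by booting the target from a live/rescue CD and pushing the image back
        gunzip -c server01-sda.img.gz | ssh root@192.0.2.10 "dd of=/dev/sda bs=4M"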

    Read the article

  • IT merger - self-sufficient site with domain controller VS thin-clients outpost with access to terminal server

    - by imagodei
    SITUATION: A larger company acquires a smaller one, and the IT infrastructure has to be merged. There are no immediate plans to change the current size or role of the smaller company - the offices and production remain. It has a Win 2003 SBS domain server, a Win 2000 file server, a Linux server for SVN and an internal wiki, 2 or 3 production machines, and an LTO backup solution. The servers are approx. 5 years old. Cisco network equipment (switches, wireless, ASA). The mail solution is hosted Exchange. There are approx. 35 desktops and laptops in the company.

    IT infrastructure unification: there are two merging proposals.

    1. Replace the old servers, install a Win Server 2008 domain controller, and set up either a subdomain or a domain trust to the larger company. The file server and other servers remain local, and synchronization should be set up to a centralized location at the larger company. Similarly with the backup - it remains local and, if needed, is replicated to a centralized location. Licensing is managed by the smaller company.

    2. All servers are moved to a centralized location at the larger company. As many desktop machines as possible are replaced by thin clients. The actual machines are virtualized and hosted on a terminal server at the same central location; Citrix solutions will be used. Only a router and a site-to-site VPN connection remain at the smaller company. A backup internet line is needed to ensure near-100% availability. Licensing is mainly managed by the larger company; only specialized software for PCs that will not be virtualized is managed by the smaller company.

    I'd like to ask you to discuss both solutions a bit. In your opinion, which is better from the operational point of view? Which is more reliable and cheaper in the long run? Easier to manage from the system administrator's point of view? Easier on the budget and easier to maintain from the IT department's point of view? Does anybody have any experience with the second option, and how does it perform in a production environment? Pros and cons of both? Your input will be of great significance to me. Thank you very much!

    Read the article

  • Dlink search is hijacking my browser

    - by James
    For months now, "DLink Search" has been hijacking my search engines. I use Google Chrome, and I have organized my search engines in the handy-dandy "manage search engines" tool about a trillion times. It never even shows that D-Link is hijacking my search engines - it does not show up there at all!

    I have read many posts on this forum and others saying that to fix this problem from Internet Explorer you go to Setup, Internet Options, yada yada, magical fairies, and you are solved - but my browser is Google Chrome! How am I supposed to do this from there? I do not know how to re-set up my D-Link router, which is the cause of the problem. How? In those posts with the magical fairies fixing it, hundreds responded saying, "yep, those fairies definitely fixed it right :)". These people were so satisfied. It worked for them - why not me? I look at Chrome and there are no options for anything to do with this: I see no "Setup" option, no "Internet Options" button, no anything. Please explain and help.

    By the way, the exact posts are these:

    - "Uncheck Advanced DNS in the router internet setup. This will take care of it. I had this problem with my DLink router before."
    - "I had this issue with my DIR-655 and unchecking the Advanced DNS setting in Setup - Internet - Manual Internet Connection Setup fixed it."
    - "If this is just internet explorer, you can go to Tools Internet Options or Internet Options in Control Panel. From here, go to the advanced tab and click the Reset button."
    - "I would set the router's DNS to a site like OpenDNS, and I would ensure the machines are set to get their DNS settings via DHCP or set the machine's DNS setting to OpenDNS. If the router's DNS looks like it was messed with, some bad software know the default passwords for routers and could have changed it. If you don't already I would make sure the password to the router is not default or easy to guess. I've had spyware change a machine's DNS, but the fact it is happening on all machines makes me wonder if it is the router."
    - "Something got into your router and changed the dns server most likely, do a hard reset of the router and then change the password to something strong. Also check for a firmware update for the router and apply it as soon as possible."

    Read the article

  • Share the same subnet between Internal network and VPN Clients

    - by Pascal
    I would like to set up a configuration where VPN clients connecting to my Forefront TMG can access all the resources of my internal network without having to enable the option "Use default gateway on remote network" in the VPN connection's TCP/IPv4 advanced settings. This is important to me, since the clients can then use their own internet connection while accessing my network through the VPN (the security implications of this are acceptable in my scenario).

    My internal network runs on 10.50.75.x, and I set up Forefront TMG to relay the DHCP of my internal network to the VPN clients, so they get IPs from the same range as the internal network. This setup initially works: the VPN clients use their own internet and can access anything that is on the internal network. However, after a while, HTTP proxy traffic from the internal network starts getting routed to the IP of the RRAS dial-in interface instead of the IP of the internal network's gateway. When this happens, the HTTP proxy requests start getting denied, for obvious reasons.

    My first question is: does this happen because Forefront TMG wasn't designed to handle the scenario I described above, and it "loses itself"?
    My second question is: is there any way to solve this problem, either through configuration or firewall policies?
    My third question is: if there's no way it can work with the scenario above, is there another scenario that will solve my problem and do what I'd like it to do properly?

    Below are my network routes:

        1 => Local Host Access                => Route => Local Host                        => All Networks
        2 => VPN Clients to Internal Network  => Route => VPN Clients                       => Internal
        3 => Internet Access                  => NAT   => Internal, Perimeter, VPN Clients  => External
        4 => Internal to Perimeter            => Route => Internal, VPN Clients             => Perimeter

    Thanks!

    Read the article

  • Repair BAD Sectors or Buy a new HDD?

    - by Nehal J. Wani
    I have a Seagate internal hard disk drive. I recently opened up my laptop [Dell Inspiron N5010, warranty has expired], cleaned it, and it worked normally after waking up from hibernation. However, when I restarted it, it got stuck on the Windows loading screen, then tried to boot from the Dell recovery partition but failed. It gave this error:

        Windows has encountered a problem communicating with a device connected to your computer.
        This error can be caused by unplugging a removable storage device such as an external USB
        drive while the device is in use, or by faulty hardware such as a hard drive or CD-ROM drive
        that is failing. Make sure any removable storage is properly connected and then restart your
        computer. If you continue to receive this error message, contact the hardware manufacturer.
        Status: 0xc00000e9
        Info: An unexpected I/O error has occurred.

    While cleaning, I had mistakenly touched the round silvery thing at the bottom of the HDD. I don't know whether this caused the problem or not. Since I also have Fedora installed on the same HDD, I can boot from it, but it shows weird read errors when I ask it to mount the Windows partitions. The disk utility also says that the hard disk has many bad sectors and needs to be replaced. I downloaded SeaTools from the Seagate website and used it; in the long test, I gave it permission to repair the first 100 errors, which it did successfully.

    Now I am confused about what I should do. Costs:
    - (a) Internal HDD 500 GB: Rs 3518
    - (b1) External HDD 500 GB: Rs 3472
    - (b2) External HDD 1 TB: Rs 5500
    - (c) Internal-to-external converter case: Rs 650

    I have the following options:
    (i) Buy an external HDD and back up my data. Try to repair the bad sectors of the HDD. Then two cases arise:
        (a) My internal HDD gets repaired [almost].
        (b) My internal HDD doesn't get repaired. Then I need to buy another internal HDD and replace the damaged one, OR break the seal of the external one and put it inside my laptop as the internal drive. Breaking the case involves risks.
    (ii) Buy an internal HDD and an internal-to-external converter case [not very reliable] and back up my data. Try to repair the bad sectors of the HDD. Then two cases arise:
        (a) My internal HDD gets repaired [almost].
        (b) My internal HDD doesn't get repaired. Then I just need to put in the new internal HDD I bought.

    Experts, please guide me as to which will be the most value-for-money (VFM) option. Also, if a HDD is failing, should I avoid even reading from it, because there is a chance of other sectors failing? What I mean is: is it wrong to read from the HDD without taking a backup first?
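
    Since Fedora on the same disk still boots, one hedged way to judge how far gone the drive is, and to copy it off while stressing it as little as possible, is smartctl plus GNU ddrescue (device names are assumptions - check with lsblk first; ddrescue writes onto a second disk, never back onto the failing one):

        # health attributes that matter for bad sectors
        sudo smartctl -a /dev/sda | grep -i -E 'reallocated|pending|uncorrect'

        # first pass copies the easy areas and logs progress, second pass retries the bad spots
        sudo ddrescue -f -n /dev/sda /dev/sdb rescue.log
        sudo ddrescue -f /dev/sda /dev/sdb rescue.log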

    Read the article

  • Make mod_wsgi use python2.7.2 instead of python2.6?

    - by guron
    I am running Ubuntu 10.04.1 LTS, which comes with python2.6, but I need to replace it with python2.7.2. (The reason is simple: 2.7 has a lot of features backported from 3.) I installed python2.7.2 using:

        ./configure
        make
        make altinstall

    The altinstall option installed it, without touching the system default version, to /usr/local/lib/python2.7 and placed the interpreter in /usr/local/bin/python2.7. Then, to help mod_wsgi find python2.7, I added the following to /etc/apache2/sites-available/wsgisite:

        WSGIPythonHome /usr/local

    I start Apache and run a test WSGI app, BUT I am greeted by Python 2.6.5 and not Python 2.7. Later I replaced the default python symlink to point to python2.7:

        ln -f /usr/local/bin/python2.7 /usr/bin/python

    Now typing python on the console opens python2.7, but somehow mod_wsgi still picks up python2.6. Next I tried:

        PATH=/usr/local/bin:$PATH
        export PATH

    then did a quick Apache restart, but yet again it's python2.6!

    Here is my $PATH:

        /usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games

    Contents of /etc/apache2/sites-available/wsgisite:

        WSGIPythonHome /usr/local
        <VirtualHost *:80>
            ServerName wsgitest.local
            DocumentRoot /home/wwwhost/pydocs/wsgi
            <Directory /home/wwwhost/pydocs/wsgi>
                Order allow,deny
                Allow from all
            </Directory>
            WSGIScriptAlias / /home/wwwhost/pydocs/wsgi/app.wsgi
        </VirtualHost>

    app.wsgi:

        import sys

        def application(environ, start_response):
            status = '200 OK'
            output = sys.version
            response_headers = [('Content-type', 'text/plain'),
                                ('Content-Length', str(len(output)))]
            start_response(status, response_headers)
            return [output]

    Apache error.log:

        'import site' failed; use -v for traceback
        [Sun Jun 19 00:27:21 2011] [info] mod_wsgi (pid=23235): Initializing Python.
        [Sun Jun 19 00:27:21 2011] [notice] Apache/2.2.14 (Ubuntu) mod_wsgi/2.8 Python/2.6.5 configured -- resuming normal operations
        [Sun Jun 19 00:27:21 2011] [info] Server built: Nov 18 2010 21:20:56
        [Sun Jun 19 00:27:21 2011] [info] mod_wsgi (pid=23238): Attach interpreter ''.
        [Sun Jun 19 00:27:21 2011] [info] mod_wsgi (pid=23239): Attach interpreter ''.
        [Sun Jun 19 00:27:31 2011] [info] mod_wsgi (pid=23238): Create interpreter 'wsgitest.local|'.
        [Sun Jun 19 00:27:31 2011] [info] [client 192.168.1.205] mod_wsgi (pid=23238, process='', application='wsgitest.local|'): Loading WSGI script '/home/wwwhost/pydocs/$
        [Sun Jun 19 00:27:50 2011] [info] mod_wsgi (pid=23239): Create interpreter 'wsgitest.local|'.

    Has anybody ever managed to make mod_wsgi run against a non-system-default version of Python?
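
    For what it's worth, mod_wsgi is linked against a specific libpython when it is compiled, so WSGIPythonHome, symlinks and PATH cannot move the packaged module from 2.6 to 2.7; the module has to be rebuilt against the new interpreter. A hedged sketch (the mod_wsgi version number is illustrative, and apxs2 comes from the Apache dev package on Ubuntu):

        tar xzf mod_wsgi-3.3.tar.gz
        cd mod_wsgi-3.3
        ./configure --with-apxs=/usr/bin/apxs2 --with-python=/usr/local/bin/python2.7
        make
        sudo make install
        sudo service apache2 restart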

    Read the article

  • VPN Connection Causes Internal LAN Connection Loss with Server

    - by sleepisfortheweak
    I've tried configuring basic PPTP VPN at my small business using a number of different tutorials. As far as I can tell, the actual VPN connection works fine, but upon connecting a client, the server 'disappears' from the internal LAN. The RRAS service must be stopped before the connection is restored.

    My setup: the network is simply a DSL gateway/router to the outside, functioning as NAT/firewall/DHCP. The server is a Win Server 2008 machine at the fixed IP 192.168.1.200. The server has one NIC, so I used the 'custom' option when configuring RRAS. The RRAS settings should be default, except that I've disabled ports for connection types I'm not using and reduced PPTP to 10 ports. I've also created an address pool and disabled DHCP packet forwarding. The server only functions as a file share and now a VPN server. Local LAN computers all have network shares mapped to the server, authenticated against local users/groups set up on the server.

    The problem: the moment a client connects through VPN, the server 'disappears' from the local network. All mapped drives disconnect and there is no response to a ping 192.168.1.200. Even if the client disconnects, the server does not re-appear at that address until the RRAS service is stopped.

    I've tried:
    - Using an address pool inside and outside the local subnet.
    - Using DHCP relay.
    - Checking inbound/outbound filters (none enabled).

    The fact that nothing I've tried has had any effect, and that I can connect and successfully obtain an IP, tells me it's something more fundamental I'm missing. My gut says it's something to do with the second IP address added by the VPN client somehow taking over the interface, or traffic from the local LAN accidentally getting routed to the VPN client instead of being handled at the server once RRAS becomes 'active' when a client connects. Hopefully this is obvious to someone with real IT experience. I've been doing this a while and have almost never been stumped. I'm starting to think it might actually be something tricky, since my setup is pretty basic yet refuses to work. I'll be happy to include more info if this doesn't ring any bells right away for anyone. Thanks.

    Read the article

  • Subversion 1.7.x and expat location in configure

    - by ditto
    I am running CentOS 6.3 64-bit and the DirectAdmin control panel. Currently I have Apache Subversion 1.6.19 installed without any problems. I have installed expat, expat-devel and neon-devel using yum. When installing Apache Subversion 1.6.19, this configure command works fine:

        ./configure --prefix=/usr --with-ssl --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    However, when installing Apache Subversion 1.7.7 using the same configure command as above, I get this error after running make:

        /etc/httpd/lib/libaprutil-1.so: undefined reference to `XML_StopParser'
        collect2: ld returned 1 exit status
        make: *** [subversion/svnadmin/svnadmin] Error 1

    I found out that I can get past that problem by adding this to the configure command:

        --with-expat=includes:lib_search_dirs:libs

    so that it looks like this:

        ./configure --prefix=/usr --with-ssl --with-expat=includes:lib_search_dirs:libs --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    However, that configure command then gives this warning:

        configure: WARNING: Expat found amongst libraries used by APR-Util, but Subversion libraries might
        be needlessly linked against additional unused libraries. It can be avoided by specifying exact
        location of Expat in argument of --with-expat option.

    I want to fix that, but despite a lot of experimenting I haven't been able to figure out how to specify the exact location of expat in the configure command, or how to find out what that location should be. After a lot of searching I found this: http://subversion.tigris.org/issues/show_bug.cgi?id=3997 - a FreeBSD user saying:

        Building Subversion 1.7.x on FreeBSD currently requires a configure flag:
        --with-expat=/usr/local/include:/usr/local/lib:expat
        As that is the default location of expat on that platform, it would be nice if configure detected it automatically.

    But I am not using FreeBSD; I am running CentOS 6.3 64-bit, and as I said, I installed expat, expat-devel and neon-devel using yum. I tried the expat path posted by the FreeBSD user anyway, and it seems to work - it gives no errors when running configure or make. This is what I used:

        ./configure --prefix=/usr --with-ssl --with-expat=/usr/local/include:/usr/local/lib:expat --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    But this is a production server, so I need your advice: is this also correct to run on a CentOS server? Is the following path in the expat option correct on CentOS?

        --with-expat=/usr/local/include:/usr/local/lib:expat

    If not, please advise what it should be changed to. Thanks in advance for any confirmation or help on this!
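
    One hedged way to work out the right includes:lib_search_dirs:libs values on CentOS is to ask rpm where the yum packages actually put expat (the paths shown are what a stock 64-bit CentOS 6 box typically uses, but verify them on the machine first):

        rpm -ql expat-devel | grep '/expat.h$'   # usually /usr/include/expat.h
        rpm -ql expat-devel | grep '\.so$'       # usually /usr/lib64/libexpat.so

        ./configure --prefix=/usr --with-ssl \
            --with-expat=/usr/include:/usr/lib64:expat \
            --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config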

    Read the article

  • Expandable, redundant, easily recoverable storage

    - by MeIr
    I am desperate at this point. I have been looking for a "big storage" solution for a while on my own and I can't find anything that would suit my needs. But now push has come to shove.

    Current situation: I have about 6 TB of data storage (already full) - a Drobo. Yesterday the Drobo died on me, which puts me in a bad situation - I can't recover my data without buying another Drobo. From extensive research online I have realized that Drobo is not the safest bet, and by now it seems a very poor choice. I ordered a new Drobo to try to get my data back; however, I don't want to be in the same situation later, and continuing to use Drobo promises this event will re-occur.

    What I am looking for:
    1. An inexpensive setup.
    2. Dynamically extendable - add more drives and/or replace a drive with a bigger one.
    3. Redundant - set up against 1-3 drive failures, depending on the total number of drives. For the sake of argument, let's assume that for every 4 drives, one should be able to fail without data loss.
    4. Easy data recovery - let's say the unforeseen happens; I would like to be able to recover the information without buying new tools or replacements - example: a new Drobo.
    5. Should be USB or network-attached storage.
    6. No demands on speed. It doesn't have to be fast, as I am not doing video editing on the setup. However, if the option exists, a decent speed would be nice.

    Afterthoughts: I reviewed a few options and FreeNAS looks nice, but it doesn't have #2, dynamic extendability. There are workarounds with pools, but it seems a bit complicated and unnecessary. Moreover, data safety seems to be a big question - I saw some horror stories.

    Please advise on what options I have and what seems like an optimal solution (if any). I don't care if it has to be a Windows or Linux box or any other OS and/or software running on top, but a simple solution is more attractive. Thank you!

    P.S.: Feel free to ignore "Afterthoughts".

    Read the article

  • ZFS Recover from Faulted Pool State

    - by nickv2002
    I have a six disk ZFS raidz1 pool and had a recent failure requiring a disk replacement. No problem normally, but this time my server hardware died before I could do the replacement (but after, and unrelated to, the drive failure as far as I can tell). I was able to get another machine from a friend to rebuild the system, but in the process of moving my drives over I had to swap their cables around a bunch until I got the right configuration where the remaining 5 good disks were seen as online. This process seems to have generated some checksum errors for the pool/raidz.

    I have the 5 remaining drives set up now and a good drive installed and ready to take the place of the drive that died. However, since my pool state is FAULTED I'm unable to do the replacement:

        root@zfs:~# zpool replace tank 1298243857915644462 /dev/sdb
        cannot open 'tank': pool is unavailable

    Is there any way to recover from this error? I would think that having 5 of the 6 drives online would be enough to rebuild the right data, but that doesn't seem to be enough now. Here's the status log of my pool:

        root@zfs:~# zpool status tank
          pool: tank
         state: FAULTED
        status: One or more devices could not be used because the label is missing or invalid.
                There are insufficient replicas for the pool to continue functioning.
        action: Destroy and re-create the pool from a backup source.
           see: http://zfsonlinux.org/msg/ZFS-8000-5E
          scan: none requested
        config:

                NAME                     STATE     READ WRITE CKSUM
                tank                     FAULTED      0     0     1  corrupted data
                  raidz1-0               ONLINE       0     0     8
                    sdd                  ONLINE       0     0     0
                    sdf                  ONLINE       0     0     0
                    sdh                  ONLINE       0     0     0
                    1298243857915644462  UNAVAIL      0     0     0  was /dev/sdb1
                    sde                  ONLINE       0     0     0
                    sdg                  ONLINE       0     0     0

    Update (10/31): I tried to export and re-import the array a few times over the past week and wasn't successful. First I tried:

        zpool import -f -R /tank -N -o readonly=on -F tank

    That produced this error immediately:

        cannot import 'tank': I/O error
        Destroy and re-create the pool from a backup source.

    I added the -X option to the above command to try to make it check the transaction log. I let that run for about 48 hours before giving up because it had completely locked up my machine (I was unable to log in locally or via the network). Now I'm trying a simple zpool import tank command, and that seems to run for a while with no output. I'll leave it running overnight to see if it outputs anything.

    Read the article

  • Which hardware to VM ratio for Build-Server virtualization?

    - by Martin
    Let's start by saying that I'm a total noob with respect to server virtualization. That is, I use VMs often during development, but they're simple desktop-machine things for me.

    Now to my problem: we have two (physical) build servers, one master and one slave, running Jenkins to do daily tasks and build (Visual C++ builds) our release packages for our software. As such these machines are critical to our company, because we do lots of releases and without a controlled environment to create them, we can't ship fixes. (And currently there's no proper backup of these machines in place, because they do not hold any data as such - it just would be a major pain to set them up again should they go bust. But setting up a backup that I'd know would work in case of HW failure would be even more pain, so we have skipped that until now.)

    Therefore (and for scaling purposes) we would like to go virtual with these machines. Outsourcing to the cloud is not an option, not at all, so we'll have to use on-premises hardware and VM hosts. Each build server (master or slave) is a fully configured (installs, licenses, shares in the case of the master, ...) Windows Server box. I would now ideally like to just convert the (two) existing physical nodes to VM images and run them, and later add more VM slave instances as clones of the existing ones.

    And here begin my questions:
    - Should I go for one VM per hardware box, or for something where a single piece of hardware runs multiple VMs? The latter would mean a single point of failure hardware-wise, which doesn't seem like a good idea ... or?
    - Since we're doing C++ compilation with Visual Studio, I assume that during a build the hardware (processor cores + disk) will be fully utilized, so going with more than one build node per hardware box doesn't seem to make much sense?
    - With respect to hardware options, does it make any difference which VM software we use (VMware, MS, VirtualBox, ...)? (We're using Windows exclusively for our builds.)

    Regarding budget: we have a normal small-company (20 developers) budget for this. ;-) That is, if it's going to cost a few k$, it's going to cost. If it's free - all the better. I strongly prefer solutions without multi-k$ maintenance costs per year.

    Read the article

  • General guidelines / workflow to convert or transfer video "professionally"?

    - by cloneman
    I'm an IT "professional" who sometimes has to deal with small video conversion / video cutting projects, and I'd like to learn "the right way" to do this. Every time I search Google, it's always a disaster of weird, low-maturity trialware, or random forum threads from 3-4 years ago describing various antiquated methods.

    The big question is the following: what are the general guidelines and tools for transcoding video into some efficient (lossless?) intermediary for editing purposes, with the goal of eventually re-encoding it afterwards? It seems to me that even the simplest formats and tasks are a disaster of endless trial & error, or expertise known only to hardened experts who have a Swiss Army knife of weird conversion tools they use, almost as if mounting an attack against the project.

    Here are a few cases in point:
    - Simple VOB files extracted from DVD footage can't be imported into Adobe Premiere directly.
    - VirtualDub is an old piece of software people keep recommending, but it doesn't seem to support newer formats.
    - I don't even know how to tell with certainty which codecs a video uses, whether the image is interlaced or not, and what resolution I'm dealing with.

    Problems:
    - Choosing a wrong interlace option, which diminishes quality.
    - Choosing a wrong pixel aspect ratio (stretches the image).
    - Choosing a wrong "project type" in Premiere, causing footage to require scaling.
    - Being forced to use some weird program that will have any number of negative effects.

    What I'm looking for:
    - Books or "real knowledge" on format conversions, recognized tools, etc. that aren't some random forum guide on how to deal with video formats.
    - Workflow guidelines on identifying a format and going from one format to another without the problems mentioned above.
    - Documentation on what programs like Adobe Premiere can and can't do with regard to formats, so that I don't use a wrench as a hammer.

    TL;DR: How should you convert or "prepare" a video file to ensure it will be supported by Premiere for editing? Is Premiere a suitable program to handle cropping and encoding, or should other tools be used for this when making a video montage from a variety of source formats? What are some good books that specifically deal with converting videos that use any number of codecs?
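
    As one concrete starting point (not the only workflow, and the exact encoders available depend on how your ffmpeg build was compiled), the free ffprobe/ffmpeg tools cover the "what exactly is this file?" and "make a clean editing intermediate" steps:

        # container, codecs, resolution, frame rate and (usually) whether the video is interlaced
        ffprobe -hide_banner input.vob

        # deinterlace and rewrap into an editing-friendly file; -crf 12 is visually near-lossless,
        # audio is kept as uncompressed PCM
        ffmpeg -i input.vob -vf yadif -c:v libx264 -crf 12 -c:a pcm_s16le output.mov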

    Read the article

  • How to store data on a machine whose power gets cut at random

    - by Sevas
    I have a virtual machine (Debian) running on a physical host. The virtual machine acts as a buffer for data that it frequently receives over the local network (the period for this data is 0.5 s, so a fairly high throughput). Any data received is stored on the virtual machine and repeatedly forwarded to an external server over UDP. Once the external server acknowledges (over UDP) that it has received a data packet, the original data is deleted from the virtual machine and not sent to the external server again. The internet connection between the VM and the external server is unreliable, meaning it can be down for days at a time.

    The physical machine that hosts the VM gets its power cut several times per day at random. There is no way to tell when this is about to happen, and it is not possible to add a UPS, a battery, or a similar solution to the system.

    Originally, the data was stored in a file-based HSQLDB database on the virtual machine. However, the frequent power cuts eventually cause the database script file to become corrupted (not at the file-system level, i.e. it is readable, but HSQLDB can't make sense of it), which leads to my question: how should data be stored in an environment where power cuts can and do happen frequently?

    One option I can think of is using flat files, saving each packet of data as a file on the file system. This way, if a file is corrupted due to loss of power, it can be ignored and the rest of the data remains intact. This poses a few issues, however, mainly related to the amount of data likely to be stored on the virtual machine. At 0.5 s between each piece of data, 1,728,000 files will be generated in 10 days. That at least means using a file system with an increased number of inodes to store this data (the current file-system setup ran out of inodes at ~250,000 messages and 30% disk space used). It is also hard (though not impossible) to manage.

    Are there any other options? Are there database engines that run on Debian that would not get corrupted by power cuts? Also, what file system should be used for this? ext3 is what is used at the moment. The software that runs on the virtual machine is written in Java 6, so hopefully the solution would not be incompatible.
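
    One cheap knob to try before redesigning the storage (a hedged sketch - the device name and mount point are made up, and it only helps if the host isn't caching the guest's writes behind its back): ext3's full data journaling plus write barriers narrows the window in which a power cut can leave a file readable but logically corrupt, at some throughput cost.

        # /etc/fstab entry for the buffer partition
        /dev/vdb1  /var/spool/buffer  ext3  data=journal,barrier=1,noatime  0  2

        # the data= mode cannot be switched on a live remount, so unmount and mount again
        umount /var/spool/buffer && mount /var/spool/buffer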

    Read the article

  • HP Proliant G7 hardware RAID configuration automation with ribcl

    - by karthik
    I have been trying to automate hardware RAID configuration of HP ProLiant machines before OS installation (so I cannot use hpacucli). SSHing into iLO 3 doesn't offer an option for RAID configuration. I use RIBCL, but there is no command for RAID config; however, I see this under the command GET_EMBEDDED_HEALTH:

        <STORAGE>
          <CONTROLLER>
            <LABEL VALUE="Controller on System Board"/>
            <STATUS VALUE="OK"/>
            <CONTROLLER_STATUS VALUE="OK"/>
            <SERIAL_NUMBER VALUE="50014380215F0070"/>
            <MODEL VALUE="HP Smart Array P420i Controller"/>
            <FW_VERSION VALUE="3.41"/>
            <DRIVE_ENCLOSURE>
              <LABEL VALUE="Port 1I Box 1"/>
              <STATUS VALUE="OK"/>
              <DRIVE_BAY VALUE="04"/>
            </DRIVE_ENCLOSURE>
            <DRIVE_ENCLOSURE>
              <LABEL VALUE="Port 2I Box 0"/>
              <STATUS VALUE="OK"/>
              <DRIVE_BAY VALUE="01"/>
            </DRIVE_ENCLOSURE>
            <LOGICAL_DRIVE>
              <LABEL VALUE="01"/>
              <STATUS VALUE="OK"/>
              <CAPACITY VALUE="68 GB"/>
              <FAULT_TOLERANCE VALUE="RAID 0"/>
              <PHYSICAL_DRIVE>
                <LABEL VALUE="Port 1I Box 1 Bay 3"/>
                <STATUS VALUE="OK"/>
                <SERIAL_NUMBER VALUE="6TA0N3SZ0000B231CYDT"/>
                <MODEL VALUE="EH0072FAWJA"/>
                <CAPACITY VALUE="68 GB"/>
                <LOCATION VALUE="Port 1I Box 1 Bay 3"/>
                <FW_VERSION VALUE="HPDH"/>
                <DRIVE_CONFIGURATION VALUE="Configured"/>
              </PHYSICAL_DRIVE>
            </LOGICAL_DRIVE>
          </CONTROLLER>
        </STORAGE>

    My question is: is there a way to modify/create this XML (say I want two logical drives, with one spare) so that it takes effect when I reboot the server? If this approach is not correct, are there any other ways to automate hardware RAID configuration?

    Read the article

  • What is the difference between running a Windows service vs. running through shell?

    - by Zack
    I am trying to troubleshoot an issue on a Windows 2008 server where attempting to connect to a "Timberline Data Source" ODBC driver crashes if the call is made in a "service" context, but succeeds if the call is initiated manually in a Remote Desktop session. I have set the service to run as my user. I'm wondering: all else being equal (user, machine, etc.), are there any fundamental security/environment differences between running a process as a service vs. manually?

    --- Implementation Details ---

    In case it is helpful for anyone: I had a system that started as an attempt to connect to a Timberline database using ODBC from a Python CGI script called via IIS 7. The script itself works fine; however, as soon as I attempt to perform the ODBC connect call, the script crashes without throwing an exception. The script was able to connect fine when executed via the command line. The same thing happened when using a C#/.NET service, or when attempting to run via Apache, Windows Scheduler, or even a third-party scheduling tool.

    With the last option (the third-party scheduling tool, pycron) I set the service up to log in as my user and had the same issue (I confirmed via Task Manager that the process was, in fact, running as me). It just doesn't make sense to me why a service, which should be running as my user, still appears to be operating in a different security context or environment.

    Also, if it's important, the Timberline database is referenced by computer name on the network ("\\timberline-server\Timberline Office\Accounts\AT" or something to that effect). I also realized that, as Joel pointed out, the server DOES have a mapped drive ("Y:", which is mapped to "\\timberline-server\Timberline Office"). The DSN is set up at the "System DSN" level which, according to the ODBC Administration Tool, means that the DSN is available to users and services.

    Since I'm not allowed to answer this question yet, I'll post the solution that I arrived at: as Joel Coel mentioned, there actually was a mapped-drive scenario. I didn't realize this because the DSN specified a path using UNC. However, it seems as though the actual Timberline driver referred to a mapped drive. Since services don't start with the mapped drive, I was forced to add the drive-mapping code into my service. Since it was written in Python, I used code from a Stack Overflow answer that was able to map the drive on the fly.
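
    For anyone hitting the same wall: per-user drive letters are created at interactive logon, and services never inherit them, so the service has to map the drive itself before the driver touches it. A hedged sketch using plain net use (share name taken from above; a Python service can shell out to the same command at startup instead of calling the Win32 API):

        net use Y: "\\timberline-server\Timberline Office" /persistent:no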

    Read the article

  • Repair corrupt hard disk on Mac without install CD

    - by Sarah
    The hard disk of my late-2009 MacBook Pro appears to have become corrupted. I am traveling and do not have my install CD (and won't for several weeks, nor will I be anywhere near an Apple store). The hard disk is not the original, which failed in June 2011; it's some Hitachi replacement installed by IT.

    History: I was typing an email this afternoon when my computer suddenly started making soft clicking sounds and then froze. I was not moving it around. I rebooted, which took a while. I heard more clicking sounds and the computer froze at least once again. It's now kind of working, with mdworker sucking up one CPU. There are no awkward hard-drive sounds when I run Chrome or play music. However, when I launched Stickies, I found no trace of my saved Stickies. I ran a live disk verification from within Disk Utility, and it reported problems.

    Problem: as reported, I don't have access to an installation disc and am nowhere near an area where I can get one for at least two weeks. I have the option of asking someone to go to some trouble and expense to get one for me, but I'm not sure it's worth it: I've read that I can use fsck from single-user mode to repair the disk. Should I just try this? Is it risky? I'm concerned that the clicky sound portends imminent (mechanical) hard-drive failure, so it may not be worth doing a silly repair. This hard disk is backed up, but I definitely won't be able to access the backup while traveling. I'd like to maximize the probability that I can keep using my computer (and all its current files) while traveling.

    Update: I bit the bullet and ran fsck -fy from single-user mode. It only needed one pass (modification) to reach the "okay" stage. However, rebooting took nearly 5 minutes and involved several rounds of scratchy sounds and a few bad clicks. I'm now back to kind of using my computer (the same files are missing as before). When I ran a live disk verification from Disk Utility this time, however, it reported that the volume appears to be OK.

    Am I right to infer from the scratchy sounds, however, that my hard drive is still rapidly on its way out? Is there anything else I can do to increase its functionality over the next few weeks?

    Read the article
