Search Results

Search found 16329 results on 654 pages for 'b long'.


  • Is domain-transfer inherently safe for downtime when the name servers remain the same?

    - by jlmt
    I've been reading around this topic to understand whether there is any chance of downtime during an upcoming domain transfer for 15 live and very critical domains. In our case there are three companies involved: CompanyA is the original registrar and DNS host, CompanyB is the new DNS host, and CompanyC is the new registrar. I've already changed the nameservers for all domains to those of CompanyB. We suffered some downtime because CompanyA deleted their hosted DNS for our domains directly after the change, but the changes propagated and we're now able to configure our DNS with CompanyB. From what I understand (please correct where wrong!):
      - There exists an SOA record that points oneofourdomains.com to ns.companyb.com. That record is maintained and authoritatively hosted by the ccTLD registry for the domain (e.g. Verisign for .com).
      - CompanyA currently has the ability to change the SOA record because they're the registrar.
      - There exist NS records for oneofourdomains.com, which also relate to the link from domain name to nameserver, are similarly hosted by the ccTLD, and which CompanyA are also able to change while acting as registrar.
      - Neither CompanyB nor CompanyC currently have any control over the SOA or NS records.
      - CompanyA are unable to cause us (DNS) problems during the transfer by dropping service early, because they are not the authoritative source for the SOA and NS records.
      - When we transfer the domains, it's administrative control of the SOA and NS records that will be transferred to CompanyC.
      - As long as we advise CompanyC that the SOA and NS records must not change (they must keep pointing to CompanyB's nameservers), there's no need for any kind of DNS change, and therefore no possibility of downtime.
    Is my understanding of this correct? My fear is that CompanyA will somehow cut us off again, and their support dept hasn't given me much confidence in their understanding of the topic.
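    One quick way to double-check which delegation the registry is actually publishing (a rough sketch from the command line; the domain name is the placeholder used in the question) is to query a .com registry server directly and compare it with what the registrar has on file:

        # ask one of the .com registry servers for the published NS delegation
        dig +short NS oneofourdomains.com @a.gtld-servers.net
        # cross-check the nameservers recorded with the registrar
        whois oneofourdomains.com | grep -i 'name server'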

    Read the article

  • Big and reaaaally strange problem with a web server (host InMotion Hosting)

    - by altar
    Hi. I have a terrible problem that I have been trying to solve for three days: I browse my own web site and after a while I cannot access the web site. AT ALL! I can only see a 501 error message: "Method Not Implemented. GET to / not supported. Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request." Once I get that error the site is totally and permanently inaccessible in that browser! Rebooting, restarting the browser, clearing the cache, clearing all history and cookies, etc., do not help. I have reproduced it on 4 different computers. Three computers are in one city, the 4th is in another city. Two different ISPs as well. One computer runs Linux, the others are on Windows (XP and 2000). Browsers are FF 3 to FF 3.5 and IE 8. The error is ALMOST reproducible on demand (for me at least). It appears when I browse the forum under certain circumstances. I don't know what these circumstances are, but if I browse it long enough (10 sec to 5 minutes) it eventually appears. Just to make it clear: once the error appears (while browsing the forum), the whole web site becomes inaccessible, not only the forum! My host is not willing to help because they say they cannot reproduce the error. I sent screenshots but they don't care. NEWS: Resetting the browser's settings from Tools > Clear private data didn't work. However, when I cleared the same settings (more exactly, the cookies) from the special menu that appears when you right-click the website's icon, it worked. So it was something related to a cookie, BUT it manifests in all browsers (FF, IE, Opera), so it cannot be a browser-related problem.
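    A way to narrow down which cookie triggers the 501 (a minimal sketch; the domain and cookie name are hypothetical, since neither is given in the question) is to replay the request from the command line with and without the suspect cookie and compare the status lines:

        # clean request, no cookies at all
        curl -sI http://www.example.com/forum/
        # replay with one suspect cookie at a time, copied from the browser
        curl -sI -b 'forum_session=PASTE_VALUE_HERE' http://www.example.com/forum/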

    Read the article

  • How do I prevent my computer from freezing when it starts to swap?

    - by cdauth
    I work as a Java programmer, so I often have to run several programs at the same time that consume a lot of memory. When my memory is full and Linux starts swapping, my computer almost completely freezes. I can see that it is heavily writing to the hard disk and everything reacts really slowly, often not at all. Moving the mouse in X sometimes doesn't work at all, sometimes it has a delay of several seconds, and clicking usually has a delay of several minutes. Sometimes it is possible to change to a TTY (with a long delay), where I can usually type without delay, but when I try to log in, it takes several minutes after typing in the user name until the password prompt appears, and usually an error message then tells me that the login timed out. So the only option is usually to restart the computer. I noticed that other intensive writing to the hard disk also significantly slows down my computer. I have sometimes used rsync to limit the bandwidth when copying files around on my own computer, as otherwise the system would be almost unusable. How can this be? At the moment it seems more useful to me to completely turn off swapping. That might crash some processes, which is unfortunate, but the alternative at the moment is to crash all processes by turning off my computer. I am using Gentoo Linux with kernel 3.6.2-gentoo, and I have a 10 GB swap partition on an HDD.
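    A minimal sketch of the usual knobs for this situation (standard sysctl names; the file path is the common default, so treat it as an assumption for a particular Gentoo setup):

        # make the kernel far less eager to swap; takes effect immediately
        sudo sysctl vm.swappiness=1
        # persist the setting across reboots
        echo 'vm.swappiness=1' | sudo tee -a /etc/sysctl.conf
        # or, as contemplated in the question, disable swap entirely
        sudo swapoff -a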

    Read the article

  • If I run two monitors from two different graphics cards, can I still have TwinView?

    - by rumtscho
    I am planning to get a second 2560x1440 monitor for home. The trouble is, I only have 1xDVI and 1xVGA on my graphics card (a 250 GT). I don't want to buy a new graphics card until the prices for the 500 series have stabilized, so probably not before summer (or will it happen earlier? I don't remember how it went for other series, and I couldn't find long-term price history for video cards). The solution I had in mind is to take the 7600 GS from my old PC, which also has 1xDVI and 1xVGA, and run each monitor on a separate card. I have never done that, and I was wondering 1) whether I will be able to run the monitors in TwinView then, or whether I will be stuck with separate X sessions, and 2) whether there are other disadvantages compared to a single dual-head graphics card. (I am using the proprietary driver because I need Compiz.) As an aside, how do I find out whether the DVI port on the old graphics card is dual-link?

    Read the article

  • Hard drive causing BSOD

    - by JoshIrving
    I've come across a problem after building my new PC and installing a clean Windows 7. I originally planned on a RAID 1 or 0, but after further research I decided against it. So I was left with two 1TB Western Digital Black SATA 6Gb/s hard drives. My plan now was to use my second hard drive as a backup (using Windows Backup or 3rd party software). I set both hard drives to AHCI in the BIOS and installed Windows 7. I went through the lengthy process of downloading and installing each driver manually (latest versions), using the motherboard disk for a list of what I need. After a few restarts and before installing any software, I took an image backup onto DVD and the second hard drive. I first witnessed the problem during the first scheduled Windows backup. The progress bar froze at about 70% (doc backup done, image backup in progress). It stayed still for 2 hours until it blue screened. The next time the backup froze, I tried shutting down. It logged me out and got stuck at the last step ("Shutting down" and the blue spinner) for an hour, until I did a hard shutdown. I later realised this had nothing to do with the backup. I ended up blue screening on almost every shutdown (at the same place). It turns out it's because of the second hard drive spinning down or turning off. The computer will now shut down properly, as long as I remember to read or write to the second drive before executing the shutdown. I've now set "Turn off hard disk after: Never" - no problems, so far. Do I have dodgy hard drive(s), or should I investigate the POWER_STATE_DRIVER_FAILURE BSOD - can it be a driver issue? AHCI?

    Read the article

  • Relinking a deleted file

    - by mbac32768
    Sometimes people delete files they shouldn't, a long-running process still has the file open, and recovering the data by catting /proc/<pid>/fd/N just isn't awesome enough. Awesome enough would be if you could "undo" the delete by running some magic option to ln that would let you re-link to the inode number (recovered through lsof). I can't find any Linux tools to do this, at least with cursory Googling. What do you got, serverfault?
    EDIT1: The reason catting the file from /proc/<pid>/fd/N isn't awesome enough is because the process which still has the file open is still writing to it. A delete removes the reference to the inode from the filesystem namespace. What I want is a way of re-creating the reference.
    EDIT2: 'debugfs ln' works, but the risk is too high since it frobs raw filesystem data. The recovered file is also crazy inconsistent: the link count is zero and I can't add links to it. I'm worse off this way, since I can just use /proc/<pid>/fd/N to access the data without corrupting my fs.
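    Short of re-linking the inode, a sketch of the usual salvage route via /proc (the PID and fd number are placeholders to be filled in from lsof):

        # find the fd number of the deleted-but-open file
        lsof -p 1234 | grep -i deleted
        # optionally pause the writer so the snapshot is consistent
        kill -STOP 1234
        # copy the current contents out through the still-open descriptor
        cp /proc/1234/fd/42 /tmp/recovered-file
        kill -CONT 1234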

    Read the article

  • Sendmail slow to accept emails

    - by Rich
    I have a PHP web app which is using SMTP to sendmail on localhost to send email. I would like sendmail to accept the mail request immediately and queue it for later sending, as I don't want user-facing request threads blocked on email. Sendmail is installed with the default settings on RHEL web servers. Sometimes sendmail blocks for a long time after the MAIL command is sent - sometimes taking 60 or 90 seconds to accept the mail. The time taken is usually very close to 60 or 90 seconds, which makes me think this is some kind of timeout. I have looked in the sendmail logs, and there are plenty of "deferred" emails, but nothing which looks responsible for this delay. How can I diagnose what is slowing down sendmail? How can I configure sendmail to always accept the mail immediately and to queue it for later sending? Update: I'm not sure, but it looks like this might be linked to aol.com addresses. I strongly suspect that sendmail is doing some kind of blocking recipient-address verification at the accept-email-for-sending stage. How can I disable that, so that sendmail doesn't block my UI threads? Update 2: This only seems to happen at busy times. Perhaps I am running out of sendmail threads or something? How can I check that?
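    A few quick measurements that help separate a DNS timeout from a queue problem (a hedged sketch; the log path is the RHEL default and aol.com is just the suspect domain from the question):

        # slow or failing lookups here point at DNS resolution as the culprit
        time dig +short MX aol.com
        # see what is sitting in the queue and why
        mailq | head -n 40
        # sendmail records per-message timing in the delay= field
        grep 'delay=' /var/log/maillog | tail -n 20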

    Read the article

  • sub application and virtual directory file permissions

    - by Zeus
    I have a website set up in IIS7, exampledomain.com. Under the application exampledomain.com lives a sub-application, cms. In a rather convoluted way, we have content in our cms system in this sub-app, under cms\content\{generatedfoldername}. So to access an image in this content, the full URL would be http://www.exampledomain.com/cms/cms/content/{generatedfoldername}/image.jpg (yes, cms twice...), and this works just fine. Now, we have a virtual directory under the parent website, called stuff, which points at the content of the cms. So I should be able to get to the image using the URL http://www.exampledomain.com/stuff/{generatedfoldername}/image.jpg. Unfortunately this gives a server 500 error: "There is a problem with the resource you are looking for, and it cannot be displayed." Whilst you do have to log into the cms system to access any of the admin pages within, I don't think the image files are protected by login, or else the first example URL wouldn't work, right? Also it's a server 500 error, rather than a 403. I'm sure I must be missing something obvious here - will the virtual directory be using the permissions defined in the parent application, or in the sub-application to which it is pointing? Or is there some other permission I may have missed? Sorry, that was a bit long - thanks for reading all the way down here! (I also must point out that I'm pretty new to server management.) Edit: also, we have <location path="." inheritInChildApplications="false"> specified in the web.config of the parent app, so hopefully it's not the issue described in this config file hierarchy article.

    Read the article

  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here.

        linvx$ ( ulimit -u 123; /bin/echo nst )
        nst
        linvx$ ( ulimit -u 122; /bin/echo nst )
        -bash: fork: Resource temporarily unavailable
        Terminated
        linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three )
        one
        two
        three
        linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three )
        -bash: fork: Resource temporarily unavailable
        Terminated
        one

    I speculate that the first 122 processes are consumed by Bash itself, and that the remaining ulimit governs how many concurrent processes I am allowed to have. The documentation is not very clear on this. Am I missing something? More importantly, for a real-world deployment, how can I know what sort of ulimit is realistic? It's a long-running daemon which spawns worker threads on demand, and reaps them when the load decreases. I've had it spin the server to its death a few times. The most important limit is probably memory, which I have now limited to 200M per process, but I'd like to figure out how I can enforce a limit on the number of children (the program does allow me to configure a maximum, but how do I know there are no bugs in that part of the code?)
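    As a rough way to size the limit (a sketch; note that on Linux the nproc limit is counted per user across all of that user's tasks, threads included), compare the current limit with what the account already runs under normal load:

        # processes owned by the current user
        ps -u "$USER" --no-headers | wc -l
        # tasks including threads, which is closer to what the limit really counts
        ps -u "$USER" -L --no-headers | wc -l
        # the current limit, for comparison
        ulimit -u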

    Read the article

  • Wifi antenna extension with F-connector/RG-6(RG-59) cable?

    - by rjz2000
    In an older house, the wire mesh in the walls surrounding the furnace behaves like a Faraday cage and blocks wifi signals. It is also difficult to lay new cable; however, there is television cable to multiple locations, because there was once a roof-installed television antenna. It would be relatively trivial to install the wifi router at the central distribution point, then have the antenna broadcasting/receiving the signal plugged in at each of the old television outlets. I assume that it would not be too difficult to find an adapter for SMA <- F-type connectors. The cable is actually RG-59 rather than RG-6, but I assume that it still has relatively good RF isolation along its length, which is no more than a couple hundred feet in any direction. Does anyone know of a problem with this idea? Will a router get confused if there is /too little/ interference between the two antennas? Is that length of cable (~100ft) too long for the signal a router broadcasts? I have seen that it is also possible to use old FiOS cable modems (~$30 each on eBay) to extend a network over television cable. However, that seems like a less elegant solution, and it might interfere with the UPnP and DLNA services I'd like to have work on a single network. Thanks if anyone has answers or suggestions before I try this project!

    Read the article

  • What is the falloff of subsecond throughput on Ethernet network interfaces?

    - by Kyle Brandt
    On a network interface, speeds are given in terms of data over time - in particular, bits per second. However, in the uber-fast world of computing, a second is kind of a really long time. So, for example, given a linear falloff, a 1 Gbit per second interface would do 500 Mbit per half second, 250 Mbit per quarter second, etc. I imagine that at certain units of time this is no longer linear. Perhaps this is set by Ethernet frequencies, system clock speeds, interrupt timers, etc. I am sure this varies depending on the system - but does anyone have more information or whitepapers on this? One of the main reasons I am curious is to understand output drops on interfaces. Even if the speed per second is much lower than the interface can handle, perhaps there are spikes that cause drops for only small numbers of milliseconds. Perhaps various coalescing would hide this effect - or perhaps increase it on the receiving interface? Do queues make a difference here? Example: so if this is linear down to the millisecond, we would have 1 Mbit/ms, and if Wireshark isn't distorting what I see, should I see drops when I have a spike beyond 1 Mbit?
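    For spotting these microbursts after the fact, a sketch of the counters worth watching (the interface name is a placeholder):

        # NIC-level statistics, including ring-buffer overruns and discards
        ethtool -S eth0 | grep -iE 'drop|discard|fifo'
        # kernel-level RX/TX packet and drop counters
        ip -s link show dev eth0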

    Read the article

  • Strange boot problems on 6 month old setup

    - by Balefire
    I've already exhausted my knowledge on this one, so forgive me if this post is a bit long. I built a computer 6 months ago for my wife and it worked fine until last week. Then it randomly shut down and would lock up on the boot screen while trying to boot. I cleared CMOS and it allowed me to do startup recovery, but it "failed to fix the issue", so I reinstalled Windows on the HD (moving the old install to windows_old). It worked, so I started installing drivers again, but when I restarted to finalize the installations it locked up again. This time, I took the hard drive and hooked it up to my computer, backed up all her files, and then formatted the hard drive before reinstalling it (again I had to clear CMOS to let me boot from disk). It installed Windows, I installed drivers, and it worked for a few hours but then died during startup again. So then I got a new HD, cleared CMOS, and installed clean again, with the same result as the time before: it worked for a few hours, installed Windows updates, then crashed on the 3rd or 4th time turning it on. I decided next to try reinstalling and then going online to see if there were any updates for the BIOS or drivers on the motherboard, but now I can't even get it to bring up the boot menu, so now I'm just left wondering: was it the motherboard, or is it the CPU, or the RAM? The problem was strangely intermittent, so I thought it had to be a software issue, since a hardware issue would ALWAYS fail to boot, right? But now it seems to be a hardware issue, because it's not bringing up anything. Any suggestions? System:
      - Windows 7 64-bit
      - Gigabyte 970A-DS3 motherboard
      - AMD Phenom II X4 955 Deneb 3.2GHz quad-core processor
      - GeForce GT 430 (Fermi) 1GB video card
      - 500W PSU
      - 2 x G.SKILL Ripjaws X Series 4GB 240-pin DDR3 1600 RAM

    Read the article

  • VPN with VLANs? [closed]

    - by Craig
    As usual, I'm sure I'm in way over my head on this one. My networking skills are limited, so bear with me if you will. What I have are a few testing servers at my house as well as at a friend's house that I want to link together so they can see each other (VPN, right? I've done those before). We want to be able to see all the servers and work with them from either location. All the servers also need to be able to see each other. But we don't want to see each other's PCs, printers, PS3s, etc. How do we pull that trick off? Multiple VLANs? ... subnets? ... what? If hardware matters, I have an old PC I was planning on loading pfSense onto, because my current el-cheapo router doesn't support VPN. The VPN linking the houses is about the only thing I'm sure of. Beyond that, I'm lost. I'm not a complete noob, but like I said, I'm not so sharp with the more complex networking. I do however read well... so use lots of descriptive words and feel free to link away to long dry articles if necessary. :-)

    Read the article

  • retain last used path to location for saving files in Windows 7

    - by Mark Miller
    I am using Microsoft Office 2010 and Windows 7 on a Dell PC. I am opening a bunch of MS Word files one at a time, copying the data tables therein, pasting the data into Excel and saving the Excel files as comma-delimited text files. I am creating a separate Excel file for each Word file. The path to the folder containing the saved comma-delimited files is quite long, something like this: c:\users\me\aa\bb\cc\dd\ee\ Every time I open Excel and save a new comma-delimited file I have to re-navigate the entire path (c:\users\me\aa\bb\cc\dd\ee). In the past Windows seemed to remember the last used path, saving a lot of tedious keystrokes. In fact, I think Windows did this for me as recently as last week, albeit on a different computer. Can I apply a setting in Windows somewhere asking it to offer the last used path as a default when saving files, so I do not have to re-navigate the entire directory structure to save each new comma-delimited file? If I can, how so? Where is the option for specifying that setting? Thank you for any help.

    Read the article

  • How to make Microsoft JVM work on Windows 7?

    - by rics
    I am struggling with the following problem: I cannot install MS JVM 3810 properly on Windows 7. When I start Internet Explorer 8 without starting any Java 1.1 programs, choosing Java custom settings under Internet Options causes the browser to crash. I have some Java 1.1 programs that work well in Internet Explorer 8 on Windows XP after the installation of MS JVM 3810. I know that it is not advised to use this old JVM, but porting the programs to newer Java is not a short-term option, since they contain 3rd-party components. A complete rewrite is a long-term plan. Strangely, jview and appletviewer (jview /a) work from a console, so MS JVM 3810 is not completely busted; IE 8 just does not like it. The problem with the appletviewer is that it cannot connect to the server even though both signed and unsigned content in Java custom settings have been set to Enable all. (Since Java custom settings was unreachable due to the crash, the modifications - including My computer - were performed through the registry and pre-checked to behave correctly on Windows XP and Internet Explorer 8.) If jview were working, then I could at least think of a workaround. Is there a way to configure MS JVM or jview properly on Windows 7? Other options would be: checking the Internet Explorer 9 Beta; using VirtualBox with Windows XP and an older IE in it; delaying the Windows 7 upgrade; ... Update: Finally we have modified all the programs to work both as applets and as applications. This way the programs can still be used from the browser on older Windows versions. On Windows 7 the applications are started from the desktop. Installation to all user machines can easily be solved, since they already have a large common application drive. The code update is fortunately only a few lines of modification: including a main method in the applet class. Furthermore, instead of the starting HTML page, a .bat file is used to set the classpath before startup with jview.

    Read the article

  • Nginx and Wordpress side-by-side with static directory alias?

    - by user117161
    I'm an Nginx novice, but I have it set up with Wordpress Multisite (subdirectories) and php-fpm, and it's working great as is. This lets me set up Wordpress sites off the web root: domain.com/site1 (a Wordpress network single site, which renders as expected), domain.com/site2 (ditto), etc. Concurrently, I can easily create static files in the web root that don't conflict or interact with Wordpress, and they are also rendered normally: domain.com/hello.html renders normally, domain.com/hello.php renders normally (including PHP processing), and domain.com/static/hello.php renders normally (as long as "static" isn't a WP single-site name). What I'd like to do, and this is where I'm out of my depth with nginx.conf, is create a directory domain.com/static off the root and put static sites in there: domain.com/static/site3, domain.com/static/site4. I'd then like Nginx to check each request that comes in at the root before handing off to Wordpress. So a request comes in for domain.com/site3; Nginx checks whether it exists in the /static folder (i.e. whether domain.com/static/site3 has static content), and if so, it serves that content while maintaining the root URI - serving domain.com/site3 with the content from domain.com/static/site3. If not, it lets Wordpress check whether /site3 is a Wordpress single network site, as it does now, and the process continues normally. In nginx.conf, in the server section, I start with this try_files rule:
        location / { try_files $uri $uri/ /index.php?q=$uri&$args; }
    I then include a bunch of Wordpress-specific rules as identified at http://codex.wordpress.org/Nginx under the subdirectory section. I can see that rewrite rules might take care of it easily, but in my experimentation I've only achieved a bunch of looping (/static/static/static, etc.) and managed to bypass Wordpress if the looping stopped. Sorry if this is a very long-winded way of asking a simple question, but I'm definitely learning some of this stuff for the first time. Thanks!

    Read the article

  • What is the correct approach I should use for an application that requires Amazon S3 uploads and SimpleDB data management?

    - by Luis Oscar
    I am developing an application for iOS, and that is going smoothly; the problem is that I am very new at server-side things. I am totally confused about how to correctly use Amazon Web Services for this purpose. What I want to do is very simple: I want my application to be able to query a servlet hosted in EC2 to retrieve pictures and data, based on some criteria, from S3 and SimpleDB respectively. The application should also be able to upload pictures into an S3 bucket and register the information in SimpleDB. My main concerns are security and costs. So far I have been using the Amazon Token Vending Machine, but I haven't been successful in trying to customize it, and while researching I discovered that in the long run it is very expensive. The ultimate goal is to handle a "social" picture service for my iOS application: registering new users, authenticating those users, and seeing what permissions they have to which pictures in the bucket - all this without having to worry about third parties accessing the private pictures of my users. Sorry for this question, but I am really clueless about how to handle this... I have tried reading many articles, but all this server stuff looks very scary.

    Read the article

  • Tried to install Mint to a Flash Drive. Now I can't boot from the main hard disk.

    - by Dan
    Hello, all. I'm kind of new to Linux and I need some help. I wanted to install a Linux distro to a flash drive so that I can have a portable OS with all my settings, programs, etc. wherever I go. So I fired up a Linux Mint Live CD and installed Mint to the flash drive, and this seems to work OK. But now, whenever I try to boot up my system normally without the flash drive plugged in, it doesn't seem to work. It basically hangs for a bit, and then I get the following prompt:
        error: no such device: (some long hex val)
        grub rescue>
    However, when I power my system up with the USB drive plugged into the computer, it gives me an option between using the OS installed on my USB drive and the OS installed on my HD. Selecting the latter, everything loads up just fine. I'm guessing that installing Mint to the flash drive somehow messed with my native GRUB installation, but, again, I'm kind of new to Linux, so I'm not sure exactly why. Any help is greatly appreciated.
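    A minimal sketch of the usual fix, given that the hard-disk install is still bootable through the USB stick's menu (the device name /dev/sda is an assumption - it should be the internal disk, not the flash drive): boot into the hard-disk install that way, then put its own GRUB back on the internal disk:

        # reinstall GRUB to the internal disk's boot sector
        sudo grub-install /dev/sda
        # regenerate the boot menu from the operating systems actually installed
        sudo update-grub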

    Read the article

  • AMD processors with a graphics card bundled [closed]

    - by shybovycha
    Sorry for posting this question here - I just don't know which StackExchange site I should be writing to. I've heard that AMD has created processors with a video card bundled. These processors should work about as fast as regular processors with a discrete video card, but AMD's should use less power and give off less heat. Some googling around gave me results like "AMD A-series processors"; they were said to be built using the technology I described above. On the other hand, AMD drivers are released infrequently and their quality is not very good. I am a game developer and web developer, so I need a powerful processor, a capable graphics card and a lot of RAM on board (to make it possible to create a sample Grails application or some 3D models in Maya/Cinema4D, for instance). Still, I want my battery to be long-lived, so power usage is a bit critical for me. So, my questions are: is there any processor technology like the one I've described, and which series uses it (if it exists)? And which processor would fit the laptop best for the purposes mentioned above: an AMD one, or an i5/i7 with an NVIDIA graphics card?

    Read the article

  • Fake demonstration software for the command line

    - by Joe
    I'm looking for some software that would be useful for giving demonstrations. I regularly have to show the effects of scripts etc. to classes while talking about their effects, and equally regularly I have finger trouble and have to rewrite various commands - wasting class time and general energy. I'd like to be able to record a sequence of commands in advance, and then play them back at the speed of my choosing. So I might have a file that contains the commands:
        echo "hello world!"
        ls
        ls -l
        ls -l | sort
    I'd like to be able to play these commands back by typing similar ones in. So I'd have a blinking command prompt, and if I typed 'echo "hxxx' the command prompt would read home$ echo "hell - and if I typed any other letters the terminal would fill up with the remainder of the command, until I press Enter, when it executes the command. The point is that even if I screw up the command when typing it, the command that I'd prepared in advance would be executed. My question is: does similar software exist for giving demonstrations? Or even, is this an easy thing to script up...? EDIT - two quick things: first of all, I'm on OS X, but it would be nice to get a general solution for other people who arrive here from Google. And second, a lot of the comments/answers are concentrating on, in effect, making it fast and easy to enter long commands by means of hotkeys and the like. Actually I'd like it to at least look like I'm typing live - that's why I put in the bit about the one-to-one key mapping, but I don't think I explained that quite as well as I could have...
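    This is small enough to script up; here is a minimal sketch in bash (not an existing tool - the file format is simply one prepared command per line, and the prompt shown is fake): each keystroke reveals the next character of the prepared command, and Enter runs the real thing regardless of what was actually typed.

        #!/usr/bin/env bash
        # Usage: ./demo.sh commands.txt
        while IFS= read -r cmd; do
            printf '$ '
            i=0
            while IFS= read -rs -n1 key < /dev/tty; do
                [ -z "$key" ] && break              # Enter: stop echoing, run the command
                if [ "$i" -lt "${#cmd}" ]; then
                    printf '%s' "${cmd:$i:1}"       # show the next prepared character
                    i=$((i + 1))
                fi
            done
            printf '\n'
            eval "$cmd" < /dev/tty                  # execute the prepared command
        done < "${1:?usage: demo.sh commands-file}"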

    Read the article

  • Cron job checking for changes in Git repository

    - by HNygard
    We have just moved our server configs to a Git repository. Therefore there should not be any changes in any of the repository folders. I was thinking about how I could set up a cron job to check for any uncommitted changes. How could a cron job be set up to check for changes in a Git repository? Grepping the output of the git status command might just do it; grep and cron jobs are not my strong side. Here are some sample outputs from git status. Standing in the folder containing the git repository (e.g. /path/gitrepo/) with changed files:
        $ git status
        # On branch master
        # Changes not staged for commit:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   apache2/sites-enabled/000-default
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       apache2/conf.d/test
        no changes added to commit (use "git add" and/or "git commit -a")
    Standing in the folder when there are no changes:
        $ git status
        # On branch master
        nothing to commit (working directory clean)
    Update: Being synced up with origin is not important. There should be no local changes. Local files that must be in place go into the .gitignore file. In addition to the server configs there are also git repos for content (static web sites, web apps, Wordpress, etc.). None of the repositories should have local changes. We might use Puppet in the long run, since it is being used for development of one of the web apps.
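    A rough sketch of such a check (repository paths and the alert address are placeholders), built on git status --porcelain, which prints nothing at all when the working tree is clean:

        #!/bin/sh
        # check-git-clean.sh - mail a warning when a config repo has uncommitted changes
        for repo in /path/gitrepo /path/other-repo; do
            changes=$(cd "$repo" && git status --porcelain)
            if [ -n "$changes" ]; then
                printf 'Uncommitted changes in %s:\n%s\n' "$repo" "$changes" \
                    | mail -s "git check: $repo is dirty" admin@example.com
            fi
        done

    Hooked into cron with something like the following entry, it stays silent as long as everything is committed:

        # run hourly via crontab -e
        0 * * * * /usr/local/bin/check-git-clean.sh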

    Read the article

  • How to stop my wireless adapter from receiving DHCP settings from the router (Windows)

    - by baobeiii
    Hi, I have a Windows 7 computer which is connected via VPN to an OpenVPN server which happens to be in another country. I have all internet traffic being routed from my computer through the VPN to the server. However, DNS queries are not going through the VPN; they are instead going directly to my ISP's DNS via a route outside of the VPN tunnel. This is happening because my wireless adapter is configured to obtain DNS server addresses automatically. The router that stands between my computer and the internet happens to have a DHCP server running on it that is assigning my computer the DNS addresses of the ISP. The issue is, I haven't been able to stop the wireless adapter on my computer from receiving the DNS settings from the router. I've tried selecting 'Use the following DNS server addresses' and then just leaving them blank, but ipconfig /all shows me that this hasn't worked and I'm still getting DNS from the router. So is there any way to completely stop my Windows wireless adapter from receiving these settings from the router? I have the OpenVPN server pushing to my computer's tun adapter the DNS that it should be using. I'd rather solve this in a way that doesn't involve disabling the DHCP server on the router or fiddling with the router. The reason is that I'm on a laptop and I want my VPN not to leak DNS even when I'm out, for example in wireless hotspots. I know that if I could just force the wireless adapter to ignore the router's DHCP server, then my DNS queries would go through the tunnel to the DNS address pushed by the OpenVPN server. Sorry, I know that's long-winded; if you have any ideas please do tell me. Thanks and merry Xmas.

    Read the article

  • How do I set up postfix to store e-mail in a file instead of relaying it?

    - by GomoX
    I want to run a staging copy of a production server in a local environment. The system runs a PHP application which sends e-mail to customers in various scenarios, and I want to make sure no e-mail is ever sent from the staging environment. I could tweak the code so it uses a dummy e-mail sender, but I'd like to run the exact same code as the production environment. I can use a different MTA (Postfix is just what we use in production), but I'd like something that is easy to set up under Debian/Ubuntu :) So, I'd like to set up the local Postfix install to store all e-mail in (one or more) files instead of relaying it. Actually, I don't really care how it's stored, as long as it's feasible to check the e-mail that was sent. Even a setup option that tells Postfix to keep the e-mail in the mail queue would work (I can purge the queue when I reload the staging server with a copy from production). I know this is possible, I just haven't found any good solution online for what seems like a fairly common need. Thanks!
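    One common pattern for this (a hedged sketch, not the only way: it assumes a local user named staging exists, and that regexp lookup tables are available in the Postfix build, which they normally are) is a catch-all virtual alias that rewrites every recipient to a local mailbox, so everything lands in /var/mail/staging instead of leaving the machine:

        # send every recipient, whatever the domain, to the local "staging" user
        echo '/.+/ staging' | sudo tee /etc/postfix/virtual_catchall
        sudo postconf -e 'virtual_alias_maps = regexp:/etc/postfix/virtual_catchall'
        sudo service postfix reload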

    Read the article

  • How to get Subversion repository from svn:// and https://?

    - by Hikari
    I know these are noob questions, but I have never had my own Subversion server running before and I'm kinda lost. I installed VisualSVN on Windows, but it doesn't support the svn:// protocol by default, only HTTP or HTTPS. It is working fine over HTTP, and I'm able to manage it from its management tool, see its repositories and get their HTTP-based URLs, and from that I'm able to use Tortoise to check out and check in. I'm able to check out from a repository URL using Tortoise: http://Main:90/svn/HikariKrumo/ But I need the svn:// protocol for Redmine to access it. Redmine says it supports http://, but it reports this error message: "The entry or revision was not found in the repository." And I need HTTPS to access it from the Internet. If I can get Redmine to access it via svn://, I can just configure it to use HTTPS in place of HTTP, and I hope it all works. I like VisualSVN because of its management tool, but I can use another Subversion distribution if needed, as long as it supports svn:// and https://. It's driving me crazy, because it should be simple but I can't get it to work.
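    If running a plain svnserve daemon next to (or instead of) VisualSVN is acceptable, a minimal sketch for exposing svn:// looks like this (the repository root path is an assumption - point -r at the directory that actually contains the repositories):

        # serve everything under the root over svn:// (TCP port 3690)
        svnserve -d -r C:\Repositories
        # a repository named HikariKrumo would then be reachable as:
        #   svn://Main/HikariKrumo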

    Read the article

  • How do I get to the bottom of network latency and bandwidth issues

    - by three_cups_of_java
    I recently moved two blocks south. That move took me from Comcast to Broadstripe (both high-speed cable internet providers). Comcast was pretty good. Broadstripe sucks. I called them on the phone, and they basically brushed me off (politely). I want to come to them with some numbers, so I can say more than just "it's really slow". I still have access to my old Comcast service, so I can run the tests using both providers. Here's what I'm seeing with my new Broadstripe service:
      - When I browse to most sites, there is a long delay (5-10 seconds) before the page starts loading in my browser.
      - The speed test tells me I have 12 megs down (bullshit).
      - I have a server at my office. I just downloaded some files from it (using scp on the command line), and it said I was getting 3.5 KB/s.
    I'm an experienced programmer and spend most of my days on the command line and in vim. Networking, however, is not a strong point. I've played around with traceroute, but I'm not sure if that's the right tool to use. I have access to servers all over the country (I would just use Amazon EC2 to set up a test server), and I prefer to use Ubuntu for my testing. How can I come up with some hard numbers to show Broadstripe how crappy their service is?
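    A few reproducible measurements that make for good hard numbers (a sketch; the hostnames are placeholders - point them at your office server or an EC2 instance, and run the same commands over both connections):

        # break page load time into DNS, connect and time-to-first-byte
        curl -s -o /dev/null -w 'dns:%{time_namelookup}s connect:%{time_connect}s ttfb:%{time_starttransfer}s total:%{time_total}s\n' http://www.example.com/
        # per-hop latency and packet loss along the path
        mtr --report --report-cycles 50 server.example.com
        # raw throughput to a server you control (run "iperf -s" on the far end first)
        iperf -c server.example.com -t 30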

    Read the article
