Search Results

Search found 9170 results on 367 pages for 'world of goo'.


  • What is the IPv6 equivalent to IPv4 RFC1918 addresses?

    - by Kumba
    Having a hard time wrapping my head around IPv6 here. A lot of the lingo seems targeted at enterprise-level IPv6 deployments, discussing link-local, site-local, global unicast, scopes, etc. There is not a lot of solid information on really small networks, like home networks. I want to check my thinking and make sure I am getting the correct translations from IPv4-speak to IPv6-speak.
    The first question is: what's the equivalent of RFC 1918 for IPv6? Initial searches suggested there was no equivalent. Then I stumbled upon Unique Local Addresses (RFC 4193), which states that all ULAs should be assigned the prefix fc00, followed by a 40-bit random number in the routing prefix. This random number is to "prevent collisions when two IPv6 networks are interconnected" -- again, another reference to an enterprise-level function. If I have a small local LAN at home, numbered using 192.168.4.0/24, what's my equivalent in IPv6's ULA scope? Assuming I will never, ever tie that IPv6 address into the real internet (a router will NAT and firewall it), can I ignore the RFC to an extent and go with fc00::4:0/120? It also seems that any address in fc00::/7 is meant to be globally routable. Does this mean I'll need extra protections so my router does not automatically start advertising these private IPv6 addresses to the world?
    Second question: what's this link-local thing? Reading suggests a default-assigned address in the fe80::/10 range whose last 64 bits are derived from the interface's MAC address. It seems to be required, too, but I'm annoyed by the constant discussion of it in relation to enterprise networks.
    Third question: what is the scope ID for? It seems to be yet another term tossed around in relation to enterprise networks, especially when interconnecting them, but there is almost no explanation at the smaller home-network level. Can I see a scope ID and CIDR notation used together, i.e. fc00::4:0/120%6, or are scope IDs only supposed to be applied to a single /128 IPv6 address?
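    (A minimal sketch of what RFC 4193 actually asks for, assuming a bash shell with od available: generate the 40-bit Global ID once, keep it, prepend fd - the locally-assigned half of fc00::/7 - and carve one /64 per LAN segment out of the resulting /48. The subnet number 4 below simply mirrors the 192.168.4.0/24 example; nothing here is routed or advertised by itself.)

        # one-off: build a ULA /48 from 40 random bits (keep the result, don't regenerate it)
        GLOBAL_ID=$(od -An -N5 -tx1 /dev/urandom | tr -d ' ')
        echo "ULA /48 prefix: fd${GLOBAL_ID:0:2}:${GLOBAL_ID:2:4}:${GLOBAL_ID:6:4}::/48"
        # one /64 per segment, e.g. subnet 4 for the old 192.168.4.0/24
        echo "home LAN /64:   fd${GLOBAL_ID:0:2}:${GLOBAL_ID:2:4}:${GLOBAL_ID:6:4}:4::/64"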

    Read the article

  • How to get a German QWERTY layout on Windows?

    - by Arturas M
    Well, I'm used to the world-standard keyboard, which is QWERTY and not QWERTZ... But on Windows I can't find a choice for German input that would be QWERTY, not QWERTZ. In Linux there was a German input with QWERTY, so that was fine. I believe it should exist on Windows too, because I'm sick of QWERTZ and of always having to correct myself and hunt for Z or Y... Yeah, I know, if the Germans made a mistake, it wasn't losing WW2...
    Edit: Maybe I wasn't clear enough: it's not about switching between different language inputs. It's about having all the German keys in the German input, just with Z and Y in the correct places, the way the rest of the world's keyboards, including the US keyboard, have them...
    Solution: Unfortunately, by searching everywhere and waiting for answers I couldn't find what I needed, so I came up with this solution - I used the Microsoft Keyboard Layout Creator and created my own custom layout by modifying the default German layout and swapping the places of Z and Y. In case somebody needs it and doesn't want to go the hard way, I decided to upload it, so here are the links to download and install the keyboard layout: http://www20.zippyshare.com/v/95071447/file.html http://rapidshare.com/files/3822150342/German%20QWERTY%202%20by%20Arturas%20M.zip It will appear as a choice under German input in your language bar. If you just want to use this German layout, simply remove all other German layouts and install this one. Good luck and have fun using it!

    Read the article

  • RHEL 5.2 installing on ProLiant BL460c - hangs at 'now booting the kernel'

    - by Dr Rocket Mr Socket
    As the title states, I have a problem. This server and the installation disc are also on the other side of the world from me, so... So far I have tried to start the install with the parameters:
        linux text noapic noacpi no=apic no=acpi
    which results in the same hang. I have also disabled a PCI Ethernet adapter; I am uneasy about disabling the onboard Ethernet adapter because I do not know if iLO uses it. Anyone have any advice? Much appreciated.
    EDIT: full output after trying to begin the installation:
        boot: linux text
        Loading initrd.img..................
        Loading vmlinuz.......
        Uncompressing Linux...done.
        Now booting the kernel
    It stays on this for hours.
    EDIT2: adding the 'mem=40960M' parameter (the server has 40 gigs of RAM) allows it to proceed, but gives the following output directly after 'Now booting the kernel':
        Memory: sized by int13 0e801h
        initrd extends beyond end of memory (0x00ef2090 > 0x00000000) disabling initrd
        Console: 16 point font, 400 scans
        Console: colour VGA+ 80x25, 1 virtual console (max 63)
        pcibios_init : BIOS32 Service Directory Structure at 0x000ffee0
        pcibios_init : BIOS32 Service Directory entry at 0xf0000

    Read the article

  • MySQL Config on Large Machine

    - by Jonathon
    We have a Windows 2003 Enterprise Edition server (64bit) running only MySQL 5.1.45 64-bit. It has 16G RAM and 10T of hard-drive space in RAID 10. We are having horrible performance from mysqld (85-100% CPU utilization). We were running a smaller machine with better performance, so I am assuming our my.ini file is not correct for our current machine. The my.ini file is as follows:
        [client]
        port=3306
        [mysql]
        default-character-set=latin1
        [mysqld]
        port=3306
        basedir="D:/MySQL/"
        datadir="D:/MySQL/data"
        default-character-set=latin1
        default-storage-engine=MYISAM
        sql-mode=""
        skip-innodb
        skip-locking
        max_allowed_packet = 1M
        max_connections=800
        myisam_max_sort_file_size=5G
        myisam_sort_buffer_size=500M
        table_open_cache = 512
        table_cache=8000
        tmp_table_size=30M
        query_cache_size=50M
        thread_cache_size=128
        key_buffer_size=3072M
        read_buffer_size=2M
        read_rnd_buffer_size=16M
        sort_buffer_size=2M
        #replication settings (this is the master)
        log-bin=log
        server-id = 1
    Does anyone see anything wrong with this setup? For a machine with this much RAM, why in the world would mysqld eat up so much CPU? I know we can optimize some queries, etc., but it did run okay on a smaller machine, so I am pretty sure it is the config. Thanks in advance for any help.
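    (Before touching the config, it may help to confirm where mysqld is actually spending its time. A hedged sketch using the standard mysqladmin client follows; the counters grepped for below are just the usual MyISAM suspects, not a diagnosis.)

        mysqladmin -u root -p processlist      # look for long-running or stuck queries
        mysqladmin -u root -p extended-status | egrep \
          'Key_reads|Key_read_requests|Created_tmp_disk_tables|Table_locks_waited|Opened_tables'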

    Read the article

  • iptables port forward + nginx redirect problem

    - by easthero
    Here is my network: browser -> proxy (iptables port forward) -> nginx server.
        proxy: 192.168.10.204, forwards 192.168.10.204:22080 to 192.168.10.10:80
        nginx server: 192.168.10.10, nginx version 0.7.65, Debian testing
    In the nginx settings I set:
        server_name _;
        server_name_in_redirect off;
    because my server has no domain for now. Accessing 192.168.10.10/index.html or 192.168.10.10/foobar is ok, and accessing 192.168.10.204:22080/index.html is ok, but on 192.168.10.204:22080/foobar nginx does a 301 redirect to http://192.168.10.204/foobar. How to fix? Thanks.
        telnet 192.168.10.204 22080
        Trying 192.168.10.204...
        Connected to 192.168.10.204.
        Escape character is '^]'.
        GET /index.html HTTP/1.1
        Host: 192.168.10.10

        HTTP/1.1 200 OK
        Server: nginx/0.7.65
        Date: Fri, 28 May 2010 10:07:29 GMT
        Content-Type: text/html
        Content-Length: 12
        Last-Modified: Fri, 28 May 2010 07:25:12 GMT
        Connection: keep-alive
        Accept-Ranges: bytes

        hello world

        telnet 192.168.10.204 22080
        Trying 192.168.10.204...
        Connected to 192.168.10.204.
        Escape character is '^]'.
        GET /test2 HTTP/1.1
        Host: 192.168.10.10

        HTTP/1.1 301 Moved Permanently
        Server: nginx/0.7.65
        Date: Fri, 28 May 2010 10:04:20 GMT
        Content-Type: text/html
        Content-Length: 185
        Location: http://192.168.10.10/test2/
        Connection: keep-alive

        <html>
        <head><title>301 Moved Permanently</title></head>
        <body bgcolor="white">
        <center><h1>301 Moved Permanently</h1></center>
        <hr><center>nginx/0.7.65</center>
        </body>
        </html>
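    (One hedged check, not from the original post: the 301 here is nginx adding the trailing slash for a directory request, and the host it rebuilds the Location from appears to follow the Host header the client sent. Replaying the request with the external host:port in the Host header shows whether the redirect would then stay on the proxy address; the curl calls below are only a sketch.)

        # present the external host:port that browsers actually use
        curl -sI -H 'Host: 192.168.10.204:22080' http://192.168.10.204:22080/test2 | grep -i '^Location'
        # compare with the internal host, as in the telnet session above
        curl -sI -H 'Host: 192.168.10.10' http://192.168.10.204:22080/test2 | grep -i '^Location'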

    Read the article

  • Bridging Wireless and Wired Interfaces in Linux

    - by The Daemons Advocate
    My network setup is something like:
        Wireless Router <---> Netbook <---> Ubuntu Desktop
    ...or, more verbosely (with interfaces):
        Wireless Router <--(wireless)--> (eth2) Ubuntu Netbook
        Ubuntu Netbook (eth0) <---(wired)----> (eth0) Ubuntu Desktop
    In a perfect world, I'd have the desktop wired, but weird circumstances combined with my wanting to understand more about networking in Linux make me want to figure out how to bridge these two devices. A bit of googling has given me this example using bridge-utils, and here's how I'm failing to set up the bridge (on the netbook):
        sudo -i
        ifconfig eth0 0.0.0.0
        ifconfig eth2 0.0.0.0
        brctl addbr bridget
        brctl addif bridget eth0
        brctl addif bridget eth2
        ifconfig bridget up
    ...then, trying to make sure that the netbook can still get on the internets...
        route add default gateway 192.168.2.1
        dhclient bridget
    What happens after this is that the dhclient command above (on the netbook) doesn't get served an IP, and the desktop, if I run dhclient there, doesn't get served an IP either. One odd consideration might be that I'm running the Network Manager applet that comes with Ubuntu. While I'm sure I can get a command-line wireless configuration set up, it's a bit complex. Can someone give me a shout as to where I'm going wrong? I'd also like to note another related question titled 'Bridging my laptop’s wireless and wired adaptors'; however, the setup is different to mine.
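    (A side note that isn't from the thread: many wireless drivers refuse to be added to a bridge, because 802.11 frames in station mode only carry the card's own MAC address, so brctl addif on the wireless side often fails silently or drops frames. If bridging keeps misbehaving, a routed/NAT fallback on the netbook is a reasonable sketch; the 192.168.3.x addressing below is made up for the example.)

        # on the netbook: route between the wired desktop (eth0) and the wireless uplink (eth2)
        sudo sysctl -w net.ipv4.ip_forward=1
        sudo iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE
        sudo ifconfig eth0 192.168.3.1 netmask 255.255.255.0
        # on the desktop: static address on the same subnet, netbook as default gateway
        #   ifconfig eth0 192.168.3.2 netmask 255.255.255.0
        #   route add default gw 192.168.3.1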

    Read the article

  • Delete ARP cache on Mac OS when moving from one Wifi network to the other

    - by Puneet
    I am facing wireless connectivity problems when I move from one Wi-Fi network to the other. Here is how it happens: I am at my friend's place. I connect to his Wi-Fi. His Wi-Fi router's IP address is 192.168.0.1. Everything is fine. I close my laptop, come back to my house, open my laptop and connect to the Wi-Fi network at my place. Different ESSID, but the Wi-Fi router address is the same, 192.168.0.1. At this point I can't get to anything on the internet. To debug, I try to see if I can ping the router (192.168.0.1); I can't. I get "no route to host". Meanwhile AirPort tells me I'm connected to Wi-Fi. I look at the ARP cache and I see a permanent entry for 192.168.0.1:
        ? (192.168.0.1) at 5c:d9:98:65:73:6c on en1 permanent [ethernet]
    This permanent bit looks problematic. I go ahead and delete the ARP cache entry and all is fine with the world until I go back to my friend's place, where the same situation plays out. Now my question is, why the hell is this happening? If there is no way around it, can I run a script on Wi-Fi connect/disconnect to clear out the ARP cache? I'm using Mac OS X:
        $ uname -a
        Darwin 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386
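    (On the scripting part of the question, a minimal sketch assuming the BSD arp(8) that ships with OS X; the launchd trigger mentioned below - watching /Library/Preferences/SystemConfiguration for network changes - is the conventional approach, not something from this post.)

        #!/bin/sh
        # flush-arp.sh: clear the cached entry for the router after switching networks
        arp -d 192.168.0.1 2>/dev/null    # just the gateway...
        # arp -a -d                       # ...or flush the whole table (needs root)
    Hooking the script up via a LaunchDaemon with a WatchPaths entry on that directory would run it whenever the network configuration changes.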

    Read the article

  • Restoring Windows 2008 Server X86 and X64

    - by rihatum
    Restoring Windows 2008 Server (Domain Controller): we are using Backup Exec System Recovery 2010 to image our DC. This software has a feature to convert the backup into a VMware or Hyper-V VM. I have also used disk2vhd to convert one of our DCs to a VHD, and when I connected it into Hyper-V it booted fine and I can log in - BUT :-) As soon as I log in, I get the activation error: change product key, this product key isn't good for this machine, etc.
    The question is: in a real recovery situation, what would be the procedure to restore it, either virtual or onto a physical box, and still be able to log in and change the product key? In this scenario it's just locked down and I can't do anything. If this is the case, how would I replicate my production environment via these tools? Any ideas? I'll be grateful for some real-world examples here.
    The same thing happens with our Exchange backup / test restore, either physical or virtual: I can log in but nothing else. We don't have the keys as they are OEM keys, and I'm just wondering what will happen in a real scenario - would we be purchasing another key or using the OEM key on our new server? This is a test environment I am trying to create by restoring our backups either into Hyper-V or onto physical test machines.
    Also, if I build up a machine (Server 2008) in a VM (Hyper-V), how can I restore just the system state backup of my DC into it? Will that give me the activation error too, even though I would use the trial ISOs provided by Microsoft? Kind regards

    Read the article

  • Virtual Machine Network Architecture, Isolating Public and Private Networks

    - by Mark
    I'm looking for some insight into best practices for network traffic isolation within a virtual environment, specifically under VMware ESXi. Currently I have (in testing) 1 hardware server running ESXi, but I expect to expand this to multiple pieces of hardware. The current setup is as follows:
    1 pfSense VM: this VM accepts all outside (WAN/internet) traffic and performs firewall/port forwarding/NAT functionality. I have multiple public IP addresses sent to this VM that are used for access to individual servers (via per-incoming-IP port forwarding rules). This VM is attached to the private (virtual) network that all other VMs are on. It also manages a VPN link into the private network with some access restrictions. This isn't the perimeter firewall but rather the firewall for this virtual pool only.
    I have 3 VMs that communicate with each other, as well as have some public access requirements:
        1 LAMP server running an eCommerce site, public internet accessible
        1 accounting server, access via Windows Server 2008 RDS services for remote access by users
        1 inventory/warehouse management server, VPN to client terminals in warehouses
    These servers constantly talk with each other for data synchronization. Currently all the servers are on the same subnet/virtual network and connected to the internet through the pfSense VM. The pfSense firewall uses port forwarding and NAT to allow outside access to the servers for services and for server access to the internet.
    My main question is this: is there a security benefit to adding a second virtual network adapter to each server and controlling traffic such that all server-to-server communication is on one separate virtual network, while any access to the outside world is routed through the other network adapter, through the firewall, and on to the internet? This is the type of architecture I would use if these were all physical servers, but I'm unsure if the networks being virtual changes the way I should approach locking down this system. Thank you for any thoughts or direction to any appropriate literature.

    Read the article

  • Monitor displays at 1024x768; scrolls screen for higher resolutions

    - by Matt
    I have a dual monitor setup. Normally, they both display at 1680x1050. They have been setup this way for about a year. I'm using Windows XP Professional 2003 x64 SP2. Today, out of nowhere, one of the monitors kicked back to a lower resolution. I was not playing with any configuration at the time.. in fact all I had done was close a window (maybe a browser). But the thing is that the resolution is still preserved partially by the fact that the screen will scroll when you move the mouse. So it's like looking through a 1024x768 window into a 1680x1050 world. The monitor itself does not appear to be damaged, because I also have it connected to my netbook (via KVM) and higher resolutions work fine. I tried uninstalling/reinstalling the drivers to no avail. System restore doesn't help either. I'm unsure of the exact ATI card I'm using.. Device Manager lists it as "Radeon X300/X550/X1050". There is no Catalyst Control Center software installed. I tried to install it, but there doesn't seem to be a way to install it by itself ... it forces you to install another driver, which breaks both of my displays, forcing me to go into safe mode and run system restore again. Any ideas? Thanks

    Read the article

  • HP ProLiant DL380 G4 - Can this server still perform in 2011?

    - by BSchriver
    Can the HP ProLiant DL380 G4 series server still perform well in the 2011 IT world? This may sound like a weird question, but we are a very small company whose primary business is NOT IT related, so my IT dollars have to stretch a long way. I am in need of a good web and database server. The load and demand for a while will be fairly low, so I am not looking, nor do I have the money, to buy a brand new HP DL380 G7 series box for $6K. While searching around today I found a company in ATL that buys servers off business leases and then strips them down for parts. They clean, check and test each part and then custom "rebuild" the server based on whatever specs you request. The interesting thing is they also provide a 3-year warranty on all the servers they sell. I am contemplating buying two of the following:
        HP ProLiant DL380 G4
        Dual (2) Intel Xeon 3.6 GHz 800 MHz 1MB cache processors
        8GB PC3200R ECC memory
        6 x 73GB U320 15K rpm SCSI drives
        Smart Array 6i card
        Dual power supplies
        Plus the usual CD-ROM, dual NIC, etc...
    All this for $750 each, or $1500 for two pretty nicely equipped servers. The price then jumps up on the next model up, which is the G5 series: it goes from $750 to something like $2000 for a comparable server. I just do not have $4000 to buy two servers right now. So back to my original question: if I load Windows 2008 R2 Server and IIS 7 on one of the machines, and Windows 2008 R2 Server and MS SQL 2008 R2 Server on the other machine, what kind of performance might I expect to see from these machines? The fact is this series is now three versions behind the G7s, and this series of server was built when Windows 2000 Server was the dominant OS and Windows 2003 Server was just coming out. If you are running Windows 2008 R2 Server on a G4 with similar or lower specs I would love to hear what your performance is like.

    Read the article

  • Setting up port forwarding for web server

    - by reyjavikvi
    This could belong on Super User, but I thought this place was more appropriate. I want to run Apache on my computer and make it available to the outside world to test a couple of things. Apparently, I have to go into my router's settings (a TP-LINK TD 8910G) and forward port 80 to my PC's IP. So far so good. Thing is, since the router uses a web-based interface and it's kind of stupid, it told me that since I was using port 80 for this, I should access its settings through port 8080. Maybe it can't detect requests coming from the LAN, I don't know. The point is, now I can't access the configuration through either port, and I can't access the Internet. Specifically, trying to access anything (including 192.168.1.1, the router's settings) through port 80 turns up a blank page (maybe if I had the server running on my computer I'd get something, but I don't want to risk trying; I had to reset the router and restore the settings), and port 8080 gives a "Can't establish connection" error in Firefox (and similar ones in other browsers). Is there a way to configure the router not to redirect requests coming from inside the network? I'm a beginner with this stuff, so please try to explain in a simple way. If this is more appropriate on Super User, I'm sorry.

    Read the article

  • illegitimate traffic from user agent Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.10) Gecko/2009042316 Firefox/3.0.10 (.NET CLR 3.5.30729)

    - by user114293
    Since the beginning of the year I have been getting a lot of traffic with the user agent Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.10) Gecko/2009042316 Firefox/3.0.10 (.NET CLR 3.5.30729). My access logs show 40% to 60% of requests coming from that user agent. That's strange, because the user agent states a Firefox 3.0.10 browser (is anybody using that browser in 2012? Definitely not 40%-60% of visitors on a normal website). Also, the logs show that this user agent only requested the HTML document and no referenced assets like images, CSS or JS files. I checked the IPs of those requests (with that UA); they come from all over the world. I noticed that requests from those IPs sometimes carry a mobile user agent instead. So my suspicion is a mobile app that is doing a lot of "spider requests" - but if that were the case, other websites should have the same problem. That's actually my question: does anybody else experience the same or similar problems?
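    (A quick way to put numbers on this from the logs; my sketch, assuming a combined-format Apache/nginx access log where the user agent is the last quoted field and the client IP is the first column.)

        # requests per user agent, busiest first
        awk -F'"' '{ua[$6]++} END {for (u in ua) printf "%8d  %s\n", ua[u], u}' access.log | sort -rn | head
        # top source IPs for the suspect UA only
        grep 'Gecko/2009042316 Firefox/3.0.10' access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head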

    Read the article

  • Preferred mail system/server for a company?

    - by Trevoke
    Say you are responsible for setting up an email solution at a company. Which would be your choice? I know of the following options, though many of them not well:
        Gordano Mail System
        Exchange
        Exim
        Postfix
        Qmail
        Zimbra
    Having used it for a little over two years, I really, really like Gordano Mail System. They offer a whole bunch of things, like calendaring, anti-spam, anti-virus, extremely complete and filterable logging options, aliases, a customizable webmail interface... And their software can be installed on either a Windows or a Linux OS. In addition, their support is top-notch and their knowledge base comprehensive (and, I will admit with a touch of pride, I have contributed, with my questions, to the addition of a few articles in there). Of course, they're not free, which can be a problem, but they're not Exchange, and they do offer pretty much everything that Exchange offers -- which is great if you want to stay away from that but need all the features. Although, if you need a BlackBerry Exchange Server or something similar, I'm not sure what you should go for. So... what would your choice be? Why? I've never played with a more DIY email solution, but I'm sure many people here have and wouldn't trade their setup for the world :)

    Read the article

  • Distributing processing for an application that wasn't designed with that in mind

    - by Tim
    We've got an application at work that just sits and does a whole bunch of iterative processing on some data files to perform some simulations. This is done by an "old" Win32 application that isn't multi-processor aware, so new(ish) computers and workstations are mostly sitting idle while running this application. However, since it's installed by a typical Windows InstallShield installer, I can't seem to install and run multiple copies of the application. The work can be split up manually before processing, enabling it to be distributed across multiple machines, but we still can't take advantage of multi-core CPUs. The results can be joined back together after processing to make a complete simulation. Is there a product out there that would let me "compartmentalize" an installation (or four) so I can take advantage of a multi-core CPU? I had thought of using MS SoftGrid, but I believe that still depends on a remote server to do the heavy lifting (though please correct me if I'm wrong). Furthermore, is there a way I can distribute the workload off the one machine? So an input could be split into 50 chunks, handed out to 50 machines, and worked on, all without really changing the initial application? In a perfect world, I'd get the application to take advantage of a desktop grid (BOINC), but like most "mission critical corporate applications", the need is there but the money is not. Thank you in advance (and sorry if this isn't appropriate for Server Fault).

    Read the article

  • Debian: how to convert a filesystem from ISO-8859-1 to UTF-8?

    - by Johan
    I have an old PC running Debian stable that is in need of an upgrade. The problem is that it is using latin1 (ISO-8859-1) for everything, and since the rest of the world has moved to UTF-8 I plan to convert this computer as well. For this question I will focus on the files that are served with Samba, some of which have latin1 characters in their filenames (like åäö). My plan is to move all the data off this old computer onto a brand new one that is also running Debian stable (but with UTF-8). Does anybody have a good idea? Thanks, Johan
    Note: later I plan to use iconv to convert the content of some files, with something like this:
        iconv --from-code=ISO-8859-1 --to-code=UTF-8 iso.txt > utf.txt
    However, I don't know of a good way to convert the filesystem itself.
    Note: normally I just scp from one computer to the next, but then I end up with latin1 characters in the UTF-8 filesystem...
    Update: did a small test round with a handful of files (with funny characters) in the filenames, and it looks like it could work:
        convmv -r -f ISO-8859-1 -t UTF-8 *
    So it was only a matter of executing it with --notest:
        convmv -r -f ISO-8859-1 -t UTF-8 --notest *
    Nothing more to it.
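    (One hedged way to do the actual move without scp mangling the names: copy the tree byte-for-byte so the latin1 filenames arrive untouched, then rename them in place on the new machine with convmv, exactly as in the test above. The /srv/samba path and the newhost name below are only stand-ins for wherever the share actually lives.)

        # push the share to the new machine, preserving the raw (latin1) filenames
        tar -C /srv/samba -cf - . | ssh newhost 'tar -C /srv/samba -xpf -'
        # then, on the new machine, convert the names in place
        convmv -r -f ISO-8859-1 -t UTF-8 /srv/samba/*            # dry run first
        convmv -r -f ISO-8859-1 -t UTF-8 --notest /srv/samba/*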

    Read the article

  • ESXi 5 monitoring

    - by user134880
    I'm new to the ESXi/VMware world and have a few questions about monitoring an ESXi host. Right now I'm using the trial version of ESXi 5.1. I can see performance charts in the vSphere client. Cool. But I want to export this raw data (raw meaning csv, txt or some other format I can parse later) to another server, to be able to parse it later and create custom charts. (Please do not advise me to try vCenter; I need custom charts etc.) I could run esxtop in batch mode and use that data... but... How do the vSphere client performance charts work? Where does the client take the data for the charts from? If I use esxtop batch mode it will add extra load to the server. Is it possible to use the same source the vSphere client uses for its charts? There is a /var/lib/vmware/hostd/stats/hostAgentStats-20.stats file. It looks like it is a binary format. As I understood it, that is exactly the data I need?! Any ideas how to parse it? Thanks! PS: maybe someone knows where to find, if it exists, info about the processes running on the ESXi host?
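    (On the export side, esxtop's batch mode already emits CSV that other tools can read; a hedged example of collecting ten-second samples for roughly an hour follows. The flags are standard esxtop batch options; the output path is just a placeholder for a datastore with enough room.)

        # on the ESXi shell: all counters (-a), 10 s interval (-d), 360 iterations (-n)
        esxtop -b -a -d 10 -n 360 > /vmfs/volumes/datastore1/esxtop-$(date +%Y%m%d).csv
    For the PS at the end, esxtop's CPU view (which lists running worlds) and, if present in your build, esxcli's system process commands are the usual starting points.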

    Read the article

  • What is the optimum way to secure a company wide wiki?

    - by Mark Robinson
    We have a wiki which is used by over half our company. Generally it has been very positively received. However, there is a concern over security - not letting confidential information fall into the wrong hands (i.e. competitors). The default answer is to create a complicated security matrix defining who can read which document (wiki page) based on who created it. Personally I think this mainly solves the wrong problem, because it creates barriers within the company instead of a barrier to the external world. But some are concerned that people at a customer site might share information with a customer which then goes to the competitor. The administration of such a matrix is a nightmare because (1) the matrix is based on department and not projects (this is a matrix organisation), and (2) in a wiki all pages are by definition dynamic, so what is confidential today might not be confidential tomorrow (but the history is always readable!). Apart from the security matrix, we've considered restricting content on the wiki to non-super-secret stuff, but of course that needs to be monitored. Another solution (the current one) is to monitor views and report anything suspicious (e.g. one person at a customer site having 2000 views in two days was reported). Again, this is not ideal because it does not directly imply a wrong motive. Does anyone have a better solution? How can a company-wide wiki be made secure and yet keep its low-threshold USP? BTW we use MediaWiki with Lockdown to exclude some administrative staff.

    Read the article

  • What ways are there to set permissions on an Exchange 2003 mailbox?

    - by HopelessN00b
    I'm having a difficult (verging on impossible) time tracking down a permissions issue on an Exchange 2003 mailbox, and I was wondering if I'm missing any technical possibilities here. The basic question is: what ways are there to set a user's permissions to access a mailbox in Exchange 2003? I know of two - permissions on the mailbox itself (Mailbox Rights) and having delegated rights. And then, if it's possible, how would one view all the permissions (including delegated permissions) on the mailbox?
    The situation is that a new user who's been set up "exactly like all the others" in his department (pretty sure he was copied via the right-click option in ADUC, in fact) can't access a specific shared mailbox, which I've been assured about a dozen other people do have access to and access on a regular basis. As to how they got permissions to the mailbox, no one knows, so it must have been granted by a white wizard whose spell has since worn off, and now IT has to handle it instead. Anyway... this mailbox is a normal AD user, created as a service account, for which no one knows the password (of course), so it's probably not the case that this service account was being used to delegate permissions. Examining the Mailbox Rights directly [the permissions screenshot is omitted from this excerpt] leads me to believe that one of two things is happening - the managers have been delegating full mailbox permissions to the rest of the department, or everyone's logging in using... not their own account.
    But, before I get too excited about the prospect of busting out the LART and strolling over to that department, I want to make sure I'm not missing another possible explanation. Like most of the rest of the world, I ditched Exchange 2003 at the earliest possible opportunity, and had been looking forward to never seeing it again, so I'm a bit rusty on the intricacies of how it [mostly, sort of] works. Anyone see any other possibilities, or things I may have missed, or does the LART get to come out and play?

    Read the article

  • Swapping out a hardware firewall: does the MAC address get cached?

    - by Dan
    We need to replace a hardware firewall (a Cisco PIX) and have a spare that we will use (temporarily). The firewall sits in front of a couple of web servers colocated at a data centre. The replacement will be configured with identical settings (external/internal IP addresses, configured ports, etc.). When we swap the firewalls over, will this work immediately, or will the old PIX's MAC address be cached and the new firewall not be seen until the cache is cleared? (What is it, though, that is caching the address? Is it just the switch/router that our PIX is connected to?) The reason for asking is that a few years ago I had a Smoothwall firewall in front of a lone server (the external IP of the Smoothwall was also the external IP of the web server). When I replaced the Smoothwall with a PIX, the IP address of the web server stayed the same but it now had to be reached via the new firewall on a different IP. It took about 2-4 hours before the rest of the world could see that web server again. I'm hoping for less downtime this time!
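    (For what it's worth, an aside that isn't from the post: what holds the old MAC is the ARP table of the upstream router, so the outage lasts until that entry expires or is refreshed, and a PIX normally announces itself with a gratuitous ARP when its interface comes up. If you ever need to force the refresh from a Linux box temporarily holding the same address, the iputils arping call below is the usual trick; the interface and addresses are placeholders.)

        # announce 203.0.113.10 so the upstream gateway (203.0.113.1) updates its ARP cache
        arping -U -c 3 -I eth0 -s 203.0.113.10 203.0.113.1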

    Read the article

  • What kinds of protections against viruses does Linux provide out of the box for the average user?

    - by ChocoDeveloper
    I know others have asked this, but I have other questions related to this. In particular, I'm concerned about the damage that the virus can do the user itself (his files), not the OS in general nor other users of the same machine. This question came to my mind because of that ransomware virus that is encrypting machines all over the world, and then asking the user to send a payment in Bitcoin if he wants to recover his files. I have already received and opened the email that is supposed to contain the virus, so I guess I didn't do that bad because nothing happened. But would I have survived if I opened the attachment and it was aimed at Linux users? I guess not. One of the advantages is that files are not executable by default right after downloading them. Is that just a bad default in Windows and could be fixed with a proper configuration? As a Linux user, I thought my machine was pretty secure by default, and I was even told that I shouldn't bother installing an antivirus. But I have read some people saying that the most important (or only?) difference is that Linux is just less popular, so almost no one writes viruses for it. Is that right? What else can I do to be safe from this kind of ransomware virus? Not automatically executing random files from unknown sources seems to be more than enough, but is it? I can't think of many other things a user can do to protect his own files (not the OS, not other users), because he has full permissions on them.

    Read the article

  • Vim - Displaying Code Output in a New Window à la Textmate?

    - by Michael Density
    A few months back I switched from Textmate to Vim. Overall I really love Vim, but one of the things I miss from Textmate is using the ⌘R command to run Ruby code and having the results neatly pop up in a new, scrollable window. Obviously, Vim is capable of running Ruby code and displaying the output with :w !ruby. The only downside to this is that if the resulting output is too long I can't scroll through it. To combat this problem I tried modifying a :redir function from Vim Tips. It looks like this:
        function! TabNew(cmd)
          redir => message
          silent execute a:cmd
          redir END
          tabnew
          silent put=message
          set nomodified
        endfunction
        command! -nargs=+ -complete=command TabNew call TabNew(<q-args>)
    Now the output from Ruby is put into a new tab. However, I can't get it to pop up in a new, separate window. Changing tabnew to new just sends the output to a split in the same window. The other problem is that a visible ^M gets appended to the end of each line, so the output ends up looking like this, which is kind of bothersome:
        Hello World!^M
    So, is there any way to get the output into a separate window without the ^M appended to the end? Are there any plugins I should be trying to achieve this Textmate-like effect for code output?

    Read the article

  • Using GitLab behind an Apache proxy, all generated URLs are wrong

    - by Hippyjim
    I've set up Gitlab on Ubuntu 12.04 using the default package from https://about.gitlab.com/downloads/
    {edit to clarify} I've set up Apache to proxy to the nginx server the package installed, on port 8888 (or so I thought). As I had Apache installed already, I have to run nginx on localhost:8888. The problem is, all images (such as avatars) are now served from http://localhost:8888, and all the checkout urls Gitlab gives are also localhost - instead of using my domain name. If I change /etc/gitlab/gitlab.rb to use that url, then Gitlab stops working and gives a 503. Any ideas how I can tell Gitlab what URL to present to the world, even though it's really running on localhost?
    /etc/gitlab/gitlab.rb looks like:
        # Change the external_url to the address your users will type in their browser
        external_url 'http://my.local.domain'
        redis['port'] = 6379
        postgresql['port'] = 2345
        unicorn['port'] = 3456
    and /opt/gitlab/embedded/conf/nginx.conf looks like:
        server {
          listen localhost:8888;
          server_name my.local.domain;
    [Update] It looks like nginx is still listening on the wrong port if I don't specify localhost:8888 as the external_url. I found this in /var/log/gitlab/nginx/error.log:
        2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
        2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
        2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
        2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
        2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
        2014/08/19 14:29:58 [emerg] 2526#0: still could not bind()
    Apache setup looks like:
        <VirtualHost *:80>
          ServerName my.local.domain
          ServerSignature Off
          ProxyPreserveHost On
          AllowEncodedSlashes NoDecode
          <Location />
            ProxyPass http://localhost:8888/
            ProxyPassReverse http://127.0.0.1:8888
            ProxyPassReverse http://my.local.domain
          </Location>
        </VirtualHost>
    This seems to proxy everything back OK if Gitlab listens on localhost:8888 - I just need Gitlab to start displaying the right URL, instead of localhost:8888.
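    (A hedged sketch of the usual omnibus-package answer: keep external_url set to the public name, and tell the bundled nginx separately which address and port to bind, then reconfigure. The nginx['listen_port'] / nginx['listen_addresses'] keys are assumed from the omnibus documentation - verify they exist in the installed package version before relying on them.)

        sudo editor /etc/gitlab/gitlab.rb
        #   external_url 'http://my.local.domain'
        #   nginx['listen_addresses'] = ['127.0.0.1']
        #   nginx['listen_port'] = 8888
        sudo gitlab-ctl reconfigure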

    Read the article

  • Syncing Multiple Google Calendars with Outlook and Android

    - by Fred Thomas
    Perhaps this is a multipart question, but I deal with a lot of calendars in my life and want to know if there is some way to sync them all together while maintaining appropriate privacy. I have a family calendar that my ex and I maintain for kid events, I have a personal calendar for my own life, and I have an Outlook work calendar for work. Ideally I'd look at my calendar on my Android phone. Is it possible to sync them all together? Is it possible to have one calendar to rule them all on my phone, but have the other calendars block out time that comes from the others, showing only the blocked-out slots without the details? (I don't want my date with Miss Hottie to appear that way on the family calendar, and I probably don't want my visit to the proctologist to appear on the corporate Exchange server.) Are there tools available to do this? Bonus question: can I do the same with my to-do lists? Double bonus question: how can I solve world hunger and help us all to live together in peace? :-)

    Read the article

  • nginx config woes for multiple subdomains & domains

    - by Peter Hanneman
    I'm finally moving away from Apache, and I've got the latest development version of nginx running on a fully updated Ubuntu 10.04 VPS. I've got a single dedicated IP for the box (1.2.3.4), but I've got two separate domains pointing to the server: www.example1.com and www.example2.net. I would like to map the following relationships between URLs and document roots in the config:
        www.example1.com / example1.com   -> /var/www/pub/example1.com/
        subdomain.example1.com            -> /var/www/dev/subdomain/example1.com/
        www.example2.net / example2.net   -> /var/www/pub/example2.net/
        subdomain.example2.net            -> /var/www/dev/subdomain/example2.net/
    where the name of the requested subdomain is a folder under /var/www/dev/. Ideally a request for a non-existent subdomain (no matching folder found) would result in a rewrite to the public site (e.g. invalid.example1.com -> www.example1.com); however, a mere "404 Not Found" wouldn't be the worst thing in the world. It would also be nice if I didn't need to modify the config every time I mkdir a new subdomain folder - even better if I don't need to edit it for a new domain either... but now I'm getting greedy... :p Although in my defense, Apache did all of this with a single directive. Does anyone know how I can efficiently mimic this behavior in nginx? Thanks in advance, Peter Hanneman

    Read the article
