Search Results

  • How do you test your porn filter?

    - by Zoredache
    For testing antivirus we have EICAR; for spam we have GTUBE. Is there a standard site that is (or should be) included in blacklists that you can use for testing, instead of going to your favorite porn site in front of your boss, the CEO, or someone else who feels that seeing such a site is an excuse for a sexual harassment suit? Update: This is less about getting permission for me to test, though that answer is useful. I have both permission and responsibility to make sure the filter is running, and I am able to test that the filter is functioning with netcat. Instead, I am hoping there is a standard domain name, blocked by most/all filters, meant for testing. I need to be able to share it with my boss and users, to demonstrate what happens when someone goes to a filtered page, and to quickly prove to others that the filter is working without asking them to visit a site that would cause grief if for some reason the filter is not working. If there isn't already a good domain for this purpose, I may simply have to register a domain myself and then add it to all the filters I am responsible for.
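
    For reference, a manual check through the proxy can look something like the following (a minimal sketch: the proxy host, port, and test domain are all placeholders, since no standard blacklist-test domain seems to exist):

        # Ask the proxy to fetch a page that should be blacklisted; getting the
        # filter's block page back instead of the real site shows the filter is live.
        printf 'GET http://blocked-test.example/ HTTP/1.0\r\nHost: blocked-test.example\r\n\r\n' | nc proxy.internal 8080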

  • How to have SSL on Amazon Elastic Load Balancer with a Gunicorn EC2 server?

    - by Riegie Godwin
    I'm a self-taught back-end engineer, so I'm learning all of this as I go along. For the longest time I've been using basic authentication for my users. Many developers advise against this approach, since each request contains the username and password in clear text; anyone with the right skills can sniff the connection between my iOS application and my Django/Gunicorn server and obtain a user's password. I wouldn't want to put my users' credentials at risk, so I would like to implement a more secure means of authentication, and SSL seems to be the most viable option. My server doesn't serve any static content or anything crazy of that sort; all it does is exchange JSON responses with my iOS application. Here is my current topology: iOS application ------ Amazon Elastic Load Balancer ------ EC2 instances running Gunicorn over HTTP on port 8000. I have a CNAME record from GoDaddy for the Elastic Load Balancer DNS name, so instead of using the long DNS name to make requests, I just use server.example.com and send requests to server.example.com:8000/. This setup works and has been solid, but I need something more secure. I would like to set up SSL between my iOS application and my Elastic Load Balancer. How can I go about doing this? Since I am only sending JSON responses to my application, do I really need to buy a certificate from a CA, or can I create my own? (Browsers will never interact with my servers; they are only designed to send JSON responses to my iOS application.)
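
    For what it's worth, a self-signed certificate can be generated and hooked up to the ELB roughly as below. This is a hedged sketch: a self-signed certificate only makes sense if the iOS client is configured to trust that exact certificate, and the upload step assumes the AWS CLI's IAM certificate store, which is where ELB listeners look for certificates.

        # Generate a self-signed certificate (reasonable when you control the only client)
        openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
            -keyout server.key -out server.crt \
            -subj "/CN=server.example.com"

        # Make the certificate available to the ELB, then add an HTTPS (443)
        # listener that forwards to port 8000 on the instances
        aws iam upload-server-certificate --server-certificate-name server-example \
            --certificate-body file://server.crt --private-key file://server.key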

  • NTBackup Error: C: is not a valid drive

    - by Chris
    I'm trying to use NTBackup to back up the C: drive on a Microsoft Windows Small Business Server 2003 machine, and I get the following error in the log file:

        Backup Status
        Operation: Backup
        Active backup destination: 4mm DDS
        Media name: "Media created 04/02/2011 at 21:56"
        Error: The device reported an error on a request to read data from media.
        Error reported: Invalid command.
        There may be a hardware or media problem.
        Please check the system event log for relevant failures.
        Error: C: is not a valid drive, or you do not have access.
        The operation did not successfully complete.

    I'm using a brand new SATA Quantum DAT-72 drive with a brand new tape (I have tried a couple of tapes). I carry out the following: open NTBackup; select the Backup tab; tick the box next to C:; ensure Destination is 4mm DDS and Media is set to New; press Start Backup; choose "Replace the data on the media" and press Start Backup. NTBackup tries to mount the media, and the error message shows: "The device reported an error on a request to read data from media. Error reported: Invalid command. There may be a hardware or media problem. Please check the system event log for relevant failures." On checking the event log I find the following:

        Event Type: Information
        Event Source: NTBackup
        Event Category: None
        Event ID: 8018
        Date: 04/02/2011   Time: 22:02:02
        User: N/A   Computer: SERVER
        Description: Begin Operation

    and then:

        Event Type: Information
        Event Source: NTBackup
        Event Category: None
        Event ID: 8019
        Date: 04/02/2011   Time: 22:02:59
        User: N/A   Computer: SERVER
        Description: End Operation: The operation was successfully completed. Consult the backup report for more details.

  • Macbook Battery Charging Troubles

    - by bobber205
    Problem: occasionally, my MacBook's battery will say it's charging when it actually isn't; it stays at 0% for a long time. Other info: I thought it was my battery (the laptop was 3 years old). I got a new battery; the issue did not go away. I got a new power supply; it did not go away. I ended up getting a unibody MacBook Pro :) (we even ended up moving to a new house), and I'm still having the issue. The only thing I can think of is my power strip, which is the only thing that has stayed constant. Is it possible for the strip to be affecting the wattage my MacBooks are getting and preventing them from properly charging the battery? I think the power goes in and out, the battery picks up the slack, and once it's empty the computer shuts down because there's no power at all for a second or two. The funny thing is I have a desktop PC on this same strip and it has never had issues with power. Thanks! :)

  • Reducing video mode switching during Linux boot

    - by Zack
    When I boot up my desktop computer, which only has Linux on it, the video mode and/or console font gets switched four times: (1) when GRUB starts, it switches from 80x25 text to a graphical mode so it can draw a pretty background behind its menu; (2) GRUB then goes back to 80x25 text after I pick something from the menu; (3) when the KMS driver for my video card loads, it switches to a much higher-resolution text mode (I don't know whether this is a hardware text mode or not); (4) finally X starts, switches to graphics, and stays that way. I think this last switch does not change the resolution of the video mode, only switches from text to graphics. I'd like to get rid of as many of these mode switches as possible. Ideally, when GRUB takes over from the BIOS it would go directly to the same high-resolution text mode that the KMS driver selects, and the display would stay in that mode till X starts and brings up graphics. I am under the impression that this is possible by mucking with the kernel command line and/or the GRUB console module load parameters, but I don't know the details. GRUB 1.98+20100706, kernel 2.6.32.15 using Nouveau video drivers; the distro is Debian unstable. Please, no answers that involve recompiling anything or cobbling together bleeding-edge kernel/driver combinations; I don't care enough about this to go to that much trouble. EDIT: Tobu suggests setting GRUB_GFXMODE to the full pixel resolution of the monitor, and GRUB_GFXPAYLOAD_LINUX=keep to avoid the mode switch after the menu goes away. This does part of what I want, but winds up being worse overall. There's no mode switch after the menu, but there's still a painfully slow screen repaint (I should probably just give up on GRUB's gfxmode; it's waaaay too slow at 1920x1200). More seriously, there's now a double mode switch when nouveaufb loads, along with fun-looking error messages in dmesg:

        [    5.923798] [drm] nouveau 0000:02:00.0: allocated 1920x1200 fb: 0x40250000, bo ffff8801ba5f4600
        [    5.923802] fb: conflicting fb hw usage nouveaufb vs EFI VGA - removing generic driver
        [    5.923821] [drm] nouveau 0000:02:00.0: PFIFO_INTR 0x00000010 - Ch 1
        ("PFIFO_INTR" message repeats 400+ times)
        [    5.925609] Console: switching to colour dummy device 80x25
        [    5.925802] Console: switching to colour frame buffer device 240x75
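
    For anyone following along, those two settings live in /etc/default/grub on Debian; shown here for concreteness (adjust the resolution to your panel, and regenerate the config afterwards):

        # /etc/default/grub
        GRUB_GFXMODE=1920x1200
        GRUB_GFXPAYLOAD_LINUX=keep

        # then rebuild grub.cfg
        update-grub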

  • Windows Server 2008 (Web Server) Replication

    - by justjoshingyou
    We have a load-balanced environment with Windows Server 2008. What are some best practices for setting up replication across the web servers? Do I only want to replicate the web folders? How about replicating IIS changes, or do I need to make IIS changes on every server? I've never set up replication, but I have worked with a web farm that used it before; basically, I only know the basics of how it works, and I'm looking for any advice, guides, warnings, etc. on setting this up. In case you'd like to offer advice, here is our environment for now: we have one prod server up and a second nearly ready to go. We are using a cloud system and all machines are VMs. I am in the process of setting up the domain controller now (as I need one for DFS). Any ideas on the best way to go about setting up replication? Should we just put the prod server in from the start, or set up using a test VM and our second server and then switch it over later? I do not want to risk overwriting our prod server. Thanks!
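
    As a point of comparison while evaluating DFS: a one-way sync of just the web content can be scripted with robocopy, which avoids any risk of the second node overwriting prod (a sketch only; the paths and share name are invented):

        rem Mirror the content folder from the prod node to the second node
        robocopy C:\inetpub\wwwroot \\WEB02\wwwroot$ /MIR /COPY:DATS /R:2 /W:5 /LOG:C:\sync.log

    Note that IIS configuration itself is not covered by this; site and application-pool changes would still have to be repeated on each server (or handled with IIS 7's shared-configuration feature).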

  • Dynamic subdomain routing

    - by Nader
    Hi everyone, I asked this question over at Stack Overflow but got very few views: http://stackoverflow.com/questions/2284917/route-web-requests-to-different-servers-based-on-subdomain Perhaps it's more applicable to this crowd, so here it is again for convenience: I have a platform where a user can create a new website using a subdomain. There will be thousands of these, e.g. abc.mydomain.com, def.mydomain.com; hopefully, if we are successful, hundreds of thousands. I need to be able to route these domains to different IPs to point at particular app servers. I have this mapping in a database right now. What are the best practices and recommended technologies here? I see a couple of options: (1) have DNS set up with a wildcard CNAME entry so that all requests go to a single IP, where perhaps two machines using heartbeat (for failover) know how to look up the IP in the database and then do an HTTP redirect to the appropriate app server; this seems clunky and slow to me. (2) Run my own DNS server that can be programmatically managed, such that when a new site is created a DNS entry is added. We also move sites around to different app servers, so I would need to be able to update DNS entries in close to real time. Thoughts, anyone? Thanks. Update: Based on some suggestions below, it seems like reverse-proxy server(s) are the way to go. As I'll be rebalancing the domain-to-server mapping, these changes need to take effect instantly, and the TTL on a DNS solution could be a problem. Any recommendations on software to use, considering this domain-to-IP data is stored in a DB, and that I'll need this to be performant? Update 2: I've set up external wildcard DNS pointing at an HAProxy web server whose job it is to route requests to backend servers. The mapping is stored in our internal PowerDNS server. The question now is how to get the HAProxy server (or another) to use the value of the internal DNS rather than some config file or access list.
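
    To make the reverse-proxy idea concrete, newer HAProxy releases can pick a backend from a map file that you regenerate from the database whenever a site is added or moved (a sketch assuming HAProxy 1.5+; the file path and backend names are invented):

        frontend http-in
            bind *:80
            # /etc/haproxy/domains.map contains lines like:  abc.mydomain.com bk_app1
            use_backend %[req.hdr(host),lower,map(/etc/haproxy/domains.map,bk_default)]

        backend bk_app1
            server app1 10.0.1.10:80 check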

  • How to enable Printer Sharing on Web Server 2008?

    - by FarrEver
    I am installing Web Server 2008 for my home network. I have two USB printers that I am connecting to this machine, and I want to share them so that my other machines can print to these two USB printers. (I previously had Windows Server 2003 on this machine and was able to share both printers fine.) The File and Printer Sharing inbound rule for my Private network is enabled, but when I go into Network and Sharing Center and try to turn Printer Sharing ON, it never sticks; it always stays OFF. When I go to my installed printers and try to share them, I get the following error message: "Printer Settings could not be saved. Remote connections to the Print Spooler are blocked by a policy set on your machine." I have not been able to find a policy on my machine that is preventing this. I have searched a lot over the past few days; most of the results say what I have done should work, and a number of others say printer sharing on Web Server 2008 is not allowed and you have to hack it. Has anyone installed Web Server 2008 and shared printers before? If so, what are the detailed steps you took to get this to work?
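
    The wording of that error usually points at the "Allow Print Spooler to accept client connections" policy. If memory serves, the equivalent registry switch is the following; treat it as a hint to verify in gpedit.msc rather than a confirmed fix:

        reg add "HKLM\Software\Policies\Microsoft\Windows NT\Printers" ^
            /v RegisterSpoolerRemoteRpcEndPoint /t REG_DWORD /d 1 /f
        net stop spooler && net start spooler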

  • IIS ASP Redirect Removal

    - by Kim L
    We have a website set up on IIS 7 and are trying to replace it with a new site, but we need a redirect that is in place removed. The old site used a custom file as the homepage (WN-main.asp). We removed all the old site files, including web.config, and placed them in a subdirectory for safekeeping. The new site no longer uses ASP, and we'd like to use a regular index.html as the default. However, when we go to the website, it keeps trying to redirect our .com to .com/WN-main.asp, and that gives us a 404 error in the application for "Default Web Site" because we removed that page. In the IIS "Default Document" settings we have index.html at the top, and WN-main.asp is nowhere in the list (it never was there). We've also removed the web.config file from the root directory, put the entire old website in a subdirectory, and restarted IIS. We assume the redirect is set up somewhere in IIS, because if I navigate to .com/index.html, which is our new site, it works. Our problem is that oursite.com redirects to oursite.com/WN-main.asp. Grr. If you go to www.worzalla.com you can see how it redirects to the WN-main.asp page right now as the homepage. Any ideas where this redirect could have been set up, so we can remove it? Thanks!
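
    One place worth checking from the command line is the site's HTTP Redirect section, which lives apart from the default-document list (standard appcmd usage; run it on the server itself):

        rem Show any redirect configured for the site
        %windir%\system32\inetsrv\appcmd list config "Default Web Site" /section:system.webServer/httpRedirect

        rem Turn it off if one is set
        %windir%\system32\inetsrv\appcmd set config "Default Web Site" /section:system.webServer/httpRedirect /enabled:false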

  • What applications do people use for Windows folder management? How do you switch between folders in different applications?

    - by 118184489189799176898
    I'm always switching between clients' folders in different applications like Photoshop, SQL manager, Explorer, etc. It's slow to go between them, slow to navigate to the folder, and still too slow to copy and paste the directory path; it's annoying to do, and someone must have a good solution. I was thinking: if there were a "recently accessed" folders list available within every folder-browsing window, so that in any application, if I go File > Open, it would have something, somewhere, that lists the recently accessed folders, that would be really helpful. I am aware of the Recent Places folder in Windows 7, but it sucks because it is not sorted by date accessed. Perhaps if there were a way to change this, it would become a decent feature? Is there some application that already does this? I'm sure someone has already solved this issue more elegantly than I can think of. I'm keen to know what programs people use or how people address this issue. Thanks...

  • Lucid Lynx login issue

    - by Bart Silverstrim
    I recently upgraded from Karmic to Lucid. The upgrade seemed to go well, with no noticeable issues, but when I logged in, my window manager wasn't starting: an application would appear, but sans control buttons and border, so I figured the window manager needed to be given a swift kick. I opened a web browser, and a quick Google search had me run "metacity --replace &", and everything popped up. I re-ran the Compiz configuration tool to enable my rotating desktop cube the way I liked it, and had to reconfigure my desktop switcher to the right number of desktops (although the first time I ran it, it crashed on the panel and reloaded... odd, but once it relaunched it seemed fine). Today I installed updates, rebooted, and logged in for the second time since my upgrade. Again, the window manager was dead, my Compiz settings were gone, and the workspaces were set back to four (and when I clicked on the preferences to change them, it crashed on the panel and reloaded again). Resetting everything made things look somewhat normal again; I'm guessing it'll work until I reboot again. Googling around isn't turning up similar complaints about Lucid Lynx and the window manager. Before I start taking the stab-in-the-dark approach of deleting preference files, hoping one of them is corrupt or has something unsupported in it that's throwing Lucid for a loop, does anyone else know of this kind of issue and what can be done about it?

  • Load-balanced IIS: should I use NLB, a Linux-based reverse proxy, or something else?

    - by growse
    What would be the best approach for load-balancing at least 2-3 Windows 2008 R2 IIS web servers running a multitude of .NET applications? My choices appear to be: (1) a hardware network load balancer, like a Cisco CSS; (2) Windows NLB; (3) some sort of Linux-based proxy, either HAProxy or another. The three servers sit as VMs on a vSphere farm, so I have the ability to clone to bump up the instance count in times of high load. I control the switch that the vSphere hosts are plugged into (a Cisco 3750), but I don't control the switching/routing infrastructure beyond that to the clients. Option (1) is too expensive, and probably overkill for my needs; I've included it in case someone figures out a cunning way to do it on my existing network kit, which I doubt. Option (2) would seem to be the obvious built-in choice, but seems to involve quite a bit of fiddly messing around with network interfaces, multicast, and other things that seem needlessly complex. It's also fairly stupid, in that it can't remove hosts from the pool if they start throwing 500 errors or otherwise go wrong. Option (3) is the most interesting, as it would appear to offer the most flexibility and customizability without having to mess around with the network. However, while I'm familiar with the reverse-proxy capabilities of lighttpd etc., I'm not that well read on other options like HAProxy, which might be able to offer a lot more. Which would you go for, and is there anything I've not thought of?
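
    If it helps the comparison, the health-check behaviour missing from NLB is only a few lines in HAProxy (a sketch; the check URL and addresses are placeholders):

        backend iis_farm
            balance roundrobin
            # take a server out of the pool when the check URL stops returning 2xx/3xx
            option httpchk GET /healthcheck.aspx
            server web1 10.0.0.11:80 check inter 2000 fall 3
            server web2 10.0.0.12:80 check inter 2000 fall 3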

  • How can I wipe my iPod classic and fix any bad sectors on the hard drive without killing it?

    - by Sam Meldrum
    My iPod never finishes syncing and only syncs audio, not pictures or video. Any ideas as to how I can fix it? My iPod classic 160GB worked well for a couple of years; I used to sync a lot of photos at full resolution to it, but this recently stopped working after I moved to Windows 7. iTunes is on the latest version (9.1.1.12), the iPod software is up to date (1.1.2), and Windows 7 is fully up to date and patched. The symptoms are that the iPod will start to sync, all audio (music and podcasts) will sync successfully, but the sync will then just appear to continue (the iTunes message: "Syncing iPod. Do not Disconnect.") and never complete; I have left it trying for days. I have tried resetting the iPod using the Restore button, whereupon it restarts the sync from default options and again syncs audio, but nothing else. I suspect that something has gone wrong on the hard drive, either a bad sector or some corrupt data. Is there a process I can go through to fix this, e.g. SpinRite or a format? If so, how do I go about formatting an iPod, and will it be recognised as an iPod and work as normal after the format? Any advice on what to try next is much appreciated. Update: I have eliminated problems with the files, PC, or iTunes, as they sync fine to other iPods. I have also eliminated the cable by trying different cables that work with other iPods. What I'd really like to know is whether there is any way to more fundamentally wipe the iPod safely, attempt to repair any bad sectors on the hard drive, and then start from scratch. Anyone ever managed this?
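
    On the Windows side, once the iPod is put into disk mode it appears as an ordinary removable drive, so a surface scan is possible before handing it back to iTunes (E: below stands in for whatever letter the iPod receives; a full Restore from iTunes afterwards rebuilds the firmware and library):

        rem Scan for and attempt to recover bad sectors on the iPod's drive
        chkdsk E: /r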

  • Filtering / directing URLs coming onto a network

    - by Jon
    Hi all, I am not sure if this is possible or not, but what I would like to do is as follows. I have one IP address (dynamic, using zoneedit.com to keep it up to date). I have one web server running my main site, an Ubuntu machine running Apache, and a Windows 2008 server running another site. Just to confuse things, I also run part of my Apache site on the Windows server, currently using ProxyPassReverse to get the information from it. So it looks something like this: IP 1.2.3.4 maps to mydomain.com as well as myotherdomain.com; all requests that come in on port 80 are forwarded to the Apache box, and I use VirtualHost settings to proxy the Windows sites where needed. So mydomain.com is an Apache site; mydomain.com/mywindowssection is the Apache server using ProxyPassReverse to get part of the site from the Windows server; myotherdomain.com uses Apache and ProxyPassReverse to get the whole site. What I would like to be able to do is forward all HTTP requests that come into my network to one machine that figures out who should be serving that content, so that mydomain.com would go to the Apache machine and myotherdomain.com would go to the Windows machine. I am just in the process of setting up an Astaro gateway (never done this before, so it's taking a while to configure) as my firewall, DNS, DHCP, etc.; I don't know if it can handle this. I have the capacity to run a VM on the network if a separate box is needed for this. Thanks for any and all feedback. Jon
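
    For the Apache side of this, the per-domain split described above is just two VirtualHosts, one serving locally and one reverse-proxying to the Windows box (a sketch with an invented internal address; mod_proxy and mod_proxy_http must be enabled):

        <VirtualHost *:80>
            ServerName mydomain.com
            DocumentRoot /var/www/mydomain
            # part of this site lives on the Windows server
            ProxyPass        /mywindowssection http://192.168.0.20/mywindowssection
            ProxyPassReverse /mywindowssection http://192.168.0.20/mywindowssection
        </VirtualHost>

        <VirtualHost *:80>
            ServerName myotherdomain.com
            ProxyPass        / http://192.168.0.20/
            ProxyPassReverse / http://192.168.0.20/
        </VirtualHost>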

  • Which is the fastest way to move 1 petabyte from one storage system to a new one?

    - by marc.riera
    First of all, thanks for reading, and sorry for asking something related to my job; I understand this is something I should solve by myself, but as you will see, it's a bit difficult. A short description of the current storage: 1 PB on DDN S2A9900 storage for the OSTs, 4 OSS, 10 GigE network (Lustre 1.6); 100 compute nodes with 2x InfiniBand; 1 InfiniBand switch with 36 ports. After the upgrade: the previous storage plus another 1 PB on DDN S2A9900 or LSI E5400 (still to be decided) (Lustre 2.0), 8 OSS, 10 GigE network, 100 compute nodes with 2x InfiniBand. Previous experience: we transferred 120 TB in less than 3 days using the following command:

        tar -C /old --record-size 2048 -b 2048 -cf - dir | tar -C /new --record-size 2048 -b 2048 -xvf - 2>&1 | tee /tmp/dir.log

    So, big problem here: using big mathematical equations, I conclude that we are going to need a month to transfer the data from one side to the other, and during this time the researchers will have to step back, which I'm personally not happy about. I mention that we have InfiniBand connections because I think there may be a chance to use them, with 18 compute nodes (18 * 2 IB = 36 ports), to transfer the data from one storage system to the other. I'm trying to figure out whether the IB switch will handle all the traffic, but even if it just burns up, the transfer would still go faster than over 10 GigE. Also, having Lustre 1.6 and 2.0 agents on the same server works quite well, so there is no need to go through 1.8 to upgrade the metadata servers in two steps. Any ideas? Many thanks. Note 1: Zoredache, we can divide it into two blocks, (A) 600 TB and (B) 400 TB. The idea is to move (A) to the new storage, which is Lustre 2.0 formatted, then reformat where (A) was with Lustre 2.0, move (B) to that Lustre 2.0 block, and extend with the space where (B) was. This way we will end up with (A) and (B) on separate filesystems, with 1 PB each.
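
    To sketch the fan-out idea with the compute nodes (hedged: node names are invented, and this assumes both the old and new filesystems are mounted on every node), the tree can be split into per-node chunks with one copy stream per node over the IB fabric:

        # round-robin the top-level directories across 18 nodes,
        # one rsync per directory (crude: no per-node throttling)
        i=0
        for dir in /old/*; do
            ssh "node$(( i % 18 ))" "rsync -a '$dir/' '/new/${dir##*/}/'" &
            i=$((i + 1))
        done
        wait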

  • Mac SMB connections to Windows 2003 server, leaving Open Files

    - by Bruce Garlock
    We have several Mac clients (both 10.5 and 10.6) mounting a share from a Windows 2003 server. At least once a day, our archivist will go into this share to archive items from it to the backup server. Most of the time she has no issues: she copies a folder to the archive server and, when it's done, deletes it from the share. Then she will come upon one folder, and it will say she doesn't have permission. When I go into the open sessions on the server, it will say that a particular user has a READ lock on a file; of course, this person does not have the file open, and the only way we can delete it is to close the open session on the file. My thoughts: (1) Macs like to "sprinkle" hidden resource-fork files on SMB servers, and possibly these remain behind after the Mac that last wrote to the share closes the file; (2) Windows 2003 has a bug that doesn't properly release the oplock on the file; (3) Steve Ballmer just doesn't like Macs, so he wants to annoy everyone by not releasing file locks :-) What can be done about this? It happens every day, and sometimes several times per day! Many thanks, Bruce
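
    For the cleanup step, the stale locks can be listed and force-closed from the server without digging through Computer Management (standard Windows commands; the ID below is an example):

        rem List files opened over SMB, noting the ID of the stuck one
        net file

        rem Force-close a specific lock by its ID
        net file 1234 /close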

  • How to install IE9 when KB2120976 is not applicable to my Windows 7 x86 Ultimate edition?

    - by CVertex
    I'm trying to install IE9: I visit http://beautyoftheweb.com and download it, and the installer then says it's downloading prerequisites. After a minute, it says there's a problem and directs me to "Prerequisites for installing Internet Explorer 9 Beta". I click on the x86 installers one by one; most say "already installed", but http://support.microsoft.com/kb/2120976/ says "The update is not applicable to your computer". Lame. So I retry the IE9 install and go through the whole process again, with the same result. I discovered there's an IE9 install log at C:\Windows\IE9_main.log, which records the install process and reports an error for KB2120976:

        00:06.630: ERROR: Error installing prerequisite file (C:\Users\Vijay\AppData\Local\Temp\IE98036.tmp\KB2120976_x86.msu): 0x80240017 (2149842967)
        00:06.677: INFO: PauseOrResumeAUThread: Successfully resumed Automatic Updates.
        00:12.090: INFO: Link clicked, opening URL in new window: 'http://go.microsoft.com/fwlink/?LinkId=185111'
        00:12.106: INFO: Setup exit code: 0x00009C47 (40007) - Required updates are missing from the system.

    Any idea why KB2120976 is inapplicable to my legal Windows 7 Ultimate system? Any help is greatly appreciated.
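
    If it's useful, the prerequisite can also be installed by hand, with the servicing detail inspected afterwards (hedged: the .msu filename is whatever the Microsoft Download Center hands you for KB2120976):

        rem Install the standalone update package directly
        wusa.exe C:\Downloads\Windows6.1-KB2120976-x86.msu

        rem Servicing details land in the CBS log
        notepad %windir%\Logs\CBS\CBS.log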

  • CentOS Installation on a Cisco MCS 7800

    - by William
    I'm having some problems installing CentOS 5.5 Final (i386) onto my server, a Cisco MCS 7800. The problem comes very early in the installation: when the welcome screen comes up and offers the choice of how to boot from the DVD, I press Enter to go into the graphical installer. The screen then shows a blinking cursor in the top left and never gets any further (I thought it might just need time, but I let it sit for over 5 hours). I then booted it again and tried "linux text", thinking it was a problem with the graphical installer; that didn't work either, same problem. Then I tried a DVD of RHEL 5 and got the same problem, both graphical and text. At this point I think it's a hardware problem. The server has 2 GB of ECC RAM, one Pentium 4 CPU @ 3.06 GHz, and two WD 80 GB hard drives configured for RAID 0. (There is also an option in the BIOS for OS type, and that is set to Linux.) If anyone has any idea what is going on, it would be helpful. Edit: typing "text" doesn't change a thing; it's still stuck at the blinking cursor. I looked it up, and it's really the same as typing "linux text", which, as stated in the first part of my question, I've already tried.

  • Sending Mail from Web App to Google Apps won't work - internal routing? VPS

    - by Charlino
    I've got a web application, www.mysuperwebapp.com, which sends out emails for various reasons; the contact-us page is a good example. I am using Google Apps on the domain, and I've set up a Google Apps group, Support ([email protected]), which I want the emails from the contact-us page to go to. But the emails don't seem to be arriving... I thought the group's security might be a little tighter than normal email, so I changed the contact-us page to send to [email protected] instead, but those didn't appear either. So I'm guessing it has something to do with internal routing and the messages aren't leaving the server/network at all, e.g. sending an email from the mysuperwebapp.com computer to a mysuperwebapp.com email address. I put an entry into the hosts file for 123.123.123.123 mysuperwebapp.com, but that doesn't seem to have helped. Also, there doesn't seem to be anything of interest in the event log. What do I need to do, or what do I need to get my VPS host to do? TIA, Charles P.S. The VPS is a Windows 2008 box with IIS7 and the default SMTP (IIS6?) server. The web app is ASP.NET MVC, not that that should matter.
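
    A quick way to confirm whether mail for the domain is being delivered locally rather than handed to Google is to compare what DNS says with what the VPS can reach (standard tools, run from the VPS itself):

        rem Where should mail for the domain go? This should list the Google MX hosts.
        nslookup -type=MX mysuperwebapp.com

        rem Can the VPS reach Google's primary MX on port 25?
        telnet aspmx.l.google.com 25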

  • Cannot find network path for computer in workgroup of home Windows XP PCs

    - by John Galt
    VMware Workstation 6.5 is running as an app on a Windows Vista 64-bit PC host. Thanks to Workstation we have two guest machines running: TerriVM and MattVM (both running Windows XP SP2). We are attempting to get virtual networking configured so we can access the files of both of these VM guests from other real PCs connected to this home network. We think we are close, but we can't quite get it right... Here is what we've done so far: in VMware Workstation, we set "Host Virtual Network Mapping" to use VMnet0 with the setting "Bridge to an automatically chosen adapter"; on each VM guest (using Windows Explorer on XP), we right-click the C disk, click the "Sharing" tab, set the share name to "C_Disk", and check both boxes labeled "Share this folder on the network" and "Allow network users to change my files". Symptoms: on the "JohnsRealXP" PC, we go to Windows Explorer, My Computer, Map Network Drive, type \\TerriVM\C_Disk into the Folder textbox, and assign drive letter T. We see all the folders on this shared drive and can open files on them, so that is good. On the same "JohnsRealXP" PC, we go to Map Network Drive, type \\MattVM\C_Disk into the Folder textbox, and assign drive letter M. We get a message box: "The network path \\mattvm\C_Disk could not be found". Alternatively, we type just \\mattvm\ into the Folder box and click "Browse", and get a dialog where we drill down from "Entire Network" to "Microsoft Windows Network" to "Workgroup", where both TerriVM and MattVM are listed as computers on the network. Clicking the + sign next to MattVM gives an hourglass, never enables the OK button, and I have to cancel. In summary, I think we've attempted to share both of these virtual machines using the same technique and connect to them in similar fashion, but one connects properly while the other can be seen yet none of its shared resources can be accessed. Can anyone suggest something possibly overlooked or something to try? Thanks so much in advance.
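
    When one guest browses fine and the other doesn't, it's worth testing name resolution and share enumeration separately from the XP machine that fails (standard commands, names as in the question):

        rem Does the name resolve and respond at all?
        ping mattvm

        rem Can the share list be enumerated? Failure here usually points at the
        rem guest's own firewall or its File and Printer Sharing settings.
        net view \\mattvm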

  • Using public interfaces on a server connected through a GRE tunnel

    - by Evan
    I'm pretty new to networking, so please forgive any terminology mistakes. I have two servers connected with a GRE tunnel: Server1 (10.0.0.1) ---- Server2 (10.0.0.2). I want to be able to bind to Server2's public IPs from Server1. To do this, I set up virtual interfaces with Server2's public IPs on Server1 and then used routing rules on Server1 to route the packets through the GRE tunnel. On Server1:

        ip rule add from [Server2's first public IP] table gre
        ip rule add from [Server2's second public IP] table gre
        ip route add default via 10.0.0.2 dev gre1 table gre

    This works great, and I can see the packets arriving via GRE on Server2; a packet exits the tunnel on Server2's gre1 device as shown below. From Server1:

        ping -I [Server2's public ip] google.com

    tcpdump on Server2's GRE tunnel device:

        12:07:17.029160 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
            [Server2's public ip] > 74.125.225.38: ICMP echo request, id 6378, seq 50, length 64

    This is exactly the packet I want. However, I'm not seeing it go out at all on eth0:0 (where Server2's public IP is bound). I've also tried using routing rules to get packets coming from Server2's public IP (which would be coming out of dev gre1) to go out through dev eth0 via the public default gateway, and that doesn't work either. I'm at a loss; thank you to anyone who can help.
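
    One thing that commonly drops exactly this kind of asymmetric traffic is reverse-path filtering on Server2, so it may be worth ruling out (a hedged guess, not a confirmed diagnosis):

        # on Server2: relax strict reverse-path filtering for the interfaces involved
        sysctl -w net.ipv4.conf.all.rp_filter=0
        sysctl -w net.ipv4.conf.gre1.rp_filter=0
        sysctl -w net.ipv4.conf.eth0.rp_filter=0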

  • Prevent failback to the master after a failure

    - by Chrille
    I'm using keepalived to set up a virtual IP that points to a master server. When a failover happens, it should point the virtual IP to the backup, and the IP should stay there until I manually bring the (fixed) master back in. The reason this is important is that I'm running MySQL replication on the servers and writes should only go to the master; when I fail over, I promote the slave to master. The master server's config:

        global_defs {
            ! this is who emails will go to on alerts
            notification_email {
                [email protected]
                ! add a few more email addresses here if you would like
            }
            notification_email_from [email protected]
            ! I use the local machine to relay mail
            smtp_server 127.0.0.1
            smtp_connect_timeout 30
            ! each load balancer should have a different ID
            ! this will be used in SMTP alerts, so you should make
            ! each router easily identifiable
            lvs_id APP1
        }

        vrrp_instance APP1 {
            interface eth0
            state EQUAL
            virtual_router_id 61
            priority 999
            nopreempt
            virtual_ipaddress {
                217.x.x.129
            }
            smtp_alert
        }

    The backup server's config (global_defs identical apart from lvs_id APP2):

        vrrp_instance APP2 {
            interface eth0
            state EQUAL
            virtual_router_id 61
            priority 100
            virtual_ipaddress {
                217.xx.xx.129
            }
            notify_master "/etc/keepalived/notify.sh del app2"
            notify_backup "/etc/keepalived/notify.sh add app2"
            notify_fault "/etc/keepalived/notify.sh add app2"
            smtp_alert
        }
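
    For comparison, the usual keepalived recipe for "stay on the backup until told otherwise" is to start both nodes in the BACKUP state and set nopreempt on the higher-priority node; nopreempt is documented to be ignored when state is MASTER, and EQUAL is not one of keepalived's states (a sketch of the relevant lines only):

        vrrp_instance APP1 {
            state BACKUP      ! both nodes start as BACKUP
            nopreempt         ! only meaningful on the higher-priority node
            priority 999
            ...
        }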

  • Apache taking up a lot of CPU while running request-tracker4

    - by bhowmik
    I am trying out a Request Tracker installation on an EC2 micro instance. The specs for the micro instance are as follows: (1) Ubuntu 12.04 64-bit, 613 MB RAM, 8 GB hard drive; (2) request-tracker 4.0.4 from the repository, Perl 5.14.2, Apache2, MySQL 5; (3) request-tracker 4.0.4 running with mod_perl2 and the worker MPM; (4) Apache configured with the worker MPM, config snippet below:

        Timeout 150
        KeepAlive On
        MaxKeepAliveRequests 60
        KeepAliveTimeout 2
        <IfModule mpm_worker_module>
            StartServers          2
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>

    Now, when I start Apache2 it works fine for some time, and after a while the CPU load shoots up to 99% or more; usually it is one or more Apache processes doing this. I've tried to modify the worker-module configuration without any luck. The log files for both Apache2 and request-tracker4 are set to log debug messages and don't show anything to indicate what could be causing this. The system gets a maximum of 5 users at any given time, and usually (90% of the time) it is just 2. I've just installed it, and we only have 20 tickets in the database. I don't think it's memory that's causing the issue, since the server isn't swapping or even close to it, and I hardly see the memory usage go up. I would appreciate any pointers on how to go about troubleshooting this. In case it helps, I've also tried a similar installation on a small instance (identical settings except RAM bumped up to 1.7 GB) and I still see the issue.
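
    To see which requests the hot workers are actually serving, mod_status with ExtendedStatus is usually the quickest window in (standard Apache 2.2 configuration; tighten the access control to taste):

        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    With that in place, /server-status shows the URL and state of every worker, which tends to make a runaway mod_perl request easy to spot.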

  • Windows XP Setup Fails to Recognize USB Floppy after formatting AHCI disk

    - by Strahn
    I am attempting to install Windows XP Professional x64 onto an HP EliteBook 8540w. I have downloaded both the latest Intel Rapid Storage Technology drivers and the Intel Matrix Storage drivers listed on HP's website, and copied the drivers onto floppy disks (two separate floppies, one for each version of the drivers). Booting to my XP Pro x64 install CD, I go through the F6 process, load the driver, and am able to see my HDD and delete, create, and format partitions on it. When I go to continue the install, after checking the disk, the system asks me to insert the disk labeled "Intel Rapid Storage Technology" and press Enter to continue. Nothing happens at this point when I press Enter, whether I use the latest drivers or the older ones. We have created a slipstreamed install CD using nLite that has the AHCI drivers integrated, and it installs fine. However, we have identified a number of issues with the system that I believe are side effects of using nLite for the slipstreaming, and I am attempting to verify that. I have researched this issue and found a few examples of others having the same problem, but no solution. The USB floppy is a LaCie-branded drive; connecting it to a working XP workstation shows it to be the Y-E Data USB floppy drive that is supposedly 100% compatible with XP per MS KB 916196.

  • Adding users to SharePoint when they are not in the same domain

    - by jim-work
    Bear with me as I explain this; I'm working my way through SharePoint access as I go, but I'll clarify my question as I go along. The problem: we have about 10,000 users who need access to our SharePoint 2005-based reporting. Because our organization is migrating from one domain to another, we need to add each user twice, once for each domain. For the current domain this is no problem: we've got a PowerShell script that I tweaked to add all the users in a given CSV file, and it takes about 5 minutes to run. The big problem is with users who are NOT in the currently active domain. Because the SharePoint server cannot authenticate the new users, we can't add them directly. What we're doing is creating a temp user, then using STSADM.EXE to migrate that test user to the proper domain/user_name for each of our 10,000 users. The creation and migration take about 5 seconds per user, or well over 12 hours to run. The question: has anyone encountered this before? Is there a way to add users without requiring AD authentication? And why is STSADM.EXE running so slow? Thanks a lot for any advice or direction anyone can give me.
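
    For reference, the per-user migration being timed here is presumably along these lines (hedged; the logins are invented):

        stsadm -o migrateuser -oldlogin OLDDOMAIN\jsmith ^
            -newlogin NEWDOMAIN\jsmith -ignoresidhistory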
