Search Results

Search found 18219 results on 729 pages for 'tips box'.


  • Sign an OpenSSL .CSR with Microsoft Certificate Authority

    - by kce
    I'm in the process of building a Debian FreeRADIUS server that does 802.1X authentication for domain members. I would like to sign my RADIUS server's SSL certificate (used for EAP-TLS) and leverage the domain's existing PKI. The RADIUS server is joined to the domain via Samba and has a machine account, as displayed in Active Directory Users and Computers.

    The domain controller I'm trying to sign my RADIUS server's key against does not have IIS installed, so I can't use the preferred Certsrv web page to generate the certificate. The MMC tools won't work, as they can't access the certificate stores on the RADIUS server, because those stores don't exist. That leaves the certreq.exe utility. I'm generating my .CSR with the following command:

        openssl req -nodes -newkey rsa:1024 -keyout server.key -out server.csr

    The resulting .CSR:

        ******@mis-ke-lnx:~/G$ openssl req -text -noout -in mis-radius-lnx.csr
        Certificate Request:
            Data:
                Version: 0 (0x0)
                Subject: C=US, ST=Alaska, L=CITY, O=ORG, OU=DEPT, CN=ME/emailAddress=MYEMAIL
                Subject Public Key Info:
                    Public Key Algorithm: rsaEncryption
                    RSA Public Key: (1024 bit)
                        Modulus (1024 bit):
                            00:a8:b3:0d:4b:3f:fa:a4:5f:78:0c:24:24:23:ac:
                            cf:c5:28:af:af:a2:9b:07:23:67:4c:77:b5:e8:8a:
                            08:2e:c5:a3:37:e1:05:53:41:f3:4b:e1:56:44:d2:
                            27:c6:90:df:ae:3b:79:e4:20:c2:e4:d1:3e:22:df:
                            03:60:08:b7:f0:6b:39:4d:b4:5e:15:f7:1d:90:e8:
                            46:10:28:38:6a:62:c2:39:80:5a:92:73:37:85:37:
                            d3:3e:57:55:b8:93:a3:43:ac:2b:de:0f:f8:ab:44:
                            13:8e:48:29:d7:8d:ce:e2:1d:2a:b7:2b:9d:88:ea:
                            79:64:3f:9a:7b:90:13:87:63
                        Exponent: 65537 (0x10001)
                Attributes:
                    a0:00
            Signature Algorithm: sha1WithRSAEncryption
                35:57:3a:ec:82:fc:0a:8b:90:9a:11:6b:56:e7:a8:e4:91:df:
                73:1a:59:d6:5f:90:07:83:46:aa:55:54:1c:f9:28:3e:a6:42:
                48:0d:6b:da:58:e4:f5:7f:81:ee:e2:66:71:78:85:bd:7f:6d:
                02:b6:9c:32:ad:fa:1f:53:0a:b4:38:25:65:c2:e4:37:00:16:
                53:d2:da:f2:ad:cb:92:2b:58:15:f4:ea:02:1c:a3:1c:1f:59:
                4b:0f:6c:53:70:ef:47:60:b6:87:c7:2c:39:85:d8:54:84:a1:
                b4:67:f0:d3:32:f4:8e:b3:76:04:a8:65:48:58:ad:3a:d2:c9:
                3d:63

    I'm trying to submit my certificate request using the following certreq.exe command:

        certreq -submit -attrib "CertificateTemplate:Machine" server.csr

    I receive the following error upon doing so:

        RequestId: 601
        Certificate not issued (Denied) Denied by Policy Module
        The DNS name is unavailable and cannot be added to the Subject Alternate name.
        0x8009480f (-2146875377)
        Certificate Request Processor: The DNS name is unavailable and cannot be added
        to the Subject Alternate name. 0x8009480f (-2146875377)
        Denied by Policy Module

    My certificate authority has the following certificate templates available. If I try to submit via certreq.exe using "CertificateTemplate:Computer" instead of "CertificateTemplate:Machine", I get an error reporting that "the requested certificate template is not supported by this CA." My google-foo has failed me so far on trying to understand this error... I feel like this should be a relatively simple task, as X.509 is X.509 and OpenSSL generates the .CSRs in the required PKCS#10 format. I can't be the only one out there trying to sign an OpenSSL-generated key on a Linux box with a Windows Certificate Authority, so how do I do this (preferably using the offline certreq.exe tool)?
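    Since the policy module is complaining that it has no DNS name to put in the Subject Alternative Name, one thing worth trying is embedding the FQDN in the CSR itself. A minimal sketch using an OpenSSL extensions config - the hostname and DN below are placeholders, not my real values:

        # san.cnf -- hypothetical config; substitute the radius server's real FQDN
        [ req ]
        distinguished_name = req_distinguished_name
        req_extensions     = v3_req
        [ req_distinguished_name ]
        [ v3_req ]
        subjectAltName = DNS:mis-radius-lnx.example.com

        # generate the CSR with the SAN extension included
        openssl req -nodes -newkey rsa:1024 -keyout server.key -out server.csr \
            -subj "/C=US/ST=Alaska/CN=mis-radius-lnx.example.com" -config san.cnf

    Alternatively, if the CA has the EDITF_ATTRIBUTESUBJECTALTNAME2 flag enabled, the SAN can reportedly be supplied at submit time with certreq -submit -attrib "CertificateTemplate:Machine\nSAN:dns=mis-radius-lnx.example.com" server.csr - I have not verified this against my CA.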


  • How can I automatically update Flash Player whenever a new version is released? [closed]

    - by user219950
    Summary: the Flash Player Update Service doesn't run on a reliable schedule, and doesn't automatically download and apply updates when it does run. Given the importance of having an up-to-date version of Flash Player installed (for those of us who don't use Chrome with its built-in player), I would like to find a way to ensure that new updates are promptly detected and installed. What follows are the details of my efforts to solve this problem on my own...

    Appendix A: Flash Player Update Service

    OK, way back in Flash Player 11.2 (or so?) Adobe added the Flash Player Update Service (FlashPlayerUpdateService.exe); it was supposed to keep the Flash Player updated. Upon installation, FPUS is configured to run as a Windows Service, with Start Type set to Manual, and a Scheduled Task (Adobe Flash Player Updater.job) is added to start this service every hour. So far, so good - this set-up avoids a constantly-running service but makes sure that the checks run often enough to catch any updates quickly. Google's software updater is configured in a similar fashion, and that works just fine...

    ...And yet, when I checked the version of my installed Flash Player, I found it was 11.6.602.180, which, based on the timestamps of the files in C:\Windows\System32\Macromed\Flash, was last updated (or installed) on Tue, Mar 12, 2013, 5:00:08 pm. I made this observation on Thu, Apr 25, 2013, 7:00:00 pm, and upon checking Adobe's website found that the current version of Flash Player was 11.7.700.169. That's over a month since the last update, with a new one clearly available on the website but no indication that the hourly check running on my machine has noticed it or has any intention of downloading it.

    Appendix B: running the Flash Player updater manually

    Once upon a time, running FlashUtil32_<version>_Plugin.exe -update plugin would give you a window with an Install button; pressing it would download the installer for the current version (automatically, without opening a browser) and run it, then you'd click through that installer and be done. It was manual, but it worked! Finding my current installation out of date (see Appendix A), I first tried this manual update process. However...

    Running FlashUtil32_<version>_ActiveX.exe -update activex (in my case, FlashUtil32_11_6_602_180_ActiveX.exe -update activex) only presents a window with a Download button, and clicking that button opens my browser to https://get3.adobe.com/flashplayer/update/activex. Running FlashUtil32_<version>_Plugin.exe -update plugin (in my case, FlashUtil32_11_6_602_180_Plugin.exe -update plugin) likewise only presents a Download button that opens my browser to https://get3.adobe.com/flashplayer/update/plugin.

    I could continue with the download page it sent me to, uncheck the foistware box ("Free! McAfee Security Scan Plus"), download that installer (ActiveX, no foistware: install_flashplayer11x32axau_mssd_aih.exe; Plugin, no foistware: install_flashplayer11x32au_mssd_aih.exe) and probably have an updated Flash... but then, what is the point of the Flash Player Update Service if I have to manually download and run another exe?

    Epilogue

    I've since come to suspect that the update service is intentionally hobbled to drive early adopters to the manual download page. If this is true, there's probably no solution to this short of writing my own updater; hopefully I am wrong.
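    For anyone poking at the same problem, this is how I've been checking what's actually installed and forcing a check by hand. The registry key is an assumption about where the updater records the installed version; the directory and task name are the ones mentioned above:

        :: installed version as recorded in the registry (assumed key location)
        reg query "HKLM\SOFTWARE\Macromedia\FlashPlayer" /v CurrentVersion

        :: timestamps of the actual plugin binaries
        dir C:\Windows\System32\Macromed\Flash

        :: run the updater's scheduled task now instead of waiting for the hourly trigger
        schtasks /Run /TN "Adobe Flash Player Updater"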


  • Apple Airport Express, Extreme and Time Capsules, BT Home Hub, Wireless Extenders confusion

    - by Jamie Hartnoll
    I post quite frequently on Stack Overflow but use Super User less frequently, mainly because I don't change hardware often and rarely have software issues! I live in a small stone cottage and have an office in a separate building across a yard. I have a BT Homehub located in the cottage and a series of Ethernet cables running across the yard to the office. This is fine for my wired stuff. My main office computers are PCs running Windows 7 Ultimate, and one on Win7 Home, all working fine. I also have an old laptop on Win XP which works fine wirelessly in the house for those evenings in front of the TV catching up on a bit of work. I also have an iPhone and an iPad.

    Recently, I have been trying to get WiFi in the office so I can use Adobe Shadow (or whatever it now is!) to improve mobile web development efficiency using my iPhone and iPad, so I bought this: http://www.ebuyer.com/393462-zyxel-wre2205-500mbps-powerline-wireless-n300-range-extender-wre2205-gb0101f - thinking it would be lovely just plugged into the socket by the door in the office, extending the perimeter of the WiFi from my Homehub. I can't get it to work properly! If I plug a laptop into its Ethernet port I can get it to connect to the Homehub, giving me a kind of wired wireless extender. If, however, I plug the Ethernet port into my Homehub, it then seems to extend the network, but only my iOS devices work; all my wired stuff stops working, and it seems to create an infinite loop where Windows connects to my Homehub and then, rather than to the internet, connects back to the extender.

    Anyway... in the meantime, I took a fatal trip to the Apple Store, where I purchased an Airport Express... solely for the purpose of hooking my iOS devices up as wireless music players in the house. I knew it had WiFi, but didn't want to use that part as an extender; I didn't think it would work on a Homehub anyway. It doesn't work on a Homehub! I now have a new wireless network in the house which, when anything connects to it, cannot connect to the internet, so it works ONLY as a wireless music player. I then borrowed some Powerline adaptors from someone and realised that this whole thing was getting totally out of control! It seems all the technology is out there, but it's so complicated to get the right series of devices.

    To further add to the confusion, I wouldn't mind a network hard drive. I bought one that broke and lost everything, so now we're on to looking at the Apple Time Capsules. So my question is... IF I buy an Apple Time Capsule, can I:

    1. Hook it up to my Homehub, leaving the Homehub connected to the internet so my Hub phones still work, then disable wireless on the Homehub
    2. Link up my Airport Express to the Time Capsule PROPERLY, so it will connect to the internet
    3. Do the above with an Apple TV box, should I buy one in future
    4. Use the Time Capsule as a network hard drive to store video and music that can be viewed/listened to via my iOS devices/Apple TV/Airport Express anywhere, even with my main PC off (this currently stores all this data)
    5. Hope that the iOS devices like the WiFi from the Time Capsule better than the Homehub and work without extension, or buy another Airport Express to get WiFi in the office.

    Or... should I buy an Airport Extreme and use a USB hard drive for the network drive?


  • Dual head setup for Ubuntu 10.04.1 and Windows XP Pro with same hardware configuration

    - by mejpark
    Hello. I have a Dell OptiPlex 360 workstation at work with 2 x ATI RV280 [Radeon 9200 PRO] graphics cards installed, which are attached to two identical 19" HII flat panel monitors. I'm using the open source Radeon driver with Ubuntu, and the proprietary drivers with Windows. The good news is that dual head configuration works for both OSes. The bad news is that I have to use a different hardware configuration for each OS to achieve this.

    Hardware config #1 - dual monitors work for Windows XP Pro like this:

        First display  -> external VGA port
        Second display -> DVI input on gfx card

    Hardware config #2 - dual monitors work for Ubuntu 10.04.1 like this:

        First display  -> VGA port on gfx card
        Second display -> DVI input on gfx card

    I connected up the displays according to Config #2 and booted up Windows, which resulted in a mirror image on both screens. I was unable to log in, as the login box was not visible. I unplugged the VGA lead from the gfx card and plugged it into the external VGA port (Config #1) - Windows dual head works again, but the VGA-connected screen is not recognised by Ubuntu and remains in standby mode. Is it possible to configure a dual head setup for Ubuntu using Config #1, or am I missing something?

    I tried setting up dual monitors using Config #1 this morning, which didn't work. By default there is no xorg.conf file in Ubuntu 10.04.1, so I generated one using:

        $ sudo X :2 -configure

        X.Org X Server 1.7.6
        Release Date: 2010-03-17
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 2.6.24-27-server i686 Ubuntu
        Current Operating System: Linux harrier 2.6.32-24-generic #42-Ubuntu SMP Fri Aug 20 14:24:04 UTC 2010 i686
        Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-24-generic root=UUID=a34c1931-98d4-4a34-880c-c227a2936c4a ro quiet splash
        Build Date: 21 July 2010 12:47:34PM
        xorg-server 2:1.7.6-2ubuntu7.3 (For technical support please see http://www.ubuntu.com/support)
        Current version of pixman: 0.16.4
            Before reporting problems, check http://wiki.x.org
            to make sure that you have the latest version.
        Markers: (--) probed, (**) from config file, (==) default setting,
            (++) from command line, (!!) notice, (II) informational,
            (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        (==) Log file: "/var/log/Xorg.2.log", Time: Mon Sep 13 10:02:02 2010
        List of video drivers:
            apm ark intel mach64 s3virge trident mga tseng ati nouveau
            neomagic i740 openchrome voodoo s3 i128 radeon siliconmotion
            nv ztv vmware v4l chips rendition savage sisusb tdfx geode
            sis r128 cirrus fbdev vesa
        (++) Using config file: "/home/michael/xorg.conf.new"
        (==) Using config directory: "/usr/lib/X11/xorg.conf.d"
        (II) [KMS] No DRICreatePCIBusID symbol, no kernel modesetting.

        Xorg detected your mouse at device /dev/input/mice.
        Please check your config if the mouse is still not
        operational, as by default Xorg tries to autodetect
        the protocol.

        Xorg has configured a multihead system, please check your config.

        Your xorg.conf file is /home/michael/xorg.conf.new
        To test the server, run 'X -config /home/michael/xorg.conf.new'
        ddxSigGiveUp: Closing log

        $ sudo X -config /home/michael/xorg.conf.new

        Fatal server error:
        Server is already active for display 0
            If this server is no longer running, remove /tmp/.X0-lock
            and start again.
        Please consult the The X.Org Foundation support at http://wiki.x.org for help.
        ddxSigGiveUp: Closing log

    I then booted Ubuntu in failsafe mode, dropped into a root shell, and executed X -config /home/michael/xorg.conf.new again. The screen went blank and turned off, so I reset the machine. There must be a way round this. Any help to set up a dual head config for Ubuntu using Config #1 would be hugely appreciated. TIA, Mike
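    In case it clarifies what I'm aiming for, this is roughly the xorg.conf layout I believe Config #1 needs - a sketch only; the BusIDs are placeholders that would have to come from lspci output:

        # two Device sections, one per output path -- BusIDs are placeholders
        Section "Device"
            Identifier "Radeon-VGA"
            Driver     "radeon"
            BusID      "PCI:1:0:0"
        EndSection

        Section "Device"
            Identifier "Radeon-DVI"
            Driver     "radeon"
            BusID      "PCI:2:0:0"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device     "Radeon-VGA"
        EndSection

        Section "Screen"
            Identifier "Screen1"
            Device     "Radeon-DVI"
        EndSection

        Section "ServerLayout"
            Identifier "DualHead"
            Screen     0 "Screen0"
            Screen     1 "Screen1" RightOf "Screen0"
        EndSection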


  • Splitting a raidctl mirror safely

    - by milkfilk
    I have a Sun T5220 server with the onboard LSI card and two disks that were in a RAID 1 mirror. The data is not important right now, but we had a failed disk and are trying to understand how to do this for real if we had to recover from a failure. The initial situation looked like this:

        # raidctl -l c1t0d0
        Volume                  Size    Stripe  Status   Cache  RAID
                Sub                     Size                    Level
                        Disk
        ----------------------------------------------------------------
        c1t0d0                  136.6G  N/A     DEGRADED OFF    RAID1
                        0.1.0   136.6G          GOOD
                        N/A     136.6G          FAILED

    Green light on the 0.0.0 disk. Find / lights up the 0.1.0 disk. So I know I have a bad drive and which one it is. The server still boots, obviously.

    First, we tried putting a new disk in. This disk came from an unknown source. format would not see it, cfgadm -al would not see it, so raidctl -l would not see it. I figure it's bad. We tried another disk from another spare server:

        # raidctl -c c1t1d0 c1t0d0    (where t1 is my good disk - 0.1.0)
        Disk has occupied space.

    The different syntax options don't change anything:

        # raidctl -C "0.1.0 0.0.0" -r 1 1
        Disk has occupied space.
        # raidctl -C "0.1.0 0.0.0" 1
        Disk has occupied space.

    OK. Maybe this is because the disk from the spare server had a RAID 1 on it already. Aha, I can see another volume in raidctl:

        # raidctl -l
        Controller: 1
                Volume:c1t1d0     (this is my server's root mirror)
                Volume:c1t132d0   (this is the foreign root mirror)
                Disk: 0.0.0
                Disk: 0.1.0
                ...

    No problem. I don't care about the data, I'll just delete the foreign mirror:

        # raidctl -d c1t132d0
        (warning about data deletion, but it works)

    At this point, /usr/bin binaries freak out. By that I mean ls -l /usr/bin/which shows 1.4k, but cat /usr/bin/which gives me a newline. Great, I just blew away the binaries (ie: binaries in memory still work)? I bounce the box. It all comes back fine. WTF. Anyway, back to recreating my mirror:

        # raidctl -l
        Controller: 1
                Volume:c1t1d0     (this is my server's root mirror)
                Disk: 0.0.0
                Disk: 0.1.0
                ...

    The man page says that you can delete a mirror and it will split it. OK, I'll delete the root mirror:

        # raidctl -d c1t0d0
        Array in use.    (this might not be the exact error)

    I googled this and found that of course you can't do this (even with -f) while booted off the mirror. OK. I boot cdrom -s and deleted the volume. Now I have one disk that has a type of "LSI-Logical-Volume" on c1t1d0 (where my data is) and a brand new "Hitachi 146GB" on c1t0d0 (what I'm trying to mirror to):

        (booted off the CD)
        # raidctl -c c1t1d0 c1t0d0              (man says it's source destination for mirroring)
        Illegal Array Layout.
        # raidctl -C "0.1.0 0.0.0" -r 1 1       (alt syntax per man)
        Illegal Array Layout.
        # raidctl -C "0.1.0 0.0.0" 1            (assumes raid1, no help)
        Illegal Array Layout.

    Same size disks, same manufacturer, but I did delete the volume instead of throwing in a blank disk and waiting for it to resync. Maybe this was a critical error. I tried selecting the type in format for my good disk to be a plain 146GB disk, but it resets the partition table, which I'm pretty sure would wipe the data (bad if this was production). Am I boned? Anyone have experience with breaking and resyncing a mirror? There's nothing on Google about "Illegal Array Layout", so here's my contribution to the search gods.
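    For the next person who hits "Illegal Array Layout": my working theory is a label/geometry mismatch between the surviving LSI volume and the raw replacement disk, so the first thing I plan to compare is the two disks' VTOCs - a sketch using my device names:

        # compare the partition maps / geometry of the good disk and the new disk
        prtvtoc /dev/rdsk/c1t1d0s2
        prtvtoc /dev/rdsk/c1t0d0s2

        # if the geometries match, the good disk's label can be copied across
        prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2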


  • Play music on iPhone through computer

    - by Kyle Cronin
    Now that I've had my iPhone for a few months, I'm trying an experiment to see if I can't replace the laptop I carry around with my iPhone + an internet-connected computer. To this end, I've been trying to find a program that will let me play the music on my iPhone through the hardware and software on the host computer. If I recall correctly, this was possible a few years ago with the iPod - Linux software like Rhythmbox and Banshee was able to read the music off an iPod and play it through the speakers. I even thought I recalled iTunes itself being capable of this at one time. Now, however, iTunes greys out/disables the music on my iPhone, and I can't find any documented support for the iPhone in any other music program. Is this really no longer possible? Am I limited to using the headphone jack to get music to play? (Note: I am using an iPhone 3G with the 3.0 software. I am attempting to play music on computers other than the one I sync with.)

    Several replies mention that I should check "manually manage" to do this. I just tried this on a computer that I don't sync my iPhone to, and it asked me to erase and sync, which is obviously something I don't want to do.

    Update: OK, I checked the "Manually manage music and videos" box on a computer that I didn't sync to (now known as "Computer A"), and it told me that I needed to erase & sync for the changes to take effect, so I did. At this point I'm guessing that my iPhone thinks it's syncing with that computer. I copied over a few songs using the autofill feature; at this point, Computer A sees the maybe 10 or so songs I've copied using autofill. I then plug my iPhone into my MacBook ("Computer B"), which I've been syncing with. At this point, I'm pretty sure it still thought that all my synced content was on my iPhone. The "Manually manage music and videos" checkbox isn't checked, so I check it and go through a similar process where iTunes erases the synced content and I copy over a playlist. At this point, there's no trace of the songs that I copied over from Computer A. So I plug my iPhone into Computer A - in the Music section are the handful of songs I had copied over earlier, greyed out and unplayable. To make sure this wasn't some sort of caching issue, I plugged my iPhone into my sister's MacBook ("Computer C"), and it lists the same few greyed-out songs that I had copied over from Computer A. Plugging into Computer B doesn't reveal these songs at all, only the songs that it copied over (these are playable).

    A few things:

    1. This inconsistent behavior is driving me insane. Why would my iPhone report two versions of its contents to different computers?
    2. Is there a way to get a computer to completely forget about an iPhone and just resync everything to get it into a consistent state?
    3. Even if I get the phone into a consistent state, I still can't play the files on my phone anywhere but the computer I sync with, which was my original goal. What am I doing wrong?

    Maybe I should read the fine print before I mess with my iPhone: going over this thread with a fine-toothed comb again yields this lovely tidbit in the Apple docs:

        Note: Even when manually managing, some content may only be available from
        one library at a time. This includes all content on iPhone and video
        content on iPods.

    OK, so manually managing is a dead end on the iPhone. Are there any other options? Any unofficial third-party programs or drivers that will work?


  • FFmpeg extract clip - stream frame rate differs from container frame rate (x264, aac)

    - by fideli
    Summary: the H.264 video seems to have a really high frame rate that requires a scaling factor to be applied to the duration of video I'm trying to extract (900x lower).

    Body: I'm trying to extract a clip from a movie that I have in MP4 format (created using HandBrake). After trying mencoder and VLC, I decided to give FFmpeg a shot, since it was the least troublesome when it came to copying the codecs. That is, compared to mencoder and VLC, the resulting file was still playable in QuickTime (I know about Perian, etc; I'm just trying to learn how all this works). Anyway, my command was as follows:

        ffmpeg -ss 01:15:51 -t 00:05:59 -i outofsight.mp4 \
            -acodec copy -vcodec copy clip.mp4

    During the copy, the following comes up:

        Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1)
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'outofsight.mp4':
          Duration: 01:57:42.10, start: 0.000000, bitrate: 830 kb/s
            Stream #0.0(und): Video: h264, yuv420p, 720x384, 25 tbr, 22500 tbn, 45k tbc
            Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16
        Output #0, mp4, to 'out.mp4':
            Stream #0.0(und): Video: libx264, yuv420p, 720x384, q=2-31, 90k tbn, 22500 tbc
            Stream #0.1(eng): Audio: libfaac, 48000 Hz, stereo, s16
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop encoding
        frame= 2591 fps=2349 q=-1.0 size=    8144kB time=101.60 bitrate= 656.7kbits/s
        …

    Instead of a 5:59 duration clip, I get the entire rest of the movie. So, to test this, I ran the ffmpeg command with -t 00:00:01. What I got was exactly a 15:00 minute clip. So I did some black-box engineering and decided to scale my -t option by calculating what value to enter, given that 1 second was interpreted as 900 s. For my desired 359 s clip, I calculated 0.399 s, and so my ffmpeg command became:

        ffmpeg -ss 01:15.51 -t 00:00:00.399 -i outofsight.mp4 \
            -acodec copy -vcodec copy clip.mp4

    This works, but I have no idea why the duration is scaled by 900. Investigating further, each ffmpeg run has the line:

        Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1)

    45000/25 = 1800. Must be a relation somewhere. Somehow, the obscenely high frame rate is causing issues with the timing. How is that frame rate so high? The best part about this is that the resulting clip.mp4 has the exact same feature (due to the copied video codec), and taking further clips from it needs the same scaling for the -t duration option. Therefore, I've made it available for anyone willing to check this out.

    Appendix: the preamble for ffmpeg on my system (built using the MacPorts ffmpeg port):

        FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --prefix=/opt/local --disable-vhook --enable-gpl --enable-postproc
            --enable-swscale --enable-avfilter --enable-avfilter-lavf --enable-libmp3lame
            --enable-libvorbis --enable-libtheora --enable-libdirac --enable-libschroedinger
            --enable-libfaac --enable-libfaad --enable-libxvid --enable-libx264
            --mandir=/opt/local/share/man --enable-shared --enable-pthreads
            --cc=/usr/bin/gcc-4.2 --arch=x86_64
          libavutil     49.15. 0 / 49.15. 0
          libavcodec    52.20. 0 / 52.20. 0
          libavformat   52.31. 0 / 52.31. 0
          libavdevice   52. 1. 0 / 52. 1. 0
          libavfilter    1. 4. 0 /  1. 4. 0
          libswscale     1. 7. 1 /  1. 7. 1
          libpostproc   51. 2. 0 / 51. 2. 0
          built on Jan 4 2010 21:51:51, gcc: 4.2.1 (Apple Inc. build 5646) (dot 1)

    EDIT: Not sure whether it was a bug or not, but it seems to be fixed now in my current version of ffmpeg (0.6.1 from MacPorts), at least for this video.
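    For reference, once the factor is measured (900x here: -t of 1 s produced a 900 s clip), the scaled -t value is a one-liner:

        # desired clip length divided by the observed scale factor
        echo "scale=3; 359 / 900" | bc
        # -> .398  (rounded up to 00:00:00.399 so the clip isn't cut short)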


  • Wireless AAA for a small, bandwidth-limited hotel

    - by Anthony Hiscox
    We (the tech I work with and myself) live in a remote northern town where internet access is somewhat of a luxury and bandwidth is quite limited. Here, overage charges ranging from a few hundred to a few thousand dollars a month are not uncommon. I myself incur regular monthly charges just through my regular internet usage at home (I am allowed 10G for $60CAD!). As part of my work, I have found myself involved with several hotels that are feeling this. I know that I can come up with something to solve this problem, but I am relatively new to system administration and I don't want my dreams to overcome reality. So I pass these ideas on to you, those with much more experience than I, in hopes you will share some of your thoughts and concerns. The system must:

    1. Be cost effective. Yes, the charges are high here, but the trust in technology is the lowest I've ever seen.
    2. Be capable of helping clients reduce their usage (squid).
    3. Allow a limited (throughput and total usage) amount of free internet, as this is often franchise policy.
    4. Allow a user to track their bandwidth usage.
    5. Allow (optional) higher speed and/or usage for an additional charge. This fee can be collected at the front desk on checkout and should not require the use of PayPal or a credit card.
    6. Work around franchise authentication policies. Unfortunately, some franchises have ridiculous policies that require the use of a third-party remote service to authenticate guests to your network. This means WPA is out, and it also means that I do not authenticate before internet usage; that will be their job. However, I do require the ABILITY to perform authentication for internet access if a hotel does not have this policy. I will still have to track bandwidth (under a guest account by default) and provide the same limiting; however, the guest will often require complete 'unlimited' access - in terms of existence, not throughput.
    7. Provide firewalling capabilities for hotels that have nothing: office and guest network segregation (some of these guys are running their office on the guest network, with no encryption and a simple TOS to get on!).
    8. Prevent guests from connecting to other guests, but provide a means to allow this to happen on request. E.g. each guest connects to a page and allows the other guest; this writes an iptables rule (with python-netfilter) and allows two rooms to play a game, for instance (see the iptables sketch at the end of this post).

    My thoughts on how to implement this: one decent box (we'll call it a router now) with a lot of RAM and 3 NICs:

    1. Internet
    2. Office
    3. Guests (APs + in-room Ethernet)

    Router firewall rules: guests can talk to the router only, through which they are routed to where they need to go, including internet services. The office NIC can be used to bridge the office to the internet if an existing solution is not in place; otherwise, it simply serves a network-accessible web interface (webmin + python-webmin?).

    Router software: OpenVZ provides virtualization for a few services I don't really trust - Squid, FreeRADIUS and Apache. The only service directly accessible to guests is Apache. Apache has mod_wsgi and Django, because I can write quickly using Django and my needs are low. It also potentially has the FreeRADIUS mod, but there seem to be some caveats with this. Firewall rules are handled on the router with iptables. Webmin (or maybe a custom Django app) provides abstracted control over any features that the staff may need to access. Python, if you haven't guessed, is the language I feel most comfortable in, and I use it for almost everything.

    And finally: has this been done? Is it an overly massive project not worth taking on for one guy? And/or are there tools I'm missing that could make my life easier? For the record, I am fairly good with Python but not very familiar with many other languages (I can struggle through PHP; it's a cosmetic issue there). I am also an avid Linux user and comfortable with config files and the command line. Thank you for your time; I look forward to reading your responses.

    Edit: My apologies if this is not a Q&A in the sense that some were expecting; I'm just looking for ideas and to make sure I'm not trying to do something that's been done. I'm looking at pfSense now as a possible start for what I need.
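    To make requirement 8 concrete, the on-demand pairing rules I have in mind look roughly like this (the interface name and addressing are placeholders; the real rules would be written by python-netfilter):

        # default: guests can reach the router/Internet but not each other
        iptables -A FORWARD -i eth2 -o eth2 -j DROP

        # inserted when rooms 101 and 102 both opt in to see each other
        iptables -I FORWARD -i eth2 -o eth2 -s 10.1.0.101 -d 10.1.0.102 -j ACCEPT
        iptables -I FORWARD -i eth2 -o eth2 -s 10.1.0.102 -d 10.1.0.101 -j ACCEPT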


  • Squid w/ SquidGuard fails w/ "Too few redirector processes are running"

    - by DKNUCKLES
    I'm trying to implement a Squid proxy in a quick and easy fashion, and I'm receiving some errors I have been unable to resolve. The box is a pre-made appliance, but it seems to fail on launch. The following is the cache.log output when I attempt to launch the squid service:

        2012/11/18 22:14:29| Starting Squid Cache version 3.0.STABLE20-20091201 for i686-pc-linux-gnu...
        2012/11/18 22:14:29| Process ID 12647
        2012/11/18 22:14:29| With 1024 file descriptors available
        2012/11/18 22:14:29| Performing DNS Tests...
        2012/11/18 22:14:29| Successful DNS name lookup tests...
        2012/11/18 22:14:29| DNS Socket created at 0.0.0.0, port 40513, FD 8
        2012/11/18 22:14:29| Adding nameserver 192.168.0.78 from /etc/resolv.conf
        2012/11/18 22:14:29| Adding nameserver 8.8.8.8 from /etc/resolv.conf
        2012/11/18 22:14:29| helperOpenServers: Starting 5/5 'bin' processes
        2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied
        2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied
        2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied
        2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied
        2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied
        2012/11/18 22:14:29| helperOpenServers: Starting 5/5 'squid-auth.pl' processes
        2012/11/18 22:14:29| User-Agent logging is disabled.
        2012/11/18 22:14:29| Referer logging is disabled.
        2012/11/18 22:14:29| Unlinkd pipe opened on FD 23
        2012/11/18 22:14:29| Swap maxSize 10240000 + 8192 KB, estimated 788322 objects
        2012/11/18 22:14:29| Target number of buckets: 39416
        2012/11/18 22:14:29| Using 65536 Store buckets
        2012/11/18 22:14:29| Max Mem size: 8192 KB
        2012/11/18 22:14:29| Max Swap size: 10240000 KB
        2012/11/18 22:14:29| Version 1 of swap file with LFS support detected...
        2012/11/18 22:14:29| Rebuilding storage in /opt/squid3/var/cache (DIRTY)
        2012/11/18 22:14:29| Using Least Load store dir selection
        2012/11/18 22:14:29| Set Current Directory to /opt/squid3/var/cache
        2012/11/18 22:14:29| Loaded Icons.
        2012/11/18 22:14:29| Accepting HTTP connections at 10.0.0.6, port 3128, FD 25.
        2012/11/18 22:14:29| Accepting ICP messages at 0.0.0.0, port 3130, FD 26.
        2012/11/18 22:14:29| HTCP Disabled.
        2012/11/18 22:14:29| Ready to serve requests.
        2012/11/18 22:14:29| Done reading /opt/squid3/var/cache swaplog (0 entries)
        2012/11/18 22:14:29| Finished rebuilding storage from disk.
        2012/11/18 22:14:29|         0 Entries scanned
        2012/11/18 22:14:29|         0 Invalid entries.
        2012/11/18 22:14:29|         0 With invalid flags.
        2012/11/18 22:14:29|         0 Objects loaded.
        2012/11/18 22:14:29|         0 Objects expired.
        2012/11/18 22:14:29|         0 Objects cancelled.
        2012/11/18 22:14:29|         0 Duplicate URLs purged.
        2012/11/18 22:14:29|         0 Swapfile clashes avoided.
        2012/11/18 22:14:29|   Took 0.02 seconds (  0.00 objects/sec).
        2012/11/18 22:14:29| Beginning Validation Procedure
        2012/11/18 22:14:29| WARNING: redirector #1 (FD 9) exited
        2012/11/18 22:14:29| WARNING: redirector #2 (FD 10) exited
        2012/11/18 22:14:29| WARNING: redirector #3 (FD 11) exited
        2012/11/18 22:14:29| WARNING: redirector #4 (FD 12) exited
        2012/11/18 22:14:29| Too few redirector processes are running
        FATAL: The redirector helpers are crashing too rapidly, need help!
        Squid Cache (Version 3.0.STABLE20-20091201): Terminated abnormally.
        CPU Usage: 0.112 seconds = 0.032 user + 0.080 sys
        Maximum Resident Size: 0 KB
        Page faults with physical i/o: 0
        Memory usage for squid via mallinfo():
                total space in arena:    2944 KB
                Ordinary blocks:         2857 KB      6 blks
                Small blocks:               0 KB      0 blks
                Holding blocks:          1772 KB      8 blks
                Free Small blocks:          0 KB
                Free Ordinary blocks:      86 KB
                Total in use:            4629 KB 157%
                Total free:                86 KB 3%

    The "Permission denied" area is where I have been focusing my attention, with no luck. The following is what I've tried:

    1. chmod'ing the /opt/squidguard/bin folder to 777
    2. Changing the user that SquidGuard runs under to root / nobody / www-data / squid3
    3. Changing ownership of the /opt/squidguard/bin folder to all names listed above, after assigning that user to run with Squid

    Any help with this would be greatly appreciated.
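    Two hedged diagnostics that may help the next person. First, the log says Squid is starting 'bin' processes, which hints that url_rewrite_program may be pointing at the /opt/squidguard/bin directory rather than at the squidGuard binary inside it - worth checking in squid.conf. Second, the real error usually shows up when the redirector is run by hand as the user Squid runs as (the username, paths, and test URL below are assumptions; the one-line input follows Squid's redirector protocol):

        # verify the redirector line in squid.conf names the binary, not the directory
        grep url_rewrite_program /opt/squid3/etc/squid.conf

        # run squidGuard as the cache_effective_user (assumed here to be 'squid3')
        su -s /bin/sh squid3 -c \
          'echo "http://example.com/ 10.0.0.99/- - GET" | /opt/squidguard/bin/squidGuard -c /opt/squidguard/squidGuard.conf'

        # the binary and every path component must be traversable by that user
        ls -ld /opt /opt/squidguard /opt/squidguard/bin /opt/squidguard/bin/squidGuard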


  • AdPrep logs show an LDAP error

    - by Omar
    What I am trying to do is transition our domain from Server 2003 Enterprise x32 to Server 2008 R2 Enterprise x64. Here is what I have done thus far (the 2003 server is a physical machine; the 2008 server is a virtual machine):

    1. Built a virtual machine with Server 2008 R2 Enterprise x64 and joined it to the domain as a domain member.
    2. On the 2003 DC, raised the Domain Functional Level and Forest Functional Level to Windows Server 2003.
    3. On the 2003 DC, navigated in the registry to HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters and verified that the Schema Version is 30.
    4. On the 2003 DC, inserted the Windows Server 2008 Enterprise x32 Edition media to copy over the adprep folder. This version is the only one that seemed to work.
    5. On the 2003 DC, opened a command prompt, went to the adprep directory, and ran adprep /forestprep, adprep /domainprep, and adprep /domainprep /gpprep.
    6. On the 2008 server, installed the Active Directory Domain Services role from Server Manager.
    7. On the 2003 DC, went back into the registry and verified that the Schema Version is now 44.

    When I go to run dcpromo on the 2008 server, I get a message that says: "To install a domain controller into this Active Directory forest, you must first prepare it using adprep /forestprep." So I went back to the 2003 DC, went through the adprep logs, and came across this:

        Adprep was unable to modify the security descriptor on object
        CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,
        CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com.
        [Status/Consequence] ADPREP was unable to merge the existing security descriptor
        with the new access control entry (ACE).
        [User Action] Check the log file ADPrep.log in the
        C:\WINDOWS\debug\adprep\logs\20100327143517 directory for more information.

        Adprep encountered an LDAP error.
        Error code: 0x20. Server extended error code: 0x208d, Server error message:
        0000208D: NameErr: DSID-031001CD, problem 2001 (NO_OBJECT), data 0, best match of:
        'CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com'

    In fact, I got three of these errors. The LDAP error is consistent across all three, but the objects on which "Adprep was unable to modify the security descriptor" differ. They are:

        CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com
        CN=DirectoryEmailReplication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com
        CN=KerberosAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com

    The credentials I am using on the 2008 server when running dcpromo are my domain account, which is a member of the Domain Admins and Enterprise Admins groups. I've tried various quick fixes that I came across through Google searches, including:

    - Disabling antivirus on the current DCs
    - Pointing DNS on the PDC to itself
    - Changing the Schema Update Allowed registry key to 1 and rerunning adprep - rerunning adprep told me that forest-wide information has already been updated
    - Disabling Windows Firewall on the Server 2008 box
    - On the 2003 DC, going to Domain Controller Security Policy > Local Policies > User Rights Assignment and adding Domain Admins to the "Enable computer and user accounts to be trusted for delegation" policy setting

    Both our PDC and BDC are Global Catalog servers - not sure if this matters or not. I ran netdom query fsmo and verified that the FSMO role holder is the current 2003 PDC. I ran dcdiag /v on the 2003 PDC and the only thing that failed was Services: the Dnscache service is stopped on the PDC. I even went as far as deleting the virtual machine and recreating it from scratch - to no avail... Help :(
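    In case someone wants to inspect the objects adprep choked on, the security descriptors can be dumped - and, if ownership turns out to be the problem, taken - with dsacls. A sketch against the first failing object; I haven't confirmed this repairs the ACE merge failure:

        :: dump the current ACL on the failing template object
        dsacls "CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com"

        :: take ownership as an Enterprise Admin, then re-run adprep /forestprep
        dsacls "CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com" /takeownership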


  • How can I switch an existing set of Subversion repositories to use ActiveDirectory?

    - by jpierson
    I have a set of private Subversion repositories on a Windows Server 2003 box which developers access via svnserve over the svn:// protocol. Currently we use the authz and passwd files for each repository to control access; however, with the growing number of repositories and developers, I'm considering switching to their credentials from Active Directory. We run an all-Microsoft shop and use IIS instead of Apache on all of our web servers, so I would prefer to continue using svnserve if possible. Besides whether it's even possible, I'm also concerned about how to migrate our repositories so that the history for the existing users maps to the correct Active Directory accounts. Keep in mind also that I'm not the network administrator and I'm not terribly familiar with Active Directory, so I'll probably have to go through some other people to get any changes made in Active Directory. What are my options?

    UPDATE 1: It appears from the SVN documentation that by using SASL I should be able to get svnserve to authenticate using Active Directory. To clarify, the answer that I'm looking for is how to go about configuring svnserve (if possible) to use Active Directory for authentication, and then how to modify an existing repository to remap existing svn users to their Active Directory domain login accounts.

    UPDATE 2: It appears that the SASL support in svnserve works off a plugin model, and the documentation only shows sasldb as an example. Looking at the Cyrus SASL library, it looks like a number of authentication "mechanisms" are supported, but I'm not sure which one is to be used for Active Directory support, nor can I find any documentation about such matters.

    UPDATE 3: OK, well it looks like in order to communicate with Active Directory I'm looking to use saslauthd instead of sasldb for the *auxprop_plugin* property. Unfortunately, it appears that according to some posts (possibly outdated and inaccurate) saslauthd does not build on Windows, and such endeavors are considered a work in progress.

    UPDATE 4: The latest post I've found on this topic makes it sound as though the proper binaries (saslgssapi.dll) are available through the MIT Kerberos library, but it sounds like the author of this post on Nabble.com is still having issues getting things working.

    UPDATE 5: It looks like from the TortoiseSVN discussions, and also this post on svn.haxx.se, that even if saslgssapi.dll or whatever necessary binaries are available and configured on the Windows server, the clients will also need the same customization in order to work with these repositories. If this is true, we will only be able to get Active Directory support from a Windows client if changes are made in these clients, such as TortoiseSVN and the CollabNet build of the client binaries, to support such authentication schemes. Although that's what these posts suggest, it contradicts what I originally assumed from other reading: that being SASL-compatible should require no changes on the client, only that the server be set up to handle the authentication mechanism. Reading a bit more carefully, section 5 of the document about Cyrus SASL in Subversion states: "1.5+ clients with Cyrus SASL support will be able to authenticate against 1.5+ servers with SASL enabled, provided at least one of the mechanisms supported by the server is also supported by the client." So clearly GSSAPI support (which I understand is required for Active Directory) must be available within both the client and the server.

    I have to say, I'm learning way more about the internals of how Subversion handles authentication than I ever wanted to, and I just want an answer about whether I can have Active Directory authentication support when using svnserve on a Windows server and accessing it from Windows clients. According to the official documentation it seems that this is possible; however, as you can see, the configuration is not trivial - if it's even possible at all.
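    For the record, the server-side wiring I've been experimenting with is the documented svnserve SASL setup, plus a Cyrus SASL application config forcing GSSAPI - the svn.conf filename and the mech_list value are my assumptions from the Cyrus docs, not something I've gotten working yet:

        # <repository>/conf/svnserve.conf
        [sasl]
        use-sasl = true
        min-encryption = 0
        max-encryption = 256

        # svn.conf in the Cyrus SASL config directory (location varies by platform;
        # on Windows it is found via the registry) -- assumed contents:
        mech_list: GSSAPI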


  • xrandr shows two displays (LVDS1), but how can I use VGA1 only?

    - by Tom Fishman
    We're running Ubuntu 11 on this hardware: a Foxconn R20-D2 barebone (Intel Atom D510, Intel NM10, Intel GMA 3150). There is no integrated display (it is a barebone box); I connected an external VGA monitor to it. However, xrandr shows two displays:

        Screen 0: minimum 320 x 200, current 1024 x 768, maximum 4096 x 4096
        LVDS1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
           1024x768       60.0*+
           800x600        60.3     56.2
           640x480        59.9
        VGA1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 519mm x 324mm
           1920x1200      60.0 +
           1600x1200      60.0
           1680x1050      60.0
           1280x1024      76.0     75.0     72.0     60.0
           1440x900       75.0     59.9
           1152x864       75.0
           1024x768       75.1     70.1     60.0*
           832x624        74.6
           800x600        72.2     75.0     60.3
           640x480        72.8     75.0     66.7     60.0
           720x400        70.1

    But I don't have two displays. How can I get rid of LVDS1 and use only VGA1? The direct result is that I'm seeing a 1024x768 resolution on my VGA display, because the OS is using "mirror" mode, which uses the lower resolution of the two. Turning off the mirror is not a solution; I want to fix it. Related logs:

        ...
        [    20.019] (II) intel(0): Creating default Display subsection in Screen section "Default Screen Section" for depth/fbbpp 24/32
        [    20.019] (==) intel(0): Depth 24, (--) framebuffer bpp 32
        [    20.019] (==) intel(0): RGB weight 888
        [    20.019] (==) intel(0): Default visual is TrueColor
        [    20.019] (II) intel(0): Integrated Graphics Chipset: Intel(R) Pineview G
        [    20.019] (--) intel(0): Chipset: "Pineview G"
        [    20.019] (**) intel(0): Relaxed fencing enabled
        [    20.019] (**) intel(0): Wait on SwapBuffers? enabled
        [    20.019] (**) intel(0): Triple buffering? enabled
        [    20.019] (**) intel(0): Framebuffer tiled
        [    20.019] (**) intel(0): Pixmaps tiled
        [    20.020] (**) intel(0): 3D buffers tiled
        [    20.020] (**) intel(0): SwapBuffers wait enabled
        [    20.020] (==) intel(0): video overlay key set to 0x101fe
        [    20.020] (II) intel(0): Output LVDS1 has no monitor section
        [    20.020] (II) intel(0): found backlight control interface /sys/class/backlight/intel_backlight
        [    20.080] (II) intel(0): Output VGA1 has no monitor section
        [    20.080] (II) intel(0): EDID for output LVDS1
        [    20.081] (II) intel(0): Not using default mode "320x240" (doublescan mode not supported)
        [    20.081] (II) intel(0): Not using default mode "400x300" (doublescan mode not supported)
        [    20.081] (II) intel(0): Not using default mode "400x300" (doublescan mode not supported)
        [    20.081] (II) intel(0): Not using default mode "512x384" (doublescan mode not supported)
        ...
        [    20.082] (II) intel(0): Not using default mode "960x600" (doublescan mode not supported)
        [    20.082] (II) intel(0): Printing probed modes for output LVDS1
        [    20.082] (II) intel(0): Modeline "1024x768"x60.0   65.00  1024 1048 1184 1344  768 771 777 806 -hsync -vsync (48.4 kHz)
        [    20.082] (II) intel(0): Modeline "800x600"x60.3   40.00  800 840 968 1056  600 601 605 628 +hsync +vsync (37.9 kHz)
        [    20.082] (II) intel(0): Modeline "800x600"x56.2   36.00  800 824 896 1024  600 601 603 625 +hsync +vsync (35.2 kHz)
        [    20.082] (II) intel(0): Modeline "640x480"x59.9   25.18  640 656 752 800  480 490 492 525 -hsync -vsync (31.5 kHz)
        [    20.149] (II) intel(0): EDID for output VGA1
        [    20.149] (II) intel(0): Manufacturer: BNQ  Model: 771b  Serial#: 6595
        [    20.149] (II) intel(0): Year: 2008  Week: 16
        [    20.149] (II) intel(0): EDID Version: 1.3
        [    20.149] (II) intel(0): Analog Display Input,  Input Voltage Level: 0.700/0.700 V
        ...
        [    20.152] (II) intel(0): Modeline "640x480"x60.0   25.20  640 656 752 800  480 490 492 525 -hsync -vsync (31.5 kHz)
        [    20.152] (II) intel(0): Modeline "720x400"x70.1   28.32  720 738 846 900  400 412 414 449 -hsync +vsync (31.5 kHz)
        [    20.152] (II) intel(0): Output LVDS1 connected
        [    20.152] (II) intel(0): Output VGA1 connected
        [    20.152] (II) intel(0): Using exact sizes for initial modes
        [    20.152] (II) intel(0): Output LVDS1 using initial mode 1024x768
        [    20.152] (II) intel(0): Output VGA1 using initial mode 1024x768
        [    20.152] (II) intel(0): Using default gamma of (1.0, 1.0, 1.0) unless otherwise stated.
        ...
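    Presumably what I want is expressible with xrandr directly - something along these lines, though it remains to be seen whether the driver will actually let the phantom LVDS1 be switched off on this board:

        # turn the phantom panel off and drive the monitor at its native mode
        xrandr --output LVDS1 --off
        xrandr --output VGA1 --mode 1920x1200

        # verify the result
        xrandr -q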


  • Troubleshooting PHP email sending?

    - by darkAsPitch
    I created a website that occasionally emails users when they register/change their password/etc. However, every other person cannot or does not receive the emails, and they tell me the messages are not even hitting their spam folders. I don't know a ton about MX records or email sending, but when I "Edit DNS Zone" for this domain in particular, there is 1 MX record listed there. How do you go about troubleshooting botched PHP mail actions?

    UPDATE: Here is my super-simple PHP mailing code:

        $subject = "Subject Here";
        $message = "Emails Message";
        $to = $verified_user_data["email_address"];
        $headers = "From: [email protected]\r\n" .
                   "Reply-To: [email protected]\r\n" .
                   "X-Mailer: PHP/" . phpversion();

        // returns true on success, false on failure
        $email_result = mail($to, $subject, $message, $headers);

    re: "are you saying that some do and some do not?" @ Jacob - Yes, basically. I send the emails containing the user's login username/password using code similar to the above, and I sell to fairly tech-savvy people. About 50% of the time, my customers claim they cannot find their welcome emails in their inbox OR in their spam box. It's as if they never arrived. I have the largest problem with Yahoo email addresses accepting my emails, or so it seems.

    re: "The MX record at your end doesn't factor in, although the SPF record (or lack of it) will. How much access and control do you have on the server itself?" @ John Gardeniers - I rent a dedicated server from Codero, running CentOS 5 with WHM + cPanel, and I have full root access to the entire thing. I don't know much about MX records and/or SPF records; I just want the PHP mail function to work. It doesn't say much about that on the PHP mail function's help page.

    re: "What are you using for the SMTP server?" @ JonLim - No idea. I use the code above when I need to fire off an email to a loyal customer, and that's it. Do I need to be worrying about SMTP servers?

    re: "Could be many, many things. Can you describe how you're sending mail in your code? i.e. are you relaying off of another mail server somewhere, using the local sendmail or postfix? Any consistency in domains that can/cannot receive email? Do you have a PTR record setup from the IP address that you're sending mail out as? What about SPF records?" @ gravyface - I just described my simple code above! I believe I have been having the most trouble with Yahoo domains; however, "independent" domains (probably running SpamAssassin), e.g. [email protected] as opposed to [email protected], seem to give a lot of trouble as well. I do not know if I have a PTR record set up for the IP address I'm sending my mail from. It's probably the same IP address that I set up my domain on, because I didn't do anything extra special. No idea about SPF records either - where can I go to create one?

    Side note: it's a crying shame what havoc the spammers have brought upon our beloved email system.
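    For anyone else landing here: an SPF record is just a TXT record in the sending domain's DNS zone, which can usually be added from the same "Edit DNS Zone" screen. A minimal sketch - the domain and IP are placeholders for the site's domain and the server's outbound address:

        ; permit mail from the web server's IP, soft-fail everything else
        example.com.    IN    TXT    "v=spf1 ip4:203.0.113.10 ~all"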


  • Exchange Server 2007 Setup

    - by AlamedaDad
    Hi, I'm working on an upgrade to Exchange 2007 and I wanted to get some advice on hardware choices. We currently have an Exchange 2003 STD server with 400 users split between 6 AD sites, housed on a single server. We need to move to a redundant, fault-tolerant system to support our users. I'm planning on installing 2 Dell 1950 servers with W2k8 Std to act as CAS and Hub servers, with NLB to abstract the actual server name from the users. There won't be an Edge system, since we have a Barracuda box already that will handle inbound/outbound spam and virus filtering. On the back end, I'm planning on 2 mailbox servers, which will be Dell 2950s with 16GB RAM, 2 dual-core or quad-core CPUs, and 6 300GB SAS drives in some RAID config. These systems will be clustered using W2k8 Enterprise clustering and running CCR in Exchange. My questions are as follows:

    1. Is 16GB enough RAM for serving that many mailboxes along with the Windows clustering and CCR?

    2. I'm trying to figure out disk layouts and I'm unsure whether to use all local disk, or some local and some SAN, via an OpenFiler iSCSI server. The SAN would be a Dell 2850 with 6 300GB SCSI drives and a PERC controller to slice as I want, with 8GB RAM.

        Option 1: 2 drives, RAID 1 - OS; 2 drives, RAID 1 - logs; 2 drives, RAID 1 - mail stores
        Option 2: 2 drives, RAID 1 - OS and logs; 4 drives, RAID 5 - mail stores and scratch space for eseutil
        Option 3: 2 drives, RAID 1 - OS; 2 drives, RAID 1 - logs; 2 drives, RAID 0 - scratch space; ~300GB iSCSI volume for mail stores
        Option 4: 2 drives, RAID 1 - OS; 4 drives, RAID 5 - scratch space; ~300GB iSCSI volume for mail stores; ~300GB iSCSI volume for logs

    3. I have 2 sockets for CPUs and need to choose between dual and quad cores. The dual cores have faster clocks but less cache, and I'm thinking older architecture. Am I better off with more cores and cache while sacrificing clock speed?

    4. I am planning on adding the new E2K7 cluster to the E2K3 server and then moving each mailbox over, all at once, then removing the old server. This seems more complicated than simply getting rid of the 2003 server, then adding the 2007 cluster and restoring the mailboxes using PowerControls or ExMerge. The migration option lets me do this on my own time, whereas a cutover means it all needs to work at once. If I go with the cutover method, how can I prebuild the servers and add them to the domain right after removing the 2003 server - or can't I? I think the answer is no, and migration is my only real option if I want to prebuild. (A hedged sketch of scripted mailbox moves appears at the end of this post.)

    5. I also need to migrate about 30GB of public folders. Is there anything special about this, other than specifying in the E2K7 install that I want older Outlook clients and PFs set up? I guess I could even keep the E2K3 server to host just the PFs?

    6. Lastly, if I have a mix of Outlook 2000, 2003 and 2007, what do I need to do to make sure they all have access to the GAL and OAB? At time of cutover we'll be at about 90% Outlook 2007, but we will have some older stuff around.

    My plan is to use Outlook Anywhere on laptops that are used outside the physical network. Are there any gotchas involved in that? I'm even thinking about using it for all Outlook clients - does anyone do that? The reason I'm considering it is that our WAN is really VPN tunnels over internet connections, so it's not a fully meshed, stable WAN. Thank you all very much for the assistance in advance, and I look forward to discussion of these points! Regards... Michael
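    On point 4, the migrate-over-time approach can reportedly be scripted from the Exchange 2007 Management Shell once the cluster is up - a sketch, with the server and database names as placeholders:

        # move a single mailbox to the new CCR-clustered database
        Move-Mailbox -Identity "jsmith" -TargetDatabase "E2K7CCR\First Storage Group\Mailbox Database"

        # or drain the old 2003 server in batches
        Get-Mailbox -Server "E2K3SRV" |
            Move-Mailbox -TargetDatabase "E2K7CCR\First Storage Group\Mailbox Database"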


  • Windows 2003 server: can't join domain

    - by phill
    I originally tried to rejoin a computer to a network, which led to a "cannot find domain" error; the username/password box doesn't even come up. Some tests I ran: I can ping the server; however, I can't ping the domain name domain1.local. nslookup can't find the domain either - it looks to the ISP's DNS instead of my own to resolve the local machines. So I go to the DNS server and run netdiag.exe, which gives me this error:

        DNS test . . . . . . . . . . . . . : Failed
            [WARNING] Cannot find a primary authoritative DNS server for the name
                'stmartinsrv.stmartin.local.'. [RCODE_SERVER_FAILURE]
                The name 'srv.domain1.local.' may not be registered in DNS.
            [WARNING] The DNS entries for this DC are not registered correctly on DNS
                server '68.94.156.1'. Please wait for 30 minutes for DNS server replication.
            [WARNING] The DNS entries for this DC are not registered correctly on DNS
                server '68.94.157.1'. Please wait for 30 minutes for DNS server replication.
            [FATAL] No DNS servers have the DNS records for this DC registered.

        Redir and Browser test . . . . . . : Passed
            List of NetBt transports currently bound to the Redir
                NetBT_Tcpip_{04BB0F6B-06AE-4D60-80C8-2A7A24C1D87B}
            The redir is bound to 1 NetBt transport.
            List of NetBt transports currently bound to the browser
                NetBT_Tcpip_{04BB0F6B-06AE-4D60-80C8-2A7A24C1D87B}
            The browser is bound to 1 NetBt transport.

    Then, running dcdiag:

        C:\Program Files\Support Tools>dcdiag

        Domain Controller Diagnosis

        Performing initial setup:
           Done gathering initial info.

        Doing initial required tests

           Testing server: Default-First-Site-Name\SRV
              Starting test: Connectivity
                 The host 1c99f63c-49ec-40db-b3d3-6265c00fbd3e._msdcs.domain1.local
                 could not be resolved to an IP address. Check the DNS server, DHCP,
                 server name, etc. Although the Guid DNS name
                 (1c99f63c-49ec-40db-b3d3-6265c00fbd3e._msdcs.domain1.local) couldn't
                 be resolved, the server name (srv.domain1.local) resolved to the IP
                 address (192.168.1.21) and was pingable. Check that the IP address
                 is registered correctly with the DNS server.
                 ......................... SRV failed test Connectivity

        Doing primary tests

           Testing server: Default-First-Site-Name\SRV
              Skipping all tests, because server SRV is not responding to directory
              service requests

           Running partition tests on : ForestDnsZones
              Starting test: CrossRefValidation
                 ......................... ForestDnsZones passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... ForestDnsZones passed test CheckSDRefDom

           Running partition tests on : DomainDnsZones
              Starting test: CrossRefValidation
                 ......................... DomainDnsZones passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... DomainDnsZones passed test CheckSDRefDom

           Running partition tests on : Schema
              Starting test: CrossRefValidation
                 ......................... Schema passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... Schema passed test CheckSDRefDom

           Running partition tests on : Configuration
              Starting test: CrossRefValidation
                 ......................... Configuration passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... Configuration passed test CheckSDRefDom

           Running partition tests on : domain1
              Starting test: CrossRefValidation
                 ......................... domain1 passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... domain1 passed test CheckSDRefDom

           Running enterprise tests on : domain1.local
              Starting test: Intersite
                 ......................... domain1.local passed test Intersite
              Starting test: FsmoCheck
                 ......................... domain1.local passed test FsmoCheck

    From previous postings, I've tried adding the domain suffix to the NIC IP properties on both the client machine and the DC server, which didn't help. Note: there is only one NIC on the server. Any ideas? Thanks in advance.
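    Given that name resolution is falling through to the ISP's DNS, the usual first step would be to point every machine - including the DC itself - only at the DC's DNS (192.168.1.21 here) in the NIC's TCP/IP settings, then re-register. A sketch of the follow-up commands:

        :: on the client and the DC, after setting the DC as the only DNS server
        ipconfig /flushdns
        ipconfig /registerdns
        netdiag /fix

        :: on the DC, restart Netlogon so the _msdcs/SRV records get rewritten
        net stop netlogon && net start netlogon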


  • ESXI guests not using available CPU resources

    - by Alan M
    I have a VMware ESXi 5.0.0 (it's a bit old, I know) host with three guest VMs on it. For reasons unknown, the guests will not use much of the available CPU resources. I have all three guests in a single pool, with the guests all configured to use the same amount of resource shares, so they're basically 33% each. The three guests are basically identically configured as far as their VM resources go. So the problem is, even when the guests are performing what should be very 'busy' activities, such as at bootup, the actual host CPU consumed is something tiny, like 33 MHz, as seen via the vSphere console's "Virtual Machines" tab when viewing properties for the pool. And of course, the performance of the guest VMs is terrible. The host has plenty of CPU to spare. I've tried tinkering with individual guest VM resource settings, cranking up the reservation, etc. No matter: the guests just refuse to make use of the abundant CPU available to them, and insist on using a sliver of the available resources. Any suggestions? Update after reading various comments below: Per the suggestions below, I did remove the guests from the resource pool; this didn't make any difference. I do understand that the guests are not going to consume resources they do not need. I have tried to do a remote perfmon on the guest which is experiencing long boot times, but I cannot connect to the guest remotely with perfmon (the guest is a w2k8r2 server). Host graphs for CPU, Mem, and Disk are basically flatlining; very little demand. The same is true for the guest stats; while the guest itself seems to be crawling, the guest resource graphing shows very little activity across CPU, Mem, and Disk. The host is a Dell PowerEdge 2900 with 2 physical CPUs and 20 GB RAM (it's a test/dev environment using surplus gear). Guest1 has: VM ver. 7, 2 vCPU, 4 GB RAM, 140 GB storage which lives on a RAID-5 array on the host. Guest2 has: VM ver. 7, 2 vCPU, 4 GB RAM, 140 GB storage which lives on a RAID-5 array on the host. Guest3 has: VM ver. 7, 1 vCPU, 2 GB RAM, 2 TB storage which lives on a RAID-5 iSCSI NAS box. Perhaps I am making a false assumption that if a guest has a demand for CPU (e.g. Windows Task Manager shows 100% CPU), the host would supply the guest with more CPU (mem, disk) on demand. Another update: After checking the stats, it would appear that the host is indeed not busy at all, and neither is the guest. I believe I have a good idea of the issue, though: a messed-up VMware Tools install. The guest has VMware Tools on it, but the host says it does not. VMware Tools refuses to uninstall, refuses to be upgraded, refuses to be recognized. While I cannot say with authority, this would appear to be something worth investigating. I do not know the origin of the guest itself, nor the specifics of the original VMware Tools install. Following various bits of googling, I did come up with a few suggestions that went nowhere. To that end, I was going to delete this question, but was prompted not to do so since so many folks answered. My suspicion right now is: the problem truly is the guest; the guest is not making a demand on the host, and as a natural result, the host is treating the guest accordingly. My final update: I am 99% certain the guest VM had something fundamentally wrong with it re VMware Tools. I created a clone of a different VM with a near-identical OS config, but a properly working install of VMware Tools. That guest runs just great, and takes up its allotment of resources when it needs to; e.g. it eats up about 850 MHz of CPU during startup, then ticks down to idle once the guest OS is stable.
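
    One quick way to cross-check a Tools mismatch like this from the host side is the stock vim-cmd utility in the ESXi shell — a minimal sketch, assuming shell access to the host is enabled (the VM ID below is illustrative; getallvms prints the real ones):

        # List registered VMs and their numeric IDs
        vim-cmd vmsvc/getallvms
        # Dump guest info for VM ID 1 and pick out the Tools state the host sees
        vim-cmd vmsvc/get.guest 1 | grep -i toolsStatus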

    Read the article

  • mod_mono 'Service Temporarily Unavailable' issue

    - by Charlie Somerville
    I've deployed an ASP.NET web application on a Linux (Debian) server running Apache 2.2 and mod_mono 1.9. It's working well; however, Mono occasionally segfaults and uses the entire CPU, which causes the website to stop working and display 'Service Temporarily Unavailable'. Killing mono fixes it, but obviously this isn't a good solution. I tailed the system log after this happened and I saw the following error messages from the kernel: Apr 20 01:49:37 charliesomerville kernel: [1596436.204158] mono[17909]: segfault at b645f671 ip b645f671 sp b4ffb604 error 4<6>mono[19047]: segfault at b645f66e ip b645f66e sp b4bf7604 error 4<6>mono[18017]: segfault at b645f66e ip b645f66e sp b52fe604 error 4<6>mono[19668]: segfault at b645f5e6 ip b645f5e6 sp b48f4604 error 4<6>mono[22565]: segfault at b645f674 ip b645f674 sp b45f1604 error 4<6>mono[17700]: segfault at b645f661 ip b645f661 sp b51fd604 error 4<6>mono[19596]: segfault at b645f5e6 ip b645f5e6 sp b49f5604 error 4 Apr 20 01:49:37 charliesomerville kernel: [1596436.208172] mono[23219]: segfault at b645f66e ip b645f66e sp b44f0604 error 4 At the end of Apache's error.log are the following errors: [Tue Apr 20 03:10:23 2010] [error] (70014)End of file found: read_data failed [Tue Apr 20 03:10:23 2010] [error] Command stream corrupted, last command was 1 [Tue Apr 20 03:10:23 2010] [error] Command stream corrupted, last command was 1 [Tue Apr 20 03:10:23 2010] [error] Command stream corrupted, last command was 1 System.ArgumentNullException: null key Parameter name: key at System.Collections.Hashtable.get_Item (System.Object key) [0x00000] at System.Runtime.Serialization.SerializationCallbacks.GetSerializationCallbacks (System.Type t) [0x00000] at System.Runtime.Serialization.ObjectManager.RaiseOnDeserializingEvent (System.Object obj) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectContent (System.IO.BinaryReader reader, System.Runtime.Serialization.Formatters.Binary.TypeMetadata metadata, Int64 objectId, System.Object& objectInstance, System.Runtime.Serialization.SerializationInfo& info) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectInstance (System.IO.BinaryReader reader, Boolean isRuntimeObject, Boolean hasTypeInfo, System.Int64& objectId, System.Object& value, System.Runtime.Serialization.SerializationInfo& info) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObject (BinaryElement element, System.IO.BinaryReader reader, System.Int64& objectId, System.Object& value, System.Runtime.Serialization.SerializationInfo& info) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadNextObject (System.IO.BinaryReader reader) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectGraph (System.IO.BinaryReader reader, Boolean readHeaders, System.Object& result, System.Runtime.Remoting.Messaging.Header[]& headers) [0x00000] at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.NoCheckDeserialize (System.IO.Stream serializationStream, System.Runtime.Remoting.Messaging.HeaderHandler handler) [0x00000] at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize (System.IO.Stream serializationStream) [0x00000] at System.Runtime.Remoting.Channels.CADSerializer.DeserializeObject (System.IO.MemoryStream mem) [0x00000] at System.Runtime.Remoting.RemotingServices.GetDomainProxy (System.AppDomain domain) [0x00000] at System.AppDomain.CreateDomain (System.String friendlyName, System.Security.Policy.Evidence securityInfo, System.AppDomainSetup info) [0x00000] at System.Web.Hosting.ApplicationHost.CreateApplicationHost (System.Type hostType, System.String virtualDir, System.String physicalDir) [0x00000] at Mono.WebServer.VPathToHost.CreateHost (Mono.WebServer.ApplicationServer server, Mono.WebServer.WebSource webSource) [0x00000] at Mono.WebServer.ApplicationServer.GetApplicationForPath (System.String vhost, Int32 port, System.String path, Boolean defaultToRoot) [0x00000] at (wrapper remoting-invoke-with-check) Mono.WebServer.ApplicationServer:GetApplicationForPath (string,int,string,bool) at Mono.WebServer.ModMonoWorker.GetOrCreateApplication (System.String vhost, Int32 port, System.String filepath, System.String virt) [0x00000] at Mono.WebServer.ModMonoWorker.InnerRun (System.Object state) [0x00000] at Mono.WebServer.ModMonoWorker.Run (System.Object state) [0x00000] [Tue Apr 20 03:10:26 2010] [error] (70014)End of file found: read_data failed [Tue Apr 20 03:10:26 2010] [error] Command stream corrupted, last command was -1 Along with the above errors, Apache's error.log is packed with hundreds (if not thousands) of the following error: Maximum number (20) of concurrent mod_mono requests to /tmp/mod_mono_dashboard_default_2.lock reached. Droping request. At the moment, I'm thinking there might be something wrong with the configuration here (it's basically running on an out-of-the-box config).
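
    If the crashes turn out to be load-related rather than a Mono bug, one mitigation worth trying is raising mod_mono's worker limits and letting it recycle the backend before it wedges — a sketch using directives from mod_mono's documented set (the auto-restart ones may need a release newer than 1.9, and all the values are illustrative, not recommendations):

        # In the Apache vhost or mod_mono config, then reload Apache
        MonoMaxActiveRequests 50       # the ceiling behind the "(20)" in the log above
        MonoMaxWaitingRequests 150     # queue the overflow instead of dropping it
        MonoAutoRestartMode Requests   # recycle mod-mono-server periodically
        MonoAutoRestartRequests 10000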

    Read the article

  • Vagrant (Virtualbox) host-only multiple node networking issue

    - by Lorin Hochstein
    I'm trying to use a multi-VM Vagrant environment as a testbed for deploying OpenStack, and I've run into a networking problem when trying to communicate from one VM to a VM-inside-of-a-VM. I have two Vagrant nodes, a cloud controller node and a compute node. I'm using host-only networking. My Vagrantfile looks like this: Vagrant::Config.run do |config| config.vm.box = "precise64" config.vm.define :controller do |controller_config| controller_config.vm.network :hostonly, "192.168.206.130" # eth1 controller_config.vm.network :hostonly, "192.168.100.130" # eth2 controller_config.vm.host_name = "controller" end config.vm.define :compute1 do |compute1_config| compute1_config.vm.network :hostonly, "192.168.206.131" # eth1 compute1_config.vm.network :hostonly, "192.168.100.131" # eth2 compute1_config.vm.host_name = "compute1" compute1_config.vm.customize ["modifyvm", :id, "--memory", 1024] end end When I try to start up a (QEMU-based) VM, it boots successfully on compute1, and its virtual nic (vnet0) is connected via a bridge, br100: root@compute1:~# brctl show 100 bridge name bridge id STP enabled interfaces br100 8000.08002798c6ef no eth2 vnet0 When the QEMU VM makes a request to the DHCP server (dnsmasq) running on controller, I can see the request reaches the controller because of the output on the syslog on the controller: Aug 6 02:34:56 precise64 dnsmasq-dhcp[12042]: DHCPDISCOVER(br100) fa:16:3e:07:98:11 Aug 6 02:34:56 precise64 dnsmasq-dhcp[12042]: DHCPOFFER(br100) 192.168.100.2 fa:16:3e:07:98:11 However, the DHCPOFFER never makes it back to the VM running on compute1. If I watch the requests using tcpdump on the vboxnet3 interface on my host machine that runs Vagrant (Mac OS X), I can see both the requests and the replies: $ sudo tcpdump -i vboxnet3 -n port 67 or port 68 tcpdump: WARNING: vboxnet3: That device doesn't support promiscuous mode (BIOCPROMISC: Operation not supported on socket) tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on vboxnet3, link-type EN10MB (Ethernet), capture size 65535 bytes 22:51:20.694040 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 22:51:20.694057 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 22:51:20.696047 IP 192.168.100.1.67 > 192.168.100.2.68: BOOTP/DHCP, Reply, length 311 22:51:23.700845 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 22:51:23.700876 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 22:51:23.701591 IP 192.168.100.1.67 > 192.168.100.2.68: BOOTP/DHCP, Reply, length 311 22:51:26.705978 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 22:51:26.705995 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 22:51:26.706527 IP 192.168.100.1.67 > 192.168.100.2.68: BOOTP/DHCP, Reply, length 311 But, if I tcpdump on eth2 on compute1, I only see the requests, not the replies: root@compute1:~# tcpdump -i eth2 -n port 67 or port 68 tcpdump: WARNING: eth2: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes 02:51:20.240672 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 02:51:23.249758 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 02:51:26.258281 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 At this point, I'm stuck. I'm not sure why the DHCP replies aren't making it to the compute node. Perhaps it has something to do with the configuration of the VirtualBox virtual switch/router? Note that the eth2 interfaces on both nodes have been set to promiscuous mode.
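
    One thing worth checking, as a sketch under the assumption that VirtualBox itself is filtering the frames: the nested guest replies to a MAC (fa:16:3e:...) that VirtualBox doesn't recognize as belonging to compute1's adapter, and a host-only NIC drops such frames unless its promiscuous-mode policy is "allow-all". The VM name and NIC number below are assumptions (NIC 3 = the second host-only adapter, i.e. eth2):

        # At runtime, on the VirtualBox VM backing compute1
        VBoxManage controlvm "compute1" nicpromisc3 allow-all
        # Or persistently, via the Vagrantfile customization:
        #   compute1_config.vm.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]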

    Read the article

  • Debian Lenny to Debian Squeeze upgrade problems

    - by Roland Soós
    Hi! Yesterday I made a dist-upgrade on my Debian Lenny server. I thought it would be as easy as a usual upgrade, but it wasn't. I got a lot of problems after the update: # apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: linux-image-2.6-amd64 : Depends: linux-image-2.6.32-5-amd64 but it is not installed E: Unmet dependencies. Try using -f. Then I tried the suggestion: # apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: libio-compress-base-perl libatk1.0-0 libts-0.0-0 libmime-types-perl libc-client2007b libgtk2.0-common libxfixes3 libgsf-1-common hicolor-icon-theme libfile-remove-perl libxcomposite1 libltdl3-dev libneon27 libmd5-perl libwmf0.2-7 libilmbase6 libatk1.0-data djvulibre-desktop libdirectfb-1.0-0 fam libxinerama1 libcroco3 libopenexr6 libgsf-1-114 libmail-box-perl libdjvulibre21 openssl-blacklist librsvg2-2 libio-compress-zlib-perl libsysfs2 libbeecrypt6 libxdamage1 libobject-realize-later-perl libuser-identity-perl libgtk2.0-bin libxi6 libxcursor1 portmap libxrandr2 libgtk2.0-0 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: linux-image-2.6.32-5-amd64 Suggested packages: linux-doc-2.6.32 The following NEW packages will be installed: linux-image-2.6.32-5-amd64 0 upgraded, 1 newly installed, 0 to remove and 121 not upgraded. 98 not fully installed or removed. Need to get 0 B/28.6 MB of archives. After this operation, 103 MB of additional disk space will be used. Do you want to continue [Y/n]? y perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "hu_HU.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár Preconfiguring packages ... (Reading database ... 37915 files and directories currently installed.) Unpacking linux-image-2.6.32-5-amd64 (from .../linux-image-2.6.32-5-amd64_2.6.32-30_amd64.deb) ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár dpkg: error processing /var/cache/apt/archives/linux-image-2.6.32-5-amd64_2.6.32-30_amd64.deb (--unpack): failed in write on buffer copy for backend dpkg-deb during `./lib/modules/2.6.32-5-amd64/kernel/sound/pci/hda/snd-hda-codec-realtek.ko': No space left on device configured to not write apport reports dpkg-deb: subprocess paste killed by signal (Broken pipe) locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár Running postrm hook script /sbin/update-grub. Searching for GRUB installation directory ... found: /boot/grub Searching for default file ... found: /boot/grub/default Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst Searching for splash image ... none found, skipping ... Found kernel: /boot/vmlinuz-2.6.26-2-amd64 Updating /boot/grub/menu.lst ... done Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.32-5-amd64 /boot/vmlinuz-2.6.32-5-amd64 Errors were encountered while processing: /var/cache/apt/archives/linux-image-2.6.32-5-amd64_2.6.32-30_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) # dpkg-reconfigure locales perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "hu_HU.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár /usr/sbin/dpkg-reconfigure: locales is broken or not fully installed Then I got stuck. Do you have any idea how I could solve this?
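
    The key line in that output is "No space left on device": dpkg ran out of room while unpacking the new kernel. A minimal sketch of the usual recovery, assuming it's the root or /boot filesystem that filled up:

        df -h / /boot               # confirm which filesystem is full
        apt-get clean               # free the cached .debs in /var/cache/apt/archives
        apt-get -f install          # retry the interrupted kernel installation
        dpkg --configure -a         # finish any half-configured packages
        dpkg-reconfigure locales    # regenerate hu_HU.UTF-8 to clear the locale warnings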

    Read the article

  • Linux bcm43224 wifi adapter slows down a couple minutes after boot

    - by Blubber
    I just installed Ubuntu on my mid-2012 MacBook Air. Everything worked out of the box, but the wifi is showing some weird behavior. When I first log in it's really fast, loading google.com is near instant, and browsing in general feels at least as smooth as it did on Mac OS. However, after a couple of minutes the connection slows down dramatically, sometimes taking over 5 s to load google.com; a simple reboot fixes the problem for another couple of minutes. Specs: Wifi: 02:00.0 Network controller: Broadcom Corporation BCM43224 802.11a/b/g/n (rev 01) Driver: open-source brcmsmac driver Kernel: Linux wega 3.8.0-21-generic #32-Ubuntu SMP Tue May 14 22:16:46 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux Distro: Ubuntu 13.04 (up to date) I tried a number of things, none of which actually helped: used the proprietary sta driver from Broadcom; installed firmware into /lib/firmware/brcms (which, as far as I can tell from the logs, does not get loaded at all); switched the router to only use 2.4 OR 5 GHz; set the router to only use a OR g OR n; set the router to use AES encryption only; turned off power management on the adapter; set the regulatory region to the correct value (NL) on both router and laptop; disabled ipv6. Nothing seems to help; the slowdown always occurs. I did notice that the latency (ping google.com) stays roughly the same (around 9 ms). Below is some more information that might be of use. $ lspci -nnk | grep -iA2 net 02:00.0 Network controller [0280]: Broadcom Corporation BCM43224 802.11a/b/g/n [14e4:4353] (rev 01) Subsystem: Apple Inc. Device [106b:00e9] Kernel driver in use: bcma-pci-bridge $ rfkill list 0: hci0: Bluetooth Soft blocked: no Hard blocked: no 1: phy0: Wireless LAN Soft blocked: no Hard blocked: no $ lsmod Module Size Used by dm_crypt 22820 1 arc4 12615 2 brcmsmac 550698 0 coretemp 13355 0 kvm_intel 132891 0 parport_pc 28152 0 kvm 443165 1 kvm_intel ppdev 17073 0 cordic 12574 1 brcmsmac brcmutil 14755 1 brcmsmac mac80211 606457 1 brcmsmac cfg80211 510937 2 brcmsmac,mac80211 bnep 18036 2 rfcomm 42641 12 joydev 17377 0 applesmc 19353 0 input_polldev 13896 1 applesmc snd_hda_codec_hdmi 36913 1 microcode 22881 0 snd_hda_codec_cirrus 23829 1 nls_iso8859_1 12713 1 uvcvideo 80847 0 btusb 22474 0 snd_hda_intel 39619 3 videobuf2_vmalloc 13056 1 uvcvideo snd_hda_codec 136453 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec_cirrus bcm5974 17347 0 bluetooth 228619 22 bnep,btusb,rfcomm snd_hwdep 13602 1 snd_hda_codec lpc_ich 17061 0 videobuf2_memops 13202 1 videobuf2_vmalloc videobuf2_core 40513 1 uvcvideo videodev 129260 2 uvcvideo,videobuf2_core bcma 41051 1 brcmsmac snd_pcm 97451 3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel snd_page_alloc 18710 2 snd_pcm,snd_hda_intel snd_seq_midi 13324 0 snd_seq_midi_event 14899 1 snd_seq_midi snd_rawmidi 30180 1 snd_seq_midi snd_seq 61554 2 snd_seq_midi_event,snd_seq_midi snd_seq_device 14497 3 snd_seq,snd_rawmidi,snd_seq_midi snd_timer 29425 2 snd_pcm,snd_seq snd 68876 16 snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device,snd_hda_codec_cirrus mei 41158 0 soundcore 12680 1 snd apple_bl 13673 0 mac_hid 13205 0 lp 17759 0 parport 46345 3 lp,ppdev,parport_pc usb_storage 57204 0 hid_apple 13237 0 hid_generic 12540 0 ghash_clmulni_intel 13259 0 aesni_intel 55399 399 aes_x86_64 17255 1 aesni_intel xts 12885 1 aesni_intel lrw 13257 1 aesni_intel gf128mul 14951 2 lrw,xts ablk_helper 13597 1 aesni_intel cryptd 20373 4 ghash_clmulni_intel,aesni_intel,ablk_helper i915 600351 3 ahci 25731 3 libahci 31364 1 ahci video 19390 1 i915
i2c_algo_bit 13413 1 i915 drm_kms_helper 49394 1 i915 usbhid 47074 0 drm 286313 4 i915,drm_kms_helper hid 101002 3 hid_generic,usbhid,hid_apple $ dmesg | egrep 'b43|bcma|brcm|[F]irm' [ 0.055025] [Firmware Bug]: ioapic 2 has no mapping iommu, interrupt remapping will be disabled [ 0.152336] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored [ 2.187681] pci_root PNP0A08:00: [Firmware Info]: MMCONFIG for domain 0000 [bus 00-99] only partially covers this bridge [ 12.553600] bcma-pci-bridge 0000:02:00.0: enabling device (0000 -> 0002) [ 12.553657] bcma: bus0: Found chip with id 0xA8D8, rev 0x01 and package 0x08 [ 12.553688] bcma: bus0: Core 0 found: ChipCommon (manuf 0x4BF, id 0x800, rev 0x22, class 0x0) [ 12.553715] bcma: bus0: Core 1 found: IEEE 802.11 (manuf 0x4BF, id 0x812, rev 0x17, class 0x0) [ 12.553764] bcma: bus0: Core 2 found: PCIe (manuf 0x4BF, id 0x820, rev 0x0F, class 0x0) [ 12.605777] bcma: bus0: Bus registered [ 12.852925] brcmsmac bcma0:0: mfg 4bf core 812 rev 23 class 0 irq 17 [ 13.085176] brcmsmac bcma0:0: brcms_ops_bss_info_changed: qos enabled: false (implement) [ 13.085186] brcmsmac bcma0:0: brcms_ops_config: change power-save mode: false (implement) [ 20.862617] brcmsmac bcma0:0: brcmsmac: brcms_ops_bss_info_changed: associated [ 20.862622] brcmsmac bcma0:0: brcms_ops_bss_info_changed: arp filtering: enabled true, count 0 (implement) [ 20.862625] brcmsmac bcma0:0: brcms_ops_bss_info_changed: qos enabled: true (implement) [ 20.897957] brcmsmac bcma0:0: brcms_ops_bss_info_changed: arp filtering: enabled true, count 1 (implement) $ iwconfig lo no wireless extensions. wlan0 IEEE 802.11abgn ESSID:"wlan" Mode:Managed Frequency:5.22 GHz Access Point: E0:46:9A:4E:63:9A Bit Rate=65 Mb/s Tx-Power=17 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=63/70 Signal level=-47 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:13 Invalid misc:56 Missed beacon:0
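
    Since power management is on the suspect list, it may be worth verifying what the interface itself reports rather than what the GUI was told — a small sketch with iw, which is stock on Ubuntu 13.04 (wlan0 is the interface name from the iwconfig output above). If the slowdown stops recurring with power_save forced off, that points at the brcmsmac PS implementation rather than the router:

        iw dev wlan0 get power_save          # see whether PS mode is actually off
        sudo iw dev wlan0 set power_save off # force it off for this session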

    Read the article

  • Connecting tomcat6 to apache2

    - by StudentKen
    Disclaimer: not a server admin. I've been scratching my head over this for weeks now (not consistently mind you, as that would be maddening). I've been trying to connect my apache2 server to my tomcat server to the point where if someone encounters *.jsp or any servlet in navigating my web directory, it's handed over to tomcat. I have both Apache2.0 (port 9099) and Tomcat6 (9089) running on Debian lenny on the same box. Currently, mod_jk is enabled with mod_jk.conf in $apacheHOME/mods-enabled/ with content: # Where to find workers.properties JkWorkersFile /etc/apache2/workers.properties # Where to put jk shared memory JkShmFile /var/log/at_jk/mod_jk.shm # Where to put jk logs JkLogFile /var/log/at_jk/mod_jk.log # Set the jk log level [debug/error/info] JkLogLevel info # Select the timestamp log format JkLogStampFormat "[%a %b %d %H:%M:%S %Y] " # Send servlet for context /examples to worker named worker1 JkMount /*/servlet/* worker1 # Send JSPs for context /examples to worker named worker1 JkMount /*.jsp worker1 my workers.properties located in $apacheHOME/ with content: workers.tomcat_home=/var/lib/tomcat6 workers.java_home=/usr/lib/jdk1.6.0_23/db/ worker.list=worker1 ps=/ worker.worker1.port=9071 worker.worker1.host=localhost worker.worker1.type=ajp13 my web.xml in $tomcatHOME/conf has the following servlets enabled: <servlet> <servlet-name>default</servlet-name> <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class> <init-param> <param-name>debug</param-name> <param-value>0</param-value> </init-param> <init-param> <param-name>listings</param-name> <param-value>false</param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> <servlet> <servlet-name>jsp</servlet-name> <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class> <init-param> <param-name>fork</param-name> <param-value>false</param-value> </init-param> <init-param> <param-name>xpoweredBy</param-name> <param-value>false</param-value> </init-param> <load-on-startup>3</load-on-startup> </servlet> <servlet-mapping> <servlet-name>jsp</servlet-name> <url-pattern>*.jsp</url-pattern> </servlet-mapping> <session-config> <session-timeout>30</session-timeout> </session-config> From what I can tell, there's no funny business, as the apache2, tomcat, and mod_jk logs all show green; yet whenever I navigate to a jsp, it simply displays the JSP/JavaScript source as plain text instead of executing the page. I'm unsure what the problem is exactly despite poring over the logs and documentation for aid. I'm quite a greenhorn in the servlet world.
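
    One mismatch worth ruling out, as a sketch: workers.properties points worker1 at port 9071, which only works if Tomcat's AJP connector actually listens there (8009 is Tomcat's stock AJP port; the path below assumes the Debian tomcat6 package — adjust to $tomcatHOME/conf/server.xml otherwise):

        # Find the AJP connector Tomcat is really using
        grep -n "AJP/1.3" /etc/tomcat6/server.xml
        # The stock entry looks like: <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
        # Then make worker.worker1.port in /etc/apache2/workers.properties match that port,
        # and restart both Tomcat and Apache.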

    Read the article

  • How to recover data files from xampp-windows to xampp-linux after crash?

    - by David Buehler
    My Windows box died after I developed a database in xampp on it; fortunately I have a backup of the entire F:/TestWeb/Xampp partition. Unfortunately, I did not do an Export (nor dump) of the "Lws2" database before the crash. I have replaced the defunct machine with one running Mint7 (based on Ubuntu 9.04 "Jaunty Jackalope") and installed xampp-linux into the /opt partition, so the new xampp now runs fine in /opt/lampp, and says all the elements are secured by passwords (which I just assigned during this installation.) I assumed that xampp-windows installed in November would migrate easily to xampp-linux installed in February -- a bad assumption. It apparently would have been simple if I had known enough to do an Export or a Dump before the crash, but.... The backup was done to a Network Attached Storage drive, which is formatted as "vfat" so the backup does not carry with it any valid ownership permissions from MySql on NTFS. I now see from my backup that the old data resided in \TestWeb\Xampp\Mysql\Data\Lws2\ and consists of 7 ".frm" files which define my tables. The actual data -- I suppose a ".sql" file or files -- has disappeared, and I am resigning myself to two days of retyping it. But I do not wish to do the table layouts all over again. So I copied the Data tree to /opt/lampp/Data -- PhpMyAdmin does not see it. So I copied the Lws2 tree to /opt/lampp/Lws2 -- PhpMyAdmin does not see it. So I copied the Data tree to /opt/lampp/var/mysql/Data -- PhpMyAdmin does not see it. So I copied the Lws2 tree to /opt/lampp/var/mysql/Lws2 -- PhpMyAdmin does not see it. So I adjusted all the permissions to stop saying owner "nobody" to owner "root" and gave full permissions to all groups and to all others, with permissions percolating down, in all 4 trees. You guessed it -- PhpMyAdmin does not see any database named Lws2, only its 4 default ones. I double-checked the permissions and rebooted Linux and repeated the tests. At some point in that process I did see PhpMyAdmin showing "lws2(7)" but when I clicked on it I saw a "no table found" message. I have not been able to recreate that experience. Apparently there are some setup files for MySql and for PhpMyAdmin which need to be set up by running a wizard or two or by editing the files directly. I grepped the TestWeb tree and found an old "ldir = "C:TestWeb\Xampp\MySql\" and a "DataDir = C:TestWeb\Xampp\MySql\" in a .php file and in a .bat file, but I cannot find the corresponding config file names on the /opt partition -- so it looks as if these wizards have not been run to create them. What config files does Linux use to set up MySql for PhpMyAdmin? What wizards do I need to run to point the MySql engine and the PhpMyAdmin at the folder /opt/lampp/data/ with its lws2 folder inside it? Or which files do I need to edit, with a sample of what it normally says under Linux? Incidentally, I remember I converted from MyISAM with its .MYD and .MYI files to InnoDB after entering only a small amount of the data -- and I do not know what file types to look for -- perhaps my data is still there but under another guise or in another place? Is it something as simple as Linux needing to see "/data/" instead of /Data? I will check that out while waiting for a response. If anyone can point me to documentation that discusses this level of detail -- I will read it avidly! In any case, thanks for any clarification you can give on this thorny problem. wizdum
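
    On that last point: with InnoDB (unless innodb_file_per_table was enabled) the row data lives in the shared ibdata1 file in the Data directory, not in the per-database folder — the .frm files only hold the table definitions. A sketch of a restore under a stock /opt/lampp install, assuming the backup still contains ibdata1 and the ib_logfile* files (the backup mount point below is illustrative):

        sudo /opt/lampp/lampp stopmysql
        sudo cp -r /media/backup/TestWeb/Xampp/Mysql/Data/Lws2 /opt/lampp/var/mysql/
        sudo cp /media/backup/TestWeb/Xampp/Mysql/Data/ibdata1 /opt/lampp/var/mysql/
        sudo cp /media/backup/TestWeb/Xampp/Mysql/Data/ib_logfile* /opt/lampp/var/mysql/
        sudo chown -R mysql:mysql /opt/lampp/var/mysql
        sudo /opt/lampp/lampp startmysql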

    Read the article

  • Cannot connect to MySQL Server on RHEL 5.7

    - by Jeffrey Wong
    I have a standard MySQL Server running on Red Hat 5.7. I have edited /etc/my.cnf to specify the bind address as my server's public IP address. [mysqld] datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock user=mysql # Default to using old password format for compatibility with mysql 3.x # clients (those using the mysqlclient10 compatibility package). old_passwords=1 # Disabling symbolic-links is recommended to prevent assorted security risks ; # to do so, uncomment this line: # symbolic-links=0 [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid bind-address=171.67.88.25 port=3306 And I have also updated my firewall: sudo /sbin/iptables -A INPUT -i eth0 -p tcp --destination-port 3306 -j ACCEPT /sbin/service iptables save The network administrator has already opened port 3306 for this box. When connecting from a remote computer (running Ubuntu 10.10, server is running RHEL 5.7), I issue mysql -u jeffrey -p --host=171.67.88.25 --port=3306 --socket=/var/lib/mysql/mysql.sock but receive an ERROR 2003 (HY000): Can't connect to MySQL server on '171.67.88.25' (113). I've noticed that the socket file /var/lib/mysql/mysql.sock is blank. Should this be the case? UPDATE: The result of netstat -an | grep 3306: tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN Result of sudo netstat -tulpen Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name tcp 0 0 127.0.0.1:2208 0.0.0.0:* LISTEN 0 7602 3168/hpiod tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 27 7827 3298/mysqld tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 0 5110 2802/portmap tcp 0 0 0.0.0.0:8787 0.0.0.0:* LISTEN 0 8431 3326/rserver tcp 0 0 0.0.0.0:915 0.0.0.0:* LISTEN 0 5312 2853/rpc.statd tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 0 7655 3188/sshd tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 0 7688 3199/cupsd tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 0 8025 3362/sendmail: acce tcp 0 0 127.0.0.1:2207 0.0.0.0:* LISTEN 0 7620 3173/python udp 0 0 0.0.0.0:909 0.0.0.0:* 0 5300 2853/rpc.statd udp 0 0 0.0.0.0:912 0.0.0.0:* 0 5309 2853/rpc.statd udp 0 0 0.0.0.0:68 0.0.0.0:* 0 4800 2598/dhclient udp 0 0 0.0.0.0:36177 0.0.0.0:* 70 8314 3476/avahi-daemon: udp 0 0 0.0.0.0:5353 0.0.0.0:* 70 8313 3476/avahi-daemon: udp 0 0 0.0.0.0:111 0.0.0.0:* 0 5109 2802/portmap udp 0 0 0.0.0.0:631 0.0.0.0:* 0 7691 3199/cupsd Result of sudo /sbin/iptables -L -v -n Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 6373 2110K RH-Firewall-1-INPUT all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 RH-Firewall-1-INPUT all -- * * 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT 1241 packets, 932K bytes) pkts bytes target prot opt in out source destination Chain RH-Firewall-1-INPUT (2 references) pkts bytes target prot opt in out source destination 572 861K ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 1 28 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 icmp type 255 0 0 ACCEPT esp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT ah -- * * 0.0.0.0/0 0.0.0.0/0 46 6457 ACCEPT udp -- * * 0.0.0.0/0 224.0.0.251 udp dpt:5353 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:631 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:631 782 157K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2 120 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:443 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:23 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80 4970 1086K REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited Result of nmap -P0 -p3306 171.67.88.25 Host is up (0.027s latency). PORT STATE SERVICE 3306/tcp filtered mysql Nmap done: 1 IP address (1 host up) scanned in 0.09 seconds Solution: When everything else fails, go GUI! system-config-securitylevel and add port 3306. All done!
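
    For reference, a sketch of the CLI equivalent of that GUI fix: in the listing above, every inbound packet jumps to RH-Firewall-1-INPUT, whose final rule REJECTs everything, so a rule appended to INPUT never matches. Inserting ahead of the REJECT does:

        sudo /sbin/iptables -I RH-Firewall-1-INPUT -p tcp -m state --state NEW --dport 3306 -j ACCEPT
        sudo /sbin/service iptables save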

    Read the article

  • How to troubleshoot problem with OpenVPN Appliance Server not able to connect

    - by Peter
    1) I have a Windows Server 2008 Standard SP2 2) I am running Hyper-V and have the OpenVPN Appliance Server virtual machine running 3) I have configured it as instructed; the only issue was that the legacy network adapter does not have the setting the instructions mention, "Enable spoofing of MAC Addresses". My understanding is that before R2, this was on by default. 4) Server is running, web interfaces look good 5) I am trying to connect from a Vista 64 box and cannot 5a) If I set it to UDP, I am stuck at Authorizing and the client log looks like: 10/11/09 15:00:42: INFO: OvpnConfig: connect... 10/11/09 15:00:42: INFO: Gui listen socket at 34567 10/11/09 15:00:42: INFO: sending start command to instantiator... 10/11/09 15:00:42: INFO: start 34567 ?C:\Users\Peter\AppData\Roaming\OpenVPNTech\config?02369512D0C82A04B88093022DA0226202218022A902264022AE022B? 10/11/09 15:00:42: INFO: Got line from MI->>INFO:OpenVPN Management Interface Version 1 -- type 'help' for more info 10/11/09 15:00:42: INFO: Got line from MI->>HOLD:Waiting for hold release 10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: real-time state notification set to ON 10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: bytecount interval changed 10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold flag set to OFF 10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold release succeeded 10/11/09 15:00:43: INFO: Got line from MI->>PASSWORD:Need 'Auth' username/password 10/11/09 15:00:43: INFO: Processing PASSWORD. 10/11/09 15:00:43: INFO: OvpnClient: setting need auth to true. 10/11/09 15:00:43: INFO: OvpnConfig: Setting need auth to true. 10/11/09 15:00:43: INFO: Got auth request from active_config from 0 10/11/09 15:00:47: INFO: Sending Credentials.... 10/11/09 15:00:47: INFO: Sending 25 bytes for username. 10/11/09 15:00:47: INFO: Sent 25 bytes for username. 10/11/09 15:00:47: INFO: Sending 30 bytes for password. 10/11/09 15:00:47: INFO: Sent 30 bytes for password. 10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' username entered, but not yet verified 10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' password entered, but not yet verified 10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287647,WAIT,,, 10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:0,42 10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:54,42 10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287648,AUTH,,, 10/11/09 15:00:50: INFO: Got line from MI->>BYTECOUNT:2560,2868 10/11/09 15:00:52: INFO: Got line from MI->>BYTECOUNT:2560,3378 5b) If I set up the server for TCP and try to connect, I get a loop of authorizing and reconnecting. Log looks like: 10/11/09 15:00:42: INFO: Got line from MI->>HOLD:Waiting for hold release 10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: real-time state notification set to ON 10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: bytecount interval changed 10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold flag set to OFF 10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold release succeeded 10/11/09 15:00:43: INFO: Got line from MI->>PASSWORD:Need 'Auth' username/password 10/11/09 15:00:43: INFO: Processing PASSWORD. 10/11/09 15:00:43: INFO: OvpnClient: setting need auth to true. 10/11/09 15:00:43: INFO: OvpnConfig: Setting need auth to true. 10/11/09 15:00:43: INFO: Got auth request from active_config from 0 10/11/09 15:00:47: INFO: Sending Credentials.... 10/11/09 15:00:47: INFO: Sending 25 bytes for username. 10/11/09 15:00:47: INFO: Sent 25 bytes for username. 10/11/09 15:00:47: INFO: Sending 30 bytes for password. 10/11/09 15:00:47: INFO: Sent 30 bytes for password. 10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' username entered, but not yet verified 10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' password entered, but not yet verified 10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287647,WAIT,,, 10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:0,42 10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:54,42 10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287648,AUTH,,, 10/11/09 15:00:50: INFO: Got line from MI->>BYTECOUNT:2560,2868 10/11/09 15:00:52: INFO: Got line from MI->>BYTECOUNT:2560,3378 10/11/09 15:00:54: INFO: Got line from MI->>BYTECOUNT:2560,3888 ... Is there any way to turn on robust logging on the server to understand what is happening? Any ideas on how to hunt this down?
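
    On the logging question, a minimal sketch using standard OpenVPN server directives (the config file path on the appliance is an assumption — it may live elsewhere or be managed through the web UI):

        # In the server config, e.g. /etc/openvpn/server.conf
        verb 5                              # more verbose than the sample-config default of 3
        log-append /var/log/openvpn.log     # keep a persistent server-side log
        # then restart the OpenVPN service and watch the log while a client retries:
        #   tail -f /var/log/openvpn.log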

    Read the article

  • How to save POST&GET headers of a web page with "Wireshark"?

    - by brilliant
    Hello everybody, I've been trying to find Python code that would log in to my mail box on yahoo.com from "Google App Engine". I was given this code: import urllib, urllib2, cookielib url = "https://login.yahoo.com/config/login?" form_data = {'login' : 'my-login-here', 'passwd' : 'my-password-here'} jar = cookielib.CookieJar() opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar)) form_data = urllib.urlencode(form_data) # data returned from this pages contains redirection resp = opener.open(url, form_data) # yahoo redirects to http://my.yahoo.com, so lets go there instead resp = opener.open('http://mail.yahoo.com') print resp.read() The author of this script looked at the HTML source of the Yahoo! log-in form and came up with this script. That log-in form contains two fields, one for the user's Yahoo! ID and another for the user's password. However, when I tried this code out (substituting my real Yahoo login for 'my-login-here' and my real password for 'my-password-here'), it just returned the log-in form back to me, which means that something didn't work right. Another responder suggested that I should send an MD5 hash of my password, rather than a plain password. He also noted that in that log-in form there are a lot of other hidden fields besides the login and password fields (he called them "CSRF protections") that I would also have to deal with: <input type="hidden" name=".tries" value="1"> <input type="hidden" name=".src" value="ym"> <input type="hidden" name=".md5" value=""> <input type="hidden" name=".hash" value=""> <input type="hidden" name=".js" value=""> <input type="hidden" name=".last" value=""> <input type="hidden" name="promo" value=""> <input type="hidden" name=".intl" value="us"> <input type="hidden" name=".bypass" value=""> <input type="hidden" name=".partner" value=""> <input type="hidden" name=".u" value="bd5tdpd5rf2pg"> <input type="hidden" name=".v" value="0"> <input type="hidden" name=".challenge" value="5qUiIPGVFzRZ2BHhvtdGXoehfiOj"> <input type="hidden" name=".yplus" value=""> <input type="hidden" name=".emailCode" value=""> <input type="hidden" name="pkg" value=""> <input type="hidden" name="stepid" value=""> <input type="hidden" name=".ev" value=""> <input type="hidden" name="hasMsgr" value="0"> <input type="hidden" name=".chkP" value="Y"> <input type="hidden" name=".done" value="http://mail.yahoo.com"> He said that I should do the following: Simulate a normal login and save the login page that I get; Save POST&GET headers with "Wireshark"; Compare the login page with those headers and see what fields I need to include with my request. I really don't know how to carry out the first two of these three steps. I have just downloaded "Wireshark" and have tried capturing some packets there. However, I don't know how to "simulate normal login and save the login page". Also, I don't know how to save POST&GET headers with "Wireshark". Can anyone, please, guide me through these two steps in "Wireshark"? Or at least tell me what I should start with. Thank you.
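
    For step 2, a sketch using tshark (Wireshark's command-line twin; era-appropriate flags — newer releases spell the display filter -Y instead of -R). One caveat: the form posts to https://login.yahoo.com, and Wireshark cannot see inside TLS, so only plain-HTTP requests will show their headers; the hidden-field values themselves have to come from the saved login page:

        # Log method, host and URI of every HTTP request while you reproduce the login
        tshark -i eth0 -R "http.request" -T fields -e http.request.method -e http.host -e http.request.uri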

    Read the article
