Search Results

Search found 18092 results on 724 pages for 'matt long'.


  • Reconfiguring PHP with OpenSSL Extension on CentOS

    - by Evan
    Hi guys - long-time browser, first-time poster! I have a CentOS dedicated server running just fine. I'm trying to reconfigure PHP to include the OpenSSL extension so I can use some of the YouTube APIs. I installed OpenSSL with yum, so it's in place on the server; I'm just having trouble getting PHP to use it as an extension. I got the latest PHP tarball, untarred it, set my configure string (./configure) with the proper parameter for OpenSSL (--with-openssl=/usr), and it checked out just fine. I ran make, then make install, and this is where I get hung up: after it writes the PEAR config file, the install seems to quit. I'm not sure, but it seems like there is a LOT more that should be happening. Here is a screenshot: http://www.evanfell.com/screencaps/6iamks.png Restarting Apache shows no change to the PHP running on the server. Is there a PEAR issue killing the install process, or is there another issue? Thanks in advance. Happy to clarify and provide more info.
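
    For reference, here is a minimal sketch of the rebuild sequence being described, assuming the PHP source is already untarred; the version directory is an example, not taken from the post.

        # Rebuild PHP with OpenSSL support (version/paths are examples)
        yum install -y openssl-devel        # headers needed by --with-openssl
        cd php-5.2.x
        ./configure --with-openssl=/usr
        make
        make install
        php -m | grep -i openssl            # verify the extension is compiled in
        service httpd restart               # reload Apache so it picks up the new PHP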

  • Is there any program (or code, any language) that will mute all of the microphones on my computer?

    - by Sean
    Is there any program (or code, in any language) that will mute all of the microphones on my computer? If it is code, please make it as simple as possible; the only language I know is C#, and I am still VERY new to it. I just want to set up some way to mute my microphones from a hotkey/shortcut, and if I can find a program that can do that, I will be set. As I said, I can also do a little bit if it is in C#, but the only code I have seen for this was miles long (at least to me). My goal is a program that opens up, toggles the mute on the microphones (all of the system audio input), then closes. That is it, very simple. Thank you to anyone who tries to help me! EDIT: Yes, I am using Windows: Windows 7, 32-bit. I already know that I can go into the volume mixer and do it that way, but I need to do this while running a full-screen application, and it is a hassle to have to exit full screen, open the volume mixer, click the mute icon, then go back into full screen, and do the whole thing over again to unmute. I will be toggling back and forth quite often, so it takes a lot to do that so much.
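
    Since the question asks for the simplest possible C#, here is a short sketch using the third-party NAudio library (available via NuGet); this is an assumption about one workable approach, not a known-good program from the post.

        // Toggles mute on every active capture (microphone) endpoint, then exits.
        // Requires the NAudio package (assumed dependency).
        using NAudio.CoreAudioApi;

        class MuteMics
        {
            static void Main()
            {
                var enumerator = new MMDeviceEnumerator();
                foreach (var device in enumerator.EnumerateAudioEndPoints(
                             DataFlow.Capture, DeviceState.Active))
                {
                    // Flip the endpoint's mute state (all system audio input)
                    device.AudioEndpointVolume.Mute = !device.AudioEndpointVolume.Mute;
                }
            }
        }

    Bound to a desktop shortcut with a hotkey, something like this would toggle the microphones without leaving a full-screen application.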

  • Why isn't the Low Fragmentation Heap (LFH) enabled by default on Windows Server 2003?

    - by James Wiseman
    I've been investigating an issue with a production Classic ASP website running on IIS6 which seems indicative of memory fragmentation. One suggestion for how to ameliorate this came from Stack Overflow: How can I find why some classic asp pages randomly take a real long time to execute?. It suggested flipping a setting in the site's global.asa file to turn on the Low Fragmentation Heap (LFH). The following code (with a registered version of the accompanying DLL) did the trick:

        Set LFHObj = CreateObject("TURNONLFH.ObjTurnOnLFH")
        LFHObj.TurnOnLFH()
        application("TurnOnLFHResult") = CStr(LFHObj.TurnOnLFHResult)

    (Really, the code isn't that important to the question.) An author of a linked post reported a seemingly magic resolution to this issue and, reading around a little more, I discovered that this setting is enabled by default on Windows Server 2008. So, naturally, this left me a little concerned:

    1. Why is this setting not enabled by default on 2003, or
    2. If it works in 2008, why have Microsoft not issued a patch to enable it by default on 2003?

    I suspect the answer to both is the same (if there is one). Obviously, we're testing it in a non-production environment and running an array of metrics and comparisons to determine whether it actually helps us. But aside from this, I'm really just trying to understand whether there's any technical reason why we should do this, or whether there are any gotchas we need to be aware of.

  • Best way to integrate applications into a Windows 7 install.wim image

    - by cyph3r
    Right now I have unmodified .isos of the Windows 7 32-bit and 64-bit installation disks, and I need to integrate some applications (Office, Adobe Reader, etc.) and Windows updates into them, so that when Windows is installed, those applications/updates are already installed and working. Requirements:

    1. My output has to be an install.wim image containing the new/improved Windows installation files, because deployment is done via a PXE server and a custom Windows PE environment.
    2. The procedure to create the install.wim has to be as automatic as possible. I can't create it manually every time I want to incorporate a new Windows or application update into the image.
    3. The image will be installed on 100+ computers, so it needs to be 'generic'.

    I've never done something like this before, but from what I've searched, a possible solution would be:

    1. Create a reference installation (preferably on a VM so I can take snapshots), complete with its applications/updates/settings.
    2. After the complete setup, take a snapshot of the installation.
    3. Run C:\Windows\System32\sysprep\sysprep.exe /oobe /generalize /shutdown to sysprep the machine.
    4. Boot into a Windows PE environment and capture the .wim image using gimagex.
    5. Deploy the .wim and enjoy the rapid installation times. :D

    Does that sound OK? Would you recommend anything else? Right now the applications are installed after the installation of Windows is complete, so the total installation time is quite long. That's why I need a different approach.
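
    One note on the update side: Windows updates (unlike applications) can be slipstreamed into install.wim offline with DISM, which helps keep the refresh procedure automatable. A sketch, assuming the Windows AIK tooling; the paths and KB number are examples:

        rem Mount the image, inject an update package, commit the changes
        dism /Mount-Wim /WimFile:C:\images\install.wim /Index:1 /MountDir:C:\mount
        dism /Image:C:\mount /Add-Package /PackagePath:C:\updates\KB123456.msu
        dism /Unmount-Wim /MountDir:C:\mount /Commit

    Applications still need the reference-machine/sysprep/capture route described above, since they cannot be installed into an offline image.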

  • How to publish internal data to the internet - as simple as possible

    - by mlarsen
    I asked this at Stack Overflow, but I would like your opinion too, as it has as much to do with administration as it does with coding. We have a .NET two-tier application where a desktop program talks to a database. We support MS SQL Server 2000, 2005, and 2008 and Oracle 9, 10, and 11. The application is sold, not as shrink-wrap, but pretty close. It is quite important for us that installation and configuration be as easy as possible, as installation instructions are usually supplied in written form to the customer's internal IT department. Our application is usually not seen as mission-critical by the IT department, so we need to keep their work down to a minimum. Now we are starting to get requests for a web application built on top of the same data. The web application will be hosted by us and delivered as a SaaS application. The challenge is how to move data back and forth between the web application and the customer's internal database. As I see it, we have some requirements:

    1. We must be ready to handle the situation where the customer's database is not accessible from the DMZ. I guess the easiest solution is that all communication is initiated from inside the customer's LAN.
    2. As little firewall configuration as possible. Best would be if we can run without any special configuration, as long as outgoing traffic from the customer's LAN is not blocked. If we need something changed in the firewall, we must be able to document that the change is secure.
    3. It doesn't have to be real time. Moving data in batches every ten minutes or so is OK.
    4. Data moves both ways, but not in the same tables, so we don't have to support merges.
    5. It would be nice if we don't have to roll our own framework completely.

    Looking forward to hearing your suggestions.
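
    To make requirement 1 concrete, here is a minimal sketch of a LAN-initiated batch exchange over plain outbound HTTPS, run from cron every ten minutes; the endpoint URLs and the export/import helpers are hypothetical placeholders, not part of the original question.

        #!/bin/sh
        # Push locally changed rows out, then pull the inbound batch.
        # export_changes / import_changes stand in for whatever dumps and
        # applies the relevant tables (bcp, sqlcmd, a small .NET tool, ...).
        export_changes > /tmp/outgoing.csv
        curl --fail -F "batch=@/tmp/outgoing.csv" https://sync.example-saas.com/push
        curl --fail https://sync.example-saas.com/pull > /tmp/incoming.csv
        import_changes /tmp/incoming.csv

    Because both transfers are initiated from inside the LAN, only ordinary outgoing HTTPS is needed, which satisfies the firewall requirement.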

  • Generating new SID for Windows 7 cloned partition in Linux?

    - by Jack
    So I've read that the proper way to clone a Windows 7 partition is to run sysprep after the clone is complete. For MANY reasons, this is not possible the way we are cloning these drives (long story short: the drive should be fully up and running after we clone it, with all the settings already there and requiring no user intervention; and no, not even an answer file would work, because the way we customize all the Win7 settings is complex and we do not want the user touching the settings). I understand Microsoft will not support Windows 7 clones that are not sysprepped, and that is fine for us. Acronis recovery tools get around this by ticking an option called "Create new NT signature", which resets the SID and GUID on any restore. Symantec has a tool called Ghostwalker which does the same thing. However, we are looking for a way to do this in Linux, because we want to use open-source tools to do the imaging (fsarchiver, partclone, etc. - basically the same tools Clonezilla uses internally to clone NTFS partitions). The question is: if we clone using these tools in Linux, how would we generate a new SID afterwards (without the use of sysprep)? Is there any way to do it within a Linux environment? The whole imaging process is automated, so if it is a simple command that I can just throw in my shell script, that would be even better. Of course, it would be nice to know if this is even possible. Any ideas? EDIT: Forgot to mention that the target machines we are restoring the image onto are EXACTLY the same.
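
    For context, the Linux imaging side being described looks roughly like this with partclone; device and image names are examples. This covers only the clone itself - the post-clone SID change is exactly the part still in question.

        # Capture the NTFS partition to an image file, then restore it on a target
        partclone.ntfs -c -s /dev/sda2 -o /images/win7.img
        partclone.ntfs -r -s /images/win7.img -o /dev/sda2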

  • Is there a way to change the string format for an existing CSR "Country Code" field from UTF8 to Printable String?

    - by Mike B
    CentOS 5.x.

    The short version: is there a way to change the encoding of an existing CSR's "Country Code" field from UTF8String to PrintableString?

    The long version: I've got a CSR generated by a product using the standard Java security providers (JSSE/JCE). Some of the information in the CSR uses UTF8Strings (which I understand is the preferred encoding requirement as of December 31, 2003 - RFC 3280). The certificate authority I'm submitting the CSR to explicitly requires the Country Code to be specified as a PrintableString; my CSR has it as a UTF8String. I went back to the latest RFC - http://www.ietf.org/rfc/rfc5280.txt. It seems to conflict with itself specifically on countryName. Here's where it gets a little messy: the countryName is part of the relative DN. The relative DN is defined to be of type DirectoryString, which is defined as a choice of teletexString, printableString, universalString, utf8String, or bmpString. It also more specifically defines countryName as being either alpha (upper bound 2 bytes) or numeric (upper bound 3 bytes). Furthermore, in the appendix, it refers to X520countryName, which is limited to a PrintableString of size 2. So, it is clear why it doesn't work: it appears that the certificate authority and Sun/Java do not agree in their interpretation of the requirements for countryName. Is there anything I can do to modify the CSR to be compatible with the CA?
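
    If OpenSSL is available, the encodings are easy to confirm, and regenerating the request (when the private key is at hand) sidesteps the Java behaviour, since OpenSSL's string_mask setting in openssl.cnf controls which string types it emits. A sketch; the filenames and subject are examples:

        # Show the per-field ASN.1 types in the existing CSR
        # (look for UTF8STRING vs PRINTABLESTRING next to the C= field)
        openssl asn1parse -in request.csr

        # Regenerate with the same key; with string_mask = nombstr (or the
        # default), countryName should come out as a PrintableString
        openssl req -new -key server.key -out new.csr -subj "/C=US/CN=example.com"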

  • Intermittent 5.7.1 email bounce to Exchange 2007

    - by Steve Kennaird
    My knowledge of Exchange isn't particularly great, so excuse me if some of the terminology I use isn't quite right. I'm primarily a web developer who's now responsible for a small business's network. We have a server running SBS 2008 and Exchange 2007. Generally, everything works well; emails can be sent to both internal and external domains without issue. We've only got ~20 users, and Exchange is sitting on a single server. I use SendGrid to send emails generated by our externally hosted website to users in the office. Primarily, order notifications are sent to [email protected]. With no pattern, and less than once per week on average, an email to [email protected] will bounce back, and the logs on SendGrid detail the following error:

        550 5.7.1 Unable to relay for [email protected]

    Either side of that failed delivery attempt, I'm able to send and receive emails to/from [email protected]. Having done some research, incorrect reverse DNS seems like it could be a cause of intermittent bounces like this. Using nslookup, I have found that the reverse DNS doesn't map like it should, e.g.:

        Office IP:   135.325.351.123 (made-up IP, for example only)
        Domain:      office.somedomain.com (made up, for example only)
        Reverse DNS: somedomain.gotadsl.co.uk (half made up)

    Could this be a cause? I'm sure that the IP address and the domain should map to each other. Also, it has been suggested to me that since the Exchange server is on a network with an ADSL connection, that could be a potential cause, as the connection "goes up and down all day long". I don't have an opinion on this, as I don't have enough knowledge of Exchange/ADSL to form a reliable one. Can anyone offer any insight as to whether one or both of these are actually potential causes, or whether there is another possible cause?
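
    As a quick check, forward and reverse DNS can be compared from any machine with dig; the hostname below is the example one from above, and the address is a documentation IP, since the question's real values are made up:

        dig +short office.somedomain.com      # forward: name -> IP
        dig +short -x 203.0.113.10            # reverse: IP -> PTR name

    If the PTR answer is the ISP's generic name rather than the mail host's own name, some receiving servers will treat the host with suspicion, though that more often affects outbound mail than inbound relaying.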

  • How to determine the Kerberos realm from an LDAP directory?

    - by tstm
    I have two Kerberos realms I can authenticate against. One of them I control, and the other is external from my point of view. I also have an internal user database in LDAP. Let's say the realms are INTERNAL.COM and EXTERNAL.COM. In LDAP I have user entries like this:

        1054 uid=testuser,ou=People,dc=tml,dc=hut,dc=fi
        shadowFlag: 0
        shadowMin: -1
        loginShell: /bin/bash
        shadowInactive: -1
        displayName: User Test
        objectClass: top
        objectClass: account
        objectClass: posixAccount
        objectClass: shadowAccount
        objectClass: person
        objectClass: organizationalPerson
        objectClass: inetOrgPerson
        uidNumber: 1059
        shadowWarning: 14
        uid: testuser
        shadowMax: 99999
        gidNumber: 1024
        gecos: User Test
        sn: Test
        homeDirectory: /home/testuser
        mail: [email protected]
        givenName: User
        shadowLastChange: 15504
        shadowExpire: 15522
        cn: User.Test
        userPassword: {SASL}[email protected]

    What I would like to do, somehow, is specify on a per-user basis which authentication server/realm the user is authenticated against. Configuring Kerberos to handle multiple realms is easy. But how do I configure other instances, like PAM, to handle the fact that some users are from INTERNAL.COM and some from EXTERNAL.COM? There needs to be an LDAP lookup of some kind where the realm and the authentication name are fetched, followed by the actual authentication itself. Is there a standardized way to add this information to LDAP, or to look it up? Are there some other workarounds for a multi-realm user base? I might be OK with a single-realm solution, too, as long as I can specify the username/realm combination for each user separately.
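
    One observation: in the entry above, the realm information already lives in the {SASL} userPassword mapping (conventionally {SASL}principal@REALM, with the realm after the '@'; the actual value here is redacted). Anything doing the lookup can fetch it per user; a sketch, with the LDAP host as an example:

        ldapsearch -x -H ldap://ldap.example.com \
            -b "ou=People,dc=tml,dc=hut,dc=fi" "(uid=testuser)" userPassword

    With pass-through SASL authentication, the LDAP server hands such binds to saslauthd, which can then perform the Kerberos authentication against whichever realm the stored principal names, so PAM can simply bind against LDAP.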

  • Ubuntu purple splash screen with blinking pixels?

    - by joxnas
    I had Ubuntu 9.10 and upgraded to 10.04 after solving some problems (a freeze at boot). Since then, I don't get the Ubuntu logo showing up when I boot, but a purple screen with some blinking pixels. I didn't care much about it, but today my computer took too long at that screen (normally it was just 1/4 of a second, but today it was like a minute), and it happened 4 or 5 times in a row (only on the 5th time did I realise that it was not freezing, just taking more time). After a reboot, it is again 1/4 of a second of purple screen, but I don't want this problem to return, so I want to get rid of the purple screen (I think it is an indicator of the problem). I have already installed the graphics drivers (going to System > Administration > Hardware Drivers), but that didn't solve anything. (I don't know if it is even related.) I searched Google and found something old (2006) that I think may have some relation to my problems - http://ubuntuforums.org/archive/index.php/t-294692.html - but I couldn't understand the conversation (I'm a Linux novice). Sorry for my horrible English; I would appreciate any help!

    My hardware: ATI Mobility Radeon HD 4650; Core 2 Duo P7450, 2.13 GHz.

  • Dell PS/2 Keyboard Stops Working Randomly After Power Disturbance

    - by Kenneth Murphy
    I have a Dell PS/2 keyboard connected to a desktop PC running Slackware 12.2 and Windows XP. After a recent, brief power outage/disturbance at my home, the keyboard has begun to quit working at random times. It has stopped at POST, but not with a keyboard error: I have to press the F1 key to continue booting, and at times the keyboard has already stopped working by then. Other times, the keyboard will work perfectly for a long time (a day or more) before it finally quits. It has stopped at boot, in Windows XP, and in Slackware. The LED lights continue to work regardless. I have tried another PS/2 keyboard, and it seems to be immune to this problem. The USB mouse always works. Does anyone have any ideas about how this might have happened? If it is related to the power disturbance that cut power to the running PC, is it feasible that it fried only the keyboard itself (which still works sometimes) and not the PS/2 port or anything else? I have experienced no other problems since the event.

  • Domain changes required for SSL integration

    - by user131003
    Currently my site supports regular payment options (the user is taken to the payment gateway's (PG) website). Now I'm trying to implement "seamless" PG integration, and I need SSL for this. I have a dedicated server with 5 static IPs from HostGator (HG). Options:

    1. I get SSL for www.my_domain.com. According to HG, I need to change the IP of the main site, as the current IP is not really dedicated (it is being shared by cPanel etc.), so they need to bind another dedicated IP to the main domain for SSL to work. This would require a DNS change for the main website and hence cause a few hours' downtime (which is OK).
    2. I've noticed that most e-commerce websites use subdomains like secure.my_domain.com for SSL/HTTPS. This sounds like a better approach, but I've got a few doubts in this case:
       a) Would I need to re-register with the existing PGs (PayPal, Google Checkout, Authorize.Net) if I switch to a subdomain? Re-registering is not an option for me.
       b) Would a DNS change be required for www.my_domain.com in this case? This confusion arose because of the following reply from HG: "If the sub domain secure.my_domain.com is added to an existing cPanel it will use the IP for that cPanel so as long as it is a Dedicated IP that will be fine. If secure.my_domain.com gets setup as its own cPanel it will need to be assigned to a Dedicated IP which would have a DNS change involved."

    Please suggest.

  • Can current backflow from a powered hub's adapter cause PC damage?

    - by SuperUserMan
    Keeping this short: can current flow from a powered USB hub's power adapter (sitting 10 meters away) back into the computer via the USB port and damage computer components like the motherboard? What should my concerns be?

    Using a 2 A, 5 V power adapter to power a 10 m active-repeater USB extension cable with a 4-port hub, plugged into the PC's front port, causes:

    - the PC chassis fan to keep running (though slower than its regular speed),
    - the front-chassis HDD and power LEDs to turn on (though a bit dim),
    - and maybe other things I can't detect at chip level on the motherboard.

    All this even after the PC is shut down (a bit scary).

    More detail (in case you still want to read): to run 4 high-power WiFi adapters (needing 450 mA each) far away from the PC, I bought an active-repeater USB extension cable with 4 ports and a power port at the far end: http://www.ebay.com/itm/33FT-USB-2-0-Male-to-Female-Extension-Cable-Hub-Splitter-Adapter-with-4-USB-Port-/390846115254 - then added a locally bought 2 A, 240 V AC to 5 V DC power adapter, plugged into the USB hub that is part of (and situated at the far end of) the 10-meter cable. All 4 WiFi adapters appear to run fine with this setup, but the running chassis fan and the dimly lit power and HDD LEDs even when the PC is switched off are a bit scary, and surely mean that 5 V and some current are flowing all the way through that 10-meter extension cable into my USB port and powering things. Can this cause damage, and what should my concerns be? Of course, I can't switch off the power adapter (sitting 10 meters away from the PC) every time I switch off my PC to prevent this.

  • Multimaster Keepalived Configuration (Virtual IP with Load Balancing)

    - by Rad Akefirad
    Here are the requirements:

    1. High availability
    2. Load balancing

    First configuration:

    1. Two Linux servers have been configured with one static IP each: 10.17.243.11 and 10.17.243.12.
    2. Keepalived has been installed and configured with one VRRP instance to provide one virtual IP (10.17.243.10 as the VIP, 10.17.243.11 as master, and 10.17.243.12 as backup).
    3. Everything works fine. The VIP is assigned to the master server (10.17.243.11) as long as it is up and running; as soon as it goes down, the VIP is assigned to the backup server (10.17.243.12).
    4. The problem here is that all communication goes to the master server.

    Second configuration:

    1. I found an active-active configuration for Keepalived, which is possible by defining more than one VRRP instance, so that both servers have two IPs (real 10.17.243.11 and virtual 10.17.243.10 for server #1; real 10.17.243.12 and virtual 10.17.243.20 for server #2).
    2. Everything works fine: we have two VIPs which are both accessible (HA). But all communication coming to each VIP still goes to one single machine (either server #1 or #2, depending on the VIP). I found some tricks on the DNS side to overcome this limitation, but they are not acceptable in our case.

    Question: is there any way to have one virtual IP which is assigned to both servers, in the sense that both servers handle some part of the workload (like what we do in web-server load balancing), using either Keepalived or some other tool? Thanks in advance.
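
    For reference, the active-active layout in the second configuration corresponds to a keepalived.conf along these lines on server #1, with server #2 mirroring it with the MASTER/BACKUP roles swapped; the interface name is an example:

        vrrp_instance VI_1 {            # this server masters the first VIP
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 150
            virtual_ipaddress {
                10.17.243.10
            }
        }
        vrrp_instance VI_2 {            # ...and backs up the second VIP
            state BACKUP
            interface eth0
            virtual_router_id 52
            priority 100
            virtual_ipaddress {
                10.17.243.20
            }
        }

    For a single VIP whose traffic is spread across both real servers, Keepalived's usual answer is its IPVS/virtual_server integration (a load-balancing layer in front of the real servers) rather than plain VRRP.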

  • How can I get Windows 7 to work with two Nvidia graphics cards with different drivers?

    - by Max
    This is similar to this question, but I am using more similar cards with Windows 7. I just purchased a Zotac Nvidia GeForce 7200 GS. I have a motherboard with two PCI Express x16 slots, and there is already an MSI Nvidia GeForce 8800 GTS being used as the primary card, driving two LCD monitors. I would like the Zotac to output to a TV via DVI-out. Unfortunately, when Windows detects the Zotac and installs its drivers (or I install them manually), Windows stops being able to boot. If I remove them and reinstall the MSI 8800 drivers, I can boot again, but Windows can no longer see the Zotac 7200: it shows up as a yellow triangle in Device Manager. I've read conflicting reports about this. Some people claim that Windows 7 will support multiple heterogeneous graphics-card drivers as long as they all use the same driver API ("WDDM?"). Others say they have to use the exact same driver, or it won't work. Others claim that you have to use the exact same card. Which is it, exactly? I know I can run the MSI 8800 in SLI if I purchase another, but I don't need that kind of power; I just need HD-out to my television. I read somewhere that running two cards in SLI precludes you from using 100% of their output ports, so I'm not sure that's an option. I suppose I could also run two MSI 8800s without SLI, but again, that's more power than I need (and more money than I'd like to spend); also, I don't think this exact model is even manufactured anymore. Any ideas?

  • Security implications of adding www-data to /etc/sudoers to run php-cgi as a different user

    - by BMiner
    What I really want to do is allow the www-data user the ability to launch php-cgi as another user; I just want to make sure that I fully understand the security implications. The server should support a shared-hosting environment where various (possibly untrusted) users have chrooted FTP access to the server to store their HTML and PHP files. Since PHP scripts can be malicious and read/write others' files, I'd like to ensure that each user's PHP scripts run with that user's permissions (instead of running as www-data). Long story short, I have added the following line to my /etc/sudoers file, and I wanted to run it past the community as a sanity check:

        www-data ALL = (%www-data) NOPASSWD: /usr/bin/php-cgi

    This line should only allow www-data to run a command like this (without a password prompt):

        sudo -u some_user /usr/bin/php-cgi

    ...where some_user is a user in the group www-data. What are the security implications of this? It would then allow me to modify my Lighttpd configuration like this:

        fastcgi.server += ( ".php" =>
            ((
                "bin-path" => "sudo -u some_user /usr/bin/php-cgi",
                "socket" => "/tmp/php.socket",
                "max-procs" => 1,
                "bin-environment" => (
                    "PHP_FCGI_CHILDREN" => "4",
                    "PHP_FCGI_MAX_REQUESTS" => "10000"
                ),
                "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
                "broken-scriptfilename" => "enable"
            ))
        )

    ...allowing me to spawn new FastCGI server instances for each user.

  • Cannot establish XMPP server-to-server connection with gmail

    - by v_2e
    My Jabber server fails to connect to gmail.com, giving the error:

        outgoing s2s stream myserver.com.ua->bot.talk.google.com closed: undefined-condition (myserver.com.ua is a Google Apps Domain with Talk service enabled.)

    I am using the Prosody XMPP server. It works just fine with the other Jabber servers I have tested so far (e.g. jabber.ru). However, when one of my clients tries to add a Gmail contact to his contact list, the subscription request lasts forever, and Prosody gives the following sequence of messages in its log:

        Oct 21 22:57:16 s2sout95897f8 info Beginning new connection attempt to gmail.com ([173.194.70.125]:5269)
        Oct 21 22:57:16 s2sout95897f8 info sent dialback key on outgoing s2s stream
        Oct 21 22:57:16 s2sout95897f8 info Session closed by remote with error: undefined-condition (myserver.com.ua is a Google Apps Domain with Talk service enabled.)
        Oct 21 22:57:16 s2sout95897f8 info outgoing s2s stream myserver.com.ua->gmail.com closed: undefined-condition (myserver.com.ua is a Google Apps Domain with Talk service enabled.)
        Oct 21 22:57:16 s2sout95897f8 info sending error replies for 2 queued stanzas because of failed outgoing connection to gmail.com

    Here I use myserver.com.ua for the domain name of my server. I found a similar problem described in this thread, but there is no detailed description of the solution there. As for the Google services, I did have a Google account where I added the domain name in question on the Webmaster Tools page, but I deleted that account long ago, so it is now unclear how any Google service could relate to my domain name. So my question is: what is the real cause of this problem (my Jabber server's configuration, the phantom Google account, or something else), and how can I make my Prosody server connect to the gmail.com Jabber service?
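
    Since that error is Google's side rejecting the dialback, one quick thing to verify from the Prosody host is the XMPP server-to-server SRV records both domains publish, as mismatched or missing records are a common s2s culprit:

        dig +short SRV _xmpp-server._tcp.myserver.com.ua
        dig +short SRV _xmpp-server._tcp.gmail.com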

  • Has anyone had luck running 802.1x over ethernet using the stock Windows or other free supplicant?

    - by maxxpower
    I just wanted to see if anyone else has had luck implementing 802.1x over Ethernet. Here's my basic setup: the switch sends out 3 EAPOL messages spaced 5 seconds apart; if there's no response, the machine gets put on a guest VLAN with restricted access. If the machine is properly configured, it authenticates and is placed into a secure VLAN. About 10% of my Windows XP users are getting self-assigned 169.254.x.x addresses. I've used the Odyssey Access Client, and it worked without a hitch. I'm using the setting to automatically use the user's Windows login to authenticate, but since it's working on 90% of the machines, I don't think that's the issue. Checking the logs on the DC, it seems that the problem machines are trying to authenticate with computer credentials even though they are configured not to. I'm running Juniper switches with IAS for RADIUS, configured for PEAP and MSCHAPv2. Macs and Linux boxes seem to have no issues authenticating. One last thing to add: unplugging the Ethernet cable and plugging it back in usually resolves the issue, but I'd hardly call that acceptable for production. Kinda long-winded and specific for a discussion, but I just want to see if anyone else has had similar issues or experiences, or if anyone knows of a free XP supplicant that actually works with 802.1x over Ethernet.

  • Looking for comprehensive computer monitoring software

    - by cornjuliox
    Summer in this country is insanely hot. So hot, in fact, that I think we just lost a machine to overheating (the last recorded CPU/GPU temp was close to 100 C; now it won't start and lets out a long series of short beeps on power-up), but that's not my concern here. Since heat is such a problem for me, I use several different pieces of software to monitor temperatures in my machine: MSI Afterburner to monitor GPU temp, control GPU fan speed, and do some light overclocking, and then Speccy and SpeedFan to monitor the rest of the system, CPU temp and everything. The setup works fine for now, but I want to consolidate all this into one program so I'm not juggling several windows at once. Is there any program out there that will let me monitor the following from a single window?

    - CPU temp and fan speed
    - CPU clock
    - GPU temp and fan speed
    - GPU clock, both core and memory
    - Additionally, motherboard temp (Speccy lists both CPU and motherboard temp; I assume the latter refers to the north- and southbridge temperatures)

    I'm also looking for the ability to chart these data points on a graph over time, basically to see just how high the temperatures spike under load and for general analysis. It'd be nice if it could handle overclocking of both CPU and GPU in real time too. Any suggestions? Edit: I forgot to mention that I'm on Windows 7, 64-bit.

  • Periodic internet connection drops

    - by sterlingholt
    My setup is a DSL modem and a D-Link DI-524M router. I'm also using a Witopia VPN, which runs through OpenVPN. I've been having trouble with the internet connection dropping very frequently. It comes back shortly, without even a router/modem/computer restart. This happens as frequently as every ten minutes; occasionally (not often) it will last as long as an hour or two without dropping. When it drops, I can get it back almost immediately by clicking Reconnect in the OpenVPN GUI and letting that do its thing. It's worth noting that I'm in China, which makes calling support a bit difficult. Also, I don't fully understand all of the router's software, although I've got it generally figured out. I've tried a bunch of things to diagnose and/or fix the problem, with no success from any of the following:

    - I've power-cycled both the modem and the router.
    - I've tried an Ethernet connection to the router.
    - I've connected without the VPN.
    - I've disabled IEEE authentication on all connections.
    - I've checked for viruses.
    - I've tried lifting the router off the ground so as to prevent overheating.

  • Extremely Weird Computer Problem

    - by waiwai933
    Well, having worked with computers a long time, I thought I had seen everything that could go wrong with one. Today, I learned that I was wrong. My Mac has been behaving extremely weirdly. The keyboard is fine, but the mouse exhibits extremely odd behavior:

    - It will open apps from the Dock, but not open documents from a Stack.
    - The top menu bar works, but only after I restart Finder.
    - I am unable to close most windows, except with Cmd+Q.
    - Clicking on a link in Safari, for some reason, opens it in a new tab.
    - I cannot select checkboxes, but I know a click is going through, because you can see a quick indent in the checkbox before it reverts to the unchecked state.

    I've restarted my computer as well. I don't see a virus when looking through top, but I suppose it could be subverting top. Does anyone have an idea what is going on?

  • Apache multiple vhost logs, stored locally and sent to remote logstash

    - by benbradley
    I'm investigating centralised logging, and it seems there are so many different ways it can be done. I don't want to run Logstash as a log "sender", preferring to keep the web servers as lean and simple as possible. So that means using either syslog, syslog-ng, or the one I'm testing now, rsyslog. But I would like to have separate vhost log files on the web server, in addition to these logs being sent to a remote log collector. I've tested rsyslog using the imfile module to watch the Apache log files, but this means I have to hard-code each vhost log file into my rsyslog.conf - not ideal, as people will invariably forget when they add/remove sites on the server. The reason I'm using rsyslog's imfile is that Apache doesn't appear to let you log to file and to syslog at the same time, and I want to keep vhost-specific log files on the web server. So how can I do this? Is there a way to have rsyslog produce local log files and forward the logs to a remote collector? I am prepared to change my Apache config to log to a single access/error log for all vhosts, as long as vhost-specific log files are produced somewhere on the web server machine. I just don't want to lose any logging info if the remote log collector can't be contacted for any reason. Any comments/suggestions? Cheers, B
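
    One workaround often suggested for the "file and syslog" limitation is an Apache piped log that tees into both, so rsyslog only ever sees syslog input while the local per-vhost file still exists. A sketch for one vhost, assuming piped-log commands are run through the shell; the path and syslog tag are examples:

        CustomLog "|/usr/bin/tee -a /var/log/apache2/site1-access.log | /usr/bin/logger -t apache-site1 -p local6.info" combined

    rsyslog can then forward local6.* to the collector (queueing locally if the collector is unreachable), while the tee keeps the vhost file on the web server.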

  • Has anyone else experienced page fault crashes with Snow Leopard on MacBook Pro?

    - by BruceMartin
    I bought a MacBook Pro on September 3rd from MacMall. As I was using it to learn Snow Leopard (this is my first Mac; I am a long-time Windows developer), it would crash every one or two hours. After calling Apple support, I dropped it off at the Apple store for diagnostic testing and repair. When I picked up the computer, Apple told me that it did not crash while they had it. They suspected a software problem, so they had done a fresh install of Snow Leopard for me. At home I went through the startup procedure with the newly installed Snow Leopard, then downloaded the iPhone SDK, and the computer crashed again while I was away waiting for the download to finish. I was using a USB mouse, which was the only device attached; no other software was installed. I was presented with a dump that mentions terms like "panic", "kernel trap", and "page fault". Does anyone have any idea what this problem might be? I really cannot use this MacBook under these circumstances.

  • Use of backreferences in fail2ban filters possible?

    - by Izzy
    From time to time, I see collections of suspect "File not found" errors in my Apache logs, basically following the pattern:

        File does not exist: /var/www/file, referer: http://my.server.com/file

    In human terms: the file was not found, though it referenced itself. A clear hacking attempt, as that's hardly possible (and the REQUEST_URIs often enough suggest the same). In my eyes, a clear case for fail2ban - if I could get backreferences to work here:

        failregex = ^%(_apache_error_client)s File does not exist: /var/www(.+), referer: http://.+\1$

    (Justin Case: the above examples assume the DIRECTORY_ROOT of the web server is /var/www.) I googled for hours and searched the fail2ban wiki up and down, but nowhere could I find a statement concerning backreferences in its filters. Are they not supported, or did I do it the wrong way? Any hints on how to make it work (apart from "dirty hacks" like first sending the request to another fake URL using mod_rewrite and then catching on that - if anyone is interested, I can elaborate on that approach in an answer - or doing something similar using mod_security)?

    As an entire log line was requested:

        [Fri Nov 08 14:57:28 2013] [error] [client 50.67.234.213] File does not exist: /var/www/text/files.htm++++++++++++++++++++++++++Result:+using+proxy+27.34.142.47:9090;+no+post+sending+forms+are+found;, referer: http://www.myserver.com/text/files.htm++++++++++++++++++++++++++Result:+using+proxy+27.34.142.47:9090;+no+post+sending+forms+are+found;

    (Sorry, logs were just switched, so this long candidate was the only one left currently; minor adjustments were made for privacy reasons.)
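
    As a testing note: fail2ban failregexes are Python 're' patterns, so a \1 backreference is at least syntactically legal, and the bundled fail2ban-regex tool will show whether the line actually matches before the filter goes live. A sketch, with the filter filename as an example:

        fail2ban-regex /var/log/apache2/error.log /etc/fail2ban/filter.d/apache-fnf.local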

  • mount error 5 = Input/output error

    - by alharaka
    I am running out of ideas. After a long period of testing this morning, I cannot seem to get this to work, and I have no idea why. I want to mount a Windows SMB/CIFS share from a Debian 5.0.4 VM, and it is not cooperating. This is the command I am using:

        debianvm:/home/me# whoami
        root
        debianvm:/home/me# smbclient --version
        Version 3.2.5
        debianvm:/home/me# mount -t cifs //hostname.domain.tld/share /mnt/hostname.domain.tld/share --verbose -o user=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD/username
        mount.cifs kernel mount options: unc=//hostname.domain.tld\share,ip=10.212.15.53,domain=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD,ver=1,rw,user=username,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,pass=*********
        mount error 5 = Input/output error
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
        debianvm:/home/me#

    The word on the nets has not been very specific, and unfortunately it is almost always environment-specific. I receive no authentication errors. I have tried mount -t smbfs and mount -t cifs, along with smbmount and such, and I get the same error each time. I doubt it is a problem with DNS resolution, because logging shows the correct IP address. dmesg | tail -f no longer shows authentication errors once I format the domain and username accordingly. I have played a little with iocharset=utf8, file_mode, and dir_mode as described here; that did not help either. I have also tried ntlm and ntlmv2, assuming it might be a minimum-auth-method problem, but even without forcing sec=ntlmv2 it now authenticates without errors. smbclient -L hostname.domain.tld -W SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD -U username correctly lists all the shares and shows the following:

        Domain=[SUBADDOMAIN] OS=[Windows 5.0] Server=[Windows 2000 LAN Manager]

            Sharename       Type      Comment
            ---------       ----      -------
            IPC$            IPC       Remote IPC
            ETC$            Disk      Remote Administration
            C$              Disk      Remote Administration
            Share           Disk

        Connection to hostname.domain.tld failed (Error NT_STATUS_CONNECTION_REFUSED)
        NetBIOS over TCP disabled -- no workgroup available

    I find those last lines intriguing/alarming. Does anyone have any pointers? Maybe I misread the effin manual.
