Search Results

Search found 9186 results on 368 pages for 'sort'.

  • Juniper SSG 5 VPN

    - by Ethabelle
    I have a host who set up our Juniper SSG 5 VPN with firmware version 6.2.0r5.0. I've been trying to set up VPN on it using this guide: http://kb.juniper.net/InfoCenter/index?page=content&id=KB4094 Summary of steps: create user (give them L2TP auth ability), create group, place user in group, create VPN gateway, create VPN, create IP pool, change default L2TP settings, create Untrust-to-Trust policy. I've followed the steps, but on my Mac, whenever I try to connect using L2TP over IPSec I get the following error: "The L2TP-VPN server did not respond. Try reconnecting. If the problem continues, verify your settings and contact your Administrator." I looked in my firewall's logs, but I don't even see anything under Reports > Logs > Events. I'm... obviously missing something, I just don't know what I'm missing at this point. I'm just starting networking and this is sort of Step 101, and I'm getting annoyed and just want to throw up OpenVPN, but I've read that has problems with Juniper firewalls. Hooray.
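
    If it helps while you wait for answers, a hedged debugging sketch from the ScreenOS side (CLI command names quoted from memory - verify against the 6.2 docs): capture the IKE negotiation in the debug buffer while the Mac attempts to connect.

        ssg5-> clear dbuf          # empty the debug buffer
        ssg5-> debug ike all       # log IKE negotiation detail
        (attempt the L2TP-over-IPSec connection from the Mac)
        ssg5-> get dbuf stream     # dump what was captured
        ssg5-> undebug all         # switch debugging back off

    If nothing appears in the buffer at all, the Mac's traffic is probably never reaching the box, which points at NAT or an upstream filter rather than your VPN config.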

  • Unable to access network resources through VPN

    - by fbueckert
    I'm currently attempting to connect one of our computers in the office to a client VPN. My development machine is running Windows 7 and can connect and see resources just fine. The problem computer is running Windows XP. They're both within the same network. Using the same credentials on both computers, the VPN connection (using the built-in Windows network connections) works just fine. So far, so good. An IP address is assigned, and comparing both machines shows they're still in the same subnet. The problem is that the XP machine cannot see ANY of the computers in the client network. I tried a tracert to a target machine on the Windows 7 box, and the first item that comes up is the .0 address; pinging it gives responses. Trying it on the Windows XP machine, however, comes up with just timeouts. Trying to trace to www.google.com allows the address to resolve (probably from a cached resolution), but results in just timeouts. I double-checked to make sure that the Windows firewall was not on, and trying to open the settings brings up a notification that the firewall service isn't running, which leads me to believe that it's definitely not on. My best guess is that I've managed to connect the XP machine to a black hole of some sort. There's obviously something strange going on, but I'm not sure where I should be looking.

  • Explorer.exe keeps crashing during log in

    - by asif
    I have got a weird problem. My Windows 7 has two user accounts (both are administrators). I can log in to one account and do all sorts of work. But whenever I try to log in to the other account, it shows a blank screen and a message box pops up with "Windows Explorer has stopped working". The options available are "Close the program" and "Check online for a solution and close the program". The problem signature is as follows:

        Problem Event Name:       InPageError
        Error Status Code:        c000009c
        Faulting Media Type:      00000003
        OS Version:               6.1.7601.2.1.0.256.1
        Locale ID:                1033
        Additional Information 1: 0a9e
        Additional Information 2: 0a9e372d3b4ad19135b953a78882e789
        Additional Information 3: 0a9e
        Additional Information 4: 0a9e372d3b4ad19135b953a78882e789

    If I press Ctrl+Alt+Del and then select Start Task Manager, it also crashes. I cannot run any program using the runas command (from the good profile) either. Task Manager and runas show the same problem signature. I read the similar question and followed all the steps, but no luck. Later, I viewed the event log and found that explorer.exe could not access a file. I checked the location, but the file is there. The actual message is: "Windows cannot access the file C:\Users\testuser\AppData\Local\Microsoft\Windows\Caches\{AFBF9F1A-8EE8-4C77-AF34-C647E37CA0D9}.1.ver0x0000000000000020.db for one of the following reasons: there is a problem with the network connection, the disk that the file is stored on, or the storage drivers installed on this computer; or the disk is missing. Windows closed the program Windows Explorer because of this error." The question is: how can I resolve this issue? Should I just delete the file or replace it with another one to stop explorer.exe from crashing? Off-topic: what is the content of this file, and why is it necessary?
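
    A hedged starting point: the status code c000009c is STATUS_DEVICE_DATA_ERROR - Windows could not read that page of the file back from disk - which usually points at bad sectors rather than a corrupt profile. A minimal sketch, run from the good account in an elevated command prompt (the path is the one from your event log):

        rem check the disk for bad sectors first; this schedules at next reboot
        chkdsk C: /r

        rem then remove the unreadable cache file - Explorer rebuilds these
        rem per-user *.db caches automatically at next logon
        del "C:\Users\testuser\AppData\Local\Microsoft\Windows\Caches\{AFBF9F1A-8EE8-4C77-AF34-C647E37CA0D9}.1.ver0x0000000000000020.db"

    Which also answers the off-topic part: the file is a per-user shell cache, and deleting it is safe because it is regenerated.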

  • Configuring ZenOSS to monitor CPU load

    - by Tom
    I'm trying to test out monitoring tools for a network at work with a coworker, but neither of us has ever used any sort of monitoring tool before. Currently we are experimenting with ZenOSS and having some difficulties. We want to populate our CPU load graphs, because that is one of the primary features we are looking for in a monitoring tool, but we have been unable to populate the graphs with data. So far we have installed the wmiperformance, sqldatasource, wmidatasource and snmpperformance (simple) ZenPacks, and the machine we are trying to monitor is running Windows XP. We have tried to model the device and everything seems to run, and we've tried to add data points to graphs, but the only options we receive for graphs are CPU and Memory. We are able to monitor services; ZenOSS recognizes the make and model of the processor, RAM and hard drive, and is even giving us metrics on available storage. But again, we are looking for performance metrics such as CPU load and memory utilization. I realize I probably didn't provide a lot of information, but that is because we don't have a very good idea of what we are doing and can't find instructions, either on the ZenOSS homepage or forums, for monitoring CPU load. If someone could give us step-by-step instructions on how to set up CPU load monitoring, that would probably be more beneficial to us than a diagnostic of our current setup. But regardless, if I left any important information out and you need it to answer the question, please let me know. Thank you.
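
    One hedged sanity check before digging further into Zenoss: confirm the XP box actually answers SNMP queries for the Host Resources MIB that the snmpperformance ZenPack graphs from. A sketch, assuming the Windows SNMP service is installed and uses the community string "public" (192.168.1.50 stands in for the XP machine):

        # per-processor load over the last minute (hrProcessorLoad)
        snmpwalk -v1 -c public 192.168.1.50 1.3.6.1.2.1.25.3.3.1.2

        # memory and storage table (hrStorage), which backs the memory graphs
        snmpwalk -v1 -c public 192.168.1.50 1.3.6.1.2.1.25.2.3

    If these return nothing, the gap is on the XP side (SNMP service not installed, or a community string mismatch with the device's zSnmpCommunity setting) rather than in Zenoss itself.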

  • How does one skip "Windows did not shut down successfully" in Win7-64?

    - by XenonofArcticus
    Migrating an app from an expensive and unreliable dedicated embedded x86 box running WinXP Embedded to COTS hardware (Dell E6410 laptop) running normal Win7-64. At this time, it's not feasible to deploy using Windows 7 Embedded. The problem is that the system is still sort of "embedded": the power could shut off at virtually any time without prior warning. We've stripped the OS down and removed the battery capability so that it will power down as desired. The app never writes to the disk, so it's not like we're going to corrupt anything terribly. The system is essentially idle after our app is up and running (with the exception of some computation, graphics, and TCP/IP and serial communications), so the OS enters a pretty stable state rather quickly. After a power loss, however, it rightly complains that Windows did not shut down successfully and presents the user with the Windows Error Recovery text screen. If left alone, it does eventually move on and boot just fine, but we'd like to skip that step if possible. WinXP Embedded is designed to do this automatically, so I know it's possible. I've looked at the kernel switches, but I didn't see anything documented for "Skip Windows Error Recovery". I've also read extensively on the startup process: http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/windows-nt-6-boot-process.html I know I can disable the auto-chkdsk in the registry, but that's not the same thing either. So, how do I streamline the boot process so it doesn't hassle the user about a situation that will be the regular, normal situation?
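
    For what it's worth, Vista-era Windows exposes a boot status policy in the BCD store that is meant for exactly this situation. A hedged sketch (run from an elevated prompt on the Win7-64 image; verify the behaviour on your build before deploying):

        rem don't show the Windows Error Recovery screen after a bad shutdown
        bcdedit /set {current} bootstatuspolicy ignoreallfailures

        rem optionally disable the recovery sequence entirely
        bcdedit /set {current} recoveryenabled No

    This changes only the boot-time prompting - it doesn't touch the auto-chkdsk behaviour you've already handled in the registry.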

  • Can't upgrade NVIDIA GeForce 310M display driver on Acer Aspire 5745PG

    - by Emerson
    I've been trying for days already to update my video driver. I have an Acer Aspire 5745PG with an "NVIDIA GeForce 310M" board, and I was trying to run the Sony Vegas video editor with Boris Continuum plugins. It happened that some of the plugins, like BCC Text Extrude, wouldn't work, showing the message "Insufficient depth resolution to run Blue". I then read somewhere that updating the display driver would do the trick. That was when my nightmares started; I've already lost a good 3 nights trying to sort this out, without success :( The display driver that was there before (and that I currently have, after restoring) was version 8.16.11.8997. The first thing I tried was downloading the 8.17.12.6619 driver directly from Acer, which was shown as the latest version on the Acer website: http://support.acer.com/product/default.aspx?modelId=2466 Running it would say "Driver Package Failure - Setup failed to read the required Display Driver to be used with this package". I then tried NVIDIA's own driver directly, of which the latest was version 296.10: http://us.download.nvidia.com/Windows/296.10/296.10-notebook-win7-winvista-64bit-international-whql.exe That gave me a similar error message :/ So after some research I found out that some people had the same issue, and they had to change the configuration file to allow the installer to recognize this NVIDIA board: http://forums.nvidia.com/index.php?showtopic=222904 That topic said to look for the "Device Instance Id" property of the "NVIDIA GeForce 310M" display adapter, which I couldn't find; instead I found the "Hardware Id", which seemed to be the right one. I followed the instructions and changed the .inf file, first for the Acer installation and afterwards for NVIDIA's own driver. It actually managed to go ahead with the installation in both instances, but the only thing I got was a black screen, while the computer still appeared to be running fine. I had to hard reset, and then it would come back with a generic VGA driver. I could only get my display back using the recovery function. I imagine thousands of these notebooks were sold; can it really be impossible to update its driver?? Could someone help me with this?? Thanks Echo

  • Five stars of open data - example and review

    - by Joe
    (there may be a more suited SE site for this question, so feel free to shift) I have some data I'd like to make open to the public - it's a synthesis of some related data retrieved from freedom of information requests over the last year. The data itself is at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.csv or, for fans of Excel, at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.xlsx . It's no more than a table with about five columns. I'd like to make this properly open data, so I was looking at the 5-star deployment scheme for Open Data. Much of it is fine, but I'm confused towards the end, and I could do with an explanation from people who know the answers. So to achieve the star levels I need to:

    1. "make your stuff available on the Web (whatever format) under an open license" - trivial; all I have to do is put up the notes on the page giving the provenance of the data.
    2. "make it available as structured data (e.g., Excel instead of image scan of a table)" - done.
    3. "use non-proprietary formats (e.g., CSV instead of Excel)" - done.
    4. "use URIs to identify things, so that people can point at your stuff" - this is where I start to get a bit hazy: does this mean there should be a URI for every line in the table?
    5. "link your data to other data to provide context" - this isn't massively clear to me: does this mean giving the provenance of the data? One column of the data I've put out is a link to where the data came from - is that the sort of thing we're looking at?

    Any and all information and answers welcome… EDIT - or if anyone wants to recommend a place, SE or other, to ask the question - that would be cool...

  • Ping and crawling not working, site still resolving

    - by Andrew Alexander
    Ok, so we're trying to figure out why the site of one of our clients isn't being crawled by Google (we've ruled out robots.txt and meta tags). When we go to the site, by either IP address or domain name, the site resolves and everything works. However, Google is getting a 302 redirect (which it apparently isn't following for crawling), and when we ping the address, it times out (note: the site is still resolving in the browser throughout all of this). The site is built in ASP.Net (I assume C#), so my thought was that it's an errant redirect rule, or some other sort of server-side issue. We also thought that it might be due to incorrect domain pointing (but if we try to ping the IP, it doesn't work, so that sorta rules that out). We're really not sure what is causing all of these errors, or even if they have one single source. Anyone have any ideas what could be going on? Do you need any more information? To boil it down in a TL;DR:

    * Site resolving in browser, both IP and domain name. No problems here.
    * Site not being crawled by Google (gets a 302 it doesn't seem to follow) - it is not due to robots.txt or meta tags.
    * Ping is not working for the IP address. This is very odd, because again, the IP address seems to work fine in the browser.
    * Our thoughts are either a redirect rule issue, a domain pointing issue, or possibly some errant code - or some combination of the three.
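
    A hedged way to see exactly what Google sees is to replay the request with explicit user agents (a sketch using curl; www.example-client.com is a placeholder for the real hostname):

        # headers only, as an ordinary client
        curl -I http://www.example-client.com/

        # the same request, identifying as Googlebot
        curl -I -A "Googlebot/2.1 (+http://www.google.com/bot.html)" http://www.example-client.com/

    If only the second request draws the 302, something (the app, a rewrite rule, or an edge device) is special-casing crawlers. Note too that a failing ping alongside a working browser session is consistent with a firewall that simply drops ICMP, which would be harmless on its own.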

  • h264 inside FLV container vs. MP4 container?

    - by Gotys
    I am developing a tube site, and currently having issues with the h264 format. Looking at YouTube, I noticed they are putting their hi-def videos into an MP4 container, so logically I did the same. Next, I installed mod_h264_streaming for lighttpd to make streaming and timeline scrubbing work. The problem is that large files (500 MB+ at somewhat high resolution) take forever to even start buffering (I read that Flowplayer and other Flash players need to download the metadata first). I moved the moov atom to the front of the file with MP4Box (I tried qt-faststart too), and the problem didn't go away. Next I read online that I need to interleave the audio tracks, so I did that too. No change in slowness. So I tried putting the same exact h264 movie into an FLV container, and the playback buffering starts almost instantly - no slowness. So what am I missing here? Why would I choose an MP4 container with the mod_h264_streaming module, which seems super-slow, over a regular FLV container with lighttpd's built-in mod_flv_streaming? Obviously many websites pick the MP4 container, but I fail to understand why. And as a side question - I tried using HTML5's VIDEO tag to play the same h264 MP4 movie, and the scrubbing is LIGHTNING FAST! I looked into lighttpd's log file, and I noticed that Flash players append video.mp4?start=234 each time the timeline is scrubbed, whereas HTML5's video tag does no such thing. Is this some sort of limitation of Flash? Why can't Flash streaming be as fast as HTML5 streaming? Thanks to ALL who can help. I very much appreciate this community.
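
    For reference, a hedged sketch of the relevant MP4Box invocation (flags from memory - check MP4Box -h on your build):

        # interleave audio/video in 500 ms chunks; interleaved writing also
        # places the moov atom at the front of the file
        MP4Box -inter 500 video.mp4

        # crude check that moov now precedes mdat near the start of the file
        xxd video.mp4 | head -40 | grep -E 'moov|mdat'

    If moov is already up front and the stall remains, the remaining suspect is mod_h264_streaming itself having to parse the very large moov of a 500 MB+ file on each request - consistent with your observation that the HTML5 player, which fetches with plain byte-range requests instead of ?start= offsets, scrubs instantly.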

  • HP Power Manager SMTP setup doesn't have space for username & password

    - by Martha
    Is there some way to configure HP Power Manager not to assume that there's an email server running locally? We recently acquired an HP T1500 G3 UPS, which we're trying to control using HP Power Manager 4.2. The main reason we wanted this particular UPS is that it says it's capable of sending notifications (of the "Yo, the power's out, you may want to look into it" type) via email, as opposed to SNMP. Turns out that's not entirely true. The server is running Windows Server 2003. It is not running an email server of any sort - we do that via two different providers: Outlook email is provided by Verizon, and our SMTP email service is provided by a small local company. When we use CDO to send auto-generated notification emails, we have to provide the SMTP server name, port, username, and password. The HP Power Manager interface only allows us to enter the server name and the username. Thus, not surprisingly, the emails never go anywhere. Help?
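
    For contrast, a hedged sketch of the four relay settings CDO needs - the last two fields being exactly what the HP Power Manager UI is missing (server and account names are placeholders):

        Set msg = CreateObject("CDO.Message")
        msg.From = "[email protected]"
        msg.To = "[email protected]"
        msg.Subject = "UPS test"
        msg.TextBody = "Test message"
        With msg.Configuration.Fields
            .Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2        ' SMTP over the network
            .Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "smtp.provider.example"
            .Item("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25
            .Item("http://schemas.microsoft.com/cdo/configuration/smtpauthenticate") = 1 ' basic auth
            .Item("http://schemas.microsoft.com/cdo/configuration/sendusername") = "username"
            .Item("http://schemas.microsoft.com/cdo/configuration/sendpassword") = "password"
            .Update
        End With
        msg.Send

    One possible workaround: stand up a local relay that accepts unauthenticated mail from 127.0.0.1 only (the IIS SMTP service bundled with Server 2003 can forward to your provider as an authenticated smart host) and point HP Power Manager at localhost.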

  • Hard Drive that was used before not detected or accessible in Windows 7

    - by Anders
    Hello SU: My PC crashed for some unknown reason, and I am still working out what caused it. However, I pulled my main (Windows) drive from my computer, hooked it up to my roommate's machine, and was able to pull the data I needed off of it (i.e. the drive is good). I then hooked his drives back up as they were - I had to turn off his machine and unplug his secondary drive to hook mine up - booted his machine, and there is no second drive available in Windows Explorer. I opened Device Manager to see if for some reason its drive letter got un-assigned, but there is nothing listed in there except his primary hard drive, his optical drive, and one other optical drive which I believe is the virtual drive Daemon Tools made. The drive shows up in the BIOS; however, after I restarted his machine again, it sits on the "Entering setup....." screen at the load window. The only thing I can think of that may have messed with stuff is that I used this tutorial to create a bootable XP install on a USB drive to install XP on my machine (I am 99% certain that the optical drive in my PC is broken), and maybe it used the other hard drive's letter for the USB drive for some reason - which doesn't make much sense, since it was recognized as a different drive letter before I started the process. It is possible that it used the secondary hard drive's letter for its work, but once again I am uncertain. Where should I go from here? He is bound to wake up within the next several hours and will probably flip a lid if I cannot get some sort of handle on this. Any and all help is greatly appreciated. PS: Anyone who helps me get this situated has a beer or two on me, as long as you are in the greater metro Detroit area, or don't mind traveling a bit!
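
    If the machine will boot again, a hedged place to look is diskpart (or Disk Management) rather than Device Manager - a disk that has merely lost its letter still shows up as a volume there. A sketch from a command prompt:

        diskpart
        DISKPART> list disk
        DISKPART> list volume
        DISKPART> select volume 2
        DISKPART> assign letter=E

    ("volume 2" and E: are placeholders - take the real numbers from the listings. "list disk" tells you whether Windows sees the drive at all; "list volume" whether its volume is present but letterless.) If list disk doesn't show the drive while the BIOS does, re-seat the data and power cables before anything else.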

  • Postfix "mail-to-script" pipe only delivers empty messages

    - by user68202
    I have a problem here. I want an incoming email to be piped to a PHP script on the system through Postfix. My system is running ISPConfig 3, Postfix and Dovecot (virtual mailbox users are saved in MySQL). I already looked into this one: How to configure postfix to pipe all incoming email to a script? ... the script is executed, but no "message" is delivered to the script. My setup so far: in ISPConfig 3 I have set up the following email route:

        Active  Server       Domain            Transport  Sort by
        Yes     example.com  pipe.example.com  piper:     5

    Excerpt from my postfix master.cf:

        piper unix - n n - - pipe
          user=piper:piper directory=/home/piper argv=php -q /home/piper/mail.php

    So far it is working great (mail sent to [email protected]); from mail.log:

        Jun 21 16:07:11 example postfix/pipe[10948]: 235CF7613E2: to=<[email protected]>, relay=piper, delay=0.04, delays=0.01/0.01/0/0.02, dsn=2.0.0, status=sent (delivered via piper service)

    ... and no errors in mail.err. The mail.php script is successfully executed (it's chmod 777 and chown'ed to piper), but it creates an empty .txt file (normally it should contain the email message):

        -rw------- 1 piper piper 0 Jun 21 16:07 mailtext_1340287631.txt

    The mail.php script I've used is the one from http://www.email2php.com/HowItWorks If I use their (commercial) service to pipe an email to the mail.php (in an Apache2 environment) through a provided "pipe-email", the message is saved successfully and completely:

        -rw-r--r-- 1 web2 client0 1959 Jun 21 16:19 mailtext_1340288377.txt

    But as you can see, I don't want to use external services. So, what's wrong here? I think it has something to do with the delivery configuration of my system...
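
    For comparison, a minimal pipe target that works with Postfix's pipe(8) transport - a sketch of my own, not the email2php code - which just reads the raw message from stdin and writes it next to the script:

        #!/usr/bin/php
        <?php
        // Postfix hands the complete message (headers + body) to the child on stdin
        $mail = file_get_contents('php://stdin');

        // write it out under a timestamped name, like the email2php example does
        file_put_contents('/home/piper/mailtext_' . time() . '.txt', $mail);

    If this sketch also yields an empty file, the message really isn't reaching stdin (suspect the argv= line); if it works, the problem is inside the email2php mail.php - it may expect a CGI/web environment rather than a CLI pipe, which would match it working under Apache2.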

  • Simplest DNS solution for remote offices

    - by dunxd
    I look after a bunch of remote offices that connect via VPN - a Cisco ASA 5505 in each office acts as firewall and VPN endpoint. Beyond that, we keep things as simple as possible in the offices to minimise the support burden. We don't have any kind of server except in offices large enough to justify having someone dedicated to IT. Basically there is the ASA, some computers, a network printer and a switch. One of the problems I am seeing in a lot of offices is that DNS requests looking up hosts inside our network often fail - I'm assuming timeouts due to the offices' internet connections (they are all in developing-world countries) having some sub-optimal qualities (e.g. high latency caused by VSAT segments, or packet loss). The obvious solution is to have some sort of local DNS service that can serve local requests - so I think it would need to do zone transfers from our Microsoft Windows 2008 R2 DNS servers at HQ. However, simply installing Windows servers in each office is both expensive and creates a support burden. This got me thinking about pfSense/m0n0wall on embedded devices - those can act as a DNS server, and could be configured at HQ and sent out as just something that needs to be plugged into the network and can then be forgotten about by the staff locally. Maybe there are some alternatives to the ASA 5505 that include some DNS functionality. Has anyone here dealt with this problem, either using some kind of embedded device, or found some other solution? Any gotchas or reasons to avoid what I have suggested?
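
    A hedged note on scope: a full secondary zone (zone transfers) may be more than these offices need - a caching forwarder often fixes exactly this timeout symptom. On pfSense the DNS forwarder is dnsmasq, and a standalone dnsmasq config would look something like this (a sketch; the domain and HQ server addresses are placeholders):

        # /etc/dnsmasq.conf
        # send queries for the internal zone to the HQ DNS servers over the VPN
        server=/corp.example.com/10.0.0.10
        server=/corp.example.com/10.0.0.11
        # everything else goes to the upstream resolvers from the WAN;
        # keep a generous cache to ride out the VSAT latency
        cache-size=10000

    The trade-off versus a true secondary is that a cold cache still pays one round-trip to HQ per name, but repeats and negative answers are then served locally.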

  • Un-install network printer drivers from Win7 64 Home Premium

    - by AkkA
    I recently bought a NAS device that has print-server functionality through USB. The printer was already installed and fully working on another Win XP box; I set that box up to see the printer over the network, and it prints fine. I tried to install the printer on my Win7 laptop (64-bit, Home Premium), but got the wrong drivers somehow, or it just refuses to work. I need to completely un-install the printer drivers and start from scratch. Removing the printer (by going to the printers folder, right click and remove) does not actually un-install the drivers; it only removes the printer from active use. Even if I try to re-install new drivers, it will load the old ones. I have read a few things on the net that say to load up a device snap-in or something of the sort into Computer Management, but this seems to be valid only for Win7 Pro or greater - the function everyone tells you to use isn't available in Home Premium. Is there anything I can use to manage device driver files in Home Premium? I want to completely remove them from the computer.
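
    Two hedged avenues that don't depend on the Print Management snap-in (both should be present on Home Premium, but verify on your build):

        rem opens Print Server Properties -> Drivers, where a driver can be
        rem removed along with its package
        printui /s /t2

        rem or via the driver store: enumerate third-party packages,
        rem then delete the offending oemNN.inf by number
        pnputil -e
        pnputil -d oem42.inf

    (oem42.inf is a placeholder - read the real number out of the pnputil -e listing.) If a driver refuses to delete because it is "in use", stop the Print Spooler service first.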

  • Mac mini simple customized, Mac mini server or other?

    - by microspino
    I'm facing a big IT choice for my little office and I need some advice. We have 5 users, 1 super user, 1 HP 500 DesignJet plotter, 4 other laser printers, and 1 HP fax/print/scan/copy machine. All the clients are XP SP3 boxes. We would like to:

    - centralize and share 90 GB of files using a Dropbox (this way we will have LAN sync of local working directories + internet backup + access to our files wherever we are)
    - centralize our plotter, printers and fax machine
    - back up all the workstations
    - share Outlook calendars and tasks
    - run 24x7 while saving some energy

    Of course this setup is just the first step towards a more serious and creative network management of our office, so we are open to new ideas. The budget varies from 400€ to 900€. We are not tech gurus, but at least one of us is a power user close to becoming a geek. I've read some articles on macminicolo about the Mac mini, either the normal one or with Snow Leopard Server. I've heard about Windows Home Server too, on the Lifehacker website, but I'm in a sort of analysis-paralysis. Can you help me?

  • Permanently deleting files on Mac OS

    - by Jonik
    A while back, as a relatively new Mac OS X user, I was surprised to learn that you cannot easily delete files - directly, that is, without moving them to the trash first. On Windows and Linux this can obviously be done with ease, but not so on the Mac. I noticed this when trying to clear up files from a USB memory stick - removing the files ("move to trash") does not free up space; that happens only after emptying the whole system-wide Trash. Not particularly convenient! (It seems stupid to have to empty the whole trashcan just to make some space on the USB stick. There might be gigabytes of stuff in there, and this sort of defeats its purpose - what if you'd actually need to restore something from the trash some day?) So, what's your way of getting around this? Have you bought a 3rd-party application like RAW Trash for $16.95 just to delete files, or do you diligently empty the trashcan whenever needed? Or did I miss something? Also, can you convince me that this is actually the way it should be - that users shouldn't be able to fiddle with the filesystem easily? :)
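
    For completeness, the bypass that is always available is Terminal - rm never goes via the Trash. A sketch (the path is a placeholder, and rm is unrecoverable, so type carefully):

        # delete one file from the stick immediately, freeing its space
        rm "/Volumes/USB_STICK/some-file.mov"

        # or empty just that volume's trash, instead of the system-wide Trash
        rm -rf /Volumes/USB_STICK/.Trashes/

    The per-volume .Trashes directory is where "move to trash" actually puts files on removable media - which is exactly why the space doesn't come back until the Trash is emptied.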

  • Why does my exchange message filtering rule not work?

    - by Jon Cage
    I have two rules set up to sort incoming bug reports. The first is specific to a single device:

        Apply this rule after the message arrives
        sent to SMS Distribution
        and with <source_device_number>: in the body
        move it to the BugReports\<source_device_number> folder

    ..and the second is a catch-all for everything else:

        Apply this rule after the message arrives
        sent to SMS Distribution
        move it to the BugReports folder

    For some reason though, the first rule never seems to act, even though it's higher in the list. So for some reason an email like the following doesn't seem to get caught by the first rule:

        From: <SourceDeviceUID>
        To: SMS Distributor
        Subject: Message from <SourceDeviceUID>
        Message: <source_device_number>: Device encountered a problem. Details below...

    ...where <source_device_number> is an integer. The second rule works fine. But for some high-priority devices, I want them automatically sorted. Why might that first rule fail?

  • Manipulating IIS7 Configuration with Powershell

    - by John Price
    I'm trying to script some IIS7 setup tasks in PowerShell using the WebAdministration module. I'm good with setting simple properties, but am having some trouble when it comes to dealing with collections of elements in the config files. As an immediate example, I want application pools to recycle on a given schedule, and the schedule is a collection of times. When I set this up via the IIS management console, the relevant config section looks like this:

        <add name="CVSupportAppPool">
          <recycling>
            <periodicRestart memory="1024" time="00:00:00">
              <schedule>
                <clear />
                <add value="03:31:00" />
              </schedule>
            </periodicRestart>
          </recycling>
        </add>

    In my script I have the following two lines to accomplish the same thing:

        #clear any pre-existing schedule
        Clear-WebConfiguration "/system.applicationHost/applicationPools/add[@name='$($appPool.Name)']/recycling/periodicRestart/schedule"

        #add the new schedule
        Add-WebConfiguration "/system.applicationHost/applicationPools/add[@name='$($appPool.Name)']/recycling/periodicRestart/schedule" -value (New-TimeSpan -h 3 -m 31)

    That does almost the same thing, but the resulting XML lacks the <clear/> element that is created via the GUI, which I believe is necessary to avoid inheriting any default values. This sort of collection (with "add", "remove", and "clear" elements) seems to be fairly standard in the config files, but I can't seem to find any good documentation on how to interact with them consistently.
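
    One hedged alternative when the WebAdministration cmdlets won't emit the directive you want is to drop to the underlying Microsoft.Web.Administration API, whose collection objects understand add/remove/clear semantics. A sketch (untested; my understanding is that Clear() writes the <clear/> directive when inherited items exist - verify in applicationHost.config afterwards):

        [System.Reflection.Assembly]::LoadFrom("$env:windir\system32\inetsrv\Microsoft.Web.Administration.dll") | Out-Null

        $sm     = New-Object Microsoft.Web.Administration.ServerManager
        $config = $sm.GetApplicationHostConfiguration()
        $pools  = $config.GetSection("system.applicationHost/applicationPools").GetCollection()
        $pool   = $pools | Where-Object { $_["name"] -eq "CVSupportAppPool" }

        $schedule = $pool.GetChildElement("recycling").GetChildElement("periodicRestart").GetChildElement("schedule").GetCollection()

        $schedule.Clear()                        # local entries removed, <clear/> emitted
        $entry = $schedule.CreateElement("add")
        $entry["value"] = [TimeSpan]"03:31:00"   # the recycle time
        $schedule.Add($entry)

        $sm.CommitChanges()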

  • Exchange 2003 ActiveSync problem with certificate

    - by colemanm
    We're having problems getting iPhones to sync properly with SBS 2003 Exchange. When you add a new Exchange ActiveSync account on an iPhone and enter all the pertinent information, it shows a "Verifying Exchange account info" message for a minute or so, then says everything's verified and asks what you want to sync, Mail, Contacts, Calendars... so it looks like it's working. However, when you go to the Mail app and select the Exchange email account, it just shows an "Inbox" folder with nothing in it. When you try refreshing, it attempts for a second, then says "Last Updated" with a timestamp, as if it worked, but there's no mail and no error message/feedback at all. I think I've narrowed it down to some sort of certificate issue, but I'm having trouble finding out where to go from here... I ran MS's Exchange connectivity testing tool with these results: Our cert was purchased from Network Solutions, and I'd already added it to the IIS Default Website for OWA purposes. But this report makes it look like the cert is somehow problematic. I don't know what to do now... Here's a shot of the cert details, just in case:
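
    One hedged check from outside, to see the certificate chain the phones are actually offered (the hostname is a placeholder for your ActiveSync/OWA name):

        openssl s_client -connect mail.example.com:443 -showcerts

    Two things worth looking for in the output: whether the Network Solutions intermediate CA certificate is being served along with your own (a missing intermediate often validates fine on desktops, which cache intermediates, but fails silently on phones), and whether the certificate's CN matches the hostname entered on the iPhone.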

  • How can I pause console output in rxvt?

    - by Javid Jamae
    I'm running rxvt in Cygwin on a Windows box. This is how I invoke it:

        rxvt -sr -sl 2500 -sb -geometry 90x30 -tn rxvt -fn "Lucida Console-14" -e /usr/bin/bash --login -i

    Does anyone know how to pause the console output in rxvt? I can use Ctrl-S / Ctrl-Q to pause / un-pause, but this won't work if a script is already running and spewing output to stdout. Highlighting the terminal window with the mouse doesn't seem to work like it does with other consoles, such as the standard Cygwin console or the Windows command prompt console. Some sort of scroll lock would be nice, but I can't seem to find any way to do this. I know I could just pipe my output to a file, but I want a way to pause the output for something that I didn't expect to explode with console output. Basically I want to scroll back while it's running, without it constantly moving me to the bottom of the output buffer as it writes more data to stdout. I don't particularly care if the solution actually pauses the script (like when you highlight with the mouse in the Windows Command window) or just scroll-locks and lets me scroll while it's still running the underlying script, though I'd like to know how to do both if it's possible.
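
    A hedged workaround, since rxvt itself has no scroll lock: run the shell inside GNU screen (available as a Cygwin package) and use its copy mode, which freezes the view while the program underneath keeps running (key bindings from memory - see man screen):

        screen    # start a session inside rxvt
        # ...run the chatty script...
        # Ctrl-A then Esc (or Ctrl-A [) enters copy/scrollback mode:
        # scroll with the arrow/PgUp keys; Esc drops back to live output

    Output produced while you are in copy mode isn't lost - it's waiting, along with screen's own scrollback, when you come back.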

  • Number of routers in small community lock up and require reboot.

    - by Anthony Hiscox
    I live in a small town which has one primary ISP. Lately I have noticed that a number of wireless routers have been locking up and requiring a reboot before allowing any connections. This has affected two of my routers, my work router, and a few others. In all cases, wired continued to function as usual. Often wireless clients can see the SSID but simply won't connect. I can only think of a few possibilities and was hoping someone here might be able to point me in the right direction:

    1. Our ISP is well known to be flaky; something they are doing is causing this. What that might be I have no clue, as it seems to affect the wireless only.
    2. There's a power issue in town. Given our remote location and reputation for crap electrical, this seems reasonable. Only one router was plugged in to a UPS, and I'm not sure of its quality.
    3. There is some bug in all the different firmware for every one of these routers (all different). That doesn't seem reasonable, unless;
    4. it's an unknown (or known) exploit or DoS of some sort being launched by a massive team of ninjas hell-bent on forcing us all to be tethered to our walls by ethernet cables or;
    5. it's just been a coincidence and I'm just paranoid (this has some weight, I mean read 4 again).

    Anyone else experience similar issues and have some tips?

  • HTTPS/HTTP redirects via .htaccess

    - by Winston
    I have a somewhat complicated problem I am trying to solve. I've used the following .htaccess directives to enable some sort of pretty URLs, and that worked fine. For example, http://myurl.com/shop would be redirected to http://myurl.com/index.php/shop (note that static files such as myurl.com/css/mycss.css do not get redirected):

        RewriteEngine on
        RewriteCond %{REQUEST_URI} !^(index\.php$)
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteRule ^/?(.*)$ index.php/$1 [L]

    But now, as I have introduced SSL to my webpage, I want the following behaviour: I basically want the above behaviour for all pages except admin.php and login.php. Requests to those two pages should be redirected to the HTTPS part, whereas all other requests should be processed as specified above. I have come up with the following .htaccess, but it does not work: https://myurl.com/shop does not get redirected to http://myurl.com/index.php/shop, and http://myurl.com/admin.php does not get redirected to https://myurl.com/admin.php.

        RewriteEngine on
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^(admin\.php$|login\.php$)
        RewriteRule ^(.*)$ http://%{HTTP_HOST}/%{REQUEST_URI} [R=301,L]

        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^(admin\.php$|login\.php$)
        RewriteRule ^(.*)$ https://myurl.com/%{REQUEST_URI} [R=301,L]

        RewriteCond %{REQUEST_URI} !^(index\.php$)
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteRule ^/?(.*)$ index.php/$1 [L]

    I know it has something to do with rules overwriting each other, but I am not sure, since my knowledge of Apache is quite limited. How could I fix this apparently-not-that-difficult problem, and how could I make my .htaccess more compact and elegant? Help is very much appreciated, thank you!
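
    A hedged sketch of one likely fix (untested): in a per-directory .htaccess, %{REQUEST_URI} always begins with a slash, so a pattern like ^(admin\.php$|login\.php$) can never match - which would explain both failing redirects. Anchored on the leading slash:

        RewriteEngine on

        # force the two sensitive pages onto HTTPS
        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^/(admin\.php|login\.php)$
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

        # force everything else back onto HTTP
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^/(admin\.php|login\.php)$
        RewriteRule ^ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

        # pretty URLs, as before
        RewriteCond %{REQUEST_URI} !^/index\.php
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteRule ^/?(.*)$ index.php/$1 [L]

    The bare ^ pattern matches any URL, and R=301,L stops processing before the rewrite rule, so the redirects and the pretty-URL rule no longer fight each other.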

  • Monitoring AWS Systems Behind ElasticBeanStalk

    - by A. Avadis
    So I'm getting a company set up in the Amazon cloud -- creating IaaS protocols/solutions/standardized implementations, etc., while also being the sysadmin for individual systems, app environments, and day-to-day uptime. One of the biggest issues I'm having is tracking various system/application logs, as well as logging/monitoring/archiving system metrics like memory usage, CPU usage, etc., in a centralized fashion -- e.g. Nagios + Urchin. The BIGGEST impediment to my endeavors is the following: the company application is deployed in the form of a Java .WAR file, uploaded to an Elastic Beanstalk application environment, load-balancing and auto-scaling between 3 (min) and 10 (max) servers, and the EC2s that run the application are fired up and disposed of ad hoc. That is to say, I can't monitor the individual EC2s for very long, because so many are being terminated and then auto-provisioned/auto-scaled on the fly -- I'd constantly be having to "monitor what I'm monitoring" and continuously remove/add EC2 machine addresses to my monitoring lists. Is there some sort of way to use monitoring tools like Zabbix or Nagios to monitor the Elastic Beanstalk environment, and have them automatically add new EC2s and remove terminated/failed EC2s from their monitoring lists? Furthermore, is there anything I can do with Graylog to achieve similar results with the aggregation/centralization of my application logs from multiple EC2 instances into ONE consolidated set of logs/events? If not Graylog, is there ANYTHING LIKE Graylog that can automatically detect which EC2 members are being added to/removed from the environment, and collect the logs from them automatically? Any and all advice or direction is appreciated. Thanks much, and cheers!!
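
    On the discovery half of the problem, a hedged sketch: Beanstalk will tell you which instances it is currently running, so a cron job can regenerate the monitoring config. Assuming the aws CLI and an environment named "my-env":

        # EC2 instance IDs currently behind the Beanstalk environment
        aws elasticbeanstalk describe-environment-resources \
            --environment-name my-env \
            --query 'EnvironmentResources.Instances[].Id' \
            --output text

    Feed that list into whatever generates your Nagios/Zabbix host definitions and reload (Zabbix's low-level discovery can do a similar pull natively). The same loop works for logs: point each discovered instance's syslog or agent at a central Graylog input, or bake that into the AMI/.ebextensions so new instances register themselves at boot.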

  • Windows file association for README, INSTALL, LICENSE and the like [closed]

    - by Lumi
    Possible Duplicate: How to set the default program for opening files without an extension in Windows? Many files originating in the UNIX world come without a file extension. Popular examples include README, INSTALL, LICENSE. We know for a fact that these are text files. It is therefore a bit disappointing not to be able to just double-click them open in Explorer and see them in Notepad (actually, Notepad2, because of the UNIX line endings, which silly Microsoft Notepad doesn't render correctly). Does anyone know of a way to create a file association for, say, README files without an extension? This could then be replicated to cover the most frequently occurring file types, and then double-clicking them open would work. Update (sort of in response to all your comments): Thanks, folks, your comments and answers have helped me. @Indrek, yes, I was under the assumption that you could somehow create an association for just README or Makefile, and couldn't do so for files without an extension. Turns out the contrary is true, and yes, that is a workaround that neatly solves the issue. Ultimately, I just want to be able to double-click to open a README or Makefile, that's all. @Sampo, the SendTo trick is also useful, although usability is not as great as a straight double-click. (I'm really lazy sometimes.) Turns out the following trick, using assoc and ftype from an Administrator prompt, does the double-click-enabling job:

        assoc .=no_ext
        ftype no_ext=%SystemRoot%\system32\NOTEPAD.EXE %1

        :: You can see it created some entries in the registry:
        reg query hkcr\no_ext /s
        reg query hkcr\. /s

  • Running multiple copies of openssh-server (sshd) on Ubuntu

    - by cecilkorik
    I may be attacking this problem the wrong way; if so, let me know. I have a server which is available through SSH from both the public internet and the local LAN. I would like to have two very different security policies for each, by running two copies of sshd with two different sshd_config files, each on a different port. Some of the things I'd like to change: allow password or public-key authentication on the LAN, but public-key only from the internet. All (real) users could log in from the LAN side, but only certain authorized users would be individually whitelisted to log in through the internet. As far as I can tell, this requires having two different SSH daemons running on different ports with different sshd_configs. I am fine with the different-ports part; I can easily forward port 22 to any port I want through my firewall. So my question is: what is the best way to actually START the second sshd under Ubuntu 10.04 LTS? Is there a recommended way to do something like this? Surely I am not the first person with this sort of need. I have a bit of experience with upstart, and I suppose I can manually hack the second sshd into /etc/init/ssh.conf, but I'm not sure if that will get overwritten by the package. However I do it, it's important to ensure both sshd processes always get restarted after any automatic or manual upgrade of the openssh-server package. Thanks in advance.
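
    A hedged sketch of the usual pattern: leave the package's /etc/init/ssh.conf alone and add a second, separately named upstart job pointing at its own config file - upgrades won't overwrite a job the package doesn't own. Assuming /etc/ssh/sshd_config_internet exists with at least Port 2222 and PidFile /var/run/sshd-internet.pid set:

        # /etc/init/ssh-internet.conf
        description "OpenSSH server (internet-facing policy)"

        start on filesystem or runlevel [2345]
        stop on runlevel [!2345]

        respawn
        respawn limit 10 5
        umask 022

        # -D keeps sshd in the foreground so upstart can supervise it
        exec /usr/sbin/sshd -D -f /etc/ssh/sshd_config_internet

    The respawn stanza covers the "always comes back" requirement, and `sshd -t -f /etc/ssh/sshd_config_internet` makes a cheap config check before restarts.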
