Search Results

Search found 10165 results on 407 pages for 'ui virtualization'.

  • Windows 7 scheduled task returns 0x2

    - by demmith
    I have identical scheduled tasks running in Windows XP Pro and Windows 7. The XP Pro one runs fine, the Windows 7 one always returns 0x2 (which means, "The system cannot find the file specified"; however, executing from the command line is no problem) in the Last Run Result column of the Task Scheduler UI. The scheduled task executes a .bat file daily. The .bat file contains a call to execute a Perl script. As I stated in the previous paragraph, it executes under XP without any trouble but under Windows 7, no dice. The task under Windows 7 is set to "run whether the user is logged on or not." In this case it is me, I am the only user of the system. It is also set to "Run with highest privileges." And it is not hidden. The .bat file executes perfectly well from the command line - it calls the Perl script as expected and the Perl script does its thing. I have searched far and wide looking for an appropriate answer to this issue. So far I have found nothing. What the devil is going on with this Win7 scheduled task? I am ready to pull my hair out.
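
    A hedged sketch of a recreation that often clears 0x2 (task name and paths hypothetical): give the action a fully qualified path and call the interpreter by full path inside the .bat file, since a task that runs "whether the user is logged on or not" starts without the interactive user's working directory or full PATH.

        rem schtasks cannot set the "Start in" field; fill that in afterwards
        rem under Properties > Actions > Edit in the Task Scheduler UI.
        schtasks /Create /TN "DailyPerlJob" /SC DAILY /ST 03:00 ^
            /TR "cmd.exe /c C:\scripts\job.bat" /RL HIGHEST /RU %USERNAME%
        rem Inside job.bat, avoid relying on PATH:
        rem   C:\Perl\bin\perl.exe C:\scripts\job.pl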

    Read the article

  • Want to send my neighbors to a certain website via DNS, but don't have a clue how. [closed]

    - by Akku
    My neighbors have an unsecured WiFi router, and I could log into its administration web UI because no password was set. I don't know which of my neighbors these are, and I'd like to configure their router so that they land on my website instead of Google and Facebook, where I've set up a warning in German. It's this page: http://www.abelssoft.de/liebenachbarn/ Basically, I just want to see if and how this is possible - I'm aware that I could just set the WiFi password and have them call their network provider to reset the thing, but I really want to see if this could work, because it would be a way cooler effect :-). The router interface doesn't allow custom redirects, only filters. BUT I can set the DNS servers it uses, so I thought it might be possible to set up a custom DNS on a server, set it as the router's main DNS, and redirect from Google to the URL above. Is this possible? If so, please detail the steps I'd have to go through to achieve this. Note that I'm not a super-Linux-skilled person; I have a DynDNS account and a Windows machine it points to, as well as Apache+Tomcat, if that helps. I could also set up virtual machines on the Windows server and redirect to those using a different port. Or is there maybe a web service that provides such a custom DNS?
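
    For the "is this possible" part, a minimal sketch with dnsmasq, assuming some Linux box (or VM) reachable from the neighbors' network runs it; 203.0.113.10 stands in for whatever host serves the warning page:

        # /etc/dnsmasq.conf -- answer these names (and their subdomains) with
        # the warning host; everything else resolves via the upstream servers:
        address=/google.com/203.0.113.10
        address=/facebook.com/203.0.113.10

    Then set the router's primary DNS to the dnsmasq machine's address.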

    Read the article

  • Cut (smart edit) .mts (AVCHD Progressive) files on Ubuntu Lucid

    - by pts
    I have a bunch of .mts files containing AVCHD Progressive video recorded by a Panasonic camera, and I need software on Ubuntu Lucid with which I can remove the boring parts and concatenate the interesting parts, all without re-encoding the video stream. It's OK for me to cut at keyframe boundaries. If Avidemux were able to open the files, it would take about 60 hours of work for me to cut them. (At least that's what it took last time I tried with similar videos, but in a file format supported by Avidemux.) So I need a fast, powerful and stable video editor, because I don't want those 60 hours of work to grow to 240 or even 480 hours just because the tool is too slow or unstable or has a terrible UI. I've tried Avidemux 2.5.5 and 2.5.6, but they crash trying to open such a file, even if I convert the file to .avi first using mencoder -oac copy -ovc copy. mplayer can play the files. I've tried Avidemux 2.6.0, which can open the file, but it cannot jump to the previous or next keyframe reliably (if I make it jump to the next keyframe and then to the previous keyframe, it doesn't end up at the original keyframe, and sometimes it displays an error). Also I'm not sure if Avidemux 2.6.x would let me save the result without re-encoding. I've tried Kdenlive 0.7.7.1, but playback is very choppy, and it cannot play audio at all (complaining that SDL cannot find the device, although many other programs on the system can play audio). It would be a pain to work with. I've tried converting the .mts file to .mkv using ffmpeg -i input.mts -vcodec copy -sameq -acodec copy -f matroska output.mkv, but that caused too much visible distortion in the video in both mplayer and Avidemux. I've tried converting the .mts file with TsRemux.exe, but Avidemux 2.5.x still can't open that file. Is there another program to cut and concatenate the files? Is there a preprocessor that would create a file (without re-encoding the video) on which Avidemux wouldn't crash?
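
    A sketch of the cut-and-join approach with a newer ffmpeg than Lucid ships (the concat demuxer arrived in later releases; timestamps hypothetical). -c copy avoids re-encoding, and stream-copied cuts snap to keyframes:

        ffmpeg -ss 00:01:00 -i input.mts -t 00:02:30 -c copy part1.mts
        ffmpeg -ss 00:10:00 -i input.mts -t 00:01:15 -c copy part2.mts
        printf "file 'part1.mts'\nfile 'part2.mts'\n" > list.txt
        ffmpeg -f concat -i list.txt -c copy joined.mts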

    Read the article

  • Hadoop streaming job on EC2 stays in "pending" state

    - by liamf
    Trying to experiment with Hadoop and Streaming using the Cloudera distribution CDH3 on Ubuntu. I have valid data in hdfs:// ready for processing, and wrote a little streaming mapper in Python. When I launch a mapper-only job using: hadoop jar /usr/lib/hadoop/contrib/streaming/hadoop-streaming*.jar -file /usr/src/mystuff/mapper.py -mapper /usr/src/mystuff/mapper.py -input /incoming/STBFlow/* -output testOP Hadoop duly decides it will use 66 mappers on the cluster to process the data. The testOP directory is created on HDFS. A job_conf.xml file is created. But the job tracker UI at port 50030 never shows the job moving out of "pending" state and nothing else happens. CPU usage stays at zero (the job is created, though). If I give it a single file (instead of the entire directory) as input, I get the same result (except Hadoop decides it needs 2 mappers instead of 66). I also tried using the "dumbo" Python utility and launching jobs with that: same result, permanently pending. So I am missing something basic: could someone help me out with what I should look for? The cluster is on Amazon EC2. Firewall issues maybe: ports are enabled explicitly, case by case, in the cluster security group.
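
    A couple of CDH3-era sanity checks (a sketch, not a diagnosis) that separate "no TaskTrackers have joined" from plain firewall trouble:

        hadoop job -list-active-trackers   # empty output = no TaskTrackers registered
        hadoop dfsadmin -report            # confirms the DataNodes are up
        # On EC2 the intra-cluster RPC ports from mapred-site.xml, plus the
        # web ports 50030/50060, must be open inside the security group.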

    Read the article

  • Windows 7 ignores F6/F8 and will not boot

    - by P.Brian.Mackey
    I have a work PC with Sophos SafeGuard encryption on it. Windows failed to start. When I boot up I receive an error saying a recent hardware or software change might be the cause. File: \Boot\BCD Status: 0xc0000098 Info: The Windows boot configuration data file does not contain a valid OS entry. This began after the PC forced me to run a system recovery. My machine had powered down improperly (power outage?) and simply would not respond to my keyboard input to cancel the option to scan my system. After the scan "repaired" a boot file, my system crashed. Now it tells me I can insert my Windows 7 disk and run recovery. I can't simply do this because of SafeGuard: the system recovery can't see my encrypted drive. I tried hitting F2 to manually log in to SafeGuard and then selected the option to boot from media. The computer prompts me to hit any key to boot from disk... which I do, but once again it is not reading my keyboard input. I can't get F8/F6 to bypass startup files and get me to a command prompt like in the old days. If I could get to a command prompt I might be able to recover the file Windows jacked up from its backup location... though I may need to use the Windows recovery disk UI to do this. In the past I've been able to slap in a PS/2 keyboard when the USB keyboards stop responding like this; I have no PS/2 keyboard available. Does anyone have any idea how I can undo the damage Windows system recovery has done with SafeGuard installed?
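
    A hedged sketch of the standard BCD repair from the install disc's System Recovery Command Prompt; with SafeGuard the encrypted volume must be authenticated/unlocked first or these tools will not see it at all:

        rem keep a copy of whatever BCD store is left:
        bcdedit /export C:\BCD_Backup
        bootrec /scanos
        bootrec /rebuildbcd
        bootrec /fixboot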

    Read the article

  • How to prevent Gnome-shell's Alt+Tab from grouping windows from similar apps?

    - by wleoncio
    I love pretty much everything about how GNOME Shell handles app switching through Alt+Tab. My one gripe with it, though, is how it forces the user to use Alt+` to switch between windows of the same app. This is very annoying for me, because now I have to keep in mind whether the last window I was using belonged to the same app as the current window or not. Definitely a nuisance for power users who think in terms of "windows I'm working with" instead of "applications I'm working on". I've tried the AlternateTab extension ( https://extensions.gnome.org/extension/15/alternatetab/ ), but it looks way too ugly to me. Not to mention that in the end all I want is to remap Alt+(key above Tab) to Alt+Tab in this application. I guess one option would be to just tweak GNOME Shell. My guess is that I should tinker with the altTab.js file at /usr/share/gnome-shell/js/ui/, but the file is too long and overwhelming for someone like me who doesn't know JavaScript. Does anyone know how I can make GNOME Shell stop grouping windows by application?
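
    A hedged sketch: newer GNOME Shell builds expose the window-by-window switcher as a bindable action, so it can simply be put on Alt+Tab, displacing the grouped application switcher (verify the keys exist on your version with `gsettings list-keys org.gnome.desktop.wm.keybindings`):

        gsettings set org.gnome.desktop.wm.keybindings switch-windows "['<Alt>Tab']"
        gsettings set org.gnome.desktop.wm.keybindings switch-applications "['<Super>Tab']"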

    Read the article

  • Moving a site from IIS6 to IIS7.5

    - by Sukotto
    I need to move a site off of IIS6 (Win Server 2003) and onto IIS7.5 (Win Server 2008) as soon as possible. Preferably tomorrow. The site itself is a delightful mix of classic ASP (VBScript) and one-off ASP.NET (C#) applications (each ASP.NET app is in its own virtual dir and has a self-contained web.config). In case it's relevant, this is a sort of research site made up of 40 or 50 unconnected microsites. Each microsite is typically a simple page allowing a user to submit a form, which then runs a stored proc on a SQL Server db and displays a chart and/or table of the results. There is very little security to worry about. The database connection info is in a central file (in the case of the classic ASP) or each app's individual web.config (lots of duplication there). To add a little spice to the exercise... I have no idea how to admin IIS. The company no longer employs the sysadmin or the guys who set this thing up. (They're not going to employ me much longer either, but my sense of professional pride does not permit me to just walk away from this task.) The servers are on mutually firewalled networks and I have to perform a convoluted, multi-step process to copy anything from one to the other. Would someone please point me to a crash-course tutorial for accomplishing the above? I have: a complete copy of the site's filesystem on the new box; the 3rd-party charting tool installed on the new system; a config.xml file from the "all tasks - save configuration to a file" right-click menu. There doesn't seem to be a way to import it on the new system, however. The newer IIS manager has a completely different UI and I'm totally lost. Please help.
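
    A sketch of scripting the moves with AppCmd instead of clicking through the new UI (site name, paths, and host header hypothetical):

        %windir%\system32\inetsrv\appcmd add site /name:"ResearchSite" ^
            /physicalPath:"C:\inetpub\research" /bindings:http/*:80:research.example.com
        rem One application per microsite, so each keeps its own web.config:
        %windir%\system32\inetsrv\appcmd add app /site.name:"ResearchSite" ^
            /path:/microsite1 /physicalPath:"C:\inetpub\research\microsite1"
        rem Note: classic ASP is a separate, off-by-default feature on Server 2008.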

    Read the article

  • Backup files from Linux client to Windows Server

    - by Andrew
    I'm trying to back up my files from my Linux box to my Windows Server 2008 machine as a push, so that when I delete them from my Linux box they remain on the Windows Server. I've found lots of sources that are similar, but most results were from Windows to Linux. I managed to find slightly more similar cases like Using rsync and cygwin to Sync Files from a Linux Server to a Windows Notebook PC, and rsync from Windows PC to remote Linux server, with the most similar being a backup from Linux to Windows Server, but through a pull from the Windows Server. Initially, I used Unison because I thought having the 2-way capability would come in handy, and I would just have to set some configuration to make it 1-way. Unfortunately, I couldn't find the right configuration, and only managed to synchronize using the command unison "profile" -ui text -auto -silent. When I deleted the files on my Linux box, the files on the Server got deleted too, which of course isn't what I want. When I looked for Unison options, I only discovered the -force option, which didn't help, since what I wanted was an incremental update to the Server. I found out I could achieve this by using rsync with the -a (archive) option, which would keep adding files even if I deleted them from my Linux box. I installed Cygwin on my Windows Server and configured an SSH daemon, but I can't seem to get it working. I've also already configured Windows Firewall to open port 22 (both inbound and outbound). I used the following command from my Linux box: rsync -avrzn /folder/to/be/backed/up/ [email protected]:/cygdrive/c/place/to/store/backed/up/files (a - archive, v - verbose, r - recurse into subdirectories, z - compress, n - dry run) but it just won't work. Can anyone help me out? I don't mind using either Unison or rsync, as long as it achieves what I want.
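
    One thing worth flagging, as a sketch rather than a diagnosis: the command above includes -n (dry run), which reports what would transfer but copies nothing, and it is the absence of --delete, not -a itself, that keeps server-side copies of locally deleted files:

        # test the Cygwin SSH transport by itself first:
        ssh [email protected] echo ok
        # then push for real (no -n, no --delete):
        rsync -avz /folder/to/be/backed/up/ \
            [email protected]:/cygdrive/c/place/to/store/backed/up/files/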

    Read the article

  • How to install/configure ffmpeg to compress mp4 videos for flash player delivery?

    - by Andrew Fulton
    We have a Flash web app that creates interactive video, and we are using ffmpeg to do some compression/resizing when a user "publishes" their project. The user can upload FLV files and MP4 files, both of which play fine in the Flash UI before publishing. After publishing, the FLV files work fine, but the MP4 files will not play in the Flash player: audio plays but video won't. The MP4 files play fine if I download them and open them in the QuickTime player, but if I open them in the Adobe Media Player it reports "The media file does not contain a supported video track". If I open the Movie Inspector in QuickTime it tells me that the original file is an "h264" video and the ffmpeg-processed ones are "mpeg-4". I have tried forcing it to H.264 by adding flags like -f h264 and -vcodec h264, but I get a screenful of errors (no frame, illegal POC type, sps_id out of range) ending with Could not find codec parameters (Video: h264). h264 shows up if I run ffmpeg -formats and ffmpeg -codecs, and as I said the files play fine in QuickTime. Is there anything else I need to do to convince the Flash player to play them? Is there anything else I need to tell you about the server that would help?
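
    A sketch of the likely fix: in ffmpeg, "h264" names only the decoder, while the H.264 encoder is libx264, so re-encoding to something the Flash player accepts looks roughly like this (bitrates hypothetical; libfaac assumes your build includes it, and older builds may also want a preset such as -vpre medium):

        ffmpeg -i input.mp4 -vcodec libx264 -b 800k -acodec libfaac -ab 128k output.mp4
        # if libx264 is absent from `ffmpeg -codecs`, the build lacks
        # --enable-libx264 and needs a fuller package or a rebuild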

    Read the article

  • Random server lag, no CPU/mem/pagefile usage

    - by Kev
    We have a fairly new server running Windows 2003 SP2, and the past few days we've noticed random slowdowns. When I'm logged into the server over remote desktop while this is happening, or if I'm physically sitting at the server logged in, suddenly everything becomes extremely laggy. Any UI element I try to interact with takes upwards of ten seconds to react, and then responds very slowly. Then a minute later everything is quite snappy again. During this, I have Task Manager minimized to the tray, and there's no CPU usage. I open it up right after this happens, and there's very little CPU usage on the graph, and no memory or pagefile usage above normal. (Normal being 1.5 GB free in the case of memory.) This is what I see logged into the server, and then users start calling saying things are slow, timing out, and failing--anything to do with our server. No events in the Event Viewer around the times this happens. The context I'm working in (last thing I clicked, etc.) seems different every time--different programs active, different combinations of programs open. Never anything particularly stressful (like adding an event entry to a Cobian Backup configuration, or editing text in TextPad, which has been exceptionally stable in my extensive usage of it.) I would've thought it was just the server, but a family member's home PC (entirely separate) running WinXPSP3 had the same thing happen to it last night a few times. Is this some new behaviour introduced by the latest Windows Updates? Either way, where do I even start to look when nothing seems to be chewing up resources?

    Read the article

  • Creating a Jenkins build farm in a hands-off manner?

    - by user183394
    My colleague and I have set up and run Jenkins on a KVM guest running Ubuntu 12.04 with good results for a while now. We are thinking about deploying a cluster of Jenkins CI hosts in a master/slave configuration, with the libvirt slave plugin to keep our hardware count low. Our environment is strictly Linux (CentOS, Scientific Linux, Fedora, and Ubuntu). Both of us are competent in setting up large clusters. We typically use tools like Cobbler plus a configuration management tool (Puppet, Chef, and the like) to set up a large number of machines (physical and/or virtual) hands-off (hundreds of nodes in less than an hour is typical). We would like to do the same for nodes running Jenkins, but the step-by-step guide doesn't give us any clues in this regard. I did see a Multi-slave config plugin, but, being used to dealing with hundreds or more machines completely hands-off, clicking through the UI for many machines just doesn't feel right. Can someone point us to a reference that talks about how to set up a large cluster of Jenkins CI hosts in a more hands-off way?
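
    A sketch of hands-off slave registration via the Jenkins CLI (server URL and node names hypothetical). The node definition is the same XML the UI writes, so an existing node dumped with get-node serves as the template your config management tool renders per host:

        java -jar jenkins-cli.jar -s http://jenkins.example.com/ \
            get-node template-slave > node.xml
        # render node.xml per host with Puppet/Chef, then:
        java -jar jenkins-cli.jar -s http://jenkins.example.com/ \
            create-node build-slave-01 < node.xml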

    Read the article

  • Ubuntu 10.10 forgets desktop theme.

    - by Marcelo Cantos
    I am running Ubuntu in VirtualBox (on a Windows 7 host). Several times now, the top-level menu bar, the task bar - and seemingly every system dialog - have forgotten the out-of-the-box "Ambiance" theme they conformed to when I first installed the system. Window captions still preserve the theme, but pretty much nothing else does. I have searched high and low on Google for assistance with this problem. Everything I've found suggests either running some gconf reset or deleting .gconf*, .gnome*, and other similar directories. I have followed all this advice and nothing works. I still get a boring Windows-95-style gray 3D look and feel. On previous occasions, after much messing around I've given up and rebooted the VM instance, and been pleasantly surprised to see the original "Ambiance" theme restored throughout the UI, but invariably it disappears again some time later, usually after a reboot, so I can never figure out what I did that broke it. Here's a sample from Ubuntu's site of what I want it to look like. And here's a screenshot of my system as it currently looks. Also note that my GNOME Terminals normally have a nice purple semi-translucent look, and as can be seen from the screenshot, they are now just a solid matte white. This last time (just this morning), trying numerous combinations of all the usual tricks and rebooting several times hasn't fixed it, so here I am on SU wondering: how do I recover the out-of-the-box theme for my GNOME/Ubuntu desktop, noting that blowing away all config files - as suggested in many places online - fails to achieve this? EDIT: It might help to know that it seems to fail either after I resize the VM instance, forcing the Ubuntu desktop to resize itself, or after I play around with Compiz settings. I haven't been able to figure out which of these it is, and it could be neither. Given the amount of pain I have had to go through to get things back to normal (and given that I am at a loss as to how to do so), it has proven difficult to definitively isolate the cause.
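
    For what it's worth, a sketch of the GNOME 2 keys the appearance dialog writes; setting them directly is more surgical than deleting .gconf* wholesale (icon theme name is the stock one, adjust to taste):

        gconftool-2 --type string --set /desktop/gnome/interface/gtk_theme "Ambiance"
        gconftool-2 --type string --set /desktop/gnome/interface/icon_theme "ubuntu-mono-dark"
        gconftool-2 --type string --set /apps/metacity/general/theme "Ambiance"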

    Read the article

  • Need to bring back Win 7

    - by user290513
    I like making music and playing games, and occasionally do some Photoshop. I had a Windows 8 computer, but my mouse pointer always got stuck, so to try out something new I installed Ubuntu. Here is how I installed it: I went to Advanced Startup Options, clicked on "Use a device" after plugging in my bootable USB with Ubuntu, replaced my Windows 8 and installed Ubuntu 14.04 LTS. I hope I did it correctly. After a few months I still couldn't really find good audio production software (not LMMS, because I use Stagelight) or anything with a UI familiar to Photoshop's. So I decided to bring back Windows, but because of the bad experience with 8 I thought about bringing back Win 7. I used an app named WinUSB to make my bootable USB drive after formatting it to NTFS in GParted, but when I go to my GRUB menu my USB doesn't show up, my PC being a UEFI device. I don't know how to get to the BIOS of my device. Can somebody tell me how to install Windows 7 completely, deleting Ubuntu, or at least give me a link to a tutorial? I have a netbook: an Acer Aspire One 725. I'm fine with using commands in the terminal. One more thing: my laptop doesn't have a CD drive or reader, so I can't put a CD inside.

    Read the article

  • Viewing PostScript (or PDF) on OS X: Aliasing issues

    - by mankoff
    I am generating PostScript graphics and am trying to find a balance between no anti-aliasing and over-aliasing. If I use the raw Ghostscript viewer gs on the PostScript, it looks good. The text appears anti-aliased, but the image remains nice and blocky. Unfortunately, gs has no real user interface and loses all of the nice things that Preview.app has. I could install gv, but the dependency bloat is huge! It requires all of GNOME. And even that isn't a great viewer compared to Preview.app or Skim.app. Here is an image viewed with gs: From a user-interaction and Mac-ish perspective, Preview.app (or Skim.app) is a much nicer program to use. They have the option to turn aliasing on or off, but neither option looks very good. With aliasing on, the image is blurry. When it is off, the graphic matches what is seen from gs, but there are two issues. Minor issue: the font is ugly. Uglier than with gs. Major issue: every PDF is un-aliased, making it hard to read regular PDFs full of text. So, in summary: Is there a way to manually generate the PDF from the PS that overcomes these issues? Is there a way to find a middle ground of alias/unalias with Preview.app? Is there another app that displays with quality like gs but has a decent UI like Skim.app or Preview.app? Is there a way to have Preview.app turn off aliasing for only one file (containing graphics) but leave it enabled in general so that text PDFs are still readable?
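
    On the first question, a sketch of distilling the PS to PDF with Ghostscript while telling pdfwrite to leave images untouched (no resampling, lossless Flate compression), so the pixels stay blocky for any viewer:

        gs -dBATCH -dNOPAUSE -sDEVICE=pdfwrite \
           -dDownsampleColorImages=false \
           -dAutoFilterColorImages=false -dColorImageFilter=/FlateEncode \
           -sOutputFile=out.pdf in.ps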

    Read the article

  • Enable BitLocker and save key to share

    - by user273694
    I have searched all over the web but cannot find a complete answer to this: how to enable BitLocker on a laptop with TPM, and store a file with the BitLocker recovery key and TPM password, using the manage-bde command-line tool. The file should be the same as the one created in the BitLocker manager UI. I do NOT want to save to AD. The same question was asked here but was not answered correctly. The goal is to write a script to be used with an endpoint manager. I have tried the following: manage-bde -on C: works fine, but does not create or save a key. manage-bde -on C: -rk C:\myfolder\ and manage-bde -on C: -RecoveryKey C:\myfolder\ -rp The output from the last two methods states that a key has been saved to C:\myfolder and so on, but that is not the case. It also says that I have to: save the password in a secure location; insert a USB flash drive with an external key file into the computer; restart and run a hardware test; type "manage-bde -status" to check if the hardware test succeeded. After a restart, I get an error saying that BitLocker could not be enabled because the BitLocker startup key or recovery password could not be found on the USB device... C: was not encrypted. Why am I asked to insert a USB? I simply want to encrypt the hard drive and save the recovery information to a file automatically. Is that too much to ask? Help please!
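
    A sketch of one scriptable sequence (share path hypothetical): -on by itself writes no key file, but adding a TPM protector plus a numerical recovery password sidesteps the USB/external-key requirement, and the protector list can then be captured to a file by hand:

        manage-bde -protectors -add C: -TPM
        manage-bde -protectors -add C: -RecoveryPassword
        manage-bde -protectors -get C: > \\server\share\%COMPUTERNAME%-bitlocker.txt
        manage-bde -on C: -SkipHardwareTest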

    Read the article

  • Google Chrome Browser

    - by Harish
    Hi friends. I'm using Google Chrome as my default web browser. I don't have any problem with it. The only problem arises when I go to gmail.com and log into my account. I need to open Chrome's browsing-data history (Ctrl+Shift+Del) and select "Delete cookies and other data" before I can get into Gmail again. My Gmail page works just once: I log in and check my mail, and then I have to clear the cookies in order to log in again. If I fail to, this is the info I get: The webpage at https://mail.google.com/mail/?shva=1&ui=html&zy=l&pli=1&auth=DQAAALgAAABhdI_K9uptgb6yQfGVmnl74VZEUH7U2M7WGJn3kJnCiY0CNI5QBU3X-g6UjPENGoHKSHE9nRna_Ygu_d59mN-HG1SUzNpI_UEMJ9CwDqZAYxYLEJl8r_JA2qJNGF8H0fdKfn99Gb2YeI-lprGxCrWRT7LicyADxQvNLQ6l9xBvOccEBSJfdIrna8dOXeX06N41L0zpnLQrVG1qdulR7LxId9XwtVb6QtfhwnambqLoNiY402Y5pjGG1_gFL4dNpJA&gausr=hariss89%40gmail.com has resulted in too many redirects. Clearing your cookies for this site or allowing third-party cookies may fix the problem. If not, it is possibly a server configuration issue and not a problem with your computer. Here are some suggestions: Reload this web page later. Learn more about this problem. What can I do?

    Read the article

  • How can I totally flatten a PDF in Mac OS on the command line?

    - by Matthew Leingang
    I use Mac OS X Snow Leopard. I have a PDF with form fields, annotations, and stamps on it. I would like to freeze (or "flatten") that PDF so that the form fields can't be changed and the annotations/stamps are no longer editable. Since I actually have many of these PDFs, I want to do this automatically on the command line. Some things I've tried/considered, with their degree of success: Open in Preview and Print to File. This creates a totally flat PDF without changing the file size. The only way to automate it seems to be to write a kludgy UI-based AppleScript, though, which I've been trying to avoid. Open in Acrobat Pro and use a JavaScript function to flatten. Again, not sure how to automate this on the command line. Use pdftk with the flatten option. But this only flattens form fields, not stamps and other annotations. Use cupsfilter, which can create PDF from many file formats. Like pdftk, this flattened only the form fields. Use cups-pdf to hook into the Mac's print server and save a PDF file instead of printing. I used the MacPorts version. The resulting file is flat but huge. I tried this on an 8 MB file; the flattened PDF was 358 MB! Perhaps this can be combined with a Ghostscript call, as in Ubuntu Tip: Howto reduce PDF file size from command line. Any other suggestions would be appreciated.
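
    If the cups-pdf route otherwise works, a sketch of shrinking its bloated output with a Ghostscript re-distill (the /ebook preset is a quality knob to taste):

        gs -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -dPDFSETTINGS=/ebook \
           -sOutputFile=flat-small.pdf flat-huge.pdf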

    Read the article

  • How to download video from a website that uses flash player but

    - by TPR
    Possible Duplicate: Download Flash video file from any video site? Livestream.com seems to be using Flash Player to show both live streams and archived/recorded streams (meaning previously shown streams). I want to download the archived streams. I am assuming that it should be much easier to download an archived video from the website than the live stream. Here is a sample video: http://www.livestream.com/copanamericana/video?clipId=pla_6f9f4d97-e48f-4b04-bcaa-18e281341b0f&utm_source=lslibrary&utm_medium=ui-thumb ^^ I am not interested in this particular video, just an example. Firefox plugins like DownloadHelper do not work. Any suggestions? If I look at the browser cache, no matter what the website plays, all files have the same size! If I open them, of course, no video gets played. So something clever/funny is going on with the Flash player on livestream.com (yes, even for the archived videos), so it is definitely not the same as downloading videos from YouTube. However, ads played on livestream.com videos are properly stored in the browser cache.

    Read the article

  • Nagios DNX plugins

    - by danneh3826
    I'm toying with the idea of multiple Nagios instances set up to monitor our infrastructure. I've looked at all the various methods of distributed Nagios checks, and I think DNX comes out the closest. DNX handles failure of worker nodes; that's fine. What happens if the main DNX server fails, though? Is there a way to replicate the server too? I'm using AWS EC2 primarily, so I can utilise Elastic Load Balancing for the web UI, but I need to be able to handle failover of the AZ where the monitoring server sits, and essentially have a second instance pick up the checking load (active/passive, active/active, so long as it doesn't fail completely). The other thing I'm trying to solve is an issue with routing. What I'd like is to have multiple nodes report a fault before Nagios confirms it as critical. Not the NRPE checks, as they're pretty self-explanatory, but things more like check_ping. I often have routing issues out of AWS to certain datacenters, so Nagios can often report bad/no ping/timeout as a critical issue even though the machine in question is working fine. Would it be possible to have a setup where a worker complains that a service check is critical, and have a second worker node (positioned in another datacenter/AZ) also report the service as critical, before the Nagios central server issues a critical alert? I realise I might be asking a bit much (how far down the line do you go setting up failover systems before it starts to get ridiculous), but surely someone must have thought of this scenario when developing DNX?

    Read the article

  • Is the Windows VPN secure?

    - by Tor Haugen
    I have used a few VPN solutions over the years. Most are hard to set up, slow to connect and/or rather ill-behaved (replacing system drivers, disrupting each other etc). One solution I have never used earlier is the one built into Windows. This is mostly because the infrastructure guys always refuse to use it because they claim it's 'not secure'. Now I have finally had the chance to use it (on Windows 7), and wow, it's a breeze! Easy to set up, well-behaved, it connects almost instantly, automatically authenticates with my logged-in credentials, and integrates excellently with the UI. I have to say, unless it really isn't secure, I'll be happy if I never have to use another VPN product ever again. I gather the Windows VPN used to rely on PPTP, which is not considered secure. But in Windows 7/2008, it supports L2TP/IPSec, SSTP and IKEv2, and authenticates with EAP or CHAP/CHAPv2. That seems pretty up-to-date to me. But I'm just a lowly developer. Can someone in the know give me the lowdown on this?

    Read the article

  • cPanel FTP account access to sym links from parent directory

    - by totbar
    I would like to give a potential developer temporary access to some of my projects. I have almost everything in its own subdomain, and each directory is a sibling to my public_html directory. It looks something like this ("developer" is the cPanel account name):

        developer/        * top-level directory for the cPanel account, /home/developer
            site1/        * site1.mysite.com
            site2/        * site2.mysite.com
            site3/        * site3.mysite.com
            public_html/  * www.mysite.com
            ...

    I created a directory inside public_html called tempdev and added symbolic links to each of the sibling directories listed above. My understanding of cPanel is that I can only assign one user with "Special FTP Access" per domain. I really don't want to give a complete stranger my login creds (it's just a development environment, but still), so I used the cPanel FTP account creator UI. It will not allow me to assign the user access to directories outside of public_html; I can't even give access to public_html itself. So I made the tempdev directory in www and created the symlinks. Using the new account, I can see the symlinks, but I can't go into them. Is there a better way to accomplish what I am attempting?
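
    A sketch of one workaround, assuming root shell access: FTP daemons generally refuse to follow symlinks that lead outside the user's tree, but a bind mount looks like an ordinary directory to them (paths as in the layout above):

        mkdir -p /home/developer/public_html/tempdev/site1
        mount --bind /home/developer/site1 /home/developer/public_html/tempdev/site1
        # equivalent /etc/fstab line to persist across reboots:
        # /home/developer/site1 /home/developer/public_html/tempdev/site1 none bind 0 0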

    Read the article

  • HSphere - Only sees Apache 2 Test Page after forced shutdown?

    - by Darkwoof
    Hi, I have a dedicated server running on a Dell PowerEdge 850 with CentOS 4.4 and HSphere 3.0 Patch 6, colocated at a datacenter. Last night my hosting company had to schedule a change in the power bar, and I gave them the go-ahead to shut down the server and bring it up when they were done. Since they do not have admin access to the machine, I suppose they did a forced shutdown. When the machine was brought up, I found that all my domains (and subdomains) now point to an "Apache 2 Test Page" instead of the pre-configured sites that were running prior to the shutdown. This apparently only affects the standard sites running on port 80 - my Webmin instance running at port 1000 is still accessible, for example, as is my HSphere control panel running at port 8080. I've checked the config settings using the HSphere UI for each of the sites and didn't find anything wrong. I've also tried rebooting the server via SSH, which does not rectify the problem. I've previously done reboots with no issues; the sites would just come right back up when it was done, but not this time. I'm guessing some configuration file got corrupted or overwritten this time? Anyone with HSphere experience who can provide some advice on what's happened and how to solve it? Thanks. (I do not have an active support agreement for HSphere since Parallels took over and increased the minimum license to 200. I only had a 25 license for use by family and friends.) Thanks in advance.
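
    A first diagnostic sketch, assuming a stock Apache layout (HSphere builds its own Apache, so adjust paths to your install): a default test page usually means the generated vhost includes are no longer being read.

        httpd -t                                  # config syntax check
        httpd -S                                  # dump parsed vhosts; your domains should appear
        grep Include /etc/httpd/conf/httpd.conf   # are the HSphere vhost files still included?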

    Read the article

  • Sendmail slow to accept emails

    - by Rich
    I have a PHP web app which uses SMTP to sendmail on localhost to send email. I would like sendmail to accept the mail request immediately and queue it for later sending, as I don't want user-facing request threads blocked on emails. Sendmail is installed with the default settings on RHEL web servers. Sometimes sendmail blocks for a long time after the MAIL command is sent - sometimes taking 60 or 90 seconds to accept the mail. The time taken is usually very close to 60 or 90 seconds, which makes me think this is some kind of timeout. I have looked in the sendmail logs, and there are plenty of "deferred" emails, but nothing which looks responsible for this delay. How can I diagnose what is slowing down sendmail? How can I configure sendmail to always accept the mail immediately and queue it for later sending? Update: I'm not sure, but it looks like this might be linked to aol.com addresses. I strongly suspect that sendmail is doing some kind of blocking recipient-address verification at the accept-email-for-sending stage. How can I disable that, so that sendmail doesn't block my UI threads? Update 2: This only seems to happen at busy times. Perhaps I am running out of sendmail threads or something? How can I check that?
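
    A sketch of the queue-first setup in sendmail.mc (RHEL paths; assumes the sendmail-cf package is installed): queue-only delivery makes the SMTP dialogue return as soon as the message is spooled, leaving the slow DNS/relay work to the queue runner.

        define(`confDELIVERY_MODE', `q')dnl q = queue-only; the default is background
        dnl rebuild and restart:
        dnl   make -C /etc/mail && service sendmail restart
        dnl keep a queue runner going (e.g. sendmail -q15m) or queued mail never leaves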

    Read the article

  • How to get desired FireFox last tab behavior?

    - by JustJeff
    All tabs should be the same; so if any of them have a 'close' button, they all should, including the last tab. I see no reason that a tab's close button should suddenly vanish simply because that tab has become the last one open. If I have N tabs open and park the mouse over the left-most tab's close button, this vanishing-close-button trick means I now have to make a large mouse move to get to the app's close button. Unsat. Mouse moves = too many milliseconds wasted. Closing the last tab should NOT take me to my home page, or any other page whatsoever. I want the browser to close with the last tab. I do not expect or want "new tab" behavior when I click a Close button. Now, I've gone into about:config and played with browser.tabs.closeWindowWithLastTab, but this setting oversteps its purpose; while it does make the browser close, for some inexplicable reason it also suppresses the last tab's close button! I have tried the "last tab close button" add-on, and while this does restore the close button, the add-on oversteps by taking the liberty of turning closeWindowWithLastTab off. Is there some way out of this pickle? Is it too hard to just code things to provide simple, orthogonal actions, so that everybody can config the UI to their liking, and not just to a few pre-fab configurations that the developers think everyone should like? Btw, FF 13.0.1 on MS Windows.
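
    A sketch of the two prefs in play, as user.js lines in the profile directory (values reflect the behavior asked for; browser.tabs.closeButtons applies to pre-Australis builds such as FF 13):

        user_pref("browser.tabs.closeWindowWithLastTab", true);
        user_pref("browser.tabs.closeButtons", 1);  // 1 = show a close button on every tab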

    Read the article
