Search Results

Search found 12250 results on 490 pages for 'gnome shell extension'.


  • Possible DNS Injection and/or SSL hijack?

    - by Anthony
    So if I go to my site without indicating the protocol, I'm taken to: http://example.org/test.php

    But if I go directly to https://example.org/test.php I get a 404 back. If I go to just https://example.org I get a totally different site (a page about martial arts). I went to the site via https not very long ago (maybe a week?) and it was fine.

    This is a shared server, as I understand it, and I do not have shell access, so I'm limited to the site's cPanel to do any further investigation. But when I go to example.org:2083 I'm taken to https://example.org:2083, which, if someone has taken over the SSL port, could mean they have taken over the 2083 part as well (at least in my paranoid mind). I'm made more nervous by the fact that the cPanel login page at the above address looks very new (better, really) compared to the last time I went to it over the weekend. It's possible that wires got crossed somewhere after a system update, but I don't want to put in my username and password in case it's a phishing attempt.

    Is there any way, without shell access, to know for sure whether someone has taken over? If I look up the IP address for the host name, it matches what I have on a phpinfo page I can get to over http. If I go to the IP address directly on port 2083, I get the same login mentioned above (new and suspiciously nice), but the SSL cert shows as good when I go this route. So if that's the case (I know the IP is right, the cert checks out, and there isn't any DNS involved), is that enough to feel safe at that point of entry?

    Finally, if I can safely log in via the IP, does anyone have any advice on where to check first in cPanel for why the SSL port is forwarding to a site about karate? Thanks.
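
    A quick, read-only check that can be run from any machine with OpenSSL is to pull the certificate each HTTPS port is actually presenting and compare subject, issuer and dates. This is only a sketch; example.org stands in for the real domain, as in the question:

      # Certificate served on the main HTTPS port
      openssl s_client -connect example.org:443 -servername example.org </dev/null 2>/dev/null \
        | openssl x509 -noout -subject -issuer -dates
      # Certificate served on cPanel's port
      openssl s_client -connect example.org:2083 -servername example.org </dev/null 2>/dev/null \
        | openssl x509 -noout -subject -issuer -dates

    If the subjects or issuers differ between the two ports, or differ from what the host served a week ago, that is worth raising with the hosting provider before typing any credentials.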


  • Unable to specify parameters to cvlc in a script

    - by VxJasonxV
    I'm creating a script that issues a few curl commands in order to access a time-protected mms stream link, then sets up a relay using cvlc (VLC's command-line interface) for my own use on an unencumbered player. The curl aspect of this is working, as I can run a browser and curl side by side and get the same access URL. (It's time locked, meaning the stream will work forever, but you have to connect quickly or the URL will time out.) The very end of the script prints the command I will run, which is then followed by "exec $CMD". When I echo $CMD I get:

      cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://[...]

    Manually copy/pasting this command in, verbatim, works perfectly fine, but as part of a script the cvlc execution output says:

      [0x9743d0] main interface error: no suitable interface module
      [0x962120] main libvlc error: interface "globalhotkeys,none" initialization failed
      [0x9743d0] dummy interface: using the dummy interface module...
      [0xb16e30] stream_out_standard stream out error: no mux specified or found by extension
      [0xb16ad0] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#standard{access=http,mux=asf,dst=0.0.0.0:58194}'"}'
      [0xb11cd0] main input error: cannot start stream output instance, aborting
      [0xb11f70] signals interface error: Caught Interrupt signal, exiting...

    Why is --sout behaving one way in a script (non-interactive shell?) vs. another way in the foreground (interactive shell)?
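
    The error message hints at the cause: when $CMD is expanded unquoted, the shell does word splitting but not quote removal, so the literal single quotes end up inside the --sout argument. A sketch of one fix, assuming bash, is to keep the arguments in an array so no embedded quoting is needed at all:

      # Build the command as an array instead of a flat string
      sout='#standard{access=http,mux=asf,dst=0.0.0.0:58194}'
      cmd=(cvlc --sout "$sout" "mms://...")   # real mms URL goes here
      exec "${cmd[@]}"
      # Alternatively, if $CMD must stay a string: eval "exec $CMD"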


  • How Can I Make Apache Stop Serving ALL Unknown File Types (like .php~)?

    - by user223304
    I am coming from IIS and moving to Apache and recently found out that Apache by default serves up files of an unknown file extension as PURE TEXT. This can be an issue if a user uses certain programs that back up .php files as .php~. Then the .php~ file becomes completely readable by simply navigating to it in a browser. To make matters worse these .php~ files are often considered 'hidden' in the linux environment from the user so some may not even know they exist. Bots have been created around this fact that scour the internet looking for popular file name backups and extracting potentially secure info from them. I already know how to stop serving up .php~ files or any specific file extensions. I also know not to use any editors that would save backup files like this. My question is, how can I stop this default Apache behavior of serving up ANY non-MIME file type at all? I just don't like the this behavior and would like to stop it. I don't want it serving up .aspx~, .html~, .bob, .carl, no extension or anything else that is not a real MIME type. I know that I can probably go and use a directive to first Deny access to all file types. Then add the ones I want to serve out one by one. But I'm wondering if there's an easier/quicker way. Thanks for any help.
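
    One way to get the "deny everything, then allow known types" behaviour described above is a pair of FilesMatch blocks. This is only a sketch in Apache 2.2-style access directives, and the extension list is invented for illustration; it would need to match the types the site really serves:

      <FilesMatch ".*">
          Order allow,deny
          Deny from all
      </FilesMatch>
      <FilesMatch "\.(php|html|css|js|png|jpe?g|gif)$">
          Order deny,allow
          Allow from all
      </FilesMatch>

    Because later Files sections override earlier ones, everything is denied unless its extension is whitelisted, which also covers backup names like .php~ and extension-less files.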


  • How can I cause Task Scheduler to "fail" if a dialog box returns a certain result?

    - by Roger
    I'm working on a VBScript to do a weekly reboot of all machines on our network. I want to run this script via Task Scheduler. The script runs at 3:00 AM, but there is a small chance that users may still be on the network at that time, and I need to give them the option to terminate the reboot. If they do so, I would like the reboot to occur the next night at 3:00 AM. I've set Task Scheduler up to repeat in this way. So far, so good. The problem is that if the user selects "Cancel" in my script, the Task Scheduler does not see my task as failed, and won't run it again the next night. Any ideas? Can I pass an errorcode to Task Scheduler or otherwise abort the task via VBScript? My code is below:

      Option Explicit
      Dim objShell, intShutdown
      Dim strShutdown, strAbort

      ' -r = restart, -t 600 = 10 minutes, -f = force programs to close
      strShutdown = "shutdown.exe -r -t 600 -f"
      set objShell = CreateObject("WScript.Shell")
      objShell.Run strShutdown, 0, false

      'go to sleep so message box appears on top
      WScript.Sleep 100

      ' Input Box to abort shutdown
      intShutdown = (MsgBox("Computer will restart in 10 minutes. Do you want to cancel computer restart?",vbYesNo+vbExclamation+vbApplicationModal,"Cancel Restart"))

      If intShutdown = vbYes Then
          ' Abort Shutdown
          strAbort = "shutdown.exe -a"
          set objShell = CreateObject("WScript.Shell")
          objShell.Run strAbort, 0, false
      End if

      Wscript.Quit

    Appreciate any thoughts.
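
    On the "can I pass an error code" question: WScript.Quit accepts an exit code, and Task Scheduler records it as the task's Last Run Result. A minimal sketch of how the cancel branch could end (not the asker's final script):

      If intShutdown = vbYes Then
          objShell.Run "shutdown.exe -a", 0, False
          WScript.Quit 1   ' non-zero exit code recorded as the task result
      End If
      WScript.Quit 0

    Whether the scheduler actually re-runs the task the next night still depends on the task's own retry settings, so that part has to be verified in Task Scheduler itself.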


  • Exclude list of specific files in wget

    - by nanker
    I am trying to download a lot of pages from a website on dial-up and it can be brutally slow. I have almost got the perfect wget command, but because I'm downloading pages from the same site, wget wastes time downloading the same standard images for each page. If I know the names of the default page images, is there any way to have wget ignore them, and thus avoid downloading them, for each and every page?

    Here is an example of one of the wget commands that my shell script generates into another shell script to download all of the pages:

      mkdir candy-canes-on-the-flannel-board-in-preschool
      cd candy-canes-on-the-flannel-board-in-preschool
      wget -p -nd -A jpg,html -k http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/
      wget -c --random-wait --timeout=30 --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092416 Firefox/3.0.3" http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/ -O "candy-canes-on-the-flannel-board-in-preschool"
      rm Baby-and-Toddler.jpg Childrens-Books.jpg Creative-Art.jpg Felt-Fun.jpg Happy_Rainbow-e1338766526528.jpg index.html Language-and-Literacy.jpg Light-table-Button.jpg Math.jpg Outdoor-Play.jpg outer-jacket1-300x153.jpg preschoolspot-button-small.jpg robots.txt Science-and-Nature.jpg Signature-2.jpg Story-Telling.jpg Tags-on-Preschool.jpg Teaching-Two-and-Three-Year-olds.jpg
      cd ../

    Now I realize the script is not likely as savvy as it could be, but it is doing what I need at the moment, except that, as you can see from the rm command, I would just like to prevent wget from downloading those files in the first place if possible.

    I almost forgot to mention: there are two wget commands, and that is because the first one downloads the page as index.html and for some reason it does not open in my browser; however, when I open it and look at it in vim, all of the page's content is there, so I am not sure why it does not open. But if I just issue the second wget command as it is, then that page (the same file, really, with an alternate name) opens up fine. That is something else I could streamline if I could fix it.
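
    wget has a reject list for exactly this. A sketch of the first command with the known boilerplate images excluded (file names taken from the rm line above; the list would need to be kept in sync with the site):

      wget -p -nd -k -A jpg,html \
           -R "Baby-and-Toddler.jpg,Childrens-Books.jpg,Creative-Art.jpg,Felt-Fun.jpg,Math.jpg,Outdoor-Play.jpg" \
           http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/

    One caveat: when wget needs to parse a file to discover page requisites it may still download and then delete it, so --reject saves the most on files that are only ever linked, like these images.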


  • How do I keep the keyword.url setting in firefox to default when you restart the browser without del

    - by user34801
    I am on the latest version of Firefox (not a beta or anything like that) and currently my keyword.url is stuck on search.google.com (which I don't remember setting, even though about:config says it's a user-set value). Can someone tell me how to set it back to default and keep it at default when I restart my browser? I do not want to delete prefs.js, as I do not want to go through setting up all the extension settings I have just to have my location bar search Google (if that is the only way, then I'll stick with searching from the search bar instead).

    I've checked all my extensions that might affect the location bar but could not find anything that says it would change the default search engine. I've also tried to open prefs.js in WordPad and Notepad, but they just freeze when I try to edit it at all (yes, the browser is closed at the time). I also deleted the older prefs-1.js (along with two others) after trying to rename them to prefs.js to see if that corrected it. It might have, but they had such old extension settings that I went back to my latest prefs.js with this one issue, rather than having to set up a ton of extensions again. I can give any other info if needed; someone please help me fix this issue if possible.


  • What are some of the best answer file settings for a WDS Deployment?

    - by drpcken
    I've had my head buried in answer files for days now and have gotten quite comfortable setting them up, test, etc... I use a handful of Components to help my migrations, for my unattend.xml I like: Windows-International-Core-WinPE -- this is good for setting Locales the preboot environment (en-us for us english US speakers). Keeps me from having to set these on the initial image boot. Windows-Setup_neutral -- I like the WindowsDeploymentServices -> ImageSelection, especially if I'm only pushing a single image. This keeps me from having to select it each time. My OOBE_Unattend.xml is really useful and I barely have to touch anything during this part of the installation: Windows-Shell-Setup_neutral -- This lets me put a ProductKey in for my MAK volume license (very useful and time saving). I can also set the TimeZone for the installation. Windows UnattendedJoin_neutral -- I couldn't live without this component. It joins the machine on my domain before logging in as a domain administrator. I would hate to not have this ability. Windows-International-Core -- Again this component really speeds up the OOBE process. I configure my locals and time zone so I don't have to do it by hand when the machine enteres OOBE. Windows-Shell-Setup -- Allows you to configure an autologon when the new machine is finished. I like to logon as a domain admin automatically for customizing and troubleshooting the new machine immediately after it is imaged. Also the OOBE component under here lets me skip the EULA, Hide Wireless Setup, and set my default NetworkLocation. All of this makes the entire OOBE totally automated. What are some other good components I am missing as far as helping me get these images pushed and configured as quickly as possible?


  • Virus on site but can't find where

    - by Rob
    WARNING! THIS IS ABOUT A VIRUS ON MY SITE. IT APPEARS IT HAS BEEN THERE FOR SOMETIME AND I'VE HAD NO PROBLEMS. BUT PLEASE BE CAREFUL. READ EVERYTHING I SAY AND SEE IF YOU CAN HELP ME WITHOUT VISITING THE LINK. AVG PICKS UP ON IT AND BLOCKS IT, MCAFEE DOES NOT. Sorry about the warning, obviously i'm not here to get anyone infected or anything like that. Basically I run the website sortitoutsi dot net. Ages ago I got a virus on my computer, they got hold of my FTP passwords and added some lines of javascript to the top of my site. I removed them and believe it was fixed. However i'm using the "Web Developer" extension for Firefox and chose to view all javascript on my page and find there are various links to horrible urls such as: gittigidiyor-com.excite.co.jp.webmasterworld-com.eastmusicdirect.ru:8080/aboutus.org/aboutus.org/google.com/skycn.com/torrents.ru.php and gittigidiyor-com.excite.co.jp.webmasterworld-com.eastmusicdirect.ru:8080/index.php?jl= These terms do not appear anywhere. In the source code, in any of the javascript or the css. I also can't see that there are any rogue images that I don't recognise either. So i've no idea where this javascript is coming from. Can anyone suggest how I can find references to these links and remove them? I can see them both in the Web Developer firefox extension and in the net tab using Firebug. Any help would be greatly appreciated
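
    Since the strings do not appear in the pages the browser renders, a server-side search is the quickest way to find where they are injected. A sketch (the path is a placeholder for the real web root, and the domain is the one AVG flags):

      # Search every file under the web root, including dot-files, for the domain
      grep -rn "eastmusicdirect.ru" /path/to/webroot
      # List recently changed files; injected code usually shows up as an
      # unexplained recent modification
      find /path/to/webroot -type f -mtime -30 -printf '%T@ %p\n' | sort -rn | head

    If nothing turns up in the files, .htaccess files and the site's database (for a CMS) are also common places for this kind of injection to live.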


  • Additional Security Measures for Syslog over SSH

    - by Eric
    I'm currently working on setting up some secure syslog connections between a few Fedora servers. This is my current setup:

      192.168.56.110 (syslog-server)  <----  192.168.57.110 (syslog-agent)

    From the agent, I am running this command:

      ssh -fnNTx -L 1514:127.0.0.1:514 [email protected]

    This works just fine. I have rsyslog on the syslog-agent pointing to @@127.0.0.1:1514 and it forwards everything to the server correctly on port 514 via the tunnel. My issue is, I want to be able to lock this down. I am going to use ssh keys so this is automated, because there will be multiple agents talking to the server. Here are my concerns:

      1. Someone getting on the syslog-agent and logging into the server directly. I have taken care of this by ensuring that syslog_user has a shell of /sbin/nologin, so that user can't get a shell at all.
      2. I don't want someone to be able to tunnel another port over ssh, e.g. 6666:127.0.0.1:21. I know my first line of defense against this is to just not have anything listening on those ports, and it's not an issue. However, I want to be able to lock this down somehow.

    Are there any sshd_config settings on the server that I can use to make it so that only port 514 can be tunneled over ssh? Are there any other major security concerns I'm overlooking at this point? Thanks in advance for your help/comments.
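
    sshd_config can restrict forwarding per user with a Match block. A sketch of the idea (the user name comes from the question; everything else is an assumption about the setup):

      Match User syslog_user
          AllowTcpForwarding yes
          PermitOpen 127.0.0.1:514
          X11Forwarding no
          AllowAgentForwarding no

    PermitOpen limits which host:port the client may request with -L, so a request like 6666:127.0.0.1:21 would be refused while the syslog tunnel keeps working.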


  • PowerShell 3.0 x64 bit broken after installing KB2506143

    - by Dave Parker
    I have searched using all kinds of variations on relevant terms and I cannot find a single other instance of someone else having this exact same problem, so I am hoping someone here may have a clue.

    Problem: I installed Windows Management Framework 3.0 (KB2506143) by downloading and running Windows6.1-KB2506143-x64.msu from Microsoft.com. Once completed, I rebooted my machine as requested. After rebooting and logging in, I try to run the 64-bit PowerShell command shell and it comes up for a second then goes away. The 32-bit shell seems to work fine; it is just the 64-bit one that fails. Looking in the Fusion logs, I found:

      *** Assembly Binder Log Entry  (10/4/2012 @ 1:51:48 PM) ***
      The operation failed.
      Bind result: hr = 0x80070002. The system cannot find the file specified.
      Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework64\v2.0.50727\mscorwks.dll
      Running under executable C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe
      --- A detailed error log follows.
      === Pre-bind state information ===
      LOG: User = ********\*****
      LOG: DisplayName = Microsoft.PowerShell.ConsoleHost, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL
      <remainder omitted>

    GacUtil reveals that there is a Microsoft.PowerShell.ConsoleHost, Version=1.0.0.0, but not 3.0.0.0. I tried uninstalling KB2506143 (which removed MSVCRT90.dll and caused Windows Live Messenger to fail on load after rebooting again, so I ran a repair install of Windows Live Essentials and that fixed the Messenger problem) and then re-installing it, but nothing changed.

    If it helps, here are what I think may be the relevant parts of my hardware/software environment:

      Dell Latitude E6510, 8GB RAM
      Windows 7 Professional 64-bit with SP1
      Visual Studio 2010 Professional installed (includes .NET 4.0)
      Visual Studio 2012 Professional installed
      Microsoft Forefront Client Security

    Any clues out there? Thanks, Dave


  • background jobs and ssh connections

    - by petrelharp
    This question has come up quite a lot (really a lot), but I'm finding the answers to be generally incomplete. The general question is "Why does/doesn't my job get killed when I exit/kill ssh?", and here's what I've found. The first question is: how general is the following information? The following seems to be true for modern Debian Linux, but I am missing some bits; and what do others need to know?

      1. All child processes, backgrounded or not, of a shell opened over an ssh connection are killed with SIGHUP when the ssh connection is closed, but only if the huponexit option is set: run shopt huponexit to see if this is true.
      2. If huponexit is true, then you can use nohup or disown to dissociate the process from the shell so it does not get killed when you exit.
      3. If huponexit is false, which is the default on at least some Linuxes these days, then backgrounded jobs will not be killed on a normal logout. But even if huponexit is false, if the ssh connection gets killed or drops (different from a normal logout), then backgrounded processes will still get killed. This can be avoided by disown or nohup as in (2).
      4. There is some distinction between (a) processes whose parent process is the terminal and (b) processes that have stdin, stdout, or stderr connected to the terminal. I don't know what happens to processes that are (a) and not (b), or vice versa.

    Final question: how can I avoid behavior (3)? In other words, by default in Debian backgrounded processes run along merrily by themselves after logout, but not after the ssh connection is killed. I'd like the same thing to happen to processes regardless of whether the connection was closed normally or killed. Or, is this a bad idea?
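
    For reference, a sketch of the commands the points above refer to (bash; long_job is a placeholder for whatever is being run):

      shopt huponexit                      # is SIGHUP sent to jobs when the login shell exits?
      nohup long_job > job.log 2>&1 &      # start a job that ignores SIGHUP
      long_job &
      disown -h %+                         # or detach a job that is already running
      setsid long_job > job.log 2>&1 &     # or run it in a new session, with no controlling terminal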


  • Deploy our own software using Puppet?

    - by Ken
    (Apologies in advance for the stupidity in this question. I'm normally a programmer, not a sysadmin, but I've taken it upon myself to automate some things, and clean up some other things which are automated but not in the prettiest way. :-)

    I've been looking around at various tools for automation of software deployment to a bunch of servers, like cfengine, Puppet, and Chef. So far, Puppet looks the most appealing, but I've certainly not committed to anything yet. These tools all look like they can do a great job of keeping a bunch of servers up-to-date with prepackaged software. What I don't get is: how does one use a tool (like Puppet) to manage deployments of our own internal software? I think I'm at a loss because I've seen a thousand tutorials showing how to keep Apache ensure => latest (which is pretty cool), but nothing that quite corresponds to my use case today, which is something more like:

      1. when a human being pushes The Button,
      2. pull branch A from the version-control repository B
      3. run command C to compile it
      4. copy the binaries D to servers E1 through E10
      5. on each server, run command F to make all changes take effect

    Puppet sounds great, and I totally see the advantage of declarative, idempotent configuration over some shell scripts, but I've not seen any tutorials for "you want to update your shell scripts to Puppet (or Chef, or cfengine) so here's what you should do". Is there such a thing? Is it obvious to other people how to take the things provided in the Puppet docs and replicate the behavior I want? Am I just not getting it?

    What it's sounding like to me, so far, is that the human being (#1) would manually package the software (#2 and #3) external to Puppet, manually update the Puppet config, which would trigger Puppet to update the servers ... maybe? (I'm a little confused here, as I'm sure you can tell.) Thanks!
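
    The usual Puppet-flavoured answer is to make the internal app look like any other package: the build publishes a versioned .deb to an internal repo, and a manifest pins the version, so "The Button" becomes a one-line change. A minimal sketch (names invented for illustration, not the asker's services):

      # modules/internal_app/manifests/init.pp
      class internal_app {
        package { 'internal-app':
          ensure => '1.2.3-1',                    # bumping this pin is the deployment
        }
        service { 'internal-app':
          ensure    => running,
          enable    => true,
          subscribe => Package['internal-app'],   # restart when the package changes
        }
      }

    The build and publish steps (2-4 in the list above) stay outside Puppet; Puppet's job is only to converge each node onto the pinned version and restart whatever depends on it.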


  • CMD/ADB - Autorun script to search, copy, and paste a file from android system to flash drive

    - by Outride
    I've looked around and can't find anything that answers my question. This is my first question, so any tips or thoughts are welcome, as well as an answer :p As explained in the title, I want to create a script that launches, finds a file on an Android phone, copies it, and pastes it to a flash drive. As of right now, it's a mix of multiple tutorials, trial and error, and I'm at the point of giving up. As of right now, I have a flash drive loaded with three scripts, as follows (file name, then its contents):

      file.bat
        @echo off
        :: variables
        /min
        SET odrive=%odrive:~0,2%
        set backupcmd=xcopy /s /c /d /e /h /i /r /y
        echo off
        %backupcmd% "C:\Users\Outride\Desktop\kikDatabase.db" "%drive%\all"
        @echo off
        cls

      invisible.vbs
        CreateObject("Wscript.Shell").Run """" & WScript.Arguments(0) & """", 0, False

      launch.bat
        wscript.exe \invisible.vbs file.bat

    So far, I had to use Android Commander, manually go through the directory, find /data/data/kik.android/databases, copy kikDatabase.db to my desktop, and then run this script. Yes, I'm trying to pull the database to copy all my email contacts. I use launch.bat, which then makes file.bat invisible thanks to the invisible.vbs script. What would I need to do now to have the file searched for and copied to the flash drive? Thanks in advance, I'll be glad to answer any questions if there are any :p just remember that I'm not exactly a tech expert haha

    EDIT: Cleared junk from prior edits. New - I now have a .bat script to recognize what drive the USB is on, and launch py_cmd (adb shell). This is the current script:

      pull.bat
        @echo off
        :: variables
        SET odrive=%odrive:~0,2%
        set launching=start "%drive%\Minimal ADB and Fastboot\py_cmd"
        echo off
        %launching%

    So how could I make the .bat, or a new script, type the following into the adb terminal: "adb pull /data/data/kik.android/databases/ %drive%\All\Database"? Please help! I've been racking my brain over this all night :3
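
    There is no need to type into the interactive adb shell at all: adb has a pull subcommand that can be called straight from a batch file. A sketch (it assumes adb.exe sits in the "Minimal ADB and Fastboot" folder and %drive% is already set as in pull.bat; whether /data/data is readable without root depends on the phone):

      @echo off
      SET odrive=%odrive:~0,2%
      "%drive%\Minimal ADB and Fastboot\adb.exe" pull /data/data/kik.android/databases/kikDatabase.db "%drive%\All\Database\kikDatabase.db"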


  • How to organize deployment process in Chef-controlled environment?

    - by Alex
    I have a web Linux-based infrastructure which consists of 15 virtual machines and over 50 various services. It is fully controlled by Chef. Most of the services are developed internally. Basically, the current deployment process is triggered by a shell script. A build system (a mix of Python and shell scripts) packages the services as .deb files and puts these packages into a repo. It then runs apt-get update on all 15 nodes, because the standard Chef apt cookbook only runs apt-get once per day and we definitely do not want to run apt-get update unconditionally on each chef-client wake. Finally, the build system restarts the chef-client daemons on all 15 nodes (we need this step because of Chef's pull nature).

    The current process has a number of drawbacks we want to address. First off, it is asynchronous, because the deployment script does not check chef-client logs after the restart, so we don't even know if the deployment was successful; it does not even wait for the Chef clients to complete the cycle. Second, we definitely do not want to force chef-client restarts on all nodes, because we usually deploy only a small number of packages. And third, I am not quite sure using chef-client for deployment is legitimate; probably we are just doing it wrong from the start. Please share your thoughts/experience.


  • Using <VirtualHost> over .htaccess for mod_rewrite

    - by DarkWolffe
    I have a LAMP stack installed on Ubuntu 12.10 with three sites created under /etc/apache2/sites-available, all of which are working. My problem lies in wanting to use those files, rather than .htaccess, for appending the .php file extension to extensionless URLs. My file currently stands as such:

      # The VGC
      <VirtualHost *:80>
          ServerAdmin [email protected]
          ServerName thevgc.net
          ServerAlias www.thevgc.net
          DocumentRoot /var/www/www
          <Directory />
              Options FollowSymLinks
              AllowOverride All
          </Directory>
          <Directory /var/www/www/>
              Options Indexes +FollowSymLinks +MultiViews Includes
              RewriteEngine On
              RewriteBase /
              RewriteCond %{REQUEST_FILENAME} !-f
              RewriteCond %{REQUEST_FILENAME} !-d
              RewriteRule ^(.*)$ $1.php [L,QSA]
              AddType application/x-httpd-php .php
              AllowOverride All
              Order allow,deny
              allow from all
          </Directory>
          ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
          <Directory "/usr/lib/cgi-bin">
              AllowOverride None
              Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
              Order allow,deny
              Allow from all
          </Directory>
          ErrorLog ${APACHE_LOG_DIR}/error.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog ${APACHE_LOG_DIR}/access.log combined
      </VirtualHost>

    I'm almost certain I'm doing something wrong. All I know is that my .htaccess files refused to append the extension, or rather find the file that has the same name and load that file, so I wanted to go about it this way. Any suggestions? Here is an example page from my site.


  • nginx: Rewrite PHP does not work

    - by Ton Hoekstra
    I've a Suffix Proxy installed and I'm using the following rewrite, with wildcard subdomain DNS on:

      location / {
          if (!-e $request_filename) {
              rewrite ^(.*)$ /index.php last;
              break;
          }
      }

    My suffix proxy has the following URL format:

      (subdomain and/or domain + domain extension to proxy).proxy.org/(request-uri to proxy)

    I've this PHP code in my index.php:

      if(preg_match('#([\w\.-]+)\.example\.com(.+)#', $_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI'], $match)) {
          header('Location: http://example.com/browse.php?u=http://'.$match[1].$match[2]);
          die;
      }

    But when a page with a .php extension is requested, I get a 404 not found error:

      http://www.php.net.proxy.org/docs.php - HTTP/1.1 404 Not Found
      http://www.utexas.edu.proxy.org/learn/php/ex3.php - HTTP/1.1 404 Not Found

    But everything else is working (also index.php is working):

      http://php.net.proxy.org/index.php - HTTP/1.1 200 OK
      http://www.php-scripts.com.proxy.org/php_diary/example2.php3 - HTTP/1.1 200 OK
      http://www.utexas.edu.proxy.org/learn/php/ex3.phps - HTTP/1.1 200 OK
      http://www.w3schools.com.proxy.org/html/default.asp - HTTP/1.1 200 OK

    Does somebody have an answer? I don't know why it's not working; on Apache it works fine. Thanks in advance.

    Update: I've removed the location block and now it's working perfectly:

      if (!-e $request_filename) {
          rewrite ^(.*)$ /index.php last;
          break;
      }
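
    For reference, the same "send everything that isn't a real file to index.php" fallback is usually written with try_files in current nginx, which avoids the if block entirely (a sketch, not the poster's exact config):

      location / {
          try_files $uri $uri/ /index.php;
      }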


  • Where do these mysterious DNS lookups come from and why are they slow?

    - by Hongli
    I have recently obtained a new dedicated server which I'm now setting up. It's running on 64-bit Debian 6.0. I have cloned a fairly large git repository (177 MB including working files) onto this server. Switching to a different branch is very, very slow: on my laptop it takes 1-2 seconds, on this server it can take half a minute. After some investigation it turns out to be some kind of DNS timeout. Here's an exhibit from strace -s 128 git checkout release:

      stat("/etc/resolv.conf", {st_mode=S_IFREG|0644, st_size=132, ...}) = 0
      socket(PF_INET, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 5
      connect(5, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("213.133.99.99")}, 16) = 0
      poll([{fd=5, events=POLLOUT}], 1, 0) = 1 ([{fd=5, revents=POLLOUT}])
      sendto(5, "\235\333\1\0\0\1\0\0\0\0\0\0\35Debian-60-squeeze-64-minimal\n\17happyponies\3com\0\0\1\0\1", 67, MSG_NOSIGNAL, NULL, 0) = 67
      poll([{fd=5, events=POLLIN}], 1, 5000) = 0 (Timeout)

    This snippet repeats several times per 'git checkout' call.

    My server's hostname was originally Debian-60-squeeze-64-minimal. I had changed it to shell.happyponies.com by running hostname shell.happyponies.com, editing /etc/hostname and rebooting the server. I don't understand the DNS protocol, but it looks like Git is trying to look up the IP for Debian-60-squeeze-64-minimal as well as for happyponies.com. Why does Debian-60-squeeze-64-minimal come back even though I've already changed the host name? Why does Git perform DNS lookups at all? Why are these lookups so slow? I've already verified that all DNS servers in /etc/resolv.conf are up and responding slowly, yet Git's own lookups time out. Changing the host name back to Debian-60-squeeze-64-minimal seems to fix the slowness.

    Basically I just want to fix whatever DNS issues my server has, because I'm sure they will cause more problems than just slowing down git checkout. But I'm not sure what the problem exactly is and what these symptoms mean.
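
    One thing worth checking (an assumption, not a confirmed diagnosis): if the machine's own hostname cannot be resolved from /etc/hosts, anything that asks for its fully-qualified name falls through to DNS, and the resolver appends the search domain, which would explain a query for Debian-60-squeeze-64-minimal.happyponies.com. A sketch of the usual Debian-style /etc/hosts entries, using the names from the question:

      127.0.0.1   localhost
      # 127.0.1.1 is the common Debian convention for the machine's own name;
      # the old name is listed as an alias only as a stopgap while tracking
      # down what still reports it
      127.0.1.1   shell.happyponies.com   shell   Debian-60-squeeze-64-minimal

    With those entries in place, hostname and hostname -f resolve locally and never have to wait on the servers in /etc/resolv.conf.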


  • How to make quicksilver remember custom trigger

    - by corroded
    I am trying to make a custom trigger for my shell/apple script file to run so I can just launch my dev environment at the push of a button. So basically: I have a shell script(and some apple script included) in ~ named start_server.sh which does 3 things: start up solr server start up memcached start up script/server I have a saved quicksilver command(.qs) that opens up start_server.sh(so start_server.sh, then the action is "Run in Terminal") I created a custom trigger that calls this saved qs command. I did that then tested it and it works. I then tried to double check it so I quit quicksilver and when I checked the triggers it just said: "Open (null)" as the action. I set the trigger again and when i restarted QS the same thing happened again. I don't know why but my old custom trigger to open terminal has worked since forever so why doesn't this one work? Here's a screenie of the triggers after I restart QS: http://grab.by/4XWW If you have any other suggestion on how to make a "push button" start for my server then please do so :) Thanks! As an added note, I have already tried the steps on this thread but to no avail: http://groups.google.com/group/blacktree-quicksilver/browse_thread/thread/7b65ecf6625f8989


  • program to track recent saved files or downloads?

    - by DiegoDD
    Back in Windows XP times, I used a program named Ava Find that could find any file on the system, but a really nice feature it had, called "Scout Bot", automatically found new files, that is, recently created ones. More info here: AvaFind. For example, if I saved or downloaded a file (in any program) but couldn't remember its name or where I put it, I simply looked at the Scout Bot list, and it showed me the most recent files; then I could just open them from there, or jump to their path. Since I changed to Windows 7 (passing through Vista), the program no longer works. It simply keeps indexing forever and never finds anything. I don't know if it is not compatible with Windows 7 itself, or whether it is because my Windows 7 is 64-bit. I already tried to contact the AvaFind creators, but they never answered, and the program seems to be no longer supported (although the site still works).

    Now, is there a similar program that finds RECENT files? E.g. listing the most recently saved files, system-wide (or even limited to some folders or disks)? And, of course, one that is confirmed to work with Windows 7 64-bit. I know that there are many programs that can find any file, like "everything", but what I want is to find recent files and their paths without knowing their name or extension. "everything" can also sort results by date, which is almost what I need: simply look for part of the file name or extension, and get the most recent one. But if I get too many results, it takes a while, and I STILL need to know the name or at least the extension of the file. What I want is simply a "most recently saved files" tool, i.e. a replacement for Scout Bot in Ava Find. Do you know any alternative?


  • Google Chrome is running my system out of memory

    - by jasondavis
    I am running Windows 7 x64 with 12GB of RAM. I often have multiple windows and a ton of tabs open. I use the extension Session Buddy to restore all my windows and tabs once the memory gets too high. So my 12GB of RAM will get up to around 93% used because of Chrome; I can then close Chrome down and restore the same number of windows and tabs, and it will only use about 25% of memory. It then increases back up to the 90% zone over several hours. It seems that when I close tabs, the memory is not freed up, which is why the usage keeps climbing as new tabs are opened and closed; it just adds up. This sounds like a huge bug in Chrome.

    Just as an example: I re-booted my system with only 1 window and 4 tabs open, and Task Manager shows 29 chrome.exe processes. I then killed all Chrome processes and opened a Chrome window with just 1 tab, and it made 27 chrome.exe processes. Is this an issue that others have? More importantly, is there a fix?

    UPDATE: I just read that each plugin and extension creates a chrome.exe process. I then counted 24 extensions, so that helps explain a portion of the large number of processes. Still not sure about memory not being freed up, though!


  • Why does GIMP start up so slow on my machine?

    - by muntoo
    EDIT 2: One way would be to disable script-fu. Does anyone know how to disable it at startup? Is it possible? Would GIMP still work? What wouldn't work? Could I start it later, after loading?

    Is there any way to speed up GIMP's startup time on Windows Vista Home Premium 32-bit (dual 1.6 GHz Intel processors)? On XP [a different computer], it loads in less than 3 seconds. On Vista, it takes 20 seconds:

      2 seconds  (other - fonts, brushes, etc.)
      18 seconds (extension-script-fu)

    It just freezes at extension-script-fu. Looking at Process Explorer, I see that it's not taking any CPU at all.

    EDIT 1: It does seem to be taking 50% of the CPU. It gets stuck for about 18 seconds, and then starts working again. Then the actual GIMP program pops up [...finally]. I have the latest stable version running (I think). I tried it with XP SP2 Compatibility mode and/or Run As Administrator, but that didn't help.


  • Jailkit not locking down SFTP, working for SSH

    - by doublesharp
    I installed jailkit on my CentOS 5.8 server and configured it according to the online guides that I found. These are the commands that were executed as root:

      mkdir /var/jail
      jk_init -j /var/jail extshellplusnet
      jk_init -j /var/jail sftp
      adduser testuser; passwd testuser
      jk_jailuser -j /var/jail testuser

    I then edited /var/jail/etc/passwd to change the login shell for testuser to /bin/bash, to give them access to a full bash shell via SSH. Next I edited /var/jail/etc/jailkit/jk_lsh.ini to look like the following (not sure if this is correct):

      [testuser]
      paths= /usr/bin, /usr/lib/
      executables= /usr/bin/scp, /usr/lib/openssh/sftp-server, /usr/bin/sftp

    The testuser is able to connect via SSH and is limited to only view the chroot jail directory, and is also able to log in via SFTP; however, the entire file system is visible and can be traversed.

    SSH output:

      > ssh testuser@server
      Password:
      Last login: Sat Oct 20 03:26:19 2012 from x.x.x.x
      bash-3.2$ pwd
      /home/testuser

    SFTP output:

      > sftp testuser@server
      Password:
      Connected to server.
      sftp> pwd
      Remote working directory: /var/jail/home/testuser

    What can be done to lock down SFTP access to the jail? FWIW, I mostly used this as a guide: http://digitalpatch.blogspot.com.ar/2010/03/openssh-daemon-hardening-part-3-setup.html
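
    A different route worth considering, since it bypasses jailkit for the SFTP side entirely: newer OpenSSH (5.x and later) can chroot SFTP sessions itself, so this may require an updated openssh-server on a stock CentOS 5 box. A sketch of the sshd_config approach (jail path and user taken from the question; the chroot directory and everything above it must be root-owned and not group/world-writable, or sshd refuses the login):

      Subsystem sftp internal-sftp

      Match User testuser
          ChrootDirectory /var/jail
          ForceCommand internal-sftp
          AllowTcpForwarding no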


  • PHP Page Stopped outputting content After Running "yum install php-devel" Command

    - by stwhite
    This error is bizarre, but after running the "yum install php-devel" command (after a long day of trying to install Facedetect and OpenCV for face detection) my site stopped functioning. The site uses MySQL and PHP. When you hit the URL, the page executes the MySQL and the PHP, but it appears to randomly stop outputting the content of the page. None of the code was changed, and the site was working flawlessly prior to running the mentioned SSH command. I do use output buffering in the site, but removing the calls to "ob_flush", "ob_end_flush" and "ob_start" didn't appear to help; I'm still having issues with the site. Any ideas what this could be? Here is the output from the terminal:

      [myserver ~]# cd Facedetect-4b1dfe1
      [myserver Facedetect-4b1dfe1]# phpize
      Configuring for:
      PHP Api Version:         20090626
      Zend Module Api No:      20090626
      Zend Extension Api No:   220090626
      [myserver Facedetect-4b1dfe1]# configure
      bash: configure: command not found
      [myserver Facedetect-4b1dfe1]# phpize && configure && make && make install
      Configuring for:
      PHP Api Version:         20090626
      Zend Module Api No:      20090626
      Zend Extension Api No:   220090626
      bash: configure: command not found
      bash: Read: command not found
      [myserver Facedetect-4b1dfe1]# make
      make: *** No targets specified and no makefile found.  Stop.
      [myserver Facedetect-4b1dfe1]# yum install php5-devel


  • Mounting fuse sshfs fails when invoked by Cron on FreeBSD 9.0

    - by Tal
    I have a remote server filesystem that I'm attempting to mount locally on a FreeBSD 9 machine via FUSE sshfs and cron, for a backup routine. I have SSH keys between the boxes set up to allow for passwordless login as the root user on the local machine. Cron is set to run the following script (in root's crontab):

      #!/bin/sh
      echo "Mounting Share"
      /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all <remote user>@<remote domain>.com: /mnt/remote_server

    As root, I can run this script on the command line without issue, and without being asked for a password the share mounts successfully. Yet when run by cron, the script fails. The path to sshfs is identical to the value of which sshfs. Here is the email root receives from the cron daemon:

      X-Cron-Env: <SHELL=/bin/sh>
      X-Cron-Env: <HOME=/root>
      X-Cron-Env: <PATH=/usr/bin:/bin>
      X-Cron-Env: <LOGNAME=root>
      X-Cron-Env: <USER=root>

      Mounting Share
      fuse: failed to exec mount program: No such file or directory
      fuse: failed to mount file system: No such file or directory

    I'm stumped as to why I'm receiving "No such file or directory" in this instance, which seems all the odder given that the paths appear to be correct. I've also attempted to compare the output of env on the shell with env inserted into the script, and I don't see any environment variables that should cause this trouble. At bootup, FUSE reports its version as:

      fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8

    Help me ServerFault wizards, you're my only hope!
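
    One common cause that matches this symptom (an assumption, not a confirmed diagnosis): the cron environment shown above has PATH=/usr/bin:/bin, and if the FUSE mount helper that sshfs execs lives outside that path, "failed to exec mount program" can simply mean it was not found. A sketch that rules this out by setting PATH explicitly in the script (user@remote.example.com stands in for the redacted remote in the question):

      #!/bin/sh
      PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
      export PATH
      echo "Mounting Share"
      /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all \
          user@remote.example.com: /mnt/remote_server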


  • Changing the prompt in telnet

    - by wim
    With some help from people on here, I was able to set a custom prompt in an ssh session (thanks!). Now I need to do the same in telnet, but I'm not sure what syntax I could use for that. Basically the telnet prompt is just a > character; I need to modify it to something I can more reliably detect in automation jobs. Hope this makes sense. From inside telnet, trying to escape the command with a bang, like !PS1=spam and !PS2=eggs, did not change it.

      wim@wim-acer:~$ ssh [email protected] -i ~/.ssh/guest_nopassphrase -t "export PS1='Sending a custom prompt \w \$ '; exec sh"
      Sending a custom prompt ~ $ set
      HOME='/var/tmp'
      IFS=' '
      LOGNAME='guest'
      PATH='/sbin:/usr/sbin:/bin:/usr/bin'
      PPID='1128'
      PS1='Sending a custom prompt \w $ '
      PS2='> '
      PS4='+ '
      PWD=''
      SHELL='/bin/sh'
      TERM='xterm'
      USER='guest'
      Sending a custom prompt ~ $ telnet localhost
      <snip>
      Entering character mode
      Escape character is '^]'.

      > !set
      CONSOLE='/dev/ttyp0'
      HOME='/var/tmp'
      IFS=' '
      LOGNAME='root'
      PATH='/sbin:/bin:/usr/sbin:/usr/bin'
      PPID='546'
      PREVLEVEL='N'
      PS1='\w \$ '
      PS2='> '
      PS4='+ '
      PWD='/var/tmp'
      RESPAWN_COUNT='1'
      RESPAWN_LAST='0'
      RESPAWN_MAX='5'
      RESPAWN_TIME='5'
      ROOTDEV='/dev/sla1'
      RUNLEVEL='5'
      SHELL='/bin/false'
      TERM='linux'
      USER='root'
      >
      > Connection closed by foreign host
      Sending a custom prompt ~ $
      Connection to 192.168.1.124 closed.
      wim@wim-acer:~$

