Search Results

Search found 593 results on 24 pages for 'wget'.

Page 19 of 24

  • How to best develop web crawlers

    - by Fernando Barrocal
    I often write crawlers to compile information, and whenever I come across a website whose data I need, I start a new crawler specific to that site, usually in shell script and sometimes in PHP. The way I do it is with a simple for loop to iterate over the page list, wget to download each page, and sed, tr, awk or other utilities to clean the page and grab the specific info I need. The whole process takes some time, depending on the site, and downloading all the pages takes even longer. And I often run into an AJAX site that complicates everything. I was wondering if there are better or faster ways to do this, or even some applications or languages that would help with such work.
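
    A minimal sketch of the kind of loop described above; the base URL, the page count, and the sed/awk extraction are placeholders, not taken from any real site:

        #!/bin/bash
        # iterate over a numbered page list, download each page with wget,
        # then strip the markup and pull out one field with sed/tr/awk
        base="http://www.example.com/list?page="
        for page in $(seq 1 50); do
            wget -q -O "page_${page}.html" "${base}${page}"
            sed -e 's/<[^>]*>//g' "page_${page}.html" \
                | tr -s ' \t' ' ' \
                | awk '/Price:/ {print $2}' >> results.txt
        done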

    Read the article

  • curl object moved here error, bash

    - by adam n
    I'm trying to download an HTML file with curl in bash. When I download it manually, it works fine. However, when I run my script through crontab, the output HTML file is very small and just says "Object moved to here." with a broken link. Does this have something to do with the sparse environment that crontab commands run in? I found this question: http://stackoverflow.com/questions/1279340/php-ssl-curl-object-moved-error but I'm using bash, not PHP. What are the equivalent command-line options or variables to set to fix this problem in bash? (I want to do this with curl, not wget.)
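
    "Object moved" is the body of an HTTP 302 redirect, which curl does not follow by default; the bash-level equivalent of PHP's CURLOPT_FOLLOWLOCATION is the -L flag. A sketch (the URL and output name are placeholders):

        # -L follows redirects; -A sets a browser-like User-Agent in case the
        # server treats "unknown" clients (like a bare cron job) differently
        curl -L -A "Mozilla/5.0" -o page.html "http://www.example.com/report.html"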

    Read the article

  • Install a vimball from the command line.

    - by Robert Massaioli
    As this post points out, you can install vimballs using the normal: vim somevimball.vba :so % :q But if you want to install one from the command line, how do you do it? I ran 'man vim' and it seemed like the best "from source install" option was the '-S' option, so I tried to install haskellmode with it: wget 'http://projects.haskell.org/haskellmode-vim/vimfiles/haskellmode-20090430.vba' vim -S haskellmode-20090430.vba and that failed to work. It gave me the following error: Error detected while processing function vimball#Vimball: line 10: (Vimball) The current file does not appear to be a Vimball! Press ENTER or type command to continue It should be noted that using the first method I was able to install the vimball successfully. I have tried the second method on a few other vimballs and it has failed every time. Is there a way to install a vimball from the command line? It seems like a useful sort of task. Oh, and I am running the following version of vim: Version: 2:7.2.330-1ubuntu3 Thanks.
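
    The -S flag sources the file as a Vim script, but the vimball plugin expects the vimball to be the current buffer (':so %' works because '%' is the file being edited). A hedged sketch of one way to reproduce that non-interactively:

        # open the vimball as the current buffer, source it, then quit
        vim -c 'source %' -c 'quit' haskellmode-20090430.vba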

    Read the article

  • PostgreSQL - pgloader installation?

    - by KittyYoung
    Granted, this is a dumb question, but it's still a mystery to someone like me, who's never done it before. I'm trying to install pgloader, but I can't seem to find any documentation. I'm running MAMP on Mac OS X. I've already installed tcllib, and am about to do: wget http://pgfoundry.org/frs/download.php/233/pgloader-1.0.tar.gz tar zxvf pgloader-1.0.tar.gz I'm wondering what directory I need to actually untar pgloader into. Is there anything else that I need to do to get it to work?

    Read the article

  • CodeIgniter Project Giving 303/Compression Error

    - by Tim Lytle
    I'm trying to set up a CodeIgniter-based project for local development (LAMP stack). Once all the config files were updated (meaning I successfully got meaningful bootstrap errors from CodeIgniter), I get this error in my browsers: Chrome: Error 330 (net::ERR_CONTENT_DECODING_FAILED): Unknown error. Firefox: Content Encoding Error: The page you are trying to view cannot be shown because it uses an invalid or unsupported form of compression. Just using wget to fetch the file works fine: no errors, and I get the content I'm expecting. I'm not sure if this is something between CI and the server, or just something weird with the project. Has anyone seen this before?
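
    The browsers fail while wget succeeds most likely because the browsers send Accept-Encoding: gzip and get back a body whose Content-Encoding header does not match what was actually sent (for example output compressed twice, or declared gzip but sent plain). A hedged way to check this from the shell; the localhost URL is a placeholder:

        # request gzip the way a browser does, keep the raw body, then compare
        # the Content-Encoding header against what the body really contains
        curl -s -D headers.txt -H 'Accept-Encoding: gzip' \
             -o body.raw 'http://localhost/myproject/'
        grep -i 'content-encoding' headers.txt
        file body.raw    # "gzip compressed data" vs. plain HTML text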

    Read the article

  • How can I download all files of a specific type from a website using PHP?

    - by CheeseConQueso
    I want to get all MIDI (*.mid) files from a site that's set up pretty simply in terms of directory tree structure. I wish we had wget installed here, but that's another party... The site is VGMusic.com and the path containing all of the midi files is: http://www.vgmusic.com/music/console/nintendo/nes/ I tried glob'ing it out, but I suppose that glob only works locally? Here is what I wrote to try to make it happen (doesn't work, obviously): <?php echo 'not a blizzard<br>'; foreach(glob('http://www.vgmusic.com/music/console/nintendo/nes/*.mid') as $filename) { echo $filename.'<br>'; //$newfile = 'http://www.mydomain.com/nes/'.$filename; //copy($filename, $newfile) } ?> I also tried it without the http:// in there, with no luck.

    Read the article

  • Modify HTML Content with Squid

    - by user298814
    We have set up our network as per the tutorial here: https://help.ubuntu.com/community/Upside-Down-TernetHowTo. Basically, we have a squid proxy that inverts images on the pages that clients request. We're trying to modify the script so that we can edit the contents of the web page before it is sent to the client, and we are not having any luck. I'm wondering if there is something different about .html files that makes this not possible. What happens is that we do a wget on the requested URI, save it locally, modify it, and then echo back the new URI. The page the user gets is the unmodified page, not the one we just changed.
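
    A sketch of the redirector pattern that tutorial uses, adapted for HTML; this is only an illustration, not a known-good script. The local directory, hostname, and sed edit are placeholders, the rewritten copy has to be served by a local web server that squid will not rewrite again, and older squid versions expect just the bare URL on stdout while newer ones use a different reply format:

        #!/bin/bash
        # squid url_rewrite_program helper: reads "URL ip/fqdn user method" per
        # request on stdin and writes the (possibly replaced) URL on stdout
        while read url rest; do
            case "$url" in
                *.html|*.htm)
                    name=$(echo "$url" | md5sum | cut -d' ' -f1).html
                    wget -q -O "/var/www/rewritten/$name" "$url"
                    sed -i 's/original text/replacement text/g' "/var/www/rewritten/$name"
                    echo "http://127.0.0.1/rewritten/$name"
                    ;;
                *)
                    echo "$url"
                    ;;
            esac
        done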

    Read the article

  • Command to zip a directory using a specific directory as the root

    - by Slokun
    I'm writing a PHP script that downloads a series of generated files (using wget) into a directory and then zips them up, using the zip command. The downloads work perfectly, and the zipping mostly works. I run the command: zip -r /var/www/oraviewer/rgn_download/download/fcst_20100318_0319.zip /var/www/oraviewer/rgn_download/download/fcst_20100318_0319 which yields a zip file with all the downloaded files, but the archive contains the full /var/www/oraviewer/rgn_download/download/ directory structure before reaching the fcst_20100318_0319/ directory. I'm probably just missing a flag, or something small, from the zip command, but how do I get it to use fcst_20100318_0319/ as the root directory?
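
    zip stores paths exactly as they are given on the command line, so the usual fix is to change into the parent directory first and pass a relative path (zip's -j flag would instead discard directories entirely and flatten the archive):

        # archive relative to the download directory so fcst_20100318_0319/ is the root
        cd /var/www/oraviewer/rgn_download/download && \
            zip -r fcst_20100318_0319.zip fcst_20100318_0319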

    Read the article

  • can't log in to phpmyadmin

    - by user574383
    Hi, I am new to Linux, but I need phpMyAdmin on my CentOS server. I did this: cd /var/www/html/ (document root of apache) wget http://sourceforge.net/projects/phpmyadmin/path/to/latest/version tar xvfz phpMyAdmin-3.3.9-all-languages.tar.gz mv phpMyAdmin-3.3.9-all-languages phpmyadmin rm phpMyAdmin-3.3.9-all-languages.tar.gz cd phpmyadmin/ cp config.sample.inc.php config.inc.php OK, so then I just go to a web browser, open www.$ip/phpmyadmin, and I am presented with a login screen asking for a username and password. How can I get these credentials to log in? I'd like to log in as root, I guess, but I don't know how to set up a root account and create a password for root using the CLI and MySQL. Please help? Thanks.
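
    phpMyAdmin just passes those credentials through to MySQL, so what is needed is a MySQL account. On a fresh install the MySQL root user usually exists with an empty password; a hedged example of giving it one from the CLI (assuming root currently has no password, and picking your own value):

        # set a password for the MySQL root account, then log in to phpMyAdmin
        # as "root" with that password
        mysqladmin -u root password 'MyNewPassword'

        # or, equivalently, from the mysql shell:
        #   mysql -u root
        #   SET PASSWORD FOR 'root'@'localhost' = PASSWORD('MyNewPassword');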

    Read the article

  • Editing Subversion post-commit script to enable automated Hudson builds

    - by Wachgellen
    Hey guys, I'm not so good with Linux, but I need to modify the post-commit file of my Subversion repository to get Hudson to build automatically on commits. This page here tells me to do this: REPOS="$1" REV="$2" UUID=`svnlook uuid $REPOS` /usr/bin/wget \ --header "Content-Type:text/plain;charset=UTF-8" \ --post-data "`svnlook changed --revision $REV $REPOS`" \ --output-document "-" \ http://server/hudson/subversion/${UUID}/notifyCommit?rev=$REV The part I don't understand is the URL given at the bottom of that snippet. I know the address of my Hudson server, but the /subversion part has me baffled, because on my system that doesn't refer to anything. My Subversion repository lives somewhere else on the server, not inside Hudson. Can anyone tell me what I'm supposed to put as the URL (an example would help greatly)?
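
    The /subversion/... part is not a filesystem path at all; it is an HTTP endpoint exposed by Hudson's Subversion plugin, so the URL is simply your Hudson base URL plus that fixed suffix. A hedged example, assuming Hudson answers at http://hudson.example.com:8080/hudson (substitute your own host and context path):

        # only the host/context-path changes; /subversion/${UUID}/notifyCommit
        # is served by Hudson itself, regardless of where the repository lives
        /usr/bin/wget \
            --header "Content-Type:text/plain;charset=UTF-8" \
            --post-data "`svnlook changed --revision $REV $REPOS`" \
            --output-document "-" \
            "http://hudson.example.com:8080/hudson/subversion/${UUID}/notifyCommit?rev=$REV"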

    Read the article

  • Magento - blank lines being added to wsdl file

    - by dan.codes
    I am trying to call the API, but I keep getting a SOAP error saying it can't load the file. I found that the reason is about three blank lines at the top of the XML file that is returned; I found this by doing a wget on the URL. This used to work just fine. When I debug through the API controller, the response XML looks fine all the way through, and I don't see any extra whitespace at all. I have no idea what might be causing this, and I don't think there is anything we modified that would do this.
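
    Stray output before the XML in Magento is very often whitespace after a closing ?> tag, or a UTF-8 byte-order mark, in a PHP file that gets included on every request; PHP echoes those bytes before the WSDL is written. A hedged way to hunt for the usual culprits (the code-pool paths are the common ones, and the file in the second command is purely hypothetical):

        # files carrying a UTF-8 BOM (the bytes EF BB BF at the start of the file)
        grep -rl $'\xef\xbb\xbf' app/code/local app/code/community app/etc

        # inspect the last bytes of a suspect file: anything after the final "?>"
        # beyond a single newline is flushed to the client before your XML
        tail -c 16 app/code/local/MyCompany/MyModule/Helper/Data.php | od -c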

    Read the article

  • Is the F# language reference documentation available in an offline format (PDF, CHM)?

    - by stakx
    I've found several posts on hubFS of people asking whether there is, or will be, offline documentation for F#. These posts haven't been answered, so I want to give it a shot and ask the same question here on SO. Where I've looked for offline documentation so far: The April 2010 CTP release of Visual F# (version 2.0) is available for VS 2008, but it doesn't come with offline help. There's a question on SO about offline documentation for various programming languages, but F# isn't mentioned there at the time of this writing. There is of course Microsoft's F# language reference documentation (available on MSDN), which could be downloaded for offline browsing using e.g. wget. Question: Does anyone know whether any "official" offline documentation is on the way, anytime soon? (And related to this, albeit it probably can't be answered objectively: would it be reasonable to expect that F# won't undergo ECMA or ISO standardization, i.e. that there likely won't be a standards document describing the language?)

    Read the article

  • Why am I not that user after I load cookies?

    - by MoreFreeze
    I want to grab some data from a site, but it requires a logged-in account. So I registered and logged in, and I found an API on the site that can be used to grab the data. I used the Chrome plugin "cookie.txt export" to export cookies.txt, copied all the content it exported, and used a command like: wget -x --load-cookies cookies.txt http://www.example.com/api/name=xxxx but it doesn't work. It downloads the page that tells me I need to log in. So I think this site has some other verification strategy; how can I get past it? Do I have to do it in the browser manually?
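
    Before assuming extra verification, it is worth letting wget perform the login itself, so that any session cookies issued at login time are guaranteed to be in the jar; the login URL and form field names below are placeholders that would have to match the site's real login form, and some sites additionally check the User-Agent or a CSRF token, in which case this alone will not be enough:

        # log in with wget and keep the session cookies it receives
        wget --save-cookies cookies.txt --keep-session-cookies \
             --post-data 'username=xxxx&password=yyyy' \
             -O /dev/null 'http://www.example.com/login'

        # then reuse those cookies for the API call
        wget -x --load-cookies cookies.txt 'http://www.example.com/api/name=xxxx'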

    Read the article

  • Build Google App Engine Java source from Eclipse

    - by Robottinosino
    I would like to try to build Google App Engine for Java from source. I have tried this, but I don't know how to solve the javac Ant task. I am on Mac OS X 10.6.8. The reason I am trying to create a Java project from the source is that when I try to debug/step into the sources downloaded from SVN, something is wrong in Eclipse and it does not track execution at the right code line; it seems to be executing "comment lines", so I don't get accurate tracking of the code path. I think what I would ideally need is: 1) an svn checkout command with the REV number matching the latest SDK, and 2) a wget command downloading the matching jar.

    Read the article

  • How to read remote video on Amazon S3 using ffmpeg

    - by virtualize
    I need to create poster frames from videos hosted on Amazon S3 via ffmpeg. So, is there a way to use the remote video file directly on the ffmpeg command line, like this: ffmpeg -i "http://bucket.s3.amazonaws.com/video.mp4" -ss 00:00:10 -vframes 1 -f image2 "image%03d.jpg" ffmpeg just returns: http://bucket.s3.amazonaws.com/video.mp4: I/O error occurred Usually that means that input file is truncated and/or corrupted. I also tried forcing ffmpeg to use the video's mp4 container for reading: ffmpeg -f mp4 -i "http://bucket.s3.amazonaws.com/video.mp4" ... But no luck. Wget'ing this video from S3 and processing it locally works fine, of course, as does reading files remotely from other 'standard' HTTP servers. So I know that ffmpeg supports remote file reading, but why not on S3?
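
    Since downloading first is known to work, one pragmatic workaround until the HTTP issue is understood is to fetch the object to a temporary file and grab the poster frame locally; this just wraps the ffmpeg options from the question:

        # download, grab one frame at the 10-second mark, then clean up
        tmp="/tmp/s3-video-$$.mp4"
        wget -q -O "$tmp" "http://bucket.s3.amazonaws.com/video.mp4"
        ffmpeg -i "$tmp" -ss 00:00:10 -vframes 1 -f image2 "image%03d.jpg"
        rm -f "$tmp"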

    Read the article

  • Create an SEO and web accessibility analyzer

    - by rebellion
    I'm thinking of making a little web tool for analyzing the search engine optimization and web accessibility of a whole website. First of all, this is just a private tool for now. Crawling a whole website takes up a lot of resources and time. I've found that wget is the best option for downloading the markup for a whole site. I plan on using PHP/MySQL (maybe even CodeIgniter), but I'm not quite sure if that's the right way to do it. There's always someone who recommends Python, Ruby or Perl. I only know PHP and a little bit of Rails. I've also found a great HTML DOM parser class in PHP on SourceForge. But the thing is, I need some feedback on what I should and should not do, everything from how I should run the crawl to what I should be checking for with regard to SEO and WCAG. So, what comes to mind when you hear this?
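
    For the crawl step itself, wget can mirror just the markup; a sketch of an invocation that stays on one site, keeps only HTML, and is polite about request rate (the domain is a placeholder, and very old wget versions spell --adjust-extension as --html-extension):

        # recursive crawl of one site, HTML only, saved under ./crawl/ for later parsing
        wget --recursive --level=inf --no-parent \
             --accept html,htm --adjust-extension \
             --wait=1 --random-wait \
             --directory-prefix=crawl \
             http://www.example.com/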

    Read the article

  • error when trying to install nokogiri

    - by sam
    I'm trying to install nokogiri to use in a Ruby on Rails application to read XML files. I've been following the instructions on their page for Homebrew 0.9, and when I try to install libiconv from source as below: wget http://ftp.gnu.org/pub/gnu/libiconv/libiconv-1.13.1.tar.gz tar xvfz libiconv-1.13.1.tar.gz cd libiconv-1.13.1 ./configure --prefix=/usr/local/Cellar/libiconv/1.13.1 make sudo make install I get the following error: `make: *** No rule to make target `install'. Stop.` Any idea why that might be? Sorry if the answer is a really simple one; I'm pretty new to RoR and the terminal, and I've been going around in loops with this for almost a day. Any help is much appreciated!
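
    GNU make prints that exact message when there is no Makefile in the current directory defining an install target, which usually means ./configure failed partway (a missing compiler or missing Xcode command-line tools are common causes on a Mac) or the later commands were run from a different directory. A quick hedged check:

        # confirm configure actually produced a Makefile before running make install
        cd libiconv-1.13.1
        ./configure --prefix=/usr/local/Cellar/libiconv/1.13.1 && echo "configure OK"
        ls -l Makefile   # if this is missing, scroll back through configure's output
        make && sudo make install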

    Read the article

  • input URL, output contents of "view page source", i.e. after javascript / etc, library or command-line

    - by Ryan Berckmans
    I need a scalable, automated method of dumping the contents of "view page source" (the DOM) to a file. Programs such as wget or curl will non-interactively retrieve a set of URLs, but they do not execute JavaScript or any of that 'fancy stuff'. My ideal solution looks like any of the following (fantasy solutions): cat urls.txt | google-chrome --quiet --no-gui \ --output-sources-directory=~/urls-source (fantasy command line, no idea if flags like these exist) or cat urls.txt | python -c "import some-library; \ ... use some-library to process urls.txt ; output sources to ~/urls-source" As a secondary concern, I also need: dump all included javascript source to file (a la Firebug) dump pdf/image of page to file (print to file)
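
    A close real-world equivalent of the first "fantasy" command does exist in headless Chromium builds (the binary may be called chromium, chromium-browser, or google-chrome depending on the distribution); a sketch that prints the post-JavaScript DOM of each URL to its own file:

        # dump the JavaScript-rendered DOM of every URL in urls.txt
        mkdir -p ~/urls-source
        n=0
        while read -r url; do
            n=$((n + 1))
            chromium --headless --disable-gpu --dump-dom "$url" \
                > ~/urls-source/"page-$n.html"
        done < urls.txt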

    Read the article

  • Apache2 on Ubuntu Server w/ CGI, FastCGI, mod_php

    - by illegal3alien
    I've looked at various websites on configuring Apache with CGI and can't get mod_fcgid to work. It works fine using mod_php5, but I wanted to compare performance between CGI and FastCGI. I tried methods using FCGIWrapper, among various other techniques, and the only one that didn't result in an unlogged 403 or a download of the file was using "Action application/x-httpd-php /usr/bin/php-cgi". When trying to configure mod_fcgid, the browser normally just started downloading the unprocessed file; I used wget to check the headers, and the type was "application/x-httpd-php". At one point I was able to get to the page, but it resulted in a 403, which was listed in access.log but not error.log (I was told it should be in there too). I tried to get it working on fresh installs of Ubuntu Server 10.04 LTS and 10.10 and had the same results on both, so I'm not doing something correctly in terms of configuration. I also tried Virtualmin and could only get mod_php to work; the page just prompted a download when selecting cgi or fcgi from the control panel.
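
    A minimal mod_fcgid arrangement that avoids the "browser downloads the raw .php file" symptom maps .php onto the fcgid handler and points it at php-cgi. This is a hedged sketch, not a known-good Ubuntu 10.04 config: the wrapper directive is spelled FCGIWrapper in older mod_fcgid releases and FcgidWrapper in newer ones, mod_php must be disabled so the two don't fight over .php, and the module is enabled with a2enmod fcgid:

        # sketch of /etc/apache2/conf.d/php-fcgid
        <IfModule mod_fcgid.c>
            AddHandler fcgid-script .php
            FCGIWrapper /usr/bin/php-cgi .php
            <Directory /var/www>
                Options +ExecCGI
            </Directory>
        </IfModule>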

    Read the article

  • Best practice to detect iPhone app only access for web services?

    - by Gaius Parx
    I am developing an iPhone app together with web services. The iPhone app will use GET or POST to retrieve data from the web services, such as http://www.myserver.com/api/top10songs.json to get data for the top ten songs, for example. There is no user account or password for the iPhone app. What is the best practice to ensure that only my iPhone app has access to the web API http://www.myserver.com/api/top10songs.json? The iPhone SDK's UIDevice uniqueIdentifier is not sufficient, as anyone can fake the device id as a parameter when making the API call using wget, curl or a web browser. The web services API will not be published. The data of the web services is not secret or private; I just want to prevent abuse, as there are also APIs that write some data to the server, such as a usage log.
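
    A common mitigation is to have the app sign each request with a secret shared with the server (an HMAC over the path plus a timestamp) and have the server recompute and compare; it raises the bar rather than making abuse impossible, since the secret ships inside the app binary. A sketch of what such a signed request could look like, with made-up header names and secret:

        # HMAC-SHA256 over "path + timestamp" with a secret baked into the app;
        # the server recomputes it and rejects mismatches or stale timestamps
        secret="shared-secret-compiled-into-the-app"
        path="/api/top10songs.json"
        ts=$(date +%s)
        sig=$(printf '%s' "${path}${ts}" \
              | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')

        wget --header="X-Timestamp: $ts" --header="X-Signature: $sig" \
             "http://www.myserver.com/api/top10songs.json"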

    Read the article

  • How to animate the command line?

    - by The.Anti.9
    I have always wondered how people update a previous line on the command line. A great example of this is the wget command in Linux: it creates an ASCII loading bar of sorts that looks like this: [======                    ] 37% and of course the loading bar moves and the percentage changes, but it doesn't make a new line. I cannot figure out how to do this. Can someone point me in the right direction?
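
    The usual trick is to end each update with a carriage return (\r) instead of a newline, so the cursor jumps back to the start of the same line and the next write draws over it; fancier tools use terminal escape codes (curses), but \r alone covers a wget-style bar. A small sketch:

        # redraw the same line by printing "\r" first and no "\n" until the end
        for pct in $(seq 0 5 100); do
            bar=$(printf '%*s' $((pct / 5)) '' | tr ' ' '=')
            printf '\r[%-20s] %3d%%' "$bar" "$pct"
            sleep 0.1
        done
        printf '\n'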

    Read the article

  • Very very slow transfer speeds between Windows 7 and samba server running on Ubuntu 11.10/12.04 minimal

    - by kuzyt
    As mentioned in the title I tried transferring files between Windows 7 and the samba server running on both Ubuntu 11.10 and 12.04 but both showed very slow transfer speeds. Can someone please guide me in the right direction to debug this problem ? wget --output-document=/dev/null http://tokyo1.linode.com/100MB-tokyo.bin --2012-08-21 22:02:17-- http://tokyo1.linode.com/100MB-tokyo.bin Resolving tokyo1.linode.com (tokyo1.linode.com)... 106.187.33.12 Connecting to tokyo1.linode.com (tokyo1.linode.com)|106.187.33.12|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 104857600 (100M) [application/octet-stream] Saving to: `/dev/null' 8% [=============> ] 8,923,980 64.8K/s eta 15m 0s wlan0 IEEE 802.11abgn ESSID:"TNET" Mode:Managed Frequency:2.462 GHz Access Point: 58:6D:8F:26:20:7A Bit Rate=117 Mb/s Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=57/70 Signal level=-53 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:101 Invalid misc:2448 Missed beacon:0 03:00.0 Network controller: Atheros Communications Inc. AR9300 Wireless LAN adaptor (rev 01) Subsystem: Atheros Communications Inc. Device 3112 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx+ Latency: 0, Cache Line Size: 64 bytes Interrupt: pin A routed to IRQ 16 Region 0: Memory at fea00000 (64-bit, non-prefetchable) [size=128K] Expansion ROM at fea20000 [disabled] [size=64K] Capabilities: [40] Power Management version 3 Flags: PMEClk- DSI- D1+ D2- AuxCurrent=375mA PME(D0+,D1+,D2-,D3hot+,D3cold-) Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME- Capabilities: [50] MSI: Enable- Count=1/4 Maskable+ 64bit+ Address: 0000000000000000 Data: 0000 Masking: 00000000 Pending: 00000000 Capabilities: [70] Express (v2) Endpoint, MSI 00 DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported- RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop- MaxPayload 128 bytes, MaxReadReq 512 bytes DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend- LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1, Latency L0 <2us, L1 <64us ClockPM- Surprise- LLActRep- BwNot- LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+ ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt- LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt- DevCap2: Completion Timeout: Not Supported, TimeoutDis+ DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-, Selectable De-emphasis: -6dB Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS- Compliance De-emphasis: -6dB LnkSta2: Current De-emphasis Level: -6dB Capabilities: [100 v1] Advanced Error Reporting UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol- UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol- UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol- CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr- CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+ AERCap: First Error Pointer: 00, GenCap- CGenEn- ChkCap- ChkEn- Capabilities: [140 v1] Virtual Channel Caps: 
LPEVC=0 RefClk=100ns PATEntryBits=1 Arb: Fixed- WRR32- WRR64- WRR128- Ctrl: ArbSelect=Fixed Status: InProgress- VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans- Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256- Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=01 Status: NegoPending- InProgress- Capabilities: [300 v1] Device Serial Number 00-00-00-00-00-00-00-00 Kernel driver in use: ath9k Kernel modules: ath9k
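
    The wget test above measures the path through the Wi-Fi to the internet rather than the LAN path Samba uses, so it is worth measuring raw LAN throughput separately from Samba before touching smb.conf. iperf (apt-get install iperf on the Ubuntu box, plus a Windows build on the client) measures the link with no file sharing involved:

        # on the Ubuntu/Samba machine: run an iperf server
        iperf -s

        # on the Windows 7 client (or any other machine on the LAN):
        iperf -c <samba-server-ip>
        # if iperf is also slow, the problem is the network/wifi link, not Samba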

    Read the article

  • rkhunter 1.4 giving different results than the version before?

    - by dschinn1001
    with rkhunter version before ubuntu-update from 12.04 to 12.10 I had NOT these warnings like listed here: Performing file properties checks Checking for prerequisites [ Warning ] /usr/sbin/adduser [ Warning ] /usr/sbin/chroot [ Warning ] /usr/sbin/cron [ Warning ] /usr/sbin/groupadd [ Warning ] /usr/sbin/groupdel [ Warning ] /usr/sbin/groupmod [ Warning ] /usr/sbin/grpck [ Warning ] /usr/sbin/nologin [ Warning ] /usr/sbin/pwck [ Warning ] /usr/sbin/rsyslogd [ Warning ] /usr/sbin/tcpd [ Warning ] /usr/sbin/useradd [ Warning ] /usr/sbin/userdel [ Warning ] /usr/sbin/usermod [ Warning ] /usr/sbin/vipw [ Warning ] /usr/bin/awk [ Warning ] /usr/bin/basename [ Warning ] /usr/bin/chattr [ Warning ] /usr/bin/curl [ Warning ] /usr/bin/cut [ Warning ] /usr/bin/diff [ Warning ] /usr/bin/dirname [ Warning ] /usr/bin/dpkg [ Warning ] /usr/bin/dpkg-query [ Warning ] /usr/bin/du [ Warning ] /usr/bin/env [ Warning ] /usr/bin/file [ Warning ] /usr/bin/find [ Warning ] /usr/bin/GET [ Warning ] /usr/bin/groups [ Warning ] /usr/bin/head [ Warning ] /usr/bin/id [ Warning ] /usr/bin/killall [ Warning ] /usr/bin/last [ Warning ] /usr/bin/lastlog [ Warning ] /usr/bin/ldd [ Warning ] /usr/bin/less [ Warning ] /usr/bin/locate [ Warning ] /usr/bin/logger [ Warning ] /usr/bin/lsattr [ Warning ] /usr/bin/lsof [ Warning ] /usr/bin/lynx [ Warning ] /usr/bin/mail [ Warning ] /usr/bin/md5sum [ Warning ] /usr/bin/mlocate [ Warning ] /usr/bin/newgrp [ Warning ] /usr/bin/passwd [ Warning ] /usr/bin/perl [ Warning ] /usr/bin/pgrep [ Warning ] /usr/bin/pkill [ Warning ] /usr/bin/pstree [ Warning ] /usr/bin/rkhunter [ Warning ] /usr/bin/rpm [ Warning ] /usr/bin/runcon [ Warning ] /usr/bin/sha1sum [ Warning ] /usr/bin/sha224sum [ Warning ] /usr/bin/sha256sum [ Warning ] /usr/bin/sha384sum [ Warning ] /usr/bin/sha512sum [ Warning ] /usr/bin/size [ Warning ] /usr/bin/sort [ Warning ] /usr/bin/stat [ Warning ] /usr/bin/strace [ Warning ] /usr/bin/strings [ Warning ] /usr/bin/sudo [ Warning ] /usr/bin/tail [ Warning ] /usr/bin/test [ Warning ] /usr/bin/top [ Warning ] /usr/bin/touch [ Warning ] /usr/bin/tr [ Warning ] /usr/bin/uniq [ Warning ] /usr/bin/users [ Warning ] /usr/bin/vmstat [ Warning ] /usr/bin/w [ Warning ] /usr/bin/watch [ Warning ] /usr/bin/wc [ Warning ] /usr/bin/wget [ Warning ] /usr/bin/whatis [ Warning ] /usr/bin/whereis [ Warning ] /usr/bin/which [ Warning ] /usr/bin/who [ Warning ] /usr/bin/whoami [ Warning ] /usr/bin/unhide.rb [ Warning ] /usr/bin/gawk [ Warning ] /usr/bin/lwp-request [ Warning ] /usr/bin/heirloom-mailx [ Warning ] /usr/bin/w.procps [ Warning ] /sbin/depmod [ Warning ] /sbin/fsck [ Warning ] /sbin/ifconfig [ Warning ] /sbin/ifdown [ Warning ] /sbin/ifup [ Warning ] /sbin/init [ Warning ] /sbin/insmod [ Warning ] /sbin/ip [ Warning ] /sbin/lsmod [ Warning ] /sbin/modinfo [ Warning ] /sbin/modprobe [ Warning ] /sbin/rmmod [ Warning ] /sbin/route [ Warning ] /sbin/runlevel [ Warning ] /sbin/sulogin [ Warning ] /sbin/sysctl [ Warning ] /bin/bash [ Warning ] /bin/cat [ Warning ] /bin/chmod [ Warning ] /bin/chown [ Warning ] /bin/cp [ Warning ] /bin/date [ Warning ] /bin/df [ Warning ] /bin/dmesg [ Warning ] /bin/echo [ Warning ] /bin/ed [ Warning ] /bin/egrep [ Warning ] /bin/fgrep [ Warning ] /bin/fuser [ Warning ] /bin/grep [ Warning ] /bin/ip [ Warning ] /bin/kill [ Warning ] /bin/less [ Warning ] /bin/login [ Warning ] /bin/ls [ Warning ] /bin/lsmod [ Warning ] /bin/mktemp [ Warning ] /bin/more [ Warning ] /bin/mount [ Warning ] /bin/mv [ Warning ] /bin/netstat [ Warning ] /bin/ping [ 
Warning ] /bin/ps [ Warning ] /bin/pwd [ Warning ] /bin/readlink [ Warning ] /bin/sed [ Warning ] /bin/sh [ Warning ] /bin/su [ Warning ] /bin/touch [ Warning ] /bin/uname [ Warning ] /bin/which [ Warning ] /bin/dash [ Warning ] It seems that rkhunter 1.4 is somehow oversensitive about changed binaries? chkrootkit finds nothing and reports no warnings either.
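
    After a distribution upgrade every one of those binaries legitimately changes, so rkhunter's stored file-properties database no longer matches; warnings like these are expected after going from 12.04 to 12.10 rather than a sign of oversensitivity. Once you are satisfied the system is clean (chkrootkit and the package manager's own verification agree), the usual step is to rebaseline rkhunter's database:

        # refresh rkhunter's stored file properties so the post-upgrade binaries
        # become the new baseline -- only do this on a system you trust
        sudo rkhunter --propupd
        sudo rkhunter --check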

    Read the article

  • Access forbidden 403 error in xampp

    - by Ramvignesh
    I am very new to XAMPP. I made a fresh XAMPP install with the following commands: sudo su cd /tmp wget bit.ly/1cmyrUo -O xampp-32bit.run chmod 777 ./xampp-32bit.run sudo ./xampp-32bit.run Then I made a Perl file to check whether my XAMPP works. The following is my sample.pl file content: #!usr/bin/perl print "content-type:text/html\n"; print(header()); use CGI qw(:standard); print(start_html()); print "Hello. I am ram"; print(end_html()); After copying my Perl file from /home/vicky/desktop to /opt/lampp/cgi-bin, I started XAMPP with the following command: /opt/lampp/lampp start Then I opened http://localhost/cgi-bin/sample.pl in my Mozilla browser and just got an Access Forbidden (403) page. I found only answers relating to the 'new security concept' error and the 'accessing virtual host' issue. I did come across an Ask Ubuntu question a bit similar to mine; it had no answers, just some comments. One comment suggested changing the file permissions and pointed to another page for help, which said to change the directory permissions to 755 and the file permissions to 644 to resolve this kind of issue. When I tried to do that, I found that my cgi-bin directory already had 755 permissions and my sample.pl had 644 permissions. I have no solution now. PostScript: I have attached the content of my /opt/lampp/apache2/conf/httpd.conf file; I hope this will help answerers understand my problem completely. Alias /bitnami/ "/opt/lampp/apache2/htdocs/" Alias /bitnami "/opt/lampp/apache2/htdocs" <Directory "/opt/lampp/apache2/htdocs"> Options Indexes FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory>
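
    One likely culprit here: the 755/644 advice applies to static content, but a CGI script must itself be executable by Apache; with 644 permissions mod_cgi refuses to run it and answers with exactly this kind of 403. A hedged first step (and note that, separately, the shebang should be #!/usr/bin/perl with a leading slash, or the script will fail with a different error once it does run):

        # CGI scripts need the execute bit; 644 is enough for HTML, not for CGI
        chmod 755 /opt/lampp/cgi-bin/sample.pl
        # then check the Apache error log if the 403 persists
        tail -n 20 /opt/lampp/logs/error_log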

    Read the article

  • Radeon HD 2000, 3000, 4000 on 12.10 Quantal: fglrx (legacy) 12.6 unsupported, what to do?

    - by Andrew Mao
    After upgrading to 12.10 quantal, the packaged version of fglrx no longer works. I discovered that this is because there is a separate 'legacy' fglrx driver for the HD 2k-4k series cards, but it is incompatible with the xorg server on 12.10. This is the most current version of the driver for HD 2000 through HD 4000 series cards. You can't use the non-legacy fglrx driver, but you can use the open-source radeon driver if you prefer your WM compositing to be laggy and your YouTube videos to play like they would on a Pentium MMX series: http://support.amd.com/us/kbarticles/Pages/catalyst126legacyproducts.aspx Usually this driver can be installed in the following way, necessary because apt-get install fglrx would pull in the non-legacy driver: wget http://www2.ati.com/drivers/legacy/amd-driver-installer-12.6-legacy-x86.x86_64.zip unzip amd-driver-installer-* sudo sh ./amd-driver-installer-*.run --buildpkg Ubuntu/quantal sudo dpkg -i fglrx*.deb sudo aticonfig --initial -f If you use a different version of fglrx (for example, a newer 12.9 that doesn't support those cards) then the final command will give you an error no supported hardware detected or something similar. However, everything works at this point and you will get a reasonable xorg.conf: ... other stuff Section "Device" Identifier "aticonfig-Device[0]-0" Driver "fglrx" BusID "PCI:1:5:0" EndSection ... other stuff At this point you're supposed to reboot and everything will be working with the fglrx driver. However, upon rebooting, you'll be treated to the following errors in Xorg.0.log when fglrx attempts to load: (EE) Failed to load /usr/lib/xorg/modules/drivers/fglrx_drv.so: /usr/lib/xorg/modules/drivers/fglrx_drv.so: undefined symbol: noXFree86DRIExtension Some searching around will show that this is a problem with the legacy ATI drivers not supporting xserver 1.13 or newer. (Arch Linux thread) ATI has released a fixed driver for its most recent (HD 5000 series or later) cards, but not for the 'legacy' cards yet. The non-legacy ATI drivers can't be used with the old cards. What should an Ubuntu user, using one of these HD 2000-4000 series cards, do? Wait for an updated 'legacy' ATI driver that properly works with xserver 1.13? Downgrade back to 12.04 Precise, which uses xserver 1.11? Try to downgrade xserver on 12.10 Quantal to 1.12, which could possibly break Unity and GNOME? Forced upgrade to HD 5000 series or later card? (Not possible with integrated graphics...) Some other 1337 action that fixes this problem painlessly?

    Read the article
