Search Results

Search found 35634 results on 1426 pages for 'internet directory'.


  • Free web gallery installation that can use existing directory hierarchy in filesystem?

    - by user1338062
    There are several different free software gallery projects (Gallery, Coppermine, etc.), but as far as I know each of them creates a copy of imported images in its internal storage (be it a directory structure or a database). Is there any gallery software that would allow keeping the existing directory hierarchy of media files (images, videos) as-is, and just store their metadata in a database? I guess at least various NAS solutions ship with software like this.

    Read the article

  • How to share an internet connection in a network with openSUSE 12.2?

    - by eneepo
    I have two computers, with openSUSE 12.2 on both of them:

        Internet (192.168.2.1)
            |
        eth0 (192.168.2.2)
        First Computer
        eth1 (192.168.2.10)
            |
        eth0 (192.168.2.11)
        Second Computer

    I am following the instructions at http://www.swerdna.net.au/suseics.html, but as soon as I enable eth1 on the first computer I lose the connection to the internet. I followed all the instructions; I don't know which part I'm misconfiguring.
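
    For reference, connection sharing of this shape usually comes down to IP forwarding plus NAT on the first computer. A minimal sketch, independent of the guide above (the interface roles are assumptions taken from the layout: eth0 faces the internet, eth1 faces the second computer):

        # on the first computer, as root
        echo 1 > /proc/sys/net/ipv4/ip_forward       # let the kernel route between interfaces
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT

    The second computer would then use 192.168.2.10 as its default gateway, plus a reachable DNS server.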

    Read the article

  • error while loading shared libraries; cannot open shared object file: No such file or directory

    - by glitchyme
    The program evince complains that it can't find libfreetype.so.6; however, I clearly have the file, and it's included in my LD_LIBRARY_PATH; furthermore, I have another program which uses libfreetype6 and is able to run just fine. What's going on here?

        jbud@jb-pc ~> evince
        evince: error while loading shared libraries: libfreetype.so.6: cannot open shared object file: No such file or directory
        jbud@jb-pc ~> ldd /usr/bin/evince | grep freetype
        libfreetype.so.6 => /usr/local/lib/libfreetype.so.6 (0x00007f912179d000)
        jbud@jb-pc ~> file /usr/local/lib/libfreetype.so.6
        /usr/local/lib/libfreetype.so.6: symbolic link to `libfreetype.so.6.11.1'
        jbud@jb-pc ~> file /usr/local/lib/libfreetype.so.6.11.1
        /usr/local/lib/libfreetype.so.6.11.1: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=0x21a4b8005e0c9a42af001b35fb984f4e25efc71c, not stripped
        jbud@jb-pc ~> echo $LD_LIBRARY_PATH
        /usr/lib/:/usr/lib64/:/usr/lib/x86_64-linux-gnu/:/usr/local/lib/
        jbud@jb-pc ~> ldd jdrive/jstuff/work/personal/noengine/client | grep freetype
        libfreetype.so.6 => /usr/local/lib/libfreetype.so.6 (0x00007feb5ac89000)
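
    One hedged diagnostic, assuming /usr/local/lib is supposed to be known to the dynamic linker system-wide rather than only through LD_LIBRARY_PATH (which some binaries ignore, for example under secure-execution rules): register the directory and rebuild the linker cache, then confirm the library is actually in it. The conf file name here is an assumption.

        sudo sh -c 'echo /usr/local/lib > /etc/ld.so.conf.d/usrlocal.conf'
        sudo ldconfig                   # rebuild the dynamic linker cache
        ldconfig -p | grep freetype     # is libfreetype.so.6 in the cache now?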

    Read the article

  • Apache config file. Redirect permanent gives 403 error

    - by Homunculus Reticulli
    I am changing my domain from foo.com to foobar.org. I used a Redirect permanent in my Apache config file and then restarted Apache. When I try to access the old domain foo.com, I get a 403 error. This is what my Apache config file looks like:

        <VirtualHost *:80>
            ServerName foo.com
            #ServerAlias www.foo.com
            #ServerAdmin [email protected]
            Redirect permanent / http://www.foobar.org/
            DocumentRoot /path/to/project/foo/web
            DirectoryIndex index.php
            # CustomLog with format nickname
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            CustomLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foo.access.log" common
            LogLevel notice
            ErrorLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foo.errors.log"
            <Directory />
                Order Deny,Allow
                Deny from all
            </Directory>
            <Files ~ "^\.ht">
                Order allow,deny
                Deny from all
            </Files>
            <Directory /path/to/project/foo/web>
                Options -Indexes -Includes
                AllowOverride All
                Allow from All
                RewriteEngine On
                # We check if the .html version is here (caching)
                RewriteRule ^$ index.html [QSA]
                RewriteRule ^([^.])$ $1.html [QSA]
                RewriteCond %{REQUEST_FILENAME} !-f
                # No, so we redirect to our front end controller
                RewriteRule ^(.*)$ index.php [QSA,L]
            </Directory>
            <Directory /path/to/project/foo/web/uploads>
                Options -ExecCGI -FollowSymLinks -Indexes -Includes
                AllowOverride None
                php_flag engine off
            </Directory>
            Alias /sf /lib/vendor/symfony/symfony-1.3.8/data/web/sf
            <Directory /lib/vendor/symfony/symfony-1.3.8/data/web/sf>
            # Alias /sf /lib/vendor/symfony/symfony-1.4.19/data/web/sf
            # <Directory /lib/vendor/symfony/symfony-1.4.19/data/web/sf>
                Options -Indexes -Includes
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>

    Can anyone spot what I may be doing wrong? The site foobar.org does exist, so I don't know why this error occurs. Help?
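
    A quick way to see what the old vhost actually answers, without browser caching getting in the way (a sketch; server-ip is a placeholder for the machine's address):

        curl -I -H 'Host: foo.com' http://server-ip/

    A 301 with a Location: http://www.foobar.org/ header would mean the Redirect fires; a 403 would mean the request is being handled by the access rules instead.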

    Read the article

  • Robots.txt practices with .htaccess redirections (inherits)

    - by Jayhal
    I have a question regarding how to write robots.txt files for many domains and subdomains with redirects in place. We have a hosting account with primary and add-on domains. All of our domains and subdomains, including the primary domain, are redirected via .htaccess 301s to their own subdirectories in the primary domain's root directory. I'm confused about how I would write the robots.txt for certain directories.

    First, I wanted to confirm I am right in understanding that for domains and subdomains, crawlers will look to the directory that acts as that URL's root directory for the crawling rules (robots.txt). Also, that a directory will not be affected by a robots.txt present in its parent directory if the directory has its own domain/subdomain and that URL is the one being accessed by crawlers. (I'm pretty sure, but I wanted to confirm I didn't have a fundamentally flawed understanding of robots.txt.)

    Second, in the original root directory on the account (where the primary domain was directed before the .htaccess was put in place), what should the robots.txt contain? When crawlers look to crawl our primary domain, will they look to the original root directory for the robots.txt, or will they reference the file contained in the new subdirectory where all the primary domain's site files are located? If the former, what should the root's robots.txt include, if anything at all? Would I be right to include a simple 'Disallow: /' for all agents (see the sample below), and then include more specific robots.txt files in each subdirectory with more specific instructions? Would that affect the crawling of the directory where the primary domain is now redirected?

    Any help is greatly appreciated. Thanks!
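
    For reference, the "block everything" file contemplated above is, in standard robots.txt syntax, just:

        User-agent: *
        Disallow: /

    Crawlers fetch /robots.txt from the root of each host they crawl, so what matters is which file a request for /robots.txt on each domain ends up returning after the redirects.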

    Read the article

  • How to modify a Perl script to move packages into a different directory based on version? [migrated]

    - by Peter Penzov
    I have this Perl script, which is used to sort packages based on package version:

        #!/usr/bin/perl -w
        #
        # Compare versions of all *.rpm files against the
        # latest packages installed (if installed)
        #
        # Usage:
        # rpmver.pl
        # This script looks for all *.rpm files.
        #
        use strict;
        use RPM2;

        my $rpm_db = RPM2->open_rpm_db();
        for my $filename (<*.rpm>) {
            my $h = RPM2->open_package( $filename );
            # Ensure we compare against the newest
            # package of the given name.
            my ($installed) = sort { $b <=> $a } $rpm_db->find_by_name($h->name);
            if (not $installed) {
                printf "Package %s not installed.\n", $h->as_nvre;
            } else {
                my ($result) = ($h <=> $installed);
                if ($result < 0) {
                    printf "Installed package %s newer than file %s\n",
                        $installed->as_nvre, $h->as_nvre;
                } else {
                    printf "File %s newer than installed package %s\n",
                        $h->as_nvre, $installed->as_nvre;
                }
            }
        }

    I have a Linux repository with SRPMs. I want to move the latest packages into a different directory, for example latest_packages. How must the script be modified?
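
    One hedged shape for the change (a sketch, not the only way): once the comparison decides the file is newer, move it with the core File::Copy module. The destination name is the one suggested above; $h, $installed and $filename are the variables from the loop in the script.

        use File::Copy qw(move);
        use File::Path qw(make_path);

        my $dest = "latest_packages";
        make_path($dest);    # create the target directory if it doesn't exist

        # ... inside the loop, replacing the final else-branch:
        printf "File %s newer than installed package %s\n",
            $h->as_nvre, $installed->as_nvre;
        move($filename, "$dest/$filename")
            or warn "could not move $filename: $!";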

    Read the article

  • Is what someone publishes on the Internet fair game when considering them for employment as a programmer?

    - by Jon Hopkins
    (Originally posted on Stack Overflow, but closed there and more relevant here.) We first interviewed a guy for a technical role and he was pretty good. Before the second interview we googled him and found his MySpace page, which could, to put it mildly, be regarded as inappropriate. Just to be clear, there was no doubt that it was his page (name, photos, matching biographical information and so on). The content was entirely personal and in no way related to his professional abilities or attitude. Is it fair to consider this when thinking about whether to offer him a job? In most situations my response would be that what goes on in someone's private life is their own doing. However, for anyone technical who professes (implicitly or explicitly) to understand the Internet and the possibilities it offers, is posting things in a way which can so obviously be discovered a significant error of judgement? EDIT: Clarification - essentially it was a fairly graphic commentary on porn (but of, shall we say, a non-academic nature). I'm actually more interested in the general concept than the specific incident, as it's something we're likely to see more of in the future as people put more and more of themselves online. My concerns are not primarily about him and how he feels about such things (he's white, straight, male and about the last possible victim of discrimination on the planet in that sense), but more about how it reflects on the company that a very simple search (basically his name) returns these things, and that clients may also do it. We work in a relatively conservative industry.

    Read the article

  • Wireless connected but internet not working, modem is fine because I'm typing over wifi on my android

    - by Sandro Livaja
    I can't connect to the Internet in 11.10. I tried switching my wifi card to another USB slot but nothing changed. Is it possible that the card is able to connect while not being able to receive packets?

        $ ifconfig
        eth0      Link encap:Ethernet  HWaddr 40:61:86:35:11:b0
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:43 Base address:0x6000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        wlan0     Link encap:Ethernet  HWaddr 74:ea:3a:8c:d5:5c
                  inet addr:192.168.2.5  Bcast:192.168.2.255  Mask:255.255.255.0
                  inet6 addr: fe80::76ea:3aff:fe8c:d55c/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:12 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:102 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1209 (1.2 KB)  TX bytes:15329 (15.3 KB)
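
    The interface itself looks up (wlan0 has an address and traffic counters), so a hedged first pass is to separate routing from DNS; 192.168.2.1 is assumed to be the router, going by the addresses above:

        ping -c 3 192.168.2.1     # can we reach the router at all?
        ping -c 3 8.8.8.8         # can we reach the internet by raw IP?
        nslookup ubuntu.com       # does name resolution work?

    If the first two succeed and the third fails, the problem is DNS configuration rather than the card.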

    Read the article

  • What are some internet trends that you've noticed over the past ~10 years? [closed]

    - by Michael
    I'll give an example of one that I've noticed: the number of web sites that ask for your email address (GOOG ID, YAHOO! ID, etc.) has skyrocketed. I can come up with no legitimate reason for this other than (1) password reset [there are other ways to do this], or (2) to remind you that you have an account there, based upon the time of your last visit. Why does a web site need to know your email address (Google ID, etc.) if all you want to do is...

      - download a file (no legit reason whatsoever)
      - play a game (no legit reason whatsoever)
      - take an IQ test or search a database (no legit reason whatsoever)
      - watch a video or view a picture (no legit reason whatsoever)
      - read a forum (no legit reason whatsoever)
      - post on a forum (mildly legit reason: password reset)
      - subscribe to a newsletter (the only difference between a newsletter and a blog is that you're more likely to forget about the web site than you are to forget about your email address; the majority of web sites do not send out newsletters, however, so this can't be the justification)
      - post twitter messages or other instant messaging (mildly legit reason: password reset)
      - buy something (mildly legit reasons: password reset, plus giving you a copy of a receipt that they can't delete, as receipts stored on their server can be deleted)

    On the other hand, I can think of plenty of very shady reasons for asking for this information:

      - so the NSA, CIA, FBI, etc. can very easily track what you do, by reading your email or by asking GOOG, etc. what sites you used your GOOG ID at
      - to use the password that you provide for your account in order to get into your email account (most people use the same password for all of their accounts), find all of your other accounts in your inbox, and then get into all of those accounts
      - to sell your email address to spammers

    These reasons, I believe, are why you are constantly asked to provide your email address. I can come up with no other explanations whatsoever. Question 1: Can anyone think of any legitimate or illegitimate reasons for asking for someone's email address? Question 2: What are some other interesting internet trends of the past ~10 years?

    Read the article

  • Cat all files in a directory, with a specific file at the beginning and end?

    - by Aeisor
    Is there a way to cat all files in a given directory, but with a particular file at the beginning and end? For example, say I have file1.js, file2.js, file3.js, file4.js, file5.js. Effectively I would like to:

        cat file2.js file*.js file3.js > /var/www/output.js

    I've tried a few variations of these:

        find ! -name "file2.js" ! -name "file3.js" -type f -exec cat file2.js {} file3.js > /var/www/js/output.js \;
        find ! -name "file2.js" ! -name "file3.js" -type f | xargs -I files cat file2.js files file3.js > /var/www/output.js

    but the best I can get out of them is file2.js added before and file3.js added after every other file (multiple times). I know I could specify the files in the order I wanted manually, but this is not maintainable (I'm expecting potentially 100 files). I have looked through man cat, as well as a handful of websites devoted to xargs, find and cat, to no avail. Thanks in advance.
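
    One shape that satisfies the stated constraints, as a sketch (GNU find/sort/xargs assumed): concatenate the middle files to stdout, then let cat splice that stream between the bookends via its '-' operand:

        find . -maxdepth 1 -name 'file*.js' ! -name file2.js ! -name file3.js -print0 \
          | sort -z \
          | xargs -0 cat \
          | cat file2.js - file3.js > /var/www/output.js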

    Read the article

  • Is it possible to use 3G internet for a TCP/IP game server?

    - by Amit Ofer
    I'm working on a turn-based multiplayer Android game with a friend. I started working on the game server and client using socket programming. I found a few tutorials on how to implement a basic chat on Android, and I started extending that example to suit my needs. Basically the game is really simple, and the communication only includes sending a few strings from the client to the server every turn, and sending the calculated scores back to all the clients after each turn. The idea is that one of the players creates the game and thus initializes the server, and each player connects to this client using its IP. I tried this solution and it seems to work great when all the players are using the same wifi connection, or by using router port forwarding. The problem is when trying to use 3G internet for the server; I guess the problem is that a 3G IP address isn't publicly reachable and you can't use port forwarding there (correct me if I'm wrong). Is there a way to overcome this issue, or is the only solution to limit my game to wifi, or to think of a different solution than standard socket programming, e.g. a web server? What do you think would be the best approach here? Thanks.

    Read the article

  • Cannot create Desktop shortcut

    - by Pantelis
    I have a WiX project and I want to automatically create a ProgramMenu and a Desktop shortcut. I've tried the following, but the Desktop shortcut is not created; the ProgramMenu shortcut works great.

        <Product Id="*" Name="Application Name" Language="1033" Version="1.0.0.0" Manufacturer="Company Name">
            <Package InstallerVersion="200" Compressed="yes" InstallScope="perMachine" Description="A description" Comments="Some Comments" />
            <MajorUpgrade DowngradeErrorMessage="A newer version of [ProductName] is already installed." />
            <MediaTemplate EmbedCab="yes"/>

            <!-- Minimal UI -->
            <UIRef Id="WixUI_Minimal"/>

            <!-- Adding the referenced components -->
            <Feature Id="Complete" Title="inStorHDRadio Complete" Level="1">
                <ComponentGroupRef Id="InstallationComponents" />
                <ComponentRef Id="ApplicationProgramsMenuShortcut"/>
                <ComponentRef Id="ApplicationDesktopShortcut"/>
            </Feature>
        </Product>

        <Fragment>
            <Directory Id="TARGETDIR" Name="SourceDir">
                <!-- Installation Folder -->
                <Directory Id="ProgramFilesFolder">
                    <Directory Id="CompanyFolder" Name="CompanyName">
                        <Directory Id="InstallationFolder" Name="ApplicationName"/>
                    </Directory>
                </Directory>
                <!-- Programs Menu Shortcut Folder -->
                <Directory Id="ProgramMenuFolder" Name="ProgramsMenu">
                    <Directory Id="ProgramsMenuCompanyFolder" Name="CompanyName">
                        <Directory Id="ProgramsMenuShortcutFolder" Name="ApplicationName"/>
                    </Directory>
                </Directory>
                <!-- Desktop Shortcut Folder -->
                <Directory Id="DesktopShortcutFolder" Name="Desktop"/>
            </Directory>
        </Fragment>

        <!-- Components -->
        <Fragment>
            <ComponentGroup Id="inStorHDRadioComponents" Directory="InstallationFolder">
                <!-- All application components in Program Files -->
            </ComponentGroup>

            <!-- SHORTCUTS -->
            <!-- ProgramsMenu -->
            <DirectoryRef Id='ProgramsMenuShortcutFolder'>
                <Component Id='ApplicationProgramsMenuShortcut'>
                    <RemoveFolder Id='RemoveProgramsMenuShortcutFolder' Directory='ProgramsMenuShortcutFolder' On='uninstall' />
                    <RemoveFolder Id='RemoveProgramsMenuCompanyFolder' Directory='ProgramsMenuCompanyFolder' On='uninstall' />
                    <Shortcut Id='ApplicationProgramsMenuShortcut' Name='Company Name' Target='[#Application.exe]' WorkingDirectory='InstallationFolder' Icon='application.ico' />
                    <RegistryValue Name='RegistryValueProgramMenuShortcut' Root='HKCU' Key='Software\Microsoft\[Manufacturer]\[ProductName]' Type='integer' Value='1' />
                </Component>
            </DirectoryRef>

            <!-- Desktop -->
            <DirectoryRef Id='DesktopShortcutFolder'>
                <Component Id='ApplicationDesktopShortcut'>
                    <RemoveFolder Id='RemoveDesktopShortcutFolder' Directory='DesktopShortcutFolder' On='uninstall'/>
                    <Shortcut Id='ApplicationDesktopShortcut' Name='Application Name' Target='[#Bootstrapper.exe]' WorkingDirectory='InstallationFolder' Directory='DesktopShortcutFolder' Advertise='no' Icon='application.ico'/>
                    <RegistryValue Name='RegistryValDesktopShortcut' Root='HKCU' Key='Software\[Manufacturer]\[ProductName]' KeyPath='yes' Type='integer' Value='1' />
                </Component>
            </DirectoryRef>
        </Fragment>

        <Fragment>
            <Icon Id="application.ico" SourceFile="Files\application.ico" />
            <Icon Id="programs.ico" SourceFile="Files\programs.ico"/>
            <Property Id="ARPPRODUCTICON" Value="programs.ico" />
            <Property Id="ARPHELPLINK" Value="http://www.company.com" />
        </Fragment>

    What's wrong with the code? The ProgramMenu shortcut is working perfectly fine, but the desktop one is not getting created.

    Read the article

  • How to install PHP-FPM and PHP on Ubuntu?

    - by Sanoj
    I have problems installing PHP with PHP-FPM on Ubuntu. I followed the instructions on the PHP-FPM site (PHP FastCGI Process Manager), but when running ../configure && make to compile PHP I got a lot of "not found" messages (listed below), and I don't know how to fix them. I tried both the integrated compilation and the separate compilation, but both end up with the same messages. Is there a solution or workaround? An alternative way to install PHP with PHP-FPM?

        ../configure: 11986: ac_fn_c_check_func: not found
        ../configure: 11997: ac_fn_c_check_func: not found
        ../configure: 12147: 5: Bad file descriptor
        ../configure: 12147: :: checking for socket in -lsocket: not found
        ../configure: 12147: 6: Bad file descriptor
        ../configure: 12147: checking for socket in -lsocket... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 12147: ac_fn_c_try_link: not found
        ../configure: 12147: 5: Bad file descriptor
        ../configure: 12147: :: result: no: not found
        ../configure: 12147: 6: Bad file descriptor
        ../configure: 12147: no: not found
        ../configure: 12147: 5: Bad file descriptor
        ../configure: 12147: :: checking for __socket in -lsocket: not found
        ../configure: 12147: 6: Bad file descriptor
        ../configure: 12147: checking for __socket in -lsocket... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 12147: ac_fn_c_try_link: not found
        ../configure: 12147: 5: Bad file descriptor
        ../configure: 12147: :: result: no: not found
        ../configure: 12147: 6: Bad file descriptor
        ../configure: 12147: no: not found
        ../configure: 12154: ac_fn_c_check_func: not found
        ../configure: 12165: ac_fn_c_check_func: not found
        ../configure: 12315: 5: Bad file descriptor
        ../configure: 12315: :: checking for socketpair in -lsocket: not found
        ../configure: 12315: 6: Bad file descriptor
        ../configure: 12315: checking for socketpair in -lsocket... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 12315: ac_fn_c_try_link: not found
        ../configure: 12315: 5: Bad file descriptor
        ../configure: 12315: :: result: no: not found
        ../configure: 12315: 6: Bad file descriptor
        ../configure: 12315: no: not found
        ../configure: 12315: 5: Bad file descriptor
        ../configure: 12315: :: checking for __socketpair in -lsocket: not found
        ../configure: 12315: 6: Bad file descriptor
        ../configure: 12315: checking for __socketpair in -lsocket... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 12315: ac_fn_c_try_link: not found
        ../configure: 12315: 5: Bad file descriptor
        ../configure: 12315: :: result: no: not found
        ../configure: 12315: 6: Bad file descriptor
        ../configure: 12315: no: not found
        ../configure: 12322: ac_fn_c_check_func: not found
        ../configure: 12333: ac_fn_c_check_func: not found
        ../configure: 12483: 5: Bad file descriptor
        ../configure: 12483: :: checking for htonl in -lsocket: not found
        ../configure: 12483: 6: Bad file descriptor
        ../configure: 12483: checking for htonl in -lsocket... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 12483: ac_fn_c_try_link: not found
        ../configure: 12483: 5: Bad file descriptor
        ../configure: 12483: :: result: no: not found
        ../configure: 12483: 6: Bad file descriptor
        ../configure: 12483: no: not found
        ../configure: 12483: 5: Bad file descriptor
        ../configure: 12483: :: checking for __htonl in -lsocket: not found
        ../configure: 12483: 6: Bad file descriptor
        ../configure: 12483: checking for __htonl in -lsocket... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 12483: ac_fn_c_try_link: not found
        ../configure: 12483: 5: Bad file descriptor
        ../configure: 12483: :: result: no: not found
        ../configure: 12483: 6: Bad file descriptor
        ../configure: 12483: no: not found
        ../configure: 12490: ac_fn_c_check_func: not found
        ../configure: 12501: ac_fn_c_check_func: not found
        ../configure: 12651: 5: Bad file descriptor
        ../configure: 12651: :: checking for gethostname in -lnsl: not found
        ../configure: 12651: 6: Bad file descriptor
        ../configure: 12651: checking for gethostname in -lnsl... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 12651: ac_fn_c_try_link: not found
        ../configure: 12651: 5: Bad file descriptor
        ../configure: 12651: :: result: no: not found
        ../configure: 12651: 6: Bad file descriptor
        ../configure: 12651: no: not found
        ../configure: 12651: 5: Bad file descriptor
        ../configure: 12651: :: checking for __gethostname in -lnsl: not found
        ../configure: 12651: 6: Bad file descriptor
        ../configure: 12651: checking for __gethostname in -lnsl... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 12651: ac_fn_c_try_link: not found
        ../configure: 12651: 5: Bad file descriptor
        ../configure: 12651: :: result: no: not found
        ../configure: 12651: 6: Bad file descriptor
        ../configure: 12651: no: not found
        ../configure: 12658: ac_fn_c_check_func: not found
        ../configure: 12669: ac_fn_c_check_func: not found
        ../configure: 12819: 5: Bad file descriptor
        ../configure: 12819: :: checking for gethostbyaddr in -lnsl: not found
        ../configure: 12819: 6: Bad file descriptor
        ../configure: 12819: checking for gethostbyaddr in -lnsl... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 12819: ac_fn_c_try_link: not found
        ../configure: 12819: 5: Bad file descriptor
        ../configure: 12819: :: result: no: not found
        ../configure: 12819: 6: Bad file descriptor
        ../configure: 12819: no: not found
        ../configure: 12819: 5: Bad file descriptor
        ../configure: 12819: :: checking for __gethostbyaddr in -lnsl: not found
        ../configure: 12819: 6: Bad file descriptor
        ../configure: 12819: checking for __gethostbyaddr in -lnsl... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 12819: ac_fn_c_try_link: not found
        ../configure: 12819: 5: Bad file descriptor
        ../configure: 12819: :: result: no: not found
        ../configure: 12819: 6: Bad file descriptor
        ../configure: 12819: no: not found
        ../configure: 12826: ac_fn_c_check_func: not found
        ../configure: 12837: ac_fn_c_check_func: not found
        ../configure: 12987: 5: Bad file descriptor
        ../configure: 12987: :: checking for yp_get_default_domain in -lnsl: not found
        ../configure: 12987: 6: Bad file descriptor
        ../configure: 12987: checking for yp_get_default_domain in -lnsl... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 12987: ac_fn_c_try_link: not found
        ../configure: 12987: 5: Bad file descriptor
        ../configure: 12987: :: result: no: not found
        ../configure: 12987: 6: Bad file descriptor
        ../configure: 12987: no: not found
        ../configure: 12987: 5: Bad file descriptor
        ../configure: 12987: :: checking for __yp_get_default_domain in -lnsl: not found
        ../configure: 12987: 6: Bad file descriptor
        ../configure: 12987: checking for __yp_get_default_domain in -lnsl... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 12987: ac_fn_c_try_link: not found
        ../configure: 12987: 5: Bad file descriptor
        ../configure: 12987: :: result: no: not found
        ../configure: 12987: 6: Bad file descriptor
        ../configure: 12987: no: not found
        ../configure: 12995: ac_fn_c_check_func: not found
        ../configure: 13006: ac_fn_c_check_func: not found
        ../configure: 13156: 5: Bad file descriptor
        ../configure: 13156: :: checking for dlopen in -ldl: not found
        ../configure: 13156: 6: Bad file descriptor
        ../configure: 13156: checking for dlopen in -ldl... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 13156: ac_fn_c_try_link: not found
        ../configure: 13156: 5: Bad file descriptor
        ../configure: 13156: :: result: no: not found
        ../configure: 13156: 6: Bad file descriptor
        ../configure: 13156: no: not found
        ../configure: 13156: 5: Bad file descriptor
        ../configure: 13156: :: checking for __dlopen in -ldl: not found
        ../configure: 13156: 6: Bad file descriptor
        ../configure: 13156: checking for __dlopen in -ldl... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 13156: ac_fn_c_try_link: not found
        ../configure: 13156: 5: Bad file descriptor
        ../configure: 13156: :: result: no: not found
        ../configure: 13156: 6: Bad file descriptor
        ../configure: 13156: no: not found
        ../configure: 13164: 5: Bad file descriptor
        ../configure: 13164: :: checking for sin in -lm: not found
        ../configure: 13164: 6: Bad file descriptor
        ../configure: 13164: checking for sin in -lm... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 13196: ac_fn_c_try_link: not found
        ../configure: 13198: 5: Bad file descriptor
        ../configure: 13198: :: result: no: not found
        ../configure: 13198: 6: Bad file descriptor
        ../configure: 13198: no: not found
        ../configure: 13214: ac_fn_c_check_func: not found
        ../configure: 13225: ac_fn_c_check_func: not found
        ../configure: 13510: 5: Bad file descriptor
        ../configure: 13510: :: checking for inet_aton in -lresolv: not found
        ../configure: 13510: 6: Bad file descriptor
        ../configure: 13510: checking for inet_aton in -lresolv... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 13510: ac_fn_c_try_link: not found
        ../configure: 13510: 5: Bad file descriptor
        ../configure: 13510: :: result: no: not found
        ../configure: 13510: 6: Bad file descriptor
        ../configure: 13510: no: not found
        ../configure: 13510: 5: Bad file descriptor
        ../configure: 13510: :: checking for __inet_aton in -lresolv: not found
        ../configure: 13510: 6: Bad file descriptor
        ../configure: 13510: checking for __inet_aton in -lresolv... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 13510: ac_fn_c_try_link: not found
        ../configure: 13510: 5: Bad file descriptor
        ../configure: 13510: :: result: no: not found
        ../configure: 13510: 6: Bad file descriptor
        ../configure: 13510: no: not found
        ../configure: 13510: 5: Bad file descriptor
        ../configure: 13510: :: checking for inet_aton in -lbind: not found
        ../configure: 13510: 6: Bad file descriptor
        ../configure: 13510: checking for inet_aton in -lbind... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 13510: ac_fn_c_try_link: not found
        ../configure: 13510: 5: Bad file descriptor
        ../configure: 13510: :: result: no: not found
        ../configure: 13510: 6: Bad file descriptor
        ../configure: 13510: no: not found
        ../configure: 13510: 5: Bad file descriptor
        ../configure: 13510: :: checking for __inet_aton in -lbind: not found
        ../configure: 13510: 6: Bad file descriptor
        ../configure: 13510: checking for __inet_aton in -lbind... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 13510: ac_fn_c_try_link: not found
        ../configure: 13510: 5: Bad file descriptor
        ../configure: 13510: :: result: no: not found
        ../configure: 13510: 6: Bad file descriptor
        ../configure: 13510: no: not found
        ../configure: 13516: 5: Bad file descriptor
        ../configure: 13516: :: checking for ANSI C header files: not found
        ../configure: 13516: 6: Bad file descriptor
        ../configure: 13516: checking for ANSI C header files... : not found
        cat: confdefs.h: No such file or directory
        ../configure: 13615: ac_fn_c_try_compile: not found
        ../configure: 13617: 5: Bad file descriptor
        ../configure: 13617: :: result: no: not found
        ../configure: 13617: 6: Bad file descriptor
        ../configure: 13617: no: not found
        ../configure: 13665: ac_cv_header_dirent_dirent.h: not found
        ../configure: 13665: 5: Bad file descriptor
        ../configure: 13665: :: checking for dirent.h that defines DIR: not found
        ../configure: 13665: 6: Bad file descriptor
        ../configure: 13665: checking for dirent.h that defines DIR... : not found
        eval: 1: Bad substitution
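
    For what it's worth, this error pattern (ac_fn_c_check_func: not found, and the closing eval: 1: Bad substitution) is characteristic of the configure script being interpreted by a shell it wasn't generated for; on Ubuntu, /bin/sh is dash. A hedged thing to try is forcing bash explicitly from a clean build directory:

        bash ../configure && make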

    Read the article

  • Apache: Setting DocumentRoot to a CGI directory results in downloading the file instead of executing it

    - by fastmonkeywheels
    I have a C-compiled CGI application that I need to execute from the DocumentRoot of my Apache server. The CGI file is called index.cgi and is located at /usr/lib/cgi-bin/index.cgi. I have the following Directory definition:

        <Directory "/usr/lib/cgi-bin/">
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            AllowOverride None
            Order allow,deny
            Allow from all
            DirectoryIndex index.cgi
        </Directory>

    I have the following VirtualHost setting:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /usr/lib/cgi-bin
            # ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            ErrorLog /var/log/apache2/error.log
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
        </VirtualHost>

    If I go to 127.0.0.1 or 127.0.0.1/index.cgi I get prompted to download the index.cgi file. However, if I enable the ScriptAlias in the vhost configuration block and go to 127.0.0.1/cgi-bin/index.cgi, I see the output of my CGI application. I had originally solved this problem with mod_rewrite; while that worked on my test system, the target (embedded) doesn't have that module available, so I'm looking at another route (again).
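
    A hedged observation: Options +ExecCGI permits CGI execution, but unlike ScriptAlias it does not by itself tell Apache that .cgi files are CGI programs; the usual companion directive inside the <Directory> block is:

        AddHandler cgi-script .cgi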

    Read the article

  • What's the right way to create a Ubuntu user whose home directory is /var/www/SITE?

    - by Leonnears
    First off, I need to state that I'm completely ignorant when it comes to server administration on Ubuntu, and I'm doing what I can. I have been trying to do this for hours with no luck. Basically, I want to create an Ubuntu user whose home directory is /var/www/SITE, preferably chroot'd to it. The chroot part is not so important right now, as I'd first prefer to make anything work at all. The user should be able to upload files here, and the web server (the www-data user?) should be able to pick them up with no problem. I was able to create the user and give it the home directory /var/www/SITE (the user is "anders"). I gave him a password, and "anders" can connect to FTP just fine and upload files. But here's where things don't work: while my user can upload files to that /var/www/SITE directory, when I access the web page in my browser I get a Forbidden error. Note that anders is also a member of the www-data group. I can fix this by running sudo chmod g+s /var/www/SITE/* anders -R, but this is of course not ideal. Ideally the files should "work" as soon as I upload them. What's the right way to fix this? If it matters (I don't think so), I'm editing my files in Coda 2 and anders is the user for it.
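
    A common arrangement for this, as a sketch (the group name assumes Ubuntu's stock www-data): give the tree to the user and the web server's group, and set the setgid bit on directories so new uploads inherit the group:

        sudo chown -R anders:www-data /var/www/SITE
        sudo chmod -R g+rwX /var/www/SITE                    # group read/write, execute on dirs
        sudo find /var/www/SITE -type d -exec chmod g+s {} +  # new files inherit the group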

    Read the article

  • Upgrading Ubuntu 9.10 to 10.04 Problem: No Internet Connection?

    - by damx
    Earlier today I tried upgrading my Ubuntu 9.10 Karmic Koala from the update manager (FYI: all 9.10 updates were applied prior to this). Everything was going well downloading until I got an error dialog informing me that some software packages weren't downloaded because of an Internet connection problem; I'd say it was about halfway through. Anyway, I was told that the software packages it did download would be kept, and I figured it's not a big deal, just run it again. But first I ran Firefox to verify my connection, and my internet connection was/is fine, as evident by this posting. With that cleared, I ran the update manager again, clicked on "Upgrade", and this time I received "Could not download the release notes. Please check your internet connection." This is my first time dealing with Ubuntu and my first upgrade, so I am hoping someone can help; I'm not sure what the problem can be. I can surf the web with no problems. PS: There was no installing at any point, just downloading. PPS: The software it managed to download the first time around is now visible in the update manager, but I don't think I should install it, as I see in the compiz description it's for v1:0.8-4-0ubuntu2; I figure it's designed for 10.04 and might ruin things further if I install.
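
    As a hedged alternative to the GUI updater, Ubuntu also ships a command-line release upgrader, which sometimes gives a more informative error than the dialog quoted above:

        sudo apt-get update
        sudo do-release-upgrade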

    Read the article

  • How to Load Balance 2 Internet Connections on a Windows 7 machine?

    - by Jimmy Chandra
    It's sort of related to this particular question, but that one is about Mac. I am looking for a similar solution on Windows 7. I have 2 network connections:

      - (Connection A) Wireless terminal connecting to ISP A (3G / EVDO internet provider)
      - (Connection B) Broadband wired connection connecting to ISP B (cable internet provider)

    Both have access to the internet. When I try connecting to a website and checking the Networking tab in my Task Manager, I only see the network traffic being routed to Connection A. Is there a way to make the computer utilize both networks (in the sense of using all the bandwidth available from both the cable ISP and the 3G / EVDO ISP) at the same time? If so, what do I need to do to set this up on Windows 7? Here is a bit more info on my network connections (ipconfig /all):

        PPP adapter Wireless Terminal:
            IPv4: aa.bb.ccc.ddd (preferred)
            Subnet mask: 255.255.255.255
            Default gateway: 0.0.0.0
            DNS: aa.ee.f.ggg, aa.ee.f.hhh
            Primary WINS: jjj.ii.k.l
            Secondary WINS: jjj.ii.k.m

        Ethernet adapter LAN:
            IPv4: 192.168.1.100 (connected by wire to a router that itself connects to a cable modem)
            Subnet mask: 255.255.255.0
            Default gateway: 192.168.1.1 (the wireless router)
            DHCP: 192.168.1.1 (the wireless router)
            DNS: xxx.yy.zz.ww, rr.sss.t.uuu

    For my own privacy I don't believe the actual numbers matter; the patterns are representative of the IP numbering scheme...

    Read the article

  • Why is it necessary to chmod o+r the parent directory to fix a 403 access forbidden error with Nginx and Passenger?

    - by davenolan
    This may be an Nginx wrinkle, or it may be because I don't understand Unix permissions. We're using Hudson CI to deploy our staging instance, so RAILS_ROOT is /var/lib/hudson/jobs/JOBNAME/workspace.

      - Hudson runs as the hudson user
      - Nginx runs as the www-data user
      - hudson and nginx are both members of the www group
      - The root of my nginx conf points to RAILS_ROOT/public, as per normal
      - RAILS_ROOT/config/environment.rb is owned by www-data (so Passenger runs as www-data)
      - RAILS_ROOT and everything in it is owned by the www group, and the group has r/w/x permissions

    As it stood, Nginx threw 403 permission denied when requesting any URL. error.log contained entries like this:

        "public/index.html" is forbidden (13: Permission denied)

    These did not fix or change the error (each with a stop/start of Nginx):

        chmod 777 -R RAILS_ROOT
        chgrp www -R /var/lib/hudson

    I also tried running Nginx as root, and Passenger complained that it could not find config/environment (despite the path displayed on the error page being correct). The fix was to ensure everybody has read permissions on each directory in the hierarchy; in this case, chmod o+r /var/lib/hudson. But if the group has read permissions on the directory, and nginx is a member of the owner group of the directory, why was it necessary to allow everyone read permissions? Is there something I have not grokked about permissions?

        $ nginx -V
        nginx version: nginx/0.7.61
        built by gcc 4.4.1 (Ubuntu 4.4.1-4ubuntu8)
        configure arguments: --prefix=/opt/nginx --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-2.2.5/ext/nginx --with-http_ssl_module --with-pcre=~/src/pcre-8.00/ --with-http_stub_status_module

        $ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=9.10
        DISTRIB_CODENAME=karmic
        DISTRIB_DESCRIPTION="Ubuntu 9.10"
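
    A compact way to audit this kind of problem, as a sketch: util-linux's namei walks a path and prints the mode and owner of every component that has to be traversed.

        namei -mo /var/lib/hudson/jobs/JOBNAME/workspace/public/index.html

    Every directory on the way down needs search (x) permission for whatever uid and gids the worker process actually has at runtime; one opaque component breaks the whole path.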

    Read the article

  • All virtualhosts serving Apache default files

    - by tj111
    I'm trying to configure Apache as an in-network web server, and am using the sites-available/sites-enabled feature as opposed to just static vhost files. I set up a couple of VirtualHosts, all with a unique DocumentRoot, but requests for all the VirtualHosts just serve up the "It's Working!" default file. I can't for the life of me figure out why it won't serve the content out of the correct directory. Here are the contents of the virtualhost directive files; let me know if I need to post more.

    default (note that apache renames this to 000-default in sites-enabled, so it's not an ordering issue):

        NameVirtualHost *:80
        ServerName emp
        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName emp
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    billmed:

        <VirtualHost *:80>
            ServerName billmed.emp
            ServerRoot /home/empression/Projects/billmed/web/httpdocs
            <Directory "/home/empression/Projects/billmed/web/httpdocs">
                Order Allow,Deny
                Allow from All
            </Directory>
        </VirtualHost>

    Note that I have DNS zones for both emp and billmed.emp, as well as entries in /etc/hosts. My ultimate goal is to set up this machine as an in-house web server with a custom TLD (emp), but progress has been pretty slow.
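
    A hedged first check: ask Apache how it actually parsed the vhosts; the output lists each server name, port, and the file it came from, which quickly shows whether a vhost was registered at all:

        apache2ctl -S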

    Read the article

  • How to set up the .ssh directory inside an encrypted volume on Mac OS X and still have public key logins?

    - by Vitaly Kushner
    I have my .ssh directory inside an encrypted sparse image; i.e. ~/.ssh is a symlink to /Volumes/VolumeName/.ssh. The problem is that when I try to ssh into that machine using a public key, I see the following error message in /var/log/secure.log:

        Authentication refused: bad ownership or modes for directory /Volumes

    Any way to solve this in a clean way?

    Update: The permissions on ~/.ssh and authorized_keys are right:

        > ls -ld ~
        drwxr-xr-x+ 77 vitaly staff 2618 Mar 16 08:22 /Users/vitaly/
        > ls -l ~/.ssh
        lrwxr-xr-x 1 vitaly staff 22 Mar 15 23:48 /Users/vitaly/.ssh@ -> /Volumes/Astrails/.ssh
        > ls -ld /Volumes/Astrails/.ssh
        drwx------ 3 vitaly staff 646 Mar 15 23:46 /Volumes/Astrails/.ssh/
        > ls -ld /Volumes/Astrails/
        drwx--x--x@ 18 vitaly staff 1360 Jan 12 22:05 /Volumes/Astrails//
        > ls -ld /Volumes/
        drwxrwxrwt@ 5 root admin 170 Mar 15 20:38 /Volumes//

    The error message says the problem is with /Volumes, but I don't see the problem. Yes, it is o+w, but it is also +t, which should be OK but apparently isn't. The problem is I can't (or rather shouldn't) change /Volumes permissions, but I do want public key login to work. First I thought of mounting the image somewhere other than /Volumes, but it is automounted on login by standard OSX mounting. I asked about it here: How to change disk image's default mount directory on osx. The only answer I got is "you can't" ;) I could hack my way around by writing some shell script that manually mounts the volume at a non-standard location, but that would be a gross hack; I'm still looking for a cleaner way to do what I need.
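
    For reference, the check that produces exactly this log line is sshd's StrictModes path-ownership test, and it can be relaxed in sshd's configuration, at the cost of the safety net it provides (the file lives at /etc/sshd_config on OS X of that era):

        StrictModes no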

    Read the article

  • How do I change the Dropbox directory on a headless GNU/Linux server?

    - by DrTwox
    I have installed Dropbox 2.0.0 via the command line on my home server (Ubuntu Server 12.04) to use for off-site automated backups, but I can't change the directory that the Dropbox daemon keeps synced. I've tried the following:

      - The official docs say to use the desktop application, which is not applicable in my situation. However, I installed the desktop app on my desktop machine and changed the default folder location, but I can't find where this change is stored in the ~/.dropbox/ directory, so I can make the same change on the server.
      - This page (and several others) recommends a Python script to do the job. Looking at the script, it opens a SQLite database called ~/.dropbox/dropbox.db, which does not exist in my Dropbox install, leading me to believe the script is out of date.
      - This forum thread suggests manually inserting the required row in the config.db database, which I did, but it made no difference. I checked the same database file on my desktop machine, and it does not have the dropbox_path key, so I'm presuming the information in that thread is also out of date for version 2.0.
      - I have tried to launch the Dropbox GUI configuration wizard over SSH with X11 forwarding, as suggested in one of the answers, but the binary must detect the absence of a local X11 install, and it starts a command-line daemon instead, which provides no means to change the option I need.

    I am currently using a symlink, as suggested in an answer, but this is a kludge. I would like to know the correct way to make the change. How do I change the Dropbox directory on a headless GNU/Linux server?

    Update: I've ditched Dropbox and started using Copy. Their Linux tools and support are far superior to Dropbox's. I leave this question here in case someone, someday, can answer it.

    Read the article

  • VMware Workstation, Win7 host, Ubuntu guests with NAT + Host-only networks, but they cannot connect to the Internet

    - by Ikon
    I have a Win7 host machine with VMware Workstation. In the Workstation I have 3 Ubuntu guests installed. All 3 Ubuntu guests have a NAT network (to access the internet without asking the router for a local address) and a Host-only network (to connect all Ubuntu guests and the host in a private network for internal communication, without touching the router). When I try to make any of the Ubuntu guests fetch data from the internet (assuming that they would figure out that the NAT-ed interface can access the requested data) they fail and report that there is no route for my query. If I disconnect the 2nd interface on the Ubuntu guests (the Host-only network) and restart networking, they start to know the route to the internet. Oddly, during installation the guests asked which of the 2 given interfaces (with NAT and Host-only config) should be used to get updates, and they somehow managed to get the updates. Not so after the installation finished and rebooted. I have checked in the Virtual Network Editor that the NAT interface should use my real network card to access the net, so there should be no problem. I don't wish to use the router's DHCP service to give the Ubuntu guests an address, and I also don't want the guests to be accessible from the local network directly, but only by the host; that's what the Host-only network is for. Any suggestions?

    Edit: 192.168.189.0 is the NAT interface and 192.168.7.0 is the Host-only.

        $ route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.7.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
        192.168.189.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
        0.0.0.0         192.168.189.2   0.0.0.0         UG    100    0        0 eth0

    Read the article

  • How can I get write permission for the Web (Inetpub) directory on a new Win 7 machine?

    - by marcipollo
    I mirror my web site on my laptop and am trying to move the mirror site to a new laptop. I copied the files to the Inetpub directory and can view them perfectly, but they are read-only (the check-mark is grey, not black) and I cannot change the permission. When I un-check the read-only attribute on the Inetpub directory and click "Apply", it displays a dialog box stating that I need administrative permission to change the attributes (I am logged in as an administrator). When I click "Continue", it pops up another dialog box saying access is denied to the attributes of the file c:\inetpub\custerr\en-us\500-100.asp. That dialog box has an "Ignore" button, and if I click that, it appears to work through the directory tree setting the permissions. It leaves all of the files (leaves) set to read-write, but the directories remain read-only. I am using 64-bit Windows 7. I stopped the IIS service while doing all of this. Might it have something to do with the fact that I copied the files from a different machine in the workgroup (my old laptop)?
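
    A hedged command-line route to the same end, from an elevated Command Prompt (the user name and path are placeholders for the actual ones):

        rem grant the user modify rights, inheriting to subfolders and files
        icacls C:\inetpub /grant YourUser:(OI)(CI)M /T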

    Read the article

  • How can one restrict network activity to only the VPN on a Mac and prevent unsecured internet activity?

    - by John
    I'm using Mac OS and connect to a VPN to hide my location and IP (I have the 'Send all traffic over VPN connection' box checked in the Network system pref). I wish to remain anonymous and do not wish to reveal my actual IP, hence the VPN. I have a prefpane called pearportVPN that automatically connects me to my VPN when I get online. The problem is, when I connect to the internet using AirPort (or other means) I have a few seconds of unsecured internet connection before my Mac logs onto my VPN. Therefore it's only a matter of time before I inadvertently expose my real IP address in the few seconds it takes between when I connect to the internet and when I log onto my VPN. Is there any way I can block any traffic to and from my Mac that does not go through my VPN, so that nothing can connect unless I'm logged onto my VPN? I suspect I would need to find a third-party app that would block all traffic except through the server address, perhaps Intego VirusBarrier X6 or Little Snitch, but I'm afraid I'm not sure which is right or how to configure them. Any help would be much appreciated. Thanks!
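
    On Mac OS of that era the built-in ipfw firewall can express a crude kill switch; a rough sketch only, with VPN_SERVER_IP and the ppp0 tunnel interface as assumptions to replace with the real values:

        sudo ipfw add 100 allow ip from any to VPN_SERVER_IP   # let the tunnel itself come up
        sudo ipfw add 110 allow ip from VPN_SERVER_IP to any
        sudo ipfw add 120 allow ip from any to any via ppp0    # traffic inside the VPN
        sudo ipfw add 130 allow ip from any to any via lo0     # local loopback
        sudo ipfw add 65000 deny ip from any to any            # everything else stays blocked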

    Read the article
