Search Results

Search found 2956 results on 119 pages for 'sources'.

  • cpan won't configure correctly on centos6, can't connect to internet

    - by dan
    I have CentOS 6 set up and have installed perl-CPAN. When I run cpan it takes me through the setup and ends by telling me it can't connect to the internet and to enter a mirror. I enter a mirror, but it still can't install the package. What am I doing wrong? Here is the session:

        If you're accessing the net via proxies, you can specify them in the CPAN configuration or via environment variables. The variable in the $CPAN::Config takes precedence.
        <ftp_proxy> Your ftp_proxy? []
        <http_proxy> Your http_proxy? []
        <no_proxy> Your no_proxy? []
        CPAN needs access to at least one CPAN mirror. As you did not allow me to connect to the internet you need to supply a valid CPAN URL now.
        Please enter the URL of your CPAN mirror
        CPAN needs access to at least one CPAN mirror. As you did not allow me to connect to the internet you need to supply a valid CPAN URL now.
        Please enter the URL of your CPAN mirror mirror.cc.columbia.edu::cpan
        Configuration does not allow connecting to the internet.
        Current set of CPAN URLs: mirror.cc.columbia.edu::cpan
        Enter another URL or RETURN to quit: []
        New urllist mirror.cc.columbia.edu::cpan
        Please remember to call 'o conf commit' to make the config permanent!

        cpan shell -- CPAN exploration and modules installation (v1.9402)
        Enter 'h' for help.

        cpan[1]> install File::Stat
        CPAN: Storable loaded ok (v2.20)
        LWP not available
        Warning: no success downloading '/root/.cpan/sources/authors/01mailrc.txt.gz.tmp918'. Giving up on it. at /usr/share/perl5/CPAN/Index.pm line 225
        ^CCaught SIGINT, trying to continue
        Warning: Cannot install File::Stat, don't know what it is. Try the command i /File::Stat/ to find objects with matching identifiers.
        cpan[2]>
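
    A hedged reading (not part of the original question): the transcript's real failure is "LWP not available", meaning CPAN has no HTTP client, and the rsync-style mirror (mirror.cc.columbia.edu::cpan) gives it nothing to fall back on. A minimal sketch of one common fix, assuming yum itself can reach a repository (package name and mirror URL are assumptions):

        # give CPAN an HTTP client, then point urllist at an HTTP mirror
        yum install perl-libwww-perl
        cpan
        cpan[1]> o conf urllist http://www.cpan.org/
        cpan[2]> o conf commit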

  • Varnish 3.0.2 and ISPConfig 3.0.4

    - by Warren Bullock III
    I followed the tutorial The Perfect Server - Ubuntu 11.10 [ISPConfig 3] here. I'm running an Ubuntu 11.04 (Natty Narwhal) server with 1024 MB RAM on Rackspace. I've gone through and updated to ISPConfig 3.0.4. Everything had been working great up to now, when I decided to try installing Varnish. Initially I installed Varnish by issuing:

        apt-get update
        apt-get upgrade
        apt-get install varnish

    Apparently the version that was installed was Varnish 2.x, so I went back and added the repositories for packages provided by varnish-cache.org:

        curl http://repo.varnish-cache.org/debian/GPG-key.txt | apt-key add -
        echo "deb http://repo.varnish-cache.org/ubuntu/ lucid varnish-3.0" >> /etc/apt/sources.list
        apt-get update
        apt-get install varnish

    This updated my version of Varnish to 3.0.2. I then proceeded to make the following changes:

        vim /etc/default/varnish
            change DAEMON_OPTS to port 80
        vim /etc/apache2/ports.conf
            NameVirtualHost *:8000
            Listen 8000
        vim /etc/apache2/sites-available/default
            <VirtualHost *:8000>
        vim /etc/apache2/sites-available/ispconfig.vhost
            Listen 8080
            NameVirtualHost *:8080
            <VirtualHost _default_:8080>

    I then set my other vhosts to use port 8000 (the Apache port), and with all this in place I restarted both Apache and Varnish to test, using Firebug in Firefox 11.0. The output doesn't seem to indicate that Varnish is working completely correctly. First of all I see:

        X-Varnish 1644834493

    but I've heard that unless you have two timestamps side by side it's probably not working correctly, so I was expecting to see something like:

        X-Varnish 1644834493 1644837493

    I also noticed this in the output, which seems to be inconsistent:

        X-Drupal-Cache MISS

    There are times when it will say HIT as well. So the question is: I think Varnish is partially working, but why don't I see two timestamps on X-Varnish like I'm expecting, and does the output in my screenshot look correct? If Varnish isn't working, can someone tell me what I might be doing wrong? Thanks in advance.
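
    For reference, "change DAEMON_OPTS to port 80" normally means editing the listen address in /etc/default/varnish. A minimal sketch, assuming the stock Ubuntu defaults (cache size is a placeholder):

        DAEMON_OPTS="-a :80 \
                     -T localhost:6082 \
                     -f /etc/varnish/default.vcl \
                     -S /etc/varnish/secret \
                     -s malloc,256m"

    On the header question: the values in X-Varnish are transaction IDs rather than timestamps. A cache miss carries one ID; a hit carries two, the second being the ID of the request that stored the object. A single number on every response therefore means nothing is being served from cache.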

  • Avoiding DNS timeouts when a dns server fails

    - by user65124
    Hi there. We have a small datacenter with about a hundred hosts pointing to 3 internal DNS servers (BIND 9). Our problem comes when one of the internal DNS servers becomes unavailable: at that point all the clients that point to that server start performing very slowly. The problem seems to be that the stock Linux resolver doesn't really have the concept of "failing over" to a different DNS server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses, our services perform much more slowly if a primary DNS server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us. My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handle this issue. I can see 3 possible types of solutions:

    1. Use linux-ha/Pacemaker and failover IPs (so the DNS VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing Pacemaker doesn't work very well (in my experience Pacemaker lowers availability without fencing).
    2. Run a local DNS server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.
    3. Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks DNS servers as up or down, and won't use "down" DNS servers.

    Anycasting seems to work only at the IP routing level and depends on route updates for server failure. Multicasting seemed like it would be a perfect answer, but BIND does not support broadcasting or multicasting, and the docs I could find seem to suggest that multicast DNS is aimed more at service discovery and auto-configuration than at regular DNS resolving. Am I missing an obvious solution?
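
    For reference, the resolver knobs mentioned above all live in /etc/resolv.conf. A minimal sketch (the nameserver addresses are placeholders):

        options timeout:1 attempts:2 rotate
        nameserver 192.168.0.1
        nameserver 192.168.0.2
        nameserver 192.168.0.3

    Even tuned this aggressively, a dead first server still costs fresh lookups up to a second each, which is exactly the slowdown described.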

  • Screen scraping software that will traverse pages

    - by nilbus
    We're creating a mashup site that pulls information from many sources all over the web. Many of these sites don't provide RSS feeds or APIs to access the information they provide, which leaves us with screen scraping as our method for collecting the data. There are many scripting tools out there, written in different scripting languages, that require you to write scraping scripts in the language the scraper was written in: Scrapy, scrAPI, and scrubyt are a few, written in Python and Ruby. There are other web-based tools I've seen, like Dapper, that create XML or RSS feeds based on a webpage. Dapper has a beautiful web-based interface that requires no scripting skills to use, and it would be a great tool if it were able to traverse multiple pages to gather data from hundreds of pages of results. We need something that will scrape information from paginated web sites, much like scrubyt, but with a user interface that a non-programmer could use. We'll script up our own solution if we need to, probably using scrubyt, but if there's a better solution out there, we want to use it. Does anything like this exist?

  • Dovecot install: what does this error mean?

    - by jamie
    I have Postfix and Dovecot installed on CentOS 6 (Linode) along with MySQL. The table and user are already set up, and Postfix installed fine, but Dovecot gives me this error in the mail log:

        Warning: Killed with signal 15 (by pid=9415 uid=0 code=kill)

    The next few lines say this:

        Apr 7 16:13:35 dovecot: master: Dovecot v2.0.9 starting up (core dumps disabled)
        Apr 7 16:13:35 dovecot: config: Warning: NOTE: You can get a new clean config file with: doveconf -n > dovecot-new.conf
        Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:1: protocols=pop3s is no longer supported. to disable non-ssl pop3, use service pop3-login { inet_listener pop3 { p$
        Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:5: ssl_cert_file has been replaced by ssl_cert = <file
        Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:6: ssl_key_file has been replaced by ssl_key = <file
        Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:8: namespace private {} has been replaced by namespace { type=private }
        Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:24: add auth_ prefix to all settings inside auth {} and remove the auth {} section completely
        Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:25: auth_user has been replaced by service auth { user }

    I am following directions written for an install on CentOS 5, with changes to the dovecot.conf file pulled from different sources specific to CentOS 6. So the dovecot.conf file might not be correct, but I have not yet found a good source on making Dovecot install correctly. Can anyone tell me what the error above means? The terminal does not give any message as to whether it started OK or failed. When I issue the service dovecot start command, it says:

        Starting Dovecot Imap:

    and nothing more.
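
    The warnings spell out the Dovecot 2.x replacements for the old 1.x settings. A hedged sketch of the corresponding dovecot.conf fragments (certificate paths are placeholders, since the original file is not shown):

        protocols = pop3
        service pop3-login {
          inet_listener pop3s {
            port = 995
            ssl = yes
          }
        }
        ssl_cert = </etc/pki/dovecot/certs/dovecot.pem
        ssl_key = </etc/pki/dovecot/private/dovecot.pem

    As for the error itself: signal 15 is SIGTERM, so "Killed with signal 15" only records that the daemon was told to stop (for example by a restart), not that it crashed.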

  • apt-get update getting 404 on debian lenny

    - by JoelFan
    Here is my /etc/apt/sources.list:

        ###### Debian Main Repos
        deb http://ftp.us.debian.org/debian/ lenny main contrib non-free

        ###### Debian Update Repos
        deb http://security.debian.org/ lenny/updates main contrib non-free
        deb http://ftp.us.debian.org/debian/ lenny-proposed-updates main contrib non-free

    When I do:

        # apt-get update

    I'm getting some good lines, then:

        Err http://ftp.us.debian.org lenny/contrib Packages 404 Not Found [IP: 35.9.37.225 80]
        Err http://ftp.us.debian.org lenny/non-free Packages 404 Not Found [IP: 35.9.37.225 80]
        Err http://ftp.us.debian.org lenny-proposed-updates/main Packages 404 Not Found [IP: 35.9.37.225 80]
        Err http://ftp.us.debian.org lenny-proposed-updates/contrib Packages 404 Not Found [IP: 35.9.37.225 80]
        Err http://ftp.us.debian.org lenny-proposed-updates/non-free Packages 404 Not Found [IP: 35.9.37.225 80]
        Err http://ftp.us.debian.org lenny/main Packages 404 Not Found [IP: 35.9.37.225 80]
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/main/binary-i386/Packages 404 Not Found [IP: 149.20.20.6 80]
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/contrib/binary-i386/Packages 404 Not Found [IP: 149.20.20.6 80]
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/non-free/binary-i386/Packages 404 Not Found [IP: 149.20.20.6 80]
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/contrib/binary-i386/Packages 404 Not Found [IP: 35.9.37.225 80]
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/non-free/binary-i386/Packages 404 Not Found [IP: 35.9.37.225 80]
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny-proposed-updates/main/binary-i386/Packages 404 Not Found [IP: 35.9.37.225 80]
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny-proposed-updates/contrib/binary-i386/Packages 404 Not Found [IP: 35.9.37.225 80]
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny-proposed-updates/non-free/binary-i386/Packages 404 Not Found [IP: 35.9.37.225 80]
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/main/binary-i386/Packages 404 Not Found [IP: 35.9.37.225 80]
        E: Some index files failed to download, they have been ignored, or old ones used instead.

    Now what?
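
    These 404s are what mirrors return once a release has been archived: lenny reached end of life and was removed from the regular mirrors. A hedged sketch of a sources.list pointing at the Debian archive instead (lenny-proposed-updates is dropped here, since it is no longer useful for an EOL release):

        deb http://archive.debian.org/debian/ lenny main contrib non-free
        deb http://archive.debian.org/debian-security/ lenny/updates main contrib non-free

    After editing, apt-get update may still warn about expired Release signatures, which is expected for archived releases.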

  • how to get Geo::Coder::Many with cpan?

    - by mnemonic
    Ubuntu is installed for development of a Perl project.

        $ aptitude search Geo-Coder
        i   libgeo-coder-googlev3-perl   - Perl module providing access to Google Map

    Aptitude has no package for Geo::Coder::Many, and cpan cannot build it:

        sudo cpan Geo::Coder::Many

    Then:

        CPAN: Storable loaded ok (v2.27)
        Going to read '/home/jh/.cpan/Metadata'
        Database was generated on Wed, 16 Oct 2013 06:17:04 GMT
        Running install for module 'Geo::Coder::Many'
        Running make for K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz
        CPAN: Digest::SHA loaded ok (v5.61)
        CPAN: Compress::Zlib loaded ok (v2.033)
        Checksum for /home/jh/.cpan/sources/authors/id/K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz ok
        CPAN: File::Temp loaded ok (v0.22)
        CPAN: Parse::CPAN::Meta loaded ok (v1.4401)
        CPAN: CPAN::Meta loaded ok (v2.110440)
        CPAN: Module::CoreList loaded ok (v2.49_02)
        CPAN: Module::Build loaded ok (v0.38)
        CPAN.pm: Going to build K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz

        Can't locate Geo/Coder/Many/Google.pm in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/share/perl/5.14/Module/Load.pm line 27.
        Can't locate Geo/Coder/Many/Google in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/share/perl/5.14/Module/Load.pm line 27.
        BEGIN failed--compilation aborted at Build.PL line 54.
        Warning: No success on command[/usr/bin/perl Build.PL --installdirs site]
        CPAN: YAML loaded ok (v0.77)
        KAORU/Geo-Coder-Many-0.42.tar.gz
        /usr/bin/perl Build.PL --installdirs site -- NOT OK
        Running Build test
        Make had some problems, won't test
        Running Build install
        Make had some problems, won't install
        Could not read metadata file. Falling back to other methods to determine prerequisites

    Any suggestions on how to resolve this issue?

  • solr php extension fails to run on newest Debian Wheezy

    - by hijarian
    I'm trying to use the Solr PHP extension on the recently upgraded Debian Wheezy. It installs flawlessly, both from PECL and from source, but instead of the expected functionality it gives me this on every PHP run:

        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/solr.so' - /usr/lib/php5/20100525/solr.so: undefined symbol: curl_easy_getinfo in Unknown on line 0

    Scripts which use the extension also throw an error:

        PHP Error[2]: include(SolrClient.php): failed to open stream: No such file or directory in file <...path to my autoloader...>

    My main point is that this was all set up before and worked like a charm. In the upgrade, among the relevant packages, only the versions of PHP and libcurl changed; the Solr instance itself was left as is. I have all possible libcurl libraries:

        $ locate libcurl
        ...
        /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.3
        /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4
        /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.2.0
        /usr/lib/x86_64-linux-gnu/libcurl.a
        /usr/lib/x86_64-linux-gnu/libcurl.la
        /usr/lib/x86_64-linux-gnu/libcurl.so
        /usr/lib/x86_64-linux-gnu/libcurl.so.3
        /usr/lib/x86_64-linux-gnu/libcurl.so.4
        /usr/lib/x86_64-linux-gnu/libcurl.so.4.2.0
        ...
        /usr/lib32/libcurl.so.3
        /usr/lib32/libcurl.so.4
        /usr/lib32/libcurl.so.4.2.0
        ...

    I have installed the php5-curl package, version 5.4.4-2, with aptitude. I installed the Solr extension both with sudo pecl install solr (with various combinations of the -f and -n flags, and tried solr-beta too) and from source:

        wget ...
        cd ...
        phpize
        ./configure
        make
        make install

    I'm installing version 1.0.2 of the extension because it worked before the upgrade from Squeeze to Wheezy. As I said earlier, the extension installs without any errors, and I have already added the extension=solr.so incantation to /etc/php5/mods-available/solr.ini. What magic should I do to make the solr extension work? Is it true that my only solution is to downgrade libcurl to the version from before the upgrade?
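
    A hedged diagnostic sketch (not from the original post): "undefined symbol: curl_easy_getinfo" usually means the extension is no longer binding to a libcurl that exports the symbol, which can be confirmed before rebuilding:

        # which libcurl, if any, is solr.so actually bound to?
        ldd /usr/lib/php5/20100525/solr.so | grep -i curl
        # does the system libcurl export the missing symbol?
        nm -D /usr/lib/x86_64-linux-gnu/libcurl.so.4 | grep curl_easy_getinfo

    If ldd shows no libcurl dependency at all, rebuilding against the current headers (apt-get install libcurl4-openssl-dev, then pecl install -f solr) is the usual next step rather than downgrading libcurl.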

  • Hudson Mercurial checkout throws exception on Debian

    - by Jack
    I'm trying to configure Hudson to check out my site's sources from Mercurial, but it throws an exception. The /var/lib/hudson/jobs/jobname directory does exist, and I can create a workspace directory in there (even after su hudson), but as soon as I run the Hudson job again this directory disappears and the job ends with the same error:

        java.io.IOException: Cannot run program "hg" (in directory "/var/lib/hudson/jobs/jobname/workspace"): java.io.IOException: error=2, No such file or directory
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
            at hudson.Proc$LocalProc.<init>(Proc.java:192)
            at hudson.Proc$LocalProc.<init>(Proc.java:164)
            at hudson.Launcher$LocalLauncher.launch(Launcher.java:639)
            at hudson.Launcher$ProcStarter.start(Launcher.java:274)
            at hudson.Launcher$ProcStarter.join(Launcher.java:281)
            at hudson.plugins.mercurial.MercurialSCM.joinWithPossibleTimeout(MercurialSCM.java:298)
            at hudson.plugins.mercurial.HgExe.popen(HgExe.java:191)
            at hudson.plugins.mercurial.HgExe.tip(HgExe.java:171)
            at hudson.plugins.mercurial.MercurialSCM.calcRevisionsFromBuild(MercurialSCM.java:254)
            at hudson.scm.SCM._calcRevisionsFromBuild(SCM.java:304)
            at hudson.model.AbstractProject.calcPollingBaseline(AbstractProject.java:1183)
            at hudson.model.AbstractProject.checkout(AbstractProject.java:1172)
            at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:499)
            at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:415)
            at hudson.model.Run.run(Run.java:1362)
            at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
            at hudson.model.ResourceController.execute(ResourceController.java:88)
            at hudson.model.Executor.run(Executor.java:145)
        Caused by: java.io.IOException: java.io.IOException: error=2, No such file or directory
            at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
            at java.lang.ProcessImpl.start(ProcessImpl.java:65)
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)

    Running on Debian 6.0.1. I wonder if anyone has run into this before, and hopefully solved it?
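
    A hedged reading (not from the original post): error=2 is ENOENT, i.e. the hg binary itself could not be found by the user running Hudson, not a problem with the workspace directory. A minimal check-and-fix sketch:

        sudo apt-get install mercurial
        sudo -u hudson which hg    # confirm the Hudson user can actually see it

    If hg lives somewhere non-standard, the Mercurial plugin's executable path in Hudson's system configuration can be pointed at the full path instead.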

  • Recommended setting for using Apache mod_mono with a different user

    - by Korrupzion
    Hello, I'm setting up an ASP.NET script on my Linux machine using mod_mono. The script spawns processes of a binary that belongs to another user, but the process is spawned by www-data because Apache runs as that user, and I need to spawn the process as the user that owns the file. I tried the setuid bit, but it has no effect. I discovered that if I kill mod-mono-server2.exe and run it as the user that I need, everything works right, but I want to know the proper way to do this, because after a while Apache runs mod-mono-server2.exe as www-data again. The Mono-Project webpage says:

        How can I run mod-mono-server as a different user?
        Due to apache's design, there is no straightforward way to start processes from inside of an apache child as a specific user. Apache's SuExec wrapper is targeting CGI and is useless for modules. Mod_mono provides the MonoStartXSP option. You can set it to "False" and start mod-mono-server manually as the specific user. Some tinkering with the Unix socket's permissions might be necessary, unless MonoListenPort is used, which turns on TCP between mod_mono and mod-mono-server. Another (very risky) way: use a setuid 'root' wrapper for the mono executable, inspired by the sources of Apache's SuExec.

    I want to know how to use the setuid wrapper, because I tried adding the setuid bit to the mono binary and changing its owner to the user that I want, but that made mono crash. Or maybe there is a way to keep mod-mono-server2.exe running separately from Apache without being closed (anyone has a script?). My environment:

        Debian Lenny 2.6.26-2-amd64
        Mono 1.9.1
        mod_mono from the Debian repository
        Dedicated server (root access and stuff)
        Using apache vhosts
        I use mono for only that script

    Thanks!
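
    Following the quoted documentation's first suggestion, a hedged sketch of the MonoStartXSP route (user, paths, and port are placeholders, and the exact mod-mono-server2 flags should be checked against the installed version):

        # Apache side: stop mod_mono from auto-starting the backend, use TCP instead of the socket
        MonoStartXSP False
        MonoListenPort 9000

        # backend started manually (e.g. from an init script) as the owning user
        sudo -u appowner mod-mono-server2 --applications /:/var/www/app --address 127.0.0.1 --port 9000

    Starting the backend from an init script also removes the "after a while Apache runs it as www-data again" problem, since Apache then never launches it at all.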

  • mysql_tzinfo_to_sql missing on my system

    - by Sk1ppeR
    I ran into a problem with timezones within MySQL. Long story short: my application is worldwide, and each database has its own timezone set within the application (not the server) in the form "Europe/Berlin", "Europe/Vienna", "America/Sao_Paulo". Obviously this is unacceptable for MySQL on a per-connection basis; I read that it handles data better if you use UTC offsets. Basically my goal is to log a field's alteration in another table using a trigger, and for that I use UNIX_TIMESTAMP within the trigger. However, UNIX_TIMESTAMP() follows the global timezone for the server, which obviously bothers me a lot. So I went searching for a "per connection" solution to use inside the trigger, and I found that mysql_tzinfo_to_sql can import zone info (UTC offsets) from my Linux zoneinfo files. To my amusement, though, when I ran the command I got the following:

        bash: mysql_tzinfo_to_sql: command not found

    So I'm looking for a solution to fix that. I don't want to "map" the timezone names to UTC offsets by hand just so I can use them in the trigger. Is there an alternative tool, or at least sources for this one in particular? What kind of queries does this tool generate, so I could do it manually if there is no alternative? Thanks in advance for any help on the issue!

    P.S: The OS is Debian GNU/Linux 6.0 and the MySQL server is the one from aptitude, with performance tweaks in my.cnf.
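
    For reference, mysql_tzinfo_to_sql ships with the MySQL server packages and emits plain SQL that fills the mysql.time_zone* tables from the system zoneinfo; the documented usage is a one-liner (assuming the standard zoneinfo path):

        mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p mysql

    Once loaded, named zones work per connection and inside triggers, e.g. CONVERT_TZ(NOW(), @@session.time_zone, 'Europe/Berlin'), so there is no need to map zone names to UTC offsets by hand.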

  • Cannot open files in Visual Studio but can in Delphi and Notepad

    - by Andrew J. Brehm
    About an hour ago, Visual Studio 2008 decided that it cannot find files any more. This is on 64-bit Windows Vista. When I right-click on a text file (source code or otherwise) and select "Open with" and "Visual Studio 2008", I get the following error (example):

        Windows cannot find 'C:\Users\ajbrehm\Documents\Visual Studio 2008\Projects\Hello Prism\Hello Prism\Main.pas'. Make sure you typed the name correctly, and then try again.

    When I right-click the same file and select "Open with" and "Delphi 2010" or "Notepad" (the two other options available for text files on my system), the file opens correctly. Oddly enough, when the file is part of a Visual Studio project and I open the project itself with Visual Studio (this works), I can open the file from within Visual Studio. Any ideas what might be going on? This started about an hour after I made a complete backup of my Vista VM and after I installed IIS 7, SQL Express, and SourceGear Vault. The first files I noticed couldn't be opened in Visual Studio any more were Pascal source files in checked-out folders from Vault. Vault also seems unable to see one of the source files and claims it doesn't exist. I found out about Visual Studio not opening ANY files any more when I tried to recreate the file Vault refused to see.

    Update: I just checked. Another user, "administrator", can still open text files with Visual Studio 2008. Both users have administrator rights.

    Update: I just restored the hours-old backup. Same problem. Apparently whatever triggered this happened before the install of IIS 7 and SQL Express; I never noticed it before.

  • Our application responds differently on the client's server

    - by WtFudgE
    I have an application running on our local server and it works perfectly. However, when I deploy my application to our client's server, there are problems. If I copy all the sources from their server to another local machine, it works fine again... I am talking about an ASP.NET application on Windows Server 2008. The client's server specs are:

        Intel Xeon 2.13 GHz
        6 GB RAM
        300 GB HDD
        Windows Server 2008 R2

    Although I think it's not that important, I will describe the problems to give you a better idea: they are related to updating and deleting fields in the SQL Server. Whenever there is more than one user running the application, updates and deletes (not even on the same record!) seem to hit the wrong fields. But as I said before, if we copy the source to another server this is not the case, so I assume it is more configuration-related. Do you guys have any idea what the issue could be? Thanks a lot.

    EDIT: My client told me he disabled the SQL-related accounts. Could this be related?

  • Advice on networking career

    - by fmysky
    Hello! I need some insights on a networking career. I have a valid CCNA and a few months of experience as a Jr. network analyst. My focus is the type of job where I can administer networking/storage hardware and possibly manage servers/workstations too. I am new to this job area, and specifically new to a city like LA. I am currently unemployed, so my question is: should I continue with my Cisco certifications (routing/wireless), go for other CompTIA certs, or both, to get a reliable job? I am really interested in CCNA Wireless, but watching Craigslist (LA) and other job sites, it seems people need more Cisco voice people, while some others also ask for Net+ certs. I have scarce financial resources, so it's better to make some good decisions. I have not been applying yet due to some personal problems, but I will soon. Thank you!

    P.S. I don't know if I can ask questions like this here. Sorry about that.

  • How can MySQL be in GDAL's dependencies when it's already installed?

    - by Julien Fouilhé
    I'm trying to install GDAL on my 64-bit CentOS server to be able to do some GIS operations. I tried a simple:

        # yum install gdal

    First, the GDAL version offered is 1.4 (the last released one is 1.9). Then, I see mysql in the dependencies list. But I already have MySQL installed, from another repository (remi), with a newer version than the one suggested by yum... Is it a problem of architecture (yum suggests i386)? I risked a yes, but it is still impossible to install! Here's the error I get:

        Transaction Check Error:
          package mysql-5.5.28-1.el5.remi.x86_64 (which is newer than mysql-5.0.95-1.el5_7.1.i386) is already installed

    Then I tried to install the latest available version (1.9.2) from source. I downloaded the GDAL tar.gz, extracted the files, and installed as follows:

        # tar -xzf gdal-1.9.2.tar.gz
        # ./configure --with-static-proj4=/usr/local/lib --with-threads --with-libtiff=internal --with-geotiff=internal --with-jpeg=internal --with-gif=internal --with-png=internal --with-libz=internal
        # make
        # make install

    But during the make, I get some strange errors about RegisterOGRMySQL that I can't understand:

        chmod a+x gdal-config
        /bin/sh /home/benjamin/gdal-1.9.2/libtool --mode=link g++ gdalinfo.lo /home/benjamin/gdal-1.9.2/libgdal.la -o gdalinfo
        libtool: link: g++ .libs/gdalinfo.o -o .libs/gdalinfo /home/benjamin/gdal-1.9.2/.libs/libgdal.so -L/usr/local/lib/lib -L/usr/kerberos/lib64 -lproj -lsqlite3 /usr/lib64/libexpat.so -lpthread -lrt -lcurl -ldl -lgssapi_krb5 -lkrb5 -lk5crypto -lcom_err -lidn -lssl -lcrypto -lz -Wl,-rpath -Wl,/usr/local/lib -Wl,-rpath -Wl,/usr/lib64
        /home/benjamin/gdal-1.9.2/.libs/libgdal.so: undefined reference to `RegisterOGRMySQL'
        collect2: ld returned 1 exit status
        make[1]: *** [gdalinfo] Error 1
        make[1]: Leaving directory `/home/benjamin/gdal-1.9.2/apps'
        make: *** [apps-target] Error 2

    Has anyone a solution? Thanks a lot!
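
    A hedged sketch (not from the original post): the unresolved RegisterOGRMySQL symbol points at the OGR MySQL driver being half-enabled by configure, so the usual workaround is to make the choice explicit and rebuild from clean:

        # option 1: skip the MySQL driver entirely
        ./configure --without-mysql ...
        # option 2: build it against the installed MySQL
        ./configure --with-mysql=/usr/bin/mysql_config ...
        make clean && make

    The make clean matters, since objects left over from the earlier half-configured build would otherwise be linked in again.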

  • Can I have a single solid state drive and a RAID array on the same machine?

    - by jaminto
    Hi. To summarize, I'm looking to use a single solid-state drive as my primary drive, and two conventional SATA drives in a RAID 1 configuration for data. I am trying to install 64-bit Windows 7 onto this configuration. Is this possible? Here are the details: I built a desktop that has been running 64-bit Vista on two 500 GB drives in a RAID 1 array for a few years. I just purchased an Intel X25-M 80 GB SATA solid-state drive, and was planning on using it as my primary drive and keeping the RAID 1 array as my data drive. I added the SSD and, in the RAID setup, configured it as a RAID 0 array of only one disk. Then I tried to do a clean install of Windows 7 64-bit, but got stuck in the "Missing driver for CD/DVD drive" black hole of selecting driver files and Windows telling me that I don't have the appropriate driver for my hardware. The missing hardware is NOT a CD/DVD drive, since I'm installing off of my only CD/DVD drive. Plus, at one point I was able to point it at a driver for my RAID controller, and then my hard drives magically showed up as browsable sources for finding drivers for some other unnamed device that setup couldn't recognize. After a few hours of trying drivers (this was a very slow process) I decided to reboot and look at the BIOS settings. I'm using an ASUS M2A-VM motherboard, which has an ATI SB600 RAID controller on board. I switched the "Onboard SATA Type" setting from "SATA" to "AHCI", thinking that since AHCI is an Intel thing, this would help. Unfortunately, this abandoned my RAID configuration, and my previously mirrored drives now show up as separate drives when I boot into my current Windows installation. Am I trying to do the impossible here? Should I just buy a separate SATA/RAID PCI card and plug the SSD into that? Any help would be greatly appreciated.

  • How could Google Latitude find my exact PC location with no GPS or public wifi?

    - by Mike
    I found a similar question here, but I still don't get it. You see, I live in a small town, and every time I check my IP location via online services or speed-test websites, my location appears to be my ISP's server location (which in my case is 250 miles away). But when I tried Google Latitude, it pinpointed my exact location to within less than 100 meters! I use Windows Vista and Google Chrome, and when I got the message that "Google is trying to locate you", I agreed, just to check what the result would be. It was scary, very scary! What I've come up with after reading the above link is that Google has an extensive database of WiFi locations. That would be understandable in the case of public and open WiFis that are used by a lot of people: some of them might be using applications that gather location data, and somehow this information ends up in giant Google databases. From those, Google could pinpoint a WiFi location based on its MAC address, along with the bits of info gathered from various sources. The issue here is that my WiFi is private; I don't even broadcast my WiFi name. So how on earth did Google find my exact PC location? Please break down the answer in layman's terms as much as possible.

  • Alternatives to Splunk?

    - by MichaelGG
    I'm pretty impressed with Splunk, especially version 4: pretty graphs, alerting (Enterprise only), and fast, accurate searching. It's a great product. However, the cost is just way too high to consider for full production use at our company. All we really need is to be able to index different logs in a central place and have reasonable searching on that. Alerts based on a saved search are also really nice. We don't really go beyond that. In fact, our biggest usage has been in deploying new applications. Everything gets logged via log4net to either the event log on Windows or a text file on Linux. Splunk makes it pretty easy to quickly search across those to make sure all the parts of the app are working ok -- that's saved us tons of time versus hunting down individual logging sources. What alternatives exist in this market? I have a sinking feeling Splunk's pricing is so high because they have the best product by far, and they know it. We want the server to run on Windows. I'd be open to a split model, using one product for general logs (collected via syslog/Snare) and a dedicated product for our custom apps (like Log4Net Dashboard). Would using a simple syslog server such as Kiwi, sent to SQL Server (perhaps with full-text enabled), work? I'd hope the cost would be well under 5 figures, USD. (And yes, I know, we're cheap. We're a startup with little money, and BizSpark takes care of all our MS licensing.)

    Edit: I should add, we have about 10 physical servers, 20 VMs, and a couple of firewalls and switches. 90% is Windows.

  • Pasting formatted Excel range into Outlook message

    - by Steph
    Hi everyone, I am using Office 2007 and I would like to use VBA to paste a range of formatted Excel cells into an Outlook message and then mail the message. The following code (which I lifted from various sources) runs without error and then sends an empty message... the paste does not work. Can anyone see the problem and, better yet, help with a solution? Thanks, -Steph

        Sub SendMessage(SubjectText As String, Importance As OlImportance)
            Dim objOutlook As Outlook.Application
            Dim objOutlookMsg As Outlook.MailItem
            Dim objOutlookRecip As Outlook.Recipient
            Dim objOutlookAttach As Outlook.Attachment
            Dim iAddr As Integer, Col As Integer, SendLink As Boolean
            'Dim Doc As Word.Document, wdRn As Word.Range
            Dim Doc As Object, wdRn As Object

            ' Create the Outlook session.
            Set objOutlook = CreateObject("Outlook.Application")
            ' Create the message.
            Set objOutlookMsg = objOutlook.CreateItem(olMailItem)

            Set Doc = objOutlookMsg.GetInspector.WordEditor
            'Set Doc = objOutlookMsg.ActiveInspector.WordEditor
            Set wdRn = Doc.Range
            wdRn.Paste

            Set objOutlookRecip = objOutlookMsg.Recipients.Add("[email protected]")
            objOutlookRecip.Type = 1
            objOutlookMsg.Subject = SubjectText
            objOutlookMsg.Importance = Importance

            With objOutlookMsg
                For Each objOutlookRecip In .Recipients
                    objOutlookRecip.Resolve
                    ' Set the Subject, Body, and Importance of the message.
                    '.Subject = "Coverage Requests"
                    'objDrafts.GetFromClipboard
                Next
                .Send
            End With

            Set objOutlookMsg = Nothing
            Set objOutlook = Nothing
        End Sub
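
    One thing the posted sub never does is put anything on the clipboard: there is no Range.Copy before wdRn.Paste, so the paste has nothing to insert. A hedged sketch of the missing step (worksheet, range, recipient, and subject are placeholders):

        Sub SendFormattedRange()
            Dim objOutlook As Object, objMsg As Object, Doc As Object
            ' Copy the formatted cells first -- the step missing above
            ThisWorkbook.Worksheets("Sheet1").Range("A1:D10").Copy
            Set objOutlook = CreateObject("Outlook.Application")
            Set objMsg = objOutlook.CreateItem(0)    ' 0 = olMailItem
            objMsg.Display                           ' make sure an inspector exists
            Set Doc = objMsg.GetInspector.WordEditor
            Doc.Range(0, 0).Paste                    ' paste at the top of the body
            objMsg.Recipients.Add("[email protected]").Resolve
            objMsg.Subject = "Formatted range"
            objMsg.Send
        End Sub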

  • How to install ImageMagick 6.6.2 on Ubuntu 10.04 (Lucid)

    - by Svyatoslavik
    The problem is that Lucid ships an old ImageMagick version (6.5.2). This is very important because I need to work with SVG graphics. On my local PC I have Ubuntu 11.04 and ImageMagick 6.6.2 and everything works fine; on the server I have 6.5.x and I have the problem. Reinstalling the server with Ubuntu 11.* is not a solution. I tried changing /etc/apt/sources.list from the Ubuntu 10.04 (Lucid) list to the Ubuntu 11.04 (Natty) list and updating ImageMagick. After this I had ImageMagick 6.6.2 (I checked phpinfo()), but ImageMagick no longer works. If I try any operation I get the error:

        [error] 8996#0: *19983 FastCGI sent in stderr: "PHP Fatal error: Uncaught exception 'ImagickException' with message 'no decode delegate for this image format `/tmp/magick-XXnYKWKC' @ error/constitute.c/ReadImage/532'

    How can I fix it? Or how can I return to the old version of ImageMagick? And here is the problem if I try to install from source:

        /tmp/image/ImageMagick-6.7.2-7# ./configure
        configuring ImageMagick 6.7.2-7
        checking build system type... i686-pc-linux-gnu
        checking host system type... i686-pc-linux-gnu
        checking target system type... i686-pc-linux-gnu
        checking whether build environment is sane... yes
        checking for a BSD-compatible install... /usr/bin/install -c
        checking for a thread-safe mkdir -p... /bin/mkdir -p
        checking for gawk... no
        checking for mawk... mawk
        checking whether make sets $(MAKE)... yes
        checking for style of include used by make... GNU
        checking for gcc... gcc
        checking whether the C compiler works... no
        configure: error: in `/tmp/image/ImageMagick-6.7.2-7':
        configure: error: C compiler cannot create executables
        See `config.log' for more details
        /tmp/image/ImageMagick-6.7.2-7#
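
    A hedged note on the configure failure: "C compiler cannot create executables" almost always means the toolchain is incomplete (gcc present, but headers or linker missing). On Ubuntu the usual fix is:

        sudo apt-get install build-essential
        ./configure && make && sudo make install

    For the SVG case specifically, ImageMagick also needs an SVG delegate available at build time (e.g. the librsvg2-dev package), otherwise the rebuilt binary will still report "no decode delegate" for SVG files.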

  • Multiple cable adapter setup not working - VGA to smartphone. All cables tested and work

    - by Christopher Rucinski
    Issue: The pictured overhead projector setup does not work: #1 - #2 - #3 - Phone. All cables are tested and work! The issue is the HDMI connection between cables #2 and #3. With all other cables, the screen is automatically displayed on the projector; no extra work needed. With the pictured setup, the smartphone screen is not displayed on the projector. What is the issue with the HDMI connection?

    Background: We recently had to do presentations at work (a school), but the administration only provided VGA hookups, most likely for cost reasons. Anyway, several teachers have brand-new Samsung Series 9 ultrabooks (or similar); you know, the ones without VGA support. So I bought an adapter for those ultrabooks (cable #5 in the picture below). However, both my coworker and I have been wanting to just display our phone screens on the projector, which I knew would require some extra work.

    What I have:
    - VGA cable to projector (cables go through the wall), for laptops
    - HDMI to VGA cable, for laptops
    - MHL adapter, for 11-pin microUSB phones
    - microHDMI to VGA cable, for ultrabooks
    - 11-pin to 5-pin microUSB adapter, for older 5-pin microUSB phones

    Equipment:
    - 1 projector with VGA and HDMI input (the issue there is coworkers forget to switch sources)
    - 1 projector with VGA-only input
    - 2 new Samsung ultrabooks without VGA or HDMI support
    - 1 ultrabook with VGA and HDMI support
    - several other laptops with at least VGA support
    - 1 tablet with 11-pin microUSB
    - at least 1 new phone with 11-pin microUSB
    - at least 1 old phone with 5-pin microUSB

    Tested:
    - VGA cable (#1) to laptop: good
    - VGA cable (#1) to HDMI adapter (#2) to laptop: good
    - VGA cable (#1) to microHDMI adapter (#5) to laptop: good
    - Projector to HDMI cable (not shown) to MHL adapter (#3) to Galaxy Note 3 smartphone: good
    - VGA cable (#1) to HDMI adapter (#2) to MHL adapter (#3) to Galaxy Note 3 smartphone: does not work!!

    Extra notes: The 11-pin MHL adapter will not fit inside the 11-pin to 5-pin microUSB adapter, so older phones cannot be displayed on the screen.

  • Fixing corrupt files or corrupt file table on a USB drive?

    - by Kelsey
    I was doing a virus scan on an external USB drive while copying data over to it. While AVG was scanning, my system locked up (I think due to the USB drive running out of space) and it required a reboot. Since then, all data on the external drive is no longer accessible. I can see all the files in the root and the directories, but I cannot browse into any of them, as Windows 7 gives an error stating they are corrupt. I think the file table, or whatever it uses to store the index of what exists on the drive, has been corrupted, since it still shows the drive as being almost full, yet everything I run a properties check on says it is 0 bytes. Does anyone know how to 'unlock' or recover this data? Is there a way to rebuild the file table somehow? Luckily I can recover this data from other sources as a last resort, but I would like to fix this if possible. Any help would be appreciated. Thanks.
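
    A hedged first step (not from the original post): for a drive whose directories report 0 bytes, Windows' own file-system repair is the standard starting point, run against the external drive's letter:

        chkdsk E: /f    (E: is a placeholder for the USB drive's letter)

    If chkdsk cannot rebuild the tables, read-only recovery tools are the safer follow-up, since they do not write to the damaged file system.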

  • Upgrade Debian to unstable on VirtualBox: udev problem

    - by Ken
    I'm running Debian stable on VirtualBox on Windows Vista 64-bit Ultimate. It's been running great, but I needed some newer packages, so I put sid in my sources.list to upgrade to unstable (as I've done a dozen times on various Linux boxes over the years). When I upgraded, something went screwy and it asked me to run apt-get -f install to fix things, which gave this:

        (Reading database ... 77846 files and directories currently installed.)
        Preparing to replace udev 0.125-7+lenny3 (using .../archives/udev_151-3_amd64.deb) ...
        Since release 150, udev requires that support for the CONFIG_SYSFS_DEPRECATED feature is disabled in the running kernel. Please upgrade your kernel before or while upgrading udev.
        AT YOUR OWN RISK, you can force the installation of this version of udev WHICH DOES NOT WORK WITH YOUR RUNNING KERNEL AND WILL BREAK YOUR SYSTEM AT THE NEXT REBOOT by creating the /etc/udev/kernel-upgrade file.
        There is always a safer way to upgrade, do not try this unless you understand what you are doing!
        dpkg: error processing /var/cache/apt/archives/udev_151-3_amd64.deb (--unpack):
         subprocess new pre-installation script returned error exit status 1
        insserv: warning: current start runlevel(s) (2 3 4 5) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current stop runlevel(s) (0 1 6) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current start runlevel(s) (2 3 4 5) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current stop runlevel(s) (0 1 6) of script `vboxadd-x11' overwrites defaults (empty).
        Errors were encountered while processing:
         /var/cache/apt/archives/udev_151-3_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I have the VirtualBox extensions installed, and it looks like the udev install doesn't know what to make of them. But I don't know exactly where/how they're installed (I just ran the VBoxLinuxAdditions-amd64.run script, basically), so I don't know how to disable them. Any ideas? Thanks!
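
    A hedged reading of the message (not from the original post): the udev preinstall check is about the running kernel, not the VirtualBox additions, so the documented-safe path is to install a newer kernel and reboot into it before letting udev upgrade:

        # metapackage name is an assumption for sid at the time; check apt-cache search linux-image
        apt-get install linux-image-2.6-amd64
        reboot
        apt-get -f install    # resume the held-back udev upgrade

    Creating /etc/udev/kernel-upgrade is the forced path the warning itself calls risky, since udev 151 will not work with the old kernel at the next boot.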

  • Unable to connect to shared (iscsitarget) dvd-rw drive on ubuntu karmic box

    - by Develop7
    Intro: I have a desktop with a DVD-RW drive that runs primarily on Linux (namely Ubuntu 9.10). My wife has a netbook that runs Windows XP with no CD/DVD drive. There's also a LAN through our ADSL modem/router. I've "ported" (actually, I've just grabbed the sources and ran dpkg-buildpackage) the iscsitarget package from Ubuntu Lucid to Karmic (here are the packages), installed it (sudo aptitude install iscsitarget; sudo m-a a-i iscsitarget) and configured it in the following way (/etc/ietd.conf):

        Target iqn.2020-01.local.develop7-desktop:storage.disc.dvdrw
            Lun 0 Path=/dev/sr0,Type=blockio
        #I've skipped commented lines

    Also, I've opened port 3260 with ufw:

        $ sudo ufw status | grep 3260
        3260    ALLOW    192.168.1.0/24

    Problem: But (here's the trouble) I still can't connect to this target from the Windows box. Microsoft Software iSCSI Initiator reports "Logon failure" upon a connect attempt and, respectively, fails to connect. After an unsuccessful connection attempt, I've noticed this line in dmesg | tail's output:

        iscsi_trgt: ioctl(299) invalid ioctl cmd c078690d

    Question: So the question is, what's wrong with my config/iSCSI target/whatever else? Or, in short, what am I doing wrong? Thanks in advance.
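
    A hedged note (not from the original question): with iscsitarget, an "invalid ioctl cmd" in dmesg is a classic symptom of the userspace daemon and the kernel module coming from different versions, which is easy to hit after hand-backporting the package. A quick consistency check, assuming the module was built with module-assistant as shown:

        dpkg -l iscsitarget iscsitarget-source    # userland and module-source versions should match
        sudo /etc/init.d/iscsitarget restart      # reload ietd against the freshly built module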
