Search Results

Search found 9647 results on 386 pages for 'cross compile'.

Page 197/386

  • Best way to install multiple versions of Apache, PHP and MySQL on a single FreeBSD host

    - by Mikael Roos
    I want a test and development environment for web work using Apache, PHP and MySQL. I need to be able to test a single web application against multiple versions of PHP (5.2, 5.3, etc.) and multiple versions of MySQL (5.0, 5.1, 5.5, etc.), hosted on a FreeBSD server. My idea is to compile each version combination into its own directory and run them on separate port numbers. For example:
    opt/apache2.2-php5.2-mysql-5.0 (httpd on port 8801, mysql on port 8802; the directory contains each piece of software, compiled and linked against the others)
    opt/apache2.2-php5.3-mysql-5.1 (httpd on port 8803, mysql on port 8804)
    and so on. Any thoughts or suggestions on the best way to set up this type of environment?
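    A minimal sketch of that layout, assuming stock FreeBSD build tools and unpacked source tarballs; the version numbers, prefixes and ports below are placeholders only:

        # one self-contained prefix per version combination
        PREFIX=/opt/apache2.2-php5.2-mysql-5.0

        cd httpd-2.2.x
        ./configure --prefix=$PREFIX && make && make install

        cd ../php-5.2.x
        ./configure --prefix=$PREFIX --with-apxs2=$PREFIX/bin/apxs --with-mysql=$PREFIX
        make && make install

        # MySQL is built the same way into $PREFIX with its own port in $PREFIX/my.cnf,
        # and each instance's httpd.conf gets its own Listen directive, e.g. Listen 8801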

    Read the article

  • Fix for OpenSolaris with no gcc vs. Nexenta with no ext3

    - by Jake Wharton
    I'm attempting to migrate my server from Linux to a Solaris variant during a hardware upgrade. The machine is based around an Abit AN-M2 board, which has an NForce chipset. I have what seems to be a chicken-and-egg problem of sorts: OpenSolaris 2009.06 does not recognize the NIC, and I cannot compile the drivers for it because it also lacks gcc. I haven't tested whether I can mount an ext3 partition yet, but it's moot if there is no networking. Nexenta 3.0b3 recognizes the NIC, but I cannot get the ext3 drives mounted because FSWfspart refuses to install. I do not know much about Solaris, but I wager this is due to the fact that Nexenta is based around Debian as well. While I am reusing the mobo/CPU combo, I just spent a lot of money on the other hardware around it and would very much like to get it up and running smoothly and quickly. Does anyone have any suggestions that are not:
    - get a new mobo/CPU
    - run another OS
    - use an alternate NIC

    Read the article

  • AWS EC2 and build-essential

    - by Randy Hartmen
    Hi, I am trying to compile Node.js on Amazon EC2, but I can't even install "build-essential". Where's the problem? Thanks.

        sudo yum install build-essential
        Loaded plugins: fastestmirror, security
        Loading mirror speeds from cached hostfile
        (...)
        No package build-essential available.
        Error: Nothing to do

        ./configure
        Checking for program g++ or c++ : not found
        Checking for program icpc : not found
        Checking for program c++ : not found
        error: could not configure a cxx compiler!
        could not configure a cxx compiler!
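    build-essential is a Debian/Ubuntu metapackage, and this AMI uses yum, so the rough equivalent is the "Development Tools" package group. A sketch, assuming a standard yum-based Amazon Linux or CentOS repository:

        sudo yum groupinstall -y "Development Tools"
        # or only the pieces Node.js needs to build
        sudo yum install -y gcc gcc-c++ make openssl-devel
        g++ --version    # confirm a C++ compiler is now on the PATH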

    Read the article

  • How can I install a headless JDK on an Ubuntu Jaunty server?

    - by Hanno Fietz
    I recently set up a build server that requires a JDK to run (for example, to compile the Java sources). The OpenJDK package in Ubuntu pulls in the OpenJDK JRE as a dependency which, in turn, depends on a large number of packages that are only relevant for graphical environments. For the standard JRE there's a headless version of the package, but for the JDK there is none. This issue has been discussed in various places before, and one solution that I found and used was this:

        $ apt-get --no-install-recommends -d install openjdk-6-jdk
        $ dpkg -i --ignore-depends=openjdk-6-jre /path/to/just-downloaded.deb

    While this worked, it leaves my system with a broken dependency tree, and apt-get refuses further installs until I run apt-get -f. Is there a better solution to this?
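    One commonly used workaround for the resulting broken dependency, sketched here on the assumption that the equivs tool is acceptable, is to build a dummy package that satisfies the openjdk-6-jre dependency without pulling in the graphical stack:

        sudo apt-get install equivs
        equivs-control openjdk-6-jre
        # edit the generated control file: set Package: openjdk-6-jre and a suitably high Version:
        equivs-build openjdk-6-jre
        sudo dpkg -i openjdk-6-jre_*.deb    # apt-get -f should now have nothing left to fix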

    Read the article

  • Limit on WMIC requests from a Windows Service

    - by Anders
    Hi all, does anyone know if there is a limit on how many wmic requests Windows can handle simultaneously when they originate from a Windows service? The reason I'm asking is that my application fails when too many simultaneous requests have been initiated; I don't get any data back from the application. However, if I compile the Python application and run it as a standalone application, everything works fine. The wmic calls look like this:

        subprocess.Popen("wmic path Win32_PerfFormattedData_PerfOS_Memory get CommittedBytes",
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    This makes me wonder: is there a limit on Windows services and what they can perform? I mean, if the .exe file can handle all requests, then it must have something to do with the fact that I have compiled it as a Windows service.
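    Whatever the limit turns out to be, one way to keep the number of concurrent wmic processes down is to batch several counters into a single query, since wmic accepts a comma-separated property list; a sketch (property names from the same WMI class):

        wmic path Win32_PerfFormattedData_PerfOS_Memory get CommittedBytes,AvailableBytes

    The same command string can be passed to subprocess.Popen in place of the single-property call shown above.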

    Read the article

  • Using the right folder for the right job. Article link, please?

    - by Droogans
    There are specific folders designed for specific tasks: /var/www holds your web sites, /usr/bin contains the binaries that run your applications... yet I still find myself putting nearly all of my work in ~. Is it possible to overuse my home directory? Will it come back to haunt me? Does anyone have a good link to an article on best practices for organizing your files so that they end up in their "correct" place? Is there even such a thing in Linux? I am referring specifically to user-generated content; I do not compile applications from source, I use apt-get for those tasks. This article has a great introduction to what I'm looking for. Table 3-2, "Subdirectories of the root directory", is the sort of thing I'm looking for, but with more details/examples.
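    For reference, the layout being described is codified in the Filesystem Hierarchy Standard (FHS), and most systems ship a local summary of it as a man page; a quick way to see a table like the one mentioned above:

        man hier    # hier(7): description of the standard filesystem layout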

    Read the article

  • How would I put together a site requiring several TB? [closed]

    - by acidzombie24
    Let's say I have a site with unmetered 100MBPS bandwidth (I assume that's bits?) and the RAM I require. Most plans I see offer HDDs that hold 250 GB to 1 TB. But what happens if I compile/generate enough data that I require 10 TB or 25 TB? (I'd likely have two servers, but...) I wouldn't be serving all of that data (well, not to the public), so a CDN wouldn't make sense. What do I do in this scenario? Do I need to get a custom plan from a hosting provider? (If so, how do I find them?) Are there services that allow me to mount remote drives (that sounds wrong unless it's a CDN, so maybe not)? Are there hosts that deal specifically with unmetered bandwidth and provide lots of disk space? The math says ~1 TB is the most I'll ever need, but if I happen to need more I'd like to know my options.

    Read the article

  • Corporate IM with video that actually works, suggestions?

    - by Erik P. Skaalerud
    Hi. Does anyone here have a suggestion for a cross-platform IM solution which will work with VoIP/video on both Windows (XP and 7) and Mac OS X from 10.4 upwards? Right now we're in a kind of mixed environment, with some Mac users on iChat Server since they need video support (conferencing across several offices over VPN), but it won't work on Windows clients. The rest of us are happily using Openfire+Spark, but there's no VoIP or video available from what I've found, unless you want to add several pieces of 3rd-party software (like Red5 and Asterisk). Requirements:
    - As said before, must work on both Windows and Mac
    - Internal server (no Skype etc.)
    - File transfer between platforms
    - SSO (Single Sign-On) via Active Directory authentication
    - Some sort of screen sharing would be a plus, like switching over to a screen capture (PowerPoint, software training etc.)
    We can afford to buy software if that's needed to get this working without any hiccups across platforms. Thanks in advance to anyone who gives suggestions.

    Read the article

  • install SSH2 (server?) on FreeBSD

    - by Robert Cabri
    At work we have a FreeBSD test machine, and we have a PHP script that uses the SSH2 protocol to connect to that server. With this script I'm not able to connect to the FreeBSD server. I wanted to install SSH2 support, but I'm not sure where to find the right source to compile. The connection can be made from a different machine, but installing a new distro isn't the solution right now. I can connect to the server over ssh, but not with the ssh2 protocol in PHP (http://nl.php.net/ssh2).
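    The PHP side of this is the PECL ssh2 extension, which is built against libssh2 rather than against OpenSSH itself. A rough sketch, assuming the FreeBSD ports tree and PECL are available (port and package names may differ by release):

        # install the C library the extension links against
        cd /usr/ports/security/libssh2 && make install clean
        # build the PHP extension (it was in beta for a long time, so "pecl install ssh2-beta" may be required)
        pecl install ssh2
        # enable it with extension=ssh2.so in php.ini, then check:
        php -m | grep ssh2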

    Read the article

  • Sphinx 2.0.8 with Postgresql 9.2.4

    - by Calvin
    I want to install Sphinx 2.0.8 from source on CentOS 5.6 with PostgreSQL 9.2.4. My server is: Linux localhost.localdomain 2.6.18-348.6.1.el5 #1 SMP Tue May 21 15:29:55 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux. First, I configure with:

        ./configure --prefix=/usr/local/sphinx --with-pgsql --without-mysql --with-pgsql-libs=/var/lib/pgsql/9.2/data/ --with-pgsql-includes=/usr/pgsql-9.2/include/

    That seems to work, but when I run make this error appears:

        /usr/bin/ld: cannot find -lpq

    I have installed postgresql92-devel and postgresql92-libs, and also libpqxx, and have fought with this error all day long, but I have not solved it yet. Thanks for helping me.
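    The -lpq flag means the linker is looking for libpq.so, and --with-pgsql-libs above points at the PostgreSQL data directory rather than a library directory. A sketch of the likely fix, assuming the PGDG 9.2 packages' default layout:

        # locate the client library first
        find / -name 'libpq.so*' 2>/dev/null
        # then re-run configure pointing at that directory, typically:
        ./configure --prefix=/usr/local/sphinx --with-pgsql --without-mysql \
            --with-pgsql-libs=/usr/pgsql-9.2/lib \
            --with-pgsql-includes=/usr/pgsql-9.2/include
        make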

    Read the article

  • I can't launch any Win8 apps after upgrading to Windows 8.1

    - by locka
    I just upgraded to 8.1 and now none of the Metro apps start. The issue is that if I start any Metro app, including the Store and PC Settings, it immediately fails. The classic desktop is fine, as are standard programs; it's just the Metro apps. If I look in the system event log I see errors like this: "Activation of application winstore_cw5n1h2txyewy!Windows.Store failed with error: This application does not support the contract specified or is not installed. See the Microsoft-Windows-TWinUI/Operational log for additional information." In addition, the tiles in Metro have a small cross icon on them. I suspect that my Live ID (which I somehow managed to skip during the update) is not set up properly, and consequently none of the online stuff works. But how do I fix this? I can't start PC Settings, I can't start the Store, and I see no way in the classic desktop of setting these things. I don't want to have to reinstall for this. Is there a simple fix?

    Read the article

  • Visible Keylogger (ie not evil)

    - by Ben Haley
    I want keylogging software on my laptop for lifelogging purposes, but the software I can find is targeted towards stealth activity. Can anyone recommend keylogging software targeted towards personal backup? Ideal functionality:
    - Runs publicly (like in the task bar)
    - Easy to turn off (via keyboard shortcut is best, at least via button click)
    - Encrypted log
    - Fast
    - Free
    - Cross platform (Windows at least)
    The best I have found is pykeylogger, which does not attempt to be stealthy, but does not attempt to be visible either. I want a keylogger focused on transparency, speed, and security so I can safely record myself. Note: Christian has a similar question with a different emphasis.

    Read the article

  • Compiling LaTeX document makes Google Drive crash

    - by Sander
    I have the issue that when I compile a LaTeX document located inside my Google Drive folder, after a few runs the OS X Google Drive application crashes. As this is an important document, I want to keep it inside the Google Drive location at all times to ensure a cloud backup, but that is of course not guaranteed if compiling makes Google Drive crash all the time. I can't seem to identify what is causing this, and I was hoping that some people here might have an idea of what the cause could be. We're talking about an 8-page document with 3 images, so nothing crazy big or complex.
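    If the crashes correlate with the many auxiliary files a LaTeX run rewrites (.aux, .log, .toc and friends), one workaround worth trying is to keep only the sources in the synced folder and send the build products elsewhere; a sketch, assuming a TeX Live style pdflatex and a hypothetical build directory:

        mkdir -p "$HOME/latex-build"
        pdflatex -output-directory="$HOME/latex-build" main.tex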

    Read the article

  • Cascading switches: uplink to uplink?

    - by wuckachucka
    I'm a bit confused as to why I haven't seen any references online to using Switch A's uplink port (1 Gbps, on a 24-port 10/100 switch) to connect to Switch B's uplink port: everything I've seen, including documentation, forums, articles, etc., has Switch A's uplink port going to one of Switch B's 10/100 access ports. As I understand it, the uplink port (besides its normally greater speed) is no different from any other port except that it's "internally crossed over" so that you can use a straight-through cable with it. I've also seen documentation on using the uplink port to connect a switch to a gateway router, or even a server, as it provides greater bandwidth than the access ports, but I'm not sure why nobody seems to be cross-uplinking, even when there are two uplink ports available on some higher-end switches. The switches in question are Linksys SRW224P (x2). Am I missing something?

    Read the article

  • Explicitly instantiating a generic member function of a generic structure

    - by Dennis Zickefoose
    I have a structure with a template parameter, Stream. Within that structure, there is a function with its own template parameter, Type. If I try to force a specific instance of the function to be generated and called, it works fine if I am in a context where the exact type of the structure is known. If not, I get a compile error. This feels like a situation where I'm missing a typename, but there are no nested types. I suspect I'm missing something fundamental, but I've been staring at this code for so long all I see are redheads, and frankly writing code that uses templates has never been my forte. The following is the simplest example I could come up with that illustrates the issue.

        #include <iostream>

        template<typename Stream>
        struct Printer {
            Stream& str;
            Printer(Stream& str_) : str(str_) { }

            template<typename Type>
            Stream& Exec(const Type& t) {
                return str << t << std::endl;
            }
        };

        template<typename Stream, typename Type>
        void Test1(Stream& str, const Type& t) {
            Printer<Stream> out = Printer<Stream>(str);
            /****** vvv This is the line the compiler doesn't like vvv ******/
            out.Exec<bool>(t);
            /****** ^^^ That is the line the compiler doesn't like ^^^ ******/
        }

        template<typename Type>
        void Test2(const Type& t) {
            Printer<std::ostream> out = Printer<std::ostream>(std::cout);
            out.Exec<bool>(t);
        }

        template<typename Stream, typename Type>
        void Test3(Stream& str, const Type& t) {
            Printer<Stream> out = Printer<Stream>(str);
            out.Exec(t);
        }

        int main() {
            Test2(5);
            Test3(std::cout, 5);
            return 0;
        }

    As it is written, gcc-4.4 gives the following:

        test.cpp: In function 'void Test1(Stream&, const Type&)':
        test.cpp:22: error: expected primary-expression before 'bool'
        test.cpp:22: error: expected ';' before 'bool'

    Test2 and Test3 both compile cleanly, and if I comment out Test1 the program executes, and I get "1 5" as I expect. So it looks like there's nothing wrong with the idea of what I want to do, but I've botched something in the implementation. If anybody could shed some light on what I'm overlooking, it would be greatly appreciated.

    Read the article

  • How to configure fastcgi with lighttpd

    - by silverburgh
    Hi, I am trying to configure FastCGI with the lighttpd server. I was able to run vanilla lighttpd like this:

        ./lighttpd -f lighttpd.conf

    Then I compiled/installed the FastCGI source, and I added the following to my lighttpd.conf:

        fastcgi.server = ( "/fastcgi_scripts/" =>
          (( #"host" => "127.0.0.1",
             #"port" => 9091,
             "check-local" => "disable",
             "bin-path" => "/usr/local/bin/cgi-fcgi",
             "docroot" => "/" # remote server may use
                              # its own docroot
          ))
        )

    But lighttpd won't start after I add the above. Can you please tell me how I can run FastCGI with lighttpd? I want to use a C program with FastCGI and lighttpd. Thank you.
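    For comparison, a minimal stanza that is known to work with a self-contained FastCGI binary looks roughly like this; the module load line is easy to miss, and bin-path normally points at your own compiled FastCGI program rather than the cgi-fcgi helper (the program path here is a placeholder):

        server.modules += ( "mod_fastcgi" )

        fastcgi.server = ( "/fastcgi_scripts/" =>
          (( "bin-path"    => "/usr/local/bin/my-fcgi-app",
             "socket"      => "/tmp/my-fcgi-app.socket",
             "check-local" => "disable"
          ))
        )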

    Read the article

  • PHP 5.2 installation with GD on CentOS 6

    - by Pratik Thakkar
    I am trying to install PHP 5.2.17 on CentOS 6.2. I have downloaded RPMs from http://www6.atomicorp.com/channels/atomic/centos/6/i386/RPMs/ The problem is that the PHP RPM seems to have GD disabled by default, so in spite of installing the php-gd RPM, GD stays disabled. Is there any way that I can enable GD? Atomicorp seems to be the only website that has PHP 5.2.17 RPMs, and I am not an advanced enough user to compile PHP myself. I would appreciate help on this.
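    A quick way to check whether the loaded PHP actually has GD, and a sketch of installing it from the same source via yum (the repository id "atomic" is assumed; adjust to however the Atomicorp repo is configured locally):

        yum --enablerepo=atomic install php php-gd
        php -m | grep -i gd
        php -r 'var_dump(function_exists("imagecreate"));'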

    Read the article

  • Can I make Apache drop a connection when matching a URL?

    - by PP
    Using mod_rewrite I can construct a rule to respond with a clean error code (e.g. 404 Not Found, 410 Gone, or 403 Forbidden) when a page is requested that I don't want to serve. But I frequently get completely bogus requests from attackers scanning my website for vulnerabilities or possible cross-site scripting entry points. For these clients I do not want to return a clean error; I'd rather do something else, like immediately dropping the connection with no response or, alternatively, holding the connection open for a lengthy period of time to frustrate the automated process. Any ideas how to accomplish this with Apache? I've read that nginx has the ability to immediately terminate a connection when a particular pattern is matched.
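    For reference, the nginx behaviour mentioned at the end is its non-standard status code 444, which closes the connection without sending any response; a minimal sketch (the pattern is only an example):

        location ~* /phpmyadmin {
            return 444;
        }

    mod_rewrite on Apache has no direct equivalent; dropping or tarpitting connections there is usually delegated to something like mod_security or a firewall rule rather than done in the rewrite engine.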

    Read the article

  • ffmpeg add two audio streams to video

    - by Tossin Hausen
    I tried this:

        ffmpeg -i /sdcard/video/transcode/video.avi -map 0:0,0 -i /sdcard/video/transcode/first.mp3 -map 1:0,1 -i /sdcard/video/transcode/second.mp3 -map 2:0,2 -acodec copy -vcodec copy /sdcard/video/transcode/Output.avi

    to add two audio streams to one video file, but ffmpeg says the number of mappings should match the number of output streams. What is wrong here? I'm trying to work with an Android build of FFmpeg, "ffmpeg for android beta". "Does not work" means that this uncommunicative Android build of FFmpeg just stops without giving any error message. The -codec copy option does not work with this build. Now I tried the same set of files with the FFmpeg command line tool that comes with Ubuntu 10.something (can't say exactly where it is from). The -codec copy option does not work with this FFmpeg either. Here is the complete output:

        m30x:~/movie/Film$ ffmpeg -i input.avi -i first.mp3 -i second.mp3 -map 0 -map 1 -map 2 -acodec copy -vcodec copy output.avi
        FFmpeg version SVN-r0.5.9-4:0.5.9-0ubuntu0.10.04.1, Copyright (c) 2000-2009 Fabrice Bellard, et al.
        configuration: --extra-version=4:0.5.9-0ubuntu0.10.04.1 --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --disable-stripping --disable-vhook --enable-runtime-cpudetect --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --enable-shared --disable-static
        libavutil 49.15. 0 / 49.15. 0
        libavcodec 52.20. 1 / 52.20. 1
        libavformat 52.31. 0 / 52.31. 0
        libavdevice 52. 1. 0 / 52. 1. 0
        libavfilter 0. 4. 0 / 0. 4. 0
        libswscale 0. 7. 1 / 0. 7. 1
        libpostproc 51. 2. 0 / 51. 2. 0
        built on Jun 12 2012 16:27:34, gcc: 4.4.3
        [NULL @ 0x93cfd10]looks like this file was encoded with (divx4/(old)xvid/opendivx) -> forcing low_delay flag
        Seems stream 0 codec frame rate differs from container frame rate: 30000.00 (30000/1) -> 25.00 (25/1)
        Input #0, avi, from 'input.avi':
          Duration: 01:30:33.00, start: 0.000000, bitrate: 901 kb/s
            Stream #0.0: Video: mpeg4, yuv420p, 576x432, 25 tbr, 25 tbn, 30k tbc
        Input #1, mp3, from 'first.mp3':
          Duration: 01:30:32.84, start: 0.000000, bitrate: 63 kb/s
            Stream #1.0: Audio: mp3, 22050 Hz, stereo, s16, 64 kb/s
        Input #2, mp3, from 'second.mp3':
          Duration: 01:30:32.84, start: 0.000000, bitrate: 63 kb/s
            Stream #2.0: Audio: mp3, 22050 Hz, stereo, s16, 64 kb/s
        Number of stream maps must match number of output streams

    Merging only one audio stream with the video stream works with both the Ubuntu and the Android version of FFmpeg. Here is the complete output:

        ffmpeg -i input.avi -i first.mp3 -map 0 -map 1 -acodec copy -vcodec copy output.avi
        FFmpeg version SVN-r0.5.9-4:0.5.9-0ubuntu0.10.04.1, Copyright (c) 2000-2009 Fabrice Bellard, et al.
        configuration: --extra-version=4:0.5.9-0ubuntu0.10.04.1 --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --disable-stripping --disable-vhook --enable-runtime-cpudetect --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --enable-shared --disable-static
        libavutil 49.15. 0 / 49.15. 0
        libavcodec 52.20. 1 / 52.20. 1
        libavformat 52.31. 0 / 52.31. 0
        libavdevice 52. 1. 0 / 52. 1. 0
        libavfilter 0. 4. 0 / 0. 4. 0
        libswscale 0. 7. 1 / 0. 7. 1
        libpostproc 51. 2. 0 / 51. 2. 0
        built on Jun 12 2012 16:27:34, gcc: 4.4.3
        [NULL @ 0x9bfad10]looks like this file was encoded with (divx4/(old)xvid/opendivx) -> forcing low_delay flag
        Seems stream 0 codec frame rate differs from container frame rate: 30000.00 (30000/1) -> 25.00 (25/1)
        Input #0, avi, from 'input.avi':
          Duration: 01:30:33.00, start: 0.000000, bitrate: 901 kb/s
            Stream #0.0: Video: mpeg4, yuv420p, 576x432, 25 tbr, 25 tbn, 30k tbc
        Input #1, mp3, from 'first.mp3':
          Duration: 01:30:32.84, start: 0.000000, bitrate: 63 kb/s
            Stream #1.0: Audio: mp3, 22050 Hz, stereo, s16, 64 kb/s
        Output #0, avi, to 'output.avi':
            Stream #0.0: Video: mpeg4, yuv420p, 576x432, q=2-31, 90k tbn, 25 tbc
            Stream #0.1: Audio: libmp3lame, 22050 Hz, stereo, s16, 64 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #1.0 -> #0.1
        Press [q] to stop encoding
        frame= 6157 fps=6156 q=-1.0 size= 31667kB time=246.28 bitrate=1053.3kbits/s

    Do you have an idea why it does not work with two audio streams? By the way,

        ffmpeg -i input_with_first_audio_stream.avi -i second.mp3 -acodec copy -vcodec copy output_two_audio_streams.avi -newaudio

    works with both versions of ffmpeg that I use, but the first audio stream is played back far too fast (x10 or more), while the second audio stream is played back correctly. Many thanks in advance, and sorry for the unconventional question and the outdated versions of ffmpeg. But I am a lamer and it is not so easy for me to compile from source (especially for the Android version). I will try to compile an up-to-date version of ffmpeg on Ubuntu, but I don't have much free time.
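    For what it's worth, on a current ffmpeg the intended result is usually expressed with one -map per output stream and stream copy for everything; a sketch using the file names from the question (a build new enough to support this syntax is assumed):

        ffmpeg -i input.avi -i first.mp3 -i second.mp3 \
               -map 0:v -map 1:a -map 2:a -c copy output.mkv

    MKV is used here because AVI handles multiple audio tracks poorly; the old 0.5-era builds quoted above instead rely on the -newaudio syntax already mentioned.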

    Read the article

  • Heartbeat won't start up from a cold boot when a failed node is present

    - by Matthew
    I currently have two Ubuntu servers running Heartbeat and DRBD. Let's say one node is down... The servers are directly connected with a 1000 Mbps crossover cable on eth1 and have access to an IP camera LAN on eth0. The node that is still functioning won't start up heartbeat and provide access to the DRBD resource; I have to manually restart heartbeat with "sudo service heartbeat restart" to get everything up and running. How can I get it to start properly from a cold boot? Here are my ha.cf and some material from the syslog; let me know if I'm missing any information that might be of help.
    http://pastebin.com/rGvzVSUq <--- Syslog
    http://pastebin.com/VqpaPSb5 <--- ha.cf
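    One thing worth checking in ha.cf for cold-boot behaviour is the initdead setting, which controls how long heartbeat waits at start-up before declaring the peer dead and taking over resources; a minimal sketch (all values are illustrative only, not taken from the pastebins above):

        # /etc/ha.d/ha.cf
        keepalive 2
        deadtime  30
        initdead  120      # start-up deadtime; commonly set to at least twice deadtime
        bcast     eth1
        node      node1 node2
        auto_failback off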

    Read the article

  • CentOS: update a package from a repository safely on a production server

    - by dan
    Hello everybody. I have a CentOS server in a production environment. I need to update the PHP package that I installed from the REMI repository. Quite easy:

        yum update php

    But what is going to happen if something goes wrong during the update? How can I roll back? What's the best technique to make sure I don't compromise a production server with an update? Is it maybe better to compile PHP from source rather than using a binary package? EDIT: I am not afraid of incompatibility between my code and the new PHP version (I have tested that well in development). I am more afraid of something going wrong while CentOS updates the binary (power cut, lost connection, unexpected conflict). Thanks, Dan
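    A low-risk pattern, sketched here assuming yum's downloadonly plugin is available (packaged as yum-downloadonly on CentOS 5 and yum-plugin-downloadonly on 6), is to record what is installed, stage the download, and only then apply it in one short transaction:

        yum install yum-downloadonly
        rpm -qa | grep ^php > php-before.txt    # record exactly which versions are installed now
        yum update --downloadonly php           # fetch the packages without touching the system
        yum update php                          # then apply while the packages are already local

    If the new build misbehaves, "rpm -Uvh --oldpackage" against the previous RPMs (from the repository or your own archive) puts the old version back.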

    Read the article

  • xkb layouts not working (in KDE?) after upgrade from Ubuntu 9.10 to 10.04

    - by Alan
    I customised my keyboard layout in 9.10 by editing the appropriate /usr/share/X11/xkb/symbols/ file. After upgrading to 10.04 I noticed the upgrade had overwritten all my modifications, so I recovered the layout and overwrote the symbols file's base entry. Sadly, KDE (and, presumably, the entire OS) seems to ignore the files altogether. The help files don't mention anything about modifying layouts anyway (and the layout switcher seems to be using setxkbmap, which uses the above path according to its man page), so I'm at a bit of a loss. Do I need to compile this into some other format somehow, or how else do I get it to work?
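    One thing that often bites after upgrades like this is the compiled-keymap cache: on Debian/Ubuntu the X server caches compiled .xkm keymaps and can keep serving a stale layout even after the symbols file has changed. A sketch of clearing it and reapplying (layout name is a placeholder):

        sudo rm /var/lib/xkb/*.xkm                 # drop the cached compiled keymaps
        setxkbmap -layout us                       # reapply; substitute your own layout name
        setxkbmap -print | xkbcomp - "$DISPLAY"    # or force the current map to be recompiled and loaded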

    Read the article

  • Why is mcrypt not included in most Linux distributions?

    - by Daniel Lopez
    libmcrypt is a powerful encryption library that is very popular with PHP-based applications. However, most Linux distributions do not include it, which causes problems for many users who then need to download and compile it separately. I am guessing that the reason it is not shipped is related to encryption or patent issues; however, the source code for the library itself is hosted and available on sourceforge.net. I have been searching unsuccessfully for a document or authoritative post that explains the exact reasons why this library is not bundled with mainstream distributions. Can anyone provide a pointer to such material, or an explanation?

    Read the article

  • Weblogic 10.3 weblogic.jar classpath issues

    - by user63063
    I am upgrading an old WLS 8.1 app to 10.3 (11g). My ant build includes only the new weblogic.jar in the compile classpath and the build runs with no issues, but when I include weblogic.jar as a library in the IDE (IntelliJ) I see many unresolved imports (for example: weblogic.xml.xpath.DOMXPath). When I check weblogic.jar I see that the classes are indeed missing from it. Compiling with verbose output revealed that by including weblogic.jar in the ant classpath, many other jars in BEA_HOME/modules are loaded onto the classpath as well (for example: com.bea.core.xml.weblogic.xpath_1.4.0.0.jar). Can anyone explain what is going on? How can I fix my IDE classpath; do I need to import all the module jars? Many of the module jars seem to be there to support old, deprecated WebLogic 8 APIs (like weblogic.xml.xpath.DOMXPath); how can I exclude these modules from my ant build? (I want to expose only the APIs I need to upgrade.) Thanks, NY
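    If the missing classes are the ones Oracle moved out of weblogic.jar into the modules directory, the documented way to get a single self-contained jar in 10.3 is to build wlfullclient.jar; a sketch, assuming a standard WL_HOME layout:

        cd $WL_HOME/server/lib
        java -jar wljarbuilder.jar    # produces wlfullclient.jar in the same directory

    wlfullclient.jar can then go on the IDE and ant classpaths in place of (or alongside) weblogic.jar, although APIs deprecated since 8.1 may still be absent from it.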

    Read the article

  • Why does Tomcat try to use the cache when compilation failed?

    - by etheros
    For some reason, it appears Tomcat is trying to hit its compilation cache even when compilation failed. For example, if I create a JSP containing nothing but Hello, <%=world%>!, I predictably get an error: org.apache.jasper.JasperException: Unable to compile class for JSP. Subsequent requests, however, alternate between this and org.apache.jasper.JasperException: org.apache.jasper.JasperException: Unable to load class for JSP. Further, if I create a JSP containing just Hello!, it of course works fine. If I then modify it to contain Hello, <%=name%>!, the response alternates between the previously mentioned compilation error and the cached Hello!. What's going on?
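    If stale translated servlets are suspected, one low-tech check is to clear Jasper's work directory for the webapp and retry, since that is where the generated .java/.class files for JSPs are cached; a sketch assuming a default layout and an app deployed under the ROOT context (paths are placeholders):

        rm -rf "$CATALINA_HOME/work/Catalina/localhost/ROOT/"
        # then restart Tomcat or reload the webapp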

    Read the article
