Search Results

Search found 11070 results on 443 pages for 'bin deployment'.


  • How to load a Bluetooth .apk file in Android G1 phone through Linux OS?

    - by Praween k
    Hi, I want to install a Bluetooth application on my G1 device from a Linux environment. Could anybody tell me the procedure for this? Whenever I install the application, the following error is thrown:

        adb install /home/parveen/workspace/BluetoothChat/bin/BluetoothChat.apk
        337 KB/s (28084 bytes in 0.081s)
        pkg: /data/local/tmp/BluetoothChat.apk
        Failure [INSTALL_FAILED_ALREADY_EXISTS]

    Please help me out of this. Thanks in advance, Praween.
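
    The INSTALL_FAILED_ALREADY_EXISTS failure usually means a build with the same package name is already on the device. A minimal sketch of the two usual ways around it, assuming the package name is com.example.android.BluetoothChat (the real name is whatever the project's manifest declares):

        # Reinstall in place, keeping the app's data (-r = replace existing application)
        adb install -r /home/parveen/workspace/BluetoothChat/bin/BluetoothChat.apk

        # Or remove the old copy first, then install cleanly
        adb uninstall com.example.android.BluetoothChat
        adb install /home/parveen/workspace/BluetoothChat/bin/BluetoothChat.apk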

    Read the article

  • Compile AMR-NB codec with RVCT for WinCE/Windows Mobile

    - by pps
    Hello everybody, I'm working on the AMR speech codec (porting/optimization). I have an ARM-optimized version for WinCE from VoiceAge and I use it as a reference in performance testing. So far, the binary produced with my lib beats the other one by around 20-30%! I use VS2008, and I have limited control over which ARM instructions the Microsoft compiler will generate, so I wanted to try an alternative compiler and see what the performance difference would be. I have the RVCT compiler, but it produces ELF binaries/object files, and I run my tests on a WinCE mobile phone (TyTn 2), so I need to find a way to run code compiled with RVCT on WinCE. Some of the options are:

    1) Produce an assembly listing (the -S option of armcc) and try to assemble it with some other assembler that can create COFF (the MS assembler for ARM).
    2) Compile and convert the generated ELF object file to a COFF object (it seems like objcopy from GNU binutils could help me with that).
    3) Using the fromelf utility supplied with RVCT, create a BIN file and somehow try to mangle the bits so I can execute them ;)

    My first attempt was to create a simple C++ file with one exported function, compile it with RVCT and then try to run that function on the smartphone. The emitted assembly cannot be assembled by the MS assembler (not only are the two incompatible, the MS assembler also rejects some of the instructions generated by the RVCT compiler; the ASR opcode in my case). Then I tried to convert the ELF object to COFF format, and I can't find any information on that. There is a GCC port for CE, and objcopy from that toolset is supposed to be able to do the task. However, I can't get it working: I tried different switches, but I have no idea what exactly I need to specify as the bfdname for the input and output formats. So I couldn't get that working either. Dumping with fromelf and using the generated BIN file seems like overkill, so I decided to ask you guys if there is anything else I should try, or maybe someone has already done a similar task and could help me. Basically, all I want to do is compile my code with the RVCT compiler and see what the performance difference is. My code has zero dependencies on any C runtime functions. Thanks!

    Read the article

  • Deploying new code live

    - by nicoX
    What's the best practice for deploying new code to a live (e-commerce) site? For now I stop Apache for roughly 10 seconds while renaming the directory public_html_new to public_html and the old one to public_html_old. This creates a short downtime before I start Apache again. The same question applies to using Git to pull the new repo into the live directory: can I pull the repo while the site is active? And what about when I need to copy a DB as well? While tarring the live site (for backup purposes) I noticed that changes occurred in the media directory, which tells me files keep changing periodically. I'm wondering whether those changes can interfere if Apache is not stopped during deployment.
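
    One common way to avoid the stop/rename/start window is to deploy each release into its own directory and point the document root at a symlink, so the switch is a single atomic rename. A minimal sketch, assuming a /var/www/releases layout and that public_html is already a symlink rather than a real directory (all paths and release names here are placeholders):

        # Upload or check out the new code next to the old release
        rsync -a build/ /var/www/releases/2013-01-15/

        # Point a temporary symlink at the new release, then atomically
        # rename it over the live one (mv -T replaces the link in one step)
        ln -s /var/www/releases/2013-01-15 /var/www/public_html_tmp
        mv -T /var/www/public_html_tmp /var/www/public_html

    Apache keeps serving the old tree until the rename lands, so there is no downtime; the DB copy still needs its own plan (for example, schema changes that stay compatible with the previous release).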

    Read the article

  • Rails Gem Install Problems: Google-Geocode

    - by spin-docta
    I'm trying to install google-geocode for Rails:

        sudo gem install google-geocode

    but I get the following error. Any suggestions?

        Building native extensions.  This could take a while...
        ERROR:  Error installing google-geocode:
            ERROR: Failed to build gem native extension.

        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb
        checking for iconv.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/include,/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
        checking for libxml/parser.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/include,/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
        checking for libxslt/xslt.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/include,/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
        checking for libexslt/exslt.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/include,/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
        checking for xmlParseDoc() in -lxml2... no
        libxml2 is missing.  try 'port install libxml2' or 'yum install libxml2'
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers.  Check the mkmf.log file for more
        details.  You may need configuration options.

        Provided configuration options:
            --with-opt-dir
            --without-opt-dir
            --with-opt-include
            --without-opt-include=${opt-dir}/include
            --with-opt-lib
            --without-opt-lib=${opt-dir}/lib
            --with-make-prog
            --without-make-prog
            --srcdir=.
            --curdir
            --ruby=/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
            --with-iconv-dir
            --without-iconv-dir
            --with-iconv-include
            --without-iconv-include=${iconv-dir}/include
            --with-iconv-lib
            --without-iconv-lib=${iconv-dir}/lib
            --with-xml2-dir
            --without-xml2-dir
            --with-xml2-include
            --without-xml2-include=${xml2-dir}/include
            --with-xml2-lib
            --without-xml2-lib=${xml2-dir}/lib
            --with-xslt-dir
            --without-xslt-dir
            --with-xslt-include
            --without-xslt-include=${xslt-dir}/include
            --with-xslt-lib
            --without-xslt-lib=${xslt-dir}/lib
            --with-xml2lib
            --without-xml2lib

        Gem files will remain installed in /Library/Ruby/Gems/1.8/gems/nokogiri-1.4.0 for inspection.
        Results logged to /Library/Ruby/Gems/1.8/gems/nokogiri-1.4.0/ext/nokogiri/gem_make.out
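
    Note that the failure is actually in the nokogiri dependency: the headers are found, but the linker can't find a usable libxml2. A rough sketch of the usual remedy on a MacPorts machine, assuming the libraries end up under /opt/local (mkmf.log shows which paths were actually tried):

        # Install the XML libraries the native extension links against
        sudo port install libxml2 libxslt

        # Build nokogiri against those copies explicitly, then retry the original gem
        sudo gem install nokogiri -- --with-xml2-dir=/opt/local --with-xslt-dir=/opt/local
        sudo gem install google-geocode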

    Read the article

  • What are the benefits of running chef-server instead of chef-solo?

    - by strife25
    I am looking at automated deployment solutions for my team and have been playing with Chef for the past few days. I've been able to get a simple web app up and running from a base Red Hat VM using chef-solo. Our end goal is to use Chef (or another system) to automatically deploy application topologies to the cloud as we run builds. Our process would basically run like so:

    1. Our web app code, dependencies, and Chef cookbooks are stored in SCM.
    2. A build is executed and creates a single package for images to acquire and test against.
    3. The build engine then deploys new cloud images that run a chef client to get the packages installed. The images acquire the cookbooks from SCM or the Chef server and install everything needed to get up and running.

    What are the benefits and/or use cases for getting a Chef server running? Are there any major benefits to having a Chef server hold and serve the cookbooks vs. using chef-solo and having a script that will pull the cookbooks from SCM?
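
    For reference, the chef-solo half of that comparison stays very small: each image only needs a config pointing at a local cookbook checkout and a run list. A rough sketch, assuming the cookbook repo URL, the /var/chef path and the recipe name are all placeholders:

        # Fetch the cookbooks straight from SCM on the freshly booted image
        git clone git://scm.example.com/cookbooks.git /var/chef/cookbooks

        # Minimal solo configuration and node attributes
        echo 'cookbook_path "/var/chef/cookbooks"' > /etc/chef/solo.rb
        echo '{ "run_list": [ "recipe[webapp]" ] }' > /etc/chef/node.json

        # Converge the node without any Chef server involved
        chef-solo -c /etc/chef/solo.rb -j /etc/chef/node.json

    Broadly, what a Chef server adds on top of this is central storage of cookbooks, roles and node data plus search across nodes, at the cost of running and securing one more service.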

    Read the article

  • Flexible classroom environments (OS, Office)

    - by HannesFostie
    I work in the IT department of a training center. We still offer XP and Office 2003 trainings but also offer Vista, Win7 and Office 2007. Currently we use VMs on VMware Server, but this is obviously not a superb choice. We're thinking of implementing something like VDI (brainstorm phase, we hardly have any details), but I decided to check here whether people have some clever alternatives. Requirements:

    * Flexible when it comes to deployment
    * Centralized management would be a big plus
    * Allow for different software, whether compatible or not (all of Office except Outlook can be installed simultaneously; for Outlook you need to choose between 2003 and 2007)
    * Allow for different OSes

    We have a big enough budget to implement a proper SAN environment to accommodate the virtualization of the solution, whatever kind it may be. A support contract will probably be necessary as well, because we need to be able to offer quick solutions to problems, and with only 2 sysadmins that is simply impossible to guarantee.

    Read the article

  • Does RIA Services require an extra install on the server?

    - by jkohlhepp
    If I want to deploy an ASP.NET application that hosts RIA Services endpoints for a Silverlight application, do I have to install anything extra on the web server? Or is it just some extra DLLs that can be deployed to my application's Bin folder? I know that when you are doing RIA Services development there are additional toolkits and what-not to install, but I'm not sure whether those are needed on the server.

    Read the article

  • phpThumb cannot find ImageMagick / Imagick

    - by fistameeny
    Hi, I'm having a problem with phpThumb. The documentation says that to get the best out of it you should use ImageMagick / Imagick. I've got this installed on the server (running CentOS 5.1): I can run convert --version and get the right info back, and which convert returns /usr/bin/convert. However, phpThumb can't locate the convert program - the demos show: (requires ImageMagick, this server is running "n/a" so it will not work). Does anyone have any pointers on how to fix this? Cheers, Matt
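
    A frequent cause is that convert is on root's PATH but not on the account the web server runs as, so phpThumb's auto-detection comes up empty. A quick diagnostic sketch, assuming Apache runs as the apache user on this CentOS box (adjust the user and paths to match the machine):

        # Confirm the binary works for the account PHP actually runs under
        sudo -u apache /usr/bin/convert --version

        # If that fails, check permissions on the binary and its directory
        ls -l /usr/bin/convert

    If the binary itself checks out, phpThumb's config file also has a setting for an explicit ImageMagick path (imagemagick_path in phpThumb.config.php, if I recall the key correctly) that bypasses auto-detection.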

    Read the article

  • Releasing an OLE IStorage file handle in C#

    - by Bernard Darnton
    I'm trying to embed a PDF file into a Word document using the OLE technique described here: http://blogs.msdn.com/brian_jones/archive/2009/07/21/embedding-any-file-type-like-pdf-in-an-open-xml-file.aspx I've tried to implement the C++ code provided there in C# so that the whole project's in one place, and I am almost there except for one roadblock. When I try to feed the generated OLE object binary data into the Word document I get an IOException:

        IOException: The process cannot access the file 'C:\Wherever\Whatever.pdf.bin' because it is being used by another process.

    There is a file handle open on the .bin file and I don't know how to get rid of it. I don't know a huge amount about COM - I'm winging it here - and I don't know where the file handle is or how to release it. Here's what my C#-ised code looks like. What am I missing?

        public void ExportOleFile(string oleOutputFileName, string emfOutputFileName)
        {
            OLE32.IStorage storage;
            var result = OLE32.StgCreateStorageEx(
                oleOutputFileName,
                OLE32.STGM.STGM_READWRITE | OLE32.STGM.STGM_SHARE_EXCLUSIVE | OLE32.STGM.STGM_CREATE | OLE32.STGM.STGM_TRANSACTED,
                OLE32.STGFMT.STGFMT_DOCFILE,
                0,
                IntPtr.Zero,
                IntPtr.Zero,
                ref OLE32.IID_IStorage,
                out storage);

            var CLSID_NULL = Guid.Empty;
            OLE32.IOleObject pOle;
            result = OLE32.OleCreateFromFile(
                ref CLSID_NULL,
                _inputFileName,
                ref OLE32.IID_IOleObject,
                OLE32.OLERENDER.OLERENDER_NONE,
                IntPtr.Zero,
                null,
                storage,
                out pOle);

            result = OLE32.OleRun(pOle);

            IntPtr unknownFromOle = Marshal.GetIUnknownForObject(pOle);
            IntPtr unknownForDataObj;
            Marshal.QueryInterface(unknownFromOle, ref OLE32.IID_IDataObject, out unknownForDataObj);
            var pdo = Marshal.GetObjectForIUnknown(unknownForDataObj) as IDataObject;

            var fetc = new FORMATETC();
            fetc.cfFormat = (short)OLE32.CLIPFORMAT.CF_ENHMETAFILE;
            fetc.dwAspect = DVASPECT.DVASPECT_CONTENT;
            fetc.lindex = -1;
            fetc.ptd = IntPtr.Zero;
            fetc.tymed = TYMED.TYMED_ENHMF;

            var stgm = new STGMEDIUM();
            stgm.unionmember = IntPtr.Zero;
            stgm.tymed = TYMED.TYMED_ENHMF;

            pdo.GetData(ref fetc, out stgm);

            var hemf = GDI32.CopyEnhMetaFile(stgm.unionmember, emfOutputFileName);

            storage.Commit((int)OLE32.STGC.STGC_DEFAULT);
            pOle.Close(0);

            GDI32.DeleteEnhMetaFile(stgm.unionmember);
            GDI32.DeleteEnhMetaFile(hemf);
        }

    Read the article

  • How to make ensime work in windows?

    - by Eastsun
    I'm new to Emacs and I want to use ENSIME on Windows. I gave it a try but it doesn't work. It seems that it doesn't work because there is a *nix-format file named "\ensime\bin\server.sh". I'd very much appreciate it if someone could give me some tips.

    Read the article

  • IIS 7 - Provisioning portal

    - by Doug
    I want to set up our production IIS environments with a provisioning portal, to ensure that deployment staff always set up sites in a uniform configuration and don't actually have remote access to the servers directly. What is the best 'simple' provisioning tool for such a purpose? Do people write their own using something like PowerShell remoting? I don't want to install a tool like HELM or similar, as it feels like it creates unnecessary bloat on top of a production environment. Features should include:

    * create a new website and app pool combo
    * restart, start and stop application pools
    * change bindings on websites

    Read the article

  • Using zc.buildout, how do I install a tarball from a website?

    - by Brad Wright
    I'm trying to get zc.buildout to install Gunicorn from source. Using the following configuration:

        [gunicorn]
        recipe = collective.recipe.distutils
        url = http://github.com/benoitc/gunicorn/tarball/master

    results in the following error:

        SystemError: ('Failed', '"/usr/bin/python" setup.py -q install --install-purelib="/mnt/hgfs/Projects/intranation/parts/site-packages" --install-platlib="/mnt/hgfs/Projects/intranation/parts/site-packages"')

    Providing a --install-dir or --prefix doesn't help. Is there a recipe for zc.buildout that downloads a tarball and installs it via easy_install or similar?
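
    For what it's worth, plain easy_install does accept a tarball URL directly, which can be a useful sanity check outside buildout before wiring the same source into a part (the interpreter path is taken from the error above; whether the GitHub master tarball builds cleanly is the thing being tested):

        # Sanity check outside buildout: install straight from the tarball URL
        /usr/bin/easy_install http://github.com/benoitc/gunicorn/tarball/master

    easy_install also takes -d/--install-dir to target a specific directory, which is roughly what the failing recipe was attempting, but that directory then needs to be on PYTHONPATH.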

    Read the article

  • json problems with making a ruby on rails application

    - by Prince Merdz
    So I'm using Bitnami to learn Ruby on Rails. I have also previously tried the manual installation of Ruby and Rails and was met by the same problem, so I thought I should first try the easy package deal of Bitnami. Anyway, my problem with json is that it causes bundle install to fail. First, the automatic bundle install that rails new runs fails because of an SSL error, which is easily solved by changing the source in the Gemfile from https to http. However, when I then run bundle install it hits another error when it tries to install json:

        C:\RubyStack-3.2.7-0\projects\testing>bundle install
        Fetching gem metadata from http://rubygems.org/.........
        Using rake (0.9.2.2)
        Using i18n (0.6.0)
        Using multi_json (1.3.6)
        Installing activesupport (3.2.8)
        Using builder (3.0.0)
        Installing activemodel (3.2.8)
        Using erubis (2.7.0)
        Using journey (1.0.4)
        Using rack (1.4.1)
        Using rack-cache (1.2)
        Using rack-test (0.6.1)
        Using hike (1.2.1)
        Using tilt (1.3.3)
        Using sprockets (2.1.3)
        Installing actionpack (3.2.8)
        Using mime-types (1.19)
        Using polyglot (0.3.3)
        Using treetop (1.4.10)
        Using mail (2.4.4)
        Installing actionmailer (3.2.8)
        Using arel (3.0.2)
        Using tzinfo (0.3.33)
        Installing activerecord (3.2.8)
        Installing activeresource (3.2.8)
        Using bundler (1.1.5)
        Using coffee-script-source (1.3.3)
        Using execjs (1.4.0)
        Using coffee-script (2.2.0)
        Using rack-ssl (1.3.2)
        Installing json (1.7.5) with native extensions
        Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension.

        C:/RUBYST~1.7-0/ruby/bin/ruby.exe extconf.rb
        creating Makefile

        make
        0 [main] echo 5244 open_stackdumpfile: Dumping stack trace to echo.exe.stackdump
        make: *** [generator-i386-mingw32.def] Error 5

        Gem files will remain installed in C:/RUBYST~1.7-0/ruby/lib/ruby/gems/1.9.1/gems/json-1.7.5 for inspection.
        Results logged to C:/RUBYST~1.7-0/ruby/lib/ruby/gems/1.9.1/gems/json-1.7.5/ext/json/ext/generator/gem_make.out

        An error occured while installing json (1.7.5), and Bundler cannot continue.
        Make sure that `gem install json -v '1.7.5'` succeeds before bundling.

    This is the gem_make.out file it produces after trying to install json (by the way, Windows also reports that echo.exe has stopped working while running gem install json):

        C:/RUBYST~1.7-0/ruby/bin/ruby.exe extconf.rb
        creating Makefile

        make
        0 [main] echo 5244 open_stackdumpfile: Dumping stack trace to echo.exe.stackdump
        make: *** [generator-i386-mingw32.def] Error 5

    I can't even start learning RoR because the setup is already a huge pain. (By the way, I have no prior experience with web frameworks, just desktop programming.) Help?

    Read the article

  • Integers not properly returned from a property list (plist) array in Objective-C

    - by Gaurav
    In summary, I am having a problem where I read what I expect to be an NSNumber from an NSArray contained in a property list, and instead of getting a number such as '1', I get what looks to be a memory address (i.e. '61879840'). The numbers are clearly correct in the property list. Below is my process for creating the property list and reading it back.

    Creating the property list

    I have created a simple Objective-C property list with arrays of integers within one root array:

        <array>
            <array>
                <integer>1</integer>
                <integer>2</integer>
            </array>
            <array>
                <integer>1</integer>
                <integer>2</integer>
                <integer>5</integer>
            </array>
            ... more arrays with integers ...
        </array>

    The arrays are NSArray objects and the integers are NSNumber objects. The property list has been created and serialized using the following code:

        // factorArray is an NSArray that contains NSArrays of NSNumbers as described above
        // serialize and compress factorArray as a property list, Factors-bin.plist
        NSString *error;
        NSString *rootPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
        NSString *plistPath = [rootPath stringByAppendingPathComponent:@"Factors-bin.plist"];
        NSData *plistData = [NSPropertyListSerialization dataFromPropertyList:factorArray
                                                                        format:NSPropertyListBinaryFormat_v1_0
                                                              errorDescription:&error];

    Inspecting the created plist, all values and types are correct.

    Reading the property list

    The property list is read in as Data and then converted to an NSArray:

        NSString *path = [[NSBundle mainBundle] pathForResource:@"Factors" ofType:@"plist"];
        NSData *plistData = [[NSData alloc] initWithContentsOfFile:path];
        NSPropertyListFormat format;
        NSString *error = nil;
        NSArray *factorData = (NSArray *)[NSPropertyListSerialization propertyListFromData:plistData
                                                                           mutabilityOption:NSPropertyListImmutable
                                                                                     format:&format
                                                                           errorDescription:&error];

    Cycling through factorData to see what it contains is where I see the erroneous integers:

        for (int i = 0; i < 10; i++) {
            NSArray *factorList = (NSArray *)[factorData objectAtIndex:i];
            NSLog(@"Factors of %d\n", i + 1);
            for (int j = 0; j < [factorList count]; j++) {
                NSLog(@" %d\n", (NSNumber *)[factorList objectAtIndex:j]);
            }
        }

    I see all the correct number of values, but the values themselves are incorrect, i.e.:

        Factors of 3
          61879840 (should be 1)
          61961200 (should be 3)
        Factors of 4
          61879840 (should be 1)
          61943472 (should be 2)
          61943632 (should be 4)
        Factors of 5
          61879840 (should be 1)
          61943616 (should be 5)

    Read the article

  • Script executes successfully in commandline but not as a cronjob

    - by JasonOng
    I've a bash script that runs a Ruby script that fetches my Twitter feeds.

        ## /home/username/twittercron
        #!/bin/bash
        cd /home/username/twitter
        ruby twitter.rb friends

    It runs successfully on the command line:

        /home/username/twittercron

    But when I try to run it as a cronjob, it runs but isn't able to fetch the feeds.

        ## crontab -e
        */15 * * * * * /home/username/twittercron

    The script has been chmod +x'd. Not sure why it behaves like this. Any ideas?
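
    Two things worth checking with entries like this: a crontab line takes exactly five time fields before the command, and cron runs with a very sparse environment (PATH is usually just /usr/bin:/bin), so anything the script relies on has to be spelled out. A sketch of a defensively written entry, with the PATH value and log location being arbitrary choices:

        # m/h/dom/mon/dow, then the command; capture output so failures are visible
        PATH=/usr/local/bin:/usr/bin:/bin
        */15 * * * * /home/username/twittercron >> /tmp/twittercron.log 2>&1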

    Read the article

  • Putting solution build output in a different directory !!

    - by Rajesh
    Hi all, I have an issue building my solution (Hardcopy.sln). The solution consists of many modules, and during a whole-solution build each module directs its output to its own bin/debug/ folder. I want to redirect the output of each module to a different location - how can I do that? I am using the MSBuild utility to build the solution from my NAnt scripts, and I want to do the redirection through MSBuild from within NAnt. Is there any way? Thanks, Rajesh
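
    One approach that fits an MSBuild-from-NAnt setup is to override the output directory property on the msbuild command line, since a property passed with /p: applies to every project in the solution. A minimal sketch of the invocation (the target directory is a placeholder, and the same property can be passed from whatever exec or msbuild task the NAnt script already uses):

        msbuild Hardcopy.sln /t:Rebuild /p:Configuration=Debug /p:OutDir=C:\build\output\

    OutDir wants the trailing backslash; OutputPath is the equivalent per-project setting inside each project file if separate locations per module are preferred.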

    Read the article

  • Upgraded from VS 2008 -> VS 2010. Can't Connect to SQL Server in Staging Environment

    - by Bob Kaufman
    I have a test application written in C#/ASP.NET that I've developed using Visual Studio 2008 Professional/.NET 3.5 which connects to a local SQL Server 2008 Express instance. I upgraded the development machine to Visual Studio 2010 Professional maintaining .NET 3.5 and everything in the development environment continues to work correctly. Upon deployment of the new app to an internal staging machine, that app cannot connect to its local SQL Server 2008 Express database. I get the customary "server not found" error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible... Does something need to be upgraded on the staging machine to be able to host a Visual Studio 2010/.NET 3.5 application?

    Read the article

  • Unable to load <mytest> Because it is not located under Appbase

    - by Sam
    I have created an NUnit project (NunitLoginTest.nunit) by selecting my test project in the nunit\bin directory, and now when I try to load that project it gives me the following error:

        Unable to load <mytest> because it is not located under AppBase; could not load file or assembly "nunitLogintest" or one of its dependencies. The system cannot find the specified path.

    Does anyone have any idea what this is related to? I have also checked my config file.

    Read the article

  • Is using Capistrano for user maintenance tasks on university lab feasible?

    - by danielkza
    I've been looking around for tools to replace some legacy scripts for creating and maintaining accounts in a university computer lab ecosystem consisting of things like:

    * LDAP and Kerberos for authentication
    * User home storage and web pages
    * Entries in an SQL database
    * Printing quotas
    * Mailing lists, etc.

    I'd also like to automate machine and VM membership for Kerberos and Puppet if possible. I've found Capistrano, and while the basic principle of running tasks on remote hosts through SSH seems to fit, and the DSL in Ruby looks quite nice, most of the documentation I've found relates to application deployment, not generic tasks. I'm also not aware of any good way to parameterize tasks so I can pass in the user information for account creation. Is there something about Capistrano I am missing, or is it not the correct tool for this job? Are there any more useful alternatives?
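
    On the parameterization point specifically: per-run values are commonly handed to Capistrano tasks through environment variables, which the Ruby task body can read back via ENV. A sketch of what an invocation might look like, with the task name and variable names being hypothetical:

        # Hypothetical accounts:create task; values exported in the shell environment
        # are readable inside the Ruby task body via ENV["USERNAME"] and friends.
        USERNAME=jsmith GROUP=students QUOTA=500M cap accounts:create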

    Read the article

  • Can't get simple Apache VHost up and running

    - by TK Kocheran
    Unfortunately, I can't seem to get a simple Apache VHost online. I used to simply have one VHost which bound to everything (<VirtualHost *:80>), but this isn't appropriate for security anymore. I need to have one VHost for localhost requests (i.e. my dev server) and one for incoming requests via my domain name. Here's my new VHost:

        NameVirtualHost domain1.com
        <VirtualHost domain1.com:80>
            DocumentRoot /var/www
            ServerName domain1.com
        </VirtualHost>
        <VirtualHost domain2.com:80>
            DocumentRoot /var/www
            ServerName domain2.com
        </VirtualHost>

    After I restart my server, I see the following errors in my log:

        [Wed Feb 16 11:26:36 2011] [error] [client ####.###.###.###] File does not exist: /htdocs
        [Wed Feb 16 11:26:36 2011] [error] [client ####.###.###.###] File does not exist: /htdocs

    What am I doing wrong?

    EDIT: As per the answer given below, I have modified my configuration. Here are my configuration files.

    /etc/apache2/ports.conf:

        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    Here are my actual defined sites.

    /etc/apache2/sites-enabled/000-localhost:

        NameVirtualHost 127.0.0.1:80
        <VirtualHost 127.0.0.1:80>
            ServerAdmin #########
            DocumentRoot /var/www

            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog /var/log/apache2/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog /var/log/apache2/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

            RewriteEngine On
            RewriteLog "/var/log/apache2/mod_rewrite.log"
            RewriteLogLevel 9

            <Location />
                <Limit GET POST PUT>
                    order allow,deny
                    allow from all
                    deny from 65.34.248.110
                    deny from 69.122.239.3
                    deny from 58.218.199.147
                    deny from 65.34.248.110
                </Limit>
            </Location>
        </VirtualHost>

    /etc/apache2/sites-enabled/001-rfkrocktk.dyndns.org:

        NameVirtualHost rfkrocktk.dyndns.org:80
        <VirtualHost rfkrocktk.dyndns.org:80>
            DocumentRoot /var/www
            ServerName rfkrocktk.dyndns.org
        </VirtualHost>

    And, just for kicks, my main file, /etc/apache2/apache2.conf:

        #
        # Based upon the NCSA server configuration files originally by Rob McCool.
        #
        # This is the main Apache server configuration file. It contains the
        # configuration directives that give the server its instructions.
        # See http://httpd.apache.org/docs/2.2/ for detailed information about
        # the directives.
        #
        # Do NOT simply read the instructions in here without understanding
        # what they do. They're here only as hints or reminders. If you are unsure
        # consult the online docs. You have been warned.
        #
        # The configuration directives are grouped into three basic sections:
        #  1. Directives that control the operation of the Apache server process as a
        #     whole (the 'global environment').
        #  2. Directives that define the parameters of the 'main' or 'default' server,
        #     which responds to requests that aren't handled by a virtual host.
        #     These directives also provide default values for the settings
        #     of all virtual hosts.
        #  3. Settings for virtual hosts, which allow Web requests to be sent to
        #     different IP addresses or hostnames and have them handled by the
        #     same Apache server process.
        #
        # Configuration and logfile names: If the filenames you specify for many
        # of the server's control files begin with "/" (or "drive:/" for Win32), the
        # server will use that explicit path. If the filenames do *not* begin
        # with "/", the value of ServerRoot is prepended -- so "/var/log/apache2/foo.log"
        # with ServerRoot set to "" will be interpreted by the
        # server as "//var/log/apache2/foo.log".
        #

        ### Section 1: Global Environment
        #
        # The directives in this section affect the overall operation of Apache,
        # such as the number of concurrent requests it can handle or where it
        # can find its configuration files.
        #

        #
        # ServerRoot: The top of the directory tree under which the server's
        # configuration, error, and log files are kept.
        #
        # NOTE! If you intend to place this on an NFS (or otherwise network)
        # mounted filesystem then please read the LockFile documentation (available
        # at <URL:http://httpd.apache.org/docs-2.1/mod/mpm_common.html#lockfile>);
        # you will save yourself a lot of trouble.
        #
        # Do NOT add a slash at the end of the directory path.
        #
        ServerRoot "/etc/apache2"

        #
        # The accept serialization lock file MUST BE STORED ON A LOCAL DISK.
        #
        #<IfModule !mpm_winnt.c>
        #<IfModule !mpm_netware.c>
        LockFile /var/lock/apache2/accept.lock
        #</IfModule>
        #</IfModule>

        #
        # PidFile: The file in which the server should record its process
        # identification number when it starts.
        # This needs to be set in /etc/apache2/envvars
        #
        PidFile ${APACHE_PID_FILE}

        #
        # Timeout: The number of seconds before receives and sends time out.
        #
        Timeout 300

        #
        # KeepAlive: Whether or not to allow persistent connections (more than
        # one request per connection). Set to "Off" to deactivate.
        #
        KeepAlive On

        #
        # MaxKeepAliveRequests: The maximum number of requests to allow
        # during a persistent connection. Set to 0 to allow an unlimited amount.
        # We recommend you leave this number high, for maximum performance.
        #
        MaxKeepAliveRequests 100

        #
        # KeepAliveTimeout: Number of seconds to wait for the next request from the
        # same client on the same connection.
        #
        KeepAliveTimeout 15

        ##
        ## Server-Pool Size Regulation (MPM specific)
        ##

        # prefork MPM
        # StartServers: number of server processes to start
        # MinSpareServers: minimum number of server processes which are kept spare
        # MaxSpareServers: maximum number of server processes which are kept spare
        # MaxClients: maximum number of server processes allowed to start
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>

        # worker MPM
        # StartServers: initial number of server processes to start
        # MaxClients: maximum number of simultaneous client connections
        # MinSpareThreads: minimum number of worker threads which are kept spare
        # MaxSpareThreads: maximum number of worker threads which are kept spare
        # ThreadsPerChild: constant number of worker threads in each server process
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_worker_module>
            StartServers          2
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>

        # event MPM
        # StartServers: initial number of server processes to start
        # MaxClients: maximum number of simultaneous client connections
        # MinSpareThreads: minimum number of worker threads which are kept spare
        # MaxSpareThreads: maximum number of worker threads which are kept spare
        # ThreadsPerChild: constant number of worker threads in each server process
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_event_module>
            StartServers          2
            MaxClients          150
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxRequestsPerChild   0
        </IfModule>

        # These need to be set in /etc/apache2/envvars
        User ${APACHE_RUN_USER}
        Group ${APACHE_RUN_GROUP}

        #
        # AccessFileName: The name of the file to look for in each directory
        # for additional configuration directives. See also the AllowOverride
        # directive.
        #
        AccessFileName .htaccess

        #
        # The following lines prevent .htaccess and .htpasswd files from being
        # viewed by Web clients.
        #
        <Files ~ "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>

        #
        # DefaultType is the default MIME type the server will use for a document
        # if it cannot otherwise determine one, such as from filename extensions.
        # If your server contains mostly text or HTML documents, "text/plain" is
        # a good value. If most of your content is binary, such as applications
        # or images, you may want to use "application/octet-stream" instead to
        # keep browsers from trying to display binary files as though they are
        # text.
        #
        DefaultType text/plain

        #
        # HostnameLookups: Log the names of clients or just their IP addresses
        # e.g., www.apache.org (on) or 204.62.129.132 (off).
        # The default is off because it'd be overall better for the net if people
        # had to knowingly turn this feature on, since enabling it means that
        # each client request will result in AT LEAST one lookup request to the
        # nameserver.
        #
        HostnameLookups Off

        # ErrorLog: The location of the error log file.
        # If you do not specify an ErrorLog directive within a <VirtualHost>
        # container, error messages relating to that virtual host will be
        # logged here. If you *do* define an error logfile for a <VirtualHost>
        # container, that host's errors will be logged there and not here.
        #
        ErrorLog /var/log/apache2/error.log

        #
        # LogLevel: Control the number of messages logged to the error_log.
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        #
        LogLevel warn

        # Include module configuration:
        Include /etc/apache2/mods-enabled/*.load
        Include /etc/apache2/mods-enabled/*.conf

        # Include all the user configurations:
        Include /etc/apache2/httpd.conf

        # Include ports listing
        Include /etc/apache2/ports.conf

        #
        # The following directives define some format nicknames for use with
        # a CustomLog directive (see below).
        # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i
        #
        LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %>s %O" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent

        #
        # Define an access log for VirtualHosts that don't define their own logfile
        CustomLog /var/log/apache2/other_vhosts_access.log vhost_combined

        # Include of directories ignores editors' and dpkg's backup files,
        # see README.Debian for details.

        # Include generic snippets of statements
        Include /etc/apache2/conf.d/

        # Include the virtual host configurations:
        Include /etc/apache2/sites-enabled/

    What else do I need to do to fix it? Should I be telling Apache to listen on 127.0.0.1:80, or isn't it already listening there?
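
    When name-based vhosts misbehave like this, it helps to ask Apache itself how it parsed them before digging further. A small diagnostic sketch (Debian/Ubuntu command names assumed):

        # Show how NameVirtualHost/VirtualHost resolved into port and name mappings
        apache2ctl -S

        # Confirm something is actually bound to port 80 on the expected addresses
        sudo netstat -tlnp | grep :80

    The usual convention for name-based hosting is a single NameVirtualHost *:80 with <VirtualHost *:80> blocks distinguished purely by ServerName, rather than putting hostnames in the VirtualHost address itself.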

    Read the article

  • What would cause my Silverlight .xap to quadruple in size suddenly?

    - by Edward Tanguay
    I've been working on a Silverlight application. I just noticed that the .xap file is now four times as big as it was - what could have caused that? Here's some other info:

    * There now seem to be many more language settings in the bin/Release directory.
    * I checked "Reduce size of .xap" under Properties/Silverlight, but that just brought it down from 1300 to 1200.
    * I reference the System.Windows.Controls.Toolkit dll, but I was doing that even when it was 325K.

    Screenshot of build directories before and after:
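
    Since a .xap is just a zip archive, one quick way to see where the growth went is to list its contents and compare against an older build (the file name here is a placeholder for whatever the project produces):

        # A .xap is a zip; list what's packed inside and compare with an old build
        unzip -l bin/Release/MyApp.xap

    Extra satellite resource assemblies, i.e. the per-language folders now showing up in bin/Release, are a common culprit for this kind of jump when a referenced library ships localized resources.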

    Read the article

  • MXMLC and 64bit JRE

    - by sascha
    Are there any workarounds to get the Flex compiler to work with a 64-bit JRE? If I use an MXMLC task in an Ant buildfile in Eclipse it works fine, but if I try to use MXMLC from the command line (or try the Run... command from FDT in Eclipse) it fails, telling me:

        Error loading: C:\Program Files\Java\jrrt-1.6.0\jre\bin\jrockit\jvm.dll

    (This is with a 64-bit JRockit runtime, but that shouldn't matter.)
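
    The command-line compilers read their JVM settings from a jvm.config file next to mxmlc in the SDK's bin directory, so one workaround that is often suggested is pointing its java.home entry at a 32-bit JRE instead. A sketch of what that edit might look like (the path is only an example; check the file that ships with your SDK):

        # <flex_sdk>/bin/jvm.config - point the compiler at a 32-bit JRE
        java.home=C:/Program Files (x86)/Java/jre6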

    Read the article
