Search Results

Search found 10379 results on 416 pages for 'handle'.


  • [Delphi] open text files in one application

    - by Remus Rigo
    Hi all, I want to write a text editor and assign .txt files to it. My problem is that I want to have only one instance running, and when a new file is opened, to send the filename to the first app that is already running (I want to do this using a mutex). Here is a small test. The DPR looks like this:

        uses
          Windows, Messages, SysUtils, Forms,
          wndMain in 'wndMain.pas' {frmMain};

        {$R *.res}

        var
          PrevWindow : HWND;
          S : string;
          CData : TCopyDataStruct;
        begin
          PrevWindow := 0;
          if OpenMutex(MUTEX_ALL_ACCESS, False, 'MyMutex') <> 0 then
          begin
            repeat
              PrevWindow := FindWindow('TfrmMain', nil);
            until PrevWindow <> Application.Handle;
            if IsWindow(PrevWindow) then
            begin
              SendMessage(PrevWindow, WM_SYSCOMMAND, SC_RESTORE, 0);
              BringWindowToTop(PrevWindow);
              SetForegroundWindow(PrevWindow);
              if FileExists(ParamStr(1)) then
              begin
                S := ParamStr(1);
                CData.dwData := 0;
                CData.lpData := PChar(S);
                CData.cbData := 1 + Length(S);
                SendMessage(PrevWindow, WM_COPYDATA, 0, DWORD(@CData));
              end;
            end;
          end
          else
            CreateMutex(nil, False, 'MyMutex');
          Application.Initialize;
          Application.CreateForm(TfrmMain, frmMain);
          Application.Run;
        end.

    And the PAS:

        type
          TfrmMain = class(TForm)
            memo: TMemo;
          private
            procedure WMCopyData(var msg : TWMCopyData); message WM_COPYDATA;
          public
            procedure OpenFile(f : String);
          end;

        var
          frmMain: TfrmMain;

        implementation

        {$R *.dfm}

        procedure TfrmMain.WMCopyData(var msg : TWMCopyData);
        var
          f : String;
        begin
          f := PChar(msg.CopyDataStruct.lpData);
          //ShowMessage(f);
          OpenFile(f);
        end;

        procedure TfrmMain.OpenFile(f : String);
        begin
          memo.Clear;
          memo.Lines.LoadFromFile(f);
          Caption := f;
        end;

    This code should be OK, but when I open a text file from the second app, the first app receives a message like this: thanks
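    For reference, one thing worth checking in a setup like this is the cbData value: WM_COPYDATA expects a byte count, so on a Unicode version of Delphi (2009 and later, where Char is two bytes) 1 + Length(S) is only half the buffer. A minimal sketch of the sending side under that assumption, with the receiver rebuilding the string from cbData instead of relying on a terminating zero:

        // hedged sketch: pass the byte count, not the character count
        S := ParamStr(1);
        CData.dwData := 0;
        CData.cbData := (Length(S) + 1) * SizeOf(Char); // bytes, including the #0
        CData.lpData := PChar(S);
        SendMessage(PrevWindow, WM_COPYDATA, 0, LPARAM(@CData));

        // receiving side: take the length from cbData
        SetString(f, PChar(msg.CopyDataStruct.lpData),
                  msg.CopyDataStruct.cbData div SizeOf(Char) - 1);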

  • C# Different class objects in one list

    - by jeah_wicer
    I have looked around some now to find a solution to this problem. I found several ways that could solve it, but to be honest I didn't see which of them would be considered the "right" C# or OOP way of solving it. My goal is not only to solve the problem but also to develop a good set of code standards, and I'm fairly sure there's a standard way to handle this problem.

    Let's say I have 2 types of printer hardware, with their respective classes and ways of communicating: PrinterType1 and PrinterType2. I would also like to be able to add another type later if necessary. One step up in abstraction, they have much in common. It should be possible to send a string to each one of them, for example. They both have variables in common and variables unique to each class. (One, for instance, communicates via COM port and has such an object, while the other one communicates via TCP and has such an object.)

    I would, however, like to just keep a List of all those printers and be able to go through the list and perform things like Send(string message) on all printers regardless of type. I would also like to access variables like PrinterList[0].Name that are the same for both objects. However, I would also in some places like to access data that is specific to the object itself (for instance in the settings window of the application, where the COM port name is set for one object and the IP/port number for another).

    So, in short, something like:

        In common:                 Name, Send()
        Specific to PrinterType1:  Port
        Specific to PrinterType2:  IP

    And I wish to, for instance, do Send() on all objects regardless of type and the number of objects present.

    I've read about polymorphism, generics, interfaces and such, but I would like to know how this, in my eyes basic, problem would typically be dealt with in C# (and/or OOP in general). I actually did try to make a base class, but it didn't quite seem right to me. For instance, I have no use for a string Send(string Message) function in the base class itself. So why would I define one there that needs to be overridden in the derived classes, when I would never use the function in the base class in the first place?

    I'm really thankful for any answers. People around here seem very knowledgeable and this place has provided me with many solutions earlier. Now I finally have an account to answer and vote with too.

    EDIT: To additionally explain, I would also like to be able to access the objects of the actual printer type. For instance the Port variable in PrinterType1, which is a SerialPort object. I would like to access it like PrinterList[0].Port.Open() and have access to the full range of functionality of the underlying port. At the same time I would like to call generic functions that work in the same way for the different objects (but with different implementations):

        foreach (printer in Printers)
            printer.Send(message)
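    For reference, the usual shape of this in C# is an abstract base class (or an interface) that carries only what is truly common, with each concrete printer adding its own members; declaring Send() abstract is exactly how you avoid giving it a pointless body in the base class. A minimal sketch, where all names and the "COM1" port are placeholders rather than your real types:

        using System.Collections.Generic;
        using System.IO.Ports;

        public abstract class Printer
        {
            public string Name { get; set; }
            public abstract void Send(string message);   // each type supplies its own transport
        }

        public class PrinterType1 : Printer
        {
            public SerialPort Port { get; } = new SerialPort("COM1");
            public override void Send(string message) { /* write to Port */ }
        }

        public class PrinterType2 : Printer
        {
            public string Ip { get; set; }
            public override void Send(string message) { /* write to a TcpClient */ }
        }

        public static class Demo
        {
            public static void Main()
            {
                var printers = new List<Printer> { new PrinterType1(), new PrinterType2() };
                foreach (var p in printers)
                    p.Send("hello");              // same call regardless of concrete type

                if (printers[0] is PrinterType1 p1)
                    p1.Port.Open();               // type-specific access needs the concrete type
            }
        }

    Type-specific members such as Port then need a cast or pattern match, which also keeps "this code only makes sense for a serial printer" visible at the call site.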

  • C++ Regarding cin.ignore()

    - by user1578897
    I hope someone can modify my code, as it's so buggy — sometimes it works, sometimes it doesn't. Let me explain. The text file data is as below:

        Line3D, [70, -120, -3], [-29, 1, 268]
        Line3D, [25, -69, -33], [-2, -41, 58]

    To read the above lines I use the following:

        char buffer[30];
        cout << "Please enter filename: ";
        cin.ignore();
        getline(cin, filename);
        readFile.open(filename.c_str());
        //if successfully open
        if(readFile.is_open())
        {
            //record counter set to 0
            numberOfRecords = 0;
            while(readFile.good())
            {
                //input stream get line by line
                readFile.getline(buffer, 20, ',');
                if(strstr(buffer, "Point3D"))
                {
                    Point3D point3d_tmp;
                    readFile >> point3d_tmp;
                    // and so on...

    Then I did an overload on the ifstream for Line3D:

        ifstream& operator>>(ifstream &input, Line3D &line3d)
        {
            int x1, y1, z1, x2, y2, z2;
            //get x1
            input.ignore(2);
            input >> x1;
            //get y1
            input.ignore();
            input >> y1;
            //get z1
            input.ignore();
            input >> z1;
            //get x2
            input.ignore(4);
            input >> x2;
            //get y2
            input.ignore();
            input >> y2;
            //get z2
            input.ignore();
            input >> z2;
            input.ignore(2);

            Point3D pt1(x1, y1, z1);
            Point3D pt2(x2, y2, z2);
            line3d.setPt1(pt1);
            line3d.setPt2(pt2);
            line3d.setLength();
        }

    But the issue is that the records work sometimes and sometimes they don't. What I mean is, if at this point I add a cout:

        //i add a cout
        cout << x1 << y1 << z1;
        cout << x2 << y2 << z2;
        //its works!
        Point3D pt1(x1, y1, z1);
        Point3D pt2(x2, y2, z2);
        line3d.setPt1(pt1);
        line3d.setPt2(pt2);
        line3d.setLength();

    ...it works, but if I take away the cout it doesn't. How do I change my cin.ignore() calls so the data can be handled properly? Consider that the number range is -999 to 999.
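    Counting characters with ignore() is fragile here, because the field widths change with the sign and number of digits. A hedged alternative, assuming the exact "Line3D, [x, y, z], [x, y, z]" layout shown above (Point3D and Line3D are the asker's own types, and "lines.txt" is a placeholder filename), is to read whole lines and let sscanf pick out the six numbers:

        #include <cstdio>
        #include <fstream>
        #include <string>

        int main() {
            std::ifstream readFile("lines.txt");   // placeholder filename
            std::string line;
            while (std::getline(readFile, line)) {
                int x1, y1, z1, x2, y2, z2;
                if (std::sscanf(line.c_str(), "Line3D, [%d, %d, %d], [%d, %d, %d]",
                                &x1, &y1, &z1, &x2, &y2, &z2) == 6) {
                    // six values parsed: build the two points here, e.g.
                    // line3d.setPt1(Point3D(x1, y1, z1)); line3d.setPt2(Point3D(x2, y2, z2));
                }
            }
        }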

  • Manipulating values from database table with php

    - by charliecodex23
    I currently have 5 tables in a MySQL database. Some of them share foreign keys and are interdependent on each other. I am displaying classes according to their majors. Each class is taught during the fall, spring or all_year. In my database I have a table named semester which has id, year, and semester fields. The semester field in particular is a tinyint that has three values: 0, 1, 2. This signifies fall, spring or all_year. When I display the query, instead of having it show 0, 1 or 2, can I have it show Fall, Spring, etc.? Extra: how can I add space at the end of each loop so the data doesn't look clustered?

    Key

        0 Fall
        1 Spring
        2 All-year

    PHP

        <?
        try {
            $pdo = new PDO("mysql:host=$hostname;dbname=$dbname", "$username", "$pw");
        }
        catch (PDOException $e) {
            echo "Failed to get DB handle: " . $e->getMessage() . "\n";
            exit;
        }

        $query = $pdo->prepare("SELECT course.name, course.code, course.description, course.hours, semester.semester, semester.year
            FROM course
            LEFT JOIN major_course_xref ON course.id = major_course_xref.course_id
            LEFT JOIN major ON major.id = major_course_xref.major_id
            LEFT JOIN course_semester_xref ON course.id = course_semester_xref.course_id
            LEFT JOIN semester ON course_semester_xref.semester_id = semester.id");
        $query->execute();

        if ($query->execute()){
            while ($row = $query->fetch(PDO::FETCH_ASSOC)){
                print $row['name'] . "<br>";
                print $row['code'] . "<br>";
                print $row['description'] . "<br>";
                print $row['hours'] . " hrs.<br>";
                print $row['semester'] . "<br>";
                print $row['year'] . "<br>";
            }
        }
        else echo 'Could not fetch results.';

        unset($pdo);
        unset($query);
        ?>

    Current display

        Computer Programming I
        CPSC1400
        Introduction to disciplined, object-oriented program development.
        4 hrs.
        0
        2013

    Desired display

        Computer Programming I
        CPSC1400
        Introduction to disciplined, object-oriented program development.
        4 hrs.
        Fall
        2013
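    One straightforward way to do both things is to translate the tinyint through a small lookup array and emit an extra <br> after each record. This is just a sketch of the loop from the code above; the label array is an assumption about how the values should read:

        // map the tinyint stored in semester.semester to a label
        $semesterNames = array(0 => 'Fall', 1 => 'Spring', 2 => 'All-year');

        while ($row = $query->fetch(PDO::FETCH_ASSOC)) {
            print $row['name'] . "<br>";
            print $row['code'] . "<br>";
            print $row['description'] . "<br>";
            print $row['hours'] . " hrs.<br>";
            print $semesterNames[$row['semester']] . "<br>";
            print $row['year'] . "<br><br>";   // the second <br> adds a blank line between courses
        }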

  • Split a binary file into chunks c++

    - by L4nce0
    I've been bashing my head against trying to first divide a file up into chunks, for the purpose of sending it over sockets. I can read/write a file easily without splitting it into chunks. The code below runs and works, kinda: it will write a text file, but with a garbage character. If this were just for txt, no problem — but JPEGs aren't working with said garbage. I've been at it for a few days and done my research, so it's time to get some help. I do want to stick strictly to binary readers, as this needs to handle any file. I've seen a lot of slick examples out there (none of them worked for me with jpgs), mostly something along the lines of while(file)... I subscribe to the "if you know the size, use a for-loop, not a while-loop" camp. Thank you for the help!!

        vector<char*> readFile(const char* fn){
            vector<char*> v;
            ifstream::pos_type size;
            char * memblock;
            ifstream file;
            file.open(fn, ios::in|ios::binary|ios::ate);
            if (file.is_open())
            {
                size = fileS(fn);
                file.seekg (0, ios::beg);
                int bs = size/3; // arbitrary. Actual program will use the socket send size
                int ws = 0;
                int i = 0;
                for(i = 0; i < size; i+=bs){
                    if(i+bs > size)
                        ws = size%bs;
                    else
                        ws = bs;
                    memblock = new char [ws];
                    file.read (memblock, ws);
                    v.push_back(memblock);
                }
            }
            else{
                exit(-4);
            }
            return v;
        }

        int main(int argc, char **argv)
        {
            vector<char*> v = readFile("foo.txt");
            ofstream myFile ("bar.txt", ios::out | ios::binary);
            for(vector<char*>::iterator it = v.begin(); it!=v.end(); ++it ){
                myFile.write(*it, strlen(*it));
            }
        }
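    For what it's worth, one likely source of the garbage is the strlen(*it) in the write loop: strlen stops at the first zero byte (or runs past the end of a chunk that has none), and binary data such as a JPEG is full of zero bytes. A minimal sketch that keeps each chunk's real size next to its pointer, under the same structure as the code above:

        #include <fstream>
        #include <utility>
        #include <vector>
        using namespace std;

        // each element keeps the buffer together with the number of bytes actually read
        typedef pair<char*, streamsize> Chunk;

        // in readFile(), instead of v.push_back(memblock):
        //     chunks.push_back(make_pair(memblock, (streamsize)ws));

        void writeChunks(const vector<Chunk>& chunks, const char* outName)
        {
            ofstream out(outName, ios::out | ios::binary);
            for (size_t i = 0; i < chunks.size(); ++i)
                out.write(chunks[i].first, chunks[i].second);  // exactly the bytes read, zeros included
        }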

  • Optimal template for change content via XMLHTTPRequest with JQuery,PHP,SQL [closed]

    - by B.F.
    This is my method for handling XMLHTTPRequests. It avoids unnecessary MySQL requests, foreign access, overeager users, and double requests.

    jquery

        var allow = true;
        var is_loaded = "";

        $(document).ready(function(){
            ....
            $(".xx").on("click", function(){
                if(allow){
                    allow = false;
                    if(is_loaded != "that"){
                        $.post("job.php", {job:"that", word:"aaa", number:"123"}, function(data){
                            $(".aaa").html(data);
                            is_loaded = "that";
                        });
                    }
                    setTimeout(function(){ allow = true }, 500);
                }
            ....
            });

    job.php

        <?PHP
        ob_start('ob_gzhandler');
        if(!isset($_SERVER['HTTP_X_REQUESTED_WITH']) or strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) != 'xmlhttprequest') exit("bad boy!");

        if($_POST['job']=="that"){
            include "includes/that.inc";
        }
        elseif($_POST['job']== ....

        ob_end_flush();
        ?>

    that.inc

        if(!preg_match("/\w/", $_POST['word'])) exit("bad boy!");
        if(!is_numeric($_POST['number'])) exit("bad boy!");
        //exclude more.

        $path = "temp/that_".$row['word']."txt";
        if(file_exists($path) and filemtime("includes/that.inc") < filemtime($path)){
            readfile($path);
        }
        else{
            include "includes/openSql.inc";
            $call = sql_query("SELECT * FROM that WHERE name='".mysql_real_escape_string($_POST['word'])."'");
            if(!$call) exit("ups");
            $out = "";
            while($row = mysql_fetch_assoc($call)){
                $out .= $_POST['word']." loves the color ".$row['color'].".<br/>";
            }
            echo $out;
            $fn = fopen($path, "wb");
            fputs($fn, $out);
            fclose($fn);
        }

    If something changes in the database, you just have to delete the involved cache files. Hope it was English.

  • How to avoid my script to freeze

    - by jemz
    I have a socket server script, running in PHP CLI, that continuously listens for a GPS device. My problem is that the socket will freeze after it has been running for a long time. How do I prevent this so that my script will not freeze? I have no idea about sockets — this is the first time I've used a socket connection. I'd appreciate it if someone could help with my problem.

        <?php
        error_reporting(-1);
        ini_set('display_errors', 1);
        set_time_limit (0);

        $address_server = 'xxx.xxx.xx.xx';
        $port_server = xxxx;
        $isTrue = true;

        socketfunction($address_server, $port_server, $isTrue);

        function socketfunction($address, $port, $done){
            $sock = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
            socket_set_option($sock, SOL_SOCKET, SO_REUSEADDR, 1);
            socket_bind($sock, $address, $port);
            socket_listen($sock);
            $clients = array($sock);

            while ($done ) {
                $file = fopen('txt.log','a');
                $read = $clients;
                $write = NULL;
                $except = NULL;
                $tv_sec = 0;
                if (socket_select($read, $write , $except, $tv_sec) < 1){
                    continue;
                }

                // checking client
                if (in_array($sock, $read)) {
                    $clients[] = $newsock = socket_accept($sock);
                    $key = array_search($sock, $read);
                    unset($read[$key]);
                }

                //handle client for reading
                foreach ($read as $read_sock) {
                    $data = @socket_read($read_sock, 1024, PHP_NORMAL_READ);
                    if ($data === false) {
                        $key = array_search($read_sock, $clients);
                        unset($clients[$key]);
                        echo "client disconnected.\n";
                        echo "Remaining ".(count($clients) - 1)."client(s) connected\r\n";
                        continue;
                    }
                    $data = trim($data);
                    if (!empty($data)) {
                        echo("Returning stripped input\n");
                        fwrite($file,$data."\n");
                    }
                } // end of reading foreach

                fclose($file);
            }//end while

            socket_close($sock);
        }
        ?>

    Thank you in advance.
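    One thing to look at, for reference: socket_read() with PHP_NORMAL_READ waits for a newline, so a client that stalls or sends data without a line ending can hang the whole loop even after socket_select() has reported it readable. A hedged sketch (the 5-second value is just an example) is to put a receive timeout on each accepted socket so a stalled read returns false instead of blocking forever:

        <?php
        // sketch: after socket_accept(), give each client socket a receive timeout
        $newsock = socket_accept($sock);
        socket_set_option($newsock, SOL_SOCKET, SO_RCVTIMEO, array('sec' => 5, 'usec' => 0));
        $clients[] = $newsock;
        ?>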

  • javascript ul button dropdown crashes after if doing more than 1 button

    - by Apprentice Programmer
    So I have a button drop-down list from which I want users to select a choice, and basically the button will return the user's selection. Pretty straightforward — tested on jsFiddle and it works great. I'm using Ruby on Rails, by the way, so I'm not sure if it might be conflicting with the way Rails handles JavaScript actions. Here's the code:

        <%= form_for(@user, :html => {:class => 'form-horizontal'}) do |f| %>
        <fieldset>
        <p>Do you have experience in Business? If yes, select one of the following:
        <div class="input-group">
          <div class="input-group-btn btn-group">
            <button type="button" class="btn btn-default dropdown-toggle" data-toggle="dropdown">Select one <span class="caret"></span></button>
            <ul class="dropdown-menu">
              <li><a href="#">Entrepreneurship</a></li>
              <li><a href="#">Investments</a></li>
              <li><a href="#">Management</a></li>
              <li class="divider"></li>
              <li><a href="#">All of the Above</a></li>
            </ul>
          </div><!-- /btn-group -->
          <%= f.text_field :years_business , :class => "form-control", :placeholder => "Years of experience" %>
        </div>

    Now there are 2 more of these, and basically what happens is that if I select an item from the dropdown list for the first time, everything works great. But the moment I select the same button or a new one, the page immediately kind of refreshes, and the selected value will not show up after the list drops down and the user selects a value. I viewed the page source and added additional javascript src and type attributes, but it still doesn't work. The jQuery code:

        jQuery(function ($) {
          $('.input-group-btn .dropdown-menu > li:not(.divider)').click(function(){
            $(this).closest('ul').prev().text($(this).text())
          })
        });

    Any suggestions as to what is causing the problem? The jsFiddle link is here: http://jsfiddle.net/w4s8u/7/
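    For reference, a common pattern when the markup is re-rendered (Rails/Turbolinks page changes, for instance) is that handlers bound directly to the <li> elements are lost, while a delegated handler bound to the document keeps working; calling preventDefault() also stops the href="#" links from triggering a jump or reload. A sketch assuming the same markup as above:

        jQuery(function ($) {
          $(document).on('click', '.input-group-btn .dropdown-menu > li:not(.divider)', function (e) {
            e.preventDefault(); // keep the href="#" links from jumping/reloading
            $(this).closest('ul').prev().text($(this).text());
          });
        });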

  • Users and roles in context

    - by Eric W.
    I'm trying to get a sense of how to implement the user/role relationships for an application I'm writing. The persistence layer is Google App Engine's datastore, which places some interesting (but generally beneficial) constraints on what can be done. Any thoughts are appreciated.

    It might be helpful to keep things very concrete. I would like there to be organizations, users, test content and test administrations (records of tests that have been taken). A user can have the role of participant (test-taker), contributor of test material, or both. A user can also be a member of zero or more organizations.

    In the role of participant, the user can see the previous administrations of tests he or she has taken. The user can also see a test administration of another participant if that participant has given the user authorization. The user can see test material that has been made public, and he or she can see restricted content as a participant during a specific administration of a test for which that user has been authorized by an organization.

    As a member of an organization, the user can see restricted content in the role of contributor, and he or she might or might not also be able to edit the content. Each organization should have one or more administrators that can determine whether a member can see and edit content and determine who has admin privileges. There should also be one or more application-wide superusers that can troubleshoot and solve problems.

    Members of organizations can see the administrations of tests that the participants concerned have authorized them to see, and they can see anonymous data if no authorization has been given. A user cannot see the test results of another user in any other circumstances.

    Since there are no joins in the App Engine datastore, it might be necessary to have things less normalized than usual for the typical SQL database in order to ensure that queries that check permissions are fast (e.g., ones that determine whether a link is to be displayed).

    My questions are:

    - How do I move forward on this? Should I spend a lot of time up front in order to get the model right, or can I iterate several times and gradually roll in additional complexity?
    - Does anyone have some general ideas about how to break things up in this instance?
    - Are there any GAE libraries that handle roles in a way that is compatible with this arrangement?

  • PHP array performance

    - by dfo
    Hi, this is my first question on Stack Overflow, please bear with me. I'm testing an algorithm for 2D bin packing and I've chosen PHP to mock it up, as it's my bread-and-butter language nowadays. As you can see at http://themworks.com/pack_v0.2/oopack.php?ol=1 it works pretty well, but you need to wait around 10-20 seconds for 100 rectangles to pack. For some hard-to-handle sets it would hit PHP's 30 s runtime limit.

    I did some profiling and it shows that most of the time my script goes through different parts of a small 2D array with 0's and 1's in it. It either checks whether a certain cell equals 0/1 or sets it to 0/1. It can do such operations millions of times, and each one takes a few microseconds. I guess I could use an array of booleans in a statically typed language and things would be faster, or even make an array of 1-bit values. I'm thinking of converting the whole thing to some compiled language. Is PHP just not good for this? If I do need to convert it to, let's say, C++, how good are the automatic converters? My script is just a lot of for loops with basic array and object manipulations. Thank you!

    Edit: This function gets called more than any other. It reads a few properties of a very simple object, and goes through a very small part of a smallish array to check if there's any element not equal to 0.

        function fits($bin, $file, $x, $y) {
            $flag = true;
            $xw = $x + $file->get_width();;
            $yh = $y + $file->get_height();
            for ($i = $x; $i < $xw; $i++) {
                for ($j = $y; $j < $yh; $j++) {
                    if ($bin[$i][$j] !== 0) {
                        $flag = false;
                        break;
                    }
                }
                if (!$flag) break;
            }
            return $flag;
        }
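    As a point of comparison, a slightly leaner version of the same check — just a sketch of the identical logic — returns as soon as a non-zero cell is found, which drops the flag bookkeeping and one extra comparison per column; in a function called millions of times, every little bit helps:

        function fits($bin, $file, $x, $y) {
            $xw = $x + $file->get_width();
            $yh = $y + $file->get_height();
            for ($i = $x; $i < $xw; $i++) {
                for ($j = $y; $j < $yh; $j++) {
                    if ($bin[$i][$j] !== 0) {
                        return false;   // bail out on the first occupied cell
                    }
                }
            }
            return true;
        }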

  • DNS lookup failures while accessing my website some proxy error

    - by Bond
    Here is a situation until today morning,every thing has been working perfectly fine with me. From past 6 months many of my domains wer accessible as http://site1.myserver.com http://site2.myserver.com http://site3.myserver.com http://site4.myserver.com All these were Reverse Proxy configurations. I have some applications on each of them. until today morning some people reported me that http://site1.myserver.com/app1 is not working but http://site1.myserver.com is accessible but http://site2.myserver.com is accessible but http://site3.myserver.com is accessible but http://site4.myserver.com not accessible In past 6 months I have not changed any of these Apache configurations (things were working perfectly so) The error which can be seen in browser are while accessing http://site1.myserver.com/app1 Proxy Error The proxy server received an invalid response from an upstream server. The proxy server could not handle the request GET /app1. Reason: DNS lookup failure for: myserver.com and same is the error for http://site4.myserver.com So what should I check in I have checked all the apache logs to an extent which I could see and 192.168.1.25 - - [10/Jan/2011:14:50:48 +0530] "GET /app1 HTTP/1.1" 502 531 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3" Mon Jan 10 14:27:42 2011] [error] (113)No route to host: proxy: HTTP: attempt to connect to 192.168.1.3:80 (192.168.1.3) failed [Mon Jan 10 14:27:42 2011] [error] ap_proxy_connect_backend disabling worker for (192.168.1.3) [Mon Jan 10 14:27:44 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:44 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:44 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:45 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:45 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:45 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:45 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:46 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:47 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:48 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:48 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:27:48 2011] [error] proxy: HTTP: disabled connection for (192.168.1.3) [Mon Jan 10 14:35:29 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: myserver.com returned by /app1 [Mon Jan 10 14:35:30 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: myserver.com returned by /app1 [Mon Jan 10 14:35:30 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: myserver.com returned by /app1 [Mon Jan 10 14:50:30 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: myserver.com returned by /app1 [Mon Jan 10 14:50:48 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: myserver.com returned by /app1 and for site4.myserver.com I get [Mon Jan 10 14:57:40 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: site4.myserver.com returned by /favicon.ico [Mon Jan 10 14:57:40 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: site4.myserver.com returned by /favicon.ico [Mon Jan 10 14:57:43 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: 
site4.myserver.com returned by /favicon.ico [Mon Jan 10 15:02:38 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by / [Mon Jan 10 15:03:04 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /, referer: http://site4.myserver.com/ [Mon Jan 10 15:03:04 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /favicon.ico [Mon Jan 10 15:03:08 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /, referer: http://site4.myserver.com/ [Mon Jan 10 15:03:08 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /favicon.ico [Mon Jan 10 15:03:10 2011] [error] [client <some external IP>] proxy: DNS lookup failure for: site4.myserver.com returned by /, referer: http://site4.myserver.com/ [Mon Jan 10 15:06:21 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: site4.myserver.com returned by / [Mon Jan 10 15:06:31 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: site4.myserver.com returned by /, referer: http://site4.myserver.com/ [Mon Jan 10 15:26:03 2011] [error] [client 192.168.1.25] proxy: DNS lookup failure for: site4.myserver.com returned by /

  • Capistrano asks for SSH password when deploying from local machine to server

    - by GhostRider
    When I try to ssh to a server, I'm able to do it as my id_rsa.pub key is added to the authorized keys in the server. Now when I try to deploy my code via Capistrano to the server from my local project folder, the server asks for a password. I'm unable to understand what could be the issue if I'm able to ssh and unable to deploy to the same server. $ cap deploy:setup "no seed data" triggering start callbacks for `deploy:setup' * 13:42:18 == Currently executing `multistage:ensure' *** Defaulting to `development' * 13:42:18 == Currently executing `development' * 13:42:18 == Currently executing `deploy:setup' triggering before callbacks for `deploy:setup' * 13:42:18 == Currently executing `db:configure_mongoid' * executing "mkdir -p /home/deploy/apps/development/flyingbird/shared/config" servers: ["dev1.noob.com", "176.9.24.217"] Password: Cap script: # gem install capistrano capistrano-ext capistrano_colors begin; require 'capistrano_colors'; rescue LoadError; end require "bundler/capistrano" # RVM bootstrap # $:.unshift(File.expand_path('./lib', ENV['rvm_path'])) require 'rvm/capistrano' set :rvm_ruby_string, 'ruby-1.9.2-p290' set :rvm_type, :user # or :user # Application setup default_run_options[:pty] = true # allow pseudo-terminals ssh_options[:forward_agent] = true # forward SSH keys (this will use your SSH key to get the code from git repository) ssh_options[:port] = 22 set :ip, "dev1.noob.com" set :application, "flyingbird" set :repository, "repo-path" set :scm, :git set :branch, fetch(:branch, "master") set :deploy_via, :remote_cache set :rails_env, "production" set :use_sudo, false set :scm_username, "user" set :user, "user1" set(:database_username) { application } set(:production_database) { application + "_production" } set(:staging_database) { application + "_staging" } set(:development_database) { application + "_development" } role :web, ip # Your HTTP server, Apache/etc role :app, ip # This may be the same as your `Web` server role :db, ip, :primary => true # This is where Rails migrations will run # Use multi-staging require "capistrano/ext/multistage" set :stages, ["development", "staging", "production"] set :default_stage, rails_env before "deploy:setup", "db:configure_mongoid" # Uncomment if you use any of these databases after "deploy:update_code", "db:symlink_mongoid" after "deploy:update_code", "uploads:configure_shared" after "uploads:configure_shared", "uploads:symlink" after 'deploy:update_code', 'bundler:symlink_bundled_gems' after 'deploy:update_code', 'bundler:install' after "deploy:update_code", "rvm:trust_rvmrc" # Use this to update crontab if you use 'whenever' gem # after "deploy:symlink", "deploy:update_crontab" if ARGV.include?("seed_data") after "deploy", "db:seed" else p "no seed data" end #Custom tasks to handle resque and redis restart before "deploy", "deploy:stop_workers" after "deploy", "deploy:restart_redis" after "deploy", "deploy:start_workers" after "deploy", "deploy:cleanup" 'Create symlink for public uploads' namespace :uploads do task :symlink do run <<-CMD rm -rf #{release_path}/public/uploads && mkdir -p #{release_path}/public && ln -nfs #{shared_path}/public/uploads #{release_path}/public/uploads CMD end task :configure_shared do run "mkdir -p #{shared_path}/public" run "mkdir -p #{shared_path}/public/uploads" end end namespace :rvm do desc 'Trust rvmrc file' task :trust_rvmrc do run "rvm rvmrc trust #{current_release}" end end namespace :db do desc "Create mongoid.yml in shared path" task :configure_mongoid do db_config = <<-EOF 
defaults: &defaults host: localhost production: <<: *defaults database: #{production_database} staging: <<: *defaults database: #{staging_database} EOF run "mkdir -p #{shared_path}/config" put db_config, "#{shared_path}/config/mongoid.yml" end desc "Make symlink for mongoid.yml" task :symlink_mongoid do run "ln -nfs #{shared_path}/config/mongoid.yml #{release_path}/config/mongoid.yml" end desc "Fill the database with seed data" task :seed do run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake db:seed" end end namespace :bundler do desc "Symlink bundled gems on each release" task :symlink_bundled_gems, :roles => :app do run "mkdir -p #{shared_path}/bundled_gems" run "ln -nfs #{shared_path}/bundled_gems #{release_path}/vendor/bundle" end desc "Install bundled gems " task :install, :roles => :app do run "cd #{release_path} && bundle install --deployment" end end namespace :deploy do task :start, :roles => :app do run "touch #{current_path}/tmp/restart.txt" end desc "Restart the app" task :restart, :roles => :app do run "touch #{current_path}/tmp/restart.txt" end desc "Start the workers" task :stop_workers do run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:stop_workers" end desc "Restart Redis server" task :restart_redis do "/etc/init.d/redis-server restart" end desc "Start the workers" task :start_workers do run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:start_workers" end end

  • Can't send mail from Windows Phone (Postfix server)

    - by Dominic Williams
    Some background: I have a Dovecot/Postfix setup to handle email for a few domains. We have imap and smtp setup on various devices (Macs, iPhones, PCs, etc) and it works no problem. I've recently bought a Windows Phone and I'm trying to setup the mail account on there. I've got the imap part working great but for some reason it won't send mail. mail.log with debug_peer_list I've put this on pastebin because its quite long: http://pastebin.com/KdvMDxTL dovecot.log with verbose_ssl Apr 14 22:43:50 imap-login: Warning: SSL: where=0x10, ret=1: before/accept initialization [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: before/accept initialization [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 read client hello A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 write server hello A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 write certificate A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 write server done A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 flush data [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2002, ret=-1: SSLv3 read client certificate A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2002, ret=-1: SSLv3 read client certificate A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 read client key exchange A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 read finished A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 write change cipher spec A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 write finished A [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2001, ret=1: SSLv3 flush data [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x20, ret=1: SSL negotiation finished successfully [109.151.23.129] Apr 14 22:43:50 imap-login: Warning: SSL: where=0x2002, ret=1: SSL negotiation finished successfully [109.151.23.129] Apr 14 22:43:51 imap-login: Info: Login: user=<pixelfolio>, method=PLAIN, rip=109.151.23.129, lip=94.23.254.175, mpid=24390, TLS Apr 14 22:43:53 imap(pixelfolio): Info: Disconnected: Logged out bytes=9/331 Apr 14 22:43:53 imap-login: Warning: SSL alert: where=0x4008, ret=256: warning close notify [109.151.23.129] postconf -n alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix debug_peer_list = 109.151.23.129 inet_interfaces = all mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 message_size_limit = 50240000 milter_default_action = accept milter_protocol = 2 mydestination = ks383809.kimsufi.com, localhost.kimsufi.com, localhost myhostname = ks383809.kimsufi.com mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 myorigin = /etc/mailname non_smtpd_milters = inet:127.0.0.1:8891,inet:localhost:8892 readme_directory = no recipient_delimiter = + smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_milters = inet:127.0.0.1:8891,inet:localhost:8892 smtpd_recipient_restrictions = permit_mynetworks permit_sasl_authenticated reject_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_path = private/auth 
smtpd_sasl_type = dovecot smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes virtual_alias_domains = domz.co.uk ruck.in vjgary.co.uk scriptees.co.uk pixelfolio.co.uk filmtees.co.uk nbsbar.co.uk virtual_alias_maps = hash:/etc/postfix/alias_maps doveconf -n # 2.0.13: /etc/dovecot/dovecot.conf # OS: Linux 2.6.38.2-grsec-xxxx-grs-ipv6-64 x86_64 Ubuntu 11.10 auth_mechanisms = plain login log_path = /var/log/dovecot.log mail_location = mbox:~/mail/:INBOX=/var/mail/%u passdb { driver = pam } protocols = imap service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0660 user = postfix } } ssl_cert = </etc/ssl/certs/dovecot.pem ssl_key = </etc/ssl/private/dovecot.pem userdb { driver = passwd } verbose_ssl = yes Any suggestions or help greatly appreciated. I've been pulling my hair out with this for hours! EDIT This seems to be my exact problem, but I already have broken_sasl set to yes and the 'login' auth mechanism added? http://forums.gentoo.org/viewtopic-t-898610-start-0.html

  • cf3 Can't stat ... in files.copyfrom promise

    - by Xerxes
    On the client: # cf-agent -KIv ... cf3 -> Handling file existence constraints on /etc/cfengine3 cf3 -> Copy file /etc/cfengine3 from /srv/cfengine/sysconf/server/inputs check cf3 No existing connection to 172.31.69.83 is established... cf3 Set cfengine port number to 5308 = 5308 cf3 -> Connect to 172.31.69.83 = 172.31.69.83 on port 5308 cf3 LastSaw host 172.31.69.83 now cf3 Loaded /var/lib/cfengine3/ppkeys/root-172.31.69.83.pub cf3 .....................[.h.a.i.l.]................................. cf3 Strong authentication of server=172.31.69.83 connection confirmed cf3 Server returned error: Unspecified server refusal (see verbose server output) cf3 Can't stat /srv/cfengine/sysconf/server/inputs in files.copyfrom promise cf3 ?> defining promise result class Cfengine_Inputs_Updated_Failed .... cf3 ......................................................... cf3 Promise handle: cf3 Promise made by: [cf-agent.cf ] FAILED 172.31.69.83:///srv/cfengine/sysconf/server/inputs -> localhost:///etc/cfengine3 However, on the server (172.31.69.83), there's no reason why it can't stat the directory: cyrus:/srv/cfengine/sysconf/server# ls -l /srv/cfengine/sysconf/server/inputs total 52 -rw-r--r-- 1 root root 2142 Sep 6 21:54 cf-agent.cf -rw-r--r-- 1 root root 831 Sep 6 18:31 cf-execd.cf -rw-r--r-- 1 root root 4517 Sep 6 21:44 cf-serverd.cf -rw-r--r-- 1 root root 3082 Sep 6 21:44 dns.cf -rw-r--r-- 1 root root 2028 Sep 6 15:12 failsafe.cf -rw-r--r-- 1 root root 5966 Sep 6 21:44 ldap-masters.cf -rw-r--r-- 1 root root 4380 Sep 6 18:31 ldap-security.cf -rw-r--r-- 1 root root 2735 Sep 6 08:21 lib-core.cf -rw-r--r-- 1 root root 1506 Sep 6 21:45 lib-utils.cf -rw-r--r-- 1 root root 2635 Sep 6 20:27 lib-vars.cf -rw-r--r-- 1 root root 2057 Sep 3 17:46 nss.cf -rw-r--r-- 1 root root 1472 Sep 6 18:31 packages.cf -rw-r--r-- 1 root root 1257 Sep 6 18:01 pam-security.cf -rw-r--r-- 1 root root 4019 Sep 6 19:32 promises.cf -rw-r--r-- 1 root root 2808 Sep 3 17:22 site.cf -rw-r--r-- 1 root root 1670 Sep 6 18:31 sudo-security.cf -rw-r--r-- 1 root root 831 Sep 6 18:31 sys-security.cf -rw-r--r-- 1 root root 890 Sep 6 18:31 sys-users.cf cyrus:/srv/cfengine/sysconf/server# I don't see anything interesting server side either when running: /usr/sbin/cf-serverd -d4 --verbose --no-fork And the following does not have any complaints: /usr/sbin/cf-promises -v Any ideas? I'm running cfengine3 on debian, v3.0.5+dfsg-1 - and the cf-agent.cf file is as follows: bundle agent Update { files: linux:: "${cf3.path[inputs]}" action => immediate, move_obstructions => "true", depth_search => Recursive, copy_from => MirrorFrom( "${cf3.host[server]}", "${cf3.path[scm-inputs]}", "true", "0400" ), classes => DefineSoftClass("Cfengine_Inputs_Updated") ; "${cf3.path[sbin]}" comment => "Setting cf3 client sbin scripts: ${cf3.path[sbin]}/", action => immediate, depth_search => Recursive, copy_from => MirrorFrom( "${cf3.host[server]}", "${cf3.path[scm-cnt-scripts]}", "false", "0555" ) ; reports: Cfengine_Inputs_Updated:: "[cf-agent.cf ] Services:CFAgent:Inputs:Updated"; Cfengine_Inputs_Updated_Failed:: "[cf-agent.cf ] FAILED ${cf3.host[server]}://${cf3.path[scm-inputs]} -> localhost://${cf3.path[inputs]}"; } I lie, there is something interesting with a little more debugging... AccessControl(/srv/cfengine/sysconf/server/inputs) AccessControl, match(/srv/cfengine/sysconf/server/inputs,client.com.au) encrypt request=1 Examining rule in access list (/srv/cfengine/sysconf/server/inputs,/home/cfengine)? 
cf3 Host client.com.au denied access to /srv/cfengine/sysconf/server/inputs Unappending Host client.com.au denied access to /srv/cfengine/sysconf/server/inputs cf3 Access control in sync Unappending Access control in sync Transaction Send[t 59][Packed text] Attempting to send 67 bytes SendSocketStream, sent 67 cf3 From (host=client.com.au,user=root,ip=172.31.69.3) Unappending From (host=client.com.au,user=root,ip=172.31.69.3) cf3 REFUSAL of request from connecting host: (SYNCH 1283777156 STAT /srv/cfengine/sysconf/server/inputs) Unappending REFUSAL of request from connecting host: (SYNCH 1283777156 STAT /srv/cfengine/sysconf/server/inputs) RecvSocketStream(8) cf3 -> Accepting a connection I'll keep looking.

  • ufw portforwarding to virtualbox guest

    - by user85116
    My goal is to be able to connect using remote desktop on my desktop machine, to windows xp running in virtualbox on my linux server. My setup: server = debian squeeze, 64 bit, with a public IP address (host) virtualbox-ose 3.2.10 (from debian repo) windows xp running inside VBox as a guest; bridged networking mode in VBox, ip = 192.168.1.100 ufw as the firewall on debian, 3 ports are opened: 22 / ssh, 80 / apache, and 3389 for remote desktop My problem: If I try to use remote desktop on my home computer, I am unable to connect to the windows guest. If I first "ssh -X -C" into the debian server, then run "rdesktop 192.168.1.100", I am able to connect without issue. The windows firewall was configured to allow remote desktop connections, and I've even turned it off (as it is redundant here) to see if that was the problem but it made no difference. Since I am able to connect from inside the local subnet, I suspect that I have not setup my debian firewall correctly to handle connections from outside the LAN. Here is what I've done... First my ufw status: ufw status Status: active To Action From -- ------ ---- 22 ALLOW Anywhere 80 ALLOW Anywhere 3389 ALLOW Anywhere I edited /etc/ufw/sysctl.conf and added: net/ipv4/ip_forward=1 Edited /etc/default/ufw and added: DEFAULT_FORWARD_POLICY="ACCEPT" Edited /etc/ufw/before.rules and added: # setup port forwarding to forward rdp to windows VM *nat :PREROUTING - [0:0] -A PREROUTING -i eth0 -p tcp --dport 3389 -j DNAT --to-destination 192.168.1.100 -A PREROUTING -i eth0 -p udp --dport 3389 -j DNAT --to-destination 192.168.1.100 COMMIT # Don't delete these required lines, otherwise there will be errors *filter <snip> Restarted the firewall etc., but no connection. My log files on the debian host show this (my public ip address was removed for this posting but it is correct in the actual log): Feb 6 11:11:21 localhost kernel: [171991.856941] [UFW AUDIT] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27518 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:21 localhost kernel: [171991.856963] [UFW ALLOW] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27518 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:24 localhost kernel: [171994.856701] [UFW AUDIT] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27519 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:24 localhost kernel: [171994.856723] [UFW ALLOW] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27519 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:30 localhost kernel: [172000.856656] [UFW AUDIT] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27520 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:30 localhost kernel: [172000.856678] [UFW ALLOW] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27520 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Although this is the current setup / configuration, I've also tried several variations of this; I thought maybe the ISP would be blocking 3389 for some reason and tried using different ports, but again there was no connection. Any ideas...? Did I forget to modify some file somewhere?
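    One piece that does not appear in the rules above is a return-path rule. Because the DNAT happens on the Debian box rather than on the router, the Windows guest answers external clients via its own default gateway, so the reply never passes back through the machine that rewrote the destination and the connection stalls. A hedged sketch of the usual fix in /etc/ufw/before.rules is to also source-NAT the forwarded traffic so the guest replies to the Debian host (interface name and addresses as in the question):

        *nat
        :PREROUTING ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        # rewrite the destination, as already done
        -A PREROUTING -i eth0 -p tcp --dport 3389 -j DNAT --to-destination 192.168.1.100
        # and make the guest's replies come back through this host
        -A POSTROUTING -d 192.168.1.100 -p tcp --dport 3389 -j MASQUERADE
        COMMIT

    The trade-off is that the guest then sees all remote desktop connections as coming from the Debian host rather than from the real client address.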

  • nginx: How can I set proxy_* directives only for matching URIs?

    - by Artem Russakovskii
    I've been at this for hours and I can't figure out a clean solution. Basically, I have an nginx proxy setup, which works really well, but I'd like to handle a few urls more manually. Specifically, there are 2-3 locations for which I'd like to set proxy_ignore_headers to Set-Cookie to force nginx to cache them (nginx doesn't cache responses with Set-Cookie as per http://wiki.nginx.org/HttpProxyModule#proxy_ignore_headers). So for these locations, all I'd like to do is set proxy_ignore_headers Set-Cookie; I've tried everything I could think of outside of setting up and duplicating every config value, but nothing works. I tried: Nesting location directives, hoping the inner location which matches on my files would just set this value and inherit the rest, but that wasn't the case - it seemed to ignore anything set in the outer location, most notably proxy_pass and I end up with a 404). Specifying the proxy_cache_valid directive in an if block that matches on $request_uri, but nginx complains that it's not allowed ("proxy_cache_valid" directive is not allowed here). Specifying a variable equal to "Set-Cookie" in an if block, and then trying to set proxy_cache_valid to that variable later, but nginx isn't allowing variables for this case and throws up. It should be so simple - modifying/appending a single directive for some requests, and yet I haven't been able to make nginx do that. What am I missing here? Is there at least a way to wrap common directives in a reusable block and have multiple location blocks refer to it, after adding their own unique bits? Thank you. Just for reference, the main location / block is included below, together with my failed proxy_ignore_headers directive for a specific URI. location / { # Setup var defaults set $no_cache ""; # If non GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie if ($request_method !~ ^(GET|HEAD)$) { set $no_cache "1"; } if ($http_user_agent ~* '(iphone|ipod|ipad|aspen|incognito|webmate|android|dream|cupcake|froyo|blackberry|webos|s8000|bada)') { set $mobile_request '1'; set $no_cache "1"; } # feed crawlers, don't want these to get stuck with a cached version, especially if it caches a 302 back to themselves (infinite loop) if ($http_user_agent ~* '(FeedBurner|FeedValidator|MediafedMetrics)') { set $no_cache "1"; } # Drop no cache cookie if need be # (for some reason, add_header fails if included in prior if-block) if ($no_cache = "1") { add_header Set-Cookie "_mcnc=1; Max-Age=17; Path=/"; add_header X-Microcachable "0"; } # Bypass cache if no-cache cookie is set, these are absolutely critical for Wordpress installations that don't use JS comments if ($http_cookie ~* "(_mcnc|comment_author_|wordpress_(?!test_cookie)|wp-postpass_)") { set $no_cache "1"; } if ($request_uri ~* wpsf-(img|js)\.php) { proxy_ignore_headers Set-Cookie; } # Bypass cache if flag is set proxy_no_cache $no_cache; proxy_cache_bypass $no_cache; # under no circumstances should there ever be a retry of a POST request, or any other request for that matter proxy_next_upstream off; proxy_read_timeout 86400s; # Point nginx to the real app/web server proxy_pass http://localhost; # Set cache zone proxy_cache microcache; # Set cache key to include identifying components proxy_cache_key $scheme$host$request_method$request_uri$mobile_request; # Only cache valid HTTP 200 responses for this long proxy_cache_valid 200 15s; #proxy_cache_min_uses 3; # Serve from cache if currently refreshing proxy_cache_use_stale updating timeout; # Send appropriate headers 
through proxy_set_header Host $host; # no need for this proxy_set_header X-Real-IP $remote_addr; # no need for this proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # Set files larger than 1M to stream rather than cache proxy_max_temp_file_size 1M; access_log /var/log/nginx/androidpolice-microcache.log custom; }
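    For what it's worth, the usual way to scope a directive like that is a separate (or nested) location for just those URIs; the catch is that proxy_pass and the cache directives set inside location / do not apply to a sibling location, so they have to be repeated. A sketch trimmed to the caching-relevant parts, using the same names as the config above:

        location ~* wpsf-(img|js)\.php {
            proxy_ignore_headers Set-Cookie;

            # these were set in "location /" above, which does not cover this block,
            # so the relevant ones are repeated here
            proxy_pass http://localhost;
            proxy_cache microcache;
            proxy_cache_key $scheme$host$request_method$request_uri$mobile_request;
            proxy_cache_valid 200 15s;
        }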

  • How can I switch an existing set of Subversion repositories to use ActiveDirectory?

    - by jpierson
    I have a set of private Subversion repositories on a Windows Server 2003 box which developers access via SVNServe over the svn:// protocol. Currently we have been using the authz and passwd files for each repository to control access; however, with the growing number of repositories and developers I'm considering switching to using their credentials from ActiveDirectory. We run an all-Microsoft shop and use IIS instead of Apache on all of our web servers, so I would prefer to continue to use SVNServe if possible. Besides it being possible, I'm also concerned about how to migrate our repositories so that the history for the existing users maps to the correct ActiveDirectory accounts. Keep in mind also that I'm not the network administrator and I'm not terribly familiar with ActiveDirectory, so I'll probably have to go through some other people to get the changes made in ActiveDirectory if necessary. What are my options?

    UPDATE 1: It appears from the SVN documentation that by using SASL I should be able to get SVNServe to authenticate using ActiveDirectory. To clarify, the answer that I'm looking for is how to go about configuring SVNServe (if possible) to use ActiveDirectory for authentication, and then how to modify an existing repository to remap existing svn users to their ActiveDirectory domain login accounts.

    UPDATE 2: It appears that the SASL support in SVNServe works off of a plugin model and the documentation only shows as an example. Looking at the Cyrus SASL Library, it looks like a number of authentication "mechanisms" are supported, but I'm not sure which one is to be used for ActiveDirectory support, nor can I find any documentation about such matters.

    UPDATE 3: OK, well it looks like in order to communicate with ActiveDirectory I'm looking to use saslauthd instead of sasldb for the *auxprop_plugin* property. Unfortunately it appears that, according to some posts (possibly outdated and inaccurate), saslauthd does not build on Windows and such endeavors are considered a work in progress.

    UPDATE 4: The latest post I've found on this topic makes it sound as though the proper binaries () are available through the MIT Kerberos Library, but it sounds like the author of this post on Nabble.com is still having issues getting things working.

    UPDATE 5: It looks like from the TortoiseSVN discussions and also this post on svn.haxx.se that even if saslgssapi.dll or whatever necessary binaries are available and configured on the Windows server, the clients will also need the same customization in order to work with these repositories. If this is true, we will only be able to get ActiveDirectory support from a Windows client if changes are made in clients such as TortoiseSVN and the CollabNet build of the client binaries to support such authentication schemes. Although that's what these posts suggest, it contradicts what I originally assumed from other reading, namely that being SASL compatible should require no changes on the client, only that the server be set up to handle the authentication mechanism. Reading a bit more carefully, the document about Cyrus SASL in Subversion, section 5, states: "1.5+ clients with Cyrus SASL support will be able to authenticate against 1.5+ servers with SASL enabled, provided at least one of the mechanisms supported by the server is also supported by the client." So clearly GSSAPI support (which I understand is required for Active Directory) must be available within both the client and the server.

    I have to say, I'm learning way more about the internals of how Subversion handles authentication than I ever wanted to, and I just want an answer about whether I can have Active Directory authentication support when using SVNServe on a Windows server and accessing it from Windows clients. According to the official documentation it seems that this is possible; however, as you can see, the configuration is not trivial, if it is even possible at all.

  • Oracle performance problem

    - by jreid42
    We are using an Oracle 11G machine that is very powerful and has redundant storage, etc. It's a beast, from what I have been told. We just got this DB for a tool that, when I first came on as a co-op, had around 20 people using it; now it's upwards of 150 people, and I am the only one working on it :(

    We currently have a system in place that distributes Perl scripts across our entire data center, essentially giving us a sort of "grid" computing power. The Perl scripts run a sort of simulation and report the results back to the database. They do selects/inserts. The load is not very high for each script, but it could be happening across 20-50 systems at the same time. We then have multiple data centers and users all hitting the same database with this same approach.

    Our main problem with this is that our database is getting overloaded with connections and has to drop some. We sometimes have upwards of 500 connections. These are old Perl scripts and they do not handle this well; essentially they fail and the results are lost. I would rather avoid having to rewrite a lot of them, as they are poorly written and a headache to even look at. The database itself is not overloaded; the connection overhead is just too high. We open a connection, make a quick query and then drop the connection — very short connections, but many of them. The database team has basically said we need to lower the number of connections or they are going to ignore us.

    Because this is distributed across our farm, we can't implement persistent connections. I do this with our web server, but it's on a fixed system. The others are Perl scripts that get opened and closed by the distribution tool and thus aren't always running.

    What would be my best approach to resolving this issue? The scripts themselves can wait for a connection to be open; they do not need to act immediately. Some sort of queuing system? It has been suggested to me to set up a few instances of a tool called "SQL Relay", maybe one in each data center. How reliable is this tool? How good is this approach? Would it work for what we need? We could have one for each data center and relay requests through it to our main database, keeping a pipeline of open persistent connections? Does this make sense? Are there any other suggestions you can make? Any ideas? Any help would be greatly appreciated.

    Sadly, I am just a co-op student working for a very big company and somehow all of this has landed on my shoulders (there is literally nobody to ask for help; it's a hardware company, everybody is a hardware engineer, and the database team is useless and in India), and I am quite lost as to what the best approach would be. I am extremely overworked and this problem is interfering with ongoing progress and basically needs to be resolved as quickly as possible — preferably without rewriting the whole system, purchasing hardware (not gonna happen), or shooting myself in the foot. HELP LOL!

  • httpd.conf configuration - for internal/external access

    - by tom smith
    hey. after a lot of trail/error/research, i've decided to post here in the hopes that i can get clarification on what i've screwed up... i've got a situation where i have multiple servers behind a router/firewall. i want to be able to access the sites i have from an internal and external url/address, and get the same site. i have to use portforwarding on the router, so i need to be able to use proxyreverse to redirect the user to the approriate server, running the apache/web app... my setup the external urls joomla.gotdns.com forge.gotdns.com both of these point to my router's external ip address (67.168.2.2) (not really) the router forwards port 80 to my server lserver6 192.168.1.56 lserver6 - 192.168.1.56 lserver9 - 192.168.1.59 lserver6 - joomla app lserver9 - forge app i want to be able to have the httpd process (httpd.conf) configured on lserver6 to be able to allow external users accessing the system (foo.gotdns.com) be able to access the joomla app on lserver6 and the same for the forge app running on lserver9 at the same time, i would also like to be able to access the apps from the internal servers, so i'd need to be able to somehow configure the vhost setup/proxyreverse setup to handle the internal access... i've tried setting up multiple vhosts with no luck.. i've looked at the different examples online.. so there must be something subtle that i'm missing... the section of my httpd.conf file that deals with the vhost is below... if there's something else that's needed, let me know and i can post it as well.. thanks -tom ##joomla - file /etc/httpd/conf.d/joomla.conf Alias /joomla /var/www/html/joomla <Directory /var/www/html/joomla> </Directory> # Use name-based virtual hosting. #NameVirtualHost *:80 # NOTE: NameVirtualHost cannot be used without a port specifier # (e.g. :80) if mod_ssl is being used, due to the nature of the # SSL protocol. # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for requests without a known # server name. #<VirtualHost *:80> # ServerAdmin [email protected] # DocumentRoot /www/docs/dummy-host.example.com # ServerName dummy-host.example.com # ErrorLog logs/dummy-host.example.com-error_log # CustomLog logs/dummy-host.example.com-access_log common #</VirtualHost> NameVirtualHost 192.168.1.56:80 <VirtualHost 192.168.1.56:80> #ServerAdmin [email protected] #DocumentRoot /var/www/html #ServerName lserver6.tmesa.com #ServerName fforge.tmesa.com ServerName fforge.gotdns.com:80 #ErrorLog logs/dummy-host.example.com-error_log #CustomLog logs/dummy-host.example.com-access_log common #ProxyRequests Off ProxyPass / http://192.168.1.81:80/ ProxyPassReverse / http://192.168.1.81:80/ </VirtualHost> <VirtualHost 192.168.1.56:80> #ServerAdmin [email protected] DocumentRoot /var/www/html/joomla #ServerName lserver6.tmesa.com #ServerName fforge.tmesa.com ServerName 192.168.1.56:80 #ErrorLog logs/dummy-host.example.com-error_log #CustomLog logs/dummy-host.example.com-access_log common #ProxyRequests Off </VirtualHost>

  • DHCP and DNS services configuration for VOIP system, windows domain, etc

    - by Stemen
    My company has numerous physical offices (for purposes of this discussion, 15 buildings). Some of them are well-connected to our primary data center via fiber; others will be connected to the data center by P2P T1. We are in the beginning stages of implementing an Avaya VoIP telephone system, and we will be replacing a significant portion of our network infrastructure in the process. In tandem with the phone system implementation, we are going to be re-addressing some of our networks and consolidating most of our Windows domains into one (not all domains, just most).

    We currently have quite a few Windows domains, and each of course has its own DNS zone. A few of those networks currently use DHCP, but the majority use static IP assignments for every device. I'm tired of managing static assignments -- I want to use DHCP on everything except servers. Printers and the like will have DHCP reservations. The new IP phones will need to get IP addresses from DHCP, though they need to be in a separate VLAN from the computers/printers/etc. The computers and printers need to be registered in DNS; that's currently handled by the Windows DHCP servers on each of the respective domains.

    We need to place a priority on DHCP and DNS being available on a per-site basis (in case something were to interrupt the WAN connection) for computers and (primarily) phones. Smaller locations (which will have IP phones but will not be a member of any Windows domain) will not have any Windows DNS/DHCP servers available. We are also looking for the easiest way to replace a part if it were to fail. That is to say, if a server/appliance/router hosting DHCP were to crash hard, and we couldn't very quickly recover the DHCP reservations and leases (and subsequently restore them onto a cold spare), we anticipate that bad things could happen.

    What is the best idea for how to re-implement DNS and DHCP keeping all of the above in mind? Some thoughts that have been raised (by myself or my coworkers):

    - Use Windows DNS and DHCP servers, where they exist, and use IP helpers to route DHCP requests to some other Windows server if necessary. This may not be acceptable if the WAN goes down and clients don't get a DHCP response.
    - Use Windows DNS (everywhere, over the WAN in some cases) and a mix of Windows DHCP and DHCP provided by Cisco routers. Every site would be covered for DHCP, but from what I've read, Cisco routers can't handle dynamic registration of DHCP clients to Windows DNS servers, which might create a problem where Cisco routers are used for DHCP.
    - Use Windows DNS (everywhere, over the WAN in some cases) and a mix of Windows DHCP and DHCP provided by some service running on an inexpensive Linux server. Is there any software that would allow DHCP leases granted by these Linux boxes to be dynamically registered on the Windows DNS servers?
    - Come up with a Linux solution for both DNS and DHCP, and deploy inexpensive Linux servers to every site. The requirements would be that the DNS zone be multi-master (like Windows DNS integrated with Active Directory), that DHCP be able to make dynamic DNS registrations in that zone for every lease (where a hostname is provided and it is thus possible), and that multiple servers either be authoritative for the same DHCP scope or at least receive a real-time copy/replication/sync of the lease table, so that if one server dies we still know which MAC has what address.
    - Purchase dedicated DNS/DHCP appliances and deploy them to all sites. From what I read/see, this solves all of our technical problems. Then come the financial problems... I don't have a ton of money to spend on this.
    - Or, some other solution that we've thus far overlooked and will consider upon recommendation.

    Can Cisco routers or Windows servers sync DHCP lease tables so that multiple servers can be authoritative (or active/passive, for all I care) for the same scope, in case one of the partners were to fail? I've read online (repeatedly) that ISC's DHCP is able to maintain the same lease table across multiple servers in order to solve this problem. Does anyone have any experience with, or advice regarding, that?
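    For reference, here is roughly what I understand an ISC dhcpd failover pair with dynamic DNS registration to look like. This is only a sketch: the peer name, addresses, and the corp.example.com zone are made up, and registering into an AD-integrated zone would additionally require the zone to accept the updates (nonsecure updates on a delegated subzone, or a GSS-TSIG-capable setup).

        # /etc/dhcp/dhcpd.conf on the primary (the partner's copy says "secondary")
        failover peer "office-dhcp" {
            primary;
            address 10.10.1.10;           # this server
            port 647;
            peer address 10.10.1.11;      # the partner server
            peer port 647;
            max-response-delay 60;
            max-unacked-updates 10;
            mclt 3600;                    # primary-only setting
            split 128;                    # share the pool roughly 50/50
            load balance max seconds 3;
        }

        # Register client hostnames in DNS as leases are handed out
        ddns-update-style interim;
        ddns-updates on;
        ddns-domainname "corp.example.com.";

        subnet 10.10.1.0 netmask 255.255.255.0 {
            option domain-name-servers 10.10.1.5;
            pool {
                failover peer "office-dhcp";
                range 10.10.1.100 10.10.1.200;
            }
        }

    Note that failover only covers the dynamic ranges inside pools; host reservations still have to be kept in sync between the two configs by hand (or by whatever pushes the config out).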

    Read the article

  • Nginx + PHP-FPM = "Random" 502 Bad Gateway

    - by david
    I am running Nginx and proxying php requests via FastCGI to PHP-FPM for processing. I will randomly receive 502 Bad Gateway error pages - I can reproduce this issue by clicking around my PHP websites very rapidly/refreshing a page for a minute or two. When I get the 502 error page all I have to do is refresh the browser and the page refreshes properly. Here is my setup: nginx/0.7.64 PHP 5.3.2 (fpm-fcgi) (built: Apr 1 2010 06:42:04) Ubuntu 9.10 (Latest 2.6 Paravirt) I compiled PHP-FPM using this ./configure directive ./configure --enable-fpm --sysconfdir=/etc/php5/conf.d --with-config-file-path=/etc/php5/conf.d/php.ini --with-zlib --with-openssl --enable-zip --enable-exif --enable-ftp --enable-mbstring --enable-mbregex --enable-soap --enable-sockets --disable-cgi --with-curl --with-curlwrappers --with-gd --with-mcrypt --enable-memcache --with-mhash --with-jpeg-dir=/usr/local/lib --with-mysql=/usr/bin/mysql --with-mysqli=/usr/bin/mysql_config --enable-pdo --with-pdo-mysql=/usr/bin/mysql --with-pdo-sqlite --with-pspell --with-snmp --with-sqlite --with-tidy --with-xmlrpc --with-xsl My php-fpm.conf looks like this (the relevant parts): ... <value name="pm"> <value name="max_children">3</value> ... <value name="request_terminate_timeout">60s</value> <value name="request_slowlog_timeout">30s</value> <value name="slowlog">/var/log/php-fpm.log.slow</value> <value name="rlimit_files">1024</value> <value name="rlimit_core">0</value> <value name="chroot"></value> <value name="chdir"></value> <value name="catch_workers_output">yes</value> <value name="max_requests">500</value> ... I've tried increasing the max_children to 10 and it makes no difference. I've also tried setting it to 'dynamic' and setting max_children to 50, and start_server to '5' without any difference. I have tried using both 1 and 5 nginx worker processes. 
My fastcgi_params conf looks like this:
    fastcgi_connect_timeout 60;
    fastcgi_send_timeout 180;
    fastcgi_read_timeout 180;
    fastcgi_buffer_size 128k;
    fastcgi_buffers 4 256k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;
    fastcgi_intercept_errors on;
    fastcgi_param QUERY_STRING $query_string;
    fastcgi_param REQUEST_METHOD $request_method;
    fastcgi_param CONTENT_TYPE $content_type;
    fastcgi_param CONTENT_LENGTH $content_length;
    fastcgi_param SCRIPT_NAME $fastcgi_script_name;
    fastcgi_param REQUEST_URI $request_uri;
    fastcgi_param DOCUMENT_URI $document_uri;
    fastcgi_param DOCUMENT_ROOT $document_root;
    fastcgi_param SERVER_PROTOCOL $server_protocol;
    fastcgi_param GATEWAY_INTERFACE CGI/1.1;
    fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
    fastcgi_param REMOTE_ADDR $remote_addr;
    fastcgi_param REMOTE_PORT $remote_port;
    fastcgi_param SERVER_ADDR $server_addr;
    fastcgi_param SERVER_PORT $server_port;
    fastcgi_param SERVER_NAME $server_name;
    fastcgi_param REDIRECT_STATUS 200;

    Nginx logs the error as:
    [error] 3947#0: *10530 connect() failed (111: Connection refused) while connecting to upstream, client: 68.40.xxx.xxx, server: www.domain.com, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.domain.com"

    PHP-FPM logs the following at the time of the error:
    [NOTICE] pid 17161, fpm_unix_init_main(), line 255: getrlimit(nofile): max:1024, cur:1024
    [NOTICE] pid 17161, fpm_event_init_main(), line 93: libevent: using epoll
    [NOTICE] pid 17161, fpm_init(), line 50: fpm is running, pid 17161
    [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17162 started
    [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17163 started
    [DEBUG] pid 17161, fpm_children_make(), line 403: [pool default] child 17164 started
    [NOTICE] pid 17161, fpm_event_loop(), line 111: ready to handle connections

    My CPU usage maxes out around 10-15% when I recreate the issue, and free memory (free -m) is 130MB. I had this intermittent 502 Bad Gateway issue when I was using php5-cgi to service my PHP requests as well. Does anyone know how to fix this?
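    For what it's worth, this is how I plan to check whether the FPM listen backlog is overflowing when the 502s happen (a sketch only; the port matches my config above):

        # count listen-queue overflows while reproducing the 502s
        netstat -s | egrep -i 'listen|overflow'

        # see what is queued on the FPM listening port
        ss -lnt '( sport = :9000 )'

    If the backlog turns out to be the bottleneck, raising max_children and the listen backlog, or pointing nginx at a unix socket instead of 127.0.0.1:9000, seem to be the usual mitigations, but I have not confirmed that yet.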

    Read the article

  • Exchange Server 2007 Setup

    - by AlamedaDad
    Hi, I'm working on an upgrade to Exchange 2007 and I wanted to get some advice on hardware choices. We currently have an Exchange 2003 STD server with 400 users split between 6 AD sites, all housed on a single server. We need to move to a redundant, fault-tolerant system to support our users. I'm planning on installing 2 Dell 1950 servers with W2k8-Std to act as CAS and Hub servers, with NLB to allow abstraction of the actual server name to the users. There won't be an edge system, since we already have a Barracuda box that will handle inbound/outbound spam/virus filtering. On the back end, I'm planning on 2 mailbox servers, which will be Dell 2950s with 16GB RAM, 2 either dual-core or quad-core CPUs, and 6 300GB SAS drives in some RAID config. These systems will be clustered using W2k8 Ent clustering and running CCR in Exchange. My questions are as follows:

    1. Is 16GB enough RAM for serving that many mailboxes along with the Windows clustering and CCR?

    2. I'm trying to figure out disk layouts and I'm unsure whether to use all local disk, or some local and some SAN via an OpenFiler iSCSI server. The SAN would be a Dell 2850 with 6 300GB SCSI drives and a PERC controller to slice as I want, with 8GB RAM.
       - Option 1: 2 drives, RAID 1 - OS; 2 drives, RAID 1 - logs; 2 drives, RAID 1 - mail stores.
       - Option 2: 2 drives, RAID 1 - OS and logs; 4 drives, RAID 5 - mail stores and scratch space for eseutil.
       - Option 3: 2 drives, RAID 1 - OS; 2 drives, RAID 1 - logs; 2 drives, RAID 0 - scratch space; ~300GB iSCSI volume for mail stores.
       - Option 4: 2 drives, RAID 1 - OS; 4 drives, RAID 5 - scratch space; ~300GB iSCSI volume for mail stores; ~300GB iSCSI volume for logs.

    3. I have 2 sockets for CPUs and need to choose between dual and quad cores. The dual cores have faster clocks but less cache and, I'm thinking, an older architecture. Am I better off with more cores and cache while sacrificing clock speed?

    4. I am planning on adding the new E2K7 cluster to the existing E2K3 environment, then moving each mailbox over, and then removing the old server. This seems more complicated than simply getting rid of the 2003 server, adding the 2007 cluster, and restoring the mailboxes using PowerControls or ExMerge. The migration option lets me do this on my own time, whereas a cutover means it all needs to work at once. If I go with the cutover method, how can I prebuild the servers and add them to the domain right after removing the 2003 server, or can't I? I think the answer is no, and the migration is my only real option if I want to prebuild.

    5. I also need to migrate about 30GB of public folders. Is there anything special about this, other than specifying in the E2K7 install that I want older Outlook clients and PFs set up? I guess I could even keep the E2K3 server to host just the PFs?

    6. Lastly, if I have a mix of Outlook 2000, 2003, and 2007, what do I need to do to make sure they all have access to the GAL and OAB? At the time of cutover we'll be at about 90% 2007, but we will have some older stuff around.

    7. My plan is to use Outlook Anywhere on laptops that are used outside the physical network. Are there any gotchas involved in that? I'm even thinking about using it for all Outlook clients; does anyone do that? The reason I'm considering it is that our WAN is really VPN tunnels over internet connections, so not a fully meshed, stable WAN.

    Thank you all very much for the assistance in advance and I look forward to discussion of these points! Regards...Michael
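    On the migration point, my understanding is that the Exchange 2007 management shell can pull mailboxes over from the 2003 server in batches, which is what makes the move-then-decommission approach workable on my own schedule. Something along these lines (a sketch; the server, storage group, and database names are placeholders):

        # Exchange 2007 Management Shell, run in batches once the CCR cluster is up
        Get-Mailbox -Database "OLDEX2K3\First Storage Group\Mailbox Store" |
            Move-Mailbox -TargetDatabase "E2K7CMS\SG1\MBX1" -BadItemLimit 10

    Please correct me if that is not how people actually stage these moves.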

    Read the article

  • Recommendations for distributed processing/distributed storage systems

    - by Eddie
    At my organization we have a processing and storage system spread across two dozen Linux machines that handles over a petabyte of data. The system right now is very ad hoc; processing automation and data management is handled by a collection of large Perl programs on independent machines. I am looking at distributed processing and storage systems to make it easier to maintain, to evenly distribute load and data with replication, and to grow in disk space and compute power.

    The system needs to be able to handle millions of files, varying in size between 50 megabytes and 50 gigabytes. Once created, the files will not be appended to, only replaced completely if need be. The files need to be accessible via HTTP for customer download. Right now, processing is automated by Perl scripts (that I have complete control over) which call a series of other programs (that I don't have control over because they are closed source) that essentially transform one data set into another. No data mining happening here.

    Here is a quick list of things I am looking for:

    - Reliability: These data must be accessible over HTTP about 99% of the time, so I need something that does data replication across the cluster.
    - Scalability: I want to be able to add more processing power and storage easily and rebalance the data across the cluster.
    - Distributed processing: Easy and automatic job scheduling and load balancing that fits with the processing workflow I briefly described above.
    - Data location awareness: Not strictly required but desirable. Since data and processing will be on the same set of nodes, I would like the job scheduler to schedule jobs on or close to the node that the data is actually on, to cut down on network traffic.

    Here is what I've looked at so far:

    Storage management:
    - GlusterFS: Looks really nice and easy to use, but doesn't seem to have a way to figure out what node(s) a file actually resides on to supply as a hint to the job scheduler.
    - GPFS: Seems like the gold standard of clustered filesystems. Meets most of my requirements except, like GlusterFS, data location awareness.
    - Ceph: Seems way too immature right now.

    Distributed processing:
    - Sun Grid Engine: I have a lot of experience with this and it's relatively easy to use (once it is configured properly, that is). But Oracle got its icy grip around it and it no longer seems very desirable.

    Both:
    - Hadoop/HDFS: At first glance it looked like Hadoop was perfect for my situation: distributed storage and job scheduling, and it was the only thing I found that would give me the data location awareness that I wanted. But I don't like the namenode being a single point of failure. Also, I'm not really sure if the MapReduce paradigm fits the type of processing workflow that I have. It seems like you need to write all your software specifically for MapReduce instead of just using Hadoop as a generic job scheduler.
    - OpenStack: I've done some reading on this but I'm having trouble deciding if it fits well with my problem or not.

    Does anyone have opinions or recommendations for technologies that would fit my problem well? Any suggestions or advice would be greatly appreciated. Thanks!
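    On the data location awareness point, my understanding is that HDFS will report where the blocks of a file physically live, which is the same locality information its scheduler uses. A sketch of the commands I have in mind (the path is made up, and on the versions I've looked at the front end is "hadoop" rather than "hdfs"):

        # ask the namenode which datanodes hold the blocks of a given file
        hadoop fsck /data/customer/run-000123.out -files -blocks -locations

        # spread existing blocks out after adding datanodes
        hadoop balancer -threshold 5

    If someone has used this locality information outside of MapReduce (i.e. to drive a generic scheduler like SGE), I would be very interested to hear about it.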

    Read the article

  • Improving TCP performance over a gigabit network with lots of connections and high traffic of small packets

    - by MinimeDJ
    I’m trying to improve my TCP throughput over a “gigabit network with lots of connections and high traffic of small packets”. My server OS is Ubuntu 11.10 Server 64bit. There are about 50.000 (and growing) clients connected to my server through TCP Sockets (all on the same port). 95% of of my packets have size of 1-150 bytes (TCP header and payload). The rest 5% vary from 150 up to 4096+ bytes. With the config below my server can handle traffic up to 30 Mbps (full duplex). Can you please advice best practice to tune OS for my needs? My /etc/sysctl.cong looks like this: kernel.pid_max = 1000000 net.ipv4.ip_local_port_range = 2500 65000 fs.file-max = 1000000 # net.core.netdev_max_backlog=3000 net.ipv4.tcp_sack=0 # net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.core.somaxconn = 2048 # net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 # net.ipv4.tcp_synack_retries = 2 net.ipv4.tcp_syncookies = 1 net.ipv4.tcp_mem = 50576 64768 98152 # net.core.wmem_default = 65536 net.core.rmem_default = 65536 net.ipv4.tcp_window_scaling=1 # net.ipv4.tcp_mem= 98304 131072 196608 # net.ipv4.tcp_timestamps = 0 net.ipv4.tcp_rfc1337 = 1 net.ipv4.ip_forward = 0 net.ipv4.tcp_congestion_control=cubic net.ipv4.tcp_tw_recycle = 0 net.ipv4.tcp_tw_reuse = 0 # net.ipv4.tcp_orphan_retries = 1 net.ipv4.tcp_fin_timeout = 25 net.ipv4.tcp_max_orphans = 8192 Here are my limits: $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 193045 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1000000 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 1000000 [ADDED] My NICs are the following: $ dmesg | grep Broad [ 2.473081] Broadcom NetXtreme II 5771x 10Gigabit Ethernet Driver bnx2x 1.62.12-0 (2011/03/20) [ 2.477808] bnx2x 0000:02:00.0: eth0: Broadcom NetXtreme II BCM57711E XGb (A0) PCI-E x4 5GHz (Gen2) found at mem fb000000, IRQ 28, node addr d8:d3:85:bd:23:08 [ 2.482556] bnx2x 0000:02:00.1: eth1: Broadcom NetXtreme II BCM57711E XGb (A0) PCI-E x4 5GHz (Gen2) found at mem fa000000, IRQ 40, node addr d8:d3:85:bd:23:0c [ADDED 2] ethtool -k eth0 Offload parameters for eth0: rx-checksumming: on tx-checksumming: on scatter-gather: on tcp-segmentation-offload: on udp-fragmentation-offload: off generic-segmentation-offload: on generic-receive-offload: on large-receive-offload: on rx-vlan-offload: on tx-vlan-offload: on ntuple-filters: off receive-hashing: off [ADDED 3] sudo ethtool -S eth0|grep -vw 0 NIC statistics: [1]: rx_bytes: 17521104292 [1]: rx_ucast_packets: 118326392 [1]: tx_bytes: 35351475694 [1]: tx_ucast_packets: 191723897 [2]: rx_bytes: 16569945203 [2]: rx_ucast_packets: 114055437 [2]: tx_bytes: 36748975961 [2]: tx_ucast_packets: 194800859 [3]: rx_bytes: 16222309010 [3]: rx_ucast_packets: 109397802 [3]: tx_bytes: 36034786682 [3]: tx_ucast_packets: 198238209 [4]: rx_bytes: 14884911384 [4]: rx_ucast_packets: 104081414 [4]: rx_discards: 5828 [4]: rx_csum_offload_errors: 1 [4]: tx_bytes: 35663361789 [4]: tx_ucast_packets: 194024824 [5]: rx_bytes: 16465075461 [5]: rx_ucast_packets: 110637200 [5]: tx_bytes: 43720432434 [5]: tx_ucast_packets: 202041894 [6]: rx_bytes: 16788706505 [6]: rx_ucast_packets: 113123182 [6]: tx_bytes: 38443961940 [6]: tx_ucast_packets: 202415075 [7]: rx_bytes: 16287423304 [7]: rx_ucast_packets: 
110369475 [7]: rx_csum_offload_errors: 1 [7]: tx_bytes: 35104168638 [7]: tx_ucast_packets: 184905201 [8]: rx_bytes: 12689721791 [8]: rx_ucast_packets: 87616037 [8]: rx_discards: 2638 [8]: tx_bytes: 36133395431 [8]: tx_ucast_packets: 196547264 [9]: rx_bytes: 15007548011 [9]: rx_ucast_packets: 98183525 [9]: rx_csum_offload_errors: 1 [9]: tx_bytes: 34871314517 [9]: tx_ucast_packets: 188532637 [9]: tx_mcast_packets: 12 [10]: rx_bytes: 12112044826 [10]: rx_ucast_packets: 84335465 [10]: rx_discards: 2494 [10]: tx_bytes: 36562151913 [10]: tx_ucast_packets: 195658548 [11]: rx_bytes: 12873153712 [11]: rx_ucast_packets: 89305791 [11]: rx_discards: 2990 [11]: tx_bytes: 36348541675 [11]: tx_ucast_packets: 194155226 [12]: rx_bytes: 12768100958 [12]: rx_ucast_packets: 89350917 [12]: rx_discards: 2667 [12]: tx_bytes: 35730240389 [12]: tx_ucast_packets: 192254480 [13]: rx_bytes: 14533227468 [13]: rx_ucast_packets: 98139795 [13]: tx_bytes: 35954232494 [13]: tx_ucast_packets: 194573612 [13]: tx_bcast_packets: 2 [14]: rx_bytes: 13258647069 [14]: rx_ucast_packets: 92856762 [14]: rx_discards: 3509 [14]: rx_csum_offload_errors: 1 [14]: tx_bytes: 35663586641 [14]: tx_ucast_packets: 189661305 rx_bytes: 226125043936 rx_ucast_packets: 1536428109 rx_bcast_packets: 351 rx_discards: 20126 rx_filtered_packets: 8694 rx_csum_offload_errors: 11 tx_bytes: 548442367057 tx_ucast_packets: 2915571846 tx_mcast_packets: 12 tx_bcast_packets: 2 tx_64_byte_packets: 35417154 tx_65_to_127_byte_packets: 2006984660 tx_128_to_255_byte_packets: 373733514 tx_256_to_511_byte_packets: 378121090 tx_512_to_1023_byte_packets: 77643490 tx_1024_to_1522_byte_packets: 43669214 tx_pause_frames: 228 Some info about SACK: When to turn TCP SACK off?
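    In case it helps, these are the extra checks I plan to run to see where the drops actually happen before tuning further (a sketch; eth0 as in the NIC listing above):

        # ring-buffer sizes and driver-level drops (the rx_discards above already hint at this)
        ethtool -g eth0                 # current vs. maximum RX/TX ring sizes
        ethtool -G eth0 rx 4096         # enlarge the RX ring if the driver allows it

        # kernel softirq backlog drops (a non-zero second column means netdev_max_backlog is too small)
        cat /proc/net/softnet_stat

        # listen-queue overflows and buffer pruning at the TCP layer
        netstat -s | egrep -i 'listen|prune|collapse'
        ss -s

    I would also welcome opinions on whether disabling SACK and timestamps, as in my sysctl above, is actually helping or hurting on a link like this.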

    Read the article

  • How to serve static files for multiple Django projects via nginx on the same domain

    - by thanley
    I am trying to setup my nginx conf so that I can serve the relevant files for my multiple Django projects. Ultimately I want each app to be available at www.example.com/app1, www.example.com/app2 etc. They all serve static files from a 'static-files' directory located in their respective project root. The project structure: Home Ubuntu Web www.example.com ref logs app app1 app1 static bower_components templatetags app1_project templates static-files app2 app2 static templates templatetags app2_project static-files app3 tests templates static-files static app3_project app3 venv When I use the conf below, there are no problems for serving the static-files for the app that I designate in the /static/ location. I can also access the different apps found at their locations. However, I cannot figure out how to serve all of the static files for all the apps at the same time. I have looked into using the 'try_files' command for the static location, but cannot figure out how to see if it is working or not. Nginx Conf - Only serving static files for one app: server { listen 80; server_name example.com; server_name www.example.com; access_log /home/ubuntu/web/www.example.com/logs/access.log; error_log /home/ubuntu/web/www.example.com/logs/error.log; root /home/ubuntu/web/www.example.com/; location /static/ { alias /home/ubuntu/web/www.example.com/app/app1/static-files/; } location /media/ { alias /home/ubuntu/web/www.example.com/media/; } location /app1/ { include uwsgi_params; uwsgi_param SCRIPT_NAME /app1; uwsgi_modifier1 30; uwsgi_pass unix:///home/ubuntu/web/www.example.com/app1.sock; } location /app2/ { include uwsgi_params; uwsgi_param SCRIPT_NAME /app2; uwsgi_modifier1 30; uwsgi_pass unix:///home/ubuntu/web/www.example.com/app2.sock; } location /app3/ { include uwsgi_params; uwsgi_param SCRIPT_NAME /app3; uwsgi_modifier1 30; uwsgi_pass unix:///home/ubuntu/web/www.example.com/app3.sock; } # what to serve if upstream is not available or crashes error_page 400 /static/400.html; error_page 403 /static/403.html; error_page 404 /static/404.html; error_page 500 502 503 504 /static/500.html; # Compression gzip on; gzip_http_version 1.0; gzip_comp_level 5; gzip_proxied any; gzip_min_length 1100; gzip_buffers 16 8k; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; # Some version of IE 6 don't handle compression well on some mime-types, # so just disable for them gzip_disable "MSIE [1-6].(?!.*SV1)"; # Set a vary header so downstream proxies don't send cached gzipped # content to IE6 gzip_vary on; } Essentially I want to have something like (I know this won't work) location /static/ { alias /home/ubuntu/web/www.example.com/app/app1/static-files/; alias /home/ubuntu/web/www.example.com/app/app2/static-files/; alias /home/ubuntu/web/www.example.com/app/app3/static-files/; } or (where it can serve the static files based on the uri) location /static/ { try_files $uri $uri/ =404; } So basically, if I use try_files like above, is the problem in my project directory structure? Or am I totally off base on this and I need to put each app in a subdomain instead of going this route? Thanks for any suggestions TLDR: I want to go to: www.example.com/APP_NAME_HERE And have nginx serve the static location: /home/ubuntu/web/www.example.com/app/APP_NAME_HERE/static-files/;
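    One idea I'm considering is to keep a single /static/ URL and let nginx walk the three static-files directories with try_files, capturing the part of the URI after /static/. Would something like this work (a sketch; the paths follow the layout above, and the first app that has a file at a given relative path would win)?

        location ~ ^/static/(?<asset>.*)$ {
            root /home/ubuntu/web/www.example.com/app;
            try_files /app1/static-files/$asset
                      /app2/static-files/$asset
                      /app3/static-files/$asset
                      =404;
        }

    The alternative I see is to namespace the static URLs per app (STATIC_URL = '/app1/static/' and so on in each Django settings file) and add one "location ^~ /app1/static/ { alias .../app/app1/static-files/; }" block per app, which avoids any ambiguity between apps.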

    Read the article
