Search Results

Search found 48797 results on 1952 pages for 'read write'.

  • How to have struct members accessible in different ways

    - by Paul J. Lucas
    I want to have a structure token that has start/end pairs for position, sentence, and paragraph information. I also want the members to be accessible in two different ways: as a start/end pair and individually. Given: struct token { struct start_end { int start; int end; }; start_end pos; start_end sent; start_end para; typedef start_end token::*start_end_ptr; }; I can write a function, say distance(), that computes the distance between any of the three start/end pairs like: int distance( token const &i, token const &j, token::start_end_ptr mbr ) { return (j.*mbr).start - (i.*mbr).end; } and call it like: token i, j; int d = distance( i, j, &token::pos ); that will return the distance of the pos pair. But I can also pass &token::sent or &token::para and it does what I want. Hence, the function is flexible. However, now I also want to write a function, say max(), that computes the maximum value of all the pos.start or all the pos.end or all the sent.start, etc. If I add: typedef int token::start_end::*int_ptr; I can write the function like: int max( list<token> const &l, token::int_ptr p ) { int m = numeric_limits<int>::min(); for ( list<token>::const_iterator i = l.begin(); i != l.end(); ++i ) { int n = (*i).pos.*p; // NOT WHAT I WANT: It hard-codes 'pos' if ( n > m ) m = n; } return m; } and call it like: list<token> l; l.push_back( i ); l.push_back( j ); int m = max( l, &token::start_end::start ); However, as indicated in the comment above, I do not want to hard-code pos. I want the flexibility of accessible the start or end of any of pos, sent, or para that will be passed as a parameter to max(). I've tried several things to get this to work (tried using unions, anonymous unions, etc.) but I can't come up with a data structure that allows the flexibility both ways while having each value stored only once. Any ideas how to organize the token struct so I can have what I want? Attempt at clarification Given struct of pairs of integers, I want to be able to "slice" the data in two distinct ways: By passing a pointer-to-member of a particular start/end pair so that the called function operates on any pair without knowing which pair. The caller decides which pair. By passing a pointer-to-member of a particular int (i.e., only one int of any pair) so that the called function operates on any int without knowing either which int or which pair said int is from. The caller decides which int of which pair. Another example for the latter would be to sum, say, all para.end or all sent.start. Also, and importantly: for #2 above, I'd ideally like to pass only a single pointer-to-member to reduce the burden on the caller. Hence, me trying to figure something out using unions.
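
    The pointer-to-member machinery above is C++-specific, but the underlying wish - pass one selector that names "this int of this pair" so the callee stays generic - can be illustrated with composed accessor functions. A minimal sketch in Java (the Token/StartEnd classes and field values here are hypothetical stand-ins for the struct above, not an answer to the C++ question itself):

      import java.util.List;
      import java.util.function.ToIntFunction;

      class StartEnd { int start, end; }

      class Token {
          StartEnd pos = new StartEnd();
          StartEnd sent = new StartEnd();
          StartEnd para = new StartEnd();
      }

      public class TokenMax {
          // The caller supplies a single selector; max() never knows which
          // pair or which int of that pair it is reading.
          static int max(List<Token> tokens, ToIntFunction<Token> selector) {
              int m = Integer.MIN_VALUE;
              for (Token t : tokens) {
                  m = Math.max(m, selector.applyAsInt(t));
              }
              return m;
          }

          public static void main(String[] args) {
              Token a = new Token(), b = new Token();
              a.sent.start = 5;
              b.sent.start = 9;
              System.out.println(max(List.of(a, b), t -> t.sent.start)); // prints 9
          }
      }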

  • mySQL to XLS using PHP?

    - by kielie
    Hi guys, how can I create a .XLS document from a mySQL table using PHP? I have tried just about everything, with no success. Basically, I need to take form data, and input it into a database, which I have done, and then I need to retrieve that table data and parse it into a microsoft excel file, which needs to be saved automatically onto the web server. <?php // DB TABLE Exporter // // How to use: // // Place this file in a safe place, edit the info just below here // browse to the file, enjoy! // CHANGE THIS STUFF FOR WHAT YOU NEED TO DO $dbhost = "-"; $dbuser = "-"; $dbpass = "-"; $dbname = "-"; $dbtable = "-"; // END CHANGING STUFF $cdate = date("Y-m-d"); // get current date // first thing that we are going to do is make some functions for writing out // and excel file. These functions do some hex writing and to be honest I got // them from some where else but hey it works so I am not going to question it // just reuse // This one makes the beginning of the xls file function xlsBOF() { echo pack("ssssss", 0x809, 0x8, 0x0, 0x10, 0x0, 0x0); return; } // This one makes the end of the xls file function xlsEOF() { echo pack("ss", 0x0A, 0x00); return; } // this will write text in the cell you specify function xlsWriteLabel($Row, $Col, $Value ) { $L = strlen($Value); echo pack("ssssss", 0x204, 8 + $L, $Row, $Col, 0x0, $L); echo $Value; return; } // make the connection an DB query $dbc = mysql_connect( $dbhost , $dbuser , $dbpass ) or die( mysql_error() ); mysql_select_db( $dbname ); $q = "SELECT * FROM ".$dbtable." WHERE date ='$cdate'"; $qr = mysql_query( $q ) or die( mysql_error() ); // start the file xlsBOF(); // these will be used for keeping things in order. $col = 0; $row = 0; // This tells us that we are on the first row $first = true; while( $qrow = mysql_fetch_assoc( $qr ) ) { // Ok we are on the first row // lets make some headers of sorts if( $first ) { foreach( $qrow as $k => $v ) { // take the key and make label // make it uppper case and replace _ with ' ' xlsWriteLabel( $row, $col, strtoupper( ereg_replace( "_" , " " , $k ) ) ); $col++; } // prepare for the first real data row $col = 0; $row++; $first = false; } // go through the data foreach( $qrow as $k => $v ) { // write it out xlsWriteLabel( $row, $col, $v ); $col++; } // reset col and goto next row $col = 0; $row++; } xlsEOF(); exit(); ?> I just can't seem to figure out how to integrate fwrite into all that to write the generated data into a .xls file, how would I go about doing that? I need to get this working quite urgently, so any help would be greatly appreciated. Thanx guys.

  • Faster method for matrix-vector product for a large matrix in C or C++ for use in GMRES

    - by user35959
    I have a large, dense matrix A, and I aim to find the solution to the linear system Ax=b using an iterative method (in MATLAB was the plan using its built in GMRES). For more than 10,000 rows, this is too much for my computer to store in memory, but I know that the entries in A are constructed by two known vectors x and y of length N and the entries satisfy: A(i,j) = .5*(x[i]-x[j])^2+([y[i]-y[j])^2 * log(x[i]-x[j])^2+([y[i]-y[j]^2). MATLAB's GMRES command accepts as input a function call that can compute the matrix vector product A*x, which allows me to handle larger matrices than I can store in memory. To write the matrix-vecotr product function, I first tried this in matlab by going row by row and using some vectorization, but I avoid spawning the entire array A (since it would be too large). This was fairly slow unfortnately in my application for GMRES. My plan was to write a mex file for MATLAB to, which is in C, and ideally should be significantly faster than the matlab code. I'm rather new to C, so this went rather poorly and my naive attempt at writing the code in C was slower than my partially vectorized attempt in Matlab. #include <math.h> #include "mex.h" void Aproduct(double *x, double *ctrs_x, double *ctrs_y, double *b, mwSize n) { mwSize i; mwSize j; double val; for (i=0; i<n; i++) { for (j=0; j<i; j++) { val = pow(ctrs_x[i]-ctrs_x[j],2)+pow(ctrs_y[i]-ctrs_y[j],2); b[i] = b[i] + .5* val * log(val) * x[j]; } for (j=i+1; j<n; j++) { val = pow(ctrs_x[i]-ctrs_x[j],2)+pow(ctrs_y[i]-ctrs_y[j],2); b[i] = b[i] + .5* val * log(val) * x[j]; } } } The above is the computational portion of the code for the matlab mex file (which is slightly modified C, if I understand correctly). Please note that I skip the case i=j, since in that case the variable val will be a 0*log(0), which should be interpreted as 0 for me, so I just skip it. Is there a more efficient or faster way to write this? When I call this C function via the mex file in matlab, it is quite slow, slower even than the matlab method I used. This surprises me since I suspected that C code should be much faster than matlab. The alternative matlab method which is partially vectorized that I am comparing it with is function Ax = Aprod(x,ctrs) n = length(x); Ax = zeros(n,1); for j=1:(n-3) v = .5*((ctrs(j,1)-ctrs(:,1)).^2+(ctrs(j,2)-ctrs(:,2)).^2).*log((ctrs(j,1)-ctrs(:,1)).^2+(ctrs(j,2)-ctrs(:,2)).^2); v(j)=0; Ax(j) = dot(v,x(1:n-3); end (the n-3 is because there is actually 3 extra components, but they are dealt with separately,so I excluded that code). This is partly vectorized and only needs one for loop, so it makes some sense that it is faster. However, I was hoping I could go even faster with C+mex file. Any suggestions or help would be greatly appreciated! Thanks! EDIT: I should be more clear. I am open to any faster method that can help me use GMRES to invert this matrix that I am interested in, which requires a faster way of doing the matrix vector product without explicitly loading the array into memory. Thanks!

  • HQL over ternary map with subcollection

    - by Diego Mijelshon
    I'm stuck with a query I need to write. Given the following model: public class A : Entity<Guid> { public virtual IDictionary<B, C> C { get; set; } } public class B : Entity<Guid> { } public class C : Entity<Guid> { public virtual int Data1 { get; set; } public virtual ICollection<D> D { get; set; } } public class D : Entity<Guid> { public virtual int Data2 { get; set; } } I need to get a list of A instances that have a D containing some data for the specified B (parameter) In the object model, that would be: listOfA.Where(a => a.C[b].D.Any(d => d.Data2 == 0)) But I wasn't able to write a working HQL. I'm able to write something like the following, which filters on C.Data1: from A a where a.C[:b].Data1 = 0 But I'm unable to do anything with the elements of a.C[:b].D (I get various parsing exceptions) Here are the mappings, in case you're interested (nothing special, generated by ConfORM): <class name="A"> <id name="Id" type="Guid"> <generator class="guid.comb" /> </id> <map name="C"> <key column="a_key" /> <map-key-many-to-many class="B" /> <one-to-many class="C" /> </map> </class> <class name="B"> <id name="Id" type="Guid"> <generator class="guid.comb" /> </id> </class> <class name="C"> <id name="Id" type="Guid"> <generator class="guid.comb" /> </id> <property name="Data1" /> <bag name="D"> <key column="c_key" /> <one-to-many class="D" /> </bag> </class> <class name="D"> <id name="Id" type="Guid"> <generator class="guid.comb" /> </id> <property name="Data2" /> </class> Thanks!

  • Help with Silverlight Sockets and Message delivery

    - by pixel3cs
    There are 4 months since I stopped developing my Silverlight Multiplayer Chess game. The problem was a bug wich I couldn't reproduce. Sice I got some free time this week I managed to discover the problem and I am now able to reproduce the bug. It seems that if I send 10 messages from client, one after another, with no delay between them, just like in the below example // when I press Enter, the client will 10 messages with no delay between them private void textBox_KeyDown(object sender, KeyEventArgs e) { if (e.Key == Key.Enter && textBox.Text.Length > 0) { for (int i = 0; i < 10; i++) { MessageBuilder mb = new MessageBuilder(); mb.Writer.Write((byte)GameCommands.NewChatMessageInTable); mb.Writer.Write(string.Format("{0}{2}: {1}", ClientVars.PlayerNickname, textBox.Text, i)); SendChatMessageEvent(mb.GetMessage()); //System.Threading.Thread.Sleep(100); } textBox.Text = string.Empty; } } // the method used by client to send a message to server public void SendData(Message message) { if (socket.Connected) { SocketAsyncEventArgs myMsg = new SocketAsyncEventArgs(); myMsg.RemoteEndPoint = socket.RemoteEndPoint; byte[] buffer = message.Buffer; myMsg.SetBuffer(buffer, 0, buffer.Length); socket.SendAsync(myMsg); } else { string err = "Server does not respond. You are disconnected."; socket.Close(); uiContext.Post(this.uiClient.ProcessOnErrorData, err); } } // the method used by server to receive data from client private void OnDataReceived(IAsyncResult async) { ClientSocketPacket client = async.AsyncState as ClientSocketPacket; int count = 0; try { if (client.Socket.Connected) count = client.Socket.EndReceive(async); // THE PROBLEM IS HERE // IF SERVER WAS RECEIVE ALL MESSAGES SEPARATELY, ONE BY ONE, THE COUNT // WAS ALWAYS 15, BUT BECAUSE THE SERVER RECEIVE 3 MESSAGES IN 1, THE COUNT // IS SOMETIME 45 } catch { HandleException(client); } client.MessageStream.Write(client.Buffer, 0, count); Message message; while (client.MessageStream.Read(out message)) { message.Tag = client; ThreadPool.QueueUserWorkItem(new WaitCallback(this.processingThreadEvent.ServerGotData), message); totalReceivedBytes += message.Buffer.Length; } try { if (client.Socket.Connected) client.Socket.BeginReceive(client.Buffer, 0, client.Buffer.Length, 0, new AsyncCallback(OnDataReceived), client); } catch { HandleException(client); } } there are sent only 3 big messages, and every big message contain 3 or 4 small messages. This is not the behavior I want. If I put a 100 milliseconds delay between message delivery, everything is work fine, but in a real world scenario users can send messages to server even at 1 millisecond between them. Are there any settings to be done in order to make the client send only one message at a time, or Even if I receive 3 messages in 1, are they full messages all the time (I dont't want to receive 2.5 messages in one big message) ? because if they are, I can read them and treat this new situation
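
    Worth noting: what is described here is standard TCP behavior rather than a Silverlight setting - the socket delivers a byte stream with no message boundaries, so "3 messages in 1" (or half a message) is always possible and the receiver has to frame messages itself, most simply with a length prefix. A sketch of that framing idea, written in Java purely for illustration (sendMessage/receiveMessage are hypothetical helpers, and the byte-array streams stand in for the real socket):

      import java.io.*;

      public class LengthPrefixFraming {
          // One logical message = 4-byte length followed by the payload.
          static void sendMessage(DataOutputStream out, byte[] payload) throws IOException {
              out.writeInt(payload.length);
              out.write(payload);
              out.flush();
          }

          // Reads exactly one logical message, even if TCP coalesced or split the bytes.
          static byte[] receiveMessage(DataInputStream in) throws IOException {
              int length = in.readInt();
              byte[] payload = new byte[length];
              in.readFully(payload);
              return payload;
          }

          public static void main(String[] args) throws IOException {
              ByteArrayOutputStream wire = new ByteArrayOutputStream();
              DataOutputStream out = new DataOutputStream(wire);
              for (int i = 0; i < 3; i++) {
                  sendMessage(out, ("chat message " + i).getBytes("UTF-8"));
              }
              // All three messages arrive in one buffer, yet unpack cleanly.
              DataInputStream in = new DataInputStream(new ByteArrayInputStream(wire.toByteArray()));
              for (int i = 0; i < 3; i++) {
                  System.out.println(new String(receiveMessage(in), "UTF-8"));
              }
          }
      }

    With framing in place, receiving 2.5 messages in one read stops being a problem: the reader simply waits until a complete frame has arrived before handing the message to the application.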

  • Delphi 7 - How can I copy a file that is being written to?

    - by Simon
    I have an application that logs information to a daily text file every second on a master PC. A Slave PC on the network using the same application would like to copy this text file to its local drive. I can see there is going to be file access issues. These files should be no larger than 30-40MB each. the network will be 100MB ethernet. I can see there is potential for the copying process to take longer than 1 second meaning the logging PC will need to open the file for writing while it is being read. What is the best method for the file writing(logging) and file copying procedures? I know there is the standard Windows CopyFile() procedure, however this has given me file access problems. There is also TFileStream using the fmShareDenyNone flag, but this also very occasionally gives me an access problem too (like 1 per week). What is this the best way of accomplishing this task? My current File Logging: procedure FSWriteline(Filename,Header,s : String); var LogFile : TFileStream; line : String; begin if not FileExists(filename) then begin LogFile := TFileStream.Create(FileName, fmCreate or fmShareDenyNone); try LogFile.Seek(0,soFromEnd); line := Header + #13#10; LogFile.Write(line[1],Length(line)); line := s + #13#10; LogFile.Write(line[1],Length(line)); finally logfile.Free; end; end else begin line := s + #13#10; Logfile:=tfilestream.Create(Filename,fmOpenWrite or fmShareDenyNone); try logfile.Seek(0,soFromEnd); Logfile.Write(line[1], length(line)); finally Logfile.free; end; end; end; My file copy procedure: procedure DoCopy(infile, Outfile : String); begin ForceDirectories(ExtractFilePath(outfile)); //ensure folder exists if FileAge(inFile) = FileAge(OutFile) then Exit; //they are the same modified time try { Open existing destination } fo := TFileStream.Create(Outfile, fmOpenReadWrite or fmShareDenyNone); fo.Position := 0; except { otherwise Create destination } fo := TFileStream.Create(OutFile, fmCreate or fmShareDenyNone); end; try { open source } fi := TFileStream.Create(InFile, fmOpenRead or fmShareDenyNone); try cnt:= 0; fi.Position := cnt; max := fi.Size; {start copying } Repeat dod := BLOCKSIZE; // Block size if cnt+dod>max then dod := max-cnt; if dod>0 then did := fo.CopyFrom(fi, dod); cnt:=cnt+did; Percent := Round(Cnt/Max*100); until (dod=0) finally fi.free; end; finally fo.free; end; end;

  • Python and mechanize login script

    - by Perun
    Hi fellow programmers! I am trying to write a script to login into my universities "food balance" page using python and the mechanize module... This is the page I am trying to log into: http://www.wcu.edu/11407.asp The website has the following form to login: <FORM method=post action=https://itapp.wcu.edu/BanAuthRedirector/Default.aspx><INPUT value=https://cf.wcu.edu/busafrs/catcard/idsearch.cfm type=hidden name=wcuirs_uri> <P><B>WCU ID Number<BR></B><INPUT maxLength=12 size=12 type=password name=id> </P> <P><B>PIN<BR></B><INPUT maxLength=20 type=password name=PIN> </P> <P></P> <P><INPUT value="Request Access" type=submit name=submit> </P></FORM> From this we know that I need to fill in the following fields: 1. name=id 2. name=PIN With the action: action=https://itapp.wcu.edu/BanAuthRedirector/Default.aspx This is the script I have written thus far: #!/usr/bin/python2 -W ignore import mechanize, cookielib from time import sleep url = 'http://www.wcu.edu/11407.asp' myId = '11111111111' myPin = '22222222222' # Browser #br = mechanize.Browser() #br = mechanize.Browser(factory=mechanize.DefaultFactory(i_want_broken_xhtml_support=True)) br = mechanize.Browser(factory=mechanize.RobustFactory()) # Use this because of bad html tags in the html... # Cookie Jar cj = cookielib.LWPCookieJar() br.set_cookiejar(cj) # Browser options br.set_handle_equiv(True) br.set_handle_gzip(True) br.set_handle_redirect(True) br.set_handle_referer(True) br.set_handle_robots(False) # Follows refresh 0 but not hangs on refresh > 0 br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1) # User-Agent (fake agent to google-chrome linux x86_64) br.addheaders = [('User-agent','Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11'), ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'), ('Accept-Encoding', 'gzip,deflate,sdch'), ('Accept-Language', 'en-US,en;q=0.8'), ('Accept-Charset', 'ISO-8859-1,utf-8;q=0.7,*;q=0.3')] # The site we will navigate into br.open(url) # Go though all the forms (for debugging only) for f in br.forms(): print f # Select the first (index two) form br.select_form(nr=2) # User credentials br.form['id'] = myId br.form['PIN'] = myPin br.form.action = 'https://itapp.wcu.edu/BanAuthRedirector/Default.aspx' # Login br.submit() # Wait 10 seconds sleep(10) # Save to a file f = file('mycatpage.html', 'w') f.write(br.response().read()) f.close() Now the problem... For some odd reason the page I get back (in mycatpage.html) is the login page and not the expected page that displays my "cat cash balance" and "number of block meals" left... Does anyone have any idea why? Keep in mind that everything is correct with the header files and while the id and pass are not really 111111111 and 222222222, the correct values do work with the website (using a browser...) 
Thanks in advance EDIT Another script I tried: from urllib import urlopen, urlencode import urllib2 import httplib url = 'https://itapp.wcu.edu/BanAuthRedirector/Default.aspx' myId = 'xxxxxxxx' myPin = 'xxxxxxxx' data = { 'id':myId, 'PIN':myPin, 'submit':'Request Access', 'wcuirs_uri':'https://cf.wcu.edu/busafrs/catcard/idsearch.cfm' } opener = urllib2.build_opener() opener.addheaders = [('User-agent','Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11'), ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'), ('Accept-Encoding', 'gzip,deflate,sdch'), ('Accept-Language', 'en-US,en;q=0.8'), ('Accept-Charset', 'ISO-8859-1,utf-8;q=0.7,*;q=0.3')] request = urllib2.Request(url, urlencode(data)) open("mycatpage.html", 'w').write(opener.open(request)) This has the same behavior...

  • Restart program from a certain line with an if statement?

    - by user1744093
    could anyone help me restart my program from line 46 if the user enters 1 (just after the comment where it states that the next code is going to ask the user for 2 inputs) and if the user enters -1 end it. I cannot think how to do it. I'm new to C# any help you could give would be great! class Program { static void Main(string[] args) { //Displays data in correct Format List<float> inputList = new List<float>(); TextReader tr = new StreamReader("c:/users/tom/documents/visual studio 2010/Projects/DistanceCalculator3/DistanceCalculator3/TextFile1.txt"); String input = Convert.ToString(tr.ReadToEnd()); String[] items = input.Split(','); Console.WriteLine("Point Latitude Longtitude Elevation"); for (int i = 0; i < items.Length; i++) { if (i % 3 == 0) { Console.Write((i / 3) + "\t\t"); } Console.Write(items[i]); Console.Write("\t\t"); if (((i - 2) % 3) == 0) { Console.WriteLine(); } } Console.WriteLine(); Console.WriteLine(); // Ask for two inputs from the user which is then converted into 6 floats and transfered in class Coordinates Console.WriteLine("Please enter the two points that you wish to know the distance between:"); string point = Console.ReadLine(); string[] pointInput = point.Split(' '); int pointNumber = Convert.ToInt16(pointInput[0]); int pointNumber2 = Convert.ToInt16(pointInput[1]); Coordinates distance = new Coordinates(); distance.latitude = (Convert.ToDouble(items[pointNumber * 3])); distance.longtitude = (Convert.ToDouble(items[(pointNumber * 3) + 1])); distance.elevation = (Convert.ToDouble(items[(pointNumber * 3) + 2])); distance.latitude2 = (Convert.ToDouble(items[pointNumber2 * 3])); distance.longtitude2 = (Convert.ToDouble(items[(pointNumber2 * 3) + 1])); distance.elevation2 = (Convert.ToDouble(items[(pointNumber2 * 3) + 2])); //Calculate the distance between two points const double PIx = 3.141592653589793; const double RADIO = 6371; double dlat = ((distance.latitude2) * (PIx / 180)) - ((distance.latitude) * (PIx / 180)); double dlon = ((distance.longtitude2) * (PIx / 180)) - ((distance.longtitude) * (PIx / 180)); double a = (Math.Sin(dlat / 2) * Math.Sin(dlat / 2)) + Math.Cos((distance.latitude) * (PIx / 180)) * Math.Cos((distance.latitude2) * (PIx / 180)) * (Math.Sin(dlon / 2) * Math.Sin(dlon / 2)); double angle = 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a)); double ultimateDistance = (angle * RADIO); Console.WriteLine("The distance between your two points is..."); Console.WriteLine(ultimateDistance); //Repeat the program if the user enters 1, end the program if the user enters -1 Console.WriteLine("If you wish to calculate another distance type 1 and return, if you wish to end the program, type -1."); Console.ReadLine(); if (Convert.ToInt16(Console.ReadLine()) == 1); { //here is where I need it to repeat }
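
    The usual way to "restart from line 46" is to wrap that block in a loop rather than jump back to it; also note the snippet above calls Console.ReadLine() twice and has a stray semicolon after the if, so the comparison never sees the typed value. A minimal sketch of the loop shape (shown in Java; the C# version is line-for-line the same with Console.ReadLine):

      import java.util.Scanner;

      public class DistanceLoop {
          public static void main(String[] args) {
              Scanner in = new Scanner(System.in);
              int choice;
              do {
                  // ... ask for the two points, compute and print the distance ...
                  System.out.println("Type 1 to calculate another distance, or -1 to end:");
                  choice = Integer.parseInt(in.nextLine().trim());
              } while (choice == 1);
          }
      }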

  • RSA encryption/decryption in a client-server application

    - by user308806
    Hi guys, probably missing something very straight forward on this, but please forgive me, I'm very naive! Have a client server application where the client identifies its self with an RSA encrypted username & password. Unfortunately I'm getting a "bad padding exception: data must start with zero" when i try to decrypt with the public key on the client side. I'm fairly sure the key is correct as I have tested encrypting with public key then decrypting with private key on the client side with no problems at all. Just seems when I transfer it over the connection it messses it up somehow?! Using PrintWriter & BufferedReader on the sockets if thats of importance. EncodeBASE64 & DecodeBASE64 encode byte[] to 64base and vice versa respectively. Any ideas guys?? Client side: Socket connectionToServer = new Socket("127.0.0.1", 7050); InputStream in = connectionToServer.getInputStream(); DataInputStream dis = new DataInputStream(in); int length = dis.readInt(); byte[] data = new byte[length]; // dis.readFully(data); dis.read(data); System.out.println("The received Data*****************************************"); System.out.println("The length of bits "+ length); System.out.println(data); System.out.println("***********************************************************"); Decryption d = new Decryption(); byte [] ttt = d.decrypt(data); System.out.print(data); String ss = new String(ttt); System.out.println("***********************"); System.out.println(ss); System.out.println("************************"); Server Side: in = connectionFromClient.getInputStream(); OutputStream out = connectionFromClient.getOutputStream(); DataOutputStream dataOut = new DataOutputStream(out); LicenseList licenses = new LicenseList(); String ValidIDs = licenses.getAllIDs(); System.out.println(ValidIDs); Encryption enc = new Encryption(); byte[] encrypted = enc.encrypt(ValidIDs); byte[] dd = enc.encrypt(ValidIDs); String tobesent = new String(dd); //byte[] rsult = enc.decrypt(dd); //String tt = String(rsult); System.out.println("The sent data**********************************************"); System.out.println(dd); String temp = new String(dd); System.out.println(temp); System.out.println("*************************************************************"); //BufferedWriter bf = new BufferedWriter(OutputStreamWriter(out)); //dataOut.write(ValidIDs.getBytes().length); dataOut.writeInt(ValidIDs.getBytes().length); dataOut.flush(); dataOut.write(encrypted); dataOut.flush(); System.out.println("********Testing**************"); System.out.println("Here are the ids:::"); System.out.println(licenses.getAllIDs()); System.out.println("**********************"); //bw.write("it is working well\n");
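
    Two details in the code above commonly produce exactly this "data must start with zero" BadPaddingException: the ciphertext is round-tripped through a String (new String(dd) / tobesent), which corrupts arbitrary bytes, and dis.read(data) may return before the whole block has arrived (readFully blocks until it has). If the bytes must cross a text channel such as PrintWriter/BufferedReader, Base64-encoding them first keeps them intact. A sketch (standard public-encrypt/private-decrypt shown for simplicity; the Base64 step is the same whichever key encrypts):

      import java.nio.charset.StandardCharsets;
      import java.security.KeyPair;
      import java.security.KeyPairGenerator;
      import java.util.Base64;
      import javax.crypto.Cipher;

      public class RsaOverTextChannel {
          public static void main(String[] args) throws Exception {
              KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
              gen.initialize(2048);
              KeyPair pair = gen.generateKeyPair();

              Cipher enc = Cipher.getInstance("RSA");
              enc.init(Cipher.ENCRYPT_MODE, pair.getPublic());
              byte[] cipherBytes = enc.doFinal("user:password".getBytes(StandardCharsets.UTF_8));

              // Base64 makes the binary ciphertext line-safe, so it can be sent
              // with println() and read back with readLine() without damage.
              String wire = Base64.getEncoder().encodeToString(cipherBytes);

              byte[] received = Base64.getDecoder().decode(wire);
              Cipher dec = Cipher.getInstance("RSA");
              dec.init(Cipher.DECRYPT_MODE, pair.getPrivate());
              System.out.println(new String(dec.doFinal(received), StandardCharsets.UTF_8));
          }
      }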

  • Convert Java program to C

    - by imicrothinking
    I need a bit of guidance with writing a C program...a bit of quick background as to my level, I've programmed in Java previously, but this is my first time programming in C, and we've been tasked to translate a word count program from Java to C that consists of the following: Read a file from memory Count the words in the file For each occurrence of a unique word, keep a word counter variable Print out the top ten most frequent words and their corresponding occurrences Here's the source program in Java: package lab0; import java.io.File; import java.io.FileReader; import java.util.ArrayList; import java.util.Calendar; import java.util.Collections; public class WordCount { private ArrayList<WordCountNode> outputlist = null; public WordCount(){ this.outputlist = new ArrayList<WordCountNode>(); } /** * Read the file into memory. * * @param filename name of the file. * @return content of the file. * @throws Exception if the file is too large or other file related exception. */ public char[] readFile(String filename) throws Exception{ char [] result = null; File file = new File(filename); long size = file.length(); if (size > Integer.MAX_VALUE){ throw new Exception("File is too large"); } result = new char[(int)size]; FileReader reader = new FileReader(file); int len, offset = 0, size2read = (int)size; while(size2read > 0){ len = reader.read(result, offset, size2read); if(len == -1) break; size2read -= len; offset += len; } return result; } /** * Make article word by word. * * @param article the content of file to be counted. * @return string contains only letters and "'". */ private enum SPLIT_STATE {IN_WORD, NOT_IN_WORD}; /** * Go through article, find all the words and add to output list * with their count. * * @param article the content of the file to be counted. * @return words in the file and their counts. */ public ArrayList<WordCountNode> countWords(char[] article){ SPLIT_STATE state = SPLIT_STATE.NOT_IN_WORD; if(null == article) return null; char curr_ltr; int curr_start = 0; for(int i = 0; i < article.length; i++){ curr_ltr = Character.toUpperCase( article[i]); if(state == SPLIT_STATE.IN_WORD){ article[i] = curr_ltr; if ((curr_ltr < 'A' || curr_ltr > 'Z') && curr_ltr != '\'') { article[i] = ' '; //printf("\nthe word is %s\n\n",curr_start); if(i - curr_start < 0){ System.out.println("i = " + i + " curr_start = " + curr_start); } addWord(new String(article, curr_start, i-curr_start)); state = SPLIT_STATE.NOT_IN_WORD; } }else{ if (curr_ltr >= 'A' && curr_ltr <= 'Z') { curr_start = i; article[i] = curr_ltr; state = SPLIT_STATE.IN_WORD; } } } return outputlist; } /** * Add the word to output list. */ public void addWord(String word){ int pos = dobsearch(word); if(pos >= outputlist.size()){ outputlist.add(new WordCountNode(1L, word)); }else{ WordCountNode tmp = outputlist.get(pos); if(tmp.getWord().compareTo(word) == 0){ tmp.setCount(tmp.getCount() + 1); }else{ outputlist.add(pos, new WordCountNode(1L, word)); } } } /** * Search the output list and return the position to put word. * @param word is the word to be put into output list. * @return position in the output list to insert the word. 
*/ public int dobsearch(String word){ int cmp, high = outputlist.size(), low = -1, next; // Binary search the array to find the key while (high - low > 1) { next = (high + low) / 2; // all in upper case cmp = word.compareTo((outputlist.get(next)).getWord()); if (cmp == 0) return next; else if (cmp < 0) high = next; else low = next; } return high; } public static void main(String args[]){ // handle input if (args.length == 0){ System.out.println("USAGE: WordCount <filename> [Top # of results to display]\n"); System.exit(1); } String filename = args[0]; int dispnum; try{ dispnum = Integer.parseInt(args[1]); }catch(Exception e){ dispnum = 10; } long start_time = Calendar.getInstance().getTimeInMillis(); WordCount wordcount = new WordCount(); System.out.println("Wordcount: Running..."); // read file char[] input = null; try { input = wordcount.readFile(filename); } catch (Exception e) { // TODO Auto-generated catch block e.printStackTrace(); System.exit(1); } // count all word ArrayList<WordCountNode> result = wordcount.countWords(input); long end_time = Calendar.getInstance().getTimeInMillis(); System.out.println("wordcount: completed " + (end_time - start_time)/1000000 + "." + (end_time - start_time)%1000000 + "(s)"); System.out.println("wordsort: running ..."); start_time = Calendar.getInstance().getTimeInMillis(); Collections.sort(result); end_time = Calendar.getInstance().getTimeInMillis(); System.out.println("wordsort: completed " + (end_time - start_time)/1000000 + "." + (end_time - start_time)%1000000 + "(s)"); Collections.reverse(result); System.out.println("\nresults (TOP "+ dispnum +" from "+ result.size() +"):\n" ); // print out result String str ; for (int i = 0; i < result.size() && i < dispnum; i++){ if(result.get(i).getWord().length() > 15) str = result.get(i).getWord().substring(0, 14); else str = result.get(i).getWord(); System.out.println(str + " - " + result.get(i).getCount()); } } public class WordCountNode implements Comparable{ private String word; private long count; public WordCountNode(long count, String word){ this.count = count; this.word = word; } public String getWord() { return word; } public void setWord(String word) { this.word = word; } public long getCount() { return count; } public void setCount(long count) { this.count = count; } public int compareTo(Object arg0) { // TODO Auto-generated method stub WordCountNode obj = (WordCountNode)arg0; if( count - obj.getCount() < 0) return -1; else if( count - obj.getCount() == 0) return 0; else return 1; } } } Here's my attempt (so far) in C: #include <stdio.h> #include <stdlib.h> #include <stdbool.h> #include <string.h> // Read in a file FILE *readFile (char filename[]) { FILE *inputFile; inputFile = fopen (filename, "r"); if (inputFile == NULL) { printf ("File could not be opened.\n"); exit (EXIT_FAILURE); } return inputFile; } // Return number of words in an array int wordCount (FILE *filePointer, char filename[]) {//, char *words[]) { // count words int count = 0; char temp; while ((temp = getc(filePointer)) != EOF) { //printf ("%c", temp); if ((temp == ' ' || temp == '\n') && (temp != '\'')) count++; } count += 1; // counting method uses space AFTER last character in word - the last space // of the last character isn't counted - off by one error // close file fclose (filePointer); return count; } // Print out the frequencies of the 10 most frequent words in the console int main (int argc, char *argv[]) { /* Step 1: Read in file and check for errors */ FILE *filePointer; filePointer = readFile (argv[1]); /* Step 
2: Do a word count to prep for array size */ int count = wordCount (filePointer, argv[1]); printf ("Number of words is: %i\n", count); /* Step 3: Create a 2D array to store words in the file */ // open file to reset marker to beginning of file filePointer = fopen (argv[1], "r"); // store words in character array (each element in array = consecutive word) char allWords[count][100]; // 100 is an arbitrary size - max length of word int i,j; char temp; for (i = 0; i < count; i++) { for (j = 0; j < 100; j++) { // labels are used with goto statements, not loops in C temp = getc(filePointer); if ((temp == ' ' || temp == '\n' || temp == EOF) && (temp != '\'') ) { allWords[i][j] = '\0'; break; } else { allWords[i][j] = temp; } printf ("%c", allWords[i][j]); } printf ("\n"); } // close file fclose (filePointer); /* Step 4: Use a simple selection sort algorithm to sort 2D char array */ // PStep 1: Compare two char arrays, and if // (a) c1 > c2, return 2 // (b) c1 == c2, return 1 // (c) c1 < c2, return 0 qsort(allWords, count, sizeof(char[][]), pstrcmp); /* int k = 0, l = 0, m = 0; char currentMax, comparedElement; int max; // the largest element in the current 2D array int elementToSort = 0; // elementToSort determines the element to swap with starting from the left // Outer a iterates through number of swaps needed for (k = 0; k < count - 1; k++) { // times of swaps max = k; // max element set to k // Inner b iterates through successive elements to fish out the largest element for (m = k + 1; m < count - k; m++) { currentMax = allWords[k][l]; comparedElement = allWords[m][l]; // Inner c iterates through successive chars to set the max vars to the largest for (l = 0; (currentMax != '\0' || comparedElement != '\0'); l++) { if (currentMax > comparedElement) break; else if (currentMax < comparedElement) { max = m; currentMax = allWords[m][l]; break; } else if (currentMax == comparedElement) continue; } } // After max (count and string) is determined, perform swap with temp variable char swapTemp[1][20]; int y = 0; do { swapTemp[0][y] = allWords[elementToSort][y]; allWords[elementToSort][y] = allWords[max][y]; allWords[max][y] = swapTemp[0][y]; } while (swapTemp[0][y++] != '\0'); elementToSort++; } */ int a, b; for (a = 0; a < count; a++) { for (b = 0; (temp = allWords[a][b]) != '\0'; b++) { printf ("%c", temp); } printf ("\n"); } // Copy rows to different array and print results /* char arrayCopy [count][20]; int ac, ad; char tempa; for (ac = 0; ac < count; ac++) { for (ad = 0; (tempa = allWords[ac][ad]) != '\0'; ad++) { arrayCopy[ac][ad] = tempa; printf("%c", arrayCopy[ac][ad]); } printf("\n"); } */ /* Step 5: Create two additional arrays: (a) One in which each element contains unique words from char array (b) One which holds the count for the corresponding word in the other array */ /* Step 6: Sort the count array in decreasing order, and print the corresponding array element as well as word count in the console */ return 0; } // Perform housekeeping tasks like freeing up memory and closing file I'm really stuck on the selection sort algorithm. I'm currently using 2D arrays to represent strings, and that worked out fine, but when it came to sorting, using three level nested loops didn't seem to work, I tried to use qsort instead, but I don't fully understand that function as well. Constructive feedback and criticism greatly welcome (...and needed)!

  • The best JSF coding pattern for editing JPA entities using @RequestScoped only

    - by AlanObject
    I am in a project where I will write a lot of pages like this, so I want to use the most efficient (to write) coding pattern. Background: In the past I have used CODI's @ViewAccessScoped to preserve state between requests, and more recently I have started using flash scoped objects to save state. I can't use JSF @ViewScoped because I use CDI and they don't play well together. So I want to see if I can do this with only @RequestScoped backing beans. The page is designed like this (the p namespace is Primefaces): <f:metadata> <f:viewParam name="ID" value="#{backing.id}" /> </f:metadata> .... <h1>Edit Object Page</h1> <h:form id="formObj" rendered="#{backing.accessOK}"> <p:panelGrid columns="2"> <h:outputLabel value="Field #1:"/> <p:inputText value="#{backing.record.field1}" /> (more input fields) <h:outputLabel value="Action:" /> <h:panelGroup> <p:commandButton value="Save" action="#{backing.save}" /> <p:commandButton value="Cancel" action="backing.cancel" /> </h:panelGroup> </p:panelGrid> <p:messages showDetail="true" showSummary="true" /> </h:form> If the page is requested, the method accessOK() has the ability to keep the h:form from being rendered. Instead, the p:messages is shown with whatever FacesMessage(s) the accessOK() method cares to set. The pattern for the bean backing looks like this: @Named @RequestScoped public class Backing { private long id; private SomeJPAEntity record; private Boolean accessOK; public long getId() { return id; } public void setId(long value) { id = value; } public boolean accessOK() { if (accessOK != null) return accessOK; if (getRecord() == null) { // add a FacesMessage that explains the record // does not exist return accessOK = false; // note single = } // do any other access checks, such as write permissions return accessOK = true; } public SomeJPAEntity getRecord() { if (record != null) return record; if (getId() > 0) record = // get the record from DB else record = new SomeJPAEntity(); return record; } public String execute() { if (!accessOK()) return null; // bad edit // do other integrity checks here. If fail, set FacesMessages // and return null; if (getId() > 0) // merge the record back into the data base else // persist the record } } Here is what goes wrong with this model. When the Save button is clicked, a new instance of Backing is built, and then there are a lot of calls to the getRecord() getter before the setID() setter is called. So the logic in getRecord() breaks because it cannot rely on the id property being valid when it is called. When this was a @ViewAccessScoped (or ViewScoped) backing bean, then both the id and record properties are already set when the form is processed with the commandButton. Alternatively you can save those properties in flash storage but that has its own problems I want to avoid. So is there a way to make this programming model work within the specification?
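
    One @RequestScoped-only workaround is to stop relying on setter ordering: let getRecord() fall back to reading the id straight from the request parameter map when setId() has not run yet. This is only a sketch of the idea - it assumes the postback actually carries an "ID" parameter (for example via a plain hidden input named ID inside the form, or by including the view parameters in the form action), which is not shown above:

      import javax.faces.context.FacesContext;

      final class ViewParams {
          private ViewParams() {}

          // Fallback for the Backing bean: read the "ID" view parameter directly
          // from the request when the id field has not been populated yet.
          static long requestedId(long currentId) {
              if (currentId > 0) return currentId;
              String raw = FacesContext.getCurrentInstance()
                                       .getExternalContext()
                                       .getRequestParameterMap()
                                       .get("ID");   // must match <f:viewParam name="ID">
              return (raw == null || raw.isEmpty()) ? 0L : Long.parseLong(raw);
          }
      }

    Inside getRecord(), calling ViewParams.requestedId(getId()) would then drive the database lookup instead of the raw field.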

  • How can I modify my Shunting-Yard Algorithm so it accepts unary operators?

    - by KingNestor
    I've been working on implementing the Shunting-Yard Algorithm in JavaScript for class. Here is my work so far: var userInput = prompt("Enter in a mathematical expression:"); var postFix = InfixToPostfix(userInput); var result = EvaluateExpression(postFix); document.write("Infix: " + userInput + "<br/>"); document.write("Postfix (RPN): " + postFix + "<br/>"); document.write("Result: " + result + "<br/>"); function EvaluateExpression(expression) { var tokens = expression.split(/([0-9]+|[*+-\/()])/); var evalStack = []; while (tokens.length != 0) { var currentToken = tokens.shift(); if (isNumber(currentToken)) { evalStack.push(currentToken); } else if (isOperator(currentToken)) { var operand1 = evalStack.pop(); var operand2 = evalStack.pop(); var result = PerformOperation(parseInt(operand1), parseInt(operand2), currentToken); evalStack.push(result); } } return evalStack.pop(); } function PerformOperation(operand1, operand2, operator) { switch(operator) { case '+': return operand1 + operand2; case '-': return operand1 - operand2; case '*': return operand1 * operand2; case '/': return operand1 / operand2; default: return; } } function InfixToPostfix(expression) { var tokens = expression.split(/([0-9]+|[*+-\/()])/); var outputQueue = []; var operatorStack = []; while (tokens.length != 0) { var currentToken = tokens.shift(); if (isNumber(currentToken)) { outputQueue.push(currentToken); } else if (isOperator(currentToken)) { while ((getAssociativity(currentToken) == 'left' && getPrecedence(currentToken) <= getPrecedence(operatorStack[operatorStack.length-1])) || (getAssociativity(currentToken) == 'right' && getPrecedence(currentToken) < getPrecedence(operatorStack[operatorStack.length-1]))) { outputQueue.push(operatorStack.pop()) } operatorStack.push(currentToken); } else if (currentToken == '(') { operatorStack.push(currentToken); } else if (currentToken == ')') { while (operatorStack[operatorStack.length-1] != '(') { if (operatorStack.length == 0) throw("Parenthesis balancing error! Shame on you!"); outputQueue.push(operatorStack.pop()); } operatorStack.pop(); } } while (operatorStack.length != 0) { if (!operatorStack[operatorStack.length-1].match(/([()])/)) outputQueue.push(operatorStack.pop()); else throw("Parenthesis balancing error! Shame on you!"); } return outputQueue.join(" "); } function isOperator(token) { if (!token.match(/([*+-\/])/)) return false; else return true; } function isNumber(token) { if (!token.match(/([0-9]+)/)) return false; else return true; } function getPrecedence(token) { switch (token) { case '^': return 9; case '*': case '/': case '%': return 8; case '+': case '-': return 6; default: return -1; } } function getAssociativity(token) { switch(token) { case '+': case '-': case '*': case '/': return 'left'; case '^': return 'right'; } } It works fine so far. If I give it: ((5+3) * 8) It will output: Infix: ((5+3) * 8) Postfix (RPN): 5 3 + 8 * Result: 64 However, I'm struggling with implementing the unary operators so I could do something like: ((-5+3) * 8) What would be the best way to implement unary operators (negation, etc)? Also, does anyone have any suggestions for handling floating point numbers as well? One last thing, if anyone sees me doing anything weird in JavaScript let me know. This is my first JavaScript program and I'm not used to it yet.
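
    The common fix is to detect a minus that appears at the start of the expression, right after another operator, or right after "(" and treat it as a separate unary operator (often retokenized as "u-") with precedence above "^" and right associativity; during evaluation it pops a single operand and pushes its negation. The detection itself is tiny - a sketch in Java, which transliterates directly to the JavaScript above:

      import java.util.Arrays;
      import java.util.List;

      public class UnaryMinusDetection {
          // A '-' is unary when it is the first token, or follows another
          // operator or an opening parenthesis.
          static boolean isUnaryMinus(List<String> tokens, int i) {
              if (!tokens.get(i).equals("-")) return false;
              if (i == 0) return true;
              String prev = tokens.get(i - 1);
              return prev.equals("(") || "+-*/^".contains(prev);
          }

          public static void main(String[] args) {
              List<String> tokens = Arrays.asList("(", "(", "-", "5", "+", "3", ")", "*", "8", ")");
              for (int i = 0; i < tokens.size(); i++) {
                  if (isUnaryMinus(tokens, i)) {
                      System.out.println("unary minus at index " + i);  // index 2
                  }
              }
          }
      }

    For floating-point support, widening the number pattern to something like [0-9]*\.?[0-9]+ and evaluating with parseFloat instead of parseInt is usually enough.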

  • MVC: returning multiple results on stream connection to implement HTML5 SSE

    - by eddo
    I am trying to set up a lightweight HTML5 Server-Sent Event implementation on my MVC 4 Web, without using one of the libraries available to implement sockets and similars. The lightweight approach I am trying is: Client side: EventSource (or jquery.eventsource for IE) Server side: long polling with AsynchController (sorry for dropping here the raw test code but just to give an idea) public class HTML5testAsyncController : AsyncController { private static int curIdx = 0; private static BlockingCollection<string> _data = new BlockingCollection<string>(); static HTML5testAsyncController() { addItems(10); } //adds some test messages static void addItems(int howMany) { _data.Add("started"); for (int i = 0; i < howMany; i++) { _data.Add("HTML5 item" + (curIdx++).ToString()); } _data.Add("ended"); } // here comes the async action, 'Simple' public void SimpleAsync() { AsyncManager.OutstandingOperations.Increment(); Task.Factory.StartNew(() => { var result = string.Empty; var sb = new StringBuilder(); string serializedObject = null; //wait up to 40 secs that a message arrives if (_data.TryTake(out result, TimeSpan.FromMilliseconds(40000))) { JavaScriptSerializer ser = new JavaScriptSerializer(); serializedObject = ser.Serialize(new { item = result, message = "MSG content" }); sb.AppendFormat("data: {0}\n\n", serializedObject); } AsyncManager.Parameters["serializedObject"] = serializedObject; AsyncManager.OutstandingOperations.Decrement(); }); } // callback which returns the results on the stream public ActionResult SimpleCompleted(string serializedObject) { ServerSentEventResult sar = new ServerSentEventResult(); sar.Content = () => { return serializedObject; }; return sar; } //pushes the data on the stream in a format conforming HTML5 SSE public class ServerSentEventResult : ActionResult { public ServerSentEventResult() { } public delegate string GetContent(); public GetContent Content { get; set; } public int Version { get; set; } public override void ExecuteResult(ControllerContext context) { if (context == null) { throw new ArgumentNullException("context"); } if (this.Content != null) { HttpResponseBase response = context.HttpContext.Response; // this is the content type required by chrome 6 for server sent events response.ContentType = "text/event-stream"; response.BufferOutput = false; // this is important because chrome fails with a "failed to load resource" error if the server attempts to put the char set after the content type response.Charset = null; string[] newStrings = context.HttpContext.Request.Headers.GetValues("Last-Event-ID"); if (newStrings == null || newStrings[0] != this.Version.ToString()) { string value = this.Content(); response.Write(string.Format("data:{0}\n\n", value)); //response.Write(string.Format("id:{0}\n", this.Version)); } else { response.Write(""); } } } } } The problem is on the server side as there is still a big gap between the expected result and what's actually going on. 
Expected result: EventSource opens a stream connection to the server, the server keeps it open for a safe time (say, 2 minutes) so that I am protected from thread leaking from dead clients, as new message events are received by the server (and enqueued to a thread safe collection such as BlockingCollection) they are pushed in the open stream to the client: message 1 received at T+0ms, pushed to the client at T+x message 2 received at T+200ms, pushed to the client at T+x+200ms Actual behaviour: EventSource opens a stream connection to the server, the server keeps it open until a message event arrives (thanks to long polling) once a message is received, MVC pushes the message and closes the connection. EventSource has to reopen the connection and this happens after a couple of seconds. message 1 received at T+0ms, pushed to the client at T+x message 2 received at T+200ms, pushed to the client at T+x+3200ms This is not OK as it defeats the purpose of using SSE as the clients start again reconnecting as in normal polling and message delivery gets delayed. Now, the question: is there a native way to keep the connection open after sending the first message and sending further messages on the same connection?
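
    For comparison, the bare SSE pattern outside ASP.NET MVC is simply: set text/event-stream, keep the one response open, write a "data:" block per event and flush after each, and only return when the client should reconnect. A hedged sketch as a plain Java servlet (synchronous and javax.servlet-based for brevity; a production endpoint would use async processing so it does not pin a request thread for the whole connection):

      import java.io.IOException;
      import java.io.PrintWriter;
      import javax.servlet.annotation.WebServlet;
      import javax.servlet.http.HttpServlet;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;

      @WebServlet("/events")
      public class SseServlet extends HttpServlet {
          @Override
          protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
              resp.setContentType("text/event-stream");
              resp.setCharacterEncoding("UTF-8");
              PrintWriter out = resp.getWriter();
              for (int i = 0; i < 10; i++) {
                  out.print("data: message " + i + "\n\n");  // blank line ends one SSE event
                  out.flush();                               // push it to the client immediately
                  try {
                      Thread.sleep(1000);                    // stand-in for waiting on a message queue
                  } catch (InterruptedException e) {
                      Thread.currentThread().interrupt();
                      return;
                  }
              }
              // Returning ends the response; only then does EventSource reconnect.
          }
      }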

  • What is a good way to keep different data in cookies in ASP.NET?

    - by Dmitriy
    Hello! I want to keep some different data in one cookie file and write this class, and want to know - is this good? For example - user JS enable.When user open his first page on my site, i write to session his GMT time and write with this manager JS state. (GMT time is ajax request with js). And i want to keep some data in this cookie (up to 10 values). Have any advices or tips? /// <summary> /// CookiesSettings /// </summary> internal enum CookieSetting { IsJsEnable = 1, } internal class CookieSettingValue { public CookieSetting Type { get; set; } public string Value { get; set; } } /// <summary> /// Cookies to long time of expire /// </summary> internal class CookieManager { //User Public Settings private const string CookieValueName = "UPSettings"; private string[] DelimeterValue = new string[1] { "#" }; //cookie daat private List<CookieSettingValue> _data; public CookieManager() { _data = LoadFromCookies(); } #region Save and load /// <summary> /// Load from cookie string value /// </summary> private List<CookieSettingValue> LoadFromCookies() { if (!CookieHelper.RequestCookies.Contains(CookieValueName)) return new List<CookieSettingValue>(); _data = new List<CookieSettingValue>(); string data = CookieHelper.RequestCookies[CookieValueName].ToString(); string[] dels = data.Split(DelimeterValue, StringSplitOptions.RemoveEmptyEntries); foreach (string delValue in dels) { int eqIndex = delValue.IndexOf("="); if (eqIndex == -1) continue; int cookieType = ValidationHelper.GetInteger(delValue.Substring(0, eqIndex), 0); if (!Enum.IsDefined(typeof(CookieSetting), cookieType)) continue; CookieSettingValue value = new CookieSettingValue(); value.Type = (CookieSetting)cookieType; value.Value = delValue.Substring(eqIndex + 1, delValue.Length - eqIndex-1); _data.Add(value); } return _data; } public void Save() { CookieHelper.SetValue(CookieValueName, ToCookie(), DateTime.UtcNow.AddMonths(6)); } #endregion #region Get value public bool Bool(CookieSetting type, bool defaultValue) { CookieSettingValue inList = _data.SingleOrDefault(x => x.Type == type); if (inList == null) return defaultValue; return ValidationHelper.GetBoolean(inList.Value, defaultValue); } #endregion #region Set value public void SetValue(CookieSetting type, int value) { CookieSettingValue inList = _data.SingleOrDefault(x => x.Type == type); if (inList == null) { inList = new CookieSettingValue(); inList.Type = type; inList.Value = value.ToString(); _data.Add(inList); } else { inList.Value = value.ToString(); } } public void SetValue(CookieSetting type, bool value) { CookieSettingValue inList = _data.SingleOrDefault(x => x.Type == type); if (inList == null) { inList = new CookieSettingValue(); inList.Type = type; inList.Value = value.ToString(); _data.Add(inList); } else { inList.Value = value.ToString(); } } #endregion #region Private methods private string ToCookie() { StringBuilder sb = new StringBuilder(); for (int i = 0; i < _data.Count; i++) { sb.Append((int)_data[i].Type); sb.Append("="); sb.Append(_data[i].Value); sb.Append(DelimeterValue[0]); } return sb.ToString(); } /// <summary> /// Cookie length in bytes. Max - 4 bytes /// </summary> /// <returns></returns> private int GetLength() { return System.Text.Encoding.UTF8.GetByteCount(ToCookie()); } #endregion } P.S. i want to keep many data in one cookies file to compress data and decrease cookies count.

  • Servlet response wrapper has encoding problem

    - by John O
    A servlet response wrapper is being used in a Servlet Filter. The idea is that the response is manipulated, with a 'nonce' value being injected into forms, as part of defence against CSRF attacks. The web app is using UTF-8 everywhere. When the Servlet Filter is absent, no problems. When the filter is added, encoding issues occur. (It seems as if the response is reverting to 8859-1.) The guts of the code : final class CsrfResponseWrapper extends AbstractResponseWrapper { ... byte[] modifyResponse(byte[] aInputResponse){ ... String originalInput = new String(aInputResponse, encoding); String modifiedResult = addHiddenParamToPostedForms(originalInput); result = modifiedResult.getBytes(encoding); ... } ... } As I understand it, the transition between byte-land and String-land should specify an encoding. That is done here, as you can see, in two places. The value of the 'encoding' variable is 'UTF-8'; the alteration of the String itself is standard string manipulation (with a regex), and never specifies an encoding (addHiddenParamToPostedForms). Where am I in error about the encoding? EDIT: Here is the base class (sorry it's rather long): package hirondelle.web4j.security; import javax.servlet.ServletOutputStream; import javax.servlet.ServletResponse; import javax.servlet.http.HttpServletResponse; import javax.servlet.http.HttpServletResponseWrapper; import java.io.ByteArrayOutputStream; import java.io.IOException; import java.io.PrintWriter; /** Abstract Base Class for altering response content. (May be useful in future contexts as well. For now, keep package-private.) */ abstract class AbstractResponseWrapper extends HttpServletResponseWrapper { AbstractResponseWrapper(ServletResponse aServletResponse) throws IOException { super((HttpServletResponse)aServletResponse); fOutputStream = new ModifiedOutputStream(aServletResponse.getOutputStream()); fWriter = new PrintWriter(fOutputStream); } /** Return the modified response. */ abstract byte[] modifyResponse(byte[] aInputResponse); /** Standard servlet method. */ public final ServletOutputStream getOutputStream() { //fLogger.fine("Modified Response : Getting output stream."); if ( fWriterReturned ) { throw new IllegalStateException(); } fOutputStreamReturned = true; return fOutputStream; } /** Standard servlet method. */ public final PrintWriter getWriter() { //fLogger.fine("Modified Response : Getting writer."); if ( fOutputStreamReturned ) { throw new IllegalStateException(); } fWriterReturned = true; return fWriter; } // PRIVATE /* Well-behaved servlets return either an OutputStream or a PrintWriter, but not both. */ private PrintWriter fWriter; private ModifiedOutputStream fOutputStream; /* These items are used to implement conformance to the javadoc for ServletResponse, regarding exceptions being thrown. */ private boolean fWriterReturned; private boolean fOutputStreamReturned; /** Modified low level output stream. */ private class ModifiedOutputStream extends ServletOutputStream { public ModifiedOutputStream(ServletOutputStream aOutputStream) { fServletOutputStream = aOutputStream; fBuffer = new ByteArrayOutputStream(); } /** Must be implemented to make this class concrete. 
*/ public void write(int aByte) { fBuffer.write(aByte); } public void close() throws IOException { if ( !fIsClosed ){ processStream(); fServletOutputStream.close(); fIsClosed = true; } } public void flush() throws IOException { if ( fBuffer.size() != 0 ){ if ( !fIsClosed ) { processStream(); fBuffer = new ByteArrayOutputStream(); } } } /** Perform the core processing, by calling the abstract method. */ public void processStream() throws IOException { fServletOutputStream.write(modifyResponse(fBuffer.toByteArray())); fServletOutputStream.flush(); } // PRIVATE // private ServletOutputStream fServletOutputStream; private ByteArrayOutputStream fBuffer; /** Tracks if this stream has been closed. */ private boolean fIsClosed = false; } }
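
    One encoding detail worth checking in the base class above: fWriter = new PrintWriter(fOutputStream) encodes characters with the JVM's default charset, not the response's charset, so anything the application writes through getWriter() can silently fall back to the platform encoding (often ISO-8859-1) even though the filter later decodes the buffered bytes as UTF-8. A sketch of building the writer against the response's declared encoding instead, ideally called lazily from getWriter() after the application has set its content type (the UTF-8 fallback is an assumption based on "the web app is using UTF-8 everywhere"):

      import java.io.OutputStream;
      import java.io.OutputStreamWriter;
      import java.io.PrintWriter;
      import java.io.UnsupportedEncodingException;
      import javax.servlet.ServletResponse;

      final class ResponseWriters {
          private ResponseWriters() {}

          // Build the wrapper's PrintWriter with the response's character encoding
          // rather than the platform default.
          static PrintWriter forResponse(ServletResponse response, OutputStream target)
                  throws UnsupportedEncodingException {
              String encoding = response.getCharacterEncoding();
              if (encoding == null) {
                  encoding = "UTF-8";
              }
              return new PrintWriter(new OutputStreamWriter(target, encoding), true);
          }
      }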

  • Non-blocking client-server chat application in Java using NIO

    - by Amith
    I built a simple chat application using nio channels. I am very much new to networking as well as threads. This application is for communicating with server (Server / Client chat application). My problem is that multiple clients are not supported by the server. How do I solve this problem? What's the bug in my code? public class Clientcore extends Thread { SelectionKey selkey=null; Selector sckt_manager=null; public void coreClient() { System.out.println("please enter the text"); BufferedReader stdin=new BufferedReader(new InputStreamReader(System.in)); SocketChannel sc = null; try { sc = SocketChannel.open(); sc.configureBlocking(false); sc.connect(new InetSocketAddress(8888)); int i=0; while (!sc.finishConnect()) { } for(int ii=0;ii>-22;ii++) { System.out.println("Enter the text"); String HELLO_REQUEST =stdin.readLine().toString(); if(HELLO_REQUEST.equalsIgnoreCase("end")) { break; } System.out.println("Sending a request to HelloServer"); ByteBuffer buffer = ByteBuffer.wrap(HELLO_REQUEST.getBytes()); sc.write(buffer); } } catch (IOException e) { e.printStackTrace(); } finally { if (sc != null) { try { sc.close(); } catch (Exception e) { e.printStackTrace(); } } } } public void run() { try { coreClient(); } catch(Exception ej) { ej.printStackTrace(); }}} public class ServerCore extends Thread { SelectionKey selkey=null; Selector sckt_manager=null; public void run() { try { coreServer(); } catch(Exception ej) { ej.printStackTrace(); } } private void coreServer() { try { ServerSocketChannel ssc = ServerSocketChannel.open(); try { ssc.socket().bind(new InetSocketAddress(8888)); while (true) { sckt_manager=SelectorProvider.provider().openSelector(); ssc.configureBlocking(false); SocketChannel sc = ssc.accept(); register_server(ssc,SelectionKey.OP_ACCEPT); if (sc == null) { } else { System.out.println("Received an incoming connection from " + sc.socket().getRemoteSocketAddress()); printRequest(sc); System.err.println("testing 1"); String HELLO_REPLY = "Sample Display"; ByteBuffer buffer = ByteBuffer.wrap(HELLO_REPLY.getBytes()); System.err.println("testing 2"); sc.write(buffer); System.err.println("testing 3"); sc.close(); }}} catch (IOException e) { e.printStackTrace(); } finally { if (ssc != null) { try { ssc.close(); } catch (IOException e) { e.printStackTrace(); } } } } catch(Exception E) { System.out.println("Ex in servCORE "+E); } } private static void printRequest(SocketChannel sc) throws IOException { ReadableByteChannel rbc = Channels.newChannel(sc.socket().getInputStream()); WritableByteChannel wbc = Channels.newChannel(System.out); ByteBuffer b = ByteBuffer.allocate(1024); // read 1024 bytes while (rbc.read(b) != -1) { b.flip(); while (b.hasRemaining()) { wbc.write(b); System.out.println(); } b.clear(); } } public void register_server(ServerSocketChannel ssc,int selectionkey_ops)throws Exception { ssc.register(sckt_manager,selectionkey_ops); }} public class HelloClient { public void coreClientChat() { Clientcore t=new Clientcore(); new Thread(t).start(); } public static void main(String[] args)throws Exception { HelloClient cl= new HelloClient(); cl.coreClientChat(); }} public class HelloServer { public void coreServerChat() { ServerCore t=new ServerCore(); new Thread(t).start(); } public static void main(String[] args) { HelloServer st= new HelloServer(); st.coreServerChat(); }}
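
    The usual shape of a non-blocking server is: one Selector created once, the ServerSocketChannel registered for OP_ACCEPT before the loop, and a single select() loop that accepts new clients and reads from already-connected ones without closing a channel after one reply - whereas coreServer() above opens a fresh Selector on every pass, registers after calling accept(), and closes each SocketChannel immediately. A compact echo-style sketch of that loop (error handling and per-client write queues trimmed):

      import java.io.IOException;
      import java.net.InetSocketAddress;
      import java.nio.ByteBuffer;
      import java.nio.channels.SelectionKey;
      import java.nio.channels.Selector;
      import java.nio.channels.ServerSocketChannel;
      import java.nio.channels.SocketChannel;
      import java.util.Iterator;

      public class SelectorChatServer {
          public static void main(String[] args) throws IOException {
              Selector selector = Selector.open();
              ServerSocketChannel server = ServerSocketChannel.open();
              server.socket().bind(new InetSocketAddress(8888));
              server.configureBlocking(false);
              server.register(selector, SelectionKey.OP_ACCEPT);

              ByteBuffer buffer = ByteBuffer.allocate(1024);
              while (true) {
                  selector.select();                               // block until something is ready
                  Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                  while (keys.hasNext()) {
                      SelectionKey key = keys.next();
                      keys.remove();
                      if (key.isAcceptable()) {
                          SocketChannel client = server.accept();  // ready, so non-null
                          client.configureBlocking(false);
                          client.register(selector, SelectionKey.OP_READ);
                      } else if (key.isReadable()) {
                          SocketChannel client = (SocketChannel) key.channel();
                          buffer.clear();
                          int read = client.read(buffer);
                          if (read == -1) {                        // client disconnected
                              key.cancel();
                              client.close();
                          } else if (read > 0) {
                              buffer.flip();
                              client.write(buffer);                // echo back; channel stays open
                          }
                      }
                  }
              }
          }
      }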

    Read the article

  • How to create a text file in a Windows service

    - by angel ansari
    Hi, I have an XML file <config> <ServiceName>autorunquery</ServiceName> <DBConnection> <server>servername</server> <user>xyz</user> <password>klM#2bs</password> <initialcatelog>TEST</initialcatelog> </DBConnection> <Log> <logfilename>d:\testlogfile.txt</logfilename> </Log> <Frequency> <value>10</value> <unit>minute</unit> </Frequency> <CheckQuery>select * from credit_debit1 where station='Corporate'</CheckQuery> <Queries total="3"> <Query id="1">Update credit_debit1 set station='xxx' where id=2</Query> <Query id="2">Update credit_debit1 set station='xxx' where id=4</Query> <Query id="3">Update credit_debit1 set station='xxx' where id=9</Query> </Queries> </config> using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Diagnostics; using System.Linq; using System.ServiceProcess; using System.Text; using System.IO; using System.Xml; namespace Service1 { public partial class Service1 : ServiceBase { XmlTextReader reader = null; string path = null; FileStream fs = null; StreamWriter sw = null; public Service1() { InitializeComponent(); } protected override void OnStart(string[] args) { timer1.Enabled = true; timer1.Interval = 10000; timer1.Start(); logfile("start service"); } protected override void OnStop() { timer1.Enabled = false; timer1.Stop(); logfile("stop service"); } private void logfile(string content) { try { reader = new XmlTextReader("queryconfig.xml");//xml file name which is in current directory if (reader.ReadToFollowing("logfilename")) { path = reader.ReadElementContentAsString(); } fs = new FileStream(path, FileMode.Append, FileAccess.Write); sw = new StreamWriter(fs); sw.Write(content); sw.WriteLine(DateTime.Now.ToString()); } catch (Exception ex) { sw.Write(ex.ToString()); throw; } finally { if (reader != null) reader.Close(); if (sw != null) sw.Close(); if (fs != null) fs.Close(); } } } } My problem is that the file is not created.
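    A common reason for this symptom is that a Windows service runs with %WinDir%\System32 as its current directory, so the relative path "queryconfig.xml" is not looked up next to the service executable and the XmlTextReader throws before the log file is ever opened (note that the catch block then dereferences sw, which is still null at that point). A minimal sketch of the path handling, assuming the XML file is deployed alongside the service binary (the helper class name is illustrative):

    using System;
    using System.IO;
    using System.Xml;

    public static class LogHelper
    {
        public static void WriteLog(string content)
        {
            // Resolve files against the service's install folder, not the current directory.
            string baseDir = AppDomain.CurrentDomain.BaseDirectory;
            string configPath = Path.Combine(baseDir, "queryconfig.xml");

            string logPath = null;
            using (XmlReader reader = XmlReader.Create(configPath))
            {
                if (reader.ReadToFollowing("logfilename"))
                    logPath = reader.ReadElementContentAsString();
            }

            if (string.IsNullOrEmpty(logPath))
                return; // nowhere to write; a real service might fall back to the Event Log here

            // StreamWriter with append = true creates the file if it does not exist yet.
            using (StreamWriter sw = new StreamWriter(logPath, true))
            {
                sw.Write(content);
                sw.WriteLine(DateTime.Now.ToString());
            }
        }
    }

    Running the same logging code from a console harness first (or wrapping OnStart in a try/catch that writes to the Event Log) makes this kind of path problem much easier to spot than debugging the installed service directly.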

    Read the article

  • Get Image Source URLs from a Different Page Using JS

    - by SDD
    Everyone: I'm trying to grab the source URLs of images from one page and use them in some JavaScript in another page. I know how to pull in images using JQuery .load(). However, rather than load all the images and display them on the page, I want to just grab the source URLs so I can use them in a JS array. Page 1 is just a page with images: <html> <head> </head> <body> <img id="image0" src="image0.jpg" /> <img id="image1" src="image1.jpg" /> <img id="image2" src="image2.jpg" /> <img id="image3" src="image3.jpg" /> </body> </html> Page 2 contains my JS. (Please note that the end goal is to load images into an array, randomize them, and using cookies, show a new image on page load every 10 seconds. All this is working. However, rather than hard code the image paths into my javascript as shown below, I'd prefer to take the paths from Page 1 based on their IDs. This way, the images won't always need to be titled "image1.jpg," etc.) <script type = "text/javascript"> var days = 730; var rotator = new Object(); var currentTime = new Date(); var currentMilli = currentTime.getTime(); var images = [], index = 0; images[0] = "image0.jpg"; images[1] = "image1.jpg"; images[2] = "image2.jpg"; images[3] = "image3.jpg"; rotator.getCookie = function(Name) { var re = new RegExp(Name+"=[^;]+", "i"); if (document.cookie.match(re)) return document.cookie.match(re)[0].split("=")[1]; return''; } rotator.setCookie = function(name, value, days) { var expireDate = new Date(); var expstring = expireDate.setDate(expireDate.getDate()+parseInt(days)); document.cookie = name+"="+value+"; expires="+expireDate.toGMTString()+"; path=/"; } rotator.randomize = function() { index = Math.floor(Math.random() * images.length); randomImageSrc = images[index]; } rotator.check = function() { if (rotator.getCookie("randomImage") == "") { rotator.randomize(); document.write("<img src=" + randomImageSrc + ">"); rotator.setCookie("randomImage", randomImageSrc, days); rotator.setCookie("timeClock", currentMilli, days); } else { var writtenTime = parseInt(rotator.getCookie("timeClock"),10); if ( currentMilli > writtenTime + 10000 ) { rotator.randomize(); var writtenImage = rotator.getCookie("randomImage") while ( randomImageSrc == writtenImage ) { rotator.randomize(); } document.write("<img src=" + randomImageSrc + ">"); rotator.setCookie("randomImage", randomImageSrc, days); rotator.setCookie("timeClock", currentMilli, days); } else { var writtenImage = rotator.getCookie("randomImage") document.write("<img src=" + writtenImage + ">"); } } } rotator.check() </script> Can anyone point me in the right direction? My hunch is to use JQuery .get(), but I've been unsuccessful so far. Please let me know if I can clarify!
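    Since both pages live on the same site, one approach is to fetch Page 1 with $.get(), parse the returned markup without inserting it into the document, and fill the images array from the src attributes before starting the rotator. A minimal sketch, assuming jQuery is loaded and Page 1 is served from the same origin (the file name page1.html is illustrative):

    var images = [];

    $.get('page1.html', function (html) {
        // Append the markup to a detached <div> so jQuery can query it
        // without it ever being rendered on Page 2.
        $('<div>').append(html).find('img').each(function () {
            images.push($(this).attr('src'));
        });

        // $.get is asynchronous, so only start the rotator once the array is filled.
        rotator.check();
    });

    Because the array is built from whatever <img> tags Page 1 happens to contain, renaming or adding images there no longer requires touching the rotator script.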

    Read the article

  • How can I remove this query from within a loop?

    - by Chris
    I am currently designing a forum as a personal project. One of the recurring issues I've come across is database queries in loops. I've managed to avoid doing that so far by using table joins or caching of data in arrays for later use. Right now though I've come across a situation where I'm not sure how I can write the code in such a way that I can use either of those methods easily. However I'd still prefer to do at most 2 queries for this operation rather than 1 + 1 per group of forums, which so far has resulted in 5 per page. So while 5 isn't a huge number (though it will increase for each forum group I add) it's the principle that's important to me here, I do NOT want to write queries in loops What I'm doing is displaying forum index groupings (eg admin forums, user forums etc) and then each forum within that group on a single page index, it's the combination of both in one page that's causing me issue. If it had just been a single group per page, I'd use a table join and problem solved. But if I use a table join here, although I can potentially get all the data I need it'll be in one mass of results and it needs displaying properly. Here's the code (I've removed some of the html for clarity) <?php $sql= "select * from forum_groups"; //query 1 $result1 = $database->query($sql); while($group = mysql_fetch_assoc($result1)) //first loop {?> <table class="threads"> <tr> <td class="forumgroupheader"> <?php echo $group['group_name']; ?> </td> </tr> <tr> <td class="forumgroupheader2"> <?php echo $group['group_desc']; ?> </td> </tr> </table> <table> <tr> <th class="thforum"> Forum Name</th> <th class="thforum"> Forum Decsription</th> <th class="thforum"> Last Post </th> <tr> <?php $group_id = $group['id']; $sql = "SELECT forums.id, forums.forum_group_id, forums.forum_name, forums.forum_desc, forums.visible_rank, forums.locked, forums.lock_rank, forums.topics, forums.posts, forums.last_post, forums.last_post_id, users.username FROM forums LEFT JOIN users on forums.last_post_id=users.id WHERE forum_group_id='{$group_id}'"; //query 2 $result2 = $database->query($sql); while($forum = mysql_fetch_assoc($result2)) //second loop {?> So how can I either a) write the SQL in such a way as to remove the second query from inside the loop or b) combine the results in an array either way I need to be able to access the data as an when so I can format it properly for the page output, ie within the loops still.
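    One way to keep the page to a single query is to join forum_groups to forums (and users for the last poster) once, ordered by group, and then bucket the rows into a PHP array keyed by group id; the display loops then read only from that array. A minimal sketch, assuming the table and column names shown above:

    <?php
    $sql = "SELECT g.id AS group_id, g.group_name, g.group_desc,
                   f.id AS forum_id, f.forum_name, f.forum_desc,
                   f.topics, f.posts, f.last_post, f.last_post_id,
                   u.username
            FROM forum_groups g
            LEFT JOIN forums f ON f.forum_group_id = g.id
            LEFT JOIN users  u ON f.last_post_id = u.id
            ORDER BY g.id, f.id";

    $result = $database->query($sql);

    $groups = array();
    while ($row = mysql_fetch_assoc($result)) {
        $gid = $row['group_id'];
        if (!isset($groups[$gid])) {
            $groups[$gid] = array(
                'name'   => $row['group_name'],
                'desc'   => $row['group_desc'],
                'forums' => array(),
            );
        }
        if ($row['forum_id'] !== null) {   // a group with no forums still gets a heading
            $groups[$gid]['forums'][] = $row;
        }
    }

    // Output: one loop over groups, one nested loop over that group's forums - no queries here.
    foreach ($groups as $group) {
        // print the group header from $group['name'] / $group['desc'] ...
        foreach ($group['forums'] as $forum) {
            // ... then each forum row from $forum['forum_name'], $forum['username'], etc.
        }
    }
    ?>

    The trade-off is that the group columns are repeated on every row of the result set, which is usually negligible next to the cost of issuing one query per group.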

    Read the article

  • FreeBSD performance tuning: sysctls, loader.conf, kernel

    - by SaveTheRbtz
    I wanted to share knowledge of tuning FreeBSD via sysctl.conf/loader.conf/KENCONF. It was initially based on Igor Sysoev's (author of nginx) presentation about FreeBSD tuning up to 100,000-200,000 active connections. Tunings are for FreeBSD-CURRENT. Since 7.2 amd64 some of them are tuned well by default. Prior 7.0 some of them are boot only (set via /boot/loader.conf) or does not exist at all. sysctl.conf: # No zero mapping feature # May break wine # (There are also reports about broken samba3) #security.bsd.map_at_zero=0 # If you have really busy webserver with apache13 you may run out of processes #kern.maxproc=10000 # Same for servers with apache2 / Pound #kern.threads.max_threads_per_proc=4096 # Max. backlog size kern.ipc.somaxconn=4096 # Shared memory // 7.2+ can use shared memory > 2Gb kern.ipc.shmmax=2147483648 # Sockets kern.ipc.maxsockets=204800 # Can cause this on older kernels: # http://old.nabble.com/Significant-performance-regression-for-increased-maxsockbuf-on-8.0-RELEASE-tt26745981.html#a26745981 ) kern.ipc.maxsockbuf=10485760 # Mbuf 2k clusters (on amd64 7.2+ 25600 is default) # For such high value vm.kmem_size must be increased to 3G kern.ipc.nmbclusters=262144 # Jumbo pagesize(_SC_PAGESIZE) clusters # Used as general packet storage for jumbo frames # can be monitored via `netstat -m` #kern.ipc.nmbjumbop=262144 # Jumbo 9k/16k clusters # If you are using them #kern.ipc.nmbjumbo9=65536 #kern.ipc.nmbjumbo16=32768 # For lower latency you can decrease scheduler's maximum time slice # default: stathz/10 (~ 13) #kern.sched.slice=1 # Increase max command-line length showed in `ps` (e.g for Tomcat/Java) # Default is PAGE_SIZE / 16 or 256 on x86 # This avoids commands to be presented as [executable] in `ps` # For more info see: http://www.freebsd.org/cgi/query-pr.cgi?pr=120749 kern.ps_arg_cache_limit=4096 # Every socket is a file, so increase them kern.maxfiles=204800 kern.maxfilesperproc=200000 kern.maxvnodes=200000 # On some systems HPET is almost 2 times faster than default ACPI-fast # Useful on systems with lots of clock_gettime / gettimeofday calls # See http://old.nabble.com/ACPI-fast-default-timecounter,-but-HPET-83--faster-td23248172.html # After revision 222222 HPET became default: http://svnweb.freebsd.org/base?view=revision&revision=222222 kern.timecounter.hardware=HPET # Small receive space, only usable on http-server, on file server this # should be increased to 65535 or even more #net.inet.tcp.recvspace=8192 # This is useful on Fat-Long-Pipes #net.inet.tcp.recvbuf_max=10485760 #net.inet.tcp.recvbuf_inc=65535 # Small send space is useful for http servers that serve small files # Autotuned since 7.x net.inet.tcp.sendspace=16384 # This is useful on Fat-Long-Pipes #net.inet.tcp.sendbuf_max=10485760 #net.inet.tcp.sendbuf_inc=65535 # Turn off receive autotuning # You can play with it. #net.inet.tcp.recvbuf_auto=0 #net.inet.tcp.sendbuf_auto=0 # This should be enabled if you going to use big spaces (>64k) # Also timestamp field is useful when using syncookies net.inet.tcp.rfc1323=1 # Turn this off on high-speed, lossless connections (LAN 1Gbit+) # If you set it there is no need in TCP_NODELAY sockopt (see man tcp) net.inet.tcp.delayed_ack=0 # This feature is useful if you are serving data over modems, Gigabit Ethernet, # or even high speed WAN links (or any other link with a high bandwidth delay product), # especially if you are also using window scaling or have configured a large send window. 
# Automatically disables on small RTT ( http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/netinet/tcp_subr.c?#rev1.237 ) # This sysctl was removed in 10-CURRENT: # See: http://www.mail-archive.com/[email protected]/msg06178.html #net.inet.tcp.inflight.enable=0 # TCP slowstart algorithm tunings # We assuming we have very fast clients #net.inet.tcp.slowstart_flightsize=100 #net.inet.tcp.local_slowstart_flightsize=100 # Disable randomizing of ports to avoid false RST # Before usage check SA here www.bsdcan.org/2006/papers/ImprovingTCPIP.pdf # (it's also says that port randomization auto-disables at some conn.rates, but I didn't checked it thou) #net.inet.ip.portrange.randomized=0 # Increase portrange # For outgoing connections only. Good for seed-boxes and ftp servers. net.inet.ip.portrange.first=1024 net.inet.ip.portrange.last=65535 # # stops route cache degregation during a high-bandwidth flood # http://www.freebsd.org/doc/en/books/handbook/securing-freebsd.html #net.inet.ip.rtexpire=2 net.inet.ip.rtminexpire=2 net.inet.ip.rtmaxcache=1024 # Security net.inet.ip.redirect=0 net.inet.ip.sourceroute=0 net.inet.ip.accept_sourceroute=0 net.inet.icmp.maskrepl=0 net.inet.icmp.log_redirect=0 net.inet.icmp.drop_redirect=1 net.inet.tcp.drop_synfin=1 # # There is also good example of sysctl.conf with comments: # http://www.thern.org/projects/sysctl.conf # # icmp may NOT rst, helpful for those pesky spoofed # icmp/udp floods that end up taking up your outgoing # bandwidth/ifqueue due to all that outgoing RST traffic. # #net.inet.tcp.icmp_may_rst=0 # Security net.inet.udp.blackhole=1 net.inet.tcp.blackhole=2 # IPv6 Security # For more info see http://www.fosslc.org/drupal/content/security-implications-ipv6 # Disable Node info replies # To see this vulnerability in action run `ping6 -a sglAac ::1` or `ping6 -w ::1` on unprotected node net.inet6.icmp6.nodeinfo=0 # Turn on IPv6 privacy extensions # For more info see proposal http://unix.derkeiler.com/Mailing-Lists/FreeBSD/net/2008-06/msg00103.html net.inet6.ip6.use_tempaddr=1 net.inet6.ip6.prefer_tempaddr=1 # Disable ICMP redirect net.inet6.icmp6.rediraccept=0 # Disable acceptation of RA and auto linklocal generation if you don't use them #net.inet6.ip6.accept_rtadv=0 #net.inet6.ip6.auto_linklocal=0 # Increases default TTL, sometimes useful # Default is 64 net.inet.ip.ttl=128 # Lessen max segment life to conserve resources # ACK waiting time in miliseconds # (default: 30000. RFC from 1979 recommends 120000) net.inet.tcp.msl=5000 # Max bumber of timewait sockets net.inet.tcp.maxtcptw=200000 # Don't use tw on local connections # As of 15 Apr 2009. Igor Sysoev says that nolocaltimewait has some buggy realization. # So disable it or now till get fixed #net.inet.tcp.nolocaltimewait=1 # FIN_WAIT_2 state fast recycle net.inet.tcp.fast_finwait2_recycle=1 # Time before tcp keepalive probe is sent # default is 2 hours (7200000) #net.inet.tcp.keepidle=60000 # Should be increased until net.inet.ip.intr_queue_drops is zero net.inet.ip.intr_queue_maxlen=4096 # Interrupt handling via multiple CPU, but with context switch. # You can play with it. 
Default is 1; #net.isr.direct=0 # This is for routers only #net.inet.ip.forwarding=1 #net.inet.ip.fastforwarding=1 # This speed ups dummynet when channel isn't saturated net.inet.ip.dummynet.io_fast=1 # Increase dummynet(4) hash #net.inet.ip.dummynet.hash_size=2048 #net.inet.ip.dummynet.max_chain_len # Should be increased when you have A LOT of files on server # (Increase until vfs.ufs.dirhash_mem becomes lower) vfs.ufs.dirhash_maxmem=67108864 # Note from commit http://svn.freebsd.org/base/head@211031 : # For systems with RAID volumes and/or virtualization envirnments, where # read performance is very important, increasing this sysctl tunable to 32 # or even more will demonstratively yield additional performance benefits. vfs.read_max=32 # Explicit Congestion Notification (see http://en.wikipedia.org/wiki/Explicit_Congestion_Notification) net.inet.tcp.ecn.enable=1 # Flowtable - flow caching mechanism # Useful for routers #net.inet.flowtable.enable=1 #net.inet.flowtable.nmbflows=65535 # Extreme polling tuning #kern.polling.burst_max=1000 #kern.polling.each_burst=1000 #kern.polling.reg_frac=100 #kern.polling.user_frac=1 #kern.polling.idle_poll=0 # IPFW dynamic rules and timeouts tuning # Increase dyn_buckets till net.inet.ip.fw.curr_dyn_buckets is lower net.inet.ip.fw.dyn_buckets=65536 net.inet.ip.fw.dyn_max=65536 net.inet.ip.fw.dyn_ack_lifetime=120 net.inet.ip.fw.dyn_syn_lifetime=10 net.inet.ip.fw.dyn_fin_lifetime=2 net.inet.ip.fw.dyn_short_lifetime=10 # Make packets pass firewall only once when using dummynet # i.e. packets going thru pipe are passing out from firewall with accept #net.inet.ip.fw.one_pass=1 # shm_use_phys Wires all shared pages, making them unswappable # Use this to lessen Virtual Memory Manager's work when using Shared Mem. # Useful for databases #kern.ipc.shm_use_phys=1 # ZFS # Enable prefetch. Useful for sequential load type i.e fileserver. # FreeBSD sets vfs.zfs.prefetch_disable to 1 on any i386 systems and # on any amd64 systems with less than 4GB of avaiable memory # For additional info check this nabble thread http://old.nabble.com/Samba-read-speed-performance-tuning-td27964534.html #vfs.zfs.prefetch_disable=0 # On highload servers you may notice following message in dmesg: # "Approaching the limit on PV entries, consider increasing either the # vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable" vm.pmap.shpgperproc=2048 loader.conf: # Accept filters for data, http and DNS requests # Useful when your software uses select() instead of kevent/kqueue or when you under DDoS # DNS accf available on 8.0+ accf_data_load="YES" accf_http_load="YES" accf_dns_load="YES" # Async IO system calls aio_load="YES" # Linux specific devices in /dev # As for 8.1 it only /dev/full #lindev_load="YES" # Adds NCQ support in FreeBSD # WARNING! all ad[0-9]+ devices will be renamed to ada[0-9]+ # 8.0+ only #ahci_load="YES" #siis_load="YES" # FreeBSD 8.2+ # New Congestion Control for FreeBSD # http://caia.swin.edu.au/urp/newtcp/tools/cc_chd-readme-0.1.txt # http://www.ietf.org/proceedings/78/slides/iccrg-5.pdf # Initial merge commit message http://www.mail-archive.com/[email protected]/msg31410.html #cc_chd_load="YES" # Increase kernel memory size to 3G. # # Use ONLY if you have KVA_PAGES in kernel configuration, and you have more than 3G RAM # Otherwise panic will happen on next reboot! 
# # It's required for high buffer sizes: kern.ipc.nmbjumbop, kern.ipc.nmbclusters, etc # Useful on highload stateful firewalls, proxies or ZFS fileservers # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #vm.kmem_size="3G" # If your server has lots of swap (>4Gb) you should increase following value # according to http://lists.freebsd.org/pipermail/freebsd-hackers/2009-October/029616.html # Otherwise you'll be getting errors # "kernel: swap zone exhausted, increase kern.maxswzone" # kern.maxswzone="256M" # Older versions of FreeBSD can't tune maxfiles on the fly #kern.maxfiles="200000" # Useful for databases # Sets maximum data size to 1G # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #kern.maxdsiz="1G" # Maximum buffer size(vfs.maxbufspace) # You can check current one via vfs.bufspace # Should be lowered/upped depending on server's load-type # Usually decreased to preserve kmem # (default is 10% of mem) #kern.maxbcache="512M" # Sendfile buffers # For i386 only #kern.ipc.nsfbufs=10240 # FreeBSD 9+ # HPET "legacy route" support. It should allow HPET to work per-CPU # See http://www.mail-archive.com/[email protected]/msg03603.html #hint.atrtc.0.clock=0 #hint.attimer.0.clock=0 #hint.hpet.0.legacy_route=1 # syncache Hash table tuning net.inet.tcp.syncache.hashsize=1024 net.inet.tcp.syncache.bucketlimit=512 net.inet.tcp.syncache.cachelimit=65536 # Increased hostcache # Later host cache can be viewed via net.inet.tcp.hostcache.list hidden sysctl # Very useful for it's RTT RTTVAR # Must be power of two net.inet.tcp.hostcache.hashsize=65536 # hashsize * bucketlimit (which is 30 by default) # It allocates 255Mb (1966080*136) of RAM net.inet.tcp.hostcache.cachelimit=1966080 # TCP control-block Hash table tuning net.inet.tcp.tcbhashsize=4096 # Disable ipfw deny all # Should be uncommented when there is a chance that # kernel and ipfw binary may be out-of sync on next reboot #net.inet.ip.fw.default_to_accept=1 # # SIFTR (Statistical Information For TCP Research) is a kernel module that # logs a range of statistics on active TCP connections to a log file. # See prerelease notes http://groups.google.com/group/mailing.freebsd.current/browse_thread/thread/b4c18be6cdce76e4 # and man 4 sitfr #siftr_load="YES" # Enable superpages, for 7.2+ only # Also read http://lists.freebsd.org/pipermail/freebsd-hackers/2009-November/030094.html vm.pmap.pg_ps_enabled=1 # Usefull if you are using Intel-Gigabit NIC #hw.em.rxd=4096 #hw.em.txd=4096 #hw.em.rx_process_limit="-1" # Also if you have ALOT interrupts on NIC - play with following parameters # NOTE: You should set them for every NIC #dev.em.0.rx_int_delay: 250 #dev.em.0.tx_int_delay: 250 #dev.em.0.rx_abs_int_delay: 250 #dev.em.0.tx_abs_int_delay: 250 # There is also multithreaded version of em/igb drivers can be found here: # http://people.yandex-team.ru/~wawa/ # # for additional em monitoring and statistics use # sysctl dev.em.0.stats=1 ; dmesg # sysctl dev.em.0.debug=1 ; dmesg # Also after r209242 (-CURRENT) there is a separate sysctl for each stat variable; # Same tunings for igb #hw.igb.rxd=4096 #hw.igb.txd=4096 #hw.igb.rx_process_limit=100 # Some useful netisr tunables. 
See sysctl net.isr #net.isr.maxthreads=4 #net.isr.defaultqlimit=4096 #net.isr.maxqlimit: 10240 # Bind netisr threads to CPUs #net.isr.bindthreads=1 # # FreeBSD 9.x+ # Increase interface send queue length # See commit message http://svn.freebsd.org/viewvc/base?view=revision&revision=207554 #net.link.ifqmaxlen=1024 # Nicer boot logo =) loader_logo="beastie" And finally here is KERNCONF: # Just some of them, see also # cat /sys/{i386,amd64,}/conf/NOTES # This one useful only on i386 #options KVA_PAGES=512 # You can play with HZ in environments with high interrupt rate (default is 1000) # 100 is for my notebook to prolong it's battery life #options HZ=100 # Polling is goot on network loads with high packet rates and low-end NICs # NB! Do not enable it if you want more than one netisr thread #options DEVICE_POLLING # Eliminate datacopy on socket read-write # To take advantage with zero copy sockets you should have an MTU >= 4k # This req. is only for receiving data. # Read more in man zero_copy_sockets # Also this epic thread on kernel trap: # http://kerneltrap.org/node/6506 # Here Linus says that "anybody that does it that way (FreeBSD) is totally incompetent" #options ZERO_COPY_SOCKETS # Support TCP sign. Used for IPSec options TCP_SIGNATURE # There was stackoverflow found in KAME IPSec stack: # See http://secunia.com/advisories/43995/ # For quick workaround you can use `ipfw add deny proto ipcomp` options IPSEC # This ones can be loaded as modules. They described in loader.conf section #options ACCEPT_FILTER_DATA #options ACCEPT_FILTER_HTTP # Adding ipfw, also can be loaded as modules options IPFIREWALL # On 8.1+ you can disable verbose to see blocked packets on ipfw0 interface. # Also there is no point in compiling verbose into the kernel, because # now there is net.inet.ip.fw.verbose tunable. #options IPFIREWALL_VERBOSE #options IPFIREWALL_VERBOSE_LIMIT=10 options IPFIREWALL_FORWARD # Adding kernel NAT options IPFIREWALL_NAT options LIBALIAS # Traffic shaping options DUMMYNET # Divert, i.e. for userspace NAT options IPDIVERT # This is for OpenBSD's pf firewall device pf device pflog # pf's QoS - ALTQ options ALTQ options ALTQ_CBQ # Class Bases Queuing (CBQ) options ALTQ_RED # Random Early Detection (RED) options ALTQ_RIO # RED In/Out options ALTQ_HFSC # Hierarchical Packet Scheduler (HFSC) options ALTQ_PRIQ # Priority Queuing (PRIQ) options ALTQ_NOPCC # Required for SMP build # Pretty console # Manual can be found here http://forums.freebsd.org/showthread.php?t=6134 #options VESA #options SC_PIXEL_MODE # Disable reboot on Ctrl Alt Del #options SC_DISABLE_REBOOT # Change normal|kernel messages color options SC_NORM_ATTR=(FG_GREEN|BG_BLACK) options SC_KERNEL_CONS_ATTR=(FG_YELLOW|BG_BLACK) # More scroll space options SC_HISTORY_SIZE=8192 # Adding hardware crypto device device crypto device cryptodev # Useful network interfaces device vlan device tap #Virtual Ethernet driver device gre #IP over IP tunneling device if_bridge #Bridge interface device pfsync #synchronization interface for PF device carp #Common Address Redundancy Protocol device enc #IPsec interface device lagg #Link aggregation interface device stf #IPv4-IPv6 port # Also for my notebook, but may be used with Opteron device amdtemp # Same for Intel processors device coretemp # man 4 cpuctl device cpuctl # CPU control pseudo-device # Support for ECMP. More than one route for destination # Works even with default route so one can use it as LB for two ISP # For now code is unstable and panics (panic: rtfree 2) on route deletions. 
#options RADIX_MPATH # Multicast routing #options MROUTING #options PIM # Debug & DTrace options KDB # Kernel debugger related code options KDB_TRACE # Print a stack trace for a panic options KDTRACE_FRAME # amd64-only(?) options KDTRACE_HOOKS # all architectures - enable general DTrace hooks #options DDB #options DDB_CTF # all architectures - kernel ELF linker loads CTF data # Adaptive spining in lockmgr (8.x+) # See http://www.mail-archive.com/[email protected]/msg10782.html options ADAPTIVE_LOCKMGRS # UTF-8 in console (8.x+) #options TEKEN_UTF8 # FreeBSD 8.1+ # Deadlock resolver thread # For additional information see http://www.mail-archive.com/[email protected]/msg18124.html # (FYI: "resolution" is panic so use with caution) #options DEADLKRES # Increase maximum size of Raw I/O and sendfile(2) readahead #options MAXPHYS=(1024*1024) #options MAXBSIZE=(1024*1024) # For scheduler debug enable following option. # Debug will be available via `kern.sched.stats` sysctl # For more information see http://svnweb.freebsd.org/base/head/sys/conf/NOTES?view=markup #options SCHED_STATS If you are tuning network for maximum performance you may wish to play with ifconfig options like: # You can list all capabilities via `ifconfig -m` ifconfig [-]rxcsum [-]txcsum [-]tso [-]lro mtu In case you've enabled DDB in kernel config, you should edit your /etc/ddb.conf and add something like this to enable automatic reboot (and textdump as bonus): script kdb.enter.panic=textdump set; capture on; show pcpu; bt; ps; alltrace; capture off; call doadump; reset script kdb.enter.default=textdump set; capture on; bt; ps; capture off; call doadump; reset And do not forget to add ddb_enable="YES" to /etc/rc.conf Since FreeBSD 9 you can select to enable/disable flowcontrol on your NIC: # See http://en.wikipedia.org/wiki/Ethernet_flow_control and # http://www.mail-archive.com/[email protected]/msg07927.html for additional info ifconfig bge0 media auto mediaopt flowcontrol PS. Also most of FreeBSD's limits can be monitored by # vmstat -z and # limits PPS. variety of network counters can be monitored via # netstat -s In FreeBSD-9 netstat's -Q option appeared, try following command to display netisr stats # netstat -Q PPPS. also see # man 7 tuning PPPPS. I wanted to thank FreeBSD community, especially author of nginx - Igor Sysoev, nginx-ru@ and FreeBSD-performance@ mailing lists for providing useful information about FreeBSD tuning. FreeBSD WIP * Whats cooking for FreeBSD 7? * Whats cooking for FreeBSD 8? * Whats cooking for FreeBSD 9? So here is the question: What tunings are you using on yours FreeBSD servers? You can also post your /etc/sysctl.conf, /boot/loader.conf, kernel options, etc with description of its' meaning (do not copy-paste from sysctl -d). Don't forget to specify server type (web, smb, gateway, etc) Let's share experience!

    Read the article

  • Resque Runtime Error at /workers: wrong number of arguments for 'exists' command

    - by Superflux
    I'm having a runtime errror when i'm looking at the "workers" tab on resque-web (localhost). Everything else works. Edit: when this error occurs, i also have some (3 or 4) unknown workers 'not working'. I think they are responsible for the error but i don't understand how they got here Can you help me on this ? Did i do something wrong ? Config: Resque 1.8.5 as a gem in a rails 2.3.8 app on Snow Leopard redis 1.0.7 / rack 1.1 / sinatra 1.0 / vegas 0.1.7 file: client.rb location: format_error_reply line: 558 BACKTRACE: * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in format_error_reply * 551. when DOLLAR then format_bulk_reply(line) 552. when ASTERISK then format_multi_bulk_reply(line) 553. else raise ProtocolError.new(reply_type) 554. end 555. end 556. 557. def format_error_reply(line) 558. raise "-" + line.strip 559. end 560. 561. def format_status_reply(line) 562. line.strip 563. end 564. 565. def format_integer_reply(line) * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in format_reply * 541. 542. def reconnect 543. disconnect && connect_to_server 544. end 545. 546. def format_reply(reply_type, line) 547. case reply_type 548. when MINUS then format_error_reply(line) 549. when PLUS then format_status_reply(line) 550. when COLON then format_integer_reply(line) 551. when DOLLAR then format_bulk_reply(line) 552. when ASTERISK then format_multi_bulk_reply(line) 553. else raise ProtocolError.new(reply_type) 554. end 555. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in read_reply * 478. disconnect 479. 480. raise Errno::EAGAIN, "Timeout reading from the socket" 481. end 482. 483. raise Errno::ECONNRESET, "Connection lost" unless reply_type 484. 485. format_reply(reply_type, @sock.gets) 486. end 487. 488. 489. if "".respond_to?(:bytesize) 490. def get_size(string) 491. string.bytesize 492. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in process_command * 448. return pipeline ? results : results[0] 449. end 450. 451. def process_command(command, argvv) 452. @sock.write(command) 453. argvv.map do |argv| 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe 461. @mutex.synchronize(&block) 462. else * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in map * 446. end 447. 448. return pipeline ? results : results[0] 449. end 450. 451. def process_command(command, argvv) 452. @sock.write(command) 453. argvv.map do |argv| 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in process_command * 446. end 447. 448. return pipeline ? results : results[0] 449. end 450. 451. def process_command(command, argvv) 452. @sock.write(command) 453. argvv.map do |argv| 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in raw_call_command * 435. @sock.write(command) 436. return true 437. end 438. # The normal command execution is reading and processing the reply. 439. results = maybe_lock do 440. begin 441. set_socket_timeout!(0) if requires_timeout_reset?(argvv[0][0].to_s) 442. process_command(command, argvv) 443. ensure 444. 
set_socket_timeout!(@timeout) if requires_timeout_reset?(argvv[0][0].to_s) 445. end 446. end 447. 448. return pipeline ? results : results[0] 449. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in synchronize * 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe 461. @mutex.synchronize(&block) 462. else 463. block.call 464. end 465. end 466. 467. def read_reply 468. * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in maybe_lock * 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe 461. @mutex.synchronize(&block) 462. else 463. block.call 464. end 465. end 466. 467. def read_reply 468. * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in raw_call_command * 432. end 433. # When in Pub/Sub mode we don't read replies synchronously. 434. if @pubsub 435. @sock.write(command) 436. return true 437. end 438. # The normal command execution is reading and processing the reply. 439. results = maybe_lock do 440. begin 441. set_socket_timeout!(0) if requires_timeout_reset?(argvv[0][0].to_s) 442. process_command(command, argvv) 443. ensure 444. set_socket_timeout!(@timeout) if requires_timeout_reset?(argvv[0][0].to_s) 445. end 446. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in call_command * 336. # try to reconnect just one time, otherwise let the error araise. 337. def call_command(argv) 338. log(argv.inspect, :debug) 339. 340. connect_to_server unless connected? 341. 342. begin 343. raw_call_command(argv.dup) 344. rescue Errno::ECONNRESET, Errno::EPIPE, Errno::ECONNABORTED 345. if reconnect 346. raw_call_command(argv.dup) 347. else 348. raise Errno::ECONNRESET 349. end 350. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in method_missing * 385. connect_to(@host, @port) 386. call_command([:auth, @password]) if @password 387. call_command([:select, @db]) if @db != 0 388. @sock 389. end 390. 391. def method_missing(*argv) 392. call_command(argv) 393. end 394. 395. def raw_call_command(argvp) 396. if argvp[0].is_a?(Array) 397. argvv = argvp 398. pipeline = true 399. else * /Library/Ruby/Gems/1.8/gems/redis-namespace-0.4.4/lib/redis/namespace.rb in send * 159. args = add_namespace(args) 160. args.push(last) if last 161. when :alternate 162. args = [ add_namespace(Hash[*args]) ] 163. end 164. 165. # Dispatch the command to Redis and store the result. 166. result = @redis.send(command, *args, &block) 167. 168. # Remove the namespace from results that are keys. 169. result = rem_namespace(result) if after == :all 170. 171. result 172. end 173. * /Library/Ruby/Gems/1.8/gems/redis-namespace-0.4.4/lib/redis/namespace.rb in method_missing * 159. args = add_namespace(args) 160. args.push(last) if last 161. when :alternate 162. args = [ add_namespace(Hash[*args]) ] 163. end 164. 165. # Dispatch the command to Redis and store the result. 166. result = @redis.send(command, *args, &block) 167. 168. # Remove the namespace from results that are keys. 169. result = rem_namespace(result) if after == :all 170. 171. result 172. end 173. * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/worker.rb in state * 416. def idle? 417. state == :idle 418. end 419. 420. # Returns a symbol representing the current worker state, 421. # which can be either :working or :idle 422. def state 423. 
redis.exists("worker:#{self}") ? :working : :idle 424. end 425. 426. # Is this worker the same as another worker? 427. def ==(other) 428. to_s == other.to_s 429. end 430. * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server/views/workers.erb in __tilt_a2112543c5200dbe0635da5124b47311 * 46. <tr> 47. <th>&nbsp;</th> 48. <th>Where</th> 49. <th>Queues</th> 50. <th>Processing</th> 51. </tr> 52. <% for worker in (workers = resque.workers.sort_by { |w| w.to_s }) %> 53. <tr class="<%=state = worker.state%>"> 54. <td class='icon'><img src="<%=u state %>.png" alt="<%= state %>" title="<%= state %>"></td> 55. 56. <% host, pid, queues = worker.to_s.split(':') %> 57. <td class='where'><a href="<%=u "workers/#{worker}"%>"><%= host %>:<%= pid %></a></td> 58. <td class='queues'><%= queues.split(',').map { |q| '<a class="queue-tag" href="' + u("/queues/#{q}") + '">' + q + '</a>'}.join('') %></td> 59. 60. <td class='process'> * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server/views/workers.erb in each * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server/views/workers.erb in __tilt_a2112543c5200dbe0635da5124b47311 * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/tilt.rb in send * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/tilt.rb in evaluate * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/tilt.rb in render * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in render * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in erb * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server.rb in show * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server.rb in GET /workers * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in instance_eval * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route_eval * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in catch * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in each * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in dispatch! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in instance_eval * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in invoke * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in catch * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in invoke * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call! 
* /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/showexceptions.rb in call * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in synchronize * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/content_length.rb in call * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/chunked.rb in call * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/handler/mongrel.rb in process * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in process_client * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in each * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in process_client * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in run * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in initialize * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in new * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in run * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in initialize * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in new * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in run * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/handler/mongrel.rb in run * /Library/Ruby/Gems/1.8/gems/vegas-0.1.7/lib/vegas/runner.rb in run! * /Library/Ruby/Gems/1.8/gems/vegas-0.1.7/lib/vegas/runner.rb in start * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/bin/resque-web in new * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/bin/resque-web in nil * /usr/bin/resque-web in load

    Read the article

  • Conceal packet loss in PCM stream

    - by ZeroDefect
    I am looking to use 'Packet Loss Concealment' to conceal lost PCM frames in an audio stream. Unfortunately, I cannot find a library that is accessible without all the licensing restrictions and code bloat (...up for some suggestions though). I have located some GPL code written by Steve Underwood for the Asterisk project which implements PLC. There are several limitations; although, as Steve suggests in his code, his algorithm can be applied to different streams with a bit of work. Currently, the code works with 8kHz 16-bit signed mono streams. Variations of the code can be found through a simple search of Google Code Search. My hope is that I can adapt the code to work with other streams. Initially, the goal is to adjust the algorithm for 8+ kHz, 16-bit signed, multichannel audio (all in a C++ environment). Eventually, I'm looking to make the code available under the GPL license in hopes that it could be of benefit to others... Attached is the code below with my efforts. The code includes a main function that will "drop" a number of frames with a given probability. Unfortunately, the code does not quite work as expected. I'm receiving EXC_BAD_ACCESS when running in gdb, but I don't get a trace from gdb when using 'bt' command. Clearly, I'm trampimg on memory some where but not sure exactly where. When I comment out the *amdf_pitch* function, the code runs without crashing... int main (int argc, char *argv[]) { std::ifstream fin("C:\\cc32kHz.pcm"); if(!fin.is_open()) { std::cout << "Failed to open input file" << std::endl; return 1; } std::ofstream fout_repaired("C:\\cc32kHz_repaired.pcm"); if(!fout_repaired.is_open()) { std::cout << "Failed to open output repaired file" << std::endl; return 1; } std::ofstream fout_lossy("C:\\cc32kHz_lossy.pcm"); if(!fout_lossy.is_open()) { std::cout << "Failed to open output repaired file" << std::endl; return 1; } audio::PcmConcealer Concealer; Concealer.Init(1, 16, 32000); //Generate random numbers; srand( time(NULL) ); int value = 0; int probability = 5; while(!fin.eof()) { char arr[2]; fin.read(arr, 2); //Generate's random number; value = rand() % 100 + 1; if(value <= probability) { char blank[2] = {0x00, 0x00}; fout_lossy.write(blank, 2); //Fill in data; Concealer.Fill((int16_t *)blank, 1); fout_repaired.write(blank, 2); } else { //Write data to file; fout_repaired.write(arr, 2); fout_lossy.write(arr, 2); Concealer.Receive((int16_t *)arr, 1); } } fin.close(); fout_repaired.close(); fout_lossy.close(); return 0; } PcmConcealer.hpp /* * Code adapted from Steve Underwood of the Asterisk Project. This code inherits * the same licensing restrictions as the Asterisk Project. */ #ifndef __PCMCONCEALER_HPP__ #define __PCMCONCEALER_HPP__ /** 1. What does it do? The packet loss concealment module provides a suitable synthetic fill-in signal, to minimise the audible effect of lost packets in VoIP applications. It is not tied to any particular codec, and could be used with almost any codec which does not specify its own procedure for packet loss concealment. Where a codec specific concealment procedure exists, the algorithm is usually built around knowledge of the characteristics of the particular codec. It will, therefore, generally give better results for that particular codec than this generic concealer will. 2. How does it work? While good packets are being received, the plc_rx() routine keeps a record of the trailing section of the known speech signal. 
If a packet is missed, plc_fillin() is called to produce a synthetic replacement for the real speech signal. The average mean difference function (AMDF) is applied to the last known good signal, to determine its effective pitch. Based on this, the last pitch period of signal is saved. Essentially, this cycle of speech will be repeated over and over until the real speech resumes. However, several refinements are needed to obtain smooth pleasant sounding results. - The two ends of the stored cycle of speech will not always fit together smoothly. This can cause roughness, or even clicks, at the joins between cycles. To soften this, the 1/4 pitch period of real speech preceeding the cycle to be repeated is blended with the last 1/4 pitch period of the cycle to be repeated, using an overlap-add (OLA) technique (i.e. in total, the last 5/4 pitch periods of real speech are used). - The start of the synthetic speech will not always fit together smoothly with the tail of real speech passed on before the erasure was identified. Ideally, we would like to modify the last 1/4 pitch period of the real speech, to blend it into the synthetic speech. However, it is too late for that. We could have delayed the real speech a little, but that would require more buffer manipulation, and hurt the efficiency of the no-lost-packets case (which we hope is the dominant case). Instead we use a degenerate form of OLA to modify the start of the synthetic data. The last 1/4 pitch period of real speech is time reversed, and OLA is used to blend it with the first 1/4 pitch period of synthetic speech. The result seems quite acceptable. - As we progress into the erasure, the chances of the synthetic signal being anything like correct steadily fall. Therefore, the volume of the synthesized signal is made to decay linearly, such that after 50ms of missing audio it is reduced to silence. - When real speech resumes, an extra 1/4 pitch period of sythetic speech is blended with the start of the real speech. If the erasure is small, this smoothes the transition. If the erasure is long, and the synthetic signal has faded to zero, the blending softens the start up of the real signal, avoiding a kind of "click" or "pop" effect that might occur with a sudden onset. 3. How do I use it? Before audio is processed, call plc_init() to create an instance of the packet loss concealer. For each received audio packet that is acceptable (i.e. not including those being dropped for being too late) call plc_rx() to record the content of the packet. Note this may modify the packet a little after a period of packet loss, to blend real synthetic data smoothly. When a real packet is not available in time, call plc_fillin() to create a sythetic substitute. That's it! */ /*! Minimum allowed pitch (66 Hz) */ #define PLC_PITCH_MIN(SAMPLE_RATE) ((double)(SAMPLE_RATE) / 66.6) /*! Maximum allowed pitch (200 Hz) */ #define PLC_PITCH_MAX(SAMPLE_RATE) ((SAMPLE_RATE) / 200) /*! Maximum pitch OLA window */ //#define PLC_PITCH_OVERLAP_MAX(SAMPLE_RATE) ((PLC_PITCH_MIN(SAMPLE_RATE)) >> 2) /*! The length over which the AMDF function looks for similarity (20 ms) */ #define CORRELATION_SPAN(SAMPLE_RATE) ((20 * (SAMPLE_RATE)) / 1000) /*! History buffer length. The buffer must also be at leat 1.25 times PLC_PITCH_MIN, but that is much smaller than the buffer needs to be for the pitch assessment. */ //#define PLC_HISTORY_LEN(SAMPLE_RATE) ((CORRELATION_SPAN(SAMPLE_RATE)) + (PLC_PITCH_MIN(SAMPLE_RATE))) namespace audio { typedef struct { /*! 
Consecutive erased samples */ int missing_samples; /*! Current offset into pitch period */ int pitch_offset; /*! Pitch estimate */ int pitch; /*! Buffer for a cycle of speech */ float *pitchbuf;//[PLC_PITCH_MIN]; /*! History buffer */ short *history;//[PLC_HISTORY_LEN]; /*! Current pointer into the history buffer */ int buf_ptr; } plc_state_t; class PcmConcealer { public: PcmConcealer(); ~PcmConcealer(); void Init(int channels, int bit_depth, int sample_rate); //Process a block of received audio samples. int Receive(short amp[], int frames); //Fill-in a block of missing audio samples. int Fill(short amp[], int frames); void Destroy(); private: int amdf_pitch(int min_pitch, int max_pitch, short amp[], int channel_index, int frames); void save_history(plc_state_t *s, short *buf, int channel_index, int frames); void normalise_history(plc_state_t *s); /** Holds the states of each of the channels **/ std::vector< plc_state_t * > ChannelStates; int plc_pitch_min; int plc_pitch_max; int plc_pitch_overlap_max; int correlation_span; int plc_history_len; int channel_count; int sample_rate; bool Initialized; }; } #endif PcmConcealer.cpp /* * Code adapted from Steve Underwood of the Asterisk Project. This code inherits * the same licensing restrictions as the Asterisk Project. */ #include "audio/PcmConcealer.hpp" /* We do a straight line fade to zero volume in 50ms when we are filling in for missing data. */ #define ATTENUATION_INCREMENT 0.0025 /* Attenuation per sample */ #if !defined(INT16_MAX) #define INT16_MAX (32767) #define INT16_MIN (-32767-1) #endif #ifdef WIN32 inline double rint(double x) { return floor(x + 0.5); } #endif inline short fsaturate(double damp) { if (damp > 32767.0) return INT16_MAX; if (damp < -32768.0) return INT16_MIN; return (short)rint(damp); } namespace audio { PcmConcealer::PcmConcealer() : Initialized(false) { } PcmConcealer::~PcmConcealer() { Destroy(); } void PcmConcealer::Init(int channels, int bit_depth, int sample_rate) { if(Initialized) return; if(channels <= 0 || bit_depth != 16) return; Initialized = true; channel_count = channels; this->sample_rate = sample_rate; ////////////// double min = PLC_PITCH_MIN(sample_rate); int imin = (int)min; double max = PLC_PITCH_MAX(sample_rate); int imax = (int)max; plc_pitch_min = imin; plc_pitch_max = imax; plc_pitch_overlap_max = (plc_pitch_min >> 2); correlation_span = CORRELATION_SPAN(sample_rate); plc_history_len = correlation_span + plc_pitch_min; ////////////// for(int i = 0; i < channel_count; i ++) { plc_state_t *t = new plc_state_t; memset(t, 0, sizeof(plc_state_t)); t->pitchbuf = new float[plc_pitch_min]; t->history = new short[plc_history_len]; ChannelStates.push_back(t); } } void PcmConcealer::Destroy() { if(!Initialized) return; while(ChannelStates.size()) { plc_state_t *s = ChannelStates.at(0); if(s) { if(s->history) delete s->history; if(s->pitchbuf) delete s->pitchbuf; memset(s, 0, sizeof(plc_state_t)); delete s; } ChannelStates.erase(ChannelStates.begin()); } ChannelStates.clear(); Initialized = false; } //Process a block of received audio samples. 
int PcmConcealer::Receive(short amp[], int frames) { if(!Initialized) return 0; int j = 0; for(int k = 0; k < ChannelStates.size(); k++) { int i; int overlap_len; int pitch_overlap; float old_step; float new_step; float old_weight; float new_weight; float gain; plc_state_t *s = ChannelStates.at(k); if (s->missing_samples) { /* Although we have a real signal, we need to smooth it to fit well with the synthetic signal we used for the previous block */ /* The start of the real data is overlapped with the next 1/4 cycle of the synthetic data. */ pitch_overlap = s->pitch >> 2; if (pitch_overlap > frames) pitch_overlap = frames; gain = 1.0 - s->missing_samples * ATTENUATION_INCREMENT; if (gain < 0.0) gain = 0.0; new_step = 1.0/pitch_overlap; old_step = new_step*gain; new_weight = new_step; old_weight = (1.0 - new_step)*gain; for (i = 0; i < pitch_overlap; i++) { int index = (i * channel_count) + j; amp[index] = fsaturate(old_weight * s->pitchbuf[s->pitch_offset] + new_weight * amp[index]); if (++s->pitch_offset >= s->pitch) s->pitch_offset = 0; new_weight += new_step; old_weight -= old_step; if (old_weight < 0.0) old_weight = 0.0; } s->missing_samples = 0; } save_history(s, amp, j, frames); j++; } return frames; } //Fill-in a block of missing audio samples. int PcmConcealer::Fill(short amp[], int frames) { if(!Initialized) return 0; int j =0; for(int k = 0; k < ChannelStates.size(); k++) { short *tmp = new short[plc_pitch_overlap_max]; int i; int pitch_overlap; float old_step; float new_step; float old_weight; float new_weight; float gain; short *orig_amp; int orig_len; orig_amp = amp; orig_len = frames; plc_state_t *s = ChannelStates.at(k); if (s->missing_samples == 0) { // As the gap in real speech starts we need to assess the last known pitch, //and prepare the synthetic data we will use for fill-in normalise_history(s); s->pitch = amdf_pitch(plc_pitch_min, plc_pitch_max, s->history + plc_history_len - correlation_span - plc_pitch_min, j, correlation_span); // We overlap a 1/4 wavelength pitch_overlap = s->pitch >> 2; // Cook up a single cycle of pitch, using a single of the real signal with 1/4 //cycle OLA'ed to make the ends join up nicely // The first 3/4 of the cycle is a simple copy for (i = 0; i < s->pitch - pitch_overlap; i++) s->pitchbuf[i] = s->history[plc_history_len - s->pitch + i]; // The last 1/4 of the cycle is overlapped with the end of the previous cycle new_step = 1.0/pitch_overlap; new_weight = new_step; for ( ; i < s->pitch; i++) { s->pitchbuf[i] = s->history[plc_history_len - s->pitch + i]*(1.0 - new_weight) + s->history[plc_history_len - 2*s->pitch + i]*new_weight; new_weight += new_step; } // We should now be ready to fill in the gap with repeated, decaying cycles // of what is in pitchbuf // We need to OLA the first 1/4 wavelength of the synthetic data, to smooth // it into the previous real data. To avoid the need to introduce a delay // in the stream, reverse the last 1/4 wavelength, and OLA with that. 
gain = 1.0; new_step = 1.0/pitch_overlap; old_step = new_step; new_weight = new_step; old_weight = 1.0 - new_step; for (i = 0; i < pitch_overlap; i++) { int index = (i * channel_count) + j; amp[index] = fsaturate(old_weight * s->history[plc_history_len - 1 - i] + new_weight * s->pitchbuf[i]); new_weight += new_step; old_weight -= old_step; if (old_weight < 0.0) old_weight = 0.0; } s->pitch_offset = i; } else { gain = 1.0 - s->missing_samples*ATTENUATION_INCREMENT; i = 0; } for ( ; gain > 0.0 && i < frames; i++) { int index = (i * channel_count) + j; amp[index] = s->pitchbuf[s->pitch_offset]*gain; gain -= ATTENUATION_INCREMENT; if (++s->pitch_offset >= s->pitch) s->pitch_offset = 0; } for ( ; i < frames; i++) { int index = (i * channel_count) + j; amp[i] = 0; } s->missing_samples += orig_len; save_history(s, amp, j, frames); delete [] tmp; j++; } return frames; } void PcmConcealer::save_history(plc_state_t *s, short *buf, int channel_index, int frames) { if (frames >= plc_history_len) { /* Just keep the last part of the new data, starting at the beginning of the buffer */ //memcpy(s->history, buf + len - plc_history_len, sizeof(short)*plc_history_len); int frames_to_copy = plc_history_len; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * (i + frames - plc_history_len)) + channel_index; s->history[i] = buf[index]; } s->buf_ptr = 0; return; } if (s->buf_ptr + frames > plc_history_len) { /* Wraps around - must break into two sections */ //memcpy(s->history + s->buf_ptr, buf, sizeof(short)*(plc_history_len - s->buf_ptr)); short *hist_ptr = s->history + s->buf_ptr; int frames_to_copy = plc_history_len - s->buf_ptr; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * i) + channel_index; hist_ptr[i] = buf[index]; } frames -= (plc_history_len - s->buf_ptr); //memcpy(s->history, buf + (plc_history_len - s->buf_ptr), sizeof(short)*len); frames_to_copy = frames; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * (i + (plc_history_len - s->buf_ptr))) + channel_index; s->history[i] = buf[index]; } s->buf_ptr = frames; return; } /* Can use just one section */ //memcpy(s->history + s->buf_ptr, buf, sizeof(short)*len); short *hist_ptr = s->history + s->buf_ptr; int frames_to_copy = frames; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * i) + channel_index; hist_ptr[i] = buf[index]; } s->buf_ptr += frames; } void PcmConcealer::normalise_history(plc_state_t *s) { short *tmp = new short[plc_history_len]; if (s->buf_ptr == 0) return; memcpy(tmp, s->history, sizeof(short)*s->buf_ptr); memcpy(s->history, s->history + s->buf_ptr, sizeof(short)*(plc_history_len - s->buf_ptr)); memcpy(s->history + plc_history_len - s->buf_ptr, tmp, sizeof(short)*s->buf_ptr); s->buf_ptr = 0; delete [] tmp; } int PcmConcealer::amdf_pitch(int min_pitch, int max_pitch, short amp[], int channel_index, int frames) { int i; int j; int acc; int min_acc; int pitch; pitch = min_pitch; min_acc = INT_MAX; for (i = max_pitch; i <= min_pitch; i++) { acc = 0; for (j = 0; j < frames; j++) { int index1 = (channel_count * (i+j)) + channel_index; int index2 = (channel_count * j) + channel_index; //std::cout << "Index 1: " << index1 << ", Index 2: " << index2 << std::endl; acc += abs(amp[index1] - amp[index2]); } if (acc < min_acc) { min_acc = acc; pitch = i; } } std::cout << "Pitch: " << pitch << std::endl; return pitch; } } P.S. - I must confess that digital audio is not my forte...

    Read the article

  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
    With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built in DataContractJsonSerializer or the older JavaScript serializer. The DataContractSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization - like values of type object, or anonymous types which are quite common these days. The JavaScript Serializer that came before it actually does support non-typed objects for serialization but it can't do anything with untyped data coming in from JavaScript and it's overall model of extensibility was pretty limited (JavaScript Serializer is what MVC uses for JSON responses). JSON.NET provides a robust JSON serializer that has both high level and low level components, supports binary JSON, JSON contracts, Xml to JSON conversion, LINQ to JSON and many, many more features than either of the built in serializers. ASP.NET Web API now uses JSON.NET as its default serializer and is now pulled in as a NuGet dependency into Web API projects, which is great. Dynamic JSON Parsing One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically - that is without mapping the JSON captured directly into a .NET type as DataContractSerializer or the JavaScript Serializers do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries etc), and other times you simply don't have the structures in place or don't want to create them to actually import the data. If this topic sounds familiar - you're right! I wrote about dynamic JSON parsing a few months back before JSON.NET was added to Web API and when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discus JToken, JObject and JArray which are the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types. Why Dynamic JSON? So, why Dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third party services it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes lot of sense to pull just a small amount of data out of large JSON document received from a service, because the third party service isn't directly related to your application's logic most of the time - and it makes little sense to map the entire service structure in your application. For example, recently I worked with the Google Maps Places API to return information about businesses close to me (or rather the app's) location. The Google API returns a ton of information that my application had no interest in - all I needed was few values out of the data. Dynamic JSON parsing makes it possible to map this data, without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and directly store it on my business entities that needed to receive the data - no need to map the entire Maps API structure. 
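    To make the "pull a few values out of a big response" idea concrete, here is a minimal sketch (not code from the article; the JSON shape below only loosely mimics a places-style result) that parses the response with JObject.Parse and reads just the fields of interest:

    using System;
    using Newtonsoft.Json.Linq;

    class DynamicPickExample
    {
        static void Main()
        {
            string json = @"{
                ""results"": [
                    { ""name"": ""Kona Coffee Shack"",
                      ""rating"": 4.5,
                      ""geometry"": { ""location"": { ""lat"": 19.64, ""lng"": -155.99 } } }
                ]
            }";

            JObject response = JObject.Parse(json);

            // Index into the structure and cast only the handful of values the app needs.
            JToken first = response["results"][0];
            string name = (string)first["name"];
            double lat = (double)first["geometry"]["location"]["lat"];
            double lng = (double)first["geometry"]["location"]["lng"];

            Console.WriteLine("{0}: {1}, {2}", name, lat, lng);
        }
    }

    Everything the service returns beyond those few values is simply ignored, which is exactly the case where mapping the full API to C# types would be wasted effort.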
Getting JSON.NET The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project with: PM> Install-Package Newtonsoft.Json from the Package Manager Console, or by using Manage NuGet Packages on your project's References. As mentioned, if you're using ASP.NET Web API or MVC 4, JSON.NET will be added to your project automatically. Alternatively you can go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/
Creating JSON on the fly with JObject and JArray Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken-derived JSON.NET objects. The most common JToken-derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaObjectProvider and so uses the dynamic keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs, using JObject for the album and JArray for the collection of songs: [TestMethod] public void JObjectOutputTest() { // strongly typed instance var jsonObject = new JObject(); // you can explicitly add values here using the class interface jsonObject.Add("Entered", DateTime.Now); // or cast to dynamic to dynamically add/read properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; album.Artist = "AC/DC"; album.YearReleased = 1976; album.Songs = new JArray() as dynamic; dynamic song = new JObject(); song.SongName = "Dirty Deeds Done Dirt Cheap"; song.SongLength = "4:11"; album.Songs.Add(song); song = new JObject(); song.SongName = "Love at First Feel"; song.SongLength = "3:10"; album.Songs.Add(song); Console.WriteLine(album.ToString()); }
This produces a complete JSON structure: { "Entered": "2012-08-18T13:26:37.7137482-10:00", "AlbumName": "Dirty Deeds Done Dirt Cheap", "Artist": "AC/DC", "YearReleased": 1976, "Songs": [ { "SongName": "Dirty Deeds Done Dirt Cheap", "SongLength": "4:11" }, { "SongName": "Love at First Feel", "SongLength": "3:10" } ] } Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override them with the JsonSerializerSettings object for most operations. The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather, the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be driven entirely at runtime, without compile-time constraints on the structure of the JSON output. Here I use JObject to create an album 'object' and immediately cast it to dynamic. JObject is similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo collections of key-value pairs that are exposed as properties through the IDynamicMetaObjectProvider implementation in JSON.NET's JToken base class. For simple typed values the syntax is very clean - you just add them as properties. For objects and arrays you have to explicitly create a new JObject or JArray, cast it to dynamic, and then add properties and items to it.
Always remember, though, that these values are dynamic - which means no IntelliSense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data-driven JSON generation from other structures. Below you can see both styles of access next to each other: // strongly typed instance var jsonObject = new JObject(); // you can explicitly add values here jsonObject.Add("Entered", DateTime.Now); // expando-style instance - you can just 'use' properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap";
JContainer (the base class for JObject and JArray) is a collection, so you can also iterate over the properties at runtime easily: foreach (var item in jsonObject) { Console.WriteLine(item.Key + " " + item.Value.ToString()); } The functionality of the JSON objects is very similar to .NET's ExpandoObject, and if you have used it before you're already familiar with how the dynamic interfaces to the JSON objects work.
Importing JSON with JObject.Parse() and JArray.Parse() The JToken classes support importing JSON via the Parse() and Load() methods, which can read JSON data from a string or various streams respectively. Essentially this is the core JSON parsing that turns a JSON string into a hierarchy of JToken objects that can then be referenced using familiar dynamic object syntax. Here's a simple example: public void JValueParsingTest() { var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"", ""Entered"":""2012-03-16T00:03:33.245-10:00""}"; dynamic json = JValue.Parse(jsonString); // values require casting string name = json.Name; string company = json.Company; DateTime entered = json.Entered; Assert.AreEqual(name, "Rick"); Assert.AreEqual(company, "West Wind"); }
The JSON string represents an object with three properties, which is parsed into a JObject instance and cast to dynamic. Once cast to dynamic I can go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken, and I have to cast them to their appropriate types first before I can do type comparisons as in the Asserts at the end of the test method. This is required because of the way dynamic types work: the type can't be determined from the method signature of the Assert.AreEqual(object,object) method. I have to either assign the dynamic value to a variable as I did above, or explicitly cast it ( (string) json.Name ) in the actual method call. The JSON structure can be much more complex than this simple example.
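The same parsed object can also be read without dynamic at all, using the string indexer and the Value<T>() helper on JObject. Here's a small sketch reusing the jsonString from the test above:

using System;
using Newtonsoft.Json.Linq;

public class IndexerAccessSample
{
    public static void Main()
    {
        var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"", ""Entered"":""2012-03-16T00:03:33.245-10:00""}";

        // Parse into a JObject and read values via the string indexer / Value<T>()
        JObject json = JObject.Parse(jsonString);

        string name = (string)json["Name"];              // explicit cast from JToken
        string company = json.Value<string>("Company");  // typed helper
        DateTime entered = json.Value<DateTime>("Entered");

        Console.WriteLine("{0} - {1} - {2}", name, company, entered);
    }
}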
Here's another example of an array of albums serialized to JSON and then parsed with JArray.Parse(): [TestMethod] public void JsonArrayParsingTest() { var jsonString = @"[ { ""Id"": ""b3ec4e5c"", ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"", ""Artist"": ""AC/DC"", ""YearReleased"": 1976, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"", ""Songs"": [ { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" } ] }, { ""Id"": ""7b919432"", ""AlbumName"": ""End of the Silence"", ""Artist"": ""Henry Rollins Band"", ""YearReleased"": 1992, ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"", ""Songs"": [ { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" }, { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" } ] } ]"; JArray jsonVal = JArray.Parse(jsonString) as JArray; dynamic albums = jsonVal; foreach (dynamic album in albums) { Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")"); foreach (dynamic song in album.Songs) { Console.WriteLine("\t" + song.SongName); } } Console.WriteLine(albums[0].AlbumName); Console.WriteLine(albums[0].Songs[1].SongName); }
JObject and JArray in ASP.NET Web API Of course these types also work in ASP.NET Web API controller methods. If you want, you can accept parameters of these types or return them from your methods. The following contrived example receives dynamic JSON input, and then creates a new dynamic JSON object and returns it based on data from the first: [HttpPost] public JObject PostAlbumJObject(JObject jAlbum) { // dynamic input from inbound JSON dynamic album = jAlbum; // create a new JSON object to write out dynamic newAlbum = new JObject(); // Create properties on the new instance // with values from the first newAlbum.AlbumName = album.AlbumName + " New"; newAlbum.NewProperty = "something new"; newAlbum.Songs = new JArray(); foreach (dynamic song in album.Songs) { song.SongName = song.SongName + " New"; newAlbum.Songs.Add(song); } return newAlbum; }
The raw POST request to the server looks something like this:
POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1
User-Agent: Fiddler
Content-type: application/json
Host: localhost
Content-Length: 88
{AlbumName: "Dirty Deeds",Songs:[ { SongName: "Problem Child"},{ SongName: "Squealer"}]}
and the output that comes back looks like this:
{ "AlbumName": "Dirty Deeds New", "NewProperty": "something new", "Songs": [ { "SongName": "Problem Child New" }, { "SongName": "Squealer New" } ] }
The original values are echoed back with something extra appended to demonstrate that we're working with a new object.
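The Album and Song types used by the strongly typed examples that follow aren't shown in the post; presumably they are simple POCOs along these lines - a sketch with property names inferred from the JSON above:

using System;
using System.Collections.Generic;

// Shape inferred from the album JSON used in the examples - a guess, not the author's original classes
public class Album
{
    public string Id { get; set; }
    public string AlbumName { get; set; }
    public string Artist { get; set; }
    public int YearReleased { get; set; }
    public DateTime Entered { get; set; }
    public string AlbumImageUrl { get; set; }
    public string AmazonUrl { get; set; }
    public List<Song> Songs { get; set; }
}

public class Song
{
    public string AlbumId { get; set; }
    public string SongName { get; set; }
    public string SongLength { get; set; }
}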
When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API ignores normal content negotiation and assumes your content is going to be received and returned as JSON, so effectively the parameter and result type explicitly determines the input and output format which is nice. Dynamic to Strong Type Mapping You can also map JObject and JArray instances to a strongly typed object, so you can mix dynamic and static typing in the same piece of code. Using the 2 Album jsonString shown earlier, the code below takes an array of albums and picks out only a single album and casts that album to a static Album instance.[TestMethod] public void JsonParseToStrongTypeTest() { JArray albums = JArray.Parse(jsonString) as JArray; // pick out one album JObject jalbum = albums[0] as JObject; // Copy to a static Album instance Album album = jalbum.ToObject<Album>(); Assert.IsNotNull(album); Assert.AreEqual(album.AlbumName,jalbum.Value<string>("AlbumName")); Assert.IsTrue(album.Songs.Count > 0); } This is pretty damn useful for the scenario I mentioned earlier - you can read a large chunk of JSON and dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static. Strongly typed JSON Parsing With all this talk of dynamic let's not forget that JSON.NET of course also does strongly typed serialization which is drop dead easy. Here's a simple example on how to serialize and deserialize an object with JSON.NET:[TestMethod] public void StronglyTypedSerializationTest() { // Demonstrate deserialization from a raw string var album = new Album() { AlbumName = "Dirty Deeds Done Dirt Cheap", Artist = "AC/DC", Entered = DateTime.Now, YearReleased = 1976, Songs = new List<Song>() { new Song() { SongName = "Dirty Deeds Done Dirt Cheap", SongLength = "4:11" }, new Song() { SongName = "Love at First Feel", SongLength = "3:10" } } }; // serialize to string string json2 = JsonConvert.SerializeObject(album,Formatting.Indented); Console.WriteLine(json2); // make sure we can serialize back var album2 = JsonConvert.DeserializeObject<Album>(json2); Assert.IsNotNull(album2); Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap"); Assert.IsTrue(album2.Songs.Count == 2); } JsonConvert is a high level static class that wraps lower level functionality, but you can also use the JsonSerializer class, which allows you to serialize/parse to and from streams. It's a little more work, but gives you a bit more control. The functionality available is easy to discover with Intellisense, and that's good because there's not a lot in the way of documentation that's actually useful. Summary JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing from dynamic parsing to static serialization, to complex querying of JSON objects using LINQ. It's good to see this open source library getting integrated into .NET, and pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go! 
Resources: Sample Test Project; JSON.NET on CodePlex: http://json.codeplex.com/ © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, Web Api, AJAX

    Read the article

  • C#/.NET Fundamentals: Choosing the Right Collection Class

    - by James Michael Hare
    The .NET Base Class Library (BCL) has a wide array of collection classes at your disposal which make it easy to manage collections of objects. While it's great to have so many classes available, it can be daunting to choose the right collection to use for any given situation. As hard as it may be, choosing the right collection can be absolutely key to the performance and maintainability of your application! This post will look at breaking down any confusion between each collection and the situations in which they excel. We will be spending most of our time looking at the System.Collections.Generic namespace, which is the recommended set of collections.
The Generic Collections: System.Collections.Generic namespace The generic collections were introduced in .NET 2.0 in the System.Collections.Generic namespace. This is the main body of collections you should tend to focus on first, as they will tend to suit 99% of your needs right up front. It is important to note that the generic collections are unsynchronized. This decision was made for performance reasons, because depending on how you are using the collections it's completely possible that synchronization may not be required, or may be needed on a higher level than simple method-level synchronization. Furthermore, concurrent read access (all writes done at the beginning and never again) is always safe, but for concurrent mixed access you should either synchronize the collection or use one of the concurrent collections. So let's look at each of the collections in turn and its various pros and cons; at the end we'll summarize with a table to help make it easier to compare and contrast the different collections.
The Associative Collection Classes Associative collections store a value in the collection by providing a key that is used to add/remove/lookup the item. Hence, the container associates the value with the key. These collections are most useful when you need to lookup/manipulate a collection using a key value. For example, if you wanted to look up an order in a collection of orders by an order id, you might have an associative collection where the key is the order id and the value is the order. The Dictionary<TKey,TValue> is probably the most used associative container class. The Dictionary<TKey,TValue> is the fastest class for associative lookups/inserts/deletes because it uses a hash table under the covers. Because the keys are hashed, the key type should implement GetHashCode() and Equals() appropriately, or you should provide an external IEqualityComparer to the dictionary on construction. The insert/delete/lookup time of items in the dictionary is amortized constant time - O(1) - which means no matter how big the dictionary gets, the time it takes to find something remains relatively constant. This is highly desirable for high-speed lookups. The only downside is that the dictionary, by nature of using a hash table, is unordered, so you cannot easily traverse the items in a Dictionary in order. The SortedDictionary<TKey,TValue> is similar to the Dictionary<TKey,TValue> in usage but very different in implementation. The SortedDictionary<TKey,TValue> uses a binary tree under the covers to maintain the items in order by the key. As a consequence of sorting, the type used for the key must correctly implement IComparable<TKey> so that the keys can be correctly sorted.
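Before looking at the trade-offs, here's a minimal sketch of the order-by-id lookup described above, plus an in-order traversal with SortedDictionary<TKey,TValue>. The Order type and the ids are hypothetical:

using System;
using System.Collections.Generic;

// Hypothetical order type used only for illustration
public class Order
{
    public string OrderId { get; set; }
    public decimal Total { get; set; }
}

public static class OrderLookupSample
{
    public static void Main()
    {
        // Dictionary<TKey,TValue>: O(1) amortized lookups, no ordering maintained
        var ordersById = new Dictionary<string, Order>
        {
            { "A-1002", new Order { OrderId = "A-1002", Total = 110.50m } },
            { "A-1001", new Order { OrderId = "A-1001", Total = 25.00m } }
        };

        Order order;
        if (ordersById.TryGetValue("A-1002", out order))
            Console.WriteLine("Found {0}: {1}", order.OrderId, order.Total);

        // SortedDictionary<TKey,TValue>: O(log n) operations, but iteration is in key order
        var sortedById = new SortedDictionary<string, Order>(ordersById);
        foreach (KeyValuePair<string, Order> pair in sortedById)
            Console.WriteLine("{0} -> {1}", pair.Key, pair.Value.Total);
    }
}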
The sorted dictionary trades a little bit of lookup time for the ability to maintain the items in order, thus insert/delete/lookup times in a sorted dictionary are logarithmic - O(log n). Generally speaking, with logarithmic time, you can double the size of the collection and it only has to perform one extra comparison to find the item. Use the SortedDictionary<TKey,TValue> when you want fast lookups but also want to be able to maintain the collection in order by the key. The SortedList<TKey,TValue> is the other ordered associative container class in the generic containers. Once again SortedList<TKey,TValue>, like SortedDictionary<TKey,TValue>, uses a key to sort key-value pairs. Unlike SortedDictionary, however, items in a SortedList are stored as an ordered array of items. This means that insertions and deletions are linear - O(n) - because deleting or adding an item may involve shifting all items up or down in the list. Lookup time, however is O(log n) because the SortedList can use a binary search to find any item in the list by its key. So why would you ever want to do this? Well, the answer is that if you are going to load the SortedList up-front, the insertions will be slower, but because array indexing is faster than following object links, lookups are marginally faster than a SortedDictionary. Once again I'd use this in situations where you want fast lookups and want to maintain the collection in order by the key, and where insertions and deletions are rare. The Non-Associative Containers The other container classes are non-associative. They don't use keys to manipulate the collection but rely on the object itself being stored or some other means (such as index) to manipulate the collection. The List<T> is a basic contiguous storage container. Some people may call this a vector or dynamic array. Essentially it is an array of items that grow once its current capacity is exceeded. Because the items are stored contiguously as an array, you can access items in the List<T> by index very quickly. However inserting and removing in the beginning or middle of the List<T> are very costly because you must shift all the items up or down as you delete or insert respectively. However, adding and removing at the end of a List<T> is an amortized constant operation - O(1). Typically List<T> is the standard go-to collection when you don't have any other constraints, and typically we favor a List<T> even over arrays unless we are sure the size will remain absolutely fixed. The LinkedList<T> is a basic implementation of a doubly-linked list. This means that you can add or remove items in the middle of a linked list very quickly (because there's no items to move up or down in contiguous memory), but you also lose the ability to index items by position quickly. Most of the time we tend to favor List<T> over LinkedList<T> unless you are doing a lot of adding and removing from the collection, in which case a LinkedList<T> may make more sense. The HashSet<T> is an unordered collection of unique items. This means that the collection cannot have duplicates and no order is maintained. Logically, this is very similar to having a Dictionary<TKey,TValue> where the TKey and TValue both refer to the same object. This collection is very useful for maintaining a collection of items you wish to check membership against. For example, if you receive an order for a given vendor code, you may want to check to make sure the vendor code belongs to the set of vendor codes you handle. 
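A membership check along those lines might look like this minimal sketch (the vendor codes are made up):

using System;
using System.Collections.Generic;

public static class VendorCodeCheckSample
{
    public static void Main()
    {
        // Unordered set of unique vendor codes we handle (hypothetical values)
        var handledVendors = new HashSet<string>(StringComparer.OrdinalIgnoreCase)
        {
            "ACME", "GLOBEX", "INITECH"
        };

        string incomingVendorCode = "globex";

        // Contains() is an O(1) hash lookup - order doesn't matter here
        if (handledVendors.Contains(incomingVendorCode))
            Console.WriteLine("Order accepted for vendor " + incomingVendorCode);
        else
            Console.WriteLine("Unknown vendor: " + incomingVendorCode);
    }
}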
In these cases a HashSet<T> is useful for super-quick lookups where order is not important. Once again, like in Dictionary, the type T should have a valid implementation of GetHashCode() and Equals(), or you should provide an appropriate IEqualityComparer<T> to the HashSet<T> on construction. The SortedSet<T> is to HashSet<T> what the SortedDictionary<TKey,TValue> is to Dictionary<TKey,TValue>. That is, the SortedSet<T> is a binary tree where the key and value are the same object. This once again means that adding/removing/lookups are logarithmic - O(log n) - but you gain the ability to iterate over the items in order. For this collection to be effective, type T must implement IComparable<T> or you need to supply an external IComparer<T>. Finally, the Stack<T> and Queue<T> are two very specific collections that allow you to handle a sequential collection of objects in very specific ways. The Stack<T> is a last-in-first-out (LIFO) container where items are added and removed from the top of the stack. Typically this is useful in situations where you want to stack actions and then be able to undo those actions in reverse order as needed. The Queue<T>, on the other hand, is a first-in-first-out container which adds items at the end of the queue and removes items from the front. This is useful for situations where you need to process items in the order in which they came, such as a print spooler or waiting lines. So that's the basic collections. Let's summarize what we've learned in a quick reference table:
Collection | Ordered? | Contiguous Storage? | Direct Access? | Lookup Efficiency | Manipulate Efficiency | Notes
Dictionary | No | Yes | Via Key | Key: O(1) | O(1) | Best for high-performance lookups.
SortedDictionary | Yes | No | Via Key | Key: O(log n) | O(log n) | Compromise of Dictionary speed and ordering; uses a binary search tree.
SortedList | Yes | Yes | Via Key | Key: O(log n) | O(n) | Very similar to SortedDictionary, except the tree is implemented in an array, so it has faster lookup on preloaded data but slower loads.
List | No | Yes | Via Index | Index: O(1), Value: O(n) | O(n) | Best for smaller lists where direct access is required and no ordering.
LinkedList | No | No | No | Value: O(n) | O(1) | Best for lists where inserting/deleting in the middle is common and no direct access is required.
HashSet | No | Yes | Via Key | Key: O(1) | O(1) | Unique unordered collection; like a Dictionary except key and value are the same object.
SortedSet | Yes | No | Via Key | Key: O(log n) | O(log n) | Unique ordered collection; like SortedDictionary except key and value are the same object.
Stack | No | Yes | Only Top | Top: O(1) | O(1)* | Essentially the same as List<T> except only processed as LIFO.
Queue | No | Yes | Only Front | Front: O(1) | O(1) | Essentially the same as List<T> except only processed as FIFO.
The Original Collections: System.Collections namespace The original collection classes are largely considered deprecated by developers and by Microsoft itself. In fact they indicate that for the most part you should always favor the generic or concurrent collections, and only use the original collections when you are dealing with legacy .NET code. Because these collections are out of vogue, let's just briefly mention the original collections and their generic equivalents:
ArrayList - A dynamic, contiguous collection of objects. Favor the generic collection List<T> instead.
Hashtable - Associative, unordered collection of key-value pairs of objects. Favor the generic collection Dictionary<TKey,TValue> instead.
Queue - First-in-first-out (FIFO) collection of objects. Favor the generic collection Queue<T> instead.
SortedList - Associative, ordered collection of key-value pairs of objects. Favor the generic collection SortedList<TKey,TValue> instead.
Stack - Last-in-first-out (LIFO) collection of objects. Favor the generic collection Stack<T> instead.
In general, the older collections are not type-safe and in some cases less performant than their generic counterparts. Once again, the only reason you should fall back on these older collections is backward compatibility with legacy code and libraries.
The Concurrent Collections: System.Collections.Concurrent namespace The concurrent collections are new as of .NET 4.0 and are included in the System.Collections.Concurrent namespace. These collections are optimized for use in situations where multi-threaded read and write access of a collection is desired. The concurrent queue, stack, and dictionary work much as you'd expect. The bag and blocking collection are more specialized. Below is a summary of each, with a link to a blog post I did on each of them.
ConcurrentQueue - Thread-safe version of a queue (FIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
ConcurrentStack - Thread-safe version of a stack (LIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
ConcurrentBag - Thread-safe unordered collection of objects. Optimized for situations where a thread may be both reader and writer. For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection
ConcurrentDictionary - Thread-safe version of a dictionary. Optimized for multiple readers (allows multiple readers under the same lock). For more information see: C#/.NET Little Wonders: The ConcurrentDictionary
BlockingCollection - Wrapper collection that implements the producer/consumer paradigm. Readers can block until items are available to read; writers can block until space is available to write (if bounded). For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection
Summary The .NET BCL has lots of collections built in to help you store and manipulate collections of data. Understanding how these collections work and knowing in which situations each container is best is one of the key skills necessary to build more performant code. Choosing the wrong collection for the job can make your code much slower or even harder to maintain if you choose one that doesn't perform as well or otherwise doesn't exactly fit the situation. Remember to avoid the original collections and stick with the generic collections. If you need concurrent access, you can use the generic collections if the data is read-only, or consider the concurrent collections for mixed access if you are running on .NET 4.0 or higher - a small producer/consumer sketch follows below.
Technorati Tags: C#,.NET,Collections,Generic,Concurrent,Dictionary,List,Stack,Queue,SortedList,SortedDictionary,HashSet,SortedSet
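As a quick illustration of the concurrent collections mentioned in the summary, here's a minimal producer/consumer sketch using BlockingCollection<T> wrapping a ConcurrentQueue<T>. The work items are hypothetical:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class ProducerConsumerSample
{
    public static void Main()
    {
        // Bounded to 100 items: Add() blocks when full, consumers block when empty
        using (var workItems = new BlockingCollection<string>(new ConcurrentQueue<string>(), 100))
        {
            var producer = Task.Factory.StartNew(() =>
            {
                for (int i = 0; i < 10; i++)
                    workItems.Add("work item " + i);

                // Signal consumers that no more items will be added
                workItems.CompleteAdding();
            });

            var consumer = Task.Factory.StartNew(() =>
            {
                // Enumeration ends once CompleteAdding() is called and the collection drains
                foreach (var item in workItems.GetConsumingEnumerable())
                    Console.WriteLine("Processed " + item);
            });

            Task.WaitAll(producer, consumer);
        }
    }
}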

    Read the article

  • How to allow bind in app armor?

    - by WitchCraft
    Question: I did setup bind9 as described here: http://ubuntuforums.org/showthread.php?p=12149576#post12149576 Now I have a little problem with apparmor: If I switch it off, it works. If apparmor runs, it doesn't work, and I get the following dmesg output: [ 23.809767] type=1400 audit(1344097913.519:11): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=1540 comm="apparmor_parser" [ 23.811537] type=1400 audit(1344097913.519:12): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=1540 comm="apparmor_parser" [ 23.812514] type=1400 audit(1344097913.523:13): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=1540 comm="apparmor_parser" [ 23.821999] type=1400 audit(1344097913.531:14): apparmor="STATUS" operation="profile_load" name="/usr/sbin/mysqld" pid=1544 comm="apparmor_parser" [ 23.845085] type=1400 audit(1344097913.555:15): apparmor="STATUS" operation="profile_load" name="/usr/sbin/libvirtd" pid=1543 comm="apparmor_parser" [ 23.849051] type=1400 audit(1344097913.559:16): apparmor="STATUS" operation="profile_load" name="/usr/sbin/named" pid=1545 comm="apparmor_parser" [ 23.849509] type=1400 audit(1344097913.559:17): apparmor="STATUS" operation="profile_load" name="/usr/lib/libvirt/virt-aa-helper" pid=1542 comm="apparmor_parser" [ 23.851597] type=1400 audit(1344097913.559:18): apparmor="STATUS" operation="profile_load" name="/usr/sbin/tcpdump" pid=1547 comm="apparmor_parser" [ 24.415193] type=1400 audit(1344097914.123:19): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=1625 comm="apparmor_parser" [ 24.738631] ip_tables: (C) 2000-2006 Netfilter Core Team [ 25.005242] nf_conntrack version 0.5.0 (16384 buckets, 65536 max) [ 25.187939] ADDRCONF(NETDEV_UP): virbr0: link is not ready [ 26.004282] Ebtables v2.0 registered [ 26.068783] ip6_tables: (C) 2000-2006 Netfilter Core Team [ 28.158848] postgres (1900): /proc/1900/oom_adj is deprecated, please use /proc/1900/oom_score_adj instead. [ 29.840079] xenbr0: no IPv6 routers present [ 31.502916] type=1400 audit(1344097919.088:20): apparmor="DENIED" operation="mknod" parent=1984 profile="/usr/sbin/named" name="/var/log/query.log" pid=1989 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 34.336141] xenbr0: port 1(eth0) entering forwarding state [ 38.424359] Event-channel device installed. [ 38.853077] XENBUS: Unable to read cpu state [ 38.854215] XENBUS: Unable to read cpu state [ 38.855231] XENBUS: Unable to read cpu state [ 38.858891] XENBUS: Unable to read cpu state [ 47.411497] device vif1.0 entered promiscuous mode [ 47.429245] ADDRCONF(NETDEV_UP): vif1.0: link is not ready [ 49.366219] virbr0: port 1(vif1.0) entering disabled state [ 49.366705] virbr0: port 1(vif1.0) entering disabled state [ 49.368873] virbr0: mixed no checksumming and other settings. 
[ 97.273028] type=1400 audit(1344097984.861:21): apparmor="DENIED" operation="mknod" parent=3076 profile="/usr/sbin/named" name="/var/log/query.log" pid=3078 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 277.790627] type=1400 audit(1344098165.377:22): apparmor="DENIED" operation="mknod" parent=3384 profile="/usr/sbin/named" name="/var/log/query.log" pid=3389 comm="named" requested_mask="c" denied_mask="c" fsuid=0 ouid=0 [ 287.812986] type=1400 audit(1344098175.401:23): apparmor="DENIED" operation="mknod" parent=3325 profile="/usr/sbin/named" name="/root/tmp-gjnX0c0dDa" pid=3400 comm="named" requested_mask="c" denied_mask="c" fsuid=0 ouid=0 [ 287.818466] type=1400 audit(1344098175.405:24): apparmor="DENIED" operation="mknod" parent=3325 profile="/usr/sbin/named" name="/root/tmp-CpOtH52qU5" pid=3400 comm="named" requested_mask="c" denied_mask="c" fsuid=0 ouid=0 [ 323.166228] type=1400 audit(1344098210.753:25): apparmor="DENIED" operation="mknod" parent=3422 profile="/usr/sbin/named" name="/var/log/query.log" pid=3427 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 386.512586] type=1400 audit(1344098274.101:26): apparmor="DENIED" operation="mknod" parent=3456 profile="/usr/sbin/named" name="/var/log/query.log" pid=3459 comm="named" requested_mask="c" denied_mask="c" fsuid=0 ouid=0 [ 808.549049] type=1400 audit(1344098696.137:27): apparmor="DENIED" operation="mknod" parent=3872 profile="/usr/sbin/named" name="/var/log/query.log" pid=3877 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 894.671081] type=1400 audit(1344098782.257:28): apparmor="DENIED" operation="mknod" parent=3922 profile="/usr/sbin/named" name="/var/log/query.log" pid=3927 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 968.514669] type=1400 audit(1344098856.101:29): apparmor="DENIED" operation="mknod" parent=3978 profile="/usr/sbin/named" name="/var/log/query.log" pid=3983 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 1021.814582] type=1400 audit(1344098909.401:30): apparmor="DENIED" operation="mknod" parent=4010 profile="/usr/sbin/named" name="/var/log/query.log" pid=4012 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 1063.856633] type=1400 audit(1344098951.445:31): apparmor="DENIED" operation="mknod" parent=4041 profile="/usr/sbin/named" name="/var/log/query.log" pid=4043 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 1085.404001] type=1400 audit(1344098972.989:32): apparmor="DENIED" operation="mknod" parent=4072 profile="/usr/sbin/named" name="/var/log/query.log" pid=4077 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 1108.207402] type=1400 audit(1344098995.793:33): apparmor="DENIED" operation="mknod" parent=4102 profile="/usr/sbin/named" name="/var/log/query.log" pid=4107 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 1156.947189] type=1400 audit(1344099044.533:34): apparmor="DENIED" operation="mknod" parent=4134 profile="/usr/sbin/named" name="/var/log/query.log" pid=4136 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 1166.768005] type=1400 audit(1344099054.353:35): apparmor="DENIED" operation="mknod" parent=4150 profile="/usr/sbin/named" name="/var/log/query.log" pid=4155 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 1168.873385] type=1400 audit(1344099056.461:36): apparmor="DENIED" operation="mknod" parent=4162 profile="/usr/sbin/named" name="/var/log/query.log" pid=4167 
comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 1181.558946] type=1400 audit(1344099069.145:37): apparmor="DENIED" operation="mknod" parent=4177 profile="/usr/sbin/named" name="/var/log/query.log" pid=4182 comm="named" requested_mask="c" denied_mask="c" fsuid=0 ouid=0 [ 1199.349265] type=1400 audit(1344099086.937:38): apparmor="DENIED" operation="mknod" parent=4191 profile="/usr/sbin/named" name="/var/log/query.log" pid=4196 comm="named" requested_mask="c" denied_mask="c" fsuid=0 ouid=0 [ 1296.805604] type=1400 audit(1344099184.393:39): apparmor="DENIED" operation="mknod" parent=4232 profile="/usr/sbin/named" name="/var/log/query.log" pid=4237 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 1317.730568] type=1400 audit(1344099205.317:40): apparmor="DENIED" operation="mknod" parent=3325 profile="/usr/sbin/named" name="/tmp-nuBes0IXwi" pid=4251 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 1317.730744] type=1400 audit(1344099205.317:41): apparmor="DENIED" operation="mknod" parent=3325 profile="/usr/sbin/named" name="/tmp-ZDJA06ZOkU" pid=4252 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 1365.072687] type=1400 audit(1344099252.661:42): apparmor="DENIED" operation="mknod" parent=3325 profile="/usr/sbin/named" name="/tmp-EnsuYUrGOC" pid=4290 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 1365.074520] type=1400 audit(1344099252.661:43): apparmor="DENIED" operation="mknod" parent=3325 profile="/usr/sbin/named" name="/tmp-LVCnpWOStP" pid=4287 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 1380.336984] type=1400 audit(1344099267.925:44): apparmor="DENIED" operation="mknod" parent=4617 profile="/usr/sbin/named" name="/var/log/query.log" pid=4622 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 1437.924534] type=1400 audit(1344099325.513:45): apparmor="DENIED" operation="mknod" parent=3325 profile="/usr/sbin/named" name="/tmp-Uyf1dHIZUU" pid=4648 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 1437.924626] type=1400 audit(1344099325.513:46): apparmor="DENIED" operation="mknod" parent=3325 profile="/usr/sbin/named" name="/tmp-OABXWclII3" pid=4647 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 1526.334959] type=1400 audit(1344099413.921:47): apparmor="DENIED" operation="mknod" parent=4749 profile="/usr/sbin/named" name="/var/log/query.log" pid=4754 comm="named" requested_mask="c" denied_mask="c" fsuid=0 ouid=0 [ 1601.292548] type=1400 audit(1344099488.881:48): apparmor="DENIED" operation="mknod" parent=4835 profile="/usr/sbin/named" name="/var/log/query.log" pid=4840 comm="named" requested_mask="c" denied_mask="c" fsuid=0 ouid=0 [ 1639.543733] type=1400 audit(1344099527.129:49): apparmor="DENIED" operation="mknod" parent=4905 profile="/usr/sbin/named" name="/var/log/query.log" pid=4907 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 1916.381179] type=1400 audit(1344099803.969:50): apparmor="DENIED" operation="mknod" parent=4959 profile="/usr/sbin/named" name="/var/log/query.log" pid=4961 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 1940.816898] type=1400 audit(1344099828.405:51): apparmor="DENIED" operation="mknod" parent=4991 profile="/usr/sbin/named" name="/var/log/query.log" pid=4996 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 2043.010898] type=1400 audit(1344099930.597:52): apparmor="DENIED" operation="mknod" parent=5048 
profile="/usr/sbin/named" name="/var/log/query.log" pid=5053 comm="named" requested_mask="c" denied_mask="c" fsuid=107 ouid=107 [ 2084.956230] type=1400 audit(1344099972.545:53): apparmor="DENIED" operation="mknod" parent=3325 profile="/usr/sbin/named" name="/var/log/tmp-XYgr33RqUt" pid=5069 comm="named" requested_mask="c" denied_mask="c" fsuid=0 ouid=0 [ 2084.959120] type=1400 audit(1344099972.545:54): apparmor="DENIED" operation="mknod" parent=3325 profile="/usr/sbin/named" name="/var/log/tmp-vO24RHwL14" pid=5066 comm="named" requested_mask="c" denied_mask="c" fsuid=0 ouid=0 [ 2088.169500] type=1400 audit(1344099975.757:55): apparmor="DENIED" operation="mknod" parent=5076 profile="/usr/sbin/named" name="/var/log/query.log" pid=5078 comm="named" requested_mask="c" denied_mask="c" fsuid=0 ouid=0 [ 2165.625096] type=1400 audit(1344100053.213:56): apparmor="STATUS" operation="profile_remove" name="/sbin/dhclient" pid=5124 comm="apparmor" [ 2165.625401] type=1400 audit(1344100053.213:57): apparmor="STATUS" operation="profile_remove" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=5124 comm="apparmor" [ 2165.625608] type=1400 audit(1344100053.213:58): apparmor="STATUS" operation="profile_remove" name="/usr/lib/connman/scripts/dhclient-script" pid=5124 comm="apparmor" [ 2165.625782] type=1400 audit(1344100053.213:59): apparmor="STATUS" operation="profile_remove" name="/usr/lib/libvirt/virt-aa-helper" pid=5124 comm="apparmor" [ 2165.625931] type=1400 audit(1344100053.213:60): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/libvirtd" pid=5124 comm="apparmor" [ 2165.626057] type=1400 audit(1344100053.213:61): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/mysqld" pid=5124 comm="apparmor" [ 2165.626181] type=1400 audit(1344100053.213:62): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/named" pid=5124 comm="apparmor" [ 2165.626319] type=1400 audit(1344100053.213:63): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/tcpdump" pid=5124 comm="apparmor" [ 3709.583927] type=1400 audit(1344101597.169:64): apparmor="STATUS" operation="profile_load" name="/usr/sbin/libvirtd" pid=7484 comm="apparmor_parser" [ 3709.839895] type=1400 audit(1344101597.425:65): apparmor="STATUS" operation="profile_load" name="/usr/sbin/mysqld" pid=7485 comm="apparmor_parser" [ 3710.008892] type=1400 audit(1344101597.597:66): apparmor="STATUS" operation="profile_load" name="/usr/lib/libvirt/virt-aa-helper" pid=7483 comm="apparmor_parser" [ 3710.545232] type=1400 audit(1344101598.133:67): apparmor="STATUS" operation="profile_load" name="/usr/sbin/named" pid=7486 comm="apparmor_parser" [ 3710.655600] type=1400 audit(1344101598.241:68): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=7481 comm="apparmor_parser" [ 3710.656013] type=1400 audit(1344101598.241:69): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=7481 comm="apparmor_parser" [ 3710.656786] type=1400 audit(1344101598.245:70): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=7481 comm="apparmor_parser" [ 3710.832624] type=1400 audit(1344101598.421:71): apparmor="STATUS" operation="profile_load" name="/usr/sbin/tcpdump" pid=7488 comm="apparmor_parser" [ 3717.573123] type=1400 audit(1344101605.161:72): apparmor="DENIED" operation="open" parent=7505 profile="/usr/sbin/named" name="/var/log/query.log" pid=7510 comm="named" requested_mask="ac" denied_mask="ac" fsuid=107 ouid=0 [ 3743.667808] type=1400 
audit(1344101631.253:73): apparmor="STATUS" operation="profile_remove" name="/sbin/dhclient" pid=7552 comm="apparmor" [ 3743.668338] type=1400 audit(1344101631.257:74): apparmor="STATUS" operation="profile_remove" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=7552 comm="apparmor" [ 3743.668625] type=1400 audit(1344101631.257:75): apparmor="STATUS" operation="profile_remove" name="/usr/lib/connman/scripts/dhclient-script" pid=7552 comm="apparmor" [ 3743.668834] type=1400 audit(1344101631.257:76): apparmor="STATUS" operation="profile_remove" name="/usr/lib/libvirt/virt-aa-helper" pid=7552 comm="apparmor" [ 3743.668991] type=1400 audit(1344101631.257:77): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/libvirtd" pid=7552 comm="apparmor" [ 3743.669127] type=1400 audit(1344101631.257:78): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/mysqld" pid=7552 comm="apparmor" [ 3743.669282] type=1400 audit(1344101631.257:79): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/named" pid=7552 comm="apparmor" [ 3743.669520] type=1400 audit(1344101631.257:80): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/tcpdump" pid=7552 comm="apparmor" [ 3873.572336] type=1400 audit(1344101761.161:81): apparmor="STATUS" operation="profile_load" name="/usr/sbin/libvirtd" pid=7722 comm="apparmor_parser" [ 3873.826209] type=1400 audit(1344101761.413:82): apparmor="STATUS" operation="profile_load" name="/usr/sbin/mysqld" pid=7723 comm="apparmor_parser" [ 3873.988181] type=1400 audit(1344101761.577:83): apparmor="STATUS" operation="profile_load" name="/usr/lib/libvirt/virt-aa-helper" pid=7721 comm="apparmor_parser" [ 3874.520305] type=1400 audit(1344101762.109:84): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=7719 comm="apparmor_parser" [ 3874.520736] type=1400 audit(1344101762.109:85): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=7719 comm="apparmor_parser" [ 3874.521000] type=1400 audit(1344101762.109:86): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=7719 comm="apparmor_parser" [ 3874.528878] type=1400 audit(1344101762.117:87): apparmor="STATUS" operation="profile_load" name="/usr/sbin/named" pid=7724 comm="apparmor_parser" [ 3874.930712] type=1400 audit(1344101762.517:88): apparmor="STATUS" operation="profile_load" name="/usr/sbin/tcpdump" pid=7726 comm="apparmor_parser" [ 3971.744599] type=1400 audit(1344101859.333:89): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/libvirtd" pid=7899 comm="apparmor_parser" [ 3972.009857] type=1400 audit(1344101859.597:90): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=7900 comm="apparmor_parser" [ 3972.165297] type=1400 audit(1344101859.753:91): apparmor="STATUS" operation="profile_replace" name="/usr/lib/libvirt/virt-aa-helper" pid=7898 comm="apparmor_parser" [ 3972.587766] type=1400 audit(1344101860.173:92): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/named" pid=7901 comm="apparmor_parser" [ 3972.847189] type=1400 audit(1344101860.433:93): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=7896 comm="apparmor_parser" [ 3972.847705] type=1400 audit(1344101860.433:94): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=7896 comm="apparmor_parser" [ 3972.848150] type=1400 audit(1344101860.433:95): apparmor="STATUS" operation="profile_replace" 
name="/usr/lib/connman/scripts/dhclient-script" pid=7896 comm="apparmor_parser" [ 3973.147889] type=1400 audit(1344101860.733:96): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/tcpdump" pid=7903 comm="apparmor_parser" [ 3988.863999] type=1400 audit(1344101876.449:97): apparmor="DENIED" operation="open" parent=7939 profile="/usr/sbin/named" name="/var/log/query.log" pid=7944 comm="named" requested_mask="ac" denied_mask="ac" fsuid=107 ouid=0 [ 4025.826132] type=1400 audit(1344101913.413:98): apparmor="STATUS" operation="profile_remove" name="/sbin/dhclient" pid=7975 comm="apparmor" [ 4025.826627] type=1400 audit(1344101913.413:99): apparmor="STATUS" operation="profile_remove" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=7975 comm="apparmor" [ 4025.826861] type=1400 audit(1344101913.413:100): apparmor="STATUS" operation="profile_remove" name="/usr/lib/connman/scripts/dhclient-script" pid=7975 comm="apparmor" [ 4025.827059] type=1400 audit(1344101913.413:101): apparmor="STATUS" operation="profile_remove" name="/usr/lib/libvirt/virt-aa-helper" pid=7975 comm="apparmor" [ 4025.827214] type=1400 audit(1344101913.413:102): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/libvirtd" pid=7975 comm="apparmor" [ 4025.827352] type=1400 audit(1344101913.413:103): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/mysqld" pid=7975 comm="apparmor" [ 4025.827485] type=1400 audit(1344101913.413:104): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/named" pid=7975 comm="apparmor" [ 4025.827624] type=1400 audit(1344101913.413:105): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/tcpdump" pid=7975 comm="apparmor" [ 4027.862198] type=1400 audit(1344101915.449:106): apparmor="STATUS" operation="profile_load" name="/usr/sbin/libvirtd" pid=8090 comm="apparmor_parser" [ 4039.500920] audit_printk_skb: 21 callbacks suppressed [ 4039.500932] type=1400 audit(1344101927.089:114): apparmor="STATUS" operation="profile_remove" name="/sbin/dhclient" pid=8114 comm="apparmor" [ 4039.501413] type=1400 audit(1344101927.089:115): apparmor="STATUS" operation="profile_remove" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=8114 comm="apparmor" [ 4039.501672] type=1400 audit(1344101927.089:116): apparmor="STATUS" operation="profile_remove" name="/usr/lib/connman/scripts/dhclient-script" pid=8114 comm="apparmor" [ 4039.501861] type=1400 audit(1344101927.089:117): apparmor="STATUS" operation="profile_remove" name="/usr/lib/libvirt/virt-aa-helper" pid=8114 comm="apparmor" [ 4039.502033] type=1400 audit(1344101927.089:118): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/libvirtd" pid=8114 comm="apparmor" [ 4039.502170] type=1400 audit(1344101927.089:119): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/mysqld" pid=8114 comm="apparmor" [ 4039.502305] type=1400 audit(1344101927.089:120): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/named" pid=8114 comm="apparmor" [ 4039.502442] type=1400 audit(1344101927.089:121): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/tcpdump" pid=8114 comm="apparmor" [ 4041.425405] type=1400 audit(1344101929.013:122): apparmor="STATUS" operation="profile_load" name="/usr/lib/libvirt/virt-aa-helper" pid=8240 comm="apparmor_parser" [ 4041.425952] type=1400 audit(1344101929.013:123): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=8238 comm="apparmor_parser" [ 4058.910390] audit_printk_skb: 18 callbacks suppressed [ 4058.910401] type=1400 
audit(1344101946.497:130): apparmor="STATUS" operation="profile_remove" name="/sbin/dhclient" pid=8264 comm="apparmor" [ 4058.910757] type=1400 audit(1344101946.497:131): apparmor="STATUS" operation="profile_remove" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=8264 comm="apparmor" [ 4058.910969] type=1400 audit(1344101946.497:132): apparmor="STATUS" operation="profile_remove" name="/usr/lib/connman/scripts/dhclient-script" pid=8264 comm="apparmor" [ 4058.911185] type=1400 audit(1344101946.497:133): apparmor="STATUS" operation="profile_remove" name="/usr/lib/libvirt/virt-aa-helper" pid=8264 comm="apparmor" [ 4058.911335] type=1400 audit(1344101946.497:134): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/libvirtd" pid=8264 comm="apparmor" [ 4058.911595] type=1400 audit(1344101946.497:135): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/mysqld" pid=8264 comm="apparmor" [ 4058.911856] type=1400 audit(1344101946.497:136): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/named" pid=8264 comm="apparmor" [ 4058.912001] type=1400 audit(1344101946.497:137): apparmor="STATUS" operation="profile_remove" name="/usr/sbin/tcpdump" pid=8264 comm="apparmor" [ 4060.266700] type=1400 audit(1344101947.853:138): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=8391 comm="apparmor_parser" [ 4060.268356] type=1400 audit(1344101947.857:139): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=8391 comm="apparmor_parser" [ 5909.432749] audit_printk_skb: 18 callbacks suppressed [ 5909.432759] type=1400 audit(1344103797.021:146): apparmor="DENIED" operation="open" parent=8800 profile="/usr/sbin/named" name="/var/log/query.log" pid=8805 comm="named" requested_mask="ac" denied_mask="ac" fsuid=107 ouid=0 root@zotac:~# What can I do that it still works and I don't have to disable apparmor ?
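The denied entries above all point at named creating /var/log/query.log. A minimal sketch of the usual remedy, assuming the stock Ubuntu bind9/AppArmor layout (file locations may differ on other releases), is to allow that path in the named profile and reload it instead of disabling AppArmor:

# Add this rule to /etc/apparmor.d/local/usr.sbin.named if that local include exists,
# otherwise inside the profile block in /etc/apparmor.d/usr.sbin.named:
/var/log/query.log rw,

# Reload the profile so the change takes effect:
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.named

Alternatively, pointing the query log at /var/log/named/query.log may avoid editing the profile at all, since the stock profile normally allows writes under /var/log/named/ (worth verifying against the profile on your system).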

    Read the article

< Previous Page | 413 414 415 416 417 418 419 420 421 422 423 424  | Next Page >