Search Results

Search found 3392 results on 136 pages for 'average joe'.


  • Getting AveragePower and PeakPower for a Channel in AVAudioRecorder

    - by Biranchi
    Hi all, I am annoyed with this piece of code. I am trying to get averagePowerForChannel: and peakPowerForChannel: while recording audio, but every time I get 0.0 back. Below is my code for recording audio:

        NSMutableDictionary *recordSetting = [[NSDictionary alloc] initWithObjectsAndKeys:
            [NSNumber numberWithFloat:22050.0], AVSampleRateKey,
            [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
            [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
            [NSNumber numberWithInt:AVAudioQualityMax], AVEncoderAudioQualityKey,
            [NSNumber numberWithInt:32], AVLinearPCMBitDepthKey,
            [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
            [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
            nil];

        recorder1 = [[AVAudioRecorder alloc] initWithURL:[NSURL fileURLWithPath:audioFilePath]
                                                settings:recordSetting
                                                   error:&err];
        recorder1.meteringEnabled = YES;
        recorder1.delegate = self;
        [recorder1 prepareToRecord];
        [recorder1 record];

        levelTimer = [NSTimer scheduledTimerWithTimeInterval:0.3f
                                                      target:self
                                                    selector:@selector(levelTimerCallback:)
                                                    userInfo:nil
                                                     repeats:YES];

        - (void)levelTimerCallback:(NSTimer *)timer {
            [recorder1 updateMeters];
            NSLog(@"Peak Power : %f , %f", [recorder1 peakPowerForChannel:0], [recorder1 peakPowerForChannel:1]);
            NSLog(@"Average Power : %f , %f", [recorder1 averagePowerForChannel:0], [recorder1 averagePowerForChannel:1]);
        }

    What is the error in the code?


  • Filtering documents against a dictionary key in MongoDB

    - by Thomas
    I have a collection of articles in MongoDB that has the following structure:

        {
          'category': 'Legislature',
          'updated': datetime.datetime(2010, 3, 19, 15, 32, 22, 107000),
          'byline': None,
          'tags': {
            'party': ['Peter Hoekstra', 'Virg Bernero', 'Alma Smith',
                      'Mike Bouchard', 'Tom George', 'Rick Snyder'],
            'geography': ['Michigan', 'United States', 'North America']
          },
          'headline': '2 Mich. gubernatorial candidates speak to students',
          'text': [
            'BEVERLY HILLS, Mich. (AP) \u2014 Two Democratic and Republican gubernatorial candidates found common ground while speaking to private school students in suburban Detroit',
            "Democratic House Speaker state Rep. Andy Dillon and Republican U.S. Rep. Pete Hoekstra said Friday a more business-friendly government can help reduce Michigan's nation-leading unemployment rate.",
            "The candidates were invited to Detroit Country Day Upper School in Beverly Hills to offer ideas for Michigan's future.",
            'Besides Dillon, the Democratic field includes Lansing Mayor Virg Bernero and state Rep. Alma Wheeler Smith. Other Republicans running are Oakland County Sheriff Mike Bouchard, Attorney General Mike Cox, state Sen. Tom George and Ann Arbor business leader Rick Snyder.',
            'Former Republican U.S. Rep. Joe Schwarz is considering running as an independent.'
          ],
          'dateline': 'BEVERLY HILLS, Mich.',
          'published': datetime.datetime(2010, 3, 19, 8, 0, 31),
          'keywords': "Governor's Race",
          '_id': ObjectId('4ba39721e0e16cb25fadbb40'),
          'article_id': 'urn:publicid:ap.org:0611e36fb084458aa620c0187999db7e',
          'slug': "BC-MI--Governor's Race,2nd Ld-Writethr"
        }

    If I wanted to write a query that looked for all articles that had at least 1 geography tag, how would I do that? I have tried writing db.articles.find({'tags': 'geography'}), but that doesn't appear to work. I've also thought about changing the search parameter to 'tags.geography', but am having a devil of a time figuring out what the search predicate would be.
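
    For reference, a hedged pymongo sketch of one way to express "tags.geography has at least one element" (assuming db is a pymongo Database object): positional dot-notation works here because 'tags.geography.0' only exists when the array is non-empty.

        # Match documents whose tags.geography array has at least one element.
        cursor = db.articles.find({'tags.geography.0': {'$exists': True}})
        for doc in cursor:
            print(doc['headline'])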


  • Building v8 without JIT

    - by rames
    Hello, I would like to run some tests on v8 with and without JIT to compare performance. I know JIT will improve my average speed, but it would be nice to have some actual, more detailed test results, as I want to work with mobile platforms. I haven't found how to enable or disable JIT the way it exists in SquirrelFish (cf. ENABLE_JIT in JavaScriptCore/wtf/Platform.h). Does somebody know how to do that with v8? Thanks. Alexandre


  • C# COM objects with VB6/asp error

    - by Ken
    I'm trying to expose a C# class library via COM so that I can use it in a classic ASP web site. I've used sn -k, regasm and gacutil. About all I can do now, though, is echo back strings. Methods that take class variables as input are not working for me, i.e. my test method EchoPerson(Person p), which returns a string of the first and last name, doesn't work. I get runtime error 5 - Invalid procedure call or argument. Please let me know what I am missing. Also, I have no IntelliSense in VB. What do I need to do to get IntelliSense working? Below is my C# test code:

        namespace MrdcToFastCom
        {
            public class Person : MrdcToFastCom.IPerson
            {
                public string FirstName { get; set; }
                public string LastName { get; set; }
            }

            public class ComTester : MrdcToFastCom.IComTester
            {
                public string EchoString(string s) { return ("Echo: " + s); }
                public string Hello() { return "Hello"; }
                public string EchoPerson(ref Person p)
                {
                    return string.Format("{0} {1}", p.FirstName, p.LastName);
                }
            }
        }

    and the VB6 call:

        Private Sub btnClickMe_Click()
            Dim ct
            Set ct = New MrdcToFastCom.ComTester
            Dim p
            Set p = New MrdcToFastCom.Person
            p.FirstName = "Joe"
            p.LastName = "Test"
            Dim s
            s = ct.EchoPerson(p) 'Error on this line
            tbx1.Text = s
        End Sub


  • How to implement SCORM in Objective C

    - by iranjan
    Hey, do you know how to implement SCORM (Sharable Content Object Reference Model) in Objective-C for eLearning content? Let me explain what exactly I am looking for. I have one MCQ (multiple choice question) application which has 4 questions. On attempting each question, I want my application to send the result (whether the user answered correctly or not) to a SCORM-compatible server. The communication channel should be two-way: at the end of the MCQ, I may want to show a result which will come from the server with some calculations (like Score: 85%, number of attempts: 16, average score: 16.7%, etc.). How should I go about it? Please guide me if you have already achieved this. Regards, Ranjan.


  • Metaprogramming - self-explanatory code - tutorials, articles, books

    - by elena
    Hello everybody, I am looking into improving my programming skills (actually I try to do my best to suck less each year, as our Jeff Atwood put it), so I was thinking of reading up on metaprogramming and self-explanatory code. I am looking for something like an idiot's guide to this (free books for download, online resources). I also want more than your average wiki page, and something language-agnostic, or preferably with Java examples. Do you know of such resources that will allow me to efficiently put all of it into practice? (I know experience has a lot to say in all of this, but I kind of want to build experience while avoiding the flow of bad decisions - experience - good decisions.) Thank you!


  • Extract Email Attachments from Outlook (exchange server) using C#

    - by ChokkaMedex
    I need to run a script or service which can automatically extract the attachment file from email sent by a specific email address ([email protected]). The attachment file will be in .zip format, and I need to unzip it. I need to do this task in a completely automated fashion. On average, I will receive only one email per week. I need to write the program in C#.Net! Kindly help me by sharing your logic. Many thanks in advance!
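
    A hedged sketch of the flow in Python 3 (the poster wants C#, but the steps are identical: connect to the mailbox, filter by sender, pull the .zip attachment, unzip it). The host, credentials, and sender address below are hypothetical placeholders, and the Exchange server must have IMAP enabled for this route to work at all:

        import email
        import imaplib
        import io
        import zipfile

        # Hypothetical host and account; Exchange must expose IMAP.
        M = imaplib.IMAP4_SSL('mail.example.com')
        M.login('me@example.com', 'secret')
        M.select('INBOX')

        # Only unseen mail from the (hypothetical) sender address.
        _, data = M.search(None, '(UNSEEN FROM "sender@example.com")')
        for num in data[0].split():
            _, parts = M.fetch(num, '(RFC822)')
            msg = email.message_from_bytes(parts[0][1])
            for part in msg.walk():
                name = part.get_filename()
                if name and name.lower().endswith('.zip'):
                    payload = part.get_payload(decode=True)
                    zipfile.ZipFile(io.BytesIO(payload)).extractall('incoming')
        M.logout()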


  • Algorithm to determine thread "hotness"

    - by nickf
    I'm trying to come up with a way to determine how "hot" certain threads are in a forum. What criteria would you use and why? How would these come together to give a hotness score? The criteria I'm thinking of include:

    - how many replies
    - how long since the last reply
    - average time between replies

    The problems this algorithm must solve:

    - A thread which has 500 replies is clearly hot, unless the last reply was over a year ago.
    - A thread with 500 replies that was replied to a second ago is clearly hot, unless it's taken 4 years to reach 500 replies.
    - A thread with 15 replies in the last 4 minutes is really hot!

    Any ideas, thoughts or complete solutions out there?
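
    One possible scoring function, sketched in Python: reply volume decayed by time since the last reply, so a busy-but-stale thread cools off. The gravity exponent and the +2 offset are arbitrary tuning knobs, not anything canonical:

        import time

        def hotness(reply_count, last_reply_ts, gravity=1.5):
            """Score = replies / (hours since last reply + 2) ** gravity.

            A thread with 500 replies but a year-old last reply decays to
            almost nothing; 15 replies a few minutes old scores very high.
            """
            hours_idle = max((time.time() - last_reply_ts) / 3600.0, 0.0)
            return reply_count / (hours_idle + 2.0) ** gravity

    Folding in the third criterion (average time between replies) could be done by multiplying by replies-per-hour over the thread's lifetime, which penalizes the 4-years-to-500-replies case.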


  • How to extract frequency information from samples from PortAudio using FFTW in C

    - by houbysoft
    Hi all, I want to make a program that records audio data using PortAudio (I have this part done) and then displays the frequency information of that recorded audio (for now, I'd like to display the average frequency of each group of samples as they come in). From some research I've done, I know that I need to do an FFT. So I googled for a library to do that in C, and found FFTW. However, now I am a little lost. What exactly am I supposed to do with the samples I recorded to extract some frequency information from them? What kind of FFT should I use (I assume I'd need a real-data 1D transform?)? And once I've done the FFT, how do I get the frequency information from the data it gives me? Thanks a lot in advance.
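
    The general recipe, sketched here in Python/NumPy rather than C/FFTW (the steps map one-to-one onto a real-to-complex 1D plan such as fftw_plan_dft_r2c_1d): window the block, run a real-input FFT, take bin magnitudes, and convert the peak bin index back to Hz. Names are illustrative:

        import numpy as np

        def dominant_frequency(samples, sample_rate):
            """Strongest frequency (Hz) in one block of real-valued samples."""
            n = len(samples)
            windowed = np.asarray(samples, dtype=float) * np.hanning(n)  # reduce leakage
            spectrum = np.fft.rfft(windowed)           # real-input 1D FFT
            magnitudes = np.abs(spectrum)              # per-bin amplitude
            peak = int(np.argmax(magnitudes[1:])) + 1  # skip the DC bin
            return peak * sample_rate / float(n)       # bin index -> Hz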


  • Unexpected Blank lines in python output

    - by Martlark
    I have a bit of code that runs through a dictionary and outputs the values from it in CSV format. Strangely, I'm getting a couple of blank lines where rows with values should be. I've read the code and can't understand how anything except lines with commas can be output. The blank line should have values in it, so an extra \n is not the cause. Can anyone advise why I'd be getting blank lines? On other runs, the missing line appears. Using Python 2.6.5.

    Missing line:

        6415, 6469, -4.60, clerical, 2, ,,,joe,030193027org,joelj,030155640dup

    Bit of code:

        tfile = file(path, 'w')
        tfile.write('Rec_ID_A, Rec_ID_B, Weight, Assigned, Run, By, On, Comment\n')
        rec_num_a = 0
        while (rec_num_a <= max_rec_num_a):
            try:
                value = self.dict['DA' + str(rec_num_a)]
            except:
                value = [0, 0, 0, 'rejected']
            if (value[3] != 'rejected'):
                weightValue = "%0.2f" % value[2]
                line = value[0][1:] + ', ' + value[1][1:] + ', ' + weightValue \
                       + ', ' + str(value[3]) + ', ' + str(value[4])
                if (len(value) > 5):
                    line = line + ', ' + value[5] + ',' + value[6] + ',' + value[7]
                (a_pkey, b_pkey) = self.derive_pkeys(value)
                line = line + a_pkey + b_pkey
                tfile.write(line + '\n')
            rec_num_a += 1

    Sample output:

        6388, 2187, 76.50, clerical, 1, ,,,cameron,030187639org,cameron,030187639org
        6398, 2103, 70.79, clerical, 1, ,,,caleb,030189225org,caldb,030189225dup
        6402, 2205, 1.64, clerical, 2, ,,,jenna,030190334org,cameron,020305169dup
        6409, 7892, 79.09, clerical, 1, ,,,liam,030191863org,liam,030191863org
        6416, 11519, 79.09, clerical, 1, ,,,thomas,030193156org,thomas,030193156org
        6417, 8854, 6.10, clerical, 2, ,,,ruby,030193713org,mia,020160397org
        6421, 2864, -0.84, clerical, 2, ,,,kristin,030194394org,connou,020023478dup
        6423, 413, 75.63, clerical, 1, ,,,adrian,030194795org,adriah,030194795dup
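
    Not a diagnosis, but a hedged alternative worth noting: the bare except: can silently mask real errors, and hand-assembling comma strings is fragile. A sketch of the same loop using the csv module and dict.get (the pkey columns are omitted for brevity):

        import csv

        # 'wb' because this is Python 2.x; on 3.x use open(path, 'w', newline='').
        with open(path, 'wb') as f:
            writer = csv.writer(f)
            writer.writerow(['Rec_ID_A', 'Rec_ID_B', 'Weight', 'Assigned',
                             'Run', 'By', 'On', 'Comment'])
            for rec_num_a in range(max_rec_num_a + 1):
                # dict.get with a default replaces the bare except.
                value = self.dict.get('DA' + str(rec_num_a), [0, 0, 0, 'rejected'])
                if value[3] != 'rejected':
                    row = [value[0][1:], value[1][1:], '%0.2f' % value[2],
                           value[3], value[4]] + list(value[5:8])
                    writer.writerow(row)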


  • LINQ query performance, comparing compiled vs. non-compiled queries

    - by AG.
    Hello guys, I was wondering: if I extract the common where-clause into a common expression, would it make my queries much faster, given, say, something like 10 LINQ queries on a collection with the exact same first part of the where clause? I have done a small example to explain a bit more:

        public class Person
        {
            public string First { get; set; }
            public string Last { get; set; }
            public int Age { get; set; }
            public String Born { get; set; }
            public string Living { get; set; }
        }

        public sealed class PersonDetails : List<Person> { }

        PersonDetails d = new PersonDetails();
        d.Add(new Person() { Age = 29, Born = "Timbuk Tu", First = "Joe", Last = "Bloggs", Living = "London" });
        d.Add(new Person() { Age = 29, Born = "Timbuk Tu", First = "Foo", Last = "Bar", Living = "NewYork" });

        Expression<Func<Person, bool>> exp = (a) => a.Age == 29;
        Func<Person, bool> commonQuery = exp.Compile();

        var lx = from y in d
                 where commonQuery.Invoke(y) && y.Living == "London"
                 select y;

        var bx = from y in d
                 where y.Age == 29 && y.Living == "NewYork"
                 select y;

        Console.WriteLine("All Details {0}, {1}, {2}, {3}, {4}", lx.Single().Age, lx.Single().First,
            lx.Single().Last, lx.Single().Living, lx.Single().Born);
        Console.WriteLine("All Details {0}, {1}, {2}, {3}, {4}", bx.Single().Age, bx.Single().First,
            bx.Single().Last, bx.Single().Living, bx.Single().Born);

    So can some of the gurus here give me some advice on whether it would be good practice to write queries like lx or like bx? Any input would be highly appreciated. Thanks, AG


  • Testing a scoped find in a Rails controller with RSpec

    - by Joseph DelCioppio
    I've got a controller called SolutionsController whose index action is different depending on the value of params[:user_id]. If it's nil, it simply displays all of the solutions on my site, but if it's not nil, it displays all of the solutions for the given user id. Here it is:

        def index
          if(params[:user_id])
            @solutions = @user.solutions.find(:all)
          else
            @solutions = Solution.find(:all)
          end
        end

    and @user is determined like this:

        private

        def load_user
          if(params[:user_id])
            @user = User.find(params[:user_id])
          end
        end

    I've got an RSpec test for the index action when the user is nil:

        describe "GET index" do
          context "when user_id is nil" do
            it "should find all of the solutions" do
              Solution.should_receive(:find).with(:all).and_return(@solutions)
              get :index
            end
          end
        end

    However, can someone tell me how to write a similar test for the other half of my controller, when the user id isn't nil? Something like:

        describe "GET index" do
          context "when user_id isn't nil" do
            before(:each) do
              @user = Factory.create(:user)
              @solutions = 7.times{Factory.build(:solution, :user => @user)}
              @user.stub!(:solutions).and_return(@solutions)
            end

            it "should find all of the solutions owned by a user" do
              @user.should_receive(:solutions).and_return(@solutions)
              get :index, :user_id => @user.id
            end
          end
        end

    But that doesn't work. Can someone help me out? Joe



  • Why is Cacti showing an empty graph, even though the rrd file is created?

    - by Divya mohan Singh
    Hi, I have developed my own SNMP service, and I want to plot a graph of a provided OID, so I have created the graph in Cacti.

    - It is showing the device as up.
    - It is creating the rrd file (RRDTool says OK).
    - It is showing the graph, but the graph is empty.

    However, when I check the file with rrdtool fetch ... AVERAGE, it shows me only nan values, even though the monitored OID has the value 47 and I have set min=0 and max=100. I am using the Cacti appliance by rPath (http://www.rpath.org/ui/#/appliances?id=http://www.rpath.org/api/products/cacti-appliance). Still I cannot get the value to show on the graph. Where is the problem? Can anyone please tell me?


  • What's the right way to calculate derived data in a Flex AdvancedDataGrid using summaries?

    - by Chris R
    Here's the gist of the problem: I have a set of rows of data with (say) field1 to field4 in them. I'm using a GroupingCollection to group on field1 and field2. So, I have something like this:

        f1.1
            f2.1
                f3.1 f4.1
                f3.2 f4.2
            f2.2
                f3.3 f4.3
                f3.4 f4.4
                f3.5 f4.5
        f1.2
            f2.1
                f3.6 f4.6
            f2.2
                f3.7 f4.7
                f3.8 f4.8
                f3.9 f4.9

    (or at least, I hope that's clear enough) I need to calculate some derived values for each leaf row; for example, the ratio of f3 to the average of all f3 values in that particular part of the tree. So, for f3.7 I need to calculate f3.7 / avg(f3.7..f3.9), fill that into the f3_index property on the row, and display that in lieu of f3 itself. So, basically, what it looks like I have to do is add source field values in the summarizeFunction implementation. It seems to me that there must be a better way of doing this. Is there?
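
    Setting the Flex specifics aside, the derived value itself is just "each leaf divided by its group's mean". A sketch of that computation in Python/pandas (column names and sample values hypothetical), which may help sanity-check whatever the summarizeFunction ends up producing:

        import pandas as pd

        df = pd.DataFrame({
            'f1': ['a', 'a', 'a', 'b', 'b', 'b'],
            'f2': ['x', 'x', 'y', 'x', 'y', 'y'],
            'f3': [10.0, 20.0, 30.0, 5.0, 15.0, 25.0],
        })

        # Ratio of each row's f3 to the mean f3 of its (f1, f2) group.
        df['f3_index'] = df['f3'] / df.groupby(['f1', 'f2'])['f3'].transform('mean')
        print(df)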


  • How do I regenerate statistics in Openx?

    - by Martin Bauer
    Due to faulty hardware, statistics generated over a 2-week period were significantly higher than normal (10,000 times higher than normal). After moving the application to a new server, the problem rectified itself. The issue I have is that there are 2 weeks of stats that are clearly wrong. I have checked the raw impressions table for the affected fortnight and it seems to be correct (i.e. stats per banner per day match the average for the previous month). Looking at the intermediate & summary impressions tables, the values are inflated. I understand from the OpenX forum (http://forum.openx.org/index.php?s=7796fd9dae40e020a010773746f3ada9&showtopic=503424297) that it's possible to regenerate stats from the raw data, but it will only regenerate stats one hour at a time, meaning regenerating stats for 2 weeks would be very time consuming. Is there another, more efficient way to regenerate the stats from the raw data for the affected fortnight?


  • flow 2 columns of text automatically with CSS

    - by Joseph Mastey
    I have code similar to the following:

        <p>This is paragraph 1. Lorem ipsum ... </p>
        <p>This is paragraph 2. Lorem ipsum ... </p>
        <p>This is paragraph 3. Lorem ipsum ... </p>
        <p>This is paragraph 4. Lorem ipsum ... </p>
        <p>This is paragraph 5. Lorem ipsum ... </p>
        <p>This is paragraph 6. Lorem ipsum ... </p>

    I'd like to, without markup if possible, cause this text to flow into two columns (1-3 on the left, 4-6 on the right). The reason for my hesitation to add a column using a <div> is that this text is entered by the client via a WYSIWYG editor, so any elements I inject are likely to be killed later on inexplicably. Thanks, Joe


  • zlib memory usage / performance with 500kb of data

    - by unixman83
    Is zlib worth it? Are there other, better-suited compressors? I am using an embedded system. Frequently, I have only 3MB of RAM or less available to my application, so I am considering using zlib to compress my buffers. I am concerned about the overhead, however. The buffer's average size will be 30kb, which probably won't compress well with zlib. Anyone know of a good compressor for extremely limited memory environments? However, I will experience occasional maximum buffer sizes of 700kb, with 500kb much more common. Is zlib worth it in this case, or is the overhead too much to justify? My sole considerations for compression are the RAM overhead of the algorithm and performance at least as good as zlib.
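
    For scale, a hedged sketch using Python's zlib bindings (the same knobs exist in the C API as deflateInit2's windowBits and memLevel): shrinking the window and memLevel cuts deflate's working memory, roughly (1 << (windowBits+2)) + (1 << (memLevel+9)) bytes per the zlib documentation, at some cost in ratio. The values below are illustrative, not tuned:

        import zlib

        # windowBits=12 -> 4 KB LZ77 window; memLevel=5 -> smaller hash tables.
        # The defaults (15 and 8) need roughly 256 KB for compression state.
        co = zlib.compressobj(6, zlib.DEFLATED, 12, 5)
        compressed = co.compress(buffer_bytes) + co.flush()  # buffer_bytes: a 30-700 KB buffer

        # The decompressor's window must be at least as large as the compressor's.
        plain = zlib.decompressobj(12).decompress(compressed)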


  • Slow insert speed in Postgresql memory tablespace

    - by Prashant
    Hi, I have a requirement where I need to store records at a rate of 10,000 records/sec into a database (with indexing on a few fields). The number of columns in one record is 25. I am doing a batch insert of 100,000 records in one transaction block. To improve the insertion rate, I changed the tablespace from disk to RAM. With that I am able to achieve only 5,000 inserts per second. I have also done the following tuning in the postgres config:

    - Indexes: no
    - fsync: false
    - logging: disabled

    Other information:

    - Tablespace: RAM
    - Number of columns in one row: 25 (mostly integers)
    - CPU: 4 core, 2.5 GHz
    - RAM: 48 GB

    I am wondering why a single insert query is taking around 0.2 msec on average when the database is not writing anything to disk (as I am using a RAM-based tablespace). Is there something I am doing wrong? Help appreciated. Prashant
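
    One standard lever to try, sketched with Python and psycopg2 (connection string and table name hypothetical): COPY moves rows in bulk with far less per-statement parsing and round-trip overhead than INSERT, which is often the difference between thousands and tens of thousands of rows per second:

        import io
        import psycopg2

        conn = psycopg2.connect('dbname=test')  # hypothetical DSN
        cur = conn.cursor()

        # Stream the batch as tab-separated text through COPY.
        buf = io.StringIO()
        for row in rows:  # rows: iterable of 25-column tuples, assumed built upstream
            buf.write('\t'.join(str(col) for col in row) + '\n')
        buf.seek(0)

        cur.copy_from(buf, 'records')  # hypothetical target table
        conn.commit()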


  • Starting a Blog using Microsoft.Net technologies

    - by manav inder
    I want to start a blog using Microsoft technologies. My primary reason is to get more in sync with technologies which are very much in demand. It does not matter how steep the learning curve is, as I am willing to devote all the time in the world. There is a lot going on, like the Microsoft Web API, DotNetNuke, MVC, SPAs, etc. Let me tell you what I know: I have very good experience in developing database-driven .NET applications using WinForms and WPF, average experience in ASP.NET and ASP.NET MVC, and I am good with Entity Framework, ADO.NET and WCF REST services, as well as IoC/DI.


  • Entity Framework and Sql Server view question

    - by Sergio Romero
    Hi to all, for several reasons that I don't have the liberty to talk about, we are defining a view on our SQL Server 2005 database like so:

        CREATE VIEW [dbo].[MeterProvingStatisticsPoint]
        AS
        SELECT
            CAST(0 AS BIGINT) AS 'RowNumber',
            CAST(0 AS BIGINT) AS 'ProverTicketId',
            CAST(0 AS INT) AS 'ReportNumber',
            GETDATE() AS 'CompletedDateTime',
            CAST(1.1 AS float) AS 'MeterFactor',
            CAST(1.1 AS float) AS 'Density',
            CAST(1.1 AS float) AS 'FlowRate',
            CAST(1.1 AS float) AS 'Average',
            CAST(1.1 AS float) AS 'StandardDeviation',
            CAST(1.1 AS float) AS 'MeanPlus2XStandardDeviation',
            CAST(1.1 AS float) AS 'MeanMinus2XStandardDeviation'
        WHERE 0 = 1

    The idea is that the Entity Framework will create an entity based on this query, which it does, but it generates it with a warning that states the following:

        warning 6002: The table/view 'Keystone_Local.dbo.MeterProvingStatisticsPoint' does not
        have a primary key defined. The key has been inferred and the definition was created as
        a read-only table/view.

    And it decides that the CompletedDateTime field will be the entity's primary key. We are using EdmGen to generate the model. Is there a way to not have the Entity Framework include any field of this view as a primary key? Thanks for the help.


  • How do I make "simple" throughput servlet-filter?

    - by Tommy
    I'm looking to create a filter that can give me two things: the number of requests per minute, and the average response time per minute. I already have the individual readings; I'm just not sure how to add them up. My filter captures every request and records the time each request takes:

        public void doFilter(ServletRequest request, ...) {
            long start = System.currentTimeMillis();
            chain.doFilter(request, response);
            long stop = System.currentTimeMillis();
            String time = Util.getTimeDifferenceInSec(start, stop);
        }

    This information will be used to create some pretty Google Chart charts. I don't want to store the data in any database - just a way to get the current numbers out when requested. As this is a high-volume application, low overhead is essential. I'm assuming my application server doesn't provide this information.
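
    One way to do the adding-up, sketched in Python for brevity (the same bucket-per-minute idea ports directly to a Java filter, e.g. with a ConcurrentHashMap): keep a count and a running total of response times keyed by the current minute, and compute the average only on read:

        import threading
        import time
        from collections import defaultdict

        class MinuteStats(object):
            """Request count and average response time, bucketed per minute."""

            def __init__(self):
                self._lock = threading.Lock()
                self._buckets = defaultdict(lambda: [0, 0.0])  # minute -> [count, total_ms]

            def record(self, elapsed_ms):
                minute = int(time.time() // 60)
                with self._lock:
                    bucket = self._buckets[minute]
                    bucket[0] += 1
                    bucket[1] += elapsed_ms

            def current(self):
                minute = int(time.time() // 60)
                with self._lock:
                    count, total = self._buckets.get(minute, [0, 0.0])
                return count, (total / count if count else 0.0)

        stats = MinuteStats()
        stats.record(12.5)      # call from the filter with each request's time
        print(stats.current())  # -> (1, 12.5)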


  • Active Directory - Query Group for all machines

    - by Ben Cawley
    Hi, I'm trying to obtain a list of all machines that are members of a known group. I have the group GUID and am constructing a query using the "memberOf=" format, filtering by objectClass. This works fine, but doesn't return machines whose primaryGroup attribute is set to the known group; in that case, the machine won't be returned. I've found the explanation of why this is in the following link (see Joe Kaplan's response):

        http://www.eggheadcafe.com/software/aspnet/29773581/active-directory-query-c.aspx

    Unfortunately, the answer outlined there is how to obtain the list of groups for a given user. I'd like to do the reverse and, from a given group, obtain the list of machines. It seems that the primary group information is stored on the machine/user side, so I'm not sure if what I want to do is even possible. I had thought I would be able to query the tokenGroups attribute of the known group and then construct a query to return all machines that have the tokenGroups attribute set, but it seems that not all groups have this attribute. Does anyone have any ideas or suggestions? If any clarification is needed, let me know! Cheers, Ben
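
    A hedged sketch of the usual two-step approach (read the group's RID from its constructed primaryGroupToken attribute, then OR it into the member filter), written with Python's ldap3 since the poster's language isn't stated; the server, credentials, and DNs are hypothetical:

        from ldap3 import Server, Connection, BASE, SUBTREE

        conn = Connection(Server('dc01.example.com'),
                          'EXAMPLE\\svc_query', 'secret', auto_bind=True)

        # Step 1: the group's RID, via the constructed primaryGroupToken
        # attribute (constructed attributes need a base-scope read).
        group_dn = 'CN=MyMachines,OU=Groups,DC=example,DC=com'
        conn.search(group_dn, '(objectClass=group)', search_scope=BASE,
                    attributes=['primaryGroupToken'])
        token = conn.entries[0].primaryGroupToken.value

        # Step 2: machines that are direct members OR have this group
        # as their primary group.
        flt = ('(&(objectCategory=computer)(|(memberOf=%s)(primaryGroupID=%s)))'
               % (group_dn, token))
        conn.search('DC=example,DC=com', flt, search_scope=SUBTREE, attributes=['cn'])
        for entry in conn.entries:
            print(entry.cn)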


  • Create a HTML table from nested maps (and vectors)

    - by Kenny164
    I'm trying to create a table (a work schedule) I had coded previously using Python; I think it would be a nice introduction to the Clojure language for me. I have very little experience in Clojure (or Lisp, for that matter), and I've done my rounds in Google and a good bit of trial and error, but I can't seem to get my head around this style of coding. Here is my sample data (it will be coming from an SQLite database in the future):

        (def smpl2 (ref {"Salaried"
                         [{"John Doe" ["12:00-20:00" nil nil nil "11:00-19:00"]}
                          {"Mary Jane" [nil "12:00-20:00" nil nil nil "11:00-19:00"]}]
                         "Shift Manager"
                         [{"Peter Simpson" ["12:00-20:00" nil nil nil "11:00-19:00"]}
                          {"Joe Jones" [nil "12:00-20:00" nil nil nil "11:00-19:00"]}]
                         "Other"
                         [{"Super Man" ["07:00-16:00" "07:00-16:00" "07:00-16:00"
                                        "07:00-16:00" "07:00-16:00"]}]}))

    I was trying to step through this originally using for, then moving on to doseq and finally domap (which seems more successful), and dumping the contents into an HTML table (my original Python program output this from an SQLite database into an Excel spreadsheet using COM). Here is my attempt (the create-table fn):

        (defn html-doc [title & body]
          (html
            (doctype "xhtml/transitional")
            [:html
              [:head [:title title]]
              [:body body]]))

        (defn create-table []
          [:h1 "Schedule"]
          [:hr]
          [:table (:style "border: 0; width: 90%")
            [:th "Name"][:th "Mon"][:th "Tue"][:th "Wed"]
            [:th "Thur"][:th "Fri"][:th "Sat"][:th "Sun"]
            [:tr
              (domap [ct @smpl2]
                [:tr [:td (key ct)]
                  (domap [cl (val ct)]
                    (domap [c cl]
                      [:tr [:td (key c)]]))])]])

        (defroutes tstr
          (GET "/" ((html-doc "Sample" create-table)))
          (ANY "*" 404))

    That outputs the table with the sections (Salaried, Shift Manager, etc.) and the names in the sections. I just feel like I'm abusing domap by nesting it too many times, as I'll probably need to add more domaps just to get the shift times in their proper columns, and the code is getting a 'dirty' feel to it. I apologize in advance if I'm not including enough information; I don't normally ask for help on coding, and this is also my 1st SO question :). If you know any better approaches to do this, or even tips or tricks I should know as a newbie, they are definitely welcome. Thanks.


  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC. It is very technical, but I understand the basic concept: instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions the kind of scheduling algorithms being used (real-time and link-share), it always only mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that).

    Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented for Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There never have been two independent service curves for either one as you currently find in BSD and Linux.

    Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist).

    This all makes the HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below. Here are some questions I cannot answer because the tutorials contradict each other:

    1. What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper I can give them a higher priority by using a curve; that's what HFSC is all about, after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s], both have twice the priority of A2 and B2 automatically (still only getting 128 kbit/s on average).

    2. Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)?

    3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth + link-share bandwidth. What is the truth?

    4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference if A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?

    5. If I use the separate real-time curve to increase priorities of classes, why would I need "curves" at all? Why is real-time not a flat value and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the curves' shape (not average bandwidth, but their slopes) to be different for real-time and link-share traffic?

    6. According to the little documentation available, real-time curve values are totally ignored for inner classes (class A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for 3.3 Sample configuration) set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)

    7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?

    8. One tutorial said you shall forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. Link-share is the bandwidth that this class wants to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: Is this more or less accurate, or a total misunderstanding of HFSC?

    9. And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope or each a different one, and how do I find out the right slope?

    I still haven't lost hope that there exists at least a handful of people in this world that really understood HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)

