Search Results

Search found 4689 results on 188 pages for 'no average geek'.


  • iPhone long plist

    - by Zac Altman
    I have some data I want to add to my app: about 650 categories (each with a name and ID number), each with an average of 85 items (each with a name and ID number). Will the iPhone support such a large plist? I want to first display the categories in a UITableView; when a category is selected, I want to display all of the associated items. With such a large plist, I'm not sure whether the iPhone will lag when loading the items. At over 51,000 lines, it seems like it might.
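    One way to sidestep the load-time question is to make sure the app never parses all 51,000 lines at once: keep a small index plist for the category table view, plus one small plist per category that is loaded only when its row is tapped. A minimal preprocessing sketch (desktop-side, in Python; plistlib is in the standard library, but the file names and the top-level structure here are assumptions, not taken from the question):

        # Split one large plist into an index plist plus one plist per category,
        # so the app lazy-loads a single category's items on demand.
        # Assumed input shape: {"categories": [{"id":..., "name":..., "items":[...]}]}
        import plistlib
        from pathlib import Path

        with open("AllData.plist", "rb") as f:          # hypothetical input file
            data = plistlib.load(f)

        out_dir = Path("Categories")
        out_dir.mkdir(exist_ok=True)

        index = []                                      # what the first UITableView shows
        for cat in data["categories"]:
            index.append({"id": cat["id"], "name": cat["name"]})
            with open(out_dir / f"category_{cat['id']}.plist", "wb") as f:
                plistlib.dump({"items": cat["items"]}, f)

        with open("CategoryIndex.plist", "wb") as f:
            plistlib.dump({"categories": index}, f)

    With that split, the first screen parses roughly 650 entries instead of 55,000, and each detail screen parses about 85.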

    Read the article

  • Should we use Visual Studio 2010 for all SQL Server Database Development?

    - by Luke
    Our company currently has seven dedicated SQL Server 2008 servers, each running an average of 10 databases. All databases have many stored procedures and UDFs that commonly reference other databases, both on the same server and across linked servers. We currently use SSMS for all database-related administration and development, but we have recently purchased Visual Studio 2010, primarily for ongoing C# WinForms and ASP.NET development. I have used VS2010 to perform schema comparisons when rolling out changes from a development server into production, and I'm finding it great for this task. We would like to consider using VS2010 for all database development going forward, but as far as I understand, we would have to set up ALL databases as projects because of the dependencies on linked servers, etc. My question is: do you have any experience using VS2010 for database development in a similar environment? Is it easy to use in tandem with SSMS, or is it a one-way street once VS2010 projects have been set up for all databases? Can you make any recommendations/impart any experience with a similar scenario? Thanks, Luke

    Read the article

  • How to use RRDTOOL for one value per day

    - by Octopus
    I have to create a graphical representation of staff salary. The staff get their salaries per day, and I have the information in the format below. This is one month of data, i.e. 1st March to 31st March: <DATE>,<NAME1>,<NAME2>,<NAME3>......<NAME N> YYYY-MM-DD,name1,name2,name3,... and so on. 1) Is rrdtool a good solution to create graphs and find AVERAGE, MAX and MIN? 2) If yes, how can I use the above CSV file to create an RRD? 3) If no, what else can I use to automate the graphical information on my website? Any suggestion in Perl would be really appreciated.
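    For what it's worth, rrdtool does handle one-value-per-day data if the RRD is created with a daily step. The question asks for Perl, but the rrdtool side looks the same from any language; here is a sketch in Python that shells out to the rrdtool command line (file names are illustrative, and rrdtool data-source names must be short alphanumeric strings, so real staff names may need sanitizing):

        # Sketch: build a daily-step RRD from the salary CSV and feed it,
        # via the rrdtool command-line tool. Assumes a header row of names
        # and data rows of "YYYY-MM-DD,value1,value2,...".
        import csv
        import subprocess
        import time

        STEP = 86400  # one value per day

        with open("salaries.csv") as f:
            rows = list(csv.reader(f))
        names, data = rows[0][1:], rows[1:]

        first = int(time.mktime(time.strptime(data[0][0], "%Y-%m-%d")))
        sources = [f"DS:{n}:GAUGE:{2 * STEP}:0:U" for n in names]  # heartbeat = 2 days
        archives = [f"RRA:{cf}:0.5:1:400" for cf in ("AVERAGE", "MAX", "MIN")]
        subprocess.run(["rrdtool", "create", "salaries.rrd",
                        "--start", str(first - STEP), "--step", str(STEP),
                        *sources, *archives], check=True)

        for row in data:
            ts = int(time.mktime(time.strptime(row[0], "%Y-%m-%d")))
            subprocess.run(["rrdtool", "update", "salaries.rrd",
                            f"{ts}:{':'.join(row[1:])}"], check=True)

    Graphs then come from rrdtool graph with a DEF per person, e.g. DEF:x=salaries.rrd:name1:AVERAGE, and likewise for MAX and MIN.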

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC. It is very technical, but I understand the basic concept: instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions the two kinds of scheduling algorithms being used (real-time and link-share), it always only mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that).

    Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented for Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There have never been two independent service curves for either one, as you currently find in BSD and Linux.

    Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works.

    Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below: [image: a class hierarchy with a 512 kbit/s root, inner classes A and B at 256 kbit/s each, and leaf classes A1, A2, B1, B2 at 128 kbit/s each] (for concreteness, a tc sketch of this hierarchy follows the questions below). Here are some questions I cannot answer because the tutorials contradict each other:

    1) What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper, I can give them a higher priority by using a curve; that's what HFSC is all about, after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s], both have twice the priority of A2 and B2 automatically (still only getting 128 kbit/s on average).
    2) Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)?
    3) Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth + link-share bandwidth. What is the truth?
    4) Assuming A2 and B2 are both 128 kbit/s, does it make any difference if A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?
    5) If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why is real-time not a flat value, and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the curves' shapes (not the average bandwidth, but their slopes) to be different for real-time and link-share traffic?
    6) According to the little documentation available, real-time curve values are totally ignored for inner classes (class A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for "3.3 Sample configuration") set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)
    7) Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?
    8) One tutorial said you shall forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get; link-share is the bandwidth that this class wants to become fully satisfied, but satisfaction cannot be guaranteed (in case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says); for all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: is this more or less accurate, or a total misunderstanding of HFSC?
    9) And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope, or each a different one, and how do I find out the right slope?

    I still haven't lost hope that there exists at least a handful of people in this world who really understand HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
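    For concreteness, here is the sample hierarchy above written out for the Linux implementation, as a sketch (Python shelling out to tc; "eth0" is an assumed interface and the commands need root). It only builds the classes from the example, including the [256kbit/s 20ms 128kbit/s] real-time curve on A1 and B1 discussed in question 1; it does not settle any of the questions:

        # Sketch: the example hierarchy as tc-hfsc commands.
        import subprocess

        DEV = "eth0"  # assumed interface

        def tc(args):
            subprocess.run(["tc"] + args.split(), check=True)

        tc(f"qdisc add dev {DEV} root handle 1: hfsc default 12")
        tc(f"class add dev {DEV} parent 1: classid 1:1 hfsc ls m2 512kbit ul m2 512kbit")
        # inner classes A (1:10) and B (1:20), 256 kbit/s link-share each
        tc(f"class add dev {DEV} parent 1:1 classid 1:10 hfsc ls m2 256kbit")
        tc(f"class add dev {DEV} parent 1:1 classid 1:20 hfsc ls m2 256kbit")
        # leaves: A1/B1 with a two-piece real-time curve, A2/B2 link-share only
        tc(f"class add dev {DEV} parent 1:10 classid 1:11 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit")
        tc(f"class add dev {DEV} parent 1:10 classid 1:12 hfsc ls m2 128kbit")
        tc(f"class add dev {DEV} parent 1:20 classid 1:21 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit")
        tc(f"class add dev {DEV} parent 1:20 classid 1:22 hfsc ls m2 128kbit")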

    Read the article

  • Getting the uptime of a SunOS UNIX box in seconds only

    - by JF
    How do I determine the uptime of a SunOS UNIX box in seconds only? On Linux, I could simply cat /proc/uptime and take the first field: cat /proc/uptime | awk '{print $1}' I'm trying to do the same on a SunOS UNIX box, but there is no /proc/uptime. There is an uptime command, which produces the following output: $ uptime 12:13pm up 227 day(s), 15:14, 1 user, load average: 0.05, 0.05, 0.05 I don't really want to have to write code to convert that date into seconds, and I'm sure someone must have had this requirement before, but I have been unable to find anything on the internet. Can anyone tell me how to get the uptime in just seconds? TIA
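    One approach that avoids parsing uptime's text output: Solaris exposes the boot time as a kernel statistic, unix:0:system_misc:boot_time (seconds since the epoch), readable with kstat -p; the uptime in seconds is then just "now minus boot_time". A sketch in Python around that command (assuming kstat is on the PATH):

        # Sketch: uptime in seconds on SunOS/Solaris via the boot_time kstat.
        import subprocess
        import time

        out = subprocess.run(
            ["kstat", "-p", "unix:0:system_misc:boot_time"],
            capture_output=True, text=True, check=True,
        ).stdout
        boot_time = int(out.split()[-1])  # line format: "module:inst:name:stat<TAB>value"
        print(int(time.time() - boot_time))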

    Read the article

  • Fastest algorithm to scale down a 32-bit RGB image

    - by Sunny
    Which algorithm should I use to scale down a 32-bit RGB image to a custom resolution? The algorithm should average pixels. For example, if I have a 100x100 image and I want a new image of size 20x50, the average of the first five pixels of the first source row gives the first destination pixel, and the average of the first two pixels of the first source column gives the first destination column pixel. Currently I first scale down in the X resolution, and after that I scale down in the Y resolution. I need one temp buffer in this method. Is there an optimized method that you know of?
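    What is described is box (area) averaging, and when the scale factors are integers, as in the 100x100 to 20x50 example (5x in X, 2x in Y), both axes can be averaged in a single pass with no temporary buffer by treating the image as a grid of 2x5 blocks. A sketch of the idea in Python/NumPy (the question is language-neutral; the C equivalent is a pair of nested loops accumulating each block into one destination pixel):

        # Sketch: one-pass box-average downscale for integer scale factors.
        import numpy as np

        src = np.random.randint(0, 256, size=(100, 100, 4), dtype=np.uint8)  # stand-in image

        dst_h, dst_w = 50, 20
        fy, fx = src.shape[0] // dst_h, src.shape[1] // dst_w  # 2 and 5

        # Group pixels into fy-by-fx blocks and average each block per channel.
        dst = (src.reshape(dst_h, fy, dst_w, fx, 4)
                  .mean(axis=(1, 3))
                  .astype(np.uint8))
        print(dst.shape)  # (50, 20, 4)

    For non-integer factors the same idea still applies, but the pixels on block edges need fractional weights.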

    Read the article

  • Visualizing volume of PCM samples

    - by genevincent
    I have several chunks of PCM audio (G.711) in my C++ application. I would like to visualize the audio volume of each of these chunks. My first attempt was to calculate the average of the sample values in each chunk and use that as a volume indicator, but this doesn't work well. I do get 0 for chunks with silence and differing values for chunks with audio, but the values only differ slightly and don't seem to resemble the actual volume. What would be a better algorithm to calculate the volume? I hear G.711 audio is logarithmic PCM. How should I take that into account?
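    A plain average is a poor volume indicator because linear samples swing symmetrically around zero and largely cancel; the usual measure is the RMS of the decoded signal, expressed in decibels. Since G.711 is companded (mu-law or A-law), the bytes must first be expanded to linear PCM. A sketch of the steps in Python (the question's app is C++; audioop here is only to show the math, and it ships in the Python stdlib up to 3.12; use audioop.alaw2lin for A-law G.711):

        # Sketch: per-chunk volume for G.711 mu-law audio, in dB full scale.
        import audioop
        import math

        def chunk_volume_db(chunk: bytes) -> float:
            linear = audioop.ulaw2lin(chunk, 2)   # mu-law bytes -> 16-bit linear PCM
            rms = audioop.rms(linear, 2)
            if rms == 0:
                return float("-inf")              # digital silence
            return 20.0 * math.log10(rms / 32768.0)

    In C++ the same thing is a 256-entry mu-law-to-linear lookup table followed by sqrt(mean(x*x)) over the chunk.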

    Read the article

  • Using Excel VBA, given the daily prices of 50 stocks, choose 10 stocks such that they have the minimum correlation

    - by correl
    The high-level goal is to choose 10 stocks that have the lowest correlation among one another, out of a pool of 50, so that I can have a well-diversified portfolio. I have managed to write a VBA macro to download the past 3 years of daily price data from Yahoo Finance and then compute the 50x50 correlation matrix (using the Correl function), using the daily close as the data. What I have tried so far is just a local-maximum heuristic: for the two stocks that have the highest correlation with each other, remove one of them (between the two, remove the one that has the higher average correlation with all the other stocks); when I remove a stock from the pool, I just delete the corresponding row and column, to give a smaller matrix; repeat until I have just 10 stocks remaining (a 10x10 matrix). I was wondering if there is some algorithm that already solves such a problem and gives the optimum solution?
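    An exact optimum is a combinatorial subset-selection problem: there are C(50,10), roughly 10 billion, candidate portfolios, so exhaustive search is out, and an exact answer typically means integer programming; the greedy elimination described is a reasonable heuristic. Here is that heuristic as a sketch in Python/NumPy, just to pin the logic down (the question itself uses Excel VBA):

        # Sketch of the greedy elimination described above. corr is a symmetric
        # 50x50 correlation matrix; returns original indices of the survivors.
        import numpy as np

        def pick_least_correlated(corr: np.ndarray, keep: int = 10) -> list:
            alive = list(range(corr.shape[0]))   # original stock indices
            c = corr.copy()
            while len(alive) > keep:
                masked = c.copy()
                np.fill_diagonal(masked, -np.inf)            # ignore self-correlation
                i, j = np.unravel_index(np.argmax(masked), masked.shape)
                # drop whichever of the pair is more correlated with everyone else
                # (both row sums include the same 1.0 diagonal, so they compare fairly)
                drop = i if c[i].sum() >= c[j].sum() else j
                alive.pop(drop)
                c = np.delete(np.delete(c, drop, 0), drop, 1)
            return alive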

    Read the article

  • SumProduct over sets of cells (not contiguous)

    - by Craig
    I have a total data set that is for 4 different groupings. One of the values is the average time, the other is count. For the Total I have to multiply these and then divide by the total of the count. Currently I use: =SUM(D32*D2,D94*D64,D156*D126,D218*D188)/SUM(D32,D94,D156,D218) I would rather use a SumProduct if I can to make it more readable. I tried to do: =SUMPRODUCT((D2,D64,D126,D188),(D32,D94,D156,D218))/SUM(D32,94,D156,D218) But as you can tell by my posting here, that did not work. Is there a way to do SumProduct like I want? Thoughts, Answers, Questions, Comments? Craig

    Read the article

  • jQuery Star Rating plugin - select in callback causes infinite loop

    - by Ian
    Using the jQuery Star Rating plugin, everything works well until I select a star rating from the rating's callback handler. Simple example:

        $('.rating').rating({
            ...
            callback: function(value) {
                $.ajax({
                    type: "POST",
                    url: ...,
                    data: { rating: value },
                    success: function(data) {
                        // selecting here fires the callback again
                        $('.rating').rating('select', 1);
                    }
                });
            }
        });

    I'm guessing this infinite loop occurs because the callback is fired after a manual 'select' as well. Once a user submits their rating, I'd like to 'select' the average rating across all users (this value is in the data returned to the success handler). How can I do this without triggering an infinite loop?

    Read the article

  • Getting AveragePower and PeakPower for a Channel in AVAudioRecorder

    - by Biranchi
    Hi all, I am annoyed with this piece of code. I am trying to get averagePowerForChannel: and peakPowerForChannel: while recording audio, but every time I get 0.0. Below is my code for recording the audio:

        NSDictionary *recordSetting = [[NSDictionary alloc] initWithObjectsAndKeys:
            [NSNumber numberWithFloat:22050.0], AVSampleRateKey,
            [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
            [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
            [NSNumber numberWithInt:AVAudioQualityMax], AVEncoderAudioQualityKey,
            [NSNumber numberWithInt:32], AVLinearPCMBitDepthKey,
            [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
            [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
            nil];

        recorder1 = [[AVAudioRecorder alloc] initWithURL:[NSURL fileURLWithPath:audioFilePath]
                                                settings:recordSetting
                                                   error:&err];
        recorder1.meteringEnabled = YES;
        recorder1.delegate = self;
        [recorder1 prepareToRecord];
        [recorder1 record];

        levelTimer = [NSTimer scheduledTimerWithTimeInterval:0.3f
                                                      target:self
                                                    selector:@selector(levelTimerCallback:)
                                                    userInfo:nil
                                                     repeats:YES];

        - (void)levelTimerCallback:(NSTimer *)timer {
            [recorder1 updateMeters];
            // Note: the recorder is configured with 1 channel, so only channel 0 is valid.
            NSLog(@"Peak Power : %f , %f", [recorder1 peakPowerForChannel:0], [recorder1 peakPowerForChannel:1]);
            NSLog(@"Average Power : %f , %f", [recorder1 averagePowerForChannel:0], [recorder1 averagePowerForChannel:1]);
        }

    What is the error in the code?

    Read the article

  • Building v8 without JIT

    - by rames
    Hello, I would like to run some tests on V8 with and without JIT to compare performance. I know JIT will improve my average speed, but it would be nice to have some more detailed test results, as I want to work with mobile platforms. I haven't found a way to enable or disable JIT like the one that exists in SquirrelFish (cf. ENABLE_JIT in JavaScriptCore/wtf/Platform.h). Does somebody know how to do that with V8? Thanks. Alexandre

    Read the article

  • How to implement SCORM in Objective C

    - by iranjan
    Hey, do you know how to implement SCORM (Sharable Content Object Reference Model) in Objective-C for eLearning content? Let me explain exactly what I am looking for. I have an MCQ (multiple choice question) application which has 4 questions. On attempting each question, I want my application to send the result (whether the user attempted the correct one or not) to a SCORM-compatible server. The communication channel should be two-way. At the end of the MCQ I may want to show a result which comes from the server with some calculations (like Score: 85%, number of attempts: 16, average score: 16.7%, etc.). How should I go about it? Please guide me if you have already achieved this. Regards, Ranjan

    Read the article

  • Google Maps mashup for notes/househunting

    - by afray
    I'm house-hunting at the moment and I'm trying to geek it out, er, I mean streamline the decision-making process. I'm currently using Google Maps's "My Maps" feature to store pins to properties. I create one map per estate agent, then put the relevant info into the individual pins. The idea being, I can look at the map and quickly choose which property to view next. However, the pins don't currently link back to the map they're owned by, so you have to hunt a bit to get the estate agent info; it's a hassle to get all maps displayed in a new session if you have lots of agents; and each pin doesn't automatically show its bubble, so you have to do lots of clicking to see all the info you want. I've tried Evernote, but despite its tag system initially showing some promise, I can't find a way to seamlessly integrate maps. A few Google searches don't turn anything up either. Even the big sites, like http://www.rightmove.co.uk, don't seem to provide any maps integration by default. You can see an individual house's location, but not all the results of a search. So is there a web site or Windows program I could use to do something like this? Viewing all properties on a map is a must, as is quick access to contact details.

    Read the article

  • Metaprogramming - self-explanatory code - tutorials, articles, books

    - by elena
    Hello everybody, I am looking into improving my programming skills (actually I try to do my best to suck less each year, as our Jeff Atwood put it), so I was thinking of reading up on metaprogramming and self-explanatory code. I am looking for something like an idiot's guide to this (free books for download, online resources). I also want more than your average wiki page, and ideally something language-agnostic, or preferably with Java examples. Do you know of such resources that would let me efficiently put all of this into practice? (I know experience has a lot of say in all of this, but I'd like to build experience while avoiding the flow of bad decisions - experience - good decisions.) Thank you!

    Read the article

  • How to extract frequency information from samples from PortAudio using FFTW in C

    - by houbysoft
    Hi all, I want to make a program that records audio data using PortAudio (I have this part done) and then displays the frequency information of that recorded audio (for now, I'd like to display the average frequency of each group of samples as they come in). From some research I've done, I know that I need to do an FFT. So I googled for a library to do that in C, and found FFTW. However, now I am a little lost. What exactly am I supposed to do with the samples I recorded to extract frequency information from them? What kind of FFT should I use (I assume I'd need a real-data 1D transform)? And once I've done the FFT, how do I get the frequency information from the data it gives me? Thanks a lot in advance.
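    The usual pipeline: take a block of N real samples, apply a real-to-complex 1D FFT (fftw_plan_dft_r2c_1d in FFTW, i.e. the "real data 1D" guess is right), which returns N/2+1 complex bins where bin k corresponds to frequency k*sample_rate/N Hz; the magnitude of each bin says how strong that frequency is. A sketch of the math in Python/NumPy (numpy's rfft is the direct analogue of FFTW's r2c transform, so this stands in for the C code rather than showing FFTW itself):

        # Sketch: frequency information from one block of real samples.
        import numpy as np

        RATE = 44100
        N = 1024

        # stand-in for a recorded block: a 440 Hz tone
        t = np.arange(N) / RATE
        samples = np.sin(2 * np.pi * 440.0 * t)

        spectrum = np.fft.rfft(samples * np.hanning(N))  # window to reduce leakage
        magnitude = np.abs(spectrum)                     # N//2 + 1 bins
        freqs = np.arange(len(magnitude)) * RATE / N     # bin k <-> k * RATE / N Hz

        peak = freqs[np.argmax(magnitude)]                        # dominant frequency
        average = np.sum(freqs * magnitude) / np.sum(magnitude)   # magnitude-weighted mean
        print(f"peak ~{peak:.0f} Hz, weighted average ~{average:.0f} Hz")

    In C the only extra steps are allocating the fftw_complex output array of N/2+1 elements and calling fftw_execute on the plan for each block.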

    Read the article

  • Extract Email Attachments from Outlook (exchange server) using C#

    - by ChokkaMedex
    I need to extract email attachments from Outlook (Exchange server) using C#. I need to run a script or service which can automatically detect the attachment file from a specific email ID ([email protected]). The attachment file will be in .zip format, and I need to unzip it. I need to do this task in a completely automated way. On average, I will receive only one email a week. I need to write the program in C#.NET! Kindly help me by sharing your logic. Many thanks in advance!
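    In C# the common route for this is the Exchange Web Services (EWS) Managed API. As a language-agnostic illustration of the flow (poll the mailbox, find mail from the sender, save the .zip attachment, unzip it), here is a sketch in Python over IMAP instead; it assumes the Exchange server has IMAP enabled, and the host, credentials and sender address are placeholders:

        # Sketch: pull .zip attachments from unseen mail from one sender and unzip.
        import email
        import imaplib
        import zipfile
        from pathlib import Path

        HOST, USER, PASSWORD = "mail.example.com", "me", "secret"  # placeholders
        SENDER = "sender@example.com"                              # placeholder
        OUT = Path("attachments")
        OUT.mkdir(exist_ok=True)

        imap = imaplib.IMAP4_SSL(HOST)
        try:
            imap.login(USER, PASSWORD)
            imap.select("INBOX")
            _, data = imap.search(None, "UNSEEN", "FROM", f'"{SENDER}"')
            for num in data[0].split():
                _, msg_data = imap.fetch(num, "(RFC822)")
                msg = email.message_from_bytes(msg_data[0][1])
                for part in msg.walk():
                    name = part.get_filename()
                    if name and name.lower().endswith(".zip"):
                        path = OUT / name
                        path.write_bytes(part.get_payload(decode=True))
                        with zipfile.ZipFile(path) as zf:
                            zf.extractall(OUT / path.stem)
        finally:
            imap.logout()

    Scheduled once a day (Task Scheduler, cron), this comfortably covers the one-email-a-week volume.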

    Read the article

  • Algorithm to determine thread "hotness"

    - by nickf
    I'm trying to come up with a way to determine how "hot" certain threads are in a forum. What criteria would you use and why? How would these come together to give a hotness score? The criteria I'm thinking of include:
    - how many replies
    - how long since the last reply
    - average time between replies
    The problems this algorithm must solve:
    - A thread which has 500 replies is clearly hot, unless the last reply was over a year ago.
    - A thread with 500 replies that was replied to a second ago is clearly hot, unless it's taken 4 years to reach 500 replies.
    - A thread with 15 replies in the last 4 minutes is really hot!
    Any ideas, thoughts or complete solutions out there?
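    As a sketch, one family of scores that satisfies all three test cases is a recency-damped reply rate: replies per hour over the thread's life, multiplied by an exponential decay in the idle time since the last reply. The constants below are tuning knobs, not canonical values:

        # Sketch: hotness from reply count, thread age, and idle time.
        import math
        import time

        def hotness(replies, created_ts, last_reply_ts, now=None):
            now = now or time.time()
            age_h = max((now - created_ts) / 3600.0, 0.01)
            idle_h = max((now - last_reply_ts) / 3600.0, 0.01)
            rate = replies / age_h             # replies per hour over the thread's life
            decay = math.exp(-idle_h / 24.0)   # halves roughly every 17 idle hours
            return rate * decay

        # 500 replies, idle for a year          -> decay ~ 0, not hot
        # 500 replies spread over 4 years       -> rate ~ 0.014/h, not hot even if just bumped
        # 15 replies in the last 4 minutes      -> rate ~ 225/h, decay ~ 1, very hot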

    Read the article

  • I need help choosing between two configurations of the Dell Studio 14

    - by Adnan
    There are two configurations of the Dell Studio 14 (1458) which I'm looking at:
    Config 1: Core i7-720QM @ 1.6 GHz; ATI Mobility Radeon HD 5450 1GB; 4 GB DDR3 RAM @ 1066 MHz; 500 GB SATA HDD @ 7200 RPM; Price: $999
    Config 2: Core i5-430M; ATI Mobility Radeon HD 4530 512MB; 4 GB DDR3 RAM @ 1066 MHz; 500 GB SATA HDD @ 7200 RPM; Price: $874
    What I want to know is, would Config 1 still be able to do decent gaming (maybe some StarCraft II), and is there a great performance difference between the i5 and i7 processors? Is the $130 extra worth it for the i7 and better graphics card? I do more than just basic computing. I plan on getting into web design (specifically using Photoshop and Dreamweaver), and I wish to do gaming. I know Config 1 is the better value, but I want to be sure that the $130 more is truly worth it. I don't have too much money and want to spend as wisely as possible, yet I am a computer geek and plan on doing a lot more than the average user.

    Read the article

  • Why is Cacti showing an empty graph, even though the RRD file is created?

    - by Divya mohan Singh
    Hi, I have developed my own SNMP service, and I want to plot a graph of an OID it provides. So I have created the graph in Cacti. It shows the device as up; it creates the RRD file (RRDtool says OK); it shows the graph, but the graph is empty. When I check with rrdtool fetch AVERAGE, it shows me only NaN values, even though the monitored OID has the value 47 and I have set min=0 and max=100. I am using the Cacti appliance by rPath (http://www.rpath.org/ui/#/appliances?id=http://www.rpath.org/api/products/cacti-appliance). I still can't show the value on the graph. Where is the problem? Can anyone please tell me?

    Read the article

  • What's the right way to calculate derived data in a Flex AdvancedDataGrid using summaries?

    - by Chris R
    Here's the gist of the problem: I have a set of rows of data with (say) field1 to field4 in them. I'm using a GroupingCollection to group on field1 and field2. So, I have something like this:
    f1.1
        f2.1
            f3.1 f4.1
            f3.2 f4.2
        f2.2
            f3.3 f4.3
            f3.4 f4.4
            f3.5 f4.5
    f1.2
        f2.1
            f3.6 f4.6
        f2.2
            f3.7 f4.7
            f3.8 f4.8
            f3.9 f4.9
    (or at least, I hope that's clear enough) I need to calculate some derived values for each leaf row, for example f3_index, the ratio of f3 to the average of all f3 values in that particular part of the tree. So, for f3.7 I need to calculate f3.7 / avg(f3.7..f3.9), fill that into the f3_index property on the row, and display that in lieu of f3 itself. So, basically, what it looks like I have to do is add source field values in the summarizeFunction implementation. It seems to me that there must be a better way of doing this. Is there?
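    The Flex question stands, but it helps to state the computation precisely: each leaf's f3 divided by the mean of f3 within its (f1, f2) group. A sketch of exactly that in Python/pandas, only to pin down what the summarizeFunction has to produce:

        # Sketch: the derived value the grid should show -- each row's f3
        # divided by the mean f3 of its (f1, f2) group.
        import pandas as pd

        df = pd.DataFrame({
            "f1": ["f1.2"] * 3,
            "f2": ["f2.2"] * 3,
            "f3": [2.0, 4.0, 6.0],   # stand-ins for f3.7, f3.8, f3.9
        })
        df["f3_index"] = df["f3"] / df.groupby(["f1", "f2"])["f3"].transform("mean")
        print(df)  # f3_index for the first row = 2.0 / 4.0 = 0.5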

    Read the article

  • How do I regenerate statistics in Openx?

    - by Martin Bauer
    Due to faulty hardware, statistics generated over a 2-week period were significantly higher than normal (10,000 times higher than normal). After moving the application to a new server, the problem rectified itself. The issue I have is that there are 2 weeks of stats that are clearly wrong. I have checked the raw impressions table for the affected fortnight and it seems to be correct (i.e. stats per banner per day match the average for the previous month). Looking at the intermediate and summary impressions tables, the values are inflated. I understand from the OpenX forum (http://forum.openx.org/index.php?s=7796fd9dae40e020a010773746f3ada9&showtopic=503424297) that it's possible to regenerate stats from the raw data, but it will only regenerate stats per hour, meaning regenerating stats for 2 weeks would be very time consuming. Is there another, more efficient way to regenerate the stats from the raw data for the affected fortnight?

    Read the article

  • Slow insert speed in PostgreSQL memory tablespace

    - by Prashant
    Hi, I have a requirement to store records at a rate of 10,000 records/sec into a database (with indexing on a few fields). The number of columns in one record is 25. I am doing a batch insert of 100,000 records in one transaction block. To improve the insertion rate, I changed the tablespace from disk to RAM. With that I am able to achieve only 5,000 inserts per second. I have also done the following tuning in the Postgres config: indexes: no; fsync: false; logging: disabled. Other information:
    - Tablespace: RAM
    - Number of columns in one row: 25 (mostly integers)
    - CPU: 4 core, 2.5 GHz
    - RAM: 48 GB
    I am wondering why a single insert query takes around 0.2 ms on average when the database is not writing anything to disk (as I am using a RAM-based tablespace). Is there something I am doing wrong? Help appreciated. Prashant
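    One note on the 0.2 ms per row: at this scale the cost is usually per-statement overhead (parsing, planning, client round trips) and index maintenance rather than disk writes, which is why the RAM tablespace barely helped. PostgreSQL's COPY streams an entire batch through one command and is typically several times faster than batched INSERTs; a sketch with psycopg2 (the client library, DSN and table are assumptions, since the question doesn't name them):

        # Sketch: bulk-load a batch with COPY instead of row-by-row INSERTs.
        import io
        import psycopg2

        conn = psycopg2.connect("dbname=test")            # placeholder DSN
        rows = [(i, i * 2, "x") for i in range(100_000)]  # stand-in for real records

        buf = io.StringIO()
        for r in rows:
            buf.write("\t".join(map(str, r)) + "\n")
        buf.seek(0)

        with conn, conn.cursor() as cur:                  # commits on success
            cur.copy_from(buf, "records", columns=("a", "b", "c"))  # hypothetical table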

    Read the article

  • Starting a Blog using Microsoft.Net technologies

    - by manav inder
    I want to start a blog using Microsoft technologies. My primary reason is to get more in sync with technologies which are very much in demand. It does not matter how steep the learning curve is, as I am willing to devote all the time in the world. There is a lot going on, like the Microsoft Web API, DotNetNuke, MVC, SPAs, etc. Let me tell you what I know: I have very good experience in developing database-driven .NET applications using WinForms and WPF; average experience in ASP.NET and ASP.NET MVC; good in Entity Framework, ADO.NET and WCF REST services; good in IoC/DI.

    Read the article
