Search Results

Search found 1325 results on 53 pages for 'factor'.

Page 15/53 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Columnstore Case Study #1: MSIT SONAR Aggregations

    - by aspiringgeek
    Preamble This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014.  Many of these can be found in this deck along with details such as internals, best practices, caveats, etc.  The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. Why Columnstore? If we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we’re asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps.  In SQL Server 2014, the potential blockers have been largely removed & columnstore indexes are going to profoundly change the way we interact with our data.  The purpose of this series is to share the performance benefits of columnstore & to document why columnstore is a compelling reason to upgrade to SQL Server 2014. App: MSIT SONAR Aggregations At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR.  By definition, this is a primary use case for columnstore—report queries requiring aggregation over large numbers of rows.  New data is refreshed each night by an automated table partitioning mechanism—a best-practices scenario for columnstore. The Win Compared to classic indexing—which produced the expected query plan, including partition elimination—the SQL Server 2012 nonclustered columnstore index improved query performance significantly.  Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more.  Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements.  Existing nonclustered indexes were rendered superfluous & were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity. Details The table provides the raw data & summarizes the performance deltas. Logical Reads (8K pages) CPU (ms) Durn (ms) Columnstore 160,323 20,360 9,786 Conventional Table & Indexes 9,053,423 549,608 193,903 Δ x56 x27 x20 The charts provide additional perspective on this data.  "Conventional vs. Columnstore Metrics" documents the raw data.  Note on this linear display the magnitude of the conventional index performance vs. columnstore.  The “Metrics (Δ)” chart expresses these values as a ratio. Summary For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing.  I have documented here, in the first of a series of reports on columnstore implementations, results from an initial implementation at MSIT in which logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more.  Subsequent posts in this series document performance enhancements that are even more significant.

    Read the article

  • How to analyze 'dbcc memorystatus' result in SQL Server 2008

    - by envykok
    Currently I am facing a SQL memory pressure issue. I have run 'dbcc memorystatus'; here is part of my result: Memory Manager KB VM Reserved 23617160 VM Committed 14818444 Locked Pages Allocated 0 Reserved Memory 1024 Reserved Memory In Use 0 Memory node Id = 0 KB VM Reserved 23613512 VM Committed 14814908 Locked Pages Allocated 0 MultiPage Allocator 387400 SinglePage Allocator 3265000 MEMORYCLERK_SQLBUFFERPOOL (node 0) KB VM Reserved 16809984 VM Committed 14184208 Locked Pages Allocated 0 SM Reserved 0 SM Committed 0 SinglePage Allocator 0 MultiPage Allocator 408 MEMORYCLERK_SQLCLR (node 0) KB VM Reserved 6311612 VM Committed 141616 Locked Pages Allocated 0 SM Reserved 0 SM Committed 0 SinglePage Allocator 1456 MultiPage Allocator 20144 CACHESTORE_SQLCP (node 0) KB VM Reserved 0 VM Committed 0 Locked Pages Allocated 0 SM Reserved 0 SM Committed 0 SinglePage Allocator 3101784 MultiPage Allocator 300328 Buffer Pool Value Committed 1742946 Target 1742946 Database 1333883 Dirty 940 In IO 1 Latched 18 Free 89 Stolen 408974 Reserved 2080 Visible 1742946 Stolen Potential 1579938 Limiting Factor 13 Last OOM Factor 0 Page Life Expectancy 5463 Process/System Counts Value Available Physical Memory 258572288 Available Virtual Memory 8771398631424 Available Paging File 16030617600 Working Set 15225597952 Percent of Committed Memory in WS 100 Page Faults 305556823 System physical memory high 1 System physical memory low 0 Process physical memory low 0 Process virtual memory low 0 Procedure Cache Value TotalProcs 11382 TotalPages 430160 InUsePages 28 Can you help me analyze this result? Is it that a lot of execution plans have been cached, causing the memory issue, or is there another reason?

    Read the article

  • A couple of questions on exceptions/flow control and the application of custom exceptions

    - by dotnetdev
    1) Custom exceptions can help make your intentions clear. How can this be? The intention is to handle or log the exception, regardless of whether the type is built-in or custom. The main reason I use custom exceptions is to avoid using one exception type to cover the same problem in different contexts (e.g. a parameter that is null in system code, which may be affected by an external factor, versus an empty shopping basket). However, the partition between system and business-domain code, each using different exception types, seems very obvious and doesn't feel like it makes the most of custom exceptions. Related to this, if custom exceptions cover the business exceptions, I could also find all the places that are sources of exceptions at the business-domain level using "Find all references". Is it worth adding exceptions if you check the arguments in a method for being null, use them a few times, and then add the catch? Is it a realistic risk that an external factor or some other freak cause could make the argument null after being checked anyway? 2) What does it mean that exceptions should not be used to control the flow of programs, and why not? I assume this means something like: if (exceptionVariable != null) { } Is it generally good practice to fill every variable in an exception object? As a developer, do you expect every possible variable to be filled by another coder?
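    A minimal sketch of the "intentions" point, written in Python purely for brevity (the same shape carries over to a C# custom exception class). The guard clause keeps the generic built-in type, while the business rule gets a type whose name states the failure, so callers can catch exactly that case and "Find all references" locates every business-level throw site. EmptyBasketError and checkout are invented names for illustration only.

```python
class EmptyBasketError(Exception):
    """Business-domain failure; the type name itself documents the intent."""

def checkout(basket):
    if basket is None:
        # contract violation in system code: a generic built-in type is fine
        raise ValueError("basket must not be None")
    if len(basket) == 0:
        # business rule: a custom type lets callers handle exactly this case
        raise EmptyBasketError("cannot check out an empty basket")
    return sum(item["price"] for item in basket)

try:
    checkout([])
except EmptyBasketError:
    print("redirect the user back to the shop")  # handled as a business case, not normal flow
```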

    Read the article

  • Evaluation of environment variables in command run by Java's Runtime.exec()

    - by Tom Duckering
    Hi, I have a scenario where I have a Java "agent" that runs on a couple of platforms (specifically Windows, Solaris & AIX). I'd like to factor out the differences in filesystem structure by using environment variables in the command line I execute. As far as I can tell there is no way to get the Runtime.exec() method to resolve/evaluate any environment variables referenced in the command String (or array of Strings). I know that if push comes to shove I can write some code to pre-process the command String(s) and resolve environment variables by hand (using getenv() etc.). However I'm wondering if there is a smarter way to do this, since I'm sure I'm not the only person wanting to do this and I'm sure there are pitfalls in "knocking up" my own implementation. Your guidance and suggestions are most welcome. edit: I would like to refer to environment variables in the command string using some consistent notation such as $VAR and/or %VAR%. Not fussed which. edit: To be clear, I'd like to be able to execute a command such as: perl $SCRIPT_ROOT/somePerlScript.pl args on Windows and Unix hosts using Runtime.exec(). I specify the command in a config file that describes a list of jobs to run, and it has to work cross-platform, hence my thought that an environment variable would be useful to factor out the filesystem differences (/home/username/scripts vs C:\foo\scripts). Hope that helps clarify it. Thanks. Tom
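    On the pre-processing route mentioned above, here is a rough sketch of the regex-plus-getenv substitution, in Python only to keep it short; a hand-rolled Java version would do the same with Pattern/Matcher and System.getenv(). It accepts both $VAR and %VAR% and leaves unknown variables untouched. The helper name expand_env is invented for illustration.

```python
import os
import re

def expand_env(cmd):
    """Replace $VAR and %VAR% with values from the environment; unknown
    variables are left as-is rather than silently removed."""
    def repl(match):
        name = match.group(1) or match.group(2)
        return os.environ.get(name, match.group(0))
    return re.sub(r'\$(\w+)|%(\w+)%', repl, cmd)

# the expanded string can then be split and handed to the exec call
print(expand_env('perl $SCRIPT_ROOT/somePerlScript.pl args'))
print(expand_env('perl %SCRIPT_ROOT%/somePerlScript.pl args'))
```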

    Read the article

  • How can a data ellipse be superimposed on a ggplot2 scatterplot?

    - by Radu
    Hi, I have an R function which produces 95% confidence ellipses for scatterplots. The output looks like this, having a default of 50 points for each ellipse (50 rows): [,1] [,2] [1,] 0.097733810 0.044957994 [2,] 0.084433494 0.050337990 [3,] 0.069746783 0.054891438 I would like to superimpose a number of such ellipses for each level of a factor called 'site' on a ggplot2 scatterplot, produced from this command: > plat1 <- ggplot(mapping=aes(shape=site, size=geom), shape=factor(site)); plat1 + geom_point(aes(x=PC1.1,y=PC2.1)) This is run on a dataset, called dflat which looks like this: site geom PC1.1 PC2.1 PC3.1 PC1.2 PC2.2 1 Buhlen 1259.5649 -0.0387975838 -0.022889782 0.01355317 0.008705276 0.02441577 2 Buhlen 653.6607 -0.0009398704 -0.013076251 0.02898955 -0.001345149 0.03133990 The result is fine, but when I try to add the ellipse (let's say for this one site, called "Buhlen"): > plat1 + geom_point(aes(x=PC1.1,y=PC2.1)) + geom_path(data=subset(dflat, site="Buhlen"),mapping=aes(x=ELLI(PC1.1,PC2.1)[,1],y=ELLI(PC1.1,PC2.1)[,2])) I get an error message: "Error in data.frame(x = c(0.0977338099339815, 0.0844334944904515, 0.0697467834016782, : arguments imply differing number of rows: 50, 211 I've managed to fix this in the past, but I cannot remember how. It seems that geom_path is relying on the same points rather than plotting new ones. Any help would be appreciated.

    Read the article

  • Python: combining making two scripts into one

    - by Alex
    I have two separately made python scripts one that makes a sine wave sound based off time, and another that produces a sine wave graph that is based off the same time factors. I need help combining them into one running file. Here's the first: from struct import pack from math import sin, pi import time def au_file(name, freq, freq1, dur, vol): fout = open(name, 'wb') # header needs size, encoding=2, sampling_rate=8000, channel=1 fout.write('.snd' + pack('>5L', 24, 8*dur, 2, 8000, 1)) factor = 2 * pi * freq/8000 factor1 = 2 * pi * freq1/8000 # write data for seg in range(8 * dur): # sine wave calculations sin_seg = sin(seg * factor) + sin(seg * factor1) fout.write(pack('b', vol * 64 * sin_seg)) fout.close() t = time.strftime("%S", time.localtime()) ti = time.strftime("%M", time.localtime()) tis = float(t) tis = tis * 100 tim = float(ti) tim = tim * 100 if __name__ == '__main__': au_file(name='timeSound.au', freq=tim, freq1=tis, dur=1000, vol=1.0) import os os.startfile('timeSound.au') and the second is this: from Tkinter import * import math import time t = time.strftime("%S", time.localtime()) ti = time.strftime("%M", time.localtime()) tis = float(t) tis = tis / 100 tim = float(ti) tim = tim / 100 root = Tk() root.title("This very moment") width = 400 height = 300 center = height//2 x_increment = 1 # width stretch x_factor1 = tis x_factor2 = tim # height stretch y_amplitude = 50 c = Canvas(width=width, height=height, bg='black') c.pack() str1 = "sin(x)=white" c.create_text(10, 20, anchor=SW, text=str1) center_line = c.create_line(0, center, width, center, fill='red') # create the coordinate list for the sin() curve, have to be integers xy1 = [] xy2 = [] for x in range(400): # x coordinates xy1.append(x * x_increment) xy2.append(x * x_increment) # y coordinates xy1.append(int(math.sin(x * x_factor1) * y_amplitude) + center) xy2.append(int(math.sin(x * x_factor2) * y_amplitude) + center) sinS_line = c.create_line(xy1, fill='white') sinM_line = c.create_line(xy2, fill='yellow') root.mainloop()
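    One way to combine them, shown as a skeleton with invented function names (play_tone and draw_waves would hold the bodies of the two scripts above): read the time once, generate and launch the sound file first, and enter the Tkinter loop last, because mainloop() blocks until the window is closed while os.startfile() returns immediately.

```python
# A skeleton only: each original script becomes a function; the bodies
# marked "..." would hold the code shown above.
import time

def time_factors():
    """Read the current second and minute once so both parts agree."""
    sec = float(time.strftime("%S", time.localtime()))
    minute = float(time.strftime("%M", time.localtime()))
    return sec, minute

def play_tone(sec, minute):
    # ... first script: scale by 100, call au_file(), then os.startfile()
    pass

def draw_waves(sec, minute):
    # ... second script: scale by 1/100, build the Canvas, call root.mainloop()
    pass

if __name__ == '__main__':
    sec, minute = time_factors()
    play_tone(sec, minute)    # os.startfile() returns immediately, so do this first
    draw_waves(sec, minute)   # mainloop() blocks until the window closes, so it goes last
```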

    Read the article

  • Handling conflicting priorities and expectations in project development

    - by jasonk
    There are any number of situations in the standard day where priority conflicts exist for projects. Management wants maximum productivity from employees. Marketing wants maximum salability and fast turnaround. Ownership wants maximum profit. Customers want usability and low cost. Regardless of the origin of the demands, time and money are always the limiting factor in business. Sometimes project elements have intrinsic or goodwill benefits that there is no hard-and-fast way to measure in monetary terms (e.g. arguments for an attractive, appealing UI vs. a functional but plain one). Other elements of software may provide “mental breaks” or a motivating “cool factor” for developers that can get them back on track on other bigger, more complex issues. While they may sidetrack the project in the short term, they may produce greater results in the long term through improved job satisfaction, etc. Continued training is a must, but working it in can set back progress. What are your suggestions for setting priorities? How do you evaluate requests/demands on your projects? What are your suggestions for communicating and passing those on to your team in a way that they stay focused?

    Read the article

  • R: convert data.frame columns from factors to characters

    - by Mike Dewar
    Hi, I have a data frame. Let's call him bob: > head(bob) phenotype exclusion GSM399350 3- 4- 8- 25- 44+ 11b- 11c- 19- NK1.1- Gr1- TER119- GSM399351 3- 4- 8- 25- 44+ 11b- 11c- 19- NK1.1- Gr1- TER119- GSM399352 3- 4- 8- 25- 44+ 11b- 11c- 19- NK1.1- Gr1- TER119- GSM399353 3- 4- 8- 25+ 44+ 11b- 11c- 19- NK1.1- Gr1- TER119- GSM399354 3- 4- 8- 25+ 44+ 11b- 11c- 19- NK1.1- Gr1- TER119- GSM399355 3- 4- 8- 25+ 44+ 11b- 11c- 19- NK1.1- Gr1- TER119- I'd like to concatenate the rows of this data frame (this will be another question). But look: > class(bob$phenotype) [1] "factor" Bob's columns are factors. So, for example: > as.character(head(bob)) [1] "c(3, 3, 3, 6, 6, 6)" "c(3, 3, 3, 3, 3, 3)" [3] "c(29, 29, 29, 30, 30, 30)" I don't begin to understand this, but I guess these are indices into the levels of the factors of the columns (of the court of king caractacus) of bob? Not what I need. Strangely I can go through the columns of bob by hand, and do bob$phenotype <- as.character(bob$phenotype) which works fine. And, after some typing, I can get a data.frame whose columns are characters rather than factors. So my question is: how can I do this automatically? How do I convert a data.frame with factor columns into a data.frame with character columns without having to manually go through each column? Bonus question: why does the manual approach work?

    Read the article

  • Get the parent id..

    - by tixrus
    I have a bunch of elements like the following: <div class="droppableP" id="s-NSW" style="width:78px; height:63px; position: absolute; top: 223px; left: 532px;"> </div> They all have class droppableP but different id's obviously and I would like to factor the code in this script I am hacking on. The original script just has a specific selector for each of one of these divs, but the code is all alike except for the id it does things to, which is either the id of the parent or another div with a name that's related to it. Here is the original code specifically for this div: $("#s-NSW > .sensible").droppable( { accept : "#i-NSW", tolerance : 'intersect', activeClass : 'droppable-active', hoverClass : 'droppable-hover', drop : function() { $('#s-NSW').addClass('s-NSW'); $('#s-NSW').addClass('encastrada'); //can't move any more.. $('#i-NSW').remove(); $('#s-NSW').animate( { opacity: 0.25 },200, 'linear'); checkWin(); } }); Here is how I would like to factor so the same code can do all of them and I will eventually do chaining as well and maybe get rid of the inline styles but here is my first go: $(".droppableP > .sensible").droppable( { accept : "#i" + $(this).parent().attr('id').substring(2), tolerance : 'intersect', activeClass : 'droppable-active', hoverClass : 'droppable-hover', drop : function() { $(this).parent().addClass($(this).parent().attr('id')); $(this).parent().addClass('encastrada'); $("#i" + ($this).parent().attr('id').substring(2)).remove(); $(this).parent().animate( { opacity: 0.25 },200, 'linear'); checkWin(); } }); The error I get is $(this).parent().attr("id") is undefined Many thanks. I have browsed related questions the one I understand that's closest to mine, turns out they didn't need parent function at all. I'm kind of a noob so please don't yell at me too hard if this is a stupid question.

    Read the article

  • Audio playback, creating nested loop for fade in/out.

    - by Dave Slevin
    Hi Folks, First time poster here. A quick question about setting up a loop. I want to set up a for loop for the first 1/3 of the main loop that will increase a value from .00001 or similar to 1, so I can use it to multiply a sample variable and create a fade-in in this simple audio file playback routine. So far it's turning out to be a bit of a head scratcher; any help gratefully received. for(i=0; i < end && !feof(fpin); i+=blockframes) { samples = fread(audioblock, sizeof(short), blocksamples, fpin); frames = samples; for(j=0; j < frames; j++) { for (f = 0; f< frames/3 ;f++) { fade = fade--; } output[j] = audioblock[j]/fade; } fwrite(output,sizeof(short), frames, fpoutput); } Apologies: so far I've read and re-written the file successfully. My problem is that I'm trying to figure out a way to loop the variable 'fade' so it either increases or decreases to 1, so that I can modify the output variable. I want to do this in, say, 3 stages: 1. From 0 to frames/3, increase a multiplication factor from .0001 to 1. 2. From frames/3 to 2*frames/3, do nothing (multiply by 1). 3. Let the factor decrease again below 1 so that the output variable decreases back to the original point. How can I create a loop that will increase and decrease these values over the outside loop?
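    If it helps, the per-sample multiplier can be computed directly from the sample index instead of decrementing 'fade' in an inner loop (note that fade = fade--; has no well-defined effect in C anyway). Below is only the envelope arithmetic, sketched in Python for readability and multiplying by a 0-to-1 gain rather than dividing, which is the usual way to express a fade; the three expressions drop straight into the C loop.

```python
def envelope(j, frames):
    """Gain for sample j of `frames`: ramp up over the first third,
    hold at 1.0 through the middle third, ramp back down at the end."""
    third = frames / 3.0
    if j < third:
        return max(j / third, 1e-5)            # stage 1: fade in, ~0 -> 1
    if j < 2 * third:
        return 1.0                             # stage 2: unity gain
    return max((frames - j) / third, 1e-5)     # stage 3: fade out, 1 -> ~0

frames = 9000
gains = [envelope(j, frames) for j in range(frames)]
print(gains[0], gains[frames // 2], gains[-1])  # ~0, 1.0, ~0
# in the C loop this becomes: output[j] = (short)(audioblock[j] * envelope_value);
```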

    Read the article

  • How to handle alpha in a manual "Overlay" blend operation?

    - by quixoto
    I'm playing with some manual (walk-the-pixels) image processing, and I'm recreating the standard "overlay" blend. I'm looking at the "Photoshop math" macros here: http://www.nathanm.com/photoshop-blending-math/ (See also here for more readable version of Overlay) Both source images are in fairly standard RGBA (8 bits each) format, as is the destination. When both images are fully opaque (alpha is 1.0), the result is blended correctly as expected: But if my "blend" layer (the top image) has transparency in it, I'm a little flummoxed as to how to factor that alpha into the blending equation correctly. I expect it to work such that transparent pixels in the blend layer have no effect on the result, opaque pixels in the blend layer do the overlay blend as normal, and semitransparent blend layer pixels have some scaled effect on the result. Can someone explain to me the blend equations or the concept behind doing this? Bonus points if you can help me do it such that the resulting image has correctly premultiplied alpha (which only comes into play for pixels that are not opaque in both layers, I think.) Thanks! // factor in blendLayerA, (1-blendLayerA) somehow? resultR = ChannelBlend_Overlay(baseLayerR, blendLayerR); resultG = ChannelBlend_Overlay(baseLayerG, blendLayerG); resultB = ChannelBlend_Overlay(baseLayerB, blendLayerB); resultA = 1.0; // also, what should this be??
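    One common treatment, offered here as an assumption rather than the one true formula: blend each channel as if the blend layer were fully opaque, then interpolate between the base pixel and that blended result by the blend layer's alpha. Transparent blend pixels then leave the base untouched, opaque ones give the full overlay, and semitransparent ones scale the effect. The sketch below uses straight (non-premultiplied) 0-to-1 channels and assumes an opaque base; for premultiplied output you would multiply the resulting RGB by the final alpha afterwards.

```python
def overlay_channel(b, s):
    """Standard overlay blend for a single channel, values in 0..1."""
    return 2 * b * s if b < 0.5 else 1 - 2 * (1 - b) * (1 - s)

def overlay_with_alpha(base, blend, blend_alpha):
    """base, blend: (r, g, b) tuples in 0..1; blend_alpha in 0..1.
    Blend as if the top layer were opaque, then mix back toward the base
    by the top layer's alpha."""
    blended = [overlay_channel(b, s) for b, s in zip(base, blend)]
    return tuple(b + (o - b) * blend_alpha for b, o in zip(base, blended))

# alpha 0 -> base unchanged, alpha 1 -> full overlay, in between -> scaled effect
print(overlay_with_alpha((0.4, 0.5, 0.6), (0.8, 0.2, 0.9), 0.5))
```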

    Read the article

  • Trait, FunctionN, or trait-inheriting-FunctionN in Scala?

    - by Willis Blackburn
    I have a trait in Scala that has a single method. Call it Computable and the single method is compute(input: Int): Int. I can't figure out whether I should Leave it as a standalone trait with a single method. Inherit from (Int => Int) and rename "compute" to "apply." Just get rid of Computable and use (Int => Int). A factor in favor of it being a trait is that I could usefully add some additional methods. But of course if they were all implemented in terms of the compute method then I could just break them out into a separate object. A factor in favor of just using the function type is simplicity and the fact that the syntax for an anonymous function is more concise than that for an anonymous Computable instance. But then I've no way to distinguish objects that are actually Computable instances from other functions that map Int to Int but aren't meant to be used in the same context as Computable. How do other people approach this type of problem? No right or wrong answers here; I'm just looking for advice.

    Read the article

  • Is there a IDE/compiler PC benchmark I can use to compare my PCs performance?

    - by RickL
    I'm looking for a benchmark (and results on other PCs) which would give me an idea of the development performance gain I could get by upgrading my PC; the benchmark could also be used to justify the upgrade to my boss. I use Visual Studio 2008 for my development, so I'd like to get an idea of the factor by which build times would improve, and it would also be good if the benchmark could incorporate IDE performance (i.e. when editing, using IntelliSense, opening code files etc.) into its result. I currently have an AMD 3800x2, with 2GB RAM on Vista 32. For example, I'd like to know what kind of performance gain I'd see in Visual Studio 2008 with a Q6600, 4GB RAM on Vista 64. And also with other processors, and other RAM sizes... also whether hard disk performance is a big factor. EDIT: I mentioned Vista 64 because I'm aware that Vista 32 can only use 3GB RAM maximum. So I'd presume that wanting to use more RAM would require Vista 64, but perhaps it could still be slower overall if there is a large overhead in using the 32-bit VS 2008 on a 64-bit OS.

    Read the article

  • 2D Histogram in R: Converting from Count to Frequency within a Column

    - by Jac
    Would appreciate help with generating a 2D histogram of frequencies, where frequencies are calculated within a column. My main issue: converting from counts to column based frequency. Here's my starting code: # expected packages library(ggplot2) library(plyr) # generate example data corresponding to expected data input x_data = sample(101:200,10000, replace = TRUE) y_data = sample(1:100,10000, replace = TRUE) my_set = data.frame(x_data,y_data) # define x and y interval cut points x_seq = seq(100,200,10) y_seq = seq(0,100,10) # label samples as belonging within x and y intervals my_set$x_interval = cut(my_set$x_data,x_seq) my_set$y_interval = cut(my_set$y_data,y_seq) # determine count for each x,y block xy_df = ddply(my_set, c("x_interval","y_interval"),"nrow") # still need to convert for use with dplyr # convert from count to frequency based on formula: freq = count/sum(count in given x interval) ################ TRYING TO FIGURE OUT ################# # plot results fig_count <- ggplot(xy_df, aes(x = x_interval, y = y_interval)) + geom_tile(aes(fill = nrow)) # count fig_freq <- ggplot(xy_df, aes(x = x_interval, y = y_interval)) + geom_tile(aes(fill = freq)) # frequency I would appreciate any help in how to calculate the frequency within a column. Thanks! jac EDIT: I think the solution will require the following steps 1) Calculate and store overall counts for each x-interval factor 2) Divide the individual bin count by its corresponding x-interval factor count to obtain frequency. Not sure how to carry this out though. .

    Read the article

  • Generate unique ID from multiple values with fault tolerance

    - by ojreadmore
    Given some values, I'd like to make a (pretty darn) unique result. $unique1 = generate(array('ab034', '981kja7261', '381jkfa0', 'vzcvqdx2993883i3ifja8', '0plnmjfys')); //now $unique1 == "sqef3452y"; I also need something that's pretty close to return the same result. In this case, 20% of the values is missing. $unique2 = generate(array('ab034', '981kja7261', '381jkfa0', 'vzcvqdx2993883i3ifja8')); //also $unique2 == "sqef3452y"; I'm not sure where to begin with such an algorithm but I have some assumptions. I assume that the more values given, the more accurate the resulting ID – in other words, using 20 values is better than 5. I also assume that a confidence factor can be calculated and adjusted. What would be nice to have is a weight factor where one can say 'value 1 is more important than value 3'. This would require a multidimensional array for input instead of one dimension. I just mashed on the keyboard for these values, but in practice they may be short or long alpha numeric values.
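    One possible direction, sketched as an assumption rather than a proven design, is a SimHash-style bit vote: every value votes on every bit of the result, so when a small share of the values is missing, bits with a clear majority keep their value and the ID changes little. The tolerance is weak with only a handful of short inputs and improves as more values are supplied, which matches the assumption above; a weight factor could be added by letting a value cast its vote w times.

```python
import hashlib

def generate(values, bits=36):
    """SimHash-style sketch: every value votes on every bit of the result,
    so losing a small share of the inputs usually flips few or no bits."""
    votes = [0] * bits
    for v in values:
        h = int(hashlib.md5(v.encode()).hexdigest(), 16)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    return format(sum(1 << i for i, v in enumerate(votes) if v > 0), 'x')

full = generate(['ab034', '981kja7261', '381jkfa0', 'vzcvqdx2993883i3ifja8', '0plnmjfys'])
partial = generate(['ab034', '981kja7261', '381jkfa0', 'vzcvqdx2993883i3ifja8'])
print(full, partial)  # similar, though with this few inputs some bits may still differ
```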

    Read the article

  • model.matrix() with na.action=NULL?

    - by Vincent
    I have a formula and a data frame, and I want to extract the model.matrix(). However, I need the resulting matrix to include the NAs that were found in the original dataset. If I were to use model.frame() to do this, I would simply pass it na.action=NULL. However, the output I need is of the model.matrix() format. Specifically, I need only the right-hand side variables, I need the output to be a matrix (not a data frame), and I need factors to be converted to a series of dummy variables. I'm sure I could hack something together using loops or something, but I was wondering if anyone could suggest a cleaner and more efficient workaround. Thanks a lot for your time! And here's an example: dat <- data.frame(matrix(rnorm(20),5,4), gl(5,2)) dat[3,5] <- NA names(dat) <- c(letters[1:4], 'fact') ff <- a ~ b + fact # This omits the row with a missing observation on the factor model.matrix(ff, dat) # This keeps the NA, but it gives me a data frame and does not dichotomize the factor model.frame(ff, dat, na.action=NULL) Here is what I would like to obtain: (Intercept) b fact2 fact3 fact4 fact5 1 1 0.7266086 0 0 0 0 2 1 -0.6088697 0 0 0 0 3 NA 0.4643360 NA NA NA NA 4 1 -1.1666248 1 0 0 0 5 1 -0.7577394 0 1 0 0 6 1 0.7266086 0 1 0 0 7 1 -0.6088697 0 0 1 0 8 1 0.4643360 0 0 1 0 9 1 -1.1666248 0 0 0 1 10 1 -0.7577394 0 0 0 1

    Read the article

  • ggplot: showing % instead of counts in charts of categorical variables

    - by wishihadabettername
    I'm plotting a categorical variable and instead of showing the counts for each category value, I'm looking for a way to get ggplot to display the percentage of values in that category. Of course, it is possible to create another variable with the calculated percentage and plot that one, but I have to do it several dozens of times and I hope to achieve that in one command. I was experimenting with something like qplot (mydataf) + stat_bin(aes(n=nrow(mydataf), y=..count../n)) + scale_y_continuous(formatter="percent") but I must be using it incorrectly, as I got errors. To easily reproduce the setup, here's a simplified example: mydata <- c ("aa", "bb", null, "bb", "cc", "aa", "aa", "aa", "ee", null, "cc"); mydataf <- factor(mydata); qplot (mydataf); #this shows the count, I'm looking to see % displayed. In the real case I'll probably use ggplot instead of qplot, but the right way to use stat_bin still eludes me. Thank you. UPDATE: I've also tried these four approaches: ggplot(mydataf, aes(y = (..count..)/sum(..count..))) + scale_y_continuous(formatter = 'percent'); ggplot(mydataf, aes(y = (..count..)/sum(..count..))) + scale_y_continuous(formatter = 'percent') + geom_bar(); ggplot(mydataf, aes(x = levels(mydataf), y = (..count..)/sum(..count..))) + scale_y_continuous(formatter = 'percent'); ggplot(mydataf, aes(x = levels(mydataf), y = (..count..)/sum(..count..))) + scale_y_continuous(formatter = 'percent') + geom_bar(); but all 4 give: Error: ggplot2 doesn't know how to deal with data of class factor The same error appears for the simple case of ggplot (data=mydataf, aes(levels(mydataf))) + geom_bar() so it's clearly something about how ggplot interacts with a single vector. I'm scratching my head, googling for that error gives a single result.

    Read the article

  • Populate an Object Model from a data dataTable(C#3.0)

    - by Newbie
    I have a situation where I am getting data from some external sources and populating it into a DataTable. The data looks like this DATE WEEK FACTOR 3/26/2010 1 RM_GLOBAL_EQUITY 3/26/2010 1 RM_GLOBAL_GROWTH 3/26/2010 2 RM_GLOBAL_VALUE 3/26/2010 2 RM_GLOBAL_SIZE 3/26/2010 2 RM_GLOBAL_MOMENTUM 3/26/2010 3 RM_GLOBAL_HIST_BETA I have an object model like this public class FactorReturn { public int WeekNo { get; set; } public DateTime WeekDate { get; set; } public Dictionary<string, decimal> FactorCollection { get; set; } } As can be seen, the Date field is always constant, and a single (unique) week can have multiple FACTORS. i.e. For a date (3/26/2010), for Week No. 1, there are two FACTORS (RM_GLOBAL_EQUITY and RM_GLOBAL_GROWTH). Similarly, for a date (3/26/2010), for Week No. 2, there are three FACTORS (RM_GLOBAL_VALUE, RM_GLOBAL_SIZE and RM_GLOBAL_MOMENTUM). Now we need to populate this data into our object model. The final output will be WeekDate: 3/26/2010 WeekNo : 1 FactorCollection : RM_GLOBAL_EQUITY FactorCollection : RM_GLOBAL_GROWTH WeekNo : 2 FactorCollection : RM_GLOBAL_VALUE FactorCollection : RM_GLOBAL_SIZE FactorCollection : RM_GLOBAL_MOMENTUM WeekNo : 3 FactorCollection : RM_GLOBAL_HIST_BETA That is, overall only 1 single collection, where the Factor type will vary depending on week numbers. I have tried, but nothing works. Could you please help me? I feel it is very tough. I am using C# 3.0. Thanks

    Read the article

  • iPhone 3GS lens data for Hugin / Panorama Tools

    - by david-ocallaghan
    I'm using the Hugin frontend to Panorama Tools and it requires camera and lens data for the images it handles (details here: http://wiki.panotools.org/Hugin_Camera_and_Lens_tab). Can anyone tell me the appropriate values for the iPhone 3GS camera? Lens: rectilinear Horizontal Field of View: ? focal length: 3.85mm crop factor / focal length multiplier: ?
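    Not the exact 3GS figures, but the two missing values follow from the focal length and the sensor dimensions, so here is a hedged sketch; the 3.6 x 2.7 mm sensor size is an assumption, so substitute the real figures if you have them.

```python
import math

focal_length_mm = 3.85                    # from the question
sensor_w_mm, sensor_h_mm = 3.6, 2.7       # ASSUMED 1/4"-class 4:3 sensor; use real values if known

hfov_deg = 2 * math.degrees(math.atan(sensor_w_mm / (2 * focal_length_mm)))
crop_factor = math.hypot(36.0, 24.0) / math.hypot(sensor_w_mm, sensor_h_mm)  # full-frame diag / sensor diag

print(f"HFoV ~ {hfov_deg:.1f} deg, crop factor ~ {crop_factor:.1f}")
```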

    Read the article

  • OpenNMS monitoring SAP

    - by HannesFostie
    I was wondering if anyone had any experience plugging SAP into their OpenNMS installation. I'm mostly looking for experiences, perhaps Nagios comparisons, or some more concrete information on what is being monitored and how you did it. Go into as much detail as you like. I am currently in the process of evaluating both Nagios and OpenNMS, and the possibilities with SAP might be the deciding factor here. Sadly, I didn't find a whole lot on Google on the subject.

    Read the article

  • Proftp error message Fatal: unknown configuration directive 'DisplayFirstChdir' on line 22 of '/etc/proftpd/proftpd.conf'

    - by LedZeppelin
    Sorry for the newb factor but I'm trying to set up a server using this guide: http://www.intac.net/build-your-own-server/ I'm at the end of step 5 and when I try to restart proftp I get the following error message me@me-desktop:~$ sudo service proftpd restart * Stopping ftp server proftpd [ OK ] * Starting ftp server proftpd Fatal: unknown configuration directive 'DisplayFirstChdir' on line 22 of '/etc/proftpd/proftpd.conf' [fail] Any clues on how to change line 22?
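    If I recall the ProFTPD changes correctly (worth verifying against the 1.3.x documentation), DisplayFirstChdir was renamed to DisplayChdir, so a proftpd.conf copied from an older guide trips exactly this "unknown configuration directive" error. The usual fix is to swap the directive name on the offending line, for example:

```
# /etc/proftpd/proftpd.conf -- line 22
# old directive name used by the guide:
#   DisplayFirstChdir    .message
# name expected by current ProFTPD releases:
DisplayChdir             .message true
```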

    Read the article

  • Server have 2 psu, can i only turn on 1 psu, to reduce cost in colocation?

    - by Earl
    I just got a server and want to colocate it in a datacenter. Server details: HP DL380, 2x Intel Xeon (3.06GHz/533, 512KB L2 Cache), 8x fans, rack form factor (2U), 2x 400W power supplies. The server has 2 PSUs; can I turn on only 1 PSU to reduce the colocation cost, and will the server still run well? The standard colocation packages in my city only include 400W of power by default; an additional 400W costs about $40-60 extra per month. Please give suggestions from your experience.

    Read the article

  • Practical uses for PS3 linux? [closed]

    - by NoCarrier
    I've got a PS3 sitting around that I don't use much because (a) I don't have time for games and (b) I don't have time for movies (Blu-ray, DVD, and otherwise). So I'm considering loading some flavor of Linux onto it. Besides the gee-whiz-but-does-it-run-Linux factor, can anyone suggest any practical uses for doing so?

    Read the article

  • Benchmark MySQL Cluster using flexAsynch: No free node id found for mysqld(API)?

    - by quanta
    I am going to benchmark MySQL Cluster using flexAsynch follow this guide, details as below: mkdir /usr/local/mysqlc732/ cd /usr/local/src/mysql-cluster-gpl-7.3.2 cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysqlc732/ -DWITH_NDB_TEST=ON make make install Everything works fine until this step: # /usr/local/mysqlc732/bin/flexAsynch -t 1 -p 80 -l 2 -o 100 -c 100 -n FLEXASYNCH - Starting normal mode Perform benchmark of insert, update and delete transactions 1 number of concurrent threads 80 number of parallel operation per thread 100 transaction(s) per round 2 iterations Load Factor is 80% 25 attributes per table 1 is the number of 32 bit words per attribute Tables are with logging Transactions are executed with hint provided No force send is used, adaptive algorithm used Key Errors are disallowed Temporary Resource Errors are allowed Insufficient Space Errors are disallowed Node Recovery Errors are allowed Overload Errors are allowed Timeout Errors are allowed Internal NDB Errors are allowed User logic reported Errors are allowed Application Errors are disallowed Using table name TAB0 NDBT_ProgramExit: 1 - Failed ndb_cluster.log: WARNING -- Failed to allocate nodeid for API at 127.0.0.1. Returned eror: 'No free node id found for mysqld(API).' I also have recompiled with -DWITH_DEBUG=1 -DWITH_NDB_DEBUG=1. How can I run flexAsynch in the debug mode? # /usr/local/mysqlc732/bin/flexAsynch -h FLEXASYNCH Perform benchmark of insert, update and delete transactions Arguments: -t Number of threads to start, default 1 -p Number of parallel transactions per thread, default 32 -o Number of transactions per loop, default 500 -l Number of loops to run, default 1, 0=infinite -load_factor Number Load factor in index in percent (40 -> 99) -a Number of attributes, default 25 -c Number of operations per transaction -s Size of each attribute, default 1 (PK is always of size 1, independent of this value) -simple Use simple read to read from database -dirty Use dirty read to read from database -write Use writeTuple in insert and update -n Use standard table names -no_table_create Don't create tables in db -temp Create table(s) without logging -no_hint Don't give hint on where to execute transaction coordinator -adaptive Use adaptive send algorithm (default) -force Force send when communicating -non_adaptive Send at a 10 millisecond interval -local 1 = each thread its own node, 2 = round robin on node per parallel trans 3 = random node per parallel trans -ndbrecord Use NDB Record -r Number of extra loops -insert Only run inserts on standard table -read Only run reads on standard table -update Only run updates on standard table -delete Only run deletes on standard table -create_table Only run Create Table of standard table -drop_table Only run Drop Table on standard table -warmup_time Warmup Time before measurement starts -execution_time Execution Time where measurement is done -cooldown_time Cooldown time after measurement completed -table Number of standard table, default 0
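    That warning usually means every [mysqld]/[api] slot in the cluster configuration is already taken or tied to another host, so the flexAsynch NDB API client has nothing to register as. A sketch of the usual remedy, assuming your config.ini currently defines only the slots your mysqld processes use: add one or two spare [api] sections with no hostname, then restart/reload ndb_mgmd so the new slots are available.

```
# config.ini on the management node -- a sketch, assuming the current file
# only defines slots already bound to running mysqld processes
[mysqld]
hostname=127.0.0.1      # existing slot used by the MySQL server

[api]                   # spare slot with no hostname: flexAsynch can claim it
[api]                   # an extra spare or two does no harm

# then reload the configuration, e.g.: ndb_mgmd --reload -f /path/to/config.ini
```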

    Read the article
