Search Results

Search found 16 results on 1 page for 'gacek'.

Page 1/1

  • Which CMS for a photo-blog website?

    - by Gacek
    I need to add a photo-blog to a site that I'm currently working on. It is a very simple site, so the blog doesn't have to be very sophisticated. What I need is:

    - a CMS that allows me to create simple blog-like news entries with one (or more) images at the beginning and some description/comment below. Preferably, I would like to create something that works like these two sites: http://www.photoblog.com/dreamie or http://www.photoblog.pl/mending/
    - it must be customizable; I want to integrate its look as much as possible with the current page: http://saviorforest.tk
    - preferably, it should provide some mechanism for uploading and storing images on the server.

    I thought about WordPress, but it seems to be a little bit too complicated for such a simple task. Do you know any simple and easy-to-use CMS that would work here?

    Read the article

  • Subcontrols not visible in custom control derived from another control

    - by Gacek
    I'm trying to create a custom control by deriving from ZedGraphControl. I need to add a ProgressBar to the control, but I encountered some problems. When I create a custom control and add both ZedGraphControl and ProgressBar to it, everything is OK:

        MyCustomControl
        {
            ZedGraphControl
            ProgressBar
        }

    All elements are visible and working as expected. But I need to derive from ZedGraphControl, and when I add a progress bar as a subcontrol of it:

        MyCustomControl : ZedGraphControl
        {
            ProgressBar
        }

    the progress bar is not visible. Is there any way to force the visibility of the ProgressBar? Is it possible that ZedGraphControl is not displaying its subcontrols? I tried to do the same thing with a simple button and it's also not displayed.
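
    A minimal sketch of the derived control described above, assuming the standard ZedGraph WinForms control (BringToFront is just one thing worth trying, not a confirmed fix):

        using System.Windows.Forms;
        using ZedGraph;

        public class MyCustomControl : ZedGraphControl
        {
            private readonly ProgressBar progressBar = new ProgressBar();

            public MyCustomControl()
            {
                progressBar.Dock = DockStyle.Bottom;
                Controls.Add(progressBar);   // the bar becomes a child of the ZedGraphControl
                progressBar.BringToFront();  // attempt to place it above the graph surface
            }
        }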

    Read the article

  • Where can I find a simple beta CDF implementation?

    - by Gacek
    I need to use the beta distribution and the inverse beta distribution in my project. There is a quite good but complicated implementation in GSL, but I don't want to use such a big library only to get one function. I would like to either implement it on my own or link to some simple library. Do you know any sources that could help me? I'm looking for any books/articles about numerical approximation of the beta CDF, or libraries where it is implemented. Any other suggestions would also be appreciated. Any programming language, but C++/C# preferred.
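
    For a quick sanity check (not a production implementation), a hedged sketch of one naive approach in C#: approximating the regularized incomplete beta function I_x(a, b), i.e. the beta CDF, by midpoint-rule integration, so the 1/Beta(a, b) normalizing constant cancels out:

        using System;

        static double BetaCdf(double x, double a, double b, int n = 100000)
        {
            double partial = 0.0, total = 0.0;
            double h = 1.0 / n;
            for (int i = 0; i < n; i++)
            {
                double t = (i + 0.5) * h;  // midpoints avoid the endpoint singularities when a, b < 1
                double w = Math.Pow(t, a - 1) * Math.Pow(1 - t, b - 1) * h;  // unnormalized beta density
                total += w;
                if (t <= x) partial += w;
            }
            return partial / total;  // the normalizing constant cancels in the ratio
        }

    For serious accuracy, a continued-fraction implementation (as in GSL or Numerical Recipes) would be the usual choice.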

    Read the article

  • Simple time profiling - strange times

    - by Gacek
    I'm trying to profile my code to check how long it takes to execute some parts of it. I've wrapped the most time-consuming part of the code in something like this:

        DateTime start = DateTime.Now;
        ...
        ... // Here comes the time-consuming part
        ...
        Console.WriteLine((DateTime.Now - start).Milliseconds);

    The program executes this part of the code for a couple of seconds (about 20 s), but in the console I get a result of about 800 milliseconds. Why is that so? What am I doing wrong?
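
    As a side note, a minimal sketch of the same measurement with System.Diagnostics.Stopwatch, the class usually recommended for elapsed-time measurement; note that it exposes the total elapsed time directly, whereas TimeSpan.Milliseconds is only the 0-999 millisecond component:

        using System;
        using System.Diagnostics;

        var sw = Stopwatch.StartNew();
        // ... time-consuming part ...
        sw.Stop();
        Console.WriteLine(sw.ElapsedMilliseconds);  // total elapsed milliseconds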

    Read the article

  • How to simulate an OutOfMemory exception

    - by Gacek
    I need to refactor my project in order to make it immune to OutOfMemory exceptions. There are huge collections used in my project, and by changing one parameter I can make my program more accurate or make it use less memory... OK, that's the background. What I would like to do is run the routines in a loop (see the sketch after this list):

    1. Run the subroutines with the default parameter.
    2. Catch the OutOfMemory exception, change the parameter and try to run it again.
    3. Repeat step 2 until the parameters allow the subroutines to run without throwing the exception (usually only one change will be needed).

    Now I would like to test it. I know that I can throw the OutOfMemory exception on my own, but I would like to simulate a real situation. So the main question is: is there a way of setting some kind of memory limit for my program, after reaching which the OutOfMemory exception is thrown automatically? For example, I would like to set a limit of, say, 400 MB of memory for my whole program, to simulate the situation where that is the amount of memory available in the system. Can it be done?
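
    For reference, a minimal sketch of the retry loop described in the steps above; RunSubroutines and ReduceMemoryFootprint are hypothetical stand-ins for the project's own methods:

        int parameter = defaultParameter;  // hypothetical starting value
        bool succeeded = false;
        while (!succeeded)
        {
            try
            {
                RunSubroutines(parameter);  // hypothetical: the memory-hungry work
                succeeded = true;
            }
            catch (OutOfMemoryException)
            {
                // hypothetical: trade accuracy for a smaller memory footprint
                parameter = ReduceMemoryFootprint(parameter);
            }
        }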

    Read the article

  • Filter that uses elements from two arrays at the same time

    - by Gacek
    Let's assume we have two arrays of the same size, A and B. Now we need a filter that, for a given mask size, selects elements from A, but replaces the central element of the mask with the corresponding element from B. So the 3x3 "pseudo-mask" will look similar to this:

        A A A
        A B A
        A A A

    Doing something like this for an averaging filter is quite simple. We can compute the mean value of the elements from A without the central element, and then combine it in the proper proportion with the elements from B:

        h = ones(3,3);
        h(2,2) = 0;
        h = h/sum(h(:));
        A_ave = filter2(h, A);
        C = (8/9) * A_ave + (1/9) * B;

    But how to do something similar for a median filter (medfilt2, or even better, ordfilt2)?

    Read the article

  • Two tables side by side in one column LaTeX environment

    - by Gacek
    The question is similar to this one: http://stackoverflow.com/questions/1491717/how-to-display-a-content-in-two-column-layout-in-latex but about placing two tables side by side. I have two small tables that look like this:

        \begin{table}[t]
          \begin{tabular}{|c|l||r|r||r|r|}
            %content goes here
          \end{tabular}
          \caption{some caption}
        \end{table}

        \begin{table}[t]
          \begin{tabular}{|c|l||r|r||r|r|}
            %content goes here
          \end{tabular}
          \caption{some caption for second table}
        \end{table}

    I have a one-column document and these tables are really narrow, so I would like to display them side by side (with separate captions) instead of one under the other with a lot of unused white space. I tried to do it with multicols, but it seems that floats (which tables are) cannot be placed inside it. Any ideas?

    Read the article

  • Vectorizing loops in MATLAB - performance issues

    - by Gacek
    This question is related to these two:
    http://stackoverflow.com/questions/2867901/introduction-to-vectorizing-in-matlab-any-good-tutorials
    http://stackoverflow.com/questions/2561617/filter-that-uses-elements-from-two-arrays-at-the-same-time

    Based on the tutorials I read, I was trying to vectorize a procedure that takes a really long time. I've rewritten this:

        function B = bfltGray(A,w,sigma_r)
        dim = size(A);
        B = zeros(dim);
        for i = 1:dim(1)
            for j = 1:dim(2)
                % Extract local region.
                iMin = max(i-w,1);
                iMax = min(i+w,dim(1));
                jMin = max(j-w,1);
                jMax = min(j+w,dim(2));
                I = A(iMin:iMax,jMin:jMax);
                % Compute Gaussian intensity weights.
                F = exp(-0.5*(abs(I-A(i,j))/sigma_r).^2);
                B(i,j) = sum(F(:).*I(:))/sum(F(:));
            end
        end

    into this:

        function B = rngVect(A, w, sigma)
        W = 2*w+1;
        I = padarray(A, [w,w], 'symmetric');
        I = im2col(I, [W,W]);
        H = exp(-0.5*(abs(I-repmat(A(:)', size(I,1), 1))/sigma).^2);
        B = reshape(sum(H.*I,1)./sum(H,1), size(A, 1), []);

    But this version seems to be as slow as the first one, and in addition it uses a lot of memory and sometimes causes memory problems. I suppose I've made something wrong - probably some logic mistake in the vectorization. Well, in fact I'm not surprised: this method creates really big matrices, and the computations are probably proportionally longer. I have also tried to write it using nlfilter (similar to the second solution given by Jonas), but that seems hard since I use Matlab 6.5 (R13), where no sophisticated function handles are available. So once again, I'm asking not for a ready solution, but for ideas that would help me solve this in reasonable time. Maybe you can point out what I did wrong.

    Read the article

  • Introduction to vectorizing in MATLAB - any good tutorials?

    - by Gacek
    I'm looking for any good tutorials on vectorizing (loops) in MATLAB. I have a quite simple algorithm, but it uses two for loops. I know that it should be simple to vectorize, and I would like to learn how to do it instead of asking you for the solution. But to let you know what problem I have, so you can suggest the tutorials best suited to similar problems, here's the outline:

        B = zeros(size(A)); % A is a given matrix
        for i=1:size(A,1)
            for j=1:size(A,2)
                % take some elements surrounding the element at position (i,j),
                % i.e. using a 3x3 mask
                H = ...
                % compute something on the selected elements and place it in B
                B(i,j) = computeSth(H);
            end
        end

    So, I'm NOT asking for the solution. I'm asking for good tutorials and examples of vectorizing loops in MATLAB. I would like to learn how to do it and do it on my own.

    Read the article

  • Incremental way of computing quantiles for a large set of data

    - by Gacek
    I need to compute the quantiles for a large set of data. Let's assume we can get the data only in portions (i.e. one row of a large matrix at a time). To compute the Q3 quantile, one needs to get all the portions of the data, store them somewhere, then sort the whole thing and compute the quantile:

        List<double> allData = new List<double>();
        // this is only an example; in fact the portions of data are not rows of some matrix
        foreach(var row in matrix)
        {
            allData.AddRange(row);
        }
        allData.Sort();
        double p = 0.75*allData.Count;
        int idQ3 = (int)Math.Ceiling(p) - 1;
        double Q3 = allData[idQ3];

    Now I would like to find a way of computing this without storing the data in a separate variable. The best solution would be to compute some parameters of the mid-results for the first row and then adjust them step by step for the next rows. Note:

    - These datasets are really big (ca. 5000 elements in each row).
    - The Q3 can be estimated; it doesn't have to be an exact value.
    - I call the portions of data "rows", but they can have different lengths! Usually it doesn't vary much (+/- a few hundred samples), but it varies!

    This question is similar to this one: http://stackoverflow.com/questions/1058813/on-line-iterator-algorithms-for-estimating-statistical-median-mode-skewness but I need to compute quantiles. There are also a few articles on this topic, e.g.: http://web.cs.wpi.edu/~hofri/medsel.pdf and http://portal.acm.org/citation.cfm?id=347195&dl But before I try to implement these, I wanted to ask whether there are maybe any other, quicker ways of computing the 0.25/0.75 quantiles?
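
    Since an estimate is acceptable, one family of techniques worth naming here (a swapped-in suggestion, not something from the question itself) is stochastic approximation: keep a running estimate q of the p-quantile and nudge it with every incoming sample. A hedged sketch; the fixed step size is a naive choice and would need tuning or a decaying schedule in practice:

        double p = 0.75;         // target quantile (Q3)
        double q = firstSample;  // running estimate, seeded with any early sample
        double step = 0.1;       // hypothetical step size; tune or decay it
        foreach (double x in stream)  // 'stream' stands for the samples from all rows
        {
            if (x < q) q -= step * (1 - p);  // sample below the estimate: nudge it down a little
            else       q += step * p;        // sample at or above: nudge it up
        }
        // q now approximates Q3 without storing the data

    At equilibrium the fraction of samples below q settles at p, which is exactly the quantile condition.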

    Read the article

  • Creating a matrix of maximum-value indices in MATLAB

    - by Gacek
    Using MATLAB, I have an array of values of size 8 rows x N columns. I need to create a matrix of the same size that marks the maximum value in each column: it puts 1 in the cell that contains the column's maximum value, and 0 elsewhere. A little example. Let's assume we have an array of values D:

        D = 0.0088358   0.0040346   0.40276     0.0053221
            0.017503    0.011966    0.015095    0.017383
            0.14337     0.38608     0.16509     0.15763
            0.27546     0.25433     0.2764      0.28442
            0.01629     0.0060465   0.0082339   0.0099775
            0.034521    0.01196     0.016289    0.021012
            0.12632     0.13339     0.11113     0.10288
            0.3777      0.19219     0.005005    0.40137

    Then the output matrix for such a matrix D would be:

        0 0 1 0
        0 0 0 0
        0 1 0 0
        0 0 0 0
        0 0 0 0
        0 0 0 0
        0 0 0 0
        1 0 0 1

    Is there a way to do it without catching the vector of indices from the max function and then putting ones in the right places using a for loop?

    Read the article

  • Memory leaks while using array of double

    - by Gacek
    I have a part of code that operates on large arrays of double (containing at least about 6000 elements) and executes several hundred times (usually 800). When I use a standard loop, like this:

        double[] singleRow = new double[6000];
        int maxI = 800;
        for(int i = 0; i < maxI; i++)
        {
            singleRow = someObject.producesOutput();
            //...
            // do something with singleRow
            // ...
        }

    the memory usage rises by about 40 MB (from 40 MB at the beginning of the loop to 80 MB at the end). When I force the garbage collector to execute at every iteration, the memory usage stays at the level of 40 MB (the rise is insignificant):

        double[] singleRow = new double[6000];
        int maxI = 800;
        for(int i = 0; i < maxI; i++)
        {
            singleRow = someObject.producesOutput();
            //...
            // do something with singleRow
            // ...
            GC.Collect();
        }

    But the execution time is 3 times longer! (It is crucial.) How can I force C# to use the same area of memory instead of allocating new areas? Note: I have access to the code of the someObject class, so if it is needed, I can change it.
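
    Since the someObject code can be changed, a hedged sketch of the buffer-reuse idea: instead of returning a fresh array on each call, the producer fills a caller-supplied buffer (FillOutput is a hypothetical method the class would need to gain):

        double[] buffer = new double[6000];
        int maxI = 800;
        for (int i = 0; i < maxI; i++)
        {
            someObject.FillOutput(buffer);  // hypothetical: writes into the existing array
            // ... do something with buffer ...
        }

    With a single long-lived array there is nothing new for the GC to collect inside the loop.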

    Read the article

  • Counting point size based on chart area during zooming/unzooming

    - by Gacek
    Hi folks. I have a quite simple task. I know (I suppose) it should be easy, but for reasons I cannot understand, I've been trying to solve it for 2 days and I don't know where I'm making the mistake. The problem is as follows:

    - we have a chart with some points
    - the chart starts with some known area, and the points have a known size
    - we would like to "emulate" a zooming effect, so when we zoom in to some part of the chart, the size of the points gets proportionally bigger. In other words, the smaller the part of the chart we select, the bigger the points should get.

    So, we have something like that. We know these two parameters:

        initialArea; // initial area - area of the whole chart, computed as width*height
        initialSize; // initial size of the points

    Now let's assume we are handling some kind of OnZoom event. We selected some part of the chart and would like to compute the current size of the points:

        float CountSizeOnZoom()
        {
            float currentArea = CountArea(...); // the area is computed for us
            float currentSize = initialSize * initialArea / currentArea;
            return currentSize;
        }

    And it works, but the rate of change is too fast. In other words, the points get really big too soon. So I would like currentSize to be inversely proportional to currentArea, but with some scaling coefficient. So I created a second function:

        float CountSizeOnZoom()
        {
            float currentArea = CountArea(...); // the area is computed for us
            // Let's assume we want the size of the points to change
            // ten times slower than the area of the chart changes.
            float currentSize = initialSize + 0.1f * initialSize * ((initialArea / currentArea) - 1);
            return currentSize;
        }

    Let's do some calculations in our heads. If currentArea is smaller than initialArea, then initialArea/currentArea > 1, and we add something small and positive to initialSize. Checked; it works. Now let's check what happens when we un-zoom: currentArea will be equal to initialArea, so we would have 0 on the right side (1 - 1), so the new size should be equal to initialSize. Right? Yeah. So let's check it... and it doesn't work. My question is: where is the mistake? Or maybe you have any ideas how to compute this scaled size depending on the current area in some other way?
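
    For what it's worth, a tiny self-contained check of the second formula at the un-zoom point (all numbers here are made-up test values):

        float initialSize = 4f, initialArea = 1000f;
        float currentArea = initialArea;  // fully un-zoomed
        float currentSize = initialSize + 0.1f * initialSize * ((initialArea / currentArea) - 1f);
        Console.WriteLine(currentSize);   // prints 4: algebraically the formula does return initialSize

    If the formula checks out in isolation like this, the mistake presumably lies in how initialArea, initialSize or currentArea are tracked across zoom events rather than in the formula itself.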

    Read the article

  • Filtering two arrays to avoid Inf/NaN values

    - by Gacek
    I have two arrays of doubles of the same size, containing X and Y values for some plots. I need to create some kind of protection against Inf/NaN values: I need to find all pairs of values (X, Y) for which both X and Y are neither Inf nor NaN. If I had one array, I could do it using lambdas:

        var filteredValues = someValues.Where(d => !(double.IsNaN(d) || double.IsInfinity(d))).ToList();

    Now, for two arrays I use the following loop:

        List<double> filteredX = new List<double>();
        List<double> filteredY = new List<double>();
        for(int i = 0; i < XValues.Count; i++)
        {
            if(!double.IsNaN(XValues[i]) && !double.IsInfinity(XValues[i]) &&
               !double.IsNaN(YValues[i]) && !double.IsInfinity(YValues[i]))
            {
                filteredX.Add(XValues[i]);
                filteredY.Add(YValues[i]);
            }
        }

    Is there any way of filtering two arrays at the same time using LINQ/lambdas, as was done for the single array?
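
    One possible shape for the LINQ version (a sketch assuming .NET 4's Enumerable.Zip is available; the anonymous-type pairing is just one way to carry both values through the query):

        var pairs = XValues.Zip(YValues, (x, y) => new { x, y })
                           .Where(p => !double.IsNaN(p.x) && !double.IsInfinity(p.x) &&
                                       !double.IsNaN(p.y) && !double.IsInfinity(p.y))
                           .ToList();
        var filteredX = pairs.Select(p => p.x).ToList();
        var filteredY = pairs.Select(p => p.y).ToList();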

    Read the article

  • How do I profile memory usage in my project?

    - by Gacek
    Are there any good, free tools to profile memory usage in C#? Details: I have a visualization project that uses quite large collections. I would like to check which parts of this project - the data-processing side or the visualization side - use most of the memory, so I can optimize it. I know that when it comes to computing the size of a collection, the case is quite simple and I can do it on my own. But there are also certain elements for which I cannot estimate the memory usage so easily. The memory usage is quite big; for example, when processing a file of size 35 MB, my program uses a little more than 250 MB of RAM.
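
    In the meantime, a crude in-code measurement that needs no external tool (GC.GetTotalMemory(true) forces a full collection first, so it is slow and only a rough guide):

        long before = GC.GetTotalMemory(true);
        // ... build the data structure under suspicion ...
        long after = GC.GetTotalMemory(true);
        Console.WriteLine("approx. {0} MB", (after - before) / (1024 * 1024));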

    Read the article

  • Intersection between sets containing different types of variables

    - by Gacek
    Let's assume we have two collections:

        List<double> values
        List<SomePoint> points

    where SomePoint is a type containing the three coordinates of a point:

        SomePoint
        {
            double X;
            double Y;
            double Z;
        }

    Now I would like to perform an intersection between these two collections, to find the points in points whose Z coordinate is equal to one of the elements of values. I created something like this:

        HashSet<double> hash = new HashSet<double>(points.Select(p => p.Z));
        hash.IntersectWith(values);
        var result = new List<SomePoint>();
        foreach(var h in hash)
            result.Add(points.Find(p => p.Z == h));

    But it returns only the first point found for each Z value; points that share the same Z but have different X and Y are missed. Is there any better way to do it?
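
    A sketch of one way the lookup could be phrased so that every matching point is kept (a HashSet gives O(1) membership tests, and Where keeps all matches, unlike Find):

        var zSet = new HashSet<double>(values);
        List<SomePoint> result = points.Where(p => zSet.Contains(p.Z)).ToList();

    The usual caveat about comparing doubles for exact equality applies here just as in the original loop.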

    Read the article
