Search Results

Search found 2898 results on 116 pages for 'sum of digits'.

Page 46/116

  • Total of unknown categories in SSRS 2005

    - by Lijo
    Hi, I am working with SSRS 2005. I have a requirement to display totals in the footer: the total for each category. What I used to do was write an expression for every category name and hide the totals that have no value in the current selection: Mango Count = Sum(IIf(Fields!Category.Value = "Mango", 1, 0)) Apple Count = Sum(IIf(Fields!Category.Value = "Apple", 1, 0)) However, in the new requirement I don't know the categories in advance; there could be any number of them. Is it possible to write an expression for this? Please help. Thanks, Lijo Cheeran Joseph
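
    With unknown categories the per-category expressions cannot work, since each IIf hard-codes a name. A sketch of the usual alternative (assuming a table region and a Quantity field, neither of which the question names): add a row group on the category and put a plain total in the group footer, so every category that appears in the data gets its own total with no hard-coded names:

      ' Group expression for the table's category group:
      =Fields!Category.Value
      ' Total placed in the group footer (no per-category IIf needed):
      =Sum(Fields!Quantity.Value)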

    Read the article

  • Algorithm: How to tell if an array is a permutation in O(n)?

    - by Iulian Serbanoiu
    Hello, Input: a read-only array of N elements containing integer values from 1 to N, and a memory zone of a fixed size (10, 100, 1000 etc. - not depending on N). How to tell in O(n) if the array represents a permutation? What I achieved so far: I use the limited memory area to store the sum and the product of the array. I compare the sum with N*(N+1)/2 and the product with N!. I know that if condition (2) is true I might have a permutation. I'm wondering if there's a way to prove that condition (2) is sufficient to tell if I have a permutation. So far I haven't figured this out... Thanks, Iulian
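
    One worked counterexample (not from the original question) shows the sum and product checks are not sufficient on their own: for N = 9 the array [1, 2, 4, 4, 4, 5, 7, 9, 9] sums to 45 = 9*10/2 and its product is 362880 = 9!, yet it is not a permutation. So an O(n) answer needs more than those two invariants.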

    Read the article

  • Generic Method to find the tuples used for computation in Postgres?

    - by Rahul
    If I have a table

      col1 | name             | pay
      -----+------------------+-----
      1    | Steve Jobs       | 1006
      2    | Mike Markkula    | 1007
      3    | Mike Scott       | 1978
      4    | John Sculley     | 1983
      5    | Michael Spindler | 1653

    and the user executes a sum query which sums the pay of people getting paid more than $1500, is there a way to also implicitly know which tuples were used to satisfy the condition for the sum? I know you can separately write another query to just return the primary key ids which satisfy the condition. But is there any other way to do that in the same query? Probably rewrite the query in some way? Or... any suggestion?
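
    A sketch of one approach (the table name payroll is made up, since the question doesn't give one): Postgres 8.4+ can collect the keys of the qualifying rows in the same statement with array_agg, so the sum and the contributing tuples come back together:

      SELECT SUM(pay)        AS total_pay,
             ARRAY_AGG(col1) AS contributing_ids
      FROM   payroll
      WHERE  pay > 1500;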

    Read the article

  • MYSQL - Help with a more complicated Query

    - by Joe
    I have two tables: tbl_lists and tbl_houses. Inside tbl_lists I have a field called HousesList - it contains the IDs of several houses in the following format: 1# 2# 4# 51# 3# I need to be able to select the fields from tbl_houses WHERE ID matches any of the IDs in that list. More specifically, I need to SELECT SUM(tbl_houses.HouseValue) WHERE tbl_houses.ID IN tbl_lists.HousesList -- and I want this select to return the SUM for several rows in tbl_lists. Can anyone help? I'm trying to do this in a SINGLE query, since I don't want to run any MySQL "loops" (within PHP).
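
    A sketch of one way (assuming tbl_lists has an Id primary key, which the question doesn't show): normalise the '#'-separated list to a comma-separated one on the fly and join with FIND_IN_SET. Storing one row per house in a link table would still be the cleaner long-term fix:

      SELECT l.Id,
             SUM(h.HouseValue) AS total_value
      FROM   tbl_lists l
      JOIN   tbl_houses h
        ON   FIND_IN_SET(h.ID,
               REPLACE(TRIM(TRAILING '#' FROM REPLACE(l.HousesList, ' ', '')), '#', ',')) > 0
      GROUP  BY l.Id;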

    Read the article

  • Steganography Experiment - Trouble hiding message bits in DCT coefficients

    - by JohnHankinson
    I have an application requiring me to be able to embed loss-less data into an image. As such I've been experimenting with steganography, specifically via modification of DCT coefficients, as the method I select, apart from being loss-less, must also be relatively resilient against format conversion, scaling/DSP etc. From the research I've done thus far this method seems to be the best candidate. I've seen a number of papers on the subject which all seem to neglect specific details (some neglect to mention modification of 0 coefficients, or modification of AC coefficients, etc). After combining the findings I made a few modifications of my own, which include:

    1) Using a more quantized version of the DCT matrix to ensure we only modify coefficients that would still be present should the image be JPEG'ed further or processed (I'm using this in place of simply following a zig-zag pattern).
    2) Modifying bit 4 instead of the LSB and then, based on what the original bit value was, adjusting the lower bits to minimize the difference.
    3) Only modifying the blue channel, as it should be the least visible.

    This process must modify the actual image and not the DCT values stored in file (like jsteg), as there is no guarantee the file will be a JPEG; it may also be opened and re-saved at a later stage in a different format. For added robustness I've included the message multiple times and use the bits that occur most often. I had considered using a QR code as the message data or simply applying Reed-Solomon error correction, but for this simple application, and given that the "message" in question is usually going to be between 10-32 bytes, I have plenty of room to repeat it, which should provide sufficient redundancy to recover the true bits.

    No matter what I do I don't seem to be able to recover the bits at the decode stage. I've tried including / excluding various checks (even if it degrades image quality for the time being), and I've tried using fixed-point vs. double arithmetic and moving the bit to encode. I suspect that the message bits are being lost during the IDCT back to the image. Any thoughts or suggestions on how to get this working would be hugely appreciated. (PS I am aware that the actual DCT/IDCT could be optimized from its naive O(n^4) operation using the row-column algorithm, or an FDCT like AAN, but for now it just needs to work :) )

    Reference papers:
    http://www.lokminglui.com/dct.pdf
    http://arxiv.org/ftp/arxiv/papers/1006/1006.1186.pdf

    Code for the Encode/Decode process in C# below:

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using System.Drawing.Imaging;
      using System.Drawing;

      namespace ImageKey
      {
          public class Encoder
          {
              public const int HIDE_BIT_POS = 3; // Use bit position 4 (1 << 3).
              public const int HIDE_COUNT = 16;  // Number of times to repeat the message to avoid error.

              // JPEG standard quantization matrix.
              // (To get higher quality multiply by (100-quality)/50;
              // for lower than 50 multiply by 50/quality. Then round to
              // integers and clip to ensure only positive integers.)
              public static double[] Q = {16,11,10,16,24,40,51,61,
                                          12,12,14,19,26,58,60,55,
                                          14,13,16,24,40,57,69,56,
                                          14,17,22,29,51,87,80,62,
                                          18,22,37,56,68,109,103,77,
                                          24,35,55,64,81,104,113,92,
                                          49,64,78,87,103,121,120,101,
                                          72,92,95,98,112,100,103,99};

              // Maximum-quality quantization matrix (all 1's: doesn't modify coefficients at all).
              public static double[] Q2 = {1,1,1,1,1,1,1,1,
                                           1,1,1,1,1,1,1,1,
                                           1,1,1,1,1,1,1,1,
                                           1,1,1,1,1,1,1,1,
                                           1,1,1,1,1,1,1,1,
                                           1,1,1,1,1,1,1,1,
                                           1,1,1,1,1,1,1,1,
                                           1,1,1,1,1,1,1,1};

              public static Bitmap Encode(Bitmap b, string key)
              {
                  Bitmap response = new Bitmap(b.Width, b.Height, PixelFormat.Format32bppArgb);
                  uint imgWidth = ((uint)b.Width) & ~((uint)7);   // Maximum usable X resolution (divisible by 8).
                  uint imgHeight = ((uint)b.Height) & ~((uint)7); // Maximum usable Y resolution (divisible by 8).

                  // Start by transferring the unmodified image portions.
                  // As we'll be using slightly less width/height for the encoding
                  // process we'll need the edges to be populated.
                  for (int y = 0; y < b.Height; y++)
                      for (int x = 0; x < b.Width; x++)
                      {
                          if ((x >= imgWidth && x < b.Width) || (y >= imgHeight && y < b.Height))
                              response.SetPixel(x, y, b.GetPixel(x, y));
                      }

                  // Set up the counters and byte data for the message to encode.
                  StringBuilder sb = new StringBuilder();
                  for (int i = 0; i < HIDE_COUNT; i++)
                      sb.Append(key);
                  byte[] codeBytes = System.Text.Encoding.ASCII.GetBytes(sb.ToString());
                  int bitofs = 0;                         // Current bit position we've encoded to.
                  int totalBits = (codeBytes.Length * 8); // Total number of bits to encode.

                  for (int y = 0; y < imgHeight; y += 8)
                  {
                      for (int x = 0; x < imgWidth; x += 8)
                      {
                          int[] redData = GetRedChannelData(b, x, y);
                          int[] greenData = GetGreenChannelData(b, x, y);
                          int[] blueData = GetBlueChannelData(b, x, y);
                          int[] newRedData;
                          int[] newGreenData;
                          int[] newBlueData;

                          if (bitofs < totalBits)
                          {
                              double[] redDCT = DCT(ref redData);
                              double[] greenDCT = DCT(ref greenData);
                              double[] blueDCT = DCT(ref blueData);

                              int[] redDCTI = Quantize(ref redDCT, ref Q2);
                              int[] greenDCTI = Quantize(ref greenDCT, ref Q2);
                              int[] blueDCTI = Quantize(ref blueDCT, ref Q2);
                              int[] blueDCTC = Quantize(ref blueDCT, ref Q);

                              HideBits(ref blueDCTI, ref blueDCTC, ref bitofs, ref totalBits, ref codeBytes);

                              double[] redDCT2 = DeQuantize(ref redDCTI, ref Q2);
                              double[] greenDCT2 = DeQuantize(ref greenDCTI, ref Q2);
                              double[] blueDCT2 = DeQuantize(ref blueDCTI, ref Q2);

                              newRedData = IDCT(ref redDCT2);
                              newGreenData = IDCT(ref greenDCT2);
                              newBlueData = IDCT(ref blueDCT2);
                          }
                          else
                          {
                              newRedData = redData;
                              newGreenData = greenData;
                              newBlueData = blueData;
                          }

                          MapToRGBRange(ref newRedData);
                          MapToRGBRange(ref newGreenData);
                          MapToRGBRange(ref newBlueData);

                          for (int dy = 0; dy < 8; dy++)
                          {
                              for (int dx = 0; dx < 8; dx++)
                              {
                                  int col = (0xff << 24) +
                                            (newRedData[dx + (dy * 8)] << 16) +
                                            (newGreenData[dx + (dy * 8)] << 8) +
                                            (newBlueData[dx + (dy * 8)]);
                                  response.SetPixel(x + dx, y + dy, Color.FromArgb(col));
                              }
                          }
                      }
                  }

                  if (bitofs < totalBits)
                      throw new Exception("Failed to encode data - insufficient cover image coefficients");
                  return (response);
              }

              public static void HideBits(ref int[] DCTMatrix, ref int[] CMatrix,
                                          ref int bitofs, ref int totalBits, ref byte[] codeBytes)
              {
                  int tempValue = 0;
                  for (int u = 0; u < 8; u++)
                  {
                      for (int v = 0; v < 8; v++)
                      {
                          if ((u != 0 || v != 0) && CMatrix[v + (u * 8)] != 0 && DCTMatrix[v + (u * 8)] != 0)
                          {
                              if (bitofs < totalBits)
                              {
                                  tempValue = DCTMatrix[v + (u * 8)];
                                  int bytePos = (bitofs) >> 3;
                                  int bitPos = (bitofs) % 8;
                                  byte mask = (byte)(1 << bitPos);
                                  byte value = (byte)((codeBytes[bytePos] & mask) >> bitPos); // 0 or 1.
                                  if (value == 0)
                                  {
                                      int a = DCTMatrix[v + (u * 8)] & (1 << HIDE_BIT_POS);
                                      if (a != 0)
                                          DCTMatrix[v + (u * 8)] |= (1 << HIDE_BIT_POS) - 1;
                                      DCTMatrix[v + (u * 8)] &= ~(1 << HIDE_BIT_POS);
                                  }
                                  else if (value == 1)
                                  {
                                      int a = DCTMatrix[v + (u * 8)] & (1 << HIDE_BIT_POS);
                                      if (a == 0)
                                          DCTMatrix[v + (u * 8)] &= ~((1 << HIDE_BIT_POS) - 1);
                                      DCTMatrix[v + (u * 8)] |= (1 << HIDE_BIT_POS);
                                  }
                                  if (DCTMatrix[v + (u * 8)] != 0)
                                      bitofs++;
                                  else
                                      DCTMatrix[v + (u * 8)] = tempValue;
                              }
                          }
                      }
                  }
              }

              public static void MapToRGBRange(ref int[] data)
              {
                  for (int i = 0; i < data.Length; i++)
                  {
                      data[i] += 128;
                      if (data[i] < 0)
                          data[i] = 0;
                      else if (data[i] > 255)
                          data[i] = 255;
                  }
              }

              public static int[] GetRedChannelData(Bitmap b, int sx, int sy)
              {
                  int[] data = new int[8 * 8];
                  for (int y = sy; y < (sy + 8); y++)
                  {
                      for (int x = sx; x < (sx + 8); x++)
                      {
                          uint col = (uint)b.GetPixel(x, y).ToArgb();
                          data[(x - sx) + ((y - sy) * 8)] = (int)((col >> 16) & 0xff) - 128;
                      }
                  }
                  return (data);
              }

              public static int[] GetGreenChannelData(Bitmap b, int sx, int sy)
              {
                  int[] data = new int[8 * 8];
                  for (int y = sy; y < (sy + 8); y++)
                  {
                      for (int x = sx; x < (sx + 8); x++)
                      {
                          uint col = (uint)b.GetPixel(x, y).ToArgb();
                          data[(x - sx) + ((y - sy) * 8)] = (int)((col >> 8) & 0xff) - 128;
                      }
                  }
                  return (data);
              }

              public static int[] GetBlueChannelData(Bitmap b, int sx, int sy)
              {
                  int[] data = new int[8 * 8];
                  for (int y = sy; y < (sy + 8); y++)
                  {
                      for (int x = sx; x < (sx + 8); x++)
                      {
                          uint col = (uint)b.GetPixel(x, y).ToArgb();
                          data[(x - sx) + ((y - sy) * 8)] = (int)((col >> 0) & 0xff) - 128;
                      }
                  }
                  return (data);
              }

              public static int[] Quantize(ref double[] DCTMatrix, ref double[] Q)
              {
                  int[] DCTMatrixOut = new int[8 * 8];
                  for (int u = 0; u < 8; u++)
                  {
                      for (int v = 0; v < 8; v++)
                      {
                          DCTMatrixOut[v + (u * 8)] = (int)Math.Round(DCTMatrix[v + (u * 8)] / Q[v + (u * 8)]);
                      }
                  }
                  return (DCTMatrixOut);
              }

              public static double[] DeQuantize(ref int[] DCTMatrix, ref double[] Q)
              {
                  double[] DCTMatrixOut = new double[8 * 8];
                  for (int u = 0; u < 8; u++)
                  {
                      for (int v = 0; v < 8; v++)
                      {
                          DCTMatrixOut[v + (u * 8)] = (double)DCTMatrix[v + (u * 8)] * Q[v + (u * 8)];
                      }
                  }
                  return (DCTMatrixOut);
              }

              public static double[] DCT(ref int[] data)
              {
                  double[] DCTMatrix = new double[8 * 8];
                  for (int v = 0; v < 8; v++)
                  {
                      for (int u = 0; u < 8; u++)
                      {
                          double cu = 1;
                          if (u == 0)
                              cu = (1.0 / Math.Sqrt(2.0));
                          double cv = 1;
                          if (v == 0)
                              cv = (1.0 / Math.Sqrt(2.0));

                          double sum = 0.0;
                          for (int y = 0; y < 8; y++)
                          {
                              for (int x = 0; x < 8; x++)
                              {
                                  double s = data[x + (y * 8)];
                                  double dctVal = Math.Cos((2 * y + 1) * v * Math.PI / 16) *
                                                  Math.Cos((2 * x + 1) * u * Math.PI / 16);
                                  sum += s * dctVal;
                              }
                          }
                          DCTMatrix[u + (v * 8)] = (0.25 * cu * cv * sum);
                      }
                  }
                  return (DCTMatrix);
              }

              public static int[] IDCT(ref double[] DCTMatrix)
              {
                  int[] Matrix = new int[8 * 8];
                  for (int y = 0; y < 8; y++)
                  {
                      for (int x = 0; x < 8; x++)
                      {
                          double sum = 0;
                          for (int v = 0; v < 8; v++)
                          {
                              for (int u = 0; u < 8; u++)
                              {
                                  double cu = 1;
                                  if (u == 0)
                                      cu = (1.0 / Math.Sqrt(2.0));
                                  double cv = 1;
                                  if (v == 0)
                                      cv = (1.0 / Math.Sqrt(2.0));
                                  double idctVal = (cu * cv) / 4.0 *
                                                   Math.Cos((2 * y + 1) * v * Math.PI / 16) *
                                                   Math.Cos((2 * x + 1) * u * Math.PI / 16);
                                  sum += (DCTMatrix[u + (v * 8)] * idctVal);
                              }
                          }
                          Matrix[x + (y * 8)] = (int)Math.Round(sum);
                      }
                  }
                  return (Matrix);
              }
          }

          public class Decoder
          {
              public static string Decode(Bitmap b, int expectedLength)
              {
                  expectedLength *= Encoder.HIDE_COUNT;
                  uint imgWidth = ((uint)b.Width) & ~((uint)7);   // Maximum usable X resolution (divisible by 8).
                  uint imgHeight = ((uint)b.Height) & ~((uint)7); // Maximum usable Y resolution (divisible by 8).

                  // Set up the counters and byte data for the message to decode.
                  byte[] codeBytes = new byte[expectedLength];
                  byte[] outBytes = new byte[expectedLength / Encoder.HIDE_COUNT];
                  int bitofs = 0;                         // Current bit position we've decoded to.
                  int totalBits = (codeBytes.Length * 8); // Total number of bits to decode.

                  for (int y = 0; y < imgHeight; y += 8)
                  {
                      for (int x = 0; x < imgWidth; x += 8)
                      {
                          int[] blueData = ImageKey.Encoder.GetBlueChannelData(b, x, y);
                          double[] blueDCT = ImageKey.Encoder.DCT(ref blueData);
                          int[] blueDCTI = ImageKey.Encoder.Quantize(ref blueDCT, ref Encoder.Q2);
                          int[] blueDCTC = ImageKey.Encoder.Quantize(ref blueDCT, ref Encoder.Q);
                          if (bitofs < totalBits)
                              GetBits(ref blueDCTI, ref blueDCTC, ref bitofs, ref totalBits, ref codeBytes);
                      }
                  }

                  // Majority vote across the HIDE_COUNT repetitions of the message.
                  bitofs = 0;
                  for (int i = 0; i < (expectedLength / Encoder.HIDE_COUNT) * 8; i++)
                  {
                      int bytePos = (bitofs) >> 3;
                      int bitPos = (bitofs) % 8;
                      byte mask = (byte)(1 << bitPos);
                      List<int> values = new List<int>();
                      int zeroCount = 0;
                      int oneCount = 0;
                      for (int j = 0; j < Encoder.HIDE_COUNT; j++)
                      {
                          int val = (codeBytes[bytePos + ((expectedLength / Encoder.HIDE_COUNT) * j)] & mask) >> bitPos;
                          values.Add(val);
                          if (val == 0)
                              zeroCount++;
                          else
                              oneCount++;
                      }
                      if (oneCount >= zeroCount)
                          outBytes[bytePos] |= mask;
                      bitofs++;
                      values.Clear();
                  }
                  return (System.Text.Encoding.ASCII.GetString(outBytes));
              }

              public static void GetBits(ref int[] DCTMatrix, ref int[] CMatrix,
                                         ref int bitofs, ref int totalBits, ref byte[] codeBytes)
              {
                  for (int u = 0; u < 8; u++)
                  {
                      for (int v = 0; v < 8; v++)
                      {
                          if ((u != 0 || v != 0) && CMatrix[v + (u * 8)] != 0 && DCTMatrix[v + (u * 8)] != 0)
                          {
                              if (bitofs < totalBits)
                              {
                                  int bytePos = (bitofs) >> 3;
                                  int bitPos = (bitofs) % 8;
                                  byte mask = (byte)(1 << bitPos);
                                  int value = DCTMatrix[v + (u * 8)] & (1 << Encoder.HIDE_BIT_POS);
                                  if (value != 0)
                                      codeBytes[bytePos] |= mask;
                                  bitofs++;
                              }
                          }
                      }
                  }
              }
          }
      }

    UPDATE: By switching to a QR code as the source message and swapping a pair of coefficients in each block instead of bit manipulation, I've been able to get the message to survive the transform. However, to get the message through without corruption I have to adjust both coefficients as well as swap them. For example, swapping (3,4) and (4,3) in the DCT matrix and then respectively adding 8 and subtracting 8 as an arbitrary constant seems to work. This survives re-JPEG'ing at quality 96, but any form of scaling/cropping destroys the message again. I was hoping that by operating on mid-to-low-frequency values the message would be preserved even under some light image manipulation.

    Read the article

  • How to combine two sql queries?

    - by plasmuska
    Hi guys, I have a stock table and I would like to create a report that shows how often items were ordered.

    "stock" table:

      item_id | pcs  | operation
      --------+------+----------
      apples  |  100 | order
      oranges |   50 | order
      apples  | -100 | delivery
      pears   |  100 | order
      oranges |  -40 | delivery
      apples  |   50 | order
      apples  |   50 | delivery

    Basically I need to join these two queries together. A query which prints stock balances:

      SELECT stock.item_id, Sum(stock.pcs) AS stock_balance
      FROM stock
      GROUP BY stock.item_id;

    A query which prints sales statistics:

      SELECT stock.item_id, Sum(stock.pcs) AS pcs_ordered,
             Count(stock.item_id) AS number_of_orders
      FROM stock
      GROUP BY stock.item_id, stock.operation
      HAVING stock.operation="order";

    I think that some sort of JOIN would do the job, but I have no idea how to glue the queries together. Desired output:

      item_id | stock_balance | pcs_ordered | number_of_orders
      --------+---------------+-------------+-----------------
      apples  |             0 |         150 |                2
      oranges |            10 |          50 |                1
      pears   |           100 |         100 |                1

    This is just an example. Maybe I will need to add more conditions, because there are more columns. Is there a universal technique for combining multiple queries together?
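
    One common technique (a sketch, using the question's own column names): instead of joining the two grouped queries, fold the order-only aggregates into the balance query with conditional aggregation, so a single pass over stock produces all three columns:

      SELECT item_id,
             SUM(pcs) AS stock_balance,
             SUM(CASE WHEN operation = 'order' THEN pcs ELSE 0 END) AS pcs_ordered,
             SUM(CASE WHEN operation = 'order' THEN 1 ELSE 0 END)   AS number_of_orders
      FROM   stock
      GROUP  BY item_id;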

    Read the article

  • Weighted Average with LINQ

    - by jsmith
    My goal is to get a weighted average from one table, based on another table's primary key. Example data:

    Table1

      Key  | WEIGHTED_AVERAGE
      -----+-----------------
      0200 | 0

    Table2

      ForeignKey | LENGTH | PCR
      -----------+--------+----
      0200       |    105 |  52
      0200       |    105 |  60
      0200       |    105 |  54
      0200       |    105 |  -1
      0200       |     47 |  55

    I need to get a weighted average based on the length of a segment, and I need to ignore values of -1. I know how to do this in SQL, but my goal is to do it in LINQ. It looks something like this in SQL:

      SELECT Sum(t2.PCR*t2.LENGTH)/Sum(t2.LENGTH) AS WEIGHTED_AVERAGE
      FROM Table1 t1, Table2 t2
      WHERE t2.PCR <> -1 AND t2.ForeignKey = t1.Key;

    I am still pretty new to LINQ and having a hard time figuring out how I would translate this. The resulting weighted average should come out to roughly 55.3. Thank you.
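
    A sketch of a LINQ translation (method syntax; the entity and property names are guesses based on the table layout above): filter out the -1 rows for a given key, then divide the two sums:

      var rows = table2.Where(t2 => t2.PCR != -1 && t2.ForeignKey == key);
      double weightedAverage = (double)rows.Sum(t2 => t2.PCR * t2.LENGTH)
                                         / rows.Sum(t2 => t2.LENGTH);

    With the sample data this gives (105*(52+60+54) + 47*55) / (3*105 + 47) = 20015 / 362, about 55.3, matching the expected result.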

    Read the article

  • Peak decomposition

    - by midtiby
    Hi, I want to examine an NMR spectrum and make the best fit of a specific peak using a sum of gaussians. With the following code it is possible to fit two gaussians to the peak, but can it easily be generalized to n gaussians?

      freq <- seq(100, 200, 0.1)
      signal <- 3.5*exp(-(freq-130)^2/50) + 0.2 + 1.5*exp(-(freq-120)^2/10)
      simsignal <- rpois(length(signal), 100*signal) + rnorm(length(signal))
      plot(freq, simsignal)
      res <- nls(simsignal ~ bg + h1 * exp(-((freq - m1)/s1)^2) + h2 * exp(-((freq - m2)/s2)^2),
                 start=c(bg = 4, h1 = 300, m1 = 128, s1 = 6, h2 = 200, m2 = 122, s2 = 4),
                 trace=T)
      lines(freq, predict(res, freq), col='red')

    Another wish is a visualization of the contribution from each of the gaussians to the original peak, e.g. the gaussians should be plotted side by side (instead of plotting their sum as done above).
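
    One way to generalize (a sketch; the helper names are invented here, and the start values for a third component are placeholders): build the n-gaussian formula as a string, hand it to nls via as.formula, then plot each fitted component separately from coef():

      n <- 3
      terms <- paste(sprintf("h%d * exp(-((freq - m%d)/s%d)^2)", 1:n, 1:n, 1:n),
                     collapse = " + ")
      form <- as.formula(paste("simsignal ~ bg +", terms))
      res <- nls(form, start = c(bg = 4, h1 = 300, m1 = 128, s1 = 6,
                                 h2 = 200, m2 = 122, s2 = 4,
                                 h3 = 100, m3 = 135, s3 = 5), trace = TRUE)
      cf <- coef(res)
      for (i in 1:n)   # each gaussian's own contribution, plotted separately
          lines(freq, cf["bg"] + cf[sprintf("h%d", i)] *
                        exp(-((freq - cf[sprintf("m%d", i)])/cf[sprintf("s%d", i)])^2),
                col = i + 1)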

    Read the article

  • Summing the results of Case queries in SQL

    - by David Stelfox
    I think this is a relatively straightforward question, but I have spent the afternoon looking for an answer and cannot yet find it. So... I have a view with a country column and a number column. I want to turn any country with a number less than 10 into 'Other' and then sum the 'Other's into one value. For example:

      AR 10
      AT  7
      AU 11
      BB  2
      BE 23
      BY  1
      CL  2

    I used CASE as follows:

      select country = case when number < 10 then 'Other' else country end,
             number
      from ...

    This replaces the country values whose number is less than 10 with 'Other', but I can't work out how to sum them. I want to end up with a table/view which looks like this:

      AR    10
      AU    11
      BE    23
      Other 12

    Any help is greatly appreciated. Cheers, David
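
    A sketch of the usual fix (the view name my_view is a stand-in, since the question doesn't give one): push the CASE into a derived table and group on its result, so the 'Other' rows collapse into a single summed row:

      select grouped.country, sum(grouped.number) as number
      from (
          select case when number < 10 then 'Other' else country end as country,
                 number
          from my_view
      ) as grouped
      group by grouped.country;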

    Read the article

  • Get the closest number

    - by user183089
    Hi, I am implementing a little blackjack game in C# and I have the following problem calculating the value of a player's hand. The problem is that an Ace may have a value of 1 or 11: if the player has three cards including one Ace, and the sum of the other cards is <= 10, the Ace has a value of 11 (sorry for whoever doesn't know the game: the aim is to reach 21 with the sum of the cards); otherwise it has a value of 1. Up to here it is easy. Now let's assume I do not know how many Aces the player has got, and the game is implemented giving the dealer the possibility to use more than one deck of cards, so the player may have even 5, 6, 7, 8... Aces in one hand. What is the best way (possibly using LINQ) to evaluate all the Aces the player has got, to get the combination closest to 21 (in addition to the other cards)? I hope I have been clear. Thanks for your help.
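
    One observation simplifies this (a sketch, with made-up type and method names): two Aces at 11 already total 22, so at most one Ace can ever count as 11. Value every Ace as 1, then promote a single Ace if there is room; no search over combinations is needed:

      using System.Collections.Generic;
      using System.Linq;

      static class BlackjackHand
      {
          // cardValues: the non-Ace card values (face cards as 10);
          // aceCount: how many Aces are in the hand.
          public static int Value(IEnumerable<int> cardValues, int aceCount)
          {
              int total = cardValues.Sum() + aceCount; // every Ace counted as 1
              if (aceCount > 0 && total + 10 <= 21)    // room to promote one Ace to 11
                  total += 10;
              return total;
          }
      }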

    Read the article

  • JQuery Validate: only takes the first addMethod?

    - by Neuquino
    Hi, I need to add multiple custom validations to one form. I have two definitions of addMethod, but it only takes the first one... here is the code:

      $(document).ready(function() {
          $.validator.addMethod("badSelectionB", function() {
              var comboValues = [];
              for (var i = 0; i < 6; i++) {
                  var id = "comision_B_" + (i + 1);
                  var comboValue = document.getElementById(id).value;
                  if ($.inArray(comboValue, comboValues) != 0) {
                      comboValues.push(comboValue);
                  } else {
                      return false;
                  }
              }
              return true;
          }, "Seleccione una única prioridad por comisión.");

          $.validator.addMethod("badSelectionA", function() {
              var comboValues = [];
              for (var i = 0; i < 6; i++) {
                  var id = "comision_A_" + (i + 1);
                  var comboValue = document.getElementById(id).value;
                  if ($.inArray(comboValue, comboValues) != 0) {
                      comboValues.push(comboValue);
                  } else {
                      return false;
                  }
              }
              return true;
          }, "Seleccione una única prioridad por comisión.");

          $("#inscripcionForm").validate({
              rules: {
                  nombre: "required",
                  apellido: "required",
                  dni: {
                      required: true,
                      digits: true,
                  },
                  mail: {
                      required: true,
                      email: true,
                  },
                  comision_A_6: {
                      badSelectionA: true,
                  },
                  comision_B_6: {
                      badSelectionB: true,
                  }
              },
              messages: {
                  nombre: "Ingrese su nombre.",
                  apellido: "Ingrese su apellido.",
                  dni: {
                      required: "Ingrese su dni.",
                      digits: "Ingrese solo números.",
                  },
                  mail: {
                      required: "Ingrese su correo electrónico.",
                      email: "El correo electrónico ingresado no es válido."
                  }
              },
          });
      });

    Do you have any clue about what is happening? Thanks in advance.
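
    Two things stand out (observations, not from the original post): $.inArray returns the index of the match or -1, not a boolean, so the "not seen yet" test misfires whenever a value sits at index 0; and the trailing commas after digits: true, email: true, and the messages block are illegal in older IE's JavaScript engine and can abort the script. The membership test should read something like:

      if ($.inArray(comboValue, comboValues) === -1) {
          comboValues.push(comboValue);   // value not seen before
      } else {
          return false;                   // duplicate selection
      }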

    Read the article

  • .NET chart Datamanipulator

    - by peter
    In C# on .NET 4.0, with the .NET Chart control, I have this code to generate a pie chart:

      chart.Series[0].ChartType = SeriesChartType.Pie;
      foreach (Order order in orderCollection)
      {
          // If I set point.LegendText = order.UserName, .Group will erase it
          chart.Series[0].Points.AddXY(order.UserName, order.Total);
      }
      chart.DataManipulator.Sort(PointSortOrder.Ascending, "X", "Series1");
      chart.DataManipulator.Group("SUM", 1, IntervalType.Months, "Series1");

    This works well; it generates a pie chart with the top 10 users showing their total order sum. I would like to set the data points' LegendText to the order.UserName property. The problem is that DataManipulator.Group overwrites the series' DataPoints, so if I set the LegendText in the foreach loop it is erased after the Group call. And after the Group call I don't see a way to retrieve the correct UserName for a DataPoint to set the LegendText. What is the best approach for this situation?
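
    One workaround (a sketch, assuming the per-user totals can be computed up front with LINQ): do the grouping before the points are added, so DataManipulator.Group is no longer needed and LegendText survives:

      var totals = orderCollection
          .GroupBy(o => o.UserName)
          .Select(g => new { Name = g.Key, Total = g.Sum(o => o.Total) })
          .OrderBy(t => t.Total);

      foreach (var t in totals)
      {
          int i = chart.Series[0].Points.AddXY(t.Name, t.Total);
          chart.Series[0].Points[i].LegendText = t.Name;
      }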

    Read the article

  • Special scheduling Algorithm (pattern expansion)

    - by tovare
    Question: Do you think genetic algorithms are worth trying out for the problem below, or will I hit local-minima issues? I think maybe aspects of the problem are great for a generator / fitness-function style setup. (If you've botched a similar project I would love to hear from you, so as not to do something similar.) Thank you for any tips on how to structure things and nail this right.

    The problem: I'm searching for a good scheduling algorithm to use for the following real-world problem. I have a sequence with 15 slots like this (the digits may vary from 0 to 20):

      2 2 2 2 2 2 2 2 2 2 2 2 2 2 2

    (And there are in total 10 different sequences of this type.) Each sequence needs to expand into an array, where each slot can take 1 position:

      1 1 0 0 1 1 1 0 0 0 1 1 1 0 0
      1 1 0 0 1 1 1 0 0 0 1 1 1 0 0
      0 0 1 1 0 0 0 1 1 1 0 0 0 1 1
      0 0 1 1 0 0 0 1 1 1 0 0 0 1 1

    The constraints on the matrix are:

    [Row-wise, i.e. horizontally] The runs of ones placed must be either 11 or 111.
    [Row-wise] The distance between two runs of 1 needs to be a minimum of 00.
    The sum of each column should match the original array.
    The number of rows in the matrix should be optimized.

    The array then needs to be allocated to one of 4 different matrices, which may have different numbers of rows: A, B, C, D. A, B, C and D are real-world departments. The load needs to be placed reasonably fairly over the course of a 10-day period, so as not to interfere with other department goals. Each of the matrices is compared with the expansion of the 10 different original sequences, so you have:

      A1, A2, A3, A4, A5, A6, A7, A8, A9, A10
      B1, B2, B3, B4, B5, B6, B7, B8, B9, B10
      C1, C2, C3, C4, C5, C6, C7, C8, C9, C10
      D1, D2, D3, D4, D5, D6, D7, D8, D9, D10

    Certain spots on these may be reserved (not sure if I should make it just reserved/not reserved or function-based). The reserved spots might be meetings and other events. The sum of each row (for instance all the A's) should be approximately the same, within 2%; i.e. sum(A1 through A10) should be approximately the same as sum(B1 through B10), etc. The number of rows can vary, so you have for instance:

      A1: 5 rows
      A2: 5 rows
      A3: 1 row, where that single row could for instance be:
          0 0 1 1 1 0 0 0 0 0 0 0 0 0 0

    etc.

    Sub-problem: I'd be very happy to solve only part of the problem. For instance, being able to input:

      1 1 2 3 4 2 2 3 4 2 2 3 3 2 3

    and get an appropriate array of sequences of 1's and 0's, minimized on the number of rows, following the constraints above.

    Read the article

  • max count with joins

    - by trixet
    I have 3 tables:

    users:

      Id | Login
      ---+------
      1  | John
      2  | Bill
      3  | Jim

    computers:

      Id | Name
      ---+----------
      1  | Computer1
      2  | Computer2
      3  | Computer3
      4  | Computer4
      5  | Computer5

    sessions:

      UserId | ComputerId | Minutes
      -------+------------+--------
      1      | 2          | 47
      2      | 1          | 32
      1      | 4          | 15
      2      | 5          | 5
      1      | 2          | 7
      1      | 1          | 40
      2      | 5          | 31

    I would like to display this resulting table:

      Login | Total_sess | Total_min | Most_freq_computer | Sess_on_most_freq | Min_on_most_freq
      ------+------------+-----------+--------------------+-------------------+-----------------
      John  | 4          | 109       | Computer2          | 2                 | 54
      Bill  | 3          | 68        | Computer5          | 2                 | 36
      Jim   | -          | -         | -                  | -                 | -

    Myself I can only cover the first 3 columns with:

      SELECT Login, COUNT(sessions.UserId), SUM(Minutes)
      FROM users
      LEFT JOIN sessions ON users.Id = sessions.UserId
      GROUP BY users.Id

    And some of the other columns with:

      SELECT main.*
      FROM (SELECT UserId, ComputerId, COUNT(*) AS cnt, SUM(Minutes)
            FROM sessions
            GROUP BY UserId, ComputerId) AS main
      INNER JOIN (SELECT ComputerId, MAX(cnt) AS maxCnt
                  FROM (SELECT ComputerId, UserId, COUNT(*) AS cnt
                        FROM sessions
                        GROUP BY ComputerId, UserId) AS Counts
                  GROUP BY ComputerId) AS maxes
              ON main.ComputerId = maxes.ComputerId AND main.cnt = maxes.maxCnt

    But I need to get the whole resulting table in one query. I feel I'm doing something completely wrong. Need help.
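
    A sketch of one way to get it all in a single query (ties for a user's most-used computer would produce extra rows; breaking them is left out for brevity):

      SELECT u.Login,
             t.cnt  AS Total_sess,
             t.mins AS Total_min,
             c.Name AS Most_freq_computer,
             f.cnt  AS Sess_on_most_freq,
             f.mins AS Min_on_most_freq
      FROM users u
      LEFT JOIN (SELECT UserId, COUNT(*) AS cnt, SUM(Minutes) AS mins
                 FROM sessions GROUP BY UserId) t ON t.UserId = u.Id
      LEFT JOIN (SELECT UserId, ComputerId, COUNT(*) AS cnt, SUM(Minutes) AS mins
                 FROM sessions GROUP BY UserId, ComputerId) f ON f.UserId = u.Id
      LEFT JOIN computers c ON c.Id = f.ComputerId
      WHERE f.ComputerId IS NULL
         OR f.cnt = (SELECT MAX(x.cnt)
                     FROM (SELECT UserId, ComputerId, COUNT(*) AS cnt
                           FROM sessions GROUP BY UserId, ComputerId) x
                     WHERE x.UserId = u.Id);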

    Read the article

  • Calculate the digital root of a number

    - by Gregory Higley
    A digital root, according to Wikipedia, is "the number obtained by adding all the digits, then adding the digits of that number, and then continuing until a single-digit number is reached." For instance, the digital root of 99 is 9, because 9 + 9 = 18 and 1 + 8 = 9. My Haskell solution -- and I'm no expert -- is as follows:

      digitalRoot n
          | n < 10    = n
          | otherwise = digitalRoot . sum . map (\c -> read [c]) . show $ n

    As a language junky, I'm interested in seeing solutions in as many languages as possible, both to learn about those languages and possibly to learn new ways of using languages I already know. (And I know at least a bit of quite a few.) I'm particularly interested in the tightest, most elegant solutions in Haskell and REBOL, my two principal "hobby" languages, but any ol' language will do. (I pay the bills with unrelated projects in Objective-C and C#.) Here's my (verbose) REBOL solution:

      digital-root: func [n [integer!] /local added expanded] [
          either n < 10 [
              n
          ][
              expanded: copy []
              foreach c to-string n [
                  append expanded to-integer to-string c
              ]
              added: 0
              foreach e expanded [
                  added: added + e
              ]
              digital-root added
          ]
      ]

    EDIT: As some have pointed out either directly or indirectly, there's a quick one-line expression that can calculate this. You can find it in several of the answers below and in the linked Wikipedia page. (I've awarded Varun the answer, as the first to point it out.) Wish I'd known about that before, but we can still bend our brains with this question by avoiding solutions that involve that expression, if you're so inclined. If not, Crackoverflow has no shortage of questions to answer. :)
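
    For reference, the one-liner alluded to in the EDIT is the classic congruence formula: for n > 0, the digital root is 1 + (n - 1) mod 9. In Haskell (a sketch, valid for positive n):

      digitalRoot n = 1 + (n - 1) `mod` 9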

    Read the article

  • 2-way anova on unbalanced dataset

    - by mortalitysequence
    Is aov appropriate for unbalanced datasets? According to the help, it ...provides a wrapper to lm for fitting linear models to balanced or unbalanced experimental designs. But later on it says aov is designed for balanced designs, and the results can be hard to interpret without balance. How should I perform a 2-way ANOVA on an unbalanced dataset in R? I would like to reproduce the different results for Type I and Type III sums of squares from SAS output (when using proc glm). I remember we were using Type III sums of squares for unbalanced datasets. Thank you in advance.
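
    A sketch of one common route (dat, y, a and b are placeholder names): fit with lm, take the sequential (Type I) table from anova(), and get Type III tests from car's Anova with sum-to-zero contrasts, which is what SAS's proc glm Type III corresponds to:

      library(car)                                   # install.packages("car")
      options(contrasts = c("contr.sum", "contr.poly"))
      fit <- lm(y ~ a * b, data = dat)
      anova(fit)                                     # Type I (sequential)
      Anova(fit, type = "III")                       # Type III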

    Read the article

  • Binary files printing and desired precision

    - by yCalleecharan
    Hi, I'm printing a variable, say z1, which is a 1-D array containing floating-point numbers, to a text file so that I can import it into Matlab or gnuplot for plotting. I've heard that binary files (.dat) are smaller than .txt files. The definition that I currently use for printing to a .txt file is:

      void create_out_file(const char *file_name, const long double *z1, size_t z_size){
          FILE *out;
          size_t i;

          if((out = _fsopen(file_name, "w+", _SH_DENYWR)) == NULL){
              fprintf(stderr, "***> Open error on output file %s", file_name);
              exit(-1);
          }
          for(i = 0; i < z_size; i++)
              fprintf(out, "%.16Le\n", z1[i]);
          fclose(out);
      }

    I have three questions:

    1. Are binary files really more compact than text files?
    2. If yes, I would like to know how to modify the above code so that I can print the values of the array z1 to a binary file. I've read that fprintf has to be replaced with fwrite. My output file, say dodo.dat, should contain the values of array z1 with one floating-point number per line.
    3. I have %.16Le up in my code, but I think that %.15Le is right, as I have 15 precision digits with long double. I have put a dot (.) in the width position, as I believe this allows expansion to an arbitrary field to hold the desired number. Am I right? As an example, with %.16Le I can have an output like 1.0047914240730432e-002, which gives me 16 precision digits, and the field has the right width to display the number correctly. Is placing a dot (.) in the width position instead of a width value a good practice?

    Thanks a lot...
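
    On question 2, a sketch of the fwrite version (with one caveat the question's wording glosses over: a binary file has no lines - fwrite stores the raw bytes of each long double back to back, and fread reads them back the same way):

      #include <stdio.h>
      #include <stdlib.h>

      void create_bin_file(const char *file_name, const long double *z1, size_t z_size){
          FILE *out = fopen(file_name, "wb");  /* "wb": write in binary mode */
          if(out == NULL){
              fprintf(stderr, "***> Open error on output file %s", file_name);
              exit(-1);
          }
          /* One call writes the whole array: z_size elements of
             sizeof(long double) bytes each, no text formatting involved. */
          fwrite(z1, sizeof(long double), z_size, out);
          fclose(out);
      }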

    Read the article

  • GA written in Java

    - by EnderMB
    I am attempting to write a genetic algorithm, based on techniques I picked up from the book "AI Techniques for Game Programmers", that uses a binary encoding and fitness-proportionate selection (also known as roulette wheel selection) on the genes of the population, which are randomly generated within the program in a two-dimensional array. I recently came across a piece of pseudocode and have tried to implement it, but have come across some problems with the specifics of what I need to be doing. I've checked a number of books and some open-source code and am still struggling to progress. I understand that I have to get the sum of the total fitness of the population, pick a random number between zero and that sum, and then select the first individual whose accumulated fitness passes that number, but I am struggling with the implementation of these ideas. Any help with the implementation would be very much appreciated, as my Java is rusty.
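
    For reference, a sketch of roulette wheel selection in Java (a minimal standalone version; the fitness array and RNG handling are illustrative choices, not from the book):

      import java.util.Random;

      public class RouletteWheel {
          private static final Random RNG = new Random();

          // Returns the index of the selected individual.
          public static int select(double[] fitness) {
              double total = 0.0;
              for (double f : fitness) total += f;     // total fitness of the population

              double slice = RNG.nextDouble() * total; // random point on the wheel
              double running = 0.0;
              for (int i = 0; i < fitness.length; i++) {
                  running += fitness[i];
                  if (running >= slice) return i;      // first individual past the point
              }
              return fitness.length - 1;               // guard against rounding error
          }
      }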

    Read the article

  • Django Grouping Query

    - by Matt
    I have the following (simplified) models:

      class Donation(models.Model):
          entry_date = models.DateTimeField()

      class Category(models.Model):
          name = models.CharField()

      class Item(models.Model):
          donation = models.ForeignKey(Donation)
          category = models.ForeignKey(Category)

    I'm trying to display the total number of items, per category, grouped by the donation year. I've tried this:

      Donation.objects.extra(
          select={'year': "django_date_trunc('year', %s.entry_date)" % Donation._meta.db_table}
      ).values('year', 'item__category__name').annotate(items=Sum('item__quantity'))

    But I get a FieldError on item__category__name. I've also tried:

      Item.objects.extra(
          select={"year": "django_date_trunc('year', entry_date)"},
          tables=["donations_donation"]
      ).values("year", "category__name").annotate(items=Sum("quantity")).order_by()

    which generally gets me what I want, but the item quantity count is multiplied by the number of donation records. Any ideas? Basically I want to display this:

      2010
        - Category 1: 10 items
        - Category 2: 17 items
      2009
        - Category 1: 5 items
        - Category 3: 8 items

    Read the article

  • matlab fit exp2

    - by HelloWorld
    I'm unsuccessfully looking for documentation of the fit function using exp2 (a sum of two exponentials). How to call the function is clear:

      [curve, gof] = fit(x, y, 'exp2');

    But since there are multiple ways to fit a sum of exponentials, I'm trying to find out what algorithm is used - particularly what happens when I fit a single exponential (the raw data) plus a bit of noise, and how the two fitted exponentials are spread. I've simulated several cases, and it seems that fit "drops" all the weight on the second set of coefficients, but raw-data analysis often shows different behavior. Does anyone have suggestions for documentation?
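
    For what it's worth (from the Curve Fitting Toolbox's library of models, not from the question): 'exp2' fits Y = a*exp(b*x) + c*exp(d*x) by nonlinear least squares, so which exponential ends up carrying the weight depends on the heuristically chosen starting points; supplying your own start points (e.g. via the 'StartPoint' fit option) is one way to make the split reproducible.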

    Read the article

  • help with exception handling in linq

    - by stackoverflowuser
    I have the following code to retrieve the customer name, total orders, and sum of order-detail quantities for each customer in the Northwind database. The problem with the code below is that it raises an exception, since a few customers don't have any entry in the Orders table. I know the exception can be avoided using the query syntax (a join). I want to know if the same can be handled with the extension method syntax:

      CustomerOrderDataContext db = new CustomerOrderDataContext();
      var customerOrders = db.Customers.Select(c => new
      {
          CompanyName = c.CompanyName,
          TotalOrders = c.Orders.Count(),
          TotalQuantity = c.Orders.SelectMany(o => o.Order_Details).Sum(o => o.Quantity)
      });
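
    A sketch of the usual workaround in method syntax: the nullable cast makes Sum return null instead of throwing on an empty sequence, and ?? turns that into 0:

      var customerOrders = db.Customers.Select(c => new
      {
          CompanyName   = c.CompanyName,
          TotalOrders   = c.Orders.Count(),
          TotalQuantity = c.Orders.SelectMany(o => o.Order_Details)
                                  .Sum(o => (int?)o.Quantity) ?? 0
      });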

    Read the article

  • How to use constraint programming for optimizing shopping baskets?

    - by tangens
    I have a list of items I want to buy. The items are offered by different shops at different prices, and the shops have individual delivery costs. I'm looking for an optimal shopping strategy (and a Java library supporting it) to purchase all of the items with a minimal total price. Example:

      Item1 is offered at Shop1 for $100, at Shop2 for $111.
      Item2 is offered at Shop1 for $90, at Shop2 for $85.
      Delivery cost of Shop1: $10 if total order < $150; $0 otherwise
      Delivery cost of Shop2: $5 if total order < $50; $0 otherwise

      If I buy Item1 and Item2 at Shop1 the total cost is $100 + $90 + $0 = $190.
      If I buy Item1 and Item2 at Shop2 the total cost is $111 + $85 + $0 = $196.
      If I buy Item1 at Shop1 and Item2 at Shop2 the total cost is $100 + $10 + $85 + $0 = $195.

    I get the minimal price if I order Item1 and Item2 at Shop1: $190.

    What I tried so far: I asked another question before that led me to the field of constraint programming. I had a look at cream and choco, but I did not figure out how to create a model to solve my problem.

               | shop1 | shop2 | shop3 | ...
      ---------+-------+-------+-------+----
      item1    | p11   | p12   | p13   |
      item2    | p21   | p22   | p23   |
      .        |       |       |       |
      .        |       |       |       |
      ---------+-------+-------+-------+----
      shipping | s1    | s2    | s3    |
      limit    | l1    | l2    | l3    |
      ---------+-------+-------+-------+----
      total    | t1    | t2    | t3    |
      ---------+-------+-------+-------+----

    My idea was to define these constraints:

    - each price "p xy" is defined in the domain (0, c), where c is the price of the item in this shop
    - only one price in a line should be non-zero
    - if one or more items are bought from one shop and the sum of the prices is lower than the limit, then add the shipping cost to the shop's total
    - a shop's total cost is the sum of the prices of all items bought there
    - total cost is the sum of all shop totals

    The objective is "total cost". I want to minimize this. In cream I wasn't able to express the "if then" constraint for conditional shipping costs. In choco these constraints exist, but even for 5 items and 10 shops the program ran for 10 minutes without finding a solution.

    Question: How should I express my constraints to make this problem solvable for a constraint programming solver?
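
    For reference, a sketch of the conditional shipping rule written as linear (big-M) constraints, which both CP and MIP solvers digest well - this is solver-independent pseudocode with invented variable names, not cream or choco API, and it assumes integer prices:

      // x[i][s] = 1 if item i is bought at shop s (0/1 variables)
      // Every item is bought exactly once:   sum over s of x[i][s] == 1
      // Shop subtotal:                       sub[s] == sum over i of p[i][s] * x[i][s]
      // open[s] = 1 if anything is bought at s:   open[s] >= x[i][s] for all i
      // below[s] = 1 iff sub[s] < limit[s] (M = sum of all prices):
      //     sub[s] >= limit[s] - M * below[s]
      //     sub[s] <= limit[s] - 1 + M * (1 - below[s])
      // Shipping is charged only when the shop is used and under the limit:
      //     ship[s] >= cost[s] * (open[s] + below[s] - 1)
      // Objective: minimize sum over s of (sub[s] + ship[s])

    The key move is replacing the "if ... then add shipping" rule with the reified open[s]/below[s] booleans; solvers that support implication constraints directly can express the same thing without the big-M trick.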

    Read the article

  • Rails created_at find condition...

    - by Dustin Brewer
    I'm attempting to sum daily purchase amounts for a given user. @dates is an array of 31 dates. I need the find condition to compare a date from the array to the created_at date of the purchases. What I'm doing below compares against the exact DateTime in the created_at column; I need it to look at the day itself, not the DateTime. How can I write this so created_at falls within the date from the array?

      <% @dates.each do |date| %>
        <%= current_user.purchases.sum(:amount, :conditions => ["created_at = ?", date]) %>
      <% end %>
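
    A sketch of the range-based condition (Rails 2.x-style syntax to match the question; beginning_of_day comes from ActiveSupport):

      <% @dates.each do |date| %>
        <%= current_user.purchases.sum(:amount,
              :conditions => ["created_at >= ? AND created_at < ?",
                              date.beginning_of_day, date.beginning_of_day + 1.day]) %>
      <% end %>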

    Read the article

  • Query joining in sql server 2005

    - by Domnic
    I have two queries like:

      SELECT PC_COMP_CODE, PC_SL_LDGR_CODE, PC_SL_ACNO ACCOUNT,
             COUNT(PC_CHEQUE_NO) CHQS,
             SUM(CONVERT(FLOAT, PC_AMOUNT)) CHQ_AMT
      FROM GLAS_PDC_CHEQUES
      WHERE PC_COMP_CODE = '1'
        AND PC_DISCD IS NULL
      GROUP BY PC_SL_LDGR_CODE, PC_SL_ACNO, PC_COMP_CODE
      ORDER BY PC_SL_ACNO

    and:

      SELECT COAD_PTY_FULL_NAME, PC_COMP_CODE, PC_SL_LDGR_CODE, PC_SL_ACNO,
             PC_DEPT_NO, PC_DOC_TYPE, PC_CHEQUE_NO, PC_BANK_AC_NO
      FROM GLAS_PTY_ADDRESS, GLAS_SBLGR_MASTERS, GLAS_PDC_CHEQUES
      WHERE COAD_COMP_CODE = '1'
        AND SLMA_COMP_CODE = COAD_COMP_CODE
        AND SLMA_ADDR_ID = COAD_ADDR_ID
        AND SLMA_LDGRCTL_CODE = PC_SL_LDGR_CODE
        AND PC_COMP_CODE = SLMA_COMP_CODE
        AND SLMA_ACNO = PC_SL_ACNO
        AND SLMA_LDGRCTL_YEAR = DBO.GLAS_VALIDATIONS_GET_OPEN_YEAR(PC_COMP_CODE)

    If I execute the first query alone I get 5 records. If I join the two queries like this:

      SELECT PC_COMP_CODE, PC_SL_LDGR_CODE, PC_SL_ACNO ACCOUNT,
             COUNT(PC_CHEQUE_NO) CHQS,
             SUM(CONVERT(FLOAT, PC_AMOUNT)) CHQ_AMT,
             COAD_PTY_FULL_NAME
      FROM GLAS_PDC_CHEQUES
      LEFT OUTER JOIN GLAS_SBLGR_MASTERS
        ON (SLMA_COMP_CODE = PC_COMP_CODE
        AND SLMA_LDGRCTL_CODE = PC_SL_LDGR_CODE
        AND SLMA_ACNO = PC_SL_ACNO)
      LEFT OUTER JOIN GLAS_PTY_ADDRESS
        ON (SLMA_COMP_CODE = COAD_COMP_CODE
        AND SLMA_ADDR_ID = COAD_ADDR_ID)
      WHERE PC_COMP_CODE = '1'
        AND PC_DISCD IS NULL
        AND SLMA_LDGRCTL_YEAR = DBO.GLAS_VALIDATIONS_GET_OPEN_YEAR(PC_COMP_CODE)
      GROUP BY PC_SL_LDGR_CODE, PC_SL_ACNO, PC_COMP_CODE, COAD_PTY_FULL_NAME
      ORDER BY PC_SL_ACNO

    then I get just 2 records. I need those 5 records to be displayed after the join. How can I do it?
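
    The likely culprit (an observation, not from the original post): putting the SLMA_LDGRCTL_YEAR test in the WHERE clause discards the rows where the LEFT OUTER JOIN produced NULLs, silently turning it into an inner join. A sketch of the fix is to move that test into the join condition:

      LEFT OUTER JOIN GLAS_SBLGR_MASTERS
        ON (SLMA_COMP_CODE = PC_COMP_CODE
        AND SLMA_LDGRCTL_CODE = PC_SL_LDGR_CODE
        AND SLMA_ACNO = PC_SL_ACNO
        AND SLMA_LDGRCTL_YEAR = DBO.GLAS_VALIDATIONS_GET_OPEN_YEAR(PC_COMP_CODE))

    with only PC_COMP_CODE = '1' AND PC_DISCD IS NULL left in the WHERE clause.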

    Read the article
