Search Results

Search found 1058 results on 43 pages for 'compute'.

Page 24 of 43

  • Syntax error in aggregate argument: Expecting a single column argument with possible 'Child' qualifier.

    - by Rushabh
        DataTable distinctTable = dTable.DefaultView.ToTable(true, "ITEM_NO", "ITEM_STOCK");
        DataTable dtSummerized = new DataTable("SummerizedResult");
        dtSummerized.Columns.Add("ITEM_NO", typeof(string));
        dtSummerized.Columns.Add("ITEM_STOCK", typeof(double));
        int count = 0;
        foreach (DataRow dRow in distinctTable.Rows)
        {
            count++;
            //string itemNo = Convert.ToString(dRow[0]);
            double TotalItem = Convert.ToDouble(dRow[1]);
            string TotalStock = dTable.Compute("sum(" + TotalItem + ")", "ITEM_NO=" + dRow["ITEM_NO"].ToString()).ToString();
            dtSummerized.Rows.Add(count, dRow["ITEM_NO"], TotalStock);
        }

    Error message: Syntax error in aggregate argument: Expecting a single column argument with possible 'Child' qualifier. Can anyone help me out? Thanks.

  • "device-mapper resume ioctl failed" when run a instance

    - by user1490377
    I installed ubuntu-12.04 server and OpenStack on two computers. When I launch an image, the instance can't run and shows an error state! nova-compute.log details:

        2012-06-24 12:02:00 DEBUG nova.utils [req-71fdca27-5f93-438e-a4be-ddc54f698171 c737b66b2102415f817ca50b9649fd8f 5b1da4eaee3643919a230efc06473720] Unexpected error while running command. Command: sudo nova-rootwrap kpartx -a /dev/nbd15 Exit code: 1 Stdout: '' Stderr: 'device-mapper: resume ioctl failed: Invalid argument\ncreate/reload failed on nbd15p1\n' from (pid=1267) trycmd /usr/lib/python2.7/dist-packages/nova/utils.py:277
        2012-06-24 12:02:00 DEBUG nova.utils [req-71fdca27-5f93-438e-a4be-ddc54f698171 c737b66b2102415f817ca50b9649fd8f 5b1da4eaee3643919a230efc06473720] Running cmd (subprocess): sudo nova-rootwrap qemu-nbd -d /dev/nbd15 from (pid=1267) execute /usr/lib/python2.7/dist-packages/nova/utils.py:219
        2012-06-24 12:02:02 DEBUG nova.virt.disk.api [req-71fdca27-5f93-438e-a4be-ddc54f698171 c737b66b2102415f817ca50b9649fd8f 5b1da4eaee3643919a230efc06473720] Failed to map partitions: Unexpected error while running command. Command: sudo nova-rootwrap kpartx -a /dev/nbd15 Exit code: 1 Stdout: '' Stderr: 'device-mapper: resume ioctl failed: Invalid argument\ncreate/reload failed on nbd15p1\n' from (pid=1267) mount /usr/lib/python2.7/dist-packages/nova/virt/disk/api.py:205

  • Silverlight RelativeSource of TemplatedParent Binding within a DataTemplate, Is it possible?

    - by Matt.M
    I'm trying to make a bar graph UserControl. I'm creating each bar using a DataTemplate. The problem is that in order to compute the height of each bar, I first need to know the height of its container (the TemplatedParent). Unfortunately, what I have:

        Height="{Binding RelativeSource={RelativeSource TemplatedParent}, Path=Height, Converter={StaticResource HeightConverter}, Mode=OneWay}"

    does not work. Each time, a value of NaN is returned to my Converter. Does RelativeSource={RelativeSource TemplatedParent} not work in this context? What else can I do to allow my DataTemplate to "talk" to the element it is being applied to? In case it helps, here is the barebones DataTemplate:

        <DataTemplate x:Key="BarGraphTemplate">
            <Grid Width="30">
                <Rectangle HorizontalAlignment="Center" Stroke="Black" Width="20"
                           Height="{Binding RelativeSource={RelativeSource TemplatedParent}, Path=Height, Converter={StaticResource HeightConverter}, Mode=OneWay}"
                           VerticalAlignment="Bottom" />
            </Grid>
        </DataTemplate>

  • How do I use local memory in OpenCL?

    - by splicer
    I've been playing with OpenCL recently, and I'm able to write simple kernels that use only global memory. Now I'd like to start using local memory, but I can't seem to figure out how to use get_local_size() and get_local_id() to compute one "chunk" of output at a time. For example, let's say I wanted to convert Apple's OpenCL Hello World example kernel to something that uses local memory. How would you do it? Here's the original kernel source:

        __kernel void square(__global float *input,
                             __global float *output,
                             const unsigned int count)
        {
            int i = get_global_id(0);
            if (i < count)
                output[i] = input[i] * input[i];
        }

    If this example can't easily be converted into something that shows how to make use of local memory, any other simple example will do. Thanks!

  • Adaboost algorithm and its usage in face detection

    - by Hani
    I am trying to understand the AdaBoost algorithm but I am having some trouble. After reading about AdaBoost I realized that it is a classification algorithm (somewhat like a neural network). But I could not work out how the weak classifiers are chosen (I think they are Haar-like features for face detection) and how the final result H, the strong classifier, can be used. I mean, if I have found the alpha values and computed H, how am I going to use it as a value (one or zero) for new images? Is there an example that describes this clearly? I found the plus-and-minus example that appears in most AdaBoost tutorials, but I did not understand how exactly each h_i is chosen or how to apply the same concept to face detection. I have read many papers and I have many ideas, but so far they are not well organized. Thanks.
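
    A minimal sketch of how the final strong classifier is typically applied once training has produced the weak classifiers and their alpha weights (the function names and the two toy classifiers here are made up for illustration, and the threshold-at-zero form is just one common formulation):

        # Toy evaluation of an AdaBoost strong classifier H(x).
        # weak_classifiers: functions mapping a sample to +1 or -1
        # alphas: the weights found during training
        def strong_classify(x, weak_classifiers, alphas):
            score = sum(a * h(x) for a, h in zip(alphas, weak_classifiers))
            return 1 if score >= 0 else 0          # 1 = face, 0 = non-face

        # Two trivial weak classifiers on a single feature value
        h1 = lambda x: 1 if x > 0.5 else -1
        h2 = lambda x: 1 if x > 0.2 else -1
        print(strong_classify(0.6, [h1, h2], [0.8, 0.3]))   # -> 1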

  • computing matting Laplacian matrix of an Image

    - by ajith
    Hi everyone, I need to compute the matting Laplacian matrix L for an n×n image in OpenCV. The computation goes as follows: the (i,j)-th element of L is obtained by summing, over every window w_k that contains both pixels i and j,

        δ_ij - (1/|w_k|) * [ 1 + (I_i - μ_k)(I_j - μ_k) / (ε/|w_k| + σ_k²) ]

    Here δ_ij is the Kronecker delta, μ_k and σ_k² are the mean and variance of the intensities in the window w_k around pixel k, and |w_k| is the number of pixels in this window (w_k is a 3×3 window). I am not clear about two things: 1. What will the size of L be: n×n or (n·n)×(n·n)? 2. How do I select I_i and I_j separately in a 2D image?
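
    A rough numpy sketch (my own illustration, untested against any reference implementation) of building L for a grayscale image: every pixel gets one row and one column, so L is (n·n)×(n·n) and is normally stored sparse. Border windows are skipped for brevity.

        import numpy as np
        from scipy.sparse import coo_matrix

        def matting_laplacian_gray(I, eps=1e-7, r=1):
            # I: 2-D grayscale image; the window around each pixel is (2r+1)x(2r+1)
            h, w = I.shape
            n = h * w                                # L has one row/column per pixel
            idx = np.arange(n).reshape(h, w)
            win_size = (2 * r + 1) ** 2
            rows, cols, vals = [], [], []
            for y in range(r, h - r):
                for x in range(r, w - r):
                    win_idx = idx[y - r:y + r + 1, x - r:x + r + 1].ravel()
                    win_I = I[y - r:y + r + 1, x - r:x + r + 1].ravel()
                    mu, var = win_I.mean(), win_I.var()
                    diff = win_I - mu
                    # (1/|w_k|) * [1 + (I_i - mu)(I_j - mu) / (eps/|w_k| + var)] for all pairs
                    G = (1.0 + np.outer(diff, diff) / (eps / win_size + var)) / win_size
                    contrib = np.eye(win_size) - G   # delta_ij minus the term above
                    rows.append(np.repeat(win_idx, win_size))
                    cols.append(np.tile(win_idx, win_size))
                    vals.append(contrib.ravel())
            rows, cols, vals = map(np.concatenate, (rows, cols, vals))
            # duplicate (i, j) entries are summed, which performs the sum over windows k
            return coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()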

  • Linear Regression and Java Dates

    - by Smithers
    I am trying to find the linear trend line for a set of data. The set contains pairs of dates (x values) and scores (y values). I am using a version of this code as the basis of my algorithm. The results I am getting are off by a few orders of magnitude. I assume that there is some problem with round off error or overflow because I am using Date's getTime method which gives you a huge number of milliseconds. Does anyone have a suggestion on how to minimize the errors and compute the correct results?
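
    One language-agnostic way to shrink the magnitudes before fitting is to shift the x values to start at zero and rescale milliseconds to days; a small Python sketch of the idea (the Java version is a direct transcription, and the sample numbers are invented):

        # Rescale epoch-millisecond x values to "days since the first sample",
        # then run ordinary least squares on the rescaled data.
        def fit_line(times_ms, scores):
            t0 = min(times_ms)
            xs = [(t - t0) / 86_400_000.0 for t in times_ms]      # ms -> days
            n = len(xs)
            mean_x, mean_y = sum(xs) / n, sum(scores) / n
            sxx = sum((x - mean_x) ** 2 for x in xs)
            sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
            slope = sxy / sxx                                     # score units per day
            return slope, mean_y - slope * mean_x

        print(fit_line([1_000_000_000_000, 1_000_086_400_000, 1_000_172_800_000],
                       [1.0, 2.0, 2.9]))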

  • CUDA: accumulate data into a large histogram of floats

    - by shoosh
    I'm trying to think of a way to implement the following algorithm using CUDA: working on a large volume of voxels, for each voxel I calculate an index i and a value c. After the calculation I need to perform histogram[i] += c. c is a float value and the histogram can have up to 15,000 bins. I'm looking for a way to implement this efficiently using CUDA. The first obvious problem is that with compute capability 1.3, which is what I'm using, I can't even do an atomicAdd() of floats, so how can I accumulate anything reliably? This example by nVidia does something somewhat simpler: the histograms are saved in shared memory (which I can't do due to its size) and it only accumulates integers. Can this approach be generalized to my case?

  • Opening a local file in Eclipse from the web

    - by Victor Nicollet
    Right now, when I notice a problem on a page on my PHP web site, I have to look at the URL, mentally deduce what file is responsible for displaying that page, then navigate the Eclipse PDT file tree to open that file. This is annoying and uses brain power that could have been applied to solving the issue instead. I would like my PHP web site to display on every page a link that I could click to automatically open the correct file in Eclipse. I can easily compute the complete absolute path for the file I need to open (for example, open C:/xampp/htdocs/controllers/Foo/Bar.php when visiting /foo/bar), and I can make sure that Eclipse is currently open with the correct project loaded, but I'm stuck on how I can have Firefox/Chrome/IE tell Eclipse to open that specific file.

  • C#: Cached Property: Easier way?

    - by Peterdk
    I have an object with properties that are expensive to compute, so they are only calculated on first access and then cached.

        private List<Note> notes;
        public List<Note> Notes
        {
            get
            {
                if (this.notes == null)
                {
                    this.notes = CalcNotes();
                }
                return this.notes;
            }
        }

    I wonder, is there a better way to do this? Is it somehow possible to create a Cached Property or something like that in C#?

  • Compile-time lookup array creation for ANSI-C?

    - by multiproximus
    A previous programmer preferred to generate large lookup tables (arrays of constants) to save runtime CPU cycles rather than calculating values on the fly. He did this by creating custom Visual C++ projects that were unique for each individual lookup table... which generate array files that are then #included into a completely separate ANSI-C micro-controller (Renesas) project. This approach is fine for his original calculation assumptions, but has become tedious when the input parameters need to be modified, requiring me to recompile all of the Visual C++ projects and re-import those files into the ANSI-C project. What I would like to do is port the Visual C++ source directly into the ANSI-C microcontroller project and let the compiler create the array tables. So, my question is: Can ANSI-C compilers compute and generate lookup arrays during compile time? And if so, how should I go about it? Thanks in advance for your help!
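
    Not an answer to the compiler-only question, but for comparison, the table generation is sometimes pushed into a small build-step script that writes a C header for the micro-controller project to #include; a hedged Python sketch (the sine table, sizes and file name are invented for illustration):

        # Build-step generator: writes a C header containing the lookup table.
        import math

        def emit_sine_table(path="sine_table.h", entries=256, amplitude=1000):
            values = [round(amplitude * math.sin(2 * math.pi * i / entries))
                      for i in range(entries)]
            with open(path, "w") as f:
                f.write("/* Auto-generated - do not edit by hand. */\n")
                f.write("static const int sine_table[%d] = {\n" % entries)
                for i in range(0, entries, 8):
                    f.write("    " + ", ".join(str(v) for v in values[i:i + 8]) + ",\n")
                f.write("};\n")

        emit_sine_table()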

  • Haskell Lazy Evaluation and Reuse

    - by Jonathan Sternberg
    I know that if I were to compute a list of squares in Haskell, I could do this:

        squares = [ x ** 2 | x <- [1 ..] ]

    Then when I call squares like this:

        print $ take 4 squares

    it would print out [1.0, 4.0, 9.0, 16.0]. This gets evaluated as [ 1 ** 2, 2 ** 2, 3 ** 2, 4 ** 2 ]. Now, since Haskell is functional and the result would be the same each time, if I were to call squares again somewhere else, would it re-evaluate the answers it's already computed? If I were to re-use squares after I had already called the previous line, would it re-calculate the first 4 values?

        print $ take 5 squares

    Would it evaluate [1.0, 4.0, 9.0, 16.0, 5 ** 2]?

  • Modifying Bresenham's line algorithm

    - by sphennings
    I'm trying to use Bresenham's line algorithm to compute Field of View on a grid. The code I'm using calculates the lines without a problem, but I can't get it to always return the line running from the start point to the end point. What do I need to do so that all lines returned run from (x0, y0) to (x1, y1)?

        def bresenham_line(self, x0, y0, x1, y1):
            steep = abs(y1 - y0) > abs(x1 - x0)
            if steep:
                x0, y0 = y0, x0
                x1, y1 = y1, x1
            if x0 > x1:
                x0, x1 = x1, x0
                y0, y1 = y1, y0
            if y0 < y1:
                ystep = 1
            else:
                ystep = -1
            deltax = x1 - x0
            deltay = abs(y1 - y0)
            error = -deltax / 2
            y = y0
            line = []
            for x in range(x0, x1 + 1):
                if steep:
                    line.append((y, x))
                else:
                    line.append((x, y))
                error = error + deltay
                if error > 0:
                    y = y + ystep
                    error = error - deltax
            return line
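
    A sketch of one possible adjustment (not necessarily the fix the asker ended up with): only the x0 > x1 swap changes the traversal direction, so record whether it happened and reverse the result before returning.

        def bresenham_line_directed(x0, y0, x1, y1):
            steep = abs(y1 - y0) > abs(x1 - x0)
            if steep:
                x0, y0, x1, y1 = y0, x0, y1, x1
            swapped = x0 > x1                    # remember the direction-reversing swap
            if swapped:
                x0, x1, y0, y1 = x1, x0, y1, y0
            ystep = 1 if y0 < y1 else -1
            deltax, deltay = x1 - x0, abs(y1 - y0)
            error = -deltax // 2
            y = y0
            line = []
            for x in range(x0, x1 + 1):
                line.append((y, x) if steep else (x, y))
                error += deltay
                if error > 0:
                    y += ystep
                    error -= deltax
            if swapped:
                line.reverse()                   # restore the caller's start -> end order
            return line

        print(bresenham_line_directed(5, 5, 0, 0))   # first point is (5, 5)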

  • Constructing a hash table/hash function.

    - by nn
    Hi, I would like to construct a hash table that looks up keys which are sequences (strings) of bytes ranging from 1 to 15 bytes in length. I would like to store an integer value, so I imagine an array for hashing would suffice. I'm having difficulty conceptualizing how to construct a hash function such that a given key would give an index into the array. Any assistance would be much appreciated. The maximum number of entries in the hash is: 4081*15 + 4081*14 + ... + 4081 = 4081*((15*16)/2) = 489720. So for example:

        int table[489720];

        int lookup(unsigned char *key)
        {
            int index = hash(key);
            return table[index];
        }

    How can I compute hash(key)? I would preferably like a perfect hash function. Thanks.
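
    A perfect hash needs the key set to be known in advance (tools like gperf generate one from a fixed list); if the keys are not fixed, an ordinary string hash plus collision handling is the usual fallback. A sketch of one common choice, FNV-1a, written in Python for brevity but transcribing line-for-line into C:

        # FNV-1a over a short byte key, reduced modulo the table size.
        # Not a perfect hash: collisions still need chaining or probing.
        FNV_OFFSET, FNV_PRIME = 2166136261, 16777619
        TABLE_SIZE = 489720

        def fnv1a_index(key: bytes) -> int:
            h = FNV_OFFSET
            for b in key:
                h ^= b
                h = (h * FNV_PRIME) & 0xFFFFFFFF    # keep it a 32-bit hash, as C would
            return h % TABLE_SIZE

        print(fnv1a_index(b"\x01\x02\x03"))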

  • Handler invocation speed: Objective-C vs virtual functions

    - by Kerido
    I heard that calling a handler (delegate, etc.) in Objective-C can be even faster than calling a virtual function in C++. Is that really correct? If so, how can that be? AFAIK, virtual functions are not that slow to call. At least, this is my understanding of what happens when a virtual function is called:
    1. Compute the index of the function pointer location in the vtbl.
    2. Obtain the pointer to the vtbl.
    3. Dereference the pointer and obtain the beginning of the array of function pointers.
    4. Offset (in pointer scale) the beginning of the array by the index value obtained in step 1.
    5. Issue a call instruction.
    Unfortunately, I don't know Objective-C, so it's hard for me to compare performance. But at least the mechanism of a virtual function call doesn't look that slow, right? How can something other than a static function call be faster?
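
    As a toy model only (Python standing in for what the compiler emits, with invented names), the five steps amount to one extra indirection through a table of function references:

        # Toy vtable dispatch: each "class" owns a table of function references,
        # and a virtual call is an index into that table followed by a call.
        def circle_area(obj):
            return 3.14159 * obj["r"] ** 2

        CIRCLE_VTBL = [circle_area]          # slot 0 plays the role of "area"

        def virtual_call(obj, slot):
            vtbl = obj["vtbl"]               # fetch the table through the object
            fn = vtbl[slot]                  # index into the function-pointer array
            return fn(obj)                   # issue the call

        c = {"vtbl": CIRCLE_VTBL, "r": 2.0}
        print(virtual_call(c, 0))            # ~12.57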

  • Python "Every Other Element" Idiom

    - by Matt Luongo
    Hey guys, I feel like I spend a lot of time writing code in Python, but not enough time creating Pythonic code. Recently I ran into a funny little problem that I thought might have an easy, idiomatic solution. Paraphrasing the original, I needed to collect every sequential pair in a list. For example, given the list [1,2,3,4,5,6], I wanted to compute [(1,2),(3,4),(5,6)]. I came up with a quick solution at the time that looked like translated Java. Revisiting the question, the best I could do was

        l = [1,2,3,4,5,6]
        [(l[2*x], l[2*x+1]) for x in range(len(l)/2)]

    which has the side effect of tossing out the last number in the case that the length isn't even. Is there a more idiomatic approach that I'm missing, or is this the best I'm going to get?
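
    For what it's worth, a sketch of the idioms usually reached for here (both drop the odd element out, like the version above):

        l = [1, 2, 3, 4, 5, 6]

        # Slicing: pair even-indexed elements with odd-indexed ones.
        pairs = list(zip(l[::2], l[1::2]))      # [(1, 2), (3, 4), (5, 6)]

        # Same idea with a single iterator: zip pulls two items per pair.
        it = iter(l)
        pairs_again = list(zip(it, it))         # [(1, 2), (3, 4), (5, 6)]

        print(pairs, pairs_again)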

  • using Excel VBA, given the daily price of 50 stocks, choose 10 stocks such that they have the minumu

    - by correl
    The high-level goal is to choose 10 stocks that have the lowest correlation among one another, out of a pool of 50, so that I can have a well-diversified portfolio. I have managed to write a VBA macro to download the past 3 years of daily price data from Yahoo Finance and then compute the 50x50 correlation matrix (using the Correl function), using the daily close as the data. What I have tried so far is just a local-maximum heuristic:
    - For the two stocks that have the highest correlation with each other, remove one of them. Between the two, remove the one that has the higher average correlation with all the other stocks.
    - When I remove a stock from the pool, I just delete the corresponding row and column, to give a smaller matrix.
    - Repeat until I have just 10 stocks remaining (a 10x10 matrix).
    I was wondering if there is some algorithm that already solves such a problem and gives the optimum solution?
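
    A rough numpy transcription of the greedy heuristic described above (it reproduces the local-maximum approach, not the optimal selection the question asks for; the names are illustrative):

        import numpy as np

        def greedy_drop(corr, names, keep=10):
            # corr: 50x50 symmetric correlation matrix, names: matching list of tickers
            corr, names = corr.copy(), list(names)
            while len(names) > keep:
                c = np.abs(corr)
                np.fill_diagonal(c, -1.0)                     # ignore self-correlation
                i, j = np.unravel_index(np.argmax(c), c.shape)
                # of the most-correlated pair, drop the one with the higher average correlation
                drop = i if c[i].mean() >= c[j].mean() else j
                corr = np.delete(np.delete(corr, drop, axis=0), drop, axis=1)
                del names[drop]
            return names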

  • Azure - Microsoft.IdentityModel not found

    - by Andy
    Hi there, I'm working with a WCF service in Azure, which uses Windows Live ID authentication with the recent deviceid requirements. When I host my WCF service locally in the compute emulator, it works properly, but when I deploy the cloud service to Azure and call it the same way (from another project that uses the WCF service as a service reference), I get the error: Could not load file or assembly 'Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified. I found this post: http://social.msdn.microsoft.com/Forums/en-US/netservices/thread/cd139b5c-ad12-4298-af2f-1b2d0136a977 But there are a few problems:
    1. I don't seem to have access to Microsoft.IdentityModel, only System.IdentityModel. I'm not sure why it's searching for something in 3.5 at all, as I'm building in .NET 4.0.
    2. When I choose to "copy to local" on System.IdentityModel, it doesn't change anything.
    Any help? I would appreciate it! Best Regards, Andy

  • Map API with Building Elevation Data

    - by Laserallan
    Hi! I'm looking for a map API where I can get detailed elevation for points. I'm not looking for elevation differences along certain paths or roads on the map, but actual building heights. Getting the 3D meshes for buildings would also be fine, since I can compute the height myself using that information. Do any of the map APIs out there support giving out this kind of information? EDIT: Free APIs are preferred, but if that's not an option I'd like to hear about non-free alternatives as well.

  • Tail-recursive pow() algorithm with memoization?

    - by Dan
    I'm looking for an algorithm to compute pow() that's tail-recursive and uses memoization to speed up repeated calculations. Performance isn't an issue; this is mostly an intellectual exercise - I spent a train ride coming up with all the different pow() implementations I could, but was unable to come up with one that I was happy with that had these two properties. My best shot was the following:

        def calc_tailrec_mem(base, exp, cache_line={}, acc=1, ctr=0):
            if exp == 0:
                return 1
            elif exp == 1:
                return acc * base
            elif exp in cache_line:
                val = acc * cache_line[exp]
                cache_line[exp + ctr] = val
                return val
            else:
                cache_line[ctr] = acc
                return calc_tailrec_mem(base, exp-1, cache_line, acc * base, ctr + 1)

    It works, but it doesn't memoize the results of all calculations - only those with exponents 1..exp/2 and exp.
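
    For comparison, a sketch of one alternative (not a fix to the code above): keep the invariant acc == base ** k, cache that pair at every step, and let later calls resume from the largest exponent already cached. Every recursive call stays in tail position; negative exponents are not handled.

        _cache = {}   # (base, k) -> base ** k, shared across calls

        def pow_tailrec_memo(base, exp, _acc=1, _k=0):
            _cache[(base, _k)] = _acc                 # _acc is base ** _k by construction
            if _k == exp:
                return _acc
            hit = _cache.get((base, exp))
            if hit is not None:
                return hit
            # resume from the largest cached exponent below exp, if there is one
            best = max((k for (b, k) in _cache if b == base and _k < k <= exp), default=None)
            if best is not None:
                return pow_tailrec_memo(base, exp, _cache[(base, best)], best)
            return pow_tailrec_memo(base, exp, _acc * base, _k + 1)

        print(pow_tailrec_memo(2, 10))   # 1024, and exponents 0..10 are now cached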

  • Tracking unique versions of files with hashes

    - by rwmnau
    I'm going to be tracking different versions of potentially millions of different files, and my intent is to hash them to determine I've already seen that particular version of the file. Currently, I'm only using MD5 (the product is still in development, so it's never dealt with millions of files yet), which is clearly not long enough to avoid collisions. However, here's my question - Am I more likely to avoid collisions if I hash the file using two different methods and store both hashes (say, SHA1 and MD5), or if I pick a single, longer hash (like SHA256) and rely on that alone? I know option 1 has 288 hash bits and option 2 has only 256, but assume my two choices are the same total hash length. Since I'm dealing with potentially millions of files (and multiple versions of those files over time), I'd like to do what I can to avoid collisions. However, CPU time isn't (completely) free, so I'm interested in how the community feels about the tradeoff - is adding more bits to my hash proportionally more expensive to compute, and are there any advantages to multiple different hashes as opposed to a single, longer hash, given an equal number of bits in both solutions?
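
    A minimal sketch of the single-longer-hash option in Python (the file name is invented): stream each file through SHA-256 and use the digest as the version key.

        import hashlib

        def file_version_key(path, chunk_size=1 << 20):
            # Stream the file so large inputs don't need to fit in memory.
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            return h.hexdigest()                 # 256-bit key, 64 hex characters

        seen = set()
        key = file_version_key("example.bin")    # hypothetical file
        if key not in seen:
            seen.add(key)                        # first time this exact version is seen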

  • Azure, SLAs and CAP theorem

    - by dayscott
    Azure itself is IMO PaaS and not IaaS. Do you agree? MS guarantees an availability of 99% and strong consistency. You can find the MS SLAs here: http://www.microsoft.com/windowsazure/sla (three SLAs; uptime: http://img229.imageshack.us/img229/4889/unbenanntqt.png ). I can't find anything about how they are going to achieve that. Do they do backups? If yes, how do they manage consistency? According to the CAP theorem (http://camelcase.blogspot.com/2007/08/cap-theorem.html ), their claims are not realistic. 2.1 Do you know detailed technical stuff about how they are going to realize the claims about consistency and availability? On the MS page you'll find three SLA .docs, one for SQL Azure, the second for Azure AppFabric/.Net Services and the third for Azure Compute & Storage (Screenshot in 1.). How can one track whether SLAs are violated? Do they offer some sort of monitor, so I don't have to measure the uptime by myself?

  • iTunes Visualizer Plugin in C# - Energy Function

    - by James D
    Hi, I'm writing an iTunes visualizer plugin in C#. What's the easiest way to compute the "energy level" for a particular sample? I looked at this writeup on beat detection over at GameDev and have had some success with it (I'm not doing beat detection per se, but it's relevant). But ultimately I'm looking for a stupid-as-possible, quick-and-dirty approach for demo purposes. For those who aren't familiar with how iTunes structures visualization data, basically you're given this:

        struct VisualPluginData {
            /* SNIP */
            RenderVisualData renderData;
            UInt32 renderTimeStampID;
            UInt8 minLevel[kVisualMaxDataChannels];   // 0-128
            UInt8 maxLevel[kVisualMaxDataChannels];   // 0-128
        };

        struct RenderVisualData {
            UInt8 numWaveformChannels;
            UInt8 waveformData[kVisualMaxDataChannels][kVisualNumWaveformEntries];   // 512-point FFT
            UInt8 numSpectrumChannels;
            UInt8 spectrumData[kVisualMaxDataChannels][kVisualNumSpectrumEntries];
        };

    Ideally, I'd like an approach that a beginning programmer with little to no DSP experience could grasp and improve on. Any ideas? Thanks!
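
    For a quick-and-dirty number, one option (my assumption about what "energy" should mean here, not something the SDK defines) is the RMS of one channel's waveform samples after removing the 128 midpoint; the arithmetic, sketched in Python, ports directly to C#:

        # Rough "energy" for one channel: root-mean-square of the waveform samples,
        # assuming they are unsigned bytes centred on 128.
        def channel_energy(waveform):                 # e.g. 512 ints in 0..255
            n = len(waveform)
            return (sum((s - 128) ** 2 for s in waveform) / n) ** 0.5

        print(channel_energy([128] * 512))            # silence -> 0.0
        print(channel_energy([0, 255] * 256))         # full-swing square wave -> ~127.5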

  • Bring to front DatePicker on an UITextField

    - by crazyfr
    Hi, when a UITextField is first responder, I would like to bring a UIDatePicker to the front (in the bottom part of the screen) without the keyboard sliding down (i.e., without calling resignFirstResponder on the UITextField). The aim is to behave like the UITextField's keyboard, which pops up over nearly everything when the field becomes first responder. modalViewController seems to be fullscreen only.

        - showDatePicker:(id)sender
        {
            if ([taskName isFirstResponder])
                [taskName resignFirstResponder];
            [self.view.window addSubview: self.pickerView];
            // size up the picker view and compute the start/end frame origin (...)
            [UIView commitAnimations];
        }

    This example is an animation of the keyboard going down and the DatePicker going up, behind and not in front. Do you know a solution? A piece of code would be welcome. Thanks in advance.

  • Matrix inversion in OpenCL

    - by buchtak
    Hi, I am trying to accelerate some computations using OpenCL, and part of the algorithm consists of inverting a matrix. Is there any open-source library or freely available code to compute the LU factorization (LAPACK dgetrf and dgetri) of a matrix, or general inversion, written in OpenCL or CUDA? The matrix is real and square but doesn't have any other special properties besides that. So far, I've managed to find only basic BLAS matrix-vector operations implemented on the GPU. The matrix is rather small, only about 60-100 rows and columns, so it could be computed faster on the CPU, but it's used kind of in the middle of the algorithm, so I would have to transfer it to the host, calculate the inverse, and then transfer the result back to the device, where it's then used in much larger computations.
