Search Results

Search found 8344 results on 334 pages for 'fast vector highlighter'.

Page 51 of 334

  • How Fast is Your Website? Increase Website Speed Instantly

    The most popular websites on the web today load quickly; visitors don't like to wait. Slow websites hurt brands, frustrate visitors, and cost more in terms of visitor productivity and data transfer. Increasing the performance of your website is now not only easier; we've made it automatic.

    Read the article

  • After upgrade my webcam mic records fast, high pitched, and squeaky only in Skype (maybe Sound Recorder problem too)

    - by Dennis
    After an upgrade to 11.10, which probably also updated Skype to 2.2.35 (I'm not sure, because I never checked the version before), the sound that comes back from an echo test is very high-pitched and squeaky. I'm not sure whether, when in a call, the other person can't hear me or just doesn't know what they're hearing. I am using a USB Logitech C250. Audacity records fine and Gmail video chat works fine, but if I start Sound Recorder I get a "Could not negotiate format", followed by "Could not get/set settings from/on resource". I don't know if this is a Skype problem or a wider PulseAudio problem. My only real needs are Gmail and Audacity, though I have a couple of contacts that I can only Skype with.

    Read the article

  • Right-click In Ubuntu 11.10 Acts Too Fast - No Fix Yet, Any Workaround?

    - by badlearner
    When I click the right mouse button (anywhere: desktop, browser, console, etc.), the right-click context menu pops up so quickly that an option in the menu gets clicked immediately. This happens too often to shrug off. The issue has been brought up a couple of times on Ask Ubuntu, but with no fix whatsoever; I assume it is a very low priority issue for the Ubuntu team (how could it be?!). Is there at least a workaround? Ubuntu is almost unusable for me as it is now. PS: I have a new mouse, so please don't suggest that I get a new one; everything works fine on Windows 7, so it should also work on Ubuntu 11.10, which it does not.

    Read the article

  • Does the Fast Video Download addon share my data?

    - by Frost Shadow
    I've installed the Fast Video Download addon, version 3.0.8, for Firefox to download Flash videos, for example from YouTube. What I'm wondering is: how does the addon download them, and can other people see that I'm downloading the videos? For example, is all the software needed to download the video already on my computer, or does the addon contact someone else to get the video, or let them know? Can the webpage's administrator see that I'm downloading the video?

    Read the article

  • suggestions for fast reliable proxy IPs like codeen but with posting?

    - by barlop
    I am looking for a list, like the one offered by CoDeeN (http://codeen.cs.princeton.edu/), of fast, reliable proxy servers. I just want to be able to POST on Usenet or Yahoo Groups through them; I think the CoDeeN ones don't allow HTTP POST. I don't need them for downloading or torrents, or even for any images; they can block images to keep browsing faster. I know it's not a list, but I did try Tor once, and it was horribly slow.

    Read the article

  • Getting pixel averages of a vector sitting atop a bitmap...

    - by user346511
    I'm currently involved in a hardware project where I am mapping triangular-shaped LEDs to traditional bitmap images. I'd like to overlay a triangle vector onto an image and get the average pixel data within the bounds of that vector. However, I'm unfamiliar with the math needed to calculate this. Does anyone have an algorithm or a link that could send me in the right direction? (I tagged this as Python, which is preferred, but I'd be happy with the general algorithm! A sketch of one approach follows this entry.) I've created a basic image of what I'm trying to capture here: http://imgur.com/Isjip.gif

    Read the article
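
    Regarding the triangle-averaging question above, here is a minimal sketch of one common approach, assuming the image is available as a NumPy array and the triangle is given as three (x, y) vertices in pixel coordinates. It scans the triangle's bounding box and uses an edge-function (barycentric-style) inside test; the function name and the NumPy dependency are illustrative assumptions, not from the original post.

        # Sketch only: average the pixels of a bitmap that fall inside a triangle.
        # Assumes `image` is an H x W (x C) NumPy array; names are made up for illustration.
        import numpy as np

        def average_pixels_in_triangle(image, v0, v1, v2):
            h, w = image.shape[:2]
            xs = (v0[0], v1[0], v2[0])
            ys = (v0[1], v1[1], v2[1])
            # Clip the triangle's bounding box to the image.
            x_min, x_max = max(int(min(xs)), 0), min(int(max(xs)) + 1, w)
            y_min, y_max = max(int(min(ys)), 0), min(int(max(ys)) + 1, h)

            # Pixel-centre coordinates inside the bounding box.
            yy, xx = np.mgrid[y_min:y_max, x_min:x_max]
            xx = xx + 0.5
            yy = yy + 0.5

            def edge(a, b, px, py):
                # Signed area test for edge a->b versus point (px, py).
                return (px - a[0]) * (b[1] - a[1]) - (py - a[1]) * (b[0] - a[0])

            e0, e1, e2 = edge(v0, v1, xx, yy), edge(v1, v2, xx, yy), edge(v2, v0, xx, yy)
            # Inside if all three edge functions share a sign (works for either winding order).
            inside = ((e0 >= 0) & (e1 >= 0) & (e2 >= 0)) | ((e0 <= 0) & (e1 <= 0) & (e2 <= 0))

            if not inside.any():
                return None
            region = image[y_min:y_max, x_min:x_max]
            return region[inside].mean(axis=0)  # per-channel average

    For the LED-mapping use case, the same inside test could instead accumulate per-triangle sums in a single pass over the image if there are many triangles.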

  • setPosition of Sprite onUpdate in AndEngine

    - by SSH This
    I am trying to get a "highlighter" circle to follow a sprite around, but I am having trouble: I thought I could use the onUpdate method that's available to me in SequenceEntityModifier, but it's not working for me. Here is my code:

        // make sequence mod with move modifier
        SequenceEntityModifier modifier = new SequenceEntityModifier(myMovemod) {
            @Override
            protected void onModifierFinished(IEntity pItem) {
                // animation finished
                super.onModifierFinished(pItem);
            }

            public float onUpdate(float pSecondsElapsed, IEntity pItem) {
                highlighter.setPosition(player2.getX() - highlighterOffset,
                                        player2.getY() - highlighterOffset);
                return pSecondsElapsed;
            }
        };

    When onUpdate is completely commented out, the sprite moves like I want it to and everything is OK. When I put the onUpdate in, the sprite doesn't move at all. I have a feeling that I am overriding the original onUpdate's actions. Am I going about this the wrong way? I am new to Java, so please feel free to advise if this isn't going to work. UPDATE: player2 is the sprite that I'm trying to get the highlighter to follow.

    Read the article

  • Matrix Multiplication with C++ AMP

    - by Daniel Moth
    As part of our API tour of C++ AMP, we looked recently at parallel_for_each. I ended that post by saying we would revisit parallel_for_each after introducing array and array_view. Now is the time, so this is part 2 of parallel_for_each, and also a post that brings together everything we've seen until now.

    The code for serial and accelerated

    Consider a naïve (or brute force) serial implementation of matrix multiplication:

         0: void MatrixMultiplySerial(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W)
         1: {
         2:   for (int row = 0; row < M; row++)
         3:   {
         4:     for (int col = 0; col < N; col++)
         5:     {
         6:       float sum = 0.0f;
         7:       for (int i = 0; i < W; i++)
         8:         sum += vA[row * W + i] * vB[i * N + col];
         9:       vC[row * N + col] = sum;
        10:     }
        11:   }
        12: }

    We notice that each loop iteration is independent from the others and so can be parallelized. If in addition we have really large amounts of data, then this is a good candidate to offload to an accelerator. First, I'll just show you an example of what that code may look like with C++ AMP, and then we'll analyze it. It is assumed that you included #include <amp.h> at the top of your file.

        13: void MatrixMultiplySimple(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W)
        14: {
        15:   concurrency::array_view<const float,2> a(M, W, vA);
        16:   concurrency::array_view<const float,2> b(W, N, vB);
        17:   concurrency::array_view<concurrency::writeonly<float>,2> c(M, N, vC);
        18:   concurrency::parallel_for_each(c.grid,
        19:     [=](concurrency::index<2> idx) restrict(direct3d) {
        20:       int row = idx[0]; int col = idx[1];
        21:       float sum = 0.0f;
        22:       for (int i = 0; i < W; i++)
        23:         sum += a(row, i) * b(i, col);
        24:       c[idx] = sum;
        25:     });
        26: }

    First a visual comparison, just for fun: the beginning and end are the same, i.e. lines 0,1,12 are identical to lines 13,14,26. The double nested loop (lines 2,3,4,5 and 10,11) has been transformed into a parallel_for_each call (18,19,20 and 25). The core algorithm (lines 6,7,8,9) is essentially the same (lines 21,22,23,24). We have extra lines in the C++ AMP version (15,16,17). Now let's dig in deeper.

    Using array_view and extent

    When we decided to convert this function to run on an accelerator, we knew we couldn't use the std::vector objects in the restrict(direct3d) function. So we had a choice of copying the data to the concurrency::array<T,N> object, or wrapping the vector container (and hence its data) with a concurrency::array_view<T,N> object from amp.h; here we used the latter (lines 15,16,17). Now we can access the same data through the array_view objects (a and b) instead of the vector objects (vA and vB), and the added benefit is that we can capture the array_view objects in the lambda (lines 19-25) that we pass to the parallel_for_each call (line 18) and the data will get copied on demand for us to the accelerator. Note that line 15 (and ditto for 16 and 17) could have been written as two lines instead of one:

        extent<2> e(M, W);
        array_view<const float, 2> a(e, vA);

    In other words, we could have explicitly created the extent object instead of letting the array_view create it for us under the covers through the constructor overload we chose. The benefit of the extent object in this instance is that we can express that the data is indeed two-dimensional, i.e. a matrix. When we were using a vector object we could not do that, and instead we had to track the dimensions of the matrix via additional unrelated variables (i.e. with the integers M and W) – aren't you loving C++ AMP already? Note that the const before the float when creating a and b will result in the underlying data only being copied to the accelerator and not being copied back – a nice optimization. A similar thing is happening on line 17 when creating array_view c, where we have indicated that we do not need to copy the data to the accelerator, only copy it back.

    The kernel dispatch

    On line 18 we make the call to the C++ AMP entry point (parallel_for_each) to invoke our parallel loop or, as some may say, dispatch our kernel. The first argument we need to pass describes how many threads we want for this computation. For this algorithm we decided that we want exactly the same number of threads as the number of elements in the output matrix, i.e. in array_view c, which will eventually update the vector vC. So each thread will compute exactly one result. Since the elements in c are organized in a two-dimensional manner we can organize our threads in a two-dimensional manner too. We don't have to think too much about how to create the first argument (a grid) since the array_view object helpfully exposes that as a property. Note that instead of c.grid we could have written grid<2>(c.extent) or grid<2>(extent<2>(M, N)) – the result is the same in that we have specified M*N threads to execute our lambda. The second argument is a restrict(direct3d) lambda that accepts an index object. Since we elected to use a two-dimensional extent as the first argument of parallel_for_each, the index will also be two-dimensional and, as covered in the previous posts, it represents the thread ID, which in our case maps perfectly to the index of each element in the resulting array_view.

    The kernel itself

    The lambda body (lines 20-24), or as some may say, the kernel, is the code that will actually execute on the accelerator. It will be called by M*N threads and we can use those threads to index into the two input array_views (a, b) and write results into the output array_view (c). The four lines (21-24) are essentially identical to the four lines of the serial algorithm (6-9). The only difference is how we index into a, b, c versus how we index into vA, vB, vC. The code we wrote with C++ AMP is much nicer in its indexing, because the dimensionality is a first-class concept, so you don't have to do funny arithmetic calculating the index of where the next row starts, which you have to do when working with vectors directly (since they store all the data in a flat manner). I skipped over describing line 20. Note that we didn't really need to read the two components of the index into temporary local variables. This mostly reflects my personal choice, in some algorithms, to break down the index into local variables with names that make sense for the algorithm, i.e. in this case row and col. In other cases it may be i,j,k, or x,y,z, or M,N, or whatever. Also note that we could have written line 24 as c(idx[0], idx[1]) = sum or c(row, col) = sum instead of the simpler c[idx] = sum.

    Targeting a specific accelerator

    Imagine that we had more than one hardware accelerator on a system and we wanted to pick a specific one to execute this parallel loop on. So there would be some code like this anywhere before line 18:

        vector<accelerator> accs = MyFunctionThatChoosesSuitableAccelerators();
        accelerator acc = accs[0];

    …and then we would modify line 18 so we would be calling another overload of parallel_for_each that accepts an accelerator_view as the first argument, so it would become:

        concurrency::parallel_for_each(acc.default_view, c.grid,

    …and the rest of your code remains the same… how simple is that?

    Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • How to find a binary logarithm very fast? (O(1) at best)

    - by psihodelia
    Is there any very fast method to find the binary logarithm of an integer? For example, given the number x = 52656145834278593348959013841835216159447547700274555627155488768, such an algorithm must find y = log(x, 2), which is 215. x is always a power of 2. The problem seems really simple: all that is required is to find the position of the most significant 1 bit. There is a well-known method, FloorLog, but it is not very fast, especially for very long multi-word integers. What is the fastest method? (A short sketch follows this entry.)

    Read the article
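
    A note on the question above: the post doesn't name a language, so as an assumption this sketch uses Python, whose arbitrary-precision integers expose bit_length(); for an exact power of two, the exponent is simply bit_length() - 1, with no per-word scanning in user code.

        # Sketch: binary logarithm of a big integer that is an exact power of 2.
        # The function name is illustrative.
        def binary_log_of_power_of_two(x):
            if x <= 0 or (x & (x - 1)) != 0:
                raise ValueError("x must be a positive power of 2")
            # The single 1 bit sits at position bit_length() - 1.
            return x.bit_length() - 1

        x = 52656145834278593348959013841835216159447547700274555627155488768
        print(binary_log_of_power_of_two(x))  # 215, per the figures in the question

    In C or C++ the equivalent trick on a multi-word integer is to locate the highest non-zero word and apply a count-leading-zeros instruction to it, which big-number libraries typically already do internally.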

  • Hierarchical Data in MySQL is as fast as XML to retrieve?

    - by ajsie
    I've got a list of all countries - states - cities (and sub-cities/villages, etc.) in an XML file, and retrieving, for example, all of a state's cities is really quick with XML (using an XML parser). I wonder: if I put all this information in MySQL, is retrieving all of a state's cities as fast as with XML? After all, XML is designed to store hierarchical data, while relational databases like MySQL are not. The list contains around 500,000 entities. So I wonder whether it is as fast as XML using either of: the Adjacency List model or the Nested Set model. And which one should I use? (Theoretically) there could be unlimited levels under a state, and I've heard that the adjacency list isn't good for unlimited child levels. And which is fastest for this huge dataset? Thanks! (A small illustration of the nested set model follows this entry.)

    Read the article
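
    To illustrate the nested set model mentioned above (with made-up data, not from the post): every node stores a left/right pair, and all of a node's descendants, at any depth, fall strictly inside its interval, so one range scan (or one SQL BETWEEN query) fetches them without recursion. The adjacency list model stores only a parent reference and needs one query or recursion step per level.

        # Sketch: nested set lookup over hypothetical (name, lft, rgt) rows.
        nodes = [
            ("Sweden",    1, 10),
            ("Stockholm", 2,  5),   # a region under Sweden
            ("Solna",     3,  4),   # a city under Stockholm
            ("Uppsala",   6,  9),
            ("Knivsta",   7,  8),
        ]

        def descendants(nodes, name):
            lft, rgt = next((l, r) for n, l, r in nodes if n == name)
            # Descendants are exactly the nodes nested inside (lft, rgt).
            return [n for n, l, r in nodes if lft < l and r < rgt]

        print(descendants(nodes, "Sweden"))     # ['Stockholm', 'Solna', 'Uppsala', 'Knivsta']
        print(descendants(nodes, "Stockholm"))  # ['Solna']

    The trade-off the question is weighing follows from this: nested sets make deep reads cheap but make inserts and moves expensive (the left/right numbers must be renumbered), while the adjacency list is cheap to update but needs recursive queries for deep reads.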

  • Will new Acer Revo (with Atom 330) be fast enough to be MythTV client/server?

    - by vava
    As a geek I really like Atom CPUs, but I can't find a reason to buy one yet :( although I have been thinking about building my own DVR with NAS and media-center functionality. Unfortunately, even today's Acer Revo, built on the ION platform, is not fast enough for streaming Full HD video. So what do you think: will the new dual-core CPU make it better? Will it be able to show Full HD video, store it to disk, and transfer something over the network at the same time? Will it be able to scale videos from Hulu and YouTube to fullscreen?

    Read the article

  • Is there an application I can run that will tell me how fast my computer is running?

    - by Robert Hume
    At work I log into a virtual Windows machine. I'm told it runs as fast as a PC, but I'm skeptical. Is there an application I can run on the machine that will tell me how fast it is actually running? It would be nice if it ran on both Windows and Mac. Updating with more details: I was asked "why does it matter", so here's why. It matters because I'm a programmer and I need as much speed (CPU and memory) as possible to do my work. In my opinion the virtual machine is noticeably slower than a basic $800 PC would be, but I need a way of proving it. Websites like Bandwidthtest.com can show me my internet speed, so I'm wondering if there's an app that can test my computer's speed.

    Read the article

  • Are there any command line utilities which can calculate and/or limit how fast a pipe is running?

    - by stsquad
    I'm doing some basic stress testing of a Linux kernel network IWF with netcat. The set-up is fairly simple. On the target side:

        nc -l -p 10000 > /dev/null

    And on my desktop I was running:

        cat /dev/urandom | nc 192.168.0.20 10000

    I'm using urandom for some poor man's fuzz testing. However, I find that even at this rate I can break something quite quickly.

    EDIT: So I've been playing with trickle to rate-limit how fast I'm generating data:

        cat /dev/urandom | trickle -u 10 nc 192.168.0.20 10000

    But it's hard to tell whether this is working. What would be really useful is the pv equivalent of trickle that can work with pipes. (A sketch along these lines follows this entry.)

    Read the article
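
    On the question above: pv itself can both display pipe throughput and, with its rate-limit option (-L), cap it, which may already be the tool being asked for. As a language-agnostic illustration of what such a throttle does, here is a minimal Python sketch of a rate-limited pass-through pipe; the script name and the default rate are made up.

        #!/usr/bin/env python3
        # Sketch: pass stdin to stdout at no more than a given average byte rate,
        # then report the achieved rate on stderr. Usage (illustrative):
        #   cat /dev/urandom | python3 throttle.py 10240 | nc 192.168.0.20 10000
        import sys
        import time

        def throttle(rate_bytes_per_sec, chunk=4096):
            sent = 0
            start = time.monotonic()
            stdin, stdout = sys.stdin.buffer, sys.stdout.buffer
            while True:
                data = stdin.read(chunk)
                if not data:
                    break
                stdout.write(data)
                stdout.flush()
                sent += len(data)
                # Sleep just enough to hold the average at the requested rate.
                expected = sent / rate_bytes_per_sec
                elapsed = time.monotonic() - start
                if expected > elapsed:
                    time.sleep(expected - elapsed)
            elapsed = max(time.monotonic() - start, 1e-9)
            print(f"{sent} bytes in {elapsed:.1f}s ({sent / elapsed:.0f} B/s)", file=sys.stderr)

        if __name__ == "__main__":
            throttle(int(sys.argv[1]) if len(sys.argv) > 1 else 10240)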

  • Desktop runs very slick, animations are all fast and flawless. Moving windows around, however, is very laggy. Why?

    - by Muu
    This isn't a question about Ubuntu being laggy in general; not at all, in fact, it's very slick and fast for me. Clicking the "Workspace Switcher" in the dock performs the animation immediately and very smoothly. Switching between workspaces with the arrow keys: again, flawless. My computer runs at a resolution of 2560x1440 on a 27" display (no, not an Apple product, though my monitor has the same panel that Apple uses in their cinema displays). It's powered by an Nvidia GeForce GTX 470, easily enough to handle it, and an Intel i3. Hardware is not the issue. I am running Ubuntu 11.10 (upgraded from 11.04), and I had the same issue in 11.04. I'm running the "NVIDIA accelerated graphics driver (post-release updates) (version current-updates)" from the Additional Drivers dialogue. Two drivers have been suggested to me via that dialogue and I've tried both, with the same effect each time. The driver is "activated and currently in use". If any other information is required, let me know and I'll post it. I'm a programmer who works with Linux daily (both as a job and as an interest), so technical instructions are fine. I've noticed that Compiz uses a lot of CPU when moving windows around and its memory usage is relatively high (though possibly expected for Compiz):

        1671 user      20   0  478m 286m  33m S    1  7.3  12:44.05 compiz

    And one more thing: occasionally moving windows around is fast, but that only happens when all applications are closed, and even then it sometimes doesn't. Something must be interfering, but what? I'll try to find out, but in the meantime any suggestions are much appreciated :-)

    Read the article

  • C++ Pointers, objects, etc

    - by Zeee
    It may be a bit confusing, but... let's say I have a vector in a class to store objects, something like vector<Operator>, and I have methods on my class that will later return Operators from this vector. Now, if one of my methods receives an Operator, will I have any trouble inserting it directly into the vector? Or should I use the copy constructor to create a new Operator and put that new one in the vector?

    Read the article

  • Some optimization about the code (computing ranks of a vector)?

    - by user1748356
    The following code is a (performance-critical) function to compute tied ranks of a vector:

        mergeSort(x, inds, ci);  // sorts vector x of length ci and also returns the keys (inds) of x
        int tj = 0;
        double xi = x[0];
        for (int j = 1; j < ci; ++j) {
            if (x[j] > xi) {
                double rankvalue = 0.5 * (j - 1 + tj);
                for (int k = tj; k < j; ++k) {
                    ranks[inds[k]] = rankvalue;
                }
                tj = j;
                xi = x[j];
            }
        }
        double rankvalue = 0.5 * (ci - 1 + tj);
        for (int k = tj; k < ci; ++k) {
            ranks[inds[k]] = rankvalue;
        }

    The problem is that the supposed performance bottleneck, mergeSort(), which is O(N log N), is several times faster than the rest of the code (which is O(N)). That suggests there is room for huge improvement in the other part of the code. Any advice?

    Read the article

  • How can I override list methods to do vector addition and subtraction in python?

    - by Bobble
    I originally implemented this as a wrapper class around a list, but I was annoyed by the number of operator() methods I needed to provide, so I had a go at simply subclassing list. This is my test code:

        class CleverList(list):
            def __add__(self, other):
                copy = self[:]
                for i in range(len(self)):
                    copy[i] += other[i]
                return copy

            def __sub__(self, other):
                copy = self[:]
                for i in range(len(self)):
                    copy[i] -= other[i]
                return copy

            def __iadd__(self, other):
                for i in range(len(self)):
                    self[i] += other[i]
                return self

            def __isub__(self, other):
                for i in range(len(self)):
                    self[i] -= other[i]
                return self

        a = CleverList([0, 1])
        b = CleverList([3, 4])
        print('CleverList does vector arith: a, b, a+b, a-b = ', a, b, a+b, a-b)
        c = a[:]
        print('clone test: e = a[:]: a, e = ', a, c)
        c += a
        print('OOPS: augmented addition: c += a: a, c = ', a, c)
        c -= b
        print('OOPS: augmented subtraction: c -= b: b, c, a = ', b, c, a)

    Normal addition and subtraction work in the expected manner, but there are problems with the augmented addition and subtraction. Here is the output:

        >>>
        CleverList does vector arith: a, b, a+b, a-b = [0, 1] [3, 4] [3, 5] [-3, -3]
        clone test: e = a[:]: a, e = [0, 1] [0, 1]
        OOPS: augmented addition: c += a: a, c = [0, 1] [0, 1, 0, 1]
        Traceback (most recent call last):
          File "/home/bob/Documents/Python/listTest.py", line 35, in <module>
            c -= b
        TypeError: unsupported operand type(s) for -=: 'list' and 'CleverList'
        >>>

    Is there a neat and simple way to get the augmented operators working in this example? (A sketch of one possible fix follows this entry.)

    Read the article
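
    A note on the question above: the augmented operators defined on CleverList behave as written; the surprise comes from c = a[:], because slicing a list subclass returns a plain list, so c += a falls back to list's own in-place add (which extends) and c -= b finds no subtraction defined at all. One minimal sketch of a fix, under the assumption that every result should stay a CleverList, is to build copies through type(self); the helper names below are illustrative.

        # Sketch: keep every result a CleverList by copying via type(self),
        # and give the class its own copy() so clones keep the subclass.
        class CleverList(list):
            def _combined(self, other, op):
                copy = type(self)(self)          # a CleverList copy, not a plain list
                for i in range(len(copy)):
                    copy[i] = op(copy[i], other[i])
                return copy

            def __add__(self, other):
                return self._combined(other, lambda x, y: x + y)

            def __sub__(self, other):
                return self._combined(other, lambda x, y: x - y)

            def __iadd__(self, other):
                for i in range(len(self)):
                    self[i] += other[i]
                return self

            def __isub__(self, other):
                for i in range(len(self)):
                    self[i] -= other[i]
                return self

            def copy(self):
                return type(self)(self)

        a = CleverList([0, 1])
        b = CleverList([3, 4])
        c = a.copy()        # instead of a[:], which would give a plain list
        c += a
        c -= b
        print(a, b, c)      # [0, 1] [3, 4] [-3, -2]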
