Search Results

Search found 4273 results on 171 pages for 'shift reduce'.

  • Best way to store motion changes to reduce memory

    - by Andrew Simpson
    I am comparing JPEG to JPEG in a constant 'video stream'. I am using EMGU/OpenCV to compare each pixel at the byte level. There are 3 channels to each image (RGB). I had heard that it is common practice to store only the pixels that have changed between frames as a way of conserving memory space. But if, for instance, I say EVERY pixel has changed (please note I am using an exaggerated example to make my point, and I would normally discard such large changes), then the resulting saved bytes are 3 times larger than the original JPEG. How can I store such motion changes efficiently? Thanks.
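
    A minimal sketch of the usual delta-encoding approach, assuming raw decoded frames of equal byte length (all names here are hypothetical, not EMGU API): store only (offset, value) pairs for bytes that differ, and fall back to storing a full keyframe whenever the delta would outgrow the frame itself -- which is exactly the "every pixel changed" worst case described above.

        import java.util.ArrayList;
        import java.util.List;

        class FrameDelta {
            // One changed byte: its offset in the frame buffer and its new value.
            record Change(int offset, byte value) {}

            // Returns null when the delta would cost more than a full keyframe,
            // signalling the caller to store the whole frame instead.
            static List<Change> diff(byte[] prev, byte[] curr) {
                // Each Change costs ~5 bytes (4-byte offset + 1-byte value),
                // so the break-even point is roughly curr.length / 5 changes.
                int maxChanges = curr.length / 5;
                List<Change> changes = new ArrayList<>();
                for (int i = 0; i < curr.length; i++) {
                    if (prev[i] != curr[i]) {
                        if (changes.size() >= maxChanges) return null; // keyframe
                        changes.add(new Change(i, curr[i]));
                    }
                }
                return changes;
            }
        }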

  • Algorithm to reduce calls to mapping API

    - by aidan
    A random distribution of points lies on a map. This data lies behind an API, and I want to grab the complete set of points within a given bounding box. I can query the API with the bounding box and it will return the set of points that fall within that box. The problem is that the API limits the result set to 10 items, with no pagination and no indication of whether any points have been omitted. So I made a recursive algorithm that takes a bounding box and requests the points that lie within it. If the result set is exactly 10 items, then I split the bounding box into four quadrants and recurse. It works fine, but my question is this: if I want to minimize the number of API calls, what is the optimal way to split the bounding box? Splitting it into quadrants was just an arbitrary decision. When there are a lot of points on the map, I have to drill down many levels before I start getting meaningful results, so I imagine it might be faster to split the box into, say, 9, 16, or more sections. But if I do that, then I eventually get to a point where a lot of requests are returning 0 results, which isn't so efficient. Also, does the size of the limit on the result set affect the answer? (This is all assuming that I have no prior knowledge of nominal point density in the bounding box.)
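
    For illustration, a compact sketch of the recursive strategy described above, with a hypothetical queryApi(box) standing in for the real service (the method, record types, and limit constant are all assumptions):

        import java.util.ArrayList;
        import java.util.List;

        class BoxSearch {
            record Point(double x, double y) {}
            record Box(double minX, double minY, double maxX, double maxY) {}

            static final int API_LIMIT = 10;

            // Hypothetical stand-in for the real mapping API call.
            static List<Point> queryApi(Box b) {
                throw new UnsupportedOperationException("replace with real call");
            }

            static List<Point> collect(Box b) {
                List<Point> points = queryApi(b);
                if (points.size() < API_LIMIT) return points; // definitely complete
                // Exactly API_LIMIT hits: results may be truncated, so subdivide.
                // (Points sitting exactly on a shared edge can be returned twice
                // and should be de-duplicated by the caller.)
                List<Point> all = new ArrayList<>();
                double mx = (b.minX() + b.maxX()) / 2;
                double my = (b.minY() + b.maxY()) / 2;
                all.addAll(collect(new Box(b.minX(), b.minY(), mx, my)));
                all.addAll(collect(new Box(mx, b.minY(), b.maxX(), my)));
                all.addAll(collect(new Box(b.minX(), my, mx, b.maxY())));
                all.addAll(collect(new Box(mx, my, b.maxX(), b.maxY())));
                return all;
            }
        }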

  • How to reduce tight coupling between two data sources

    - by fstuijt
    I'm having some trouble finding a proper solution to the following architecture problem. In our setting (sketched below) we have 2 data sources, where data source A is the primary source for items of type Foo. A secondary data source B exists which can be used to retrieve additional information on a Foo; however, this information does not always exist. Furthermore, data source A can be used to retrieve items of type Bar, and each Bar refers to a Foo. The difficulty here is that each Bar should refer to a Foo which, where the additional information is available, has already been augmented with it from data source B. My question is: how do I remove the tight coupling between data sources A and B?
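
    One common way to remove the coupling is to hide both sources behind a single repository: callers (and Bars) only ever see Foos that the repository has already augmented, and A and B never reference each other. A rough sketch with hypothetical names:

        import java.util.Optional;

        record FooExtra(String details) {}
        record Foo(String id, Optional<FooExtra> extra) {
            Foo withExtra(FooExtra e) { return new Foo(id, Optional.of(e)); }
        }
        record Bar(String name, Foo foo) {}

        interface FooSourceA { Foo findFoo(String id); }                    // primary (A)
        interface FooAugmenter { Optional<FooExtra> findExtra(String id); } // secondary (B)

        // Callers depend only on this repository, never on A or B directly.
        class FooRepository {
            private final FooSourceA a;
            private final FooAugmenter b;

            FooRepository(FooSourceA a, FooAugmenter b) { this.a = a; this.b = b; }

            Foo getFoo(String id) {
                Foo foo = a.findFoo(id);
                // Augment with B's information only when it exists.
                return b.findExtra(id).map(foo::withExtra).orElse(foo);
            }
        }

    Bars built from data source A would then resolve their Foo through the repository, so they always see the augmented version when one is available.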

  • How to reduce noise in Skype?

    - by tkc
    According to alsamixer I have an HDA Intel sound card with an Nvidia MCP77/78 HDMI chip (Realtek sound card on an MSI notebook). When I use Skype under Ubuntu 12.04 for video calls, the other side hears background noise, as in Windows when you have your microphone boosted. In fact they can hear everything, even the fans turning on. Nothing has been tweaked on this fresh Ubuntu installation. I also tried this site: https://code.launchpad.net/~ubuntu-audio-dev/+archive/alsa-daily/+packages , but there are no *.deb files I can test to see if any of them fix the problem. The question is whether there is a way to add or tweak something at the software level to enable noise cancellation, the way the Windows sound drivers offer that option. I use my built-in mic.
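
    One software-level option worth trying (my suggestion, not something confirmed in the question): PulseAudio ships an echo-cancellation module that also performs noise suppression. Loading it and then selecting the new source as Skype's input device may reduce the background noise:

        # /etc/pulse/default.pa -- assumes the module is present in your PulseAudio build
        load-module module-echo-cancel source_name=noiseless
        # afterwards, choose the "noiseless" source as the input device in Skype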

  • Java @Contented annotation to help reduce false sharing

    - by Dave
    See this posting by Aleksey Shipilev for details -- @Contended is something we've wanted for a long time. The JVM provides automatic layout and placement of fields. Usually it'll (a) sort fields by descending size to improve footprint, and (b) pack reference fields so the garbage collector can process a contiguous run of reference fields when tracing. @Contended gives the program a way to provide more explicit guidance with respect to concurrency and false sharing. Using this facility we can sequester hot, frequently written shared fields away from other mostly read-only or cold fields. The simple rule is that read-sharing is cheap, and write-sharing is very expensive. We can also pack fields together that tend to be written together by the same thread at about the same time. More generally, we're trying to influence relative field placement to minimize coherency misses. Fields that are accessed closely together in time should be placed proximally in space to promote cache locality; that is, temporal locality should condition spatial locality. That having been said, we have to be careful to avoid false sharing and excessive invalidation from coherence traffic, so we try to cluster or otherwise sequester fields that tend to be written at approximately the same time by the same thread onto the same cache line. Note that there's a tension at play: if we try too hard to minimize single-threaded capacity misses then we can end up with excessive coherency misses running in a parallel environment. There's no single layout that is optimal for both single-threaded and multithreaded environments, and the ideal layout problem itself is NP-hard. Ideally, a JVM would employ hardware monitoring facilities to detect sharing behavior and change the layout on the fly. That's a bit difficult as we don't yet have the right plumbing to provide efficient and expedient information to the JVM. Hint: we need to disintermediate the OS and hypervisor. Another challenge is that raw field offsets are used in the unsafe facility, so we'd need to address that issue, possibly with an extra level of indirection. Finally, I'd like to be able to pack final fields together as well, as those are known to be read-only.
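
    A tiny illustration of the intended usage (my sketch, not from the post; in JDK 8 the annotation eventually shipped as sun.misc.Contended and requires -XX:-RestrictContended for application classes):

        import sun.misc.Contended; // jdk.internal.vm.annotation.Contended in JDK 9+

        class StripedCounter {
            // Hot, frequently written shared field: pad it so it does not share
            // a cache line with the cold fields below (avoids false sharing).
            @Contended
            volatile long hits;

            // Mostly read-only fields: read-sharing is cheap, so leave them packed.
            long capacity;
            String name;
        }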

  • Chain-Sys Uses Oracle Business Accelerators to Reduce Implementation Costs by 25%

    - by LanaProut
    Find out how Oracle Business Accelerators for Oracle E-Business Suite and Chain-Sys's appLOAD tool reduced implementation costs by 25% for a major Indian power and infrastructure company. Click here to listen to the 11-minute AppCast.

  • How to reduce MDX code redundancy in SQL Server Analysis Services (SSAS)

    To query an Analysis Services cube, MDX is used as the query language. In most business settings, one finds a set of queries that are common across a number of user query requirements. Even with a modest-sized IT team, there is a good chance that the same queries are developed redundantly, either within an SSAS MDX script or repeatedly in an ad-hoc manner in client applications. In this tip we look at how to reuse queries without redeveloping them over and over.

  • Reduce weight in healthy way - Day 3

    - by krnites
    So I am on Day 3, and what I did today was the total opposite of what I should have done. It seems it will take me forever to lose what I had aimed for. Today I ate more than 5,000 calories, had soda, and ate very oily Indian food. On my Day 2 post someone commented that with my numbers I would lose 1 lb in a week, but my friends, at this rate I will gain 5 lbs in a week. I have to straighten out my act and really focus on what I want to achieve. I am going to hit the gym and burn at least 500 calories today. Piece of advice: don't eat fried, oily, or junk food. Try to have as many vegetables in your food as possible. I understand that, being a normal person and not a diet freak, it's impossible to stay away from tacos or burgers and never drink Coke, but to achieve something you have to give something up.

  • Helping to Reduce Page Compression Failures Rate

    - by Vasil Dimov
    When InnoDB compresses a page it needs the result to fit into its predetermined compressed page size (specified with KEY_BLOCK_SIZE). When the result does not fit we call that a compression failure. In this case InnoDB needs to split up the page and try to compress again. That said, compression failures are bad for performance and should be minimized. Whether the result of the compression will fit largely depends on the data being compressed, and some tables and/or indexes may contain more compressible data than others. And so it would be nice if the compression failure rate, along with other compression stats, could be monitored on a per-table or even on a per-index basis, wouldn't it? This is where the new INFORMATION_SCHEMA table in MySQL 5.6 kicks in. INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX provides exactly this helpful information. It contains the following fields:

        +-----------------+--------------+------+
        | Field           | Type         | Null |
        +-----------------+--------------+------+
        | database_name   | varchar(192) | NO   |
        | table_name      | varchar(192) | NO   |
        | index_name      | varchar(192) | NO   |
        | compress_ops    | int(11)      | NO   |
        | compress_ops_ok | int(11)      | NO   |
        | compress_time   | int(11)      | NO   |
        | uncompress_ops  | int(11)      | NO   |
        | uncompress_time | int(11)      | NO   |
        +-----------------+--------------+------+

    similarly to INFORMATION_SCHEMA.INNODB_CMP, but this time the data is grouped by "database_name,table_name,index_name" instead of by "page_size". So a query like

        SELECT database_name, table_name, index_name,
               compress_ops - compress_ops_ok AS failures
        FROM information_schema.innodb_cmp_per_index
        ORDER BY failures DESC;

    would reveal the most problematic tables and indexes that have the highest compression failure rate. From there, the way to improve performance would be to try increasing the compressed page size or changing the structure of the table/indexes or the data being stored, and see whether that has a positive impact on performance.

  • The need to reduce mesh count

    - by OJW
    In Panda3d, I load a model and place 10000 references to it in the scene graph. It runs at (say) 2 Hz. I load a 3D model containing 10000 copies of that exact same object, and it runs at (say) 60 Hz. As does using the flattenStrong() command, which is effectively the same thing but done at runtime. So the question is: is this behaviour a peculiarity of Panda3d, or is it a fundamental law that applies to all game engines?

  • 5 Ways to Reduce the Google Sandbox Effect

    Google is the most widely used and most popular search engine on the World Wide Web, and it is very strict about the SEO practices websites use. The Google sandbox is the name for the probationary period during which a new website is tested for compliance with Google's policies and guidelines.

  • Reduce size of MP4

    - by testing
    I have an MP4 file with a length of 22:44. Here are the details:

    Video: width 720 px, height 404 px, data bitrate 1022 kbit/s, overall bitrate 1182 kbit/s, 24 fps, codec H264 - MPEG4 AVC (part 10) (avc1)
    Audio: bitrate 159 kbit/s, stereo, sample rate 48 kHz, codec MPEG AAC Audio (mp4a)

    I thought I could reduce the current file size (about 200 MB) by reducing the width and height (420 x 236). I tried different programs: Handbrake, Format Factory, Next Video Converter and Super. The first three didn't work as expected: Handbrake has a bug in setting the width and height, and the next two don't allow fine control of the video size (only presets of width and height). Super seems to be the best, but I haven't found a setting that reduces the file size: I reduced the width and height but only got 20 MB less, and after trying setting after setting I still get too large a file. I want to reduce the file size to 100 MB or less. The output format should be FLV or MP4, because I need this for Flowplayer. Which settings of SUPER, or which program, should I use to reduce the file size? Of course the video should still be watchable.
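
    For what it's worth, a single ffmpeg invocation can do this kind of re-encode (my suggestion, not a tool the asker mentioned; raise the CRF value for a smaller, lower-quality file until you land under 100 MB):

        ffmpeg -i input.mp4 -c:v libx264 -crf 28 -vf scale=420:-2 \
               -c:a aac -b:a 96k output.mp4
        # -crf 28      constant-quality mode (higher value = smaller file)
        # scale=420:-2 width 420 px, height computed to keep aspect (even value)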

  • Disable shift override for autostart programs per user

    - by Jens
    When starting up Windows, one can normally disable all programs in the autostart folder by pressing and holding the SHIFT key. This behavior can be disabled by creating a DWORD value named ignoreShiftOverride, set to 1, under the registry key HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon. I'd like to be able to configure this setting for each individual user. How can this be done?
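
    An untested idea, offered purely as an assumption rather than a known fix: some Winlogon settings have per-user counterparts under HKEY_CURRENT_USER, so it may be worth checking whether the same value is honored there:

        Windows Registry Editor Version 5.00

        ; hypothetical per-user counterpart -- verify that Winlogon reads it here
        [HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Winlogon]
        "ignoreShiftOverride"=dword:00000001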

  • reduce image size in bytes without resize and quality lose in c#

    - by SR Dusad
    Hi, I'm using C#/.NET 4.0. I have a JPEG image and I want to reduce its size in bytes. I don't want to change the image's height and width, and I don't want to lose image quality (a small reduction in quality is not an issue). I tried making a thumbnail image, but that reduces the size by shrinking the height and width. I can't find any solution. Any kind of help will be appreciated.
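
    The usual technique is to re-encode the JPEG at a lower quality setting while keeping the exact same pixel dimensions. The question is about C#, but here is that technique sketched in Java's standard ImageIO API to show the shape of the solution (in .NET the analogous knob is the JPEG encoder's quality parameter):

        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.IIOImage;
        import javax.imageio.ImageIO;
        import javax.imageio.ImageWriteParam;
        import javax.imageio.ImageWriter;
        import javax.imageio.stream.ImageOutputStream;

        public class JpegShrink {
            public static void main(String[] args) throws Exception {
                BufferedImage img = ImageIO.read(new File("in.jpg"));

                ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
                ImageWriteParam param = writer.getDefaultWriteParam();
                param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
                param.setCompressionQuality(0.7f); // 0.0-1.0; lower = smaller file

                try (ImageOutputStream out =
                         ImageIO.createImageOutputStream(new File("out.jpg"))) {
                    writer.setOutput(out);
                    // Same pixels, re-encoded with stronger JPEG compression.
                    writer.write(null, new IIOImage(img, null, null), param);
                }
                writer.dispose();
            }
        }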

  • VB.NET - Convert Unicode in one TB to Shift-JIS in another TB

    - by Yiu Korochko
    Trying to develop a text editor, I've got two textboxes, with a button below each one. When the button below textbox1 is pressed, it is supposed to convert the Unicode text (intended to be Japanese) to Shift-JIS. The reason I am doing this is that the software VOCALOID2 only allows ANSI- and Shift-JIS-encoded text to be pasted into the lyrics system. Users of the application normally have their keyboard already set to switch to Japanese, but it types in Unicode. How can I convert Unicode text to Shift-JIS when SJIS isn't available among the System.Text.Encoding types?
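
    For what it's worth, the general technique on most platforms is to look the codec up by name rather than through the predefined constants -- in .NET that name-based lookup is Encoding.GetEncoding("shift_jis"). The same round trip sketched in Java:

        import java.nio.charset.Charset;

        public class SjisDemo {
            public static void main(String[] args) {
                // Look the encoding up by name instead of using a built-in constant.
                Charset sjis = Charset.forName("Shift_JIS");

                String text = "\u65e5\u672c\u8a9e";            // Japanese sample text
                byte[] sjisBytes = text.getBytes(sjis);         // Unicode -> Shift-JIS
                String roundTrip = new String(sjisBytes, sjis); // and back

                System.out.println(roundTrip.equals(text));     // prints: true
            }
        }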

  • JavaFX - reduce() function to show how to pass functions as parameters

    - by Helper Method
    At the moment I'm writing a JavaFX guide for Java developers. In order to show how to pass a function to another function, I adapted the reduce() function found in Effective Java:

        function reduce(seq: Integer[], f: function(: Integer, : Integer): Integer, init: Integer) {
            var result = init;
            for (i in seq) {
                result = f(i, result);
            }
            result
        }

        def nums = [1 .. 10];
        println(reduce(nums, function(a: Integer, b: Integer) { a + b }, 0)); // prints 55
        println(reduce(nums, function(a: Integer, b: Integer) { a * b }, 1)); // prints 3628800
        // (the product needs init 1, not 0, or the result collapses to 0)

    Now I wonder if this example is not too hard for someone starting to learn JavaFX. The tutorial is targeted at programmers with a solid understanding of Java, yet I'm not quite sure about the usefulness of the example. Any ideas?

  • Unexpected result from reduce function

    - by StackedCrooked
    I would like to get the smallest element from a vector. For this I combine the reduce and min functions. However, when providing my own implementation of min I get unexpected results:

        user=> (reduce (fn [x y] (< x y) x y) [1 2 3 2 1 0 1 2])
        2
        user=> (reduce min [1 2 3 2 1 0 1 2 3])
        0

    The reduce with the standard min returns 0 as expected. However, when I provide my own implementation it returns 2. What am I doing wrong?
