Search Results

Search found 2565 results on 103 pages for 'reduce'.

Page 62/103 | < Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >

  • Avoid copying of data between user and kernel space and vice-versa

    - by bala1486
    Hello, I am developing an active messaging protocol for parallel computation that replaces TCP/IP. My goal is to decrease packet latency. Since the environment is a LAN, I can replace TCP/IP with a simpler protocol to reduce the packet latency. I am not writing any device driver; I am just trying to replace the TCP/IP stack with something simpler. Now I want to avoid copying a packet's data from user space to kernel space and vice versa. I have heard of mmap(). Is it the best way to do this? If yes, it would be nice if you could give links to some examples. I am a Linux newbie and I really appreciate your help. Thanks, Bala

    Read the article

  • Resizing Tab Bar Controller Views (iPhone dev)

    - by damiandawber
    Hello, I have an application set up where the window contains a tab bar controller and one of the tabs loads a NIB called 'ShowCaseView.xib': this file is owned by a custom ShowcaseViewController class. In the ShowcaseViewController class I have added a UIScrollView object, like so: imageScrollView = [[UIScrollView alloc] initWithFrame:[[self view] bounds]]; [[self view] addSubview:imageScrollView]; The issue I am having is that this UIScrollView object extends beneath my tab bar controller. So I have had to reduce its insets manually: #define TAB_BAR_HEIGHT 48 . . UIEdgeInsets edgeInsets = UIEdgeInsetsMake(0, 0, TAB_BAR_HEIGHT, 0); [imageScrollView setScrollIndicatorInsets:edgeInsets]; So, is it common to have to manually deduct the tab bar height from a view (whether this be by reducing the size of subviews or the View NIB in Inspector)? Is there a way that I can tell a NIB's view loaded from a tab bar to resize itself automatically to NOT sit behind the tab bar? Cheers

    Read the article

  • Before the label for each of the menu items, there is space with some color; I want to remove that space

    - by anandhinaveen
    Hello, I am using the RichFaces dropDownMenu component, which contains a set of rich menuItems. When the menu items are displayed, an extra space appears before the label of each menu item. My requirement is to not display the space before the labels and to change its color. I used the following CSS to reduce the space: .rich-menu-item-icon img { width: 0px; } .rich-menu-group-icon img { width: 0px; } but I still need to change the color in that place.

    Read the article

  • Printable Version of Google Visualizations DataTable

    - by maleki
    I have a custom routing application that takes information for a route from Google Maps. It then creates a Google Visualizations DataTable to hold all the steps in the route. My current problem is that, in order to reduce overflow for very large routes, I have enabled paging in the options of the DataTable. This leads to a not-so-printer-friendly version, because only the portion of the data that is shown in the table will be printed. The other portions of the table are loaded dynamically by the API when you click prev and next. Is there a reasonably easy way to make the DataTable printer friendly when it comes time to print, without sacrificing the ability to have paging enabled?

    Read the article

  • taking intersection of N-many lists in python

    - by user248237
    What's the easiest way to take the intersection of N-many lists in Python? If I have two lists a and b, I know I can do: a = set(a) b = set(b) intersect = a.intersection(b) but I want to do something like a & b & c & d & ... for an arbitrary set of lists (ideally without converting to a set first, but if that's the easiest / most efficient way, I can deal with that). I.e. I want to write a function intersect(*args) that will do it for arbitrarily many sets efficiently. What's the easiest way to do that? EDIT: My own solution is reduce(set.intersection, [a,b,c]) -- is that good? Thanks.
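
    A minimal sketch of such a helper, assuming Python 2.6+ (where set.intersection accepts several iterables at once, so only the first argument needs an explicit conversion):

        def intersect(*iterables):
            """Intersection of arbitrarily many iterables, returned as a set."""
            if not iterables:
                return set()
            # set.intersection happily takes plain lists as its extra arguments
            return set(iterables[0]).intersection(*iterables[1:])

        common = intersect([1, 2, 3], [2, 3, 4], [3, 2, 9])  # {2, 3}

    The reduce(set.intersection, ...) form from the EDIT works too, but it builds one intermediate set per step; passing every iterable to a single intersection call avoids that.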

    Read the article

  • Parse and charset: why my script doesn't work

    - by Rebol Tutorial
    I want to extract attribute1 and attribute3 values only. I don't understand why charset doesn't seem to work in my case to "skip" any other attributes (attribute3 is not extracted as I would like): content: {<tag attribute1="valueattribute1" attribute2="valueattribute2" attribute3="valueattribute3"> </tag> <tag attribute2="valueattribute21" attribute1="valueattribute11" > </tag> } attribute1: [{attribute1="} copy valueattribute1 to {"} thru {"}] attribute3: [{attribute3="} copy valueattribute3 to {"} thru {"}] spacer: charset reduce [tab newline #" "] letter: complement spacer to-space: [some letter | end] attributes-rule: [(valueattribute1: none valueattribute3: none) [attribute1 | none] to-space [attribute3 | none] (print valueattribute1 print valueattribute3) | [attribute3 | none] to-space [attribute1 | none] (print valueattribute3 print valueattribute1 valueattribute1: none valueattribute3: none ) | none ] rule: [any [to {<tag } thru {<tag } attributes-rule {>} to {</tag>} thru {</tag>}] to end] parse content rule output is >> parse content rule valueattribute1 none == true >>

    Read the article

  • C++0x rvalue references - lvalues-rvalue binding

    - by Doug
    This is a follow-on question to http://stackoverflow.com/questions/2748866/c0x-rvalue-references-and-temporaries In the previous question, I asked how this code should work: void f(const std::string &); //less efficient void f(std::string &&); //more efficient void g(const char * arg) { f(arg); } It seems that the move overload should probably be called because of the implicit temporary, and this happens in GCC but not MSVC (or the EDG front-end used in MSVC's Intellisense). What about this code? void f(std::string &&); //NB: No const string & overload supplied void g1(const char * arg) { f(arg); } void g2(const std::string & arg) { f(arg); } It seems, based on the answers to my previous question, that function g1 is legal (and is accepted by GCC 4.3-4.5, but not by MSVC). However, GCC and MSVC both reject g2 because of clause 13.3.3.1.4/3, which prohibits lvalues from binding to rvalue ref arguments. I understand the rationale behind this - it is explained in N2831 "Fixing a safety problem with rvalue references". I also think that GCC is probably implementing this clause as intended by the authors of that paper, because the original patch to GCC was written by one of the authors (Doug Gregor). However, I don't think this is quite intuitive. To me, (a) a const string & is conceptually closer to a string && than a const char *, and (b) the compiler could create a temporary string in g2, as if it were written like this: void g2(const std::string & arg) { f(std::string(arg)); } Indeed, sometimes the copy constructor is considered to be an implicit conversion operator. Syntactically, this is suggested by the form of a copy constructor, and the standard even mentions this specifically in clause 13.3.3.1.2/4, where the copy constructor for derived-base conversions is given a higher conversion rank than other implicit conversions: A conversion of an expression of class type to the same class type is given Exact Match rank, and a conversion of an expression of class type to a base class of that type is given Conversion rank, in spite of the fact that a copy/move constructor (i.e., a user-defined conversion function) is called for those cases. (I assume this is used when passing a derived class to a function like void h(Base), which takes a base class by value.) Motivation My motivation for asking this is something like the question asked in http://stackoverflow.com/questions/2696156/how-to-reduce-redundant-code-when-adding-new-c0x-rvalue-reference-operator-over ("How to reduce redundant code when adding new c++0x rvalue reference operator overloads"). If you have a function that accepts a number of potentially-moveable arguments, and would move them if it can (e.g. a factory function/constructor: Object create_object(string, vector<string>, string) or the like), and want to move or copy each argument as appropriate, you quickly start writing a lot of code. If the argument types are movable, then one could just write one version that accepts the arguments by value, as above. But if the arguments are (legacy) non-movable-but-swappable classes a la C++03, and you can't change them, then writing rvalue reference overloads is more efficient.
    So if lvalues did bind to rvalues via an implicit copy, then you could write just one overload like create_object(legacy_string &&, legacy_vector<legacy_string> &&, legacy_string &&) and it would more or less work like providing all the combinations of rvalue/lvalue reference overloads - actual arguments that were lvalues would get copied and then bound to the arguments; actual arguments that were rvalues would get directly bound. Questions My questions are then: Is this a valid interpretation of the standard? It seems that it's not the conventional or intended one, at any rate. Does it make intuitive sense? Is there a problem with this idea that I'm not seeing? It seems like you could get copies being quietly created when that's not exactly expected, but that's the status quo in places in C++03 anyway. Also, it would make some overloads viable when they're currently not, but I don't see it being a problem in practice. Is this a significant enough improvement that it would be worth making e.g. an experimental patch for GCC?

    Read the article

  • How does one modify the thread scheduling behavior when using Threading Building Blocks (TBB)?

    - by J Teller
    Does anyone know how to modify the thread scheduling (specifically affinity) when using TBB? Doing a high level analysis on a simple parallel-for application, it seems like TBB is specifying the underlying threads' affinity in a way that reduces performance. Specifically, the cores I'm running on have hyper-threading enabled, and it looks like TBB is affinitizing threads to the same core even if there is a different core left completely unloaded. FWIW, I realize it's likely that TBB is doing the "right thing" and that changing the threads' affinity will only reduce performance. I'd just like to experiment with it to see if that's really the case.

    Read the article

  • Gnome Desktop Icons Alignment

    - by Gabriel L. Oliveira
    Hi all. It has been a long time since I started comparing the GNOME desktop to the Windows desktop. Since I began to use Linux, I have realized that the "GNOME way" of aligning icons on the desktop is not that nice for me. Compared to Windows's way, Windows is better for me (remember, for me). I'd like to know if anyone has a tip to make GNOME desktop icon alignment behave more like Windows does. I tried reducing the icon size, which did something, but that wasn't all of it. So, could anyone suggest another tip? I like that when I put something on the Windows desktop, wherever I put the file, Windows automatically organizes it and places the icon right after the last icon (in a cascading style). Any tips?

    Read the article

  • How to scale an image (in data URI format) in JavaScript (real scaling, not using styling)

    - by 103067513055141045393
    We are capturing a visible tab in a Chrome browser (by using the extensions API chrome.tabs.captureVisibleTab) and receiving a snapshot in the data URI scheme (a Base64 encoded string). Is there a JavaScript library that can be used to scale down an image to a certain size? Currently we are styling it via CSS, but have to pay performance penalties as pictures are mostly 100 times bigger than required. An additional concern is the load on the localStorage we use to save our snapshots. Does anyone know of a way to process these data URI scheme formatted pictures and reduce their size by scaling them down? References: Data URI scheme on http://en.wikipedia.org/wiki/Data_URI_scheme Chrome Extensions API on http://code.google.com/chrome/extensions/tabs.html The "Recently Closed Tabs" Chrome Extension on http://code.google.com/p/recently-closed-tabs

    Read the article

  • top-k selection/merge

    - by tcurdt
    I have n sorted lists. These lists are quite long (300000+ tuples). Selecting the top 10 of the individual lists is of course trivial - they are right at the head of the lists. Where it gets more interesting is when I want the top 10 of all the sorted lists combined. The question is whether there is an algorithm to calculate the combined top 10 having the correct order while cutting off the long tail of the lists. The goal is to reduce the required space. And if there is: how does one find the limit where it is safe to cut? Note: The actual counts are not important. Only the order is.
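
    To illustrate the merge idea, a short Python sketch (assuming each list is already sorted in descending order, and Python 3.5+ for heapq.merge's reverse flag):

        import heapq
        from itertools import islice

        def combined_top_k(sorted_lists, k=10):
            # heapq.merge is lazy: it looks at one element per list at a time,
            # so taking only the first k merged items never touches the long tails
            return list(islice(heapq.merge(*sorted_lists, reverse=True), k))

    As for where it is safe to cut: the combined top k can never need more than the first k entries of any single list, so each list may be truncated to its own top k before merging (or before being stored).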

    Read the article

  • foreign key constraints on primary key columns - issues?

    - by zzzeek
    What are the pros/cons from a performance/indexing/data management perspective of creating a one-to-one relationship between tables using the primary key on the child as a foreign key, versus a pure surrogate primary key on the child? The first approach seems to reduce redundancy and nicely constrains the one-to-one implicitly, while the second approach seems to be favored by DBAs, even though it creates a second index: create table parent ( id integer primary key, data varchar(50) ) create table child ( id integer primary key references parent(id), data varchar(50) ) pure surrogate key: create table parent ( id integer primary key, data varchar(50) ) create table child ( id integer primary key, parent_id integer unique references parent(id), data varchar(50) ) The platforms of interest here are PostgreSQL and Microsoft SQL Server.

    Read the article

  • Self-Configuring Classes W/ Command Line Args: Pattern or Anti-Pattern?

    - by dsimcha
    I've got a program where a lot of classes have really complicated configuration requirements. I've adopted the pattern of decentralizing the configuration and allowing each class to take and parse the command line/configuration file arguments in its c'tor and do whatever it needs with them. (These are very coarse-grained classes that are only instantiated a few times, so there is absolutely no performance issue here.) This avoids having to do shotgun surgery to plumb new options I add through all the levels they need to be passed through. It also avoids having to specify each configuration option in multiple places (where it's parsed and where it's used). What are some advantages/disadvantages of this style of programming? It seems to reduce separation of concerns in that every class is now doing configuration stuff, and to make programs less self-documenting because what parameters a class takes becomes less explicit. OTOH, it seems to increase encapsulation in that it makes each class more self-contained because no other part of the program needs to know exactly what configuration parameters a class might need.
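
    A tiny Python sketch of the pattern being described (the class and option names are purely illustrative): each coarse-grained class receives the shared option mapping and pulls out, converts and defaults only the settings it cares about, so adding a new option touches exactly one class:

        class FeedFetcher:
            """Coarse-grained component that configures itself from the shared options."""
            def __init__(self, options):
                # this class's knobs are read here and nowhere else
                self.url = options["feed_url"]
                self.timeout = float(options.get("feed_timeout", 30.0))
                self.retries = int(options.get("feed_retries", 3))

        fetcher = FeedFetcher({"feed_url": "http://example.com/feed", "feed_timeout": "10"})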

    Read the article

  • How can I programmatically construct the object reference?

    - by Bryan
    Let's just say that I have three textboxes: TextBox1, TextBox2, TextBox3. Normally, if I wanted to change the text, for example, I would put TextBox1.Text = "Whatever" and so on. For what I'm doing right now I would like to do something like (TextBox & "i").Text. That obviously isn't the syntax I need to use; I'm just using it as an example of what I need to do. So how can I do something like this? The main reason I'm doing this is to reduce code with a loop. Please keep in mind that I'm not actually changing the text of the textboxes; I'm simply using that as an example to get the point across.

    Read the article

  • AES acceleration for Java

    - by chris_l
    I want to encrypt/decrypt lots of small (2-10 kB) pieces of data. The performance is OK for now: on a Core2Duo, I get about 90 MBytes/s AES256 (when using 2 threads). But I may need to improve that in the future - or at least reduce the impact on the CPU. Is it possible to use dedicated AES encryption hardware with Java (using JCE, or maybe a different API)? Would Java take advantage of special CPU features (SSE5?!) if I get a better CPU? Or are there faster JCE providers? (I tried SunJCE and BouncyCastle - no big difference.) Other possibilities?

    Read the article

  • best method of turning millions of x,y,z positions of particles into visualisation

    - by Griff
    I'm interested in the different algorithms people use to visualise millions of particles in a box. I know you can use Cloud-In-Cell, adaptive mesh, kernel smoothing, nearest-grid-point methods etc. to reduce the load in memory, but there is very little documentation online on how to do these things. I.e. I have an array with columns x, y, z: 1,2,3 4,5,6 6,7,8 ... xi,yi,zi for i = 100 million, for example. I don't want a package like Mayavi/ParaView to do it; I want to code this myself and then load the decomposed matrix into Mayavi (rather than render on the fly). My poor 8 GB MacBook explodes if I try to use the raw particle positions. Any tutorials would be appreciated.
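
    As a starting point, here is a minimal nearest-grid-point sketch with NumPy (my assumptions: the particles sit in a cubic box with one corner at the origin, and the bin count is just illustrative):

        import numpy as np

        def ngp_density(positions, box_size, bins=256):
            """Bin (N, 3) particle positions onto a coarse 3-D grid (nearest grid point)."""
            edges = np.linspace(0.0, box_size, bins + 1)
            density, _ = np.histogramdd(positions, bins=(edges, edges, edges))
            return density  # a bins**3 cube you can hand to Mayavi

    A 256^3 float64 grid is only about 134 MB, a far lighter load for Mayavi than 100 million individual points; Cloud-In-Cell is the same idea except each particle's weight is split over the 8 surrounding grid cells.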

    Read the article

  • Trying to generate proportionally cropped thumbnails at a fixed width/height with PHP GD

    - by Chuck
    I'm trying to create a Thumbnail Generator in PHP with GD that will take an image and reduce it to a fixed width/height. The square it takes from the original image (based on my fixed width/height) will come from the center of the image to give a proportionally correct thumbnail. I'll try to demonstrate that confusing sentence with some nice ASCII :} LANDSCAPE EXAMPLE: XXXXXXXXXXXXXXXX XXXXOOOOOOOOXXXX XXXXOOOOOOOOXXXX XXXXOOOOOOOOXXXX XXXXOOOOOOOOXXXX XXXXXXXXXXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX PORTRAIT EXAMPLE: XXXXXXXX XXXXXXXX OOOOOOOO OOOOOOOO OOOOOOOO OOOOOOOO XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX As you can see, it pulls out a square from the center of the image to use as a thumbnail. It seems simple, in theory, to get the height/width of the image and then calculate the offset based on my fixed width/height to get the thumbnail. But I can't seem to think of a way to code it :/ Also, how would I go about resizing the image before pulling out the center square? So the thumbnail contains a detailed image of the original rather than some zoomed in graphic?
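
    The offset arithmetic is language-agnostic; here is a small Python sketch of what I take to be the intended calculation (find the largest centered region of the source with the thumbnail's aspect ratio, then resample just that region):

        def center_crop_box(src_w, src_h, thumb_w, thumb_h):
            """Return (src_x, src_y, crop_w, crop_h): the largest centered region of
            the source image that matches the thumbnail's aspect ratio."""
            if src_w * thumb_h > src_h * thumb_w:   # source is wider: trim the sides
                crop_h = src_h
                crop_w = src_h * thumb_w // thumb_h
            else:                                   # source is taller: trim top/bottom
                crop_w = src_w
                crop_h = src_w * thumb_h // thumb_w
            return (src_w - crop_w) // 2, (src_h - crop_h) // 2, crop_w, crop_h

    Passing those four numbers as the source rectangle of GD's imagecopyresampled() (with the destination rectangle set to the fixed thumbnail size) crops and scales in one step, which also covers the resizing question at the end.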

    Read the article

  • Google App Engine Database Index

    - by fjsj
    I need to store an undirected graph in a Google App Engine database. For optimization purposes, I am thinking of using database indexes. Using Google App Engine, is there any way to define the columns of a database table to create its index? I will need some optimization, since my app uses this stored undirected graph for content-based filtering in item recommendation. Also, the recommender algorithm updates the weights of some of the graph's edges. If it is not possible to use database indexes, please suggest another method to reduce query time for the graph table. I believe my algorithm does more data retrieval operations on the graph table than write operations. PS: I am using Python.

    Read the article

  • Multiple ASP.Net Controls On One Page

    - by Duracell
    We have created a User Control for ASP.Net. At any one time, a user's profile page could contain between one and infinity of these controls. The Control.ascx file contains quite a bit of JavaScript. When the control is rendered by .Net to HTML, you notice that it prints the JavaScript for each control. This was expected. I'd like to reduce the amount of HTML output by the server to improve page load times. Normally, you could just move the JavaScript to an external file, and then you only need one extra HTTP request which will serve for all controls. But what about instances in the JavaScript where we have something like document.getElementById('<%= txtTextBox.ClientID %>'); How would the JavaScript know which user control to work with? Has anyone done something like this, or is the solution staring me in the face?

    Read the article

  • How can I Query only __key__ on a Google Appengine PolyModel child?

    - by Gabriel
    So the situation is: I want to optimize my code a bit for counting records. I have a parent Model class Base, a PolyModel class Entry, and a child class of Entry called Article. How would I query Article.key so I can reduce the query load but only get the Article count? My first thought was to use: q = db.GqlQuery("SELECT __key__ from Article where base = :1", i_base) but it turns out GqlQuery doesn't like that, because articles are actually stored in a table called Entry. Would it be possible to query the class attribute? Something like: q = db.GqlQuery("select __key__ from Entry where base = :1 and :2 in class", i_base, 'Article') Neither of which works. Turns out the answer is even easier, but I am going to finish this question because I looked everywhere for this: q = db.GqlQuery("select __key__ from Entry where base = :1 and class = :2", i_base, 'Article')
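
    For reference, the same keys-only lookup can be sketched with the ext.db query API instead of GQL, assuming the Article PolyModel subclass from the question (querying the subclass adds the class filter on Entry automatically):

        # Article is the PolyModel subclass from the question's models
        # keys_only asks the datastore for keys instead of full entities,
        # which is cheaper when all you want is a count
        q = Article.all(keys_only=True).filter('base =', i_base)
        article_count = q.count(1000)  # count() stops at the given limit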

    Read the article

  • Remove unnecessary svn:mergeinfo properties

    - by LeonZandman
    When I merge stuff in my repository Subversion wants to add/change a lot of svn:mergeinfo properties to files that are totally unrelated to the things that I want to merge. Questions about this behaviour have been asked before here on Stackoverflow.com, as you can read here and here. From what I understand from the topics mentioned above it looks like a lot of files in my repository have explicit svn:mergeinfo properties on them, when they shouldn't. The advice is to reduce the amount and only put those properties on relevant files/folders. So now my question: how can I easily remove those unneeded properties? I'm using TortoiseSVN, but am reluctant to manually check/fix hundreds of files. Is there an easier way to remove those unnecessary svn:mergeinfo properties? P.S. I'm not looking for C++ SVN API code.

    Read the article

  • Uploadify and Image Compression

    - by Ilya Biryukov
    Hi, I am using Uploadify on one of my client's web sites to allow them to upload a large number of pictures at once to their photo gallery. I have been seeing issues lately. They seem to upload large photographs (3 MB and above). I am wondering, is it possible to compress the pictures (reduce their size) on the client side, instead of doing it on the server (just like Facebook does)? I know I could easily do it on the server, but I am working on another project right now where I am expecting a large flow of photo uploads. It would require a significant amount of CPU time to process them all. So I thought I'd ask about client-side processing. Thanks.

    Read the article

  • StringBuilder/StringBuffer vs. "+" Operator

    - by matt.seil
    I'm reading "Better, Faster, Lighter Java" (by Bruce Tate and Justin Gehtland) and am familiar with the readability requirements in agile type teams, such as what Robert Martin discusses in his clean coding books. On the team I'm on now, I've been told explicitly not to use the "+" operator because it creates extra (and unnecessary) string objects during runtime. But this article: http://www.ibm.com/developerworks/java/library/j-jtp01274.html Written back in '04 talks about how object allocation is about 10 machine instructions. (essentially free) It also talks about how the GC also helps to reduce costs in this environment. What is the actual performance tradeoffs between using "+," "StringBuilder," or "StringBuffer?" (In my case it is StringBuffer only as we are limited to Java 1.4.2.) StringBuffer to me results in ugly, less readable code, as a couple of examples in Tate's book demonstrates. And StringBuffer is thread-synchronized which seems to have its own costs that outweigh the "danger" in using the "+" operator. Thoughts/Opinions?

    Read the article

  • Script Speed vs Memory Usage

    - by Doug Neiner
    I am working on an image generation script in PHP and have gotten it working two ways. One way is slow but uses a limited amount of memory, the second is much faster, but uses 6x the memory. There is no leakage in either script (as far as I can tell). In a limited benchmark, here is how they performed:

        --------------------------------------------
        METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
        --------------------------------------------
        One    | 65.626     | 540,036     | 200
        Two    | 20.207     | 3,269,600   | 200
        --------------------------------------------

    And here is the average of the previous numbers (if you don't want to do your own math):

        --------------------------------------------
        METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
        --------------------------------------------
        One    | 0.328      | 540,036     | 1
        Two    | 0.101      | 3,269,600   | 1
        --------------------------------------------

    Which method should I use and why? I anticipate this being used by a high volume of users, with each user making 10-20 requests to this script during a normal visit. I am leaning toward the faster method because, though it uses more memory, it runs in a third of the time and would reduce the number of concurrent requests.

    Read the article

  • GRaaarrgghhh.... content not displaying

    - by tony noriega
    OK, I have this page on my DEV server and it works just fine... when I publish to PROD it just doesn't display... can anyone see WTF is going on...? I have re-published, modified, stripped down... etc... and cannot get it to work... https://www.bcidaho.com/about_us/reduce-healthcare-costs.asp There should be a content slider there that shows 10 DIVs of content... and it's about to make me smash my keyboard. thanx

    Read the article
