Search Results

Search found 4603 results on 185 pages for 'lower bound'.


  • How can I change the visibility of elements inside a DataTemplate when a row is selected in a Silverlight DataGrid?

    - by miketrash
    I'm using the MVVM pattern. I've bound my items and I want to only show the edit button when a row is selected in the datagrid. It appears to be possible with triggers in WPF, but we don't have triggers in Silverlight. I tried a TemplatedParent binding but I'm not sure what the TemplatedParent is in this case. We don't have RelativeSource ancestor bindings in Silverlight either. At this point I'm going to look at a solution using the code-behind...

      <data:DataGrid.Columns>
          <data:DataGridTemplateColumn IsReadOnly="True" Header="Name" Width="300">
              <data:DataGridTemplateColumn.CellTemplate>
                  <DataTemplate>
                      <Grid>
                          <TextBlock x:Name="textBlock" Text="{Binding Name, Mode=OneWay}" VerticalAlignment="Center" Margin="4,4,0,4"/>
                          <Button Margin="1,1,4,1" HorizontalAlignment="Right" VerticalAlignment="Center" Padding="7,4" Content="Edit" />
                      </Grid>
                  </DataTemplate>
              </data:DataGridTemplateColumn.CellTemplate>
          </data:DataGridTemplateColumn>
      </data:DataGrid.Columns>
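    One MVVM-friendly fallback is a hedged sketch like the following: expose an IsSelected flag on the row view model, mirror the DataGrid selection into it, and bind the button's Visibility through a bool-to-Visibility converter. The RowViewModel type, the IsSelected property and the converter key are illustrative assumptions, not part of the original code.

      using System;
      using System.Globalization;
      using System.Windows;
      using System.Windows.Controls;
      using System.Windows.Data;

      // Assumes the row view model implements INotifyPropertyChanged and exposes bool IsSelected.
      public class BoolToVisibilityConverter : IValueConverter
      {
          public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
          {
              return (value is bool && (bool)value) ? Visibility.Visible : Visibility.Collapsed;
          }

          public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
          {
              throw new NotSupportedException();
          }
      }

      // Code-behind (or an attached behavior): mirror the grid selection into the items.
      private void DataGrid_SelectionChanged(object sender, SelectionChangedEventArgs e)
      {
          foreach (RowViewModel item in e.RemovedItems) item.IsSelected = false;
          foreach (RowViewModel item in e.AddedItems) item.IsSelected = true;
      }

    and in the cell template:

      <Button Content="Edit" Visibility="{Binding IsSelected, Converter={StaticResource BoolToVisibility}}" ... />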

    Read the article

  • asp.net 4.0 webforms - how to keep ContentPlaceHolder1_ out of client id's in a simple way?

    - by James Manning
    I'm attempting to introduce master pages to an existing webforms site that's avoided using them because of client id mangling in the past (and me not wanting to deal with the mangling and doing <%= foo.ClientID %> everywhere :)

    GOAL: use 'static' id values (whatever is in the server control's id attribute) except for data-bound / repeating controls, which would break for those cases and therefore need suffixes or whatever to differentiate (basically, Predictable).

    Now that the site has migrated to ASP.NET 4.0, I first attempted to use a ClientIDMode of Static (in the web.config), but that broke too many places doing repeating controls (checkboxes inside gridviews, for instance) since they all resulted in the same id. So, I then tried Predictable (again, just in the web.config) so that the repeating controls wouldn't have conflicting id's, and it works well except that the master page content placeholder (which is indeed a naming container) is still reflected in the resulting client id's (for instance, ContentPlaceHolder1_someCheckbox).

    Certainly I could leave the web.config setting as Static and then go through all the databound/repeating controls and switch them to Predictable, but I'm hoping there's some easier/simpler way to get that effect without having to scatter ClientIDMode attributes in those N number of places (or extend all those databound controls with my own usercontrol that just sets ClientIDMode, or whatever). I even thought of leaving web.config set to Static and doing a master or base page handler (PreInit? not sure if that would work or not) that would walk Controls with OfType<INamingContainer>() (might be a better choice on the type, but that seems like a good starting choice looking at Repeater and GridView) and then set those to Predictable, so I'd get Static for all my 'normal' things outside of repeating controls but not have to deal with Static inside things like GridView/Repeater/etc. I don't see any way to mark the content placeholder such that it 'opts out' of being included in child id's - setting the ID of the placeholder to empty/blank doesn't work as it's a required attribute :) At that point I figured there was a better/simpler way that I was missing and decided to ask on SO :)

    Edit: I thought about changing all my 'fetch by id' jQuery calls from $('#foo') to fetch_by_id('foo') and then having that function return the 'right one' by checking $('#foo').length and then $('#ContentPlaceHolder1_foo').length (and maybe other patterns), or even just have it return $('#foo, #ContentPlaceHolder1_foo') (again, potentially other patterns), but changing all the places I fetch elements by id seemed pretty ugly too, and I'd like to avoid that abstraction layer if possible to do so easily :)
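    A hedged, untested sketch of the "walk the control tree" idea described above: keep ClientIDMode="Static" in web.config and have a base page (or the master) flip databound containers back to Predictable so repeated rows still get unique ids. Which control types to flip, and hooking OnLoad, are assumptions rather than a verified recipe.

      using System;
      using System.Web.UI;
      using System.Web.UI.WebControls;

      public class StaticIdBasePage : Page
      {
          protected override void OnLoad(EventArgs e)
          {
              base.OnLoad(e);
              MakeDataboundContainersPredictable(this);
          }

          // Recursively switch repeating/databound controls (and, via ClientIDMode.Inherit,
          // everything inside them) to Predictable; all other controls stay Static.
          private static void MakeDataboundContainersPredictable(Control root)
          {
              foreach (Control child in root.Controls)
              {
                  if (child is BaseDataBoundControl || child is Repeater)
                      child.ClientIDMode = ClientIDMode.Predictable;
                  MakeDataboundContainersPredictable(child);
              }
          }
      }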

    Read the article

  • Java: how to have try-catch as conditional in for-loop?

    - by HH
    I know how to solve the problem by comparing size to an upper bound, but I want a conditional that looks for an exception. If an exception occurs in the conditional, I want to exit.

      import java.io.*;
      import java.util.*;

      public class listTest{
          public static void main(String[] args){
              Stack<Integer> numbs=new Stack<Integer>();
              numbs.push(1);
              numbs.push(2);
              for(int count=0,j=0; try{((j=numbs.pop())<999)}catch(Exception e){break;} && !numbs.isEmpty(); ){
                  System.out.println(j);
              }
              // I waited for 1 to be printed, not 2.
          }
      }

    Some errors:

      javac listTest.java
      listTest.java:10: illegal start of expression
      for(int count=0,j=0;try{((j=numbs.pop())<999)}catch(Exception e){break;}&&
                          ^
      listTest.java:10: illegal start of expression
      for(int count=0,j=0;try{((j=numbs.pop())<999)}catch(Exception e){break;}&&
                          ^
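    A try/catch is a statement, not an expression, so it cannot appear in a for-loop condition at all. One way to keep the "exit when pop() fails" behaviour (a sketch, not the only option) is to move the pop into the body of an infinite loop and break out of it:

      import java.util.Stack;

      public class ListTest {
          public static void main(String[] args) {
              Stack<Integer> numbs = new Stack<Integer>();
              numbs.push(1);
              numbs.push(2);
              while (true) {
                  int j;
                  try {
                      j = numbs.pop();          // throws EmptyStackException when empty
                  } catch (RuntimeException e) {
                      break;                    // the "exception as loop exit" behaviour
                  }
                  if (j >= 999) break;
                  System.out.println(j);        // prints 2, then 1 (stack order)
              }
          }
      }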

    Read the article

  • Conditional column values in NSTableView?

    - by velocityb0y
    I have an NSTableView that binds via an NSArrayController to an NSMutableArray. What's in the array are derived classes; the first few columns of the table are bound to properties that exist on the base class. That all works fine.

    Where I'm running into a problem is a column that should only be populated if the row maps to one specific subclass. The property that column is meant to display only exists in that subclass, since it makes no sense in terms of the base class. The user will know, from the first two columns, why the third column's cell is populated/editable or not. The binding on the third column's value is on arrangedObjects, with a model path of something like "foo.name", where foo is the property on the subclass. However, this doesn't work, as the other subclasses in the hierarchy are not key-value compliant for foo.

    It seems like my only choice is to have foo be a property on the base class so everybody responds to it, but this clutters up the interfaces of the model objects. Has anyone come up with a clean design for this situation? It can't be uncommon. (I'm a relative newcomer to Cocoa and I'm just learning the ins and outs of bindings.)
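    One approach that keeps foo off the base class interface (a hedged sketch; the property name is taken from the question): let the base model answer nil for the unknown key, so the binding simply shows an empty cell for rows of the other subclasses.

      // In the base model class (an NSObject subclass). KVC calls this when a key has no
      // accessor; returning nil instead of raising keeps the arrangedObjects binding quiet.
      - (id)valueForUndefinedKey:(NSString *)key {
          if ([key isEqualToString:@"foo"]) {
              return nil;
          }
          return [super valueForUndefinedKey:key];
      }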

    Read the article

  • How can I obtain the local TCP port and IP Address of my client program?

    - by Dr Dork
    Hello! I'm prepping for a simple work project and am trying to familiarize myself with the basics of socket programming in a Unix dev environment. At this point, I have some basic server side code and client side code setup to communicate. Currently, my client code successfully connects to the server code and the server code sends it a test message, then both quit out. Perfect! That's exactly what I wanted to accomplish. Now I'm playing around with the functions used to obtain info about the two environments (server and client). I'd like to obtain the local IP address and dynamically assigned TCP port of the client. The function I've found to do this is getsockname()...

      //setup the socket
      if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) {
          perror("client: socket");
          continue;
      }

      //Retrieve the locally-bound name of the specified socket
      //and store it in the sockaddr structure
      sa_len = sizeof(sa);
      getsock_check = getsockname(sockfd, (struct sockaddr *)&sa, (socklen_t *)&sa_len);

      if (getsock_check == -1) {
          perror("getsockname");
          exit(1);
      }

      printf("Local IP address is: %s\n", inet_ntoa(sa.sin_addr));
      printf("Local port is: %d\n", (int) ntohs(sa.sin_port));

    but the output is always zero...

      Local IP address is: 0.0.0.0
      Local port is: 0

    does anyone see anything I might be or am definitely doing wrong? Thanks so much in advance for all your help!
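    A hedged sketch of one likely cause: the kernel assigns the local address and ephemeral port only once the socket is bound or connected, so getsockname() reports zeros before that. Querying after connect() succeeds (variable names reused from the question's fragment) would look roughly like:

      if (connect(sockfd, p->ai_addr, p->ai_addrlen) == -1) {
          perror("client: connect");
          exit(1);
      }

      sa_len = sizeof(sa);
      if (getsockname(sockfd, (struct sockaddr *)&sa, (socklen_t *)&sa_len) == -1) {
          perror("getsockname");
          exit(1);
      }

      printf("Local IP address is: %s\n", inet_ntoa(sa.sin_addr));
      printf("Local port is: %d\n", (int) ntohs(sa.sin_port));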

    Read the article

  • CIE XYZ colorspace: do I have RGBA or XYZA?

    - by Tronic
    I plan to write a painting program based on linear combinations of the xy plane points (0,1), (1,0) and (0,0). Such a system works identically to RGB, except that the primaries are not within the gamut but at the corners of a triangle that encloses the entire gamut. I have seen the three points being referred to as X, Y and Z (upper case) somewhere, but I cannot find the page anymore (I marked them on the picture myself). My pixel format stores the intensity of each of those three components the same way as RGB does, together with an alpha value. This allows using pretty much any image manipulation operation designed for RGBA without modifying the code.

    What is my format called? Is it XYZA, RGBA or something else? Google doesn't seem to know of XYZA, and RGBA will get confused with sRGB + alpha (which I also need to use in the same program). Notice that the primaries X, Y and Z and their intensities have little to do with the x, y and z coordinates (lower case) that are more commonly used.

    Read the article

  • Designing small comparable objects

    - by Thomas Ahle
    Intro: Consider you have a list of key/value pairs:

      (0,a) (1,b) (2,c)

    You have a function that inserts a new value between two current pairs, and you need to give it a key that keeps the order:

      (0,a) (0.5,z) (1,b) (2,c)

    Here the new key was chosen as the average of the keys of the bounding pairs. The problem is that your list may have millions of inserts. If these inserts are all put close to each other, you may end up with keys such as 2^(-1000000), which are not easily storable in any standard or special number class.

    The problem: How can you design a system for generating keys that:
    - gives the correct result (larger/smaller than) when compared to all the rest of the keys, and
    - takes up only O(log n) memory (where n is the number of items in the list)?

    My tries: First I tried different number classes, like fractions and even polynomials, but I could always find examples where the key size would grow linearly with the number of inserts. Then I thought about saving pointers to a number of other keys, and saving the lower/greater-than relationship, but that would always require at least O(sqrt(n)) memory and time for comparison.

    Extra info: Ideally the algorithm shouldn't break when pairs are deleted from the list.

    Read the article

  • Advice Please: SQL Server Identity vs Unique Identifier keys when using Entity Framework

    - by c.batt
    I'm in the process of designing a fairly complex system. One of our primary concerns is supporting SQL Server peer-to-peer replication. The idea is to support several geographically separated nodes. A secondary concern has been using a modern ORM in the middle tier. Our first choice has always been Entity Framework, mainly because the developers like to work with it. (They love the LINQ support.)

    So here's the problem: With peer-to-peer replication in mind, I settled on using uniqueidentifier with a default value of newsequentialid() for the primary key of every table. This seemed to provide a good balance between avoiding key collisions and reducing index fragmentation. However, it turns out that the current version of Entity Framework has a very strange limitation: if an entity's key column is a uniqueidentifier (GUID) then it cannot be configured to use the default value (newsequentialid()) provided by the database. The application layer must generate the GUID and populate the key value.

    So here's the debate:
    1. abandon Entity Framework and use another ORM:
       - use NHibernate and give up LINQ support, or
       - use linq2sql and give up future support (not to mention get bound to SQL Server on the DB)
    2. abandon GUIDs and go with another PK strategy
    3. devise a method to generate sequential GUIDs (COMBs?) at the application layer

    I'm leaning towards option 1 with linq2sql (my developers really like linq2[stuff]) and option 3. That's mainly because I'm somewhat ignorant of alternate key strategies that support the replication scheme we're aiming for while also keeping things sane from a developer's perspective. Any insight or opinion would be greatly appreciated.
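    For option 3, a hedged sketch of an application-layer COMB generator. It relies on SQL Server treating the last six bytes of a uniqueidentifier as the most significant when ordering; treat it as an illustration rather than a drop-in library.

      using System;

      public static class CombGuid
      {
          // Overwrite the last 6 bytes of a random GUID with a millisecond timestamp,
          // so values generated on one node are roughly ascending for index purposes
          // while the remaining 10 random bytes keep cross-node collisions unlikely.
          public static Guid NewCombGuid()
          {
              byte[] guidBytes = Guid.NewGuid().ToByteArray();

              long milliseconds = DateTime.UtcNow.Ticks / TimeSpan.TicksPerMillisecond;
              byte[] timeBytes = BitConverter.GetBytes(milliseconds);
              if (BitConverter.IsLittleEndian)
                  Array.Reverse(timeBytes);                  // big-endian: low 48 bits last

              Array.Copy(timeBytes, 2, guidBytes, 10, 6);    // bytes 10..15 = the "node" part
              return new Guid(guidBytes);
          }
      }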

    Read the article

  • Finding Local IP via Socket Creation / getsockname

    - by BSchlinker
    I need to get the IP address of a system within C++. I followed the logic and advice of another comment on here and created a socket and then utilized getsockname to determine the IP address which the socket is bound to. However, this doesn't appear to work (code below). I'm receiving an invalid IP address (58.etc) when I should be receiving a 128.etc. Any ideas?

      string Routes::systemIP(){
          // basic setup
          int sockfd;
          char str[INET_ADDRSTRLEN];
          sockaddr* sa;
          socklen_t* sl;
          struct addrinfo hints, *servinfo, *p;
          int rv;

          memset(&hints, 0, sizeof hints);
          hints.ai_family = AF_UNSPEC;
          hints.ai_socktype = SOCK_DGRAM;

          if ((rv = getaddrinfo("4.2.2.1", "80", &hints, &servinfo)) != 0) {
              fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv));
              return "1";
          }

          // loop through all the results and make a socket
          for(p = servinfo; p != NULL; p = p->ai_next) {
              if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) {
                  perror("talker: socket");
                  continue;
              }
              break;
          }

          if (p == NULL) {
              fprintf(stderr, "talker: failed to bind socket\n");
              return "2";
          }

          // get information on the local IP from the socket we created
          getsockname(sockfd, sa, sl);

          // convert the sockaddr to a sockaddr_in via casting
          struct sockaddr_in *sa_ipv4 = (struct sockaddr_in *)sa;

          // get the IP from the sockaddr_in and print it
          inet_ntop(AF_INET, &(sa_ipv4->sin_addr.s_addr), str, INET_ADDRSTRLEN);
          printf("%s\n", str);

          // return the IP
          return str;
      }
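    Two things stand out, shown here as a hedged sketch that reuses the question's names. First, getsockname() needs a real sockaddr buffer and a socklen_t variable; sa and sl above are uninitialized pointers, so the cast reads garbage (hence the 58.x address). Second, the kernel only assigns a local address once the socket is bound or connected, so connect() the UDP socket first.

      struct sockaddr_storage local;
      socklen_t local_len = sizeof(local);
      memset(&local, 0, sizeof(local));

      if (connect(sockfd, p->ai_addr, p->ai_addrlen) == -1) {   // picks a local address/port
          perror("talker: connect");
          return "3";
      }
      if (getsockname(sockfd, (struct sockaddr *)&local, &local_len) == -1) {
          perror("getsockname");
          return "4";
      }

      struct sockaddr_in *sa_ipv4 = (struct sockaddr_in *)&local;
      inet_ntop(AF_INET, &(sa_ipv4->sin_addr), str, INET_ADDRSTRLEN);
      return str;   // returned by value as a std::string, so the local buffer is copied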

    Read the article

  • How to generate a number in arbitrary range using random()={0..1} preserving uniformness and density?

    - by psihodelia
    Generate a random number in range [x..y] where x and y are any arbitrary floating point numbers. Use function random(), which returns a random floating point number in range [0..1] from P uniformly distributed numbers (call it "density"). Uniform distribution must be preserved and P must be scaled as well.

    I think there is no easy solution for such a problem. To simplify it a bit, I ask you how to generate a number in the interval [-0.5 .. 0.5], then in [0 .. 2], then in [-2 .. 0], preserving uniformness and density. Thus, for [0 .. 2] it must generate a random number from P*2 uniformly distributed numbers.

    The obvious simple solution random() * (x - y) + y will not generate all possible numbers, because of the lower density for all abs(x-y)>1.0 cases. Many possible values will be missed. Remember that random() returns only a number from P possible numbers. Then, if you multiply such a number by Q, it will give you only one of P possible values, scaled by Q, but you have to scale the density P by Q as well.

    Read the article

  • Cache data in SQL CE database

    - by user93422
    Background: I have an SQL CE database that is constantly updated (every second). I have a (web) application that allows a user to look at the data in real time. At some point a user can click a "take a snapshot" button, and it will open the snapshot in a different window. On that form there are "print" and "download" buttons that will either generate a page for printing or stream the data as a CSV file - but the same data snapshot has to be used, i.e. I can't go to the DB to get the latest data for that.

    Details:
    - The SQL CE database is exposed through a WCF web service.
    - A snapshot consists of up to 500 records, 10 columns each.
    - An expiration time on the snapshot of 2 hours is sufficient.
    - It is a low-traffic application, so I don't expect more than a few (5) connections at the same time.
    - Losing a snapshot is not a big deal; the user can simply generate a new one.
    - The database is accessed by a self-hosted WCF web service using Linq-to-SQL.
    - The web site is ASP.NET MVC hosted on UltiDev Cassini.
    - The database and web site will most likely be on the same box when deployed.
    - The entire app is intranet bound.

    Problem: I need to cache the snapshot of the data at the moment the user pressed the "take a snapshot" button, so that I can use the same data to generate the print page or generate a file for download.

    Solution 1: Each time there is a need to generate a snapshot, I will create a table in the database. Since there are no temp tables in SQL CE, I will need to clean it up myself.

    Solution 2: Cache the snapshot in-memory on either the DB server or the web server.

    Question: Is there anything wrong with the proposed solutions? Any different solution suggestions?
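    If the site runs on .NET 4, one in-memory variant of Solution 2 is System.Runtime.Caching.MemoryCache with a 2-hour absolute expiration, keyed by a snapshot id handed back to the browser. This is a hedged sketch; the SnapshotCache name, the row type and the key scheme are illustrative assumptions.

      using System;
      using System.Collections.Generic;
      using System.Runtime.Caching;

      public static class SnapshotCache
      {
          private static readonly MemoryCache Cache = MemoryCache.Default;

          // Store the snapshot rows and return the key the print/download actions will use.
          public static string Store(IList<IDictionary<string, object>> rows)
          {
              string key = Guid.NewGuid().ToString("N");
              Cache.Set(key, rows, DateTimeOffset.UtcNow.AddHours(2));
              return key;
          }

          // Returns null if the snapshot expired; the user just takes a new one.
          public static IList<IDictionary<string, object>> Get(string key)
          {
              return Cache.Get(key) as IList<IDictionary<string, object>>;
          }
      }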

    Read the article

  • Ogre material scripts; how do I give a technique multiple lod_indexes?

    - by BlueNovember
    I have an Ogre material script that defines 4 rendering techniques: 1 using GLSL shaders, then 3 others that just use textures of different resolutions. I want to use the GLSL shader unconditionally if the graphics card supports it, and the other 3 textures depending on camera distance. At the moment my script is:

      material foo
      {
          lod_distances 1600 2000

          technique shaders
          {
              lod_index 0
              lod_index 1
              lod_index 2
              //various passes here
          }

          technique high_res
          {
              lod_index 0
              //various passes here
          }

          technique medium_res
          {
              lod_index 1
              //various passes here
          }

          technique low_res
          {
              lod_index 2
              //various passes here
          }
      }

    Extra information: the Ogre manual says:

      Increasing indexes denote lower levels of detail.
      You can (and often will) assign more than one technique to the same LOD index; what this
      means is that OGRE will pick the best technique of the ones listed at the same LOD index.
      OGRE determines which one is 'best' by which one is listed first.

    Currently, on a machine supporting the GLSL version I am using, the script behaves as follows:

      Camera > 2000         : Shader technique
      1600 < Camera <= 2000 : Medium
      Camera <= 1600        : High

    If I change the lod order in the shaders technique to

      lod_index 2
      lod_index 1
      lod_index 0

    the behaviour becomes

      Camera > 2000         : Low
      1600 < Camera <= 2000 : Medium
      Camera <= 1600        : Shader

    implying only the last lod_index is used. If I change it to

      lod_index 0 1 2

    it shouts at me:

      Compiler error: fewer parameters expected in foo.material(#): lod_index only supports 1 argument

    So how do I specify a technique to have 3 lod_indexes? Duplication works:

      technique shaders
      {
          lod_index 0
          //various passes here
      }

      technique shaders1
      {
          lod_index 1
          //passes repeated here
      }

      technique shaders2
      {
          lod_index 2
          //passes repeated here
      }

    ...but it's ugly.

    Read the article

  • How to (unit-)test data intensive PL/SQL application

    - by doom2.wad
    Our team is willing to unit-test new code written under a running project extending an existing huge Oracle system. The system is written solely in PL/SQL and consists of thousands of tables and hundreds of stored procedure packages, mostly getting data from tables and/or inserting/updating other data. Our extension is not an exception: most functions return data from a quite complex SELECT statement over many mutually bound tables (with a little added logic before returning it), or transform one complicated data structure into another (complicated in a different way).

    What is the best approach to unit-test such code? There are no unit tests for the existing code base. To make things worse, only packages, triggers and views are source-controlled; table structures (including "alter table" stuff) and necessary data transformations are deployed via a channel other than version control. There is no way to change this within our project's scope.

    Maintaining a testing data set seems to be impossible, since new code is deployed to the production environment on a weekly basis, usually without prior notice, often changing data structure (add a column here, remove one there).

    I'd be glad for any suggestion or reference to help us. Some team members tend to be tired of figuring out how to even start, because our experience with unit-testing does not cover PL/SQL data-intensive legacy systems (only those "from-the-book" greenfield Java projects).
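    A hedged sketch of a framework-free starting point that copes with an uncontrolled schema: each test stages its own rows inside a savepoint, calls the code under test, compares, and rolls back so the shared schema is left exactly as it was. All table, package and procedure names below are hypothetical.

      DECLARE
        v_expected NUMBER := 300;
        v_actual   NUMBER;
      BEGIN
        SAVEPOINT before_test;

        -- stage a tiny, self-contained data set with impossible key values
        INSERT INTO orders (order_id, customer_id) VALUES (-1, -1);
        INSERT INTO order_lines (order_id, amount) VALUES (-1, 100);
        INSERT INTO order_lines (order_id, amount) VALUES (-1, 200);

        v_actual := order_pkg.total_for_order(-1);   -- code under test

        IF v_actual = v_expected THEN
          DBMS_OUTPUT.PUT_LINE('PASS total_for_order');
        ELSE
          DBMS_OUTPUT.PUT_LINE('FAIL total_for_order: got ' || v_actual);
        END IF;

        ROLLBACK TO before_test;                     -- leave the shared schema untouched
      END;
      /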

    Read the article

  • Detecting an online poker cheat

    - by Tom Gullen
    It recently emerged on a large poker site that some players were possibly able to see all opponents' cards as they played, by exploiting a security vulnerability that was discovered. A naïve cheater would win at an incredibly fast rate, and these cheats are usually caught very quickly; if not caught quickly, they are easy to detect through a quick scan of their hand histories.

    The more difficult problem occurs when the cheater exhibits intelligence: bluffing in spots where they are bound to be called, calling river bets with the worst hands. The basic premise is that they lose pots on purpose to disguise their ability to see other players' cards, and they win at a reasonably realistic rate.

    Given:
    - a data set of millions of verified and complete-information hand histories
    - theoretically unlimited computer power
    - assume the game is No Limit Hold'em, although suggestions on Omaha or limit poker may be beneficial

    How could we reasonably accurately classify these cheaters? The original 2+2 thread appeals for ideas, and I thought that the SO community might have some useful suggestions. It's an interesting problem also because it is current, and has real application in bettering the world if someone finds a creative solution, as there is a good chance genuine players will have funds refunded to them when identified cheaters are discovered.

    Read the article

  • How to bind to another event after ajax call in jquery

    - by robert
    Hi, I'm creating a graph using the flot javascript library. I have enabled clickable events and the div is bound to the plotclick event. So, when a datapoint is clicked, an ajax call is made and the result is used to replace the current div. After this I need to bind the div to a new event. I tried to call bind, but it is still bound to the old callback. When I call unbind and bind, the new callback is not called.

      var handleTeacherClick = function( event, pos, item ) {
          if( typeof item != "undefined" && item ) {
              var user_id = jQuery( 'input[name="id' + item.datapoint[0] + '"]' ).val();
              jQuery.ajax({
                  type: 'POST',
                  url: BASEPATH + 'index.php/ajax/home/latest',
                  data: { "user_id": user_id },
                  dataType: 'json',
                  success: function ( result ) {
                      jQuery.plot(jQuery('#stats_prog'), result.progress_data, result.progress_options);
                      jQuery.plot(jQuery('#stats_perf'), result.performance_data, result.performance_options);
                      jQuery('.stats_title').
                          html('<span class="stats_title">'+
                               ' >> Chapter '+Math.ceil(item.datapoint[0])+'</span>');
                      jQuery('#stats_prog')./*unbind("plotclick").*/
                          bind('plotclick', statClickHandler );
                      jQuery('#stats_perf')./*unbind("plotclick"). */
                          bind( 'plotclick', statClickHandler );
                  },
              });
          }
      }

    Read the article

  • [Cocoa] Placing an NSTimer in a separate thread

    - by ndg
    I'm trying to set up an NSTimer in a separate thread so that it continues to fire when users interact with the UI of my application. This seems to work, but Leaks reports a number of issues - and I believe I've narrowed it down to my timer code.

    Currently what's happening is that updateTimer tries to access an NSArrayController (timersController) which is bound to an NSTableView in my application's interface. From there, I grab the first selected row and alter its timeSpent column. From reading around, I believe what I should be trying to do is execute the updateTimer function on the main thread, rather than in my timer's secondary thread.

    I'm posting here in the hope that someone with more experience can tell me if that's the only thing I'm doing wrong. Having read Apple's documentation on threading, I've found it an overwhelmingly large subject area.

      NSThread *timerThread = [[[NSThread alloc] initWithTarget:self
                                                       selector:@selector(startTimerThread)
                                                         object:nil] autorelease];
      [timerThread start];

      -(void)startTimerThread
      {
          NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
          NSRunLoop *runLoop = [NSRunLoop currentRunLoop];

          activeTimer = [[NSTimer scheduledTimerWithTimeInterval:1.0
                                                          target:self
                                                        selector:@selector(updateTimer:)
                                                        userInfo:nil
                                                         repeats:YES] retain];
          [runLoop run];
          [pool release];
      }

      -(void)updateTimer:(NSTimer *)timer
      {
          NSArray *selectedTimers = [timersController selectedObjects];
          id selectedTimer = [selectedTimers objectAtIndex:0];
          NSNumber *currentTimeSpent = [selectedTimer timeSpent];
          [selectedTimer setValue:[NSNumber numberWithInt:[currentTimeSpent intValue]+1]
                           forKey:@"timeSpent"];
      }

      -(void)stopTimer
      {
          [activeTimer invalidate];
          [activeTimer release];
      }
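    If the goal is just to keep the bindings/KVO work off the background thread, a hedged sketch of the idea mentioned above (push only the controller access back to the main thread; the helper method name is hypothetical):

      -(void)updateTimer:(NSTimer *)timer
      {
          // NSArrayController and anything bound to the UI should only be touched on the main thread.
          [self performSelectorOnMainThread:@selector(incrementSelectedTimer)
                                 withObject:nil
                              waitUntilDone:NO];
      }

      -(void)incrementSelectedTimer
      {
          id selectedTimer = [[timersController selectedObjects] objectAtIndex:0];
          NSNumber *current = [selectedTimer valueForKey:@"timeSpent"];
          [selectedTimer setValue:[NSNumber numberWithInt:[current intValue] + 1]
                           forKey:@"timeSpent"];
      }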

    Read the article

  • Closures in Ruby

    - by Isaac Cambron
    I'm kind of new to Ruby and some of the closure logic has me confused. Consider this code:

      array = []
      for i in (1..5)
        array << lambda {i}
      end
      array.map{|f| f.call}
      => [5, 5, 5, 5, 5]

    This makes sense to me because i is bound outside the loop, so the same variable is captured by each trip through the loop. It also makes sense to me that using an each block can fix this:

      array = []
      (1..5).each{|i| array << lambda {i}}
      array.map{|f| f.call}
      => [1, 2, 3, 4, 5]

    ...because i is now being declared separately for each time through. But now I get lost: why can't I also fix it by introducing an intermediate variable?

      array = []
      for i in 1..5
        j = i
        array << lambda {j}
      end
      array.map{|f| f.call}
      => [5, 5, 5, 5, 5]

    Because j is new each time through the loop, I'd think a different variable would be captured on each pass. For example, this is definitely how C# works, and how - I think - Lisp behaves with a let. But in Ruby, not so much. It almost looks like = is aliasing the variable instead of copying the reference, but that's just speculation on my part. What's really happening?
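    What's happening, in short: for and while loops do not introduce a new scope in Ruby, so j is created once in the enclosing scope and merely reassigned on every pass, and every lambda closes over that one variable. A block does introduce a fresh scope per call, so moving the intermediate assignment into a block captures a distinct j each time. A small sketch to illustrate:

      array = []
      (1..5).each do |i|
        j = i                  # j is local to this block invocation, so it is a new
        array << lambda { j }  # variable on every pass and each lambda gets its own
      end
      p array.map(&:call)      # => [1, 2, 3, 4, 5]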

    Read the article

  • Request-local storage in ASP.NET (accessible to the code from IHttpModule implementation)

    - by IgorK
    I need to have some object hang around between two events I'm interested in: PreRequestHandlerExecute (where I create an instance of my object and want to save it) and PostRequestHandlerExecute (where I want to get at the object). After the second event the object is not needed for my purposes and should be discarded, either by the storage or by my explicit action. So the ideal context where my object should be stored is per request (with guaranteed no sharing issues when different threads are serving requests... or processes/servers :) ).

    Take into account that the actual implementation I can do is being made from an HttpModule and is supposed to be a pluggable solution for already written web apps (so the option to provide some state using static/instance variables in Global.asax doesn't look good - I would have to modify Global.asax on every web application).

    Cache seems to be too broad for this use. I tried to see whether httpContext.Application (of type HttpApplicationState) is good for me or not, but cannot tell whether it is exactly per HttpApplication instance or not (AFAIK you can have several instances of HttpApplication used on different threads and therefore serving several requests simultaneously - then using storage shared between threads will not work correctly; otherwise I would use it, because one HttpApplication instance serves exactly one request at a time). Something could be done with storing state on the HttpModule instances if I knew for sure that they are bound exactly 1-to-1 with every HttpApplication instance running (but again I need proof that an HttpApplication instance is 1-to-1 with my HttpModule's instance).

    Any valuable and reputable links on these topics are much appreciated... It would be great to find something particularly well suited for the per-request situation (because otherwise I may end up with something ugly... probably either some 'broader' scoped storage and some hacks to have different keys in the storage for different requests, OR using a thread-local thing and in this way commit to the theory that IIS/ASP.NET will not ever serve the first event from one thread and the second event from another thread and so on).
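    HttpContext.Items is the per-request dictionary for exactly this: it lives and dies with one request, regardless of which HttpApplication instance or thread serves it, and it is reachable from an IHttpModule without touching Global.asax. A hedged sketch (the MyState class and the key string are hypothetical):

      using System.Web;

      public class MyState { /* whatever needs to travel between the two events */ }

      public class RequestScopedStateModule : IHttpModule
      {
          private const string Key = "MyModule.RequestState";

          public void Init(HttpApplication app)
          {
              app.PreRequestHandlerExecute += (sender, e) =>
              {
                  // Created here, visible only to this request.
                  HttpContext.Current.Items[Key] = new MyState();
              };

              app.PostRequestHandlerExecute += (sender, e) =>
              {
                  var state = HttpContext.Current.Items[Key] as MyState;
                  if (state != null)
                  {
                      // use the object, then let it go with the request
                      HttpContext.Current.Items.Remove(Key);
                  }
              };
          }

          public void Dispose() { }
      }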

    Read the article

  • jQuery's draggable grid

    - by Art
    It looks like the 'grid' option in the constructor of Draggable is bound relative to the original coordinates of the element being dragged - so, simply put, if you have three draggable divs with their top set respectively to 100, 200 and 254 pixels relative to their parent:

      <div class="parent-div" style="position: relative;">
          <div id="div1" class="draggable" style="top: 100px; position: absolute;"></div>
          <div id="div2" class="draggable" style="top: 200px; position: absolute;"></div>
          <div id="div3" class="draggable" style="top: 254px; position: absolute;"></div>
      </div>

    and all of them are enabled for dragging with 'grid' set to [1, 100]:

      draggables = $('.draggable');
      $.each(draggables, function(index, elem) {
          $(elem).draggable({
              containment: $('#parent-div'),
              opacity: 0.7,
              revert: 'invalid',
              revertDuration: 300,
              grid: [1, 100],
              refreshPositions: true
          });
      });

    The problem here is that as soon as you drag div3 down, say, its top is increased by 100, moving it to 354px instead of being increased by just 46px (254 + 46 = 300), which would get it to the next stop in the grid - 300px - if we are looking at the parent-div as the point of reference and "grid holder".

    I had a look at the draggable sources and it seems to be an in-built flaw - they just do all the calculations relative to the original position of the draggable element. I would like to avoid monkey-patching the code of the draggable library, and what I am really looking for here is a way to make Draggable calculate the grid positions relative to the containing parent. However, if monkey-patching is unavoidable, I guess I'll have to live with it. Thanks!
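    One workaround that avoids patching the library (a hedged sketch; the 100px row height and selectors are taken from the question): drop the built-in grid option and snap inside the drag callback instead, since ui.position there is relative to the offset parent, i.e. parent-div.

      $('.draggable').draggable({
          containment: $('#parent-div'),
          opacity: 0.7,
          revert: 'invalid',
          revertDuration: 300,
          refreshPositions: true,
          drag: function (event, ui) {
              // round against the parent's coordinate system, not the drag start point
              ui.position.top = Math.round(ui.position.top / 100) * 100;
          }
      });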

    Read the article

  • How can I emulate Vim's * search in GNU Emacs?

    - by rq
    In Vim the * key in normal mode searches for the word under the cursor. In GNU Emacs the closest native equivalent would be:

      C-s C-w

    But that isn't quite the same. It opens up the incremental search mini buffer and copies from the cursor in the current buffer to the end of the word. In Vim you'd search for the whole word, even if you are in the middle of the word when you press *.

    I've cooked up a bit of elisp to do something similar:

      (defun find-word-under-cursor (arg)
        (interactive "p")
        (if (looking-at "\\<") ()
          (re-search-backward "\\<" (point-min)))
        (isearch-forward))

    That trots backwards to the start of the word before firing up isearch. I've bound it to C-+, which is easy to type on my keyboard and similar to *, so when I type C-+ C-w it copies from the start of the word to the search mini-buffer.

    However, this still isn't perfect. Ideally it would regexp search for "\<" word "\>" to not show partial matches (searching for the word "bar" shouldn't match "foobar", just "bar" on its own). I tried using search-forward-regexp and concat'ing \ but this doesn't wrap in the file, doesn't highlight matches and is generally pretty lame. An isearch-* function seems the best bet, but these don't behave well when scripted. Any ideas? Can anyone offer any improvements to the bit of elisp? Or is there some other way that I've overlooked?

    Read the article

  • Approach for fixing NoClassDefFoundError?

    - by DJC
    I'm seeing this question is getting asked a lot in many different contexts. Perhaps we can set some strategies for locating and fixing it? I'm noobish myself so all I can contribute are horror stories and questions, sorry... It seems this is thrown when a class is visible at compile time but not at run time... how can this happen? In my case I am developing an app that uses the Google APIs, in Eclipse, for the Android platform. I've configured the Project Properties / Java Build Path / Libraries to include the gdata .jars and all is well. When I execute in the emulator I get a force close and the logcat shows a NoClassDefFoundError on a simple new ContactsService("myApp"); I've also tried a new CalendarService("myApp") with the same results. Is it possible or desirable to statically bind at compile time to avoid the problem? How could dynamic binding of an add-on library work in the mobile environment anyway? Either it has to be bound into my .apk or else I need to "install" it? ... hmmm. Advice much appreciated.

    Read the article

  • MVC View Model Intellisense / Compile error

    - by Marty Trenouth
    I have one library with my ORM and am working with an MVC application. I have a problem where the pages won't compile because the Views can't see the Model's properties (which are inherited from lower-level base classes). The system throws a compile error saying that 'object' does not contain a definition for 'ID' and no extension method 'ID' accepting a first argument of type 'object' could be found (are you missing a using directive or an assembly reference?), implying that the View is not seeing the model. In the Controller I have full access to the Model and have checked the Inherits portion of the view to validate that the correct type is being passed.

    Controller:

      return View(new TeraViral_Blog());

    View:

      <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
          Inherits="System.Web.Mvc.ViewPage<com.models.TeraViral_Blog>" %>

      <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
          Index2
      </asp:Content>

      <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
          <h2>Index2</h2>
          <fieldset>
              <legend>Fields</legend>
              <p>
                  ID: <%= Html.Encode(Model.ID) %>
              </p>
          </fieldset>
      </asp:Content>

    Read the article

  • How to approach parallel processing of messages?

    - by Dan
    I am redesigning the messaging system for my app to use Intel Threading Building Blocks and am stumped trying to decide between two possible approaches. Basically, I have a sequence of message objects and, for each message type, a sequence of handlers. For each message object, I apply each handler registered for that message object's type. The sequential version would be something like this (pseudocode):

      for each message in message_sequence                          <- SEQUENTIAL
          for each handler in (handler_table for message.type)
              apply handler to message                              <- SEQUENTIAL

    The first approach I am considering processes the message objects in turn (sequentially) and applies the handlers concurrently.

    Pros:
    - predictable ordering of messages (i.e., we are guaranteed a FIFO processing order)
    - (potentially) lower latency of processing each message

    Cons:
    - more processing resources available than handlers for a single message type (bad parallelization)
    - bad use of processor cache, since message objects need to be copied for each handler to use
    - large overhead for small handlers

    The pseudocode of this approach would be as follows:

      for each message in message_sequence                          <- SEQUENTIAL
          parallel_for each handler in (handler_table for message.type)
              apply handler to message                              <- PARALLEL

    The second approach is to process the messages in parallel and apply the handlers to each message sequentially.

    Pros:
    - better use of processor cache (keeps the message object local to all handlers which will use it)
    - small handlers don't impose as much overhead (as long as there are other handlers also to be run)
    - more messages are expected than there are handlers, so the potential for parallelism is greater

    Cons:
    - unpredictable ordering: if message A is sent before message B, they may both be processed at the same time, or B may finish processing before all of A's handlers are finished (order is non-deterministic)

    The pseudocode is as follows:

      parallel_for each message in message_sequence                 <- PARALLEL
          for each handler in (handler_table for message.type)
              apply handler to message                              <- SEQUENTIAL

    The second approach has more advantages than the first, but non-deterministic ordering is a big disadvantage. Which approach would you choose and why? Are there any other approaches I should consider (besides the obvious third approach: parallel messages and parallel handlers, which has the disadvantages of both and no real redeeming factors as far as I can tell)? Thanks!
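    Since the app is moving to Intel TBB anyway, a middle ground worth considering is a tbb::parallel_pipeline: messages enter and are published in FIFO order through serial_in_order stages, while the handler work for different messages overlaps in a parallel stage. This is a hedged sketch, not a drop-in design; apply_handlers and publish stand in for the real handler table and for whatever makes a processed message visible.

      #include <vector>
      #include "tbb/pipeline.h"

      struct Message { int type; /* ... */ };

      void apply_handlers(Message* m);   // assumed: runs every handler for m->type, in order
      void publish(Message* m);          // assumed: commits/announces the processed message

      void process(std::vector<Message*>& messages)
      {
          size_t next = 0;
          tbb::parallel_pipeline(
              /* max live tokens */ 8,
              tbb::make_filter<void, Message*>(tbb::filter::serial_in_order,
                  [&](tbb::flow_control& fc) -> Message* {
                      if (next == messages.size()) { fc.stop(); return NULL; }
                      return messages[next++];               // messages taken in FIFO order
                  })
              & tbb::make_filter<Message*, Message*>(tbb::filter::parallel,
                  [](Message* m) -> Message* {
                      apply_handlers(m);                     // handlers stay sequential per message
                      return m;
                  })
              & tbb::make_filter<Message*, void>(tbb::filter::serial_in_order,
                  [](Message* m) {
                      publish(m);                            // results become visible in FIFO order
                  }));
      }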

    Read the article

  • Including partial views when applying the Model-View-ViewModel design pattern

    - by Filip Ekberg
    Consider that I have an application that just handles Messages and Users. I want my Window to have a common Menu and an area where the current View is displayed. I can only work with either Messages or Users, so I cannot work simultaneously with both Views. Therefore I have the following controls:

      MessageView.xaml
      UserView.xaml

    Just to make it a bit easier, both the Message Model and the User Model look like this:

      Name
      Description

    Now, I have the following three ViewModels:

      MainWindowViewModel
      UsersViewModel
      MessagesViewModel

    The UsersViewModel and the MessagesViewModel both just fetch an ObservableCollection<T> of their corresponding Model, which is bound in the corresponding View like this:

      <DataGrid ItemSource="{Binding ModelCollection}" />

    The MainWindowViewModel hooks up two different Commands that have implemented ICommand and look something like the following:

      public class ShowMessagesCommand : ICommand
      {
          private ViewModelBase ViewModel { get; set; }

          public ShowMessagesCommand (ViewModelBase viewModel)
          {
              ViewModel = viewModel;
          }

          public void Execute(object parameter)
          {
              var viewModel = new ProductsViewModel();
              ViewModel.PartialViewModel = new MessageView { DataContext = viewModel };
          }

          public bool CanExecute(object parameter)
          {
              return true;
          }

          public event EventHandler CanExecuteChanged;
      }

    And there is another one like it that will show Users. Now this introduced ViewModelBase, which only holds the following:

      public UIElement PartialViewModel
      {
          get { return (UIElement)GetValue(PartialViewModelProperty); }
          set { SetValue(PartialViewModelProperty, value); }
      }

      public static readonly DependencyProperty PartialViewModelProperty =
          DependencyProperty.Register("PartialViewModel", typeof(UIElement), typeof(ViewModelBase),
                                      new UIPropertyMetadata(null));

    This dependency property is used in MainWindow.xaml to display the User Control dynamically, like this:

      <UserControl Content="{Binding PartialViewModel}" />

    There are also two buttons on this Window that fire the Commands:

      ShowMessagesCommand
      ShowUsersCommand

    And when these are fired, the UserControl changes because PartialViewModel is a dependency property.

    I want to know if this is bad practice. Should I not inject the User Control like this? Is there another "better" alternative that corresponds better with the design pattern? Or is this a nice way of including partial views?
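    A common alternative, sketched here with hedged names (CurrentViewModel and the namespace prefixes are illustrative): expose the child view model itself, not a UIElement, from MainWindowViewModel, and let WPF pick the matching view through implicit DataTemplates. The commands then deal only with view models, which keeps UI types out of the ViewModel layer.

      <!-- In MainWindow.xaml (or App.xaml) resources -->
      <Window.Resources>
          <DataTemplate DataType="{x:Type vm:MessagesViewModel}">
              <views:MessageView />
          </DataTemplate>
          <DataTemplate DataType="{x:Type vm:UsersViewModel}">
              <views:UserView />
          </DataTemplate>
      </Window.Resources>

      <!-- The content area: binds to a plain CLR property on MainWindowViewModel -->
      <ContentControl Content="{Binding CurrentViewModel}" />

    With that in place, ShowMessagesCommand would simply set CurrentViewModel to a new MessagesViewModel and raise the usual property-changed notification.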

    Read the article

  • File not found - MOSS 2007

    - by isimple
    I received the following error on some SharePoint pages:

      File Not Found.
      at System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection)
      at System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection)
      at System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
      at System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
      at System.Reflection.Assembly.Load(String assemblyString)
      at System.Web.Configuration.CompilationSection.LoadAssembly(String assemblyName, Boolean throwOnFail)
      at System.Web.UI.TemplateParser.LoadAssembly(String assemblyName, Boolean throwOnFail)
      at System.Web.UI.MainTagNameToTypeMapper.ProcessTagNamespaceRegistrationCore(TagNamespaceRegisterEntry nsRegisterEntry)
      at System.Web.UI.MainTagNameToTypeMapper.ProcessTagNamespaceRegistration(TagNamespaceRegisterEntry nsRegisterEntry)
      at System.Web.UI.BaseTemplateParser.ProcessDirective(String directiveName, IDictionary directive)
      at System.Web.UI.TemplateControlParser.ProcessDirective(String directiveName, IDictionary directive)
      at System.Web.UI.PageParser.ProcessDirective(String directiveName, IDictionary directive)
      at System.Web.UI.TemplateParser.ParseStringInternal(String text, Encoding fileEncoding)

    I used the Assembly Binding Log Viewer tool (fuslogvw.exe) to locate the assembly which is not bound, and it seems to be Microsoft.Sharepoint.Intl.Resources.dll. This assembly is not being located by the application, resulting in the 'File not Found' error. Can anyone guide me on how to solve this?

    Read the article
