Search Results

Search found 2558 results on 103 pages for 'significant digits'.

  • Can the Visitor pattern be replaced with callback functions?

    - by getit
    Is there any significant benefit to using either technique? In case there are variations, the Visitor pattern I mean is this: http://en.wikipedia.org/wiki/Visitor_pattern

    Below is an example of using a delegate to achieve the same effect (at least I think it is the same). Say there is a collection of nested elements: Schools contain Departments, which contain Students. Instead of using the Visitor pattern to perform something on each collection item, why not use a simple callback (an Action delegate in C#)? Something like this:

        class Department
        {
            public List<Student> Students;
        }

        class School
        {
            public List<Department> Departments;

            public void VisitStudents(Action<Student> actionDelegate)
            {
                foreach (var dep in this.Departments)
                {
                    foreach (var stu in dep.Students)
                    {
                        actionDelegate(stu);
                    }
                }
            }
        }

        School A = new School();
        ... // populate collections
        A.VisitStudents(student => { /* ...do something with student... */ });

    EDIT: Example with a delegate accepting multiple parameters. Say I wanted to pass both the student and the department; I could modify the delegate definition to Action<Student, Department>, like so:

        class School
        {
            public List<Department> Departments;

            public void VisitStudents(Action<Student, Department> actionDelegate, Action<Department> d2)
            {
                foreach (var dep in this.Departments)
                {
                    d2(dep); // This performs a different process. Using the Visitor
                             // pattern would avoid having to keep adding new delegates.
                             // This looks like the main benefit so far.
                    foreach (var stu in dep.Students)
                    {
                        actionDelegate(stu, dep);
                    }
                }
            }
        }
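
    For comparison, here is a minimal sketch of what the classic Visitor version of the same traversal might look like (the IStudentVisitor interface and Accept method names are illustrative, not from the question):

        interface IStudentVisitor
        {
            void VisitDepartment(Department department);
            void VisitStudent(Student student);
        }

        class School
        {
            public List<Department> Departments;

            // Same nested loops as VisitStudents, but dispatching through an
            // interface instead of delegates:
            public void Accept(IStudentVisitor visitor)
            {
                foreach (var dep in this.Departments)
                {
                    visitor.VisitDepartment(dep);
                    foreach (var stu in dep.Students)
                    {
                        visitor.VisitStudent(stu);
                    }
                }
            }
        }

    One visitor object can carry state and handle every element type in the hierarchy, which is roughly what the EDIT above approximates by adding a second delegate parameter.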

  • iPhone: How to Determine Average Light/Dark of an Area of an UIImage

    - by TechZen
    I need to place labels with a transparent background over a variable-content UIImage. Readability will vary significantly depending on the relationship between the color of the label's text and the color/luminosity of the area of the image displayed under the label. Since the image will be constantly changing, the color of the label's text needs to change in sync.

    I have found several techniques for determining the color, perceived luminosity, etc. of a single pixel. However, I need to rather quickly (while a view loads) determine the rough perceived color/luminosity of an area of the UIImage under the frame of the UILabel. I presume I will also need to measure the alpha, because the same color/luminosity looks different at different alpha values.

    Is there a way to calculate such a value for an area? Will I be reduced to simply summing pixels? If it comes to that, is there an algorithm to accomplish it? I've thought of two possible approaches:

    1. Perform some "folding" operations, i.e. combining pixels from one half of the area with the other half. Then repeat until I get a single value. Would this be practical? How would you logically combine pixels to average their perceived color/luminosity?
    2. Sample a statistically significant number of pixels in the area and then combine them (somehow) to get a rough measure.

    I think this problem comes up a lot these days, with people being so fond of customizing backgrounds. Seems like something that would be worth my time to bang out a category or class to handle, and then share around.
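
    For the per-pixel step, one widely used approximation of perceived luminosity (the Rec. 601 luma weighting - a suggestion, not something from the question) is:

        luma = 0.299*R + 0.587*G + 0.114*B

    Averaging that value over a uniform sample of the pixels under the label's frame, weighted by alpha, yields a single lightness number to test the label's text color against.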

  • Communication with different social networks, strategy pattern?

    - by bclaessens
    Hi. For the last few days I've been thinking about how to solve the following programming problem and find the ideal, flexible programming structure. (Note: I'm using Flash as my platform technology, but that shouldn't matter since I'm just looking for the ideal design pattern.)

    Our Flash website has multiple situations in which it has to communicate with different social networks (Facebook, Netlog and Skyrock). Now, the communication strategy doesn't have to change multiple times over one "run"; the strategy should be picked once (at launch time) for that session. The real problem is the way the communication works between each social network and our website. Some networks force us to ask for a token, others force us to use a webservice, yet another forces us to set up its communication through JavaScript. The problem becomes more complicated when our website has to run in each network's canvas, which results in even more (different) ways of communicating. To sum up, our website has to work in the following cases:

    - standalone on the campaign website URL (user chooses their favourite network), communicating with Netlog OR Facebook OR Skyrock
    - running in a Netlog canvas, logging in automatically (website checks for Netlog parameters)
    - running in a Facebook canvas, logging in automatically (website checks for Facebook params)
    - running in a Skyrock canvas, logging in automatically (website checks for Skyrock params)

    As you can see, our website needs 6 different ways to communicate with a social network. To be honest, the actual significant difference between all communication strategies is the way each has to connect to its network (as stated above in my example). Posting an image, making a comment, etc. is the same whether the site runs standalone or in the canvas URL. WARNING: posting an image or a comment DOES differ from network to network.

    Should I use the strategy pattern and make 6 different communication strategies, or is there a better way? An example would be great but isn't required ;) Thanks in advance.
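
    For what it's worth, a minimal sketch of one possible split (shown in C# rather than ActionScript, and all names here are illustrative): keep the per-network/per-context connection handshake in one strategy interface, and the per-network API calls in another, composed once at launch time.

        // 3 implementations: token handshake, webservice, JavaScript bridge
        interface IConnectionStrategy
        {
            void Connect();
        }

        // 3 implementations: Facebook, Netlog, Skyrock
        interface INetworkApi
        {
            void PostImage(byte[] image);
            void PostComment(string text);
        }

        class SocialSession
        {
            private readonly IConnectionStrategy connection;
            private readonly INetworkApi api;

            public SocialSession(IConnectionStrategy connection, INetworkApi api)
            {
                this.connection = connection;
                this.api = api;
            }

            public void Start() { connection.Connect(); }
            public void PostComment(string text) { api.PostComment(text); }
        }

    If the only real difference between the six cases is how each connects, this composition might reduce six monolithic strategies to a handful of connection strategies combined with three network APIs.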

  • Testing When Correctness is Poorly Defined?

    - by dsimcha
    I generally try to use unit tests for any code that has easily defined correct behavior given some reasonably small, well-defined set of inputs. This works quite well for catching bugs, and I do it all the time in my personal library of generic functions.

    However, a lot of the code I write is data mining code that basically looks for significant patterns in large datasets. Correct behavior in this case is often not well defined and depends on a lot of different inputs in ways that are not easy for a human to predict (i.e. the math can't reasonably be done by hand, which is why I'm using a computer to solve the problem in the first place). These inputs can be very complex, to the point where coming up with a reasonable test case is near impossible. Identifying the edge cases that are worth testing is extremely difficult. Sometimes the algorithm isn't even deterministic.

    Usually, I do the best I can by using asserts for sanity checks and creating a small toy test case with a known pattern, and informally seeing if the answer at least "looks reasonable", without it necessarily being objectively correct. Is there any better way to test these kinds of cases?
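
    One named technique worth considering is property-based (invariant) testing: instead of asserting exact outputs, assert relationships that must hold for any input. A minimal NUnit-style sketch, where PatternMiner is a hypothetical stand-in for the code under test:

        // Invariant: shuffling the input rows should not change the result.
        [Test]
        public void ResultIsInvariantUnderRowPermutation()
        {
            var rng = new Random(42);   // fixed seed keeps the test reproducible
            var data = Enumerable.Range(0, 1000)
                                 .Select(_ => rng.NextDouble())
                                 .ToArray();
            var shuffled = data.OrderBy(_ => rng.Next()).ToArray();

            Assert.AreEqual(new PatternMiner().CountPatterns(data),
                            new PatternMiner().CountPatterns(shuffled));
        }

    For nondeterministic algorithms the same idea applies with tolerances: run with fixed seeds and assert the result lands within an expected band rather than on an exact value.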

  • Uniquing with Existing Core Data Entities

    - by warrenm
    I'm using Core Data to store a lot (1000s) of items. A pair of properties on each item is used to determine uniqueness, so when a new item comes in, I compare it against the existing items before inserting it. Since the incoming data is in the form of an RSS feed, there are often many duplicates, and the cost of the uniquing step is O(N^2), which has become significant.

    Right now, I create a set of existing items before iterating over the list of (possibly) new items. My theory is that on the first iteration all the items will be faulted in and, assuming we aren't pressed for memory, most of those items will remain resident over the course of the iteration. I see my options thusly:

    1. Use string comparison for uniquing, iterating over all "new" items and comparing to all existing items (current approach).
    2. Use a predicate to filter the set of existing items against the properties of the "new" items.
    3. Use a predicate with Core Data to determine the uniqueness of each "new" item (without retrieving the set of existing items).

    Is option 3 likely to be faster than my current approach? Do you know of a better way?
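
    Core Data specifics aside, the O(N^2) cost comes from comparing every incoming item against every existing one; hashing the uniquing pair turns that into one O(1) lookup per item. A sketch of the idea, in C# for brevity (existingItems, incomingItems and Insert are hypothetical; the same shape works with a set of compound keys on any platform):

        // Build the key set once: O(N).
        var seen = new HashSet<(string, string)>(
            existingItems.Select(i => (i.KeyA, i.KeyB)));

        // Each incoming item then costs one hash lookup, O(1) amortized.
        foreach (var item in incomingItems)
        {
            if (seen.Add((item.KeyA, item.KeyB)))   // false if already present
                Insert(item);
        }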

  • In mysql, is "explain ..." always safe?

    - by tye
    If I allow a group of users to submit "explain $whatever" to MySQL (via Perl's DBI using DBD::mysql), is there anything that a user could put into $whatever that would make any database changes, leak non-trivial information, or even cause significant database load? If so, how?

    I know that via "explain $whatever" one can figure out what tables/columns exist (you have to guess names, though) and roughly how many records are in a table or how many records have a particular value for an indexed field. I don't expect one to be able to get any information about the contents of unindexed fields. DBD::mysql should not allow multiple statements, so I don't expect it to be possible to run any query (just explain one query). Even subqueries should not be executed, just explained. But I'm not a MySQL expert, and there are surely features of MySQL that I'm not even aware of.

    In trying to come up with a query plan, might the optimizer actually execute an expression in order to come up with the value that an indexed field is going to be compared against? Consider:

        explain select * from atable where class = somefunction(...)

    where atable.class is indexed and not unique, and class='unused' would find no records but class='common' would find a million records. Might 'explain' evaluate somefunction(...)? And then could somefunction(...) be written such that it modifies data?
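
    For concreteness, a hypothetical shape of that last concern (a sketch only - whether EXPLAIN actually evaluates the call depends on the MySQL version and how its optimizer treats constant expressions):

        -- A stored function with a side effect (hypothetical):
        CREATE FUNCTION somefunction() RETURNS INT
        MODIFIES SQL DATA
        BEGIN
          INSERT INTO audit_log (touched_at) VALUES (NOW());
          RETURN 1;
        END;

        -- If the planner folds the call to a constant while choosing an index,
        -- the INSERT above would run during the EXPLAIN:
        EXPLAIN SELECT * FROM atable WHERE class = somefunction();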

  • re.sub emptying list

    - by jmau5
        import re
        import sys

        def process_dialect_translation_rules():
            # Read in lines from the text file specified in sys.argv[1], stripping away
            # excess whitespace and discarding comments (lines that start with '##').
            f_lines = [line.strip() for line in open(sys.argv[1], 'r').readlines()]
            f_lines = filter(lambda line: not re.match(r'##', line), f_lines)

            # Split on the pattern '\s*<=>\s*'. This leaves us with a list of
            # lists. Each 2nd-level list has two elements: the value to be
            # translated from and the value to be translated to. Use the sub
            # function from the re module to get rid of those pesky quotation marks.
            f_lines = [re.split(r'\s*<=>\s*', line) for line in f_lines]
            f_lines = [re.sub(r'"', '', elem) for elem in line for line in f_lines]

    This function should take the lines from a file and perform some operations on them, such as removing any lines that begin with ##. Another operation I wish to perform is to remove the quotation marks around the words in each line. However, when the final line of this script runs, f_lines becomes an empty list. What happened?

    Requested lines of the original file:

        ## English-Geek Reversible Translation File #1
        ## (Moderate Geek)
        ## Created by Todd WAreham, October 2009

        "TV show" <=> "STAR TREK"
        "food" <=> "pizza"
        "drink" <=> "Red Bull"
        "computer" <=> "TRS 80"
        "girlfriend" <=> "significant other"
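
    A likely culprit, sketched here as an assumption (it relies on Python 2 semantics, where a comprehension's loop variable leaks into the enclosing scope): in the final line the two for clauses are in the wrong order, so "for elem in line" iterates over whatever is left in line from the previous comprehension rather than over each pair; if that leftover value is an empty string (e.g. from a blank last line in the file), the comprehension yields []. Nesting the comprehension keeps the list-of-pairs structure and strips the quotes:

        # Outer loop over the list of pairs, inner loop over each pair:
        f_lines = [[re.sub(r'"', '', elem) for elem in line] for line in f_lines]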

  • Threadpool design question

    - by ZeroVector
    I have a design question and want some feedback on whether a ThreadPool is appropriate for the client program I am writing. I have a client running as a service, processing database records. Each of these records contains connection information for external FTP sites (basically it is a queue of files to transfer). A lot of them are to the same host, just moving different files, so I am grouping them together by host. I want to be able to create a new thread per host. I really don't care when the transfers finish; they just need to do all the work (or try to) they were assigned, and then terminate once they are finished, cleaning up all resources they used in the process. I anticipate no more than 10-25 connections to be established. Once the transfer queue is empty, the program will simply wait until there are records in the queue again. Is the ThreadPool a good candidate for this, or should I use a different approach?

    Edit: For the most part, this is the only significant custom application running on the server.
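
    A minimal sketch of the shape being described, assuming .NET 4 (jobs, the Host property, and TransferAll are hypothetical stand-ins for the record type and transfer logic):

        // Requires System.Linq and System.Threading.
        var byHost = jobs.GroupBy(j => j.Host).ToList();

        using (var done = new CountdownEvent(byHost.Count))
        {
            foreach (var hostGroup in byHost)
            {
                var batch = hostGroup.ToList();        // one host's files
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    try { TransferAll(batch); }        // one connection, many files
                    finally { done.Signal(); }         // always signal, even on error
                });
            }
            done.Wait();   // or skip the wait entirely if fire-and-forget is fine
        }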

  • What's the most trivial function that would benefit from being computed on a GPU?

    - by hanDerPeder
    Hi. I'm just starting out learning OpenCL. I'm trying to get a feel for what performance gains to expect when moving functions/algorithms to the GPU. The most basic kernel given in most tutorials is a kernel that takes two arrays of numbers, sums the values at the corresponding indexes, and writes them to a third array, like so:

        __kernel void add(__global float *a, __global float *b, __global float *answer)
        {
            int gid = get_global_id(0);
            answer[gid] = a[gid] + b[gid];
        }

        __kernel void sub(__global float *n, __global float *answer)
        {
            int gid = get_global_id(0);
            answer[gid] = n[gid] - 2;
        }

        __kernel void ranksort(__global const float *a, __global float *answer)
        {
            int gid = get_global_id(0);
            int gSize = get_global_size(0);
            int x = 0;
            for (int i = 0; i < gSize; i++) {
                if (a[gid] > a[i]) x++;
            }
            answer[x] = a[gid];
        }

    I am assuming that you could never justify computing this on the GPU: the memory transfer would outweigh the time it would take to compute this on the CPU by orders of magnitude (I might be wrong about this, hence this question). What I am wondering is: what would be the most trivial example where you would expect a significant speedup when using an OpenCL kernel instead of the CPU?

  • STL vectors with uninitialized storage?

    - by Jim Hunziker
    I'm writing an inner loop that needs to place structs in contiguous storage. I don't know how many of these structs there will be ahead of time. My problem is that STL's vector initializes its values to 0, so no matter what I do, I incur the cost of the initialization plus the cost of setting the struct's members to their values.

    Is there any way to prevent the initialization, or is there an STL-like container out there with resizeable contiguous storage and uninitialized elements? (I'm certain that this part of the code needs to be optimized, and I'm certain that the initialization is a significant cost.) Also, see my comments below for a clarification about when the initialization occurs.

    SOME CODE:

        void GetsCalledALot(int* data1, int* data2, int count)
        {
            int mvSize = memberVector.size();
            memberVector.resize(mvSize + count); // causes 0-initialization

            for (int i = 0; i < count; ++i)
            {
                memberVector[mvSize + i].d1 = data1[i];
                memberVector[mvSize + i].d2 = data2[i];
            }
        }
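
    One common workaround, sketched under the assumption that the element is a simple struct like the one implied above: reserve() allocates capacity without constructing any elements, and push_back then writes each slot exactly once with its final value.

        #include <vector>

        struct Item { int d1; int d2; };
        std::vector<Item> memberVector;

        void GetsCalledALot(int* data1, int* data2, int count)
        {
            memberVector.reserve(memberVector.size() + count); // allocate, don't construct
            for (int i = 0; i < count; ++i)
            {
                // Each element is constructed in place with its final values;
                // there is no zero-initialized intermediate state.
                memberVector.push_back(Item{ data1[i], data2[i] });
            }
        }

    push_back still performs a capacity check per call, but that is typically far cheaper than value-initializing the whole block up front.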

  • Is excessive DataTable usage bad?

    - by Justin R.
    I was recently asked to assist another team in building an ASP.NET website. They already have a significant amount of code written -- I was specifically asked to build a few individual pages for the site. While exploring the code for the rest of the site, the amount of DataTables being constructed jumped out at me. Being relatively new in the field, I've never worked on an application that utilizes a database as much as this site does, so I'm not sure how common this is.

    It seems that whenever data is queried from our database, the results are stored in a DataTable. This DataTable is then usually passed around by itself, or it's passed to a constructor. Classes that are initialized with a DataTable always assign it to a private/protected field, however only a few of these classes implement IDisposable. In fact, in the thousands of lines of code that I've browsed so far, I have yet to see the Dispose method called on a DataTable. If anything, this doesn't seem to be good OOP.

    Is this something that I should worry about? Or am I just paying more attention to detail than I should? Assuming you're more experienced developers than I am, how would you feel or react if someone who was just assigned to help you with your site approached you about this "problem"?

  • I want to tell the VC++ Compiler to compile all code. Can it be done?

    - by KGB
    I am using VS2005 VC++ for unmanaged C++. I have VSTS and am trying to use the code coverage tool to accomplish two things with regard to unit tests:

    1. See how much of my referenced code under test is getting executed
    2. See how many methods of my code under test (if any) are not unit tested at all

    Setting up the VSTS code coverage tool and accomplishing task #1 was straightforward. However, #2 has been a surprising challenge for me. Here is my test code:

        // CodeCoverageTarget.h
        #include <string>

        class CodeCoverageTarget
        {
        public:
            std::string ThisMethodRuns() { return "Running"; }
            std::string ThisMethodDoesNotRun() { return "Not Running"; }
        };

        // main.cpp
        #include <iostream>
        #include "CodeCoverageTarget.h"

        using namespace std;

        int main()
        {
            CodeCoverageTarget cct;
            cout << cct.ThisMethodRuns() << endl;
        }

    When both methods are defined within the class as above, the compiler automatically eliminates ThisMethodDoesNotRun() from the obj file. If I move its definition outside the class, then it is included in the obj file and the code coverage tool shows it has not been exercised at all. Under most circumstances I want the compiler to do this elimination for me, but for the code coverage tool it defeats a significant portion of the value (e.g. finding untested methods). I have tried a number of things to tell the compiler to stop being smart for me and compile everything, but I am stumped. It would be nice if the code coverage tool compensated for this (I suppose by scanning the source and matching it up with the linker output), but I didn't find anything to suggest it has a special mode to be turned on. Am I totally missing something simple here, or is this not possible with the VC++ compiler + VSTS code coverage tool?

    Thanks in advance, KGB

  • Does anyone still believe in the Capability Maturity Model for Software?

    - by Ed Guiness
    Ten years ago, when I first encountered the CMM for software, I was, I suppose like many, struck by how accurately it seemed to describe the chaotic "level one" state of software development in many businesses, particularly with its reference to reliance on heroes. It also seemed to provide realistic guidance for an organisation to progress up the levels, improving their processes.

    But while it seemed to provide a good model and realistic guidance for improvement, I never really witnessed adherence to CMM having a significant positive impact on any organisation I have worked for, or with. I know of one large software consultancy that claims CMM level 5 - the highest level - when I can see first-hand that their processes are as chaotic, and the quality of their software products as varied, as other, non-CMM businesses.

    So I'm wondering: has anyone seen a real, tangible benefit from adherence to process improvement according to CMM? And if you have seen improvement, do you think that the improvement was specifically attributable to CMM, or would an alternative approach (such as Six Sigma) have been equally or more beneficial? Does anyone still believe?

    As an aside, for those who haven't yet seen it, check out this funny-because-it's-true parody.

  • Getting moved out of a development job

    - by Jay
    I'm a year out of college and I started my first dev job at a small (<15 people) company several months ago. It was an internship position that recently turned full time. The position started out as development, but for full time I got offered a grab bag of positions: QA, docs, call support and some dev work. It's clear that my employers feel I am lacking dev skills, which is true. I did not major in CS in college and did not have much dev experience. However, I'm convinced that I can be a good developer and will be a good developer once given the chance to write lots of code. My question is simple: what should I do? As I see it, there are two options:

    1. Work hard in the non-dev duties so that my employers may eventually give me significant dev responsibilities.
    2. Look for a new job where I will be a developer first and an all-purpose guy second (if at all).

    Thanks guys.

  • Disabling browser print options (headers, footers, margins) from page?

    - by Anthony
    I have seen this question asked in a couple of different ways on SO and several other websites, but most of them are either too specific or out of date. I'm hoping someone can provide a definitive answer here without pandering to speculation.

    Is there a way, either with CSS or JavaScript, to change the default printer settings when someone prints from their browser? And of course by "prints from their browser" I mean some form of HTML, not PDF or some other plug-in-reliant mime type.

    Please note: if some browsers offer this and others don't (or if you only know how to do it for some browsers), I welcome browser-specific solutions. Similarly, if you know of a mainstream browser that has specific restrictions against EVER doing this, that is also helpful, but some fairly up-to-date documentation would be appreciated (simply saying "that goes against XYZ's security policy" isn't very convincing when XYZ has made significant changes to said policy in the last three years).

    Finally, when I say "change default print settings" I don't mean forever, just for my page, and I am referring specifically to print margins, headers, and footers. I am very aware that CSS offers the option of changing the page orientation as well as the page margins. One of the many struggles is with Firefox: if I set the page margins to 1 inch, it ADDS this to the half inch it already puts in place. I very much want to reduce the usage of PDFs on my client's site, but the infringement on presentation (as well as the lack of reliability) is their main concern.
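
    On the CSS side mentioned above, a sketch of the print-stylesheet directives involved (support varies by browser and version, and none of this is guaranteed to override the user's own print-dialog settings):

        /* Applies only when the page is printed */
        @media print {
            @page {
                margin: 0.5in;   /* the page box margin; interacts with the
                                    browser's own default print margins */
                size: portrait;  /* orientation, where supported */
            }
        }

    In some browsers, shrinking the @page margin toward zero also suppresses the default header/footer text, since it is drawn inside that margin area - but that is browser-specific behavior, not something the CSS spec promises.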

  • Time gaps between host clEnqueue_xxx calls

    - by dialer
    Consider these OpenCL calls (3 memcpy DtoH, 4313 cl_float elements each):

        clEnqueueReadBuffer(CommandQueue, SpectrumAbsMem, CL_FALSE, 0, SpectrumMemSize, SpectrumAbs, 0, NULL, NULL);
        clEnqueueReadBuffer(CommandQueue, SpectrumReMem,  CL_FALSE, 0, SpectrumMemSize, SpectrumRe,  0, NULL, NULL);
        clEnqueueReadBuffer(CommandQueue, SpectrumImMem,  CL_FALSE, 0, SpectrumMemSize, SpectrumIm,  0, NULL, NULL);

    When I analyze these with the NVIDIA Visual Profiler, I see that the actual memcpy operation only takes 8 us, but there is a significant gap of around 130 us after each memcpy. I'm already using the supposedly asynchronous method (the CL_FALSE in the argument list). When I use only one operation, but with three times the size, the operation is way faster.

    Why is the time gap between the actual memcpy operations so huge, whereas the gap between the kernel execution (exactly before these three operations) and the first memcpy is only 7 us? Can I get rid of it, or do I need to accumulate more data before starting a memcpy? If so, is there a convenient way to combine multiple arrays into a single contiguous block of memory, but still have a cl_mem object as a separate device memory pointer to each section?
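
    On that last question, a sketch of the sub-buffer approach (requires OpenCL 1.1+; buffer names and flags here are illustrative): allocate one contiguous buffer, then carve a separate cl_mem handle out of each section with clCreateSubBuffer.

        // One contiguous allocation for all three spectra:
        size_t section = 4313 * sizeof(cl_float);
        cl_int err;
        cl_mem big = clCreateBuffer(Context, CL_MEM_READ_WRITE, 3 * section, NULL, &err);

        // A distinct cl_mem per section. Note that each region's origin must
        // satisfy the device's CL_DEVICE_MEM_BASE_ADDR_ALIGN, so sections may
        // need padding to an aligned size:
        cl_buffer_region region0 = { 0 * section, section };
        cl_buffer_region region1 = { 1 * section, section };
        cl_buffer_region region2 = { 2 * section, section };
        cl_mem SpectrumAbsMem = clCreateSubBuffer(big, CL_MEM_READ_WRITE,
            CL_BUFFER_CREATE_TYPE_REGION, &region0, &err);
        cl_mem SpectrumReMem  = clCreateSubBuffer(big, CL_MEM_READ_WRITE,
            CL_BUFFER_CREATE_TYPE_REGION, &region1, &err);
        cl_mem SpectrumImMem  = clCreateSubBuffer(big, CL_MEM_READ_WRITE,
            CL_BUFFER_CREATE_TYPE_REGION, &region2, &err);

        // Kernels write through the sub-buffers; the host reads all three
        // sections back in one transfer from the parent buffer:
        clEnqueueReadBuffer(CommandQueue, big, CL_FALSE, 0, 3 * section,
                            HostBlock, 0, NULL, NULL);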

  • How Best to Replace Ugly Queries and Dynamic PL/SQL with C#?

    - by Mike
    Hi. I write a lot of one-off Oracle SQL queries (in Toad), and sometimes they can get complex, involving lots of unions, joins, and subqueries, and sometimes requiring dynamic SQL. That is, sometimes SQL queries require set-based processing along with significant procedural processing. This is what PL/SQL is custom-made for, but as a language it does not begin to compare to C#. Now and then I convert a PL/SQL procedure to C#, and am always amazed at how much cleaner and easier to both read and write the C# version is. The C# program might, for example, construct a SQL query string piece by piece and/or run several queries and process them as needed. The C# version is usually much faster as well, which must mean that I'm not very good at PL/SQL either. I do not currently have access to LINQ.

    My question is: how best to package all these little C# programs, which are really just mini reports, that is, replacements for ugly SQL queries? Right now I'm actually using NUnit to hold them, calling each report a [Test] even though they aren't really tests. NUnit just happens to provide a convenient packaging framework.

  • Get Highest Res Favicon

    - by Jeremy
    I'm making a website that needs to dynamically obtain the favicon of sites upon request. I've found a few APIs that can accomplish this fairly well, and so far I'm liking http://www.fvicon.com/. The final image for my website will be 64x64px, and some websites such as Google and Wordpress have nice images of this size that are easily retrieved via this API. Though, of course, most websites only have a 16x16 favicon image, and scaling that image to 64x64 gives very bad quality loss.

    Examples:
    (high res) http://a.fvicon.com/wordpress.com?format=png&width=64&height=64
    (low res) http://a.fvicon.com/yahoo.com?format=png&width=64&height=64

    Keeping this in mind, I'm planning on somehow determining whether a high-res image is available and, if so, the website will use this image. If not, I want to use a pre-made 64x64 icon with the smaller icon layered over it. What I'm having trouble with is determining whether a high-res favicon is available or not. Also, I'm curious if there's a better approach to this situation. I'd rather not use smaller images (64x64 works out really well for this project). The lowest res I'm willing to drop to is 48x48, but even then there will be a significant quality loss when scaling up 16x16 favicons.

    Any ideas? If you need any more information I will gladly provide it. Thank you!

  • java setting resolution and print size for an Image

    - by Ingrid
    I wrote a program that generates a BufferedImage to be displayed on the screen and then printed. Part of the image includes grid lines that are 1 pixel wide. That is, the line is 1 pixel, with about 10 pixels between lines. Because of screen resolution, the image is displayed much bigger than that, with several pixels for each line. I'd like to draw it smaller, but when I scale the image (either by using Image.getScaledInstance or Graphics2D.scale), I lose significant amounts of detail.

    I'd like to print the image as well, and am dealing with the same problem. In that case, I am using this code to set the resolution:

        HashPrintRequestAttributeSet set = new HashPrintRequestAttributeSet();
        PrinterResolution pr = new PrinterResolution(250, 250, ResolutionSyntax.DPI);
        set.add(pr);
        job.print(set);

    which works to make the image smaller without losing detail. But the problem is that the image is cut off at the same boundary as if I hadn't set the resolution. I'm also confused because I expected a larger number of DPI to make a smaller image, but it's working the other way. I'm using Java 1.6 on Windows 7 with Eclipse.

  • Which are the most useful techniques for faster Bluetooth?

    - by Mike Howard
    Hi. I'm adding peer-to-peer Bluetooth using GameKit to an iPhone shoot-em-up, so speed is vital. I'm sending about 40 messages a second each way, most of them with the faster GKSendDataUnreliable, all serialized with NSCoding. In testing between a 3G and a 3GS, this is slowing the 3G down a lot more than I'd like. I'm wondering where I should concentrate my efforts to speed it up:

    1. How much slower is GKSendDataReliable? For the few packets that have to get there, would it be faster to send a GKSendDataUnreliable and have the peer send an acknowledgement, so I can send again if I don't get the Ack within, say, 100ms?
    2. How much faster would it be to create the NSData instance using a regular C array rather than archiving with the NSCoding protocol? Is this serialization process (for about a dozen floats) just as slow as you'd expect from object creation/deallocation overhead, or is something particularly slow happening?
    3. I heard that (for example) sending four separate sets of data is much, much slower than sending one piece of data four times the size. Would I make a significant saving by sending separate packets of data that wouldn't always go together in the same packet when they happen at the same time?
    4. Are there any other Bluetooth performance secrets I've missed?

    Thanks for your help.

  • ASP.NET or PHP: Is Memcached useful for storing user-state information?

    - by hamlin11
    This question may expose my ignorance as a web developer, but that wouldn't exactly be a bad thing for me, now would it?

    I have the need to store user-state information. Examples of information that I need to store per user (define user: unauthenticated visitor):

    - User arrived at the site from Google/Bing/Yahoo
    - User utilized the search feature (true/false)
    - List of previously visited product pages on the current visit

    It is my understanding that I could store this in the viewstate, but that causes a problem with page load from the end-user's perspective, because a significant amount of non-viewable information is being transferred to and from the end-user even though the server is the only side that needs the info. On a similar note, it is my understanding that session state can be used to store such information, but does this not also result in the same information being transferred to the user and stored in their cookie? (Not quite as bad as viewstate, but it does not feel ideal.)

    This leaves me with either a server-only session storage system or a memcaching solution. Is memcached the only good option here?

  • Was Visual Studio 2008 or 2010 written to use multiple cores?

    - by Erx_VB.NExT.Coder
    Basically I want to know if the Visual Studio IDE and/or compiler in 2010 was written to make use of a multi-core environment (I understand we can target multi-core environments in '08 and '10, but that is not my question). I am trying to decide whether I should get a higher-clocked dual core or a lower-clocked quad core, as I want to figure out which processor will give me the absolute best possible experience with Visual Studio 2010 (IDE and background compiler).

    If the most important section (the background compiler and other IDE tasks) runs in one core, then that core will become the bottleneck sooner on a quad core, especially if the background compiler is the heaviest task. I would imagine this would be difficult to separate into more than one process, so even if VS uses multiple cores you might still be better off going for a higher-clocked CPU if the majority of the processing is still bound to occur in one core (i.e. the most significant part of the VS environment).

    I am a VB programmer. They've made great performance improvements in Beta 2, congrats, but I would love to be able to use VS seamlessly... anyone have any ideas?

    Thanks, erx

  • Rapid Opening and Closing System.IO.StreamWriter in C#

    - by ccomet
    Suppose you have a file that you are programmatically logging information into with regard to a process - kinda like your typical debug Console.WriteLine, but due to the nature of the code you're testing, you don't have a console to write to, so you have to write it somewhere like a file. My current program uses System.IO.StreamWriter for this task.

    My question is about the approach to using the StreamWriter. Is it better to open just one StreamWriter instance, do all of the writes, and close it when the entire process is done? Or is it a better idea to open a new StreamWriter instance to write a line into the file, then immediately close it, and do this every time something needs to be written? In the latter approach, this would probably be facilitated by a method that does just that for a given message, rather than bloating the main process code with excessive amounts of lines. But having a method to aid in that implementation doesn't necessarily make it the better choice. Are there significant advantages to picking one approach or the other? Or are they functionally equivalent, leaving the choice on the shoulders of the programmer?
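
    A sketch of the two approaches side by side (the file name and helper names are illustrative):

        using System.IO;

        class Logging
        {
            // Approach 1: one writer held open for the whole run.
            static void LogAll(string[] messages)
            {
                using (var writer = new StreamWriter("process.log", append: true))
                {
                    foreach (var m in messages)
                        writer.WriteLine(m);
                }   // flushed and closed exactly once
            }

            // Approach 2: open/write/close per message, wrapped in a helper.
            static void Log(string message)
            {
                using (var writer = File.AppendText("process.log"))
                {
                    writer.WriteLine(message);   // opens and closes on every call
                }
            }
        }

    They are not equivalent in cost: approach 2 pays for an open/close per line, but it leaves the file unlocked between writes and loses less buffered data if the process dies mid-run.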

  • Expanders inside listbox leaving blank space on collapse

    - by siz
    We have a rather complex UI that is presenting some problems for us. I have a ListBox that contains a set of DataItems. The DataTemplate for each item is an Expander; the header is text, and the content of the Expander is a ListBox. That ListBox contains SubDataItems, and the DataTemplate for each SubDataItem is an Expander. Here is a simplified XAML in which I reproduce the issue:

        <ListBox ItemsSource="{Binding Items}">
            <ListBox.ItemTemplate>
                <DataTemplate>
                    <Expander Header="{Binding Header}">
                        <ListBox ItemsSource="{Binding SubItems}">
                            <ListBox.ItemTemplate>
                                <DataTemplate>
                                    <Expander Header="{Binding SubHeader}">
                                        <Grid Height="40">
                                            <TextBlock Text="{Binding SubText}" />
                                        </Grid>
                                    </Expander>
                                </DataTemplate>
                            </ListBox.ItemTemplate>
                        </ListBox>
                    </Expander>
                </DataTemplate>
            </ListBox.ItemTemplate>
        </ListBox>

    There is a problem with how the layout is produced. If any Expander corresponding to a SubDataItem is expanded, the ListBoxItem containing this ListBox (the Expander.Content in the parent DataTemplate) correctly requests more space. So I can expand all SubDataItems and correctly see my data. However, when I collapse, the space I previously asked for remains blank instead of being reclaimed by the ListBoxItem.

    This is a problem because if I have, say, 10 SubDataItems and happen to expand all of them at the same time and then collapse, there is a significant amount of white space wasting my real estate. How can I force WPF to resize the ListBoxItem to the correct state?
