Search Results

Search found 15866 results on 635 pages for 'css practice'.


  • Best way to unit test Collection?

    - by limc
    I'm just wondering how folks unit test and assert that the "expected" collection is the same as the "actual" collection (order is not important). To perform this assertion, I wrote my own simple assert API:

        public void assertCollection(Collection<?> expectedCollection, Collection<?> actualCollection) {
            assertNotNull(expectedCollection);
            assertNotNull(actualCollection);
            assertEquals(expectedCollection.size(), actualCollection.size());
            assertTrue(expectedCollection.containsAll(actualCollection));
            assertTrue(actualCollection.containsAll(expectedCollection));
        }

    Well, it works, and it's pretty simple if I'm asserting just a bunch of Integers or Strings. It can be pretty painful, though, if I'm trying to assert a collection of, say, Hibernate domain objects. collection.containsAll(..) relies on equals(..) to perform the check, but I always override equals(..) in my Hibernate domain classes to check only the business keys (the best practice stated on the Hibernate website), not all the fields of the domain object. Sure, it makes sense to check just the business keys, but there are times I really want to make sure all the fields are correct, not just the business keys (for example, a new data-entry record). In those cases I can't mess with domain.equals(..), and it almost seems like I need to implement comparators just for unit-testing purposes instead of relying on collection.containsAll(..). Are there testing libraries I could leverage here? How do you test your collections? Thanks.
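
    For the order-insensitive comparison itself, Hamcrest (bundled with JUnit 4.4+) already provides a matcher. A hedged sketch of how it might look:

        import static org.hamcrest.MatcherAssert.assertThat;
        import static org.hamcrest.Matchers.containsInAnyOrder;

        import java.util.Arrays;
        import java.util.Collection;
        import org.junit.Test;

        public class CollectionAssertTest {
            @Test
            public void contentsMatchIgnoringOrder() {
                Collection<Integer> expected = Arrays.asList(1, 2, 3);
                Collection<Integer> actual = Arrays.asList(3, 1, 2);
                // Passes if both sides hold the same elements in any order,
                // and reports the exact mismatch when they don't.
                assertThat(actual, containsInAnyOrder(expected.toArray()));
            }
        }

    For the all-fields case, org.hamcrest.beans.SamePropertyValuesAs.samePropertyValuesAs(expectedBean) compares every bean property without touching the domain class's equals(..).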

    Read the article

  • When is the reintegrate option really necessary?

    - by Tor Hovland
    If you always sync a feature branch before you merge it back, why do you really have to use the --reintegrate option? The Subversion book says:

        When merging your branch back to the trunk, however, the underlying
        mathematics is quite different. Your feature branch is now a mishmosh
        of both duplicated trunk changes and private branch changes, so there's
        no simple contiguous range of revisions to copy over. By specifying the
        --reintegrate option, you're asking Subversion to carefully replicate
        only those changes unique to your branch. (And in fact, it does this by
        comparing the latest trunk tree with the latest branch tree: the
        resulting difference is exactly your branch changes!)

    So the --reintegrate option only merges the changes that are unique to the feature branch. But if you always sync before merging (which is a recommended practice, in order to deal with any conflicts on the feature branch), then the only changes between the branches are the changes that are unique to the feature branch, right? And if Subversion tries to merge code that is already on the target branch, it will just do nothing, right?

    In this blog post, http://blogs.open.collab.net/svn/2008/07/subversion-merg.html, Mark Phippard writes:

        If we include those synched revisions, then we merge back changes that
        already exist in trunk. This yields unnecessary and confusing conflicts.

    Can somebody give me an example of when dropping --reintegrate gives me unnecessary conflicts?
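
    For reference, a sketch of the workflow being discussed (repository paths hypothetical):

        # On the feature branch working copy: sync from trunk first,
        # resolving any conflicts here on the branch.
        svn merge ^/project/trunk
        svn commit -m "Sync feature branch with trunk"

        # On a clean trunk working copy: merge back only the branch-unique changes.
        svn merge --reintegrate ^/project/branches/feature
        svn commit -m "Reintegrate feature branch into trunk"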

    Read the article

  • Returning pointers in a thread-safe way

    - by Roddy
    Assume I have a thread-safe collection of Things (call it a ThingList), and I want to add the following function:

        Thing* ThingList::findByName(string name)
        {
            return &item[name]; // or something similar...
        }

    But by doing this, I've delegated the responsibility for thread safety to the calling code, which would have to do something like this:

        try
        {
            list.lock(); // NEEDED FOR THREAD SAFETY
            Thing* foo = list.findByName("wibble");
            foo->Bar = 123;
            list.unlock();
        }
        catch (...)
        {
            list.unlock();
            throw;
        }

    Obviously an RAII lock/unlock object would simplify/remove the try/catch/unlocks, but it's still easy for the caller to forget. There are a few alternatives I've looked at:

    - Return Thing by value instead of a pointer - fine unless you need to modify the Thing.
    - Add a function ThingList::setItemBar(string name, int value) - fine, but these tend to proliferate.
    - Return a pointer-like object which locks the list on creation and unlocks it again on destruction. I'm not sure if this is good or bad practice.

    What's the right approach to dealing with this?
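
    A minimal sketch of the third option: a locking proxy whose lifetime holds the list's lock. This uses C++11's <mutex> for brevity and assumes the list exposes a mutex member; none of these names come from the original.

        #include <mutex>
        #include <string>

        class ThingRef {
        public:
            ThingRef(std::mutex& m, Thing* t) : lock_(m), thing_(t) {}
            Thing* operator->() { return thing_; }  // access only while locked
        private:
            std::unique_lock<std::mutex> lock_;     // unlocks in the destructor;
                                                    // movable, so ThingRef can be returned by value
            Thing* thing_;
        };

        ThingRef ThingList::findByName(const std::string& name)
        {
            return ThingRef(mutex_, &item[name]);   // mutex_ is an assumed member
        }

        // Usage: the list stays locked exactly as long as the temporary lives.
        // list.findByName("wibble")->Bar = 123;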

    Read the article

  • What characters are NOT escaped with a mysqli prepared statement?

    - by barfoon
    Hey everyone, I'm trying to harden some of my PHP code and use mysqli prepared statements to better validate user input and prevent injection attacks. I switched away from mysqli_real_escape_string because it does not escape % and _. However, when I create my query as a mysqli prepared statement, the same flaw is still present. The query pulls a user's salt value based on their username; I'd do something similar for passwords and other lookups. Code:

        $db = new sitedatalayer();

        if ($stmt = $db->_conn->prepare("SELECT `salt` FROM admins WHERE `username` LIKE ? LIMIT 1")) {
            $stmt->bind_param('s', $username);
            $stmt->execute();
            $stmt->bind_result($salt);

            while ($stmt->fetch()) {
                printf("%s\n", $salt);
            }

            $stmt->close();
        } else {
            return false;
        }

    Am I composing the statement correctly? If I am, what other characters need to be examined? What other flaws are there? What is best practice for doing these types of selects? Thanks.
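
    One point worth separating out, sketched below: bound parameters already prevent injection, but % and _ are LIKE wildcards rather than injection characters, so the statement has to handle them itself - either by using = when no pattern matching is wanted, or by escaping the wildcards in the bound value first (addcslashes is one way to do that):

        // Exact match: wildcards in $username are then matched literally.
        $stmt = $db->_conn->prepare("SELECT `salt` FROM admins WHERE `username` = ? LIMIT 1");
        $stmt->bind_param('s', $username);

        // If LIKE is genuinely needed, escape the wildcard characters first.
        $pattern = addcslashes($username, '%_\\');
        $stmt = $db->_conn->prepare("SELECT `salt` FROM admins WHERE `username` LIKE ? LIMIT 1");
        $stmt->bind_param('s', $pattern);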

    Read the article

  • How to Implement an Interface that Requires Duplicate Member Names?

    - by Will Marcouiller
    I often have to implement interfaces such as IEnumerable<T> in my code. Each time, when implementing automatically, I encounter the following:

        public IEnumerator<T> GetEnumerator()
        {
            // Code here...
        }

        public IEnumerator GetEnumerator1()
        {
            // Code here...
        }

    Though I have to implement both GetEnumerator() methods, they can't both have the same name, even though we understand that they do the same thing, somehow. The compiler can't treat one as an overload of the other, because only the return type differs. When doing so, I manage to set the GetEnumerator1() accessor to private. This way, the compiler doesn't complain about not implementing the interface member, and I simply throw a NotImplementedException within the method's body. However, I wonder whether this is good practice, or whether I should proceed differently, perhaps with a method alias or something like that. What is the best approach when implementing an interface such as IEnumerable<T> that requires the implementation of two different methods with the same name?

    EDIT #1: Does VB.NET react differently from C# when implementing interfaces, since in VB.NET interface members are implemented explicitly, thus forcing the GetEnumerator1() name? Here's the code:

        Public Function GetEnumerator() As System.Collections.Generic.IEnumerator(Of T) _
                Implements System.Collections.Generic.IEnumerable(Of T).GetEnumerator
            ' Code here...
        End Function

        Public Function GetEnumerator1() As System.Collections.IEnumerator _
                Implements System.Collections.IEnumerable.GetEnumerator
            ' Code here...
        End Function

    Both GetEnumerator() methods are explicitly implemented, and the compiler will refuse to let them have the same name. Why?
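
    In C#, the usual way out of this is explicit interface implementation, which avoids both the second public name and the NotImplementedException. A minimal sketch:

        using System.Collections;
        using System.Collections.Generic;

        public class MyCollection<T> : IEnumerable<T>
        {
            private readonly List<T> items = new List<T>();

            public IEnumerator<T> GetEnumerator()
            {
                return items.GetEnumerator();
            }

            // Explicit implementation: reachable only through an IEnumerable
            // reference, so there is no name clash and no second public method.
            IEnumerator IEnumerable.GetEnumerator()
            {
                return GetEnumerator();
            }
        }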

    Read the article

  • Powershell: splatting after passing hashtable by reference

    - by user1815871
    Powershell newbie here... I recently learned about splatting - very useful. I ran into a snag when I passed a hash table by reference to a function for splatting purposes. (For brevity's sake, a silly example.)

        Function AllMyChildren {
            param (
                [ref]$ReferenceToHash
            )
            Get-ChildItem @ReferenceToHash.Value # etc. etc.
        }

        $MyHash = @{
            'path'    = '*'
            'include' = '*.ps1'
            'name'    = $null
        }

        AllMyChildren ([ref]$MyHash)

    Result: an error ("Splatted variables cannot be used as part of a property or array expression. Assign the result of the expression to a temporary variable then splat the temporary variable instead."). I tried this afterward:

        $NewVariable = $ReferenceToHash.Value
        Get-ChildItem @NewVariable

    That did work, and it seems right per the error message. But is it the preferred syntax in a case like this? (An "oh look, it actually worked" solution isn't always a best practice. My approach here strikes me as Perl-minded; perhaps in PowerShell passing by value is better, though I don't yet know the syntax for it with respect to a hash table.)
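
    A hedged sketch of the more idiomatic form: PowerShell hashtables are .NET reference objects, so passing one as an ordinary parameter already lets the function see the caller's table, and the parameter itself can then be splatted directly (parameter names here are made up):

        Function AllMyChildren {
            param (
                [hashtable]$GetChildItemArgs
            )
            # Splat the parameter itself; no [ref] and no temporary needed.
            Get-ChildItem @GetChildItemArgs
        }

        $MyHash = @{
            'path'    = '*'
            'include' = '*.ps1'
        }

        AllMyChildren $MyHash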

    Read the article

  • Extremely CPU Intensive Alarm Clock

    - by SoulBeaver
    For some reason my program, a console alarm clock I made for laughs and practice, is extremely CPU intensive. It consumes about 2 MB of RAM, which is already quite a bit for such a small program, but it devastates my CPU with over 50% of its resources at times. Most of the time my program is doing nothing except counting down the seconds, so I guess this part of my program is the one that's causing so much strain, though I don't know why. If so, could you please recommend a way of reducing it, or perhaps a library to use instead if the problem can't be easily solved?

        /* The wait function waits exactly one second before returning to the *
         * calling function.                                                  */
        void wait( const int &seconds )
        {
            clock_t endwait; // Type needed to compare with clock()
            endwait = clock() + ( seconds * CLOCKS_PER_SEC );
            while( clock() < endwait ) {} // Nothing need be done here.
        }

    In case anybody browses CPlusPlus.com, this is a genuine copy/paste of the example they have written for clock(), which is why the comment // Nothing need be done here is so lackluster. I'm not entirely sure what exactly clock() does yet.

    The rest of the program calls two other functions that only activate every sixty seconds, otherwise returning to the caller and counting down another second, so I don't think that's too CPU intensive - though I wouldn't know; this is my first attempt at optimizing code. The first function is a console clear using system("cls") which, I know, is really, really slow and not a good idea. I will be changing that post-haste, but since it only activates every 60 seconds and there is a noticeable lag spike, I know this isn't the problem most of the time. The second function rewrites the content of the screen with the updated remaining time, also only every sixty seconds. I will edit in the function that calls wait, clearScreen, and display if it's clear that this function is not the problem. I already tried to pass most variables by reference so they are not copied, as well as avoid endl, as I heard it's a little slow compared to "\n".
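
    The loop above is a busy-wait: it keeps the CPU spinning at full speed checking clock() until the deadline passes, which is exactly what produces the 50% load (one fully occupied core). A hedged sketch of the usual fix, yielding the CPU to the OS instead; this uses C++11's <thread>, and on older compilers the platform equivalents would be Sleep() on Windows or nanosleep()/usleep() on POSIX:

        #include <chrono>
        #include <thread>

        // Sleeps instead of spinning: the thread is descheduled by the OS,
        // so CPU use during the wait drops to roughly zero.
        void wait(const int& seconds)
        {
            std::this_thread::sleep_for(std::chrono::seconds(seconds));
        }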

    Read the article

  • Groovy htmlunit getFirstByXPath returning null

    - by StartingGroovy
    I have had a few issues with HtmlUnit returning nulls lately and am looking for guidance. Each of my attempts to grab the first row of the website has returned null. I am wondering if someone can (a) explain why they might be returning null, and (b) suggest better ways (if there are any) to go about getting the information. Here is my current code (the URL is in the source):

        client = new WebClient(BrowserVersion.FIREFOX_3)
        client.javaScriptEnabled = false
        def url = "http://www.hidemyass.com/proxy-list/"
        page = client.getPage(url)

        IpAddress = page.getFirstByXPath("//html/body/div/div/form/table/tbody/tr/td[2]").getValue()
        println "IP Address is: $data" // returns null

        // Port_Number is an image

        Country = page.getFirstByXPath("//html/body/div/div/form/table/tbody/tr/td[4][@class='country']/@rel").getValue()
        println "Country abbreviation is: $Country"

        // differentiate speed and connection by name of gif?
        Type = page.getFirstByXPath("//html/body/div/div/form/table/tbody/tr/td[7]").getValue()
        println "Proxy type is: $Type"

        Anonymity = page.getFirstByXPath("//html/body/div/div/form/table/tbody/tr/td[8]").getValue()
        println "Anonymity Level is: $Anonymity"

        client.closeAllWindows()

    Right now all of my XPaths return null, and .getValue() obviously doesn't work on null. I also have questions about what I should do with the PORT, since it is an image. Is there a better alternative than downloading it and attempting to solve it by OCR?

    Side note: there is no significance to this site; I was just looking for a site I could practice scraping on (on the last one I ran into issues with fragment identities and couldn't get an answer: HtmlUnit getByXpath returns null and HtmlUnit and Fragment Identities).
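
    One hedged debugging step: the XPath runs against HtmlUnit's own parse tree, not what a browser shows, so an explicit /tbody/ step (or markup the parser rearranged) can make an otherwise plausible path return null. Dumping what HtmlUnit actually parsed, and trying a looser path, usually narrows it down:

        // See the tree the XPath actually runs against.
        println page.asXml()

        // A looser path that doesn't depend on an explicit <tbody>:
        def cell = page.getFirstByXPath("//table//tr/td[2]")
        if (cell != null) {
            println "Cell text: ${cell.getTextContent()}"
        }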

    Read the article

  • Open-sourcing a web site with active users?

    - by Lars Yencken
    I currently run several research-related web sites with active users, and these sites use some personally identifying information about those users (their email address, IP address, and query history). Ideally I'd release the code to these sites as open source, so that other people could easily run similar sites and, more importantly, scrutinise and replicate my work, but I haven't been comfortable doing so, since I'm unsure of the security implications. For example, I wouldn't want my users' details to be accessed or distributed by a third party who found some flaw in my site, something which might be easy to do with full source access. I've tried going halfway by refactoring the (Django) site into more independent modules and releasing those, but this is very time-consuming, and in practice I've never gotten around to releasing enough that a third party can replicate the site(s) easily. I also feel that maybe I'm kidding myself, and that this process is really no different from releasing the full source. What would you recommend in cases like this? Would you open-source the site and take the risk? As an alternative, would you advertise the source as "available upon request" to other researchers, so that you at least know who has the code? Or would you just apologise to them and keep it closed in order to protect users?

    Read the article

  • Is it better to test if a function is needed inside or outside of it?

    - by b0x0rz
    What is the best practice: call a function and have it return early if a test inside it fails, or perform the test before calling it? I prefer the test inside the function, because it makes for easier viewing of what functions are called. For example:

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            this.FixURLCosmetics();
        }

    and

        private void FixURLCosmetics()
        {
            HttpContext context = HttpContext.Current;

            if (!context.Request.HttpMethod.ToString().Equals("GET", StringComparison.OrdinalIgnoreCase))
            {
                // If not a GET method, cancel url cosmetics
                return;
            }

            string url = context.Request.RawUrl.ToString();
            bool doRedirect = false;

            // Remove > default.aspx
            if (url.EndsWith("/default.aspx", StringComparison.OrdinalIgnoreCase))
            {
                url = url.Substring(0, url.Length - 12);
                doRedirect = true;
            }

            // Remove > www
            if (url.Contains("//www"))
            {
                url = url.Replace("//www", "//");
                doRedirect = true;
            }

            // Redirect if necessary
            if (doRedirect)
            {
                context.Response.Redirect(url);
            }
        }

    Is this good:

        if (!context.Request.HttpMethod.ToString().Equals("GET", StringComparison.OrdinalIgnoreCase))
        {
            // If not a GET method, cancel url cosmetics
            return;
        }

    or should that test be done in Application_BeginRequest? Which is better? Thanks.

    Read the article

  • Keeping Linq to SQL alive when using ViewModels (ASP.NET MVC)

    - by Kohan
    I have recently started using custom ViewModels (for example, CustomerViewModel):

        public class CustomerViewModel
        {
            public IList<Customer> Customers { get; set; }
            public int ProductId { get; set; }

            public CustomerViewModel(IList<Customer> customers, int productId)
            {
                this.Customers = customers;
                this.ProductId = productId;
            }

            public CustomerViewModel() { }
        }

    ...and am now passing them to my views instead of the entities themselves (for example, var custs = repository.getAllCusts(id)), as it seems good practice to do so. The problem I have encountered is that when using ViewModels, by the time execution gets to the view I have lost the ability to lazy-load on Customers. I do not believe this was the case before using them. Is it possible to retain the ability to lazy-load while still using ViewModels, or do I have to eager-load when using this method? Thanks, Kohan.
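
    If eager loading turns out to be the way to go, a hedged sketch of the LINQ to SQL mechanism for it, DataLoadOptions; the ShopDataContext name and the Customer-to-Orders relationship are assumed for illustration:

        using System.Data.Linq;
        using System.Linq;

        using (var context = new ShopDataContext())
        {
            var options = new DataLoadOptions();
            // Fetch the related rows in the same query, so the view never
            // needs the (by then disposed) DataContext for lazy loads.
            options.LoadWith<Customer>(c => c.Orders);
            context.LoadOptions = options;

            var customers = context.Customers.ToList();
            var model = new CustomerViewModel(customers, 42); // 42: example product id
        }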

    Read the article

  • Simplest way to mix sequences of types with iostreams?

    - by Kylotan
    I have a function template <typename T> void write(const T&), which is implemented in terms of writing the T object to an ostream, and a matching template <typename T> T read() that reads a T from an istream. I am basically using iostreams as a plain-text serialisation format, which obviously works fine for most built-in types, although I'm not sure how to effectively handle std::strings just yet. I'd like to be able to write out a sequence of objects too, e.g. template <typename T> void write(const std::vector<T>&), or an iterator-based equivalent (although in practice it would always be used with a vector). However, while writing an overload that iterates over the elements and writes them out is easy enough to do, this doesn't add enough information to allow the matching read operation to know how each element is delimited, which is essentially the same problem I have with a single std::string. Is there a single approach that can work for all basic types and std::string? Or perhaps I can get away with two overloads: one for numerical types and one for strings? (Either using different delimiters, or the string using a delimiter-escaping mechanism, perhaps.)
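
    A hedged sketch of one self-delimiting scheme: length-prefix every sequence, and write strings with std::quoted (C++14, in <iomanip>), which escapes embedded quotes so the reader always knows where each item ends. The two-argument signatures below are illustrative, not the original API:

        #include <iomanip>
        #include <istream>
        #include <ostream>
        #include <string>
        #include <vector>

        template <typename T>
        void write(std::ostream& os, const T& x) { os << x << ' '; }

        void write(std::ostream& os, const std::string& s) { os << std::quoted(s) << ' '; }

        template <typename T>
        void write(std::ostream& os, const std::vector<T>& v)
        {
            write(os, v.size());                  // length prefix delimits the sequence
            for (const T& e : v) write(os, e);
        }

        template <typename T>
        void read(std::istream& is, T& x) { is >> x; }

        void read(std::istream& is, std::string& s) { is >> std::quoted(s); }

        template <typename T>
        void read(std::istream& is, std::vector<T>& v)
        {
            std::size_t n = 0;
            read(is, n);                          // read the length prefix back
            v.resize(n);
            for (T& e : v) read(is, e);
        }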

    Read the article

  • Swing modal dialog refuses to close - sometimes!

    - by Zarkonnen
        // This is supposed to show a modal dialog and then hide it again. In practice,
        // this works about 75% of the time, and the other 25% of the time, the dialog
        // stays visible.
        // This is on Ubuntu 10.10, running:
        //   OpenJDK Runtime Environment (IcedTea6 1.9) (6b20-1.9-0ubuntu1)
        // This always prints
        //   setVisible(true) about to happen
        //   setVisible(false) about to happen
        //   setVisible(false) has just happened
        // even when the dialog stays visible.

        package modalproblemdemo;

        import java.awt.Frame;
        import javax.swing.JDialog;
        import javax.swing.SwingUtilities;

        public class Main {
            public static void main(String[] args) {
                final Dialogs d = new Dialogs();
                new Thread() {
                    @Override
                    public void run() {
                        d.show();
                        d.hide();
                    }
                }.start();
            }

            static class Dialogs {
                final JDialog dialog;

                public Dialogs() {
                    dialog = new JDialog((Frame) null, "Hello World", /* modal */ true);
                    dialog.setSize(400, 200);
                }

                public void show() {
                    SwingUtilities.invokeLater(new Runnable() {
                        public void run() {
                            dialog.setLocationRelativeTo(null);
                            System.out.println("setVisible(true) about to happen");
                            dialog.setVisible(true);
                        }
                    });
                }

                public void hide() {
                    SwingUtilities.invokeLater(new Runnable() {
                        public void run() {
                            System.out.println("setVisible(false) about to happen");
                            dialog.setVisible(false);
                            System.out.println("setVisible(false) has just happened");
                        }
                    });
                }
            }
        }
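
    One hedged workaround for this kind of show/hide race (a sketch of the idea, not a verified fix for this particular JDK): make the background thread wait until the dialog has actually appeared before queueing the hide, e.g. with a CountDownLatch released from windowOpened:

        // Additions to the Dialogs class above:
        private final java.util.concurrent.CountDownLatch opened =
                new java.util.concurrent.CountDownLatch(1);

        public Dialogs() {
            dialog = new JDialog((Frame) null, "Hello World", /* modal */ true);
            dialog.setSize(400, 200);
            dialog.addWindowListener(new java.awt.event.WindowAdapter() {
                @Override
                public void windowOpened(java.awt.event.WindowEvent e) {
                    opened.countDown(); // the native window is now really on screen
                }
            });
        }

        public void hide() throws InterruptedException {
            // Don't ask to hide a window that hasn't appeared yet; a plain
            // Thread's run() would wrap this call in try/catch.
            opened.await();
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    dialog.setVisible(false);
                }
            });
        }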

    Read the article

  • Does everything after my try statement have to be encompassed in that try statement to access variables?

    - by Mithrax
    I'm learning Java, and one thing I've found that I don't like is code like this:

        import java.util.*;
        import java.io.*;

        public class GraphProblem {
            public static void main(String[] args) {
                if (args.length < 2) {
                    System.out.println("Error: Please specify a graph file!");
                    return;
                }
                FileReader in = new FileReader(args[1]);
                Scanner input = new Scanner(in);
                int size = input.nextInt();
                WeightedGraph graph = new WeightedGraph(size);
                for (int i = 0; i < size; i++) {
                    graph.setLabel(i, Character.toString((char) ('A' + i)));
                }
                for (int i = 0; i < size; i++) {
                    for (int j = 0; j < size; j++) {
                        graph.addEdge(i, j, input.nextInt());
                    }
                }
                // .. lots more code
            }
        }

    I have an uncaught exception around my FileReader, so I have to wrap it in a try-catch to catch that specific exception. My question: does that try { } have to encompass everything after it in my method that wants to use either my FileReader (in) or my Scanner (input)? If I don't wrap the whole remainder of the program in that try statement, then anything outside of it can't access in or input, because they may not have been initialized or were initialized outside of its scope. So I can't isolate the try-catch to just the portion that initializes the FileReader and close the try statement immediately after that. Is it "best practice" to have the try statement wrap all portions of the code that access variables initialized in it? Thanks!
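
    One common pattern that keeps the try block small: declare the variable before the try, assign it inside, and bail out in the catch, so everything after it can use the variable without being wrapped. A sketch of the idea applied here:

        FileReader in = null;
        try {
            in = new FileReader(args[1]);
        } catch (FileNotFoundException e) {
            System.out.println("Error: could not open " + args[1]);
            return; // nothing below runs without an opened file
        }
        Scanner input = new Scanner(in); // in is definitely assigned here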

    Read the article

  • Why do I need to give my options a value attribute in my dropdown? JQuery related.

    - by Alex
    So far in my web development experience, I've noticed that almost all web developers/designers choose to give the options in a select a value, like so:

        <select name="foo">
            <option value="bar">BarCheese</option>
            <!-- etc. -->
        </select>

    Is this because it is best practice to do so? I ask because I have done a lot of work with jQuery and dropdowns lately, and sometimes I get really annoyed when I have to check something like:

        $('select[name=foo]').val() == "bar";

    To me, that is often less clear than just being able to check val() against "BarCheese". So why do most web developers/designers specify a value attribute instead of just letting the option's text be its value? And yes, if the option has a value attribute, I know I can do something like this:

        $('select[name=foo] option:contains("BarCheese")').attr('selected', 'selected');

    But I would still really like to know why this is done. Thanks!!
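
    For what it's worth, the fallback behaviour is checkable: when an option has no value attribute, its text content is used as the submitted value, and jQuery's .val() reflects that. A small sketch (assumes jQuery is loaded):

        <select name="foo">
            <option>BarCheese</option>   <!-- no value attribute -->
        </select>

        <script>
            // With no value attribute, .val() falls back to the option text.
            console.log($('select[name=foo]').val()); // "BarCheese"
        </script>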

    Read the article

  • What should I do with an over-bloated select-box/drop-down

    - by Tristan Havelick
    All web developers run into this problem when the amount of data in their project grows, and I have yet to see a definitive, intuitive best practice for solving it. When you start a project, you often create forms with <select> tags to help pick related objects for one-to-many relationships. For instance, I might have a system with Neighbors, where each Neighbor belongs to a Neighborhood. In version 1 of the application, I create an edit form with a drop-down that simply lists the 5 possible neighborhoods in my geographically limited application. In the beginning this works great: so long as I have maybe 100 records or fewer, my select box loads quickly and is fairly easy to use. However, let's say my application takes off and goes national. Instead of 5 neighborhoods I have 10,000. Suddenly my little drop-down takes forever to load, and once it loads, it's hard to find your neighborhood in the massive alphabetically sorted list. Now, in this particular situation, having hierarchical data and letting users drill down using several dynamically generated drop-downs would probably work okay. However, what is the best solution when the objects/records being selected are not hierarchical in nature? In the past, I've done this with a popup containing a search box and a list, but that seems clunky and dated. In today's web 2.0 world, what is a good way to find one object amongst many for one's forms? I've considered using an Ajaxified search box, but this seems to work best for free text, and falls apart a little when the data to be saved is just a reference to another object or record. Feel free to cite specific libraries with generic solutions to this problem, or simply share what you have done in your projects in a more general way.
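
    One pattern that replaces the giant drop-down: an Ajax autocomplete whose visible box searches by name while a hidden input carries the selected record's id, so what gets saved is still a reference, not free text. A sketch using jQuery UI's autocomplete; the /neighborhoods/search endpoint and field names are hypothetical:

        <input id="neighborhood-name" placeholder="Start typing a neighborhood..." />
        <input type="hidden" id="neighborhood-id" name="neighborhood_id" />

        <script>
            $('#neighborhood-name').autocomplete({
                source: '/neighborhoods/search', // returns [{label: "...", value: "...", id: 123}, ...]
                minLength: 2,                    // don't query until 2 characters are typed
                select: function (event, ui) {
                    // Save the reference, not the free text.
                    $('#neighborhood-id').val(ui.item.id);
                }
            });
        </script>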

    Read the article

  • Multiple Solution Layout for ASP.NET Web Portal?

    - by Jared S
    At work, we've developed a custom ASP.NET web portal (very similar to iGoogle). We have "Apps" (self-contained, large web forms) and "Modules" (similar to Google Gadgets). Currently we use a single-solution model. Right now, we have:

    - 3 core projects
    - 60 application projects
    - 80 module projects

    To reduce copying and pasting between projects, we're going to factor out common functionality (data access, business logic) into separate projects. I'd also like to introduce unit tests, which is going to increase the number of projects even more. We've already reached the point where Visual Studio is choking on the number of projects, so we generally only load the 3 core projects and then whichever app's or module's project we're working on. Would a different solution structure help us out? Our number of projects is only going to increase. In general, an app or module references only the 3 core projects. Soon, apps/modules may start referencing the data access/business logic projects, but in general, apps and modules do not reference one another. So, to recap: what is the best practice for solution structure when there are MANY projects that use a small number of core projects?

    Read the article

  • Omit return type in C++0x

    - by Clinton
    I've recently found myself using the following macro with GCC 4.5 in C++0x mode:

        #define RETURN(x) -> decltype(x) { return x; }

    and writing functions like this:

        template <class T>
        auto f(T&& x) RETURN(( g(h(std::forward<T>(x))) ))

    I've been doing this to avoid the inconvenience of having to effectively write the function body twice, and of having to keep changes in the body and the return type in sync (which in my opinion is a disaster waiting to happen). The problem is that this technique only works for one-line functions. So when I have something like this (convoluted example):

        template <class T>
        auto f(T&& x) -> ...
        {
            auto y1 = f(x);
            auto y2 = h(y1, g1(x));
            auto y3 = h(y1, g2(x));
            if (y1) { ++y3; }
            return h2(y2, y3);
        }

    then I have to put something horrible in the return type. Furthermore, whenever I update the function, I'll need to change the return type, and if I don't change it correctly I'll get a compile error if I'm lucky, or a runtime bug in the worse case. Having to copy and paste changes to two locations and keep them in sync is not, I feel, good practice. And I can't think of a situation where I'd want an implicit cast on return instead of an explicit cast. Surely there is a way to ask the compiler to deduce this information. What is the point of the compiler keeping it a secret? I thought C++0x was designed so such duplication would not be required.
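
    For what it's worth, this is exactly what a later standard added: C++14 return type deduction lets a plain auto function deduce its return type from its return statements, with no trailing decltype at all (not available in the C++0x/GCC 4.5 toolchain the question targets). A minimal illustration:

        // Compile with -std=c++14 or later.
        template <class T>
        auto twice_plus_one(T&& x)
        {
            auto y = x + x;
            ++y;
            return y; // return type deduced from this statement
        }

        int main()
        {
            return twice_plus_one(20) == 41 ? 0 : 1;
        }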

    Read the article

  • Learning PHP - start out using a framework or no?

    - by Kevin Torrent
    I've noticed a lot of jobs in my area for PHP. I've never used PHP before, and I figure that if I can get more opportunities by picking it up, it might be a good idea. The problem is that PHP without any framework is ugly, and 99% of the time really bad code. All the tutorials and books I've seen are really lousy - they never show any kind of good programming practice, but always the quick and dirty way of doing things. I'm afraid that trying to learn PHP this way will just imprint these bad practices in my head and make me waste time later trying to unlearn them. I've used C# in the past, so I'm familiar with OOP and software design patterns and the like. Should I be trying to learn PHP by using one of the better-known frameworks for it? I've looked at CakePHP, Symfony, and the Zend Framework so far; Zend seems to be the most flexible without being too constraining, like Cake and Symfony (although Symfony seemed less constraining than CakePHP, which is trying too hard to be Ruby on Rails), but many tutorials for Zend assume you already know PHP and just want to learn the framework. What would be my best route for learning PHP - but GOOD PHP that uses real software engineering techniques instead of spaghetti code? It seems all the PHP books and resources either assume you are using raw PHP and therefore showcase bad practices, or assume you already know PHP and therefore don't even touch on parts of the language.

    Read the article

  • How to catch unintentional function interpositioning?

    - by SiegeX
    Reading through my book Expert C Programming, I came across the chapter on function interpositioning and how it can lead to some serious, hard-to-find bugs if done unintentionally. The example given in the book is the following:

    my_source.c:

        mktemp() { ... }

        main()
        {
            mktemp();
            getwd();
        }

    libc:

        mktemp() { ... }

        getwd() { ...; mktemp(); ... }

    According to the book, what happens in main() is that mktemp() (a standard C library function) is interposed by the implementation in my_source.c. Although having main() call my implementation of mktemp() is intended behavior, having getwd() (another C library function) also call my implementation of mktemp() is not. Apparently, this example was a real-life bug that existed in SunOS 4.0.3's version of lpr. The book goes on to explain that the fix was to add the keyword static to the definition of mktemp() in my_source.c, although changing the name altogether should have fixed the problem as well. This chapter leaves me with some unresolved questions that I hope you guys can answer:

    1. Does GCC have a way to warn about function interposition? We certainly don't ever intend on this happening, and I'd like to know about it if it does.
    2. Should our software group adopt the practice of putting the keyword static in front of all functions that we don't want to be exposed?
    3. Can interposition happen with functions introduced by static libraries?

    Thanks for the help.

    EDIT: I should note that my question is not just aimed at interposing over standard C library functions, but also functions contained in other libraries, perhaps third-party, perhaps created in-house. Essentially, I want to catch any instance of interpositioning regardless of where the interposed function resides.
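
    A minimal sketch of the book's static fix (file contents hypothetical, not the SunOS code), showing why internal linkage stops the interposition:

        /* my_source.c */
        #include <stdio.h>

        /* 'static' gives this definition internal linkage: the symbol is not
         * exported from this translation unit, so calls made inside libc
         * (e.g. getwd() calling mktemp()) can only bind to libc's own copy. */
        static char *mktemp(char *templ)
        {
            printf("my mktemp\n");
            return templ;
        }

        int main(void)
        {
            char name[] = "tmpXXXXXX";
            mktemp(name); /* binds to the static definition above */
            return 0;
        }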

    Read the article

  • What goes into main function?

    - by Woltan
    I am looking for a best-practice tip on what goes into the main function of a program written in C++. Currently I think two approaches are possible (although the "margins" of those approaches can be arbitrarily close to each other):

    1. Write a "Master" class that receives the parameters passed to the main function, and handle the complete program in that "Master" class (making use of other classes, of course). The main function is then reduced to a minimum of lines:

        #include "MasterClass.h"

        int main(int argc, char* argv[])
        {
            MasterClass MC(argc, argv);
        }

    2. Write the "complete" program in the main function, making use of user-defined objects, of course. However, there are also global functions involved, and the main function can get somewhat large.

    I am looking for some general guidelines on how to write the main function of a program in C++. I came across this issue while trying to write unit tests for the first approach, which is a little difficult since most of the methods are private. Thanks in advance for any help, suggestions, links, ... A sketch of one middle road follows.
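
    One middle road that keeps main() thin but testable: put the program logic in a free function that returns an exit code, and have main() only forward to it. A hedged sketch (the execute() method is assumed, not from the original):

        #include "MasterClass.h"

        // All logic lives in run(), which unit tests can call directly
        // with fabricated arguments. main() stays a one-line shell.
        int run(int argc, char* argv[])
        {
            MasterClass mc(argc, argv);
            return mc.execute(); // assumed entry point on MasterClass
        }

        int main(int argc, char* argv[])
        {
            return run(argc, argv);
        }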

    Read the article

  • C# performance methods of receiving data from a socket?

    - by Daniel
    Let's assume we have a simple internet socket, and it's going to send 10 megabytes (because I want to ignore memory issues) of random data through. Is there any performance difference, or a best-practice method, that one should use for receiving data? The final output data should be represented by a byte[]. Yes, I know writing an arbitrary amount of data to memory is bad, and if I were downloading a large file I wouldn't be doing it like this. But for argument's sake let's ignore that and assume it's a smallish amount of data. I also realise that the bottleneck here is probably not the memory management but rather the socket receiving. I just want to know what would be the most efficient method of receiving data. A few (dodgy) ways I can think of:

    - Have a List<byte> and a buffer; after the buffer is full, add its contents to the list, and at the end list.ToArray() gives the byte[].
    - Write the buffer to a MemoryStream; after it's complete, construct a byte[] of stream.Length and read the stream into it to get the byte[] output.

    Is there a more efficient/better way of doing this?
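
    A hedged sketch of the MemoryStream variant, which avoids the manual final copy because MemoryStream.ToArray() does it in one step:

        using System.IO;
        using System.Net.Sockets;

        static byte[] ReceiveAll(Socket socket)
        {
            var buffer = new byte[8192]; // reusable receive buffer
            using (var ms = new MemoryStream())
            {
                int read;
                // Receive returns 0 when the remote side finishes sending.
                while ((read = socket.Receive(buffer)) > 0)
                {
                    ms.Write(buffer, 0, read);
                }
                return ms.ToArray(); // single allocation + copy at the end
            }
        }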

    Read the article

  • Fast path cache generation for a connected node graph

    - by Sukasa
    I'm trying to get a faster pathfinding mechanism in place in a game I'm working on, for a connected node graph. The nodes are classed into two types, "networks" and "routers." In the picture, the blue circles represent routers and the grey rectangles networks. Each network keeps a list of which routers it is connected to, and vice versa. Routers cannot connect directly to other routers, and networks cannot connect directly to other networks.

    I need an algorithm that will map out a path, measured in the number of networks crossed, for each possible source and destination network, excluding paths where the source and destination are the same network. I have one right now, however it is unusably slow, taking about two seconds to map the paths, which becomes incredibly noticeable for all connected players. The current algorithm is a depth-first brute-force search (it was thrown together in about an hour just to get the path caching working) which returns an array of networks in the order they are traversed, which explains why it's so slow. Are there any algorithms that are more efficient?

    As a side note, while these example graphs have four networks, the in-practice graphs have 55 networks and about 20 routers in use. Paths which are not possible can also occur, and at any time the network/router graph topology can change, requiring the path cache to be rebuilt. What approach/algorithm would likely provide the best results for this type of graph?
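
    Since the cost is counted in networks crossed (an unweighted measure), one standard approach is a breadth-first search from each network over the bipartite network/router graph: BFS finds all shortest paths from one source in O(V + E), so repeating it per network rebuilds the whole cache in polynomial rather than exponential time, and unreachable networks simply never get visited. A sketch with an assumed adjacency-list layout:

        from collections import deque

        def shortest_paths_from(source, network_routers, router_networks):
            """BFS over the bipartite graph; returns {network: previous_network}
            so each shortest path can be read back by walking predecessors."""
            prev = {source: None}
            queue = deque([source])
            while queue:
                net = queue.popleft()
                for router in network_routers[net]:      # routers on this network
                    for nxt in router_networks[router]:  # networks those routers reach
                        if nxt not in prev:              # first visit = shortest
                            prev[nxt] = net
                            queue.append(nxt)
            return prev

        # Example wiring (hypothetical): two networks joined by one router.
        # nets = {"A": ["r1"], "B": ["r1"]}; routers = {"r1": ["A", "B"]}
        # shortest_paths_from("A", nets, routers) -> {"A": None, "B": "A"}

    With 55 networks and about 20 routers this is effectively instantaneous, so the cache can simply be rebuilt from scratch whenever the topology changes.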

    Read the article

  • Eclipse > Javascript > Code highlighting not working with Object Notation

    - by Redsandro
    I am using Eclipse Helios with PDT, and when I am editing JavaScript files with the default JavaScript editor (JSDT), code highlighting ("Mark Occurrences") is not working for half of the code - for example, JSON-style (or object-literal, if you will) declarations. A little example:

        Foo = {};
        Foo.Bar = Foo.Bar || {};
        Foo.Bar = {
            bar: function(str) {
                alert(str);
            },
            baz: function(str) {
                this.bar(str); // This bar *is* highlighted though
            }
        };
        Foo.Bar.baz('text');

    No Bar, bar, or baz is highlighted. For now, I humbly edit the JavaScript part of projects in Notepad++, because it just highlights every occurrence of whatever is currently selected. Is there a common practice among Eclipse JavaScript developers to get code highlighting working correctly with the popular object-literal notation? An option or update I missed?

    Update: I have found that code highlighting depends on the code being properly outlined. Although commonly used, object-literal outlining still seems rare in JavaScript editors. The Spket JavaScript Editor does partial object-literal outlining, and the Aptana JavaScript Editor does full object-literal outlining, but both lose other important functionality. A quest for the editor with the least loss of functionality is currently in progress in another question.

    Read the article

  • DB Design to store custom fields for a table

    - by Fazal
    Hi all, this question came up based on the responses I got to http://stackoverflow.com/questions/2785033/getting-wierd-issue-with-to-number-function-in-oracle. As everyone there suggested, storing numeric values in VARCHAR2 columns is not a good practice (which I totally agree with), so I am wondering about a basic design choice our team has made and whether there are better ways to design.

    Problem statement: we have many tables where we want to offer a certain number of custom fields. The number of custom fields is known in advance, but which kind of attribute is mapped to each column is up to the user.

    A hypothetical scenario: say you have a laptop product which stores 50 attribute values for every laptop record (the example below uses five for brevity). The attributes are defined by whichever admin creates the laptop. One user creates a laptop product, say lap1, with attributes String, String, numeric, numeric, String. A second user creates laptop lap2 with attributes String, numeric, String, String, numeric. Currently the data in our design gets persisted as follows:

        Laptop table:

        Id  Name  field1  field2  field3  field4  field5
        1   lap1  lappy   lappy   12      13      lappy
        2   lap2  lappy2  13      lappy2  lapp2   12

    Now, if somebody is looking up records for lap2 with a comparison on field2, we need to apply TO_NUMBER:

        select *
          from laptop
         where name = 'lap2'
           and TO_NUMBER(field2) < 15

    TO_NUMBER fails in some cases, when the query plan decides to apply TO_NUMBER before the other filter.

    Questions:

    1. Is this a valid design? What are the alternative ways to solve this problem?
    2. One of our teammates suggested creating tables on the fly for such cases. Is that a good idea?
    3. How do popular ORM tools handle custom fields or flex fields?

    I hope I was able to make sense of the question. Sorry for such a long text.
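
    One commonly suggested alternative, sketched with hypothetical names: keep a typed attribute table instead of generic VARCHAR2 slots, so numeric values live in a NUMBER column and comparisons never need TO_NUMBER:

        CREATE TABLE laptop_attribute (
            laptop_id  NUMBER        NOT NULL,
            attr_seq   NUMBER        NOT NULL,   -- position 1..50
            str_value  VARCHAR2(255),            -- filled when the attribute is a string
            num_value  NUMBER,                   -- filled when the attribute is numeric
            CONSTRAINT pk_laptop_attribute PRIMARY KEY (laptop_id, attr_seq)
        );

        -- The earlier query becomes a type-safe, indexable comparison:
        SELECT l.*
          FROM laptop l
          JOIN laptop_attribute a
            ON a.laptop_id = l.id
         WHERE l.name = 'lap2'
           AND a.attr_seq = 2
           AND a.num_value < 15;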

    Read the article
