Search Results

Search found 15051 results on 603 pages for 'meta programming'.


  • .NET / WPF Alternative

    - by eWolf
    I know the .NET Framework and WPF pretty well, but I think the whole thing has become too bloated, especially for small apps: the .NET Framework 3.5 alone weighs in at 197 MB by now. I am looking for a language/framework/library that provides functionality similar to that of WPF (animations, gradients, and so on) and the .NET Framework (not everything, of course, but the basic features), that is faster and more lightweight than .NET, and that produces smaller and faster applications than the ones using .NET. Do you have any suggestions?

    Read the article

  • Enumerating computers in NT4 domain using WNetEnumResourceW (C++) or DirectoryEntry (C#)

    - by Kevin Davis
    I'm trying to enumerate computers in NT4 domains (not Active Directory) and support Unicode NetBIOS names. According to MSDN, WNetEnumResourceW is the Unicode counterpart of WNetEnumResource, which to me implies that using it should do the trick. However, I have not been able to get Unicode NetBIOS names properly using WNetEnumResourceW. I've also tried the rough C# equivalent, DirectoryEntry with the WinNT: provider, with no luck on Unicode names either. If I use DirectoryEntry against Active Directory (using the LDAP: provider), I do get Unicode names back.

    While debugging my code using DirectoryEntry and the WinNT: provider, I noticed that the exceptions were of type System.Runtime.InteropServices.COMException, which makes me believe it is just calling WNetEnumResourceW via COM. This web page implies that for some Net APIs the MS documentation is incomplete and possibly inaccurate, which further confuses things. Additionally, I've found that the C# approach, which certainly results in cleaner, more understandable code, also yields incomplete results when enumerating computers in domains/workgroups. Does anyone have any insight into this? Is it possible that the computer acting as the WINS server is mangling the name? How would I determine this? Thanks
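
    For reference, a minimal C# sketch of the DirectoryEntry/WinNT: approach described above (it requires a reference to System.DirectoryServices, and MYDOMAIN is a placeholder for the NT4 domain or workgroup name). Note this mirrors the approach that yielded incomplete results, so it is a starting point for experiment rather than a fix:

        // Enumerate computer accounts via the WinNT: provider.
        using System;
        using System.DirectoryServices;

        class EnumComputers
        {
            static void Main()
            {
                using (var domain = new DirectoryEntry("WinNT://MYDOMAIN"))
                {
                    // Restrict the enumeration to Computer objects.
                    domain.Children.SchemaFilter.Add("Computer");
                    foreach (DirectoryEntry computer in domain.Children)
                        Console.WriteLine(computer.Name);
                }
            }
        }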

    Read the article

  • Simple IF statement question

    - by JGreig
    How can I simplify the if statements below?

        if ( isset(var1) & isset(var2) ) {
            if ( (var1 != something1) || (var2 != something2) ) {
                // ... code ...
            }
        }

    It seems like this could be condensed into a single if statement, but I'm not certain whether I'd use an AND or an OR.
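
    Assuming this is PHP (the isset calls suggest so, and that var1 is really $var1), the nesting can be collapsed with a short-circuiting AND between the two groups, keeping the OR inside parentheses; a sketch:

        <?php
        // One possible condensed form. Note && rather than the original's
        // single &, which in PHP is a bitwise operator and does not
        // short-circuit; isset() also accepts several variables at once.
        if (isset($var1, $var2) && ($var1 != $something1 || $var2 != $something2)) {
            // ... code ...
        }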

    Read the article

  • How is a referencing environment generally implemented for closures?

    - by Alexandr Kurilin
    Let's say I have a statically/lexically scoped language with deep binding, and I create a closure. The closure will consist of the statements I want executed plus the so-called referencing environment, or, to quote this post, the collection of variables which can be used. What does this referencing environment actually look like, implementation-wise?

    I was recently reading about Objective-C's implementation of blocks, and the author suggests that behind the scenes you get a copy of all of the variables on the stack, plus copies of all the references to heap objects. The explanation claims you get a "snapshot" of the referencing environment at the point of the closure's creation. Is that more or less what happens, or did I misread it? Is anything done to "freeze" a separate copy of the heap objects, or is it safe to assume that if they get modified between closure creation and closure execution, the closure will no longer be operating on the original version of the object? If there is copying being done, are there memory-usage considerations in situations where one might want to create plenty of closures and store them somewhere?

    I think that misunderstanding some of these concepts might lead to tricky issues like the ones Eric Lippert mentions in this blog post. It's interesting because you'd think it wouldn't make sense to keep a reference to a value type that might be gone by the time the closure is called, but I'm guessing that in C# the compiler will figure out that the variable is needed later and put it on the heap instead. It seems that in most memory-managed languages everything is a reference, which makes Objective-C a somewhat unique case with its copying of what's on the stack.
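
    The C# behaviour guessed at above is indeed capture-by-variable: captured locals are hoisted onto the heap in a compiler-generated class, and closures share the variable rather than snapshotting its value. A minimal sketch illustrating the consequence Lippert's post discusses:

        using System;
        using System.Collections.Generic;

        class CaptureDemo
        {
            static void Main()
            {
                var actions = new List<Action>();
                for (int i = 0; i < 3; i++)
                    actions.Add(() => Console.WriteLine(i)); // captures the variable i, not its current value
                foreach (var a in actions)
                    a(); // prints "3 3 3": all three closures share the single hoisted i
            }
        }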

    Read the article

  • Using sys/socket.h functions on windows

    - by BSchlinker
    Hello, I'm attempting to use the socket.h functions on Windows. Essentially, I'm currently looking at the sample code at http://beej.us/guide/bgnet/output/html/multipage/clientserver.html#datagram. I understand that socket.h is a Unix header -- is there any way I can easily emulate that environment while compiling this sample code? Does a different IDE/compiler change anything? Otherwise, I imagine I need to use a virtualized Linux environment, which may be best anyway, as the code will most likely be running in a UNIX environment. Thanks.
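
    For reference, the BSD calls in that guide map almost one-to-one onto Winsock2; the main Windows-specific additions are the startup/teardown scaffolding shown in this sketch (the calls are standard Winsock2, though the sketch itself is untested):

        /* Winsock needs explicit startup/teardown; otherwise the calls in the
           Beej guide (socket, bind, sendto, recvfrom, ...) are near-identical. */
        #include <winsock2.h>
        #include <ws2tcpip.h>
        #include <stdio.h>
        #pragma comment(lib, "ws2_32.lib")  /* MSVC: link against Winsock */

        int main(void)
        {
            WSADATA wsa;
            if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) {
                fprintf(stderr, "WSAStartup failed\n");
                return 1;
            }
            /* ... socket code from the guide goes here; use closesocket()
               instead of close(), and the SOCKET type instead of int ... */
            WSACleanup();
            return 0;
        }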

    Read the article

  • java string detection of ip

    - by user384706
    Hi, assume a Java String that contains an IP (v4 or v6) or a hostname. What is the best way to detect which of these cases it is? I am not so much interested in whether the IP is in a valid range (e.g. 999.999.999.999 for IPv4); I just want a way to detect whether a String is a hostname or an IP (v4 or v6). Initially I thought this:

        if (theString.indexOf("@") != -1) {
            // Not an IP
        }

    but I am not sure I should always expect a format containing "@". Thanks
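
    For what it's worth, "@" appears in email addresses rather than in IPs or hostnames, so that test won't help here. A rough Java sketch of a shape-based check instead (no range validation, matching the question's requirements; the method name is illustrative):

        // Heuristic only: IPv6 literals contain ':', IPv4 is a dotted quad,
        // anything else is treated as a hostname. Ranges are not validated.
        static String classify(String s) {
            if (s.indexOf(':') >= 0)
                return "ipv6";
            if (s.matches("\\d{1,3}(\\.\\d{1,3}){3}"))
                return "ipv4";
            return "hostname";
        }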

    Read the article

  • Why does Java force user-agent through simple Socket IO?

    - by Zombies
    I am using nothing but raw Socket IO; there isn't a single HttpURLConnection or HTTP client library in my project. When I run it through Wireshark I see something very revealing:

        GET / HTTP/1.1
        User-Agent: Java/1.6.0_15
        Host: www.google.com
        Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
        Connection: keep-alive

    Here is the crazy part: I never put ANY of that in my original request. My original request was:

        "GET http://www.google.com/ HTTP/1.1\r\n" +
        "Host: www.google.com\r\n" +
        "User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.8) Gecko/20100214 Ubuntu/9.10 (karmic) Firefox/3.5.8\r\n" +
        "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n" +
        "Accept-Language: en-us,en;q=0.5\r\n" +
        "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\n" +
        "Keep-Alive: 300\r\n" +
        "\r\n";

    I am using the default Sun JVM.
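
    For comparison, a genuinely raw Socket writes only the bytes you give it, as in the minimal Java 6 sketch below; if Wireshark shows different headers on the wire, some HTTP layer is likely involved, e.g. an HttpURLConnection created indirectly by a library, or a proxy configured via http.proxyHost:

        import java.io.*;
        import java.net.Socket;

        public class RawGet {
            public static void main(String[] args) throws IOException {
                Socket s = new Socket("www.google.com", 80);
                Writer w = new OutputStreamWriter(s.getOutputStream(), "ISO-8859-1");
                // Exactly these bytes go on the wire; nothing is added or rewritten.
                w.write("GET / HTTP/1.1\r\nHost: www.google.com\r\nConnection: close\r\n\r\n");
                w.flush();
                BufferedReader r = new BufferedReader(new InputStreamReader(s.getInputStream()));
                for (String line; (line = r.readLine()) != null; )
                    System.out.println(line);  // echoes exactly what the server returned
                s.close();
            }
        }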

    Read the article

  • What is the best Linux distribution as a Xen host?

    - by St3fan
    I ordered a server for the home office and I would like to partition it with Xen. I think this will keep things clean and easier to maintain. I will be running things like MySQL, PostgreSQL, Tomcat and my own code on this machine. My question is: what freely available Linux distribution has the best Xen hosting facilities?

    Read the article

  • Regarding grep in solaris

    - by Arav
    I want to grep for a particular word in multiple files. The file names are stored in the variable TESTING:

        TESTING=$(ls -tr *.txt)
        echo $TESTING
        test.txt ab.txt bc.txt
        grep "word" "$TESTING"
        grep: can't open test.txt ab.txt bc.txt

    This gives me an error. Is there any other way to do it, other than a for loop?
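
    The quotes are the problem: "$TESTING" passes all three file names to grep as one argument. Two quote-free alternatives, sketched below (both assume the file names contain no spaces):

        # unquoted, so the shell word-splits the list back into separate arguments
        grep "word" $TESTING

        # or skip the variable and let the glob expand directly
        grep "word" *.txt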

    Read the article

  • How exactly do MbUnit's [Parallelizable] and DegreeOfParallelism work?

    - by BenA
    I thought I understood how MbUnit's parallel test execution worked, but the behaviour I'm seeing differs sufficiently from my expectations that I suspect I'm missing something. I have a set of UI tests that I wish to run concurrently. All of the tests are in the same assembly, split across three different namespaces, and all are completely independent of one another, so I'd like all of them to be eligible for parallel execution. To that end, I put the following in AssemblyInfo.cs:

        [assembly: DegreeOfParallelism(8)]
        [assembly: Parallelizable(TestScope.All)]

    My understanding was that this combination of assembly attributes should cause all of the tests to be considered [Parallelizable], and that the test runner should use 8 threads during execution. My individual tests are marked with the [Test] attribute and nothing else; none of them are data-driven. However, what I actually see is at most 5-6 threads being used, meaning that my test runs are taking longer than they should. Am I missing something? Do I need to do anything else to ensure that all 8 threads are used by the runner?

    N.B. The behaviour is the same irrespective of which runner I use: the GUI, command-line and TD.Net runners all behave as described above, again leading me to think I've missed something.

    EDIT: As pointed out in the comments, I'm running v3.1 of MbUnit (update 2, build 397). The documentation suggests that the assembly-level [Parallelizable] attribute is available, but it also seems to reference v3.2 of the framework, despite that not yet being available.

    EDIT 2: To further clarify, the structure of my assembly is as follows:

        assembly
          namespace
            fixture
              tests (each carrying only the [Test] attribute)
            fixture
              tests (each carrying only the [Test] attribute)
          namespace
            fixture
              tests (each carrying only the [Test] attribute)
            fixture
              tests (each carrying only the [Test] attribute)
          namespace
            fixture
              tests (each carrying only the [Test] attribute)
            fixture
              tests (each carrying only the [Test] attribute)
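
    Given the v3.1 ambiguity noted in the first edit, one thing worth trying is applying the same [Parallelizable] attribute at fixture level rather than relying on the assembly-level form. A sketch, not verified against v3.1; the fixture and test names are illustrative:

        using MbUnit.Framework;

        [TestFixture]
        [Parallelizable(TestScope.All)] // mark the fixture and its descendants explicitly
        public class LoginPageTests
        {
            [Test]
            public void RendersLoginForm() { /* ... */ }
        }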

    Read the article

  • Algorithm for non-contiguous netmask match

    - by Gianluca
    Hi, I have to write a really fast algorithm to match an IP address against a list of groups, where each group is defined using a notation like 192.168.0.0/252.255.0.255. As you can see, the bitmask can contain zeros even in the middle, so traditional longest-prefix-match algorithms won't work. If an IP matches two groups, it is assigned to the group with the most 1-bits in its netmask. I'm not working with many entries (say, fewer than 1000) and I don't want a data structure with a large memory footprint (say, more than 1-2 MB), but it really has to be fast (and of course I can't afford a linear search). Do you have any suggestions? Thanks guys. UPDATE: I found something quite interesting at http://www.cse.usf.edu/~ligatti/papers/grouper-conf.pdf, but it's still too memory-hungry for my utopian use case.
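
    For what it's worth, whatever index structure ends up on top, each candidate group reduces to a single masked compare. A baseline C sketch (IPv4 only; the type and function names are illustrative), with entries pre-sorted by the number of 1-bits in the mask so the first hit is the best match:

        #include <stdint.h>
        #include <stddef.h>

        typedef struct {
            uint32_t value;  /* network address, pre-masked: value == addr & mask */
            uint32_t mask;   /* non-contiguous netmask, e.g. 252.255.0.255 */
        } group_t;

        /* groups must be sorted by popcount(mask) descending, so the first
           match is the one with the most 1-bits in its netmask */
        int match_group(const group_t *groups, size_t n, uint32_t ip)
        {
            for (size_t i = 0; i < n; i++)
                if ((ip & groups[i].mask) == groups[i].value)
                    return (int)i;
            return -1;  /* no group matched */
        }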

    Read the article

  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures for implementing a simple open-source temporal object database, and currently I'm very fond of using persistent red-black trees for it. My main reason for using persistent data structures is to minimize the use of locks, so the database can be as parallel as possible. It will also be easier to implement ACID transactions, and even to abstract the database to work in parallel on a cluster of some kind. The great thing about this approach is that it makes implementing a temporal database almost free, and that is quite nice to have, especially for the web and for data analysis (e.g. trends).

    All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though some very fast disks are available today, and all writes can be done asynchronously so a response is always immediate, I don't want to build the whole application on a false premise, only to realize it isn't really a good way to do it. Here's my line of thought:

    - Since all writes are done asynchronously, and a persistent data structure avoids invalidating the previous (and still valid) structure, the write time isn't really a bottleneck.
    - There is some literature on structures like this designed exactly for disk usage, but it seems to me that those techniques add read overhead to achieve faster writes, while I'd prefer exactly the opposite trade-off. Also, many of them end up as multi-versioned trees that aren't strictly immutable, which is crucial to justifying the persistence overhead.
    - I know there will still have to be some kind of locking when appending values to the database, and there should be good garbage-collection logic if not all versions are to be maintained (otherwise the file size will surely rise dramatically). A delta-compression system could also be considered.
    - Of all the search-tree structures, I really think red-black trees are closest to what I need, since they require the fewest rotations.

    But there are some possible pitfalls along the way:

    - Asynchronous writes could affect applications that need the data in real time, though I don't think that is the case with most web applications. When real-time data is needed, other solutions could be devised, like a check-in/check-out system for specific data that needs to be worked on in a more real-time manner.
    - They could lead to commit conflicts, though I fail to think of a good example of when that could happen. Commit conflicts can also occur in a normal RDBMS if two threads are working with the same data, right?
    - The overhead of an immutable interface like this will grow exponentially, and everything is doomed to fail soon, so this is all a bad idea.

    Any thoughts? Thanks!

    edit: There seems to be a misunderstanding of what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure

    Read the article

  • book with good examples for each implementation?

    - by ajsie
    I've read about design patterns, and it seems there are a lot of different ones to use. I wonder whether there are books that act like a reference: "you want to build a framework, then consider this, this and this pattern", with some examples, then jump to another application (e.g. a search engine) and give the patterns and concrete examples to use there. That way you learn the weaknesses and strengths of each pattern and where it fits, instead of just reading about every design pattern decoupled from the others. Are there good "reference sheets" or other tutorials suitable for a beginner? Thanks

    Read the article

  • How well does Scala perform compared to Java?

    - by Teja Kantamneni
    The question actually says it all. The reason behind it is that I am about to start a small side project and want to do it in Scala. I have been learning Scala for the past month and am now comfortable working with it. The Scala compiler itself is pretty slow (unless you use fsc). So how well does it perform on the JVM? I previously worked with Groovy, and sometimes saw it outperform Java. My question is: how well does Scala perform on the JVM compared to Java? I know Scala has some very good features (FP, static typing with the feel of a dynamic language...), but at the end of the day we need performance...

    Read the article

  • sending data packet just before closing socket

    - by xopht
    Before disconnecting a client, the server wants to send it some info: why do I (the server) disconnect you (the client)? If I send the info packet and close the client socket immediately, closesocket() returns -1, and if I use the linger option so that closesocket() succeeds, the info does not get sent completely. How can I accomplish this, and is it possible to know when the socket's send buffer is empty (meaning my packet has been sent in full)? Thanks.
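
    The usual pattern here is a half-close, sketched below in POSIX C: shutdown() the sending direction (queued data is still delivered), then read until the peer closes, and only then close the socket. On Windows the constant is SD_SEND and the final call is closesocket():

        #include <sys/socket.h>
        #include <unistd.h>
        #include <stddef.h>

        /* Graceful close: flush our side, then wait for the peer to finish. */
        void close_gracefully(int fd, const char *msg, size_t len)
        {
            char buf[512];

            send(fd, msg, len, 0);   /* queue the final info packet          */
            shutdown(fd, SHUT_WR);   /* send FIN; queued data is still sent  */
            while (recv(fd, buf, sizeof buf, 0) > 0)
                ;                    /* drain until the peer closes (read 0) */
            close(fd);               /* the peer has now seen all our data   */
        }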

    Read the article

  • Explicit method tables in C# instead of OO - good? bad?

    - by FunctorSalad
    Hi! I hope the title doesn't sound too subjective; I absolutely do not mean to start a debate on OO in general. I'd merely like to discuss the basic pros and cons of different ways of solving the following sort of problem. Let's take this minimal example: you want to express an abstract datatype T with functions that may take T as input, output, or both:

        f1 : takes a T, returns an int
        f2 : takes a string, returns a T
        f3 : takes a T and a double, returns another T

    I'd like to avoid downcasting and any other dynamic typing. I'd also like to avoid mutation whenever possible.

    1: Abstract-class-based attempt

        abstract class T {
            abstract int f1();

            // We can't have abstract constructors, so the best we can do, as I see it, is:
            abstract void f2(string s);
            // The convention would be that you'd replace calls to the original f2 by
            // invocation of the nullary constructor of the implementing type, followed
            // by invocation of f2. f2 would need to have side-effects to be of any use.

            // f3 is a problem too:
            abstract T f3(double d);
            // This doesn't express that the return value is of the *same* type as the
            // object whose method is invoked; it just expresses that the return value
            // is *some* T.
        }

    2: Parametric polymorphism and an auxiliary class (all implementing classes of TImpl will be singleton classes)

        abstract class TImpl<T> {
            abstract int f1(T t);
            abstract T f2(string s);
            abstract T f3(T t, double d);
        }

    We no longer express that some concrete type actually implements our original spec -- an implementation is simply a type Foo for which we happen to have an instance of TImpl<Foo>. This doesn't seem to be a problem: if you want a function that works on arbitrary implementations, you just do something like:

        // Say we want to return a Bar given an arbitrary implementation of our abstract type
        Bar bar<T>(TImpl<T> ti, T t);

    At this point, one might as well skip inheritance and singletons altogether and use a

    3: First-class function table

        class /* or struct, even */ TDict<T> {
            readonly Func<T, int> f1;
            readonly Func<string, T> f2;
            readonly Func<T, double, T> f3;

            TDict(Func<T, int> f1, Func<string, T> f2, Func<T, double, T> f3) {
                this.f1 = f1;
                this.f2 = f2;
                this.f3 = f3;
            }
        }

        Bar bar<T>(TDict<T> td, T t);

    Though I don't see much practical difference between #2 and #3.

    Example implementation:

        class MyT { /* raw data structure goes here; this class needn't have any methods */ }

        // It doesn't matter where we put the following; it could be a static method of
        // MyT, or live in some static class collecting dictionaries:
        static readonly TDict<MyT> MyTDict = new TDict<MyT>(
            (t)    => /* body of f1 goes here */,
            (s)    => /* body of f2 goes here */,
            (t, d) => /* body of f3 goes here */
        );

    Thoughts? #3 is unidiomatic, but it seems rather safe and clean. One question is whether there are any performance concerns with it. I don't usually need dynamic dispatch, and I'd prefer these function bodies to be statically inlined where the concrete implementing type is known statically. Is #2 better in that regard?

    Read the article

  • How to learn a language that has very little coverage?

    - by bennybdbc
    I recently came across the Kogut language and was intrigued by it. However, the only place to get information is the SourceForge page that hosts the project, and I had no idea how even to start looking at the language in more depth. So what I'm asking is: has anyone here learnt a language that doesn't have the thousands of resources that Ruby, Python etc. have? What would be the best method of doing so?

    Read the article
