Search Results

Search found 442 results on 18 pages for 'the naive'.

Page 15/18 | < Previous Page | 11 12 13 14 15 16 17 18  | Next Page >

  • What is the difference between these two linq implementations?

    - by Mahesh Velaga
    I was going through Jon Skeet's Reimplementing Linq to Objects series. In the article on implementing Where, I found the following snippets, but I don't get what advantage we gain by splitting the original method into two. Original Method: // Naive validation - broken! public static IEnumerable<TSource> Where<TSource>( this IEnumerable<TSource> source, Func<TSource, bool> predicate) { if (source == null) { throw new ArgumentNullException("source"); } if (predicate == null) { throw new ArgumentNullException("predicate"); } foreach (TSource item in source) { if (predicate(item)) { yield return item; } } } Refactored Method: public static IEnumerable<TSource> Where<TSource>( this IEnumerable<TSource> source, Func<TSource, bool> predicate) { if (source == null) { throw new ArgumentNullException("source"); } if (predicate == null) { throw new ArgumentNullException("predicate"); } return WhereImpl(source, predicate); } private static IEnumerable<TSource> WhereImpl<TSource>( this IEnumerable<TSource> source, Func<TSource, bool> predicate) { foreach (TSource item in source) { if (predicate(item)) { yield return item; } } } Jon says it's for eager validation, with the rest deferred. But I don't get it. Could someone please explain in a little more detail what the difference between these two functions is, and why the validations are performed eagerly in one and not in the other? Conclusion/Solution: I was confused because I did not understand which methods the compiler turns into iterators. I assumed it was based on the signature of a method, such as returning IEnumerable<T>. But based on the answers, I now get it: a method is compiled as an iterator if it uses yield statements.
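
    A minimal sketch of why the split matters (hypothetical names, not Jon Skeet's code): a method containing yield is compiled into an iterator whose body, including its argument checks, does not run until the sequence is enumerated, whereas a plain method runs, and therefore validates, the moment it is called.

        using System;
        using System.Collections.Generic;

        static class Demo   // hypothetical demo program, for illustration only
        {
            // Iterator method: contains yield, so nothing in the body runs until enumeration.
            static IEnumerable<int> Deferred(IEnumerable<int> source)
            {
                if (source == null) throw new ArgumentNullException("source"); // thrown lazily
                foreach (int i in source) yield return i;
            }

            // Plain method: no yield, so it executes (and validates) immediately,
            // then hands the actual iteration off to the iterator above.
            static IEnumerable<int> Eager(IEnumerable<int> source)
            {
                if (source == null) throw new ArgumentNullException("source"); // thrown right here
                return Deferred(source);
            }

            static void Main()
            {
                IEnumerable<int> lazy = Deferred(null);   // no exception yet
                try { Eager(null); }                      // throws immediately
                catch (ArgumentNullException) { Console.WriteLine("eager check fired"); }
                try { foreach (int i in lazy) { } }       // exception surfaces only now
                catch (ArgumentNullException) { Console.WriteLine("deferred check fired"); }
            }
        }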

    Read the article

  • Qt display image in a new window

    - by user312543
    Hi, I am new to Qt and GUI programming. http://stackoverflow.com/questions/1357960/qt-jpg-image-display/2605463#2605463 The procedure given there for displaying an image works fine, and thank you for providing that. But I want to display an image when I click on a radio button. I created a slot and connected the button click event to it (dispImage is my slot). My slot contains only the code that displays an image (the first answer at that link). I am able to compile and run it, but the output is not what I want: on button click, the image window flashes for a second and then disappears. One more point: I tried the same thing with the windowsflags example in the Qt examples, where I want to display the image in the preview window we create. That did not work for me either. Please suggest a solution. Thanks in advance.

    Read the article

  • AIX specific socket programming query

    - by kumar_m_kiran
    Hi All, Question 1 From the SUSE man pages, I get the below details for socket connect options: If the initiating socket is connection-mode, then connect() shall attempt to establish a connection to the address specified by the address argument. If the connection cannot be established immediately and O_NONBLOCK is not set for the file descriptor for the socket, connect() shall block for up to an unspecified timeout interval until the connection is established. If the timeout interval expires before the connection is established, connect() shall fail and the connection attempt shall be aborted. If connect() is interrupted by a signal that is caught while blocked waiting to establish a connection, connect() shall fail and set errno to [EINTR], but the connection request shall not be aborted, and the connection shall be established asynchronously. Question: Is the above content valid for AIX (especially the connection timeout, timed wait, etc.)? I ask because I do not see it in the AIX man pages (5.1 and 5.3). Question 2 I have a client socket whose attributes are: a. SO_RCVTIMEO and SO_SNDTIMEO are set to 5 seconds. b. AF_INET and SOCK_STREAM. c. SO_LINGER with linger on and a time of 5 seconds. d. SO_REUSEADDR is set. Note that the client socket is not O_NONBLOCK. Question: Now, since O_NONBLOCK is not set and SO_RCVTIMEO and SO_SNDTIMEO are set to 5 seconds, does that mean: a. connect is non-blocking or blocking? b. If blocking, is it timed blocking or "infinite" blocking? c. If it is infinite, how do I make a blocking connect() call with a timeout of t seconds? Sorry if the questions are very naive. Thanks in advance for your input.

    Read the article

  • Can you dynamically combine multiple conditional functions into one in Python?

    - by erich
    I'm curious if it's possible to take several conditional functions and create one function that checks them all (e.g. the way a generator takes a procedure for iterating through a series and creates an iterator). The basic use case would be when you have a large number of conditional parameters (e.g. "max_a", "min_a", "max_b", "min_b", etc.), many of which could be blank. They would all be passed to this "function creating" function, which would then return one function that checked them all. Below is an example of a naive way of doing what I'm asking: def combining_function(max_a, min_a, max_b, min_b, ...): f_array = [] if max_a is not None: f_array.append( lambda x: x.a < max_a ) if min_a is not None: f_array.append( lambda x: x.a > min_a ) ... return lambda x: all( [ f(x) for f in f_array ] ) What I'm wondering is: what is the most efficient way to achieve what's being done above? It seems like executing a function call for every function in f_array would create a decent amount of overhead, but perhaps I'm engaging in premature/unnecessary optimization. Regardless, I'd be interested to see if anyone else has come across use cases like this and how they proceeded. Also, if this isn't possible in Python, is it possible in other (perhaps more functional) languages?

    Read the article

  • breadth-first traversal of directory tree is not lazy

    - by user855443
    I am trying to traverse the directory tree. A naive depth-first traversal seems not to produce the data in a lazy fashion and runs out of memory. I next tried a breadth-first approach, which shows the same problem - it uses all the memory available and then crashes. The code I have is: getFilePathBreadtFirst :: FilePath -> IO [FilePath] getFilePathBreadtFirst fp = do fileinfo <- getInfo fp res :: [FilePath] <- if isReadableDirectory fileinfo then do children <- getChildren fp lower <- mapM getFilePathBreadtFirst children return (children ++ concat lower) else return [fp] -- should only return the files? return res getChildren :: FilePath -> IO [FilePath] getChildren path = do names <- getUsefulContents path let namesfull = map (path </>) names return namesfull testBF fn = do -- crashes for /home/frank, does not go to swap fps <- getFilePathBreadtFirst fn putStrLn $ unlines fps I think all the code is either linear or tail recursive, and I would expect that the listing of filenames starts immediately, but in fact it does not. Where is the error in my code and my thinking? Where have I lost lazy evaluation?

    Read the article

  • Data structure choices for high-speed and memory-efficient detection of duplicate strings

    - by Jonathan Holland
    I have an interesting problem that could be solved in a number of ways: I have a function that takes in a string. If this function has never seen this string before, it needs to perform some processing. If the function has seen the string before, it needs to skip processing. After a specified amount of time, the function should accept duplicate strings again. This function may be called thousands of times per second, and the string data may be very large. This is a highly abstracted explanation of the real application, just trying to get down to the core concept for the purpose of the question. The function will need to store state in order to detect duplicates. It will also need to store an associated timestamp in order to expire duplicates. It does NOT need to store the strings; a unique hash of the string would be fine, provided there are no false positives due to collisions (use a perfect hash?) and the hash function is performant enough. The naive implementation would simply be (in C#): Dictionary<String,DateTime>, though in the interest of lowering the memory footprint and potentially increasing performance I'm evaluating custom data structures to handle this instead of a basic hashtable. So, given these constraints, what would you use? EDIT, some additional information that might change proposed implementations: 99% of the strings will not be duplicates. Almost all of the duplicates will arrive back to back, or nearly sequentially. In the real world, the function will be called from multiple worker threads, so state management will need to be synchronized.
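
    For reference, here is a minimal sketch of the naive baseline named above, made thread-safe and given an expiry window (hypothetical names; assumes .NET 4's ConcurrentDictionary is available). A custom structure would need to beat this on memory footprint and contention:

        using System;
        using System.Collections.Concurrent;

        // Sketch only: stores the strings themselves rather than hashes, and never evicts.
        class DuplicateFilter
        {
            private readonly ConcurrentDictionary<string, DateTime> _seen =
                new ConcurrentDictionary<string, DateTime>();
            private readonly TimeSpan _ttl;

            public DuplicateFilter(TimeSpan ttl) { _ttl = ttl; }

            // Returns true if the string has not been seen within the expiry window.
            public bool ShouldProcess(string s)
            {
                DateTime now = DateTime.UtcNow;
                DateTime firstSeen = _seen.GetOrAdd(s, now);
                if (firstSeen == now) return true;          // first sighting
                if (now - firstSeen < _ttl) return false;   // recent duplicate: skip processing
                _seen[s] = now;                             // window expired: accept again
                return true;
                // (A real version would also evict stale entries to bound memory.)
            }
        }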

    Read the article

  • Why is DivMod Limited to Words (<=65535)?

    - by Andreas Rejbrand
    In Delphi, the declaration of the DivMod function is procedure DivMod(Dividend: Cardinal; Divisor: Word; var Result, Remainder: Word); Thus, the divisor, result, and remainder cannot be greater than 65535, a rather severe limitation. Why is this? Why couldn't the declaration be procedure DivMod(Dividend: Cardinal; Divisor: Cardinal; var Result, Remainder: Cardinal); The procedure is implemented using assembly, and is therefore probably extremely fast. Would it not be possible for the code PUSH EBX MOV EBX,EDX MOV EDX,EAX SHR EDX,16 DIV BX MOV EBX,Remainder MOV [ECX],AX MOV [EBX],DX POP EBX to be adapted to cardinals? How much slower is the naïve attempt procedure DivModInt(const Dividend: integer; const Divisor: integer; out result: integer; out remainder: integer); begin result := Dividend div Divisor; remainder := Dividend mod Divisor; end; that is not (?) limited to 16-bit integers?

    Read the article

  • How can I make this simple C# generics factory work?

    - by Kevin Brassen
    I have this design: public interface IFactory<T> { T Create(); T CreateWithSensibleDefaults(); } public class AppleFactory : IFactory<Apple> { ... } public class BananaFactory : IFactory<Banana> { ... } // ... The fictitious Apple and Banana here do not necessarily share any common types (other than object, of course). I don't want clients to have to depend on specific factories, so instead, you can just ask a FactoryManager for a new type. It has a FactoryForType method: IFactory<T> FactoryForType<T>(); Now you can invoke the appropriate interface methods with something like FactoryForType<Apple>().Create(). So far, so good. But there's a problem at the implementation level: how do I store this mapping from types to IFactory<T>s? The naive answer is an IDictionary<Type, IFactory<T>>, but that doesn't work since there's no type covariance on the T (I'm using C# 3.5). Am I just stuck with an IDictionary<Type, object> and doing the casting manually?
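
    One common way out (a sketch, assuming the IFactory<T> interface from the question and that a single centralized cast is acceptable): store the factories as object internally and perform the cast once inside FactoryForType<T>(), so clients never see it.

        using System;
        using System.Collections.Generic;

        public class FactoryManager
        {
            private readonly Dictionary<Type, object> factories = new Dictionary<Type, object>();

            // Register<T> only accepts an IFactory<T>, so the entry stored under typeof(T)
            // can never be anything that isn't castable back to IFactory<T>.
            public void Register<T>(IFactory<T> factory)
            {
                factories[typeof(T)] = factory;
            }

            public IFactory<T> FactoryForType<T>()
            {
                object factory;
                if (!factories.TryGetValue(typeof(T), out factory))
                    throw new InvalidOperationException("No factory registered for " + typeof(T));
                return (IFactory<T>)factory;   // the one and only cast
            }
        }

        // Usage: manager.Register(new AppleFactory());
        //        Apple apple = manager.FactoryForType<Apple>().Create();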

    Read the article

  • Memory Management with returning char* function

    - by RageD
    Hello all. Today, without much thought, I wrote a simple function returning a char* based on a switch statement over given enum values. This, however, made me wonder how I could release that memory. What I did was something like this: char* func() { char* retval = new char; // Switch blah blah - will always return some value other than NULL since default: return retval; } I apologize if this is a naive question, but what is the best way to release the memory, seeing as I cannot delete the memory after the return and, obviously, if I delete it before, I won't have a return value. What I was thinking of as a viable solution was something like this: void func(char*& in) { // blah blah switch make it do something } int main() { char* val = new char; func(val); // Do whatever with func (normally func within a data structure with specific enum set so could run multiple times to change output) val = NULL; delete val; val = NULL; return 0; } Would anyone have any more insight on this and/or an explanation of which approach to use? Regards, Dennis M.

    Read the article

  • C++ Iterator Pipelining Designs

    - by Kirakun
    Suppose we want to apply a series of transformations, int f1(int), int f2(int), int f3(int), to a list of objects. A naive way would be SourceContainer source; TempContainer1 temp1; transform(source.begin(), source.end(), back_inserter(temp1), f1); TempContainer2 temp2; transform(temp1.begin(), temp1.end(), back_inserter(temp2), f2); TargetContainer target; transform(temp2.begin(), temp2.end(), back_inserter(target), f3); This first solution is not optimal because of the extra space requirement with temp1 and temp2. So, let's get smarter with this: int f123(int n) { return f3(f2(f1(n))); } ... SourceContainer source; TargetContainer target; transform(source.begin(), source.end(), back_inserter(target), f123); This second solution is much better because not only is the code simpler but, more importantly, there is no space requirement for intermediate results. However, the composition f123 must be determined at compile time and is thus fixed at run time. How would I try to do this efficiently if the composition is to be determined at run time? For example, if this code were in an RPC service and the actual composition--which can be any permutation of f1, f2, and f3--were based on arguments from the RPC call.

    Read the article

  • How to configure multiple mappings using FluentNHibernate?

    - by chris.baglieri
    First time rocking it with NHibernate/Fluent, so apologies in advance if this is a naive question. I have a set of Models I want to map. When I create my session factory I'm trying to do all mappings at once. I am not using auto-mapping (though I may if what I am trying to do ends up being more painful than it ought to be). The problem I am running into is that it seems only the first mapping is taking effect. Given the code snippet below, a unit test that attempts to save a 'bar' fails, and checking the logs I see NHibernate trying to save a Bar entity to the foo table. While I suspect it's my mappings, it could be something else that I am simply overlooking. Code that creates the session factory (note I've also tried separate calls into .Mappings): Fluently.Configure().Database(MsSqlConfiguration.MsSql2008 .ConnectionString(c => c .Server(@"localhost\SQLEXPRESS") .Database("foo") .Username("foo") .Password("foo"))) .Mappings(m => { m.FluentMappings.AddFromAssemblyOf<FooMap>() .Conventions.Add(FluentNHibernate.Conventions.Helpers .Table.Is(x => "foos")); m.FluentMappings.AddFromAssemblyOf<BarMap>() .Conventions.Add(FluentNHibernate.Conventions.Helpers .Table.Is(x => "bars")); }) .BuildSessionFactory(); Unit test snippet: using (var session = Data.SessionHelper.SessionFactory.OpenSession()) { var bar = new Bar(); session.Save(bar); Assert.NotNull(bar.Id); }
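
    One possible culprit (an assumption, not a confirmed diagnosis): both AddFromAssemblyOf calls scan the same assembly, so the two Table.Is conventions are registered against the same set of maps and one of them wins for everything. A common way to sidestep that is to name the table in each ClassMap and drop the global conventions; a sketch assuming the standard FluentNHibernate ClassMap API and that Foo and Bar expose an Id property:

        using FluentNHibernate.Mapping;

        public class FooMap : ClassMap<Foo>
        {
            public FooMap()
            {
                Table("foos");   // table chosen per map, no assembly-wide convention needed
                Id(x => x.Id);
                // Map(x => x.SomeProperty); ... remaining columns
            }
        }

        public class BarMap : ClassMap<Bar>
        {
            public BarMap()
            {
                Table("bars");
                Id(x => x.Id);
            }
        }

    With the table named per map, a single m.FluentMappings.AddFromAssemblyOf<FooMap>() call picks up both mappings.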

    Read the article

  • Help me find article on Multi-threading and Event Handling in Java

    - by JDR
    I once read an article on how to properly write event handlers for multi-threading in Java, but I can't for the life of me find it anymore. It described the pitfalls and potential for deadlocks that can occur when firing events (not Swing events, mind you, but general events like model update notifications). To clarify, the situation would be as such: // let's say this is code from an MVC model somewhere public void setSomeProperty(String myProperty){ if(!this.myProperty.equals(myProperty)){ this.myProperty = myProperty; fireMyPropertyChangedEvent(...); } } The article described how passing control to arbitrary external listener code was a potential cause for deadlock. I now find myself in a situation where I need to fire such events in a multithreaded environment and I would very much like to read the article again to see what it has to say before I continue. Does anyone know the article I'm referring to? I believe it came as a (fairly short) PDF. It started off with an initial naive implementation and incrementally pointed out flaws and improved upon it. It ended with a sort of final proper-way-to-fire-multithreaded-events. I've searched endlessly in my browser history and on Google, but all I could find were endless numbers of topics on the Swing event dispatch thread.
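
    While the specific article may be hard to track down, the pattern such write-ups typically converge on is: snapshot the listener list while holding the lock, then invoke the listeners with no locks held, so arbitrary external code can never deadlock against the model's own lock. A minimal, hypothetical sketch of that idea (shown in C# here, though the question is about Java, since the pattern is language-neutral):

        using System;
        using System.Collections.Generic;

        class Model
        {
            private readonly object gate = new object();
            private readonly List<Action<string>> listeners = new List<Action<string>>();
            private string myProperty;

            public void AddListener(Action<string> listener)
            {
                lock (gate) { listeners.Add(listener); }
            }

            public void SetSomeProperty(string value)
            {
                Action<string>[] snapshot;
                lock (gate)
                {
                    if (Equals(myProperty, value)) return;
                    myProperty = value;
                    snapshot = listeners.ToArray();   // copy under the lock
                }
                foreach (Action<string> listener in snapshot)
                    listener(value);                  // fire outside the lock
            }
        }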

    Read the article

  • Modifying DOM of a webpage

    - by Prashant Singh
    The structure of the webpage is like this: <div id='abc'> <div class='a'>Some contents here </div> <div class='b'>Some other contents </div> </div> My aim is to add this after the class 'a' div in the above structure: <div class='a'>Some other contents here </div> so that the final structure looks like this: <div id='abc'> <div class='a'>Some contents here </div> <div class='a'>Some other contents here </div> <div class='b'>Some other contents </div> </div> Is there a better way to do this using DOM properties? I was thinking of the naive way of parsing the content and updating it. Please comment if my question is unclear!

    Read the article

  • Prevent coercion to a single type in unlist() or c(); passing arguments to wrapper functions

    - by Leo Alekseyev
    Is there a simple way to flatten a list while retaining the original types of list constituents?.. Is there a way to programmatically construct a heterogeneous list?.. For instance, I want to create a simple wrapper for functions like png(filename,width,height) that would take device name, file name, and a list of options. The naive approach would be something like my.wrapper <- function(dev,name,opts) { do.call(dev,c(filename=name,opts)) } or similar code with unlist(list(...)). This doesn't work because opts gets coerced to character, and the resulting call is e.g. png(filename,width="500",height="500"). If there's no straightforward way to create heterogeneous lists like that, is there a standard idiomatic way to splice arguments into functions without naming them explicitly (e.g. do.call(dev,list(filename=name,width=opts["width"]))? -- Edit -- Gavin Simpson answered both questions below in his discussion about constructing wrapper functions. Let me give a summary of the answer to the title question: It is possible to construct a heterogeneous list with c() provided the arguments to c() are lists. To wit: > foo <- c("a","b"); bar <- 1:3 > c(foo,bar) [1] "a" "b" "1" "2" "3" > c(list(foo),list(bar)) [[1]] [1] "a" "b" [[2]] [1] 1 2 3 > c(as.list(foo),as.list(bar)) ## this creates a flattened heterogeneous list [[1]] [1] "a" [[2]] [1] "b" [[3]] [1] 1 [[4]] [1] 2 [[5]] [1] 3

    Read the article

  • How do I constrain a container's height to the height of a specific content element?

    - by Robert Rossney
    I'm trying to do something which seems like it should be extremely simple and yet I can't see how. I have a very simple layout, a TextBox with an image next to it, similar to the way it might look adorned with an ErrorProvider in a WinForms application. The problem is, I want the image to be no higher than the TextBox it's next to. If I lay it out like this, say: <Grid> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="Auto"/> </Grid.ColumnDefinitions> <TextBox Grid.Row="0" Grid.Column="0" MinWidth="100"/> <Image Grid.Row="0" Grid.Column="1" Source="error.png" /> </Grid> the row will size to the height of the image if the image is taller than the TextBox. This also happens if I use a DockPanel or StackPanel. The naive solution would be to bind the Height to the TextBox's ActualHeight. I'm sure this is wrong. But what's right?

    Read the article

  • Limiting object allocation over multiple threads

    - by John
    I have an application which retrieves and caches the results of a client's query. The client then requests different chunks of data and the application sends the relevant results and removes them from the cache. A new requirement for this application is that there needs to be a run-time configurable maximum number of results which may be cached. I've taken the naive approach and implemented this by using a counter under a lock which is incremented every time a result is cached and decremented whenever a result is removed from the cache. Unfortunately, this has drastically reduced the application's performance when processing a large number of concurrent requests. I have tried both a critical section lock and a spin-lock; the performance improves a bit with a spin-lock, but is still unacceptably slow. Is there a better way to solve this problem which may improve performance? Right now I have a thread pool that services requests, and each request is tied to a Request object which stores the cached results for that particular request. Here is a simplified pseudo code version of my current implementation: void ResultCallback( Result result, Request *request ) { lock totalResultsCached lock cachedLimit if( totalResultsCached + 1 > cachedLimit ) { unlock cachedLimit unlock totalResultsCached //cancel the request return; } ++totalResultsCached; unlock cachedLimit unlock totalResultsCached request.add(result) } void SendResults( int resultsToSend, Request *request ) { while ( resultsToSend > 0 ) { send(request.remove()) lock totalResultsCached --totalResultsCached unlock totalResultsCached --resultsToSend; } }
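
    One direction that is often suggested for this shape of problem (sketched here in C# with hypothetical names; the question's pseudo-code is language-neutral, and in C++ std::atomic plays the same role as Interlocked): replace the two nested locks with a single atomic counter, incrementing optimistically and rolling back if the limit was exceeded.

        using System.Threading;

        class ResultCacheLimiter
        {
            private int totalResultsCached;            // shared counter, touched only via Interlocked
            private volatile int cachedLimit = 1000;   // run-time configurable maximum

            public void SetLimit(int limit) { cachedLimit = limit; }

            // Called where ResultCallback would cache: returns false if the request should be cancelled.
            public bool TryReserveSlot()
            {
                if (Interlocked.Increment(ref totalResultsCached) > cachedLimit)
                {
                    Interlocked.Decrement(ref totalResultsCached);   // undo the optimistic increment
                    return false;
                }
                return true;
            }

            // Called where SendResults removes a cached result.
            public void ReleaseSlot()
            {
                Interlocked.Decrement(ref totalResultsCached);
            }
        }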

    Read the article

  • Filter design for audio signal.

    - by beanyblue
    What I am trying to do is simple. I have a few .wav files. I want to remove noise and filter out specific frequencies. I don't have MATLAB and I intend to write my own code for all the filters. Right now, I have a way to read the .wav file and dump out the structure into a text file. My questions are the following: Can I directly apply digital filters to this sampled data? (i.e., can I directly do a convolution between my input samples and h(n) for the filter that I choose?) How do I choose the number of coefficients for the window function? I have Octave, so if someone can point me to anything that gives me some idea of how to process the .wav file using Octave, that would be great too. I want to be able to filter out the frequency and then listen to the sound again. Is this possible with Octave? I'm just a beginner with these kinds of things, so please bear with me if my questions are too naive. Any help will be great.

    Read the article

  • SQL queries to determine all values that would satisfy an arbitrary query

    - by jasterm007
    I'm trying to figure out how to efficiently run a set of queries that will provide a new table of all values that would return results for an arbitrary query. Say my table has a schema like: id name age city What is an efficient way to list all values that would return results for an arbitrary query, say "NOT city=X AND age BETWEEN Y and Z"? My naive approach for this would be to use a script and recurse through all possible combinations of {city, age, age} and see which SELECTs return more than 0 results, but that seems incredibly inefficient. I've also tried building large joins on {city, age, age} as well and basically using that table as an argument list to the query, but that quickly becomes an impossibility for queries on many columns. For simple conjunctive equality queries, i.e. "name=X and age=Y", this is much simpler, as I can do something like SELECT name, age, count(*) AS count FROM main GROUP BY name, age HAVING count > 0 But I'm having difficulty coming up with a general approach for anything more complicated than that. Any pointers in the right direction would be most helpful, thanks.

    Read the article

  • How is machine code understood by the machine

    - by Kraken
    I have a very naive question here, and I would like you to correct me on whatever wrong concepts I put out here. The question is as follows: I have Ubuntu installed on my machine, and I write a helloWorld.c program in the C language. I also have a compiler installed on the operating system; when I compile my helloWorld.c program, the OS schedules the compiler, which translates my code into machine code that I eventually execute. Now, my kernel code is written in C, so how does my machine interpret that code? Say my kernel code is helloWorld.c; wouldn't I require a compiler to compile this code? Also, if I hardcode a compiler in maybe ROM or something, then what language is it written in? Assembly language? Let me know if I have made the problem clear. Thanks. EDIT: By kernel code I mean the code for the operating system. I guess it is written in C, right?

    Read the article

  • How to write a cgi script in perl that accepts output (an image file) from a urlconnection in a Java applet and writes it to the server?

    - by Brad Rock
    I created a Java applet for my web page that creates an image file. I want users to be able to run the applet and create unique images, click a button, and have the image they created be saved to the web server. I think I have the code down for writing the image to a URLConnection in the Java applet itself: I first successfully wrote a file to the local disk while running the applet on my system rather than from the web page, and then altered a few things to write to a URLConnection instead of to the local disk (although we shall see how that works too). I am now trying to write a CGI script in Perl that takes the image output from the URLConnection and writes the image to a file on my web server (note: I am a newbie to Perl). I have found many examples of how to do something similar with simple text coming from an applet and then writing it to a text file, but I want to know how to apply the same concept to an image. In particular, how do I read in the image? I've seen text input read using read(STDIN, $some_variable) -- does the same thing work with an image? Likewise, how do I write the image file? What function do I use? Thanks for your help. I know that I am rather naive about all of this.

    Read the article

  • Change value of adjacent vertices and remove self loop

    - by StereoMatching
    I am trying to write Karger's algorithm with a boost::graph example (the first column is the vertex, the others are its adjacent vertices): 1 2 3 2 1 3 4 3 1 2 4 4 2 3 Assume I merge 2 into 1; I get the result 1 2 3 2 1 1 3 4 2 1 3 4 3 1 2 4 4 2 3 First question: how could I change the adjacent vertices ("2" to "1") of vertex 1? My naive solution: template<typename Vertex, typename Graph> void change_adjacent_vertices_value(Vertex input, Vertex value, Graph &g) { for (auto it = boost::adjacent_vertices(input, g); it.first != it.second; ++it.first){ if(*it.first == value){ *(it.first) = input; //error C2106: '=' : left operand must be l-value } } } Apparently, I can't set the value of the adjacent vertices to "1" this way. The result I want after "change_adjacent_vertices_value": 1 1 3 1 1 1 3 4 2 1 3 4 3 1 2 4 4 2 3 Second question: how could I pop out the adjacent vertices? Assume I want to pop out the consecutive 1s from vertex 1. The result I expect: 1 1 3 1 3 4 2 1 3 4 3 1 2 4 4 2 3 Is there any function like "pop_adjacent_vertex" I could use?

    Read the article

  • Inspiring a co-worker to adopt better coding practices?

    - by Aaronaught
    In the Handling my antiquated coworker question, various people discussed strategies for dealing with coworkers who are unwilling to integrate their workflow with the team's. I'd like, if possible, to learn some strategies for "teaching" a coworker who is merely ignorant of modern techniques and tools, and possibly a little apathetic. I've started working with a programmer who until recently has been working in relative isolation, in a different part of the company. He has extensive domain knowledge and most importantly he has demonstrated good problem-solving skills, something which many candidates seem to lack. However, the actual (C#) code I've seen is a throwback to the VB6 days. Procedural structure, Hungarian notation, global variables (abuse of static), no interfaces, no tests, non-use of Generics, throwing System.Exception... you get the idea. This programmer is a fair bit older than I am and, by first impressions at least, doesn't actively seek positive change. I'm not going to say resistant to change, because I think that is largely an issue of how the topic gets broached, and I want to be prepared. Programmers tend to be stubborn people, and going in with guns blazing and instituting rip-it-to-shreds code reviews and strictly-enforced policies is very likely not going to produce the end result that I want. If this were a new hire, a junior programmer, I wouldn't think twice about taking a "mentor" stance, but I'm extremely wary of treating an experienced employee as a clueless newbie (which he's not - he just hasn't kept pace with certain advancements in the field). How might I go about raising this developer's code quality standard the Dale Carnegie way, through gentle persuasion and non-material incentives? What would be the best strategy for effecting subtle, gradual changes, without creating an adversarial situation? Have other people - especially lead developers - been in this type of situation before? Which strategies were successful at stimulating interest and creating a positive group dynamic? Which strategies weren't successful and would be better to avoid? Clarifications: I really feel that several people are answering based on personal feelings without actually reading all of the details of the question. Please note the following, which should have been implied but I am now making explicit: This coworker is only my "senior" by virtue of age. I never said that his title, sphere of influence, or years at the organization exceed mine, and in fact, none of those things are true. He's a LOB programmer who's been absorbed into the main development shop. That's it. I am not a new hire, junior programmer, or other naïve idiot with grand plans to transform the company overnight. I am basically in charge of the software process, but as many who've worked as "leads" will know, responsibilities don't always correlate precisely with the org chart. I'm not asking people how to get my way, come hell or high water. I could do that if I wanted to, with the net result being that this person would become resentful and/or quit. Please try to understand that I am looking for a social, cooperative method of driving change. The mention of "...global variables... no tests... throwing System.Exception" was intended to demonstrate that the problems are not just superficial or aesthetic. Practices that may work for relatively small CRUD apps do not necessarily work for large enterprise apps, and in fact, none of the code so far has actually passed the integration tests. 
Please, try to take the question at face value, accept that I actually know what I'm talking about, and either answer the question that I actually asked or move on. P.S. My sincerest gratitude to those who -did- offer constructive advice rather than arguing with the premise. I'm going to leave this open for a while longer as I'm hoping to hear more in the way of real-world experiences.

    Read the article

  • SQLAuthority News – Who I Am And How I Got Here – True Story as Blog Post

    - by pinaldave
    Here are a few of the sample questions I get every day: "Give me shortcut to become superstar?" "How do I become like you?" "Which book I should read so I know everything?" "Can you share your secret to be successful? I want to know it but do not share with others." The generic answer I always give is to work hard and read good educational material or watch good online videos. One of the emails really caught my attention. It was from a friend and SQL Server expert John Sansom (Blog | Twitter). He asked if I would like to share my story with the world about "Who I Am and How I Got Here". I was very much intrigued by his suggestion. John is one guy I respect a lot. Every single topic he writes, I read with dedication. I eagerly wait for his Weekly Summary of Best SQL Links. If you have not read them, you are missing out. Writing a guest post for him was like walking down memory lane. I remembered the time when I was beginning my career and was a bit overconfident and a bit naive. I had my share of mistakes when I started my career. As time passed I realized the truth. Well, we all make mistakes. Though I am proud that as soon as I recognized my mistakes I corrected them. I never acted on impulse or when I was angry. I think that alone has helped me analyze situations better and become a better human being. Over the course of time, I have lost my ego and it has been replaced by passion. I am much happier and more successful in my work. Quite often people ask me if I am always online and whether I have a family or not. Honestly, I am able to work hard because of my family. They support me and they encourage me to enjoy what I do. They support everything I do and, personally, I do not miss a single occasion to join them in daily chores of fun. If there were a shortcut to success, I would want to know it. I learnt SQL Server the hard way and I am still learning. There are so many things I have to learn. There is not enough time to learn everything we want to learn. I am constantly working on it every day. I welcome you to join my journey as well. Please join me on my journey to learn SQL Server – the more the merrier. I have written the story of my life as a guest post. Read Here: A Journey to SQL Authority Special thanks to John Sansom (Blog | Twitter) for giving me space to tell my story. Indeed I am honored. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to Build Services from Legacy Applications

    - by Chris Falter
    The SOA consultants invaded the executive suite at your company or agency, preached the true religion, and converted the unbelievers. Now by divine imperative you must convert your legacy applications into a suite of reusable services.  But as usual, you lack the time and resources that you need in order to develop the services properly.  So you googled or bing’ed, found this blog post, and began crying in gratitude.  Yes, as the title implies, I am going to reveal my easy, 3-step, works-every-time process for converting silos of legacy applications into the inventory of services your CIO has been dreaming about.  So just close your eyes and count to 3 … now open them … and here it is…. Not. While wishful thinking is too often the coin of the IT realm, even the most naive practitioner knows that converting legacy applications into reusable services requires more than a magic wand.  The reason is simple: if your starting point is your legacy applications, then you will simply be bolting a web service technology layer on top of your legacy API.  And that legacy API is built in the image of the silo applications.  Enter the wide gate of the legacy API, follow the broad path of generating service interfaces from existing code, and you will arrive at the siloed enterprise destruction that you thought you were escaping. The Straight and Narrow Path This past week I had the opportunity to learn how the FBI Criminal Justice Information Systems department has been transitioning from silo applications to a service inventory.  Lafe Hutcheson, IT Specialist in the architecture group and fellow attendee at an SOA Architect Certification Workshop, was my guide.  Lafe has survived the chaos of an SOA initiative, so it is not surprising that he was able to return from a US Army deployment to Kabul, Afghanistan with nary a scratch.  According to Lafe, building their service inventory is a three-phase process: Model a business process.  This requires intense collaboration between the IT and business wings of the organization, of course.  The FBI uses IBM Websphere tools to model the process with BPMN. Identify candidate services to facilitate the business process. Convert the BPMN to an executable BPEL orchestration, model and develop the services, and use a BPEL engine to run the process.  The FBI uses ActiveVOS for orchestration services. The 12 Step Program to End Your Legacy API Addiction Thomas Erl has documented a process for building a web service inventory that is quite similar to the FBI process. Erl’s process adds a technology architecture definition phase, which allows for the technology environment to influence the inventory blueprint.  For example, if you are using an enterprise service bus, you will probably not need to build your own utility services for logging or intermediate routing.  Erl also lists a service-oriented analysis phase that highlights the 12-step process of applying the principles of service orientation to modeling your services.  Erl depicts the modeling of a service inventory as an iterative process: model a business process, define the relevant technology architecture, define the service inventory blueprint, analyze the services, then model another business process, rinse and repeat.  (Astute readers will note that Erl’s diagram, restricted to analysis and modeling process, does not include the implementation phase that concludes the FBI service development methodology.) 
The service-oriented analysis phase is where you find the 12 steps that will free you from your legacy API addiction. In a nutshell, you identify the steps in the process that need services; identify the different types of services (agnostic entity services, service compositions, and utility services) that are required; apply service-orientation principles; and normalize the inventory into cohesive service models. Rather than discuss each of the 12 steps individually, I will close by simply referring my readers to Erl’s explanation.

    Read the article

  • Shadows shimmer when camera moves

    - by Chad Layton
    I've implemented shadow maps in my simple block engine as an exercise. I'm using one directional light and using the view volume to create the shadow matrices. I'm experiencing some problems with the shadows shimmering when the camera moves and I'd like to know if it's an issue with my implementation or just an issue with basic/naive shadow mapping itself. Here's a video: http://www.youtube.com/watch?v=vyprATt5BBg&feature=youtu.be Here's the code I use to create the shadow matrices. The commented out code is my original attempt to perfectly fit the view frustum. You can also see my attempt to try clamping movement to texels in the shadow map which didn't seem to make any difference. Then I tried using a bounding sphere instead, also to no apparent effect. public void CreateViewProjectionTransformsToFit(Camera camera, out Matrix viewTransform, out Matrix projectionTransform, out Vector3 position) { BoundingSphere cameraViewFrustumBoundingSphere = BoundingSphere.CreateFromFrustum(camera.ViewFrustum); float lightNearPlaneDistance = 1.0f; Vector3 lookAt = cameraViewFrustumBoundingSphere.Center; float distanceFromLookAt = cameraViewFrustumBoundingSphere.Radius + lightNearPlaneDistance; Vector3 directionFromLookAt = -Direction * distanceFromLookAt; position = lookAt + directionFromLookAt; viewTransform = Matrix.CreateLookAt(position, lookAt, Vector3.Up); float lightFarPlaneDistance = distanceFromLookAt + cameraViewFrustumBoundingSphere.Radius; float diameter = cameraViewFrustumBoundingSphere.Radius * 2.0f; Matrix.CreateOrthographic(diameter, diameter, lightNearPlaneDistance, lightFarPlaneDistance, out projectionTransform); //Vector3 cameraViewFrustumCentroid = camera.ViewFrustum.GetCentroid(); //position = cameraViewFrustumCentroid - (Direction * (camera.FarPlaneDistance - camera.NearPlaneDistance)); //viewTransform = Matrix.CreateLookAt(position, cameraViewFrustumCentroid, Up); //Vector3[] cameraViewFrustumCornersWS = camera.ViewFrustum.GetCorners(); //Vector3[] cameraViewFrustumCornersLS = new Vector3[8]; //Vector3.Transform(cameraViewFrustumCornersWS, ref viewTransform, cameraViewFrustumCornersLS); //Vector3 min = cameraViewFrustumCornersLS[0]; //Vector3 max = cameraViewFrustumCornersLS[0]; //for (int i = 1; i < 8; i++) //{ // min = Vector3.Min(min, cameraViewFrustumCornersLS[i]); // max = Vector3.Max(max, cameraViewFrustumCornersLS[i]); //} //// Clamp to nearest texel //float texelSize = 1.0f / Renderer.ShadowMapSize; //min.X -= min.X % texelSize; //min.Y -= min.Y % texelSize; //min.Z -= min.Z % texelSize; //max.X -= max.X % texelSize; //max.Y -= max.Y % texelSize; //max.Z -= max.Z % texelSize; //// We just use an orthographic projection matrix. The sun is so far away that it's rays are essentially parallel. //Matrix.CreateOrthographicOffCenter(min.X, max.X, min.Y, max.Y, -max.Z, -min.Z, out projectionTransform); } And here's the relevant part of the shader: if (CastShadows) { float4 positionLightCS = mul(float4(position, 1.0f), LightViewProj); float2 texCoord = clipSpaceToScreen(positionLightCS) + 0.5f / ShadowMapSize; float shadowMapDepth = tex2D(ShadowMapSampler, texCoord).r; float distanceToLight = length(LightPosition - position); float bias = 0.2f; if (shadowMapDepth < (distanceToLight - bias)) { return float4(0.0f, 0.0f, 0.0f, 0.0f); } } The shimmer is slightly better if I drastically reduce the view volume but I think that's mostly just because the texels become smaller and it's harder to notice them flickering back and forth. 
    I'd appreciate any insight; I'd very much like to understand what's going on before I try other techniques.
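
    For what it's worth, shimmer under camera movement is usually attacked by keeping the orthographic size constant (which the bounding-sphere path above already does) and then snapping the shadow matrix to whole-texel increments each frame. A sketch of that second step in XNA terms, run right after viewTransform and projectionTransform are built; this is an assumed fix for illustration, not a verified patch to the code above:

        // Hypothetical stabilization step: round the shadow-space origin to the nearest
        // texel so the projection only ever moves in whole-texel steps as the bounding
        // sphere centre follows the camera.
        Matrix shadowMatrix = viewTransform * projectionTransform;
        Vector3 shadowOrigin = Vector3.Transform(Vector3.Zero, shadowMatrix);
        shadowOrigin *= Renderer.ShadowMapSize / 2.0f;

        Vector3 rounded = new Vector3(
            (float)Math.Round(shadowOrigin.X),
            (float)Math.Round(shadowOrigin.Y),
            shadowOrigin.Z);

        Vector3 roundOffset = (rounded - shadowOrigin) * (2.0f / Renderer.ShadowMapSize);
        roundOffset.Z = 0.0f;

        // Nudge the orthographic projection by the sub-texel error.
        projectionTransform.M41 += roundOffset.X;
        projectionTransform.M42 += roundOffset.Y;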

    Read the article
