Search Results

Search found 8367 results on 335 pages for 'temporal difference'.

  • (iPhone) Does it make a difference to provide more images when the object is moving in a straight line?

    - by Eugene
    Hi. Among many animation scenarios, there are times when I want an object to move along a straight line, then change direction, move along another straight line, and so forth. Assume I would use either UIImageView or CABasicAnimation with image arrays. Does it make a difference to provide more images when the object is moving in a straight line? For example: point1 --------- point2 ------- point3 (all points on a straight line). Does providing an image at point2 to UIImageView or CABasicAnimation give any better animation result, assuming I don't need to change the animation speed along the course? If I were flashing each image myself, it would make the animation look smoother, but I'm handing the images to UIImageView/CABasicAnimation, and I wonder what they do. Thank you

  • Is there any difference in the implementation of these three validation methods?

    - by dontWatchMyProfile
    Core Data calls these methods in certain situations:

        - (BOOL)validateForInsert:(NSError **)outError;
        - (BOOL)validateForUpdate:(NSError **)outError;
        - (BOOL)validateForDelete:(NSError **)outError;

    I wonder if they do anything different, or if they essentially do exactly the same thing. As far as I know, these methods call -validateValue:forKey:error: once for every property. The only difference I can imagine is in the -validateForDelete: method. I see no reason to validate an object when it is about to be deleted, except for applying delete rules, and probably only in the case of the DENY rule.

  • How to check if the sum of some records equals the difference between two other records in T-SQL?

    - by Dan Appleyard
    I have a view that contains bank account activity:

        ACCOUNT  BALANCE_ROW  AMOUNT   SORT_ORDER
        111      1              0.00   1
        111      0             10.00   2
        111      0             -2.50   3
        111      1              7.50   4
        222      1            100.00   5
        222      0             25.00   6
        222      1            125.00   7

    ACCOUNT = account number. BALANCE_ROW = 1 for either the starting or the ending balance, otherwise 0. AMOUNT = the amount. SORT_ORDER = a simple order to return the records in the order of start balance, activity, and end balance.

    I need to figure out a way to see if the sum of the non-balance rows equals the difference between the ending balance and the starting balance. The result for each account (1 for yes, 0 for no) would simply be added to the resulting result set. Example: account 111 had a starting balance of 0.00. There were two account activity records, of 10.00 and -2.50. That resulted in the ending balance of 7.50. I've been playing around with temp tables, but I was not sure if there is a more efficient way of accomplishing this. Thanks for any input you may have!
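
    A minimal sketch of the check in Python rather than T-SQL, assuming the rows arrive as tuples in the view's sort order; the variable names and the rounding tolerance are illustrative only:

        from collections import defaultdict

        # Hypothetical rows mirroring the sample data:
        # (account, balance_row, amount, sort_order)
        rows = [
            (111, 1, 0.00, 1), (111, 0, 10.00, 2), (111, 0, -2.50, 3), (111, 1, 7.50, 4),
            (222, 1, 100.00, 5), (222, 0, 25.00, 6), (222, 1, 125.00, 7),
        ]

        activity = defaultdict(float)   # sum of non-balance rows per account
        balances = defaultdict(list)    # [starting, ending] balance rows, in sort order

        for account, balance_row, amount, _ in sorted(rows, key=lambda r: r[3]):
            if balance_row:
                balances[account].append(amount)
            else:
                activity[account] += amount

        for account, (start, end) in balances.items():
            ok = 1 if abs(activity[account] - (end - start)) < 0.005 else 0
            print(account, ok)   # 111 -> 1, 222 -> 1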

  • What's the difference between Firebug's console.log() and console.debug()?

    - by 6bytes
    A very simple piece of code to illustrate the difference:

        var x = [0, 3, 1, 2];
        console.debug('debug', x);
        console.log('log', x);
        // the above display the same result

        x.splice(1, 2);

        // the below display kind of a different result
        console.debug('debug', x);
        console.log('log', x);

    The JavaScript value is exactly the same, but console.log() displays it a bit differently than before the splice() method is applied. Because of this I lost quite a few hours, as I thought splice was acting funny and making my array multidimensional or something. I just want to know why it works like that. Does anyone know? :)
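
    One plausible explanation (an assumption here, not verified against Firebug's internals) is that one of the console calls keeps a live reference to the array and renders it lazily, so later mutations show up in the output. The same aliasing effect can be sketched in Python:

        x = [0, 3, 1, 2]
        snapshot = repr(x)      # like logging a formatted copy right away
        live = x                # like logging a live reference
        del x[1:3]              # the splice(1, 2) equivalent
        print(snapshot)         # [0, 3, 1, 2] -- what the array was
        print(live)             # [0, 2]       -- what the array is now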

  • Is there any difference between these two pieces of code?

    - by Poiuyt
    First:

        #include <stdio.h>
        class A { public: int a; };
        class B : public A { private: int a; };
        int main() { B b; printf("%d", b.a); return 0; }

    Second:

        #include <stdio.h>
        class A { public: int a; };
        class B : private A {};
        int main() { B b; printf("%d", b.a); return 0; }

    I ask because I get different errors:

        error: 'int B::a' is private
        error: 'int A::a' is inaccessible

    Apart from what the errors might reveal, is there any difference at all in the behaviour of these two pieces of code?

  • C: What is the difference between an array of structs and an array of struct pointers, and when is one useful over the other?

    - by aks
    I have a C programming doubt: I want to know the difference between the two declarations below and where one is more useful than the other. Suppose I have a struct for an employee:

        struct emp {
            char first_name[10];
            char last_name[10];
            char key[10];
        };

    Now I want to store a table of employee records. Which method should I use:

        struct emp e1[100];
        /* or */
        struct emp *e1[100];

    I know the two are not the same, but I would like to know a use case where the second declaration would be of interest and more advantageous to use. Can someone clarify?

  • Using GCC 4.2 to compile *.mm files is very, very slow, but LLVM does a very good job. What's the difference?

    - by jianhua
    My project is an Objective-C and C++ hybrid, filled with both *.m and *.mm files. When compiling with GCC 4.2, the *.m Objective-C source files compile very fast, but the *.mm files are very, very slow. LLVM 2.0 does a very good job: it is very fast for both *.m and *.mm. My question: is there any difference between LLVM and GCC 4.2 when compiling *.mm files? Why is GCC 4.2 so slow? Any idea or discussion will be appreciated; thanks in advance. ENV: Xcode 4.0.1

  • ExpressJS: What is the difference between app.locals and res.locals?

    - by aeyang
    I'm trying to learn Express, and in my app I have middleware that passes the session object from the Request object to my Response object so that I can access it in my views:

        app.use((req, res, next) ->
          res.locals.session = req.session
          next()
        )

    But app.locals is available to the view as well, right? So is it the same if I do app.locals.session = req.session? Is there a convention for the types of things app.locals and res.locals are used for? I was also confused about the difference between res.render() and res.redirect(). When should each be used? Thanks for reading. Any help related to Express is appreciated!

  • TCP flags in iptables: what's the difference between "RST SYN", "RST", and "SYN RST"? When should ALL be used?

    - by Kris
    I'm working on a firewall for a virtual dedicated server, and one of the things I'm looking into is port scanners. TCP flags are used for protection. I have two questions. The rule:

        -p tcp --tcp-flags SYN,ACK,FIN,RST SYN -j DROP

    My reading of it:

    - The first argument says: check packets with the SYN flag.
    - The second argument says: make sure the flags ACK, FIN, RST, SYN are set.
    - When that's the case (there's a match), drop the TCP packet.

    First question: I understand the meaning of RST and RST/ACK, but in the second argument "RST SYN" is being used. What's the difference between "RST SYN", "RST", and "SYN RST"? Is there a "SYN RST" flag in a 3-way handshake?

    The second question is about the difference between

        -p tcp --tcp-flags SYN,ACK,FIN,RST SYN -j DROP

    and

        -p tcp --tcp-flags ALL SYN,ACK,FIN,RST SYN -j DROP

    When should ALL be used? When I use ALL, does that mean that if the TCP packet with the SYN flag doesn't have the ACK and the FIN and the RST SYN flags set, there will be no match?
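
    For reference, a small Python sketch of the matching rule iptables documents for --tcp-flags (the first argument is the mask of flags to examine; the second is the set that must be on within that mask). The flag values follow the TCP header bit layout; the function name is made up for illustration:

        # TCP flag bits as laid out in the TCP header
        FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

        def tcp_flags_match(packet_flags, mask, comp):
            """--tcp-flags MASK COMP: of the flags in MASK, exactly those in
            COMP must be set; flags outside MASK are ignored."""
            return packet_flags & mask == comp

        mask = SYN | ACK | FIN | RST
        print(tcp_flags_match(SYN, mask, SYN))        # True  -> matched by the rule above
        print(tcp_flags_match(SYN | ACK, mask, SYN))  # False -> not matched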

  • How to implement caching in web apps?

    - by Jhonnytunes
    This is really two questions. I'm doing a university project for storing baseball players' statistics, and from the raw baseball data I have to calculate the score by year for the player being displayed. The background is: let's say 10,000 users hit the player "Alex Rodriguez"; the application would have to calculate A-Rod's stats by year 10,000 times instead of just reading them from somewhere they are temporarily saved. Here I go: What is the best method for caching this type of data? Do I use the same database, with some temporary values stored in it, or create a Web Service for that? What reading about web caching do you recommend?
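
    One common approach is a small time-limited cache in front of the expensive aggregation. A minimal in-process sketch in Python, assuming a `compute` callable that does the real work; the names and the 5-minute TTL are illustrative only:

        import time

        # Hypothetical cache: player name -> (expiry timestamp, computed stats)
        _cache = {}
        TTL_SECONDS = 300  # recompute at most every 5 minutes

        def stats_by_year(player, compute):
            """Return cached stats for `player`, recomputing via `compute` when stale."""
            now = time.time()
            hit = _cache.get(player)
            if hit and hit[0] > now:
                return hit[1]
            value = compute(player)          # the expensive aggregation over raw data
            _cache[player] = (now + TTL_SECONDS, value)
            return value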

  • Using Subjects to Deploy Queries Dynamically

    - by Roman Schindlauer
    In the previous blog posting, we showed how to construct and deploy query fragments to a StreamInsight server, and how to re-use them later. In today's posting we'll integrate this pattern into a method of dynamically composing a new query with an existing one.

    The construct that enables this scenario in StreamInsight V2.1 is a Subject. A Subject lets me create a junction element in an existing query that I can tap into while the query is running. To set this up as an end-to-end example, let's first define a stream simulator as our data source:

        var generator = myApp.DefineObservable(
            (TimeSpan t) => Observable.Interval(t).Select(_ => new SourcePayload()));

    This 'generator' produces a new instance of SourcePayload with a period of t (system time) as an IObservable. SourcePayload happens to have a property of type double as its payload data. Let's also define a sink for our example, an IObserver of double values that writes to the console:

        var console = myApp.DefineObserver(
            (string label) => Observer.Create<double>(e => Console.WriteLine("{0}: {1}", label, e)))
            .Deploy("ConsoleSink");

    The observer takes a string as parameter, which is used as a label on the console, so that we can distinguish the output of different sink instances. Note that we also deploy this observer, so that we can retrieve it later from the server from a different process. Remember how we defined the aggregation as an IQStreamable function in the previous article? We will use that as well:

        var avg = myApp
            .DefineStreamable((IQStreamable<SourcePayload> s, TimeSpan w) =>
                from win in s.TumblingWindow(w)
                select win.Avg(e => e.Value))
            .Deploy("AverageQuery");

    Then we define the Subject, which acts as an observable sequence as well as an observer. Thus, we can feed a single source into the Subject and have multiple consumers (which can come and go at runtime) on the other side:

        var subject = myApp.CreateSubject("Subject", () => new Subject<SourcePayload>());

    Subjects are always deployed automatically. Their name is used to retrieve them from a (potentially) different process (see below). Note that the Subject as we defined it here doesn't know anything about temporal streams. It is merely a sequence of SourcePayloads, without any notion of StreamInsight point events or CTIs. So in order to compose a temporal query on top of the Subject, we need to 'promote' the sequence of SourcePayloads into an IQStreamable of point events, including CTIs:

        var stream = subject.ToPointStreamable(
            e => PointEvent.CreateInsert<SourcePayload>(e.Timestamp, e),
            AdvanceTimeSettings.StrictlyIncreasingStartTime);

    In a later posting we will show how to use Subjects that have more awareness of time and can be used as a junction between QStreamables instead of IQbservables. Having turned the Subject into a temporal stream, we can now define the aggregate on this stream. We will use the IQStreamable entity avg that we defined above:

        var longAverages = avg(stream, TimeSpan.FromSeconds(5));

    In order to run the query, we need to bind it to a sink, and bind the subject to the source:

        var standardQuery = longAverages
            .Bind(console("5sec average"))
            .With(generator(TimeSpan.FromMilliseconds(300)).Bind(subject));

    Lastly, we start the process:

        standardQuery.Run("StandardProcess");

    Now we have a simple query running end-to-end, producing results.

    What follows next is the crucial part: tapping into the Subject and adding another query that runs in parallel, using the same query definition (the "AverageQuery") but with a different window length. We are assuming that we connected to the same StreamInsight server from a different process or even client, and thus have to retrieve the previously deployed entities through their names:

        // simulate the addition of a 'fast' query from a separate server connection,
        // by retrieving the aggregation query fragment
        // (instead of simply using the 'avg' object)
        var averageQuery = myApp
            .GetStreamable<IQStreamable<SourcePayload>, TimeSpan, double>("AverageQuery");

        // retrieve the input sequence as a subject
        var inputSequence = myApp
            .GetSubject<SourcePayload, SourcePayload>("Subject");

        // retrieve the registered sink
        var sink = myApp.GetObserver<string, double>("ConsoleSink");

        // turn the sequence into a temporal stream
        var stream2 = inputSequence.ToPointStreamable(
            e => PointEvent.CreateInsert<SourcePayload>(e.Timestamp, e),
            AdvanceTimeSettings.StrictlyIncreasingStartTime);

        // apply the query, now with a different window length
        var shortAverages = averageQuery(stream2, TimeSpan.FromSeconds(1));

        // bind new sink to query and run it
        var fastQuery = shortAverages
            .Bind(sink("1sec average"))
            .Run("FastProcess");

    The attached solution demonstrates the sample end-to-end.

    Regards,
    The StreamInsight Team

  • What is the difference between AF_INET and PF_INET constants?

    - by Denilson Sá
    Looking at examples of socket programming, we can see that some people use AF_INET while others use PF_INET. In addition, sometimes both of them are used in the same example. The question is: is there any difference between them? Which one should we use? If you can answer that, another question would be: why are there these two similar (but equal) constants? What I've discovered so far:

    The socket manpage. In (Unix) socket programming, we have the socket() function that receives the following parameters:

        int socket(int domain, int type, int protocol);

    The manpage says: "The domain argument specifies a communication domain; this selects the protocol family which will be used for communication. These families are defined in <sys/socket.h>." And the manpage cites AF_INET as well as some other AF_ constants for the domain parameter. Also, in the NOTES section of the same manpage, we can read: "The manifest constants used under 4.x BSD for protocol families are PF_UNIX, PF_INET, etc., while AF_UNIX etc. are used for address families. However, already the BSD man page promises: 'The protocol family generally is the same as the address family', and subsequent standards use AF_* everywhere."

    The C headers. sys/socket.h does not actually define those constants, but instead includes bits/socket.h. This file defines around 38 AF_ constants and 38 PF_ constants like this:

        #define PF_INET 2   /* IP protocol family. */
        #define AF_INET PF_INET

    Python. The Python socket module is very similar to the C API. However, there are many AF_ constants but only one PF_ constant (PF_PACKET). Thus, in Python we have no choice but to use AF_INET. I think this decision to include only the AF_ constants follows one of the guiding principles: "There should be one-- and preferably only one --obvious way to do it." (The Zen of Python)
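
    To make the Python point concrete, a minimal client sketch in which only the AF_ spelling appears (the host and port are illustrative, and it assumes outbound network access):

        import socket

        # AF_INET + SOCK_STREAM: an ordinary IPv4 TCP socket
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(("example.com", 80))
        s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(s.recv(200))   # first bytes of the HTTP response
        s.close()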

  • Understanding the JavaScript difference between calling a function and saving the function to execute later.

    - by Squeegy
    I'm trying to understand the difference between foo.bar() and:

        var fn = foo.bar;
        fn();

    I've put together a little example, but I don't totally understand why the failing ones actually fail:

        var Dog = function() {
          this.bark = "Arf";
        };

        Dog.prototype.woof = function() {
          $('ul').append('<li>' + this.bark + '</li>');
        };

        var dog = new Dog();

        // works, obviously
        dog.woof();

        // works
        (dog.woof)();

        // FAILS
        var fnWoof = dog.woof;
        fnWoof();

        // works
        setTimeout(function() { dog.woof(); }, 0);

        // FAILS
        setTimeout(dog.woof, 0);

    Which produces:

        Arf
        Arf
        undefined
        Arf
        undefined

    On JSFiddle: http://jsfiddle.net/D6Vdg/1/

    So it appears that snapping off a function causes it to lose its context. OK. But why then does (dog.woof)(); work? It's all just a bit confusing figuring out what's going on here. There are obviously some core semantics I'm just not getting.
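
    A loose Python analogy, added for illustration only: the contrast runs the other way around, because Python binds the receiver when you take a method off an instance, while JavaScript does not. A sketch with the Dog class reimagined in Python:

        class Dog:
            def __init__(self):
                self.bark = "Arf"
            def woof(self):
                print(self.bark)

        dog = Dog()
        fn = dog.woof     # Python packages the receiver with the function...
        fn()              # ...so this prints "Arf" instead of failing
        raw = Dog.woof    # the bare function, like a detached JS method
        raw(dog)          # the caller must now supply the receiver itself
        # raw() alone would fail, which is the JS situation: `this` is lost.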

  • Relational vs. Dimensional Databases, what's the difference?

    - by grautur
    I'm trying to learn about OLAP and data warehousing, and I'm confused about the difference between relational and dimensional modeling. Is dimensional modeling basically relational modeling, but allowing for redundant/un-normalized data? For example, let's say I have historical sales data on (product, city, # sales). I understand that the following would be a relational point-of-view:

        Product | City          | # Sales
        Apples  | San Francisco | 400
        Apples  | Boston        | 700
        Apples  | Seattle       | 600
        Oranges | San Francisco | 550
        Oranges | Boston        | 500
        Oranges | Seattle       | 600

    While the following is a more dimensional point-of-view:

        Product | San Francisco | Boston | Seattle
        Apples  | 400           | 700    | 600
        Oranges | 550           | 500    | 600

    But it seems like both points of view would nonetheless be implemented in an identical star schema:

        Fact table:        Product ID, Region ID, # Sales
        Product dimension: Product ID, Product Name
        City dimension:    City ID, City Name

    And it's not until you start adding some additional details to each dimension that the differences start popping up. For instance, if you wanted to track regions as well, a relational database would tend to have a separate region table, in order to keep everything normalized:

        City dimension:   City ID, City Name, Region ID
        Region dimension: Region ID, Region Name, Region Manager, # Regional Stores

    While a dimensional database would allow for denormalization to keep the region data inside the city dimension, in order to make it easier to slice the data:

        City dimension: City ID, City Name, Region Name, Region Manager, # Regional Stores

    Is this correct?

  • What exactly is the difference between the DreamHost IDE and NetBeans?

    - by mikemick
    I just started using NetBeans about a week ago and really like it so far. Now I'm seeing something about DreamHost IDE, which I guess is a program built on the NetBeans platform. I use DreamHost as the hosting company for many of my projects. What is the benefit of using DreamHost IDE over NetBeans? Documentation on the software is non-existent from what I can tell (not even a mention in the DreamHost wiki). All I was able to find was a short description of it on a SourceForge download page, and a short silent video on YouTube demoing it. So I guess I'm asking: what features does it bring to the table, and what is the difference between it and NetBeans? The description on the SourceForge page is as follows (typos retained)...

        DreamHost IDE is php and ruby integrated development environment built on NetBeans IDE
        and provides easy deploy of your applications to the DreamHost services. Also provides
        you an easy eay hew to setup these services.

    Maybe the answer is in the description, and I just don't comprehend it?

  • Is there a significant mechanical difference between these faux simulations of default parameters?

    - by ccomet
    C# 4.0 introduced a very fancy and useful thing by allowing default parameters in methods, but C# 3.0 doesn't have them. So if I want to simulate "default parameters", I have to create two of that method: one with those arguments and one without. There are two ways I could do this.

    Version A - call the other method:

        public string CutBetween(string str, string left, string right, bool inclusive)
        {
            return str.CutAfter(left, inclusive).CutBefore(right, inclusive);
        }

        public string CutBetween(string str, string left, string right)
        {
            return CutBetween(str, left, right, false);
        }

    Version B - copy the method body:

        public string CutBetween(string str, string left, string right, bool inclusive)
        {
            return str.CutAfter(left, inclusive).CutBefore(right, inclusive);
        }

        public string CutBetween(string str, string left, string right)
        {
            return str.CutAfter(left, false).CutBefore(right, false);
        }

    Is there any real difference between these? This isn't a question about optimization or resource usage or anything (though part of it is my general goal of remaining consistent). I don't even think there is any significant effect in picking one method or the other, but I find it wiser to ask about these things than to perchance faultily assume.

  • Difference between HttpContext.User and Thread.CurrentPrincipal, and when to use them?

    - by yamspog
    I have just recently run into an issue running an ASP.NET web app under Visual Studio 2008. I get the error 'type is not resolved for member...customUserPrincipal'. Tracking down various discussion groups, it seems that there is an issue with Visual Studio's web server when you assign a custom principal to Thread.CurrentPrincipal. In my code, I now use:

        HttpContext.Current.User = myCustomPrincipal;
        //Thread.CurrentPrincipal = myCustomPrincipal;

    I'm glad that I got the error out of the way, but it begs the question: what is the difference between these two methods of setting a principal? There are other Stack Overflow questions related to the differences, but they don't get into the details of the two approaches. I did find one tantalizing post that had the following grandiose comment but no explanation to back up its assertions:

        Use HttpContext.Current.User for all web (ASPX/ASMX) applications.
        Use Thread.CurrentPrincipal for all other applications like winForms,
        console and windows service applications.

    Can any of you security/dot.net gurus shed some light on this subject?

  • What is the difference between these two LINQ implementations?

    - by Mahesh Velaga
    I was going through Jon Skeet's "Reimplementing LINQ to Objects" series. In the article on implementing Where, I found the following snippets, but I don't get what advantage we gain by splitting the original method into two.

    Original method:

        // Naive validation - broken!
        public static IEnumerable<TSource> Where<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            if (source == null)
            {
                throw new ArgumentNullException("source");
            }
            if (predicate == null)
            {
                throw new ArgumentNullException("predicate");
            }
            foreach (TSource item in source)
            {
                if (predicate(item))
                {
                    yield return item;
                }
            }
        }

    Refactored method:

        public static IEnumerable<TSource> Where<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            if (source == null)
            {
                throw new ArgumentNullException("source");
            }
            if (predicate == null)
            {
                throw new ArgumentNullException("predicate");
            }
            return WhereImpl(source, predicate);
        }

        private static IEnumerable<TSource> WhereImpl<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            foreach (TSource item in source)
            {
                if (predicate(item))
                {
                    yield return item;
                }
            }
        }

    Jon says it's for eager validation, with the rest deferred. But I don't get it. Could someone please explain in a little more detail what the difference between these two functions is, and why the validations are performed eagerly in one and not in the other?

    Conclusion/Solution: I got confused due to my lack of understanding of which functions are determined to be iterator-generators. I assumed it was based on the signature of a method, like IEnumerable<T>. But, based on the answers, now I get it: a method is an iterator-generator if it uses yield statements.
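
    The same split can be sketched in Python, whose generator functions defer execution exactly like C# iterator blocks; the function names here are made up for illustration:

        def where_naive(source, predicate):
            # yield makes this a generator: nothing below runs until iteration
            if source is None:
                raise ValueError("source")
            for item in source:
                if predicate(item):
                    yield item

        def where(source, predicate):
            # no yield here: the check runs as soon as the function is called
            if source is None:
                raise ValueError("source")
            return _where_impl(source, predicate)

        def _where_impl(source, predicate):
            for item in source:
                if predicate(item):
                    yield item

        broken = where_naive(None, str.isdigit)   # no error yet!
        try:
            next(broken)                          # the check only fires here
        except ValueError:
            print("naive: validated lazily")
        try:
            where(None, str.isdigit)              # fires immediately
        except ValueError:
            print("split: validated eagerly")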

  • Is it possible to create a Mac OS specific CSS to fix a font difference?

    - by Gabriel
    I'm working on a project with a designer, and he insisted on using a specific font for titles and various elements on the page. So we're using a font kit embedded with @font-face. It's working perfectly on PC (Firefox, IE 7 and 8, Chrome, Safari), but on Mac OS (Safari and Firefox) the fonts are not vertically aligned the same way. After looking on the Web, I didn't find any solution for this except "there have always been differences between browsers and platforms; live with it". I know that fonts are never rendered exactly the same across platforms, but this time it's not something like the font looking more bold. The font looks as if its baseline is completely different between Windows and Mac OS X. On Mac OS, the font, at a size of 16px, is 3px higher than on PC. So I'm looking for a backup solution: is there a way to create a CSS specifically for Mac OS users? I do not want to target only Safari, because Safari on PC is OK and Firefox on Mac is not OK. Or if you have a solution to fix the baseline difference that does not require a specific CSS file, I'd be happy to hear it. Thanks!

  • How to combine two rows and calculate the time difference between two timestamp values in MySQL?

    - by Nadar
    I have a situation that I'm sure is quite common, and it's really bothering me that I can't figure out how to do it or what to search for to find a relevant example or solution. I'm relatively new to MySQL (having used MSSQL and PostgreSQL earlier), and every approach I can think of is blocked by some feature lacking in MySQL. I have a "log" table that simply lists many different events with their timestamp (stored as the datetime type). There are lots of data and columns in the table not relevant to this problem, so let's say we have a simple table like this:

        CREATE TABLE log (
            id INT NOT NULL AUTO_INCREMENT,
            name VARCHAR(16),
            ts DATETIME NOT NULL,
            eventtype VARCHAR(25),
            PRIMARY KEY (id)
        )

    Let's say that some rows have eventtype = 'start' and others have eventtype = 'stop'. What I want to do is to somehow couple each "start" row with its "stop" row and find the time difference between the two (and then sum the durations per name, but that's not where the problem lies). Each "start" event should have a corresponding "stop" event occurring at some stage later than the "start" event, but because of problems/bugs/crashes with the data collector, it could be that some are missing. In that case I would like to disregard the event without a "partner". That means that given the data:

        foo, 2010-06-10 19:45, start
        foo, 2010-06-10 19:47, start
        foo, 2010-06-10 20:13, stop

    ...I would like to just disregard the 19:45 start event, and not get two result rows that both use the 20:13 stop event as the stop time. I've tried to join the table with itself in different ways, but the key problem for me seems to be finding a way to correctly identify the corresponding "stop" event for the "start" event of a given "name". The problem is exactly the same as if you had a table of employees stamping in and out of work and wanted to find out how much they actually were at work. I'm sure there must be well-known solutions to this, but I can't seem to find them...
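
    For reference, the pairing logic itself (discarding an unmatched start) in a Python sketch rather than SQL; the row tuples mirror the sample data, and the variable names are illustrative:

        from collections import defaultdict
        from datetime import datetime, timedelta

        # Hypothetical log rows: (name, ts, eventtype), as in the question's table
        log = [
            ("foo", datetime(2010, 6, 10, 19, 45), "start"),
            ("foo", datetime(2010, 6, 10, 19, 47), "start"),
            ("foo", datetime(2010, 6, 10, 20, 13), "stop"),
        ]

        totals = defaultdict(timedelta)   # summed duration per name
        pending = {}                      # name -> latest unmatched 'start'

        for name, ts, eventtype in sorted(log, key=lambda row: row[1]):
            if eventtype == "start":
                pending[name] = ts        # an earlier unmatched start is dropped
            elif eventtype == "stop" and name in pending:
                totals[name] += ts - pending.pop(name)

        print(dict(totals))   # {'foo': timedelta(seconds=1560)} -> 26 minutes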

  • What is the difference between _tmain() and main() in C++?

    - by joshcomley
    If I run my C++ application with the following main() method, everything is OK:

        int main(int argc, char *argv[])
        {
            cout << "There are " << argc << " arguments:" << endl;
            // Loop through each argument and print its number and value
            for (int i = 0; i < argc; i++)
                cout << i << " " << argv[i] << endl;
            return 0;
        }

    I get what I expect, and my arguments are printed out. However, if I use _tmain:

        int _tmain(int argc, char *argv[])
        {
            cout << "There are " << argc << " arguments:" << endl;
            // Loop through each argument and print its number and value
            for (int i = 0; i < argc; i++)
                cout << i << " " << argv[i] << endl;
            return 0;
        }

    It just displays the first character of each argument. What is the difference causing this?

  • Difference between SET autocommit=1 and START TRANSACTION in MySQL (Have I missed something?)

    - by tkolar
    Hey there, I am reading up on transactions in MySQL and am not sure whether I have grasped something specific correctly, and I want to be sure, so here goes. I know what a transaction is supposed to do; I'm just not sure whether I understood the statement semantics or not. So, my question is: is anything wrong (and, if that is the case, what is wrong) with the following?

    - By default, autocommit mode is enabled in MySQL.
    - SET autocommit=0; will begin a transaction; SET autocommit=1; will implicitly commit.
    - It is possible to COMMIT; as well as ROLLBACK;, in both of which cases autocommit is still set to 0 afterwards (and a new transaction is implicitly started).
    - START TRANSACTION; will basically SET autocommit=0; until a COMMIT; or ROLLBACK; takes place.

    In other words, START TRANSACTION; and SET autocommit=0; are equivalent, except that START TRANSACTION; does the equivalent of implicitly adding a SET autocommit=1; after the COMMIT; or ROLLBACK;

    If that is the case, I don't understand http://dev.mysql.com/doc/refman/5.5/en/set-transaction.html#isolevel_serializable, seeing as having an isolation level implies that there is a transaction, meaning that autocommit should be off anyway? And if there is another difference (other than the one described above) between beginning a transaction and setting autocommit, what is it? Thanks a lot in advance for your help!

  • Is there any fundamental difference between piping in Mac and Linux?

    - by Mohammad Moghimi
    ps -e | grep bash

    Sample output from a Linux machine:

        1128  pts/14  00:00:00 bash
        7491  pts/7   00:00:00 bash
        12651 pts/14  00:00:00 bash
        16145 pts/2   00:00:00 bash

    Sample output from a Mac machine:

        58352 ttys000  0:00.09 login -pfl username /bin/bash -c exec -la bash /bin/bash
        58353 ttys000  0:00.02 -bash
        58390 ttys000  0:00.00 grep bash
        20372 ttys005  0:00.06 login -pfl username /bin/bash -c exec -la bash /bin/bash
        20373 ttys005  0:00.18 -bash

    My question is: why do we see "grep bash" in the second case but not in the first?

  • What is the difference between Anycast and GeoDNS / GeoIP with respect to HA?

    - by Riyad
    Based on the Wikipedia description of Anycast, it includes both the distribution of a domain-name-to-many-IP-mapping across many DNS servers as well as replying to clients with the most geographically close (or fastest) server. In the context of a globally distributed, highly available site like google.com (or any CDN service with many global edge locations) this sounds like the two key features one would need. DNS services like Amazon's Route53, EasyDNS and DNSMadeEasy all advertise themselves as Anycast-enabled networks. Therefore my assumption is that each of these DNS services transparently offer me those two killer features: multi-IP-to-domain mapping AND routing clients to the closest node. However, each of these services seem to separate out these two functionalities, referring to the 2nd one (routing clients to closest node) as "GeoDNS", "GeoIP" or "Global Traffic Director" and charge extra for the service. If a core tenant of an Anycast-capable system is to already do this, why is this functionality being earmarked as this extra feature? What is this "GeoDNS" feature doing that a standard Anycast DNS service won't do (according to the definition of Anycast from Wikipedia -- I understand what is being advertised, just not why it isn't implied already). I get extra-confused when a DNS service like Route53 that doesn't support this nebulous "GeoDNS" feature lists functionality like: Fast – Using a global anycast network of DNS servers around the world, Route 53 is designed to automatically route your users to the optimal location depending on network conditions. As a result, the service offers low query latency for your end users, as well as low update latency for your DNS record management needs. ... which sounds exactly like what GeoDNS is intended to do, but geographically directing clients is something they explicitly don't support it yet. Ultimately I am looking for the two following features from a DNS provider: Map multiple IP addresses to a single domain name (like google.com, amazon.com, etc. does) Utilize a DNS service that will respond to client requests for that domain with the IP address of the nearest server to the requestee. As mentioned, it seems like this is all part of an "Anycast" DNS service (all of which these services are), but the features and marketing I see from them suggest otherwise, making me think I need to learn a bit more about how DNS works before making a deployment choice. Thanks in advance for any clarifications.

  • How to use bzdiff to find the difference between two bzipped files with diff's -I option?

    - by englebip
    I'm trying to do a diff on MySQL dumps (created with mysqldump and piped to bzip2) to see if there are changes between consecutive dumps. The following are the tails of two dumps.

    tmp1:

        /*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
        /*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
        /*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
        /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
        /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
        /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
        /*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;
        -- Dump completed on 2011-03-11 1:06:50

    tmp2:

        /*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
        /*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
        /*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
        /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
        /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
        /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
        /*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;
        -- Dump completed on 2011-03-11 0:40:11

    When I bzdiff their bzipped versions:

        $ bzdiff tmp?.bz2
        10c10
        < -- Dump completed on 2011-03-11 1:06:50
        ---
        > -- Dump completed on 2011-03-11 0:40:11

    According to the manual for bzdiff, any option passed to bzdiff is passed on to diff. I therefore looked at the -I option, which allows one to define a regexp; lines matching it are ignored in the diff. When I then try:

        $ bzdiff -I'Dump' tmp1.bz2 tmp2.bz2

    I get an empty diff. I would like to match as much as possible of the "Dump completed" line, though, but when I try:

        $ bzdiff -I'Dump completed' tmp1.bz2 tmp2.bz2
        diff: extra operand `/tmp/bzdiff.miCJEvX9E8'
        diff: Try `diff --help' for more information.

    The same thing happens for some variations:

        $ bzdiff '-IDump completed' tmp1.bz2 tmp2.bz2
        $ bzdiff '-I"Dump completed"' tmp1.bz2 tmp2.bz2
        $ bzdiff -'"IDump completed"' tmp1.bz2 tmp2.bz2

    If I diff the un-bzipped files there is no problem:

        $ diff -I'^[-][-] Dump completed on' tmp1 tmp2

    also gives an empty diff. bzdiff is a shell script, usually placed in /bin/bzdiff. Essentially, it parses the command options and passes them on to diff as follows:

        OPTIONS=
        FILES=
        for ARG
        do
            case "$ARG" in
            -*) OPTIONS="$OPTIONS $ARG";;
            *)  if test -f "$ARG"; then
                    FILES="$FILES $ARG"
                else
                    echo "${prog}: $ARG not found or not a regular file"
                    exit 1
                fi ;;
            esac
        done
        [...]
        bzip2 -cdfq "$1" | $comp $OPTIONS - "$tmp"

    I think the problem stems from the escaping of the spaces when $OPTIONS is passed to diff, but I couldn't figure out how to get it interpreted correctly. Any ideas?

    EDIT @DerfK: Good point with the ., I had forgotten about them... I tried the suggestion with the multiple levels of quotes, but that is still not recognized:

        $ bzdiff "-I'\"Dump.completed.on\"'" tmp1.bz2 tmp2.bz2
        diff: extra operand `/tmp/bzdiff.Di7RtihGGL'
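
    The asker's suspicion looks right: in the script, $OPTIONS is expanded unquoted, so the shell re-splits it on whitespace and the word after the space becomes an extra operand for diff. A Python sketch of that splitting, for illustration only:

        # What bzdiff accumulated after the caller's shell already stripped the quotes:
        options = "-IDump completed"
        # Unquoted $OPTIONS expansion splits on whitespace, so diff receives:
        argv = options.split()
        print(argv)   # ['-IDump', 'completed'] -- 'completed' looks like a file
                      # operand to diff, hence "diff: extra operand"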
