Search Results

Search found 16772 results on 671 pages for 'charles long'.

  • Timeout Exceptions

    - by Raihan Jamal
    This is my code below; I am confused about why this is happening. In this code, getLocationByIpTimeout is a method to which I am passing two things: the IP address and the timeout. I will get a TimeoutException if the response does not come back within 5 ms. When I run the code below I get a few timeout exceptions, but the thing that confuses me is this: if I am getting timeout exceptions (the time taken to get the response is greater than 5 ms), why does the program enter the if block where difference > 5? What can be the possible reason for this? Is it because of the catch block? Any suggestions will be appreciated.

        long runs = 10000;
        long difference = 0;
        while (runs > 0) {
            String ipAddress = generateIPAddress();
            long start_time = System.nanoTime();
            try {
                resp = PersonalizationGeoLocationServiceClientHelper.getLocationByIpTimeout(ipAddress, 5);
            } catch (TimeoutException e) {
                System.out.println("Timeout Exception");
            }
            long end_time = System.nanoTime();
            if (resp == null || (resp.getLocation() == null)) {
                difference = 0;
            } else if (resp.getLocation() != null) {
                difference = (end_time - start_time) / 1000000;
            }
            if (difference > 5) {
                System.out.println("Debug");
            }
        }
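
    One likely explanation: resp is declared outside the loop, so when the call times out the catch block swallows the exception, but resp still holds the response from the previous successful iteration; the long elapsed time of the timed-out call is then attributed to that stale response, which is why the difference > 5 branch is reached. A minimal sketch of the loop with resp cleared before each call (otherwise reusing the names from the post; the runs-- at the end is also an assumption, since the loop as posted never decrements it):

        while (runs > 0) {
            String ipAddress = generateIPAddress();
            resp = null;                              // forget the previous iteration's response
            long startTime = System.nanoTime();
            try {
                resp = PersonalizationGeoLocationServiceClientHelper.getLocationByIpTimeout(ipAddress, 5);
            } catch (TimeoutException e) {
                System.out.println("Timeout Exception");
            }
            long elapsedMs = (System.nanoTime() - startTime) / 1000000;
            // only a response that actually arrived in this iteration is measured
            if (resp != null && resp.getLocation() != null && elapsedMs > 5) {
                System.out.println("Debug: " + elapsedMs + " ms");
            }
            runs--;
        }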

    Read the article

  • How to cancel a deeply nested process

    - by Mystere Man
    I have a class that is a "manager" sort of class. One of its functions is to signal that the long-running process of the class should shut down. It does this by setting a boolean called "IsStopping" in the class.

        public class Foo
        {
            bool isStopping;

            void DoWork()
            {
                while (!isStopping)
                {
                    // do work...
                }
            }
        }

    Now, DoWork() was a gigantic function, and I decided to refactor it; as part of the process I broke some of it out into other classes. The problem is, some of these classes also have long-running functions that need to check whether isStopping is true.

        public class Foo
        {
            bool isStopping;

            void DoWork()
            {
                while (!isStopping)
                {
                    MoreWork mw = new MoreWork();
                    mw.DoMoreWork(); // possibly long running
                    // do work...
                }
            }
        }

    What are my options here? I have considered passing isStopping by reference, which I don't really like because it requires there to be an outside object. I would prefer to make the additional classes as stand-alone and dependency-free as possible. I have also considered making isStopping a property and having it raise an event that the inner classes could subscribe to, but this seems overly complex. Another option was to create a "Process Cancellation Token" class, similar to what .NET 4 Tasks use, and pass that token to those classes. How have you handled this situation? EDIT: Also consider that MoreWork might have an EvenMoreWork object that it instantiates and calls a potentially long-running method on... and so on. I guess what I'm looking for is a way to signal an arbitrary number of objects down a call tree to tell them to stop what they're doing, clean up and return.
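
    The cancellation-token idea generalises well to an arbitrarily deep call tree: the owner of the work creates one shared, thread-safe flag and hands it to every worker, and each long-running loop polls it. A minimal sketch of that pattern, written in Java here since this listing mixes languages (the class names are made up; in .NET 4 the built-in CancellationToken plays the same role):

        import java.util.concurrent.atomic.AtomicBoolean;

        // The token is just a shared flag; whoever owns the work graph cancels it once.
        class CancellationToken {
            private final AtomicBoolean stopping = new AtomicBoolean(false);
            public void cancel() { stopping.set(true); }
            public boolean isCancelled() { return stopping.get(); }
        }

        class MoreWork {
            void doMoreWork(CancellationToken token) {
                while (!token.isCancelled()) {
                    // ...possibly long-running inner work; the same token can be passed further down...
                }
            }
        }

        class Foo {
            private final CancellationToken token = new CancellationToken();

            void stop() { token.cancel(); }       // signals everything in the call tree at once

            void doWork() {
                MoreWork mw = new MoreWork();
                while (!token.isCancelled()) {
                    mw.doMoreWork(token);         // the nested worker checks the same flag
                    // do work...
                }
            }
        }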

    Read the article

  • Strange performance behaviour

    - by plastilino
    I'm puzzled by this. On my machine: Direct calculation: 375 ms. Method calculation: 3594 ms, about TEN times SLOWER. If I place the method calculation BEFORE the direct calculation, both times are SIMILAR. Would you check it on your machine?

        class Test {
            static long COUNT = 50000 * 10000;
            private static long BEFORE;

            /*--------METHOD---------*/
            public static final double hypotenuse(double a, double b) {
                return Math.sqrt(a * a + b * b);
            }

            /*--------TIMER---------*/
            public static void getTime(String text) {
                if (BEFORE == 0) {
                    BEFORE = System.currentTimeMillis();
                    return;
                }
                long now = System.currentTimeMillis();
                long elapsed = (now - BEFORE);
                BEFORE = System.currentTimeMillis();
                if (text.equals("")) {
                    return;
                }
                String message = "\r\n" + text + "\r\n" + "Elapsed time: " + elapsed + " ms";
                System.out.println(message);
            }

            public static void main(String[] args) {
                double a = 0.2223221101;
                double b = 122333.167;
                getTime("");
                /*--------DIRECT CALCULATION---------*/
                for (int i = 1; i < COUNT; i++) {
                    Math.sqrt(a * a + b * b);
                }
                getTime("Direct: ");
                /*--------METHOD---------*/
                for (int k = 1; k < COUNT; k++) {
                    hypotenuse(a, b);
                }
                getTime("Method: ");
            }
        }
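
    The order dependence is most likely a benchmarking artifact rather than a real cost of the method call: whichever loop runs first pays the JIT warm-up, and because the results of the calculation are discarded, the JIT is also free to optimise the loop bodies away once they are compiled. A sketch of a fairer harness (warm-up passes first, results kept alive; the class name is made up):

        public class HypotenuseBench {
            static final long COUNT = 50000L * 10000L;

            static double hypotenuse(double a, double b) { return Math.sqrt(a * a + b * b); }

            static long timeDirect(double a, double b, long count) {
                double sink = 0;                                  // keep the result so the JIT cannot drop the loop
                long start = System.currentTimeMillis();
                for (long i = 1; i < count; i++) sink += Math.sqrt(a * a + b * b);
                long elapsed = System.currentTimeMillis() - start;
                System.out.println("  (direct sink " + sink + ")");
                return elapsed;
            }

            static long timeMethod(double a, double b, long count) {
                double sink = 0;
                long start = System.currentTimeMillis();
                for (long i = 1; i < count; i++) sink += hypotenuse(a, b);
                long elapsed = System.currentTimeMillis() - start;
                System.out.println("  (method sink " + sink + ")");
                return elapsed;
            }

            public static void main(String[] args) {
                double a = 0.2223221101;
                double b = 122333.167;
                timeDirect(a, b, COUNT / 10);                     // warm-up passes, timings discarded
                timeMethod(a, b, COUNT / 10);
                System.out.println("Direct: " + timeDirect(a, b, COUNT) + " ms");
                System.out.println("Method: " + timeMethod(a, b, COUNT) + " ms");
            }
        }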

    Read the article

  • A question about TBB/C++ code

    - by Jackie
    I am reading the Threading Building Blocks book. I do not understand this piece of code:

        FibTask& a = *new(allocate_child()) FibTask(n-1, &x);
        FibTask& b = *new(allocate_child()) FibTask(n-2, &y);

    What do these directives mean? How do a class reference and new work together here? Thanks for any explanation. The following code is the definition of the class FibTask:

        class FibTask: public task {
        public:
            const long n;
            long* const sum;

            FibTask(long n_, long* sum_) : n(n_), sum(sum_) {}

            task* execute() {
                if (n < CutOff) {            // the condition and the serial branch were cut off in the
                    *sum = SerialFib(n);     // original post; filled in here from the book's example
                } else {
                    long x, y;
                    FibTask& a = *new(allocate_child()) FibTask(n-1, &x);
                    FibTask& b = *new(allocate_child()) FibTask(n-2, &y);
                    set_ref_count(3);
                    spawn(b);
                    spawn_and_wait_for_all(a);
                    *sum = x + y;
                }
                return 0;
            }
        };

    Read the article

  • Convert VBA to VBS

    - by dnLL
    I have a little VBA script with some functions that I would like to convert to a single VBS file. Here is an example of what I have:

        Private Declare Function GetPrivateProfileString Lib "kernel32" Alias "GetPrivateProfileStringA" _
            (ByVal lpApplicationName As String, ByVal lpKeyName As Any, ByVal lpDefault As String, _
             ByVal lpReturnedString As String, ByVal nSize As Long, ByVal lpFileName As String) As Long

        Private Function ReadIniFileString(ByVal Sect As String, ByVal Keyname As String) As String
            Dim Worked As Long
            Dim RetStr As String * 128
            Dim StrSize As Long
            Dim iNoOfCharInIni As Integer
            Dim sIniString, sProfileString As String

            iNoOfCharInIni = 0
            sIniString = ""
            If Sect = "" Or Keyname = "" Then
                MsgBox "Erreur lors de la lecture des paramètres dans " & IniFileName, vbExclamation, "INI"
                Access.Application.Quit
            Else
                sProfileString = ""
                RetStr = Space(128)
                StrSize = Len(RetStr)
                Worked = GetPrivateProfileString(Sect, Keyname, "", RetStr, StrSize, IniFileName)
                If Worked Then
                    iNoOfCharInIni = Worked
                    sIniString = Left$(RetStr, Worked)
                End If
            End If
            ReadIniFileString = sIniString
        End Function

    I then need to use this function to put some values into strings. VBS doesn't seem to like any of my variable declarations (Dim MyVar As MyType). If I'm able to adapt that code to VBS, I should be able to do the rest of my functions too. How can I adapt/convert this to VBS? Thank you.

    Read the article

  • I just wanted to DES 4096 bytes of data with a 128 bits key...

    - by badp
    ...and what the nice folks at OpenSSL graciously provide me with is this. :) Now, since you shouldn't be guessing when using cryptography, I come here for confirmation: what is the function call I want to use? What I understood: a 128-bit key is 16 bytes large, so I'll need double DES (2 × 8 bytes). This leaves me with only a few function calls:

        void DES_ede2_cfb64_encrypt(const unsigned char *in, unsigned char *out, long length,
                                    DES_key_schedule *ks1, DES_key_schedule *ks2,
                                    DES_cblock *ivec, int *num, int enc);
        void DES_ede2_cbc_encrypt(const unsigned char *input, unsigned char *output, long length,
                                  DES_key_schedule *ks1, DES_key_schedule *ks2,
                                  DES_cblock *ivec, int enc);
        void DES_ede2_cfb64_encrypt(const unsigned char *in, unsigned char *out, long length,
                                    DES_key_schedule *ks1, DES_key_schedule *ks2,
                                    DES_cblock *ivec, int *num, int enc);
        void DES_ede2_ofb64_encrypt(const unsigned char *in, unsigned char *out, long length,
                                    DES_key_schedule *ks1, DES_key_schedule *ks2,
                                    DES_cblock *ivec, int *num);

    In this case, I guess the function I want to call is DES_ede2_cfb64_encrypt, although I'm not so sure -- I definitely don't need padding here and I'd have to care about what ivec and num are, and how I want to generate them... What am I missing?

    Read the article

  • Project Euler #119 Make Faster

    - by gangqinlaohu
    Trying to solve Project Euler problem 119: The number 512 is interesting because it is equal to the sum of its digits raised to some power: 5 + 1 + 2 = 8, and 8^3 = 512. Another example of a number with this property is 614656 = 28^4. We shall define a_n to be the nth term of this sequence and insist that a number must contain at least two digits to have a sum. You are given that a_2 = 512 and a_10 = 614656. Find a_30. Question: Is there a more efficient way to find the answer than just checking every number until a_30 is found? My code:

        int currentNum = 0;
        long value = 0;
        for (long a = 11; currentNum != 30; a++) { // maybe a++ is inefficient
            int test = Util.sumDigits(a);
            if (isPower(a, test)) {
                currentNum++;
                value = a;
                System.out.println(value + ":" + currentNum);
            }
        }
        System.out.println(value);

    isPower checks if a is a power of test. Util.sumDigits:

        public static int sumDigits(long n) {
            int sum = 0;
            String s = "" + n;
            while (!s.equals("")) {
                sum += Integer.parseInt("" + s.charAt(0));
                s = s.substring(1);
            }
            return sum;
        }

    The program has been running for about 30 minutes (there might be an overflow on the long). Output (so far): 81:1 512:2 2401:3 4913:4 5832:5 17576:6 19683:7 234256:8 390625:9 614656:10 1679616:11 17210368:12 34012224:13 52521875:14 60466176:15 205962976:16 612220032:17
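
    A much faster route than testing every integer is to enumerate candidates the other way around: for every possible digit sum s (at most 9 × 19 = 171 for a long), walk through its powers s^2, s^3, ... and keep the values whose digit sum really is s. Sorting the hits then gives the sequence in order. A minimal sketch of that idea (class and variable names are mine):

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        public class Euler119 {
            // digit sum of a non-negative long, without going through strings
            static int digitSum(long n) {
                int sum = 0;
                while (n > 0) {
                    sum += (int) (n % 10);
                    n /= 10;
                }
                return sum;
            }

            public static void main(String[] args) {
                List<Long> hits = new ArrayList<Long>();
                for (int base = 2; base <= 171; base++) {         // no long has a digit sum above 9 * 19
                    long value = (long) base * base;              // start at base^2
                    while (true) {
                        if (value >= 10 && digitSum(value) == base) {
                            hits.add(value);
                        }
                        if (value > Long.MAX_VALUE / base) {
                            break;                                // the next multiplication would overflow
                        }
                        value *= base;
                    }
                }
                Collections.sort(hits);
                System.out.println("a30 = " + hits.get(29));      // 30th term, 1-based
            }
        }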

    Read the article

  • Pointer reference and dereference

    - by ZhekakehZ
    I have the following code:

        #include <iostream>

        char ch[] = "abcd";

        int main() {
            std::cout << (long)(int*)(ch+0) << ' ' << (long)(int*)(ch+1) << ' '
                      << (long)(int*)(ch+2) << ' ' << (long)(int*)(ch+3) << std::endl;
            std::cout << *(int*)(ch+0) << ' ' << *(int*)(ch+1) << ' '
                      << *(int*)(ch+2) << ' ' << *(int*)(ch+3) << std::endl;
            std::cout << int('abcd') << ' ' << int('bcd') << ' '
                      << int('cd') << ' ' << int('d') << std::endl;
        }

    My question is why the value at 'd' is 100. I think it should be: int('d') << 24; // plus some trash on the stack after ch. And why are the second and the third lines of the stdout different?

        6295640 6295641 6295642 6295643
        1684234849 6579042 25699 100
        1633837924 6447972 25444 100

    Thanks.
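
    The second line is the little-endian reinterpretation of the bytes of ch: the byte at the lowest address becomes the least significant byte of the int, so *(int*)(ch+3) is just 'd' (100) padded with the zero bytes that follow the global array in static storage, not 'd' shifted up by 24 bits. The third line differs because multi-character literals like 'abcd' are packed the other way round, most significant character first. A small Java sketch that reproduces the second line's numbers (the trailing zero bytes are an assumption about what happens to sit after ch in memory):

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;

        public class LittleEndianDemo {
            public static void main(String[] args) {
                // "abcd" plus the terminating zero bytes that follow the global char array
                byte[] ch = { 'a', 'b', 'c', 'd', 0, 0, 0 };
                ByteBuffer buf = ByteBuffer.wrap(ch).order(ByteOrder.LITTLE_ENDIAN);
                for (int offset = 0; offset <= 3; offset++) {
                    // prints 1684234849, 6579042, 25699, 100 -- the same values the C++ program shows
                    System.out.println("int at ch+" + offset + " = " + buf.getInt(offset));
                }
            }
        }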

    Read the article

  • how to pass arraylist as parameter to another screen

    - by user2867267
    How do I pass an ArrayList as a parameter to another activity? In my case I am not using a ListView. I check whether the ArrayList contains a single element and then want to pass that ArrayList as a parameter to another screen. See this line: Category_name.get(position).toString()); how do I remove position? How do I pass the ArrayList as a parameter to another activity?

        static ArrayList<Long> Menu_ID = new ArrayList<Long>();
        static ArrayList<String> Category_name = new ArrayList<String>();

        JSONArray school = json2.getJSONArray("data");
        for (int i = 0; i < school.length(); i++) {
            JSONObject object = school.getJSONObject(i);
            Category_ID.add((long) i);
            Menu_ID.add(Long.parseLong(object.getString("menu_id")));
            Category_name.add(object.getString("menu_title"));
        }

        Intent iMenuList = new Intent(MenuGroup.this, thirdstep.class);
        menuidvalue = "";
        menuidvalue = (Menu_ID.get(position)).toString();
        iMenuList.putExtra("Menu_ID", menuidvalue);
        iMenuList.putExtra("menu_group", Category_name.get(position).toString());
        startActivity(iMenuList);
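
    If the goal is to hand the whole list to the next Activity rather than a single element, an Intent can carry an ArrayList<String> directly via putStringArrayListExtra, and the Long IDs can travel as a long[] array. A sketch reusing the names from the post (the extra keys "menu_titles" and "menu_ids" are made up for the example):

        // Sending side: pass the whole lists instead of one position.
        Intent iMenuList = new Intent(MenuGroup.this, thirdstep.class);
        iMenuList.putStringArrayListExtra("menu_titles", Category_name);
        long[] ids = new long[Menu_ID.size()];
        for (int i = 0; i < ids.length; i++) {
            ids[i] = Menu_ID.get(i);
        }
        iMenuList.putExtra("menu_ids", ids);
        startActivity(iMenuList);

        // Receiving side, inside thirdstep's onCreate(...):
        ArrayList<String> titles = getIntent().getStringArrayListExtra("menu_titles");
        long[] menuIds = getIntent().getLongArrayExtra("menu_ids");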

    Read the article

  • Week 24: Karate Kid Chops, The A-Team Runs, and the OPN Team Delivers

    - by sandra.haan
    The 80's called and they want their movies back. With the summer line-up of movies reminding us to wax on and wax off one can start to wonder if there is anything new to look forward to this summer. The OPN Team is happy to report that - yes - there is. As Hannibal would say "I love it when a plan comes together"! And a plan we have; for the past 2 months we've been working to pull together the FY11 Oracle PartnerNetwork Kickoff. Listen in as Judson tells you more. While we can't offer you Bradley Cooper or Jackie Chan we can promise you an exciting line-up of guests including Safra Catz and Charles Phillips. With no lines to wait in or the annoyingly tall guy sitting in front of you this might just be the best thing you see all summer. Register now & Happy New Year, The OPN Communications Team

    Read the article

  • Inside NASA’s Shuttle Trainer

    - by Jason Fitzpatrick
    After more than 30 years of service, NASA has retired their full-scale shuttle training simulator. Take a photo tour and learn where you can visit the trainer and crawl around inside for a more hands-on experience. The trainer is currently on display at the Charles Simonyi Space Gallery at the Museum of Flight in Seattle, Washington. For those of us unable to visit the trainer in person, Wired Magazine has a full photo tour at the link below. Get Inside the Replica that Trained Every Shuttle Astronaut [Wired]

    Read the article

  • T-SQL Tuesday #31 - Logging Tricks with CONTEXT_INFO

    - by Most Valuable Yak (Rob Volk)
    This month's T-SQL Tuesday is being hosted by Aaron Nelson [b | t], fellow Atlantan (the city in Georgia, not the famous sunken city, or the resort in the Bahamas) and covers the topic of logging (the recording of information, not the harvesting of trees) and maintains the fine T-SQL Tuesday tradition begun by Adam Machanic [b | t] (the SQL Server guru, not the guy who fixes cars, check the spelling again, there will be a quiz later). This is a trick I learned from Fernando Guerrero [b | t] waaaaaay back during the PASS Summit 2004 in sunny, hurricane-infested Orlando, during his session on Secret SQL Server (not sure if that's the correct title, and I haven't used parentheses in this paragraph yet).  CONTEXT_INFO is a neat little feature that's existed since SQL Server 2000 and perhaps even earlier.  It lets you assign data to the current session/connection, and maintains that data until you disconnect or change it.  In addition to the CONTEXT_INFO() function, you can also query the context_info column in sys.dm_exec_sessions, or even sysprocesses if you're still running SQL Server 2000, if you need to see it for another session. While you're limited to 128 bytes, one big advantage that CONTEXT_INFO has is that it's independent of any transactions.  If you've ever logged to a table in a transaction and then lost messages when it rolled back, you can understand how aggravating it can be.  CONTEXT_INFO also survives across multiple SQL batches (GO separators) in the same connection, so for those of you who were going to suggest "just log to a table variable, they don't get rolled back":  HA-HA, I GOT YOU!  Since GO starts a new batch all variable declarations are lost. Here's a simple example I recently used at work.  I had to test database mirroring configurations for disaster recovery scenarios and measure the network throughput.  I also needed to log how long it took for the script to run and include the mirror settings for the database in question.  I decided to use AdventureWorks as my database model, and Adam Machanic's Big Adventure script to provide a fairly large workload that's repeatable and easily scalable.  My test would consist of several copies of AdventureWorks running the Big Adventure script while I mirrored the databases (or not). Since Adam's script contains several batches, I decided CONTEXT_INFO would have to be used.  As it turns out, I only needed to grab the start time at the beginning, I could get the rest of the data at the end of the process.   The code is pretty small: declare @time binary(128)=cast(getdate() as binary(8)) set context_info @time   ... rest of Big Adventure code ...   go use master; insert mirror_test(server,role,partner,db,state,safety,start,duration) select @@servername, mirroring_role_desc, mirroring_partner_instance, db_name(database_id), mirroring_state_desc, mirroring_safety_level_desc, cast(cast(context_info() as binary(8)) as datetime), datediff(s,cast(cast(context_info() as binary(8)) as datetime),getdate()) from sys.database_mirroring where db_name(database_id) like 'Adv%';   I declared @time as a binary(128) since CONTEXT_INFO is defined that way.  I couldn't convert GETDATE() to binary(128) as it would pad the first 120 bytes as 0x00.  To keep the CAST functions simple and avoid using SUBSTRING, I decided to CAST GETDATE() as binary(8) and let SQL Server do the implicit conversion.  It's not the safest way perhaps, but it works on my machine. 
:) As I mentioned earlier, you can query system views for sessions and get their CONTEXT_INFO.  With a little boilerplate code this can be used to monitor long-running procedures, in case you need to kill a process, or are just curious  how long certain parts take.  In this example, I added code to Adam's Big Adventure script to set CONTEXT_INFO messages at strategic places I want to monitor.  (His code is in UPPERCASE as it was in the original, mine is all lowercase): declare @msg binary(128) set @msg=cast('Altering bigProduct.ProductID' as binary(128)) set context_info @msg go ALTER TABLE bigProduct ALTER COLUMN ProductID INT NOT NULL GO set context_info 0x0 go declare @msg1 binary(128) set @msg1=cast('Adding pk_bigProduct Constraint' as binary(128)) set context_info @msg1 go ALTER TABLE bigProduct ADD CONSTRAINT pk_bigProduct PRIMARY KEY (ProductID) GO set context_info 0x0 go declare @msg2 binary(128) set @msg2=cast('Altering bigTransactionHistory.TransactionID' as binary(128)) set context_info @msg2 go ALTER TABLE bigTransactionHistory ALTER COLUMN TransactionID INT NOT NULL GO set context_info 0x0 go declare @msg3 binary(128) set @msg3=cast('Adding pk_bigTransactionHistory Constraint' as binary(128)) set context_info @msg3 go ALTER TABLE bigTransactionHistory ADD CONSTRAINT pk_bigTransactionHistory PRIMARY KEY NONCLUSTERED(TransactionID) GO set context_info 0x0 go declare @msg4 binary(128) set @msg4=cast('Creating IX_ProductId_TransactionDate Index' as binary(128)) set context_info @msg4 go CREATE NONCLUSTERED INDEX IX_ProductId_TransactionDate ON bigTransactionHistory(ProductId,TransactionDate) INCLUDE(Quantity,ActualCost) GO set context_info 0x0   This doesn't include the entire script, only those portions that altered a table or created an index.  One annoyance is that SET CONTEXT_INFO requires a literal or variable, you can't use an expression.  And since GO starts a new batch I need to declare a variable in each one.  And of course I have to use CAST because it won't implicitly convert varchar to binary.  And even though context_info is a nullable column, you can't SET CONTEXT_INFO NULL, so I have to use SET CONTEXT_INFO 0x0 to clear the message after the statement completes.  And if you're thinking of turning this into a UDF, you can't, although a stored procedure would work. So what does all this aggravation get you?  As the code runs, if I want to see which stage the session is at, I can run the following (assuming SPID 51 is the one I want): select CAST(context_info as varchar(128)) from sys.dm_exec_sessions where session_id=51   Since SQL Server 2005 introduced the new system and dynamic management views (DMVs) there's not as much need for tagging a session with these kinds of messages.  You can get the session start time and currently executing statement from them, and neatly presented if you use Adam's sp_whoisactive utility (and you absolutely should be using it).  Of course you can always use xp_cmdshell, a CLR function, or some other tricks to log information outside of a SQL transaction.  All the same, I've used this trick to monitor long-running reports at a previous job, and I still think CONTEXT_INFO is a great feature, especially if you're still using SQL Server 2000 or want to supplement your instrumentation.  If you'd like an exercise, consider adding the system time to the messages in the last example, and an automated job to query and parse it from the system tables.  That would let you track how long each statement ran without having to run Profiler. #TSQL2sDay

    Read the article

  • Premature-Optimization and Performance Anxiety

    - by James Michael Hare
    While writing my post analyzing the new .NET 4 ConcurrentDictionary class (here), I fell into one of the classic blunders that I myself always love to warn about.  After analyzing the differences of time between a Dictionary with locking versus the new ConcurrentDictionary class, I noted that the ConcurrentDictionary was faster with read-heavy multi-threaded operations.  Then, I made the classic blunder of thinking that because the original Dictionary with locking was faster for those write-heavy uses, it was the best choice for those types of tasks.  In short, I fell into the premature-optimization anti-pattern. Basically, the premature-optimization anti-pattern is when a developer is coding very early for a perceived (whether rightly-or-wrongly) performance gain and sacrificing good design and maintainability in the process.  At best, the performance gains are usually negligible and at worst, can either negatively impact performance, or can degrade maintainability so much that time to market suffers or the code becomes very fragile due to the complexity. Keep in mind the distinction above.  I'm not talking about valid performance decisions.  There are decisions one should make when designing and writing an application that are valid performance decisions.  Examples of this are knowing the best data structures for a given situation (Dictionary versus List, for example) and choosing performance algorithms (linear search vs. binary search).  But these in my mind are macro optimizations.  The error is not in deciding to use a better data structure or algorithm, the anti-pattern as stated above is when you attempt to over-optimize early on in such a way that it sacrifices maintainability. In my case, I was actually considering trading the safety and maintainability gains of the ConcurrentDictionary (no locking required) for a slight performance gain by using the Dictionary with locking.  This would have been a mistake as I would be trading maintainability (ConcurrentDictionary requires no locking which helps readability) and safety (ConcurrentDictionary is safe for iteration even while being modified and you don't risk the developer locking incorrectly) -- and I fell for it even when I knew to watch out for it.  I think in my case, and it may be true for others as well, a large part of it was due to the time I was trained as a developer.  I began college in in the 90s when C and C++ was king and hardware speed and memory were still relatively priceless commodities and not to be squandered.  In those days, using a long instead of a short could waste precious resources, and as such, we were taught to try to minimize space and favor performance.  This is why in many cases such early code-bases were very hard to maintain.  I don't know how many times I heard back then to avoid too many function calls because of the overhead -- and in fact just last year I heard a new hire in the company where I work declare that she didn't want to refactor a long method because of function call overhead.  Now back then, that may have been a valid concern, but with today's modern hardware even if you're calling a trivial method in an extremely tight loop (which chances are the JIT compiler would optimize anyway) the results of removing method calls to speed up performance are negligible for the great majority of applications.  Now, obviously, there are those coding applications where speed is absolutely king (for example drivers, computer games, operating systems) where such sacrifices may be made.  
But I would strongly advice against such optimization because of it's cost.  Many folks that are performing an optimization think it's always a win-win.  That they're simply adding speed to the application, what could possibly be wrong with that?  What they don't realize is the cost of their choice.  For every piece of straight-forward code that you obfuscate with performance enhancements, you risk the introduction of bugs in the long term technical debt of the application.  It will become so fragile over time that maintenance will become a nightmare.  I've seen such applications in places I have worked.  There are times I've seen applications where the designer was so obsessed with performance that they even designed their own memory management system for their application to try to squeeze out every ounce of performance.  Unfortunately, the application stability often suffers as a result and it is very difficult for anyone other than the original designer to maintain. I've even seen this recently where I heard a C++ developer bemoaning that in VS2010 the iterators are about twice as slow as they used to be because Microsoft added range checking (probably as part of the 0x standard implementation).  To me this was almost a joke.  Twice as slow sounds bad, but it almost never as bad as you think -- especially if you're gaining safety.  The only time twice is really that much slower is when once was too slow to begin with.  Think about it.  2 minutes is slow as a response time because 1 minute is slow.  But if an iterator takes 1 microsecond to move one position and a new, safer iterator takes 2 microseconds, this is trivial!  The only way you'd ever really notice this would be in iterating a collection just for the sake of iterating (i.e. no other operations).  To my mind, the added safety makes the extra time worth it. Always favor safety and maintainability when you can.  I know it can be a hard habit to break, especially if you started out your career early or in a language such as C where they are very performance conscious.  But in reality, these type of micro-optimizations only end up hurting you in the long run. Remember the two laws of optimization.  I'm not sure where I first heard these, but they are so true: For beginners: Do not optimize. For experts: Do not optimize yet. This is so true.  If you're a beginner, resist the urge to optimize at all costs.  And if you are an expert, delay that decision.  As long as you have chosen the right data structures and algorithms for your task, your performance will probably be more than sufficient.  Chances are it will be network, database, or disk hits that will be your slow-down, not your code.  As they say, 98% of your code's bottleneck is in 2% of your code so premature-optimization may add maintenance and safety debt that won't have any measurable impact.  Instead, code for maintainability and safety, and then, and only then, when you find a true bottleneck, then you should go back and optimize further.

    Read the article

  • Profiling Startup Of VS2012 &ndash; SpeedTrace Profiler

    - by Alois Kraus
    SpeedTrace is a relatively unknown profiler made a company called Ipcas. A single professional license does cost 449€+VAT. For the test I did use SpeedTrace 4.5 which is currently Beta. Although it is cheaper than dotTrace it has by far the most options to influence how profiling does work. First you need to create a tracing project which does configure tracing for one process type. You can start the application directly from the profiler or (much more interesting) it does attach to a specific process when it is started. For this you need to check “Trace the specified …” radio button and enter the process name in the “Process Name of the Trace” edit box. You can even selectively enable tracing for processes with a specific command line. Then you need to activate the trace project by pressing the Activate Project button and you are ready to start VS as usual. If you want to profile the next 10 VS instances that you start you can set the Number of Processes counter to e.g. 10. This is immensely helpful if you are trying to profile only the next 5 started processes. As you can see there are many more tabs which do allow to influence tracing in a much more sophisticated way. SpeedTrace is the only profiler which does not rely entirely on the profiling Api of .NET. Instead it does modify the IL code (instrumentation on the fly) to write tracing information to disc which can later be analyzed. This approach is not only very fast but it does give you unprecedented analysis capabilities. Once the traces are collected they do show up in your workspace where you can open the trace viewer. I do skip the other windows because this view is by far the most useful one. You can sort the methods not only by Wall Clock time but also by CPU consumption and wait time which none of the other products support in their views at the same time. If you want to optimize for CPU consumption sort by CPU time. If you want to find out where most time is spent you need Clock Total time and Clock Waiting. There you can directly see if the method did take long because it did wait on something or it did really execute stuff that did take so long. Once you have found a method you want to drill deeper you can double click on a method to get to the Caller/Callee view which is similar to the JetBrains Method Grid view. But this time you do see much more. In the middle is the clicked method. Above are the methods that call you and below are the methods that you do directly call. Normally you would then start digging deeper to find the end of the chain where the slow method worth optimizing is located. But there is a shortcut. You can press the magic   button to calculate the aggregation of all called methods. This is displayed in the lower left window where you can see each method call and how long it did take. There you can also sort to see if this call stack does only contain methods (e.g. WCF connect calls which you cannot make faster) not worth optimizing. YourKit has a similar feature where it is called Callees List. In the Functions tab you have in the context menu also many other useful analysis options One really outstanding feature is the View Call History Drilldown. When you select this one you get not a sum of all method invocations but a list with the duration of each method call. This is not surprising since SpeedTrace does use tracing to get its timings. There you can get many useful graphs how this method did behave over time. Did it become slower at some point in time or was only the first call slow? 
The diagrams and the list will tell you that. That is all fine but what should I do when one method call was slow? I want to see from where it was coming from. No problem select the method in the list hit F10 and you get the call stack. This is a life saver if you e.g. search for serialization problems. Today Serializers are used everywhere. You want to find out from where the 5s XmlSerializer.Deserialize call did come from? Hit F10 and you get the call stack which did invoke the 5s Deserialize call. The CPU timeline tab is also useful to find out where long pauses or excessive CPU consumption did happen. Click in the graph to get the Thread Stacks window where you can get a quick overview what all threads were doing at this time. This does look like the Stack Traces feature in YourKit. Only this time you get the last called method first which helps to quickly see what all threads were executing at this moment. YourKit does generate a rather long list which can be hard to go through when you have many threads. The thread list in the middle does not give you call stacks or anything like that but you see which methods were found most often executing code by the profiler which is a good indication for methods consuming most CPU time. This does sound too good to be true? I have not told you the best part yet. The best thing about this profiler is the staff behind it. When I do see a crash or some other odd behavior I send a mail to Ipcas and I do get usually the next day a mail that the problem has been fixed and a download link to the new version. The guys at Ipcas are even so helpful to log in to your machine via a Citrix Client to help you to get started profiling your actual application you want to profile. After a 2h telco I was converted from a hater to a believer of this tool. The fast response time might also have something to do with the fact that they are actively working on 4.5 to get out of the door. But still the support is by far the best I have encountered so far. The only downside is that you should instrument your assemblies including the .NET Framework to get most accurate numbers. You can profile without doing it but then you will see very high JIT times in your process which can severely affect the correctness of the measured timings. If you do not care about exact numbers you can also enable in the main UI in the Data Trace tab logging of method arguments of primitive types. If you need to know what files at which times were opened by your application you can find it out without a debugger. Since SpeedTrace does read huge trace files in its reader you should perhaps use a 64 bit machine to be able to analyze bigger traces as well. The memory consumption of the trace reader is too high for my taste. But they did promise for the next version to come up with something much improved.

    Read the article

  • What are the options for setting up a UNIX environment to learn C using Kernighan and Ritchie's The C Programming Language?

    - by ssbrewster
    I'm a novice programmer and have been experimenting with Javascript, jQuery and PHP, but I felt I wasn't getting a real depth of understanding of what I was doing. So, after reading Joel Spolsky's response to a question on this site (which I can't find now!), I took it back to basics, read Charles Petzold's 'Code', and am about to move on to Kernighan and Ritchie's The C Programming Language. I want to learn this in a UNIX environment but only have access to a Windows system. I have Ubuntu 12.04 running in a virtualised machine via VMware Player, and have done some coding in the terminal. Is using a Linux distro the only option for programming in a UNIX environment on Windows? And what are the next steps to start programming in C on UNIX, and where do I get a compiler?

    Read the article

  • Google I/O 2010 - Keynote Day 1

    Video footage from the Day 1 keynote at Google I/O 2010. Vic Gundotra, Engineering Vice President, Google; Sundar Pichai, Vice President, Product Management, Google; Charles Pritchard, Founder, MugTug; Jim Lanzone, CEO, Clicker; Mike Shaver, VP Engineering, Mozilla Corporation; Håkon Wium Lie, CTO, Opera Software; Kevin Lynch, CTO, Adobe Systems; Terry McDonell, Editor, Sports Illustrated Group; Lars Rasmussen, Manager, Google Wave; David Glazer, Engineering Director, Google; Paul Maritz, President & CEO, VMware; Ben Alex, Senior Staff Engineer, SpringSource Division of VMware; Bruce Johnson, Engineering Director, Google; Kevin Gibbs, Software Engineer, Google. For all I/O 2010 sessions, please go to code.google.com

    Read the article

  • Free eBooks from Microsoft&ndash;We like free!

    - by Jim Duffy
    In a recent blog post I mentioned the availability of the Programming Windows Phone 7 ebook by Charles Petzold. Well, I have good news: there are a number of additional FREE ebooks available from Microsoft to help you continue honing your tech skills: Moving to Microsoft Visual Studio 2010; Introducing Microsoft SQL Server 2008 R2; Own Your Future: Update Your Skills with Resources and Career Ideas from Microsoft; Understanding Microsoft Virtualization Solutions (Second Edition); First Look Microsoft Office 2010; Windows 7 troubleshooting tips; Introducing Windows Server 2008 R2; Deploying Windows 7, Essential Guidance. I, for one, appreciate Microsoft making these resources available for free. I think it demonstrates their interest in making sure we as developers and I.T. professionals have the resources we need to effectively solve the business problems we encounter. Have a day.

    Read the article

  • St. Louis IT Community Holiday Party

    - by Scott Spradlin
    The St. Louis .NET User Group is hosting a holiday party this year for the very first time in our 10 year history. The event will be held at the Bottleneck Blues Bar at the Ameristar Casino in St. Charles. It will be an open house style event meaning you can drop by any time from 6:00pm to 9:00pm and enjoy the Unhandled Exceptions...the band that played at the St. Louis Day of .NET 2011. $5.00 at the door gets you in and goes to support a local charity The Backstoppers. If you cannot come, you can make a donation online. Details at our group web site HTTP://www.stlnet.org

    Read the article

  • Easy to understand and interesting book on algorithms

    - by gasan
    Please recommend a book on algorithms that is easier to read and understand than Cormen's book [1]. It does not need to be as big or as deep in its explanations; in fact I would prefer it not to be that big, but it shouldn't contain misconceptions, errors or inaccuracies. It should be a kind of pre-Cormen book that will help me later to understand more sophisticated concepts: a beginner book, but still worth reading. [1] Introduction to Algorithms by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein

    Read the article

  • Wrapping up an Exciting Mobile World Congress

    - by Jacob Lehrbaum
    It's been a busy week here in Barcelona, with noticeably more energy at the show than in 2010. This year, we decided to move the Java booth to the App Planet and really engage with the increasing number of developers attending the event. Our booth featured 10 demos and a series of nearly 25 workshops covering a variety of topics, ranging from information about Java Verified, to the use of web technologies with Java ME, to sessions hosted by operators such as Orange and Telefonica (see image to the left). One of the more popular topics in our booth was the use of Java in the Smart Grid. In our booth we were showing off some of the work of the Hydra Consortium, whose goal is to leverage the emerging smart grid infrastructure to securely enable the delivery of personal health data (weight, blood pressure, etc.) from the home to your doctor. If you'd like to learn more about this innovative project, you can watch a video that was filmed at the event featuring Charles Palmer of Onzo. If you'd like to learn more about Java in the Smart Grid, check out our on-demand webinar.

    Read the article

  • Watch Ask the Gu on Channel 9 Live

    Scott Guthrie was kind enough to join the Channel 9 Live crew for a discussion of everything Silverlight immediately following his Silverlight 4 launch keynote. You can check out the recorded video right on Channel 9. Scott does a great job fielding the questions from the audience and from Twitter. Charles Torre is a great moderator; he asks some great questions in addition to the flurry that came in from Twitter. I love when he asks Scott how we can release Silverlight versions at such a fast...

    Read the article

  • MIX10 Windows Phone 7 Content Overview

    The tools available at MIX 10 and for public download are a Community Technology Preview; while not feature complete, the tools released at MIX 10 ship with a robust set of functionality. The white papers provide guidance on architecture, development, as well as design. In addition there are code samples and hands-on labs that cover key topics such as Silverlight, the application bar, splash screen, navigation model, and XNA Game Studio. Training available now: Charles Petzold's preview of Programming...

    Read the article

  • How to print PCL Directly to the printer? Vb.net VS 2010

    - by Justin
    Good day, I've been trying to upgrade an old Print Program that prints raw pcl directly to the printer. The print program takes a PCL Mask and a PCL Batch Spool File (just pcl with page turn commands, I think) merges them and sends them off to the printer. I'm able to send to the Printer a file stream of PCL but I get mixed results and i do not understand why printing is so difficult under .net. I mean yes there is the PrintDocument class, but to print PCL... Let's just say I'm ready to detach my printer from the network and burn it a live. Here is my class PrintDirect (rather a hybrid class) Imports System Imports System.Text Imports System.Runtime.InteropServices <StructLayout(LayoutKind.Sequential)> _ Public Structure DOCINFO <MarshalAs(UnmanagedType.LPWStr)> _ Public pDocName As String <MarshalAs(UnmanagedType.LPWStr)> _ Public pOutputFile As String <MarshalAs(UnmanagedType.LPWStr)> _ Public pDataType As String End Structure 'DOCINFO Public Class PrintDirect <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=False, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function OpenPrinter(ByVal pPrinterName As String, ByRef phPrinter As IntPtr, ByVal pDefault As Integer) As Long End Function <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=False, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function StartDocPrinter(ByVal hPrinter As IntPtr, ByVal Level As Integer, ByRef pDocInfo As DOCINFO) As Long End Function <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function StartPagePrinter(ByVal hPrinter As IntPtr) As Long End Function <DllImport("winspool.drv", CharSet:=CharSet.Ansi, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function WritePrinter(ByVal hPrinter As IntPtr, ByVal data As String, ByVal buf As Integer, ByRef pcWritten As Integer) As Long End Function <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function EndPagePrinter(ByVal hPrinter As IntPtr) As Long End Function <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function EndDocPrinter(ByVal hPrinter As IntPtr) As Long End Function <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function ClosePrinter(ByVal hPrinter As IntPtr) As Long End Function 'Helps uses to work with the printer Public Class PrintJob ''' <summary> ''' The address of the printer to print to. ''' </summary> ''' <remarks></remarks> Public PrinterName As String = "" Dim lhPrinter As New System.IntPtr() ''' <summary> ''' The object deriving from the Win32 API, Winspool.drv. ''' Use this object to overide the settings defined in the PrintJob Class. ''' </summary> ''' <remarks>Only use this when absolutly nessacary.</remarks> Public OverideDocInfo As New DOCINFO() ''' <summary> ''' The PCL Control Character or the ascii escape character. \x1b ''' </summary> ''' <remarks>ChrW(27)</remarks> Public Const PCL_Control_Character As Char = ChrW(27) ''' <summary> ''' Opens a connection to a printer, if false the connection could not be established. 
''' </summary> ''' <returns></returns> ''' <remarks></remarks> Public Function OpenPrint() As Boolean Dim rtn_val As Boolean = False PrintDirect.OpenPrinter(PrinterName, lhPrinter, 0) If Not lhPrinter = 0 Then rtn_val = True End If Return rtn_val End Function ''' <summary> ''' The name of the Print Document. ''' </summary> ''' <value></value> ''' <returns></returns> ''' <remarks></remarks> Public Property DocumentName As String Get Return OverideDocInfo.pDocName End Get Set(ByVal value As String) OverideDocInfo.pDocName = value End Set End Property ''' <summary> ''' The type of Data that will be sent to the printer. ''' </summary> ''' <value>Send the print data type, usually "RAW" or "LPR". To overide use the OverideDocInfo Object</value> ''' <returns>The DataType as it was provided, if it is not part of the enumeration it is set to "UNKNOWN".</returns> ''' <remarks></remarks> Public Property DocumentDataType As PrintDataType Get Select Case OverideDocInfo.pDataType.ToUpper Case "RAW" Return PrintDataType.RAW Case "LPR" Return PrintDataType.LPR Case Else Return PrintDataType.UNKOWN End Select End Get Set(ByVal value As PrintDataType) OverideDocInfo.pDataType = [Enum].GetName(GetType(PrintDataType), value) End Set End Property Public Property DocumentOutputFile As String Get Return OverideDocInfo.pOutputFile End Get Set(ByVal value As String) OverideDocInfo.pOutputFile = value End Set End Property Enum PrintDataType UNKOWN = 0 RAW = 1 LPR = 2 End Enum ''' <summary> ''' Initializes the printing matrix ''' </summary> ''' <param name="PrintLevel">I have no idea what the hack this is...</param> ''' <remarks></remarks> Public Sub OpenDocument(Optional ByVal PrintLevel As Integer = 1) PrintDirect.StartDocPrinter(lhPrinter, PrintLevel, OverideDocInfo) End Sub ''' <summary> ''' Starts the next page. ''' </summary> ''' <remarks></remarks> Public Sub StartPage() PrintDirect.StartPagePrinter(lhPrinter) End Sub ''' <summary> ''' Writes a string to the printer, can be used to write out a section of the document at a time. ''' </summary> ''' <param name="data">The String to Send out to the Printer.</param> ''' <returns>The pcWritten as an integer, 0 may mean the writter did not write out anything.</returns> ''' <remarks>Warning the buffer is automatically created by the lenegth of the string.</remarks> Public Function WriteToPrinter(ByVal data As String) As Integer Dim pcWritten As Integer = 0 PrintDirect.WritePrinter(lhPrinter, data, data.Length - 1, pcWritten) Return pcWritten End Function Public Sub EndPage() PrintDirect.EndPagePrinter(lhPrinter) End Sub Public Sub CloseDocument() PrintDirect.EndDocPrinter(lhPrinter) End Sub Public Sub ClosePrint() PrintDirect.ClosePrinter(lhPrinter) End Sub ''' <summary> ''' Opens a connection to a printer and starts a new document. ''' </summary> ''' <returns>If false the connection could not be established. </returns> ''' <remarks></remarks> Public Function Open() As Boolean Dim rtn_val As Boolean = False rtn_val = OpenPrint() If rtn_val Then OpenDocument() End If Return rtn_val End Function ''' <summary> ''' Closes the document and closes the connection to the printer. ''' </summary> ''' <remarks></remarks> Public Sub Close() CloseDocument() ClosePrint() End Sub End Class End Class 'PrintDirect Here is how I print my file. I'm simple printing the PCL Masks, to show proof of concept. But I can't even do that. I can effectively create PCL and send it the printer without reading the file and it works fine... 
Plus I get mixed results with different stream reader text encoding as well. Dim pJob As New PrintDirect.PrintJob pJob.DocumentName = " test doc" pJob.DocumentDataType = PrintDirect.PrintJob.PrintDataType.RAW pJob.PrinterName = sPrinter.GetDevicePath '//This is where you'd stick your device name, sPrinter stands for Selected Printer from a dropdown (combobox). 'pJob.DocumentOutputFile = "C:\temp\test-spool.txt" If Not pJob.OpenPrint() Then MsgBox("Unable to connect to " & pJob.PrinterName, MsgBoxStyle.OkOnly, "Print Error") Exit Sub End If pJob.OpenDocument() pJob.StartPage() Dim sr As New IO.StreamReader(Me.txtFile.Text, Text.Encoding.ASCII) '//I've got best results with ASCII before, but only mized. Dim line As String = sr.ReadLine 'Fix for fly code on first run 'If line = 27 Then line = PrintDirect.PrintJob.PCL_Control_Character Do While (Not line Is Nothing) pJob.WriteToPrinter(line) line = sr.ReadLine Loop 'pJob.WriteToPrinter(ControlChars.FormFeed) pJob.EndPage() pJob.CloseDocument() sr.Close() sr.Dispose() sr = Nothing pJob.Close() I was able to get to print the mask, in the morning... now I'm getting strange printer characters scattered through 5 pages... I'm totally clueless. I must have changed something or the printer is at fault. You can access my test Check Mask, here http://kscserver.com/so/chk_mask.zip .

    Read the article

  • Why does operator<< not work with something returned by operator-?

    - by Felix
    Here's a small test program I wrote: #include <iostream> using namespace std; class A { public: int val; A(int _val=0):val(_val) { } A operator+(A &a) { return A(val + a.val); } A operator-(A &a) { return A(val - a.val); } friend ostream& operator<<(ostream &, A &); }; ostream& operator<<(ostream &out, A &a) { out<<a.val; return out; } int main() { A a(3), b(4), c = b - a; cout<<c<<endl; // this works cout<<(b-a)<<endl; // this doesn't return 0; } I can't seem to get why the line marked "this works" works and the one marked "this doesn't" doesn't. When I try to compile the program with the cout<<(b-a); line, here's what I get: [felix@the-machine C]$ g++ test.cpp test.cpp: In function ‘int main()’: test.cpp:26:13: error: no match for ‘operator<<’ in ‘std::cout << b.A::operator-(((A&)(& a)))’ /usr/lib/gcc/i686-pc-linux-gnu/4.5.0/../../../../include/c++/4.5.0/ostream:108:7: note: candidates are: std::basic_ostream<_CharT, _Traits>::__ostream_type& std::basic_ostream<_CharT, _Traits>::operator<<(std::basic_ostream<_CharT, _Traits>::__ostream_type& (*)(std::basic_ostream<_CharT, _Traits>::__ostream_type&)) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_ostream<_CharT, _Traits>::__ostream_type = std::basic_ostream<char>] /usr/lib/gcc/i686-pc-linux-gnu/4.5.0/../../../../include/c++/4.5.0/ostream:117:7: note: std::basic_ostream<_CharT, _Traits>::__ostream_type& std::basic_ostream<_CharT, _Traits>::operator<<(std::basic_ostream<_CharT, _Traits>::__ios_type& (*)(std::basic_ostream<_CharT, _Traits>::__ios_type&)) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_ostream<_CharT, _Traits>::__ostream_type = std::basic_ostream<char>, std::basic_ostream<_CharT, _Traits>::__ios_type = std::basic_ios<char>] /usr/lib/gcc/i686-pc-linux-gnu/4.5.0/../../../../include/c++/4.5.0/ostream:127:7: note: std::basic_ostream<_CharT, _Traits>::__ostream_type& std::basic_ostream<_CharT, _Traits>::operator<<(std::ios_base& (*)(std::ios_base&)) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_ostream<_CharT, _Traits>::__ostream_type = std::basic_ostream<char>] /usr/lib/gcc/i686-pc-linux-gnu/4.5.0/../../../../include/c++/4.5.0/ostream:165:7: note: std::basic_ostream<_CharT, _Traits>::__ostream_type& std::basic_ostream<_CharT, _Traits>::operator<<(long int) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_ostream<_CharT, _Traits>::__ostream_type = std::basic_ostream<char>] /usr/lib/gcc/i686-pc-linux-gnu/4.5.0/../../../../include/c++/4.5.0/ostream:169:7: note: std::basic_ostream<_CharT, _Traits>::__ostream_type& std::basic_ostream<_CharT, _Traits>::operator<<(long unsigned int) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_ostream<_CharT, _Traits>::__ostream_type = std::basic_ostream<char>] /usr/lib/gcc/i686-pc-linux-gnu/4.5.0/../../../../include/c++/4.5.0/ostream:173:7: note: std::basic_ostream<_CharT, _Traits>::__ostream_type& std::basic_ostream<_CharT, _Traits>::operator<<(bool) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_ostream<_CharT, _Traits>::__ostream_type = std::basic_ostream<char>] /usr/lib/gcc/i686-pc-linux-gnu/4.5.0/../../../../include/c++/4.5.0/bits/ostream.tcc:91:5: note: std::basic_ostream<_CharT, _Traits>& std::basic_ostream<_CharT, _Traits>::operator<<(short int) [with _CharT = char, _Traits = std::char_traits<char>] /usr/lib/gcc/i686-pc-linux-gnu/4.5.0/../../../../include/c++/4.5.0/ostream:180:7: note: std::basic_ostream<_CharT, _Traits>::__ostream_type& std::basic_ostream<_CharT, 
_Traits>::operator<<(short unsigned int) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_ostream<_CharT, _Traits>::__ostream_type = std::basic_ostream<char>] /usr/lib/gcc/i686-pc-linux-gnu/4.5.0/../../../../include/c++/4.5.0/bits/ostream.tcc:105:5: note: std::basic_ostream<_CharT, _Traits>& std::basic_ostream<_CharT, _Traits>::operator<<(int) [with _CharT = char, _Traits = std::char_traits<char>] /usr/lib/gcc/i686-pc-linux-gnu/4.5.0/../../../../include/c++/4.5.0/ostream:191:7: note: std::basic_ostream<_CharT, _Traits>::__ostream_type& std::basic_ostream<_CharT, _Traits>::operator<<(unsigned int) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_ostream<_CharT, _Traits>::__ostream_type = std::basic_ostream<char>] /usr/lib/gcc/i686-pc-linux-gnu/4.5.0/../../../../include/c++/4.5.0/ostream:200:7: note: std::basic_ostream<_CharT, _Traits>::__ostream_type& std::basic_ostream<_CharT, _Traits>::operator<<(long long int) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_ostream<_CharT, _Traits>::__ostream_type = std::basic_ostream<char>] /usr/lib/gcc/i686-pc-linux-gnu/4.5.0/../../../../include/c++/4.5.0/ostream:204:7: note: std::basic_ostream<_CharT, _Traits>::__ostream_type& std::basic_ostream<_CharT, _Traits>::operator<<(long long unsigned int) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_ostream<_CharT, _Traits>::__ostream_type = std::basic_ostream<char>] /usr/lib/gcc/i686-pc-linux-gnu/4.5.0/../../../../include/c++/4.5.0/ostream:209:7: note: std::basic_ostream<_CharT, _Traits>::__ostream_type& std::basic_ostream<_CharT, _Traits>::operator<<(double) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_ostream<_CharT, _Traits>::__ostream_type = std::basic_ostream<char>] /usr/lib/gcc/i686-pc-linux-gnu/4.5.0/../../../../include/c++/4.5.0/ostream:213:7: note: std::basic_ostream<_CharT, _Traits>::__ostream_type& std::basic_ostream<_CharT, _Traits>::operator<<(float) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_ostream<_CharT, _Traits>::__ostream_type = std::basic_ostream<char>] /usr/lib/gcc/i686-pc-linux-gnu/4.5.0/../../../../include/c++/4.5.0/ostream:221:7: note: std::basic_ostream<_CharT, _Traits>::__ostream_type& std::basic_ostream<_CharT, _Traits>::operator<<(long double) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_ostream<_CharT, _Traits>::__ostream_type = std::basic_ostream<char>] /usr/lib/gcc/i686-pc-linux-gnu/4.5.0/../../../../include/c++/4.5.0/ostream:225:7: note: std::basic_ostream<_CharT, _Traits>::__ostream_type& std::basic_ostream<_CharT, _Traits>::operator<<(const void*) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_ostream<_CharT, _Traits>::__ostream_type = std::basic_ostream<char>] /usr/lib/gcc/i686-pc-linux-gnu/4.5.0/../../../../include/c++/4.5.0/bits/ostream.tcc:119:5: note: std::basic_ostream<_CharT, _Traits>& std::basic_ostream<_CharT, _Traits>::operator<<(std::basic_ostream<_CharT, _Traits>::__streambuf_type*) [with _CharT = char, _Traits = std::char_traits<char>, std::basic_ostream<_CharT, _Traits>::__streambuf_type = std::basic_streambuf<char>] test.cpp:18:11: note: std::ostream& operator<<(std::ostream&, A&) [felix@the-machine C]$ Quite nasty.

    Read the article
