Search Results

Search found 483 results on 20 pages for 'routines'.

  • cURL/PHP Request Executes 50% of the Time

    - by makavelli
    After searching all over, I can't understand why cURL requests issued to a remote SSL-enabled host succeed only about 50% of the time in my case. Here's the situation: I have a sequence of cURL requests, all issued to an HTTPS remote host within a single PHP script that I run using the PHP CLI. Occasionally when I run the script the requests execute successfully, but most of the time I get the following error from cURL:

        * About to connect() to www.virginia.edu port 443 (#0)
        *   Trying 128.143.22.36... * connected
        * Connected to www.virginia.edu (128.143.22.36) port 443 (#0)
        * successfully set certificate verify locations:
        *   CAfile: none
        *   CApath: /etc/ssl/certs
        * error:140943FC:SSL routines:SSL3_READ_BYTES:sslv3 alert bad record mac
        * Closing connection #0

    If I try again a few times I get the same result, but then after a few tries the requests will go through successfully. Running the script again after that results in an error, and the pattern continues. Researching the error 'alert bad record mac' didn't give me anything helpful, and I hesitate to blame it on an SSL issue since the script still runs occasionally. I'm on Ubuntu Server 10.04, with php5 and php5-curl installed, as well as the latest version of openssl. In terms of cURL-specific options, CURLOPT_SSL_VERIFYPEER is set to false, and both CURLOPT_TIMEOUT and CURLOPT_CONNECTTIMEOUT are set to 4 seconds. Further illustrating this problem is the fact that the exact same situation occurs on my Mac OS X dev machine - the requests only go through ~50% of the time.
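
    As a point of reference, here is a minimal PHP sketch of the setup described above. The URL and option values come from the question itself; the CURLOPT_SSLVERSION line is an added assumption, since pinning the handshake to TLSv1 is one way to rule out an SSLv3-specific negotiation problem such as 'bad record mac'.

        <?php
        // Minimal reproduction of the cURL setup described above (PHP CLI).
        $ch = curl_init('https://www.virginia.edu/');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // capture the response body
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);  // as in the original script
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 4);      // 4-second timeouts, as described
        curl_setopt($ch, CURLOPT_TIMEOUT, 4);
        curl_setopt($ch, CURLOPT_VERBOSE, true);          // prints the handshake trace
        curl_setopt($ch, CURLOPT_SSLVERSION, 1);          // 1 = TLSv1; an assumption, not in the original
        $body = curl_exec($ch);
        if ($body === false) {
            echo 'cURL error: ' . curl_error($ch) . "\n";
        }
        curl_close($ch);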

    Read the article

  • C++: how to build my own utility library?

    - by augustin
    I am starting to be proficient enough with C++ so that I can write my own C++ based scripts (to replace bash and PHP scripts I used to write before). I find that I am starting to have a very small collection of utility functions and sub-routines that I'd like to use in several, otherwise unrelated C++ scripts. I know I am not supposed to reinvent the wheel and that I could use external libraries for some of the utilities I'm creating for myself. However, it's fun to create my own utility functions, they are perfectly tailored to the job I have in mind, and it's for me a large part of the learning process. I'll see about using more polished external libraries when I am proficient enough to work on more serious, long term projects. So, the question is: how do I manage my personal utility library in a way that the functions can be easily included in my various scripts? I am using linux/Kubuntu, vim, g++, etc. and mostly coding CLI scripts. Don't assume too much in terms of experience! ;) Links to tutorials or places where relevant topics are properly documented are welcome.
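
    For illustration, a minimal sketch of one common layout (all file and function names are hypothetical): declarations in a header, definitions compiled once into a static archive, and the archive linked into each script.

        // myutils.h - declarations only, guarded so any script can include it
        #ifndef MYUTILS_H
        #define MYUTILS_H
        #include <string>
        #include <vector>

        // Split 'text' on 'delim'; the kind of small helper such a library collects.
        std::vector<std::string> split(const std::string& text, char delim);
        #endif

        // myutils.cpp - definitions, compiled once
        #include "myutils.h"
        #include <sstream>

        std::vector<std::string> split(const std::string& text, char delim)
        {
            std::vector<std::string> parts;
            std::istringstream in(text);
            for (std::string item; std::getline(in, item, delim); )
                parts.push_back(item);
            return parts;
        }

        // Build the archive once, then link it into any script:
        //   g++ -c myutils.cpp -o myutils.o
        //   ar rcs libmyutils.a myutils.o
        //   g++ myscript.cpp -L. -lmyutils -o myscript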

    Read the article

  • 2 basic but interesting questions about .NET

    - by b-gen-jack-o-neill
    Hi, when I first saw C#, I thought it must be some joke. I was starting out with programming in C. But in C# you could just drag and drop objects, and then write event code for them. It was so simple. Now, I still like C the most, because I am very attracted to basic low-level operations, and C is just the next level up from assembler, with a few basic routines, so I like it very much. Even more because I write little apps for microcontrollers. But yesterday I wrote a very simple control program for my microcontroller-based LED cube in asm, and I needed some way to simply create animation sequences for the cube. So, I remembered C#. I have practically NO C# skills, but I still created a simple program with a GUI to make animation sequences in about an hour, just with the help of Google and the embedded function descriptions in C#. So, to get to the point: is there some reason other than top speed to use any other language than C#? I mean, it is so effective. I know that Java is somewhat similar, but I expect C# to be more effective on Windows since it's directly from Microsoft. The second question is: what is the advantage of compiling into CIL and then running under the CLR, rather than compiling directly into machine code? I know that portability is one, but since C# is mainly for Windows, wouldn't it be more powerful to just compile it directly? Thanks.

    Read the article

  • Windows File I/O Reading

    - by eyeanand
    Currently working on opening/reading images in VC++. Some examples I came across on the internet use Windows.h I/O routines like ReadFile... but there seems to be an inconsistency in their declarations. Here's what I have got:

        // So I have this function to load a file
        BYTE* LoadFile(int* width, int* height, long* size, LPCWSTR bmpfile)
        {
            BITMAPFILEHEADER bmpheader;
            BITMAPINFOHEADER bmpinfo;
            DWORD bytesread = 0;
            HANDLE file = CreateFile(bmpfile, GENERIC_READ, FILE_SHARE_READ, NULL,
                                     OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
            if (NULL == file)
                return NULL;
            if (ReadFile(file, &bmpheader, sizeof(BITMAPFILEHEADER), &bytesread, NULL) == false)
            {
                CloseHandle(file);
                return NULL;
            }
            ...
            return appropriate value;
        }

    Now the ReadFile API function is declared as follows in WinBase.h:

        WINBASEAPI BOOL WINAPI ReadFile(
            _In_        HANDLE       hFile,
            _Out_       LPVOID       lpBuffer,
            _In_        DWORD        nNumberOfBytesToRead,
            _Out_opt_   LPDWORD      lpNumberOfBytesRead,
            _Inout_opt_ LPOVERLAPPED lpOverlapped
        );

    And in MSDN examples they call this function like this:

        ReadFile(hFile, chBuffer, BUFSIZE, &dwBytesRead, NULL)

    which expects that "bytesRead" is a sort of out parameter, so it gives me the number of bytes read. But in my code it is giving this error message:

        'ReadFile' : cannot convert parameter 4 from 'LPDWORD *' to 'LPDWORD'

    So I just initialized bytesRead to 0 and passed it by value (which is wrong, but just to check if it works). Then it gives this exception:

        Unhandled exception at 0x774406ae in ImProc.exe: 0xC0000005: Access violation writing location 0x00000000.

    Kindly suggest, and kindly tell if there is any code I missed out, including while forming the question itself. Thanks.
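
    Two details in the quoted code are worth noting, sketched below under the assumption that the real declaration of bytesread was LPDWORD rather than DWORD (which is what the 'LPDWORD *' error message implies). CreateFile reports failure with INVALID_HANDLE_VALUE, not NULL, and MSDN requires lpNumberOfBytesRead to be non-NULL whenever lpOverlapped is NULL, which explains the access violation at address 0x00000000 after passing 0 by value.

        // Hedged sketch of the corrected calls (error handling trimmed).
        #include <windows.h>

        BOOL ReadBitmapHeader(LPCWSTR bmpfile, BITMAPFILEHEADER* header)
        {
            HANDLE file = CreateFile(bmpfile, GENERIC_READ, FILE_SHARE_READ, NULL,
                                     OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
            if (file == INVALID_HANDLE_VALUE)   // CreateFile fails with this, not NULL
                return FALSE;

            DWORD bytesRead = 0;                // a DWORD, so &bytesRead is an LPDWORD
            BOOL ok = ReadFile(file, header, sizeof(BITMAPFILEHEADER), &bytesRead, NULL);
            CloseHandle(file);
            return ok && bytesRead == sizeof(BITMAPFILEHEADER);
        }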

    Read the article

  • Extended Logging with Caller Info Attributes

    - by João Angelo
    .NET 4.5 caller info attributes may be one of those features that do not get much airtime, but they are nonetheless a great addition to the framework. These attributes allow you to programmatically access information about the caller of a given method; more specifically, the full path of the code file, the member name of the caller and the line number at which the method was called. They are implemented by taking advantage of C# 4.0 optional parameters and are a compile-time feature, so as an added bonus the returned member name is not affected by obfuscation. The main usage scenario will be for tracing and debugging routines, as we'll see right now. In this sample code I'll be using NLog, but the example is also applicable to other logging frameworks like log4net.

    First, a helper class, without any dependencies, that can be used anywhere to obtain caller information:

        using System;
        using System.IO;
        using System.Runtime.CompilerServices;

        public sealed class CallerInfo
        {
            private CallerInfo(string filePath, string memberName, int lineNumber)
            {
                this.FilePath = filePath;
                this.MemberName = memberName;
                this.LineNumber = lineNumber;
            }

            public static CallerInfo Create(
                [CallerFilePath] string filePath = "",
                [CallerMemberName] string memberName = "",
                [CallerLineNumber] int lineNumber = 0)
            {
                return new CallerInfo(filePath, memberName, lineNumber);
            }

            public string FilePath { get; private set; }

            public string FileName
            {
                get { return this.fileName ?? (this.fileName = Path.GetFileName(this.FilePath)); }
            }

            public string MemberName { get; private set; }

            public int LineNumber { get; private set; }

            public override string ToString()
            {
                return string.Concat(this.FilePath, "|", this.MemberName, "|", this.LineNumber);
            }

            private string fileName;
        }

    Then an extension class specific to the NLog Logger:

        using System;
        using System.Runtime.CompilerServices;
        using NLog;

        public static class LoggerExtensions
        {
            public static void TraceMemberEntry(
                this Logger logger,
                [CallerFilePath] string filePath = "",
                [CallerMemberName] string memberName = "",
                [CallerLineNumber] int lineNumber = 0)
            {
                LogMemberEntry(logger, LogLevel.Trace, filePath, memberName, lineNumber);
            }

            public static void TraceMemberExit(
                this Logger logger,
                [CallerFilePath] string filePath = "",
                [CallerMemberName] string memberName = "",
                [CallerLineNumber] int lineNumber = 0)
            {
                LogMemberExit(logger, LogLevel.Trace, filePath, memberName, lineNumber);
            }

            public static void DebugMemberEntry(
                this Logger logger,
                [CallerFilePath] string filePath = "",
                [CallerMemberName] string memberName = "",
                [CallerLineNumber] int lineNumber = 0)
            {
                LogMemberEntry(logger, LogLevel.Debug, filePath, memberName, lineNumber);
            }

            public static void DebugMemberExit(
                this Logger logger,
                [CallerFilePath] string filePath = "",
                [CallerMemberName] string memberName = "",
                [CallerLineNumber] int lineNumber = 0)
            {
                LogMemberExit(logger, LogLevel.Debug, filePath, memberName, lineNumber);
            }

            public static void LogMemberEntry(
                this Logger logger,
                LogLevel logLevel,
                [CallerFilePath] string filePath = "",
                [CallerMemberName] string memberName = "",
                [CallerLineNumber] int lineNumber = 0)
            {
                const string MsgFormat = "Entering member {1} at line {2}";
                InternalLog(logger, logLevel, MsgFormat, filePath, memberName, lineNumber);
            }

            public static void LogMemberExit(
                this Logger logger,
                LogLevel logLevel,
                [CallerFilePath] string filePath = "",
                [CallerMemberName] string memberName = "",
                [CallerLineNumber] int lineNumber = 0)
            {
                const string MsgFormat = "Exiting member {1} at line {2}";
                InternalLog(logger, logLevel, MsgFormat, filePath, memberName, lineNumber);
            }

            private static void InternalLog(
                Logger logger,
                LogLevel logLevel,
                string format,
                string filePath,
                string memberName,
                int lineNumber)
            {
                if (logger == null)
                    throw new ArgumentNullException("logger");
                if (logLevel == null)
                    throw new ArgumentNullException("logLevel");

                logger.Log(logLevel, format, filePath, memberName, lineNumber);
            }
        }

    Finally, a usage example:

        using NLog;

        internal static class Program
        {
            private static readonly Logger Logger = LogManager.GetCurrentClassLogger();

            private static void Main(string[] args)
            {
                Logger.TraceMemberEntry();

                // Compile time feature
                // Next three lines output the same except for line number
                Logger.Trace(CallerInfo.Create().ToString());
                Logger.Trace(() => CallerInfo.Create().ToString());
                Logger.Trace(delegate() { return CallerInfo.Create().ToString(); });

                Logger.TraceMemberExit();
            }
        }

    NOTE: Code for the helper class and Logger extension is also available here.

    Read the article

  • Functional programming constructs in non-functional programming languages

    - by Giorgio
    This question has been going through my mind quite a lot lately, and since I haven't found a convincing answer to it I would like to know if other users of this site have thought about it as well. In recent years, even though OOP is still the most popular programming paradigm, functional programming is getting a lot of attention. I have only used OOP languages for my work (C++ and Java) but I am trying to learn some FP in my free time because I find it very interesting. So, I started learning Haskell three years ago and Scala last summer. I plan to learn some SML and Caml as well, and to brush up my (little) knowledge of Scheme. Well, a lot of plans (too ambitious?) but I hope I will find the time to learn at least the basics of FP during the next few years. What is important for me is how functional programming works and how / whether I can use it for some real projects. I have already developed small tools in Haskell.

    In spite of my strong interest in FP, I find it difficult to understand why functional programming constructs are being added to languages like C#, Java, C++, and so on. As a developer interested in FP, I find it more natural to use, say, Scala or Haskell, instead of waiting for the next FP feature to be added to my favourite non-FP language. In other words, why would I want to have only some FP in my originally non-FP language instead of looking for a language that has better support for FP? For example, why should I be interested in having lambdas in Java if I can switch to Scala, where I have many more FP concepts and can access all the Java libraries anyway? Similarly: why do some FP in C# instead of using F# (to my knowledge, C# and F# can work together)?

    Java was designed to be OO. Fine: I can do OOP in Java (and I would like to keep using Java in that way). Scala was designed to support OOP + FP. Fine: I can use a mix of OOP and FP in Scala. Haskell was designed for FP: I can do FP in Haskell. If I need to tune the performance of a particular module, I can interface Haskell with some external routines in C. But why would I want to do OOP with just some basic FP in Java?

    So, my main point is: why are non-functional programming languages being extended with some functional concepts? Shouldn't it be more comfortable (interesting, exciting, productive) to program in a language that has been designed from the very beginning to be functional or multi-paradigm? Don't different programming paradigms integrate better in a language that was designed for it than in a language in which one paradigm was only added later?

    The first explanation I could think of is that, since FP is a new concept (it isn't new at all, but it is new for many developers), it needs to be introduced gradually. However, I remember my switch from imperative to OOP: when I started to program in C++ (coming from Pascal and C) I really had to rethink the way in which I was coding, and to do it pretty fast. It was not gradual. So, this does not seem to be a good explanation to me. Also, I asked myself if my impression is just plainly wrong due to lack of knowledge. E.g., do C# and C++11 support FP as extensively as, say, Scala or Caml do? In this case, my question would simply be non-existent. Or can it be that many non-FP programmers are not really interested in using functional programming, but they find it practically convenient to adopt certain FP idioms in their non-FP language?

    IMPORTANT NOTE: Just in case (because I have seen several language wars on this site): I mentioned the languages I know better; this question is in no way meant to start comparisons between different programming languages to decide which is better / worse. Also, I am not interested in a comparison of OOP versus FP (pros and cons). The point I am interested in is to understand why FP is being introduced one bit at a time into existing languages that were not designed for it, even though there exist languages that were / are specifically designed to support FP.

    Read the article

  • Low level programming - what's in it for me?

    - by back2dos
    For years I have considered digging into what I consider "low level" languages. For me this means C and assembly. However, I have had no time for this yet, nor has it EVER been necessary. Now, because I don't see any necessity arising, I feel like I should either just schedule some point in time when I will study the subject or drop the plan forever.

    My Position: For the past 4 years I have focused on "web technologies", which may change, and I am an application developer, which is unlikely to change. In application development, I think usability is the most important thing. You write applications to be "consumed" by users. The more usable those applications are, the more value you have produced. In order to achieve good usability, I believe the following things are viable. Good design: well-thought-out features accessible through a well-thought-out user interface. Correctness: the best design isn't worth anything if not implemented correctly. Flexibility: an application A should constantly evolve, so that its users need not switch to a different application B that has new features that A could implement. Applications addressing the same problem should not differ in features but in philosophy. Performance: performance contributes to a good user experience. An application is ideally always responsive and performs its tasks reasonably fast (based on their frequency). The value of performance optimization beyond the point where it is noticeable by the user is questionable. I think low level programming is not going to help me with that, except for performance. But writing a whole app in a low level language for the sake of performance is premature optimization to me.

    My Question: What could low level programming teach me that other languages wouldn't teach me? Am I missing something, or is it just a skill that is of very little use for application development? Please understand that I am not questioning the value of C and assembly. It's just that in my everyday life, I am quite happy that all the intricacies of that world are abstracted away and managed for me (mostly by layers written in C/C++ and assembly themselves). I just don't see any concepts that could be new to me, only details I would have to stuff my head with. So what's in it for me?

    My Conclusion: Thanks to everyone for their answers. I must say, nobody really surprised me, but at least now I am quite sure I will drop this area of interest until any need for it arises. To my understanding, writing assembly these days for the processors in today's CPUs is not only unnecessarily complicated, but risks resulting in poorer runtime performance than a C counterpart. Optimizing by hand is nearly impossible due to out-of-order execution, while you do not get all the kinds of optimizations a compiler can do automatically. Also, the code is either portable, because it uses a small subset of available instructions, or it is optimized, but then it probably works on one architecture only. Writing C is not nearly as necessary anymore as it was in the past. If I were to write an application in C, I would just as much use tested and established libraries and frameworks that would spare me implementing string copy routines, sorting algorithms and the other kinds of stuff that served as exercises at university. My own code would execute faster at the cost of type safety. I am neither keen on reinventing the wheel in the course of normal app development, nor on trying to debug by looking at core dumps :D I am currently experimenting with languages and interpreters, so if there is anything I would like to publish, I suppose I'd port a working concept to C, although C++ might just as well do the trick. Again, thanks to everyone for your answers and your insight.

    Read the article

  • Everything You Need to Know About Monitoring Oracle GoldenGate

    - by Irem Radzik
    By Joe deBuzna
    Having over 16 years of database replication experience, with 6 of those split between complex Oracle GoldenGate installations across three continents and researching monitoring requirements for both GoldenGate core replication and the GoldenGate monitoring GUIs, I've seen GoldenGate used and monitored in every way conceivable. And definite patterns have emerged. Next week at OpenWorld, on Tuesday Oct 2nd at 5pm, please come by to Moscone West-3005 for the "Everything you need to know about Monitoring Oracle GoldenGate" session to hear me discuss how GoldenGate customers are monitoring their implementations today, common methods and tricks, what's new in the GUIs, and what's on the roadmap ahead. As you may have seen in previous blog posts and in our launch webcast, we now have a Plug-in for Oracle Enterprise Manager in addition to the new Oracle GoldenGate Monitor product. For those of you who won't be at OpenWorld, please check out our Management Pack for Oracle GoldenGate data sheet and the Oracle GoldenGate 11gR2 New Features white paper to learn more about the new Oracle GoldenGate 11gR2 release. In this latest release we have also enhanced conflict detection and resolution. It is a cornerstone of any Active-Active database replication solution, and in the latest release we just took ours to the next level with built-in optimized resolution routines (no more dependency on sqlexec!). At OpenWorld we have a session, CON8557 - Best Practice for Conflict Detection & Resolution, 3:30-4:30 on Wed Oct 3rd at Moscone West-3005. Oracle Development Manager Bharath Aleti and I will highlight the most commonly used options and best practices gained from our interaction with numerous customers and consultants. Hope you can join us next week.

    Read the article

  • Some Original Expressions

    - by Phil Factor
    Guest Editorial for the Simple-Talk newsletter.
    In a guest editorial for the Simple-Talk Newsletter, Phil Factor wonders if we are still likely to find some more novel and unexpected ways of using the newer features of Transact SQL: or maybe in some features that have always been there! There can be a great deal of fun to be had in trying out recent features of SQL Expressions to see if they provide new functionality. It is surprisingly rare to find things that couldn't be done before, but in a different and more cumbersome way; but it is great to experiment or to read of someone else making that discovery. One such recent feature is the 'table value constructor', or 'VALUES constructor', that managed to get into SQL Server 2008 from Standard SQL. This allows you to create derived tables of up to 1000 rows neatly within select statements that consist of lists of row values. E.g.:

        SELECT Old_Welsh, number
        FROM (VALUES ('Un',1), ('Dou',2), ('Tri',3), ('Petuar',4), ('Pimp',5),
                     ('Chwech',6), ('Seith',7), ('Wyth',8), ('Nau',9), ('Dec',10)
             ) AS WelshWordsToTen (Old_Welsh, number)

    These values can be expressions that return single values, including, surprisingly, subqueries. You can use this device to create views, or in the USING clause of a MERGE statement. Joe Celko covered this here and here. It can become extraordinarily handy to use once one gets into the way of thinking in these terms, and I've rewritten a lot of routines to use the constructor; the old way of using UNION can be used the same way, but is a little slower and more long-winded. The use of scalar SQL subqueries as an expression in a VALUES constructor, and then applied to a MERGE, has got me thinking. It looks very clever, but what use could one put it to? I haven't seen anything yet that couldn't be done almost as simply in SQL Server 2000, but I'm hopeful that someone will come up with a way of solving a tricky problem, just in the same way that a freak of the XML syntax forever made the in-line production of delimited lists from an expression easy, or that a weird XML pirouette could do an elegant pivot-table rotation. It is in this sort of experimentation where the community of users can make a real contribution. The dissemination of techniques such as the Number, or Tally, table, or the unconventional ways that the UPDATE statement can be used, has been rapid due to articles and blogs. However, there is plenty to be done to explore some of the less obvious features of Transact SQL. Even some of the features introduced into SQL Server 2000 are hardly well-known. Certain operations on data are still awkward to perform in Transact SQL, but we mustn't, I think, be too ready to state that certain things can only be done in the application layer, or using a CLR routine. With the vast array of features in the product, and with the tools that surround it, I feel that there is generally a way of getting tricky things done. Or should we just stick to our lasts and push anything difficult out into procedural code? I'd love to know your views.
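
    To make that MERGE remark concrete, here is a hedged illustration (the columns reuse the Welsh example above, while the WelshNumbers target table is invented) of a VALUES constructor, containing a scalar subquery, used as the source of a MERGE:

        -- Hypothetical upsert: a VALUES constructor as the MERGE source,
        -- with one row value supplied by a scalar subquery.
        MERGE INTO WelshNumbers AS target
        USING (VALUES ('Un', 1),
                      ('Dou', (SELECT MIN(number) + 1 FROM WelshNumbers))
              ) AS source (Old_Welsh, number)
        ON target.Old_Welsh = source.Old_Welsh
        WHEN MATCHED THEN
            UPDATE SET target.number = source.number
        WHEN NOT MATCHED THEN
            INSERT (Old_Welsh, number) VALUES (source.Old_Welsh, source.number);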

    Read the article

  • What Counts For a DBA: Replaceable

    - by Louis Davidson
    Replaceable is what every employee in every company instinctively strives not to be. Yet, if you’re an irreplaceable DBA, meaning that the company couldn’t find someone else who could do what you do, then you’re not doing a great job. A good DBA is replaceable. I imagine some of you are already reaching for the lighter fluid, about to set the comments section ablaze, but before you destroy a perfectly good Commodore 64, read on… Everyone is replaceable, ultimately. Anyone, anywhere, in any job, could be sitting at their desk reading this, blissfully unaware that this is to be their last day at work. Morbidly, you could be about to take your terminal breath. Ideally, it will be because another company suddenly offered you a truck full of money to take a new job, forcing you to bid a regretful farewell to your current employer (with barely a “so long suckers!” left wafting in the air as you zip out of the office like the Wile E Coyote wearing two pairs of rocket skates). I’ve often wondered what it would be like to be present at the meeting where your former work colleagues discuss your potential replacement. It is perhaps only at this point, as they struggle with the question “What kind of person do we need to replace old Wile?” that you would know your true worth in their eyes. Of course, this presupposes you need replacing. I’ve known one or two people whose absence we adequately compensated with a small rock, to keep their old chair from rolling down a slight incline in the floor. On another occasion, we bought a noise-making machine that frequently attracted attention its way, with unpleasant sounds, but never contributed anything worthwhile. These things never actually happened, of course, but you take my point: don’t confuse replaceable with expendable. Likewise, if the term “trained seal” comes up, someone they can teach to follow basic instructions and push buttons in the right order, then the replacement discussion is going to be over quickly. What, however, if your colleagues decide they’ll need a super-specialist to replace you. That’s a good thing, right? Well, usually, in my experience, no it is not. It often indicates that no one really knows what you do, or how. A typical example is the “senior” DBA who built a system just before 16-bit computing became all the rage and then settled into a long career managing it. Such systems are often central to the company’s operations and the DBA very skilled at what they do, but almost impossible to replace, because the system hasn’t evolved, and runs on processes and routines that others no longer understand or recognize. The only thing you really want to hear, at your replacement discussion, is that they need someone skilled at the fundamentals and adaptable. This means that the person they need understands that their goal is to be an excellent DBA, not a specialist in whatever the-heck the company does. Someone who understands the new versions of SQL Server and can adapt the company’s systems to the way things work today, who uses industry standard methods that any other qualified DBA/programmer can understand. More importantly, this person rarely wants to get “pigeon-holed” and so documents and shares the specialized knowledge and responsibilities with their teammates. Being replaceable doesn’t mean being “dime a dozen”. 
The company might need four people to take your place due to the depth of your skills, but still, they could find those replacements and those replacements could step right in using techniques that any decent DBA should know. It is a tough question to contemplate, but take some time to think about the sort of person that your colleagues would seek to replace you. If you think they would go looking for a “super-specialist” then consider urgently how you can diversify and share your knowledge, and start documenting all the processes you know as if today were your last day, because who knows, it just might be.

    Read the article

  • Oracle GoldenGate Active-Active Part 1

    - by Nick_W
    My name is Nick Wagner, and I'm a recent addition to the Oracle Maximum Availability Architecture (MAA) product management team. I've spent the last 15+ years working on database replication products, and the last 10 years working on the Oracle GoldenGate product, so most of my posting will probably be focused on OGG. One question that comes up all the time is around active-active replication with Oracle GoldenGate: how do I know if my application is a good fit for active-active replication with GoldenGate? To answer that, it really comes down to how you plan on handling conflict resolution. I will delve into topology and deployment, along with a simple architecture diagram (omitted from this excerpt), in a later blog.
    The two most common resolution routines are host based resolution and timestamp based resolution. Host based resolution is used less often, but works with the fewest application changes. Think of it like this: any transactions from SystemA always take precedence over any transactions from SystemB. If there is a conflict on SystemB, then the record from SystemA will overwrite it. If there is a conflict on SystemA, then it will be ignored. It is quite a bit less restrictive, and in most cases, as long as all the tables have primary keys, host based resolution will work just fine. Timestamp based resolution, on the other hand, is a little trickier. In this case, you can decide which record is overwritten based on timestamps. For example, does the older record get overwritten with the newer record? Or vice-versa? This method not only requires primary keys on every table, but it also requires every table to have a timestamp/date column that is updated each time a record is inserted or updated on the table. Most homegrown applications can always be customized to include these requirements, but it's a little more difficult with 3rd party applications, and might even be impossible for large ERP type applications (a concrete table sketch appears below).
    If your database has these features - whether it's primary keys for host based resolution, or primary keys and timestamp columns for timestamp based resolution - then your application could be a great candidate for active-active replication. But table structure is not the only requirement. The other consideration applies when there is a conflict; i.e., do I need to perform any notification or track down the user whose data was overwritten? In most cases, I don't think it's necessary, but if it is required, OGG can always create an exceptions table that contains all of the overwritten transactions so that people can be notified. It's a bit of extra work to implement this type of option, but if the business requires it, then it can be done. Unless someone is constantly monitoring this exception table or has an automated process for dealing with exceptions, there will be a delay in getting a response back to the end user. Ideally, when setting up active-active resolution we can include some simple procedural steps or configuration options that can reduce, or in some cases eliminate, the potential for conflicts. This makes the whole implementation that much easier and foolproof. And I'll cover these in my next blog.
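
    As promised above, a hedged sketch (all names invented) of an Oracle table that satisfies both methods: a primary key for host based resolution, plus a timestamp column maintained on every insert and update for timestamp based resolution.

        -- Hypothetical table meeting the stated requirements.
        CREATE TABLE orders (
            order_id     NUMBER PRIMARY KEY,             -- needed by both methods
            status       VARCHAR2(20),
            last_updated TIMESTAMP DEFAULT SYSTIMESTAMP  -- needed for timestamp resolution
        );

        -- Keep last_updated current on updates as well as inserts.
        CREATE OR REPLACE TRIGGER orders_touch
        BEFORE INSERT OR UPDATE ON orders
        FOR EACH ROW
        BEGIN
            :NEW.last_updated := SYSTIMESTAMP;
        END;
        /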

    Read the article

  • Syntax of passing lambda

    - by Astara
    Right now, I'm working on refactoring a program that calls its parts by polling over to a more event-driven structure. I've created sched and task classes, with the sched to become a base class of the current main loop. The tasks will be created for each meter so they can be called off of that instead of polling. Each of the events main calls is a type of meter that gathers info and displays it. When the program is coming up, all enabled meters get 'constructed' by a main-sub. In that sub, I want to store off the "this" pointer associated with the meter, as well as the common name for the action routine:

        void MeterMaker::Meter_n_Task(Meter * newmeter)
        {
            push(newmeter);     // handle non-timed draw events
            Task t = new Task(now() + 0.5L);
            t.period = {0, 1U};
            t.work_meter = newmeter;
            t.work = [&newmeter](){newmeter.checkevent();};   // <<-- attempt at lambda
            t.flags = T_Repeat;
            t.enable_task();
            _xos->sched_insert(t);
        }

    A sample call to it:

        Meter_n_Task(new CPUMeter(_xos, "CPU "));

    I've made the scheduler a base class of the main routine (that handles the loop), and I've tried several variations to get the task class to be a base of the meter class, but keep running into roadblocks. It's a lot like "whack-a-mole" -- pound in something to fix one place, and then a new problem pops out elsewhere. Part of the problem is that the sched.h file that is trying to hold the Task Q includes the Task header file. The task file wants to refer to the most "base" Meter class. The meter class pulls in the main class of the parent, as it passes a copy of the parent to the children so they can access the draw routines in the parent. Two references in the task file are for the 'this' pointer of the meter and the meter's update sub (to be called via this):

        void *this_data = NULL;
        void (*this_func)() = NULL;

    Note -- I didn't really want to store these in the class, as I wanted to use a lambda in that meter&task routine above to store a routine+context to be used to call the meter's action routine. Couldn't figure out the syntax. But am running into other syntax problems trying to store the pointers... such as:

        g++: COMPILE lsched.cc
        In file included from meter.h:13:0,
                         from ltask.h:17,
                         from lsched.h:13,
                         from lsched.cc:13:
        xosview.h:30:47: error: expected class-name before '{' token
         class XOSView : public XWin, public Scheduler {

    Like above, where it asks for a class where the classname "Scheduler" is. !?!? Huh? That IS a class name. I keep going in circles with things that don't make sense...

    Ideally I'd get the lambda to work right in the Meter_n_Task routine at the top. I wanted to only store 1 pointer in the 'Task' class that was a pointer to my lambda that would have already captured the "this" value... but couldn't get that syntax to work at all when I tried to store it into a var in the 'Task' class. This project, FWIW, is my teething project on the new C++... (of course it's simple!.. ;-))... I've made quite a bit of progress in other areas in the code, but this lambda syntax has me stumped... it's at times like these that I appreciate the ease of this type of operation in perl. Sigh. Not sure the best way to ask for help here, as this isn't a simple question. But thought I'd try!... ;-) Too bad I can't attach files to this Q.
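
    For what it's worth, here is a minimal compilable sketch (class names trimmed to the essentials, not the poster's real classes) of the pattern being attempted: storing a capturing lambda in a std::function<void()> member replaces the raw void* / function-pointer pair, since a capturing lambda cannot decay to a plain function pointer. Note the capture is by value; [&newmeter] in the code above captures a reference to a local parameter, which dangles once Meter_n_Task returns.

        // Minimal sketch: a Task stores a capturing lambda and invokes it later.
        #include <functional>
        #include <iostream>

        struct Meter {
            void checkevent() { std::cout << "meter fired\n"; }
        };

        struct Task {
            std::function<void()> work;   // one member replaces this_data + this_func
        };

        int main() {
            Meter* m = new Meter();
            Task t;
            t.work = [m]() { m->checkevent(); };  // capture the pointer by value
            t.work();                             // the scheduler later just calls t.work()
            delete m;
            return 0;
        }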

    Read the article

  • How should I plan the inheritance structure for my game?

    - by Eric Thoma
    I am trying to write a platform shooter in C++ with a really good class structure for robustness. The game itself is secondary; it is the learning process of writing it that is primary. I am implementing an inheritance tree for all of the objects in my game, but I find myself unsure on some decisions. One specific issue that is bugging me is this: I have an Actor that is simply defined as anything in the game world. Under Actor is Character. Both of these classes are abstract. Under Character is the Philosopher, who is the main character that the user commands. Also under Character is NPC, which uses an AI module with stock routines for friendly, enemy and (maybe) neutral alignments. So under NPC I want to have three subclasses: FriendlyNPC, EnemyNPC and NeutralNPC. These classes are not abstract, and will often be subclassed in order to make different types of NPCs, like Engineer, Scientist and the most evil Programmer. Still, if I want to implement a generic NPC named Kevin, it would be nice to be able to put him in without making a new class for him. I could just instantiate a FriendlyNPC and pass some values for the AI machine and for the dialogue; that would be ideal. But what if Kevin is the one benevolent Programmer in the whole world? Now we must make a class for him (but what should it be called?). Now we have a character that should inherit from Programmer (as Kevin has all the same abilities but just uses the friendly AI functions) but also should inherit from FriendlyNPC. Programmer and FriendlyNPC branched away from each other on the inheritance tree, so inheriting from both of them would have conflicts, because some of the same functions have been implemented in different ways on the two of them.

    1) Is there a better way to order these classes to avoid these conflicts? Having three subclasses (Friendly, Enemy and Neutral) for each type of NPC (Engineer, Scientist, and Programmer) would amount to a huge number of classes. I would share specific implementation details, but I am writing the game slowly, piece by piece, and so I haven't implemented past Character yet.

    2) Is there a place where I can learn these programming paradigms? I am already trying to take advantage of some good design patterns, like MVC architecture and Mediator objects. The whole point of this project is to write something in good style. It is difficult to tell what should become a subclass and what should become a state (i.e. Friendly boolean v. Friendly class). Having many states slows down code with if statements and makes classes long and unwieldy. On the other hand, having a class for everything isn't practical.

    3) Are there good rules of thumb or resources to learn more about this?

    4) Finally, where does templating come into this? How should I coordinate templates into my class structure? I have never actually taken advantage of templating, honestly, but I hear that it increases modularity, which means good code.
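
    Picking up the AI-module idea already present in the question, one hedged sketch (all names illustrative) keeps alignment as a strategy object owned by NPC, so a friendly Programmer like Kevin becomes configuration rather than a new class:

        // Illustrative sketch: alignment as a composed strategy, not a subclass.
        #include <memory>
        #include <string>
        #include <utility>

        struct AIModule {                  // stock routines: friendly, enemy, neutral
            virtual ~AIModule() = default;
            virtual void think() = 0;
        };

        struct FriendlyAI : AIModule {
            void think() override { /* friendly stock routines */ }
        };

        class NPC /* : public Character */ {   // Character omitted for brevity
        public:
            NPC(std::string name, std::unique_ptr<AIModule> ai)
                : name_(std::move(name)), ai_(std::move(ai)) {}
            void update() { ai_->think(); }
        private:
            std::string name_;
            std::unique_ptr<AIModule> ai_;
        };

        class Programmer : public NPC {    // profession varies by subclass...
        public:
            using NPC::NPC;
        };

        // ...while alignment varies by injection; Kevin needs no class of his own:
        // Programmer kevin("Kevin", std::make_unique<FriendlyAI>());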

    Read the article

  • Four Easy Ways to Save a Rocky CRM Relationship

    - by Divya Malik
    Today, I am pleased to introduce our guest blogger Luke Christianson. Luke is an Application Sales rep based out of Minneapolis, MN. You can find him on LinkedIn and follow him on Twitter.
    In any relationship, sooner or later, the excitement fades away. The honeymoon period gives way to the old routines you had before you committed to each other, and you eventually begin doing things apart from one another. I'm not talking about a marriage... Well, I guess I am. Commitment to a CRM tool and building a deep and lasting relationship is not much different than the basics of a traditional love story. After your controlled CRM pilot program, and maybe the National Sales Meeting where you couldn't escape those three wonderful letters, CRM, you will soon find that if you haven't designed an environment where it's going to enable your reps to make more money, the relationship is doomed. If you're currently in a dysfunctional CRM relationship, here are 4 simple tips for re-engaging users and getting that spark back.
    Shadow a Sales Rep: Chances are you can find out exactly what is preventing your sales reps from using the application by simply watching how they go about their day. Sales reps are driven by money, not by additional administrative duties. Your system needs to be set up so that they can get the information they need quickly, facilitate making key updates and run their business out of one easy-to-use application.
    Increase your sales team's productivity by 5% automatically: Cancel the weekly forecast calls with your reps and require them to update their opportunities in CRM. Something else that I've seen work extremely well: when you do Monthly or Quarterly reviews, do not let your sales reps bring anything into the room with them; no spreadsheets, notebooks, or computers. Everything they need to tell you should be able to be put into CRM and fully accessible by the Sales Manager at any time.
    Tool time: Make sure the tools that you have selected meet both your short-term goals and your long-term goals. You need tools that can adapt like your business does. You probably can't wait two months for an update to a picklist value or for the addition of a simple workflow rule. Do you feel the tools that are in place can create the experience you want for your users?
    And finally, if all else fails...
    Keep It Simple, Stupid: Do you really need to require 15 fields to create an Opportunity? Do you need to clutter the interface with different reports that don't add daily value? Most CRM systems on the market today are flexible enough that your admin could clean up most of the unnecessary interface 'noise' in a few hours. If they're not, see the previous tip.
    Every strong relationship can be tedious at times; you'll fight and eventually make amends, you may even threaten to upgrade to a newer model... But be patient and think about what you want to achieve, and you'll find a partner for life.

    Read the article

  • Advice on refactoring PHP Project

    - by b0x
    I have a small SaaS ERP that was written some years ago using PHP. At that time it didn't use any framework, but the code isn't a mess. Nowadays the project grows and I'm now working with 3 more programmers. Often they ask me why we don't migrate to a framework such as Laravel. Although I'd love trying Laravel, I'm a small business and I don't have time nor money to stop and spend a whole year rebuilding everything from scratch. I need to live and pay the bills. So, I've read a lot about this matter, and I decided that doing a refactoring is the best way to do it. Also, I'm not so sure that a framework will make things easy.

    Business goals are:
    - Make the code easier for newly hired programmers.
    - Separate the "view", in order to: release different versions of this product (using the same code) under different brands and websites at minimum cost (just changing the view); release different versions to fit mobile/tablet.
    - Make different types of this product, selling packages as if they were plugins.
    - Develop custom packages for some customers (like plugins/add-ons that they can buy to put on the main application).

    Code goals:
    - Introduce best practices and standards for everyone.
    - Try to build my own MVC structure.
    - Improve validation of data/forms (today they are mixed in both ajax and classes).
    - Create automated testing routines for quality assurance.

    My current project structure:

        class\
        extra\
        hd\
        logs\
        public_html\
        public_html\includes\
        public_html\css|js|images\

    class\ -- There are three types of classes. They are all "autoloaded" with something similar to PSR-0, but I don't use namespaces.
    1. class.Something.php -- Connects to the database using specific methods, e.g. $cCostumer->list(). It uses "class.Db.php", which is an abstraction over mysql in every method.
    2. class.SomethingProc.php -- Does things that "join" what comes from "class.Something.php", like IF/ELSE and math operations.
    3. class.SomethingHTML.php -- The classes with the "HTML" suffix implement only static methods, and HTML code only.

    A real life example: all the programmers need to use $cSomething ($c for class) and $arrSomething (for array). Costumer.php (view):

        <?php
        $cCostumer = new Costumer();
        $arrCostumer = $cCostumer->list();
        echo CostumerHTML::table($arrCostumer);
        ?>

    extra\ -- Stores 3rd-party projects/classes from others, such as MPDF, PHPMailer, etc.
    hd\ -- Stores users' files outside the wwwroot dir.
    logs\ -- Stores PHP logs and the system's own logs (we have a static Log::error() method that we put in every method of every class).
    public_html\ -- Stores the files that people use.
    public_html\includes\ -- Stores the main "config.php" file and all files that do "ajax things", ajax.Costumer.php for example.

    Help is needed ;) So, as you can see, we have some standards, and also for database things. But I want to write a manual of our rules, something that I can give to any new programmer at my company so he can get going. This is not totally a mess, but it could be better given the newer practices. What could I do to separate this into MVC, to have multiple views? Could you give me some tips considering my goals? Keep in mind the different products/custom things for specific customers, without breaking the main application. URLs for tutorials, books, etc., would be nice.
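
    As one hedged illustration of the 'separate the view' goal (the directory layout and View class are invented, while the variable naming follows the question's conventions), a tiny per-brand template resolver would let the same application code ship under different brands and websites:

        <?php
        // Hypothetical sketch: resolve view templates per brand so one codebase
        // can be released under different brands/websites by swapping templates.
        class View
        {
            private $brandDir;

            public function __construct($brand)
            {
                // e.g. templates/brandA/, templates/brandB/ (invented layout)
                $this->brandDir = __DIR__ . '/templates/' . $brand . '/';
            }

            public function render($template, array $data)
            {
                extract($data);                       // expose $arrCostumer etc.
                require $this->brandDir . $template;  // e.g. templates/brandA/costumer.php
            }
        }

        // Usage, following the question's naming conventions:
        // $cCostumer   = new Costumer();
        // $arrCostumer = $cCostumer->list();
        // $view = new View('brandA');
        // $view->render('costumer.php', array('arrCostumer' => $arrCostumer));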

    Read the article

  • SPTI problem with Mode Select

    - by Bob Murphy
    I'm running into a problem in which an attempt to do a "Mode Select" SCSI command using SPTI is returning an error status of 0x02 ("Check Condition"), and hope someone here might have some tips or suggestions. The code in question is intended to work with a custom SCSI device. I wrote the original support for it using ASPI under WinXP, and am converting it to work with SPTI under 64-bit Windows 7. Here's the problematic code - and what's happening is, sptwb.spt.ScsiStatus is 2, which is a "Check Condition" error. Unfortunately, the device in question doesn't return useful information when you do a "Request Sense" after this problem occurs, so that's no help.

        void MSSModeSelect(const ModeSelectRequestPacket& inRequest, StatusResponsePacket& outResponse)
        {
            IPC_LOG("MSSModeSelect(): PathID=%d, TargetID=%d, LUN=%d",
                    inRequest.m_Device.m_PathId, inRequest.m_Device.m_TargetId, inRequest.m_Device.m_Lun);
            int adapterIndex = inRequest.m_Device.m_PathId;
            HANDLE adapterHandle = prvOpenScsiAdapter(inRequest.m_Device.m_PathId);
            if (adapterHandle == INVALID_HANDLE_VALUE)
            {
                outResponse.m_Status = eScsiAdapterErr;
                return;
            }

            SCSI_PASS_THROUGH_WITH_BUFFERS sptwb;
            memset(&sptwb, 0, sizeof(sptwb));
        #define MODESELECT_BUF_SIZE 32
            sptwb.spt.Length = sizeof(SCSI_PASS_THROUGH);
            sptwb.spt.PathId = inRequest.m_Device.m_PathId;
            sptwb.spt.TargetId = inRequest.m_Device.m_TargetId;
            sptwb.spt.Lun = inRequest.m_Device.m_Lun;
            sptwb.spt.CdbLength = CDB6GENERIC_LENGTH;
            sptwb.spt.SenseInfoLength = 0;
            sptwb.spt.DataIn = SCSI_IOCTL_DATA_IN;
            sptwb.spt.DataTransferLength = MODESELECT_BUF_SIZE;
            sptwb.spt.TimeOutValue = 2;
            sptwb.spt.DataBufferOffset = offsetof(SCSI_PASS_THROUGH_WITH_BUFFERS, ucDataBuf);
            sptwb.spt.Cdb[0] = SCSIOP_MODE_SELECT;
            sptwb.spt.Cdb[4] = MODESELECT_BUF_SIZE;
            DWORD length = offsetof(SCSI_PASS_THROUGH_WITH_BUFFERS, ucDataBuf)
                         + sptwb.spt.DataTransferLength;

            memset(sptwb.ucDataBuf, 0, sizeof(sptwb.ucDataBuf));
            sptwb.ucDataBuf[2] = 0x10;
            sptwb.ucDataBuf[16] = (BYTE)0x01;

            ULONG bytesReturned = 0;
            BOOL okay = DeviceIoControl(adapterHandle, IOCTL_SCSI_PASS_THROUGH,
                                        &sptwb, sizeof(SCSI_PASS_THROUGH),
                                        &sptwb, length, &bytesReturned, FALSE);
            DWORD gle = GetLastError();
            IPC_LOG(" DeviceIoControl() %s", okay ? "worked" : "failed");
            if (okay)
            {
                outResponse.m_Status = (sptwb.spt.ScsiStatus == 0) ? eOk : ePrinterStatusErr;
            }
            else
            {
                outResponse.m_Status = eScsiPermissionsErr;
            }
            CloseHandle(adapterHandle);
        }

    A few more remarks, for what it's worth:
    This is derived from some old ASPI code that does the "Mode Select" flawlessly.
    This routine opens \\.\SCSI<whatever> at the beginning, via prvOpenScsiAdapter(), and closes the handle at the end. All the other routines for dealing with the device do the same thing, including the routine to do "Reserve Unit". Is this a good idea under SPTI, or should the call to "Reserve Unit" leave the handle open, so this routine and others in the sequence can use the same handle?
    This uses IOCTL_SCSI_PASS_THROUGH. Should "Mode Select" use IOCTL_SCSI_PASS_THROUGH_DIRECT instead?
    Thanks in advance - any help will be greatly appreciated.

    Read the article

  • Documentation and Test Assertions in Databases

    - by Phil Factor
    When I first worked with Sybase/SQL Server, we thought our databases were impressively large, but they were, by today's standards, pathetically small. We had one script to build the whole database. Every script I ever read was richly annotated; it was more like reading a document. Every table had a comment block, and every line would be commented too. At the end of each routine (e.g. procedure) was a quick integration test, or series of test assertions, to check that nothing in the build was broken. We simply ran the build script, stored in the Version Control System, and it pulled everything together in a logical sequence that not only created the database objects but pulled in the static data. This worked fine at the scale we had. The advantage was that one could, by reading the source code, reach a rapid understanding of how the database worked and how one could interface with it. The problem was that it was a system that meant only one developer at a time could work on the database. It was very easy for a developer to accidentally execute the entire build script rather than the selected section on which he or she was working, thereby cleansing the database of everyone else's work-in-progress and data.

    It soon became the fashion to work at the object level, so that programmers could check out individual views, tables, functions, constraints and rules and work on them independently. It was then that I noticed the trend to generate the source for the VCS retrospectively from the development server. Tables were worst affected. You can, of course, add or delete a table's columns and constraints retrospectively, which means that the existing source no longer represents the current object. If, after your development work, you generate the source from the live table, then you get no block or line comments, and the source script is sprinkled with silly square-brackets and other confetti, thereby rendering it visually indigestible. Routines, too, were affected. In our system, every routine had a directly attached string of unit-tests. A retro-generated routine has no unit-tests or test assertions. Yes, one can still commit our test code to the VCS, but it's a separate module, and teams end up running the whole suite of tests for every individual change, rather than just the tests for that routine, which doesn't scale for database testing.

    With extended properties, one can get the best of both worlds, and even use them to put blame, praise or annotations into your VCS. It requires a lot of work, though, particularly the script to generate the table. The problem is that there are no conventional names beyond 'MS_Description' for the special use of extended properties. This makes it difficult to do splendid things such as ensuring the integrity of the build by running a suite of tests that are actually stored in extended properties within the database and therefore the VCS. We have lost the readability of database source code over the years, and largely jettisoned the use of test assertions as part of the database build. This is not unexpected in view of the increasing complexity of the structure of databases and the number of programmers working on them. There must, surely, be a way of getting them back, but I sometimes wonder if I'm one of very few who miss them.
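
    For readers who have not used extended properties, here is a small hedged example of the mechanism being discussed (the object names are invented; MS_Description is the one conventional property name, and 'TestAssertion' below is our own convention, not a SQL Server standard):

        -- Attach documentation to a table via the standard MS_Description property.
        EXEC sys.sp_addextendedproperty
             @name = N'MS_Description',
             @value = N'Customers who have placed at least one order.',
             @level0type = N'SCHEMA', @level0name = N'dbo',
             @level1type = N'TABLE',  @level1name = N'ActiveCustomer';

        -- A custom property could hold a test assertion for the build to execute.
        EXEC sys.sp_addextendedproperty
             @name = N'TestAssertion',
             @value = N'IF NOT EXISTS (SELECT 1 FROM dbo.ActiveCustomer) RAISERROR(''ActiveCustomer is empty'', 16, 1);',
             @level0type = N'SCHEMA', @level0name = N'dbo',
             @level1type = N'TABLE',  @level1name = N'ActiveCustomer';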

    Read the article

  • MySQL Connector/Net 6.6.2 has been released

    - by fernando
    MySQL Connector/Net 6.6.2, a new version of the all-managed .NET driver for MySQL, has been released. This is the first of two beta releases intended to introduce users to the new features in the release. This release is feature complete and should be stable enough for users to understand the new features and how we expect them to work. As is the case with all non-GA releases, it should not be used in any production environment. It is appropriate for use with MySQL server versions 5.0-5.6. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point; if you can't find this version on some mirror, please try again later or choose another download site). The 6.6 version of MySQL Connector/Net brings the following new features:

      * Stored routine debugging
      * Entity Framework 4.3 Code First support
      * Pluggable authentication (now third parties can plug new authentication mechanisms into the driver)
      * Full Visual Studio 2012 support: everything from Server Explorer to IntelliSense & the Stored Routine debugger

    Stored Procedure Debugging
    --------------------------
    We are very excited to introduce stored procedure debugging into our Visual Studio integration. It works in a very intuitive manner by simply clicking 'Debug Routine' from Server Explorer. You can debug stored routines, functions & triggers. Some of the new features in this release include:

      * Besides normal breakpoints, you can define conditional & pass-count breakpoints.
      * The debugger editor now shows colorizing.
      * You can now change the values of locals in a function scope (previously this caused a deadlock, due to functions executing within their own transaction).
      * You can now also debug triggers for 'replace' sql statements.
      * In general, anything related to locals, watches, breakpoints, stepping & call stack should work in a similar way to C#'s Visual Studio debugger.

    Some limitations remain, due to the current debugger architecture:

      * Some MySQL functions cannot be debugged currently (get_lock, release_lock, begin, commit, rollback, set transaction level).
      * Only one debug session may be active on a given server.

    The Debugger is feature complete at this point. We look forward to your feedback.

    Documentation
    -------------
    The documentation is still being developed and will be readily available soon (before Beta 2). You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html. You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support!

    Read the article

  • OpenSSL Handshake Failure (14094410) - Erroneous Client Certificate Check from Mobile Phone

    - by Clayton Sims
    I'm running a proxy server through Apache with mod_ssl, which we're using to proxy POSTs from mobile devices to another internal server. This works successfully for most clients, but requests from a specific phone model (Nokia 2690) are showing a bizarre handshake failure. It looks as though OpenSSL is either requesting (or attempting to read an unsolicited) client certificate from the phone (which is especially bizarre because j2me's kssl implementation doesn't support client certs). I've disabled client certificates with the SSLVerifyClient none directive in both the virtual host conf and the mod_ssl conf. The trace from error.log on debug level is (details redacted):

        [client 41.220.207.10] Connection to child 0 established (server www.myserver.org:443)
        [info] Seeding PRNG with 656 bytes of entropy
        [debug] ssl_engine_kernel.c(1866): OpenSSL: Handshake: start
        [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: before/accept initialization
        [debug] ssl_engine_io.c(1882): OpenSSL: read 11/11 bytes from BIO#7fe3fbaf17a0 [mem: 7fe3fbaf90d0] (BIO dump follows)
        [debug] ssl_engine_io.c(1815): +-------------------------------------------------------------------------+
        [debug] ssl_engine_io.c(1860): +-------------------------------------------------------------------------+
        [debug] ssl_engine_io.c(1882): OpenSSL: read 49/49 bytes from BIO#7fe3fbaf17a0 [mem: 7fe3fbaf90db] (BIO dump follows)
        [debug] ssl_engine_io.c(1815): +-------------------------------------------------------------------------+
        [debug] ssl_engine_io.c(1860): +-------------------------------------------------------------------------+
        [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 read client hello A
        [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 write server hello A
        [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 write certificate A
        [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 write server done A
        [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 flush data
        [debug] ssl_engine_io.c(1882): OpenSSL: read 5/5 bytes from BIO#7fe3fbaf17a0 [mem: 7fe3fbaf90d0] (BIO dump follows)
        [debug] ssl_engine_io.c(1815): +-------------------------------------------------------------------------+
        [debug] ssl_engine_io.c(1860): +-------------------------------------------------------------------------+
        [debug] ssl_engine_io.c(1882): OpenSSL: read 2/2 bytes from BIO#7fe3fbaf17a0 [mem: 7fe3fbaf90d5] (BIO dump follows)
        [debug] ssl_engine_io.c(1815): +-------------------------------------------------------------------------+
        [debug] ssl_engine_io.c(1860): +-------------------------------------------------------------------------+
        [debug] ssl_engine_kernel.c(1879): OpenSSL: Read: SSLv3 read client certificate A
        [debug] ssl_engine_kernel.c(1898): OpenSSL: Exit: failed in SSLv3 read client certificate A
        [client 41.220.207.10] SSL library error 1 in handshake (server www.myserver.org:443)
        [info] SSL Library Error: 336151568 error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
        [client 41.220.207.10] Connection closed to child 0 with abortive shutdown (server www.myserver.org:443)

    I've tried enabling all ciphers and all protocols temporarily with mod_ssl, neither of which seemed to be the issue. The phone should be using RSA_RC4_128_MD5 and SSLv3, all of which are available. Am I missing something more fundamental about what's failing here? It seemed like the certificate request might have been part of a renegotiation failure. I tried enabling SSLInsecureRenegotiation On on the virtual host, in case it was an issue of the phone's SSL not supporting the new protocol, but to no avail. Currently running: Apache/2.2.16 (Ubuntu), mod_ssl/2.2.16, OpenSSL/0.9.8o, Apache proxy_html/3.0.1.
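    One way to chase an intermittent failure like this is to reproduce the handshake outside Apache and the phone entirely. Below is a minimal probe using OpenSSL's C API (compiled as C++); the host name is the redacted placeholder from the question, and pinning the client to SSLv3 is an assumption about what the handset speaks.

        // Build with: g++ probe.cpp -lssl -lcrypto
        #include <cstdio>
        #include <openssl/bio.h>
        #include <openssl/err.h>
        #include <openssl/ssl.h>

        int main() {
            SSL_library_init();
            SSL_load_error_strings();

            // SSLv3 only, to mimic the phone (an assumption).
            SSL_CTX* ctx = SSL_CTX_new(SSLv3_client_method());
            BIO* bio = BIO_new_ssl_connect(ctx);
            BIO_set_conn_hostname(bio, "www.myserver.org:443");  // placeholder

            SSL* ssl = nullptr;
            BIO_get_ssl(bio, &ssl);

            if (BIO_do_connect(bio) <= 0 || BIO_do_handshake(bio) <= 0) {
                // Dumps the same error:14094410-style alert strings that
                // appear in Apache's error.log.
                ERR_print_errors_fp(stderr);
            } else {
                std::printf("negotiated cipher: %s\n", SSL_get_cipher(ssl));
            }
            BIO_free_all(bio);
            SSL_CTX_free(ctx);
            return 0;
        }

    Run in a loop, a probe like this can show whether the failure correlates with anything on the server side, independent of the handset.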

    Read the article

  • Persuading openldap to work with SSL on Ubuntu with cn=config

    - by Roger
    I simply cannot get this (TLS connection to openldap) to work and would appreciate some assistance. I have a working openldap server on ubuntu 10.04 LTS. It is configured to use cn=config, and most of the info I can find for TLS seems to use the older slapd.conf file :-( I've been largely following the instructions here https://help.ubuntu.com/10.04/serverguide/C/openldap-server.html plus stuff I've read here and elsewhere - which of course could be part of the problem, as I don't totally understand all of this yet! I have created an ssl.ldif file as follows:

        dn:cn=config
        add: olcTLSCipherSuite
        olcTLSCipherSuite: TLSV1+RSA:!NULL
        add: olcTLSCRLCheck
        olcTLSCRLCheck: none
        add: olcTLSVerifyClient
        olcTLSVerifyClient: never
        add: olcTLSCACertificateFile
        olcTLSCACertificateFile: /etc/ssl/certs/ldap_cacert.pem
        add: olcTLSCertificateFile
        olcTLSCertificateFile: /etc/ssl/certs/my.domain.com_slapd_cert.pem
        add: olcTLSCertificateKeyFile
        olcTLSCertificateKeyFile: /etc/ssl/private/my.domain.com_slapd_key.pem

    and I import it using the following command line:

        ldapmodify -x -D cn=admin,dc=mydomain,dc=com -W -f ssl.ldif

    I have edited /etc/default/slapd so that it has the following services line:

        SLAPD_SERVICES="ldap:/// ldapi:/// ldaps:///"

    And every time I make a change, I restart slapd with /etc/init.d/slapd restart. The following command line to test out the non-TLS connection works fine:

        ldapsearch -d 9 -D cn=admin,dc=mydomain,dc=com -w mypassword \
          -b dc=mydomain,dc=com -H "ldap://mydomain.com" "cn=roger*"

    But when I switch to ldaps using this command line:

        ldapsearch -d 9 -D cn=admin,dc=mydomain,dc=com -w mypassword \
          -b dc=mydomain,dc=com -H "ldaps://mydomain.com" "cn=roger*"

    this is what I get:

        ldap_url_parse_ext(ldaps://mydomain.com)
        ldap_create
        ldap_url_parse_ext(ldaps://mydomain.com:636/??base)
        ldap_sasl_bind
        ldap_send_initial_request
        ldap_new_connection 1 1 0
        ldap_int_open_connection
        ldap_connect_to_host: TCP mydomain.com:636
        ldap_new_socket: 3
        ldap_prepare_socket: 3
        ldap_connect_to_host: Trying 127.0.0.1:636
        ldap_pvt_connect: fd: 3 tm: -1 async: 0
        TLS: can't connect: A TLS packet with unexpected length was received..
        ldap_err2string
        ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)

    Now if I check netstat -al I can see:

        tcp        0      0 *:www      *:*     LISTEN
        tcp        0      0 *:ssh      *:*     LISTEN
        tcp        0      0 *:https    *:*     LISTEN
        tcp        0      0 *:ldaps    *:*     LISTEN
        tcp        0      0 *:ldap     *:*     LISTEN

    I'm not sure if this is significant as well ... I suspect it is:

        openssl s_client -connect mydomain.com:636 -showcerts
        CONNECTED(00000003)
        916:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:188:

    I think I've made all my certificates etc. OK, and here are the results of some checks. If I do this:

        certtool -e --infile /etc/ssl/certs/ldap_cacert.pem

    I get "Chain verification output: Verified."

        certtool -e --infile /etc/ssl/certs/mydomain.com_slapd_cert.pem

    gives "certtool: the last certificate is not self signed", but it otherwise seems OK? Where have I gone wrong? Surely getting openldap to run securely on ubuntu should be easy and not require a degree in rocket science! Any ideas?
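    If it helps to take ldapsearch out of the picture, the same bind can be attempted from a few lines of code against the OpenLDAP client library. This is only a sketch - the URI and credentials are the placeholders from the question - and it deliberately turns certificate checking off, so a failure here points at the TLS layer itself (e.g. slapd never loaded the olcTLS* settings) rather than at the certificate chain.

        // Build with: g++ ldapprobe.cpp -lldap -llber
        #include <cstdio>
        #include <cstring>
        #include <ldap.h>

        int main() {
            // Skip server-certificate verification for this test only.
            int reqcert = LDAP_OPT_X_TLS_NEVER;
            ldap_set_option(nullptr, LDAP_OPT_X_TLS_REQUIRE_CERT, &reqcert);

            LDAP* ld = nullptr;
            if (ldap_initialize(&ld, "ldaps://mydomain.com:636") != LDAP_SUCCESS) {
                std::fprintf(stderr, "ldap_initialize failed\n");
                return 1;
            }
            int version = LDAP_VERSION3;
            ldap_set_option(ld, LDAP_OPT_PROTOCOL_VERSION, &version);

            char password[] = "mypassword";  // placeholder from the question
            struct berval cred;
            cred.bv_val = password;
            cred.bv_len = std::strlen(password);

            int rc = ldap_sasl_bind_s(ld, "cn=admin,dc=mydomain,dc=com",
                                      LDAP_SASL_SIMPLE, &cred,
                                      nullptr, nullptr, nullptr);
            std::printf("bind result: %s\n", ldap_err2string(rc));
            ldap_unbind_ext_s(ld, nullptr, nullptr);
            return rc == LDAP_SUCCESS ? 0 : 1;
        }

    If this still reports "Can't contact LDAP server" with verification disabled, the ldif changes most likely never took effect in cn=config.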

    Read the article

  • SSL connection error during handshake on Windows Server 2008 R2

    - by Thomas
    I have a Windows 2008 R2 Server that runs a HTTPS Tunneling service. The software uses a certificate that is provided via the Windows certificate store. The certificate is located in the local computer private certificates. It supports server and client authentication with signing and keyencipherment.

    Cert chain

    The certificate chain looks fine. It's a Thawte SSL123 certificate.

        Thawte Premium Server CA (SHA1)
          [e0 ab 05 94 20 72 54 93 05 60 62 02 36 70 f7 cd 2e fc 66 66]
        thawte Primary Root CA
          [1f a4 90 d1 d4 95 79 42 cd 23 54 5f 6e 82 3d 00 00 79 6e a2]
        Thawte DV SSL CA
          [3c a9 58 f3 e7 d6 83 7e 1c 1a cf 8b 0f 6a 2e 6d 48 7d 67 62]

    Server certificate

    Issues

    Most browsers accept the certificate without any warning. But IE 7 on Windows XP SP3 and Opera 12 on OSX just report a connection error. Opera complains:

        Secure connection: fatal error (552)
        https://www.example.com/
        Opera was not able to connect to the server, because the server does not communicate via any secure protocol known to Opera.

    A connection test using openssl s_client -connect www.example.com:443 -state says:

        CONNECTED(00000003)
        SSL_connect:before/connect initialization
        SSL_connect:SSLv2/v3 write client hello A
        52471:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:/SourceCache/OpenSSL098/OpenSSL098-35.1/src/ssl/s23_lib.c:182:

    ssldump -aAHd host www.example.com during curl https://www.example.com/ reports:

        New TCP connection #1: localhost(53302) <-> www.example.com(443)
        1 1  0.0235 (0.0235)  C>SV3.1(117)  Handshake
              ClientHello
                Version 3.1
                random[32]=
                  50 77 56 29 e8 23 82 3b 7f e0 ae 2d c1 31 cb ac
                  38 01 31 85 4f 91 39 c1 04 32 a6 68 25 cd a0 c1
                cipher suites
                  Unknown value 0x39
                  Unknown value 0x38
                  Unknown value 0x35
                  TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
                  TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
                  TLS_RSA_WITH_3DES_EDE_CBC_SHA
                  Unknown value 0x33
                  Unknown value 0x32
                  Unknown value 0x2f
                  Unknown value 0x9a
                  Unknown value 0x99
                  Unknown value 0x96
                  TLS_RSA_WITH_RC4_128_SHA
                  TLS_RSA_WITH_RC4_128_MD5
                  TLS_DHE_RSA_WITH_DES_CBC_SHA
                  TLS_DHE_DSS_WITH_DES_CBC_SHA
                  TLS_RSA_WITH_DES_CBC_SHA
                  TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
                  TLS_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA
                  TLS_RSA_EXPORT_WITH_DES40_CBC_SHA
                  TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
                  TLS_RSA_EXPORT_WITH_RC4_40_MD5
                  Unknown value 0xff
                compression methods
                  unknown value
                  NULL
        1  0.0479 (0.0243)  S>C  TCP FIN
        1  0.0481 (0.0002)  C>S  TCP FIN

    Thawte provides two Java based SSL checkers: the Legacy Thawte SSL Certificate Installation Checker and the sslToolBox. Both validate the certificate under Windows XP but report connection errors under OSX and Windows 2008 R2.
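    A quick way to test the failing hello outside a browser is to pin an OpenSSL client to roughly the same cipher list the capture shows. A sketch (www.example.com standing in for the real host; the cipher string is an approximation of the old clients, not an exact reproduction of the capture):

        // Build with: g++ probe.cpp -lssl -lcrypto
        #include <cstdio>
        #include <openssl/bio.h>
        #include <openssl/err.h>
        #include <openssl/ssl.h>

        int main() {
            SSL_library_init();
            SSL_load_error_strings();

            SSL_CTX* ctx = SSL_CTX_new(SSLv23_client_method());
            // Offer only an old-client-style suite list (approximation).
            SSL_CTX_set_cipher_list(ctx, "RC4-SHA:RC4-MD5:DES-CBC3-SHA");

            BIO* bio = BIO_new_ssl_connect(ctx);
            BIO_set_conn_hostname(bio, "www.example.com:443");  // placeholder

            SSL* ssl = nullptr;
            BIO_get_ssl(bio, &ssl);

            if (BIO_do_connect(bio) <= 0 || BIO_do_handshake(bio) <= 0)
                ERR_print_errors_fp(stderr);  // shows where the handshake dies
            else
                std::printf("negotiated: %s\n", SSL_get_cipher(ssl));

            BIO_free_all(bio);
            SSL_CTX_free(ctx);
            return 0;
        }

    If the server answers the hello with a bare TCP FIN, as in the ssldump trace, the probe fails with no alert at all, which narrows the problem to the server's protocol/cipher configuration rather than the certificate.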

    Read the article

  • Can't find windows 2000 domain after PDC Change

    - by Mark A Kruger
    This is a Windows 2000 domain issue. I had an old Win2000 PDC that was beginning to fail. So, trying to be pre-emptive, I installed a new BDC, then "demoted" the old PDC and took it off the network. Now it appears that no member server can "find" the domain anymore. No logins work (for services, or RDP, or anything). What I've tried (based on googling):

      - Verified sysvol is shared on all servers.
      - Used nslookup to verify that DCs are being found.
      - netdiag /fix
      - Metadata cleanup routines.
      - Verified no firewall issues (port 389 etc.).
      - Seized all roles to the new PDC (I did that as part of the original promotion).
      - LMHOSTS file and NetBIOS settings.

    At the moment it seems like I can get the DCs returned but cannot contact them. I'm at a loss. My latest attempt was to remove a member server from the domain and try to "re-add" it. When I do that I get this message:

        The query was for the SRV record for _ldap._tcp.dc._msdcs.cfwebtools.com
        The following domain controllers were identified by the query:
        db-dev1.cfwebtools.com
        file-prod1.cfwebtools.com
        cfwt-pdc2.cfwebtools.com
        However no domain controllers could be contacted.

    It then goes on to ask if I've checked my A record and made sure they are running. Is there a way to force this domain to be seen? I also shared sysvol (or double-checked it) and restarted the dfsr service.

    More information: I got looking at sysvol and found it was not shared on two of these servers. Only one of them (db-dev1) has a "good" or at least "populated" sysvol store. So I tried doing a "d2" recovery of my PDC against that good sysvol. But it never syncs - or at least it does not seem to sync. I'm guessing if I could get sysvol and netlogon to kick in and replicate, that would fix my issue. I think these DCs aren't responding because they are waiting for replication, which is broken somehow. Would taking down all the DCs except for db-dev1 fix the issue - at least temporarily? I know I can't just copy the sysvol stuff over to the other two, can I?

    Read the article

  • Parallelism in .NET – Part 12, More on Task Decomposition

    - by Reed
    Many tasks can be decomposed using a Data Decomposition approach, but often, this is not appropriate. Frequently, decomposing the problem into distinctive tasks that must be performed is a more natural abstraction. However, as I mentioned in Part 1, Task Decomposition tends to be a bit more difficult than data decomposition, and can require a bit more effort. Before we begin parallelizing our algorithm based on the tasks being performed, we need to decompose our problem, and take special care of certain considerations such as ordering and grouping of tasks.

    Up to this point in this series, I've focused on parallelization techniques which are most appropriate when a problem space can be decomposed by data. Using PLINQ and the Parallel class, I've shown how problem spaces where there is a collection of data, and each element needs to be processed, can potentially be parallelized. However, there are many other routines where this is not appropriate. Often, instead of working on a collection of data, there is a single piece of data which must be processed using an algorithm or series of algorithms. Here, there is no collection of data, but there may still be opportunities for parallelism. As I mentioned before, in cases like this, the approach is to look at your overall routine, and decompose your problem space based on tasks. The idea here is to look for discrete "tasks," individual pieces of work which can be conceptually thought of as a single operation.

    Let's revisit the example I used in Part 1, an application startup path. Say we want our program, at startup, to do a bunch of individual actions, or "tasks". The following is our list of duties we must perform right at startup:

      - Display a splash screen
      - Request a license from our license manager
      - Check for an update to the software from our web server
      - If an update is available, download it
      - Set up our menu structure based on our current license
      - Open and display our main, welcome window
      - Hide the splash screen

    The first step in Task Decomposition is breaking up the problem space into discrete tasks.

    This, naturally, can be abstracted as seven discrete tasks. In the serial version of our program, the general process is a single chain: each of these seven tasks runs, one after the other. These tasks, obviously, provide some opportunities for parallelism. Before we can parallelize this routine, we need to analyze these tasks, and find any dependencies between tasks. In this case, our dependencies include:

      - The splash screen must be displayed first, and as quickly as possible.
      - We can't download an update before we see whether one exists.
      - Our menu structure depends on our license, so we must check for the license before setting up the menus.
      - Since our welcome screen will notify the user of an update, we can't show it until we've downloaded the update.
      - Since our welcome screen includes menus that are customized based off the licensing, we can't display it until we've received a license.
      - We can't hide the splash until our welcome screen is displayed.

    By listing our dependencies, we start to see the natural ordering that must occur for the tasks to be processed correctly.

    The second step in Task Decomposition is determining the dependencies between tasks, and ordering tasks based on their dependencies.

    Looking at these tasks, and looking at all the dependencies, we quickly see that even a simple decomposition such as this one can get quite complicated.
    In order to simplify the problem of defining the dependencies, it's often a useful practice to group our tasks into larger, discrete tasks. The goal when grouping tasks is that you want to make each task "group" have as few dependencies as possible to other tasks or groups, and then work out the dependencies within that group. Typically, this works best when any external dependency is based on the "last" task within the group when it's ordered, although that is not a firm requirement. This process is often called Grouping Tasks. In our case, we can easily group tasks together, effectively turning this into four discrete task groups:

      1. Show our splash screen - This needs to be left as its own task. Multiple things depend on this task, mainly because we want this to start before any other action, and start as quickly as possible.
      2. Check for an update, and download the update if it exists - These two tasks logically group together. We know we only download an update if the update exists, so that naturally follows. This group has one dependency as an input, and other tasks only rely on the final task within this group.
      3. Request a license, and then set up the menus - Here, we can group these two tasks together. Although we mentioned that our welcome screen depends on the license returned, it also depends on setting up the menu, which is the final task here. Setting up our menus cannot happen until after our license is requested. By grouping these together, we further reduce our problem space.
      4. Display welcome and hide splash - Finally, we can display our welcome window and hide our splash screen. This task group depends on all three previous task groups - it cannot happen until all three of the previous groups have completed.

    By grouping the tasks together, we reduce our problem space, and can naturally see a pattern for how this process can be parallelized: group 1 runs first, groups 2 and 3 run in parallel after it, and group 4 joins the two paths at the end. We can now effectively take these tasks, and run a large portion of this process in parallel, including the portions which may be the most time consuming. We've now created two parallel paths which our process execution can follow, hopefully speeding up the application startup time dramatically. The main point to remember here is that, when decomposing your problem space by tasks, you need to:

      - Define each discrete action as an individual Task
      - Discover dependencies between your tasks
      - Group tasks based on their dependencies
      - Order the tasks and groups of tasks

    A short sketch of this wiring appears below.
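    To make that wiring concrete, here is the same four-group dependency structure sketched in C++ with std::async - chosen purely as a neutral illustration of the fork/join shape; the series itself implements this with the .NET Task classes. The bodies of the functions are hypothetical stand-ins for the real work.

        // Sketch: the four task groups and their dependencies.
        #include <future>
        #include <iostream>
        #include <string>

        void showSplash()             { std::cout << "splash shown\n"; }
        bool checkAndDownloadUpdate() { return false; /* pretend: no update */ }
        std::string requestLicenseAndBuildMenus() { return "standard license"; }
        void showWelcome(bool updated, const std::string& license) {
            std::cout << "welcome (license: " << license
                      << ", update downloaded: " << (updated ? "yes" : "no") << ")\n";
        }
        void hideSplash()             { std::cout << "splash hidden\n"; }

        int main() {
            // Group 1: splash first, as quickly as possible.
            showSplash();

            // Groups 2 and 3 have no dependency on each other, so they
            // form the two parallel paths.
            auto update  = std::async(std::launch::async, checkAndDownloadUpdate);
            auto license = std::async(std::launch::async, requestLicenseAndBuildMenus);

            // Group 4 depends on all previous groups; the .get() calls are
            // the joins where the parallel paths meet again.
            showWelcome(update.get(), license.get());
            hideSplash();
            return 0;
        }

    The same shape falls out of any task library with futures or continuations: the joins sit exactly where the dependency list says they must.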

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20  | Next Page >