Search Results

Search found 7239 results on 290 pages for 'job satisfaction'.


  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come. So, if we start at the start, how did you get started in programming? When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that. How did you learn to program? Was there someone helping you out? Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!” When did you think this might be something that you actually wanted to do as a career? For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it. Do you think you learnt much about programming at university? Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed like TCP. Those are really important for examples for how to approach things. Did you do any internships while you were at university? Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then. Was there just not a culture of looking after your code? 
There was, they just didn’t hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn’t have any computer scientists, which is a bit odd in a programming company. But even beyond that they didn’t have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn’t the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn’t be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don’t know about, but they shouldn’t consistently be the ones who know the most. It’s not a good environment to learn. I was going to ask how you found working with people who were more experienced than you… When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I’ve been coding since I was tiny. At university there were people who were cleverer than me, but there weren’t very many who were more experienced programmers than me. During my internship, I didn’t find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate’s not so big on the actual code review, at least it wasn’t when I started. We did an entire product release and then somebody looked over all of the UI of that product which I’d written and say what they didn’t like. By that point, it was way too late and I’d disagree with them. Do you think the lack of code reviews was a bad thing? I think if there’s going to be any oversight of new people, then it should be continuous rather than chunky. For me I don’t mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I’d be keener on oversight and then the right way to do it is continuously and in very, very small chunks. Have you had any significant projects you’ve worked on outside of a job? When I was a teenager I wrote all sorts of stuff. I used to write games, I derived how to do isomorphic projections myself once. I didn’t know what the word was so I couldn’t Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week. Could you explain a little more about NAct? It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don’t need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other’s toes. You could lock, but locks are really dangerous if you’re using more than one of them. You get interactions like deadlocks, and that’s just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it. 
If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors, NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like. Do you think that this is suitable for the majority of parallel programming, or do you think it’s only suitable for specific cases? It’s suitable for most difficult parallel programming. If you’ve just got a hundred web requests which are all independent of each other, then I wouldn’t bother because it’s easier to just spin them up in separate threads and they can proceed independently of each other. But where you’ve got difficult parallel programming, where you’ve got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about. When you’re using actors, you presumably still have to write your code in a different way from you would otherwise using single-threaded code. You can’t use actors with any methods that have return types, because you’re not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you’ve got to use tasks so that you can use “async” and “await” to await asynchronously for it. But other than that, you can still stick things in classes so it’s not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn’t much of a jump. If I’ve already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop? Usually pretty easy. If you can identify even one boundary where things look like messages and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult. If we do get 1000-core desktop PCs, do you think actors will scale up? It’s hard. There are always in the order of twenty to fifty actors in my whole program because I tend to write each component as actors, and I tend to have one instance of each component. So this won’t scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it’s like a distributed hash table but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn’t be too big a leap to start doing proper parallelism. Do you think it helps if actors get baked into the language, similarly to Erlang? Erlang is excellent in that it has thread-local garbage collection. 
C# doesn’t, so there’s a limit to how well C# actors can possibly scale because there’s a single garbage collected heap shared between all of them. When you do a global garbage collection, you’ve got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they’re insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it’s quite well-suited to support actors, even if it’s not baked into the language. Speaking of language design, do you have a favourite programming language? I’ll choose a language which I’ve never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don’t have to writing tests so much. When you say it doesn’t have some of the niggles? C# doesn’t allow the use of a property as a method group. It doesn’t have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you’ve checked all the cases, which is really useful in functional-style programming. Pattern-matching, in other words. That’s actually the major niggle. C# is pretty good, and I’m quite happy with C#. And what about going even further with the type system to remove the need for tests to something like Haskell? Or is that a step too far? I’m quite a pragmatist, I don’t think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don’t know anyone who can teach me, and the Internet won’t teach me. That’s the main reason I wouldn’t use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn’t instigate it. What about things in C#? For instance, there’s contracts in C#, so you can try to statically verify a bit more about your code. Do you think that’s useful, or just not worthwhile? I’ve not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn’t seem to have ended up true for C# contracts. I don’t think anyone who’s tried them thinks they’re any good. I might be wrong. On a slightly different note, how do you like to debug code? I think I’m quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I’ve been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I’m scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there’s a bug and I can’t work out why it’s happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I’m very stupid about how I debug. I won’t make any guesses, I won’t use any intuition, I will only identify the experiment that’s going to binary chop most effectively and repeat rather than trying to guess anything. I suppose it’s quite top-down. Is most of the time then spent in the debugger? 
Absolutely, if at all possible I will never debug using print statements or logs. I don’t really hold much stock in outputting logs. If there’s any bug which can be reproduced locally, I’d rather do it in the debugger than outputting logs. And with SmartAssembly error reporting, there’s not a lot that can’t be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I’ll use logs. But I hate using logs. You stare at the log, trying to guess what’s going on, and that’s exactly what I don’t like doing. You have to just look at it and see does this look right or wrong. We’ve covered how you get to grip with bugs. How do you get to grips with an entire codebase? I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn’t too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code’s really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim so that I get a little achievement after every few hours of debugging. Once I’ve learnt the codebase I might be able to fix all the bugs in an hour, but I’d rather be using them as an aim while I’m learning the codebase. If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand? Keep distinct concepts in different places. And name your stuff so that it’s obvious which concepts live there. You shouldn’t have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I’m talking from a very much SmartAssembly point of view because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to more or less be understandable in the one place where it is. And what about tests? Do you think they help at all? I’ve never had the opportunity to learn a codebase which has had tests, I don’t know what it’s like! What about when you’re actually developing? How useful do you find tests in finding bugs or regressions? Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn’t happen very often that a test finds a bug in the first place. I don’t really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like “This code works at the moment, great, ship it! Ah, there’s a way that this code doesn’t work. 
Okay, write a test, demonstrate that it doesn’t work, fix it, use the test to demonstrate that it’s now fixed, and keep the test for future regressions.” The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you’ve fixed them, are way more likely to appear again than new bugs are. Does that mean that when you write your code the first time, there are no tests? Often. The chance of there being a bug in a new feature is relatively unaffected by whether I’ve written a test for that new feature because I’m not good enough at writing tests to think of bugs that I would have written into the code. So not writing regression tests for all of your code hasn’t affected you too badly? There are different kinds of features. Some of them just always work, and are just not flaky, they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it’s a waste of time I’ll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it’s going to be flaky code as you’re writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I’ll think “this is going to be flaky” as I’m writing, and then maybe do a test, but not most of the time. How do you think your programming style has changed over time? I’ve got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I’ve got more disciplined about encapsulation, I think. There are operational things like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote in Red Gate was the memory profiler UI, and that was an actor, I just didn’t know the name of it at the time. I don’t really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That’s definitely a pattern that I’ve seen recently. I think maybe I’m doing functional programming. Possibly. It’s plausible. If you had to summarise the essence of programming in a pithy sentence, how would you do it? Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from. So you think it’s an art rather than a science? It’s a little bit of engineering, a smidgeon of maths, but it’s not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it’s mostly art. 
You can get away with doing architecture and programming entirely by having a good engineering mind, but you’re not going to produce anything nice. You’re not going to have joy doing it if you’re an engineering mind. Architects who are just engineering minds are not going to enjoy their job. I suppose engineering is the foundation on which you build the art. Exactly. How do you think programming is going to change over the next ten years? There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript’s unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it’s dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them. What about the idea of compiling other languages, possibly statically-typed, to JavaScript? It’s a reasonable idea. I would like to do it, but I don’t think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always to be quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary and beginners don’t write big code. And having your errors come up in the same place, whether they’re statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I’d teach them JavaScript. If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in? The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience rather than actually telling me my compile errors. And only once you’re experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference, does static typing make a blind bit of difference. So it’s not really about size of codebase. If I go and write up a tiny program, I’m still going to get value out of writing it in C# using ReSharper because I’m experienced with C# and ReSharper enough to be able to write code five times faster if I have that help. Any other visions of the future? Nobody’s going to use actors. Because everyone’s going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors. More Red Gater Coder interviews
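    The message-passing idea described above can be reduced to a very small amount of C#. The sketch below is not NAct — just an illustration, with invented names, of one thread owning a piece of data while other threads post it messages and await replies:

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Threading;
        using System.Threading.Tasks;

        // Illustration only (not the NAct API): a hand-rolled actor whose single
        // mailbox thread is the only code allowed to touch the dictionary it owns.
        class CounterActor
        {
            private readonly BlockingCollection<Action> _mailbox = new BlockingCollection<Action>();
            private readonly Dictionary<string, int> _counts = new Dictionary<string, int>();

            public CounterActor()
            {
                var worker = new Thread(() =>
                {
                    foreach (var message in _mailbox.GetConsumingEnumerable())
                        message();
                });
                worker.IsBackground = true;
                worker.Start();
            }

            // Fire-and-forget message: the caller never blocks.
            public void Increment(string key)
            {
                _mailbox.Add(() =>
                {
                    int n;
                    _counts.TryGetValue(key, out n);
                    _counts[key] = n + 1;
                });
            }

            // Request/reply message: the reply comes back as a Task, so callers can
            // "await actor.GetCount(...)" instead of blocking -- the async/await point made above.
            public Task<int> GetCount(string key)
            {
                var reply = new TaskCompletionSource<int>();
                _mailbox.Add(() =>
                {
                    int n;
                    _counts.TryGetValue(key, out n);
                    reply.SetResult(n);
                });
                return reply.Task;
            }
        }

    Because every read and write of _counts happens on the mailbox thread, no locks are needed; callers on any other thread interact with the data purely by sending messages.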


  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature that we call guidance that is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise. We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works. The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the arc case: PSARC 2010/312 Link-editor guidance The History Of Unfortunate Link-Editor Defaults The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system. Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Extensible Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading, and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: For backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort. 
We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines: Change ld Defaults Since the world would be a better place the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting. New Link-Editor One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of exiting code in the world built around the existing ld command, that reaches back to the 1970's. ld use is embedded in large and unknown numbers of makefiles, and is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues. An Option To Make ld Do The Right Things Automatically This line of reasoning is best summarized by a CR filed in 2005, entitled 6239804 make it easier for ld(1) to do what's best The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction: The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best. The intent is that when a user opts into -z best, that they understand that z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change. 
When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed. The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best, that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it. I drew two conclusions from the above history: For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces. For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together. The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls: -z guidance gives advice, but the decision to take that advice remains with the user who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience. It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject. The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system. Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: The user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards. ld Command Line Options The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature: In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement. 
Guidance suggestions always offers specific advice, and not vague generalizations. You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed. As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that let's you turn it off. In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows: % ld -Dargs -z guidance=foo,nodefs debug: debug: Solaris Linkers: 5.11-1.2275 debug: debug: arg[1] option=-D: option-argument: args debug: arg[2] option=-z: option-argument: guidance=foo,nodefs debug: warning: unrecognized -z guidance item: foo The -z fatal-warning option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms: -z fatal-warnings | nofatal-warnings --fatal-warnings | --no-fatal-warnings The -z fatal-warnings and the --fatal-warnings option cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings option cause the link-editor to treat warnings as non-fatal. This is the default behavior. The -z guidance option is defined as follows: -z guidance[=item1,item2,...] Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices. It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the class of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris. The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows. Specify Required Dependencies Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs. Do Not Specify Non-Required Dependencies Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused. Lazy Loading Dependencies should be identified for lazy loading. 
Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload. Direct Bindings Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct, or -z direct options should any dependency be processed before either of these options, or the -z nodirect option is encountered. This guidance can be disabled with -z guidance=nodirect. Pure Text Segment Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn or -z textoff options are encountered. This guidance can be disabled with -z guidance=notext. Mapfile Syntax All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile. Library Search Path Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath. In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects. Example The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings: Does not specify all it's dependencies Specifies dependencies it does not use Does not use direct bindings Uses a version 1 mapfile Contains relocations to the readonly allocable text (not PIC) This scenario is sadly very common — many shared objects have one or more of these issues. % cat hello.c #include <stdio.h> #include <unistd.h> void hello(void) { printf("hello user %d\n", getpid()); } % cat mapfile.v1 # This version 1 mapfile will trigger a guidance message % cc hello.c -o hello.so -G -M mapfile.v1 -lelf As you can see, the operation completes without error, resulting in a usable object. 
However, turning on guidance reveals a number of things that could be better: % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance ld: guidance: version 2 mapfile syntax recommended: mapfile.v1 ld: guidance: -z lazyload option recommended before first dependency ld: guidance: -B direct or -z direct option recommended before first dependency Undefined first referenced symbol in file getpid hello.o (symbol belongs to implicit dependency /lib/libc.so.1) printf hello.o (symbol belongs to implicit dependency /lib/libc.so.1) ld: warning: symbol referencing errors ld: guidance: -z defs option recommended for shared objects ld: guidance: removal of unused dependency recommended: libelf.so.1 warning: Text relocation remains referenced against symbol offset in file .rodata1 (section) 0xa hello.o getpid 0x4 hello.o printf 0xf hello.o ld: guidance: position independent (PIC) code recommended for shared objects ld: guidance: see ld(1) -z guidance for more information Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things: % cat mapfile.v2 # This version 2 mapfile will not trigger a guidance message $mapfile_version 2 % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance There are situations in which the guidance does not fit the object being built. For instance, you want to build an object without direct bindings: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance ld: guidance: -B direct or -z direct option recommended before first dependency ld: guidance: see ld(1) -z guidance for more information It is easy to disable that specific guidance warning without losing the overall benefit from allowing the remainder of the guidance feature to operate: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect Conclusions The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact. Deciding to use guidance is likely to cause some up front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will replay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.


  • Excel 2007 - "The macro may not be available in this workbook" Error

    - by Psycho Bob
    We use an Excel sheet that has been protected to prevent modification by end users. All in all, they are only able to edit certain tabs to add information that will then be used to generate information on other tabs using equations and such. On the tab with the equations, a button is present called "Prep for Internal Hard Copy Print." This button runs a macro that selects the information on the tab, unprotects it, then sends a print job to the user's default printer that contains the unprotected content. Normally this works like a champ. This time around, however, the macro is throwing the following error: Cannot run the macro "FILENAME.xlsx'!MacroName'. The macro may not be available in this workbook or all macros may be disabled. As far as I can tell, the macros are still present within the workbook. This sheet is normally a .xlsm, though the user saved it with a different filename as a .xlsx. Also, the macros appear only as MacroName in the .xlsm file and not as "FILENAME.xlsx'!MacroName' as they do in the .xlsx. Finally, when I open the .xlsm it asks if I want to enable the macro content, while the .xlsx does not prompt for this. Can anyone tell me what's going on with this sheet, or know of a way that I can get the macros working in the .xlsx without having to start over with a different sheet?


  • export data from WCF Service to excel

    - by Dave
    I need to provide an export-to-Excel feature for a large amount of data returned from a WCF web service. The code to load the DataList is as below:

        List<resultSet> r = myObject.ReturnResultSet(myWebRequestUrl); // call to WCF service
        myDataList.DataSource = r;
        myDataList.DataBind();

    I am using the Response object to do the job:

        Response.Clear();
        Response.Buffer = true;
        Response.ContentType = "application/vnd.ms-excel";
        Response.AddHeader("Content-Disposition", "attachment; filename=MyExcel.xls");
        StringBuilder sb = new StringBuilder();
        StringWriter sw = new StringWriter(sb);
        HtmlTextWriter tw = new HtmlTextWriter(sw);
        myDataList.RenderControl(tw);
        Response.Write(sb.ToString());
        Response.End();

    The problem is that the WCF service times out for a large amount of data (about 5000 rows) and the result set is null. When I debug the service, I can see the window for saving/opening the Excel sheet appear before the service returns the result, and hence the Excel sheet is always empty. Please help me figure this out.
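    One common way around the timeout is to pull the data from the service in smaller pages and write each page out as it arrives, instead of asking for all 5000 rows in one call. The sketch below assumes a hypothetical paged operation ReturnResultSetPage(url, pageIndex, pageSize) is added to the service contract (it does not exist in the code above) and renders plain table rows rather than a DataList:

        // Sketch only: ReturnResultSetPage is an assumed, paged version of the service call.
        Response.Clear();
        Response.Buffer = false;                      // stream rows as they arrive instead of buffering everything
        Response.ContentType = "application/vnd.ms-excel";
        Response.AddHeader("Content-Disposition", "attachment; filename=MyExcel.xls");
        Response.Write("<table>");

        const int pageSize = 500;
        int pageIndex = 0;
        while (true)
        {
            List<resultSet> page = myObject.ReturnResultSetPage(myWebRequestUrl, pageIndex, pageSize); // hypothetical
            if (page == null || page.Count == 0)
                break;

            foreach (resultSet row in page)
            {
                // Render whichever columns resultSet actually exposes.
                Response.Write("<tr><td>" + Server.HtmlEncode(row.ToString()) + "</td></tr>");
            }
            pageIndex++;
        }

        Response.Write("</table>");
        Response.End();

    The other half of the usual fix is raising sendTimeout/receiveTimeout (and, if needed, maxReceivedMessageSize) on the client binding in web.config so that a single page has time to come back.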


  • View all ntext column text in SQL Server Management Studio for SQL CE database

    - by Dave
    I often want to do a "quick check" of the value of a large text column in SQL Server Management Studio (SSMS). The maximum number of characters that SSMS will let you view, in grid results mode, is 65535. (It is even less in text results mode.) Sometimes I need to see something beyond that range. Using SQL Server 2005 databases, I often used the trick of converting it to XML, because SSMS lets you view much larger amounts of text that way: SELECT CONVERT(xml, MyCol) FROM MyTable WHERE ... But now I am using SQL CE, and there is no Xml data type. There is still a "Maximum Characters Retreived XML" value under Options; I suppose this is useful when connecting to other data sources. I know I can just get the full value by running a little console app or something, but is there a way within SSMS to see the entire ntext column value? [Edit] OK, this didn't get much attention the first time around (18 views?!). It's not a huge concern, but maybe I'm just obsessed with it. There has to be some good way around this, doesn't there? So a modest bounty is active. What I am willing to accept as answers, in order from best-to-worst: A solution that works just as easy as the XML trick in SQL CE. That is, a single function (convert, cast, etc.) that does the job. A not-too-invasive way to hack SSMS to get it to display more text in the results. An equivalent SQL query (perhaps something that creatively uses SUBSTRING and generates multiple ad-hoc columns??) to see the results. The solution should work with nvarchar and ntext columns of any length in SQL CE from SSMS. Any ideas?
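    For the "little console app" route mentioned above, a few lines against the SQL CE provider (referencing System.Data.SqlServerCe.dll) are enough to dump the full column value to a file. This is only a sketch with made-up file, table, column, and key names:

        using System;
        using System.Data.SqlServerCe;
        using System.IO;

        class DumpNtext
        {
            static void Main()
            {
                // Adjust the .sdf path, table, column, and WHERE clause to your database.
                using (var conn = new SqlCeConnection("Data Source=MyDatabase.sdf"))
                using (var cmd = new SqlCeCommand("SELECT MyCol FROM MyTable WHERE Id = 42", conn))
                {
                    conn.Open();
                    string value = Convert.ToString(cmd.ExecuteScalar()); // full ntext value, no 65535 cap
                    File.WriteAllText("MyCol.txt", value);
                    Console.WriteLine("Wrote {0} characters.", value.Length);
                }
            }
        }

    It does not answer the "within SSMS" part of the question, but it at least gives a one-file tool for the quick check.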


  • Most useful free .NET libraries?

    - by Binoj Antony
    I have used a lot of free .NET libraries, some from Microsoft itself! Which ones have you found the most useful?

        Dependency Injection / Inversion of Control: Unity Framework (Microsoft), StructureMap (Jeremy Miller), Castle Windsor, NInject, Spring Framework, Autofac, Managed Extensibility Framework
        Logging: Logging Application Block (Microsoft), Log4Net (Apache), Error Logging Modules and Handlers (ELMAH), NLog
        Compression: SharpZipLib, DotNetZip, YUI Compressor (CSS and JS compression/minification), AjaxMinifier (in other downloads; JS compression, also includes an MSBuild task)
        Ajax: Ajax Control Toolkit (Microsoft), AJAXNet Pro
        Data Mapper: XmlDataMapper, AutoMapper
        ORM: NHibernate, Castle ActiveRecord, Subsonic, XmlDataMapper
        Charting/Graphics: Microsoft Chart Controls for ASP.NET 3.5 SP1, Microsoft Chart Controls for Winforms, ZedGraph Charting, NPlot (charting for ASP.NET and WinForms)
        PDF Creators/Generators: PDFsharp, iTextSharp
        Unit Testing/Mocking: NUnit, Rhino Mocks, Moq, TypeMock.Net, xUnit.net, mbUnit, Machine.Specifications
        Automated Web Testing: Selenium, Watin
        URL Rewriting: url rewriter, UrlRewriting.Net, Url Rewriter and Reverse Proxy (Managed Fusion)
        Controls: Krypton (free winform controls), Source Grid (a grid control), Devexpress (free controls)
        Unclassified: CSLA Framework (business objects framework), AForge.net (AI, computer vision, genetic algorithms, machine learning), Enterprise Library 4.1 (logging, exception management, validation, policy injection), File helpers library, C5 Collections (collections for .NET), Quartz.NET (enterprise job scheduler for the .NET platform), MiscUtil (utilities by Jon Skeet), Lucene.net (text indexing and searching), Json.NET (Linq over JSON), Flee (expression evaluator), PostSharp (AOP), IKVM (brings the extensive world of Java libraries to .NET)

    Title of the question taken from here. [EDIT] Please provide links to these free libraries as well. Once we have a huge list of this, it can be arranged in categories! Please do not mention .NET Applications/EXEs here.


  • Managing a file-based public maven repository

    - by Roland Ewald
    I am looking for an easy way to manage a public file-based Maven repository. While we are using the open-source version of Artifactory internally, we now want to put a file-based repository of our published artifacts (and their dependencies) on a separate machine that is publicly available. There are several ways how to do this, but none of them seems ideal: Use Maven Dependency plugin: if it is configured correctly and executed with the goal dependency:copy-dependencies for the release-module of our project, it creates a local repository structure that is fine, but this structure does not contain the meta-data.xml files, nor the hash-sums. Use Artifactory to export repo: AFAIK Artifactory only allows to export a repository as a whole. This would include the non-published modules from our project (which would then need to be deleted manually). Also, all dependencies are sitting in another repository, so this needs to be done twice, and many dependencies are not even required by a published artifact (only by artifacts that are still for internal use only). Nevertheless, this method would also include the meta-data.xml files and the hash-sums for all files. To set up an initial version of the repository, I used a mixture of both methods: I first created the Maven repository for all required dependencies via dependency:copy-dependencies and then wrote a script to cherry-pick the meta-data.xml files (etc.) from Artifactory. This is terribly cumbersome, isn't there a better way to solve this? Maybe there is another Maven 3 - plugin that I am unaware of, or some other command-line tool that does the job? I basically just need a simple way to create a Maven repository that contains all artifacts a given artifact depends on (and no more), and also contains all meta-data expected in a remote repository. Any ideas?


  • Partial template specialization of free functions - best practices

    - by Poita_
    As most C++ programmers should know, partial template specialization of free functions is disallowed. For example, the following is illegal C++:

        template <class T, int N>
        T mul(const T& x)
        {
            return x * N;
        }

        template <class T>
        T mul<T, 0>(const T& x)
        {
            return T(0);
        } // error: function template partial specialization 'mul<T, 0>' is not allowed

    However, partial template specialization of classes/structs is allowed, and can be exploited to mimic the functionality of partial template specialization of free functions. For example, the target objective in the last example can be achieved by using:

        template <class T, int N>
        struct mul_impl
        {
            static T fun(const T& x) { return x * N; }
        };

        template <class T>
        struct mul_impl<T, 0>
        {
            static T fun(const T& x) { return T(0); }
        };

        template <class T, int N>
        T mul(const T& x)
        {
            return mul_impl<T, N>::fun(x);
        }

    It's more bulky and less concise, but it gets the job done -- and as far as users of mul are concerned, they get the desired partial specialization. My question is: when writing templated free functions (that are intended to be used by others), should you automatically delegate the implementation to a static member function of a class, so that users of your library may implement partial specializations at will, or do you just write the templated function the normal way, and live with the fact that people won't be able to specialize them?


  • C# update GUI continuously from backgroundworker.

    - by Qrew
    I have created a GUI (WinForms) and added a BackgroundWorker to run in a separate thread. The BackgroundWorker needs to update 2 labels continuously. The BackgroundWorker thread should start with the button1 click and run forever.

        class EcuData
        {
            public int RPM { get; set; }
            public int MAP { get; set; }
        }

        private void button1_Click(object sender, EventArgs e)
        {
            EcuData data = new EcuData { RPM = 0, MAP = 0 };
            BackWorker1.RunWorkerAsync(data);
        }

        private void BackWorker1_DoWork(object sender, DoWorkEventArgs e)
        {
            EcuData argumentData = e.Argument as EcuData;
            int x = 0;
            while (x <= 10)
            {
                //
                // Code for reading in data from hardware.
                //
                argumentData.RPM = x;     // x is for testing only!
                argumentData.MAP = x * 2; // x is for testing only!
                e.Result = argumentData;
                Thread.Sleep(100);
                x++;
            }
        }

        private void BackWorker1_RunWorkerCompleted_1(object sender, RunWorkerCompletedEventArgs e)
        {
            EcuData data = e.Result as EcuData;
            label1.Text = data.RPM.ToString();
            label2.Text = data.MAP.ToString();
        }

    The above code only updates the GUI when the BackgroundWorker is done with its job, and that's not what I'm looking for.
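    BackgroundWorker already has a mechanism for exactly this: enable WorkerReportsProgress, call ReportProgress from DoWork, and update the labels in the ProgressChanged handler, which runs on the UI thread. A minimal sketch based on the code above (it assumes BackWorker1's ProgressChanged event is wired to the handler below, e.g. in the designer or the form constructor):

        private void button1_Click(object sender, EventArgs e)
        {
            BackWorker1.WorkerReportsProgress = true;          // can also be set in the designer
            BackWorker1.WorkerSupportsCancellation = true;
            BackWorker1.RunWorkerAsync();
        }

        private void BackWorker1_DoWork(object sender, DoWorkEventArgs e)
        {
            var worker = (BackgroundWorker)sender;
            int x = 0;
            while (!worker.CancellationPending)                // runs "forever", until cancelled
            {
                //
                // Code for reading in data from hardware goes here.
                //
                x++;
                // Push a snapshot to the UI thread; the int argument is a progress
                // percentage we don't need, so 0 is fine.
                worker.ReportProgress(0, new EcuData { RPM = x, MAP = x * 2 });
                Thread.Sleep(100);
            }
        }

        private void BackWorker1_ProgressChanged(object sender, ProgressChangedEventArgs e)
        {
            // Runs on the UI thread, so updating the labels here is safe.
            var data = (EcuData)e.UserState;
            label1.Text = data.RPM.ToString();
            label2.Text = data.MAP.ToString();
        }

    Calling BackWorker1.CancelAsync() in the form's FormClosing handler stops the loop cleanly when the application exits.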


  • LINQ - IEnumerable.Join on Anonymous Result Set in VB.NET

    - by user337501
    I've long since built a way around this, but it still keeps bugging me... it doesn't help that my grasp of dynamic LINQ queries is still shaky. For the example:

        Parent has fields (ParentKey, ParentField)
        Child has fields (ChildKey, ParentKey, ChildField)
        Pet has fields (PetKey, ChildKey, PetField)

    Child has a foreign key reference to Parent on Child.ParentKey = Parent.ParentKey. Pet has a foreign key reference to Child on Pet.ChildKey = Child.ChildKey. Simple enough, eh? Let's say I have LINQ like this...

        Dim Q = From p In DataContext.Parent _
                Join c In DataContext.Child On c.ParentKey Equals p.ParentKey

    Consider this a "base query" on which I will perform other filtering actions. Now I want to join the Pet table like this:

        Q = Q.Join(DataContext.Pet, _
                   Function(a) a.c.ChildKey, _
                   Function(p As Pet) p.ChildKey, _
                   Function(a, p As Pet) p.ChildKey = a.c.ChildKey)

    The above Join call doesn't work. I sort of understand why it doesn't work, but hopefully it'll show you how I tried to accomplish this task. After all this was done I would have appended a Select to finish the job. Any ideas on a better way to do this? I tried it with the PredicateBuilder with little success. I might not know how to use it right, but it felt like it wasn't gonna handle the joining.
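    For what it's worth, the last argument to that Join overload is a result selector, not a join condition: the two key-selector lambdas before it do the matching, and the final lambda should build the combined element. Expressed in C# method syntax (the VB.NET overload has the same shape; names mirror the question):

        // Q is the "base query" whose elements carry the range variables p (Parent) and c (Child).
        var joined = Q.Join(
            DataContext.Pet,
            a => a.c.ChildKey,                        // key selector for the outer (Parent + Child) element
            pet => pet.ChildKey,                      // key selector for Pet
            (a, pet) => new { a.p, a.c, Pet = pet }); // result selector: combine the elements, don't compare keys

    In VB that last lambda would be something like Function(a, pet) New With {.p = a.p, .c = a.c, .Pet = pet}, after which a Select can project whichever fields are needed.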


  • Handle exceptions with WPF and MVVM

    - by Jon Cahill
    I am attempting to build an application using WPF and the MVVM pattern. I have my Views being populated from my ViewModel purely through databinding. I want to have a central place to handle all exceptions which occur in my application so I can notify the user and log the error appropriately. I know about Dispatcher.UnhandledException, but this does not do the job, as exceptions that occur during databinding are only logged to the Output window. Because my View is databound to my ViewModel, the entire application is pretty much controlled via databinding, so I have no way to log my errors. Is there a way to generically handle the exceptions raised during databinding, without having to put try blocks around all my ViewModel public members? Example View:

        <Window x:Class="Test.TestView"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="TestView" Height="600" Width="800"
                WindowStartupLocation="CenterScreen">
            <Window.Resources>
                <BooleanToVisibilityConverter x:Key="BooleanToVisibilityConverter" />
            </Window.Resources>
            <StackPanel VerticalAlignment="Center">
                <Label Visibility="{Binding DisplayLabel, Converter={StaticResource BooleanToVisibilityConverter}}">My Label</Label>
            </StackPanel>
        </Window>

    The ViewModel:

        public class TestViewModel
        {
            public bool DisplayLabel
            {
                get { throw new NotImplementedException(); }
            }
        }

    It is an internal application, so I do not want to use WER (Windows Error Reporting) as I have seen previously recommended.
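    One approach often used for exactly this problem: WPF routes binding failures (including exceptions thrown inside bound property getters) through PresentationTraceSources.DataBindingSource, so attaching a custom TraceListener gives a single central hook for logging or notifying the user. A minimal sketch (placed in App.xaml.cs here; the MessageBox is just a placeholder for real logging):

        using System.Diagnostics;
        using System.Windows;

        public partial class App : Application
        {
            protected override void OnStartup(StartupEventArgs e)
            {
                base.OnStartup(e);

                // Route WPF data-binding errors (normally only written to the Output
                // window) through our own listener.
                PresentationTraceSources.Refresh();
                PresentationTraceSources.DataBindingSource.Listeners.Add(new BindingErrorListener());
                PresentationTraceSources.DataBindingSource.Switch.Level = SourceLevels.Error;
            }
        }

        public class BindingErrorListener : TraceListener
        {
            public override void Write(string message) { }

            public override void WriteLine(string message)
            {
                // Replace with logging / user notification of choice.
                MessageBox.Show("Binding error: " + message);
            }
        }

    Note that this surfaces what WPF reports as binding errors; it does not turn them back into catchable exceptions, so it complements rather than replaces Dispatcher.UnhandledException.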

    Read the article

  • MVP, WinForms - how to avoid bloated view, presenter and presentation model

    - by MatteS
    When implementing the MVP pattern in WinForms, I often find bloated view interfaces with too many properties, setters and getters. An easy example would be a view with 3 buttons and 7 textboxes, all having value, enabled and visible properties exposed from the view. Add validation results to this, and you could easily end up with an interface with 40-ish properties. Using the Presentation Model, there'll be a model with the same number of properties as well. How do you easily sync the view and the presentation model without having bloated presenter logic that passes all the values back and forth? (With that 80-ish lines of presenter code, imagine what the presenter test that mocks the model and view will look like... 160-ish lines of code just to mock that transfer.) Is there any framework to handle this without resorting to WinForms databinding? (You might want to use different views than a WinForms view. According to some, this sync should be the presenter's job.) Would you use AutoMapper? Maybe I'm asking the wrong questions, but it seems to me MVP easily gets bloated without some good solution here...
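
    Since AutoMapper comes up in the question, here is a rough sketch of the kind of helper it replaces — a reflection-based copy of identically named, assignable properties between a presentation model and a view. Nothing below is specific to the question's code; the types involved are whatever the view and model happen to be.

        using System.Linq;
        using System.Reflection;

        public static class PropertySync
        {
            // Copies every readable source property to a writable target
            // property with the same name and a compatible type.
            public static void Copy(object source, object target)
            {
                var targetProps = target.GetType()
                                        .GetProperties()
                                        .Where(p => p.CanWrite)
                                        .ToDictionary(p => p.Name);

                foreach (var sourceProp in source.GetType().GetProperties().Where(p => p.CanRead))
                {
                    PropertyInfo targetProp;
                    if (targetProps.TryGetValue(sourceProp.Name, out targetProp) &&
                        targetProp.PropertyType.IsAssignableFrom(sourceProp.PropertyType))
                    {
                        targetProp.SetValue(target, sourceProp.GetValue(source, null), null);
                    }
                }
            }
        }

    The presenter then shrinks to PropertySync.Copy(model, view) and PropertySync.Copy(view, model) at the appropriate points, at the cost of losing compile-time checking of the mapping.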

    Read the article

  • want to add url links to .csv datafeed using python

    - by abs
    Hi all, I've looked through the current related questions but have not managed to find anything similar to my needs. I'm in the process of creating an affiliate store using Zen Cart - now, one of the issues is that Zen Cart is not designed for redirects and affiliate stores, but it can be done. I will be changing the store so it acts like a showcase store showing prices. There is a mod called Easy Populate which allows me to upload datafeeds. This is all well and good, however my affiliate link will not be in each product. I can do it manually after uploading the data feed, going to each product and then adding it as an image with a redirect link - however, when there are over 500 items it's going to be a long, repetitive and time-consuming job. I have been told that I can add the links to the data feed before uploading it to Zen Cart, and that this should be done using Python. I've been reading about Python for several days now and feel I'm looking for the wrong things. I was wondering if someone could please advise the simplest way for me to get this done. I hope the question makes sense. Thanks, abs

    Read the article

  • jQuery DataTables: Problems with POST Server Side JSON output

    - by Tim
    Hello everyone, I am trying to get my DataTable to take a POST JSON output from my server. This is my client-side code:

        <script>
        $(document).ready(function() {
            $('#example').dataTable( {
                "bProcessing": true,
                "bServerSide": true,
                "sAjaxSource": "http://localhost/staff/jobs/my_jobs",
                "fnServerData": function ( sSource, aoData, fnCallback ) {
                    $.ajax( {
                        "dataType": 'json',
                        "type": "POST",
                        "url": sSource,
                        "data": aoData,
                        "success": fnCallback
                    } );
                }
            } );
        } );
        </script>

    Now, I have copied and pasted the server-side code found in the DataTables examples found here. When I change my sAjaxSource to view this page, the table doesn't move beyond 'Processing'. When I view the JSON directly I see this output:

        {"sEcho": 1, "iTotalRecords": 1, "iTotalDisplayRecords": 1, "aaData": [ ["Trident","First Ever Job"]] }

    Just for fun, I went to the POST server-side example, copied some of the JSON they are using for their example and just echoed it straight out of another page with PHP. This is the output of that page:

        {"sEcho": 1, "iTotalRecords": 1, "iTotalDisplayRecords": 1, "aaData": [ ["Trident","Internet Explorer 4.0"]] }

    Here is where it gets interesting. The JSON that has been processed by the server fails to work, yet the JSON simply echoed by the same server on a different page does work... and both are almost identical in output. I hope someone can shed some light on this because, as the tree said to the lumberjack... I'm stumped. Thanks, Tim

    Read the article

  • SQL Server Express 2005 Merge Replication using RMO causes Null Reference exception

    - by Craig Shearer
    I'm trying to use RMO to programmatically perform merge synchronization. I've basically copied the SQL Server example code, as follows:

        // Create a connection to the Subscriber.
        ServerConnection conn = new ServerConnection(subscriberName);
        MergePullSubscription subscription;

        try
        {
            // Connect to the Subscriber.
            conn.Connect();

            // Define the pull subscription.
            subscription = new MergePullSubscription(subscriptionDbName, publisherName,
                publicationDbName, publicationName, conn, false);

            // If the pull subscription exists, then start the synchronization.
            if (subscription.LoadProperties())
            {
                // Check that we have enough metadata to start the agent.
                if (subscription.PublisherSecurity != null || subscription.DistributorSecurity != null)
                {
                    subscription.SynchronizationAgent.Synchronize();
                }
                else
                {
                    throw new ApplicationException("There is insufficent metadata to " +
                        "synchronize the subscription. Recreate the subscription with " +
                        "the agent job or supply the required agent properties at run time.");
                }
            }
            else
            {
                // Do something here if the pull subscription does not exist.
                throw new ApplicationException(String.Format(
                    "A subscription to '{0}' does not exist on {1}",
                    publicationName, subscriberName));
            }
        }
        catch (Exception ex)
        {
            // Implement appropriate error handling here.
            throw new ApplicationException("The subscription could not be " +
                "synchronized. Verify that the subscription has " +
                "been defined correctly.", ex);
        }
        finally
        {
            conn.Disconnect();
        }

    I've got the server merge publication defined correctly, but when I run the above code, I get a NullReferenceException on the call to:

        subscription.SynchronizationAgent.Synchronize();

    The stack trace is as follows:

        at Microsoft.SqlServer.Replication.MergeSynchronizationAgent.StatusEventSinkMethod(String message, Int32 percent, Int32* returnValue)
        at Test.ConsoleTest.Program.SynchronizePullSubscription() in F:\Visual Studio Projects\Test\source\Test.ConsoleTest\Program.cs:line 124

    It seems, from the stack trace, to be something to do with the Status event, but I don't have a handler defined, and defining one makes no difference.
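
    As a diagnostic step — sketched here on the assumption that the RMO Status event and its StatusEventArgs members (Message, PercentCompleted) behave as in the SQL Server replication samples, and not as a confirmed fix for this particular NullReferenceException — it can be worth attaching a handler before calling Synchronize, since the failing frame is the agent's status sink:

        // Assumption: MergeSynchronizationAgent exposes a Status event whose
        // event args carry Message and PercentCompleted, as in the RMO samples.
        var agent = subscription.SynchronizationAgent;
        agent.Status += (sender, args) =>
        {
            Console.WriteLine("{0}% - {1}", args.PercentCompleted, args.Message);
        };
        agent.Synchronize();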

    Read the article

  • XMPP and Android interaction

    - by Ameya Phadke
    I am currently finding out how to build an XMPP client application on Android 2.0. I came across this link, which somewhat talks about the same problem. I am a newbie to Android dev and thus found the solution given there difficult to digest. The system currently has ActiveMQ as a JMS provider. My job is to feed the messages coming from JMS to the XMPP server, and then develop an XMPP client on Android 2.0 which will listen and show notifications for the events pushed by the server. I have the following concerns (which might sound foolish):

    1. How do I push the events from JMS to the XMPP server, which will in turn push them to Android?
    2. Which XMPP server implementation should I use? I have 3 options:
       * Openfire: very mature (was a commercial product), but sounds like it's heavyweight, written in Java
       * Prosody: lightweight and easy to use, written in Lua. Doesn't have a PubSub module yet
       * Tigase: also lightweight, written in Java, supports PubSub
       How do I test and set up these servers? Do I need PubSub functionality for my app?
    3. For the XMPP client I came across the Smack API given here, which was last updated about 2 years ago. Can anyone please tell me how I can make use of it for Android 2.0? If possible, can anyone please mail me the latest working Smack jar files?

    Thanks, Ameya

    Read the article

  • "Admin module" taking over [Yii framework]

    - by Flavius
    Hi I have an "admin" module and I want it to serve "dynamic controllers", i.e. to provide a default behaviour for controllers which don't really exist ("virtual controllers"). I've invented a lightweight messaging mechanism for loose communication between modules. I'd like to use it such that when e.g. ?r=admin/users/index is requested, it will call the "virtual controller" "UserController" of AdminModule, which would, by default, use this messaging mechanism to notify the real module "UsersModule" it can answer to the request. I thought about simulating this behaviour in AdminModule::init(), but at that point I have no way of deciding whether the action can be processed by a real controller or not, or at least I don't know how to do it. This is because of the way Yii works: bottom-up, the controller is the one that renders the view AND the application layout (or the module's, if it exists), for example. I don't think the module even has a word to say about handling a given controller+action or not. To recap, I'm looking for a kind of CWebModule::missingController($controllerId,$actionId), just like CController::missingAction($actionId), or a workaround to simulate that. That would possibly be in CWebModule::init() or somewhere where I can find out whether the controller actually exists, in which case it's his job to handle it the $actionID and $controllerID whether the module $controllerID exists (I didn't type it wrong, in r=admin/users/index, "users" is the actual module, as specified in the application's config).

    Read the article

  • Advice for last year college graduates

    - by Tomh
    Hey guys, I know there are many "advice" questions around this site, but I wanted to narrow mine down to last-year college students - in my case, my last year as a Master's student in computer science. So far, this is a list of things I've done during my time in college (which I can recommend others do as well):

    * Code a lot - I've written several hobby projects, had part-time jobs, entered the Imagine Cup from Microsoft, took programming-intensive courses and did freelance gigs.
    * Read a lot - I've bought most of the top books from the recommended book topics here; to be honest, I have not read them all.
    * Learn different languages - I've tried several languages including Haskell, Java, Python, Ruby, Lisp, Prolog, C#, PHP, JS, AS3 and possibly some more I forgot.
    * Tried to start a blog - Joel recommends learning how to write. I tried starting a couple of blogs to improve on this; I gave up on all instances after writing about three posts. It was just not my thing...
    * Have a portfolio of launched projects/programs - I'm busy with this; I have a couple of finished, working projects to show to people.

    So this is my last year. Is there anything else you can recommend a last-year college student do before hitting the job market? Personally, I'm tempted to spend my time on the following:

    * Practice algorithm design
    * Learn and memorize the usage of the low-level APIs of your favorite language
    * Polish your portfolio

    Why? Because those first two will make sure you pass the majority of the interviews, here in Holland (I could be wrong). I'd rather not spend my time on those first two points, but I have to be realistic, and that's just my experience of the kind of questions you'll get when you apply. The third point is my hope that I won't have to answer questions about the number of standard types in C#, for example, if they can see I get projects done and launched. But I'm still graduating, so I don't know anything :), and many of you might have hired grads recently and could tell me and other interested people what you wish the recent grads you interviewed had done before they applied.

    Read the article

  • Intellij Javafx artifact - how do you make it?

    - by Louis Sayers
    I've been trying all day to turn my JavaFX application into a jar file. I'm using Java 1.7 update 7. Oracle has some information, but it just seems scattered all over the place. IntelliJ is almost doing the job, but I get the following error:

        java.lang.NoClassDefFoundError: javafx/application/Application

    which seems to say that I need to tell Java where jfxrt.jar is... If I add this jar to my classpath info for the manifest build in IntelliJ (Ctrl+Shift+Alt+S - Artifacts - Output Layout tab - Class Path), then I get another error:

        Could not find or load main class com.downloadpinterest.code.Main

    It seems strange to have to include jfxrt.jar in my classpath though... I've tried to create an Ant script as well, but I feel like IntelliJ is 90% of the way there - and I just need a bit of help to figure out why I need to include jfxrt.jar, and why my Main class isn't being found (I'm guessing I need to add it to the classpath somehow?). Could anyone clue me up as to what is happening? I had a basic GUI before which worked fine, but JavaFX seems to be making life complicated!

    Read the article

  • Dealing with "Coder's Block" (or blank form syndrome)

    - by robsoft
    I know this is the sort of somewhat open-ended question that we're discouraged from asking, but there are lots of open-ended questions around already, and this is something quite relevant to me right now. Do you ever get those times when you're about to start work on a new function/feature of an established system, and you get "coder's block"? It's like a mental freeze at the sight of a large, completely unpopulated dialog, or an empty code file with just the stub reference headers etc. Do you ever have that 'ulp' moment that seems to sap all your momentum and leave you wide open to distractions (surfing the web for inspiration, checking out 'crackoverflow', etc.)? Not that I'd wish it on anyone, but hopefully some of you do, and hopefully some of you can suggest tips or strategies for overcoming the situation, regaining your momentum and becoming productive again. I usually try to reduce what I'm about to do down to absurdly small steps, in the hope that as the job becomes just a series of 'doh' tasks, I'll kickstart myself into working through them. However, sometimes, particularly when a deadline is looming, I'll get overwhelmed by this approach as I realise I probably don't have enough time to do all of those tiny steps properly. Those are the darkest moments - often literally just before dawn! This situation can be particularly crippling if you mostly work alone, too. Any thoughts or suggestions? Any methods that you've found helpful yourself?

    Read the article

  • Why a Swing app stops my Java servlet ?

    - by Frank
    I have a Swing runnable app which updates messages. Then I have a Java servlet that gets messages from the PayPal IPN (Instant Payment Notification); when the servlet starts up, in init(), it starts the Swing runnable app, which opens a desktop window. But 30 minutes later an error in the Swing code caused the servlet to stop. How can that happen? The runnable is running on its own thread - the servlet only started that thread - so why would an error in that thread cause the servlet to stop?

        public class License_Manager extends JPanel implements Runnable
        {
            License_Manager()
            {
                Do_GUI();
                ...
                start();
            }

            public static void main(String[] args)
            {
                // Schedule a job for the event-dispatching thread: creating and showing this application's GUI.
                javax.swing.SwingUtilities.invokeLater(new Runnable()
                {
                    public void run()
                    {
                        Create_And_Show_GUI();
                    }
                });
            }
        }

        public class PayPal_Servlet extends HttpServlet
        {
            public void init(ServletConfig config) throws ServletException
            {
                super.init(config);
                License_Manager.main(null);
            }

            protected void processRequest(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException
            {
            }
        }

    And besides, the error doesn't even have anything to do with my code. It looks like this:

        Exception in thread "AWT-EventQueue-0" java.lang.ArrayIndexOutOfBoundsException: 17 >= 0
            at java.util.Vector.elementAt(Vector.java:427)
            at javax.swing.DefaultListModel.getElementAt(DefaultListModel.java:70)
            at javax.swing.plaf.basic.BasicListUI.paintCell(BasicListUI.java:191)
            at javax.swing.plaf.basic.BasicListUI.paintImpl(BasicListUI.java:304)
            at javax.swing.plaf.basic.BasicListUI.paint(BasicListUI.java:227)
            at javax.swing.plaf.ComponentUI.update(ComponentUI.java:143)
            at javax.swing.JComponent.paintComponent(JComponent.java:763)
            at javax.swing.JComponent.paint(JComponent.java:1029)
            at javax.swing.JComponent.paintChildren(JComponent.java:864)
            at javax.swing.JComponent.paint(JComponent.java:1038)
            at javax.swing.JViewport.paint(JViewport.java:747)
            at javax.swing.JComponent.paintChildren(JComponent.java:864)
            at javax.swing.JComponent.paint(JComponent.java:1038)
            at javax.swing.JComponent.paintToOffscreen(JComponent.java:5124)
            at javax.swing.BufferStrategyPaintManager.paint(BufferStrategyPaintManager.java:278)
            at javax.swing.RepaintManager.paint(RepaintManager.java:1220)
            at javax.swing.JComponent._paintImmediately(JComponent.java:5072)
            at javax.swing.JComponent.paintImmediately(JComponent.java:4882)
            at javax.swing.RepaintManager.paintDirtyRegions(RepaintManager.java:803)
            at javax.swing.RepaintManager.paintDirtyRegions(RepaintManager.java:714)
            at javax.swing.RepaintManager.seqPaintDirtyRegions(RepaintManager.java:694)
            at javax.swing.SystemEventQueueUtilities$ComponentWorkRequest.run(SystemEventQueueUtilities.java:128)
            at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:209)
            at java.awt.EventQueue.dispatchEvent(EventQueue.java:597)
            at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269)
            at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184)
            at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174)
            at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169)
            at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161)
            at java.awt.EventDispatchThread.run(EventDispatchThread.java:122)

    Read the article

  • Security error accessing Service outside of FlexBuilder

    - by MikeHoss
    I'm very new to Flex and I have what I think is a head-scratcher. I am building a little Flash app that will consume some web services over HTTP. When I am in FlexBuilder and run my app there, it works fine. When I go to my FlexBuilder project on my OS and double-click on it, it works fine. When I zip up my bin-debug folder, I get this error:

        Security error accessing url
        faultCode: Channel.Security.Error
        faultString: 'Security error accessing url'
        faultDetail: 'Destination: DefaultHTTP'

    So I googled that and got information about the crossdomain.xml file. Well, I can't put a crossdomain file in the service I am calling, but I can put one somewhere else. So I put the following lines in my Flex app:

        Security.allowDomain("vx1391");
        Security.loadPolicyFile("http://vx1391:8080/job/Remote%20FIT%20Runner/ws/trunk/flash-cross-domain.xml");

    My cross-domain.xml file is wide open:

        <cross-domain-policy>
            <allow-access-from domain="*"/>
        </cross-domain-policy>

    I know that is bad in a prod environment, but right now I just need to get this working locally but outside of FlexBuilder. Anyone want to help out this Flex noob?

    Read the article

  • solr JOIN query

    - by Sfairas
    I need to run a JOIN query on a Solr index. I've got two XML files that I have indexed, person.xml and subject.xml. Person:

        <doc>
            <field name="id">P39126</field>
            <field name="family">Smith</field>
            <field name="given">John</field>
            <field name="subject">S1276</field>
            <field name="subject">S1312</field>
        </doc>

    Subject:

        <doc>
            <field name="id">S1276</field>
            <field name="topic">Abnormalities, Human</field>
        </doc>

    I need to display information only from the person doc, but each query should match fields in both person and subject. In the case where the query matches only the subject doc, I need to display all docs from person that have a matching id. Is this possible to do without running two separate queries? Something like a JOIN query would do the job. Any help?

    Read the article

  • How to keep track of TextPointer in WPF RichTextBox?

    - by Alan Spark
    I'm trying to get my head around the TextPointer class in a WPF RichTextBox. I would like to be able to keep track of them so that I can associate information with areas in the text. I am currently working with a very simple example to try and figure out what is going on. In the PreviewKeyDown event I am storing the caret position, and then in the PreviewKeyUp event I am creating a TextRange based on the before and after caret positions. Here is a code sample that illustrates what I am trying to do:

        // The caret position before typing
        private TextPointer caretBefore = null;

        private void rtbTest_PreviewKeyDown(object sender, KeyEventArgs e)
        {
            // Store caret position
            caretBefore = rtbTest.CaretPosition;
        }

        private void rtbTest_PreviewKeyUp(object sender, KeyEventArgs e)
        {
            // Get text between before and after caret positions
            TextRange tr = new TextRange(caretBefore, rtbTest.CaretPosition);
            MessageBox.Show(tr.Text);
        }

    The problem is that the text that I get is blank. For example, if I type the character 'a' then I would expect to find the text "a" in the TextRange. Does anyone know what is going wrong? It could be something very simple, but I've spent an afternoon getting nowhere. I am trying to embrace the new WPF technology but find that the RichTextBox in particular is so complicated that it makes even doing simple things like this difficult. If anyone has any links that do a good job of explaining the TextPointer, I would appreciate it if you can let me know.
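
    One detail that often trips people up is that a stored TextPointer keeps its place relative to the surrounding content according to its LogicalDirection, so the snapshot taken at the caret can end up sitting after the newly typed character — which would be consistent with the empty range seen here. A way to sidestep that question entirely — shown only as a sketch reusing rtbTest from the question — is to remember the symbol offset from the start of the document and rebuild the pointer afterwards:

        // Sketch: track the position as an offset from Document.ContentStart,
        // then rebuild a TextPointer from it after the key has been handled.
        private int caretOffsetBefore;

        private void rtbTest_PreviewKeyDown(object sender, KeyEventArgs e)
        {
            caretOffsetBefore = rtbTest.Document.ContentStart
                                       .GetOffsetToPosition(rtbTest.CaretPosition);
        }

        private void rtbTest_PreviewKeyUp(object sender, KeyEventArgs e)
        {
            TextPointer before = rtbTest.Document.ContentStart
                                        .GetPositionAtOffset(caretOffsetBefore);
            if (before != null)
            {
                // The rebuilt pointer sits before the text typed between key down and key up.
                TextRange tr = new TextRange(before, rtbTest.CaretPosition);
                MessageBox.Show(tr.Text);
            }
        }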

    Read the article

  • Linq IQueryable variables

    - by kevinw
    Hi, I have a function that should return me a string, but what it is doing is bringing me back the SQL expression that I am using on the database. What have I done wrong?

        public static IQueryable XMLtoProcess(string strConnection)
        {
            Datalayer.HameserveDataContext db = new HameserveDataContext(strConnection);

            var xml = from x in db.JobImports
                      where x.Processed == false
                      select new { x.Content };

            return xml;
        }

    This is the code sample. This is what I should be getting back:

        <PMZEDITRI TRI_TXNNO="11127" TRI_TXNSEQ="1" TRI_CODE="600" TRI_SUBTYPE="1" TRI_STATUS="Busy" TRI_CRDATE="2008-02-25T00:00:00.0000000-00:00" TRI_CRTIME="54540" TRI_PRTIME="0" TRI_BATCH="" TRI_REF="" TRI_CPY="main" C1="DEPL" C2="007311856/001" C3="14:55" C4="CUB2201" C5="MR WILLIAM HOGG" C6="CS12085393" C7="CS" C8="Blocked drain" C9="Scheme: CIS Home Rescue edi tests" C10="MR WILLIAM HOGG" C11="74 CROMARTY" C12="OUSTON" C13="CHESTER LE STREET" C14="COUNTY DURHAM" C15="" C16="DH2 1JY" C17="" C18="" C19="" C20="" C21="CIS" C22="0018586965 ||" C23="BD" C24="W/DE/BD" C25="EX-DIRECTORY" C26="" C27="/" C28="CIS Home Rescue" C29="CIS Home Rescue Plus Insd" C30="Homeserve Claims Management Ltd|Upon successful completion of this repair the contractor must submit an itemised and costed Homeserve Claims Management Ltd Job Sheet." N1="79.9000" N2="68.0000" N3="11.9000" N4="0" N5="0" N6="0" D1="2008-02-25T00:00:00.0000000-00:00" T2="EX-DIRECTORY" T4="Blocked drain" TRI_SYSID="9" TRI_RETRY="3" TRI_RETRYTIME="0" />

    Can anyone help me please?
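
    The usual explanation for seeing the SQL text is that the method hands back an unexecuted IQueryable, and whatever consumes it ends up calling ToString on the query object rather than enumerating it. A hedged sketch of the alternative — assuming JobImports.Content is the column holding the stored XML, as the question implies — is to execute the query inside the method and return the materialised values:

        // Requires: using System.Collections.Generic; using System.Linq;
        public static List<string> XMLtoProcess(string strConnection)
        {
            using (var db = new Datalayer.HameserveDataContext(strConnection))
            {
                var query = from x in db.JobImports
                            where x.Processed == false
                            select x.Content;

                // Enumerating (ToList) is what actually runs the query;
                // returning the raw IQueryable only carries the expression.
                return query.AsEnumerable()
                            .Select(c => c.ToString())
                            .ToList();
            }
        }

    If deferred execution is genuinely wanted, the method can keep returning IQueryable, but the caller then has to iterate it (foreach, ToList, etc.) instead of calling ToString on it.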

    Read the article

< Previous Page | 231 232 233 234 235 236 237 238 239 240 241 242  | Next Page >