Search Results

Search found 230 results on 10 pages for 'philosophy'.


  • Make user download pdf instead of saving to a location

    - by chupinette
    Hello! I was trying out the following code, which actually saves the PDF file to C:/xampp/. I want to create a link so that when the user clicks on it, they are prompted to save the PDF file instead.

        <?php
        // create handle for new PDF document
        $pdf = pdf_new();
        // open a file
        pdf_open_file($pdf, "try1.pdf");
        // start a new page (A4)
        pdf_begin_page($pdf, 595, 842);
        pdf_set_parameter($pdf, 'FontOutline', 'Arial=c:\windows\fonts\arial.ttf');
        // get and use a font object
        $font = pdf_findfont($pdf, "Arial", "host", 1);
        pdf_setfont($pdf, $font, 10);
        // print text
        pdf_show_xy($pdf, "There are more things in heaven and earth, Horatio,", 50, 750);
        pdf_show_xy($pdf, "than are dreamt of in your philosophy", 50, 730);
        // add an image under the text (both lines commented out here; pdf_place_image
        // needs the $image from pdf_open_image_file, otherwise $image is undefined)
        //$image = pdf_open_image_file($pdf, "jpeg", "shakespeare.jpg");
        //pdf_place_image($pdf, $image, 50, 650, 0.25);
        // end page
        pdf_end_page($pdf);
        // close and save file
        pdf_close($pdf);
        ?>
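
    A minimal sketch (not from the original post) of the usual approach: build the PDF in memory and send it back with a Content-Disposition: attachment header so the browser prompts the user to save it, instead of writing the file to disk. Python and Flask are used purely for illustration, and build_pdf_bytes() is a hypothetical stand-in for the generation code above; in PHP the same idea applies, emitting the header and echoing the PDF bytes rather than saving them.

        # Sketch: serve a generated PDF as a download (assumes Flask is installed).
        from flask import Flask, Response

        app = Flask(__name__)

        def build_pdf_bytes() -> bytes:
            # placeholder: return the finished PDF document as raw bytes
            return b"%PDF-1.4 ..."

        @app.route("/download")
        def download():
            pdf_bytes = build_pdf_bytes()
            resp = Response(pdf_bytes, mimetype="application/pdf")
            # this header is what makes the browser prompt to save the file
            resp.headers["Content-Disposition"] = 'attachment; filename="try1.pdf"'
            return resp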

    Read the article

  • Namespaces combined with TFS / Source Control explanation

    - by Christian
    As an ISV company we are slowly running into the "structure your code" issue. We mainly develop using Visual Studio 2008 and 2010 RC; the languages are C# and VB.NET. We have our own Team Foundation Server and of course we use Source Control. When we started developing based on the .NET Framework, we also began using namespaces in a primitive way. With time we 'became more mature', I mean we learned to use the namespaces and we structured the code more and more, but only in the solution scope. Now we have about 100 different projects and solutions in our Source Safe. We realized that many of our own classes are coded very redundantly: a Write2Log, GetExtensionFromFilename or similar function can be found anywhere between one and 20 times across all these projects and solutions. So my idea is: create one single root folder in Source Control and start our own namespace-hierarchy structure below this root; let's name it CompanyName. A Write2Log class would then be found in CompanyName.System.Logging. Whenever we create a new solution or project and we need a log function, we will 'namespace' that solution and place it accordingly somewhere below the CompanyName root folder. To have the logging functionality we then import (add) the existing project to the solution. Those 20+ projects/solutions with the Write2Log class can then be maintained in one single place. To my questions:
    - Is that a good idea, the philosophy of namespaces and source control?
    - There must be a good book explaining namespaces combined with Source Control, yes? Any hints/directions/tips?
    - How do you manage your 50+ projects?

    Read the article

  • looking for a license key algorithm.

    - by giulio
    There are a lot of questions relating to license keys asked on Stack Overflow, but they don't answer this question. Can anyone provide a simple license key algorithm that is technology-independent and doesn't require a diploma in mathematics to understand? The license key algorithm is similar to public key encryption. I just need something simple that can be implemented in any platform, .NET or Java, and uses simple data like characters. Preferably no byte translations required. So if a person presents a string, a complementary string can be generated that is the authorisation code. Below is a common scenario that it would be used for:
    1. Customer downloads s/w which generates a unique key upon initial startup/installation.
    2. S/w runs during trial period.
    3. At end of trial period an authorisation key is required.
    4. Customer goes to the designated web-site, enters their code and gets an authorisation code to enable the s/w, after paying :)
    Don't be afraid to describe your answer as though you're talking to a 5 yr old, as I am not a mathematician. Just need a decent basic algorithm, we're not launching nukes... NB: Please no philosophy on encryption nor who Diffie-Hellman is. I just need a basic solution.
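
    A minimal sketch (not from the original question) of one common lightweight scheme: a keyed hash (HMAC) of the installation key. It needs no real math background, uses only plain characters, and ports directly to .NET or Java. The secret, key format and code length below are placeholder choices; note that whoever verifies the code must hold the secret, so this fits the "enter your key on our web site, get an authorisation code back" scenario rather than purely offline validation.

        # Sketch: derive an authorisation code from the trial key with an HMAC.
        import base64
        import hashlib
        import hmac

        SECRET = b"keep-this-on-your-server-only"  # hypothetical private secret

        def activation_code(trial_key: str) -> str:
            """Return the authorisation code matching the key the installer generated."""
            digest = hmac.new(SECRET, trial_key.encode("utf-8"), hashlib.sha256).digest()
            # base32 keeps the code to plain letters and digits, easy to type or read aloud
            return base64.b32encode(digest).decode("ascii")[:20]

        def is_valid(trial_key: str, code: str) -> bool:
            return hmac.compare_digest(activation_code(trial_key), code)

        # the customer pastes the key shown by the trial software into the web site,
        # pays, and receives the matching authorisation code back
        key = "4F7A-92C1-TRIAL-KEY"
        code = activation_code(key)
        print(code, is_valid(key, code))  # prints the code and True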

    Read the article

  • So can unique_ptr be used safely in stl collections?

    - by DanDan
    I am confused by unique_ptr and the rvalue move philosophy. Let's say we have two collections:

        std::vector<std::auto_ptr<int>> autoCollection;
        std::vector<std::unique_ptr<int>> uniqueCollection;

    Now I would expect the following to fail, as there is no telling what the algorithm is doing internally, maybe making internal pivot copies and the like, and thus ripping away ownership from the auto_ptr:

        std::sort(autoCollection.begin(), autoCollection.end());

    I get this. And the compiler rightly disallows this happening. But then I do this:

        std::sort(uniqueCollection.begin(), uniqueCollection.end());

    And this compiles. And I do not understand why. I did not think unique_ptrs could be copied. Does this mean a pivot value cannot be taken, so the sort is less efficient? Or is this pivot actually a move, which in fact is as dangerous as the collection of auto_ptrs and should be disallowed by the compiler? I think I am missing some crucial piece of information, so I eagerly await someone to supply me with the aha! moment.

    Read the article

  • Has form post behavior changed in modern browsers? (or How are double clicks handled by the browser)

    - by Alex Czarto
    Background: We are in the process of writing a registration/payment page, and our philosophy was to code all validation and error checking on the server side first, and then add client-side validation as a second step (unobtrusive jQuery). We wanted to guard against double clicks on the server side, so we wrote some locking, thread-safe code to handle simultaneous posts/race conditions. When we tried to test this, we realized that we could not cause a simultaneous post or race condition to occur. I thought that (in older browsers anyway) double clicking a submit button worked as follows:
    1. User double clicks the submit button.
    2. Browser sends a post on the first click.
    3. On the second click, the browser cancels/ignores the initial post and initiates a second post (before the first post has returned with a response).
    4. Browser waits for the second post to return, ignoring the initial post's response.
    I thought that from the server side it looked like this: the server gets two simultaneous post requests, executes and responds to them both (unaware that no one is listening to the first response). From our testing (Firefox 3.0, IE 8.0) this is what actually happens:
    1. User double clicks the submit button.
    2. Browser sends a post for the first click.
    3. Browser queues up the second click, but waits for the response from the first click.
    4. Response returns from the first click (response is ignored?).
    5. Browser sends a post for the second click.
    So from the server side: the server receives a single post which it executes and responds to; then the server receives a second request which it executes and responds to. My question is, has this always worked this way (and I'm losing my mind)? Or is this a new feature in modern browsers that prevents simultaneous posts from being sent to the server? It seems that for server-side double click prevention, we don't have to worry about simultaneous posts or race conditions, only about queued-up posts. Thanks in advance for any feedback / comments. Alex
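
    A minimal sketch (not from the original question) of the usual server-side guard, independent of how the browser serializes the two posts: render a one-time token into the form and let the first post consume it, so the queued second post arrives with a token that has already been spent. Python is used only for illustration; in ASP.NET the token would typically live in session state, and the names here are made up.

        # Sketch: one-time form token to ignore a double-submitted post.
        import secrets

        issued_tokens = set()  # in a real app this would live in the user's session

        def render_form() -> str:
            """Embed a fresh token in the form when the page is rendered."""
            token = secrets.token_hex(16)
            issued_tokens.add(token)
            return '<input type="hidden" name="form_token" value="%s">' % token

        def handle_post(form_token: str) -> str:
            """First post consumes the token; the queued double-click finds it spent."""
            if form_token in issued_tokens:
                issued_tokens.discard(form_token)
                return "processed"
            return "duplicate submission ignored"

        field = render_form()
        token = field.split('value="')[1].rstrip('">')
        print(handle_post(token))  # processed
        print(handle_post(token))  # duplicate submission ignored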

    Read the article

  • pattern for the following condition in java

    - by zahir hussain
    Hi, I want to know how to write a pattern. For example, the text is: "AboutGoogle AdWords Drive traffic and customers to your site. Pay through Cheque, Net Banking or Credit Card. Google Toolbar Add a search box to your browser. Google SMS To find out local information simply SMS to 54664. Gmail Free email with 7.2GB storage and less spam. Try Gmail today. Our ProductsHelp Help with Google Search, Services and ProductsGoogle Web Search Features Translation, I'm Feeling Lucky, CachedGoogle Services & Tools Toolbar, Google Web APIs, ButtonsGoogle Labs Ideas, Demos, ExperimentsFor Site OwnersAdvertising AdWords, AdSenseBusiness Solutions Google Search Appliance, Google Mini, WebSearchWebmaster Central One-stop shop for comprehensive info about how Google crawls and indexes websitesSubmit your content to Google Add your site, Google SitemapsOur CompanyPress Center News, Images, ZeitgeistJobs at Google Openings, Perks, CultureCorporate Info Company overview, Philosophy, Diversity, AddressesInvestor Relations Financial info, Corporate governanceMore GoogleContact Us FAQs, Feedback, NewsletterGoogle Logos Official Logos, Holiday Logos, Fan LogosGoogle Blog Insights to Google products and cultureGoogle Store Pens, Shirts, Lava lamps©2010 Google - Privacy Policy - Terms of Service" I have to search for some words, for example "google insights", so how do I write the code in Java? I have written some small code; please check my code and answer my question. That code only finds where the search word is, but I also need to display some words in front of the search word and some words after the search word, similar to a Google search result. My code is:

        Pattern p = Pattern.compile("(?i)(.*?)" + search);
        Matcher m = p.matcher(full);
        String title = "";
        while (m.find() == true) {
            title = m.group(1);
            System.out.println(title);
        }

    Here full is the original content and search is the search word. Thanks in advance.
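
    A minimal sketch (not from the original post) of the "few words before and after the match" idea, written in Python for brevity; the same capturing-group pattern carries over directly to java.util.regex, which the question uses. The context size and sample strings are arbitrary.

        # Sketch: grab each hit plus a few surrounding words, Google-snippet style.
        import re

        def snippets(full, search, context_words=5):
            """Return each occurrence of `search` with up to `context_words` words on either side."""
            word = r"\S+"
            pattern = re.compile(
                r"((?:%s\s+){0,%d})(%s)((?:\s+%s){0,%d})"
                % (word, context_words, re.escape(search), word, context_words),
                re.IGNORECASE,
            )
            return ["... %s%s%s ..." % m.groups() for m in pattern.finditer(full)]

        full = "Google Blog Insights to Google products and culture"
        print(snippets(full, "insights", 3))
        # ['... Google Blog Insights to Google products ...']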

    Read the article

  • Who likes #regions in Visual Studio?

    - by Nicholas
    Personally I can't stand region tags, but clearly they have widespread appeal for organizing code, so I want to test the temperature of the water for other MS developers' take on this idea. My personal feeling is that any sort of silly trick to simplify code only acts to encourage terrible coding behavior, like lack of cohesion, unclear intention and poor or incomplete coding standards. One programmer told me that code regions helped encourage coding standards by making it clear where another programmer should put his or her contributions. But, to be blunt, this sounds like a load of horse manure to me. If you have a standard, it is the programmer's job to understand what that standard is... you shouldn't need to define it in every single class file. And, nothing is more annoying than having all of your code collapsed when you open a file. I know that Ctrl+M, L will open everything up, but then you have the hideous "hash region definition" opening and closing lines to read. They're just irritating. My most steadfast coding philosophy is that all programmers should strive to create clear, concise and cohesive code. Region tags just serve to create noise and redundant intentions. Region tags would be moot in a well-thought-out and well-intentioned class. The only place they seem to make sense to me is in automatically generated code, because you should never have to read that outside of personal curiosity.

    Read the article

  • git commit best practices

    - by Ivan Z. Siu
    I am using git to manage a C++ project. When I am working on the project, I find it hard to organize the changes into commits when changing things that are related to many places. For example, I may change a class interface in a .h file, which will affect the corresponding .cpp file, and also other files using it. I am not sure whether it is reasonable to put all the stuff into one big commit. Intuitively, I think the commits should be modular, each one of them corresponding to a functional update/change, so that collaborators can pick things accordingly. But it seems that sometimes it is inevitable to include lots of files and changes to make a functional change actually work. Searching did not yield me any good suggestions or tips. Hence I wonder if anyone could give me some best practices for making commits. Thanks! PS. I've been using git for a while and I know how to interactively add/rebase/split/amend/... What I am asking about is the PHILOSOPHY part.

    Read the article

  • C++ Matrix class hierarchy

    - by bpw1621
    Should a matrix software library have a root class (e.g., MatrixBase) from which more specialized (or more constrained) matrix classes (e.g., SparseMatrix, UpperTriangularMatrix, etc.) derive? If so, should the derivation be public, protected, or private? If not, should they be composed with an implementation class encapsulating common functionality and be otherwise unrelated? Something else? I was having a conversation about this with a software developer colleague (which I am not, per se) who mentioned that it is a common programming design mistake to derive a more restricted class from a more general one; he used the example of how it is not a good idea to derive a Circle class from an Ellipse class, which is similar to the matrix design issue, even when it is true that a SparseMatrix "IS A" MatrixBase. The interface presented by both the base and derived classes should be the same for basic operations; for specialized operations, a derived class would have additional functionality that might not be possible to implement for an arbitrary MatrixBase object. For example, we can compute the Cholesky decomposition only for a PositiveDefiniteMatrix class object; however, multiplication by a scalar should work the same way for both the base and derived classes. Also, even if the underlying data storage implementation differs, operator()(int,int) should work as expected for any type of matrix class. I have started looking at a few open-source matrix libraries and it appears that this is kind of a mixed bag (or maybe I'm looking at a mixed bag of libraries). I am planning on helping out with a refactoring of a math library where this has been a point of contention, and I'd like to have opinions (that is, unless there really is an objective right answer to this question) as to what design philosophy would be best and what the pros and cons are of any reasonable approach.

    Read the article

  • Most elegant way to break CSV columns into separate data structures using Python?

    - by Nick L
    I'm trying to pick up Python. As part of the learning process I'm porting a project I wrote in Java to Python. I'm at a section now where I have a list of CSV headers of the form:

        headers = [a, b, c, d, e, .....]

    and separate lists of the groups that these headers should be broken up into, e.g.:

        headers_for_list_a = [b, c, e, ...]
        headers_for_list_b = [a, d, k, ...]
        ...

    I want to take the CSV data and turn it into dicts based on these groups, e.g.:

        list_a = [
            {b: val_1b, c: val_1c, e: val_1e, ...},
            {b: val_2b, c: val_2c, e: val_2e, ...},
            {b: val_3b, c: val_3c, e: val_3e, ...},
            ...
        ]

    where, for example, val_1b is the first row of the 'b' column, val_3c is the third row of the 'c' column, etc. My first "Java instinct" is to do something like:

        for row in data:
            for col_num, val in enumerate(row):
                col_name = headers[col_num]
                if col_name in group_a:
                    dict_a[col_name] = val
                elif col_name in group_b:
                    dict_b[col_name] = val
                ...
            list_a.append(dict_a)
            list_b.append(dict_b)
            ...

    However, this method seems inefficient/unwieldy and doesn't possess the elegance that Python programmers are constantly talking about. Is there a more "Zen-like" way I should try, keeping with the philosophy of Python?
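
    A minimal sketch (not from the original post) of a more idiomatic version, assuming the data comes from csv.DictReader so each row is already a {header: value} mapping; the group lists and file name are placeholders.

        # Sketch: split CSV rows into per-group lists of dicts with comprehensions.
        import csv

        headers_for_list_a = ["b", "c", "e"]  # example groups from the question
        headers_for_list_b = ["a", "d", "k"]

        with open("data.csv", newline="") as f:
            rows = list(csv.DictReader(f))  # one dict per row, keyed by header

        # keep only the columns each group cares about
        list_a = [{h: row[h] for h in headers_for_list_a} for row in rows]
        list_b = [{h: row[h] for h in headers_for_list_b} for row in rows]

        print(list_a[:1], list_b[:1])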

    Read the article

  • Breaking dependencies when you can't make changes to other files?

    - by codemuncher
    I'm doing some stealth agile development on a project. The lead programmer sees unit testing, refactoring, etc. as a waste of resources and there is no way to convince him otherwise. His philosophy is "If it ain't broke don't fix it" and I understand his point of view. He's been working on the project for over a decade and knows the code inside and out. I'm not looking to debate development practices. I'm new to the project and I've been tasked with adding a new feature. I've worked on legacy projects before and used agile development practices with good results, but those teams were more receptive to the idea and weren't afraid of making changes to the code. I've been told I can use whatever development methodology I want, but I have to limit my changes to only those necessary to add the feature. I'm using TDD for the new classes I'm writing, but I keep running into roadblocks caused by the liberal use of global variables and the high coupling in the classes I need to interact with. Normally I'd start extracting interfaces for these classes and make their dependence on the global variables explicit by injecting them as constructor arguments or public properties. I could argue that the changes are necessary, but considering the lead never had to make them I doubt he would see it my way. What techniques can I use to break these dependencies without ruffling the lead developer's feathers? I've made some headway using:
    - Extract Interface (for the new classes I'm creating)
    - Extend and override the wayward classes with test stubs (luckily most methods are public virtual)
    But these two can only get me so far.
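
    A minimal sketch (not from the original post) of the "make the hidden dependency explicit" move described above, shown in Python for brevity; in the C++/C# code the question describes, the same shape appears as an extracted interface plus constructor injection. The class and global names are made up: the constructor defaults to the legacy global so existing call sites keep working, while tests supply a stub.

        # Sketch: constructor injection with the legacy global as the default.
        GLOBAL_LOGGER = None  # stand-in for the legacy global the code base uses

        class ReportBuilder:
            def __init__(self, logger=None):
                # default preserves the old behaviour; tests pass a stub instead
                self._logger = logger if logger is not None else GLOBAL_LOGGER

            def build(self, data):
                if self._logger is not None:
                    self._logger.write("building report")
                return sorted(data)

        class StubLogger:
            def __init__(self):
                self.lines = []

            def write(self, msg):
                self.lines.append(msg)

        # a unit test can now run without touching the real global
        stub = StubLogger()
        builder = ReportBuilder(logger=stub)
        print(builder.build([3, 1, 2]), stub.lines)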

    Read the article

  • Why is there so much XML in Java these days?

    - by BD at Rivenhill
    This is really more of a philosophy/design issue. I did some work in Java back in the middle 90's and again in the early 2000's and now I'm coming back to it after spending a lot of time in C/C++ and it seems like there was an explosion of XML dependency while I was gone. Major build system tools like ant and maven depend on XML for their configuration, but I'm actually more concerned with all the frameworks, such as Spring, Hibernate, etc. My experience is that powerful supporting libraries like these are where a developer can really get leverage for building programs with lots of features without writing a lot of code, but it really seems like I'm getting one language for the price of two here. I write a bunch of Java classes, but then I also write a bunch of XML files to glue them together. The things that get done in the XML are things that I can see reasonable ways of doing in straight code without the middleman, and they don't really seem to be treated exactly like configuration files: they change rarely and they end up getting committed to source code control like the Java code itself, but they are distributed with the resulting application and need to be unpacked and installed in the classpath in order to get the application to work. I'm working with server applications that are not web-based, so maybe the domain is a bit different from what most people are doing, but I just can't help feeling that I must be doing something wrong here. Can someone point me to a good source of information for why these design choices were made and what problems they are meant to solve so that I can analyze my own experiences in this context?

    Read the article

  • ASP.Net Custom Paging (w/ C#)

    - by André Alçada Padez
    Scenario: I have a GridView bound to a DataSource, and every column is sortable. My main query is something like:

        select a, b, c, d, e, f from table order by somedate desc

    I added a filter form where I can define values for each one of the fields and get the results filtered by the corresponding WHERE clause. As a result of this, I had to do custom sorting, so that when I sort by a field I am sorting the filtered query and not the main one. Now I have to do custom paging, for the same reason, but I don't understand the philosophy of it. I want to guarantee that I can:
    - filter the results
    - sort by a column
    - when I click on page 2, get page two of the filtered and sorted results
    I don't know what I have to do so I can bind the GridView with this. My sorting method, which is working just fine, looks something like:

        string condition = GetConditions(); // gets a string like " where a>1 and b>2" depending on the filter the user defines
        string query = "select a, b, c, d, e, f from table ";
        string direction = (e.SortDirection == SortDirection.Ascending) ? "asc" : "desc";
        string order = " order by " + e.SortExpression + " " + direction;
        UtilizadoresDataSource.SelectCommand = query + condition + order;

    I've never done custom paging before. I am trying GetConditions() (no problem here), but how can I find out how the GridView is currently sorted (by what field and in which sort order)? Thank you very much.
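
    A minimal sketch (not from the original post) of how the filter, the sort and the page usually end up in one statement on SQL Server 2005-era systems: wrap the filtered, sorted query in ROW_NUMBER() and keep only the requested window. Python is used here only to show the shape of the SQL; in the ASP.NET page the same string would be built in C#, and in real code the filter values should be parameterised rather than concatenated.

        # Sketch: compose a filtered + sorted + paged query with ROW_NUMBER().
        def build_page_query(condition, sort_expression, direction, page_index, page_size):
            """Return one page of the filtered, sorted result set (page_index is zero-based)."""
            start = page_index * page_size + 1
            end = start + page_size - 1
            return (
                "select a, b, c, d, e, f from ("
                " select a, b, c, d, e, f,"
                f" row_number() over (order by {sort_expression} {direction}) as rn"
                f" from [table] {condition}"
                f") paged where rn between {start} and {end}"
            )

        # page 2 of the rows matching the filter, newest first
        print(build_page_query("where a > 1", "somedate", "desc", page_index=1, page_size=20))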

    Read the article

  • Django + GAE (Google App Engine) : most convenient path for a beginner?

    - by mac
    Some background info first: Goal: a medium-complexity web app that I will need to maintain and possibly extend for a few years. Experience: good knowledge of Python, some experience with MVC frameworks (in PHP). Desiderata: using Django and Google App Engine. I read extensively about the compatibility issues between GAE and Django, and I am aware of the GAE patch, the nonrel project, and other similar pieces of code. I have also understood that the SDK provides some of the features of Django "out of the box". Yet, given that I have no previous experience with either Django or GAE, I am unable to evaluate to what extent using a patched version of Django will strip away important features, or how far the framework provided in the SDK is compatible with Django. So I am rather confused about the best way to proceed in my situation:
    - Should I simply use a patched version of Django, since the differences from the original Django are so minor that I would hardly notice them?
    - Should I write my app completely in "regular" Django and try to port it to GAE only afterwards, when I have got a grasp on Django internals and philosophy?
    - Should I write my app using the framework provided with the SDK and port it to Django only afterwards?
    - Should I... ?
    Thank you in advance for your time and advice.

    Read the article

  • SQL SERVER – Beginning SQL Server: One Step at a Time – SQL Server Magazine

    - by pinaldave
    I am glad to announce that along with SQLAuthority.com, I will be blogging on the prominent site of SQL Server Magazine. My very first blog post there is already live; read here: Beginning SQL Server: One Step at a Time. My association with SQL Server Magazine has been quite long, I have written nearly 7 to 8 SQL Server articles for the print magazine and it has been a great experience. I used to stay in the United States at that time. I moved back to India for good, and during this process, I had put everything on hold for a while. Just like many things, “temporary” things become “permanent” – coming back to SQLMag was on hold for long time. Well, this New Year, things have changed – once again, I am back with my online presence at SQLMag.com. Everybody is a beginner at every task or activity at some point of his/her life: spelling words for the first time; learning how to drive for the first time, etc. No one is perfect at the start of any task, but every human is different. As time passes, we all develop our interests and begin to study our subject of interest. Most of us dream to get a job in the area of our study – however things change as time passes. I recently read somewhere online (I could not find the link again while writing this one) that all the successful people in various areas have never studied in the area in which they are successful. After going through a formal learning process of what we love, we refuse to stop learning, and we finally stop changing career and focus areas. We move, we dare and we progress. IT field is similar to our life. New IT professionals come to this field every day. There are two types of beginners – a) those who are associated with IT field but not familiar with other technologies, and b) those who are absolutely new to the IT field. Learning a new technology is always exciting and overwhelming for enthusiasts. I am working with database (in particular) for SQL Server for more than 7 years but I am still overwhelmed with so many things to learn. I continue to learn and I do not think that I should ever stop doing so. Just like everybody, I want to be in the race and get ahead in learning the technology. For the same, I am always looking for good guidance. I always try to find a good article, blog or book chapter, which can teach me what I really want to learn at this stage in my career and can be immensely helpful. Quite often, I prefer to read the material where the author does not judge me or assume my understanding. I like to read new concepts like a child, who takes his/her first steps of learning without any prior knowledge. Keeping my personal philosophy and preference in mind, I will be blogging on SQL Server Magazine site. I will be blogging on the beginners stuff. I will be blogging for them, who really want to start and make a mark in this area. I will be blogging for all those who have an extreme passion for learning. I am happy that this is a good start for this year. One of my resolutions is to help every beginner. It is totally possible that in future they all will grow and find the same article quite ‘easy‘ – well when that happens, it indicates the success of the article and material! Well, I encourage everybody to read my SQL Server Magazine blog – I will be blogging there frequently on various topics. To begin, we will be talking about performance tuning, and I assure that I will not shy away from other multiple areas. Read my SQL Server Magazine Blog: Beginning SQL Server: One Step at a Time I think the title says it all. 
Do leave your comments and feedback to indicate your preference of subject and interest. I am going to continue writing on subject, and the aim is of course to help grow in this field.

    Read the article

  • HR According to Batman

    - by D'Arcy Lussier
    Any idea who that guy is running alongside the Caped Crusader? That's Nightwing, but you may know him as Robin…well, the first Robin anyway. There were actually like 5 Robins according to Wikipedia:
    - Dick Grayson, the original, whose parents were circus performers killed by a gangster.
    - Jason Todd, who was caught trying to steal tires off of the Batmobile.
    - Tim Drake, who saw Dick's parents die and figured out who Batman and Robin were.
    - and a few others that get into recent time travel/altered reality storylines.

    What does this have to do with HR? Well, it somewhat ties in with an article by Alex Papadimoulis from 2008. In the article he talks about the "Cravath System". The Cravath system was developed by a law firm called Cravath, Swaine & Moore back in the 19th century. In a nutshell, they believed in hiring the best and brightest straight out of school. These aspiring lawyers would then begin a fight for survival in the firm, with the strong surviving. In what's termed the "Up and Out" rule, employees needed to be promoted within 3 years or leave the company. They should achieve partner within 7 – 8 years and no later than 10 after initially coming on board (read all about the system on Wikipedia here). Back to Alex's article, he quotes from a book published in 1947 about the law firm: Under the "Cravath system" of taking a substantial number of men annually and keeping a current constantly moving up in the office, and its philosophy of tenure, men are constantly leaving… it is often difficult to keep the best men long enough to determine whether they shall be made partners, for Cravath-trained men are always in demand, usually at premium salaries. And so we see a pattern forming here:
    1. Hire a whole whack of smart college graduates
    2. Put them to work
    3. The ones that stick around should move up the ladder. The ones that don't stick around served the company well and left to expound the quality of the Cravath firm. Those that didn't fall into either of those categories were just let go.

    There are some interesting undercurrents to these ideas. If you stick around, you better keep your feet moving! I was at a Microsoft shindig a few months back, and was talking to a Microsoft employee. He shared that at MS you have 5 years to achieve a "senior" position within the company. Once you hit that mark, you can stay there for the rest of your career (he told about a guy who's a "senior" developer and has been for the last 20+ years working on audio drivers for Windows), but you *must* hit that mark within the timeframe. What we see with Microsoft is Cravath's system in action, whether intentional or not: bring in smart young people and see which ones stick. You need to give people something to work towards. Saying "You must reach this level or else!" is one way to look at it. The other way is to see achieving a higher rank in the organization as something for ambitious employees to reach towards. It's important for an organization to always have the next generation of executives waiting in the wings, and unless you're encouraging that early on you may find yourself in a position of needing to fill positions that nobody has been working towards. Now, you might suggest that this isn't that big of a deal because you could just hire someone from outside the organization, but the Cravath system holds to the tenet of promoting internally; develop your own talent, since your business is the best place for the future leadership to learn the business from.

    It's OK for people to quit.
    Alex's article really drives this point home, but it's worth noting here also: it's OK for your people to quit. In fact it's inevitable…and more inevitable that it'll be good people that leave. Some will stay and work towards the internal awards of promotion, but a number will get experience, serve the organization well, and then move on to something else. This should be expected and treated as a natural business occurrence. The idea of an alumni of an organization begins to come into play here: "That guy used to work for <insert company here>". There's a benefit in that: those best and brightest will be drawn to your organization and your reputation will permeate your market through former staff that are sought after because of how well you nurtured them.

    The Batman Hook

    All of this brings us back to Batman and his HR practice: when Dick decided he'd had enough of the Robin schtick, he quit and became his own…but he was always associated with Batman and people understood where his training had come from. To the Dark Knight's credit, he continued training partners under the Robin brand. Luckily he didn't have to worry about firing any of them (the ship sort of sails when you reveal a secret identity), although there was that unfortunate "quitting" of the second Robin when the Joker blew him up…but regardless, we see the Cravath system at work: bring in talent, expect great things, and be OK with whatever they decide for their careers. It's an interesting way to approach HR, and luckily for us our business isn't as dangerous or over-the-top as the caped crusader's.

    Read the article

  • ACORD LOMA Session Highlights Policy Administration Trends

    - by [email protected]
    Helen Pitts, senior product marketing manager for Oracle Insurance, attended and is blogging from the ACORD LOMA Insurance Forum this week. Above: Paul Vancheri, Chief Information Officer, Fidelity Investments Life Insurance Company. Vancheri gave a presentation during the ACORD LOMA Insurance Systems Forum about the key elements of modern policy administration systems and how insurers can mitigate risk during legacy system migrations to safely introduce new technologies. When I had a few particularly challenging honors courses in college my father, a long-time technology industry veteran, used to say, "If you don't know how to do something go ask the experts. Find someone who has been there and done that, don't be afraid to ask the tough questions, and apply and build upon what you learn." (Actually he still offers this same advice today.) That's probably why my favorite sessions at industry events, like the ACORD LOMA Insurance Forum this week, are those that include insight on industry trends and case studies from carriers who share their experiences and offer best practices based upon their own lessons learned. I had the opportunity to attend a particularly insightful session Wednesday as Craig Weber, senior vice president of Celent's Insurance practice, and Paul Vancheri, CIO of Fidelity Life Investments, presented, "Managing the Dynamic Insurance Landscape: Enabling Growth and Profitability with a Modern Policy Administration System." Policy Administration Trends Growing the business is the top issue when it comes to IT among both life and annuity and property and casualty carriers according to Weber. To drive growth and capture market share from competitors, carriers are looking to modernize their core insurance systems, with 65 percent of those CIOs participating in recent Celent research citing plans to replace their policy administration systems. Weber noted that there has been continued focus and investment, particularly in the last three years, by software and technology vendors to offer modern, rules-based, configurable policy administration solutions. He added that these solutions are continuing to evolve with the ongoing aim of helping carriers rapidly meet shifting business needs--whether it is to launch new products to market faster than the competition, adapt existing products to meet shifting consumer and /or regulatory demands, or to exit unprofitable markets. He closed by noting the top four trends for policy administration either in the process of being adopted today or on the not-so-distant horizon for the future: Underwriting and service desktops New business automation Convergence of ultra-configurable and domain content-rich systems Better usability and screen design Mitigating the Risk When Making the Decision to Modernize Third-party analyst research from advisory firms like Celent was a key part of the due diligence process for Fidelity as it sought a replacement for its legacy policy administration system back in 2005, according to Vancheri. The company's business opportunities were outrunning system capability. Its legacy system had not been upgraded in several years and was deficient from a functionality and currency standpoint. This was constraining the carrier's ability to rapidly configure and bring new and complex products to market. The company sought a new, modern policy administration system, one that would enable it to keep pace with rapid and often unexpected industry changes and ahead of the competition. 
    A cross-functional team that included representatives from finance, actuarial, operations, client services and IT conducted an extensive selection process. This process included deep documentation review, pilot evaluations, demonstrations of required functionality and complex problem-solving, infrastructure integration capability, and the ability to meet the company's desired cost model. The company ultimately selected an adaptive policy administration system that met its requirements to:
    - Deliver ease of use - eliminating paper and rework, while easing the burden on representatives to sell and service annuities
    - Provide customer parity - offering Web-based capabilities in alignment with the company's focus on delivering a consistent customer experience across its business
    - Deliver scalability, efficiency - enabling automation, while simplifying and standardizing systems across its technology stack
    - Offer desired functionality - supporting Fidelity's product configuration / rules management philosophy, focus on customer service and technology upgrade requirements
    - Meet cost requirements - including implementation, professional services and license fees and ongoing maintenance
    - Deliver upon business requirements - enabling the ability to drive time to market for new products and flexibility to make changes

    Best Practices for Addressing Implementation Challenges

    Based upon lessons learned during the company's implementation, Vancheri advised carriers to evaluate staffing capabilities and cultural impacts, review business requirements to avoid rebuilding legacy processes, factor in dependent systems, and review policies and practices to secure customer data. His formula for success: upfront planning + clear requirements = precision execution.

    Achieving a Return on Investment

    Vancheri said the decision to replace their legacy policy administration system and deploy a modern, rules-based system--before the economic downturn occurred--has been integral in helping the company adapt to shifting market conditions, while enabling growth in its direct channel sales of variable annuities. Since deploying its new policy admin system, the company has reduced its average time to market for new products from 12-15 months to 4.5 months. The company has since migrated its other products to the new system and retired its legacy system, significantly decreasing its overall product development cycle. From a processing standpoint Vancheri noted the company has achieved gains in automation, information, and ease of use, resulting in improved real-time data edits, controls for better quality, and tax handling capability. Plus, by having only one platform to manage, the company has simplified its IT environment and is well positioned to deliver system enhancements for greater efficiencies.

    Commitment to Continuing the Investment

    In the short and longer term future Vancheri said the company plans to enhance business functionality to support money movement, wire automation, divorce processing on payout contracts and cost-based tracking improvements. It also plans to continue system upgrades to remain current as well as focus on further reducing cycle time, driving down maintenance costs, and integrating with other products.

    Helen Pitts is senior product marketing manager for Oracle Insurance focused on life/annuities and enterprise document automation.

    Read the article

  • Mixed Emotions: Humans React to Natural Language Computer

    - by Applications User Experience
    There was a big event in Silicon Valley on Tuesday, November 15. Watson, the natural language computer developed at IBM Watson Research Center in Yorktown Heights, New York, and its inventor and principal research investigator, David Ferrucci, were guests at the Computer History Museum in Mountain View, California for another round of the television game Jeopardy. You may have read about or watched on YouTube how Watson beat Ken Jennings and Brad Rutter, two top Jeopardy competitors, last February. This time, Watson swept the floor with two Silicon Valley high-achievers, one a venture capitalist with a background  in math, computer engineering, and physics, and the other a technology and finance writer well-versed in all aspects of culture and humanities. Watson is the product of the DeepQA research project, which attempts to create an artificially intelligent computing system through advances in natural language processing (NLP), among other technologies. NLP is a computing strategy that seeks to provide answers by processing large amounts of unstructured data contained in multiple large domains of human knowledge. There are several ways to perform NLP, but one way to start is by recognizing key words, then processing  contextual  cues associated with the keyword concepts so that you get many more “smart” (that is, human-like) deductions,  rather than a series of “dumb” matches.  Jeopardy questions often require more than key word matching to get the correct answer; typically several pieces of information put together, often from vastly different categories, to come up with a satisfactory word string solution that can be rephrased as a question.  Smarter than your average search engine, but is it as smart as a human? Watson was especially fast at descrambling mixed-up state capital names, and recalling and pairing movie titles where one started and the other ended in the same word (e.g., Billion Dollar Baby Boom, where both titles used the word Baby). David said they had basically removed the variable of how fast Watson hit the buzzer compared to human contestants, but frustration frequently appeared on the faces of the contestants beaten to the punch by Watson. David explained that top Jeopardy winners like Jennings achieved their success with a similar strategy, timing their buzz to the end of the reading of the clue,  and “running the board”, being first to respond on about 60% of the clues.  Similar results for Watson. It made sense that Watson would be good at the technical and scientific stuff, so I figured the venture capitalist was toast. But I thought for sure Watson would lose to the writer in categories such as pop culture, wines and foods, and other humanities. Surprisingly, it held its own. I was amazed it could recognize a word definition of a syllogism in the category of philosophy. So what was the audience reaction to all of this? We started out expecting our formidable human contestants to easily run some of their categories; however, they started off on the wrong foot with the state capitals which Watson could unscramble so efficiently. By the end of the first round, contestants and the audience were feeling a little bit, well, …. deflated. Watson was winning by about $13,000, and the humans had gone into negative dollars. The IBM host said he was going to “slow Watson down a bit,” and the humans came back with respectable scores in Double Jeopardy. 
This was partially thanks to a very sympathetic audience (and host, also a human) providing “group-think” on many questions, especially baseball ‘s most valuable players, which by the way, couldn’t have been hard because even I knew them.  Yes, that’s right, the humans cheated. Since Watson could speak but not hear us (it didn’t have speech recognition capability), it was probably unaware of this. In Final Jeopardy, the single question had to do with law. I was sure Watson would blow this one, but all contestants were able to answer correctly about a copyright law. In a career devoted to making computers more helpful to people, I think I may have seen how a computer can do too much. I’m not sure I’d want to work side-by-side with a Watson doing my job. Certainly listening and empathy are important traits we humans still have over Watson.  While there was great enthusiasm in the packed room of computer scientists and their friends for this standing-room-only show, I think it made several of us uneasy (especially the poor human contestants whose egos were soundly bashed in the first round). This computer system, by the way , only took 4 years to program. David Ferrucci mentioned several practical uses for Watson, including medical diagnoses and legal strategies. Are you “the expert” in your job? Imagine NLP computing on an Oracle database.   This may be the user interface of the future to enable users to better process big data. How do you think you’d like it? Postscript: There were three little boys sitting in front of me in the very first row. They looked, how shall I say it, … unimpressed!

    Read the article

  • Row Count Plus Transformation

    As the name suggests, we have taken the current Row Count Transform that is provided by Microsoft in the Integration Services toolbox and we have recreated the functionality and extended upon it. There are two things about the current version that we thought could do with cleaning up:
    - Lack of a custom UI
    - You have to type the variable name yourself
    In the Row Count Plus Transformation we solve these issues for you. Another thing we thought was missing is the ability to calculate the time taken between components in the pipeline. An example usage would be that you want to know how many rows flowed between Component A and Component B and how long it took. Again we have solved this issue. Credit must go to Erik Veerman of Solid Quality Learning for the idea behind noting the duration. We were looking at one of his packages and saw that he was doing something very similar, but he was using a Script Component as a transformation. Our philosophy is that if you have to write or copy and paste the same piece of code more than once then you should be thinking about a custom component, and here it is. The Row Count Plus Transformation populates variables with the values returned from:
    - Counting the rows that have flowed through the path
    - Returning the time in seconds between when it first saw a row come down this path and when it saw the final row
    It is possible to leave both these boxes blank and the component will still work. All input columns are passed through the transformation unaltered; you are not permitted to change or add to the inputs or outputs of this component. Optionally you can set the component to fire an event, which happens during the PostExecute phase of the execution. This can be useful to improve visibility of this information, such that it is captured in package logging, or can be used to drive workflow in the case of an error event.

    Properties

    Property                 Data Type                        Description
    OutputRowCountVariable   String                           The name of the variable into which the number of rows read will be passed (optional).
    OutputDurationVariable   String                           The name of the variable into which the duration in seconds will be passed (optional).
    EventType                RowCountPlusTransform.EventType  The type of event to fire during post execute, included in which are the row count and duration values.

    RowCountPlusTransform.EventType Enumeration

    Name         Value  Description
    None         0      Do not fire any event.
    Information  1      Fire an Information event.
    Warning      2      Fire a Warning event.
    Error        3      Fire an Error event.

    Installation

    The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. For 2005/2008 only - finally, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Row Count Plus Transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component?
    We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations; this component requires a minimum of SQL Server 2005 Service Pack 1.

    Downloads

    The Row Count Plus Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.
    - Row Count Plus Transformation for SQL Server 2005
    - Row Count Plus Transformation for SQL Server 2008
    - Row Count Plus Transformation for SQL Server 2012

    Version History

    SQL Server 2012
    - Version 3.0.0.6 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)
    SQL Server 2008
    - Version 2.0.0.5 - SQL Server 2008 release. (15 Oct 2008)
    SQL Server 2005
    - Version 1.1.0.43 - Bug fix for duration. For long running processes the duration second count may have been incorrect. (8 Sep 2006)
    - Version 1.1.0.42 - SP1 compatibility testing. Added the ability to raise an event with the count and duration data for easier logging or workflow. (18 Jun 2006)
    - Version 1.0.0.1 - SQL Server 2005 RTM. Made available as general public release. (20 Mar 2006)

    Troubleshooting

    Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012. If you get an error when you try to use the component along the lines of "The component could not be added to the Data Flow task. Please verify that this component is properly installed. ... The data flow object "Konesans ..." is not installed correctly on this computer", this usually indicates that the internal cache of SSIS components needs to be updated. This is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Once installation is complete you need to manually add the task to the toolbox before you will see it and be able to add it to packages - How do I install a task or transform component?

    Read the article

  • Where Facebook Stands Heading Into 2013

    - by Mike Stiles
    In our last blog, we looked at how Twitter is positioned heading into 2013. Now it’s time to take a similar look at Facebook. 2012, for a time at least, seemed to be the era of Facebook-bashing. Between a far-from-smooth IPO, subsequent stock price declines, and anxiety over privacy, the top social network became a target for comedians, politicians, business journalists, and of course those who were prone to Facebook-bash even in the best of times. But amidst the “this is the end of Facebook” headlines, the company kept experimenting, kept testing, kept innovating, and pressing forward, committed as always to the user experience, while concurrently addressing monetization with greater urgency. Facebook enters 2013 with over 1 billion users around the world. Usage grew 41% in Brazil, Russia, Japan, South Korea and India in 2012. In the Middle East and North Africa, an average 21 new signups happen per minute. Engagement and time spent on the site would impress the harshest of critics. Facebook, while not bulletproof, has become such an integrated daily force in users’ lives, it’s getting hard to imagine any future mass rejection. You want to see a company recognizing weaknesses and shoring them up. Mobile was a weakness in 2012 as Facebook was one of many caught by surprise at the speed of user migration to mobile. But new mobile interfaces, better mobile ads, speed upgrades, standalone Messenger and Pages mobile apps, and the big dollar acquisition of Instagram, were a few indicators Facebook won’t play catch-up any more than it has to. As a user, the cool thing about Facebook is, it knows you. The uncool thing about Facebook is, it knows you. The company’s walking a delicate line between the public’s competing desires for customized experiences and privacy. While the company’s working to make privacy options clearer and easier, Facebook’s Paul Adams says data aggregation can move from acting on what a user is engaging with at the moment to a more holistic view of what they’re likely to want at any given time. To help learn about you, there’s Open Graph. Embedded through diverse partnerships, the idea is to surface what you’re doing and what you care about, and help you discover things via your friends’ activities. Facebook’s Director of Engineering, Mike Vernal, says building mobile social apps connected to Facebook in such ways is the next wave of big innovation. Expect to see that fostered in 2013. The Facebook site experience is always evolving. Some users like that about Facebook, others can’t wait to complain about it…on Facebook. The Facebook focal point, the News Feed, is not sacred and is seeing plenty of experimentation with the insertion of modules. From upcoming concerts, events, suggested Pages you might like, to aggregated “most shared” content from social reader apps, plenty could start popping up between those pictures of what your friends had for lunch.  As for which friends’ lunches you see, that’s a function of the mythic EdgeRank…which is also tinkered with. When Facebook changed it in September, Page admins saw reach go down and the high anxiety set in quickly. Engagement, however, held steady. The adjustment was about relevancy over reach. (And oh yeah, reach was something that could be charged for). Facebook wants users to see what they’re most likely to like, based on past usage and interactions. Adding to the “cream must rise to the top” philosophy, they’re now even trying out ordering post comments based on the engagement the comments get. 
Boy, it’s getting competitive out there for a social engager. Facebook has to make $$$. To do that, they must offer attractive vehicles to marketers. There are a myriad of ad units. But a key Facebook marketing concept is the Sponsored Story. It’s key because it encourages content that’s good, relevant, and performs well organically. If it is, marketing dollars can amplify it and extend its reach. Brands can expect the rollout of a search product and an ad network. That’s a big deal. It takes, as Open Graph does, the power of Facebook’s user data and carries it beyond the Facebook environment into the digital world at large. No one could target like Facebook can, and some analysts think it could double their roughly $5 billion revenue stream. As every potential revenue nook and cranny is explored, there are the users themselves. In addition to Gifts, Facebook thinks users might pay a few bucks to promote their own posts so more of their friends will see them. There’s also word classifieds could be purchased in News Feeds, though they won’t be called classifieds. And that’s where Facebook stands; a wildly popular destination, a part of our culture, with ever increasing functionalities, the biggest of big data, revenue strategies that appeal to marketers without souring the user experience, new challenges as a now public company, ongoing privacy concerns, and innovations that carry Facebook far beyond its own borders. Anyone care to write a “this is the end of Facebook” headline? @mikestilesPhoto via stock.schng

    Read the article

  • Can Kind People Finish First?

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein, Oracle Midsize Programs

    In an earlier post, I expressed my undying love for KIND Snacks' products. This month's Oracle Profit magazine features an interview with KIND Healthy Snacks Founder and CEO Daniel Lubetzky entitled "Better Business". Lubetzky expresses his vision for making KIND a "not for profit only" company. All great companies start with a good idea. In this case, that one great idea was to offer a healthy snack with ingredients you can "see and pronounce". That's one of the things I really like about this company--that, coupled with the fact that their snacks taste great. They compete in an overcrowded playing field but I've found that it's rare to find an energy snack that both tastes good and is good for you. A couple of interesting facts I learned from reading this article:
    - 9 out of 10 consumers who try a KIND bar will purchase a KIND product again and recommend it to others
    - KIND has the highest Net Promoter Score among the top 10 brands in the nutritional bar category (I confess I've never heard about this rating before but now that I have it's pretty cool)
    - KIND's corporate mantra, "Do the Kind Thing", both encourages people to do random acts of kindness and provides easy mechanisms for doing so.
    Not coincidentally, I think, KIND is indeed a story about how nice guys can finish first. KIND has doubled in size every year for the last ten years and now employs over 300 people, with sales exceeding $120M annually.

    Growth Applies Pressures

    One thing I know for certain from interacting with our fast-growing customers over the last fifteen years is that growth applies myriad pressures across the organization--resources, processes, technology systems, and leadership agility. And it's easy to forget that Oracle was once an entrepreneurial startup and experienced all those same pressures that other growing companies are experiencing today. When asked by Profit Editor in Chief Aaron Lazenby, "What sort of pressure does KIND's growth and success place on operations?", Lubetzky responded, "We have a demand planning process right now that is manual to a significant extent, and it just takes so much management time. It takes us days and sometimes weeks to produce information that is critical to our business—and by the time we get the results, we need revised data. Our sales leadership could go out selling, but instead they're talking to our team about forecasts."

    Hitching Your Wagon to Oracle

    Lubetzky and his team selected Oracle for what I believe is our company's greatest strength: hitch your wagon to Oracle and you can trust that we will be there for the long run with the solutions you need and financial staying power.
In Lubetzky's words, "The KIND philosophy requires you to have a long-term view of things; taking shortcuts may be the fastest way to get things done, but in the long term that can come back and bite you. Oracle is the type of company—and has the kind of platform—that is here for the long term. It's not going to go away tomorrow. And Oracle is going to invest all the necessary resources into staying ahead of the game and improving." So next time you're in the supermarket or an REI (my favorite store in the world) or any of the other 80,000 locations that carry KIND, give one a try. Maybe some day you'll want to become a KIND Brand Ambassador. Looking for more news and information about Oracle Solutions for Midsize Companies? Read the latest Oracle for Midsize Companies Newsletter Sign up to receive the latest communications from Oracle's industry leaders and experts. Jim Lein I evangelize Oracle's enterprise solutions for growing midsize companies. I recently celebrated 15 years with Oracle, having joined JD Edwards in 1999. I'm based in Evergreen, Colorado and love relating stories about creativity and innovation whether they be about software, live music, or the mountains. The views expressed here are my own, and not necessarily those of Oracle.

    Read the article

  • YouTube Scalability Lessons

    - by Bertrand Matthelié
Very interesting blog post by Todd Hoff at highscalability.com presenting "7 Years of YouTube Scalability Lessons in 30 min", based on a presentation by Mike Solomon, one of the original engineers at YouTube. The key takeaway of the talk for me was doing a lot with really simple tools. While many teams are moving on to more complex ecosystems, YouTube really does keep it simple. They program primarily in Python, use MySQL as their database, they've stuck with Apache, and even new features for such a massive site start as a very simple Python program. That doesn't mean YouTube doesn't do cool stuff, they do, but what makes everything work together is more a philosophy or a way of doing things than technological hocus pocus. What made YouTube into one of the world's largest websites? Read on and see...
Stats
- 4 billion views a day
- 60 hours of video is uploaded every minute
- 350+ million devices are YouTube enabled
- Revenue doubled in 2010
- The number of videos has gone up 9 orders of magnitude while the number of developers has only gone up two orders of magnitude
- 1 million lines of Python code
Stack
- Python - most of the lines of code for YouTube are still in Python. Every time you watch a YouTube video you are executing a bunch of Python code.
- Apache - when you think you need to get rid of it, you don't. Apache is a real rockstar technology at YouTube because they keep it simple. Every request goes through Apache.
- Linux - the benefit of Linux is there's always a way to get in and see how your system is behaving. No matter how badly your app is behaving, you can take a look at it with Linux tools like strace and tcpdump.
- MySQL - is used a lot. When you watch a video you are getting data from MySQL. Sometimes it's used as a relational database, sometimes as a blob store. It's about tuning and making choices about how you organize your data.
- Vitess - a new project released by YouTube, written in Go; it's a frontend to MySQL. It does a lot of optimization on the fly, rewrites queries, and acts as a proxy. Currently it serves every YouTube database request. It's RPC based.
- Zookeeper - a distributed lock server. It's used for configuration. Really interesting piece of technology. Hard to use correctly, so read the manual.
- Wiseguy - a CGI servlet container.
- Spitfire - a templating system. It has an abstract syntax tree that lets them do transformations to make things go faster.
- Serialization formats - no matter which one you use, they are all expensive. Measure. Don't use pickle; not a good choice. They found protocol buffers slow, and wrote their own BSON implementation, which is 10-15 times faster than the one you can download.
...Continues. Read the blog. Watch the video.

    Read the article

  • Trac vs. Redmine vs. JIRA vs. FogBugz for one-man shop?

    - by kizzx2
Background
I am a one-man freelancer looking for project management software that can meet the following requirements. I have used Trac for about a year now, tried Redmine and FogBugz on Demand for a couple of weeks, and have never tried JIRA. Basically, I'm looking for a piece of software that:
- Facilitates developer-client communication/collaboration
- Does time tracking
Requirements
- Record time estimates/time tracking
- Clients must be able to create/edit their own tickets/cases
- Clients must not see developer-created (internal) tickets/cases
- Affordable (price) with multiple clients
Nice-to-haves
- Supports multiple projects in one installation
- Free Eclipse integration (Mylyn)
- Easy time tracking without using the web UI (Trac's post-commit hook or Redmine's commit message scanning)
- Clients can access the wiki
- Export the data to standard formats
My evaluation
Trac can fulfill most of the above requirements, but only with so many customizations and plug-ins that it doesn't feel clean. One downside is that the main trunk (0.11) has been around for a year or more and I still haven't seen much sign of an upgrade coming. Redmine has the cleanest web UI. Its design philosophy seems to be the most elegant, with its innovative commit message scanning and so on. However, the current version doesn't seem very mature or stable yet: it doesn't support internal (private) tickets, and the time-tracking commit message patch doesn't support the trunk version. The good thing about it is that the main trunk still seems to be actively developed. FogBugz is actually a very well written piece of software, but the idea of paying $25/month for a client to be able to log in to the system seems a bit much for an individual developer. The free version lets clients create/view their own cases using email, which is a sub-optimal alternative to having a full-fledged list of the user's own cases. That also means clients can't read/write wiki pages. Its time-tracking approach is innovative and good, though. However, all the Eclipse integrations (Bugclipse, Foglyn) are commercial: yet more investment before I can even use my bug tracker! If I fall back to the web UI, it's not a particularly fast-rendering web service. On the plus side, the built-in report functions are excellent (e.g. evidence-based scheduling). JIRA is something I have zero experience with. Can someone with JIRA experience explain why it might be a good fit for this particular situation?
Question
Can we share experience on this? Are there any specific plugins/customizations that would best suit the requirements of this case?

    Read the article

  • Mercurial client error 255 and HTTP error 404 when attempting to push large files to server

    - by coderunner
Problem: When attempting to push a changeset that contains 6 large files (.exe, .dmg, etc.) to my remote server, my client (MacHG) reports the error: "Error During Push. Mercurial reported error number 255: abort: HTTP Error 404: Not Found" What does the error even mean?! The only thing unique (that I can tell) about this commit is the size, type, and filenames of the files. How can I determine which exact file within the changeset is failing? How can I delete the corrupt changeset from the repository? Someone reported using "mq" extensions, but it looks overly complicated for what I'm trying to achieve. Background: I can push and pull the following to and from the server: source files, directories, .class files and a .jar file, using both MacHG and TortoiseHG. I successfully committed to my local repository the first-time addition of the 6 large .exe, .dmg, etc. installer files (about 130Mb total). In the following commit to my local repository, I removed ("untracked"/forgot) the 6 files causing the problem; however, the previous (failing) changeset is still queued to be pushed to the server (i.e. my local host is trying to push the "add" and then the "remove" to the remote server, in keeping with the "keep everything in history" philosophy of the source control system). I can commit .txt and .java files etc. using TortoiseHG from Windows PCs. I haven't actually tested committing or pushing the same large files using TortoiseHG. Please help! Setup: Client applications = MacHG v0.9.7 (SCM 1.5.4), and TortoiseHG v1.0.4 (SCM 1.5.4). Server = HTTPS, IIS7.5, Mercurial 1.5.4, Python 2.6.5, set up using these instructions: http://www.jeremyskinner.co.uk/mercurial-on-iis7/ In IIS7.5 the CGI handler is configured to handle ALL verbs (not just GET, POST and HEAD). My hgweb.cgi file on the server is as follows:
#!/usr/bin/env python
#
# An example hgweb CGI script, edit as necessary

# Path to repo or hgweb config to serve (see 'hg help hgweb')
#config = "/path/to/repo/or/config"

# Uncomment and adjust if Mercurial is not installed system-wide:
#import sys; sys.path.insert(0, "/path/to/python/lib")

# Uncomment to send python tracebacks to the browser if an error occurs:
#import cgitb; cgitb.enable()

from mercurial import demandimport; demandimport.enable()
from mercurial.hgweb import hgweb, wsgicgi
application = hgweb('C:\inetpub\wwwroot\hg\hgweb.config')
wsgicgi.launch(application)

My hgweb.config file on the server is as follows:
[collections]
C:\Mercurial Repositories = C:\Mercurial Repositories

[web]
baseurl = /hg
allow_push = usernamea
allow_push = usernameb
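One avenue worth checking (an editor's suggestion, not something stated in the original post): IIS request filtering rejects request bodies larger than roughly 30 MB by default and surfaces the rejection as a 404, which would explain why small pushes succeed while a ~130 MB changeset fails. Below is a minimal sketch of the kind of override to try in the site's web.config, assuming the default limit really is the culprit; the 2 GB value is only an illustrative ceiling, not a recommendation.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- default maxAllowedContentLength is about 30 MB; raise it so large changesets can be pushed -->
        <requestLimits maxAllowedContentLength="2147483648" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>

After changing the limit, recycling the application pool and retrying the push from MacHG or TortoiseHG should show whether the 404 was a request-size issue rather than a problem with the repository itself.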

    Read the article

  • Good style for handling constructor failure of critical object

    - by mtlphil
I'm trying to decide between two ways of instantiating an object and handling any constructor exceptions for an object that is critical to my program, i.e. if construction fails the program can't continue. I have a class SimpleMIDIOut that wraps basic Win32 MIDI functions. It will open a MIDI device in the constructor and close it in the destructor. It will throw an exception inherited from std::exception in the constructor if the MIDI device cannot be opened. Which of the following ways of catching constructor exceptions for this object would be more in line with C++ best practices?
Method 1 - Stack allocated object, only in scope inside try block
#include <iostream>
#include "simplemidiout.h"

int main()
{
    try
    {
        SimpleMIDIOut myOut; //constructor will throw if MIDI device cannot be opened
        myOut.PlayNote(60,100);
        //.....
        //myOut goes out of scope outside this block
        //so basically the whole program has to be inside
        //this block.
        //On the plus side, it's on the stack so
        //destructor that handles object cleanup
        //is called automatically, more inline with RAII idiom?
    }
    catch(const std::exception& e)
    {
        std::cout << e.what() << std::endl;
        std::cin.ignore();
        return 1;
    }

    std::cin.ignore();
    return 0;
}

Method 2 - Pointer to object, heap allocated, nicer structured code?
#include <iostream>
#include "simplemidiout.h"

int main()
{
    SimpleMIDIOut *myOut = NULL; //initialised to NULL so the delete in the catch block is safe if the constructor throws

    try
    {
        myOut = new SimpleMIDIOut();
    }
    catch(const std::exception& e)
    {
        std::cout << e.what() << std::endl;
        delete myOut; //no-op while myOut is still NULL
        return 1;
    }

    myOut->PlayNote(60,100);
    std::cin.ignore();
    delete myOut;
    return 0;
}

I like the look of the code in Method 2 better, I don't have to jam my whole program into a try block, but Method 1 creates the object on the stack so C++ manages the object's lifetime, which is more in tune with the RAII philosophy, isn't it? I'm still a novice at this so any feedback on the above is much appreciated. If there's an even better way to check for/handle constructor failure in a situation like this, please let me know.
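For what it's worth, a third variant may be worth considering. The sketch below is not from the original question; it assumes the same simplemidiout.h header and a C++11 compiler for std::unique_ptr. It keeps the small try block of Method 2 but hands cleanup to a smart pointer, so the destructor runs automatically on every exit path, closer to the RAII behaviour of Method 1:

#include <iostream>
#include <memory>
#include "simplemidiout.h"

int main()
{
    std::unique_ptr<SimpleMIDIOut> myOut; //owns the device; deleted automatically on every return path

    try
    {
        myOut.reset(new SimpleMIDIOut()); //constructor may throw; nothing needs cleaning up if it does
    }
    catch(const std::exception& e)
    {
        std::cout << e.what() << std::endl;
        std::cin.ignore();
        return 1;
    }

    myOut->PlayNote(60,100);

    std::cin.ignore();
    return 0; //unique_ptr's destructor destroys the SimpleMIDIOut (and closes the MIDI device) here
}

The object still ends up on the heap, but ownership is explicit and exception-safe, and there is no manual delete to forget on early returns.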

    Read the article
