Search Results

Search found 2088 results on 84 pages for 'jobs'.


  • Interview with Geoff Bones, developer on SQL Storage Compress

    - by red(at)work
    How did you come to be working at Red Gate? I've been working at Red Gate for nine months; before that I had been at a multinational engineering company. A number of my colleagues had left to work at Red Gate and spoke very highly of it, but I was happy in my role and thought, 'It can't be that great there, surely? They'll be back!' Then one day I visited to catch up with them over lunch in the Red Gate canteen. I was so impressed with what I found there that, three days later, I'd applied for a role as a developer.
    And how did you get into software development? My first job out of university was working as a systems programmer on IBM mainframes. This was quite a while ago: there was a lot of assembler and loading programs from tape drives and that kind of stuff. I learned a lot about how computers work, and this stood me in good stead when I moved over to development in the 90s.
    What's the best thing about working as a developer at Red Gate? Where should I start? One of the great things as a developer at Red Gate is the useful feedback and close contact we have with the people who use our products, either directly at trade shows and other events or through information coming through the product managers. The company's whole ethos is built around assisting the user, and this is in stark contrast to my previous development roles. We aim to produce tools that people really want to use, that they enjoy using, and, as a developer, this is a great thing to aim for and a great feeling when we get it right. At Red Gate we also try to cut out the things that distract and stop us doing our jobs. As a developer, this means that I can focus on the code and the product I'm working on, knowing that others are doing a first-class job of making sure that the builds are running smoothly and that I'm getting great feedback from the testers. We keep our process light and effective, as we want to produce great software more than we want to produce great audit trails.
    Tell us a bit about the products you are currently working on. You mean HyperBac? First let me explain a bit about what HyperBac is. At heart it's a compression and encryption technology, but with a few added features that open up a wealth of really exciting possibilities. Right now we have the HyperBac technology in just three products: SQL HyperBac, SQL Virtual Restore and SQL Storage Compress, but we're only starting to develop what it can do. My personal favourite is SQL Virtual Restore; for example, I love the way you can use it to run independent test databases that are all backed by a single compressed backup. I don't think the market yet realises the kind of things you can do once you are using these products. On the other hand, the benefits of SQL Storage Compress are straightforward: run your databases but use only 20% of the disk space. Databases are getting larger and larger, and, as they do, so does your ROI.
    What's a typical day for you? My days are pretty varied. We have our daily team stand-up meeting and then sometimes I will work alone on a current issue, or I'll be pair programming with one of my colleagues. From time to time we give half a day up to future planning with the team, when we look at the long- and short-term aims for the product and work out the development priorities. I also get to go to conferences and events, which is unusual for a development role and gives me the chance to meet and talk to our customers directly.
    Have you noticed anything different about developing tools for DBAs rather than other kinds of IT user? It seems to me that DBAs are quite independent-minded; they know exactly what problem they are facing, and often have a solution in mind before they begin to look at what's on the market. This means that they're likely to cherry-pick tools from a range of vendors, picking the ones that are the best fit for them and that disrupt their environments the least. When I've met with DBAs, I've often been very impressed at their ability to summarise their setup, the issues, the obstacles they face when implementing a tool and their plans for their environment. It's easier to develop products for this audience as they give such a detailed overview of their needs, and I feel I understand their problems.

    Read the article

  • Deep in the Heart of Texas

    - by Applications User Experience
    Author: Erika Webb, Manager, Fusion Applications UX User Assistance
    When I was first working in the usability field, the only way I could consider conducting a usability study was to bring a potential user to a lab environment where I could show them whatever I was interested in learning more about and ask them questions. While I hate to reveal just how long I have been working in this field, let's just say that pads of paper and a stopwatch were key tools for any test I conducted. Over the years, I have worked in simple labs with basic videotaping equipment and not much else, and I have worked in corporate environments with sophisticated usability labs and state-of-the-art equipment. Years ago, we conducted all usability studies at the location of the user. If we wanted to see if there were any differences between users in New York, Chicago, and Los Angeles, we went to those places to run the test. A lab environment is very useful for many test situations. However, there has always been a debate in the usability field about whether bringing someone into a lab environment, however friendly we make it, somehow intrinsically changes the behavior of the user as compared to having them work in their own environment, at their own desk, and on their own computer. We developed systems to create a portable usability lab, so that we could go to the users that we needed to test. Do lab environments change user behavior patterns?
    Then 9/11 hit. You may not remember, but no planes flew for weeks afterwards. Companies all over the world couldn't fly in employees for meetings. Suddenly, traveling to the location of the users had an additional difficulty. The company I was working for at the time had usability specialists stuck in New York for days before they could finally rent a car and drive home to Colorado. This changed the world pretty suddenly, and technology jumped on the change. Companies offering Internet meeting tools were struggling until no one could travel. The Internet boomed with collaboration tools that enabled people to work together wherever they happened to be. This change in technology has made a huge difference in my world. We use collaborative tools to bring our product concepts and ideas to the user across the Internet. As a global company, we benefit from having users from all over the world inform our designs. We now run usability studies with users all over the world in a single day, a feat we couldn't have accomplished 10 years ago by plane! Other technology companies have started to do more of this type of usability testing, since the tools have improved so dramatically. Plus, in our busy world, it's not always easy to find users who can take the time away from their jobs to come to our labs. Reaching users where it is convenient for them greatly improves the odds that people do participate.
    I manage a team of usability specialists who live in India and California, while I live in Colorado. We have wonderful labs that we bring users into to show them our products. But very often, we run our studies remotely. We used to take the lab to the users; now we use the labs, but we let the users stay where they are. We gain users who might not have been able to leave work to come to our labs, and they get to use the system they are familiar with. And we gain users nearly anywhere that we can set up an Internet connection, as long as the users have a phone, a broadband connection, and a compatible Web browser (with no pop-up blockers). 
    After we recruit participants in a traditional manner, we send them an invitation to participate through the use of a telephone conference call and a Web conferencing tool. At Oracle, we use Oracle Web Conference, part of Oracle Collaboration Suite, which enables us to give the user control of the mouse while we present a prototype or wireframe pictures. We can record the sessions over the Web and phone conference. We send the users instructions, plus tips to ensure that we won't have problems sharing screens. In some cases, when time is tight, we even run a five-minute "test session" with users a day in advance to be sure that we can connect. Prior to the test, we send users a participant script that contains information about the study, including any questionnaires. This is exactly the same script we give to participants who come to the labs. We ask users to print this before the beginning of the session. We generally run these studies by having a usability engineer in our usability labs, so that we can record the session as though the user were in the lab with us. Roughly 80% of our application software usability testing at Oracle is performed using remote methods. The probability of getting a remote test participant decreases the higher up the person is in the target organization. We have a methodology checklist available to help our usability engineers work through the remote processes.

    Read the article

  • The Oracle Graduate Experience...A Graduate's Perspective by Angelie Tierney

    - by david.talamelli
    [Note: Angelie has just recently joined Oracle in Australia in our 2011 Graduate Program. Last week I shared my thoughts on our 2011 Graduate Program; this week Angelie took some time to share her thoughts on our Graduate Program. The notes below are Angelie's overview of her experience with us, starting with our first contact last year - David Talamelli]
    How does the 1 year program work? It consists of 3 weeks of training, followed by 2 rotations in 2 different Lines of Business (LoBs). The first rotation goes for 4 months, while your 2nd rotation goes for 7, when you are placed into your final LoB for the program.
    The interview process: After sorting through the many advertised graduate jobs, submitting so many resumes and studying at the same time, it can all be pretty stressful. Then there is the interview process. David called me on a Sunday afternoon and I spoke to him for about 30 minutes in a mini sort of phone interview. I was worried that working at Oracle would require extensive technical experience, but David stressed that even the less technical, more business-minded person could, and did, work at Oracle. I was then asked if I would like to attend a group interview in the next weeks, to which I said of course! The first interview was a day long, consisting of a brief introduction and a group interview where we worked on a business plan with a group of other potential graduates and were marked by 3 Oracle employees on our ability to work together and our presentation. After lunch, we then had a short individual interview each, and that was the end of the first round. I received a call a few weeks later and was asked to come into a second interview, an opportunity I also jumped at. This was an interview based purely on your individual abilities and would help to determine which Line of Business you would go to, should you land a graduate position.
    So how did I cope throughout the interview stages? I believe the best way to prepare for the interview was to research Oracle and its culture and to see if I thought I could fit into that. I personally found out about Oracle, its partners as well as its competitors, and along the way even found out about their part (or Larry Ellison's specifically) in the Iron Man 2 movie. Armed with some Oracle information and lots of enthusiasm, I approached the Oracle Graduate Interview process.
    Why did I apply for an Oracle graduate position? I studied a Bachelor of Business/Bachelor of Science in IT, and wanted to be able to use both my degrees, while having the ability to work internationally in the future. Coming straight from university, I wasn't sure exactly what I wanted to do in terms of my career. With the program, you are rotated across various lines of business, to not only expose you to different parts of the business, but to also help you figure out what you want to achieve in your career. As a result, I thought Oracle was the perfect fit.
    So what can an Oracle ANZ Graduate expect? First things first, you can expect to line up for your visitor pass. Really. Next you enter a room full of unknown faces, graduates just like you, and then you realise you're in this with 18 other people going through the same thing as you. 3 weeks later you leave with many memories, colleagues you can call your friends, and a video of your presentation. Vanessa, the Graduate Manager, will also take lots of photos and keep you (well) fed.
    Well, that's not all you leave with; you are also equipped with a wealth of knowledge and contacts within Oracle, both of which will help you throughout your career there.
    What training is involved? We started our Oracle experience with 3 weeks of training, consisting of employee orientation, extensive product training, presentations on the various lines of business (LoBs), followed by sales and presentation training. While there was potential for an information overload, maybe even death by PowerPoint, we were able to have access to the presentations for future reference, which was very helpful. This period also allowed us to start networking, not only with the graduates, but with the managers who presented to us, as well as through the monthly chinwag, HR celebrations and even the sharing of tea facilities. We also had a team bonding day when we recorded a "commercial" within groups and learned how to play an Irish drum. Overall, the training period helped us to learn about Oracle, as well as ourselves, and prepared us for our transition into our rotations.
    Where to now? I'm now into my 2nd week of my first graduate rotation. It has been exciting to finally get out into the work environment and utilise the knowledge we gained from training. My manager has been a great mentor, extremely knowledgeable, and it has been good being able to participate in meetings and conference calls and make a contribution towards the business. And while we aren't necessarily working directly with the other graduates, they are still reachable via email, Pidgin and lunch, and they are important as a resource and support; after all, they are going through a similar experience to you. While it is only the beginning, there is a lot more to learn and a lot more to experience along the way, especially because, as we learned during training, at Oracle the only constant is change.

    Read the article

  • Ok it has been pointed out to me

    - by Ratman21
    That it seems my blog is more of a "poor me" or "pity me" or "I deserve a job" blog. Hmmm, I won't say I have not whined here, as I have used this blog to vent my frustration about the whole out-of-work thing (lack of money, self worth, family issues and the never-ending bills coming my way), but it was also me trying to reach out to others in the same boat, as well as advertising: hey, employers, I am out here. It was also said that I don't have anything listed here about me, like a cover letter or resume. Well, there is, but it was so many months and posts ago. Also, what I had posted is not current. So here is my most current cover letter and resume.
    Scott L Newman
    45219 Dutton Way
    Callahan, Fl. 32011
    To Whom It May Concern:
    I am really interested in the IT vacancy that you have listed for your company. Maybe I don't have all the qualifications you want (hold on, don't hit delete yet) yet! But maybe I do, as I have over 20+ years of experience in "IT" RIGHT NOW. Read the rest of my cover letter and my resume. You will see what my "IT" skills are, and it will show that I can do this work! I can bring to your company, along with my can-do attitude, a broad range of skills, including:
    - Certified CompTIA A+, Security+ and Network+ Technician
    - 2.5 years (NOC) network experience on a large Cisco-based WAN – UK to Austria
    - 20 years experience MIS/DP – yes, I can do IBM mainframes and Tandem NonStops too
    - 18 years experience as technical help desk support – panicking users, no problem
    - 18 years experience with PC/server based systems, intranet and internet systems
    - 10+ years experience on Microsoft Office, Windows XP and Data Network Fundamentals (YES, I do Windows)
    - Strong troubleshooting skills for software, hardware and circuit issues (and I can tell you what kind of horrors I had to face on all of them)
    - Very experienced in working with customers on problems – again, panicking users, no problem
    - Working experience with Remote Access (VPN/SecurID) – I didn't just study them, I worked on/with them
    - Skilled in getting info for and creating documentation for operation procedures (I don't just wait for them to give it to me, I go out and get it. Waiting for info on working applications is, well, dumb)
    - Multiple software languages (hey, I have done some programming)
    And much more experience in "IT" (mortgage, stocks and financial information systems experience, and I have worked "IT" in a hospital). I can multitask, and I have the ability to adapt to change and learn quickly. (I was once put in charge of a system that I had not worked with for over two years. Talk about having to relearn and adapt to changes, but I did it.)
    I would welcome the opportunity to further discuss this position with you. If you have questions or would like to schedule an interview, please contact me by phone at 904-879-4880, on my cell at 352-356-0945, by e-mail at [email protected], or leave a message on my web site (http://beingscottnewman.webs.com/). I have enclosed/attached my resume for your review, and I look forward to hearing from you. Thank you for taking a moment to consider my cover letter and resume. I appreciate how busy you are.
    Sincerely, Scott L. Newman
    Scott L. Newman | 45219 Dutton Way, Callahan, FL 32011 | H (904) 879-4880 | C (352) 356-0945
    [email protected] | Web: http://beingscottnewman.webs.com/
    OBJECTIVE: To obtain a Network Operations or helpdesk position.
    PROFILE: Information Technology professional with 20+ years of experience. Volunteer website creator and back-up sound technician at True Faith Christian Fellowship. CompTIA A+, Network+ and Security+ certified.
    TECHNICAL AND PROFESSIONAL SKILLS: Technical Support, Frame Relay, Microsoft Office Suite, Inventory Management, ISDN, Windows NT/98/XP, Client/Vendor Relations, CICS, Cisco Routers/Switches, Networking/Administration, RPG, Helpdesk, Website Design/Dev./Management, Assembler, Visio, Programming, COBOL IV
    EDUCATION:
    - New Horizons Computer Learning Center, Jacksonville, Florida – CompTIA A+, Security+ and Network+ certified. Currently working on CCNA certification.
    - Mott Community College, Flint, Michigan – Associate's Degree, Data Processing and General Education
    - Currently studying Japanese
    PROFESSIONAL:
    True Faith Christian Fellowship Church – Callahan, FL, October 2009 – Present: Web Site Tech
    - Web site creator/tech, back-up song leader and back-up sound technician. Note: the church web site is http://ambassadorsforjesuschrist.webs.com/
    U.S. Census (temp employee), Feb. 23 to March 8, 2010
    - Enumerator for Nassau County
    Thomas Creek Baptist Church – Callahan, FL, June 2008 – September 2009: Sound and Video Technician
    - Church sound and video technician
    Fidelity National Information Services – Jacksonville, FL – February 01, 2005 to October 28, 2008: Client Server Dev/Analyst I
    - Monitored multiple debit card sites, Check Authorization customers and the Card Auth system (AuthNet) for problems with the sites, connections, servers (on our LAN) and/or applications
    - Night (NOC) network operator for a large Wide Area Network (WAN)
    - Monitored multiple Check Authorization customers for problems with circuits, routers and applications
    - Resolved circuit and/or router issues or assisted the circuit carrier in resolving issues
    - Resolved application problems or assisted application support in resolution
    - Liaison between customer and application support
    - Maintained and updated the NetOps Operation Procedures Guide
    - Kept the listing of equipment on the raised floor updated
    - Involved in the training of all night Check and Card server operation operators
    - FNIS acquired Certegy in 2005. Was one of 3 kept on.
    Certegy – St. Pete, FL – August 31, 2003 to February 1, 2005: Senior NetOps Operator (FNIS acquired Certegy in 2005; all of the above jobs/skills were the same as listed for FNIS)
    - Converted documentation to Adobe format
    - Sole trainer of day/night shift System Management Center (SMC) operators
    - Equifax spun off the Card/Check Dept. as Certegy. Certegy terminated its contract with EDS. One of six in the whole IT dept. that was kept on.
    EDS (Certegy account) – St. Pete, FL – July 1, 1999 to August 31, 2003: Senior NetOps Operator
    - Equifax outsourced the NetOps dept. to EDS in 1999.
    - Same job skills as listed above for FNIS.
    Equifax – St. Pete & Tampa, FL – January 1, 1991 to July 1, 1999: NetOps/Tandem Operator
    - All of the above for FNIS, except for circuit and router issues
    - Operated, monitored and troubleshot Tandem mainframes and servers on the LAN
    - Supported the operation of the print, tape and microfiche rooms
    - Equifax acquired TelaCredit in 1991.
    TelaCredit – Tampa, FL – June 28, 1989 to January 1, 1991: Tandem Operator
    - Operated and monitored Tandem NonStop systems for card and check auths
    - Operated multiple high-speed laser printers and microfiche printers
    - Mounted, filed and maintained 18 reel-to-reel mainframe tape drives, cartridge tape drives and the tape library.

    Read the article

  • When does "proper" programming no longer matter?

    - by Kai Qing
    I've been a full time programmer for about 8 years now. Web based mostly, ranging across weird jobs for clients. Never anything I "want" to do. So my experience is limited to what I've been contracted to do, having no real incentive to master anything in particular. So here's my scenario and ultimately what I wonder about...
    I've been building an Android game in my spare time. It's using the libgdx library, so quite a bit of the heavy lifting is done for me. I don't read much of the docs because unless it's in tutorial format I will just not care, and ultimately most of my questions have already been asked on Stack Overflow. I get along fine and my game works as expected... suspiciously well, even. So much so that I wonder why one should bother to be "proper" when coding if the end result is ultimately the same. To be more specific, I used a hashtable because I wanted something close to an associative array. Human-readable key values. In other places to achieve similar things, I use a vector. I know libgdx has Vector2 and Vector3 classes, but I've never used them. When I come across weird problems and search Stack Overflow for help, I see a lot of people just reaming the questions that use a certain datatype when another one is technically "proper." Like using an ArrayList because it does not require defined bounds versus re-defining an int[] with new known boundaries. Or even something trivial like this: for(int i = 0; i < items.length; i++) { // do something }. I know it evaluates items.length on every iteration. I just don't care. I know items will never be more than 15 to 20 items. So why bother caring if I evaluate items.length on every iteration?
    So I wonder - why does everyone get all up in arms over this? Who cares if I use a less efficient datatype to get the job done? I ran some tests to see how the app performs using the lazy, get-it-done-fast-and-don't-look-back method I just described versus the proper approach of following the tutorials and using the exact data types suggested by the community. The results: same thing. Average 45 fps. I opened every app on the phone and the Galaxy Tab. Same deal. No difference. My game is pretty graphics-intensive. It's not like it's just a simple thing. I expected it to perform kind of badly since I don't care to optimize image assets or... well, you probably get the idea. I'm making the game for fun. As a joke, really. But in doing so I'm working outside the normal scope of my job, which is to always follow the rules and do it the right way. So to speak, I am without bounds here, and this has caused me to wonder why I ever really care to be "proper".
    So I guess my question to you is this: Is there a threshold when it no longer matters to be proper? Is there a lasting, longer-term consequence to the lazy, get-it-done-and-don't-look-back route? Is it OK to say, "so long as it gets the job done, I don't care"?
    Disclaimer: When I program my game, I am almost always drunk. I do it to remember why I got into this stuff to begin with, because the monotony of client-based web work will make you hate being a programmer. I'm having a blast and my game is not crashing, tests well, performs well, looks good on all devices so far and has no noticeable negative impact on any of my testing devices. I expected failure because I was being so drunkenly careless with my code, but to my surprise, it had no noticeable impact. I am now starting to question the need to be careful. Help me regain the ability to care! ... or explain why it's not a bad thing to not care.
    Secondary disclaimer: I am aware of the benefits of maintainability. For myself and others. Agreed. But it's not like someone happening across my inefficient int[] loop won't know what it does. To an experienced programmer, those kinds of things are just clear on sight. I document the complex stuff for myself, knowing I was drunk and will probably need a reminder. Those notes would clarify any confusion for someone who might ever gaze upon my ridiculous game - though the reality is that either I maintain it myself or it fades into time. I'm OK with that. But if it doesn't slow the device down, or crash, then crossing the t's and dotting the i's might actually require more time than it's worth.
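    As an aside, the two loop styles and datatype choices contrasted above can be put side by side in a few lines. This is a minimal, self-contained Java sketch (the class and variable names are invented for illustration, not taken from the question's actual game code):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class LazyVersusProper {
            public static void main(String[] args) {
                int[] items = new int[20]; // small, fixed-size data, as in the question

                // "Lazy" form: items.length is read on every iteration.
                for (int i = 0; i < items.length; i++) {
                    items[i] = i;
                }

                // "Proper" form: the length is hoisted into a local once.
                int n = items.length;
                for (int i = 0; i < n; i++) {
                    items[i] += 1;
                }

                // Human-readable keys, the associative-array use case:
                Map<String, Integer> stats = new HashMap<>();
                stats.put("averageFps", 45);

                // A growable list versus re-allocating an int[]:
                List<Integer> scores = new ArrayList<>();
                scores.add(stats.get("averageFps"));
                System.out.println(scores);
            }
        }

    For an array of 15 to 20 elements the difference between the two loops is unmeasurable, and a JIT compiler will typically hoist the array-length read out of the loop anyway, which is consistent with the identical 45 fps results reported above.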

    Read the article

  • PASS: 2013 Summit Location

    - by Bill Graziano
    HQ recently posted a brief update on our search for a location for 2013.  It includes links to posts by four Board members and two community members. I'd like to add my thoughts to the mix and ask you a question.  But I can't give you a real understanding without telling you some history first.
    So far we've had the Summit in Chicago, San Francisco, Orlando, Dallas, Denver and Seattle.  Each has a slightly different feel and distinct memories.  I enjoyed getting drinks by the pool in Orlando after the sessions ended.  I didn't like that our location in Dallas was so far away from all the nightlife.  Denver was in downtown but we had real challenges with hotels.  I enjoyed the different locations.  I always enjoyed the announcement of the next Summit's location during the third keynote.
    There are two big events that impacted my thinking on the Summit location.  The first was our transition to the new management company in early 2007.  The event that September in Denver was put on with a six-month planning cycle by a brand new headquarters staff.  It wasn't perfect but came off much better than I had dared to hope.  It also moved us out of the cookie-cutter conferences that we used to do into a model where we have a lot more control.  I think you'll all agree that the production values of our last few Summits have been fantastic.  That Summit also led to our changing relationship with Microsoft.  Microsoft holds two seats on the PASS Board.  All the PASS Board members face the same challenge: we all have full-time jobs and PASS comes in second place professionally (or sometimes further back).  Starting in 2008 we were assigned a liaison from Microsoft who had a much larger block of time to coordinate with us.  That changed everything between PASS and Microsoft.  Suddenly we were talking to product marketing, Microsoft PR, their event team, the Tech*Ed team, the education division, their user group team and their field sales team – locally and internationally.  We strengthened our relationship with CSS, SQLCAT and the engineering teams.  We had exposure at the executive level that we'd never had before.  And their level of participation at the Summit changed from under 100 people to 400-500 people.  I think those 400+ Microsoft employees have value at a conference on Microsoft SQL Server.  For the first time, Seattle had a real competitive advantage over other cities.
    I'm one who looked very hard at staying in Seattle for a long, long time.  I think those Microsoft engineers have value to our attendees.  I think the increased support that Microsoft can provide when we're in Seattle has value to our attendees.  But that doesn't tell the whole story.  There's a significant (and vocal!) percentage of our membership that wants the Summit outside Seattle.  Post-2007 PASS doesn't know what it's like to have a Summit outside of Seattle.  I think until we have a Summit in another city we won't really know the trade-offs. I think a model where we move every third or every other year is interesting.  But until we have another Summit outside Seattle and can evaluate the logistics and how important it is to have depth and variety in our Microsoft participation, we won't really know.
    Another benefit that comes with a move is variety, or diversity.  I learn more when I'm exposed to new things and new people.  I believe that moving the Summit will give a different set of people an opportunity to attend. 
Grant Fritchey writes “It seems that the board is leaning, extremely heavily, towards making it a permanent fixture in Seattle.”  I don’t believe that’s true.  I know there was discussion of that earlier but I don’t believe it’s true now. And that brings me to my question.  Do we announce the city now or do we wait until the 2012 Summit?  I’m happy to announce Seattle vs. not-Seattle as soon as we sign the contract.  But I’d like to leave the actual city announcement until the 2011 Summit.  I like the drama and mystery of it.  I also like that it doesn’t give you a reason to skip a Summit and wait for the next one if it’s closer or back in Seattle.  The other side of the coin is that your planning is easier if you know where it is.  What do you think?

    Read the article

  • Professional Development – Difference Between Bio, CV and Resume

    - by Pinal Dave
    Applying for work can be very stressful – you want to put your best foot forward, and it can be very hard to sell yourself to a potential employer while highlighting your best characteristics and answering questions.  On top of that, some jobs require different application materials – a biography (or bio), a curriculum vitae (or CV), or a resume.  These things seem so interchangeable, so what is the difference?
    Let's start with the one most of us have heard of – the resume.  A resume is a summary of your job and education history.  If you have ever applied for a job, you will have used a resume.  The ability to write a good resume that highlights your best characteristics and emphasizes your qualifications for a specific job is a skill that will take you a long way in the world.  Unfortunately, for such an essential skill, it is one that many people struggle with.
    RESUME
    So let's discuss what makes a great resume.  First, make sure that your name and contact information are at the top, in large print (a slightly larger font than the rest of the text – size 14 or 16 if the rest is size 12, for example).  You need to make sure that if you catch the recruiter's attention, they know how to get hold of you. As for qualifications, be quick and to the point.  Make your job title and the company the headline, and include your skills, accomplishments, and qualifications as bullet points.  Use good action verbs, like "finished," "arranged," "solved," and "completed."  Include hard numbers – don't just say you "changed the filing system," say that you "revolutionized the storage of over 250 files in less than five days."  Doesn't that sentence sound much more powerful?
    Curriculum Vitae (CV)
    Now let's talk about the curriculum vitae, or "CV".  A CV is more like an expanded resume.  The same rules still apply: put your name front and center, keep your contact info up to date, and summarize your skills with bullet points.  However, CVs are often required in more technical fields – like science, engineering, and computer science.  This means that you need to really highlight your education and technical skills.
    Difference between Resume and CV
    Resumes are expected to be one or two pages long – CVs can be as many pages as necessary.  If you are one of those people lucky enough to feel limited by the size constraint of resumes, a CV is for you!  On a CV you can expand on your projects, highlight really exciting accomplishments, and include more educational experience – including GPA and test scores from the GRE or MCAT (as applicable).  You can also include awards, associations, teaching and research experience, and certifications.  A CV is a place to really expand on all your experience and how great you will be in this particular position.
    Biography (Bio)
    Chances are, you already know what a bio is, and you have even read a few of them.  Think about the one or two paragraphs that every author includes on the back flap of a book.  Think about the sentences under a blogger's photo on every "About Me" page.  That is a bio.  It is a way to quickly highlight your life experiences.  It is essentially the way you would introduce yourself at a party.
    Where a bio is required for a job, chances are they won't want to know about where you were born and how many pets you have, though.  This is a way to summarize your entire job history in a quick-to-read format – and sometimes during a job hunt, being able to get to the point and grab the recruiter's interest is the best way to get your foot in the door.
    Think of a bio as your entire resume put into prose. Most bios have a standard format.  In paragraph one, talk about your most recent position and your accomplishments there, specifically how they relate to the job you are applying for.  If you have teaching or research experience, training experience, certifications, or management experience, talk about them in paragraph two.  Paragraphs three and four are for highlighting publications, education, certifications, associations, etc.  To wrap up your bio, provide your contact info and availability (dates and times).
    Where to use What?
    For most positions, you will know exactly what kind of application to use, because the job announcement will state what materials are needed – resume, CV, bio, cover letter, skill set, etc.  If there is any confusion, choose whatever the industry standard is (a CV for technical fields, a resume for everything else) or choose whichever of your documents is the strongest.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: About Me, PostADay, Professional Development, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Infiniband: a high-performance network fabric - Part I

    - by Karoly Vegh
    Introduction: At OpenWorld this year I managed to chat with interesting people again - one of them, answering Infiniband deep-dive questions with ease over coffee, turned out to be one of Oracle's IB engineers, Ted Kim, who actually actively participates in the Infiniband Trade Association and integrates Oracle solutions with this high-speed network. This is why I love attending OOW. He granted me an hour of his time to talk about IB. This post is mostly based on that tech interview.
    Start of the actual post: Traditionally, data transfer between servers and storage elements happens in networks with up to 10 gigabit/second speeds or in SANs with up to 8 gbps Fibre Channel connections. Happens. Well, data rather trickles through. But nowadays data amounts grow well over the terabyte order of magnitude, and multisocket/multicore/multithread servers hunger for data that these transfer technologies just can't deliver fast enough, causing all the CPUs of this world to do one thing at the same speed - waiting for data. And once again, I/O is the bottleneck in computing. FC and Ethernet can't keep up. We have half-TB SSDs and dozens of TB of RAM to store data to be modified in, but we can't transfer it. Can't back up fast enough, can't replicate fast enough, can't synchronize fast enough, can't load fast enough. The bad news is, everyone is used to this, like back in the '80s everyone was used to starting compile jobs and going for a coffee. Or on vacation. The good news is, there's an alternative. Not so-called "bleeding-edge" 8 gbps, but (as of now) 56. Not layers of overhead, but low latency. And it is available now. It has been for a while, actually. Welcome to the world of Infiniband.
    Short history: Infiniband was born as a result of the joint efforts of HPAQ, IBM, Intel, Sun and Microsoft. They planned to implement a next-generation I/O fabric, in the 90s. In the 2000s Infiniband (from now on: IB) was quite popular in the high-performance computing field, powering most of the top500 supercomputers. Then in the middle of the decade, Oracle realized its potential and used it as an interconnect backbone for the first Database Machine, the first Exadata. Since then, IB has been booming; Oracle utilizes and supports it in a large set of its HW products, and it is the backbone of the famous Engineered Systems: Exadata, SPARC SuperCluster, Exalogic, OVCA and even the new DB backup/recovery box. You can also use it to make servers talk high-speed IP to each other, or to a ZFS Storage Appliance. Following Oracle's lead, even IBM has jumped on the wagon, and leverages IB in its PureFlex systems, their first InfiniBand machines.
    IB structural overview: If you want to use IB in your servers, the first thing you will need is PCI cards, in IB terms Host Channel Adapters, or HCAs. Just like NICs for Ethernet, or HBAs for FC. In these you plug an IB cable, going to an IB switch providing connection to other IB HCAs. Of course you're going to need drivers for those in your OS. Yes, these have long been available for Solaris and Linux. Now, what protocols can you talk over IB? There's a range of choices. See, IB doesn't accept packet loss like Ethernet does, and hence doesn't need to rely on TCP/IP as a workaround for resends. That is, you still can run IP over IB (IPoIB), and that is used in various cases for control functionality, but the data transfer can run over more efficient protocols - like native IB.
    About PCI connectivity: IB cards, as you see, are fast. They bring low latency, which is just as important as their bandwidth. 
    Current IB cards run at 56 gbit/s. That is slightly more than double the capacity of a PCI Gen2 slot (~25 gbit/s). And IB cards are usually equipped with two ports - that is, altogether you'd need 112 gbit/s of PCI bandwidth to be able to utilize FDR IB cards in an active-active fashion. PCI Gen3 slots provide you with around 50 gbps. This is why most IB cards are configured in an active-standby way if both ports are used. Once again the PCI slot is the bottleneck. Anyway, the new Oracle servers are equipped with Gen3 PCI slots, and the new IB HCAs support those too. Oracle utilizes the QDR HCAs, running at 40 gbps gross, which translates to 32 gbps of net traffic due to the 10:8 signal-to-data information ratio.
    Consolidation techniques: Technology never stops evolving. Mellanox is already working on the 100 gbps (EDR) version, which will be optical, since signal technology doesn't allow EDR to be copper. Also, I hear you say "100 gbps? I will never use/need that much". Are you sure? Have you considered consolidation scenarios, where (for example with Oracle Virtual Network) you could consolidate your platform into a high-density virtualized solution providing many virtual 10 gbps interfaces through that 100 gbps? Technology never stops evolving. I still remember when a 10 mbps network was impressively fast. Back in those days, 16 MB of RAM was a lot. Now we usually run servers with around 100,000 times more RAM. If network infrastructure speeds could grow as fast as main memory capacities, we'd have a different landscape now :)
    You can utilize SRIOV as well for consolidation. That is, if you run LDoms (aka Oracle VM Server for SPARC) you do not have to add physical IB cards to all your guest LDoms, and you do not need to run VIO devices through the hypervisor either (avoiding overhead). You can enable SRIOV on those IB cards, which practically virtualizes the PCI bus, and you can dedicate Physical and Virtual Functions of the virtualized HCAs as native, physical HW devices to your guests. See Raghuram's excellent post explaining SRIOV. SRIOV for IB has been supported since LDoms 3.1.
    This post is getting lengthier, so I will rename it to Part I and continue it in a second post. 
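    The bandwidth arithmetic above is easy to check for yourself; here is a quick back-of-the-envelope sketch in Java, using only the figures quoted in the post (the class name and the approximate PCI slot capacities are illustrative assumptions):

        public class IbBandwidthMath {
            public static void main(String[] args) {
                // Figures quoted in the post, in gbit/s
                double qdrGrossPerPort = 40.0; // QDR HCA gross signal rate
                double fdrGrossPerPort = 56.0; // FDR HCA gross signal rate
                double pciGen3Capacity = 50.0; // approximate PCI Gen3 slot capacity

                // 10:8 signal-to-data ratio: 8 data bits ride in 10 signal bits.
                double qdrNet = qdrGrossPerPort * 8.0 / 10.0; // = 32 gbit/s

                // A dual-port FDR card driven active-active:
                double dualPortDemand = 2 * fdrGrossPerPort; // = 112 gbit/s

                System.out.printf("QDR net data rate: %.0f gbit/s%n", qdrNet);
                System.out.printf("Dual-port FDR demand: %.0f gbit/s vs PCI Gen3 slot: %.0f gbit/s%n",
                        dualPortDemand, pciGen3Capacity);
                // The slot, not the card, is the bottleneck - hence active-standby configs.
            }
        }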

    Read the article

  • Day 1 - Finding Like Minds

    - by dapostolov
    So, is being a Game Developer any different from being an IT Developer? I picture a poorly lit environment where I get to purchase my own desk lamp; I'm thinking one of those huge lava lamps that pump out so much heat you could fry an egg on it. To my right: a "great wall" of empty coke cans dwarfs me. Eating my last slice of pizza, I look across my desk to see a fellow developer with a smug look on his face; he's just coded his latest module for the game and it looks like he's in nirvana. My duty, of course, is to remind him to keep focused on the job at hand. So, picking up my trusty elastic and an aerodynamically crafted paper bullet, I begin a 10 minute war of welts and laughter which is promptly interrupted by our Project Manager demanding more details from our morning Scrum meeting. After providing about 5 minutes of geek speak and several words of comfort to make his eyes glaze over... it hits me, the idea for the module... Beckoning my developer friend over, we quickly shoo the Project Manager away and begin our brainstorming frenzy... now, where'd I put that full can of coke?
    OK. OK. This isn't probably the most ideal game developer environment, but it definitely sounds fun to me... and from what I gather is nothing like most game development companies. But I'm not doing this blog series to "go pro"; like I stated in my first post, I want to make a 2D game from an idea my best friend and I drummed up long, long ago. I'm in this for the passion AND I want to see how easy it is for us .Net Developers to create a game. So where do I start? Where can I find like-minded individuals? What technologies are there? What do I need to make a video game? The questions are endless... AND... since I already have an idea... let's start with... Technology (yes, I'm a geek, live with it...)
    Technology
    OK. Predominantly, games are still made in C++ or even C. I'm not sure how much assembly code is floating around lately; however, that is not my concern. I do know C/C++ from my past, enough to even get me by, but I'm mainly interested in a recent, not-so-new technology called XNA. What is XNA? XNA allows us .Net Developers to make 2D/3D games for Windows, Xbox*, and Windows Mobile 7*.
    * = for a nominal fee *cough*
    The following link is your one stop shop to XNA game development: http://creators.xna.com/en-US/education/gettingstarted
    The above site hosts information such as:
    - getting started
    - a sample/instructional shooter game in 2D / 3D with code (if I'm taking too long for you in this blog series)
    - downloads
    - starter kits... http://creators.xna.com/en-US/education/starterkits/
    And of course... forums. You can also subscribe and pay for their premium membership, which gets you some pretty awesome tutorials, resources, downloads, and premium community support. Some general Wiki information about XNA: http://en.wikipedia.org/wiki/XNA_%28Microsoft%29
    Community Support
    OK. Let's move on to industry and community support. Apart from XNA, there are some really cool sites out there, I just haven't found all of them yet. However, I found a really cool game development website called Gamasutra. You can click on the following link to get there: http://www.gamasutra.com/
    The site is 100% dedicated to "The Art & Business of Making Games". Armed with blogs, twitter, jobs/resumes and, most importantly, industry news, one could subscribe to the feed and get lost in the wealth of information it provides.
    On a side note: I remember Gamasutra being around when my best friend and I wanted to make a video game... meaning, they've been around for a while now. I think the most beneficial aspect of this site is to understand the industry you want to get into. Otherwise, it's just a cool site to keep up to date with the industry in general.
    Another community support option is LinkedIn. Amongst the land of extremely bloated achievements and responsibilities lie 3 groups (that I have found) that deal with game development:
    http://www.linkedin.com/groups?gid=59205 - Game Developers
    http://www.linkedin.com/groups?gid=824817 - DirectX Game Developer Network
    http://www.linkedin.com/groups?gid=756587 - DirectX Developers
    The Game Developers group in LinkedIn is by far the most active of the three and could possibly provide a wealth of support.
    What I've done thus far:
    - I lightly researched the XNA technology
    - I looked around for some community sites to assist me
    - I downloaded XNA Game Studio 3.1 on my PC and installed it into my IDE
    - I even tried both tutorials! http://creators.xna.com/en-US/education/gettingstarted/bgintro/chapter1
    Best Regards D.

    Read the article

  • Upgrading Agent Controllers in Oracle Enterprise Manager Ops Center 12c

    - by S Stelting
    Oracle Enterprise Manager Ops Center 12c recently released an upgrade for Solaris Agent Controllers. In this week's blog post, we'll show you how to upgrade Agent Controllers. Detailed instructions about upgrading Agent Controllers are available in the product documentation here. This blog post uses an Enterprise Controller which is configured for connected mode operation. If you'd like to apply the agent update in a disconnected installation, additional instructions are available here.
    Step 1: Download Agent Controller Updates
    With a connected mode Ops Center installation, you can check for product updates at any time by selecting the Enterprise Controller from the left-hand Administration navigation tab. Select the right-hand Action link "Ops Center Downloads" to open a pop-up dialog displaying any new product updates. In this example, the Enterprise Controller has already been upgraded to the latest version (Update 1, also shown as build version 2076), so only the Agent Controller updates will appear. There are three updates available: one for Solaris 10 X86, one for Solaris 8-10 SPARC, and one for all versions of Solaris 11. Note that the last update in the screen shot is the Solaris 11 update; for details on any of the downloads, place your mouse over the information icon under the details column for a pop-up text region. Select the software to download and click the Next button to display the Ops Center license agreement. Review and click the check box to accept the license agreement, then click the Next button to begin downloading the software. The status screen shows the current download status. If desired, you can perform the downloads as a background job. Simply click the check box, then click the Next button to proceed to the summary screen. The summary screen shows the updates to be downloaded as well as the current status. Clicking the Finish button will close the dialog and return to the Browser UI. The download job will continue to run in Ops Center, and progress can still be viewed from the jobs menu at the bottom of the browser window.
    Step 2: Check the Version of Existing Agent Controllers
    After the download job completes, you can check the availability of agent updates as well as the current versions of your Agent Controllers from the left-hand Assets navigation tab. Select "Operating Systems" from the pull-down tab to display only OS assets. Next, select "Solaris" in the left-hand tab to display the Solaris assets. Finally, select the Summary tab in the center display panel to show which versions of Agent Controllers are installed in your data center. Notice that a few of the OS assets are not displayed in the Agent Controllers tab. Ops Center will not display OS instances which do not have an Agent Controller installation. This includes Enterprise Controllers and Proxy Controllers (unless the agent has been activated on the OS instance) and OS instances using agentless management. For Agent Controllers which support an update, the version of the update (in this example, 2083) appears to the right of the currently installed version.
    Step 3: Upgrade Your Agent Controllers
    If desired, you can upgrade Agent Controllers from the previous screen by selecting the desired systems and clicking the Upgrade button. Alternatively, you can click the link "Upgrade All Agent Controllers" in the right-hand Actions menu. In either case, a pop-up dialog lets you start the upgrade process. 
    The first screen in the dialog lets you choose the upgrade method. Ops Center provides three ways to upgrade Agent Controllers:
    - Automatic upgrade: If Agent Controllers are running on all assets, Ops Center can automatically upgrade the software to the latest version without requiring any login credentials to the system.
    - SSH using a single set of credentials: If all assets use the same login credentials, you can apply a single set to all assets for the upgrade process. The log-in credentials are the same ones used for asset discovery and management, which are stored in the Plan Management navigation tab under Credentials.
    - SSH using individual credentials: If assets use different login credentials, you can select a different set for each asset.
    After selecting the upgrade method, click the Next button to proceed to the summary screen. Click the Finish button to close the pop-up dialog and start the upgrade job for the Agent Controllers. The upgrade job runs a series of tasks in parallel and will upgrade all agents which have been selected. Once the job completes, the OS instances in your data center will be upgraded and running the latest version of the Agent Controller software.

    Read the article

  • First Foray – About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site and high time that something was posted.  I have a list of topics that I will be working through and posting.  Some, I am sure, will have been posted by others, but I will be sticking to the technical problems and challenges that I've recently faced, and the solutions that worked for me.  My motto when learning something new has always been "My kingdom for an example!", and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.
    A bit of background about me… My name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and focus on SQL Server Data Warehouse design, Analysis Services and Enterprise Reporting solutions.  I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges.
    Enough about me… On to a real problem: SSIS connection timeouts versus command timeouts
    Last week, I was working on automating the processing for a large Analysis Services cube.  I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new partitions to, and dropping old partitions from, an Analysis Services cube.  I had the package working great, tested, and ready for deployment.  It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if there are any partitions present that are no longer needed in a rolling 60-month window.
    My client uses Tivoli for running all their production jobs, not SQL Agent, so I had to build a command line file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation using an XML configuration file to pass server-specific parameters into the package when executed using the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job.  On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again failure. After a number of further failed attempts, and after getting the Teradata DBA involved to monitor and see if we were connecting and failing or just failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds.  This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then.  At this point one of the DBAs found a post on the Teradata forum that had the clues to the puzzle: there is a separate "CommandTimeout" custom property on the data source object that may need to be adjusted for longer running queries.  I opened up the SSIS package, opened the data flow task that generated the partition list table and right-clicked on the data source. From the context menu, I selected "Show Advanced Editor" and found the property. Sure enough, it was set to 30 seconds. 
    The CommandTimeout property can also be edited in the SSIS Properties sheet. In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds.  I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn't. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute.  Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server. With this change in place, the job finally completed successfully.
    The lesson learned for me was two-fold:
    - Always compare query execution times between development and production environments, and don't assume that production will always be faster.  With higher user demands, query governors, and a whole lot more data, the execution time of even what might seem to be simple queries can vary greatly.
    - SSIS connection timeout settings do not affect command timeouts.  Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding.
    Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was.  To be fair, though, in the 5+ years that I have been working with SSIS, I could only recall one other time where I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.
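    The same connection-versus-command distinction shows up in most data access APIs, not just SSIS. As an illustration of the general idea (plain JDBC rather than the SSIS property itself, with a placeholder URL and query), note how the two timeouts are set in entirely different places:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class TimeoutDemo {
            public static void main(String[] args) throws SQLException {
                // Connection timeout: how long to wait while establishing the session.
                DriverManager.setLoginTimeout(120); // seconds

                // The URL and query below are placeholders for illustration only.
                try (Connection conn = DriverManager.getConnection("jdbc:example://host/db");
                     Statement stmt = conn.createStatement()) {

                    // Command timeout: how long to wait for results to start
                    // coming back - independent of the login timeout above.
                    stmt.setQueryTimeout(300); // seconds

                    try (ResultSet rs = stmt.executeQuery("SELECT 1")) {
                        while (rs.next()) {
                            System.out.println(rs.getInt(1));
                        }
                    }
                }
            }
        }

    Raising one of these values has no effect on the other, which is exactly the trap described above.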

    Read the article

  • The Joy Of Hex

    - by Jim Giercyk
    While working on a mainframe integration project, it occurred to me that some basic computer concepts are slipping into obscurity. For example, just about anyone can tell you that a 64-bit processor is faster than a 32-bit processor. A grade school child could tell you that a computer "speaks" in '1's and '0's. Some people can even tell you that there are 8 bits in a byte. However, I have found that even the most seasoned developers often can't explain the theory behind those statements. That is not a knock on programmers; in the age of IntelliSense, what reason do we have to work with data at the bit level? Many computer theory classes treat bit-level programming as a thing of the past, no longer necessary now that storage space is plentiful. The trouble with that mindset is that the world is full of legacy systems that run programs written in the 1970's.  Today our jobs require us to extract data from those systems, regardless of the format, and that often involves low-level programming. Because it seems knowledge of the low-level concepts is waning in recent times, I thought a review would be in order.
    CHARACTER: See Spot Run
    HEX: 53 65 65 20 53 70 6F 74 20 52 75 6E
    DECIMAL: 83 101 101 32 83 112 111 116 32 82 117 110
    BINARY: 01010011 01100101 01100101 00100000 01010011 01110000 01101111 01110100 00100000 01010010 01110101 01101110
    In this example, I have broken down the words "See Spot Run" to a level computers can understand – machine language.
    CHARACTER:  The character level is what is rendered by the computer.  A "Character Set" or "Code Page" contains 256 characters, both printable and unprintable.  Each character represents 1 BYTE of data.  For example, the character string "See Spot Run" is 12 bytes long, exclusive of the quotation marks.  Remember, a SPACE is an unprintable character, but it still requires a byte.  In the example I have used the default Windows character set, ASCII, which you can see here:  http://www.asciitable.com/
    HEX:  Hex is short for hexadecimal, or Base 16.  Humans are comfortable thinking in base ten, perhaps because they have 10 fingers and 10 toes; fingers and toes are called digits, so it's not much of a stretch.  Computers think in Base 16, with numeric values ranging from zero to fifteen, or 0 – F.  Each decimal place has a possible 16 values as opposed to a possible 10 values in base 10.  Therefore, the number 10 in Hex is equal to the number 16 in Decimal.
    DECIMAL:  The Decimal conversion is strictly for us humans to use for calculations and conversions.  It is much easier for us humans to calculate that [30 – 10 = 20] in decimal than it is for us to calculate [1E – A = 14] in Hex.  In the old days, an error in a program could be found by determining the displacement from the entry point of a module.  Since those values were dumped from the computer's head, they were in hex. A programmer needed to convert them to decimal, do the equation and convert back to hex.  This gets into relative and absolute addressing, a topic for another day.
    BINARY:  Binary, or machine code, is where any value can be expressed in 1s and 0s.  It is really Base 2, because each decimal place can have a possibility of only 2 characters, a 1 or a 0.  In Binary, the number 10 is equal to the number 2 in decimal. Why only 1s and 0s?  Very simply, computers are made up of lots and lots of transistors which at any given moment can be ON (1) or OFF (0).  
Each transistor is a bit, and the order in which the transistors fire (or don't fire) is what distinguishes one value from another in the computer's head (or CPU). Consider 32-bit vs 64-bit processing: a 64-bit processor has the capability to read 64 transistors at a time, while a 32-bit processor can only read half as many at a time, so in theory the 64-bit processor should be much faster. There are many more factors involved in CPU performance, but that is the fundamental difference.

DECIMAL   HEX   BINARY
0         0     0000
1         1     0001
2         2     0010
3         3     0011
4         4     0100
5         5     0101
6         6     0110
7         7     0111
8         8     1000
9         9     1001
10        A     1010
11        B     1011
12        C     1100
13        D     1101
14        E     1110
15        F     1111

Remember that each character is a BYTE, there are 2 HEX characters in a byte (called nibbles) and 8 BITS in a byte. I hope you enjoyed reading about the theory of data processing. This is just a high-level explanation, and there is much more to be learned. It is safe to say that, no matter how advanced our programming languages and visual studios become, they are nothing more than a way to interpret bits and bytes. There is nothing like the joy of hex to get the mind racing.
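If you have a SQL Server instance handy, you can reproduce the "See Spot Run" breakdown yourself. This is just an illustrative sketch - any string and any hex literal will do:

-- Casting a string to VARBINARY shows the hex value of each character's byte:
SELECT CAST('See Spot Run' AS VARBINARY(12)) AS hex_bytes;
-- returns 0x5365652053706F742052756E, the same twelve bytes shown above

-- And a hex literal converts straight back to decimal:
SELECT CONVERT(INT, 0x1E) AS decimal_value;   -- returns 30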

    Read the article

  • Coping with infrastructure upgrades

    - by Fatherjack
    A common topic for questions on SQL Server forums is how to plan and implement upgrades to SQL Server: moving from old to new hardware, or moving from one version of SQL Server to another. There are other circumstances where upgrades of other systems affect SQL Server DBAs. For example, where I work at the moment there is a Microsoft Exchange (email) server upgrade in progress. It is being handled by a different team so I'm not wholly sure on the details, but we are in a situation where there are currently 2 Exchange email servers - the old one and the new one. Users' mailboxes are being transferred in a planned process, but as we approach the old server being turned off we also have to make sure that our SQL Servers get updated to use the new SMTP server for all of the SQL Agent notifications, SSIS packages etc. My servers have a number of profiles so that various jobs can send emails on behalf of various departments and different systems. This means there are lots of places where the old server name needs to be replaced by the new one. Anyone who has set up DBMail and enjoyed the click-tastic odyssey of screens to create Profiles and Accounts and so on and so forth ought to seek some professional help in my opinion. It's a nightmare of back-and-forth settings changes and it stinks. I wasn't looking forward to heading into this mess of a UI and changing the old Exchange server name for the new one on all my SQL Instances for all of the accounts I have set up. So I did what any Englishman with a shed would do: I decided to take it apart and see if I could fix it another way. I took a guess that we were going to be working in MSDB, and Books Online was remarkably helpful and amongst a lot of information told me about a couple of procedures that can be used to interrogate DBMail settings.

USE [msdb] -- It's where all the good stuff is kept
GO
EXEC dbo.sysmail_help_profile_sp;
EXEC dbo.sysmail_help_account_sp;

Both of these procedures take optional parameters with the same names - ID and Name. If you provide an ID or a name then the results you get back are for that specific Profile or Account; otherwise you get details of all Profiles and Accounts on the server you are connected to. As you can see, the Account has the SMTP server information in the servername column. We want to change that value to NewSMTP.Contoso.com. Now, it appears that the procedure we are looking at gets its data from the sysmail_account and sysmail_server tables; you can get the results the stored procedure provides if you run the code below.

SELECT  [account_id]
      , [name]
      , [description]
      , [email_address]
      , [display_name]
      , [replyto_address]
      , [last_mod_datetime]
      , [last_mod_user]
FROM    dbo.sysmail_account AS sa;

SELECT  [account_id]
      , [servertype]
      , [servername]
      , [port]
      , [username]
      , [credential_id]
      , [use_default_credentials]
      , [enable_ssl]
      , [flags]
      , [last_mod_datetime]
      , [last_mod_user]
      , [timeout]
FROM    dbo.sysmail_server AS sms

Now, we have no real idea how these tables are linked, and whether making an update directly to one or other of them is going to do what we want or whether it will entirely cripple our ability to send email from SQL Server, so we won't touch those tables with any UPDATE TSQL. So, back to Books Online then, and we find sysmail_update_account_sp. It's exactly what we need. The examples in BOL take the form (as below) of having every parameter explicitly defined.
Not wanting to totally obliterate the existing values by leaving out any of the parameters, I set to writing some code to gather the existing data from the tables, rewrite the SMTP server name, and then execute the resulting TSQL.

IF OBJECT_ID('tempdb..#sysmailprofiles') IS NOT NULL
    DROP TABLE #sysmailprofiles
GO
CREATE TABLE #sysmailprofiles
    (
      account_id INT
    , [name] VARCHAR(50)
    , [description] VARCHAR(500)
    , email_address VARCHAR(500)
    , display_name VARCHAR(500)
    , replyto_address VARCHAR(500)
    , servertype VARCHAR(10)
    , servername VARCHAR(100)
    , port INT
    , username VARCHAR(100)
    , use_default_credentials VARCHAR(1)
    , ENABLE_ssl VARCHAR(1)
    )
INSERT  [#sysmailprofiles]
        ( [account_id]
        , [name]
        , [description]
        , [email_address]
        , [display_name]
        , [replyto_address]
        , [servertype]
        , [servername]
        , [port]
        , [username]
        , [use_default_credentials]
        , [ENABLE_ssl]
        )
EXEC [dbo].[sysmail_help_account_sp]

DECLARE @TSQL NVARCHAR(1000)
SELECT TOP 1
        @TSQL = 'EXEC [dbo].[sysmail_update_account_sp] @account_id = '
        + CAST([s].[account_id] AS VARCHAR(20))
        + ', @account_name = ''' + [s].[name] + ''''
        + ', @email_address = N''' + [s].[email_address] + ''''
        + ', @display_name = N''' + [s].[display_name] + ''''
        + ', @replyto_address = N''' + s.replyto_address + ''''
        + ', @description = N''' + [s].[description] + ''''
        + ', @mailserver_name = ''NEWSMTP.contoso.com'''
        + ', @mailserver_type = ' + [s].[servertype]
        + ', @port = ' + CAST([s].[port] AS VARCHAR(20))
        + ', @username = ' + COALESCE([s].[username], '''''')
        + ', @use_default_credentials =' + CAST(s.[use_default_credentials] AS VARCHAR(1))
        + ', @enable_ssl =' + [s].[ENABLE_ssl]
FROM    [#sysmailprofiles] AS s
WHERE   [s].[servername] = 'SMTP.Contoso.com'

SELECT @TSQL
EXEC [sys].[sp_executesql] @TSQL

This worked well for me, and testing the email function with EXEC dbo.sp_send_dbmail afterwards showed that the settings were indeed using our new Exchange server. It was only later, in writing this blog, that I tried running the sysmail_update_account_sp procedure with only the SMTP server name parameter value specified. Despite what Books Online might intimate, you can do this, and only the values for the parameters specified get changed. If a parameter is not specified in the execution of the procedure then its value remains unchanged. This renders most of the above script unnecessary, as I could have simply specified the account_id that I want to amend and the new value for the parameter I want to update.

EXEC sysmail_update_account_sp @account_id = 1, @mailserver_name = 'NEWSMTP.Contoso.com'

This wasn't going to be the main reason for this post; it was meant to describe how to capture values from a stored procedure and use them in dynamic TSQL, but instead we are here and (re)learning the fact that Books Online is a little flawed in places. It is a fantastic resource for anyone working with SQL Server, but the reader must adopt an enquiring frame of mind and use a little curiosity to try simple variations on examples to fully understand the code they are working with. I think the author(s) of this part of Books Online missed an opportunity to include a third example that had fewer than all parameters specified, to give a lead to this method existing.
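Armed with that discovery, the whole rename collapses into a few lines. Here is a sketch (the Contoso server names are the fictional ones from above - test it on a development instance first) that loops over every account still pointing at the old SMTP host and updates just the one setting:

USE [msdb];
GO
DECLARE @id INT;
-- Find the accounts still referencing the old Exchange server.
DECLARE old_accounts CURSOR LOCAL FAST_FORWARD FOR
    SELECT [account_id]
    FROM dbo.sysmail_server
    WHERE [servername] = 'SMTP.Contoso.com';
OPEN old_accounts;
FETCH NEXT FROM old_accounts INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Only the parameters we pass in are changed; all other settings survive.
    EXEC dbo.sysmail_update_account_sp
        @account_id = @id,
        @mailserver_name = 'NEWSMTP.Contoso.com';
    FETCH NEXT FROM old_accounts INTO @id;
END;
CLOSE old_accounts;
DEALLOCATE old_accounts;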

    Read the article

  • career advice for PhD scientist seeking to program?

    - by C SD
    I'm largely a self-taught programmer. In fact, I first started programming about halfway through biophysics grad school, and even though I think I've done some pretty nice work, I've never worked as part of a 'serious' development team that had more than one or two other developers (and I wouldn't hesitate to call them equally inexperienced in software development as a profession). After finishing my PhD I applied to Google, on a lark, since I had some confidence in my abilities, if not necessarily my experience, and I was hoping to maybe slip in, absorb all the experience and talent I'd be surrounded with, and become productive enough, quickly enough, that they wouldn't immediately regret their decision. I was excited to actually get invited to interview up at Mountain View (this was ~mid-2008). Overall, my memory of the interview was very positive, but after close to a three-month wait (is that normal?) they ended up turning me down. I wasn't too surprised or disappointed (aside from the uncomfortably long wait) given my unusual background and admitted lack of experience. I decided to continue as a postdoc, but to focus on improving my skills rather than doing research. I've done about three years of that, and my honest assessment is that I've learned a ton more, but I really need more of a peer group to maintain or accelerate my growth. Google invited me to interview again about eight months ago, and the interview process went even better than the first time around (I thought), though they again declined to give me an offer. I have to admit this second rejection was much more discouraging. They had insisted I interview even after I mentioned to them that a move on my part was unlikely given that I had bought a house, gotten married, etc. since the first interview. I guess I was hoping they'd at least give me an offer that I could parlay into a more conventional, but still interesting, programming position close to home. So here I am, going on my third year out of grad school, a glorified postdoc, and I'm starting to get pretty discouraged. Even though I could technically get 'back on track' for a career in science, I have been focusing the vast majority of this time on gaining programming experience rather than on research and publications. The problem is, whenever I look, most job listings have requirements that seem impossibly grandiose and I hesitate to apply. That, or the job/project seems incredibly dull. Ironically, applying to Google struck me as less intimidating. I suspect that either most people are just a lot less realistic than I am when it comes to assessing how long it will take them to get up to speed, or they don't care; my fear is that I'm just woefully unqualified for any interesting, well-paying work. For example, I'm confident I could switch fully back into C++ mode with a couple of weeks' work (I mostly use C, Python, and C# daily), but I don't list myself as being 'proficient' in C++ on my CV, nor do I apply for jobs that 'require' such knowledge. The few applications for which I did feel I was a legitimately good match have not elicited a response. I suspect the following things are potential problems with my application/CV and I would like feedback on them: I don't have a CS degree. My BS was in biochemistry and molecular biology, my PhD in biophysics. I took undergrad- and grad-level CS courses at UCSD and completely killed them, but I don't know how to translate that to my CV effectively. I have a PhD, but it's not in CS...
I've been debating whether I should remove it from my CV, and whether or not it would then be misleading to list at least some of those years as some kind of 'programming' job (in many respects it was). I think there are sometimes strong stigmas associated with 'self-taught' programmers. I am certainly one of those. I even recognize that some of those stigmas hold a hint of truth, but I really do want to be an asset to a team. How do I communicate that even though I have been largely self-directing for ~8 years I can still take marching orders when needed? Do I just say so outright? Should I just become a lot less scrupulous about the whole process? An anecdote: I have a friend who applied for positions where he completely fudged his qualifications to get past the first culling. He was much more honest and forthcoming about his actual qualifications when contacted, and he still managed to get invited to a couple of interviews and even got some offers. His balls are larger than mine, though.

    Read the article

  • Session Report - Modern Software Development Anti-Patterns

    - by Janice J. Heiss
    In this standing-room-only session, building upon his 2011 JavaOne Rock Star "Diabolical Developer" session, Martijn Verburg, this time along with Ben Evans, identified and explored common "anti-patterns" - ways of doing things that keep developers from doing their best work. They emphasized the importance of social interaction and team communication, along with identifying certain psychological pitfalls that lead developers astray. Their emphasis was less on technical coding errors and more on how to function well and keep one's focus on what really matters. They are the authors of the highly regarded The Well-Grounded Java Developer and are both movers and shakers in the London JUG community and on the Java Community Process. The large room was packed as they gave a fast-moving, witty presentation with lots of laughs and personal anecdotes. Below are a few of the anti-patterns they discussed.

Anti-Pattern One: Conference-Driven Delivery
The theme here is the belief that "Real pros hack code and write their slides minutes before their talks." Their response to this anti-pattern is an expression popular in the military - PPPPPP, which stands for "Proper preparation prevents piss-poor performance." "Communication is very important - probably more important than the code you write," claimed Verburg. "The more you speak in front of large groups of people the easier it gets, but it's always important to do dry runs, to present to smaller groups. And important to be members of user groups where you can give presentations. It's a great place to practice speaking skills, to gain new skills, to get new contacts, to network." They encouraged attendees to record themselves and listen to themselves giving a presentation. They advised them to start with a spouse or friends if need be. Learning to communicate to a group, they argued, is essential to being a successful developer. The emphasis here is that software development is a team activity, and good, clear, accessible communication is essential to the functioning of software teams.

Anti-Pattern Two: Mortgage-Driven Development
The main theme here was that, in a period of worldwide recession and economic stagnation, people are concerned about keeping their jobs. So there is a tendency for developers to treat knowledge as power and not share what they know about their systems with their colleagues, so that when it comes time to fix a problem in production, they will be the only ones who know how to fix it - having made themselves an indispensable cog in a machine so they cannot be fired. So developers avoid documentation at all costs, or if documentation is required, put it on a USB chip and lock it in a lock box. As in the first anti-pattern, the idea here is that communicating well with your colleagues is essential and documentation is a key part of this. Social interactions are essential. Both Verburg and Evans insisted that increasingly, year by year, successful software development is more about communication than the technical aspects of the craft. Developers who understand this are the ones who will have the most success.

Anti-Pattern Three: Distracted by Shiny - Always Use the Latest Technology to Stay Ahead
The temptation here is to pick out some obscure framework, try a bit of Scala, HTML5, and Clojure, and always use the latest technology and upgrade to the latest point release of everything. Don't worry if something works poorly because you are ahead of the curve.
Verburg and Evans insisted that there need to be sound reasons for everything a developer does. Developers should not bring in something simply because for some reason they just feel like it or because it's new. They recommended a site run by a developer named Matt Raible with excellent comparison spreadsheets regarding Web frameworks and other apps. They praised it as a useful tool to help developers in their decision-making processes. They pointed out that good developers sometimes make bad choices out of boredom, to add shiny things to their CV, out of frustration with existing processes, or just from a lack of understanding. They pointed out that some code may stay in a business system for 15 or 20 years, but not all code is created equal and some may change after 3 or 6 months. Developers need to know where the code they are contributing fits in. What is its likely lifespan?

Anti-Pattern Four: Design-Driven Design
The anti-pattern: If you want to impress your colleagues and bosses, use design patterns left, right, and center - MVC, Session Facades, SOA, etc. Or the UML modeling suite from IBM, back in the day… Generate super fast code. And the more jargon you can talk when in the vicinity of the manager the better. Verburg shared a true story about a time when he was interviewing a guy for a job and asked him what his previous work was. The interviewee said that he essentially took patterns from an approved book of Enterprise Architecture Patterns and applied them. Verburg was dumbstruck that someone could have a job in which they took patterns from a book and applied them. He pointed out that the idea that design is a separate activity is simply wrong. He repeated a saying that he uses: "You should pay your junior developers for the lines of code they write and the things they add; you should pay your senior developers for what they take away." He explained that by encouraging people to take things away, the code base gets simpler and reflects the actual business use cases developers are trying to solve, as opposed to the framework that is being imposed. He told another true story about a project to decommission a very long-lived system. 98% of the code was decommissioned and people got a nice bonus. But the 2% remained on the mainframe, so the 98% reduction in code resulted in zero reduction in costs, because the entire mainframe was needed to run the 2% that was left. There is an incentive to get rid of source code and subsystems when they are no longer needed. The session continued with several more anti-patterns that were equally insightful.

    Read the article

  • Seizing the Moment with Mobility

    - by Divya Malik
    Empowering people to work where they want to work is becoming more critical now with the consumerisation of technology. Employees are bringing their own devices to the workplace and expecting to be productive wherever they are. Sales people welcome the ability to run their critical business applications where they can be most effective, which is typically on the road and when they are still with the customer. Oracle has invested many years of research in understanding customers' mobile requirements. "The keys to building the best user experience were building in a lot of flexibility in ways to support sales, and being useful," said Arin Bhowmick, Director, CRM, for the Applications UX team. "We did that by talking to and analyzing the needs of a lot of people in different roles." The team studied real-life sales teams. "We wanted to study salespeople in context with their work," Bhowmick said. "We studied all user types in the CRM world because we wanted to build a user interface and user experience that would cater to sales representatives, marketing managers, sales managers, and more. Not only did we do studies in our labs, but also we did studies in the field and in mobile environments because salespeople are always on the go." Here is a recent post from Hernan Capdevila, Vice President, Oracle Fusion Apps, which was featured on the Oracle Applications Blog. Mobile devices are forcing a paradigm shift in the workplace - they're changing the way businesses can do business and the type of cultures they can nurture. As our customers talk about their mobile needs, we hear them saying they want instant-on access to enterprise data so workers can be more effective at their jobs anywhere, anytime. They also are interested in being more cost effective from an IT point of view. The mobile revolution - with the idea of BYOD (bring your own device) - has added an interesting dynamic because previously IT was driving the employee device strategy and ecosystem. That's been turned on its head with the consumerization of IT. Now employees are figuring out how to use their personal devices for work purposes and IT has to figure out how to adapt. Blurring the Lines between Work and Personal Life My vision of where businesses will be five years from now is that our work lives and personal lives will be more interwoven. In turn, enterprises will have to determine how to make employees' work lives fit more into the fabric of their personal lives. And personal devices like smartphones are going to drive significant business value because they let us accomplish things very incrementally. I can be sitting on a train or in a taxi and be productive. At the end of any meeting, I can capture ideas and tasks or follow up with people in real time. Mobile devices enable this notion of seizing the moment - capitalizing on opportunities that might otherwise have slipped away because we're not connected. For the industry shapers out there, this is game changing. The lean and agile workforce is definitely the future. This notion of the board sitting down with the executive team to lay out strategic objectives for a three- to five-year plan, bringing in HR to determine how they're going to staff the strategic activities, kicking off the execution, and then revisiting the plan in three to five years to create another three- to five-year plan is yesterday's model. Businesses that continue to approach innovating in that way are in the dinosaur age.
Today it's about incremental planning and incremental execution, which requires a lot of cohesion and synthesis within the workforce. There needs to be this interweaving notion within the workforce about how ideas cascade down, how people engage, how they stay connected, and how insights are shared. How to Survive and Thrive in Today's Marketplace The notion of Facebook isn't new. We lived it in pre-Internet days with America Online and Prodigy - Facebook is just the renaissance of these services in a more viral and pervasive way. And given the trajectory of the consumerization of IT with people bringing their personal tooling to work, the enterprise has no option but to adapt. The sooner that businesses realize this from a top-down point of view, the sooner they will be able to really drive significant innovation and adapt to the marketplace. There are a small number of companies right now (I think it's closer to 20% rather than 80%, but the number is expanding) that are able to really innovate in this incremental marketplace. So from a competitive point of view, there's no choice but to be social and stay connected. By far the majority of users on Facebook and LinkedIn are mobile users - people on iPhones, smartphones, Android phones, and tablets. It's not the couch people, right? It's the on-the-go people - those people at the coffee shops. Usually when you're sitting at your desk on a big desktop computer, you have better things to do than to be on Facebook. This is a topic I'm extremely passionate about because I think mobile devices are game changing. Mobility delivers significant value to businesses - it also brings dramatic simplification from a functional point of view and transforms our work life experience. Hernan Capdevila Vice President, Oracle Applications Development

    Read the article

  • Top Questions and Answers for Plugging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted an online forum that shared a comprehensive path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions. Below are highlights of the most informative:

DBaaS requires a lengthy and careful design effort. What are the minimum requirements for setting up a scaled-down environment to test it out?
You should have an OEM 12c environment for DBaaS administration and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server, or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust, real-world configuration with Zones and Pools or DR capabilities, for example.

How does this benefit companies having their own data center?
This allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility and reduced risk. The benefits to the (internal) consumers of services are much faster provisioning and faster response to changes in business requirements.

From a deployment perspective, is DBaaS solely the DBA's job?
The best deployment model enables the DBA (or end user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, or sysadmin teams). The service is created either via a self-service portal or by the DBA.

The purpose of self service seems to be that the end user does not rely on the DBA. I just need to give him a template. He decides how much AMM he needs. Why should I set it up one by one? That doesn't seem to be the purpose of self service.
Most customers we have worked with define a standardized service catalog, with a few (2 to 5) different classes of service. For each of these classes, there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once. Each user then simply selects from the options offered in the catalog.

Looking at the DBaaS service definition, it seems to be no different from a service definition provided by a well-defined DBA team. Why do you attribute it to DBaaS?
There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter and their Service Definition will evolve less, but they still might need to add capabilities such as Self Service and Metering/Chargeback. Other organizations are still operating in highly siloed environments with little automation, and their formal Service Definition (if they have one) will be a lot less mature today. Therefore their future state DBaaS will look a lot different from their current state, as will their Service Definition.

How does database as a service impact or help with "Click to Compute" or deploying "Database in cloud infrastructure"?
DBaaS enables Click to Compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database, and consolidation using virtualization in infrastructure cloud.
As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant, and higher isolation using infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models.

How exactly is DBaaS different from the traditional database? Storage/OS/DB all together to 'transparently' provide service to applications? Will there be cross-database access by application/user?
Some key differences are:
1) The services run on a shared platform.
2) The services can be rapidly provisioned (< 15 minutes).
3) The services are dynamic and can be relocated, grown, or shrunk as needed to meet business needs, rapidly and without disruption.
4) The user is able to provision the services directly from a standardized service catalog.

With 24x7x365 databases it's difficult to find off-peak hours to do basic admin tasks such as gathering stats, running backups, and batch jobs. How does pluggable database handle this, and the different patching-downtime needs of the applications the databases might be serving?
You can gather stats in Oracle Multitenant the same way you had been in regular databases. Regarding patching/upgrading, Oracle Multitenant makes patching and upgrades very efficient in that you can pre-provision a new, patched multitenant database in a different ORACLE_HOME and then unplug a PDB from its CDB and plug it into the newer, patched CDB in seconds.

Thanks for all the great questions! If you'd like to learn more and missed the online forum, you can watch it on demand here.
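To make that last answer concrete, the unplug/plug flow looks roughly like this. This is a sketch in Oracle-dialect SQL - the PDB name and the XML manifest path are invented for illustration:

-- In the old (unpatched) CDB: close the PDB and unplug it to an XML manifest.
ALTER PLUGGABLE DATABASE salespdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE salespdb UNPLUG INTO '/u01/app/oracle/manifests/salespdb.xml';
DROP PLUGGABLE DATABASE salespdb KEEP DATAFILES;

-- In the new, already-patched CDB (a different ORACLE_HOME):
CREATE PLUGGABLE DATABASE salespdb USING '/u01/app/oracle/manifests/salespdb.xml' NOCOPY;
ALTER PLUGGABLE DATABASE salespdb OPEN;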

    Read the article

  • BYOD-The Tablet Difference

    - by Samantha.Y. Ma
    By Allison Kutz, Lindsay Richardson, and Jennifer Rossbach, Sales Consultants. Less than three years ago, Apple introduced a new concept to the world: The Tablet. It's hard to believe that in only 32 months, the iPad ushered in an entirely new way to do business. Because of their mobility and ease-of-use, tablets have grown in popularity to keep up with the increasing "on the go" lifestyle, and their popularity isn't expected to decrease any time soon. In fact, global tablet sales are expected to increase drastically within the next five years, from 56 million tablets to 375 million by 2016. Tablets have been utilized for every function imaginable in today's world. With over 730,000 active applications available for the iPad, these tablets are educational devices, portable book collections, gateways into social media, entertainment for children when Mom and Dad need a minute on their own, and so much more. It's no wonder that 74% of those who own a tablet use it daily, 60% use it several times a day, and an average of 13.9 hours per week are spent tapping away. Tablets have become a critical part of a user's personal life; but why stop there? Businesses today are taking major strides in implementing these devices, with the hopes of benefiting from efficiency and productivity gains. Limo and taxi drivers use tablets as payment devices instead of traditional cash transactions. Retail outlets use tablets to find the exact merchandise customers are looking for. Professors use tablets to teach their classes, and business professionals demonstrate solutions and review reports from tablets. Since an overwhelming majority of tablet users have started to use their personal iPads, PlayBooks, Galaxys, etc. in the workforce, organizations have had to make a change. In many cases, companies are willing to make that change. In fact, 79% of companies are making new investments in mobility this year. Gartner reported that 90% of organizations are expected to support corporate applications on personal devices by 2014. It's not just companies that are changing. Business professionals have become accustomed to tablets making their personal lives easier, and want that same effect in the workplace. Professionals no longer want to waste time manually entering data in their computer, or worse yet in a notebook, especially when the data has to be later transcribed to an online system. The response: the Bring Your Own Device phenomenon. According to Gartner, BYOD is "an alternative strategy allowing employees, business partners and other users to utilize a personally selected and purchased client device to execute enterprise applications and access data." Employees whose companies embrace this trend are more efficient because they get to use devices they are already accustomed to. Tablets change the game when it comes to how sales professionals perform their jobs. Sales reps can easily store and access customer information and analytics using tablet applications, such as Oracle Fusion Tap.
This method is much more enticing for sales reps than spending time logging interactions on their (what seem to be outdated) computers. Forrester and IDC reported that on average sales reps spend 65% of their time on activities other than selling, so having a tablet application to use on the go is extremely powerful. In February, Information Week released a list of "9 Powerful Business Uses for Tablet Computers," ranging from "enhancing the customer experience" to "improving data accuracy" to "eco-friendly motivations". Tablets complement the lifestyle of professionals who strive to be effective and efficient, both in the office and on the road.

Three Things Businesses Need to Do to Embrace BYOD:
1. Make customer-facing websites tablet-friendly for consistent user experiences
2. Develop tablet applications to continue to enhance the customer experience
3. Embrace and use the technology that comes with tablets

Almost 55 million people in the U.S. own tablets because they are convenient, easy, and powerful. These are qualities that companies strive to achieve with any piece of technology. The inherent power of the devices, coupled with the growing number of business applications, ensures that tablets will transform the way that companies and employees perform.

    Read the article

  • JPRT: A Build & Test System

    - by kto
    DRAFT A while back I did a little blogging on a system called JPRT, the hardware used, and a summary on my java.net weblog. This is an update on the JPRT system. JPRT ("JDK Putback Reliability Testing", but ignore what the letters stand for, I change what they mean every day, just to annoy people :\^) is a build and test system for the JDK, or any source base that has been configured for JPRT. As I mentioned in the above blog, JPRT is a major modification to a system called PRT that the HotSpot VM development team has been using for many years, very successfully I might add. Keeping the source base always buildable and reliable is the first step in the 12 steps of dealing with your product quality... or was that the 12 steps from Alcoholics Anonymous... oh well, anyway, it's the first of many steps. ;\^) Internally when we make changes to any part of the JDK, there are certain procedures we are required to perform prior to any putback or commit of the changes. The procedures often vary from team to team, depending on many factors, such as whether native code is changed, or if the change could impact other areas of the JDK. But a common requirement is a verification that the source base with the changes (and merged with the very latest source base) will build on many if not all 8 platforms, and a full 'from scratch' build, not an incremental build, which can hide full build problems. The testing needed varies, depending on what has been changed. Anyone that has worked on a project where multiple engineers or groups are submitting changes to a shared source base knows how disruptive a 'bad commit' can be on everyone. How many times have you heard: "So And So made a bunch of changes and now I can't build!". But multiply the number of platforms by 8, and make all the platforms old and antiquated OS versions with bizarre system setup requirements, and you have a pretty complicated situation (see http://download.java.net/jdk6/docs/build/README-builds.html). We don't tolerate bad commits, but our enforcement is somewhat lacking; usually it's an 'after the fact' correction. Luckily the Source Code Management system we use (another antique called TeamWare) allows for a tree of repositories, and 'bad commits' are usually isolated to a small team. Punishment to date has been pretty drastic; the Queen of Hearts in 'Alice in Wonderland' said 'Off With Their Heads', and well, trust me, you don't want to be the engineer doing a 'bad commit' to the JDK. With JPRT, hopefully this will become a thing of the past. Not that we have had many 'bad commits' to the master source base; in general the teams doing the integrations know how important their jobs are and they rarely make 'bad commits'. So for these JDK integrators, maybe what JPRT does is keep them from chewing their fingernails at night. ;\^) Over the years each of the teams has accumulated sets of machines they use for building, or they use some of the shared machines available to all of us. But the hunt for build machines is just part of the job, or has been. And although the issues with consistency of the build machines haven't been a horrible problem, often you never know if the Solaris build machine you are using has all the right patches, or if the Linux machine has the right service pack, or if the Windows machine has its latest updates. Hopefully the JPRT system can solve this problem. When we ship the binary JDK bits, it is SO very important that the build machines are correct, and we know how difficult it is to get them set up.
Sure, if you need to debug a JDK problem that only shows up on Windows XP or Solaris 9, you'll still need to hunt down a machine, but not as a regular everyday occurrence. I'm a big fan of a regular nightly build and test system, constantly verifying that a source base builds and tests out. There are many examples of automated build/tests, some that trigger on any change to the source base, some that just run every night. Some provide a protection gateway to the 'golden' source base, which only gets changes that the nightly process has verified are good. The JPRT (and PRT) system is meant to guard the source base before anything is sent to it, guarding all source bases from the evil developer. Well, maybe 'evil' isn't the right word; I haven't met many 'evil' developers, more like 'error prone' developers. ;\^) Humm, come to think about it, I may be one from time to time. :\^{ But the point is that by spreading the build up over a set of machines, and getting the turnaround down to under an hour, it becomes realistic to completely build on all platforms and test it, on every putback. We have the technology, we can build and rebuild and rebuild, and it will be better than it was before, ha ha... Anybody remember the Six Million Dollar Man? Man, I gotta get out more often.. Anyway, now the nightly build and test can become a 'fetch the latest JPRT build bits' and start extensive testing (the testing not done by JPRT, or the platforms not tested by JPRT). Is it Open Source? No, not yet. Would you like it to be? Let me know. Or is it more important that you have the ability to use such a system for JDK changes? So enough blabbering on about this JPRT system, tell me what you think. And let me know if you want to hear more about it or not. Stay tuned for the next episode, same Bloody Bat time, same Bloody Bat channel. ;\^) -kto

    Read the article

  • What to leave when you're leaving

    - by BuckWoody
    There's already a post on this topic - sort of. I read this entry, where the author did a good job on a few steps, but I found that a few other tips might be useful, so if you want to check that one out and then this post, you might be able to put together your own plan for when you leave your job. I once took over the system administration (of which the Oracle and SQL Server servers were a part) at a mid-sized firm. The outgoing administrator had about a two-week-long scheduled overlap with me, but was angry at the company and told me "hey, I know this is going to be hard on you, but I want them to know how important I was. I'm not telling you where anything is or what the passwords are. Good luck!" He then quit that day. It took me about three days to find all of the servers and crack the passwords. Yes, the company tried to take legal action against the guy and all that, but he moved back to his home country and so largely got away with it. Obviously, this isn't the way to leave a job. Many of us have changed jobs in the past, and most of us try to be very professional about the transition to a new team, regardless of our feelings about a particular company. I've been treated badly at a firm, but that is no reason to leave a mess for someone else. So here's what you should put into place at a minimum before you go. Most of this is common sense - which of course isn't very common these days - and another good rule is just to ask yourself "what would I want to know?" The article I referenced at the top of this post focuses on a lot of documentation of the systems. I think that's fine, but in actuality, I really don't need that. Even with this kind of documentation, I still perform a full audit on the systems, so in the end I create my own system documentation. There are actually only four big items I need to know to get started with the systems:

1. Where is everything/everybody? The first thing I need to know is where all of the systems are. I mean not only the street address, but the closet or room, the rack number, the U number in the rack, the SAN LUNs, all that. A picture here is worth a thousand words, which is why I really like Visio. It combines nice graphics, full text and all that. But use whatever you have to tell someone the physical locations of the boxes. Also, tell them the physical location of the folks in charge of those boxes (in case you aren't) or who share that responsibility. And by "where" in this case, I mean names and phones.

2. What do they do? For both the servers and the people, tell them what they do. If it's a database server, detail what each database does and what application goes to that, and who "owns" that application. In my mind, this is one of the most important things a Data Professional needs to know. In the case of the other administrators or co-owners, document each person's responsibilities.

3. What are the credentials? Logging on/in and gaining access to the buildings are things that the new Data Professional will need to do to successfully complete their job. This means service accounts, certificates, all of that. The first thing they should do, of course, is change the passwords on all that, but the first thing they need is the ability to do that!

4. What is out of the ordinary? This is the most tricky, and perhaps the next most important thing to know. Did you have to use a "special" driver for that video card on server X?
Is the person that co-owns an application with you mentally unstable (like me), or do they have special needs, like "don't talk to Buck before he's had coffee. Nothing will make any sense"? Do you have service pack requirements for a specific setup? Write all that down. Anything that took you a day or longer to make work is probably a candidate here. This is my short list - anything you care to add?

    Read the article

  • Revisiting .NET, but what should I focus on?

    - by Wayne M
    After about a two-year hiatus, I'm brushing up on my .NET skills to find a .NET job (my previous two positions have involved very little development, or development using legacy technologies, so apart from a few very minor apps I have not touched .NET in close to two years). I'm aware of things like ASP.NET MVC, and I have previously read up on things like NHibernate and DI/IoC, albeit I have yet to use them apart from very trivial "Hello World" type applications. I have a subscription to Rob Conery's Tekpub website and occasionally watch these videos when I have free time. My concern is this: I don't live in a very technical area. I would be surprised if any but the most tech-savvy companies have heard of, let alone use, ASP.NET MVC, NHibernate (or even LINQ/EF), or know about IoC. I would be willing to bet a large sum of money that 95% of the possible jobs I could obtain will use the following:

- Visual SourceSafe, if any VCS at all
- ASP.NET 2.0 WebForms (3.5 if lucky)
- Raw ADO.NET on top of a very thin implementation of the Gateway pattern
- Stored procedures in the database for most CRUD operations
- Gratuitous use of code-behind, with a Service layer if I'm lucky

If I were extremely lucky, I might find a shop that has heard of ORMs and either uses one or has written its own data abstraction. Also if I were lucky, the company would be using Model-View-Presenter. In light of this I'm not sure what I should focus on learning. Personally, I would prefer to be using the latest stuff - ASP.NET MVC, NHibernate, jQuery, WCF etc. Reality says I should go back to the basics, since it looks like most potential opportunities aren't going to be anywhere near the cutting edge, or anywhere close to it. And, as much as I would like to find a position and start to show the other developers the benefits, in my past experience this has usually resulted in my being fired for "not being a team player" and doing things the bad old way. So, I am curious how you would approach a situation like this? What should I focus on, in order to A) reacquaint myself with .NET, and B) prepare myself to obtain a .NET job again, one that is more than likely going to use techniques that I and most other knowledgeable developers will scoff at?
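To make the "raw ADO.NET plus stored procedures" stack concrete, here is the kind of plumbing I mean. This is a hypothetical sketch - the Customer table, its columns, and the procedure name are all invented for illustration:

-- One hand-written stored procedure per CRUD operation; the code-behind
-- calls each one through a thin ADO.NET gateway class rather than an ORM.
CREATE PROCEDURE dbo.Customer_GetById
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT CustomerId, Name, Email
    FROM dbo.Customer
    WHERE CustomerId = @CustomerId;
END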

    Read the article

  • Flash Builder missing fundamental things

    - by Bart van Heukelom
    All of a sudden Flash Builder 4 is missing all kinds of fundamental things and is generating incorrect errors. I had the same issue yesterday, and I fixed it by downloading a new Flex SDK and importing that into FB. I did this again, but this time it fixed nothing. I don't think it's something I did, like removing critical references from the build path. The errors also appeared on projects I was not working on at the time. It occurs for ActionScript, Flex and Flex Library projects alike. Update: I find this stack trace in the Flash Builder error log:

!ENTRY com.adobe.flexbuilder.project 4 43 2010-05-11 11:55:47.495
!MESSAGE Uncaught exception in compiler
!STACK 0
java.lang.NullPointerException
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2592)
    at macromedia.asc.parser.VariableBindingNode.evaluate(VariableBindingNode.java:64)
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2233)
    at macromedia.asc.parser.ListNode.evaluate(ListNode.java:44)
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2578)
    at macromedia.asc.parser.VariableDefinitionNode.evaluate(VariableDefinitionNode.java:48)
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2310)
    at macromedia.asc.parser.StatementListNode.evaluate(StatementListNode.java:60)
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2503)
    at macromedia.asc.parser.WithStatementNode.evaluate(WithStatementNode.java:44)
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2310)
    at macromedia.asc.parser.StatementListNode.evaluate(StatementListNode.java:60)
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2891)
    at macromedia.asc.parser.FunctionCommonNode.evaluate(FunctionCommonNode.java:106)
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2905)
    at macromedia.asc.parser.FunctionCommonNode.evaluate(FunctionCommonNode.java:106)
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:3643)
    at macromedia.asc.parser.ClassDefinitionNode.evaluate(ClassDefinitionNode.java:106)
    at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:3371)
    at macromedia.asc.parser.ProgramNode.evaluate(ProgramNode.java:80)
    at flex2.compiler.as3.As3Compiler.analyze4(As3Compiler.java:709)
    at flex2.compiler.CompilerAPI.analyze(CompilerAPI.java:3089)
    at flex2.compiler.CompilerAPI.analyze(CompilerAPI.java:2977)
    at flex2.compiler.CompilerAPI.batch2(CompilerAPI.java:528)
    at flex2.compiler.CompilerAPI.batch(CompilerAPI.java:1274)
    at flex2.compiler.CompilerAPI.compile(CompilerAPI.java:1496)
    at flex2.tools.oem.Application.compile(Application.java:1188)
    at flex2.tools.oem.Application.recompile(Application.java:1133)
    at flex2.tools.oem.Application.compile(Application.java:819)
    at flex2.tools.flexbuilder.BuilderApplication.compile(BuilderApplication.java:344)
    at com.adobe.flexbuilder.multisdk.compiler.internal.ASApplicationBuilder$MyBuilder.mybuild(ASApplicationBuilder.java:276)
    at com.adobe.flexbuilder.multisdk.compiler.internal.ASApplicationBuilder.build(ASApplicationBuilder.java:127)
    at com.adobe.flexbuilder.multisdk.compiler.internal.ASBuilder.build(ASBuilder.java:190)
    at com.adobe.flexbuilder.multisdk.compiler.internal.ASItemBuilder.build(ASItemBuilder.java:74)
    at com.adobe.flexbuilder.project.compiler.internal.FlexProjectBuilder.buildItem(FlexProjectBuilder.java:480)
    at com.adobe.flexbuilder.project.compiler.internal.FlexProjectBuilder.build(FlexProjectBuilder.java:306)
    at com.adobe.flexbuilder.project.compiler.internal.FlexIncrementalBuilder.build(FlexIncrementalBuilder.java:157)
    at org.eclipse.core.internal.events.BuildManager$2.run(BuildManager.java:627)
    at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
    at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:170)
    at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:201)
    at org.eclipse.core.internal.events.BuildManager$1.run(BuildManager.java:253)
    at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
    at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:256)
    at org.eclipse.core.internal.events.BuildManager.basicBuildLoop(BuildManager.java:309)
    at org.eclipse.core.internal.events.BuildManager.build(BuildManager.java:341)
    at org.eclipse.core.internal.events.AutoBuildJob.doBuild(AutoBuildJob.java:140)
    at org.eclipse.core.internal.events.AutoBuildJob.run(AutoBuildJob.java:238)
    at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

    Read the article

  • Hive MR map progress inconsistent and regularly restarts from 0%

    - by user92471
    I have a Yarn MR job (with two EC2 instances doing the mapreduce) on a dataset of approximately a thousand Avro records, and the map phase is behaving erratically. See the progress below. Of course I checked the logs on the resourcemanager and the nodemanagers and saw nothing suspicious, but these logs are too verbose. What is going on there?

hive> select * from nikon where qs_cs_s_aid='VIEW' limit 10;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1352281315350_0020, Tracking URL = http://blabla.ec2.internal:8088/proxy/application_1352281315350_0020/
Kill Command = /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=blabla.com:8032 -kill job_1352281315350_0020
Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 0
2012-11-07 11:14:40,976 Stage-1 map = 0%, reduce = 0%
2012-11-07 11:15:06,136 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 10.38 sec
2012-11-07 11:15:07,253 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 12.18 sec
2012-11-07 11:15:08,371 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 12.18 sec
2012-11-07 11:15:09,491 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 12.18 sec
2012-11-07 11:15:10,643 Stage-1 map = 2%, reduce = 0%, Cumulative CPU 15.42 sec
(...)
2012-11-07 11:15:35,441 Stage-1 map = 28%, reduce = 0%, Cumulative CPU 37.77 sec
2012-11-07 11:15:36,486 Stage-1 map = 28%, reduce = 0%, Cumulative CPU 37.77 sec

here it restarts at 16%?

2012-11-07 11:15:37,692 Stage-1 map = 16%, reduce = 0%, Cumulative CPU 21.15 sec
2012-11-07 11:15:38,815 Stage-1 map = 16%, reduce = 0%, Cumulative CPU 21.15 sec
2012-11-07 11:15:39,865 Stage-1 map = 16%, reduce = 0%, Cumulative CPU 21.15 sec
2012-11-07 11:15:41,064 Stage-1 map = 18%, reduce = 0%, Cumulative CPU 22.4 sec
2012-11-07 11:15:42,181 Stage-1 map = 18%, reduce = 0%, Cumulative CPU 22.4 sec
2012-11-07 11:15:43,299 Stage-1 map = 18%, reduce = 0%, Cumulative CPU 22.4 sec

here it restarts at 0%?

2012-11-07 11:15:44,418 Stage-1 map = 0%, reduce = 0%
2012-11-07 11:16:02,076 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 6.86 sec
2012-11-07 11:16:03,193 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 6.86 sec
2012-11-07 11:16:04,259 Stage-1 map = 2%, reduce = 0%, Cumulative CPU 8.45 sec
(...)
2012-11-07 11:16:31,291 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 35.34 sec
2012-11-07 11:16:32,414 Stage-1 map = 26%, reduce = 0%, Cumulative CPU 37.93 sec

here it restarts at 11%?

2012-11-07 11:16:33,459 Stage-1 map = 11%, reduce = 0%, Cumulative CPU 19.53 sec
2012-11-07 11:16:34,507 Stage-1 map = 11%, reduce = 0%, Cumulative CPU 19.53 sec
2012-11-07 11:16:35,731 Stage-1 map = 13%, reduce = 0%, Cumulative CPU 21.47 sec
(...)
2012-11-07 11:16:46,839 Stage-1 map = 17%, reduce = 0%, Cumulative CPU 24.14 sec

here it restarts at 0%?

2012-11-07 11:16:47,939 Stage-1 map = 0%, reduce = 0%
2012-11-07 11:16:56,653 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 7.54 sec
2012-11-07 11:16:57,814 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 7.54 sec
(...)

Needless to say, the job crashes after some time with an Error: java.io.IOException: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: -56

    Read the article

  • H12 timeout error on Heroku

    - by snowangel
    Can anyone shed some light on what's causing this timeout error on Heroku (at 2012-07-08T08:58:33+00:00)? The docs say it's caused by some long-running process, and I've already set config.assets.initialize_on_precompile = false in config/application.rb.

    EmBP-2:bc Emma$ heroku restart
    Restarting processes... done
    EmBP-2:bc Emma$ heroku logs --tail
    2012-07-08T08:47:21+00:00 heroku[nginx]: 82.69.50.215 - - [08/Jul/2012:08:47:21 +0000] "GET /assets/application.js HTTP/1.1" 200 311723 "https://codicology.co.uk/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk
    2012-07-08T08:47:21+00:00 heroku[nginx]: 127.0.0.1 - - [08/Jul/2012:08:47:21 +0000] "GET /assets/application.js HTTP/1.0" 200 1311615 "https://codicology.co.uk/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk
    2012-07-08T08:51:32+00:00 heroku[slugc]: Slug compilation started
    2012-07-08T08:54:05+00:00 heroku[api]: Release v145 created by [email protected]
    2012-07-08T08:54:05+00:00 heroku[api]: Deploy 8814b2f by [email protected]
    2012-07-08T08:54:05+00:00 heroku[web.1]: State changed from up to starting
    2012-07-08T08:54:06+00:00 heroku[slugc]: Slug compilation finished
    2012-07-08T08:54:09+00:00 heroku[web.1]: Stopping all processes with SIGTERM
    2012-07-08T08:54:09+00:00 heroku[worker.1]: Stopping all processes with SIGTERM
    2012-07-08T08:54:09+00:00 heroku[web.1]: Starting process with command `bundle exec unicorn -p 22429 -c ./config/unicorn.rb`
    2012-07-08T08:54:10+00:00 app[worker.1]: [Worker(host:2046e0bf-e109-40f2-abdb-10f69d224483 pid:1)] Exiting...
    2012-07-08T08:54:11+00:00 app[web.1]: I, [2012-07-08T08:54:11.320616 #1] INFO -- : reaped #<Process::Status: pid 8 exit 0> worker=1
    2012-07-08T08:54:11+00:00 app[web.1]: I, [2012-07-08T08:54:11.376272 #1] INFO -- : reaped #<Process::Status: pid 5 exit 0> worker=0
    2012-07-08T08:54:11+00:00 app[web.1]: I, [2012-07-08T08:54:11.376765 #1] INFO -- : master complete
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.011386 #1] INFO -- : listening on addr=0.0.0.0:22429 fd=3
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.011695 #1] INFO -- : worker=0 spawning...
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.016768 #1] INFO -- : worker=1 spawning...
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.017917 #5] INFO -- : worker=0 spawned pid=5
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.018250 #5] INFO -- : Refreshing Gem list
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.019309 #1] INFO -- : master process ready
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.020617 #8] INFO -- : worker=1 spawned pid=8
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.020863 #8] INFO -- : Refreshing Gem list
    2012-07-08T08:54:12+00:00 app[worker.1]: SQL (2.9ms) UPDATE "delayed_jobs" SET locked_by = null, locked_at = null WHERE (locked_by = 'host:2046e0bf-e109-40f2-abdb-10f69d224483 pid:1')
    2012-07-08T08:54:12+00:00 heroku[web.1]: Process exited with status 0
    2012-07-08T08:54:13+00:00 heroku[web.1]: State changed from starting to up
    2012-07-08T08:54:14+00:00 heroku[worker.1]: Process exited with status 0
    2012-07-08T08:54:14+00:00 heroku[worker.1]: State changed from up to down
    2012-07-08T08:54:14+00:00 heroku[worker.1]: State changed from down to starting
    2012-07-08T08:54:20+00:00 heroku[worker.1]: Starting process with command `bundle exec rake jobs:work`
    2012-07-08T08:54:20+00:00 heroku[worker.1]: State changed from starting to up
    2012-07-08T08:54:28+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6) [logged four times]
    2012-07-08T08:54:33+00:00 app[web.1]: Starting the New Relic Agent. [logged twice]
    2012-07-08T08:54:33+00:00 app[web.1]: Installed New Relic Browser Monitoring middleware [logged twice]
    2012-07-08T08:54:34+00:00 app[web.1]: [DEVISE] Devise.use_salt_as_remember_token is deprecated and has no effect. Please remove it. [logged twice]
    2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant XLSX [logged twice]
    2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF [logged four times]
    2012-07-08T08:54:41+00:00 app[worker.1]: (same Rails 2.3-style plugins DEPRECATION WARNING, called from <top (required)> at /app/Rakefile:10) [logged twice]
    2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importadvancecsv class
    (the same [paperclip] warning is logged twice each for the Importadvancecsv, Importpaymentcsv, Importpurchasecsv, Importsalecsv and Profitarchive classes, and twice for xml with the Onixarchive class)
    2012-07-08T08:54:48+00:00 app[web.1]: I, [2012-07-08T08:54:48.467693 #8] INFO -- : worker=1 ready
    2012-07-08T08:54:48+00:00 app[web.1]: I, [2012-07-08T08:54:48.823800 #5] INFO -- : worker=0 ready
    2012-07-08T08:54:48+00:00 app[worker.1]: Starting the New Relic Agent.
    2012-07-08T08:54:48+00:00 app[worker.1]: New Relic Agent not running.
    2012-07-08T08:54:48+00:00 app[worker.1]: [Worker(host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1)] New Relic Ruby Agent Monitoring DJ worker host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1
    2012-07-08T08:54:48+00:00 app[worker.1]: Installed New Relic Browser Monitoring middleware
    2012-07-08T08:54:49+00:00 app[worker.1]: [Worker(host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1)] Starting job worker
    2012-07-08T08:57:54+00:00 heroku[web.1]: State changed from up to starting
    2012-07-08T08:57:56+00:00 heroku[web.1]: Stopping all processes with SIGTERM
    2012-07-08T08:57:57+00:00 app[web.1]: I, [2012-07-08T08:57:57.047386 #1] INFO -- : reaped #<Process::Status: pid 5 exit 0> worker=0
    2012-07-08T08:57:57+00:00 app[web.1]: I, [2012-07-08T08:57:57.047753 #1] INFO -- : reaped #<Process::Status: pid 8 exit 0> worker=1
    2012-07-08T08:57:57+00:00 app[web.1]: I, [2012-07-08T08:57:57.047999 #1] INFO -- : master complete
    2012-07-08T08:57:57+00:00 heroku[worker.1]: Stopping all processes with SIGTERM
    2012-07-08T08:57:58+00:00 heroku[web.1]: Process exited with status 0
    2012-07-08T08:57:58+00:00 app[worker.1]: [Worker(host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1)] Exiting...
    2012-07-08T08:57:59+00:00 heroku[web.1]: Starting process with command `bundle exec unicorn -p 29766 -c ./config/unicorn.rb`
    2012-07-08T08:58:01+00:00 app[worker.1]: SQL (27.9ms) UPDATE "delayed_jobs" SET locked_by = null, locked_at = null WHERE (locked_by = 'host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1')
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.070527 #1] INFO -- : listening on addr=0.0.0.0:29766 fd=3
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.070782 #1] INFO -- : worker=0 spawning...
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.074498 #1] INFO -- : worker=1 spawning...
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.075702 #1] INFO -- : master process ready
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.076732 #5] INFO -- : worker=0 spawned pid=5
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.076957 #5] INFO -- : Refreshing Gem list
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.089022 #8] INFO -- : worker=1 spawned pid=8
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.089299 #8] INFO -- : Refreshing Gem list
    2012-07-08T08:58:02+00:00 heroku[worker.1]: Process exited with status 0
    2012-07-08T08:58:02+00:00 heroku[worker.1]: State changed from up to down
    2012-07-08T08:58:02+00:00 heroku[worker.1]: State changed from down to starting
    2012-07-08T08:58:02+00:00 heroku[web.1]: State changed from starting to up
    2012-07-08T08:58:10+00:00 heroku[worker.1]: Starting process with command `bundle exec rake jobs:work`
    2012-07-08T08:58:11+00:00 heroku[worker.1]: State changed from starting to up
    2012-07-08T08:58:28+00:00 app[worker.1]: (same Rails 2.3-style plugins DEPRECATION WARNING, called from <top (required)> at /app/Rakefile:10) [logged twice]
    2012-07-08T08:58:33+00:00 heroku[router]: Error H12 (Request timeout) -> GET codicology.co.uk/ dyno=web.1 queue= wait= service=30000ms status=503 bytes=0
    2012-07-08T08:58:33+00:00 heroku[nginx]: 127.0.0.1 - - [08/Jul/2012:08:58:33 +0000] "GET / HTTP/1.0" 503 601 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk
    2012-07-08T08:58:33+00:00 heroku[nginx]: 82.69.50.215 - - [08/Jul/2012:08:58:33 +0000] "GET / HTTP/1.1" 503 601 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk
    2012-07-08T08:58:42+00:00 app[worker.1]: New Relic Agent not running.
    2012-07-08T08:58:42+00:00 app[worker.1]: [Worker(host:b5fa9243-6f9b-4de4-8f64-adab767fe4b0 pid:1)] New Relic Ruby Agent Monitoring DJ worker host:b5fa9243-6f9b-4de4-8f64-adab767fe4b0 pid:1
    2012-07-08T08:58:42+00:00 app[worker.1]: Starting the New Relic Agent.
    2012-07-08T08:58:42+00:00 app[worker.1]: Installed New Relic Browser Monitoring middleware
    2012-07-08T08:58:43+00:00 app[worker.1]: [Worker(host:b5fa9243-6f9b-4de4-8f64-adab767fe4b0 pid:1)] Starting job worker
    2012-07-08T08:58:56+00:00 app[web.1]: (same Rails 2.3-style plugins DEPRECATION WARNING, called from <top (required)> at /app/config/environment.rb:6) [logged four times]
    2012-07-08T08:59:02+00:00 app[web.1]: Starting the New Relic Agent. [logged twice]
    2012-07-08T08:59:02+00:00 app[web.1]: Installed New Relic Browser Monitoring middleware [logged twice]
    2012-07-08T08:59:03+00:00 app[web.1]: [DEVISE] Devise.use_salt_as_remember_token is deprecated and has no effect. Please remove it. [logged twice]
    2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant XLSX [logged twice]
    2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF [logged four times]
    2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importadvancecsv class
    (again, the same [paperclip] warning is logged twice each for the Importadvancecsv, Importpaymentcsv, Importpurchasecsv, Importsalecsv and Profitarchive classes, and twice for xml with the Onixarchive class)
    2012-07-08T08:59:25+00:00 app[web.1]: I, [2012-07-08T08:59:25.555052 #5] INFO -- : worker=0 ready
    2012-07-08T08:59:25+00:00 app[web.1]: Started GET "/" for 82.69.50.215 at 2012-07-08 08:59:25 +0000
    2012-07-08T08:59:26+00:00 app[web.1]: Processing by PagesController#home as HTML
    2012-07-08T08:59:26+00:00 app[web.1]: I, [2012-07-08T08:59:26.043501 #8] INFO -- : worker=1 ready
    2012-07-08T08:59:26+00:00 app[web.1]: Rendered pages/home.html.haml within layouts/application (5.7ms)
    2012-07-08T08:59:26+00:00 app[web.1]: (1.1ms) SELECT COUNT(*) FROM "delayed_jobs"
    2012-07-08T08:59:26+00:00 app[web.1]: Rendered layouts/_header.html.erb (4.2ms)
    2012-07-08T08:59:26+00:00 app[web.1]: Rendered layouts/_footer.html.haml (1.4ms)
    2012-07-08T08:59:26+00:00 app[web.1]: Completed 200 OK in 326ms (Views: 258.4ms | ActiveRecord: 65.2ms)
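
    The log itself suggests what happened: the router gave up after 30 seconds (service=30000ms) while the freshly restarted web dyno was still booting. The new Unicorn master started listening at 08:58:02, the H12 fired at 08:58:33, and the workers did not report ready until 08:59:25, so the request arrived while each worker was still loading the whole Rails app. A config/unicorn.rb along the lines of the sketch below, following the Unicorn-on-Heroku guidance Heroku published around this time, preloads the app so workers fork ready-to-serve and translates Heroku's SIGTERM into the graceful shutdown Unicorn expects. This is only a sketch; the worker count and timeout are assumptions, not values taken from the app in the log above.

    # config/unicorn.rb -- a minimal sketch following Heroku's published
    # Unicorn guidance; worker count and timeout are assumptions.
    worker_processes 3      # tune to the dyno's available memory
    timeout 25              # fail a stuck worker before the router's 30 s H12 limit
    preload_app true        # boot Rails once in the master so forked workers are ready fast

    before_fork do |server, worker|
      # Heroku stops dynos with SIGTERM, which Unicorn treats as an immediate
      # shutdown; translate it into the graceful QUIT Unicorn expects.
      Signal.trap 'TERM' do
        puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
        Process.kill 'QUIT', Process.pid
      end

      # With preload_app true, drop inherited connections before forking.
      defined?(ActiveRecord::Base) and
        ActiveRecord::Base.connection.disconnect!
    end

    after_fork do |server, worker|
      # Workers ignore TERM and wait for the master's QUIT instead.
      Signal.trap 'TERM' do
        puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
      end

      # Re-establish the database connection in each forked worker.
      defined?(ActiveRecord::Base) and
        ActiveRecord::Base.establish_connection
    end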


  • Android: Getting Error: Conversion to Dalvik format failed

    - by Rupesh C
    I am building an app on Android and running into an error. While searching the net I came across your posting on this and changed eclipse.ini to increase the Xms and Xmx params, but this error still does not go away. I am using the Eclipse IDE for Java with the Android SDK 2.1 on Mac OS. Please help, or point me to someone who might know. By the way, this error only happens when I add external jar files (which I need for my project). Here is the list of external jar files on my classpath:
    // httpclient-4.0.1.jar from Apache
    // httpcore-4.0.1.jar from Apache
    // commons-codec-1.3.jar from Apache
    // commons-logging-1.1.1.jar from Apache
    // json_simple-1.1.jar from Google
    Here is the complete error (every line below is prefixed with [2010-05-02 21:57:05 - MyApp] in the Eclipse console):
    UNEXPECTED TOP-LEVEL EXCEPTION:
    java.lang.IllegalArgumentException: already added: Lorg/apache/commons/logging/impl/AvalonLogger;
        at com.android.dx.dex.file.ClassDefsSection.add(ClassDefsSection.java:123)
        at com.android.dx.dex.file.DexFile.add(DexFile.java:143)
        at com.android.dx.command.dexer.Main.processClass(Main.java:301)
        at com.android.dx.command.dexer.Main.processFileBytes(Main.java:278)
        at com.android.dx.command.dexer.Main.access$100(Main.java:56)
        at com.android.dx.command.dexer.Main$1.processFileBytes(Main.java:229)
        at com.android.dx.cf.direct.ClassPathOpener.processArchive(ClassPathOpener.java:244)
        at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:130)
        at com.android.dx.cf.direct.ClassPathOpener.process(ClassPathOpener.java:108)
        at com.android.dx.command.dexer.Main.processOne(Main.java:247)
        at com.android.dx.command.dexer.Main.processAllFiles(Main.java:183)
        at com.android.dx.command.dexer.Main.run(Main.java:139)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:592)
        at com.android.ide.eclipse.adt.internal.sdk.DexWrapper.run(Unknown Source)
        at com.android.ide.eclipse.adt.internal.build.ApkBuilder.executeDx(Unknown Source)
        at com.android.ide.eclipse.adt.internal.build.ApkBuilder.build(Unknown Source)
        at org.eclipse.core.internal.events.BuildManager$2.run(BuildManager.java:627)
        at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
        at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:170)
        at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:201)
        at org.eclipse.core.internal.events.BuildManager$1.run(BuildManager.java:253)
        at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
        at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:256)
        at org.eclipse.core.internal.events.BuildManager.basicBuildLoop(BuildManager.java:309)
        at org.eclipse.core.internal.events.BuildManager.build(BuildManager.java:341)
        at org.eclipse.core.internal.events.AutoBuildJob.doBuild(AutoBuildJob.java:140)
        at org.eclipse.core.internal.events.AutoBuildJob.run(AutoBuildJob.java:238)
        at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
    4 errors; aborting
    Conversion to Dalvik format failed with error 1
    Thanks, Rupesh

