Search Results

Search found 2769 results on 111 pages for 'puzzled late at night'.


  • What's Up for "We're Almost There" Wednesday

    - by Oracle OpenWorld Blog Team
     By Karen Shamban. Wow - can't believe we're looking at Wednesday already! Still so much to do, places to go, people to talk with. The last day for the Exhibition Halls is Wednesday, so be sure to spend time there if you haven't done so already. And don't forget (as if you would) that the famed Oracle Appreciation Event is Wednesday night on Treasure Island. Here are just some of the big things happening Wednesday, October 3:
     • Registration: Moscone West, Moscone South, Hilton San Francisco, Westin St. Francis, Hotel Nikko, 7:00 a.m. - 6:30 p.m.
     • Oracle OpenWorld Keynote, featuring Oracle executives John Fowler, Edward Screven, and Juan Loaiza: Moscone North Hall D, 8:00 a.m. - 9:45 a.m.
     • Exhibition Halls Open: Moscone South and Moscone West, 9:45 a.m. - 4:00 p.m.
     • General Sessions: various times and locations
     • Sessions, Demos, Labs: various times and locations
     • Oracle Appreciation Event, featuring Pearl Jam, with Kings of Leon and X: Treasure Island, 7:30 p.m. - 1:00 a.m. (note: must have approved wristband to attend)
     After what is sure to be a late night, it's good to know that the Thursday keynotes don't start until 9:00 a.m. They're going to be really great, so you won't want to miss them!

    Read the article

  • Oracle Open World / Public Sector / Identity Platform

    - by user12604761
    For those attending Oracle Open World (Oct. 1st - 3rd, 2012 at the Moscone Center in San Francisco), note the OOW Focus On Public Sector track. Oracle's foundational Identity and Access Management and Database Security products that support government security ICAM solutions are also covered extensively during the event, with the focus on Oracle's modern Identity Management Platform:
    • Integrated Identity Governance
    • Mobile Access Management
    • Complete Access Management
    • Low Risk Upgrades
    The options for attendees include 18 Identity and Access Management sessions, 9 Identity and Access Management demonstration topics at the Identity Management Demo Grounds, and 2 hands-on labs, as well as 21 database security sessions. Oracle Public Sector Reception at OOW: join Oracle's Public Sector team on Monday, October 1 for a night of food and sports in a casual setting at Jillian's, adjacent to Moscone Center on Fourth Street. In addition to meeting the Public Sector team, you can enjoy Monday Night Football on several big-screen TVs in a fun sports atmosphere. When: Monday, October 1, 6:30 p.m. - 9:30 p.m. Where: Jillian's, 101 Fourth Street, San Francisco

    Read the article

  • You've been working on a platform for as long as you remember. Not anymore. How does it feel?

    - by Shinnok
    How does it feel to have worked on a platform for as long as you can remember, encouraged to innovate, to improve, and to give your all day and night for that platform, be it an operating system, a hardware architecture, or a software framework/library, and then be forced to switch bases because the platform was abandoned overnight? It has happened before, many times, e.g. to SGI/IRIX, more recently to Sun/OpenSolaris, and now Nokia/Symbian. Have you been part of such a shift? If so, please share the story and describe your feelings at the time and, if it applies, how you managed the situation. Reorientation? Giving up on the field and turning to the things you've been constantly putting aside, like family? Many did so (e.g. people at Netscape). You may not think of it as such a big deal, but it is: after you've worked 10 to 20+ years on a platform/technology and are then forced to switch your expertise and mindset, the feeling tends to become really strong, and some people really do give up this crazy field and start enjoying a normal life. I'd love to hear your stories.

    Read the article

  • How to keep the trunk stable when tests take a long time?

    - by Oak
    We have three sets of test suites:
    • A "small" suite, taking only a couple of hours to run
    • A "medium" suite that takes multiple hours, usually run every night (nightly)
    • A "large" suite that takes a week+ to run
    We also have a bunch of shorter test suites, but I'm not focusing on them here. The current methodology is to run the small suite before each commit to the trunk. Then the medium suite runs every night, and if in the morning it turns out it failed, we try to isolate which of yesterday's commits was to blame, roll back that commit, and retry the tests. A similar process, only at a weekly instead of nightly frequency, is done for the large suite. Unfortunately, the medium suite fails pretty frequently. That means the trunk is often unstable, which is extremely annoying when you want to make modifications and test them. It's annoying because when I check out from the trunk, I cannot know for certain it's stable, and if a test fails I cannot know for certain whether it's my fault or not. My question is: is there some known methodology for handling these kinds of situations in a way which will leave the trunk always in top shape? E.g. "commit into a special precommit branch which will then periodically update the trunk every time the nightly passes". And does it matter if it's a centralized source control system like SVN or a distributed one like git? By the way, I am a junior developer with a limited ability to change things; I'm just trying to understand if there's a way to handle this pain I am experiencing.

    Read the article

  • ATI HD5450 w/ Ubuntu 14?

    - by Oliwb
    So, I'm running Ubuntu 12.04 right now. Last night I realised that I'm way behind, as we're up to 14! I decided to run the updater and figured I'd take the path of least resistance (but the lengthy choice) of going 12.04 - 12.10 - 13.xx - 14.xx. So I download the first package and then get an error message about my graphics card perhaps not working in 12.10. Now, part of the reason I was looking to upgrade is that I get (and have always had) this strange occasional flickering; now that I have two screens it's just on the second monitor, and oddly this is not the same port that was giving issues before. The graphics card is an ATI Radeon HD5450 and I have the Catalyst driver (I think it's 13 or 14) installed, as of last night. It could be that the graphics card has never worked properly... I bought the PC new with an "upgraded" video card and it has always suffered from this flicker. I just figured that the drivers weren't right or something. So I have 3 questions:
    1) Is my video card broken, or is the driver letting it down and causing the flicker?
    2) Will it be able to handle the upgrade to 14 via 13? Or should I cut my losses and get something newer?
    3) If I should get something newer, what should I get?
    Thanks in advance....

    Read the article

  • Accidentally Changed Dual Monitor Setup and don't know how to reset it

    - by user203783
    I have a dual monitor setup with my laptop and an external Asus monitor running under Ubuntu 11.10. When I first started using Ubuntu, it synced both screens: what showed up on my laptop showed up on my external monitor. Last night, I accidentally knocked the HDMI cable from its port (not that unusual in the tight space I work in). When I plugged it back in, as usual, the external monitor only displayed my wallpaper. Usually, I just restart Ubuntu and it resets, but last night what I now realize was the display console came up and somehow I changed the setup. Now the two screens show different jobs, so to speak. Also, the external monitor doesn't display the ribbon, making switching between Firefox tabs and windows or apps jarring. I write for a living and need to quickly check bits of research, notes or sources while writing, and the extra switching between mouse pad and external mouse to get one or the other screen to respond interrupts my work flow. The only technical help I can offer is that when the display console popped up, the graphical representation of the screens showed the external screen layered over by about one-quarter of the laptop screen. Again, these were icons. I tried to replicate that after I obviously screwed things up. If someone could help, I would appreciate it. If I have to use the terminal, that's fine. I started using computers for writing back when they were green DOS prompts. There's a certain elegance to prompt commands to me. Maybe it's the writer in me. Again, if anyone can help me return my screens to sync under Ubuntu 11.10, I would greatly appreciate it.

    Read the article

  • Server 2003 Remote Desktop loses its virtual printer image of the local printer

    - by Charles Hart
    Server 2003 Remote Desktop provides service to stores served by several ISPs. The server loses its virtual printer image of the local printer (as seen from the remote store site) and a copy of the original local printer appears on the local computer with a different driver without notice. Specifically: A remote desktop session is opened on a local computer that has a Brother HL2140 USB printer connected and the associated software installed with a correct driver shown under the “advanced” button. The server has the same Brother software and driver. An application that is running on the server attempts to print on the local printer connected to the local computer running Vista Pro or XP Pro. Either it works correctly (Good) or it does not print (Bad) or it prints on another Local Printer connected to another local computer logged into the server (Bad and Odd). When it doesn’t print (or prints somewhere else) we ask the customer to look for the (virtual) printer using the Remote desktop view of the server and the printer is gone. Then we ask the customer to look at the printers folder in the local computer. There are several possibilities: The printer is there, but the driver is mysteriously changed in the drop down to MDX something; we have the customer select the other (proper) Brother driver, and all is well again, as now after the change, the virtual printer in the server (which now matches the local printer) appears again, and so printing can resume. A “copy” of the printer mysteriously appears in the local printer’s folder and after we delete it the virtual printer in the server appears again and so printing can resume. Note that in both case 1 and 2, the server sometimes sends the print job elsewhere, to some other local computer. Meanwhile in the log file, endless errors are reported and the server eventually crashes, sometimes twice a day. I’m puzzled what changes the local printer driver and I’m puzzled what loads the copy 2 or copy 3 of the printer in the local printer folder. This entire description randomly occurs on any of 40+ local computers in eight different locations in different ISPs, all sharing one Domain.

    Read the article

  • SharePoint Saturday Michigan 2010 Recap, Slides, and Photos

    - by Brian Jackett
    This past weekend I attended SharePoint Saturday Michigan (SPSMI) in Ann Arbor, Michigan.  For those unfamiliar, SharePoint Saturday is a community driven event where various speakers gather to present at a FREE conference on all topics related to SharePoint.  This made my third SharePoint Saturday attended and second I’ve spoken at.  I believe today it was announced that about 210 people total attended the event.  I was very happy with the turnout, especially the ratio of male to female attendees.  Typically with computer related conferences the ratio leans towards more males attending, but both Peter Serzo (one of conference organizers) and I both commented to each other that at the end of the day it appeared to be close to 40% women in the crowd.  So here’s my recap of the weekend. Arrival     Friday afternoon I drove up from Columbus, OH to Ann Arbor, MI and arrived around 4pm.  I was attempting to avoid the rush hour traffic and construction backups.  Turned out to be a good idea because other speakers coming up Friday got stuck on a highway which literally closed down in both directions due to a bad accident.  I was talking my friend Sean McDonough through the highway closing and this was the first time I had seen a solid black traffic line on Google Maps.  Most of us are familiar with Green, Yellow, and Red, but this line was black if that tells you how bad it got. Speaker “Dinner”     Fast forward a few hours and it was time for the speaker “dinner.”  I put “dinner” in quotes because with this night alone SPSMI set a new bar for nicest and most extravagant speaker appreciation events for SharePoint Saturday.  By tapping into some very influential contacts, the conference organizers were able to provide a truck limo (yep you heard right) with refreshments, access to an underground suite at the Palace of Auburn Hills, and courtside tickets to see the Detroit Pistons play that night.  Being a Michigan native I have to say that I was absolutely floored by this experience and very thankful to our conference organizers Peter, Sebastian, and Jesse along with Trillium Teamologies. Sessions     The actual conference started Saturday morning at 9am with the keynote by Rob Collie who is the Microsoft program manager for PowerPivot.  The day continued and I attended the following sessions: Mike Watson (@mikewat) – “SharePoint 2010 Fight Night: Devs vs. Admins” Karl Swedeberg (@kswedberg) – “A Walk on the Client Side with jQuery“ [my session] Brian Jackett (@briantjackett) - “Real World Deployment of SharePoint 2007 Solutions” Jeff Willinger (@jwillie) - “Social Computing and Collaboration Inside and Outside the 4 Walls” Paul Schaeflein (@paulschaeflein) – “PowerShell for the SharePoint Developer” My Presentation     I had a great time presenting my session on Deploying SharePoint 2007 Solutions, but it wasn’t without its fair share of technical issues.  As my session was right after lunch I came in to my room 10 mins early to set up my laptop, slides, and demos.  As a quick background note, a few months ago I got an upgraded laptop from my company Sogeti and have been dual booting it between XP (factory installed) and Windows Server 2008 R2 w/ Hyper-V.  As such I had prepared all of my demo virtual machines to run under Hyper-V.  
About 3 minutes before my session was scheduled to start though it became apparent that I did not have the correct display drivers to connect Windows Server 2008 R2 to the projector…     As you can imagine this was a slight cause for concern as I was potentially going to be unable to give my presentation.  Luckily for me I usually prepare for such unforeseen issues and had my presentation and some spare VMs that would run on XP on my external hard drive.  Knowing this I rebooted my machine into XP and began my presentation without slides until about 5 mins into the session when everything was up and running on XP.  Despite this being the first time I gave this presentation I have to say it was one of my favorites I’ve given so far.  The audience was very engaged in the session and I received some great, positive feedback afterwards.  Thanks to all who attended my session, I appreciate it very much. Link to Presentation Files     For those of you who attended my session and would like my slides or demo PowerShell scripts they can be found on my SkyDrive at the link below.  Also, if you have a few minutes and wouldn’t mind rating my session I have this session posted on SpeakerRate.  As speakers we always appreciate any and all feedback attendees offer, so thank you if you are able to provide any. SkyDrive folder with session files Rate my SharePoint 2007 Solutions session   Picture Albums     For everyone else, here are my pictures from the weekend.  The first link is to my FaceBook album which will have tagging (recommend this one.)  The second is to my Live album if you care for higher resolution images. http://www.facebook.com/album.php?aid=2154482&id=21905041&l=a3fb72ee8c View Full Album Conclusion     A big thank you goes out to all of the organizers, speakers, sponsors, and attendees of SPSMI.  As I’ve said so many times, without each and every one of you these events wouldn’t be possible.  I thoroughly enjoyed this trip back to my home state and presenting a new session.  For those interested in my upcoming schedule I will be giving two sessions on PowerShell at SharePoint Saturday Charlotte in April, helping plan Stir Trek: Iron Man Edition in May, and I’m submitting sessions to Day of .Net Ann Arbor in May as well.  Beyond that I haven’t planned out any travels.  Thanks for reading my recap.  Look forward to more technical posts now that I have a short break in conferences.         -Frog Out   links: Michigan image

    Read the article

  • Stir Trek: Iron Man Edition Recap and Photos

    - by Brian Jackett
    If you’ve noticed my blogging activity has reduced in frequency and technical content lately it’s primarily due to all of the conferences I’ve been attending, speaking at, or planning in the past few months.  This past Friday myself and six other dedicated individuals put on Stir Trek: Iron Man Edition as the culmination of a few months of hard work.  For those unfamiliar, Stir Trek is a web developer conference that was founded last year as an event to showcase content from Microsoft’s MIX conference and end the day with a private showing of the then just-released Star Trek movie.  This year’s conference expanded from 2 to 4 content tracks and upped the number of tickets from 350 to 600.  Even more amazing was the fact that we had 592 people show up day of the event for the lowest drop-off percentage of any conference I’ve been to before.   Nerd Dinner and Swag Bags     The night before Stir Trek: Iron Man Edition we hosted a nerd dinner at the Polaris Shopping mall food court with about 30 in attendance.  Nerd dinners are a great time to meet others passionate about technology and socialize before the whirlwind of the conference hits.  After the nerd dinner 20+ volunteers headed to the conference location and helped us stuff swag bags.  This in and of itself was a monumental task of putting together 600 swag bags with numerous leaflets, sponsor items, and t-shirts.  A big thanks goes out to all who assisted us that night so that we could finish in just under 2 hours instead of taking all night.  My sleep schedule also thanks you. Morning of Stir Trek     After getting a decent amount of sleep I arrived at Marcus Crosswoods theater at 6am to begin setting up for the day.  Myself and Jody Morgan were in charge of registration so we got tables set up, laid out swag bags, and organized our volunteer crew to assist with checking-in attendees.  Despite having 600+ people registration went fairly smoothly and got the day off to a great start.  I especially appreciated the 3+ cups of coffee from Crimson Cup, a local coffee shop.  For any of you that know me you’ll know that I rarely drink coffee except a few times a year when I really need the energy, so that says a lot about how good their coffee is.   Conference Starts     Once registration was completed the day kicked off with Molly Holzschlag keynoting.  Unfortunately Molly suffered from an ear infection and wasn’t able to fly so she had a virtual keynote and a session later in the day.  I was working behind the scenes on various tasks so I was only able to drop in very briefly on the keynote and rest of the morning sessions.  Throughout the day I tried to grab at least 1 or 2 pics of each presenter.  See my album below for the full set of pics.      For lunch we ordered around 150 pizzas from Mellow Mushroom, a local pizza place (notice the theme of supporting local businesses.)  Early on we were concerned about Mellow Mushroom being able to supply that many pizzas and get them delivered (still hot) to the theater, but they did an excellent job day of the event.  I wish I had gotten some pictures of the old school VW van they delivered the pizza in, but I was just a bit busy running around trying to get theaters ready for lunch.  We had attendees from last year who specifically requested that we have Mellow Mushroom supply lunch this year and I’m glad everything worked out being able to use them again.     During the afternoon I was able to attend a few sessions and hear some great content from various speakers.  
It was also nice to just sit down and get off my feet for a bit.  After the last sessions the day concluded with a raffle.  There were a few logistical and technical issues that hampered our ability to smoothly conduct the raffle.  To those of you that agree the raffle wasn’t the smoothest experience I would like to say that the Stir Trek planning committee has already begun meeting to discuss ways of improving the conference for next year.  We are also accepting feedback (both positive and negative) at the following link: click here.  If you don’t wish to use the Joind In site you can also email me directly and I’ll be sure to pass along the feedback.   Iron Man 2 Movie     Last but not least, what Stir Trek event would be complete without the feature movie.  This year’s movie was Iron Man 2.  The theater had some really cool props and promotions (see pic below) for the movie.  I really enjoyed Iron Man 2, but I would recommend brushing up on the Iron Man comics and Marvel’s plans for future movies to understand some of the plot elements that come up.  Also make sure you stay through to the end of the movie credits to see a sneak peak of something special, that’s all I’ll say. Conclusion     Again a big thanks goes out to all of the speakers, sponsors, attendees, movie theater staff, volunteers, and everyone else involved in making this event great.  Also big thanks to my fellow Stir Trek planning committee members: Jeff Blankenburg, Matt Casto, Carey Payette, Jody Morgan, Rick Kierner, and Sarah Dutkiewitcz.  I am grateful for everything I learned while helping plan this event and look forward to being involved again next year.  For those interested we are currently targeting Thor as our movie theme for 2011 and then The Avengers for 2012.  These are tentative based on release dates that could shift as we get closer, but for now look solid.   Photos Pics on Facebook (includes tagging)     Stir Trek: Iron Man Edition photos on Facebook Pics on Live site (higher res)      View Full Album         -Frog Out

    Read the article

  • Ok it has been pointed out to me

    - by Ratman21
    That it seems my blog is more of a "poor me" or "pity me" or "I deserve a job" blog. Hmmm, I won't say I have not whined here, as I have used this blog to vent my frustration on the whole out-of-work thing (lack of money, self worth, family issues and the never-ending bills coming my way), but it was also me trying to reach out to others in the same boat, as well as advertising: hey, I am out here, employers. It was also said that I don't have anything listed here on me, like a cover letter or resume. Well, there is, but it was so many months and posts ago, and what I had posted is not current. So here is my most current cover letter and resume.
    Scott L Newman, 45219 Dutton Way, Callahan, Fl. 32011
    To Whom It May Concern: I am really interested in the IT vacancy that you have listed for your company. Maybe I don't have all the qualifications you want (hold on, don't hit delete yet) yet! But maybe I do, as I have over 20+ years experience in "IT" RIGHT NOW. Read the rest of my cover letter and my resume. You will see what my "IT" skills are, and it will show that I can do this work! I can bring to your company, along with my can-do attitude, a broad range of skills, including:
    • Certified CompTIA A+, Security+ and Network+ Technician
    • 2.5 years (NOC) network experience on a large Cisco-based WAN - UK to Austria
    • 20 years experience MIS/DP - yes, I can do IBM mainframes and Tandem non-stops too
    • 18 years experience as technical Help Desk support - panicking users, no problem
    • 18 years experience with PC/server based systems, intranet and internet systems
    • 10+ years experience on Microsoft Office, Windows XP and Data Network Fundamentals (YES, I do Windows)
    • Strong troubleshooting skills for software, hardware and circuit issues (and I can tell you what kind of horrors I had to face on all of them)
    • Very experienced in working with customers on problems - again, panicking users, no problem
    • Working experience with Remote Access (VPN/SecurID) - I didn't just study them, I worked on/with them
    • Skilled in getting info for and creating documentation for operation procedures (I don't just wait for them to give it to me, I go out and get it. Waiting for info on working applications is, well, dumb)
    • Multiple software languages (hey, I have done some programming)
    • And much more experience in "IT" (mortgage, stocks and financial information systems experience, and I have worked "IT" in a hospital)
    • Can multitask, and have the ability to adapt to change and learn quickly (once I was put in charge of a system that I had not worked with for over two years - talk about having to relearn and adapt to changes, but I did it)
    I would welcome the opportunity to further discuss this position with you. If you have questions or would like to schedule an interview, please contact me by phone at 904-879-4880 or on my cell 352-356-0945, or by e-mail at [email protected], or leave a message on my web site (http://beingscottnewman.webs.com/). I have enclosed/attached my resume for your review and I look forward to hearing from you. Thank you for taking a moment to consider my cover letter and resume. I appreciate how busy you are. Sincerely, Scott L. Newman
    Scott L. Newman
    45219 Dutton Way, Callahan, FL 32011 | H (904)879-4880 | C (352)356-0945 | [email protected] | Web - http://beingscottnewman.webs.com/
    OBJECTIVE
    To obtain a Network Operation or Helpdesk position.
    PROFILE
    Information Technology Professional with 20+ years of experience. Volunteer website creator and back-up sound technician at True Faith Christian Fellowship. CompTIA A+, Network+ and Security+ Certified.
    TECHNICAL AND PROFESSIONAL SKILLS
    Technical Support, Frame Relay, Microsoft Office Suite, Inventory Management, ISDN, Windows NT/98/XP, Client/Vendor Relations, CICS, Cisco Routers/Switches, Networking/Administration, RPG, Helpdesk, Website Design/Dev./Management, Assembler, Visio, Programming, COBOL IV
    EDUCATION
    • New Horizons Computer Learning Center, Jacksonville, Florida - CompTIA A+, Security+ and Network+ Certified. Currently working on CCNA Certification.
    • Mott Community College, Flint, Michigan - Associates Degree - Data Processing and General Education
    • Currently studying Japanese
    PROFESSIONAL
    True Faith Christian Fellowship Church - Callahan, FL, October 2009 - Present: Web site Tech
    • Web site creator/tech, back-up song leader and back-up sound technician. Note: the church web site is http://ambassadorsforjesuschrist.webs.com/
    U.S. Census (temp employee), Feb. 23 to March 8, 2010
    • Enumerator for Nassau County
    Thomas Creek Baptist Church - Callahan, FL, June 2008 - September 2009: Church sound and video technician
    • Sound and video technician
    Fidelity National Information Services - Jacksonville, FL - February 01, 2005 to October 28, 2008: Client Server Dev/Analyst I
    • Monitored multiple Debit Card sites, Check Authorization customers and the Card Auth system (AuthNet) for problems with the sites, connections, servers (on our LAN) and/or applications
    • Night (NOC) network operator for a large Wide Area Network (WAN)
    • Monitored multiple Check Authorization customers for problems with circuits, routers and applications
    • Resolved circuit and/or router issues or assisted the circuit carrier in resolving the issue
    • Resolved application problems or assisted application support in resolution
    • Liaison between customer and application support
    • Maintained and updated the NetOps Operation Procedures Guide
    • Kept the listing of equipment on the raised floor updated
    • Involved in the training of all night Check and Card server operation operators
    • FNIS acquired Certegy in 2005. Was one of 3 kept on.
    Certegy - St. Pete, FL - August 31, 2003 to February 1, 2005: Senior NetOps Operator (FNIS acquired Certegy in 2005; all of the above jobs/skills were the same as listed for FNIS)
    • Converting documentation to Adobe format
    • Sole trainer of day/night shift System Management Center operators (SMC)
    • Equifax spun off the Card/Check Dept. as Certegy. Certegy terminated its contract with EDS. One of six in the whole IT dept that was kept on.
    EDS (Certegy Account) - St. Pete, FL - July 1, 1999 to August 31, 2003: Senior NetOps Operator
    • Equifax outsourced the NetOps dept. to EDS in 1999.
    • Same job skills as listed above for FNIS.
    Equifax - St. Pete & Tampa, FL - January 1, 1991 to July 1, 1999: NetOps/Tandem Operator
    • All of the above for FNIS, except for circuit and router issues
    • Operated, monitored and troubleshot the Tandem mainframe and servers on the LAN
    • Supported the operation of the Print, Tape and Microfiche rooms
    • Equifax acquired TelaCredit in 1991.
    TelaCredit - Tampa, FL - June 28, 1989 to January 1, 1991: Tandem Operator
    • Operated and monitored Tandem Non-Stop systems for Card and Check auths
    • Operated multiple high-speed laser printers and microfiche printers
    • Mounted, filed and maintained 18 reel-to-reel mainframe tape drives, cartridge tape drives and the tape library.

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there has been several reports on TFS databases growing too fast and growing too big.  Notable this has been observed when one has started to use more features of the Testing system.  Also, the TFS 2010 handles test results differently from TFS 2008, and this leads to more data stored in the TFS databases. As a consequence of this there has been released some tools to remove unneeded data in the database, and also some fixes to correct for bugs which has been found and corrected during this process.  Further some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among these are: Anu’s very important blog post here describes both the problem and solutions to handle it.  She describes both the Test Attachment Cleaner tool, and also some QFE/CU releases to fix some underlying bugs which prevented the tool from being fully effective. Brian Harry’s blog post here describes the problem too This forum thread describes the problem with some solution hints. Ravi Shanker’s blog post here describes best practices on solving this (TBP) Grant Holidays blogpost here describes strategies to use the Test Attachment Cleaner both to detect space problems and how to rectify them.   The problem can be divided into the following areas: Publishing of test results from builds Publishing of manual test results and their attachments in particular Publishing of deployment binaries for use during a test run Bugs in SQL server preventing total cleanup of data (All the published data above is published into the TFS database as attachments.) The test results will include all data being collected during the run.  Some of this data can grow rather large, like IntelliTrace logs and video recordings.   Also the pushing of binaries which happen for automated test runs, including tests run during a build using code coverage which will include all the files in the deployment folder, contributes a lot to the size of the attached data.   In order to handle this systematically, I have set up a 3-stage process: Find out if you have a database space issue Set up your TFS server to minimize potential database issues If you have the “problem”, clean up the database and otherwise keep it clean   Analyze the data Are your database( s) growing ?  Are unused test results growing out of proportion ? To find out about this you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some  more detailed information. If you don’t have too many databases you can use the SQL Server reports from within the Management Studio to analyze the database and table sizes. Or, you can use a set of queries . I find queries often faster to use because I can tweak them the way I want them.  But be aware that these queries are non-documented and non-supported and may change when the product team wants to change them. If you have multiple Project Collections, find out which might have problems: (Disclaimer: The queries below work on TFS 2010. They will not work on Dev-11, since the table structure have been changed.  I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS Databases. Use the query below to find the Project Collection databases and their sizes, in descending size order.  
use master select DB_NAME(database_id) AS DBName, (size/128) SizeInMB FROM sys.master_files where type=0 and substring(db_name(database_id),1,4)='Tfs_' and DB_NAME(database_id)<>'Tfs_Configuration' order by size desc Doing this on one of our SQL servers gives the following results: It is pretty easy to see on which collection to start the work   Find out which tables are possibly too large Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant’s blog to find the table sizes in descending size order. In our case we got this result: From Grant’s blog we learnt that the tbl_Content is in the Version Control category, so the major only big issue we have here is the tbl_AttachmentContent.   Find out which team projects have possibly too large attachments In order to use the TAC to find and eventually delete attachment data we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this, replace the collection database name with whatever applies in your case:   use Tfs_DefaultCollection select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by p.projectname order by sum(a.compressedlength) desc In our case we got this result (had to remove some names), out of more than 100 team projects accumulated over quite some years: As can be seen here it is pretty obvious the “Byggtjeneste – Projects” are the main team project to take care of, with the ones on lines 2-4 as the next ones.  Check which attachment types takes up the most space It can be nice to know which attachment types takes up the space, so run the following query: use Tfs_DefaultCollection select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by a.attachmenttype order by sum(a.compressedlength) desc We then got this result: From this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu’s blog. Check which file types, by their extension, takes up the most space Run the following query use Tfs_DefaultCollection select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)as Extension, sum(compressedlength)/1024 as SizeInKB from tbl_Attachment group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) order by sum(compressedlength) desc This gives a result like this:   Now you should have collected enough information to tell you what to do – if you got to do something, and some of the information you need in order to set up your TAC settings file, both for a cleanup and for scheduled maintenance later.    Get your TFS server and environment properly set up Even if you have got the problem or if have yet not got the problem, you should ensure the TFS server is set up so that the risk of getting into this problem is minimized.  To ensure this you should install the following set of updates and components. The assumption is that your TFS Server is at SP1 level. Install the QFE for KB2608743 – which also contains detailed instructions on its use, download from here. The QFE changes the default settings to not upload deployed binaries, which are used in automated test runs. 
Binaries will still be uploaded if: Code coverage is enabled in the test settings. You change the UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user which haven't installed this QFE. The hotfix should be installed to The build servers (the build agents) The machine hosting the Test Controller Local development computers (Visual Studio) Local test computers (MTM) It is not required to install it to the TFS Server, test agents or the build controller – it has no effect on these programs. If you use the SQL Server 2008 R2 you should also install the CU 10 (or later).  This CU fixes a potential problem of hanging “ghost” files.  This seems to happen only in certain trigger situations, but to ensure it doesn’t bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2 Work around:  If you suspect hanging ghost files, they can be – with some mental effort, deduced from the ghost counters using the following SQL query: use master SELECT DB_NAME(database_id) as 'database',OBJECT_NAME(object_id) as 'objectname', index_type_desc,ghost_record_count,version_ghost_record_count,record_count,avg_record_size_in_bytes FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL , 'DETAILED') The problem is a stalled ghost cleanup process.  Restarting the SQL server after having stopped all components that depends on it, like the TFS Server and SPS services – that is all applications that connect to the SQL server. Then restart the SQL server, and finally start up all dependent processes again.  (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1.  The R2 pre-SP1 and R2 SP1 have separate maintenance cycles, and are maintained individually. Each have its own set of CU’s. When it comes I will add the link here to that CU. The "hanging ghost file” issue came up after one have run the TAC, and deleted enourmes amount of data.  The SQL Server can get into this hanging state (without the QFE) in certain cases due to this. And of course, install and set up the Test Attachment Cleaner command line power tool.  This should be done following some guidelines from Ravi Shanker: “When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed. “ This rule minimizes the risk of the ghosted hang problem to occur, and further makes it easier for the SQL server ghosting process to work smoothly. “Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system” This is the last step in a 3 step process of removing SQL server data. First they are logically deleted. Then they are cleaned out by the ghosting process, and finally removed using the shrinkdb command. Cleaning out the attachments The TAC is run from the command line using a set of parameters and controlled by a settingsfile.  The parameters point out a server uri including the team project collection and also point at a specific team project. So in order to run this for multiple team projects regularly one has to set up a script to run the TAC multiple times, once for each team project.  
When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means much more files than you would assume are necessary. This is a brute force technique. It works, but you need to take care when cleaning up. Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days , as long as there are no active workitems connected to them. This setting can be useful to clean out all items, both in a clean-up once operation, and in a general There are two scenarios we need to consider: Cleaning up an existing overgrown database Maintaining a server to avoid an overgrown database using scheduled TAC   1. Cleaning up a database which has grown too big due to these attachments. This job is a “Once” job.  We do this once and then move on to make sure it won’t happen again, by taking the actions in 2) below.  In this scenario you should only consider the large files. Your goal should be to simply reduce the size, and don’t bother about  the smaller stuff. That can be left a scheduled TAC cleanup ( 2 below). Here you can use a very general settings file, and just remove the large attachments, or you can choose to remove any old items.  Grant’s settings file is an example of the last one.  A settings file to remove only large attachments could look like this: <!-- Scenario : Remove large files --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> </Attachment> </DeletionCriteria> Or like this: If you want only to remove dll’s and pdb’s about that size, add an Extensions-section.  Without that section, all extensions will be deleted. <!-- Scenario : Remove large files of type dll's and pdb's --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="dll" /> <Include value="pdb" /> </Extensions> </Attachment> </DeletionCriteria> Before you start up your scheduled maintenance, you should clear out all older items. 2. Scheduled maintenance using the TAC If you run a schedule every night, and remove old items, and also remove them in small batches.  It is important to run this often, like every night, in order to keep the number of deleted items low. That way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let’s say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs.  Doing this every night ensures that only small amounts of data is deleted. <!-- Scenario : Remove old items except if they have active or resolved bugs --> <DeletionCriteria> <TestRun> <AgeInDays OlderThan="180" /> </TestRun> <Attachment /> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved"/> </LinkedBugs> </DeletionCriteria> In my experience there are projects which are left with active or resolved workitems, akthough no further work is done.  It can be wise to have a cleanup process with no restrictions on linked bugs at all. Note that you then have to remove the whole LinkedBugs section. A approach which could work better here is to do a two step approach, use the schedule above to with no LinkedBugs as a sweeper cleaning task taking away all data older than you could care about.  Then have another scheduled TAC task to take out more specifically attachments that you are not likely to use. 
This task could be much more specific, and based on your analysis clean out what you know is troublesome data. <!-- Scenario : Remove specific files early --> <DeletionCriteria> <TestRun > <AgeInDays OlderThan="30" /> </TestRun> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="iTrace"/> <Include value="dll"/> <Include value="pdb"/> <Include value="wmv"/> </Extensions> </Attachment> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved" /> </LinkedBugs> </DeletionCriteria> The readme document for the TAC says that it recognizes “internal” extensions, but it does recognize any extension. To run the tool do the following command: tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete   Shrinking the database You could run a shrink database command after the TAC has run in cases where there are a lot of data being deleted.  In this case you SHOULD do it, to free up all that space.  But, after the shrink operation you should do a rebuild indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild indexes, reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the data will be reused by the SQL server when it need to add more records.  In fact, it is regarded as a bad practice to shrink the database regularly.  So on a daily maintenance schedule you should NOT shrink the database. To shrink the database you do a DBCC SHRINKDATABASE command, and then follow up with a DBCC INDEXDEFRAG afterwards.  I find the easiest way to do this is to create a SQL Maintenance plan including the Shrink Database Task and the Rebuild Index Task and just execute it when you need to do this.
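Expressed as plain T-SQL, that final two-step maintenance plan would look roughly like the sketch below. It reuses the Tfs_DefaultCollection database and the attachment tables from the earlier examples, and uses ALTER INDEX ... REBUILD rather than a reorganize, since the point above is that only a rebuild removes the fragmentation the shrink leaves behind. As noted, run this only after a large one-off TAC cleanup, not on a routine schedule.
use Tfs_DefaultCollection
-- Step 1: physically reclaim the space freed by the TAC run and the SQL ghost cleanup
DBCC SHRINKDATABASE (Tfs_DefaultCollection);
-- Step 2: rebuild (not just reorganize) the indexes on the attachment tables,
-- since the shrink leaves them heavily fragmented
ALTER INDEX ALL ON dbo.tbl_Attachment REBUILD;
ALTER INDEX ALL ON dbo.tbl_AttachmentContent REBUILD;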

    Read the article

  • Algorithm - find the minimal time

    - by exTyn
    I've found this problem somewhere on the internet, but I'm not sure about the proper solution. I think it has to be done with a greedy algorithm, but I haven't spent much time thinking about it. I suppose you may enjoy solving this problem, and I will get my answer. Win-win situation :).
    Problem: N people come to a river in the night. There is a narrow bridge, but it can only hold two people at a time. Because it's night, the torch has to be used when crossing the bridge. Every person can cross the bridge in some (given) time (person n1 can cross the bridge in t1 time, person n2 in t2 time, etc.). When two people cross the bridge together, they must move at the slower person's pace. What is the minimal time for the whole group to cross the bridge?

    Read the article

  • Ignoring 'A' and 'The' when sorting with XSLT

    - by ChrisV
    I would like to have a list sorted ignoring any initial definite/indefinite articles 'the' and 'a'. For instance:
    The Comedy of Errors
    Hamlet
    A Midsummer Night's Dream
    Twelfth Night
    The Winter's Tale
    I think perhaps in XSLT 2.0 this could be achieved along the lines of:
    <xsl:template match="/">
      <xsl:for-each select="play">
        <xsl:sort select="if (starts-with(title, 'A ')) then substring(title, 3) else if (starts-with(title, 'The ')) then substring(title, 5) else title"/>
        <p><xsl:value-of select="title"/></p>
      </xsl:for-each>
    </xsl:template>
    However, I want to use in-browser processing, so I have to use XSLT 1.0. Is there any way to achieve this in XSLT 1.0?

    Read the article

  • how to calculate a bill from several tables on mysql?

    - by Audel
    I'm using MySQL to create a hotel booking system, but I am struggling a little bit to calculate the final bill. I need a SELECT command to get data from several tables and make some calculations. Basically, I just need to get the 'night cost' from a table called 'room_types'. Then, use the DATEDIFF function to get the difference in days between the 'checkin' and 'checkout' columns in the table 'room_booking', multiply the difference by the night cost, and display the total. The tables I would be using are room_booking, room_types, booking, and room. One booking may have several room bookings, so I'm looking for a result that looks something like this:
    +-----------+------------+---------------+------------------+
    | bookingid | Room price | nights stayed | total room price |
    +-----------+------------+---------------+------------------+
    | B001      | 30.00      | 4             | 120.00           |
    | B001      | 40.00      | 3             | 120.00           |
    +-----------+------------+---------------+------------------+
    The booking id comes from the table 'booking', the room price from 'room_types', and 'nights stayed' is calculated from the table room_booking, using the DATEDIFF command between checkin and checkout. I hope I was clear.
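    For what it's worth, a query along those lines might look like the sketch below. The table names are the ones listed in the question, but the join keys and the night-cost column name are guesses, so they will need to be adjusted to the real schema:
    SELECT b.bookingid,
           rt.night_cost AS room_price,
           DATEDIFF(rb.checkout, rb.checkin) AS nights_stayed,
           rt.night_cost * DATEDIFF(rb.checkout, rb.checkin) AS total_room_price
    FROM booking AS b
    JOIN room_booking AS rb ON rb.bookingid = b.bookingid        -- one booking can have several room bookings
    JOIN room AS r          ON r.room_id = rb.room_id            -- room_id is an assumed key name
    JOIN room_types AS rt   ON rt.room_type_id = r.room_type_id; -- room_type_id and night_cost are assumed names
    -- In MySQL, DATEDIFF(checkout, checkin) returns the number of whole days between the two dates.
    If a single grand total per booking is wanted instead of one row per room, wrapping the price expression in SUM(...) and adding GROUP BY b.bookingid to the same query would do it.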

    Read the article

  • Beginner question: What is binding?

    - by JDelage
    Hi, I was trying to understand the difference between early and late binding, and in the process realized that the concept of binding is nebulous to me. I think I understand that it relates to the way data-as-a-word-of-memory is linked to type-as-a-set-of-language-features but I am not sure those are the right concepts. Also, how does understanding this deeply help people become better programmers? Please note: This question is not "what is late v. early binding" or "what are the trade-offs between the 2". Those already exist here. Thanks, JDelage

    Read the article

  • Administrators vs Programmers: Who's got more people Interaction / Working hours?

    - by sanksjaya
    Well, I've heard programmers get to interact with other programmers quite a lot. But who gets to meet a lot of new people on a daily basis at work without getting the feeling "Gosh! I'm stuck with him/this for another year :(" - admins or coders? And what kind of people does each get to interact with? Secondly, I've held this myth for a long time that, unlike programmers, network/system/security admins get locked up in a den and juiced up late nights and early mornings, and most of the time have to slip out of work without being noticed. But recently one of my seniors from grad school told me he had to work late and on weekends for a product release. How true is this, and how often does it happen to programmers and admins?

    Read the article

  • free RSS feed caching

    - by cherouvim
    Hello, I've got an application which serves an RSS feed of headlines, and I need to provide this RSS feed to other consumers. I don't want to serve the RSS directly from my server, though, due to limited server resources, so I need to proxy (cache) it through some service which will handle the load. Assuming the RSS feed URL of my application is http://example.com/rss, I initially provided my consumers with the URL http://ajax.googleapis.com/ajax/services/feed/load?v=1.0&q=http%3A%2F%2Fexample.com%2Frss, which solved my server load problem but introduced a liveness problem. The headlines are minutes to hours behind the actual feed (I haven't measured exactly how late). I've also tried distributing through FeedBurner, so the URL became something like http://feeds.feedburner.com/example123?format=xml, but the liveness problem still exists. Is there a public and free solution for this problem? Anything below 5 minutes of liveness delay would be totally acceptable. Thanks.

    Read the article

  • How do you fix "Too many open files" problem in Hudson?

    - by Randyaa
    We use Hudson as a continuous integration system to execute automated builds (nightly and based on CVS polling) of a lot of our projects. Some projects poll CVS every 15 minutes, some others poll every 5 minutes and some poll every hour. Every few weeks we'll get a build that fails with the following output: FATAL: java.io.IOException: Too many open files java.io.IOException: java.io.IOException: Too many open files at java.lang.UNIXProcess.<init>(UNIXProcess.java:148) The next build always worked (with 0 changes) so we always chalked it up to 2 build jobs being run at the same time and happening to have too many files open during the process. This weekend we had a build fail Friday night (automatic nightly build) with the message and every other nightly build also failed. Somehow this triggered Hudson to continuously build every project which failed until the issue was resolved. This resulted in a build every 30 minutes or so of every project until sometime Saturday night when the issue magically disappeared.

    Read the article

  • SQL Server missing tables and stored procedures

    - by Robo
    I have an application on a client's site that processes data each night. Last night SQL Server 2005 gave the error "Could not find stored procedure 'xxxx'". The stored procedure does exist in the database and has the right permissions as far as I can tell, and the application runs fine on other nights as well. On previous occasions, SQL Server has also given an error saying 'database object not found', referring to a table in the database that does exist. So, on rare occasions, the server thinks certain stored procedures and tables do not exist in the database. The objects it refers to are often ones that are frequently used. Is the database somehow corrupted? Is there some sort of repair/health check I can do?
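    As a first step for the "repair/health check" part of the question, SQL Server's built-in consistency check can at least confirm or rule out physical corruption. A minimal sketch, with the database name as a placeholder:
    -- Reports allocation and consistency errors without repairing anything;
    -- run it during a quiet window, since it can be I/O intensive.
    DBCC CHECKDB ('YourClientDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;
    A clean CHECKDB result would point the investigation away from the data files themselves and toward things like the connection's database context or how the object names are being resolved.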

    Read the article

  • SQL insert statement with a lot of the same where clause and one different where clause

    - by william
    I'm sorry if the title is not clear. Here's my problem. I created a new table which will show total, average and maximum values, and I have to insert the results into that table. The table will have only 4 rows: No Appointment, Appointment Early, Appointment Late and Appointment Punctual. So I have something like:
    insert into newTable
    select 'No Appointment' as 'Col1', avg statement, total statement, max statement
    from orgTable
    where (general conditions) and (unique condition to check NO APPOINTMENT);
    I have to do the same thing for the other 3 rows, where only the unique condition is different, to check early, punctual or late. So the statement is super long and I want to reduce its size. How can I achieve that?
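    One common way to shrink this is to write the shared WHERE clause once and let a CASE expression produce the row label, grouping on it so all four rows come out of a single INSERT ... SELECT. The sketch below assumes the four unique conditions partition the rows, and it uses hypothetical columns (appointment_time, scheduled_time, wait_minutes) purely to illustrate the pattern; substitute the real conditions and aggregate expressions:
    INSERT INTO newTable
    SELECT CASE
             WHEN appointment_time IS NULL          THEN 'No Appointment'
             WHEN appointment_time < scheduled_time THEN 'Appointment Early'
             WHEN appointment_time > scheduled_time THEN 'Appointment Late'
             ELSE 'Appointment Punctual'
           END AS Col1,
           AVG(wait_minutes),   -- the existing "avg statement"
           SUM(wait_minutes),   -- the existing "total statement"
           MAX(wait_minutes)    -- the existing "max statement"
    FROM orgTable
    WHERE (1 = 1)               -- the shared "general conditions" go here, written once
    GROUP BY CASE
               WHEN appointment_time IS NULL          THEN 'No Appointment'
               WHEN appointment_time < scheduled_time THEN 'Appointment Early'
               WHEN appointment_time > scheduled_time THEN 'Appointment Late'
               ELSE 'Appointment Punctual'
             END;
    One difference to be aware of: a category that matches no rows simply won't appear in the output, unlike the four separate inserts. If all four rows must always exist, the original per-row inserts (or a UNION ALL of the four SELECTs) are safer.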

    Read the article

  • How to implement a nightly process in .NET?

    - by Abe Miessler
    I have a set of tasks that I would like to execute every night. These tasks include querying a database, moving and renaming some images, and lastly updating a database table. My first thought had been to create a SQL Server job and use xp_cmdshell to move the files, but after a bit of research I decided against it. My question now is: what is the best way to implement this as a .NET application? Should I create a Windows service? A console application that is scheduled to run once per night? Some other cool way that I don't even know about?

    Read the article

  • jQuery resize event for div children

    - by Frank Michael Kraft
    $("div.content-left > *, div.content-main > *").live('resize', function(){ alert("Size changed"); });
    does not work, because the resize event only applies to window resizes. But in my case the divs change size late, because the content is loaded late by an Ajax request - or by clicking a panel menu. I definitely need to avoid registering on all the individual events of the children (ajax, click), because that can be very many different events and then it is not maintainable.

    Read the article

  • draw graph in android with dynamically updated data

    - by Tikam
    In my project I want to draw a graph with dynamically updated data; the data comes from a remote device and I update it from my local SQLite database. I have to draw the graph dynamically with two parameters: horizontally the hours of the day, from {Midnight, 1, 2, ..., 11, Noon, 1, ..., 11, Midnight}, and vertically the parameter values {One, Two, Three, Four}. At a particular hour I get the data value from my SQLite database and want to draw it on the graph; each particular hour can have a different value like "One", "Two", etc., and I want to draw the graph from these hourly values. Thanks in advance.

    Read the article

  • changing body class based on user's local time

    - by John
    I'm trying to add a body class of 'day' if it's 6am-5pm and 'night' otherwise, based on the user's local time. I tried the following but it didn't work. Any ideas?
    In the head:
    <script>
    function setTimesStyles() {
      var currentTime = new Date().getHours();
      if (currentTime > 5 && currentTime < 17) {
        document.body.className = 'day';
      } else {
        document.body.className = 'night';
      }
    }
    </script>
    In the body:
    <body onload="setTimeStyles();">
    Also, is there a more elegant way to achieve what I need?

    Read the article

  • SQL SERVER – Developer Training Kit for SQL Server 2012

    - by pinaldave
    The developer training kit is my favorite part of any product. The reason is very simple: it is the single resource that gives a complete overview of the product in a nutshell. A developer can learn from many places - books, webcasts, tutorials, blogs, etc. However, I have found that developer training kits are the best starting point for any product. Start with them first, see what the new features are as well as what new message a product is coming up with. Once that is learned, the very next step should be to identify the right learning material to explore the preferred topic. The SQL Server 2012 Developer Training Kit includes technical content including labs, demos and presentations designed to help you learn how to develop SQL Server 2012 database and BI solutions. New and updated content will be released periodically and can be downloaded on demand using the Web Installer. Download SQL Server 2012 Developer Training Kit Web Installer. This training kit was available earlier this year, but it is never too late to explore it if you have not referred to it earlier. Additionally, if you do not want to download the complete kit all together, I suggest you refer to the Wiki here. This wiki contains all the same presentations and demo notes which the web installer contains. Refer to SQL Server 2012 Developer Training Kit Wiki. The wiki contains the following modules and details about the Hands-On Labs:
    Module 1: Introduction to SQL Server 2012
    Module 2: Introduction to SQL Server 2012 AlwaysOn
    Module 3: Exploring and Managing SQL Server 2012 Database Engine Improvements
    Module 4: SQL Server 2012 Database Server Programmability
    Module 5: SQL Server 2012 Application Development
    Module 6: SQL Server 2012 Enterprise Information Management
    Module 7: SQL Server 2012 Business Intelligence
    Hands-On Labs: SQL Server 2012 Database Engine
    Hands-On Labs: Visual Studio 2010 and .NET 4.0
    Hands-On Labs: SQL Server 2012 Enterprise Information Management
    Hands-On Labs: SQL Server 2012 Business Intelligence
    Hands-On Labs: Windows Azure and SQL Azure
    As I said, if you have not downloaded this so far, it is never too late to explore it. Trust me, you will learn at least one thing if you just explore the content. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article
