Search Results

Search found 659 results on 27 pages for 'fantastic fourier'.


  • SQLAuthority News – Great Time Spent at Great Indian Developers Summit 2014

    - by Pinal Dave
    The Great Indian Developer Summit (GIDS) is one of the most popular annual events held in Bangalore. This year GIDS was scheduled for April 22 to 25. I presented a total of four sessions at the event, and each session was very different from the others. Here are the details of the four sessions I presented there.
    Pluralsight Shades
    This was a great event and I had fantastic fun presenting technology here. I was very excited that, along with me, many of my friends were presenting at the event as well. I want to thank all of you for attending my sessions and giving me a standing-room-only crowd every single time. I have already sent the resources out in my newsletter. You can sign up for the newsletter over here.
    Indexing is an Art
    I was amazed by the crowd present in the sessions at GIDS. There was great interest in the subject of SQL Server and performance tuning.
    Audience at GIDS
    I believe events like this provide a great platform to meet and share knowledge.
    Pinal at Pluralsight Booth
    Here are the abstracts of the sessions I presented. They were recorded, so at some point in time they will be available, but if you want the content of all the courses immediately, I suggest you check out my video courses on the same subjects on Pluralsight.
    Indexes, the Unsung Hero (Relevant Pluralsight Course)
    Slow-running queries are the most common problem developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how indexes have been set up. The session focuses on ways of identifying problems that slow down SQL Server, and on indexing tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Indexes are the most crucial objects in a database. They are the first stop for any DBA or developer when it comes to performance tuning. There is a good side as well as an evil side to indexes. To master the art of performance tuning, one has to understand the fundamentals of indexes and the best practices associated with them. We cover various aspects of indexing, such as duplicate indexes, redundant indexes and missing indexes, as well as best practices around indexes.
    SQL Server Performance Troubleshooting: Ancient Problems and Modern Solutions (Relevant Pluralsight Course)
    Many believe performance tuning and troubleshooting is an art that has been lost in time. The truth is that the art has evolved with time, and there are more tools and techniques than ever to overcome the old troublesome scenarios. There are three major resources that create performance problems when bottlenecked: CPU, IO and memory. In this session we focus on detecting high-CPU scenarios and on their resolutions. If time permits, we cover other performance-related tips and tricks. At the end of the session, attendees will have a clear idea, as well as action items, about what to do when facing any of these resource-intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning, one has to understand the fundamentals of performance tuning and the best practices associated with it. We discuss performance tuning in this session with the help of demos.
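    The indexing abstract above mentions scripts for spotting problems such as missing indexes. Pinal's own scripts were distributed through his newsletter and Pluralsight courses and are not reproduced here; purely as an illustration of the kind of query such a session tends to demonstrate, here is a minimal sketch against SQL Server's missing-index DMVs (the weighting used in the ORDER BY is my own assumption, not his script):
    -- Rough sketch: tables SQL Server thinks would benefit from an index,
    -- ranked by how often and how expensively they were hit.
    SELECT TOP (20)
           mid.statement            AS table_name,
           mid.equality_columns,
           mid.inequality_columns,
           mid.included_columns,
           migs.user_seeks,
           migs.avg_total_user_cost,
           migs.avg_user_impact
    FROM   sys.dm_db_missing_index_details          AS mid
           JOIN sys.dm_db_missing_index_groups      AS mig
                ON mig.index_handle = mid.index_handle
           JOIN sys.dm_db_missing_index_group_stats AS migs
                ON migs.group_handle = mig.index_group_handle
    ORDER BY migs.user_seeks * migs.avg_total_user_cost * migs.avg_user_impact DESC;
    As with any missing-index suggestion, treat the output as a starting point for analysis rather than a list of indexes to create blindly.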
    Pinal Dave at GIDS
    MySQL Performance Tuning – Unexplored Territory (Relevant Pluralsight Course)
    Performance is one of the most essential aspects of any application. Everyone wants their server to perform optimally and at its best efficiency. However, not many people talk about MySQL and performance tuning, as it is still largely unexplored territory. In this session we talk about how to tune MySQL performance, and we also try to cover other performance-related tips and tricks. At the end of the session, attendees will not only have a clear idea but will also carry home action items about what to do when facing any of these resource-intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning, one has to understand the fundamentals of performance tuning and the best practices associated with it. You will also see some impressive performance-tuning demos in this session.
    Hidden Secrets and Gems of SQL Server We Bet You Never Knew (Relevant Pluralsight Course)
    SQL Trio Session! It amazes us every time someone says SQL Server is an easy tool to handle and work with. Microsoft has done amazing work in making a complex relational database a breeze for developers and administrators alike. Though it looks like child's play to some, the reality is far from that notion. The basics and fundamentals are simple and uniform across databases, but the behavior and the nuts and bolts of SQL Server are something we need to master over a period of time. With a collective experience of more than 30 years amongst the speakers, we will take a unique tour of various aspects of SQL Server and bring you life lessons learned from working with it. We will share some of the trade secrets of performance, configuration, new features, tuning, behaviors, T-SQL practices, common pitfalls, productivity tips on tools and more. This is a highly demo-filled session with practical value if you are a SQL Server developer or an administrator. The speakers will be able to stump you and give you answers on almost everything inside the relational database called SQL Server.
    I personally attended the sessions of Vinod Kumar, Balmukund Lakhani, Abhishek Kumar and my favorite, Govind Kanshi.
    Summary
    If you missed this event, here are two action items: 1) Sign up for the Resource Newsletter 2) Watch my video courses on Pluralsight
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL Tagged: GIDS


  • MSCC: Career & IT Fair 2014

    A couple of weeks ago I was approached by Ibraahim and Yunus, who asked whether it would be interesting to participate in the 1st Career & IT Fair organised by the UoM Computer Club. Luckily we had met at the Global Windows Azure Bootcamp, but I wasn't too sure whether I would be able to attend after all. The main reasons were work demands and the fact that the Mauritius Software Craftsmanship Community currently has no advertising material at all. Here's the brief statement of the event:
    "The UOM Students' Computer Club in collaboration with the UOM Students' Union and UOM CSE Department is organising a 'Career & IT Fair' on the 23rd and 24th April 2014. This event has for objective to provide a platform to tertiary students, secondary students as well as vocational students, the opportunity to meet job recruiters."
    Luckily, I was reminded that the 23rd was a Wednesday, so I decided it might be interesting to move our weekly Code & Coffee session to the university and thereby be able to attend the career fair. As it turned out, it was a great choice, and thankfully Pritvi, Nadim and Ishwon volunteered to be around at the "community booth". Thankfully, the computer club gave us - the MSCC and the LUGM - one of their spaces in the lobby area of the Paul Octave Wiéhé Auditorium.
    My impression of the event
    Very well and professionally organised. Seriously, the lads over at the UoM Computer Club did a great job organising their two-day event, and I felt very comfortable at all times. Actually, it was kind of amusing to watch some of the members constantly running around and checking everything, even though the whole process went off smoothly and easily. There were a couple of interesting pieces of information and announcements during the opening ceremony. For example, the Computer Science faculty is a very young one, initiated back in 1988 by just 4 staff members. Now, after 25 years, they have achieved quite a lot, and there are currently 1,000+ active students attending the numerous lectures and courses. But there is no room to rest on previous achievements, and I was kind of surprised to hear that there are plans to extend the campus and to offer new lectures in the fields of nanotechnology, big data handling and - crossing fingers - the introduction and establishment of a space control centre. Mauritius is already part of the Square Kilometre Array (SKA), and hopefully there will be more activities in that direction in the near future.
    Community - Awareness and collaboration
    As stated earlier, I could only spend one morning there, but luckily other members of the MSCC and the LUGM stayed during the whole two days and provided answers to any interested person. As for me, I took the opportunity to get in touch with the other companies in the lobby - mainly to create some awareness about our IT communities, but also to see whether there might be options for future engagement in common activities. So far, I was able to speak to representatives of the following companies:
    ACCA Mauritius
    Business at Work
    Infomil
    LinkByNet
    Microsoft Indian Ocean Islands & French Pacific
    Spherinity Training Institute
    Spoon Consulting Ltd.
    State Informatics Ltd.
    Unfortunately, I only had a quick chat with an HR representative of LinkByNet, but I fully count on MSCC members like Nitin, or LUGM member Ronny, to spread our intentions over there.
    So far, all of the representatives were really interested in our concepts and activities, and I'm currently putting together an introduction flyer for the MSCC that I'm going to send out to all those contacts by mail. It would be great to have more craftsmen as well as professional support on board.
    Some pictures from the event
    MSCC: Fantastic outlook for the near future. Announcements were made on big data, nanotechnology and a space control centre in Mauritius. Interesting!
    MSCC: The lobby area was crammed with students. A great way to exchange and network. Good luck to all candidates!
    Passing the relay baton to...
    I recommend you continue reading about the first Career & IT Fair on Ish's blog. He has a great summary and more details on those two days of IT activities than I have. Thanks, and feel free to leave a comment (or two)...


  • Simple Scripting for your Exalogic Storage

    - by Trond Strømme
    As part of my job in Oracle ACS (Advanced Customer Services) I handle lots of different systems and customers. Among the recent systems I have worked with are Oracle's Exalogic engineered systems. One of the things I had never had much exposure to as a system developer/architect/middleware guy/Java dude was storage, outside of consuming it for my photography needs. Well, I'm always ready for a new challenge... I'd downloaded the 7000 series storage simulator when it was released in the good old Sun days, and found it fun and instructive to play around with, but as I never touched storage in any way (besides consuming it) I forgot about it. A couple of years ago, when I started working with Exalogic engineered systems, it came back to light as an invaluable learning and testing tool for the embedded storage in an Exalogic: Oracle's Sun ZFS Storage 7320 Appliance.
    Anyway... I've been "booted" into a part-time role as the interim storage/system admin/middleware/Java guy for a client and found I needed to create the occasional report or summary of what's using the storage in the 7320 (as default-configured for an Exalogic: 40T of disk in a mirrored configuration, yielding 18T of actual space). Reading the nice documentation and some articles on the Oracle Technology Network, I saw great possibilities in the embedded ECMAScript 3/JavaScript engine in the 7000 series. In my personal opinion, anyone who is dealing with Exalogic administration, or is exposed to any of the 7000 series of storage appliances and servers that Oracle offers, should have a VirtualBox instance of it kicking around. For development and testing it's a fantastic tool. (It can save you from having to explain to your management (most of) the embarrassing FAILs you can produce if you test something in a production system...) So download, and install.
    A small sidestep: if, after firing up the 7000 series simulator in VirtualBox, you've forgotten what its IP address is, logging in directly via the running VirtualBox VM will sort you out. In my case I can ssh to 192.168.56.101 or point a browser to https://192.168.56.101:215 to log into the storage appliance. One simple way of executing a script on the 7320 is to ssh to the device and redirect a file containing the script to ssh:
    ssh [email protected] < myscript.js
    One question I got from my client and the people who will take over the systems was: "How can we see the quotas and allocations for all projects/shares in one easy go, so we don't have to go navigating around in the BUI for all the hundreds of shares the 7320 is hosting just to check if anything is running dry?" Easy! JavaScript time, VirtualBox and emacs!
    //NOTE! This script is available 'as is'. It has been run on a couple of 7320s
    //(running 2010.08.17.3.0,1-1.25 and 2011.04.24.1.0,1-1.8), a 7420 and the VB image,
    //but I personally offer no guarantee whatsoever that it won't make your server
    //topple, catch fire or in any way go pear shaped..
    //Run at your own risk, or learn from my code and/or mistakes..
    script
    run('cd /');
    run('shares');
    //get all projects:
    proj = list();

    function spaceToGig(bytes){
        return bytes/1073741824; //convert bytes to GB
    }

    function fullInPercent(quota, space_data){
        tmp = (space_data/quota)*100;
        return tmp;
    }

    //print header, slightly good looking
    printf(" %s/%-15s %8s(GB) %7s(GB) %5s(GB) %7s(GB) %3s\n","Project", "Share","Quota","Ref", "Snap", "Total","%full");
    printf("-------------------------------------------------------------------------------\n")

    //for each project, get all shares. check for quota and calculate percentage and human readable figures..
    for (i=0;i<proj.length;i++){
        run('select ' + proj[i]);
        //get all shares for a project
        var pshares = list();
        //for each share get quota properties
        for (j=0;j<pshares.length;j++){
            run('select ' + pshares[j]);
            quota = get('quota'); //properties associated with a share or inherited from a project
            spaceData = get('space_data');
            spaceSnap = get('space_snapshots');
            spaceTotal = get('space_total');
            if(quota>0){ //has quota
                printf(" %s/%-15s \t%4.2fGB\t%.2fGB\t%.2fGB\t%.2fGB\t%5.2f%%\n",proj[i], pshares[j],spaceToGig(quota),spaceToGig(spaceData),spaceToGig(spaceSnap),spaceToGig(spaceTotal),fullInPercent(quota,spaceTotal));
            }else{ //no quota
                printf(" %s/%-15s \t%8s\t%.2fGB\t%.2fGB\t%.2fGB\t%s\n",proj[i],pshares[j], "N/A", spaceToGig(spaceData),spaceToGig(spaceSnap),spaceToGig(spaceTotal),"N/A");
            }
            run('cd ..');
        }
        run('done');
    }
    The resulting output should look something like this:
    Project/Share               Quota(GB)  Ref(GB)   Snap(GB)  Total(GB)  %full
    -------------------------------------------------------------------------------
    ACSExalogicSystem/domains   N/A        0.04GB    0.00GB    0.04GB     N/A
    ACSExalogicSystem/logs      N/A        0.01GB    0.00GB    0.01GB     N/A
    ACSExalogicSystem/nodemgrs  N/A        0.00GB    0.00GB    0.00GB     N/A
    ACSExalogicSystem/stores    N/A        0.04GB    0.00GB    0.04GB     N/A
    ***_dev/FMW_***_1           133GB      4.24GB    0.01GB    4.25GB     3.19%
    ***_dev/FMW_***_2           N/A        4.25GB    0.01GB    4.26GB     N/A
    ***_dev/applications        10GB       0.00GB    0.00GB    0.00GB     0.00%
    ***_dev/domains             50GB       10.75GB   3.55GB    14.30GB    28.61%
    ***_dev/logs                20GB       0.32GB    0.01GB    0.33GB     1.66%
    ***_dev/softwaredepot       20GB       4.15GB    0.00GB    4.15GB     20.73%
    ***_dev/stores              20GB       0.01GB    0.00GB    0.01GB     0.05%
    ###_dev/FMW_###_1           400GB      17.63GB   0.12GB    17.75GB    4.44%
    ###_dev/applications        N/A        0.00GB    0.00GB    0.00GB     N/A
    ###_dev/domains             120GB      14.21GB   5.53GB    19.74GB    16.45%
    ###_dev/logs                15GB       0.00GB    0.00GB    0.00GB     0.00%
    ###_dev/softwaredepot       250GB      73.55GB   0.02GB    73.57GB    29.43%
    …snip
    My apologies if the output is a bit misaligned here and there; I only bothered making it look good, not perfect :/ I also removed some of the project names (*, #).


  • Come meet our Interns in Dublin

    - by klaudia.drulis
    Oracle Worldwide Product Translation Group (WPTG) provides solutions for all Oracle product and content translation requirements. WPTG is a global organisation with its headquarters in Ireland and employees in Oracle offices worldwide. WPTG offers expertise in fields such as process engineering, tools development, linguistic quality, terminology, global product release, and financial and vendor management. WPTG provides translation solutions for over 40 languages, including Asia Pacific, European, American and Middle Eastern languages. WPTG first introduced an intern program over 10 years ago, and it has become a key component of our team structure. The majority of interns are sourced from Computer Science related courses; these interns join the engineering team. Others are sourced from business courses and work within the business / project management area. The intern program allows us to maintain ties with current course curricula and brings fresh energy and perspective into our organisation. Four of the full-time staff working in Dublin today originally joined us as interns and were subsequently offered permanent positions.
    Come meet some of our 2010 interns, and see what Darragh, Anthony, Caoimhe, James and Artemij thought about working within WPTG at Oracle:
    Darragh
    "Oracle has been a fun, challenging work placement for me. From day one I was treated as a full member of staff, which was both comforting and a little bit scary. The responsibilities stack up, but I found I was able to keep on top of everything and even make improvements to how we handle a few things, thanks to a great team and a very supportive manager. There's a very positive atmosphere at work that's really conducive to getting a lot of work done. Ideas seem to be the central hub in my line of business, so all of my ideas and innovations were greeted with enthusiasm. Oracle has given me a fantastic opportunity and I urge you to grab it with both hands; you'll find that you're with a set of like-minded people from all walks of life who make work both interesting and fun. Even when the pressure is on, you know that you can always get help and advice from someone nearby. My last word of advice is don't be afraid to stick your neck out; everyone here is willing to learn, try something new and innovate. Your voice will be heard and, who knows, you could end up having a large impact on Oracle and your career."
    Anthony
    "I had a great experience working with Oracle. From day one I was treated like a full member of staff, with responsibilities of my own. I found that the more I put into the work, the more I got out of the experience. Volunteering and being willing to face challenges have made this a more exciting placement. I am given a lot of leeway to do my own projects, and so I've found that I am really enjoying my time here."
    Caoimhe
    "I am currently spending my year of placement within the Release Management team in the WPTG. My main role is to handle the finance process for all translation projects under 100k, which includes creating work specs and POs, sending out kits, dealing with vendor queries and handling the invoicing and payment part. I am really enjoying my time here at Oracle; everyone is very open and friendly and willing to help you out with any questions you may have. I would definitely be interested in returning to Oracle after I graduate!"
    James
    "I am currently on a 12 month placement with Oracle, working as part of the Worldwide Product Translation Group in Business Management.
    The Business Management team provides a global view on WPTG's vendor and business strategy and is an interface into WPTG for new business. The team works together to support the external translation partner network. My role is to support the Business Management team and also to work on various projects when the need arises. This involves working with translation vendors and with other Oracle employees worldwide. I am really enjoying my time working for Oracle; at times it can be challenging, but it is also very rewarding. I would recommend any student wanting to undertake a placement year to apply to Oracle. I made some great friends and I will never forget my time in Dublin."
    Artemij
    "From working within Oracle, I have truly understood what a 'career path' is, and what opportunities a large corporation like Oracle can offer. Without any illusions, the work itself is exciting and sometimes challenging; it tests your ability to handle pressure, to make decisions and take responsibility, and to learn quickly and cooperate efficiently in order to solve a problem. I have learned a lot about myself - what I am good at, and where and what I can do better. My placement at Oracle has allowed me to get a clearer picture of what I want, and which door I am going to open after college."
    If you have any questions related to this article, feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com


  • PASS: 2013 Summit Location

    - by Bill Graziano
    HQ recently posted a brief update on our search for a location for 2013.  It includes links to posts by four Board members and two community members. I’d like to add my thoughts to the mix and ask you a question.  But I can’t give you a real understanding without telling you some history first. So far we’ve had the Summit in Chicago, San Francisco, Orlando, Dallas, Denver and Seattle.  Each has a little different feel and distinct memories.  I enjoyed getting drinks by the pool in Orlando after the sessions ended.  I didn’t like that our location in Dallas was so far away from all the nightlife.  Denver was in downtown but we had real challenges with hotels.  I enjoyed the different locations.  I always enjoyed the announcement during the third keynote with the location of the next Summit. There are two big events that impacted my thinking on the Summit location.  The first was our transition to the new management company in early 2007.  The event that September in Denver was put on with a six month planning cycle by a brand new headquarters staff.  It wasn’t perfect but came off much better than I had dared to hope.  It also moved us out of the cookie cutter conferences that we used to do into a model where we have a lot more control.  I think you’ll all agree that the production values of our last few Summits have been fantastic.  That Summit also led to our changing relationship with Microsoft.  Microsoft holds two seats on the PASS Board.  All the PASS Board members face the same challenge: we all have full-time jobs and PASS comes in second place professionally (or sometimes further back).  Starting in 2008 we were assigned a liaison from Microsoft that had a much larger block of time to coordinate with us.  That changed everything between PASS and Microsoft.  Suddenly we were talking to product marketing, Microsoft PR, their event team, the Tech*Ed team, the education division, their user group team and their field sales team – locally and internationally.  We strengthened our relationship with CSS, SQLCAT and the engineering teams.  We had exposure at the executive level that we’d never had before.  And their level of participation at the Summit changed from under 100 people to 400-500 people.  I think those 400+ Microsoft employees have value at a conference on Microsoft SQL Server.  For the first time, Seattle had a real competitive advantage over other cities. I’m one that looked very hard at staying in Seattle for a long, long time.  I think those Microsoft engineers have value to our attendees.  I think the increased support that Microsoft can provide when we’re in Seattle has value to our attendees.  But that doesn’t tell the whole story.  There’s a significant (and vocal!) percentage of our membership that wants the Summit outside Seattle.  Post-2007 PASS doesn’t know what it’s like to have a Summit outside of Seattle.  I think until we have a Summit in another city we won’t really know the trade-offs. I think a model where we move every third or every other year is interesting.  But until we have another Summit outside Seattle and we can evaluate the logistics and how important it is to have depth and variety in our Microsoft participation we won’t really know. Another benefit that comes with a move is variety or diversity.  I learn more when I’m exposed to new things and new people.  I believe that moving the Summit will give a different set of people an opportunity to attend. 
Grant Fritchey writes “It seems that the board is leaning, extremely heavily, towards making it a permanent fixture in Seattle.”  I don’t believe that’s true.  I know there was discussion of that earlier but I don’t believe it’s true now. And that brings me to my question.  Do we announce the city now or do we wait until the 2012 Summit?  I’m happy to announce Seattle vs. not-Seattle as soon as we sign the contract.  But I’d like to leave the actual city announcement until the 2011 Summit.  I like the drama and mystery of it.  I also like that it doesn’t give you a reason to skip a Summit and wait for the next one if it’s closer or back in Seattle.  The other side of the coin is that your planning is easier if you know where it is.  What do you think?


  • Global Perspective: Oracle AppAdvantage Does its Stage Debut in the UK

    - by Tanu Sood
    Global Perspective is a monthly series that brings experiences, business needs and real-world use cases from regions across the globe. This month's feature is a follow-up to last month's Global Perspective note from a well-known ACE Director based in EMEA.
    My first contribution to this blog was before Oracle Open World, and I was quite excited about where this initiative would take me in my understanding of the value of Oracle Fusion Middleware. Rimi Bewtra from the Oracle AppAdvantage team came, as promised, to the Oracle ACE Director briefings and explained what this initiative was all about, and I then asked the directors to take part in the new survey. The story was really well received, and at the SOA advisory board that many of these ACE Directors already take part in, there was further discussion on how this initiative will help customers understand the benefits of adoption. A few days later Rick Beers launched the program at a lunch of invited customer executives, which included one from Pella, who talked about their projects (a quick recap on that here). I wasn't able to stay for the whole event, but what really interested me was that these executives understood the technology but were looking for how they could use it to drive their businesses.
    Lots of ideas were bubbling up in my head about how we can use this in user groups to help our members, and the timing was fantastic, as just three weeks later we had UKOUG_Apps13, our flagship Applications conference in the UK. We had independently been working with Oracle marketing in the UK on an initiative called Apps Transformation to help our members look beyond just the applications they use today. We have had a Fusion community page, but felt the options now open are much wider than Fusion Applications: there are acquired applications, social, mobility and, of course, the underlying technology, Oracle Fusion Middleware. I was really pleased to be allowed to present the Oracle AppAdvantage story as a session at our conference, and we are planning a special Apps Transformation event in March where I hope the Oracle AppAdvantage team will take part and where we will have the results of the survey to discuss.
    But life also came full circle for me. In my first post, I talked about Andrew Sutherland and his original theory that Oracle Fusion Middleware adoption had technical drivers. Well, Andrew was a speaker at our event and he gave a potted, tech-talk-free update on Oracle Open World. Andrew talked about the Prevailing Technology Winds and what is driving this today; in the past it was the move from simply automating processes (ERP etc.), through altering those processes (SOA), and on to consolidation.
The next drivers are around the need to predict, both faster and more accurately; how to better exploit the information that we have available. He went on to talk about The Nexus of Forces: Social, Mobile, Cloud and Information – harnessing these forces of change with Oracle technology. Gartner really likes this concept and if you want to know more you can get their paper here. All this has made me think, and I hope it will make you too. Technology can help us drive our businesses better and understanding your needs can be the first step on your journey, which was the theme of our event in the UK. I spoke to a number of the delegates and I hope to share some of their stories in later posts. If you have a story to share, the survey is at: https://www.surveymonkey.com/s/P335DD3 About the Author: Debra Lilley, Fujitsu Fusion Champion, UKOUG Board Member, Fusion User Experience Advocate and ACE Director. Debra has 18 years experience with Oracle Applications, with E Business Suite since 9.4.1, moving to Business Intelligence Team Leader and then Oracle Alliance Director. She has spoken at over 100 conferences worldwide and posts at debrasoraclethoughts Editor’s Note: Debra has kindly agreed to share her musings and experience in a monthly column on the Fusion Middleware blog so do stay tuned…


  • Coping with infrastructure upgrades

    - by Fatherjack
    A common topic for questions on SQL Server forums is how to plan and implement upgrades to SQL Server - moving from old to new hardware, or moving from one version of SQL Server to another. There are other circumstances where upgrades of other systems affect SQL Server DBAs. For example, where I work at the moment there is a Microsoft Exchange (email) server upgrade in progress. It is being handled by a different team, so I'm not wholly sure of the details, but we are in a situation where there are currently two Exchange email servers - the old one and the new one. Users' mailboxes are being transferred in a planned process, but as we approach the old server being turned off we also have to make sure that our SQL Servers get updated to use the new SMTP server for all of the SQL Agent notifications, SSIS packages and so on. My servers have a number of profiles so that various jobs can send emails on behalf of various departments and different systems. This means there are lots of places where the old server name needs to be replaced by the new one. Anyone who has set up DBMail and enjoyed the click-tastic odyssey of screens to create Profiles and Accounts and so on and so forth ought to seek some professional help, in my opinion. It's a nightmare of back-and-forth settings changes and it stinks. I wasn't looking forward to heading into this mess of a UI and changing the old Exchange server name for the new one on all my SQL instances for all of the accounts I have set up. So I did what any Englishman with a shed would do: I decided to take it apart and see if I could fix it another way. I took a guess that we were going to be working in MSDB, and Books Online was remarkably helpful; amongst a lot of information it told me about a couple of procedures that can be used to interrogate DBMail settings.
    USE [msdb] -- It's where all the good stuff is kept
    GO
    EXEC dbo.sysmail_help_profile_sp;
    EXEC dbo.sysmail_help_account_sp;
    Both of these procedures take optional parameters with the same names - ID and Name. If you provide an ID or a name then the results you get back are for that specific Profile or Account. Otherwise you get details of all Profiles and Accounts on the server you are connected to. As you can see (click for a bigger image), the Account has the SMTP server information in the servername column. We want to change that value to NewSMTP.Contoso.com. Now, it appears that the procedure we are looking at gets its data from the sysmail_account and sysmail_server tables; you can get the results the stored procedure provides if you run the code below.
    SELECT  [account_id] ,
            [name] ,
            [description] ,
            [email_address] ,
            [display_name] ,
            [replyto_address] ,
            [last_mod_datetime] ,
            [last_mod_user]
    FROM    dbo.sysmail_account AS sa;

    SELECT  [account_id] ,
            [servertype] ,
            [servername] ,
            [port] ,
            [username] ,
            [credential_id] ,
            [use_default_credentials] ,
            [enable_ssl] ,
            [flags] ,
            [last_mod_datetime] ,
            [last_mod_user] ,
            [timeout]
    FROM    dbo.sysmail_server AS sms
    Now, we have no real idea how these tables are linked, and whether making an update directly to one or other of them is going to do what we want or whether it will entirely cripple our ability to send email from SQL Server, so we won't touch those tables with any UPDATE TSQL. So, back to Books Online, where we find sysmail_update_account_sp. It's exactly what we need. The examples in BOL take the form (as below) of having every parameter explicitly defined.
    Not wanting to totally obliterate the existing values by failing to pass values for all of the parameters, I set about writing some code to gather the existing data from the tables, rewrite the SMTP server name, and then execute the resulting TSQL.
    IF OBJECT_ID('tempdb..#sysmailprofiles') IS NOT NULL
        DROP TABLE #sysmailprofiles
    GO
    CREATE TABLE #sysmailprofiles
        (
          account_id INT ,
          [name] VARCHAR(50) ,
          [description] VARCHAR(500) ,
          email_address VARCHAR(500) ,
          display_name VARCHAR(500) ,
          replyto_address VARCHAR(500) ,
          servertype VARCHAR(10) ,
          servername VARCHAR(100) ,
          port INT ,
          username VARCHAR(100) ,
          use_default_credentials VARCHAR(1) ,
          ENABLE_ssl VARCHAR(1)
        )
    INSERT  [#sysmailprofiles]
            ( [account_id] ,
              [name] ,
              [description] ,
              [email_address] ,
              [display_name] ,
              [replyto_address] ,
              [servertype] ,
              [servername] ,
              [port] ,
              [username] ,
              [use_default_credentials] ,
              [ENABLE_ssl]
            )
            EXEC [dbo].[sysmail_help_account_sp]
    DECLARE @TSQL NVARCHAR(1000)
    SELECT TOP 1
            @TSQL = 'EXEC [dbo].[sysmail_update_account_sp] @account_id = '
            + CAST([s].[account_id] AS VARCHAR(20))
            + ', @account_name = ''' + [s].[name] + ''''
            + ', @email_address = N''' + [s].[email_address] + ''''
            + ', @display_name = N''' + [s].[display_name] + ''''
            + ', @replyto_address = N''' + s.replyto_address + ''''
            + ', @description = N''' + [s].[description] + ''''
            + ', @mailserver_name = ''NEWSMTP.contoso.com'''
            + ', @mailserver_type = ' + [s].[servertype]
            + ', @port = ' + CAST([s].[port] AS VARCHAR(20))
            + ', @username = ' + COALESCE([s].[username], '''''')
            + ', @use_default_credentials = ' + CAST(s.[use_default_credentials] AS VARCHAR(1))
            + ', @enable_ssl = ' + [s].[ENABLE_ssl]
    FROM    [#sysmailprofiles] AS s
    WHERE   [s].[servername] = 'SMTP.Contoso.com'
    SELECT @tsql
    EXEC [sys].[sp_executesql] @tsql
    This worked well for me, and testing the email function with EXEC dbo.sp_send_dbmail afterwards showed that the settings were indeed using our new Exchange server. It was only later, while writing this blog, that I tried running the sysmail_update_account_sp procedure with only the SMTP server name parameter value specified. Despite what Books Online might intimate, you can do this, and only the values for the parameters you specify get changed. If a parameter is not specified in the execution of the procedure then its value remains unchanged. This renders most of the above script unnecessary, as I could have simply specified the account_id that I want to amend and the new value for the one parameter I want to update.
    EXEC sysmail_update_account_sp @account_id = 1, @mailserver_name = 'NEWSMTP.Contoso.com'
    This wasn't going to be the main reason for this post - it was meant to describe how to capture values from a stored procedure and use them in dynamic TSQL - but instead we are here, (re)learning the fact that Books Online is a little flawed in places. It is a fantastic resource for anyone working with SQL Server, but the reader must adopt an enquiring frame of mind and use a little curiosity to try simple variations on the examples to fully understand the code they are working with. I think the author(s) of this part of Books Online missed an opportunity to include a third example with fewer than all parameters specified, to hint that this method exists.
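    For completeness, the verification step mentioned above (re-running sysmail_help_account_sp and sending a test message with sp_send_dbmail) looks roughly like the sketch below. The profile name and recipient address are placeholders for illustration, not values from my servers.
    -- Confirm the account now shows the new SMTP host in the servername column
    EXEC msdb.dbo.sysmail_help_account_sp;
    -- Send a test message through Database Mail (profile name and recipient are examples only)
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA Notifications',
        @recipients   = 'dba.team@contoso.com',
        @subject      = 'DBMail SMTP server change test',
        @body         = 'If this arrives, the new SMTP server is in use.';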


  • Microsoft Forcing Dev/Partners Hands on Win 8 Through Certification

    - by D'Arcy Lussier
    I remember 2.5 years ago when Microsoft dropped a bomb on the Microsoft Partner community: all Gold competencies would require .NET 4 based premiere certifications (MCPD). Problem was, this gave a window of about 6 months for partners to update their employees’ certifications. At the place I was working, I put together an aggressive plan and we were able to attain the certs needed. Microsoft is always open that the certification requirements will change as the industry changes. .NET 1.0 certifications are useless here in 2012, and rightfully so they’ve been retired for a long time now. But now we’re seeing a new tactic by Microsoft – shifting gears away from certifications that speak to what industry needs and more to the Windows 8 agenda. Consider that currently the premiere development certification is the Microsoft Certified Professional Developer, which comes in three flavours – Web, Windows, and Azure. All require WCF and Data Access exams, as well as one that deals with the associated base technologies (ASP.NET, WinForms/WPF, Azure), and one that ties all three together in a solution-based exam. For Microsoft-based organizations, these skills aren’t just valid but necessary in building Microsoft applications. But the MCPD is being replaced with our old friend Microsoft Certified Solutions Developer (MCSD). So far, Microsoft has only released two types of MCSD – Web and Windows Store Apps. Windows Store Apps?! In a push to move developers to create WinRT-based applications, desktop development is now considered a second-class citizen in the eyes of Redmond. Also interesting are the language options for the exams: HTML5 and C#. Sorry VB folks, its time to embrace curly braces whether they be JavaScript or C#. Consider too the skills being assessed for the Windows Store Apps: Get your MCSD: Windows Store Apps Using HTML5 Get your MCSD: Windows Store Apps Using C# *Image Source: http://www.microsoft.com/learning/en/us/certification/mcsd-windows-store-apps.aspx Nov 21/2012 If you look at the skills being tested in each exam, you’ll find that skills like WCF and Data Access are downplayed compared to things like integrating Charms, facilitating Search, programming for the microphone and camera – all very Windows 8 focussed items. Where this becomes maddening is that Microsoft is still pushing Windows 7 with enterprise clients. According to a ZDNet article, Microsoft wants to see Windows 7 on 70% of enterprise desktops by mid 2013. Assuming they somehow meet that (its a pretty lofty goal), there’s years of traditional desktop-based development that will still be required at some level. For those thinking they’ll just write and stick with the MCPD certification, note that most exams that go towards that certification will be retired at the end of July 2013! (Read the small print). And while details haven’t been finalized, its a safe bet that MCPD certifications eventually won’t count towards Gold-level competencies in the Microsoft Partner program. What this means for Microsoft Partners and Developers is that certification for desktop development is going to be limited to Windows Store Apps unless Microsoft re-introduces a traditional desktop (WPF) based MCSD cert. Web Application Development – It’s Not All Bad There’s big changes on the web side of certification, but I actually see these changes as being for the good! 
Check out the new exam requirements for MCSD – Web Applications: Get your MCSD: Web Applications certification *Image Source: http://www.microsoft.com/learning/en/us/certification/cert-mcsd-web-applications.aspx Nov 21, 2012 We now *start* with HTML5, JavaScript, and CSS3! Now I’m sure that these will be slanted towards web development in IE, and I can hear designers everywhere bemoaning the CSS/IE combination. Still, I applaud Microsoft for adopting HTML5 as the go-to web technology and requiring certified developers to prove they have skills in the basics of web dev. The fact that the second exam clearly states “MVC Web Applications” shows that Web Forms is truly legacy and deprecated. That’s not to say there aren’t those out there that are still supporting or (for whatever reason) doing new dev with Web Forms, but this move by Microsoft is telling the community they better get on the MVC bandwagon if they want to stay current. Fantastic! And of course Azure needs to be here as well, and this is where the Microsoft agenda fits in. It’s no secret that there’s been a huge push in getting developers on to Azure. I don’t see this as being a bad thing either, as cloud computing (whether Azure, private, or 3rd party) is a necessary skill for developers to have here in 2012. The cynic in me realizes that the HTML5/JavaScript/CSS push wouldn’t be as prominent though if not for the Windows 8 Store App play, where HTML5 is a first class citizen (and an available language for the MCSD Windows Store App cert). In this case, the desktop developers loss is the web developers gain. Get Ready for Changes In addition to the changes in certifications, the Microsoft Partner competencies are going through changes as well. Web and Software Development are being merged into a single competency, meaning that licenses you would have received from having both as Gold are reduced. Other competencies are either being removed or changed, as are the exam requirements. In the same way that we’re seeing faster release cycles from Microsoft, so too will we see the Microsoft Partner Program and MS Certifications evolve faster than ever before. Many of us got caught in the last wave of changes, but this time we can see the wave coming – and it looks pretty big!


  • SQLIO Writes

    - by Grant Fritchey
    SQLIO is a fantastic utility for testing the abilities of the disks in your system. It has a very unfortunate name, though, since it's not really a SQL Server testing utility at all. It really is a disk utility. They ought to call it DiskIO, because I think they'd get more people using it. Anyway, branding is not the point of this blog post. Writes are the point of this blog post. SQLIO works by slamming your disk. It performs as many reads as it can, or as many writes as it can, depending on how you've configured your tests. There are much smarter people than me who will get into all the various types of tests you should run. I'd suggest reading a bit of what Jonathan Kehayias (blog|twitter) has to say, or wading into Denny Cherry's (blog|twitter) work. They're going to do a better job than I can of describing all the benefits and mechanisms around using this excellent piece of software. My concerns are very focused. I needed to set up a series of tests to see how well our product SQL Storage Compress worked. I wanted to know the effects it would have on a system - the disk for sure, but also memory and CPU. How to stress the system? SQLIO, of course. But when I set it up and ran it, following the documentation that comes with it, I was seeing better than 99% compression on the files. Don't get me wrong. Our product is magnificent, wonderful, all things great and beautiful, gets you coffee in the morning, and is made mostly from bacon. But 99% compression? No, it's not that good. So what's up? Well, it's the configuration. The default mechanism is to load up a file, something large that will overwhelm your disk cache. You're instructed to fill the file with the character 0x0. I never got a computer science degree; I went to film school. Because of this, I didn't memorize ASCII tables, so when I saw this, I thought it was zeros or something. Nope. It's NULL. That's right: you're making a very large file, but you're filling it with NULL values. That's actually fine when all you're testing is the disk subsystem. But when you want to test compression and decompression, that can be an issue. I got around this fairly quickly. Instead of generating a file filled with NULL values, I just copied a database file for my tests. And to test it with SQL Storage Compress, I used a database file that had already been run through compression (about 40% compression on that file, if you're interested). Now the reads were taken care of. I am seeing very realistic performance from decompressing the information for reads through SQLIO. But what about writes? Well, the issue is: what does SQLIO write? I don't have access to the code. But I do have access to the results. I did two different tests, just to be sure of what I was seeing. First test: use the .DAT file as described in the documentation. I opened the .DAT file after I was done with SQLIO, using WordPad. Guess what? It's a giant file full of air. SQLIO writes NULL values. What does that do to compression? I did the test again on a copy of an uncompressed database file. Then I ran the original and the SQLIO-modified copy through ZIP to see what happened. I got better than 99% compression out of the SQLIO-modified file (the original file of 624,896 KB went to 275,871 KB compressed; after SQLIO it went to 608 KB compressed). So, what does SQLIO write? It writes air. If you're trying to test it with compression, or maybe some other type of file storage mechanism like dedupe, you need to know this, because your tests really won't be valid.
Should I find some other mechanism for testing? Yeah, if all I'm interested in is establishing performance to my own satisfaction, yes. But, I want to be able to compare my results with other people's results and we all need to be using the same tool in order for that to happen. SQLIO is the common mechanism that most people I know use to establish disk performance behavior. It'd be better if we could get SQLIO to do writes in some other fashion. Oh, and before I go, I get to brag a bit. Measuring IOPS, SQL Storage Compress outperforms my disk alone by about 30%.
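    For readers who have not used SQLIO before, a run is driven by a small parameter file plus command-line switches. The sketch below shows the general shape only; the file path, sizes and durations are made-up values, and the switches should be checked against the readme that ships with your copy of SQLIO before relying on them.
    rem param.txt: <test file> <threads> <affinity mask> <file size in MB>
    rem E:\MSSQL\testfile.dat 8 0x0 20480
    rem A random read test followed by a sequential write test against that file:
    sqlio -kR -s120 -frandom -o8 -b8 -LS -Fparam.txt
    sqlio -kW -s120 -fsequential -o8 -b64 -LS -Fparam.txt
    As the post explains, the write test fills the target file with NULL bytes, so substitute a copy of a real database file if the data needs to be representative for compression or dedupe testing.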


  • Rebuilding CoasterBuzz, Part II: Hot data objects

    - by Jeff
    This is the second post, originally from my personal blog, in a series about rebuilding one of my Web sites, which has been around for 12 years. More: Part I: Evolution, and death to WCF After the rush to get moving on stuff, I temporarily lost interest. I went almost two weeks without touching the project, in part because the next thing on my backlog was doing up a bunch of administrative pages. So boring. Unfortunately, because most of the site's content is user-generated, you need some facilities for editing data. CoasterBuzz has a database full of amusement parks and roller coasters. The entities enjoy the relationships that you would expect, though they're further defined by "instances" of a coaster, to define one that has moved between parks as one, with different names and operational dates. And of course, there are pictures and news items, too. It's not horribly complex, except when you have to account for a name change and display just the newest name. In all previous versions, data access was straight SQL. As so much of the old code was rooted in 2003, with some changes in 2008, there wasn't much in the way of ORM frameworks going on then. Let me rephrase that, I mostly wasn't interested in ORM's. Since that time, I used a little LINQ to SQL in some projects, and a whole bunch of nHibernate while at Microsoft. Through all of that experience, I have to admit that these frameworks are often a bigger pain in the ass than not. They're great for basic crud operations, but when you start having all kinds of exotic relationships, they get difficult, and generate all kinds of weird SQL under the covers. The black box can quickly turn into a black hole. Sometimes you end up having to build all kinds of new expertise to do things "right" with a framework. Still, despite my reservations, I used the newer version of Entity Framework, with the "code first" modeling, in a science project and I really liked it. Since it's just a right-click away with NuGet, I figured I'd give it a shot here. My initial effort was spent defining the context class, which requires a bit of work because I deviate quite a bit from the conventions that EF uses, starting with table names. Then throw some partial querying of certain tables (where you'll find image data), and you're splitting tables across several objects (navigation properties). I won't go into the details, because these are all things that are well documented around the Internet, but there was a minor learning curve there. The basics of reading data using EF are fantastic. For example, a roller coaster object has a park associated with it, as well as a number of instances (if it was ever relocated), and there also might be a big banner image for it. This is stupid easy to use because it takes one line of code in your repository class, and by the time you pass it to the view, you have a rich object graph that has everything you need to display stuff. Likewise, editing simple data is also, well, simple. For this goodness, thank the ASP.NET MVC framework. The UpdateModel() method on the controllers is very elegant. Remember the old days of assigning all kinds of properties to objects in your Webforms code-behind? What a time consuming mess that used to be. Even if you're not using an ORM tool, having hydrated objects come off the wire is such a time saver. Not everything is easy, though. 
When you have to persist a complex graph of objects, particularly if they were composed in the user interface with all kinds of AJAX elements and list boxes, it's not just a simple matter of submitting the form. There were a few instances where I ended up going back to "old-fashioned" SQL just in the interest of time. It's not that I couldn't do what I needed with EF, it's just that the efficiency, both my own and that of the generated SQL, wasn't good. Since EF context objects expose a database connection object, you can use that to do the old school ADO.NET stuff you've done for a decade. Using various extension methods from POP Forums' data project, it was a breeze. You just have to stick to your decision, in this case. When you start messing with SQL directly, you can't go back in the same code to messing with entities because EF doesn't know what you're changing. Not really a big deal. There are a number of take-aways from using EF. The first is that you write a lot less code, which has always been a desired outcome of ORM's. The other lesson, and I particularly learned this the hard way working on the MSDN forums back in the day, is that trying to retrofit an ORM framework into an existing schema isn't fun at all. The CoasterBuzz database isn't bad, but there are design decisions I'd make differently if I were starting from scratch. Now that I have some of this stuff done, I feel like I can start to move on to the more interesting things on the backlog. There's a lot to do, but at least it's fun stuff, and not more forms that will be used infrequently.


  • Speakers, Please Check Your Time

    - by AjarnMark
    Woodrow Wilson was once asked how long it would take him to prepare for a 10 minute speech. He replied "Two weeks". He was then asked how long it would take for a 1 hour speech. "One week", he replied. 2 hour speech? "I'm ready right now," he replied.  Whether that is a true story or an urban legend, I don’t really know, but either way, it is a poignant reminder for all speakers, and particularly apropos this week leading up to the PASS Community Summit. (Cross-posted to the PASS Professional Development Virtual Chapter blog #PASSProfDev.) What’s the point of that story?  Simply this…if you have plenty of time to do your presentation, you don’t need to prepare much because it is easy to throw in more and more material to stretch out to your allotted time.  But if you are on a tight time constraint, then it will take significant preparation to distill your talk down to only the essential points. I have attended seven of the last eight North American Summit events, and every one of them has been fantastic.  The speakers are great, the material is timely and relevant, and the networking opportunities are awesome.  And every year, there is one little thing that just bugs me…speakers going over their allotted time.  Why does it bother me so?  Well, if you look at a typical schedule for a Summit, you’ll see that there are six or more sessions going on at the same time, and only 15 minutes to move from one to another.  If you’re trying to maximize your training dollar by attending something during every session time slot, and you don’t want to be the last guy trying to squeeze into the middle of the row, then those 15 minutes can be critical.  All the more so if you need to stop and use the bathroom or if you have to hike to the opposite end of the convention center.  It is really a bad position to find yourself having to choose between learning the last key points of Speaker A who is going over time, and getting over to Speaker B on time so you don’t miss her key opening remarks. And frankly, I think it is just rude.  Yes, the speakers are the function, after all they are bringing the content that the rest of us are paying to learn.  But it is also an honor to be given the opportunity to speak at a conference like this, and no one speaker is so important that the conference would be a disaster without him.  Speakers know when they submit their abstract, long before the conference, how much time they will have.  It has been the same pattern at the Summit for at least the last eight years.  Program Sessions are 75 minutes long.  Some speakers who have a good track record, and meet other qualifying criteria, are extended an invitation to present a Spotlight Session which is 90 minutes (a 20% increase).  So there really is no excuse.  It’s not like you were promised a 2-hour segment and then discovered when you got here that it was only 75 minutes.  In fact, it’s not like PASS advertised 90-minute sessions for everyone and then a select few were cut back to only 75.  As a speaker, you know well before you get here which type of session you are doing and how long it is, so as a professional, you should plan accordingly. Now you might think that this only happens to rookies, but I’ll tell you that some of the worst offenders are big-name veterans who draw huge attendance numbers for their sessions.  
Some attendees blow this off as, “Hey, it’s so-and-so, and I’d stay here for hours and listen to him/her talk.”  To which I would reply, “Then they should have submitted for a pre- or post-conference day-long seminar instead, but don’t try to squeeze your day-long talk into a 90-minute session.”  Now I don’t really believe that these speakers are being malicious or just selfishly trying to extend their time in the spotlight.  I think that most of them are merely being undisciplined and did not trim their presentation sufficiently, or allowed themselves to get off-track (often in a generous attempt to help someone in the audience with a question or problem that really should have been noted for further discussion after the session). So here is my recommendation…my plea, even.  TRIM THE FAT!  Now.  Before it’s too late.  Before you even get on the airplane, take a long, hard look at your presentation and eliminate some of the points that you originally thought you had to make, but in reality are not truly crucial to your main topic.  Delete a few slides.  Test your demos and have them already scripted rather than typing them during your talk.  It is better to cut out too much and end up with plenty of time at the end for Questions & Answers.  And you can always keep some notes on the stuff that you cut out so that you could fill it back in at the end as bonus material if you really do end up with a whole bunch of time on your hands.  But I don’t think you will.  And if you do, that will look even better to the audience as it will look like you’re giving them something extra that not every audience gets.  And they will thank you for that.

    Read the article

  • Facebook connect displaying invite friends dialog and closing on completion

    - by Dougnukem
    I'm trying to create a Facebook Connect application that displays a friend invite dialog within the page using Facebook's Javascript API (through an FBMLPopupDialog). The trouble is that to display a friend invite dialog you use a multi-friend form, which requires an action="url" attribute that represents the URL to redirect your page to when the user completes or skips the form. The problem is that I want to just close the FBMLPopupDialog (the same behavior as if the user had just hit the 'X' button on the popup dialog). The best I can do is redirect the user back to the page they were on (basically a reload), but then they lose all AJAX/Flash application state. I'm wondering if any Facebook Connect developers have run into this issue and have a good way to simply display a friend invite "lightbox" dialog within their website where they don't want to "refresh" or "redirect" when the user finishes. The Facebook Connect JS API provides FB.Connect.inviteConnectUsers, which provides a nice dialog but only connects existing users of your application who also have a Facebook account and haven't connected. http://bugs.developers.facebook.com/show%5Fbug.cgi?id=4916

    function fb_inviteFriends() {
        // Invite users
        log("Inviting users...");
        FB.Connect.requireSession(
            function() {
                // Connect success
                var uid = FB.Facebook.apiClient.get_session().uid;
                log('FB CONNECT SUCCESS: ' + uid);
                // Invite users
                log("Inviting users...");
                // Update server with connected account
                updateAccountFacebookUID();
                var fbml = fb_getInviteFBML();
                var dialog = new FB.UI.FBMLPopupDialog("Weblings Invite", fbml);
                //dialog.setFBMLContent(fbml);
                dialog.setContentWidth(650);
                dialog.setContentHeight(450);
                dialog.show();
            },
            // Connect cancelled
            function() {
                // User cancelled the connect
                log("FB Connect cancelled:");
            }
        );
    }

    function fb_getInviteFBML() {
        var uid = FB.Facebook.apiClient.get_session().uid;
        var fbml = '';
        fbml = '<fb:fbml>\n' +
               '<fb:request-form\n' +
               // Redirect back to this page
               ' action="' + document.location + '"\n' +
               ' method="POST"\n' +
               ' invite="true"\n' +
               ' type="Weblings Invite"\n' +
               ' content="I need your help to discover all the Weblings and save the Internet! WebWars: Weblings is a cool new game where we can collect fantastic creatures while surfing our favorite websites. Come find the missing Weblings with me!' +
               // Callback the server with the appropriate Webwars Account URL
               ' <fb:req-choice url=\'' + WebwarsFB.WebwarsAccountServer + '/SplashPage.aspx?action=ref&reftype=Facebook\' label=\'Check out WebWars: Weblings\' />"\n' +
               '>\n' +
               ' <fb:multi-friend-selector\n' +
               '   rows="2"\n' +
               '   cols="4"\n' +
               '   bypass="Cancel"\n' +
               '   showborder="false"\n' +
               '   actiontext="Use this form to invite your friends to connect with WebWars: Weblings."/>\n' +
               ' </fb:request-form>' +
               ' </fb:fbml>';
        return fbml;
    }

    Read the article

  • Featureful commercial text editors?

    - by wrp
    I'm willing to buy tools if they add genuine value over a FOSS equivalent. One thing I wouldn't mind having is an editor with the power of Emacs, but made more user-friendly. There seem to be several commercial editors out there, but I can't find much discussion of them online. Maybe it's because the kind of people who use commercial software don't have time to do much blogging. ;-) If you have used any, what was your evaluation? I'd especially like to hear how you would compare them to Emacs. I'm thinking of editors like VEDIT, Boxer, Crisp, UltraEdit, SlickEdit, etc. To get things started, I tried EditPad Pro because I needed something on a Win98SE box. I was attracted by its powerful support for regexps, but I didn't use it for long. One annoyance was that find-in-files was only available in a separate product you had to buy. The main problem, though, was stability. It sometimes hung, and I lost a few files because it corrupted them while editing. After a couple of weeks, I found that I was avoiding using it, so I just uninstalled it. Edit: Ah...I need to remove some ambiguity. With reference to Emacs, "power" often means its potential for customization. This malleability comes from having an architecture in which most of the functionality is written in a scripting language that runs on a compiled core. Emacs (with elisp) is by far the most widely known such system among home users, but there have been other heavily used editors such as Freemacs (MINT), JED (S-Lang), XEDIT (Rexx), ADAM (TPU), and SlickEdit (Slick-C). In this case, by "power" I'm not referring to extensibility but to realized features. There are three main areas in which I think a commercial text editor might be an improvement over Emacs:

    Stability: The only apps I regularly use on Linux that give me flaky behavior are Emacs, Gedit, and Geany. On Windows, I like the look and features of Notepad++, but I find it extremely unstable, especially if I try to use the plugins. Whatever I happen to be doing, I'm using some text editor practically all day long. If I could switch to an editor that never gave me problems, it would definitely lower my stress level.

    Tools: When I started using Emacs, I searched the manual cover to cover to glean ideas for clever, useful things I could do with it. I'd like to see lots of useful features for editing code, based on detailed knowledge of what the system can do and the accumulated feedback of users.

    Polish: The rule of threes goes that if you develop something for yourself, it's three times harder to make it usable in-house, and three times harder again to make it a viable product for sale. It's understandable, but free software development doesn't seem to benefit from much usability testing.

    BTW, texteditors.org is a fantastic resource for researching text editors.

    Read the article

  • SQL Server to PostgreSQL - Migration and design concerns

    - by youwhut
    Currently migrating from SQL Server to PostgreSQL and attempting to improve a couple of key areas on the way: I have an Articles table: CREATE TABLE [dbo].[Articles]( [server_ref] [int] NOT NULL, [article_ref] [int] NOT NULL, [article_title] [varchar](400) NOT NULL, [category_ref] [int] NOT NULL, [size] [bigint] NOT NULL ) Data (comma delimited text files) is dumped on the import server by ~500 (out of ~1000) servers on a daily basis. Importing: Indexes are disabled on the Articles table. For each dumped text file Data is BULK copied to a temporary table. Temporary table is updated. Old data for the server is dropped from the Articles table. Temporary table data is copied to Articles table. Temporary table dropped. Once this process is complete for all servers the indexes are built and the new database is copied to a web server. I am reasonably happy with this process but there is always room for improvement as I strive for a real-time (haha!) system. Is what I am doing correct? The Articles table contains ~500 million records and is expected to grow. Searching across this table is okay but could be better. i.e. SELECT * FROM Articles WHERE server_ref=33 AND article_title LIKE '%criteria%' has been satisfactory but I want to improve the speed of searching. Obviously the "LIKE" is my problem here. Suggestions? SELECT * FROM Articles WHERE article_title LIKE '%criteria%' is horrendous. Partitioning is a feature of SQL Server Enterprise but $$$ which is one of the many exciting prospects of PostgreSQL. What performance hit will be incurred for the import process (drop data, insert data) and building indexes? Will the database grow by a huge amount? The database currently stands at 200 GB and will grow. Copying this across the network is not ideal but it works. I am putting thought into changing the hardware structure of the system. The thought process of having an import server and a web server is so that the import server can do the dirty work (WITHOUT indexes) while the web server (WITH indexes) can present reports. Maybe reducing the system down to one server would work to skip the copying across the network stage. This one server would have two versions of the database: one with the indexes for delivering reports and the other without for importing new data. The databases would swap daily. Thoughts? This is a fantastic system, and believe it or not there is some method to my madness by giving it a big shake up. UPDATE: I am not looking for help with relational databases, but hoping to bounce ideas around with data warehouse experts.

    Read the article

  • WPF MVVM: Convention over Configuration for ResourceDictionary ?

    - by Jeffrey Knight
    Update In the wiki spirit of StackOverflow, here's an update: I spiked Joe White's IValueConverter suggestion below. It works like a charm. I've written a "quickstart" example of this that automates the mapping of ViewModels-Views using some cheap string replacement. If no View is found to represent the ViewModel, it defaults to an "Under Construction" page. I'm dubbing this approach "WPF MVVM White" since it was Joe White's idea. Here are a couple screenshots. The first image is a case of "[SomeControlName]ViewModel" has a corresponding "[SomeControlName]View", based on pure naming convention. The second is a case where the ModelView doesn't have any views to represent it. No more ResourceDictionaries with long ViewModel to View mappings. It's pure naming convention now. I'm hosting a download of the project here: http://rootsilver.com/files/Mvvm.White.Quickstart.zip I'll follow up with a longer blog post walk through. Original Post I read Josh Smith's fantastic MSDN article on WPF MVVM over the weekend. It's destined to be a cult classic. It took me a while to wrap my head around the magic of asking WPF to render the ViewModel. It's like saying "Here's a class, WPF. Go figure out which UI to use to present it." For those who missed this magic, WPF can do this by looking up the View for ModelView in the ResourceDictionary mapping and pulling out the corresponding View. (Scroll down to Figure 10 Supplying a View ). The first thing that jumps out at me immediately is that there's already a strong naming convention of: classNameView ("View" suffix) classNameViewModel ("ViewModel" suffix) My question is: Since the ResourceDictionary can be manipulated programatically, I"m wondering if anyone has managed to Regex.Replace the whole thing away, so the lookup is automatic, and any new View/ViewModels get resolved by virtue of their naming convention? [Edit] What I'm imagining is a hook/interception into ResourceDictionary. ... Also considering a method at startup that uses interop to pull out *View$ and *ViewModel$ class names to build the DataTemplate dictionary in code: //build list foreach .... String.Format("<DataTemplate DataType=\"{x:Type vm:{0} }\"><v:{1} /></DataTemplate>", ...)
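    For readers who want to see roughly what that IValueConverter idea looks like in code, here is a minimal sketch. It is my own illustration (not the code from the Mvvm.White.Quickstart download above), and it assumes the simple convention that a type named SomethingViewModel has a matching SomethingView in the same assembly; ViewModelToViewConverter and the placeholder TextBlock are invented names:

    // Sketch only: map a ViewModel instance to its conventionally named View.
    // Assumes "SomethingViewModel" <-> "SomethingView", resolved in the ViewModel's own assembly.
    using System;
    using System.Globalization;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Data;

    public class ViewModelToViewConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
        {
            if (value == null)
                return null;

            Type viewModelType = value.GetType();

            // Convention over configuration: derive the View type name from the ViewModel type name.
            string viewTypeName = viewModelType.FullName.Replace("ViewModel", "View");
            Type viewType = viewModelType.Assembly.GetType(viewTypeName);

            if (viewType == null)
            {
                // No matching View: show a simple "under construction" placeholder instead.
                return new TextBlock { Text = "Under construction: " + viewModelType.Name };
            }

            var view = (FrameworkElement)Activator.CreateInstance(viewType);
            view.DataContext = value;   // wire the ViewModel up as the View's DataContext
            return view;
        }

        public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
        {
            throw new NotSupportedException();
        }
    }

    A ContentControl bound through this converter, for example Content="{Binding CurrentViewModel, Converter={StaticResource ViewModelToViewConverter}}", then renders whichever View the naming convention resolves, with no per-ViewModel DataTemplate entries left in the ResourceDictionary.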

    Read the article

  • Is the Scala 2.8 collections library a case of "the longest suicide note in history" ?

    - by oxbow_lakes
    First note the inflammatory subject title is a quotation made about the manifesto of a UK political party in the early 1980s. This question is subjective, but it is a genuine question; I've made it CW and I'd like some opinions on the matter. Despite whatever my wife and coworkers keep telling me, I don't think I'm an idiot: I have a good degree in mathematics from the University of Oxford and I've been programming commercially for almost 12 years and in Scala for about a year (also commercially). I have just started to look at the Scala collections library re-implementation which is coming in the imminent 2.8 release. Those familiar with the library from 2.7 will notice that the library, from a usage perspective, has changed little. For example...

    > List("Paris", "London").map(_.length)
    res0: List[Int] = List(5, 6)

    ...would work in either version. The library is eminently usable: in fact it's fantastic. However, those previously unfamiliar with Scala and poking around to get a feel for the language now have to make sense of method signatures like:

    def map[B, That](f: A => B)(implicit bf: CanBuildFrom[Repr, B, That]): That

    For such simple functionality, this is a daunting signature and one which I find myself struggling to understand. Not that I think Scala was ever likely to be the next Java (or C/C++/C#) - I don't believe its creators were aiming it at that market - but I think it is/was certainly feasible for Scala to become the next Ruby or Python (i.e. to gain a significant commercial user base). Is this going to put people off coming to Scala? Is this going to give Scala a bad name in the commercial world as an academic plaything that only dedicated PhD students can understand? Are CTOs and heads of software going to get scared off? Was the library re-design a sensible idea? If you're using Scala commercially, are you worried about this? Are you planning to adopt 2.8 immediately or wait to see what happens? Steve Yegge once attacked Scala (mistakenly in my opinion) for what he saw as its overcomplicated type system. I worry that someone is going to have a field day spreading FUD with this API (similarly to how Josh Bloch scared the JCP out of adding closures to Java). Note - I should be clear that, whilst I believe that Josh Bloch was influential in the rejection of the BGGA closures proposal, I don't ascribe this to anything other than his honestly-held belief that the proposal represented a mistake.

    Read the article

  • NSTimer as a self-targeting ivar.

    - by Matt Wilding
    I have come across an awkward situation where I would like to have a class with an NSTimer instance variable that repeatedly calls a method of the class as long as the class is alive. For illustration purposes, it might look like this:

    // .h
    @interface MyClock : NSObject {
        NSTimer* _myTimer;
    }
    - (void)timerTick;
    @end

    // .m
    @implementation MyClock

    - (id)init {
        self = [super init];
        if (self) {
            _myTimer = [[NSTimer scheduledTimerWithTimeInterval:1.0f
                                                         target:self
                                                       selector:@selector(timerTick)
                                                       userInfo:nil
                                                        repeats:YES] retain];
        }
        return self;
    }

    - (void)dealloc {
        [_myTimer invalidate];
        [_myTimer release];
        [super dealloc];
    }

    - (void)timerTick {
        // Do something fantastic.
    }

    @end

    That's what I want. I don't want to have to expose an interface on my class to start and stop the internal timer, I just want it to run while the class exists. Seems simple enough. But the problem is that NSTimer retains its target. That means that as long as that timer is active, it is keeping the class from being dealloc'd by normal memory management methods because the timer has retained it. Manually adjusting the retain count is out of the question. This behavior of NSTimer seems like it would make it difficult to ever have a repeating timer as an ivar, because I can't think of a time when an ivar should retain its owning class. This leaves me with the unpleasant duty of coming up with some method of providing an interface on MyClock that allows users of the class to control when the timer is started and stopped. Besides adding unneeded complexity, this is annoying because having one owner of an instance of the class invalidate the timer could step on the toes of another owner who is counting on it to keep running. I could implement my own pseudo-retain-count system for keeping the timer running but, ...seriously? This is way too much work for such a simple concept. Any solution I can think of feels hacky. I ended up writing a wrapper for NSTimer that behaves exactly like a normal NSTimer, but doesn't retain its target. I don't like it, and I would appreciate any insight.

    Read the article

  • Updating a deallocated UIWebView from a background thread

    - by Dan Ray
    As you can see from the title, I've programmed myself into a corner and I've got several things working against me... In a UIViewController subclass that manages a large and complex view. One part of it is a UIWebView that contains output from a web request that I had to build and execute, and manually assemble HTML from. Since it takes a second or two to run, I dropped it into the background by calling self performSelectorInBackground:. Then from that method I call there, I use self performSelectorOnMainThread: to get back to the surface of the thread stack to update the UIWebView with what I just got. Like this (which I've cut down to show only the relevant issues): -(void)locationManager:(CLLocationManager *)manager didUpdateToLocation:(CLLocation *)newLocation fromLocation:(CLLocation *)oldLocation { //then get mapquest directions NSLog(@"Got called to handle new location!"); [manager stopUpdatingLocation]; [self performSelectorInBackground:@selector(getDirectionsFromHere:) withObject:newLocation]; } - (void)getDirectionsFromHere:(CLLocation *)newLocation { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; CLLocationCoordinate2D here = newLocation.coordinate; // assemble a call to the MapQuest directions API in NSString *dirURL // ...cut for brevity NSLog(@"Query is %@", dirURL); NSString *response = [NSString stringWithContentsOfURL:[NSURL URLWithString:dirURL] encoding:NSUTF8StringEncoding error:NULL]; NSMutableString *directionsOutput = [[NSMutableString alloc] init]; // assemble response into an HTML table in NSString *directionsOutput // ...cut for brevity [self performSelectorOnMainThread:@selector(updateDirectionsWithHtml:) withObject:directionsOutput waitUntilDone:NO]; [directionsOutput release]; [pool drain]; [pool release]; } - (void)updateDirectionsWithHtml:(NSString *)directionsOutput { [self.directionsWebView loadHTMLString:directionsOutput baseURL:nil]; } This all works totally great, UNLESS I've backed out of this view controller before CLLocationManager hits its delegate method. If this happens after I've already left this view, I get: 2010-06-07 16:38:08.508 EverWondr[180:760b] bool _WebTryThreadLock(bool), 0x1b6830: Tried to obtain the web lock from a thread other than the main thread or the web thread. This may be a result of calling to UIKit from a secondary thread. Crashing now... Despite what this says, I can repeatably cause this crash when I back out too early. I'm not at all convinced that attempting a UI update from a background thread is really the issue; I think it's that my UIWebView is deallocated. I suspect that the fact I was just IN a background thread makes the runtime suspect something's up about that, but I feel fairly sure that's not it. So how do I tell CLLocationManager not to worry about it, when I'm backing out of that view? I tried [self.locationManager stopUpdatingLocation] inside my viewWillDisappear method, but that didn't do it. (Incidentally, MapQuest's apis are FANTASTIC. Way WAY better than anything Google provides. I can't recommend them highly enough.)

    Read the article

  • zend_navigation and modules

    - by Grant Collins
    Hi, I am developing an application at the moment with Zend and I have separated the app into modules. The default module is the main site, which users who are not logged in can access and have free rein to look around. When you log in, depending on the user type you either go to module A or module B, which is controlled by simple ACLs. If you have access to Module A you cannot access Module B and vice versa. Both user types can see the default module. Now I want to use Zend_Navigation to manage the entire application's navigation in all modules. I am not sure how to go about this, as all the examples that I have seen work within a single module or a very simple application. I've tried to have my navigation.xml file look like this:

    <configdata>
      <navigation>
        <label>Home</label>
        <controller>index</controller>
        <action>index</action>
        <module>default</module>
        <pages>
          <tour>
            <label>tour</label>
            <controller>tour</controller>
            <action>index</action>
            <module>default</module>
          </tour>
          <blog>
            <label>blog</label>
            <url>http://blog.mysite.com</url>
          </blog>
          <support>
            <label>Support</label>
            <controller>support</controller>
            <action>index</action>
            <module>default</module>
          </support>
        </pages>
      </navigation>
    </configdata>

    This is fine for the default module, but how would I go about adding the other modules to this navigation file? Each module has its own home page, among other pages. Would I be better off adding a unique navigation.xml file for each module that is loaded in the preDispatch plugin that I have written to handle my ACLs? Or keep them in one massive navigation file? Any tips would be fantastic. Thanks, Grant

    Read the article

  • How and why do I set up a C# build machine?

    - by mmr
    Hi all, I'm working with a small (4 person) development team on a C# project. I've proposed setting up a build machine which will do nightly builds and tests of the project, because I understand that this is a Good Thing. Trouble is, we don't have a whole lot of budget here, so I have to justify the expense to the powers that be. So I want to know: What kind of tools/licenses will I need? Right now, we use Visual Studio and Smart Assembly to build, and Perforce for source control. Will I need something else, or is there an equivalent of a cron job for running automated scripts? What, exactly, will this get me, other than an indication of a broken build? Should I set up test projects in this solution (sln file) that will be run by these scripts, so I can have particular functions tested? We have, at the moment, two such tests, because we haven't had the time (or frankly, the experience) to make good unit tests. What kind of hardware will I need for this? Once a build has been finished and tested, is it a common practice to put that build up on an ftp site or have some other way for internal access? The idea is that this machine makes the build, and we all go to it, but can make debug builds if we have to. How often should we make this kind of build? How is space managed? If we make nightly builds, should we keep around all the old builds, or start to ditch them after about a week or so? Is there anything else I'm not seeing here? I realize that this is a very large topic, and I'm just starting out. I couldn't find a duplicate of this question here, and if there's a book out there I should just get, please let me know. EDIT: I finally got it to work! Hudson is completely fantastic, and FxCop is showing that some features we thought were implemented were actually incomplete. We also had to change the installer type from Old-And-Busted vdproj to New Hotness WiX. Basically, for those who are paying attention, if you can run your build from the command line, then you can put it into hudson. Making the build run from the command line via MSBuild is a useful exercise in itself, because it forces your tools to be current.
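    Since the question mentions that only two tests exist so far, it may help to see how small a useful automated test can be. The fixture below is purely illustrative (PriceCalculator and its numbers are invented for this sketch); a CI server such as Hudson can run fixtures like this through the NUnit console runner on every build and fail the build when an assertion breaks:

    // Illustrative only: a tiny class under test plus one NUnit test.
    // A real fixture would target the project's own assemblies instead.
    using NUnit.Framework;

    public class PriceCalculator
    {
        public decimal ApplyDiscount(decimal price, decimal rate)
        {
            return price * (1 - rate);
        }
    }

    [TestFixture]
    public class PriceCalculatorTests
    {
        [Test]
        public void ApplyDiscount_TenPercent_ReducesPriceByTenPercent()
        {
            var calculator = new PriceCalculator();

            decimal discounted = calculator.ApplyDiscount(100m, 0.10m);

            Assert.AreEqual(90m, discounted);
        }
    }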

    Read the article

  • Is there a way to split a widescreen monitor in to two or more virtual monitors?

    - by Mike Thompson
    Like most developers I have grown to love dual monitors. I won't go into all the reasons for their goodness; just take it as a given. However, they are not perfect. You can never seem to line them up "just right". You always end up with the monitors at slightly funny angles. And of course the bezel always gets in the way. And this is with identical monitors. The problem is much worse with different monitors -- VMWare's multi monitor feature won't even work with monitors of different resolutions. When you use multiple monitors, one of them becomes your primary monitor of focus. Your focus may flip from one monitor to the other, but at any point in time you are usually focusing on only one monitor. There are exceptions to this (WinDiff, Excel), but this is generally the case. I suggest that having a single large monitor with all the benefits of multiple smaller monitors would be a better solution. Wide screen monitors are fantastic, but it is hard to use all the space efficiently. If you are writing code you are generally working on the left-hand side of the window. If you maximize an editor on a wide-screen monitor the right-hand side of the window will be a sea of white. Programs like WinSplit Revolution will help to organise your windows, but this is really just addressing the symptom, not the problem. Even with WinSplit Revolution, when you maximise a window it will take up the whole screen. You can't lock a window into a specific section of the screen. This is where virtual monitors come in. What would be really nice is a video driver that sits on top of the existing driver, but allows a single monitor to be virtualised into multiple monitors. Control Panel would see your single physical monitor as two or more virtual monitors. The software could even support a virtual bezel to emphasise what is happening, or you could opt for seamless mode. Programs like WinSplit Revolution and UltraMon would still work. This virtual video driver would allow you to slice & dice your physical monitor into as many virtual monitors as you want. Does anybody know if such software exists? If not, are there any budding Windows display driver gurus out there willing to take up the challenge? I am not after the myriad of virtual desktop/window manager programs that are available. I get frustrated with these programs. They seem good at first but they usually have some strange behaviour and don't work well with other programs (such as WinSplit Revolution). I want the real thing!

    Read the article

  • vim-powerline colors are out of whack in urxvt

    - by komidore64
    I have attached two images showing what my vim-powerline looks like. As you can see, something has happened to the colors and I cannot figure out how to fix it. I'm running Fedora 17 on a clean install with i3 (default config) and urxvt. Here is my bashrc: # .bashrc if [[ "$(uname)" != "Darwin" ]]; then # non mac os x # source global bashrc if [[ -f "/etc/bashrc" ]]; then . /etc/bashrc fi export TERM='xterm-256color' # probably shouldn't do this fi # bash prompt with colors # [ <user>@<hostname> <working directory> {current git branch (if you're in a repo)} ] # ==> PS1="\[\e[1;33m\][ \u\[\e[1;37m\]@\[\e[1;32m\]\h\[\e[1;33m\] \W\$(git branch 2> /dev/null | grep -e '\* ' | sed 's/^..\(.*\)/ {\[\e[1;36m\]\1\[\e[1;33m\]}/') ]\[\e[0m\]\n==> " # execute only in Mac OS X if [[ "$(uname)" == 'Darwin' ]]; then # if OS X has a $HOME/bin folder, then add it to PATH if [[ -d "$HOME/bin" ]]; then export PATH="$PATH:$HOME/bin" fi alias ls='ls -G' # ls with colors fi alias ll='ls -lah' # long listing of all files with human readable file sizes alias tree='tree -C' # turns on coloring for tree command alias mkdir='mkdir -p' # create parent directories as needed alias vim='vim -p' # if more than one file, open files in tabs export EDITOR='vim' # super-secret work stuff if [[ -f "$HOME/.workbashrc" ]]; then . $HOME/.workbashrc fi # Add RVM to PATH for scripting if [[ -d "$HOME/.rvm/bin" ]]; then # if installed PATH=$PATH:$HOME/.rvm/bin fi and my Xdefaults: ! URxvt config ! colors! URxvt.background: #101010 URxvt.foreground: #ededed URxvt.cursorColor: #666666 URxvt.color0: #2E3436 URxvt.color8: #555753 URxvt.color1: #993C3C URxvt.color9: #BF4141 URxvt.color2: #3C993C URxvt.color10: #41BF41 URxvt.color3: #99993C URxvt.color11: #BFBF41 URxvt.color4: #3C6199 URxvt.color12: #4174FB URxvt.color5: #993C99 URxvt.color13: #BF41BF URxvt.color6: #3C9999 URxvt.color14: #41BFBF URxvt.color7: #D3D7CF URxvt.color15: #E3E3E3 ! 
options URxvt*loginShell: true URxvt*font: xft:DejaVu Sans Mono for Powerline:antialias=true:size=12 URxvt*saveLines: 8192 URxvt*scrollstyle: plain URxvt*scrollBar_right: true URxvt*scrollTtyOutput: true URxvt*scrollTtyKeypress: true URxvt*urlLauncher: google-chrome and finally my vimrc set nocompatible set dir=~/.vim/ " set one place for vim swap files " vundler for vim plugins ---- filetype off set rtp+=~/.vim/bundle/vundle call vundle#rc() Bundle 'gmarik/vundle' Bundle 'tpope/vim-surround' Bundle 'greyblake/vim-preview' Bundle 'Lokaltog/vim-powerline' Bundle 'tpope/vim-endwise' Bundle 'kien/ctrlp.vim' " ---------------------------- syntax enable filetype plugin indent on " Powerline ------------------ set noshowmode set laststatus=2 let g:Powerline_symbols = 'fancy' " show fancy symbols (requires patched font) set encoding=utf-8 " ---------------------------- " ctrlp ---------------------- let g:ctrlp_open_multiple_files = 'tj' " open multiple files in additional tabs let g:ctrlp_show_hidden = 1 " include dotfiles and dotdirs in ctrlp indexing let g:ctrlp_prompt_mappings = { \ 'AcceptSelection("e")': ['<c-t>'], \ 'AcceptSelection("t")': ['<cr>', '<2-LeftMouse>'], \ } " remap <cr> to open file in a new tab " ---------------------------- set showcmd set tabpagemax=100 set hlsearch set incsearch set nowrapscan set ignorecase set smartcase set ruler set tabstop=4 set shiftwidth=4 set expandtab set wildmode=list:longest autocmd BufWritePre * :%s/\s\+$//e "remove trailing whitespace " :REV to "revert" file to state of the most recent save command REV earlier 1f " disable netrw -------------- let g:loaded_netrw = 1 let g:loaded_netrwPlugin = 1 " ---------------------------- Any guidance as to fixing the statusline would be fantastic. I've found a github issue outlining almost the exact same problem, but the solution was never posted. Thank you.

    Read the article

  • SQLExpress service unable to start Error code 17053

    - by Chris Sobolewski
    A user was instructed by their software support to upgrade a program and install SQLExpress as part of the installation process. Since that time, the service has been able to start, citing error 17053, which appears to be an authentication issue. Here is the error log: 2011-01-11 13:17:45.50 Server Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86) Feb 9 2007 22:47:07 Copyright (c) 1988-2005 Microsoft Corporation Express Edition on Windows NT 5.1 (Build 2600: Service Pack 2) 2011-01-11 13:17:45.50 Server (c) 2005 Microsoft Corporation. 2011-01-11 13:17:45.50 Server All rights reserved. 2011-01-11 13:17:45.50 Server Server process ID is 3332. 2011-01-11 13:17:45.50 Server Authentication mode is WINDOWS-ONLY. 2011-01-11 13:17:45.50 Server Logging SQL Server messages in file 'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\ERRORLOG'. 2011-01-11 13:17:45.52 Server This instance of SQL Server last reported using a process ID of 2332 at 11/10/2010 2:15:24 PM (local) 11/10/2010 7:15:24 PM (UTC). This is an informational message only; no user action is required. 2011-01-11 13:17:45.52 Server Error: 17053, Severity: 16, State: 1. 2011-01-11 13:17:45.52 Server UpdateUptimeRegKey: Operating system error 5(Access is denied.) encountered. 2011-01-11 13:17:45.52 Server Registry startup parameters: 2011-01-11 13:17:45.52 Server -d c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\master.mdf 2011-01-11 13:17:45.52 Server -e c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\ERRORLOG 2011-01-11 13:17:45.52 Server -l c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\mastlog.ldf 2011-01-11 13:17:45.52 Server Error: 17113, Severity: 16, State: 1. 2011-01-11 13:17:45.52 Server Error 3(The system cannot find the path specified.) occurred while opening file 'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\master.mdf' to obtain configuration information at startup. An invalid startup option might have caused the error. Verify your startup options, and correct or remove them if necessary. 2011-01-11 13:17:45.52 Server Error: 17053, Severity: 16, State: 1. 2011-01-11 13:17:45.52 Server UpdateUptimeRegKey: Operating system error 5(Access is denied.) encountered. 4 Server Error: 17053, Severity: 16, State: 1. 2011-01-11 13:08:21.34 Server UpdateUptimeRegKey: Operating system error 5(Access is denied.) encountered. 12:47:20.85 spid5s SQL Trace ID 1 was started by login "sa". 2011-01-11 12:47:20.90 spid5s Starting up database 'mssqlsystemresource'. 2011-01-11 12:47:20.93 spid5s The resource database build version is 9.00.3042. This is an informational message only. No user action is required. 2011-01-11 12:47:21.21 spid5s Error: 15466, Severity: 16, State: 1. 2011-01-11 12:47:21.21 spid5s An error occurred during decryption. 2011-01-11 12:47:21.38 spid8s Starting up database 'model'. 2011-01-11 12:47:21.38 Server Error: 17182, Severity: 16, State: 1. 2011-01-11 12:47:21.38 Server TDSSNIClient initialization failed with error 0x5, status code 0x90. 2011-01-11 12:47:21.38 Server Error: 17182, Severity: 16, State: 1. 2011-01-11 12:47:21.38 Server TDSSNIClient initialization failed with error 0x5, status code 0x1. 2011-01-11 12:47:21.38 Server Error: 17826, Severity: 18, State: 3. 2011-01-11 12:47:21.38 Server Could not start the network library because of an internal error in the network library. To determine the cause, review the errors immediately preceding this one in the error log. 2011-01-11 12:47:21.38 Server Error: 17120, Severity: 16, State: 1. 
2011-01-11 12:47:21.38 Server SQL Server could not spawn FRunCM thread. Check the SQL Server error log and the Windows event logs for information about possible related problems. One lead I had was to change the SQL logon account from "Network Service" to "Local System". Unfortunately, that is resulting in the error message The Security ID Structure is Invalid [0x80070539] Any help either uninstalling or getting SQLExpress running would be fantastic.

    Read the article

  • BluRay audio/video stuttering with PowerDVD 11, WinDVD 11 Pro, etc? Xonar/Auzen HD audio option?

    - by jrista
    I recently upgraded my Windows 7 MediaCenter HTPC due to a motherboard failure (really old motherboard and cpu, it was on its last legs.) I chose to upgrade to an i5 system with everything built into the motherboard. I did my due diligence, researched, and found some hardware that was within my budget. I ended up with: Core i5 2500K (3.3Ghz) Corsair XMS3 2x2Gb DDR3 (4Gb) ASUS P8H 61-M LE/CSM MicroCenter 64Gb SSD (Previous BluRay player, forget the brand) The system is pretty awesome, and plays everything I have perfectly. I almost went with an Atom solution, however there have been numerous notes that they do not play NetFlix Instant Watch well...and I am a heavy Netflix IW user. High definition BluRay rips work well, although they usually contain lower audio quality than the BluRay's they were ripped from. The real problem I am encountering is playing back BluRay video from discs. For some reason, I am encountering rather terrible stuttering problems with both the audio and video. The stuttering is synchronous in both, and occurs at seemingly random intervals. I've used PowerDVD 9, PowerDVD 11 trial, and WinDVD 11 Pro trial. All three have stuttering problems, although PowerDVD 11 seems to have the least. Watching system resource usage, CPU load is never above 20%, and memory usage tends to be a constant 1/3rd the total available system memory. When playback is fine, its superb...the video is crystal clear. The audio quality is ok, certainly not what I would expect from a BluRay disc. I did some research, and it seems that playing BluRay from a PC causes a downsampling of the audio? I am curious if the audio is my primary problem here, the cause of the stuttering I am encountering? When stuttering occurs, the audio gets REALLY bad, while the video just pauses momentarily every second until for whatever reason everything picks up and runs fine (usually after a few seconds to a couple minutes.) The audio chipset is a Realtek HD ALC887 8-channel, supposedly designed to support BluRay playback. Has anyone encountered any issues like this playing back bluray discs on a PC (namely with PowerDVD...WinDVD was FAR worse, and seemed to have real trouble even reading the discs, and I have no interest in fiddling with it further.) Is there any reason to suspect the video decoding as the problem?(Given how bad the audio gets during a stutter, and how clean the video remains, I am inclined to think the issue boils down to audio.) Is it even remotely possible that the motherboard, cpu, or ram are causing the stuttering (all three are pretty blazing fast...faster than the hardware that I replaced, which seemed to play BluRay fine with PowerDVD 9.) I've read a bit about the Asus Xonar HDAV 1.3 and the Auzen X-Fi HomeTheater HD home theater hi-fi audio cards. Seems they are the only way to get true full-quality, uncompressed BluRay audio bitstreaming over HDMI on a PC. None of the usual suspects seem to have these cards in stock, however. Are these cards worth getting? Are they even still available, or have they been discontinued (if so, that would indeed be sad...they sound simply fantastic.)

    Read the article

  • Parallelism in .NET – Part 5, Partitioning of Work

    - by Reed
    When parallelizing any routine, we start by decomposing the problem.  Once the problem is understood, we need to break our work into separate tasks, so each task can be run on a different processing element.  This process is called partitioning. Partitioning our tasks is a challenging feat.  There are opposing forces at work here: too many partitions adds overhead, too few partitions leaves processors idle.  Trying to work the perfect balance between the two extremes is the goal for which we should aim.  Luckily, the Task Parallel Library automatically handles much of this process.  However, there are situations where the default partitioning may not be appropriate, and knowledge of our routines may allow us to guide the framework to making better decisions. First off, I’d like to say that this is a more advanced topic.  It is perfectly acceptable to use the parallel constructs in the framework without considering the partitioning taking place.  The default behavior in the Task Parallel Library is very well-behaved, even for unusual work loads, and should rarely be adjusted.  I have found few situations where the default partitioning behavior in the TPL is not as good or better than my own hand-written partitioning routines, and recommend using the defaults unless there is a strong, measured, and profiled reason to avoid using them.  However, understanding partitioning, and how the TPL partitions your data, helps in understanding the proper usage of the TPL. I indirectly mentioned partitioning while discussing aggregation.  Typically, our systems will have a limited number of Processing Elements (PE), which is the terminology used for hardware capable of processing a stream of instructions.  For example, in a standard Intel i7 system, there are four processor cores, each of which has two potential hardware threads due to Hyperthreading.  This gives us a total of 8 PEs – theoretically, we can have up to eight operations occurring concurrently within our system. In order to fully exploit this power, we need to partition our work into Tasks.  A task is a simple set of instructions that can be run on a PE.  Ideally, we want to have at least one task per PE in the system, since fewer tasks means that some of our processing power will be sitting idle.  A naive implementation would be to just take our data, and partition it with one element in our collection being treated as one task.  When we loop through our collection in parallel, using this approach, we’d just process one item at a time, then reuse that thread to process the next, etc.  There’s a flaw in this approach, however.  It will tend to be slower than necessary, often slower than processing the data serially. The problem is that there is overhead associated with each task.  When we take a simple foreach loop body and implement it using the TPL, we add overhead.  First, we change the body from a simple statement to a delegate, which must be invoked.  In order to invoke the delegate on a separate thread, the delegate gets added to the ThreadPool’s current work queue, and the ThreadPool must pull this off the queue, assign it to a free thread, then execute it.  If our collection had one million elements, the overhead of trying to spawn one million tasks would destroy our performance. The answer, here, is to partition our collection into groups, and have each group of elements treated as a single task.  
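    To make that overhead argument concrete, here is a rough C# sketch (my own illustration, not code from this article) contrasting one Task per element with a plain Parallel.ForEach over the same data. Process stands in for a trivial amount of per-element work, which is exactly the situation where per-task overhead dominates:

    // Sketch only (not from the article): contrast one Task per element with letting
    // Parallel.ForEach batch elements into chunks. With per-element work this small,
    // the first version drowns in scheduling overhead.
    using System;
    using System.Linq;
    using System.Threading.Tasks;

    class PartitioningOverheadDemo
    {
        static double Process(int item)
        {
            // Stand-in for a trivial amount of real per-element work.
            return Math.Sqrt(item);
        }

        static void Main()
        {
            int[] items = Enumerable.Range(0, 100000).ToArray();

            // Naive partitioning: one task per element, 100,000 tiny work items for the ThreadPool.
            var tasks = items.Select(i => Task.Factory.StartNew(() => Process(i))).ToArray();
            Task.WaitAll(tasks);

            // Default partitioning: the TPL groups elements into chunks and reuses threads.
            Parallel.ForEach(items, i => Process(i));
        }
    }

    With work this small, the first version spends most of its time creating, queuing, and scheduling tasks, while the second lets the scheduler amortize that cost across many elements per task.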
By adding a partitioning step, we can break our total work into small enough tasks to keep our processors busy, but large enough tasks to avoid overburdening the ThreadPool.  There are two clear, opposing goals here: Always try to keep each processor working, but also try to keep the individual partitions as large as possible. When using Parallel.For, the partitioning is always handled automatically.  At first, partitioning here seems simple.  A naive implementation would merely split the total element count up by the number of PEs in the system, and assign a chunk of data to each processor.  Many hand-written partitioning schemes work in this exactly manner.  This perfectly balanced, static partitioning scheme works very well if the amount of work is constant for each element.  However, this is rarely the case.  Often, the length of time required to process an element grows as we progress through the collection, especially if we’re doing numerical computations.  In this case, the first PEs will finish early, and sit idle waiting on the last chunks to finish.  Sometimes, work can decrease as we progress, since previous computations may be used to speed up later computations.  In this situation, the first chunks will be working far longer than the last chunks.  In order to balance the workload, many implementations create many small chunks, and reuse threads.  This adds overhead, but does provide better load balancing, which in turn improves performance. The Task Parallel Library handles this more elaborately.  Chunks are determined at runtime, and start small.  They grow slowly over time, getting larger and larger.  This tends to lead to a near optimum load balancing, even in odd cases such as increasing or decreasing workloads.  Parallel.ForEach is a bit more complicated, however. When working with a generic IEnumerable<T>, the number of items required for processing is not known in advance, and must be discovered at runtime.  In addition, since we don’t have direct access to each element, the scheduler must enumerate the collection to process it.  Since IEnumerable<T> is not thread safe, it must lock on elements as it enumerates, create temporary collections for each chunk to process, and schedule this out.  By default, it uses a partitioning method similar to the one described above.  We can see this directly by looking at the Visual Partitioning sample shipped by the Task Parallel Library team, and available as part of the Samples for Parallel Programming.  When we run the sample, with four cores and the default, Load Balancing partitioning scheme, we see this: The colored bands represent each processing core.  You can see that, when we started (at the top), we begin with very small bands of color.  As the routine progresses through the Parallel.ForEach, the chunks get larger and larger (seen by larger and larger stripes). Most of the time, this is fantastic behavior, and most likely will out perform any custom written partitioning.  However, if your routine is not scaling well, it may be due to a failure in the default partitioning to handle your specific case.  With prior knowledge about your work, it may be possible to partition data more meaningfully than the default Partitioner. There is the option to use an overload of Parallel.ForEach which takes a Partitioner<T> instance.  The Partitioner<T> class is an abstract class which allows for both static and dynamic partitioning.  
By overriding Partitioner<T>.SupportsDynamicPartitions, you can specify whether a dynamic approach is available.  If not, your custom Partitioner<T> subclass would override GetPartitions(int), which returns a list of IEnumerator<T> instances.  These are then used by the Parallel class to split work up amongst processors.  When dynamic partitioning is available, GetDynamicPartitions() is used, which returns an IEnumerable<T> for each partition.  If you do decide to implement your own Partitioner<T>, keep in mind the goals and tradeoffs of different partitioning strategies, and design appropriately. The Samples for Parallel Programming project includes a ChunkPartitioner class in the ParallelExtensionsExtras project.  This provides example code for implementing your own, custom allocation strategies, including a static allocator of a given chunk size.  Although implementing your own Partitioner<T> is possible, as I mentioned above, this is rarely required or useful in practice.  The default behavior of the TPL is very good, often better than any hand written partitioning strategy.
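    As a rough illustration of the static, fixed-size chunking idea discussed above, the framework's built-in Partitioner.Create overload that accepts a range size can be handed straight to Parallel.ForEach, so each task receives a whole range of indices rather than a single element. This sketch is mine (not code from the article or the ParallelExtensionsExtras sample), and the chunk size of 50,000 is an arbitrary example value:

    // Sketch only: hand Parallel.ForEach a range partitioner with a fixed chunk size.
    // Each Tuple<int, int> represents one [fromInclusive, toExclusive) range of indices.
    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    class RangePartitioningExample
    {
        static void Main()
        {
            double[] input = new double[1000000];
            double[] results = new double[input.Length];

            // Static chunking: every partition covers exactly 50,000 indices.
            var ranges = Partitioner.Create(0, input.Length, 50000);

            Parallel.ForEach(ranges, range =>
            {
                for (int i = range.Item1; i < range.Item2; i++)
                {
                    results[i] = Math.Sqrt(input[i] + i);   // stand-in for real per-element work
                }
            });

            Console.WriteLine("Processed {0} elements.", results.Length);
        }
    }

    Whether a fixed range size like this actually beats the default growing-chunk strategy is workload-dependent, which is exactly why measuring before replacing the defaults, as recommended above, matters.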

    Read the article

< Previous Page | 20 21 22 23 24 25 26 27  | Next Page >