Search Results

Search found 5637 results on 226 pages for 'triple slash comments'.

  • How can I learn to effectively write Pythonic code?

    - by Matt Fenwick
    I'm tired of getting downvoted and/or semi-rude comments on my Python answers, saying things like "this isn't Pythonic" or "that's not the Python way of doing things". To clarify, I'm not tired of getting corrected and downvoted, and I'm not tired of being wrong: I'm tired of feeling like there's a whole field of Python that I know nothing about, and seems to be implicit knowledge of experienced Python programmers. Doing a google search for "Pythonic" reveals a wide range of interpretations. The wikipedia page says: A common neologism in the Python community is pythonic, which can have a wide range of meanings related to program style. To say that code is pythonic is to say that it uses Python idioms well, that it is natural or shows fluency in the language. Likewise, to say of an interface or language feature that it is pythonic is to say that it works well with Python idioms, that its use meshes well with the rest of the language. It also discusses the term "unpythonic": In contrast, a mark of unpythonic code is that it attempts to write C++ (or Lisp, Perl, or Java) code in Python—that is, provides a rough transcription rather than an idiomatic translation of forms from another language. The concept of pythonicity is tightly bound to Python's minimalist philosophy of readability and avoiding the "there's more than one way to do it" approach. Unreadable code or incomprehensible idioms are unpythonic. I suspect one way to learn the Pythonic way is just to program in Python a whole bunch. But I bet I could write a bunch of crap and not improve that much without some guidance, whereas a good resource might speed up the learning process significantly. PEP 8 might be exactly what I'm looking for, or maybe not. I'm not sure; on the one hand it covers a lot of ground, but on the other hand, I feel like it's more suited as a reference for knowledgeable programmers than a tutorial for fresh 'uns. How do I get my foot in the Pythonic/Python way of doing things door?

    Read the article

  • FY13 Partner Kickoff Kicks Off Summer Right

    - by Kristin Rose
    This summer’s blockbuster movie lineup is far from disappointing – from The Avengers to Prometheus and The Dark Knight Rises, there is no shortage of ‘cling-to-your-seat’ entertainment in store, not to mention buttery popcorn fingers. With all this big-screen action taking place, Oracle wanted to take part in some big premieres of its own, which is why we are happy to announce that our FY13 Partner Kickoff event is taking place June 26th. This year we are welcoming several partners from around the globe in person to Oracle’s Headquarters, as well as another 22,000 partners tuning in to help us kick off FY13. Hosted by Judson Althoff, SVP of WWA&C, the Oracle PartnerNetwork FY13 Kickoff is being held live — five times throughout the day — and will include a special message for each region. Have a look at the schedule of shows below:
    EMEA Kickoff – Tuesday, June 26 @ 2:00 pm BST (London)
    LAD Kickoff – Tuesday, June 26 @ 4:00 pm UTC (São Paulo)
    North America Kickoff – Tuesday, June 26 @ 8:30 am PT (San Francisco)
    Japan Kickoff – Wednesday, June 27 @ 10:00 am JST (Tokyo)
    Asia Pacific Kickoff – Wednesday, June 27 @ 8:30 am IST (Bangalore) / 11:00 am SGT (Singapore) / 1:00 pm AEST (Sydney)
    Partners near and far will be able to get a first-row seat to some exciting Oracle announcements, keynotes, round-tables and a live after-show event hosted by Nick Kritikos, VP of Partner Enablement. Did we mention there is an exciting online component which will allow partners to send in questions or comments and get them answered in real time? Now that deserves two thumbs up! So whether you’re partial to Milk Duds or Junior Mints, grab a box of your favorite candy and sign up for this strategy-driven, partner-focused blockbuster event. To get a sneak peek at what’s in store, watch this short PKO “trailer” below, starring our very own GVP of WWA&C, Lydia Smyers, to find out more.
    To the Depths and Back,
    The OPN Communications Team

    Read the article

  • Attend my Fusion sessions

    - by Daniel Moth
    The inaugural Fusion conference was 1 year ago in June 2011 and I was there doing a demo in the keynote, and also presenting a breakout session. If you look at the abstract and title for that session you won't see the term "C++ AMP" in there because the technology wasn't announced and we didn't want to spill the beans ahead of the keynote, where the technology was announced. It was only an announcement, we did not give any bits out, and in fact the first bits came three months later in September 2011 with the Beta following in February 2012. So it really feels great 1 year later, to be back at Fusion presenting two sessions on C++ AMP, demonstrating our progress from that announcement, to the Visual Studio 2012 Release Candidate that came out last week. If you are attending Fusion (in person or virtually later), be sure to watch my two-part session. Part 1 is PT-3601 on Tuesday 4pm and part 2 is PT-3602 on Wednesday 4pm. Here is the shared abstract for both parts: Harnessing GPU Compute with C++ AMP C++ AMP is an open specification for taking advantage of accelerators like the GPU. In this session we will explore the C++ AMP implementation in Microsoft Visual Studio 2012. After a quick overview of the technology understanding its goals and its differentiation compared with other approaches, we will dive into the programming model and its modern C++ API. This is a code heavy, interactive, two-part session, where every part of the library will be explained. Demos will include showing off the richest parallel and GPU debugging story on the market, in the upcoming Visual Studio release. See you there! Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • String Format for DateTime in C#

    - by SAMIR BHOGAYTA
    String Format for DateTime [C#] This example shows how to format DateTime using the String.Format method. All of the formatting can also be done using the DateTime.ToString method.
    Custom DateTime Formatting
    The following custom format specifiers are available: y (year), M (month), d (day), h (hour 12), H (hour 24), m (minute), s (second), f (second fraction), F (second fraction, trailing zeroes are trimmed), t (A.M. or P.M.) and z (time zone). The following examples demonstrate how the format specifiers are rendered in the output.
    [C#]
    // create date time 2008-03-09 16:05:07.123
    DateTime dt = new DateTime(2008, 3, 9, 16, 5, 7, 123);
    String.Format("{0:y yy yyy yyyy}", dt); // "8 08 008 2008" year
    String.Format("{0:M MM MMM MMMM}", dt); // "3 03 Mar March" month
    String.Format("{0:d dd ddd dddd}", dt); // "9 09 Sun Sunday" day
    String.Format("{0:h hh H HH}", dt); // "4 04 16 16" hour 12/24
    String.Format("{0:m mm}", dt); // "5 05" minute
    String.Format("{0:s ss}", dt); // "7 07" second
    String.Format("{0:f ff fff ffff}", dt); // "1 12 123 1230" sec. fraction
    String.Format("{0:F FF FFF FFFF}", dt); // "1 12 123 123" without zeroes
    String.Format("{0:t tt}", dt); // "P PM" A.M. or P.M.
    String.Format("{0:z zz zzz}", dt); // "-6 -06 -06:00" time zone
    You can also use the date separator / (slash) and the time separator : (colon). These characters will be rewritten to the characters defined in the current DateTimeFormatInfo.DateSeparator and DateTimeFormatInfo.TimeSeparator.
    [C#]
    // date separator in German culture is "." (so "/" changes to ".")
    String.Format("{0:d/M/yyyy HH:mm:ss}", dt); // "9/3/2008 16:05:07" - English (en-US)
    String.Format("{0:d/M/yyyy HH:mm:ss}", dt); // "9.3.2008 16:05:07" - German (de-DE)
    Here are some examples of custom date and time formatting:
    [C#]
    // month/day numbers without/with leading zeroes
    String.Format("{0:M/d/yyyy}", dt); // "3/9/2008"
    String.Format("{0:MM/dd/yyyy}", dt); // "03/09/2008"
    // day/month names
    String.Format("{0:ddd, MMM d, yyyy}", dt); // "Sun, Mar 9, 2008"
    String.Format("{0:dddd, MMMM d, yyyy}", dt); // "Sunday, March 9, 2008"
    // two/four digit year
    String.Format("{0:MM/dd/yy}", dt); // "03/09/08"
    String.Format("{0:MM/dd/yyyy}", dt); // "03/09/2008"
    Standard DateTime Formatting
    In DateTimeFormatInfo there are standard patterns defined for the current culture. For example, the ShortTimePattern property is a string that contains the value h:mm tt for the en-US culture and HH:mm for the de-DE culture. The following table shows the patterns defined in DateTimeFormatInfo and their values for the en-US culture. The first column contains the format specifiers for the String.Format method.
    Specifier - DateTimeFormatInfo property - Pattern value (for en-US culture)
    t - ShortTimePattern - h:mm tt
    d - ShortDatePattern - M/d/yyyy
    T - LongTimePattern - h:mm:ss tt
    D - LongDatePattern - dddd, MMMM dd, yyyy
    f - (combination of D and t) - dddd, MMMM dd, yyyy h:mm tt
    F - FullDateTimePattern - dddd, MMMM dd, yyyy h:mm:ss tt
    g - (combination of d and t) - M/d/yyyy h:mm tt
    G - (combination of d and T) - M/d/yyyy h:mm:ss tt
    m, M - MonthDayPattern - MMMM dd
    y, Y - YearMonthPattern - MMMM, yyyy
    r, R - RFC1123Pattern - ddd, dd MMM yyyy HH':'mm':'ss 'GMT' (*)
    s - SortableDateTimePattern - yyyy'-'MM'-'dd'T'HH':'mm':'ss (*)
    u - UniversalSortableDateTimePattern - yyyy'-'MM'-'dd HH':'mm':'ss'Z' (*)
    (*) = culture independent
    The following examples show the usage of the standard format specifiers in the String.Format method and the resulting output.
    [C#]
    String.Format("{0:t}", dt); // "4:05 PM" ShortTime
    String.Format("{0:d}", dt); // "3/9/2008" ShortDate
    String.Format("{0:T}", dt); // "4:05:07 PM" LongTime
    String.Format("{0:D}", dt); // "Sunday, March 09, 2008" LongDate
    String.Format("{0:f}", dt); // "Sunday, March 09, 2008 4:05 PM" LongDate+ShortTime
    String.Format("{0:F}", dt); // "Sunday, March 09, 2008 4:05:07 PM" FullDateTime
    String.Format("{0:g}", dt); // "3/9/2008 4:05 PM" ShortDate+ShortTime
    String.Format("{0:G}", dt); // "3/9/2008 4:05:07 PM" ShortDate+LongTime
    String.Format("{0:m}", dt); // "March 09" MonthDay
    String.Format("{0:y}", dt); // "March, 2008" YearMonth
    String.Format("{0:r}", dt); // "Sun, 09 Mar 2008 16:05:07 GMT" RFC1123
    String.Format("{0:s}", dt); // "2008-03-09T16:05:07" SortableDateTime
    String.Format("{0:u}", dt); // "2008-03-09 16:05:07Z" UniversalSortableDateTime
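
    A minimal sketch (not from the original article) showing the same formatting done with DateTime.ToString and with an explicit CultureInfo, so that the "/" and ":" substitution described above does not depend on the machine's current culture; the expected outputs assume the same dt value as above:
    using System;
    using System.Globalization;

    class DateTimeFormatDemo
    {
        static void Main()
        {
            DateTime dt = new DateTime(2008, 3, 9, 16, 5, 7, 123);

            // DateTime.ToString accepts the same custom and standard patterns as String.Format.
            Console.WriteLine(dt.ToString("MM/dd/yyyy HH:mm:ss"));   // "03/09/2008 16:05:07" in en-US
            Console.WriteLine(dt.ToString("D"));                     // long date pattern of the current culture

            // Passing a CultureInfo pins the culture-specific separators and names.
            CultureInfo en = CultureInfo.GetCultureInfo("en-US");
            CultureInfo de = CultureInfo.GetCultureInfo("de-DE");
            Console.WriteLine(dt.ToString("d/M/yyyy HH:mm:ss", en)); // "9/3/2008 16:05:07"
            Console.WriteLine(dt.ToString("d/M/yyyy HH:mm:ss", de)); // "9.3.2008 16:05:07"
            Console.WriteLine(String.Format(de, "{0:D}", dt));       // German long date, e.g. "Sonntag, 9. März 2008"
        }
    }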

    Read the article

  • Electric Dreams: Picking Out a Vintage 1980s Computer [Video]

    - by Jason Fitzpatrick
    What if you had to pick out a 1980s-era computer for use in your home today? The BBC show Electric Dreams walks us through the history with a “time traveling” family. Electric Dreams is a show based on the novel premise that an average British family is starting, technologically speaking, in the 1970s and progressing over a month to the year 2000 – restricted each step of the way to using technology available only in the era they are emulating. In the above video clip they’ve reached 1982 and visit the National Museum of Computing to pick out a vintage computer. It’s interesting to see the kids interact with the computer and experience programming for, presumably, the first time. Have a vintage computer memory? (Mine is programming on a Timex Sinclair.) Let’s hear about it in the comments. Electric Dreams – The 1980s ‘The Micro Home Computer Of 1982′ [via O'Reilly Radar]

    Read the article

  • HTML5Rocks Live, Episode 1

    HTML5Rocks Live, Episode 1 In this episode of HTML5Rocks Live, Boris, Eric and Paul join us to show some great new libraries and performance tips. Please leave your comments on our plus page at goo.gl. In the first chapter, Paul shows how to use some of Chrome's new developer tools to understand how things are rendering and get improved performance. In the second chapter (21:25), Boris shows off his new device.js library to help make development of mobile web applications and sites easier. Eric closes the hangout (40:00) and talks about his new file system API polyfill that uses IndexedDB as its back end.
    02:15 - Scroll Effects Demo goo.gl
    23:04 - Media Queries Site goo.gl
    24:15 - WURFL goo.gl
    26:40 - Boris' Device Library goo.gl
    29:28 - Device.js Demo goo.gl
    33:25 - Bug to add touch-enabled media query to Chrome, please star goo.gl
    35:00 - Chrome's DevTools for Mobile Development
    38:56 - Paul Irish's Touch Demos goo.gl
    40:43 - File System API Book goo.gl
    43:10 - Eric's idb.filesystem.js goo.gl
    44:27 - idb.filesystem HTML5 File System Demo goo.gl
    47:33 - HTML5 Filesystem Playground goo.gl
    From: GoogleDevelopers Views: 12239 221 ratings Time: 52:29

    Read the article

  • Alice In Wonderland: Good, but not Great

    - by Theo Moore
    We went to see Alice In Wonderland today. We both like Tim Burton a lot (the stranger the better) and like Johnny Depp very well also. After seeing all the previews and such, we were fired up to see this film. Honestly, I thought it was good but not great. I was prepared to be wowed, but I wasn't. Perhaps I expected too much. I did like it, but I'll not own it nor would I expect to see it again...unless someone I know decides they want to see it. I was about to say something to reassure you that I wasn't going to provide any spoilers but two things occurred to me: one, I never give spoilers and two, why worry about spoilers for a film that so closely follows a book? My comments about the film are hard to describe, but the basic gist is that it doesn't really feel like it..."works" to me. I can't get any more specific than that, much as I'd like to do so. Something about it seems sort of disjointed and not in that Alice way you'd expect. My only specific comment is that I didn't like the actor who plays Alice very well. She was very flat and just didn't sell her character to me. She seemed a bit, well, plastic. Depp was as good as you'd expect him to be, I am happy to say. Obviously Lewis Carroll couldn't have imagined this made into a film, but I can't help thinking that he'd see this and say that Depp was the perfect Mad Hatter. So, I'd definitely recommend seeing it (we saw it in 3D, which was cool but not really necessary) at least once, but don't be surprised if you're kinda meh afterwards.

    Read the article

  • Crowdsourcing MVVM Light Toolkit support

    - by Laurent Bugnion
    Considering the number of emails that are sent to me asking for support for the MVVM Light Toolkit, I find myself unable to answer all of them in sufficient time to make me feel good. In consequence, I started to send the following message in response to support queries, either by email or on the MVVM Light Codeplex discussion page. Hi, I am doing my best to answer all the questions as fast as possible. I receive a lot of them, however, and cannot reply to everyone fast enough to make me happy. Due to this, I would like to encourage you to post your question on StackOverflow, and tag it with the tag mvvm-light. StackOverflow is an awesome site where tons of developers help others with their technical questions. http://stackoverflow.com/questions/tagged/mvvm-light I will monitor this tag on the StackOverflow website and do my best to answer questions. The advantage of StackOverflow over the Codeplex discussion is the sheer number of qualified developers able to help you with your questions, the visibility of the question itself, and the whole StackOverflow infrastructure (reputation, up- or down-votes, comments, etc.) Thanks! Laurent
    Bug reports
    Regarding bug reports, feel free to continue to send them to the Codeplex site (preferred), or to me directly. I hope that this will help all support queries to be answered faster, and with the great quality for which the StackOverflow users are known!   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Bulletin Board System with tagging, email notification

    - by user678220
    I am looking for a nice BBS (Bulletin Board System), discussion board, or in-company communication platform. About 30 people are joining our project, and we would like to share ideas with each other on that platform. We can post questions and concerns related to the project and respond to each other. Here is the list of functionality I want:
    - Thread tagging, e.g. Announcement, Finance, Legal, Idea. One thread can have multiple tags.
    - Members can turn email notifications on or off for new comments, and they can set this per tag. E.g. one member receives email related to "Announcement" but not "Finance".
    - A thread's owner can change its tags at any time.
    - Threads can be of several types. A thread can be a "vote" thread, where everyone can vote their opinion. A thread can be an "action plan" thread, where who will do what is recorded in the thread. By viewing all "action plan" threads, all the action plans needed in the company are visible.

    Read the article

  • Agile Testing Days 2012 – Day 1 – The birth of the #unicorn…

    - by Chris George
    Still riding the high from the tutorial day, I arrived at the conference venue eager to get cracking with the day’s talks. The opening keynote was “Disciplined Agile Delivery: The Foundation for Scaling Agile” presented by Scott Ambler. The general ideas behind the methodology, such as not re-inventing the wheel and being goal-driven rather than prescriptive in how you work, certainly struck a chord with how we are trying to work in my team. Scott made some interesting observations about how Scrum is quite prescriptive, and whether that is really agile. I agreed with quite a few of his points on how what works for one team may not work for another. How a team works should be driven by context and reflection, not process and prescription. I was somewhat dubious, though, about some of the statistics he rolled out towards the end. However, out of this keynote was born something that was to transcend this one presentation. During the talk, Scott mentioned on more than one occasion “In the real world”, and at one point made reference to people living in the land of unicorns and rainbows. The challenge was then laid down on Twitter for all speakers to include a unicorn in their presentations… and for the most part this happened! It became an identity for this year’s conference, and I’m sure something that any attendee will always associate with Agile Testing Days 2012! Following this keynote, I attended “Going agile with Automated GUI Testing – Some personal insights” by Jan Zdunek from codecentric on the vendor track. My speciality is test automation, and in particular GUI testing, so this drew me to this talk more than the others. Thankfully, it was made clear from the very start that this was not peddling any particular product (even though it was on the vendor track), and Jan faithfully stuck to that. Most of the content was not new to me, but it was really comforting to hear someone else with very similar experiences to my own. In particular, things like how GUI testing is hard and is not a silver bullet; how record & replay is NOT a good thing to do (which drew a somewhat inflammatory tweet from an automation company when I tweeted that!). Something that I have started hearing around the place, and that has certainly been murmured about at work, is to push more of the automation coding onto the developers. After all, they are the coding experts. I agree with this to a degree, but I personally enjoy coding and find it very rewarding, so I’d be reluctant to give it up. I think there are some better alternatives, such as pairing with a developer. Lastly, Jan mentioned, almost in passing, that we should consider virtualisation for GUI testing to cover configuration combinations. On my project we’ve been running our win32/.NET GUI tests in cloud virtualisation for a couple of years now… I really should write about that! After lunch the second keynote of the day was by Lisa Crispin and Janet Gregory, “Myths about Agile Testing, De-Bunked”. It started off well… with the two ladies donning Medusa-style headbands whilst they debunked several myths about agile testing! I got the impression that it was perhaps not as slick as they would have liked, but then Janet was suffering with a very sore throat so kept losing her voice. Nevertheless, the presentation was captivating, and they debunked several myths such as: “Testing is dead”, “Testers must write code”, “Agile teams always deliver faster”.
    I didn’t take many notes for this because it was being recorded, but unfortunately the recordings have not been posted yet, so I’ll write more about this when they are. The TestLab was held during a somewhat free-for-all slot during most of the afternoon. It looked intriguing and proved to be one of the surprising experiences of the conference for me. Run by James Lyndsay and Bart Knaack, it consisted of a number of ‘stations’ that offered different testing problems. I opted for testing a mathematical drawing app called GeoGebra, the task being to pair up and exploratory-test it. After an allotted time, we discussed issues we’d found and decided if we wanted to continue ‘playing’, to which we all agreed! It was fun! The last track talk of the day was “Developers Exploratory Testing – Raising the bar” by Sigge Birgisson. One of the teams at Red Gate has tried Dev or Team exploratory testing a couple of times, and I was really interested to go to the presentation that prompted that. I was not disappointed! Sigge gave a first-class presentation, and not only explained what DET was all about, but also how to go about implementing it. Little tips like calling it a ‘workshop’ rather than ‘testing’ I can really see working! Monday evening saw the presentation of the award for the Most Influential Agile Testing Professional Person go to a much-deserving Lisa Crispin. The evening was great, with acrobatics, magic and music. My Takeaway Triple from Day 1:
    - Some of the cool stuff that was suggested in the GUI Testing talk, we are already doing. I should write about that!
    - Testing is not dead! Perhaps testing will become more of a skill than a specific role, but it is certainly not dead.
    - Team/Developer exploratory testing… seems like a no-brainer assuming you have a team who is willing.
    Day 2 – Coming soon…

    Read the article

  • BUILD apps that use C++ AMP

    - by Daniel Moth
    If you are a developer on the Microsoft platform, you are hopefully attending (live or virtually) the sessions of the BUILD conference, aka //build/ in Anaheim, CA. The conference sold out not long after it opened registration, and it achieved that without sharing *any* session details nor a meaningful agenda up until after the keynote today – impressive! I am speaking at BUILD and hope you'll catch my talk at 9am on Friday (the last day of the conference) at Marriott Elite 2 Ballroom. Session details follow. 802 - Taming GPU compute with C++ AMP Developers today inject parallelism into their compute-intensive applications in order to take advantage of multi-core CPU hardware. Beyond CPUs, however, compute accelerators such as general-purpose GPUs can provide orders of magnitude speed-ups for data parallel algorithms. How can you as a C++ developer fully utilize this heterogeneous hardware from your Visual Studio environment?  How can you benefit from this tremendous performance boost in your Visual C++ solutions without sacrificing developer productivity?  The answers will be presented in this session about C++ Accelerated Massive Parallelism. I'll be covering a lot of the material I've been recently blogging about on my blog that you are reading, which I have also indexed over on our team blog under the title: "C++ AMP in a nutshell". Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #033

    - by Pinal Dave
    Here is a list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each one. Let me know which of the following is your favorite article from memory lane.
    2007
    Spatial Database Definition and Research Documents
    Here is the definition of a spatial database from Wikipedia: A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types.
    Select Only Date Part From DateTime – Best Practice
    A very common question I receive is how to get only the Date or Time part from a datetime value. In this blog post I explain the same in very simple words.
    T-SQL Paging Query Technique Comparison (OVER and ROW_NUMBER()) – CTE vs. Derived Table
    I have received a few emails and comments about my post SQL SERVER – T-SQL Paging Query Technique Comparison – SQL 2000 vs SQL 2005. The main question was: can this be done using a CTE? Absolutely! What about performance? It is identical! Please refer to the above-mentioned article for the history of paging.
    SQL SERVER – Cannot resolve collation conflict for equal to operation
    One of the very first errors I ever encountered in my career was resolving this conflict. I have blogged about it, and I have realized that many others like me are facing this error.
    LEN and DATALENGTH of NULL Simple Example
    Here is a question for you: what is the LEN of a NULL value? Well, it is very easy – just read the blog.
    Recovery Models and Selection
    A very simple and easy explanation of the database backup recovery models and how to select the best option for you.
    Explanation SQL SERVER Hash Join
    A hash join gives the best performance when two or more tables are joined and at least one of them has no index or is not sorted. It is also expected that the smaller of the two tables can be read into memory completely (though this is not necessary).
    Easy Sequence of SELECT FROM JOIN WHERE GROUP BY HAVING ORDER BY
    SELECT yourcolumns FROM tablenames JOIN tablenames WHERE condition GROUP BY yourcolumns HAVING aggregatecolumn condition ORDER BY yourcolumns
    NorthWind Database or AdventureWorks Database – Samples Databases
    In this blog post we learn how to install the Northwind database. I also shared the source where one can download this database, as it is used in many examples in the MSDN help files.
    sp_HelpText for sp_HelpText – Puzzle
    A simple quick puzzle – do you know the answer to it? If not, go ahead and read the blog.
    2008
    SQL SERVER – 2008 – Step By Step Installation Guide With Images
    When SQL Server 2008 was newly introduced, lots of people had no clue how to install SQL Server 2008, and the number of questions I used to receive was huge. I wrote this blog post with the spirit that it would help all the newbies install SQL Server 2008 with the help of images. To this day this blog post has been the bible for all of the people who are confused by SQL Server installation.
    Inline Variable Assignment
    I loved this feature. I have always wanted this feature to be present in SQL Server. The last time I met developers from the Microsoft SQL Server team, I had talked about this feature. I think this feature saves some time and makes the code more readable.
    Introduction to Policy Management – Enforcing Rules on SQL Server
    If our company policy is to create all stored procedures with the prefix ‘usp’, then developers should simply be prevented from creating stored procedures with any other prefix. Let us see a small tutorial on how to create conditions and a policy which will prevent any future SP from being created with any other prefix.
    2009
    Performance Counters from System Views – By Kevin Mckenna
    Many of you are not aware that access to performance information is readily available in SQL Server, and that too without querying performance counters using a custom application or via perfmon. Till now this fact has remained undisclosed, but through this post I would like to explain how you can easily access SQL Server performance counter information. Without putting in much effort you will come across the system view sys.dm_os_performance_counters. As the name suggests, this provides you easy access to the SQL Server performance counter information that is passed on to perfmon, but you can get at it via T-SQL.
    Customize Toolbar – Remove Debug Button from Toolbar
    I was fond of the SQL Server Debugger feature in SQL Server 2000. To my utter disappointment, this feature was withdrawn from SQL Server 2005. The button of the debugger is similar to a play button and is used to run debugging commands of Visual Studio. For this reason, it gets very infuriating for developers when they are developing in both Visual Studio and SSMS. Let us now see how we can remove the debugging button from SQL Server Management Studio.
    Effect of Normalization on Index and Performance
    A very interesting conversation which started on Twitter. If you want to read one link, this is the link I encourage you to read.
    SSMS Feature – Multi-server Queries
    Using SQL Server Management Studio (SSMS), DBAs can now query multiple servers from one window. It is quite common for DBAs with a large number of servers to maintain to gather information from multiple SQL Servers and create reports. This feature is a blessing for DBAs, as they can now assemble all the information instantaneously without going anywhere.
    Query Optimizer Hint ROBUST PLAN – Question to You
    “ROBUST PLAN” is a kind of query hint which works quite differently than other hints. It does not improve joins or force the use of any indexes; it just makes sure that a query does not crash due to a row exceeding the size limit. Let me elaborate upon it in the blog post.
    2010
    Do you really know the difference between the various date functions available in SQL Server 2012? Here is a three-part story where we explored the same with examples:
    Fastest Way to Restore the Database
    Difference Between DATETIME and DATETIME2
    Difference Between DATETIME and DATETIME2 – WITH GETDATE
    Shrinking NDF and MDF Files – Readers’ Opinion
    Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking Database is bad and evil, here I am saying it first and loud. Now, the comment from Imran is written while keeping in mind only the process of showing how the Shrink Database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran’s comments, as there are a few corrections.
    2011
    Solution – Puzzle – SELECT * vs SELECT COUNT(*)
    This is a very interesting question, and I am very confident that not everyone knows the answer to it. Let me ask you again – which will be faster, SELECT * or SELECT COUNT(*), or do you think this is an apples-and-oranges comparison?
    2012
    Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage
    In SQL Server 2012 there are a few enhancements with regards to SQL Server Resource Governor. One of the enhancements is how the resources are allocated. Let me explain with examples. Let us understand the entire discussion with the help of three different examples.
    Finding Size of a Columnstore Index Using DMVs
    A very common question I often see is the need for a list of columnstore indexes along with their sizes and corresponding table names. I quickly re-wrote a script using the DMVs sys.indexes and sys.dm_db_partition_stats. This script gives the size of the columnstore index on disk only. I am sure there will be more advanced scripts to retrieve details related to the components associated with the columnstore index. However, I believe the following script is sufficient to start getting an idea of columnstore index size.
    Developer Training Resources and Summary Roundup
    Developer Training - Importance and Significance - Part 1
    In this part we discussed the importance of training in the real world. The most important and valuable resource any company has is its employees. Employees who have been well-trained will be better at their jobs and produce a better product. An employee who is well trained obviously knows more about their job and all the technical aspects. I have a very high opinion of training employees, and it is the most important task.
    Developer Training – Employee Morals and Ethics – Part 2
    In this part we discussed the most crucial components of training. Often employees expect the company to pay for their training while the company expresses no interest in training the employee. Quite often training expenses are the real issue for both the employee and employer.
    Developer Training – Difficult Questions and Alternative Perspective - Part 3
    This part was the most difficult to write, as I tried to address a few difficult questions and answers. Training is such a sensitive issue that many developers, when not given a chance for training, think about leaving the organization.
    Developer Training – Various Options for Developer Training – Part 4
    In this part I tried to explore a few methods and options for training. The generic feedback I received on this blog post was that it was short and I should have explored each of the training subjects in detail. I believe there are two big buckets of training: 1) instructor-led training and 2) self-led training.
    Developer Training – A Conclusive Summary - Part 5
    There is no better motivation than a personal desire to learn new technology. Honestly, there is nothing more personal than learning. “Change is the only constant” and “adapt & overcome” are the essential lessons of life. One cannot stop learning or resist change. In the IT industry, the “ego of knowing it all” and the “resistance to change” are the most challenging issues.
    A Quick Look at Logging and Ideas around Logging
    Question: What is the first thing that comes to your mind when you hear the word “Logging”? Strangely enough, I got a different answer every single time. Let me just list the answers I got from my friends. Let us go over them one by one.
    Beginning Performance Tuning with SQL Server Execution Plan
    Solution of Puzzle – Swap Value of Column Without Case Statement
    Earlier this week I asked how to swap the values of a column without using a CASE statement. Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I have proposed 3 different solutions in the blog post itself. I had requested the help of the community to come up with alternate solutions, and honestly I am stunned and amazed by the qualified entries.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Minidlna Directory Issues

    - by Somnambulist
    I've done my searching and can't find an answer to THIS specific issue. I have my minidlna set up and running - but it's not really done properly. First off, when I open the server on my bluray player, all of my movies are listed twice - when they are certainly not saved on my external twice. Second, when I open the server - rather than reading "Movies" "TV" "Music", etc - It just mashes all of my movies, tv, and some other folders all together with no real organization. I never had this problem when I had my Windows set up, so I know it's something configured improperly more-so than my external drive giving me gruff. Here's my minidlna.conf file: # This is the configuration file for the MiniDLNA daemon, a DLNA/UPnP-AV media # server. # # Unless otherwise noted, the commented out options show their default value. # # On Debian, you can also refer to the minidlna.conf(5) man page for # documentation about this file. media_dir=/media/somnambulist/Ghost In You # This option can be specified more than once if you want multiple directories # scanned. # # If you want to restrict a media_dir to a specific content type, you can # prepend the directory name with a letter representing the type (A, P or V), # followed by a comma, as so: # * "A" for audio (eg. media_dir=A,/var/lib/minidlna/music) # * "P" for pictures (eg. media_dir=P,/var/lib/minidlna/pictures) # * "V" for video (eg. media_dir=V,/var/lib/minidlna/videos) # # WARNING: After changing this option, you need to rebuild the database. Either # run minidlna with the '-R' option, or delete the 'files.db' file # from the db_dir directory (see below). # On Debian, you can run, as root, 'service minidlna force-reload' instead. #media_dir=/var/lib/minidlna media_dir=V,/media/somnambulist/Ghost In You/Movies media_dir=V,/media/somnambulist/Ghost In You/TV media_dir=P,/home/somnambulist/Pictures # Path to the directory that should hold the database and album art cache. db_dir=/home/somnambulist/serverart # Path to the directory that should hold the log file. log_dir=/home/somnambulist/serverlog # Minimum level of importance of messages to be logged. # Must be one of "off", "fatal", "error", "warn", "info" or "debug". # "off" turns of logging entirely, "fatal" is the highest level of importance # and "debug" the lowest. #log_level=warn # Use a different container as the root of the directory tree presented to # clients. The possible values are: # * "." - standard container # * "B" - "Browse Directory" # * "M" - "Music" # * "P" - "Pictures" # * "V" - "Video" # if you specify "B" and client device is audio-only then "Music/Folders" will be used as root root_container=B # Network interface(s) to bind to (e.g. eth0), comma delimited. #network_interface= # IPv4 address to listen on (e.g. 192.0.2.1). #listening_ip= # Port number for HTTP traffic (descriptions, SOAP, media transfer). port=8200 # URL presented to clients. # The default is the IP address of the server on port 80. #presentation_url=http://example.com:80 # Name that the DLNA server presents to clients. friendly_name=Somnambulist Media Server # Serial number the server reports to clients. serial=12345678 # Model name the server reports to clients. #model_name=Windows Media Connect compatible (MiniDLNA) # Model number the server reports to clients. model_number=1 # Automatic discovery of new files in the media_dir directory. #inotify=yes # List of file names to look for when searching for album art. Names should be # delimited with a forward slash ("/"). 
album_art_names=Cover.jpg/cover.jpg/AlbumArtSmall.jpg/albumartsmall.jpg/AlbumArt.jpg/albumart.jpg/Album.jpg/album.jpg/Folder.jpg/folder.jpg/Thumb.jpg/thumb.jpg # Strictly adhere to DLNA standards. # This allows server-side downscaling of very large JPEG images, which may # decrease JPEG serving performance on (at least) Sony DLNA products. #strict_dlna=no # Support for streaming .jpg and .mp3 files to a TiVo supporting HMO. #enable_tivo=no # Notify interval, in seconds. #notify_interval=895 # Path to the MiniSSDPd socket, for MiniSSDPd support. #minissdpdsocket=/run/minissdpd.sock` And here's the error I get in terminal when I run: sudo service minidlna restart sudo service minidlna force-reload Force restart error: Restarting DLNA/UPnP-AV media server minidlna [2013/08/12 21:19:27] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/Movies" not accessible! [Permission denied] [2013/08/12 21:19:27] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/TV" not accessible! [Permission denied] Force-reload error: Restarting DLNA/UPnP-AV media server minidlna [2013/08/12 21:19:46] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/Movies" not accessible! [Permission denied] [2013/08/12 21:19:46] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/TV" not accessible! [Permission denied] rm: cannot remove ‘/home/somnambulist/serverart/files.db’: Permission denied rm: cannot remove ‘/home/somnambulist/serverart/art_cache/media/somnambulist/Ghost In You/Movies/Slumdog Millionaire/Slumdog.Millionaire.Cover.jpg’: Permission denied rm: cannot remove ‘/home/somnambulist/serverart/art_cache/media/somnambulist/Ghost In You/Movies/Zack and Miri Make a Porno/ZackAndMiriMakeAPornoCover.jpg’: Permission denied [2013/08/12 21:19:46] minidlna.c:744: warn: Failed to clean old file cache. [ OK ] I've spent hours on this at this point, read through various files - and even had a friend who is relatively Ubuntu-savvy try to help me via chat - no such luck. Thanks in advance for any help.

    Read the article

  • Lightning talk: Coderetreat

    - by Michael Williamson
    In the spirit of trying to encourage more deliberate practice amongst coders in Red Gate, Lauri Pesonen had the idea of running a coderetreat in Red Gate. Lauri and I ran the first one a few weeks ago: given that neither of us had even been to a coderetreat before, let alone run one, I think it turned out quite well. The participants gave positive feedback, saying that they enjoyed the day, wrote some thought-provoking code and would do it again. Sam Blackburn was one of the attendees, and gave a lightning talk to the other developers in one of our regular lightning talk sessions: In case you can’t watch the video, I’ve transcribed the talk below, although I’d recommend watching the video if you can — I didn’t have much time to do the transcribing! So, what is a coderetreat? So it’s not just something in Red Gate, there’s a website and everything, although it’s not a very big website. It calls itself a community network. The basic ideas behind coderetreat are: you’ve got one day, and you split it into one-hour sections. You spend three quarters of that hour coding, and do a little retrospective at the end. You’re supposed to start fresh each time; we were told to delete our code after every session. We were in pairs, swapping after each session, and we did the same task every time. In fact, Conway’s Game of Life is the only task mentioned anywhere that I can find for coderetreat. So I don’t know what we’ll do next time, or if we’re meant to do the same thing again. There are some guiding principles which felt to us like restrictions, that you have to code in crazy ways to encourage better code. The final thing is that it’s supposed to be free for outsiders to join. It’s meant to be a kind of networking thing, where you link up with people from other companies. We had a pilot day with Michael and Lauri. Since it was basically the first time any of us had done anything like this, everybody was from Red Gate. We didn’t chat to anybody else for the initial one. The task was Conway’s Game of Life, which most of you have probably heard of; all but one of us knew about it when we did the coderetreat. I won’t go into the details of what it is, but it felt like the right size of task: basically one or two groups actually produced something working by the end of the day, and of course that doesn’t mean it’s necessarily a day’s work to produce that, because we were starting again every hour. The task really drives you more than trying to create good code, I found. It was really tempting to try and get it working rather than stick to the rules. But it’s really good to stop and try again, because there are so many what-ifs when you’ve finished writing something, “what if I’d done it this way?”. You can answer all those questions at a coderetreat because it’s not about getting a product out the door, it’s about learning and playing with ideas. So we had all these different practices we were trying. I’ll try and go through most of these. Single responsibility is this idea that everything should do just one thing. It was the very first session, we were still trying to figure out how do you go about the Game of Life? So by the end of forty-five minutes we hadn’t produced very much for that first session. We were still thinking, “Do we start with a board, how do we represent all these squares? It can be infinitely big, help, this is getting really difficult!”. So, most of us didn’t really get anywhere on the first one.
    Although it was interesting that some people started with the board, one group started with the FateDecider class that decides whether things live or die. A sort of god class, but in a good way. They managed to implement all of the rules without even defining how the squares were arranged or anything like that. Another thing we tried was TDD (test-driven development). I’m sure most of you know what TDD is:
    - Write a test, watch it fail for the right reason
    - Write code to pass the test, watch it pass
    - Refactor, check the test still passes
    - Repeat!
    It basically worked, we were able to produce code, but we often found the tests defined the direction that code went, which is obviously the idea of TDD. But you tend to find that by the time you’ve even written your first assertion, which is supposed to be the very first thing you write, because you write your tests backwards from the assertions back to the initial conditions, you’ve already constrained the logic of the code in some way by the time you’ve done that. You then get to this situation of, “Well, we actually want to go in a slightly different direction. Can we do this?”. Can we write tests that don’t constrain the architecture? Wrapping up all primitives: it’s kind of turtles all the way down. We had a Size, which has a Width and Height, which both derive from Dimension. You’ve got pages of code before you’ve even done anything. No getters and setters (use tell don’t ask instead): mocks and stubs for tests are required if you want to assert that your results are what you think they should be. You can’t just check the internal state of the code. And people found that really challenging, and it made them think in a different way, which I think is really good. Not having mutable state: that was kind of confusing because we weren’t quite sure what fitted within that rule and what didn’t, and I think we were trying too hard to follow the rule rather than the guideline. No if-statements: supposed to use polymorphism instead, but polymorphism still requires a factory with conditional behaviour. We did something really crazy to get around this:
    public T If<T>(bool condition, Func<T> left, Func<T> right)
    {
        var dict = new Dictionary<bool, Func<T>> { { true, left }, { false, right } };
        return dict[condition].Invoke();
    }
    That is not really polymorphism, is it? For-loops: you can always replace a for-loop with recursion, but it doesn’t tend to make it any more readable unless it’s the kind of task that really lends itself to that. So it was interesting, it was good practice, but it wouldn’t make it easier unless it’s the kind of tree-structure algorithm where that would help. Having a limit on the number of levels of indentation: again, I think it does produce very nice, clean code, but it wasn’t actually a challenge because you just extract methods. That’s quite a useful thing because you can apply that to real code and say, “Okay, should this method really be going crazy like this?” No talking: we hated that. It’s like there’s two of you at a computer, and one of you is doing the typing, so what does the other guy do if they’re not allowed to talk? The answer is TDD ping-pong – one person writes the tests, and then the other person writes the code to pass the test. And that creates communication without actually having to have discussion about things, which is kind of cool. No code comments: just makes no difference to anything. It’s a forty-five minute exercise, so what are you going to put comments in code for? Finally, this is my fault.
I discovered an entertaining way of doing the calculation that was kind of cool (using convolutions over the state of the board). Unfortunately, it turns out to be really hard to implement in C#, so didn’t even manage to work out how to do that convolution in C#. It’s trivial in some high-level languages, but you need something matrix-orientated for it to really work. That’s most of it, really. The thoughts that people went away with: we put down our answers to questions like “What have you learnt?” and “What surprised you?”, “How are you going to do things differently?”, and most people said redoing the problem is really, really good for understanding it properly. People hate having a massive legacy codebase that they can’t change, so being able to attack something three different ways in an environment where the end-product isn’t important: that’s something people really enjoyed. Pair-programming: also people said that they wanted to do more of that, especially with TDD ping-pong, where you write the test and somebody else writes the code. Various people thought different things about immutables, but most people thought they were good, they promote functional programming. And TDD people found really hard. “Tell, don’t ask” people found really, really hard and really, really, really hard to do well. And the recursion just made things trickier to debug. But most people agreed that coderetreats are really cool, and we should do more of them.
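
    As a footnote to the “no if-statements” constraint mentioned above, here is a minimal sketch (not from the talk; the class names are illustrative) of how Game of Life cell state can be modelled with polymorphism plus a lookup table, in the same spirit as the dictionary trick quoted earlier:
    // Each state decides its own fate from the number of live neighbours (0-8),
    // using an array lookup instead of a conditional.
    abstract class CellState
    {
        public abstract CellState Next(int liveNeighbours);
    }

    class Alive : CellState
    {
        public override CellState Next(int liveNeighbours)
        {
            // A live cell survives with 2 or 3 live neighbours, otherwise it dies.
            CellState[] fate = { new Dead(), new Dead(), this, this, new Dead(),
                                 new Dead(), new Dead(), new Dead(), new Dead() };
            return fate[liveNeighbours];
        }
    }

    class Dead : CellState
    {
        public override CellState Next(int liveNeighbours)
        {
            // A dead cell becomes alive only with exactly 3 live neighbours.
            CellState[] fate = { this, this, this, new Alive(), this,
                                 this, this, this, this };
            return fate[liveNeighbours];
        }
    }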

    Read the article

  • Pagination, Duplicate Content, and SEO

    - by Iamtotallylost
    Please consider a list of items (forum comments, articles, shoes, doesn't matter) which are spread over multiple pages. Different sort orders are supported (by date, by popularity, by price, etc). So, an URL might look like this (I use the query style here to simplify things): /items?id=1234&page=42&sort=popularity /items?id=1234&page=5&sort=date Now, in terms of SEO, I think I should be worried about duplicate content. After all, each item appears at least as many times as there are sort orders. I've seen Matt Cutts talking about the rel=canonical link tag, but he also said that the canonical page should have very similar content. But this is not the case here because page #1 in a non-canonical sort order might have completely different items than page #1 in the canonical sort order. For a given non-canonical page, there is no clear canonical page listing all the same items, so I think rel=canonical won't help here. Then I thought about using the noindex meta tag on all pages with non-canonical sort order, and not using it on all pages with canonical sort order. However, if I use that method, what will happen with backlinks that are going to non-canonical pages -- will they still spread their page rank juice, even though the first page googlebot (or any other crawler) is going to encounter is marked as "noindex"? Can you please comment on my problem and what you think is the best solution? If you think you have a better solution, please consider that 1) I do not want to use Javascript for this, 2) I do not want all the items to be on one page. Thank you.

    Read the article

  • Visual Studio 2008 Crashes When Adding Custom Pipeline Components to Toolbox

    - by Sean Feldman
    I have run into this issue trying to add a custom pipeline component to the toolbox. The only way (that I know about) to add a custom pipeline component to a customized pipeline is using the visual designer. In order to do that you have to have the components in the toolbox. This was a bit frustrating. Google brought up one result which was exactly what I needed. One of the comments had another link, to a similar issue, but this time with a different title: Hotfix for BizTalk 2009 and Visual Studio 2008. I followed the link, installed the hotfix, and it worked. Oh, yes, you have to reboot your machine for the hotfix to work completely (this is where I spent some time pulling my hair out and asking why the hotfix didn’t work?!). Once this is done, you are good to go. Among other things, this hotfix deals with BizTalk projects that reference each other and items not being updated (like distinguished fields in a schemas project not being reflected in the orchestrations project). Here’s the full list:
    * The orchestrations in the referenced BizTalk project may show compiler warnings.
    * The changes that are made to the referenced BizTalk project are not propagated on to the referencing project.
    * When you edit the orchestrations of the referenced project, XLANG errors are thrown. These errors may disappear after the orchestrations are saved and recompiled.
    * After you deploy the referencing project, the local copies of the referenced project’s binaries are deleted.
    * After you deploy the referencing project, various errors or warnings occur in Orchestration Designer.

    Read the article

  • Parallel Debugging

    Using Visual Studio 2010, parallel debugging is easy. Two new debugging windows provide a total view of the internals of your PPL and TPL applications, with hints on where to start investigations. These are not mere extensions to VS, but tightly integrated with the rest of the debugger experience, so you don't need to learn many new techniques. Use them in your program to eclipse bugs from existence! One of the most frequent requests I receive is for links to VS2010 parallel debugging content, and rather than keep sending them out individually, I decided to gather them all under one permalink, hence this multi-link blog post.
    - MSDN Magazine article on Parallel Debugging.
    - Screencast of sample code from the article.
    - MSDN Walkthrough: Debugging a Parallel Application (VB, C++, C#).
    - Screencast of walkthrough for Parallel Stacks.
    - Screencast of walkthrough for Parallel Tasks.
    - MSDN "How To" on Parallel Tasks.
    - MSDN "How To" on Parallel Stacks.
    - Detailed blog post on Parallel Tasks.
    - Detailed blog post on Parallel Stacks.
    - Detailed blog post on Parallel Stacks - Tasks View.
    - Detailed blog post on Parallel Stacks - Method View.
    - Download slides on Parallel Tasks and Parallel Stacks (pptx).
    If you have questions on these, please post to any of the parallel computing forums or the debugging forum (your question will be routed to me if nobody else can answer it). Comments about this post welcome at the original blog.
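
    For anyone who wants something concrete to poke at, here is a minimal sketch (not from the original post; the method names are illustrative) of a small TPL program whose tasks show up nicely in the Parallel Tasks and Parallel Stacks windows: set a breakpoint inside Step1 or Step2 and open Debug > Windows > Parallel Stacks.
    using System;
    using System.Threading.Tasks;

    class ParallelDebugDemo
    {
        static void Main()
        {
            // Start a few tasks that take different code paths, so the
            // Parallel Stacks window shows distinct stack segments.
            var tasks = new Task[4];
            for (int i = 0; i < tasks.Length; i++)
            {
                int id = i; // capture a copy of the loop variable
                tasks[id] = Task.Factory.StartNew(() => Work(id));
            }
            Task.WaitAll(tasks);
        }

        static void Work(int id)
        {
            if (id % 2 == 0) Step1(id); else Step2(id);
        }

        static void Step1(int id) { Console.WriteLine("task {0} in Step1", id); }
        static void Step2(int id) { Console.WriteLine("task {0} in Step2", id); }
    }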

    Read the article

  • Siebel Troubleshooting : An ODBC error occurred; SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl

    - by Giri Mandalika
    Symptom: A newly installed Siebel application server fails to start despite successful ODBC connectivity to the database. The SRProc process logs ODBC error messages similar to the following:
    Message: GEN-13, Additional Message: dict-ERR-1109: Unable to read value from export file (Data length (32) Column definition (3)).
    Message: GEN-13, Additional Message: dict-ERR-1107: Unable to read row 0 from export file (UTLDataValRead pBuf, col 4 ).
    GenericLog GenericError 1 0002157.. 11-11-18 13:28 Message: Generated SQL statement:, Additional Message: SQLFetch: SELECT RDOBJ.DOCK_ID, RDOBJ.RELATED_DOCK_ID, RDOBJ.SQL_STATEMENT, RDOBJ.CHECK_VISIBILITY, 'N', RDOBJ.COMMENTS, RDOBJ.ACTIVE, RDOBJ.SEQUENCE, RDOBJ.VIS_STRENGTH, RDOBJ.REL_VIS_STRENGTH, RDOBJ.VIS_EVT_COLS FROM ORAPERF.S_DOCK_REL_DOBJ RDOBJ, ORAPERF.S_DOCK_OBJECT DOBJ WHERE RDOBJ.REPOSITORY_ID = (SELECT ROW_ID FROM ORAPERF.S_REPOSITORY WHERE NAME = ?) AND DOBJ.ROW_ID = RDOBJ.DOCK_ID AND (DOBJ.INACTIVE_FLG = 'N' OR DOBJ.INACTIVE_FLG IS NULL) AND (RDOBJ.INACTIVE_FLG = 'N' OR RDOBJ.INACTIVE_FLG IS NULL)
    Message: Error: An ODBC error occurred, Additional Message: Function: DICGetRDObjects; ODBC operation: SQLFetch
    Message: GEN-13, Additional Message: dict-ERR-1109: Unable to read value from export file (UTLCompressFRead (fseek)).
    Message: GEN-13, Additional Message: dict-ERR-1107: Unable to read row 0 from export file (UTLDataValRead pBuf, col 0 ).
    Message: GEN-10, Additional Message: Calling Function: DICLoadDObjectInfo; Called Function: Calling DICGetRDObjects
    Message: GEN-10, Additional Message: Calling Function: DICLoadDict; Called Function: DICLoadDObjectInfo
    GenericError (srpdb.cpp (860) err=3006 sys=2) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
    (srpsmech.cpp (74) err=3006 sys=0) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
    (srpmtsrv.cpp (107) err=3006 sys=0) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
    (smimtsrv.cpp (1203) err=3006 sys=0) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl
    SmiLayerLog Error Terminate process due to unrecoverable error: 3006. (Main Thread)
    An inconsistent or corrupted dictionary file "diccache.dat" is likely the cause.
    Solution:
    Stop the application server and manually kill the remaining Siebel application-specific processes, e.g.:
    stop_server all
    pkill siebmtsh
    pkill siebproc
    ..
    Remove the $SIEBEL_HOME/bin/diccache.dat file. It will be re-generated during the application server startup.
    Start the application server:
    start_server all

    Read the article

  • In-Application Support Made Easier

    - by matt.hicks
    With the availability of Oracle UPK 3.6.1 and Enablement Service Pack 1 for Oracle UPK 3.6.1 (Oracle Support login required for both), there are quite a few changes for content admins to absorb. In addition to the support added for dozens of application releases, patches and new target applications, we've also added features to make implementing and using In-Application Support even easier. First, the old Help Menu Integration Guides have been updated and combined into a single In-Application Support Guide. If you integrate UPK content for user assistance, or if you're interested in doing so, read the new guide! It covers all the integration steps, including a section on the new In-Application Support Configuration Utility. If you've integrated content in multiple languages, or if you've ever had to make configuration changes for UPK Help Integration, then you know how cumbersome it was to manually edit javascript files. No longer! The Player now includes a configuration utility that provides a web browser interface for setting all In-Application Support options. From the main screen, you see a list of applications covered by the published content. Clicking on an application name takes you to the edit configuration screen where you can set all Player options for that application. No more digging through the Player folders to find the right javascript file to edit. No complicated javascript syntax to make changes. And with Enablement Service Pack 1 we've added a new feature we're calling the Tabbed Gateway. The Tabbed Gateway is a top-level navigation bar for Help Integration. And all tabs, links, and text are controlled with the Configuration Utility... I think the Tabbed Gateway is a really cool and exciting feature for content launch. I can't wait to hear your ideas for how to use it for your content. Let me know in comments or email!

    Read the article

  • Visual Studio 11 not 2011

    - by Daniel Moth
    A little pet peeve of mine is when people incorrectly refer to the Developer Preview (or the upcoming Beta) as Visual Studio 2011 instead of the correct Visual Studio 11. The "11" refers to the version number (internally we call it Dev11). What the product will be called when it is released is anyone's guess (it could keep the name or it could have a year appended to it, or it could be something else, who knows). Even if it does have a year appended to the name, I think it is a safe bet it won't be last year! For reference, version 10 was the previous version of Visual Studio, which happened to be released in 2010, hence it got the name Visual Studio 2010. That is what confuses people new to this product, I guess... they think that the two-digit number matches the year, just because it coincided like that last year. (btw, internally we called it Dev10). For further reference, older releases were: Visual Studio 2008 (v9) aka "Orcas", Visual Studio 2005 (v8) aka "Whidbey", Visual Studio .NET 2003 (v7.1) aka "Everett", and Visual Studio .NET 2002 (v7) aka "Rainier". Before that, we were in the pre-.NET era with Visual Studio 6 (where the version and the product name matched, without the year appended to the name). So next time you hear someone saying "Visual Studio 2011", point them to this post for some mini-education... thanks. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Blogger.com kills FTP

    - by Daniel Moth
    History (you can safely ignore)
    Back in 2002 I came across some (almost) free Linux/Apache space and set up my first manually created, HTML-based home page, which still exists: http://www.danielmoth.com/. In 2004 I wanted a blog that would be hosted in a sub-folder of my domain, and at the same time I did not want to mess with setting up a blog engine myself. I found the perfect solution in blogger.com, which offered a web interface for creating blog posts (and managing the pages' template) and would then use FTP to upload HTML pages to my space (no server-side programming/installation required at all)!

    FTP feature dropped by blogger.com
    Unfortunately, along the way Google purchased blogger.com, and a couple of months ago they announced that they decided to kill the FTP feature, forcing customers using that feature to have their content hosted (in an opaque way) on Google's servers. Even though I prefer having my content on my own space, I would have considered moving it to Google's servers if I could host my blog in a sub-folder and preserve my full blog URL: http://www.danielmoth.com/Blog/ (including my home pages being hosted at the root of the domain). Sadly, that is not possible.

    What now
    So I decided to move my blog somewhere else. I'll document in the next few posts how I did that (including a tool I wrote) in case it helps someone else in the same situation, and also as a reminder to me if I need to do something like this again in the future. Comments about this post welcome at the original blog.

    Read the article

  • Start Debugging in Visual Studio

    - by Daniel Moth
    Every developer is familiar with hitting F5 and debugging their application, which starts the app with the Visual Studio debugger attached from the start (instead of attaching later). This is one way to achieve step 1 of the Live Debugging process. Hitting F5, F11, Ctrl+F10 and the other ways to start the process under the debugger are covered in this MSDN "How To". The way you configure the debugging experience, before you hit F5, is by selecting the "Project" and then the "Properties" menu (Alt+F7 on my keyboard bindings). Depending on your project type there are different options, but if you browse to the Debug (or Debugging) node in the properties page you'll have a way to select local or remote machine debugging, which debug engines to use, command line arguments to use during debugging, etc. Currently the .NET and C++ project systems are different, but one would hope that one day they would be unified to use the same mechanism and UI (I don't work on that product team so I have no knowledge of whether that is a goal or if it will ever happen). Personally I like the C++ one better; it is described on this MSDN page. If you were following along in the "Attach to Process" blog post, the equivalent of the "Select Code Type" dialog is the "Debugger Type" dropdown: that is how you change the debug engine. Some of the debugger properties options appear on the standard toolbar in VS, and with Visual Studio 11 the Debug Type option has been added to the toolbar. If you don't see that in your installation, customize the toolbar to show it - VS 11 tends to be conservative in what you see by default, especially for the non-C++ Visual Studio profiles. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • HUGE EF4 Inheritance Bug

    - by djsolid
    Well, maybe not for everyone, but for me it is definitely really important. That is why I will get straight to the point. We have a model (shown in the original post) in which Burger inherits from Food and has a one-to-one BurgerDetails navigation property, mapped to a corresponding database schema. We are using EF 4.0 and we want to load all Burgers including their BurgerDetails, so we write a query that eager-loads BurgerDetails through the navigation property (the query is shown in the original post). But it fails. The error is: “The ResultType of the specified expression is not compatible with the required type. The expression ResultType is 'Transient.reference[SampleEFDBModel.Food]' but the required type is 'Transient.reference[SampleEFDBModel.Burger]'. Parameter name: arguments[0]” So in the new version of EF there is no way to eager-load data through navigation properties with 1-1 relationships defined in subclasses. Here is the relevant Microsoft Connect issue. It is described through another example, but the result is the same. Please, if you think this is important, vote it up on Microsoft Connect. EF 4.0 has many improvements. I have been using it since v1 in large-scale projects, and this version is faster, produces cleaner SQL, is more reliable and can be used for complicated business scenarios. That is why I believe this issue should be solved as soon as possible. I understand that release cycles are slow, but I am hoping at least for a hotfix. I have also uploaded the example project so you can test it. Download it from here. If anyone has found any workarounds please post them in the comments section. Thanks!
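    The model and query in the original post are shown as images, so for illustration only, here is a hedged sketch of the kind of eager-loading query being described (the container and entity names are assumed from the error message and may not match the sample project exactly):

        // Hedged sketch, not the author's exact code: eager-loading a 1-1 navigation
        // property declared on the Burger subclass; on EF 4.0 this is the query shape
        // that fails with the "ResultType ... Transient.reference" error quoted above.
        using System.Linq;
        using SampleEFDBModel;                                          // assumed model namespace

        class EagerLoadRepro
        {
            static void Main()
            {
                using (var context = new SampleEFDBModelContainer())   // context name assumed
                {
                    var burgers = context.Foods                        // Burger inherits from Food
                                         .OfType<Burger>()
                                         .Include("BurgerDetails")     // 1-1 nav property on the subclass
                                         .ToList();
                }
            }
        }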

    Read the article

  • Making Sense of ASP.NET Paths

    - by Renso
    ASP.NET includes quite a plethora of properties to retrieve path information about the current request, control and application. There's a ton of information available about paths on the Request object, some of it appearing to overlap and some of it buried several levels down, and it can be confusing to find just the right path that you are looking for. To keep things straight I thought it a good idea to summarize the path options along with descriptions and example paths. I wrote a post about this a long time ago in 2004, and I find myself frequently going back to that page to quickly figure out which path I'm looking for when processing the current URL. Apparently a lot of people must be doing the same, because that original post is still the second most visited on this blog, to the tune of nearly 500 hits per day. So I decided to update and expand a bit on the original post with a little more information and clarification based on the original comments.

    Request Object Paths Available

    Here's a list of the path-related properties on the Request object (and the Page object), with a description and example value for each. Assume a URL like http://www.west-wind.com/webstore/admin/paths.aspx for the examples below, where webstore is the name of the virtual directory.

    - ApplicationPath: the web root-relative logical path to the virtual root of this app. Example: /webstore/
    - PhysicalApplicationPath: the local file system path of the virtual root for this app. Example: c:\inetpub\wwwroot\webstore
    - PhysicalPath: the local file system path to the current script or path. Example: c:\inetpub\wwwroot\webstore\admin\paths.aspx
    - Path, FilePath, CurrentExecutionFilePath: all return the full root-relative logical path to the script page, including path and script name. CurrentExecutionFilePath returns the "current" request path after a Transfer/Execute call, while FilePath always returns the original request's path. Example: /webstore/admin/paths.aspx
    - AppRelativeCurrentExecutionFilePath: an ASP.NET root-relative virtual path to the script or path for the current request. If in a Transfer/Execute call, the transferred path is returned. Example: ~/admin/paths.aspx
    - PathInfo: any extra path following the script name (the /ExtraPathInfo portion of the example); string.Empty if no PathInfo is available. Example: /webstore/admin/paths.aspx/ExtraPathInfo
    - RawUrl: the full root-relative URL including query string and extra path, as a string. Example: /webstore/admin/paths.aspx?sku=wwhelp40
    - Url: a fully qualified URL including query string and extra path. Note this is a Uri instance rather than a string. Example: http://www.west-wind.com/webstore/admin/paths.aspx?sku=wwhelp40
    - UrlReferrer: the fully qualified URL of the page that sent the request. This is also a Uri instance, and it is null if the page was accessed directly by typing into the address bar or if the client did not send a Referrer HTTP header. Example: http://www.west-wind.com/webstore/default.aspx?Info
    - Control.TemplateSourceDirectory: the logical path to the folder of the page, master or user control on which it is called. This is useful if you need to know the path to a Page or control from within the control. For non-file controls this returns the Page path. Example: /webstore/admin/

    As you can see there's a ton of information available there for each of the three common path formats:

    - Physical Path is an OS-style path that points to a path or file on disk.
    - Logical Path is a Web path that is relative to the Web server's root. It includes the virtual directory plus the application-relative path.
    - ~/ (Root-relative) Path is an ASP.NET-specific path that uses ~/ to indicate the virtual root Web path. ASP.NET can convert virtual paths into either logical paths using Control.ResolveUrl(), or physical paths using Server.MapPath(). Root-relative paths are useful for specifying portable URLs that don't rely on relative directory structures, and are very useful from within control or component code.

    You should be able to get any necessary format from ASP.NET from just about any path or script using these mechanisms.

    ~/ Root Relative Paths and ResolveUrl() and ResolveClientUrl()

    ASP.NET supports root-relative virtual path syntax in most of its URL properties in Web Forms, so you can easily specify a root-relative path in a control rather than a location-relative path:

        <asp:Image runat="server" ID="imgHelp" ImageUrl="~/images/help.gif" />

    ASP.NET internally resolves this URL by using ResolveUrl("~/images/help.gif") to arrive at the root-relative URL of /webstore/images/help.gif, which uses Request.ApplicationPath as the base path to replace the ~. By convention, any custom Web controls should also use ResolveUrl() on URL properties to provide the same functionality. In your own code you can use Page.ResolveUrl() or Control.ResolveUrl() to accomplish the same thing:

        string imgPath = this.ResolveUrl("~/images/help.gif");
        imgHelp.ImageUrl = imgPath;

    Unfortunately ResolveUrl() is limited to WebForm pages, so if you're in an HttpHandler or Module it's not available. ASP.NET MVC also has its own more generic version of ResolveUrl in Url.Content:

        <script src="<%= Url.Content("~/scripts/new.js") %>" type="text/javascript"></script>

    which is part of the UrlHelper class. In ASP.NET MVC the above sort of syntax is actually even more crucial than in WebForms, due to the fact that views are not referencing specific pages but rather are often path based, which can lead to various variations on how a particular view is referenced. In Module or Handler code, Control.ResolveUrl() unfortunately is not available, which in retrospect seems like an odd design choice - URL resolution really should happen on a Request basis, not as part of the Page framework. Luckily you can also rely on the static VirtualPathUtility class:

        string path = VirtualPathUtility.ToAbsolute("~/admin/paths.aspx");

    VirtualPathUtility also has many other quite useful methods for dealing with paths and converting between the various kinds of paths supported. One thing to watch out for is that ToAbsolute() will throw an exception if a query string is provided, and it doesn't work on fully qualified URLs. I wrote about this topic with a custom solution that works with fully qualified URLs and query strings here (check the comments for some interesting discussions too); a hedged sketch of one possible approach appears at the end of this post. Similar to ResolveUrl() is ResolveClientUrl(), which creates a fully qualified HTTP path that includes the protocol and domain name. It's rare that this full resolution is needed, but it can be useful in some scenarios.

    Mapping Virtual Paths to Physical Paths with Server.MapPath()

    If you need to map root-relative or current-folder-relative URLs to physical URLs, you can use HttpContext.Current.Server.MapPath(). Inside of a Page you can do the following:

        string physicalPath = Server.MapPath("~/scripts/ww.jquery.js");

    MapPath is pretty flexible and it understands both ASP.NET-style virtual paths as well as plain relative paths, so the following also works:

        string physicalPath = Server.MapPath("scripts/silverlight.js");

    as well as dot-relative syntax:

        string physicalPath = Server.MapPath("../scripts/jquery.js");

    Once you have the physical path you can perform standard System.IO Path and File operations on the file. Remember that with physical paths and IO or copy operations you need to make sure you have permissions to access files and folders based on the Web server user account that is active (NETWORK SERVICE or ASPNET, typically). Note that Server.MapPath will not map up beyond the virtual root of the application, for security reasons.

    Server and Host Information

    Between these settings you can get all the information you may need to figure out where you are at and to build a new URL if necessary. If you need to build a URL completely from scratch, you can get access to information about the server you are accessing via these server variables:

    - SERVER_NAME: the domain name or IP address. Example: www.west-wind.com or 127.0.0.1
    - SERVER_PORT: the port that the request runs under. Example: 80
    - SERVER_PORT_SECURE: determines whether https: was used. Example: 0 or 1
    - APPL_MD_PATH: the ADSI DirectoryServices path to the virtual root directory. Note that LM typically doesn't work for ADSI access, so you should replace it with LOCALHOST or the machine's NetBIOS name. Example: /LM/W3SVC/1/ROOT/webstore

    Request.Url and Uri Parsing

    If you still need more control over the current request URL, or you need to create new URLs from an existing one, the current Request.Url Uri property offers a lot of control. Using the Uri class and UriBuilder makes it easy to retrieve parts of a URL and create new URLs based on an existing URL. The UriBuilder class is the preferred way to create URLs - much preferable over creating URIs via string concatenation. The most useful Uri properties include:

    - Scheme: the URL scheme or protocol prefix. Example: http or https
    - Port: the port, if specifically specified.
    - DnsSafeHost: the domain name or local host NetBIOS machine name. Example: www.west-wind.com or rasnote
    - LocalPath: the full path of the URL including script name and extra PathInfo. Example: /webstore/admin/paths.aspx
    - Query: the query string, if any. Example: ?id=1

    The Uri class itself is great for retrieving Uri parts, but most of the properties are read-only. If you need to modify a URL in order to change it, you can use the UriBuilder class to load up an existing URL and modify it to create a new one. Here are a few common operations I've needed to do to get specific URLs:

    Convert the Request URL to an SSL/HTTPS link

    For example, to take the current request URL and convert it to a secure URL:

        UriBuilder build = new UriBuilder(Request.Url);
        build.Scheme = "https";
        build.Port = -1;  // don't inject port
        Uri newUri = build.Uri;
        string newUrl = build.ToString();

    Retrieve the fully qualified URL without a QueryString

    AFAIK, there's no native routine to retrieve the current request URL without the query string. It's easy to do with UriBuilder however:

        UriBuilder builder = new UriBuilder(Request.Url);
        builder.Query = "";
        string logicalPathWithoutQuery = builder.ToString();
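    On the ToAbsolute() query string limitation mentioned above, a minimal sketch of a helper along those lines (the helper and class names here are made up for illustration; the linked post has the full implementation) might look like this:

        // Hedged sketch, not the linked implementation: resolve a ~/ path that may carry
        // a query string by splitting the query off before calling ToAbsolute().
        using System.Web;

        public static class PathHelper
        {
            public static string ResolveUrl(string relativeUrl)
            {
                if (string.IsNullOrEmpty(relativeUrl) || !relativeUrl.StartsWith("~"))
                    return relativeUrl;   // nothing to resolve

                // Split off the query string so VirtualPathUtility.ToAbsolute() doesn't throw
                int queryIndex = relativeUrl.IndexOf('?');
                string query = queryIndex >= 0 ? relativeUrl.Substring(queryIndex) : string.Empty;
                string path = queryIndex >= 0 ? relativeUrl.Substring(0, queryIndex) : relativeUrl;

                return VirtualPathUtility.ToAbsolute(path) + query;
            }
        }

        // Usage: PathHelper.ResolveUrl("~/admin/paths.aspx?sku=wwhelp40")
        // would yield /webstore/admin/paths.aspx?sku=wwhelp40 in the example app.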

    Read the article

  • Great Blogs About Oracle Solaris 11

    - by Markus Weber
    Now that Oracle Solaris 11 has been released, why not blog about blogs. There is of course a tremendous amount of resources and information available, but valuable insights directly from people actually building the product are priceless. Here's a list of such great blogs. NOTE: If you think we missed some good ones, please let us know in the comments section!
    - Top 11 Things: My 11 favourite Solaris 11 features (Darren Moffat)
    - Top 11 Things: These are 11 of my favorite things! (Mike Gerdts)
    - Top 11 Things: 11 reasons to love Solaris 11 (Jim Laurent)
    - SysAdmin Resources: Solaris 11 Resources for System Administrators (Rick Ramsey)
    - Overview: Oracle Solaris 11: The First Cloud OS (Larry Wake)
    - Overview: What's a "Cloud Operating System"? (Harry Foxwell)
    - Overview: What's New in Oracle Solaris 11 (Jeff Victor)
    - Try it!: Virtually the fastest way to try Solaris 11 (and Solaris 10 zones) (Dave Miner)
    - Upgrade: Upgrading Solaris 11 Express b151a with support to Solaris 11 (Alan Hargreaves)
    - IPS: The IPS System Repository (Tim Foster)
    - IPS: Building a Solaris 11 repository without network connection (Jim Laurent)
    - IPS: IPS Self-assembly - Part 1: overlays (Tim Foster)
    - IPS: Self assembly - Part 2: multiple packages delivering configuration (Tim Foster)
    - Security: Immutable Zones on Encrypted ZFS (Darren Moffat)
    - Security: User home directory encryption with ZFS (Darren Moffat)
    - Security: Password (PAM) caching for Solaris su - "a la sudo" (Darren Moffat)
    - Security: Completely disabling root logins on Solaris 11 (Darren Moffat)
    - Security: OpenSSL Version in Solaris (Darren Moffat)
    - Security: Exciting Crypto Advances with the T4 processor and Oracle Solaris 11 (Valerie Fenwick)
    - Performance: Critical Threads Optimization (Rafael Vanoni)
    - Performance: SPARC T4-2 Delivers World Record SPECjvm2008 Result with Oracle Solaris 11 (BestPerf Blog)
    - Performance: Recent Benchmarks Using Oracle Solaris 11 (BestPerf Blog)
    - Predictive Self Healing: Introducing SMF Layers (Sean Wilcox)
    - Predictive Self Healing: Oracle Solaris 11 - New Fault Management Features (Gavin Maltby)
    - Desktop: What's new on the Solaris 11 Desktop? (Calum Benson)
    - Desktop: S11 X11: ye olde window system in today's new operating system (Alan Coopersmith)
    - Desktop: Accessible Oracle Solaris 11 - released! (Peter Korn)

    Read the article

< Previous Page | 112 113 114 115 116 117 118 119 120 121 122 123  | Next Page >