Search Results

Search found 2764 results on 111 pages for 'mike lovely'.

Page 100/111

  • Audio not working in 12.10

    - by frampy
    I did a clean install of 12.10. When I open Sound Settings in GNOME the only device in the list is "Dummy Output", and sound is not working. Sound worked fine out of the box in 12.04. I ran alsamixer; it says my card is "HDA Intel" and the chip is "Realtek ALC880". The alsamixer playback output was set to mute at first, but unmuting did not fix it. I checked out the info at http://www.unixmen.com/2012003-howto-resolve-nosound-problem-on-ubuntu/ as suggested on a similar question, and I've done everything there except installing the Ubuntu audio dev team driver. Should I try installing this? Edit: I've been reading the sound troubleshooting guide at https://help.ubuntu.com/community/SoundTroubleshooting and it looks like Ubuntu is finding my audio device correctly:
        mike@wucade:~$ lspci -v | grep -A7 -i "audio"
        00:1b.0 Audio device: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) High Definition Audio Controller (rev 03)
        Subsystem: Albatron Corp. Device 2668
        Flags: bus master, fast devsel, latency 0, IRQ 40
        Memory at d01c0000 (64-bit, non-prefetchable) [size=16K]
        Capabilities:
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd-hda-intel
    Still stuck as to why this isn't working.

    Read the article

  • What are developers' problems with helpful error messages?

    - by Moo-Juice
    It continues to astound me that, in this day and age, products that have years of use under their belt, built by teams of professionals, still fail to provide helpful error messages to the user. In some cases, the addition of just a little piece of extra information could save a user hours of trouble. A program that generates an error generated it for a reason. It has everything at its disposal to inform the user as much as it can about why something failed. And yet it seems that providing information to aid the user is a low priority. I think this is a huge failing. One example is from SQL Server. When you try to restore a database that is in use, it quite rightly won't let you. SQL Server knows what processes and applications are accessing it. Why can't it include information about the process(es) that are using the database? I know not everyone passes an Application Name attribute on their connection string, but even a hint about the machine in question could be helpful. Another candidate, also SQL Server (and MySQL), is the lovely "string or binary data would be truncated" error message and its equivalents. A lot of the time, a simple perusal of the SQL statement that was generated and the table shows which column is the culprit. This isn't always the case, and if the database engine picked up on the error, why can't it save us that time and just tell us which damned column it was? On this example, you could argue that there may be a performance hit to checking it and that this would impede the writer. Fine, I'll buy that. How about, once the database engine knows there is an error, it does a quick comparison after the fact between the values that were going to be stored and the column lengths, then displays that to the user? ASP.NET's horrid Table Adapters are also guilty. Queries can be executed and one can be given an error message saying that a constraint somewhere is being violated. Thanks for that. Time to compare my data model against the database, because the developers are too lazy to provide even a row number or example data. (For the record, I'd never use this data-access method by choice; it's just a project I have inherited!) Whenever I throw an exception from my C# or C++ code, I provide everything I have at hand to the user. The decision has been made to throw it, so the more information I can give, the better. Why did my function throw an exception? What was passed in, and what was expected? It takes me just a little longer to put something meaningful in the body of an exception message. Hell, it does nothing but help me whilst I develop, because I know my code throws things that are meaningful. One could argue that complicated exception messages should not be displayed to the user. Whilst I disagree with that, it is an argument that can easily be appeased by having a different level of verbosity depending on your build. Even then, the users of ASP.NET and SQL Server are not your typical users, and would prefer something full of verbosity and yummy information because they can track down their problems faster. Why do developers think it is okay, in this day and age, to provide the bare minimum amount of information when an error occurs? It's 2011, guys, come on.
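    The author's point about context-rich exceptions applies in any language. As a minimal sketch of the idea (the class, method and message wording below are hypothetical, not taken from the article), compare a bare "operation failed" with something like:

    import java.nio.file.Files;
    import java.nio.file.Path;

    public class ReportArchiver {
        // Throw with context: say what was expected, what was actually received,
        // and which resource was involved, so the caller doesn't have to guess.
        public void archive(String reportId, Path target) {
            if (reportId == null || reportId.isBlank()) {
                throw new IllegalArgumentException(
                    "reportId must be a non-empty string, but was "
                    + (reportId == null ? "null" : "blank"));
            }
            if (target.getParent() != null && !Files.isWritable(target.getParent())) {
                throw new IllegalStateException(
                    "Cannot archive report '" + reportId + "': directory '"
                    + target.getParent() + "' is not writable by this process");
            }
            // ... actual archiving would go here ...
        }
    }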

    Read the article

  • TechEd Israel 2010 may only accept speakers from sponsors

    - by RoyOsherove
    A month or so ago, Microsoft Israel started sending out emails to its partners and registered event users to "Save the date!" – Microsoft TechEd Israel is coming, and it's going to be this November! "Great news," I thought to myself. I'd been to a couple of the MS TechEd events, as a speaker and as an attendee, and it was lovely and professionally done. Israel is an amazing place for technology and development and TechEd hosted some big names in the world of MS software. A couple of weeks ago, I was shocked to hear from a couple of people that Microsoft Israel plans to accept non-MS TechEd speakers only from sponsors of the event. That means that according to the amount that you have paid, you get to insert one or more of your own selected speakers as part of TechEd. I've spent the past couple of weeks trying to gather more evidence of this, and have gotten some input from within MS about this information. It looks like that is indeed the case, though no MS rep was prepared to answer any email I had publicly. If they approach me now I'd be happy to print their response. What does this mean? If this is true, it means that Microsoft Israel is making a grave mistake – they are diluting the quality of the speakers for pure money factors. That means that as a TechEd attendee, who paid good money, you might be sitting down to watch nothing more than a bunch of infomercials, or sub-standard speakers – since speakers are no longer selected on quality or interest in their topic. They are turning the conference from a learning event into a commercially driven event. They are closing off the stage to the community of speakers who may not be associated with any organization willing to be a sponsor. They are losing speakers (such as myself) who will not want to be part of such an event. (Yes – even if my company ends up sponsoring the event, I will not take part in it. Sorry, Eli!) They are saying "F&$K you" to the community of MVPs who should be the people to be approached first about technical talks (my guess is many MVPs wouldn't want to talk at an event driven that way anyway). I do hope this ends up not being true, but it looks like it is. MS Israel had already done such a thing with the Developer Days event previously held in Israel – only sponsors were allowed to insert speakers into the event. If this turns out to be true I would urge the MS community in Israel to NOT TAKE PART IN THIS EVENT in any form (attendee, speaker, sponsor or otherwise). By taking part, you will be telling MS Israel it's OK to piss all over the community that they are quietly suffocating anyway. The MVP case MS Israel has managed to screw the MVP program as well. MS MVPs (I'm one) have had a tough time here in Israel the past couple of years. Ever since Yosi Taguri left the blue badge ranks, there has been no real community leader left. Whoever runs things right now has their eyes and minds set elsewhere, with the software MVP community far from mind and heart. No special MVP events (except a couple of small ones this year). No real MVP leadership happens here; the MVP MEA lead (Ruari) being on a remote line is not really what's needed. "MVP? What's that?" I'm sure many MS Israel employees would say. Exactly my point. Last word I've been disappointed by the MS machine for a while now, but their slowness to realize what real community means in the past couple of years really turns me off. Maybe it's time to move on.
Maybe I shouldn’t be chasing people at MS Israel begging for a room to host the Agile Israel user group. Maybe it’s time to say a big bye bye and start looking at a life a bit more disconnected.

    Read the article

  • DeveloperDeveloperDeveloper! Scotland 2010 - DDDSCOT

    - by Plip
    DDD in Scotland was held on the 8th May 2010 in Glasgow and I was there, not, as is usual at these kinds of things, as an organiser, but actually as a speaker and delegate. The weekend started for me back on Thursday with the arrival of Dave Sussman at my place in Lancashire; after a curry and watching the Election night TV coverage we retired to our respective beds (yes, I know, the illusion that we both sleep in the same bed wearing matching pyjamas is something I've now shattered) ready for the drive up to Glasgow the following afternoon. Before heading up to Glasgow we had to pick up Young Mr Hardy from Wigan, then we began the four hour drive back in time... Something that struck me on the journey up is just how beautiful Scotland is. The menacing landscapes bordered with fluffy sheep and whirly-ma-gigs are awe inspiring - well worth driving up if you ever get the chance. Anywho, we arrived in Glasgow, got settled into the hotel and went in search of speakers for pre-conference drinks and food. We discovered a gaggle (I believe that's the collective term) of speakers in the bar and when we reached critical mass headed off to the speakers' dinner location. During dinner, SOMEONE set my hair on FIRE. That's all I'm going to say on the matter. Whilst I was enjoying my evening there was something nagging at me; I realised that I should really write my session, as I was due to give it the following morning. So after a few more drinks I headed back to the hotel and got some well earned sleep (and washed the fire damage out of my hair). Next day, I headed off to the conference, which was a lovely stroll through Glasgow City Centre. None of us got mugged, murdered (or set on fire), arriving safely at the venue, which was a bonus. I was asked to read out the opening slides for Barry Carr's session, which I did diligently and with such professionalism that I shocked even myself. At which point I realised that in just over an hour I had to give my presentation, so I headed back to the speaker room to finish writing it. Wham, bam and it was all over. The session seemed to go well. I was speaking on Exception Driven Development, which isn't so much a technical solution but rather a mindset around how one should treat exceptions and their code. To be honest, I've not been so nervous giving a session for years - something about this topic worried me. I was concerned I was being too abstract in my thinking, or that what I was saying was so obvious that everyone would know it, but it seems to have been well received, which makes me a happy speaker. Craig Murphy has some brilliant pictures of DDD Scotland 2010. After my session was done I grabbed some lunch and headed back to the hotel and into town to do some shopping (thus my conspicuous omission from the above photo). Later on we headed out to the geek dinner, which again was a rum affair, followed by a few drinks and a little boogie woogie. All in all a well run, well attended conference, by the community for the community. I tip my hat to the whole team who put on DDD Scotland!

    Read the article

  • Impressions of my ASUS eee slate EP121 - Dual core 4GB, 64GB SSD

    - by tonyrogerson
    This thing is lovely. It has a very nice Bluetooth keyboard with nice feedback on the keypress; there is no mouse, but you can use the stylus or get yourself a Bluetooth mouse - me, I've opted for a Microsoft ARC mouse, which is a delight to use. The USB doors are a pain to open for the first time if, like me, you don't have any fingernails. It came as a surprise that the slate shows four processors - dual core with multi-threading; I didn't really look at the processor, I was more interested in the amount of memory and the SSD. You don't get the full 4GB even with the 64-bit version of Windows 7 installed (which I immediately upgraded to Ultimate through my MSDN subscription). The box is extremely responsive - extremely; it loads Winword in literally a second. I've got Office 2010 and OneNote 2010 on there now. One problem is that, on applying all (43) Windows updates since the upgrade, the machine is still sat on step 3 of 3 of the start-up "configuring updates" screen after about an hour. You can't turn this machine off without using a paper clip to reset it, and as I have just found, you need a paper clip :). Installing Windows 7 SP1 was effortless. One of the first things I did on it was to reduce the size of the font; by default it's set at 125%, and my eyesight is OK :) so I've set that back down. Amazon Kindle for the PC works really well - plenty of text on the screen when viewed portrait - and the case it comes with also allows the slate to stand up in various positions - portrait, horizontal - which seems stable enough. The wireless works well, and seems to have a better signal than my other two laptop machines, which is good news. The gadget passed the pose test at work :). I use offline files to keep a copy of all my work stuff locally. I'm not sure what it is - well, it's probably my server - but whenever I try to sync it runs for a couple of minutes then fails with "network name no longer contactable"; funnily enough it's fine from my big laptop, so I can only guess this may be a driver-type issue on the EP121 itself - very odd and very annoying. I do a lot of presenting and need to plug into a VGA projector because at most sites that's all that is offered. The EP121 has a mini-HDMI output, which is great except for this scenario: HDMI is digital, VGA is analogue, and you will struggle to find a cost-effective solution. I found HDFury and also a device HP do; however, a better solution appears to be getting a USB graphics adapter, for instance the one I've ordered, the ClimaxDigital USB 2.0 to DVI, VGA or HDMI adaptor, which gives everything I need - VGA and DVI output and great resolution as well. OK, so fingers crossed, because I'm presenting next Wednesday in Edinburgh and not taking my 300kg Lenovo W700 (I'm sure my back just sighed in relief) - it certainly works really well on my LED TV, and the install was simple - it just works!
    One of the several reasons for buying this piece of kit was to use it on my LED TV to remote into my main machine to check stuff whilst sat in my living room, and also to watch webcasts and lecture videos in comfort away from my office. Because of the wireless speed and limitation I'm opting for a USB network adapter from Belkin - that will also allow me to take advantage of my home gigabit network. There are only 2 USB ports on the slate, so I'm going to knock up a hub so connecting it in is straightforward and simple; I'm also going to purchase a second power supply so I don't have to faff about with that either. I now have the developer x64 edition of SQL Server 2008 R2 - yes, everything :) - with about 16GB left to play with on the machine now, but that will be fine. I'll put AdventureWorks on there so I can play and demo stuff, which is all I'm after from this; my development machine is significantly more powerful and meets my storage needs too. Travel test this weekend and next week - I'm in Dundee for my final exam for the masters degree.

    Read the article

  • Dissing Architects, or "What's wrong with the coffee?"

    - by Bob Rhubart
    In my conversations with people in architect roles, tales of animosity, disrespect, and outright hostility aren't uncommon. And it's clear that in more than a few organizations architects regularly face a tough uphill climb. For architects with the requisite combination of technical, organizational, and people skills, that rough treatment is grossly undeserved. But tales of unqualified people in positions up and down the IT food chain are also easy to come by. So what's the other side of the architect story? Are some architects tarnishing the role and making life miserable for their more qualified colleagues? The various quotes included below were culled from a variety of sources. The criticism is harsh, and the people behind these quotes clearly have issues with architects. Still, whether based on mere opinion or actual experience, the comments shed some light on behaviors that should raise red flags for anyone pursuing a career as an architect. If you're an architect, and you've ever noticed that your coffee tastes like window cleaner, or your car is repeatedly keyed, or no one ever holds the elevator for you, maybe you need to do a little soul searching... Those Who Can, Code; Those Who Can't, Architect | Joe Winchester [May 18, 2007] "At the moment there seems to be an extremely unhealthy obsession in software with the concept of architecture. A colleague of mine, a recent graduate, told me he wished to become a software architect. He was drawn to the glamour of being able to come up with grandiose ideas - sweeping generalized designs, creating presentations to audiences of acronym addicts, writing esoteric academic papers, speaking at conferences attended by headless engineers on company expense accounts hungrily seeking out this year's grail, and creating e-mails with huge cc lists from people whose signature footer is more interesting than the content. I tried to re-orient him into actually doing some coding, to join a team that has a good product and keen users both of whom are pushing requirements forward, to no avail. Somehow the lure of being an architecture astronaut was too strong and I lost him to the dark side." Don't Let Architecture Astronauts Scare You | Joel Spolsky [April 21, 2001] "It's very hard to get them to write code or design programs, because they won't stop thinking about Architecture. They're astronauts because they are above the oxygen level, I don't know how they're breathing. They tend to work for really big companies that can afford to have lots of unproductive people with really advanced degrees that don't contribute to the bottom line. Remember that the architecture people are solving problems that they think they can solve, not problems which are useful to solve." Non Coding Architects Suck | Richard Henderson [May 24, 2010] "If a guy with a badge saying 'system architect' looks blank on low-level issues then he is not an architect, he is a business-analyst who went on a course. He will probably wax lyrical on all things high-level and 'important.' He will produce lovely object hierarchies without a clue to implementation. He will have a moustache and play golf." Architects Play Golf | Sunir Shah [August 15, 2012] "Often arrogant architects are difficult to get a hold of during the implementation phase because they no longer feel the need to stick around. Especially around midnight when most of the poor sob [sic] developers are still banging away. After all, they've already solved the problem--the rest is just an implementation exercise." 
Engineer vs Architect (Part of a discussion on the IT Architect Network Group on LinkedIn) "[An] architect spends his time producing white papers full of acronyms he does not understand but that impress his boss, [while the] engineer keeps his head down and does the actual job." Architects Don't Code | [Author Unknown] "Faulty belief: System Architects don't need to code anymore. They know what they are talking about by virtue of the fact that they are System Architects."

    Read the article

  • C#: A "Dumbed-Down" C++?

    - by James Michael Hare
    I was spending a lovely day this last weekend watching my sons play outside in one of the better weekends we've had here in Saint Louis for quite some time, and whilst watching them and making sure no limbs were broken or eyes poked out with sticks and other various potential injuries, I was perusing (in the correct sense of the word) this month's MSDN magazine to get a sense of the latest VS2010 features in both the IDE and in languages. When I got to the back pages, I saw a wonderful article by David S. Platt entitled "In Praise of Dumbing Down" (msdn.microsoft.com/en-us/magazine/ee336129.aspx). The title captivated me and I read it and found myself agreeing with it completely, especially as it related to my first post on divorcing C++ as my favorite language. Unfortunately, as Mr. Platt mentions, the term dumbing-down has negative connotations, but it is really and truly a good thing. You are, in essence, taking something that is extremely complex and reducing it to something that is much easier to use and far less error prone. Adding safeties to power tools and anti-kick mechanisms to chainsaws are in some sense "dumbing them down" for the common user -- but that also makes them safer and more accessible for the common user. This was exactly my point with C++ and C#. I did not mean to imply that C++ was not a useful or good language, but that in a very high percentage of cases it is too complex and error prone for the job at hand. Choosing the correct programming language for a job is a lot like choosing any other tool for a task. For example: if I want to dig a French drain in my lawn, I can attempt to use a huge tractor-like backhoe and the job would be done far quicker than if I were to dig it by hand. I can't deny that the backhoe has the raw power and speed to perform. But you also cannot deny that my chances of injury, or of severing utility lines or other resources, climb at a rate inversely proportional to the amount of training I may have on that machinery. Is C++ a powerful tool? Oh yes, and it's great for those tasks where speed and performance are paramount. But for most of us, it's the wrong tool. And keep in mind, I say this even though I have 17 years of experience in using it and feel myself highly adept at utilizing its features both in the standard libraries, the STL, and in supplemental libraries such as BOOST, which, although they greatly help with adding powerful features quickly, do very little to curb the relative dangers of the language. So, you may say, the fault is in the developer; that if the developer had higher skills, or if we only hired C++ experts, this would not be an issue. Now, I will concede there is some truth to this. Obviously, the more highly skilled the C++ developers you hire, the better the chance they will produce highly performant and error-free code. However, what good is that to the average developer who cannot afford a full stable of C++ experts? That's my point with C#: it's like a kinder, gentler C++. It gives you nearly the same speed, and in many ways even more power than C++, and it gives you a much softer cushion for novices to fall against if they code less than optimally. A bug is a bug, of course, in any language, but C# does a good job of hiding and taking on the task of handling almost all of the resource issues that make C++ so tricky. For my money, C# is much more maintainable, more feature-rich, second only slightly in performance, faster to market, and -- last but not least -- safer and easier to use.
That's why, where I work, I much prefer to see the developers moving to C#.  The quantity of bugs is much lower, and we don't need to hire "experts" to achieve the same results since the language itself handles those resource pitfalls so prevalent in poorly written C++ code.  C++ will still have its place in the world, and I'm sure I'll still use it now and again where it is truly the correct tool for the job, but for nearly every other project C# is a wonderfully "dumbed-down" version of C++ -- in the very best sense -- and to me, that's the smart choice.

    Read the article

  • IE9 and the Mystery of the Broken Video Tag

    - by David Wesst
    I was very excited when Microsoft released the Internet Explorer 9 Release Candidate. As far as I was concerned, this was another nail in the coffin for IE6 and a step in the right direction for us .NET web developers, as our base camp was finally starting to support the latest and greatest future-web standards. Unfortunately, my celebration was short lived as I soon hit a snag while loading up an HTML5 site I was building in Visual Studio 2010. The Mystery After updating Internet Explorer, I ran my HTML5 site that had the oh-so-lovely HTML5 video tag showing a video. Even though this worked in IE9 Beta, it appeared that IE9 RC could not load the same file. I figured that it was the video codec. Maybe IE9 RC no longer supported the video codec I used to encode my video. Here's the code I used: <video width="854" height="480" id="myOtherVideo" autoplay="" controls=""> <source src="/DemoSite1/Media/big_buck_bunny.mp4"/> <div> <p>Your browser does not support HTML5 Video.</p> </div> </video> As you can see from the code, I had the "fail-safe" code inside the video tag. The idea there being that if the video tag, or the video files themselves, are not supported by the browser, my video should fail gracefully. What was even more strange was the fact that it worked in all the other HTML5 browsers that supported video. The Investigation Whoa! DJ, stop the music. How can any of that make sense? Would the IE team really take such huge strides forward only to forget to include a feature that was already in the beta? I don't think so. I did plenty of searching and asking around on the web, but could not seem to find anyone else having the same problem. Eventually I came across this post talking about declaring the MIME type in the .htaccess file. That got me thinking: does my web server support the video MIME type? I was using VS2010, so how do I know what kind of MIME types are supported by default? Still, my page hosted in Cassini (the web development server in VS2010) worked in the other browsers. Why wouldn't it work with IE9 RC? To answer that, it was time to open up the upgraded toolbox known as the Developer Tools in IE9 and use the new Network tab. The Conclusion If you take a closer look at the results displayed from the Network tab, you can see that IE9 RC has interpreted the video file as text/html rather than video/mp4. To make this work, I decided to use IIS to debug my HTML5 web application by setting the web project's properties. Then, I added the MIME types that I want to support (i.e. video/mp4, video/ogg, video/webm). Et voila! The Mystery of the Broken Video Tag is solved. After Thoughts After solving the mystery, I still had the question about why my site worked in Chrome, Safari, and Firefox 3.6. After asking around, the best answer that I received was from my colleague Tyler Doerksen. He said that IE9 likely depends on the server telling it what kind of file it is downloading, rather than trying to read the metadata about the data it is downloading before doing anything. I have no facts to back this up, but it makes sense to me. In a browser war where milliseconds can make your browser fall back a few places in the race for supremacy, maybe the IE team opted to depend on the server knowing what kind of content it is serving up. Makes sense to me. In any case, that is just an educated guess. If you have any comments, feel free to post them below. This post also appears at http://david.wes.st
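    The underlying lesson is that the browser trusts the Content-Type the server declares. The author's fix was in IIS; purely as an illustration of the same principle, here is a hedged sketch using the JDK's built-in com.sun.net.httpserver (the port and file name are made up):

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class VideoServer {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/video", exchange -> {
                byte[] body = Files.readAllBytes(Path.of("big_buck_bunny.mp4"));
                // The crucial line: declare the MIME type explicitly, otherwise the
                // browser may treat the response as text/html and the <video> tag fails.
                exchange.getResponseHeaders().set("Content-Type", "video/mp4");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();
        }
    }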

    Read the article

  • SharePoint: Numeric/Integer Site Column (Field) Types

    - by CharlesLee
    What field type should you use when creating number-based site columns as part of a SharePoint feature? Windows SharePoint Services 3.0 provides you with an extensible and flexible method of developing and deploying Site Columns and Content Types (both of which are required for most SharePoint projects requiring list or library based data storage) via the feature framework (more on this in my next full article). However there is an interesting behaviour when working with a column or field which is required to hold a number, which I thought I would blog about today. When creating Site Columns in the browser you get a nice rich UI in order to choose the properties of this field. However when you are recreating this as a feature defined in CAML (Collaborative Application Mark-up Language), which is a type of XML (more on this in my article), then you do not get such a rich experience. You would need to add something like this to the element manifest defined in your feature: <Field SourceID="http://schemas.microsoft.com/sharepoint/3.0" ID="{C272E927-3748-48db-8FC0-6C7B72A6D220}" Group="My Site Columns" Name="MyNumber" DisplayName="My Number" Type="Numeric" Commas="FALSE" Decimals="0" Required="FALSE" ReadOnly="FALSE" Sealed="FALSE" Hidden="FALSE" /> OK, it's not as nice as the browser UI but I can deal with this. Hang on. Commas="FALSE", and yet for my number 1234 I get 1,234. That is not what I wanted or expected. What gives? The answer lies in the difference between a type of "Numeric", which is an implementation of the SPFieldNumber class, and "Integer", which does not correspond to a given SPField class but rather represents a positive or negative integer. The numeric type does not respect the settings of Commas or NegativeFormat (which defines how to display negative numbers). So we can set the Type to Integer and we are good to go. Yes? Sadly no! You will notice at this point that if you deploy your site column into SharePoint something has gone wrong. Your site column is not listed in the Site Column Gallery. The deployment must have failed then? But no, a quick look at the site columns via the API reveals that the column is there. What new evil is this? Unfortunately the base type for integer fields has this lovely attribute set on it: UserCreatable = FALSE. So WSS 3.0 accordingly hides your field in the gallery, as you cannot create fields of this type. However! You can use them in content types just like any other field (except not in the browser UI), and if you add them to the content type as part of your feature then they will show up in the UI as a field on that content type. Most of the time you are not going to be too concerned that your site columns are not listed in the gallery, as you will know that they are there and that they are still usable. So not as bad as you thought after all. Just a little quirky. But that is SharePoint for you.

    Read the article

  • PASS Summit 2013 Review

    - by Ajarn Mark Caldwell
    As a long-standing member of PASS who lives in the greater Seattle area and has attended about nine of these Summits, let me start out by saying how GREAT it was to go to Charlotte, North Carolina this year.  Many of the new folks that I met at the Summit this year, upon hearing that I was from Seattle, commented that I must have been disappointed to have to travel to the Summit this year after 5 years in a row in Seattle.  Well, nothing could be further from the truth.  I cheered loudly when I first heard that the 2013 Summit would be outside Seattle.  I have many fond memories of trips to Orlando, Florida and Grapevine, Texas for past Summits (missed out on Denver, unfortunately).  And there is a funny dynamic that takes place when the conference is local.  If you do as I have done the last several years and saved my company money by not getting a hotel, but rather just commuting from home, then both family and coworkers tend to act like you’re just on a normal schedule.  For example, I have a young family, and my wife and kids really wanted to still see me come home “after work”, but there are a whole lot of after-hours activities, social events, and great food to be enjoyed at the Summit each year.  Even more so if you really capitalize on the opportunities to meet face-to-face with people you either met at previous summits or have spoken to or heard of, from Twitter, blogs, and forums.  Then there is also the lovely commuting in Seattle traffic from neighboring cities rather than the convenience of just walking across the street from your hotel.  So I’m just saying, there are really nice aspects of having the conference 2500 miles away. Beyond that, the training was fantastic as usual.  The SQL Server community has many outstanding presenters and experts with deep knowledge of the tools who are extremely willing to share all of that with anyone who wants to listen.  The opening video with PASS President Bill Graziano in a NASCAR race turned dream sequence was very well done, and the keynotes, as usual, were great.  This year I was particularly impressed with how well attended were the Professional Development sessions.  Not too many years ago, those were very sparsely attended, but this year, the two that I attended were standing-room only, and these were not tiny rooms.  I would say this is a testament to both the maturity of the attendees realizing how important these topics are to career success, as well as to the ever-increasing skills of the presenters and the program committee for selecting speakers and topics that resonated with people.  If, as is usually the case, you were not able to get to every session that you wanted to because there were just too darn many good ones, I encourage you to get the recordings. Overall, it was a great time as these events always are.  It was wonderful to see old friends and make new ones, and the people of Charlotte did an awesome job hosting the event and letting their hospitality shine (extra kudos to SQLSentry for all they did with the shuttle, maps, and other event sponsorships).  We’re back in Seattle next year (it is a release year, after all) but I would say that with the success of this year’s event, I strongly encourage the Board and PASS HQ to firmly reestablish the location rotation schedule.  I’ll even go so far as to suggest standardizing on an alternating Seattle – Charlotte schedule, or something like that. If you missed the Summit this year, start saving now, and register early, so you can join us!

    Read the article

  • Non use of persisted data

    - by Dave Ballantyne
    Working at a client site, that in itself is good to say, I ran into a set of circumstances that made me ponder, and appreciate, the optimizer engine a bit more. Working on optimizing a stored procedure, I found a piece of code similar to: select BillToAddressID, Rowguid, dbo.udfCleanGuid(rowguid) from sales.salesorderheader where BillToAddressID = 985 A lovely scalar UDF was being used; in actuality it was used as part of the WHERE clause, but it is simplified here. Normally I would use an inline table valued function here, but in this case it wasn't a good option. So this seemed like a pretty good case to use a persisted column to improve performance. The supporting index was already defined as: create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid) and the function code is: Create Function udfCleanGuid(@GUID uniqueidentifier) returns varchar(255) with schemabinding as begin Declare @RetStr varchar(255) Select @RetStr=CAST(@Guid as varchar(255)) Select @RetStr=REPLACE(@Retstr,'-','') return @RetStr end Executing the Select statement produced a plan with nothing surprising in it: a seek to find the data and a compute scalar to execute the UDF. Let's get optimizing and remove the UDF with a persisted column: Alter table sales.salesorderheader add CleanedGuid as dbo.udfCleanGuid(rowguid) PERSISTED A subtle change to the SELECT statement… select BillToAddressID, CleanedGuid from sales.salesorderheader where BillToAddressID = 985 and our new optimized plan looks… not a lot different from before! We are using persisted data on our table, so where is the lookup to fetch it? It didn't happen; it was recalculated. Looking at the properties of the relevant Compute Scalar would confirm this, but a more graphic example would be shown in the profiler SP:StatementCompleted event. Why did the recalculation happen? Remember the index definition: it has included the original guid to avoid the lookup. The optimizer knows this column will be passed into the UDF, ran it through its logic and decided that recalculating is cheaper than the lookup. That may or may not be the case in actuality; the optimizer has no idea of the real cost of a scalar UDF. IMO the default cost of a scalar UDF should be seen as a lot higher than it is, since the real cost is invariably higher. Knowing this, how do we avoid the function call? Dropping the guid from the index is not an option; there may be other code reliant on it. We are left with only one real option: add the persisted column into the index. drop index Sales.SalesOrderHeader.idxBill go create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid,cleanedguid) Now if we repeat the statement select BillToAddressID, CleanedGuid from sales.salesorderheader where BillToAddressID = 985 we still have a compute scalar operator, but this time it wasn't used to recalculate the persisted data. This can be confirmed with profiler again. The takeaway here is: just because you have persisted data, don't automatically assume that it is being used.

    Read the article

  • News From EAP Testing

    - by Fatherjack
    There is a phrase that goes something like "Watch the pennies and the pounds/dollars will take care of themselves", meaning that if you pay attention to the small things then the larger things are going to fare well too. I am lucky enough to be a Friend of Red Gate and once in a while I get told about new features in their tools and have a test copy of the software to trial. I got one of those emails a week or so ago and I have been exploring the SQL Prompt 6 EAP since then. One really useful feature of long standing in SQL Prompt is the idea of a code snippet that is automatically pasted into the SSMS editor when you type a few key letters. For example I can type "ssf" and then press the tab key and the text is expanded to SELECT * FROM. There are lots of these combinations and it is possible to create your own really easily. To create your own you use the Snippet Manager interface to define the shortcut letters and the code that you want to have put in their place. Let's look at an example. Say I am writing a blog about something and want to have the demo code create a temporary table. It might look like this: The first time you run the code everything is fine, a lovely set of dates fills the results grid, but run it a second time and this happens. Yep, we didn't destroy the temporary table so the CREATE statement fails when it finds the table already exists. No matter, I have a snippet created that takes care of this. Nothing too technical here, but you will see that in the Code section there is $CURSOR$; this isn't a TSQL keyword but a marker for SQL Prompt to place the cursor in that position when the Code is pasted into the SSMS Editor. I just place my cursor above the CREATE statement and type "ifobj" – the shortcut for my code to DROP the temporary table – which has been defined in the Snippet Manager as below. This means I am right away ready to type the name of the offending table. Pretty neat and it's been very useful in saving me lots of time over many years. The news for SQL Prompt 6 is that Red Gate have added a new Snippet Command of $PASTE$. Let's alter our snippet to the following and try it out. Once again, we will type "ifobj" in the SSMS Editor, but first of all highlight the name of the table #TestTable and copy it to your clipboard. Now type "ifobj" and press Tab… Wherever the string $PASTE$ is placed in the snippet, the contents of your clipboard are merged into the pasted TSQL. This means I don't need to type the table name into the code snippet, it's already there and I am seeing a fully functioning piece of TSQL ready to run. This means it is even easier to write TSQL quickly and consistently. Attention to detail like this from Red Gate means that their developer tools stay on track to keep winning awards year after year and help take the hard work out of writing neat, accurate TSQL. If you want to try out SQL Prompt all the details are at http://www.red-gate.com/products/sql-development/sql-prompt/.

    Read the article

  • Video works with 'Try me' but not after install. What is the difference? (Ubuntu 12.04 LTS)

    - by HarveyP
    My hard drive got corrupted so I did a reinstall. Tested YouTube in FF during 'try me' and it worked - jerky, but it worked. Install without all the updates (576 outstanding now) in order to get ff installed as per the demo - to no avail. In 'try me' mode ff NEVER crashed! After install ff crashed whilst I was typing in 'youtube' in the address field. When I finally got to YouTube - no video. What is the difference between ff in try me and ff after install? Off to try some selected updates now to see if I can see it for myself. In previous installations I had several profiles and aliased ff with the -safe-mode switch to simplify startup of the most stable ff. Also found that ff startup in graphic mode worked better (but still without video) with all of the extensions disabled and all of the plugins set to "ask" and always denied ... I have a SiS graphic card in a SiS motherboard for XP and an ancient Hyundai ImageQuest QV770 monitor. I have Ubuntu 12.04.01 LTS 1 day after install with only the immediate upgrades requested to the language pack (English UK). Using FR Alternative keyboard. Connected with domestic wifi network from Orange (FT). I really want to use Skype, but won't bother installing it (again) without video as I can do my sms on FB - whilst ff is not crashed ... Update ... Is something overflowing? I have just had to reboot in order to get ff to restart in any way shape or form - restart on crash form generates new crash form, etc. It was however a good half hour before it crashed, so some improvement over conditions before disk corruption. I have now installed all of the critical updates (332 recommended updates still outstanding), which included some relating to ff. Still no video. Still crashing - especially when on the Grepolis website. Since the re-install I have had a lovely 1024x768 screen, but after the last ff crash and reboot I got a message about 'low graphics mode' and 'setting things myself'. I was not sufficiently tuned in at the time to take proper note - I have no doubt I shall see it again and shall report accordingly. I still have only laptop options for my screen and do not know how to rectify this. Spent a few days with Ubuntu on a different, newer machine which has now suffered a graphics breakdown. Returned to this old one again, but with a new flat screen monitor. Found SiS drivers for my graphics BUT it is intended for Red Hat 7.2. I chose this over the version for 7.0 because I thought what the hell, I might not be able to do anything with either of them but this is the later one ... The file will not open with software manager - found a similar problem on Overclock but it has not helped me to install this driver. The file name is sis_drv.o-410 and it is currently idling away in my Downloads folder ... I have tried the solution offered on another SiS problem, but this shows that my xserver-xorg-video-sis driver is up to date. I am now at a loss as to how to proceed if I can't install the latest SiS driver from SiS ... Does nobody know how FF changes from "try-me" to "installed"? Any time I MUST have video I reboot from the disk again, but this is tedious! Also one of the things I mock most about MS is the constant rebooting ... UPDATE 10/6/2014: I have installed chromium-browser - worse, crashes even more often than ff. I have installed epiphany - better; video works but not the associated soundtrack. Firefox is version 14.01 in 'try me' and version 29.0 from my install. Would it be useful to try to downgrade Firefox in order to get video?

    Read the article

  • git-p4 submit fails with "Not a valid object name HEAD~261"

    - by Harlan
    I've got a git repository that I'd like to mirror to a Perforce repository. I've downloaded the git-p4 script (the more recent version that doesn't give deprecation warnings), and have been working with that. I've figured out how to pull changes from Perforce, but I'm getting an error when I try to sync changes from the git repo back. Here's what I've done so far: git clone [email protected]:asdf/qwerty.git git-p4 sync //depot/path/to/querty git merge remotes/p4/master (there was a single README file...) So, I've copied the origin to a clean, new directory, got a lovely looking merged tree of files, and git status shows I'm up-to-date. But: > git-p4 submit fatal: Not a valid object name HEAD~261 Command failed: git cat-file commit HEAD~261 This thread on the git mailing list seems to be relevant, but I can't figure out what they're doing with all the A, B, and Cs. Could someone please clarify what "Not a valid object name" means, and what I can do to fix the problem? All I want to do is to periodically snapshot the origin/master into Perforce; a full history is not required. Thanks.

    Read the article

  • Scalaz: request for use case

    - by oxbow_lakes
    This question isn't meant as flame-bait! As it might be apparent, I've been looking at Scalaz recently. I'm trying to understand why I need some of the functionality that the library provides. Here's something: import scalaz._ import Scalaz._ type NEL[A] = NonEmptyList[A] val NEL = NonEmptyList I put some println statements in my functions to see what was going on (aside: what would I have done if I was trying to avoid side effects like that?). My functions are: val f: NEL[Int] => String = (l: NEL[Int]) => {println("f: " + l); l.toString |+| "X" } val g: NEL[String] => BigInt = (l: NEL[String]) => {println("g: " + l); BigInt(l.map(_.length).sum) } Then I combine them via a cokleisli and pass in a NEL[Int] val k = cokleisli(f) =>= cokleisli(g) println("RES: " + k( NEL(1, 2, 3) )) What does this print? f: NonEmptyList(1, 2, 3) f: NonEmptyList(2, 3) f: NonEmptyList(3) g: NonEmptyList(NonEmptyList(1, 2, 3)X, NonEmptyList(2, 3)X, NonEmptyList(3)X) RES: 57 The RES value is the character count of the (String) elements in the final NEL. Two things occur to me: How could I have known that my NEL was going to be reduced in this manner from the method signatures involved? (I wasn't expecting the result at all) What is the point of this? Can a reasonably simple and easy-to-follow use case be distilled for me? This question is a thinly-veiled plea for some lovely person like retronym to explain how this powerful library actually works.

    Read the article

  • No long-running conversations - IllegalArgumentException: Stack must not be null

    - by Markos Fragkakis
    Hi all, I have a very simple application with just 2 pages on WebLogic 10.3.2 (11g), Seam 2.2.0.GA. I have a command button in each, which makes a redirect-after-post to the other. This works well, as I see the URL of the page I am viewing in the address bar. BUT, even though I have no long-running conversations defined, after a random number of clicks, and - I think - after a random number of seconds (~10s - 60s), I get the lovely exception at the end of this post. Now, if I have understood how temporary conversations work when redirecting, this happens: When I first see my application, the URL is http://localhost:7001/myapp When I click the button in pageA.xhtml, I end up at "pageB.xhtml?cid=26". This is normal because Seam extends the temporary conversation of the first request to last until the RenderResponse phase of the redirect. So, it uses the cid (conversation id) of the extended temporary conversation to find any propagated parameters. When I click the button in pageB.xhtml, I end up at pageA.xhtml?cid=26 The same cid was given to the new extended temporary conversation. This is normal because the conversation ended at the end of the previous redirect-after-post, and now the number 26 is free to use as a cid. Is this all correct? If yes, why does this happen: If I re-type the application's home address (showing pageA) and re-click, I end up at pageB.xhtml?cid=29, which is a different number than 26. But 26 had ended after the previous RenderResponse phase, before I re-typed the URL. Why is it not used instead of 29? So, to sum up, 2 questions: Why do I get the exception, even though I have not started any long-running conversations? What exactly happens with the cid? On what basis does it change? Cheers,
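    Not an answer to the exception itself, but one way to stop relying on back-to-back temporary conversations (and their recycled cids) is to demarcate a long-running conversation explicitly. A hedged Seam 2 sketch; the component name, method names and navigation outcomes below are invented:

    import org.jboss.seam.ScopeType;
    import org.jboss.seam.annotations.Begin;
    import org.jboss.seam.annotations.End;
    import org.jboss.seam.annotations.Name;
    import org.jboss.seam.annotations.Scope;

    @Name("pageFlow")
    @Scope(ScopeType.CONVERSATION)
    public class PageFlow {

        @Begin(join = true)   // promote the temporary conversation to a long-running one
        public String goToPageB() {
            return "/pageB.xhtml";   // redirect-after-post still configured in pages.xml
        }

        @End                  // end the long-running conversation on the way back
        public String backToPageA() {
            return "/pageA.xhtml";
        }
    }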

    Read the article

  • Scalaz: request for use case for Cokleisli composition

    - by oxbow_lakes
    This question isn't meant as flame-bait! As it might be apparent, I've been looking at Scalaz recently. I'm trying to understand why I need some of the functionality that the library provides. Here's something: import scalaz._ import Scalaz._ type NEL[A] = NonEmptyList[A] val NEL = NonEmptyList I put some println statements in my functions to see what was going on (aside: what would I have done if I was trying to avoid side effects like that?). My functions are: val f: NEL[Int] => String = (l: NEL[Int]) => {println("f: " + l); l.toString |+| "X" } val g: NEL[String] => BigInt = (l: NEL[String]) => {println("g: " + l); BigInt(l.map(_.length).sum) } Then I combine them via a cokleisli and pass in a NEL[Int] val k = cokleisli(f) =>= cokleisli(g) println("RES: " + k( NEL(1, 2, 3) )) What does this print? f: NonEmptyList(1, 2, 3) f: NonEmptyList(2, 3) f: NonEmptyList(3) g: NonEmptyList(NonEmptyList(1, 2, 3)X, NonEmptyList(2, 3)X, NonEmptyList(3)X) RES: 57 The RES value is the character count of the (String) elements in the final NEL. Two things occur to me: How could I have known that my NEL was going to be reduced in this manner from the method signatures involved? (I wasn't expecting the result at all) What is the point of this? Can a reasonably simple and easy-to-follow use case be distilled for me? This question is a thinly-veiled plea for some lovely person like retronym to explain how this powerful library actually works.

    Read the article

  • UIView Controller mysteriously getting deallocated

    - by Dan Ray
    I have a UINavigation scheme with a "welcome" page, a middle page, and a detail page. In the middle page, there's a segmented controller that can swap the main body of that page between a table, a calendar, and a MKMapView, each implemented with their own view controller classes. Today I implemented the MapView and its annotations and all of that. It's nice. And a detail disclosure on each callout takes you to the detail page just the same way as if you'd gotten there via the table. Lovely. I also have a right-button-bar button that pushes in a "search" view. From there you can search the data I'm navigating. When it's done filtering the data (an array of objects I'm keeping in a data singleton), it makes the table reload its data, and calls my annotation-clearer-and-builder methods on the map view, and then pops itself out, so the "middle" page (including whatever view was in the guts of it) is back on the screen. Problem is, if I go back and forth between the map and the search a couple times, any mention of the table view causes us to crash with: *** -[CALayer retain]: message sent to deallocated instance 0x710b810. (I obviously have NSZombies turned on.) I put an NSLog in the dealloc method of my table view controller. That thing's never getting called. I don't know if we're ditching it behind the scenes for memory purposes, or if I'm leaking it and can't get my hands back on it, or what. I'm sort of at a loss about where to look. Any hints?

    Read the article

  • CAScrollLayer doesn't scroll!

    - by Cliff
    Maybe it's because it's late. Whatever the reason, I can't figure out why I'm having trouble with a simple CAScrollLayer example I'm trying. I add a 50 pixel Eclipse icon to a view based project and in my initialize method (called from initWithNibName:bundle:) I have this: -(void) initialize { CAScrollLayer *scrollLayer = [CAScrollLayer layer]; scrollLayer.backgroundColor = [[UIColor blackColor] CGColor]; CGRect bounds = self.view.bounds; scrollLayer.bounds = CGRectMake(0, 0, bounds.size.width, bounds.size.height); scrollLayer.contentsRect = CGRectMake(0, 0, bounds.size.width + 800, bounds.size.height + 800); scrollLayer.borderWidth = 2.5; scrollLayer.borderColor = [[UIColor redColor] CGColor]; scrollLayer.position = CGPointMake(self.view.center.x, self.view.center.y - 20); scrollLayer.scrollMode = kCAScrollBoth; [self.view.layer addSublayer:scrollLayer]; UIImage *image = [UIImage imageNamed:@"eclipse32.gif"]; for(int i=0; i<6; i++) { layer = [CALayer layer]; layer.backgroundColor = [[UIColor blackColor] CGColor]; layer.bounds = CGRectMake(0, 0, 100, 100); layer.contents = (id)[image CGImage]; layer.position = CGPointMake(layer.bounds.size.width * i, self.view.center.y); [scrollLayer addSublayer:layer]; } // [button removeFromSuperview]; // [self.view addSubview:button]; // self.view.userInteractionEnabled = YES; [image release]; } The scroll layer shows, the icon is repeated on the layer, and I have a border around the edge of the screen. Everything is lovely except I can't scroll the icons. I've tried with/without setting scroll mode. I've tried with a single stretched icon that falls off screen. I've tried everything. What am I doing wrong?

    Read the article

  • Hibernate triggering constraint violations using orphanRemoval

    - by ptomli
    I'm having trouble with a JPA/Hibernate (3.5.3) setup, where I have an entity, an "Account" class, which has a list of child entities, "Contact" instances. I'm trying to be able to add/remove instances of Contact into a List<Contact> property of Account. Adding a new instance into the set and calling saveOrUpdate(account) persists everything lovely. If I then choose to remove the contact from the list and again call saveOrUpdate, the SQL Hibernate seems to produce involves setting the account_id column to null, which violates a database constraint. What am I doing wrong? The code below is clearly a simplified abstract but I think it covers the problem, as I'm seeing the same results in different code, which really is about this simple. SQL: CREATE TABLE account ( account_id INT ); CREATE TABLE contact ( contact_id INT, account_id INT REFERENCES account (account_id) ); Java: @Entity class Account { @Id @Column public Long id; @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true) @JoinColumn(name = "account_id") public List<Contact> contacts; } @Entity class Contact { @Id @Column public Long id; @ManyToOne(optional = false) @JoinColumn(name = "account_id", nullable = false) public Account account; } Account account = new Account(); Contact contact = new Contact(); account.contacts.add(contact); saveOrUpdate(account); // some time later, like another servlet request.... account.contacts.remove(contact); saveOrUpdate(account); Result: UPDATE contact SET account_id = null WHERE contact_id = ? Edit #1: It might be that this is actually a bug http://opensource.atlassian.com/projects/hibernate/browse/HHH-5091
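    For comparison, the arrangement usually recommended for this parent/child pattern is a bidirectional mapping in which Contact owns the foreign key and the collection side uses mappedBy, so that removing a Contact from the list results in a DELETE of the child row rather than an UPDATE that nulls account_id. A minimal sketch (field access and the helper method are illustrative); whether it also sidesteps the specific Hibernate 3.5.x issue linked above is a separate question:

    import java.util.ArrayList;
    import java.util.List;
    import javax.persistence.*;

    @Entity
    class Account {
        @Id @GeneratedValue
        Long id;

        // mappedBy: Contact.account owns the account_id column; orphanRemoval
        // turns "removed from the collection" into a DELETE of the orphaned row.
        @OneToMany(mappedBy = "account", cascade = CascadeType.ALL, orphanRemoval = true)
        List<Contact> contacts = new ArrayList<>();

        void addContact(Contact c) {
            contacts.add(c);
            c.account = this;   // keep both sides of the association in sync
        }
    }

    @Entity
    class Contact {
        @Id @GeneratedValue
        Long id;

        @ManyToOne(optional = false)
        @JoinColumn(name = "account_id", nullable = false)
        Account account;
    }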

    Read the article

  • Display an input type=file over another input type=file

    - by Kevin Sedgley
    WARNING: lengthy description coming up! I have written an uploader based upon the APC progress uploader for PHP. This works fine and dandy, but the script as a whole (APC etc.) is intended to be used only by those with JavaScript. To achieve this, I have searched for any input type=file and replaced these with an absolutely positioned form that appears over the original area where the old file input was. The reasons for this are that the new uploader can submit to a hidden in-page IFrame, has to be in a separate <form> in order to submit to the APC receiver to display the progress upload bar, and can be used within any form with an input type=file throughout the site. I have used jQuery to do this, with the following code. Original HTML form code: <div><input type="file" name="media" id="media" /></div> Find the position of the div block: // get the parent div, and properties thereof parentDiv = $(this).closest('div'); w = $(parentDiv).width(); h = $(parentDiv).height(); loc = $(parentDiv).offset(); Position the new block over the old block: $('#_sender').appendTo('body').css({left:loc.left,top:loc.top,position:'absolute',zIndex:400,height:h,width:w}).show(); This works fine, and shows over the old block OK. The problem: when other elements in the DOM before or above it change (in this case a "tree view" selector is pushing the old block down), the new upload form stays put and ends up over other elements. Is there a jQuery (or JS) method for re-running the positioning upon DOM change? Some kind of .onchange for the page?! Or an .onmove for the original block? Thanks in advance, you lovely people. (Before/after screenshots accompanied the original question.)

    Read the article

  • MySQL can't access root account or reset with mysqladmin

    - by glumptious
    So if I type mysql -u root I'm supposedly logged in, however upon trying to create or access a database I get this lovely error: ERROR 1044 (42000): Access denied for user ''@'localhost' to database 'test1'. I haven't the foggiest idea why, after logging in as root, it's trying to access DBs as ''@'localhost', and it's driving me a bit crazy right now. Possibly related, when I try to set the root password I get the error mysqladmin: Can't turn off logging; error: 'Access denied; you need (at least one of) the SUPER privilege(s) for this operation'. I've tried removing mysql-server via running apt-get purge mysql-server and then reinstalling, with no luck. This is running Ubuntu Server 12.10 64-bit and MySQL is indeed running. --Edit-- I wonder if perhaps there is no root user. So I try to start MySQL with --skip-grant-tables and then create the root user, but then I'm given this: ERROR 1290 (HY000): The MySQL server is running with the --skip-grant-tables option so it cannot execute this statement. Fun fun fun fun fun.

    Read the article

  • Get percentiles of data-set with group by month

    - by Cylindric
    Hello, I have a SQL table with a whole load of records that look like this:
        | Date       | Score |
        +------------+-------+
        | 01/01/2010 |     4 |
        | 02/01/2010 |     6 |
        | 03/01/2010 |    10 |
        ...
        | 16/03/2010 |     2 |
    I'm plotting this on a chart, so I get a nice line across the graph indicating score-over-time. Lovely. Now, what I need to do is include the average score on the chart, so we can see how that changes over time, so I can simply add this to the mix:
        SELECT YEAR(SCOREDATE) 'Year', MONTH(SCOREDATE) 'Month', MIN(SCORE) MinScore, AVG(SCORE) AverageScore, MAX(SCORE) MaxScore FROM SCORES GROUP BY YEAR(SCOREDATE), MONTH(SCOREDATE) ORDER BY YEAR(SCOREDATE), MONTH(SCOREDATE)
    That's no problem so far. The problem is, how can I easily calculate the percentiles at each time-period? I'm not sure that's the correct phrase. What I need in total is:
        1. A line on the chart for the score (easy)
        2. A line on the chart for the average (easy)
        3. A line on the chart showing the band that 95% of the scores occupy (stumped)
    It's the third one that I don't get. I need to calculate the 5% percentile figures, which I can do singly:
        SELECT MAX(SubQ.SCORE) FROM (SELECT TOP 45 PERCENT SCORE FROM SCORES WHERE YEAR(SCOREDATE) = 2010 AND MONTH(SCOREDATE) = 1 ORDER BY SCORE ASC) AS SubQ
        SELECT MIN(SubQ.SCORE) FROM (SELECT TOP 45 PERCENT SCORE FROM SCORES WHERE YEAR(SCOREDATE) = 2010 AND MONTH(SCOREDATE) = 1 ORDER BY SCORE DESC) AS SubQ
    But I can't work out how to get a table of all the months:
        | Date       | Average | 45% | 55% |
        +------------+---------+-----+-----+
        | 01/01/2010 |      13 |  11 |  15 |
        | 02/01/2010 |      10 |   8 |  12 |
        | 03/01/2010 |       5 |   4 |  10 |
        ...
        | 16/03/2010 |       7 |   7 |   9 |
    At the moment I'm going to have to load this lot up into my app, and calculate the figures myself. Or run a larger number of individual queries and collate the results.
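    Not the T-SQL the question asks for, but the same group-by-month-then-take-percentiles logic is easy to see in a short Java sketch (nearest-rank percentile definition; the sample data is invented):

    import java.time.LocalDate;
    import java.time.YearMonth;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class PercentileBands {
        // Nearest-rank percentile of an already-sorted list (one of several common definitions).
        static int percentile(List<Integer> sorted, double p) {
            int rank = (int) Math.ceil(p / 100.0 * sorted.size());
            return sorted.get(Math.max(rank - 1, 0));
        }

        public static void main(String[] args) {
            // Stand-in for the SCORES table: date -> score.
            Map<LocalDate, Integer> scores = Map.of(
                LocalDate.of(2010, 1, 1), 4,
                LocalDate.of(2010, 1, 2), 6,
                LocalDate.of(2010, 1, 3), 10,
                LocalDate.of(2010, 3, 16), 2);

            // Group the scores by month, then report the 45%/55% band for each month.
            Map<YearMonth, List<Integer>> byMonth = scores.entrySet().stream()
                .collect(Collectors.groupingBy(
                    e -> YearMonth.from(e.getKey()),
                    Collectors.mapping(Map.Entry::getValue, Collectors.toList())));

            byMonth.forEach((month, values) -> {
                List<Integer> sorted = new ArrayList<>(values);
                Collections.sort(sorted);
                System.out.printf("%s  45%%=%d  55%%=%d%n",
                    month, percentile(sorted, 45), percentile(sorted, 55));
            });
        }
    }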

    Read the article

  • Can I print an HTMLLoader (pdf) in Adobe Air?

    - by Stephano
    I'm using AlivePDF to create a PDF file, then save it to the desktop. I can then use an HTMLLoader to display my lovely PDF file. Now, the print button in Adobe Reader works fine. However, there will be young children using the app, so I'd like to have a big "Print" button right above it. I figured I could just start up a print job and feed it my HTMLLoader. Am I doing something wrong here, cause I can't seem to get any output? note: variable "stuff" below is my HTMLLoader. I also have access to the PDF file if that comes in handy. private function print():void { var myPrintJob:PrintJob=new PrintJob(); var result:Boolean=myPrintJob.start(); if (result && stuff != null) { var rect:Rectangle=new Rectangle(0, 0, 2550, 3300); var opt:PrintJobOptions=new PrintJobOptions(true); myPrintJob.addPage(stuff, rect, opt); myPrintJob.send(); } else { //User does not have printer or user canceled print action } }

    Read the article

