Search Results

Search found 9983 results on 400 pages for 'fuzzy c means'.

Page 157/400 | < Previous Page | 153 154 155 156 157 158 159 160 161 162 163 164  | Next Page >

  • Continuous Physics Engine's Collision Detection Techniques

    - by Griffin
    I'm working on a purely continuous physics engine, and I need to choose algorithms for broad and narrow phase collision detection. "Purely continuous" means I never do intersection tests, but instead want to find ways to catch every collision before it happens, and put each into a "planned collisions" stack that is ordered by TOI (time of impact). Broad Phase The only continuous broad-phase method I can think of is encasing each body in a circle and testing if each circle will ever overlap another. This seems horribly inefficient, however, and lacks any culling. I have no idea what continuous analogs might exist for today's discrete collision culling methods such as quad-trees either. How might I go about culling inappropriate and pointless broad-phase tests, the way a discrete engine does? Narrow Phase I've managed to adapt the narrow-phase SAT to a continuous check rather than a discrete one, but I'm sure there are other, better algorithms out there in papers or sites you guys might have come across. What fast or accurate algorithms do you suggest I use, and what are the advantages/disadvantages of each? Final Note: I say techniques and not algorithms because I have not yet decided on how I will store different polygons, which might be concave, convex, round, or even have holes. I plan to make a decision on this based on what the algorithm requires (for instance, if I choose an algorithm that breaks down a polygon into triangles or convex shapes, I will simply store the polygon data in this form).
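
    A minimal sketch of the swept-circle broad-phase test described above (not from the original question; it assumes constant velocities over the step and uses System.Numerics for the vector type). It solves |dp + dv*t| = r1 + r2 for the earliest t in [0, 1]:

        using System;
        using System.Numerics;

        static class SweptCircle
        {
            // Earliest time of impact of two circles moving at constant velocity
            // within this time step, or null if they never come within r1 + r2.
            // Solves the quadratic |dp + dv*t|^2 = (r1 + r2)^2.
            public static float? TimeOfImpact(Vector2 p1, Vector2 v1, float r1,
                                              Vector2 p2, Vector2 v2, float r2)
            {
                Vector2 dp = p2 - p1;                  // relative position
                Vector2 dv = v2 - v1;                  // relative velocity
                float r = r1 + r2;
                float a = Vector2.Dot(dv, dv);
                float b = 2 * Vector2.Dot(dp, dv);
                float c = Vector2.Dot(dp, dp) - r * r;
                if (c <= 0) return 0f;                 // already touching
                if (a == 0) return null;               // no relative motion
                float disc = b * b - 4 * a * c;
                if (disc < 0) return null;             // closest approach > r1 + r2
                float t = (-b - (float)Math.Sqrt(disc)) / (2 * a);
                return (t >= 0 && t <= 1) ? t : (float?)null;
            }
        }

    Each returned TOI would go into the "planned collisions" stack; the cheap discriminant rejection is what a continuous broad phase can use before handing pairs to the narrow phase.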

    Read the article

  • Looking Back at MIX10

    - by WeigeltRo
    It’s the sad truth of my life that even though I’ve been fascinated by airplanes and flight in general since my childhood days, my body doesn’t like flying. Even the ridiculously short flights inside Germany take their toll on me each time. Now combine this with sitting in the cramped space of economy class for many hours on a transatlantic flight from Germany to Las Vegas and back, factor in a heavy dose of jet lag (especially on my way eastwards), and you get an idea why, after coming back home, I had this question on my mind: Was it really worth it to attend MIX10? This of course is a question that will also be asked by my boss at Comma Soft (for other reasons, obviously), who decided to send me and my colleague, Jens Schaller, to the MIX10 conference. (A note to my German readers: Comma Soft is still looking for Silverlight developers and/or UI designers for its Bonn location – meaningful applications please to [email protected]) To keep things short: my answer is yes. Before I go into detail, let me ask the heretical question of whether tech conferences in general still make sense. There was a time when actually being at a tech conference gave you a head start in learning about new technologies. Nowadays this is no longer true, when every bit of information and every detail is immediately twittered, blogged and whatevered to death. In the case of MIX10 you can even download the video-taped sessions shortly after. So: does visiting a conference still make sense? It depends on what you expect from a conference. It should be clear to everybody that you’ll neither get exclusive information, nor receive training in a small group. What a conference does offer that sitting in front of your computer does not can be summarized as follows: Focus Being away from work and home will help you to focus on the presented information. Of course there are always the poor guys who are haunted by their work (with mails and short text messages reporting the latest showstopper problem), but in general being out of your office makes a huge difference. Inspiration With the focus comes the emotional involvement. I find it much easier to absorb information if I feel that certain vibe when sitting in a session. This still means that I have to put work into reviewing the information later, but it’s a better starting point. And all the impressions collected at a (good) conference combine into higher motivation – be it through the buzz (“this is gonna be sooo cool!”) or through the fear of falling behind (“man, we’ll have to work on this, or else…”). People At a conference it’s pretty easy to get into contact with other people during breakfast, lunch and other breaks. This is a good opportunity to get a feel for what other development teams are doing (on a very general level of course, nobody will tell you about their secret formula) and what they are thinking about specific technologies. So MIX10 did offer focus, inspiration and people, but that would have meant nothing without valuable content. When I (being a frontend developer with a strong interest in UI/UX) planned my visit to MIX10, I made the decision to focus on the "soft" topics of design, interaction and user experience. I figured that I would be bombarded with all the technical details about Silverlight 4 anyway in the weeks and months to come. 
Actually, I would have liked to catch a few technical sessions, but the agenda wasn’t exactly in favor of people interested in both Silverlight and UI/UX/design topics. That’s one of my few complaints about the conference – I would have liked one more day and/or more sessions per day. Overall, the quality of the workshops and sessions was pretty high. In fact, looking back at my collection of conferences I’ve visited in the past, I’d say that MIX10 ranks somewhere near the top spot. Here’s an overview of the workshops/sessions I attended (I’ll leave out the keynotes): Day 0 (Workshops on Sunday) Design Fundamentals for Developers Robby Ingebretsen is the man! Great workshop in three parts with the perfect mix of examples, well-structured definitions of terminology and the right dose of humor. Robby was part of the WPF team before founding his own company, so he not only has a strong interest in design (and the skillz!) but also the technical background.   Design Tools and Techniques Originally announced to be held by Arturo Toledo, the Rosso brothers from ArcheType filled in for the first two parts, and Corrina Black had a pretty general part about the Windows Phone UI. The first two thirds were a mixed bag; the two guys definitely knew what they were talking about, and the demos were great, but the talk lacked the preparation and polish of a truly great presentation. Corrina was not allowed to go into too much detail before the keynote on Monday, but the session was still very interesting, as it showed how much thought went into the Windows Phone UI (and there’s always a lot to learn when people talk about their thought process). Day 1 (Monday) Designing Rich Experiences for Data-Centric Applications I wonder whether there was ever a test run for this session, but what Ken Azuma and Yoshihiro Saito delivered in the first 15 minutes of a 30-minute session made me walk out. A commercial for a product (just great: a video showing a SharePoint plug-in in an all-Japanese UI) combined with the most generic blah blah one could imagine. EPIC FAIL.   Great User Experiences: Seamlessly Blending Technology & Design I switched to this session from the one above, but I guess I missed the interesting part – what I did catch looked like a “look at the cool stuff we did” without being helpful. Or maybe I was just in a bad mood after the other session.   The Art, Technology and Science of Reading This talk by Kevin Larson was very interesting, but was more a presentation of what Microsoft is doing in research (pretty impressive) and in the end lacked a bit of the helpful advice one could have hoped for.   10 Ways to Attack a Design Problem and Come Out Winning Robby Ingebretsen again, and again a great mix of theory and practice. The clean and simple, yet effective, UI of the reader app resulted in a simultaneous “wow” from Jens and me. If you watch only one session video, this should be it. Microsoft has to bring Robby back next year! Day 2 (Tuesday) Touch in Public: Multi-touch Interaction Design for Kiosks & Architectural Experiences Very interesting session by Jason Brush, a great inspiration with many details to look out for in the examples. Exactly what I was hoping for – and then some!   Designing Bing: Heart and Science How hard can it be to design the UI for a search engine? An input field and a list of results, that should be it, right? Well, not so fast! The talk by Paul Ray showed the many iterations needed to finally get it right (down to the choice of a specific blue for the links). 
And yes, I want an eye-tracking device to play around with!   The Elephant in the Room When Nishant Kothary presented a long list of what his session was not about, I said to myself (not having the description text present) “Am I in the wrong talk? Should I leave?”. Boy, was I wrong. A great talk about human factors in the process of designing stuff.   An Hour with Bill Buxton Having seen Bill Buxton’s presentation in the keynote, I just had to see this man again – even though I didn’t know what to expect. Being more or less unplanned and intended to be more of a conversation, the session didn’t provide a wealth of immediately useful information. Nevertheless, Bill Buxton was impressive with his huge knowledge of seemingly everything. But this could/should have been a session sometime in the evening and not in parallel to at least two other interesting talks. Day 3 (Wednesday) Design the Ordinary, Like the Fixie This session by DL Byron and Kevin Tamura started really well and brought across the message to keep things simple. But towards the end the talk lost some of its steam. And, as a member of the audience pointed out, they kind of ignored their own advice when they used fancy presentation software other than PowerPoint that sometimes got in the way of showing things.   Developing Natural User Interfaces Speaking of alternative presentation software, Joshua Blake definitely had the most remarkable alternative to PowerPoint, a self-written program called NaturalShow that was controlled using multi-touch on a touch screen. Not a PowerPoint killer, but impressive nevertheless. The (excellent) talk itself was kind of eye-opening in regard to what “multi-touch support” on various platforms (WPF, Silverlight, Windows Phone) actually means.   Treat your Content Right The talk by Tiffani Jones Brown wasn’t even on my planned schedule, but somehow I ended up in that session – and it was great. And even for people who don’t necessarily have to write content for websites, some points made by Tiffani are valid in many places, notably wherever you put texts with more than a single word into your UI. Creating Effective Info Viz in Microsoft Silverlight The last session of MIX10 I attended was kind of disappointing. At first things were very promising, with Matthias Shapiro giving a brief but well-structured introduction to info graphics and interactive visualizations. Then the live coding began, and while the result was interesting, too much time was spent wrestling to get the code working. Ending earlier than planned, the talk was a bit light on actual content, but at least it included a nice list of resources. Conclusion It could be felt all across MIX10: UIs are about to take a huge leap forward; in fact, there are enough examples that already have. People who both have the technical know-how and at least a basic understanding of design (“literacy”, as Bill Buxton called it) are in high demand. The concept of the MIX conference and initiatives like design.toolbox show that Microsoft understands very well that frontend developers have to acquire new knowledge besides knowing how to hack code and put buttons on a form. There are extremely exciting times before us, with lots of opportunity for those who are eager to develop their skills, that is for sure.

    Read the article

  • .NET 4.5 now supported with Windows Azure Web Sites

    - by ScottGu
    This week we finished rolling out .NET 4.5 to all of our Windows Azure Web Site clusters. This means that you can now publish and run ASP.NET 4.5 based apps, and use .NET 4.5 libraries and features (for example: async and the new spatial data-type support in EF), with Windows Azure Web Sites. This enables a ton of really great capabilities - check out Scott Hanselman’s great post of videos that highlight a few of them. Visual Studio 2012 includes built-in publishing support to Windows Azure, which makes it really easy to publish and deploy .NET 4.5 based sites within Visual Studio (you can deploy both apps + databases). With the Migrations feature of EF Code First you can also do incremental database schema updates as part of publishing (which enables a really slick automated deployment workflow). Each Windows Azure account is eligible to host 10 free web sites using our free tier. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using them today. In the next few days we’ll also be releasing support for .NET 4.5 and Windows Server 2012 with Windows Azure Cloud Services (Web and Worker Roles) – together with some great new Azure SDK enhancements. Keep an eye out on my blog for details about these soon. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
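
    For instance, here is a hedged sketch (hypothetical model and names, not from Scott's post) of the two .NET 4.5 features called out above, as they might appear in a site published to Azure Web Sites:

        using System.Data.Entity;
        using System.Data.Spatial;          // DbGeography: EF 5 spatial support on .NET 4.5
        using System.Net.Http;              // HttpClient is also new in .NET 4.5
        using System.Threading.Tasks;

        public class Store
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public DbGeography Location { get; set; }   // spatial column
        }

        public class StoreContext : DbContext
        {
            public DbSet<Store> Stores { get; set; }
        }

        public class StatusChecker
        {
            // async/await requires .NET 4.5, now supported on Azure Web Sites
            public async Task<string> GetStatusAsync()
            {
                using (var client = new HttpClient())
                {
                    return await client.GetStringAsync("http://example.com/status");
                }
            }
        }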

    Read the article

  • shared hosting: ssl domain receives all server's ssl requests, google gets it wrong

    - by pixeline
    This website is hosted on a shared server. I've recently had my hosting provider secure our website using SSL (https://domain.com instead of http://domain.com). Ever since then, all https requests sent to the server are redirected to my website: https://otherdomain.com leads to a certificate warning and then, if you continue, to my website. OK, my fault, I should have known SSL means one IP (per certificate); otherwise, this kind of thing can happen. But... Google search results for my website's target keywords are now displaying these websites above my own, even though they have nothing even remotely related to the target keywords! Already done: provided a canonical URL in the HTML page; told the problem to the server manager, who tells me it's normal but that he'll look for a solution. That was one week ago, no answer since. I have no idea why Google is indexing these https URLs: I thought someone would have to submit them, or have them inside an HTML page, in order for Google to actually index dummy https domains, but I see no reason why someone would do that. Any suggestion on how to solve this situation? Go-live is in one week and SEO looks really bad because of that.

    Read the article

  • Steve Jobs Goes On Medical Leave. iPad 2 and iPhone 5 On Track.

    - by Gopinath
    Here is a bit of disappointing news for Apple fanboys. Steve Jobs is again going on medical leave, as he wants to concentrate on his health for some time. In an email to the employees of Apple, Steve said, At my request, the board of directors has granted me a medical leave of absence so I can focus on my health.. I will continue as CEO and be involved in major strategic decisions for the company.. I have great confidence that Tim and the rest of the executive management team will do a terrific job executing the exciting plans we have in place for 2011   In the mail, Steve also said that plans for the product releases scheduled in 2011 will not be affected. This means, as rumoured, iPad 2 in April and iPhone 5 in June with new hardware. There is not much information on the medical complications Steve is facing now, but many think it's linked to the liver transplant he had in 2009. Whatever the reason may be, we wish him a speedy recovery. Here is the full content of the email Steve Jobs sent to all employees: Team, At my request, the board of directors has granted me a medical leave of absence so I can focus on my health. I will continue as CEO and be involved in major strategic decisions for the company. I have asked Tim Cook to be responsible for all of Apple’s day to day operations. I have great confidence that Tim and the rest of the executive management team will do a terrific job executing the exciting plans we have in place for 2011. I love Apple so much and hope to be back as soon as I can. In the meantime, my family and I would deeply appreciate respect for our privacy. Steve This article, titled "Steve Jobs Goes On Medical Leave. iPad 2 and iPhone 5 On Track.", was originally published at Tech Dreams.

    Read the article

  • How do I install GRUB on a RAID system installation?

    - by root45
    I'm trying to set up and install Ubuntu on a RAID 1 setup. I have two disks, sdb and sdc. I've been following this guide https://help.ubuntu.com/community/Installation/SoftwareRAID which more or less works for getting everything set up and Ubuntu installed. The problem is at the end of the installation, when it tries to install GRUB. By default it tries my "first disk", which gives a "fatal error". I've tried installing it on a specific partition, e.g. sdb1, as well as on RAID devices, e.g. md0, md1, etc. Nothing seems to work. Edit: The actual error is "Unable to install GRUB in /dev/sdb Executing 'grub-install '/dev/sdb' failed. This is a fatal error." Then I'm taken back to the main install menu. If I choose the "Install the GRUB boot loader on a hard disk" option, I can pick the partition, but entering sdb2 or md1 gives the same error. So I went ahead and just didn't install GRUB, which means I now presumably have a working Ubuntu installation, but I can't boot it. I've tried booting from the LiveCD to install GRUB, but I can't chroot into my system because it doesn't seem to recognize that my disk is a Linux disk. There's an error about it being a RAID partition. So basically I would really like to know how to tell which device to install GRUB to at installation time, or at the very least, how to install it onto my system now. I suppose I should also mention that sda is a Windows 7 installation that I would like to keep around and be able to access at boot. Thanks for any help.
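
    For reference, a typical live-CD repair sequence for this situation (a sketch, not from the original question; device names follow the question and may differ on your system, and the arrays must be assembled before the chroot will work):

        sudo apt-get install mdadm            # the live CD usually lacks mdadm
        sudo mdadm --assemble --scan          # bring up /dev/md0, /dev/md1, ...
        sudo mount /dev/md1 /mnt              # mount the root array (adjust to yours)
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt
        grub-install /dev/sdb                 # install to the MBR of each disk in the
        grub-install /dev/sdc                 # RAID 1 array, not to a partition like sdb2
        update-grub

    Installing to the MBR of both members means the machine can still boot if either disk fails.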

    Read the article

  • What's the value of a Facebook fan?

    - by David Dorf
    In his blog posting titled "Why Each Facebook Fan Is Worth $2,000 to J. Crew," Joe Skorupa lays out a simplistic calculation for assigning a value to social media efforts within Facebook. While I don't believe the metric, at least it's a metric that can be applied consistently. Trying to explain the ROI to management to start a program, then benchmarking to show progress, isn't straightforward at all. Social media isn't really mature enough to have hard-and-fast rules around valuation (yet). When I'm asked by retailers how to measure social media efforts, I usually fess up and say I can't show an ROI, but the investment is so low you might as well take a risk. Intuitively, it just seems like a good way to interact with consumers, and since your competition is doing it, you'd better do it as well. Vitrue, a social media management company, has calculated a fan as being worth $3.60 per year based on impressions generated in Facebook's news feed. That means a fan base of 1 million translates into at least $3.6 million in equivalent media over a year. Don't believe that number either? Fine, Vitrue now has a tool that lets you adjust the earned media value of a fan. Jump over to http://evaluator.vitrue.com/ and enter your brand's Facebook URL to get an assessment of the current value and potential value. For fun, I compared Abercrombie & Fitch (1,077,480 fans), Gap (567,772 fans), and Wet Seal (294,479 fans). The image below shows the results assuming the default $5 earned media value for a fan. The calculation is more complicated than just counting fans. It also accounts for postings and comments. It's possible for a brand with fewer fans to have a higher value based on frequency and relevancy of posts. The tool gathers data via the Social Graph API for the past 30 days of activity. I'm not sure this tool assigns the correct value either, but hey, it's a great start.

    Read the article

  • How to Change Ubuntu’s Window Borders with Emerald

    - by The Geek
    The look of your operating system is all about the panels and the window borders, so now that we’ve shown you how to customize your panels, it’s time to customize the window borders to make Ubuntu look the way you want it to. For the purposes of this article, we’re going to assume that you’ve got the Compiz compositing window manager running—which is what provides the custom window effects found in Ubuntu. This means the technique probably won’t work in a virtual machine, or on a really old PC. This is the third installment of our series on customizing Ubuntu by geeky reader Omar Hafiz—be sure to check out the first article, where he explained how to make your panels transparent, and the second article, where he explained how to customize the fonts and colors of those panels.

    Read the article

  • Drupal CMS most Stable for High Traffic

    - by Aditi
    Drupal users report higher satisfaction with Drupal than Joomla users do with Joomla, for a number of reasons. If you are thinking of choosing a high-performance platform to run your high-traffic website, a Drupal installation is your best bet. Overload Scenario Drupal is a scalable, high-performance CMS and is stable under heavy load. If your server is pushed beyond its capacity, Drupal shuts off gracefully and doesn’t crash. As soon as the server is back within its traffic capability, Drupal handles all requests smoothly again. For example, if your dedicated server can handle a maximum of 50,000 visits a day, and on a lucky day your news creates a buzz in social media and your traffic rises to 70,000 in one day, then your server will be overloaded; usually it crashes, at times causing permanent damage to your database. But if you use the Drupal CMS, it closes down gracefully, and as soon as traffic goes back down to within the server’s capacity, the Drupal-powered site accepts all requests again. Extensibility Drupal users know that their add-ons integrate better with the core, and the framework makes it easier to extend the CMS’s capabilities, which keeps even an extended installation quite stable – unlike Joomla, which loses its strength if you have plenty of plugins and heavy customizations running. Any CMS with a large number of plugins makes the content complex and reduces your ability to handle high-traffic requests. Accessibility Management or ACL Chances are, if you run a high-traffic website, you have various users and content contributors. ACL means group roles: assigning people to the various registered user levels and allocating many kinds of privileges. The most common example is the ability to see or edit a section or selected pages. This efficient feature sets Drupal a class apart from other CMSs out there.

    Read the article

  • Introducing Oracle VM VirtualBox

    - by Fat Bloke
    I guess these things always take longer than expected and, while the dust is still not completely settled in all the ex-Sun geographies, it is high time we started looking at some of the great new assets in the Oracle VM portfolio. So let's start with one of the most exciting: Oracle VM VirtualBox. VirtualBox is cross-platform virtualization software, oftentimes called a hypervisor, and it runs on Windows, Linux, Solaris and the Mac. This means that you download it, install it on your existing platform, and start creating and running virtual machines alongside your existing applications. For example, on my Mac I can run Oracle Enterprise Linux and Windows 7 alongside my Mac apps. VirtualBox use has grown phenomenally, to the point that at Sun it was the 3rd most popular download behind Java and MySQL. Its success can be attributed to the fact that it doesn't need dedicated hardware, it can be installed on either client or server classes of computers, is very easy to use and is free for personal use. And, as you might expect, VirtualBox has its own vibrant community too, over at www.virtualbox.org There are hundreds of tutorials out there about how to use VirtualBox to create VMs and install different operating systems ranging from Windows 7 to ChromeOS, and if you don't want to install an operating system yourself, you can download pre-built virtual appliances from community sites such as VirtualBox Images or commercial companies selling subscriptions to whole application stacks, such as JumpBox. In no time you'll be creating and sharing your own VMs using the VirtualBox OVF export and import function. VirtualBox is deceptively powerful. Under the simple GUI lies a formidable engine capable of running heavyweight multi-CPU virtual workloads, exhibiting enterprise capabilities including a built-in remote display server, an iSCSI initiator for connecting to shared storage, and the ability to teleport running VMs from one host to another. And for solution builders, you should be aware that VirtualBox has a scriptable command line interface, an SDK, and rich web service APIs. To get a further feel for what VirtualBox is capable of, check out some of these short movies or simply go download it for yourself.- FB
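
    As a taste of that scriptable command line interface, a short hedged sketch (the VM name and values are illustrative only; see the VBoxManage documentation for the full syntax):

        VBoxManage createvm --name "MyVm" --register          # create and register a new VM
        VBoxManage modifyvm "MyVm" --memory 1024 --acpi on    # give it 1 GB of RAM
        VBoxManage startvm "MyVm"                             # boot it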

    Read the article

  • links for 2010-03-11

    - by Bob Rhubart
    Andy Mulholland: (Information Technology) + (Business Technology) ÷ Clouds = Infostructure "Internal information technology with its dedicated users, applications, licenses, client-server, data-centric and close coupled integration architecture cannot support externally oriented business technology where almost every condition is different. Internet connectivity and the emergence of people centric services in the web 2.0 world has led business and user expectations to shift dramatically and give rise to the expectation of a new and completely different working environment, based in the cloud, or more correctly, clouds." -- Andy Mulholland, CTO Blog, Capgemini (tags: enterprisearchitecture cloud web2.0 entarch) @myfear: Getting started with (GSW #2): GlassFish v3 "If the application server/container of your choice is a Java EE compliant one, you are on the right track. This list is not too long these days, if you look for Java EE 6 compliant servers. The most prominent and well-known is also the Java EE 6 reference implementation (RI): The Oracle GlassFish v3." -- Oracle ACE Markus "@myfear" Eisele (tags: oracle otn oracleace glassfish java) @oraclenerd: The "Database is a Bucket" Mentality "Could it be that everyone out there believes that the sole purpose of a database is to store data? That it can't do anything else?" -- Chet "@oraclenerd" Justice (tags: otn oracle database dba) The Encyclopedia of SOA "SOA is an anagram for OSA, which means female bear in Spanish. It is a well-known fact in the Spanish-speaking world that female bears are able to model business processes and optimize reusable IT assets better than any other hibernating animal." -- One of the surprisingly funny nuggets of wisdom available in the Encyclopedia of SOA. (tags: architecture chucknorris humor soa software technology webservices) Marina Fisher: Book Review - Web 2.0 Fundamentals Marina Fisher reviews WEB 2.0 FUNDAMENTALS by Oswald Campesato and Kevin Nilson. (tags: sun web2.0 bookreview socialnetworking)

    Read the article

  • Filling in PDF Forms with ASP.NET and iTextSharp

    The Portable Document Format (PDF) is a popular file format for documents, for two primary reasons: first, because the PDF standard is an open standard, there are many vendors that provide PDF readers across virtually all operating systems, and many proprietary programs, such as Microsoft Word, include a "Save as PDF" option. Consequently, PDFs serve as a sort of common currency of exchange. A person writing a document using Microsoft Word for Windows can save the document as a PDF, which can then be read by others whether or not they are using Windows and whether or not they have Microsoft Word installed. Second, PDF files are self-contained. Each PDF file includes its complete text, fonts, images, input fields, and other content. This means that even complicated documents with many images, an intricate layout, and user interface elements like textboxes and checkboxes can be encapsulated in a single PDF file. Due to their ubiquity and layout capabilities, it's not uncommon for websites to use PDF technology. For example, when purchasing goods at an online store you may be offered the ability to download an invoice as a PDF file. PDFs also support form fields, which are user interface elements like textboxes, checkboxes, comboboxes, and the like. These form fields can be filled in by a user viewing the PDF or, with a bit of code, they can be filled in programmatically. This article is the first in a multi-part series that examines how to programmatically work with PDF files from an ASP.NET application using iTextSharp, a .NET open source library for PDF generation. This installment shows how to use iTextSharp to open an existing PDF document with form fields, fill those form fields with user-supplied values, and then save the combined output to a new PDF file. Read on to learn more! Read More >
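
    As a preview of the technique, a minimal sketch (the file paths and field name are hypothetical; PdfReader, PdfStamper and AcroFields are iTextSharp's types):

        using System.IO;
        using iTextSharp.text.pdf;

        // Open a PDF that contains form fields, fill one in, and save a new copy.
        PdfReader reader = new PdfReader("form-template.pdf");       // hypothetical input file
        FileStream output = new FileStream("filled.pdf", FileMode.Create);
        PdfStamper stamper = new PdfStamper(reader, output);
        stamper.AcroFields.SetField("CustomerName", "Jane Smith");   // hypothetical field name
        stamper.FormFlattening = true;   // optional: make the filled values read-only
        stamper.Close();                 // writes and closes the output PDF
        reader.Close();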

    Read the article

  • Programming concepts taken from the arts and humanities

    - by Joey Adams
    After reading Paul Graham's essay Hackers and Painters and Joel Spolsky's Advice for Computer Science College Students, I think I've finally gotten it through my thick skull that I should not be loath to work hard in academic courses that aren't "programming" or "computer science" courses. To quote the former: I've found that the best sources of ideas are not the other fields that have the word "computer" in their names, but the other fields inhabited by makers. Painting has been a much richer source of ideas than the theory of computation. — Paul Graham, "Hackers and Painters" There are certainly other, much stronger reasons to work hard in the "boring" classes. However, it'd also be neat to know that these classes may someday inspire me in programming. My question is: what are some specific examples where ideas from literature, art, humanities, philosophy, and other fields made their way into programming? In particular, ideas that weren't obviously applied the way they were meant to (like most math and domain-specific knowledge), but instead gave utterance or inspiration to a program's design and choice of names. Good examples: The term endian comes from Gulliver's Travels by Jonathan Swift (see here), where it refers to the trivial matter of which end people crack open their eggs. The terms journal and transaction refer to nearly identical concepts in both filesystem design and double-entry bookkeeping (financial accounting). mkfs.ext2 even says: Writing superblocks and filesystem accounting information: done Off-topic: Learning to write English well is important, as it enables a programmer to document and evangelize his/her software, as well as appear competent to other programmers online. Trigonometry is used in 2D and 3D games to implement rotation and direction aspects. Knowing finance will come in handy if you want to write an accounting package. Knowing XYZ will come in handy if you want to write an XYZ package. Arguably on-topic: The Monad class in Haskell is based on a concept of the same name from category theory. Actually, Monads in Haskell are monads in the category of Haskell types and functions. Whatever that means...

    Read the article

  • Weird entry for robots.txt on a Naked Domain in Google Webmaster Tools

    - by Metalshark
    We own a .co.uk address and use an Internet hosting company that has made mistakes around DNS in the past. Our main site is hosted on www., and their reluctance to allow editing of A/AAAA records online means our naked domain does not resolve. Currently, when we attempt to reach the naked version, there is no entry for the browser to go to, and it displays an unreachable page (nslookup just says Name: the name of the domain, with no further entries such as an IP or canonical name). We recently added the relevant TXT records to verify us to view both the www. version and the naked version of the domain in Google Webmaster Tools (in anticipation of the requests to our Internet host coming to fruition). Imagine our shock when double-checking Site configuration > Crawler access and finding an (admittedly failing) robots.txt with a dynamically generated HTML page (full of crude pop-up JavaScript) with references to 3 of our most prominent competitors. What could cause this to happen? As we are in the UK, I am assuming some DNS server is serving Google bad information. We are going to contact the Internet hosting company to fix our A and AAAA records once and for all, then check that they work in the US (using something like OpenDNS). Should we be doing more, though? For instance, informing Google (through Webmaster Tools) that we are now aware there is something currently wrong with our naked domain? UPDATE: We have fixed our A records (not AAAA) and that has resolved the issue. But if there are further actions we should take for effectively having had a parking page hosted on our active, visitor-heavy, SEO-rich domain that advertised our competitors to US visitors, what would they be?

    Read the article

  • SPARC64 VII+ Processor Core License Factor Reduced by 33%

    - by john.shell
    The Oracle processor core license factor has been a popular topic the last few months.  For those partners new to Oracle software licensing, the processor core license factor determines the number of licensed CPUs that are required when running Oracle software (those products charged on a per-CPU basis) on multi-core processors. My last entry talked about the core factor reduction for our T3 processor.  The core license factor for our newly announced SPARC64 VII+ processor is 0.5, which is a 33% reduction from the 0.75 rate used with our SPARC64 VI and VII processors. For example, a 32-core SPARC64 VII+ configuration now requires 16 processor licenses (32 x 0.5), where the older 0.75 factor would have required 24. What does this mean for our partners?  Increased opportunity.  This change, similar to our T3-based systems, means that our hardware is the preferred platform for Oracle software. Still a little dizzy on the breadth of Oracle's software offering?  Do a simple scan of Oracle's software price lists. Consider this your target market. This change allows you to focus on total solution price or price/performance, not server prices or per-core performance (a standard IBM sales tactic). That's the offensive side of the game.  Don't forget your defense.  One of the biggest customer benefits around the M-Series is investment protection.  The combination of a simple processor/board upgrade, along with a reduction in the processor core license factor, makes upgrading one of the best financial moves for our customers.    One reminder:  the update to the processor core license factor only applies to the new VII+ processor - NOT the SPARC64 VI or VII processors.  You can find the official table here.

    Read the article

  • Looking into Entity Framework Code First Migrations

    - by nikolaosk
    In this post I will introduce you to Code First Migrations, an Entity Framework feature introduced in version 4.3 back in February of 2012. I have extensively covered Entity Framework in this blog. Please find my other Entity Framework posts here. Before the addition of Code First Migrations (versions 4.1, 4.2), Code First database initialisation meant that Code First would create the database if it did not exist (the default behaviour - CreateDatabaseIfNotExists). The other pattern we could use is DropCreateDatabaseIfModelChanges, which means that Entity Framework will drop the database if it realises that the model has changed since the last time it created the database. The final pattern is DropCreateDatabaseAlways, which means that Code First will recreate the database every time one runs the application. That is of course fine for the development database, but totally unacceptable and catastrophic when you have a production database. We cannot lose our data because of the way that Code First works. Migrations solve this problem. With migrations we can modify the database without completely dropping it. We can modify the database schema to reflect the changes to the model without losing data. In version EF 5.0 migrations are fully included and supported. I will demonstrate migrations with a hands-on example. Let me say a few words about Entity Framework first. The .NET Framework provides support for Object Relational Mapping through EF. So EF is an ORM tool, and it is now the main data access technology that Microsoft works on. I use it quite extensively in my projects. Through EF we have many things provided for us out of the box. We have the automatic generation of SQL code. It maps relational data to strongly typed objects. All the changes made to the objects in memory are persisted in a transactional way back to the data store. You can find in this post an example on how to use the Entity Framework to retrieve data from an SQL Server database using the "Database/Schema First" approach. In this approach we make all the changes at the database level and then we update the model with those changes. In this post you can see an example on how to use the "Model First" approach when working with ASP.NET and the Entity Framework. This model was first introduced in EF version 4.0: we could start with a blank model and then create a database from that model. When we made changes to the model, we could recreate the database from the new model. The Code First approach is more code-centric than the other two. Basically we write POCO classes and then we persist them to a database using something called DbContext. Code First relies on DbContext. We create two or three classes (e.g. Person, Product) with properties; these classes interact with the DbContext class, and we can create a new database based upon our POCO classes and have tables generated from those classes. We do not have an .edmx file in this approach. By using this approach we can write much easier unit tests. DbContext is a new context class: a smaller, lightweight wrapper for the main context class, which is ObjectContext (Schema First and Model First). Let's move on to our hands-on example. I have installed VS 2012 Ultimate edition on my Windows 8 machine. 1) Create an empty ASP.NET web application. Give your application a suitable name. Choose C# as the development language. 2) Add a new web form item to your application. Leave the default name. 3) Create a new folder. 
Name it CodeFirst. 4) Add a new item to your application, a class file. Name it Footballer.cs. This is going to be a simple POCO class. Place this class file in the CodeFirst folder. The code follows:    public class Footballer     {         public int FootballerID { get; set; }         public string FirstName { get; set; }         public string LastName { get; set; }         public double Weight { get; set; }         public double Height { get; set; }              } 5) We will have to add EF 5.0 to our project. Right-click on the project in the Solution Explorer and select Manage NuGet Packages... for it. In the window that pops up, search for Entity Framework and install it. Have a look at the picture below.   If you want to confirm that EF version 5.0 is indeed installed, have a look at the References. Have a look at the picture below to see what you will see if you have installed everything correctly. 6) Then we need to create a context class that inherits from DbContext. Add a new class to the CodeFirst folder. Name it FootballerDBContext. Now that we have the entity classes created, we must let the model know. I will have to use the DbSet<T> property. The code for this class follows:     public class FootballerDBContext:DbContext     {         public DbSet<Footballer> Footballers { get; set; }             }    Do not forget to add (using System.Data.Entity;) at the beginning of the class file. 7) We must take care of the connection string. It is very easy to create one in the web.config. It does not matter that we do not have a database yet. When we run the DbContext and query against it, it will use the connection string in the web.config and will create the database based on the classes. I will use the name "FootballTraining" for the database. In my case the connection string inside the web.config looks like this:    <connectionStrings>    <add name="CodeFirstDBContext" connectionString="server=.;integrated security=true; database=FootballTraining" providerName="System.Data.SqlClient"/>                       </connectionStrings> 8) Now it is time to create LINQ to Entities queries to retrieve data from the database. Add a new class to your application in the CodeFirst folder. Name the file DALfootballer.cs. We will create a simple public method to retrieve the footballers. The code for the class follows: public class DALfootballer     {         FootballerDBContext ctx = new FootballerDBContext();         public List<Footballer> GetFootballers()         {             var query = from player in ctx.Footballers select player;             return query.ToList();         }     } 9) Place a GridView control on the Default.aspx page and leave the default name. Add an ObjectDataSource control on the Default.aspx page and leave the default name. Set the DataSourceID property of the GridView control to the ID of the ObjectDataSource control (DataSourceID="ObjectDataSource1"). Let's configure the ObjectDataSource control. Click on the smart tag item of the ObjectDataSource control and select Configure Data Source. In the wizard that pops up, select the DALfootballer class and then, in the next step, choose the GetFootballers() method. Click Finish to complete the steps of the wizard. Build and run your application.  10) Obviously you will not see any records coming back from your database, because we have not inserted anything. The database is created, though. Have a look at the picture below.  11) Now let's change the POCO class. 
Let's add a new property to the Footballer.cs class:        public int Age { get; set; } Build and run your application again. You will receive an error. Have a look at the picture below. 12) That was to be expected. EF Code First Migrations are not activated by default. We have to activate them manually and configure them according to our needs. We will open the Package Manager Console from the Tools menu within Visual Studio 2012. Then we will activate the EF Code First Migrations features by running the command “Enable-Migrations”.  Have a look at the picture below. This adds a new folder, Migrations, to our project. A new auto-generated class, Configuration.cs, is created. Another class, [CURRENTDATE]_InitialCreate.cs, is also created and added to our project. The Configuration.cs is shown in the picture below. The [CURRENTDATE]_InitialCreate.cs is shown in the picture below.  13) Now we are ready to migrate the changes to the database. We need to run the Add-Migration Age command in the Package Manager Console. Add-Migration will scaffold the next migration based on changes you have made to your model since the last migration was created. In the Migrations folder, the file 201211201231066_Age.cs is created. Have a look at the picture below to see the newly generated file and its contents. Now we can run the Update-Database command in the Package Manager Console. See the picture above. Code First Migrations will compare the migrations in our Migrations folder with the ones that have been applied to the database. It will see that the Age migration needs to be applied, and run it. The EFMigrations.CodeFirst.FootballerDBContext database is now updated to include the Age column in the Footballers table. Build and run your application. Everything will work fine now. Have a look at the picture below to see the migrations applied to our table. 14) We may want the application to automatically upgrade the database (by applying any pending migrations) when it launches. Let's add another property to our POCO class:          public string TShirtNo { get; set; } We want this change to migrate automatically to the database. We go to the Configuration.cs and enable automatic migrations:     public Configuration()        {            AutomaticMigrationsEnabled = true;        } In the Page_Load event handling routine we have to register the MigrateDatabaseToLatestVersion database initializer. A database initializer simply contains some logic that is used to make sure the database is set up correctly:   protected void Page_Load(object sender, EventArgs e)        {            Database.SetInitializer(new MigrateDatabaseToLatestVersion<FootballerDBContext, Configuration>());        } Build and run your application. It will work fine. Have a look at the picture below to see the migrations applied to our table in the database. Hope it helps!!!  

    Read the article

  • SQL SERVER – Differences Between Left Join and Left Outer Join

    - by pinaldave
    There are a few questions that I had decided not to discuss on this blog because I think they are very simple and many of us know the answers. Many times, I even receive not-so-positive notes from several readers when I write something simple. However, assuming that we know it all and that beginners should know everything is not the right attitude. Since day 1, I have been keeping a small journal regarding questions that I receive on this blog. There are around 200+ questions I receive every day through emails, comments and occasional phone calls. Yesterday, I received a comment with the following question: What are the differences between Left Join and Left Outer Join? Click here to read the original comment. Receiving the same question repeatedly has finally crossed the threshold that triggers a post. Here is the answer: there is absolutely no difference between LEFT JOIN and LEFT OUTER JOIN. The same is true for RIGHT JOIN and RIGHT OUTER JOIN. When you use the LEFT JOIN keyword in SQL Server, it means LEFT OUTER JOIN only. I have already written an in-depth visual diagram article discussing the JOINs. I encourage all of you to read the article for further understanding of the JOINs: Read Introduction to JOINs – Basic of JOINs Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Joins, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
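
    A minimal illustration (hypothetical tables; the OUTER keyword is simply optional in T-SQL, so the two queries below are identical in meaning and produce the same plan):

        SELECT c.Name, o.OrderID
        FROM Customers c
        LEFT JOIN Orders o ON o.CustomerID = c.CustomerID;

        SELECT c.Name, o.OrderID
        FROM Customers c
        LEFT OUTER JOIN Orders o ON o.CustomerID = c.CustomerID;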

    Read the article

  • The fastest way to resize images from ASP.NET. And it’s (more) supported-ish.

    - by Bertrand Le Roy
    I’ve shown before how to resize images using GDI, which is fairly common but is explicitly unsupported because we know of very real problems that this can cause. Still, many sites use that method because those problems are fairly rare, and because most people assume it’s the only way to get the job done. Plus, it works in medium trust. More recently, I’ve shown how you can use WPF APIs to do the same thing and get JPEG thumbnails, only 2.5 times faster than GDI (even now that GDI really ultimately uses WIC to read and write images). The boost in performance is great, but it comes at a cost that you may or may not care about: it won’t work in medium trust. It’s also just as unsupported as the GDI option. What I want to show today is how to use the Windows Imaging Component APIs from ASP.NET directly, without going through WPF. The approach has the great advantage that it’s been tested and proven to scale very well. The WIC team tells me you should be able to call support and get answers if you hit problems. Caveats exist though. First, this is using interop, so until a signed wrapper sits in the GAC, it will require full trust. Second, the APIs have a very strong smell of native code and are definitely not .NET-friendly. And finally, the most serious problem is that older versions of Windows don’t offer MTA support for image decoding. MTA support is only available on Windows 7, Vista and Windows Server 2008. But on 2003 and XP, you’ll only get STA support. That means that the thread safety we so badly need for server applications is not guaranteed on those operating systems. To make it work there, you’d have to spin up specialized threads yourself and manage the lifetime of your objects, which is outside the scope of this article. We’ll assume that we’re fine with all this and that we’re running on 7 or 2008 under full trust. Be warned that the code that follows is not simple or very readable. This is definitely not the easiest way to resize an image in .NET. Wrapping native APIs such as WIC in a managed wrapper is never easy, but fortunately we won’t have to: the WIC team already did it for us and released the results under MS-PL. The InteropServices folder, which contains the wrappers we need, is in the WicCop project but I’ve also included it in the sample that you can download from the link at the end of the article. In order to produce a thumbnail, we first have to obtain a decoding frame object that WIC can use. Like with WPF, that object will contain the command to decode a frame from the source image but won’t do the actual decoding until necessary. Getting the frame is done by reading the image bytes through a special WIC stream that you can obtain from a factory object that we’re going to reuse for lots of other tasks: var photo = File.ReadAllBytes(photoPath); var factory = (IWICComponentFactory)new WICImagingFactory(); var inputStream = factory.CreateStream(); inputStream.InitializeFromMemory(photo, (uint)photo.Length); var decoder = factory.CreateDecoderFromStream( inputStream, null, WICDecodeOptions.WICDecodeMetadataCacheOnLoad); var frame = decoder.GetFrame(0); We can read the dimensions of the frame using the following (somewhat ugly) code: uint width, height; frame.GetSize(out width, out height); This enables us to compute the dimensions of the thumbnail, as I’ve shown in previous articles. We now need to prepare the output stream for the thumbnail. 
WIC requires a special kind of stream, IStream (not implemented by System.IO.Stream), and doesn’t directly understand .NET streams. It does provide a number of implementations, but not exactly what we need here. We need to output to memory because we’ll want to persist the same bytes to the response stream and to a local file for caching. The memory-bound version of IStream requires a fixed-length buffer, but we won’t know the length of the buffer before we resize. To solve that problem, I’ve built a derived class from MemoryStream that also implements IStream. The implementation is not very complicated, it just delegates the IStream methods to the base class, but it involves some native pointer manipulation. Once we have a stream, we need to build the encoder for the output format, which could be anything that WIC supports. For web thumbnails, our only reasonable options are PNG and JPEG. I explored PNG because it’s a lossless format, and because WIC does support PNG compression. That compression is not very efficient though, and JPEG offers good quality with much smaller file sizes. On the web, it matters. I found the best PNG compression option (adaptive) to give files that are about twice as big as 100%-quality JPEG (an absurd setting), 4.5 times bigger than 95%-quality JPEG and 7 times larger than 85%-quality JPEG, which is more than acceptable quality. As a consequence, we’ll use JPEG. The JPEG encoder can be prepared as follows: var encoder = factory.CreateEncoder( Consts.GUID_ContainerFormatJpeg, null); encoder.Initialize(outputStream, WICBitmapEncoderCacheOption.WICBitmapEncoderNoCache); The next operation is to create the output frame: IWICBitmapFrameEncode outputFrame; var arg = new IPropertyBag2[1]; encoder.CreateNewFrame(out outputFrame, arg); Notice that we are passing in a property bag. This is where we’re going to specify our only parameter for encoding, the JPEG quality setting: var propBag = arg[0]; var propertyBagOption = new PROPBAG2[1]; propertyBagOption[0].pstrName = "ImageQuality"; propBag.Write(1, propertyBagOption, new object[] { 0.85F }); outputFrame.Initialize(propBag); We can then set the resolution for the thumbnail to be 96, something we weren’t able to do with WPF and had to hack around: outputFrame.SetResolution(96, 96); Next, we set the size of the output frame and create a scaler from the input frame and the computed dimensions of the target thumbnail: outputFrame.SetSize(thumbWidth, thumbHeight); var scaler = factory.CreateBitmapScaler(); scaler.Initialize(frame, thumbWidth, thumbHeight, WICBitmapInterpolationMode.WICBitmapInterpolationModeFant); The scaler is using the Fant method, which I think is the best looking one even if it seems a little softer than cubic (the original post shows zoomed comparison images for Cubic, Fant, Linear and Nearest neighbor to better show the defects). We can write the source image to the output frame through the scaler: outputFrame.WriteSource(scaler, new WICRect { X = 0, Y = 0, Width = (int)thumbWidth, Height = (int)thumbHeight }); And finally we commit the pipeline that we built and get the byte array for the thumbnail out of our memory stream: outputFrame.Commit(); encoder.Commit(); var outputArray = outputStream.ToArray(); outputStream.Close(); That byte array can then be sent to the output stream and to the cache file. Once we’ve gone through this exercise, it’s only natural to wonder whether it was worth the trouble. 
I ran this method, as well as GDI and WPF resizing, over thirty twelve-megapixel images for JPEG qualities between 70% and 100% and measured the file size and time to resize. Here are the results (two charts in the original post: size of resized images, and time to resize thirty 12 megapixel images). Not much to see on the size graph: sizes from WPF and WIC are equivalent, which is hardly surprising as WPF calls into WIC. There is just an anomaly at 75% for WPF that I noted in my previous article and that disappears when using WIC directly. But overall, using WPF or WIC over GDI represents a slight win in file size. The time to resize is more interesting. WPF and WIC get similar times, although WIC seems to always be a little faster. Not surprising considering WPF is using WIC. The margin of error on these results is probably fairly close to the time difference. As we already knew, the time to resize does not depend on the quality level, only the size does. This means that the only decision you have to make here is size versus visual quality. This third approach to server-side image resizing on ASP.NET seems to converge on the fastest possible one. We have marginally better performance than WPF, but with some additional peace of mind that this approach is sanctioned for server-side usage by the Windows Imaging team. It still doesn’t work in medium trust. That is a problem and shows the way for future server-friendly managed wrappers around WIC. The sample code for this article can be downloaded from: http://weblogs.asp.net/blogs/bleroy/Samples/WicResize.zip The benchmark code can be found here (you’ll need to add your own images to the Images directory and then add those to the project, with content and copy if newer in the properties of the files in the solution explorer): http://weblogs.asp.net/blogs/bleroy/Samples/WicWpfGdiImageResizeBenchmark.zip WIC tools can be downloaded from: http://code.msdn.microsoft.com/wictools To conclude, here are some of the resized thumbnails at 85% Fant:

    Read the article

  • Failed to load viewstate. The control tree into which viewstate is being loaded...etc

    - by alaa9jo
    Two days ago, a colleague of mine tried to publish an ASP.NET website (which was built in VS2008 using framework 3.5) to our server. He configured everything in IIS (he made sure that the selected ASP.NET version was 2.0) and launched the website. At first it was working great, but when he clicked on a specific treeview... BOOM: "Failed to load viewstate. The control tree into which viewstate is being loaded must match the control tree that was used to save viewstate during the previous request. For example, when adding controls dynamically, the controls added during a post-back must match the type and position of the controls added during the initial request." On that page there were these controls: a TreeView and a PlaceHolder. When the user selects any node, its controls are created dynamically inside that placeholder. The first time this works fine, but when (s)he selects another node, the issue appears. He called me to help him with this issue; for me, this was the first time I had seen it. I scratched my head, then decided to eliminate the possible causes one by one. On the development machine it worked perfectly. He published the website to the local IIS and again it worked perfectly. I took a copy of the website and published it on my laptop, but no issues at all, so this means it's not an issue in the code. So there was something missing or wrong on our server [it runs Windows Server 2003]. We went to the server and checked the web.config and the configuration in IIS... nothing wrong so far. So I decided to check whether framework 3.5 was installed or not, and the answer: it wasn't installed. Of course he had assumed that it was installed, and there was nothing in the "ASP.NET version" setting in IIS to tell him it wasn't, because frameworks 3.0 and 3.5 will not be listed there [2.0 will be listed there instead]. The only way to check whether it is installed is to look for the framework in this path: [WINDOWS Folder]\Microsoft.NET\Framework, or to check whether it appears in Add or Remove Programs. The obvious solution for his case: we installed Framework 3.5 SP1 on our server, restarted the machine and it worked! If anyone has faced the same issue and solved it using the same solution or a different one, please post it here to share the experience.
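
    For reference, the usual coding cause of this error (as opposed to the missing-framework cause described above) is dynamic controls that are not recreated identically on postback. A minimal sketch of the standard pattern, with hypothetical names: recreate the controls in OnInit, before viewstate is loaded:

        // Hypothetical page code-behind: recreate dynamic controls on EVERY request,
        // in the same order and with the same IDs, so viewstate can be matched up.
        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);
            string nodeValue = (string)Session["SelectedNode"]; // hypothetical storage
            if (nodeValue != null)
            {
                TextBox tb = new TextBox { ID = "tbNode_" + nodeValue };
                PlaceHolder1.Controls.Add(tb); // same type and position as the last request
            }
        }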

    Read the article

  • No USB 04b4:0307 mike input after 10.04

    - by Papou
    I am using a USB phone that is in fact a so-called "sound card" based on the 04b4:0307 chip. In fact, I have two different phones using 04b4:0307, and I have a USB sound key too. This, I believe, is the start of why they call 04b4:0307 "ubiquitous" (instead of oh-four-bee...). But not "eternal". The mike worked in Ubuntu 9.10 and 10.04 but not in later releases. "Not working" means that 04b4:0307 shows up in Sound Preferences but its VU meter stays mute. I have posted the full lsusb output and the results of tests on various systems here: http://www.papou.byethost9.com/tmp/1043601.html Note: the tests were done in VirtualBox (thankfully). But Ubuntu's Unity no longer works in VirtualBox, so I used the Linux Mint equivalent. That's fate. I could not find any problem report close enough to mine. What should be my next step? I believe the problem occurs in the snd-usb-audio module. One thing I might try, if I knew how, is building a DEB from the module's 10.04 source. I can hack DEBs. Any hint is welcome on how to make a DEB that overrides a module packaged with the kernel; I mean that the newly installed module should take precedence and be loaded instead of the module shipped with the kernel. TIA!

    Read the article

  • Entity Framework &amp; Transactions

    - by Sudheer Kumar
    There are many instances where we might have to use transactions to maintain data consistency. With Entity Framework it is conceptually a little different.

    Case 1 – Transaction between multiple SaveChanges() calls: if you just use a transaction scope here, Entity Framework (EF) will use a distributed transaction instead of a local one. The reason is that EF opens and closes the connection only when required, which means it may use two different connections for different SaveChanges() calls. To resolve this, open the connection explicitly so the transaction does not span multiple connections:

        try
        {
            using (TransactionScope ts = new TransactionScope())
            {
                context.Connection.Open();

                // Operation 1
                context.SaveChanges();

                // Operation 2
                context.SaveChanges();

                // At the end, commit the transaction
                ts.Complete();
            }
        }
        catch (Exception ex)
        {
            // Handle exception
        }
        finally
        {
            if (context.Connection.State == ConnectionState.Open)
            {
                context.Connection.Close();
            }
        }

    Case 2 – Transaction between DB and non-DB operations: for example, assume you have a table that keeps track of e-mails to be sent, and you want to update details like DateSent only once the mail has been successfully sent by the e-mail client:

        Email eml = GetEmailToSend();
        eml.DateSent = DateTime.Now;

        using (TransactionScope ts = new TransactionScope())
        {
            // Update the database
            context.SaveChanges();

            // If the update succeeded, send the e-mail using the SMTP client
            smtpClient.Send(message); // 'message' is the MailMessage built from eml

            // If the send succeeded, commit
            ts.Complete();
        }

    Here, since you are dealing with a single context.SaveChanges(), you just need to wrap the save in the TransactionScope.

    Hope this was helpful!

    Read the article

  • SQL SERVER – Fix: Error: Compatibility Level Drop Down is Empty

    - by Pinal Dave
    I currently have SQL Server 2012 and SQL Server 2014 installed on the same machine. My job requires me to travel a lot and I like to travel light, hence I have only one computer with all the software installed on it. I could install virtual machines, but since I was able to install SQL Server 2012 and SQL Server 2014 side by side, I just went ahead with that option. Then one day, when I opened up my SQL Server 2014 instance and went to the properties of my database, I realized that the drop-down box for compatibility level was empty. I just couldn't select anything there or see the current compatibility level of the database. This was a first for me, so I was a bit confused and tried to search online. Upon searching I realized that even if I was not the first, there were very few questions on this subject on various forums and no convincing answer to this problem online. That meant I was pretty much the first one to face this error. See the image of the situation I was facing. [screenshot in the original post] I decided to resolve this issue as soon as I could. I spent a few minutes here and there and realized my mistake: I had connected to the SQL Server 2014 instance from SQL Server 2012 Management Studio, hence I was not able to see any compatibility-related settings. Once I connected to the SQL Server 2014 instance with SQL Server 2014 Management Studio, the issue was resolved. Well, simple things sometimes keep us very busy. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
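    As a side note, even when the Management Studio dialog cannot display it, the compatibility level can always be read with a plain query against sys.databases. Here is a minimal sketch in C#; the connection string is an assumption, so point it at the instance in question:

        using System;
        using System.Data.SqlClient;

        class ShowCompatibilityLevels
        {
            static void Main()
            {
                // Hypothetical connection string; adjust the server/instance name.
                const string connectionString = @"Server=.\SQL2014;Database=master;Integrated Security=true";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("SELECT name, compatibility_level FROM sys.databases", connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // compatibility_level is 110 for SQL Server 2012 and 120 for SQL Server 2014.
                            Console.WriteLine("{0}: {1}", reader["name"], reader["compatibility_level"]);
                        }
                    }
                }
            }
        }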

    Read the article

  • Rolling your own Hackathon

    - by Terrance
    Background Info: Hey, I pitched the idea of a company hackathon that would donate our time to a charity, working on a project (for free) to improve morale in my company and increase developer cohesion. As it turns out most like the idea but, guess who's gonna be the one to put it together. lol Yeah, me. I should add that we are a fairly small shop with about 10-12 programmers (some pull double duty as programmers, interns, etc.), so that might make things a bit easier. Base Question: While I am by no means a project manager or at any level of authority (I'm an entry-level guy), I was wondering if anyone knew the best approach for someone in my position to put together such an event, possibly with (some) company backing. Or, for that matter, does anyone have any helpful advice to pass along to a young padawan? So far: as of right now it is just an idea, so to start with I would presumably have to put together some sort of proposal and do some of that office stuff that I became a programmer to steer clear of, to some extent.

    Read the article

  • Windows Azure Evolution &ndash; TFS Integration (WAWS Part 2)

    - by Shaun
    So this is the fourth blog post about the new features of Windows Azure and the second part about Windows Azure Web Sites. It does not focus only on WAWS, though, since the feature I'm going to introduce is available in both Windows Azure Web Sites and Windows Azure Cloud Services (a.k.a. hosted services). In the previous post I talked about Windows Azure Web Sites and how to use its gallery to build a WordPress personal blog without coding. Besides the gallery, we can create an empty web site and upload our website in various ways. And one of the highlight features here is that we can integrate our web site with a source control service, such as TFS or Git, so that it is deployed automatically once a new commit or build is available.

    Create New Empty Web Site
    In the developer portal, when creating a new web site, we can select the QUICK CREATE item. This creates an empty web site with only one shared instance and without any database associated. Let's specify the URL, region and subscription and click OK. After a few seconds our website will be ready, and we can click the BROWSE button to open this empty website. As you can see, there is a welcome page available on my website even though I didn't upload or deploy anything. This means the website is charged even before anything is deployed, similar to a cloud service (hosted service), because once we create a website the Windows Azure platform arranges a hosting process (w3wp.exe) on its group of virtual machines.

    Create Project in TFS Preview Service and Set Up the Link
    Currently Windows Azure Web Sites can integrate with TFS and Git as deployment sources, and for TFS it only supports the Microsoft TFS Preview Service for now. I will not go deep into how to use the TFS Preview Service in this post, but once we click into the website we just created and then click "Set up TFS publishing", a dialog appears that helps us connect to the service. If you don't have an account you can click the link shown below to request one. Assuming we already have a TFS service account, we first need to create a new project: go to your TFS service website and create one, giving the project name, description and process template. Then, back in the developer portal, click the "Set up TFS publishing" link. In the pop-up window, provide your TFS service URL and click the "Authorize now" link, then click "Accept" to allow Windows Azure to connect to the TFS service. Back in the developer portal, all the projects in the account are listed; just select the one we created and click OK. The web site is now linked to the TFS project I specified, and finally the portal shows something like the screenshot below, which means the web site has been linked to TFS successfully.

    Work with TFS Preview Service in VS2010
    In the figure above there are some links guiding us through connecting to the TFS server from Visual Studio 2010 and 2012 RC. If you are using Visual Studio 2012 RC you don't need any extension, but if you are using Visual Studio 2010 you must have SP1 and KB2581206 installed. To connect to the TFS service, open Visual Studio and, in Team Explorer, add a new TFS server, pasting in the URL of the TFS service from the developer portal. Then select the project we just created and it will be listed in Team Explorer. Now let's start to build our website.
    Since the website we are going to build will be deployed to WAWS, it is NOT a cloud service and NOT a web role. So in this case we need to create a normal ASP.NET web application, for example an ASP.NET MVC 3 web application. Next, right-click on the solution, select "Add Solution to Source Control" and pick the project we just created, then check the code in. Once the check-in finishes we can see that a build is running on the TFS server, and if we go back to the developer portal we will see a deployment running on our web site's deployment page. In fact, once we link our web site to TFS, a new build definition is created in the TFS project. It is triggered by each check-in and deploys to the linked web site automatically, so once our code has compiled it is published to our web site from the TFS server. When the build and deployment have finished we can see the deployment marked as active in the developer portal, and we can browse the web site that was created in my Visual Studio and deployed by my TFS.

    Continuous Deployment through VS and TFS
    A big benefit of TFS publishing is continuous deployment. If I change some code in Visual Studio, for example updating some text on the home page, and check in my changes, a new build is triggered and deployed to my WAWS automatically. Even better, if we want to roll back to a previous version we can just select an existing deployment listed in the portal and click REDEPLOY at the bottom.

    Q&A: Can a Web Site use Storage and work with a Worker Role?
    Stacy asked a question on my previous post: can a web site use Windows Azure Storage and, furthermore, work with a worker role? Since web sites are deployed on Windows Azure virtual machines in the data center, they can use all Windows Azure features, such as storage, SQL databases, CDN, etc. But since a web site is normally a standard ASP.NET web application, a PHP website or NodeJS, the Windows Azure SDK is not referenced by default; we can add it ourselves, though. In our sample project, right-click on the MVC project and click "Manage NuGet Packages". In the dialog, search for the Windows Azure packages and install "Windows Azure Storage". We then have the assemblies to access Windows Azure storage: tables, queues and blobs. Since I already have a storage account, let's have a quick demo that just lists all the blobs in a container. The code would be like this.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    namespace WAASTFSDemo.Controllers
    {
        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                ViewBag.Message = "Welcome to Windows Azure!";

                // Connect to the storage account and list all blobs in the "shared" container.
                var credentials = new StorageCredentialsAccountAndKey("[STORAGE_ACCOUNT]", "[STORAGE_KEY]");
                var account = new CloudStorageAccount(credentials, false);
                var client = account.CreateCloudBlobClient();
                var container = client.GetContainerReference("shared");
                ViewBag.Blobs = container.ListBlobs().Select(b => b.Uri.AbsoluteUri);

                return View();
            }

            public ActionResult About()
            {
                return View();
            }
        }
    }

    @{
        ViewBag.Title = "Home Page";
    }

    <h2>@ViewBag.Message</h2>
    <p>
        To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.
    </p>
    <div>
        <ul>
            @foreach (var blob in ViewBag.Blobs)
            {
                <li>@blob</li>
            }
        </ul>
    </div>

    Then just check in the code and it will be deployed to my web site. Finally, we can see the blobs from my storage account listed on the page. This is just an example, but it proves that web sites can connect to storage: tables, blobs and queues alike. So the answer to Stacy is "yes": a web site can use queue storage to work with a worker role.

    Summary
    In this post I demonstrated how to integrate Windows Azure Web Sites with TFS. Our website can be built, uploaded and deployed automatically by the TFS service; all we need to do is provide the TFS name and select the project. And it is not only Windows Azure Web Sites: in this upgrade, Windows Azure Cloud Services (hosted services) can be published through TFS as well, in a very similar way to what we have shown here. Currently only the Microsoft TFS Preview Service can be integrated with Windows Azure, but I think in the future we will be able to link the TFS in our enterprise, or third-party TFS hosts such as CodePlex, to Windows Azure.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How do I make an AJAX block crawlable?

    - by Vikas Gulati
    I have a block with a few tabs. When the user clicks a tab, the content of that block gets loaded. Now I would like to make it crawlable by search engines while maintaining the good user experience. I figured out a couple of alternatives, but each one has its own shortcomings. The approaches I could come up with: 1) Use hashbangs and then use this. But hashbangs are not good and a thing of the past now; secondly, it would make my content crawlable only by Googlebot, as Yahoo and Bing don't support it. 2) Use a parameterized GET fallback for when JavaScript doesn't work (see the sketch below). This would work for all bots and would also be nice because it works without JavaScript. But then it would create duplicates of my page: this block is only a very small section of my page and I have around 5-6 tabs, so that means that many duplicates! Doing this without AJAX is not an option, as it would only increase the page load time: all these blocks have heavy media content in them!
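    A minimal sketch of the second option, assuming an ASP.NET MVC backend (the controller, the view names and the LoadTabContent helper are all hypothetical): the same URL serves the full page to crawlers and no-JavaScript users and only the fragment to AJAX calls, and one way to address the duplicate concern is a rel="canonical" link on the fallback pages pointing at the main page.

        using System.Web.Mvc;

        public class TabsController : Controller
        {
            // GET /Tabs/Show?tab=reviews, reachable from plain <a href="..."> links.
            public ActionResult Show(string tab)
            {
                var model = LoadTabContent(tab); // hypothetical data access

                if (Request.IsAjaxRequest())
                {
                    // Script-enhanced path: return only the block's markup.
                    return PartialView("_TabContent", model);
                }

                // Fallback for crawlers and users without JavaScript: render the
                // whole page; its <head> can carry a rel="canonical" link to the
                // main page so the 5-6 tab URLs are not treated as duplicates.
                return View("PageWithTab", model);
            }

            private object LoadTabContent(string tab)
            {
                // Placeholder for whatever loads the tab's heavy media content.
                return new { Name = tab };
            }
        }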

    Read the article
