Search Results

Search found 10789 results on 432 pages for 'experience'.

Page 234/432

  • Taking our Friendships to the next level.

    - by RedAndTheCommunity
    Red Gate have been running the Friends of Red Gate program for years now, and over that time we've built some great relationships with some truly awesome members of the SQL and .NET communities. When I took over the running of the program from Annabel in 2011, I was overwhelmed by the enthusiasm and commitment of our Friends. There were just so many of them, however, that it was hard to make the most of the relationships we had with people, and I wanted to fix that. I decided to survey all our Friends to find out what they wanted to get out of, and put into, being in the Friends of Red Gate (FoRG) program. From the results of that survey, I identified 30 FoRGs who were really willing and able to go that step further to help Red Gate improve their tools, improve their relationship with the community, and improve the Friends of Red Gate program. Those 30 Friends of Red Gate have been awarded 'FoRG+' status. That means they'll:

      - Have a closer relationship with the product teams, by getting involved in projects
      - Have even more access to the inside track about the tools they're interested in
      - Get the opportunity to come visit us at the Red Gate office and really influence the development of the tools

    Plus more, depending on how the individual FoRG+ wants to work with us. This doesn't mean I've forgotten our other Friends; I'm working on ways to improve their experience of the Friends of Red Gate program, and I'll write about them in another post. If you're an existing Friend of Red Gate and you're interested in finding out how to get involved in the FoRG+ program, I'd love to chat to you. For anyone interested in joining the Friends of Red Gate program, take a look at the web page dedicated to the program, and get in touch at [email protected] to be put on the waiting list for our 2013 program.

    Read the article

  • Roadmap for Thinktecture IdentityServer

    - by Your DisplayName here!
    I got asked today if I could publish a roadmap for thinktecture IdentityServer (idsrv in short). Well, I got a lot of feedback after B1, and one of the biggest points was the data access layer. So I made two changes:

      - I moved the configuration database access code to EF 4.1 code first. That makes it much easier to change the underlying database: it is now just a matter of changing the connection string to use real SQL Server instead of SQL Compact. Important when you plan to scale out.
      - I included the ASP.NET Universal Providers in the download. This adds official support for SQL Azure, SQL Server and SQL Compact for the membership, roles and profile features. Unfortunately the Universal Providers use a different schema than the original ASP.NET providers (that sucks, btw!), so I made them optional. If you want to use them, go to web.config and uncomment the new providers.

    Then there are some other small changes:

      - The relying party registration entries now have extra fields for data that you want to couple with the RP. One use case could be giving the UI a hint about how the login experience should look for each RP; this allows a different look and feel for different relying parties. I also included a small helper API that you can use to retrieve the RP record based on the incoming WS-Federation query string.
      - WS-Federation single sign-out now conforms to the spec.
      - I made certificate-based endpoint identities for SSL endpoints optional; they caused some problems with configuration and versioning of existing clients.

    I hope I can release the RC in the next few days. If there are no major issues, there will be an RTM very soon!
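
    Since the configuration store is now EF 4.1 code first, moving from SQL Compact to full SQL Server really is just a connection string swap. A minimal web.config sketch of the idea (the connection string name and catalog here are hypothetical, not taken from the idsrv download; check the shipped config for the real entry name):

        <connectionStrings>
          <!-- default: local SQL Compact file -->
          <add name="IdentityServerConfiguration"
               connectionString="Data Source=|DataDirectory|IdentityServer.sdf"
               providerName="System.Data.SqlServerCe.4.0" />
          <!-- scale-out: point the same entry at a real SQL Server instead
          <add name="IdentityServerConfiguration"
               connectionString="Data Source=.;Initial Catalog=IdentityServer;Integrated Security=True"
               providerName="System.Data.SqlClient" />
          -->
        </connectionStrings>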

    Read the article

  • 10 Tech Products Ahead of Their Time [Video]

    - by Jason Fitzpatrick
    Sometimes a product just can't help but be too far ahead of its time to be adopted. Check out these 10 products that had their moment of glory a moment (or a decade) too soon. At Mashable they've gathered up 10 products that hit the market too soon for people to really appreciate them. Among them, as seen in the video above, is a super simple internet-focused computer. At the time it hit the market, people simply didn't get the value of having a cheap, easy-to-use internet terminal. It probably didn't help much that the 1990s internet didn't have the plethora of powerful and useful web-based applications we have now. Nonetheless, we now have tons of lightweight and "underpowered" devices focused on the internet experience (like netbooks, iPads, smart phones, chromebooks, and more). Hit up the link below to see the 9 other gems from their collection of products ahead of their time. 10 Tech Products Ahead of Their Time [Mashable]

    Read the article

  • How-To Backup, Swap, and Update Your Wii Game Saves

    - by Jason Fitzpatrick
    Whether you want to back up your game saves because you've worked so hard on them, or you want to import game saves precisely so you don't have to work so hard, we've got you covered. Image adapted from icon set by GasClown. There are a multitude of reasons you might want to export and import game saves from your Wii, including: saving the progress on your favorite games before sending in your Wii for service, copying the progress to a friend's or your secondary Wii, and importing saved games from the web or your friend's Wii so that you don't have to bust your ass to unlock all the specialty items yourself. (Here's looking at you, Mario Kart and House of the Dead: Overkill.)

    Read the article

  • South Florida .Net Code Camp - February 12th, 2011

    - by Sam Abraham
    Later this week, I will be heading to our annual South Florida .Net Code Camp, an all-day free "Geek Fest" taking place on February 12th, 2011. This year's code camp will be conveniently taking place at Nova Southeastern University in Ft Lauderdale. With more than 700 already registered, this year's event is bound to exceed last year's registration and attendance. We are also fortunate to have secured the backing of a large number of our kind sponsors, supporters and volunteers, with our efforts led by our chief organizer, Fladotnet founder and Microsoft MVP, Dave Noderer. As a member of the volunteer organizing team, I have gained good exposure to what it takes to run a code camp, and I have come to appreciate the tremendous amount of work such a large event takes to put together: logistics such as venue, food, speaker registration and scheduling, and website updates, in addition of course to the essential outreach efforts necessary to secure sponsorships. As Dave puts it, Code Camp is a great venue for those who want to gain exposure and experience as technical speakers to try it out, just as much as it is a forum for experienced speakers to share the latest on their topics of interest. So far, 65 speakers are already scheduled to speak, bringing us an array of diverse topics. I will be speaking on ASP.Net MVC3 and the Razor view engine, and will present a brief introduction to NuGet. Below is a brief abstract of the session. For more information on code camp and to register, please visit http://www.fladotnet.com/codecamp/Default.aspx Hope to see you there!

    Diving into ASP.Net MVC 3 and the Razor View Engine
    The first few minutes of this session will bring those who might not have previously used or learned about MVC up to speed with the necessary rules and conventions for an MVC project. We will then cover the latest additions to ASP.Net MVC 3 and discuss the value it brings with its new Razor View Engine and the various project template improvements made in Visual Studio 2010. We will also explore how to leverage both the Razor and ASPX view engines in one project. Audience participation is strongly encouraged and will be solicited.

    Read the article

  • From physics to Java programmer?

    - by inovaovao
    I'm a physics PhD with little actual programming experience. I've always liked programming and played around with Basic and Pascal (also VB and Delphi) as a teen, but the largest actual project I completed was an assignment for the introductory computer science class in university, where I wrote a nice little program (about 1500 lines of Pascal) to display functions of 2 variables in 3D. I've also had a couple of other projects in the few-hundred-lines range, but during my PhD I didn't have (or take) the time to program more (string theory is hard, guys!), besides playing around with Ruby. Now I've decided that I'm more interested in programming than in physics and have started to learn Java (hoping to pass the certification exam next week) and OO design. Still, I have trouble deciding what to focus on next (Java EE? Web development? Algorithms and C programming?) in order to maximize my employment chances. Bear in mind that I'm aiming (mostly) at the Swedish job market and that I'm 30 years old. So, the questions: Do you think I have a chance to start and make a career in IT and programming, coming from physics? What would be the best strategy to maximize my value in the field? Do you have suggestions as to where my physics background might be useful?

    Read the article

  • SQL Rally Pre-Con: Data Warehouse Modeling – Making the Right Choices

    - by Davide Mauri
    As you may have already learned from my old post or Adam's or Kalen's posts, there will be two SQL Rally events in Northern Europe. At the Stockholm SQL Rally, with my friend Thomas Kejser, I'll be delivering a pre-con on Data Warehouse Modeling: Data warehouses play a central role in any BI solution. It's the back end upon which everything in years to come will be created. For this reason, it must be rock solid and yet flexible at the same time. To develop such a data warehouse, you must have a clear idea of its architecture, a thorough understanding of the concepts of Measures and Dimensions, and a proven, engineered way to build it so that quality and stability can go hand-in-hand with cost reduction and scalability. In this workshop, Thomas Kejser and Davide Mauri will share all the information they have learned since they started working with data warehouses, giving you the guidance and tips you need to start your BI project in the best way possible: avoiding errors, making implementation effective and efficient, paving the way for a winning Agile approach, and helping you define how your team should work so that your BI solution will stand the test of time. You'll learn:

      - Data warehouse architecture and justification
      - Agile methodology
      - Dimensional modeling, including Kimball vs. Inmon, SCD1/SCD2/SCD3, Junk and Degenerate Dimensions, and Huge Dimensions
      - Best practices, naming conventions, and lessons learned
      - Loading the data warehouse, including loading Dimensions and loading Facts (Full Load, Incremental Load, Partitioned Load)
      - Data warehouses and Big Data (Hadoop)
      - Unit testing
      - Tracking historical changes and managing large sizes

    With all the Self-Service BI hype, the data warehouse is becoming more and more central every day, since if everyone is able to analyze data using self-service tools, it's better for them to rely on correct, uniform and coherent data. Already 50 people have registered for the workshop, and seats are limited, so don't miss this unique opportunity to attend this workshop that is really a unique combination of years and years of experience! http://www.sqlpass.org/sqlrally/2013/nordic/Agenda/PreconferenceSeminars.aspx See you there!
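
    As a taste of the dimensional-modeling material, here is a minimal sketch of the Type 2 slowly changing dimension (SCD2) idea the agenda mentions: instead of overwriting a changed attribute, the current dimension row is closed off and a new row is inserted, so history is preserved. The table layout and helper below are invented for illustration, not taken from the workshop:

        from datetime import date

        # Each dimension row carries validity dates and a current-row flag.
        # Keys are surrogate keys; each row is one version of a customer record.
        customer_dim = {
            1: {"customer_id": "C042", "city": "Stockholm",
                "valid_from": date(2010, 1, 1), "valid_to": None, "is_current": True},
        }

        def apply_scd2_change(dim, customer_id, new_city, change_date):
            """Close the current row for customer_id and insert a new version (Type 2)."""
            for row in dim.values():
                if row["customer_id"] == customer_id and row["is_current"]:
                    row["valid_to"] = change_date      # close the old version
                    row["is_current"] = False
            new_key = max(dim) + 1                     # next surrogate key
            dim[new_key] = {"customer_id": customer_id, "city": new_city,
                            "valid_from": change_date, "valid_to": None,
                            "is_current": True}

        apply_scd2_change(customer_dim, "C042", "Uppsala", date(2012, 6, 1))
        # Facts recorded before 2012-06-01 still join to the Stockholm row;
        # later facts join to the new Uppsala row.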

    Read the article

  • Speaking at SPTechCon Boston 2012

    - by Brian Jackett
    I will be speaking at SPTechCon Boston 2012. This will be my 3rd time speaking at SPTechCon and 4th time attending. The conference has been growing steadily over the past few years and is one of the biggest non-Microsoft-run conferences for SharePoint in the US. I'll be presenting two topics which I have given before, but this time around with some updated content. Registration is currently open, and you can save $200 (on top of the current early bird discount of $400) by using the code "JACKETT" when you register. I highly recommend joining for valuable learning and networking.

    Where: SPTechCon Boston 2012

    Title: PowerShell for the SharePoint 2010 Developer
    Audience and Level: Developer, Intermediate
    Abstract: PowerShell is not just for SharePoint 2010 administrators. Developers also get access to a wide range of functionality with PowerShell. In this session we will dive into using PowerShell with the .Net framework, web services, and native SharePoint cmdlets. We will also cover some of the more intermediate to advanced techniques available within PowerShell that will improve your work efficiency. Not only will you learn how to automate your work, but you will also learn ways to prototype solutions faster. This session is targeted at developers and assumes a basic familiarity with PowerShell.
    Slides and Code download: coming soon

    Title: Integrating Line-of-Business Applications with SharePoint 2010
    Audience and Level: Developer, Intermediate
    Abstract: One of the biggest value-adding enhancements in SharePoint 2010 is the Business Connectivity Services (BCS). In this session, we will overview the BCS, demonstrate connecting line-of-business applications and external systems to SharePoint through external content types, and walk through surfacing that data with external lists. This session is targeted at developers. No prior experience with the BCS is required, but a basic understanding of SharePoint Designer 2010 and SharePoint solutions is suggested.
    Slides and Code download: coming soon

    -Frog Out
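
    To give a flavor of the first session's topic, here is a minimal sketch (illustrative only, not material from the actual talk) of how PowerShell lets a SharePoint 2010 developer mix the .Net framework and native cmdlets in a few lines:

        # Load the SharePoint 2010 cmdlets (run on a SharePoint server)
        Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

        # Call into the .Net framework directly from the shell
        $stamp = [System.DateTime]::Now.ToString("yyyy-MM-dd HH:mm")

        # Use a native SharePoint cmdlet with standard pipeline filters
        Get-SPSite -Limit All |
            Select-Object Url, @{Name = "CheckedAt"; Expression = { $stamp }} |
            Format-Table -AutoSize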

    Read the article

  • Oracle Industrial Manufacturing Forum, Nov 8, W Hotel-Chicago

    - by Stephen Slade
    As global markets mature and new customer segments emerge, top industrial manufacturers are restructuring their businesses for growth. Oracle's annual Industrial Manufacturing Forum was created to help these companies focus on revolutionizing product and service innovation, maximizing organizational performance, and delivering exceptional customer experiences. Key themes of this year's event are redefining "Lean," transforming service, and modernizing the manufacturing enterprise. This informative forum will be held at the W Hotel and will include a keynote from Eaton's VP of IT, who led the firm through a dramatic supply chain transformation. This journey led Eaton to win the Manufacturer of the Year award in 2011 from the Managing Automation/Manufacturing Executive publication. Other featured presentations include:

      - Value of BI Applications & EAM Analytics for Industrial Manufacturing: Regal Beloit
      - Sales & Operating Planning: GE Healthcare
      - Advanced Financial Controls/Leveraging Change Controls: Eaton
      - Customer Experience (CX): Pella
      - Creating The Strategic Service Chain: Entercoms

    Register today at: MANUFACTURING_FORUM

    Oracle Industrial Manufacturing Forum
    Thursday, November 8, 2012, 9:30 a.m. – 6:00 p.m.
    W Hotel City Center, 172 West Adams Street, Chicago, IL 60603

    Click here to register now or call 1.800.820.5592 ext. 10954.

    Read the article

  • Does ssh key need to be named id_rsa?

    - by dustyprogrammer
    I have come across this problem a couple of times when creating build servers with keyed authentication, and I was wondering if anyone else has experienced it. I have a couple of keys for my current user that may connect to different machines. Let's say machine1 and machine2. I have pasted my public keys into their respective authorized_keys files, and I have named the first key id_rsa and the second key bender. When I try to connect to bender I get the following output from my verbose ssh connection:

        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /home/bozo/.ssh/identity
        debug1: Trying private key: /home/bozo/.ssh/id_rsa
        debug1: Trying private key: /home/bozo/.ssh/id_dsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    It only offers the id_rsa key and the other default names, as you can see above. Is this correct? If so, why? How do I get it to offer more keys? I know it is a problem I see only intermittently, because at home I have multiple keys without much trouble. I would also appreciate an overview of how the public and private keys interact with the client and server. I thought I had a pretty decent idea, but apparently I am missing something. Please and thank you.
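
    For reference, when no key is named explicitly, ssh only tries the default identity files (identity, id_rsa, id_dsa), which matches the output above. A non-default key such as bender has to be pointed at directly, either per invocation or per host in ~/.ssh/config; a minimal sketch using the names from this question:

        # one-off: name the key on the command line
        ssh -i ~/.ssh/bender bozo@machine2

        # persistent: ~/.ssh/config entry so ssh offers the key automatically
        Host machine2
            User bozo
            IdentityFile ~/.ssh/bender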

    Read the article

  • Probability of a trouble-free upgrade

    - by intuited
    One of the problems with recommending Ubuntu to potential future users, especially those not particularly given to technical endeavours, is that there is a chance that upgrades will break their machine, and they'll have to pay or otherwise coerce some knowledgeable person into fixing it. In my limited experience of running successive versions of Ubuntu since 8-something on a couple of different laptops, this chance is quite high. I'm not sure if I'm just unlucky with the hardware that I'm using, or if it's a result of the higher-than-average number of packages I have installed, or if upgrades are just typically problematic. So I'd like to know the likelihood, for a casual user, of doing a release upgrade, for example from 10.04 to 10.10, without experiencing any regression bugs. Obviously this is dependent on the hardware that people are running. Canonical seems to be making some efforts towards collecting data on this, for example with the "I am affected by this bug" checkbox on their issue tracker and with the laptop compatibility reports, but I've not seen anything comprehensive. I'm hoping for an objective reference here, for example a study carried out by relatively unbiased individuals. However, anecdotal evidence is probably useful too.

    Read the article

  • File format for animated scene

    - by stephelton
    I've got a custom OpenGL-based rendering engine and I'd like to add support for cinema-type scene animation. The artist who is helping me uses primarily 3DS Max. I'd like a file format for exporting and importing this data. I'm also in need of a file format for skeletal animation data, which may have an impact here. I've been looking at MAXScript to manually export this stuff, which would buy me the most flexibility, but I have virtually no experience with 3DS Max itself, so I get a little lost when it comes to terminology. So I'd like to know what file formats exist for animated scene data, and whether they are appropriate for my use (my fear is that they will be way too broad for my fairly simple needs). The way I view animated scene data is basically a bunch of references to [animated] models, with keyframe-based matrices describing their orientation over time, plus probably some special camera handling for perspective. I might also want some event-type mechanism for adding/removing objects. Is this a sane concept?
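
    For what it's worth, the mental model described above fits in a tiny data structure, which is a useful benchmark when judging whether an interchange format is overkill. A hedged Python sketch, with all names and fields invented for illustration:

        from dataclasses import dataclass, field
        from typing import List, Optional, Tuple

        @dataclass
        class Keyframe:
            time: float                # seconds into the scene
            matrix: List[List[float]]  # 4x4 transform at this time

        @dataclass
        class SceneObject:
            model_ref: str             # path of the (possibly skinned) model
            keyframes: List[Keyframe] = field(default_factory=list)  # sorted by time

        @dataclass
        class Scene:
            objects: List[SceneObject] = field(default_factory=list)
            # (time, "add"/"remove", model_ref) events for spawning/despawning
            events: List[Tuple[float, str, str]] = field(default_factory=list)
            camera: Optional[SceneObject] = None  # perspective camera, animated the same way

        # A one-object scene: a door that holds its pose between two keyframes.
        identity = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]]
        door = SceneObject("models/door", [Keyframe(0.0, identity), Keyframe(2.0, identity)])
        scene = Scene(objects=[door], events=[(0.0, "add", "models/door")])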

    Read the article

  • BDD/TDD vs JAD?

    - by Jonathan Conway
    I've been proposing that my workplace implement Behavior-Driven Development, by writing high-level specifications in a scenario format, and in such a way that one could imagine writing a test for each of them. I do know that working against testable specifications tends to increase developer productivity, and I can already think of several examples where this would be the case on our own project. However, it's difficult to demonstrate the value of this to the business. This is because we already have a Joint Application Development (JAD) process in place, in which developers, management, user-experience and testers all get together to agree on a common set of requirements. So, they ask, why should developers work against the test cases created by testers? Those are for verification and are based on the higher-level specs created by the UX team, which the developers currently work from. This, they say, is sufficient for developers, and there's no need to change how the specs are written. They seem to have a point. What is the actual benefit of BDD/TDD if you already have a test team whose test cases are fully compatible with the higher-level specs currently given to the developers?
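
    For context, the scenario format proposed above usually looks something like the following Given/When/Then sketch (a generic made-up example, not one of our actual requirements):

        Feature: Account withdrawal
          Scenario: Withdrawing more than the balance is refused
            Given an account with a balance of $50
            When the customer attempts to withdraw $80
            Then the withdrawal is refused
            And the balance is still $50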

    Read the article

  • How can a large, Fortran-based number crunching codebase be modernized?

    - by Dave Mateer
    A friend in academia asked me for advice (I'm a C# business application developer). He has a legacy codebase in the medical imaging field which he wrote in Fortran. It does a huge amount of number crunching using vectors. He uses a cluster (30ish cores) and has now moved towards a single workstation with 500ish GPUs in it. The question is where to go next with the codebase, so that: other people can maintain it over the next 10-year cycle; he can get faster at tweaking the software; and it can run on different infrastructures without recompiles. After some research from me (this is a super interesting area), some options are:

      - Use Python and CUDA from Nvidia
      - Rewrite in a functional language, for example F# or Haskell
      - Go cloud-based and use something like Hadoop and Java
      - Learn C

    What has been your experience with this? What should my friend be looking at to modernize his codebase?

    UPDATE: Thanks @Mark and everyone who has answered. The reason my friend is asking this question is that it's a perfect time in the project's lifecycle to do a review. Bringing research assistants up to speed in Fortran takes time (I like C#, especially the tooling, and can't imagine going back to older languages!!). I liked the suggestion of keeping the pure number crunching in Fortran but wrapping it in something newer, perhaps Python, as that seems to be gaining a stronghold in academia as a general-purpose programming language that is fairly easy to pick up. See Medical Imaging and a guy who has written a Fortran wrapper for CUDA: Can I legally publish my Fortran 90 wrappers to Nvidias' CUFFT library (from the CUDA SDK)?
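
    The "keep the number crunching in Fortran, wrap it in something newer" route from the update is well trodden: NumPy ships a wrapper generator, f2py, that compiles Fortran into an importable Python module. A minimal sketch, with a toy subroutine standing in for the real vector code:

        ! vecscale.f90 -- stand-in for the real number-crunching routine
        subroutine scale(v, n, factor)
            integer, intent(in) :: n
            real(8), intent(inout) :: v(n)
            real(8), intent(in) :: factor
            v = v * factor
        end subroutine scale

    Compile it and call it from Python (f2py infers the array length, so n need not be passed):

        $ python -m numpy.f2py -c vecscale.f90 -m vecscale
        >>> import numpy, vecscale
        >>> v = numpy.arange(5, dtype=numpy.float64)
        >>> vecscale.scale(v, 2.0)   # modifies v in place
        >>> v
        array([0., 2., 4., 6., 8.])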

    Read the article

  • ArchBeat Link-o-Rama for 11/11/2011

    - by Bob Rhubart
      - 3 SOA business cases, explained in a 2-minute elevator speech | Joe McKendrick
        Impress your CEO — maybe even the CFO — with some quick examples of SOA making a difference to the business.
      - ADF Faces - a logic bomb in the order of bean instantiations | Chris Muir
        Oracle ACE Director Chris Muir shares the details on "an interesting ADF logic bomb" discovered by one of his colleagues.
      - 5 key trends in cloud computing's future | David Linthicum
        "'Cloud computing' will become just 'computing' at some point," says Linthicum, "but it will still be around as an approach to computing."
      - What's New with XBRL? | John O'Rourke
        John O'Rourke shares highlights and key take-aways from the XBRL US Conference in Nashville and the XBRL International Conference in Montreal.
      - Siri-ous Business: Enterprise Apps and Global UX Considerations | Ultan O'Broin
        Ultan O'Broin ponders "the enterprise applications user experience (UX) implications of Siri" and "the global UX aspects to the Siri potential."
      - These are 11 of my favorite things! | Mike Gerdts
        Gerdts introduces his 11 favorite things about zones in Solaris 11.
      - The Power of Social Recommendations | Peter Reiser
        "Do you really want to invest to drive YOUR audience through public social networks," asks Reiser, "or do you want to have YOUR audience on your own social network which is seamlessly integrated with your web properties and business applications?"
      - Fourth Key Attribute of Cloud Computing - Provisioning | Tom Laszewski
        "Self-service provisioning of computing infrastructure in a cloud infrastructure is also very desirable, as it can cut down the time it takes to deploy new infrastructure for a new application or scale up/down infrastructure for an existing application," says Tom Laszewski.
      - Oracle Utilities Application Framework Whitepaper List as of November 2011 | Anthony Shorten
        Anthony Shorten shares an updated and nicely detailed list of Oracle Utilities Application Framework white papers.
      - Down from the Tower; Information Integration Conversation; By the Time the Architects get to Phoenix
        This week on the Oracle Technology Network Architect Home Page.

    Read the article

  • 12.10 visual performance using nvidia driver

    - by user100485
    My fresh Ubuntu 12.10 install is slow. Nothing extreme, but dragging windows, switching workspaces and things like that are sluggish and look horrible; it feels like the fps is dropping in a game. Doing some Photoshop work in Windows was even a relief! This effect gets worse if I connect my external monitor. My system is an Intel Pentium dual-core T4500 with 4 GB of memory and a "GeForce 8200M G/integrated/SSE2" graphics chip (MSI CR500 laptop). Nothing fancy, but it should be able to run OK. My "experience" in Ubuntu is set to standard. I've installed the Nvidia drivers and tried both current and experimental; the experimental drivers seem to perform a bit better, but performance is bad overall anyway. I set the mode to adaptive in the nvidia-settings tool, and it goes to the maximum setting directly and doesn't come back. Using htop I found out that compiz and the X server always use a few percent of my CPU, more than I think they should; the time consumed is 5:18 for compiz, 4:33 for /usr/bin/X and 2:41 for Google Chrome (about 30 tabs open, so not too strange, I think). What can I do to increase the visual performance? Because this makes me not want to use Ubuntu in public!

    Read the article

  • Handling changes to data types and entries in a database migration

    - by jandjorgensen
    I'm fully redesigning a site that indexes a number of articles with basic search functionality. The previous site was written about a decade ago, and I'm salvaging about 30,000 entries with data stored in less-than-ideal formats. While I'm moving from MSSQL to MySQL, I don't need to make any "live" changes, so this is not a production-level migration issue so much as a redesign. For instance, dates are stored the same way as tags/subjects about the articles, but as strings in the form "YYYYMMDDd" (the lowercase d stands for "date" in the string). Essentially, before or after I move from the previous database format to the new one, I'm going to need to do a lot of replacement of individual entries. While I understand how to do operations with regular expressions outside of databases, my database experience isn't robust enough to know the best way to handle this. What is the best (or standard) way to handle major changes like this? Is there an SQL operation I should be looking into? Please let me know if the problem isn't clear; I'm not entirely sure what kind of answer I'm looking for.
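
    As a sketch of the kind of one-off transformation involved, here is how the "YYYYMMDDd" strings could be picked out from among the tags and converted into real dates while rows move from the old database to the new one. The helper and example values are invented; this is one possible approach, not a prescription:

        import re
        from datetime import datetime

        DATE_TAG = re.compile(r"^(\d{4})(\d{2})(\d{2})d$")  # e.g. "20031107d"

        def split_tags(raw_tags):
            """Separate real subject tags from the date entries stored among them."""
            subjects, dates = [], []
            for tag in raw_tags:
                m = DATE_TAG.match(tag)
                if m:
                    y, mo, d = (int(g) for g in m.groups())
                    dates.append(datetime(y, mo, d).date())
                else:
                    subjects.append(tag)
            return subjects, dates

        # Example with one old-format row:
        subjects, dates = split_tags(["economics", "20031107d", "trade"])
        # subjects == ["economics", "trade"]; dates == [date(2003, 11, 7)]
        # The new MySQL schema can then keep dates in a proper DATE column.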

    Read the article

  • The How-To Geek Video Guide to Using Windows 7 Speech Recognition

    - by YatriTrivedi
    Ever get the desire to control your computer, Star Trek-style? With Windows 7's Speech Recognition, it's easier than you might think. Microsoft has been working on its voice command support steadily over the years: XP introduced it, Vista smoothed it, and 7 has it polished. It's strangely not advertised as a feature, even though other voice command and speech recognition programs cost hundreds of dollars. It may not be as perfect as some of them, but there's definitely something amazing about vocally telling your computer to do things and having it actually work.

    Read the article

  • How to fix bad Collada produced by FBX?

    - by David
    I tried to use the FBX SDK (2011.3.1) to load FBX files and save them as Collada files, in order to be able to import FBX files in Panda3D. Unfortunately the resulting Collada files are not usable, for several reasons, among them:

      - There's a Maya-specific extra technique inside the diffuse element:

            <diffuse>
              <texture texture="Map__2-image" texcoord="CHANNEL0">
                <extra>
                  <technique profile="MAYA">
                    <wrapU sid="wrapU0">TRUE</wrapU>
                    <wrapV sid="wrapV0">TRUE</wrapV>
                    <blend_mode>ADD</blend_mode>
                  </technique>
                </extra>
              </texture>
            </diffuse>

      - It assigns a texcoord channel name that isn't referenced anywhere else in the file (in the previous code sample, no geometry uses "CHANNEL0").
      - Every polygon is exported twice: a first time with a basic material (only diffuse color, specular color, etc.) and a second time with a textured material. This doubles the number of polygons of each model without any valuable reason.

    Anyway, the resulting Collada file cannot be opened correctly either with OpenCOLLADA or Panda3D's "dae2egg". Does anyone have any experience with how to "fix" it and make it understandable by common and well-reputed Collada importers such as OpenCOLLADA?
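
    One workable approach is to post-process the exported .dae before importing it: Collada is plain XML, so the offending <extra> blocks can be stripped with a small script. A hedged Python sketch (the namespace below is the COLLADA 1.4 schema; adjust it if the exporter targets another version, and the same loop could be extended to drop the duplicated untextured polygons):

        import xml.etree.ElementTree as ET

        NS = "http://www.collada.org/2005/11/COLLADASchema"  # COLLADA 1.4 namespace
        ET.register_namespace("", NS)  # keep the default namespace on write

        tree = ET.parse("model.dae")

        # Remove every <extra> wrapper that carries a Maya-profile technique.
        for parent in list(tree.iter()):
            for extra in list(parent):
                if extra.tag == f"{{{NS}}}extra" and any(
                    t.get("profile") == "MAYA"
                    for t in extra.findall(f"{{{NS}}}technique")
                ):
                    parent.remove(extra)

        tree.write("model_clean.dae", xml_declaration=True, encoding="utf-8")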

    Read the article

  • Installation of TeX Live via the terminal on 12.04

    - by user74713
    I need to install the upstream version of TeX Live on 12.04 to edit the manual for Ubuntu. I am having a difficult time installing it per the directions at http://ubuntu-manual.org/getinvolved/editors#install-texlive. The TeX Live documents are on my computer, but I am not able to run the install; no TeX Live program is found on my computer. Any help is greatly appreciated! ~Thanx!~

    Hello, my name is Chris. I am a student pursuing a career in technical writing. I would like to assist the Ubuntu community while gaining experience and building my resume. Below I have listed my prior attempts, with links to the posts for each attempt:

    Backports: I have tried using the official backports of the latest (2012) TeX Live via their PPA. Please refer to the link below for the particulars.
    How do I install the latest 2012 TeX Live on 12.04?

    Latex: I've also tried running Latex as suggested. Please refer to the link below for the particulars.
    http://ubuntuforums.org/showthread.php?t=2019051

    PPA causing the issue, or something else? I came across a post concerning the ability to install programs via the terminal and am wondering if it may be my problem. Please refer to the link below for the particulars.
    PPA - TeX Live Cannot install anything through Terminal - apt-get -f install
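
    For reference, the usual way to get the upstream (TUG) TeX Live onto an Ubuntu box, independent of the archive and PPA versions, is the install-tl script from CTAN. A sketch of the steps (the mirror URL may vary, and the ubuntu-manual instructions may pin a specific setup):

        $ wget http://mirror.ctan.org/systems/texlive/tlnet/install-tl-unx.tar.gz
        $ tar -xzf install-tl-unx.tar.gz
        $ cd install-tl-*           # directory name includes a datestamp
        $ sudo ./install-tl         # text-mode installer; enter I to start installing
        # afterwards, put the new binaries on PATH, e.g. for a 2012 install:
        $ export PATH=/usr/local/texlive/2012/bin/x86_64-linux:$PATH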

    Read the article

  • Need a PCIe desktop graphics card for dual-monitor

    - by Graham
    I have a mid-2008 workstation with two HD monitors supporting HDMI and DVI inputs. Since Ubuntu 11.10, I have experienced no end of trouble with my NVidia Quadro NVS 290 in TwinView dual-monitor output. Others have similar desktop TwinView woes. I want a new graphics card. Previously I asked for a graphics card recommendation and the response was the Nvidia Geforce GTS 450... but really I'm looking for someone who has actually got a working dual-monitor desktop to tell me what card they use, so I can get something that is known to work. So please, people who have no issues with their 64-bit Ubuntu 12.04 Unity 3D desktop spread across two HD-resolution external monitors (either DVI or HDMI connector), and who also run Google Chrome (which throws a spanner due to its own GPU compositing): please let me know what graphics card you have so I can buy one.

    Gathering Options

    These seem to be the Nvidia cards featuring dual DVI, but they all seem to be gaming cards; what has dual DVI and good support but is not a massive gaming card?

      - Nvidia GTS 450 (previously recommended) - 2x DVI
      - Nvidia GTX 550 Ti (used by System76) - 2x DVI
      - Nvidia GT 430 (used by System76) - 1x DVI, 1x HDMI
      - Nvidia GT 640 (found on NVidia site) - 1x DVI, 1x HDMI (also GT 620, GT 630)

    Has anyone had a good desktop dual-monitor Unity 3D experience with ATI cards?

    Read the article

  • Moving from WPF to HTML5

    - by HighCore
    I don't even know if this is the right StackExchange site to post this question on. If it isn't, please excuse me and let me know which one would be. I am an experienced WPF developer, and I seriously love the technology. I feel pretty good when working with XAML, bindings, templates, triggers, MVVM and all the WPF world of goodness. Now I have received a job offer which surpasses my current salary by 50%. It's a position working as a C# developer on an ASP.Net MVC4 + HTML5 project. I have never EVER in my whole life worked with ASP.Net or HTML, and I have never done a web page or web application before. I certainly find myself worried that I will lose all the comfort and joy I live every day coding in WPF. On the other hand, I understand, and have seen in these 3/4 months of job hunting, that there's a LOT of ASP.Net and really, really little or no WPF in the job market (at least here), so I somehow feel forced towards it. So, my question is: can anybody who has gone through this type of change tell me the pros and cons of working with these technologies from a developer's perspective? I don't care about open-source / non-Microsoft / non-desktop arguments; I care about REAL development experience in everyday work with these techs, and whether ASP.Net MVC 4 + HTML + JS is as crappy as I think it is compared to WPF.

    Read the article

  • Unit and Integration testing: How can it become a reflex

    - by LordOfThePigs
    All the programmers on my team are familiar with unit testing and integration testing. We have all worked with it. We have all written tests with it. Some of us have even felt an improved sense of trust in our own code. However, for some reason, writing unit/integration tests has not become a reflex for any of the members of the team. None of us actually feels bad when not writing unit tests at the same time as the actual code. As a result, our codebase is mostly uncovered by unit tests, and projects enter production untested. The problem with that, of course, is that once your projects are in production and are already working well, it is virtually impossible to obtain time and/or budget to add unit/integration testing. The members of my team and I are already familiar with the value of unit testing (1, 2), but it doesn't seem to help bring unit testing into our natural workflow. In my experience, making unit tests and/or a target coverage mandatory just results in poor-quality tests and slows down team members, simply because there is no self-generated motivation to produce these tests. Also, as soon as the pressure eases, unit tests are not written any more. My question is the following: are there any methods that you have experimented with that help build momentum inside the team, leading to people naturally wanting to create and maintain those tests?

    Read the article

  • Why do some programmers think there is a contrast between theory and practice?

    - by Giorgio
    Comparing software engineering with civil engineering, I was surprised to observe a different way of thinking: any civil engineer knows that if you want to build a small hut in the garden you can just get the materials and go build it, whereas if you want to build a 10-storey house you need to do quite some maths to be sure that it won't fall apart. In contrast, speaking with some programmers or reading blogs or forums, I often find a widespread opinion that can be formulated more or less as follows: theory and formal methods are for mathematicians/scientists, while programming is more about getting things done. What is normally implied here is that programming is something very practical, and that even though formal methods, mathematics, algorithm theory, clean/coherent programming languages, etc. may be interesting topics, they are often not needed if all one wants is to get things done. According to my experience, I would say that while you do not need much theory to put together a 100-line script (the hut), in order to develop a complex application (the 10-storey building) you need a structured design, well-defined methods, a good programming language, good textbooks where you can look up algorithms, and so on. So IMO (the right amount of) theory is one of the tools for getting things done. My question, then, is why do some programmers think that there is a contrast between theory (formal methods) and practice (getting things done)? Is software engineering (building software) perceived by many as easy compared to, say, civil engineering (building houses)? Or are these two disciplines really different (apart from mission-critical software, software failure is much more acceptable than building failure)?

    Read the article

  • New Cloud Security Book: Securing the Cloud by Vic Winkler

    - by user12608550
    It's rare that I read a technical book straight through; I usually read key chapters and save the rest for later reference. But Winkler's book, written by an accomplished and highly experienced security professional, was worth a complete read, cover to cover. Of the recently published cloud security books, such as:

      - Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance, by Tim Mather, Subra Kumaraswamy, and Shahed Latif; O'Reilly Media Inc, 2009
      - Cloud Computing: Implementation, Management, and Security, by John Rittenhouse and James Ransome; CRC Press, 2010
      - Cloud Security: A Comprehensive Guide to Secure Cloud Computing, by Ronald Krutz and Russell Vines; Wiley Publishing Inc, 2010

    ...Securing the Cloud is the most useful and informative about all aspects of cloud security. Clearly, through his experience, the author has thought through many practical issues of securing large, virtualized IT installations. His Chapter 6 on Best Practices and Chapter 9 with its valuable checklists are worth the price of the book. If you are among the many new cloud computing professionals, Securing the Cloud is an essential reference for your work.

    Read the article
