Search Results

Search found 14576 results on 584 pages for 'community building'.


  • IPgallery banks on Solaris SPARC

    - by Frederic Pariente
    IPgallery is a global supplier of converged legacy and Next Generation Networks (NGN) products and solutions, including core network components and cloud-based Value Added Services (VAS) for voice, video and data sessions. IPgallery enables network operators and service providers to offer advanced converged voice, chat, video/content services and rich unified social communications in combined legacy (fixed/mobile), Over-the-Top (OTT) and Social Community (SC) environments for home and business customers. Technically speaking, this offer is a scalable and robust telco solution enabling operators to offer new services while controlling operating expenses (OPEX). In its solutions, IPgallery leverages the following Oracle components: Oracle Solaris, Netra T4 and SPARC T4, in order to provide a competitive and scalable solution without the price tag often associated with high-end systems.

    Oracle Solaris Binary Application Guarantee
    A unique feature of Oracle Solaris is the guaranteed binary compatibility between releases of the Solaris OS. That means that if a binary application runs on Solaris 2.6 or later, it will run on the latest release of Oracle Solaris. IPgallery developed their application on Solaris 9 and Solaris 10, and now runs it on Solaris 11 without any code modification or rebuild. The Solaris Binary Application Guarantee helps IPgallery protect their long-term investment in the development, training and maintenance of their applications.

    Oracle Solaris Image Packaging System (IPS)
    IPS is a new repository-based package management system that comes with Oracle Solaris 11. It provides a framework for complete software life-cycle management, such as installation, upgrade and removal of software packages. IPgallery leverages this new packaging system to speed up and simplify software installation for the R&D and production environments. Notably, they use IPS to deliver Solaris Studio 12.3 packages as part of the rapid installation process of R&D environments, and during the production software deployment phase they ensure software package integrity using the built-in verification feature. Solaris IPS thus improves IPgallery's time-to-market with faster, more reliable software installation and deployment in production environments.

    Extreme Network Performance
    IPgallery saw a huge improvement in application performance, both in CPU and I/O, when running on the SPARC T4 architecture compared to UltraSPARC T2 servers. The same application (with the same activation environment) running on T2 consumes 40%-50% CPU, while it consumes only 10% of the CPU on T4. The testing environment comprised a Softswitch (call management), TappS (Telecom Application Server) and Billing Server running on the same machine and initiating various services at a capacity of 1000 CAPS (Call Attempts Per Second). In addition, tests showed a huge improvement in the performance of the TCP/IP stack, which reduces network-layer processing and, in the end, call-attempt latency. Finally, there is a huge improvement within the file system and disk I/O operations; they ran all tests with maximum logging capability and it didn't influence any benchmark values.

    "Due to the huge improvements in performance and capacity using the T4-1 architecture, IPgallery has engineered the solution with less hardware. This means instead of deploying the solution on six T2-based machines, we will deploy on two redundant machines while utilizing Oracle Solaris Zones and Oracle VM for higher availability and virtualization" - Shimon Lichter, VP R&D, IPgallery

    In conclusion, using the unique combination of Oracle Solaris and SPARC technologies, IPgallery is able to offer solutions with much lower TCO, while providing a higher level of service capacity, scalability and resiliency. This low-OPEX solution enables the operator, IPgallery's end customer, to deliver a high-quality service while maintaining high profitability.
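    For readers who haven't used IPS, the life-cycle operations described above map onto a handful of pkg(1) commands. The following is a minimal sketch on Solaris 11; the package name is an illustrative placeholder, not the actual FMRI IPgallery installs:

    pkg publisher                              # list the configured package repositories
    sudo pkg install developer/solarisstudio   # install a package and its dependencies (example name)
    sudo pkg verify developer/solarisstudio    # check installed files against repository metadata
    sudo pkg update                            # upgrade all installed packages in one step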

    Read the article

  • Welcome 2011

    - by PSteele
    About this time last year, I wrote a blog post about how January of 2010 was almost over and I hadn’t done a single blog post. Ugh… History repeats itself.

    2010 in Review
    If I look back at 2010, it was a great year in terms of technology and development:
    - Visited Redmond to attend the MVP Summit in February. Had a great time with the MS product teams and got to connect with some really smart people.
    - Continued my work on Visual Studio Magazine’s “C# Corner” column. About mid-year, the column changed from an every-other-month print column to an every-other-month print column along with bi-monthly web-only articles. Needless to say, this kept me even busier and away from my blog.
    - Participated in another GiveCamp! Thanks to the wonderful leadership of Michael Eaton and all of his minions, GiveCamp 2010 was another great success. Planning for GiveCamp 2011 will be starting soon…
    - Switched to DVCS full time. After years of being a loyal SVN user, I got bit by the DVCS bug. I played around with both Mercurial and Git and finally settled on Mercurial. Its seamless integration with Windows Explorer along with its wealth of plugins made me fall in love. I can’t imagine going back to using a centralized version control system.
    - Continued to work with the awesome group of talent at SRT Solutions. Very proud that SRT won its third consecutive FastTrack award!
    - Jumped off the BlackBerry train and am enjoying the smooth ride of Android. It was time to replace the old BlackBerry Storm so I did some research and settled on the Motorola DroidX. I couldn’t be happier. Android is a slick OS and the DroidX is a sweet piece of hardware. Been dabbling in some Android development with both Eclipse and IntelliJ IDEA (I like IntelliJ IDEA a lot better!).

    2011 Plans
    On January 1st I was pleasantly surprised to get an email from the Microsoft MVP program letting me know that I had received the MVP award again for my community work in 2010. I’m honored and humbled to be recognized by Microsoft as well as my peers!
    - I’ll continue to do some Android development. I’m currently working on a simple app to get my feet wet. It may even make its way into the Android Market.
    - I’ve got a project that could really benefit from WPF so I’ll be diving into WPF this year. I’ve played around with WPF a bit in the past – simple demos and learning exercises – but this will give me a chance to build an entire application in WPF. I’m looking forward to the increased freedom that a WPF UI should give me.
    - I plan on blogging a lot more in 2011!

    Technorati Tags: Android,MVP,Mercurial,WPF,SRT,GiveCamp

    Read the article

  • Oracle OpenWorld Series: All Things Mobile

    - by Michelle Kimihira
    I caught up with Joe Huang, Senior Principal Product Manager, Mobile Application Development Framework, to hear his recommendations for Oracle OpenWorld. Use this Focus On document, which provides a roadmap to must-attend sessions and demos.

    By Joe Huang

    This year’s OpenWorld promises to be “THE” event for anyone interested in mobile enterprise applications. Although Oracle has had a rich portfolio of mobile products for many years now, there is a much stronger focus on mobile this year. Every single one of our customers is looking to develop a mobile strategy and bring key business processes to mobile users, and as you will see in the various keynotes, sessions, and demos during OpenWorld, Oracle is clearly the leader in mobile technologies and applications. Look for mobile development technologies being demonstrated in the Oracle Red Lounge, located in the Moscone North Upper Lobby, where innovative technologies from Oracle are being showcased. A few select sessions where mobile development technologies will be highlighted:

    Monday, 10/1, 10:45 AM – 11:45 AM
    GEN9398: The Future Development for Oracle Fusion – From Desktop to Mobile to Cloud. See the latest and greatest in Oracle development technologies. A key customer will be demonstrating the application they built using the beta version of ADF Mobile. Marriott Marquis, Salon 8

    Monday, 10/1, 1:45 PM – 2:45 PM
    GEN11554: Extend Oracle Applications to Mobile Devices with Oracle’s Mobile Technologies. See how to leverage Oracle’s development technology, like ADF Mobile, to mobilize Oracle applications. Moscone West, 3002/3004

    Monday, 10/1, 4:45 PM – 5:45 PM
    GEN11451: Building Mobile Applications with Oracle Cloud. See how Oracle offers a simpler way of developing and deploying cross-device mobile applications, enabling you to access applications, data and services from mobile channels in an easier way. Moscone West, 2002/2004

    Tuesday, 10/2, 11:45 AM – 12:45 PM
    CON3824: Mobile-Enabled Oracle Fusion Middleware and Enterprise Applications with Oracle ADF. See how Oracle Fusion Middleware and ADF Mobile together deliver a complete and powerful platform for enterprise mobile applications. A key customer will also be demonstrating an application built using the ADF Mobile beta that extends an Oracle application to mobile devices. Moscone South, 306

    Additional Information
    - Relevant Blogs: Oracle OpenWorld Countdown Begins, Best of Oracle Fusion Middleware, Fusion Middleware for Enterprise Applications, Amit Zavery’s General Session, Hassan Rizvi's General Session, Oracle OpenWorld Blog
    - Focus On Docs: Best of Oracle Fusion Middleware, Fusion Middleware for Enterprise Applications, Mobile
    - Product Information on Oracle.com: Oracle Fusion Middleware
    - Subscribe to our regular Fusion Middleware Newsletter
    - Follow us on Twitter and Facebook

    Read the article

  • apt-get 403 Forbidden

    - by Lerp
    I've started a new job today and I am trying to set up my machine to run through their Windows server. I've managed to get an internet connection through the server, but now I can't run apt-get update as I get a "403 Forbidden" error. This happens for every repo in my sources list, apart from translations(?). I do have a proxy set in apt.conf; if I remove it I get a 407 Proxy Authentication Required error instead. Here's my apt.conf file (I have omitted my username and password):

    Acquire::http::proxy "http://username:[email protected]:8080/";

    Here's my sources.list:

    #deb cdrom:[Ubuntu 12.04.2 LTS _Precise Pangolin_ - Release amd64 (20130213)]/ dists/precise/main/binary-i386/
    #deb cdrom:[Ubuntu 12.04.2 LTS _Precise Pangolin_ - Release amd64 (20130213)]/ dists/precise/restricted/binary-i386/
    #deb cdrom:[Ubuntu 12.04.2 LTS _Precise Pangolin_ - Release amd64 (20130213)]/ precise main restricted

    # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
    # newer versions of the distribution.
    deb http://gb.archive.ubuntu.com/ubuntu/ precise main restricted
    deb-src http://gb.archive.ubuntu.com/ubuntu/ precise main restricted

    ## Major bug fix updates produced after the final release of the
    ## distribution.
    deb http://gb.archive.ubuntu.com/ubuntu/ precise-updates main restricted
    deb-src http://gb.archive.ubuntu.com/ubuntu/ precise-updates main restricted

    ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
    ## team. Also, please note that software in universe WILL NOT receive any
    ## review or updates from the Ubuntu security team.
    deb http://gb.archive.ubuntu.com/ubuntu/ precise universe
    deb-src http://gb.archive.ubuntu.com/ubuntu/ precise universe
    deb http://gb.archive.ubuntu.com/ubuntu/ precise-updates universe
    deb-src http://gb.archive.ubuntu.com/ubuntu/ precise-updates universe

    ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
    ## team, and may not be under a free licence. Please satisfy yourself as to
    ## your rights to use the software. Also, please note that software in
    ## multiverse WILL NOT receive any review or updates from the Ubuntu
    ## security team.
    deb http://gb.archive.ubuntu.com/ubuntu/ precise multiverse
    deb-src http://gb.archive.ubuntu.com/ubuntu/ precise multiverse
    deb http://gb.archive.ubuntu.com/ubuntu/ precise-updates multiverse
    deb-src http://gb.archive.ubuntu.com/ubuntu/ precise-updates multiverse

    ## N.B. software from this repository may not have been tested as
    ## extensively as that contained in the main release, although it includes
    ## newer versions of some applications which may provide useful features.
    ## Also, please note that software in backports WILL NOT receive any review
    ## or updates from the Ubuntu security team.
    deb http://gb.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse
    deb-src http://gb.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse

    deb http://security.ubuntu.com/ubuntu precise-security main restricted
    deb-src http://security.ubuntu.com/ubuntu precise-security main restricted
    deb http://security.ubuntu.com/ubuntu precise-security universe
    deb-src http://security.ubuntu.com/ubuntu precise-security universe
    deb http://security.ubuntu.com/ubuntu precise-security multiverse
    deb-src http://security.ubuntu.com/ubuntu precise-security multiverse

    ## Uncomment the following two lines to add software from Canonical's
    ## 'partner' repository.
    ## This software is not part of Ubuntu, but is offered by Canonical and the
    ## respective vendors as a service to Ubuntu users.
    # deb http://archive.canonical.com/ubuntu precise partner
    # deb-src http://archive.canonical.com/ubuntu precise partner

    ## This software is not part of Ubuntu, but is offered by third-party
    ## developers who want to ship their latest software.
    deb http://extras.ubuntu.com/ubuntu precise main
    deb-src http://extras.ubuntu.com/ubuntu precise main

    I can sort of fix this by changing all of the http entries in sources.list to ftp, but I still have issues with PPAs.
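    One avenue worth checking, offered as a sketch rather than a confirmed fix: some proxies return 403 only for particular hosts, and apt allows a per-host override of the global proxy in apt.conf. Here ppa.launchpad.net is just an example hostname, and "DIRECT" only helps if the network permits direct connections at all:

    Acquire::http::proxy "http://username:[email protected]:8080/";
    Acquire::http::proxy::ppa.launchpad.net "DIRECT";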

    Read the article

  • Programming Windows 8 Apps with HTML, CSS, and JavaScript - All you need in one title

    It took me a while to work through the 800+ pages of this title. And yes, I really mean working, not reading... Since the release of Windows 8 it should be obvious to any Windows software developer that there are new ways to develop, deploy and market applications for a broader audience. Interestingly, Microsoft has started to narrow the technological gap between its various platforms - desktop, web, smartphone and Xbox - and development of modern apps with HTML, CSS and JavaScript couldn't be easier. Kraig covers all facets of modern Windows 8 apps, from the basic building blocks and project templates in Visual Studio 2012, through the thoughtful use of specific APIs, to proper deployment in the App Store and potential monetization. The organisation of the book is laid out like step-by-step instructions or a tutorial. Kraig literally takes the reader by the hand and, in his examples, explains in detail the reasons, the pros, and the cons of a certain way of implementation. Thanks to cross-references to other chapters he leaves the choice to the reader to dig deeper right now or to catch up at some time later. Personally, I have to admit that I really enjoyed the relaxed writing style. App development is not dust-dry rocket science and it should be joyful to learn about new technologies. And thanks to the richness of the various chapters and samples you could easily adapt and transfer the knowledge gained in this title to other platforms like Windows Phone 8. And last but not least: the ebook is freely available from Amazon, Microsoft Press and O'Reilly. Don't think about it, just get the book. Now.

    Update: I already mentioned this title in other blog entries which are related to Microsoft certification. Feel free to read on and to discover more online resources: Learning content for MCSDs: Web Applications and Windows Store Apps using HTML5, and More content for MCSDs: Web Applications and Windows Store Apps using HTML5. O'Reilly offers free webcasts on their site, too. And in case you would like to know more about Kraig's book and his experience with various development teams, please check out this one: Zero to App in Two Weeks: Programming Windows 8 Apps in HTML, CSS, and JavaScript. The recording should be available soon.

    Read the article

  • New JavaScript Editor

    - by Petr
    I have not written a blog post here for a few weeks. I think my last post was about the release of NetBeans 7.1 at the beginning of January. The reason is not that I changed jobs :), but that I have been concentrating on the new JavaScript support/editor. The new JavaScript editor is written basically from scratch. The answer to the question "Why start from the beginning again, why don't you just improve the old one?" is not easy, and the decision has many aspects. One of the main reasons is that the old support was written 4 years ago and the architecture is limited. Also, over time the APIs were changed and it was very hard to keep the editor up to date. There is a license issue as well, etc. In short, it is time to rewrite the old JS editor.

    We have built up a strong community around the PHP support in NetBeans, and because many PHP developers also write JavaScript code, I would like to ask you for help. There is a continual PHP build with the new JavaScript support. You can download the result of the builds here. It's a zip file, and you can unzip it anywhere you want. I recommend running the build with a new userdir, to avoid damaging your current userdir. It shouldn't happen, but just to be sure :). You can achieve this through the switch --userdir. Starting the unzipped build from the command line, from the folder where you unzipped it, can be done with this command on Unix:

    bin/netbeans.sh --userdir /path/to/new/userdir

    and on Windows:

    bin\netbeans.exe --userdir D:\path\to\new\userdir

    For the developers who already use the continual PHP build, this is well known. There is also a full IDE build with the new JavaScript support for people who need more than only PHP support. Because the builds with the new JavaScript editor are created from a branch, there are no nightly builds available. They will be, when we merge the branch to the trunk, but so far we have to work only with the mentioned continual build. We will merge our branch after NetBeans 7.2 is branched from trunk. This also answers the question of which release of NetBeans will contain the new JS support: it should be the release after NetBeans 7.2.

    I'm asking whether you could play with the builds or, better, work in the builds with the new JavaScript support and tell us about every issue that you run into. It can be anything that doesn't suit you: something doesn't work as you expected, something is slow, you want to change the behaviour of a feature, etc. Your input / comments are very important for us and will help us to achieve the new JavaScript support that you need. The best way to communicate issues is through our Bugzilla, because it is simple to track them. Sure, you can write a comment here :), but I still prefer Bugzilla for any issue. You can click here (you should already be logged in to Bugzilla) and a form for a new JavaScript issue is opened, with the component Editor and the NO72 keyword pre-filled.

    I will write about the individual features later, but for now I will mention a few features that should work better than in the old support:

    - Syntactic and semantic colouring
    - Navigator
    - Mark Occurrences and GoTo Declaration
    - Code Completion
    - Formatter with many options
    - and more :)

    Code Completion is invoked through the keyboard shortcut CTRL+SPACE. The first invocation offers items that are found through a source model. Almost all editor features are based on the model, which is built from the source code. There is still a lot of work to do on the model, but it should offer better results. When the popup window with code completion items is open and you press CTRL+SPACE again, the code completion offers all elements that are in the project - in the pictures, all elements that start with the letter 't'.

    A few features that are supported in the old JavaScript support are not yet implemented (for example jQuery support), but we are adding these features ASAP.
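    For anyone who wants the whole try-it-out flow in one place, a small sketch for Unix follows; the zip file name and directories are placeholders for whatever the continual build page currently offers:

    unzip netbeans-php-continual-build.zip -d ~/netbeans-js
    cd ~/netbeans-js
    bin/netbeans.sh --userdir ~/.netbeans-js-test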

    Read the article

  • Interrupted Upgrade from 11.10 to 12.04

    - by Tamil
    My upgrade from 11.10 to 12.04 using the alternate ISO got interrupted and I had to hard restart my machine. Now I feel that everything is recovered except my already installed packages, like vim. How do I back up my home folder for a fresh installation of Ubuntu? The following are the errors I'm facing. I couldn't mark any package for re-installation in Synaptic, nor remove and install one either.

    Output of sudo apt-get install vim:

    Building dependency tree
    Reading state information... Done
    Package vim is not available, but is referred to by another package.
    This may mean that the package is missing, has been obsoleted, or
    is only available from another source
    E: Package 'vim' has no installation candidate

    If I try installing it from Synaptic I get:

    apache2.2-common: Package apache2.2-common has no available version, but exists in the database. This typically means that the package was mentioned in a dependency and never uploaded, has been obsoleted or is not available with the contents of sources.list

    Here are the contents of my sources.list:

    # added by the release upgrader
    # deb cdrom:[Ubuntu 12.04.1 LTS _Precise Pangolin_ - Release amd64 (20120822.4)]/ precise main restricted
    # added by the release upgrader
    # # deb cdrom:[Ubuntu 12.04.1 LTS _Precise Pangolin_ - Release amd64 (20120822.4)]/ precise main restricted
    # deb cdrom:[Ubuntu 11.04 _Natty Narwhal_ - Release amd64 (20110427.1)]/ natty main restricted

    # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
    # newer versions of the distribution.
    deb http://archive.ubuntu.com/ubuntu precise main restricted
    deb-src http://archive.ubuntu.com/ubuntu precise main restricted

    ## Major bug fix updates produced after the final release of the
    ## distribution.
    deb http://archive.ubuntu.com/ubuntu precise-updates main restricted
    deb-src http://archive.ubuntu.com/ubuntu precise-updates main restricted

    ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
    ## team. Also, please note that software in universe WILL NOT receive any
    ## review or updates from the Ubuntu security team.
    deb http://archive.ubuntu.com/ubuntu precise universe
    deb-src http://archive.ubuntu.com/ubuntu precise universe
    deb http://archive.ubuntu.com/ubuntu precise-updates universe
    deb-src http://archive.ubuntu.com/ubuntu precise-updates universe

    ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
    ## team, and may not be under a free licence. Please satisfy yourself as to
    ## your rights to use the software. Also, please note that software in
    ## multiverse WILL NOT receive any review or updates from the Ubuntu
    ## security team.
    deb http://archive.ubuntu.com/ubuntu precise multiverse
    deb-src http://archive.ubuntu.com/ubuntu precise multiverse
    deb http://archive.ubuntu.com/ubuntu precise-updates multiverse
    deb-src http://archive.ubuntu.com/ubuntu precise-updates multiverse

    ## Uncomment the following two lines to add software from the 'backports'
    ## repository.
    ## N.B. software from this repository may not have been tested as
    ## extensively as that contained in the main release, although it includes
    ## newer versions of some applications which may provide useful features.
    ## Also, please note that software in backports WILL NOT receive any review
    ## or updates from the Ubuntu security team.
    # deb http://us.archive.ubuntu.com/ubuntu/ natty-backports main restricted universe multiverse
    # deb-src http://us.archive.ubuntu.com/ubuntu/ natty-backports main restricted universe multiverse

    deb http://archive.ubuntu.com/ubuntu precise-security main restricted
    deb-src http://archive.ubuntu.com/ubuntu precise-security main restricted
    deb http://archive.ubuntu.com/ubuntu precise-security universe
    deb-src http://archive.ubuntu.com/ubuntu precise-security universe
    deb http://archive.ubuntu.com/ubuntu precise-security multiverse
    deb-src http://archive.ubuntu.com/ubuntu precise-security multiverse

    ## Uncomment the following two lines to add software from Canonical's
    ## 'partner' repository.
    ## This software is not part of Ubuntu, but is offered by Canonical and the
    ## respective vendors as a service to Ubuntu users.
    deb http://archive.canonical.com/ubuntu precise partner
    # deb-src http://archive.canonical.com/ubuntu natty partner

    ## This software is not part of Ubuntu, but is offered by third-party
    ## developers who want to ship their latest software.
    deb http://extras.ubuntu.com/ubuntu precise main
    deb-src http://extras.ubuntu.com/ubuntu precise main

    # deb http://tamil.3758_gmail.com:[email protected]/free unstable main # disabled on upgrade to oneiric
    # deb http://debian.datastax.com/natty oneiric main # disabled on upgrade to oneiric

    Output of sudo apt-get update:

    Err http://archive.ubuntu.com precise InRelease
    Err http://archive.canonical.com precise InRelease
    Err http://archive.ubuntu.com precise-updates InRelease
    Err http://archive.ubuntu.com precise-security InRelease
    Err http://extras.ubuntu.com precise InRelease
    Err http://archive.canonical.com precise Release.gpg
      Unable to connect to 172.16.140.249:3142:
    Err http://archive.ubuntu.com precise Release.gpg
      Unable to connect to 172.16.140.249:3142:
    Err http://archive.ubuntu.com precise-updates Release.gpg
      Unable to connect to 172.16.140.249:3142:
    Err http://extras.ubuntu.com precise Release.gpg
      Unable to connect to 172.16.140.249:3142:
    Err http://archive.ubuntu.com precise-security Release.gpg
      Unable to connect to 172.16.140.249:3142:
    W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/precise/InRelease
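    Two tentative observations on the output above. First, the repeated "Unable to connect to 172.16.140.249:3142" lines suggest apt is still pointed at a proxy or apt-cacher on port 3142 that is no longer reachable; checking /etc/apt/apt.conf.d/ for a stale Acquire::http::Proxy entry would be a cheap first step. Second, for backing up the home folder before a fresh install, a minimal sketch, assuming the external drive is mounted at /media/backupdrive and the user is tamil (both placeholders):

    sudo dpkg --configure -a                               # finish packages left half-configured by the interrupted upgrade
    sudo apt-get -f install                                # attempt to repair broken dependencies
    rsync -a /home/tamil/ /media/backupdrive/tamil-home/   # copy the home folder, preserving permissions and timestamps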

    Read the article

  • Tips on ensuring Model Quality

    - by [email protected]
    Given enough data that represents the domain well, and models that reflect exactly the decision being optimized, models usually provide good predictions that ensure lift. Nevertheless, sometimes the modeling situation is less than ideal. In this blog entry we explore the problems found in a few such situations and how to avoid them.

    1 - The model does not reflect the problem you are trying to solve
    For example, you may be trying to solve the problem "What product should I recommend to this customer?" but your model learns on the problem "Given that a customer has acquired our products, what is the likelihood for each product?". In this case the model you built may be too distant a proxy for the problem you are really trying to solve. What you could do in this case is try to build a model based on the results of actual recommendations of products to customers. If there is not enough data from actual recommendations, you could use a hybrid approach in which you would use the [bad] proxy model until the recommendation model converges.

    2 - Data is not predictive enough
    If the inputs are not correlated with the output then the models may be unable to provide good predictions. For example, if the inputs are the phase of the moon and the weather, and the output is which car the customer bought, there may be no correlations found. In this case you should see a low quality model. The solution in this case is to include more relevant inputs.

    3 - Not enough cases seen
    If the training data does not include enough cases, at least 200 positive examples for each output, then the quality of recommendations may be low. The obvious solution is to include more data records. If this is not possible, then it may be possible to build a model based on the characteristics of the output choices rather than the choices themselves. For example, instead of using products as the output, use the product category, price and brand name, and then combine these models.

    4 - Output leaking into input, giving the false impression of good quality models
    If the input data used in training includes values that have changed, or that are available only because the output happened, then you will find some strong correlations between the input and the output, but these strong correlations do not reflect the data that you will have available at decision (prediction) time. For example, if you are building a model to predict whether a web site visitor will succeed in registering, and the input includes the variable DaysSinceRegistration, and you learn when this variable has already been set, you will probably see a big correlation between having a zero (or one) in this variable and the fact that registration was successful. The solution is to remove these variables from the input, or to make sure they reflect the value as of the time of decision and not after the result is known.

    Read the article

  • How To Build An Enterprise Application - Introduction

    - by Tuan Nguyen
    An enterprise application is software which fulfills four core quality attributes:

    - Reliability
    - Flexibility
    - Reusability
    - Maintainability

    Reliability is the ability of a system or component to perform its required functions under stated conditions for a specific period of time. Because there is no way other than testing to make sure a system is reliable, we can exchange the term reliability for the term testability.

    Flexibility is the ability to change a system's core features without violating unrelated features or components. Although flexibility can help us to achieve interoperability easily, the opposite is not true. For example, a program might run on multiple platforms and contain logic for many scenarios, but that wouldn't mean it was flexible if it forced us to rewrite code in all components when we just want to change some aspect of a feature.

    Reusability is the ability to share one or more of a system's components with another system. We should open up a component's reusability only in the context in which it is used. For example, we write classes that implement UI logic and deliver them only to classes which implement UI.

    Maintainability is the ability to add or remove features in a system after it has been released. Maintainability consists of many factors, such as readability, analyzability and extensibility, of which extensibility is critical. Maintainability requires us to write code that is longer and more complex than normal, but that doesn't mean we introduce unnecessarily complex code. We always try to make our code clear and transparent to everyone.

    An enterprise application is built on an enterprise design which consists of two parts: low-level design and high-level design.

    At low-level design, the focus is on building loosely coupled classes and components. In particular, it recommends:

    - Each class or component undertakes only a single responsibility (design based on unit test)
    - Classes and components implement and work through interfaces (design based on contract)
    - Dependency relationships between classes and components can be injected at run-time (design based on dependency)

    At high-level design, the focus is on architecting the system into tiers and layers. In particular, it recommends:

    - Divide the system into subsystems for deployment. Each subsystem is called a tier. Typically, an enterprise application would have 3 tiers as illustrated in the following figure.
    - Arrange classes and components into logical containers called layers. Typically, an enterprise application would have 5 layers as illustrated in the following figure.

    Read the article

  • Where to start with game development?

    - by steven_desu
    Searching for related questions I found a number of very specific questions, but I'm afraid the specifics have proved fruitless for me, and after 4 hours on Google I'm no closer than when I started, so I felt reaching out to a community might be in order.

    First, my goal: I've never made a game before, although I've mulled over the possibility several times. I decided to finally sit down and start learning how to code games, use game engines, etc., all so that one day (hopefully soon) I'll be able to make functional (albeit simple) games. I can start adding complexity later; for now I'd be glad to have a keyboard-controlled camera moving in a 3D world with no interaction beyond that.

    My background: I've worked in SEVERAL programming languages, ranging from PHP to C++ to Java to ASM. I'm not afraid of any challenges that come with learning the new syntax or limitations inherent in a new language. All of my past programming experience, however, has been strictly non-graphical and usually with little or extremely simple interaction during execution. I've created extensive and brilliant algorithms for solving logical and mathematical problems. However, in every case input was either defined in a file, passed from an HTML form, or typed into the console. Real-time interaction with the user is something with which I have no experience.

    My question: Where should I start in trying to make games? Better yet, where should I start in trying to create a keyboard-navigable 3D environment? In searching online I've found several resources linking to game engines, graphics engines, and physics engines. Here's a brief summary of my experiences with a few engines I tried:

    Unreal SDK: The tutorial videos assume that you already have in-depth knowledge of 3D modeling, graphics engines, animations, etc. The "Getting Started" page offers no formal explanation of game development but jumps into how Unreal can streamline processes it assumes you're already familiar with. After downloading the SDK and launching it to see if the tools were as intuitive as they claimed, I was greeted with about 60 buttons and a blank void for my 3D modeling. Clicking on "add volume" (to attempt to add a basic cube) I was met with a menu of 30 options. Panicking, I closed the editor.

    Crystal Space: The website seemed rather informative, explaining that Crystal Space was just for graphics and the companion software, CEL, provided entity logic for making games. A demo game was provided, which was built using "CELStart", their simple tool for people with no knowledge of game programming. I launched the game to see what I might look forward to creating. It froze several times, the menus were buggy, there were thousands of graphical glitches, enemies didn't respond to damage, and when I closed the game it locked up. Gave up on that engine.

    IrrLicht: The tutorial assumes I have Visual Studio 6.0 (I have Visual Studio 2010). Following their instructions, I was unable to properly import the library into Visual Studio and unable to call any of the functions that they kept using. After manually copying header files, class files, and DLLs into my project's folder, the project failed to properly compile.

    Clearly I'm not off to a good start and I'm going in circles. Can someone point me in the right direction? Should I start by downloading a program like Blender and learning 3D modeling, or should I be learning how to use a graphics engine? Should I look for an all-inclusive game engine, or is it better to try and code my own game logic? If anyone has actually made their own games, I would prefer to hear how they got their start. Also, taking classes at my school is not an option; nothing is offered.

    Read the article

  • Where to start with game development?

    - by steven_desu
    I asked this earlier in a thread at stackoverflow.com. One of the early comments redirected me here to gamedev.stackexchange.com, so I'm reposting here.

    Searching for related questions I found a number of very specific questions, but I'm afraid the specifics have proved fruitless for me, and after 4 hours on Google I'm no closer than when I started, so I felt reaching out to a community might be in order.

    First, my goal: I've never made a game before, although I've mulled over the possibility several times. I decided to finally sit down and start learning how to code games, use game engines, etc., all so that one day (hopefully soon) I'll be able to make functional (albeit simple) games. I can start adding complexity later; for now I'd be glad to have a keyboard-controlled camera moving in a 3D world with no interaction beyond that.

    My background: I've worked in SEVERAL programming languages, ranging from PHP to C++ to Java to ASM. I'm not afraid of any challenges that come with learning the new syntax or limitations inherent in a new language. All of my past programming experience, however, has been strictly non-graphical and usually with little or extremely simple interaction during execution. I've created extensive and brilliant algorithms for solving logical and mathematical problems, as well as graphing problems. However, in every case input was either defined in a file, passed from an HTML form, or typed into the console. Real-time interaction with the user is something with which I have no experience.

    My question: Where should I start in trying to make games? Better yet, where should I start in trying to create a keyboard-navigable 3D environment? In searching online I've found several resources linking to game engines, graphics engines, and physics engines. Here's a brief summary of my experiences with a few engines I tried:

    Unreal SDK: The tutorial videos assume that you already have in-depth knowledge of 3D modeling, graphics engines, animations, etc. The "Getting Started" page offers no formal explanation of game development but jumps into how Unreal can streamline processes it assumes you're already familiar with. After downloading the SDK and launching it to see if the tools were as intuitive as they claimed, I was greeted with about 60 buttons and a blank void for my 3D modeling. Clicking on "add volume" (to attempt to add a basic cube) I was met with a menu of 30 options. Panicking, I closed the editor.

    Crystal Space: The website seemed rather informative, explaining that Crystal Space was just for graphics and the companion software, CEL, provided entity logic for making games. A demo game was provided, which was built using "CELStart", their simple tool for people with no knowledge of game programming. I launched the game to see what I might look forward to creating. It froze several times, the menus were buggy, there were thousands of graphical glitches, enemies didn't respond to damage, and when I closed the game it locked up. Gave up on that engine.

    IrrLicht: The tutorial assumes I have Visual Studio 6.0 (I have Visual Studio 2010). Following their instructions, I was unable to properly import the library into Visual Studio and unable to call any of the functions that they kept using. After manually copying header files, class files, and DLLs into my project's folder, the project failed to properly compile.

    Clearly I'm not off to a good start and I'm going in circles. Can someone point me in the right direction? Should I start by downloading a program like Blender and learning 3D modeling, or should I be learning how to use a graphics engine? Should I look for an all-inclusive game engine, or is it better to try and code my own game logic? If anyone has actually made their own games, I would prefer to hear how they got their start. Also, taking classes at my school is not an option; nothing is offered.

    Read the article

  • input / output error, drives randomly refusing to read / write

    - by ILMV
    I have an issue with one of our servers running Ubuntu 10.04; it is running BackupPC and collects backups from various machines / servers around the building. On the 8th minute (12:08, 12:18, 12:28 etc.) the backups are transferred to an external hard drive. We have three drives and rotate one drive for another every day. The problem is that we are randomly experiencing input / output errors; when this happens you cannot read from / write to the drive. It hasn't unmounted, though, so I can still cd to the mount point /media/backup1. The drives are not faulty, as it's happening on all of them, so I'm at a loss as to what the problem could be. Here is an example of the many errors we get:

    gzip: stdout: Input/output error
    /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error
    ls: cannot access /media/backup1/Tue/incr_1083_host1.something.co.uk.tar.gz: Input/output error
    ls: cannot access /media/backup1/Tue/incr_1088_host1.something.co.uk.tar.gz: Input/output error
    ls: cannot access /media/backup1/Tue/incr_1089_host1.something.co.uk.tar.gz: Input/output error
    ls: cannot access /media/backup1/Tue/incr_1090_host1.something.co.uk.tar.gz: Input/output error
    /var/lib/backuppc/backuppc_offline: line 39: /media/backup1/Tue/offline.log: Input/output error
    /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error
    /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_1090_host1.something.co.uk.tar.gz: Input/output error
    /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error
    ls: cannot access /media/backup1/Tue/incr_591_tech2.something.co.uk.tar.gz: Input/output error
    /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error
    /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_591_tech2.something.co.uk.tar.gz: Input/output error
    /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error
    ls: cannot access /media/backup1/Tue/incr_592_tech3.something.co.uk.tar.gz: Input/output error
    ls: cannot access /media/backup1/Tue/incr_593_tech3.something.co.uk.tar.gz: Input/output error
    /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error
    /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_593_tech3.something.co.uk.tar.gz: Input/output error
    /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error

    EDIT » Resolved
    So it turns out Quamis was right; even though I didn't think it was possible, it was actually a problem with the drives. You see, we have three drives, all formatted to ext2, and on two of them we were getting I/O errors frequently. I came back to Quamis' answer and discovered the fsck command, so I ran it against the problem drives:

    fsck /dev/sdb1

    This found and fixed a load of problems on the drive, most probably caused by power outages / unsafe removal of drives etc. As the drives are in the ext2 format they aren't journalled, and thus aren't protected against such issues. Drives are now working beautifully, thanks all! :D
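    A possible follow-up, offered as a sketch rather than a tested fix: since the root cause was the unjournalled ext2 format, adding a journal with tune2fs converts the filesystem to ext3 in place, which should make the drives far more resilient to power outages and unclean removal. The device name is taken from the post above:

    sudo umount /dev/sdb1       # the filesystem must be unmounted before converting
    sudo tune2fs -j /dev/sdb1   # add a journal, turning ext2 into ext3
    sudo e2fsck -f /dev/sdb1    # force a consistency check before remounting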

    Read the article

  • DotNetNuke 7.0 Only Weeks Away!

    - by sbwalker
    The software industry moves at a lightning pace, and it is only through constant focus and continuous investment that a software product can remain both stable and relevant over the long term. As we approach the 10 Year Anniversary of the DotNetNuke platform, it seems only fitting that we are on the verge of announcing yet another significant product milestone. DotNetNuke 7.0 is just around the corner and represents a bold step forward for our Content Management Platform, including substantial business productivity enhancements, investments in web platform relevance, and a significant overhaul and modernization of the user interface and user experience.

    It has been five months since I posted the announcement that the next major version of the platform was going to be DotNetNuke 7.0. This announcement created tremendous excitement and anticipation in the DotNetNuke community, as major version increments have always been utilized as an opportunity to introduce revolutionary new product features and capabilities. After months of intense product development, the finish line is finally in sight. With that, I am pleased to announce that we released a Release Candidate (RC) of DotNetNuke 7.0 yesterday. You can download the RC from our project page on Codeplex.

    A Release Candidate represents a software version which is very near to "release" quality. So although we will not be officially endorsing the RC for production use, or providing an official upgrade path, it does represent a significant milestone in our software development efforts (if you are looking for a more detailed explanation of our software release terminology, I would encourage you to read the blog written by Co-Founder Joe Brinkman titled "What's In A Name?").

    Modernizing a software platform does have its share of challenges from a backward compatibility perspective and, as usual, we are taking great care in ensuring a seamless upgrade path for our customers. In order to remain relevant and progressive, you need to be aware that DotNetNuke 7.0 has adopted a new set of baseline infrastructure requirements, including ASP.NET 4.0. As a result we are encouraging all major stakeholders in the ecosystem (module developers, designers, partners, customers, etc.) to take the opportunity to install the RC in their own local environments. This is the last opportunity to let us know about any final issues which may need to be addressed prior to final release. Mark your calendars now… the expected public release date (RTM) for DotNetNuke 7.0 will be Wednesday, November 28th.

    On a side note, we expect to release a 6.2.5 maintenance version today. This release contains some high priority product quality improvements as well as security patches for some vulnerabilities reported through our standard ecosystem channels. As a result we will be encouraging all of our customers to upgrade to the 6.2.5 release as soon as it is available.

    I hope everyone is as excited as I am about the upcoming DotNetNuke 7.0 release. Please take the opportunity over the next week to put the new platform through its paces. Remember, only through our collective efforts can we ensure that this release has the greatest market impact of any DotNetNuke release to date.

    Read the article

  • Partner Blog: aurionPro SENA - Mobile Application Convenience, Flexibility & Innovation Delivered

    - by Darin Pendergraft
    About the Writer: Des Powley is Director of Product Management for aurionPro SENA Inc., the leading global Oracle Identity and Access Management specialist delivery and product development partner. In October 2012 aurionPro SENA announced the release of the Mobile IDM application, which delivers key Identity Management functions from any mobile device.

    The move towards an always-on, globally interconnected world is shifting businesses and consumers alike away from traditional PC-based enterprise application access and more and more towards an 'any device, same experience' world. It is estimated that within five years, in many developing regions of the world, the PC will be obsolete, replaced entirely by cheaper mobile and tablet devices. This will give a vast number of new entrants to the Internet their first experience of the online world, and it will only be via these newer, mobile access channels.

    Designed to address this shift in working and social environments, and released in October 2012, the aurionPro SENA Mobile IDM application addresses this emerging market by enhancing administrators', consumers' and managers' Identity Management (IDM) experience, delivering a mobile application that provides rapid access to frequently used IDM services from any mobile device. Built on the aurionPro SENA Identity Service platform, the mobile application uses Oracle's Cloud, Mobile and Social capabilities and Oracle's Identity Governance Suite for its core functions. The application has been developed using standards-based APIs to ensure seamless integration with a client's on-premise IDM implementation, or equally seamlessly with the aurionPro SENA Hosted Identity Service. The solution delivers multi-platform support, including iOS, Android and BlackBerry, and provides many key features including:

    - An easy-to-access view of all of a user's own access privileges
    - The ability for managers to approve and track requests
    - Simple raising of requests for new applications, roles and entitlements through the service catalogue

    This application has been designed and built with convenience and security in mind. We protect access to critical applications by enforcing PIN-based authentication whilst also providing the user with mobile single sign-on capability. This is just one of the many highly innovative products and services that aurionPro SENA is developing for our clients as we continually strive to enhance the value of their investment in Oracle's class-leading 11g R2 Identity and Access Management suite. The Mobile IDM application is a key component of our Identity Services Suite, which also includes Managed, Hosted and Cloud Identity Services. The Identity Services Suite has been designed and built specifically to break the barriers to delivering Enterprise, Mobile and Social Identity Management services from the Cloud.

    aurionPro SENA - Building next generation Identity Services for modern enterprises.

    To view the app please visit http://youtu.be/btNgGtKxovc

    For more information please contact [email protected]

    Read the article

  • apt-get install is not able to access /etc

    - by HorusKol
    I put together an Ubuntu 12.04 server a couple of weeks ago and everything seemed fine until this morning. Suddenly, I'm having trouble installing new packages. At first I thought there was something wrong with tinyproxy, so I tried installing squid instead; however, I get similar results:

    Starting tinyproxy: tinyproxy: Could not open config file "/etc/tinyproxy.conf".
    ...
    /var/lib/dpkg/info/squid3.postinst: 1: /var/lib/dpkg/info/squid3.postinst: cannot open /etc/squid3/squid.conf: No such file

    It seems that apt-get is not creating the configuration files needed for these programs. I haven't modified any configuration or user groups since the last successful update/install of packages. /etc is present and is populated with a nice healthy tree of configuration files. It is owned and grouped to root, and has the permissions drwxr-xr-x; all the files and folders inside seem to be fine too, as far as I can tell. I've even been able to edit/save a couple as sudo.

    Full output from installing tinyproxy:

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following NEW packages will be installed:
      tinyproxy
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B/61.6 kB of archives.
    After this operation, 201 kB of additional disk space will be used.
    Selecting previously unselected package tinyproxy.
    (Reading database ... 58916 files and directories currently installed.)
    Unpacking tinyproxy (from .../tinyproxy_1.8.3-1_amd64.deb) ...
    Processing triggers for ureadahead ...
    Processing triggers for man-db ...
    Setting up tinyproxy (1.8.3-1) ...
    Starting tinyproxy: tinyproxy: Could not open config file "/etc/tinyproxy.conf".
    invoke-rc.d: initscript tinyproxy, action "start" failed.
    dpkg: error processing tinyproxy (--configure):
     subprocess installed post-installation script returned error exit status 70
    Errors were encountered while processing:
     tinyproxy
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    Result of strace after installation:

    18467 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
    18467 open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
    18467 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\200\30\2\0\0\0\0\0"..., 832) = 832
    18467 open("/etc/tinyproxy.conf", O_RDONLY) = -1 ENOENT (No such file or directory)
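    A hedged diagnostic sketch for this situation: the strace ENOENT shows the conffile was simply never written, so the natural next step is to ask dpkg what it thinks the package owns, then let a purge/reinstall cycle lay the conffiles down again. These are standard apt/dpkg invocations, not a verified fix for this particular machine:

    dpkg -L tinyproxy | grep ^/etc     # list the files under /etc the package should own
    sudo apt-get purge tinyproxy       # purge removes the package together with its conffiles
    sudo apt-get install tinyproxy     # reinstall so dpkg writes the conffiles afresh
    # if a conffile is recorded as installed but is missing, dpkg can restore it:
    sudo apt-get install --reinstall -o Dpkg::Options::="--force-confmiss" tinyproxy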

    Read the article

  • Windows Phone 7 Series - Tools and Resources

    - by TechTwaddle
    Unless you've been living in the caves of Lascaux for the past couple of days, you probably know what's happening in the world of Windows Phone. Microsoft unveiled the developer tools required to develop applications and games for Windows Phone 7 at MIX10 a couple of days back - Silverlight and XNA being the major frameworks, no big surprise there. And the best news of all is that the development tools are free! So if you are planning to develop apps for Windows Phone 7, read on.

    The first place, or more appropriately hub, for you is the Windows Phone Developer Portal. It has most of the information you need to get started. Now, there is a ton of information available at other places too. In this post, I have taken the time to put all the information that I found useful in one place, and I'll keep updating this as and when I find new stuff.

    Setting up the development environment

    1. Install the Windows Phone Developer Tools CTP (Community Technology Preview). This will install Visual Studio 2010 Express, Silverlight, the XNA framework and the emulator for Windows Phone 7. It also installs a few support tools.
    2. Expression Blend 4 for Windows Phone:
    - Install Expression Blend 4 beta
    - Install Expression Blend Add-in Preview for Windows Phone
    - Install Expression Blend SDK Preview for Windows Phone

    Installing the above tools should set your machine up for development. I installed the tools on my Windows Vista SP1 machine and the process went smoothly without running into any major hitch. Note that the tools won't install on Windows XP - read the release notes of the CTP.

    Resources and Documentation

    1. Microsoft Windows Phone 7 Series Developer Training Kit
    2. Programming Windows Phone 7 Series by Charles Petzold. Contains a few chapters only; gives a good preview.
    3. MSDN documentation for Windows Phone 7 development
    4. A sample chapter from Learning Windows Phone Programming [PDF] by Yochay Kiriaty and Jaime Rodriguez. The complete book will be available at a later time.
    5. The Windows Phone 7 Developer Forum, where you can ask about questions and problems you run into and the experts are there to help you.
    6. For Silverlight visit silverlight.net, and for XNA game development the XNA Creators Club is the place to go; also make sure you follow Michael Klutcher's and Shawn Hargreaves' blogs.
    7. And finally, the MIX'10 website. Most of the sessions will be available for download later (some are already available). Click on the Windows Phone tag to get all the session details and downloads.

    If you are completely new to Silverlight and XNA (like me), and C# makes some sense to you, then I suggest you go through the Developer Training Kit. It gives a good start and ramps you up pretty quickly.

    Read the article

  • BizTalk 2009 - Creating a Custom Functoid Library

    - by StuartBrierley
    If you find that you have a need to create multiple custom functoids, you may also choose to create a Custom Functoid Library - a single project containing many custom functoids. As previously discussed, the Custom Functoid Wizard can be used to create a project with a new custom functoid inside. But what if you want to extend this project to include more custom functoids and create your Custom Functoid Library?

    First create a Custom Functoid Library project and your first custom functoid using the Custom Functoid Wizard. When you open your Custom Functoid Library project in Visual Studio you will see that it contains your custom functoid class file along with its resource file. One of the items this resource file contains is the ID of the custom functoid. Each custom functoid needs a unique ID that is over 6000. When creating a Custom Functoid Library I would first suggest that you delete the ID from this resource file and instead create a _FunctoidIDs class containing constants for each of your custom functoids. In this way you can easily see which custom functoid IDs are assigned to which custom functoid, and which ID is next in the sequence of availability:

    namespace MyCompany.BizTalk.Functoids.TestFunctoids
    {
        class _FunctoidIDs
        {
            public const int TestFunctoid = 6001;
        }
    }

    You will then need to update the base() function in your existing functoid class to reference these constant values rather than the current resource file.

    From:

    int functoidID;
    // This has to be a number greater than 6000
    functoidID = System.Convert.ToInt32(resmgr.GetString("FunctoidId"));
    this.ID = functoidID;

    To:

    this.ID = _FunctoidIDs.TestFunctoid;

    To create a new custom functoid you can copy the existing custom functoid, renaming the resultant class file as appropriate. Once it is renamed you will need to change the class name, ResourceName reference and base function name in the class code to those of your new custom functoid. You will also need to create a new constant value in the _FunctoidIDs class and update the ID reference in your code to match it.

    Assuming that you need some different functionality from your new custom functoid, you will need to check or amend the following in your functoid class file:

    - Min and Max connections
    - Functoid Category
    - Input and Output connection types
    - The parameters and functionality of the Execute function

    To change the appearance of your new custom functoid you will need to check or amend the following in the functoid resource file:

    - Name
    - Description
    - Tooltip
    - Exception
    - Icon

    You can change the string values by double-clicking the resource file and amending the value fields in the string table. To amend the functoid icon you will need to create a 16x16 bitmap image. Once you have saved this you are then ready to import it into the functoid resource file. In Visual Studio change the resource view to images, right-click the icon and choose import from file.

    You have now completed your new custom functoid and created a Custom Functoid Library. You can test your new library of functoids by building the project, copying the resultant DLL to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions and then resetting the toolbox in Visual Studio.

    Read the article

  • Given the presentation model pattern, is the view, presentation model, or model responsible for adding child views to an existing view at runtime?

    - by Ryan Taylor
    I am building a Flex 4 based application using the presentation model design pattern. This application will have several different components. The MainView and DashboardView will always be visible, and they each have corresponding presentation models and models as necessary. These views are easily created by declaring their MXML in the application root:

        <s:HGroup width="100%" height="100%">
            <MainView width="75%" height="100%"/>
            <DashboardView width="25%" height="100%"/>
        </s:HGroup>

    There will also be many WidgetViewN views that can be added to the DashboardView by the user at runtime through a simple drop down list; this will need to be accomplished via ActionScript. The drop down list should always show which WidgetViewN views have already been added to the DashboardView, so some state about which WidgetViewN's have been created needs to be stored. Since the list of available WidgetViewN views, and which ones are added to the DashboardView, also needs to be accessible from other components in the system, I think this state needs to be stored in a Model object.

    My understanding of the presentation model design pattern is that the view is very lean: it contains as close to zero logic as is practical. The view communicates/binds to the presentation model, which contains all the necessary view logic. The presentation model is effectively an abstract representation of the view, which supports low coupling and eases testability. The presentation model may have one or more models injected in order to display the necessary information. The models themselves contain no view logic whatsoever.

    So I have several questions around this design:

    1. Who should be responsible for creating the WidgetViewN components and adding these to the DashboardView? Is this the responsibility of the DashboardView, DashboardPresentationModel, DashboardModel or something else entirely? It seems like the DashboardPresentationModel would be responsible for creating/adding/removing any child views from its display, but how do you do this without passing the DashboardView in to the DashboardPresentationModel?
    2. The list of available and visible WidgetViewN components needs to be accessible to a few other components as well. Is it okay for a reference to a WidgetViewN to be stored/referenced in a model?
    3. Are there any good examples of the presentation model pattern in Flex that also include creating child views at runtime?
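    The pattern itself is language-agnostic, so here is a hedged sketch (in C#; the idea maps directly onto ActionScript with an ArrayCollection) of one common answer: the presentation model owns only the state - a collection of widget descriptors - while the view observes that collection and is the only party that creates or removes child views, so the presentation model never needs a reference to the view. All names here are illustrative:

        using System.Collections.ObjectModel;

        // Pure data: describes a widget without holding a display object.
        public class WidgetDescriptor
        {
            public string WidgetType { get; set; }   // e.g. "WidgetView1"
        }

        public class DashboardPresentationModel
        {
            // Injected shared model, so other components can query the same list.
            public ObservableCollection<WidgetDescriptor> Widgets { get; private set; }

            public DashboardPresentationModel(ObservableCollection<WidgetDescriptor> sharedWidgets)
            {
                Widgets = sharedWidgets;
            }

            // View logic lives here, but no display objects are touched.
            public void AddWidget(string widgetType)
            {
                Widgets.Add(new WidgetDescriptor { WidgetType = widgetType });
            }
        }

        public class DashboardView
        {
            // The view observes the presentation model and mirrors its state.
            public DashboardView(DashboardPresentationModel pm)
            {
                pm.Widgets.CollectionChanged += (sender, e) =>
                {
                    // Create or remove child widget views here to mirror pm.Widgets.
                };
            }
        }

    Storing descriptors (type names or factory references) rather than the WidgetViewN instances themselves keeps view objects out of the model layer, which speaks to the second question above.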

    Read the article

  • Some Original Expressions

    - by Phil Factor
    Guest Editorial for the Simple-Talk newsletter. In a guest editorial for the Simple-Talk Newsletter, Phil Factor wonders if we are still likely to find some more novel and unexpected ways of using the newer features of Transact SQL - or maybe some features that have always been there!

    There can be a great deal of fun to be had in trying out recent features of SQL expressions to see if they provide new functionality. It is surprisingly rare to find things that couldn't be done before, albeit in a more cumbersome way; but it is great to experiment, or to read of someone else making that discovery. One such recent feature is the 'table value constructor', or 'VALUES constructor', that managed to get into SQL Server 2008 from Standard SQL. This allows you to create derived tables of up to 1000 rows neatly within select statements that consist of lists of row values. E.g.

        SELECT Old_Welsh, number
        FROM (VALUES
            ('Un',1),('Dou',2),('Tri',3),('Petuar',4),('Pimp',5),
            ('Chwech',6),('Seith',7),('Wyth',8),('Nau',9),('Dec',10)
        ) AS WelshWordsToTen (Old_Welsh, number)

    These values can be expressions that return single values, including, surprisingly, subqueries. You can use this device to create views, or in the USING clause of a MERGE statement. Joe Celko covered this here and here. It can become extraordinarily handy once one gets into the way of thinking in these terms, and I've rewritten a lot of routines to use the constructor; the old way, using UNION, can achieve the same thing, but is a little slower and more long-winded.

    The use of scalar SQL subqueries as an expression in a VALUES constructor, and then applied to a MERGE, has got me thinking. It looks very clever, but what use could one put it to? I haven't seen anything yet that couldn't be done almost as simply in SQL Server 2000, but I'm hopeful that someone will come up with a way of solving a tricky problem, in just the same way that a freak of the XML syntax forever made the in-line production of delimited lists from an expression easy, or that a weird XML pirouette could do an elegant pivot-table rotation.

    It is in this sort of experimentation that the community of users can make a real contribution. The dissemination of techniques such as the Number, or Tally, table, or the unconventional ways that the UPDATE statement can be used, has been rapid thanks to articles and blogs. However, there is plenty to be done to explore some of the less obvious features of Transact SQL. Even some of the features introduced in SQL Server 2000 are hardly well-known. Certain operations on data are still awkward to perform in Transact SQL, but we mustn't, I think, be too ready to state that certain things can only be done in the application layer, or using a CLR routine. With the vast array of features in the product, and with the tools that surround it, I feel that there is generally a way of getting tricky things done. Or should we just stick to our lasts and push anything difficult out into procedural code? I'd love to know your views.

    Read the article

  • Multiple render targets and gamma correctness in Direct3D9

    - by Mario
    Let's say in a deferred renderer, when building your G-Buffer, you're going to render texture color, normals, depth and whatever else to your multiple render targets at once. Now if you want to have a gamma-correct rendering pipeline and you use regular sRGB textures as well as render targets, you'll need to apply some conversions along the way, because your filtering, sampling and calculations should happen in linear space, not sRGB space. Of course, you could store linear color in your textures and render targets, but this might very well introduce bad precision and banding issues.

    Reading from sRGB textures is easy: just set SRGBTexture = true; in your texture sampler in your HLSL effect code and the hardware does the sRGB-to-linear conversion for you. Writing to an sRGB render target is theoretically easy, too: just set SRGBWriteEnable = true; in your effect pass in HLSL and your linear colors will be converted to sRGB space automatically.

    But how does this work with multiple render targets? I only want to apply these corrections to the color textures and render target, not to the normals, depth, specularity or whatever else I'll be rendering to my G-Buffer. Ok, so I just don't apply SRGBTexture = true; to my non-color textures; but when using SRGBWriteEnable = true; I'll do a gamma correction to all the values I write out to my render targets, no matter what I actually store there. I found some info on gamma over at Microsoft (http://msdn.microsoft.com/en-us/library/windows/desktop/bb173460%28v=vs.85%29.aspx):

        For hardware that supports Multiple Render Targets (Direct3D 9) or Multiple-element Textures (Direct3D 9), only the first render target or element is written.

    If I understand correctly, SRGBWriteEnable should only be applied to the first render target, but according to my tests it isn't, and is applied to all render targets instead. Now the only alternative seems to be to handle these corrections manually in my shader and only correct the actual color output, but I'm not totally sure that this won't have a negative impact on color correctness - e.g. if the GPU does any blending or filtering or multisampling after the linear-to-sRGB conversion...

    Do I even need gamma correction in this case, if I'm just writing texture color without lighting to my render target? As far as I know I do, because the texture filtering and mip sampling would otherwise happen in sRGB space, if I don't correct for it. Anyway, it'd be interesting to hear other people's solutions or thoughts about this.
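    For reference, the manual route described above comes down to applying the standard sRGB transfer functions by hand to the color output only. A sketch of that math in C# (the same two formulas would sit in the HLSL pixel shader, applied solely to the color target; the 0.0031308/0.04045 thresholds and 2.4 exponent are the standard sRGB constants, not values from the original post):

        using System;

        static class Srgb
        {
            // Encode: linear [0,1] -> sRGB [0,1]. Apply only to the color output.
            public static float LinearToSrgb(float c)
            {
                return c <= 0.0031308f
                    ? 12.92f * c
                    : 1.055f * (float)Math.Pow(c, 1.0 / 2.4) - 0.055f;
            }

            // Decode: sRGB [0,1] -> linear [0,1]. What SRGBTexture = true does in hardware.
            public static float SrgbToLinear(float c)
            {
                return c <= 0.04045f
                    ? c / 12.92f
                    : (float)Math.Pow((c + 0.055f) / 1.055f, 2.4);
            }
        }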

    Read the article

  • Surviving MATLAB and R as a Hardcore Programmer

    - by dsimcha
    I love programming in languages that seem geared towards hardcore programmers. (My favorites are Python and D.) MATLAB is geared towards engineers and R is geared towards statisticians, and it seems like these languages were designed by people who aren't hardcore programmers and don't think like hardcore programmers. I always find them somewhat awkward to use, and to some extent I can't put my finger on why. Here are some issues I have managed to identify:

    - (Both): The extreme emphasis on vectors and matrices, to the extent that there are no true primitives.
    - (Both): The difficulty of basic string manipulation.
    - (Both): Lack of, or awkwardness in, support for basic data structures like hash tables and "real", i.e. type-parametric and nestable, arrays.
    - (Both): They're really, really slow even by interpreted-language standards, unless you bend over backwards to vectorize your code.
    - (Both): They seem not to be designed to interact with the outside world. For example, both are fairly bulky programs that take a while to launch and seem not to be designed to make simple text-filter programs easy to write. Furthermore, the lack of good string processing makes file I/O in anything but very standard forms near impossible.
    - (Both): Object orientation seems to have a very bolted-on feel. Yes, you can do it, but it doesn't feel much more idiomatic than OO in C.
    - (Both): No obvious, simple way to get a reference type. No pointers or class references. For example, I have no idea how you roll your own linked list in either of these languages.
    - (MATLAB): You can't put multiple top-level functions in a single file, encouraging very long functions and cut-and-paste coding.
    - (MATLAB): Integers apparently don't exist as a first-class type.
    - (R): The basic built-in data structures seem way too high-level and poorly documented, and never seem to do quite what I expect given my experience with similar but lower-level data structures.
    - (R): The documentation is spread all over the place and virtually impossible to browse or search. Even D, which is often knocked for bad documentation and is still fairly alpha-ish, is substantially better as far as I can tell.
    - (R): At least as far as I'm aware, there's no good IDE for it. Again, even D, a fairly alpha-ish language with a small community, does better.

    In general, I also feel like MATLAB and R could easily be replaced by plain old libraries in more general-purpose languages, if sufficiently comprehensive libraries existed. This is especially true in newer general-purpose languages that include lots of features for library writers.

    Why do R and MATLAB seem so weird to me? Are there any other major issues that you've noticed that may make these languages come off as strange to hardcore programmers? When their use is necessary, what are some good survival tips?

    Edit: I'm seeing one issue from some of the answers I've gotten. I have a strong personal preference, when I analyze data, for having one script that incorporates the whole pipeline. This implies that a general-purpose language needs to be used. I hate having to write a script to "clean up" the data and spit it out, then another to read it back in a completely different environment, and so on. I find having to use MATLAB/R for some of my work and a completely different language, with a completely different address space and way of thinking, for the rest to be a huge source of friction. Furthermore, I know there are glue layers that exist, but they always seem to be horribly complicated and a source of friction.

    Read the article

  • Why would Copying a Large Image to the Clipboard Freeze a Computer?

    - by Akemi Iwaya
    Sometimes something really odd happens when using our computers that makes no sense at all - such as copying a simple image to the clipboard and the computer freezing up because of it. An image is an image, right? Today's SuperUser post has the answer to a puzzled reader's dilemma. Today's Question & Answer session comes to us courtesy of SuperUser - a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. Original image courtesy of Wikimedia.

    The Question

    SuperUser reader Joban Dhillon wants to know why copying an image to the clipboard on his computer freezes it up:

        I was messing around with some height map images and found this one: (http://upload.wikimedia.org/wikipedia/commons/1/15/Srtm_ramp2.world.21600×10800.jpg) The image is 21,600 x 10,800 pixels in size. When I right click and select "Copy Image" in my browser (I am using Google Chrome), it slows down my computer until it freezes. After that I must restart. I am curious about why this happens. I presume it is the size of the image, although it is only about 6 MB when saved to my computer. I am also using Windows 8.1.

    Why would a simple image freeze Joban's computer up after copying it to the clipboard?

    The Answer

    SuperUser contributor Mokubai has the answer for us:

        "Copy Image" is copying the raw image data, rather than the image file itself, to your clipboard. The raw image data will be 21,600 x 10,800 x 3 (24 bit image) = 699,840,000 bytes of data. That is approximately 700 MB of data your browser is trying to copy to the clipboard. JPEG compresses the raw data using a lossy algorithm and can get pretty good compression, hence the compressed file is only 6 MB.

        The reason it makes your computer slow is that it is probably filling your memory up with at least the 700 MB of image data that your browser is using to show you the image, another 700 MB (along with whatever overhead the clipboard incurs) to store it on the clipboard, and a not insignificant amount of processing power to convert the image into a format that can be stored on the clipboard. Chances are that if you have less than 4 GB of physical RAM, those copies of the image data are forcing your computer to page memory out to the swap file in an attempt to fulfil both memory demands at the same time. This will cause programs and disk access to be sluggish as they use the disk and try to use the data that may have just been paged out.

        In short: do not use the clipboard for huge images unless you have a lot of memory and a bit of time to spare.

        Like pretty graphs? This is what happens when I load that image in Google Chrome, then copy it to the clipboard on my machine with 12 GB of RAM: It starts off at the lower point using 2.8 GB of RAM, loading the image punches it up to 3.6 GB (approximately the 700 MB), then copying it to the clipboard spikes way up there at 6.3 GB of RAM before settling back down at the 4.5-ish GB you would expect to see for a program and two copies of a rather large image. That is a whopping 3.7 GB of image data being worked on at the peak, which is probably the initial image, a reserved quantity for the clipboard, and perhaps a couple of conversion buffers. That is enough to bring any machine with less than 8 GB of RAM to its knees. Strangely, doing the same thing in Firefox just copies the image file rather than the image data (without the scary memory surge).

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
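    The arithmetic behind the answer is easy to sanity-check yourself; a small sketch (the dimensions come from the question above, everything else is plain bookkeeping):

        using System;

        class RawImageSize
        {
            static void Main()
            {
                // Height map from the question: 21,600 x 10,800 pixels, with
                // 3 bytes per pixel once decoded to 24-bit RGB. The JPEG's
                // 6 MB on-disk size is irrelevant to the in-memory footprint.
                long width = 21600, height = 10800, bytesPerPixel = 3;

                long rawBytes = width * height * bytesPerPixel;
                Console.WriteLine("{0:N0} bytes = ~{1:N0} MB of raw pixels",
                    rawBytes, rawBytes / 1000000.0);
                // Prints: 699,840,000 bytes = ~700 MB of raw pixels
            }
        }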

    Read the article

  • Windows 8 Camp – Ways to Prepare

    - by Lori Lalonde
    When Windows 8 was announced at the BUILD conference back in September, it created quite a buzz among the developer community. By the spring of 2012, Windows 8 Developer Camps started popping up everywhere imaginable. I received a lot of questions from CTTDNUG members about whether or not we would be hosting one locally. If you recall my post about the Windows Phone/Azure Developer Workshop that CTTDNUG hosted back in March, you'll remember that the biggest hurdle to overcome when planning this type of event was finding the right venue. It took some time, but I finally found a venue that was available and provided the prerequisites needed to ensure this camp is a success.

    I am very excited that CTTDNUG will be hosting a Windows 8 Camp this summer in the Kitchener/Waterloo area. In fact, it's coming up in less than 2 weeks. Clearly other developers are excited as well, because our registration numbers show that the event is already 70% full! On top of that, I was fortunate enough to book two well-known evangelists to present and teach at this full-day developer camp: Andrei Marukovich and Atley Hunter. This was the icing on the cake. With the content provided by Microsoft, and two local experts who live and breathe Windows 8 development, I know that I, along with the other developers attending this event, will have the opportunity to maximize our learning potential and hit the ground running.

    If you plan on attending a Windows 8 Developer Camp soon, and want to ensure you get the most "bang for your buck" (figuratively speaking, since these camps are free), there are some things you can do to prepare before the big day:

    1) Install the prerequisites on your own device before the big day. I can't stress this enough. Otherwise, you will spend valuable time during the hands-on period downloading and installing what is needed, rather than digging into the development and using that time to ask the experts on hand about programming challenges, issues, and questions you may have with respect to your development. Prerequisites:
        - Windows 8 Release Preview
        - Visual Studio 2012 RC
        - The Windows 8 SDK Samples

    2) Purchase, download, and read Charles Petzold's newest book, Programming Windows 6th Edition. This is a great introduction to the type of content you will be learning about during the camp. Doing some light reading beforehand might raise some questions about the concepts discussed in the book, which will give you the opportunity to write them down and bring them with you to the camp. The experts on hand will be able to answer them for you.

    3) Make use of the freebies that are available. Telerik has recently released a preview of their RadControls for Metro. You can sign up to receive a license code that gives you access to install the preview for free and start playing around with it. Syncfusion also offers a free download of their Metro Studio package, which is a collection of metro-style icons that you can customize and use in your own applications. Last but not least, once you've installed the Windows 8 Release Preview on your own device, go to the Windows 8 Store and download a handful of the free apps that are available. Testing out other Metro apps may give you ideas of what you can do in your own apps and let you analyze which features you like: application flow, the type of animations used, concepts that were leveraged, how live tiles were used, etc.

    I hope you found these tips useful as you embark on a new development journey! Although this post focused on how to prepare for a Windows 8 camp, the same ideas apply to whichever developer camp/workshop/event you attend. Learning does not begin and end on the day of the event. Attending a developer camp is just one step of many to master whatever technology you are interested in. It is a continuous process, which is fully maximized when you do your homework beforehand, actively participate during, and follow up by putting what you learned into practice afterwards. Happy coding!

    Read the article

  • Questions re: Eclipse Jobs API

    - by BenCole
    Similar to http://stackoverflow.com/questions/8738160/eclipse-jobs-api-for-a-stand-alone-swing-project

    This question mentions the Jobs API from the Eclipse IDE:

        ...The disadvantage of the pre-3.0 approach was that the user had to wait until an operation completed before the UI became responsive again. The UI still provided the user the ability to cancel the currently running operation, but no other work could be done until the operation completed. Some operations were performed in the background (resource decoration and JDT file indexing are two such examples) but these operations were restricted in the sense that they could not modify the workspace. If a background operation did try to modify the workspace, the UI thread would be blocked if the user explicitly performed an operation that modified the workspace and, even worse, the user would not be able to cancel the operation. A further complication with concurrency was that the interaction between the independent locking mechanisms of different plug-ins often resulted in deadlock situations. Because of the independent nature of the locks, there was no way for Eclipse to recover from the deadlock, which forced users to kill the application...

        ...The functionality provided by the workspace locking mechanism can be broken down into the following three aspects:
        - Resource locking to ensure multiple operations did not concurrently modify the same resource
        - Resource change batching to ensure UI stability during an operation
        - Identification of an appropriate time to perform incremental building

        With the introduction of the Jobs API, these areas have been divided into separate mechanisms and a few additional facilities have been added. The following list summarizes the facilities added:
        - Job class: support for performing operations or other work in the background.
        - ISchedulingRule interface: support for determining which jobs can run concurrently.
        - WorkspaceJob and two IWorkspace#run() methods: support for batching of delta change notifications.
        - Background auto-build: running of incremental build at a time when no other running operations are affecting resources.
        - ILock interface: support for deadlock detection and recovery.
        - Job properties for configuring user feedback for jobs run in the background.

        The rest of this article provides examples of how to use the above-mentioned facilities...

    In regards to the above API: is this an implementation of a particular design pattern? Which one?

    Read the article

  • SharePoint Saturday DC

    - by Mark Rackley
    Wow… did you see this thing? 927 attendees? An exhibition hall full of vendors? 94 speakers? 100 sessions?? Insane is a word that comes to mind… SharePoint Saturday DC was definitely epic as far as SharePoint Saturdays go. I got to catch up with a lot of friends and make some new ones. Met a couple of fans of the blog (hello ladies… ;)) Did you know that people actually read this thing? I guess that means I need to stop putting so much garbage on here and post more content. I'll get right on that as soon as I find out how to add 6 hours to each day.

    Anyway, once again I did my "Wrapping Your Head Around the SharePoint Beast" session. I tweaked it even more from Huntsville and presented to a packed room, with some people sitting on the floor and standing in the aisles. It was a great crowd, very interactive, and they seemed interested at least. Thank you guys so much for attending, and please feel free to send me any suggestions you have to make the presentation better. This is one of the presentations that will probably never die. Everyone beginning SharePoint development needs a good introduction and starting point; my goal is to make this THE session to see on the subject.

    So, a little interesting data about my class: half of the room was brand new to SharePoint and only one person was using 2010. That tells me that this session still has legs and that 2007 isn't going anywhere anytime soon. I know my organization will be using 2007 for at least a couple more years. Oh yeah… the slide deck? Here it is: SharePoint Saturday DC Slide Deck

    So, SharePoint Saturday was truly tremendous, and if you weren't there you missed out. @meetdux, @usher, and the rest of their crew did a spectacular job. You guys rock and are a huge asset to the community. Thanks for allowing me to speak.

    What's up next for me? I'm so glad you asked… SHAREPOINT SATURDAY OZARKS IS JUNE 12TH! Although SharePoint Saturday Ozarks on June 12 in Harrison, Arkansas will be a much more intimate event than DC, it promises to be a most memorable one. We've got over 30 speakers and sessions, some cool stuff to give away, and we're going floating down the Buffalo River on the 13th. Let's see you do THAT in DC. :) Anyway, I hope to see you there, and I would truly appreciate anything you can do to help publicize the event. We just got internet here in the hills and most people here are still looking for the "any" key….

    Read the article

< Previous Page | 446 447 448 449 450 451 452 453 454 455 456 457  | Next Page >