Search Results

Search found 15712 results on 629 pages for 'location href'.


  • Software Engineering Practices – Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, whether what we are doing is appropriate, and what factors you should consider when determining which practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go:

    If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access database, Excel spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS).

    I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over so I won’t bother going into them again here, but there are also (unnecessary) costs with over-engineering a solution. Sometimes putting in multiple layers, and IoC containers, and abstracting out the persistence, etc. is complete overkill if a one-time-use Access database could solve the problem perfectly well.

    A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times).

    My thought is that we should be making a conscious decision, around the start of each project, about approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors:

    1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working?
    2. Complexity - How complex is the application functionality?
    3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time-use application, does it fill a short-term need, or is it more strategic, expected to be in use for many years to come?

    Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at and then translate that into some prescriptive set of practices and techniques you should be using.
    Rather, my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum; indeed a large number of projects should probably fall somewhere in the middle, and different projects should adopt different levels of software engineering practices and maturity based on the needs of that project.

    To give an example of this way of thinking from my day job: every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event, all these various pieces of data were tracked in separate spreadsheets, and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics.

    The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so, for example, if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS, meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining its place within the SEMS, we can see why:

    Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer-facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss).

    Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data and link them together appropriately. Precisely the task that Access (and/or Excel) can do with minimal custom development required.

    Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below).

    Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or Windows application. In this case the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much.
    We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an Access database and continue using it; this is a decision that needs to be made on a case-by-case basis).

    At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also, as we hold focus groups and gather feedback from customers and potential customers, there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company.

    In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release, we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch.

    The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time, but rather that you should be aware of when and how the context and objectives around a project change, and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs.

    Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed, since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay).

    Developer Skills

    Another important aspect to this whole discussion is the skill sets of your architects and lead developers. When talking about the progression of a developer’s skills from junior to intermediate to senior and beyond, they generally start by only being able to write code that belongs on the left side of the SEMS, and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS.
    We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right end of the SEMS (the exact practices and techniques that translates to are constantly changing, but that’s not the point here). A critical skill that I’d love to see more evidence of in our industry is the most senior people not only being able to work at the right end of the SEMS, but, more importantly, being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill is being able to make the conscious decision to move a project’s code further right on the SEMS (based on changing needs) and to do so in an incremental manner, without having to start from scratch. An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.

    Read the article

  • Access PC Settings Easily from Your Desktop in Windows 8 and 8.1

    - by Akemi Iwaya
    Accessing your system’s settings in Windows 8 is not exactly the most straightforward of processes, so if you need to change your settings often it can be a bit frustrating. With that in mind, the good folks over at 7 Tutorials have created an awesome shortcut that will take all the hassle out of accessing those settings, and make ‘tweaking’ Windows 8 much easier. After downloading the zip file, extract the exe file and place it in an appropriate folder, then create a shortcut. Once you have the new shortcut set up in the desired location (e.g. on the desktop or pinned to the taskbar), accessing your system’s settings has never been easier in Windows 8 and 8.1!

    Special Note: If you are someone who runs files through VirusTotal before using them, be aware that two listings there (Commtouch and Symantec) will flag the file as malware. We had no problems on our system whatsoever and believe the malware flags to be false positives.

    Download the Desktop Shortcut to PC Settings, for Windows 8 & 8.1 [7 Tutorials]

    Read the article

  • VSNewFile: A Visual Studio Addin to More Easily Add New Items to a Project

    - by InfinitiesLoop
    My first Visual Studio add-in! Creating add-ins is pretty simple, once you get used to the CommandBar model it is using, which is apparently a general Office suite extensibility mechanism. Anyway, let me first explain my motivation for this. It started out as an academic exercise, as I have always wanted to dip my feet in a little VS extensibility. But I thought of a legitimate need for an add-in, at least in my personal experience, so it took on new life. And I figured I can’t be the only one who has felt this way, so I decided to publish the add-in and host it on GitHub (VSNewFile on GitHub), hoping to spur contributions.

    Adding Files the Built-in Way

    Here’s the problem I wanted to solve. You’re working on a project, and it’s time to add a new file to the project. Whatever it is – a class, script, html page, aspx page, or what-have-you – you go through a menu or keyboard shortcut to get to the “Add New Item” dialog. Typically, you do it by right-clicking the location where you want the file (the project or a folder of it). This brings up a dialog that contains, well, every conceivable type of item you might want to add. It’s all the available item templates, which can result in anywhere from a ton to a veritable sea of choices. To be fair, this dialog has been revamped in Visual Studio 2010, which organizes it a little better than Visual Studio 2008 and adds a search box. It also loads noticeably faster.

    To me, this dialog is just getting in my way. If I want to add a JavaScript script to my project, I don’t want to have to hunt for the script template item in this dialog. Yes, it is categorized, and yes, it now has a search box. But still, all this UI to swim through when all I need is a new file in the project. I will name it. I will provide the content. I don’t even need a ‘template’. VS kind of realizes this. In the add menu in a class library project, for example, there is an “Add Class…” choice. But all this really does is select that project item from the dialog by default. You still must wait for the dialog, see it, and type in a name for the file. How is that really any different than hitting F2 on an existing item? It isn’t.

    Adding Files the Hack Way

    What I often find myself doing, just to avoid going through this dialog, is to copy and paste an existing file, rename it, then “CTRL-A, DEL” the content. In a few short keystrokes I’ve got my new file. Even if the original file wasn’t the right type, it doesn’t matter – I will rename it anyway, including the extension. It works well enough if the place I am adding the file to doesn’t have much in it already. But if there are a lot of files at that level, it sucks, because the new file will have the name “Copy of xyz”, causing it to be moved into the ‘C’ section of the alphabetically sorted items, which might be far, far away from the original file (and so I tend to try and copy a file that starts with ‘C’ *evil grin*).

    Using ‘Export Template’

    To be completely fair I should at least mention this feature. I’m not even sure if this is new in VS 2010 or not (I think so). But it allows you to export a project item or items, including potential project references required by it. Then it becomes a new item in the available ‘installed templates’. No doubt this is useful to help bootstrap new projects. But that still requires you to go through the ‘New Item’ dialog.
    Adding Files with VSNewFile

    So hopefully I have sufficiently defined the problem and got a few of you to think, “Yeah, me too!”… What VSNewFile does is let you skip the dialog entirely by adding project items directly to the context menu. But it does a bit more than that, so do read on. For example, to add a new class, you can right-click the location and pick that option. A new .cs file is instantly added to the project, and the new item is selected and put into ‘rename’ mode immediately.

    The default items are customizable, and you can also customize the content of each template. To do so, you create a directory in your documents folder, ‘VSNewFile Templates’. In there, you drop the templates you want to use, but you name them in a particular way. For example, a template can add a new menu item named “Add TITLE” which creates a project item named “SOMEFILE.foo” (or ‘SOMEFILE1.foo’ if that exists, etc.). The format of the file name is:

        <ORDER>_<KEY>_<BASE FILENAME>_<ICON ID>_<TITLE>.<EXTENSION>

    Where:
    <ORDER> is a number that lets you determine the order of the items in the menu (relative to each other).
    <KEY> is a case-sensitive identifier, different for each template item. More on that later.
    <BASE FILENAME> is the default name of the file, which doesn’t matter that much, since they will be renaming it anyway.
    <ICON ID> is a number that dictates the icon used for the menu item. There are a huge number of built-in choices. More on that later.
    <TITLE> is the string that will appear in the menu.

    And the contents of the file are the default content for the item (the ‘template’). The content of the file can contain anything you want, of course, but it also supports two tokens, %NAMESPACE% and %FILENAME%, which will be replaced with the corresponding values. Here is the content of this sample:

        testing
        Namespace = %NAMESPACE%
        Filename = %FILENAME%

    I kind of went back and forth on this. I could have made it so there’d be an XML or JSON file that defines the templates, instead of cramming all this data into the filename itself. I like the simplicity of this better. It makes it easy to customize, since you can literally just throw these files around, copy them from someone else, etc., without worrying about merging data into a central description file, in whatever format.

    Practical Use

    One immediate thing I am using this for is to make it easier to add very commonly used scripts to my web projects. For example, uh, say, jQuery? :) All I need to do is drop jQuery-1.4.2.js and jQuery-1.4.2.min.js into the templates folder, provide the order, title, etc., and then instantly I can add jQuery to any project I have without even thinking about “where is jQuery? Can I copy it from that other project?”

    Using the KEY

    There are two reasons for the ‘key’ portion of the item name. First, it allows you to turn off the built-in, default templates, which are:

    FILE = Add File (generic, empty file)
    VB = Add VB Class
    CS = Add C# Class (includes some basic usings)
    HTML = Add HTML page (includes basic structure, doctype, etc.)
    JS = Add Script (includes an immediately-invoking function closure)

    To turn one off, just include a file with the name “_<KEY>”; to turn off all the items except your custom one, you would drop in files named _FILE, _VB, _CS, _HTML and _JS. The other reason for the key is that a new Visual Studio command is created for each one. This makes it possible to bind a keyboard shortcut to one of them.
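    To make the naming scheme concrete, here is a hypothetical template file name (every value in it is invented for illustration):

        35_JQ_jquery_142_Add jQuery.js

    That decomposes as ORDER = 35, KEY = JQ, BASE FILENAME = jquery, ICON ID = 142, TITLE = “Add jQuery”, and the .js extension becomes the extension of the created item. The KEY segment (JQ here) is also what the generated Visual Studio command is tied to, so a shortcut bound to it survives edits to the template file.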
    So you could, for example, have a keyboard combination that adds a new web page to your website, or a new CS class to your class library, etc. The sample item shows up in the keyboard bindings options like any other command. Even though the contents of the template directory may change from one launch of Visual Studio to the next, the bindings will remain attached to any item with a particular key; the add-in takes care not to lose keyboard bindings even though the commands are completely recreated each time.

    The Icon Face ID

    Visual Studio uses a Microsoft Office style add-in mechanism, I gather. There is a predetermined set of built-in icons available. You can use your own icons when developing add-ins, of course, but I’m no designer. I just wanted to find appropriate-ish icons for the built-in templates, and allow you to choose from an existing built-in icon for your own. Unfortunately, there isn’t a lot out there on the interwebs that helps you figure out what the built-in icons are. There’s an MSDN article that describes at length a way to create a program that lists all the icons. But I don’t want to write a program to figure them out! Just show them to me! Sheesh :) Thankfully, someone out there felt the same way, and used a novel hack to get the icons to show up in an Outlook toolbar. He then painstakingly took screenshots of them, one group at a time. It isn’t complete though – there are tens of thousands of icons. But it’s good enough. If anyone has an exhaustive list, please let me, and the rest of the add-in community, know: Icon Face ID Reference.

    Installing the Add-in

    It will work with Visual Studio 2008 and Visual Studio 2010. Just unzip the release into your Documents\Visual Studio 20xx\Addins folder. It contains the binary and the Visual Studio “.addin” file. For example, the path to mine is:

        C:\Users\InfinitiesLoop\Documents\Visual Studio 2010\Addins

    Conclusion

    So that’s it! I hope you find it as useful as I have. It’s on GitHub, so if you’re into this kind of thing, please do fork it and improve it!

    Reference: VSNewFile on GitHub, VSNewFile release on GitHub, Icon Face ID Reference

    Read the article

  • CIC 2010 - Ghost Stories and Model Based Design

    - by warren.baird
    I was lucky enough to attend the Collaboration and Interoperability Congress recently. The location was very beautiful and interesting; it was held in the mountains about two hours outside Denver, at the Stanley Hotel, famous both for inspiring Stephen King's novel "The Shining" and for attracting a lot of attention from the "Ghost Hunters" TV show. My visit was prosaic - I didn't get to experience the ghosts the locals promised - but interesting, with some very informative sessions.

    I noticed one main theme - a lot of people were talking about Model Based Design (MBD), which is moving design and manufacturing away from 2D drawings and towards 3D models. 2D has some pretty deep roots in industrial manufacturing and there have been a lot of challenges encountered in making the leap to 3D. One of the challenges discussed in several sessions was how to get model information out to the non-engineers in the company, which is a topic near and dear to my heart. In the 2D space, people without access to CAD software (for example, people assembling a product on the shop floor) can be given printouts of the design - it's not particularly efficient, and it definitely isn't very green, but it tends to work. There's no direct equivalent in the 3D space.

    One of the ways that AutoVue is used in industrial manufacturing is to provide non-CAD users with an easy to use, interactive 3D view of their products - in some cases it's directly used by people on the shop floor, but in cases where paper is really ingrained in the process, AutoVue can be used by a technical publications person to create illustrative 2D views that can be printed and that show all of the details necessary to complete the work.

    Are you making the move to model based design? Is AutoVue helping you with your challenges? Let us know in the comments below.

    Read the article

  • Couldn't find package - But package is listed in the Packages file

    - by Chris
    (Quoted items are redacted elements.) I am using a private repository and am currently trying to repackage some 3rd-party packages. I extract the package, make a few modifications (just the control files, to fit with company policy - though sometimes file install locations too, not in this case) and repackage (and usually rename). Normally I copy the files into a new blank debhelper project and reconstruct the package. However, with a recent one I attempted to convert, some libraries and stuff aren't linking properly (I did copy the postinst, postrm, and preinst files along with all DEBIAN files exactly): the original package worked, but my repackage doesn't, despite providing the same files in the same locations and the same postinst and preinst. So I was attempting to just modify the current package's control files (as the original package is not very good and will not list in our repository, and getting a better one from the 3rd party is not an option). I also renamed the package. I did the following:

        dpkg-deb -R "directory"
        (modify DEBIAN/control)
        dpkg-deb -b "directory" "package name I want"

    I did this and put it in our repository. The package shows up in the "Packages" file on the repository, and running apt-get update on the client side shows the package in:

        /var/lib/apt/lists/"server"_"location"_Packages

    However, when I do an apt-get install on the package name (as listed in the Packages file - I did a copy paste) it says it can't find the package. Same with an apt-cache search. The Packages listing is as follows (name redacted):

        Package: "package name"
        Priority: extra
        Section: unknown
        Maintainer: "maintainer"
        Architecture: any
        Version: 1.0-lucid5
        Depends: libc
        Filename: "directory"/"package_filename"
        Size: 2206292
        MD5sum: "md5sum"
        SHA1: "sha key"
        SHA256: "sha256 key"
        Description: "description"

    I am running as sudo (and tried as root as well). I don't understand why apt-get won't see the package. Can you point out any flaws in what I have done, or perhaps offer some help on getting apt-get to properly see the package? Or perhaps an alternative - I am not even sure if this is a valid way to repackage something. Thanks.
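    For reference, one way to narrow down where the lookup fails (the package name "mypkg" and the .deb file name below are invented stand-ins for the redacted values, not part of the original setup):

        dpkg-deb -I mypkg_1.0-lucid5.deb   # inspect the control data actually inside the rebuilt .deb
        sudo apt-get update                # refresh the client's package lists
        apt-cache policy mypkg             # does apt know any candidate version at all?
        grep -B 2 -A 10 'Package: mypkg' /var/lib/apt/lists/*_Packages
        # If apt-cache shows no candidate, compare the Package: and Architecture:
        # fields in the index against the client's architecture (see
        # dpkg --print-architecture); a binary index entry normally carries a
        # concrete value such as amd64 or all, so the "Architecture: any" in the
        # listing above is worth checking.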

    Read the article

  • MySQL for Beginners Training-on-Demand First Hand Insight

    - by Antoinette O'Sullivan
    The MySQL for Beginners course is THE course to get you started with MySQL, providing you a solid foundation in relational databases using MySQL as a learning tool. Oracle University recently released the Training-on-Demand option for this course. Ben Krug from the MySQL product team is trying out the MySQL for Beginners Training-on-Demand course and reporting on his experience. You can follow Ben on MySQL Support Blogs.

    The MySQL for Beginners course is available as:

    Training-on-Demand: Follow streaming video of instructor delivery and perform hands-on exercises at your own pace. You can start training within 24 hours of purchase.
    Live-Virtual: Attend a live instructor-led class from your own desk. Hundreds of events on the schedule across timezones.
    In-Class: Travel to an education center to attend this instructor-led class. Some events on the schedule below:

        Location                              Date               Delivery Language
        Warsaw, Poland                        24 September 2012  Polish
        Dublin, Ireland                       15 October 2012    English
        London, United Kingdom                11 September 2012  English
        Rome, Italy                           5 November 2012    Italian
        Hamburg, Germany                      3 December 2012    German
        Lisbon, Portugal                      5 November 2012    European Portuguese
        Amsterdam, Netherlands                10 December 2012   Dutch
        Nieuwegein, Netherlands               18 February 2013   Dutch
        Nairobi, Kenya                        12 November 2012   English
        Barcelona, Spain                      5 November 2012    Spanish
        Madrid, Spain                         8 January 2013     Spanish
        Riga, Latvia                          12 November 2012   Latvian
        Petaling Jaya, Malaysia               22 October 2012    English
        Ottawa, Toronto and Montreal, Canada  17 December 2012   English
        Sao Paulo, Brazil                     11 September 2012  Brazilian Portuguese
        Sao Paulo, Brazil                     5 November 2012    Brazilian Portuguese

    For more information on the Authentic MySQL Curriculum, go to the Oracle University Portal - http://oracle.com/education/mysql

    Read the article

  • Power Dynamic Database-Driven Websites with MySQL & PHP

    - by Antoinette O'Sullivan
    Join major names among MySQL customers by learning to power dynamic database-driven websites with MySQL & PHP. With the MySQL and PHP: Developing Dynamic Web Applications course, in 4 days you learn how to develop applications in PHP and how to use MySQL efficiently for those applications! Through a hands-on approach, this instructor-led course helps you improve your PHP skills and combine them with time-proven database management techniques to create best-of-breed web applications that are efficient, solid and secure.

    You can currently take this course as a:

    Live Virtual Class (LVC): There are a number of events on the schedule to suit different timezones in January 2013 and March 2013. With an LVC, you get to follow this live instructor-led class from your own desk - so no travel expense or inconvenience.
    In-Class Event: Travel to an education center to attend this class. Here are some events already on the schedule:

        Where             When              Delivery Language
        Lisbon, Portugal  15 April 2013     European Portuguese
        Porto, Portugal   15 April 2013     European Portuguese
        Barcelona, Spain  28 February 2013  Spanish
        Madrid, Spain     4 March 2013      Spanish

    If you do not see an event that suits you, register your interest in an additional date/location/delivery language. If you want more in-depth knowledge on developing with MySQL and PHP, consider the MySQL for Developers course. For full details on these and all courses on the authentic MySQL curriculum, go to http://oracle.com/education/mysql.

    Read the article

  • Upcoming events 2011 IT Camp Saturday Tampa and Orlando Code Camp 2011

    - by Nikita Polyakov
    I’ll be speaking at a few upcoming events:

    Saturday March 19th 2011 - IT Camp Saturday Tampa - http://itcampsaturday.com/tampa

    This is a first-of-its-kind IT Pro camp: more open in topic than many traditional Code Camps, and not so code-focused. Here is just a small sample of the sessions:

        Adnan Cartwright - Administrating your Network with Group Policy
        Nikita Polyakov - Intro to Phone 7 Development
        Landon Bass - Enterprise Considerations for SharePoint 2010
        Michael Wells - Intro to SQL Server for IT Professionals
        Keith Kabza - Microsoft Lync Server 2010 Overview

    Check out the full session schedule for other sessions; if you are in the IT Pro field, you will find many sessions of interest here: http://itcampsaturday.com/tampa/2011/03/01/schedule/

    Saturday March 26th 2011 - Orlando Code Camp - http://www.orlandocodecamp.com/

    Just a highlight of a few sessions:

    Design & Animation:
        Chris G. Williams - Making Games for Windows Phone 7 with XNA 4.0
        Diane Leeper - Animating in Blend: It's ALIVE
        Diane Leeper - Design for Developers: Bad Design Kills Good Projects
        Henry Lee - Windows Phone 7 Animation
        Konrad Neumann - Being a Designer in a Developer's World
        Nikita Polyakov - Rapid Prototyping with SketchFlow in Expression Blend

    WP7:
        Henry Lee - Learn to Use Accelerometer and Location Service (GPS) in Windows Phone Application
        Joe Healy - Consuming Services in Windows Phone 7
        Kevin Wolf - Work From Anywhere = WFA (Part 1)
        Kevin Wolf - Work From Anywhere = WFA (Part 2)
        Nikita Polyakov - WP7 Marketplace Place and Monetization
        Russell Fustino - Making (More) Money with Phone 7

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Installation

    - by Stuart Brierley
    I have previously detailed the installation of MySQL, the configuration of MySQL and the installation of the ODBC Data Connector for MySQL. The reason I needed to install and configure these was to provide a test environment for a BizTalk Server 2009 solution I am working on, where BizTalk will be querying and populating a MySQL database. To do this I then needed to install and add the Community ODBC adapter from Two Connect:

    "The Community BizTalk Adapter for ODBC is based on the code that was first made available on GotDotNet a few years ago. TwoConnect has refreshed this code, added an installer, and tested it against the latest BizTalk editions. We are releasing the updates back to the BizTalk developer, user and partner community as part of our ongoing community initiatives. This is the second adapter package that TwoConnect makes available to the community after the very successful release of the BizTalk WSE 3 adapter a couple of years ago. This adapter is useful in all ODBC based integration scenarios. The following are the new features added and fixes made to the old code base on GotDotNet."

    Detailed below are the installation instructions for this adapter. Downloading and running the installer will load up the splash screen. Next you need to select the installation location for the adapter. You then need to confirm the installation, following which you will be shown the installation progress. Assuming all has gone well, you should see the installation complete screen.

    Once the installation has completed successfully, you will then need to add the adapter to your BizTalk Server. To do this, open the BizTalk Administration console, expand Platform Settings, right-click on Adapters, then select New\Adapter. You should then be able to select the ODBC adapter and choose the display name for the adapter. This adapter will then be shown in the BizTalk Administration console.

    Next I will be looking at using the ODBC Adapter when:

    Generating schemas
    Creating a receive port
    Creating a send port

    Read the article

  • Create a Shortcut To Group Policy Editor in Windows 7

    - by Mysticgeek
    If you’re a system administrator and find yourself making changes in Group Policy Editor, you might want to make a shortcut to it. Here we look at creating a shortcut, pinning it to the Taskbar, and adding it to Control Panel.

    Note: Local Group Policy Editor is not available in Home versions of Windows 7.

    Typing gpedit.msc into the search box in the Start menu to access Group Policy Editor can get old fast. To create a shortcut, right-click on the desktop and select New \ Shortcut. Next, type or copy the following path into the location field and click Next:

        c:\windows\system32\gpedit.msc

    Then give your shortcut a name…something like Group Policy, or whatever you want it to be, and click Finish. Now you have your Group Policy shortcut. If you want it on the Taskbar, just drag it there to pin it. And that’s all there is to it! If you want to change the icon, you can use one of the following guides: Customize Icons in Windows 7, or Change a File Type Icon in Windows 7.

    Add Group Policy to Control Panel

    If you’re using non-Home versions of XP, Vista, or Windows 7, check out The Geek’s article on how to Add Group Policy Editor to Control Panel.

    Read the article

  • This company buries Ashes in Space for $3000

    - by Gopinath
    Do space burials sound crazy to you? Then you may not be a big fan of science fiction, or Japanese. According to a study conducted by NASA, many science fiction fans prefer their final rites to be held in space, and you can read more details about the research over on the NASA website. The other people who fancy space burials are Japanese Buddhists.

    For those who are not aware of space burials: it’s a procedure in which a small sample of the cremated ashes of the deceased is launched into space using a spacecraft. The spacecraft will remain in orbit around the Earth or other planets for decades, eventually burning up in the atmosphere. Celestis, a US-based company, is a pioneer in the memorial spaceflight business and so far has conducted a total of 10 space burials. A few of the famous people buried in space are Gene Roddenberry (creator of Star Trek), Gerard K. O’Neill (space physicist) and Clyde Tombaugh (astronomer and discoverer of Pluto); the complete list is available on this Wikipedia page.

    In the coming months Celestis has planned a launch of its latest memorial spacecraft, and you can send your loved one’s remains for just $3000. Once they put the ashes in space they will also let you track the location of the spacecraft in orbit using a real-time feed.

    Story via BBC and cc image credit: flickr/gsfc

    Read the article

  • Google Analytics setting cookies on static content despite being on entirely separate domain

    - by Donald Jenkins
    I recently decided to comply with the YSlow recommendation that static content be hosted on a cookieless domain. As I already use the root of my domain (donaldjenkins.com) to host my website—on which Google Analytics sets a few cookies—that meant I had to move the CNAME URL for the CDN serving the static files from cdn.donaldjenkins.com to an entirely separate, dedicated domain. I purchased cdn.dj (yes, it's a real Djibouti domain name), hosted the files on the root (which contains nothing else, other than a robots.txt file) and set a CNAME of e.cdn.dj for the CDN. This setup works, but I was rather surprised to find that YSlow was still flagging the static files for not being cookie-free.

    The cdn.dj domain was new, and was never used for anything other than hosting these static files. Running HttpFox on the site shows the _utma and _utmz Google Analytics cookies are being set on the static files listed above—despite their being hosted on an entirely separate, dedicated domain. Here's my Google Analytics code:

        //Google Analytics tracking code
        var _gaq=[['_setAccount','UA-5245947-5'],['_trackPageview']];
        (function(d,t){var g=d.createElement(t),s=d.getElementsByTagName(t)[0];
        g.src=('https:'==location.protocol?'//ssl':'//www')+'.google-analytics.com/ga.js';
        s.parentNode.insertBefore(g,s)}(document,'script'));
        // [END] Google Analytics tracking code

    I'm not obsessing about this issue—I know it's not really affecting server performance—but I'd like to just understand what is causing it not to go away...
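    One quick check worth noting here (the asset URL below is hypothetical): ga.js writes the _utma/_utmz cookies client-side with document.cookie rather than via HTTP response headers, so the CDN's own responses can be inspected to confirm the server is not the one setting them:

        curl -sI http://e.cdn.dj/css/styles.css | grep -i set-cookie
        # no output expected: if nothing comes back, any cookies observed on
        # these requests are being attached by the browser, not set by the
        # CDN's responses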

    Read the article

  • Tunlr Gives Non-US Residents Access to Hulu, Netflix, and More

    - by Jason Fitzpatrick
    If you’re outside the US market and looking to enjoy US streaming services like Hulu, Netflix, and more, Tunlr is a free and simple service that will get you connected. Unlike other tools that are more expensive (both in price and in hardware/bandwidth overhead), like VPN services, Tunlr doesn’t set up a full tunnel but instead serves as an alternative DNS server that allows you to access previously blocked content. From the Tunlr FAQ:

    "Tunlr does not provide a virtual private network (VPN). Tunlr is a DNS (domain name system) unblocking service. We’re using sophisticated technologies (a.k.a. the Tunlr Secret Sauce ©) to re-address certain data envelopes, tricking the receiver into thinking the envelope originated from within the U.S. For these data envelopes, Tunlr is transparently creating a network tunnel from your location to our U.S.-based servers. Any data that’s not directly related to the video or music content providers which Tunlr supports is not only left untouched, it’s also not even routed through Tunlr."

    Hit up the link below for more information about the service, including how to set it up on various operating systems, portable devices, and gaming consoles.

    Tunlr [via gHacks]

    Read the article

  • First Step Towards Rapid Enterprise Application Deployment

    - by Antoinette O'Sullivan
    Take Oracle VM Server for x86 training as a first step towards deploying enterprise applications rapidly. You have a choice between the following instructor-led training:

    Oracle VM with Oracle VM Server for x86 1-day Seminar: Take this course from your own desk, on one of the 300 events on the schedule. This seminar tells you how to build a virtualization platform using Oracle VM Manager and Oracle VM Server for x86, and how to sustain the deployment of highly configurable, inter-connected virtual machines.

    Oracle VM Administration: Oracle VM Server for x86, a 3-day hands-on course: This course teaches you how to build a virtualization platform using Oracle VM Manager and Oracle VM Server for x86, and how to deploy and manage highly configurable, inter-connected virtual machines. The course covers installing and configuring Oracle VM Server for x86 as well as details of network and storage configuration, pool and repository creation, and virtual machine management. Take this course from your own desk, on one of the 450 events on the schedule. You can also take this course in an Oracle classroom on one of the following events:

        Location                  Date              Delivery Language
        Istanbul, Turkey          12 November 2012  Turkish
        Wellington, New Zealand   10 December 2012  English
        Roseville, United States  19 November 2012  English
        Warsaw, Poland            17 October 2012   Polish
        Paris, France             17 October 2012   French
        Paris, France             21 November 2012  French
        Dusseldorf, Germany       5 November 2012   German

    For more information on Oracle's Virtualization courses see http://oracle.com/education/vm

    Read the article

  • Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications

    - by kimberly.billings
    To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle recently introduced the Oracle Communications Data Model (OCDM). With the OCDM, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the OCDM, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Hong Kong Broadband Network, the fastest growing and second largest broadband service provider in Hong Kong, enhanced its data warehouse using Oracle Communications Data Model. It went live with OCDM within three months, and has increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent. Read more about HKBN's use of OCDM. Read more about OCDM.

    Read the article

  • What is a “pretty and proper OO” way for handling sessions and authentication?

    - by asdfqwer
    Is coupling these two concepts a bad approach? As of right now I'm delegating all session handling, and whether or not a user desires to log out, in my config.inc file. As I was writing my Auth class I started wondering whether or not my Auth class should be taking care of most of the logic in my config.inc. Regardless, I'm sure there's a more elegant way of handling this... Here is what I have in my config.inc (a large chunk of this code is based on a reply I found on SO, except I can't find the source ._.):

        ini_set('session.name', 'SID');

        # session management
        session_set_cookie_params(24*60*60); // set SID cookie lifetime
        session_start();

        if (isset($_SESSION['LOGOUT'])) {
            session_destroy();                       // destroy session data
            $_SESSION = array();                     // destroy session data sanity check
            setcookie('SID', '', time() - 24*60*60); // destroy session cookie data
            #header('Location: '.DOCROOT);
        } elseif (isset($_SESSION['SID_AUTH'])) {    // verify user has authenticated
            if (!isset($_SESSION['SID_CREATED'])) {
                $_SESSION['SID_CREATED'] = time();
            } elseif (time() - $_SESSION['SID_CREATED'] > 6*60*60) {
                // session started more than 6 hours ago
                session_regenerate_id();             // reset SID value
                $_SESSION['SID_CREATED'] = time();   // update creation time
            }

            if (isset($_SESSION['SID_MODIFIED']) &&
                (time() - $_SESSION['SID_MODIFIED'] > 12*60*60)) {
                // last request was more than 12 hours ago
                session_destroy();                       // destroy session data
                $_SESSION = array();                     // destroy session data sanity check
                setcookie('SID', '', time() - 24*60*60); // destroy session cookie data
            }

            $_SESSION['SID_MODIFIED'] = time();      // update last activity time stamp
        }

    Read the article

  • The Windows Azure Software Development Kit (SDK) and the Windows Azure Training Kit (WATK)

    - by BuckWoody
    Windows Azure is a platform that allows you to write software, run software, or use software that we've already written. We provide lots of resources to help you do that - many can be found right here in this blog series. There are two primary resources you can use, and it's important to understand what they are and what they do.

    The Windows Azure Software Development Kit (SDK)

    Actually, this isn't one resource. We have SDKs for multiple development environments, such as Visual Studio and also Eclipse, along with SDKs for iOS, Android and other environments. Windows Azure is a "back end", so almost any technology or front-end system can use it to solve a problem. The SDKs are primarily for development. In the case of Visual Studio, you'll get a runtime environment for Windows Azure which allows you to develop, test and even run code all locally - you do not have to be connected to Windows Azure at all until you're ready to deploy. You'll also get a few samples and code blocks, along with all of the libraries you need to code with Windows Azure in .NET, PHP, Ruby, Java and more. The SDK is updated frequently, so check this location to find the latest for your environment and language - just click the bar that corresponds to what you want: http://www.windowsazure.com/en-us/develop/downloads/

    The Windows Azure Training Kit (WATK)

    Whether you're writing code, using Windows Azure Virtual Machines (VMs) or working with Hadoop, you can use the WATK to get examples, code, PowerShell scripts, PowerPoint decks, training videos and much more. This should be your second download after the SDK. This is all of the training you need to get started, and even beyond. The WATK is updated frequently - you can find the latest one here: http://www.windowsazure.com/en-us/develop/net/other-resources/training-kit/

    There are many other resources - again, check the http://windowsazure.com site, the community newsletter (which introduces the latest features), and my blog for more.

    Read the article

  • PHP Aspect Oriented Design

    - by Devin Dixon
    This is a continuation of this Code Review question. What was taken away from that post, and from other discussion of aspect-oriented design, is that it is hard to debug. To counter that, I implemented the ability to turn on tracing of the design patterns. Turning trace on works like this:

        //This can be added anywhere in the code
        Run::setAdapterTrace(true);
        Run::setFilterTrace(true);
        Run::setObserverTrace(true);

        //Execute the function
        echo Run::goForARun(8);

    In the actual log with the trace turned on, the output looks like this:

        adapter 2012-02-12 21:46:19 {"type":"closure","object":"static","call_class":"\/public_html\/examples\/design\/ClosureDesigns.php","class":"Run","method":"goForARun","call_method":"goForARun","trace":"Run::goForARun","start_line":68,"end_line":70}
        filter 2012-02-12 22:05:15 {"type":"closure","event":"return","object":"static","class":"run_filter","method":"\/home\/prodigyview\/public_html\/examples\/design\/ClosureDesigns.php","trace":"Run::goForARun","start_line":51,"end_line":58}
        observer 2012-02-12 22:05:15 {"type":"closure","object":"static","class":"run_observer","method":"\/home\/prodigyview\/public_html\/public\/examples\/design\/ClosureDesigns.php","trace":"Run::goForARun","start_line":61,"end_line":63}

    When the information is broken down, the data translates to:

    Called by an adapter, filter or observer
    The function called was a closure
    The location of the closure
    The Class::method the adapter was implemented on
    The trace of where the method was called from
    Start line and end line

    The code has been proven to work in production environments and features various examples of how to implement it, so the proof of concept is there. It is not DI and accomplishes things that DI cannot. I wouldn't call the code boilerplate, but I would call it bloated. In summary, the weaknesses are bloated code and a learning curve, in exchange for aspect-oriented functionality. Beyond the normal fear of something new and different, what are the other weaknesses in this implementation of aspect-oriented design, if any?

    PS: More examples of AOP here: https://github.com/ProdigyView/ProdigyView/tree/master/examples/design

    Read the article

  • cURL works but PHP cURL fails to reach the Internet [migrated]

    - by wrk2bike
    Trying to diagnose an issue using PHP to cURL to an Internet location on a RedHat Linux server. cURL is installed and working, and:

        <?php var_dump(curl_version()); ?>

    shows all the correct information in the output. The issue is that I can use PHP to cURL to localhost on the box itself, but not to the Internet (see below). Normally I'd suspect the firewall, but I can cURL from the command line to the Internet without a problem. The box can also update its own software packages, etc. What am I missing? My test is:

        <?php
        function http_head_curl($url, $timeout = 30)
        {
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_TIMEOUT, $timeout);
            curl_setopt($ch, CURLOPT_HEADER, 1);
            curl_setopt($ch, CURLOPT_NOBODY, 1);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            $res = curl_exec($ch);
            if ($res === false) {
                throw new RuntimeException("cURL exception: ".curl_errno($ch).": ".curl_error($ch));
            }
            return trim($res);
        }

        // Succeeds, displaying headers:
        echo(http_head_curl('localhost'));

        // Fails:
        echo(http_head_curl('www.google.com'));
        ?>
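    One check commonly suggested for RedHat-family servers (an assumption here, since it only applies if the PHP script runs under Apache rather than from the CLI): SELinux blocks outbound network connections from web-server processes by default via the httpd_can_network_connect boolean, which would explain command-line cURL working while PHP under httpd fails:

        getsebool httpd_can_network_connect             # check the current value
        sudo setsebool -P httpd_can_network_connect 1   # allow httpd to make outbound connections (-P = persistent)

    If the test script fails even when run from the CLI with php, this does not apply and SELinux can be ruled out.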

    Read the article

  • ADF Mobile Client is now Generally Available!

    - by joe.huang
    ADF Mobile Client is now generally available! The press release went out this morning, and the ADF Mobile Client extensions can now be downloaded in the JDeveloper Update Center. There is also a new Oracle Mobile Computing Strategy White Paper and Data Sheet available, for a high-level overview of ADF Mobile.

    To get started with ADF Mobile Client development, please leverage the following resources:

    Oracle Technology Network ADF Mobile Landing Page: Review this page for all available resources for ADF Mobile development.
    Getting Started with ADF Mobile Client Demo: Short demo of the end-to-end development process.
    Tutorial for Mobile Application Development using ADF Mobile Client.
    ADF Mobile Client Developer Guide.
    ADF Mobile Client Samples: Available in the JDeveloper extension itself, located in the <JDeveloper Install Location>/jdev/extensions/oracle.adfnmc.core/Samples directory. Blogs will follow, describing each of the sample applications in more detail.
    Oracle Database Mobile Server: If database synchronization is needed, please follow this link to download/install Mobile Server.
    Leverage the JDeveloper Forum for any ADF Mobile related questions.

    You will need the latest (11g Patch Set 3, or 11.1.1.4.0) version of JDeveloper to use this extension. To download the ADF Mobile Client extension in JDeveloper, go to the Help menu, select “Check For Update”, and look for the ADF Mobile Client extension in the Official Oracle Extensions and Updates center. You can also directly download the extension from Oracle Technology Network. Check it out! For any issues with accessing any of the links above, please contact me directly.

    Thanks,
    Joe Huang ([email protected])

    Read the article

  • Optimizing MySQL Database Operations for Better Performance

    - by Antoinette O'Sullivan
    If you are responsible for a MySQL database, you make choices based on your priorities: cost, security and performance. To learn more about improving performance, take the MySQL Performance Tuning course. In this 4-day instructor-led course you will learn practical, safe and highly efficient ways to optimize performance for the MySQL Server. It will help you develop the skills needed to use tools for monitoring, evaluating and tuning MySQL.

    You can take this course via the following delivery methods:

    Training-on-Demand: Take this course at your own pace, starting training within 24 hours of registration.
    Live-Virtual Event: Follow a live event from your own desk; no travel required. You can choose from a selection of events to suit your timezone.
    In-Class Event: Travel to an education center to take this course. Below is a selection of events already on the schedule.

        Location                  Date              Delivery Language
        London, England           26 November 2013  English
        Toulouse, France          18 November 2013  French
        Rome, Italy               2 December 2013   Italian
        Riga, Latvia              3 March 2014      Latvian
        Jakarta Barat, Indonesia  10 December 2013  English
        Tokyo, Japan              17 April 2014     Japanese
        Pasig City, Philippines   9 December 2013   English
        Bangkok, Thailand         4 November 2013   English

    To register for this course or to learn more about the authentic MySQL curriculum, go to http://education.oracle.com/mysql. To see what an expert has to say about MySQL performance, read Dimitri's blog.

    Read the article

  • Virtualbox does not run: NS_ERROR_FAILURE

    - by dschinn1001
    This is Ubuntu 12.10, and VirtualBox is somehow not working. I was trying to install Win7 onto a USB hard disk. BOINC is switched off and the VM's RAM size is set to 4096 MB (too big? out of a possible 8 GiB). VirtualBox reports that the COM object for VirtualBox could not be created and the application then exits:

        Start tag expected, '<' not found.
        Location: '/home/$user/.VirtualBox/VirtualBox.xml', line 1 (0), column 1.
        /build/buildd/virtualbox-4.1.18-dfsg/src/VBox/Main/src-server/VirtualBoxImpl.cpp[484] (nsresult VirtualBox::init()).
        Error code: NS_ERROR_FAILURE (0x80004005)
        Component: VirtualBox
        Interface: IVirtualBox {c28be65f-1a8f-43b4-81f1-eb60cb516e66}

    My comments: why is VirtualBox putting its XML into the $user home folder, in .VirtualBox? Should it not be on the USB hard disk (with 500 GiB)? The first installation attempt broke off (with Win7 in 64-bit). Should I try VirtualBox (Ubuntu 64-bit) with Win7 in 32-bit? Should I leave the VM's RAM size at the default 512 MB? Thanks for any reply.
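    For what it's worth, the parser error ("Start tag expected" at line 1, column 1) suggests ~/.VirtualBox/VirtualBox.xml is empty or truncated rather than VirtualBox itself being broken. A minimal recovery sketch, assuming the VirtualBox.xml-prev backup that VirtualBox normally keeps alongside it is present:

        cd ~/.VirtualBox
        ls -l VirtualBox.xml VirtualBox.xml-prev   # an empty VirtualBox.xml confirms the corruption
        mv VirtualBox.xml VirtualBox.xml.broken    # keep the damaged file, just in case
        cp VirtualBox.xml-prev VirtualBox.xml      # restore the previous known-good copy

    If no -prev file exists, moving VirtualBox.xml aside still lets VirtualBox start with a fresh default configuration, though existing machines may need to be re-registered.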

    Read the article

  • Unleash AutoVue on Your Unmanaged Data

    - by [email protected]
    Over the years, I've spoken to hundreds of customers who use AutoVue to collaborate on their "managed" data stored in content management systems, product lifecycle management systems, etc. via our many integrations. Through these conversations I've also learned a harsh reality - we will never fully move away from unmanaged data (desktops, file servers, emails, etc.).

    If you use AutoVue today you already know that even if your primary use is viewing content stored in a content management system, you can still open files stored locally on your computer. But did you know that AutoVue actually has - built in - a great solution for viewing, printing and redlining your data stored on file servers? Using the 'server protocol' you can point AutoVue directly to a top-level location on any networked file server and provide your users with a link or shortcut to access an interface similar to the sample page shown below. Many customers link to pages just like this one from their internal company intranets. Through this webpage, users can easily search and browse through file server data with a 'click-and-view' interface to find the specific image, document, drawing or model they're looking for. Any markups created on a document will be accessible to everyone else viewing that document, and of course real-time collaboration is supported as well.

    Customers on maintenance can consult the AutoVue Admin guide or My Oracle Support Doc ID 753018.1 for an introduction to the server protocol. Contact your local AutoVue Solutions Consultant for help setting up the sample shown above.

    Read the article

  • Desktop icons disappear when Nautilus is launched, until next boot

    - by Santosh
    What happens: when I log into my Ubuntu everything is normal; I have some icons on my desktop and I can see them at this point. As soon as I click on the file explorer (Nautilus) in the Launcher bar, everything disappears and never comes back. No matter how many times you click on the launcher, you can't open Nautilus. I tried opening Nautilus from the terminal and get the following:

        santosh@santosh:~$ nautilus
        Initializing nautilus-gdu extension
        ** (nautilus:2158): DEBUG: SyncDaemon already running, initializing SyncdaemonDaemon object
        Initializing nautilus-open-terminal extension
        (nautilus:2158): Gtk-CRITICAL **: gtk_action_set_visible: assertion `GTK_IS_ACTION (action)' failed
        (nautilus:2158): Gtk-CRITICAL **: gtk_action_set_visible: assertion `GTK_IS_ACTION (action)' failed
        Segmentation fault

    I am suspicious of the "Segmentation fault" on the last line - what is that? I was amazed when I ran this command with sudo:

        santosh@santosh:~$ sudo nautilus
        Initializing nautilus-gdu extension
        ** (nautilus:2216): DEBUG: Syncdaemon not running, waiting for it to start in NameOwnerChanged
        Initializing nautilus-open-terminal extension
        (nautilus:2216): Gtk-CRITICAL **: gtk_action_set_visible: assertion `GTK_IS_ACTION (action)' failed
        (nautilus:2216): Gtk-CRITICAL **: gtk_action_set_visible: assertion `GTK_IS_ACTION (action)' failed
        Nautilus-Share-Message: Called "net usershare info" but it failed: 'net usershare' returned error 255: net usershare: cannot open usershare directory /var/lib/samba/usershares. Error No such file or directory
        Please ask your system administrator to enable user sharing.

    As soon as I type sudo nautilus and hit enter, Nautilus starts and the desktop background changes (to the default which Ubuntu has). I don't know why, but at this point, as soon as I click on Desktop (from the left pane), Nautilus closes. Does anyone have the same issue? I am currently working from the command line to do my work and it's a big pain.

    Additional information: another thing I have noticed is that when I want to open the location of a PDF file I am reading in Document Viewer (by clicking the folder icon), it gives the errors "Could not open the containing folder" and "Failed to execute child process "nautilus" (Permission denied)". Any idea?
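    One generic first step worth sketching (a hedged suggestion, not a confirmed fix: it only resets Nautilus's saved per-user state, which crashes of this kind sometimes come from):

        nautilus -q                                     # ask any running Nautilus instance to quit
        mv ~/.config/nautilus ~/.config/nautilus.bak    # set the saved state aside (reversible)
        nautilus                                        # relaunch and see whether the segfault persists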

    Read the article

  • Start daemon after specific samba share is mounted

    - by getack
    I have a homebrew headless NAS running 12.04. In it I have a bunch of disks that are presented as a Samba share thanks to Greyhole. If I want to do anything to the files within this share, I must do it through Greyhole so that everything is updated properly. Thus, the share must be mounted locally and then accessed from there if I want to work on the files from the local machine. I do this mounting automatically thanks to these instructions.

    I also have Deluge installed, which takes care of all my torrenting needs. Deluge's default download location is in this share, so that all the downloads are immediately available to the rest of the network. Obviously, for everything to work the share must be mounted; otherwise Deluge is going to have a problem downloading to it.

    The problem is, it seems like Deluge is starting before the shares are mounted when the system boots, so downloading/seeding does not continue automatically after boot. I have to log in and force a manual rescan and start on each torrent, otherwise all the torrents just hang on the error. Is there a way I can make Deluge start after the shares get properly mounted? I looked into Upstart's emits functionality but I cannot seem to get it to work properly. Any advice?
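    One direction worth sketching, with caveats: on 12.04, Upstart's mountall emits a mounted event for each mount point, so a custom job can defer the Deluge daemon until a specific path appears. Everything below is hypothetical (the mount point, job name and user account are invented) and assumes the headless deluged daemon is used rather than the desktop client:

        # /etc/init/deluged-after-share.conf (hypothetical job; adjust paths and user)
        description "Start deluged once the Greyhole Samba share is mounted"

        # fire only after the share's local mount point shows up
        start on mounted MOUNTPOINT=/mnt/greyhole
        stop on runlevel [!2345]

        setuid nasuser             # hypothetical account deluged should run as
        exec /usr/bin/deluged -d   # -d keeps deluged in the foreground so Upstart can track it

    If Deluge currently autostarts some other way (for example a desktop session entry or an init.d script), that autostart would need to be disabled so only this job launches it.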

    Read the article
