Search Results

  • DIY HDTV Antenna Sticks To Your Window without Blocking the View

    - by Jason Fitzpatrick
    This DIY fractal-based HDTV antenna is cheap, easy to craft, and can be stuck unobtrusively on your window for better signal gain. Courtesy of HTPC-DIY, this simple build uses aluminum foil, a printed fractal pattern, clear plastic, and some basic hardware to create a lightweight, transparent antenna you can affix to a window without significantly blocking the light coming through. Hit up the link below for the full build details as well as designs for other DIY antennas. DIY Flexible Fractal Window HDTV Antenna [via Hack A Day]

  • Bringing in New Architecture During Maintenance on Legacy Systems

    - by Mike L.
    I have been tasked with adding some new features to a legacy ASP.NET MVC2 project. The codebase is a disaster, and I want to write these new features with some thought behind the implementation rather than just throwing them into the mess. I would like to introduce things like dependency injection and the orchestrator pattern, but only in the code that I am going to write; I don't have enough time to refactor the entire system. Is it OK to be inconsistent with the rest of the codebase and add new features following different design principles? Or should I skip the new patterns and just get the features implemented? I worry it might be confusing for the next person to see parts of the system using a design that other parts are not following.

  • Oversizing images to produce better looking pages?

    - by Joannes Vermorel
    In the past, improper image resizing was a big no-no of web design (not to mention improper compression formats). Hence, for years I stuck to the policy of resizing images (PNG or JPG) on the server to match, pixel for pixel, the resolution they would have in the rendered page. Recently, though, I hastily designed an HTML draft with oversized images, using inline CSS such as width:123px and height:123px to scale them down. To my (slight) surprise, the page looked much better that way. Indeed, with today's higher screen resolutions, some people (like me) tend to browse with some level of zoom (125% or even 150%), otherwise fonts are just too small on screen. If the image is strictly sized, the enlarged image appears blurry (a pixel-interpolation effect), but if the image is oversized the result is much sharper. Obviously, oversizing images is not an acceptable pattern if your website is intended for mobile browsing, but are there cases where it would be considered acceptable, especially if the extra page weight is small anyway?

  • Google Authorship: can I display:none for link to profile?

    - by RubenGeert
    I'd like to have my 'mugshot' in Google's SERPs, but I couldn't care less about Google+, and I don't really want to link my website to it either. Can I use CSS display:none; on the link leading to my profile and still get authorship? The link looks like <a href='https://plus.google.com/111823012258578917399?rel=author' rel='nofollow'>Google</a>. Will the nofollow attribute here spoil things? I don't want to leak 'link juice' to Google+ if I don't have to. Google should crawl only the HTML, but I'm sure they'll figure out that the link is not visible (technically it's perhaps even cloaking). Does anybody have experience with this situation? And do I really have to become (reasonably) active on Google+ for authorship to show? This answer suggests I do, but I didn't read anything about that in Google's guidelines.

  • Source Browsing in Firefox

    - by lavanyadeepak
    I just casually observed this a few minutes ago in Mozilla Firefox 3.6.3. When you view the source of a page in Internet Explorer, it renders as plain, inert HTML. In Firefox, however, the hyperlinks are shown clickable and active. When you click on a hyperlink, the obvious and expected behavior would be for the target page to open in a new tab in the parent browser. Instead, the View Source window refreshes with the HTML source of the new page. In a way, this Firefox gesture feels like a journey back to text browsing with Lynx.

  • One-to-many problem with implementing 301 redirects after changed URLs

    - by user16136
    I have a problem. I had an old dynamic URL scheme which I have now split into multiple static URLs: e.g. www.mydomain.com/product.php?type=1&id=2, www.mydomain.com/product.php?type=2&id=3, www.mydomain.com/product.php?type=2&id=4, etc., which I have changed to something like www.mydomain.com/electronics/radio, www.mydomain.com/electronics/television, www.mydomain.com/mobile/smartphone, etc. Google has previously indexed the dynamic URLs and search results show the old URLs, but I want search to point to the new ones. I have kept the old URLs active, so both versions currently work. How can I set up a 301 redirect in this case? I run IIS, and it only lets me redirect a page to a single URL. Should I deactivate the old dynamic URLs? In that case I lose all the previous SEO rankings.
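
    One approach that fits this one-to-many case is a rewrite map in the IIS URL Rewrite module (a separate download for IIS 7). A sketch for web.config, assuming that module is available and reusing the example URLs above; the map name and entries are placeholders to extend with the real URL inventory:

        <system.webServer>
          <rewrite>
            <rewriteMaps>
              <!-- key: old dynamic URL (path plus query string); value: new static path -->
              <rewriteMap name="ProductRedirects">
                <add key="/product.php?type=1&amp;id=2" value="/electronics/radio" />
                <add key="/product.php?type=2&amp;id=3" value="/electronics/television" />
                <add key="/product.php?type=2&amp;id=4" value="/mobile/smartphone" />
              </rewriteMap>
            </rewriteMaps>
            <rules>
              <rule name="Redirect old product URLs" stopProcessing="true">
                <match url=".*" />
                <conditions>
                  <!-- {REQUEST_URI} includes the query string, so it works as the map key -->
                  <add input="{ProductRedirects:{REQUEST_URI}}" pattern="(.+)" />
                </conditions>
                <action type="Redirect" url="{C:1}" appendQueryString="false" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    With permanent (301) redirects in place there is no need to deactivate the old URLs: they keep answering, but tell both visitors and Google to move to the new addresses, which is what transfers the existing rankings.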

  • How do I start working as a programmer - what do I need?

    - by giorgo
    I am currently learning Java and PHP, as I have some university projects that require me to apply both languages: a Java GUI application connecting to a MySQL database, and a web application to be implemented in PHP/MySQL. I have started learning the MVC pattern, Struts and Spring, and I am also learning PHP with Zend. My first question is: how can I find employment as a programmer/software engineer? I ask because I have sent my CV to many companies, but all of them stated that I required work experience. I really need some guidance on how to improve my career opportunities. At present I work on my own and haven't collaborated with anyone on a particular project; I assume most people create projects and submit them along with their CVs. My second question is: everyone has to start somewhere, but what if this somewhere doesn't come? What do I need to do to create the circumstances where I can easily progress forward? Thanks

  • Lean/Kanban *Inside* Software (i.e. WIP-Limits, Reducing Queues and Pull as Programming Techniques)

    - by Christoph
    Thinking about Kanban, I realized that the queuing theory behind the software development methodology obviously also applies to concurrent software, and now I'm looking for places where this kind of thinking is explicitly applied. A simple example: we usually want to limit the number of threads to avoid cache thrashing (WIP limits). In the paper about the disruptor pattern [1], one statement I found interesting was that producers and consumers are rarely balanced, so when using queues, either consumers wait (queues are empty), or producers produce more than is consumed, resulting in either a full capacity-constrained queue or an unconstrained one blowing up and eating memory. Both, in lean-speak, are waste, and increase lead time. Does anybody have examples of WIP limits, reducing/eliminating queues, pull, or single-piece flow being applied in programming? [1] http://disruptor.googlecode.com/files/Disruptor-1.0.pdf
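
    A capacity-constrained queue is the most direct translation of a WIP limit into code. A minimal Python sketch (the limit of 4 and the timings are arbitrary): when the consumer falls behind, put() blocks and the producer is forced to slow down - backpressure instead of an unbounded queue eating memory.

        import queue
        import threading
        import time

        WIP_LIMIT = 4
        jobs = queue.Queue(maxsize=WIP_LIMIT)  # bounded queue = explicit WIP limit

        def consumer():
            while True:
                item = jobs.get()
                time.sleep(0.05)       # simulate a consumer slower than the producer
                jobs.task_done()

        threading.Thread(target=consumer, daemon=True).start()

        for i in range(20):
            jobs.put(i)                # blocks once WIP_LIMIT items are in flight
            print("queued", i)

        jobs.join()                    # wait for the queue to drain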

  • Amazon Kindle e-Ink based device programming: Java ME CDC old school

    - by hinkmond
    If you like doing Amazon Kindle development the old-school way (Java ME CDC-based apps) on their e-Ink based readers, then here's how to download and use the Amazon Kindle Development Kit (KDK). See: Download Amazon KDK. Here's a quote: "We're excited to introduce the all-new Kindle family: Kindle, Kindle Touch, and [blah-blah]. The KDK has APIs, tools, and documentation to help you create active content for Kindle, Kindle Touch, and other E Ink Kindles." Kickin' it old school with Java ME CDC technology is the way to go. You could come up with the next Words with Friends this way. Hinkmond

  • South African MVPs deserve their title.

    - by MarkPearl
    Recently I read a post by someone who felt the Microsoft MVP program had failed. My local experience with the MVP program makes me disagree. On Saturday I attended a free Windows Phone 7 event organized by Robert MacLean and Rudi Grobler, both of whom are local MVPs. First of all, kudos to them for organizing the event, which included a free lunch and a flash stick and had some great content for a free event. Secondly, this is not the first time that either of these two MVPs has organized events. They are active in the community, present at the majority of local events, and always approachable and ready to give an "honest" opinion. For me, that is what an MVP stands for, and at least in my region I feel the MVP program is a real success.

  • How to clear a Firefox address bar without selecting its content?

    - by zuba
    Sometimes I need to move a URL from an app to a browser. I select the URL, say in gvim, and make the Firefox window active. Then I see that I should clear the address bar before pasting the new URL, but that requires selecting the existing URL, which wipes the new URL out of the PRIMARY selection. What is the best way to put the new URL from the PRIMARY selection into the address bar? Is there a shortcut to clear the address bar and then move focus there? P.S. I know I can use Ctrl-C to put the new URL into the CLIPBOARD selection, but I prefer to use PRIMARY.

  • Google Maps API: Premier License or excess map loads?

    - by j0nes
    I am currently looking for a way to deal with the Google Maps API usage limits. I am planning a redesign of our page that will probably generate around 2 million map loads per month, which will surely break the limit of 750,000 map loads per month available in the free version. If we pay for excess map loads, that means around $5,000 per month (the 1.25 million loads above the free quota, billed at $4 per 1,000 excess loads). The other option would be a Premier license, but there is very little information available on its usage limits and price. I have filled in the request form to get a custom offer from Google, but I have not received any response yet. Can any of the Premier license holders tell me which option will be cheaper for my usage pattern: paying for a Premier license or paying for excess map loads?

  • Confused about nova-network

    - by neo0
    I'm sorry, because this question isn't really related to Ubuntu. I asked in the Openstack forum, but that forum is not very active, so I'm hoping someone with experience of Openstack Nova can help me with my problem. I've read some explanations of nova-network and how to configure it, like this one from the wiki: http://wiki.openstack.org/UnderstandingFlatNetworking. I'm confused about one detail: if all traffic from the instances must go through the nova controller node, then why do we still need the public interface on the nova-compute node? Is it necessary? And what happens when a request comes from outside to an instance? For example, I have a controller node and a nova-compute node, and on the compute node I run an instance with a Wordpress website. Then someone connects to the public IP of this instance. Does the request go directly from the router to the nova-compute node, or from the router to the controller node and then on to the nova-compute node? Thank you!

  • Python Multiprocessing with Queue vs ZeroMQ IPC

    - by Imraan
    I am busy writing a Python application using ZeroMQ, implementing a variation of the Majordomo pattern as described in the ZGuide. I have a broker as an intermediary between a set of workers and clients. I want to do extensive logging for every request that comes in, but I do not want the broker to waste time doing it; the broker should pass the logging work to something else. I have thought of two ways: 1) create workers that are only for logging, using the ZeroMQ IPC transport; 2) use multiprocessing with a Queue. I am not sure which one is better, or faster for that matter. The first option lets me reuse the worker base classes I already use for normal workers, but the second option seems quicker to implement. I would like some advice or comments on the above, or possibly a different solution.
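
    For comparison, here is a minimal sketch of the second option (names and the log record format are illustrative): a dedicated logging process drains a multiprocessing.Queue, so the broker's request loop only pays for an enqueue. The ZeroMQ variant has the same shape, with a PUSH socket in the broker and a PULL socket in the logging worker over the ipc:// transport.

        import logging
        import multiprocessing as mp

        def log_worker(q):
            # Dedicated process: all slow log I/O happens here, not in the broker.
            logging.basicConfig(filename="broker.log", level=logging.INFO,
                                format="%(asctime)s %(message)s")
            while True:
                record = q.get()
                if record is None:      # sentinel tells the worker to shut down
                    break
                logging.info(record)

        if __name__ == "__main__":
            q = mp.Queue()
            worker = mp.Process(target=log_worker, args=(q,))
            worker.start()
            # inside the broker's request loop, logging is just a cheap enqueue:
            q.put("request: client=42 service=echo bytes=128")
            q.put(None)
            worker.join()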

  • Is "watermarking" code with random trailing whitespace a good way to detect plagiarism?

    - by paperjam
    Consider this:

        int f(int x) {
            return 2 * x * x;
        }

    and this:

        int squareAndDouble(int y) {
            return 2*y*y;
        }

    If you found these in independent bodies of code, you might give the two programmers the benefit of the doubt and assume they came up with more-or-less the same function independently. But look at the whitespace at the end of each line of code: the same pattern in both. Surely evidence of copying. On a larger piece of code, correlation of random whitespace at line ends would be irrefutable evidence of a shared origin. Now, aside from the obvious weaknesses (visible or obvious in some editors, easily removed), I was wondering if it is worth deploying something like this in my open source project. My industry has a history of companies ripping off open source projects.
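
    To make the idea concrete, detection could be as simple as comparing per-line trailing-whitespace lengths between two files. A sketch (assuming the files still line up line for line, which any reformatting would of course break; the file names are illustrative):

        def ws_fingerprint(path):
            # trailing-whitespace length per line: the hidden 'watermark'
            with open(path) as f:
                return [len(line.rstrip("\n")) - len(line.rstrip())
                        for line in f]

        def match_score(path_a, path_b):
            pairs = zip(ws_fingerprint(path_a), ws_fingerprint(path_b))
            marked = [(x, y) for x, y in pairs if x > 0]   # lines carrying a mark
            if not marked:
                return 0.0
            hits = sum(1 for x, y in marked if x == y)     # same mark on both sides
            return hits / len(marked)

        # match_score("mine.c", "suspect.c") close to 1.0 across many marked
        # lines would be hard to explain as coincidence.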

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems to overcome first; this post will concentrate on query execution time.

    When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register was taking upwards of 30 minutes on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required.

    As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

      | Id  | Operation                     | Name                   | Bytes | Cost |
      |   0 | SELECT STATEMENT              |                        |  108K |  939 |
      |   1 |  SORT ORDER BY                |                        |  108K |  939 |
      |   2 |   NESTED LOOPS OUTER          |                        |  108K |  938 |
      |*  3 |    HASH JOIN RIGHT OUTER      |                        |  103K |  762 |
      |   4 |     VIEW                      | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
      |* 20 |     HASH JOIN RIGHT OUTER     |                        | 73472 |  759 |
      |  21 |      VIEW                     | ALL_EXTERNAL_TABLES    |  2097 |    3 |
      |* 34 |      HASH JOIN RIGHT OUTER    |                        | 39920 |  755 |
      |  35 |       VIEW                    | ALL_MVIEWS             |    51 |    7 |
      |  58 |       NESTED LOOPS OUTER      |                        | 39104 |  748 |
      |  59 |        VIEW                   | ALL_TABLES             |  6704 |  668 |
      |  89 |        VIEW PUSHED PREDICATE  | ALL_TAB_COMMENTS       |  2025 |    5 |
      | 106 |    VIEW                       | ALL_PART_TABLES        |   277 |   11 |

    And the same query on 9i:

      | Id  | Operation                   | Name                   | Bytes | Cost |
      |   0 | SELECT STATEMENT            |                        |   16P |  55G |
      |   1 |  SORT ORDER BY              |                        |   16P |  55G |
      |   2 |   NESTED LOOPS OUTER        |                        |   16P | 862M |
      |   3 |    NESTED LOOPS OUTER       |                        | 5251G | 992K |
      |   4 |     NESTED LOOPS OUTER      |                        | 4243M | 2578 |
      |   5 |      NESTED LOOPS OUTER     |                        | 2669K | 1440 |
      |*  6 |       HASH JOIN OUTER       |                        |  398K |  302 |
      |   7 |        VIEW                 | ALL_TABLES             |  342K |  276 |
      |  29 |        VIEW                 | ALL_MVIEWS             |    51 |   20 |
      |* 50 |      VIEW PUSHED PREDICATE  | ALL_TAB_COMMENTS       |  2043 |      |
      |* 66 |     VIEW PUSHED PREDICATE   | ALL_EXTERNAL_TABLES    | 1777K |      |
      |* 80 |    VIEW PUSHED PREDICATE    | ALL_EXTERNAL_LOCATIONS | 1744K |      |
      |* 96 |   VIEW                      | ALL_PART_TABLES        |  852K |      |

    Have a look at the cost column. 10g's overall query cost is 939; 9i's is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects; from 10g, it advised that you do. For our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.
    On 9i, no statistics are present on the system tables, so Oracle has to use the rule-based optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

      | Id  | Operation             | Name                   | Bytes | Cost |
      |   0 | SELECT STATEMENT      |                        | 7587K | 3704 |
      |   1 |  SORT ORDER BY        |                        | 7587K | 3704 |
      |*  2 |   HASH JOIN OUTER     |                        | 7587K |  822 |
      |*  3 |    HASH JOIN OUTER    |                        | 5262K |  616 |
      |*  4 |     HASH JOIN OUTER   |                        | 2980K |  465 |
      |*  5 |      HASH JOIN OUTER  |                        |  710K |  432 |
      |*  6 |       HASH JOIN OUTER |                        |  398K |  302 |
      |   7 |        VIEW           | ALL_TABLES             |  342K |  276 |
      |  29 |        VIEW           | ALL_MVIEWS             |    51 |   20 |
      |  50 |       VIEW            | ALL_PART_TABLES        |  852K |  104 |
      |  78 |      VIEW             | ALL_TAB_COMMENTS       |  2043 |   14 |
      |  93 |     VIEW              | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
      | 106 |    VIEW               | ALL_EXTERNAL_TABLES    | 1777K |   28 |

    That's much more like it. This drops the execution time down to 24 seconds - not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method, the right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates the problem on our test systems, there's no guarantee it will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we wanted a solution that provides a speedup whatever the input.

    To get some ideas, we asked some Oracle performance specialists whether they had any tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure and a rewrite of the population code, it would have meant that any users of 9i would have to spend time optimizing the queries for their system before they could use the product. Another approach was needed.

    All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to subsidiary tables that fill in the gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner and name), so we could reuse the hash information for each join, rather than re-hashing the same columns for every join. To allow this, along with various other performance improvements specific to the query pattern we were using, we read all the tables individually and do the hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be optimized very well for expected real-world situations; as well as storing on disk the row data we're not using in the hash key, we use very specific memory-efficient data structures to store all the information we need.
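
    Stripped of the storage details, the client-side join described above is a plain left outer hash join: build the hash table on the subsidiary rows once, keyed on owner and name, then stream the base rows past it. A simplified Python sketch with made-up rows (the real implementation keys every subsidiary view on the same columns, so the hashing work is done once per side rather than once per join):

        from collections import defaultdict

        def left_outer_hash_join(base_rows, extra_rows, key):
            table = defaultdict(list)
            for row in extra_rows:           # hash the (smaller) subsidiary side once
                table[key(row)].append(row)
            for row in base_rows:            # probe with each base row
                matches = table.get(key(row))
                if matches:
                    for match in matches:
                        yield row, match
                else:
                    yield row, None          # left outer: keep unmatched base rows

        owner_name = lambda row: (row[0], row[1])
        all_tables = [("SCOTT", "EMP"), ("SCOTT", "DEPT")]
        all_part_tables = [("SCOTT", "EMP", "RANGE")]
        for base, part in left_outer_hash_join(all_tables, all_part_tables, owner_name):
            print(base, part)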
    This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for a schema with 10,000 tables, that means an extra 1.4MB of memory used during population). Next: fun with the 9i dictionary views.

  • Where can a list of desktop web browsers be found?

    - by Sn3akyP3t3
    I have another question posted regarding the practicality of whitelisting. Here I'm simply looking for a frequently updated list of the most-used desktop web browsers to use as part of my whitelist. I'm not trying to target any specific OS, so please show one, show all. The list of desktop browsers isn't exploding, but it does grow: I've only recently become aware of browsers that have multiple rendering engines, and I'm not always on top of the text-based browsers out there either. I'm aware of the mobile browser platform, and there is an actively maintained list, used with regular expressions for identification purposes, that I will use, along with whatever I can find for the desktop platforms.

  • Reach More Customers - Partner Event Publishing System Now Available!

    - by swalker
    The Partner Event Publishing Service is now available to partners that have completed at least one Specialization. If you are an active Gold, Platinum or Diamond OPN member with at least one Specialization, you can start publishing your standalone live events today, including both in-person and webinar activities. Events are published both on Oracle.com/events and on Oracle's internal calendar, so you benefit by reaching new potential customers and creating more brand awareness. Reach more customers by publishing Partner-led events on Oracle.com/events: fill out a simple spreadsheet and submit it to [email protected]. Click here for more information, and click here to download the Partner Event Publishing Form spreadsheet.

  • How can I test a parser for a bespoke XML schema?

    - by Greg B
    I'm parsing a bespoke XML format into an object graph using .NET 4.0. My parser uses the System.XML namespace internally; I then interrogate the relevant properties of XmlNodes to create my object graph. I've got a first cut of the parser working on a basic input file, and I want to put some unit tests around it before I progress to more complex input files. Is there a pattern for how to test a parser such as this? When I started looking at this, my first move was to new up an XmlDocument and an XmlNamespaceManager and create an XmlElement, but it occurs to me that this is quite lengthy and prone to human error. My parser is quite recursive, as you can imagine, and this might lead to testing the full system rather than the individual units (methods) of the system. So a second question might be: what refactoring might make a recursive parser more testable?
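
    One pattern that tends to work here is table-driven tests over minimal XML fragments, one per parsing unit, rather than whole documents: each case feeds one element to one handler, so the recursion is covered compositionally. A sketch of the shape in Python for brevity (parse_widget is a hypothetical stand-in for one of your node handlers; the same structure maps onto NUnit/xUnit test cases in .NET 4.0):

        import unittest
        import xml.etree.ElementTree as ET

        def parse_widget(elem):
            # stand-in for a single unit of the recursive parser:
            # one element type in, one node of the object graph out
            return {"name": elem.get("name"),
                    "size": int(elem.findtext("size", default="0"))}

        class ParseWidgetTests(unittest.TestCase):
            CASES = [
                ('<widget name="a"><size>3</size></widget>', {"name": "a", "size": 3}),
                ('<widget name="b"/>',                       {"name": "b", "size": 0}),
            ]

            def test_fragments(self):
                for xml, expected in self.CASES:
                    self.assertEqual(parse_widget(ET.fromstring(xml)), expected)

        if __name__ == "__main__":
            unittest.main()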

  • Oracle Communications Calendar Server: Upgrading to Version 7 Update 3

    - by joesciallo
    It's been some time since I have posted an entry. Now, with the release of Oracle Communications Calendar Server 7 Update 3, it seems high time to jump start this blog again. To begin with, check out what's new in this release:

    - Authenticating Against an External Directory
    - Booking Window for Calendars
    - Changes to the davadmin Command
    - Enable and Disable Account Autocreation
    - LDAP Pools
    - New Configuration Parameters
    - New Languages
    - New populate-davuniqueid Utility
    - New Schema Objects
    - Non-active Calendar Accounts Are No Longer Searched or Fetched
    - Remote Document Store Authentication

    The upgrade is a bit more complicated than normal, as you must first apply some new schema elements to your Directory Server(s). To do so, you need to get the comm_dssetup 6.4 patch, patch the comm_dssetup script, and then run the patched comm_dssetup against your Directory Server instance(s). In addition, if you are using the nsUniqueId attribute as your deployment's unique identifier, you'll want to change it to the new davUniqueId attribute. Consult the Upgrade Procedure for details, as well as DaBrain's blog, before forging ahead with this upgrade. Additional quick links:

    - Problems Fixed in This Release
    - Known Issues
    - Calendar Server Unique Identifier
    - Changes to the davadmin command
    - Get the Calendar Server patch
    - Get the comm_dssetup patch

  • Hybrid IT or Cloud Initiative – a Perfect Enterprise Architecture Maturation Opportunity

    - by Ted McLaughlan
    All too often in the growth and maturation of Enterprise Architecture initiatives, the effort stalls or is delayed due to lack of "applied traction". By this, I mean the EA activities - whether targeted towards compliance, risk mitigation or value opportunity propositions – may not be attached to measurable, active, visible projects that could advance and prove the value of EA. EA doesn't work by itself, in a vacuum, without collaborative engagement and a means of proving usefulness. A critical vehicle to this proof is successful orchestration and use of assets and investment resources to meet a high-profile business objective – i.e. a successful project. More and more organizations are now exploring and considering some degree of IT outsourcing, buying and using external services and solutions to deliver their IT and business requirements – vs. building and operating in-house, in their own data centers. The rapid growth and success of "Cloud" services makes some decisions easier and some IT projects more successful, while dramatically lowering IT risks and enabling rapid growth. This is particularly true for "Software as a Service" (SaaS) applications, which essentially are complete web applications hosted and delivered over the Internet. Whether SaaS solutions – or any kind of cloud solution - are actually, ultimately the most cost-effective approach truly depends on the organization's business and IT investment strategy. This leads us to Enterprise Architecture, the connectivity between business strategy and investment objectives, and the capabilities purchased or created to meet them. If an EA framework already exists, the approach to selecting a cloud-based solution and integrating it with internal IT systems (i.e. a "Hybrid IT" solution) is well-served by leveraging EA methods. If an EA framework doesn't exist, or is simply not mature enough to address complex, integrated IT objectives – a hybrid IT/cloud initiative is the perfect project to advance and prove the value of EA. Why is this? For starters, the success of any complex IT integration project - spanning multiple systems, contracts and organizations, public and private – depends on active collaboration and coordination among the project stakeholders. For a hybrid IT initiative, inclusive of one or more cloud services providers, the IT services, business workflow and data governance challenges alone can be extremely complex, requiring many diverse layers of organizational expertise and authority. Establishing subject matter expertise, authorities and strategic guidance across all the disciplines involved in a hybrid-IT or hybrid-cloud system requires top-level, comprehensive experience and collaborative leadership.
Tools and practices reflecting industry expertise and EA alignment can also be very helpful – such as Oracle’s “Cloud Candidate Selection Tool”. Using tools like this, and facilitating this critical collaboration by leading, organizing and coordinating the input and expertise into a shared, referenceable, reusable set of authority models and practices – this is where EA shines, and where Enterprise Architects can be most valuable. The “enterprise”, in this case, becomes something greater than the core organization – it includes internal systems, public cloud services, 3rd-party IT platforms and datacenters, distributed users and devices; a whole greater than the sum of its parts. Through facilitated project collaboration, leading to identification or creation of solid governance models and processes, a durable and useful Enterprise Architecture framework will usually emerge by itself, if not actually identified and managed as such. The transition from planning collaboration to actual coordination, where the program plan, schedule and resources become synchronized and aligned to other investments in the organization portfolio, is where EA methods and artifacts appear and become most useful. The actual scope and use of these artifacts, in the context of this project, can then set the stage for the most desirable, helpful and pragmatic form of the now-maturing EA framework and community of practice. Considering or starting a hybrid-IT or hybrid-cloud initiative? Running into some complex relationship challenges? This is the perfect time to take advantage of your new, growing or possibly latent Enterprise Architecture practice.

  • Synergy - easy share of keyboard and mouse between multiple computers

    Did you ever have the urge to share one set of keyboard and mouse between multiple machines? If so, please read on...

    Using multiple machines

    Honestly, as a software craftsman it is my daily business to run multiple machines - either physical or virtual - to solve my customers' requirements. Recent hardware makes this very easy. With a laptop it's a no-brainer to attach a second or even a third screen to extend your native display, and in my case I used to attach two additional screens - one via the HD15 connector, the other via HDMI. But... as it's a laptop, and therefore a mobile unit, there are slight restrictions: detaching and re-attaching all the cables when changing location is one of them, hardware limitations another. After all, it's a laptop and not a workstation. I guess anyone working in IT (or ICT) has more than one machine at their workplace or home office, and at least I find it quite annoying to have multiple sets of keyboard and mouse conquering the remaining space on my desk. Despite the ugly looks of all those cables and the general 'chaos of distraction', I prefer a cleaner working environment; it allows me to actually focus on my work and tasks rather than worry about choosing the right combination of keyboard and mouse. My current workplace is a patchwork of various pieces of hardware (approx. 2-3 years old):

    - DIY desktop on Ubuntu 12.04 64-bit, Core2 Duo (E7400, 2.8GHz), 4GB RAM, 2x 250GB HDD, nVidia GPU 512MB
    - Dell Inspiron 1525 on Windows 8 64-bit, 4GB RAM, 200GB HDD
    - HP Compaq 6720s on Windows Vista 32-bit, Core2 Duo (T5670, 1.8GHz), 2GB RAM, 160GB HDD
    - Mac mini on Mac OS X 10.7, Core i5 (2.3 GHz), 2GB RAM, 500GB HDD

    I know... not the latest and greatest, but a decent combination to work with. New systems are already on the shopping list, but I live in the 'wrong' country to buy computer hardware, so the next trip abroad will provide me with some new stuff.

    Using multiple operating systems

    The list of hardware above already names different operating systems, and actually I have only one preference: Linux. But my job as a software craftsman for Visual FoxPro and .NET development requires other OSes, too. Not a big deal, it's just like this. In addition to those physical machines, there are a bunch of virtual machines around, most of them running either Windows XP or Windows 7. For years I have had the practice that development for each customer is isolated in its own virtual machine and environment; this keeps things clean and version-safe. But as you can easily imagine, with that setup there are a couple of constraints regarding keyboard and mouse: usually those systems each require their own pieces of hardware attached. As stated, I don't like clutter on my desk's surface, so a cross-platform solution has to come in here. In the past I tried various applications, hardware and network protocols like X11, RDP, NX, TeamViewer, RAdmin, KVM switches, etc., but the problem is that they either let you connect remotely to the other system or exclusively 'bind' your peripherals to the active system. Not optimal after all.

    Synergy to the rescue

    Quote from their website: "Synergy lets you easily share your mouse and keyboard between multiple computers on your desk, and it's Free and Open Source. Just move your mouse off the edge of one computer's screen on to another. You can even share all of your clipboards. All you need is a network connection.
    Synergy is cross-platform (works on Windows, Mac OS X and Linux)." Yep, that's it - all I need for my setup here... Actually, I couldn't believe that I hadn't stumbled over Synergy earlier, but 'get over it' and there we go. And besides being Open Source, it's also free. Donations for the developers are very welcome, and recently they introduced Synergy Premium: a way to buy so-called premium votes that can be used to put more weight on specific issues or bugs that you would like the developers to look into.

    Installation and configuration

    Simply download the installation packages for your systems of choice, run the installer and enter some minor information about your network setup. I chose my desktop machine for the role of the Synergy server and configured my screen layout accordingly. The screen setup currently allows you to connect up to 15 machines, and the number of screens can be even higher, as those machines might have multiple screens physically attached. Synergy takes this into the overall calculation and simply works as expected. For fun I tried it with a second monitor attached to each laptop, for a total of 6 active screens - no flaws at all, stunning! All the other machines are configured as clients. (Side note: the screenshot was taken on Windows 8 and pasted via clipboard into Gimp running on Ubuntu.)

    Resume

    Synergy is now definitely in my box of tools for my daily work, and among the first pieces of software I install after the operating system. It simplifies my life and cleans my desk. Never again without Synergy! Now I'm only waiting for an Android version to integrate my Galaxy Tab 10.1, too. ;-) Please check out that superb product and enjoy sharing one keyboard, one mouse and one clipboard between your various machines and operating systems.

  • Internet Connectivity Indicator on Unity

    - by Sathish
    How can I check whether my internet connection is active on Ubuntu? If I am connected to a wired or wi-fi network, the indicator applet shows that I'm connected, but there is no way to tell whether the internet is actually working. I have some problems with my internet connectivity and I frequently lose my connection. I found this link useful: Internet connectivity indicator applet. But I don't know where I should use this code:

        #!/bin/bash
        # ping google.com once, waiting at most 2 seconds for a reply
        if ping -c 1 -W 2 google.com > /dev/null; then
            echo "Up"
        else
            echo "Down"
        fi

  • Do delegates defy OOP?

    - by Dave Rook
    I'm trying to understand OOP so I can write better OOP code, and one thing which keeps coming up is this concept of a delegate (using .NET). I could have an object which is totally self-contained (encapsulated); it knows nothing of the outside world... but then I attach a delegate to it. In my head this still seems quite well separated, as the delegate only knows what to reference - but that by itself means it has to know about something outside its world: that a method exists within another class! Have I got myself in a total muddle here, or is this a grey area, or is it actually down to interpretation (and if so, sorry, as that will be off topic I'm sure)? My question is: do delegates defy/muddy the OOP pattern?
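
    One way to see why the answer is usually "no" is that the self-contained object never knows which method it calls - only the delegate's signature. The knowledge of the concrete class lives in the wiring code outside both parties. A Python sketch of the same shape (plain callables standing in for .NET delegates; all names are illustrative):

        from typing import Callable

        class Button:
            # Knows nothing of the outside world - only that it was handed
            # *some* zero-argument callable (the 'delegate').
            def __init__(self, on_click: Callable[[], None]):
                self._on_click = on_click

            def click(self):
                self._on_click()        # invoke whatever was plugged in

        class Logger:
            def log_click(self):
                print("button was clicked")

        # Only this wiring code knows about both classes:
        button = Button(Logger().log_click)
        button.click()

    Encapsulation holds because Button depends on a signature, not on Logger; swapping in any other compatible method requires no change to Button at all.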

  • Is it a good idea to simplify a character-driven game engine to the point that it's unnecessary to learn scripting/programming?

    - by jokoon
    I remember, and I still think, that one cannot make even a prototype 3D game to test simple behaviors without using gigantic tools like Unity or knowing extensive C++ programming, design patterns, a decent or basic 3D engine, etc. Now I'm wondering, since I know programming, whether I'm still luckier than the ones who have to learn programming before they can make anything: even scripted engines such as Unity are not for kids, and to my mind they tend to dictate their way of doing things, which is not the case with engines like Ogre or Irrlicht. I remember toying a little with the Blender game engine; it was possible to link states or something, I don't remember very well. Now I'm thinking that character-driven games occupy a big part of the game market. Do you think it is a good idea to make a character-controlled game engine that only allows you to build AI, instead of anything else?
