Search Results

Search found 1931 results on 78 pages for 'clever human'.

Page 42/78 | < Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >

  • CyanogenMod Updates; Rolls out Android 2.3 to the Less Fortunate

    - by ETC
    If you’re one of the less fortunate (namely those forgotten by their carrier when it comes to phone OS upgrade time), you’ve got a friend in Cyanogen. They’ve rolled out a new Release Candidate update that includes Android 2.3 and a host of performance tweaks. First thing to note is that this is an RC: if you upgrade from CyanogenMod 6 to CyanogenMod 7 RC, you’ll be trading a little bit of stability and a few features that haven’t made the jump from 6 to 7 in return for the newest features of Android 2.3. If you’re not comfortable with that, wait for CyanogenMod 7 to reach a final release. For the intrepid, hit up the link below to read more and grab a copy. CyanogenMod-7 Release Candidates! [Cyanogen via Download Squad]

    Read the article

  • Supporting more than one codebase in ANSI-C

    - by Ilker Murat Karakas
    I am working on a project with an associated ANSI C code base (let me call this the 'main' codebase). I am now confronted with a typical problem (stated below) which I believe I could solve much more easily if I had an object-oriented language at hand. The problem is this: I will have to support more than one codebase; i.e. I will have to start supporting a parallel codebase (maybe even more in the future). The new (i.e. parallel) codebases will initially be identical to the old (i.e. 'main') codebase. As we are talking about the C language, I have so far been thinking of adding '#ifdef' statements to the code and writing the branch-specific code inside those '#ifdef' blocks. Hoping that I made the problem clear (enough!), I would like to hear thoughts on clever patterns that would help me handle this problem elegantly in ANSI C. Cheers

    Read the article

  • Display message when user leaves site

    - by Brian Rasmusson
    Hi, I'm looking for a way to display a message to the user if he leaves my site after only viewing one page. I found this (http://www.pgrs.net/2008/1/30/popup-when-leaving-website) clever solution, but it has a few flaws:

    staying_in_site = false;
    Event.observe(document.body, 'click', function(event) {
      if (Event.element(event).tagName == 'A') {
        staying_in_site = true;
      }
    });
    window.onunload = popup;
    function popup() {
      if (staying_in_site) {
        return;
      }
      alert('I see you are leaving the site');
    }

    It also displays the message when refreshing the page or using the back button. Do you know a better solution, or how to fix it in the above code? I'm no javascript master :) My intention is to add the code on very specific landing pages only, and display the message when people leave the page without downloading my trial software or reading other pages on my site.

    Read the article

  • What can be done against language inertia?

    - by gerrit
    Often, projects use programming language X, but would use programming language Y if they were started from scratch. For example, big numerical models may be written entirely in Fortran. Whereas this might be a reasonable choice for the components that need to run fast (the alternative would be C or C++), it might be a poor choice for components that either do not need to run fast (such as things dealing with human input or simple visualisations), or where runtime is not the limiting factor (such as I/O, particularly when from the network). Another example may be when a project is built using a proprietary language (such as Matlab; no, FOSS clones are not good enough) and was started at a time when FOSS alternatives were not viable, but ten years later they are, and it would be beneficial to migrate. However, due to language inertia, a migration does not happen. Code that works should not be touched, porting code is a time-consuming, expensive process, and programmers are familiar with language X but not necessarily with language Y. Still, in the long term, a migration would likely be beneficial. Can anything be done to mitigate the problems associated with language inertia? Are there any notable examples of big projects that have successfully overcome this problem? Or is a project bound to stick forever with the initial choices?

    Read the article

  • Online Media Daily: Oracle Takes Social Marketing Seriously

    - by Kathryn Perry
    In the article published on Nov 12, 2012 and titled "Oracle Integrates Social Marketing Into Enterprise To Gain Marketing Revs," Online Media Daily explores Oracle's approach to social marketing. The publication says that Oracle is focused on showing marketers how to integrate social data into corporate business processes and how to "socialize" the corporate world. The article goes on to state: "Enterprise software companies like Oracle, SAP, IBM, Salesforce and Microsoft have been slowly building up an expertise in social marketing to integrate the data into traditional enterprise resource planning, and customer relationship management tools into social marketing tools. Meg Bear, VP of cloud social platform at Oracle, sees the integration with ERP systems as a differentiator for the company. Oracle Social Relationship Management launched last month. It integrates social data into traditional enterprise applications like Oracle Fusion Marketing, Oracle Fusion Sales Catalog, Oracle ATG Web Commerce and Oracle ERP." (Read more: http://www.mediapost.com/publications/article/187096/oracle-integrates-social-marketing-into-enterprise.html) The post goes on to quote a Forrester analyst: "There's room for any process-driven application to run more efficiently, especially if they're socially enabled," said Rob Koplowitz, VP and principal analyst at Forrester Research. "It takes the human part of the process not generally captured today to provide better access to content, information and collective actions." Koplowitz said several acquisitions support Oracle's long-term vision: to layer social on top of other enterprise apps, like its ERP platform. With many great acquisitions under our belt and organically grown social tools, the market recognizes that Oracle is poised to seize the moment in socially enabled business apps. Continue reading the full article here.

    Read the article

  • Tricking the server to load files faster?

    - by Yongho
    If we have a website with multiple images and videos, I've read that it's best to serve them from other domains so that the browser can simultaneously download a bunch of files, rather than waiting for each file to be downloaded one by one. For example, if we have a website http://example.com/, we might consider serving:

    Videos from http://video.example.com/
    Images from http://images.example.com/
    etc.

    Question: can we achieve the simultaneous downloading by tricking the browser into believing that the files are hosted there, or do they actually need to be at that location? We could, for example, pretend to serve video from http://video.example.com/ when in fact it's just a clever htaccess rewrite that actually serves from http://example.com/video.php. In this case, the video is being served from the main domain, but because we refer to it as http://video.example.com/, the browser may think that it's another domain and thus load files simultaneously, rather than one by one. Is this feasible?

    Read the article

  • Trying to parse xml, but xmldocument.loadxml() is trying to download?

    - by maxp
    I have a string input that may or may not be valid XML. I think the simplest approach is to wrap new XmlDocument().LoadXml(strINPUT); in a try/catch. The problem I'm facing is that sometimes strINPUT is an HTML file. If the header of this file contains <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xml:lang="en-GB" xmlns="http://www.w3.org/1999/xhtml" lang="en-GB"> ...like many do, it actually tries to make a connection to the w3.org URL, which I really don't want it doing. Anyone know if it's possible to just parse the string without trying to be clever and checking external URLs? Failing that, is there an alternative to XmlDocument?

    Read the article

  • Best way to host multiple WordPress sites on a single VPS [migrated]

    - by Ben
    Not sure if this is a webmaster or a WordPress question; it's a bit half and half, sorry if I'm posting in the wrong place. Without using Multi-Site or installing new WordPress CMSs in second-level domains, what's the best way to get multiple WordPress installs running on my VPS (running Linux-powered CentOS 6 with WHM and cPanel)? It's currently working, but only by setting the permalinks option to the default setting, so the URLs aren't human-friendly. I have come across something called WPSiteStack, though I'd really rather not go down this route. Long story short, I need the following:

    - Separate installs, so one core / theme / plugin update doesn't affect all sites and the security of all sites is improved;
    - 'Pretty' permalinks;
    - Each WordPress install must be in the root of its own domain, to ensure that I can accurately measure my clients' quotas.

    It may also be worth noting that some functions within each install use the $_SERVER['DOCUMENT_ROOT'] and $_SERVER['HOST'] variables. I have already edited the httpd-vhosts.conf, httpd.conf and .htaccess files, but this hasn't made any changes. So any ideas what I'm missing or doing wrong? Any help is much appreciated.

    Read the article

  • Does Firefox Phishing Protection / Safebrowsing have any privacy implications?

    - by Nowo
    The current Google results are outdated. What is the current status? I was on www.mozilla.org/en-US/legal/privacy/firefox.html and saw "Protection Against Suspected Forgery and Attack Sites Features". Can you please translate more of it into human speak? First, Firefox downloads a list and compares it locally... OK... Here it gets messy: "If there is a match, Firefox will check with its third party provider to ensure that the website is still on the blacklist. The information sent between Firefox and its third party provider(s) are hashed URLs. In fact, multiple hashed URLs are sent with the real hash so that the third party provider(s) will not know what site you are visiting." - If that hash were sent to Mozilla, would they know which site was accessed? "In order to safeguard your privacy, Firefox will not transmit the complete URL of web pages that you visit to anyone other than Mozilla and its service providers." - In other words, Mozilla and its service providers get all complete URLs (of sites which were on the blacklist)? "While it is possible that a third party service provider may determine the actual URL from the hashed URL sent, Mozilla’s policy is to require [...]" - So privacy depends on policy rather than technology?
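
    To make the "hashed URLs" part more concrete, here is a minimal Python sketch of the idea the policy describes: the client hashes the URL and sends only a short prefix, so many different URLs map to the same value. The prefix length and hash function below are illustrative assumptions, not the exact Safe Browsing protocol used by Mozilla's provider.

    import hashlib

    def url_hash_prefix(url: str, prefix_bytes: int = 4) -> bytes:
        # Hash the (canonicalized) URL; only this short prefix would be sent
        # to the list provider. The full hash, and therefore the URL itself,
        # stays on the client.
        full_hash = hashlib.sha256(url.encode("utf-8")).digest()
        return full_hash[:prefix_bytes]

    # Many different URLs share any given 4-byte prefix, so the provider
    # cannot tell which of them the client actually visited.
    print(url_hash_prefix("http://example.com/some/page").hex())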

    Read the article

  • How to block the UI during asynchronous operations in WPF

    - by mcintyre321
    We have a WPF app (actually a VSTO WPF app). On certain controls there are multiple elements which, when clicked, load data from a web service and update the UI. Right now, we carry out these web requests synchronously, blocking the UI thread until the response comes back. This prevents the user clicking around the app while the data is loading, potentially putting it into an invalid state to handle the data when it is returned. Of course the app becomes unresponsive if the request takes a long time. Ideally, we'd like to have the cancel button active during this time, but nothing else. Is there a clever way of doing this, or will we have to switch the requests to execute asynchronously using backgroundworker and write something that disables all the controls apart from the cancel button while a request is in progress?

    Read the article

  • Reference Data Management

    - by rahulkamath
    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise MDM solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering chart of accounts to
    enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or mastering sales territories in light of rapid-fire acquisitions that require frequent sales territory refinement, equitable distribution of leads and accounts to salespersons, and alignment of budget/forecast with results to optimize sales coverage. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

    What is reference data?
    Reference data is a close cousin of master data. While master data may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler, more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and give them contextual value. The following list gives illustrative examples of reference data by type; reference data types may include types and codes, business taxonomies, complex relationships and cross-domain mappings, or standards.

    Types & Codes: Transaction Codes; Lookup Tables (e.g., Gender, Marital Status, etc.); Status Codes; Role Codes; Domain Values
    Taxonomies: Industry Classification Categories and Codes, e.g., North American Industry Classification System (NAICS); Product Categories; Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense); Market Segments; Universal Standard Products and Services Classification (UNSPSC), eCl@ss
    Relationships / Mappings: Product / Segment; Product / Geo; City → State → Postal Codes; Customer / Market Segment; Business Unit / Channel; Country Codes / Currency Codes / Financial Accounts; International Classification of Diseases (ICD), e.g., ICD-9 → ICD-10 mappings
    Standards: Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601); Currency Codes (e.g., ISO); Country Codes (e.g., ISO 3166, UN); Date/Time, Time Zones (e.g., ISO 8601); Tax Rates

    Why manage reference data?
    Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.

    Sample Use Cases of Reference Data Management

    Healthcare: Diagnostic Codes
    The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the two code sets, ICD-9 and ICD-10, offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as does master data management. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to "cross-walk" mappings – either forward to ICD-10 or backwards to ICD-9 – depending upon the reporting time horizon.
    Spend Management: Product, Service & Supplier Codes
    Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager’s time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.

    Specialty Finance: Point of Sale Transaction Codes and Product Codes
    In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes with the appropriate product and GL codes to comply with the changing regulations.

    Multi-National Companies: Industry Classification Schemes
    As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries use different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications were impacted by a given change.

    Read the article

  • Is the development of CLI apps considered "backwards"?

    - by user61852
    I am a fledgling DBA with a lot of experience in programming. I have developed several CLI, non-interactive apps that solve some daily repetitive tasks or eliminate the human error from more complex, albeit not so daily, tasks. These tools are now part of our toolbox. I find CLI apps are great because you can include them in an automated workflow. Also, the Unix philosophy of doing a single thing but doing it well, and letting the output of one process be the input of another, is a great way of building a set of tools that would consolidate into a strategic advantage. My boss recently commented that developing CLI tools is "backwards", or constitutes a "regression". I told him I disagreed, because most CLI tools that exist now are not legacy but are live projects with improved versions being released all the time. Is this kind of development considered "backwards" in the market? Does it look bad on a résumé? I also consider that all solutions, whether they are web or desktop, should have command-line, non-interactive options. Some people consider this a waste of programming resources. Is this goal a worthy one in a software project?

    Read the article

  • User upload file above web root with php

    - by Chris
    I have a website where local bands can have a profile page, and I'm implementing an upload system so that they can add songs to their profile. I want to make sure that clever visitors to my website cannot download their songs. I was thinking about uploading them to a folder above my domain's web root so that they cannot be accessed directly. Is this a good idea and/or possible? If not, what do you suggest I do to try and prevent users from downloading songs? I'm already using a Flash player to try and prevent downloads.
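
    The general pattern being asked about (keep the files outside the document root and stream them through a script that performs an access check) can be sketched as follows. This is a minimal, hypothetical Python illustration of the idea rather than the asker's PHP setup; the directory path and names are assumptions.

    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    UPLOAD_DIR = "/var/uploads/songs"  # hypothetical path, outside the web root

    class SongHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Strip any directory components so "../" tricks cannot escape UPLOAD_DIR.
            name = os.path.basename(self.path.lstrip("/"))
            full_path = os.path.join(UPLOAD_DIR, name)
            if not self.is_allowed():
                self.send_error(403)
                return
            if not os.path.isfile(full_path):
                self.send_error(404)
                return
            self.send_response(200)
            self.send_header("Content-Type", "audio/mpeg")
            self.end_headers()
            with open(full_path, "rb") as f:
                self.wfile.write(f.read())

        def is_allowed(self):
            # Replace with a real session/permission check.
            return True

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), SongHandler).serve_forever()

    Note that anything a legitimate player can stream can still be captured by a determined user; keeping files out of the web root mainly prevents direct hot-linking by URL.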

    Read the article

  • What's the best way to manage error logging for exceptions?

    - by Peter Boughton
    Introduction: If an error occurs on a website or system, it is of course useful to log it, and show the user a polite message with a reference code for the error. And if you have lots of systems, you don't want this information dotted around - it is good to have a single centralised place for it. At the simplest level, all that's needed is an incrementing id and a serialized dump of the error details. (And possibly the "centralised place" being an email inbox.) At the other end of the spectrum is perhaps a fully normalised database that also allows you to press a button and see a graph of errors per day, or identify what the most common type of error on system X is, whether server A has more database connection errors than server B, and so on. What I'm referring to here is logging code-level errors/exceptions by a remote system - not "human-based" issue tracking, such as done with Jira, Trac, etc. Questions: I'm looking for thoughts from developers who have used this type of system, specifically with regards to: What are essential features you couldn't do without? What are good-to-have features that really save you time? What features might seem a good idea, but aren't actually that useful? For example, I'd say a "show duplicates" function that identifies multiple occurrences of an error (without worrying about 'unimportant' details that might differ) is pretty essential. A button to "create an issue in [Jira/etc] for this error" sounds like a good time-saver. Just to re-iterate, what I'm after is practical experiences from people that have used such systems, preferably backed up with why a feature is awesome/terrible. (If you're going to theorise anyway, at the very least mark your answer as such.)
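
    As a concrete illustration of the "show duplicates" idea mentioned above, here is a minimal Python sketch of one way to fingerprint an exception so repeat occurrences group together. The choice of fields to hash, and the names used, are assumptions for illustration, not a prescription.

    import hashlib
    import traceback

    def error_fingerprint(exc: BaseException) -> str:
        # Group occurrences of the "same" error by hashing its type and the
        # code locations in its traceback, ignoring volatile details such as
        # timestamps, IDs, or argument values embedded in the message.
        frames = traceback.extract_tb(exc.__traceback__)
        locations = [f"{f.filename}:{f.name}:{f.lineno}" for f in frames]
        key = type(exc).__name__ + "|" + "|".join(locations)
        return hashlib.sha1(key.encode("utf-8")).hexdigest()

    try:
        1 / 0
    except ZeroDivisionError as exc:
        # Every occurrence raised from this call site yields the same fingerprint,
        # so a logging backend can count duplicates instead of storing noise.
        print(error_fingerprint(exc))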

    Read the article

  • Mimic property/list changes on an object on another object

    - by soundslike
    I need to mimic changes (property or list changes) on an object and then apply them to another object to keep the structure/properties the same. In essence it's like cloning, etc., but the biz rules require certain properties not to be applied to the other object, so I can't just clone the object - otherwise this would be easy. I've already walked the source object to get INotifyPropertyChanged and IListChanged events, so I have the "source" and the args (property or list changed) of each event notification. Given that, I guess I could build a reflection "hierarchy path" starting from the top level of the source object to get to the property or list changed "source" (which could be several levels deep). Ignoring for the moment that certain object properties should not propagate to the other object, what's a way to build this "path"? Is brute force from the top level down to build the "path" (discarding on the way back up if we don't hit the original changed event "source") the only way to do it? Any clever ideas on how to mimic changes from one object onto another object?

    Read the article

  • How to render the properties of the view model's base class first when using ViewData.ModelMetadata.

    - by Martin R-L
    When I use ViewData.ModelMetadata.Properties in order to loop over the properties (with an additional Where(modelMetadata => modelMetadata.ShowForEdit && !ViewData.TemplateInfo.Visited(modelMetadata))), and thereby create a generic edit view, the properties of the view model's base class are rendered last. Is it possible to use a clever OrderBy(), or is there another way to first get the properties of the base class, and then the subclass's? Reverse won't do the trick, since the ordering of each class's own properties is perfectly fine. A workaround would of course be composition + delegation, but since we don't have mixins, it's too un-DRY IMHO, which is why I seek a better solution if possible.

    Read the article

  • TeamCity run Nunit tests in Parallel

    - by Bob Sinclar
    So I was thinking that there must be a better way to run NUnit tests for a .NET project via TeamCity. Currently the build of the project takes about 10 minutes, and the testing step takes 30-ish minutes. I was thinking about splitting up the NUnit tests into 3 groups, assigning them each to a different agent, and then making sure they have a build dependency on the initial build before they start. This was the best way I thought of doing it. Is there a different way I should also consider? On a side note, is it possible to combine all the NUnit tests at the end to get one report from the tests being run on 3 different machines? I don't think this is possible unless someone has thought of a clever hack.

    Read the article

  • Basic is Best

    - by Eric A. Stephens
    Fellow foodies will recognize the recent movement towards "farm-to-table" restaurants. These venues attempt to simplify their menus and source ingredients as close to the source as possible. I had the opportunity to dine at such a restaurant the other evening. I was gushing about the appetizer to my server when she described the preparation for the item and then punctuated her comments with "basic is best". I reminded my fellow enterprise architect diners that there was an architecture lesson in that statement. They rolled their eyes and chuckled. But they also knew I was right. I'm reminded of Frederick Brooks' book The Mythical Man-Month and his latest, The Design of Design. The former, a must-read book, talks about complexity, but he refrains from damning all complexity. The world we live in and the enterprises we strive to transform with enterprise architecture are complicated organisms, much like the human body. But sometimes a simple solution is the best approach. Fewer applications (think: portfolio rationalization). Fewer components. Fewer lines of code. Whatever level of abstraction you are working at, less is more. I'm reminded of the enterprise architecture principle "Control Technical Diversity". At one firm I created pithy catch phrases for each principle. I named this one "Less is More". But perhaps another variation is what my server said the other night: "Basic is Best".

    Read the article

  • Calculating probability that a string has been randomized? - Python

    - by RadiantHex
    Hi folks, this is related to a question I asked earlier (question). I have a list of manually created strings such as:

    lucy87
    gordan_king
    fancy_unicorn77
    joplucky_kanga90
    base_belong_to_narwhals

    and a list of randomized strings:

    johnkdf
    pancake90kgjd
    fancy_jagookfk
    manhattanljg

    What gives away that the last set of strings is randomized is that they contain sequences such as 'kjg', 'jgf', 'lkd', ... which rarely appear in real words or names. Is there any clever way I could separate strings that contain these apparently random sequences from the crowd? I guess that this plays a lot on the fact that certain characters are more likely to be placed next to others (e.g. 'co', 'ka', 'ja', ...). Any ideas on this one? Kylotan mentioned Reverend, but I am not sure if it can be used for such a purpose. Help would be much appreciated!
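
    One common approach (not from the original thread) is to score each string by how plausible its adjacent letter pairs are, using bigram frequencies learned from a corpus of legitimate strings; markedly low scores suggest keyboard mashing. A minimal Python sketch, with an illustrative toy corpus:

    from collections import Counter
    from math import log

    def bigrams(s):
        letters = [c for c in s.lower() if c.isalpha()]
        return list(zip(letters, letters[1:]))

    def train(names):
        # Count adjacent-letter pairs over known-good strings. In practice,
        # train on a large list of real names or dictionary words.
        counts = Counter(bg for name in names for bg in bigrams(name))
        return counts, sum(counts.values())

    def avg_log_likelihood(s, counts, total, floor=1e-6):
        # Average per-bigram log-probability; lower values are more "random".
        pairs = bigrams(s)
        if not pairs:
            return 0.0
        return sum(log(counts[bg] / total + floor) for bg in pairs) / len(pairs)

    corpus = ["lucy", "gordan king", "fancy unicorn", "joplucky kanga", "base belong to narwhals"]
    counts, total = train(corpus)
    for candidate in ["fancy_unicorn77", "pancake90kgjd"]:
        print(candidate, round(avg_log_likelihood(candidate, counts, total), 2))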

    Read the article

  • Visually and audibly unambiguous subset of the Latin alphabet?

    - by elliot42
    Imagine you give someone a card with the code "5SBDO0" on it. In some fonts, the letter "S" is difficult to visually distinguish from the number five (as with the number zero and the letter "O"). Reading the code out loud, it might be difficult to distinguish "B" from "D", necessitating saying "B as in boy" and "D as in dog", or using a phonetic alphabet instead. What's the biggest subset of letters and numbers that will, in most cases, both look unambiguous visually and sound unambiguous when read aloud? Background: We want to generate a short string that can encode as many values as possible while still being easy to communicate. Imagine you have a 6-character string, "123456". In base 10 this can encode 10^6 values. In hex, "1B23DF" can encode 16^6 values in the same number of characters, but this can sound ambiguous when read aloud ("B" vs. "D"). Likewise, for any string of N characters you get (size of alphabet)^N values. The string is limited to a length of about six characters, due to wanting to fit easily within the capacity of human working memory. Thus, to find the maximum number of values we can encode, we need to find the largest unambiguous set of letters/numbers. There's no reason we can't consider the letters G-Z, and some common punctuation, but I don't want to have to do the pairwise comparisons manually myself: "does G sound like A?", "does G sound like B?", "does G sound like C?" As we know, this would be O(n^2) linguistic work to do =)...
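
    To make the capacity trade-off concrete, here is a small Python sketch; the candidate subset in it is an illustrative assumption based on commonly excluded look-alike characters, not a vetted answer to the sound-alike question.

    from math import ceil, log

    # Hypothetical subset, for illustration only: drops the usual look-alike
    # groups (0/O, 1/I/L, 2/Z, 5/S, 8/B); a real answer would also prune
    # sound-alike groups (e.g. keep only one of B/C/D/E/G/P/T/V).
    CANDIDATE = "34679ACDEFGHJKMNPQRTUVWXY"  # 25 characters

    def values(alphabet: str, length: int) -> int:
        # Number of distinct codes of the given length over the alphabet.
        return len(alphabet) ** length

    def length_needed(alphabet: str, target: int) -> int:
        # Shortest code length that can encode at least `target` values.
        return ceil(log(target) / log(len(alphabet)))

    print(values("0123456789", 6))          # base 10: 1,000,000
    print(values("0123456789ABCDEF", 6))    # hex:     16,777,216
    print(values(CANDIDATE, 6))             # 25^6 =   244,140,625
    print(length_needed(CANDIDATE, 10**6))  # 5 characters already cover a million values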

    Read the article

  • Is a "model" branch a common practice?

    - by dukeofgaming
    I just thought it could be a good thing to have a dedicated version control branch for all database schema changes, and I wanted to know if anyone else is doing the same and what the results have been. Say that you are working with:

    1. Schema model/documentation (some file where you model the database visually to generate the schema source, say MySQL Workbench, with a .mwb file, which is binary)
    2. Schema source (a .sql file)
    3. Schema-based code generation

    The normal way we were working was with feature branches, so we would make changes to the model files (the database-specific ones), and then have to regenerate points 2 and 3, dealing with the possible conflicts (or even code rewriting). Now say that your workflow goes the same way as the previous item numbering. With a model branch you wouldn't have to reconcile the schema model with binaries in other feature branches, or have to regenerate the schema source and regenerate code (which might have human code on top of it). It makes so much sense to me that it feels weird not having seen this earlier as a common practice. Edit: I'm counting on branch merges to be the assertions for the model matching the code. I use a DVCS, so I don't fear long-lived branches or scary-looking merges. I'm also doing feature branching.

    Read the article

  • empty() not a valid callback?

    - by user151841
    I'm trying to use empty() in array mapping in php. I'm getting errors that it's not a valid callback.

    $ cat test.php
    <?
    $arrays = array(
        'arrEmpty' => array( '', '', '' ),
    );
    foreach ( $arrays as $key => $array ) {
        echo $key . "\n";
        echo array_reduce( $array, "empty" );
        var_dump( array_map("empty", $array) );
        echo "\n\n";
    }

    $ php test.php
    arrEmpty
    Warning: array_reduce(): The second argument, 'empty', should be a valid callback in /var/www/authentication_class/test.php on line 12
    Warning: array_map(): The first argument, 'empty', should be either NULL or a valid callback in /var/www/authentication_class/test.php on line 13
    NULL

    Shouldn't this work? Long story: I'm trying to be (too?) clever and checking that all array values are not empty strings.

    Read the article

  • SnapBird Supercharges Your Twitter Searches

    - by ETC
    Twitter’s default search tool is a bit anemic. If you want to supercharge your Twitter search, fire up web-based search tool SnapBird and dig into your past tweets as well as those of friends and followers. Yesterday I was trying to find a tweet I’d sent some time last year regarding my search for an application that could count keystrokes for inclusion in my review of the app I finally found to fulfill the need, KeyCounter. Searching for it with Twitter’s search tool yielded nothing. One simple search at SnapBird and I nailed it. SnapBird requires no authentication to search public tweets (both your own and those of your friends and follows) but does require authentication in order to search through your sent and received direct messages. SnapBird is a free service. SnapBird

    Read the article

  • ArchBeat Link-o-Rama for October 23, 2013

    - by OTN ArchBeat
    Virtual Dev Day: Oracle ADF Development - Web, Mobile, and Beyond
    This free virtual event includes technical sessions that range from introductory to deep dive, covering Oracle ADF and Oracle ADF Mobile. Multiple tracks cover every interest and every level and include live online Q&A for answers to your technical questions. Register now! Americas: Tuesday, November 19, 9am-1pm PT / 12pm-4pm ET / 1pm-5pm BRT. APAC: Thursday, November 21, 10am–1:30pm IST (India) / 12:30pm–4pm SGT (Singapore) / 3:30pm–7pm AESDT. EMEA: Tuesday, November 26, 9am-1pm GMT / 1pm-5pm GST / 2:30pm-6:30pm IST.

    A Roadmap for SOA Development and Delivery | Mark Nelson
    Do you know the way to S-O-A? Mark Nelson does. His latest blog post, part of an ongoing series, will help to keep you from getting lost along the way.

    Updated ODI Statement of Direction | Robert Schweighardt
    Heads up, Oracle Data Integrator fans! A new product statement of direction document is available, offering "an overview of the strategic product plans for Oracle’s data integration products for bulk data movement and transformation, specifically Oracle Data Integrator (ODI) and Oracle Warehouse Builder (OWB)."

    Java-Powered Robot Named NAO Wows Crowds | Tori Wieldt
    Java community manager Tori Wieldt interviews a robot and a human.

    Nordic OTN Tour 2013 | Lonneke Dikmans
    Oracle ACE Director Lonneke Dikmans checks in from the Stockholm leg of the Nordic OTN Tour for 2013, sponsored by the Danish Oracle User Group and featuring fellow ACE Directors Tim Hall and Sten Vesterli, plus local speakers at various stops. Lonneke's post includes the slides from three of the presentations.

    Thought for the Day
    "Some people approach every problem with an open mouth." — Adlai E. Stevenson, 23rd Vice President of the United States (October 23, 1835 – June 14, 1914)
    Source: brainyquote.com

    Read the article

  • People != Resources

    - by eddraper
    Ken Tabor’s blog post “They Are not Resources – We Are People” struck a chord with me.  I distinctly remember hearing the term “resources” within the context of “people” for the first time back in the late 90’s.  I was in a meeting at Compaq and a manager had been faced with some new scope for an IT project he was managing.  His response was that he needed more “resources” in order to get the job done.  As I knew the timeline for the project was fixed and the process for acquiring additional funding would almost certainly extend beyond his expected delivery date, I wondered what he meant.  After the meeting, I asked him what he meant… his response was that he needed some more “bodies” to get the job done.  For a minute, my mind whirred… why is it so difficult to simply say “people?”  This particular manager was neither a bad person nor a bad manager… quite the contrary.  I respected him quite a bit and still do.  Over time, I began to notice that he was what could be termed an “early adopter” of many “Business speak” terms – such as “sooner rather than later,” “thrown a curve,” “boil the ocean” etcetera.  Over time, I’ve discovered that much of this lexicon can actually be useful, though cliché and overused.  For example, “Boil the ocean” does serve a useful purpose in distilling a lot of verbiage and meaning into three simple words that paint a clear mental picture.  The term “resources” would serve a similar purpose if it were applied to the concept of time, funding, or people.  The problem is that this never happened.  “Resources”, “bodies”, “ICs” (individual contributors)… this is what “people” have become in the IT business world.  Why?  We’re talking about simple word choices here.  Why have human beings been deliberately dehumanized and abstracted in this manner? What useful purpose does it serve other than to demean and denigrate?

    Read the article
