Search Results

Search found 3551 results on 143 pages for 'canonical sources'.

  • Should I convert overly-long UTF-8 strings to their shortest normal form?

    - by Grant McLean
    I've just been reworking my Encoding::FixLatin Perl module to handle overly-long UTF-8 byte sequences and convert them to the shortest normal form. My question is quite simply "is this a bad idea"? A number of sources (including this RFC) suggest that any over-long UTF-8 should be treated as an error and rejected. They caution against "naive implementations" and leave me with the impression that these things are inherently unsafe. Since the whole purpose of my module is to clean up messy data files with mixed encodings and convert them to nice clean UTF-8, this seems like just one more thing I can clean up so the application layer doesn't have to deal with it. My code does not concern itself with any semantic meaning the resulting characters might have; it simply converts them into a normalised form. Am I missing something? Is there a hidden danger I haven't considered?
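
    For what it's worth, the classic danger case the RFC has in mind fits in a few lines. A minimal C# sketch (mine, not the module's code) of why naive decoding of over-long sequences is the security concern:

        using System;

        class OverlongDemo
        {
            static void Main()
            {
                // C0 AF is an over-long two-byte encoding of U+002F ('/').
                // A naive decoder that doesn't reject over-long forms computes:
                byte b1 = 0xC0, b2 = 0xAF;
                int codePoint = ((b1 & 0x1F) << 6) | (b2 & 0x3F);
                Console.WriteLine((char)codePoint);   // prints '/'

                // The shortest (normal) form of U+002F is the single byte 0x2F,
                // so a byte-level filter scanning for 0x2F never sees the
                // over-long variant -- which is why the RFC says to reject it.
            }
        }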

  • Setting a property from one collection to another collection

    - by ooo
    I have two collections:

        List<Application> myApps;
        List<Application> yourApps;

    These lists have overlapping data, but they are coming from different sources and each source has some missing field data. The Application object has a property called Description, and both collections have a unique field called Key. I want to see if there is a LINQ solution to: loop through all applications in myApps and check whether each Key exists in yourApps; if it does, take the Description property from that application in yourApps and set the Description property on the matching application in myApps to the same value. I wanted to see if there was any slick way of doing this using lambda expressions, instead of having loops and a number of if statements.
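
    One slick shape this can take - a hedged sketch under the question's assumptions (an Application class with Key and Description as described, Keys unique within yourApps):

        using System.Collections.Generic;
        using System.Linq;

        class Application
        {
            public string Key { get; set; }
            public string Description { get; set; }
        }

        static class DescriptionCopier
        {
            public static void CopyDescriptions(List<Application> myApps, List<Application> yourApps)
            {
                // Index yourApps by the unique Key once, then copy the
                // Description across for every matching app in myApps.
                var byKey = yourApps.ToDictionary(a => a.Key, a => a.Description);
                foreach (var app in myApps)
                {
                    if (byKey.TryGetValue(app.Key, out var description))
                        app.Description = description;
                }
            }
        }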

  • 'Caching' a large table in ASP.NET

    - by TheNewGuy
    I understand that each page refresh, especially in 'AjaxLand', causes my back-end/code-behind class to be called from scratch... This is a problem because my class (which is a member object in System.Web.UI.Page) contains A LOT of data that it sources from a database. So now every page refresh in AjaxLand forces large backend DB calls, rather than just reusing a class object from memory. Any fix for this? Is this where session variables come into play? Are session variables the only option I have for retaining an object in memory that is linked to a single user and a single session instance?
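
    For reference, a minimal sketch of the usual Session pattern (classic Web Forms assumed; "BigData" and LoadFromDatabase are hypothetical names):

        using System.Web;

        public static class BigDataStore
        {
            // Sketch: keep the expensive object per user/session so Ajax
            // postbacks stop re-querying the database on every request.
            public static object GetForCurrentUser()
            {
                var session = HttpContext.Current.Session;
                var data = session["BigData"];
                if (data == null)
                {
                    data = LoadFromDatabase();   // hypothetical expensive DB load
                    session["BigData"] = data;
                }
                return data;
            }

            private static object LoadFromDatabase() { return new object(); } // placeholder
        }

    Session is per-user and per-session, which matches the question; for data that is identical for all users, the application cache (HttpRuntime.Cache) with an expiry is usually the better fit.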

  • Retrieving Gtk::Widget's relative position: get_allocation() doesn't work

    - by a-v
    I need to retrieve the position of a Gtk::Widget relative to its parent, a Gtk::Table. Most sources (e.g. http://library.gnome.org/devel/gtk-faq/stable/x642.html) say that one needs to call Gtk::Widget::get_allocation(). However, the returned Gtk::Allocation object always contains x = -1, y = -1, width = 1, height = 1. I have to note that this happens before the Gtk::Table object is actually exposed and rendered. A call to show_all_children() or check_resize(), which I would expect to recalculate child widget geometry, doesn't help. What am I doing wrong? Thanks in advance.

  • SSRS - rsAccessDenied error

    - by user1718857
    I have created one SSRS text report. I have built and deployed it successfully, and I am also able to view it from Report Manager successfully. I am a local admin on the Windows server, but not on SQL Server.

    Issue: I am trying to schedule the report to run on a daily basis. When I go to Data Sources to store my Windows ID credentials, it fails with the error below:

        The permissions granted to user 'app\abcid' are insufficient for performing this operation. (rsAccessDenied)

    Things I have tried:
    - Added my Windows domain ID to Report Manager and granted it all the roles that are there; still no success.
    - Added the server to the trusted sites in IE; still no success.

  • Rails: What's the suggested approach to retrieve xml from an outside source

    - by Syrahn
    Rails newbie (though long-time programmer) here. I'm writing a test app that retrieves data from several outside sources (think Twitter, RSS feeds, etc.) and, under certain circumstances, stores that data in a db (or presents it to the user). The data model and the views are trivial. What I'm having difficulty with is making the actual HTTP call to the outside source and deserializing the xml response so I can query/use it in my controller/helper. What library/gem should I use to accomplish this? I tried looking this up around the net, but only came up with some article from 2006 which, knowing how fast Rails has developed, might well be completely outdated. Your help is much appreciated.

  • Automatically save CSS changes made to existing styles in Chrome dev tools?

    - by styke
    I've already mapped the necessary files to the local resource - however, while that does allow me to save any changes made to a file in the Sources panel, I was wondering if it's possible to automatically save changes to CSS made in the Elements panel. Otherwise at the moment, any changes made to the style in the Elements panel seem to exist only there. I remember at some point there used to be a little indicator of the file and line number next to a class/id etc. in the Styles tab of the Elements panel - surely it can't be that hard to simply 'update' any changes to that style rule considering Chrome knows exactly where it's coming from (in the case that it's a stylesheet and not an inline style?). It would be a great relief to my workflow. The answers to this similar question are obsolete.

  • Broken php/localhost/something

    - by ghego1
    I was trying to install the mcrypt libraries following this tutorial (http://www.glenscott.co.uk/blog/2011/08/29/install-mcrypt-php-extension-on-mac-os-x-lion/), but something must have gone wrong, and now when I load a PHP page on my localhost I see the raw source instead of the rendered page:

        query="SELECT DISTINCT ".$field." as a,".$field2." as b FROM ".$tab." ".$where." Group by ".$field." order By ".$orderBy;
        return $this->query;
        }

    ...and all the remaining code of the PHP page that should get loaded. I've restored the previous versions of the /private/etc folder and the /usr/lib/php folder with Time Machine, but it didn't help. And now if I execute sudo apachectl restart it gives me this error (while before it worked):

        sudo: no valid sudoers sources found, quitting

    P.S. I'm on a Mac with Mountain Lion.

  • Which Android hardware devices should I test on? [closed]

    - by Tchami
    Possible Duplicate: What hardware devices do you test your Android apps on?

    I'm trying to compile a list of Android hardware devices that it would make sense to buy and test against if you want to target as broad an audience as possible, while still not buying every single Android device out there. I know there's a lot of information regarding screen sizes and Android versions available elsewhere, but: when developing for Android it's not terribly useful to know if the screen size of a device is 480x800 or 320x240, unless you feel like doing the math to convert that into Android "units" (i.e. small, normal, large or xlarge screens, and ldpi, mdpi, hdpi or xhdpi densities). Even knowing the dimensions of a device, you cannot be sure of the actual Android units, as there's some overlap; see Range of screens supported in the Android documentation. Taking into account the distribution of Platform versions and Screen Sizes and Densities, below is my current list, based on information from the Wikipedia article on Comparison of Android devices. I'm fairly sure the information in this list is correct, but I'd welcome any suggestions/changes.

    Phones

        | Model                   | Android Version | Screen Size | Density |
        | HTC Wildfire            | 2.1/2.2         | Normal      | mdpi    |
        | HTC Tattoo              | 1.6             | Normal      | mdpi    |
        | HTC Hero                | 2.1             | Normal      | mdpi    |
        | HTC Legend              | 2.1             | Normal      | mdpi    |
        | Sony Ericsson Xperia X8 | 1.6/2.1         | Normal      | mdpi    |
        | Motorola Droid          | 2.0-2.2         | Normal      | hdpi    |
        | Samsung Galaxy S II     | 2.3             | Normal      | hdpi    |
        | Samsung Galaxy Nexus    | 4.0             | Normal      | xhdpi   |
        | Samsung Galaxy S III    | 4.0             | Normal      | xhdpi   |

    Tablets

        | Model                   | Android Version | Screen Size | Density |
        | Samsung Galaxy Tab 7"   | 2.2             | Large       | hdpi    |
        | Samsung Galaxy Tab 10"  | 3.0             | X-Large     | mdpi    |
        | Asus Transformer Prime  | 4.0             | X-Large     | mdpi    |
        | Motorola Xoom           | 3.1/4.0         | X-Large     | mdpi    |

    N.B.: I have seen (and read) other posts on SO on this subject, e.g. Which Android devices should I test against? and What hardware devices do you test your Android apps on?, but they don't seem very canonical. Maybe this should be marked community wiki?

  • Mercurial: two separate repos somewhat related (yes I'm getting confused)

    - by Lo'oris
    I have a local repository, let's call it ONE. ONE is the actual program (an Android program, in case it matters for some reason). I have a remote repository, let's call it EXT. EXT is somewhat of a library, used by ONE. ONE has a complex directory structure, mandated by Android; the main sources are in src/bla/bla/ONE. Since ONE uses EXT, I had to create another directory next to that one, that is src/bla/bla/EXT. I would like to keep them separated in two repositories, but I need them to actually be in this same directory structure to compile ONE. At the moment I just created a symlink to do it, but I wonder if there is a better way of doing this that uses some hg feature.
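
    If it helps, the hg feature that usually covers this layout is subrepositories: clone EXT into the nested path, then list it in an .hgsub file at ONE's root. A minimal sketch (the remote URL is a placeholder):

        src/bla/bla/EXT = https://example.com/hg/EXT

    Once .hgsub is committed, ONE pins EXT's exact revision in .hgsubstate, and fresh clones/updates of ONE pull EXT into place without the symlink.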

  • Assign RegEx submatches to variables or map (C++/C)

    - by Michael
    I need to extract the SAME type of information (e.g. first name, last name, telephone, ...) from numerous different text sources (each with a different format and a different order of the variables of interest). I want a function that does the extraction based on a regular expression and returns the result as DESCRIPTIVE variables. In other words, instead of returning each match result as submatch[0], submatch[1], submatch[2], ..., have it do EITHER of the following:

    1) return a std::map so that the submatches can be accessed via submatch["first_name"], submatch["last_name"], submatch["telephone"]; or
    2) return variables holding the submatches, so that they can be accessed via submatch_first_name, submatch_last_name, submatch_telephone.

    I can write a wrapper class around boost::regex to do #1, but I was hoping there would be a built-in or a more elegant way to do this in C++/Boost/STL/C.
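
    For illustration, this is built into some engines as "named groups"; recent Boost.Regex releases accept the same (?<name>...) syntax. A toy C#/.NET sketch of the idea (the pattern and input below are invented):

        using System;
        using System.Text.RegularExpressions;

        class NamedGroupsDemo
        {
            static void Main()
            {
                // Invented input format: "Last, First 555-1234"
                var re = new Regex(@"(?<last_name>\w+),\s*(?<first_name>\w+)\s+(?<telephone>[\d-]+)");
                Match m = re.Match("Smith, Jane 555-1234");
                if (m.Success)
                {
                    Console.WriteLine(m.Groups["first_name"].Value);  // Jane
                    Console.WriteLine(m.Groups["last_name"].Value);   // Smith
                    Console.WriteLine(m.Groups["telephone"].Value);   // 555-1234
                }
            }
        }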

  • How do (or can I) hack a gem temporarily while looking for a bug?

    - by Tom Andersen
    I have a gem installed in my home directory on a laptop (e.g. not THE server). I have installed Ruby 1.9.1 and also some other gems, notably right_aws, which allows access to S3, etc., with Ruby. All works, except there is a bug when I do a query on SimpleDB and the returned list of items includes an item with any two-byte UTF-8 character in its itemName(). So I looked through the sources of the right_aws gem installed on my machine, and I can see some places where I would like to test a fix. If I edit the file and save the changes (which needs a password), then restart the server (script/server), it ignores my changes. I am quite new at Ruby - do you have to 'compile' or make some similar move to get source code changes to take effect? I can see the edited file has changed by viewing it in the terminal, etc.

  • The best approach to customize Bootstrap Less files and keep it easy to be updated to future versions

    - by user322896
    I'm wondering what the best way would be to customize the Less files in Bootstrap and, at the same time, keep it easy to upgrade to future Bootstrap versions. It's straightforward to just modify the Less files, but the problem is that when the next version of Bootstrap comes out, it might be painful to upgrade, because all the changes are already deeply mixed into the original sources. Another approach would be similar to the open/closed principle: keep the original Less files unchanged and add my customized Less files to overwrite the CSS rules I need. When Bootstrap gets updated, (hopefully) I can simply replace its Less files and everything will work magically. However, regardless of the correctness of my assumption, the same CSS rules would then be scattered in even more places and hard to manage. Also, the more we overwrite the CSS (not for compatibility or other purposes), the more bandwidth we waste. I know this highly depends on how the author of Bootstrap handles the structure of the framework, or even the naming of CSS rules, but I'd still like to hear everybody's opinions. Thanks.

  • php vs python django or something else for CMS module

    - by Michael
    We're looking to develop a CMS module for our website, and I need some help choosing the language/framework for this project. Basically we need to develop a "help" module like this one from eBay (http://pages.ebay.com/help/index.html), which will contain a lot of static pages with nice URLs for SEO. The application must run fast and use low computer resources. We have been looking at using PHP on a custom-made MVC framework, but we have received advice from other sources that Python/Django is exactly the language/framework we need in terms of maintainability and development speed, because it was developed for exactly this kind of project. So I need expert advice on this matter, with pros and cons for each choice.

  • Merging XMLTextReaders in C#

    - by smithchelluk
    I have a website that needs to pull information from two different XML data sources. Originally I only needed to get the data from one source, so I was building a URL in the backend that went and retrieved the data from the XML site and then parsed and rendered it in the front end of the website. Now I have to use a second data source and merge the result sets (which are identically structured XML) into one result set. Here is the code I am currently using to get one XML feed:

        sUrl = sbUrl.ToString();                        //The URL for the XML feed
        XmlDocument xDoc = new XmlDocument();
        StringBuilder oBuilder = new StringBuilder();   //The parsed HTML output
        XmlTextReader oXmlReader = new XmlTextReader(sUrl);
        oXmlReader.Read();
        xDoc.Load(oXmlReader);
        XmlNodeList List = xDoc.GetElementsByTagName("result");
        foreach (XmlNode node in List)
        {
            XmlElement key = (XmlElement)node;
            //BUILD THE OUTPUT HERE
        }

    Thanks in advance for your help.
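
    One possible approach, sketched below (url1/url2 are placeholders, and this is untested against the real feeds): load each feed into its own XmlDocument, then import the second document's result nodes into the first, so the existing foreach still sees a single node list.

        using System.Xml;

        class FeedMerger
        {
            public static XmlDocument LoadAndMerge(string url1, string url2)
            {
                var merged = new XmlDocument();
                merged.Load(url1);                 // XmlDocument.Load accepts a URL directly

                var second = new XmlDocument();
                second.Load(url2);

                // Nodes belong to their owning document, so import before appending.
                foreach (XmlNode node in second.GetElementsByTagName("result"))
                {
                    merged.DocumentElement.AppendChild(merged.ImportNode(node, true));
                }
                return merged;
            }
        }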

  • Sphinx without using an auto_increment id

    - by squeeks
    I am currently planning to create a big database (2+ million rows) with a variety of data from separate sources. I would like to avoid structuring the database around auto_increment ids, to help prevent sync issues with replication and also because each item inserted will have an alphanumeric product code that is guaranteed to be unique - it seems to make more sense to use that instead. I am looking at a search engine to index this database, and Sphinx looks rather appealing due to its design around indexing relational databases. However, various tutorials and documentation seem to show database designs depending on an auto_increment field in one form or another, and there is a rather bold statement in the documentation saying that document ids must be 32/64-bit integers only, or things break. Is there a way to have a database indexed by Sphinx without auto_increment fields as the id?
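
    One pattern that shows up in Sphinx setups (a sketch, unverified against current versions; the table and column names are assumptions) is to keep the alphanumeric code as the real key and derive the required integer document id from it in sql_query:

        source products
        {
            type      = mysql
            # Sphinx insists on an integer document id, so derive one from
            # the unique alphanumeric code. CRC32 is only 32-bit: collisions
            # are possible at millions of rows, so verify before relying on it.
            sql_query = SELECT CRC32(product_code) AS id, product_code, name FROM products
        }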

  • Can you/should you learn SEO techniques?

    - by Mark
    I know very little about search engine optimization; however, from discussions with others I am now unsure where to start. Are there any books, or do these date so quickly that they are obsolete? Do all websites give you misinformation, or are there reliable sources? Is it just a case of trial and error and, in turn, experience? Is it even worth learning the techniques when search engines change their algorithms so regularly? I wonder if it is just better to spend the time ensuring you have a regularly updated, well-written website with quality content, a site map, quality links, etc.

  • Can I expand a macro JUST ONE TIME in a specific target?

    - by naive231
    A = "demo" %.o:%.cpp $(CC) -c $^ $(A) -o $@ default:$(all_objs) game:A = $(shell read -p 'Enter game version: ' gv && echo $$gv) game:$(all_objs) Just a snippet makefile above. If I make game, main problem is each compilation of sources will expand $(A) and it will request user to input game version over and over. $(A) has default content "demo" only if user doesn't make game target. So, is there any way to set $(A) to be expanded && ?

  • Migrating an Access Database into SharePoint 2007.

    - by Mike T
    To my surprise and delight I read that an administrator can import (nearly directly) an Access 2007 database into a SharePoint site. Automagically, the database is transformed into lists and views, with some table lookup thrown in for good measure. With Access 2007 installed on the client machine, even the forms and whatnot can still be reused. To me... this sounds too good to be true. Has anyone actually done this? With all this good news, where are the pitfalls and the bad stuff? Depending on the size of the database, wouldn't this somehow "gum up the works" in the SharePoint database?

    Sources:
    http://madhurahuja.blogspot.com/2007/01/adding-data-to-sharepoint-l-ists-in.html
    http://social.technet.microsoft.com/Forums/en-US/sharepointadmin/thread/17745835-a861-4984-9f44-7291fdae7d07

  • DataTable from TextFile?

    - by Craig
    I have taken over an application written by another developer, which reads data from a database and exports it. The developer used DataTables and DataAdapters. So,

        _dataAdapter = new SqlDataAdapter("Select * From C....", myConnection);

    and then

        ExtractedData = new DataTable("CreditCards");
        _dataAdapter.Fill(ExtractedData);

    ExtractedData is then passed around to do different functions. I have now been told that I need to, in addition to this, get the same format of data from some comma-separated text files. The application does the same processing - it's just getting the data from two sources. So, I am wondering if I can get the data read into a DataTable, as above, and then ADD more records from a CSV file. Is this possible?
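
    It is possible - DataTable rows can be appended from any source. A hedged sketch (the path is a placeholder; it assumes the CSV column order matches the DataTable's columns and that fields contain no quoted commas):

        using System.Data;
        using System.IO;

        static class CsvAppender
        {
            public static void AppendCsv(DataTable table, string path)
            {
                foreach (string line in File.ReadLines(path))
                {
                    if (string.IsNullOrWhiteSpace(line)) continue;
                    // Rows.Add takes the column values in order; the DataTable
                    // converts the strings to each column's type where it can.
                    table.Rows.Add(line.Split(','));
                }
            }
        }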

  • Conditional Statements - If Then vs. Select Case

    - by cloyd800
    I'm a bit new to programming, and the few sources I've read, both on the web and in the books I'm using to teach myself, define what IF THEN and SELECT CASE conditional statements are, but fail to give a comparison as to why I would use one over the other and what best practices decide this. If I'm understanding these conditional statements correctly, both are based on a set of conditions, with an outcome based on meeting those conditions; if no conditions are met, an alternative outcome can be defined. I'm having trouble understanding when I would use an IF THEN statement versus a SELECT CASE statement, and what best practices drive that decision. Any insight on this would be greatly appreciated!
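
    The usual rule of thumb: IF THEN handles arbitrary boolean conditions (ranges, compound tests), while SELECT CASE shines when one value is compared against a set of discrete alternatives. A small invented sketch of the same contrast in C#, where switch plays the SELECT CASE role:

        static class Branching
        {
            static string Grade(int score)
            {
                // if/else: the conditions are ranges, so each test is a different expression
                if (score >= 90) return "A";
                if (score >= 80) return "B";
                return "C";
            }

            static string DayName(int day)
            {
                // switch / SELECT CASE: one value tested against discrete cases
                switch (day)
                {
                    case 1: return "Monday";
                    case 2: return "Tuesday";
                    default: return "Unknown";
                }
            }
        }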

  • BizTalk Cross Reference Data Management Strategy

    - by charlie.mott
    Article Source: http://geekswithblogs.net/charliemott

    This article describes an approach to the management of cross reference data for BizTalk. Some articles about the BizTalk Cross Referencing features can be found here:

    - http://home.comcast.net/~sdwoodgate/xrefseed.zip
    - http://geekswithblogs.net/michaelstephenson/archive/2006/12/24/101995.aspx
    - http://geekswithblogs.net/charliemott/archive/2009/04/20/value-vs.id-cross-referencing-in-biztalk.aspx

    Options

    Current options for managing this data include:

    - Maintaining xml files in the format that can be used by the out-of-the-box BTSXRefImport.exe utility.
    - Use of user interfaces that have been developed to manage this data: the BizTalk Cross Referencing Tool and the XRef XML Creation Tool.

    However, there are the following issues with the above options:

    - The BizTalk Cross Referencing Tool requires a separate database to manage.
    - The XRef XML Creation tool has no means of persisting the data settings.
    - The BizTalk Cross Referencing Tool generates integers in the common id field. I prefer to use a string (e.g. acme.country.uk). This is more readable (see naming conventions below).
    - Both UI tools continue to use BTSXRefImport.exe. This utility replaces all xref data, which can be a problem in continuous integration environments that support multiple clients or BizTalk target instances. If you upload the data for one client it would destroy the data for another client. Yet in TFS, where builds run concurrently, this would break unit tests.

    Alternative Approach

    In response to these issues, I instead use simple SQL scripts to directly populate the BizTalkMgmtDb xref tables, combined with a data namespacing strategy to isolate client data.

    Naming Conventions

    All data keys use namespace prefixing. The pattern will be <companyName>.<dataType>. The naming convention is to use lower casing for all items. The data must follow this pattern to isolate it from other companies' cross-reference data. The table below shows some sample data. (Note: this data uses the 'ID' cross-reference tables; the same principles apply for the 'value' cross-referencing tables.)

        | Table.Field                  | Description                                                              | Sample Data                                                              |
        | xref_AppType.appType         | Application Types                                                        | acme.erp, acme.portal, acme.assetmanagement                              |
        | xref_AppInstance.appInstance | Application Instances (each will have a corresponding application type) | acme.dynamics.ax, acme.dynamics.crm, acme.sharepoint, acme.maximo        |
        | xref_IDXRef.idXRef           | Holds the cross reference data types                                     | acme.taxcode, acme.country                                               |
        | xref_IDXRefData.CommonID     | Holds each cross reference type value used by the canonical schemas      | acme.vatcode.exmpt, acme.vatcode.std, acme.country.usa, acme.country.uk  |
        | xref_IDXRefData.AppID        | Holds the value for each application instance and each xref type         | GBP, USD                                                                 |

    SQL Scripts

    The data to be stored in the BizTalkMgmtDb xref tables will be managed by SQL scripts stored in a database project in the Visual Studio solution.

        | File(s)                                   | Description                                                                                                         |
        | Build.cmd                                 | A sqlcmd script to deploy data by running the SQL scripts below. (This can be run as part of the MSBuild process.)  |
        | acme.purgexref.sql                        | SQL script to clear acme.* data from the xref tables. As such, this will not impact data for any other company.     |
        | acme.applicationInstances.sql             | SQL script to insert application type and application instance data.                                                |
        | acme.vatcode.sql, acme.country.sql, etc.  | There will be a separate SQL script to insert each cross-reference data type and the application-specific values.   |
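
    As a sketch of what one of those scripts might contain (an illustration built from the column names above, not the article's actual script), acme.purgexref.sql could be as simple as:

        -- Remove only acme.* rows so other companies' xref data is untouched.
        -- Child data first, then the reference/type rows.
        DELETE FROM xref_IDXRefData  WHERE commonID    LIKE 'acme.%';
        DELETE FROM xref_IDXRef      WHERE idXRef      LIKE 'acme.%';
        DELETE FROM xref_AppInstance WHERE appInstance LIKE 'acme.%';
        DELETE FROM xref_AppType     WHERE appType     LIKE 'acme.%';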

  • Excel Template Teaser

    - by Tim Dexter
    In lieu of some official documentation I'm in the process of putting together some posts on the new 10.1.3.4.1 Excel templates. No more HTML masquerading as Excel; far more flexibility than Excel Analyzer and no need to write complex XSL templates to create the same output. Multi-sheet outputs with macros and embeddable XSL commands are here. Their capabilities are pretty extensive, and I have not worked on them for a few years since I helped put them together for EBS FSG users, so I'm back on the learning curve. Let me say up front, there is no template builder; it's a completely manual process to build them, but the results can be fantastic and provide yet another 'superstar' opportunity for you.

    The templates can take hierarchical XML data and walk the structure much like an RTF template. They use named cells/ranges and a hidden sheet to provide the rendering engine the hooks to drop the data in. As a taster, here's the data and output I worked with on my first effort:

        <EMPLOYEES>
          <LIST_G_DEPT>
            <G_DEPT>
              <DEPARTMENT_ID>10</DEPARTMENT_ID>
              <DEPARTMENT_NAME>Administration</DEPARTMENT_NAME>
              <LIST_G_EMP>
                <G_EMP>
                  <EMPLOYEE_ID>200</EMPLOYEE_ID>
                  <EMP_NAME>Jennifer Whalen</EMP_NAME>
                  <EMAIL>JWHALEN</EMAIL>
                  <PHONE_NUMBER>515.123.4444</PHONE_NUMBER>
                  <HIRE_DATE>1987-09-17T00:00:00.000-06:00</HIRE_DATE>
                  <SALARY>4400</SALARY>
                </G_EMP>
              </LIST_G_EMP>
              <TOTAL_EMPS>1</TOTAL_EMPS>
              <TOTAL_SALARY>4400</TOTAL_SALARY>
              <AVG_SALARY>4400</AVG_SALARY>
              <MAX_SALARY>4400</MAX_SALARY>
              <MIN_SALARY>4400</MIN_SALARY>
            </G_DEPT>
            ...
          </LIST_G_DEPT>
        </EMPLOYEES>

    Structured XML coming from a data template - check out the data template progression post. I can then generate the following binary XLS file. There are a few cool things to notice in this output:

    - DEPARTMENT-EMPLOYEE master-detail output. Not easy to do in the Excel Analyzer.
    - Date formatting - this is using an Excel function. Remember BIP generates XML dates in the canonical format.
    - I have formatted the other data in the template using native Excel functionality.
    - Salary total - although it is in the data, I have calculated this in the template.
    - Conditional formatting - this is handled by Excel based on the incoming data.
    - Bursting department data across sheets and using the department name for the sheet name. This alone is worth the wait!

    There's more, but this is surely enough to whet your appetite. These new templates are already tucked away in EBS R12 under controlled release by the GL team and have now come to the BIEE and standalone releases in the 10.1.3.4.1+ rollup patch. For the rest of you, it's going to be a bit of a waiting game for the relevant teams to uptake the latest BIP release. Look out for more soon, with some explanation of how they work and how to put them together!

  • Should I upgrade to "Ubuntu 14.04 'Trusty Tahr'" from "Ubuntu 12.04 LTS" and what care do I need to take if I upgrade?

    - by PHPLover
    I'm basically a Web Developer (PHP Developer) by profession. I mainly work with PHP, jQuery, AJAX, Smarty, HTML and CSS, and the Bootstrap front-end web development framework. I've also installed and use IDEs/editors like Sublime Text and NetBeans, and I'm using a Git repository for my website development as a versioning tool.

    I've been using "Ubuntu 12.04 LTS" on my machine for almost the last two years. My machine configuration is as follows:

        Memory    : 3.7 GiB
        Processor : Intel® Core™ i3 CPU M 370 @ 2.40GHz × 4
        Graphics  : Unknown
        OS type   : 64-bit
        Disk      : 64-bit

    The important software present on my machine, which I use daily for my work, is as follows:

        PHP : PHP 5.3.10-1ubuntu3.13 with Suhosin-Patch (cli) (built: Jul 7 2014 18:54:55)
              Copyright (c) 1997-2012 The PHP Group
              Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies

        Apache web server :
              /usr/sbin/apachectl: 87: ulimit: error setting limit (Operation not permitted)
              Server version: Apache/2.2.22 (Ubuntu)
              Server built: Jul 22 2014 14:35:25
              Server's Module Magic Number: 20051115:30
              Server loaded: APR 1.4.6, APR-Util 1.3.12
              Compiled using: APR 1.4.6, APR-Util 1.3.12
              Architecture: 64-bit
              Server MPM: Prefork
                threaded: no
                forked: yes (variable process count)
              Server compiled with....
                -D APACHE_MPM_DIR="server/mpm/prefork"
                -D APR_HAS_SENDFILE
                -D APR_HAS_MMAP
                -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
                -D APR_USE_SYSVSEM_SERIALIZE
                -D APR_USE_PTHREAD_SERIALIZE
                -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
                -D APR_HAS_OTHER_CHILD
                -D AP_HAVE_RELIABLE_PIPED_LOGS
                -D DYNAMIC_MODULE_LIMIT=128
                -D HTTPD_ROOT="/etc/apache2"
                -D SUEXEC_BIN="/usr/lib/apache2/suexec"
                -D DEFAULT_PIDLOG="/var/run/apache2.pid"
                -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
                -D DEFAULT_LOCKFILE="/var/run/apache2/accept.lock"
                -D DEFAULT_ERRORLOG="logs/error_log"
                -D AP_TYPES_CONFIG_FILE="mime.types"
                -D SERVER_CONFIG_FILE="apache2.conf"

        MySQL : 5.5.38-0ubuntu0.12.04.1
        Smarty : 2.6.18
        NetBeans : NetBeans IDE 8.0 (Build 201403101706)
        Sublime Text 2 : Version 2.0.2, Build 2221

    Yesterday a pop-up message suddenly appeared on my screen asking me to upgrade to "Ubuntu 14.04 'Trusty Tahr'". I'd also be very happy to upgrade my system to "Ubuntu 14.04 'Trusty Tahr'". Following are the issues about which I'm a little bit scared and on which I need you talented people's expert advice/help/suggestions:

    1. Will upgrading to "Ubuntu 14.04 'Trusty Tahr'" affect the software I mentioned above? I mean, will I need to re-install/uninstall and install this software too?
    2. Do I really need to upgrade, and is it really worth upgrading to "Ubuntu 14.04 'Trusty Tahr'" from "Ubuntu 12.04 LTS" now?
    3. If I upgrade to "Ubuntu 14.04 'Trusty Tahr'", what advantage will I get from a web developer's point of view?
    4. Will the upgrade be hassle-free, and will I be able to continue my ongoing work without any difficulties?
    5. Is "Ubuntu 14.04 'Trusty Tahr'" an LTS version, and if yes, until when is it going to provide support?

    These are the five crucial queries I have. If you want any further explanation from me, please feel free to ask. Thanks for spending some of your valuable time in reading and understanding my issue. Any kind of help/comment/suggestion/answer would be highly appreciated. Though if someone gives a canonical, precise and up-to-the-mark answer, it will be of great help to me as well as other web developers using Ubuntu around the world. Once again, thank you so much, you great people around the globe. Waiting for your precious replies.

  • What is the right way to process inconsistent data files?

    - by Tahabi
    I'm working at a company that uses Excel files to store product data, specifically, test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally - in the sense that, changes often get reverted and then re-added in the space of a few dozen to few hundred files. My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using MongoDB to deal with the inconsistency in the data, and Python. My question is, what is the "right" or canonical way to deal with the huge variance in my source files? I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing similar data structures for each version/template - which means potentially writing hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as little as moving a single line of data one row down or splitting what used to be one data field into two data fields. A slightly more elegant solution I have in mind would be writing schemas for all the variants I can find for pre-defined groups in the source files, and then writing a function to match a particular series of files with a series of variants that matches that set of files. This is because, more often that not, most of the file will remain consistent over a long period, only marred by one or two errant sections, but inside the period, which section is inconsistent, is inconsistent. For example, say a file has four sections with three data fields, which is represented by four Python dictionaries with three keys each. For files 7000-7250, sections 1-3 will be consistent, but section 4 will be shifted one row down. For files 7251-7500, 1-3 are consistent, section 4 is one row down, but a section five appears. For files 7501-7635, sections 1 and 3 will be consistent, but section 2 will have five data fields instead of three, section five disappears, and section 4 is still shifted down one row. For files 7636-7800, section 1 is consistent, section 4 gets shifted back up, section 2 returns to three cells, but section 3 is removed entirely. Files 7800-8000 have everything in order. The proposed function would take the file number and match it to a dictionary representing the data mappings for different variants of each section. For example, a section_four_variants dictionary might have two members, one for the shifted-down version, and one for the normal version, a section_two_variants might have three and five field members, etc. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database. Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for either to see what other solutions might be, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it, if at all. Any help is most appreciated. Thank you.

    Read the article
