Search Results

Search found 2529 results on 102 pages for 'term'.

  • Erlang Type Specifications

    - by Chang
    I recently read the source code of CouchDB and found this type definition, which I don't understand: -type branch() :: {Key::term(), Value::term(), Tree::term()}. -type path() :: {Start::pos_integer(), branch()}. -type tree() :: [branch()]. I did read the Erlang documentation, but what is the meaning of Start, Key, Value and Tree? From what I understand, they are Erlang variables! I didn't find any information about this in the Erlang docs.
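
    A note that is not part of the original question: in an Erlang type spec, the name before "::" (Key, Value, Tree, Start) is only a documentation label for that element, and term() means "any Erlang term". A rough Python analogy of the same idea, offered purely as a sketch (the names Branch, Path and Tree are illustrative, not from CouchDB):

        from typing import Any, List, NamedTuple, Tuple

        # Rough analogy of the Erlang specs above, for illustration only:
        #   -type branch() :: {Key::term(), Value::term(), Tree::term()}.
        #   -type path()   :: {Start::pos_integer(), branch()}.
        #   -type tree()   :: [branch()].
        # "Key::term()" names a tuple element (Key) and states its type
        # (term(), i.e. any value at all).
        class Branch(NamedTuple):
            key: Any    # Key :: term()
            value: Any  # Value :: term()
            tree: Any   # Tree :: term()

        Path = Tuple[int, Branch]  # Start :: pos_integer() is a positive integer
        Tree = List[Branch]        # tree() is a list of branches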

    Read the article

  • jQuery autocomplete with a dynamic input box?

    - by Kaps Hasija
    I have done a lot of research on jQuery autocomplete and found some results, but not quite what I needed. Here is the code I am currently using: <input type="text" value="|a" name="completeMe" id="Subject" /> // This input box will be created dynamically in a for loop // My jQuery code $(function () { $("#Subject").autocomplete({ source: '/Cataloging/Bib/GetSubject', minLength: 1, select: function (event, ui) { // Do something with "ui.item.Id" or "ui.item.Name" or any of the other properties you selected to return from the action } }); }); // My action method public ActionResult GetSubject(string term) { term = term.Substring(2, term.Length-2); return Json(db.BibContents.Where(city => city.Value.StartsWith(term)).Select(city => city.Value), JsonRequestBehavior.AllowGet); } My code works with a static input, but when the input is created dynamically I need to use a live event, and I don't know how to use a live event with this code. NOTE: I am using the static value "|a" in the input; in the action I strip those first two characters so the database search works properly. Thanks

    Read the article

  • select2: "text is undefined" when getting json using ajax

    - by user3046715
    I'm having an issue when getting JSON results back into Select2. My JSON does not return a result that has a "text" field, so I need to format the result so that Select2 accepts "Name". This code works if the field in the JSON is named "text", but in this case I cannot change the format of the JSON result (that code is outside my control). $("#e1").select2({ formatNoMatches: function(term) {return term +" does not match any items." }, ajax: { // instead of writing the function to execute the request we use Select2's convenient helper url: "localhost:1111/Items.json", dataType: 'jsonp', cache: true, quietMillis: 200, data: function (term, page) { return { q: term, // search term p: page, s: 15 }; }, results: function (data, page) { // parse the results into the format expected by Select2. var numPages = Math.ceil(data.total / 15); return {results: data.Data, numPages: numPages}; } } }); I have looked through the documentation and found some statements you can put into the results, such as text: 'Name', but I am still getting "text is undefined". Thanks for any help.

    Read the article

  • Very fast document similarity

    - by peyton
    Hello, I am trying to determine document similarity between a single document and each of a large number of documents (n ~= 1 million) as quickly as possible. More specifically, the documents I'm comparing are e-mails; they are grouped (i.e., there are folders or tags) and I'd like to determine which group is most appropriate for a new e-mail. Fast performance is critical. My a priori assumption is that the cosine similarity between term vectors is appropriate for this application; please comment on whether this is a good measure to use or not! I have already taken into account the following possibilities for speeding up performance: Pre-normalize all the term vectors Calculate a term vector for each group (n ~= 10,000) rather than each e-mail (n ~= 1,000,000); this would probably be acceptable for my application, but if you can think of a reason not to do it, let me know! I have a few questions: If a new e-mail has a new term never before seen in any of the previous e-mails, does that mean I need to re-compute all of my term vectors? This seems expensive. Is there some clever way to only consider vectors which are likely to be close to the query document? Is there some way to be more frugal about the amount of memory I'm using for all these vectors? Thanks!
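
    As an illustration of the approach the question describes (not from the original post), here is a minimal Python sketch: pre-normalized term vectors, one vector per group rather than per e-mail, compared by cosine similarity. The groups data and the tokenization are placeholder assumptions.

        import math
        from collections import Counter

        def term_vector(text):
            # Naive bag-of-words term frequencies; a real system would tokenize and stem properly.
            return Counter(text.lower().split())

        def normalize(vec):
            norm = math.sqrt(sum(w * w for w in vec.values()))
            return {t: w / norm for t, w in vec.items()} if norm else {}

        def cosine(a, b):
            # With both vectors pre-normalized, cosine similarity is just a dot product.
            # A brand-new term simply contributes 0 against the stored vectors, so the
            # stored vectors do not need recomputing (unless you weight by IDF, which
            # does depend on the whole collection).
            if len(b) < len(a):
                a, b = b, a
            return sum(w * b.get(t, 0.0) for t, w in a.items())

        # Placeholder data: one blob of text per group (folder/tag); ~10,000 groups in practice.
        groups = {"receipts": "invoice payment order total", "travel": "flight hotel booking itinerary"}
        group_vectors = {name: normalize(term_vector(text)) for name, text in groups.items()}

        def best_group(email_text):
            query = normalize(term_vector(email_text))
            return max(group_vectors, key=lambda g: cosine(query, group_vectors[g]))

        print(best_group("please find the attached invoice and payment details"))  # -> receipts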

    Read the article

  • Help with search query in Codeigniter

    - by Indigo
    Hi all, I'm still fairly new to CodeIgniter and am wondering if someone can help me with this, please? I'm just trying to do a very basic search query in CodeIgniter, but for some reason the results are ignoring my "status = published" condition... The code is: $this->db->like('title', $term); $this->db->or_like('tags', $term); $data['results'] = $this->db->get_where('resources', array('status' => 'published')); And this doesn't work either: $this->db->like('title', $term); $this->db->or_like('tags', $term); $this->db->where('status', 'published'); $data['results'] = $this->db->get('resources'); I'm sure it's something basic? Help please?

    Read the article

  • terminal: where am I?

    - by sid_com
    Is there a variable or a function that can tell me the current position of the cursor? #!/usr/bin/env perl use warnings; use 5.012; use Term::ReadKey; use Term::Cap; use POSIX; my( $col, $row ) = GetTerminalSize(); my $termios = new POSIX::Termios; $termios->getattr; my $ospeed = $termios->getospeed; my $terminal = Tgetent Term::Cap { TERM => undef, OSPEED => $ospeed }; # some movement ... # at which position (x/y) is the cursor now?
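
    The question is about Perl's Term::Cap, but as a sketch of the underlying technique: most xterm-compatible terminals report the cursor position when sent the ANSI "device status report" query (ESC [ 6 n), and the reply can be read back from the tty. A rough Python illustration, offered only as an assumption about the terminal's capabilities, not as the original poster's code:

        import sys
        import termios
        import tty

        def cursor_position():
            # Send the ANSI DSR query (ESC[6n); the terminal answers on stdin
            # with ESC[<row>;<col>R. Requires stdin/stdout to be a real terminal.
            fd = sys.stdin.fileno()
            old = termios.tcgetattr(fd)
            try:
                tty.setcbreak(fd)              # read the reply without line buffering
                sys.stdout.write("\x1b[6n")
                sys.stdout.flush()
                reply = ""
                while not reply.endswith("R"):
                    reply += sys.stdin.read(1)
            finally:
                termios.tcsetattr(fd, termios.TCSADRAIN, old)
            row, col = reply[2:-1].split(";")  # strip the leading ESC[ and trailing R
            return int(row), int(col)

        if __name__ == "__main__":
            print(cursor_position())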

    Read the article

  • Why are invariants important in Computer Science

    - by Antony Thomas
    I understand 'invariant' in its literal sense. I also recognize them when I type code. But I don't think I understand the importance of this term in the context of computer science. Whenever I read conversations or white papers about language design from famous programmers and computer scientists, the term 'invariant' keeps popping up as jargon; and that is the part I don't understand. What is so special about it?
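
    As an illustration only (not part of the original question): an invariant is a statement that is deliberately kept true at a particular point in the code, and the correctness argument leans on it. A small Python sketch with a loop invariant:

        def binary_search(items, target):
            """Return an index of target in the sorted list items, or -1 if absent."""
            lo, hi = 0, len(items)
            # Loop invariant: if target is present at all, its index lies in [lo, hi).
            # Every branch below is written so that this statement stays true.
            while lo < hi:
                mid = (lo + hi) // 2
                if items[mid] == target:
                    return mid
                elif items[mid] < target:
                    lo = mid + 1   # target, if present, must be to the right of mid
                else:
                    hi = mid       # target, if present, must be to the left of mid
                # The invariant still holds here, which is what makes the loop correct.
            return -1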

    Read the article

  • The Data Scientist

    - by BuckWoody
    A new term - well, perhaps not that new - has come up and I’m actually very excited about it. The term is Data Scientist, and since it’s new, it’s fairly undefined. I’ll explain what I think it means, and why I’m excited about it. In general, I’ve found the term deals at its most basic with analyzing data. Of course, we all do that, and the term itself in that definition is redundant. There is no science that I know of that does not work with analyzing lots of data. But the term seems to refer to more than the common practices of looking at data visually, putting it in a spreadsheet or report, or even using simple coding to examine data sets. The term Data Scientist (as far as I can make out this early in its use) is someone who has a strong understanding of data sources, relevance (statistical and otherwise) and processing methods as well as front-end displays of large sets of complicated data. Some - but not all - Business Intelligence professionals have these skills. In other cases, senior developers, database architects or others fill these needs, but in my experience, many lack the strong mathematical skills needed to make these choices properly. I’ve divided the knowledge base for someone that would wear this title into three large segments. It remains to be seen if a given Data Scientist would be responsible for knowing all these areas or would specialize. There are pretty high requirements on the math side, specifically in graduate-degree level statistics, but in my experience a company will only have a few of these folks, so they are expected to know quite a bit in each of these areas.
    Persistence: The first area is finding, cleaning and storing the data. In some cases, no cleaning is done prior to storage - it’s just identified and the cleansing is done in a later step. This area is where the professional would be able to tell if a particular data set should be stored in a Relational Database Management System (RDBMS), across a set of key/value pair storage (NoSQL) or in a file system like HDFS (part of the Hadoop landscape) or other methods. Or do you examine the stream of data without storing it in another system at all? This is an important decision - it’s a foundation choice that deals not only with a lot of expense of purchasing systems or even using Cloud Computing (PaaS, SaaS or IaaS) to source it, but also the skillsets and other resources needed to care for and feed the system for a long time. The Data Scientist sets something into motion that will probably outlast his or her career at a company or organization. Often these choices are made by senior developers, database administrators or architects in a company. But sometimes each of these has a certain bias towards making a decision one way or another. The Data Scientist would examine these choices in light of the data itself, starting perhaps even before the business requirements are created. The business may not even be aware of all the strategic and tactical data sources that they have access to.
    Processing: Once the decision is made to store the data, the next set of decisions is based around how to process the data. An RDBMS scales well to a certain level, and provides a high degree of ACID compliance as well as offering a well-known set-based language to work with this data. In other cases, scale should be spread among multiple nodes (as in the case of Hadoop landscapes or NoSQL offerings) or even across a Cloud provider like Windows Azure Table Storage. In fact, in many cases - most of the ones I’m dealing with lately - the data should be split among multiple types of processing environments. This is a newer idea. Many data professionals simply pick a methodology (RDBMS with Star Schemas, NoSQL, etc.) and put all data there, regardless of its shape, processing needs and so on. A Data Scientist is familiar not only with the various processing methods, but also with how they work, so that they can choose the right one for a given need. This is a huge time commitment, hence the need for a dedicated title like this one.
    Presentation: This is where the need for a Data Scientist is most often already being filled, sometimes with more or less success. The latest Business Intelligence systems are quite good at allowing you to create amazing graphics - but it’s the data behind the graphics that is the most important component of truly effective displays. This is where the mathematics requirement of the Data Scientist title is the most unforgiving. In fact, someone without a good foundation in statistics is not a good candidate for creating reports. Even a basic level of statistics can be dangerous. Anyone who works in analyzing data will tell you that there are multiple errors possible when data just seems right - and basic statistics bears out that you’re on the right track - that are only solvable when you understand why the statistical formula works the way it does. And there are lots of ways of presenting data. Sometimes all you need is a “yes” or “no” answer that can only come after heavy analysis work. In that case, a simple e-mail might be all the reporting you need. In others, complex relationships and multiple components require a deep understanding of the various graphical methods of presenting data. Knowing which kind of chart, color, graphic or shape conveys a particular datum best is essential knowledge for the Data Scientist.
    Why I’m excited: I love this area of study. I like math, stats, and computing technologies, but it goes beyond that. I love what data can do - how it can help an organization. I’ve been fortunate enough in my professional career these past two decades to work with lots of folks who perform this role at companies from aerospace to medical firms, from manufacturing to retail. Interestingly, the size of the company really isn’t germane here. I worked with one very small bio-tech (cryogenics) company that worked deeply with analysis of complex interrelated data. So watch this space. No, I’m not leaving Azure or distributed computing or Microsoft. In fact, I think I’m perfectly situated to investigate this role further. We have a huge set of tools, from RDBMS to Hadoop, to allow me to explore. And I’m happy to share what I learn along the way.

    Read the article

  • Refactoring Part 1: Intuitive Investments

    - by Wes McClure
    Fear, it’s what turns maintaining applications into a nightmare. Technology moves on, teams move on, someone is left to operate the application, what was green is now perceived brown. Eventually the business will evolve and changes will need to be made. The approach to those changes often dictates the long term viability of the application. Fear of change, lack of passion and a lack of interest in understanding the domain often lead to a paranoia about doing anything that doesn’t involve duct tape and baling twine. Don’t get me wrong, those have a place in the short term viability of a project but they don’t have a place in the long term. Add to it “us versus them” in regards to the original team and those that maintain it, internal politics and other factors and you have a recipe for disaster. This results in code that quickly becomes unmanageable. Even the most clever of designs will eventually become suboptimal, and debt will mount, exponentially making changes difficult. This is where refactoring comes in, and it’s something I’m very passionate about. Refactoring is about improving the process whereby we make change; it’s an exponential investment in the process of change. Without it we will incur exponential complexity that halts productivity. Investments, especially in the long term, require intuition and reflection. How can we tackle new development effectively via evolving the original design and paying off debt that has been incurred? The longer we wait to ask and answer this question, the more it will cost us. Small requests don’t warrant big changes, but realizing when changes now will pay off in the long term, and especially in the short term, is valuable. I have done my fair share of maintaining applications and continuously refactoring as needed, but recently I’ve begun work on a project that hasn’t had much debt, if any, paid down in years. This is the first in a series of blog posts to try to capture the process, which is largely driven by intuition from smaller refactorings on other projects.
    Signs that refactoring could help:
    Testability: How can decreasing test time not pay dividends? One of the first things I found was that a very important piece often takes 30+ minutes to test. I can only imagine how much time this has cost historically, but more importantly the time it might cost in the coming weeks: I estimate at least 10-20 hours per person! This is simply unacceptable for almost any situation. As it turns out, about 6 hours of working with this part of the application and I was able to cut the time down to under 30 seconds! In less than the lost time of one week, I was able to fix the problem for all future weeks! If we can’t test fast then we can’t change fast, nor with confidence. Code is used by end users and it’s also used by developers; consider your own needs in terms of the code base. Adding logic to enable/disable features during testing can help decouple parts of an application and lead to massive improvements. What exactly is so wrong about test code in real code? Often, these become features for operators and sometimes end users. If you cannot run an integration test within a test runner in your IDE, it’s time to refactor.
    Readability: Are variables named meaningfully via a ubiquitous language? Is the code segmented functionally or behaviorally so as to minimize the complexity of any one area? Are aspects properly segmented to avoid confusion (security, logging, transactions, translations, dependency management, etc.)? Is the code declarative (what) or imperative (how)? The what matters, not the how. LINQ is a great abstraction of the what, not how, of collection manipulation. The Reactive framework is a great example of the what, not how, of managing streams of data. Are constants abstracted and named, or are they just inline? Do people constantly bitch about the code/design? If the code is hard to understand, it will be hard to change with confidence. It’s a large undertaking if the original designers didn’t pay much attention to readability and as such will never be done to “completion.” Make sure not to go overboard; instead, use this as you change an application, not in lieu of changes (like with testability).
    Complexity: Simplicity will never be achieved; it’s highly subjective. That said, a lot of code can be significantly simplified, so tidy it up as you go. Refactoring will often converge upon a simplification step after enough time; keep an eye out for this.
    Understandability: In the process of changing code, one often gains a better understanding of it. Refactoring code is a good way to learn how it works. However, it’s usually best in combination with other reasons, in effect killing two birds with one stone. Often this is done when readability is poor, in which case understandability is usually poor as well. In the large undertaking we are making with this legacy application, we will be replacing it. Therefore, understanding all of its features is important and this refactoring technique will come in very handy.
    Unused code: How can deleting things not help? This is a freebie in refactoring; it’s very easy to detect with modern tools, especially in statically typed languages. We have VCS for a reason: if in doubt, delete it out (ok that was cheesy)! If you don’t know where to start when refactoring, this is an excellent starting point!
    Duplication: Do not pray and sacrifice to the anti-duplication gods; there are excellent examples where consolidated code is a horrible idea, usually with divergent domains. That said, mediocre developers live by copy/paste. Other times features converge and aren’t combined. Tools for finding similar code are great in the case of copy/paste problems. Knowledge of the domain helps identify convergent concepts that often lead to convergent solutions and will give intuition for where to look for conceptual repetition.
    80/20 and the Boy Scouts: It’s often said that 80% of the time 20% of the application is used most. These tend to be the parts that are changed. There are also parts of the code where 80% of the time is spent changing 20% (probably for all the refactoring smells above). I focus on these areas any time I make a change and follow the philosophy of the Boy Scout in cleaning up more than I messed up. If I spend 2 hours changing an application, in the 20%, I’ll always spend at least 15 minutes cleaning it or nearby areas. This gives a huge productivity edge over developers that don’t. Ironically, after a short period of time the 20% shrinks enough that we don’t have to spend 80% of our time there and can move on to other areas.
    Refactoring is highly subjective; never attempt to refactor to completion! Learn to be comfortable with leaving one part of the application in a better state than others. It’s an evolution, not a revolution.
These are some simple areas to look into when making changes and can help get one started in the process.  I’ve often found that refactoring is a convergent process towards simplicity that sometimes spans a few hours but often can lead to massive simplifications over the timespan of weeks and months of regular development.

    Read the article

  • Stop Saying "Multi-Channel!"

    - by David Dorf
    I keep hearing the term "multi-channel" in our industry, but it's time to move on. It kinda reminds me of the term "ECR" or electronic cash register. Long ago ECR was a leading-edge term, but nowadays it's rarely used because it's table stakes. After all, what cash register today isn't electronic? The same logic applies to multi-channel, at least when we're talking about tier-1 and tier-2 retailers. If you're still talking about multi-channel retailing, you're in big trouble. Some have switched over to the term "cross-channel," and that's a step in the right direction but still falls short. It's kinda like saying, "I upgraded my ECR to accept debit cards!" Yawn. Who hasn't? Today's retailers need to focus on omni-channel, which I first heard from my friends over at RSR but was originally coined at IDC. First retailers added e-commerce to their store and catalog channels, yielding multi-channel retailing. Consumers could use the channel that worked best for them. Then some consumers wanted to combine channels with features like buy-on-the-Web, pickup-in-the-store. Thus began the cross-channel initiatives to break down the silos and enable the channels to communicate with each other. But the multi-channel architecture is full of duplication that thwarts efforts to provide a consistent experience. Each has its own cart, its own pricing, and often its own CRM. This was an outgrowth of trying to bring the independent channels to market quickly. Rather than reusing and rebuilding existing components to meet the new demands, silos were created that continue to exist today. Today's consumers want omni-channel retailing. They want to interact with brands in a consistent manner that is channel transparent, yet optimized for that particular interaction. The diagram below, from the soon-to-be-released NRF Mobile Blueprint v2, shows this progression. For retailers to provide an omni-channel experience, there needs to be one logical representation of products, prices, promotions, and customers across all channels. The only thing that varies is the presentation of the content based on the delivery mechanism (e.g. shelf labels, mobile phone, web site, print, etc.) and often these mechanisms can be combined in various ways. I'm looking forward to the day in which I can use my phone to scan QR-codes in a catalog to create a shopping cart of items. Then do some further research on the retailer's Web site and be told about related items that might interest me. Be able to easily solicit opinions and reviews from social sites, and finally enter the store to pick up my items, knowing that any applicable coupons have been applied. In this scenario, I, the consumer, am dealing with a single brand that is aware of me and my needs throughout the entire transaction. Nirvana.

    Read the article

  • Diminishing Returns on Additional Developers

    - by smp7d
    Is there a term to describe the point at which adding more developers to a software project will provide diminishing returns? I realize that, at a high level, it is more complicated than just a number of developers at which the project will be at productive capacity (e.g., the state of the project, the quality of the added developer), but I am trying to come up with a way to relate this to non-technical management through repetition. I'm basically looking for a term which invokes a strong image like "terminal velocity", except for Brooks's Law.
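
    One way to make the point concrete for non-technical management (an illustration, not part of the original question) is the back-of-the-envelope figure usually cited alongside Brooks's Law: the number of pairwise communication paths grows quadratically with team size,

        \binom{n}{2} = \frac{n(n-1)}{2}, \qquad n=5 \Rightarrow 10, \quad n=10 \Rightarrow 45, \quad n=20 \Rightarrow 190,

    so each added developer brings proportionally more coordination overhead than the one before.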

    Read the article

  • Big-name School for Undergrad Students

    - by itaiferber
    As a soon-to-be graduating high school senior in the U.S., I'm going to be facing a tough decision in a few months: which college should I go to? Will it be worth it to go to Cornell or Stanford or Carnegie Mellon (assuming I get in, of course) to get a big-name computer science degree, internships, and connections with professors, while taking on massive debt; or am I better off going to SUNY Binghamton (probably the best state school in New York) and still getting a pretty decent education while saving myself from over a hundred thousand dollars' worth of debt? Yes, I know questions like this have been asked before (namely here and here), but please bear with me because I haven't found an answer that fits my particular situation. I've read the two linked questions above in depth, but they haven't answered what I want to know: Yes, I understand that going to a big-name college can potentially get me connected with some wonderful professors and leaders in the field, but on average, how does that translate financially? I mean, will good connections pay off so well that I'd be easily getting rid of over a hundred thousand dollars of debt? And how does the fact that I can get a fifth-year master's degree at Carnegie Mellon play into the equation? Will the higher degree right off the bat help me get a better-paying job just out of college, or will the extra year only put me further into debt? Not having to go to graduate school to get a comparable degree will, of course, be a great financial relief, but will getting it so early give it any greater worth? And if I go to SUNY Binghamton, which is far lesser-known than the others I've considered (although if there are any alumni out there who want to share their experience, I would greatly appreciate it), would I be closing off doors that would potentially offset my short-term economic gain with long-term benefits? Essentially, is the short-term benefit outweighed by a potential long-term loss? The answers to these questions all tie in to my final college decision (again, assuming I make it into these schools), so I hope that asking the skilled and knowledgeable people of the field will help me make the right choice (if there is such a thing). Also, please note: I'm in a rather peculiar situation where I can't pay for college without taking out a bunch of loans, but will be getting little to no financial aid (likely federal or otherwise). I don't want to elaborate on this too much (so take it at face value), but this is mainly the reason I'm asking the question. Thanks a lot! It means a lot to me.

    Read the article

  • What is A Keyword Enriched Article? - And is it Important For Your Business?

    Surely many of you will have some idea of what the term "keyword-enriched article" means, but certainly many of you will be unfamiliar with it. So I will try to share my knowledge with you in simple words. Put simply, articles which contain keywords or key phrases that match the words typed into a search engine are keyword-enriched articles.

    Read the article

  • Cloud Computing Demystified

    There is a new term blazing through the world of IT: cloud computing. While the term is gaining more and more momentum, many people are still unsure as to what the heck it is.

    Read the article

  • Do You Know About Web 2.0?

    You might have heard the term "Web 2.0" thrown around by your peers or colleagues. It is a term used to describe contemporary web applications. Web 2.0 sites are very interactive and user-centric.

    Read the article

  • How a Good SEO Company Can Benefit Your Online Marketing Campaign

    The key benefit of integrating SEO into your marketing plan is to increase natural targeted traffic to your website. This is done by optimising your site for the keywords you want to be listed for: for example, if you want to be number one on Google for the key phrase "widgets", you would optimise your site or page around this term. As SEO offers a great return on investment, it is often considered to be more beneficial in the long term than any other form of marketing.

    Read the article

  • Keywords - A Primer For Website Owners

    There are certain terms you hear over and over when you start looking at marketing your CPA website. First and foremost to Internet marketers is the term "keywords." The term is quite popular, so you might have heard it before and still have questions about what keywords do.

    Read the article

  • Colorizing your terminal and shell environment?

    - by Stefan Lasiewski
    I spend most of my time working in Unix environments and using terminal emulators. I try to use color on the command line, because color makes the output more useful and intuitive. What are some good ways to add color to my terminal environment? What tricks do you do? What pitfalls have you encountered? Unfortunately, support for color is wildly variable depending on terminal type, OS, TERM setting, utility, buggy implementations, etc. Here's what I do currently, after a lot of experimentation: I tend to set 'TERM=xterm-color', which is supported on most hosts (but not all). I work on a number of different hosts, different OS versions, etc. I'm trying to keep things simple and generic, if possible. Many OSs set things like 'dircolors' by default, and I don't want to modify this everywhere. So I try to stick with the defaults and instead tweak my Terminal's color configuration. I use color for some unix commands (ls, grep, less, vim) and the Bash prompt. These commands seem to use the standard "ANSI escape sequences". I've managed to find some settings which are widely supported, and which don't print gobbledygook characters in older environments (even FreeBSD4!) (for the most part). From my .bash_profile: ### Color support # The Terminal application typically does 'export TERM=xterm-color' # Some terminal types will print Black, White & underlined with these settings. OS=`uname -s` case "$OS" in "SunOS" ) # Solaris9 ls doesn't allow color, so use special characters instead. LS_OPTS='-F' ;; "Linux" ) # GNU tools support colors! See dircolors to customize colors export LS_OPTS='--color=auto' # Color support using 'less -R' alias less='less --RAW-CONTROL-CHARS' alias ls='ls ${LS_OPTS}' export GREP_OPTIONS="--color=auto" ;; "Darwin"|"FreeBSD") # Most FreeBSD & Apple Darwin supports colors # LS_OPTS="-G" export CLICOLOR=true alias less='less --RAW-CONTROL-CHARS' export GREP_OPTIONS="--color=auto" ;; esac

    Read the article

  • No, iCloud Isn’t Backing Them All Up: How to Manage Photos on Your iPhone or iPad

    - by Chris Hoffman
    Are the photos you take with your iPhone or iPad backed up in case you lose your device? If you’re just relying on iCloud to manage your important memories, your photos may not be backed up at all. Apple’s iCloud has a photo-syncing feature in the form of “Photo Stream,” but Photo Stream doesn’t actually perform any long-term backups of your photos.
    iCloud’s Photo Backup Limitations: Assuming you’ve set up iCloud on your iPhone or iPad, your device is using a feature called “Photo Stream” to automatically upload the photos you take to your iCloud storage and sync them across your devices. Unfortunately, there are some big limitations here. 1000 Photos: Photo Stream only backs up the latest 1000 photos. Do you have 1500 photos in your Camera Roll folder on your phone? If so, only the latest 1000 photos are stored in your iCloud account online. If you don’t have those photos backed up elsewhere, you’ll lose them when you lose your phone. If you have 1000 photos and take one more, the oldest photo will be removed from your iCloud Photo Stream. 30 Days: Apple also states that photos in your Photo Stream will be automatically deleted after 30 days “to give your devices plenty of time to connect and download them.” Some people report photos aren’t deleted after 30 days, but it’s clear you shouldn’t rely on iCloud for more than 30 days of storage. iCloud Storage Limits: Apple only gives you 5 GB of iCloud storage space for free, and this is shared between backups, documents, and all other iCloud data. This 5 GB can fill up pretty quickly. If your iCloud storage is full and you haven’t purchased any more storage from Apple, your photos aren’t being backed up. Videos Aren’t Included: Photo Stream doesn’t include videos, so any videos you take aren’t automatically backed up. It’s clear that iCloud’s Photo Stream isn’t designed as a long-term way to store your photos, just a convenient way to access recent photos on all your devices before you back them up for real.
    iCloud’s Photo Stream is Designed for Desktop Backups: If you have a Mac, you can launch iPhoto and enable the Automatic Import option under Photo Stream in its preferences pane. Assuming your Mac is on and connected to the Internet, iPhoto will automatically download photos from your photo stream and make local backups of them on your hard drive. You’ll then have to back up your photos manually so you don’t lose them if your Mac’s hard drive ever fails. If you have a Windows PC, you can install the iCloud Control Panel, which will create a Photo Stream folder on your PC. Your photos will be automatically downloaded to this folder and stored in it. You’ll want to back up your photos so you don’t lose them if your PC’s hard drive ever fails. Photo Stream is clearly designed to be used along with a desktop application. Photo Stream temporarily backs up your photos to iCloud so iPhoto or iCloud Control Panel can download them to your Mac or PC and make a local backup before they’re deleted. You could also use iTunes to sync your photos from your device to your PC or Mac, but we don’t really recommend it — you should never have to use iTunes.
    How to Actually Back Up All Your Photos Online: So Photo Stream is actually pretty inconvenient — or, at least, it’s just a way to temporarily sync photos between your devices without storing them long-term. But what if you actually want to automatically back up your photos online without them being deleted automatically?
The solution here is a third-party app that does this for you, offering the automatic photo uploads with long-term storage. There are several good services with apps in the App Store: Dropbox: Dropbox’s Camera Upload feature allows you to automatically upload the photos — and videos — you take to your Dropbox account. They’ll be easily accessible anywhere there’s a Dropbox app and you can get much more free Dropbox storage than you can iCloud storage. Dropbox will never automatically delete your old photos. Google+: Google+ offers photo and video backups with its Auto Upload feature, too. Photos will be stored in your Google+ Photos — formerly Picasa Web Albums — and will be marked as private by default so no one else can view them. Full-size photos will count against your free 15 GB of Google account storage space, but you can also choose to upload an unlimited amount of photos at a smaller resolution. Flickr: The Flickr app is no longer a mess. Flickr offers an Auto Upload feature for uploading full-size photos you take and free Flickr accounts offer a massive 1 TB of storage for you to store your photos. The massive amount of free storage alone makes Flickr worth a look. Use any of these services and you’ll get an online, automatic photo backup solution you can rely on. You’ll get a good chunk of free space, your photos will never be automatically deleted, and you can easily access them from any device. You won’t have to worry about storing local copies of your photos and backing them up manually. Apple should fix this mess and offer a better solution for long-term photo backup, especially considering the limitations aren’t immediately obvious to users. Until they do, third-party apps are ready to step in and take their place. You can also automatically back up your photos to the web on Android with Google+’s Auto Upload or Dropbox’s Camera Upload. Image Credit: Simon Yeo on Flickr     

    Read the article

  • Terminal proxy or screen without terminal emulation

    - by ZyX
    How can I make terminal applications immune to the terminal emulator closing, but still able to use all virtual terminal features? I imagine this must be something like screen, but without the VT100 terminal emulation: something which just applies whatever the application does to the "terminal proxy"'s terminal (like writing to stdout/stderr or using stty to set terminal options) to the terminal this proxy runs in. // I know about screen and altscreen on, but it produces either this (screen with TERM=screen): or this (screen with TERM=rxvt-unicode): while I want this (rxvt-unicode without screen): I have figured out that everything looks fine if I compile rxvt-unicode with USE=-xterm-color (in fact vim looks like the second picture even without screen if I add this USE flag) and set TERM=screen-256color, but I do not like this workaround because it actually changes colors and I can't be sure that it will always change them only this way:

    Read the article

  • Home and End keys in Emacs don't work when run from Tmux

    - by Jan Stolarek
    When I run Emacs from Tmux, the Home and End keys do not work (the Home key runs the search command as if C-s were pressed). The problem started when I added this to my ~/.bashrc file: TERM="xterm" export TERM I've read somewhere that the TERM variable should not be set manually, but this was the only way I was able to solve my problems with colors. Without this setting I got different colors in Emacs when running directly from the terminal than when running from Tmux. This option caused some of the keys not to work in Emacs when it was run from Tmux, so I added this line to my ~/.tmux.conf: set-window-option -g xterm-keys on This solved the problem for all keys except Home and End. Any ideas how to make these keys work again?

    Read the article

  • How do I ask screen to behave like a standard bash shell?

    - by thornomad
    I just learned about the screen command on Linux - it is genius. I love it. However, the actual terminal/prompt in screen looks and behaves differently from my standard bash prompt. That is, the colors aren't the same, tab completion doesn't seem to work, etc. Is there a way I can tell screen to behave just like a normal (at least, normal as in what I am used to) bash prompt? Additional information: I am connecting via ssh from a Mac (Terminal) to a headless Linux box (Ubuntu). After logging in, I have TERM=xterm-color, and when I run screen I have TERM=screen. I am going to try the suggestions below to see if I can change the $TERM value first.

    Read the article

  • Merge Two Folders To Act As One (Software Raid 0?)

    - by Dboy1612
    Using Windows 7, I'm trying to set up what I've come to call a "Software RAID of Folders"; I'm not completely sure it's the right term, but I'm sure anyone who knows the true term will understand what I'm getting at. I have two folders on two separate hard drives, and I would like to "merge" these folders while keeping them on separate hard drives so they act as one folder. Example: Music and Videos are to be merged together into a new folder called "Merged" Music runs off of Hard Drive 1 Videos runs off of Hard Drive 2 Anything new saved inside Merged is saved within Videos, which runs off of Hard Drive 2 Now you see how I came up with the term "Software RAID": it's like an average RAID 0 setup, but instead I want to do it with just two specific folders on two different drives within Windows. Any help on this is appreciated!

    Read the article
