Search Results

Search found 9494 results on 380 pages for 'least squares'.


  • Google affecting my SERP Rank?

    - by Asad Moeen
    The following are some of my website's details. Home page: [thebluewaffles].[com]. Keywords: Blue Waffles; the rest of the keywords are post/subject specific. Site description: health articles blog. Site age: 1.5 years. A short history: when I started my website, the few things I kept in mind when posting content were at least 500 words on each page and writing every article with to-the-point information. I didn't go very fast with it, which is why I only have about 15 articles in 1.5 years. The SEO strategy was simple: I shared links through social marketing websites and some article-sharing websites, after which I could see my website ranking in the top 5 SERP results. I ranked well for about 8 months continuously, but then didn't keep updating content; there were roughly 3 rough months when no content was posted because of some personal work. The SERPs dropped to the 2nd page in April and almost disappeared in May. I asked a lot of people about it and most came up with "no updates to the site" as the reason, so I started updating my site again. November has almost started and I still see no sign of my website ranking. Another important point: when I post a new article and do a title search in Google, it ranks well for the first 10 hours and then disappears. What could be wrong here?

    Read the article

  • Dual Boot, Dual Hard Drives!

    - by Mars
    I'm posting this question after reading most of the similar ones. My situation is different in that I'm installing on an SSD rather than partitioning my HDD, and that I can actually boot! I'm just looking to make it more convenient to choose what to boot. 1- I have a Dell Inspiron 15R SE. It has an HDD (1TB) and an SSD (32GB). I managed to do whatever I did in the distant past to set the SSD free (I don't really care how fast my system boots). Now I wanted to install Linux on the SSD and leave the HDD untouched; it's way too precious for me to mess with. So I repartitioned the SSD into 30GB for root, 1GB for swap, and 100MB for /boot. I installed Linux on the root partition and GRUB on the boot partition (of the SSD). Now GRUB immediately boots into Linux and doesn't offer to boot Windows. BUT! If I enable the UEFI boot manager and choose "Windows Boot Manager" after hitting F12, I can boot into Windows 8 normally. I'd say that's pretty OK, except I'd prefer to have a menu to choose which one to boot into or, at the very least, default to booting Windows. 2- I'm concerned that if I now delete the SSD partition, the boot will break and I won't be able to boot anything! Is that a valid concern? I made the choice of putting Linux on the SSD because I'm going to be training on it, so I expect multiple resets from time to time.

    Read the article

  • Should a stack trace be in the error message presented to the user?

    - by Vilx-
    I've got a bit of an argument at my workplace and I'm trying to figure out who is right, and what the right thing to do is. Context: an intranet web application that our customers use for accounting and other ERP stuff. I'm of the opinion that an error message presented to the user (when things crash) should include as much information as possible, including the stack trace. Of course, it has to start with a nice "An error has occurred, please submit the information below to the developers" in large, friendly letters. My reasoning is that a screenshot of the crashed application will often be the only easily available source of information. Sure, you can try to get hold of the client's system administrator(s), attempt to explain where your log files are, etc., but that will probably be slow and painful (talking to the client representatives mostly is). Also, having immediate and full information is extremely useful in development, where you don't have to go hunting through the log files to find what you need on every exception. (But that could be solved with a configuration switch.) Unfortunately there has been some kind of "security audit" (no idea how they did that without the sources... but whatever), and they complained about the full exception messages, citing them as a security threat. Naturally, the clients (at least one that I know of) have taken this at face value and now demand that the messages be cleaned up. I fail to see how a potential attacker could use a stack trace to figure out anything he couldn't have figured out before. Are there any examples, any documented proof of anyone ever doing that? I think we should fight this foolish idea, but perhaps I'm the fool here, so... who's right?
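
    As an aside, the "configuration switch" mentioned above is only a few lines of code. The sketch below is hypothetical C++ (the type names and the flag are invented, and capturing the trace itself is assumed to happen elsewhere); it only illustrates keeping one error path while letting a setting decide how much detail the user sees.

      #include <exception>
      #include <functional>
      #include <sstream>
      #include <string>

      // Hypothetical setting, e.g. loaded from the application's config at startup.
      struct ErrorReportingConfig {
          bool includeTechnicalDetails = false;   // off for hardened customer installs
      };

      // Builds the text shown to the user. The stack trace is passed in as a string
      // because capturing it is platform/library specific.
      std::string buildUserErrorMessage(const std::exception& ex,
                                        const std::string& stackTrace,
                                        const ErrorReportingConfig& cfg) {
          std::ostringstream msg;
          msg << "An error has occurred. Please send the information below to the developers.\n";
          msg << "Error: " << ex.what() << '\n';
          if (cfg.includeTechnicalDetails) {
              msg << "Details:\n" << stackTrace << '\n';                      // full trace for developers
          } else {
              msg << "Reference: " << std::hash<std::string>{}(stackTrace);   // opaque id to match against server logs
          }
          return msg.str();
      }

    Whichever way the switch is set, the full detail can still go to the server-side logs the post mentions; only the on-screen text changes.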

    Read the article

  • Do you store mysql exports in your version control tool for reverting to in event of error?

    - by Rob
    We run an internal web server with in-house software to run a manufacturing line. When new product features are to be added, either or both of the following occur: 1) changes to the in-house server software may be required to support them - these are significant changes in functionality, being code-driven; 2) changes to the MySQL database, such as new entries for part numbers - these are smaller changes to configuration and to already existing values and parameters, and they don't require code changes. Ideally we'd want our changes to be in item 2 rather than item 1. Item 1 is version controlled in Subversion, so previous revisions can be referred to and rolled back to in the event of problems introduced in the latest revision. But what about changes to the MySQL database? We have quality processes to ensure that such changes are error-free, but there is always a chance that errors pass through, e.g. a mistake in data entry, or a fault in the code that uses MySQL corrupting the database. We have an automated backup every 6 hours, but what if we want more manually defined checkpoints in between these intervals? We could use the same backup system, but I wondered if folks here use other methods to store previous states of databases, e.g. exporting the database as a plain-text SQL dump - at least with this method it would be possible to see diffs, e.g. in Beyond Compare, for troubleshooting. Thoughts?

    Read the article

  • Should I expect my team to have more than a basic proficiency with our source control system?

    - by Joshua Smith
    My company switched from Subversion to Git about three months ago. We had weeks of advance notice prior to the switch. Since I'd never used Git before (or any other DVCS), I read Pro Git and spent a little time spinning up my own repositories and playing around, so that when we switched I'd be able to keep working with minimal pain. Now I'm the 'Git guy' by default. With a couple of exceptions, most of my team still has no idea how Git works. For example, they still think of branches as complete copies of the source code, and even go so far as to clone the repo into multiple folders (one per branch). They generally look at Git as a scary black box. Given the fundamental nature of source control in our daily work (not to mention the ridiculous amount of power Git affords us), I'm of the opinion that any dev who doesn't achieve a certain level of proficiency with it is a liability. Should I expect my team to have at least some understanding of how Git works internally, and how to use it beyond the most basic pull/merge/push operations? Or am I just making something out of nothing?

    Read the article

  • Stacks in C++

    - by MarkPearl
    So some more basics… One of the things you will be taught at any college after conquering arrays is the different kinds of collections. The stack is one of the simplest of those, and very useful… A stack is a LIFO (last in, first out) data structure and has at least two basic operations – push and pop. Push "pushes" an item onto the top of the stack; pop removes the topmost item from the stack. Because all elements on a stack are of the same type, one can implement a stack using either an array or a linked list. With the array-based approach, the first element pushed onto the stack is the first element of the array, the second on the stack is the second in the array, and so on. One limitation of an array implementation is that, unless the array is dynamic, you have to have a preset maximum stack size (based on the bounds of the array). A linked list is another approach that gets past this limit by allowing you to dynamically grow or shrink the collection. Stacks have many applications… a typical computer science example is a postfix expression calculator, where the LIFO principle is exactly what you need.
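
    Since the post is about C++, here is a minimal sketch of the array-based variant described above (fixed capacity, so it has the preset maximum size limitation the post mentions); it is illustrative only, not production code.

      #include <cstddef>
      #include <stdexcept>

      // Minimal fixed-capacity, array-based stack: LIFO with push and pop.
      template <typename T, std::size_t Capacity>
      class Stack {
      public:
          void push(const T& value) {
              if (count == Capacity) throw std::overflow_error("stack is full");  // the preset max size
              data[count++] = value;                                              // new item goes on top
          }
          T pop() {
              if (count == 0) throw std::underflow_error("stack is empty");
              return data[--count];                                               // removes the topmost item
          }
          bool empty() const { return count == 0; }
      private:
          T data[Capacity];
          std::size_t count = 0;
      };

      // Usage: push 1, 2, 3 and the first pop returns 3 - last in, first out.
      // Stack<int, 16> s; s.push(1); s.push(2); s.push(3); int top = s.pop();  // top == 3

    A postfix expression calculator is exactly this pattern: operands are pushed, and each operator pops its operands and pushes the result.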

    Read the article

  • Microsoft Tag Tagged Me

    - by Brian Schroer
    I got EXTREMELY lucky last week and won an HP Mini 311 notebook from a Microsoft Tag Twitter contest. I did my required tweet to enter last Tuesday, and one hour later received notification that I had won the weekly drawing. Apparently you can tweet up to 500 times (I pity the followers of those who do that), so it was really lucky that I won, and I sympathize with those who had been really trying. If you would like to try your luck, there are seven weekly prizes left, and you can find out about the contest here: http://tag.microsoft.com/ttcontest.aspx For a free PC, I thought it was the least I could do to find out what Microsoft Tag is. I was vaguely aware of those pastel-y triangle-y square things that look like someone put one of Don Johnson’s Miami Vice outfits through a shredder, and knew that the company I work for (one of the world’s largest consumer products companies) was looking into putting them on our products, packaging and advertising, but didn’t know much more about the technology. I thought they were just an improvement over bar codes, and would be used in retail store scanners, but I was mistaken. These tags are meant to be scanned by consumers using their mobile phones, to get instant access to information, websites, reviews, etc. Scanning a tag can open a web page, import a contact card, or dial a phone number, play a video… Tag reader software can be installed on Windows Mobile, iPhone, Symbian, Blackberry, Android, J2ME, and other phones (and I suspect that it will be available for Windows Phone 7 also :). There are built-in tracking, metrics and analysis tools, to help companies using Tag make decisions about their marketing expenditures. (And they don’t have to look Miami Vice-y – They can be customized to reflect the personality of the person or a brand.) Looks like interesting stuff. You can find out more at http://tag.microsoft.com.

    Read the article

  • Problems with Maverick upgrade

    - by altenuta
    I upgraded from Lucid to Maverick 10.10. I have an old Toshiba Satellite with a 1.1 GHz processor and 256MB of RAM. Initially I couldn't get my wireless to work; that solved itself after installing various updates and programs. The problems that remain are: I have to authorize at least 2 times at start-up, even though this machine is Ubuntu only; there is no boot loader screen; I have a ton of programs and system directories in my home folder (is this normal?); it is difficult to wake the computer from sleep, so usually I just shut it down and restart - tonight I waited and got a message about corrupt memory; and the computer takes forever to do just about everything, being slow to start programs or do things on the web. I am a longtime Mac user (since 1986). I also manage a network of several windoze machines. I am definitely a GUI guy and do very little in the terminal, so I really need to know where to begin to get things straightened out. Can I rescue this machine without wiping it and doing a fresh install? This is basically a hobby machine. Aside from all the programs and upgrades I've installed, I have almost no files or documents to worry about saving. Anyone have any ideas about the problems I'm having and the best way to proceed? Thanks, Al

    Read the article

  • Oracle Financials In the News

    - by Di Seghposs
    Coming off of OpenWorld and all the excitement around Oracle’s “Cloud” strategy, we thought we’d share what others had to say recently about Oracle’s financial solutions in and out of the cloud: Information Management, the educated reader’s choice for the latest news, commentary and feature content serving the information technology and business community, had an interesting blog post from Bill McNee of Saugatuck Technology, entitled, “A Bull Market for Finance Cloud Apps”. In the post, he highlights Oracle as one of the ‘significant players’ in the space… Oracle: As recently announced, Oracle is now aggressively marketing its Oracle Fusion Financials Cloud Service to midsize and large enterprise customers. While we anticipate that this solution set will primarily appeal to a portion of the existing Oracle customer footprint, rather than taking share from competitors, it is embedding some strong mobile and social capabilities that should help it gain traction. Read the full article - “A Bull Market for Finance Cloud Apps” Ventana Research, a leading benchmark research and advisory services firm, made mention to Oracle Fusion Financials in a recent blog post. While we all know ‘boring is cool’, it was cool to see Robert Kugel, SVP Research, discussing Oracle’s Fusion Financials strategy. Here’s some excerpts: “For at least the next five years I believe Oracle has a good strategy, because the transition from the existing Oracle ERP offerings to Fusion Financials can be less painful than similar migrations…” “Deploying Fusion GL can facilitate a more consistent and faster way to execute finance department functions.” “Fusion Financials is the go-forward accounting and financial applications suite that will coexist…” “Whether or not it’s time to migrate, I think all users of Oracle’s E-Business Suite, Oracle Applications, PeopleSoft and JD Edwards software should consider Fusion GL as part of an ongoing program to extract more value from their core financial systems.” Read the full article - “Oracle Fusion Financials: Boring is Cool”

    Read the article

  • How to justify technology choice to customer?

    - by suslik
    When freelancing/contracting, a customer will typically specify functional requirements, acceptance criteria, etc., and the implementation details are in the developer's hands. As a developer, your technology choice is a balancing act between what you are most familiar with, what technically appears to be the right tool, the ease of finding coders with this skill and their expense, and a few other factors. I'm in a situation where I have evaluated my options and selected a somewhat obscure open-source technology that I believe will get me there faster and more easily, and be more maintainable in the long term. It's different, but I think that is what the requirements call for. The customer has inquired about what I'm going to use to build the solution, and now they are concerned because they've never heard of it before. The reasons for my choice are mostly technical, whereas the customer isn't technical (but they know some buzzwords!). Explaining these technical reasons will not be easy, and I am not sure that is the right way to approach this situation anyway. And that's my question: what is the right way to approach this situation so as to cause the least amount of headache for everyone involved?

    Read the article

  • Ubuntu 12.04 stopped recognizing my BenQ monitor and reduced resolution to 1024x768

    - by Omri
    A few days ago I installed Ubuntu 12.04 32-bit Desktop. It recognized my hardware without a problem (at least as far as I know) and all worked fine. I left my system running through the night (it is at work) because it also works as a database server, and when I came to work today the resolution was 1024x768 (the monitor's recommended resolution is 1920x1080), even though in the Display section of System Settings the monitor was recognized as BenQ; no higher resolution was offered. After a restart, the monitor name changed from BenQ to Unknown. This is a desktop computer. I also installed gtk-redshift and f.lux. I checked Additional Drivers to see if there was something I could install, but it didn't find anything. I tried to Google it, but I didn't find anything about a monitor no longer being recognized after it had already been working. I did enable some PPAs yesterday, namely webupd8, mozillateam/thunderbird-stable and some others, and I also followed these instructions to patch NotifyOSD to be more friendly:
      sudo add-apt-repository ppa:caffeine-developers/ppa
      sudo add-apt-repository ppa:leolik/leolik
      sudo apt-get update
      sudo apt-get upgrade
      sudo apt-get install libnotify-bin
      pkill notify-osd
      sudo add-apt-repository ppa:nilarimogard/webupd8
      sudo apt-get update
      sudo apt-get install notifyosdconfig
    I have now purged both the caffeine-developers and leolik PPAs in the hope that it would help, but no change. Has there been a change in the packages that could introduce this problem? Any help will be very much appreciated :-) Omri

    Read the article

  • Masters vs. PhD - long [closed]

    - by Sterling
    I'm 21 years old and a first year master's computer science student. Whether or not to continue with my PhD has been plaguing me for the past few months. I can't stop thinking about it and am extremely torn on the issue. I have read http://www.cs.unc.edu/~azuma/hitch4.html and many, many other masters vs phd articles on the web. Unfortunately, I have not yet come to a conclusion. I was hoping that I could post my ideas about the issue on here in hopes to 1) get some extra insight on the issue and 2) make sure that I am correct in my assumptions. Hopefully having people who have experience in the respective fields can tell me if I am wrong so I don't make my decision based on false ideas. Okay, to get this topic out of the way - money. Money isn't the most important thing to me, but it is still important. It's always been a goal of mine to make 6 figures, but I realize that will probably take me a long time with either path. According to most online salary calculating sites, the average starting salary for a software engineer is ~60-70k. The PhD program here is 5 years, so that's about 300k I am missing out on by not going into the workforce with a masters. I have only ever had ~1k at one time in my life so 300k is something I can't even really accurately imagine. I know that I wouldn't have at once obviously, but just to know I would be earning that is kinda crazy to me. I feel like I would be living quite comfortably by the time I'm 30 years old (but risk being too content too soon). I would definitely love to have at least a few years of my 20s to spend with that kind of money before I have a family to spend it all on. I haven't grown up very financially stable so it would be so nice to just spend some money…get a nice car, buy a new guitar or two, eat some good food, and just be financially comfortable. I have always felt like I deserved to make good money in my life, even as a kid growing up, and I just want to have it be a reality. I know that either path I take will make good money by the time I'm ~40-45 years old, but I guess I'm just sick of not making money and am getting impatient about it. However, a big idea pushing me towards a PhD is that I feel the masters path would give me a feeling of selling out if I have the capability to solve real questions in the computer science world. (pretty straight-forward - not much to elaborate on, but this is a big deal) Now onto other aspects of the decision. I originally got into computer science because of programming. I started in high school and knew very soon that it was what I wanted to do for a career. I feel like getting a masters and being a software engineer in the industry gives me much more time to program in my career. In research, I feel like I would spend more time reading, writing, trying to get grant money, etc than I would coding. A guy I work with in the lab just recently published a paper. He showed it to me and I was shocked by it. The first two pages was littered with equations and formulas. Then the next page or so was followed by more equations and formulas that he derived from the previous ones. That was his work - breaking down and creating all of these formulas for robotic arm movement. And whenever I read computer science papers, they all seem to follow this pattern. I always pictured myself coding all day long…not proving equations and things of that nature. I know that's only one part of computer science research, but that part bores me. 
A couple cons on each side - Phd - I don't really enjoy writing or feel like I'm that great at technical writing. Whenever I'm in groups to make something, I'm always the one who does the large majority of the work and then give it to my team members to write up a report. Presenting is different though - I don't mind presenting at all as long as I have a good grasp on what I am presenting. But writing papers seems like such a chore to me. And because of this, the "publish or perish" phrase really turns me off from research. Another bad thing - I feel like if I am doing research, most of it would be done alone. I work best in small groups. I like to have at least one person to bounce ideas off of when I am brainstorming. The idea of being a part of some small elite group to build things sounds ideal to me. So being able to work in small groups for the majority of my career is a definite plus. I don't feel like I can get this doing research. Masters - I read a lot online that most people come in as engineers and eventually move into management positions. As of now, I don't see myself wanting to be a part of management. Lets say my company wanted to make some new product or system - I would get much more pride, enjoyment, and overall satisfaction to say "I made this" rather than "I managed a group of people that made this." I want to be a big part of the development process. I want to make things. I think it would be great to be more specialized than other people. I would rather know everything about something than something about everything. I always have been that way - was a great pitcher during my baseball years, but not so good at everything else, great at certain classes in school, but not so good at others, etc. To think that my career would be the same way sounds okay to me. Getting a PhD would point me in this direction. It would be great to be some guy who is someone that people look towards and come to ask for help because of being such an important contributor to a very specific field, such as artificial neural networks or robotic haptic perception. From what I gather about the software industry, being specialized can be a very bad thing because of the speed of the new technology. I When it comes to being employed, I have pretty conservative views. I don't want to change companies every 5 years. Maybe this is something everyone wishes, but I would love to just be an important person in one company for 10+ (maybe 20-25+ if I'm lucky!) years if the working conditions were acceptable. I feel like that is more possible as a PhD though, being a professor or researcher. The more I read about people in the software industry, the more it seems like most software engineers bounce from company to company at rapid paces. Some even work like a hired gun from project to project which is NOT what I want AT ALL. But finding a place to make great and important software would be great if that actually happens in the real world. I'm a very competitive person. I thrive on competition. I don't really know why, but I have always been that way even as a kid growing up. Competition always gave me a reason to practice that little extra every night, always push my limits, etc. It seems to me like there is no competition in the research world. It seems like everyone is very relaxed as long as research is being conducted. The only competition is if someone is researching the same thing as you and its whoever can finish and publish first (but everyone seems to careful to check that circumstance). 
The only noticeable competition to me is just with yourself and your own discipline. I like the idea that in the industry, there is real competition between companies to put out the best product or be put out of business. I feel like this would constantly be pushing me to be better at what I do. One thing that is really pushing me towards a PhD is the lifetime of the things you make. I feel like if you make something truly innovative in the industry…just some really great new application or system…there is a shelf-life of about 5-10 years before someone just does it faster and more efficiently. But with research work, you could create an idea or algorithm that last decades. For instance, the A* search algorithm was described in 1968 and is still widely used today. That is amazing to me. In the words of Palahniuk, "The goal isn't to live forever, its to create something that will." Over anything, I just want to do something that matters. I want my work to help and progress society. Seriously, if I'm stuck programming GUIs for the next 40 years…I might shoot myself in the face. But then again, I hate the idea that less than 1% of the population will come into contact with my work and even less understand its importance. So if anything I have said is false then please inform me. If you think I come off as a masters or PhD, inform me. If you want to give me some extra insight or add on to any point I made, please do. Thank you so much to anyone for any help.

    Read the article

  • Library Organization in .NET

    - by Greg Ros
    I've written a .NET bitwise operations library as part of my projects (stuff ranging from get MSB set to some more complicated bitwise transformations) and I mean to release it as free software. I'm a bit confused about a design aspect of the library, though. Many of the methods/transformations in the library come with different endianness. A simple example is a getBitAt method that regards index 0 as the least significant bit, or the most significant bit, depending on the version used. In practice, I've found that using separate functions for different endianness results in much more comprehensible and reusable code than assuming all operations are little-endian or something. I'm really stumped regarding how best to package the library. Should I have methods that have LE and BE versions take an enum parameter in their signature, e.g. Endianness.Little, Endianness.Big? Should I have different static classes with identically named methods? such as MSB.GetBit and LSB.GetBit On a much wider note, is there a standard I could use in cases like this? Some guide? Is my design issue trivial? I have a perfectionist bent, and I sometimes get stuck on tricky design issues like this... Note: I've sort of realized I'm using endianness somewhat colloquially to refer to the order/place value of digital component parts (be they bits, bytes, or words) in a larger whole, in any setting. I'm not talking about machine-level endianness or serial transmission endianness. Just about place-value semantics in general. So there isn't a context of targeting different machines/transmission techniques or something.
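
    For illustration only - the library in question is .NET, but the shape of the choice is language-neutral - here is a sketch of the two signatures being weighed, written in C++ with invented names that mirror the ones in the question.

      #include <cstdint>

      // Option 1: one method, with the bit ordering passed as an enum parameter.
      enum class BitOrder { LeastSignificantFirst, MostSignificantFirst };

      inline bool getBitAt(std::uint32_t value, unsigned index, BitOrder order) {
          unsigned shift = (order == BitOrder::LeastSignificantFirst) ? index : 31u - index;
          return (value >> shift) & 1u;
      }

      // Option 2: identically named functions grouped by ordering (the MSB.GetBit / LSB.GetBit idea).
      namespace LSB { inline bool getBit(std::uint32_t v, unsigned i) { return (v >> i) & 1u; } }
      namespace MSB { inline bool getBit(std::uint32_t v, unsigned i) { return (v >> (31u - i)) & 1u; } }

      // Call sites read differently:
      //   getBitAt(x, 3, BitOrder::MostSignificantFirst)   vs.   MSB::getBit(x, 3)

    The second form keeps each call site shorter and makes the ordering impossible to omit; the first keeps the API surface smaller and lets the ordering be chosen at runtime.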

    Read the article

  • Chainloading GRUB2 from BURG

    - by WindowsEscapist
    I have an old PC with Puppy Linux in addition to Ubuntu and Windows XP. This creates a LOT of menu entries (all of which I would like to keep): Ubuntu 10.04, Ubuntu Recovery Mode, Memtest x86, Memtest Serial, Windows XP Pro, Precise Puppy Linux, Precise Puppy TORAM, Puppy 4.3.1, Puppy 4.3.1 TORAM, and Plop Boot Manager (for booting to USB; the PC doesn't have a BIOS option for it). Now, on my fancy shiny laptop I've gotten really attached to BURG, and I would like a setup where I have a Windows icon, an Ubuntu icon, and an arrow entry that chainloads GRUB2, so that I can boot from USB or run Puppy if need be (all these entries will obviously not fit into the BURG theme I use, Lightness). The problem is, GRUB2 can't install to the beginning of a partition like it used to be able to (I am reluctant to specify anything with --force at the end), at least not without warning that "This is a BAD idea!". So I'm kind of at a loss here. I can't see how the folding option would work, because all of those other options would have the same icon once unfolded (Lightness is non-text-based). If I do embed GRUB using grub-install /dev/sdax --force, how do I chainload it from BURG? Is there another way?

    Read the article

  • Doing a passable 4X game AI

    - by Extrakun
    I am coding a rather "simple" 4X game (if a 4X game can be simple). It's indie in scope, and I am wondering if there's any way to come up with a passable AI without spending months coding it. The game has three major decision-making portions: spending production points, spending movement points, and spending tech points (basically there are 3 different 'currencies'; currency unspent at the end of a turn is not saved). Spending production points means upgrading a planet (increasing its tech and production) or building ships (3 types). Spending movement points means moving ships from planet to planet, either to attack or to fortify. Spending tech points means researching tech (a tech can be partially researched, as in Master of Orion). The plan for me right now is a brute-force approach. There are basically 4 broad options for the player: upgrade planet(s) to boost their production and tech output, conquer as many planets as possible, secure as many planets as possible, or get to a certain tech as soon as possible. For each decision, I will iterate through the possible options and come up with a score, and then the AI will choose the decision with the highest score. Right now I have no idea how to 'mix decisions' - that is, when the AI wishes to upgrade and conquer planets at the same time. I suppose I can have another layer of logic which does a brute-force optimization over a combination of those 4 decisions... At least, that's my plan if I can't think of anything better. Is there any faster way to make a passable AI? I don't need a very good one to rival Deep Blue or such, just something that has the illusion of intelligence. This is my first time doing an AI on this scale, so I dare not try something too grand either. So far I have experience with FSMs, DFS, BFS and A*.
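
    The "iterate through the options and pick the highest score" plan in the post fits in a handful of lines. This is a minimal C++ sketch with made-up types; the scoring functions stand in for whatever game-specific heuristics end up being used.

      #include <cstddef>
      #include <functional>
      #include <string>
      #include <vector>

      struct GameState { int turn = 0; /* placeholder for the real game state */ };

      // A candidate decision plus a heuristic that rates it for the current state.
      struct Option {
          std::string name;                               // e.g. "upgrade planet 4", "build scout"
          std::function<double(const GameState&)> score;  // higher is better
      };

      // Brute force: evaluate every option and return the index of the best one.
      std::size_t chooseBest(const std::vector<Option>& options, const GameState& state) {
          std::size_t best = 0;
          double bestScore = -1e300;
          for (std::size_t i = 0; i < options.size(); ++i) {
              double s = options[i].score(state);
              if (s > bestScore) { bestScore = s; best = i; }
          }
          return best;
      }

    One way to get the "mixed decisions" the post asks about is to score small bundles of actions (e.g. "upgrade planet A and move the fleet to B") rather than single actions, at the cost of a larger search space.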

    Read the article

  • Rendering a big game universe - bitmaps or vector graphics?

    - by user1641923
    I am new to Android development, though I have a lot of experience with Java, C++ and PHP programming, and a bit of experience with vector graphics too (basic 3D Studio Max, Flash, etc.). I am starting to work on an Android game. It is going to be a 2D space shooter/RPG, and I am not going to use any game engines or any third-party libs. I really want to create a very large game universe, or even a pseudo-infinite one (without visible borders, as if it were a 2D projection of a sphere). It should include 10-12 clusters of 7-8 planets/other space objects, a random number of single asteroids/comets the player can interact with, and a non-interactive background. I am looking for the least complicated approach to create such a universe. My current ideas are: 1) simply create bitmaps of space scenery that can be seamlessly tiled, construct my 2D universe out of these tiles, then place interactive objects (planets, other spaceships) on it; 2) use vector graphics - I would have a solid-colour background, some random background objects, and gradients here and there. My problems here: lack of knowledge of how well vector graphics is integrated in Android - performance? memory usage? Does Android manage big bitmaps well? Do all of the bitmaps have to be in memory for the whole game? I am interested in technical details regarding each of the ideas and a suggestion as to which I should go with.
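
    As a rough illustration of idea 1 - covering the screen with seamlessly repeating background tiles so only the tiles near the camera ever need to be drawn - here is a small C++ sketch (the question mentions C++ experience); drawTile is a stand-in for whatever the real renderer does.

      #include <cmath>

      // Placeholder for the real draw call (Canvas, OpenGL, ...).
      void drawTile(int screenX, int screenY) { (void)screenX; (void)screenY; }

      // Fill the visible screen with a repeating background tile, given the camera
      // position in world coordinates. Because the pattern repeats, only the
      // camera's offset inside the current tile matters - the world can be as
      // large (or as endless) as you like without storing a huge bitmap.
      void drawTiledBackground(double cameraX, double cameraY,
                               int screenW, int screenH, int tileSize) {
          int offX = static_cast<int>(std::floor(cameraX)) % tileSize;
          int offY = static_cast<int>(std::floor(cameraY)) % tileSize;
          if (offX < 0) offX += tileSize;   // keep offsets positive for negative coordinates
          if (offY < 0) offY += tileSize;

          for (int y = -offY; y < screenH; y += tileSize)
              for (int x = -offX; x < screenW; x += tileSize)
                  drawTile(x, y);           // interactive objects are drawn on top afterwards
      }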

    Read the article

  • What do you do to make sure you take proper/enough breaks, while avoiding unwanted side-effects of break taking?

    - by blueberryfields
    preamble It seems to me that computer programmers are one of a select few groups of people who actually take pleasure from sitting in front of computers for long periods of time. Most people in other professions actively dislike their time at computers, and do their best to avoid it (so, I assume, they don't have problems taking breaks). At least for me, having external cues for taking breaks, and clear instructions on what to do with each break (stretch, go for a walk, close my eyes, look into a distance of preferably a few km and focus on faraway objects, etc...), is a must. So far, I've just been making up the breaks and tools to get them as I go along, based on what looks to be low-specificity information found on the net (generic stuff ala ergonomics advice for office staff). This has led to all sorts of side effects - loss of attention as I get distracted if I walk around, breaks in flow with alarm clocks interrupting my thoughts, and people around me assuming I'm low on work due to the frequency of my walking around compared to everyone else. /preamble tl;dr Taking breaks is important My internal break taking system doesn't work, and ad-hoc ones have unwanted side effects What do you do to make sure you take proper breaks? How do you avoid unwanted side-effects, such as getting distracted or interrupting flow or giving your co-workers the impression you're spending a lot of time goofing off?

    Read the article

  • Generalist Languages: Dying or Alive and Well?

    - by dsimcha
    Around here, it seems like there's somewhat of a consensus that generalist programming languages (that try to be good at everything, support multiple paradigms, support both very high- and very low-level programming), etc. are a bad idea, and that it's better to pick the right tool for the job and use lots of different languages. I see three major areas where this is flawed: Interfacing multiple languages is always at least a source of friction and is sometimes practically impossible. How severe a problem this is depends on how fine-grained the interfacing is. Near the boundary between the two languages, though, you're basically limited to the intersection of their features, and you have to care about things like binary interfaces that you usually wouldn't. Passing complex data structures (i.e. not just primitives and arrays of primitives) between languages is almost always a hassle. Furthermore, shifting between different syntaxes, different conventions, etc. can be confusing and annoying, though this is a fairly minor complaint. Requirements are never set in stone. I hate picking a language thinking it's the right tool for the job, then realizing that, when some new requirement surfaces, it's actually a terrible choice for that requirement. This has happened to me several times before, usually when working with languages that are very slow, very domain specific and/or has very poor concurrency/parallelism support. When you program in a language for a while, you start to build up a personal toolbox of small utility functions/classes/programs. The value of these goes drastically down if you're forced to use a different language than the one you've accumulated all this code in. What am I missing here? Why shouldn't more focus be placed on generalist languages? Are generalist languages as a category dying or alive and well?

    Read the article

  • How do I initialize a Scala map with more than 4 initial elements in Java?

    - by GlenPeterson
    For 4 or fewer elements, something like this works (or at least compiles): import scala.collection.immutable.Map; Map<String,String> HAI_MAP = new Map4<>("Hello", "World", "Happy", "Birthday", "Merry", "XMas", "Bye", "For Now"); For a 5th element I could do this: Map<String,String> b = HAI_MAP.$plus(new Tuple2<>("Later", "Aligator")); But I want to know how to initialize an immutable map with 5 or more elements and I'm flailing in Type-hell. Partial Solution I thought I'd figure this out quickly by compiling what I wanted in Scala, then decompiling the resultant class files. Here's the scala: object JavaMapTest { def main(args: Array[String]) = { val HAI_MAP = Map(("Hello", "World"), ("Happy", "Birthday"), ("Merry", "XMas"), ("Bye", "For Now"), ("Later", "Aligator")) println("My map is: " + HAI_MAP) } } But the decompiler gave me something that has two periods in a row and thus won't compile (I don't think this is valid Java): scala.collection.immutable.Map HAI_MAP = (scala.collection.immutable.Map) scala.Predef..MODULE$.Map().apply(scala.Predef..MODULE$.wrapRefArray( scala.Predef.wrapRefArray( (Object[])new Tuple2[] { new Tuple2("Hello", "World"), new Tuple2("Happy", "Birthday"), new Tuple2("Merry", "XMas"), new Tuple2("Bye", "For Now"), new Tuple2("Later", "Aligator") })); I'm really baffled by the two periods in this: scala.Predef..MODULE$ I asked about it on #java on Freenode and they said the .. looked like a decompiler bug. It doesn't seem to want to compile, so I think they are probably right. I'm running into it when I try to browse interfaces in IntelliJ and am just generally lost. Based on my experimentation, the following is valid: Tuple2[] x = new Tuple2[] { new Tuple2<String,String>("Hello", "World"), new Tuple2<String,String>("Happy", "Birthday"), new Tuple2<String,String>("Merry", "XMas"), new Tuple2<String,String>("Bye", "For Now"), new Tuple2<String,String>("Later", "Aligator") }; scala.collection.mutable.WrappedArray<Tuple2> y = scala.Predef.wrapRefArray(x); There is even a WrappedArray.toMap() method but the types of the signature are complicated and I'm running into the double-period problem there too when I try to research the interfaces from Java.

    Read the article

  • Troubleshooting VMware on Ubuntu

    Summary of different problems encountered while using VMware products on Ubuntu. This article is going to be updated from time to time with new information about running VMware products more or less smoothly on Ubuntu. The following are links to existing articles: Running VMware Player on Linux (xubuntu Hardy Heron); Running VMware Server on Linux (version 1.0.6 on xubuntu); Using ext4 in a VMware machine. VMware mouse grab/ungrab problem (source: LinuxInsight): upgrading the GTK library in Ubuntu since Karmic Koala gives you strange mouse behaviour. Even if you have the "Grab when cursor enters window" option set, VMware won't grab your pointer when you move the mouse into the VMware window. Also, if you use Ctrl-G to capture the pointer, the VMware window will release it as soon as you move the mouse around a little bit. Quite annoying behavior... Fortunately, there's a simple workaround that can fix things until VMware resolves incompatibilities with the new GTK library. VMware Workstation ships with many standard libraries, including libgtk, so the only thing you need to do is force it to use its own versions. The simplest way to do that is to add the following line to the end of the /etc/vmware/bootstrap configuration file and restart Workstation:
      export VMWARE_USE_SHIPPED_GTK="force"
    The interface will look slightly odd, because an older version of GTK is being used, but at least it will work properly. Note: after upgrading to a new Linux kernel, it is necessary to recompile the VMware modules, which requires temporarily commenting out the export line in /etc/vmware/bootstrap.

    Read the article

  • Is dependency injection by hand a better alternative to composition and polymorphism?

    - by Drake Clarris
    First, I'm an entry-level programmer; in fact, I'm finishing an A.S. degree with a final capstone project over the summer. In my new job, when there isn't some project for me to do (they're waiting to fill the team with more new hires), I've been given books to read and learn from while I wait - some textbooks, others not so much (like Code Complete). After going through these books, I've turned to the internet to learn as much as possible, and started learning about SOLID and DI (we had talked some about Liskov's substitution principle, but not much about the other SOLID ideas). So, to learn by doing, I sat down and began writing some code that uses DI by hand (there are no DI frameworks on the development computers). Thing is, as I do it, I notice it feels familiar... and it seems very much like work I've done in the past using composition of abstract classes and polymorphism. Am I missing a bigger picture here? Is there something about DI (at least by hand) that goes beyond that? I understand that keeping configuration out of code, as some DI frameworks do, has great benefits for changing things without having to recompile, but when doing it by hand, I'm not sure it's any different than what I described above... Some insight into this would be very helpful!
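
    For what it's worth, "DI by hand" usually means exactly what the post describes: depend on an abstraction and let the caller hand the concrete object in, most often through the constructor. A minimal C++ sketch with invented names:

      #include <iostream>
      #include <memory>
      #include <string>

      // The abstraction the consumer depends on.
      struct MessageSink {
          virtual ~MessageSink() = default;
          virtual void write(const std::string& msg) = 0;
      };

      // One concrete implementation; a test could inject a fake instead.
      struct ConsoleSink : MessageSink {
          void write(const std::string& msg) override { std::cout << msg << '\n'; }
      };

      // The consumer never constructs its dependency; it is handed in ("injected").
      class ReportService {
      public:
          explicit ReportService(std::shared_ptr<MessageSink> sink) : sink_(std::move(sink)) {}
          void run() { sink_->write("report finished"); }
      private:
          std::shared_ptr<MessageSink> sink_;
      };

      int main() {
          // The "composition root": the one place that chooses concrete types.
          ReportService service(std::make_shared<ConsoleSink>());
          service.run();
      }

    The overlap the post notices is real - the polymorphism is the same; what the DI habit (or a framework) adds is mainly deciding who picks the concrete type and keeping that wiring in one place, outside the classes that use it.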

    Read the article

  • There is No Scrum without Agile

    - by John K. Hines
    It's been interesting for me to dive a little deeper into Scrum after realizing how fragile its adoption can be. I've been particularly impressed with James Shore's essay "Kaizen and Kaikaku" and the Net Objectives post "There are Better Alternatives to Scrum" by Alan Shalloway. The bottom line: you can't execute Scrum well without being Agile. Personally, I'm the rare developer who has an interest in project management. I think the methodology used to deliver software is interesting, and that there are many roles whose job exists to make software development easier. As a project lead I've seen Scrum deliver for disciplined, highly motivated teams with solid engineering practices. It definitely made my job an order of magnitude easier. As a developer I've experienced huge rewards from having a well-defined pipeline of tasks that were consistently delivered with high quality in short iterations. In both of these cases Scrum was an addition to a fundamentally solid process and a huge benefit to the team. The question I'm now facing is how Scrum fits into organizations without solid engineering practices. The trend that concerns me is one of Scrum being mandated as the single development process across teams where it may not apply. And we have to realize that Scrum itself isn't even a development process. This is what worries me the most - the assumption that Scrum on its own increases developer efficiency, when it is essentially an exercise in project management. Jim's essay quotes Tobias Mayer writing, "Scrum is a framework for surfacing organizational dysfunction." I'm unsure whether a Vice President of Software Development wants to hear that, reality notwithstanding. Our Scrum adoption has surfaced a great deal of dysfunction, but I feel the original assumption was that we would experience increased efficiency. It's starting to feel like a blended approach - Agile/XP techniques for developers, Scrum for project managers - may be a better fit. Or at least, a better way of framing the conversation. The blended approach. Technorati tags: Agile Scrum

    Read the article

  • What does "fully supported" mean in context of Radeon Opensource Video Driver?

    - by stevecoh1
    UPDATE: This is not a request for support of my specific issue. Details of that issue are here: How to recover from bad upgrade to 13.04 (Unity very slow) . I have "solved" that issue, for the time being anyway, by loading alternative lighter weight desktops. This question was opened specifically to question the meaning of the documentation at https://help.ubuntu.com/community/RadeonDriver . END OF UPDATE There it is, in Black and White: https://help.ubuntu.com/community/RadeonDriver Fully Supported All these Radeon(HD) cards and derivatives have good 3D acceleration support. This is not an exhaustive list: ... RV610/RV630 Radeon HD 2400/2600/2700/4200/4225/4250 Yet in my case (the HD2400) this proves to be manifestly untrue, at least if "Fully Supported" means sufficient to run Unity in Ubuntu 13.04. It runs all the applications I can launch under Unity, but Unity itself is unbearably slow. It's quite striking really. Click on the "Dash" - go get a cup of coffee. Type a key in the Unity search box, wait five seconds for it to appear. Type Alt-tab and wait five seconds for the screen to finish painting. None of these issues appear outside of Unity components. As you all know, there are complaints about slow performance all over the Internet about Unity. Shouldn't this page somehow address this issue? Especially if "fully supported" doesn't mean sufficiently to run the default modern Ubuntu release. What does "fully supported" mean?

    Read the article

  • mod_rewrite works within directory not on root

    - by Anvesh Saxena
    I am having a problem with my RewriteRule for the tags portion. What I am able to debug is that the rule is at least being triggered, because the page "tags.php" is rendered, but without the URL parameters. The .htaccess file with the rules is in the root of my sub-domain and has the following content for the tags portion:
      # Rewrite rule for tags
      RewriteRule ^tags/(\w+)/(\d+)/?$ tags.php?tag_name=$1&tag_id=$2
      RewriteRule ^tags/(\w+)/?$ tags.php?tag_name=$1
      RewriteRule ^tags/?$ tags.php?tag_name=
    Another thing I haven't been able to debug is that a similar .htaccess file exists for a directory within my sub-domain and works as expected, with the necessary URL parameters available. The .htaccess file within that directory reads as follows:
      # Rewrite rule for tags
      RewriteRule ^tags/(\w+)/(\d+)/?$ restAPI.php?type=tags&tag_name=$1&tag_id=$2
      RewriteRule ^tags/(\w+)/?$ restAPI.php?type=tags&tag_name=$1
      RewriteRule ^tags/?$ restAPI.php?type=tags&tag_name=
    Could anyone point out the problem I might have in my rewrite rules? I am also sometimes facing an Internal Server Error, which I am second-guessing is due to the linked problem. Note: I have Apache version 2.2.23 on my shared hosting.

    Read the article

  • Read only file system error on ubuntu after partitioning

    - by Ranjith R
    I am not sure if I am the root cause of this problem, but this is what I did: I wanted the latest Ubuntu and the latest Linux Mint together on my ThinkPad laptop. Windows 7 was already there, and I already had Mint. So I put in the USB with the Ubuntu image and started installing Ubuntu, choosing to install side by side. It was taking a long time to finish defragmenting and partitioning, so I became a little impatient, gave up, and pressed the skip button. After skipping, I realized that the partitioning was actually complete and went ahead with installing Ubuntu. Now the Linux Mint OS reports the file system as read-only at least once every day, and I have to restart and tell the OS to fix errors on the hard disk. After I press the F key, the system fixes the issues, restarts, and all is well again. Is there some way to fix the issue permanently? I think reinstalling would solve it, but I can't do that, as I have a lot of data and I would have to reinstall and configure a lot of software that I use daily. I ran the SMART check in Disk Utility and the hard disk seems to be fine. I also checked both partitions for errors with Disk Utility and the report says they are fine. Is there something I can do short of reinstalling?

    Read the article
