Search Results

Search found 1128 results on 46 pages for 'sees'.

Page 19/46 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • A case for not installing your own software

    - by James Gentsch
    This week I watched some of the Oracle Open World presentations (from the comfort of my Oracle office) and happened on some of Larry Ellison’s comments about cloud computing and engineered systems.  Larry said he sees the move to these as analogous to the moves made by the original adopters of electricity.  The argument goes that the first consumers of electricity had to set up their own power plant.  Then, as the market and infrastructure for electricity matured, power consumers moved from using their own personal power plant to purchasing power from another entity that was focused on power production as their primary product. In the end this was a cheaper and more reliable solution. Now, there are lots of compelling reasons to be looking very seriously at cloud computing and engineered systems for enterprise application deployment.  However, speaking as a software developer of enterprise applications, the part of this that I really love (besides Larry’s early electricity adopter analogy) is that as a mode of application deployment it provides me and my customers a consistent environment in which the applications I am providing will be run.  This cuts way down on the environmental surprises that consistently lead to the hated “well, it works here” situation with the support desk. And just to be clear, I think I hate this situation more than my clients, who I think are happy that at least it is working somewhere.  I hate this because when a problem happens, and let’s face it, customers are not wasting their time calling in easy problems, we are seriously disabled when we cannot reproduce an issue that is triggered by something unforeseen in the environment where the application is running.  This situation is incredibly frustrating and an all-too-frequent occurrence. I look selfishly forward to cloud computing and engineered systems dramatically reducing the occurrence of problems triggered by unforeseen environmental situations in the software I am responsible for.  I think this is an evolutionary game changer that will be a huge benefit to the reliability and consistent performance of the software for my customers, and may make “well, it works here” a long-forgotten phrase for future software developers. It may even impact the stress squeeze toy industry.  Well, maybe at least for my group.

    Read the article

  • Get to Know a Candidate (6 of 25): Jill Stein – Green Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post for whom I am voting. Information sourced from Wikipedia. Stein is a physician with degrees from Harvard College and Harvard Medical School.  She serves on the boards of Greater Boston Physicians for Social Responsibility and MassVoters for Fair Elections, and has been active with the Massachusetts Coalition for Healthy Communities. Jill Stein advocates a "Green New Deal" in which renewable energy jobs would be created to address climate change and environmental issues with the objective of employing "every American willing and able to work". Citing the research of Dr. Phillip Harvey, Professor of Law & Economics at Rutgers University, as evidence of the successful economic effects of the 1930s' New Deal projects, Stein would fund the plan with a 30% reduction in the U.S. military budget, returning US troops home, and increasing taxes on areas such as capital gains, offshore tax havens and multimillion dollar real estate. Stein plans on addressing what she sees as a growing convergence of environmental crises in water, soil, fisheries and forests through the creation of sustainable infrastructure based in clean renewable energy generation and sustainable communities principles, such as increasing intra-city mass transit and inter-city railroads, creating 'complete streets' that safely encourage bike and pedestrian traffic, and regional food systems based on sustainable organic agriculture. The Green Party of the United States was founded in 1991 as a voluntary association of state green parties. With its founding, the Green Party of the United States became the primary national Green organization in the United States, eclipsing the Greens/Green Party USA, which emphasized non-electoral movement building. The Green Party of the United States emphasizes environmentalism, non-hierarchical participatory democracy, social justice, respect for diversity, peace and nonviolence. Their "Ten Key Values," which are described as non-authoritative guiding principles, are as follows: grassroots democracy; social justice and equal opportunity; ecological wisdom; nonviolence; decentralization; community-based economics; feminism and gender equality; respect for diversity; personal and global responsibility; and future focus and sustainability. The Green Party does not accept donations from corporations. Thus, the party's platforms and rhetoric critique any corporate influence and control over government, media, and American society at large. Stein has access to 403 electoral votes and is a write-in candidate in GA, IN, and MS. Learn more about Jill Stein and the Green Party on Wikipedia.

    Read the article

  • http to https upgrade -- SEO troubles

    - by SLIM
    I upgraded my site so that all pages have gone from using http to https. I didn't consider that Google treats https pages differently than http. I re-created my sitemap so that all links now reflect the new https URLs and let it be for a few days. (Whoops!) Google is now re-indexing all https pages. I have about 19k pages on the site, and Google has already indexed about 8k of the new https pages. The problem is that Google sees all of these as brand new pages when many of them have a long http history. Of course most of you will recognize the problem: I didn't set up a 301 from the old http to the new https. Is it too late to do this? Should I switch my sitemap back to http and then 301 to the new https? Or should I leave the sitemap as is and set up 301 redirects anyway? I'm not even sure if Google is trying to reach the http site anymore. Currently the site is doing 303 redirects (from http to https), although I haven't figured out why yet. Thanks for any suggestions you can offer.
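    For context, the kind of 301 rule I mean (a minimal sketch, assuming the site runs on Apache with mod_rewrite; other servers have equivalents) would be something like:

        # Hypothetical .htaccess fragment: send every http request to the same
        # path on https with a permanent (301) redirect rather than a 303
        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]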

    Read the article

  • How do I install Ubuntu 13.10 from a partition on my Mac?

    - by Barry
    I am trying to install Ubuntu 13.10 on my Macbook Air. I've previously had no issue installing from a USB stick to this machine. However, I don't currently have access to a USB stick or any external media at all! What I've done so far is partitioned my SSD into 3 partitions. One holds OS X, another is a 5 GB partition intended for the install ISO, and a third is intended to be the target for that install. The second two partitions are formatted as FAT. I've used dd (with and without bs=1m) to "burn" my ISO to the small 5 GB FAT partition. I also at one point tried using hdiutil to convert my ISO file to IMG and went through the same process with the same result below. After "burning" my ISO to the small partition, I reboot into rEFInd. rEFInd sees my small 5 GB partition perfectly well, and when I select that partition it loads GRUB appropriately. However, from here, regardless of what I choose, Ubuntu will start to load and then after a few minutes crash out to: BusyBox v1.15.3 (Ubuntu 1:1.15.3-1ubuntu5) built-in shell (ash) Enter 'help' for a list of built-in commands. (initramfs) unable to find a medium containing a live file system. I've Googled this error and found a number of people encountering it when trying to install from USB, but no solutions seem applicable to my case (installing from a partition on my SSD, to another partition on my SSD). Is there any solution to this, or do I just need to wait a few days until I have access to a USB stick? Many thanks in advance, and apologies for length -- I figured I'd err on the side of being exhaustive rather than having people suggest things I've already tried.
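    For reference, the dd step I ran looked roughly like this (a sketch only; the partition identifier is an example, and pointing of= at the wrong device will destroy data):

        # Find the 5 GB partition first (e.g. with `diskutil list`), then write the
        # ISO to it -- /dev/rdisk0s3 below is purely illustrative
        sudo dd if=ubuntu-13.10-desktop-amd64.iso of=/dev/rdisk0s3 bs=1m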

    Read the article

  • Finishing an iteration early

    - by f1dave
    I'd like some input on this from those working with agile methodologies... A current project is finding that development on our planned user stories is finishing some time before the end of the iteration, and that the testing effort and business acceptance is what's actually dragging us out longer towards the end. This means that the devs in question have spare time, and they're essentially going out to the iteration+1 backlog and starting work on cards there before our current iteration cards are 'done'. As iteration manager, I want to put a stop to this - I want a more team-oriented approach where the group takes ownership of getting all the cards done, as opposed to "Well, dev's done so what do I dev next?" The problem I face is convincing the team of this. On one hand, I understand why the devs don't want to test the code they've written (there are unit tests they write of course, but the manual testing to be done could be influenced by their bias). The team sees working ahead as making our next iterations easier, because a lot of the work is done before we start. I see this as screwing with the whole system of planning/actuals - but it's difficult to convince the team as to why this matters. What advice can you guys and girls give? How do we stop devs reaching ahead? What should they be doing instead? How much of a problem is this in the scheme of things, if things are still getting done?

    Read the article

  • Ubuntu 12.04 64 bit "unable to find medium with live filesystem" AFTER normal install

    - by user88710
    So, I got a new computer (64-bit quad core, yada yada), pulled my Ubuntu SSD drive from the old machine, and installed it into the new machine. (My intention here is to have Ubuntu installed on the 120G SSD, Win7 on the main drive.) I downloaded 64-bit Ubuntu and burned it to a disk. I rebooted with the Live CD and installed Ubuntu to the SSD drive with no problems. I rebooted again, got the GRUB menu, and selected Ubuntu. After a minute I got this - "unable to find medium with live filesystem". Booting into Windows, Explorer doesn't even see the SSD. Device Manager sees it though. I assume this is because it's formatted with ext4. So, the Live CD saw the SSD just fine and installed fine, but when I try to boot Ubuntu, I get the error above. Heeellllpppp! UPDATE: Small update. Windows did a software update that apparently wiped out my GRUB, so I guess GRUB was installed on the main drive. I reinstalled Ubuntu (again) on the SSD drive but still no joy with booting from it. Same error message as above.

    Read the article

  • Using AdSense to show ads to logged-in users

    - by John
    I know that you can grant authorization permissions to Google AdSense so that it can 'log in' and see what other logged-in users can see (e.g. in a private forum), so that the ads it displays are better targeted. Extending this principle further: I am making a site which will show completely different content for each individual user (i.e. not 'common' content like a forum in which everybody sees essentially the same thing). You could think of this content as similar to the way each Facebook user has a different news feed, but it is the 'same' page. Complicating things further, the URLs for this site will be simple, e.g. '/home' and '/somepage', and will not usually include unique identifiers to differentiate between users (e.g. '/home?user=32i42'). My questions are: Is it worth creating an account purely for AdSense to log in to the site with in this case, seeing as it will be seeing its own 'personalized' version and not any other user's? More importantly: is that against the Google AdSense Terms of Service? (I can't seem to figure that one out.) How would you go about this problem?

    Read the article

  • What is the philosophy/reasoning behind C#'s Pascal-casing method names?

    - by Nocturne
    I'm just starting to learn C#. Coming from a background in Java, C++ and Objective-C, I find C#'s Pascal-casing of its method names rather unique, and a tad difficult to get used to at first. What is the reasoning and philosophy behind this? I'm guessing it is because of C# properties. Unlike in Objective-C, where method names can be exactly the same as instance variables, this is not the case with C#. I would guess one of the goals with properties (as it is with most of the languages that support it) is to make properties truly indistinguishable from variables and methods. So, one can have an "int x" in C#, and the corresponding property becomes X. To ensure that properties and methods are indistinguishable, all method names, I'm guessing, are also therefore expected to start with an uppercase letter. (This is just my hypothesis based on what I know of C# so far—I'm still learning.) I'm very curious to know how this curious guideline came into being (given that it's not something one sees in most other languages, where method names are expected to start with a lowercase letter). (EDIT: By Pascal-casing, I mean PascalCase (which is basically camelCase but starting with a capital letter). Method names typically start with a lowercase letter in most languages.)
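    To make the convention I'm asking about concrete, a small sketch (the class and member names are just examples):

        public class Counter
        {
            private int count;       // fields: camelCase (often with an underscore prefix)

            public int Count         // properties: PascalCase, so they read like plain members
            {
                get { return count; }
            }

            public void Increment()  // methods: PascalCase as well, per the .NET naming guidelines
            {
                count++;
            }
        }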

    Read the article

  • What data should be cached in a multiplayer server, relative to AI and players?

    - by DevilWithin
    In a virtual place, fully network driven, with an arbitrary number of players and an arbitrary number of enemies, what data should be cached in the server memory in order to optimize smooth AI simulation? To explain, let's say player A sees players B to E, and enemies A to G. Each of those players sees player A, but not necessarily each other. The same applies to enemies. Think of this question from a top-down perspective, please. In many cases, for example, when a player shoots his gun, the server handles the sound as a radial "signal" that every other entity within reach "hears" and reacts upon. Doing these searches all the time for a whole area, containing possibly a lot of unrelated players and enemies, seems to be an issue when the budget for each AI agent is so small. Should every entity cache whatever enters and exits its radius of awareness? Is there a good way to track the entities close by without flooding the memory with such caches? What about other AI-related problems that may arise, after assuming the previous one works well? We're talking about environments with possibly hundreds of enemies, a swarm.
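    A rough sketch of the per-entity cache I have in mind (C#/XNA-style; RefreshAwareness would run at a coarse interval or be driven by a spatial grid's enter/exit events, and ReactToSound is a made-up placeholder):

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        class Entity
        {
            public Vector2 Position;
            public float AwarenessRadius;

            // Cached set of entities currently inside this entity's radius,
            // maintained incrementally instead of re-queried on every event.
            public HashSet<Entity> InRange = new HashSet<Entity>();

            public void ReactToSound(Vector2 source) { /* AI reaction here */ }
        }

        static class AwarenessSystem
        {
            public static void RefreshAwareness(Entity self, IEnumerable<Entity> nearbyCandidates)
            {
                self.InRange.Clear();
                foreach (var other in nearbyCandidates)
                {
                    if (other != self &&
                        Vector2.DistanceSquared(self.Position, other.Position)
                            <= self.AwarenessRadius * self.AwarenessRadius)
                    {
                        self.InRange.Add(other);
                    }
                }
            }

            // A gunshot then only touches the cached set instead of searching the whole area.
            public static void OnGunshot(Entity shooter)
            {
                foreach (var listener in shooter.InRange)
                    listener.ReactToSound(shooter.Position);
            }
        }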

    Read the article

  • SQL Server DBA - How to get a good one!

    - by ETFairfax
    I'm a lone developer. I am currently developing an application which is seeing me get way way way out of my depth when it comes to SQL DBA'ing, and have come to realise that I should hire a DBA to help me (which has full support from the company). Problem is - who? This SO thread sees someone hire a DBA only to realise that they will probably cause more harm than good! Also, I have just had a bad experience with an ASP.NET/C# contractor that has let us down. So, can anyone out there on SO either... a) Offer their services. b) Forward me onto someone that could help. c) Give some tips on vetting a DBA. I know this isn't a recruitment site, so maybe some good answers for c) would be a benefit for other readers!! BTW: The database is SQL Server 2008. I'm running into performance issues (mainly timeouts) which I think would be sorted out by some proper indexing. I would also need the DBA to provide some sort of maintenance plan, and to review how our database will deal with what we intend to throw at it in the future!
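    (To illustrate the kind of indexing I mean, purely as a sketch with made-up table and column names: a nonclustered index on the columns the slow queries filter and join on, with the selected columns included so the query can be answered from the index alone.)

        -- Example only: names are hypothetical, not from our actual schema
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
            ON dbo.Orders (CustomerId)
            INCLUDE (OrderDate, TotalAmount);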

    Read the article

  • Online Media Daily: Oracle Takes Social Marketing Seriously

    - by Kathryn Perry
    In the article published on Nov 12, 2012 and titled "Oracle Integrates Social Marketing Into Enterprise To Gain Marketing Revs," Online Media Daily explores Oracle's approach to social marketing. The publication says that Oracle is focused on showing marketers how to integrate social data into corporate business processes and how to "socialize" the corporate world. The article goes on to state: "Enterprise software companies like Oracle, SAP, IBM, Salesforce and Microsoft have been slowly building up an expertise in social marketing to integrate the data into traditional enterprise resource planning, and customer relationship management tools into social marketing tools. Meg Bear, VP of cloud social platform at Oracle, sees the integration with ERP systems as a differentiator for the company. Oracle Social Relationship Management launched last month. It integrates social data into traditional enterprise applications like Oracle Fusion Marketing, Oracle Fusion Sales Catalog, Oracle ATG Web Commerce and Oracle ERP." (Read more: http://www.mediapost.com/publications/article/187096/oracle-integrates-social-marketing-into-enterprise.html) The post goes on to quote a Forrester analyst stating the following: "There's room for any process-driven application to run more efficiently, especially if they're socially enabled," said Rob Koplowitz, VP and principal analyst at Forrester Research. "It takes the human part of the process not generally captured today to provide better access to content, information and collective actions." Koplowitz said several acquisitions support Oracle's long-term vision: to layer social on top of other enterprise apps, like its ERP platform. With many great acquisitions under our belt and organically grown social tools, the market recognizes that Oracle is poised to seize the moment in socially enabled business apps. Continue reading the full article here.

    Read the article

  • Installing Ubuntu on Asus G75VW (UEFI)

    - by user101653
    You all are my last hope... help! I bought an Asus G75VW from Best Buy. It has the new UEFI BIOS instead of the old-style BIOS (1980s) and has Windows 8 preinstalled. I cannot get the G75VW to install Ubuntu 12.10 in EFI mode. I did get Ubuntu to load if I changed the BIOS to CSM, and the computer sees and installs Ubuntu in "legacy mode". I attempted Boot-Repair, and Ubuntu will load after 1 minute, but as legacy BIOS only. If I change the BIOS to UEFI, "Binary is whitelisted" is displayed and I get a purple screen. My goal... keep my preinstalled Windows 8 on internal drive bay 1, install Ubuntu 12.10 on internal drive bay 2... and somehow be able to choose between the two at boot. I am at a loss. I am a software programmer, but I am very bad at understanding BIOS and partitioning. Any ideas? Has anyone done what I want to do? This is a full second day on my "issue"! If I cannot get Ubuntu installed, I'm returning the laptop and will "wait" until these obstacles of UEFI/EFI are properly handled, to allow people to load EFI-based Ubuntu without a hitch. Thanks, Dave

    Read the article

  • Connect to NFS on availability

    - by berkes
    What would be a good way to automatically mount an NFS share when it is available? I have the following: a media server at home, running Ubuntu 10.10 with GUI *); a laptop often at home, often on the road or at clients, Ubuntu 10.10 with GUI. What I'd like is my laptop connecting to the NFS share (or any other mountable networked filesystem) so that Banshee sees all the music, new podcast entries (and video) from that media server. I already have Firefly (mt-daapd) running, which works, but is flaky on both the server side and the client side. But its biggest downside is that I cannot easily fix metadata on files on the media server this way; DAAP is read-only by design. I can mount the NFS share manually with sudo mount /media/nfsmultimedia/. I am not looking for a manual or howto on setting up an NFS client and server, merely a way to have this working more transparently. Obviously I'd like the NFS share to be unmounted if the network is no longer available (i.e. when I open my laptop lid at a client's office). It may be that NFS is not suited for this; in that case, I'd love to hear other options. :) *) Actually: I also have a fileserver, backupserver and webserver to which I'd like to connect in a somewhat similar way. Right now I connect to these over SSH, using gvfs.
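    (For the record, the sort of behaviour I'm after, sketched with autofs as one possible automounter. The paths, map file name and server name are placeholders, not my actual setup; the idea is that the share is mounted on first access and dropped again once it has been idle.)

        # /etc/auto.master -- hand a directory over to the automounter
        /media/nfs  /etc/auto.media  --timeout=60

        # /etc/auto.media -- mount mediaserver:/export/multimedia on demand
        multimedia  -fstype=nfs,soft,intr  mediaserver:/export/multimedia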

    Read the article

  • 14.04.1 LTS 64 bit from USB does not see my Windows 7 when I go to install it

    - by W J
    I suppose this is as much a question as a heads-up/warning: 14.04.1 LTS only gave me the option of writing over everything on one of my Windows 7 machines. If I'd pushed the wrong button and continued, I would have lost some mighty important items. On a similar Windows laptop I successfully installed 14.04.1 LTS 32-bit alongside Windows 7 rather easily (and I dig it!); there was a prompt in that case that let me select to install it alongside my Windows OS. Yikes, not in this case. This laptop was formatted NTFS; the Ubuntu USB pendrive I formatted FAT32. Could be a clue? It looks like there is an advanced "install Ubuntu" option, but I am not that advanced. I may try to use Windows diskformat (What, fdisk is gone?) to make a partition, then see if the Ubuntu on my USB stick "sees" my Windows install. If anybody has a better plan, let me know. Mahalo. AHA!? P.S. It's an SSD hard drive, perhaps that's the crux?

    Read the article

  • NVIDIA X Server TwinView issues with 2 GeForce 8600GT cards

    - by Big Daddy
    Full disclosure: I am relatively new to Linux and loving the learning curve, but I am undoubtedly capable of ignorant mistakes. I have a rig I am building for my home office desktop. The basics: a Gigabyte motherboard, an AMD 64-bit processor, Ubuntu 12.04 64-bit, two NVIDIA GeForce 8600GT video cards using an SLI bridge, and two 22" DVI-input HP monitors. So here is my issue (it is driving me nuts) with X Server. If I plug both monitors into GPU 0, X Server auto-configures TwinView and all is grand, works like a charm, though both are running off of GPU 0. If I plug one monitor into GPU 0 and one monitor into GPU 1, X Server enables the monitor on GPU 0, and sees but keeps disabled the monitor on GPU 1. My presumption (we all know the saying about presumptions) is that all I would have to do is select the disabled monitor on GPU 1, drop down the Configure pull-down and select TwinView... problem is, when the monitors are plugged in this way, the TwinView option is greyed out and cannot be selected. What am I not understanding here? Is there some sort of configuration I need to do elsewhere for Ubuntu to utilize both GPUs? Any help will be most appreciated, thanks in advance.
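    (In case it matters for an answer: from what I've read, TwinView only spans the outputs of a single GPU, so driving one monitor from each card apparently means configuring a separate X screen per card in xorg.conf and joining them. A bare-bones sketch of what I think that file would look like follows; the BusIDs are placeholders that would have to come from lspci, and I haven't verified this on my hardware.)

        Section "ServerLayout"
            Identifier  "TwoCards"
            Screen      0 "Screen0" 0 0
            Screen      1 "Screen1" RightOf "Screen0"
            Option      "Xinerama" "1"
        EndSection

        Section "Device"
            Identifier  "Card0"
            Driver      "nvidia"
            BusID       "PCI:1:0:0"
        EndSection

        Section "Device"
            Identifier  "Card1"
            Driver      "nvidia"
            BusID       "PCI:2:0:0"
        EndSection

        Section "Screen"
            Identifier  "Screen0"
            Device      "Card0"
        EndSection

        Section "Screen"
            Identifier  "Screen1"
            Device      "Card1"
        EndSection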

    Read the article

  • How to Manage and Use LVM (Logical Volume Management) in Ubuntu

    - by Justin Garrison
    In our previous article we told you what LVM is and what you may want to use it for, and today we are going to walk you through some of the key management tools of LVM so you will be confident when setting up or expanding your installation. As stated before, LVM is an abstraction layer between your operating system and your physical hard drives. What that means is that the hard drives and partitions your operating system works with are no longer tied to the physical hard drives and partitions they reside on. Rather, the hard drives and partitions that your operating system sees can be any number of separate hard drives pooled together or in a software RAID.
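    As a taste of the tools covered, the basic LVM stack is built with three commands: mark drives as physical volumes, pool them into a volume group, and carve logical volumes out of the pool (device names, sizes and volume names below are examples only):

        # Mark the raw drives as LVM physical volumes
        sudo pvcreate /dev/sdb /dev/sdc

        # Pool them into a single volume group
        sudo vgcreate media-vg /dev/sdb /dev/sdc

        # Carve a logical volume out of the pool and put a filesystem on it
        sudo lvcreate -L 500G -n media-lv media-vg
        sudo mkfs.ext4 /dev/media-vg/media-lv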

    Read the article

  • How do you convince the client their application's backend needs a rewrite?

    - by Richard DesLonde
    I have been supporting a LOB WinForms application for a client for the last 3 years. The application is built with a simple monolithic architecture and uses .NET 2.0. The application is a core part of their operations and its longevity is paramount. It needs to evolve with their evolving business processes, as well as implement improved functionality, etc.... This brings me to believe that this application needs an overhaul of sorts on the back-end. The problem is that changing a back-end is "invisible"... i.e. the user never actually sees it. It's a quality of the system that is changing (stability, maintainability, reliability, longevity), not some functional requirement that will be easily seen... i.e. the ROI is not obvious. There is a lot of new functionality to be added to the front-end as well (user experience). I am considering a strategy of changing the back-end over time... i.e. when making a change or adding a feature to the front-end, change those components in the back-end that are affected; eventually you get to everything. How do I convince the client that we need to rebuild the back-end?

    Read the article

  • Weird 302 Redirects in Windows Azure

    - by Your DisplayName here!
    In IdentityServer I don’t use Forms Authentication but the session facility from WIF. That also means that I implemented my own redirect logic to a login page when needed. To achieve that I turned off the built-in authentication (authenticationMode="none") and added an Application_EndRequest handler that checks for 401s and does the redirect to my sign-in route. The redirect only happens for web pages and not for web services. This all works fine in local IIS – but in the Azure Compute Emulator and Windows Azure many of my tests are failing and I suddenly see 302 status codes where I expected 401s (the web service calls). After some debugging kung-fu and enabling FREB I found out that there is still the Forms Authentication module in effect, turning 401s into 302s. My EndRequest handler never sees a 401 (despite turning forms auth off in config)! Not sure what’s going on (I suspect some inherited configuration that gets in my way here). Even if it shouldn’t be necessary, an explicit removal of the forms auth module from the module list fixed it, and I now have the same behavior in local IIS and Windows Azure. Strange.
        <modules>
          <remove name="FormsAuthentication" />
        </modules>
    HTH Update: Brock ran into the same issue, and found the real reason. Read here.

    Read the article

  • Dual Monitor 'How To' for 12.04

    - by Kim Prince
    I recently built my own PC and was delighted with the result, except for a problem with dual monitors. After having tried a few different combinations of hardware, I think what I really need is a 'how to' explanation. My motherboard is an MSI Z77MA-G45, which has an analogue, a DVI, and an HDMI port. Initially I hooked monitors up to the DVI and analogue ports and it seemed to work fine. Both screens worked independently of each other; it was great. After a few days I started turning my PC off at night, and when I tried to turn it back on it would boot into terminal mode. I would have to turn one of the monitors off, and after rebooting a few times it would eventually boot into an X Window session. Occasionally I would see an error relating to Xorg. I upgraded the motherboard BIOS but that made no difference. Eventually, I installed a graphics card - an NVIDIA GeForce GT 520. Now it seems that my onboard graphics have been disabled completely, and I am reliant on the graphics card. Furthermore, the graphics card seems to only recognise one screen at a time. (The first time I rebooted with both plugged in, it flashed up a message saying that it was auto-selecting DVI.) Anyhow, I think I need some 'how to' (or perhaps 'where to') from here. For example, is X Windows configuration the next place to look? And how do I go about configuring X Windows? (Note that in System Settings it says my graphics driver is 'unknown', and when I ask it to detect monitors, it sees only the one!)

    Read the article

  • What should you do when presented with a horrible design?

    - by plua
    Our firm makes websites. We also design websites. But sometimes our client brings his/her own design. This is often made by an in-house designer, or it is the same design they used for something else. However, sometimes these designs look awful. And I am talking really unprofessional, unbalanced, uncool. But the client really wants this design. I really do not like working with a design that is so awful. It takes away all pleasure in coding. You code. You check the demo. Works great. Looks awful. It's just not fun. And ultimately the client might be happy, but 1) I do not feel proud of the final product and 2) the community sees you 'develop' ugly websites, which is bad for your image. Anybody experiencing this kind of stuff? What do you recommend? I've been thinking: Blocking these clients. If somebody has an 'own' design, ask to see it first. Then somehow politely decline. Drawback: you lose a client. Create a new design. Have our in-house designers work on something really cool. Drawbacks: the client would need to pay for this (without asking for it), or it will be declined and the company loses time = money. And it might come as an insult if you propose a new design out of the blue. THEIR designer won't like it for sure. Put a clear disclaimer at the bottom of the site: Website design by XXXXX, Website development by US. Helps for the community impact (if people pay attention), but not for the uneasy feeling.

    Read the article

  • Best way to lay out the website when sections of it are almost identical

    - by Linas
    So, I have a minisite for a mobile application that I wrote. The mobile application is a public transport (transit) schedule viewer for a particular city (let's call it Foo), and I'm trying to sell it via that minisite. I publish that minisite at www.myawesomeapplication.com/foo/. It has the usual "standard" subpages, like "About", "Compatible phones", "Contact", etc. Now, I have decided to create analogous mobile applications for other cities, Bar and Baz. These mobile applications (products) would be almost identical to the one for the Foo city, thus the minisites for those would (should) look very similar too (except for some artwork and Foo = Bar replacement). The question is: what do you think would be the most logical way to lay out the website in this situation, both from a business and a search engine perspective? In other words, should I just duplicate the /foo/ website to /bar/ and /baz/, or would it be better to try to create a single website under the root path (/)? I don't want search engine penalties for almost-duplicate information under /foo/, /bar/ and /baz/, and also I don't want a messy, non-localized website (I guess the user is more likely to buy something if he/she sees "This-and-that is the application for NYC, the city you live in", not "This-and-that is the application for city A, city B, ..., NYC, ..., and city Z.")

    Read the article

  • Google is re-indexing pages after redirecting URLs from HTTP to HTTPS incorrectly

    - by SLIM
    I upgraded my site so that all pages have gone from using HTTP to HTTPS. I didn't consider that Google treats HTTPS pages differently than HTTP. I recreated my sitemap so that all links now reflect the new HTTPS URLs and let it be for a few days. (Whoops!) Google is now re-indexing all the HTTPS pages. I have about 19k pages on the site, and Google has already indexed about 8k of the new HTTPS pages. The problem is that Google sees all of these as brand new pages when many of them have a long HTTP history. Of course most of you will recognize the problem: I didn't set up a 301 from the old HTTP to the new HTTPS URLs. Is it too late to do this? Should I switch my sitemap back to HTTP URLs and then 301 redirect to the new HTTPS URLs? Or should I leave the sitemap as is and set up 301 redirects anyway? I'm not even sure if Google is trying to reach the HTTP site anymore. Currently the site is doing 303 redirects (from HTTP to HTTPS), although I haven't figured out why yet.

    Read the article

  • Turn Based Event Algorithm

    - by GamersIncoming
    I'm currently working on a small roguelike in XNA, which sees the player in a randomly generated series of dungeons fending off creeps, as you might expect. As with most roguelikes, the player makes a move, and then each of the creeps currently active on screen makes a move in turn, until all creeps have updated and it returns to the player's go. On paper, the simple algorithm is straightforward: the player takes a turn; the turn number increments; each active creep updates its position; once all active creeps have updated, the player takes the next turn. However, when it comes to actually writing this in more detail, the concept becomes a bit more tricky for me. So my question comes as this: what is the best way to handle events taking turns to trigger, where the completion of each event triggers the next, when dealing with a large number of creeps (probably stored as an array of an enemy object)? And is there an easier way to create some kind of engine that just takes all objects that need updating and chains them together so their updates follow suit? I'm not asking for code, just algorithms and theory in the direction of objects triggering updates one after the other, in a turn-based manner. Thanks in advance. Edited: Here's the code I currently have that is horrible :/
        if (player.getTurnOver() && updateWait == 0)
        {
            if (creep[creepToUpdate].getActive())
            {
                creep[creepToUpdate].moveObject(player, map1);
                updateWait = 10;
            }
            if (creepToUpdate < creep.Length - 1)
            {
                creepToUpdate++;
            }
            else
            {
                creepToUpdate = 0;
                player.setTurnOver(false);
            }
        }
        if (updateWait > 0)
        {
            updateWait--;
        }
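    One sketch of the sort of chaining engine I mean (just to make the idea concrete; IActor and TakeTurn are invented names, Player and Creep are assumed to implement the interface, the existing getActive() accessor is reused, and System.Collections.Generic is required):

        // Sketch: a round-robin turn scheduler. Each actor takes its turn when it
        // reaches the front of the queue, then is re-enqueued, so the player and
        // the creeps alternate without index bookkeeping or wait counters.
        interface IActor
        {
            // Returns true once this actor's move for the current turn has finished.
            bool TakeTurn();
        }

        Queue<IActor> turnQueue = new Queue<IActor>();

        void BeginRound(Player player, Creep[] creeps)
        {
            turnQueue.Clear();
            turnQueue.Enqueue(player);
            foreach (var creep in creeps)
                if (creep.getActive())
                    turnQueue.Enqueue(creep);
        }

        void Update()
        {
            if (turnQueue.Count == 0)
                return;

            IActor current = turnQueue.Peek();
            if (current.TakeTurn())          // a turn may take several frames (animations etc.)
            {
                turnQueue.Dequeue();
                turnQueue.Enqueue(current);  // back of the line for the next round
            }
        }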

    Read the article

  • Leadership Tip – Vent Up!

    - by D'Arcy Lussier
    Leadership is difficult, for many reasons. One of those reasons is that we not only need to keep ourselves motivated when difficult or challenging times come, but we also need to motivate our teams and keep them focused on the tasks at hand regardless of the mortars being rained down around them. Inexperienced (and experienced) leaders can fall into the “me-too” mentality – that is, the leader sees themselves as just another member of the team instead of the leader of the team. Once a leader changes the team's view so that he/she is seen as a peer and not the leader, dynamics can change on the team. One of the biggest dangers is that the leader starts sharing frustrations, fears, concerns, etc. with the team that they’re supposed to be leading on to victory. This can destroy a team’s morale and productivity. One simple thing you can do to counter this is remember this rule when it comes to venting: Vent Up! Don’t vent sideways or down, vent up. Vent to the people above you – they’re the ones that tend to have the power to actually change things anyway. You as a leader stay healthy by getting your frustrations and concerns off your chest, your team is still insulated from it, and your superiors are aware of issues that need to be addressed or can coach you through the obstacles. D

    Read the article

  • Advice on reconciling discordant data

    - by Justin
    Let me support my question with a quick scenario. We're writing an app for family meal planning. We'll produce daily plans with a target calorie goal and meals to achieve it for our nuclear family. Our calorie goal will be calculated for each person from their attributes (gender, age, weight, activity level). The weight attribute is the simplest example here. When Dad (the fascist nerd who is inflicting this on his family) first uses the application he throws approximate values into it for Daughter. He thinks she is 5'2" (157 cm) and 125 lbs (56 kg). The next day Mom sits down to generate the menu, looks back over what the bumbling Dad did, quietly fumes that he can never recall anything about the family, and says the value is really 118 lbs! This is the first introduction of the discord. It seems, in this scenario, Mom is probably more correct than Dad, though both are only an approximation of the actual value. The next day the dear Daughter decides to use the program and sees her weight listed. With the vanity only a teenager could muster, she changes the weight to 110 lbs. Later that day the Mom returns home from a doctor's visit the Daughter needed and decides that it would be a good idea to update her Daughter's weight in the program. Hooray, another value, this time 117 lbs. Now how do you reconcile these data points? Measurement error, confidence in parties, bias, and more all confound the data. In some idealized world we'd have a weight authority of some nature providing the one and only truth. How about in our world though? And the icing on the cake is that this single data point changes over time. How have you guys solved or managed this conflict?
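    One way I've thought about making it concrete (a sketch only; the types, enum ordering and resolution rule are all invented for illustration) is to stop storing "the weight" at all, store every observation with its source and timestamp, and resolve a current value with one explicit, arguable rule:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        enum Source { SelfReported, FamilyMember, Clinician }

        class WeightObservation
        {
            public double Pounds;
            public Source Source;
            public DateTime RecordedAt;
        }

        static class WeightResolver
        {
            // Resolution rule kept in one place so it can be debated and changed:
            // prefer the most trusted source, then the most recent entry from it.
            public static double CurrentWeight(IEnumerable<WeightObservation> history)
            {
                return history
                    .OrderByDescending(o => o.Source)      // Clinician > FamilyMember > SelfReported
                    .ThenByDescending(o => o.RecordedAt)
                    .First()
                    .Pounds;
            }
        }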

    Read the article
