Search Results

Search found 7452 results on 299 pages for 'power loss'.


  • XNA 4.0 - Purple/Pink Tint Over All Sprites After Viewing in FullScreen

    - by D. Dubya
    I'm a noob to the game dev world and recently finished the 2D XNA tutorial from http://www.pluralsight.com. Everything was perfect until I decided to try the game in fullscreen mode. The following code was added to the Game1 constructor: graphics.PreferredBackBufferWidth = 800; graphics.PreferredBackBufferHeight = 480; graphics.IsFullScreen = true; As soon as it launched in fullscreen, I noticed that the entire game was tinted; none of the colours were appearing as they should. I removed that code and the game then launched in the 800x480 window, but the tint remained. I commented out all my Draw code so that all that was left was GraphicsDevice.Clear(Color.CornflowerBlue); //spriteBatch.Begin(); //gameState.Draw(spriteBatch, false); //spriteBatch.End(); //spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive); //gameState.Draw(spriteBatch, true); //spriteBatch.End(); base.Draw(gameTime); The result was an empty window that was tinted purple, not blue. I changed the GraphicsDevice.Clear colour to Color.White and the window was tinted pink. Color.Transparent gave a black window. I even tried rebooting my PC, but the 'tint' still remains. I'm at a loss here.

    Read the article

  • UFW blocking random packets on 443

    - by s2jcpete
    All, I have UFW set up to allow traffic on port 443. It works as expected, though I have a large number of UFW BLOCK log entries. To Action From -- ------ ---- 80 ALLOW Anywhere 443 ALLOW Anywhere 22222 ALLOW Anywhere 80 ALLOW Anywhere (v6) 443 ALLOW Anywhere (v6) 22222 ALLOW Anywhere (v6) However, in my syslog file I see this: [UFW BLOCK] IN=eth0 OUT= MAC=XXX SRC=<foreignip> DST=<serverip> LEN=40 TOS=0x00 PREC=0x00 TTL=116 ID=22025 DF PROTO=TCP SPT=49622 DPT=443 WINDOW=0 RES=0x00 ACK RST URGP=0 About 30 or so seconds later, pound (which I'm using for SSL decryption and port redirection) throws a connection timed out message. I'm assuming this is because UFW is blocking the packet. I'm at a loss as to an explanation. Could the packet be malformed or something? Is this normal? Edit - I have since changed /etc/defaults/ufw and set ipv6=no, so the v6 rules are no longer in the mix. The server is still showing the block / connection timed out behavior though. The new ufw status output is: Status: active Logging: on (low) Default: deny (incoming), allow (outgoing) New profiles: skip To Action From -- ------ ---- 80 ALLOW IN Anywhere 443 ALLOW IN Anywhere 22222 ALLOW IN Anywhere

    Read the article

  • To Virtual or Not to Virtual

    - by Kevin Shyr
    I recently made a comment, "I hate everything virtual", while responding to a SQL Server performance question. I then promptly fired up my Hyper-V development environment to do my proof of concept stuff, and realized that I had committed the cardinal sin of making a generalized comment about something instead of saying "it depends". The bottom line is that if the virtual environment gives the throughput that the server needs, then it is not that big of a deal. I have just seen so many environments set up with SQL Server sitting in a virtual environment sitting on a SAN, so on top of having to plan for data loss, I now have to plan for my virtual environment failing for so many different reasons, though SQL 2012 Availability Groups should make that easier. To me, a virtual environment makes sense for a stateless application with big scalability requirements, but doesn't give much benefit to an application where performance and data integrity are both important. If security is not a concern, I would just build servers with multiple instances on them to balance the workload. Maybe this is also too generalized a comment, and I'll confess that I'm not a DBA by trade. I'd love to hear the pros and cons of virtualizing a SQL Server, or other examples where virtualization makes total sense (not just money, but recovery, rollback, etc.)

    Read the article

  • Reduce weight in a healthy way

    - by krnites
    This post is my daily summary of the activities I am taking to reduce my overall weight by 15 lbs. I am not an overweight person; for my height of 5'11" I have a decent weight of 178.4 and a good build. But for a long time (approx. 3 months) I have been thinking of getting 2-3 abs (note: not 6 abs) and reducing my weight to below 170 (which I was 2 years ago). This post will work as my daily diary of what I ate, what exercises I did, and what apps/software I used to monitor my weight loss. Sometimes it will not contain much information, but when I do some research it will contain information for people who really want to reduce weight. My current target for the next few weeks is to run 2.5 miles every day under 30 minutes, eat a lot of fruit and vegetables, no burgers or tacos, and no meat, either fat or lean, and to check my body weight every four hours. Here are the readings for today: 10:00 AM - 178.2, 2:00 PM - 178.4. I have already run 2 miles in 25 minutes, did a little bit of shoulder exercise, and had eaten a small bowl of vegetable biryani and two pieces of bread. I hope this post and all coming posts will give the reader the first-hand experience of a person who has read every blog and article related to reducing weight and is now trying to do it by implementing all of them himself. My approach will be a healthy way of reducing weight; I am not going to starve myself or eat only fruit all day. I will enjoy all the food items that I like to eat and will work hard on my body. This way I will reduce the weight naturally and will increase the flexibility, durability and immunity of my system/body.

    Read the article

  • Computer becomes unreachable on LAN after some time

    - by Ashfame
    I work on my laptop and ssh into my desktop. I use a lot of key-based authentication for many servers for work, but recently I couldn't log in because ssh would pick up and try all the keys and stop trying before ultimately falling back to password-based login. So right now I am using this command: ssh -X -o PubkeyAuthentication=no [email protected] #deskto The issue is that after some time the desktop just becomes unreachable from the laptop. I can't open its localhost through the IP, and today I tried pinging it and found a weird thing: instead of 192.168.1.4, it tries to ping 192.168.1.3, which I am sure is the root cause, as it just can't reach 192.168.1.4 when it's actually trying for 192.168.1.3. Ping command output: ashfame@ashfame-xps:~$ ping 192.168.1.4 PING 192.168.1.4 (192.168.1.4) 56(84) bytes of data. From 192.168.1.3 icmp_seq=1 Destination Host Unreachable From 192.168.1.3 icmp_seq=2 Destination Host Unreachable From 192.168.1.3 icmp_seq=3 Destination Host Unreachable From 192.168.1.3 icmp_seq=4 Destination Host Unreachable From 192.168.1.3 icmp_seq=5 Destination Host Unreachable From 192.168.1.3 icmp_seq=6 Destination Host Unreachable From 192.168.1.3 icmp_seq=7 Destination Host Unreachable From 192.168.1.3 icmp_seq=8 Destination Host Unreachable From 192.168.1.3 icmp_seq=9 Destination Host Unreachable ^C --- 192.168.1.4 ping statistics --- 10 packets transmitted, 0 received, +9 errors, 100% packet loss, time 9047ms pipe 3 Also, the ping messages come in batches rather than one by one. (izx's answer explains the weirdness I thought there was in the ping command.) I did check the desktop; its local IP is still the same, so something is going on in my laptop. Any ideas? P.S. - Laptop runs Ubuntu 12.04 & Desktop runs Ubuntu 11.10. Laptop is connected through wifi to the router and Desktop is connected through LAN to the router. Update: Even after setting up static IP leases in the router settings, I again ran into this issue.

    Read the article

  • Multiplayer approach for tablets on wi-fi (FPS/TPS)? Server authority, etc

    - by Fraggle
    Looking for some guidance on what has worked well for others in implementing a multiplayer FPS/TPS type game on tablets (probably just 2-6 players at a time). The main issue is that tablets/phones are typically "less" connected than, say, a console or PC might be, and therefore my thought is that complete server authority over everything is not going to work. But maybe I'm off base on that. So I guess I'm struggling with what (if anything) should happen on a central server and what should happen locally. Or is a centralized approach even needed? Some approaches I might take: Player movement: my thought is to control this locally (player-owner) and update the server with the position (which then sends it out to other clients). Use client-side prediction for opponent players so that a connection loss will not show, for example, a plane stopping in mid air. The server will send updates and try to smoothly correct an opponent player's position to the server-updated one, but don't update the owner's position on the owner's device from the server. Powerups (health kit/ammo/coins/etc.): need to see them disappear immediately, so do it locally. Add the health locally, but perhaps allow for server correction. If the server doesn't see the player near that powerup, reject the powerup and adjust the server-side health for the player. Fire weapons: have to see it happen right away, so fire locally, create a local bullet and send it on its way. Send an RPC to the server so that this player also fires on the other clients. Hit detection: gets trickier. Make the bullet/projectile disappear locally, and perhaps perform local hit animations (shaking, whatever). Non-authoritative approach: take the damage locally and send an RPC to the server or to others to update health and inform them of the hit. Authoritative approach: don't take the damage or adjust health; the server will do that if it detects a hit. Anyway, that's my current thought stream. Let me know what you think of the above or what has worked for you.
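
    A minimal sketch of the movement scheme described above (not code from the post; the class and field names are illustrative): the owning client moves its own player locally, while remote players are dead-reckoned from the last server update and smoothly corrected rather than snapped.

      # Python sketch, assuming a fixed-timestep game loop calls tick() each frame.
      class RemotePlayer:
          def __init__(self):
              self.x, self.y = 0.0, 0.0            # position we actually draw
              self.pred_x, self.pred_y = 0.0, 0.0  # predicted from server data
              self.vx, self.vy = 0.0, 0.0          # last velocity reported by the server

          def on_server_update(self, x, y, vx, vy):
              # Record the authoritative state; do not snap the drawn position.
              self.pred_x, self.pred_y = x, y
              self.vx, self.vy = vx, vy

          def tick(self, dt, correction_rate=5.0):
              # Dead-reckon so a dropped update doesn't freeze the player in mid air...
              self.pred_x += self.vx * dt
              self.pred_y += self.vy * dt
              # ...then blend the drawn position toward the prediction.
              t = min(1.0, correction_rate * dt)
              self.x += (self.pred_x - self.x) * t
              self.y += (self.pred_y - self.y) * t

    The locally controlled player would skip this entirely and apply input directly, exactly as the post suggests for the owner's device.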

    Read the article

  • How can state changes be batched while adhering to opaque-front-to-back/alpha-blended-back-to-front?

    - by Sion Sheevok
    This is a question I've never been able to find the answer to. Batching objects with similar states is a major performance gain when rendering many objects. However, I've learned various rules for drawing objects in the game world: draw all opaque objects front-to-back; draw all alpha-blended objects back-to-front. Some of the major parameters to batch by, as I understand it, are textures, vertex buffers, and index buffers. It seems that, as long as you are adhering to the above two rules, there's little to be done in regards to batching. I see one possibility to batch while still adhering to the above two rules: opaque objects can still be drawn out of depth order, because drawing them front-to-back is merely a fillrate optimization, and state changes may very well be far more expensive than the overdraw caused by drawing out of depth order. However, non-opaque objects, those that require alpha-blending at least, must be drawn back-to-front in order to avoid rendering artifacts. Is the loss of the fillrate optimization for opaques worth the state-batching optimization?
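
    As a rough illustration of the trade-off being asked about (not from the post; the attribute names are made up), the two orderings can be expressed as two sort keys: opaques grouped by state with depth only as a tie-breaker, alpha-blended objects strictly back-to-front.

      # Python sketch: depth(o) returns an object's distance from the camera.
      def sort_for_drawing(objects, depth):
          opaque  = [o for o in objects if not o.alpha_blended]
          blended = [o for o in objects if o.alpha_blended]

          # Batch-friendly order: state changes dominate, depth only breaks ties,
          # so some overdraw (lost early-z rejection) is accepted.
          opaque.sort(key=lambda o: (o.texture_id, o.vertex_buffer_id, depth(o)))

          # Correctness order: farthest first, regardless of what it costs in state changes.
          blended.sort(key=lambda o: -depth(o))

          return opaque + blended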

    Read the article

  • Blogger Blog Takes Ages to Load after Custom Domain Redirection

    - by abhisek
    I recently bought a custom domain for a Blogger blog (technabled.com) that I have had for some time now. I followed the instructions in Blogger's documentation and added A records and CNAME records with my DNS provider. But now some strange problems are cropping up. If I connect to my broadband network and then ping technabled.com, it times out. Then, if I visit the web page, it takes almost one and a half minutes to load; after that, pinging technabled.com shows the expected result. This is not just me: I asked some of the regular readers, who reported the same issue. As a result of this, I am losing a lot of visits. What is stranger is that subsequent visits to the blog are faster. I have checked with a few online services to test the performance. WebPageTest seems to say the same thing: http://www.webpagetest.org/result/110117_1N_7PE/ (please see the First View / Repeat View time). Also, the PageSpeed score is not that bad, so I am ruling out other possibilities. I am at a loss as to what I should do to find a solution. Help is much appreciated. :)

    Read the article

  • Wired Connection Problem

    - by Dave
    After upgrading to 12.04 my internet connection no longer works. More precisely, it is really, really slow, and will occasionally connect, but does so only for a few moments and then disappears again. I am on a Lenovo Workstation e20. Output of ifconfig: eth0 Link encap:Ethernet HWaddr 70:f3:95:00:64:3e inet addr:192.168.1.20 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::72f3:95ff:fe00:643e/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:7398 errors:0 dropped:74 overruns:0 frame:0 TX packets:6684 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:5407828 (5.4 MB) TX bytes:854343 (854.3 KB) Interrupt:20 Memory:fb120000-fb140000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:1587 errors:0 dropped:0 overruns:0 frame:0 TX packets:1587 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:152089 (152.0 KB) TX bytes:152089 (152.0 KB) I am really at a loss for what to do. I am relatively new to Ubuntu; I searched the other users' questions and couldn't figure this out.

    Read the article

  • Twinview broken on upgrade to ubuntu 10.10

    - by mapkyca
    I have been on 9.10 for over a year on the grounds that if it ain't broke, don't fix it. However, I had a spare weekend and figured it was probably about time... I performed an upgrade to 10.04, and everything seemed to proceed smoothly, so I took the plunge and went for 10.10. Disaster. My TwinView Nvidia display, which had been working perfectly, is now broken. On boot everything seems fine, but when X starts and the second monitor springs into life, the primary winks out and switches off - almost as if it's been put into an unsupported display mode. The system seems to think there's a second monitor - the Nvidia logo is split across the two screens - but it can't seem to start. Things I've tried: swapping the monitors (one is older than the other, and it's definitely the port, not the actual monitor); rolling back to an old xorg.conf from prior to the upgrade; installing a non-beta driver direct from Nvidia (this seems to start both monitors but then apparently stops boot and causes the second display to 'wink'; TwinView seems non-functional, both displays are mirrored); disabling EDID; disabling TwinView, logging in and attempting to use the Nvidia config to re-detect the monitors (the second monitor is falsely detected and won't go higher than 1024x768; selecting 'apply' causes one screen to go blank and the other to display garbage); googling for about 5 hours looking for similar problems - none of the offered solutions seemed to work. I'm at a loss, and it is looking very much like I'm going to have to go through a time-consuming reinstall to downgrade back to the working 10.04. Any thoughts?

    Read the article

  • High Availability

    - by mattjgilbert
    Udi Dahan presented at the UK Connected Systems User Group last night. He discussed High Availability and pointed out that people often think this is purely an infrastructure challenge. However, the implications of system crashes, errors and the resulting data loss need to be considered and managed by software developers. In addition, a system should remain both highly reliable (backwards compatible) and available during deployments and upgrades. The argument is that you cannot be considered highly available if your system is down every time you upgrade. For our recent BizTalk 2009 upgrade we made use of our Business Continuity servers (note the name, rather than calling them Disaster Recovery servers) to ensure our clients could continue to operate while we upgraded the Production BizTalk servers. Then we failed back to the newly built 2009 environment and rebuilt the BC servers. Of course, in the event of an actual disaster there was a window where one or the other set was not available to take over - however, our Staging machines were already primed to switch to production settings, having been used for testing the upgrade in the first place. While not perfect (the failover between environments was not automatic and not without some minimal outage), planning the upgrade in this way meant BizTalk was online during the rebuild and upgrade project, we didn't have to rush things to get back online, and planning meant we were ready to be as available as we could be in the event of an actual disaster.

    Read the article

  • Conventions for search result scoring

    - by DeaconDesperado
    I assume this type of question is more on-topic here than on regular SO. I have been working on a search feature for my team's web application and have had a lot of success building a multithreaded, "divide and conquer" processing system to work through a large amount of fulltext. Our problem domain is pretty specific. Users of the app generate posts, and as a general rule, posts that are more recent are considered to be of greater relevance. Some of the data we are trying to extract from search is very specific (users' feelings about specific items or things) and we are using Python NLTK to do named-entity extraction to find interesting likely query terms. Essentially we look for descriptive adjective-noun pairs and generate a general picture of a user's expressed sentiment as a list of tokens. This search is intended as an internal tool for our team to draw out a local picture of sentiments like "soggy pizza." There's some machine learning in there too, to do entity resolution from terms like "soggy" to all manner of adjectives expressing nastiness. My problem is that I am at a loss for how to go about scoring these results. The text being searched is split up into tokens in a list, so my initial approach would be to normalize a float score between 0.0 and 1.0, generated from how far into the list the terms appear and how often they are repeated (a later mention of the term being worth less, an earlier one more; greater frequency, greater score, etc.). A certain amount of weight could be given to the timestamp as well, though I am not certain how to calculate this. I am curious whether anyone has had to solve a similar problem of grading search relevance between appreciable metrics (frequency, term location/colocation, recency), and whether there are any guidelines for how to weight each. I should mention as well that the final fallback procedure in the search is to pipe the query to Sphinx, which has its own scoring practices. Sphinx operates as the last resort in case our application-specific processing can't find any eligible candidates.
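
    One possible starting point for the kind of scoring described (a sketch only; the weights and half-life below are arbitrary placeholders, not established conventions):

      import math, time

      def score_post(tokens, query_terms, post_timestamp, now=None,
                     w_text=0.7, w_recency=0.3, half_life_days=30.0):
          # Toy relevance score in [0, 1]: earlier and more frequent mentions of the
          # query terms score higher, discounted by the age of the post.
          now = now if now is not None else time.time()
          n = max(len(tokens), 1)

          raw = 0.0
          for term in query_terms:
              for pos, tok in enumerate(tokens):
                  if tok == term:
                      raw += 1.0 - (pos / n)          # later mentions are worth less
          text_score = 1.0 - math.exp(-raw)           # squash so long posts can't dominate outright

          age_days = max(0.0, (now - post_timestamp) / 86400.0)
          recency = 0.5 ** (age_days / half_life_days)   # exponential time decay

          return w_text * text_score + w_recency * recency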

    Read the article

  • ubuntu 12.10 Lenovo b570e, WiFi connected but not working

    - by koogee
    I'm running an Ubuntu 12.10 live USB on a Lenovo B570e. It has an Atheros wifi card that connects to my home network, but I can't browse AT ALL. My network is multiple clients --> router --> ISP modem --> Internet. I can ping my router (192.168.0.1) but not my ISP's modem (192.168.15.1). I have 3 other computers connected to the same router that are working fine (in fact I'm posting from one). ping -c 3 google.com gives unknown host google.com, and ping -c 3 8.8.8.8 shows 100% packet loss. I think it's some networking issue. I tried directly connecting it to the router via an ethernet cable, but it's the same issue: it gets an IP and shows the LAN as connected, but it can't browse. If I connect it directly to the ISP modem via an ethernet cable it starts working fine. Connection Information shows: Interface: 802.11 wifi (wlan0) driver: ath9k security: wpa/wpa2 speed: 150mb/s ip: 192.168.0.106 broadcast: 192.168.0.255 subnet: 255.255.255.0 default route: 192.168.0.1 primary dns: 192.168.15.1 I have restarted the router and modem many times and rebooted the live USB many times.

    Read the article

  • Google is displaying "Translate this page" based on a previously registered domain's inbound links

    - by crnm
    I recently started a new project with a newly registered generic TLD domain. As soon as Google started indexing the page, it displayed a "Translate this page" link in SERPs, which tries to translate the page to the language of a small Eastern European country from the language that the site actually uses. I tried everything to prevent this: language meta headers and attributes, localisation through Google Webmaster Tools... all to no avail - nothing helped. After a couple of weeks I spotted dozens of inbound links popping up in Google Webmaster Tools, all coming from that small Eastern European country, from sub-pages that are not active anymore (either sending out 404s or 301s to the main page) and that had been written in that other language. So the domain had been registered before and, by the looks of it, it got a lot of possibly spammy links in that language. I can't even ask the sites where those links should have been to remove them, as they are not physically active anymore, just in Google Webmaster Tools and/or internal data masses... Now I'm at a loss about what to do. As my site is pretty new, it does not have many links pointing towards it in my targeted language, so those are probably not enough to convince Google to attach the right language to it, as Google ignores all other signals about the page language. I'm also unsure whether I should use the "disavow" tool, or a reconsideration request... or what else to do about this miserable state. I have never used these tools before, so I don't have any experience with them. Somehow I have to convince Google about the right language of the page and also to not count/apply/whatever all those historical links from the previous owner. (The domain had been deleted without any traces in Google before I registered it.) Has anyone here ever dealt with a similar "Translate this page" problem? (I've also looked at this thread: How can I prevent Google mistakenly offering to translate a page? but didn't find a solution there.)

    Read the article

  • Computer Says No: Mobile Apps Connectivity Messages

    - by ultan o'broin
    Sharing some insight into connectivity messages for mobile applications. Based on some recent ethnography done by myself, and prompted by a real business case, I would recommend a message that: In plain language, briefly and directly tells the user what is wrong and why - something like: Cannot connect because of a network problem. Affords the user a means to retry connecting (or attempts it automatically); the mobile context of use means users anticipate interruptibility and disruption of task, so they will try again as an effective course of action. Tells the user when the connection is re-established, and off they go. Saves any work already done, implicitly. (Bonus points on the ADF critical task setting scale.) The following images, showing my experience reading an ADF-EMG Google Groups notification on my (Android ICS) Samsung Galaxy S2 during a loss of WiFi, give you a good idea of a suitable kind of messaging user experience for mobile apps in this kind of scenario: an inline connection-lost message with a Retry button, and a connection re-established toaster message. The UX possible is dependent on device and platform features, sure, so remember to integrate with the device capability (see point 10 of this great article on mobile design by Brent White and Lynn Hnilo-Rampoldi), but taking these considerations into account is far superior to a context-free, dumbed-down common error message repurposed from the desktop mentality - the connection to the server has been lost, so just "Click OK" or "Contact your sysadmin".
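
    A platform-neutral sketch of that flow (not Oracle ADF or Android API code; every name here is illustrative): state the problem plainly, save work implicitly, retry with backoff, and announce the reconnect.

      import time

      def sync_with_messages(send, save_draft, notify, max_retries=3):
          save_draft()                        # save any work already done, implicitly
          for attempt in range(1, max_retries + 1):
              try:
                  send()
                  notify("Connected again - your changes have been sent.")
                  return True
              except ConnectionError:
                  notify("Cannot connect because of a network problem. Retrying...")
                  time.sleep(2 ** attempt)    # back off before the next attempt
          notify("Still offline. Your changes are saved and will be sent later.")
          return False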

    Read the article

  • When NOT to use a framework

    - by Chris
    Today, one can find a framework for just about any language, to suit just about any project. Most modern frameworks are fairly robust (generally speaking), with hour upon hour of testing, peer-reviewed code, and great extensibility. However, I think there is a downside to ANY framework in that programmers, as a community, may become so reliant upon their chosen frameworks that they no longer understand the underlying workings, or in the case of newer programmers, never learn the underlying workings to begin with. It is easy to become specialized to a degree that you are no longer a "PHP programmer" (for example), but a "Drupal programmer", to the exclusion of anything else. Who cares, right? We have the framework! We don't need to know how to "do it by hand"! Right? The result of this loss of basic skills (sometimes to the extent that programmers who don't use frameworks are viewed as "outdated") is that it becomes common practice to use a framework where it is not required or appropriate. The features the framework facilitates wind up confused with what the base language is capable of. Developers start using frameworks to accomplish even the most basic of tasks, so that what was once considered a rudimentary process now involves large libraries with their own quirks, bugs, and dependencies. What was once accomplished in 20 lines is now accomplished by including a 20,000-line framework AND writing 20 lines to use the framework. Conversely, one does not want to reinvent the wheel. If I'm writing code to accomplish some basic, common little task, I might feel like I am wasting my time when I know that framework XYZ offers all the features I am after, and a whole lot more. The "whole lot more" part still has me worried, but it doesn't seem that many even consider it anymore. There has to be a good metric to determine when it is appropriate to use a framework. What do you consider the threshold to be? How do you decide when to use a framework, and when not to?

    Read the article

  • Corporate Efficiency

    - by AndyScott
    Thoughts on streamlining the process of getting someone up to speed when they join a project as a new hire or, as is common in some companies, switch from one project to another: Has anyone heard of a strategy (including an emphasis on consistent, ongoing documentation) that would bring a user up to speed quickly? Has there been any thought given to focused documentation, specific to a role within a project? Or formalized mentoring within a project that goes beyond a "system walkthrough"? It is often overlooked how much time is wasted when a senior-level worker is brought on board. It's assumed that they will know the right questions to ask; they are the type of people that normally learn quickly, and in their own ways, so let them get by with what's out there. Having a user without a computer will cost you measurable worker hours, making it an easy target to shoot at (and rightly so). Not getting them up to speed as quickly as possible is an efficiency issue that seems to have become an industry-standard, accepted loss. Given the complexity of the projects within most companies, and the frequency with which users are shifted from one project to another based on need, I think this is an area that bears consideration.

    Read the article

  • Expanding development team for a startup

    - by acjohnson55
    I'm a software developer and co-founder of a start-up that's in a sprint to launch a web app in the next 2 months. We have about 3 months of burn time before we need to get some funding. By that time, we want to have a product with active users, and ideally some revenue. I'm fairly confident that I can accomplish the task by myself, but I have also never launched a project of this magnitude. The better the product we can build in this timespan, the faster we can grow our user base, and the better our fundraising options will be. So I'm looking to bring someone on board to hack with me - maybe more than one person. Good help is hard to find, as we all know, and while I'm willing to share equity, I also want that to be contingent on a productive fit. What is the best approach to a trial-type framework for hiring another developer? Something where the other person feels that their work will be rewarded if they do well and that they can't be left empty-handed at my whim, but where I know that if it turns out not to be a good fit, I can pull the cord without significant loss?

    Read the article

  • Update Manager got stuck (but not frozen) while installing downloaded updates. What should I do?

    - by WarriorIng64
    I have just gotten my Ubuntu 12.04 LTS desktop computer reassembled after a trip back home and connected it to my parent's wireless Internet connection. The connection seems quite shaky (disconnects half the time, likely an ongoing issue with the wireless card I have installed), and it struggled to download updates because of the constant interruptions. Eventually, it managed to download the updated packages and started installing them. I got up and left it to do its work. When I came back, I saw it was still having trouble staying connected to the wireless (no surprise there), but then I noticed that it seemed like Update Manager had stopped making progress on the installation. I opened the Details pane to see what it was last doing: My guess was that the installation script for flashplugin-installer couldn't complete the download until I stabilized the Internet connection. I hooked my Ubuntu laptop up to my desktop via Ethernet and shared its wireless connection using this guide, and as I am typing this now from my desktop you can see that the connection issue was successfully worked around. However, even with a stable connection established, Update Manager seems "stuck" at its current position and won't go any further. It's not totally frozen, but I can't do anything beyond open/close the Details pane as the Cancel button is grayed out. I know it can cause big problems if updates are stopped during installation, but I'm at a loss as to how this situation should be handled. I'm sure it should finish normally if I can just find a way to restart Update Manager, but the question is how this should be approached. How can I safely get my updates to finish installing?

    Read the article

  • What actions to take when people leave the team?

    - by finrod
    Recently one of our key engineers resigned. This engineer co-authored a major component of our application. We are not hitting the truck number yet, but we're getting close :) Before the guy waltzes off, we want to take the actions necessary to recover from this loss as smoothly as possible and eventually 'grow' the rest of the team to competently cover the parts he authored. More about the context: the domain the component covers and the code are no rocket science, but still a lot of non-trivial stuff. Some team members can already cover a lot of this, but they have a lot on their plates, and we want to make sure everything gets covered. The plan (as I see it): improve tests and test coverage - especially for the non-trivial stuff; update high-level documents; document any 'funny stuff' the code does (we had to do some heavy duct-taping); add/update code documentation - have everything with 'public' visibility documented. Finally, the questions: What do you think are the actions to take in this situation? What have you done in such situations? What did or did not work well for you?

    Read the article

  • Webmaster Tools is throwing 404 errors for a link not on the page

    - by plantify
    Webmaster Tools is showing thousands of 404 errors where pages on the site are referring to another, incorrect URL. For example: URL not found www.plantify.co.uk/shop/=, linked from http://www.plantify.co.uk/shop/gift-voucher and http://www.plantify.co.uk/shop/special-plant-offers. I have obviously checked the source and cannot find any references to this link on any page. The only consistent pattern is that it only seems to report this error on pages with two path sections, i.e. www.plantify.co.uk/shop does not report any error, whilst pages of the form www.plantify.co.uk/shop/xxx (where xxx can be several different pages such as gift-voucher) all report this. I cannot seem to duplicate this error. I have run a link checker (we use Screaming Frog) and it does not report this error. I have fetched these pages as a bot, and these do not report this error either. I am at a total loss. I cannot even duplicate the issue, but it is most definitely an issue, as Webmaster Tools is reporting new errors every day. Is this perhaps Googlebot doing its own thing?

    Read the article

  • 12.04 Taking forever for no apparent reason

    - by Sam
    First off, I'd just like to say how much I'm loving Ubuntu so far. It's one of my first steps into Linux, but so far it's blown away Windows in almost every regard. Now, onto my problem. My brothers have some old Dell Dimensions that are barely clinging to life. They were both running XP Home Premium. The 2400 installed just fine, no issues at all. The other computer, the 3000, isn't getting past the screen with the Ubuntu logo and the pulsing dots. I've tried multiple discs, including the exact same one that gave me no issue on the other computer, and I'm at a loss. Does anyone have any suggestions as to where the problem might lie? They both have 32-bit Intel processors, and it's the correct version of Ubuntu. Is it a bad disc drive? Hardware incompatibility? Thanks for any assistance that can be provided. Dell Dimension 2400 Dell Dimension 3000

    Read the article

  • CUDA 4.1 Particle Update

    - by N0xus
    I'm using CUDA 4.1 to perform the update of my particle system, which I've made with DirectX 10. So far, my update method for the particle system is 1 line of code within a for loop that makes each particle fall down the y axis to simulate a waterfall: m_particleList[i].positionY = m_particleList[i].positionY - (m_particleList[i].velocity * frameTime * 0.001f); In my .cu file I've created a struct which I copied from my particle class and is as follows: struct ParticleType { float positionX, positionY, positionZ; float red, green, blue; float velocity; bool active; }; Then I have an UpdateParticle method in the .cu as well. This encompasses the 3 main parameters my particles need to update themselves, based off the initial line of code: __global__ void UpdateParticle(float* position, float* velocity, float frameTime) { } This is my first CUDA program and I'm at a loss as to what to do next. I've tried to simply put the particleList line in the UpdateParticle method, but then the particles don't fall down as they should. I believe it is because I am not calling something that I need to in the class where the particle fall code used to be. Could someone please tell me what I am missing to get it working as it should? If I am doing this completely wrong in general, then please inform me as well.
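
    One way the kernel body could look, offered as a sketch rather than a definitive answer (the particle-count parameter and the flat positionY/velocity arrays are assumptions; the host side still has to copy the data to the GPU, launch the kernel, and copy it back):

      // Each thread owns one particle and applies the same fall step as the CPU loop.
      __global__ void UpdateParticle(float* positionY, float* velocity,
                                     float frameTime, int numParticles)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < numParticles)
          {
              positionY[i] -= velocity[i] * frameTime * 0.001f;
          }
      }

      // Host side (also a sketch):
      // cudaMemcpy(d_posY, h_posY, n * sizeof(float), cudaMemcpyHostToDevice);
      // UpdateParticle<<<(n + 255) / 256, 256>>>(d_posY, d_velocity, frameTime, n);
      // cudaMemcpy(h_posY, d_posY, n * sizeof(float), cudaMemcpyDeviceToHost);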

    Read the article

  • Cannot boot: FGLRX 8.780 + Kernel 2.6.35-25

    - by pluc
    The situation before this all happened was pretty standard. I have an HP Pavilion dv5 laptop with an ATI Mobility Radeon 4200 series card. It always worked fine with Ubuntu for as long as I can remember. However, at one point something happened that truly made a majestic mess of things. It might've been extra repos I enabled with Ubuntu Tweak - I do not know. But something made it so that my system would not boot any longer. And when I say "won't boot", this is what I mean: during a normal bootup, any entry (except Windows) selected in GRUB (or BURG, I'm not even sure which one I'm using anymore) will spawn the Ubuntu loading screen, then try to start X (or GDM) 5 times. The screen goes dark, then black, and back to the Ubuntu loading screen. Then it just stays there until I spawn another TTY. I have no idea what is happening or why. There are no errors in my logs, and I'm truly at a loss here. I've linked three files: Xorg.0.log, the output of dmesg, and the GDM log: Xorg.0.log: http://ubuntu.pastebin.com/tpVKc2tc dmesg: ubuntu.pastebin.com/Nd5aYj45 gdm's :0.log: couldn't post due to lack of points :( Let me know if any of you more knowledgeable folks can restore some sanity in my life. Any help is greatly appreciated.

    Read the article

  • Kubuntu 11.10: Lots of networking problems

    - by Cobraone
    Since I upgraded to 11.10 I have had a lot of problems with KDE. First of all, there are problems configuring a static IP address. Just to explain: at home I have a normal fiber/ADSL connection and I use DHCP. When I go to a customer I must set a static IP address. With ifconfig everything seems OK, but there is something wrong with resolving DNS names. (I had installed Ubuntu and it was working again.) Now I have reinstalled Kubuntu 11.10 and I have the same problem. In addition, today I discovered that if I connect to a network in another customer's office, the desktop freezes and I can only switch between windows with alt+tab. No Fn key or right-click to open the run command works. So I unplugged the network (the configuration is just DHCP here) and tried another position in the office. It was the same: my laptop freezes when connected, while a friend's Fedora 14 machine works. So I decided to connect my Galaxy S II as a USB network device. Everything was OK for about 3 minutes. When I noticed a little loss of signal, the desktop froze again and I had to work (like now) just by switching between windows with alt+tab. Additional information: unplugging the network or restarting it via Konsole does not solve the freezing problem; every time I must open a console and reboot. Any idea of what tests to do? Just a note: if I need to post logs or anything else here, please guide me. I have used Linux since Ubuntu 9, but I am not an "expert".

    Read the article
