Search Results

Search found 8959 results on 359 pages for 'bad decisions'.

  • How to install Ubuntu on a fresh hard drive

    - by Herman Wiegman
    I attempted to install Ubuntu from a USB stick to my 3GHz Intel machine with an 80GB HDD. The installer was doing well, then it said something to the effect of "errors on the source USB, or the target HDD", and the recommendation was to download the installer again. I suspected my HDD was going bad, so I figured I would investigate. What I found was a partially formatted 80GB HDD. I repartitioned it via a different computer. Now a fresh copy of the Ubuntu USB installer is not able to move past the start-up screen (it freezes). I was able to purchase a new, clean HDD, but the fresh copy of the installer still locks up after the initial opening screen (after about two screens' worth of installation steps). Does this sound like an HDD/NTFS issue or a CPU/hardware/memory issue? Or should I move to a CD image rather than my USB stick? For now my computer is stuck: no OS, and no way to go back to Windows (upgrade OS CD only). Any insight would be greatly appreciated. Stuck in Schenectady, Herman Wiegman
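
    For triage, a first pass might look like the following (a sketch: the ISO name and device path are examples, so confirm them before running anything; the installer's boot menu also has a memory test entry that can rule RAM in or out):

        # Verify the installer image itself before blaming the hardware
        md5sum ubuntu-12.04-desktop-i386.iso    # compare against the published checksum

        # From the live session, check the disk's own health reporting (smartmontools)
        sudo smartctl -H /dev/sda               # overall SMART verdict
        sudo smartctl -a /dev/sda               # full attribute and error log

        # Non-destructive, read-only surface scan of the target disk
        sudo badblocks -sv /dev/sda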

  • Poor backlink profile - search rankings not updated for 2+ months

    - by fistameeny
    I am carrying out some work on a website that is a PR2 with a few good quality, relevant backlinks (PR4-6). It has a presence on Twitter that is updated regularly, a Google Places listing, and listings on some decent directories (Qype etc.). The site was rebuilt in Drupal 7 two months ago, with all the basics done: URL rewriting, an XML sitemap submitted to Google, and most importantly, good quality, structured content. I've noticed that Google is still showing "old" URLs from the previous version of the site that was ditched 8 weeks ago. I think the site may be penalised under the Penguin update, as a previous SEO company created many low quality links from link farms/directories. My question is: what is the correct way to deal with this? Bing Webmaster Tools can "disavow" links, and I guess I can attempt to contact the link farms to have them removed. I've already submitted a request to Google asking that the penalty be removed, as we're trying to tidy up a bad history. We submit updated sitemaps to Google and Bing daily, and have built some further decent quality, relevant links. Is there anything further I can do?

  • Providing SSH tunneling: what to think about when configuring Ubuntu Server

    - by bigbadonk420
    Recently I've considered, mostly as a pet project, setting up accounts for a closed group of users on my box so they can tunnel things like web traffic over SSH -- some of it for friends who live abroad, and perhaps also to help some people bypass national censorship. There are some things I imagine I need to do, such as:

    - Disabling shell access by setting the shell to /bin/false or similar.
    - Finding software that can track bandwidth usage per user over time.
    - Making sure each user can only use a certain amount of bandwidth.

    The reason I'm posting is to get some pointers on what I should read up on, and to hear any software recommendations for doing this. I already know a bit, since I've actually got SSH tunneling up and running; I just don't feel like letting other people loose on it without restrictions and some basic monitoring. I'm primarily trying to learn here, so if you think this is a Very Bad Idea (or if you have a better idea of how to do it) then by all means say so, but please include some information on how to do it :) (I'm also open to trying things like OpenVPN, but it seems really hard to set up; also, I've heard SSH more often works in locked-down environments.)
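
    As a starting point, tunnel-only accounts can be built from standard OpenSSH options alone; per-user bandwidth accounting and capping is the part OpenSSH itself doesn't do, and usually ends up at the firewall or a proxy instead. A minimal sketch (user and group names are placeholders):

        # Create a user with no usable shell, in a dedicated group
        sudo adduser --shell /bin/false tunneluser
        sudo addgroup tunnelonly
        sudo adduser tunneluser tunnelonly

        # /etc/ssh/sshd_config -- restrict the group to forwarding only
        Match Group tunnelonly
            AllowTcpForwarding yes
            X11Forwarding no
            AllowAgentForwarding no

        # Client side: -N requests no remote command, -D opens a SOCKS proxy
        ssh -N -D 1080 tunneluser@your.server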

  • Naming conventions: camelCase versus underscore_case? What are your thoughts?

    - by poelinca
    I've been using underscore_case for about 2 years, and I recently switched to camelCase because of my new job (I've been using the latter for about 2 months, and I still think underscore_case is better suited to large projects with a lot of programmers involved, mainly because the code is easier to read). Now everybody at work uses camelCase because (so they say) the code looks more elegant. What are your thoughts on camelCase versus underscore_case? P.S. Please excuse my bad English.

    Edit. Some updates first: the platform used is PHP (but I'm not expecting strictly PHP-related answers; anybody can share their thoughts on which would be best to use, that's why I came here in the first place). I use camelCase just like everybody else on the team (just as most of you recommend), and we use Zend Framework, which also recommends camelCase. Some examples (related to PHP): the CodeIgniter framework recommends underscore_case, and honestly the code is easier to read; ZF recommends camelCase, and I'm not the only one who thinks ZF code is a tad harder to follow. So my question rephrased: let's take a case where you have the platform Foo, which doesn't recommend any naming convention, and it's the team leader's choice to pick one. You are that team leader: why would you pick camelCase, or why underscore_case? P.S. Thanks everybody for the prompt answers so far.

  • Thoughts on exception handling.

    - by AndyScott
    I was working on a Windows Forms app (something I haven't done in a while), adding threading and logging so that it would work a little more smoothly and have a record of who did what. I was just about to check it into source control when I noticed that the Output window was showing "A first chance exception of type 'System.InvalidCastException' occurred in mscorlib.dll", so I googled it. In reading some threads about the error, I came across the following comment and it got me thinking: "In addition, while they should be avoided if possible, exceptions are a quite legitimate part of program execution. It's their going unhandled that is a real issue, because that means crashy, crashy." How do you normally use exception handling? I feel that exceptions are intended to handle errors in code (in my experience, generally related to bad data making its way into the system). Now don't get me wrong, I understand that exceptions happen and should be dealt with, but I feel they are a "last resort" to keep a program from crashing, and should never be a way to pass data or continue logical processing that could be handled in standard code flow. I mention this because I have seen it done. What do you think?
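
    To make the distinction concrete, here is a sketch in Java (names are illustrative; the same point applies to .NET). The first version treats bad input as a normal, expected branch; the second routes ordinary logic through the exception machinery:

        class QuantityParser {
            // Expected bad input handled in the normal control flow.
            static int parse(String raw) {
                if (raw == null || !raw.matches("\\d{1,9}")) {
                    return 0; // invalid or missing input is not an exceptional event
                }
                return Integer.parseInt(raw);
            }

            // Anti-pattern: the exception *is* the control flow.
            static int parseViaException(String raw) {
                try {
                    return Integer.parseInt(raw);
                } catch (NumberFormatException e) {
                    return 0; // works, but routine logic now lives in a handler
                }
            }
        }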

  • How can I find out what site a popup ad came from?

    - by ændrük
    This is the situation: I've been browsing the web for an hour in pursuit of some bit of technical information and have visited several dozen websites that I don't even remember anymore. I've finally found what I need so I start closing the web browser, only to discover that — aaargh! — there's a popup ad hiding underneath! My blood boils. What insidious website is responsible for this circumvention of my browser's popup blocker? I want to make it pay for its crime. I'll write angry emails. Leave bad reviews. Even block it from my Google search results — yes, that'll show it! But I've reached an impediment. The offending site has already been closed. Is it too late to deduce the advertisement's origin? Or can I somehow un-pop the popup? Here's a test page. With only the popup left on your screen, can you deduce that it was caused by visiting PasteHTML?

  • Question regarding Readability vs Processing Time

    - by Jordy
    I am creating a flowchart for a program with multiple sequential steps. Every step should be performed only if the previous step was successful. I use a C-based programming language, so the layout would be something like this:

    METHOD 1:

        if (step_one_succeeded()) {
            if (step_two_succeeded()) {
                if (step_three_succeeded()) {
                    // etc. etc.
                }
            }
        }

    If my program had 15+ steps, the resulting code would be terribly unfriendly to read. So I changed my design and implemented a global error code that I keep passing by reference, to make everything more readable. The resulting code would be something like this:

    METHOD 2:

        int _no_error = 0;
        step_one(_no_error);
        if (_no_error == 0) step_two(_no_error);
        if (_no_error == 0) step_three(_no_error);
        if (_no_error == 0) step_four(_no_error);

    The cyclomatic complexity stays the same. Now let's say there are N steps, and let's assume that checking a condition costs 1 clock cycle and performing a step takes no time. The number of checks in Method 1 can be anywhere between 1 and N; the number of checks in Method 2 is always N-1. So Method 1 will be faster most of the time. Which brings me to my question: is it bad practice to sacrifice time in order to make the code more readable? And why (not)?
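
    For what it's worth, there is a third layout that keeps Method 1's short-circuiting and Method 2's flat shape: early returns from a function that owns the whole sequence. A sketch reusing the step names above:

        int step_one_succeeded(void);
        int step_two_succeeded(void);
        int step_three_succeeded(void);

        int run_all_steps(void)
        {
            if (!step_one_succeeded())   return 1;
            if (!step_two_succeeded())   return 2;
            if (!step_three_succeeded()) return 3;
            /* ... further steps ... */
            return 0; /* success */
        }

    Each failing step stops the sequence immediately, so the number of checks matches Method 1, while the code reads top to bottom like Method 2.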

  • Rails: The Law of Demeter [duplicate]

    - by user2158382
    This question already has an answer here: "Rails: Law of Demeter Confusion" (4 answers).

    I am reading a book called Rails AntiPatterns, and it talks about using delegation to avoid breaking the Law of Demeter. Here is its prime example. The authors believe that calling something like this in the controller is bad (and I agree):

        @street = @invoice.customer.address.street

    Their proposed solution is to do the following:

        class Customer
          has_one :address
          belongs_to :invoice

          def street
            address.street
          end
        end

        class Invoice
          has_one :customer

          def customer_street
            customer.street
          end
        end

        @street = @invoice.customer_street

    They state that since you only use one dot, you are not breaking the Law of Demeter here. I think this is incorrect, because you are still going through customer, and through address, to get the invoice's street. I primarily got this idea from a blog post I read: http://www.dan-manges.com/blog/37

    In the blog post the prime example is:

        class Wallet
          attr_accessor :cash
        end

        class Customer
          has_one :wallet

          # attribute delegation
          def cash
            @wallet.cash
          end
        end

        class Paperboy
          def collect_money(customer, due_amount)
            if customer.cash < due_amount
              raise InsufficientFundsError
            else
              customer.cash -= due_amount
              @collected_amount += due_amount
            end
          end
        end

    The blog post states that although there is only one dot (customer.cash instead of customer.wallet.cash), this code still violates the Law of Demeter: "Now in the Paperboy collect_money method, we don't have two dots, we just have one in 'customer.cash'. Has this delegation solved our problem? Not at all. If we look at the behavior, a paperboy is still reaching directly into a customer's wallet to get cash out."

    EDIT: I completely understand and agree that this is still a violation, that I need to create a method in Wallet called withdraw that handles the payment, and that I should call that method from the Customer class. What I don't get is that, by the same reasoning, my first example still violates the Law of Demeter, because Invoice is still reaching directly into Customer to get the street. Can somebody help me clear up the confusion? I have spent the past 2 days trying to let this topic sink in, but it is still confusing.
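
    For contrast, the refactoring the EDIT describes would look something like this (a sketch; method names are illustrative). The money handling moves into Wallet, and Customer exposes an operation rather than its parts. The difference from customer_street is that pay is behavior the Customer genuinely owns, whereas customer_street merely republishes a neighbor's data -- which is why counting dots is a poor test for the Law of Demeter:

        class Wallet
          def withdraw(amount)
            raise InsufficientFundsError if amount > @cash
            @cash -= amount
            amount
          end
        end

        class Customer
          has_one :wallet

          def pay(amount)
            @wallet.withdraw(amount)  # the customer decides how payment happens
          end
        end

        class Paperboy
          def collect_money(customer, due_amount)
            @collected_amount += customer.pay(due_amount)
          end
        end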

  • Reflective discovery of an inner class in an API

    - by wassup
    Let me ask, as this has bothered me for quite a while but appears subjectively to be the best solution to my problem: is reflective discovery of an inner class really such a bad idea for an API? First, let me explain what I mean by "reflective discovery" and all that. I am sketching an API for a Java database system that will be centered around block-based entities (don't ask me what that means -- that's a long story), and those entities can be read and returned to the Java code as objects subclassed from the Entity class. I have an Entity.Factory class that, by means of fluent interfaces, takes a Class<? extends Entity> argument and then uses an instance of Section.Builder, Property.Builder, or whatever builder the entity has, to put it into the back-end storage. The idea of registering all entity types and their builders just doesn't appeal to me, so I thought that the closest solution that would satisfy my design needs would be to discover, using reflection, the inner classes of each Entity class and find one that's called Builder. Looking for some expert insight :) And if I missed some important design details (which could happen as I tried to make this question as concise as possible), just tell me and I'll add them.
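
    For the mechanics themselves, the discovery is only a few lines. A sketch (Entity here is a stand-in for the base class described above; a real implementation would probably cache the result per entity class rather than re-scan on every call):

        import java.util.Arrays;
        import java.util.Optional;

        class Entity {}  // stand-in for the real base class

        final class Builders {
            // Find a nested class literally named "Builder", if the entity declares one.
            static Optional<Class<?>> findBuilder(Class<? extends Entity> type) {
                return Arrays.stream(type.getDeclaredClasses())
                             .filter(c -> "Builder".equals(c.getSimpleName()))
                             .findFirst();
            }
        }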

  • Being stupid to get better productivity?

    - by loki2302
    I've spent a lot of time reading different books about "good design", "design patterns", etc. I'm a big fan of the SOLID approach, and every time I need to write a simple piece of code, I think about the future. So, if implementing a new feature or a bug fix requires just adding three lines of code like this:

        if (xxx) {
            doSomething();
        }

    it doesn't mean I'll do it this way. If I feel that this piece of code is likely to grow in the near future, I'll think about adding abstractions, moving the functionality somewhere else, and so on. The goal I'm pursuing is keeping the average complexity the same as it was before my changes. I believe that, from the code standpoint, it's quite a good idea: my code is never too long, and it's quite easy to understand the meanings of the different entities, like classes, methods, and the relations between classes and objects. The problem is, it takes too much time, and I often feel it would be better if I just implemented the feature "as is". It's just "three lines of code" versus "a new interface plus two classes to implement that interface". From a product standpoint (when we're talking about the result), the things I do are quite senseless. I know that if we're going to work on the next version, having good code is really great. But on the other hand, the time you've spent making your code "good" might have been spent implementing a couple of useful features. I often feel very unsatisfied with my results: good code that can only do A is worse than bad code that can do A, B, C, and D. Are there any books, articles, blogs, or ideas of your own that might help me develop this "being stupid" approach?

  • Picking a core language for a large-scale web platform

    - by ryanzec
    I have worked with PHP and ASP.NET quite a bit, and have also played around with a few other languages for web development. I am now at a point where I need to start building a backend platform that will have to support a large set of applications, and I am trying to figure out which language I want to choose as my core language. By core language I mean the language the majority of the backend code will be in. This is not to say that other languages won't be used -- my guess is that they will -- but I want a large majority of the code (90%-98%) to be in one language. While I see the benefit of using the language that is best for each job, having 15% in PHP, 15% in ASP.NET, 5% in Perl, 10% in Python, 15% in Ruby, etc. seems like a very bad idea to me (not to mention that integrating everything seamlessly would add a fair bit of overhead). If you were going to build a large-scale web platform that needed to support multiple applications from scratch, what would you choose as your core language, and why?

  • What should you do when presented with a horrible design?

    - by plua
    Our firm makes websites. We also design websites. But sometimes a client brings his or her own design, often made by an in-house designer or reused from something else. Sometimes these designs look awful -- and I'm talking really unprofessional, unbalanced, uncool. But the client really wants that design. I really do not like working with a design that is so awful; it takes away all the pleasure in coding. You code, you check the demo: works great, looks awful. It's just not fun. And ultimately the client might be happy, but 1) I do not feel proud of the final product, and 2) the community sees you 'develop' ugly websites, which is bad for your image. Has anybody experienced this kind of thing? What do you recommend? I've been thinking of:

    - Blocking these clients. If somebody has their 'own' design, ask to see it first, then somehow politely decline. Drawback: you lose a client.
    - Creating a new design. Have our in-house designers work on something really cool. Drawbacks: the client would need to pay for this (without having asked for it), or it will be declined and the company loses time, which is money. It might also come across as an insult to propose a new design out of the blue; THEIR designer won't like it for sure.
    - Putting a clear disclaimer at the bottom of the site: "Website design by XXXXX, website development by US". That helps with the community impact (if people pay attention), but not with the uneasy feeling.

  • Dedicated servers: is one better than two for a LAMP pseudo-HA setup? [closed]

    - by bikedorkseattle
    Possible duplicate: "How to find web hosting that meets my requirements?"

    I know there are zillions of hosting discussions out there, but I haven't read much about this. Our current, well-known host is having too many problems, the hardware we are on is subpar, and I'm ready to leave. A day of downtime can cost as much as our monthly hosting bill, and a month of bad performance is just killing us right now, user- and Google-wise. I'm wondering about running two dedicated boxes for LAMP: one as the primary Nginx/Apache box (proxy pass), and the other as the MySQL box. Running a single box scares the bejesus out of me, because who knows how long it would take anyone to fix a RAID card or whatever. The idea is to set this up as a failover system using Pacemaker and Heartbeat: if one server goes down, the other can take over both web and DB. There are some good articles over at Linode about this. I have a few DBs that are 1GB+ and would like to load them into memory. Because of this, I'm shying away from a Linode HA setup, because for the price I could do it with two dedicated servers as described. Am I mad, or an idiot? What are people out there doing for pseudo-high-availability, good-performance setups under $400/month? I'm a webmaster; I do a lot of things, none of them that well :)

  • What technical details should a programmer of a web application consider before making the site public?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web application consider before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important things could I be forgetting as well? I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports). Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web. Also, I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So, going beyond that: which standards? In what circumstances, and why? Provide a link to the standard's specification.

  • How to mount private network shares on login?

    - by bainorama
    I've read all the existing entries I could find on using pam_mount, but none of them seem to work for me. I'm trying to automatically mount shares on my local NAS at user login. The usernames and passwords on my NAS shares match my local user names and passwords, but there is no LDAP/AD server. My pam_mount.conf has the following:

        <volume fstype="cifs" server="bain-brain" path="movies" user="*" sgrp="bains"
                mountpoint="/home/%(USER)/movies"
                options="user=%(USER),dir_mode=0700,file_mode=700,nosuid,nodev" />

    When I log in, I see the following in /var/log/auth.log:

        Oct 13 10:21:26 bad-lattitude lightdm: pam_mount(misc.c:380):
        29 20 0:20 / /home/alastairb/movies rw,nosuid,nodev,relatime - cifs
        //bain-brain/movies rw,sec=ntlm,unc=\\bain-brain\movies,username=alastairb,
        uid=1000,forceuid,gid=1000,forcegid,addr=10.1.1.12,file_mode=01274,
        dir_mode=0700,nounix,serverino,rsize=61440,wsize=65536,actimeo=1

    The folder /home/alastairb/movies is present but empty (I can't see the files which are on the NAS in the respective share folder). In Nautilus, the share is shown in the sidebar under "Computer", and clicking on it takes me to the correct folder, but again, it's empty. Any ideas as to what I'm doing wrong?
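
    One detail in that log stands out: file_mode=01274 is exactly the decimal number 700 rendered in octal, which suggests the file_mode="700" in the config (missing its leading zero) is being taken as decimal. That may not explain everything, but it is worth fixing first; a corrected volume line (server, path, and group copied from above):

        <volume fstype="cifs" server="bain-brain" path="movies" user="*" sgrp="bains"
                mountpoint="/home/%(USER)/movies"
                options="user=%(USER),dir_mode=0700,file_mode=0700,nosuid,nodev" />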

  • I receive the error 'grub-install /dev/sda failed' while attempting to install Ubuntu as the computer's only OS.

    - by Liath
    I am attempting to install Ubuntu on a box which was previously running Windows 7, and I am getting the dreaded "Unable to install GRUB" error. I am not attempting to dual boot; I have previously run a Windows boot disk and removed all existing partitions. If I run the Ubuntu 12.04 install CD and click Install after the config screens, I get the error:

        Executing 'grub-install /dev/sda' failed. This is a fatal error.

    (It is the same error as this question: "Unable to install GRUB".) All the questions I've read while looking for a solution are related to dual boot. I'm not interested in dual boot; I'm after a clean, out-of-the-box Ubuntu install. How can I achieve this? (For my sanity, please use very simple instructions when responding. I don't claim to have any talent either for Linux or as a sysadmin.)

    Additional details, copied from comments dated 2012-05-29 ~15:19Z. After booting from the CD, clicking Try Ubuntu, and then running sudo fdisk /dev/sda, I get:

        fdisk: unable to seek on /dev/sda: Invalid argument

    sudo fdisk /dev/sdb gives:

        Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel.
        Building a new DOS disklabel with disk identifier 0x15228d1d.
        Changes will remain in memory only until you decide to write them.
        After that of course, the previous content won't be recoverable.

        Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite).

        Command (m for help):

    I should add that the live CD desktop is graphically bad: I've got missing parts of programs, and the terminal occasionally reflects to the bottom of the screen. But I can't imagine this is related.
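
    One thing worth ruling out from the live session before anything else: "unable to seek on /dev/sda" combined with a blank /dev/sdb suggests the installer may be pointing GRUB at the wrong device (sda could be a card reader or the install medium itself). A quick check (a sketch):

        # List every block device with its size and model to identify the real disk
        sudo lsblk -o NAME,SIZE,MODEL,TYPE

    If the real disk turns out to be /dev/sdb, select it in the installer's "Device for boot loader installation" dropdown and point the partitioning step at it as well.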

  • Is error suppression acceptable as a logic mechanism?

    - by Rarst
    This came up in code review at work, in the context of PHP and the @ operator. However, I want to keep this in a more generic form, since the few questions about it I found on SO got bogged down in technical specifics. Accessing an array field that is not set results in a notice, and is commonly handled with the following logic (pseudocode):

        if field value is set
            output field value

    The code in question was doing it like this:

        start ignoring errors
        output field value
        stop ignoring errors

    The reasoning for the latter was that it's more compact and readable code in this specific case. I feel that those benefits do not justify what is (IMO) a misuse of language mechanics. Is such code being "clever" in a bad way? Is discarding a possible error (for any reason) acceptable practice over explicitly handling it (even if that leads to more extensive and/or intensive code)? Is it acceptable for language operators to cross the boundaries of their intended use (like, in this case, using error suppression to control output)?

    Edit. The specific code being discussed was:

        if ( isset($array['field']) ) {
            echo '<li>' . $array['field'] . '</li>';
        }

    versus:

        echo '<li>' . @$array['field'] . '</li>';
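
    For comparison, the explicit check fits on one line too, which weakens the compactness argument for @; and on PHP 7+ the null coalescing operator is shorter still (a sketch; note the second form differs slightly in that it always emits the <li>):

        <?php
        // Explicit, still one line:
        echo isset($array['field']) ? '<li>' . $array['field'] . '</li>' : '';

        // PHP 7+: no notice, no @ (always outputs the <li>, even when unset)
        echo '<li>' . ($array['field'] ?? '') . '</li>';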

  • Employers and intellectual property 2

    - by Rick
    I have a question about intellectual property. I am currently a manager in a small manufacturing firm. The owners are driven by greed, don't appreciate the development process of complex machinery, and are happy just to send things out half done. I, on the other hand, think it should be done properly, as breakdowns in the field can be costly and embarrassing. They seem to have all of us running around doing most of the work out of hours, with an attitude of "be grateful to have a job", yet no one has a contract or any security or any agreement in place. For a couple of the projects I am using PLCs, writing the code in my own time and doing the testing during company time, and I am aware that they could not support their own machines if I left. But as I created the code in my own time, who owns it? They have asked me to put in shutdown code that triggers a maintenance request after a given length of time; could this be classed as criminal damage, or anything illegal apart from immoral? (We sell the machines with a 12-month warranty; they shut down after it.) As time goes on, I'm getting rather fed up with the company's attitude toward the client. I am considering keeping the clients as my own, and having the shutdown code tell them to contact me directly -- something like "this is a trial version, contact me for a full license". I wouldn't feel bad for my current employer, as he is not afraid to S***t on people: he has been involved in numerous lawsuits and has over 30 failed companies, leaving workers and customers high and dry. We have taken the company this far on the reputation of the workers, and I can see things heading the way of all the other companies he has owned, taking our reputations with him. So, now that I have set the scene: if I code it to tell clients to contact me directly on shutdown, could there be any legal impact on me, as I (rightly or wrongly) think I own the code and designs? Cheers, R

  • Only one user can connect to Ubuntu samba server

    - by StaticMethod
    I set up a Samba server on 12.04 LTS, and it works great for one user but not the others. I am trying to map a network drive from a Windows 7 laptop. I can successfully authenticate with one user, but the other two both get "Access is denied" errors. Here is my smb.conf file:

        [global]
        server string = %h server (Samba, Ubuntu)
        map to guest = Bad User
        obey pam restrictions = Yes
        pam password change = Yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        unix password sync = Yes
        syslog = 0
        log file = /var/log/samba/log.%m
        max log size = 1000
        dns proxy = No
        usershare allow guests = Yes
        panic action = /usr/share/samba/panic-action %d
        idmap config * : backend = tdb

        [printers]
        comment = All Printers
        path = /var/spool/samba
        create mask = 0700
        printable = Yes
        print ok = Yes
        browseable = No

        [print$]
        comment = Printer Drivers
        path = /var/lib/samba/printers

        [share]
        comment = Ubuntu File Server Share
        path = /srv/share
        read only = No
        create mask = 0755

    I know the service is successfully reading /etc/passwd, because if I change the Linux password for the user that works, I have to use the new password when I connect. I changed all the users so they are all members of the same groups (all three users are admins anyway). I only ever have one user connected at a time. Here are the permissions on the shared folder:

        /srv$ ls -l
        drwxrwxrwx 1 nobody nogroup 16 Feb 22 17:05 share

    Any ideas?
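
    A likely first suspect (an assumption worth ruling out): Samba keeps its own password database separate from /etc/passwd, so a Linux account that was never added to it will authenticate locally but get "Access is denied" over SMB. To check and fix (usernames are examples):

        sudo pdbedit -L              # list the users Samba actually knows about
        sudo smbpasswd -a alice      # add a missing user to Samba's database
        sudo smbpasswd -e alice      # make sure the account is enabled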

  • X won't start, root filesystem mounted read-only

    - by TK Kocheran
    I just experienced a very strange and puzzling problem on my machine that I can't seem to get sorted out. I was running Windows on a second partition, and everything was working great. I then restarted into Linux and noticed that X wouldn't start. Everything was displayed in super-low resolution, so I tried reinstalling my NVIDIA driver. I started seeing I/O error messages, so I figured my SSD was going bad. After a bit more playing around, I ran fsck on the drive from a startup disk, as well as badblocks, and everything looked great. The SMART drive tests all passed, and again everything was looking good, so I rebooted -- and still no joy. I then started getting some weird USB errors, so I followed someone's advice and unplugged the computer's power supply, then started back up again. My graphics looked a lot better in the BIOS and the boot logo, but X still wouldn't start. I then found out that my main boot drive was being mounted read-only for some reason. What's going wrong? I've run some pretty extensive tests on the SSD from a startup disk -- writing massive files, reading big files, running filesystem checks on the entire disk -- and everything looks great until I try to boot. Whenever I try installing the drivers with apt-get, I get a ton of ATA error messages (screenshot not reproduced here). How can I diagnose what's going wrong and fix it so I can get back to work?
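
    A diagnostic pass from a live session might look like this (a sketch; smartctl comes from the smartmontools package):

        # What the kernel thinks is happening on the bus
        dmesg | grep -iE 'ata[0-9]|i/o error'

        # The drive's own error counters and self-test log
        sudo smartctl -a /dev/sda

        # The root filesystem gets remounted read-only after I/O errors;
        # once the underlying cause is fixed, it can be remounted:
        sudo mount -o remount,rw /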

  • How to switch off? [closed]

    - by Xophmeister
    While I've programmed software for many years, I've only recently started doing so professionally, and I have noticed a bit of a problematic pattern. I hope this is the best place to pose such a question, as I am interested in others' experiences and solutions. Writing software is, by its nature, a cerebral exercise. When coding for my own sake, I would code until I was satisfied, even if that meant going all night. Now that I'm coding in exchange for goods and services, on projects that are inherently uninteresting to me, I want to 'switch off' when it's time to go home. Maybe you consider that a 'bad attitude', but I just don't feel that whatever I'm working on is worth caring about after hours. Besides, my employer doesn't exactly have the infrastructure for out-of-office changes: I can't just clone a repo, and even remote login is a PITA. Anyway, the problem I'm experiencing is that, while I'm not particularly overworked or stressed, if I'm faced with a problem, my brain will work on a solution. Generally, it won't give up. Hence I can't switch off, and sometimes the problem or the solution is significant enough that it disrupts my sleep. While, paradoxically, this doesn't seem to affect my coding ability, it can have a profound impact on the rest of my life. I get increasingly low as I get tired. So far, the best solutions I've found are writing little notes on the matter (and, say, emailing them back to my work address) and exercise. Neither of these switches me off entirely, and as the week progresses, exercise especially becomes untenable due to tiredness. TL;DR: How can I stop being a coding zombie?

  • Ubuntu on Samsung NP700Z5B - no GRUB

    - by copolii
    I just bought a Samsung NP700Z5B laptop. Gorgeous machine and great performance! I do two things when I get a new laptop:

    1. Format the HD and install Winblows from a CD to ditch the bloatware.
    2. Install some variant of Linux on it (lately Ubuntu).

    Step 1 worked fine (until earlier today), but I haven't been able to install Ubuntu on it for the past 3 days! I've tried Mint 12, Ubuntu 12.04, Ubuntu 11.10, Ubuntu 11.04 and Ubuntu 10.04. The live CD and the installations all run fine and report no problems, but when I reboot, GRUB is nowhere to be found! The system goes directly to Winblows. I've tried booting from the live CD and re-installing GRUB via the chroot and purge-and-reinstall methods (https://help.ubuntu.com/community/Grub2), and neither makes a difference. I've also tried copying the boot sector:

        dd if=/dev/sda of=linux.bin bs=512 count=1

    and putting it on C:, then using bcdedit to add the entry to the Windows boot loader, with no results. Earlier today I decided to try setting my boot partition as an EFI boot partition ... bad choice; now I don't even have the Windows boot loader. I've officially run out of ideas. I tried calling Samsung, but they're closed (they'd probably say something stupid along the lines of "Samsung recommends Windows 7" ... I've had Dell say that to me). Any help would be greatly appreciated.

    Update 1: I tried re-installing 12.04, and now the screen keeps turning off and back on, but there is still no sign of booting ... it has been doing this for 15 minutes so far (I set the boot partition type to ext2 instead of ext4).

    Update 2: Well ... this just gets better and better. I inserted the installation USB key to reboot, and the flickering stopped for about a minute (the screen stayed on), then it started turning off and on again.

  • How to sync client and server at the first frame

    - by wheelinlight
    I'm making a game where an authoritative server sends information to all clients about the states and positions of objects in a 3D world. The player controls his character by clicking on the screen to set a destination for the character, much like in the Diablo series. I've read most of the information I can find online about interpolation, reconciliation, and general networking architecture (Valve's, for instance). I think I understand everything, but one thing seems to be missing from every article I read. Let's say we have an interpolation delay of 100ms, a server tick rate of 50ms, and a latency of 200ms. How do I know when 100ms have passed on the client? If the server sends the first update at t=0, can I assume it arrives at t=200, thereby assuming that all packets take the same amount of time to reach the client? What if the first packet arrives a little quickly, for instance at t=150? I would then be starting the client clock at t=150, and at t=250 it would think 100ms have passed since it connected to the server, when in fact only 50ms have. Hopefully the above paragraph is understandable. The summarized question would be: at what tick do I start simulating the client?

    EDIT: This is how I ended up doing it. The client keeps a clock (approximately) in sync with the server. The client then simulates the world at:

        simulationTime = syncedTime - avg(RTT)/2 - interpolationTime

    The round-trip time can fluctuate, so I average it out over time. By keeping only the most recent values when calculating the average, I hope to adapt to more permanent changes in latency. It's still too early to draw any conclusions. I'm currently simulating bad network connections, and it's looking good so far. Anyone see any possible problems?
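
    The EDIT above translates into very little code. A sketch of that bookkeeping (names are illustrative; times are in milliseconds):

        /** Rolling RTT average feeding simulationTime = syncedTime - avg(RTT)/2 - interpolationTime. */
        final class ClockSync {
            private final double[] samples = new double[32]; // most recent RTT samples
            private int count, next;

            void onPong(double rttMs) {                      // called once per ping/pong pair
                samples[next] = rttMs;
                next = (next + 1) % samples.length;
                if (count < samples.length) count++;
            }

            double averageRtt() {
                if (count == 0) return 0;
                double sum = 0;
                for (int i = 0; i < count; i++) sum += samples[i];
                return sum / count;
            }

            double simulationTime(double syncedServerTimeMs, double interpolationDelayMs) {
                return syncedServerTimeMs - averageRtt() / 2 - interpolationDelayMs;
            }
        }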

  • How to effectively "sell" a good design in large meetings

    - by User1
    Many times I have witnessed a sad tragedy. Here's what happens:

    1. A team design review for a new project. I see a simple design that has quite a few holes.
    2. I casually mention the holes and ways to avoid them.
    3. The warnings are ignored, with comments like "that will 'never' happen in real life".
    4. Eventually the things that will "never" happen, happen.
    5. An emergency team design review for a broken project.

    So what do I do? Copping the "I told you so" attitude is not going to win friends and influence people. Sometimes years go by and the comments from step 3 are forgotten anyway. I definitely don't want to be the annoying pest reminding the world of the gotchas, so I often sit back and watch the Titanic sail off to Europe. It's frustrating to see bad designs move forward, and it's also frustrating that I can't seem to convince others of the pending peril of the current path. I do worst in team meetings where everyone has a different understanding of different terms; egos also tend to win over reason and thought. I'm looking for good tactics for convincing groups of people to use new and complicated ideas.

  • TechEd 2012: MVVM In XAML

    - by Tim Murphy
    Paul Sheriff was a real character at the start of his MVVM in XAML session. There was a lot of sarcasm and self-deprecation going on prior to the start, which is never a bad way to get things rolling right after lunch. Then things got semi-serious. The presentation itself had a number of surprises, and not all of them had to do with XAML. When he flipped over to his company's code generation tool, it caught me off guard: I am used to generators that create code for a whole project, but his tools were able to create different types of constructs on demand. It also made it easier to follow what he was doing than some of the other demos I have seen this week where people were using code snippets. Getting to the heart of the topic, I found myself thinking that I may have found my utopia for application development in MVVM. Yes, I know there is no such thing, but this comes closer than any other pattern I have learned about. This pattern gives the application better separation of concerns than I have seen before, especially since you can leverage data binding. I'm not sure why it has taken me so long to find time for this subject. As Paul demonstrated, using this pattern with XAML gives you multi-platform reusable code when you leverage common utility classes and ViewModel classes. The one drawback I see is that you have to code to the lowest common denominator of the platforms you want to support, but you always have to weigh the trade-offs. And finally, the Visual Studio nuggets just keep coming: even though it has been available for several generations of Visual Studio, I had never seen someone use linked files within a solution. It just goes to show that I should spend more time exploring the deeper features of each dialog.
