Search Results

  • How should an undergraduate programmer organize his time to learn as much as possible?

    - by nischayn22
    I started programming late (I'm in the pre-final year of a CS degree) and now feel like there's a sea of undiscovered treasure out there for me. So I decided to cover as much as possible before I look for a job after graduation. I started reading books (The C++ Programming Language, Introduction to Algorithms, Cracking the Coding Interview, Programming Pearls, etc.), participating in StackExchange sites, solving problems (InterviewStreet and Project Euler), coding for open source, and chatting with fellow programmers and mentors, trying to learn more and more.

    Good, so what's the problem? The problem is that I'm trying to do many things at once, and I doubt I'm using my time well. I'm reading many books and sometimes abandon one halfway through (jumping from book to book), I sometimes spend far too much time chatting or getting lost somewhere on the huge internet, and then there's the wasteful burden of attending classes (I don't think my teachers know the material well enough, or perhaps I simply prefer learning on my own).

    Maybe some of you have been in a similar situation. How did you organize your time? What do you think is the best way for an undergraduate to organize it? And what mistakes am I making that you can warn me about?

    Read the article

  • Blackberry Access to Powered Down Exchange

    - by Sam Cogan
    I work with a company (it's more of a charity, really) that has a single Exchange server and a few Blackberry users who download email via POP3 or IMAP. They are in a developing country, so every night they turn off the Exchange server to save power. However, they now want the Blackberry users to be able to get mail at night.

    They have a Linux server (a rented VPS), so they are considering having mail delivered there and then pulling it into Exchange via a POP connector. That way the BES users (who would now be pulling their POP email from the Linux server) could still get their mail at night while the Exchange server is off.

    Can anyone think of a better solution to this problem that I may have missed? Unfortunately there is no convincing the company to leave the machine on overnight.
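    For what it's worth, the VPS side of that plan is small: accept mail for the domain into per-user Maildirs and serve them over POP3. A minimal sketch, assuming Postfix and Dovecot are the MTA and POP3 server (example.org stands in for the real domain):

        # accept mail for the domain and deliver to per-user Maildirs
        sudo postconf -e 'mydestination = example.org'
        sudo postconf -e 'home_mailbox = Maildir/'
        sudo /etc/init.d/postfix reload

    Dovecot then only needs protocols = pop3 and mail_location = maildir:~/Maildir in dovecot.conf. The Blackberry handsets (and, during the day, the Exchange POP connector) would poll the VPS directly, so nothing is lost while Exchange is powered off.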

    Read the article

  • How do I fix broken installation?

    - by Daniel
    Let me start off by saying I'm dual-booting 11.04 and Windows 7 on a Thinkpad T61p. The problem may have arisen when I hit the power button during normal startup. I'm fully aware how stupid this is. I don't know why I did it. I did it.

    Now I can't get into Ubuntu. Windows works fine, but when I try to start Ubuntu normally, it seems to run some checks and does not start up. Sometimes I see a black screen telling me it's running certain checks, each followed by [ok], like:

    Battery Check Somethingorother [ok]

    It gives me one to five of these, then does nothing more, and I have to turn the machine off.

    When I try to start in safe mode, I tried low-graphics mode, and after going through a couple of dialogue boxes I'm brought right back to the safe-mode dialogue. If I hit 'resume', a shell comes up (still that grey-on-black "your computer is broken" type of shell) and asks me to log in. I do, and try to run unity. It tells me something along the lines of:

    WARNING no DISPLAY variable set

    and then sets it to ":0", which doesn't work. After that I can't really do anything and have to restart. (I don't know how to reboot from the command line, so I just hard-reset; that command would be helpful.)

    Does anybody have any idea how I can get Ubuntu working right again? FTP is less pleasant in Explorer than it is in Nautilus or whatever it is now.
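    For the reboot-from-the-command-line part, no hard reset is needed; either of these works from that recovery shell:

        sudo reboot
        # or, equivalently:
        sudo shutdown -r now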

    Read the article

  • Check the disk for problems on Debian Lenny

    - by Equ
    Hi guys! I just bought VPS hosting with Debian Lenny (I'm new to this whole world). I've managed to install and set up everything I need pretty well. My test website is fast most of the time, as expected, but sometimes it is really slow (response times of 5-10 seconds). I've checked everything I can, and it seems there may be some disk issues. How can I check the disk for problems and measure its performance? What else could possibly cause such behaviour? Thank you!
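    A few commands that can help narrow this down (a sketch, assuming root access on the VPS; iostat comes from the sysstat package):

        sudo apt-get install sysstat          # provides iostat
        iostat -x 5                           # watch %util and await while the site is slow
        vmstat 5                              # check whether the box is swapping (si/so columns)
        dmesg | grep -i -e error -e fail      # look for kernel-level disk complaints

    Note that on a VPS you usually cannot run SMART checks (smartctl) against the virtual disk, and intermittent slowdowns are often caused by other guests on the same physical host rather than by your own disk or configuration.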

    Read the article

  • Phishing site uses subdomain that I never registered

    - by gotgenes
    I recently received the following message from Google Webmaster Tools:

    Dear site owner or webmaster of http://gotgenes.com/, [...] Below are one or more example URLs on your site which may be part of a phishing attack: http://repair.gotgenes.com/~elmsa/.your-account.php [...]

    What I don't understand is that I never created a subdomain repair.gotgenes.com, yet visiting it in a web browser brings up an actual page. My DNS is FreeDNS, which does not list a repair subdomain. My domain name is registered with GoDaddy, and the nameservers are correctly set to NS1.AFRAID.ORG, NS2.AFRAID.ORG, NS3.AFRAID.ORG, and NS4.AFRAID.ORG.

    I have the following questions:

    1. Where is repair.gotgenes.com actually registered?
    2. How was it registered?
    3. What action can I take to have it removed from DNS?
    4. How can I prevent this from happening in the future?

    This is pretty disconcerting; I feel like my domain has been hijacked. Any help would be much appreciated.
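    A few lookups can establish where the record actually lives (dig and whois are standard tools; the names below are from the question):

        dig +short repair.gotgenes.com            # does it resolve, and to what IP?
        dig +short NS gotgenes.com                # confirm which nameservers are authoritative
        dig repair.gotgenes.com @ns1.afraid.org   # ask the authoritative server directly

    If the afraid.org servers answer for the subdomain, the record exists on the FreeDNS side; it's worth checking whether the domain is marked as shared/public there, since FreeDNS can allow other users to attach subdomains to domains that aren't set private. If they don't answer, the phishing URL may only resolve because of a wildcard record.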

    Read the article

  • Could crosslinking using very general anchor texts be a reason for a drop in rankings?

    - by webmasters
    I have crosslinked 20 sites, and I thought I had been penalized for it. I asked this question, and some experienced members told me the crosslinking may not necessarily be the reason. The sites are on the same host with different C-class IPs, and every site is linked to every other. Each site targets long-tail keywords:

    Site 1 - BMW Used Cars - and my area
    Site 2 - WW Used Cars - and my area

    And so on. When I crosslinked them (in the sidebar), I did it for the users; instead of repeating the terms "used cars" and my location over and over (since my users are targeted), I crosslinked them using just the brand: BMW, WW. Targeting locally, my niches are not overly competitive, so I did not need too many external links to rank in various positions on the first page.

    I'm now thinking that when I chose to link using only the brand, Google might have concluded I actually wanted to rank for BMW and WW, hence the drop in my targeted local traffic. Could this be? I have now no-followed the links and am noticing a slight recovery, but if it isn't an interlinking penalty, it would be a shame not to benefit from my links.

    Read the article

  • Can I create a virtual network interface to connect to a real network device?

    - by michelemarcon
    I have a networked Windows PC with two network interfaces. The first connects to a LAN with IP addresses in 10.1.x.x; the second connects to another LAN with addresses in 10.2.x.x. Maybe it's a dumb question, but is it possible to virtualize the second network interface, so that the PC can connect to both LANs through a single physical interface? If necessary, I could switch to Linux or paravirtualization.

    CLARIFICATION: I want to send DHCP broadcast packets on the second LAN but not on the first, and I want to do it with one single physical network interface. At the moment I'm not using any virtualization software.
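    If you do end up on Linux, one way to get a second, independently-addressed presence with one physical NIC is a macvlan interface (a sketch; eth0 is assumed to be the physical interface, and this assumes both LANs are reachable from the same physical segment):

        sudo ip link add macvlan0 link eth0 type macvlan mode bridge
        sudo ip link set macvlan0 up
        sudo dhclient macvlan0      # DHCP broadcasts go out via macvlan0, not eth0

    If the two LANs are instead separate 802.1Q VLANs on a trunk port, the analogue is a vlan-type link (ip link add link eth0 name eth0.2 type vlan id 2), so the broadcasts are confined to the tagged VLAN.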

    Read the article

  • How to prevent one account from unlocking products on other devices using Apple StoreKit?

    - by reapz
    We are currently wrapping up a free-to-play game on iOS in which you can purchase non-consumable products. We have been discussing this case internally and are not quite sure what the best practices are, as this is our first title.

    For example, a user downloads our app and makes some purchases. These can be restored should the app ever be deleted and reinstalled, as long as the user uses the same Apple ID. What is to stop him from creating a throwaway Apple account, purchasing items, and then posting that account on the web, allowing everyone to get the items for free? That is the worst case; a smaller one would be a user unlocking items for his friends.

    We do not want this to be an always-online game, but we have considered doing a check on startup when an internet connection is available. If the currently logged-in account doesn't own the products, do we lock them again? Probably not, because people may simply sign in to the device with different Game Center logins, at which point we don't want to be constantly locking and unlocking items.

    At some point we will be adding multiplayer, and then we can definitely check against the currently logged-in account, because (a) players will be online when attempting multiplayer, and (b) they will want to use their own accounts for it. Unfortunately we aren't quite ready for that yet.

    Has anyone tackled this issue? Are we overthinking it?
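    The usual mitigation is server-side receipt validation rather than trusting the device: when a purchase is made or restored, send the receipt to your own server, have it ask Apple whether the receipt is genuine, and tie the unlock to your own account system instead of the Apple ID. The verification call itself is just an HTTPS POST to Apple's verifyReceipt endpoint (a sketch; the base64 receipt data comes from the transaction on the device):

        # production endpoint; use https://sandbox.itunes.apple.com/verifyReceipt while testing
        curl -s https://buy.itunes.apple.com/verifyReceipt \
             -d '{"receipt-data": "<base64-encoded receipt>"}'

    A status of 0 in the JSON response means the receipt is valid. This doesn't stop a shared Apple ID from restoring purchases on another device, but it does stop forged or replayed receipts, and it gives you a server-side record to rate-limit against.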

    Read the article

  • MySQL dump of slave w/o missing master data

    - by zooooommmm
    I am fairly new to the whole MySQL replication process, so this may be an easy question to answer. I have a master and a slave, and I need to set up another slave. Obviously I will need to make the dump from the current slave, because I cannot take the master offline even for a second. How can I be sure that, during the time I am dumping the current slave's database, I do not miss any master data that is newly created in the meantime? Thanks all.
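    One common recipe (a sketch, to be adapted to your setup): pause only the slave's SQL thread, record its position relative to the master, and dump from there. The slave's IO thread keeps receiving the master's events into the relay log the whole time, so nothing is lost:

        mysql -e "STOP SLAVE SQL_THREAD;"
        mysql -e "SHOW SLAVE STATUS\G"        # note Relay_Master_Log_File and Exec_Master_Log_Pos
        mysqldump --all-databases > newslave.sql
        mysql -e "START SLAVE;"

    Point the new slave's CHANGE MASTER TO at the coordinates you noted, and it will pick up from the master exactly where the dump left off.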

    Read the article

  • Which video codecs have the most content, and are thus most popular now and in the future?

    - by goldenmean
    Hi, I want to find out whether I can get data on the percentage-wise distribution of video content across the different video codecs currently used for encoding. I know different applications and use-case scenarios favour different encoders, but I want to take all of that into account and arrive at an overall usage figure (%). My guess, from highest to lowest share of content, is:

    1. H.264 (AVC)
    2. DivX
    3. MPEG-2
    4. VP6

    Where do H.263, MPEG-4, VC-1, RV, Theora, etc. fit in here? And how might this look in the future?

    PS: I would like this to be community wiki so as to gather a wider range of inputs; if someone with the privileges could do that for me, please. Thank you. -AD

    Read the article

  • How do I turn off caching in IIS7?

    - by jammus
    Hello. I'm developing a classic ASP site under Windows 7 (form a queue, ladies). The problem is that IIS seems to be caching both static and dynamic content heavily, which really conflicts with my 'make a small change, alt-tab, hit Ctrl-F5' development style. Changes to .asp files may take two or three refreshes to show up, whereas changes to .js files can take twenty times as many. How do I go about turning off caching on my development machine? Cheers. (In before "stop using classic ASP".)
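    If it helps, IIS7's output/kernel caching can be switched off from an elevated prompt with appcmd (a sketch; verify the attribute names against your IIS version):

        %windir%\system32\inetsrv\appcmd set config /section:system.webServer/caching /enabled:false /enableKernelCache:false

    Classic ASP additionally caches compiled script templates; to the best of my knowledge, setting the script-file cache size to 0 disables that (again, a sketch to verify locally):

        %windir%\system32\inetsrv\appcmd set config /section:asp /cache.scriptFileCacheSize:0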

    Read the article

  • How can state changes be batched while adhering to opaque-front-to-back/alpha-blended-back-to-front?

    - by Sion Sheevok
    This is a question I've never been able to find the answer to. Batching objects with similar states is a major performance gain when rendering many objects. However, I've learned various rules for drawing objects in the game world:

    1. Draw all opaque objects, front-to-back.
    2. Draw all alpha-blended objects, back-to-front.

    Some of the major parameters to batch by, as I understand it, are textures, vertex buffers, and index buffers. It seems that, as long as you are adhering to the two rules above, there's little room left for batching.

    I see one possibility for batching while still adhering to those rules. Opaque objects can be drawn out of depth order, because drawing them front-to-back is merely a fill-rate optimization, while state changes may well be far more expensive than the overdraw caused by drawing out of depth order. Non-opaque objects, however - anything that requires alpha blending - must be drawn back-to-front to avoid rendering artifacts.

    Is the loss of the fill-rate optimization for opaques worth the state-batching optimization?

    Read the article

  • How do I read multiple lines from STDIN into a variable?

    - by The Wicked Flea
    I've been googling this question to no avail. I'm automating a build process here at work, and all I'm trying to do is capture version numbers and a tiny description of the build, which may span multiple lines. The system this runs on is OS X 10.6.8. I've seen everything from using cat to processing each line as necessary, and I can't figure out what I should use and why.

    Attempts so far:

    read -d '' versionNotes

    Results in garbled input if the user has to use the backspace key. Also, there's no good way to terminate the input, as ^D doesn't terminate and ^C just exits the process.

    read -d 'END' versionNotes

    Works... but still garbles the input if the backspace key is needed.

    while read versionNotes
    do
      echo " $versionNotes" >> "source/application.yml"
    done

    Doesn't properly end the input (I haven't yet worked out how to match against an empty string to stop the loop).
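    One approach that sidesteps the line-editing problem entirely (a sketch): let the terminal handle editing line by line and end input with Ctrl-D, which cat treats as end-of-file:

        echo "Enter version notes, finish with Ctrl-D on its own line:"
        versionNotes=$(cat)                    # backspace works normally; Ctrl-D ends input
        printf ' %s\n' "$versionNotes" >> "source/application.yml"

    The backspace garbling happens because read with a non-newline delimiter takes the terminal out of canonical mode, so editing keys arrive as raw bytes; cat reads in the terminal's normal canonical mode, where editing is processed before the program ever sees the text.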

    Read the article

  • How do you compare programming job offers from companies in different countries?

    - by Danny Tuppeny
    This isn't really a programmer-specific question, but I'm not sure of a more appropriate place, and I think the users of this site are best able to answer it in the context of programmers.

    Relocating to the US seems fairly common in the programming industry. I live in the UK, and maybe one day I'll do it too. So, if that day comes - how would you go about comparing job offers? Benefits are fairly easy to compare, but given the differences in cost of living, how would you compare salaries and the quality of living you'll have?

    In a country where the cost of living is lower, you might be able to accept a lower salary (based on exchange rate) and still have the same quality of living. But what can you do to ensure this? In some cases you may even take a "pay rise" in exchange-rate terms yet end up far worse off. How can you compare job offers across different countries to get an idea of the salary you would need in order not to feel you've gone "backwards"?

    Read the article

  • Windows Azure v1.7 Spring Release Today – New Management Dashboard

    - by ToStringTheory
    Today, Microsoft will be publicly releasing a new version of Azure for public consumption. The web conference, at http://www.meetwindowsazure.com, will be airing at 1 PM PST. They have already released an update to the Service Dashboard, which can be accessed by going to http://manage.windowsazure.com. I have some images of the new dashboard here, gathered with any PII removed. Let me know what you think!

    Images

    You should be able to click any of the images for a full-resolution version.

    Tutorial

    The first thing you get after signing in is the tutorial.

    Landing

    After the tutorial completes, you get a screen with the services that are active on your account on the left, and a list of ALL services (db/blob/SQL Azure) on the right. I like the quick access to services across any of my subscriptions.

    Service Information

    These are images from a running web site with several roles. I love how easy they have made many of the features.

    SQL Azure

    They have given some great quick functionality for looking at your DB information.

    Storage

    Here is the basic information that they give you for any storage accounts you have.

    Adding Services

    Super quick and easy to add services with the new UI.

    Conclusion

    I am EXCITED! As you may have seen in the left side of my blog, I am an MCPD in Azure Development, and I must say that I am excited to see Microsoft moving forward with the technology and not letting it stagnate. After as much as I have fought the other Azure dashboard, I like the friendliness and fluidity of this one.

    The important thing to note about ALL of the images above: this is HTML, not Silverlight. The responsiveness was FAST on all of the actions I completed, and I believe this is a big step forward for Azure… So, what do you think?

    Read the article

  • CUPS keeps overwriting AuthInfoRequired

    - by mooscape
    My problem is basically identical to this one: http://bbs.archlinux.org/viewtopic.php?id=61826

    Put simply, I have one Ubuntu machine trying to connect over the network to a printer attached to another Ubuntu machine. Printing works fine until I restart the guest machine. Immediately, it overwrites the printers.conf file (under /etc/cups/printers.conf), always adding the same line:

    AuthInfoRequired username,password

    I stop cups and change it to #AuthInfoRequired username,password to comment out the directive, start cups, and it works great until the next shutdown, when the line gets rewritten again. Googling indicates it may be a GTK problem rather than a CUPS one, but I have found no permanent solution to date. Any suggestions appreciated.
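    Rather than hand-editing printers.conf (which cupsd rewrites at will, and which is unsupported while cupsd is running), the same setting can be changed through the supported interface, which tends to survive restarts (a sketch; replace PrinterName with the real queue name from lpstat -p):

        sudo lpadmin -p PrinterName -o auth-info-required=none

    If the GTK print tooling keeps re-adding the directive anyway, at least this narrows the fight to one tool instead of two.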

    Read the article

  • Remote connection to dynamic public IP & private IP addresses

    - by user51737
    I've often connected to a Windows computer with a static public IP address via Remote Desktop over WAN links. What I'm wondering is how to connect to a remote computer that sits behind a dynamic public IP and has only a private IP address assigned. I have two systems at home:

    XP system - connected to the internet (dynamic public IP), sharing that connection with the other machine.
    Windows Vista system - DHCP enabled on its interface, accessing the internet through the XP machine.

    How can I remotely connect from my office to the Vista system? If I had a router/modem at home it might be possible to forward the ports to that system, but I don't. Any tips?
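    Since the XP machine is already the gateway, one possibility is to make it forward a port to the Vista machine's RDP port (hedged: netsh portproxy needs the port-proxy helper, which is an optional component on XP, so test it first; 192.168.0.2 stands in for the Vista machine's private address):

        netsh interface portproxy add v4tov4 listenport=3390 listenaddress=0.0.0.0 connectport=3389 connectaddress=192.168.0.2

    You would then RDP from the office to the XP box's public IP on port 3390. A dynamic-DNS name pointed at the XP machine solves the changing-public-IP half of the problem.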

    Read the article

  • Help with migrating from Windows: x64 FGLRX, CPU load, Java and Minecraft

    - by joxer
    I'm new to Ubuntu; this is the second time I have installed it. The computer is a Dell Studio 1558. Some specs:

    CPU - Intel Core i7 Q720 @ 1.6 GHz
    GPU - ATI Mobility Radeon HD 5400

    FGLRX - I've followed these instructions, among many others I looked into, and tried all of the variants mentioned in that thread before reverting to the drivers supplied with Ubuntu (through Additional Drivers), which apparently work best. I am testing them with Minecraft, as silly as that may sound: within 2 to 60 minutes the FPS drops from 70+ to somewhere between 0 and 5, while "fgl_glxgears" runs smoothly at between 400 and 800 FPS.

    I am using Oracle (Sun) JRE 6 to run Minecraft, installed via a tutorial linked on Oracle's website, and I currently have no other version of Java installed (things were worse when I had a few others here). After closing the game, Ubuntu is similarly slow. I've checked the CPU load using System Monitor, and it shows one of the CPUs jumping to 80-100% load at a time; a reboot solves it.

    I realize this mess is mine to solve, but a hand is always appreciated. Thanks very much in advance.

    Read the article

  • Setting up group disk quotas

    - by Ray
    I am hoping to get some advice on setting up disk quotas. I know about:

    1. Adding usrquota and grpquota to /etc/fstab for the file systems that need to be managed.
    2. Using edquota to assign disk quotas to users.

    However, I need to do the last step for many users, and edquota is a bit troublesome for that. One solution I have found is sudo edquota -p bar -u foo, which copies the disk quota of user bar to user foo. I was wondering if this is the best approach?

    I also tried setting up group disk quotas, but they don't seem to be working. Are group quotas meant to help assign the same quota to multiple users, or do they impose a total limit on a set of users? For example, if users A, B, and C are in group X, does assigning a 20 GB quota give each user 20 GB, or does it give 20 GB to the entire group X to divide up? I'm interested in the former, not the latter; so if group quotas do the latter, I guess my problem is a misunderstanding of what they are for.

    In short: I want an easy way to give the same quota to multiple users. Any suggestions on the best way to do this, whether via what I've tried above or anything else I may not have thought of? Thank you!
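    For the "same quota for many users" case, a small loop over edquota's prototype option does the job (a sketch; 'template' is a hypothetical reference user whose quota you have already set, and the UID cutoff assumes ordinary users start at 1000):

        for u in $(awk -F: '$3 >= 1000 {print $1}' /etc/passwd); do
            sudo edquota -p template -u "$u"   # copy template's limits to each user
        done

    setquota is the non-interactive alternative if you would rather state the numbers explicitly, e.g. sudo setquota -u alice 20000000 22000000 0 0 /home (soft/hard block limits in 1K blocks, then soft/hard inode limits, then the filesystem).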

    Read the article

  • Is now the right time to move to .NET 4?

    - by bconlon
    The reason I pose this question is that I'm looking at WPF development, so using the latest version seems sensible. However, this means rolling out the .NET 4 runtime to PCs on old versions of the framework.

    Windows XP is still the number one OS (estimated 40%+ market share). Running .NET 4 on XP requires Service Pack 3, and although it is good practice to move to the latest service packs, large companies are often slow to keep up because of the extensive testing involved. In fact, .NET 4 is not installed as standard with any Windows OS as yet - Windows 7 and Windows Server 2008 R2 ship with 3.5.

    This is not quite as big an issue as it was for .NET 3.5, as .NET 4 is significantly smaller: it doesn't include the older runtimes, whereas .NET 3.5 SP1 included .NET 3 and .NET 2 and was 250 MB (although that could be reduced by doing a web install). The size is reduced further if you target the .NET 4 Client Profile, which should be fine for many WPF applications, and I think this may be rolled out as part of Windows service packs soon.

    Still, if your application is only 4-5 MB and it needs 40-50 MB of framework, that is worth considering before jumping in and using the new shiny features.
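    When rolling out, the installer can at least check for the runtime up front; the presence of .NET 4 is recorded in the registry (a sketch from memory, so verify on a test machine):

        reg query "HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Client" /v Install

    An Install value of 0x1 means the Client Profile is present; the Full framework is recorded under ...\NDP\v4\Full.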

    Read the article

  • Reformatting and version control

    - by l0b0
    Code formatting matters. Even indentation matters. And consistency is more important than minor improvements. But projects usually don't have a clear, complete, verifiable and enforced style guide from day 1, and major improvements may arrive any day. Maybe you find that

    SELECT id, name, address
    FROM persons
    JOIN addresses ON persons.id = addresses.person_id;

    could be better written as / is better written than

    SELECT persons.id, persons.name, addresses.address
    FROM persons
    JOIN addresses ON persons.id = addresses.person_id;

    while working on adding more columns to the query. Maybe this is the most complex of all four queries in your code, or a trivial query among thousands. No matter how difficult the transition, you decide it's worth it. But how do you track code changes across major formatting changes?

    You could just give up and say "this is the point where we start again", or you could reformat all queries in the entire repository history. If you're using a distributed version control system like Git, you can revert to the first commit ever and reformat your way from there to the current state. But it's a lot of work, and everyone else would have to pause work (or be prepared for the mother of all merges) while it's going on. Is there a better way to change history which gives the best of all results:

    1. The same style in all commits
    2. Minimal merge work

    To clarify, this is not about best practices when starting the project, but rather what should be done when a large refactoring has been deemed a Good Thing™ but you still want a traceable history? Never rewriting history is great if it's the only way to ensure that your versions always work the same, but what about the developer benefits of a clean rewrite? Especially if you have ways (tests, syntax definitions or an identical binary after compilation) to ensure that the rewritten version works exactly the same way as the original?
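    Git can do the "reformat every historical commit" rewrite mechanically with filter-branch (a sketch; reformat-all.sh stands for whatever script applies your style guide to the whole tree, and this rewrites history, so every collaborator must re-clone or rebase afterwards):

        git filter-branch --tree-filter './reformat-all.sh' -- --all

    Since filter-branch preserves authors, dates and messages, a deterministic reformatter means two people running the same rewrite get identical commit hashes, which eases coordination somewhat; it remains a stop-the-world operation for the team.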

    Read the article

  • How to Create a shortcut to files / folders on a drive with different / changing letters (root)?

    - by Rolo
    I have seen this question asked and answered many times, but the shortcut people usually want to create lives on the drive whose letter changes. I would like to create the shortcut on the drive with the permanent letter.

    Let me explain. I have a C drive and a G drive. The C drive is always C, and it's on a Windows XP computer. The G drive may be G today but H tomorrow. How can I create a permanently working shortcut on the C drive that points to the G (sometimes H) external hard drive?

    P.S. I am referring to an external hard drive, but I would also (not instead) be interested in the answer if it were a USB flash drive. (Keep in mind the shortcut is being created on the drive with the permanent letter.) Thanks.
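    One letter-independent trick is to have the shortcut point at a small batch file on C: that finds the external drive by a marker file you place on it once (a sketch; drive_id.txt and "My Folder" are placeholders):

        @echo off
        rem open "My Folder" on whichever drive carries our marker file
        for %%D in (D E F G H I J K L) do (
          if exist %%D:\drive_id.txt start "" "%%D:\My Folder"
        )

    Alternatively, assigning the external drive a fixed letter in Disk Management (diskmgmt.msc) often makes the problem disappear entirely; the batch trick covers machines where you can't do that.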

    Read the article

  • How does a segment-based rendering engine (as in Descent) work?

    - by Calmarius
    As far as I know, Descent was one of the first games to feature a fully 3D environment, and it used a segment-based rendering engine. Its levels are built from cubic segments (these cubes may be deformed, as long as each remains convex and its sides remain roughly flat). The cubes are connected by their sides; a connected side is traversable (doors or grids can be placed on it), while an unconnected side is a non-traversable wall. The game is played inside this complex.

    Descent was software-rendered, and it had to be very fast to be playable on the 10-100 MHz processors of that age. Some later levels are huge and contain thousands of segments, yet they still render reasonably fast, so I think the engine somehow minimized the number of cubes rendered.

    How did it choose which cubes to render for a given location? As far as I know it used a kind of portal rendering, but I couldn't find the specific technique used in this particular kind of engine. I suspect the fact that the levels are built from convex quadrilateral hexahedrons can be exploited.

    Read the article

  • What are some techniques I can use to refactor Object Oriented code into Functional code?

    - by tieTYT
    I've spent about 20-40 hours developing part of a game using JavaScript and HTML5 canvas. When I started, I had no idea what I was doing, so it began as a proof of concept. It's coming along nicely now, but it has no automated tests. The game is becoming complex enough that it could benefit from automated testing, but that seems tough to do because the code depends on mutating global state.

    I'd like to refactor the whole thing using Underscore.js, a functional programming library for JavaScript. Part of me thinks I should just start from scratch in a functional style, with tests. But I think refactoring the imperative code into declarative code might be a better learning experience, and a safer way to preserve the functionality I currently have.

    The problem is, I know what I want my code to look like in the end; I just don't know how to get my current code there. I'm hoping some people here can give me tips, a la the Refactoring book and Working Effectively with Legacy Code.

    For example, as a first step I'm thinking about "banning" global state: take every function that uses a global variable and pass the value in as a parameter instead. The next step might be to "ban" mutation and always return a new object.

    Any advice would be appreciated. I've never refactored OO code into functional code before.

    Read the article

  • TFS SQL Deployment Data Script

    - by Greg
    We are using TFS and SQL Server 2005 (looking to upgrade to SQL Server 2012 if that makes a difference), and we store our database schema in a Visual Studio database project (VS 2010). When code is released to live, we currently use the database project to build a script covering all of our schema changes.

    The problem we keep running into is having to alter or add to that script to insert or fix data for the deployment. For example, if we add a new non-nullable column to an existing table, we need to populate that column with data during the deployment. Other times we may want to create new records in transactional tables (e.g. assigning specific users to a new security access).

    Do Visual Studio database projects have a way to store scripts that only need to be run once and somehow include them in the build? Do they know which scripts need to be run (for example, if we are inserting default data, we don't want to insert it a second time)? Or is there a better way to manage these scripts?
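    VS 2010 database projects do include a Script.PostDeployment.sql that is run as part of every deployment, so the usual pattern is to make each data script idempotent and leave it in there permanently; a guard makes "run once" and "run every deployment" equivalent. A sketch (server, database, table and values are all invented):

        sqlcmd -S myserver -d MyDatabase -Q "IF NOT EXISTS (SELECT 1 FROM dbo.SecurityAccess WHERE UserName = N'jsmith') INSERT INTO dbo.SecurityAccess (UserName) VALUES (N'jsmith');"

    For the new non-nullable column, the common trick is to declare it with a DEFAULT in the schema so the diff deployment succeeds, then correct the data in the post-deployment script.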

    Read the article
