Search Results

Search found 9271 results on 371 pages for 'whole foods'.

Page 25/371

  • Is taking a semester or year off from college a good idea?

    - by astrieanna
    I am currently a Junior majoring in Computer Science at a top university (in the USA). As I'm really getting tired of taking classes, I was wondering whether taking a semester or year off to do an internship (or internships) is a reasonable idea. It seems like it would give me more programming experience (making classes a bit easier) and a chance to recover from the burnout that comes from taking 18 credits a semester. A friend suggested that I just take a lighter course load, but I only have 2 more semesters of financial aid, so I need to take 18 credits in each of them in order to finish. Taking time off from school is not a normal thing to do, at least at this school. Since most of the internships I've seen advertised are for the summer, I was wondering: are internships available at times other than the summer? If I took off for a whole year, would it be more valuable to stay at the same company the whole time, or to get a series of internships at different companies? Valuable both in the sense of resume value and personal value. Would it be easier or harder to get multiple shorter internships?

    Read the article

  • Watch a Machine Get Upgraded from MS-DOS to Windows 7 [Video]

    - by ETC
    What happens if you try to upgrade a machine from MS-DOS to Windows 7? One curious geek ran the experiment using VMware and recorded the whole, surprisingly fluid ride for our enjoyment. Andrew Tait was curious: what would happen if you followed the entire Windows upgrade arc, from the 1980s to the present, all on one machine? Thanks to VMware he was able to find out, following the upgrade path all the way from MS-DOS to Windows 7. Check out the video below to see what happens: Chain of Fools: Upgrading Through Every Version of Windows [YouTube via WinRumors]

    Read the article

  • Flattening PDF transparency

    - by Jan
    I have a PDF, made with Inkscape, that uses transparent colors. This image is to be used in a LaTeX document. While preserving the transparency is nice for editing, it can be a problem for printing. Printing usually involves PDF-to-PS conversion, and since PostScript does not support transparency, this requires either flattening, i.e. creating a vector graphic that works without transparency, or rasterizing, i.e. rendering a bitmap image. When a PDF document containing such a figure is printed (or converted to PS) using Evince (or Cairo or Ghostscript), the whole page gets rendered as a bitmap, making the fonts ugly (different from other pages). (Adobe Acrobat handles such PDFs well.) Unfortunately, converting the PDF figures to EPS (before including them with LaTeX) doesn't help much, because both pdftops and pdf2ps (again, Cairo or Ghostscript) rasterize the image, i.e. render a bitmap (saved as EPS). (This is slightly better, because it doesn't affect the whole page, but I'd still prefer a vector graphic.) How can I flatten transparency with Inkscape or other software on Linux?
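
    One hedged sketch (not from the question): Ghostscript can be forced to flatten by re-writing the file as PDF 1.3, a version that predates the transparency model. Wrapped in Python for illustration; the file names are hypothetical, the flags are standard Ghostscript options.

        import subprocess

        # Re-write the PDF as PDF 1.3, which has no transparency model,
        # so Ghostscript must flatten (or rasterize) the transparent parts.
        subprocess.run([
            "gs", "-dNOPAUSE", "-dBATCH", "-dQUIET",
            "-sDEVICE=pdfwrite",
            "-dCompatibilityLevel=1.3",
            "-sOutputFile=figure-flat.pdf",
            "figure.pdf",
        ], check=True)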

    Read the article

  • How can I separate the user interface from the business logic while still maintaining efficiency?

    - by Uri
    Let's say that I want to show a form that represents 10 different objects in a combobox. For example, I want the user to pick one hamburger from 10 different ones that contain tomatoes. Since I want to separate UI and logic, I'd have to pass the form a string representation of the hamburgers in order to display them in the combobox. Otherwise, the UI would have to dig into the objects' fields. Then the user would pick a hamburger from the combobox and submit it back to the controller. Now the controller would have to find that hamburger again based on the string representation used by the form (maybe an ID?). Isn't that incredibly inefficient? You already had the objects you wanted to pick one from. If you submitted the whole objects to the form, and it returned a specific object, you wouldn't have to find it again later, since the form already returned a reference to that object. Moreover, if I'm wrong and you actually should send the whole object to the form, how can I isolate UI from logic?
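
    One common resolution, sketched below under invented names: the controller hands the view plain (id, label) pairs and keeps a lookup table, so mapping the submitted ID back to an object is a single hash lookup, not a second search.

        # Sketch: the view only ever sees (id, label) pairs; the controller
        # resolves the submitted id back to the object in O(1).

        class Hamburger:
            def __init__(self, id, name, has_tomato):
                self.id, self.name, self.has_tomato = id, name, has_tomato

        burgers = [Hamburger(i, "Burger #%d" % i, True) for i in range(10)]
        by_id = {b.id: b for b in burgers}

        choices = [(b.id, b.name) for b in burgers]   # controller -> view
        selected_id = choices[3][0]                   # view -> controller
        chosen = by_id[selected_id]                   # no re-searching the list
        print(chosen.name)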

    Read the article

  • Breadcrumbs in a modern web application: do they make sense? [on hold]

    - by Xtreme Biker
    I'm currently beginning the development of a new web application. The whole application is going to be bookmarkable, with all pages accessible via GET requests and URL parameters. Having said that, suppose I've got three entities in my application: Customer, Team and City. Each Customer and Team belongs to a City, and I've got a city-detail page which displays the detail for a concrete city. So the following navigation paths are possible: Customers - Customer detail (id=2) - City detail (id=3); Football teams - Team detail (id=5) - City detail (id=3); Cities - City detail (id=3). There are three possible ways of ending up in the city detail view. My question is: does it make sense to implement a breadcrumb showing such a history, when it is already available in the browser itself? Would it be more appropriate to show the breadcrumb for the last case, no matter where we're coming from (a hierarchical breadcrumb)? That's what Jakob Nielsen points out here: "Offering users a Hansel-and-Gretel-style history trail is basically useless, because it simply duplicates functionality offered by the Back button, which is the Web's second-most-used feature. A history trail can also be confusing: users often wander in circles or go to the wrong site sections. Having each point in a confused progression at the top of the current page doesn't offer much help. Finally, a history trail is useless for users who arrive directly at a page deep within the site." Also, even if the history trail seems the most natural way to implement it, it requires extra effort to keep track of the whole trail, HTTP being a stateless medium.

    Read the article

  • What causes player box/world geometry glitches in old games?

    - by Alexander
    I'm looking to understand, and find the terminology for, what causes - or allows - players to interfere with geometry in old games. Famously, id Software's Quake 3 gave birth to a whole community of people breaking the physics by jumping, sliding, getting stuck, and launching themselves off points in geometry. Some months ago (though I'd be darned if I can find it again!) I saw a conference talk by Bungie's Vic DeLeon and a colleague in which Vic briefly discussed the issues he ran into while attempting to wrap 'collision' objects (please correct my terminology) around environment objects, so that players could appear as though they were walking on organic surfaces while not clipping through them or appearing to walk on air at certain points, due to complexities in the modeling. My aim is to compose a case study essay for university in which I can tackle this issue in games, drawing on early exploits and how techniques have changed, both to address such exploits and to aid the gameplay itself. I have 3 current-day examples of where exploits still exist, and specifically targeting id Software clearly shows they've massively improved their techniques between Q3 and Q4. So in summary, with your help please, I'd like to gain a slightly better understanding of this issue as a whole (its terminology mainly) so I can use terms and ask the right questions within the right contexts. In practical application, I know what it is and I know how to do it, but I don't have the benefit of level design knowledge yet and its technical widgety knick-knack terms =) Many thanks in advance, AJ

    Read the article

  • Software design of a browser-based strategic MMO game

    - by Mehran
    I wonder if there are any known, tested software designs for Travian-like browser-based strategic MMO games. I mean, how would one implement the server of such a game: what is stored in the database and what is stored in RAM? Is the state of the world stored in one piece, or is it distributed among a number of storage nodes? Does anyone know a resource for studying the problems and solutions of creating such games? [UPDATE] As suggested in the comments, I'm going to give an example of how I would design such a project, even though I'm not sure it's the right design. Having stored the world state in MongoDB, I would implement an event collection in which all changes to the world are registered. Changes meant to happen in the future come with an action date set in the future, and those to be carried out immediately are set to now. With this datastore as the central point of the system, players issue their actions as events inserted into the datastore. At the other end of the system, I'd have a constantly running worker taking events out of the datastore that are due to be carried out and not yet done. Executing an event means applying some update to the world's state, and thus to the datastore. As scalable as this design sounds, I'm not sure it would be worth implementing. For one, it is pointless to cache the datastore, as most updates happen once without any follow-ups. For instance, if you have resource growth in your game, you'll be updating the whole world state periodically, in which case, having incorporated a cache, you are keeping the whole world in RAM (which most likely is impossible). So can someone come up with a better design?
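
    For concreteness, here is a minimal sketch of the worker loop that design describes, assuming MongoDB via pymongo (collection and field names are invented):

        import time
        from datetime import datetime, timezone
        from pymongo import MongoClient

        events = MongoClient()["game"]["events"]

        def apply_to_world(event):
            # placeholder: apply this event's update to the world state
            pass

        while True:
            now = datetime.now(timezone.utc)
            # atomically claim one due, unprocessed event
            event = events.find_one_and_update(
                {"action_date": {"$lte": now}, "done": False},
                {"$set": {"done": True}},
            )
            if event is None:
                time.sleep(0.1)  # nothing due yet; back off briefly
                continue
            apply_to_world(event)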

    Read the article

  • System in low-graphics mode

    - by Artem Moskalev
    I am totally new to Linux and Ubuntu. I bought an ASUS X53E notebook, erased Windows, and installed Ubuntu. At first it worked fine. Then, when I started working with it, I opened the terminal, entered sudo chmod 666 usr, and then all the icons on the main panel disappeared and the whole system stopped responding. I decided to restart the system. When it restarted, a message appeared: the system is running in low-graphics mode, and below it: Your screen, graphics card and input device settings could not be detected correctly. You will need to configure them yourself. But the "OK" button is disabled, and if I press any buttons nothing happens. If I enter Ctrl-Alt-F2 it opens the bash terminal, but there the commands sudo and apt-get are not found, and it says permission denied if I try to enter any folder, like cd /usr. If I enter the su command, it asks for a password I don't know. When I first encountered this problem, I reinstalled the whole of Ubuntu, but today it happened again, just the same. What shall I do? Maybe there is something wrong with the hardware? If I need to install another distribution of Linux, could you recommend one? I'd rather stick to Debian-based releases like Ubuntu. So how do I fix the problem? PS: Please give answers in simple terms, because I am a newbie and don't know what goes where yet.

    Read the article

  • Organization of DLL-linked functions

    - by m25
    So this is a code organization question. I've got my basic code working, but when I expand it, it will be terrible. I have a DLL that I don't have a .lib for, therefore I have to use the whole LoadLibrary()/GetProcAddress() combo. It works great, but this DLL that I'm referencing has 100+ functions. My current process is: (1) typedef a type for the function, e.g. typedef short (__stdcall *type1)(void); then (2) declare a function pointer of that type with the name I want to use, such as type1 function_1; then (3) do the whole LoadLibrary, then something like function_1 = (type1)GetProcAddress(hinstLib, "_mangled_funcName@5");. Normally I would put all of my function definitions in a header file, but because I have to use the LoadLibrary function it's not that easy; the code will be a mess. Right now I'm doing (1) and (2) in a header file, and I was considering making a function in another .cpp file to do the LoadLibrary and dump all of the (3)'s in there. I considered using a namespace for the functions so I can use them in main and not have to pass them over from the other function. Any other tips on how to organize this code so it is readable and organized? My goal is to be able to use function_1 as a regular function in the main code. If I have to use a ref::function_1, that would be okay, but I would prefer to avoid it. This code, for all practical purposes, is just plain C at the moment. Thanks in advance for any advice!
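
    Not the asker's C setup, but the same LoadLibrary/GetProcAddress machinery can be driven from a single table, which is one way to tame 100+ bindings. A hedged sketch using Python's ctypes (Windows-only; the DLL name is invented, the symbol name is from the question); the same table idea transfers to plain C as an array of {exported-name, function-pointer-address} pairs looped over once after LoadLibrary:

        import ctypes

        dll = ctypes.WinDLL("thedll.dll")  # LoadLibrary() under the hood

        # One table instead of 100+ typedef/assign pairs:
        # exported (possibly mangled) name -> (return type, argument types)
        SPEC = {
            "_mangled_funcName@5": (ctypes.c_short, []),
            # ... the remaining functions ...
        }

        api = {}
        for name, (restype, argtypes) in SPEC.items():
            fn = getattr(dll, name)  # GetProcAddress() under the hood
            fn.restype, fn.argtypes = restype, argtypes
            api[name] = fn

        result = api["_mangled_funcName@5"]()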

    Read the article

  • Smooth waypoint traversing

    - by TheBroodian
    There are a dozen ways I could word this question, but to keep my thoughts in line, I'm phrasing it in line with my problem at hand. I'm creating a floating platform that I would like to travel from one designated point to another, then return to the first, passing between the two in a straight line. However, just to make it a little more interesting, I want to add a few rules to the platform. I'm coding it to travel in multiples of whole tile values of world data, so if the platform is not stationary, it will travel at least one whole tile width or tile height. Within one tile length, I would like it to accelerate from a stop to a given max speed. Upon reaching one tile length's distance from its destination, I would like it to slow to a stop exactly at the given tile coordinate, and then repeat the process in reverse. The first two parts aren't too difficult; essentially, I'm having trouble with the third part. I would like the platform to stop exactly at a tile coordinate. Since I'm working with acceleration, it would seem easy to simply begin applying acceleration in the opposite direction to the value storing the platform's current speed once it is one tile's length from the stop (assuming the platform is traveling more than one tile-length, but to keep things simple, let's just assume it is) - but then the question is, what would the correct deceleration value be to produce this effect? How would I find that value?
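
    For what it's worth, constant-deceleration kinematics pin that value down: braking from speed v to rest over distance d needs a = v^2 / (2d). A small sketch under the question's setup (tile size and speed values are invented):

        # Deceleration that brings the platform from max speed to exactly 0
        # over one tile length, from v^2 = 2*a*d  =>  a = v^2 / (2*d).
        TILE = 32.0          # world units per tile (hypothetical)
        MAX_SPEED = 120.0    # units per second at cruise (hypothetical)

        DECEL = MAX_SPEED ** 2 / (2 * TILE)

        def brake(speed, dt):
            """Per-frame speed update while inside the braking tile."""
            return max(0.0, speed - DECEL * dt)

        # With discrete timesteps the stop lands a hair off the coordinate,
        # so snap the platform to the target tile once speed reaches 0.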

    Read the article

  • Is an event loop just a for/while loop with optimized polling?

    - by Alan
    I'm trying to understand what an event loop is. Often the explanation is that in the event loop, you do something until you're notified that an event has occurred; you then handle the event and continue doing what you were doing before. To map that definition to an example: I have a server which 'listens' in an event loop, and when a socket connection is detected, the data from it gets read and displayed, after which the server goes back to listening as before. However, this event happening and us getting notified 'just like that' is too much for me to accept. You can say: "It's not 'just like that'; you have to register an event listener." But what is an event listener but a function which, for some reason, isn't returning? Is it in its own loop, waiting to be notified when an event happens? Should the event listener also register an event listener? Where does it end? Events are a nice abstraction to work with, but just an abstraction. I believe that in the end, polling is unavoidable. Perhaps we are not doing it in our code, but the lower levels (the language implementation or the OS) are doing it for us. It basically comes down to the following pseudocode, running somewhere low enough that it doesn't result in busy waiting:

        while True:
            do stuff
            check if event has happened (poll)
            do other stuff

    This is my understanding of the whole idea, and I would like to hear whether it is correct. I am open to accepting that the whole idea is fundamentally wrong, in which case I would like the correct explanation. Best regards
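
    The intuition is close: at the bottom there is indeed a loop, but the 'poll' is usually a blocking kernel call such as select/epoll, so the thread sleeps instead of spinning. A minimal sketch with Python's select module (address and details invented, not from the question):

        import select
        import socket

        server = socket.socket()
        server.bind(("127.0.0.1", 8000))
        server.listen()
        server.setblocking(False)

        sockets = [server]
        while True:
            # The "poll": the OS parks this thread until something is
            # ready, so no CPU is burned busy-waiting.
            readable, _, _ = select.select(sockets, [], [])
            for sock in readable:
                if sock is server:
                    conn, _ = sock.accept()       # new-connection event
                    sockets.append(conn)
                else:
                    data = sock.recv(4096)        # data event
                    if data:
                        print(data)
                    else:
                        sockets.remove(sock)      # disconnect event
                        sock.close()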

    Read the article

  • Why does there seem to be no development in the field of 3D VR gear, especially with regard to gaming?

    - by neuviemeporte
    I remember that back around 1995 there was a big VR craze in the media, and a whole bunch of (mostly mediocre) games labeled "virtual-reality-interactive-movie (...)" were published. If I recall correctly, the first 3D VR helmet was called the VFX-1 and was sold bundled with Descent and a dedicated joystick. I never owned one, and I read just one review, which was mostly enthusiastic but pointed to some weak points, like the eyes getting tired after an hour or so of playing. Then the whole thing basically flickered down and died. I suppose the main reasons it wasn't successful were that the hardware of the day was not powerful enough, the VR gear's design wasn't refined enough to be comfortable and natural to use, and the companies that made it failed to market it successfully. What I can't understand is why there isn't any development in the field today. There is some VR-ish hardware, mostly targeted at the consoles (Kinect, Wii Remote, TrackIR), but all projects for creating a 3D head-mounted display system seem to be in early infancy; they appear once at a trade show somewhere and aren't heard of again. I think it could work great with head tracking in some of today's shooters, flight sims (TrackIR is nice, but the movement scale translation is awkward) and other games with an FPP POV. Is there any technological reason why decent VR headgear can't be made today, or is it just that nobody really cares/everyone is scared of repeating the '90s failure?

    Read the article

  • Need ideas for an innovation week

    - by slandau
    So 4 times a year we have an innovation week (to even out the odd sprint releases). This whole week is dedicated to experimenting with new technology and ideas that could potentially help progress the software department or the company as a whole, and it serves as a starting point for new ideas and brainstorming. For example, the last one contained a lot of projects. One was the redesign of our web app into more of a Web 2.0 look and feel using jQuery and a lot of cool CSS tricks. Another was a proposal for new bug-tracking software, as opposed to the clearly outdated one we use, and another was a very cool jQuery/JS design that could show the same page to multiple users on different computers and allow each of them to take "charge" of the page, locking the others out of doing anything, and vice versa, with all updates visible in real time - sort of like NetMeeting through JS. Well, this is my first one as a new employee, so I wanted to think of something cool. We get one week (anywhere from 40-60 hours or so), and we usually pair up or work in groups of 3-4, depending on how many projects there are. Projects have to get approved, but usually that doesn't prove too difficult. We are in the financial analysis software industry, in case the domain leads you to think of anything helpful. I am primarily working on a web app in MVC 2 at the moment, using a lot of jQuery and a C# backend. Do you have any ideas for something that would be cool/beneficial/worth it?

    Read the article

  • Track sales and commission with third-party tool

    - by Andrew
    I have a clothing website where I link to various clothing retailers. I have reached an agreement with one of the retailers whereby they will pay us a commission for every sale they make from traffic referred by our site. I need a mechanism for tracking how much commission should be paid to us that requires as little implementation work as possible on their side. We both have Google Analytics. Option 1: they record a goal in their GA account whenever someone makes a purchase on their site, see how many completed goals are marked as referral traffic from our site, and calculate commission accordingly. The problem with this is that the whole process of calculating and paying commission will be manual. They will need to frequently check how many sales were generated by referral traffic from our site, and we will probably have to chase them for commission payments. Also - since we won't have access to their GA data - we will need to trust that they report all sales accurately. Option 2: sign them up to an affiliate network like Commission Junction or Google's Affiliate Network, and connect to them through this network. The problem with this solution is that it seems too heavyweight; ideally we don't want to ask a retailer to go through a whole sign-up process just to deal with us and pay us commission. I am assuming that there must be some lightweight service that tracks the number of sales by one site and pays commission accordingly to the other site, where the sign-up and installation procedure is simple and fast.

    Read the article

  • XNA 4.0, Combining model draw calls

    - by MayContainNuts
    I have the following problem: the levels in my game are made up of a large quantity of small models, and because of that I am experiencing frame rate problems. I've already done some research and come to the conclusion that the number of draw calls I am making must be the root of my problems. I've looked around for a while now and couldn't quite find a satisfying solution. I can't cull any of those models; in a worst case scenario there could be 1000 of them visible at the same time. I also looked at hardware geometry instancing, but I don't think that's quite what I'm looking for, because the level consists of a lot of different parts. So what I'd like to do is combine 100 or 200 of these models into a single large one and draw it as a whole 'chunk'. The geometry is all static, so it wouldn't have to be changed after combining, but different parts of it would have to use different textures (I think I can accomplish that with a texture atlas). But I have no idea how to do that, so does anybody have any suggestions?
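
    The usual static-batching recipe, sketched below in Python pseudocode rather than the XNA API (structure invented): copy every model's vertices into one buffer, re-basing each model's indices by the running vertex count, so each chunk becomes a single draw call.

        # Sketch: merge many static models into one vertex/index pair.
        def merge_chunk(models):
            """models: iterable of (vertices, indices) per small model."""
            vertices, indices = [], []
            for verts, idx in models:
                base = len(vertices)        # index offset for this model
                vertices.extend(verts)      # assumed pre-transformed to world space
                indices.extend(i + base for i in idx)
            return vertices, indices        # upload once, draw once per chunk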

    Read the article

  • Avoid overriding all the methods in the child class

    - by Heckel
    The context: I am making a game in C++ using SFML. I have a class that controls what is displayed on the screen (manager in the image below). It has a list of all the things to draw, like images, text, etc. To be able to store them in one list I created a Drawable class from which all the other drawable classes inherit. The image below represents how I would organize each class. Drawable has a virtual method draw that will be called by the manager; Image and Text override this method. My problem is that I would like the Image::draw method to work for Circle, Polygon, etc., since sf::CircleShape and sf::ConvexShape inherit from sf::Shape. I thought of two ways to do that. My first idea was for Image to have a pointer to an sf::Shape, and for the subclasses to make it point at their sf::CircleShape or sf::ConvexShape members (like in the image below). In the Polygon constructor I would write something like ptr_shape = &polygon_shape; This doesn't look very elegant, because I have two variables that are, in fact, just one. My second idea is to store the sf::CircleShape or sf::ConvexShape inside ptr_shape, like ptr_shape = new sf::ConvexShape(...); and, to use a function that exists only in ConvexShape, cast it like so: ((sf::ConvexShape*)ptr_shape)->convex_method(); But that doesn't look very elegant either, and I am not even sure I am allowed to do that. My question: I added details about the whole thing because I thought that maybe my whole architecture was wrong. I would like to know how I could design my program to be safe without overriding all the Image methods. I apologize if this question has already been asked; I have no idea what to google.

    Read the article

  • Should we test all our methods?

    - by Zenzen
    So today I had a talk with my teammate about unit testing. The whole thing started when he asked me, "hey, where are the tests for that class? I see only one." The whole class was a manager (or a service, if you prefer to call it that), and almost all the methods were simply delegating stuff to a DAO, so each was similar to:

        SomeClass getSomething(parameters) {
            return myDao.findSomethingBySomething(parameters);
        }

    A kind of boilerplate with no logic (or at least I do not consider such simple delegation to be logic), but a useful boilerplate in most cases (layer separation etc.). And we had a rather lengthy discussion about whether or not I should unit test it (I think it is worth mentioning that I did fully unit test the DAO). His main arguments were that it was not TDD (obviously) and that someone might want to see the test to check what the method does (I do not know how it could be more obvious), or that in the future someone might want to change the implementation and add new (or, more likely, "any") logic to it (in which case I guess someone should simply test that logic). This made me think, though: should we strive for the highest test coverage percentage? Or is it simply art for art's sake? I simply do not see any reason behind testing things like: getters and setters (unless they actually have some logic in them), and "boilerplate" code. Obviously a test for such a method (with mocks) would take me less than a minute, but I guess that is still time wasted, and a millisecond longer for every CI run. Are there any rational/not "flammable" reasons why one should test every single line of code (or as many as one can)?
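
    For scale, here is roughly what such a delegation test costs, sketched in Python with unittest.mock (the manager and DAO names mirror the snippet above but are otherwise invented):

        from unittest.mock import Mock

        class Manager:
            def __init__(self, dao):
                self.dao = dao

            def get_something(self, params):
                return self.dao.find_something_by_something(params)

        def test_get_something_delegates_to_dao():
            dao = Mock()
            dao.find_something_by_something.return_value = "sentinel"
            assert Manager(dao).get_something("p") == "sentinel"
            dao.find_something_by_something.assert_called_once_with("p")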

    Read the article

  • Is chroot the right choice for my use case?

    - by Anthony
    Backstory: I am working on setting up a Minecraft server and want to allow admins to have SSH access to the Minecraft server console and the appropriate server files, but not the whole system. The console provided by the Minecraft server is only available to the user that launched the process. In addition, the admins will need terminal access to some basic CLI tools such as wget, cp, mv, rm, and a text editor. Plan: I have already set up the SSH aspect of things, requiring pre-shared keys and whatnot. Set up a jailed environment in which all user activity will be contained. Set up user accounts. The first user account will be the minecraft user; it will start the MC server in a multiuser screen session and allow the other admins to attach to it. Subsequent users should have their own /home directory for normal usage. Set up ACLs on the appropriate files to allow each user to edit the MC server files. No one will be doing system updates, nor will anyone be installing programs, so I'll be the only user with sudo. The issues: I don't want the SSH users to have access to the whole system, but they will still need to use wget or curl to update the MC server files. Is chroot the right tool for this use case, or is there something more appropriate for the job? I have no experience setting up a chroot environment, and while I have found several tools to aid in the process, Jailkit seems to be the most robust; however, it's not in the standard repos.

    Read the article

  • Dualboot Asus n56v (ubuntu / win 7 x64) - ubuntu doesn't detect partition table made by windows

    - by user76439
    I have an Asus n56v and I've got trouble installing Ubuntu 12.04 x64; I already have Windows 7 x64 installed. The hard drive is a Hitachi HTS547575A9E384. My problem: the installation program doesn't recognize the existing partitions and only offers options that treat the disk as empty. Can someone help me out? Is this an ACPI/IDE conflict, a missing driver, or a conflict with Windows 7? (I'm not an expert on Linux; I only work with it occasionally.) I tried out some options concerning EFI with a GPT table. Everything worked, but I wasn't able to set up dual boot, neither with the Windows boot loader nor with GRUB 2. The laptop still has a BIOS, but it is able to boot DVDs/CDs in EFI mode. Now I am trying to avoid EFI and GPT by using the old Windows MBR style. I reinstalled Windows, so far with no problem, but when I try to install Ubuntu, it doesn't detect the existing partition table; it just shows me empty space for the whole disk. Other threads like "Ubuntu 12.04 does not see windows already install on my computer (dual installation)" don't help me out, and I don't know how to deal with gdisk as shown in "Installer doesn't detect existing partition table/windows 7 partition". I have 750GB for the whole disk: 90GB for the Windows reserved partition + system partition, 500GB for data, and the rest for swap and the Linux system. How can I make Ubuntu detect the partition table?

    Read the article

  • When can I publish a software tool written at work?

    - by AlexMA
    I'm working on a software problem at work that is fairly generic, but I can't find a library I like to solve it, so I'm considering writing one myself (at least a bare-bones version). I'll be writing some if not all of the 1.0 version at work, since I need it for the project. If it turns out well, I might want to bring the work home and polish it up just for fun, and maybe release it as an open-source project. However, I'm concerned that if I wrote the 1.0 version at work, I may not be allowed to do this in a legal sense. Obviously I could ask my boss (who probably won't care), but I'm curious how other programmers have dealt with this issue and where the law stands. My one-sentence question is: when is it okay (legally/ethically) to open-source a software tool originally written by you, for work, at work? What if you have expanded the original source significantly during off-hours? Follow-up: suppose I write the whole thing at home on my own time and then simply use it at work; does that change things drastically? Follow-up 2: note that I'm not trying to rip off my employer (I understand that they're paying me to build products that they own) - I'm just wondering if there's a fair way of doing this for all involved. It would be nice if some nonprofit down the road could use my code and save some time. Also, there's another issue at stake: if I write the library for a very simple, generic thing (like HTML tables in JavaScript), does that mean I can never again do so on my own time without putting myself at legal risk (even if it were a whole new rewrite or a segment of a larger project)? Am I surrendering my right to write code for this sort of project for the rest of my life (without this company's permission), since the code at work might still be somewhere in my brain influencing me? This seems related to software patents, as a side note.

    Read the article

  • System in low graphics, deleted linux, grub rescue, can't access windows

    - by First timer
    So I'm pretty new to Ubuntu, but I managed to install it with no big problems on both my desktop and netbook. When I installed it on my brother's netbook, though, everything went horribly wrong, and now I fear the system is close to beyond repair. The first problem was that it said it did not have any space left (which seemed ridiculous, since it had a lot). Then Ubuntu began booting into a "system is running in low-graphics mode" error, which I tried to fix using all the tips I could find here, but nothing helped. I think the graphics error and the lack of space might have been related, but I can't be sure. Finally I gave up on repairing Ubuntu and went for a reinstall. I shouldn't have done that! I read that I should simply open Ubuntu from a live USB and use GParted to delete the Linux partitions, so I did, and rebooted accordingly. Next I was to install Ubuntu, but now I am only given the option to wipe the whole disk for Ubuntu, not to install alongside Windows 7. If I open GParted I can still see the NTFS partitions that hold Windows 7 (there are two: one labeled RECOVERY and another labeled OS and boot), so why can't I access them? By the way, the OS/boot one has a little red mark with a warning that one cluster is referenced multiple times; I don't know what that means. If I boot without the live USB I am sent directly into a grub rescue "black screen where the computer will follow no orders". Please - I know the easiest option might be to simply wipe the whole thing clean, but there are important files and programs on Windows 7. Is there a way to just access Windows? It is a Dell Inspiron 1018 mini netbook, so I have no CD drive and no Windows 7 installation CD.

    Read the article

  • Pure functional programming and game state

    - by Fu86
    Is there a common technique for handling state (in general) in a functional programming language? There are solutions in every (functional) programming language for handling global state, but I want to avoid that as far as I can. In a pure functional style, all state is passed as function parameters. So I need to pass the whole game state (a gigantic hashmap with the world, players, positions, score, assets, enemies, ...) as a parameter to every function that wants to manipulate the world on a given input or trigger. The function itself picks the relevant information from the game state blob, does something with it, and returns the updated game state. But this looks like a poor man's solution to the problem. If I put the whole game state into all functions, there is no benefit for me over global variables or the imperative approach. I could instead put just the relevant information into the functions and return the actions to be taken for the given input, and have one single function apply all the actions to the game state. But most functions need a lot of "relevant" information: move() needs the object position, the velocity, the map for collision, the positions of all enemies, current health, ... So this approach does not seem to work either. So my question is: how do I handle the massive amount of state in a functional programming language - especially for game development?
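
    A tiny sketch of the whole-state-threading style the question describes, with Python standing in for a functional language (field names invented): each handler takes the state, returns a new state, and never mutates, so a frame is just a fold of updates over the state.

        # Sketch: pure state threading; handlers return fresh state.
        def move(state, dt):
            p = state["player"]
            return {**state, "player": {**p, "x": p["x"] + p["vx"] * dt}}

        def add_score(state, points):
            return {**state, "score": state["score"] + points}

        state = {"player": {"x": 0.0, "vx": 5.0}, "score": 0}
        state = add_score(move(state, 0.016), 10)   # one frame's updates
        print(state)

    In a real functional language this same shape usually hides behind a State monad or lenses, so each handler only touches its slice; the loop itself stays the same.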

    Read the article

  • Cloud Computing - just get started already!

    - by BuckWoody
    OK - you've been hearing about "cloud" (I really dislike that term, but whatever) for over two years. You've equated it with just throwing some VMs in some vendor's datacenter - which is certainly part of it, but not the whole story. There's a whole world of - wait for it - *coding* out there that you should be working on. If you're a developer, this is just a set of servers with operating systems and the runtime layer (like .NET, Java, PHP, etc.) that you can deploy code to and have it run. It can expand horizontally, allowing massive - and I really, honestly mean massive, not just marketing-talk - scale. We see this every day. If you're not a developer, well, now's the time to learn. Explore a little. Try it. We'll help you. There's a free conference you can attend in November, and you can sign up for it now. It's all online, and the tools you need to code are free. Put down Facebook and Twitter for a minute - go sign up. Learn. Do. :) See you there. http://www.windowsazureconf.net/

    Read the article

  • When NOT to use a framework

    - by Chris
    Today, one can find a framework for just about any language, to suit just about any project. Most modern frameworks are fairly robust (generally speaking), with hour upon hour of testing, peer-reviewed code, and great extensibility. However, I think there is a downside to ANY framework, in that programmers, as a community, may become so reliant upon their chosen frameworks that they no longer understand the underlying workings - or, in the case of newer programmers, never learn the underlying workings to begin with. It is easy to become specialized to such a degree that you are no longer a 'PHP programmer' (for example) but a 'Drupal programmer', to the exclusion of anything else. Who cares, right? We have the framework! We don't need to know how to "do it by hand"! Right? The result of this loss of basic skills (sometimes to the extent that programmers who don't use frameworks are viewed as "outdated") is that it becomes common practice to use a framework where it is not required or appropriate. The features the framework facilitates wind up confused with what the base language is capable of. Developers start using frameworks to accomplish even the most basic of tasks, so that what was once considered a rudimentary process now involves large libraries with their own quirks, bugs, and dependencies. What was once accomplished in 20 lines is now accomplished by including a 20,000-line framework AND writing 20 lines to use the framework. Conversely, one does not want to reinvent the wheel. If I'm writing code to accomplish some basic, common little task, I might feel like I am wasting my time when I know that framework XYZ offers all the features I am after, and a whole lot more. The "whole lot more" part still has me worried, but it doesn't seem that many even consider it anymore. There has to be a good metric for determining when it is appropriate to use a framework. What do you consider the threshold to be? How do you decide when to use a framework - or when not to?

    Read the article

  • How do I get a different type of scrollbars in 12.04? [closed]

    - by Joseph Garvin
    Possible Duplicate: How do I disable overlay scrollbars? By default 12.04 uses overlay scroll bars that do not suit my taste, and every method I've found so far of disabling them leaves them broken in a different way. When I was using 11.10 this wasn't a problem, because I could still change the GTK theme. In 12.04, the Appearance settings only contain a few stock themes, and other than the special-purpose high-contrast ones, they all have the overlay scroll bars. If I run aptitude search gtk3 | grep theme I get no results, so there appears to be no packaged alternative either. Most suggestions I've seen for disabling the overlay scroll bars involve uninstalling packages or editing files as root. I want to disable them just for the current user, not for everyone on the whole box, as should be the case for any theme/display setting. There is a gsettings command that temporarily disables the overlay scrollbars just for the current user, but this has two problems of its own: the setting doesn't stick after logging off (because who would want to save settings?), and the scroll bars it puts in place have no contrast - a black scroller on a black background, completely unusable. In short, what I'd like to know is how to disable overlay scroll bars such that: my preference is user-specific; my preference is actually saved; and the scroller can actually be seen against the background, without having to use a special high-contrast theme that makes my whole desktop look like a negative photo from Tron.

    Read the article
