Search Results

Search found 19525 results on 781 pages for 'say'.

  • Mutating Programming Language?

    - by MattiasK
    For fun I was thinking about how one could build a programming language that differs from OOP, and came up with this concept. I don't have a strong foundation in computer science, so it might be commonplace without me knowing it (more likely it's just a stupid idea :) I apologize in advance for this somewhat rambling question :)

    Anyway, here goes. In normal OOP, methods and classes vary only with their parameters, meaning that if two different classes/methods call the same method, they get the same output. My perhaps crazy idea is that the calling method and class could be an "invisible" part of a method's signature, and the response could vary depending on who calls the method. Say we have a Window object with a Break() method; anyone (who has access) can call this method on Window with the same result. Now say we have two different objects, Hammer and SledgeHammer. If Break() needs to produce different results based on these, we'd pass them as parameters: Break(IBluntObject bluntObject). With a mutating programming language (MPL), the objects operating on the method would be visible to the Break() method without being explicitly declared, and it could adapt itself based on them. So if SledgeHammer calls Window.Break(), it would generate vastly different results than if Hammer did so. If OOP classes are black boxes, then MPL classes are black boxes that know who's (trying) to push their buttons and can adapt accordingly. You could also have different permission sets on methods depending on who's calling them, rather than absolute permissions like public and private.

    Does this have any advantage over OOP? Or perhaps I should say, would it add anything to it, since you should be able to simply add this aspect to methods (just give access to a CallingMethod and CallingClass variable in context)? I'm not sure; it might be too hard to wrap one's head around. It would be kind of interesting to have classes that adapt themselves to whoever uses them, though. Still, it's an interesting concept. What do you think, is it viable?
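    As a rough sketch of the "CallingMethod variable" idea in an existing language (the Window/Hammer names come from the example above; this is an illustration, not a full design): C++20's std::source_location evaluates its defaulted argument at the call site, so a method can inspect who called it without the caller passing anything explicitly.

        #include <iostream>
        #include <source_location>
        #include <string_view>

        class Window {
        public:
            // The caller's identity arrives "invisibly": the defaulted argument
            // is evaluated at the call site, not inside Break(). In practice the
            // reported function name includes the caller's enclosing class.
            void Break(std::source_location caller = std::source_location::current()) {
                std::string_view who = caller.function_name();
                if (who.find("SledgeHammer") != std::string_view::npos)
                    std::cout << "Window shatters\n";
                else
                    std::cout << "Window cracks\n";
            }
        };

        struct Hammer       { void swing(Window& w) { w.Break(); } };
        struct SledgeHammer { void swing(Window& w) { w.Break(); } };

        int main() {
            Window w;
            Hammer{}.swing(w);        // prints "Window cracks"
            SledgeHammer{}.swing(w);  // prints "Window shatters"
        }

    This only exposes the call site, not a real object reference, so it is a pale imitation of the idea, but it shows that caller-dependent dispatch is expressible today.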

  • Direct2D Transform

    - by James
    I have a beginner question about Direct2D transforms. I have a 20 x 10 bitmap that I would like to draw in different orientations. To start, I would like to draw it vertically, with a destination rectangle of, say: (left, top, right, bottom) (300, 300, 310, 320). The bitmap is wider than it is tall (20 x 10), but when I draw it vertically it will appear taller than it is wide (10 x 20).

    I know that I can use a rotation matrix like so:

        m_pRenderTarget->SetTransform(
            D2D1::Matrix3x2F::Rotation(
                90.0f,
                D2D1::Point2F(<center of shape>))
        );

    But when I use this method to rotate my shape, the destination rectangle I pass is still wider than it is tall. Maybe it would look something like this: (left, top, right, bottom) (280, 290, 300, 300). The destination rectangle is 20 x 10, but the bitmap appears on the screen as 10 x 20, so I can't look at the destination rectangle in the debugger and compare it to: (left, top, right, bottom) (300, 300, 310, 320).

    Is there any simple way to say "I want to rotate it so that the image is rendered to exactly this destination rectangle after the rotation"? In this case, I would like to say "Please rotate the bitmap so that it appears on the screen at this location:" (left, top, right, bottom) (300, 300, 310, 320). If I can't do that, is there any way to find out the 10 x 20 destination rectangle where the bitmap is actually being rendered to the screen?
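    One approach (a sketch, untested; pBitmap stands in for your ID2D1Bitmap): give the destination rectangle the bitmap's own 20 x 10 aspect, centered on the center of the on-screen rectangle you want, then rotate 90 degrees about that shared center. Rotating a rectangle about its own center swaps its width and height in place, so it lands exactly on the 10 x 20 screen rect.

        // Where the image should appear on screen: 10 wide, 20 tall.
        D2D1_RECT_F screenRect = D2D1::RectF(300.f, 300.f, 310.f, 320.f);
        D2D1_POINT_2F center = D2D1::Point2F(
            (screenRect.left + screenRect.right) / 2.f,   // 305
            (screenRect.top  + screenRect.bottom) / 2.f); // 310

        // Destination rect in the bitmap's natural 20 x 10 orientation,
        // centered on the same point as screenRect.
        D2D1_RECT_F destRect = D2D1::RectF(
            center.x - 10.f, center.y - 5.f,
            center.x + 10.f, center.y + 5.f);

        m_pRenderTarget->SetTransform(D2D1::Matrix3x2F::Rotation(90.f, center));
        m_pRenderTarget->DrawBitmap(pBitmap, destRect);
        m_pRenderTarget->SetTransform(D2D1::Matrix3x2F::Identity());

    The same construction answers the second question in reverse: the post-rotation rectangle is destRect with its width and height swapped about the common center.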

  • Many commands are not found by zsh

    - by Denny Mueller
    If I use tab autocompletion, most of the time I get errors. Let's say I type vim and press Tab to look for the files in the folder; it just jumps to the next command line. Or let's say I press Tab after typing rvm use 2.0.0 --default; I get zsh: correct 'rvm' to 'rvim' [nyae]?. If I say no, I get a command not found error. Also, if I press Tab after typing ruby -v, zsh wants to correct it to _ruby -v. Is this a known bug, and is there any help for this?

  • Which is more important in a web application code promotion hierarchy? production environment to repo equivalence or unidirectional propagation?

    - by ghbarratt
    Let's say you have a code promotion hierarchy consisting of several environments, the polar ends of which are development (dev) and production (prod). Let's say you also have a web application where important (but not developer-controlled) files are created (and perhaps altered) in the production environment. Let's say that you (or someone above you) decided that the files which are controlled/created/altered/deleted in the production environment need to go into the repository.

    Which of the following two practices/approaches do you find best?

    1. Committing these non-developed file modifications made in the production environment, so that the repository reflects the production environment as closely and as often as possible.

    2. Generally ignoring the non-developed production environment alterations, placing confidence in backups to restore the production environment should it be harmed, and keeping a resolution to avoid pushing developments through the promotion hierarchy in the reverse direction (avoiding pushing from prod to dev), only committing the files found in the production environment if they were absolutely necessary in other environments for development.

    So, 1 or 2, and why?

    PS - I am currently slightly biased toward maintaining production environment to repository equivalence (option 1), but I keep an open mind and would accept an answer supporting either.

  • How to set up a wireless AP with a linux box and SOCKS proxy

    - by user50455
    I've got:

    1. A Linux box (Arch Linux to be precise, but that doesn't really matter)
    2. An Ethernet connection on it (say, eth0)
    3. A SOCKS proxy on a remote site (say, remote:port), which can be accessed through (2)
    4. A wireless card on the local Linux box (say, eth1)

    So, the task is: create a wireless access point using (4) on the local box (1), in such a way that all connections from it go through the proxy (3). E.g., when someone simply connects to that AP (well, there should be DHCP or something for that) and goes to serverfault.com, all the traffic goes through that SOCKS proxy. I'm just asking for a direction to dig in and some references, not a step-by-step tutorial. Thanks in advance.

  • How do they keep track of the NPCs in Left 4 Dead?

    - by f20k
    How do they keep track of the NPC zombies in Left 4 Dead? I am talking about the NPCs that just walk into walls or wander around aimlessly. Even though the players cannot see them, they are there (say, inside rooms or behind doors).

    Let's say there are 10 or so zombies in a hallway and inside rooms. Does the game keep all of those zombies in a list and iterate through it, giving them commands? Do they just spawn when the user is within a certain radius or has reached a special location? Say you placed the 4 units (controlled by players) in completely different places throughout the map. Let's assume you aren't being swarmed and you have not killed any of these aimless NPCs. Would the game be keeping track of 10 x 4 = 40 zombies in total? Or is my understanding completely off?

    The reason I ask is that if I were to implement something similar on a mobile device, keeping track of 40 or more NPCs might not be such a great idea.
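    For what it's worth, a naive sketch of the bookkeeping the question describes (all names are mine; Valve's actual AI director is more elaborate and spawns/despawns rather than simulating everything) would keep every wanderer in one list but only run AI for the ones near a player:

        #include <algorithm>
        #include <limits>
        #include <vector>

        struct Vec2 { float x, y; };

        struct Zombie {
            Vec2 pos;
            bool active = false;  // only active zombies get AI updates
        };

        // Activate zombies near any player and deactivate the rest, so the
        // per-frame AI cost tracks the handful of nearby ones, not the map.
        void updateActivation(std::vector<Zombie>& zombies,
                              const std::vector<Vec2>& players,
                              float activeRadius) {
            const float r2 = activeRadius * activeRadius;
            for (auto& z : zombies) {
                float best = std::numeric_limits<float>::max();
                for (const auto& p : players) {
                    float dx = z.pos.x - p.x, dy = z.pos.y - p.y;
                    best = std::min(best, dx * dx + dy * dy);
                }
                z.active = best < r2;
            }
        }

    On a mobile device the cheaper variant is the second behavior the question guesses at: don't store distant NPCs at all, and spawn them from trigger volumes or a radius check as players approach.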

  • Are your personal insecurities screwing up your internal communications?

    - by Lucy Boyes
    I do some internal comms as part of my job. Quite a lot of it involves talking to people about stuff. I'm spending the next couple of weeks talking to lots of people about internal comms itself, because we haven't done a lot of audience/user feedback gathering, and it turns out that if you talk to people about how they feel and what they think, you get some pretty interesting insights (and an idea of what to do next that isn't just based on guesswork and generalising from self).

    Three things keep coming up from talking to people about what we suck at in terms of internal comms. And, as far as I can tell, they're all examples where personal insecurity on the part of the person doing the communicating makes the experience much worse for the people on the receiving end.

    1. Spending time telling people how you're going to do something, not what you're doing and why

    Imagine you've got to give an update to a lot of people who don't work in your area or department but do have an interest in what you're doing (either because they want to know because they're curious or because they need to know because it's going to affect their work too). You don't want to look bad at your job. You want to make them think you've got it covered – ideally because you do*. And you want to reassure them that there's lots of exciting work going on in your area to make [insert thing of choice] happen to [insert thing of choice] so that [insert group of people] will be happy. That's great! You're doing a good job and you want to tell people about it. This is good comms stuff right here.

    However, you're slightly afraid you might secretly be stupid or lazy or incompetent. And you're exponentially more afraid that the people you're talking to might think you're stupid or lazy or incompetent. Or pointless. Or not-adding-value. Or whatever the thing that's the worst possible thing to be in your company is. So you open by mentioning all the stuff you're going to do, spending five minutes or so making sure that everyone knows that you're DOING lots of STUFF. And then you talk for the rest of the time about HOW you're going to do the stuff, because that way everyone will know that you've thought about this really hard and done tons of planning and had lots of great ideas about process and that you've got this one down. That's the stuff you've got to say, right? To prove you're not fundamentally worthless as a human being?

    Well, maybe. But probably not. See, the people who need to know how you're going to do the stuff are the people doing the stuff. And those are the people in your area who you've (hopefully-please-for-the-love-of-everything-holy) already talked to in depth about how you're going to do the thing (because else how could they help do it?). They are the only people who need to know the how**. It's the difference between strategy and tactics. The people outside of your bubble of stuff-doing need to know the strategy – what it is that you're doing, why, where you're going with it, etc. The people on the ground with you need the strategy and the tactics, because else they won't know how to do the stuff. But the outside people don't really need the tactics at all.

    Don't bother with the how unless your audience needs it. They probably don't. It might make you feel better about yourself, but it's much more likely that Bob and Jane are thinking about how long this meeting has gone on for already than how personally impressive and definitely-not-an-idiot you are for knowing how you're going to do some work. Feeling marginally better about yourself (but, let's face it, still insecure as heck) is not worth the cost, which in this case is the alienation of your audience.

    2. Talking for too long about stuff

    This is kinda the same problem as the previous problem, only much less specific, and I've more or less covered why it's bad already. Basic motivation: to make people think you're not an idiot. What you do: talk for a very long time about what you're doing so as to make it sound like you know what you're doing and lots about it. What your audience wants: the shortest meaningful update.

    Some of this is a kill-your-darlings problem – the stuff you're doing that seems really nifty to you seems really nifty to you, and thus you want to share it with everyone to show that you're a smart person who thinks up nifty things to do. The downside to this is that it's mostly only interesting to you – if other people don't need to know, they likely also don't care.

    Think about how you feel when someone is talking a lot to you about a lot of stuff that they're doing which is at best tangentially interesting and/or relevant. You're probably not thinking that they're really smart and clearly know what they're doing (unless they're talking a lot and being really engaging about it, which is not the same as talking a lot). You're probably thinking about something totally unrelated to the thing they're talking about. Or the fact that you're bored. You might even – and this is the opposite of what they're hoping to achieve by talking a lot about stuff – be thinking they're kind of an idiot.

    There's another huge advantage to paring down what you're trying to say to the barest possible points – it clarifies your thinking. The lightning talk format, as well as other formats which limit the time and/or number of slides you have to say a thing, are really good for doing this. It's incredibly likely that your audience in this case (the people who need to know some things about your thing but not all the things about your thing) will get everything they need to know from five minutes of you talking about it, especially if trying to condense ALL THE THINGS into a five-minute talk has helped you get clear in your own mind what you're doing, what you're trying to say about what you're doing and why you're doing it. The bonus of this is that by being clear in your thoughts and in what you say, and in not taking up lots of people's time to tell them stuff they don't really need to know, you actually come across as much, much smarter than the person who talks for half an hour or more about things that are semi-relevant at best.

    3. Waiting until you've got every detail sorted before announcing a big change to the people affected by it

    This is the worst crime on the list. It's also human nature. Announcing uncertainty – that something important is going to happen (big reorganisation, product getting canned, etc.) but you're not quite sure what or when or how yet – is scary. There are risks to it. Uncertainty makes people anxious. It might even paralyse them. You can't run a business while you're figuring out what to do if you've paralysed everyone with fear over what the future might bring. And you're scared that they might think you're not the right person to be in charge of [thing] if you don't even know what you're doing with it. Best not to say anything until you know exactly what's going to happen and you can reassure them all, right?

    Nope. The people who are going to be affected by whatever it is that you don't quite know all the details of yet aren't stupid***. You wouldn't have hired them if they were. They know something's up because you've got your guilty face on and you keep pulling people into meeting rooms and looking vaguely worried. Here's the deal: it's a lot less stressful for everyone (including you) if you're up front from the beginning. We took this approach during a recent company-wide reorganisation and got really positive feedback. People would much, much rather be told that something is going to happen but you're not entirely sure what it is yet than have you wait until it's all fixed up and then fait accompli the heck out of them. They will tell you this themselves if you ask them.

    And here's why: by waiting until you know exactly what's going on to communicate, you remove any agency that the people that the thing is going to happen to might otherwise have had. I know you're scared that they might get scared – and that's natural and kind of admirable – but it's also patronising and infantilising. Ask someone whether they'd rather work on a project which has an openly uncertain future from the beginning, or one where everything's great until it gets shut down with no forewarning, and very few people are going to tell you they'd prefer the latter.

    Uncertainty is humanising. It's you admitting that you don't have all the answers, which is great, because no one does. It allows you to be consultative – you can actually ask other people what they think and how they feel and what they'd like to do and what they think you should do, and they'll thank you for it and feel listened to and respected as people and colleagues. Which is a really good reason to start talking to them about what's going on as soon as you know something's going on yourself.

    All of the above assumes you actually care about talking to the people who work with you and for you, and that you'd like to do the right thing by them. If that's not the case, you can cheerfully disregard the advice here, but if it is, you might want to think about the ways above – and the inevitable countless other ways – that making internal communication about you and not about your audience could actually be doing the people you're trying to communicate with a huge disservice.

    So take a deep breath and talk. For five minutes or so. About the important things. Not the other things. As soon as you possibly can. And you'll be fine.

    *Of course you do. You're good at your job. Don't worry.

    **This might not always be true, but it is most of the time. Other people who need to know the how will either be people who you've already identified as needing-to-know and thus part of the same set as the people in your area you've already discussed this with, or else they'll ask you. But don't bring this stuff up unless someone asks for it, because most of the people in the audience really don't care and you're wasting their time.

    ***I mean, they might be. But let's give them the benefit of the doubt and assume they're not.

  • Why are embedded device apps still written in C/C++? Why not the Java programming language?

    - by hinkmond
    At the recent Black Hat 2014 conference in Sin City, the Black Hatters were focusing on embedded devices and IoT. You know? Make your networked toaster burn your bread from 10,000 miles away, over the Web, for grins and giggles. Well, apparently the Black Hatters say it can be done pretty easily these days, which is scary. See: Securing Embedded Devices & IoT

    Here's a quote:

    All these devices are still written in C and C++. The challenges associated with developing securely in these languages have been fought for nearly two decades. "You often hear people say, 'Well, why don't we just get rid of the C and C++ language if it's so problematic. Why don't we just write everything in C# or Java, or something that is a little safer to develop in?'," DeMott says.

    Gah! Why are all these IoT devices still using C/C++? Of course they should be using Java SE Embedded technology! It's a natural fit for better security on embedded devices. Or, I guess, developers really don't mind if their networked toasters do char their breakfast. If it can be burned, it will be... that's what I say. Unless they use Java.

    Hinkmond

  • Want 2 external monitors with a 13" MacBook - is this possible?

    - by kevinburke
    I've got a 13" white MacBook from 2008 and I'd like to run one or two external monitors; my budget is $500. I've tried to do research, and this is what I was going to get. Will these work OK?

    - Two Diamond BVU195 HD USB display adapters (DVI and VGA, with an included DVI-to-VGA adapter) to plug into the USB ports
    - Two Dell ST2310 monitors
    - One external USB hub, so I don't use up both of my USB ports

    Will this work? I've read some people say it does and some people say it doesn't, but I don't know enough to say either way. Also, do you have recommendations for a better monitor than the Dell ST2310? What's the best setup I can buy for $500?

    Thanks very much for your help,
    Kevin

  • Is a "model" branch a common practice?

    - by dukeofgaming
    I just thought it could be a good thing to have a dedicated version control branch for all database schema changes, and I wanted to know if anyone else is doing the same and what the results have been. Say that you are working with:

    1. Schema model/documentation (some file where you model the database visually to generate the schema source, say MySQL Workbench, with a .mwb file, which is binary)
    2. Schema source (a .sql file)
    3. Schema-based code generation

    The normal way we were working was with feature branches, so we would make changes to the model files (the database-specific ones), and then have to regenerate points 2 and 3, dealing with the possible conflicts (or even code rewriting). Now say that your workflow goes the same way as the previous item numbering. With a model branch you wouldn't have to reconcile the schema model with binaries in other feature branches, or have to regenerate the schema source and regenerate code (which might have human code on top of it). It makes so much sense to me that it feels weird not having seen this earlier as a common practice.

    Edit: I'm counting on branch merges to be the assertions for the model matching the code. I use a DVCS, so I don't fear long-lived branches or scary-looking merges. I'm also doing feature branching.

  • Will having multiple domains improve my SEO?

    - by Anonymous12345
    Let's say I have a domain already, for example www.automobile4u.com (not mine), with a website fully running and all. The title of my website says:

        <title>Used cars - buy and sell your used cars here</title>

    Also, let's say I have fully SEO'd the website, so when people search for the term "buy used cars", I end up on the second or first page. Now, I want to end up higher, so I go to the Google AdWords page where you can check how many searches are made for specific terms. Let's say the term "used cars" has 20 million searches each month.

    Here comes the question: could I just go and buy the domain with the search term as its address, in this case www.usedcars.com, and make a redirect to my original page? This way, when people search for "used cars", my newly bought domain name comes up, redirecting people to my original website (www.automobile4u.com). The reason I believe this benefits me is that search engines seem to first check website addresses matching the search, so the query "used cars" would automatically bring www.usedcars.com to the first result, right?

    What are the downsides of this? I already know about Google spiders not liking redirects, but there are many methods of redirecting... Is this a good idea generally?

  • How to troubleshoot whether a zip file is valid or too big to be unzipped?

    - by mireille raad
    Hello, I am trying to unzip a file with a size of 2GB. I am getting the following error:

        unzip CLTE_C_08.zip
        Archive:  CLTE_C_08.zip
        End-of-central-directory signature not found.  Either this file is not
        a zipfile, or it constitutes one disk of a multi-part archive.  In the
        latter case the central directory and zipfile comment will be found on
        the last disk(s) of this archive.
        unzip:  cannot find zipfile directory in one of CLTE_C_08.zip or
                CLTE_C_08.zip.zip, and cannot find CLTE_C_08.zip.ZIP, period.

    After some googling, some people say that this error is because the file is too big, others say it's because the file is corrupt, and others say that it could be a non-Unix archive. So my question: how do I find out whether the file is a valid archive on my CentOS box, and what is the command/trick to uncompress big files (if any)?

    Thanks in advance :)

  • Interaction between two Clouds

    - by user7969
    I have set up Cloud-A with one [CLC+CC] machine and two [NC] machines. I have another Cloud-B with the same configuration. [I am using the Ubuntu Enterprise Cloud.] Both of them work fine individually, on the same LAN. Now, if I want to add the NC of Cloud-A to the CC of Cloud-B (in case the resources of Cloud-B are exhausted), how can I make that possible? I guess this calls for the interoperability stuff...

    Could you please explain what happens exactly when we ask for an instance: does the direct interaction happen between the client and the NC, or does it go through the CLC and CC?

    What I want to say is: say there are multiple cloud providers. A user is subscribed to any one of them, say Cloud-A, for IaaS. As the requirements are dynamic, all the resources of Cloud-A may get exhausted. There may be another Cloud-B which can provide the services, but Cloud-A can't ask the client to go to Cloud-B. So is it possible to have some coordination between these two providers to share resources mutually, keeping the client fully unaware of what's going on in the background?

    Please reply. I am sorry if I'm making a mistake anywhere... Thanks in advance :)

  • The most efficient ways for drawing lines all day long with OpenGL

    - by nkint
    I'd like to put a computer screen running an OpenGL program in a room. It has to run all day long (but not at night). I'd like to draw lines that slowly fade into the background. The setting is simple: a uniform background color (say, black) and colored lines (say, white) that slowly fade out. By slowly I mean... hours. Say the first line I draw has alpha 255 (fully visible); after one hour it's 240, and after 10 hours it's 105. One line could have 250 points, and there will be around 300 lines in one day.

    For now I have a prototype using a very rudimentary method like this:

        glBegin(GL_LINE_STRIP);
        iterator = point_list.begin();
        for (++iterator, end = point_list.end(); iterator != end; ++iterator) {
            const Vec3D &v = *iterator;
            glVertex2f(v.x(), v.y());
        }
        glEnd();

    Is there a more efficient method?
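    A sketch of one step up in efficiency, using the same legacy GL but batching each strip into a single client-side vertex array (assuming #include <vector> and the Vec3D type from the code above):

        // Pack the strip once into a flat array (x0, y0, x1, y1, ...).
        std::vector<GLfloat> verts;
        verts.reserve(point_list.size() * 2);
        for (const Vec3D &v : point_list) {
            verts.push_back(v.x());
            verts.push_back(v.y());
        }

        // One draw call replaces the whole glBegin/glEnd loop.
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, verts.data());
        glDrawArrays(GL_LINE_STRIP, 0, (GLsizei)(verts.size() / 2));
        glDisableClientState(GL_VERTEX_ARRAY);

    Since the fade only changes over hours, a per-line glColor4f(1, 1, 1, alpha) before each strip is cheap, and the vertex array only needs rebuilding when points are added. A further step would be a VBO, but at roughly 300 lines of 250 points per day, client-side arrays are unlikely to be the bottleneck.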

  • Choose IP Address for Process to use on launch [duplicate]

    - by user1436026
    This question already has an answer here: How to set which IP to use for a HTTP request?

    Say my server has the following IP addresses:

        123.456.78.0
        123.467.79.1
        123.456.77.1
        123.456.68.0
        etc...

    Say I want to launch a process, say wget, from the command line. Normally, I would do something like this:

        wget http://www.google.com/

    Except that I would like to choose the IP address that my server uses to make this request. Is there a way to use wget, or launch another command, with a choice of one of my own IP addresses, like the following pseudo-command?

        with-ip 123.456.68.0 wget http://www.google.com/
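    (As it happens, wget has a flag for exactly this: wget --bind-address=ADDRESS. More generally, any such "with-ip" wrapper works by binding the socket to a chosen local address before connecting; a minimal POSIX sketch, with the function name my own:)

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <unistd.h>

        // Connect to `dest` using a chosen local source IP (port 0 lets the
        // kernel pick an ephemeral port). Returns the fd, or -1 on error.
        int connect_from(const char* local_ip, const sockaddr_in& dest) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0) return -1;

            sockaddr_in local{};
            local.sin_family = AF_INET;
            local.sin_port = 0;  // any source port
            inet_pton(AF_INET, local_ip, &local.sin_addr);

            // bind() before connect() fixes the source IP of the connection.
            if (bind(fd, (const sockaddr*)&local, sizeof(local)) < 0 ||
                connect(fd, (const sockaddr*)&dest, sizeof(dest)) < 0) {
                close(fd);
                return -1;
            }
            return fd;
        }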

  • How to Create a Grid for a 2D Game?

    - by SoulBeaver
    So I'm currently writing the engine for my video game. I've almost integrated Tiled (I think), so I should have a map creator here soon. My question is: how do I actually make the grid? I'm really confused here. If I create a large map with, say, 20x20 tiles of size 32x32 (screen size 640x640), then what do I do with it?

    Let's say I have the code for creating a window, and then I place a player sprite that I can move with input; that's fine. If I use one map that's as big as the screen, then every pixel on the map is also a pixel on the game screen. The mapping is exact. Now what happens if I have a 2000x2000 map, for example? My character would have to keep moving and move the map around (or rather, the camera focused on the player moves). Then I can no longer say that the screen maps exactly to the pixel positions of the map.

    I tried making a Grid class that maps out the screen area to 32x32 tiles, but I'm not sure if that makes any sense. Once the map moves, each tile would have to update its information, or something. I'm just really confused here. How do I actually make the tiles and a grid, and map them to the data I get from Tiled, or that I make myself? Are there any good examples of source code that I could look at?
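    The usual answer, sketched below with invented names: tiles never move and never update anything; you keep one camera offset in world pixels and subtract it at draw time, iterating only the tiles that overlap the screen.

        #include <vector>

        const int TILE = 32;

        struct Camera { int x = 0, y = 0; };  // top-left of the view, in world pixels

        // Draw only the tiles visible through the camera; world -> screen is
        // just a subtraction of the camera offset. Assumes cam stays in the map.
        void drawMap(const std::vector<std::vector<int>>& tiles,
                     const Camera& cam, int screenW, int screenH) {
            int firstCol = cam.x / TILE, firstRow = cam.y / TILE;
            int lastCol  = (cam.x + screenW) / TILE;
            int lastRow  = (cam.y + screenH) / TILE;

            for (int r = firstRow; r <= lastRow && r < (int)tiles.size(); ++r)
                for (int c = firstCol; c <= lastCol && c < (int)tiles[r].size(); ++c) {
                    int screenX = c * TILE - cam.x;
                    int screenY = r * TILE - cam.y;
                    // drawTile(tiles[r][c], screenX, screenY);  // your renderer here
                }
        }

    Player movement then updates cam.x/cam.y (typically clamped to the map edges), and any world pixel maps to a tile by integer division by the tile size; nothing per-tile needs updating when the view scrolls.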

  • Thin web server - single or multiple instances per IP address:port?

    - by wchrisjohnson
    I'm deploying a Rack/Sinatra/WebSocket app onto several servers and will use Thin as the web server (http://code.macournoyer.com/thin/). There are almost no views to show, so I am not front-ending it with a traditional web server like Apache or nginx.

    In general, you see Thin started with an underlying config file that has the number of server instances to start, say 3, and the port to start with, say 5000. So, in my example, when Thin starts, it starts up three instances on a range of ports, starting on port 5000. If I have a series of virtual machines, say 3, 6, 9, etc., that I treat as a cluster, should I start a single Thin instance on each VM, or multiple instances on each VM? Why?

    Thanks - Chris

  • AngularJS dealing with large data sets (Strategy)

    - by Brian
    I am working on developing a personal temperature-logging viewer based on my Raspberry Pi curl'ing data into my web server's API. Temperatures are taken every 2 seconds, and I can have several temperature sensors posting data. Needless to say, I will have a lot of data to handle, even within the scope of an hour. I have implemented a very simple paging API on the server so the server doesn't time out; it currently returns data in units of 1000 per call, then pages through the data.

    I had the idea to initially show, say, the last 20 minutes of data from a sensor (or all sensors, depending on user choices), then allow the user to select other timeframes from which to show data. The issue comes in when you want to view all sensors or an extended time period (say, 24 hours).

    Is there a best practice for handling this large amount of data? Would it be useful to load those first 20 minutes into the live view and then cache something like the last 24 hours into local storage? I haven't been able to find a decent writeup of this in use yet, even though there are a lot of ways to approach this problem. I am just looking for suggestions as to what might provide a good balance between good performance and not caching the entire data set on the client side (as beyond a week of data this might not be feasible).

  • list of polymorphic objects

    - by LivingThing
    I have a particular scenario below. The code should call the say() function of classes B and C and print 'B says...' and 'C says...', but it doesn't. Any ideas? I am learning polymorphism, so I have also added a few related questions as comments on the lines of code below.

        class A {
        public:
            // A() {}
            virtual void say() {
                std::cout << "Said IT ! " << std::endl;
            }
            virtual ~A(); // why a virtual destructor?
        };

        void methodCall() // does it matter if the classes inherited from A are inside this method?
        {
            class B : public A {
            public:
                // virtual ~B(); // significance of a virtual destructor in a 'child' class?
                virtual void say() // does the overridden method also have to have the keyword 'virtual'?
                {
                    cout << "B Sayssss.... " << endl;
                }
            };

            class C : public A {
            public:
                // virtual ~C();
                virtual void say() {
                    cout << "C Says " << endl;
                }
            };

            list<A> listOfAs;
            list<A>::iterator it;

            // 1st scenario
            B bObj;
            C cObj;
            A *aB = &bObj;
            A *aC = &cObj;

            // 2nd scenario
            // A aA;
            // B *Ba = &aA;
            // C *Ca = &aA;
            // I am declaring the objects as in the 1st scenario, but is the 2nd scenario supposed to work too?

            listOfAs.insert(it, *aB);
            listOfAs.insert(it, *aC);

            for (it = listOfAs.begin(); it != listOfAs.end(); it++) {
                cout << *it.say() << endl;
            }
        }

        int main() {
            methodCall();
            return 0;
        }
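    For contrast, here is a minimal, independent sketch (class names mine) of the usual way to get polymorphic behavior out of a standard container: store smart pointers rather than objects, since a std::list<A> stores copies of A and slices away the derived parts.

        #include <iostream>
        #include <list>
        #include <memory>

        struct Animal {
            virtual void say() { std::cout << "...\n"; }
            virtual ~Animal() = default;  // virtual: deleting via a base pointer is safe
        };
        struct Dog : Animal { void say() override { std::cout << "Woof\n"; } };
        struct Cat : Animal { void say() override { std::cout << "Meow\n"; } };

        int main() {
            std::list<std::unique_ptr<Animal>> zoo;  // list<Animal> would slice
            zoo.push_back(std::make_unique<Dog>());
            zoo.push_back(std::make_unique<Cat>());
            for (auto& a : zoo) a->say();  // prints Woof, then Meow
        }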

  • Optimization and Saving/Loading

    - by MrPlosion1243
    I'm developing a 2D tile-based game and I have a few questions regarding it. First, I would like to know if this is the correct way to structure my Tile class:

        namespace TileGame.Engine
        {
            public enum TileType
            {
                Air,
                Stone
            }

            class Tile
            {
                TileType type;
                bool collidable;

                static Tile air = new Tile(TileType.Air);
                static Tile stone = new Tile(TileType.Stone);

                public Tile(TileType type)
                {
                    this.type = type;
                    collidable = true;
                }
            }
        }

    With this method I just say world[y, x] = Tile.Stone, and this seems right to me, but I'm not a very experienced coder and would like assistance.

    Now, the reason I doubt this so much is that I like everything to be as optimized as possible, and there is a major flaw in this that I need help overcoming. It has to do with saving and loading... well, more with loading, actually. The way it's done relies on the principle of casting an enumeration into a byte, which gives you the corresponding number where it's declared in the enumeration. Each TileType is cast as a byte and written out to a file. So TileType.Air would appear as 0 and TileType.Stone would appear as 1 in the file (well, in byte form, obviously). Loading in the file is a lot different, though, because I can't just loop through all the bytes in the file, cast them as a TileType, and assign it:

        for (int x = 0; x < size.X; x++)
        {
            for (int y = 0; y < size.Y; y++)
            {
                world[y, x].Type = (TileType)byteReader.ReadByte();
            }
        }

    This just won't work, presumably because I have to actually say world[y, x] = Tile.Stone as opposed to world[y, x].Type = TileType.Stone. In order to be able to say that, I need a gigantic switch-case statement (I only have 2 tiles, but you could imagine what it would look like with hundreds):

        Tile tile;
        for (int x = 0; x < size.X; x++)
        {
            for (int y = 0; y < size.Y; y++)
            {
                switch (byteReader.ReadByte())
                {
                    case 0:
                        tile = Tile.Air;
                        break;
                    case 1:
                        tile = Tile.Stone;
                        break;
                }
                world[y, x] = tile;
            }
        }

    Now you can see how unoptimized this is, and I don't know what to do. I would really just like to cast the byte as a TileType and use that, but as said before, I have to say world[y, x] = Tile.whatever, and TileType can't be used this way. So what should I do? I would imagine I need to restructure my Tile class to fit the requirements, but I don't know how I would do that. Please help! Thanks.
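    One common way out of the switch is a lookup table indexed by the byte itself, so adding a tile type means adding one table entry. A sketch of the idea, written here in C++ with names of my own invention; the same shape works in C# as a static Tile[] indexed by the byte.

        #include <cstdint>

        enum class TileType : std::uint8_t { Air = 0, Stone = 1 };

        struct Tile {
            TileType type;
            bool collidable;
        };

        // One shared instance per tile kind, positioned so that the enum's
        // byte value is also its index: loading needs no switch at all.
        static const Tile kTiles[] = {
            { TileType::Air,   false },
            { TileType::Stone, true  },
        };

        const Tile& tileFromByte(std::uint8_t b) {
            return kTiles[b];  // bounds-check b for untrusted save files
        }

        // The loading loop then collapses to:
        //   world[y][x] = &tileFromByte(reader.ReadByte());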

  • Use a custom value object or a Guid as an entity identifier in a distributed system?

    - by Kazark
    tl;dr: I've been told that in domain-driven design, an identifier for an entity could be a custom value object, i.e. something other than Guid, string, int, etc. Can this really be advisable in a distributed system?

    Long version: I will invent a situation analogous to the one I am currently facing. Say I have a distributed system in which a central concept is an egg. The system allows you to order eggs and see spending reports and inventory-centric data such as quantity on hand, usage, valuation and what have you. There are a variety of services backing these behaviors. And say there is also another app which allows you to compose recipes that link to a particular egg type.

    Now, egg type is broken down by species: ostrich, goose, duck, chicken, quail. This is fine and dandy because it means that users don't end up with ostrich eggs when they wanted quail eggs, and whatnot. However, we've been getting complaints because jumbo chicken eggs are not even close to equivalent to small ones. The price is different, and they really aren't substitutable in recipes. And here we thought we were doing users a favor by not overwhelming them with too many options.

    Currently each of the services (say, OrderSubmitter, EggTypeDefiner, SpendingReportsGenerator, InventoryTracker, RecipeCreator, RecipeTracker, or whatever) identifies egg types with an industry-standard integer representation of the species (let's call it speciesCode). We realize we've goofed up because this change could affect every service. There are two basic proposed solutions:

    1. Use a predefined identifier type like Guid as the eggTypeID throughout all the services, but make EggTypeDefiner the only service that knows that this maps to a speciesCode and eggSizeCode (and potentially to an isOrganic flag in the future, or whatever).

    2. Use an EggTypeID value object which is a combination of speciesCode and eggSizeCode in every service.

    I've proposed the first solution because I'm hoping it better encapsulates the definition of what an egg type is in the EggTypeDefiner and will be more resilient to changes, say if some people now want to differentiate eggs by whether or not they are "organic". The second solution is being suggested by some people who understand DDD better than I do, in the hope that less enrichment and lookup will be necessary that way, with the justification that in DDD using a value object as an ID is fine. Also, they are saying that EggTypeDefiner is not a domain and EggType is not an entity, and as such should not have a Guid for an ID.

    However, I'm not sure the second solution is viable. This "value object" is going to have to be serialized into JSON and URLs for GET requests and used with a variety of technologies (C#, JavaScript...), which breaks encapsulation and thus removes any behavior of the identifier value object (is either of the fields optional? etc.). Is this a case where we want to avoid something that would normally be fine in DDD because we are trying to do DDD in a distributed fashion?

    Summary: Can it be a good idea to use a custom value object as an identifier in a distributed system (solution #2)?
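    To make solution #2 concrete, a sketch (C++20 for illustration; the field names come from the question, everything else is assumed) of such a composite identifier as a plain value type, where identity is exactly its fields:

        #include <cstddef>
        #include <cstdint>
        #include <functional>

        // Value-object identifier: no behavior beyond equality and hashing,
        // so it can cross service boundaries as plain data (e.g. rendered as
        // "species:size" in a URL or JSON).
        struct EggTypeId {
            std::uint16_t speciesCode;
            std::uint8_t  eggSizeCode;

            bool operator==(const EggTypeId&) const = default;
        };

        // Hash so it can key unordered containers.
        template <> struct std::hash<EggTypeId> {
            std::size_t operator()(const EggTypeId& id) const noexcept {
                return (std::size_t(id.speciesCode) << 8) ^ id.eggSizeCode;
            }
        };

    The serialization concern raised in the question is visible here: once this crosses a service boundary it is just two numbers in JSON or a URL, and any invariants have to be re-validated on the other side.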

  • Too complex/too many objects?

    - by Mike Fairhurst
    I know that this will be a difficult question to answer without context, but hopefully there are at least some good guidelines to share on this. The questions are at the bottom if you want to skip the details. Most are about OOP in general.

    Begin context. I am a jr dev on a PHP application, and in general the devs I work with consider themselves to use many more OO concepts than most PHP devs. Still, in my research on clean code I have read about so many ways of using OO features to make code flexible, powerful, expressive, testable, etc. that are just plain not in use here. The current strongly OO API that I've proposed is being called too complex, even though it is trivial to implement.

    The problem I'm solving is that our permission checks are done via a message object (my API; they wanted to use arrays of constants), and the message object does not hold the validation object accountable for checking all provided data. Metaphorically, if your perm containing 'allowable' and 'rare but disallowed' is sent into a validator, the validator may not know to look for 'rare but disallowed' but approve 'allowable', which will actually approve the whole perm check. We have like 11 validators, too many to easily track at such minute detail.

    So I proposed an AtomicPermission class. To fix the previous example, the perm would instead contain two atomic permissions, one wrapping 'allowable' and the other wrapping 'rare but disallowed'. Where previously the validator would say 'the check is OK because it contains allowable', now it would instead say "'allowable' is OK", at which point the check ends... and the check fails, because 'rare but disallowed' was not specifically okay-ed. The implementation is just 4 trivial objects, and rewriting a 10-line function into a 15-line function:

        abstract class PermissionAtom
        {
            public function allow(); // maybe deny() as well
            public function wasAllowed();
        }

        class PermissionField extends PermissionAtom
        {
            public function getName();
            public function getValue();
        }

        class PermissionIdentifier extends PermissionAtom
        {
            public function getIdentifier();
        }

        class PermissionAction extends PermissionAtom
        {
            public function getType();
        }

    They say that this is 'not going to get us anything important', it is 'too complex' and it 'will be difficult for new developers to pick up'. I respectfully disagree, and there I end my context to begin the broader questions.

    So the question is about my OOP. Are there any guidelines I should know:

    - Is this too complicated/too much OOP? Not that I expect to get more than 'it depends, I'd have to see if...'
    - When is OO abstraction too much? When is it too little?
    - How can I determine when I am overthinking a problem vs. fixing one?
    - How can I determine when I am adding bad code to a bad project?
    - How can I pitch these APIs? I feel the other devs would rather just say 'it's too complicated' than ask 'can you explain it?' whenever I suggest a new class.

  • Can pdflatex (or any tex package) automatically rescale included images which have been reduced in size?

    - by drfrogsplat
    I'm writing my thesis in LaTeX, generating it with pdflatex. I have a large number of figures, many of which are bitmaps (as opposed to SVG) in PNG/JPEG format. I've generally created them to be fairly high resolution (say 1600x1200-ish) to ensure that whatever size they end up in the document, they'll be at least 300dpi when printed. As I'm writing/laying out the document, I'm including graphics (using \includegraphics from the graphicx package) and setting widths/heights as appropriate (e.g. subfigures are quite small). I don't need the images to be any more than about 300 dpi at best, so where I have shrunk a 1600x1200 image down to say 5cm, the image is now at 800 dpi. So despite including some very small (on the page) images, the PDF is becoming quite large. Is there a way to tell pdflatex or graphicx (or something else involved?) to convert all images to a maximum of 300 dpi, based on the dimensions I'm setting with say \includegraphics[width=2in]{filename}? i.e. so it scales the image to a max of 600x600 pixels as it includes it in the PDF (leaving the original file untouched). I know I can resize the original images with various command line applications, and include the pre-resized versions, but given the images vary in size considerably, it wouldn't be as simple as making sure they're all 300dpi for a constant printed size. It'd also be nice to be able to easily create different versions of PDFs (web vs final print) without resizing images manually, so that the 'web' PDF capped images at say 72-100 dpi while the final print one could cap at 600 (if at all).
