Search Results

Search found 28877 results on 1156 pages for 'do good'.

Page 200/1156 | < Previous Page | 196 197 198 199 200 201 202 203 204 205 206 207  | Next Page >

  • Informing Googlebot for deprecated pages

    - by trante
    I publish timetables on my website. For example, last year I published a "Number 2 bus Summer 2013 timetable" page. It had a pretty good ranking on Google SERPs for "number 2 bus timetable". This year I added a new page named "Number 2 bus Summer 2014 timetable". When users search for "number 2 bus timetable" on Google, they find the 2013 timetable on the first page of the SERPs, but I want them to find the 2014 timetable. They can reach the 2014 page with the keywords "number 2 bus timetable 2014", but most users don't include the year. So what's the proper way to tell Googlebot that the 2013 page is deprecated and that the newer version is the 2014 page? I created a link from the 2013 page to the 2014 page and added a deprecation alert for visitors, but I still see the 2013 timetable on the first page of Google SERPs. Of course it is possible to 301 redirect the 2013 page to the 2014 page, but I want users to still be able to reach the old pages to compare the differences between years. (As you would guess, I have many pages like this.) Edit: Why don't I put the timetables on the same page and show different years' timetables with sorting? Because my old pages have good PageRank scores and SERP positions, and removing those old pages would throw that away.

    Read the article

  • The one feature that would make me invest in SSIS 2012

    - by Peter Larsson
    This week I was invited by Microsoft to give two presentations in Slovenia. My presentations went well, I had good energy, and the audience was interacting with me. When I had some time left over from networking and partying, I attended a few other presentations, at least the ones that were held in English. One of these was "SQL Server Integration Services 2012 - All the News, and More", given by Davide Mauri, a colleague from SolidQ. We started to talk and soon got into the details of the new things in SSIS 2012. All of the official things Davide talked about are good stuff, but for me the best thing is one he didn't cover in his presentation. In versions of SSIS earlier than 2012, it is possible to have a stored procedure act as a data source, as long as it doesn't use a temp table. If it does, you will get an error message from SSIS saying "Metadata could not be found". This is still true with SSIS 2012, so the thing I am talking about is not really an SSIS feature, it's a SQL Server 2012 feature: the EXECUTE ... WITH RESULT SETS feature! With this, you can have a stored procedure that uses a temp table deliver its resultset to SSIS, if you execute the stored procedure from SSIS and add the "WITH RESULT SETS" option. If you do this, SSIS is able to take the metadata from the code you write in SSIS and not from the stored procedure! And it's very fast too. In earlier versions, referencing a stored procedure in SSIS could force SSIS to actually call the stored procedure (which can take hours) just to retrieve the metadata. Now, with RESULT SETS, SSIS 2012 can continue in milliseconds! This is because you provide the metadata in the RESULT SETS clause, and if the data from the stored procedure doesn't match the declared result sets, you get an error anyway, so it makes sense that Microsoft has provided this optimization for us.
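
    For illustration, here is a minimal sketch of what the clause looks like from any client that declares the result shape up front; the connection string, procedure and column names are invented, the SQL Server JDBC driver is assumed to be on the classpath, and the same WITH RESULT SETS clause is what you would append to the SQL command you give SSIS:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class WithResultSetsDemo {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection string and stored procedure; the procedure may
                // build its output in a temp table internally -- the caller doesn't care.
                String url = "jdbc:sqlserver://localhost;databaseName=Sales;integratedSecurity=true";
                String sql = "EXEC dbo.usp_GetDailySales "
                           + "WITH RESULT SETS ((SaleId INT, SaleDate DATE, Amount DECIMAL(18,2)))";
                try (Connection con = DriverManager.getConnection(url);
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(sql)) {
                    // WITH RESULT SETS guarantees the shape of the result: if the procedure
                    // returns something else, SQL Server raises an error instead of
                    // surprising the caller.
                    while (rs.next()) {
                        System.out.printf("%d %s %s%n",
                                rs.getInt("SaleId"), rs.getDate("SaleDate"), rs.getBigDecimal("Amount"));
                    }
                }
            }
        }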

    Read the article

  • How should I architect my Model and Data Access layer objects in my website?

    - by Robin Winslow
    I've been tasked with designing the data layer for a website at work, and I am very interested in code architecture for the best flexibility, maintainability and readability. I am acutely aware of the value in completely separating out my actual Models from the Data Access layer, so that the Models are completely naive when it comes to Data Access. In this case it's particularly useful, as the Models may be built from the database or from a SOAP web service. So it seems to make sense to have Factories in my data access layer which create Model objects. Here's what I have so far (in my made-up pseudocode):

        class DataAccess.ProductsFromXml extends DataAccess.ProductFactory {}
        class DataAccess.ProductsFromDatabase extends DataAccess.ProductFactory {}

    These then get used in the controller in a fashion similar to the following:

        var xmlProductCreator = DataAccess.ProductsFromXml(xmlDataProvider);
        var databaseProductCreator = DataAccess.ProductsFromDatabase(databaseDataProvider);
        // Returns array of Product model objects
        var XmlProducts = xmlProductCreator.Products();
        // Returns array of Product model objects
        var DbProducts = databaseProductCreator.Products();

    So my question is: is this a good structure for my Data Access layer? Is it a good idea to use a Factory for building my Model objects from the data? Do you think I've misunderstood something? And are there any general patterns I should read up on for how to write my data access objects to create my Model objects?
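
    For comparison, a rough Java-flavoured sketch of the shape described above (all names invented): the model stays ignorant of where its data comes from, and each factory owns the mapping from one data source.

        import java.util.List;

        // The model knows nothing about where its data comes from.
        class Product {
            private final String name;
            private final double price;
            Product(String name, double price) { this.name = name; this.price = price; }
            String name()  { return name; }
            double price() { return price; }
        }

        // The abstraction the controller depends on.
        interface ProductFactory {
            List<Product> products();
        }

        // One factory per source; each owns the mapping from raw data to naive models.
        class XmlProductFactory implements ProductFactory {
            private final String xml;
            XmlProductFactory(String xml) { this.xml = xml; }
            @Override
            public List<Product> products() {
                // parse this.xml here and build Product instances from it
                return List.of(new Product("example from xml", 9.99));
            }
        }

        class DatabaseProductFactory implements ProductFactory {
            @Override
            public List<Product> products() {
                // run the query here and build Product instances from the rows
                return List.of(new Product("example from db", 9.99));
            }
        }

    The controller would then depend only on ProductFactory and simply call products() on whichever implementation it was given.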

    Read the article

  • Need a PCIe desktop graphics card for dual-monitor

    - by Graham
    I have a mid-2008 workstation with two HD monitors supporting HDMI and DVI inputs. Since Ubuntu 11.10, I have experienced no end of trouble with my NVidia Quadro NVS 290 in TwinView dual-monitor output. Others have similar desktop TwinView woes. I want a new graphics card. Previously I asked for a graphics card recommendation and response was Nvidia Geforce GTS 450... but really I'm looking for someone who has actually got a working dual-monitor desktop to tell me what card they use so I can get something that is known to work. So please, people who have no-issues with their 64-bit Ubuntu 12.04 Unity 3D desktop spread across two HD-resolution external monitors (either DVI or HDMI connector), and who also run Google Chrome (which throws a spanner due to its own GPU compositing)... please let me know what graphics card you have so I can buy one. Gathering Options These seem to be the Nvidia cards featuring dual DVI. But they all seem to be gaming cards - what has dual-DVI, good support, but is not a massive gaming card? Nvidia GTS 450 (previously recommended) - 2x DVI Nvidia GTX 550 Ti (used by System76) - 2x DVI Nvidia GT 430 (used by System76) - 1xDVI, 1xHDMI Nvidia GT 640 (found on NVidia site) - 1xDVI, 1xHDMI (also GT 620, GT 630) Has anyone had a good desktop dual-monitor Unity 3D experience with ATI cards?

    Read the article

  • Why do some programmers think there is a contrast between theory and practice?

    - by Giorgio
    Comparing software engineering with civil engineering, I was surprised to observe a different way of thinking: any civil engineer knows that if you want to build a small hut in the garden you can just get the materials and go build it whereas if you want to build a 10-storey house you need to do quite some maths to be sure that it won't fall apart. In contrast, speaking with some programmers or reading blogs or forums I often find a wide-spread opinion that can be formulated more or less as follows: theory and formal methods are for mathematicians / scientists while programming is more about getting things done. What is normally implied here is that programming is something very practical and that even though formal methods, mathematics, algorithm theory, clean / coherent programming languages, etc, may be interesting topics, they are often not needed if all one wants is to get things done. According to my experience, I would say that while you do not need much theory to put together a 100-line script (the hut), in order to develop a complex application (the 10-storey building) you need a structured design, well-defined methods, a good programming language, good text books where you can look up algorithms, etc. So IMO (the right amount of) theory is one of the tools for getting things done. So my question is why do some programmers think that there is a contrast between theory (formal methods) and practice (getting things done)? Is software engineering (building software) perceived by many as easy compared to, say, civil engineering (building houses)? Or are these two disciplines really different (apart from mission-critical software, software failure is much more acceptable than building failure)?

    Read the article

  • Code maintenance: when extending existing code, keep a bad pattern for consistency or not?

    - by Guillaume
    I have to extend an existing module of a project. I don't like the way it has been done (lots of anti-patterns involved, like copy-pasted code). I don't want to perform a complete refactor. Should I: create new methods using the existing convention, even if it feels wrong, to avoid confusing the next maintainer and to stay consistent with the code base? Or try to use what I feel is better, even if it introduces another pattern into the code? Clarification, edited after the first answers: the existing code is not a mess. It is easy to follow and understand. BUT it introduces lots of boilerplate code that could be avoided with good design (though the resulting code might become harder to follow). In my current case it's a good old JDBC DAO module (with Spring's JdbcTemplate on board), but I have already encountered this dilemma before and I'm seeking other developers' feedback. I don't want to refactor because I don't have time, and even with time it would be hard to justify that a whole, perfectly working module needs refactoring; the cost of refactoring would outweigh its benefits. Remember: the code is not messy or over-complex. I cannot just extract a few methods there and introduce an abstract class here. It is more a flaw in the design (the result of extreme "Keep It Simple, Stupid", I think). So the question can also be asked like this: as a developer, do you prefer to maintain easy, stupid, boring code, OR to have some helpers that will write the stupid boring code for you? The downside of the latter being that you'll have to learn some new things, and maybe you will have to maintain the easy, stupid, boring code too, until a full refactoring is done.
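
    To make the trade-off concrete, here is a rough sketch (not the poster's code; table, class and method names are invented, and Spring's JdbcTemplate is assumed to be on the classpath). The first method is the easy, stupid, boring style where every DAO method repeats the same plumbing; the second hides that plumbing behind the JdbcTemplate helper, which the next maintainer then has to know about:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;
        import javax.sql.DataSource;
        import org.springframework.jdbc.core.JdbcTemplate;

        class Customer {
            final long id;
            final String name;
            Customer(long id, String name) { this.id = id; this.name = name; }
        }

        class CustomerDao {
            private final DataSource dataSource;
            private final JdbcTemplate jdbc;

            CustomerDao(DataSource dataSource) {
                this.dataSource = dataSource;
                this.jdbc = new JdbcTemplate(dataSource);
            }

            // Boilerplate style: every query spells out connection, statement and mapping.
            List<Customer> findByCityBoilerplate(String city) throws SQLException {
                List<Customer> result = new ArrayList<>();
                try (Connection con = dataSource.getConnection();
                     PreparedStatement ps = con.prepareStatement(
                             "select id, name from customer where city = ?")) {
                    ps.setString(1, city);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            result.add(new Customer(rs.getLong("id"), rs.getString("name")));
                        }
                    }
                }
                return result;
            }

            // Helper style: the plumbing lives in JdbcTemplate, the method states only intent.
            List<Customer> findByCityWithHelper(String city) {
                return jdbc.query("select id, name from customer where city = ?",
                        (rs, rowNum) -> new Customer(rs.getLong("id"), rs.getString("name")),
                        city);
            }
        }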

    Read the article

  • Entry / Jr. PHP Programmer - What do I learn next?

    - by dtj
    I got very interested in programming toward the end of college. I took a few classes, but learned most everything on my own via books and such. It's mostly been PHP and MySQL. Right out of school, I got a job at a company (web media) for 2 years and ended up learning a lot and programming some things for them. I am no longer at that company, but I am looking for my next steps as a programmer. I really enjoy web development, and PHP and MySQL seem to be my thing. Basically: I know how to do CRUD operations; I am mediocre at OOP and still have more to learn; I know HTML and CSS quite well; I know my way around a Unix terminal and can access MySQL through it, set up cron jobs and such; and I know some basic JavaScript. What's a good next step? I don't know anything about third-party services, PDO, APIs (Twitter, Facebook, etc.), Drupal/Joomla, unit testing, e-commerce, PECL, PEAR... in other words, A LOT. I get easily overwhelmed by the amount of stuff there is to learn, so I'm sort of trying to find a path. Right now, I'm digging into OOP more, as that seems like a good conceptual first step. Any suggestions?

    Read the article

  • Code Measuring and Metrics Tools?

    - by David
    I'm in the process of setting up a build server for personal projects. This server will handle all the normal CI stuff, including running large suites of tests (unit, integration, automated UI). While I'm working out the kinks for including code coverage output with MSTest, it occurs to me that there may be lots of tools out there which give me additional metrics other than just code coverage. FxCop comes to mind as an example. Though I'm sure there are others. Anything that can generate useful reportable data and metrics would be good. Whether it's class dependency charts (looking for Law of Demeter violations, for example), analyses of the uses of classes/functions (looking for a function that isn't used in the system other than just the tests, for example), and so on. I'm not sure the right way to formulate the question, since polling questions or "What's your favorite code analysis tool" aren't very good. But I'm essentially just looking for recommendations on what metrics to gather and the tools that can gather them. The eventual vision for something like this is to have the CI server run a bunch of automated tests and analysis tools and track performance metrics over time. Imagine a dashboard full of graphs plotting these metrics over time. The lines should all relatively be at an equilibrium, and if one starts to stray toward the negative then it's an early indication of problems with the code. In the age old struggle to quantify code quality with management, this sounds like a potentially helpful means of doing just that.

    Read the article

  • Decoupling software components via naming convention

    - by csteinmueller
    I'm currently evaluating alternatives for refactoring a driver-management module. In my multi-tier architecture I have:

        Base class: DAL.Device          // my entity
        Interfaces: BL.IDriver          // handles the data processing between application and device
                    BL.IDriverCreator   // creates an IDriver from a Device
                    BL.IDriverFactory   // handles the driver creation requests

    Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed via a type check within the business layer / DriverFactory. That means every new driver requires a) changing code within the DriverFactory and b) referencing the new IDriver implementation/assembly. From the customer's point of view, that means every new driver, used or not, requires a complex revalidation of their hardware environment, because it's a critical process. My first inspiration was to use a Caliburn.Micro-like naming convention (see "Caliburn.Micro: Xaml Made Easy"): BL.RestDriver, BL.RestDriverCreator, DAL.RestDevice. After receiving the RestDevice within the IDriverFactory, I can load all driver DLLs via reflection and do a name split/compare (extracting the xx from xxDriverCreator and xxDevice). Another idea would be a custom attribute (which also leads to comparing strings). My question: is that a good approach across layer boundaries? If not, what would be a good approach?
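
    The convention lookup itself is only a few lines. A rough sketch of the idea follows, in Java purely for illustration (the question is about .NET assemblies, but the reflection approach is the same; the package and type names here are assumptions):

        // Minimal convention-based factory sketch: derive the creator's class name from
        // the device's class name, load it reflectively, and never reference concrete
        // creator types from the factory.
        interface IDriver { }
        interface IDriverCreator { IDriver create(Device device); }
        abstract class Device { }
        class RestDevice extends Device { }

        class DriverFactory {
            IDriver createFor(Device device) throws ReflectiveOperationException {
                // "RestDevice" -> "Rest" -> "bl.RestDriverCreator" (package name is an assumption)
                String prefix = device.getClass().getSimpleName().replace("Device", "");
                Class<?> creatorClass = Class.forName("bl." + prefix + "DriverCreator");
                IDriverCreator creator =
                        (IDriverCreator) creatorClass.getDeclaredConstructor().newInstance();
                return creator.create(device);
            }
        }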

    Read the article

  • Ubuntu security with services running from /opt

    - by thejartender
    It took me a while to understand what's going on here (I think), but can someone tell me whether there are security risks in my reasoning below? I am trying to set up a home web server as a developer with reasonable Linux knowledge. Ubuntu is not like other systems in that it restricts the root user account: you cannot log in as root or su to root. This was a problem for me, as I have had to install numerous applications and services into /opt as per their documentation (XAMPP for Linux is a good example). The problem is that this directory is owned by root:root. I notice that my admin user account does not belong to the root group (checked with the command: groups username). So my understanding is that even though the files and services that I place in /opt belong to root, executing them by means of sudo (as required) does not mean that they are run as root? I imagine that the sudo command is somehow tied to the root user and has 775 permissions? So my question is: does running a service like Tomcat, Apache, etc. expose my system the way it would on other systems? Obviously I need to secure these in their configurations, but isn't the golden rule to never run anything as root? And what happens, with regard to a compromised server, if I have multiple services running under the same user/group?

    Read the article

  • How relevant are Brainbench scores when evaluating candidates?

    - by Newtopian
    I've seen many companies use certification services such as Brainbench when evaluating candidates. Most of the time they use it as a secondary screen prior to the interview, or as a validation to choose between candidates. What is your experience with Brainbench scores? Did you try the tests yourself, and if so, do you feel the score is meaningful enough to be used as part of a hiring process? Difficult choice. The consensus seems to be that Brainbench certs are not very good as certifications. The biggest argument was that some of the questions are too precise to form a good evaluation. That view can probably be tempered somewhat, but still, to base someone's future solely on the results of this evaluation would be irresponsible. That said, I still think it is possible to use them properly to gain additional objective knowledge of a candidate's level of expertise, provided the test is taken in a controlled environment that ensures everyone taking it stands on equal footing. Thus I went with the answer that best reflected this view, keeping in mind that it is still just an hour-long, 50-ish-question multiple-choice test used to evaluate skills and knowledge that take years to acquire. To be taken with a grain of salt! In short, the tests have value, but whether or not they are worth the money is another debate. Thanks all for your time.

    Read the article

  • Making an interactive 2D map

    - by Chad
    So recently I have been working on a Legend of Zelda: A Link to the Past clone, and I am wondering how I could handle certain map interactions (like cutting grass, lifting rocks, etc.). The way I am currently doing the tilemap is with 2 PNGs. The first is the "tilemap", where each pixel represents a 16x16 tile and the (red, green) values are the (x, y) coords of the tile in the second PNG (the "tileset"). I am then using the blue channel to store collision data. Each tile is split into four 8x8 sub-tiles, each represented by a 2-bit value (0 = empty, 1 = jump-down point, 2 = unused right now, 3 = blocking). Four of these 2-bit values make up the full blue channel (1 byte). So collisions work great, and I am moving on to putting interactive units on the level, but I am not sure what a good way to do it is. I have experimented with spawning an entity for each grass tuft and rock, but there are just WAY too many; FPS just dies even if I confine it to the current "zone" the user is in (for those who remember LTTP, it had zones you moved between). It does make a difference that this is a browser-based JavaScript game. tl;dr: What is a good way to have an interactive map without using full-blown entities for each interactive item?
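
    The packing described above is just shifts and masks. A small illustrative sketch (in Java here; the game is browser JavaScript, but the bit arithmetic is identical, and the constant names are made up):

        public class CollisionChannel {
            // The four 2-bit values the post describes, packed into one blue-channel byte.
            static final int EMPTY = 0, JUMP_DOWN = 1, UNUSED = 2, BLOCKING = 3;

            // Pack the four 8x8 sub-tile values (each 0..3) into a single byte.
            static int pack(int q0, int q1, int q2, int q3) {
                return q0 | (q1 << 2) | (q2 << 4) | (q3 << 6);
            }

            // Read sub-tile q (0..3) back out of the blue channel.
            static int collisionAt(int blueChannel, int q) {
                return (blueChannel >> (q * 2)) & 0b11;
            }

            public static void main(String[] args) {
                int blue = pack(EMPTY, BLOCKING, JUMP_DOWN, EMPTY);
                System.out.println(collisionAt(blue, 1)); // prints 3 (BLOCKING)
            }
        }

    In principle the same trick could carry a small "interactive item" code per tile (grass, rock, ...) so that cutting or lifting becomes a tile lookup instead of a spawned entity, which is the direction the question is heading.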

    Read the article

  • Our winners - and some BBQ for everyone

    - by Steve Tunstall
    Congrats to our two winners for the first two comments on my last entry. Steve from Australia and John Lemon. Steve won since he was the first person over the International Date Line to see the post I made so late after a workday on Friday. So not only does he get to live in a country with the 2nd most beautiful women in the world, but now he gets some cool Oracle Swag, too. (Yes, I live on the beach in southern California, so you can guess where 1st place is for that other contest…Now if Steve happens to live in Manly, we may actually have a tie going…) OK, ok, for everyone else, you can be winners, too. How you ask? I will make you the envy of every guy and gal in your neighborhood or campsite. What follows is the way to smoke the best ribs you or anyone you know have ever tasted. Follow my instructions and give it a try. People at your party/cookout/campsite will tell you that they’re the best ribs they’ve ever had, and I will let you take all the credit. Yes, I fully realize this post is going to be longer than any post I’ve done yet. But let’s get serious here. Smoking meat is much more important, agreed? J In all honesty, this is a repeat of another blog I did, so I’m just copying and pasting. Step 1. Get some ribs. I actually really like Costco’s pack. They have both St. Louis and Baby Back. (They are the same ribs, but cut in half down the sides. St. Louis style is the ‘front’ of the ribs closest to the stomach, and ‘Baby back’ is the part of the ribs where is connects to the backbone). I like them both, so here you see I got one pack of each. About 4 racks to a pack. So these two packs for $25 each will feed about 16-20 of my guests. So around 3 bucks a person is a pretty good deal for the best ribs you’ll ever have. Step 2. Prep the ribs the night before you’re going to smoke. You need to trim them to fit your smoker racks, and also take off the membrane and add your rub. Then cover and set in fridge overnight. Here’s how to take off the membrane, which will not break down with heat and smoke like the rest of the meat, so must be removed. Use a butter knife to work in a ways between the membrane and the white bone. Just enough to make room for your finger. Try really hard not to poke through the membrane, you want to keep it whole. See how my gloved fingers can now start to lift up and pull off the membrane? This is what you are trying to do. It’s awesome when the whole thing can come off at once. This one is going great, maybe the best one I’ve ever done. Sometime, it falls apart and doesn't come off in one nice piece. I hate when that happens. Now, add your rub and pat it down once into the meat with your other hand. My rub is not secret. I got it from my mentor, a BBQ competitive chef who is currently ranked #1 in California and #3 in the nation on the BBQ circuit. He does full-day classes in southern California if anyone is interested in taking his class. Go to www.slapyodaddybbq.com to check him out. I tweaked his run recipe a tad and made my own. It’s one part Lawry’s, one part sugar, one part Montreal Steak Seasoning, one part garlic powder, one-half part red chili powder, one-half part paprika, and then 1/20th part cayenne. You can adjust that last ingredient, or leave it out. Real cheap stuff you can get at Costco. This lets you make enough rub to last about a year or two. Don’t make it all at once, make a shaker’s worth and use it up before you make more. Place it all in a bowl, mix well, and then add to a shaker like you see here. 
You can get a shaker with medium sized holes on it at any restaurant supply store or Smart & Final. The kind you see at pizza places for their red pepper flakes works best. Now cover and place in fridge overnight. Step 3. The next day. Ok, I’m ready to go. Get your stuff together. You will need your smoker, some good foil, a can of peach nectar, a bottle of Agave syrup, and a package of brown sugar. You will need this stuff later. I also use a clean spray bottle, and apple juice. Step 4. Make your fire, or turn on your electric smoker. In this example I’m using my portable charcoal smoker. I got this for only $40. I then modified it to be useful. Once modified, these guys actually work very well. Trust me, your food DOES NOT KNOW how expensive your smoker is. Someone who tells you that you need to spend a bunch of money on a smoker is an idiot. I also have an electric smoker that stays in my backyard. It’s cleaner and larger so I can smoke more food. But this little $40 one works great for going camping. Here is what my fire-bowl looks like. I leave a space in the middle open, and place cold charcoal and wood chucks in a circle going outwards. This makes it so when I dump the hot coals down the middle, they will slowly burn outwards, hitting different wood chucks at different times, allowing me to go 4-5 hours without having to even touch my fire. For ribs, I use apple and pecan wood. Pecan works for anything. Apple or any fruit wood is excellent for pork. So now I make my hot charcoal with a chimney only about half-full. I found a great use for that side-burner on my grill that I never use. It makes a fantastic chimney starter. You never use fluids of any kind, nor ever use that stupid charcoal that has lighter fluid built into it. Never, ever, ever. Step 5. Smoke. Add your ribs in the racks and stack them up in your smoker. I have a digital thermometer on a probe that I use to keep track of the temp in the smoker. I just lay the probe on the top rack and shut the lid. This cheap guy is a little harder to maintain the right temperature of around 225 F, so I do have to keep my eye on it more than my electric one or a more expensive charcoal one with the cool gadgets that regulate your temp for you. Every hour, spray apple juice all over your ribs using that spray bottle. After about 3 hours, you should have a very good crust (called the Bark) on your ribs. Once you have the Bark where you want it, carefully remove your ribs and place them in a tray. We are now ready for a very important part to make the flavor. Get a large piece of foil and place one rib section on it. Splash some of the peach nectar on it, and then a drizzle of the Agave syrup. Then, use your gloved hand to pack on some brown sugar. Do this on BOTH sides, and then completely wrap it up TIGHT in the foil. Do this for each rib section, and then place all the wrapped sections back into the smoker for another 4 to 6 hours. This is where the meat will get tender and flavorful. The first three hours is only to make the smoke bark. You don’t need smoke anymore, since the ribs are wrapped, you only need to keep the heat around 225 for the next 4-6 hours. Obviously you don’t spray anymore. Just time and slow heat. Be patient. It’s actually really hard to overdo it. You can let them go longer, and all that will happen is they will get even MORE tender!!! If you take them out too soon, they will be tough. How do you know? Take out one package (use long tongs) and open it up. 
If you grab a bone with your tongs and it just falls apart and breaks away from the rest of the meat, you are done!!! Enjoy!!! Step 6. Eat. It pulls apart like this when it’s done. By the way, smoking tri-tip is way easier. Just rub it with the same rub, and put in your smoker for about 2.5 hours at 250 F. That’s it. Low-maintenance. It comes out like this, with a fantastic smoke ring and amazing flavor. Thanks, and I will put up another good tip, about the ZFSSA, around the end of November. Steve 

    Read the article

  • Swapping from NHibernate to Entity Framework – Sanity Check

    - by DesigningCode
    Now I’m not an expert in either of these techs.  I have a nice framework for unit of work / repository built with NHibernate.  Works pretty well.  I use FluentNhibernate to do the mappings.  Works well.  Takes very little code to get going with a DB back OO model. So why swap? Linq.  In Entity Framework you get much better linq support.  Visibility. I have no idea what's really happening with NHibernate….its a cloud of mystery most of the time.  You have to read all the blogs, mailing lists, etc to know what's going on. So, EF 4.0 looks like pretty good….  it has reasonably good support for mapping POCOs.  Wrapping UnitOfWork and Repository around it seems ok. Only thing I haven’t liked too much is having to explicitly load lazy loading entities. So…. am I sane?  is EF the way to go?  or is NHibernate going to suddenly release the next generation of coolness?  Is there any other major gotchas of using EF over NHibernate?

    Read the article

  • "Don't do programming after a few years of starting career" Is this a fair advice?

    - by Muhammad Yasir
    I am a moderately experienced developer, with around 5 years of experience in PHP, somewhat less in Java and C#, and I'm trying to learn some Python nowadays. Since the start of my career as a programmer, I have been told every now and then by fellow programmers that programming is only suitable for the first few years of a career (most of them put it at 5 years) and that one must change direction after that. The reasons they give are the headaches and pressures associated with programming. They also say that programmers are less social and don't usually like to give time to their families, etc., and especially: "Oh come on, you cannot do programming your entire life!" I am somewhat confused here and need to ask others about it. If I leave programming, then what do I do?! I guess teaching may be a good option in this case, but it would probably require first earning a PhD. It may also be noteworthy that in my country (Pakistan) the life of a programmer is not very good, in that they normally must put in 2-3 extra hours at the office to accomplish urgent programming tasks. I have a sense that the situation is somewhat similar in other countries and regions as well. So the question is: do you think it is fair advice to change careers from programming to something else after spending 5 years in this field? Thanks for sharing your thoughts!

    Read the article

  • Reasons NOT to open source not-for-profit code?

    - by naught101
    I am a big fan of open source code. I think I understand most of the advantages of going open source. I'm a science student researcher, and I have to work with quite a surprising amount of software and code that is not open source (either it's proprietary, or it's simply not public). I can't really see a good reason for this, and I can see that the code, and the people using it, would definitely benefit from being more public (if nothing else, in science it's vital that your results can be replicated if necessary, and that's much harder if others don't have access to your code). Before I go out and start proselytising, I want to know: are there any good arguments for not releasing not-for-profit code publicly, and under an OSI-compliant license? (I realise there are a few similar questions on SE, but most focus on situations where the code is primarily used for making money, and I couldn't find much that was relevant in the answers.) Clarification: by "not-for-profit" I am counting downstream profit motives, such as parent-company brand recognition and investor profit expectations, as profit motives too. In other words, the question relates only to software for which there is NO profit motive tied to the software whatsoever.

    Read the article

  • Hallmarks of a Professional PHP Programmer

    - by Scotty C.
    I'm a 19-year-old student who really REALLY enjoys programming, and I'm hoping to glean from your years of experience here. At present, I'm studying PHP every chance I get, and have been for about 3 years, although I've never taken any formal classes. I'd love to some day be a programmer full time and make a good career of it. My question to you is this: what do you consider to be the hallmarks or traits of a professional programmer? Mainly in the field of PHP, but other, more general qualifications are also more than welcome, as I think PHP is more of a hobbyist language and may not be the language of choice in the eyes of potential employers. Please correct me if I'm wrong. Above all, I don't want to waste time on something that isn't worthwhile. I'm currently feeling pretty confident in my knowledge of PHP as a language, and I know that I could build just about anything I need and have it "work", but I feel sorely lacking in design concepts and code structure. I can even write object-oriented code, but in my personal opinion, that isn't worth a hill of beans if it isn't organized well. For this reason, I bought Matt Zandstra's book "PHP Objects, Patterns, and Practice" and have been reading a little every day. Anyway, I'm starting to digress a little here, so back to the original question: what advice would you give to an aspiring programmer who wants to make an impact in this field? Also, on a side note, I've been working on a project with a friend of mine that would give a fairly good idea of where I'm at code-wise. I'm going to give a link; I don't want anyone to feel as though I'm pushing or spamming here, so don't click it if you don't want to. But if you are interested in giving some feedback there as well, you can see the code on GitHub. I'm known as The Craw there. https://github.com/PureChat/PureChat--Beta-/tree/

    Read the article

  • Complex Release Vehicle Management

    - by Sharon
    I'm looking into improving our build and release system. We are a .Net/Windows shop, and I don't see any really good tools for Windows for generating the files that are to be dropped in patch or hotfix. We are currently using TFSBuild 2010 with Windows Workflows for our continuous integration builds as well as our daily full build which includes an Installshield package for deployment. What is a good way of generating the list of files to be included in a "patch" style release, where one does not redistribute all the files, but only those necessary to accomplish the necessary changes? Are there any open source tools that work well for this, or do teams usually roll their own? I have considered using Beyond Compare but I would prefer something open source. The file "patch" creation must be 100% automatable. Which release vehicles really ought to be patch style? And which releases should replace all system files related to our application? Assume we have a very large amount of resources necessary to maintain. Is there any established material that is trusted within the industry for strategies about this? I realize it is different for "enterprise" rather than with typical websites. I am looking more for "enterprise" strategies due to our product distribution style. tl;dr Looking for info on how to ship more reliable packages?
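
    One common roll-your-own approach to the "which files go in the patch" question is to diff two build outputs by content hash and emit only what changed. A rough, generic sketch follows (Java purely for illustration; the directory layout and manifest format are invented, and a real pipeline would also have to handle deletions, signing and versioned dependencies):

        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.security.MessageDigest;
        import java.util.HexFormat;
        import java.util.stream.Stream;

        public class PatchManifest {
            public static void main(String[] args) throws Exception {
                Path previous = Paths.get(args[0]); // output of the last shipped build
                Path current  = Paths.get(args[1]); // output of the new build
                try (Stream<Path> files = Files.walk(current)) {
                    files.filter(Files::isRegularFile).forEach(newFile -> {
                        Path relative = current.relativize(newFile);
                        Path oldFile  = previous.resolve(relative);
                        try {
                            // New or changed files are the only ones that belong in the patch.
                            if (!Files.exists(oldFile) || !hash(oldFile).equals(hash(newFile))) {
                                System.out.println(relative);
                            }
                        } catch (Exception e) {
                            throw new RuntimeException(e);
                        }
                    });
                }
            }

            // SHA-256 of a file's contents, as a hex string.
            private static String hash(Path file) throws Exception {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                try (InputStream in = Files.newInputStream(file)) {
                    byte[] buffer = new byte[8192];
                    for (int read = in.read(buffer); read > 0; read = in.read(buffer)) {
                        md.update(buffer, 0, read);
                    }
                }
                return HexFormat.of().formatHex(md.digest());
            }
        }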

    Read the article

  • I cannot solve the "Install these packages without verification" problem

    - by Yonatan Orlev
    I Googled and Googled and I just cannot find a solution to this problem:

        sudo apt-get install <whatever>

    gives me:

        WARNING: The following packages cannot be authenticated!
        Install these packages without verification [y/N]?

    I cannot find a decent solution. The closest I got was to run:

        sudo apt-get install debian-keyring debian-archive-keyring

    But then, even though, against my better judgment, I agreed to install the packages without verification, I get (I replaced http with XXXX because of forum limitations):

        Install these packages without verification [y/N]? y
        Err XXXX://il.archive.ubuntu.com gutsy/universe debian-archive-keyring 2007.02.19-0.1 404 Not Found
        Err XXXX://il.archive.ubuntu.com gutsy/universe debian-keyring 2005.05.28 404 Not Found
        Failed to fetch XXXX://il.archive.ubuntu.com/ubuntu/pool/universe/d/debian-archive-keyring/debian-archive-keyring_2007.02.19-0.1_all.deb 404 Not Found
        Failed to fetch XXXX://il.archive.ubuntu.com/ubuntu/pool/universe/d/debian-keyring/debian-keyring_2005.05.28_all.deb 404 Not Found
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Trying to run apt-get update also does not help: I get tons of "404 Not Found" errors. Can someone please direct me to a good solution to this problem? I cannot understand why this issue is not better documented. There must be a simple solution that allows me to update my list of sources or whatever.

    Read the article

  • SAP or Navision? Career Path

    - by codebased
    This could be tricky to ask; I wasn't sure whether to ask it here, but I thought I'd give it a try. I've been in the software industry since 2002 and I'm now at a senior level, where I normally code, lead and define the architecture; providing technical solutions to management is one of the assets I've earned during my service. Now it is time to define the road map for the future, $$$. I am not in favor of project management roles. I've been thinking of going down the ERP route, and my current company gives me the option to go for Navision / Microsoft Dynamics. They are currently on 4.0, but they are planning to move to 2009 and also to build a plug-in of their own. The option looks good because Microsoft is trying to capture the market for Dynamics products; however, they have had less success in Australia. The other option is SAP, where a person can make $200K a year, and I doubt the same kind of financial growth is available for a Microsoft geek. What is your opinion on Navision versus SAP? If I try to move completely to SAP it could be a bit challenging, as the market will consider me a fresher, but the return is quite good. In the case of Microsoft, I think technology changes so fast that there is less chance to grow within the same experience; in other words, if a new framework comes along in .NET, then the market looks for the person who knows that new framework, not just .NET. In the case of SAP, the base remains the same and the chances of earning more from the market are better. What would you do if you were me? On Stack Overflow, Navision questions number 20+ whereas SAP has 200+... :-)

    Read the article

  • Example of DOD design (on a generic Zombie game)

    - by Jeffrey
    I can't seem to find a nice explanation of Data Oriented Design for a generic zombie game (it's just an example, but a pretty common one). Could you give an example of Data Oriented Design applied to creating a generic zombie class? Is the following good? Zombie list class:

        class ZombieList {
            GLuint vbo;                       // generic zombie vertex model
            std::vector<color> colors;        // objects' default colors
            std::vector<texture> textures;    // objects' textures
            std::vector<vector3D> positions;  // objects' positions
        public:
            unsigned int create();            // returns an object id
            void move(unsigned int objId, vector3D offset);
            void rotate(unsigned int objId, float angle);
            void setColor(unsigned int objId, color c);
            void setPosition(unsigned int objId, vector3D pos);
            void setTexture(unsigned int objId, unsigned int textureId);
            ...
            void update(Player*);             // move towards player, attack if near
        };

    Example:

        Player p;
        ZombieList zl;
        unsigned int first = zl.create();
        zl.setPosition(first, vector3D(50, 50));
        zl.setTexture(first, texture("zombie1.png"));
        ...
        while (running) { // main loop
            ...
            zl.update(&p);
            zl.draw(); // draw every zombie
        }

    Or would creating a generic World container, which holds every action from bite(zombieId, playerId) to moveTo(playerId, vector) to createPlayer() to shoot(playerId, vector) to face(radians)/face(vector), and which contains:

        std::vector<zombie> ...
        std::vector<player> ...
        std::vector<mapchunk> ...
        std::vector<vbobufferid> player_run_animation;
        ...

    be a good example? What's the proper way to organize a game with DOD?

    Read the article

  • Need Help With Conflicting Customer Support Goals?

    - by Tom Floodeen
    It seems that at every OPS review, Customer Support Executives are being asked to improve the customer KPIs while also improving gross margin. This is a tough road for even experienced leaders. You need to reduce your agents' research time while increasing their answer accuracy. You want to spend less time training them while growing the number of products and systems being used. You have to deal with increasing service volumes, but at the same time you need to focus on creating appropriate service insight. After all, to be a great support center you not only have to be good at answering questions, you also need to be good at preventing them. Five Key Benefits of Knowledge Management in Customer Service will help you start down the path of meeting these, and other, objectives. With Oracle Knowledge Products, fully integrated with Oracle's CRM solutions, you can handle increased service demand while driving your costs down. And you can do both while positively impacting the satisfaction and loyalty of your customers. Take advantage of Oracle not only for a great integrated tool suite, but also for the vision to drive you down the path of success.

    Read the article

  • Little mysterious RowMatch

    - by kishore.kondepudi(at)oracle.com
    Incidentally, this was the first piece of code I ever wrote in ADF. The requirement: we have tax rates which are read from a table, and there can be different types of tax rates, called certificates or exceptions, based on the rate_type column in the tax rates table. The simplest design I chose was to create an EO on the tax rates table and create two VOs, called CertificateVO and ExceptionVO, based on the same EO. So far so good. I wrote all the business logic in the EO and completed the model project. CertificateVO has the query select * from tax_rates TaxRateEO where rate_type='CERTIFICATE', and ExceptionVO is built similarly. The UI is pretty simple: it has two tabs, Certificates and Exceptions, and each table has a button to create a tax rate. The Certificates tab is driven by CertificateVO and the Exceptions tab is driven by ExceptionVO. CertificateVO has the default value of rate_type set to 'CERTIFICATE' and ExceptionVO has the default value of rate_type set to 'EXCEPTION', to default the values for new records. So far so good. But on running the UI I noticed a strange thing: when I create a new row under Certificates, I see the same row under Exceptions too, and vice versa. That is, whatever row I create in one VO also appears in the second one, although it shouldn't. I couldn't understand the reason for this behavior even though an explicit where clause is present. Digging through the documentation, I found that ADF doesn't apply the where clause to new rows; instead it applies something called a RowMatch to them. A RowMatch, in simple terms, is a where condition applied to the VO rows at runtime. Since both VOs are based on the same EO, they share the same entity cache, and the filter that decides whether a new row shows up in a VO at runtime is actually the RowMatch, not the where clause defined in the VO. The default RowMatch is empty, and as a result any new row appears in both VOs, since it comes from the same entity cache. The solution is to use polymorphic view objects, which can do the row filtering based on configuration, or to override the getRowMatch() method in the VOImpl and return a custom where filter instead of the default RowMatch. For example:

        @Override
        public RowMatch getRowMatch() {
            return new RowMatch("rate_type='CERTIFICATE'");
        }

    and similarly for ExceptionVO. With a proper RowMatch in place, new rows will route themselves to the appropriate VO. PS: This behavior (the same row being pushed to both VOs from the entity cache) is also called view link consistency. Try it out!

    Read the article

  • How can player actions be "judged morally" in a measurable way?

    - by Sebastien Diot
    While measuring a player's "skill" and "effort" is usually easy, adding some "less objective" statistics can give the player supplementary goals, especially in a MUD/RPG context. What I mean is that apart from counting how many orcs were killed and gems collected, it would be interesting to have something along the lines of the traditional Good/Evil, Lawful/Chaotic rankings of paper-based RPGs, to add "dimension" to the game. But computers cannot differentiate good from evil effectively (nor can humans in many cases), and if you have a set of "laws" precise enough that you can tell exactly when the player breaks them, then it generally makes more sense to prevent the player from doing that action in the first place. One example could be the creation/destruction axis (if players are allowed to create or build things at all), possibly in the form of the general effect of the player's actions on "ecology". So what else is there left that can be effectively measured and would provide a sense of "morals" for the player? The more axes I have to measure, the more goals the player can have, and therefore the longer the game can last. This also gives players more ways of "differentiating" themselves among hordes of other players of the same "class" and similar "kit".
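
    As a purely illustrative sketch of the "measurable axes" idea (the axes, deltas and API below are invented, not taken from any engine): tag each mechanically detectable action with small deltas on one or more axes and keep a running total per player.

        import java.util.EnumMap;
        import java.util.Map;

        // Each mechanically detectable action reports small deltas on one or more axes;
        // the player's "moral" profile is just the running total per axis.
        enum Axis { CREATION_DESTRUCTION, ECOLOGY, LAWFULNESS }

        class MoralProfile {
            private final Map<Axis, Integer> totals = new EnumMap<>(Axis.class);

            void record(Axis axis, int delta) {
                totals.merge(axis, delta, Integer::sum);
            }

            int score(Axis axis) {
                return totals.getOrDefault(axis, 0);
            }
        }

        class Example {
            public static void main(String[] args) {
                MoralProfile profile = new MoralProfile();
                profile.record(Axis.ECOLOGY, -1);              // chopped down a tree
                profile.record(Axis.ECOLOGY, +2);              // planted two saplings
                profile.record(Axis.CREATION_DESTRUCTION, +1); // built something
                System.out.println(profile.score(Axis.ECOLOGY)); // prints 1
            }
        }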

    Read the article

  • Can I make a Compiz animation rule for Wine menus?

    - by satuon
    Wine menus appear and disappear with the "Glide 2" effect. This looks good for dialogs but not so good for menus. Can I make a custom rule to apply fade in/out instead? I include the animation section from my exported Compiz settings (default values are skipped): [animation] s0_open_matches = ((type=Dialog | ModalDialog | Normal | Unknown) | name=sun-awt-X11-XFramePeer | name=sun-awt-X11-XDialogPeer) & !(role=toolTipTip | role=qtooltip_label) & !(type=Normal & override_redirect=1) & !(name=gnome-screensaver);(type=Menu | PopupMenu | DropdownMenu | Dialog | ModalDialog | Normal);(type=Tooltip | Notification | Utility) & !(name=compiz) & !(title=notify-osd); s0_close_effects = animation:Glide 2;animation:Fade;animation:None; s0_close_matches = ((type=Dialog | ModalDialog | Normal | Unknown) | name=sun-awt-X11-XFramePeer | name=sun-awt-X11-XDialogPeer) & !(role=toolTipTip | role=qtooltip_label) & !(type=Normal & override_redirect=1) & !(name=gnome-screensaver);(type=Menu | PopupMenu | DropdownMenu | Dialog | ModalDialog | Normal);(type=Tooltip | Notification | Utility) & !(name=compiz) & !(title=notify-osd);

    Read the article
