Search Results

Search found 9193 results on 368 pages for 'batcher sort'.

Page 191/368

  • Will Unity skills be interchangeable?

    - by Starkers
    I'm currently learning Unity and working my way through a video game maths primer textbook. My goal is to create a racing game for WebGL (using Three.js and maybe Physic.js). I'm well aware that the Unity editor shields you from a lot of what's going on, and from a lot of the grunt work attached to developing even a simple video game, but if I power through a bunch of Unity tutorials, will many of the skills I learn translate over to other frameworks/engines? I'm pretty proficient at level design with WebGL, and I'm a good 3D modeller. My weaknesses are definitely AI and physics. While I am rapidly shoring up my maths, and while physics is undeniably interesting, there are only so many hours in the day and there is a wealth of engines out there to take care of this sort of thing. AI appeals to me a lot more, and is a lot more necessary: AI changes drastically from game to game and is tweaked heavily during development, whereas physics stays much more constant. Will learning AI concepts in Unity allow me to transfer this knowledge pretty much anywhere? Or will I just be paddling up Unity creek with these skills?

    Read the article

  • Unable to start Ubuntu 12.04. The system is running in low-graphics

    - by kaleidoscpicsoul
    I am a newbie to Ubuntu. I installed Ubuntu 12.04 using a USB stick and it ran fine for a few weeks until this error popped up. According to one of my friends, the best way to deal with it was to reinstall Ubuntu. Coming from a non-Unix background I thought the same, and after the second install it happened again, only this time much quicker, within 3 days. I don't want to reinstall Ubuntu every time this happens. I am a complete newbie to Linux, which means that I am really bad at using the terminal. I know there are other people who fixed this issue using this very same forum, but unfortunately the answers provided are too complex for me to understand. Please let me know how to do this. Things I want to let you know: I would need help step by step, if that is alright with you. After I get the error I get the options and I click "exit to console login". I then get the following messages on a black screen (which I think is a command-line sort of thing):
        * Stopping save kernel messages [OK]
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName [OK]
        * Starting web server apache2
    and a blinking cursor. So basically it looks like a dead end to my non-Unixy eye. One final thing: before this issue happened I had tried configuring Python with Apache2. For that I had uninstalled and installed the LAMP server several times and edited the configuration files too. I don't know if this really is a concern, but I thought I should mention it. I have a USB stick with Ubuntu 12.04 on it, so I can reinstall at any time (but I want to know what the issue is rather than running away from it). I migrated to Ubuntu from Windows and I have no plans to go back. I think that's everything from my side. Please let me know if there are any questions.

    Read the article

  • Files in /home deleted

    - by long-time user....2006
    In the most specific, unemotional terms: I reinstalled the OS, using 11.10 (a month after release, to skip the initial issues that usually crop up). I configured the system to my specifications (just ways of organizing config files, etc.). I logged out, then logged back in after an hour or so... to find my home directory obliterated, with just a few skeleton files left. I thought "oh well, try again" (this has happened before with an install, for reasons I've never been able to pinpoint, usually around install time with some sort of update, but it has never been a major recurring issue). The same thing happened. Thinking something was awry, I reinstalled again (another 20 minutes, meh), set up the system, and arranged my home directory a bit differently, in case I had trodden on something I shouldn't have. I logged out, came back --- the same thing. Most of the directories I added were deleted (e.g. .xmonad, which links to xmonad.hs in my portable config directory). tl;dr: every change I make in my home directory gets deleted. The emotional part: UNACCEPTABLE. I need to configure my system the way I want, not get punched in the face every time I make a change. I'll willingly fill in details as needed; this was just a start to see if anyone can help, as I've found no trace of this issue in a search.

    Read the article

  • Marking Discussions as Answered

    As a contributor to a number of projects on CodePlex, I really like the fact that the discussions feature exists, but I also need ways to sort the discussion threads so I can make sure no one is getting forgotten about. It seems like a lot of you agreed, as the feature request "Provide feature to allow Coordinators to mark Discussions threads as 'Answered'" is our number 2 voted feature right now, with 178 votes. Today we rolled out the first iteration of “answer” support in discussions. In this first iteration we wanted to keep it simple and lightweight. The original poster of the thread, along with project owners, developers or editors, can mark any post in the thread as an answer. You can have any number of answers marked in a thread, and it's very quick to mark or unmark a post as an answer. We deliberately keep the answers in their originally posted order so that you can see them in context with the discussion thread. When viewing discussions the default view is still to see everything, but you can easily filter by “Unanswered”. You can even save that as a bookmark, so someone interested in the project can quickly jump to the unanswered discussion threads and help out. As I mentioned, we kept this first pass of the answering feature as simple and as lightweight as possible so that we can get some feedback on it. Head on over to the issue tracking this feature if you have any thoughts once you have used it for a bit, or feel free to respond in the comments. I already have a couple of things I think we want to do, such as a refresh of the look and feel of discussions in general, making it easier to navigate to posts that are marked as answers, and surfacing posts of yours that were marked as answers on your profile page - but if you have ideas then please let us know.

    Read the article

  • Site experiencing low traffic volume between 8AM and 4PM BST

    - by BizNuge
    There may be no definitive answer to this question, but I thought peer review of the problem might stimulate some ideas on the topic. We have a boutique sales site that is experiencing low volumes of traffic (both UK and international) between 8AM and 4PM BST. This seems strange, since our target audience for the site is UK based, and this would seem to be when people are awake and online. We are in contact with another boutique site in the same sector that doesn't experience this issue, so it seems kind of odd. Later in the day we get traffic from the UK as well as a fair amount of international traffic, so I'm at a loss to figure this one out. The site is fairly well optimised, including:
    - sitemap.xml
    - proper caching policies across the board
    - Google Merchant
    - Dublin Core microdata
    - HTML5
    - pretty URLs
    - meta and content are reviewed as an ongoing concern
    - decent sitelinks for direct queries through Google on the site name
    - a decent amount of inbound links
    - FB, Twitter, Google +1
    - Google Maps listing [verified]
    The site has been selling for ~4 months and is getting ~250 users per day. So I'm not entirely sure how to explain the mid-day dip in our figures... Any ideas at all would be useful. Cheers all!

    Read the article

  • Ubuntu 14.04 LTS 64-bit install alongside Windows 7

    - by user289222
    I've tried installing Ubuntu 14.04 LTS alongside my Windows 7 OS, following the exact procedure given by the Ubuntu website and various other tutorials. I've tried with a LiveCD and with a USB stick, but I always run into the same problem. When I'm at the screen where I'm allowed to select how I want to install Ubuntu ("alongside", "erase Windows 7", "something else"), the first option says "Install Ubuntu inside Windows 7" instead of "Install Ubuntu alongside Windows 7". Pretty much every tutorial I've seen says that the option should read "alongside". I click "inside" anyway, and Ubuntu doesn't install at all. Instead, my computer just reboots and goes back to the "Try Ubuntu or Install Now" screen. This happens regardless of whether I use a LiveCD or a USB stick. I've also tried manually resizing my partitions using "something else". Oddly, I see 4 sda partitions:
        /dev/sda    type    size          used
        /dev/sda1           1 MB          unknown    Windows 7 (loader)
        /dev/sda2   ntfs    208 MB        unknown    Recovery Windows Environment (loader)
        /dev/sda3   ntfs    ~752000 MB    unknown    Recovery Windows Environment (loader)
        /dev/sda4           ~18000 MB     unknown
    I try resizing the largest partition, but some sort of internal error occurs and it doesn't let me resize my partitions. Any ideas on what's going on and how to solve it?

    Read the article

  • REST API wrapper - class design for 'lite' object responses

    - by sasfrog
    I am writing a class library to serve as a managed .NET wrapper over a REST API. I'm very new to OOP, and this task is an ideal opportunity for me to learn some OOP concepts in a real-life situation that makes sense to me. Some of the key resources/objects that the API returns come back with different levels of detail depending on whether the request is for a single instance, a list, or part of a "search all resources" response. This is obviously a good design for the REST API itself, so that full objects aren't returned (thus increasing the size of the response and therefore the time taken to respond) unless they're needed. So, to be clear:
    .../car/1234.json returns the full Car object for 1234, with all its properties like colour, make, model, year, engine_size, etc. Let's call this full.
    .../cars.json returns a list of Car objects, but only with a subset of the properties returned by .../car/1234.json. Let's call this lite.
    .../search.json returns, among other things, a list of Car objects, but with minimal properties (only ID, make and model). Let's call this lite-lite.
    I want to know the pros and cons of each of the following possible designs, and whether there is a better design that I haven't covered:
    1. Create a Car class that models the lite-lite properties, and then have each of the more detailed responses inherit and extend this class.
    2. Create separate CarFull, CarLite and CarLiteLite classes corresponding to each of the responses.
    3. Create a single Car class that contains (nullable?) properties for the full response, and create constructors for each of the responses which populate it to the extent possible (and maybe include a property that returns the response type from which the instance was created).
    I expect, among other things, there will be use cases for consumers of the wrapper where they will want to iterate through lists of Cars regardless of which response type they were created from, such that the three response types can contribute to the same list. Happy to be pointed to good resources on this sort of thing, and/or even told the name of the concept I'm describing so I can better target my research.
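
    As a rough illustration of option 1, here is a minimal sketch in Python (used only for brevity; the same shape maps directly onto C# classes). The property names come from the question; everything else is invented:

        # Sketch of option 1 (inheritance). Each richer response type extends the
        # minimal one, so all three can live in the same list.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class CarLiteLite:                  # .../search.json entries: ID, make, model only
            id: int
            make: str
            model: str

        @dataclass
        class CarLite(CarLiteLite):         # .../cars.json entries: a few more properties
            year: Optional[int] = None
            colour: Optional[str] = None

        @dataclass
        class CarFull(CarLite):             # .../car/{id}.json: the whole thing
            engine_size: Optional[float] = None

        # Consumers can mix the three response types in one list and treat them uniformly:
        cars = [CarLiteLite(1, "Ford", "Focus"),
                CarFull(2, "Mazda", "MX-5", 1999, "red", 1.8)]
        for car in cars:
            print(car.id, car.make, car.model)

    The trade-off, compared with option 3, is that a consumer has to downcast to reach the richer properties, whereas a single Car with nullable properties pushes null checks onto every caller instead.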

    Read the article

  • Combinatorial explosion of interfaces: How many is too many?

    - by mga
    I'm a relative newcomer to OOP, and I'm having a bit of trouble creating good designs when it comes to interfaces. Consider a class A with N public methods. There are a number of other classes, B, C, ..., each of which interacts with A in a different way, that is, accesses some subset (<= N) of A's methods. The maximum degree of encapsulation is achieved by implementing an interface of A for each other class, i.e. AInterfaceForB, AInterfaceForC, etc. However, if B, C, ... etc. also interact with A and with each other, then there will be a combinatorial explosion of interfaces (a maximum of n(n-1), to be precise), and the benefit of encapsulation becomes outweighed by the code bloat. What is the best practice in this scenario? Is the whole idea of restricting access to a class's public functions in different ways for different classes just silly altogether? One could imagine a language that explicitly allows for this sort of encapsulation (e.g. instead of declaring a function public, one could specify exactly which classes it is visible to); since this is not a feature of C++, maybe it's misguided to try to do it through the back door with interfaces?
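
    One common compromise (not the only one) is to group A's methods into a handful of role interfaces rather than one interface per consumer, so the interface count grows with the number of roles, not with n(n-1). A minimal sketch, using Python abstract base classes purely for brevity; in C++ these would be small pure-virtual interface classes that A inherits from, and all the names here are invented:

        from abc import ABC, abstractmethod

        class Readable(ABC):
            @abstractmethod
            def read(self) -> str: ...

        class Configurable(ABC):
            @abstractmethod
            def set_option(self, key: str, value: str) -> None: ...

        class A(Readable, Configurable):        # A exposes two roles, not one interface per client
            def __init__(self) -> None:
                self._options = {}

            def read(self) -> str:
                return "data"

            def set_option(self, key: str, value: str) -> None:
                self._options[key] = value

        def b_uses(source: Readable) -> None:        # B is handed only the Readable role
            print(source.read())

        def c_uses(target: Configurable) -> None:    # C is handed only the Configurable role
            target.set_option("verbose", "1")

        a = A()
        b_uses(a)
        c_uses(a)

    The number of interfaces then scales with the number of distinct roles A plays, which in most designs is far smaller than the number of classes that talk to it.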

    Read the article

  • Fastest approach to 3D animation

    - by HappyFerret
    I'm currently tasked with designing a small HTML5 game. Having done everything by myself so far (3D models, codebase, game design, etc.), I'm now at a point where I'm running out of time. I have less than a day to animate and bind everything together. However, that's exactly my problem. I was under the naive impression that everything would be easier if I went with pre-rendered 3D models, but I didn't consider the most difficult part: animation. After having spent over an hour trying to figure out messiahStudio, I figured it's time to ask for outside help. Is there any easier solution for 3D animation than rigging? What I'm basically looking for is some sort of tool that allows me to simply grab and move/deform selected polygons. It doesn't have to be as life-like and accurate as rigging, just efficient enough. Were the circumstances any different, I might just learn how to rig, but that's sorely out of scope right now. PS: The models were created in Sculptris but are fairly low-poly.

    Read the article

  • Can't remove software - Installed in NULL

    - by ChosSimbaOne
    I've installed software on our administration machine. The problem is that I cannot start the software or uninstall it, as it is not in any directory on the machine. I tried to install it at /pack/CST/... but it is not there, and a locate on CST or cst returns nothing. The software is installed from a DVD, not from a repository. I've tried rebooting the machine, as I thought it might have something to do with the software being loaded in some sort of tmpfs, but that didn't help. I've looked through the entire /etc to check for any relation to the software, but without success. I'm out of ideas as to what can cause this problem; anyone got any ideas? EDIT: I downloaded the iso, which I mounted and installed with:
        sudo mount -o loop /path/to/iso.iso /path/to/mountpoint
        sudo /path/to/mountpoint/install.sh
    and ran the install GUI via an X session. I chose to install the software in /pack/CST/... but when it exited it said that the software had been installed to /tmp/... There was nothing in /tmp, so I decided to reboot the machine and did a full find to see if there was anything left of the software, and removed what looked like it could be related. It had placed a script in all the /etc/rs* folders, which I removed with:
        sudo update-rc.d -f scriptname -r
    I rebooted the machine again, just to be sure. When I run the installer again, it tells me that the software is installed in NULL and that I have to remove it before installing. /pack/ is a mountpoint for /q/system/pack. What I expected was that the software would be installed in /pack/CST, but it seems to be locked somewhere in the system, and I am unable to locate where.

    Read the article

  • I have Ubuntu only and need to install Windows

    - by Terzuz
    I had Windows 8 and installed Ubuntu as a new OS. Now, sadly, I want to go back to Windows. I have a Windows Vista .iso but I can't boot from it. When I extract the .iso and put the contents on my USB stick so it can boot, then restart and press F9 for my Boot Device Options, only my hard drive and CD-ROM are listed; my "Generic Flash Drive" is not. But when the Windows Vista .iso is not on it, it does show up in the list. How can I make a partition of some sort (please provide instructions, since I am new to all this), so that I can use the Windows Vista installer and install Windows Vista? I would like to dual-boot if possible. Info: I have the HP 2000 laptop (mine was removed from the Best Buy website, so the closest laptop to its specifications and design is the link at the bottom). I am running Ubuntu 12.10. I have 4 GB of RAM and 220 GB left on my hard drive, and I have a USB flash drive which works sometimes; other times it fails. Note - I tried using GParted in Ubuntu, but I had a problem where the main drive with 220 GB free was locked. I am not sure what to do and cannot find the correct forum. http://www.bestbuy.com/site/HP+-+Pavilion+15.6"+Laptop+-+4GB+Memory+-+320GB+Hard+Drive+-+Pewter/5043836.p?id=1218608951204&skuId=5043836

    Read the article

  • Approach to Authenticate Clients to TCP Server

    - by dab
    I'm writing a server/client application where clients will connect to the server. What I want to do is make sure that the client connecting to the server is actually using my protocol, and that I can "trust" the data being sent from the client to the server. What I thought about doing is creating a sort of hash on the client's machine that follows a particular algorithm. What I did in a previous version was to take their IP address, the client version, and a few other attributes of the client and send it as a calculated hash to the server, which then took their IP and the version of the protocol the client claimed to be using, and calculated that number to see if they matched. This works OK until you get clients that connect from within a router environment, where their internal IP is different from their external IP. My fix for this was to pass the client's internal IP, used to calculate this hash, along with the authentication protocol. My fear is that this approach is not secure enough, since I'm passing the data used to create the "auth hash". Here's an example of what I'm talking about:
        Client IP: 192.168.1.10, Version: 2.4.5.2
        hash = 2*4*5*1 * (1+9+2) * (1+6+8) * (1) * (1+0)
        Client connects to server
        Client sends: auth hash, ip, version
        Server calculates that info, and accepts or denies the hash.
    Before I go and come up with another algorithm to prove that a client can provide data to the server (or use this existing algorithm), I was wondering if there are any existing, proven, and secure systems out there for generating a hash that both sides can generate with general knowledge. The server won't know about the client until the very first connection is established. The protocol's intent is to manage a network of clients who will be contributing data to the server periodically. New clients will be added simply by connecting the client to the server and "registering" with the server. So a client connects to the server for the first time and registers their info (MAC address or some other kind of unique computer identifier); then, when they connect again, the server will recognize that client as a previous one and associate them with their data in the database.
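
    One well-established pattern that fits the "register on first connection, trusted afterwards" flow is an HMAC-based challenge-response: the server issues each client a random secret at registration, and later connections prove knowledge of that secret without ever sending it. A minimal sketch in Python; the function names and framing are invented, and a real protocol would authenticate the whole message, not just the version string:

        import hmac, hashlib, os

        def register(client_id, secrets):
            """Server side: issue and store a random secret for a new client."""
            secret = os.urandom(32)
            secrets[client_id] = secret
            return secret                      # delivered to the client once, ideally over a secure channel

        def make_challenge():
            """Server side: a fresh random nonce per connection prevents replay."""
            return os.urandom(16)

        def client_response(secret, challenge, version):
            """Client side: HMAC over the challenge plus any data the server should trust."""
            return hmac.new(secret, challenge + version.encode(), hashlib.sha256).digest()

        def server_verify(secrets, client_id, challenge, version, response):
            expected = hmac.new(secrets[client_id], challenge + version.encode(),
                                hashlib.sha256).digest()
            return hmac.compare_digest(expected, response)

        # Example round trip
        secrets = {}
        secret = register("client-42", secrets)
        challenge = make_challenge()
        resp = client_response(secret, challenge, "2.4.5.2")
        assert server_verify(secrets, "client-42", challenge, "2.4.5.2", resp)

    Because the secret never crosses the wire after registration, intercepting one session does not let an attacker forge the next one, which is the weakness of any scheme built only from values the client also transmits.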

    Read the article

  • New functionality in TFS Build Manager – Managing Triggers and Build Resources

    - by Jakob Ehn
    Yesterday we pushed out a new release (August 2012) of the Community TFS Build Extension, including a new version of the Community TFS Build Manager (1.0.4.6). The two big new features in the Build Manager in this release are:
    Set Triggers - It is now possible to select one or more build definitions and update the triggers for them in one simple operation. You'll note that we have started collapsing the context menu a bit; the list of commands is getting long! When selecting the Trigger command, you'll see a dialog where the options should be self-explanatory. The only thing missing here is the Scheduled trigger option; you'll have to do that using Team Explorer for now.
    Manage Build Resources - The other feature is that it is now possible to view the build controllers and agents in your current collection and also perform some actions against them. The new functionality is available by selecting the Build Resources item in the drop-down menu. Selecting this, you'll see a (sort of) hierarchical view of the build controllers and their agents. In this view you can quickly see all the resources and their status. You can also view the build directory of each build agent and the tags that are associated with them. On the action menu, you can enable and disable both agents and controllers (several at a time), and you can also select to remove them. By selecting Manage, you'll be presented with the standard Manage Controller dialog from Visual Studio where you can set the rest of the properties. Hopefully we'll be able to implement most of the existing functionality so that we can remove that menu option. Our plan is to add more functionality to this view, such as adding new agents/controllers, restarting build service hosts, and maybe viewing diagnostic information such as disk space and error logs. Hope you'll find the new functionality useful. Remember to log any bugs and feature requests on the CodePlex site. Happy building!

    Read the article

  • Artificial Intelligence ... how to make an object roam freely/avoid other objects, and model consciousness? [on hold]

    - by help bonafide pigeons
    Say we have a simple free-roam battle scene in which a player runs around freely and engages in battle with other enemies/objects, as shown below: the dragon/dinosaur (or whatever that thing I drew appears to be) will, by some measure, try to avoid attacks, so it is modeled to appear to have a conscious desire to avoid pain. My question is: since this is very complex, with many possible strategies, algorithms, etc. for solving it, what is the basic idea behind how this would be accomplished, in any form? We can assume the enemy in the picture is not just going to aimlessly hop around and avoid, but will be modeled to behave as if it were really exploring/fighting. For the best example I can give, witness the behavior of the enemies in Final Fantasy 12 in this video: http://www.youtube.com/watch?v=mO0TkmhiQ6w How do the pros approach this, or how would anyone attempt to solve/implement it? PS: I have tried several times to give an image the "illusion" that it has a consciousness, but aside from emulating a real animal's consciousness in full, I fall short and get choppy moving images that follow predictable patterns, make error-prone movements, and hit the worst imaginable scenario of a battle engagement.

    Read the article

  • How often do you review fundamentals?

    - by mlnyc
    So I've been out of school for a year and a half now. In school, of course, we covered all the fundamentals: OS, databases, programming languages (i.e. syntax, binding rules, exception handling, recursion, etc.) and fundamental algorithms; the rest were more in-depth topics on things like NLP, data mining, etc. Now, a year ago, if you had asked me to write a quicksort, reverse a singly-linked list, or analyze the time complexity of some 'naive' algorithm vs. its dynamic programming counterpart, I would have been able to give you a decent and hopefully satisfying answer. But if you had asked me more real-world questions I might have been stumped (things like how you would handle logging for an application, security differences between GET and POST, differences between SQL Server and Oracle SQL, anything I list on my resume as currently working with [jQuery questions, ColdFusion questions, ...], etc.). Now, I feel things are the opposite. I haven't written my own sort since graduating, and I don't really have to worry much about theoretical things that do not naturally fall into problems I am trying to solve. For example, I might give you a great SQL solution using an analytical function that would otherwise have stumped me, or write a cool web application using Angular or something, but ask me to write an algorithm for insertAfter(Element* elem) and I might not be able to do it in a reasonable time frame. I guess my question to the experienced programmers here is: how do you balance the need to learn and experiment with new technologies (fun!), work on personal projects (also fun!), work on and solve real-world problems in a timeboxed environment (so I might reach out to a library that does what I want rather than re-invent the wheel, so that I can focus on the problem I am trying to solve) (work, basically), and refresh old theoretical material which is still valid for interviews and such (can be a drag)? Do you review older material (such as famous algorithms, dynamic programming, Big-O analysis, locking implementations) regularly, or just when you need it? How much time do you dedicate to each in your 'deliberate practice', and do you have a to-do list of topics that you want to work on?
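
    For reference, the kind of warm-up exercise mentioned above (insertAfter on a singly-linked list) is only a couple of lines once recalled; a sketch in Python rather than C++, with illustrative field names:

        class Node:
            def __init__(self, value, next=None):
                self.value = value
                self.next = next

        def insert_after(elem, new_node):
            """Splice new_node in directly after elem."""
            new_node.next = elem.next
            elem.next = new_node

        # head -> 1 -> 3  becomes  head -> 1 -> 2 -> 3
        head = Node(1, Node(3))
        insert_after(head, Node(2))
        assert [head.value, head.next.value, head.next.next.value] == [1, 2, 3]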

    Read the article

  • Why choose an established CMS as opposed to building one from scratch?

    - by SkonJeet
    A lot of my research over the next few weeks will be into different CMSs. I've already had a brief look at EPiServer and Umbraco. While reading about these systems, I can't help but think that providing content management features is achievable without learning the details and structure of many of these (rather large) CMS platforms. I have, in the past, been given projects whereby my role as a developer had to be kept separate from that of an editor (makes sense), i.e. it was my task to develop the design and functionality of the site and my clients' job to update the content. I've achieved this by also implementing a sort of 'portal' with a couple of pages that would accept text input and picture uploads etc. (basically, whatever content they wanted), record this new content to the database, and then by design the code-behind would read all this from the database into the relevant controls (repeaters, for example). For me, this has been an effective enough way for my clients to manage the content of the solutions I deploy. I know that I am wrong - and that established CMSs are preferable to ones built from the ground up - but other than the matter of cost, why?

    Read the article

  • Reliance on Outlook (been a looong time, I know)

    - by AndyScott
    Do you feel that your development group is too reliant on Outlook? Have you reached a point where you have to search your email for pertinent information when asked? What are you using? I realized things had gotten out of hand over a weekend a couple of weeks ago. I was at my in-laws' house (in the country: no PC/laptop, no internet connection) and I got an email on my phone that I needed to reply to, but I couldn't send it without deleting items from my inbox/sent items/etc. Now mind you, I have rules set up to move stuff into folders, and files more than a month old are automatically moved to the PST, but I generally don't manually move items to a PST until I have had a chance to 'work' the item. Please don't bother mocking my process, it's just the way I work. That being said, it was a frustrating process of 'I need all this information, what can I afford to lose?'. I work on an international project (think lots of customers), and conversations in 9 or 10 different directions about 10-20 different things are not abnormal for a given day. I have found myself looking data up in Outlook because that's where it is. I think I have reached the point where I don't feel that Outlook is up to the task of organizing the data that it contains. When you have that many emails (200 or so a day), information seems to get lost at times, and I find that Outlook's search capabilities are lacking. Additionally, I find that any sort of organizational 'system' for sorting emails that can cover multiple topics is a lost cause. But at the same time, the old process of taking the information from emails and moving it into another 'notes' type of program has proved too time consuming. Does anyone out there have a better type of system? (Comments about the capacity of my brain and its ability to recall information are not needed.)

    Read the article

  • Allowing client to select data to return via REST interface

    - by CMP
    I have a REST service that is essentially a proxy to a variety of other services. So if I call GET /users/{id}, it will get their user profile, as well as order history, contact info, etc. - all from various services - and aggregate them into one nice object. My problem is that each call to a different service has the potential to add time to the original request, so we would rather not get ALL the data ALL of the time if a particular client does not care about all of the pieces. A solution I have arrived at is to do something like this:
        GET /users/{id}?includeOrders=true&includeX=true&includeY=true...
    That works, and it allows me to fetch only what I need, but it is cumbersome. We have added enough different data sources that there are too many parameters for that style to be useful. I could do something similar with a single integer and a bitmask or something, but that only makes it harder to read, and it does not feel very RESTful. I could break it down into multiple calls, so clients would need to call /users/{id}/orders and /users/{id}/profile separately, but that sort of defeats the purpose of an aggregating proxy, whose purpose is to make clients' jobs easier. Are there any good patterns that can help me return just enough data for each client, without making it too difficult for them to filter and select what they want?
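
    One widely used pattern here is a single comma-separated include (or fields) parameter - the "sparse fieldsets" idea from JSON:API, and the same spirit as GraphQL field selection - so the parameter list stays flat no matter how many data sources are added. A minimal sketch in Python; the endpoint shape and fetcher names are invented for illustration, and any web framework's query parsing would do:

        # Stand-ins for the real downstream service calls.
        FETCHERS = {
            "profile": lambda uid: {"name": "Alice"},
            "orders":  lambda uid: [{"id": 7, "total": 42}],
            "contact": lambda uid: {"email": "a@example.com"},
        }

        def get_user(user_id, include="profile"):
            """GET /users/{id}?include=profile,orders -> only the requested sections."""
            requested = [part for part in include.split(",") if part in FETCHERS]
            result = {"id": user_id}
            for part in requested:
                result[part] = FETCHERS[part](user_id)   # each backend is called only when asked for
            return result

        print(get_user(5, include="profile,orders"))

    Adding a new data source then means adding one entry to the lookup, not another boolean query parameter.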

    Read the article

  • Determining whether two fast moving objects should be submitted for a collision check

    - by dreta
    I have a basic 2D physics engine running. It's pretty much a particle engine - it just uses basic shapes like AABBs and circles, so no rotation is possible. I have CCD implemented that can give an accurate TOI for two fast-moving objects, and everything is working smoothly. My issue now is that I can't figure out how to determine whether two fast-moving objects should even be checked against each other in the first place. I'm using a quadtree for spatial partitioning, and for each fast-moving object I check it against objects in each cell that it passes. This works fine for determining collision with static geometry, but it means that any other fast-moving object that could collide with it, but isn't in any of the cells that are checked, is never considered. The only solution I can think of is to either make the cells large enough and cross my fingers, or to implement some sort of brute-force algorithm. Is there a proper way of dealing with this? Maybe somebody has solved this issue in an efficient manner, or maybe there's a better way of partitioning space that accounts for this?
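
    One standard approach is to run the broadphase on each object's swept AABB (the bounds of its start and end positions merged), so that two fast movers whose motion envelopes overlap get paired for the accurate TOI test even if neither currently sits in the other's cells; in a quadtree this means inserting the swept box instead of the static one. A rough sketch in Python, with the pairing done brute-force just to show the test (AABBs are (min_x, min_y, max_x, max_y)):

        def swept_aabb(aabb, vx, vy, dt):
            """Expand an AABB to cover everywhere the object can be during the step."""
            min_x, min_y, max_x, max_y = aabb
            dx, dy = vx * dt, vy * dt
            return (min(min_x, min_x + dx), min(min_y, min_y + dy),
                    max(max_x, max_x + dx), max(max_y, max_y + dy))

        def overlaps(a, b):
            return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

        def broadphase_pairs(bodies, dt):
            """Pair up movers whose swept bounds overlap; hand those pairs to the exact TOI test."""
            swept = [swept_aabb(b["aabb"], b["vx"], b["vy"], dt) for b in bodies]
            pairs = []
            for i in range(len(bodies)):
                for j in range(i + 1, len(bodies)):
                    if overlaps(swept[i], swept[j]):
                        pairs.append((i, j))
            return pairs

        bodies = [
            {"aabb": (0, 0, 1, 1),   "vx": 10,  "vy": 0},
            {"aabb": (12, 0, 13, 1), "vx": -10, "vy": 0},
        ]
        print(broadphase_pairs(bodies, dt=1.0))   # [(0, 1)] - their motion envelopes cross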

    Read the article

  • How to handle the fear of future licensing issues of third-party products in software development?

    - by Ian Pugsley
    The company I work for recently purchased some third-party libraries from a very well-known, established vendor. There is some fear among management that our license to use the software could somehow be revoked. The example I'm hearing is of something like a patent issue; i.e. the company we purchased the libraries from could be sued and legally lose the ability to distribute and provide the libraries. The big fear is that we get some sort of notice that we have to cease usage of the libraries entirely, with only a small time period in which to do so. As a result of this fear, our ability to use these libraries (which the company has spent money on...) is being limited, at the cost of many hours' worth of development time. Specifically, we're having to develop many of the features that the library already incorporates. Should we be limiting ourselves in this way? Is it possible for the perpetual license granted to us by the third party to be revoked in the case of something like a patent issue, and are there any examples of something like this happening? Most importantly, if this is something to be legitimately concerned about, how do people go about taking advantage of third-party software while preparing for the possibility of losing that capability entirely? P.S. - I understand that this will venture into legal territory, and that none of the answers provided can be construed as legal advice in any fashion.

    Read the article

  • Merging two sites into one, how to redirect from the domain that's going away?

    - by bikeboy389
    I haven't been able to find any existing questions that cover my exact issue, so here goes: My client wants her two sites (domain1.com and domain2.com) rolled into a single, new site under domain1.com. Once the site is ready on domain1.com, DNS for domain2.com would be pointed at the same server as domain1.com. I know how to do an htaccess rewrite rule that would make all domain2.com traffic map to a specific single page or directory within domain1.com. But that's not what the client wants. What she wants is for a bunch of specific pages on domain2.com to map to specific new pages on domain1.com. For example:
        domain2.com/index.php?pageid=58 GOES TO domain1.com/2011/04/somearticle
        domain2.com/index.php?pageid=92 GOES TO domain1.com/2011/03/differentname
        etc.
    I could put a bunch of 301 redirects in the htaccess on domain1.com, which would work fine. The problem is, the client doesn't want/need specific redirects for ALL the domain2.com pages, and if I just do 301 redirects, anybody who comes looking for a domain2.com page that I haven't built a specific redirect for will get a 404 error. So I need to use 301 redirects for some traffic, and a rewrite rule for any traffic that's not covered in the 301 redirects. How do I do sort of a blending of a rewrite rule and 301 redirects, all in the htaccess file for domain1.com? Is this possible? Is it as simple as putting the 301 redirects in the htaccess file first, then doing the rewrite rule? I'm guessing not.
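
    Roughly yes, the ordering idea works, with one wrinkle: because the old URLs differ only by query string, the specific mappings need mod_rewrite conditions (a plain mod_alias Redirect cannot match query strings). A hedged sketch of the kind of rules involved, assuming mod_rewrite is enabled and using the example URLs from the question; specific rules sit above the catch-all because rules are evaluated top to bottom:

        RewriteEngine On

        # Specific legacy pages: match the old query string, 301 to the new URL.
        # The trailing "?" drops the old query string from the target.
        RewriteCond %{HTTP_HOST} ^(www\.)?domain2\.com$ [NC]
        RewriteCond %{QUERY_STRING} (^|&)pageid=58(&|$)
        RewriteRule ^index\.php$ http://domain1.com/2011/04/somearticle? [R=301,L]

        RewriteCond %{HTTP_HOST} ^(www\.)?domain2\.com$ [NC]
        RewriteCond %{QUERY_STRING} (^|&)pageid=92(&|$)
        RewriteRule ^index\.php$ http://domain1.com/2011/03/differentname? [R=301,L]

        # Everything else still arriving via domain2.com: catch-all to the new home page.
        RewriteCond %{HTTP_HOST} ^(www\.)?domain2\.com$ [NC]
        RewriteRule ^ http://domain1.com/ [R=301,L]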

    Read the article

  • Bad previous code. To fix or not to fix?

    - by Viniyo Shouta
    As a freelance programmer I am often asked to edit part of an application's source code in order to add functionality, fix bugs, etc. While I'm on my adventurous journey through the source to do what I'm asked correctly, I run into code like:
        World* User::GetWorld()
        {
            map<DWORD, World*>::iterator it = mapWld.find( m_userWorldId );
            if( it != mapWld.end() )
                return it->second;
            return NULL;
        }

        if( pUser->GetWorld()->GetId() == 250 )
    If I investigate further, I end up finding that userWorldId, the DWORD class member of User, can hold a value that is not found in the map mapWld, which will lead to a casualty, also known as a crash! The obviously valid way to do it is:
        World* pWorld = pUser->GetWorld();
        if( pWorld && pWorld->GetId() == 250 ) //...
    Sometimes when it's something 'small' I end up sort of 'fixing' it. But sometimes, when I'm in a 500-thousand-line source and this kind of code is everywhere, there is not much I can do. The question is whether it's politically correct to fix some of these things. Think about it: you are not paid to fix it. Perhaps you think it's right, but it may have been done that way for some reason, and you should not be messing with it. You do not have authorization, you do not own the source, and none of the copyrights belong to you. You do have authorization from the owners to edit such issues, but you're in a hurry, you have many other projects to do, it's the end of the month, you must pay the bills. Sincerely, I think of it as seeing an animal die from a disease in front of you: you have the cure in your hands, but you do nothing. What is the best thing to do in this scenario?

    Read the article

  • How to do integrated testing?

    - by Enthusiastic Programmer
    So I have been reading a lot of books about testing, but all the books I've read have the same flaw: they will tell you the definitions of testing, but I have not found a single book that will guide you through integration testing (or pretty much anything higher than unit testing). Is integration testing that elusive, or am I reading the wrong books? I'm a hands-on person, so I would appreciate it if someone could help me with a simple program. Let's say you need to make some sort of calculation program that calculates something (it doesn't matter what) and exports it to a *.txt file. Let's assume we use the Model View Controller design principle, with one class for the actual calculating, which you'll use in the model, and one for writing the text file. So: View, Controller, Model = CalculationClass, FileClass. For unit testing, you'd test the CalculationClass; I'd personally focus most of my unit tests there and spend less time unit testing the View/Controller/FileClass. I personally wouldn't see the use of unit testing those unless you want a really robust program. Integration testing: now this is where I run into a wall. What would I have to test to call it an integration test? I could stub the view and feed the controller data, which it would pass on to the model and so forth, and then check what the view gets back in the end. But... couldn't I just run the (in this case small) program and test it manually? Would that be considered an integration test too, or does it have to be automated? Also, can I check multiple items to see if they are correct? I cannot seem to find any book that offers a hands-on approach to methods of integration testing.
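
    To make the question concrete, here is a hedged sketch of what an automated integration test for that example could look like, in Python rather than C#; the class names mirror the scenario above and the implementations are stand-ins. The point is that the real CalculationClass and FileClass are wired together with no stubs, and the assertion is on the file that actually gets written:

        import os, tempfile, unittest

        class CalculationClass:
            def add(self, a, b):
                return a + b

        class FileClass:
            def write(self, path, text):
                with open(path, "w") as f:
                    f.write(text)

        class Controller:
            """Glue layer: takes input, calculates, exports to a .txt file."""
            def __init__(self, calc, writer):
                self.calc, self.writer = calc, writer

            def export_sum(self, a, b, path):
                self.writer.write(path, str(self.calc.add(a, b)))

        class ExportIntegrationTest(unittest.TestCase):
            def test_sum_is_written_to_file(self):
                path = os.path.join(tempfile.mkdtemp(), "result.txt")
                Controller(CalculationClass(), FileClass()).export_sum(2, 3, path)
                with open(path) as f:
                    self.assertEqual(f.read(), "5")   # both classes worked together end to end

        if __name__ == "__main__":
            unittest.main()

    Running the program by hand also exercises the same integration, but the automated version can run on every build and check several outputs at once, which is what usually distinguishes it in practice.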

    Read the article

  • How to maintain a demo version of an application?

    - by O.O
    I need to be able to demo our production application to prospective clients. The way I have it set up today is simple: the demo application is an exact duplicate of the production system, except that the data in the database is obfuscated to protect our current clients' data. This works great because it doesn't require any application changes. The boss dropped a potential BOMBSHELL today and said that the demo system needs to contain a special link that ONLY shows up in the demo. He went on to explain that in the future there may be much bigger differences between the demo and production apps (e.g. an entire area of functionality). What do I do now? Some things I have thought about doing:
    - Maintain a separate branch in Subversion specific to the demo system
    - Create an installation package that has the changes for the demo, then revert and build a production installation package
    - Modularize the application (no idea how)
    - Say: "Screw you! I will not do it!" (LOL)
    - Use some sort of conditional logic in the app to determine whether it is a demo or a production app, e.g. if the URL contains 'demo' then show, else hide. If you haven't guessed by now, this is a web application.
    Anyway, I have no experience with this scenario, so I don't know which one is better or whether none of these are any good. Anyone have an answer, a strategy, something!?
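
    As a small illustration of the last option, a configuration-driven flag (rather than sniffing the URL) keeps a single code base and installer while making the demo-only pieces explicit. This is just a sketch; the setting name and helper are invented:

        import os

        # Set per deployment, e.g. APP_MODE=demo on the demo server only.
        IS_DEMO = os.environ.get("APP_MODE", "production") == "demo"

        def navigation_links():
            links = [("Home", "/"), ("Reports", "/reports")]
            if IS_DEMO:
                links.append(("Demo walkthrough", "/demo-tour"))   # the demo-only link
            return links

    The trade-off is that flag checks tend to spread through the code over time, which is roughly the point at which splitting a whole area of functionality into its own module (the third option) starts to look better.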

    Read the article

  • How can I better manage far-reaching changes in my code?

    - by neuviemeporte
    In my work (writing scientific software in C++), the people who use the software to get their work done often ask me to add some functionality or change the way things are done and organized right now. Most of the time this is just a matter of adding a new class or a function and applying some glue to do the job, but from time to time a seemingly simple change turns out to have far-reaching consequences that require me to redesign a substantial amount of existing code, which takes a lot of time and effort and is difficult to evaluate in terms of the time required. I don't think it has as much to do with the inter-dependence of modules as with changing requirements (admittedly, on a smaller scale). To provide an example, I was thinking about the recently added multi-user functionality in Android. I don't know whether they planned to introduce it from the very beginning, but assuming they didn't, it seems hard to predict all the areas that would be affected by the change (app preferences, themes, the need to store account info somehow, etc...?), even though the concept seems simple enough and the code is well organized. How do you deal with such situations? Do you just jump into the code and then sort out the cruft later, like I do? Or do you do a detailed analysis beforehand of what will be affected, what needs to be updated and how, and what has to be rewritten? If so, what tools (if any) and approaches do you use?

    Read the article
