Search Results

Search found 1180 results on 48 pages for 'nick donovan'.

Page 8/48

  • SEO best practices for a web feature that uses geolocation by IP Address

    - by Nick
    I'm working on a feature that tailors content based on a geolocation lookup by IP address, in order to provide information relevant to the general area the visitor is from. I'm concerned that the content will be interpreted as focused solely on the search engine spider's geographic origin when it is indexed. Are there SEO best practices for features that use geolocation by IP address? I appreciate any specific tips or words of wisdom.

    Read the article

  • Instruction vs data cache usage

    - by Nick Rosencrantz
    Say I've got a system where instructions and data have separate cache memories (a "Harvard architecture"). Which cache, instruction or data, is used most often? I mean "most often" in terms of time, not amount of data, since the data cache might be used "more" in terms of amount of data while the instruction cache might be used "more often", especially depending on the program. Are there different answers a) in general and b) for a specific program?

    Read the article

  • How does Minecraft render its sunset and sky?

    - by Nick
    In Minecraft, the sunset looks really beautiful and I've always wanted to know how they do it. Do they use several skyboxes rendered over each other? That is, one for the sky (which can turn dark and light depending on the time of day), one for the sun and moon, and one for the orange horizon effect? I was hoping someone could enlighten me... I wish I could enter a wireframe mode or something like that, but as far as I know that is not possible.
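    This is not Minecraft's actual code, but a minimal sketch of the kind of layered, time-of-day blending the question guesses at: interpolating a sky colour between night and day values based on a normalized world time. All names and colour values here are made up for illustration.

      // Hypothetical sketch: blend a sky colour between night and day based on a
      // normalized time of day (0.0 = midnight, 0.5 = noon). Not Minecraft's code.
      public final class SkyTint {

          // Simple linear interpolation between two channel values.
          private static float lerp(float a, float b, float t) {
              return a + (b - a) * t;
          }

          // Returns an RGB sky colour for the given time of day in [0, 1).
          public static float[] skyColor(float timeOfDay) {
              float[] night = {0.02f, 0.02f, 0.08f}; // near-black blue
              float[] day   = {0.45f, 0.65f, 1.00f}; // light blue
              // Brightness follows a smooth curve that peaks at noon and bottoms out at midnight.
              float brightness = (float) (Math.sin(timeOfDay * 2.0 * Math.PI - Math.PI / 2.0) * 0.5 + 0.5);
              return new float[] {
                  lerp(night[0], day[0], brightness),
                  lerp(night[1], day[1], brightness),
                  lerp(night[2], day[2], brightness),
              };
          }
      }

    The orange horizon band could be handled the same way, as a separate layer whose intensity peaks around sunrise and sunset, which matches the "several layers rendered over each other" guess above.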

    Read the article

  • Software license restricting commercial usage like CC BY-NC-SA

    - by Nick
    I want to distribute my software under a license like the Creative Commons Attribution - NonCommercial - ShareAlike license, i.e.: redistribution of source code and binaries is free; modified versions of the program have to be distributed under the same license; attribution to the original project has to be supplied; and any kind of commercial usage is restricted. However, CC does not recommend using its licenses for software. Is there a software license of this kind I could apply? A public license would be better, but as far as I know US law says that only an EULA can restrict usage of a received copy.

    Read the article

  • MVP Nomination

    - by Nick Harrison
    I have debated whether to post this or not. My initial thought was not to blog about it, thinking that I would spare myself the embarrassment if I wasn't awarded. A little paranoid, I know, but these are paranoid times. After more reflection, I realize that there is no embarrassment in not winning. There is great honor in being nominated. Instead of worrying about not winning in the end, I need to enjoy the moment and enjoy being nominated. This is an extreme honor. I would love to hear your stories of being nominated. What was the process like? What was your reaction? Hopefully, I will have some good news to share here soon. If not, being nominated truly is an honor.

    Read the article

  • What Can We Learn About Software Security by Going to the Gym

    - by Nick Harrison
    There was a recent rash of car break-ins at the gym. Not an epidemic by any stretch, probably 4 or 5, but still... My gym used to allow you to hang your keys from a peg board at the front desk. This way you could come to the gym dressed to work out, lock your valuables in your car, and not have anything to worry about. Ignorance is bliss. The problem was that anyone who wanted to could go pick up your car keys, click the unlock button, and find your car. Once there, they could rummage through your stuff and then walk back in and finish their workout as if nothing had happened. The people doing this were a little smarter than the average thief and would swipe some but not all of your cash, leaving everything else in place. Most thieves would steal the whole car and be busted more quickly. The victims were unaware that anything had happened for several days. Fortunately, once the victims realized what had happened, the gym was still able to pull security tapes and find out who was misbehaving. All of the bad guys were busted, and everyone can now breathe a sigh of relief. It is once again safe to go to the gym. Except there was still a fundamental problem. Putting your keys on a peg board by the front door is just asking for bad things to happen. One person got busted exploiting this security flaw. Others can still be exploiting it. In fact, others may well have been exploiting it and simply never got caught. How long would it take you to realize that $10 was missing from your wallet, if everything else was there? How would you even know when it went missing? Would you go to the front desk and even bother to ask them to review security tapes if you were only missing a small amount? Once highlighted, it is easy to see how commonly such a vulnerability may have been exploited. So the gym took the very reasonable precaution of removing the peg board. To me the most shocking part of this story is the resulting uproar from gym members losing the convenient key peg. How dare they remove the trusted peg board? How can I work out now that I have to carry my keys from machine to machine? How can I enjoy my workout with this added inconvenience? This all happened a couple of weeks ago, and some people are still complaining. In light of the recent high-profile hacking, there are a couple of parallels that can be drawn. Many web sites are riddled with vulnerabilities as crazy and as easily exploitable as leaving your car keys by the front door while you work out. No one ever considered thanking the people who were swiping these keys for pointing out the vulnerability. Without hesitation, they had their gym memberships revoked and are awaiting prosecution. The gym did recognize the vulnerability for what it was, and closed up that attack vector. What can we learn from this? Monitoring and logging will not prevent a crime, but they will allow us to identify that a crime took place and may help track down who did it. Once we find a security weakness, we need to eliminate it. We may never identify and eliminate all security weaknesses, but we cannot allow well-known vulnerabilities to persist in our systems. In our case, we are not likely to meet resistance from end users. We are more likely to meet resistance from stakeholders, product owners, and keepers of schedules and budgets. We may meet resistance from integration partners, coworkers, and third-party vendors. Regardless of the source, we will see resistance, but the weakness needs to be dealt with.
    There is no need to glorify a cracker for bringing to light a security weakness. Regardless of their claimed motives, they are not heroes. There is also no point in wasting time defending weaknesses once they are identified. Deal with the weakness and move on. It may be embarrassing to find security weaknesses in our systems, but it is even more embarrassing to continue ignoring them. Even if it is unpopular, we need to seek out security weaknesses and eliminate them when we find them. MITRE, in partnership with SANS (http://www.sans.org), has put together the Common Weakness Enumeration (http://cwe.mitre.org/), which lists out common weaknesses. The site navigation takes a little getting used to, but there is a treasure trove here. The detail page for SQL Injection, for example, clearly states how the weakness can be exploited, in case anyone doubts that it should be taken seriously, and more importantly how to mitigate the risk.
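    Since the post closes on the CWE entry for SQL Injection, here is a minimal sketch of the mitigation that entry recommends: a parameterized query via JDBC instead of string concatenation. The table and column names are invented for the example.

      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.sql.SQLException;

      public final class MemberLookup {

          // Vulnerable pattern (do not do this): user input concatenated straight into
          // the SQL text, e.g. "SELECT 1 FROM members WHERE email = '" + email + "'".
          // A crafted value can change the meaning of the statement.

          // Mitigated pattern: the SQL text is fixed and the input is bound as a
          // parameter, so the database never parses it as SQL.
          public static boolean emailExists(Connection conn, String email) throws SQLException {
              String sql = "SELECT 1 FROM members WHERE email = ?";
              try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                  stmt.setString(1, email); // bound as data, never as SQL
                  try (ResultSet rs = stmt.executeQuery()) {
                      return rs.next();
                  }
              }
          }
      }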

    Read the article

  • Converting ANTLR AST to Java bytecode using ASM

    - by Nick
    I am currently trying to write my own compiler, targeting the JVM. I have completed the parsing step using Java classes generated by ANTLR, and have an AST of the source code to work from (An ANTLR "CommonTree", specifically). I am using ASM to simplify the generating of the bytecode. Could anyone give a broad overview of how to convert this AST to bytecode? My current strategy is to explore down the tree, generating different code depending on the current node (using "Tree.getType()"). The problem is that I can only recognise tokens from my lexer this way, rather than more complex patterns from the parser. Is there something I am missing, or am I simply approaching this wrong? Thanks in advance :)
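    Here is a rough sketch of the tree-walking strategy described above, assuming a small arithmetic expression grammar. In ANTLR 3, one common way to get parser-level structure into the tree is the tree-rewrite syntax (the -> operator and imaginary tokens), so each parser rule produces a node type the walker can switch on, rather than raw lexer tokens. The token-type constants below stand in for the ones the generated parser would provide; the ANTLR and ASM calls are the standard ones.

      import org.antlr.runtime.tree.Tree;
      import org.objectweb.asm.MethodVisitor;
      import org.objectweb.asm.Opcodes;

      // Post-order walk over an ANTLR tree, emitting JVM bytecode with ASM.
      public final class ExprEmitter {

          // Stand-ins for the token-type constants the generated parser would provide.
          private static final int INT = 4, PLUS = 5, MINUS = 6;

          public static void emit(Tree node, MethodVisitor mv) {
              switch (node.getType()) {
                  case INT:
                      // Leaf node: push the integer literal onto the operand stack.
                      mv.visitLdcInsn(Integer.parseInt(node.getText()));
                      break;
                  case PLUS:
                      // Emit code for both children first, then the operator.
                      emit(node.getChild(0), mv);
                      emit(node.getChild(1), mv);
                      mv.visitInsn(Opcodes.IADD);
                      break;
                  case MINUS:
                      emit(node.getChild(0), mv);
                      emit(node.getChild(1), mv);
                      mv.visitInsn(Opcodes.ISUB);
                      break;
                  default:
                      throw new IllegalStateException("Unhandled node type: " + node.getType());
              }
          }
      }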

    Read the article

  • Universal Pen Drive Linux Will Not Burn ISO Ubuntu 13.10 To USB [duplicate]

    - by Nick
    This question already has an answer here: How to create a bootable USB stick? 4 answers Universal Pen Drive Linux will not let me burn the ISO to my USB. Whenever I attempt it, it says 'can not open file 'E:*where I put my downloads*\ubuntu-13.10-desktop-amd64.iso' as archive'. Any help please. I just want to move to Ubuntu and hopefully never have to use Windows again :D Please help me and walk me through this process.

    Read the article

  • Is there a way to install Ubuntu stripped down without desktop applications?

    - by Nick Berardi
    Just to start off, I know of Lubuntu, but it really doesn't meet what I am looking for. Basically what I am looking for is the standard desktop Ubuntu install, but without all the word processing, multimedia, and games installed. I have seen posts about how to get the desktop environment running on Ubuntu Server, but they seem complicated and never seem to equal the standard desktop install. So my question is, is there any way to tell the standard desktop install not to install all the applications? Or is there a distro available that leaves all the applications out and just has the standard desktop look and feel? What I really want this for is development purposes, to run on a VM to do Mono development.

    Read the article

  • What if you could work on anything you wanted?

    - by Nick Harrison
    What if you could work on anything you wanted? Redgate is doing an experiment of sorts this week, called Down Tools Week. The idea is that they stop working on their regular projects for a week and strike out on something that catches their attention and drives their passion. Evidently, in many cases these projects have turned out to be new features in their existing products that individuals were interested in; some were internal initiatives, and some were evidently off-the-wall new ideas. Today is show and tell, where they will share with each other what they have been working on. There may well be some interesting announcements coming out of this. The prospects are exciting. I understand that Google does something similar, allowing their employees a specified amount of time to work on projects of their own choosing. This has been the breeding ground for some of my favorite services. It is a shame that more companies do not follow such practices. Now I know that most companies cannot afford to shut down everything for a week, and sometimes you can't really explore an interesting idea in 8 hours a week or however much time Google allocates, but still it may be worthwhile. What would happen if your company gave you as an individual one week each quarter to work on a project of your own design and see what happens? I would be happy even if you still had to get approval before your week-long adventure. Personally, I think that this could be a very effective use of training budgets. Give me a week to research something on my own and you would be amazed at what I can find out. Maybe this should be the prerequisite before starting a new project. Stagger the team onboarding, but have everyone spend a week-long sabbatical studying BizTalk before starting a project that will hinge on BizTalk. The show and tell afterwards is a great way to keep everyone honest, or at least reassure management that everyone is honest. If your goal was to spend a week researching and exploring a new technology, and you had to do a show and tell afterwards to show off what you had learned, then everyone can learn a bit of what you just learned. Sounds like a promising win-win to me. Maybe it is a pipe dream, but what if.... What would you work on if given the opportunity to work on anything you wanted?

    Read the article

  • How to become a "faster" programmer?

    - by Nick Gotch
    My last job evaluation included just one weak point: timeliness. I'm already aware of some things I can do to improve this, but what I'm looking for is more. Does anyone have tips or advice on what they do to increase the speed of their output without sacrificing its quality? How do you estimate timelines and stick to them? What do you do to get more done in shorter time periods? Any feedback is greatly appreciated. Thanks!

    Read the article

  • How can a solo programmer become a good team player?

    - by Nick
    I've been programming (obsessively) since I was 12. I am fairly knowledgeable across the spectrum of languages out there, from assembly, to C++, to Javascript, to Haskell, Lisp, and Qi. But all of my projects have been by myself. I got my degree in chemical engineering, not CS or computer engineering, but for the first time this fall I'll be working on a large programming project with other people, and I have no clue how to prepare. I've been using Windows all of my life, but this project is going to be very unix-y, so I purchased a Mac recently in the hopes of familiarizing myself with the environment. I was fortunate to participate in a hackathon with some friends this past year -- both CS majors -- and excitingly enough, we won. But I realized as I worked with them that their workflow was very different from mine. They used Git for version control. I had never used it at the time, but I've since learned all that I can about it. They also used a lot of frameworks and libraries. I had to learn what Rails was pretty much overnight for the hackathon (on the other hand, they didn't know what lexical scoping or closures were). All of our code worked well, but they didn't understand mine, and I didn't understand theirs. I hear references to things that real programmers do on a daily basis -- unit testing, code reviews, but I only have the vaguest sense of what these are. I normally don't have many bugs in my little projects, so I have never needed a bug tracking system or tests for them. And the last thing is that it takes me a long time to understand other people's code. Variable naming conventions (that vary with each new language) are difficult (__mzkwpSomRidicAbbrev), and I find the loose coupling difficult. That's not to say I don't loosely couple things -- I think I'm quite good at it for my own work, but when I download something like the Linux kernel or the Chromium source code to look at it, I spend hours trying to figure out how all of these oddly named directories and files connect. It's a programming sin to reinvent the wheel, but I often find it's just quicker to write up the functionality myself than to spend hours dissecting some library. Obviously, people who do this for a living don't have these problems, and I'll need to get to that point myself. Question: What are some steps that I can take to begin "integrating" with everyone else? Thanks!

    Read the article

  • On Reflector Pricing

    - by Nick Harrison
    I have heard a lot of outrage over Red Gate's decision to charge for Reflector. In the interest of full disclosure, I am a fan of Red Gate. I have worked with them on several usability tests. They also sponsor Simple Talk, where I publish articles. They are a good company. I am also a BIG fan of Reflector. I have used it since Lutz originally released it. I have written my own add-ins. I have written code to host Reflector and use its object model in my own code. Reflector is a beautiful tool. The care that Lutz took to incorporate extensibility is amazing. I have never had difficulty convincing my fellow developers that it is a wonderful tool. Almost always, once anyone sees it in action, it becomes their favorite tool. This widespread adoption and usability has made it an icon and pivotal pillar in the DotNet community. Even folks with the attitude that if it did not come out of Redmond then it must not be any good still love it. It is ironic to hear everyone clamoring for it to be released as open source. Reflector was never open source; it was free, but you never were able to peruse the source code and contribute your own changes. You could not even use Reflector to view the source code. From the very beginning, it was never anyone's intention for just anyone to examine the source code and make their own contributions aside from the add-in model. Lutz chose to hand over the reins to Red Gate because he believed that they would be able to build on his original vision and keep the product viable and effective. He did not choose to make it open source, hoping that the community would be up to the challenge. The simplicity and elegance may well have been lost with the "design by committee" nature of open source. Despite being a wonderful and beloved tool, Reflector cannot be an easy tool to maintain. Maybe because it is so wonderful and beloved, it is even more difficult to maintain. At any rate, we have high expectations. Reflector must continue to be able to reasonably disassemble every language construct that the framework and core languages dream up. We want it to be fast, and we also want it to continue to be simple to use. No small order. Red Gate tried to keep the core product free. Sadly there was not enough interest in the Pro version to subsidize the rest of the expenses. $35 is a reasonable cost, more than reasonable. I have read the blog posts and forum posts complaining about the time associated with getting the expense approved. I have heard people complain about the cost being unreasonable if you are a developer from certain countries. Let's do the math. How much of a productivity boost is Reflector? How many hours do you think it saves you in a typical project? The next question is a little easier if you are a contractor or a consultant, but what is your hourly rate? If you are not a contractor, you can probably figure out an hourly rate. How long does it take to get a return on your investment? The value proposition is not a difficult one to make. I have read people clamoring that Red Gate sucks and is evil. They complain about broken promises and conflicts of interest. Relax! Red Gate is not evil. The world is not coming to an end. The sun will come up tomorrow. I am sure that Red Gate will come up with options for volume licensing or site licensing for companies that want to get a licensed copy for their entire team. Don't panic, and I am sure that many great improvements are on the horizon.
Switching the UI to WPF and including a tabbed interface opens up lots of possibilities.

    Read the article

  • Compiling vs using pre-built binaries performance?

    - by Nick Rosencrantz
    Will performance be better (quicker) if I manually compile the source for a software component on the actual machine it will be used on, compared to source that was compiled on another platform, perhaps for many different architectures? I got some good results compiling source that I downloaded, and I wonder whether this was due to compiling it instead of downloading a pre-compiled binary, which is often the case with software updates.

    Read the article

  • Unity Occlusion Portals: What and How?

    - by Nick Wiggill
    (Here I eat my words on Meta about posting Unity questions on Unity Answers... since that site is less responsive than this one.) Unity provides cell-based Occlusion Culling (via Umbra, I believe). However, a newer feature that it supports is Occlusion Portals. The question is, if BSP-based occlusion culling is already a feature of Unity, what do portals add, and how? PS. This question is not "What are portals?" -- I'm aware of the original Quake BSP-style portals -- which is partly why I find the explicit portal concept in Unity odd, since it uses BSP anyway.

    Read the article

  • Install AMDCONFIG on Ubuntu driver

    - by Nick Bailuc
    In 12.04 I used the official driver downloaded from amd.com, which came with amdconfig, but now in 14.04 the official driver is buggy, so I just use the Ubuntu official drivers, which work even better because they beefed up the original driver. The Ubuntu driver doesn't come with the terminal command amdconfig, which allowed me to tweak/overclock my graphics card. How can I install it without having to install the original AMD driver? Additional information: I only use the X.Org drivers because they are open source and therefore more stable than the proprietary fglrx driver, and I do not use programs like AMDOverdriveCtrl or atioverclock because they are not as stable and advanced as the terminal command.

    Read the article

  • Automatic Generalization

    - by Nick Harrison
    I have been interested in functional programming since college. I played around a little with LISP back then, but I have not had an opportunity since then. Now that F# ships standard with VS 2010, I figured now is my chance. So, I was reading up on it a little over the weekend when I came across a very interesting topic. F# includes a concept called "Automatic Generalization". As I understand it, the compiler will look at your method and analyze how you are using parameters. It will automatically switch to a generic parameter if it is possible based on your usage. Wow! I am looking forward to playing with this. I have long been an advocate of using the most generic types possible, especially when developing library classes. Use the highest-level base class that you can get away with. Use an interface instead of a specific implementation. I don't advocate passing object around, but you get the idea. Tools like ReSharper, FxCop, and most static code analysis tools provide guidance to help you identify when a more generalized type is possible, but this is the first time I have heard about the compiler taking matters into its own hands. I like the sound of this. We'll see if it is a good idea or not. What are your thoughts? Am I missing the mark on what Automatic Generalization does in F#? How would this work in C#? Do you see any problems with this?
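    For the "How would this work in C#?" question at the end, here is a sketch in Java of what the F# compiler is effectively doing for you. Nothing in the method body constrains the element type, so F# would infer a generic signature automatically; in Java or C# the type parameter has to be written by hand. The names are made up for the example.

      import java.util.List;

      public final class Picker {

          // What you might write first: works only for String.
          static String firstOrNullStrings(List<String> items) {
              return items.isEmpty() ? null : items.get(0);
          }

          // The generalized form. F#'s automatic generalization infers the
          // equivalent of this signature when nothing in the body pins the
          // element type down; here the type parameter must be declared explicitly.
          static <T> T firstOrNull(List<T> items) {
              return items.isEmpty() ? null : items.get(0);
          }
      }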

    Read the article

  • No GRUB Screen or recovery mode on Boot after 12.04 Upgrade

    - by Nick
    I tried the live boot CD and boot-repair, also loaded the Desktop install CD, and it looks like all partitions check out OK. However, when I try to boot Linux (the only bootable partition on the computer) I get a blank screen. Every so often the screen give me something akin to: Assuming write through cache Asking for cache data failed it appears to start booting, then hangs. Ctrl+Alt+Delete shuts down the machine The last message during boot is "STarting TiMidity++ ALSA midi emulation... [OK]" I used boot-repair to generate a boot info report. One thing looks odd to me- it reports a missing core.img on /dev/sda1. Here is the full info: Boot Info Script 0.61.full + Boot-Repair extra info [Boot-Info August 2nd 2012] ============================= Boot Info Summary: =============================== = Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos1)/boot/grub on this drive. = Windows is installed in the MBR of /dev/sdb. sda1: __________________________________________ File system: ext4 Boot sector type: Grub2 (v1.99) Boot sector info: Grub2 (v1.99) is installed in the boot sector of sda1 and looks at sector 18406911 of the same hard drive for core.img, but core.img can not be found at this location. Operating System: Ubuntu 12.04.1 LTS Boot files: /boot/grub/grub.cfg /etc/fstab /boot/extlinux/extlinux.conf /boot/grub/core.img sda2: __________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sda5: __________________________________________ File system: swap Boot sector type: - Boot sector info: sdb1: __________________________________________ File system: ntfs Boot sector type: Windows XP: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: ============================ Drive/Partition Info: ============================= Drive: sda _______________________________________ Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 * 63 307,339,514 307,339,452 83 Linux /dev/sda2 307,339,515 312,576,704 5,237,190 5 Extended /dev/sda5 307,339,578 312,576,704 5,237,127 82 Linux swap / Solaris Drive: sdb _______________________________________ Disk /dev/sdb: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdb1 2,048 625,142,447 625,140,400 7 NTFS / exFAT / HPFS "blkid" output: ____________________________________ Device UUID TYPE LABEL /dev/loop0 squashfs /dev/sda1 11b4d633-7863-40b2-a6ca-da5f82c3ad0b ext4 /dev/sda5 cb8d65f4-8cf9-4088-b804-e3dea2151033 swap /dev/sdb1 349E7C109E7BC8BE ntfs Personal1 ================================ Mount points: ================================= Device Mount_Point Type Options /dev/sdb1 /media/Personal1 fuseblk (rw,nosuid,nodev,allow_other,blksize=4096,default_permissions) /dev/sr0 /live/image iso9660 (ro,noatime) ...(a bunch of config file info- let me know if anyone wants to see it!) But usually I just get "Cannot Display This Video Mode", which I know means the video output is not usable by the monitor. 
I'm looking for a way to get into a recovery mode. I'd really like to avoid wiping the drive. Any thoughts?

    Read the article

  • Why use the word "mechanism" in CS?

    - by Nick Rosencrantz
    I'm not sure about the usage of the word "mechanism" when in fact most of the time what is meant is an algorithm. For instance, there's talk about Java's "thread-scheduling mechanism" - why not call it an algorithm, and why borrow a term from mechanics, where the relations are sometimes the opposite of those in computer science? I'm aware that an algorithm is considered a "mechanical solution", but is this really the case when a lot of algorithms don't have mechanical representations? For instance, a file-sharing network gets quicker and faster as usage grows, which would be the reverse of a mechanical structure, which goes slower as usage grows.

    Read the article

  • Agile Documentation

    - by Nick Harrison
    We all know that one of the premises of the agile manifesto is to value Working Software over Comprehensive Documentation. This is a wonderful idea and it takes a tremendous burden off of project implementations. I have seen as many projects fail because of the maintenance weight of the project documentation as I have for any other reason. But this goal, as important as it is, may not always be practical. Sometimes the client will simply insist on tedious documentation despite the arguments against it. This may be to calm a nervous client. This may be to satisfy an audit or compliance requirement. This may be a none-too-subtle attempt at sabotaging the project. OK, it is probably not an all-out attempt to sabotage the project, but it will probably feel that way. So what can we do to keep to the spirit of the Agile Manifesto but still meet the needs of the client wanting the documentation? This is a good question that I have been puzzling over lately! I hope to explore some possible answers more fully here. A common theme that my solutions are likely to follow is the same theme that I often follow with simplifying complex business logic. Make it table-driven! My thought is that the sought-after documentation could be a report or reports out of a metadata repository. Reports are much easier to maintain than hand-written documentation. Here are a few additional advantages that we can explore over time: reports can take advantage of the fact that different people have different needs and different format requirements; reports and the supporting metadata are more easily validated, and the validation can be automated; if the application itself uses this metadata, then there never has to be a question as to whether or not the metadata is up to date, because it is up to date or the application would not work; and in many cases we should be able to automatically gather most of the metadata that we need using reflection, system tables, etc. I think that this will lower the total cost of ownership for the documentation and may provide something useful beyond having a pretty document to look at. What are your thoughts?
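    As a small illustration of the "automatically gather most of the metadata using reflection" idea above, here is a hedged sketch that harvests the public methods of a class into report-friendly lines. The class names are only examples; a real report would draw on the same metadata repository the application itself uses.

      import java.lang.reflect.Method;
      import java.lang.reflect.Modifier;
      import java.util.ArrayList;
      import java.util.List;

      // Minimal sketch: derive documentation metadata (public method signatures)
      // from the code itself instead of maintaining a document by hand.
      public final class ApiReport {

          public static List<String> describePublicApi(Class<?> type) {
              List<String> lines = new ArrayList<>();
              for (Method m : type.getDeclaredMethods()) {
                  if (!Modifier.isPublic(m.getModifiers())) {
                      continue; // only the public surface belongs in the report
                  }
                  StringBuilder sb = new StringBuilder();
                  sb.append(m.getReturnType().getSimpleName())
                    .append(' ')
                    .append(m.getName())
                    .append('(');
                  Class<?>[] params = m.getParameterTypes();
                  for (int i = 0; i < params.length; i++) {
                      if (i > 0) sb.append(", ");
                      sb.append(params[i].getSimpleName());
                  }
                  sb.append(')');
                  lines.add(sb.toString());
              }
              return lines;
          }

          public static void main(String[] args) {
              // Example: report the public methods of java.lang.String.
              describePublicApi(String.class).forEach(System.out::println);
          }
      }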

    Read the article

  • How to get httpd to forward to multiple tomcats for different urls, including / ?

    - by Nick Foote
    OK, so I've got multiple Tomcat instances set up on several AJP ports. I also have Apache httpd listening on port 8090 (because I've got another app already using 8080 at the moment). I've successfully mapped URLs such as mydomain.com:8090/demo and mydomain.com:8090/preprod to their respective Tomcat instances using JkMount and the following vhost config; <VirtualHost *:8090> JkMount /preprod* preprod JkMount /demo* demo </VirtualHost> But I also want the "root" address to map to another Tomcat instance, what will become live/production, i.e. I want mydomain.com:8090/ to map to a 3rd Tomcat instance. At the moment nothing happens or changes if I just add to the above config a line; JkMount /* rootwar If I browse to mydomain.com:8090 I just get the same boring Apache httpd landing page letting me know it's running (i.e. index.html in httpd/htdocs). Is it possible to use JkMount to redirect the "root" address to a Tomcat instance? I can see that a rule like /* will also match URLs like mydomain.com/preprod, but I was hoping the rules are applied in order, so that if /* appears at the end it effectively means "if it's not one of the other environments, then direct to root/production". Just to be clear, I'm trying to set up the following; mydomain.com:8090/preprod --> myApp running in tomcat1 mydomain.com:8090/demo --> myApp running in tomcat2 mydomain.com:8090 --> myApp running in tomcat3

    Read the article

  • Sub Domain tracking with Analytics filters

    - by Nick
    Hi All, We currently have Analytics tracking codes running throughout our site, including our sub-domains. What I would like to do is create different profiles under the same account, segmenting the sub-domains by means of filters. Currently I am just excluding the hostname of the main website by using the following custom filter: Exclude: Hostname Filter pattern: ^www.mydomain.co.za(.*) I know this isn't the proper method of doing this though, and some of the main domain's links are coming through in the data. Ideally I would just like to include anything from: sub.domain.co.za Any help would be greatly appreciated. Thanks

    Read the article

  • What is the right level of granularity for code commenting?

    - by Nick
    Commenting in code I believe is very important, but recently I've been reviewing code that has left me wondering, in particular this one: //due to lack of confidence with web programming leaving this note in for now What is the right level of granularity for code commenting? EDIT: Obviously the above comment is shocking, hence why I'm asking the question. I've recently been finding the inline comments in the code at my workplace annoying. Instead of getting angry, I want to discover the acceptable level of granularity for code commenting in the community.

    Read the article

  • Scrum - how to carry over a partially complete User Story to the next Sprint without skewing the backlog

    - by Nick
    We're using Scrum and occasionally find that we can't quite finish a User Story in the sprint in which it was planned. In true Scrum style, we ship the software anyway and consider including the User Story in the next sprint during the next Sprint Planning session. Given that the User Story we are carrying over is partially complete, how do we estimate for it correctly in the next Sprint Planning session? We have considered: a) Adjusting the number of Story Points down to reflect just the work which remains to complete the User Story. Unfortunately this will mess up reporting the Product Backlog. b) Close the partially-completed User Story and raise a new one to implement the remainder of that feature, which will have fewer Story Points. This will affect our ability to retrospectively see what we didn't complete in that sprint and seems a bit time consuming. c) Not bother with either a or b and continue to guess during Sprint Planning saying things like "Well that User Story may be X story points, but I know it's 95% finished so I'm sure we can fit it in."

    Read the article
