Search Results

Search found 6007 results on 241 pages for 'sub jp'.


  • How to determine the amount to spend per phrase on Adwords research?

    - by Anonymous -
    My company would like to start a PPC advertising campaign. Whilst I understand the concept and how to set everything up from a technical point of view, this is something I've never done before. Logically, we'd like to test a wide range of keywords that we think would lead to conversions, which we've put together through brainstorming and with some help from Google's External Keyword Tool.

    A sub-question whilst I remember: am I correct in thinking that, in Google's keyword tool, keywords we think will perform well that have low competition yet high monthly searches are good, since there will be fewer advertisers and therefore our bid per click will be lower?

    Is there a common benchmark or process for doing a round of keyword tests? Should we wait for 100 clicks on each keyword, see which ones have led to the most sales (or rather, sales that are sustainable given the cost per click of that keyword), then drop the ones which aren't converting and put that budget onto the converting keywords? We realistically have a few hundred keywords/phrases we would like to test, but spending $100 per keyword/phrase works out as quite an expensive test. It would be nice to be able to spend $5-10 per phrase, but I don't think the sample size would be large enough to tell us anything reliable. Another approach might be to set up all the keywords and keep whichever bring the most sales within x hours/days. What is the common procedure with things like this? I know there are a plethora of companies that specialize in exactly this, but this is something we anticipate doing a lot in the future, so it would make sense to do it in house if at all possible.
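
    One rough way to frame the budget question is to work backwards from the value of a conversion rather than from a fixed per-keyword spend. The sketch below is only illustrative; the profit, conversion rate and CPC figures are invented assumptions, not benchmarks.

        # Rough AdWords test-budget arithmetic. All figures are invented
        # assumptions for illustration, not benchmarks.
        profit_per_sale = 40.00      # assumed profit per conversion, in dollars
        conversion_rate = 0.02       # assumed 2% of clicks convert
        avg_cpc = 0.50               # assumed average cost per click

        # Highest CPC that still breaks even at the assumed conversion rate.
        break_even_cpc = profit_per_sale * conversion_rate
        print(f"Break-even CPC: ${break_even_cpc:.2f}")

        # Expected conversions and spend for different per-phrase test sizes.
        for clicks in (20, 100, 500):
            conversions = clicks * conversion_rate
            spend = clicks * avg_cpc
            print(f"{clicks:>4} clicks: ~{conversions:.1f} conversions, ~${spend:.2f} spend")

    At a 2% conversion rate, a $5-10 test (roughly 10-20 clicks at this CPC) yields well under one expected sale per phrase, which is why such small samples say very little about which keywords actually convert.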

    Read the article

  • Modular Database Structures

    - by John D
    I have been examining the code base we use at work and I am worried about the size the packages have grown to. The actual code is modular; procedures have been broken down into small functional (and testable) parts. The issue I see is that we have 100 procedures in a single package - almost an entire domain model.

    I had thought of breaking these packages down - creating sub-domains centered around the procedures' relationships to other objects: group a bunch of procedures that have 80% of their relationships to three tables, and so on. The end result would be a lot more packages, but the packages would be smaller and I feel the entire code base would be more readable - when procedures cross between two domain models it is less of a struggle to figure out which package they belong to.

    The problem I now have is what the actual benefit of all this would really be. I looked at the general advantages of modularity:

        1. Re-usability
        2. Asynchronous development
        3. Maintainability

    Yet when I consider our latest development, the procedures within the packages are already reusable. At this advanced stage we rarely require asynchronous development - and when it is required we simply ladder the stories across iterations. So I guess my question is whether people know of reasons why you would break down classes rather than just the methods inside classes? Right now I do believe there is an issue with these mega-packages forming, but the only benefit I can really pin down for breaking them down is readability - something that the experience gained from working with them would solve.

    Read the article

  • How do you setup Postfix/Dovecot/MySQL to not look for local accounts?

    - by thiesdiggity
    I am having an issue with one of my Postfix/Dovecot mail servers and I'm unsure how to fix the problem. I will try to explain it in detail; here it goes:

    I have an Ubuntu server set up for virtual hosting with Postfix, Dovecot and MySQL. We have one domain set up as a virtual domain; for this example I am going to use mail.example.com. Under that domain we have one email address. I have another server (MS Exchange) set up using another one of my sub-domains, ex.example.com.

    The problem is that when I SMTP into the account on mail.example.com and try to send an email to an account on ex.example.com, the email is returned to us with an "unknown host" error. Now, I know that the mail.example.com server can resolve the ex.example.com domain because I can ping/dig it while SSH'd into the server. I can also log into Postfix via Telnet and send an email to an ex.example.com mailbox.

    I'm guessing that it has something to do with Postfix/Dovecot looking locally for the domain in the virtual domain list because of the top-level domain (example.com)? If that's the case, how do I get Postfix/Dovecot to only look locally for the full hostname (mail.example.com) and, if it doesn't find it, send the mail to the correct server by looking up the MX/A records (which I know exist and are set up correctly)? I have been working on this all day and any guidance would be GREATLY appreciated! Thanks for your time!

    Read the article

  • Load Balancing Questions

    - by Van Holtz
    I have been learning networking for about 4 months. I wrote a single standalone multiplayer server and succeeded with an authoritative approach. Now I want to extend it by splitting the single server into clusters, to allow even more players to log in and to avoid latency issues. I have prototyped the load-balancing server and it's running pretty well so far.

    This is my architecture: I have a master server which acts as a proxy, and every sub-server (chat, login, game) connects to the master server, as do all the clients. When a client connects, the request flows like this:

        Client sends request -> MS (Master) decides which SS (sub-server) to forward to -> forwards request to SS
        SS analyzes the message -> sends response to MS -> MS decides which client to forward to -> forwards response to client

    It looks like the message is going through a lot of stages; it takes double the time to process compared with a single-server approach. I feel like my model isn't the best, or I may be wrong. Is there a better model, or the one used in professional games? I still want a master/sub-server approach. I just want to confirm that I'm going in the right direction before writing all my code. Thanks for any answer :)
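
    To make the doubled hop count concrete, here is a minimal Python sketch of the proxy flow described above. The server names, message shapes and handler functions are hypothetical, and real sub-servers would sit behind sockets rather than direct function calls.

        # Minimal sketch of the master/sub-server proxy flow described above.
        # Names and message shapes are hypothetical.

        def login_server(request):
            return {"status": "ok", "token": "abc123"}        # pretend work

        def chat_server(request):
            return {"status": "ok", "echo": request["text"]}

        SUB_SERVERS = {"login": login_server, "chat": chat_server}

        def master_handle(client_request):
            """The master picks a sub-server, forwards the request, then relays
            the sub-server's response back to the client - the two extra hops
            that double the processing time."""
            handler = SUB_SERVERS[client_request["type"]]
            response = handler(client_request)   # MS -> SS -> MS
            return response                      # MS -> client

        print(master_handle({"type": "login", "user": "van"}))
        print(master_handle({"type": "chat", "user": "van", "text": "hello"}))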

    Read the article

  • Why should I use MSBuild instead of Visual Studio Solution files?

    - by Sid
    We're using TeamCity for continuous integration and it's building our releases via the solution file (.sln). I've used Makefiles in the past for various systems but never MSBuild (which I've heard is sort of a Makefiles + XML mashup). I've seen many posts on how to use MSBuild directly instead of the solution files, but I don't see a very clear answer on why to do it.

    So, why should we bother migrating from solution files to an MSBuild 'makefile'? We do have a couple of releases that differ by a #define (featurized builds), but for the most part everything works. The bigger concern is that we'd now have to maintain two systems when adding projects/source code.

    UPDATE: Can folks shed light on the lifecycle and interplay of the following three components?

        1. The Visual Studio .sln file
        2. The many project-level .csproj files (which I understand are "sub" MSBuild scripts)
        3. The custom MSBuild script

    Is it safe to say that the .sln and .csproj files are consumed/maintained as usual from within the Visual Studio IDE GUI, while the custom MSBuild script is hand-written and usually consumes the already existing individual .csproj files "as-is"? That's one way I can see to reduce overlap/duplication in maintenance... Would appreciate some light on this from other folks' operational experience.

    Read the article

  • Outlook macro runs through 250 iterations before failing with error [migrated]

    - by Senoculus
    Description: I have an Outlook macro that loops through selected emails in a folder and writes some info to a .csv file. It works perfectly up until 250 emails before failing. Here is some of the code:

        Open strSaveAsFilename For Append As #1
        CountVar = 0
        For Each objItem In Application.ActiveExplorer.Selection
            DoEvents
            If objItem.VotingResponse <> "" Then
                ' Write the sender and their voting response to the CSV
                CountVar = CountVar + 1
                Debug.Print " " & CountVar & ". " & objItem.SenderName
                Print #1, objItem.SenderName & "," & objItem.VotingResponse
            Else
                ' No voting response: move the email to the Special Cases sub-folder
                CountVar = CountVar + 1
                Debug.Print " " & CountVar & ". " & "Moving email from: " & Chr(34) & objItem.SenderName & Chr(34) & " to: Special Cases sub-folder"
                objItem.Move CurrentFolderVar.Folders("Special Cases")
            End If
        Next
        Close #1

    Problem: after this code runs through 250 emails, the following screenshot pops up: http://i.stack.imgur.com/yt9P8.jpg

    I've tried adding a "wait" function to give the server a rest so that I'm not querying it so quickly, but I get the same error at the same point.

    Read the article

  • How to approach scrum task burn-down when tasks have multiple people's involvement?

    - by AgileMan
    In my company, a single task can never be completed by one individual. There is going to be a separate person to QA and code review each task. What this means is that each individual gives their own estimate, per task, of how much time it will take to complete.

    The problem is, how should I approach burn-down? If I aggregate the hours together, assume the following estimate:

        10 hrs - Dev time
         4 hrs - QA
         4 hrs - Code review
        Task estimate = 18 hrs

    At the end of each day I ask that the task be updated with "how much time is left until it is done". However, each person generally just thinks about their part of it. Should they mark the effort remaining for their part, and then ADD the other effort estimates to that? How are you doing this?

    UPDATE: To help clarify a few things, at my organization each task within a story requires 3 people:

        1. Someone to develop the task (do unit tests, etc.).
        2. A QA specialist to review the task (they primarily do integration and regression tests).
        3. A tech lead to do code review.

    I don't think there is a wrong way or a right way, but this is our way ... and that won't be changing. We work as a team to complete even the smallest level of a story whenever possible. You cannot actually test whether something works until it is dev complete, and you cannot review the quality of the code before then either ... so the best you can do is split things up into small logical slices so that the bare minimum of functionality can be tested and reviewed as early in the process as possible.

    My question to those who work this way would be how to burn down a "task" when tasks are set up like this. Unless a task has its own sub-tasks (which JIRA doesn't allow) ... I'm not sure of the best way to track "what's left" on a daily basis.
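
    One mechanical way to do the aggregation is to record remaining hours per role and burn down their sum, with each person only ever updating their own share. A small illustrative Python sketch with made-up numbers:

        # Burn down the sum of the per-role remaining estimates.
        # The figures are made up for illustration.
        task = {
            "dev":    {"estimate": 10, "remaining": 6},
            "qa":     {"estimate": 4,  "remaining": 4},
            "review": {"estimate": 4,  "remaining": 4},
        }

        total_estimate = sum(part["estimate"] for part in task.values())
        total_remaining = sum(part["remaining"] for part in task.values())

        print(f"Task estimate:   {total_estimate} hrs")   # 18 hrs, as in the example
        print(f"Remaining today: {total_remaining} hrs")  # each role updates only its own share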

    Read the article

  • Do objects maintain identity under all non-cloning conditions in PHP?

    - by Buttle Butkus
    PHP 5.5. I'm doing a bunch of passing around of objects with the assumption that they will all maintain their identities - that any changes made to their states from inside other objects' methods will continue to hold true afterwards. Am I assuming correctly? I will give my basic structure here:

        class builder {
            protected $foo_ids = array(); // set in construct
            protected $foo_collection;
            protected $bar_ids = array(); // set in construct
            protected $bar_collection;

            protected function initFoos() {
                $this->foo_collection = new FooCollection();
                foreach($this->foo_ids as $id) {
                    $this->foo_collection->addFoo(new foo($id));
                }
            }

            protected function initBars() {
                // same idea as initFoos
            }

            protected function wireFoosAndBars(fooCollection $foos, barCollection $bars) {
                // arguments are passed in using $this->foo_collection and $this->bar_collection
                foreach($foos as $foo_obj) { // (foo_collection implements IteratorAggregate)
                    $bar_ids = $foo_obj->getAssociatedBarIds();
                    if(!empty($bar_ids)) {
                        $bar_collection = new barCollection(); // sub-collection to be a component of each foo
                        foreach($bar_ids as $bar_id) {
                            $bar_collection->addBar(new bar($bar_id));
                        }
                        $foo_obj->addBarCollection($bar_collection);
                        // now each foo_obj has a collection of bar objects, each of which is also
                        // in the main collection. Are they the same objects?
                    }
                }
            }
        }

    What has me worried is that foreach supposedly works on a copy of its array. I want all the $foo and $bar objects to maintain their identities no matter which collection object they become a part of. Does that make sense?

    Read the article

  • Tried to install some software, it says some packages are damaged, cannot fix them

    - by lempira
    So, I go to the Ubuntu Software Center, and as soon as it opens, a window pops up with the following text: "Items cannot be installed or removed until the package catalog is repaired. Do you want to repair it now?" Then I click the "Repair" button, and a new window pops up with the following text: "Package operation failed. The installation or removal of a software package failed." Then I click on the "Details" button, which returns the following text:

        installArchives() failed: Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
        Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
        Preconfiguring packages ...
        Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
        Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
        Preconfiguring packages ...
        Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
        Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
        Preconfiguring packages ...
        dpkg: warning: 'ldconfig' not found in PATH or not executable.
        dpkg: error: 1 expected program not found in PATH or not executable.
        Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.
        Error in function:
        SystemError: E:Sub-process /usr/bin/dpkg returned an error code (2)
        dpkg: warning: 'ldconfig' not found in PATH or not executable.
        dpkg: error: 1 expected program not found in PATH or not executable.
        Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.

    What should I do?

    Read the article

  • Develop in trunk and then branch off, or in release branch and then merge back?

    - by Torben Gundtofte-Bruun
    Say that we've decided on following a "release-based" branching strategy, so we'll have a branch for each release, and we can add maintenance updates as sub-branches from those. Does it matter whether we:

        1. develop and stabilize a new release in the trunk and then "save" that state in a new release branch; or
        2. first create that release branch and only merge into the trunk when the branch is stable?

    I find the former to be easier to deal with (less merging necessary), especially when we don't develop on multiple upcoming releases at the same time. Under normal circumstances we would all be working on the trunk, and only work on released branches if there are bugs to fix. What is the trunk actually used for in the latter approach? It seems to be almost obsolete, because I could create a future release branch based on the most recent released branch rather than from the trunk.

    Details based on a comment below: Our product consists of a base platform and a number of modules on top; each is developed and even distributed separately from the others. Most team members work on several of these areas, so there's partial overlap between people. We generally work only on 1 future release and not at all on existing releases. One or two might work on a bugfix for an existing release for short periods of time. Our work isn't compiled and it's a mix of Unix shell scripts, XML configuration files, SQL packages, and more - so there's no way to have push-button builds that can be tested. That's done manually, which is a bit laborious. A release cycle is typically half a year or more for the base platform; often 1 month for the modules.

    Read the article

  • Should all public methods in an abstract class be marked virtual?

    - by Justin Pihony
    I recently had to update an abstract base class in some OSS I was using so that it was more testable, by making its methods virtual (I could not use an interface as it combined two). This got me thinking about whether I should mark only the methods I needed as virtual, or whether I should mark every public method/property virtual. I generally agree with Roy Osherove that every method should be made virtual, but I came across this article that got me thinking about whether this is necessary or not. I am going to limit this to abstract classes for simplicity, however (whether all concrete public methods should be virtual is especially debatable, I am sure).

    I could see where you might want to allow a sub-class to use a method but not want it overriding the implementation. However, as long as you trust that the Liskov Substitution Principle will be followed, why would you not allow it to be overridden? By marking a method abstract, you are forcing a certain override anyway, so it seems to me that all public methods inside an abstract class should indeed be marked virtual.

    However, I wanted to ask in case there is something I might not be thinking of. Should all public methods within an abstract class be made virtual?

    Read the article

  • Error while removing the new kernel 2.6.37

    - by Tarek
    Hi! I tried to install the new kernel but something went wrong, and I'm trying to remove it now. The error message is:

        mhd@Tarek-Laptop:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          linux-image-2.6.37-020637-generic
        0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded.
        1 not fully installed or removed.
        After this operation, 111MB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 188780 files and directories currently installed.)
        Removing linux-image-2.6.37-020637-generic ...
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
        /etc/default/grub: 33: Syntax error: EOF in backquote substitution
        run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 2
        Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.37-020637-generic.postrm line 328.
        dpkg: error processing linux-image-2.6.37-020637-generic (--remove):
          subprocess installed post-removal script returned error exit status 1
        Errors were encountered while processing:
          linux-image-2.6.37-020637-generic
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The previous, unsolved error is described in this bug.

    Read the article

  • Google search question, front page not showing

    - by user5746
    I know this is probably a dumb question, but I hope someone can give me some insight. I was ranked on Google's first page of search results for "funny st patricks day shirts", but I was third from the bottom and not familiar enough with SEO, so I signed up for "Attracta" to rank higher. Big mistake. Since using Attracta, I've lost the first page and I'm now on the fourth page for that search.

    What I noticed is that Google is now just showing a sub-page or side page (a link from my front page to a page which has only a few designs in it); this is not where I would want customers to land first... but my front page is not showing in that search anymore. Obviously, the title of this side page is not geared toward that search result, so I know that's why I have the PR drop. Why is my front page not ranking over that page, though? Why is it apparently gone from that search, or so far back no one will ever find it?

    I need to know how to fix this quickly if anyone has any advice at all for me. It's the busiest season for my website and the people who were stealing design ideas from me are all ranked higher than my site now. (I can prove this, lol.) So, I'm very frustrated by that. I would be very grateful for any advice at all as to what I can do to fix this. THANKS in advance for any advice you can offer. Catelyn

    Read the article

  • What are some efficient ways to set up my environment when working on a remote site?

    - by Prefix
    Hello fellow programmers, I am still a relatively new programmer and have recently gotten my first on-campus programming position. I am the sole dev responsible for 8 domains as well as 3 small PHP web apps. The campus has its web environment divided into staging and live servers - we develop on the staging server via SFTP and then push the updates to the live server through a web GUI. I use Sublime Text 2 and the Sublime SFTP plugin for all my dev work (it's my preferred editor). If I am just making an edit to a page, I'll open that individual file via the FTP browser. If I am working on the PHP web app projects, I have the app directory mapped to a local folder so that when I save locally the file is auto-uploaded through Sublime SFTP.

    I feel like this workflow is slow and sub-optimal. How can I improve my workflow for working with remote content? I'd love to set up a local environment on my machine, as that would eliminate the constant SFTP upload/download, but as I said there are many sites, and the space required for a local copy of the entire domain would be quite large and complex; not to mention keeping it updated with whatever is latest on the staging server would be a nightmare. Does anyone know how I can improve my general web dev workflow from what I've described? I'd really like to cut out constantly editing over FTP, but I'm not sure where to start other than ripping the entire directory and dumping it into XAMPP.

    Read the article

  • Looking for a better Factory pattern (Java)

    - by Sam Goldberg
    After doing a rough sketch of a high-level object model, I am doing iterative TDD and letting the other objects emerge as a refactoring of the code (as it increases in complexity). (That whole approach may be a discussion/argument for another day.)

    In any case, I am at the point where I am looking to refactor code currently in if-else blocks into separate objects, because another value combination creates a new set of logical sub-branches. To be more specific, this is a trading-system feature where buy orders have different behavior than sell orders. Responses to the orders have a numeric indicator field which describes some event that occurred (e.g. fill, cancel). The combination of this numeric indicator field plus whether the order is a buy or a sell requires different processing by the code. Creating a family of objects to separate the code for the unique handling of each combination of the 2 fields seems like a good choice at this point.

    The way I would normally do this is to create some Factory object which, when called with the 2 relevant parameters (indicator, buy/sell), returns the correct subclass of the object. Sometimes I do this pattern with a map, which allows looking up a live instance (or a constructor to use via reflection), and sometimes I just hard-code the cases in the Factory class.

    For some reason this feels like poor design (e.g. one object which knows all the subclasses of an interface or parent object), and a bit clumsy. Is there a better pattern for solving this kind of problem? And if this factory-method approach makes sense, can anyone suggest a nicer design?
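
    As an illustration of the map-based variant mentioned above (sketched in Python rather than Java for brevity; every class name, key and field here is hypothetical), the factory can hold a registry keyed on the (indicator, side) pair, so the factory itself never has to enumerate the subclasses; each handler registers itself:

        # Registry-based factory keyed on (indicator, side).
        # All class names, keys and field names here are hypothetical.
        HANDLERS = {}

        def register(indicator, side):
            """Class decorator that adds the handler class to the registry."""
            def wrap(cls):
                HANDLERS[(indicator, side)] = cls
                return cls
            return wrap

        class ResponseHandler:
            def process(self, response):
                raise NotImplementedError

        @register(indicator=2, side="BUY")
        class BuyFillHandler(ResponseHandler):
            def process(self, response):
                return f"filled buy order {response['order_id']}"

        @register(indicator=4, side="SELL")
        class SellCancelHandler(ResponseHandler):
            def process(self, response):
                return f"cancelled sell order {response['order_id']}"

        def handler_for(indicator, side):
            """The factory: a single lookup instead of an if-else ladder."""
            return HANDLERS[(indicator, side)]()

        print(handler_for(2, "BUY").process({"order_id": 17}))

    Whether registration happens via a decorator, a static initializer block, or reflection over a package is a detail; the point is that the (indicator, side) to class mapping lives in data rather than in an if-else ladder.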

    Read the article

  • Entry level engineer question regarding memory management

    - by Ealianis
    It has been a few months since I started my position as an entry-level software developer. Now that I am past some of the learning curves (e.g. the language, jargon, and syntax of VB and C#), I'm starting to focus on more esoteric topics, so as to write better software. A simple question I presented to a fellow coworker was met with the response that I'm "focusing on the wrong things." While I respect this coworker, I do disagree that this is a "wrong thing" to focus upon.

    Here is the code (in VB), followed by the question. Note: the function GenerateAlert() returns an integer.

        Dim alertID As Integer = GenerateAlert()
        _errorDictionary.Add(argErrorID, New ErrorInfo(Now(), alertID))

    vs...

        _errorDictionary.Add(argErrorID, New ErrorInfo(Now(), GenerateAlert()))

    I originally wrote the latter and rewrote it with the "Dim alertID" so that someone else might find it easier to read. But here was my concern and question: "Should one write this with the Dim alertID, it would in fact take up more memory; finite, but more, and should this method be called many times, could it lead to an issue? How will .NET handle this object alertID? Outside of .NET, should one manually dispose of the object after use (near the end of the sub)?"

    I want to ensure I become a knowledgeable programmer who does not just rely upon garbage collection. Am I overthinking this? Am I focusing on the wrong things?

    Read the article

  • Software Center - "Items cannot be installed or removed until package catalog is repaired"

    - by Stephanie
    I tried to install Back In Time and now I keep getting the message "Items cannot be installed or removed until the package catalog is repaired." I have tried sudo apt-get install -f and then get:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          backintime-gnome
        The following packages will be upgraded:
          backintime-gnome
        1 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/39.4 kB of archives.
        After this operation, 24.6 kB of additional disk space will be used.
        Do you want to continue [Y/n]?

    When I answer Y, I get the following message:

        dpkg: dependency problems prevent configuration of backintime-gnome:
          backintime-gnome depends on backintime-common (= 1.0.7); however:
          Version of backintime-common on system is 1.0.8-1.
        dpkg: error processing backintime-gnome (--configure):
          dependency problems - leaving unconfigured
        No apport report written because the error message indicates it's a followup error from a previous failure.
        Errors were encountered while processing:
          backintime-gnome
        E: Sub-process /usr/bin/dpkg returned an error code (1)

        stephanie@stephanie-ThinkPad-T61:~$ sudo dpkg --configure -a
        dpkg: dependency problems prevent configuration of backintime-gnome:
          backintime-gnome depends on backintime-common (= 1.0.7); however:
          Version of backintime-common on system is 1.0.8-1.
        dpkg: error processing backintime-gnome (--configure):
          dependency problems - leaving unconfigured
        Errors were encountered while processing:
          backintime-gnome

    Read the article

  • Static pages for large photo album

    - by Phil P
    I'm looking for advice on software for managing a largish photo album for a website: 2000+ pictures, a one-time drop (probably). I normally use MarginalHack's album, which does what I want: pre-generate thumbnails and HTML for the pictures, so I can serve them without needing a dynamic run-time, meaning there's less attack surface to worry about. However, it doesn't handle pagination or the like, so it's unwieldy for this case.

    This is a one-time drop of pictures from a wedding, with a shared usercode/password for distribution to the guests; I don't wish to put the pictures in a third-party hosting environment. I don't wish to use PHP, simply because that's another run-time to worry about; I might relent and use something dynamic if it's Python or Perl based (as I can maintain things written in those). I currently have Apache serving static files, Album-generated, with some sub-directories to divide up the content so it's a little more manageable.

    Something like Album but with pagination already handled would be great, but I'm willing to accept something a little more dynamic if it lets people comment or caption and stores the extra data in something like an SQLite DB. I'd want something lightweight, not a full-blown CMS with security updates every three months. I don't want to upload pictures of other people's children into a third-party free service where I don't know what the revenue model is. (For my site: revenue is none, costs out of pocket.) Existing server hosting is *nix, Apache, some WSGI. Client-side I have MacOS. Any advice?
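
    Since a Python-based tool is acceptable here, a minimal sketch of the missing pagination step might look like the following; it only writes static pageN.html files that link pre-generated thumbnails to the full-size images. The directory names and page size are assumptions, and thumbnail generation itself is left to an existing tool such as Album.

        # Minimal static-pagination sketch: writes pageN.html files that link
        # existing thumbnails to the full-size images. Paths and page size are
        # assumptions; thumbnail generation is left to another tool.
        import math
        from pathlib import Path

        PHOTOS = Path("photos")    # full-size images
        THUMBS = Path("thumbs")    # pre-generated thumbnails with the same file names
        OUT = Path("album")
        PER_PAGE = 50

        OUT.mkdir(exist_ok=True)
        images = sorted(p.name for p in THUMBS.iterdir()
                        if p.suffix.lower() in (".jpg", ".jpeg", ".png"))
        pages = max(1, math.ceil(len(images) / PER_PAGE))

        for n in range(pages):
            chunk = images[n * PER_PAGE:(n + 1) * PER_PAGE]
            nav = " | ".join(f'<a href="page{i}.html">{i + 1}</a>' for i in range(pages))
            links = "\n".join(
                f'<a href="../photos/{name}"><img src="../thumbs/{name}"></a>'
                for name in chunk
            )
            (OUT / f"page{n}.html").write_text(
                f"<html><body><p>{nav}</p>\n{links}\n<p>{nav}</p></body></html>"
            )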

    Read the article

  • How to correctly handle redirect after site facelift

    - by Stefan
    I recently updated our site, taking it from a multi-page site to a single-page site. The problem now is that when the site is searched for in, say, Google, it displays the site as well as the previously indexed pages. So if a user clicks, say, our "About" page, it takes them to our now-outdated material.

    I am hoping to get some guidance on how to properly handle this. I figure the first step is to set up a robots.txt on our new index page to tell the engines not to crawl beyond index.php. But in the meantime, how do I handle the fact that when searching our site on Google we may still have users who try to click on sub-page links? Should I simply set up redirects while waiting for the engines to update? And if so, do I need to set up redirects on each page using PHP, or is this something I would take care of in our site's control panel? I am not very familiar with redirects... Any help is appreciated!

    Read the article

  • Not able to track traffic on subdomain using Google Analytics

    - by Steven
    I'm trying to track traffic for my sub-domain, but it's not happening. This is how it's set up: my partner has a domain called sub1.partner.com. This domain points to partner1.mydomain.com. The idea is that users think they are browsing my partner's website, when they are in fact browsing pages on my server. My tracking code looks like this:

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-xxxxxxxx-x']);
        _gaq.push(['_setDomainName', '.mysite.com']);
        _gaq.push(['_trackPageview']);

        (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();

    In Google Analytics I've created a new account under my main account and called it partner1.mysite.com. On this account I have created a filter:

        Filter type: include
        Filter field: Host name
        Filter pattern: partner1.mysite.no
        Case sensitive: No

    What more can I try to track traffic on my subdomain?

    UPDATE

    Question 1: Is this line correct? _gaq.push(['_setDomainName', '.mysite.com']);

    Question 2: Is it correct that I have to add \ before any punctuation, like \., in filters?

    Read the article

  • Question on methods in Object Oriented Programming

    - by mal
    I'm learning Java at the minute (my first language), and as a project I'm looking at developing a simple puzzle game. My question relates to the methods within a class. I have my Block-type class; it has its many attributes, set methods, get methods and just plain methods. There are quite a few. Then I have my main board class. At the moment it does most of the logic: positioning of sprites, collision detection, and then drawing the sprites, etc.

    As I am learning to program as much as I'm learning to program games, I'm curious to know how much code is typically acceptable within a given method. Is there such a thing as having too many methods? All my draw functionality happens in one method; should I break this into a few 'sub' methods? My thinking is that if I find at a later stage that the for loop I'm using to cycle through the array of sprites searching for collisions in the spriteCollision() method is inefficient, I can code a new method and just replace the old method calls with the new one, leaving the old code intact. Is it bad practice to have a method that contains one if statement, and place the call for that method in the for loop?

    I'm very much in the early stages of coding/designing and I need all the help I can get! I find it a little intimidating when people talk about throwing together a prototype in a day too! Can't wait until I'm that good!

    Read the article

  • How to package static content outside of a web application?

    - by chinto
    Our web application has static content packaged as part of the WAR. We have been planning to move it out of the project and host it directly on Apache to achieve the following objectives:

        1. The static content is getting too big and is bloating the EAR size, resulting in slower deployment across nodes.
        2. Faster deployment times.
        3. Take the load off the application server.
        4. Host the static content on a sub-domain, allowing some browsers (IE) to load resources simultaneously.
        5. Give us an option to use further caching, such as Apache mod_cache, apart from the cache headers we send out to browsers.

    We use yuicompressor-maven-plugin to aggregate and minimize JS files. My question is how to package and manage this static content outside of the web application. My current options are:

        1. A new Maven WAR project, still using the same plugin for aggregation and compression.
        2. Just a plain directory in SVN, using the YUI/Google compressor directly.

    Or is there a better technology out there to manage static content as a project?

    Read the article

  • In MVC, why can't a model create a view?

    - by MUY Belgium
    I have a web application written in Perl with a controller, some "views" and some "Models". Each "Model" corresponds to one "View". The controller (one file) creates a Model object corresponding to each view (the view is a CGI argument), then retrieves the view from the module it has just created. Indeed, this seems like a bad thing, but can you argue a bit more about why?

    My first idea was that since the "Model" object depends upon the "view", the "Model" is actually a view. But also, the fact that ALL the CGI parameters are passed to the Model causes the "Model" to become not truly a model but to lose all its value, since it is only tied to the current implementation of the web app. In other words, the "Model" stays a model but loses its "comprehensibility" (the "Model" is not easily understandable).

    I am quite new to project analysis, so please do not be too harsh. Why is this bad? I have made a prototype with the main structures I have understood of this web application, made as short as possible:

        # Model.pm
        package Model;
        sub new {
            # this requires an attribute called "view"
            # and this requires an argument which is the cgi params
        }
        ...

        # View1.pm
        package View1;
        ...

        # Model1.pm
        package Model1;
        use base 'Model';
        use View1;
        sub new {
            my $class = shift;
            my $arg = shift;
            Model::DoSomething($arg);
            $self->view = new View1($arg);
            ...
        }

        # controller.cgi
        my $model = 0;
        ...
        $model = new Model1( cgi_param => params() );  # there are several models here
        ...
        print $model->get_view()->get_html();

    Read the article

  • SEO questions about PR, Page Structure, and Other

    - by jasondavis
    A couple of basic questions related to SEO.

        1. If I have a site that covers several different niches that I am trying to promote (for example, a web developer site broken down into sections for web design, graphic design, programming, and software), would it be better, for SEO purposes, to use subdomains for these main sections or to use the main domain with a folder-like structure?

        2. Is PR different for each page of a domain, or does every page on that domain have the same PR? Also, do sub-domains have a different PR?

        3. When entering a hugely oversaturated niche such as web design, is it even possible to compete with the big sites that have been ranked on Google's first page for years?

        4. I have read about how important titles, link anchors, and headings are for SEO, and how content is the most important. So let's say we are building a standard header, body, sidebar, footer page. In the actual markup, would it be better to make sure the main content comes before the sidebar on the page, or does this probably not make a difference?

        5. I've seen it mentioned in another answer here that microformats can help with SEO; is there any fact behind this?

    Thank you for any info on this.

    Read the article

  • unable to remove/reinstall libxss1:amd64

    - by mono
    Whenever I try to install anything (through the Software Center or apt-get) I get an error report stating that libxss1:amd64 is in a "very bad inconsistent state" (translated from German) and that I should reinstall it. That would not bother me much, but I am never sure if the stuff I'm trying to install will be working in the end. It really resists being repaired: apt-get install, apt-get remove, apt-get -f install, and all the usual suspects; nothing works. It always comes down to:

        dpkg: error processing libxss1:amd64 (--configure):
          package is in a very bad inconsistent state; you should reinstall it
          before attempting configuration
        Errors were encountered while processing:
          libxss1:amd64
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The original output is in German, so the translation above might not be exact, but I guess you can figure it out from my text. I found out that this package is the X screen saver library. Maybe I should kill the screen saver before handling that package. If that's so, then please tell me how.

    Thanks for your help, thomas

    Read the article
