Search Results

Search found 43110 results on 1725 pages for 'noob question'.

Page 265 of 1725

  • How to verify that all files are intact prior to install?

    - by Kalle H. Väravas
    I've been working on my CMS (on the PHP platform) for a long time now. The main program is done and I'm currently developing the installer. Installation itself will be fairly simple:
      1. Upload all files
      2. Verify that the "content/" dir has correct permissions
      3. Check that ALL files are intact and not modified [this is the subject of this question]
      4. Insert the config data and first settings
      5. Run install (generate all DB tables, insert sample data, etc.)
    The question mark is at step 3. How do I verify ALL files? Verification should compare all files under the CMS root directories against a list fetched from a remote location. The list should contain filename, filesize and filetype. This way the user can check that there are no unnecessary or corrupted files that could indicate a breach in the software. I have seen some installers do this, but I cannot find one right now, and therefore I'm clueless about the most efficient method. Of course there is always the simple array trick, but surely there must be a better and faster way?!
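
    A minimal sketch of the manifest comparison described above, assuming the remote list is JSON with a relative path, size and SHA-256 hash per file (the field names and URL are hypothetical, and since the asker's CMS is PHP this Python version only illustrates the comparison logic):

        import hashlib, json, os, urllib.request

        MANIFEST_URL = "https://example.com/manifest.json"  # hypothetical remote file list

        def sha256_of(path):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            return h.hexdigest()

        def verify(root):
            manifest = json.load(urllib.request.urlopen(MANIFEST_URL))
            expected = {e["path"]: e for e in manifest}
            problems = []
            # Missing or modified files
            for rel, entry in expected.items():
                full = os.path.join(root, rel)
                if not os.path.isfile(full):
                    problems.append(("missing", rel))
                elif os.path.getsize(full) != entry["size"] or sha256_of(full) != entry["sha256"]:
                    problems.append(("modified", rel))
            # Files present on disk but not in the list
            for dirpath, _, files in os.walk(root):
                for name in files:
                    rel = os.path.relpath(os.path.join(dirpath, name), root)
                    if rel not in expected:
                        problems.append(("unexpected", rel))
            return problems

    A content hash is a stronger check than filename, size and type alone, since a file can be altered without its size changing.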

    Read the article

  • Should I start MCPD training now or wait for new exams?

    - by lunchmeat317
    I apologize if this question has been asked before, or if this is the wrong place to put it. I'm beginning my study track for the MCPD certification in Web Development. However, Microsoft plans to retire this certification on July 31st of 2013, along with two of the tests necessary to receive it. On MS's site, I can't find a newer certification path to take - I imagine that Microsoft will release new certification paths and new tests for their new software, but I don't know when that will happen. I don't really know anything about Microsoft's process, as this is the first Microsoft certification I'll be studying for. The bottom line is this - I don't want to lose six months waiting for a new test to appear that won't expire, but I don't want to rush to get a certification that will be invalid in six months (or have to reset any progress due to new study material). To those with experience in affairs like this - what is the best course to take, and can I maximize the time I have now (not wait for new testing material)? Is there any way to find material for the new tests that Microsoft will be rolling out? Thank you for your patience. If this is the wrong place to put this question, I would like to request that it be moved to the correct StackExchange site instead of being closed. Thanks for your help!

    Read the article

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1
    Clearly immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state for so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end states of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking? I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect. Every processor tick takes time, electricity and machine resources away from other processes. So I understand that there may be more than one perspective to answer this question from. If immutability is safe, given certain bounds or assumptions, I want to know exactly where the borders of the "safety zone" are. Some examples of possible boundaries:
      - I/O
      - Exceptions/errors
      - Interfaces with programs written in other languages
      - Interfaces with other machines (physical, virtual, or theoretical)
    Special thanks to @JimmaHoffa for his comment which started this question!
    Part 2
    Multi-processor programming is often used as an optimization technique - to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?
    Summary
    I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
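
    One way to picture the "collect and aggregate" step raised above - a minimal Python sketch, not tied to any particular functional language: each worker only reads immutable input and emits a new value, yet the rendezvous point still needs a synchronized channel (queue.Queue here, which locks internally) or some atomic primitive.

        import threading, queue

        def worker(chunk, results):
            # Pure transformation: reads the immutable input, emits a new value.
            results.put(sum(x * x for x in chunk))

        def fan_out_fan_in(data, n_workers=4):
            results = queue.Queue()          # the handoff point; internally lock-based
            chunks = [data[i::n_workers] for i in range(n_workers)]
            threads = [threading.Thread(target=worker, args=(c, results)) for c in chunks]
            for t in threads: t.start()
            for t in threads: t.join()
            # Aggregation happens after all producers are done, so no further locking is needed here.
            return sum(results.get() for _ in range(n_workers))

        print(fan_out_fan_in(tuple(range(1000))))

    The per-worker data never mutates, but the coordination points (the queue, the joins) are where some form of synchronization sneaks back in.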

    Read the article

  • XNA: How to make the Vaus Spacecraft move left and right on directional keys pressed?

    - by Will Marcouiller
    I'm currently learning XNA per the suggestion in this question's accepted answer: Where to start writing games, any tutorials or the like? I have installed everything to get ready to work with XNA Game Studio 4.0.
    General objective: writing an Arkanoid-like game. I want to make my ship move when I press either the left or right key.
    Code sample:

        protected override void Update(GameTime gameTime)
        {
            // Allows the game to exit
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();

            // TODO: Add your update logic here
        #if WINDOWS
            if (Keyboard.GetState().IsKeyDown(Keys.Escape))
                this.Exit();
            else
            {
                if (Keyboard.GetState().IsKeyDown(Keys.Left))
                    MoveLeft(gameTime);
            }
        #endif

            // Move the sprite around.
            BounceEnergyBall(gameTime);

            base.Update(gameTime);
        }

        void MoveLeft(GameTime gameTime)
        {
            // I'm not sure how to play with the Vector2 object and its position here!...
            _vausSpacecraftPos /= _vausSpacecraftSpeed.X; // This line makes the spacecraft move diagonal-top-left.
        }

    Question: what formula shall I use, or what algorithm shall I consider, to make my spaceship move left and right properly as expected? Thanks for your thoughts! Any clue will be appreciated. I am on my learning curve, though I have years of development behind me (already)!
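
    For reference, the usual pattern is to add (or subtract) a horizontal velocity scaled by the elapsed frame time to the position's X component only, and clamp the result to the playfield. A language-neutral sketch in Python (in XNA terms this would update _vausSpacecraftPos.X using gameTime.ElapsedGameTime.TotalSeconds; the names below are illustrative):

        def move_ship(pos_x, direction, speed, dt, min_x, max_x):
            """direction is -1 for left, +1 for right, 0 when no key is down."""
            pos_x += direction * speed * dt   # horizontal movement only, frame-rate independent
            return max(min_x, min(max_x, pos_x))

        # Example: holding Left for one 16 ms frame at 400 px/s
        print(move_ship(300.0, -1, 400.0, 0.016, 0.0, 760.0))  # -> 293.6

    Dividing the whole position vector by a scalar changes both X and Y at once, which is why the original MoveLeft drifts diagonally.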

    Read the article

  • Software development process for a part time University project for 1 developer?

    - by Pricey
    I will be doing a part-time university project soon. The time frame for it is around 8 months, with approximately 10-15 hours a week spent working on it and a review by a tutor each quarter. My question is: what software development process would you recommend when the course requires you to work on your own, in order to manage yourself as well as the project? I wanted to use a weekly or bi-weekly iterative approach to my work, but a lot of the processes seem tailored to teams of people. I am looking at XP (Extreme Programming) or Scrum as something that is less than the norm for university work, but again I don't know a lot about Scrum yet, and a question I have is: can you say you are doing XP without pair programming? Because my tutor seems to think that I have to stick to all the practices, otherwise I can't do it (never mind that I am working alone). We can have external user input as well, but due to the small timescales of part-time work it may be more beneficial for me to be the user as well, which is not what I'd prefer, considering how easily I can get lost in the design.

    Read the article

  • Basic Google Analytics Click Tracking and/or Overview

    - by Alan Storm
    This is a really basic Google Analytics question. Apologies in advance if it's not appropriate here, but I've had a lot of luck on Stack Overflow and this seems like the best Stack Exchange site for a question like this. I'm trying to understand how Google Analytics goals work, or whether they're the right feature to be using for my situation. Most of the documentation I find online refers to the old version of the UI, not the new one. I have a website, let's call it blog.example.com. This website drives traffic to an ecommerce store, let's call that store.example2.com. I want to get reports on which links from blog.example.com are being clicked through to store.example2.com. How do you do this in Google Analytics? Are goals the right area to be looking at? Do I set up the goals on store.example2.com or blog.example.com? Or both? Is there any canonical user guide (free or paid) that covers how this works? I'm a competent programmer, but it's been years since I dealt with conversion tracking on any serious level, and we've progressed well beyond my frozen caveman pixel-tracking knowledge. Thanks in advance

    Read the article

  • How to break the "php is a bad language" paradigm? [closed]

    - by dukeofgaming
    PHP is not a bad language (or at least not as bad as some may suggest). I had teachers who didn't even know PHP was object-oriented until I told them. I've had clients who immediately distrust us when we say we are PHP developers and question us for not using chic languages and frameworks such as Django or RoR, or "enterprise and solid" languages such as Java and ASP.NET. Facebook is built on PHP. There are plenty of solid projects that power the web, like Joomla and Drupal, that are used in the enterprise and in governments. There are frameworks and libraries with some of the best architectures I've seen across all languages (Symfony 2, Doctrine). PHP has the best documentation I've seen and a big community of professionals. PHP has advanced OO features such as reflection and interfaces, and it now supports horizontal reuse natively and cleanly through traits. There are bad programmers and script kiddies who give PHP a bad reputation but power the PHP community at the same time, and because it is so easy to get stuff done in PHP you can often do things the wrong way, granted, but why blame the language? Now, to boil this down to an actual answerable question: what would be a good, solid, short and sweet argument to avoid being frowned upon, stop prejudice in one fell swoop, and defend your honor when you say you are a PHP developer? (Free cookie with the whipped cream to those with empirical evidence of convincing someone, client or otherwise, on the spot.) P.S.: We use Symfony, and the code ends up being beautiful and maintainable.

    Read the article

  • De-index URL parameters

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have certain parameters appended.
    I have a website, example.com, with language translations. There used to be many translations, but I deleted them all so that only English (default) and French options remain. When one selects a language option, a parameter is added to the URL. For example, the home page:
      https://example.com (default)
      https://example.com/main?l=fr_FR (French)
    I added a robots.txt to stop Google from crawling any of the language translations:

        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool. It works. But under HTML improvements, the previously crawled language translation URLs remain indexed. The internet says to add a 404 to the header of the removed URLs so Google knows to de-index them. I checked to see what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs): https://example.com/reports/view/884?l=vi_VN&l=hy_AM
    This URL should not exist - I removed the language translations. The page loads when it should not! I played around and typed example.com?whatever123, and it seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads, because it's a parameter that needs to be de-indexed.
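
    A hedged illustration of the "404/410 for removed translations" idea mentioned above. The asker's CMS isn't named, so this sketch uses Flask purely to show the shape of the check; the parameter name l and the surviving fr_FR translation are taken from the question, everything else is an assumption:

        from flask import Flask, request, abort

        app = Flask(__name__)
        ALLOWED_LANGS = {"fr_FR"}   # English is the default and carries no ?l= parameter

        @app.before_request
        def reject_removed_translations():
            lang = request.args.get("l")
            if lang is not None and lang not in ALLOWED_LANGS:
                abort(410)  # Gone: tells crawlers the translated page was removed on purpose

    Note that a crawler can only see the 410 (or a noindex header) if the ?l= URLs are not also blocked in robots.txt; a blocked URL is never refetched, which is one reason old listings linger in the index.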

    Read the article

  • Using template questions in a technical interview

    - by Desolate Planet
    I've recently been in an argument with a colleague about technical questions in interviews. As a graduate, I went round lots of companies and noticed they used the same questions. An example is "Can you write a function that determines if a number is prime or not?". Four years later, I find that particular question is quite common even for a junior developer. I might not be looking at this the correct way, but shouldn't software houses be intelligent enough to think up their own interview questions? This may well be the case, but I've been to about 16 interviews as a graduate and the same questions came up in about 75% of them. This leads me to believe that many companies are lazy and simply Google 'template questions for interviewing software developers', and I kind of look down on that.
    Question: Is it better to use a set of questions off some template, or should software houses strive to be more original and come up with their own interview material? From my point of view, if I failed an interview and went off and looked up good answers to the questions I messed up on, I could fly through the next interview if the questions are the same.
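
    For context, the template question quoted above is usually expected to be answered with something like simple trial division. This sketch is illustrative only, not a claim about what any particular company wants to see:

        def is_prime(n: int) -> bool:
            """Trial division up to the square root of n."""
            if n < 2:
                return False
            if n % 2 == 0:
                return n == 2
            d = 3
            while d * d <= n:
                if n % d == 0:
                    return False
                d += 2
            return True

        print([x for x in range(20) if is_prime(x)])  # [2, 3, 5, 7, 11, 13, 17, 19]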

    Read the article

  • Google Analytics - drop in traffic

    - by user1001421
    Bit of a general question here. We are in the process of converting a number of our clients from older web sites to new ones. The problem we are getting, and sorry for being so general here, is a sharp decline in traffic as reported by Google Analytics. It's not a gradual decline; it seems to hit almost as soon as the new site goes live. I've just got a few questions to see if there is something we are doing wrong:
      a) We are using the same analytics accounts going from the old to the new site. Is this a bad idea?
      b) The actual analytics code is integrated into the pages using a server-side include. Is this a bad idea?
      c) We structure our new sites differently to our old sites. I.e. the old sites would pretty much have all the web pages in the root directory, and hyperlinks would link to the page files, e.g. <a href="somepage.aspx">Link</a>. Our new sites have a directory structure that pretty much reflects the navigation structure, and hyperlinks link to the page's directory instead of the actual page, e.g. <a href="/new-items/shoes/">New shoes</a>. Is this a bad idea?
    I'm really searching for a needle in a haystack here. Would appreciate any help or advice as to why we are getting such a sharp and sudden drop in traffic. Again, sorry this is such a general question. Thanks in advance.

    Read the article

  • How does Wikipedia's SEO work?

    - by Josh Siegl
    I'm sorry if this question is misplaced or doesn't belong here. I'm currently developing an app for Android and iOS, and of course I'm thinking about the best ways to market it. Last night I Googled somebody else's app and the third link in was a Wikipedia page on it. I never even thought of apps having Wikipedia pages, but alas there it was. And of course it was very helpful in determining exactly what the app did and what cases it was useful in (something that's absolutely crucial for potential customers to understand). So then I got to thinking that I should create a wiki page for my app, but how does Wikipedia apply SEO? I know that the question could be overly complicated or specific; I'm just looking for general answers. For instance, when somebody Googles my app, where does Wikipedia display in the results? When I create a wiki page for my app, how do I ensure that it shows in the search results (is there any way to do that?)? I'm sure I'll find all of this out later when I create the page; I guess I'm just asking this out of curiosity. So how does Wikipedia's search engine optimization work, on a page-by-page basis?

    Read the article

  • Bounding volume hierarchy - linked nodes (linear model)

    - by teodron
    The scenario: a chain of points (P_i), i = 0..N, where each P_i is linked to its direct neighbours (P_i-1 and P_i+1).
    The goal: perform efficient collision detection between any two non-adjacent links: (P_i, P_i+1) vs. (P_j, P_j+1).
    The question: it's highly recommended in all works treating this subject of collision detection to use a broad phase and to implement it via a bounding volume hierarchy. For a chain made out of P_i nodes, it can look like this: imagine the big blue sphere containing all links, the green ones half of them, the reds a quarter, and so on (the picture is not accurate, but it's there to help understand the question). What I do not understand is: how can such a hierarchy speed up computations between segment collision pairs if one has to update it for a deformable linear object such as a chain/wire/etc. each frame? More clearly, what is the actual principle of broad-phase collision detection in this particular case? How can it work when the computation of the bounding spheres is in itself a time-consuming task and has to be done (since the geometry changes) on each frame update? I think I am missing a key point - if we look at the picture where the chain is in a spiral pose, we see that most spheres are already contained within half of the others or do intersect them. It's odd if this is the way it should work.
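
    One common answer to the "why bother each frame" part is that the hierarchy's topology is fixed for a chain, so per frame you only refit the spheres bottom-up in O(n) and then let the broad phase prune non-overlapping subtrees. A rough sketch of the refit step (illustrative only; a real engine would update the tree in place rather than rebuild lists):

        import math

        def merge(s1, s2):
            """Smallest sphere enclosing two spheres, each given as (center, radius)."""
            (c1, r1), (c2, r2) = s1, s2
            d = math.dist(c1, c2)
            if d + r2 <= r1: return s1        # s2 already inside s1
            if d + r1 <= r2: return s2
            r = (d + r1 + r2) / 2.0
            t = (r - r1) / d
            c = tuple(a + (b - a) * t for a, b in zip(c1, c2))
            return (c, r)

        def leaf_sphere(p, q):
            """Bounding sphere of one link (segment p-q)."""
            c = tuple((a + b) / 2.0 for a, b in zip(p, q))
            return (c, math.dist(p, q) / 2.0)

        def refit(points):
            """Recompute bounding volumes bottom-up; the tree shape itself never changes."""
            level = [leaf_sphere(points[i], points[i + 1]) for i in range(len(points) - 1)]
            levels = [level]
            while len(level) > 1:
                level = [merge(level[i], level[i + 1]) if i + 1 < len(level) else level[i]
                         for i in range(0, len(level), 2)]
                levels.append(level)
            return levels   # levels[0] = leaves, levels[-1][0] = root

    Queries then descend only into pairs of nodes whose spheres overlap, which is where the win over testing all O(n^2) segment pairs comes from, even if the spheres overlap a lot in tightly coiled poses.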

    Read the article

  • DualLayout OpenSourceFood demo site installation instructions

    - by svdoever
    We released DualLayout, which enables advanced web design with the power of SharePoint. DualLayout and a demo site can be downloaded from the DualLayout product page. This blog post contains detailed instructions on installing the demo site. The demo site is based on the site http://opensourcefood.com and requires internet access because it still links to pages and resources of the real site. Execute the following steps to install the demo site:
      1. Copy the OpenSourceFoodDemo.zip file to your SharePoint Server 2010.
      2. Make sure that the zip file is "unblocked", otherwise its files are assumed to come from another computer (right-click on the zip file and press the "Unblock" button if available).
      3. Unzip OpenSourceFoodDemo.zip to a folder of your choice (c:\OpenSourceFoodDemo).
      4. Open the SharePoint 2010 Management Shell: Start -> Microsoft SharePoint 2010 Products -> SharePoint 2010 Management Shell.
      5. Change directory to the unzip folder (cd c:\OpenSourceFoodDemo).
      6. Start the install script: .\InstallDemoSite.ps1
      7. Answer the questions; the default values are OK in most cases. A little guidance:
         Question: Give credentials for the account that will be used for the application pool
         Answer: use, for example, the same account as used for the application pool of your SharePoint site (look it up in IIS Manager)
         Question: Give credentials for the account that will be used for the application pool
         Answer: use the same account you are currently logged in with
    The demo site is made available through a backup and restore. The SharePoint Server 2010 installation must be patched to a level equal to or higher than the update level of the SharePoint Server used to create the backup. If you get errors with respect to the restore, check http://technet.microsoft.com/en-us/sharepoint/ff800847.aspx for downloading the latest cumulative update.

    Read the article

  • Is there any research out there on geographic differences in work environments (e.g., respect) for programmers?

    - by Ethel Evans
    One thing I've learned from this website is that software developers elsewhere are not treated the same as what I've seen in the companies I've worked at, and some of the differences seem to be related to the culture or other factors of the geographical location where the programmer works. In some areas, it seems like programmers can expect many perks and a great deal of professional respect, but in others it sounds like programmers are seen as laborers who are told what to do and should then go do it without question. Even in just the USA, there seem to be major differences in "the norm" between the various regions of the country. I'm wondering how much of this is just my perception, and how much reflects real differences in how programmers are perceived in different locations. Is there any research out there discussing major differences in programmer work environments, or in attitudes about how to treat or respect programmers, by geography? I'd be interested in multiple articles tackling different ways of looking at this.
    Edit: Research, specifically, doesn't seem to be available, so I'm making the question broader. Is any good, thoughtful writing of any kind available on the topic?

    Read the article

  • What does "fully supported" mean in context of Radeon Opensource Video Driver?

    - by stevecoh1
    UPDATE: This is not a request for support for my specific issue. Details of that issue are here: How to recover from bad upgrade to 13.04 (Unity very slow). I have "solved" that issue, for the time being anyway, by loading alternative lighter-weight desktops. This question was opened specifically to question the meaning of the documentation at https://help.ubuntu.com/community/RadeonDriver. END OF UPDATE
    There it is, in black and white, at https://help.ubuntu.com/community/RadeonDriver:
      "Fully Supported: All these Radeon(HD) cards and derivatives have good 3D acceleration support. This is not an exhaustive list: ... RV610/RV630 Radeon HD 2400/2600/2700/4200/4225/4250"
    Yet in my case (the HD 2400) this proves to be manifestly untrue, at least if "fully supported" means sufficient to run Unity in Ubuntu 13.04. It runs all the applications I can launch under Unity, but Unity itself is unbearably slow. It's quite striking, really. Click on the Dash - go get a cup of coffee. Type a key in the Unity search box and wait five seconds for it to appear. Press Alt-Tab and wait five seconds for the screen to finish painting. None of these issues appear outside of Unity components. As you all know, there are complaints all over the Internet about Unity's slow performance. Shouldn't this page somehow address that, especially if "fully supported" doesn't mean sufficient to run the default modern Ubuntu release? What does "fully supported" mean?

    Read the article

  • Building a Redundant / Distributed Application

    - by MattW
    This is more of a "point me in the right direction" question. My team of three and I have built a hosted web app that queues and routes customer chat requests to available customer service agents (It does other things as well, but this is enough background to illustrate the issue). The basic dev architecture today is: a single page ajax web UI (ASP.NET MVC) with floating chat windows (think Gmail) a backend Windows service to queue and route the chat requests this service also logs the chats, calculates service levels, etc a Comet server product that routes data between the web frontend and the backend Windows service this also helps us detect which Agents are still connected (online) And our hardware architecture today is: 2 servers to host the web UI portion of the application a load balancer to route requests to the 2 different web app servers a third server to host the SQL Server DB and the backend Windows service responsible for queuing / delivering chats So as it stands today, one of the web app servers could go down and we would be ok. However, if something would happen to the SQL Server / Windows Service server we would be boned. My question - how can I make this backend Windows service logic be able to be spread across multiple machines (distributed)? The Windows service is written to accept requests from the Comet server, check for available Agents, and route the chat to those agents. How can I make this more distributed? How can I make it so that I can distribute the work of the backend Windows service can be spread across multiple machines for redundancy and uptime purposes? Will I need to re-write it with distributed computing in mind? I should also note that I am hosting all of this on Rackspace Cloud instances - so maybe it is something I should be less concerned about? Thanks in advance for any help!

    Read the article

  • Zoom Layer centered on a Sprite

    - by clops
    I am in the process of developing a small game where a spaceship travels through a layer (doh!). In some situations the spaceship comes close to an enemy ship, and the whole layer is zoomed in on the two, with the zoom level depending on the distance between the ship and the enemy. All of this works fine. The main question, however, is: how do I keep the zoom centered on the midpoint between the two spaceships and make sure that the two are not off-screen?
    Currently I control the zooming in the GameLayer object through the update method. Here is the code (there is no layer repositioning here yet):

        -(void) prepareLayerZoomBetweenSpaceship {
            CGPoint mainSpaceShipPosition = [mainSpaceShip position];
            CGPoint enemySpaceShipPosition = [enemySpaceShip position];
            float distance = powf(mainSpaceShipPosition.x - enemySpaceShipPosition.x, 2) +
                             powf(mainSpaceShipPosition.y - enemySpaceShipPosition.y, 2);
            distance = sqrtf(distance);

            /* Distance > 250 --> no zoom
               Distance < 100 --> maximum zoom */
            float myZoomLevel = 0.5f;
            if (distance < 100) {
                // maximum zoom in
                myZoomLevel = 1.0f;
            } else if (distance > 250) {
                myZoomLevel = 0.5f;
            } else {
                myZoomLevel = 1.0f - (distance - 100) * 0.0033f;
            }
            [self zoomTo:myZoomLevel];
        }

        -(void) zoomTo:(float)zoom {
            if (zoom > 1) {
                zoom = 1;
            }
            // Set the scale.
            if (self.scale != zoom) {
                self.scale = zoom;
            }
        }

    Basically my question is: how do I zoom the layer and center it exactly between the two ships? I guess this is like a pinch zoom with two fingers!
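
    For the repositioning half of the question, one common approach (assuming the layer scales around its bottom-left corner, i.e. an anchor point of (0,0); with a centred anchor the offset math changes) is to take the midpoint of the two ships in layer coordinates and move the layer so that this midpoint, after scaling, lands on the screen centre. A small Python sketch of just the arithmetic; the screen size and positions are made-up numbers:

        def layer_position_for_zoom(ship_a, ship_b, scale, screen_w, screen_h):
            # Midpoint between the two ships, in (unscaled) layer coordinates.
            mid_x = (ship_a[0] + ship_b[0]) / 2.0
            mid_y = (ship_a[1] + ship_b[1]) / 2.0
            # Move the layer so the scaled midpoint sits at the screen centre.
            return (screen_w / 2.0 - mid_x * scale,
                    screen_h / 2.0 - mid_y * scale)

        print(layer_position_for_zoom((120, 300), (480, 340), 0.75, 1024, 768))  # -> (287.0, 144.0)

    Since the zoom level is already derived from the distance between the ships, keeping both on screen mostly reduces to choosing a scale that keeps distance * scale comfortably below the screen dimensions.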

    Read the article

  • Tips about how to spread Object Oriented practices

    - by Augusto
    I work for a medium-sized company that has around 250 developers. Unfortunately, lots of them are stuck in a procedural way of thinking, and some teams constantly deliver big Transactional Script applications when in fact the application contains rich logic. They also fail to manage the design dependencies and end up with services which depend on a large number of other services (a clean example of a Big Ball of Mud). My question is: can you suggest how to spread this type of knowledge? I know that the surface of the problem is that these applications have a poor architecture and design. Another issue is that there are some developers who are against writing any kind of test.
    A few things I'm doing to change this (but I'm either failing or the change is too small):
      - Running presentations about design principles (SOLID, clean code, etc).
      - Workshops about TDD and BDD.
      - Coaching teams (this includes using Sonar, FindBugs, JDepend and other tools).
      - IDE & refactoring talks.
    A few things I'm thinking of doing in the future (but I'm concerned they might not be good):
      - Forming a team of OO evangelists who disseminate an OO way of thinking in different teams (these people would need to change teams every few months).
      - Running design review sessions to criticise the design and suggest improvements (even if the improvements are not made because of time constraints, I think this might be useful).
    Something I've found with the teams I coach is that as soon as I leave them, they revert back to the old practices. I know I don't spend a lot of time with them, usually just one month, so whatever I'm doing, it doesn't stick. I'm sorry this question is spattered with frustration, but the alternative to writing this was to hit my head on the wall until I pass out.

    Read the article

  • Why don't I have a loop error with these redirects?

    - by byronyasgur
    I know this may seem a bit of a question in reverse, but I don't actually seem to have a problem; I just want to make sure before I proceed. I have 2 domains, domain1.com and domain2.com, and a directory my_directory at domain2.com. I have domain2.com set up as an "addon domain" in the cPanel account of domain1.com, so that when I go to domain2.com I am taken to domain1.com/my_directory but the browser shows domain2.com in the address bar, so it looks and acts like, and is, a separate site. However, when people browse to domain1.com/directory I want the address bar to show domain2.com, not domain1.com/directory. So I put a redirect in the htaccess file to redirect domain1.com/directory to domain2.com, and it works perfectly, but I think it shouldn't, and I'm worried I've done something wrong. My question is this: domain2.com was already redirected to domain1.com/directory in the first place (I see the redirect in my cPanel under addon domains), so by adding the second redirect in the htaccess file I should be creating a loop! Could somebody please set my mind at rest and show me why not?

    Read the article

  • Creating a Website Without a Framework [closed]

    - by James Jeffery
    I've been using PHP frameworks for so long that I've actually forgotten the "best practices" for creating websites without one. Usually I will use Symfony, or more recently I've been using Laravel. A client wants a very simple website, but with certain parts of it dynamic. Due to the nature of the site, using WordPress or a framework is out of the question. I'm a sucker for priding myself on my code, but I feel like I'm asking such a basic question that it's killing me to ask. But what are the best practices for creating websites without a framework? I like to live by the K.I.S.S. (Keep It Simple, Stupid!) method of thinking. So my idea was to just create the .php pages that are required, do any page processing or database interaction on that page, then have the HTML below the closing PHP tag. I would keep any helpers/functions in a functions.php file. This is what I remember doing way before I was using frameworks, and to me it seems like a very old-school way of doing things. I've not created a site without a framework for literally 2+ years, so I've lost my way with the basics. Any advice would be greatly appreciated.

    Read the article

  • Smooth waypoint traversing

    - by TheBroodian
    There are a dozen ways I could word this question, but to keep my thoughts in line, I'm phrasing it in line with my problem at hand. I'm creating a floating platform that I would like to simply travel from one designated point to another, then return to the first, and just pass between the two in a straight line. However, just to make it a little more interesting, I want to add a few rules to the platform:
      - I'm coding it to travel in multiples of whole tile values of world data. So if the platform is not stationary, it will travel at least one whole tile width or tile height.
      - Within one tile length, I would like it to accelerate from a stop to a given max speed.
      - Upon reaching one tile length's distance, I would like it to slow to a stop at the given tile coordinate and then repeat the process in reverse.
    The first two parts aren't too difficult; essentially I'm having trouble with the third part. I would like the platform to stop exactly at a tile coordinate, but since I'm working with acceleration, it would seem easy to simply begin applying acceleration in the opposite direction to the value storing the platform's current speed once it reaches one tile's length of distance (assuming the platform is traveling more than one tile length, but to keep things simple, let's just assume it is). But then the question is: what would the correct value of that deceleration be to produce this effect? How would I find that value?
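
    For what it's worth, the constant deceleration that brings a body from speed v to rest over a distance d follows from the standard kinematics relation v_f^2 = v^2 + 2*a*d with v_f = 0, giving a = -v^2 / (2*d). A tiny sketch of that arithmetic; the tile size and max speed below are made-up numbers:

        def stopping_deceleration(max_speed, distance):
            """Constant deceleration that brings max_speed to zero in exactly `distance` units."""
            return -(max_speed ** 2) / (2.0 * distance)

        TILE = 32.0          # hypothetical tile size in pixels
        MAX_SPEED = 96.0     # hypothetical max speed in pixels per second

        a = stopping_deceleration(MAX_SPEED, TILE)
        print(a)                       # -144.0 pixels/s^2
        print(MAX_SPEED / -a)          # time to stop: ~0.667 s

    Integrating this per frame (position += speed * dt, speed += a * dt) will overshoot slightly because of discrete time steps, so it is common to snap to the tile coordinate and zero the speed once the remaining distance is smaller than one frame's travel.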

    Read the article

  • Verification vs validation again: does testing belong to verification? If so, which?

    - by user970696
    I have asked this before and created a lot of controversy, so I tried to collect some data and am asking a similar question again. E.g. V&V where all testing is only validation: http://www.buzzle.com/editorials/4-5-2005-68117.asp
    According to ISO 12207, testing is done in validation:
      - Prepare test requirements, cases and specifications
      - Conduct the tests
    For verification, it mentions:
      - The code implements proper event sequence, consistent interfaces, correct data and control flow, completeness, appropriate allocation timing and sizing budgets, and error definition, isolation, and recovery.
      - The software components and units of each software item have been completely and correctly integrated into the software item.
    Not sure how to verify this without testing, but testing is not listed there as a technique. From IEEE:
      Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610]
      Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610]
    At the end of the development process? That would mean UAT. So the question is: which testing (unit, integration, system, UAT) is considered verification, and which validation? I do not understand why some say dynamic verification is testing, while others say testing is only validation.
    An example: I am testing an application. The system requirements say there are two fields with a max length of 64 characters, and a Save button. The use case says: the user will fill in first and last name and save. When checking the fields and the presence of the Save button, I would say it's verification. When I follow the use case, it's validation. So it's both together, done on the system as a whole.

    Read the article

  • Should a project start with the client or the server?

    - by MadBurn
    Pretty simple question with a complex answer. Should a project start with the client or the server, and why? Where should a single programmer start a client/server project? What are the best practices, and what are the reasons behind them? If you can't think of any, what reasons do you use to justify why you would choose to start one before the other? Personally, I'm asking this question because I'm finishing up the specs for a project I will be doing for myself on the side for fun. But now that I'm finishing this phase, I'm wondering "OK, now where do I begin?" Since I've never done a project like this by myself, I'm not sure where I should start. In this project, my server will be doing all the heavy lifting and the client will just be sending updates, getting information from the server, and displaying it. But I don't want that to sway the answer, as I'm looking for more of an in-depth and less specific answer that would apply to any project I begin in the future.

    Read the article

  • Page Spamming via locations

    - by codemonkey
    Hi guys, I am new here so please be gentle :) I have created a web page for a small mail-order business. The page asks the reader if they are in need of a supplier for products in their "area" and if they have ever been let down by a supplier in that "area", etc. It also lists all the local villages and hamlets around the [area] that they can also supply to. This page is dynamically created: the [area] changes, and so do the small towns that are local to it. The page also contains information on the products, so the word count vs. town names is not stupid. An example of one of the URLs would be www.website.com/1014/Halesowen/ It basically covers the whole of the UK, so around 800 main towns with 28,000 local villages. The URL changes, so do the title and h1 tags, and each page is geocoded for that town. My question really is: is this a good or bad idea? Is it a black-hat technique? I have been told that if I have to ask the question then it probably is, but the site does supply to all these areas, just as any mail-order company does, and we would like to be listed higher in each town for the products. I have seen this done on a few sites, but only with a few targeted towns and not the whole of the UK, so I would be really interested in your thoughts on this. I would post the URL to the site, but as I am new here I am a bit unsure of the rules regarding posting links. The whole site needs a lot of other on-site SEO work doing, and I will be doing that over the next few weeks. I look forward to your views on this.
    P.S. If I am allowed to post the URL without getting into trouble, so you can see the site, someone let me know? Thanks in advance

    Read the article

  • Managing constant buffers without FX interface

    - by xcrypt
    I am aware that there is a sample on working without FX in the sample browser, and I already checked that one. However, some questions arise. In the sample:

        D3DXMATRIXA16 mWorldViewProj;
        D3DXMATRIXA16 mWorld;
        D3DXMATRIXA16 mView;
        D3DXMATRIXA16 mProj;

        mWorld = g_World;
        mView = g_View;
        mProj = g_Projection;
        mWorldViewProj = mWorld * mView * mProj;

        VS_CONSTANT_BUFFER* pConstData;
        g_pConstantBuffer10->Map( D3D10_MAP_WRITE_DISCARD, NULL, ( void** )&pConstData );
        pConstData->mWorldViewProj = mWorldViewProj;
        pConstData->fTime = fBoundedTime;
        g_pConstantBuffer10->Unmap();

    They are copying their D3DXMATRIXes to D3DXMATRIXA16. Checked on MSDN: these matrices are 16-byte aligned and optimised for the Intel Pentium 4. So, as my first question:
    1) Is it necessary to copy matrices to D3DXMATRIXA16 before sending them to the constant buffer? And if not, why don't we just use D3DXMATRIXA16 all the time?
    I have another question about managing multiple constant buffers within one shader. Suppose that, within your shader, you have multiple constant buffers that need to be updated at different times:

        cbuffer cbNeverChanges
        {
            matrix View;
        };
        cbuffer cbChangeOnResize
        {
            matrix Projection;
        };
        cbuffer cbChangesEveryFrame
        {
            matrix World;
            float4 vMeshColor;
        };

    Then how would I set these buffers at different times?

        g_pd3dDevice->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer10 );

    gives me the possibility to set multiple buffers, but that is within one call.
    2) Is that okay even if my constant buffers are updated at different times? And I suppose I have to make sure the constant buffers sit at the same positions in the array as the order in which they appear in the shader?

    Read the article
