Search Results

Search found 9431 results on 378 pages for 'bubble sort'.


  • Creating smooth lighting transitions using tiles in HTML5/JavaScript game

    - by user12098
    I am trying to implement a lighting effect in an HTML5/JavaScript game using tile replacement. What I have now is kind of working, but the transitions do not look smooth/natural enough as the light source moves around.

    Here's where I am now: I have a background map with a light/shadow spectrum PNG tilesheet applied to it, going from the darkest tile to completely transparent. By default the darkest tile is drawn across the entire level on launch, covering all other layers etc. I am using my predetermined tile size (40 x 40px) to calculate the position of each tile and store its x and y coordinates in an array. I am then spawning a transparent 40 x 40px "grid block" entity at each position in the array. The engine I'm using (ImpactJS) then allows me to calculate the distance from my light source entity to every instance of this grid block entity, and I can then replace the tile underneath each of those grid block tiles with a tile of the appropriate transparency.

    Currently I'm doing the calculation like this in each instance of the grid block entity that is spawned on the map:

        var dist = this.distanceTo( ig.game.player );
        var percentage = 100 * dist / 960;
        if (percentage < 2) {
            // Spawns tile 64 of the shadow spectrum tilesheet at the specified position
            ig.game.backgroundMaps[2].setTile( this.pos.x, this.pos.y, 64 );
        } else if (percentage < 4) {
            ig.game.backgroundMaps[2].setTile( this.pos.x, this.pos.y, 63 );
        } else if (percentage < 6) {
            ig.game.backgroundMaps[2].setTile( this.pos.x, this.pos.y, 62 );
        }
        // etc...

    The problem is that, as I said, this type of calculation does not make the light source look very natural. Tile switching looks too sharp, whereas ideally the tiles would fade in and out smoothly using the spectrum tilesheet. (I copied the tilesheet from another game that manages to do this, so I know it's not a problem with the tile shades; I'm just not sure how the other game is doing it.)

    I'm thinking that my method of using percentages to switch out tiles could be replaced with a better/more dynamic proximity formula of some sort that would allow for smoother transitions. Might anyone have ideas for what I can do to improve the visuals here, or a better way of calculating proximity with the information I'm collecting about each tile?

    (PS: I'm reposting this from Stack Overflow at someone's suggestion, sorry about the duplicate!)
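    One possibility (a rough sketch rather than an ImpactJS-specific fix) is to derive the tile index directly from the distance instead of using hand-written percentage bands, and shape the falloff with an easing curve. The constants below are assumptions for illustration; only setTile() is taken from the code above:

        // Map distance to a tile index continuously: tile 64 = fully transparent
        // (right at the light), tile 1 = darkest (at or beyond MAX_DIST).
        var MAX_DIST = 960;   // distance at which the shadow is fully dark (assumed)

        function shadowTileFor(dist) {
            var t = Math.min(dist / MAX_DIST, 1);  // 0 near the light, 1 far away
            t = t * t;                             // easing: stays brighter near the source
            return Math.max(1, Math.round(64 - t * 63));
        }

        // inside each grid block entity's update:
        // ig.game.backgroundMaps[2].setTile(this.pos.x, this.pos.y, shadowTileFor(dist));

    With 64 shades and a continuous index, neighbouring tiles differ by at most a step or two, which already reads as a much smoother gradient. For true fading you could additionally cross-fade two overlapping shadow layers, but that goes beyond plain tile replacement.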

    Read the article

  • What Counts for a DBA: Passion

    - by drsql
    One of my first questions, when interviewing for a DBA/Programmer position, is always: “Why do you want this job?” The answers I receive range from cheesy hyperbole (“I want to enhance your services with my vast knowledge”) to deadpan realism (“I have N kids who all have a hole in the front of their face where food goes"). Both answers are fine in their own way, at least displaying some self-confidence, humour and honesty, but once in a while, I'll hear the answer that is music to me ears... “I LOVE DATABASES!” Whenever I hear it, my nerves tingle in hopeful anticipation; have I found someone for whom working with database isn't just a job, but a passion? Inevitably, I'm often disappointed. What initially seemed like passion turns out to be rather shallow enthusiasm; the person is enthusiastic about working with databases in the same way he or she might be about eating a bag of Cajun spiced kettle chips; enjoyable, but not something to think about too deeply or take too seriously. Enthusiasm comes, and enthusiasm goes. I've seen countless technical forum users burst onto the scene in a blaze of frantic question-answering, only to fade away within days, never to be heard from again. Passion, however, is more of a longstanding commitment. The biographies of the great technologists and authors of the recent past are full of the sort of passion and engrossment that lead a person to write a novel non-stop for a fortnight with no sleep and only dog food to eat (Philip K. Dick), or refuse to leave the works of the first tunnel under the Thames, even though it was flooded (Brunel). In a similar (though more modest) way, my passion for working with databases has led me to acts that might cause someone for whom it was "just a job" to roll their eyes in disbelief. Most evenings you're more likely to find me reading a database book than watching TV. I've spent hundreds of hours of my spare time writing blogs and articles (some of which are only read by tens of people); I've spent hundreds of dollars travelling to conferences, paying my own flight and hotel expenses, so that I can share a little of what I know, and mix with some like-minded people. And I know I'm far from alone in this, in the SQL Server community. Passion isn't everything, of course, and it isn't always accompanied by any great skill, but in almost every case, that skill can be cultivated over time. If you are doing what you are passionate about, work turns into more than just a way to feed your kids; it becomes your hobby, entertainment, and preoccupation. And it is this passion that gives a DBA the obsessive stubbornness, the refusal to be beaten by even the most difficult problem, which is often so crucial. A final word of warning though: passion without limits can turn weird. Never let it get in the way of your wife, kids, bills, or personal hygiene.

    Read the article

  • Why can't a blendShader sample anything but the current coordinate of the background image?

    - by Triynko
    In Flash, you can set a DisplayObject's blendShader property to a pixel shader (flash.shaders.Shader class). The mechanism is nice, because Flash automatically provides your Shader with two input images, including the background surface and the foreground display object's bitmap. The problem is that at runtime, the shader doesn't allow you to sample the background anywhere but under the current output coordinate. If you try to sample other coordinates, it just returns the color of the current coordinate instead, ignoring the coordinates you specified. This seems to occur only at runtime, because it works properly in the Pixel Bender toolkit. This limitation makes it impossible to simulate, for example, the Aero Glass effect in Windows Vista/7, because you cannot sample the background properly for blurring. I must mention that it is possible to create the effect in Flash through manual composition techniques, but it's hard to determine when it actually needs updated, because Flash does not provide information about when a particular area of the screen or a particular display object needs re-rendered. For example, you may have a fixed glass surface with objects moving underneath it that don't dispatch events when they move. The only alternative is to re-render the glass bar every frame, which is inefficient, which is why I am trying to do it through a blendShader so Flash determines when it needs rendered automatically. Is there a technical reason for this limitation, or is it an oversight of some sort? Does anyone know of a workaround, or a way I could provide my manual composition implementation with information about when it needs re-rendered? The limitation is mentioned with no explanation in the last note in this page: http://help.adobe.com/en_US/as3/dev/WSB19E965E-CCD2-4174-8077-8E5D0141A4A8.html It says: "Note: When a Pixel Bender shader program is run as a blend in Flash Player or AIR, the sampling and outCoord() functions behave differently than in other contexts.In a blend, a sampling function will always return the current pixel being evaluated by the shader. You cannot, for example, use add an offset to outCoord() in order to sample a neighboring pixel. Likewise, if you use the outCoord() function outside a sampling function, its coordinates always evaluate to 0. You cannot, for example, use the position of a pixel to influence how the blended images are combined."

    Read the article

  • Is there an API for determining congressional districts?

    - by ardavis
    I'm looking to determine the congressional district based on an address my user is providing. This will avoid having the user to look it up themselves. Does an API of this sort exist? Note Through my attempts to find one, I've only come across these: http://www.govtrack.us/developers/api (not sure how to submit an an address or zip code however) The following resources are available in the API ...Bills and resolutions in the U.S. Congress since 1973 (the 93rd Congress). ...A (bill, person) pair indicating cosponsorship, with join and withdrawn dates. ...Members of Congress and U.S. Presidents since the founding of the nation. ...Terms held in office by Members of Congress and U.S. Presidents. Each term corresponds with an election, meaning each term in the House covers two years (one 'Congress'), as President four years, and in the Senate six years (three 'Congresses'). ...Roll call votes in the U.S. Congress since 1789. How people voted is accessed through the Vote_voter API. ...How people voted on roll call votes in the U.S. Congress since 1789. See the Vote API. Filter on the vote field to get the results of a particular vote... http://www.opencongress.org/api (seems to be a way to find congress information, but not districts) This API provides programmers with structured access to all the data on OpenCongress, everything from official bill info to news and blog coverage to user-generated votes on bills and much more... This API defaults to returning XML. All queries can also return JSON... https://groups.google.com/forum/?fromgroups=#!topic/opendems-discuss/CeKyi_aANaE (similar question, no resolution) I've been looking over Open Dems, and seeing what's exposed at this point and what isn't. I work with Democrats Abroad, and am interested in using stuff from the lab for their sites. I quickly looked over the Precinct API, which does both more and less than what I'd need. An ideal resource would be any way of translating addresses into CD at the very least (getting state district data would be good as well), since that would make it easier for DA's membership to make a difference in races like last month's NY26 race... Update I'm looking at the source for the govtrack.us website and the 'doGeoCode' function may be useful. view-source:http://www.govtrack.us/congress/members If no one has any suggestions, I will try to go off of what they are doing.

    Read the article

  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus) and using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support). I've read a lot on the subject from Udi Dahan who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me: As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users’ intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they’ve gotten married. Using an Excel-like UI for data changes doesn’t capture intent, as we saw above. -- Udi Dahan, Clarified CQRS From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. An SOA (and really services in general) are supposed to deal with coarse-grained messages so as to minimize network chatter - among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely. Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes as it often is with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out. Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point - sort of like 3NF in databases - and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?
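    To make the "capturing intent" point concrete, here is a language-neutral sketch (plain object literals rather than NServiceBus types; the command names are invented for illustration) of the same edit expressed as a task-based command versus an Excel-style update:

        // Intent-revealing: a small, self-describing unit of work
        var makePreferred = {
            type: "MakeCustomerPreferred",
            customerId: 1234
        };

        // Intent-free: a generic update carrying whatever fields happened to change
        var updateCustomer = {
            type: "UpdateCustomer",
            customerId: 1234,
            changes: { preferred: true, married: true, address: "..." }
        };

    The quoted argument is that the first shape keeps concurrency conflicts rare and meaningful; the question above is whether the resulting flood of tiny messages is acceptable in an SOA built around coarse-grained messages.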

    Read the article

  • At $20/month Windows Azure hosts my website with 99.97% uptime

    - by Gopinath
    A couple of years ago, reliable and decently performing Windows hosting was not affordable for many enthusiastic developers who wanted to try a startup idea or build a hobby site. I tried to start an ASP.NET website a few years ago to provide services like Mobile Tracing and Vehicle Tracing, but due to the high cost of Windows hosting I developed those services in PHP (not an easy task for a .NET developer) and hosted them on Linux servers. With the recent evolution of Windows Azure, hosting ASP.NET websites on highly reliable servers has become affordable: today anyone can host a responsive, highly available ASP.NET website for just $20/month.

    My website coziie.com is running on Windows Azure and serves close to a quarter million visitors a month with 99.97% uptime, and most page load times are under 3 seconds. All I spend to run this website is around $20; translated to Indian rupees, that's roughly Rs.1000. The web server of coziie.com is powered by a single Extra Small web role instance and the backend is powered by a SQL Azure instance. Azure has been quite impressive in providing 99.97% uptime. Response times during peak hours are around 3 seconds, and under normal load around 1.5 seconds. (A Royal Pingdom uptime report covering the last year was included with the original post.)

    For just $20/month, Windows Azure takes care of the following apart from hosting:

    - Patches the Windows OS to the latest version
    - Upgrades ASP.NET to the latest version – coziie.com is running on ASP.NET MVC 3 and soon I'll upgrade it to ASP.NET MVC 4
    - Hosts data on the latest version of the SQL Server database engine
    - SQL Azure maintains 3 copies of the database and automatically recovers from server failures and disasters; I never worry about database backups/restores
    - Provides a staging environment for deploying applications for testing and then moving them to production – I upgrade twice a month on average

    With Windows Azure I no longer focus on server maintenance or data backups. They are taken care of by the Microsoft team and I just focus on building my website. I wish there were a low-cost Linux version of Windows Azure so that I could stop worrying about server maintenance for this blog!

    If you are looking for Windows hosting, look no further than Windows Azure. If you find $20/month a bit expensive to start with, you may explore Azure Websites (a sort of shared hosting environment), which is free to start with; as your traffic grows you can move to paid hosting.

    Read the article

  • Is code like this a "train wreck" (in violation of Law of Demeter)?

    - by Michael Kjörling
    Browsing through some code I've written, I came across the following construct which got me thinking. At a first glance, it seems clean enough. Yes, in the actual code the getLocation() method has a slightly more specific name which better describes exactly which location it gets. service.setLocation(this.configuration.getLocation().toString()); In this case, service is an instance variable of a known type, declared within the method. this.configuration comes from being passed in to the class constructor, and is an instance of a class implementing a specific interface (which mandates a public getLocation() method). Hence, the return type of the expression this.configuration.getLocation() is known; specifically in this case, it is a java.net.URL, whereas service.setLocation() wants a String. Since the two types String and URL are not directly compatible, some sort of conversion is required to fit the square peg in the round hole. However, according to the Law of Demeter as cited in Clean Code, a method f in class C should only call methods on C, objects created by or passed as arguments to f, and objects held in instance variables of C. Anything beyond that (the final toString() in my particular case above, unless you consider a temporary object created as a result of the method invocation itself, in which case the whole Law seems to be moot) is disallowed. Is there a valid reasoning why a call like the above, given the constraints listed, should be discouraged or even disallowed? Or am I just being overly nitpicky? If I were to implement a method URLToString() which simply calls toString() on a URL object (such as that returned by getLocation()) passed to it as a parameter, and returns the result, I could wrap the getLocation() call in it to achieve exactly the same result; effectively, I would just move the conversion one step outward. Would that somehow make it acceptable? (It seems to me, intuitively, that it should not make any difference either way, since all that does is move things around a little. However, going by the letter of the Law of Demeter as cited, it would be acceptable, since I would then be operating directly on a parameter to a function.) Would it make any difference if this was about something slightly more exotic than calling toString() on a standard type? When answering, do keep in mind that altering the behavior or API of the type that the service variable is of is not practical. Also, for the sake of argument, let's say that altering the return type of getLocation() is also impractical.

    Read the article

  • Another Custom Property Locator: a Library of Books

    - by Cindy McMullen
    Introduction The previous post gave an introduction to custom property locators and showed how create one using JDeveloper.  This post continues on the custom locator theme, with a slightly more complex locator: a library of books.  It demonstrates using the DAO pattern to delegate data access from the Locator, which is likely how many actual backing stores will integrate with the Locator.  You can imagine, rather than a library of books, the data store might be a user database of sorts.  The same sort of pattern would apply. This post uses the BookLocator example originally shown in the WebCenter documentation, but has: updated the source code to reflect the final Property APIs includes the steps for generating the namespace and property definition files via JDeveloper detailed usage of the PropertyService APIs Getting Started If you're new to JDeveloper, you might want to check out this tutorial.  There is also the "Jump-Start to using Personalization" blog post that you might find useful.  Otherwise, if you're already familiar with both, you can skip those tutorials and jump right in to using JDeveloper. Download the BookLocator.zip file (which has been updated from the original post) and unzip it to a new directory.  Start JDeveloper, navigate to the BookLocator.jws file, and open it.   It should look something like this: The Properties Namespace file contains the property definitions and property set definitions you define.  It is explained more in detail in the Namespace documentation.  Although this example doesn't show it, the property set definitions have the ability to reference multiple locators per property.   This can be done by right-clicking on the 'Locator Info' box.  Configure the contents of the Locator Map  by editing locators and mapping them to available property names in the property set definition. Compiling, deploying, and running your locator The rest of the steps in this tutorial basically follow those in the previous blog on custom locators, and won't be repeated here.   A scenario to invoke your locator is included with the sample app: see BookProperties.scenarios_diagram above.  Summary This post demonstrates a simple library of books accessed by the BookPropertyLocator via the DAO layer.  This is a useful pattern for more realistic property retrievals, such as a backing user store.  It also points out the possibility of retrieving properties from multiple locators, which would be quite handy to retrieve user attributes from multiple sources.

    Read the article

  • Styling Windows Phone Silverlight Applications

    - by Tim Murphy
    If you have not developed with styles in Silverlight/XAML then it can be challenging, and resources can be sparse depending on how deep you get. One thing you need to understand is at what level you can apply styles and how much they cascade. What I am finding is that this doesn't go to the level we are used to in HTML and CSS.

    While styles can be defined at the page level, if you want to share styles throughout your application they should be defined in the App.xaml file. This is of course analogous to placing a style in your HTML file versus an external CSS file. This is the type of style I will concentrate on in this post.

    The first thing to look at is how styles are associated with elements. TargetType defines the object type that your style will apply to. In the example below the style is targeting the TextBlock object type.

        <Style x:Key="TextBlockSmallGray" TargetType="TextBlock">

    Next we use a Setter, which allows you to apply values to specific attributes of the target object type. Setters can be simple or complex. The first example here simply applies a color to the Background property of the target.

        <Setter Property="Background" Value="White"/>

    The second setter example here is for the same property, but we are applying the definition of a LinearGradientBrush.

        <Setter Property="Background">
            <Setter.Value>
                <LinearGradientBrush>
                    <GradientStop Offset="0" Color="Black"/>
                    <GradientStop Offset="1" Color="White"/>
                </LinearGradientBrush>
            </Setter.Value>
        </Setter>

    The last thing I want to cover here is that you can leverage the system styles and then override or extend them. The BasedOn attribute of the Style tag allows this sort of inheritance. In the example below I am going to start with PhoneTextTitle1Style and then override properties as needed.

        <Style x:Key="TextBlockTitle" BasedOn="{StaticResource PhoneTextTitle1Style}" TargetType="TextBlock">

    So now that we have our styles defined, applying them is fairly straightforward. Add the style name as a static resource to the Style property of the element in your page and off you go.

        <Grid x:Name="LayoutRoot" Style="{StaticResource PageGridStyle}">

    So this is one step in creating consistency in your application's look. In future posts I will dig a little deeper.

    Read the article

  • Actor and Sprite, who should own these properties?

    - by Gerardo Marset
    I'm writing sort of a 2D game engine to make the process of creating games easier. It has two classes, Actor and Sprite. Actor is used for interactive elements (the player, enemies, bullets, a menu, an invisible instance that controls score, etc.) and Sprite is used for animated (or not) images with transparency (or not). An actor may have an assigned sprite that represents it on the screen, which may change during the game. E.g. in a top-down action game you may have an actor with a sprite of a little guy that changes when attacking, walking, and facing different directions, etc.

    Currently the actor has x and y properties (its coordinates on the screen), while the sprite has an index property (the number of the frame currently being shown by the sprite). Since the sprite doesn't know which actor it belongs to (or if it belongs to an actor at all), the actor must pass its x and y coordinates when drawing the sprite. Also, since an actor may reset its sprite each frame (and usually does), the sprite's index property must be passed from the old to the new sprite like so (pseudocode):

        function change_sprite(new_sprite)
            old_index = my.sprite.index
            my.sprite = new_sprite()
            my.sprite.index = old_index % my.sprite.frames
        end

    I always thought this was kind of cumbersome, but it never was a big problem. Now I decided to add support for more properties: namely a property to draw the sprite rotated, a property to draw it flipped, a property to draw it stretched, etc. These should probably belong to the sprite and not the actor, but if they do, the actor would have to pass them from the old to the new sprite each time it changes... On the other hand, if they belonged to the actor, the actor would have to pass each property to the sprite when drawing it (since the sprite doesn't know which actor it belongs to, and it shouldn't, since sprites aren't just meant to be used by actors, really). Another option I thought of would be having an extra class that owns all these properties (plus index, x and y) and links an actor with a sprite, but that doesn't come without drawbacks.

    So, what should I do with all these properties? Thanks!
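    For what it's worth, here is a minimal sketch of the "extra class" option mentioned above: the actor owns a small render-state object and hands it to whichever sprite it is currently using, so nothing has to be copied when sprites are swapped. All names here are made up for illustration:

        function RenderState() {
            this.x = 0; this.y = 0;
            this.index = 0;        // current animation frame
            this.rotation = 0;
            this.flipX = false;
            this.scale = 1;
        }

        function Actor() {
            this.state = new RenderState();
            this.sprite = null;
        }

        Actor.prototype.changeSprite = function (newSprite) {
            this.sprite = newSprite;
            // keep the frame valid for the new sheet; the rest of the state is untouched
            this.state.index %= newSprite.frames;
        };

        Actor.prototype.draw = function () {
            if (this.sprite) this.sprite.draw(this.state);
        };

    The drawback the question alludes to still applies: the sprite's draw() now has to accept a state object rather than individual arguments, so non-actor users of Sprite would need to build one too.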

    Read the article

  • How do I maintain a really poorly written code base?

    - by onlineapplab.com
    Recently I got hired to work on an existing web application. Because of an NDA I'm not at liberty to disclose any details, but this application is running online in a sort of beta-testing stage before its official launch. We have a few hundred users right now, but this number is supposed to increase significantly after the official launch. The application is written in PHP (though that is irrelevant to my question) and is running on a dual-Xeon standalone server with severe performance problems.

    I have seen a lot of bad PHP code, but this really sets new standards, especially knowing how much time and money was invested in developing it:

    - It is as badly coded as possible: PHP, HTML and SQL are mixed together, and code is repeated wherever necessary (especially SQL queries).
    - There are no functions used, never mind any OOP.
    - There are four versions of the app (desktop, iPhone, Android + other mobile). Each version has pretty much the same functionality but was created by copying the whole code base, so now there are differences between the versions and it is really hard to maintain.
    - The database is really badly designed, which is causing severe performance problems.
    - To fix some errors in the PHP code there are a lot of database triggers used which update data on SELECT and on INSERT, so any testing is a nightmare.

    Basically, every sin of bad programming you can imagine is there. For example, it is not only possible to use SQL injection in literally every place, but you can log into the app if you use a login which doesn't exist and an empty password.

    The team which created this app is not working on it any more, and there is an outsourced team which suggested that there are some problems but was never willing to deal with the elephant in the room, partially because they've got a very comfortable contract and partially due to lack of skills (just my opinion). My job was supposed to be fixing some performance problems and extending the existing functionality, but the first thing I was asked to do was a review of the existing code base. I made my review and it was quite a shock for the management, but my conclusions were, after some time, finally confirmed by other programmers. Management made it clear that it is not possible to start rewriting this app from scratch (which in my opinion should be done). We have to maintain its operable state and at the same time fix performance errors and extend the functionality.

    My question is, as I don't want to just patch the existing code, how do I transform this into a properly written app while keeping the existing code working at the same time? My plan is:

    1. Unify the four existing versions into a common code base (fixing only the most obvious errors).
    2. Redesign the db and use triggers to populate it with data (so data will be maintained in two formats at the same time).
    3. Write all new functionality as a separate project.
    4. Step by step, transfer existing functionality into the new project.
    5. After some time everything will be in the new project.

    Some explanation about #2: right now it is practically impossible to make any updates to the existing db; any change requires reviewing the whole code and making changes in many places.

    Is such a plan feasible at all? Another solution is to walk away and leave the headache to someone else.

    Read the article

  • 2D camera perspective projection from 3D coordinates -- HOW?

    - by Jack
    I am developing a camera for a 2D game with a top-down view that has depth. It's almost a 3D camera. Basically, every object has a Z even though it is in 2D, and similarly to parallax layers their position, scale and rotation speed vary based on their Z. I guess this would be a perspective projection. But I am having trouble converting the objects' 3D coordinates into the 2D space of the screen so that everything has correct perspective and scale. I never learned matrices, though I did dig into the topic a bit today. I tried without using matrices, thanks to this article, but every attempt gave awkward results.

    I'm using ActionScript 3 and Flash 11+ (Starling), where the screen coordinates use a left-handed coordinate system (an illustration was attached to the original post).

    I can explain further what I did if you want to help me sort out what's wrong, or you can directly tell me how you would do it properly. In case you prefer the former, read on. These are the formulas I used (both are from the Wikipedia article linked above, section "Perspective projection", which is also where all the variables are defined):

        upload.wikimedia.org/math/1/c/8/1c89722619b756d05adb4ea38ee6f62b.png
        upload.wikimedia.org/math/d/4/0/d4069770c68cb8f1aa4b5cfc57e81bc3.png

    The long formula is greatly simplified because I believe a normal top-down 2D camera has no X/Y/Z rotation values (correct?). Then it becomes d = a - c. Still, I can't get it to work. Maybe you could explain what numbers I should put in a(xyz), c(xyz), theta(xyz), and particularly, e(xyz)? I don't quite get how e is different from c in my case. c.z is also an issue to me. If the Z of the camera's target object is 0, should the camera's Z be something like -600? (= focal length of 600)

    Whatever I do, it's wrong. I only got it to work when I used arbitrary calculations that "looked" right, like most cameras with parallax layers seem to do, but that's fake! ;) If I want objects to travel between Z layers I might as well do it right. :) Thanks a lot for your help!
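    For the no-rotation case described above, the projection collapses to a scale factor driven by the depth difference between the object and the camera. A minimal sketch in plain JavaScript, with focalLength and the camera/screen fields as assumed names:

        // World (x, y, z) to screen coordinates for a camera with no rotation.
        function project(obj, camera, screen) {
            var dx = obj.x - camera.x;
            var dy = obj.y - camera.y;
            var dz = obj.z - camera.z;            // must be > 0: object in front of the camera

            var scale = camera.focalLength / dz;  // e.g. focalLength = 600
            return {
                x: screen.width  / 2 + dx * scale,
                y: screen.height / 2 + dy * scale,
                scale: scale                      // also usable to scale the sprite
            };
        }

    With this formulation, the answer to the c.z question would be yes: if the target layer sits at z = 0 and you want it drawn at 1:1 scale with a focal length of 600, place the camera at z = -600 so that dz equals the focal length.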

    Read the article

  • Techniques to re-factor garbage and maintain sanity?

    - by Incognito
    So I'm sitting down to a nice bowl of C# spaghetti, and need to add something or remove something... but I have challenges everywhere: functions passing arguments that don't make sense, someone who doesn't understand data structures abusing strings, redundant variables, comments that are red herrings, internationalization done per output, SQL that doesn't use any kind of DBAL, database connections left open everywhere...

    Are there any tools or techniques I can use to at least keep track of the "functional integrity" of the code (meaning my "improvements" don't break it), or a resource online with common "bad patterns" that explains a good way to transition code? I'm basically looking for a guidebook on how to spin straw into gold.

    Here are some samples from the same 500 line function:

        protected void DoSave(bool cIsPostBack)
        {
            //ALWAYS a cPostBack
            cIsPostBack = true;
            SetPostBack("1");
            string inCreate = "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~";
            parseValues = new string [] { "", "", "", /* ...some fifty more empty strings... */ "" };
            if (!cIsPostBack)
            {
                //.......
                //....
                //....
                if (!cIsPostBack) { } else { }
                //....
                //....
                strHPhone = StringFormat(s1.Trim());
                s1 = parseValues[18].Replace(encStr, " ");
                strWPhone = StringFormat(s1.Trim());
                s1 = parseValues[11].Replace(encStr, " ");
                strWExt = StringFormat(s1.Trim());
                s1 = parseValues[21].Replace(encStr, " ");
                strMPhone = StringFormat(s1.Trim());
                s1 = parseValues[19].Replace(encStr, " ");
                //(hundreds of lines of this)
                //....
                //....
                SQL = "...... lots of SQL .... ";
                SqlCommand curCommand;
                curCommand = new SqlCommand();
                curCommand.Connection = conn1;
                curCommand.CommandText = SQL;
                try
                {
                    curCommand.ExecuteNonQuery();
                }
                catch {}
                //....
            }

    I've never had to refactor something like this before, and I want to know if there's something like a guidebook or knowledge base on how to do this sort of thing, finding common bad patterns and offering the best solutions to repair them. I don't want to just nuke it from orbit,

    Read the article

  • Creating a shared library that might be used with desktop applications and web projects

    - by dreza
    I have been involved in a number of MVC.NET and c# desktop projects in our company over the last year or so while also managing to kept my nose poked into other projects (in a read-only learning capacity of course). From this I've noticed that across the various projects and teams there is a-lot of functionality that has been well designed against good interfaces and abstractions. Because we tend to like our own work at times, I noticed a couple of projects had the exact same class, method copied into it as it had obviously worked on one and so was easily moved to a new project (probably by the same developer who originally wrote it) I mentioned this fact in one of our programmer meetings we have occasionally and suggested we pull some of this functionality into a core company library that we can build up over time and use across multiple projects. Everyone agreed and I started looking into this possibility. However, I've come across a stumbling block pretty early on. Our team primarily focuses on MVC at the moment and we have projects mainly in 2.0 but are starting to branch to 3.0. We also have a number of desktop applications that might benefit from some shared classes and basic helper methods. Initially when creating this DLL I included some shared classes that could be used across any project type (Web, Client etc) but then I started looking at adding some shared modules that would be useful in our MVC applications only. However this meant I had to include a reference to some Microsoft Web DLL's in order to leverage some of the classes I was creating (at this stage MVC 2.0). Now my issue is that we have a shared DLL that has references to web specific libraries that could also possibly be used in a client application. Not only that, our DLL referenced initially MVC 2.0 and we will eventually move onto MVC 3.0 for all projects. But alot of the classes in this library I expect to still be relevant to MVC 3 etc Our code within this DLL is separated into it's own namespaces such as: CompanyDLL.Primitives CompanyDLL.Web.Mvc CompanyDLL.Helpers etc etc So, my questions are: Is it OK to do a shared library like this, or if we have web specific features in it should we create a separate web DLL only targeted at a specific framework or MVC version? If it's OK, what kind of issues might we face when using the library that references MVC 2 in a MVC 3 project for example. I would be thinking that we might run into some sort of compatibility issue, or even issues where the developers using the library doesn't realize they need MVC 2.0 libraries. They might only want to use some of the generic classes etc The concept seemed like a good idea at the time, but I'm starting to think maybe it's not really a practical solution. But the number of times I've seen copied classes and methods across projects because they are proven tested code is a bit unnerving to be perfectly honest!

    Read the article

  • Lenovo Thinkpad X1 Carbon support

    - by Robottinosino
    I am considering selling my Mac to get money towards a Lenovo Thinkpad X1 because what I really want to do is to be running an Ubuntu system all the time. Is this machine completely supported in Ubuntu, with no tiny little feature missing just because I am "going Linux"? Optional user story section, skip to the question below if you don't have time: I have a friend who bought a "works on Ubuntu" system a year ago and has hated the fact ever since: battery lasts less than if he boots in Windows (which he despises) and he ascribes that to "no good OS/harware integration and support for advanced chipset power management features", odd behaviour on suspend/resume/hibernate (says: "when it works 90% of the time and the other 10% it makes you lose your work is as good as broken - 90% is the same as 0% he says), some occasional graphics card glitches he can perfectly well live with and has almost grown affectionate to, and finally, and that is what would make him undo his choice if he could, bad "input device drivers". He says: trackpoint and trackpad just "feel different", "so much better" on Windows and that was impossible to know from the website brochure. That story makes me very doubtful... but I want to abandon this "walled garden" of prison that is my Mac and go Ubuntu all the way, no doubt about that! My dilemma at this time is just: "I don't want to live with those eternal frustrations for sure"! Here's a directly answerable phrasing of my question: Is the Lenovo Thinkpad X1 supported on Ubuntu? Yes/no, which version? Which hardware features are not supported? Provide a list Optionally: sort the list in descending order of frustration from your experience Optionally: mention if there are acceptable workarounds to the "out-of-the-box" condition described in the earlier points and whether this ameliorates frustration at least to "tolerable" levels Comment: the Ubuntu hardware certification page is so not-for-end-users it's unreal. Whoa. What would make it end-user friendly is: Link to "buy here and you'll be just fine, this is the right configuration for you, it'll work as long as you press BUY on that page and don't browse further" Remove mentions of may and might not work. Just tell it straight: press buy here and you will get a working system with the exception of A, B, C (so that I can decide whether the philosophical "freedom pleasure" I get from escaping an Apple world is enough to off-balance the loss, for instance, of Bluetooth capabilities (something that I of course use on my Mac) but "could" lose to use free (as in freedom) software The certification page fails to dispel doubts in me as an end-user. I don't feel "eased into Ubuntu", I feel "partially informed".

    Read the article

  • Help identify the pattern for reacting on updates

    - by Mike
    There's an entity that gets updated from external sources. Update events are at random intervals. And the entity has to be processed once updated. Multiple updates may be multiplexed. In other words there's a need for the most current state of entity to be processed. There's a point of no-return during processing where the current state (and the state is consistent i.e. no partial update is made) of entity is saved somewhere else and processing goes on independently of any arriving updates. Every consequent set of updates has to trigger processing i.e. system should not forget about updates. And for each entity there should be no more than one running processing (before the point of no-return) i.e. the entity state should not be processed more than once. So what I'm looking for is a pattern to cancel current processing before the point of no return or abandon processing results if an update arrives. The main challenge is to minimize race conditions and maintain integrity. The entity sits mainly in database with some files on disk. And the system is in .NET with web-services and message queues. What comes to my mind is a database queue-like table. An arriving update inserts row in that table and the processing is launched. The processing gathers necessary data before the point of no-return and once it reaches this barrier it looks into the queue table and checks whether there're more recent updates for the entity. If there are new updates the processing simply shuts down and its data is discarded. Otherwise the processing data is persisted and it goes beyond the point of no-return. Though it looks like a solution to me it is not quite elegant and I believe this scenario may be supported by some sort of middleware. If I would use message queues for this then there's a need to access the queue API in the point of no-return to check for the existence of new messages. And this approach also lacks elegance. Is there a name for this pattern and an existing solution?
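    As a rough sketch of the queue-table idea described in the question (pseudocode written as JavaScript; every helper name is hypothetical, and the version check plus persist would need to happen atomically, e.g. in one transaction, to avoid the race the question is worried about):

        function onUpdateArrived(entityId) {
            var version = enqueueUpdate(entityId);   // insert a queue row, get its sequence number
            startProcessing(entityId, version);
        }

        function startProcessing(entityId, version) {
            var snapshot = gatherData(entityId);     // work done before the point of no return

            // Point of no return: only the run holding the latest update may commit.
            if (latestQueuedVersion(entityId) !== version) {
                return;                              // a newer update arrived; discard this run
            }
            persist(snapshot);                       // from here on, processing is independent
        }

    In messaging terms this amounts to collapsing a burst of updates down to the most recent state (sometimes described as conflation), which is exactly the behaviour the question hopes some middleware already supports out of the box.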

    Read the article

  • How to refactor my design, if it seems to require multiple inheritance?

    - by Omega
    Recently I made a question about Java classes implementing methods from two sources (kinda like multiple inheritance). However, it was pointed out that this sort of need may be a sign of a design flaw. Hence, it is probably better to address my current design rather than trying to simulate multiple inheritance. Before tackling the actual problem, some background info about a particular mechanic in this framework: It is a simple game development framework. Several components allocate some memory (like pixel data), and it is necessary to get rid of it as soon as you don't need it. Sprites are an example of this. Anyway, I decided to implement something ala Manual-Reference-Counting from Objective-C. Certain classes, like Sprites, contain an internal counter, which is increased when you call retain(), and decreased on release(). Thus the Resource abstract class was created. Any subclass of this will obtain the retain() and release() implementations for free. When its count hits 0 (nobody is using this class), it will call the destroy() method. The subclass needs only to implement destroy(). This is because I don't want to rely on the Garbage Collector to get rid of unused pixel data. Game objects are all subclasses of the Node class - which is the main construction block, as it provides info such as position, size, rotation, etc. See, two classes are used often in my game. Sprites and Labels. Ah... but wait. Sprites contain pixel data, remember? And as such, they need to extend Resource. But this, of course, can't be done. Sprites ARE nodes, hence they must subclass Node. But heck, they are resources too. Why not making Resource an interface? Because I'd have to re-implement retain() and release(). I am avoiding this in virtue of not writing the same code over and over (remember that there are multiple classes that need this memory-management system). Why not composition? Because I'd still have to implement methods in Sprite (and similar classes) that essentially call the methods of Resource. I'd still be writing the same code over and over! What is your advice in this situation, then?

    Read the article

  • Investment scheme for a PC game the project

    - by Alex Kamen
    Good day everyone, I am working on a PC game project that has 3 phases planned, micro, macro and mmo versions [if confused, see a brief description at the bottom]. I have found a potential investor for the micro version of the game, but naturally, he requested a detailed plan of how the game will pay back. And the problem is that micro version itself is not supposed to be monetized much, other than some ads and limited in-game currency utilization. The idea is that with this combat demo already at hand, it should be possible to get a really large enough investment (millions of dollars) and use it to pay back the initial small one (thousands of dollars) and take the project into macro phase, which will really make profit. This way, everybody is going to win, provided that I can deliver the end-product. Yet while I am confident of that both the conception of the macro and the real game-play of the micro versions are going to be appealing, I don’t know how to obtain any guarantee of that I will be able to get funded once I have the prototype ready. And without that, I won’t receive the funds for the prototype in the first place! To summarize, my question is: how to figure out my future possibilities of getting funded once I have combat demo out, basically “whom to write to and what”. Ideally, I would like some sort of a preliminary agreement with a game publisher, something that would basically state “If the developer provides the product in time and in quality corresponding to the specifications given, the publisher guarantees to allocate funds for distribution and further development, thereby acquiring the right to X part of all future profits”. Does this sound sane? It’s just that I don’t want to sell all of my rights out straight away by taking a big outside investment while the project is in such early stage. I would appreciate if you would share your thoughts on this kind of scheme, and be sure to ask questions as I am sure I must have forgotten to mention a ton of important things, like the fact that initial funds are going to be spent on outsourcing (living in Siberia is really just great). [here’s a brief outline of what each version will feature] [micro] 1) turn based tactical combat rules 2) character development 3) arena/tournament system [macro] 4) ai-ruled dynamic interactive worlds 5) global map adventuring 6) strategic rpg + god simulator gameplay [mmo] 7) Persistent worlds system 8) Social structures system (“guilds/clans”) 9) god-simulation on the mmo scale P.S. Obviously, these features are incremental, so that mmo version has all 9.

    Read the article

  • Welcome to ubiquitous file sharing (December 08, 2009)

    - by user12612012
    The core of any file server is its file system and ZFS provides the foundation on which we have built our ubiquitous file sharing and single access control model.  ZFS has a rich, Windows and NFSv4 compatible, ACL implementation (ZFS only uses ACLs), it understands both UNIX IDs and Windows SIDs and it is integrated with the identity mapping service; it knows when a UNIX/NIS user and a Windows user are equivalent, and similarly for groups.  We have a single access control architecture, regardless of whether you are accessing the system via NFS or SMB/CIFS.The NFS and SMB protocol services are also integrated with the identity mapping service and shares are not restricted to UNIX permissions or Windows permissions.  All access control is performed by ZFS, the system can always share file systems simultaneously over both protocols and our model is native access to any share from either protocol.Modal architectures have unnecessary restrictions, confusing rules, administrative overhead and weird deployments to try to make them work; they exist as a compromise not because they offer a benefit.  Having some shares that only support UNIX permissions, others that only support ACLs and some that support both in a quirky way really doesn't seem like the sort of thing you'd want in a multi-protocol file server.  Perhaps because the server has been built on a file system that was designed for UNIX permissions, possibly with ACL support bolted on as an add-on afterthought, or because the protocol services are not truly integrated with the operating system, it may not be capable of supporting a single integrated model.With a single, integrated sharing and access control model: If you connect from Windows or another SMB/CIFS client: The system creates a credential containing both your Windows identity and your UNIX/NIS identity.  The credential includes UNIX/NIS IDs and SIDs, and UNIX/NIS groups and Windows groups. If your Windows identity is mapped to an ephemeral ID, files created by you will be owned by your Windows identity (ZFS understands both UNIX IDs and Windows SIDs). If your Windows identity is mapped to a real UNIX/NIS UID, files created by you will be owned by your UNIX/NIS identity. If you access a file that you previously created from UNIX, the system will map your UNIX identity to your Windows identity and recognize that you are the owner.  Identity mapping also supports access checking if you are being assessed for access via the ACL. If you connect via NFS (typically from a UNIX client): The system creates a credential containing your UNIX/NIS identity (including groups). Files you create will be owned by your UNIX/NIS identity. If you access a file that you previously created from Windows and the file is owned by your UID, no mapping is required. Otherwise the system will map your Windows identity to your UNIX/NIS identity and recognize that you are the owner.  Again, mapping is fully supported during ACL processing. The NFS, SMB/CIFS and ZFS services all work cooperatively to ensure that your UNIX identity and your Windows identity are equivalent when you access the system.  This, along with the single ACL-based access control implementation, results in a system that provides that elusive ubiquitous file sharing experience.

    Read the article

  • Some Problems Can't Be Outsourced

    - by mikef
    More and more companies are becoming attracted to the idea of Infrastructure as a Service (or IaaS). It would seem that you can outsource the provisioning and management of your services, encompassing everything from Email, through to your servers, workstations and software, all the way down to your LAN and internet services. This type of outsourcing can be a very attractive option for companies who have tight budgets who are short of technical skills or don't have the means to provide long-term IT support. Essentially, they can outsource your services at low short-term costs that are knowable and controllable, are quickly and easily scalable, and generate a minimum of hassle for your internal staff. If you want to get a sophisticated IT infrastructure set up in a hurry without the usual high buy-in costs, or the task of finding and hiring the right specialists. It would seem the way to go, particularly when their salesmen are hypnotizing you with oleaginous phrases such as "we are closely aligned with our client organization's core business requirements, providing agile services". It sounds too good to be true, and so it is. Whereas the costs will have initially been calculated on the annual renewal fees and service fees for ongoing support, there are other charges too which aren't so obvious. It can end up costing far more than the conventional solution once you take into account the extra costs, the fees for customization and upgrades. The Total Cost of Ownership (TCO) only becomes apparent when it is too late to extract the company easily from the arrangement. After a few years, these annual fees can add up to more than the initial cost of implementing a traditional in-house system. Worse than that is that you can then lose your power to determine your priorities: When you become reliant on this company, with its own schedule of priorities, to implement every change, however simple, you have effectively lost control of your technical infrastructure. This will make senior management very nervous. There is definitely a requirement for this sort of service. If you urgently need an exceptionally high class of service or more expertise than you currently possess, then outsourcing is probably for you. You and your IT colleagues will always have something to do, be it user assistance, smoothing out integrations with an external provider, or working on something entirely new. Heck, if you outsource to IBM, the SysAdmins can go along for the ride and polish their expertise. What you need to figure out is how much your time is worth, because time is ultimately all that outsourcing will buy you and your organization. Now you just need to convince your nervous CEO. Cheers, Michael

    Read the article

  • "static" as a semantic clue about statelessness?

    - by leoger
    this might be a little philosophical but I hope someone can help me find a good way to think about this. I've recently undertaken a refactoring of a medium sized project in Java to go back and add unit tests. When I realized what a pain it was to mock singletons and statics, I finally "got" what I've been reading about them all this time. (I'm one of those people that needs to learn from experience. Oh well.) So, now that I'm using Spring to create the objects and wire them around, I'm getting rid of static keywords left and right. (If I could potentially want to mock it, it's not really static in the same sense that Math.abs() is, right?) The thing is, I had gotten into the habit of using static to denote that a method didn't rely on any object state. For example: //Before import com.thirdparty.ThirdPartyLibrary.Thingy; public class ThirdPartyLibraryWrapper { public static Thingy newThingy(InputType input) { new Thingy.Builder().withInput(input).alwaysFrobnicate().build(); } } //called as... ThirdPartyLibraryWrapper.newThingy(input); //After public class ThirdPartyFactory { public Thingy newThingy(InputType input) { new Thingy.Builder().withInput(input).alwaysFrobnicate().build(); } } //called as... thirdPartyFactoryInstance.newThingy(input); So, here's where it gets touchy-feely. I liked the old way because the capital letter told me that, just like Math.sin(x), ThirdPartyLibraryWrapper.newThingy(x) did the same thing the same way every time. There's no object state to change how the object does what I'm asking it to do. Here are some possible answers I'm considering. Nobody else feels this way so there's something wrong with me. Maybe I just haven't really internalized the OO way of doing things! Maybe I'm writing in Java but thinking in FORTRAN or somesuch. (Which would be impressive since I've never written FORTRAN.) Maybe I'm using staticness as a sort of proxy for immutability for the purposes of reasoning about code. That being said, what clues should I have in my code for someone coming along to maintain it to know what's stateful and what's not? Perhaps this should just come for free if I choose good object metaphors? e.g. thingyWrapper doesn't sound like it has state indepdent of the wrapped Thingy which may itself be mutable. Similarly, a thingyFactory sounds like it should be immutable but could have different strategies that are chosen among at creation. I hope I've been clear and thanks in advance for your advice!

    Read the article

  • Why do old programming languages continue to be revised?

    - by SunAvatar
    This question is not, "Why do people still use old programming languages?" I understand that quite well. In fact the two programming languages I know best are C and Scheme, both of which date back to the 70s. Recently I was reading about the changes in C99 and C11 versus C89 (which seems to still be the most-used version of C in practice and the version I learned from K&R). Looking around, it seems like every programming language in heavy use gets a new specification at least once per decade or so. Even Fortran is still getting new revisions, despite the fact that most people using it are still using FORTRAN 77. Contrast this with the approach of, say, the typesetting system TeX. In 1989, with the release of TeX 3.0, Donald Knuth declared that TeX was feature-complete and future releases would contain only bug fixes. Even beyond this, he has stated that upon his death, "all remaining bugs will become features" and absolutely no further updates will be made. Others are free to fork TeX and have done so, but the resulting systems are renamed to indicate that they are different from the official TeX. This is not because Knuth thinks TeX is perfect, but because he understands the value of a stable, predictable system that will do the same thing in fifty years that it does now. Why do most programming language designers not follow the same principle? Of course, when a language is relatively new, it makes sense that it will go through a period of rapid change before settling down. And no one can really object to minor changes that don't do much more than codify existing pseudo-standards or correct unintended readings. But when a language still seems to need improvement after ten or twenty years, why not just fork it or start over, rather than try to change what is already in use? If some people really want to do object-oriented programming in Fortran, why not create "Objective Fortran" for that purpose, and leave Fortran itself alone? I suppose one could say that, regardless of future revisions, C89 is already a standard and nothing stops people from continuing to use it. This is sort of true, but connotations do have consequences. GCC will, in pedantic mode, warn about syntax that is either deprecated or has a subtly different meaning in C99, which means C89 programmers can't just totally ignore the new standard. So there must be some benefit in C99 that is sufficient to impose this overhead on everyone who uses the language. This is a real question, not an invitation to argue. Obviously I do have an opinion on this, but at the moment I'm just trying to understand why this isn't just how things are done already. I suppose the question is: What are the (real or perceived) advantages of updating a language standard, as opposed to creating a new language based on the old?

    Read the article

  • Nautilus ignores / misinterprets view size

    - by BlueZero4
    I noticed that a lot of my folders had suddenly switched to higher view sizes than I had specified. I assumed that somehow Nautilus had suddenly decided to create per-folder entries for said folders with incorrect view sizes. So I found this question: How to reset all per-folder view settings in nautilus? I found the folder specified in the answer (~/.local/share/gvfs-metadata) and found that it was actually important to delete the files INSIDE the folder, because deleting the folder itself didn't work for some reason. After doing that, I discovered that the odd setting was the default view setting, not a per-folder setting for a handful of files. Nautilus actually handles the per-folder settings like it should, but it ignores the global folder settings. I want Nautilus to, by default, display all non-specified folders as compact view, 50%. My folders are using the compact setting like I want, but they are not down to 50%. At a guess, they are at 100%. Altering the view size of the icon view can set the compact view to 33%, but I'm not sure by what mechanism this functions. I haven't extensively tested the other view sizes because I don't plan on using them much at all. Next I looked up questions like How do I reset nautilus to the default configuration? I'm expecting the problem to be a corrupted config file or something of the sort, so I hunted down directories like ~/.nautilus, ~/.gconf/apps/nautilus, and ~/.gnome2/nautilus. (I don't have a ~/.nautilus directory, so I'm assuming that's only for older versions.) I attempted to remove the contents of each, but I can't seem to force Nautilus back to default configuration settings. Oddly, viewing Nautilus's preferences in GConf made the settings look like they were what I wanted them to be. Basically, I'd like to force Nautilus back to default settings, though if something else will fix it, I'll take that too. I'm not interested in doing a full uninstall and reinstall of Nautilus if I don't have to. ==EDIT1== Turns out that Nautilus just writes the settings in GConf for the heck of it; it only really uses the settings that it stores in DConf. I did gsettings reset-recursively org.gnome.nautilus, which actually did reset Nautilus to defaults, but it still doesn't like my view size settings.
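
    Since the edit above suggests the live settings come from DConf, one avenue is to set the global defaults explicitly instead of only resetting them. The sketch below is speculative: the exact schema, key, and value names vary between Nautilus versions, so confirm them first with the list-recursively call.

        # Confirm the real schema/key names that this Nautilus version exposes
        gsettings list-recursively org.gnome.nautilus

        # Inspect the current global defaults (key names below are assumptions, verify against the list)
        gsettings get org.gnome.nautilus.preferences default-folder-viewer
        gsettings get org.gnome.nautilus.compact-view default-zoom-level

        # Set the defaults explicitly; which zoom enum value maps to "50%" is also an assumption
        gsettings set org.gnome.nautilus.preferences default-folder-viewer 'compact-view'
        gsettings set org.gnome.nautilus.compact-view default-zoom-level 'smaller'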

    Read the article

  • Antenna Aligner Part 5: Devil is in the detail

    - by Chris George
    "The first 90% of a project takes 90% of the time and the last 10% takes another 200%" (excerpt from onista). Now that I have a working app (more or less), it's time to make it pretty and slick. I can't stress enough how useful it is to get other people using your software, and my simple app is no exception. I handed my iPhone to a couple of my colleagues at Red Gate and asked them to use it and give me feedback. Immediately it became apparent that the delay between the list page being shown and the list being drawn was too long, and everyone who tried the app clicked on the "Recalculate" button before it had finished. Similarly, selecting a transmitter heralded a delay before the compass page appeared, with similar consequences. All users expected there to be some sort of feedback/spinny etc. to show them it is actually doing something. In a similar vein, although for opposite reasons, clicking the Recalculate button did indeed recalculate the available transmitters and redraw them, but it did this too fast! One or two users commented that they didn't know if it had done anything. All of these issues resulted in similar solutions: implement a waiting spinny. Thankfully, jQuery Mobile has one built in, primarily used for ajax operations. Not wishing to bore you with the many, many iterations I went through trying to get this to work, I'll just give you my solution! (Seriously, I was working on this most evenings for at least a week!) The final solution for the recalculate problem came in the form of the code below.

        $(document).on("click", ".show-page-loading-msg", function () {
            var $this = $(this),
                theme = $this.jqmData("theme") || $.mobile.loadingMessageTheme;
            $.mobile.showPageLoadingMsg(theme, "recalculating", false);
            setTimeout(function () { $.mobile.hidePageLoadingMsg(); }, 2000);
            getLocationData();
        })
        .on("click", ".hide-page-loading-msg", function () {
            $.mobile.hidePageLoadingMsg();
        });

    The spinny is activated by setting the class of a button (for example) to the 'show-page-loading-msg' class:

        <a data-role="button" class="show-page-loading-msg">Recalculate</a>

    This means the code above is fired, calling showPageLoadingMsg on the $.mobile object. Then, after a 2-second timeout, it calls the hidePageLoadingMsg() function. Supposedly, it should show "recalculating" underneath the spinny, but I've not got that to work. I'm wondering if there is a problem with the jQuery Mobile implementation. Anyway, it doesn't really matter, it's the principle I'm after, and I now have spinnys!

    Read the article

  • No description for any page on the website is available in Google despite robots.txt allowing crawling

    - by Abhijit
    I seem to have the weirdest issue with search engine optimization. I have asked the IT folks at my university, asked people on the Joomla forums, and been trying to sort this issue out using Google Webmaster Tools for more than two months, to little avail. I want to know if I have some blatantly wrong configuration somewhere that is causing search engines to be unable to index this site. I noticed a similar issue with another website I searched for online (ECEGSA - The University of British Columbia at gsa.ece.ubc.ca), making me believe this might be a concern that other people are looking for an answer to. Here are the details. The website in question is http://gsa.ece.umd.edu/. It runs using Joomla 2.5.x (latest). The site has been up since around mid-December 2013, and I noticed right from the get-go that it was not being indexed correctly on Google. Specifically, I see the following message when I search for the website on Google: "A description for this result is not available because of this site's robots.txt – learn more." The thing is, from December until around March I used the default Joomla robots.txt file, which is:

        User-agent: *
        Disallow: /administrator/
        Disallow: /cache/
        Disallow: /cli/
        Disallow: /components/
        Disallow: /images/
        Disallow: /includes/
        Disallow: /installation/
        Disallow: /language/
        Disallow: /libraries/
        Disallow: /logs/
        Disallow: /media/
        Disallow: /modules/
        Disallow: /plugins/
        Disallow: /templates/
        Disallow: /tmp/

    Nothing there should stop Google from indexing my website. And even more confusingly, when I go to Google Webmaster Tools, under the "Blocked URLs" tab, many of the links on the site that I try are all shown as "Allowed". I then tried adding a sitemap, putting it in the robots.txt file. That did not help: same exact search result, same behavior in the "Blocked URLs" tab in Webmaster Tools. Additionally, the "Sitemaps" tab shows an error for several links saying "URL is robotted out". I tried those exact links in "Blocked URLs" and they are allowed! I then tried deleting the robots.txt file. No use, same exact problem. Here is an example screenshot from Google's Webmaster Tools: At this point I cannot give a rational explanation for why this is happening, and neither can anyone in the IT department here. No one on the Joomla forums can seem to understand what is going on. Based on what I explained, does it seem that I have somehow set something incorrectly, in robots.txt, in .htaccess, or somewhere else?
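
    For reference, a minimal sketch of the "sitemap declared in robots.txt" attempt described above. The Sitemap directive is standard robots.txt syntax, but the file name and URL shown here are placeholders rather than the site's actual sitemap:

        User-agent: *
        Disallow: /administrator/
        Disallow: /cache/
        # ... the remaining default Joomla Disallow lines, unchanged ...

        # Hypothetical sitemap location; crawlers read this directive from anywhere in the file
        Sitemap: http://gsa.ece.umd.edu/sitemap.xml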

    Read the article

< Previous Page | 264 265 266 267 268 269 270 271 272 273 274 275  | Next Page >