Search Results

Search found 39598 results on 1584 pages for 'good design'.

Page 80/1584 | < Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >

  • SEO & Multilingual: would this be a good practice?

    - by Younès
    I am currently building a bilingual website and, of course, I'd like to get good SEO results. Here's my idea: internal links always use the "www" subdomain so that people can share links regardless of their language; each visitor's language is determined from the HTTP_ACCEPT_LANGUAGE PHP variable. So visitors would only ever see http://www.site.com/mydocument/123 in their address bar and never links like http://fr.site.com/mydocument/123 or http://en.site.com/mydocument/123. The user can always switch the page's language via links in the footer. The language-switching link would be http://fr.site.com/mydocument/123 , and clicking it would change the language session and redirect the user back to http://www.site.com/mydocument/123. In the case of a crawling bot: I read that if the HTTP_ACCEPT_LANGUAGE variable is missing then the visitor is a crawler, so in that case we set the default language to English. As I mentioned earlier, each page has a link to the other language: on the page http://www.site.com/document/1323, the link http://fr.site.com/document/1323 can be seen by the bot and be crawled. What do you think of this practice? Would I get good SEO results for each language?
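    A minimal sketch of the language-detection step described above (shown in C# for consistency with the other examples on this page; a PHP version would read $_SERVER['HTTP_ACCEPT_LANGUAGE'] in the same spirit). It falls back to English when the header is absent, which is the crawler case; the names are illustrative, not the poster's actual code.
        using System;
        using System.Linq;

        static class LanguageNegotiation
        {
            // Pick "fr" or "en" from an Accept-Language header; default to English
            // when the header is missing (the "crawler" case described above).
            public static string PickLanguage(string acceptLanguageHeader)
            {
                if (string.IsNullOrWhiteSpace(acceptLanguageHeader))
                    return "en";

                // Header looks like "fr-FR,fr;q=0.9,en;q=0.8": take the first tag's primary subtag.
                var first = acceptLanguageHeader
                    .Split(',')
                    .Select(part => part.Split(';')[0].Trim())
                    .FirstOrDefault();

                if (first != null && first.StartsWith("fr", StringComparison.OrdinalIgnoreCase))
                    return "fr";

                return "en";
            }
        }
    With something like this, the www pages always serve the negotiated language, while the fr/en subdomain links remain plain crawlable URLs for the bots.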

    Read the article

  • One good reason for a rewrite

    - by Supermighty
    I have a personal web project that I cut my teeth on while learning how to program. I wrote it in PHP and learned as I went, and I eventually refactored it to use MVC and removed all mixing of PHP and HTML. Right now it has no users, save myself, and it makes no money. I have a strong desire to rewrite the entire app, which really isn't that large an app. I have a lot of reasons why I should not rewrite it. I know that I should move forward: it's a working app now, and rewriting it will only set me back. But I can't shake the feeling that I would be better off using a different programming language in the long run. That I'd enjoy it more. That I'd feel more comfortable with it. I feel like my one good reason to rewrite my app is that I have a gut feeling that I should. PHP seems like a hack thrown together. I want to use a language that feels more elegant to me. Any feedback you have would be welcome.

    Read the article

  • Good sysadmin practice?

    - by Randomthrowaway
    Throwaway account here. Recently our sysadmin sent us the following email (I removed the names): Hi, I had a situation yesterday (not mentioning names) when I had to perform a three-way MD5 checksum verification over the phone, more than once. If we can stick to the same standards then this will save any confusion if you are ever asked to repeat something over the phone or in the office for clarification. This is of particular importance when trying to say something like … m4f7s29gsd32156ffsdf … over the phone; that's really difficult to get right on a bad line. The rule is very simple: 1) Speak in blocks of 4 characters and continue until the end. The recipient can read back or ask for verification of one of the blocks. 2) Use the same language! http://en.wikipedia.org/wiki/Phoenetic_alphabet#NATO Myself, xxx and a few others I know all speak the NATO phonetic alphabet (aka police speak) and this makes it so much easier and saves so much time. If you want to learn it quickly then all you really need is A to F and 0 to 9. 0 to 9 is really easy, and A to F is only 6 characters to learn. Could you tell me whether forcing the developers to learn the NATO alphabet is good practice, or whether there are ways (and which ways) to avoid being in such a situation?
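    For what it's worth, the "blocks of four, NATO alphabet" rule is easy to automate so nobody has to do the conversion in their head. A small sketch, covering only the characters that appear in a hex (MD5) checksum; the class and method names are invented for illustration.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class PhoneticChecksum
        {
            // NATO/phonetic words for the characters of a hex checksum.
            private static readonly Dictionary<char, string> Nato = new Dictionary<char, string>
            {
                { 'a', "Alfa" }, { 'b', "Bravo" }, { 'c', "Charlie" }, { 'd', "Delta" },
                { 'e', "Echo" }, { 'f', "Foxtrot" },
                { '0', "Zero" }, { '1', "One" }, { '2', "Two" }, { '3', "Three" }, { '4', "Four" },
                { '5', "Five" }, { '6', "Six" }, { '7', "Seven" }, { '8', "Eight" }, { '9', "Niner" }
            };

            // Splits a checksum into blocks of four characters and spells each block phonetically.
            public static IEnumerable<string> Spell(string checksum, int blockSize = 4)
            {
                for (int i = 0; i < checksum.Length; i += blockSize)
                {
                    var block = checksum.Substring(i, Math.Min(blockSize, checksum.Length - i));
                    yield return string.Join(" ", block.ToLowerInvariant().Select(c => Nato[c]));
                }
            }
        }

        // Example: foreach (var block in PhoneticChecksum.Spell("d41d8cd98f00b204")) Console.WriteLine(block);
    Pasting the spelled-out blocks into an email or chat also sidesteps the bad phone line entirely.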

    Read the article

  • Any good tutorials or resources for learning how to design a scalable and "component" based game 'framework'?

    - by CodeJustin.com
    In short, I'm creating a 2D MMORPG and, unlike my last "mmo" I started developing, I want to make sure this one will scale well and stay workable when I want to add new in-game features or modify existing ones. With my last attempt, an avatar chat, within the first few thousand lines of code and while just getting basic features into the game, I saw my code quality dropping, and my ability to add new features or modify old ones dropped too as I added more in. It turned into one big mess that somehow ran. This time I really need to buckle down and find a design that gives me a game framework where it is easy to add and remove features (things like playing mini-games within my world, a mail system, a buddy list, or a new public area with interactive items). I'm thinking that a component-based approach MIGHT be what I'm looking for, but I'm really not sure. I have read documents on MMORPG design and 2D game engine architecture, but nothing really explained a way of designing a game framework that lets me "plug in" new features into the main game and use the resources of the main game without changing much within my main game code. I hope someone understands what I mean; any help is appreciated.
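    Not an authoritative answer, but here is the bare-bones shape of the entity/component idea the poster is circling around: the core loop only knows about entities and an IComponent contract, and a new feature (mail, mini-game, buddy list) becomes another component rather than another branch in the main game code. All names here are made up for illustration.
        using System.Collections.Generic;

        // The only contract the core loop knows about.
        interface IComponent
        {
            void Update(Entity owner, float deltaSeconds);
        }

        class Entity
        {
            private readonly List<IComponent> components = new List<IComponent>();

            public void Add(IComponent component) => components.Add(component);

            public void Update(float deltaSeconds)
            {
                foreach (var c in components)
                    c.Update(this, deltaSeconds);
            }
        }

        // A "feature" is just another component; the core loop never changes.
        class MailboxComponent : IComponent
        {
            public void Update(Entity owner, float deltaSeconds)
            {
                // poll for new mail, raise events, etc.
            }
        }

        class Game
        {
            private readonly List<Entity> entities = new List<Entity>();

            public void Tick(float deltaSeconds)
            {
                foreach (var e in entities)
                    e.Update(deltaSeconds);
            }
        }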

    Read the article

  • What are some good ways to store performance statistics in a database for querying later?

    - by Nathan
    Goal: store arbitrary performance statistics about the things you care about (how many customers are currently logged on, how many widgets are being processed, etc.) in a database so that you can understand how your servers are doing over time.
    Assumptions: a database is already available, and you already know how to gather the information you want and are capable of putting it in the database however you like.
    Some ideal attributes of a solution:
    - Causes no noticeable performance hit on the server being monitored
    - Has a very high precision of measurement
    - Does not store useless or redundant information
    - Is easy to query (lends itself to gathering/displaying useful information)
    - Lends itself to being graphed easily
    - Is accurate
    - Is elegant
    Primary questions:
    1) What is a good design/method/scheme for triggering the storing of statistics?
    2) What is a good database design for how to actually store the data?
    Example answers... that are sort of vague and lame:
    1) I could, once per [fixed time interval], store a row of data with all the performance measurements I care about in each column of one big flat table, indexed by timestamp and/or server.
    2) I could have a daemon monitoring the performance stuff I care about and add a row whenever something changes (instead of at fixed time intervals) to a flat table as in #1.
    3) I could trigger as in #2, but store information about each aspect of performance that I'm measuring in separate tables, opening up the possibility of adding tons of rows for often-changing items and few rows for seldom-changing items. Etc.
    In the end, I will implement something, even if it's some super-braindead approach I make up myself, but I'm betting there are some really smart people out there willing to share their experiences and bright ideas!
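    As a concrete (and deliberately naive) sketch of answer #1, sample on a fixed interval and write one row per timestamp. The collection side can be as small as the following; the counting and storage methods are placeholders for whatever metrics and table layout you settle on.
        using System;
        using System.Threading;

        class MetricsSampler
        {
            private readonly Timer timer;

            public MetricsSampler(TimeSpan interval)
            {
                timer = new Timer(_ => Sample(), null, TimeSpan.Zero, interval);
            }

            private void Sample()
            {
                var takenAt = DateTime.UtcNow;
                var loggedOnCustomers = CountLoggedOnCustomers();    // placeholder
                var widgetsInFlight = CountWidgetsBeingProcessed();  // placeholder

                // One flat row per interval, keyed by timestamp (and server name if you have several).
                StoreRow(takenAt, loggedOnCustomers, widgetsInFlight);
            }

            // The three methods below are stand-ins for your own collection and storage code.
            private int CountLoggedOnCustomers() => 0;
            private int CountWidgetsBeingProcessed() => 0;
            private void StoreRow(DateTime takenAt, int customers, int widgets) { /* INSERT INTO stats ... */ }
        }
    Keeping the sampler this dumb also makes the "no noticeable performance hit" attribute easy to reason about: one cheap read and one INSERT per interval.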

    Read the article

  • When is a good time to start thinking about scaling?

    - by Slokun
    I've been designing a site over the past couple days, and been doing some research into different aspects of scaling a site horizontally. If things go as planned, in a few months (years?) I know I'd need to worry about scaling the site up and out, since the resources it would end up consuming would be huge. So, this got me to thinking, when is the best time to start thinking about, and designing for, scalability? If you start too early on, you could easily over complicate your design, and make it impossible to actually build. You could also get too caught up in the details, the architecture, whatever, and wind up getting nothing done. Also, if you do get it working, but the site never takes off, you may have wasted a good chunk of extra effort. On the other hand, you could be saving yourself a ton of effort down the road. Designing it from the ground up to be big would make it much easier later on to let it grow big, with very little rewriting going on. I know for what I'm working on, I've decided to make at least a few choices now on the side of scaling, but I'm not going to do a complete change of thinking to get it to scale completely. Notably, I've redesigned my database from a conventional relational design to one similar to what was suggested on the Reddit site linked below, and I'm going to give memcache a try. So, the basic question, when is a good time to start thinking or worrying about scaling, and what are some good designs, tips, etc. for when doing so? A couple of things I've been reading, for those who are interested: http://www.codinghorror.com/blog/2009/06/scaling-up-vs-scaling-out-hidden-costs.html http://highscalability.com/blog/2010/5/17/7-lessons-learned-while-building-reddit-to-270-million-page.html http://developer.yahoo.com/performance/rules.html

    Read the article

  • Is it poor design to create objects that only execute code during the constructor?

    - by Curtix
    In my design I am using objects that evaluate a data record. The constructor is called with the data record and the type of evaluation as parameters, and the constructor then calls all of the object's code necessary to evaluate the record. This includes using the type of evaluation to find additional parameter-like data in a text file. There are in the neighborhood of 250 unique evaluation types that use the same or similar code and unique parameters coming from the text file. Some of these evaluations use different code, so I benefit a lot from this model because I can use inheritance and polymorphism. Once the object is created there isn't any need to execute additional code on it (at least for now) and it is used more like a struct; it's kept in a list and 3 of its properties are used later. I think this design is the easiest to understand, code, and read. A logical alternative, I guess, would be functions that return score structs, but you can't inherit from methods, so that would be kind of sloppy in my opinion. I am using VB.NET, and these classes will be used in an ASP.NET web app as well as in a distributed app. Thanks for your input.
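    One common compromise, sketched very small here (in C# rather than VB.NET, and with invented names): keep the inheritance and polymorphism, but move the work out of the constructor into a factory plus an Evaluate method, so constructing an object stays cheap and side-effect free while the evaluation step is still explicit.
        using System;

        // Construction only stores inputs; the work happens in Evaluate().
        abstract class Evaluation
        {
            protected Evaluation(string record) { Record = record; }

            public string Record { get; }
            public int Score { get; private set; }

            public void Evaluate()
            {
                Score = ComputeScore();   // polymorphic per evaluation type
            }

            protected abstract int ComputeScore();

            // A small factory keeps the "250 evaluation types" lookup in one place.
            public static Evaluation Create(string evaluationType, string record)
            {
                switch (evaluationType)
                {
                    case "length": return new LengthEvaluation(record);
                    default: throw new ArgumentException("Unknown evaluation type: " + evaluationType);
                }
            }
        }

        class LengthEvaluation : Evaluation
        {
            public LengthEvaluation(string record) : base(record) { }
            protected override int ComputeScore() => Record.Length;
        }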

    Read the article

  • Is there a design pattern that expresses objects (and their operations) in various states?

    - by darren
    Hi, I have a design question about the evolution of an object (and its state) as a sequence of methods completes. I'm having trouble articulating what I mean, so I may need to clean up the question based on feedback. Consider an object called Classifier. It has the following methods:
    void initialise()
    void populateTrainingSet(TrainingSet t)
    void populateTestingSet(TestingSet t)
    void train()
    void test()
    Result predict(Instance i)
    My problem is that these methods need to be called in a certain order. Further, some methods are invalid until a previous method has been called, and some methods are invalid after another method has been called. For example, it would be invalid to call predict() before test() was called, and it would be invalid to call train() after test() was called. My approach so far has been to maintain a private enum that represents the current state of the object:
    private static enum STATE { NEW, TRAINED, TESTED, READY };
    But this seems a bit kludgy. Is there a design pattern for this kind of problem? Maybe something related to the Template Method.
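    One option short of a full State pattern, sketched here with invented C# names purely to show the shape: have each phase return an interface that only exposes the operations that are legal next, so "predict before test" no longer compiles at all. (The other common answer is to keep the enum but centralise the guard check in one private method.)
        // Each stage only exposes the operations that are legal next.
        interface IUntrainedClassifier
        {
            ITrainedClassifier Train(TrainingSet trainingData);
        }

        interface ITrainedClassifier
        {
            ITestedClassifier Test(TestingSet testData);
        }

        interface ITestedClassifier
        {
            Result Predict(Instance instance);
        }

        class Classifier : IUntrainedClassifier, ITrainedClassifier, ITestedClassifier
        {
            public ITrainedClassifier Train(TrainingSet trainingData) { /* ... */ return this; }
            public ITestedClassifier Test(TestingSet testData) { /* ... */ return this; }
            public Result Predict(Instance instance) { /* ... */ return new Result(); }
        }

        // Placeholder types so the sketch stands on its own.
        class TrainingSet { }
        class TestingSet { }
        class Instance { }
        class Result { }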

    Read the article

  • Which is the correct design pattern for my PHP application?

    - by user1487141
    I've been struggling to find a good way to implement my system, which essentially matches the season and episode number of a show from a string; you can see the current working code here: https://github.com/huddy/tvfilename I'm currently rewriting this library and want a nicer way to implement how the match happens. Currently it works like this: there's a folder of classes (called handlers); every handler is a class that implements an interface to ensure a match() method exists. This match method uses the regex stored in a property of that handler class (of which there are many) to try to match a season and episode. The class loads all of these handlers by instantiating each one into an array stored in a property, and when I want to try to match some strings, the method iterates over these objects calling match(); the first one that returns true is then returned in a result set with the season and episode it matched. I don't really like this way of doing it (it feels kind of hacky to me) and I'm hoping a design pattern can help; my ultimate goal is to do this using best practices, and I wondered which one I should use. The other problems are: more than one handler could match a string, so they have to be kept in a particular order to stop the greedier ones from matching first. I'm not sure whether that's solvable, since some of the regex patterns have to be greedy, but possibly a scoring system (something that gives a percentage of how likely the match is to be correct) would help, though I'd have no idea how to actually implement that. I'm also not sure whether instantiating all those handlers is a good way of doing it; speed is important, but using best practices and sticking to design patterns to create good, extensible and maintainable code is my ultimate priority. It's worth noting the handler classes sometimes do more than just regex matching; they sometimes prep the string to be matched by removing common words, etc. Cheers for any help, Billy
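    The library itself is PHP, but the shape being described (handlers behind a common match() contract, first match wins) is essentially Chain of Responsibility with a Strategy-style handler per file-name format. A minimal sketch of that shape, written in C# to match the other examples on this page and with invented names; ordering (or a scoring pass) then becomes a property of the list you build, not of the handlers themselves.
        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        class EpisodeMatch
        {
            public int Season { get; set; }
            public int Episode { get; set; }
        }

        interface IEpisodeHandler
        {
            // Returns null when this handler cannot make sense of the file name.
            EpisodeMatch Match(string fileName);
        }

        class SxxExxHandler : IEpisodeHandler
        {
            private static readonly Regex Pattern =
                new Regex(@"S(?<s>\d{1,2})E(?<e>\d{1,2})", RegexOptions.IgnoreCase);

            public EpisodeMatch Match(string fileName)
            {
                var m = Pattern.Match(fileName);
                if (!m.Success) return null;
                return new EpisodeMatch
                {
                    Season = int.Parse(m.Groups["s"].Value),
                    Episode = int.Parse(m.Groups["e"].Value)
                };
            }
        }

        class EpisodeMatcher
        {
            private readonly List<IEpisodeHandler> handlers;   // ordered: most specific first

            public EpisodeMatcher(List<IEpisodeHandler> handlers) { this.handlers = handlers; }

            public EpisodeMatch Match(string fileName)
            {
                foreach (var handler in handlers)
                {
                    var result = handler.Match(fileName);
                    if (result != null) return result;   // first match wins
                }
                return null;
            }
        }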

    Read the article

  • How to force VS to react to a change of an attached property at design time?

    - by sedovav
    Imagine we have a WPF class library with a window1.xaml and a resource dictionary res.xaml defined in it. I know how to use styles defined in res.xaml for the controls defined in the window:
    <Window x:Class="...Window1">
        <Window.Resources>
            <ResourceDictionary>
                <ResourceDictionary.MergedDictionaries>
                    <ResourceDictionary Source="res.xaml"/>
                </ResourceDictionary.MergedDictionaries>
            </ResourceDictionary>
        </Window.Resources>
    </Window>
    So we can use the dictionary's styles for all elements inside the window (except the window element itself; I don't know how to set a style from res.xaml on the window). I saw an article describing how to create and use an attached property to add resource dictionaries to a FrameworkElement.Resources.MergedDictionaries list. It's good! We can do the same as in the example above, but now we can use the window style too. It looks like this:
    <Window x:Class="...Window1"
            xmlns:resources="..."
            resources:SharedResources.MergedDictionaries="res.xaml">
    </Window>
    That's good, but VS2008 cannot recognize resources from res.xaml at design time. So we have a sad situation: all styles from res.xaml are available at run time, but at design time VS cannot display the window (it can't find the mentioned styles). Does anybody know how to fix this?

    Read the article

  • Is Using Tuples in my .NET 4.0 Code a Poor Design Decision?

    - by Jason Webb
    With the addition of the Tuple class in .net 4, I have been trying to decide if using them in my design is a bad choice or not. The way I see it, a Tuple can be a shortcut to writing a result class (I am sure there are other uses too). So this: public class ResultType { public string StringValue { get; set; } public int IntValue { get; set; } } public ResultType GetAClassedValue() { //..Do Some Stuff ResultType result = new ResultType { StringValue = "A String", IntValue = 2 }; return result; } Is equivalent to this: public Tuple<string, int> GetATupledValue() { //...Do Some stuff Tuple<string, int> result = new Tuple<string, int>("A String", 2); return result; } So setting aside the possibility that I am missing the point of Tuples, is the example with a Tuple a bad design choice? To me it seems like less clutter, but not as self documenting and clean. Meaning that with the type ResultType, it is very clear later on what each part of the class means but you have extra code to maintain. With the Tuple<string, int> you will need to look up and figure out what each Item represents, but you write and maintain less code. Any experience you have had with this choice would be greatly appreciated.

    Read the article

  • Database PK-FK design for future-effective-date entries?

    - by Scott Balmos
    Ultimately I'm going to convert this into a Hibernate/JPA design, but I wanted to start out purely from a database perspective. We have various tables containing data that is future-effective-dated. Take an employee table with the following pseudo-definition:
    employee
        id INT AUTO_INCREMENT
        ... data fields ...
        effectiveFrom DATE
        effectiveTo DATE
    employee_reviews
        id INT AUTO_INCREMENT
        employee_id INT FK employee.id
    Very simplistic. But let's say Employee A has id = 1, effectiveFrom = 1/1/2011, effectiveTo = 1/1/2099. That employee is going to be changing jobs in the future, which would in theory create a new row, id = 2, with effectiveFrom = 7/1/2011 and effectiveTo = 1/1/2099, and id = 1's effectiveTo updated to 6/30/2011. But now my program would have to go through every table that has a FK relationship to employee each night and update those FKs to reference the newly effective employee entry. I have seen various postings in both pure SQL and Hibernate forums suggesting I should have a separate employee_versions table, which is where all the effective-dated data would be stored, resulting in the updated pseudo-definition below:
    employee
        id INT AUTO_INCREMENT
    employee_versions
        id INT AUTO_INCREMENT
        employee_id INT FK employee.id
        ... data fields ...
        effectiveFrom DATE
        effectiveTo DATE
    employee_reviews
        id INT AUTO_INCREMENT
        employee_id INT FK employee.id
    Then to get any actual data, one would have to select from employee_versions with the proper employee_id and date range. It feels rather unnatural to have this secondary "versions" table for each versioned entity. Does anyone have opinions or suggestions from their own prior work? Like I said, I'm taking this purely from a general SQL design standpoint first, before layering Hibernate on top. Thanks!
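    Whichever of the two layouts is chosen, the read side usually ends up as "give me the version effective on date X". A hedged sketch of that lookup in C# against an in-memory list, just to show the predicate; the same predicate becomes the WHERE clause in SQL or the restriction in a Hibernate query, and the type names are invented.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        class EmployeeVersion
        {
            public long EmployeeId { get; set; }
            public DateTime EffectiveFrom { get; set; }
            public DateTime EffectiveTo { get; set; }
            // ... data fields ...
        }

        static class EffectiveDating
        {
            // The version of an employee that is in force on a given date.
            public static EmployeeVersion VersionAsOf(
                IEnumerable<EmployeeVersion> versions, long employeeId, DateTime asOf)
            {
                return versions.SingleOrDefault(v =>
                    v.EmployeeId == employeeId &&
                    v.EffectiveFrom <= asOf &&
                    asOf <= v.EffectiveTo);
            }
        }
    With the employee/employee_versions split, foreign keys keep pointing at the stable employee.id and this date filter is what picks the right version, which is what removes the nightly FK-rewriting job.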

    Read the article

  • When is too much encapsulation reached?

    - by Samuel
    Recently I read a lot of good articles about how to do encapsulation well. And when I say "good encapsulation", I don't mean hiding private fields behind public properties; I mean preventing users of your API from doing the wrong things. Here are two good articles on the subject: http://blog.ploeh.dk/2011/05/24/PokayokeDesignFromSmellToFragrance.aspx http://lostechies.com/derickbailey/2011/03/28/encapsulation-youre-doing-it-wrong/ At my job, the majority of our applications are not destined for other programmers but rather for customers. About 80% of the application code is at the top of the structure (not used by other code). For this reason, there is probably no chance that this code will ever be used by another application. An example of encapsulation that prevents users from doing the wrong thing with your API is returning an IEnumerable instead of an IList when you don't want to give the user the ability to add or remove items from the list. My question is: when could encapsulation be considered too much object-oriented purism, keeping in mind that each hour of programming is charged to the customer? I want to write good code that is maintainable and easy to read and use, but when this is not a public API (used by other programmers), where do we draw the line between perfect code and not-so-perfect code? Thank you.
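    For reference, the IEnumerable-over-IList example mentioned above costs almost nothing to write, which is one way to weigh it against the "billed by the hour" concern. A small sketch with invented names:
        using System.Collections.Generic;

        class Order
        {
            private readonly List<string> items = new List<string>();

            // Callers can read the items but cannot Add/Remove behind the class's back.
            // (A ReadOnlyCollection or a defensive copy would also close the cast-back
            // loophole, if that level of protection is worth the extra code.)
            public IEnumerable<string> Items
            {
                get { return items; }
            }

            public void AddItem(string item)
            {
                // one central place for validation and invariants
                items.Add(item);
            }
        }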

    Read the article

  • Making money from a custom built interpreter?

    - by annoying_squid
    I have been making considerable progress lately on building an interpreter. I am building it from NASM assembly code (for the core engine) and C (cl.exe, the Microsoft compiler, for the parser). I really don't have a lot of time, but I have a lot of good ideas on how to build this so that it appeals to a certain niche market. I'd love to finish this, but I need to face reality here: unless I can make a good monetary return on my investment, there is not a lot of time for me to invest. So I ask the following questions to anyone out there, especially those who have experience in monetizing their programs: 1) How easy is it for a programmer to make good money from one design? (I know this is vague, but it will be interesting to hear from those who have experience or know of others' experiences.) 2) What are the biggest obstacles to making money from a programming design? 3) For the parser, I am using the Microsoft compiler (no IDE) that I got from Visual Studio Express, so will this be an issue? Will I have to pay royalties or a license fee? 4) As far as I know, NASM is a 2-clause BSD licensed application, so this should allow me to use NASM for commercial development unless I am missing something? It's good to know these things before launching into the meat and potatoes of the project.

    Read the article

  • How do you plan your asynchronous code?

    - by NullOrEmpty
    I created a library that is an invoker for a web service somewhere else. The library exposes asynchronous methods, since web service calls are a good candidate for that. At the beginning everything was fine; I had methods with easy-to-understand operations in a CRUD fashion, since the library is a kind of repository. But then the business logic started to become complex, and some of the procedures involve chaining many of these asynchronous operations, sometimes with different paths depending on the result value, etc. Suddenly everything is very messy: stopping the execution at a breakpoint is not very helpful, and finding out what is going on, or where in the process timeline you have stopped, becomes a pain. Development becomes less quick and less agile, and catching those bugs that happen once in 1000 times becomes hell. From the technical point of view, a repository that exposes asynchronous methods looked like a good idea, because some persistence layers could have delays and you can use the async approach to make the most of your hardware. But from the functional point of view, things became very complex, and considering those procedures where a dozen different calls were needed... I don't know the real value of the improvement. After reading about the TPL for a while, it looked like a good idea for managing tasks, but the moment you have to combine them and start to reuse existing functionality, things become very messy. I have had a good experience using it for very concrete scenarios, but a bad experience using it broadly. How do you work asynchronously? Do you use it always, or just for long-running processes? Thanks.
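    One thing that helps with the "where in the timeline am I" problem is making each step of the chain a named method that returns a task, rather than a nested callback. A small .NET 4-style sketch using ContinueWith, with invented names and placeholder bodies standing in for the real web service calls; on .NET 4.5 and later, async/await flattens this further.
        using System.Threading.Tasks;

        class CustomerRepository
        {
            public Task<int> GetCustomerIdAsync(string name)
            {
                return Task.Factory.StartNew(() => 42);                      // stand-in for the service call
            }

            public Task<string> GetOrdersAsync(int customerId)
            {
                return Task.Factory.StartNew(() => "orders for " + customerId);
            }

            // The chain reads top to bottom, and each step has a name you can log or break on.
            public Task<string> GetOrdersForCustomerAsync(string name)
            {
                return GetCustomerIdAsync(name)
                    .ContinueWith(idTask => GetOrdersAsync(idTask.Result))
                    .Unwrap();
            }
        }
    Keeping every step a named, individually testable method is also what makes the "different paths depending on the result" branches easier to follow than deeply nested continuations.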

    Read the article

  • Creating a dynamic proxy generator with C# – Part 2 – Interceptor Design

    - by SeanMcAlinden
    Creating a dynamic proxy generator – Part 1 – Creating the Assembly builder, Module builder and caching mechanism
    For the latest code go to http://rapidioc.codeplex.com/
    Before getting too involved in generating the proxy, I thought it would be worthwhile going through the intended design; this is important, as the next step is to start creating the constructors for the proxy.
    - Each proxy derives from a specified type.
    - The proxy has a corresponding constructor for each of the base type constructors.
    - The proxy has overrides for all methods and properties marked as virtual on the base type.
    - For each overridden method, there is also a private method whose sole job is to call the base method.
    - For each overridden method, a delegate is created whose sole job is to call the private method that calls the base method.
    The following class diagram shows the main classes and interfaces involved in the interception process. I'll go through each of them to explain their place in the overall proxy.
    IProxy Interface
    The proxy implements the IProxy interface for the sole purpose of adding custom interceptors. This allows the created proxy to be cast as an IProxy so that interceptors can be added simply by calling its AddInterceptor method. This is done internally within the proxy building process, so the consumer of the API doesn't need knowledge of it.
    IInterceptor Interface
    The IInterceptor interface has one method: Handle. The Handle method accepts an IMethodInvocation parameter which contains methods and data for handling method interception. Multiple classes that implement this interface can be added to the proxy. Each method override in the proxy calls the Handle method rather than simply calling the base method. How the proxy fully works will be explained in the next section, MethodInvocation.
    IMethodInvocation Interface & MethodInvocation class
    The MethodInvocation will contain one main method and multiple helper properties.
    Continue Method
    The Continue() method has two functions hidden away from the consumer. When Continue is called, if there are multiple interceptors, the next interceptor's Handle method is called. Once all interceptors' Handle methods have been called, the Continue method calls the base class method.
    Properties
    The MethodInvocation will contain multiple helper properties, including at least the following:
    - Method Name (read only)
    - Method Arguments (read and write)
    - Method Argument Types (read only)
    - Method Result (read and write) – this property remains null if the method return type is void
    - Target Object (read only)
    - Return Type (read only)
    DefaultInterceptor class
    The DefaultInterceptor class is a simple class that implements the IInterceptor interface. Here is the code:
        namespace Rapid.DynamicProxy.Interception
        {
            /// <summary>
            /// Default interceptor for the proxy.
            /// </summary>
            /// <typeparam name="TBase">The base type.</typeparam>
            public class DefaultInterceptor<TBase> : IInterceptor<TBase> where TBase : class
            {
                /// <summary>
                /// Handles the specified method invocation.
                /// </summary>
                /// <param name="methodInvocation">The method invocation.</param>
                public void Handle(IMethodInvocation<TBase> methodInvocation)
                {
                    methodInvocation.Continue();
                }
            }
        }
    This is automatically created in the proxy and is the first interceptor that each method override calls. Its sole function is to ensure that, if no interceptors have been added, the base method is still called.
    Custom Interceptor Example
    A consumer of the Rapid.DynamicProxy API could create an interceptor for logging when the FirstName property of the User class is set. Just for illustration, I have also wrapped a transaction around the methodInvocation.Continue() call. This means that any overridden methods within the User class will run within a transaction scope.
        public class MyInterceptor : IInterceptor<User<int, IRepository>>
        {
            public void Handle(IMethodInvocation<User<int, IRepository>> methodInvocation)
            {
                if (methodInvocation.Name == "set_FirstName")
                {
                    Logger.Log("First name setting to: " + methodInvocation.Arguments[0]);
                }

                using (TransactionScope scope = new TransactionScope())
                {
                    methodInvocation.Continue();
                }

                if (methodInvocation.Name == "set_FirstName")
                {
                    Logger.Log("First name has been set to: " + methodInvocation.Arguments[0]);
                }
            }
        }
    Overridden Method Example
    To give a taste of what the overridden methods on the proxy would look like, the setter method for the FirstName property used in the above example would look something like the following (this is not real code, but it will look similar):
        public override void set_FirstName(string value)
        {
            set_FirstNameBaseMethodDelegate callBase =
                new set_FirstNameBaseMethodDelegate(this.set_FirstNameProxyGetBaseMethod);

            object[] arguments = new object[] { value };

            IMethodInvocation<User<IRepository>> methodInvocation =
                new MethodInvocation<User<IRepository>>(this, callBase, "set_FirstName", arguments, interceptors);

            this.Interceptors[0].Handle(methodInvocation);
        }
    As you can see, a delegate instance is created which calls a private method on the class; the private method calls the base method and would look like the following:
        private void set_FirstNameProxyGetBaseMethod(string value)
        {
            base.set_FirstName(value);
        }
    The delegate is invoked when methodInvocation.Continue() is called within an interceptor. The set_FirstName parameters are loaded into an object array. The current instance, delegate, method name and method arguments are passed into the MethodInvocation constructor (there will be more data passed in than illustrated here, including method info, return types, argument types, etc.). The DefaultInterceptor's Handle method is then called with the methodInvocation instance as its parameter. Obviously methods can have return values, ref and out parameters, etc.; in those cases the generated method override body will be slightly different from the above. I'll go into more detail on these aspects as we build them.
    Conclusion
    I hope this has been useful. I can't guarantee that the proxy will look exactly like the above, but at the moment this is pretty much what I intend to do. It is always worth downloading the code at http://rapidioc.codeplex.com/ to see the latest. There will also be some tests that you can debug through to help see what's going on. Cheers, Sean.

    Read the article

  • Windows 8: Everything from design, build, and how to sell a Metro style app

    - by Thomas Mason
    For me, there are a lot of similarities between an application developed for Windows Phone and a Metro style app developed for Windows 8. A Windows Phone 7 application (rather than an XNA game) is built in .NET and XAML against a subset of the .NET framework, and the application has a lifecycle which needs to be conscious of battery life and so is split into foreground/background pieces. The application is sandboxed in terms of its interactions with the local device and is packaged with a manifest which describes those interactions. The app needs to be aware of network connectivity status, and its work on the network is done asynchronously to preserve the user experience. The app is packaged and deployed to a Marketplace which the user browses to find the app, read reviews, perhaps purchase it, and then install it and receive updates over time. Quite a lot of those statements are as true of a Windows 8 Metro style app as they are of a Windows Phone app, so a Windows Phone app developer already has a good head start when it comes to building Metro style apps for Windows 8. With that in mind, there is an event to help developers with a Windows Phone app in Marketplace begin the process of looking at Windows 8 and whether they can get a quick win by bringing their Phone application onto Windows. The idea of the event is to provide a space where developers can get together over 2 days and take the time out to look at what it means to take their app from Windows Phone to Windows 8. Kicking off on Saturday 16th June at 10am, we are told they have plenty of power sockets, WiFi, whiteboards, drinks, pizza, games, prizes and some quiet space that you can work in, including people on hand with Windows Phone and Windows 8 experience to help everything along the way. There will be an attendee-voted schedule of talks, but we'll keep these out of your way if you just want to get on and code. We'll also provide information around submitting your app to the Windows Store. If you have a Windows Phone app in Marketplace, now's a great time to look at getting it onto Windows 8. Sign up. Bring your laptop. Bring your app. Bring Windows 8 and Visual Studio 11. And everyone will do their best to help you get your app onto Windows 8. Location & Venue: TBA, but it will be in central London, accessible by major railway and underground transportation. Day 1: Saturday 16th June, 10am – 9pm. Day 2: Sunday 17th June, 10am – 4pm.

    Read the article

  • Is there any good hosting for ASP.NET and MySQL?

    - by HAJJAJ
    Hi everyone. I have an account with a hosting company, and I built my project in ASP.NET with MySQL for the database. The hosting company is not giving me the privileges to create a new user or a new stored procedure. This is what they told me: Due to the shared nature of our environment we had to make some modifications to your procedure (namely the definer). We also had to review your procedure to determine if it would be compatible with our environment. While your procedures will work (via phpMyAdmin or some other interface), it is unlikely they will be accessible via the Connector/.NET (ADO.NET) that your application is likely using. This is due to a security restriction with how that connector works in shared environments. http://dev.mysql.com/doc/refman/5.0/en/connector-net-programming-stored.html "Note When you call a stored procedure, the command object makes an additional SELECT call to determine the parameters of the stored procedure. You must ensure that the user calling the procedure has the SELECT privilege on the mysql.proc table to enable them to verify the parameters. Failure to do this will result in an error when calling the procedure." Unfortunately, giving read privileges on the mysql.proc table will give you access to the data of our other customers and that is not an acceptable risk. If your application can only work using stored procedures, then MSSQL will probably be the better option for your site. I apologize for the inconvenience and the wait to have this ticket completed. So, is there any good hosting that anybody has already used to publish an ASP.NET and MySQL project? This is one of my stored procedures; I think it's simple and it will not harm any other users:
    -- --------------------------------------------------------------------------------
    -- Routine DDL
    -- Note: comments before and after the routine body will not be stored by the server
    -- --------------------------------------------------------------------------------
    DELIMITER $$
    CREATE DEFINER=`root`@`localhost` PROCEDURE `SpcategoriesRead`(
        IN PaRactioncode VARCHAR(5),
        IN PaRCatID BIGINT,
        IN PaRSearchText TEXT
    )
    BEGIN
        -- CREATING TEMPORARY TABLE TO SAVE DATA FROM THE ACTIONCODE SELECTS --
        DROP TEMPORARY TABLE IF EXISTS tmp;
        CREATE TEMPORARY TABLE tmp (
            CatID BIGINT PRIMARY KEY NOT NULL,
            CatTitle TEXT,
            CatDescription TEXT,
            CatTitleAr TEXT,
            CatDescriptionAr TEXT,
            PictureID BIGINT,
            Published BOOLEAN,
            DisplayOrder BIGINT,
            CreatedOn DATE
        );
        IF PaRactioncode = 1 THEN
            -- Retrieve all data from the database --
            INSERT INTO tmp
            SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
            FROM tbcategories;
        ELSEIF PaRactioncode = 2 THEN
            -- Retrieve from the database by ID --
            INSERT INTO tmp
            SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
            FROM tbcategories
            WHERE CatID = PaRCatID;
        ELSEIF PaRactioncode = 3 THEN
            -- NOT SET YET --
            INSERT INTO tmp
            SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
            FROM tbcategories
            WHERE Published = 1
            ORDER BY DisplayOrder;
        END IF;
        IF PaRSearchText IS NOT NULL THEN
            SET PaRSearchText = CONCAT('%', PaRSearchText, '%');
            SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
            FROM tmp
            WHERE CONCAT(CatTitle, CatDescription, CatTitleAr, CatDescriptionAr) LIKE PaRSearchText;
        ELSE
            SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
            FROM tmp;
        END IF;
        DROP TEMPORARY TABLE IF EXISTS tmp;
    END

    Read the article

  • Default values - are they good or evil?

    - by Andrew
    This question is about default values in general: default function return values, default parameter values, default logic for when something is missing, default logic for handling exceptions, default logic for handling edge conditions, etc. For a long time I considered default values to be a "pure evil" thing, something that "cloaks the catastrophe" and results in very hard to find bugs. But recently I started to think about default values as a sort of technical debt... which is not an outright bad thing, but something that could provide some "short term financing" and get us to survive the project (how many of us could afford to buy a house without taking out a mortgage?). When I say "short term", I don't mean "do something quickly first and refactor it out later before it hits production". No, I am talking about relying on hardcoded default values in production software. Granted, it could cause some issues, but what if it is only going to cause a single problem in a whole year? Again, I am talking about "average" mainstream software here (not software for a nuclear power station): the average web site or a UI application for accounting software, meaning that people's lives are not at stake, nor millions of dollars. Again, from my experience, business users would rather live with software which "works somehow" than wait for a perfect one. And the use of default values helps a lot if you develop software in a RAD style. But then again, the longest debug sessions I have spent were because of bugs introduced by a default value which either stopped being "a default" along the way, or because a small subsystem was recently upgraded and as a result of the upgrade it no longer handles the default correctly (e.g. empty list vs null, or null string vs empty string). So my question is: are default values good or evil? And if they are a technical debt, how do you measure how much you can borrow so you can afford the repayments? Would really appreciate any input. Cheers. EDIT: If I am using default values as a way to cut corners during development, and the corner-cutting results in bugs and issues, what is the methodology to recover from those issues?
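    One way to keep the "short term financing" while limiting the debugging cost is to make the default explicit and observable instead of silent: the caller sees the fallback at the call site, and the fallback leaves a trace. A tiny sketch with invented names, not a prescription:
        using System;
        using System.Collections.Generic;

        static class Config
        {
            private static readonly Dictionary<string, string> Values = new Dictionary<string, string>();

            // The default is visible at the call site, not buried inside the lookup...
            public static string GetOrDefault(string key, string defaultValue)
            {
                string value;
                if (Values.TryGetValue(key, out value))
                    return value;

                // ...and the fallback is logged, so "it silently used the default for a year"
                // shows up in the logs instead of in a marathon debugging session.
                Console.WriteLine("Config '{0}' missing, using default '{1}'", key, defaultValue);
                return defaultValue;
            }
        }
    The same idea applies to the "subsystem upgraded and stopped honouring the default" case: a logged fallback turns the silent behaviour change into something you can grep for.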

    Read the article

  • Web Apps vs Web Services: 302s and 401s are not always good Friends

    - by Your DisplayName here!
    It is not uncommon to have web sites that contain both web UX and services content. The UX part may use WS-Federation (or some other redirect-based mechanism). That means that whenever an authorization error occurs (401 status code), it is picked up by the corresponding redirect module and turned into a redirect (302) to the login page. All is good. But in services, when you emit a 401, you typically want that status code to travel back to the client agent so it can do error handling. These two approaches conflict. If you think (like me) that you should separate UX and services into separate apps, you don't need to read on. Just do it ;) If you need to mix both mechanisms in a single app, here's how I solved it for a project. I subclassed the redirect module (in my case the WIF WS-Federation HTTP module) and modified the OnAuthorizationFailed method. In there I check for a special HttpContext item, and if it is present, I suppress the redirect. Otherwise everything works as normal:
    class ServiceAwareWSFederationAuthenticationModule : WSFederationAuthenticationModule
    {
        protected override void OnAuthorizationFailed(AuthorizationFailedEventArgs e)
        {
            base.OnAuthorizationFailed(e);

            var isService = HttpContext.Current.Items[AdvertiseWcfInHttpPipelineBehavior.DefaultLabel];
            if (isService != null)
            {
                e.RedirectToIdentityProvider = false;
            }
        }
    }
    Now the question is, how do you smuggle that value into the HttpContext? If it is an MVC-based web service, that's easy of course. In the case of WCF, one approach that worked for me was to set it in a service behavior (a dispatch message inspector, to be exact):
    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        if (HttpContext.Current != null)
        {
            HttpContext.Current.Items[DefaultLabel] = true;
        }
    }
    HTH

    Read the article

  • Good Customer Service Example

    - by MightyZot
    Here’s another good customer service example for you! My wife purchased a Galaxy last week and she loves the phone.  She asked me to add it to our AT&T Microcell last night. I purchased the AT&T Microcell a couple of years ago, because cell signal out where I live sucks! Since microcells are managed on the AT&T web site, I went to the site and tried to sign in. Naturally, having not managed that microcell in a couple of years…and much to my chagrin…I discovered that I didn’t know my password OR my user ID. So, I decided to call and see if I could get my account reset that late in the day (we’re talking last night, so it was well after 7pm.) I called the technical support line, touched the appropriate numbers to navigate to microcell support, turned on my speaker phone, and prepared for the long wait. After about 45 seconds I was delighted to hear “Jeffrey” break in and ask what he could help me with. I explained that I have not managed my microcell for some time and had forgotten the user name and password.  “No problem”, he replied, and he asked me for the line I used to register the microcell. After confirming the last four digits of my IMEI number, he asked me for my wife’s number. I gave him my wife’s number and he said, “I’ve taken care of it Mr Pope. Just have her reboot her phone and you should see your microcell.” We rebooted her phone, it connected to the microcell, and voila, she was online! “Is there anything else I can help you with while I’ve got you on the line”, he said. “Nope”, I replied. “Ok, have a great night.” What made this a great customer service experience for me was that “Jeffrey” didn’t stop at giving me my user account and password, which I would probably forget anyway after setting up my wife’s new phone. Instead, he solved the real problem for me – adding my wife’s new phone to my microcell. Great job Jeffrey!

    Read the article

  • How to become a good team player?

    - by Nick
    I've been programming (obsessively) since I was 12. I am fairly knowledgeable across the spectrum of languages out there, from assembly, to C++, to Javascript, to Haskell, Lisp, and Qi. But all of my projects have been by myself. I got my degree in chemical engineering, not CS or computer engineering, but for the first time this fall I'll be working on a large programming project with other people, and I have no clue how to prepare. I've been using Windows all of my life, but this project is going to be very unix-y, so I purchased a Mac recently in the hopes of familiarizing myself with the environment. I was fortunate to participate in a hackathon with some friends this past year -- both CS majors -- and excitingly enough, we won. But I realized as I worked with them that their workflow was very different from mine. They used Git for version control. I had never used it at the time, but I've since learned all that I can about it. They also used a lot of frameworks and libraries. I had to learn what Rails was pretty much overnight for the hackathon (on the other hand, they didn't know what lexical scoping or closures were). All of our code worked well, but they didn't understand mine, and I didn't understand theirs. I hear references to things that real programmers do on a daily basis -- unit testing, code reviews, but I only have the vaguest sense of what these are. I normally don't have many bugs in my little projects, so I have never needed a bug tracking system or tests for them. And the last thing is that it takes me a long time to understand other people's code. Variable naming conventions (that vary with each new language) are difficult (__mzkwpSomRidicAbbrev), and I find the loose coupling difficult. That's not to say I don't loosely couple things -- I think I'm quite good at it for my own work, but when I download something like the Linux kernel or the Chromium source code to look at it, I spend hours trying to figure out how all of these oddly named directories and files connect. It's a programming sin to reinvent the wheel, but I often find it's just quicker to write up the functionality myself than to spend hours dissecting some library. Obviously, people who do this for a living don't have these problems, and I'll need to get to that point myself. Question: What are some steps that I can take to begin "integrating" with everyone else? Thanks!

    Read the article

  • Good DBAs Do Baselines

    - by Louis Davidson
    One morning, you wake up and feel funny. You can't quite put your finger on it, but something isn't quite right. What now? Unless you happen to be a hypochondriac, you likely drag yourself out of bed, get on with the day and gather more "evidence". You check your symptoms over the next few days; do you feel the same, better, worse? If better, then great, it was some temporal issue, perhaps caused by an allergic reaction to some suspiciously spicy chicken. If the same or worse, then you go to the doctor for some health advice, armed with some data to share, and having ruled out certain possible causes that are fixed with a bit of rest and perhaps an antacid. Whether you realize it or not, in comparing how you feel one day to the next, you have taken baseline measurements.
    In much the same way, a DBA uses baselines to gauge the health of their database servers. Of course, while SQL Server is very willing to share data regarding its health and activities, it has almost no idea of the difference between good and bad. Over time, experienced DBAs develop "mental" baselines with which they can gauge the health of their servers almost as easily as their own body. They accumulate knowledge of the daily, natural state of each part of their database system, and so know instinctively when one of their databases "feels funny". Equally, they know when an "issue" is just a passing tremor. They see their SQL Server with all of its four CPU cores running close to 100% and don't panic anymore. Why? It's 5PM, and every day the same thing occurs when the end-of-day reports, which are very CPU intensive, are running. Equally, they know when they need to respond in earnest, such as the first time they hear about an issue, even if it has been happening every day.
    Nevertheless, no DBA can retain mental baselines for every characteristic of their systems, so we need to collect physical baselines too. In my experience, surprisingly few DBAs do this very well. Part of the problem is that SQL Server provides a lot of instrumentation. If you look, you will find an almost overwhelming amount of data regarding user activity on your SQL Server instances, and use and abuse of the available CPU, I/O and memory. It seems like a huge task even to work out which data you need to collect, let alone start collecting it on a regular basis, managing its storage over time, and performing detailed comparative analysis.
    Without baselines, though, it is very difficult to pinpoint what ails a server just by looking at a single snapshot of the data, or to spot retrospectively what caused the problem by examining aggregated data for the server collected over many months.
    It isn't as hard as you think to get started. You've probably already established some troubleshooting queries of the type SELECT Value FROM SomeSystemTableOrView. Capturing a set of baseline values for such a query can be as easy as changing it as follows:
    INSERT INTO BaseLine.SomeSystemTable (value, captureTime)
    SELECT Value, SYSDATETIME()
    FROM SomeSystemTableOrView;
    Of course, there are monitoring tools that will collect and manage this baseline data for you automatically and allow you to compare metrics over different periods. However, to get yourself started and to prove to yourself (or perhaps the person who writes the checks for tools) the value of baselines, stick something similar to the above query into an agent job, running every hour or so, and you are on your way with no excuses!
Then, the next time you investigate a slow server, and see x open transactions, y users logged in, and z rows added per hour in the Orders table, compare to your baselines and see immediately what, if anything, has changed!

    Read the article

  • Bootstrap responsive CSS [migrated]

    - by savolai
    I have a four-column design and I am using Bootstrap. The design renders fine in a single column on mobile devices, but between "(min-width: 768px) and (max-width: 979px)" I get four columns even though there is room for only two. So clearly, the rows/spans setup would need to be rethought for those sizes. The only way I can imagine doing this is to use semantic CSS classes in the HTML, include the grid classes only in the CSS using LESS, and then, depending on screen size, include different grid classes to achieve a four- or two-column layout. I'm not sure this would work either, though. Is this the way to go, or am I overcomplicating it? Thanks! Also at: https://groups.google.com/forum/#!topic/twitter-bootstrap/R5jEp0oQ_-E

    Read the article

  • Separating entities from their actions or behaviours

    - by Jamie Dixon
    Hi everyone, I'm having a go at creating a very simple text-based game and am wondering what the standard design patterns are when it comes to entities (characters, sentient scenery) and the actions those entities can perform. As an example, I have an entity that is a 'person' with various properties such as age, gender, height, etc. This 'person' can also perform some actions such as speaking, walking, jumping, flying, etc. How would you separate the entity from the actions it can perform, and what are some common design patterns that solve this kind of problem?
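    A minimal sketch of one common answer: keep the entity as data and pull each action out behind a small interface (Strategy/Command flavour, drifting towards components), so "can this entity fly?" becomes "does it have a Fly behaviour?" rather than a subclassing question. The names here are illustrative, not a definitive design.
        using System;
        using System.Collections.Generic;

        class Person
        {
            public string Name { get; set; }
            public int Age { get; set; }

            private readonly Dictionary<string, IBehaviour> behaviours =
                new Dictionary<string, IBehaviour>(StringComparer.OrdinalIgnoreCase);

            public void AddBehaviour(string verb, IBehaviour behaviour) => behaviours[verb] = behaviour;

            // Returns false when the entity simply does not support the action.
            public bool TryPerform(string verb)
            {
                IBehaviour behaviour;
                if (!behaviours.TryGetValue(verb, out behaviour)) return false;
                behaviour.Perform(this);
                return true;
            }
        }

        interface IBehaviour
        {
            void Perform(Person actor);
        }

        class Speak : IBehaviour
        {
            public void Perform(Person actor) => Console.WriteLine(actor.Name + " says hello.");
        }

        class Jump : IBehaviour
        {
            public void Perform(Person actor) => Console.WriteLine(actor.Name + " jumps.");
        }
    Giving a chatty NPC a Speak behaviour and a bird a Jump-plus-Fly set is then a data decision rather than a class-hierarchy decision.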

    Read the article
