Search Results

Search found 1806 results on 73 pages for 'opinion'.

Page 61/73

  • jQuery "best practices" is an oxymoron [closed]

    - by n00b
    jQuery "best practices" is an oxymoron for several reasons, sadly. To speak candidly, the entire jQuery library has a HIGHLY inconsistent API and virtually no coding standards. It's basically a subset of JavaScript, a new language if you will, which is probably the most confusing part for normal JavaScript developers. In my opinion, use normal JavaScript where possible to save on memory consumption. Or, better yet, use a professional tool like YUI. If you already come from a JavaScript background, you will appreciate it much more than jQuery because it doesn't have n00b wrappers around everything. In tools like YUI, you interact directly with the native DOM to do things, instead of a super-jQuery object that tries to do everything. I'll get voted down for saying the truth. Don't get me wrong, jQuery is cool if you wanna throw something flashy up quick, but if you're going to build a larger app you're going to need a more refined tool (especially since jQuery leaks memory like no other once you start chaining too many things). jQuery does have one of the lowest learning curves, and you won't outgrow it for a while, but when you do it will be highly apparent and tedious to migrate to something else. Over 60 years of programming experience. Someone clean this up and make it community wiki, please.

    Read the article

  • Is .NET 4.0 just a show?

    - by Will Marcouiller
    I went to a presentation about the .NET Framework and Visual Studio 2010 last night. The topics were:

    ASP.NET 4 - some of the new features of ASP.NET 4:
    - More control over ClientIDs in WebForms;
    - Output caching;
    - ... // some other stuff I don't really remember, being more at home in the framework and WinForms world.

    Entity Framework 2.0 (.NET 4.0):
    - T4 templates;
    - Domain-driven development;
    - Data-driven development;
    - Contexts (edmx files);
    - Some real-world limitations of EF4 (projects with over 70 to 75 tables);
    - Better POCO support: there are still the hidden EntityObject and StructuralObject, but they are used differently than in EF 1.0, so they don't take over your inheritance;
    - Lets you easily choose how to persist the hierarchy into the underlying database;
    - Code only (start working with EF4 directly from your code!);
    - Design by Contract (DbC).

    The most interesting feature, and the only one as far as I'm concerned, is that parallelism is made easier. Which really works! No additional assembly references to add. In conclusion, I'm far from impressed with .NET Framework 4.0, apart from the fact that it makes some things easier to do. When you're used to doing things a certain way, it doesn't really change much, in my opinion. Is it just me who cannot foresee what .NET 4.0 has to offer? What would you guys base your decision to migrate to .NET 4.0 on, in a practical way?
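
    The parallel extensions mentioned ship with .NET 4.0 itself, so no additional assembly references are needed. A minimal sketch of the data-parallel loop style the presentation alluded to (the per-item work here is only a placeholder):

        using System;
        using System.Threading.Tasks; // part of .NET 4.0; no extra reference needed

        class ParallelDemo
        {
            static void Main()
            {
                var results = new double[1000];

                // Parallel.For partitions the index range across the available cores.
                Parallel.For(0, results.Length, i =>
                {
                    results[i] = Math.Sqrt(i); // placeholder for real per-item work
                });

                Console.WriteLine(results[999]);
            }
        }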

    Read the article

  • MySQL Database Design with Internationalization

    - by Some name
    Hello, I'm going to start work on a medium-sized application, and I'm planning its DB design. One thing that I'm not sure about is this: I will have many tables which will need internationalization, such as membership_options, gender_options, language_options, etc. Each of these tables will share common i18n fields, like title, alternative_title, short_description, description. In your opinion, which is the best way to do it? Have an i18n table with the same fields for each of the tables that will need them? Or do something like:

        Membership table               Gender table
        ----------------               ------------
        id | created_at                id | created_at
        1  | 22.03.2001                1  | 14.08.2002
        2  | 22.03.2001                2  | 14.08.2002

        General translation table
        -------------------------
        record_id | table_name | string_name | alternative_title | ... | id_language
        1         | membership | regular     | null              |     | 1 (english)
        1         | membership | normale     | null              |     | 2 (italian)
        1         | gender     | man         | null              |     | 1 (english)
        1         | gender     | uomo        | null              |     | 2 (italian)

    This would avoid me repeating something like:

        membership_translation table
        ----------------------------
        membership_id | name    | alternative_title | id_lang
        1             | regular | null              | 1
        1             | normale | null              | 2

        gender_translation table
        ------------------------
        gender_id | name | alternative_title | id_lang
        1         | man  | null              | 1
        1         | uomo | null              | 2

    and so on. I would probably reduce the number of DB tables, but I'm not sure about performance. I'm not much of a DB designer, so please let me know.

    Read the article

  • How does session timeout trigger on browser close?

    - by Hemant Kothiyal
    Hi, yesterday morning I opened my Gmail account in a second Internet Explorer tab. I checked my mail and closed that tab (not the browser). Then in the evening I again opened a second tab in the same browser and entered gmail.com; it automatically redirected me to my email account without asking me to log in. I was shocked. I left the browser open for the whole night, and today when I opened Gmail in a second tab it behaved the same way: without a login screen it redirected me to my Gmail account. Then I closed that tab, opened another browser session, and entered gmail.com, and I was again surprised: it redirected me to the login page. At the same time I opened a second tab in the first browser, and it automatically redirected me to the mail account page. What I concluded from this behaviour is that the Gmail server might keep my browser ID on their server, so that whenever I enter gmail.com in a second tab of the first browser, it automatically redirects me to my Gmail account. I don't know whether I am right or not. Please clarify this concept for me. What happens to my session on the Gmail server when I close my browser tab? In my opinion it should automatically log me out, but why doesn't this happen?

    Read the article

  • How do you work around memcached's key/value limitations?

    - by mjy
    Memcached has length limitations for keys (250?) and values (roughly 1 MB), as well as some (to my knowledge) not very well defined character restrictions for keys. What is the best way to work around those, in your opinion? I use the Perl API Cache::Memcached. What I do currently is store a special string ("parts:<number>") as the main key's value if the original value was too big, and in that case I store <number> parts with keys named 1+<main key>, 2+<main key>, etc. This seems "OK" (but messy) for some cases, not so good for others, and it has the intrinsic problem that some of the parts might be missing at any time (so space is wasted keeping the others and time is wasted reading them). As for the key limitations, one could probably implement hashing and store the full key (to work around collisions) in the value, but I haven't needed to do this yet. Has anyone come up with a more elegant way, or even a Perl API that handles arbitrary data sizes (and key values) transparently? Has anyone hacked the memcached server to support arbitrary keys/values?
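
    The question is Perl-specific, but the splitting scheme described is language-neutral. Here is a rough sketch of it in C#, written against a deliberately minimal, hypothetical cache interface (Set/Get of strings) rather than any real client API; it also counts characters where production code would count bytes:

        using System;
        using System.Text;

        // Hypothetical minimal cache interface; any memcached client could sit behind it.
        public interface ISimpleCache
        {
            void Set(string key, string value);
            string Get(string key); // null on miss
        }

        public static class ChunkedCache
        {
            const int MaxChunk = 1000000; // stay under memcached's ~1 MB value limit

            public static void Set(ISimpleCache cache, string key, string value)
            {
                if (value.Length <= MaxChunk) { cache.Set(key, value); return; }

                int parts = (value.Length + MaxChunk - 1) / MaxChunk;
                cache.Set(key, "parts:" + parts);    // marker stored under the main key
                for (int i = 1; i <= parts; i++)     // part keys: "1+key", "2+key", ...
                {
                    int offset = (i - 1) * MaxChunk;
                    cache.Set(i + "+" + key,
                              value.Substring(offset, Math.Min(MaxChunk, value.Length - offset)));
                }
            }

            public static string Get(ISimpleCache cache, string key)
            {
                string head = cache.Get(key);
                if (head == null || !head.StartsWith("parts:")) return head;

                int parts = int.Parse(head.Substring("parts:".Length));
                var sb = new StringBuilder();
                for (int i = 1; i <= parts; i++)
                {
                    string chunk = cache.Get(i + "+" + key);
                    if (chunk == null) return null; // the poster's problem: any evicted part voids the value
                    sb.Append(chunk);
                }
                return sb.ToString();
            }
        }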

    Read the article

  • facebook authentication / login trouble

    - by salmane
    I have set up Facebook authentication using PHP, and it goes something like this. First, getting the authorization here:

        https://graph.facebook.com/oauth/authorize?client_id=<?= $facebook_app_id ?>&redirect_uri=http://www.example.com/facebook/oauth/&scope=user_about_me,publish_stream

    then getting the access token here:

        $url = "https://graph.facebook.com/oauth/access_token?client_id=".$facebook_app_id."&redirect_uri=http://www.example.com/facebook/oauth/&client_secret=".$facebook_secret."&code=".$code;

        function get_string_between($string, $start, $end) {
            $string = " ".$string;
            $ini = strpos($string, $start);
            if ($ini == 0) return "";
            $ini += strlen($start);
            $len = strpos($string, $end, $ini) - $ini;
            return substr($string, $ini, $len);
        }

        $access_token = get_string_between(file_get_contents($url), "access_token=", "&expires=");

    then getting the user info:

        $facebook_user = file_get_contents('https://graph.facebook.com/me?access_token='.$access_token);
        $facebook_id = json_decode($facebook_user)->id;
        $first_name = json_decode($facebook_user)->first_name;
        $last_name = json_decode($facebook_user)->last_name;

    This is pretty ugly (in my opinion), but it works. However, the user is still not logged in, because I did not create or retrieve any session variables to confirm that the user is logged in to Facebook, which means that after the authentication is done the user still has to log in. First: is there a better way in PHP to do what I did above? Second: how do I set/get the session variables / cookies that ensure the user doesn't have to click login? Thanks for your help.

    Read the article

  • Refining Search Results [PHP/MySQL]

    - by Dae
    I'm creating a set of search panes that allow users to tweak their results set after submitting a query. We pull commonly occurring values in certain fields from the results and display them in order of their popularity - you've all seen this sort of thing on eBay. So, if a lot of rows in our results were created in 2009, we'll be able to click "2009" and see only rows created in that year. What, in your opinion, is the most efficient way of applying these filters? My working solution was to discard entries from the results that didn't match the extra arguments, like:

        while ($row = mysql_fetch_assoc($query)) {
            foreach ($_GET as $key => $val) {
                if ($val !== $row[$key]) {
                    continue 2;
                }
            }
            // Output...
        }

    This method should hopefully only query the database once in effect, as adding filters doesn't change the query - MySQL can cache and reuse one data set. On the downside, it makes pagination a bit of a headache. The obvious alternative would be to build any additional criteria into the initial query, something like:

        $sql = "SELECT * FROM tbl WHERE MATCH (title, description) AGAINST ('$search_term')";
        foreach ($_GET as $key => $var) {
            // note: $key and $var come straight from the client here and would
            // need escaping/whitelisting before being used in real code
            $sql .= " AND ".$key." = ".$var;
        }

    Are there good reasons to do this instead? Or are there better options altogether? Maybe a temporary table? Any thoughts much appreciated!

    Read the article

  • Good way to allow people to select a lot of things?

    - by jphenow
    I'm using jQuery, ASP.NET, SQL Server, and the other usual suspects to design a company CRM. After they put in contact info, notes, dates, places and so forth, they have to be able to select many different people to be "CC'ed." A group of people will be required to be either "CC'ed" or "ToDo." The rest of the people can be nothing, or "CC," or "ToDo." Currently we have it set up as a huge databind to templates with radio buttons for each option. Looks like shit. Anyone have any suggestions? I'd like to use a template with a datasource and have a good way to retrieve their answers and use them. I'm leaning in the jQuery direction, but like I said, I'll need there to be up to 3 possible options for the people. This is going to be all opinion, so I'm just looking for options. Just to re-clarify: this concept is similar to email, but I don't want them to have to type anything in, as it is a set group of names that they're allowed to select from. Looking for quick, simple, and pretty; somewhere in the range of 120 names.

    Read the article

  • Omit return type in C++0x

    - by Clinton
    I've recently found myself using the following macro with gcc 4.5 in C++0x mode:

        #define RETURN(x) -> decltype(x) { return x; }

    And writing functions like this:

        template <class T>
        auto f(T&& x) RETURN(( g(h(std::forward<T>(x))) ))

    I've been doing this to avoid the inconvenience of having to effectively write the function body twice, and of having to keep changes in the body and the return type in sync (which in my opinion is a disaster waiting to happen). The problem is that this technique only works on one-line functions. So when I have something like this (convoluted example):

        template <class T>
        auto f(T&& x) -> ...
        {
            auto y1 = f(x);
            auto y2 = h(y1, g1(x));
            auto y3 = h(y1, g2(x));
            if (y1) { ++y3; }
            return h2(y2, y3);
        }

    Then I have to put something horrible in the return type. Furthermore, whenever I update the function, I'll need to change the return type, and if I don't change it correctly, I'll get a compile error if I'm lucky, or a runtime bug in the worst case. Having to copy and paste changes to two locations and keep them in sync is not good practice, I feel. And I can't think of a situation where I'd want an implicit cast on return instead of an explicit cast. Surely there is a way to ask the compiler to deduce this information. What is the point of the compiler keeping it a secret? I thought C++0x was designed so such duplication would not be required.

    Read the article

  • How to extract byte-array from one xml and store it in another in Java

    - by grobartn
    So I am using DocumentBuilderFactory and DocumentBuilder to parse an XML file, so it is a DOM parser. What I am trying to do is extract byte-array data (it's an image encoded in base64), store it in one object, and later in the code write it out to another XML document, encoded in base64. What is the best way to store it in between: as a String, or as a byte array? How can I best extract the byte-array data and write it out? I am not experienced with this, so I wanted to get the group's opinion. UPDATE: I am given the XML; I have no control over the incoming XML, which arrives base64 encoded:

        <byte-array> ... base64 encoded image ... </byte-array>

    Using the parser I have, I need to store this node, and the question is whether that should be a byte array or a String, and then write it out to another node in a new XML document, again base64 encoded. Thanks.

    Read the article

  • Is using private shared objects/variables at class level harmful?

    - by haansi
    Hello, thanks for your attention and time. I need your opinion on a basic architectural issue, please. In page code-behind classes I am using private, shared objects and variables (a list, or just a client, or simply an int id) to temporarily hold data coming from the database or a class library. Such an object is used to temporarily catch data and then to return it, pass it to some function, or bind a control. 1st: Can this approach do any harm in any way? I couldn't analyze it myself, but one thought was that with such shared variables, the data in them may get replaced when multiple users send requests at the same time. 2nd: Please comment also on using such variables in the BLL (to hold data coming from the DAL/database). In this example, a new object of the BLL class will be made every time. Here is sample code:

        public class ClientManager
        {
            Client objclient = new Client();                  // used in the 3rd method
            List<Client> clientlist = new List<Client>();     // used in the 1st and 2nd methods
            ClientRepository objclientRep = new ClientRepository();

            public List<Client> GetClients()
            {
                return clientlist = objclientRep.GetClients();
            }

            public List<Client> SearchClients(string Keyword)
            {
                return clientlist = objclientRep.SearchClients(Keyword);
            }

            public Client GetaClient(int ClientId)
            {
                return objclient = objclientRep.GetaClient(ClientId);
            }

            public Client GetClientDetailForConfirmOrder(int UserId)
            {
                return objclientRep.GetClientDetailForConfirmOrder(UserId);
            }
        }

    I am really thankful to you for sparing the time and paying kind attention.
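
    For contrast, here is a field-free sketch of the same class (reusing the Client and ClientRepository types from the question): each result lives only on the call's stack and is returned directly, so there is no class-level state for concurrent callers to overwrite.

        using System.Collections.Generic;

        public class ClientManager
        {
            // One repository per manager instance; no result fields held at class level.
            private readonly ClientRepository _repository = new ClientRepository();

            public List<Client> GetClients()
            {
                return _repository.GetClients(); // returned directly, never stored in a field
            }

            public Client GetaClient(int clientId)
            {
                return _repository.GetaClient(clientId);
            }
        }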

    Read the article

  • ASP.NET - What is the best way to block usage of the application?

    - by Tufo
    Our clients must pay a monthly fee... if they don't, what is the best way to block usage of the ASP.NET software? Note: the application runs on the client's own server; it's not a SaaS app. My ideas are:

    Idea 1: Host a web service on the internet that the application will use to find out whether the client may use the software.
    Issue 1: What happens if the client's internet fails? Or the data center fails?
    Possible answer: Make each web service access send back a key that is valid for 7 or 15 days, so each web service call will enable the software to run another 7 or 15 days. This way the application will only be locked after 7 or 15 days without consulting our web service.
    Issue 2: And if the client doesn't have, or doesn't want to enable, internet access for the application?

    Idea 2: Send a key to the client monthly.
    Issue: How to make an offline key?
    Possible answer: Generate a hash using the "limit" date, so each login attempt in the software compares today's hash with the key?
    Issue 2: Where to store the key?
    Possible answer: Database (not good, too easy to change), text file, registry, code file, assembly...

    Any opinion will be very appreciated!
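
    A hedged sketch of the "hash of the limit date" idea from Idea 2. The secret necessarily ships inside the assembly, so anyone who decompiles the app can recover it; this deters casual tampering, nothing more. All names here are illustrative:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        public static class OfflineLicense
        {
            // Assumption: a shared secret compiled into the assembly.
            static readonly byte[] Secret = Encoding.UTF8.GetBytes("replace-with-your-secret");

            // Vendor side: key = expiry date + HMAC(expiry date).
            public static string CreateKey(DateTime expiresUtc)
            {
                string date = expiresUtc.ToString("yyyy-MM-dd");
                using (var hmac = new HMACSHA256(Secret))
                {
                    string sig = Convert.ToBase64String(
                        hmac.ComputeHash(Encoding.UTF8.GetBytes(date)));
                    return date + ":" + sig;
                }
            }

            // Client side, on each login: recreate the key from the embedded date.
            // Any tampering with the date breaks the match.
            public static bool IsValid(string key, DateTime nowUtc)
            {
                var parts = key.Split(new[] { ':' }, 2);
                DateTime expires;
                if (parts.Length != 2 || !DateTime.TryParse(parts[0], out expires)) return false;
                return CreateKey(expires) == key && nowUtc.Date <= expires.Date;
            }
        }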

    Read the article

  • Gathering entropy in web apps to create (more) secure random numbers

    - by H M
    After several days of research and discussion I came up with this method to gather entropy from visitors (you can see the history of my research here). When a user visits, I run this code:

        $entropy = sha1(microtime().$pepper.$_SERVER['REMOTE_ADDR'].$_SERVER['REMOTE_PORT'].
            $_SERVER['HTTP_USER_AGENT'].serialize($_POST).serialize($_GET).serialize($_COOKIE));

    Note: $pepper is a per-site/setup random string set by hand. Then I execute the following (My)SQL query:

        $query = "update `crypto` set `value`=sha1(concat(`value`, '$entropy')) where name='entropy'";

    That means we combine the entropy of the visitor's request with what has already been gathered. That's all. Then, when we want to generate random numbers, we combine the gathered entropy with the output:

        $query = "select `value` from `crypto` where `name`='entropy'";
        // ...
        extract(unpack('Nrandom', pack('H*', sha1(mt_rand(0, 0x7FFFFFFF).$entropy.microtime()))));

    Note: the last line is part of a modified version of the crypt_rand function of phpseclib. Please tell me your opinion about the scheme, and share other ideas/info regarding entropy gathering and random number generation. PS: I know about randomness sources like /dev/urandom. This system is just an auxiliary system, or (when we don't have (access to) those sources) a fallback scheme.

    Read the article

  • How do the major C# DI/IoC frameworks compare?

    - by Slomojo
    At the risk of stepping into holy war territory: what are the strengths and weaknesses of these popular DI/IoC frameworks, and could one easily be considered the best?

    - Ninject
    - Unity
    - Castle.Windsor
    - Autofac
    - StructureMap

    Are there any other DI/IoC frameworks for C# that I haven't listed here? In the context of my use case, I'm building a client WPF app and a WCF/SQL services infrastructure; ease of use (especially in terms of clear and concise syntax), consistent documentation, good community support, and performance are all important factors in my choice. Update: The resources and duplicate questions cited appear to be out of date. Can someone with knowledge of all these frameworks come forward and provide some real insight? I realise that most opinion on this subject is likely to be biased, but I am hoping that someone has taken the time to study all these frameworks and can offer an at least generally objective comparison. I am quite willing to make my own investigations if this hasn't been done before, but I assumed this was something at least a few people had done already. Second update: If you do have experience with more than one DI/IoC container, please rank and summarise the pros and cons of those. Thank you. This isn't an exercise in discovering all the obscure little containers that people have made; I'm looking for comparisons between the popular (and active) frameworks.
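
    Not a comparison, but for readers who haven't used any of these: the register-then-resolve shape is broadly similar across all five. A minimal sketch using Ninject's API (the IMessageSender/EmailSender types are made up for the example):

        using Ninject;

        public interface IMessageSender { void Send(string message); }

        public class EmailSender : IMessageSender
        {
            public void Send(string message) { /* ... */ }
        }

        class Program
        {
            static void Main()
            {
                // Register: tell the container which concrete type backs the interface.
                var kernel = new StandardKernel();
                kernel.Bind<IMessageSender>().To<EmailSender>();

                // Resolve: the container builds the object (and its dependencies) for you.
                var sender = kernel.Get<IMessageSender>();
                sender.Send("hello");
            }
        }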

    Read the article

  • JAXB code generation: how to remove a zero occurrence field?

    - by reef
    Hi all, I use JAXB 2.1 to generate Java classes from several XSD files, and I have a problem related to complex type restriction. One of the restrictions modifies the occurrence configuration from minOccurs="0" maxOccurs="unbounded" to minOccurs="0" maxOccurs="0". Thus this field is not needed anymore in the restricted type. But JAXB actually generates the restricted class with a [0..1] cardinality instead of 0. By the way, the generation is tuned with <xjc:treatRestrictionLikeNewType /> so that an XSD restriction is not mapped to a Java class inheritance. Here is an example. This is the way a field is defined in a complex type A:

        <element name="qualifier" type="CR" maxOccurs="unbounded" minOccurs="0"/>

    Here is the way the same field is restricted in another complex type B that restricts A:

        <element name="qualifier" type="CR" minOccurs="0" maxOccurs="0"/>

    In the generated class A I have:

        @XmlElement(name = "qualifier")
        protected List<CR> qualifiers;

    And in the generated class B I have:

        protected CR qualifiers;

    With my poor understanding of JAXB, the absence of the XmlElement annotation tells JAXB not to marshal/unmarshal this field. Am I wrong? If I am right, is there a way to tell JAXB not to generate the qualifiers field at all? This would, in my opinion, be a much better generation, as it respects the constraints. Any ideas or thoughts on the topic? Thanks!!

    Read the article

  • What Getters and Setters should and shouldn't do.

    - by cyclotis04
    I've run into a lot of differing opinions on getters and setters lately, so I figured I should make it into its own question. A previous question of mine received an immediate comment (later deleted) stating that setters shouldn't have any side effects, and that a SetProperty method would be a better choice. Indeed, this seems to be Microsoft's opinion as well. However, their properties often raise events, such as Resized when a form's Width or Height property is set. OwenP also states: "you shouldn't let a property throw exceptions, properties shouldn't have side effects, order shouldn't matter, and properties should return relatively quickly." Yet Michael Stum states that exceptions should be thrown while validating data within a setter. If your setter doesn't throw an exception, how could you effectively validate data, as so many of the answers to this question suggest? What about when you need to raise an event, as nearly all of Microsoft's controls do? Aren't you then at the mercy of whoever subscribed to your event? If their handler performs a massive amount of work, or throws an error itself, what happens to your setter? Finally, what about lazy loading within the getter? This too could violate the previous guidelines. What is acceptable to place in a getter or setter, and what should be kept only in accessor methods?
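
    For concreteness, a small illustration of the two disputed behaviours in one property: a setter that throws on invalid data and then raises a change event (the Thermostat class is invented for the example):

        using System;

        public class Thermostat
        {
            int _celsius;

            // Subscribers run inside the setter; a slow or throwing handler
            // becomes the setter's problem, which is the objection raised above.
            public event EventHandler CelsiusChanged;

            public int Celsius
            {
                get { return _celsius; }
                set
                {
                    // Validation in the setter: one camp says throw here, the other
                    // says keep properties exception-free and use a SetCelsius method.
                    if (value < -273)
                        throw new ArgumentOutOfRangeException("value");

                    _celsius = value;

                    var handler = CelsiusChanged;
                    if (handler != null) handler(this, EventArgs.Empty);
                }
            }
        }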

    Read the article

  • What's the best choice career-wise: to know a little about a lot, or a lot about a little?

    - by nimo
    I work as a developer at a rather small company, and we provide a web application that is used by a big base of customers. Because we are so small, everyone has to be able to do a lot of different tasks. They range from advanced support, to developing the product (programming: C/C++, C#, PHP, SQL, JavaScript, HTML, CSS), to handling network configuration and network-related issues, and even sometimes going along to sales meetings with potential customers. My concern is that I don't really specialize in any specific area. I know, and keep learning, a little about a lot. I graduated from school two years ago, this is my first real employment, and when I look at other positions out there, they always require so-and-so many years of experience in a specific area (for example, 5 years of C#). For me, getting that kind of specialized experience will be really hard at my current job. My question for you is: what is, in your opinion, the best choice career-wise, to know a little about a lot or a lot about a little? What path did you take? What are the pros and cons that come with that choice?

    Read the article

  • ASP.NET MVC Unit Testing Controllers - Repositories

    - by Brian McCord
    This is more of an opinion-seeking question, so there may not be a "right" answer, but I would welcome arguments as to why your answer is the "right" one. Given an MVC application that is using Entity Framework for the persistence engine, a repository layer, a service layer that basically defers to the repository, and a delete method on a controller that looks like this:

        public ActionResult Delete(State model)
        {
            try
            {
                if (model == null)
                {
                    return View(model);
                }
                _stateService.Delete(model);
                return RedirectToAction("Index");
            }
            catch
            {
                return View(model);
            }
        }

    I am looking for the proper way to unit test this. Currently, I have a fake repository that gets used in the service, and my unit test looks like this:

        [TestMethod]
        public void Delete_Post_Passes_With_State_4()
        {
            // Arrange
            var stateService = GetService();
            var stateController = new StateController(stateService);
            ViewResult result = stateController.Delete(4) as ViewResult;
            var model = (State)result.ViewData.Model;

            // Act
            RedirectToRouteResult redirectResult = stateController.Delete(model) as RedirectToRouteResult;
            stateController = new StateController(stateService);
            var newresult = stateController.Delete(4) as ViewResult;
            var newmodel = (State)newresult.ViewData.Model;

            // Assert
            Assert.AreEqual(redirectResult.RouteValues["action"], "Index");
            Assert.IsNull(newmodel);
        }

    Is this overkill? Do I need to check whether the record actually got deleted (as I already have service and repository tests that verify this)? Should I even use a fake repository here, or would it make more sense just to mock the whole thing? The examples I'm looking at used this model of doing things, and I just copied it, but I'm really open to doing things in a "best practices" way. Thanks.
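
    On the "mock the whole thing" alternative the question raises, a minimal sketch with Moq, assuming the controller takes an IStateService interface (the interface name is a guess based on the _stateService field):

        using System.Web.Mvc;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using Moq;

        [TestClass]
        public class StateControllerMockTests
        {
            [TestMethod]
            public void Delete_Post_Redirects_To_Index_And_Delegates_To_Service()
            {
                // Arrange: no fake repository, just a mocked service boundary.
                var service = new Mock<IStateService>();
                var controller = new StateController(service.Object);
                var model = new State();

                // Act
                var result = controller.Delete(model) as RedirectToRouteResult;

                // Assert: the controller redirected and delegated exactly once.
                Assert.IsNotNull(result);
                Assert.AreEqual("Index", result.RouteValues["action"]);
                service.Verify(s => s.Delete(model), Times.Once());
            }
        }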

    Read the article

  • Are there good reasons not to use an ORM?

    - by hangy
    During my apprenticeship, I have used NHibernate for some smaller projects which I mostly coded and designed on my own. Now, before starting a bigger project, the discussion arose of how to design data access and whether or not to use an ORM layer. As I am still in my apprenticeship and still consider myself a beginner in enterprise programming, I did not really try to push my opinion, which is that using an object-relational mapper to the database can ease development quite a lot. The other coders in the development team are much more experienced than me, so I think I will just do what they say. :-) However, I do not completely understand two of the main reasons for not using NHibernate or a similar project: One can just build one's own data access objects with SQL queries and copy those queries out of Microsoft SQL Server Management Studio. Debugging an ORM can be hard. So, of course I could just build my data access layer with a lot of SELECTs etc., but here I miss the advantage of automatic joins, lazy-loading proxy classes, and a lower maintenance effort if a table gets a new column or a column gets renamed. (Updating numerous SELECT, INSERT and UPDATE queries vs. updating the mapping config and possibly refactoring the business classes and DTOs.) Also, using NHibernate you can run into unforeseen problems if you do not know the framework very well. That could be, for example, trusting the Table.hbm.xml where you set a string's length to be automatically validated. However, I can also imagine similar bugs in a "simple" SqlConnection query based data access layer. Finally, are those arguments mentioned above really a good reason not to utilise an ORM for a non-trivial database-based enterprise application? Are there perhaps other arguments they/I might have missed? (I should probably add that I think this is like the first "big" .NET/C# based application which will require teamwork. Good practices which are seen as pretty normal on Stack Overflow, such as unit testing or continuous integration, are non-existent here up to now.)

    Read the article

  • Typical SVN repo structure seems to be sub-optimal for continuous integration...

    - by Dave
    I've set up our SVN repository like the Subversion book suggests, and this is also how my previous companies have done it. It looks something like this:

        /trunk
        /branches
        /tags
        /extlibs
        /docs

    where the first three are pretty obvious, and extlibs is for 3rd-party assemblies that we wouldn't typically recompile ourselves. All of this works great for the daily development stuff. Now I've installed TeamCity and have builds, unit tests, code coverage, and code analysis running. Everything is great, except for the fact that this code structure results in too much code getting downloaded. So here's the catch-22, in my opinion: it's silly to download all of the aforementioned folders from the SVN repo when I only need /trunk and /extlibs. But I can only specify one repo folder to download in the TeamCity VCS settings. So then the other possibility is to put the /extlibs folder into /trunk, but in order to compile branches, /extlibs would have to go into all of those as well (since I usually branch the trunk, and not individual subfolders)... and this would seem infinitely more evil, since /extlibs could actually be larger than /trunk and /branches, with all of the binaries stored there... Do you guys have any suggestions for me? Thanks!

    Read the article

  • Should developers *really* have private offices?

    - by Aron Rotteveel
    We will probably be moving within a year, so we have to make some decisions regarding office layout. At the moment, our company is basically one big office. When our developers don't want to be disturbed at all, we all have our own headphones to mute the outside world. Still, it seems a lot of people feel that private offices are no doubt the way to go. From Joel's article Private Offices Redux: "Not every programmer in the world wants to work in a private office. In fact quite a few would tell you unequivocally that they prefer the camaraderie and easy information sharing of an open space. Don't fall for it. They also want M&Ms for breakfast and a pony. Open space is fun but not productive." Even though I can understand the benefit to productivity, does having a private office really result in more net productivity? There seem to be plenty of companies that create wide open spaces and still maintain good productivity. Or so it seems. (I should mention many of them use cubicles, though.) What is your opinion on this? What does your company do? Is there some middle ground here? Some more related information on this matter: Private Offices Redux; The new Fog Creek office; A Field Guide to Developers; the Gmail recruitment page. I found this last one somewhat remarkable, since the Gmail recruitment page promotes the "wide open space" idea.

    Read the article

  • SQL Structure of DB table with different types of columns

    - by Dmitry Dvornikov
    I have a problem with optimizing the structure of a database. I'll try to explain it exactly. I am creating a project where we can add different values, but these values must have different column types in the database (e.g. int, double, varchar). What is the best way to store values of different types in the database? In the project I'm using Propel 1.6. The point is the ability to add a value with an 'int', 'varchar' or other column type, so that searching the table is efficient. In total, I have two ideas. The first is to create a table "value" which will have the columns "id", "value_int", "value_double", "value_varchar", etc., with the corresponding column types. Depending on the type of the value, records will be saved with the value in the appropriate column (the rest will be NULL). The second solution is to create separate tables such as "value_int", "value_varchar", etc. These would have the columns "id" and "value", where "value" has the relevant type (int, varchar, etc.). I must admit that I do not believe in either of the above solutions; originally I was thinking about one "value" table where the value column would be of type "text", but that solution would probably be even worse. I would like to know your opinion on this topic; maybe something else would be better. Thanks in advance. EDIT: For example, we have three tables:

        USER (table of users)
            * id
            * name

        FIELD (table of profile fields, where the column 'type' is the type of the field, e.g. int or varchar)
            * id
            * type
            * name

        VALUE
            * id
            * user_id  (FK user.id)
            * field_id (FK field.id)
            * value

    So we have a user in the USER table in each row, and the profile is stored in the VALUE table. But each profile field may have a different type (column 'type' in the FIELD table), and based on that I would want the value to go into the appropriate column of the appropriate type.

    Read the article

  • Setting up a web development/build environment

    - by Eric
    Hello all, my current project has a development web server and a live web server. Developers make changes to files on the dev server and test them (by going to the dev address) and make changes as necessary. When the file or files are ready to go, they are copied to the live server. There is no version control. As you might expect, there are some problems with this model:

    - It's hard to keep track of what other programmers have done.
    - It's hard to keep track of what files should be copied to the live server.
    - There is no version control.

    I'm in a position to make nearly any change I like, but I want it to be the right one! I have been turning this over in my head for quite a while, and I have a solution that might be okay. But I want SO's opinion. Certainly version control needs to be added. But how should it work with the existing codebase, and where should the developers be testing? How can anyone know what needs to be moved to the live server? What other details need to be addressed? How would you attack this problem? Supplementary information: The website is vital, but not mission-critical. A small amount of downtime is acceptable. There are very few developers. (Right now, only 4.) History: Before I started, the project used Visual SourceSafe. This was a sufficiently bad experience that they quit using it and abandoned version control. The project is an ASP.NET (C#) website. This seems like a question that may have a complicated answer. Thanks for thinking about it!

    Read the article

  • When should I implement globalization and localization in C#?

    - by Geo Ego
    I am cleaning up some code in a C# app that I wrote, really trying to focus on best practices and coding style. As such, I am running my assembly through FxCop and trying to research each message it gives me to decide what should and shouldn't be changed. What I am currently focusing on are locale settings. For instance, the two errors that I currently have are that I should be specifying the IFormatProvider parameter for Convert.ToString(int), and setting the DataSet and DataTable locale. This is something that I've never done and never put much thought into; I've always just left that overload out. The current app that I am working on is an internal app for a small company that will very likely never need to run in another country. As such, it is my opinion that I do not need to set these at all. On the other hand, doing so would not be such a big deal, but it seems unnecessary and could hinder readability to a degree. I understand that Microsoft's contention is to use it if it's there, period. Well, I'm technically supposed to call Dispose() on every object that implements IDisposable, but I don't bother doing that with DataSets and DataTables, so I wonder what the practice is "in the wild."
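
    For reference, the overload FxCop is asking for just makes the culture explicit. A small sketch of the difference, using a double so the culture actually shows up in the output:

        using System;
        using System.Globalization;

        class FormatProviderDemo
        {
            static void Main()
            {
                double value = 1234.5;

                // Culture-sensitive: "1234,5" on a de-DE machine, "1234.5" on en-US.
                Console.WriteLine(Convert.ToString(value));

                // What FxCop wants: the culture is explicit, so the output is predictable.
                Console.WriteLine(Convert.ToString(value, CultureInfo.InvariantCulture));
                Console.WriteLine(Convert.ToString(value, CultureInfo.CurrentCulture));
            }
        }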

    Read the article

  • When is Googling it wrong?

    - by Drahcir
    I've been going through Stack Overflow for quite a bit now and have noticed that certain people (usually experienced programmers) frown upon Googling (researching) certain problems. Since I myself tend to use Google quite a bit to solve certain programming-related issues, I found certain comments rather demoralising. Now, some of you may have come here trigger-happy to delete this post, but I needed some clarification. I usually Google things that are syntax-related, which I would never have figured out on my own. For example, I once wondered how to access the properties of a class that I didn't have a direct relationship to. So after a bit of research I discovered reflection and got what I wanted. Another scenario is learning a new language, in my case Silverlight, where it differs in certain aspects of .NET compared to, say, ASP.NET. A few weeks ago I had no idea how to load another Silverlight page (UserControl) and had to Google my way to the solution, which I found wasn't as simple as I had imagined. Scenario three is where I myself frown upon it: just stealing a huge chunk of code to avoid doing the work yourself, for example paging an HTML table using JavaScript, where one just copies and pastes the JavaScript code without so much as trying to understand how it works. I do admit I have done this once or twice before, for trivial tasks that had very little time limit and weren't all that important, but most of the time I still have to throw away what I found, because it took too much time to adapt it and get what I wanted out of it. In the last scenario, I sometimes have a piece of code that I'm really unhappy about, as in I find it sloppy or too overcomplicated, and I try to look on the Internet to see other ways to tackle the same problem, let's say filtering through a table. With the knowledge I acquire, I learn new coding practices that help me work more efficiently, like "Don't repeat yourself" and such. Now, in your opinion, when do you find it wrong to use Google (or any other research tool) to find a solution to your problem?

    Read the article
