Search Results

Search found 28672 results on 1147 pages for 'best practise'.

Page 5/1147 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Best practice in setting return value (use else or?)

    - by Deckard
    Whenever you want to return a value from a method, but whatever you return depends on some other value, you typically use branching:

        int calculateSomething() {
            if (a == b) {
                return x;
            } else {
                return y;
            }
        }

    Another way to write this is:

        int calculateSomething() {
            if (a == b) {
                return x;
            }
            return y;
        }

    Is there any reason to avoid one or the other? Both allow adding "else if" clauses without problems. Both typically generate compiler errors if you add anything at the bottom. Note: I couldn't find any duplicates, although multiple questions exist about whether the accompanying curly braces should be on their own line. So let's not get into that.

    Read the article

  • GWT Best Practices - MVP

    - by GWTNewbie
    A question for all the GWT gurus out there. I'm a newbie in GWT and am trying to understand the best practices for coding a GWT application. I have gone through "Large scale application development and MVP", based on Ray Ryan's talk at Google I/O 2009, and it has given me a good starting point. I also downloaded the sample source code for the Contacts application based on the best practices listed. The application I'm trying to develop is a bit bigger (in terms of the modules involved) than the sample "Contacts" application, so I want to split it up into multiple functional modules. I have been reading that having a single entry point in a GWT application is a good idea, but I don't want to dump all the code into one single AppController class and one single RpcService. What would be the best approach in this situation? How would I go about dispatching control to multiple controllers? Is there a way to achieve this using some classes in the GWT framework?
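
    One hedged possibility, in the spirit of Ray Ryan's talk (the class names below are hypothetical, not from the sample): keep the single entry point and a single top-level AppController, but let it do nothing except route history tokens to per-module controllers that share the event bus:

        import java.util.HashMap;
        import java.util.Map;

        import com.google.gwt.event.logical.shared.ValueChangeEvent;
        import com.google.gwt.event.logical.shared.ValueChangeHandler;
        import com.google.gwt.event.shared.HandlerManager;
        import com.google.gwt.user.client.History;

        // ModuleController is a hypothetical one-method interface: void go(String token);
        public class AppController implements ValueChangeHandler<String> {

            private final Map<String, ModuleController> modules =
                    new HashMap<String, ModuleController>();

            public AppController(HandlerManager eventBus) {
                History.addValueChangeHandler(this);
                // Each module gets its own controller (and its own RpcService);
                // this map is the only central registry.
                modules.put("contacts", new ContactsController(eventBus));
                modules.put("orders", new OrdersController(eventBus));
            }

            public void onValueChange(ValueChangeEvent<String> event) {
                String token = event.getValue();              // e.g. "contacts/list"
                ModuleController module = modules.get(token.split("/")[0]);
                if (module != null) {
                    module.go(token);                         // the module picks the presenter
                }
            }
        }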

    Read the article

  • Best Practice for Images with Codeigniter : Generating Thumbs or Resizing on the Fly

    - by Steve K
    Hi all, I know there’s been a good deal written on thumbnail generation and the like with CI, but I wanted to explain what I’ve made and see what kind of best-practice advice I could find. Here’s my story…

    Currently, I have a site which allows users to upload collections of photos to projects they’ve created after first creating an account. Upon account creation, the site generates folders for the users in the following fashion for each of five pre-defined projects: /students/username/project_num/images/thumbs/ (that is to say, within a pre-created students folder, the username, project_num, images and thumbs folders are created recursively, five times). When a user uploads images to a project, I have a gallery controller which uploads the full images into the images folder for the project_num, and then creates a smaller thumbnail which maintains its ratio. So far so good.

    On the index page of the site, where these thumbnails and full images are displayed, I had a bit of a brain lapse, thinking I could simply output the full image while resizing it via CSS for a 'medium-size' image which would lead to the full-size image when clicked. (To be clear, the path is: click on thumbnail -> load scaled full-size (medium-size) image via ajax into a display area above the thumbs -> click on medium-sized image -> load full-size image via lightbox, or something of that nature.) I have everything working to this point, except, as one might imagine, resizing the full-sized images with CSS doesn’t maintain the aspect ratio the way the thumbs do, which means I need to find the best way to resize these.

    In thinking about it, I figured I had two options. The first is to resize the image on the fly when the user clicks a thumbnail to load the medium-sized image via ajax. (I have a method get_image($url) in my gallery controller which simply loads a view with an image tag and the image source passed to it, etc.) I thought perhaps I could send it first to my gallery model, resize it there on the fly, and send it on to the view. The problem I’m having is that resizing it on the fly and echoing it out gives me the raw image data (I apologize if that’s not the right term). I’ve tried using data URIs to format the raw data into something echoable, but with no success. Is this method possible?

    The second option is to generate a second, medium-sized thumbnail when the user uploads the image, with maintain_ratio set to true. This method is slightly less ideal, given that when providing a way for users to delete their projects, I’ll need to scan for an additional set of images to delete. Not a huge deal, definitely, but something I figured could be avoided by generating the medium-sized image on the fly.

    I hope I’ve been clear in my explanations, if long-winded! I’m very curious to see what suggestions folks have about the best way to handle this. Much thanks for reading, and any suggestions are much-appreciated! Steve K.
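
    A hedged sketch of the second option, using CodeIgniter's Image Manipulation class in the gallery controller (the paths and sizes are placeholders): generate the ratio-preserving medium copy once at upload time rather than on every request:

        // After the full image has been uploaded. A second resize pass writes
        // a medium copy; setting new_image avoids the automatic _thumb suffix
        // that create_thumb would add.
        $config['image_library']  = 'gd2';
        $config['source_image']   = $full_path;     // e.g. .../images/photo.jpg
        $config['new_image']      = $medium_path;   // e.g. .../images/medium/photo.jpg
        $config['create_thumb']   = FALSE;
        $config['maintain_ratio'] = TRUE;
        $config['width']          = 400;
        $config['height']         = 400;

        $this->load->library('image_lib', $config);
        $this->image_lib->resize();
        $this->image_lib->clear();   // reset before any further operations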

    Read the article

  • best-practices for displaying new view controllers ( iPhone )

    - by Tristan
    I need to display a couple of view controllers (e.g., login screen, registration screen, etc.). What's the best way to bring each screen up? Currently, for each screen that I'd like to display, I call a different method in the app delegate like this:

        - (void)registerScreen {
            RegistrationViewController *reg = [[RegistrationViewController alloc]
                initWithNibName:@"RegistrationViewController" bundle:nil];
            [window addSubview:reg.view];
        }

        - (void)loginScreen {
            LoginViewController *log = [[LoginViewController alloc]
                initWithNibName:@"LoginViewController" bundle:nil];
            [window addSubview:log.view];
        }

    It works, but I can't imagine it being the best way.
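
    One hedged alternative, assuming a navigation-based layout is acceptable: make a UINavigationController the window's root and push each screen onto it, so UIKit tracks what to return to when a screen is dismissed and the controllers stop leaking:

        - (void)showLoginScreen {
            LoginViewController *login = [[LoginViewController alloc]
                initWithNibName:@"LoginViewController" bundle:nil];
            [navigationController pushViewController:login animated:YES];
            [login release];   // the navigation stack retains it (pre-ARC)
        }

        // Dismissing is then symmetric, with no bookkeeping in the app delegate:
        // [navigationController popViewControllerAnimated:YES];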

    Read the article

  • Sitemaps on multiple front end servers using a http handler, is that a good idea?

    - by Rihan Meij
    Question 1: We would like to generate a sitemap for our CMS site. We have multiple front end servers and approximately a million articles.

    Environment:
      - multiple MS SQL servers
      - multiple front end servers (load balanced)
      - ASP.NET and IIS 6
      - Windows 2003

    Having the sitemaps (the sitemap index file and the sitemap files) physically on the front end servers would be an operations nightmare and error prone, so we are considering using HTTP handlers instead; that way it does not matter which server gets the request, it will be able to serve the correct XML file.

    Question 2: If we ping Google each time we publish a new article, will that affect us negatively if it happens more than once an hour? Thanks!
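
    A minimal sketch of the handler idea (the class and helper names are hypothetical): each front end server answers /sitemap.xml itself from the shared database, so nothing has to be kept in sync on disk:

        using System.Web;

        public class SitemapHandler : IHttpHandler
        {
            public bool IsReusable
            {
                get { return true; }
            }

            public void ProcessRequest(HttpContext context)
            {
                context.Response.ContentType = "text/xml";
                // BuildSitemapXml() is assumed to read article URLs from SQL Server;
                // its output should be cached (e.g. in HttpRuntime.Cache) so crawls
                // don't hit the database every time.
                context.Response.Write(BuildSitemapXml());
            }

            private string BuildSitemapXml()
            {
                // ... query the shared article store and emit <urlset> entries ...
                return "<urlset xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\"></urlset>";
            }
        }

    On IIS 6 the handler would be registered in web.config under <httpHandlers>, with the chosen extension mapped to ASP.NET in IIS, so the same URL works on every load-balanced node.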

    Read the article

  • Best Practices for working with files via c#

    - by user177883
    The application I work on generates several hundred files in a 15-minute period, and the back end of the application takes these files and processes them (updates the database with their values). One problem is database locks. What are the best practices for working with several thousand files so as to avoid locking and process the files efficiently? Would it be more efficient to create a single file and process it, or to process a single file at a time? What are some common best practices?
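
    One hedged approach (the table and column names below are hypothetical): stage many small files into one in-memory batch and write it in a single bulk operation, so the database takes one lock per batch instead of one per file:

        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        static void ImportBatch(string folder, string connectionString)
        {
            DataTable staging = new DataTable();
            staging.Columns.Add("FileName", typeof(string));
            staging.Columns.Add("Contents", typeof(string));

            foreach (string path in Directory.GetFiles(folder, "*.dat"))
            {
                staging.Rows.Add(Path.GetFileName(path), File.ReadAllText(path));
            }

            using (SqlBulkCopy bulk = new SqlBulkCopy(connectionString))
            {
                bulk.DestinationTableName = "dbo.IncomingFiles";
                bulk.WriteToServer(staging);
            }
        }

    Whether one big batch beats many small ones depends on the locking granularity involved, so the batch size is best treated as a tuning knob.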

    Read the article

  • pointer as second argument instead of returning pointer?

    - by Tyler
    I noticed that it is a common idiom in C to accept an un-malloced pointer as a second argument instead of returning a pointer. Example:

        /* function prototype */
        void create_node(node_t* new_node, void* _val, int _type);

        /* usage */
        node_t* n;
        create_node(n, &someint, INT);

    Instead of:

        /* function prototype */
        node_t* create_node(void* _val, int _type);

        /* usage */
        node_t* n = create_node(&someint, INT);

    What are the advantages and/or disadvantages of both approaches? Thanks!
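
    A hedged aside (the node_t layout below is invented for the example): as posted, the first form cannot work, because create_node receives the uninitialized pointer by value and the caller never sees any allocation. The two idioms in working form look like this:

        #include <stdlib.h>

        typedef struct node { void *val; int type; } node_t;

        /* Caller-allocated: no hidden malloc, the caller controls the lifetime. */
        void init_node(node_t *node, void *val, int type) {
            node->val = val;
            node->type = type;
        }

        /* Callee-allocated through an out-parameter (node_t **), which leaves
           the return value free for an error code. */
        int create_node(node_t **out, void *val, int type) {
            node_t *n = malloc(sizeof *n);
            if (n == NULL)
                return -1;
            n->val = val;
            n->type = type;
            *out = n;
            return 0;
        }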

    Read the article

  • Asp.Net Program Architecture

    - by Pino
    I've just taken on a new ASP.NET MVC application, and after opening it up I find the following projects:

        [Project].Web
        [Project].Models
        [Project].BLL
        [Project].DAL

    Now, something that's become clear is that the data has to do a hell of a lot before it makes it to the View (Database -> DAL -> Repo -> BLL -> ConvertToModel -> Controller -> View). The DAL is SubSonic; the repositories in the DAL return the SubSonic entities to the BLL, which processes them, does crazy things, and converts them into a Model (from .Models), sometimes with methods that look like this:

        public DataModel GetDataModel(EntityObject Src)
        {
            var ReturnData = new DataModel();
            ReturnData.ID = Src.ID;
            ReturnData.Name = Src.Name;
            // etc etc
            return ReturnData;
        }

    Now, the question is: is this complete overkill? OK, the project is of a decent size and can only get bigger, but is it worth carrying on with all this? I don't want to use AutoMapper, as it just seems to make the complication worse. Can anyone shed any light on this?

    Read the article

  • Best practice? iphone: sync data

    - by Andy Jacobs
    So I'm working on a project where there is data visualization. My ultimate goal is to have a set of data shipped with the download of the iPhone app, but I want it connected to a backend, so that if the iPhone has an internet connection it can sync the changes from the backend. The syncing is no problem, nor is the connection between the backend and the iPhone. But what should I use as data storage on my iPhone? What is the best way? My data is purely text and doesn't have to be secure, but the main feature should be updating certain parts of the data (adding and deleting are not so important). So what is the easiest (read: least time-consuming to develop) or the best way? SQLite? A plist? ...?

    Read the article

  • Best practice for making code portable for domains, subdomains or directores

    - by Duopixel
    I recently coded something where it wasn't known whether the end code would reside in a subdomain (http://user.domain.com/) or in a subdirectory (http://domain.com/user), and I was lost as to the best practice for these unknown scenarios. I could think of a couple:

      1. Use absolute paths (/css/styles.css) and mod_rewrite if it ends up being /user.
      2. Have a settings file and declare a variable with the path (<?php echo $domain . "/css/styles.css" ?>).
      3. Use relative paths (../css/styles.css).

    What is the best way to handle this?
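
    A minimal sketch of the second option (the constant and helper names are hypothetical): define the deployment root once per install and build every asset URL from it, so neither rewriting nor fragile relative paths are needed:

        <?php
        // config.php - changed once per deployment
        define('BASE_URL', 'http://user.domain.com');   // or 'http://domain.com/user'

        function asset_url($path) {
            return BASE_URL . '/' . ltrim($path, '/');
        }

        // In a template:
        // echo '<link rel="stylesheet" href="' . asset_url('css/styles.css') . '">';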

    Read the article

  • Compact Framework best practices: Building a GUI

    - by Ciaran
    I'm maintaining a Windows CE app built with the .NET Compact Framework that has about 45 forms. There are 5 'sections' which lead to the function you want. The application is 100% full screen, and it is important that it can't be minimized. Since there are so many forms, it's difficult to keep track of which form should be displayed after one is closed. For this, I'm setting the form's Owner property before showing it, and showing the owner when closing it. I've also been advised that it is best to instantiate all forms when the application loads, and not dispose of them, to save processing time. I'm not sure about this. My question is: what is the best way to go about showing and hiding forms when you want exactly one form to be in front, full screen, at all times?
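
    A hedged sketch of one alternative to the Owner-property bookkeeping (the helper class is hypothetical): keep an explicit stack of open forms, so "what to show when this closes" is always just the previous entry:

        using System.Collections.Generic;
        using System.Windows.Forms;

        public static class ScreenStack
        {
            private static readonly Stack<Form> forms = new Stack<Form>();

            public static void Push(Form next)
            {
                next.WindowState = FormWindowState.Maximized;  // stay full screen
                next.Show();
                if (forms.Count > 0)
                {
                    forms.Peek().Hide();
                }
                forms.Push(next);
            }

            public static void Pop()
            {
                forms.Pop().Hide();   // or Dispose(), if memory matters more than speed
                if (forms.Count > 0)
                {
                    forms.Peek().Show();
                }
            }
        }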

    Read the article

  • Stream (.NET) handling best-practices

    - by Jader Dias
    The question's title contains the word "Stream" because the question below is a concrete example of a more general question I have about streams. I have a problem that accepts two solutions, and I want to know which one is better:

      1. I download a file, save it to disk (2 min), then read it and write the contents to the DB (+2 min).
      2. I download a file and write the contents directly to the DB (3 min).

    If the write to the DB fails, I'll have to download the file again in the second case, but not in the first. Which is best? Which would you use?
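
    A hedged sketch of the first option (WriteToDatabase is a stand-in for whatever the import step really is): the extra write to disk buys a retry point, since a failed DB write can be replayed from the temp file without repeating the download:

        using System.IO;
        using System.Net;

        static void ImportWithLocalCopy(string url, string tempPath)
        {
            using (WebClient client = new WebClient())
            {
                client.DownloadFile(url, tempPath);    // the 2-minute download
            }

            using (FileStream contents = File.OpenRead(tempPath))
            {
                WriteToDatabase(contents);             // on failure, retry from tempPath
            }
        }

        static void WriteToDatabase(Stream contents)
        {
            // stand-in for the real import; deliberately left abstract here
        }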

    Read the article

  • Best Practise: DNS and VPN (with private network IPs)

    - by ribx
    I am trying to find the best solution for my DNS problem. We are running several services in our company that you can reach only over VPN. Other services, which are reachable through the internet, have the domain ... At the moment all services inside the VPN network go by .local... and have a VPN IP in the private network 192.168.252.0/24. Clients range from Linux through OS X to Windows. I can think of four possibilities for implementing a DNS infrastructure:

      1. The most common: an internal DNS server that is pushed by the VPN. But this has several drawbacks: your DNS responses are limited to the speed of the VPN connection and of your own DNS server. With very complex websites, this can increase the time a page takes to load quite a lot. Also, we have several VPNs that are not connected to each other, and all of them have their own DNS server.

      2. Several DNS servers locally. These have to be configured by hand, and you have to use some third-party tool like dnsmasq. When you start a DNS request, you ask your locally running DNS server, which decides which server to ask for which domain name. A colleague of mine uses such a solution on OS X (I am sorry, I don't remember the name of the application).

      3. Use your domain hoster. Most of them have APIs available to manipulate your DNS entries, so you could push your private network information to your domain hoster. I am not sure whether they all accept private network IPs, but I guess there will be some problems of the same kind as in number 4.

      4. The one we currently use, because for us it's the most logical choice: we forward the subdomain *.local.. to our own public DNS server. This works quite well with some public DNS servers like Google's, but most ISPs do not forward the answers, or don't do so consistently. My ISP, for example, returns a positive result for a DNS request of a *.local.. domain only every 10th time I run nslookup. (Can someone explain this?)

    Here the real question: is there another solution we haven't thought about? Or: which of these methods do you use?
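
    For the record, the split-horizon part of option 2 is a two-line dnsmasq configuration (the domain and server IP below are placeholders for the real VPN zone and the VPN's DNS server):

        # /etc/dnsmasq.conf - forward only the VPN zone to the VPN's DNS server
        server=/local.example.com/192.168.252.1
        # everything else falls through to the upstream resolvers in /etc/resolv.conf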

    Read the article

  • What are some Java memory management best practices?

    - by Ascalonian
    I am taking over some applications from a previous developer. When I run the applications through Eclipse, I see the memory usage and the heap size increase a lot. Upon further investigation, I see that they were creating an object over and over in a loop, as well as other things. I started to go through and do some cleanup, but the more I went through, the more questions I had, like "will this actually do anything?" For example, instead of declaring a variable outside the loop mentioned above and just setting its value in the loop, they created the object in the loop. What I mean is:

        for (int i = 0; i < arrayOfStuff.size(); i++) {
            String something = (String) arrayOfStuff.get(i);
            ...
        }

    versus:

        String something = null;
        for (int i = 0; i < arrayOfStuff.size(); i++) {
            something = (String) arrayOfStuff.get(i);
        }

    Am I incorrect to say that the bottom loop is better? Perhaps I am wrong. Also, what if, after the second loop above, I set "something" back to null? Would that clear out some memory? In either case, what are some good memory management best practices I could follow that would help keep memory usage low in my applications?

    Update: I appreciate everyone's feedback so far. However, I was not really asking about the above loops (although on your advice I did go back to the first loop). I am trying to get some best practices that I can keep an eye out for, something along the lines of "when you are done using a Collection, clear it out". I just really need to make sure these applications do not take up so much memory.

    Read the article

  • Best Practice for Uploading Many (2000+) Images to A Server

    - by bob
    Hello, I have a general question about this. When you have a gallery, sometimes people need to upload thousands of images at once. Most likely it would be done through a .zip file. What is the best way to go about uploading this sort of thing to a server? Many times, servers have timeouts etc. that need to be accounted for. I am wondering what kinds of things I should be looking out for, and what the best way is to handle a large number of images being uploaded. I'm guessing that you would allow a user to upload a zip file (assuming the timeout does not affect you), and this zip file is uploaded to a specific directory; let's assume in this case that a directory is created for each user in the system. You would then unzip the archive on the server and scan the user's folder for any directories containing .jpg, .png or .gif files (etc.) and then import them into a table accordingly, I'm guessing labeled by folder name. What kind of server-side troubles could I run into? I'm aware that there may be many issues. Even general ideas would be good, so I can then research further. Thanks! Also, I would be programming in Ruby on Rails, but I think this question applies across any language.
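
    A hedged Rails-flavoured sketch of the unzip-and-import step, using the rubyzip gem (the Photo model and the paths are hypothetical); running it in a background job rather than in the request keeps server timeouts out of the picture:

        require 'zip/zip'   # rubyzip gem

        def import_zip(zip_path, user_dir)
          Zip::ZipFile.open(zip_path) do |zip|
            zip.each do |entry|
              next unless entry.name =~ /\.(jpe?g|png|gif)\z/i
              dest = File.join(user_dir, File.basename(entry.name))
              zip.extract(entry, dest) { true }    # overwrite any existing file
              Photo.create!(:path => dest)         # hypothetical ActiveRecord model
            end
          end
        end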

    Read the article

  • best practice for boolean REST results

    - by Andrew Patterson
    I have a resource /system/resource, and I wish to ask the system a boolean question about the resource that can't be answered by processing on the client (i.e., I can't just GET the resource and look through the actual resource data; it requires some processing on the backend using data not available to the client), e.g.:

        /system/resource/related/otherresourcename

    I want this to return either true or false. Does anyone have any best-practice examples for this type of interaction? Possibilities that come to my mind:

      1. Use an HTTP status code with no returned body (smells wrong).
      2. Return a plain text string (True, False, 1, 0). I'm not sure what string values are appropriate to use, and furthermore this seems to ignore the Accept media type and always return plain text.
      3. Come up with a boolean object for each of my supported media types and return the appropriate type (a JSON document with a single boolean result, an XML document with a single boolean field). However, this seems unwieldy.

    I don't particularly want to get into a long discussion about the true meaning of a RESTful system etc. I have used the word REST in the title because it best expresses the general flavour of the system I am designing (even if perhaps I am tending more towards RPC over the web than true REST). However, if someone has thoughts on how a true RESTful system avoids this problem entirely, I would be happy to hear them.
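
    For concreteness, a sketch of what the third option could look like on the wire (the "related" field name is an assumption, not from the question):

        GET /system/resource/related/otherresourcename HTTP/1.1
        Accept: application/json

        HTTP/1.1 200 OK
        Content-Type: application/json

        { "related": true }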

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same – too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don’t hear about so many working on their log files.

    How can a log file get fragmented? I’m glad you asked. When you create a database there are at least two files created on the disk storage: an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage, but that’s off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one, as the log file is written to in a ‘wrap-around’ method (more on that later). When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file.

    Let’s see how many VLFs we have in a brand new database:

        USE master
        GO
        CREATE DATABASE VLF_Test ON
        ( NAME = VLF_Test,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf',
          SIZE = 100, MAXSIZE = 500, FILEGROWTH = 50 )
        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5MB, MAXSIZE = 250MB, FILEGROWTH = 5MB );
        GO
        USE VLF_Test
        GO
        DBCC LOGINFO;

    The result of this is that, firstly, a new database is created with the specified file sizes, and then the DBCC LOGINFO results are returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let’s first note there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25MB. Let’s alter the CREATE DATABASE script to create a log file that’s a bit bigger and see what happens. Alter the code above so that the log file details are replaced by:

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 1GB, MAXSIZE = 25GB, FILEGROWTH = 1GB );

    With a bigger log file specified we get more VLFs. What if we make it bigger again?

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5GB, MAXSIZE = 250GB, FILEGROWTH = 5GB );

    This time we see more VLFs are created within our log file. We now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic.
      - If the file growth is lower than 64MB then 4 VLFs are created.
      - If the growth is between 64MB and 1GB then 8 VLFs are created.
      - If the growth is greater than 1GB then 16 VLFs are created.

    Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6MB; let’s add some data and see what happens.

        USE VLF_Test
        GO
        -- we need somewhere to put the data so, a table is in order
        IF OBJECT_ID('A_Table') IS NOT NULL
            DROP TABLE A_Table
        GO
        CREATE TABLE A_Table
        (
            Col_A int IDENTITY,
            Col_B CHAR(8000)
        )
        GO
        -- Let's check the state of the log file
        -- 4 VLFs found
        EXECUTE ('DBCC LOGINFO');
        GO
        -- We can go ahead and insert some data and then check the state of the log file again
        INSERT A_Table (col_b)
        SELECT TOP 500 REPLICATE('a',2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO
        -- insert 500 rows and we get 22 VLFs
        EXECUTE ('DBCC LOGINFO');
        GO
        -- Let's insert more rows
        INSERT A_Table (col_b)
        SELECT TOP 2000 REPLICATE('a',2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO 10
        -- insert 2000 rows, in 10 batches, and we suddenly have 107 VLFs
        EXECUTE ('DBCC LOGINFO');

    Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations, so it’s well worth keeping a check on this property.

    How do we prevent excessive VLF creation? Creating the database with larger files and larger growth steps, and actively choosing to grow your databases rather than leaving it to the Auto Grow event, can make sure that the growths are made with a size that is optimal.

    How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don’t affect system users. The steps are:

      1. Back up the log: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice
      2. Shrink the log file to its smallest possible size: DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) *
      3. Re-size the log file to the size you want it to be, taking into account your expected needs for the coming months or year: ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) *

    * If you don’t know the file name of your log file then run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while. This is already detailed far better than I can explain it by Kimberly Tripp in her blog post 8-Steps-to-better-Transaction-Log-throughput.aspx, and I strongly recommend reading it. The result of this will be a log file with a VLF count according to the bullet list above.

    Knowing when VLFs are being created: by complete coincidence, while I have been writing this blog (it’s been quite some time from its inception to going live), Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it, as it is going to catch any sneaky auto grows that take place and let you know about them right away.
    Hassle-free monitoring of VLFs: if you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are others there that are very useful, so take a moment or two to look around while you are there.

    Resources:
      - MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx
      - Kimberly Tripp from SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx
      - Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/

    Disclosure: I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.

    Read the article

  • For a Javascript library, what is the best or standard way to support extensibility

    - by Michael Best
    Specifically, I want to support "plugins" that modify the behavior of parts of the library. I couldn't find much information on the web about this subject, but here are my ideas for how a library could be extensible:

      1. The library exports an object with both public and "protected" functions. A plugin can replace any of those functions, thus modifying the library's behavior. Advantages of this method are that it's simple and that the plugin's functions have full access to the library's "protected" functions. Disadvantages are that the library may be harder to maintain with a larger set of exposed functions, and that it could be hard to debug if multiple plugins are involved (how do you know which plugin modified which function?).

      2. The library provides an "add plugin" function that accepts an object with a specific interface. Internally, the library will use the plugin instead of its own code where appropriate. With this method, the internals of the library can be rearranged more freely as long as it still supports the same plugin interface. This could also support having different plugin interfaces to modify different parts of the library. A disadvantage of this method is that plugins may have to re-implement code that is already part of the library, since the library's internal functions are not exported.

      3. The library provides a "set implementation" function that accepts an object inherited from a specific base object. The library's public API calls functions in the implementation object for any functionality that can be modified, and the base implementation object includes the core functionality, with both external (to the API) and internal functions. A plugin creates a new implementation object, which inherits from the base object and replaces any functions it wants to modify. This combines advantages and disadvantages of both the other methods.
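
    A hedged sketch of what the second idea could look like (all of the names here are hypothetical): plugins are registered through one function and consulted before the built-in code runs:

        // The library accepts plugin objects of a known shape,
        // expected here to be { canHandle(item), render(item) }.
        var library = (function () {
            var plugins = [];

            function defaultRender(item) {
                return String(item);   // the built-in fallback behaviour
            }

            return {
                addPlugin: function (plugin) {
                    plugins.push(plugin);
                },
                render: function (item) {
                    for (var i = 0; i < plugins.length; i++) {
                        if (plugins[i].canHandle(item)) {
                            return plugins[i].render(item);
                        }
                    }
                    return defaultRender(item);
                }
            };
        }());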

    Read the article

  • Constructor overloading in Java - best practice

    - by errr
    There are a few topics similar to this, but I couldn't find one with a sufficient answer. I would like to know what the best practice is for constructor overloading in Java. I already have my own thoughts on the subject, but I'd like to hear more advice. I'm referring both to constructor overloading in a simple class and to constructor overloading while inheriting from an already overloaded class (meaning the base class has overloaded constructors). Thanks :)
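
    A hedged illustration of one widely used convention (the class and its defaults are invented for the example): let every smaller constructor chain to the most complete one with this(...), so defaults and validation live in exactly one place:

        public class Connection {
            private final String host;
            private final int port;
            private final int timeoutMillis;

            public Connection(String host) {
                this(host, 80);                  // hypothetical default port
            }

            public Connection(String host, int port) {
                this(host, port, 30000);         // hypothetical default timeout
            }

            // All construction funnels through here.
            public Connection(String host, int port, int timeoutMillis) {
                if (host == null) throw new IllegalArgumentException("host");
                this.host = host;
                this.port = port;
                this.timeoutMillis = timeoutMillis;
            }
        }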

    Read the article

  • Best practices concerning view model and model updates with a subset of the fields

    - by Martin
    By picking MVC for developing our new site, I find myself in the midst of "best practices" being developed around me in apparent real time. Two weeks ago, NerdDinner was my guide, but with the development of MVC 2 even it seems outdated. It's a thrilling experience and I feel privileged to be in close contact with intelligent programmers daily. Right now I've stumbled upon an issue I can't seem to get a straight answer on, from all the blogs anyway, and I'd like to get some insight from the community. It's about editing (read: the Edit action). The bulk of material out there, tutorials and blogs, deals with creating and viewing the model. So while this question may not spell out a question, I hope to get some discussion going, contributing to my decision about the path of development I'm to take.

    My model represents a user with several fields like name, address and email. The names, in fact, are one field each for first name, last name and middle name. The Details view displays all these fields, but you can change only one set of fields at a time, for instance your names. The user expands a form while the other fields are still visible above and below. So the form that is posted back contains a subset of the fields representing the model. While this is appealing to us and our layout concerns, for various reasons, it is to be shunned by serious MVC developers. I've been reading about some patterns and best practices, and it seems that this is not in keeping with the paradigm of view model == view. Or have I got it wrong?

    Anyway, NerdDinner dictates using FormCollection and UpdateModel. All the null fields are happily ignored. Since then, the MVC community has abandoned this approach to such a degree that a bug in MVC 2 was not discovered: UpdateModel does not work without a complete model in your form collection. The view model pattern receiving most praise seems to be the dedicated view model that contains a custom view model entity, and it is the only one that my design issue could be made compatible with. It entails a tedious amount of mapping, albeit lightened by the use of AutoMapper and the ideas of Jimmy Bogard, that may or may not be worthwhile. He also proposes a 1:1 relationship between view and view model.

    In keeping with these design paradigms, I am to create a view and an associated view model for each of my expanding sets of fields. The view models would each be nearly identical, differing only in the fields which are read-only, and the views would contain much repeated markup. This seems absurd to me. In future I may want to be able to display two, more or all sets of fields open simultaneously. I will most attentively read the discussion I hope to spark. Many thanks in advance.
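
    For reference, a hedged sketch of the dedicated view model pattern applied to one field set (all names are invented, and the repository is assumed): the POST receives a small, complete model instead of a sparse FormCollection:

        public class NameEditViewModel
        {
            public string FirstName { get; set; }
            public string MiddleName { get; set; }
            public string LastName { get; set; }
        }

        [HttpPost]
        public ActionResult EditName(NameEditViewModel model)
        {
            User user = repository.GetCurrentUser();   // hypothetical repository
            user.FirstName = model.FirstName;
            user.MiddleName = model.MiddleName;
            user.LastName = model.LastName;
            repository.Save(user);
            return RedirectToAction("Details");
        }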

    Read the article

  • LDAP Best Practices

    - by Vik Gamov
    Hi there. I'm interested in best practices for using LDAP authentication in a Java-based web application. In my app I don't want to store the username\password, only some id. But I want to retrieve additional information (first name, last name) if any exists in the LDAP catalog.
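
    A hedged sketch using plain JNDI (the URL, DN layout and attribute names are placeholders): bind with the user's own credentials to authenticate, then read the display attributes, so the password is never stored:

        import java.util.Hashtable;
        import javax.naming.Context;
        import javax.naming.NamingException;
        import javax.naming.directory.Attributes;
        import javax.naming.directory.DirContext;
        import javax.naming.directory.InitialDirContext;

        public class LdapAuthenticator {
            public Attributes authenticate(String userDn, String password) throws NamingException {
                Hashtable<String, String> env = new Hashtable<String, String>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
                env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");
                env.put(Context.SECURITY_AUTHENTICATION, "simple");
                env.put(Context.SECURITY_PRINCIPAL, userDn);   // e.g. "uid=jdoe,ou=people,dc=example,dc=com"
                env.put(Context.SECURITY_CREDENTIALS, password);

                DirContext ctx = new InitialDirContext(env);   // throws on bad credentials
                try {
                    return ctx.getAttributes(userDn, new String[] { "givenName", "sn" });
                } finally {
                    ctx.close();
                }
            }
        }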

    Read the article

  • Best way to define an immutable class in Objective C

    - by Patrick Marty
    Hi, I am a newbie in Objective-C and I was wondering what the best way is to define an immutable class in Objective-C (like NSString, for example). I want to know the basic rules one has to follow to make a class immutable. I think that:

      - setters shouldn't be provided
      - if properties are used, they should be readonly
      - accessInstanceVariablesDirectly must be overridden to return NO

    Did I forget something? Thanks
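
    A hedged sketch of those rules applied to a tiny class (the class itself is invented for the example): readonly properties, state fixed in the initializer, and KVC blocked from reaching the ivars:

        @interface Point2D : NSObject {
            double x;
            double y;
        }
        @property (readonly) double x;
        @property (readonly) double y;
        - (id)initWithX:(double)anX y:(double)aY;
        @end

        @implementation Point2D
        @synthesize x, y;

        - (id)initWithX:(double)anX y:(double)aY {
            if ((self = [super init])) {
                x = anX;
                y = aY;
            }
            return self;
        }

        + (BOOL)accessInstanceVariablesDirectly {
            return NO; // keeps KVC from bypassing the readonly properties
        }
        @end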

    Read the article
