Search Results

Search found 22641 results on 906 pages for 'use case'.


  • When does the "Do One Thing" paradigm become harmful?

    - by Petr
    For the sake of argument, here's a sample function that prints the contents of a given file line by line. Version 1:

        void printFile(const string & filePath) {
            fstream file(filePath, ios::in);
            string line;
            while (file.good()) {
                getline(file, line);
                cout << line << endl;
            }
        }

    I know it is recommended that functions do one thing at one level of abstraction. To me, though, the code above does pretty much one thing and is fairly atomic. Some books (such as Robert C. Martin's Clean Code) seem to suggest breaking the above code into separate functions. Version 2:

        void printLine(const string & line) {
            cout << line << endl;
        }

        void printLines(fstream & file) {
            string line;
            while (file.good()) {
                getline(file, line);
                printLine(line);
            }
        }

        void printFile(const string & filePath) {
            fstream file(filePath, ios::in);
            printLines(file);
        }

    I understand what they want to achieve (open file / read lines / print line), but isn't it a bit of overkill? The original version is simple and in some sense already does one thing - prints a file. The second version will lead to a large number of really small functions which may be far less legible than the first. Wouldn't it be better, in this case, to have the code in one place? At which point does the "Do One Thing" paradigm become harmful?

    Read the article

  • Entity Component System for HUD and GUI

    - by Jason L.
    This is a very rough sketch of how I currently have things designed. It should, at least, give an idea of how my ECS is currently designed. If you notice in that diagram, I have basically split the HUD out of the ECS. They have their own set of things (HudLayer, HudComponent, etc.) and are handled differently. This is where I'm struggling, though. There are many different instances in which the HUD will need to know about entities. Not just data changing (I have an event dispatcher for that), but the actual entity and all it encompasses. There are also situations where entities will need to be able to query the HUD for data. Let's take a couple of examples: First, my equipment screen. On here I can change the equipment on a character (Entity). In order for this to happen, I need to know about the entity. At least I think I do? How can I handle this? The second scenario involves my Systems needing to query a HudComponent for data. A specific example would be my battle system. Each "team" is given a 3x3 grid they can move around in. Skills target these cells, and not the player, so I would need a way for my systems to determine which cells are occupied and which are not. Basically I need a way for two-way communication between Systems and my HUD. I know it's recommended (by some people, anyway) to take your HUD out of the ECS. Is that appropriate in my case?
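
    One way to support the two-way communication described above - sketched here in C# with invented type names, as an illustration rather than a prescription - is to have systems talk to the HUD through a narrow query interface instead of concrete HUD classes:

        // Hypothetical sketch: the battle system queries grid occupancy
        // through an interface, so it never references HUD classes directly.
        public interface IGridOccupancy
        {
            bool IsCellOccupied(int row, int column);
        }

        // The HUD-side grid implements the interface...
        public class BattleGridHud : IGridOccupancy
        {
            private readonly bool[,] occupied = new bool[3, 3];

            public void SetOccupied(int row, int column, bool value)
            {
                occupied[row, column] = value;
            }

            public bool IsCellOccupied(int row, int column)
            {
                return occupied[row, column];
            }
        }

        // ...and the battle system depends only on the interface.
        public class BattleSystem
        {
            private readonly IGridOccupancy grid;

            public BattleSystem(IGridOccupancy grid)
            {
                this.grid = grid;
            }

            public bool CanTargetCell(int row, int column)
            {
                return !grid.IsCellOccupied(row, column);
            }
        }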

    Read the article

  • "Programming error" exceptions - Is my approach sound?

    - by Medo42
    I am currently trying to improve my use of exceptions, and found the important distinction between exceptions that signify programming errors (e.g. someone passed null as an argument, or called a method on an object after it was disposed) and those that signify a failure in the operation that is not the caller's fault (e.g. an I/O exception). As far as I understand, it makes little sense for an immediate caller to actually handle programming error exceptions; he should instead ensure that the preconditions are met. Only "outer" exception handlers at task boundaries should catch them, so they can keep the system running if a task fails. In order to ensure that client code can cleanly catch "failure" exceptions without catching error exceptions by mistake, I create my own exception classes for all failure exceptions now, and document them in the methods that throw them. I would make them checked exceptions in Java. Now I have a few questions: Before, I tried to document all exceptions that a method could throw, but that sometimes creates an unwieldy list that needs to be documented in every method up the call chain until you can show that the error won't happen. Instead, I document the preconditions in the summary / parameter descriptions and don't even mention what happens if they are not met. The idea is that people should not try to catch these exceptions explicitly anyway, so there is no need to document their types. Would you agree that this is enough? Going further, do you think all preconditions even need to be documented for every method? For example, calling methods on IDisposable objects after calling Dispose is an error, but since IDisposable is such a widely used interface, can I just assume a programmer will know this? A similar case is with reference type parameters where passing null makes no conceivable sense: should I document "non-null" anyway? IMO, documentation should only cover things that are not obvious, but I am not sure where "obvious" ends.
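
    A minimal C# sketch of the split described above (all names are invented for illustration):

        using System;
        using System.IO;

        // A "failure" exception: documented on the method, meant to be caught.
        public class ReportStoreException : Exception
        {
            public ReportStoreException(string message, Exception inner)
                : base(message, inner) { }
        }

        public class ReportStore
        {
            public string LoadReport(string name)
            {
                // Programming error: precondition violated. Callers should not
                // catch this; they should ensure the argument is valid.
                if (name == null)
                    throw new ArgumentNullException("name");

                try
                {
                    return File.ReadAllText(Path.Combine("reports", name));
                }
                catch (IOException e)
                {
                    // Failure that is not the caller's fault, wrapped in a
                    // documented type that client code can cleanly catch.
                    throw new ReportStoreException(
                        "Could not load report '" + name + "'.", e);
                }
            }
        }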

    Read the article

  • Unable to mount external hard drive - Damaged file system and MFT

    - by Khalifa Abbas Lame
    I get the following error when I try to mount my external hard drive:

        UNABLE TO MOUNT
        Error mounting /dev/sdc1 at /media/khalibloo/Khalibloo2:
        Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sdc1" "/media/khalibloo/Khalibloo2"' exited with non-zero exit status 13:
        ntfs_attr_pread_i: ntfs_pread failed: Input/output error
        Failed to read of MFT, mft=6 count=1 br=-1: Input/output error
        Failed to open inode FILE_Bitmap: Input/output error
        Failed to mount '/dev/sdc1': Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a
        SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows
        then reboot into Windows twice. The usage of the /f parameter is very
        important! If the device is a SoftRAID/FakeRAID then first activate
        it and mount a different device under the /dev/mapper/ directory,
        (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid'
        documentation for more details.

    It doesn't mount on Windows either: "I/O Device error". It's an NTFS hard drive with a single partition. Of course, I tried chkdsk /f. It reported several file segments as unreadable, but didn't say whether it fixed them or not (apparently not). I also tried it with the /b flag. ntfsfix reported the volume as corrupt. TestDisk was able to fix a small error with the partition table by adding the "80" flag for the active (only) partition. TestDisk also confirmed that the boot sector was fine and that it matched the backup. However, when attempting to repair the MFT, it couldn't read the MFT. It also couldn't list the files on the hard drive. It says the file system may be damaged. Active@ also shows that the MFT is missing or corrupt. So how do I fix the file system? Or the MFT?

    Read the article

  • MEF, IServiceProvider and Testing Visual Studio Extensions

    - by Daniel Cazzulino
    In the latest and greatest version of Visual Studio, MEF plays a critical role, one that makes extending VS much more fun than it ever was. So typically, you just [Export] something, and then someone [Import]s it and that's it. MEF in all its glory kicks in and gets all your dependencies satisfied. Cool, you say, so let's now import ITextTemplating and have some T4-based codegen going! Ah, if only it were that easy. Turns out that by default, none of the VS built-in services are exposed to MEF, apparently because there wasn't enough time to analyze the lifetime, initialization, dependencies, etc. for each one before launch, which makes perfect sense. You don't want to blindly export everything now just in case. There's also the whole VS package initialization thing, which in this version of VS is not so transparently integrated with the MEF publishing side (i.e. a MEF export from a package can get instantiated before its owning package, and in fact, the package can remain unloaded forever and the export will continue to be visible to anyone). …
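
    One workaround would be a small bridge class that acquires the service through IServiceProvider and re-exports it to MEF. A sketch, assuming the standard VS SDK types (STextTemplating/ITextTemplating) - treat the exact wiring as illustrative, not canonical:

        using System.ComponentModel.Composition;
        using Microsoft.VisualStudio.Shell;
        using Microsoft.VisualStudio.TextTemplating.VSHost;

        public class TextTemplatingExporter
        {
            // Re-expose the built-in T4 service so other parts can [Import] it.
            [Export(typeof(ITextTemplating))]
            public ITextTemplating TextTemplating
            {
                get
                {
                    // Inside a Package you could call GetService directly;
                    // GlobalProvider is used here to keep the sketch self-contained.
                    return (ITextTemplating)ServiceProvider.GlobalProvider
                        .GetService(typeof(STextTemplating));
                }
            }
        }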

    Read the article

  • Get a culture specific list of month names

    - by erwin21
    A while ago I found a clever way to retrieve a dynamic, culture-specific list of month names in C# with LINQ:

        var months = Enumerable.Range(1, 12)
            .Select(i => new
            {
                Month = i.ToString(),
                MonthName = new DateTime(1, i, 1).ToString("MMMM")
            })
            .ToList();

    It's fairly simple: for each number in the range 1 to 12, a DateTime object is created (the year and day don't matter in this case), and the DateTime is then formatted to a full month name with ToString("MMMM"). In this example an anonymous object is created with a Month and a MonthName property. You can use this solution to populate a dropdown list with months or to display a user-friendly month name.
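
    If the list is needed for a culture other than the current one, the same idea works through DateTimeFormatInfo directly - a small sketch, where "fr-FR" is only an example culture:

        using System;
        using System.Globalization;
        using System.Linq;

        class MonthNames
        {
            static void Main()
            {
                // GetMonthName comes from DateTimeFormatInfo, so no DateTime
                // instances are needed here.
                var format = CultureInfo.GetCultureInfo("fr-FR").DateTimeFormat;
                var months = Enumerable.Range(1, 12)
                    .Select(i => new { Month = i, MonthName = format.GetMonthName(i) })
                    .ToList();

                foreach (var m in months)
                    Console.WriteLine("{0}: {1}", m.Month, m.MonthName);
            }
        }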

    Read the article

  • Suggested HTTP REST status code for 'request limit reached'

    - by Andras Zoltan
    I'm putting together a spec for a REST service, part of which will incorporate the ability to throttle users service-wide and on groups of, or on individual, resources. Equally, time-outs for these would be configurable per resource/group/service. I'm just looking through the HTTP 1.1 spec and trying to decide how I will communicate to a client that a request will not be fulfilled because they've reached their limit. Initially I figured that client code 403 - Forbidden was the one, but this, from the spec, bothered me:

        Authorization will not help and the request SHOULD NOT be repeated

    It actually appears that 503 - Service Unavailable is a better one to use, since it allows for the communication of a retry time through the use of the Retry-After header. It's possible that in the future I might look to support 'purchasing' more requests via eCommerce (in which case it would be nice if client code 402 - Payment Required had been finalized!) - but I figure that this could equally be squeezed into a 503 response too. Which do you think I should use? Or is there another I've not considered?
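
    For illustration, a 503 with a Retry-After header as described might be produced like this in ASP.NET MVC - a sketch only, where the limit-checking helper is an invented placeholder:

        using System.Web;
        using System.Web.Mvc;

        public static class RequestLimiter
        {
            // Invented placeholder: a real implementation would track request
            // counts per user and per resource/group/service.
            public static bool LimitReached(HttpRequestBase request)
            {
                return false;
            }
        }

        // Hypothetical throttling filter: rejects over-limit callers with
        // 503 Service Unavailable plus a Retry-After header.
        public class ThrottleAttribute : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                if (RequestLimiter.LimitReached(filterContext.HttpContext.Request))
                {
                    var response = filterContext.HttpContext.Response;
                    response.StatusCode = 503;
                    response.AddHeader("Retry-After", "120"); // seconds until retry is allowed
                    filterContext.Result = new ContentResult { Content = "Request limit reached." };
                }
            }
        }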

    Read the article

  • Adding complexity by generalising: how far should you go?

    - by marcog
    Reference question: http://stackoverflow.com/questions/4303813/help-with-interview-question The above question asked to solve a problem for an NxN matrix. While there was an easy solution, I gave a more general solution to solve the more general problem for an NxM matrix. A handful of people commented that this generalisation was bad because it made the solution more complex. One such comment is voted +8. Putting aside the hard-to-explain voting effects on SO, there are two types of complexity to be considered here:

        Runtime complexity, i.e. how fast does the code run
        Code complexity, i.e. how difficult is the code to read and understand

    The question of runtime complexity is something that requires a better understanding of the input data today and what it might look like in the future, taking the various growth factors into account where necessary. The question of code complexity is the one I'm interested in here. By generalising the solution, we avoid having to rewrite it in the event that the constraints change. However, at the same time it can often result in complicating the code. In the reference question, the code for NxN is easy to understand for any competent programmer, but the NxM case (unless documented well) could easily confuse someone coming across the code for the first time. So, my question is this: Where should you draw the line between generalising and keeping the code easy to understand?

    Read the article

  • Cron: job starts but doesn't complete

    - by Guandalino
    I have a problem with a cron job which starts but doesn't complete. Running the command manually works fine. I already read the page about cron issues and solutions here on AskUbuntu and tried the proposed solutions, but didn't find an answer that works in my case. I'm using Ubuntu 12.04.

        $ crontab -e
        SHELL=/bin/bash  # otherwise it would be /bin/sh
        59 16 * * * /bin/duply calendar backup > /tmp/duply.log

    Btw, the cron file ends with an empty line, as someone pointed out. Once the job has "finished":

        $ cat /tmp/duply.log
        Start duply v1.5.7, time is 2012-06-22 16:59:01.

    Instead, running the script manually works correctly and gives this output:

        Start duply v1.5.7, time is 2012-06-22 17:06:39.
        [cut]
        ... here is a long output generated by duply.
        ... and yes, files have been backed up.
        [cut]
        --- Finished state OK at 17:06:42.581 - Runtime 00:00:03.170 ---

    I also tried restarting the cron daemon (sudo service cron restart) but nothing changed. Do you have any suggestion to fix the issue?

    Read the article

  • how to word wrap, align text like the output of man?

    - by cody
    What is the command that word-wraps and justifies a text file so that the output looks like that of a man page?

        All of these system calls are used to wait for state changes in a
        child of the calling process, and obtain information about the child
        whose state has changed. A state change is considered to be: the child
        terminated; the child was stopped by a signal; or the child was
        resumed by a signal. In the case of a terminated child, performing a
        wait allows the system to release the resources associated with the
        child; if a wait is not performed, then the termi-
        nated child remains in a "zombie" state (see NOTES below).

    Thanks.

    Read the article

  • Applying Interactive Sorting to Multiple Columns in Reporting Services

    - by smisner
    A nice feature that appeared first in SQL Server 2008 is the ability to allow the user to click a column header to sort that column. It defaults to an ascending sort first, but you can click the column again to switch to a descending sort. You can learn more about interactive sorts in general in the Adding Interactive Sort to a Data Region topic in Books Online. Not mentioned in the article is how to apply interactive sorting to multiple columns, hence the reason for this post! Let's say that I have a simple table like this: To enable interactive sorting, I open the Text Box properties for each of the column headers – the ones in the top row. Here's an example of how I set up basic interactive sorting: Now when I preview the report, I see icons appear in each text box on the header row to indicate that interactive sorting is enabled. The initial sort order that displays when you preview the report depends on how you design the report. In this case, the report sorts by Sales Territory Group first, and then by Calendar Year. Interactive sorting overrides the report design. So let's say that I want to sort first by Calendar Year, and then by Sales Territory Group. To do this, I click the arrow to the right of Calendar Year, and then, while pressing the Shift key, I click the arrow to the right of Sales Territory Group twice (once for ascending order and then a second time for descending order). Now my report looks like this: This technique only seems to work when you have a minimum of three columns configured with interactive sorting. If I remove the property from one of the columns in the above example, and try to use the interactive sorting on the remaining two columns, I can sort only the first column. The sort on the second column gets ignored. I don't know if that's by design or a bug, but I do know that's what I'm experiencing when I try it out!

    Read the article

  • MVC Validation with ModelState.isValid through a wizard

    - by Emmanuel TOPE
    I'm working on a small educational project in MVC 3, and I'm facing a small problem when attempting to handle validation in my application through a wizard. I tried to benefit from the ability of MVC 3 to deliver the content of a different view using the same URL when handling an [HttpPost] method on a page. In my case, my main model's class contains about ten [Required] properties that I would like to expose through a small wizard in 3 steps. I want the user to be able to enter his personal information in the first step, then respond to some questions in the second step, and finally receive a confirmation mail from the web application with his credentials in the last step. I can't reach the last step because of the ModelState.IsValid check that I use to handle validation, which can't pass if I define some properties as [Required] but don't put them on the first view. As the replies to those questions come down to a couple of choices, I've thought that I might use some nullable bool? properties in order to avoid validation issues, but I know that's not the proper way. Is there someone who would like to help me find a way to extend my validation to these three steps? Thanks in advance, and sorry for my English; I'm not a native speaker.
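
    To make the failure mode concrete, here is a minimal sketch of the situation described (all names invented): every property is [Required], but the first step's form only posts some of them, so validating the full model can never pass:

        using System.ComponentModel.DataAnnotations;
        using System.Web.Mvc;

        public class RegistrationModel
        {
            [Required] public string FirstName { get; set; }       // posted in step 1
            [Required] public string LastName { get; set; }        // posted in step 1
            [Required] public string Question1Answer { get; set; } // only posted in step 2
            // ... more [Required] properties ...
        }

        public class WizardController : Controller
        {
            [HttpPost]
            public ActionResult Step1(RegistrationModel model)
            {
                // Question1Answer was not in the step-1 form, so this check
                // fails even when the user filled in step 1 correctly.
                if (!ModelState.IsValid)
                    return View(model);

                return View("Step2", model);
            }
        }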

    Read the article

  • Backing Up Transaction Logs to Tape?

    - by David Stein
    I'm about to put my database in Full Recovery Model and start taking transaction log backups. I am taking a full nightly backup to another server, and later in the evening this file and many others are backed up to tape. My question is this: I will take hourly (or more frequent, if necessary) t-log backups and store them on the other server as well. However, if my full backups are passing DBCC and integrity checks, do I need to put my t-logs on tape? If someone wants point-in-time recovery to yesterday at 2pm, I would need the previous full backup and the transaction logs. However, other than that case, if I know my full backups are good, is there value in keeping the previous day's transaction log backups?

    Read the article

  • Export-Mailbox - "an unknown error has occurred"

    - by grojo
    I am trying to move messages from a rather large mailbox to an archive mailbox, but I run into errors all the time. The command I am executing is:

        Export-Mailbox -Identity MAILBOX_FROM -TargetMailbox ARCHIVE -TargetFolder ARCHIVE_FOLDER -StartDate 2009-02-01 -EndDate 2009-02-28 -DeleteContent -Confirm:$false

    I can copy/move some messages, but run into frequent "an unknown error has occurred" failures (status code -1056749164). I run the console as an administrative user, and all permissions are set right, as far as I can tell. I've restricted the start and end dates in case the number of messages moved/deleted should create problems. Is there anything I am missing in my setup? Corrupted messages? Over-limit message sizes?

    Read the article

  • Easy Transfer from a dead computer

    - by Nathan DeWitt
    I had a computer that electrocuted me and the company sent me a new one. The hard drive from the old computer works fine and is in my new computer. I would like to transfer my files from the old drive to the new one, preferably using Easy Transfer (old & new computers were Win7). When I go through the Easy Transfer wizard, it assumes my old computer is running and that I can run a process to backup all my data to a single file. However, in my case I have the system drive in my new computer and want to pull the data off it. I would like to avoid rebooting the old computer, to avoid damage to myself or my data. I would like to avoid booting into the old system drive, as my new hardware is significantly different and I imagine I'll run into some missing hardware issues. What's the easiest way to get my data off this drive?

    Read the article

  • Monitoring on java daemon on centos

    - by user111196
    I have a Java application which I run as a daemon using the yajsw tool. I need to monitor it in case it goes down: I need some kind of alert, or even to restart it automatically. Is there any tool that can help me do this in a CentOS environment? The results of ps -ef | grep java:

        root  3109     1  0 Apr06 ?  00:04:35 /usr/java/jdk1.6.0_18/bin/java -Dwrapper.pidfile=/var/run/wrapper.commServer.pid -Dwrapper.service=true -Dwrapper.visible=false -jar /usr/local/yajsw-beta-10.2/wrapper.jar -c /usr/local/yajsw-beta-10.2/conf/wrapper.conf
        root  3132  3109  0 Apr06 ?  00:25:26 /usr/java/jdk1.6.0_18/bin/java -classpath /usr/local/yajsw-beta-10.2/./wrapperApp.jar:/usr/local -Xrs -Dwrapper.service=true -Dwrapper.console.visible=false -Dwrapper.visible=false -Dwrapper.pidfile=/var/run/wrapper.commServer.pid -Dwrapper.config=/usr/local/yajsw-beta-10.2/conf/wrapper.conf -Dwrapper.port=15003 -Dwrapper.key=4276015160565963367 -Dwrapper.teeName=4276015160565963367$1333699547154 -Dwrapper.tmpPath=/tmp org.rzo.yajsw.app.WrapperJVMMain
        root 23986 23945  0 16:53 pts/0  00:00:00 grep java

    And pidof java:

        3132 3109

    Read the article

  • Customizing the NUnit GUI for data-driven testing

    - by rwong
    My test project consists of a set of input data files which are fed into a piece of legacy third-party software. Since the input data files for this software are difficult to construct (not something that can be done intentionally), I am not going to add new input data files. Each input data file will be subject to a set of "test functions". Some of the test functions can be invoked independently. Other test functions represent the stages of a sequential operation - if an earlier stage fails, the subsequent stages do not need to be executed. I have experimented with the NUnit parametrized test cases (TestCaseAttribute and TestCaseSourceAttribute), passing in the list of data files as test cases. I am generally satisfied with the ability to select the input data for testing. However, I would like to see if it is possible to customize the GUI's tree structure, so that the "test functions" become the children of the "input data". For example:

        File #1
            CheckFileTypeTest
            GetFileTopLevelStructureTest
            CompleteProcessTest
                StageOneTest
                StageTwoTest
                StageThreeTest
        File #2
            CheckFileTypeTest
            GetFileTopLevelStructureTest
            CompleteProcessTest
                StageOneTest
                StageTwoTest
                StageThreeTest

    This would be useful for identifying the stage that failed during the processing of a particular input file. Are there any tips and tricks that will enable the new tree layout? Do I need to customize NUnit to get this layout?
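
    For reference, the kind of parametrized setup described above might look like this in NUnit - a sketch, with invented file locations and test bodies:

        using System.Collections.Generic;
        using System.IO;
        using NUnit.Framework;

        [TestFixture]
        public class InputFileTests
        {
            // Each discovered data file becomes one test case per test function.
            public static IEnumerable<string> InputFiles()
            {
                return Directory.GetFiles("TestData", "*.dat");
            }

            [TestCaseSource("InputFiles")]
            public void CheckFileTypeTest(string path)
            {
                Assert.That(File.Exists(path));
                // ... validate the file type ...
            }

            [TestCaseSource("InputFiles")]
            public void GetFileTopLevelStructureTest(string path)
            {
                // ... parse and check the top-level structure ...
            }
        }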

    Read the article

  • Share on: FB, Tweet, Digg, Linkedin, Delicious, My mother, ... it's just on fashion, or some real value?

    - by Marco Demaio
    Nowadays your site is not in fashion if you don't show at least a couple of share buttons like these: Is this just fashion, or do people actually get something good out of it? When I say "something good" I mostly mean something that you could measure, not just a good feeling. Maybe I can explain better with an example: did you notice (in some way) that many people clicked on those links to share your page/s on those web 2.0 social sites? And in such a case, on which social networks did you see them mostly share your pages? BTW, I'm not talking about Google PR; I know all web 2.0 social sites use nofollow everywhere, and even hidden links, so they are useless by themselves for PR. UPDATE: According to this video, Google's Alter Ego says that they now use data from social sites in ranking in some way. If this is true, it's obvious that the Share buttons for FB, Tweet, etc. are definitely of some value. But again, my question is more about what you noticed in your real experience to be a direct benefit of adding those types of "Share On" links on your website - i.e. did you see more traffic coming in from FB, or some users who bought your products because of FB or Twitter? Or any other benefits? Thanks

    Read the article

  • Proxmox - Uploading disk image

    - by davids
    I've got a KVM virtual machine on my local PC, and I'd like to copy it to a Proxmox server. According to the docs, I just have to create a new VM on Proxmox and add the existing disk image to it, but how do I upload the image to the server? In the admin panel, if I click on MyStorage - Content - Upload, it only gives me options to upload ISOs, VZDump backup files or OpenVZ templates. Would a copy using scp be enough? In that case, into which folder?

    Read the article

  • How do http proxies determine https traffic with a single port?

    - by badunk
    If a proxy receives the TCP packet, then the destination IP address and port are those of the proxy. In that case, I imagine the only way a proxy can still resolve the intended destination is either through routing the source IP address/port or through the Host field in the HTTP header. Is this correct? In both the Fiddler and Charles HTTP proxies, I noticed that the tool accepts both HTTP and HTTPS connections through a single port that you can specify. What do these tools do to tell the difference between the two types of connections?

    Read the article

  • How to copy a cell's formatting using a formula?

    - by Alvin Lim
    For example, cell A1 contains the text "Hello World" which is in bold. In cell A2, I use the formula =A1. Therefore cell A2 now also contains "Hello World", but it is not in bold. How can I modify the formula to also copy the formatting (in this case, bold) of A1? A more complex example is strikethrough properties, i.e. A1 contains "Orange/Red". How do I show the same content in cell A2 dynamically, so that any changes made in A1 will update A2 as well?

    Read the article

  • Unable to drag and drop / select multiple with mouse

    - by J. Scott Elblein
    I'm running into a perplexing issue with Windows 8 Pro x64 where, at random, I'm unable to drag to select multiple files (i.e. in Explorer or Directory Opus). I've also noticed that a similar issue happens when I'm running, for example, Photoshop or Illustrator and can't drag to select multiple layers, or drag to do some other things in them. It happens randomly, I've found no way to reliably reproduce it, and it happens VERY frequently. I have read some tips saying pressing the ESC button usually fixes the issue, but it doesn't in my case. From what I understand, it's probably due to some other process locking the drag feature somehow, but I've not found a way to tell which process is the perp; I've even tried using unlock software on files when I'm suddenly unable to drag, and it tells me that nothing is locking them. Does anyone have any ideas?

    Read the article

  • Should I be running my scheduled backups as SYSTEM or as the our domain admin?

    - by MetalSearGolid
    I have a daily backup which is scheduled through the Task Scheduler. It failed with a strange error code last night, but I was able to search and find a blog post with how to avoid the error in the future. However, one of his recommendations was to run the backups as the Administrator user of the domain. Since all of the files being backed up are local to this system, should I continue to have the backups run as SYSTEM? Or is it actually better to run it as a different user? I have been running these backups for well over a year now and have only had a handful of failures, but ironically when it does fail, the error code means it was a permissions issue (or so I read, this code seems to be undocumented by Microsoft). Thanks in advance for any insight into this. Might as well post the error code here too, in case anyone would like to share their insight on this as well, but I rarely ever get this error, so I don't care too much about it: 4294967294

    Read the article

  • Upgrade from Linux Mint 12 to Kubuntu 12.04?

    - by MountainX
    Is there an "easy" way to "upgrade" my existing Linux Mint 12 install to Kubuntu 12.04 beta 2? I know I could reinstall. Usually I would do a clean install to avoid unexpected issues. But in this case, I don't have time to reconfigure everything from my printers to my installed software, so I am looking for the quick/easy way, but I also want to avoid big risks of an upgrade gone wrong. I'm hoping to just change some repos and run a few commands from the terminal. I don't mind editing a few config files as long as I can find good HOWTOs. But I don't want to be the pioneer (arrows in back). I'm hoping someone has done this before and has a set of steps. For context, I recently installed KDE 4.8 SC onto Kubuntu 11.10 using PPAs. This was on another computer. That wasn't a problem. But I decided to do a fresh install of Kubuntu 12.04 later. I like it well enough that I want to change my other computer from Linux Mint 12 to Kubuntu. (I'm going all-in with KDE. It's now my desktop of choice.) This Linux Mint upgrade will be a move from Gnome and MGSE to KDE, so that will probably complicate things at bit compared to something like upgrading Kubuntu 11.10 to KDE 4.8. References: http://www.psychocats.net/ubuntu/kde Is it safe to install Kubuntu-desktop in 11.10?

    Read the article

  • Session serialization in JavaEE environment

    - by Ionut
    Please consider the following scenario: we are working on a Java EE project for which scalability is starting to become an issue. Up until now we were able to scale up, but this is no longer an option. Therefore we need to consider scaling out and preparing the app for a clustered environment. Our main concern right now is serializing the user sessions. Sadly, we did not consider the issue from the beginning, and we are encountering the following exception:

        java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException: org.apache.catalina.session.StandardSessionFacade

    I did some research, and this exception is thrown because there are objects stored on the session that do not implement the Serializable interface. Considering that all over the app there are quite a few custom objects which are stored on the session without implementing this interface, it would require a lot of tedious work and dedication to fix all these class declarations. We will fix all of these declarations, but the main concern is that, in the future, some developer may add a non-Serializable object to the session and break session serialization and replication over multiple nodes. As a quick overview of the project, we are developing using a home-grown framework based on Struts 1 with the Servlet 3.0 API. This means that at this point we are using the standard session.getAttribute() and session.setAttribute() to work with the session, and the session handling is scattered all over the code base. Besides updating the classes of the objects stored on the session and making sure that they implement the Serializable interface, what other precautions should we take in order to ensure reliable session replication at the application layer? I know it is a little bit late to consider this, but what would be the best practice in this case? Furthermore, are there any other issues we should consider regarding this transition? Thank you in advance!

    Read the article
