Search Results

Search found 20275 results on 811 pages for 'general performance'.

  • Any Recommendations for a Web Based Large File Transfer System?

    - by Glen Richards
    I'm looking for a server software product that:

    - Allows my users to share large files with the general public, securely, with one or more people (notification via email, optionally with a token that gives them x period of time to download)
    - Allows anyone in the general public to share files with my users, perhaps by invitation
    - Is user-friendly enough that my users can work with it without having to bug me as the admin
    - Can be installed on our own server (we don't want shared data sitting on anyone else's server)
    - Is a web based solution; using some kind of secure comms channel would be good too, e.g. SSH
    - Handles files over 1 GB

    I found the question below, but WebDAV does not sound user-friendly enough: http://serverfault.com/questions/86878/recommendations-for-a-secure-and-simple-dropbox-system

    I've done a lot of searching, but I can't get the search terms right. There are too many services that provide this, but I want something we can install on our own server. A last resort would be to roll my own. Any ideas appreciated. Glen

    EDIT: Sorry Tom and Jeff, but Glen specifically says that he's looking for a 'product', so given that I specialise in this field I thought that my expertise in this area may have been of use to him. I don't see how him writing his own services is going to be easy for him to maintain going forward (large IT admin overhead), or simple for his users and the general public to work with.

  • BIOS setting: AHCI or RAID (when using SSD + 2x HDD in RAID-0)

    - by nixdagibts
    Hello there, I want to add a new SSD and use it as the system drive with Win7 x64 installed. As the driver I chose the newest Intel Rapid Storage driver (not MSAHCI). I know that I have to use AHCI as the BIOS setting for optimal SSD read/write performance. But I'm also using two normal HDDs as a separate RAID-0 array:

    - SSD: Win7
    - HDD: RAID-0
    - HDD: RAID-0

    If I set the BIOS on my ASUS P5W DH Deluxe to AHCI, my RAID-0 array can't be recognized. And if I use RAID as the setting, maybe my SSD doesn't reach its top speed - but I'm not sure about that. In short:

    - AHCI: no RAID-0
    - RAID: no optimal SSD performance (?)

    Now my question: can I use RAID as the BIOS setting and be sure that there's no decrease in SSD performance? Google finds so many articles on similar topics that my head is just exploding. Two examples: set AHCI, then after installing the OS switch the BIOS setting to RAID... what? Or: use a diskette and F6 while installing Win7... really? I thought those times were gone.

  • Optimizing Disk I/O & RAID on Windows SQL Server 2005

    - by David
    I've been monitoring our SQL Server for a while, and have noticed that I/O hits 100% every so often in Task Manager and Perfmon. I have normally been able to correlate this spike with SUSPENDED processes in SQL Server Management Studio when I execute "exec sp_who2". The RAID controller is managed by LSI MegaRAID Storage Manager. We have the following setup:

    - System drive (Windows) on RAID 1 with two 280GB drives
    - SQL on RAID 10 (2 mirrored drives of 280GB in two different spans)

    This is a database that is hammered during the day but is pretty inactive at night. The DB size is currently about 13GB, and it is used by approximately 200 (and growing) users a day. I have a couple of ideas I'm toying around with:

    1. Checking the indexes and reindexing some tables
    2. Adding an additional RAID 1 (with two new, smaller HDs) and moving SQL's log data file (LDF) onto the new RAID

    For #2, my question is this: would we really be increasing disk performance (I/O) by moving data off of the RAID 10 onto a RAID 1? RAID 10 obviously has better performance than RAID 1. Furthermore, SQL must write to the transaction logs before writing to the database. But on the flip side, we'd be reducing the amount of data written to the RAID 10, which is where all of the "meat" is, thereby increasing that RAID's performance for read requests. Is there any way to find out what our current limiting factor is (the drives vs. the RAID controller)? If the limiting factor is the drives, then maybe adding the additional RAID 1 makes sense. But if the limiting factor is the controller itself, then I think we're approaching this thing wrong. Finally, are we just wasting our time? Should we instead be focusing our efforts on #1 (reindexing tables, reducing network latency where possible, etc.)?

  • Implications of disabling the AMD Phenom's TLB patch?

    - by DMA57361
    I'm currently running an AMD Phenom X4 9600 processor (yeah, it's aging a bit, but other recent problems mean it's not getting upgraded in the immediate future), which happens to be one of the chips that suffer from the TLB errata. I recall that the first time I played with disabling the TLB patch (probably over a year ago, while playing a game that had a severe performance problem such that it was almost unplayable unless the patch was disabled) I had at least one BSOD, but I can't remember them being particularly frequent. However, because disabling it decreased stability, I stopped disabling the patch once I was done with the game. Now, after some recent hardware changes, I was experiencing much worse performance than expected from the new hardware under some circumstances, and the TLB jumped to mind - after testing I found that disabling the patch would improve the performance to expected levels. I'm now wondering if it's worthwhile always having the patch disabled to avoid any potential slowdowns cropping up in the future, or if it is too dangerous. Everything I read states that the bug, when not patched, can cause a system lock-up in "rare circumstances". So, with the TLB patch disabled:

    - How frequently should system lock-ups be expected?
    - Do we know what the circumstances that trigger the lock-ups are? (Don't worry too much about being highly technical, but essentially I wonder if the chip is more vulnerable under heavy load, or heavy memory usage, etc.)
    - Are there any secondary problems I should be aware of? (Don't include things that are characteristic of all lock-ups, please.)

  • Are there any disadvantages of having a "free fall sensor" on a hard disk drive?

    - by therobyouknow
    This is a general question that came out of a specific comparison between the Western Digital Scorpio WD3200BEKT and Western Digital Scorpio WD3200BJKT (which is the same as the former but with a free fall sensor). Note: I'm not asking for a review or appraisal of these specific drives, as the general question does apply to other brands as well, though your input would help my decision. To break down the general question in order to answer it, I would be looking for comments on things like:

    - Is it necessary to have differing physical dimensions between free fall sensor drives and those without? E.g. does it make the drive any thicker, and therefore reduce the systems where it can be installed, particularly smaller laptops?
    - Does it actually make the system less reliable, because of false alarms whereby the drive thought the laptop was falling but it wasn't?

    I suppose that the fact that a manufacturer produces both drives with and without free fall sensors says something about possible disadvantages. Or it could be a standard marketing technique, whereby making drives both with and without the sensor results in larger sales volume than just those with the feature alone.

  • How can I format an SD card with a more robust Linux-usable filesystem with a specific cluster size for better write performance?

    - by Harvey
    Goal: microSD card formatted...

    - for best write performance
    - for use only with embedded Linux
    - for better reliability (random power failures may occur)
    - using a 64kB cluster size

    I'm using an 8GB microSD card for data storage inside an embedded Linux/ARM device. The SD card is not removable. I've been using ext3 instead of the pre-installed FAT32 because it seems to better handle random power failures during writes. However, I kept noticing that my write performance is always best with the pre-installed FAT32 from Kingston. If I reformat the card, even with FAT32, the performance suffers. After browsing Wikipedia, I stumbled upon the following comment saying that some cards are optimized for specific cluster sizes. In my case, the Kingston comes pre-formatted with a 64kB cluster size.

    Risks of reformatting: "Reformatting an SD card with a different file system, or even with the same one, may make the card slower, or shorten its lifespan. Some cards use wear leveling, in which frequently modified blocks are mapped to different portions of memory at different times, and some wear-leveling algorithms are designed for the access patterns typical of the file allocation table on a FAT16 or FAT32 device.[60] In addition, the preformatted file system may use a cluster size that matches the erase region of the physical memory on the card; reformatting may change the cluster size and make writes less efficient."

  • Painless deployment of a Django app (port from Drupal). Do I have to switch to a VPS?

    - by Monden
    I'm about to complete porting my Drupal-based community site to Django. My Drupal site has been hosted on shared hosting (Dreamhost) for the last 4 years, and stability and performance have been satisfactory. The site gets around 5k unique visitors with 70-80k page views a day. This will be my first deployment of a Django application, and I'm not comfortable with managing my own VPS. I use Ubuntu as a development server, but I don't have experience with it in a production environment. I have an unrelated internal CRM app (Django) that I host with Webfaction; however, security and performance aren't an issue there as it's only accessed by 5 people. Unfortunately, I don't have much time to learn and maintain a VPS at this moment. I would like to know if I can host a site with this much traffic in Webfaction's shared environment. How would performance differ in comparison to Linode or Slicehost? Google App Engine isn't an option at the moment as I'll be using my current PostgreSQL database.

  • virtual machines, dual booting and data disks on SSD

    - by stevemarvell
    This is in planning, so if I've got the strategy wrong, please let me know. There are multiple questions here, but I think they all degenerate to the same answers. The hardware is a laptop with a single SSD. I'm trying not to lose the performance of the SSD. I plan a natively dual-booting Windows (plus Cygwin) and Linux machine, which is my BYOD and represents the development environment. I keep the codebase on a shared partition (though sometimes this is an external Thunderbolt SSD) which can be natively "mounted" by whichever OS is in operation. I boot into one or the other environment depending on the task in hand. Sometimes I have to develop with Windows tools, but generally Linux is my preferred development environment. It would be ideal if I could VM the other OS and run either in either. I'm going to assume, because I've not found a sensible VM-based solution, that I have to get Samba involved to share the code partition between VMs. Is this going to blow my SSD performance in the VM? The client also supplies me with a VM for the target environment, usually Linux. This is not often suited to development and is used for testing only. I normally keep two copies of this: one as a sandbox and one which I deploy to using the client's preferred method. I keep these VM snapshots on the shared partition. The latter is interacted with over the network and so has no disk sharing requirements. However, it would be useful for the sandbox to be able to "mount" the codebase from the natively running OS. Is this Samba or NFS again, depending on the native OS? Am I missing a trick which allows this to all work smoothly with all four environments running at once, without losing the SSD performance?

  • Curious enigma of a network cable / connection / quality

    - by Foo Bar
    So, the situation is like this: I'm renting an apartment in a large house and I'm sharing internet with the landlord, who lives downstairs. The internet is (at my best guess) optical 20/20Mbit. I don't know how it's all wired in his flat (haven't been there / seen it). Anyway, into my flat comes a cable which seems to be connected directly to the optic-to-ethernet router (and the password is the default one, so I have access, he he). There was a switch connected to that and to wires that go around the flat, and the wiring is terrible. It's even mixing phone and ethernet, and from what I see some cables are even interconnected!? Anyway, this cable that comes into my flat is very short. I can barely connect my computer to it, but if I do, I seem to get decent speed/performance. Not great, but decent. If, however, I connect a switch to it (I tried 2 different switches and a wifi switch), it's all blinking but I can't even connect to 192.168.1.1 (the router): DHCP fails, and ping loses 80-100% of replies. So I connected this cable directly to the other cable which goes to my work room, with a connector that has two female jacks and no electronics. Now when I connect my computer in my room, again, the performance is decent. But when I connect a WRT54GL (with Tomato, DHCP disabled) to it and plug a cable into this WRT and into my computer... the performance is gone. Download seems okay on Speedtest, but upload is 0.2Mbps and connecting takes forever. So what kind of cable troll am I dealing with here? Any ideas?

  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using SQL Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type "BACKUPIO". Of course it seems like this is an I/O subsystem limitation, but I'm skeptical. Perfmon stats for I/O on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7MB/s write rate, which seems incredibly low, even for a slow disk. The network link is gigabit ethernet and nowhere near saturated. The few docs I've turned up about BACKUPIO indicate that it's not specifically a wait on I/O, surprisingly enough. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not, but it doesn't say (or I don't understand) exactly what resource is missing: http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005 And this piece says it's not related to I/O performance at all: http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 ("Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance.") Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?

  • Why does my FTP(E)S server fail about half of the time?

    - by user1092608
    I'm having a discussion at work regarding our FTP server running via vsftpd. Initially, we opted to serve FTPES instead of SFTP because this seemed the most flexible and straightforward solution for our server to have secure file transmission. Since then, our FTP server seems to be a source of issues for our end users. Half of the time, users complain about FTP connections not working. I must say, I tested our FTP through different infrastructures (= in the field, at random times, at random places) and indeed, sometimes behind some configurations (= no idea how they are configured, because of the 'field' testing), I receive errors. One of them is:

    Error: Failed to retrieve directory listing (FileZilla)

    Furthermore, behind my basic home configuration, everything seems to be running fine. I (think I) did all the basic configuration checks (passive mode? firewall for all ports? ...) and can't seem to find the source. Being a bunch of techies at our small office, yet knowing nothing about infrastructure, some started suggesting that the FTPS protocol could be the source of the issues ("No, I only knew SFTP so far", "FTPS is not widespread"). I, however, strongly doubt this hypothesis, since reading around on the www and asking questions on serverfault, everyone seems to deny this. So, as I would like to avoid reconfiguring (this involves messing around in our SSH service, our virtual user setup and our FTP service), I would need some advice on:

    1) What could potentially be the general cause?
    2) Do you have some general tips?
    3) Would you mind having a look at my configuration file?

    ----- General Settings -----
    write_enable=YES
    dirmessage_enable=YES
    nopriv_user=ftpsecure
    ftpd_banner="Welcome to XXXX FTP!"
    hide_ids=YES
    hide_file=.*
    max_per_ip=10
    max_clients=10
    local_enable=YES
    local_umask=022
    chroot_local_user=YES
    secure_chroot_dir=/usr/share/empty
    userlist_enable=NO
    userlist_deny=YES
    userlist_file=/etc/vsftp_deny_users
    guest_enable=YES
    guest_username=ftpvirtual
    virtual_use_local_privs=YES
    user_sub_token=$USER
    local_root=/srv/ftp/ftpvirtual/$USER
    anonymous_enable=NO
    syslog_enable=NO
    xferlog_enable=YES
    xferlog_file=/var/log/vsftpd_xfer.log
    connect_from_port_20=YES
    pam_service_name=vsftpd
    listen=YES
    listen_port=21
    pasv_enable=YES
    pasv_min_port=30000
    pasv_max_port=30030
    pasv_address=foo
    ssl_enable=YES
    rsa_cert_file=/etc/vsftpd.pem
    rsa_private_key_file=/etc/vsftpd.pem
    force_local_data_ssl=YES
    force_local_logins_ssl=YES
    ssl_tlsv1=YES
    ssl_sslv2=YES
    ssl_sslv3=YES
    ssl_ciphers=HIGH
    anon_mkdir_write_enable=NO
    anon_root=/srv/ftp
    anon_upload_enable=NO
    idle_session_timeout=900
    log_ftp_protocol=NO
    dsa_cert_file=/etc/vsftpd.pem

    Thanks

  • How to Unit Test HtmlHelper with Moq?

    - by DaveDev
    Could somebody show me how you would go about creating a mock HtmlHelper with Moq? This article has a link to an article claiming to describe this, but following the link only returns an ASP.NET runtime error. [edit] I asked a more specific question related to the same subject here, but it hasn't gotten any responses. I figured it was too specific, so I thought I could get a more general answer to a more general question and modify it to meet my requirements. Thanks
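
    For reference, a minimal sketch of one common way to build an HtmlHelper around mocked MVC plumbing (this assumes ASP.NET MVC 2-style virtual members and the parameterless ViewContext constructor intended for testing; it is not taken from the broken article above):

      using System.Web.Mvc;
      using Moq;

      public static class HtmlHelperFactory
      {
          // Builds an HtmlHelper whose ViewData comes from the supplied dictionary.
          public static HtmlHelper Create(ViewDataDictionary viewData)
          {
              var viewContext = new Mock<ViewContext>();
              viewContext.Setup(vc => vc.ViewData).Returns(viewData);

              // HtmlHelper reads its ViewData through the IViewDataContainer.
              var container = new Mock<IViewDataContainer>();
              container.Setup(c => c.ViewData).Returns(viewData);

              return new HtmlHelper(viewContext.Object, container.Object);
          }
      }

    In a test you would then pass something like Create(new ViewDataDictionary()) to the helper extension under test.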

  • Should I use android AccountManager?

    - by Phil
    I've seen AccountManager in the Android SDK, and can see it is used for storing account information, but I can't find any general discussion of what it is intended for. Does anyone know of any helpful discussions of what the intention behind AccountManager is and what it buys you? Any opinions on what type of accounts this is suitable for? Would this be where you'd put your users' account information for a general web service? Regards, Phil

  • WPF Animation FPS vs. CPU usage - Am I expecting too much?

    - by Cory Charlton
    Working on a screen saver for my wife, http://cchearts.codeplex.com/, and while I've been able to improve FPS on lower-end machines (switched from Path to StreamGeometry, use DrawingVisual instead of UserControl, etc.), the CPU usage still seems very high. Here are some numbers from a few 5-minute sampling periods:

    - ~60FPS, 35% average CPU on Core 2 Duo T7500 @ 2.2GHz, 3GB RAM, NVIDIA Quadro NVS 140M (128MB), Vista [my dev laptop]
    - ~40FPS, 50% average CPU on Pentium D @ 3.4GHz, 1.5GB RAM, standard VGA graphics adapter (unknown), 2003 Server [a crappy desktop]

    I can understand the lower frame rate and higher CPU usage on the crappy desktop, but it still seems pretty high, and 35% on my dev laptop seems high as well. I'd really like to profile the application to get more details, but I'm having issues there too, so I'm wondering if I'm doing something wrong (never profiled WPF before).

    WPF Performance Suite:

    Process Launch Error
    Unable to attach to process: CCHearts.exe
    Do you want to kill it?

    This error message occurs when I click cancel after attempting launch. If I don't click cancel, it sits there idle, I guess waiting to attach.

    Performance Explorer:

    Could not launch C:\Projects2\CC.Hearts\CC.Hearts\bin\Debug (USEVISUAL)\CCHearts.exe. Previous attempt to profile the application finished unsuccessfully. Please restart the application.

    Output window from Performance:

    Profiling started.
    Profiling process ID 5360 (CCHearts).
    Process ID 5360 has exited.
    Data written to C:\Projects2\CC.Hearts\CCHearts100608.vsp.
    Profiling finished.
    PRF0025: No data was collected.
    Profiling complete.

    So I'm stuck wanting to improve performance but having no concrete way to determine where the bottleneck is. I have been relatively successful throwing darts so far, but I'm beyond that now :)

    PS: The screensaver is hosted at CodePlex if you want to look at the source and missed the link above.

    Edit: My RenderOptions darts...

    // NOTE: Grasping at straws here ;-)
    RenderOptions.SetBitmapScalingMode(newHeart, BitmapScalingMode.LowQuality);
    RenderOptions.SetCachingHint(newHeart, CachingHint.Cache);
    RenderOptions.SetEdgeMode(newHeart, EdgeMode.Aliased);

    I threw those in a little while back and didn't see much difference (not sure if the bitmap scaling even comes into play). I really wish I could get profiling working to know where I should try to optimize. For now I assume there is some overhead in creating a new HeartVisual and the DrawingVisual contained inside. Maybe if I reset and reused the hearts (tossed them in a queue once they completed or something) I'd see an improvement. Shrug. Throwing darts while blindfolded is always fun.
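
    For what it's worth, the reuse idea from the last paragraph would look roughly like this (a minimal sketch; the Reset method on HeartVisual is hypothetical and would need to clear transforms, opacity and animation state):

      using System.Collections.Generic;

      // Hypothetical pool: completed hearts go back into the queue instead of
      // being garbage-collected, so each animation cycle avoids re-allocating
      // a HeartVisual and the DrawingVisual inside it.
      public class HeartPool
      {
          private readonly Queue<HeartVisual> idle = new Queue<HeartVisual>();

          public HeartVisual Acquire()
          {
              // Reuse a finished heart when one is available; allocate otherwise.
              return idle.Count > 0 ? idle.Dequeue() : new HeartVisual();
          }

          public void Release(HeartVisual heart)
          {
              heart.Reset(); // assumed method: restore the visual to its initial state
              idle.Enqueue(heart);
          }
      }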

  • Simple Remote Shared Object with Red5 Flash Server

    - by John Russell
    Hello, I am trying to create a simple chat client using the red5 media server, but I seem to be having a slight hiccup. I am creating a shared object on the server side, and it seems to be creating it successfully. However, when I make changes to the object via the client (type a message), the SYNC event fires, but the content within the shared object remains empty. I suspect I am doing something wrong on the java end, any advice?

    Console Results:

    Success!
    Server Message: clear
    Server Message: [object Object]
    Local message: asdf
    Server Message: change
    Server Message: [object Object]
    Local message: fdsa
    Server Message: change
    Server Message: [object Object]
    Local message: fewa
    Server Message: change
    Server Message: [object Object]

    Server Side:

    package org.red5.core;

    import java.util.List;

    import org.red5.server.adapter.ApplicationAdapter;
    import org.red5.server.api.IConnection;
    import org.red5.server.api.IScope;
    import org.red5.server.api.service.ServiceUtils;
    import org.red5.server.api.so.ISharedObject;
    // import org.apache.commons.logging.Log;
    // import org.apache.commons.logging.LogFactory;

    public class Application extends ApplicationAdapter {

        private IScope appScope;
        // private static final Log log = LogFactory.getLog(Application.class);

        /** {@inheritDoc} */
        @Override
        public boolean connect(IConnection conn, IScope scope, Object[] params) {
            appScope = scope;
            // Creates general chat shared object
            createSharedObject(appScope, "generalChat", false);
            return true;
        }

        /** {@inheritDoc} */
        @Override
        public void disconnect(IConnection conn, IScope scope) {
            super.disconnect(conn, scope);
        }

        public void updateChat(Object[] params) {
            // Declares and stores general chat data in general chat shared object
            ISharedObject so = getSharedObject(appScope, "generalChat");
            so.setAttribute("point", params[0].toString());
        }
    }

    Client Side:

    package {
        import flash.display.MovieClip;
        import flash.events.*;
        import flash.net.*;

        // This class is going to handle all data to and from the media server
        public class SOConnect extends MovieClip {

            // Variables
            var nc:NetConnection = null;
            var so:SharedObject;

            public function SOConnect():void {
            }

            public function connect():void {
                // Create a NetConnection and connect to red5
                nc = new NetConnection();
                nc.addEventListener(NetStatusEvent.NET_STATUS, netStatusHandler);
                nc.connect("rtmp://localhost/testChat");

                // Create a StoredObject for general chat
                so = SharedObject.getRemote("generalChat", nc.uri, false);
                so.connect(nc);
                so.addEventListener(SyncEvent.SYNC, receiveChat);
            }

            public function sendChat(msg:String) {
                trace("Local message: " + msg);
                nc.call("updateChat", null, msg);
            }

            public function receiveChat(e:SyncEvent):void {
                for (var i in e.changeList) {
                    trace("Server Message: " + e.changeList[i].code);
                    trace("Server Message: " + e.changeList[i]);
                }
            }

            // Given result, determine successful connection
            private function netStatusHandler(e:NetStatusEvent):void {
                if (e.info.code == "NetConnection.Connect.Success") {
                    trace("Success!");
                } else {
                    trace("Failure!\n");
                    trace(e.info.code);
                }
            }
        }
    }

  • Which MacBook(Pro) for running Visual Studio 2010 on VMWare Fusion on a Mac?

    - by Greg
    Hi, does anyone have experience running Visual Studio 2010 on a MacBook or MacBook Pro (via VMware Fusion)? Any feedback or advice, based on your experience, on what level of MacBook Pro (i.e. CPU type, CPU speed) you would target to get reasonable/good performance from VS2010? (I'm just concerned that if I get a base-level MacBook Pro 13" 2.4GHz Core2Duo, I would be frustrated with the performance.)

  • C# memory management: unsafe keyword and pointers

    - by Alerty
    What are the consequences (positive/negative) of using the unsafe keyword in C# to use pointers? For example: what becomes of garbage collection, what are the performance gains/losses, how do the performance gains/losses compare to other languages' manual memory management, what are the dangers, and in which situations is it really justifiable to make use of this language feature?
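
    For reference, a minimal sketch of what the keyword involves (requires compiling with /unsafe). The fixed statement is the main garbage-collector interaction: it pins the array so the collector cannot relocate it while a raw pointer into it is live.

      public static class UnsafeSum
      {
          public static unsafe int Sum(int[] values)
          {
              int total = 0;
              fixed (int* p = values) // pin the array so the GC can't move it
              {
                  for (int i = 0; i < values.Length; i++)
                  {
                      total += *(p + i); // raw pointer arithmetic, no bounds checks
                  }
              }
              return total;
          }
      }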

  • Action Delegate C#

    - by user275561
    So I read MSDN and Stack Overflow. I understand what the Action delegate does in general, but it is not clicking no matter how many examples I do. In general, the same goes for the idea of delegates. So here is my question: when you have a function like this

      public void GetCustomers(Action<IEnumerable<Customer>, Exception> callBack)
      {
      }

    I just don't have a clue what that is or what I should pass to it.
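
    A hedged sketch of how such a method is usually implemented and called (the Customer type and the in-memory data are placeholders standing in for whatever the real service fetches):

      using System;
      using System.Collections.Generic;

      public class Customer { public string Name { get; set; } }

      public class CustomerService
      {
          // The callback receives either the customers or the exception that
          // occurred while fetching them; exactly one argument is non-null.
          public void GetCustomers(Action<IEnumerable<Customer>, Exception> callBack)
          {
              try
              {
                  var customers = new List<Customer> { new Customer { Name = "Ada" } };
                  callBack(customers, null); // success: hand over the data
              }
              catch (Exception ex)
              {
                  callBack(null, ex); // failure: hand over the error
              }
          }
      }

    The caller passes a lambda whose two parameters match the delegate's type arguments:

      new CustomerService().GetCustomers((customers, error) =>
      {
          if (error != null) { Console.WriteLine(error.Message); return; }
          foreach (var c in customers) Console.WriteLine(c.Name);
      });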

  • Mongoid or MongoMapper?

    - by PanosJee
    I have tried MongoMapper and it is feature-complete (offering almost all AR functionality), but I was not very happy with the performance when using large datasets. Has anyone compared it with Mongoid? Any performance gains?

  • How to modify code so that it adheres to the Law of Demeter

    - by guazz
    public class BigPerformance
    {
        public decimal Value { get; set; }
    }

    public class Performance
    {
        public BigPerformance BigPerf { get; set; }
    }

    public class Category
    {
        public Performance Perf { get; set; }
    }

    If I call:

    Category cat = new Category();
    cat.Perf.BigPerf.Value = 1.0m;

    I assume this breaks the LoD? If so, how do I remedy it if I have a large number of inner class properties?
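
    One standard remedy, sketched under the assumption that Category is the type callers talk to: expose a delegating property so the chain stays an implementation detail (the initializers are added here just to make the sketch self-contained):

      public class BigPerformance
      {
          public decimal Value { get; set; }
      }

      public class Performance
      {
          public BigPerformance BigPerf { get; set; } = new BigPerformance();
      }

      public class Category
      {
          private readonly Performance perf = new Performance();

          // Callers write cat.BigPerfValue = 1.0m; the Perf.BigPerf chain
          // is no longer part of Category's public surface.
          public decimal BigPerfValue
          {
              get { return perf.BigPerf.Value; }
              set { perf.BigPerf.Value = value; }
          }
      }

    For a large number of inner properties this does mean one delegating member each, which is exactly the trade-off the question is about.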

  • Caching web API proxy?

    - by Jeremy Dunck
    I was wondering if anyone knows of a caching proxy specifically for dealing with API responses? Ideally, I'd be able to declare what caching policy to use for different API semantics, e.g. cache album art for 1 day, cache favorite tweets for 5 minutes, cache map tiles forever, except invalidate when this other API is called. I know about using Apache, Squid, etc. for caching in general -- I'm just hoping for something with nicer usage semantics, by restricting the design goal to dealing with APIs rather than the web in general.

  • Windows RPC vs XML-RPC

    - by Y.Z
    Is there any benchmark comparing the encoding/decoding of common typed data in the Microsoft RPC NDR engine (DCE 1.1) with that of XML-RPC-C/C++, the de-facto C/C++ implementation of XML-RPC? I have to choose between Windows RPC and XML-RPC-C/C++ to implement my own common object infrastructure for High Performance Computing on Windows. Any recommendation about which to pick with regard to their performance? Thank you. Best Regards, Yang

  • StringBuilder vs XmlTextWriter

    - by Wololo
    I am trying to squeeze as much performance as I can from a custom HttpHandler that serves XML content. I'm wondering which is better for performance: using the XmlTextWriter class, or ad-hoc StringBuilder operations like

      StringBuilder sb = new StringBuilder("<?xml version=\"1.0\" encoding=\"UTF-8\" ?>");
      sb.AppendFormat("<element>{0}</element>", SOMEVALUE);

    Does anyone have first-hand experience?
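
    For comparison, a minimal sketch of the writer-based route (SOMEVALUE is the question's placeholder for the real data). One difference worth noting besides raw speed: the writer escapes the value, which the AppendFormat version silently skips.

      using System.Text;
      using System.Xml;

      public static class XmlResponse
      {
          public static string Build(string someValue)
          {
              var sb = new StringBuilder();
              using (var writer = XmlWriter.Create(sb))
              {
                  writer.WriteStartDocument();
                  writer.WriteStartElement("element");
                  writer.WriteString(someValue); // escaped automatically
                  writer.WriteEndElement();
                  writer.WriteEndDocument();
              }
              return sb.ToString();
          }
      }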

  • Running virtual machines: Linux vs Windows 7

    - by vikp
    Hi, I have tried running a Windows XP development virtual machine under Windows 7, and the performance was dreadful. I'm considering installing Linux and running the virtual machine from Linux, but I'm not sure whether I can expect any performance gains. It's a 2.4GHz Core 2 Duo machine with 4GB RAM and a 5400RPM HDD. Can somebody please recommend a very cut-down version of Linux that can run VMware Player and isn't resource-hungry? Thank you

  • String literals vs constants for Session[...] dictionary keys

    - by FreshCode
    Session[Constant] vs Session["String Literal"] performance: I'm retrieving user-specific data like ViewData["CartItems"] = Session["CartItems"]; with a string literal for keys on every request. Should I be using constants for this? If yes, how should I go about implementing frequently used string literals, and will it significantly affect performance on a high-traffic site? A related question does not address ASP.NET MVC or Session.
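
    A minimal sketch of the usual constants approach (the class and key names are illustrative). Note that a const string is inlined by the compiler, so Session[SessionKeys.CartItems] compiles to the same thing as Session["CartItems"]; the win is catching key typos at compile time, not speed.

      public static class SessionKeys
      {
          // A mistyped key becomes a compile error instead of a silently
          // empty Session slot.
          public const string CartItems = "CartItems";
      }

      // Usage in a controller action:
      // ViewData[SessionKeys.CartItems] = Session[SessionKeys.CartItems];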
