Search Results

Search found 8125 results on 325 pages for 'metadata cache'.

Page 80/325 | < Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >

  • How do I convert a video to GIF using ffmpeg, with reasonable quality?

    - by Kamil Hismatullin
    I'm converting a .flv movie to a .gif file with ffmpeg:

        ffmpeg -i input.flv -ss 00:00:00.000 -pix_fmt rgb24 -r 10 -s 320x240 -t 00:00:10.000 output.gif

    It works great, but the output GIF file has very low quality. Any ideas how I can improve the quality of the converted GIF? Output of the command:

        $ ffmpeg -i input.flv -ss 00:00:00.000 -pix_fmt rgb24 -r 10 -s 320x240 -t 00:00:10.000 output.gif
        ffmpeg version 0.8.5-6:0.8.5-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
          built on Jan 24 2013 14:52:53 with gcc 4.7.2
        *** THIS PROGRAM IS DEPRECATED ***
        This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.flv':
          Metadata:
            major_brand     : mp42
            minor_version   : 0
            compatible_brands: isommp42
            creation_time   : 2013-02-14 04:00:07
          Duration: 00:00:18.85, start: 0.000000, bitrate: 3098 kb/s
            Stream #0.0(und): Video: h264 (High), yuv420p, 1280x720, 2905 kb/s, 25 fps, 25 tbr, 50 tbn, 50 tbc
            Metadata:
              creation_time   : 1970-01-01 00:00:00
            Stream #0.1(und): Audio: aac, 44100 Hz, stereo, s16, 192 kb/s
            Metadata:
              creation_time   : 2013-02-14 04:00:07
        [buffer @ 0x92a8ea0] w:1280 h:720 pixfmt:yuv420p
        [scale @ 0x9215100] w:1280 h:720 fmt:yuv420p -> w:320 h:240 fmt:rgb24 flags:0x4
        Output #0, gif, to 'output.gif':
          Metadata:
            major_brand     : mp42
            minor_version   : 0
            compatible_brands: isommp42
            creation_time   : 2013-02-14 04:00:07
            encoder         : Lavf53.21.1
            Stream #0.0(und): Video: rawvideo, rgb24, 320x240, q=2-31, 200 kb/s, 90k tbn, 10 tbc
            Metadata:
              creation_time   : 1970-01-01 00:00:00
        Stream mapping:
          Stream #0.0 -> #0.0
        Press ctrl-c to stop encoding
        frame=  101 fps= 32 q=0.0 Lsize=    8686kB time=10.10 bitrate=7045.0kbits/s dup=0 drop=149
        video:22725kB audio:0kB global headers:0kB muxing overhead -61.778676%

    Thanks.
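
    A common route to better GIF quality is ffmpeg's two-pass palette approach - a sketch only, and it assumes a much newer ffmpeg (2.6 or later, where the palettegen/paletteuse filters appeared) than the deprecated Libav 0.8.5 build shown above:

        # pass 1: build an optimized 256-colour palette for this clip
        ffmpeg -ss 0 -t 10 -i input.flv -vf "fps=10,scale=320:-1:flags=lanczos,palettegen" palette.png
        # pass 2: encode the GIF using that palette instead of the generic default one
        ffmpeg -ss 0 -t 10 -i input.flv -i palette.png \
               -filter_complex "fps=10,scale=320:-1:flags=lanczos[x];[x][1:v]paletteuse" output.gif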

    Read the article

  • flashcache with mdadm and LVM

    - by Backtogeek
    I am having trouble setting up flashcache on a system with LVM and mdadm. I suspect I am either just missing an obvious step or getting some mapping wrong, and hoped someone could point me in the right direction. System info: CentOS 6.4, 64 bit.

    mdadm config:

        md0 : active raid1 sdd3[2] sde3[3] sdf3[4] sdg3[5] sdh3[1] sda3[0]
              204736 blocks super 1.0 [6/6] [UUUUUU]
        md2 : active raid6 sdd5[2] sde5[3] sdf5[4] sdg5[5] sdh5[1] sda5[0]
              3794905088 blocks super 1.1 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
        md3 : active raid0 sdc1[1] sdb1[0]
              250065920 blocks super 1.1 512k chunks
        md1 : active raid10 sdh1[1] sda1[0] sdd1[2] sdf1[4] sdg1[5] sde1[3]
              76749312 blocks super 1.1 512K chunks 2 near-copies [6/6] [UUUUUU]

    pvscan:

        PV /dev/mapper/ssdcache   VG Xenvol   lvm2 [3.53 TiB / 3.53 TiB free]
        Total: 1 [3.53 TiB] / in use: 1 [3.53 TiB] / in no VG: 0 [0 ]

    flashcache create command used:

        flashcache_create -p back ssdcache /dev/md3 /dev/md2

    pvdisplay:

        --- Physical volume ---
        PV Name               /dev/mapper/ssdcache
        VG Name               Xenvol
        PV Size               3.53 TiB / not usable 106.00 MiB
        Allocatable           yes
        PE Size               128.00 MiB
        Total PE              28952
        Free PE               28912
        Allocated PE          40
        PV UUID               w0ENVR-EjvO-gAZ8-TQA1-5wYu-ISOk-pJv7LV

    vgdisplay:

        --- Volume group ---
        VG Name               Xenvol
        System ID
        Format                lvm2
        Metadata Areas        1
        Metadata Sequence No  2
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                1
        Open LV               1
        Max PV                0
        Cur PV                1
        Act PV                1
        VG Size               3.53 TiB
        PE Size               128.00 MiB
        Total PE              28952
        Alloc PE / Size       40 / 5.00 GiB
        Free  PE / Size       28912 / 3.53 TiB
        VG UUID               7vfKWh-ENPb-P8dV-jVlb-kP0o-1dDd-N8zzYj

    So that is where I am at. I thought that was the job done, however when creating a logical volume called test and mounting it as /mnt/test, the sequential write is pathetic, 60-ish MB/s. /dev/md3 has 2 x SSDs in RAID 0, which alone performs at around 800 MB/s sequential write, and I am trying to cache /dev/md2, which is 6 x 1TB drives in RAID 6. I have read a number of pages through the day, some of them here; it is obvious from the results that the cache is not functioning, but I am unsure why. I have added the filter line in lvm.conf:

        filter = [ "r|/dev/sdb|", "r|/dev/sdc|", "r|/dev/md3|" ]

    It is probably something silly, but the cache is clearly performing no writes, so I suspect I am not mapping it or have not mounted the cache correctly. dmsetup status:

        ssdcache: 0 7589810176 flashcache stats:
            reads(142), writes(0)
            read hits(133), read hit percent(93)
            write hits(0) write hit percent(0)
            dirty write hits(0) dirty write hit percent(0)
            replacement(0), write replacement(0)
            write invalidates(0), read invalidates(0)
            pending enqueues(0), pending inval(0)
            metadata dirties(0), metadata cleans(0)
            metadata batch(0) metadata ssd writes(0)
            cleanings(0) fallow cleanings(0)
            no room(0) front merge(0) back merge(0)
            force_clean_block(0)
            disk reads(9), disk writes(0) ssd reads(133) ssd writes(9)
            uncached reads(0), uncached writes(0), uncached IO requeue(0)
            disk read errors(0), disk write errors(0) ssd read errors(0) ssd write errors(0)
            uncached sequential reads(0), uncached sequential writes(0)
            pid_adds(0), pid_dels(0), pid_drops(0) pid_expiry(0)
            lru hot blocks(31136000), lru warm blocks(31136000)
            lru promotions(0), lru demotions(0)
        Xenvol-test: 0 10485760 linear

    I have included as much info as I can think of; I look forward to any replies.
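
    Two quick sanity checks worth running here - a sketch, assuming the VG and cache names above (the commands are standard dmsetup/LVM tooling, not from the original post):

        dmsetup table ssdcache      # confirm the flashcache device really maps /dev/md3 over /dev/md2
        lvs -o +devices Xenvol      # confirm the test LV sits on /dev/mapper/ssdcache, not on md2 directly

    If the LV's devices column shows md2 rather than ssdcache, writes are bypassing the cache entirely, which would match the writes(0) counters in the dmsetup output above.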

    Read the article

  • LVM2 volume group lost

    - by MrG
    I updated one of my servers, and - although I took care not to modify them - the volume groups on /dev/sdb1 were lost, although the physical volume seems to be still there:

        [root@server ~]# pvscan
          PV /dev/sda2   VG VolGroup   lvm2 [465,16 GiB / 0    free]
          PV /dev/sdb1                 lvm2 [1,82 TiB]
          Total: 2 [2,27 TiB] / in use: 1 [465,16 GiB] / in no VG: 1 [1,82 TiB]

        [root@server ~]# pvs -v
            Scanning for physical volume names
          PV         VG       Fmt  Attr PSize   PFree DevSize PV UUID
          /dev/sda2  VolGroup lvm2 a--  465,16g    0  465,16g HftbaD-MBs0-3p7D-6O13-CrzU-T9Gb-6W0ofB
          /dev/sdb1           lvm2 a--    1,82t 1,82t   1,82t dD4XZP-WStA-61xV-5Sff-ifmW-R4rR-JenHoU

        [root@server ~]# pvck -d -v /dev/sdb1
          Scanning /dev/sdb1
          Found label on /dev/sdb1, sector 1, type=LVM2 001
          Found text metadata area: offset=4096, size=1044480
          Found LVM2 metadata record at offset=10752, size=1037824, offset2=0 size2=0
          Found LVM2 metadata record at offset=9216, size=1536, offset2=0 size2=0
          Found LVM2 metadata record at offset=7168, size=2048, offset2=0 size2=0
          Found LVM2 metadata record at offset=5632, size=1536, offset2=0 size2=0

    I attempted to fix it as described here and was able to extract the 4 metadata sets listed above (using e.g. dd bs=1 skip=5632 count=1536 if=/dev/sdb1 of=output.file), but none of them includes the lv_data which I'm missing. Please advise how I could access the files which should be on /dev/sdb1. Any help is appreciated!
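
    For reference, the standard recovery route in this situation is vgcfgrestore. This is a sketch only - it assumes one of the extracted metadata records contains the full VG layout, and it should be attempted only after imaging the disk:

        # pull the largest (usually newest) metadata record out as text and trim it
        # down to the "vgname { ... }" block; the VG name appears at the top of it
        dd if=/dev/sdb1 bs=1 skip=10752 count=1037824 of=meta.txt

        # recreate the PV label with its original UUID, pointing at the saved metadata
        pvcreate --uuid dD4XZP-WStA-61xV-5Sff-ifmW-R4rR-JenHoU --restorefile meta.txt /dev/sdb1

        # restore the volume group description, then reactivate it
        vgcfgrestore -f meta.txt <vgname>
        vgchange -ay <vgname>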

    Read the article

  • taglib link errors

    - by Vihaan Verma
    I'm using taglib for one of my projects. The Debug/Release library is built using MSVC 10. When compiling the code against the library in taglib/taglib/Release, some linker errors are thrown. All eight are LNK2019 unresolved externals in id3.cpp.1.o, each referenced from the same function, "struct MetaData __cdecl ID3::getMetaDataOfFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >)" (?getMetaDataOfFile@ID3@@YA?AUMetaData@@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z). The unresolved symbols are:

        "__declspec(dllimport) public: class TagLib::AudioProperties * __cdecl TagLib::FileRef::audioProperties(void)const" (__imp_?audioProperties@FileRef@TagLib@@QEBAPEAVAudioProperties@2@XZ)
        "__declspec(dllimport) public: virtual __cdecl TagLib::String::~String(void)" (__imp_??1String@TagLib@@UEAA@XZ)
        "__declspec(dllimport) public: class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > __cdecl TagLib::String::to8Bit(bool)const" (__imp_?to8Bit@String@TagLib@@QEBA?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@_N@Z)
        "__declspec(dllimport) public: virtual __cdecl TagLib::FileRef::~FileRef(void)" (__imp_??1FileRef@TagLib@@UEAA@XZ)
        "__declspec(dllimport) public: class TagLib::Tag * __cdecl TagLib::FileRef::tag(void)const" (__imp_?tag@FileRef@TagLib@@QEBAPEAVTag@2@XZ)
        "__declspec(dllimport) public: bool __cdecl TagLib::FileRef::isNull(void)const" (__imp_?isNull@FileRef@TagLib@@QEBA_NXZ)
        "__declspec(dllimport) public: __cdecl TagLib::FileRef::FileRef(class TagLib::FileName,bool,enum TagLib::AudioProperties::ReadStyle)" (__imp_??0FileRef@TagLib@@QEAA@VFileName@1@_NW4ReadStyle@AudioProperties@1@@Z)
        "__declspec(dllimport) public: __cdecl TagLib::FileName::FileName(char const *)" (__imp_??0FileName@TagLib@@QEAA@PEBD@Z)

    I'm only including tag.lib from the taglib/taglib/Release folder. Is there some other library I'm missing?

    Read the article

  • The one feature that would make me invest in SSIS 2012

    - by Peter Larsson
    This week I was invited by Microsoft to give two presentations in Slovenia. My presentations went well; I had good energy and the audience was interacting with me. When I had some time left over from networking and partying, I attended a few other presentations - at least the ones that were held in English. One of these was "SQL Server Integration Services 2012 - All the News, and More", given by Davide Mauri, a fellow co-worker from SolidQ. We started to talk and soon got into the details of the new things in SSIS 2012. All of the official things Davide talked about are good stuff, but for me, the best thing is one he didn't cover in his presentation. In versions of SSIS earlier than 2012, it is possible to have a stored procedure act as a data source, as long as it doesn't have a temp table in it. In that case, you get an error message from SSIS that "Metadata could not be found". This is still true with SSIS 2012, so the thing I am talking about is not really an SSIS feature, it's a SQL Server 2012 feature: the EXECUTE ... WITH RESULT SETS feature! With this, you can have a stored procedure with a temp table deliver the resultset to SSIS, if you execute the stored procedure from SSIS and add the WITH RESULT SETS option. If you do this, SSIS is able to take the metadata from the code you write in SSIS and not from the stored procedure! And it's very fast too. In earlier versions, referencing such a stored procedure forced SSIS to actually execute it (which can take hours) just to retrieve the metadata. Now, with RESULT SETS, SSIS 2012 can continue in milliseconds! This is because you provide the metadata in the RESULT SETS clause, and if the data from the stored procedure doesn't match the declared result sets, you will get an error anyway, so it makes sense that Microsoft has provided this optimization for us.
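
    For a concrete picture, the T-SQL shape of the feature looks like this (a sketch with hypothetical procedure and column names, not taken from the talk):

        -- SSIS reads the metadata from the declared result set, never from the
        -- procedure body, so the procedure is free to build its output in a temp table.
        EXEC dbo.uspGetOrders @Year = 2012
        WITH RESULT SETS
        (
            (
                OrderID   int           NOT NULL,
                OrderDate datetime      NOT NULL,
                Amount    decimal(18,2) NULL
            )
        );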

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 terabytes of drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM but similar hard drives, RAID, etc. It also ran Apache, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is that the new db server has mysqld.exe consuming 95-100% of CPU almost all the time, which is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting up config files for small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like server-id) and it will kill the server at startup. Here is the my.ini file:

        #MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        #
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        #
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        #
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        #
        [mysqld]

        # The TCP/IP Port the MySQL Server will listen on
        port=3306

        # Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        # Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when creating new tables
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # Query cache is used to cache SELECT results and later return them
        # without actually executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements, if you
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to a disk
        # based table. This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
        # If the file-size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMIZE, ALTER table statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition for the physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.
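
    Not part of the original question, but a quick way to see where a MyISAM-only server is hurting is to read the server status counters (these variable names exist in MySQL 5.x):

        SHOW GLOBAL STATUS LIKE 'Key_read%';          -- key cache miss rate: Key_reads / Key_read_requests
        SHOW GLOBAL STATUS LIKE 'Qcache%';            -- query cache hits vs. lowmem prunes
        SHOW GLOBAL STATUS LIKE 'Table_locks_waited'; -- MyISAM table-lock contention, a classic CPU hog
        SHOW GLOBAL STATUS LIKE 'Created_tmp%';       -- on-disk temp tables created by unoptimized queries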

    Read the article

  • .NET Code Evolution

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/07/24/153504.aspx

    At my day job I look at a lot of code written by other people. Most of the code is quite good and some is even a masterpiece. And there is also code which makes you think WTF… oh, it was written by me. Hm, not so bad after all. There are many excuses (er, reasons) for bad code. Most often it is time pressure, followed by not enough ambition (who cares) or insufficient training. Normally I care about code quality quite a lot, which makes me a (perceived) slow worker who writes many tests and refines the code quite a bit because of design deficiencies. Most of the deficiencies I find by putting my design under stress while checking for invariants. It also helps a lot to step into the code with a debugger (sometimes also WinDbg). I do this much more often when my tests are red. That way I get a much better understanding of what my code really does, and not what I think it should be doing.

    This time I want to show you how code can evolve over the years with different .NET Framework versions. Once there was a time when .NET 1.1 was new and many C++ programmers switched over to get rid of uninitialized pointers and memory leaks. There were also nice new data structures available, such as the Hashtable, which is a fast lookup table with O(1) time complexity. All was good and much code was written since then. In 2005 a new version of the .NET Framework arrived which brought many new things like generics and new data structures. The "old-fashioned" Hashtable was coming to an end and everyone used the new Dictionary<xx,xx> type instead, which was type safe and faster because the object-to-type conversion (aka boxing) was no longer necessary. I think 95% of all Hashtables and dictionaries use string as key. Often it is convenient to ignore casing to make it easy to look up values which the user did enter. An often followed route is to convert the string to upper case before putting it into the Hashtable:

        Hashtable Table = new Hashtable();

        void Add(string key, string value)
        {
            Table.Add(key.ToUpper(), value);
        }

    This is valid and working code, but it has problems. First, we can pass the Hashtable a custom IEqualityComparer to do the string matching case-insensitively. Second, we can switch over to the (by now also old) Dictionary type to become a little faster, and we can keep the original keys (not upper-cased) in the dictionary:

        Dictionary<string, string> DictTable = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

        void AddDict(string key, string value)
        {
            DictTable.Add(key, value);
        }

    Many people do not use the other ctors of Dictionary because they shy away from the overhead of writing their own comparer. They do not know that .NET already has predefined comparers for strings at hand which you can use directly.

    Today, in the many-core era, we use threads all over the place. Sometimes things break in subtle ways, but most of the time it is sufficient to place a lock around the offender. Threading has become so mainstream that it may sound weird that in the year 2000 some guy got a huge incentive for the idea to reduce the time to process calibration data from 12 hours to 6 hours by using two threads on a dual-core machine. Threading makes it easy to become faster at the expense of correctness. Correct and scalable multithreading can be arbitrarily hard to achieve, depending on the problem you are trying to solve.

    Let's suppose we want to process millions of items with two threads and count the items processed by all threads. A typical beginner's code might look like this:

        int Counter;

        void IJustLearnedToUseThreads()
        {
            var t1 = new Thread(ThreadWorkMethod);
            t1.Start();
            var t2 = new Thread(ThreadWorkMethod);
            t2.Start();

            t1.Join();
            t2.Join();

            if (Counter != 2 * Increments)
                throw new Exception("Hmm " + Counter + " != " + 2 * Increments);
        }

        const int Increments = 10 * 1000 * 1000;

        void ThreadWorkMethod()
        {
            for (int i = 0; i < Increments; i++)
            {
                Counter++;
            }
        }

    It throws an exception with a message like "Hmm 10.222.287 != 20.000.000" and thus never finishes successfully. The code fails because the assumption that Counter++ is an atomic operation is wrong. The ++ operator is just a shortcut for

        Counter = Counter + 1

    This involves reading the counter from a memory location into the CPU, incrementing the value on the CPU, and writing the new value back to the memory location. When we look at the generated assembly code we will see only

        inc dword ptr [ecx+10h]

    which is only one instruction. Yes, it is one instruction, but it is not atomic. All modern CPUs have several layers of caches (L1, L2, L3) which try to hide how slow actual main memory accesses are. Since cache is just another word for redundant copy, it can happen that one CPU reads a value from main memory into the cache, modifies it and writes it back to main memory. The problem is that at least the L1 cache is not shared between CPUs, so it can happen that one CPU makes changes to values which have changed in the meantime in main memory. From the exception you can see we did increment the value 20 million times, but half of the changes were lost because we overwrote the already-changed value from the other thread. This is a very common case, and people learn to protect their data with proper locking:

        void Intermediate()
        {
            var time = Stopwatch.StartNew();
            Action acc = ThreadWorkMethod_Intermediate;
            var ar1 = acc.BeginInvoke(null, null);
            var ar2 = acc.BeginInvoke(null, null);
            ar1.AsyncWaitHandle.WaitOne();
            ar2.AsyncWaitHandle.WaitOne();

            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Intermediate did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Intermediate()
        {
            for (int i = 0; i < Increments; i++)
            {
                lock (this)
                {
                    Counter++;
                }
            }
        }

    This is better and uses the .NET thread pool to get rid of manual thread management. It gives the expected result, but it can result in deadlocks because you lock on this. This is in general a bad idea, since it can lead to deadlocks when other threads use your class instance as a lock object. It is therefore recommended to create a private object as lock object, to ensure that nobody else can lock your lock object.

    When you read more about threading, you will read about lock-free algorithms. They are nice and can improve performance quite a lot, but you need to pay close attention to the CLR memory model. It makes quite weak guarantees in general, but code can still work because your CPU architecture gives you more invariants than the CLR memory model does. For a simple counter there is an easy lock-free alternative in the Interlocked class in .NET. As a general rule you should not try to write lock-free algos, since most likely you will fail to get them right on all CPU architectures.

        void Experienced()
        {
            var time = Stopwatch.StartNew();
            Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
            Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
            t1.Wait();
            t2.Wait();

            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Experienced did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Experienced()
        {
            for (int i = 0; i < Increments; i++)
            {
                Interlocked.Increment(ref Counter);
            }
        }

    Since time moves forward, we no longer use threads explicitly but the much nicer Task abstraction, which was introduced with .NET 4 in 2010. It is educational to look at the generated assembly code. The Interlocked.Increment method must be called, which does wondrous things, right? Let's see:

        lock inc dword ptr [eax]

    The first thing to note is that there is no method call at all. Why? Because the JIT compiler knows very well about CPU intrinsic functions. Atomic operations which lock the memory bus to prevent other processors from reading stale values are such things. Second: this is the same increment call, prefixed with a lock instruction. The only reason for the existence of the Interlocked class is that the JIT compiler can compile it to the matching CPU intrinsic functions, which can not only increment by one but can also do an add, an exchange and a combined compare-and-exchange operation. But be warned that the correct usage of its methods can be tricky. If you try to be clever and look at the generated IL code and try to reason about its efficiency, you will fail. Only the generated machine code counts.

    Is this the best code we can write? Perhaps. It is nice and clean. But can we make it any faster? Let's see how we are doing currently:

        Level                       Time in s
        IJustLearnedToUseThreads    flawed code
        Intermediate                1,5 (lock)
        Experienced                 0,3 (Interlocked.Increment)
        Master                      0,1 (1,0 for int[2])

    That lock-free thing is really a nice thing. But if you read more about CPU caches, cache coherency and false sharing, you can do even better:

        int[] Counters = new int[12]; // cache line size is 64 bytes on my machine with an 8-way associative cache; try for yourself, e.g. 64 on more modern CPUs

        void Master()
        {
            var time = Stopwatch.StartNew();
            Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Master, 0);
            Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Master, Counters.Length - 1);
            t1.Wait();
            t2.Wait();

            Counter = Counters[0] + Counters[Counters.Length - 1];

            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Master did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Master(object number)
        {
            int index = (int) number;
            for (int i = 0; i < Increments; i++)
            {
                Counters[index]++;
            }
        }

    The key insight here is to give each core its own value. But if you simply use an integer array of two items, one for each core, and add the items at the end, you will be much slower than the lock-free version (factor 3). Each CPU core has its own cache line size, which is somewhere in the range of 16-256 bytes. When you access a value from one location, the CPU does not fetch only one value from main memory but a complete cache line (e.g. 16 bytes). This means that you do not pay for the next 15 bytes when you access them. This can lead to dramatic performance improvements and to non-obvious code which is faster although it has many more memory reads than another algorithm.

    So what have we done here? We started with correct code, but it was lacking knowledge of how to use the .NET Base Class Library optimally. Then we tried to get fancy and used threads for the first time, and failed. Our next try was better, but it still had non-obvious issues (the lock object exposed to the outside). Increasing our knowledge further, we found a lock-free version of our counter, which is nice and clean and a perfectly valid solution. The last example is only here to show you how you can get the most out of threading by paying close attention to your data structures and CPU cache coherency. Although we are working in a virtual execution environment, in a high-level language with automatic memory management, it pays off to know the details down to the assembly level. Only if you continue to learn and to dig deeper can you come up with solutions no one else was even considering. I have studied particle physics, which does help with the digging-deeper part. Have you ever tried to solve Quantum Chromodynamics equations? Compared to that, the rest must be easy ;-). Although I am no longer working in the science field, I take pride in discovering non-obvious things. This can be a very hard-to-find bug or a new way to restructure data to make something 10 times faster. Now I need to get some sleep ….

    Read the article

  • Why do I lose my javascript from the browser cache after a full page postback?

    - by burak ozdogan
    Hi, I have an external JavaScript file which I include in my page in the code-behind (as seen below). My problem is that when my page makes a postback (not a partial one) and I check the loaded scripts using Firebug, I cannot see the JavaScript file in the list after the postback. I assumed that once it is included in the page on the first load, the browser would cache it so that I do not need to re-include it. What am I doing wrong? The way I include the script is here:

        protected override void OnInit(EventArgs e)
        {
            if (this.Page.IsPostBack == false)
            {
                if (this.Page.ClientScript.IsClientScriptIncludeRegistered("ctlPalletDetail") == false)
                {
                    string guidParamToHackBrowserCaching = System.Guid.NewGuid().ToString();
                    this.Page.ClientScript.RegisterClientScriptInclude(
                        "ctlPalletDetail",
                        ResolveUrl(String.Format("~/clientScripts/ctlLtlRequestDetail.js?par={0}", guidParamToHackBrowserCaching)));
                }
            }
            base.OnInit(e);
        }

    Read the article

  • Does the asp.net RoleManager really cache the roles for a user in a cookie if so configured?

    - by Ralph Shillington
    In my web.config I have the role manager configured as follows:

        <roleManager enabled="true"
                     cacheRolesInCookie="true"
                     cookieName=".ASPROLES"
                     cookieTimeout="30"
                     cookiePath="/"
                     cookieRequireSSL="false"
                     cookieSlidingExpiration="true"
                     cookieProtection="All">

    However, in our custom RoleProvider it would seem that the GetRolesForUser method is always being called, rather than, as I would have expected, the role manager serving up the roles from its cookie. We're using something like this to get the roles for a user:

        string[] myroles = Role.GetRolesForUser("myuser");

    Is there something that I'm missing in the configuration, or in the use of the role manager? Thanks in advance
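
    One detail worth checking, stated here as an assumption rather than a confirmed answer: the cookie cache is scoped to the RolePrincipal of the currently authenticated user, so asking for a named user tends to bypass it and go straight to the provider:

        // may be served from the .ASPROLES cookie when cacheRolesInCookie="true"
        // and the current principal is a RolePrincipal (assumption about cache scoping)
        string[] mine = Roles.GetRolesForUser();

        // naming an arbitrary user has no cookie to consult, so the provider is always called
        string[] theirs = Roles.GetRolesForUser("myuser");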

    Read the article

  • Is it possible to output cache by host name? ie varybyhost or varbyhostheader?

    - by Pure.Krome
    Hi folks, I've got a website that has a number of host headers. Depending on the host header, the results are different - both visually (themed) and in data. So let's imagine I have a website called 'Foo' that returns search results (original, eh?). Now, the same code runs both sites. It is physically the same server/website (using host headers):

        www.foo.com
        www.foo.com.au

    Now, if I go to '.com', the site is themed in blue. If I go to the '.com.au' site, it's themed in red. And the data is different for the same search, based on the host name (i.e. US results for .com, AU results for .com.au). SO... if I wish to use OutputCaching, can this be handled / differ by the host name? I don't want the first person to go to the .com site and grab the results... and then a second person to go to my .com.au site with the same search data... and get the theme and results for the .com site. Possible?
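
    The standard ASP.NET hook for this kind of thing is VaryByCustom. The override below is the real Global.asax API; the "host" key is just a convention chosen for this sketch:

        // Global.asax.cs
        public override string GetVaryByCustomString(HttpContext context, string custom)
        {
            if (string.Equals(custom, "host", StringComparison.OrdinalIgnoreCase))
                return context.Request.Url.Host;   // one cache entry per host header
            return base.GetVaryByCustomString(context, custom);
        }

    which a page would opt into with:

        <%@ OutputCache Duration="60" VaryByParam="*" VaryByCustom="host" %>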

    Read the article

  • Is it a good idea to cache data from web services into a database?

    - by Thierry Lam
    Let's assume that Stack Overflow offers web services where you can retrieve all the questions asked by a specific user. A request to get all questions from user A could result in the following JSON output:

        [
          {
            "question": "What is rest?",
            "date_created": "20/02/2010",
            "votes": 1
          },
          {
            "question": "Which database to use for ...",
            "date_created": "20/07/2009",
            "votes": 5
          }
        ]

    If I want to manipulate and present the data in any way that I want, would it be wise to dump it into a local database? At some point, I will also want to retrieve all answers for each question and store them in a local database too. The workflow that I'm thinking of:

        1. User logs in.
        2. Web services retrieve all questions asked by the logged-in user and dump them in a local database.
        3. User wants all answers for a specific question; another web service does the retrieval and dumps them in a local database.
        4. After the user logs out, delete from the local database all questions and answers from that user.

    Read the article

  • Cache inferno - how can Model.A be two different things at the same time?

    - by Martin
    I have this:

        <%= Model.StartDate %>
        <%= Html.Hidden("StartDate", Model.StartDate) %>

    It outputs:

        2010-05-11 11:00:00 +01:00
        <input type="hidden" value="2010-03-17 11:00:00 +01:00" name="StartDate" id="StartDate">

    What the... It's a paging mechanism, so the hidden value was valid on the first page and I've been able to move forward to the next page. But since the values won't update properly, it ends there. What do I need to do? Using Firefox.

    Read the article

  • How do we know if a query is cached or retrieved from the database?

    - by Hadi
    For example:

        class Product
          has_many :sales_orders

          def total_items_deliverable
            self.sales_orders.each { |so| ... } # sum the total
            # give back the value
          end
        end

        class SalesOrder
          def self.deliverable
            # return array of sales_orders that are deliverable to customer
          end
        end

    Now consider:

        1. SalesOrder.deliverable          # all sales_orders that are deliverable to the customer
        2. pa = Product.find(1)
        3. pa.sales_orders.deliverable     # all sales_orders whose product_id is 1 and deliverable to the customer
        4. pa.total_so_deliverable

    The very point I'm going to ask about is: how many times is SalesOrder.deliverable actually computed? From points 1, 3 and 4, it is computed 3 times, which means 3 trips to the database. So having total_so_deliverable promotes a fat model, but costs more database access. Alternatively (in the view) I could iterate while displaying the content, so I end up accessing the database only 2 times instead of 3. Any win-win solution / best practice for this kind of problem? (See the sketch below.)
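
    One common Rails answer is simple per-object memoization - a sketch only, with the sum logic stubbed exactly as in the original pseudocode:

        class Product < ActiveRecord::Base
          has_many :sales_orders

          def deliverable_orders
            @deliverable_orders ||= sales_orders.deliverable  # queried once, reused on later calls
          end

          def total_items_deliverable
            deliverable_orders.size  # or sum whatever "total" means here
          end
        end

    With this, points 3 and 4 share a single query for the lifetime of the pa object, though a fresh Product.find naturally starts with a cold cache again.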

    Read the article

  • Good way to cache data during Android application lifecycle?

    - by sniurkst
    Hello, keeping my question short: I am creating an application with 3 activities, where A = list of categories, B = list of items, C = single item. The data displayed in B and C is parsed from online XML. But if I go through A - B1 - C, then back to A and then back to B1, I would like to have its data cached somewhere so that I wouldn't have to request the XML again. I'm new to Android and Java programming; I've googled a lot and still can't find (or simply do not have an idea where to look for) a way to do what I want. Would storing all received data in the main activity A (HashMaps? ContentProviders?) and then passing it to B and C (if they repeat a request that was made before) be a good idea?

    Read the article

  • How to clear cache for previously installed InfoPath forms on a client computer?

    - by user313067
    Hi folks, We recently had a strange issue with an InfoPath 2007 form being opened from SharePoint 2007 and receiving the error message "the system cannot find the file specified". To be clear, this was not a forms-services-enabled form. Anyway, after spending way too much time trying to figure out what was going on (nothing in the MOSS 2007 server log files), we determined that the user had previously installed an older version of the form (but with the same name) on their workstation using a no-longer-available msi file (meaning we could not uninstall it from the workstation). So I wanted to pass on a very simple solution for anyone who is unfortunate enough to run into this problem in the future (since I lost a great deal of hair over it): Fire up regedit and go to HKEY_LOCAL_MACHINE\Software\Microsoft\Office\InfoPath\SolutionsCatalog. Locate the key that has the previously installed form's name, and delete it. This will stop InfoPath from trying to open the form locally (where it is either old or doesn't exist) and force it to open your form from SharePoint. Hope this helps someone!

    Read the article

  • dynamic page caching- show redirected html cache page or show the dynamic page?

    - by i need help
    I would like your comments. E.g.: when a user first visits www.testing.com/productdetailpage.asp, I will use caching - storing the whole page as productdetailpage.html. When the user goes to reopen productdetailpage.asp, they will be redirected to www.testing.com/productdetailpage.html. That means they will see productdetailpage.html, not the .asp page. Is this a good way to do it? Are there any implications in terms of SEO or anything else? Would it be better to read the data from the .html into the .asp and serve the final page as .asp all the time?

    Read the article

  • How could I cache images that I'm pulling from a magento database through ajax?

    - by wes
    Here's the script being called through AJAX:

        <?php
        require_once '../app/Mage.php';
        umask(0);
        /* not Mage::run(); */
        Mage::app('default');

        $cat_id = ($_POST['cat_id']) ? $_POST['cat_id'] : NULL;

        try {
            $category = new Mage_Catalog_Model_Category();
            $category->load($cat_id);
            $collection = $category->getProductCollection();

            $output = '<ul>';
            foreach ($collection as $product) {
                $cProduct = Mage::getModel('catalog/product');
                $cProduct->load($product->getId());
                $output .= '<li><img id="' . $product->getId() . '" src="'
                         . (string)Mage::helper('catalog/image')->init($cProduct, 'small_image')->resize(75)
                         . '" class="thumb" /></li>';
            }
            $output .= '</ul>';
            echo $output;
        } catch (Exception $e) {
            echo 'Caught exception: ', $e->getMessage(), "\n";
        }

    I'm just passing in the category ID, which I've tacked onto the navigation links, then doing some work to eventually pass back all the product images in that category. I'm using this on a drag-and-drop build-a-bracelet type of application, and the number of images returned is sometimes in the 500s. So it gets pretty held up during transmission, sometimes 10 seconds or so. I know I'd do well to cache them some way, I'm just not sure how to go about it. Any help is much appreciated. Thanks. -Wes
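
    Since the images themselves are plain <img> URLs, the browser will cache those on its own once fetched; it is usually the AJAX response listing them that needs help. A minimal sketch in plain PHP (nothing Magento-specific, the one-hour lifetime is an arbitrary choice, and it assumes the lookup is switched from POST to GET, since browsers rarely cache POST responses):

        // emit before any other output so the headers still go out
        header('Cache-Control: public, max-age=3600');   // let the browser keep the response for an hour
        header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 3600) . ' GMT');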

    Read the article

  • How to unit test synchronized code

    - by gillJ
    Hi, I am new to Java and JUnit. I have the following piece of code that I want to test, and I would appreciate your ideas about the best way to go about testing it. Basically, the following code is about electing a leader from a cluster. The leader holds a lock on the shared cache, and the leader's services are resumed and paused if it somehow loses the lock on the cache. How can I make sure that a leader/thread still holds the lock on the cache, and that another thread cannot get its services resumed while the first is executing?

        public interface ContinuousService {
            public void resume();
            public void pause();
        }

        public abstract class ClusterServiceManager {
            private volatile boolean leader = false;
            private volatile boolean electable = true;
            private List<ContinuousService> services;

            protected synchronized void onElected() {
                if (!leader) {
                    for (ContinuousService service : services) {
                        service.resume();
                    }
                    leader = true;
                }
            }

            protected synchronized void onDeposed() {
                if (leader) {
                    for (ContinuousService service : services) {
                        service.pause();
                    }
                    leader = false;
                }
            }

            public void setServices(List<ContinuousService> services) {
                this.services = services;
            }

            @ManagedAttribute
            public boolean isElectable() {
                return electable;
            }

            @ManagedAttribute
            public boolean isLeader() {
                return leader;
            }
        }

        public class TangosolLeaderElector extends ClusterServiceManager implements Runnable {
            private static final Logger log = LoggerFactory.getLogger(TangosolLeaderElector.class);
            private String election;
            private long electionWaitTime = 5000L;
            private NamedCache cache;

            public void start() {
                log.info("Starting LeaderElector ({})", election);
                Thread t = new Thread(this, "LeaderElector (" + election + ")");
                t.setDaemon(true);
                t.start();
            }

            public void run() {
                // Give the connection a chance to start itself up
                try { Thread.sleep(1000); } catch (InterruptedException e) {}

                boolean wasElectable = !isElectable();
                while (true) {
                    if (isElectable()) {
                        if (!wasElectable) {
                            log.info("Leadership requested on election: {}", election);
                            wasElectable = isElectable();
                        }
                        boolean elected = false;
                        try {
                            // Try and get the lock on the LeaderElectorCache for the current election
                            if (!cache.lock(election, electionWaitTime)) {
                                // We didn't get the lock. Cycle round again.
                                // This is to ensure we check the electable flag every now & then.
                                continue;
                            }
                            elected = true;
                            log.info("Leadership taken on election: {}", election);
                            onElected();
                            // Wait here until the services fail in some way.
                            while (true) {
                                try { Thread.sleep(electionWaitTime); } catch (InterruptedException e) {}
                                if (!cache.lock(election, 0)) {
                                    log.warn("Cache lock no longer held for election: {}", election);
                                    break;
                                } else if (!isElectable()) {
                                    log.warn("Node is no longer electable for election: {}", election);
                                    break;
                                }
                                // We're fine - loop round and go back to sleep.
                            }
                        } catch (Exception e) {
                            if (log.isErrorEnabled()) {
                                log.error("Leadership election " + election + " failed (try bfmq logs for details)", e);
                            }
                        } finally {
                            if (elected) {
                                cache.unlock(election);
                                log.info("Leadership resigned on election: {}", election);
                                onDeposed();
                            }
                            // On deposition, do not try and get re-elected for at least the standard wait time.
                            try { Thread.sleep(electionWaitTime); } catch (InterruptedException e) {}
                        }
                    } else {
                        // Not electable - wait a bit and check again.
                        if (wasElectable) {
                            log.info("Leadership NOT requested on election ({}) - node not electable", election);
                            wasElectable = isElectable();
                        }
                        try { Thread.sleep(electionWaitTime); } catch (InterruptedException e) {}
                    }
                }
            }

            public void setElection(String election) {
                this.election = election;
            }

            @ManagedAttribute
            public String getElection() {
                return election;
            }

            public void setNamedCache(NamedCache nc) {
                this.cache = nc;
            }
        }

    Read the article

  • Can you solve my odd Sharepoint CSS cache / customising problem?

    - by Aidan
    I have a weird situation with my SharePoint CSS. It is deployed as part of a .wsp solution, and up until now everything has been fine. The farm it deploys to has a couple of web front ends plus a single apps server and SQL box. The symptom is that if I deploy the solution and then use a web browser to view the page, it has no styles, and if I access the .css directly I see only the first 100 or so bytes of the .css. However, if I go into SharePoint Designer and look at the file, it looks fine, and if I check it out and publish it (customising the file but not actually changing anything in it) then the website works fine and the CSS downloads completely. There is some fairly complex caching on the servers - disk-based and object caches. As far as I can tell I have cleared these (and an iisreset should clear them anyway... shouldn't it?). I have used this tool to clear the BLOB cache from the whole farm: http://blobcachefarmflush.codeplex.com/

    Read the article

  • How does browser work with expiration headers, cache-control headers, last-modified-header ?

    - by Umair
    I am a web developer and have worked with both PHP and .NET. Despite having over a year of experience working on the web, I haven't been able to understand browser caching thoroughly, and I hope the web gurus here can help me with it. The questions I have in mind are:

        1. How does the browser actually cache things? Does it ask the server whether the cached file has changed or not?
        2. What is the ideal way for a developer to make full use of browser caching, while still being able to push new changes to the site with no hassle at all?

    I think that if the browser somehow caches my CSS, JS and images, and then just checks with the server every time for their modification, this could sort the issue out - but I am not sure how to do it. Waiting for interesting answers :)
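
    To make the question concrete, this is roughly what the exchange looks like on the wire (the header values are illustrative, not from a real site). First response:

        HTTP/1.1 200 OK
        Cache-Control: max-age=3600
        Last-Modified: Tue, 01 Jun 2010 10:00:00 GMT
        ETag: "abc123"

    Within the hour the browser reuses its copy without asking. After that it revalidates with a conditional request:

        GET /styles/site.css HTTP/1.1
        If-Modified-Since: Tue, 01 Jun 2010 10:00:00 GMT
        If-None-Match: "abc123"

    and an unchanged file earns a body-less 304 Not Modified - the "check every time but download only when changed" behaviour the question is after.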

    Read the article

  • hosting simple python scripts in a container to handle concurrency, configuration, caching, etc.

    - by Justin Grant
    My first real-world Python project is to write a simple framework (or re-use/adapt an existing one) which can wrap small Python scripts (used to gather custom data for a monitoring tool) in a "container" that handles boilerplate tasks like:

        - fetching a script's configuration from a file (keeping that info up to date if the file changes, and handling decryption of sensitive config data)
        - running multiple instances of the same script in different threads instead of spinning up a new process for each one
        - exposing an API for caching expensive data and storing persistent state from one script invocation to the next

    Today, script authors must handle the issues above themselves, which usually means that most script authors don't handle them correctly, causing bugs and performance problems. In addition to avoiding bugs, we want a solution which lowers the bar to creating and maintaining scripts, especially given that many script authors may not be trained programmers.

    Below are examples of the API I've been thinking of, and which I'm looking to get your feedback about. A scripter would need to build a single method which takes (as input) the configuration that the script needs to do its job, and either returns a Python object or calls a method to stream back data in chunks. Optionally, a scripter could supply methods to handle startup and/or shutdown tasks.

    HTTP-fetching script example (in pseudocode, omitting the actual data-fetching details to focus on the container's API):

        def run(config, context, cache):
            results = http_library_call(config.url, config.http_method, config.username, config.password, ...)
            return { html : results.html, status_code : results.status, headers : results.response_headers }

        def init(config, context, cache):
            config.max_threads = 20                  # up to 20 URLs at one time (per process)
            config.max_processes = 3                 # launch up to 3 concurrent processes
            config.keepalive = 1200                  # keep process alive for 10 mins without another call
            config.process_recycle.requests = 1000   # restart the process every 1000 requests (to avoid leaks)
            config.kill_timeout = 600                # kill the process if any call lasts longer than 10 minutes

    A database-data-fetching script example might look like this (in pseudocode):

        def run(config, context, cache):
            expensive = context.cache["something_expensive"]
            for record in db_library_call(expensive, context.checkpoint, config.connection_string):
                context.log(record, "logDate")   # log all properties; optionally specify the name of the timestamp property
                last_date = record["logDate"]
            context.checkpoint = last_date       # persistent checkpoint, used next time through

        def init(config, context, cache):
            cache["something_expensive"] = get_expensive_thing()

        def shutdown(config, context, cache):
            expensive = cache["something_expensive"]
            expensive.release_me()

    Is this API appropriately "pythonic", or are there things I should do to make it more natural to the Python scripter? (I'm more familiar with building C++/C#/Java APIs, so I suspect I'm missing useful Python idioms.) Specific questions:

        1. Is it natural to pass a "config" object into a method and ask the callee to set various configuration options? Or is there another preferred way to do this?
        2. When a callee needs to stream data back to its caller, is a method like context.log() (see above) appropriate, or should I be using yield instead? (yield seems natural, but I worry it'd be over the head of most scripters)
        3. My approach requires scripts to define functions with predefined names (e.g. "run", "init", "shutdown"). Is this a good way to do it? If not, what other mechanism would be more natural?
        4. I'm passing the same config, context, cache parameters into every method. Would it be better to use a single "context" parameter instead? Would it be better to use global variables instead?
        5. Finally, are there existing libraries you'd recommend to make this kind of simple "script-running container" easier to write?

    Read the article

  • iPhone Safari Web Application not using cache at all?

    - by Liuyi Sun
    Hi guys, I've been developing an iPhone web application for a while and have encountered a weird problem: when I open the web app in Safari (in the Safari browser itself, not launched from the home screen), Safari generates proper If-Modified-Since and If-None-Match headers, so the server simply answers 304 Not Modified to speed up the process. However, when starting the app from the home screen, Safari seems to forget these two headers, and the server always replies with 200 OK... any ideas about this?

    Read the article
