Search Results

Search found 14653 results on 587 pages for 'disk cache'.

  • AIX: iSCSI volumes disappear after reboot

    - by Dan
    We have an IBM P505 AIX box with two internal disks and a defined iSCSI volume. The iSCSI volume is defined in its own volume group and is connected to an IBM iSCSI DS3300 disk array via the secondary onboard ethernet port (i.e. we're not using a dedicated HBA; we're using the second onboard ethernet port for iSCSI exclusively). When we reboot the AIX box, the iSCSI volume doesn't get mounted (which is fine; I've figured out that it fails to mount because AIX tries mounting its volumes before starting the networking stack). The problem is, after the server has booted it fails to redetect the iSCSI target as a physical disk. This means the volume group (iscsivg) can't go online. If I run cfgmgr -v to redetect the iSCSI volume, it successfully detects the iSCSI target volume and creates a physical volume reference, but allocates it a different volume ID from what was defined before. E.g.: rootvg contains hdisk0 and hdisk1; iscsivg was originally defined with hdisk2 as the physical iSCSI volume; after reboot and running cfgmgr -v, AIX detects physical volumes hdisk0, hdisk1 and hdisk3. As there's no hdisk2, I can't varyon the iscsivg volume group. I can't see any existing hdisk2 definition in the ODM. I can't easily add or change the definition of the physical disk in the iscsivg volume group as it won't "varyon". Exporting the volume group deletes it completely; recreating the volume group by "importing" it from the reallocated disk makes it available again, but surely there's a better way? Can I force a specific hdisk drive designation for an iSCSI target? How do you bring iSCSI volumes online after a reboot? I assume this "just works" with a dedicated HBA instead of a generic ethernet adapter? By the way, the iSCSI volume works fine once it's mounted; we only have problems getting it working, and only with AIX. The iSCSI array works fine with our Linux and Windows servers; i.e. the volumes get detected and remounted after boot time without any problems, using generic ethernet adapters. Here's some of the config from the AIX box.

    Defined disks / devices:

        # lsdev
        hdisk0    Available 06-08-01-5,0   16 Bit LVD SCSI Disk Drive
        hdisk1    Available 06-08-01-8,0   16 Bit LVD SCSI Disk Drive
        hdisk3    Available                Other iSCSI Disk Drive
        iscsi0    Available                iSCSI Protocol Device
        scsi0     Available 06-08-00       PCI-X Dual Channel Ultra320 SCSI Adapter bus
        scsi1     Available 06-08-01       PCI-X Dual Channel Ultra320 SCSI Adapter bus
        ses0      Available 06-08-01-15,0  SCSI Enclosure Services Device
        sisscsia0 Available 06-08          PCI-X Dual Channel Ultra320 SCSI Adapter

    iSCSI target definition in /etc/iscsi/targets:

        # IBM DS3300 disk array
        # port 1 on second controller
        10.10.xx.xxx 3260 iqn.1992-01.com.lsi:1535.600a0b80005b0a7fxxxxxxxxxxxx

    Physical volumes (after reimporting the volume group):

        # lspv
        hdisk0 0003b08a0d4936b6 rootvg  active
        hdisk1 0003b08aaa5cb366 rootvg  active
        hdisk3 0003b08a032d04bb iscsivg active

  • Linux boot on a RAID1 software RAID?

    - by azera
    Hello, I am trying to convert my single-disk boot to a RAID1 boot. So far, here is what I have: I successfully created the RAID1 in degraded mode with the new drive alone, and copied all the data onto it. I can mount that RAID1, see its files, etc. I already have a RAID5 that is working on the same box (although I'm not booting from it). I have installed GRUB on both drives. When GRUB boots, it loads the kernel fine, but during the kernel boot it fails to load the "root block device". The kernel tells me:

        1 - detected that root device is an md device
        2 - determining root devices
        3 - mounting root
        4 - mounting /dev/md125 on /newroot failed: input/output error. Please enter another root device: ...

    At this point, if I enter /dev/sda3 (my "old" root device that isn't converted to RAID yet) everything boots fine, just without the RAID root. The /dev/md125 device is indeed created, but it seems to be created after the error happens; as in, it creates it after loading the device, when mdadm is loaded. Somehow it looks like it can't/doesn't assemble the RAID array before it needs to mount it, and I don't know how I can solve that. My config files (taken from the system once it boots with sda3 as root device):

        $ cat /etc/mdadm.conf
        ARRAY /dev/md/md0-r5 metadata=0.90 UUID=1a118934:c831bdb3:64188b84:66721085
        ARRAY /dev/md125 metadata=0.90 UUID=48ec4190:a80d4dde:64188b84:66721085

        $ cat /proc/mdstat
        Personalities : [raid1] [raid6] [raid5] [raid4] [raid0] [raid10]
        md125 : active raid1 sdc3[1]
              477853312 blocks [2/1] [_U]
        md127 : active raid5 sdd[0] sdf[3] sdb[2] sde[1]
              4395415488 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
        unused devices: <none>

        $ cat /boot/grub/menu.lst
        default 0
        timeout 8
        splashimage=(hd0,0)/boot/grub/splash.xpm.gz
        title Gentoo Linux 2.6.31-r10
        root (hd0,0)
        #kernel /boot/kernel-genkernel-x86_64-2.6.31-gentoo-r10 root=/dev/ram0 real_root=/dev/sda3
        kernel /boot/kernel-genkernel-x86_64-2.6.31-gentoo-r10 root=/dev/md125 md=125,/dev/sdc3,/dev/sda3
        initrd /boot/initramfs-genkernel-x86_64-2.6.31-gentoo-r10

        # blkid
        /dev/sda1: UUID="89fee223-b845-4e0a-8a0b-e6cf695d5bcf" TYPE="ext2"
        /dev/sda2: UUID="a72296a8-d7d4-447f-a34b-ee920fd1a767" TYPE="swap"
        /dev/sda3: UUID="97eb0a6a-c385-4a9d-bf74-c0bab1fa4dc1" TYPE="ext3"
        /dev/sdb: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member"
        /dev/sdc1: UUID="d36537fd-19a0-b8a3-6418-8b8466721085" TYPE="linux_raid_member"
        /dev/sdd: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member"
        /dev/sde: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member"
        /dev/md127: UUID="13a41589-4cf1-4c04-91ca-37484182c783" TYPE="ext4"
        /dev/sdf: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member"
        /dev/sdc2: UUID="a1916397-1b48-45d7-9f98-73aa521e882f" TYPE="swap"
        /dev/sdc3: UUID="48ec4190-a80d-4dde-6418-8b8466721085" TYPE="linux_raid_member"
        /dev/md125: UUID="c947ed64-1d4d-4d1d-b4d2-24669fff916e" SEC_TYPE="ext2" TYPE="ext3"

        # mdadm -E
        mdadm: No devices to examine

        # fdisk -l
        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xe975e9fc
           Device Boot  Start    End     Blocks  Id System
        /dev/sda1           1      5      40131  83 Linux
        /dev/sda2           6   1311   10490445  82 Linux swap / Solaris
        /dev/sda3        1312  60801  477853425  83 Linux

        Disk /dev/sdc: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xe975e9fc
           Device Boot  Start    End     Blocks  Id System
        /dev/sdc1           1      5      40131  83 Linux
        /dev/sdc2           6   1311   10490445  82 Linux swap / Solaris
        /dev/sdc3        1312  60801  477853425  83 Linux

        Disk /dev/md125: 489.3 GB, 489321791488 bytes
        2 heads, 4 sectors/track, 119463328 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Disk identifier: 0x00000000
        Disk /dev/md125 doesn't contain a valid partition table

  • External hard drive FAT32 to NTFS conversion fails

    - by Pieter
    I'm trying to convert the FAT32 file system of an external hard drive to NTFS. Here's what happened:

        C:\Windows\system32>chkdsk G:
        The type of the file system is FAT32.
        Volume PIETEREXT created 3/19/2008 12:43
        Volume Serial Number is 1806-2E30
        Windows is verifying files and folders...
        File and folder verification is complete.
        Windows has scanned the file system and found no problems.
        No further action is required.

        488,264,768 KB total disk space.
        72,192 KB in 1,503 hidden files.
        1,281,792 KB in 40,029 folders.
        309,235,168 KB in 199,915 files.
        177,675,584 KB are available.

        32,768 bytes in each allocation unit.
        15,258,274 total allocation units on disk.
        5,552,362 allocation units available on disk.

        C:\Windows\system32>cd \
        C:\>convert g: /fs:ntfs
        The type of the file system is FAT32.
        Enter current volume label for drive G: PIETEREXT
        Volume PIETEREXT created 3/19/2008 12:43
        Volume Serial Number is 1806-2E30
        Windows is verifying files and folders...
        File and folder verification is complete.
        Windows has scanned the file system and found no problems.
        No further action is required.

        488,264,768 KB total disk space.
        72,192 KB in 1,503 hidden files.
        1,281,792 KB in 40,029 folders.
        309,235,168 KB in 199,915 files.
        177,675,584 KB are available.

        32,768 bytes in each allocation unit.
        15,258,274 total allocation units on disk.
        5,552,362 allocation units available on disk.

        Determining disk space required for file system conversion...
        Total disk space:             488384001 KB
        Free space on volume:         177675584 KB
        Space required for conversion:   975155 KB
        Converting file system
        The conversion failed.
        G: was not converted to NTFS

    I looked at the TechNet page for my error, but after closing every app the conversion was still failing halfway through. Why does it keep failing? I kept an eye on Task Manager, but it didn't look like my system resources were near depletion. I'm using Windows 8.

  • A few questions on DVD-RAM/ROM drives

    - by Nrew
    Why can some DVD-ROM/RAM drives read data from scratched discs while others cannot? For me it's a comparison between LG and Lite-On: the Lite-On gets errors while reading discs with scratches, while the LG reads the same scratched discs with no problems. Why is that? Is it possible to modify the software or the driver of the DVD drive so that it won't get errors when copying data from a scratched disc?

  • Zeroing out deleted files

    - by Bozojoe
    I want to minimize the amount of space my virtual disk is using by zeroing out the data from any deleted files. The virtual disk is a VMDK running Ubuntu on VirtualBox. What is the best way to find any deleted files and zero them out, so that when I export the appliance the disk size covers only the existing files on the disk?
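
    For context, the usual approach is not to hunt down individual deleted files but to fill the free space with zeros and then delete the filler, so every unused block ends up zeroed; on Linux guests this is typically done with zerofree or dd. The same idea, sketched below in C# purely as an illustration (the path and the compaction step are assumptions; VBoxManage's --compact works on VDI images, so a VMDK may first need converting with VBoxManage clonehd):

        // using System.IO;
        // A minimal sketch: write zeros until the volume is full, then delete
        // the filler file. Afterwards, compacting the image on the host
        // (e.g. "VBoxManage modifyhd disk.vdi --compact") can reclaim the
        // zeroed blocks; deleted-but-dirty blocks otherwise stay allocated.
        using (var fs = new FileStream("/tmp/zerofill.bin", FileMode.Create, FileAccess.Write))
        {
            var zeros = new byte[1 << 20]; // 1 MB buffer of zeros
            try
            {
                while (true)
                    fs.Write(zeros, 0, zeros.Length); // runs until free space is exhausted
            }
            catch (IOException)
            {
                // Expected: the write fails once the disk is full.
            }
        }
        File.Delete("/tmp/zerofill.bin");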

  • Issue using a "used" SSD as a Windows 8.1 Boot Drive

    - by EpiGrad
    So, I'm something of a Mac person, but decided to take a stab at this whole "build yourself a PC" thing. Right now, the thing is assembled, POSTs just fine, and can get to the BIOS. The problem is the drive I want to use: I intended to use an 80 GB Corsair SSD I've had sitting around as the boot drive, and a new Samsung SSD for games and the like. So I boot using a Windows 8.1 install USB stick, and if the Samsung drive is plugged in, it happily offers to install Windows on it. The Corsair drive, though, is flipped out: I reformatted it as a blank NTFS drive (it was HFS for Mac purposes) and the BIOS can't see it, nor can the Windows installer. What's wrong, and how do I fix it? The tools at my disposal are: the current ASUS BIOS that came with my motherboard (a Z87I-Deluxe), and a Mac running the latest OS X which can also boot to Windows 7 if needed via either Parallels or Boot Camp.

    Update 1: Based on a friend's suggestion to switch SATA ports, Windows 8.1's installer can now see the drive as Drive 0, Partition 1, an 83.8 GB "Primary" partition. But when I click it and hit "Next", I get the following error: "We couldn't create a new partition or locate an existing one. For more information, see the Setup log files" (not that it gives any clue how to access those).

    Update 2: Following a trail of Google suggestions, I ended up going into advanced tools and just reformatting the drive as follows, per these instructions:

        Start DISKPART.
        Type LIST DISK and identify your SSD disk number (from 0 to n disks).
        Type SELECT DISK <n> where <n> is your SSD disk number.
        Type CLEAN
        Type CREATE PARTITION PRIMARY
        Type ACTIVE
        Type FORMAT FS=NTFS QUICK
        Type ASSIGN
        Type EXIT twice (once to get out of DiskPart, the other to exit the command line tool)

    This goes well enough, but now I can select the disk for installation and I get a new error: "Windows 8 cannot be installed to this disk. The selected disk has an MBR partition table. On EFI systems, Windows can only be installed to GPT disks." So, Googling that, I do the following:

        select disk 0
        clean
        convert gpt
        exit

    ...and we might have fixed it. Windows is at least trying to install now.

  • What will happen when my DB server runs out of disk space?

    - by Noam
    I'm seeing a huge difference in free disk space between df -h and du -sxh /. I understood from my question "Resolving unix server disk space not adding up" that du -sxh / is a better estimate of when I will run out of disk space. Having said that, assuming in my case the above proves wrong and I do soon run out of disk space, what will happen? I assume MySQL will fail INSERT queries, but other than that, will I just need to delete some files, or will it be a more problematic situation?

  • Windows 8 RTM Final on MacBook Air

    - by Xarem
    If I try to install the Windows 8 RTM Final (from MSDN) on my mid-2011 MacBook Air using Boot Camp, I get this error:

        Windows cannot be installed to this disk. This computer's hardware may not
        support booting to this disk. Ensure that the disk's controller is enabled
        in the computer's BIOS menu.

    I already formatted the disk with NTFS, removed it, reformatted, and more... Does anyone know how to resolve this error? Thank you very much.

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master DB server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache, so it was our DB server and our application server. It was getting a little slow, so we split the DB server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is that the new DB server has mysqld.exe consuming 95-100% of CPU almost all the time, and this is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like the server-id) and it will kill the server at startup. Here is the my.ini file:

        # MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        #
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        #
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        #
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        #
        [mysqld]

        # The TCP/IP Port the MySQL Server will listen on
        port=3306

        # Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        # Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when creating new tables
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # Query cache is used to cache SELECT results and later return them
        # without actually executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements, if you
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to a
        # disk-based table. This limitation is for a single table. There can
        # be many of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
        # If the file-size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMIZE, ALTER table statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition of the physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.

  • .NET Code Evolution

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/07/24/153504.aspx

    At my day job I look at a lot of code written by other people. Most of the code is quite good and some is even a masterpiece. And there is also code which makes you think WTF... oh, it was written by me. Hm, not so bad after all. There are many excuses (reasons) for bad code. Most often it is time pressure, followed by not enough ambition (who cares) or insufficient training. Normally I care about code quality quite a lot, which makes me a (perceived) slow worker who writes many tests and refines the code quite a lot because of design deficiencies. Most of the deficiencies I find by putting my design under stress while checking for invariants. It also helps a lot to step into the code with a debugger (sometimes also Windbg). I do this much more often when my tests are red. That way I get a much better understanding of what my code really does, and not what I think it should be doing.

    This time I want to show you how code can evolve over the years with different .NET Framework versions. Once there was a time when .NET 1.1 was new and many C++ programmers switched over to get rid of uninitialized pointers and memory leaks. There were also nice new data structures available, such as the Hashtable, which is a fast lookup table with O(1) time complexity. All was good and much code was written since then. In 2005 a new version of the .NET Framework arrived, which brought many new things like generics and new data structures. The "old-fashioned" Hashtable was coming to an end and everyone used the new Dictionary<xx,xx> type instead, which was type safe and faster because the object-to-type conversion (aka boxing) was no longer necessary. I think 95% of all Hashtables and dictionaries use string as key. Often it is convenient to ignore casing to make it easy to look up values which the user did enter. An often-followed route is to convert the string to upper case before putting it into the Hashtable.

        Hashtable Table = new Hashtable();

        void Add(string key, string value)
        {
            Table.Add(key.ToUpper(), value);
        }

    This is valid and working code, but it has problems. First, we can pass the Hashtable a custom IEqualityComparer to do the string matching case-insensitively. Second, we can switch over to the (by now also old) Dictionary type to become a little faster, and we can keep the original keys (not upper-cased) in the dictionary.

        Dictionary<string, string> DictTable = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

        void AddDict(string key, string value)
        {
            DictTable.Add(key, value);
        }

    Many people do not use the other ctors of Dictionary because they shy away from the overhead of writing their own comparer. They do not know that .NET already has predefined comparers for strings at hand which you can use directly.

    Today in the many-core era we use threads all over the place. Sometimes things break in subtle ways, but most of the time it is sufficient to place a lock around the offender. Threading has become so mainstream that it may sound weird that in the year 2000 some guy got a huge incentive for the idea to reduce the time to process calibration data from 12 hours to 6 hours by using two threads on a dual-core machine. Threading makes it easy to become faster at the expense of correctness. Correct and scalable multithreading can be arbitrarily hard to achieve, depending on the problem you are trying to solve.

    Let's suppose we want to process millions of items with two threads and count the items processed by all threads. A typical beginner's code might look like this:

        int Counter;

        void IJustLearnedToUseThreads()
        {
            var t1 = new Thread(ThreadWorkMethod);
            t1.Start();
            var t2 = new Thread(ThreadWorkMethod);
            t2.Start();
            t1.Join();
            t2.Join();
            if (Counter != 2 * Increments)
                throw new Exception("Hmm " + Counter + " != " + 2 * Increments);
        }

        const int Increments = 10 * 1000 * 1000;

        void ThreadWorkMethod()
        {
            for (int i = 0; i < Increments; i++)
            {
                Counter++;
            }
        }

    It throws an exception with a message like "Hmm 10.222.287 != 20.000.000" and never finishes successfully. The code fails because the assumption that Counter++ is an atomic operation is wrong. The ++ operator is just a shortcut for

        Counter = Counter + 1

    This involves reading the counter from a memory location into the CPU, incrementing the value on the CPU, and writing the new value back to the memory location. When we look at the generated assembly code we will see only

        inc dword ptr [ecx+10h]

    which is only one instruction. Yes, it is one instruction, but it is not atomic. All modern CPUs have several layers of caches (L1, L2, L3) which try to hide how slow actual main memory accesses are. Since cache is just another word for redundant copy, it can happen that one CPU reads a value from main memory into the cache, modifies it, and writes it back to main memory. The problem is that at least the L1 cache is not shared between CPUs, so it can happen that one CPU makes changes to values which have changed in the meantime in main memory. From the exception you can see we did increment the value 20 million times, but half of the changes were lost because we overwrote the already-changed value from the other thread. This is a very common case, and people learn to protect their data with proper locking.

        void Intermediate()
        {
            var time = Stopwatch.StartNew();
            Action acc = ThreadWorkMethod_Intermediate;
            var ar1 = acc.BeginInvoke(null, null);
            var ar2 = acc.BeginInvoke(null, null);
            ar1.AsyncWaitHandle.WaitOne();
            ar2.AsyncWaitHandle.WaitOne();
            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));
            Console.WriteLine("Intermediate did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Intermediate()
        {
            for (int i = 0; i < Increments; i++)
            {
                lock (this)
                {
                    Counter++;
                }
            }
        }

    This is better and uses the .NET Threadpool to get rid of manual thread management. It gives the expected result, but it can result in deadlocks because you lock on this. This is in general a bad idea, since it can lead to deadlocks when other threads use your class instance as a lock object. It is therefore recommended to create a private object as lock object, to ensure that nobody else can lock your lock object.

    When you read more about threading you will read about lock-free algorithms. They are nice and can improve performance quite a lot, but you need to pay close attention to the CLR memory model. It makes quite weak guarantees in general, but it can still work because your CPU architecture gives you more invariants than the CLR memory model does. For a simple counter there is an easy lock-free alternative present with the Interlocked class in .NET. As a general rule you should not try to write lock-free algos, since most likely you will fail to get them right on all CPU architectures.

        void Experienced()
        {
            var time = Stopwatch.StartNew();
            Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
            Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
            t1.Wait();
            t2.Wait();
            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));
            Console.WriteLine("Experienced did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Experienced()
        {
            for (int i = 0; i < Increments; i++)
            {
                Interlocked.Increment(ref Counter);
            }
        }

    Since time moves forward, we no longer use threads explicitly but the much nicer Task abstraction, which was introduced with .NET 4 in 2010. It is educational to look at the generated assembly code. The Interlocked.Increment method must be called, which does wondrous things, right? Let's see:

        lock inc dword ptr [eax]

    The first thing to note is that there is no method call at all. Why? Because the JIT compiler knows very well about CPU intrinsic functions. Atomic operations, which lock the memory bus to prevent other processors from reading stale values, are such things. Second: this is the same increment call, prefixed with a lock instruction. The only reason for the existence of the Interlocked class is that the JIT compiler can compile it to the matching CPU intrinsic functions, which can not only increment by one but can also do an add, an exchange, and a combined compare-and-exchange operation. But be warned that the correct usage of its methods can be tricky. If you try to be clever and look at the generated IL code and try to reason about its efficiency, you will fail. Only the generated machine code counts.

    Is this the best code we can write? Perhaps. It is nice and clean. But can we make it any faster? Let's see how well we are doing currently:

        Level                      Time in s
        IJustLearnedToUseThreads   Flawed code
        Intermediate               1.5 (lock)
        Experienced                0.3 (Interlocked.Increment)
        Master                     0.1 (1.0 for int[2])

    That lock-free thing is really a nice thing. But if you read more about CPU caches, cache coherency, and false sharing, you can do even better.

        int[] Counters = new int[12]; // Cache line size is 64 bytes on my machine with an 8-way associative cache; try for yourself, e.g. 64 on more modern CPUs

        void Master()
        {
            var time = Stopwatch.StartNew();
            Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Master, 0);
            Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Master, Counters.Length - 1);
            t1.Wait();
            t2.Wait();
            Counter = Counters[0] + Counters[Counters.Length - 1];
            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));
            Console.WriteLine("Master did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Master(object number)
        {
            int index = (int)number;
            for (int i = 0; i < Increments; i++)
            {
                Counters[index]++;
            }
        }

    The key insight here is to give each core its own value. But if you simply use an integer array of two items, one for each core, and add the items at the end, you will be much slower than the lock-free version (factor 3). Each CPU core has its own cache line size, which is something in the range of 16-256 bytes. When you access a value from one location, the CPU does not fetch only one value from main memory but a complete cache line (e.g. 16 bytes). This means that you do not pay for the next 15 bytes when you access them. This can lead to dramatic performance improvements and to non-obvious code which is faster although it has many more memory reads than another algorithm.

    So what have we done here? We started with correct code, but it was lacking knowledge of how to use the .NET Base Class Libraries optimally. Then we tried to get fancy and used threads for the first time, and failed. Our next try was better, but it still had non-obvious issues (a lock object exposed to the outside). Knowledge increased further, and we found a lock-free version of our counter, which is a nice and clean way and a perfectly valid solution. The last example is only here to show you how you can get the most out of threading by paying close attention to your data structures and to CPU cache coherency. Although we are working in a virtual execution environment, in a high-level language with automatic memory management, it pays off to know the details down to the assembly level. Only if you continue to learn and to dig deeper can you come up with solutions no one else was even considering. I have studied particle physics, which helps with the digging-deeper part. Have you ever tried to solve Quantum Chromodynamics equations? Compared to that, the rest must be easy ;-). Although I am no longer working in the science field, I take pride in discovering non-obvious things. This can be a very hard-to-find bug or a new way to restructure data to make something 10 times faster. Now I need to get some sleep ....

  • HELP: MS Virtual Disk Service to Access Volumes and Disks on Local Machine

    - by Jibran Ahmed
    Hi, here is my code, through which I successfully initialize the VDS service and get the packs. But when I call QueryVolumes on the IVdsPack object, I am able to get IEnumVdsObject but unable to get the IUnknown* array through the IEnumVdsObject::Next method; it returns S_FALSE with IUnknown* = NULL. So this IUnknown* can't be used to QueryInterface for IVdsVolume. Below is my code:

        HRESULT hResult;
        IVdsService* pService = NULL;
        IVdsServiceLoader* pLoader = NULL;

        // Launch the VDS service
        hResult = CoInitialize(NULL);
        if (SUCCEEDED(hResult))
        {
            hResult = CoCreateInstance(CLSID_VdsLoader, NULL, CLSCTX_LOCAL_SERVER,
                                       IID_IVdsServiceLoader, (void**)&pLoader);

            // If succeeded, load VDS on the local machine
            if (SUCCEEDED(hResult))
                pLoader->LoadService(NULL, &pService);

            // Done with loader; now release the VDS loader interface
            _SafeRelease(pLoader);

            if (SUCCEEDED(hResult))
            {
                hResult = pService->WaitForServiceReady();
                if (SUCCEEDED(hResult))
                {
                    AfxMessageBox(L"VDS Service Loaded");

                    IEnumVdsObject* pEnumVdsObject = NULL;
                    hResult = pService->QueryProviders(VDS_QUERY_SOFTWARE_PROVIDERS, &pEnumVdsObject);

                    IUnknown* ppObjUnk;
                    IVdsSwProvider* pVdsSwProvider = NULL;
                    IVdsPack* pVdsPack = NULL;
                    IVdsVolume* pVdsVolume = NULL;
                    ULONG ulFetched = 0;

                    hResult = E_INVALIDARG;
                    while (!SUCCEEDED(hResult))
                    {
                        hResult = pEnumVdsObject->Next(1, &ppObjUnk, &ulFetched);
                        hResult = ppObjUnk->QueryInterface(IID_IVdsSwProvider, (void**)&pVdsSwProvider);
                        if (!SUCCEEDED(hResult)) _SafeRelease(ppObjUnk);
                    }
                    _SafeRelease(pEnumVdsObject);
                    _SafeRelease(ppObjUnk);

                    hResult = pVdsSwProvider->QueryPacks(&pEnumVdsObject);
                    hResult = E_INVALIDARG;
                    while (!SUCCEEDED(hResult))
                    {
                        hResult = pEnumVdsObject->Next(1, &ppObjUnk, &ulFetched);
                        hResult = ppObjUnk->QueryInterface(IID_IVdsPack, (void**)&pVdsPack);
                        if (!SUCCEEDED(hResult)) _SafeRelease(ppObjUnk);
                    }
                    _SafeRelease(pEnumVdsObject);
                    _SafeRelease(ppObjUnk);

                    hResult = pVdsPack->QueryVolumes(&pEnumVdsObject);
                    pEnumVdsObject->Reset();
                    hResult = E_INVALIDARG;
                    ulFetched = 0;
                    BOOL bDone = FALSE;
                    while (!SUCCEEDED(hResult))
                    {
                        hResult = pEnumVdsObject->Next(1, &ppObjUnk, &ulFetched);
                        //hResult = ppObjUnk->QueryInterface(IID_IVdsVolume, (void**)&pVdsVolume);
                        if (!SUCCEEDED(hResult)) _SafeRelease(ppObjUnk);
                    }
                    _SafeRelease(pEnumVdsObject);
                    _SafeRelease(ppObjUnk);
                    _SafeRelease(pVdsPack);
                    _SafeRelease(pVdsSwProvider);

                    // hResult = pVdsVolume->AddAccessPath(TEXT("G:\\"));
                    if (SUCCEEDED(hResult))
                        AfxMessageBox(L"Add Access Path Successfully");
                    else
                        AfxMessageBox(L"Unable to Add access path");

                    // UUID of IVdsVolumeMF {EE2D5DED-6236-4169-931D-B9778CE03DC6}
                    static const GUID GUID_IVdsVolumeMF =
                        {0xEE2D5DED, 0x6236, 0x4169, {0x93, 0x1D, 0xB9, 0x77, 0x8C, 0xE0, 0x3D, 0xC6}};
                    hResult = pService->GetObject(GUID_IVdsVolumeMF, VDS_OT_VOLUME, &ppObjUnk);
                    if (hResult == VDS_E_OBJECT_NOT_FOUND)
                        AfxMessageBox(L"Object Not found");
                    if (hResult == VDS_E_INITIALIZED_FAILED)
                        AfxMessageBox(L"Initialization failed");

                    // pVdsVolume = reinterpret_cast<IVdsVolume*>(ppObjUnk);
                    if (SUCCEEDED(hResult))
                    {
                        // hResult = pVdsVolume->AddAccessPath(TEXT("G:\\"));
                        if (SUCCEEDED(hResult))
                        {
                            IVdsAsync* ppVdsSync;
                            AfxMessageBox(L"Formatting is about to Start......");
                            // hResult = pVdsVolume->Format(VDS_FST_UDF, TEXT("UDF_FORMAT_TEST"), 2048, TRUE, FALSE, FALSE, &ppVdsSync);
                            if (SUCCEEDED(hResult))
                                AfxMessageBox(L"Formatting Started.......");
                            else
                                AfxMessageBox(L"Formatting Failed");
                        }
                        else
                            AfxMessageBox(L"Unable to Add Access Path");
                    }
                    _SafeRelease(pVdsVolume);
                }
                else
                {
                    AfxMessageBox(L"VDS Service Cannot be Loaded");
                }
            }
        }
        _SafeRelease(pService);

  • Why is WCF Stream response getting corrupted on write to disk?

    - by Alvin S
    I want to write a WCF web service that can send files over the wire to the client, so I have one set up that sends a Stream response. Here is my code on the client:

        private void button1_Click(object sender, EventArgs e)
        {
            string filename = System.Environment.CurrentDirectory + "\\Picture.jpg";
            if (File.Exists(filename))
                File.Delete(filename);

            StreamServiceClient client = new StreamServiceClient();
            int length = 256;
            byte[] buffer = new byte[length];
            FileStream sink = new FileStream(filename, FileMode.CreateNew, FileAccess.Write);
            Stream source = client.GetData();
            int bytesRead;
            while ((bytesRead = source.Read(buffer, 0, length)) > 0)
            {
                sink.Write(buffer, 0, length);
            }
            source.Close();
            sink.Close();
            MessageBox.Show("All done");
        }

    Everything processes fine with no errors or exceptions. The problem is that the .jpg file that gets transferred is reported as being "corrupted or too large" when I open it. What am I doing wrong? On the server side, here is the method that is sending the file:

        public Stream GetData()
        {
            string filename = Environment.CurrentDirectory + "\\Chrysanthemum.jpg";
            FileStream myfile = File.OpenRead(filename);
            return myfile;
        }

    I have the server configured with basicHttp binding and TransferMode.StreamedResponse.
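
    A likely culprit, going by the snippet alone: the loop writes length (256) bytes on every pass, but the final Read usually returns fewer, so the output file gets padded with stale buffer contents and no longer matches the JPEG data. A corrected loop (only the count passed to Write changes) might look like:

        // Write only the bytes actually read, so the final partial chunk
        // doesn't append leftover data from the previous iteration.
        int bytesRead;
        while ((bytesRead = source.Read(buffer, 0, length)) > 0)
        {
            sink.Write(buffer, 0, bytesRead); // was: sink.Write(buffer, 0, length)
        }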

  • Why do I lose my javascript from the browser cache after a full page postback?

    - by burak ozdogan
    Hi, I have an external JavaScript file which I include in my page in the code-behind (as seen below). My problem is, when my page makes a postback (not a partial one), I check the loaded scripts using Firebug, and I cannot see the JavaScript file in the list after the postback. I assumed that once it is included in the page on the first load, the browser would cache it, so that I do not need to re-include it. What am I doing wrong? The way I include the script is here:

        protected override void OnInit(EventArgs e)
        {
            if (this.Page.IsPostBack == false)
            {
                if (this.Page.ClientScript.IsClientScriptIncludeRegistered("ctlPalletDetail") == false)
                {
                    string guidParamToHackBrowserCaching = System.Guid.NewGuid().ToString();
                    this.Page.ClientScript.RegisterClientScriptInclude(
                        "ctlPalletDetail",
                        ResolveUrl(String.Format("~/clientScripts/ctlLtlRequestDetail.js?par={0}", guidParamToHackBrowserCaching)));
                }
            }
            base.OnInit(e);
        }
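
    One probable explanation, assuming standard WebForms behavior: RegisterClientScriptInclude only emits the <script src> tag into the response it is called on; the browser caches the .js file itself, not the tag, so skipping registration on postback means the re-rendered page contains no reference to the script at all (and the fresh GUID per request defeats browser caching of the file anyway). A sketch that registers on every request:

        protected override void OnInit(EventArgs e)
        {
            // Register unconditionally: the include tag must be part of every
            // rendered response. The browser's cache only avoids re-downloading
            // the .js file, so keep the URL stable (no GUID cache-buster).
            if (!Page.ClientScript.IsClientScriptIncludeRegistered("ctlPalletDetail"))
            {
                Page.ClientScript.RegisterClientScriptInclude(
                    "ctlPalletDetail",
                    ResolveUrl("~/clientScripts/ctlLtlRequestDetail.js"));
            }
            base.OnInit(e);
        }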

  • Does the asp.net RoleManager really cache the roles for a user in a cookie if so configured?

    - by Ralph Shillington
    In my web.config I have the role manager configured as follows:

        <roleManager enabled="true"
                     cacheRolesInCookie="true"
                     cookieName=".ASPROLES"
                     cookieTimeout="30"
                     cookiePath="/"
                     cookieRequireSSL="false"
                     cookieSlidingExpiration="true"
                     cookieProtection="All">

    However, in our custom RoleProvider it seems that the GetRolesForUser method is always being called, rather than, as I would have expected, the role manager serving up the roles from its cookie. We're using something like this to get the roles for a user:

        string[] myroles = Roles.GetRolesForUser("myuser");

    Is there something that I'm missing in the configuration, or in the use of the role manager? Thanks in advance.
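
    One detail worth checking, assuming the standard RoleManagerModule pipeline: the role cookie is only consulted through the RolePrincipal attached to the current request's user, and passing an explicit user name to Roles.GetRolesForUser generally goes straight to the provider. A sketch of reading the cached roles instead (RolePrincipal lives in System.Web.Security):

        // Roles cached in the cookie surface via the RolePrincipal that the
        // role manager attaches to the request; GetRoles()/IsInRole() read
        // that cache rather than calling the provider each time.
        var rp = HttpContext.Current.User as RolePrincipal;
        string[] myroles = (rp != null)
            ? rp.GetRoles()            // served from the cookie cache when valid
            : Roles.GetRolesForUser(); // parameterless = current user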

  • How to send Sound Stream of a file from disk over network using FMOD?

    - by chris
    Hey everyone, I'm currently working on a project in college. My application should do some things with audio files from my computer; I'm using FMOD as the sound library. The problem I have is that I don't know how to access the data of a sound file (which was opened and started using the FMOD methods) in order to stream it over the network for playback on another PC on the net. Has anyone had a similar problem? Any help is appreciated. Thanks in advance. chris

  • Is it possible to output cache by host name? i.e. VaryByHost or VaryByHostHeader?

    - by Pure.Krome
    Hi folks, I've got a website that has a number of host headers. Depending on the host header, the results are different, both visually (themed) and in data. So let's imagine I have a website called 'Foo' that returns search results (original, eh?). Now, the same code runs both sites. It is physically the same server/website (using host headers):

        www.foo.com
        www.foo.com.au

    Now, if I go to the .com site, it is themed in blue. If I go to the .com.au site, it's themed in red. And the data is different for the same search, based on the host name (i.e. US results for .com, AU results for .com.au). SO... if I wish to use output caching, can it be varied by the host name? I don't want the first person to hit the .com site and grab the results... and then a second person hitting my .com.au site with the same search... getting the theme and results for the .com site. Possible?
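
    One way to do this with ASP.NET output caching is VaryByCustom, keyed on the host name. A sketch (the "host" key is an arbitrary name chosen here):

        // In Global.asax.cs: map the custom vary key "host" to the request's host.
        public override string GetVaryByCustomString(HttpContext context, string custom)
        {
            if (string.Equals(custom, "host", StringComparison.OrdinalIgnoreCase))
                return context.Request.Url.Host.ToLowerInvariant(); // one cache entry per host header
            return base.GetVaryByCustomString(context, custom);
        }

    Then each cacheable page opts in with <%@ OutputCache Duration="60" VaryByParam="*" VaryByCustom="host" %>, so foo.com and foo.com.au get separate cache entries for the same search.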

  • Is it a good idea to cache data from web services into a database?

    - by Thierry Lam
    Let's assume that Stack Overflow offers web services where you can retrieve all the questions asked by a specific user. A request to get all questions from user A can result in the following JSON output:

        [
            {
                "question": "What is rest?",
                "date_created": "20/02/2010",
                "votes": 1
            },
            {
                "question": "Which database to use for ...",
                "date_created": "20/07/2009",
                "votes": 5
            }
        ]

    If I want to manipulate and present the data in any way that I want, would it be wise to dump it in a local database? At some point, I will also want to retrieve all answers for each question and store them in a local database. The workflow that I'm thinking of is:

        1. User logs in.
        2. Web services retrieve all questions asked by the logged-in user; dump them in a local database.
        3. User wants all answers for a specific question; another web service does the retrieval and dumps them in a local database.
        4. After the user logs out, delete from the local database all questions and answers from that user.
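
    If the data only needs to survive a session rather than process restarts, an in-memory cache with an expiration can be simpler than a local database plus a manual delete-on-logout step. A sketch using ASP.NET's cache (the key scheme and the Question type are made up for illustration):

        // using System.Web; using System.Web.Caching;
        // Cache the parsed questions per user; the expiration replaces the
        // "delete everything on logout" step.
        HttpRuntime.Cache.Insert(
            "questions:" + userName,        // hypothetical cache key
            questions,                      // e.g. a List<Question> parsed from the JSON
            null,                           // no cache dependency
            DateTime.UtcNow.AddMinutes(30), // absolute expiration
            Cache.NoSlidingExpiration);

    A local database still makes sense if the data should outlive the process or be queried with SQL.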

  • Drawbacks with reading and writing a big file to or from disk at once instead of small chunks?

    - by Johann Gerell
    I work mainly on Windows and Windows CE based systems, where CreateFile, ReadFile and WriteFile are the workhorses, no matter if I'm in native Win32 land or in managed .NET land. I have so far never had any obvious problem writing or reading big files in one chunk, as opposed to looping until several smaller chunks are processed. I usually delegate the IO work to a background thread that notifies me when it's done. But looking at file IO tutorials or "textbook examples", I often find the "loop with small chunks" used without any explanation of why it's used instead of the more obvious (I dare say!) "do it all at once". Are there any drawbacks to the way I do it that I haven't understood?
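
    For comparison, the textbook chunked copy looks like the sketch below; its usual justification is bounded peak memory (one reusable buffer instead of a file-sized array, which matters on Windows CE and for multi-GB files where a single contiguous allocation can simply fail), plus natural points for progress reporting and cancellation, rather than raw throughput:

        // using System.IO; sourcePath/destPath are placeholders.
        using (FileStream source = File.OpenRead(sourcePath))
        using (FileStream sink = File.Create(destPath))
        {
            byte[] buffer = new byte[64 * 1024]; // peak memory stays 64 KB regardless of file size
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                sink.Write(buffer, 0, read);
        }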

  • Cache inferno - how can Model.A be two different things at the same time?

    - by Martin
    I have this:

        <%=Model.StartDate%>
        <%=Html.Hidden("StartDate", Model.StartDate)%>

    It outputs:

        2010-05-11 11:00:00 +01:00
        <input type="hidden" value="2010-03-17 11:00:00 +01:00" name="StartDate" id="StartDate">

    What the... It's a paging mechanism, so the hidden value was valid on the first page and I've been able to move forward to the next page. But since the values won't update properly, it ends there. What do I need to do? Using Firefox.
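
    A plausible explanation, assuming default ASP.NET MVC behavior: the HTML helpers (Html.Hidden included) look in ModelState first and only fall back to the value you pass when no entry exists, so after a post the helper re-renders the previously posted StartDate instead of the updated model value, while <%=Model.StartDate%> bypasses ModelState entirely. A sketch of clearing the stale entry in the action before returning the view:

        // Drop the posted value so Html.Hidden("StartDate", ...) renders
        // the model's current StartDate instead of the old ModelState entry.
        ModelState.Remove("StartDate");
        return View(model);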

  • How do we know if a query is cached or retrieved from the database?

    - by Hadi
    For example:

        class Product
          has_many :sales_orders

          def total_items_deliverable
            self.sales_orders.each { |so| ... }  # sum the total
            # give back the value
          end
        end

        class SalesOrder
          def self.deliverable
            # return array of sales_orders that are deliverable to customer
          end
        end

        SalesOrder.deliverable        # (1) all sales_orders that are deliverable to the customer
        pa = Product.find(1)          # (2)
        pa.sales_orders.deliverable   # (3) all deliverable sales_orders whose product_id is 1
        pa.total_so_deliverable       # (4)

    The very point I'm going to ask about is: how many times is SalesOrder.deliverable actually computed? From points 1, 3, and 4, it is computed 3 times, which means 3 database accesses; so having total_so_deliverable promotes a fat model but costs more database access. Alternatively (in the view) I could iterate while displaying the content, so I end up accessing the database only 2 times instead of 3. Any win-win solution / best practice for this kind of problem?

  • Good way to cache data during Android application lifecycle?

    - by sniurkst
    Hello, keeping my question short: I am creating an application with 3 activities, where A = list of categories, B = list of items, C = single item. Data displayed in B and C is parsed from online XML. But if I go through A - B1 - C, then back to A and then back to B1, I would like to have its data cached somewhere so I wouldn't have to request the XML again. I'm new to Android and Java programming; I've googled a lot and still can't find (or simply do not have an idea where to look for) a way to do what I want. Would storing all received data in the main activity A (HashMaps? ContentProviders?) and then passing it to B and C (if they get the same request as before) be a good idea?

  • Using Python, what's the best way to create a set of files on disk for testing?

    - by Chris R
    I'm looking for a way to create a tree of test files to unit test a packaging tool. Basically, I want to create some common file system structures -- directories, nested directories, symlinks within the selected tree, symlinks outside the tree, &c. Ideally I want to do this with as little boilerplate as possible. Of course, I could hand-write the set of files I want to see, but I'm thinking that somebody has to have automated this for a test suite somewhere. Any suggestions?

  • How to clear cache for previously installed InfoPath forms on a client computer?

    - by user313067
    Hi folks, we recently had a strange issue with an InfoPath 2007 form being opened from SharePoint 2007 and receiving the error message "the system cannot find the file specified". To be clear, this was not a Forms Services-enabled form. Anyway, after spending way too much time trying to figure out what was going on (nothing in the MOSS 2007 server log files), we determined that the user had previously installed an older version of the form (but with the same name) on their workstation using a no-longer-available MSI file (meaning we could not uninstall it from the workstation). So I wanted to pass on a very simple solution for anyone who is unfortunate enough to run into this problem in the future (since I lost a great deal of hair over it): Fire up regedit and go to

        HKEY_LOCAL_MACHINE\Software\Microsoft\Office\InfoPath\SolutionsCatalog

    Locate the key that has the previously installed form's name, and delete it. This will stop InfoPath from trying to open the form locally (which is either old or doesn't exist) and force it to open your form from SharePoint. Hope this helps someone!
