Search Results

Search found 159 results on 7 pages for 'contiguous'.

Page 5/7 | < Previous Page | 1 2 3 4 5 6 7  | Next Page >

  • Why are these divs repelling each other?

    - by Terminal Frost
    <html> <div style="width:200px;"> <div style="background:red;height:5px"></div> <div style="background:yellow"> Magnets? </div> <div style="background:green;height:5px"></div> </div> </html> Rendering with "Magnets?" wrapped in h3 tags. Why do the divs cease to be contiguous if "Magnets?" is wrapped in a paragraph or heading tag?

    Read the article

  • Need advice with HTML table

    - by misha-moroshko
    I would like to code an HTML table with messages like this: The table will contain messages that spread over the first N columns (N may change). Let's call these N columns the message area. Each message occupies X contiguous cells in the message area. X may also change. Each message has a name that contains words separated by underscores. How would you recommend coding this table in JavaScript/jQuery such that: It would be easy to define a message (start cell, end cell, color, name) The name will break only after underscores (rather than in the middle of a word)

    Read the article

  • Java cannot reserve heap size error on Windows server

    - by Prad
    Hi, I have the following configuration: Server: Windows Server 2003 (32-bit) Java version: 1.5_0_22 I get the following error when executing from the command line (my code is based off Eclipse, which gives the same error): java -XX:MaxPermSize=256m -Xmx512m Error occurred during initialization of VM Could not reserve enough space for object heap Could not create the Java virtual machine. The server has over 20 GB of physical memory with over 19 GB free right now. It does not give an error up to -Xmx486m. I have read other articles about contiguous memory space. There is hardly anything running on this server. Can I validate this in any way? Thanks
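
    One way to check the contiguous-address-space theory is a small probe that bisects for the largest single block VirtualAlloc can reserve in a fresh 32-bit process. This is only a rough sketch, not a supported diagnostic, and reservations are not strictly monotonic, so treat the result as an estimate:

    #include <windows.h>
    #include <stdio.h>

    /* Probe the largest contiguous region this 32-bit process can reserve;
       the JVM needs roughly -Xmx worth of contiguous space at start-up. */
    int main(void)
    {
        SIZE_T lo = 0, hi = (SIZE_T)2048 * 1024 * 1024;   /* search up to 2 GB */
        while (hi - lo > 1024 * 1024) {                    /* 1 MB resolution */
            SIZE_T mid = lo + (hi - lo) / 2;
            void *p = VirtualAlloc(NULL, mid, MEM_RESERVE, PAGE_NOACCESS);
            if (p) {
                VirtualFree(p, 0, MEM_RELEASE);
                lo = mid;                                  /* fits, try larger */
            } else {
                hi = mid;                                  /* too big, try smaller */
            }
        }
        printf("Largest contiguous reservable block: ~%lu MB\n",
               (unsigned long)(lo / (1024 * 1024)));
        return 0;
    }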

    Read the article

  • Loop through non-integer rows using SQL

    - by Jesse
    I know how to accomplish my task with .NET, but I wanted to do this just in SQL. I need to loop through all of the rows where the primary key is somewhat arbitrary. It can be a number or a series of letters, and probably any number of unusual things. I know I could do something like this... DECLARE @numRows INT SET @numRows = (SELECT COUNT(pkField) FROM myTable) DECLARE @I INT SET @I = 1 WHILE (@I <= @numRows) BEGIN --Do what I need to here SET @I = @I + 1 END ...if my rows were indexed in a contiguous fashion, but I don't know enough about SQL to do that if they're not. I keep coming across the use of "cursors," but I come across just as much reading about avoiding cursors. I found this SO solution but I'm not sure if that's what I'm needing? I appreciate any ideas.

    Read the article

  • Why do apache2 upgrades remove and not re-install libapache2-mod-php5?

    - by nutznboltz
    We repeatedly see that when an apache2 update arrives and is installed, it removes the libapache2-mod-php5 package and does not subsequently re-install it automatically. We must then re-install libapache2-mod-php5 manually in order to restore functionality to our web server. Please see the following GitHub gist; it is a contiguous section of our server's dpkg.log showing the November 14, 2011 update to apache2: https://gist.github.com/1368361 It includes 2011-11-14 11:22:18 remove libapache2-mod-php5 5.3.2-1ubuntu4.10 5.3.2-1ubuntu4.10 Is this a known issue? Do other people see this too? I could not find any Launchpad bug reports about it. Platform details: $ lsb_release -ds Ubuntu 10.04.3 LTS $ uname -srvm Linux 2.6.38-12-virtual #51~lucid1-Ubuntu SMP Thu Sep 29 20:27:50 UTC 2011 x86_64 $ dpkg -l | awk '/ii.*apache/ {print $2 " " $3 }' apache2 2.2.14-5ubuntu8.7 apache2-mpm-prefork 2.2.14-5ubuntu8.7 apache2-utils 2.2.14-5ubuntu8.7 apache2.2-bin 2.2.14-5ubuntu8.7 apache2.2-common 2.2.14-5ubuntu8.7 libapache2-mod-authnz-external 3.2.4-2+squeeze1build0.10.04.1 libapache2-mod-php5 5.3.2-1ubuntu4.10 Thanks. At a high level the update process looks like: package package_name do action :upgrade case node[:platform] when 'centos', 'redhat', 'scientific' options '--disableplugin=fastestmirror' when 'ubuntu' options '-o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold"' end end But at a lower level def install_package(name, version) run_command_with_systems_locale( :command => "apt-get -q -y#{expand_options(@new_resource.options)} install #{name}=#{version}", :environment => { "DEBIAN_FRONTEND" => "noninteractive" } ) end def upgrade_package(name, version) install_package(name, version) end So Chef is using "install" to do "update". This sort of moves the question around to "how does apt-get safe-upgrade remember to re-install libapache2-mod-php5?" The exact sequence of packages that triggered this was: apache2 apache2-mpm-prefork apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common But the code runs checks to make sure the packages in that list are already installed before attempting to "upgrade" them. case node[:platform] when 'debian', 'centos', 'fedora', 'redhat', 'scientific', 'ubuntu' # first primitive way is to define the updates in the recipe # data bags will be used later %w/ apache2 apache2-mpm-prefork apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common /.each{ |package_name| Chef::Log.debug("is #{package_name} among local packages available for changes?") next unless node[:packages][:changes].keys.include?(package_name) Chef::Log.debug("is #{package_name} available for upgrade?") next unless node[:packages][:changes][package_name][:action] == 'upgrade' package package_name do action :upgrade case node[:platform] when 'centos', 'redhat', 'scientific' options '--disableplugin=fastestmirror' when 'ubuntu' options '-o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold"' end end tag('upgraded') } # after upgrading everything, run yum cache updater if tagged?('upgraded') # Remove old orphaned dependencies and kernel images and kernel headers etc. # Remove cached deb files. case node[:platform] when 'ubuntu' execute 'apt-get -y autoremove' execute 'apt-get clean' # Re-check what updates are available soon. 
when 'centos', 'fedora', 'redhat', 'scientific' node[:packages][:last_time_we_looked_at_yum] = 0 end untag('upgraded') end end But it's clear that it fails since the dpkg.log has 2011-11-14 11:22:25 install apache2-mpm-worker 2.2.14-5ubuntu8.7 on a system which does not currently have apache2-mpm-worker. I will have to discuss this with the author, thanks again.

    Read the article

  • NFS server generating "invalid extent" on EXT4 system disk?

    - by Stephen Winnall
    I have a server running Xen 4.1 with Oneiric in the dom0 and each of the 4 domUs. The system disks of the domUs are LVM2 volumes built on top of an mdadm RAID1. All the domU system disks are EXT4 and are created using snapshots of the same original template. 3 of them run perfectly, but one (called s-ub-02) keeps on being remounted read-only. A subsequent e2fsck results in a single "invalid extent" diagnosis: e2fsck 1.41.14 (22-Dec-2010) /dev/domu/s-ub-02-root contains a file system with errors, check forced. Pass 1: Checking inodes, blocks, and sizes Inode 525418 has an invalid extent (logical block 8959, invalid physical block 0, len 0) Clear<y>? yes Pass 2: Checking directory structure Pass 3: Checking directory connectivity Pass 4: Checking reference counts Pass 5: Checking group summary information /dev/domu/s-ub-02-root: 77757/655360 files (0.3% non-contiguous), 360592/2621440 blocks The console shows typically the following errors for the system disk (xvda2): [101980.903416] EXT4-fs error (device xvda2): ext4_ext_find_extent:732: inode #525418: comm apt-get: bad header/extent: invalid extent entries - magic f30a, entries 12, max 340(340), depth 0(0) [101980.903473] EXT4-fs (xvda2): Remounting filesystem read-only I have created new versions of the system disk. The same thing always happens. This, and the fact that the disk is ultimately on a RAID1, leads me to preclude a hardware disk error. The only obvious distinguishing feature of this domU is the presence of nfs-kernel-server, so I suspect that. Its exports file looks like this: /exports/users 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check) /exports/media/music 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check) /exports/media/pictures 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check) /exports/opt 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check) /exports/users and /exports/opt are LVM2 volumes from the same volume group as the system disk. /exports/media is an EXT2 volume. (There is an issue where clients see /exports/media/pictures as being a read-only volume, which I mention for completeness.) With the exception of the read-only problem, the NFS server appears to work correctly under light load for several hours before the "invalid extent" problem occurs. There are no helpful entries in /var/log. All of a sudden, no more files are written, so you can see when the disk was remounted read-only, but there is no indication of what the cause might be. Can anyone help me with this problem? Steve

    Read the article

  • Memory read/write access efficiency

    - by wolfPack88
    I've heard conflicting information from different sources, and I'm not really sure which one to believe. As such, I'll post what I understand and ask for corrections. Let's say I want to use a 2D matrix. There are three ways that I can do this (at least that I know of). 1: int i; char **matrix; matrix = malloc(50 * sizeof(char *)); for(i = 0; i < 50; i++) matrix[i] = malloc(50); 2: int i; int rowSize = 50; int pointerSize = 50 * sizeof(char *); int dataSize = 50 * 50; char **matrix; matrix = malloc(dataSize + pointerSize); char *pData = matrix + pointerSize - rowSize; for(i = 0; i < 50; i++) { pData += rowSize; matrix[i] = pData; } 3: //instead of accessing matrix[i][j] here, we would access matrix[i * 50 + j] char *matrix = malloc(50 * 50); In terms of memory usage, my understanding is that 3 is the most efficient, 2 is next, and 1 is least efficient, for the reasons below: 3: There is only one pointer and one allocation, and therefore, minimal overhead. 2: Once again, there is only one allocation, but there are now 51 pointers. This means there is 50 * sizeof(char *) more overhead. 1: There are 51 allocations and 51 pointers, causing the most overhead of all options. In terms of performance, once again my understanding is that 3 is the most efficient, 2 is next, and 1 is least efficient. Reasons being: 3: Only one memory access is needed. We will have to do a multiplication and an addition as opposed to two additions (as in the case of a pointer to a pointer), but memory access is slow enough that this doesn't matter. 2: We need two memory accesses; once to get a char *, and then to the appropriate char. Only two additions are performed here (once to get to the correct char * pointer from the original memory location, and once to get to the correct char variable from wherever the char * points to), so multiplication (which is slower than addition) is not required. However, on modern CPUs, multiplication is faster than memory access, so this point is moot. 1: Same issues as 2, but now the memory isn't contiguous. This causes cache misses and extra page table lookups, making it the least efficient of the lot. First and foremost: Is this correct? Second: Is there an option 4 that I am missing that would be even more efficient?
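
    A side note and a sketch, for what it's worth: option 2 as written needs a cast, because matrix is a char ** and matrix + pointerSize advances by pointerSize pointers rather than bytes. There is also a fourth variant worth knowing about: a pointer to an array keeps the single contiguous allocation of option 3 but lets the compiler generate the index arithmetic. A minimal sketch, assuming the dimensions are compile-time constants:

    #include <stdlib.h>

    enum { ROWS = 50, COLS = 50 };

    int main(void)
    {
        /* One contiguous block and a single free(), but the compiler does
           the row * COLS + col arithmetic: matrix[i][j] just works. */
        char (*matrix)[COLS] = malloc(ROWS * sizeof *matrix);
        if (matrix == NULL)
            return 1;

        matrix[2][3] = 'x';   /* same byte as option 3's matrix[2 * COLS + 3] */

        free(matrix);
        return 0;
    }

    The layout and cache behaviour are the same as option 3; only the indexing syntax differs.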

    Read the article

  • Java @Contended annotation to help reduce false sharing

    - by Dave
    See this posting by Aleksey Shipilev for details -- @Contended is something we've wanted for a long time. The JVM provides automatic layout and placement of fields. Usually it'll (a) sort fields by descending size to improve footprint, and (b) pack reference fields so the garbage collector can process a contiguous run of reference fields when tracing. @Contended gives the program a way to provide more explicit guidance with respect to concurrency and false sharing. Using this facility we can sequester hot, frequently written shared fields away from other mostly read-only or cold fields. The simple rule is that read-sharing is cheap, and write-sharing is very expensive. We can also pack fields together that tend to be written together by the same thread at about the same time. More generally, we're trying to influence relative field placement to minimize coherency misses. Fields that are accessed closely together in time should be placed proximally in space to promote cache locality. That is, temporal locality should condition spatial locality. Fields accessed together in time should be nearby in space. That having been said, we have to be careful to avoid false sharing and excessive invalidation from coherence traffic. As such, we try to cluster or otherwise sequester fields that tend to be written at approximately the same time by the same thread onto the same cache line. Note that there's a tension at play: if we try too hard to minimize single-threaded capacity misses then we can end up with excessive coherency misses running in a parallel environment. There's no single optimal layout for both single-threaded and multithreaded environments. And the ideal layout problem itself is NP-hard. Ideally, a JVM would employ hardware monitoring facilities to detect sharing behavior and change the layout on the fly. That's a bit difficult as we don't yet have the right plumbing to provide efficient and expedient information to the JVM. Hint: we need to disintermediate the OS and hypervisor. Another challenge is that raw field offsets are used in the unsafe facility, so we'd need to address that issue, possibly with an extra level of indirection. Finally, I'd like to be able to pack final fields together as well, as those are known to be read-only.
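
    To make the layout idea concrete, here is a hand-rolled C sketch of the kind of field sequestering @Contended automates; a 64-byte cache line and made-up field names are assumed, and C is used purely for illustration since the padding trick predates the annotation:

    #include <stdalign.h>
    #include <stdio.h>

    /* Keep each hot, frequently written field on its own cache line so a
       writer does not keep invalidating the line that holds the unrelated
       read-mostly fields (or the other writer's counter). */
    struct stats {
        long threshold;            /* read-mostly */
        long limit;                /* read-mostly */
        alignas(64) long hits;     /* hot, written constantly by one thread */
        alignas(64) long misses;   /* hot, written constantly by another thread */
    };

    int main(void)
    {
        printf("sizeof(struct stats) = %zu bytes\n", sizeof(struct stats));
        return 0;
    }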

    Read the article

  • Useful Excel keyboard shortcuts

    - by Ben Lings
    What keyboard shortcuts do you use in Excel? Things I've discovered recently and found very useful are: Shift + Space - select the current row Ctrl + Space - select the current column Ctrl + Shift + Space - select the block of contiguous cells Ctrl + + - Insert (as in the context menu). If the current row is selected, it will insert a new row. Ctrl + - - Delete (as in the context menu). If the current row is selected, it will delete the entire row. What (apart from the normal cut, copy, paste, etc.) do you use? Ctrl + 1 to open the Format dialog. Shift + F2 to add/edit a cell comment. Shift + F2, followed by Esc to select the current cell comment, which can then be moved around with the arrow keys or deleted by pressing Del. Ctrl + an arrow key to move to the last non-blank cell in a series. This is usually the edge of a table, but not if you have blank cells in the path. Pressing End followed by an arrow key does the same thing. Alt + F11 to open the VBA editor. Alt + = to start a SUM() formula and go straight to selecting cells to be summed. Ctrl + G or F5 to jump to a cell by typing its coordinates (e.g. C3) Ctrl + Home to jump to the top left, usually A1 unless you are in a frozen split view, in which case it will jump to the top left of the "data" area. Ctrl + ; and Ctrl + Shift + ; to insert the current date and time, respectively. I know Ben Lings already posted this one, but I find it indispensable.

    Read the article

  • Wiping Deleted Directory Entries and Defragmenting Directories

    - by Synetech inc.
    Hi, I have seen plenty of apps that wipe free space on a disk (usually by creating a file that is as big as the remaining space) or defragment a file (usually by using the MoveFile API to copy it to a new contiguous area). What I have not seen however is a program that wipes the deleted directory entries. That is, when a file is deleted, its information (name, dates, etc.) remain in the directory, but are simply marked as empty. That leaves all kinds of information in a directory entry, and also wastes space since (at least on FAT drives), the directory may be using several clusters. For example, if a directory once had a lot of files, it will be expanded to use another cluster which could be anywhere on the disk. This means that the directory is fragmented, and may be using more clusters than needed, possibly with 100’s of unused (ie, “deleted file”) entries between active files. Does anyone know of a program that can defragment/consolidate directories (ie, wipe unused entries, and move active entries together)? (I would really rather not have to resort to writing my own yet again.) Thanks a lot. EDIT Sorry, I should have said, Windows and/or DOS, for FAT*/NTFS.
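
    For background, such a tool would be walking 32-byte FAT directory entries: a deleted entry is marked by 0xE5 in the first byte of the name field, and a first byte of 0x00 marks the end of the directory. A rough sketch of the on-disk layout and the scan (classic 8.3 entries only, long-file-name entries ignored):

    #include <stdint.h>
    #include <stdio.h>

    /* Classic FAT 8.3 directory entry: 32 bytes on disk.  A wipe/defrag tool
       would zero the 0xE5 entries and pack the live ones together. */
    #pragma pack(push, 1)
    typedef struct {
        uint8_t  name[11];          /* name[0] == 0xE5: deleted, 0x00: end of dir */
        uint8_t  attr;
        uint8_t  nt_reserved;
        uint8_t  crt_time_tenth;
        uint16_t crt_time, crt_date, last_access_date;
        uint16_t first_cluster_hi;
        uint16_t wrt_time, wrt_date;
        uint16_t first_cluster_lo;
        uint32_t file_size;
    } FatDirEntry;
    #pragma pack(pop)

    /* Count the wipeable (deleted) entries in one directory cluster. */
    static size_t count_deleted(const FatDirEntry *e, size_t n)
    {
        size_t deleted = 0;
        for (size_t i = 0; i < n && e[i].name[0] != 0x00; i++)
            if (e[i].name[0] == 0xE5)
                deleted++;
        return deleted;
    }

    int main(void)
    {
        FatDirEntry dir[4] = {0};
        dir[0].name[0] = 'A';    /* live entry */
        dir[1].name[0] = 0xE5;   /* deleted entry - target of the wipe */
        dir[2].name[0] = 'B';    /* live entry */
        dir[3].name[0] = 0x00;   /* end-of-directory marker */
        printf("wipeable entries: %zu\n", count_deleted(dir, 4));
        return 0;
    }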

    Read the article

  • Domain Controller DNS Best Practice/Practical Considerations for Domain Controllers in Child Domains

    - by joeqwerty
    I'm setting up several child domains in an existing Active Directory forest and I'm looking for some conventional wisdom/best practice guidance for configuring both the DNS client settings on the child domain controllers and the DNS zone replication scope. Assuming a single domain controller in each domain, and assuming that each DC is also the DNS server for its domain (for simplicity's sake), should the child domain controller point to itself for DNS only, or should it point to some combination (primary vs. secondary) of itself and the DNS server in the parent or root domain? If a parent > child > grandchild domain hierarchy exists (with a contiguous DNS namespace), how should DNS be configured on the grandchild DC? Regarding the DNS zone replication scope: if each domain's DNS zone is stored on all DNS servers in that domain, then I'm assuming a DNS delegation from the parent to the child needs to exist and that a forwarder from the child to the parent needs to exist. With a parent > child > grandchild domain hierarchy, does each child forward to the direct parent for the direct parent's zone or to the root zone? Does the delegation occur at the direct parent zone or from the root zone? If all DNS zones are stored on all DNS servers in the forest, does that make the above questions regarding the replication scope moot? Does the replication scope have some bearing on the DNS client settings on each DC?

    Read the article

  • Intermittent Trouble Entering Hibernate on WinXP

    - by kquinn
    My personal desktop, running 32-bit Windows XP SP2 (with 4GB RAM, 2.75GB addressable, swap disabled, hiberfil.sys existing and contiguous on C:\; SP3 is not installed because SP2 has been working fine and I do not want to re-qualify with SP3 just for sheer perversity), typically gets hibernated at night. For a long time this worked great, but recently the machine has had trouble entering hibernation. Sometimes when I press my power button (configured to hibernate), the box will start the procedure for hibernating (i.e., go to the blue "Windows XP" background logo and display a message about entering hibernation), but before displaying the usual blue-on-black hibernation progress bar it will drop back to the desktop. No error messages appear, on screen or in the system log. The only record of the unsuccessful hibernation attempts is in the system log, which proudly proclaims that "The Windows Image Acquisition (WIA) service entered the running state." once per failed hibernation attempt. The problem is almost certainly resource-related: if I then close one or more applications which are running, and repeat the exact same process, the machine will hibernate perfectly. There does not appear to be a reliable high-water mark for virtual or physical memory use, below which the machine is guaranteed to hibernate; it's different every time (though typically, below about 1.1–1.4 GB memory usage seems to be where hibernate succeeds most often). Memory may not even be the relevant resource; as far as I know, it could also be handles or sockets. This behavior is relatively recent: it has only started in the last few months; before then, I could hibernate reliably no matter what the current resource use of the system. This machine claims to have hotfix Q909095 installed, but since the symptoms of my problem match KB909095 rather well, I'm suspicious that this fix is not actually working as intended. Any ideas on how to fix this or where to start debugging?

    Read the article

  • Filesystem fragmentation on the level of set of files

    - by trismarck
    A file is stored in blocks by the file system. The block is the smallest amount of data the file system can assign to store a file. The classical definition of a fragmented file is that the file is stored in blocks that are 'scattered' (that are physically non-contiguous) around the hard drive. What I want to ask about is this second type of fragmentation I've come up with. Let's suppose we install a program. This program has a great many files. When the program starts, it always loads the contents of those files sequentially. Now, even if the hard disk is defragmented, there is still a possibility that the files (but not the blocks making up each file) will be scattered on the disk and thus the program launch time will be longer. Actually, this time could get longer because of defragmentation, as the defragmentation process not only glues fragmented files together but also moves some files around to optimize the free space chunks. The questions: Is the type of fragmentation I mentioned relevant for the file system? Is it possible to remedy this kind of fragmentation, and if so, how would you do it? Also, I'm not sure whether this question belongs on superuser or serverfault (as I guess filesystem fragmentation is more important in the server environment).

    Read the article

  • Routing public IPs (each a /32) through a VPN to another server

    - by Lee S
    Hopefully the title makes sense; I have a server currently in a colo facility, with many IP addresses routed to it. They are individual IPs and not in a contiguous block. Due to vastly improved connectivity (fibre) at home, I am slowly bringing my infrastructure in-house for manageability and, eventually, cost savings. What I would like to do, though, is use the IP addresses allocated to my existing server at home. I have an IP block allocated to me on my new ISP connection, but for a couple of reasons I'd like to make use of the colo ones for now: Ease of transition - lots of domains, DNS, hard-coded IPs in programs, etc. Connectivity fallback. If my primary line goes down and switches to fallback 1 (DSL) or fallback 2 (4G), I lose access to the ISP-allocated block of IPs that are only presented on the primary WAN interface. What I'd like to achieve is for my home virtualisation server (Proxmox/Debian-based) to "dial in" to the server in the colo facility (also Proxmox/Debian) via VPN or similar, and get to use the IP addresses that currently terminate on the colo box. If the primary connection to my ISP goes down and one of the fallback routes kicks in, the VPN tunnel will just time out and then be re-established on the backup connection instead. I'm sure this is doable, but I have no idea how. I'm not afraid to get my hands dirty; I just don't really know where to start.

    Read the article

  • Restoring MBR, partition table, and boot sector of memory card without data loss ("USBC")

    - by Synetech
    Abstract I have a FAT32 memory card that when inserted into a computer causes Windows to prompt to format it. The card is definitely not supposed to be blank and has a bunch of files on it. Symptoms Using a hex-editor/disk-viewer, I examined the card and found that several sectors/clusters have been overwritten with something that has a signature of USBC at the start of the sector. Specifically, the master boot record (and partition table) is gone (hence Windows thinking the card is blank and needing to be formatted), as are the boot sectors (they have the USBC signature and a volume label of NO NAME and partition type of FAT32). Fortunately, it looks like both copies of the FAT are almost entirely intact (a few FAT entries at the start of a cluster here and there seem to be overwritten by USBC). The root directory is also nearly intact—I can see the volume label entry and subdirectory listings, but one sector is overwritten. (There are no more instances of USBC after the last one in the FAT2.) Hypothesis These observations seem to indicate some sort of virus that erases a few key filesystem structures, and then overwrites a few extra sectors here and there. Googling it seems to corroborate the idea of a virus, except that others report a file called USBC which does not apply here, and in fact, could not be possible since there is no filesystem to even see files. I cannot find any information about a virus with these symptoms, nor a removal tool. (I can't help but wonder if it is actually due to an autorun virus prevention tool.) Question I can likely fix the FAT corruption since they are mostly contiguous chains and maybe even the lost sector of the root directory, but does anyone know of a convenient way to restore or (re)create the MBR/partition table and boot sectors (without formatting or overwriting the data)?

    Read the article

  • The enterprise vendor con - connecting SSDs using SATA 2 (3Gbits), thus limiting their performance

    - by tonyrogerson
    When comparing SSD against Hard drive performance it really makes me cross when folk think comparing an array of SSD running on 3GBits/sec to hard drives running on 6GBits/second is somehow valid. In a paper from DELL (http://www.dell.com/downloads/global/products/pvaul/en/PowerEdge-PowerVaultH800-CacheCade-final.pdf) on increasing database performance using the DELL PERC H800 with Solid State Drives they compare four SSD drives connected at 3Gbits/sec against ten 10Krpm drives connected at 6Gbits [Tony slaps forehead while shouting DOH!]. It is true in the case of hard drives it probably doesn’t make much difference 3Gbit or 6Gbit because SAS and SATA are both end to end protocols rather than shared bus architecture like SCSI, so the hard drive doesn’t share bandwidth and probably can’t get near the 600MiBytes/second throughput that 6Gbit gives unless you are doing contiguous reads, in my own tests on a single 15Krpm SAS disk using IOMeter (8 worker threads, queue depth of 16 with a stripe size of 64KiB, an 8KiB transfer size on a drive formatted with an allocation size of 8KiB for a 100% sequential read test) I only get 347MiBytes per second sustained throughput at an average latency of 2.87ms per IO equating to 44.5K IOps, ok, if that was 3GBits it would be less – around 280MiBytes per second, oh, but wait a minute [...fingers tap desk] You’ll struggle to find in the commodity space an SSD that doesn’t have the SATA 3 (6GBits) interface, SSD’s are fast not only low latency and high IOps but they also offer a very large sustained transfer rate, consider the OCZ Agility 3 it so happens that in my masters dissertation I did the same test but on a difference box, I got 374MiBytes per second at an average latency of 2.67ms per IO equating to 47.9K IOps – cost of an 240GB Agility 3 is £174.24 (http://www.scan.co.uk/products/240gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-500mb-s-85k-iops), but that same drive set in a box connected with SATA 2 (3Gbits) would only yield around 280MiBytes per second thus losing almost 100MiBytes per second throughput and a ton of IOps too. So why the hell are “enterprise” vendors still only connecting SSD’s at 3GBits? Well, my conspiracy states that they have no interest in you moving to SSD because they’ll lose so much money, the argument that they use SATA 2 doesn’t wash, SATA 3 has been out for some time now and all the commodity stuff you buy uses it now. Consider the cost, not in terms of price per GB but price per IOps, SSD absolutely thrash Hard Drives on that, it was true that the opposite was also true that Hard Drives thrashed SSD’s on price per GB, but is that true now, I’m not so sure – a 300GByte 2.5” 15Krpm SAS drive costs £329.76 ex VAT (http://www.scan.co.uk/products/300gb-seagate-st9300653ss-savvio-15k3-25-hdd-sas-6gb-s-15000rpm-64mb-cache-27ms) which equates to £1.09 per GB compared to a 480GB OCZ Agility 3 costing £422.10 ex VAT (http://www.scan.co.uk/products/480gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-410mb-s-30k-iops) which equates to £0.88 per GB. Ok, I compared an “enterprise” hard drive with a “commodity” SSD, ok, so things get a little more complicated here, most “enterprise” SSD’s are SLC and most commodity are MLC, SLC gives more performance and wear, I’ll talk about that another day. For now though, don’t get sucked in by vendor marketing, SATA 2 (3Gbit) just doesn’t cut it, SSD need 6Gbit to breath and even that SSD’s are pushing. 
Alas, SSD’s are connected using SATA so all the controllers I’ve seen thus far from HP and DELL only do SATA 2 – deliberate? Well, I’ll let you decide on that one.

    Read the article

  • MySQL Server 5.6 defaults changes

    - by user12626240
    We're improving the MySQL Server defaults, as announced by Tomas Ulin at MySQL Connect. Here's what we're changing:  Setting  Old  New  Notes back_log  50  50 + ( max_connections / 5 ) capped at 900 binlog_checksum  off  CRC32  New variable in 5.6 binlog_row_event_max_size  1k  8k flush_time  1800  Windows changes from 1800 to 0  Was already 0 on other platforms host_cache_size  128  128 + 1 for each of the first 500 max_connections + 1 for every 20 max_connections over 500, capped at 2000  New variable in 5.6 innodb_autoextend_increment  8  64  Now affects *.ibd files. 64 is 64 megabytes innodb_buffer_pool_instances  0  8. On 32 bit Windows only, if innodb_buffer_pool_size is greater than 1300M, default is innodb_buffer_pool_size / 128M innodb_concurrency_tickets  500  5000 innodb_file_per_table  off  on innodb_log_file_size  5M  48M  InnoDB will always change size to match my.cnf value. Also see innodb_log_compressed_pages and binlog_row_image innodb_old_blocks_time 0  1000 1 second innodb_open_files  300  300; if innodb_file_per_table is ON, higher of table_open_cache or 300 innodb_purge_batch_size  20  300 innodb_purge_threads  0  1 innodb_stats_on_metadata  on  off join_buffer_size 128k  256k max_allowed_packet  1M  4M max_connect_errors  10  100 open_files_limit  0  5000  See note 1 query_cache_size  0  1M query_cache_type  on/1  off/0 sort_buffer_size  2M  256k sql_mode  none  NO_ENGINE_SUBSTITUTION  See later post about default my.cnf for STRICT_TRANS_TABLES sync_master_info  0  10000  Recommend: master_info_repository=table sync_relay_log  0  10000 sync_relay_log_info  0  10000  Recommend: relay_log_info_repository=table. Also see Replication Relay and Status Logs table_definition_cache  400  400 + table_open_cache / 2, capped at 2000 table_open_cache  400  2000   Also see table_open_cache_instances thread_cache_size  0  8 + max_connections/100, capped at 100 Note 1: In 5.5 there was already a rule to make open_files_limit 10 + max_connections + table_cache_size * 2 if that was higher than the user-specified value. Now uses the higher of that and (5000 or what you specify). We are also adding a new default my.cnf file and guided instructions on the key settings to adjust. More on this in a later post. We're also providing a page with suggestions for settings to improve backwards compatibility. The old example files like my-huge.cnf are obsolete. Some of the improvements are present from 5.6.6 and the rest are coming. These are ideas, and until they are in an official GA release, they are subject to change. As part of this work I reviewed every old server setting plus many hundreds of emails of feedback and testing results from inside and outside Oracle's MySQL Support team and the many excellent blog entries and comments from others over the years, including from many MySQL Gurus out there, like Baron, Sheeri, Ronald, Schlomi, Giuseppe and Mark Callaghan. With these changes we're trying to make it easier to set up the server by adjusting only a few settings that will cause others to be set. This happens only at server startup and only applies to variables where you haven't set a value. You'll see a similar approach used for the Performance Schema. The Gurus don't need this but for many newcomers the defaults will be very useful. Possibly the most unusual change is the way we vary the setting for innodb_buffer_pool_instances for 32-bit Windows. 
This is because we've found that DLLs with specified load addresses often fragment the limited four gigabyte 32-bit address space and make it impossible to allocate more than about 1300 megabytes of contiguous address space for the InnoDB buffer pool. The smaller requests for many pools are more likely to succeed. If you change the value of innodb_log_file_size in my.cnf you will see a message like this in the error log file at the next restart, instead of the old error message: [Warning] InnoDB: Resizing redo log from 2*64 to 5*128 pages, LSN=5735153 One of the biggest challenges for the defaults is the millions of installations on a huge range of systems, from point-of-sale terminals and routers, through shared hosting or end-user systems, and on to major servers with lots of CPU cores, hundreds of gigabytes of RAM and terabytes of fast disk space. Our past defaults were for the smaller systems, and these changes move that towards larger shared hosting or shared end-user systems, still with a bias towards the smaller end. There is a bias in favour of OLTP workloads, so reporting systems may need more changes. Where there is a conflict between the best settings for benchmarks and normal use, we've favoured production, not benchmarks. We're very interested in your feedback, comments and suggestions.

    Read the article

  • RFC regarding WAM

    - by Noctis Skytower
    Request For Comment regarding Whitespace's Assembly Mnemonics What follows is a first-generation attempt at creating mnemonics for a Whitespace assembly language. STACK ===== push number copy copy number swap away away number MATH ==== add sub mul div mod HEAP ==== set get FLOW ==== part label call label goto label zero label less label back exit I/O === ochr oint ichr iint In the interest of making improvements to this small and simple instruction set, this is a second attempt. hold N Push the number onto the stack copy Duplicate the top item on the stack copy N Copy the nth item on the stack (given by the argument) onto the top of the stack swap Swap the top two items on the stack drop Discard the top item on the stack drop N Slide n items off the stack, keeping the top item add Addition sub Subtraction mul Multiplication div Integer Division mod Modulo save Store load Retrieve L: Mark a location in the program call L Call a subroutine goto L Jump unconditionally to a label if=0 L Jump to a label if the top of the stack is zero if<0 L Jump to a label if the top of the stack is negative return End a subroutine and transfer control back to the caller exit End the program print chr Output the character at the top of the stack print int Output the number at the top of the stack input chr Read a character and place it in the location given by the top of the stack input int Read a number and place it in the location given by the top of the stack What do you think of the following revised list for Whitespace's assembly instructions? I'm still thinking outside of the box somewhat and trying to come up with a better mnemonic set than last time. When the previous interpreter was written, it was completed over two contiguous, rushed evenings. This rewrite deserves significantly more time now that it is the summer. Of course, the next version of Whitespace (0.4) may have its instructions revised even more, but this is just a redesign of what originally was done in a very short amount of time. Hopefully, the instructions make more sense once someone new to programming thinks about them.

    Read the article

  • What makes these two R data frames not identical?

    - by Matt Parker
    UPDATE: I remembered dput() about the time Sharpie mentioned it. It's probably the row names. Back in a moment with an answer. I have two small data frames, this_tx and last_tx. They are, in every way that I can tell, completely identical. this_tx == last_tx results in a frame of identical dimensions, all TRUE. this_tx %in% last_tx, two TRUEs. Inspected visually, clearly identical. But when I call identical(this_tx, last_tx) I get a FALSE. Hilariously, even identical(str(this_tx), str(last_tx)) will return a TRUE. If I set this_tx <- last_tx, I'll get a TRUE. What is going on? I don't have the deepest understanding of R's internal mechanics, but I can't find a single difference between the two data frames. If it's relevant, the two variables in the frames are both factors - same levels, same numeric coding for the levels, both just subsets of the same original data frame. Converting them to character vectors doesn't help. Background (because I wouldn't mind help on this, either): I have records of drug treatments given to patients. Each treatment record essentially specifies a person and a date. A second table has a record for each drug and dose given during a particular treatment (usually, a few drugs are given each treatment). I'm trying to identify contiguous periods during which the person was taking the same combinations of drugs at the same doses. The best plan I've come up with is to check the treatments chronologically. If the combination of drugs and doses for treatment[i] is identical to the combination at treatment[i-1], then treatment[i] is a part of the same phase as treatment[i-1]. Of course, if I can't compare drug/dose combinations, that's right out.

    Read the article

  • Memory mapped files and "soft" page faults. Unavoidable?

    - by Robert Oschler
    I have two applications (processes) running under Windows XP that share data via a memory mapped file. Despite all my efforts to eliminate per iteration memory allocations, I still get about 10 soft page faults per data transfer. I've tried every flag there is in CreateFileMapping() and CreateFileView() and it still happens. I'm beginning to wonder if it's just the way memory mapped files work. If anyone there knows the O/S implementation details behind memory mapped files I would appreciate comments on the following theory: If two processes share a memory mapped file and one process writes to it while another reads it, then the O/S marks the pages written to as invalid. When the other process goes to read the memory areas that now belong to invalidated pages, this causes a soft page fault (by design) and the O/S knows to reload the invalidated page. Also, the number of soft page faults is therefore directly proportional to the size of the data write. My experiments seem to bear out the above theory. When I share data I write one contiguous block of data. In other words, the entire shared memory area is overwritten each time. If I make the block bigger the number of soft page faults goes up correspondingly. So, if my theory is true, there is nothing I can do to eliminate the soft page faults short of not using memory mapped files because that is how they work (using soft page faults to maintain page consistency). What is ironic is that I chose to use a memory mapped file instead of a TCP socket connection because I thought it would be more efficient. Note, if the soft page faults are harmless please note that. I've heard that at some point if the number is excessive, the system's performance can be marred. If soft page faults intrinsically are not significantly harmful then if anyone has any guidelines as to what number per second is "excessive" I'd like to hear that. Thanks.
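
    Incidentally, the Win32 call that maps the view is MapViewOfFile (there is no CreateFileView), used together with CreateFileMapping. As a point of reference, here is a minimal sketch of the writer side of a pagefile-backed named mapping of the sort the question describes; the name and size are illustrative, error handling is omitted, and the reader would open the same name with OpenFileMapping:

    #include <windows.h>
    #include <string.h>

    int main(void)
    {
        /* Pagefile-backed, named mapping shared between two processes. */
        HANDLE map = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                        PAGE_READWRITE, 0, 64 * 1024,
                                        "Local\\DemoSharedBlock");
        char *view = (char *)MapViewOfFile(map, FILE_MAP_ALL_ACCESS, 0, 0, 0);

        /* Every write dirties whole pages; per the theory in the question,
           the reader touching those pages afterwards is what shows up as
           soft page faults on its side. */
        memset(view, 0xAB, 64 * 1024);

        UnmapViewOfFile(view);
        CloseHandle(map);
        return 0;
    }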

    Read the article

  • Instantiating class with custom allocator in shared memory

    - by recipriversexclusion
    I'm pulling my hair due to the following problem: I am following the example given in boost.interprocess documentation to instantiate a fixed-size ring buffer buffer class that I wrote in shared memory. The skeleton constructor for my class is: template<typename ItemType, class Allocator > SharedMemoryBuffer<ItemType, Allocator>::SharedMemoryBuffer( unsigned long capacity ){ m_capacity = capacity; // Create the buffer nodes. m_start_ptr = this->allocator->allocate(); // allocate first buffer node BufferNode* ptr = m_start_ptr; for( int i = 0 ; i < this->capacity()-1; i++ ) { BufferNode* p = this->allocator->allocate(); // allocate a buffer node } } My first question: Does this sort of allocation guarantee that the buffer nodes are allocated in contiguous memory locations, i.e. when I try to access the n'th node from address m_start_ptr + n*sizeof(BufferNode) in my Read() method would it work? If not, what's a better way to keep the nodes, creating a linked list? My test harness is the following: // Define an STL compatible allocator of ints that allocates from the managed_shared_memory. // This allocator will allow placing containers in the segment typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator; //Alias a vector that uses the previous STL-like allocator so that allocates //its values from the segment typedef SharedMemoryBuffer<int, ShmemAllocator> MyBuf; int main(int argc, char *argv[]) { shared_memory_object::remove("MySharedMemory"); //Create a new segment with given name and size managed_shared_memory segment(create_only, "MySharedMemory", 65536); //Initialize shared memory STL-compatible allocator const ShmemAllocator alloc_inst (segment.get_segment_manager()); //Construct a buffer named "MyBuffer" in shared memory with argument alloc_inst MyBuf *pBuf = segment.construct<MyBuf>("MyBuffer")(100, alloc_inst); } This gives me all kinds of compilation errors related to templates for the last statement. What am I doing wrong?

    Read the article

  • Find position of a node within a nodeset using xpath

    - by Phil Nash
    After playing around with position() in vain I was googling around for a solution and arrived at this older Stack Overflow question which almost describes my problem. The difference is that the nodeset I want the position within is dynamic, rather than a contiguous section of the document. To illustrate, I'll modify the example from the linked question to match my requirements. Note that each <b> element is within a different <a> element. This is the critical bit. <root> <a> <b>zyx</b> </a> <a> <b>wvu</b> </a> <a> <b>tsr</b> </a> <a> <b>qpo</b> </a> </root> Now if I queried, using XPath, for a/b I'd get a nodeset of the four <b> nodes. I want to then find the position within that nodeset of the node that contains the string 'tsr'. The solution in the other post breaks down here: count(a/b[.='tsr']/preceding-sibling::*)+1 returns 1 because preceding-sibling is navigating the document rather than the context node-set. Is it possible to work within the context nodeset?

    Read the article

  • .NET out of memory troubleshooting

    - by bushman
    After reading a few enlightening articles about memory in .NET (such as "Out of Memory" does not refer to physical memory), I thought I understood why a C# app would throw an out-of-memory exception -- until I started experimenting with two servers -- both have 2.5 GB of RAM, Windows Server 2003, and identical programs running. The only significant difference between the two is that one has 7% hard drive storage left and the other more than 50%. The server with 7% storage space left consistently throws an out-of-memory error while the other performs consistently well. My app is a C# web application that processes hundreds of MB of String objects. Why would this difference happen, seeing that the most likely reason for the out-of-memory issue is running out of contiguous virtual address space? What solutions do you propose, and what do you say about the following: 1. turn on the 3GB switch to increase the virtual address space; 2. instead of using one giant string object, break it up into smaller pieces and collect them in a jagged array (here I have to find a way to return to the caller in some other way, as right now the return type is a string). Thanks, SO

    Read the article

  • How do I supply extra info to IApplicationSettingsProvider class?

    - by joebeazelman
    Perhaps this question has been asked before in a different way, but I haven’t been able to find it. I have one or more plugin adapter assemblies in my application all having the type IPlugin, for instance. Each adapter has its own settings structures stored in a common directory. Whether they are stored in one contiguous file or in separate ones doesn’t matter. Each adapter can have one or more settings associated with it. The settings will have both a name and the Plugin it will be used for. How would I create such a configuration system using the following requirements: I want to use .NETs built in settings system and avoid writing one from scratch The host application will be responsible for locating the plugin settings and passing it to the plugin Each plugin will be responsible for reading and writing its own settings to separate concerns. The host application should call Plugin.Save(thePath) and it does its thing. All settings are user scoped So far, I realize that I would need to write my own SettingsProvider, but the provider seems to work in isolation in that there’s no way to pass it parameters such as the path of the plugin directory and the name of the settings. All of the example code I've seen has the provider getting the data from the runtime environment.

    Read the article

  • RFC: Whitespace's Assembly Mnemonics

    - by Noctis Skytower
    Request For Comment regarding Whitespace's Assembly Mnemonics What follows is a first-generation attempt at creating mnemonics for a Whitespace assembly language. STACK ===== push number copy copy number swap away away number MATH ==== add sub mul div mod HEAP ==== set get FLOW ==== part label call label goto label zero label less label back exit I/O === ochr oint ichr iint In the interest of making improvements to this small and simple instruction set, this is a second attempt. hold N Push the number onto the stack copy Duplicate the top item on the stack copy N Copy the nth item on the stack (given by the argument) onto the top of the stack swap Swap the top two items on the stack drop Discard the top item on the stack drop N Slide n items off the stack, keeping the top item add Addition sub Subtraction mul Multiplication div Integer Division mod Modulo save Store load Retrieve L: Mark a location in the program call L Call a subroutine goto L Jump unconditionally to a label if=0 L Jump to a label if the top of the stack is zero if<0 L Jump to a label if the top of the stack is negative return End a subroutine and transfer control back to the caller exit End the program print chr Output the character at the top of the stack print int Output the number at the top of the stack input chr Read a character and place it in the location given by the top of the stack input int Read a number and place it in the location given by the top of the stack What is the general consensus on the following revised list for Whitespace's assembly instructions? They definitely come from thinking outside of the box and trying to come up with a better mnemonic set than last time. When the previous Python interpreter was written, it was completed over two contiguous, rushed evenings. This rewrite deserves significantly more time now that it is the summer. Of course, the next version of Whitespace (0.4) may have its instructions revised even more, but this is just a redesign of what originally was done in a few hours. Hopefully, the instructions make more sense to those unfamiliar with programming jargon.

    Read the article

< Previous Page | 1 2 3 4 5 6 7  | Next Page >