Search Results


  • SQL Complicated Group / Join by Category

    - by Mike Silvis
    I currently have a database structure with two important tables:

    1) Food Types (Fruit, Vegetables, Meat)
    2) Specific Foods (Apple, Oranges, Carrots, Lettuce, Steak, Pork)

    I am currently trying to build a SQL statement such that I can have the following:

        Fruit < Apple, Orange
        Vegetables < Carrots, Lettuce
        Meat < Steak, Pork

    I have tried using a statement like the following:

        Select * From Food_Type
        join (Select * From Foods) as Foods on Food_Type.Type_ID = Foods.Type_ID

    but this returns every specific food, while I only want the first 2 per category. So I basically need my subquery to have a LIMIT statement in it, so that it finds only the first 2 per category. However, if I simply do the following:

        Select * From Food_Type
        join (Select * From Foods LIMIT 2) as Foods on Food_Type.Type_ID = Foods.Type_ID

    my statement only returns 2 results total.
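
    A common way to express "first N per group" in plain SQL is a correlated subquery that counts how many foods in the same category come before the current row; a minimal sketch based on the tables named in the question (the Food_ID ordering column and the Name columns are assumptions, since the question doesn't show the full schema):

        -- Hypothetical sketch: at most 2 foods per food type.
        -- Assumes Foods(Food_ID, Name, Type_ID) with Food_ID unique,
        -- and Food_Type(Type_ID, Name).
        SELECT ft.Name AS type_name, f.Name AS food_name
        FROM Food_Type ft
        JOIN Foods f ON ft.Type_ID = f.Type_ID
        WHERE (SELECT COUNT(*)
               FROM Foods f2
               WHERE f2.Type_ID = f.Type_ID
                 AND f2.Food_ID < f.Food_ID) < 2  -- fewer than 2 earlier foods in this type
        ORDER BY ft.Name, f.Name;

    On engines that support window functions, ROW_NUMBER() OVER (PARTITION BY Type_ID ORDER BY Food_ID) with an outer filter of rn <= 2 does the same job and is usually faster.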


  • Geocode multiple addresses

    - by ace2600
    I need to geocode around 200 addresses per request and do around 3,000 requests per day. I will be doing this server-side using PHP. I have looked at http://stackoverflow.com/questions/98449/how-to-convert-an-address-to-a-latlon, but could not find a way for either the Google Maps HTTP Request API or the Yahoo Geocoding REST API to send multiple addresses (I cannot use JavaScript, as this is server-side). I will be primarily using this to sort addresses based on the coordinates and, if the API supports it, to fill in any missing data for each address. The addresses will be in the United States only. The addresses to geocode will come from user input, so they will be free-form, like "street address, postal code", or "city, state", etc. Accuracy is not too important for the coordinates, say within five hundred feet or so. Is there a free API to handle this? Is there a way to get Yahoo or Google to do multiple locations in a request?


  • VS2008: Disable asking whether to reload files changed outside the IDE

    - by SealedSun
    Hi, I have a Visual Studio 2008 project where some code files are generated with each build (a parser, integrated via MSBuild, i.e. by editing the *.csproj file). VS does not know about the generated nature of these files (i.e. they are not the result of a "Custom Tool"), so they "change" with every build, naturally. And VS2008 asks me after every build whether I would like to reload those files:

        This file has been modified outside the source editor.
        Do you want to reload it?

    That would be OK if I had one of those files open and in front of me, but I get these modal dialogs even with none of the code files open. So my question is: is there a way to disable this dialog, per project, per solution, or globally? Thanks!


  • Publishing similar apps in AppStore

    - by John
    Hello guys, I have some questions that I think someone here can help me with. I have published 5 apps in the App Store that bring news to users in different countries, one app per country, because nobody wants to read the news of other countries. Now I have submitted 3 more apps, and Apple rejected them, saying that I need to use "In App Purchase" because the apps are similar. I have seen many similar apps published by the same user, so my question is: how can I publish many similar apps without "In App Purchase"? I have used the same bundle ID for all of my apps. Is that wrong? Do I need to create one bundle ID per app to get this to work and approved by Apple? Thank you very much, and sorry for my limited English!


  • AVG time spent on multiple rows SQL-server?

    - by seo20
    I have a table tblSequence with 3 columns in MS SQL: ID, IP, [Timestamp]. Content could look like this:

        ID      IP              [Timestamp]
        --------------------------------------------------
        4347    62.107.95.103   2010-05-24 09:27:50.470
        4346    62.107.95.103   2010-05-24 09:27:45.547
        4345    62.107.95.103   2010-05-24 09:27:36.940
        4344    62.107.95.103   2010-05-24 09:27:29.347
        4343    62.107.95.103   2010-05-24 09:27:12.080

    ID is unique; there can be n number of IPs. I would like to calculate the average time spent per IP, in a single row. I know you can do something like this:

        SELECT CAST(AVG(CAST(MyTable.MyDateTimeFinish - MyTable.MyDateTimeStart AS float)) AS datetime)

    But how on earth do I find the first and last entry for each unique IP, so I can have a start and finish time? I'm stuck.
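
    One way to get a start and finish per IP is to take MIN and MAX of the timestamp grouped by IP, then average the spans in an outer query; a minimal T-SQL sketch against the tblSequence table from the question (averaging in seconds is an assumption about the desired unit):

        -- Inner query: one row per IP with its first and last timestamp.
        -- Outer query: average those per-IP durations, here in seconds.
        SELECT AVG(CAST(DATEDIFF(second, t.FirstSeen, t.LastSeen) AS float)) AS AvgSecondsPerIP
        FROM (SELECT IP,
                     MIN([Timestamp]) AS FirstSeen,
                     MAX([Timestamp]) AS LastSeen
              FROM tblSequence
              GROUP BY IP) AS t;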


  • Processes sharing cores on Ubuntu system

    - by muckabout
    My coworkers and I share an 8-core server running Ubuntu for our batch processes. I tend to run 4 processes at a time, each of which consumes 100% CPU per core when nothing else is running. When a coworker runs his processes (typically about 4 at a time), his also get 100% per core. However, when both of us run ours (he always goes first), his still get 100% while mine seem to divide the remaining processing power and linger in the 10-40% range. I even reniced his processes to a lower priority, and it did not change anything. What are the issues that may cause this?


  • Select the first row in a join of two tables in one statement

    - by Oscar Cabrero
    Hi, I need to select only the first row from a query that joins tables A and B; in table B there are multiple records with the same name. There are no identifiers in either of the two tables, and I can't change the schema either, because I do not own the DB.

        TABLE A: NAME
        TABLE B: NAME, DATA1, DATA2

        Select Distinct A.NAME, B.DATA1, B.DATA2
        From A Inner Join B on A.NAME = B.NAME

    This gives me:

        NAME       DATA1  DATA2
        sameName   1      2
        sameName   1      3
        otherName  5      7
        otherName  8      9

    but I need to retrieve only one row per name:

        NAME       DATA1  DATA2
        sameName   1      2
        otherName  5      7

    I was able to do this by inserting the result into a temp table with an identity column and then selecting the MIN(ID) per name. The problem is that I need to do this in one single statement. This is a DB2 database. Thanks.
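
    Since DB2 supports OLAP window functions, one single-statement approach is to number the joined rows within each name and keep only the first; a minimal sketch, assuming that ordering by DATA1 and DATA2 picks the row the question considers "first":

        -- Number B's rows within each NAME, then keep row 1 per name.
        SELECT NAME, DATA1, DATA2
        FROM (SELECT A.NAME, B.DATA1, B.DATA2,
                     ROW_NUMBER() OVER (PARTITION BY A.NAME
                                        ORDER BY B.DATA1, B.DATA2) AS rn
              FROM A
              INNER JOIN B ON A.NAME = B.NAME) AS t
        WHERE rn = 1;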


  • Factorial Algorithms in different languages

    - by Brad Gilbert
    I want to see all the different ways you can come up with for a factorial subroutine or program. The hope is that anyone can come here and see if they might want to learn a new language. Ideas:

    - Procedural
    - Functional
    - Object Oriented
    - One liners
    - Obfuscated
    - Oddball
    - Bad Code
    - Polyglot

    Basically I want to see an example of different ways of writing an algorithm, and what they would look like in different languages. Please limit it to one example per entry. I will allow you to have more than one example per answer if you are trying to highlight a specific style or language, or just a well-thought-out idea that lends itself to being in one post. The only real requirement is that it must find the factorial of a given argument, in all languages represented. Be creative! Recommended guideline:

        # Language Name: Optional Style type
        - Optional bullet points
        Code Goes Here
        Other informational text goes here

    I will occasionally go along and edit any answer that does not have decent formatting.
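
    As one illustration of the guideline, here is a sketch of a factorial in SQL, recursive-CTE style (the hard-coded argument 10 is just for brevity, and WITH RECURSIVE assumes an engine such as PostgreSQL, SQLite, or MySQL 8):

        # Language Name: SQL
        - Functional / recursive (common table expression)

        -- Factorial of 10 via a recursive CTE: each step carries (n, n!).
        WITH RECURSIVE fact(n, f) AS (
            SELECT 1, 1
            UNION ALL
            SELECT n + 1, f * (n + 1) FROM fact WHERE n < 10
        )
        SELECT f FROM fact WHERE n = 10;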


  • [grails] attaching multiple files to a domain class

    - by Emyr
    I've seen various Grails plugins which allow easier handling of file uploads, but these tend only to support a single file per form submit. I'd like a multi-attach form where, as soon as you pick one file, an extra field and button are added using JS (various sites do it like this). Do you know of any good plugins which provide elegant uploading of multiple files without excessive coding? A progress bar, either per file or for the whole process, would also be very nice. I don't know to what extent I can allow GORM to handle a java.io.File field (or in this case a Collection<File>).


  • How to notice unusual news activity

    - by ??iu
    Suppose you were able to keep track of the news mentions of different entities, like say "Steve Jobs" and "Steve Ballmer". What are ways you could tell whether the number of mentions per entity in a given time period was unusual relative to their normal frequency of appearance? I imagine that for a more popular person like Steve Jobs an increase of 50% might be unusual (an increase from 1000 to 1500), while for a relatively unknown CEO an increase of 1000% for a given day could be possible (an increase from 2 to 200). If you didn't have a way of scaling that, your unusualness index could be dominated by unheard-ofs getting their 15 minutes of fame.


  • Installing CXF with Eclipse 3.5

    - by shinynewbike
    As per http://help.eclipse.org/helios/topic/org.eclipse.jst.ws.cxf.doc.user/reference/preferences.html, the CXF 2.x preferences can be accessed via Window > Preferences... > Web Services > CXF 2.x Preferences from the top-level menu. But I don't see the option CXF 2.x Preferences under Web Services, even though I have chosen the Java EE perspective. Any ideas how to enable this? Sorry for such a simple question. I also cannot see CXF as a Project Facet, as per http://help.eclipse.org/helios/index.jsp?topic=/org.eclipse.jst.ws.jaxws.doc.user/gettingstarted/requirements.html. I know this is something to do with making Eclipse aware of the CXF libraries, but I can't find the tutorial for this.


  • How to explain to a client that you've gone over-budget and you'll need more money/time to deliver w

    - by General Tapioca
    My situation is that I have agreed on a per-project proposal with the client. The proposal is vague, but still names functionality in a way that can be argued about as to whether something is included or not, while leaving some room for interpretation. I originally pressed as much as I could for a per-month contract, arguing that the project is mostly non-predictable, but the client refused. Being a small company, I had to fold and sign a contract based on my group's estimations. At this point we have reached completion on about 85% of the features (we think), but we have run out of budget. We have been working for almost two years with this client on previous contracts, and we have delivered a good product that they are happy with, so we have a good standing relationship. More info:

    - There has been a bit of scope creep, but I don't think enough for me to hide behind that argument.
    - We've been delivering partial releases about monthly.
    - We don't have systematic user testing in place.


  • mdadm: Win7-install created a boot partition on one of my RAID6 drives. How to rebuild?

    - by EXIT_FAILURE
    My problem happened when I attempted to install Windows 7 on its own SSD. The Linux OS I use, which has knowledge of the software RAID system, is on an SSD that I disconnected prior to the install, so that Windows (or I) wouldn't inadvertently mess it up. However, and in retrospect foolishly, I left the RAID disks connected, thinking that Windows wouldn't be so ridiculous as to mess with an HDD that it sees as just unallocated space. Boy was I wrong! After copying over the installation files to the SSD (as expected and desired), it also created an NTFS partition on one of the RAID disks. Both unexpected and totally undesired!

    I changed out the SSDs again and booted up in Linux. mdadm didn't seem to have any problem assembling the array as before, but if I tried to mount the array, I got the error message:

        mount: wrong fs type, bad option, bad superblock on /dev/md0,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    dmesg:

        EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
        EXT4-fs (md0): group descriptors corrupted!

    I then used qparted to delete the newly created NTFS partition on /dev/sdd so that it matched the other three (/dev/sd{b,c,e}), and requested a resync of my array with:

        echo repair > /sys/block/md0/md/sync_action

    This took around 4 hours, and upon completion dmesg reports:

        md: md0: requested-resync done.

    A bit brief after a 4-hour task, though I'm unsure where other log files exist (I also seem to have messed up my sendmail configuration). In any case: no change reported according to mdadm, everything checks out. mdadm -D /dev/md0 still reports:

        Version : 1.2
        Creation Time : Wed May 23 22:18:45 2012
        Raid Level : raid6
        Array Size : 3907026848 (3726.03 GiB 4000.80 GB)
        Used Dev Size : 1953513424 (1863.02 GiB 2000.40 GB)
        Raid Devices : 4
        Total Devices : 4
        Persistence : Superblock is persistent

        Update Time : Mon May 26 12:41:58 2014
        State : clean
        Active Devices : 4
        Working Devices : 4
        Failed Devices : 0
        Spare Devices : 0

        Layout : left-symmetric
        Chunk Size : 4K

        Name : okamilinkun:0
        UUID : 0c97ebf3:098864d8:126f44e3:e4337102
        Events : 423

        Number   Major   Minor   RaidDevice   State
           0       8       16        0        active sync   /dev/sdb
           1       8       32        1        active sync   /dev/sdc
           2       8       48        2        active sync   /dev/sdd
           3       8       64        3        active sync   /dev/sde

    Trying to mount it still reports the same mount error, and dmesg shows the same two EXT4-fs lines as above.

    I'm a bit unsure where to proceed from here, and trying stuff "to see if it works" is a bit too risky for me. This is what I suggest I should attempt to do: tell mdadm that /dev/sdd (the one that Windows wrote into) isn't reliable anymore, pretend it is newly re-introduced to the array, and reconstruct its content based on the other three drives. I also could be totally wrong in my assumptions, and the creation and subsequent deletion of the NTFS partition on /dev/sdd may have changed something that cannot be fixed this way.

    My question: help, what should I do? If I should do what I suggested, how do I do that?

    From reading the documentation etc., I would think maybe:

        mdadm --manage /dev/md0 --set-faulty /dev/sdd
        mdadm --manage /dev/md0 --remove /dev/sdd
        mdadm --manage /dev/md0 --re-add /dev/sdd

    However, the documentation examples suggest /dev/sdd1, which seems strange to me, as there is no partition there as far as Linux is concerned, just unallocated space. Maybe these commands won't work without one. Maybe it makes sense to mirror the partition table of one of the other RAID devices that weren't touched, before --re-add. Something like:

        sfdisk -d /dev/sdb | sfdisk /dev/sdd

    Bonus question: why would the Windows 7 installation do something so st...potentially dangerous?

    Update: I went ahead and marked /dev/sdd as faulty and removed it (not physically) from the array:

        # mdadm --manage /dev/md0 --set-faulty /dev/sdd
        # mdadm --manage /dev/md0 --remove /dev/sdd

    However, attempting to --re-add was disallowed:

        # mdadm --manage /dev/md0 --re-add /dev/sdd
        mdadm: --re-add for /dev/sdd to /dev/md0 is not possible

    --add was fine:

        # mdadm --manage /dev/md0 --add /dev/sdd

    mdadm -D /dev/md0 now reports the state as clean, degraded, recovering, and /dev/sdd as spare rebuilding. /proc/mdstat shows the recovery progress:

        md0 : active raid6 sdd[4] sdc[1] sde[3] sdb[0]
              3907026848 blocks super 1.2 level 6, 4k chunk, algorithm 2 [4/3] [UU_U]
              [>....................]  recovery =  2.1% (42887780/1953513424) finish=348.7min speed=91297K/sec

    nmon also shows expected output:

        sdb    0%   87.3    0.0 | >
        sdc   71%  109.1    0.0 |RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR >
        sdd   40%    0.0   87.3 |WWWWWWWWWWWWWWWWWWWW >
        sde    0%   87.3    0.0 |>

    It looks good so far. Crossing my fingers for another five+ hours :)

    Update 2: The recovery of /dev/sdd finished, with dmesg output:

        [44972.599552] md: md0: recovery done.
        [44972.682811] RAID conf printout:
        [44972.682815]  --- level:6 rd:4 wd:4
        [44972.682817]  disk 0, o:1, dev:sdb
        [44972.682819]  disk 1, o:1, dev:sdc
        [44972.682820]  disk 2, o:1, dev:sdd
        [44972.682821]  disk 3, o:1, dev:sde

    Attempting to mount /dev/md0 reports the same error as before:

        mount: wrong fs type, bad option, bad superblock on /dev/md0,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    And on dmesg:

        [44984.159908] EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
        [44984.159912] EXT4-fs (md0): group descriptors corrupted!

    I'm not sure what to do now. Suggestions?

    Output of dumpe2fs /dev/md0:

        dumpe2fs 1.42.8 (20-Jun-2013)
        Filesystem volume name:   Atlas
        Last mounted on:          /mnt/atlas
        Filesystem UUID:          e7bfb6a4-c907-4aa0-9b55-9528817bfd70
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    user_xattr acl
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              244195328
        Block count:              976756712
        Reserved block count:     48837835
        Free blocks:              92000180
        Free inodes:              243414877
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      791
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        RAID stripe width:        2
        Flex block group size:    16
        Filesystem created:       Thu May 24 07:22:41 2012
        Last mount time:          Sun May 25 23:44:38 2014
        Last write time:          Sun May 25 23:46:42 2014
        Mount count:              341
        Maximum mount count:      -1
        Last checked:             Thu May 24 07:22:41 2012
        Check interval:           0 (<none>)
        Lifetime writes:          4357 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      e177a374-0b90-4eaa-b78f-d734aae13051
        Journal backup:           inode blocks
        dumpe2fs: Corrupt extent header while reading journal super block


  • Layered Architecture and Static Methods

    - by BigBoss
    I need suggestions for the three-layered architecture I am planning to implement. Scenario: I am working in an ASP.NET C# 3.5 environment.

    - DLHelper: methods to get data from the database.
    - DAL: contains methods which collect data from the database with the help of the DLHelper classes. Most of the methods in this layer do not reference any page-level object, hence they can be declared static.
    - BL: same as the DAL layer; most of the methods do not reference any other page-level object, hence they can be declared static.
    - UI Layer: as per the above scenario, a UI-layer call to the BL layer looks like BLClass.Method - DALClass.Method.

    Question: I would like to know whether this is a standard way to do it. As per a discussion with my co-worker, we should have corresponding objects for the BL/DAL layers, but I am still looking for a more convincing answer.


  • counting twice in a query, once using restrictions

    - by Andrew Heath
    Given the following tables:

        Table1
        [class]  [child]
        math     boy1
        math     boy2
        math     boy3
        art      boy1

        Table2
        [child]  [glasses]
        boy1     yes
        boy2     yes
        boy3     no

    If I want to query for the number of children per class, I'd do this:

        SELECT class, COUNT(child) FROM Table1 GROUP BY class

    and if I wanted to query for the number of children per class wearing glasses, I'd do this:

        SELECT Table1.class, COUNT(Table1.child)
        FROM Table1
        LEFT JOIN Table2 ON Table1.child = Table2.child
        WHERE Table2.glasses = 'yes'
        GROUP BY Table1.class

    but what I really want to do is:

        SELECT class, COUNT(child), COUNT(child wearing glasses)

    and frankly I have no idea how to do that in only one query. Help?
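
    The trick is that a count with a restriction can be written as a sum of per-row 1s and 0s, so both counts fit in one grouped query; a minimal sketch over the tables from the question (the LEFT JOIN keeps children with no Table2 row, who simply contribute 0 to the glasses count):

        SELECT Table1.class,
               COUNT(Table1.child) AS children,
               SUM(CASE WHEN Table2.glasses = 'yes' THEN 1 ELSE 0 END) AS children_with_glasses
        FROM Table1
        LEFT JOIN Table2 ON Table1.child = Table2.child
        GROUP BY Table1.class;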


  • logging problems for swing applications

    - by mengmenger
    What kind of logging framework or API should be used for Swing applications which are used by multiple users on Unix? Is it possible to log all verbose output/exceptions in one file per day, or even one file per user per day? The users can open the same application in multiple instances. I also have another solution: to save the exceptions into a database. But if I miss the exceptions, those will not be saved in the DB. Does anybody have better solutions? Thank you very much!


  • Sum in array with match value

    - by user325504
    Dear all, I would like to do a simple sum per salesid in PHP/MySQL, after a cross-calculation between two tables, to get the real-time commission. All the values already come out correctly, but I have a problem with the final sum per salesid:

        salesid - commission
        aa0001  - 1000
        bb0001  - 500
        aa0001  - 200
        bb0001  - 50

    I have already tried a few samples from this site, but still cannot get the correct result. I cannot do the sum in MySQL for a reason (it needs a calculation with another table). The result should be:

        aa0001 - 1200
        bb0001 - 550

    I'd appreciate any help to complete this. Thank you so much.
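
    For reference, if the cross-calculated commission can be produced by a query or derived table after all, the final sum per salesid is a plain GROUP BY; a minimal sketch, where the table named commissions is hypothetical and stands in for whatever join produces the per-row values shown above:

        -- 'commissions(salesid, commission)' is a hypothetical stand-in
        -- for the real cross-calculation query mentioned in the question.
        SELECT salesid, SUM(commission) AS total_commission
        FROM commissions
        GROUP BY salesid;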


  • Unicorn: Which number of worker processes to use?

    - by blackbird07
    I am running a Ruby on Rails app on a virtual Linux server that is capped at 1 GB RAM. Currently, I am constantly hitting the limit and would like to optimize memory utilization. One option I am looking at is reducing the number of Unicorn workers. So what is the best way to determine the number of Unicorn workers to use? The current setting is 10 workers, but the maximum number of requests per second I have seen on Google Analytics Real-Time is 3 (reached only once, at a peak time; 99% of the time the rate does not go above 1 request per second). So is it a safe assumption that I can, for now, go with 4 workers, leaving room for unexpected amounts of requests? What are the metrics I should look at to determine the number of workers, and what tools can I use for that on my Ubuntu machine?


  • Autofac: Reference from a SingleInstance'd type to a HttpRequestScoped

    - by Michael Wagner
    I've got an application where a shared object needs a reference to a per-request object:

        Shared:   Engine
        Per Req:  IExtensions()
                  Request

    If I try to inject the IExtensions directly into the constructor of Engine, even as Lazy(Of IExtension), I get a "No scope matching [Request] is visible from the scope in which the instance was requested." exception when it tries to instantiate each IExtension. How can I create an HttpRequestScoped instance and then inject it into a shared instance? Would it be considered good practice to set it in the Request's factory (and therefore inject Engine into RequestFactory)?


  • Maximum Possible File Name Length in Windows Kernel

    - by Lambert
    I was wondering: what is the longest possible file name length allowed by the Windows kernel? E.g. I know the kernel uses UNICODE_STRING structures to hold all object paths, and since the byte length of a wide-character string is stored inside a USHORT, that allows for a maximum path length of 2^15 - 1 characters. Is there a similar hard restriction on a file name (rather than a path)? (I don't care if NTFS or FAT32 imposes a particular restriction; I'm looking for the longest theoretically allowed name in the kernel, assuming no additional file system or shell restrictions.)

    (Edit: For those wondering why this even matters, consider that normally, traversing a directory is achieved by FindFirstFile/FindNextFile calls, one call per file. Given the function named NtQueryDirectoryFile, which is the underlying system call and which returns multiple file names per call, it's actually possible to take advantage of this maximum-length restriction on the path to make an extremely fast directory traverser that uses solely the stack as a buffer. Now I'm trying to extend that concept, and I need to know the maximum size of a file name.)


  • mod_perl memory leak

    - by marghi
    Hello, I recently discovered that one of our sites has a memory leak; it's very strange because it happened all of a sudden. I've used GTop to measure the memory size per process, and it tells me that the real value is somewhere around 65 MB (on the server) per request, plus an additional 5 MB shared. I tried preloading the modules in the startup.pl file as indicated in the performance-tuning article for mod_perl. Nothing happened; in fact the shared memory decreased to 3.7 MB. In this situation I thought that my application was leaking memory, so before any line of code got executed I measured the memory, just to find out that the total value is in fact 64 MB. My questions are:

    - Is there a default preallocation of memory for each process?
    - Is there a configuration issue?
    - Is mod_perl leaking memory?

    Thank you very much.


  • Odd optimization problem under MSVC

    - by Goz
    I've seen this blog: http://igoro.com/archive/gallery-of-processor-cache-effects/

    The "weirdness" in part 7 is what caught my interest. My first thought was "That's just C# being weird". It's not: I wrote the following C++ code.

        volatile int* p = (volatile int*)_aligned_malloc( sizeof( int ) * 8, 64 );
        memset( (void*)p, 0, sizeof( int ) * 8 );

        double dStart = t.GetTime();
        for (int i = 0; i < 200000000; i++)
        {
            //p[0]++;p[1]++;p[2]++;p[3]++;   // Option 1
            //p[0]++;p[2]++;p[4]++;p[6]++;   // Option 2
            p[0]++;p[2]++;                   // Option 3
        }
        double dTime = t.GetTime() - dStart;

    The timings I get on my 2.4 GHz Core 2 Quad go as follows:

        Option 1 = ~8 cycles per loop.
        Option 2 = ~4 cycles per loop.
        Option 3 = ~6 cycles per loop.

    Now this is confusing. My reasoning behind the difference comes down to the cache write latency (3 cycles) on my chip and an assumption that the cache has a 128-bit write port (this is pure guesswork on my part). On that basis, in Option 1 it will increment p[0] (1 cycle), then increment p[2] (1 cycle), then it has to wait 1 cycle (for cache), then p[1] (1 cycle), then wait 1 cycle (for cache), then p[3] (1 cycle). Finally, 2 cycles for increment and jump (though it's usually implemented as decrement and jump). This gives a total of 8 cycles.

    In Option 2, it can increment p[0] and p[4] in one cycle, then increment p[2] and p[6] in another cycle. Then 2 cycles for subtract and jump. No waits needed on cache. Total: 4 cycles.

    In Option 3, it can increment p[0], then has to wait 2 cycles, then increment p[2], then subtract and jump. The problem is, if you set case 3 to increment p[0] and p[4] it STILL takes 6 cycles (which kinda blows my 128-bit read/write port out of the water).

    So... can anyone tell me what the hell is going on here? Why DOES case 3 take longer? Also, I'd love to know what I've got wrong in my thinking above, as I obviously have something wrong! Any ideas would be much appreciated! :) It'd also be interesting to see how GCC or any other compiler copes with it as well!

    Edit: Jerry Coffin's idea gave me some thoughts. I've done some more tests (on a different machine, so forgive the change in timings) with and without nops, and with different counts of nops:

        case 2 - 0.46   00401ABD jne (401AB0h)
        0 nops - 0.68   00401AB7 jne (401AB0h)
        1 nop  - 0.61   00401AB8 jne (401AB0h)
        2 nops - 0.636  00401AB9 jne (401AB0h)
        3 nops - 0.632  00401ABA jne (401AB0h)
        4 nops - 0.66   00401ABB jne (401AB0h)
        5 nops - 0.52   00401ABC jne (401AB0h)
        6 nops - 0.46   00401ABD jne (401AB0h)
        7 nops - 0.46   00401ABE jne (401AB0h)
        8 nops - 0.46   00401ABF jne (401AB0h)
        9 nops - 0.55   00401AC0 jne (401AB0h)

    I've included the jump statements so you can see that the source and destination are in one cache line. You can also see that we start to get a difference when we are 13 bytes or more apart. Until we hit 16... then it all goes wrong. So Jerry isn't right (though his suggestion DOES help a bit); however, something IS going on. I'm more and more intrigued to try and figure out what it is now. It appears to be some sort of memory-alignment oddity rather than some sort of instruction-throughput oddity. Anyone want to explain this for an inquisitive mind? :D

    Edit 3: Interjay has a point on the unrolling that blows the previous edit out of the water. With an unrolled loop the performance does not improve. You need to add a nop in to make the gap between jump source and destination the same as for my good nop count above. Performance still sucks. It's interesting that I need 6 nops to improve performance, though.

    I wonder how many nops the processor can issue per cycle? If it's 3, then that accounts for the cache write latency... but if that's it, why is the latency occurring? Curiouser and curiouser...


  • CouchDB Versioning / Auditing

    - by Cory
    I'm attempting to use CouchDB for a system that requires full auditing of all data operations. Because of its built-in revision tracking, CouchDB seemed like an ideal choice. But then I read in the O'Reilly book that "CouchDB does not guarantee that older versions are kept around." I can't seem to find much more documentation on this point, or on how CouchDB deals with its revision tracking internally. Is there any way to configure CouchDB, either on a per-database or a per-document level, to keep all versions around forever? If so, how?

