Search Results

Search found 5785 results on 232 pages for 'atomic compare and swap'.


  • [C#] Is variable assignment and reading an atomic operation (threading)?

    - by AStrangerGuy
    I was unable to find any reference to this in the documentation... Is assigning to a double (or any other simple type, including boolean) an atomic operation, viewed from the perspective of threads?

        double value = 0;

        public void First()
        {
            while (true) { value = (new Random()).NextDouble(); }
        }

        public void Second()
        {
            while (true) { Console.WriteLine(value); }
        }

    In this sample, First is called on one thread and Second on another. Can Second read a corrupted value if it runs while the other thread is mid-assignment? I don't care if I receive the old value; it's only important to receive a valid value (not one where only 2 out of 8 bytes have been written). I know it's a stupid question, but I want to be sure, because I don't know how the CLR actually sets variables. Thanks
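
    The CLI specification only guarantees atomic reads and writes for properly aligned values no wider than the native word size, so bool, int and object references are safe, but a 64-bit double can tear on a 32-bit runtime. C# even forbids declaring a double volatile for exactly this reason. A minimal sketch of one workaround (the AtomicDouble wrapper is illustrative, not a framework type): keep the bit pattern in a long and route all access through Interlocked.

        using System;
        using System.Threading;

        class AtomicDouble
        {
            // The double's bits live in a long so Interlocked, which is
            // atomic even on 32-bit platforms, can read and write them.
            private long bits;

            public double Value
            {
                get { return BitConverter.Int64BitsToDouble(Interlocked.Read(ref bits)); }
                set { Interlocked.Exchange(ref bits, BitConverter.DoubleToInt64Bits(value)); }
            }
        }

    With this, the writer thread assigns Value and the reader thread always sees either the old or the new double, never a half-written one.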

  • Atomic INSERT/SELECT in HSQLDB

    - by PartlyCloudy
    Hello, I have the following HSQLDB table, in which I map UUIDs to auto-incremented IDs:

        SHORT_ID (BIGINT, PK, auto incremented) | UUID (VARCHAR, unique)

    Create command:

        CREATE TABLE table (SHORT_ID BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, UUID VARCHAR(36) UNIQUE)

    In order to add new pairs concurrently, I want to use the atomic MERGE INTO statement, so my (prepared) statement looks like this:

        MERGE INTO table USING (VALUES(CAST(? AS VARCHAR(36)))) AS v(x)
        ON ID_MAP.UUID = v.x
        WHEN NOT MATCHED THEN INSERT VALUES v.x

    When I execute the statement (setting the placeholder correctly), I always get:

        Caused by: org.hsqldb.HsqlException: row column count mismatch

    Could you please give me a hint about what is going wrong here? Thanks in advance.
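
    A plausible fix, as an untested sketch based on the schema above (using ID_MAP as the table name, matching the ON clause in the question): the INSERT branch supplies one value for a two-column table, which is what triggers the row column count mismatch. Naming the target column explicitly lets the identity column generate itself:

        MERGE INTO ID_MAP USING (VALUES(CAST(? AS VARCHAR(36)))) AS v(x)
        ON ID_MAP.UUID = v.x
        WHEN NOT MATCHED THEN INSERT (UUID) VALUES (v.x)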

  • How to move an element in a sorted list and keep the CouchDb write "atomic"

    - by karlthorwald
    I keep the elements of a list in CouchDB documents, one element per document. Let's say these are 3 elements in 3 documents:

        { "id" : "783587346", "type" : "aList", "content" : "joey", "sort" : 100.0 }
        { "id" : "358734ff6", "type" : "aList", "content" : "jill", "sort" : 110.0 }
        { "id" : "abf587346", "type" : "aList", "content" : "jack", "sort" : 120.0 }

    A view retrieves all "aList" documents and displays them sorted by "sort". Now I want to move the elements around. When I want to move "jack" to the middle, I can do it atomically in one write by changing its sort key to 105.0; the view then returns the documents in the new order. But after a lot of moves I could end up with sort keys like 50.99999 and 50.99998, and in extreme situations, after some years, run out of precision. What can you recommend - is there a better way to do this? I'd rather keep the elements in separate documents, since different users might edit different elements in parallel (which can also get tricky). Maybe there is a much better way?
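
    The worry is justified: a double has 52 mantissa bits, so repeatedly inserting into the same gap halves it each time and exhausts the precision after roughly 50 moves in one spot. A common remedy, sketched below in Python with illustrative names: keep assigning midpoints on each move, and renormalise the whole list to evenly spaced keys once a gap gets too small - accepting that the renormalise pass touches several documents and therefore is not atomic in CouchDB.

        def moved_sort_key(lo, hi):
            """Sort key for an element dropped between two neighbours."""
            return (lo + hi) / 2.0

        def renormalize(docs, gap=100.0):
            """Rewrite sort keys as 100.0, 200.0, ... once neighbours get
            too close. Each document is saved back individually."""
            for i, doc in enumerate(sorted(docs, key=lambda d: d["sort"])):
                doc["sort"] = (i + 1) * gap
            return docs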

  • JPA atomic query/save for multithreaded app

    - by TofuBeer
    I am in the midst of changing my JPA code around to make use of threads. I have a separate entity manager and transaction for each thread. What I used to have (for the single-threaded environment) was code like:

        // get object from the entity manager
        X x = getObjectX(jpaQuery);
        if (x == null)
        {
            x = new X();
            x.setVariable(foo);
            entityManager.persist(x);
        }

    With that code in the multithreaded environment I am getting duplicate keys since, I assume, getObjectX returns null for one thread, that thread is swapped out, the next thread calls getObjectX and also gets null, and then both threads create and persist a new X(). Short of adding synchronization, is there an atomic way to get/save-if-doesn't-exist a value with JPA, or should I rethink my approach? EDIT: I am using the latest EclipseLink and MySQL 5.1.
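
    JPA itself has no atomic get-or-create; the usual pattern is to put a unique constraint on the lookup column and treat the losing thread's insert failure as "somebody else created it". A sketch under that assumption (getObjectX is the question's helper, adapted to take the entity manager; the exception handling is illustrative):

        // assumes javax.persistence.{EntityManager, PersistenceException}
        public X findOrCreate(EntityManager em, String foo) {
            X x = getObjectX(em, foo);            // first look, no locking
            if (x != null) return x;
            try {
                em.getTransaction().begin();
                x = new X();
                x.setVariable(foo);
                em.persist(x);
                em.getTransaction().commit();     // unique constraint fires here on a race
                return x;
            } catch (PersistenceException raceLost) {
                if (em.getTransaction().isActive()) em.getTransaction().rollback();
                return getObjectX(em, foo);       // the winner's row is now visible
            }
        }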

  • How to disable cryptswap?

    - by mit
    How can I disable cryptswap? I would like an unencrypted swap like before. This is on an Ubuntu 9.10 system. Update: it worked. First I removed the cryptswap lines from /etc/fstab and /etc/crypttab. It was not possible (and maybe not necessary?) to use the command sudo cryptsetup remove cryptswap1 - before rebooting because cryptswap1 was still in use, and after rebooting because cryptswap1 was already inactive. I removed cryptsetup from the system afterwards: sudo aptitude remove cryptsetup
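
    To actually get an unencrypted swap back afterwards, a hedged sketch (the device name /dev/sda5 is an assumption - substitute the real swap partition, e.g. from sudo fdisk -l):

        sudo mkswap /dev/sda5    # re-initialise the partition as plain swap
        sudo swapon /dev/sda5    # enable it immediately
        # make it permanent with a line in /etc/fstab:
        # /dev/sda5  none  swap  sw  0  0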

  • Boot-Repair after messing up NTFS partition

    - by QuietThud
    I posted a question explaining what happened after I tried to create a new swap partition on a Windows/Ubuntu dual-boot machine. I have since created a live-boot USB and installed Boot-Repair. I had it "recommend repairs", after which it tried repairing the Wubi filesystems, which as far as I'm aware was not necessary. I'm not sure where to go from here. I don't care very much about backing up my files; I just want to be able to boot the machine. In the Advanced Options, the "Repair Windows boot files" box is uncheckable, and both GRUB tabs are unclickable (I do have GRUB installed). Here is my pastebin with the details from Boot-Repair. Please be as explicit as possible, as I am proving to be disproportionately bad at this type of task. Thank you! P.S. I keep seeing:

        cryptsetup: WARNING: failed to detect canonical device of overlayfs
        cryptsetup: WARNING: could not determine root device from /etc/fstab

    TestDisk: GPARTED:

  • Ubuntu swappiness

    - by Viswanath Kuchibhotla
    I have a laptop with 4 GB of RAM and an i3 processor. It runs very fast under Windows, but it keeps slowing down under Ubuntu when I use it continuously. I noticed that more than 500 MB of swap is in use even though only about 20% of the RAM is used, and I suspect this is the reason for the slowness. I have already set the swappiness value to 10. How else can I change it? I spend most of my time in Ubuntu, so this is very important to me.
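
    For reference, a sketch of checking and (re)applying the setting (the value 10 is the one from the question):

        cat /proc/sys/vm/swappiness     # value currently in effect
        sudo sysctl vm.swappiness=10    # apply a new value immediately
        # persist across reboots by adding this line to /etc/sysctl.conf:
        # vm.swappiness=10

    If the value already reads 10, the used swap may simply be pages that were pushed out earlier and never read back; sudo swapoff -a followed by sudo swapon -a forces them back into RAM (only safe while free RAM exceeds the swap in use).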

  • Installing Ubuntu 12.04 LTS alongside Windows XP on two different HDDs

    - by chachu
    I have two HDDs. The first HDD has four partitions, with Windows XP in the first partition. The second has two partitions and 15 GB of unallocated space. I tried to install Ubuntu 12.04 LTS on the second HDD using the unallocated space. I tried six times; each time it installed, but after the first restart it boots straight into Windows XP on the first HDD and no boot options appear. Every time, I found that the Ubuntu installation had used the unallocated space and made two partitions, one ext3 and the other Linux swap. I don't know what went wrong - during installation, Ubuntu detected Windows XP. Can anybody help me?
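
    A common cause of this symptom is GRUB landing in the second disk's MBR while the BIOS boots from the first. A hedged sketch of the usual fix, run from the installed Ubuntu (or a chroot into it from a live session); /dev/sda standing for the first HDD is an assumption:

        sudo grub-install /dev/sda   # put GRUB on the disk the BIOS actually boots
        sudo update-grub             # regenerate the menu; os-prober should add Windows XP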

  • Need advice about comparing NSDate

    - by RAGOpoR
    I'm developing an alarm clock and I want to compare the time now with the set time. Is it possible to compare in minutes only? My problem is that NSDate compares down to the second, so for example 9:38:50 is not equal to 9:38:00. How can I compare at minute granularity? Is it possible? Thank you for any advice.
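
    One approach, sketched below: truncate both dates to minute precision with NSCalendar before comparing (setTime is the question's variable; the calendar-unit constants are the old pre-iOS 8 names):

        NSCalendar *cal = [NSCalendar currentCalendar];
        NSUInteger units = NSYearCalendarUnit | NSMonthCalendarUnit | NSDayCalendarUnit
                         | NSHourCalendarUnit | NSMinuteCalendarUnit;   // no seconds

        // Rebuilding a date from these components zeroes the seconds.
        NSDate *nowToMinute = [cal dateFromComponents:
                                   [cal components:units fromDate:[NSDate date]]];
        NSDate *setToMinute = [cal dateFromComponents:
                                   [cal components:units fromDate:setTime]];

        if ([nowToMinute compare:setToMinute] == NSOrderedSame) {
            // the alarm time has been reached, to the minute
        }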

  • "How to: Multiple jQuery Image Swap Galleries on the Same Page?" / ASAP

    - by Robert
    I have multiple image galleries that use the following code (see the demo at the following link): http://www.sohtanaka.com/web-design/fancy-thumbnail-hover-effect-w-jquery/ However, when one clicks on the thumbnails of one gallery to change the large image preview, all the galleries change to that large preview as well. In short, the question is: how do I separate the galleries from each other, so that when the user clicks on the thumbnails of one specific gallery, only that gallery's large image preview changes and not any other?
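
    The usual cause is a selector that targets every large preview on the page. A sketch of scoping the swap to the clicked thumbnail's own container (the .gallery, .thumb and .large class names are assumptions - substitute whatever the tutorial markup actually uses):

        // Only touch the large preview inside the same gallery as the thumbnail.
        $('.gallery .thumb').click(function () {
            var $gallery = $(this).closest('.gallery');
            $gallery.find('img.large').attr('src', $(this).attr('href'));
            return false; // keep the link from navigating
        });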

  • Atomic int writes to a file

    - by Waneck
    Hello! I'm writing an application that will have to handle many concurrent accesses, from threads as well as from processes, so mutexes and locks should be kept to a minimum. To that end I'm designing the file to be "append-only": all data is first appended to disk, and then the address pointing to the info it updates is changed to refer to the new version. So I will need to implement a small locking scheme only for changing this one int so that it refers to the new address. What is the best way to do it? I was thinking about maybe putting a flag before the address which, when set, makes readers spin until it is released. But I'm afraid that isn't at all atomic, is it? For example: a reader reads the flag and sees it unset; at the same moment a writer sets the flag and changes the value of the int; the reader may read an inconsistent value! I'm looking for locking techniques, but all I find are either thread-locking techniques or ways to lock an entire file, not individual fields. Is it not possible to do this? How do append-only databases handle it? Thanks! Cauê
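
    Two things help here. On mainstream hardware a single aligned word-sized store is itself atomic, which is why append-only databases typically keep the root pointer in one aligned word and simply overwrite it. If the pointer must be wider than a word, a seqlock makes a torn read detectable instead of trying to prevent it. A sketch with C11 atomics (names illustrative); note it coordinates processes only if these variables live in memory they share, e.g. an mmap'ed file header:

        #include <stdatomic.h>

        static atomic_uint seq;    /* odd while a writer is mid-update */
        static atomic_long root;   /* file offset of the current record */

        void publish(long new_root) {
            atomic_fetch_add(&seq, 1);        /* becomes odd: update in progress */
            atomic_store(&root, new_root);
            atomic_fetch_add(&seq, 1);        /* becomes even: stable again */
        }

        long read_root(void) {
            unsigned s1, s2;
            long r;
            do {
                s1 = atomic_load(&seq);
                r  = atomic_load(&root);
                s2 = atomic_load(&seq);
            } while ((s1 & 1) || s1 != s2);   /* torn or concurrent: retry */
            return r;
        }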

  • Google App Engine atomic section?

    - by bokertov
    Hi, say you retrieve a set of records from the datastore with something like select * from MyClass where reserved='false'. How do I ensure that another user doesn't also grab a record while its reserved flag is still false? I've looked at the Transaction documentation and was shocked by Google's solution, which is to catch the exception and retry in a loop. Am I missing a solution? It's hard to believe there's no way to have an atomic operation in this environment. (BTW, I could use 'synchronized' inside the servlet, but I think that's not valid, as there's no way to ensure there's only one instance of the servlet object, is there? The same applies to a static-variable solution.) Any idea how to solve this? Here's Google's solution (http://code.google.com/appengine/docs/java/datastore/transactions.html#Entity_Groups):

        Key k = KeyFactory.createKey("Employee", "k12345");
        Employee e = pm.getObjectById(Employee.class, k);
        e.counter += 1;
        pm.makePersistent(e);

    "This requires a transaction because the value may be updated by another user after this code fetches the object, but before it saves the modified object. Without a transaction, the user's request will use the value of counter prior to the other user's update, and the save will overwrite the new value. With a transaction, the application is told about the other user's update. If the entity is updated during the transaction, then the transaction fails with an exception. The application can repeat the transaction to use the new data." THANKS!
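
    Optimistic retry really is the datastore's atomic primitive: the commit is the compare-and-swap, and the retry loop plays the role that blocking on a mutex plays elsewhere. A sketch of reserving a single record with the low-level datastore API (class, method and property names are illustrative):

        import com.google.appengine.api.datastore.*;
        import java.util.ConcurrentModificationException;

        class Reserver {
            // Flip reserved='false' to 'true' on one entity, retrying a few
            // times if a concurrent transaction commits first.
            static boolean tryReserve(DatastoreService ds, Key key) {
                for (int attempt = 0; attempt < 3; attempt++) {
                    Transaction txn = ds.beginTransaction();
                    try {
                        Entity e = ds.get(txn, key);
                        if (!"false".equals(e.getProperty("reserved"))) {
                            return false;             // someone else holds it
                        }
                        e.setProperty("reserved", "true");
                        ds.put(txn, e);
                        txn.commit();                 // throws if another txn won the race
                        return true;
                    } catch (ConcurrentModificationException lostRace) {
                        // loop and retry with fresh data
                    } catch (EntityNotFoundException gone) {
                        return false;
                    } finally {
                        if (txn.isActive()) txn.rollback();
                    }
                }
                return false;
            }
        }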

  • Cannot select a node here: the context item is an atomic value

    - by user348810
    When I execute this code it shows the following error: "Cannot select a node here: the context item is an atomic value", so I can't sum up the FUNDUNITS. What is the problem? Why can't I sum them up?

        <xsl:variable name="VAR_FUNDNAME" select="distinct-values(/SJPDATA/WEALTHSTAT[DOCUMENTTYPE=$MYDCTTYPE]/CLIENTINFO[CLIENTID=$MYCLIENT]/ancestor::*/PORTFOLIO/PENSIONS[CLIENTREF=$MYCLIENTTYPE][GROUPING=$MYGROUPINGVALUE]/PENSIONBREAKDOWN/FUNDNAME)"/>
        <xsl:for-each select="$VAR_FUNDNAME">
          <xsl:variable name="VAR_CURFUNDNAME" select="."/>
          <myvar><xsl:value-of select="$VAR_CURFUNDNAME"/></myvar>
          <xsl:if test="(/SJPDATA/WEALTHSTAT[DOCUMENTTYPE=$MYDCTTYPE]/CLIENTINFO[CLIENTID=$MYCLIENT]/ancestor::*/PORTFOLIO/PENSIONS[CLIENTREF=$MYCLIENTTYPE][GROUPING=$MYGROUPINGVALUE]/PENSIONBREAKDOWN[FUNDNAME=string($VAR_CURFUNDNAME)][UNITTYPE='Acc'])"/>
          <ASSETVALUATIONDATE><xsl:value-of select="min(/SJPDATA/WEALTHSTAT[DOCUMENTTYPE=$MYDCTTYPE]/CLIENTINFO[CLIENTID=$MYCLIENT]/ancestor::*/PORTFOLIO/PENSIONS[CLIENTREF=$MYCLIENTTYPE][GROUPING=$MYGROUPINGVALUE]/PENSIONBREAKDOWN[FUNDNAME=string($VAR_CURFUNDNAME)][UNITTYPE='Acc']/string(ASSETVALUATIONDATE))"/></ASSETVALUATIONDATE>
          <PLANNUMBER></PLANNUMBER>
          <FUNDNAME><xsl:value-of select="$VAR_CURFUNDNAME"/></FUNDNAME>
          <FUNDUNITS><xsl:value-of select="string(sum(/SJPDATA/WEALTHSTAT[DOCUMENTTYPE=$MYDCTTYPE]/CLIENTINFO[CLIENTID=$MYCLIENT]/ancestor::*/PORTFOLIO/PENSIONS[CLIENTREF=$MYCLIENTTYPE][GROUPING=$MYGROUPINGVALUE]/PENSIONBREAKDOWN[FUNDNAME=string($VAR_CURFUNDNAME)][UNITTYPE='Acc']/FUNDUNITS))"/></FUNDUNITS>
        </xsl:for-each>
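
    The cause: inside xsl:for-each over the result of distinct-values(), the context item is an atomic string, not a node, and every path expression - even an absolute one like /SJPDATA/... - needs a node context to locate the document root. A sketch of the usual fix: bind the root to a variable outside the loop and start every inner path from it (paths abbreviated to show only the relevant change):

        <xsl:variable name="doc" select="/"/>

        <xsl:for-each select="$VAR_FUNDNAME">
          <xsl:variable name="VAR_CURFUNDNAME" select="."/>
          <!-- every absolute path inside the loop now starts at $doc -->
          <FUNDUNITS>
            <xsl:value-of select="sum($doc/SJPDATA/WEALTHSTAT[DOCUMENTTYPE=$MYDCTTYPE]
                /CLIENTINFO[CLIENTID=$MYCLIENT]/ancestor::*/PORTFOLIO
                /PENSIONS[CLIENTREF=$MYCLIENTTYPE][GROUPING=$MYGROUPINGVALUE]
                /PENSIONBREAKDOWN[FUNDNAME=$VAR_CURFUNDNAME][UNITTYPE='Acc']/FUNDUNITS)"/>
          </FUNDUNITS>
        </xsl:for-each>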

  • Encode two integers into colour values and compare them in an HLSL shader

    - by Ben Slinger
    I am writing a 2D point-and-click adventure game in MonoGame, and I'd like to create an image mask for every room that defines which parts of the background a character can walk behind, and at which Y value a character needs to be for the background to be drawn above the character. I haven't done any shader work before, but after doing some reading I thought the following solution should work:

    1. Create a mask for the room with the different walk-behind areas painted in a colour that encodes the baseline Y value (Walk Behind Mask).
    2. Render all objects to a RenderTarget2D (Base Texture).
    3. Render all objects to a different RenderTarget2D, but changing every pixel of each object to a colour that encodes its Y value (Position Mask).
    4. Pass these textures into the shader and, for each pixel, compare the Position Mask colour against the Walk Behind Mask colour: if the Position Mask value is larger (thus lower on the screen and closer to the camera), draw the pixel from the Base Texture; otherwise draw a transparent pixel (letting the background show through).

    I've got it mostly working, but I'm having trouble packing the Y values into colours and retrieving them correctly in the shader. Here is how I'm doing it so far, when drawing to the Position Mask RenderTarget2D:

        Color posColor = new Color(((int)Position.Y >> 16) & 255,
                                   ((int)Position.Y >> 8) & 255,
                                   (int)Position.Y & 255);

    As far as I can tell, this takes the first 3 bytes of the position integer and encodes them into a 4-byte colour (ignoring the alpha as the 4th byte). This seems to work fine: when my character is at Y = 600, the resulting Color is {[Color: R=0, G=2, B=88, A=255, PackedValue=4283957760]}. I then have an area in my Walk Behind Mask that I only want the character displayed behind if his Y value is lower than 655, so I've painted it with R=0, G=2, B=143, A=255. Now, I think the shader is OK as well; here's what I have:

        sampler BaseTexture : register(s0);
        sampler MaskTexture : register(s1);
        sampler PositionTexture : register(s2);

        float4 mask( float2 coords : TEXCOORD0 ) : COLOR0
        {
            float4 color = tex2D(BaseTexture, coords);
            float4 maskColor = tex2D(MaskTexture, coords);
            float4 positionColor = tex2D(PositionTexture, coords);

            float maskCompare = (maskColor.r * pow(2,24)) + (maskColor.g * pow(2,16)) + (maskColor.b * pow(2,8));
            float positionCompare = (positionColor.r * pow(2,24)) + (positionColor.g * pow(2,16)) + (positionColor.b * pow(2,8));

            return positionCompare < maskCompare ? float4(0,0,0,0) : color;
        }

        technique Technique1
        {
            pass NoEffect
            {
                PixelShader = compile ps_3_0 mask();
            }
        }

    This isn't working, however: currently all characters are displayed behind the walk-behind area, regardless of their Y value. I tried printing some debug info by grabbing the pixel from both the Position Mask and the Walk Behind Mask under the current mouse position, and it seems like the colours aren't being rendered to the Position Mask correctly. When calculating the colour in the code above I get R=0, G=2, B=88, A=255, but when I mouse over my character I get R=0, G=0, B=30, A=255. Any ideas what I'm doing wrong? It seems like maybe I'm losing some information when rendering to the RenderTarget2D, but I'm not knowledgeable enough to figure out what's happening. Also, I should probably ask: is this an efficient way to do this? Will there be a performance impact?
    Edit: Whoops, it turns out there was a bug I'd introduced myself: I was drawing the Position Mask with a position colour left over from some early testing. So this solution is working perfectly, though I'm still interested in whether it is an efficient solution performance-wise.
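
    One packing detail worth noting for anyone debugging a scheme like this: tex2D returns each channel as a float in 0..1, so recovering the original integer means scaling by 255 before shifting. The shader above still compares correctly only because both sides of the comparison use the same, uniformly scaled formula. A sketch of an exact decode:

        // Recover the packed integer Y from an 8-bit-per-channel colour sample.
        float y = round(maskColor.r * 255) * 65536   // byte 2
                + round(maskColor.g * 255) * 256     // byte 1
                + round(maskColor.b * 255);          // byte 0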

  • CUDA compare arrays

    - by user315511
    Hello. I'm trying to make an app that will compare one reference bitmap against multiple other bitmaps; the result of each comparison should be a new bitmap containing the diffs. Maybe the bitmaps should be compared as textures rather than arrays? My biggest problem is making the kernel accept more than one input pointer, and how to compare the data. This works:

        extern "C" __global__ void compare(float *odata, float *idata, int width, int height)

    and the following does not (I call the function with enough params):

        extern "C" __global__ void compare(float *odata, float *idata, float *idata2, int width, int height)
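
    For what it's worth, extra pointer parameters are perfectly legal in CUDA; each one just needs its own cudaMalloc'd device buffer and its own argument slot at launch, so the failure is more likely in how the arguments are marshalled (with the driver API, argument offsets and sizes must match the signature exactly). A sketch of a two-input diff kernel:

        // Per-pixel absolute difference of two images: 0 where they match.
        extern "C" __global__
        void compare(float *odata, const float *idata, const float *idata2,
                     int width, int height)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x < width && y < height) {
                int i = y * width + x;
                odata[i] = fabsf(idata[i] - idata2[i]);
            }
        }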

  • Python - compare nested lists and append matches to new list?

    - by Seafoid
    Hi, I wish to compare two nested lists of unequal length. I am interested only in a match between the first element of each sublist. Should a match exist, I wish to add the match to another list for subsequent transformation into a tab-delimited file. Here is an example of what I am working with:

        x = [['1', 'a', 'b'], ['2', 'c', 'd']]
        y = [['1', 'z', 'x'], ['4', 'z', 'x']]
        match = []

        def find_match():
            for i in x:
                for j in y:
                    if i[1] == j[1]:
                        match.append(j)
            return match

    This results in a series of empty lists. Is it better to use tuples, and/or tuples of tuples, for the purposes of comparison? Any help is greatly appreciated. Regards, Seafoid.
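
    The empty result is explained by the indexing: the first element of a sublist is index 0, not 1 (at index 1, nothing in x matches y). A sketch of the fix, with the lists passed in rather than touched as globals:

        x = [['1', 'a', 'b'], ['2', 'c', 'd']]
        y = [['1', 'z', 'x'], ['4', 'z', 'x']]

        def find_match(a, b):
            """Sublists of b whose first element is the first element of some sublist of a."""
            firsts = set(sub[0] for sub in a)
            return [sub for sub in b if sub[0] in firsts]

        print(find_match(x, y))   # [['1', 'z', 'x']]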

  • swap in assembly

    - by lego69
    I wrote swap in assembly, but I'm not sure my code is right. This is the code:

        swap:   mov r1, -(sp)
                mov (sp), r1
                mov 2(sp), (sp)
                mov r1, 2(sp)
                mov (sp)+, r1
                rts pc

    swap receives a pointer from the stack.
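
    For reference, a hedged sketch (assuming PDP-11 and two words to swap sitting just above the return address pushed by jsr pc, swap): pushing r1 shifts every stack offset by 2, so after the save the arguments must be addressed at 4(sp) and 6(sp), not 2(sp) and 4(sp).

        swap:   mov r1, -(sp)      ; save r1; args now at 4(sp) and 6(sp)
                mov 4(sp), r1      ; first word into the scratch register
                mov 6(sp), 4(sp)   ; second word into the first slot
                mov r1, 6(sp)      ; saved first word into the second slot
                mov (sp)+, r1      ; restore r1
                rts pc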

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was then to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time.

    When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required.

    As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

        ---------------------------------------------------------------------
        | Id  | Operation             | Name                   | Bytes | Cost |
        ---------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        |  108K |  939 |
        |   1 | SORT ORDER BY         |                        |  108K |  939 |
        |   2 | NESTED LOOPS OUTER    |                        |  108K |  938 |
        |*  3 | HASH JOIN RIGHT OUTER |                        |  103K |  762 |
        |   4 | VIEW                  | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
        |* 20 | HASH JOIN RIGHT OUTER |                        | 73472 |  759 |
        |  21 | VIEW                  | ALL_EXTERNAL_TABLES    |  2097 |    3 |
        |* 34 | HASH JOIN RIGHT OUTER |                        | 39920 |  755 |
        |  35 | VIEW                  | ALL_MVIEWS             |    51 |    7 |
        |  58 | NESTED LOOPS OUTER    |                        | 39104 |  748 |
        |  59 | VIEW                  | ALL_TABLES             |  6704 |  668 |
        |  89 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2025 |    5 |
        | 106 | VIEW                  | ALL_PART_TABLES        |   277 |   11 |
        ---------------------------------------------------------------------

    And the same query on 9i:

        ---------------------------------------------------------------------
        | Id  | Operation             | Name                   | Bytes | Cost |
        ---------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        |   16P |  55G |
        |   1 | SORT ORDER BY         |                        |   16P |  55G |
        |   2 | NESTED LOOPS OUTER    |                        |   16P | 862M |
        |   3 | NESTED LOOPS OUTER    |                        | 5251G | 992K |
        |   4 | NESTED LOOPS OUTER    |                        | 4243M | 2578 |
        |   5 | NESTED LOOPS OUTER    |                        | 2669K | 1440 |
        |*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
        |   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
        |* 50 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2043 |      |
        |* 66 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_TABLES    | 1777K |      |
        |* 80 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_LOCATIONS | 1744K |      |
        |* 96 | VIEW                  | ALL_PART_TABLES        |  852K |      |
        ---------------------------------------------------------------------

    Have a look at the cost column. 10g's overall query cost is 939, and 9i is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.
    On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

        ---------------------------------------------------------------------
        | Id  | Operation             | Name                   | Bytes | Cost |
        ---------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        | 7587K | 3704 |
        |   1 | SORT ORDER BY         |                        | 7587K | 3704 |
        |*  2 | HASH JOIN OUTER       |                        | 7587K |  822 |
        |*  3 | HASH JOIN OUTER       |                        | 5262K |  616 |
        |*  4 | HASH JOIN OUTER       |                        | 2980K |  465 |
        |*  5 | HASH JOIN OUTER       |                        |  710K |  432 |
        |*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
        |   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
        |  50 | VIEW                  | ALL_PART_TABLES        |  852K |  104 |
        |  78 | VIEW                  | ALL_TAB_COMMENTS       |  2043 |   14 |
        |  93 | VIEW                  | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
        | 106 | VIEW                  | ALL_EXTERNAL_TABLES    | 1777K |   28 |
        ---------------------------------------------------------------------

    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input. To try and get some ideas, we asked some Oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure and a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed.

    All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner and name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need.
    This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for schemas with 10,000 tables, that means an extra 1.4 MB of memory being used during population). Next: fun with the 9i dictionary views.
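
    For illustration, a sketch (Python, names illustrative) of the client-side join shape described above: hash the base rows once on (owner, name), then stream each subsidiary dictionary view through the same table instead of re-hashing the base rows for every join.

        def client_side_left_joins(base_rows, subsidiary_views):
            """base_rows: dicts read from ALL_TABLES; subsidiary_views: maps a
            view name to its rows. Left-join semantics: unmatched base rows
            simply gain nothing."""
            by_key = {(r["owner"], r["table_name"]): r for r in base_rows}
            for view_name, rows in subsidiary_views.items():
                for row in rows:
                    base = by_key.get((row["owner"], row["table_name"]))
                    if base is not None:
                        base.setdefault("extra", {})[view_name] = row
            return list(by_key.values())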

  • How to compare two lists with duplicated items in one list?

    - by eladc
    I need to compare list_a against many other lists. My problem starts when there's a duplicated item in the other lists (two 'k's in other_b); my goal is to filter out the items that match list_a, respecting how many times each item appears there.

        list_a  = ['j', 'k', 'a', '7']
        other_b = ['k', 'j', 'k', 'q']
        other_c = ['k', 'k', '9', 'k']

        >>> filter(lambda x: not x in list_a, other_b)
        ['q']

    I need a way that would return ['k', 'q'], because 'k' appears only once in list_a. Comparing list_a and other_c with set() isn't good for my purpose either, since it returns only one element, 'k', while I need ['k', '9', 'k']. I hope I was clear enough. Thank you
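
    A sketch of a multiset-style difference that matches both desired outputs: each item in list_a can cancel at most as many occurrences as it has there, and the leftovers keep their original order.

        from collections import Counter

        list_a  = ['j', 'k', 'a', '7']
        other_b = ['k', 'j', 'k', 'q']
        other_c = ['k', 'k', '9', 'k']

        def leftovers(a, other):
            """Items of `other` not matched one-for-one by items of `a`."""
            remaining = Counter(a)
            out = []
            for item in other:
                if remaining[item] > 0:
                    remaining[item] -= 1   # this occurrence is matched by `a`
                else:
                    out.append(item)       # unmatched: keep it
            return out

        print(leftovers(list_a, other_b))  # ['k', 'q']
        print(leftovers(list_a, other_c))  # ['k', '9', 'k']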
