Search Results

Search found 3089 results on 124 pages for 'lock up'.


  • Should I make my MutexLock volatile?

    - by sje397
    I have some code in a function that goes something like this:

        void foo() {
            { // scope the locker
                MutexLocker locker(&mutex);
                // do some stuff..
            }
            bar();
        }

    The function call bar() also locks the mutex. I am having an issue where the program crashes (for someone else, who has not yet provided a stack trace or more details) unless the mutex lock inside bar() is disabled. Is it possible that some optimization is messing with the way I have scoped the locker instance, and if so, would making it volatile fix it? Is that a bad idea? Thanks.

    Read the article

  • How to configure Hibernate not to update @Version on each access to an entity

    - by radai
    I have a simple query that returns an entity, and when I look at Hibernate's SQL output I see that executing this query updates the @Version field (on each consecutive read the @Version field is updated). I don't modify anything in the entity I fetch, and I don't pass it as an argument to either persist or merge. This effectively means every read I make turns into a read+write. I've tried setting the lock mode to both NONE (JPA 2) and READ (JPA 1), to no avail. Is there any way to prevent this? If so, is there any way to set it as the default behavior in persistence.xml somehow? I'm using JPA 2 over Hibernate 3.6.

    Read the article

  • How to manipulate the shell output in PHP

    - by Mirage
    I am trying to write a PHP script which does some shell reporting functions. I am starting with a disk usage report. I want it in the following format, and nothing else:

        drive path    total-size    free-space

    My script is:

        $output = shell_exec('df -h -T');
        echo "<pre>$output</pre>";

    and its output is like below:

        Filesystem    Type      Size  Used Avail Use% Mounted on
        /dev/sda6     ext3       92G  6.6G   81G   8% /
        none          devtmpfs  3.9G  216K  3.9G   1% /dev
        none          tmpfs     4.0G  176K  4.0G   1% /dev/shm
        none          tmpfs     4.0G  1.1M  4.0G   1% /var/run
        none          tmpfs     4.0G     0  4.0G   0% /var/lock
        none          tmpfs     4.0G     0  4.0G   0% /lib/init/rw
        /dev/sdb1     ext3      459G  232G  204G  54% /media/Server
        /dev/sdb2     fuseblk   466G  254G  212G  55% /media/BACKUPS
        /dev/sda5     fuseblk   738G  243G  495G  33% /media/virtual_machines

    How can I convert that output into my formatted output?
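
    A minimal sketch of the parsing side, written in Python purely for illustration (the question itself is about PHP); the same split-each-line-and-keep-three-columns idea carries over directly to explode() or preg_split() on the result of shell_exec():

        import subprocess

        # Run df and keep only: mount point, total size, available space.
        # Sketch of the parsing idea, not a drop-in replacement for the PHP script.
        raw = subprocess.run(["df", "-h", "-T"], capture_output=True, text=True).stdout
        for line in raw.splitlines()[1:]:          # skip the header row
            cols = line.split()
            if len(cols) < 7:                      # skip malformed or wrapped lines
                continue
            fs, fstype, size, used, avail, use_pct, mount = cols[:7]
            print(f"{mount}\t{size}\t{avail}")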

    Read the article

  • No method error in controller create action

    - by user2799827
    I have read a number of Q&As on SO in search of some help on this but have so far not solved my issue. I am trying to teach myself ruby/rails, and as a test project, I want to create a list of tvshows and a list of characters where each tvshow has_many characters and each character belongs_to a specific show. I am sure I am doing something basic incorrectly. Any assistance would be greatly appreciated. here is the characters controller: class CharactersController < ApplicationController before_action :set_character, only: [:show, :edit, :update, :destroy] # GET /characters # GET /characters.json def index @characters = Character.all end # GET /characters/1 # GET /characters/1.json def show end # GET /characters/new def new @character = Character.new end # GET /characters/1/edit def edit end # POST /characters # POST /characters.json def create @character = @tvshow.characters.create(params[:character]) respond_to do |format| if @character.save format.html { redirect_to @character, notice: 'Character was successfully created.' } format.json { render action: 'show', status: :created, location: @character } else format.html { render action: 'new' } format.json { render json: @character.errors, status: :unprocessable_entity } end end end # PATCH/PUT /characters/1 # PATCH/PUT /characters/1.json def update respond_to do |format| if @character.update(character_params) format.html { redirect_to @character, notice: 'Character was successfully updated.' } format.json { head :no_content } else format.html { render action: 'edit' } format.json { render json: @character.errors, status: :unprocessable_entity } end end end # DELETE /characters/1 # DELETE /characters/1.json def destroy @character.destroy respond_to do |format| format.html { redirect_to characters_url } format.json { head :no_content } end end private # Use callbacks to share common setup or constraints between actions. def set_character @character = Character.find(params[:id]) end # Never trust parameters from the scary internet, only allow the white list through. def character_params params.require(:character).permit(:first_name, :last_name, :bio) end end character model: class Character < ActiveRecord::Base belongs_to :tvshow default_scope -> { order('created_at DESC') } validates :tvshow_id, presence: true end tvshow model: class Tvshow < ActiveRecord::Base has_many :characters, dependent: :destroy end error gets returned when I attempt to create a character. 
here is the full trace: app/controllers/characters_controller.rb:27:in `create' actionpack (4.0.0) lib/action_controller/metal/implicit_render.rb:4:in `send_action' actionpack (4.0.0) lib/abstract_controller/base.rb:189:in `process_action' actionpack (4.0.0) lib/action_controller/metal/rendering.rb:10:in `process_action' actionpack (4.0.0) lib/abstract_controller/callbacks.rb:18:in `block in process_action' activesupport (4.0.0) lib/active_support/callbacks.rb:413:in `_run__1211653665462320621__process_action__callbacks' activesupport (4.0.0) lib/active_support/callbacks.rb:80:in `run_callbacks' actionpack (4.0.0) lib/abstract_controller/callbacks.rb:17:in `process_action' actionpack (4.0.0) lib/action_controller/metal/rescue.rb:29:in `process_action' actionpack (4.0.0) lib/action_controller/metal/instrumentation.rb:31:in `block in process_action' activesupport (4.0.0) lib/active_support/notifications.rb:159:in `block in instrument' activesupport (4.0.0) lib/active_support/notifications/instrumenter.rb:20:in `instrument' activesupport (4.0.0) lib/active_support/notifications.rb:159:in `instrument' actionpack (4.0.0) lib/action_controller/metal/instrumentation.rb:30:in `process_action' actionpack (4.0.0) lib/action_controller/metal/params_wrapper.rb:245:in `process_action' activerecord (4.0.0) lib/active_record/railties/controller_runtime.rb:18:in `process_action' actionpack (4.0.0) lib/abstract_controller/base.rb:136:in `process' actionpack (4.0.0) lib/abstract_controller/rendering.rb:44:in `process' actionpack (4.0.0) lib/action_controller/metal.rb:195:in `dispatch' actionpack (4.0.0) lib/action_controller/metal/rack_delegation.rb:13:in `dispatch' actionpack (4.0.0) lib/action_controller/metal.rb:231:in `block in action' actionpack (4.0.0) lib/action_dispatch/routing/route_set.rb:80:in `call' actionpack (4.0.0) lib/action_dispatch/routing/route_set.rb:80:in `dispatch' actionpack (4.0.0) lib/action_dispatch/routing/route_set.rb:48:in `call' actionpack (4.0.0) lib/action_dispatch/journey/router.rb:71:in `block in call' actionpack (4.0.0) lib/action_dispatch/journey/router.rb:59:in `each' actionpack (4.0.0) lib/action_dispatch/journey/router.rb:59:in `call' actionpack (4.0.0) lib/action_dispatch/routing/route_set.rb:655:in `call' rack (1.5.2) lib/rack/etag.rb:23:in `call' rack (1.5.2) lib/rack/conditionalget.rb:35:in `call' rack (1.5.2) lib/rack/head.rb:11:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/params_parser.rb:27:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/flash.rb:241:in `call' rack (1.5.2) lib/rack/session/abstract/id.rb:225:in `context' rack (1.5.2) lib/rack/session/abstract/id.rb:220:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/cookies.rb:486:in `call' activerecord (4.0.0) lib/active_record/query_cache.rb:36:in `call' activerecord (4.0.0) lib/active_record/connection_adapters/abstract/connection_pool.rb:626:in `call' activerecord (4.0.0) lib/active_record/migration.rb:369:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/callbacks.rb:29:in `block in call' activesupport (4.0.0) lib/active_support/callbacks.rb:373:in `_run__2792846465963916895__call__callbacks' activesupport (4.0.0) lib/active_support/callbacks.rb:80:in `run_callbacks' actionpack (4.0.0) lib/action_dispatch/middleware/callbacks.rb:27:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/reloader.rb:64:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/remote_ip.rb:76:in `call' actionpack (4.0.0) 
lib/action_dispatch/middleware/debug_exceptions.rb:17:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/show_exceptions.rb:30:in `call' railties (4.0.0) lib/rails/rack/logger.rb:38:in `call_app' railties (4.0.0) lib/rails/rack/logger.rb:21:in `block in call' activesupport (4.0.0) lib/active_support/tagged_logging.rb:67:in `block in tagged' activesupport (4.0.0) lib/active_support/tagged_logging.rb:25:in `tagged' activesupport (4.0.0) lib/active_support/tagged_logging.rb:67:in `tagged' railties (4.0.0) lib/rails/rack/logger.rb:21:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/request_id.rb:21:in `call' rack (1.5.2) lib/rack/methodoverride.rb:21:in `call' rack (1.5.2) lib/rack/runtime.rb:17:in `call' activesupport (4.0.0) lib/active_support/cache/strategy/local_cache.rb:83:in `call' rack (1.5.2) lib/rack/lock.rb:17:in `call' actionpack (4.0.0) lib/action_dispatch/middleware/static.rb:64:in `call' railties (4.0.0) lib/rails/engine.rb:511:in `call' railties (4.0.0) lib/rails/application.rb:97:in `call' rack (1.5.2) lib/rack/lock.rb:17:in `call' rack (1.5.2) lib/rack/content_length.rb:14:in `call' rack (1.5.2) lib/rack/handler/webrick.rb:60:in `service' /Users/dariusgoore/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/webrick/httpserver.rb:138:in `service' /Users/dariusgoore/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/webrick/httpserver.rb:94:in `run' /Users/dariusgoore/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/webrick/server.rb:191:in `block in start_thread'

    Read the article

  • One account, multiple users, multiple shopping cart in a web application

    - by lemotdit
    I received a somewhat unusual request (imo) for a transactional web site. I have to implement the possibility of having multiple shopping carts for the same user. These really are shopping carts, not order templates. For example: a store with several departments ordering under the same account, but with a different person placing orders for a specific department only. Having more than one user per account is not an option, since it would involve 'too much' management from the store owners and the admins. Has anyone had to deal with this before? The option so far is to have names for each shopping cart, and a dropdown list or something alike after login to choose the cart, with some kind of 'busy flag' to lock the cart if it's in use in another session.

    Read the article

  • How can I kill MySQL queries every 60 seconds in Windows?

    - by Ethan Allen
    I want to check my MySQL server every minute and kill queries that have run longer than 150 seconds. The main reason I want to do this is that I don't want queries from certain people to lock up the DB for everyone else. I know this is not the ultimate solution to the problem, but at least it's a fallback in case something goes wrong with a query. I don't have a slave DB (this is just an at-home project). I'd like to schedule a script that does this for me. I'm unfamiliar with Perl and Ruby, and I need it done on my Windows 2008 Server box. I've looked into creating a simple command-line script, but that doesn't seem to be possible. I know I can currently do something like this, but I have to do it manually:

        mysqladmin processlist
        mysqladmin kill

    Does anyone have any ideas or examples of how I could do this?
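
    Not a finished tool, but a sketch of the kind of script that could be scheduled every minute with Windows Task Scheduler. It is written in Python purely as one option (the question only rules out Perl and Ruby), and the driver (PyMySQL) and the credentials are assumptions:

        import pymysql  # assumption: the PyMySQL driver is installed

        # Hypothetical credentials -- substitute the real ones.
        conn = pymysql.connect(host="localhost", user="root", password="secret")
        try:
            with conn.cursor() as cur:
                cur.execute("SHOW FULL PROCESSLIST")
                for row in cur.fetchall():
                    pid, command, seconds = row[0], row[4], row[5]
                    # Kill only long-running queries, not idle/sleeping connections.
                    if command == "Query" and seconds is not None and seconds > 150:
                        with conn.cursor() as killer:
                            killer.execute("KILL %s", (pid,))
        finally:
            conn.close()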

    Read the article

  • Fast way to pass a simple java object from one thread to another

    - by Adal
    I have a callback which receives an object. I make a copy of this object, and I must pass it on to another thread for further processing. It's very important for the callback to return as fast as possible. Ideally, the callback will write the copy to some sort of lock-free container. I only have the callback called from a single thread and one processing thread. I only need to pass a bunch of doubles to the other thread, and I know the maximum number of doubles (around 40). Any ideas? I'm not very familiar with Java, so I don't know the usual ways to pass stuff between threads.

    Read the article

  • Is this query safe in SQL Server?

    - by xaw
    I have this SQL update query:

        UPDATE table1
        SET table1.field1 = 1
        WHERE table1.id NOT IN (SELECT table2.table1id FROM table2);

    Other portions of the application can add records to table2, which use the field table1id to reference table1. The goal here is to remove records from table1 which aren't referenced by table2. Does SQL Server automatically lock table2 with this kind of query, so that a new record can't be added to table2 while this query is executing? I've also considered:

        UPDATE table1
        SET field1 = 1
        WHERE 0 = (SELECT COUNT(*) FROM table2 WHERE table1.id = table2.table1id);

    which seems possibly safer, but much slower (because a SELECT would be run for each row of table1 instead of just one SELECT for the NOT IN).

    Read the article

  • How should I store Dynamically Changing Data into Server Cache?

    - by Scott
    Hey all,

    EDIT: Purpose of this website: it's called Utopiapimp.com. It is a third-party utility for a game called utopia-game.com. The site currently has over 12k users and I run the site. The game is fully text based and will always remain that way. Users copy full pages of text from the game and paste the copied information into my site. I run a series of regular expressions against the pasted data and break it down. I then insert anywhere from 5 values to over 30 values into the DB based on that one paste, and I take those values and run queries against them to display the information back in a VERY simple and easy to understand way. The game is team based and each team has 25 users, so each team is a group and each row is ONE user's information. The users can update all 25 rows or just one row at a time. I need to store things in cache because the site is very slow, doing over 1,000 queries almost every minute.

    So here is the deal. Imagine I have an Excel spreadsheet with 100 columns and 5000 rows. Each row has two unique identifiers: one for the row itself and one to group together 25 rows apiece. There are about 10 columns in the row that will almost never change, and the other 90 columns will always be changing. We can say some will even change in a matter of seconds, depending on how fast the row is updated. Rows can also be added and deleted from the group, but not from the database. The rows are taken from about 4 queries from the database to show the most recent and updated data. So every time something in the database is updated, I would also like the row to be updated. If a row or a group has not been updated in 12 or so hours, it will be taken out of cache. Once the user calls the group again via the DB queries, it will be placed into cache. The above is what I would like; that is the wish.

    In reality, I still have all the rows, but the way I store them in cache is currently broken. I store each row in a class, and the classes are stored in the server cache via a HUGE list. When I go to update/delete/insert items in the list or rows, most of the time it works, but sometimes it throws errors because the cache has changed. I want to be able to lock down the cache, more or less the way the database puts a lock on a row. I have DateTime stamps to remove things after 12 hours, but this almost always breaks, because other users are updating the same 25 rows in the group or the cache has simply changed. This is an example of how I add items to cache; it only pulls the 10 or so columns that very rarely change, and it also removes rows not updated in the last 12 hours:

        DateTime dt = DateTime.UtcNow;
        if (HttpContext.Current.Cache["GetRows"] != null)
        {
            List<RowIdentifiers> pis = (List<RowIdentifiers>)HttpContext.Current.Cache["GetRows"];
            var ch = (from xx in pis
                      where xx.groupID == groupID
                      where xx.rowID == rowID
                      select xx).ToList();
            if (ch.Count() == 0)
            {
                var ck = GetInGroupNotCached(rowID, groupID, dt); // Pulling the group from the DB
                for (int i = 0; i < ck.Count(); i++)
                    pis.Add(ck[i]);
                pis.RemoveAll((x) => x.updateDateTime < dt.AddHours(-12));
                HttpContext.Current.Cache["GetRows"] = pis;
                return ck;
            }
            else
                return ch;
        }
        else
        {
            var pis = GetInGroupNotCached(rowID, groupID, dt); // Pulling the group from the DB
            HttpContext.Current.Cache["GetRows"] = pis;
            return pis;
        }

    On the last point, I remove items from the cache so the cache doesn't actually get huge. To re-post the question: what's a better way of doing this? Maybe with locks on the cache, and how would I put them there? Can I do better than this? I just want it to stop breaking when removing or adding rows.

    Read the article

  • Threading Problems in ActionScript 2.0?

    - by yar
    Is it possible to have concurrency problems (thread competition) in an onEnterFrame method in ActionScript 2.0? I have written this cheesy code as a guard:

        if (!busy) {
            // I suspect some threading problems: is that even possible in Flash?
            busy = true;
            movePanels();
            busy = false;
        }

    but this is no assurance against thread competition. If concurrency is possible, how can I do a basic semaphore/lock? Note: I suspect threading problems in my app, but if they're impossible, I'll check my code differently.

    Read the article

  • Mysql Master Slave Replication on Large Database table (how to sync initial data)

    - by Brian Lovett
    We have a production server and a dev server. We have found that backups are nearly impossible on the production server because of the query volume we experience. So we're looking at setting up replication with our dev server as the slave. This is ideal because we can afford to lock the tables on that server, and additionally it will be nice to have up-to-date data for the developers. Now, the issues: the production server can't really be taken down or locked at this point, at least not easily. We have a high query volume and fairly large 30+ GB InnoDB tables. Both servers are running all InnoDB and both are on MySQL 5.1. What can we do to sync the data initially to get replication started? I've tried a few options, but so far none have worked.

    Read the article

  • DispatcherOperation.Wait()

    - by Mark
    What happens if you call dispatcherOperation.Wait() on an operation that has already completed? Also, the docs say that it returns a DispatcherOperationStatus, but wouldn't that always be Completed, since it (supposedly) doesn't return until it's done? I was trying to use it like this:

        private void Update()
        {
            while (ops.Count > 0)
                ops.Dequeue().Wait();
        }

        public void Add(T item)
        {
            lock (sync)
            {
                if (dispatcher.CheckAccess())
                {
                    list.Add(item);
                    OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Add, item));
                }
                else
                {
                    ops.Enqueue(dispatcher.BeginInvoke(new Action<T>(Add), item));
                }
            }
        }

    I'm using this in WPF, so all the Add operations have to occur on the UI thread, but I figured I could basically just queue them up without having to wait for the thread switch, and then call Update() before any read operations to ensure that the list is up to date. However, my program started hanging.

    Read the article

  • Generating a set of files containing dumps of individual tables in a way that guarantees database consistency

    - by intuited
    I'd like to dump a MySQL database in such a way that a file is created for the definition of each table, and another file is created for the data in each table. I'd like this to be done in a way that guarantees database integrity by locking the entire database for the duration of the dump. What is the best way to do this? Similarly, what's the best way to lock the database while restoring a set of these dump files? Edit: I can't assume that MySQL will have permission to write to files.
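
    One possible shape for this, sketched in Python under stated assumptions (the PyMySQL driver, a mysqldump client binary on the machine running the script, and placeholder credentials): hold FLUSH TABLES WITH READ LOCK on one connection for the duration of the dump, and let the client, not the server, write one schema file and one data file per table:

        import subprocess
        import pymysql  # assumption: PyMySQL is installed

        CONN = dict(host="localhost", user="backup", password="secret", database="mydb")  # placeholders

        lock_conn = pymysql.connect(**CONN)
        cur = lock_conn.cursor()
        cur.execute("FLUSH TABLES WITH READ LOCK")   # writes are blocked while this connection stays open
        try:
            cur.execute("SHOW TABLES")
            tables = [row[0] for row in cur.fetchall()]
            base = ["mysqldump", "-h", CONN["host"], "-u", CONN["user"],
                    "-p" + CONN["password"], "--skip-lock-tables"]
            for t in tables:
                # Table definition only.
                with open(f"{t}.schema.sql", "w") as f:
                    subprocess.run(base + ["--no-data", CONN["database"], t], stdout=f, check=True)
                # Table data only.
                with open(f"{t}.data.sql", "w") as f:
                    subprocess.run(base + ["--no-create-info", CONN["database"], t], stdout=f, check=True)
        finally:
            cur.execute("UNLOCK TABLES")
            lock_conn.close()

    Because mysqldump streams to the client's stdout, the server itself never needs file-write permission, which matches the constraint in the edit above.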

    Read the article

  • Preventing dictionary attacks on a web application

    - by Kevin Pang
    What's the best way to prevent a dictionary attack? I've thought up several implementations but they all seem to have some flaw in them:

    - Lock out a user after X failed login attempts. Problem: easy to turn into a denial of service attack, locking out many users in a short amount of time.
    - Incrementally increase response time per failed login attempt on a username. Problem: dictionary attacks might use the same password but different usernames.
    - Incrementally increase response time per failed login attempt from an IP address. Problem: easy to get around by spoofing the IP address.
    - Incrementally increase response time per failed login attempt within a session. Problem: easy to get around by creating a dictionary attack that fires up a new session on each attempt.
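
    As an illustration only, here is a toy sketch (in Python, though the idea is framework-agnostic) of the second option above, an incremental delay per username; the in-memory dictionary and the verify callback are placeholders for whatever credential store the application actually uses:

        import time
        from collections import defaultdict

        # Toy in-memory throttle; a real deployment would keep the counter in a shared
        # store (DB/Redis) so it survives restarts and is visible to every web server.
        failed_attempts = defaultdict(int)          # username -> consecutive failures

        def check_login(username, password, verify):
            """verify(username, password) -> bool is the real credential check (assumed)."""
            if failed_attempts[username] > 0:
                delay = min(2 ** failed_attempts[username], 30)   # cap the penalty at 30 s
                time.sleep(delay)                   # slow down repeated guesses on this account
            if verify(username, password):
                failed_attempts[username] = 0       # success resets the counter
                return True
            failed_attempts[username] += 1
            return False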

    Read the article

  • Devise role based routing

    - by teknull
    I have an app with multiple users. Each user has a theoretical role (user, client, etc.). I've designed a view/controller for each user type. I want to be able to log each type of user in to a different root URL and lock them to it. Originally I was going to add a column called role to Users in Devise, so I can differentiate the users. The problem I'm having is how to say in routes.rb something like:

        if current_user.role == "client"
          root :to => 'controller#index'
        end

    Once they are logged in to the page, I also want to keep them from being able to visit any of my other paths, e.g. domain.com/calls or domain.com/units. I've been looking into CanCan to run alongside Devise, but I'm not sure if this is the answer.

    Read the article

  • How to detect an ActiveX component that does not respond any more?

    - by koschi
    My application is written in C++ and uses the Qt framework. I use the QAxWidget class to access an ActiveX component. Now I need some kind of mechanism that notifies my application each time the ActiveX component (1) has crashed or (2) does not respond any more (due to a deadlock or an infinite loop). (1) can easily be done by watching the external process of the ActiveX component, but maybe there is a more elegant approach? And how can (2) be implemented?

    Read the article

  • How do I prevent Window resizing when the Workstation is Locked then Unlocked?

    - by Terry
    We have an application that is run in multi-monitor environments. Users normally have the application dialog spread out to span multiple monitors. If the user locks the workstation and then unlocks it, our application is told to resize. Our users find this behavior frustrating, as they then spend some time restoring the previous layout. We're not yet sure whether it is the graphics driver requesting the resize or Windows; hopefully through this question it will become clearer which component is responsible. Popular applications like (File) Explorer and Firefox behave the same way in this setup. To replicate:

    - open Explorer (Win+E)
    - drag the Explorer window so it is horizontally larger than one screen
    - lock the workstation (Win+L), then unlock it
    - the application should now resize to being solely on one screen

    How do I prevent window resizing when the workstation is locked and then unlocked? Will we need to code in checks for (un)locking? Is there another mechanism we're not aware of?

    Read the article

  • Connection Timeout Linked Server

    - by Wade73
    I have created a linked server from one SQL Server 2005 instance to another. When I run an update query through SQL Server Management Studio (SSMS), it runs in under a second. If I run the query through an ASP web page, it times out. I ran SQL Profiler to see if I noticed anything, as well as the Activity Monitor in SSMS, and all I found was that a lock was being created (wait type LOCK_M_U), but I can't find what is locking it. Any help would be appreciated. Wade

    Read the article

  • .NET "Timer" would block other method calls?

    - by Ricky
    Hi guys: in ASP.NET 3.5, we suspect that a delegate triggered by a "Timer" will block other method calls. From the logs, some function calls wait for the delegate to finish before continuing to work. Is this true? If so, what workaround can I use? P.S. The delegate contains code that uses WCF to retrieve data, plus the following:

        private void Replace<T>(ref IList<T> src, IList<T> des)
        {
            lock (src)
            {
                while (src.Count > 0)
                {
                    GC.SuppressFinalize(src.ElementAt(0));
                    src.RemoveAt(0);
                }
                GC.SuppressFinalize(src);
                src = des;
            }
        }

    Thanks a lot.

    Read the article

  • Cocoa multithreading, locks don't work

    - by Igor
    I have a threadMethod which prints robotMotorsStatus to the console every 0.5 seconds. But when I try to change robotMotorsStatus in the changeRobotStatus method, I receive an exception. Where do I need to put locks in this program?

        #import "AppController.h"

        @implementation AppController

        extern char *robotMotorsStatus;

        - (IBAction)runThread:(id)sender {
            [self performSelectorInBackground:@selector(threadMethod) withObject:nil];
        }

        - (void)threadMethod {
            char string_to_send[] = "QFF001100\r"; // String prepared for sending to the port (first initialization)
            string_to_send[7] = robotMotorsStatus[0];
            string_to_send[8] = robotMotorsStatus[1];
            while (1) {
                [theLock lock];
                usleep(500000);
                NSLog(@"Robot status %s", robotMotorsStatus);
                [theLock unlock];
            }
        }

        - (IBAction)changeRobotStatus:(id)sender {
            robotMotorsStatus[0] = '1';
        }

    Read the article

  • [MySQL, InnoDb] Rating place

    - by Pavel
    I'm trying to generate a rating-place table using the following recipe: http://stackoverflow.com/questions/1776821/assign-places-in-the-rating-mysql-php, but my database is heavily loaded. I tried not to create the table, but to use a MEMORY table and update it with the following SQL query:

        insert into tops (uid) select uid from users order by exp desc;

    but got the following MySQL error:

        Deadlock found when trying to get lock; try restarting transaction

    because there are too many queries running while the SQL select is being executed. How do I solve this problem? P.S. CREATE TABLE tops AS SELECT works almost fine, except for the high server load (up to load average: 50) if tops is a non-MEMORY table. My table users has nearly 4.5 million rows. Thanks for any advice.

    Read the article

  • Why I'm permanently logged off my BE

    - by Fixus
    Hello, I have a problem with the BE in my system. I'm permanently and automatically logged out of it after some time: 10-15 seconds, sometimes faster, sometimes slower. It's not connected with being idle, because I can be logged off even while I'm checking the page tree or saving records. I've set lock IP to 1, but it didn't help. The problem is most common under Firefox or Chrome; what is strange is that under Internet Explorer I don't see it as often. Another strange thing is that I see it on my live version, but when I'm working on my local copy it does not occur. TYPO3 version 4.6.4.

    Read the article

  • How should I implement simple caches with concurrency on Redis?

    - by solublefish
    Background

    I have a 2-tier web service - just my app server and an RDBMS. I want to move to a pool of identical app servers behind a load balancer. I currently cache a bunch of objects in-process. I hope to move them to a shared Redis.

    I have a dozen or so caches of simple, small-sized business objects. For example, I have a set of Foos. Each Foo has a unique FooId and an OwnerId. One "owner" may own multiple Foos. In a traditional RDBMS this is just a table with an index on the PK FooId and one on OwnerId. I'm caching this in one process simply:

        Dictionary<int,Foo> _cacheFooById;
        Dictionary<int,HashSet<int>> _indexFooIdsByOwnerId;

    Reads come straight from here, and writes go here and to the RDBMS. I usually have this invariant: "For a given group [say by OwnerId], the whole group is in cache or none of it is." So when I cache miss on a Foo, I pull that Foo and all the owner's other Foos from the RDBMS. Updates make sure to keep the index up to date and respect the invariant. When an owner calls GetMyFoos I never have to worry that some are cached and some aren't.

    What I did already

    The first/simplest answer seems to be to use plain ol' SET and GET with a composite key and json value:

        SET( "ServiceCache:Foo:" + theFoo.Id, JsonSerialize(theFoo));

    I later decided I liked:

        HSET( "ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo));

    That lets me get all the values in one cache as HVALS. It also felt right - I'm literally moving hashtables to Redis, so perhaps my top-level items should be hashes. This works to first order. If my high-level code is like:

        UpdateCache(myFoo);
        AddToIndex(myFoo);

    That translates into:

        HSET ("ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo));
        var myFoos = JsonDeserialize( HGET ("ServiceCache:FooIndex", theFoo.OwnerId) );
        myFoos.Add(theFoo.OwnerId);
        HSET ("ServiceCache:FooIndex", theFoo.OwnerId, JsonSerialize(myFoos));

    However, this is broken in two ways. Two concurrent operations can read/modify/write at the same time. The latter "wins" the final HSET and the former's index update is lost. Another operation could read the index in between the first and second lines. It would miss a Foo that it should find.

    So how do I index properly?

    I think I could use a Redis set instead of a json-encoded value for the index. That would solve part of the problem since the "add-to-index-if-not-already-present" would be atomic. I also read about using MULTI as a "transaction" but it doesn't seem like it does what I want. Am I right that I can't really MULTI; HGET; {update}; HSET; EXEC since it doesn't even do the HGET before I issue the EXEC? I also read about using WATCH and MULTI for optimistic concurrency, then retrying on failure. But WATCH only works on top-level keys. So it's back to SET/GET instead of HSET/HGET. And now I need a new index-like-thing to support getting all the values in a given cache. If I understand it right, I can combine all these things to do the job. Something like:

        while(!succeeded)
        {
            WATCH( "ServiceCache:Foo:" + theFoo.FooId );
            WATCH( "ServiceCache:FooIndexByOwner:" + theFoo.OwnerId );
            WATCH( "ServiceCache:FooIndexAll" );
            MULTI();
            SET ("ServiceCache:Foo:" + theFoo.FooId, JsonSerialize(theFoo));
            SADD ("ServiceCache:FooIndexByOwner:" + theFoo.OwnerId, theFoo.FooId);
            SADD ("ServiceCache:FooIndexAll", theFoo.FooId);
            EXEC();
            //TODO somehow set succeeded properly
        }

    Finally I'd have to translate this pseudocode into real code depending how my client library uses WATCH/MULTI/EXEC; it looks like they need some sort of context to hook them together.

    All in all this seems like a lot of complexity for what has to be a very common case; I can't help but think there's a better, smarter, Redis-ish way to do things that I'm just not seeing.

    How do I lock properly?

    Even if I had no indexes, there's still a (probably rare) race condition:

        A: HGET - cache miss
        B: HGET - cache miss
        A: SELECT
        B: SELECT
        A: HSET
        C: HGET - cache hit
        C: UPDATE
        C: HSET
        B: HSET   ** this is stale data that's clobbering C's update.

    Note that C could just be a really-fast A. Again I think WATCH, MULTI, retry would work, but... ick. I know in some places people use special Redis keys as locks for other objects. Is that a reasonable approach here? Should those be top-level keys like ServiceCache:FooLocks:{Id} or ServiceCache:Locks:Foo:{Id}? Or make a separate hash for them - ServiceCache:Locks with subkeys Foo:{Id}, or ServiceCache:Locks:Foo with subkeys {Id}? How would I work around abandoned locks, say if a transaction (or a whole server) crashes while "holding" the lock?
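
    For reference, a rough sketch of how that WATCH/MULTI/EXEC retry loop tends to look with the redis-py client in Python (other clients expose the same pattern through a pipeline/transaction object); the key names come from the question, while the JSON encoding and the function shape are placeholders rather than a finished design:

        import json
        import redis

        r = redis.Redis()

        def cache_foo(foo_id, owner_id, foo_dict):
            foo_key = "ServiceCache:Foo:%s" % foo_id
            owner_key = "ServiceCache:FooIndexByOwner:%s" % owner_id
            all_key = "ServiceCache:FooIndexAll"
            with r.pipeline() as pipe:
                while True:
                    try:
                        # WATCH the keys; if anyone else writes them before EXEC,
                        # execute() raises WatchError and we retry.
                        pipe.watch(foo_key, owner_key, all_key)
                        pipe.multi()
                        pipe.set(foo_key, json.dumps(foo_dict))
                        pipe.sadd(owner_key, foo_id)
                        pipe.sadd(all_key, foo_id)
                        pipe.execute()
                        return   # succeeded
                    except redis.WatchError:
                        continue # someone touched a watched key; retry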

    Read the article

  • SQL Server (TSQL) - Is it possible to EXEC statements in parallel?

    - by Investor5555
    SQL Server 2008 R2. Here is a simplified example:

        EXECUTE sp_executesql N'PRINT ''1st '' + convert(varchar, getdate(), 126) WAITFOR DELAY ''000:00:10'''
        EXECUTE sp_executesql N'PRINT ''2nd '' + convert(varchar, getdate(), 126)'

    The first statement will print the date and delay 10 seconds before proceeding. The second statement should print immediately. The way T-SQL works, the second statement won't be evaluated until the first completes. If I copy and paste it into a new query window, it will execute immediately. The issue is that I have other, more complex things going on, with variables that need to be passed to both procedures. What I am trying to do is:

    - get a record
    - lock it for a period of time
    - while it is locked, execute some other statements against this record and the table itself

    Perhaps there is a way to dynamically create a couple of jobs? Anyway, I am looking for a simple way to do this without having to manually PRINT statements and copy/paste to another session. Is there a way to EXEC without waiting / in parallel?

    Read the article

  • Python Threading, loading one thread after another

    - by Michael
    Hi, I'm working on a media player and am able to load in a single .wav file and play it, as seen in the code below.

        foo = wx.FileDialog(self, message="Open a .wav file...",
                            defaultDir=os.getcwd(),
                            defaultFile="",
                            style=wx.FD_MULTIPLE)
        foo.ShowModal()
        queue = foo.GetPaths()
        self.playing_thread = threading.Thread(target=self.playFile, args=(queue[0], 'msg'))
        self.playing_thread.start()

    The problem is when I try to turn the above code into a loop for multiple .wav files, such that while playing_thread.isActive == True it creates and .start()s the thread, and if .isActive == False it pops queue[0] and loads the next .wav file. My UI will lock up and I'll have to terminate the program. Any ideas would be appreciated.
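
    A minimal sketch of one way to structure this, assuming (as in the question) that self.playFile blocks until a file finishes playing: push the selected paths onto a queue and let a single background worker play them one at a time, so the wx event loop never sits in a waiting loop:

        import threading
        import queue

        # Sketch: play files one after another on a single worker thread instead of
        # looping in the UI thread. play_func(path, 'msg') is assumed to block until
        # the file finishes playing, like self.playFile in the question.
        class PlaylistPlayer:
            def __init__(self, play_func):
                self._play = play_func
                self._paths = queue.Queue()
                self._worker = threading.Thread(target=self._run, daemon=True)
                self._worker.start()

            def enqueue(self, path):
                self._paths.put(path)          # called from the UI thread; returns immediately

            def _run(self):
                while True:
                    path = self._paths.get()   # blocks the worker, not the UI
                    self._play(path, 'msg')

    The FileDialog handler would then just call player.enqueue(p) for each selected path and return immediately; wx.CallAfter can be used if the worker needs to update widgets when a track finishes.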

    Read the article
