Search Results

Search found 2823 results on 113 pages for 'isolation frameworks'.


  • Subscription website architecture questions + SQL Server & .NET

    - by chopps
    Hey guys, I have a few questions about the architecture of a subscription service I am about to embark on, and I am looking for some feedback on how best to set it up. I won't have as large a customer base as Basecamp, maybe a few hundred customers, and was wondering what would be a solid architecture for setting up the customer sites. I'm running SQL Server and .NET on a dedicated machine. Should I create a new database for each customer, to have control and isolation of data, or keep them all in one database? I am also thinking of creating a sub-domain for each customer so modifications can be made to each site as needed. The customer URLs would look like this: https://customer1.foobar.com https://customer2.foobar.com I am going to have the ability to 'plug in' reports that will be uploaded to the site so each customer can customize as needed. Off the top of my head this necessitates having each sub-domain on its own code base for the uploading of these reports. So on the main site the customer would sign up for their new subscription, and I would programmatically create a new directory for the customer from the main code base, then create a sub-domain pointing to the new directory, and finally their database (a rough sketch of that provisioning flow follows). Does this sound about right? Am I on the right track? How do other such sites accomplish the same thing? Thanks for letting me bend your ear for a bit on this.
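    A rough sketch of that provisioning flow, purely illustrative - TenantProvisioner and every path and name in it are invented, and the DNS/IIS step in particular depends entirely on the hosting setup:

        using System.Data.SqlClient;
        using System.IO;

        // Hypothetical provisioning sketch -- none of these names come from a real
        // library; validate customerName strictly before using it in paths or SQL.
        public class TenantProvisioner
        {
            public void Provision(string customerName, string masterCodeBase,
                                  string sqlConnectionString)
            {
                // 1. Copy the shared code base into a per-customer directory.
                string tenantDir = Path.Combine(@"D:\sites", customerName);
                CopyDirectory(masterCodeBase, tenantDir);

                // 2. Create the customer's own database (concatenation shown only
                //    for brevity -- customerName must be whitelisted first).
                using (var conn = new SqlConnection(sqlConnectionString))
                using (var cmd = new SqlCommand(
                    "CREATE DATABASE [" + customerName + "_db]", conn))
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }

                // 3. Point customerN.foobar.com at tenantDir: wildcard DNS plus an
                //    IIS host-header binding is the usual approach; details vary by host.
            }

            private static void CopyDirectory(string source, string dest)
            {
                foreach (string file in Directory.GetFiles(source, "*",
                                                           SearchOption.AllDirectories))
                {
                    string target = Path.Combine(dest, file.Substring(source.Length + 1));
                    Directory.CreateDirectory(Path.GetDirectoryName(target));
                    File.Copy(file, target, true);
                }
            }
        }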

    Read the article

  • Running ASP / ASP.NET markup outside of a web application (perhaps with MVC)

    - by Frank Schwieterman
    Is there a way to include some aspx/ascx markup in a DLL and use that to generate text dynamically? I really just want to pass a model instance to a view and get the produced HTML as a string. Similar to what you might do with an XSLT transform, except the transform input is a CLR object rather than an XML document. A second benefit is using the ASP.NET code-behind markup, which is known by most team members. One way to achieve this would be to load the MVC view engine in-process and perhaps have it use an ASPX file from a resource. It seems like I could call into just the ViewEngine somehow and have it generate a ViewEngineResult (see the sketch below). I don't know ASP.NET MVC well enough though to know what calls to make. I don't think this would be possible with classic ASP or ASP.NET, as the control model is so tied to the page model, which doesn't exist in this case. Using something like SparkViewEngine in isolation would be cool too, though not as useful since other team members wouldn't know the syntax. At that point I might as well use XSLT (yes, I am looking for a clever way to avoid XSLT).
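    For reference, the usual in-process pattern for rendering a view to a string in ASP.NET MVC looks roughly like this. Note it still assumes a live ControllerContext (i.e. an active web request), so it does not by itself solve the out-of-web-application case being asked about:

        using System.IO;
        using System.Web.Mvc;

        public static class ViewRenderer
        {
            // Renders a named view to a string using the registered view engines.
            public static string RenderViewToString(ControllerContext context,
                                                    string viewName, object model)
            {
                context.Controller.ViewData.Model = model;
                using (var writer = new StringWriter())
                {
                    ViewEngineResult result =
                        ViewEngines.Engines.FindView(context, viewName, null);
                    var viewContext = new ViewContext(context, result.View,
                        context.Controller.ViewData, context.Controller.TempData, writer);
                    result.View.Render(viewContext, writer);
                    result.ViewEngine.ReleaseView(context, result.View);
                    return writer.ToString();
                }
            }
        }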

    Read the article

  • Dependency Injection and decoupling of software layers

    - by cs31415
    I am trying to implement Dependency Injection to make my app tester-friendly. I have a rather basic doubt. The data layer uses a SqlConnection object to connect to a SQL Server database, so SqlConnection is a dependency of the data access layer. In accordance with the laws of dependency injection, we must not new() dependent objects, but rather accept them through constructor arguments. Not wanting to upset the DI gods, I dutifully create a constructor in my DAL that takes in a SqlConnection. The business layer calls the DAL, so the business layer must therefore pass in the SqlConnection. The presentation layer calls the business layer, hence it, too, must pass the SqlConnection down. This is great for class isolation and testability. But didn't we just couple the UI and business layers to a specific implementation of the data layer, one which happens to use a relational database? Why do the presentation and business layers need to know that the underlying data store is SQL? What if the app needs to support data sources other than SQL Server (such as XML files, comma-delimited files, etc.)? Furthermore, what if I add another object on which my data layer depends (say, a second database)? Now I have to modify the upper layers to pass in this new object. How can I avoid this merry-go-round and reap all the benefits of DI without the pain?
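    The usual resolution, sketched below with invented names: the upper layers depend only on an abstraction, and only the composition root (where the container is configured) knows that one particular implementation needs a SqlConnection. Swapping in an XML-backed repository, or adding a second database behind the same interface, then touches nothing above the data layer:

        using System.Data.SqlClient;

        public class Customer { public int Id; public string Name; }

        // The only thing the business layer ever sees.
        public interface ICustomerRepository
        {
            Customer GetById(int id);
        }

        // SQL-specific detail lives here and in the composition root -- nowhere else.
        public class SqlCustomerRepository : ICustomerRepository
        {
            private readonly string _connectionString;
            public SqlCustomerRepository(string connectionString)
            {
                _connectionString = connectionString;
            }

            public Customer GetById(int id)
            {
                using (var conn = new SqlConnection(_connectionString))
                {
                    conn.Open();
                    // ... query and map; elided in this sketch ...
                    return new Customer { Id = id };
                }
            }
        }

        // Business layer: no SqlConnection in sight.
        public class CustomerService
        {
            private readonly ICustomerRepository _repository;
            public CustomerService(ICustomerRepository repository)
            {
                _repository = repository;
            }
            public Customer Load(int id) { return _repository.GetById(id); }
        }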

    Read the article

  • What guidelines should be followed when using an unstable/testing/stable branching scheme?

    - by Elliot
    My team is currently using feature branches while doing development. For each user story in our sprint, we create a branch and work it in isolation. Hence, according to Martin Fowler, we practice Continuous Building, not Continuous Integration. I am interested in promoting an unstable/testing/stable scheme, similar to that of Debian, so that code is promoted from unstable -> testing -> stable. Our definition of done, I'd recommend, is when unit tests pass (TDD always), minimal documentation is complete, automated functional tests pass, and the feature has been demoed and accepted by the PO. Once accepted by the PO, the story would be merged into the testing branch. Our test developers would spend most of their time in this branch, banging on the software and continuously running our automated tests. This scares me, however, because commits from another incomplete story may now make it into the testing branch. Perhaps I'm missing something, because this seems like an undesired consequence. So, if moving to a code promotion strategy to solve our problems with feature branches, what strategy/guidelines do you recommend? Thanks.

    Read the article

  • Working around "one executable per project" in Visual C# for many small test programs

    - by Kevin Ivarsen
    When working with Visual Studio in general (or Visual C# Express in my particular case), it looks like each project can be configured to produce only one output - e.g. a single executable or a library. I'm working on a project that consists of a shared library and a few applications, and I already have one project in my solution for each of those. However, during development I find it useful to write small example programs that run one small subsystem in isolation (at a level that doesn't belong in the unit tests). Is there a good way to handle this in Visual Studio? I'd like to avoid adding several dozen separate projects to my solution for each small test program I write, especially when these programs will typically be less than 100 lines of code. I'm hoping to find something that lets me continue to work in Visual Studio and use its build system (rather than moving to something like NAnt). I could foresee the answer being something like:

    - A way of setting this up in Visual Studio that I haven't found yet
    - A GUI like NUnit's graphical runner that searches an assembly for classes with defined Main() functions that you can select and run
    - A command line tool that lets you specify an assembly and a class with a Main function to run (a rough sketch of such a runner follows)
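    A minimal sketch of that third option - a console runner that reflects over an assembly and invokes the static Main of the named class. All names here are invented for illustration:

        using System;
        using System.Linq;
        using System.Reflection;

        // Hypothetical usage: MainRunner.exe Examples.dll SubsystemDemo [args...]
        class MainRunner
        {
            static void Main(string[] args)
            {
                Assembly assembly = Assembly.LoadFrom(args[0]);
                MethodInfo entryPoint = assembly.GetTypes()
                    .Where(t => t.Name == args[1])
                    .Select(t => t.GetMethod("Main",
                        BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic))
                    .First(m => m != null);

                // Pass remaining arguments through; assumes Main() or Main(string[]).
                object[] parameters = entryPoint.GetParameters().Length == 0
                    ? null
                    : new object[] { args.Skip(2).ToArray() };
                entryPoint.Invoke(null, parameters);
            }
        }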

    Read the article

  • Usage of Assert.Inconclusive

    - by Johannes Rudolph
    Hi, I'm wondering how one should use Assert.Inconclusive(). I'm using it when my unit test is about to fail for a reason other than the one it tests for. E.g. I have a method on a class that calculates the sum of an array of ints. On the same class there is also a method to calculate the average of the elements. It is implemented by calling Sum() and dividing by the length of the array. Writing a unit test for Sum() is simple. However, when I write a test for Average() and Sum() fails, Average() is likely to fail also. The failure of Average() is not explicit about the reason: it failed for a reason other than what it should test for. That's why I would first check whether Sum() returns the correct result, and otherwise call Assert.Inconclusive(). Is this considered good practice? What is Assert.Inconclusive intended for? Or should I rather solve the previous example by means of an isolation framework?
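    A sketch of the guard pattern being described, in MSTest terms; the Calculator class is invented for the example:

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        // Hypothetical class under test.
        public class Calculator
        {
            public int Sum(int[] xs) { int s = 0; foreach (var x in xs) s += x; return s; }
            public double Average(int[] xs) { return (double)Sum(xs) / xs.Length; }
        }

        [TestClass]
        public class CalculatorTests
        {
            [TestMethod]
            public void Average_ReturnsSumDividedByLength()
            {
                var calc = new Calculator();
                var input = new[] { 1, 2, 3 };

                // Guard: if the method Average() is built on is broken, this test
                // cannot say anything meaningful about Average() itself.
                if (calc.Sum(input) != 6)
                    Assert.Inconclusive("Sum() is broken; Average() cannot be judged here.");

                Assert.AreEqual(2.0, calc.Average(input));
            }
        }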

    Read the article

  • 'LINQ query plan' horribly inefficient but 'Query Analyser query plan' is perfect for same SQL!

    - by Simon_Weaver
    I have a LINQ to SQL query that generates the following SQL:

        exec sp_executesql N'SELECT COUNT(*) AS [value]
        FROM [dbo].[SessionVisit] AS [t0]
        WHERE ([t0].[VisitedStore] = @p0)
          AND (NOT ([t0].[Bot] = 1))
          AND ([t0].[SessionDate] > @p1)',
        N'@p0 int,@p1 datetime',
        @p0=1, @p1='2010-02-15 01:24:00'

    (This is the actual SQL taken from SQL Profiler on SQL Server 2008.) The query plan generated when I run this SQL from within Query Analyser is perfect: it uses an index containing VisitedStore, Bot, SessionDate, and the query returns instantly. However, when I run this from C# (with LINQ), a different query plan is used that is so inefficient it doesn't even return in 60 seconds. This query plan is trying to do a key lookup on the clustered primary key, which contains a couple million rows. It has no chance of returning. What I just can't understand is that the EXACT same SQL is being run - either from within LINQ or from within Query Analyser - yet the query plan is different. I've run the two queries many, many times, and they're now running in isolation from any other queries. The date is DateTime.Now.AddDays(-7), but I've even hardcoded that date to eliminate caching problems. Is there anything I can change in LINQ to SQL to affect the query plan, or how can I debug this further? I'm very, very confused!

    Read the article

  • Dealing with SQLException with Spring, Hibernate & Postgres

    - by mad
    Hi, I'm working on a project using HibernateDaoSupport for my DAOs, with Spring, spring-ws, Hibernate and Postgres, which will be used in a national application (meaning a lot of users). Currently, every exception from Hibernate is automatically translated into a specific Spring DataAccessException. I have a table with a keyword column in the database and a unique constraint on the keywords: no duplicate keywords are allowed. I have found two ways to deal with that in the insert DAO: 1) Check for the duplicate manually (with a select) prior to doing the insert. This means the Spring transaction needs the SERIALIZABLE isolation level. The obvious drawback is that we now have two queries for a simple insert. Advantage: independent of the database. 2) Let the insert go through, catch the SQLException, and convert it to a user-friendly message and error code for the final consumer of our web services. For solution 2, Spring has developed a way to translate specific exceptions into customized exceptions; see http://www.oracle.com/technology/pub/articles/marx_spring.html. In my case I would get a ConstraintViolationException. Ideally I would like to write a custom SQLExceptionTranslator to map the duplicate-word constraint in the database to a DuplicateWordException. But I can have many unique constraints on the same table, so I have to inspect the message of the SQLException to find the name of the constraint declared in the CREATE TABLE ("uq_duplicate-constraint", for example). Now I have a strong dependency on the database. Thanks in advance for your answers, and excuse me for my poor English (it is not my mother tongue).

    Read the article

  • Are bad data issues that common?

    - by Water Cooler v2
    I've worked for clients that had a large number of distinct, small to mid-sized projects, each interacting with the others via properly defined interfaces to share data, but not reading and writing to the same database. Each had its own separate database, its own cache, and its own file servers/systems that it had dedicated access to, and so they never caused any problems. One of these clients is a mobile content vendor, so they're lucky in a way that they do not have to face the same problems that everyday business applications do. They can create all those separate compartments where their components happily live in isolation from the others. However, for many business applications this is not possible. I've worked with a few clients, for one of whose applications I now do production support, where there are "bad data issues" on an hourly basis. Yeah, it's that crazy. Some data script run a couple of weeks ago on one of the instances (lower than production, of course) will turn out to have corrupted some other user's data. And then another data script has to be written to fix the issue. I've seen this happen so much with this client that I have to ask. I've seen it happen at a moderate rate with other clients, but this one just seems to be out of order. If you're working with business applications that share a large amount of data by reading and writing to/from the same database, are "bad data issues" that common in your environment?

    Read the article

  • Building "isolated" and "automatically updated" caches (java.util.List) in Java.

    - by Aidos
    Hi guys, I am trying to write a framework which contains a lot of short-lived caches created from a long-lived cache. These short-lived caches need to be able to return their entire contents, which is a clone of the original long-lived cache. Effectively what I am trying to build is a level of transaction isolation for the short-lived caches. The user should be able to modify the contents of the short-lived cache, but changes to the long-lived cache should not be propagated through (there is also a case where the changes should be pushed through, depending on the cache type). I will do my best to try and explain:

        master-cache contains: [A,B,C,D,E,F]
        temporary-cache created with state [A,B,C,D,E,F]
        1) temporary-cache adds item G: [A,B,C,D,E,F,G]
        2) temporary-cache removes item B: [A,C,D,E,F,G]
           master-cache still contains: [A,B,C,D,E,F]
        3) master-cache adds items [X,Y,Z]: [A,B,C,D,E,F,X,Y,Z]
           temporary-cache still contains: [A,C,D,E,F,G]

    Things get even harder when the values in the items can change and shouldn't always be updated (so I can't even share the underlying object instances; I need to use clones). I have implemented the simple approach of just creating a new instance of the List using the standard Collection constructor on ArrayList; however, when you get out to about 200,000 items the system just runs out of memory. I know 200,000 is an excessive number of items to iterate, but I am trying to stress my code a bit. I had thought that I might be able to somehow "proxy" the list, so the temporary-cache uses the master-cache and stores all of its changes (effectively a Memento for each change); however, that quickly becomes a nightmare when you want to iterate the temporary-cache or retrieve an item at a specific index. Also, given that I want some modifications to the contents of the list to come through (depending on whether the temporary-cache is "auto-update" or not), I get completely out of my depth. Any pointers to techniques or data structures or just general concepts to research will be greatly appreciated. Cheers, Aidos

    Read the article

  • Modify SQL result set before returning from stored procedure

    - by m0sa
    I have a simple table in my SQL Server 2008 DB:

        Tasks_Table
        - id
        - task_complete
        - task_active
        - column_1
        - ...
        - column_N

    The table stores instructions for uncompleted tasks that have to be executed by a service. I want to be able to scale my system in the future. Until now, only one service on one computer read from the table. I have a stored procedure that selects all uncompleted and inactive tasks. As the service begins to process tasks, it updates the task_active flag in all the returned rows. To enable scaling of the system I want to allow deployment of the service on more machines. Because I want to prevent a task being returned to more than one service, I have to update the stored procedure that returns uncompleted and inactive tasks. I figured that I have to lock the table (only one reader at a time - I know I have to use an appropriate ISOLATION LEVEL) and update the task_active flag in each row of the result set before returning the result set. So my question is: how do I modify the SELECT result set in the stored procedure before returning it?
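    One way to express "claim and return in a single statement" on SQL Server 2008 is UPDATE ... OUTPUT, sketched here against the table above from a .NET caller. The READPAST hint (letting a competing service skip rows another has just locked, rather than queueing on them) is an assumption about the desired behaviour, not something stated in the question:

        using System.Data.SqlClient;

        public static class TaskQueue
        {
            // Claim a batch atomically so no two service instances ever get the same row.
            private const string ClaimTasksSql = @"
                UPDATE TOP (10) Tasks_Table WITH (ROWLOCK, UPDLOCK, READPAST)
                SET    task_active = 1
                OUTPUT inserted.id, inserted.column_1
                WHERE  task_complete = 0
                  AND  task_active = 0;";

            public static void ClaimAndProcess(string connectionString)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(ClaimTasksSql, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            int taskId = reader.GetInt32(0); // assumes id is int; this row now belongs to this service only
                            // ... hand taskId to the worker ...
                        }
                    }
                }
            }
        }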

    Read the article

  • Java sockets: multiple client threads on same port on same machine?

    - by espcorrupt
    I am new to socket programming in Java and was trying to understand whether the code below is a wrong thing to do. My question is: can I have multiple clients, each on its own thread, trying to connect to a server instance in the same program, and expect the server to read and write data with isolation between clients?

        public class Client extends Thread {
            ...
            void run() {
                Socket socket = new Socket("localhost", 1234);
                doIO(socket);
            }
        }

        public class Server extends Thread {
            ...
            void run() {
                // serverSocket is bound to "localhost", 1234;
                // a real server would call accept() in a loop
                Socket clientSock = serverSocket.accept();
                executor.execute(new ClientWorker(clientSock));
            }
        }

    Now can I have multiple Client instances on different threads trying to connect on the same port of the current machine? For example:

        Server s = new Server("localhost", 1234);
        s.start();
        Client[] c = new Client[10];
        for (int i = 0; i < c.length; ++i) {
            c[i] = new Client();
            c[i].start();
        }

    Read the article

  • Uncommitted reads in SSIS

    - by OldBoy
    I'm trying to debug some legacy Integration Services code, and really want some confirmation on what I think the problem is. We have a very large data task inside a control flow container. This control flow container is set up with TransactionOption = Supported - i.e. it will "inherit" transactions from parent containers, but none are set up here. Inside the data flow there is a call to a stored proc that writes to a table, with pseudocode something like: "if a record doesn't exist that matches these parameters, then write it". Now, the issue is that three records are being passed into this proc, all with the same parameters, so logically the first record doesn't find a match and a record is created. The second record (with the same parameters) also doesn't find a match, and another record is created. My understanding is that the first record passed to the proc in the dataflow is uncommitted and therefore can't be "read" by the second call. The upshot being that all three records create a row, when logically only the first should. In this scenario, am I right in thinking that it is the uncommitted transaction that stops the second call from seeing the first? Even setting the isolation level on the container doesn't help, because it's not being wrapped in a transaction anyway... Hope that makes sense, and any advice gratefully received. Work-arounds confer god-like status on you.

    Read the article

  • How to understand existing projects

    - by John
    Hi. I am a trainee developer and have been writing .NET applications for about a year now. Most of the work I have done has involved building new applications (mainly web apps) from scratch, and I have been given more or less full control over the software design. This has been a great experience; however, as a trainee developer my confidence that the approaches I have taken are the best is minimal. Ideally I would love to collaborate with more experienced developers (I find this the best way I learn), but in the company I work for, developers tend to work in isolation (a great shame for me). Recently I decided that a good way to learn more about how experienced developers approach their design might be to explore some open source projects. I found myself a little overwhelmed by the projects I looked at. With my level of experience it was hard to understand the body of code I faced. My question is a slightly fuzzy one: how do developers approach the task of understanding a new medium-to-large-scale project? I found myself poring over lots of code and struggling to see the wood for the trees. At any one time I felt that I could understand a small portion of the system but not see how it all fits together. Do others get this same feeling? If so, what approaches do you take to understanding the project? Do you have any other advice about how to learn design best practices? Any advice will be very much appreciated. Thank you.

    Read the article

  • SQL Server race condition issue with range lock

    - by Freek
    I'm implementing a queue in SQL Server (please, no discussions about this) and am running into a race condition issue. The T-SQL of interest is the following:

        set transaction isolation level serializable
        begin tran

        declare @RecordId int
        declare @CurrentTS datetime2
        set @CurrentTS = CURRENT_TIMESTAMP

        select top 1 @RecordId = Id
        from QueuedImportJobs with (updlock)
        where Status = @Status
          and (LeaseTimeout is null or @CurrentTS > LeaseTimeout)
        order by Id asc

        if @@ROWCOUNT > 0
        begin
            update QueuedImportJobs
            set LeaseTimeout = DATEADD(mi, 5, @CurrentTS),
                LeaseTicket = newid()
            where Id = @RecordId

            select * from QueuedImportJobs where Id = @RecordId
        end

        commit tran

    Id is the PK, and there is also an index on (Status, LeaseTimeout). What I'm basically doing is selecting a record whose lease happens to be expired, while simultaneously pushing the lease time out five minutes and setting a new lease ticket. So the problem is that I'm getting deadlocks when I run this code in parallel from a couple of threads. I've debugged it up to the point where I found out that the update statement sometimes gets executed twice for the same record. Now, I was under the impression that with (updlock) should prevent this (it also happens with xlock, by the way, though not with tablockx). So it actually looks like there is a RangeS-U and a RangeX-X lock on the same range of records, which ought to be impossible. So what am I missing? I'm thinking it might have something to do with the top 1 clause, or that SQL Server does not know that where Id = @RecordId is actually in the locked range?

    Deadlock graph:
    Table schema (simplified):

    Read the article

  • Can a Snapshot transaction fail and only partially commit in a TransactionScope?

    - by Travis Brooks
    Greetings. I stumbled onto a problem today that seems sort of impossible to me, but it's happening... I'm calling some database code in C# that looks something like this:

        using (var tran = MyDataLayer.Transaction())
        {
            MyDataLayer.ExecSproc(new SprocTheFirst(arg1, arg2));
            MyDataLayer.CallSomethingThatEventuallyDoesLinqToSql(arg1, argEtc);
            tran.Commit();
        }

    I've simplified this a bit for posting, but what's going on is that MyDataLayer.Transaction() makes a TransactionScope with the IsolationLevel set to Snapshot and TransactionScopeOption set to Required. This code gets called hundreds of times a day and almost always works perfectly. However, after reviewing some data I discovered there are a handful of records created by SprocTheFirst but no corresponding data from CallSomethingThatEventuallyDoesLinqToSql. The only way records should exist in the tables I'm looking at is from SprocTheFirst, and it's only ever called in this one function, so if it was called and succeeded then I would expect CallSomethingThatEventuallyDoesLinqToSql to be called and succeed as well, because it's all in the same TransactionScope. It's theoretically possible that some other dev mucked around in the DB, but I don't think they have. We also log all exceptions, and I can find nothing unusual happening around the time the records from SprocTheFirst were created. So, is it possible that a transaction, or more properly a declarative TransactionScope, with Snapshot isolation level can fail somehow and only partially commit?
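    For reference, the described MyDataLayer.Transaction() presumably builds its scope along these lines (a hedged reconstruction; the real data layer is not shown in the question, and the fact that the caller invokes tran.Commit() rather than Complete() suggests a thin wrapper around the scope):

        using System.Transactions;

        public static class MyDataLayerSketch
        {
            // Hedged reconstruction of what MyDataLayer.Transaction() is described as doing.
            public static TransactionScope Transaction()
            {
                var options = new TransactionOptions
                {
                    IsolationLevel = IsolationLevel.Snapshot
                };
                // Required: join an ambient transaction if present, otherwise start one.
                return new TransactionScope(TransactionScopeOption.Required, options);
            }
        }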

    Read the article

  • What is the fastest way to get a DataTable into SQL Server?

    - by John Gietzen
    I have a DataTable in memory that I need to dump straight into a SQL Server temp table. After the data has been inserted, I transform it a little bit and then insert a subset of those records into a permanent table. The most time-consuming part of this operation is getting the data into the temp table. Now, I have to use temp tables, because more than one copy of this app runs at once, and I need a layer of isolation until the actual insert into the permanent table happens. What is the fastest way to do a bulk insert from a C# DataTable into a SQL temp table? I can't use any 3rd party tools for this, since I am transforming the data in memory. My current method is to create a parameterized SqlCommand:

        INSERT INTO #table (col1, col2, ... col200)
        VALUES (@col1, @col2, ... @col200)

    and then, for each row, clear and set the parameters and execute. There has to be a more efficient way. I'm able to read and write the records on disk in a matter of seconds...
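    SqlBulkCopy (in System.Data.SqlClient, so no third-party tooling) can write a DataTable in one streamed operation. A sketch, assuming the temp table is created on the same connection the bulk copy uses, since a #temp table is only visible to the connection that created it:

        using System.Data;
        using System.Data.SqlClient;

        public static class BulkLoader
        {
            public static void LoadIntoTempTable(string connectionString, DataTable dataTable)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();

                    // The #temp table must be created on this same connection to stay visible.
                    using (var create = new SqlCommand(
                        "CREATE TABLE #table (col1 int, col2 nvarchar(50) /* ... col200 */)", conn))
                    {
                        create.ExecuteNonQuery();
                    }

                    using (var bulk = new SqlBulkCopy(conn))
                    {
                        bulk.DestinationTableName = "#table";
                        bulk.BatchSize = 5000;          // tune for your row sizes
                        bulk.WriteToServer(dataTable);  // one stream, not one round trip per row
                    }

                    // Transform and INSERT ... SELECT into the permanent table here,
                    // still on the same connection, before conn closes.
                }
            }
        }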

    Read the article

  • Is it possible to persist two profiles with the Profile Provider Model?

    - by NickGPS
    I have a website that needs to store two sets of user data in separate data stores. The first set of data is used by the SiteCore CMS and holds information about the user. The second set is used by a personalisation application that stores its own user data. The reason they aren't stored together in the same profile object is that the personalisation application is used across multiple websites, not all of which use SiteCore. I am able to create multiple profile providers - I currently have one from SiteCore and a custom provider that I have written, and these both work in isolation. The problem arises when you try to configure both in the same web.config file. It seems you can only specify a single Profile object in the web.config file, rather than one for each provider. This means that regardless of which provider is being used, the .NET Framework sends through the Profile object that is specified in the "inherits" parameter in the profile section of the web.config file. My questions are: Is it possible to specify a different Profile object for each profile provider? If so, how and where is this specified? Thanks, Nick

    Read the article

  • Executing a process in Windows Server 2003 and IIS 6 from code - permissions error

    - by kurupt_89
    I have a problem executing a process from our testing server. On my localhost, using Windows XP and IIS 5.1, I changed the machine.config file to have the line - I then changed the login for IIS to log on as the local system account and allowed the server to interact with the desktop. This fixed my problem executing a process from code in XP. When using the same method on Windows Server 2003 (using IIS 6 isolation mode) the process does not get executed. Here is the code to execute the process (I have tested the inputs to IECapt through the command line and an image is generated):

        public static void GenerateImageToDisk(string ieCaptPath, string url, string path, int delay)
        {
            url = FixUrl(url);
            ieCaptPath = FixPath(ieCaptPath);
            string arguments = @"--url=""{0}"" --out=""{1}"" --min-width=0 --delay={2}";
            arguments = string.Format(arguments, url, path, delay);

            ProcessStartInfo ieCaptProcessStartInfo =
                new ProcessStartInfo(ieCaptPath + "IECapt.exe");
            ieCaptProcessStartInfo.RedirectStandardOutput = true;
            ieCaptProcessStartInfo.WindowStyle = ProcessWindowStyle.Hidden;
            ieCaptProcessStartInfo.UseShellExecute = false;
            ieCaptProcessStartInfo.Arguments = arguments;
            ieCaptProcessStartInfo.WorkingDirectory = ieCaptPath;

            Process ieCaptProcess = Process.Start(ieCaptProcessStartInfo);
            ieCaptProcess.WaitForExit(600000);
            ieCaptProcess.Close();
        }

    Read the article

  • Constant isolation of hovered elements

    - by nailer
    I'm trying to make an element isolation tool, where:

    - All elements are shaded
    - Selected elements, while hovered, are not shaded

    Originally, looking at the image lightbox implementations, I thought of appending an overlay to the document, then raising the z-index of elements upon hover. However, this technique does not work in this case, as the overlay blocks additional mouse hovers:

        $(function() {
            window.alert('started');
            $('<div id="overlay" />').hide().appendTo('body').fadeIn('slow');
            $("p").hover(
                function () { $(this).css({"z-index": 5}); },
                function () { $(this).css({"z-index": 0}); }
            );
        });

    Alternatively, jQuery Tools has an 'expose' and 'mask' tool, which I have tried with the code below:

        $(function() {
            $("a").click(function() {
                alert("Hello world!");
            });
            // Mask whole page
            $(document).mask("#222");
            // Mask and expose on hover / unhover
            $("p").hover(
                function () { $(this).expose(); },
                function () { $(this).mask(); }
            );
        });

    Hovering does not work unless I disable the initial page masking. Any thoughts on how best to achieve this, with plain jQuery, jQuery Tools expose, or some other technique? Thank you!

    Read the article

  • Query with UDF works in Access but gives Undefined function in expression (Err 3085) in Excel

    - by ronwest
    I have an Access table with a date/time field. I wanted to make a composite key field out of the date/time field and three other text fields, in the same format as the matching key field in another database. So I concatenated the three text fields and wrote a user-defined function in a module to output the date field as a string in the format "YYYYMMDD":

        Public Function YYYYMMDD(dteDate As Date) As String
            YYYYMMDD = Format(dteDate, "YYYYMMDD")
        End Function

    I can then successfully run my queries in Access and it all works fine. But when I set up some DAO code in Excel and try to run the query that works fine within Access...

        db.Execute "qryMake_tblValsDailyAccount"

    ...Excel gives me the "Undefined function in expression. (Error 3085)" error. To me this is a bug in Excel and/or Access, because the (Excel) client shouldn't need to know anything about the internal calculations that normally take place perfectly in the (Access) server when run in isolation. Excel should send the querydef (name with no parameters) to the server, let the server do its work, then receive the answers. Why does it need to get involved with a function internal to the server? Does anyone know a way around this?

    Read the article

  • Integration testing - can it be done right?

    - by Max
    I used TDD as a development style on some projects in the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program? What I am currently doing is writing a test case per class (this is my rule of thumb: a "unit" is a class, and each class has one or more test cases). I try to resolve dependencies by using mocks and stubs, and this works really well as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test whether the wiring was successful and the objects interact the way I want? An example: think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids, and then iterates over the records and writes them as a string to an outfile. To make it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation. What I would do in order to test the "real" application: make the HTTP request (either manually or automated) with some ids from the database and then look in the filesystem to see whether the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book I recently read about unit testing, it seemed to me that integration testing was treated as more of an anti-pattern.
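    One common shape for such a wiring test, sketched in NUnit terms with invented names (ApplicationBootstrapper, Resolve and Export are placeholders for whatever the real composition root and controller expose); it is an integration test precisely because it exercises the production wiring rather than mocks:

        using System.IO;
        using NUnit.Framework;

        [TestFixture]
        public class ExportWiringTests
        {
            [Test]
            public void Controller_WritesFetchedRecordsToOutfile()
            {
                // Build the container exactly the way production does (hypothetical API).
                var container = ApplicationBootstrapper.BuildContainer();
                var controller = container.Resolve<Controller>(); // fails fast on bad wiring

                controller.Export(new[] { 1, 2, 3 }, "out.txt");

                Assert.That(File.ReadAllText("out.txt"), Is.Not.Empty);
            }
        }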

    Read the article

  • While in a transaction, how can reads to an affected row be prevented until the transaction is done?

    - by Mahn
    I'm fairly sure this has a simple solution, but I haven't been able to find it so far. Given an InnoDB MySQL database with the isolation level set to SERIALIZABLE, and the following operation:

        BEGIN WORK;
        SELECT * FROM users WHERE userID=1;
        UPDATE users SET credits=100 WHERE userID=1;
        COMMIT;

    I would like to make sure that as soon as the SELECT inside the transaction is issued, the row corresponding to userID=1 is locked for reads until the transaction is done. As it stands now, UPDATEs to this row will wait for the transaction to finish if it is in process, but SELECTs will simply read the previous value. I understand this is the expected behaviour in this case, but I wonder if there is a way to lock the row in such a way that SELECTs will also wait until the transaction is finished before returning their values. The reason I'm looking for this is that at some point, with enough concurrent users, it could happen that while the previous transaction is in process someone else reads the "credits" value to calculate something else. Ideally the code run by that someone else should wait for the transaction to finish and use the new value, because otherwise it could lead to irreversible desync issues. Note that I don't want to lock the entire table for reads, just the specific row. Also, I could add a boolean "locked" field to the tables and set it to 1 every time I start a transaction, but I don't really feel this is the most elegant solution here, unless there is absolutely no other way to handle this through MySQL directly.

    Read the article

  • Difference between SET autocommit=1 and START TRANSACTION in mysql (Have I missed something?)

    - by tkolar
    Hey there, I am reading up on transactions in MySQL and am not sure whether I have grasped something specific correctly, so here goes. I know what a transaction is supposed to do; I'm just not sure whether I understood the statement semantics or not. So, my question is: is anything wrong (and, if so, what) with the following?

    - By default, autocommit mode is enabled in MySQL.
    - SET autocommit=0; will begin a transaction, while SET autocommit=1; will implicitly commit.
    - It is possible to COMMIT; as well as ROLLBACK;, in both of which cases autocommit is still set to 0 afterwards (and a new transaction is implicitly started).
    - START TRANSACTION; will basically SET autocommit=0; until a COMMIT; or ROLLBACK; takes place.

    In other words, START TRANSACTION; and SET autocommit=0; are equivalent, except that with START TRANSACTION; the autocommit setting is implicitly restored after the COMMIT; or ROLLBACK;. If that is the case, I don't understand http://dev.mysql.com/doc/refman/5.5/en/set-transaction.html#isolevel_serializable - seeing as having an isolation level implies that there is a transaction, meaning that autocommit should be off anyway? And if there is another difference (other than the one described above) between beginning a transaction and setting autocommit, what is it? Thanks a lot in advance for your help!

    Read the article

  • Cannot exclude a path from basic auth when using a front controller script

    - by Adam Monsen
    I have a small PHP/Apache2 web application wherein I'd like to do two seemingly incompatible operations:

    1. Route all requests through a single PHP script (a "front controller", if you will)
    2. Secure everything except API calls with HTTP basic authentication

    I can satisfy either requirement just fine in isolation; it's when I try to do both at once that I am blocked. For no good reason I'm trying to accomplish these requirements solely with Apache configuration. Here are the requirements stated as an example. A GET request for this URL:

        http://basic/api/listcars?max=10

    should be sent through front.php without requiring basic auth. front.php will get /api/listcars?max=10 and do whatever it needs to with that. Here's what I think should work. In my /etc/hosts I added 127.0.0.1 basic and I am using this Apache config:

        <Location />
            AuthType Basic
            AuthName "Home Secure"
            AuthUserFile /etc/apache2/passwords
            require valid-user
        </Location>

        <VirtualHost *:80>
            ServerName basic
            DocumentRoot /var/www/basic
            <Directory /var/www/basic>
                <IfModule mod_rewrite.c>
                    RewriteEngine On
                    RewriteCond %{SCRIPT_FILENAME} !-f
                    RewriteCond %{SCRIPT_FILENAME} !-d
                    RewriteRule ^(.*)$ /front.php/$1 [QSA,L]
                </IfModule>
            </Directory>
            <Location /api>
                Order deny,allow
                Allow from all
                Satisfy any
            </Location>
        </VirtualHost>

    But I still always get an HTTP 401: Authorization Required response. I can make it work by changing <Location /api> into <Location ~ /api>, but this allows more than I want past basic auth. I also tried changing the <Directory /var/www/basic> section into <Location />, but this doesn't work either (and it results in some strange values for PATH_TRANSLATED being passed to the script). I searched around and found many examples of selective exclusion of basic auth, but none that also incorporated a front controller. I could certainly handle basic auth in the front controller itself, but if I can have Apache do it instead, I'll be able to keep all authentication logic out of my PHP code. A friend suggested splitting this into two vhosts, which I know also works. This used to be two separate vhosts, actually. I'm using Apache 2.2.22 / PHP 5.3.10 on Ubuntu 12.04.

    Read the article
