Search Results

Search found 3635 results on 146 pages for 'concurrent collections'.

Page 104/146 | < Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >

  • What can I use to journal writes to a file system?

    - by Dmitry
    Hello all, I need to track all writes to files so that I can keep a synchronized copy of them somewhere else (a server or just another directory; it does not matter which). The assumptions are: (1) all files live in the same directory, and it is fine to create extra system files (e.g. SomeFileName.Ext~temp-data); (2) nobody has concurrent access to the synced directory, so no one can corrupt our meta-files or change the real files before we apply the postponed writes (the commits); (3) there is no need to recover "local" changes after a crash, since the system can simply be rolled back to the "server" state by copying from it; (4) it is important that this is transparent to use, so the programmer just calls the ordinary fopen(), read(), write(). It must be guaranteed that the copy of the files held by the "server" is consistent, i.e. that the whole set of files existed together at some moment in time. The copy may be quite outdated, but it must be a fair snapshot of all the files at a single point in time. As I understand it, I should wrap the write logic to collect data so the changes can be sent to the "server", for example by writing to a temporary File~tmp, and I also have to wrap reads so the program still sees the current contents of a file. It would be great if you could suggest an existing library (Java or C++, it does not matter) or an approach (customizing a VCS?), or give hints on how I should write it myself. Edit: after some reading I have more precise requirements: I need a COW (copy-on-write) wrapper around fopen(), fwrite(), etc., or an interceptor (hook) for WriteFile() and the other file system API calls. A log-structured file system in userspace would be an alternative too.
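
    A minimal Java sketch of the copy-on-write idea described above (the poster says Java or C++ is fine): writes go to a "<name>~temp-data" sibling file, and commit() atomically swaps it in and appends the file name to a change journal that a sync job could replay against the "server" copy. The class and file names are illustrative, not from any existing library.

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.*;

        // Copy-on-write writer: all writes land in "<name>~temp-data"; commit()
        // publishes the new version atomically and records the change for later sync.
        public class CowWriter {
            private final Path target;
            private final Path temp;
            private final Path journal;

            public CowWriter(Path target, Path journal) throws IOException {
                this.target = target;
                this.temp = target.resolveSibling(target.getFileName() + "~temp-data");
                this.journal = journal;
                if (Files.exists(target)) {
                    // Start the working copy from the current contents.
                    Files.copy(target, temp, StandardCopyOption.REPLACE_EXISTING);
                }
            }

            public void write(byte[] data) throws IOException {
                Files.write(temp, data, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            }

            public void commit() throws IOException {
                // An atomic rename makes the new version visible in one step, so a
                // reader (or the sync job) never sees a half-written file.
                Files.move(temp, target, StandardCopyOption.REPLACE_EXISTING,
                           StandardCopyOption.ATOMIC_MOVE);
                // Record which file changed so the postponed "commit to server" can replay it.
                Files.write(journal,
                            (target.getFileName() + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
                            StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            }
        }

    Making this transparent to ordinary fopen()/write() callers would still need an interception layer (LD_PRELOAD on Linux, or hooking WriteFile() on Windows); the sketch only shows the journal-and-swap mechanics.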

    Read the article

  • How to convert this procedural programming to object-oriented programming?

    - by manus91
    I have a source code that is needed to be converted by creating classes, objects and methods. So far, I've just done by converting the initial main into a separate class. But I don't know what to do with constructor and which variables are supposed to be private. This is the code : import java.util.*; public class Card{ private static void shuffle(int[][] cards){ List<Integer> randoms = new ArrayList<Integer>(); Random randomizer = new Random(); for(int i = 0; i < 8;) { int r = randomizer.nextInt(8)+1; if(!randoms.contains(r)) { randoms.add(r); i++; } } List<Integer> clonedList = new ArrayList<Integer>(); clonedList.addAll(randoms); Collections.shuffle(clonedList); randoms.addAll(clonedList); Collections.shuffle(randoms); int i=0; for(int r=0; r < 4; r++){ for(int c=0; c < 4; c++){ cards[r][c] = randoms.get(i); i++; } } } public static void play() throws InterruptedException { int ans = 1; int preview; int r1,c1,r2,c2; int[][] cards = new int[4][4]; boolean[][] cardstatus = new boolean[4][4]; boolean gameover = false; int moves; Scanner input = new Scanner(System.in); do{ moves = 0; shuffle(cards); System.out.print("Enter the time(0 to 5) in seconds for the preview of the answer : "); preview = input.nextInt(); while((preview<0) || (preview>5)){ System.out.print("Invalid time!! Re-enter time(0 - 5) : "); preview = input.nextInt(); } preview = 1000*preview; System.out.println(" "); for (int i =0; i<4;i++){ for (int j=0;j<4;j++){ System.out.print(cards[i][j]); System.out.print(" "); } System.out.println(""); System.out.println(""); } Thread.sleep(preview); for(int b=0;b<25;b++){ System.out.println(" "); } for(int r=0;r<4;r++){ for(int c=0;c<4;c++){ System.out.print("*"); System.out.print(" "); cardstatus[r][c] = false; } System.out.println(""); System.out.println(" "); } System.out.println(""); do{ do{ System.out.print("Please insert the first card row : "); r1 = input.nextInt(); while((r1<1) || (r1>4)){ System.out.print("Invalid coordinate!! Re-enter first card row : "); r1 = input.nextInt(); } System.out.print("Please insert the first card column : "); c1 = input.nextInt(); while((c1<1) || (c1>4)){ System.out.print("Invalid coordinate!! Re-enter first card column : "); c1 = input.nextInt(); } if(cardstatus[r1-1][c1-1] == true){ System.out.println("The card is already flipped!! Select another card."); System.out.println(""); } }while(cardstatus[r1-1][c1-1] != false); do{ System.out.print("Please insert the second card row : "); r2 = input.nextInt(); while((r2<1) || (r2>4)){ System.out.print("Invalid coordinate!! Re-enter second card row : "); r2 = input.nextInt(); } System.out.print("Please insert the second card column : "); c2 = input.nextInt(); while((c2<1) || (c2>4)){ System.out.print("Invalid coordinate!! Re-enter second card column : "); c2 = input.nextInt(); } if(cardstatus[r2-1][c2-1] == true){ System.out.println("The card is already flipped!! 
Select another card."); } if((r1==r2)&&(c1==c2)){ System.out.println("You can't select the same card twice!!"); continue; } }while(cardstatus[r2-1][c2-1] != false); r1--; c1--; r2--; c2--; System.out.println(""); System.out.println(""); System.out.println(""); for(int r=0;r<4;r++){ for(int c=0;c<4;c++){ if((r==r1)&&(c==c1)){ System.out.print(cards[r][c]); System.out.print(" "); } else if((r==r2)&&(c==c2)){ System.out.print(cards[r][c]); System.out.print(" "); } else if(cardstatus[r][c] == true){ System.out.print(cards[r][c]); System.out.print(" "); } else{ System.out.print("*"); System.out.print(" "); } } System.out.println(" "); System.out.println(" "); } System.out.println(""); if(cards[r1][c1] == cards[r2][c2]){ System.out.println("Cards Matched!!"); cardstatus[r1][c1] = true; cardstatus[r2][c2] = true; } else{ System.out.println("No cards match!!"); } Thread.sleep(2000); for(int b=0;b<25;b++){ System.out.println(""); } for(int r=0;r<4;r++){ for(int c=0;c<4;c++){ if(cardstatus[r][c] == true){ System.out.print(cards[r][c]); System.out.print(" "); } else{ System.out.print("*"); System.out.print(" "); } } System.out.println(""); System.out.println(" "); } System.out.println(""); System.out.println(""); System.out.println(""); gameover = true; for(int r=0;r<4;r++){ for( int c=0;c<4;c++){ if(cardstatus[r][c]==false){ gameover = false; break; } } if(gameover==false){ break; } } moves++; }while(gameover != true); System.out.println("Congratulations, you won!!"); System.out.println("It required " + moves + " moves to finish it."); System.out.println(""); System.out.print("Would you like to play again? (1=Yes / 0=No) : "); ans = input.nextInt(); }while(ans == 1); } } The main class is: import java.util.*; public class PlayCard{ public static void main(String[] args) throws InterruptedException{ Card game = new Card(); game.play(); } } Should I simplify the Card class by creating other classes? Through this code, my javadoc has no constructtor. So i need help on this!
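
    One way to start the split being asked about, sketched below: move the grid and its rules into a Board class whose constructor does the set-up (which also gives the javadoc a documented constructor), and keep the console loop, prompts and replay logic in the existing Card/PlayCard classes. The names and method signatures are suggestions only, and the shuffle is a simplified equivalent of the original.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        // Holds the 4x4 grid and the matching rules; the constructor does the
        // shuffling, so the class has an explicit, documentable constructor.
        class Board {
            private final int[][] cards = new int[4][4];
            private final boolean[][] flipped = new boolean[4][4];

            /** Creates a shuffled 4x4 board containing each value 1..8 exactly twice. */
            Board() {
                List<Integer> values = new ArrayList<>();
                for (int v = 1; v <= 8; v++) {
                    values.add(v);
                    values.add(v);
                }
                Collections.shuffle(values);
                int i = 0;
                for (int r = 0; r < 4; r++) {
                    for (int c = 0; c < 4; c++) {
                        cards[r][c] = values.get(i++);
                    }
                }
            }

            boolean isFlipped(int row, int col) { return flipped[row][col]; }

            int valueAt(int row, int col) { return cards[row][col]; }

            boolean isMatch(int r1, int c1, int r2, int c2) { return cards[r1][c1] == cards[r2][c2]; }

            void flip(int row, int col) { flipped[row][col] = true; }

            boolean isGameOver() {
                for (boolean[] row : flipped) {
                    for (boolean f : row) {
                        if (!f) {
                            return false;
                        }
                    }
                }
                return true;
            }
        }

    The play() loop would then read coordinates from the Scanner, call board.valueAt()/flip()/isMatch(), and print the grid; the printing itself could become a third small class if play() still feels too large.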

    Read the article

  • Parallel.ForEach loop creating multiple DB connections throws connection errors?

    - by shawn.mek
    "Login failed. The login is from an untrusted domain and cannot be used with Windows authentication." I wanted to get my code running in parallel, so I changed my foreach loop to a Parallel.ForEach loop. It seemed simple enough: each iteration connects to the database, looks up some data, performs some logic, adds some data, and closes the connection. But I get the above error. I'm using my local SQL Server and Entity Framework (each iteration uses its own context). Is there some problem with connecting multiple times using the same local login, or something similar? How do I get around this? Before trying to convert to a Parallel.ForEach loop, I had split the list of objects I was looping over into four groups (separate CSV files) and run four concurrent instances of my program, which ran faster overall than just one (hence the idea of going parallel). So it seems that connecting to the DB shouldn't be the problem? Any ideas? EDIT: Here's the before: var gtgGenerator = new CustomGtgGenerator(); var connectionString = ConfigurationManager.ConnectionStrings["BioEntities"].ConnectionString; var allAccessionsFromObs = _GetAccessionListFromDataFiles(collectionId); foreach (var cloneIdAndAccessions in allAccessionsFromObs) DoWork(gtgGenerator, taxonId, organismId, cloneIdAndAccessions, connectionString); and the after: var gtgGenerator = new CustomGtgGenerator(); var connectionString = ConfigurationManager.ConnectionStrings["BioEntities"].ConnectionString; var allAccessionsFromObs = _GetAccessionListFromDataFiles(collectionId); Parallel.ForEach(allAccessionsFromObs, cloneIdAndAccessions => DoWork(gtgGenerator, taxonId, organismId, cloneIdAndAccessions, connectionString)); Inside DoWork I use the BioEntities context: using (var bioEntities = new BioEntities(connectionString)) {...}

    Read the article

  • Java: design for using many executors services and only few threads

    - by Guillaume
    I need to run multiple threads in parallel to perform some tests. My 'test engine' has n tests to perform, each doing k sub-tests, and each test result is stored for later use, so I have n*k tasks that can run concurrently. I'm trying to figure out how to use the Java concurrency tools efficiently. Right now I have one executor service at the test level and n executor services at the sub-test level. I create my list of Callables for the test level; each test callable then creates another list of callables for the sub-test level, and when invoked, a test callable subsequently invokes all of its sub-test callables (test 1 runs sub-tests a1 to k1, ..., test n runs sub-tests an to kn). The call sequence is: the test manager creates the test callables 1 to n; each test callable creates its sub-test callables; the test manager invokes all test callables; each test callable invokes all of its sub-test callables. This is working fine, but a lot of new threads get created. I cannot share an executor service, since I need to call 'shutdown' on the executors. My idea to fix this is to provide the same fixed-size thread pool to each executor service. Do you think that is a good design? Am I missing something more appropriate/simpler for doing this?
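
    A sketch of the shared-pool idea in the question, with one caveat worth noting: if test tasks block waiting for sub-test tasks queued on the same bounded pool, the pool can starve itself, so this sketch keeps two shared fixed-size pools (one per level) and shuts them down once, after all the tests have run. Pool sizes and type parameters are placeholders.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutionException;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class TestEngine {
            // Shared, fixed-size pools created once and shut down once. Tests wait on
            // their sub-tests, so each level gets its own pool; a single bounded pool
            // whose workers all block on sub-tasks queued behind them can starve itself.
            private final ExecutorService testPool = Executors.newFixedThreadPool(4);
            private final ExecutorService subTestPool = Executors.newFixedThreadPool(8);

            /** Runs every test; each test fans out its sub-tests and collects their results. */
            public List<List<String>> runAll(List<List<Callable<String>>> tests)
                    throws InterruptedException, ExecutionException {
                List<Future<List<String>>> testFutures = new ArrayList<>();
                for (List<Callable<String>> subTests : tests) {
                    Callable<List<String>> testTask = () -> {
                        List<String> results = new ArrayList<>();
                        for (Future<String> f : subTestPool.invokeAll(subTests)) {
                            results.add(f.get());   // invokeAll has already waited for completion
                        }
                        return results;
                    };
                    testFutures.add(testPool.submit(testTask));
                }
                List<List<String>> all = new ArrayList<>();
                for (Future<List<String>> f : testFutures) {
                    all.add(f.get());
                }
                return all;
            }

            public void shutdown() {
                testPool.shutdown();
                subTestPool.shutdown();
            }
        }

    With this arrangement each test callable only blocks its own test-level worker while its sub-tests run, and no executors are created or shut down per test.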

    Read the article

  • SQL Compact and __sysobjects

    - by Scott Wisniewski
    I have some SQL Compact queries that create tables inside a transaction. This is mainly because I need to simulate temporary tables, which SQL Compact does not support; I do this by creating a real table and then dropping it at the end of the transaction. This mostly works. Sometimes, however, when creating the tables SQL Compact will try to acquire PAGE-level locks on the __sysobjects table. If there are several concurrent queries running that create "temp" tables, the attempt to acquire a page lock can result in a deadlock followed by a SqlLockTimeout exception. For normal tables I could fix this using a "with (rowlock)" hint; however, because I am not the one writing the query that inserts into __sysobjects (SQL Server does that in response to "create table"), I can't do this. Does anyone know of a way I could get around this? I've thought about pulling the table creation out of the transaction, but that opens up the possibility of phantom temporary tables that I'd then need to clean up regularly. Ideally I'd like to avoid that if possible.

    Read the article

  • Using SMO to call Database.ExecuteNonQuery() concurrently?

    - by JimDaniel
    I have been banging my head against the wall trying to figure out how I can run update scripts concurrently against multiple databases in a single SQL Server instance using SMO. Our environments have an ever-increasing number of databases which need updating, and iterating through them one at a time is becoming a problem (too slow). From what I understand, SMO does not support concurrent operations, and my tests have borne that out. There seems to be shared state at the Server object level (for things like the DataReader context), which keeps causing exceptions such as "reader is already open." I apologize for not having the exact exceptions I am getting; I will try to get them and update this post. I am no expert on SMO and am just feeling my way through, to be honest. I am not really sure I am approaching it the right way, but it's something that has to be done, or our productivity will slow to a crawl. So how would you guys do something like this? Am I using the wrong technology in SMO? All I want to do is execute SQL scripts against the databases of a single SQL Server instance in parallel. Thanks for any help you can give, Daniel

    Read the article

  • Hosting a high-traffic Facebook app (game)

    - by z3cko
    We are currently developing a high-traffic Facebook application. All of the traffic will occur within one month, during which we expect 500,000 to 1,000,000 users; after that month the game is over, we have a winner, and the app will be archived. We are planning to build the application with Ruby on Rails and are searching for hosting options that can deal with the traffic. The problem is not so much the number of users as the peak values: we will have around 500,000 requests arriving daily within a short time frame (say within 3 minutes in the worst case). We expect 500,000 to 1,000,000 users of the application, with peaks at 1:00 pm (timezone GMT+1), when most of them (up to 80% of the users) will send most of their requests. The requests run from June 11 to July 11; after that the app/game is closed/over. We are currently developing an aggressive caching mechanism, and we are thinking about 2 or 3 small apps/web services that will handle the load. The load is distributed as follows: a) the main application, serving cached data (11 screens, about 200 KB each); b) voting: every day until 1:00 pm (timezone GMT+1), every user votes, sending about 10 KB of data, with high concurrent peak values. Questions: is there any specific application setup that is recommendable? Are there any hosting partners that can be recommended? Thanks!

    Read the article

  • Is memory allocation in Linux non-blocking?

    - by Mark
    I am curious to know whether allocating memory using the default new operator is a non-blocking operation, e.g. struct Node { int a,b; }; ... Node* foo = new Node(); If multiple threads tried to create a new Node and one of them was suspended by the OS in the middle of the allocation, would it block the other threads from making progress? The reason I ask is that I had a concurrent data structure that created new nodes. I then modified the algorithm to recycle nodes. The throughput of the two algorithms was virtually identical on a 24-core machine. However, I then created an interference program that ran on all the system cores in order to cause as much OS pre-emption as possible. The throughput of the algorithm that created new nodes decreased by a factor of 5 relative to the algorithm that recycled nodes. I'm curious to know why this would occur. Thanks. Edit: pointing me to the code of the C++ memory allocator used on Linux would be helpful as well. I tried looking before posting this question, but had trouble finding it.

    Read the article

  • Standard term for a thread I/O reorder buffer?

    - by Crashworks
    I have a case where many threads all concurrently generate data that is ultimately written to one long, serial file. I need to somehow serialize these writes so that the file gets written in the right order. ie, I have an input queue of 2048 jobs j0..jn, each of which produces a chunk of data oi. The jobs run in parallel on, say, eight threads, but the output blocks have to appear in the file in the same order as the corresponding input blocks — the output file has to be in the order o0o1o2... The solution to this is pretty self evident: I need some kind of buffer that accumulates and writes the output blocks in the correct order, similar to a CPU reorder buffer in Tomasulo's algorithm, or to the way that TCP reassembles out-of-order packets before passing them to the application layer. Before I go code it, I'd like to do a quick literature search to see if there are any papers that have solved this problem in a particularly clever or efficient way, since I have severe realtime and memory constraints. I can't seem to find any papers describing this though; a Scholar search on every permutation of [threads, concurrent, reorder buffer, reassembly, io, serialize] hasn't yielded anything useful. I feel like I must just not be searching the right terms. Is there a common academic name or keyword for this kind of pattern that I can search on?
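
    A sketch of such a reorder buffer in Java terms: workers hand in chunks tagged with their job index, and chunks are written strictly in index order, with a bounded window so memory stays under control (workers that get too far ahead block). The class name and window size are placeholders.

        import java.io.IOException;
        import java.io.OutputStream;
        import java.util.HashMap;
        import java.util.Map;

        // In-order output buffer: workers hand in chunks tagged with their job index,
        // and chunks reach the stream strictly in index order. A bounded window keeps
        // memory under control by blocking workers that get too far ahead.
        public class ReorderingWriter {
            private final OutputStream out;
            private final int window;                     // max chunks buffered ahead of the cursor
            private final Map<Integer, byte[]> pending = new HashMap<>();
            private int next = 0;                         // index of the next chunk to write

            public ReorderingWriter(OutputStream out, int window) {
                this.out = out;
                this.window = window;
            }

            /** Called by any worker thread once job 'index' has produced its chunk. */
            public synchronized void submit(int index, byte[] chunk)
                    throws IOException, InterruptedException {
                while (index >= next + window) {
                    wait();                               // too far ahead: wait for the cursor to catch up
                }
                pending.put(index, chunk);
                byte[] ready;
                while ((ready = pending.remove(next)) != null) {
                    out.write(ready);                     // drain everything contiguous with the cursor
                    next++;
                }
                notifyAll();                              // wake workers blocked on the window
            }
        }

    No dedicated writer thread is needed: whichever worker completes the chunk at the cursor drains the contiguous run while holding the lock, which also serializes the actual write() calls.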

    Read the article

  • Concurrency problem with arrays (Java)

    - by Johannes
    For an algorithm I'm working on I tried to develop a blacklisting mechanism that can blacklist arrays in a specific way: If "1, 2, 3" is blacklisted "1, 2, 3, 4, 5" is also considered blacklisted. I'm quite happy with the solution I've come up with so far. But there seem to be some serious problems when I access a blacklist from multiple threads. The method "contains" (see code below) sometimes returns true, even if an array is not blacklisted. This problem does not occur if I only use one thread, so it most likely is a concurrency problem. I've tried adding some synchronization, but it didn't change anything. I also tried some slightly different implementations using java.util.concurrent classes. Any ideas on how to fix this? public class Blacklist { private static final int ARRAY_GROWTH = 10; private final Node root = new Node(); private static class Node{ private volatile Node[] childNodes = new Node[ARRAY_GROWTH]; private volatile boolean blacklisted = false; public void blacklist(){ this.blacklisted = true; this.childNodes = null; } } public void add(final int[] array){ synchronized (root) { Node currentNode = this.root; for(final int edge : array){ if(currentNode.blacklisted) return; else if(currentNode.childNodes.length <= edge) { currentNode.childNodes = Arrays.copyOf(currentNode.childNodes, edge + ARRAY_GROWTH); } if(currentNode.childNodes[edge] == null) { currentNode.childNodes[edge] = new Node(); } currentNode = currentNode.childNodes[edge]; } currentNode.blacklist(); } } public boolean contains(final int[] array){ synchronized (root) { Node currentNode = this.root; for(final int edge : array){ if(currentNode.blacklisted) return true; else if(currentNode.childNodes.length <= edge || currentNode.childNodes[edge] == null) return false; currentNode = currentNode.childNodes[edge]; } return currentNode.blacklisted; } } }
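
    For comparison, here is a sketch of one alternative layout that keeps the same semantics (a blacklisted prefix covers every longer array) but replaces the resizable arrays with ConcurrentHashMap children, so contains() needs no locking at all; unlike the original it does not prune child nodes when a prefix is blacklisted, it simply stops looking once the flag is seen.

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        // Prefix blacklist backed by a trie of ConcurrentHashMaps: contains() takes no
        // lock, and add() relies only on the map's own atomic computeIfAbsent.
        public class PrefixBlacklist {
            private static final class Node {
                final ConcurrentMap<Integer, Node> children = new ConcurrentHashMap<>();
                volatile boolean blacklisted;
            }

            private final Node root = new Node();

            /** Blacklists 'prefix' and, implicitly, every longer array starting with it. */
            public void add(int[] prefix) {
                Node current = root;
                for (int edge : prefix) {
                    if (current.blacklisted) {
                        return;                 // already covered by a shorter blacklisted prefix
                    }
                    current = current.children.computeIfAbsent(edge, k -> new Node());
                }
                current.blacklisted = true;     // volatile write publishes the flag to readers
            }

            public boolean contains(int[] array) {
                Node current = root;
                for (int edge : array) {
                    if (current.blacklisted) {
                        return true;
                    }
                    current = current.children.get(edge);
                    if (current == null) {
                        return false;
                    }
                }
                return current.blacklisted;
            }
        }

    The main difference is that lookups never observe a half-resized child array, because there is no array to resize.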

    Read the article

  • Callers block until getFoo() has a value ready?

    - by Sean Owen
    I have a Java Thread which exposes a property which other threads want to access: class MyThread extends Thread { private Foo foo; ... Foo getFoo() { return foo; } ... public void run() { ... foo = makeTheFoo(); ... } } The problem is that it takes some short time from the time this runs until foo is available. Callers may call getFoo() before this and get a null. I'd rather they simply block, wait, and get the value once initialization has occurred. (foo is never changed afterwards.) It will be a matter of milliseconds until it's ready, so I'm comfortable with this approach. Now, I can make this happen with wait() and notifyAll() and there's a 95% chance I'll do it right. But I'm wondering how you all would do it; is there a primitive in java.util.concurrent that would do this, that I've missed? Or, how would you structure it? Yes, make foo volatile. Yes, synchronize on an internal lock Object and put the check in a while loop until it's not null. Am I missing anything?
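
    There is indeed a java.util.concurrent primitive that fits: a CountDownLatch opened once foo has been assigned. A minimal sketch using the names from the question (the Foo stub and the makeTheFoo() body are placeholders):

        import java.util.concurrent.CountDownLatch;

        class Foo {}

        class MyThread extends Thread {
            private final CountDownLatch ready = new CountDownLatch(1);
            private volatile Foo foo;             // volatile: safe publication to other threads

            Foo getFoo() throws InterruptedException {
                ready.await();                    // blocks only until run() has assigned foo
                return foo;
            }

            @Override
            public void run() {
                foo = makeTheFoo();
                ready.countDown();                // open the gate for any waiting callers
                // ... rest of run() ...
            }

            private Foo makeTheFoo() {
                return new Foo();                 // placeholder for the real initialization
            }
        }

    A FutureTask<Foo> run inside run() and exposed through its get() method is an equally standard alternative.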

    Read the article

  • What are salesforce.com and Apex like as an application development platform?

    - by mhollers
    I recently discovered that salesforce.com is much more than an online CRM after coming across a Morrison's case study in which they develop a works management application. I've been trying it out with a view to recreating our own works management system on the platform. My background is in Microsoft and .NET, and the obvious first choice would be ASP.NET. However, there's really only myself with .NET experience plus my manager with a more legacy Synergy programming background, I am self-taught, and I am evaluating other RAD options as well (e.g. Ironspeed). The nature of the business is, in the main, 2-5 concurrent construction-type contracts that run for 3-5 years each, each requiring 15-50 system users. Traditionally we have used our character-based works management system for everything and tweaked it for each contract. On the face of it the Salesforce licensing model suits this sort of flexibility, but I'm worried about the development flexibility, the learning curve, and all the issues that surround lock-in. There doesn't seem to be much neutral, sober analysis of the platform on the web that isn't Salesforce's own material or blogs. Does anyone have experience of developing an application on Salesforce compared to the more 'traditional' .NET route?

    Read the article

  • RequestBuilder timeouts and browser connection limits per domain.

    - by WesleyJohnson
    This is specifically about GWT's RequestBuilder, but it should apply to XHR in general as well. My company is having me build a near-realtime chat application over HTTP. Yes, I do realize there are better ways to do chat applications, but this is what they want. Eventually we want it working on the iPad/iPhone as well, so Flash is out, which rules out WebSockets and Comet as well, I think? Anyway, I'm running into issues where I've set GWT's RequestBuilder timeout to 10 seconds and we get very random and sporadic timeouts. We've got error handling and emailing on the server side and never get any errors, which suggests the underlying XHR request that RequestBuilder is built on never gets to the server and times out after 10 seconds. We're using these requests to poll the server for new messages rather often, for sending new messages to the server, and for polling (less frequently) for other parts of the application. What I'm afraid of is that we're running into the browser's limit on concurrent connections to the same domain (2 for IE by default?). Now my question is: if I construct a RequestBuilder and call its send() method, and the browser blocks it from sending until one of the 2 connections per domain is free, does the timeout start while the request is being blocked, or does it not start until the browser actually releases the underlying XHR? I hope that's clear; if not, please let me know and I'll try to explain more.

    Read the article

  • Help! I'm a Haskell Newbie

    - by Darknight
    I've only just dipped my toe into the world of Haskell as part of my journey of programming enlightenment (moving on from procedural to OOP to concurrent and now to functional). I've been trying an online Haskell evaluator. However, I'm now stuck on a problem: create a simple function that gives the total sum of an array of numbers. In a procedural language this is easy enough for me (using recursion) (C#): private int sum(ArrayList x, int i) { if (!(x.Count < i + 1)) { int t = 0; t = x.Item(i); t = sum(x, i + 1) + t; return t; } } All very fine; however, my failed attempt at Haskell was this: let sum x = x+sum in map sum [1..10] which resulted in the following error (from the above-mentioned website): Occurs check: cannot construct the infinite type: a = a -> t Please bear in mind I've only used Haskell for the last 30 minutes! I'm not looking simply for an answer, but for more of an explanation of it. Thanks in advance.

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A', that has a directory NFS mounted on server 'B'. A process on A writes to two files F1 and F2 in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks the head of the files, writes data, and flushes. Process B seeks the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file, and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but that occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much expect from NFS?

    Read the article

  • How can I merge two Linq IEnumerable<T> queries without running them?

    - by makerofthings7
    How do I merge a List<T> of TPL-based tasks for later execution? public async IEnumerable<Task<string>> CreateTasks(){ /* stuff*/ } My assumption is .Concat() but that doesn't seem to work: void MainTestApp() // Full sample available upon request. { List<string> nothingList = new List<string>(); nothingList.Add("whatever"); cts = new CancellationTokenSource(); delayedExecution = from str in nothingList select AccessTheWebAsync("", cts.Token); delayedExecution2 = from str in nothingList select AccessTheWebAsync("1", cts.Token); delayedExecution = delayedExecution.Concat(delayedExecution2); } /// SNIP async Task AccessTheWebAsync(string nothing, CancellationToken ct) { // return a Task } I want to make sure that this won't spawn any task or evaluate anything. In fact, I suppose I'm asking "what logically executes an IQueryable to something that returns data"? Background Since I'm doing recursion and I don't want to execute this until the correct time, what is the correct way to merge the results if called multiple times? If it matters I'm thinking of running this command to launch all the tasks var AllRunningDataTasks = results.ToList(); followed by this code: while (AllRunningDataTasks.Count > 0) { // Identify the first task that completes. Task<TableResult> firstFinishedTask = await Task.WhenAny(AllRunningDataTasks); // ***Remove the selected task from the list so that you don't // process it more than once. AllRunningDataTasks.Remove(firstFinishedTask); // TODO: Await the completed task. var taskOfTableResult = await firstFinishedTask; // Todo: (doen't work) TrustState thisState = (TrustState)firstFinishedTask.AsyncState; // TODO: Update the concurrent dictionary with data // thisState.QueryStartPoint + thisState.ThingToSearchFor Interlocked.Decrement(ref thisState.RunningDirectQueries); Interlocked.Increment(ref thisState.CompletedDirectQueries); if (thisState.RunningDirectQueries == 0) { thisState.TimeCompleted = DateTime.UtcNow; } }

    Read the article

  • GridView DataSource server error

    - by salvationishere
    I am developing a C# VS 2008 and SQL Server 2008 website. However, I get the below error now when I first run this: The DataSourceID of 'GridView1' must be the ID of a control of type IDataSource. A control with ID 'AdventureWorks3.mdf' could not be found What is causing this error? Here is my default.aspx file. I have configured GridView1 to use my AdventureWorks3.mdf file, stored in my App_Data folder. Do I need to add this folder name to this ASPX file? <%@ Page Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" Title="Untitled Page" %> <asp:Content ID="Content1" ContentPlaceHolderID="MainContent" Runat="Server"> <asp:Panel runat="server" ID="AuthenticatedMessagePanel"> <asp:Label runat="server" ID="WelcomeBackMessage"></asp:Label> <table> <tr > <td> <asp:Label ID="tableLabel" runat="server" Font-Bold="True" Text="Select target table:"></asp:Label> </td> <td> <asp:Label ID="inputLabel" runat="server" Font-Bold="True" Text="Select input file:"></asp:Label> </td></tr> <tr><td valign="top"> <asp:Label ID="feedbackLabel" runat="server"></asp:Label> <asp:GridView ID="GridView1" runat="server" style="WIDTH: 400px;" CellPadding="4" ForeColor="#333333" GridLines="None" onselectedindexchanged="GridView1_SelectedIndexChanged" AutoGenerateSelectButton="True" DataSourceID="AdventureWorks3.mdf" > <RowStyle BackColor="#F7F6F3" ForeColor="#333333" /> <FooterStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" /> <PagerStyle BackColor="#284775" ForeColor="White" HorizontalAlign="Center" /> <SelectedRowStyle BackColor="#E2DED6" Font-Bold="True" ForeColor="#333333" /> <HeaderStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" /> <EditRowStyle BackColor="#999999" /> <AlternatingRowStyle BackColor="White" ForeColor="#284775" /> </asp:GridView> </td> <td valign="top"> <input id="uploadFile" type="file" size="26" runat="server" name="uploadFile" title="UploadFile" class="greybar" enableviewstate="True" /> </td></tr> </table> </asp:Panel> <asp:Panel runat="Server" ID="AnonymousMessagePanel"> <asp:HyperLink runat="server" ID="lnkLogin" Text="Log In" NavigateUrl="~/Login.aspx"> </asp:HyperLink> </asp:Panel> </asp:Content> Or what about my ASPX.CS file? Is this the problem? 
using System; using System.Collections; using System.Configuration; using System.Data; using System.Linq; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.HtmlControls; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Xml.Linq; using System.Collections.Generic; using System.IO; using System.Drawing; using System.ComponentModel; using System.Data.SqlClient; using ADONET_namespace; using System.Security.Principal; //using System.Windows; public partial class _Default : System.Web.UI.Page //namespace AddFileToSQL { //protected System.Web.UI.HtmlControls.HtmlInputFile uploadFile; protected System.Web.UI.HtmlControls.HtmlInputButton btnOWrite; protected System.Web.UI.HtmlControls.HtmlInputButton btnAppend; protected System.Web.UI.WebControls.Label Label1; protected static string inputfile = ""; public static string targettable; public static string selection; // Number of controls added to view state protected int default_NumberOfControls { get { if (ViewState["default_NumberOfControls"] != null) { return (int)ViewState["default_NumberOfControls"]; } else { return 0; } } set { ViewState["default_NumberOfControls"] = value; } } protected void uploadFile_onclick(object sender, EventArgs e) { } protected void Load_GridData() { GridView1.DataSource = ADONET_methods.DisplaySchemaTables(); GridView1.DataBind(); } protected void btnOWrite_Click(object sender, EventArgs e) { if (uploadFile.PostedFile.ContentLength > 0) { feedbackLabel.Text = "You do not have sufficient access to overwrite table records."; } else { feedbackLabel.Text = "This file does not contain any data."; } } protected void btnAppend_Click(object sender, EventArgs e) { string fullpath = Page.Request.PhysicalApplicationPath; string path = uploadFile.PostedFile.FileName; if (File.Exists(path)) { // Create a file to write to. try { StreamReader sr = new StreamReader(path); string s = ""; while (sr.Peek() > 0) s = sr.ReadLine(); sr.Close(); } catch (IOException exc) { Console.WriteLine(exc.Message + "Cannot open file."); return; } } if (uploadFile.PostedFile.ContentLength > 0) { inputfile = System.IO.File.ReadAllText(path); Session["Message"] = inputfile; Response.Redirect("DataMatch.aspx"); } else { feedbackLabel.Text = "This file does not contain any data."; } } protected void Page_Load(object sender, EventArgs e) { if (Request.IsAuthenticated) { WelcomeBackMessage.Text = "Welcome back, " + User.Identity.Name + "!"; // Reference the CustomPrincipal / CustomIdentity CustomIdentity ident = User.Identity as CustomIdentity; if (ident != null) WelcomeBackMessage.Text += string.Format(" You are the {0} of {1}.", ident.Title, ident.CompanyName); AuthenticatedMessagePanel.Visible = true; AnonymousMessagePanel.Visible = false; //if (!Page.IsPostBack) //{ // Load_GridData(); //} } else { AuthenticatedMessagePanel.Visible = false; AnonymousMessagePanel.Visible = true; } } protected void GridView1_SelectedIndexChanged(object sender, EventArgs e) { GridViewRow row = GridView1.SelectedRow; targettable = row.Cells[2].Text; } }

    Read the article

  • Issue with VHDL structural coding

    - by user3699982
    The code below is a simple vhdl structural architecture, however, the concurrent assignment to the signal, comb1, is upsetting the simulation with the outputs (tb_lfsr_out) and comb1 becoming undefined. Please, please help, thank you, Louise. library IEEE; use IEEE.STD_LOGIC_1164.all; entity testbench is end testbench; architecture behavioural of testbench is CONSTANT clock_frequency : REAL := 1.0e9; CONSTANT clock_period : REAL := (1.0/clock_frequency)/2.0; signal tb_master_clk, comb1: STD_LOGIC := '0'; signal tb_lfsr_out : std_logic_vector(2 DOWNTO 0) := "111"; component dff port ( q: out STD_LOGIC; d, clk: in STD_LOGIC ); end component; begin -- Clock/Start Conversion Generator tb_master_clk <= (NOT tb_master_clk) AFTER (1 SEC * clock_period); comb1 <= tb_lfsr_out(0) xor tb_lfsr_out(2); dff6: dff port map (tb_lfsr_out(2), tb_lfsr_out(1), tb_master_clk); dff7: dff port map (tb_lfsr_out(1), tb_lfsr_out(0), tb_master_clk); dff8: dff port map (tb_lfsr_out(0), comb1, tb_master_clk); end behavioural;

    Read the article

  • List with non-null elements ends up containing null. A synchronization issue?

    - by Alix
    Hi. First of all, sorry about the title -- I couldn't figure out one that was short and clear enough. Here's the issue: I have a list List<MyClass> list to which I always add newly-created instances of MyClass, like this: list.Add(new MyClass()). I don't add elements any other way. However, then I iterate over the list with foreach and find that there are some null entries. That is, the following code: foreach (MyClass entry in list) if (entry == null) throw new Exception("null entry!"); will sometimes throw an exception. I should point out that the list.Add(new MyClass()) are performed from different threads running concurrently. The only thing I can think of to account for the null entries is the concurrent accesses. List<> isn't thread-safe, after all. Though I still find it strange that it ends up containing null entries, instead of just not offering any guarantees on ordering. Can you think of any other reason? Also, I don't care in which order the items are added, and I don't want the calling threads to block waiting to add their items. If synchronization is truly the issue, can you recommend a simple way to call the Add method asynchronously, i.e., create a delegate that takes care of that while my thread keeps running its code? I know I can create a delegate for Add and call BeginInvoke on it. Does that seem appropriate? Thanks.

    Read the article

  • Can a Java HashMap's size() be out of sync with its actual entries' size?

    - by trix
    I have a Java HashMap called statusCountMap. Calling size() results in 30. But if I count the entries manually, it's 31 This is in one of my TestNG unit tests. These results below are from Eclipse's Display window (type code - highlight - hit Display Result of Evaluating Selected Text). statusCountMap.size() (int) 30 statusCountMap.keySet().size() (int) 30 statusCountMap.values().size() (int) 30 statusCountMap (java.util.HashMap) {40534-INACTIVE=2, 40526-INACTIVE=1, 40528-INACTIVE=1, 40492-INACTIVE=3, 40492-TOTAL=4, 40513-TOTAL=6, 40532-DRAFT=4, 40524-TOTAL=7, 40526-DRAFT=2, 40528-ACTIVE=1, 40524-DRAFT=2, 40515-ACTIVE=1, 40513-DRAFT=4, 40534-DRAFT=1, 40514-TOTAL=3, 40529-DRAFT=4, 40515-TOTAL=3, 40492-ACTIVE=1, 40528-TOTAL=4, 40514-DRAFT=2, 40526-TOTAL=3, 40524-INACTIVE=2, 40515-DRAFT=2, 40514-ACTIVE=1, 40534-TOTAL=3, 40513-ACTIVE=2, 40528-DRAFT=2, 40532-TOTAL=4, 40524-ACTIVE=3, 40529-ACTIVE=1, 40529-TOTAL=5} statusCountMap.entrySet().size() (int) 30 What gives ? Anyone has experienced this ? I'm pretty sure statusCountMap is not being modified at this point. There are 2 methods (lets call them methodA and methodB) that modify statusCountMap concurrently, by repeatedly calling incrementCountInMap. private void incrementCountInMap(Map map, Long id, String qualifier) { String key = id + "-" + qualifier; if (map.get(key) == null) { map.put(key, 0); } synchronized (map) { map.put(key, map.get(key).intValue() + 1); } } methodD is where I'm getting the issue. methodD has a TestNG @dependsOnMethods = { "methodA", "methodB" } so when methodD is executing, statusCountMap is pretty much static already. I'm mentioning this because it might be a bug in TestNG. I'm using Sun JDK 1.6.0_24. TestNG is testng-5.9-jdk15.jar Hmmm ... after rereading my post, could it be because of concurrent execution of outside-of-synchronized-block map.get(key) == null & map.put(key,0) that's causing this issue ?
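
    The suspicion at the end of the question is the most likely cause: the map.get(key) == null check and the map.put(key, 0) run outside the synchronized block, so two threads can both see null and both insert, and unsynchronized structural modification of a HashMap can also leave size() inconsistent with the entries actually present. A sketch of two safer variants (method and map names follow the question):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        public class StatusCounter {
            // Variant 1: keep the HashMap, but do the whole check-then-act under the lock.
            private void incrementCountInMap(Map<String, Integer> map, Long id, String qualifier) {
                String key = id + "-" + qualifier;
                synchronized (map) {
                    Integer current = map.get(key);
                    map.put(key, current == null ? 1 : current + 1);
                }
            }

            // Variant 2: use a ConcurrentHashMap and let merge() do the update atomically.
            private final ConcurrentMap<String, Integer> statusCountMap = new ConcurrentHashMap<>();

            private void incrementCount(Long id, String qualifier) {
                statusCountMap.merge(id + "-" + qualifier, 1, Integer::sum);
            }
        }

    Variant 2 also removes the need for the synchronized block entirely.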

    Read the article

  • How to provide warnings during validation in ASP.NET MVC?

    - by Alex
    Sometimes user input is not strictly invalid but can be considered problematic. For example: A user enters a long sentence in a single-line Name field. He probably should have used the Description field instead. A user enters a Name that is very similar to that of an existing entity. Perhaps he's inputting the same entity but didn't realize it already exists, or some concurrent user has just entered it. Some of these can easily be checked client-side, some require server-side checks. What's the best way, perhaps something similar to DataAnnotations validation, to provide warnings to the user in such cases? The key here is that the user has to be able to override the warning and still submit the form (or re-submit the form, depending on the implementation). The most viable solution that comes to mind is to create some attribute, similar to a CustomValidationAttribute, that may make an AJAX call and would display some warning text but doesn't affect the ModelState. The intended usage is this: [WarningOnFieldLength(MaxLength = 150)] [WarningOnPossibleDuplicate()] public string Name { get; set; } In the view: @Html.EditorFor(model => model.Name) @Html.WarningMessageFor(model => model.Name) @Html.ValidationMessageFor(model => model.Name) So, any ideas?

    Read the article

  • A question of long-running and disruptive branches

    - by Matt Enright
    We are about to begin prototyping a new application that will share some existing infrastructure assemblies with an existing application, and also involve a significant subset of the existing domain model. Parts of the domain model will likely undergo some serious changes for this new application, and the endgame for all of this, once the new application has been fully specified and is launch-ready is that we would like to re-unify the models of the two applications (as well as share a database, link functionality, etc.), but for the duration of development, prototyping, etc, we will be using a separate database so that we can change things without worrying about impact to development or use of the existing application. Since it is a prototype, there will be a pretty long window during which serious changes or rearchitecturing can occur as product management experiments with different workflows, different customer bases are surveyed, and we try and keep up. We have already made a Subversion branch, so as to not impact concurrent development on the mature application, and are toying with 2 potential ways of moving forward with this: Use the svn branch as the sole mechanism of separation. Make our changes to the existing domain models, and evaluate their impact on the existing application (and make requisite changes to ProjectA) when we have established that our long-running side branch is stable enough for re-entry to trunk. "Fork" the shared code (temporarily): Copy ProjectA.Entities to NewProject.Entities, and treat all of the NewProject code as self-contained. When all of the perturbations around the model have died down and we feel satisfied, manually re-integrate the changes (as granular or sweeping as warranted) back into ProjectA.Entities, updating ProjectA to use the improved models at each step (this can take place either before or after the subversion merge has occurred). The subversion merge will then not handle recombination of any of the heavy changes here. Note: the "fork" method only applies to the code we see significant changes in store for, and whose modification will break ProjectA - shared infrastructure stuff for example, we would just modify in place (on our branch) and let the merge sort out. Development is hard, go shopping. Naturally, after not coming to an agreement, we're turning it over to the oracle of power that is SO. Any experience with any of these methods, pain points to watch out for, something new entirely?

    Read the article

  • SYN receives RST,ACK very frequently

    - by user1289508
    Hi socket programming experts, I am writing a proxy server on Linux for a SQL database server running on Windows. The proxy is written in C using BSD sockets, and it works just fine. When I use a database client (written in Java and running on a Linux box) to fire queries (with a concurrency of 100 or more) directly at the database server, I see no connection resets; but through my proxy I see many of them. Digging deeper, I found that the connection from the DB client to the proxy always succeeds, but when the proxy tries to connect to the DB server the connection sometimes fails because the SYN packet gets a RST,ACK in response. That is the background; the question is: why does a SYN sometimes receive a RST,ACK? 'DB client (Linux)' to 'Server (Windows)': works fine. 'DB client (Linux)' to 'Proxy (Linux)' to 'Server (Windows)': problematic. I am aware that this can happen in the "connection refused" case, but this is definitely not that. SYN flooding might be another scenario, but that does not explain the fine behavior when firing at the server directly. I suspect some socket option may be required that the client sets before connecting and my proxy does not. Please shed some light on this; any help (links or pointers) is much appreciated. Additional info: I wrote a C client that makes concurrent connections and takes the concurrency as an argument. My observations: at a concurrency of 5000 and above, some connects fail with 'connection refused'; below 2000 it works fine. But the actual problem is observed even at a concurrency of 100 or more. Note: the problem is time-dependent; sometimes it never appears at all and sometimes it is very frequent, and the DB client (directly to the server) works fine at all times.

    Read the article

  • Limiting object allocation over multiple threads

    - by John
    I have an application which retrieves and caches the results of a clients query. The client then requests different chunks of data and the application sends the relevant results and removes them from the cache. A new requirement for this application is that there needs to be a run-time configurable maximum number of results which may be cached. I've taken the naive approach and implemented this by using a counter under a lock which is incremented every time a result is cached and decremented whenever a result is removed from the cache. Unfortunately, this has drastically reduced the applications performance when processing a large number of concurrent requests. I have tried both a critical section lock and spin-lock; the performance improves a bit with a spin-lock, but is still unacceptably slow. Is there a better way to solve this problem which may improve performance? Right now I have a thread pool that services requests and each request is tied to a Request object which stores that cached results for that particular request. Here is a simplified pseudo code version of my current implementation: void ResultCallback( Result result, Request *request ) { lock totalResultsCached lock cachedLimit if( totalResultsCached + 1 > cachedLimit ) { unlock cachedLimit unlock totalResultsCached //cancel the request return; } ++totalResultsCached; unlock cachedLimit unlock totalResultsCached request.add(result) } void SendResults( int resultsToSend, Request *request ) { while ( resultsToSend > 0 ) { send(request.remove()) lock totalResultsCached --totalResultsCached unlock totalResultsCached --resultsToSend; } }
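
    One lower-contention option, sketched here in Java terms since the original is pseudocode: size a counting semaphore to the cache limit so admission is a single non-blocking tryAcquire() instead of two nested locks. Handling run-time changes to the limit is omitted; the Request/Result stubs and names are stand-ins for the real types.

        import java.util.ArrayDeque;
        import java.util.concurrent.Semaphore;

        // Cache admission guarded by a counting semaphore: one non-blocking tryAcquire
        // per result replaces the two nested locks around totalResultsCached/cachedLimit.
        public class ResultCache {
            private final Semaphore slots;

            public ResultCache(int cachedLimit) {
                this.slots = new Semaphore(cachedLimit);
            }

            /** Returns false (caller cancels the request) when the cache is full. */
            public boolean tryCache(Request request, Result result) {
                if (!slots.tryAcquire()) {
                    return false;
                }
                request.add(result);
                return true;
            }

            /** Called once per cached result that is sent back to the client. */
            public void sendOne(Request request) {
                send(request.remove());
                slots.release();               // free the slot for new results
            }

            private void send(Result r) { /* transmit to the client */ }

            // Minimal stand-ins for the real Request/Result types.
            public static class Result {}
            public static class Request {
                private final ArrayDeque<Result> cached = new ArrayDeque<>();
                synchronized void add(Result r) { cached.add(r); }
                synchronized Result remove()    { return cached.remove(); }
            }
        }

    Growing the limit at run time would just be slots.release(extra); shrinking it would mean acquiring the surplus permits back, for example from a maintenance task.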

    Read the article

  • While in a transaction, how can reads to an affected row be prevented until the transaction is done?

    - by Mahn
    I'm fairly sure this has a simple solution, but I haven't been able to find it so far. Provided an InnoDB MySQL database with the isolation level set to SERIALIZABLE, and given the following operation: BEGIN WORK; SELECT * FROM users WHERE userID=1; UPDATE users SET credits=100 WHERE userID=1; COMMIT; I would like to make sure that as soon as the select inside the transaction is issued, the row corresponding to userID=1 is locked for reads until the transaction is done. As it stands now, UPDATEs to this row will wait for the transaction to be finished if it is in process, but SELECTs simply will read the previous value. I understand this is the expected behaviour in this case, but I wonder if there is a way to lock the row in such a way that SELECTs will also wait until the transaction is finished to return the values? The reason I'm looking for that is that at some point, and with enough concurrent users, it could happen that while the previous transaction is in process someone else reads the "credits" to calculate something else. Ideally the code run by that someone else should wait for the transaction to finish to use the new value, because otherwise it could lead to irreversible desync issues. Note that I don't want to lock the entire table for reads, just the specific row. Also, I could add a boolean "locked" field to the tables and set it to 1 every time I'm starting a transaction but I don't really feel this is the most elegant solution here, unless there is absolutely no other way to handle this through mysql directly.
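
    What is being described is what a locking read provides: SELECT ... FOR UPDATE takes an exclusive lock on the matching row, so any other transaction that reads it with a locking read (FOR UPDATE or LOCK IN SHARE MODE, and under SERIALIZABLE InnoDB implicitly turns plain SELECTs into LOCK IN SHARE MODE when autocommit is off) blocks until the COMMIT; only non-locking consistent reads at lower isolation levels keep seeing the old snapshot. A JDBC sketch of the writing transaction, with placeholder connection details:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class CreditUpdate {
            public static void main(String[] args) throws Exception {
                // Connection details are placeholders.
                try (Connection con = DriverManager.getConnection(
                        "jdbc:mysql://localhost/app", "user", "password")) {
                    con.setAutoCommit(false);
                    try (PreparedStatement select = con.prepareStatement(
                             "SELECT credits FROM users WHERE userID = ? FOR UPDATE");
                         PreparedStatement update = con.prepareStatement(
                             "UPDATE users SET credits = ? WHERE userID = ?")) {
                        select.setInt(1, 1);
                        try (ResultSet rs = select.executeQuery()) {   // takes an exclusive row lock
                            rs.next();
                            // ... compute the new value from rs.getInt("credits") ...
                        }
                        update.setInt(1, 100);
                        update.setInt(2, 1);
                        update.executeUpdate();
                    }
                    con.commit();   // lock released here; blocked locking reads now see credits = 100
                }
            }
        }

    The reader that uses credits for its own calculation would issue its SELECT the same way, so it queues behind the in-flight transaction instead of reading the stale value.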

    Read the article
