Search Results

Search found 27440 results on 1098 pages for 'table lock'.

Page 50/1098 | < Previous Page | 46 47 48 49 50 51 52 53 54 55 56 57  | Next Page >

  • Lucene.net create+lock errors in ASP.NET

    - by acidzombie24
    I have an issue with Lucene.net: it throws a lock exception. After poking around I noticed these things. My code below works in an app, but when calling it in Application_Start I get a NoSuchDirectoryException. If I do not close the writer (as my code below doesn't), I WILL get a LockObtainFailedException with the message "Lock obtain timed out: SimpleFSLock@<FULL_PATH>", from either the app or ASP.NET. These threads hinted that spawned threads get fewer permissions than I do (but my main thread has problems as well...) and that one solution is to impersonate IIS. I am using Visual Studio 2010; I am not sure how full-blown that edition is, but my attempt at impersonation failed. So my question is: how do I have Lucene create the directory, and not throw an exception if the writer wasn't closed for some reason (such as the power going out)?
    http://stackoverflow.com/questions/2341163/why-is-my-lucene-index-getting-locked/2499285#2499285
    http://stackoverflow.com/questions/1123517/lucene-net-and-i-o-threading-issue/1123981#1123981

        static IndexWriter writer = null;
        static void lucene_init()
        {
            bool create = false;
            string dirname = "LuceneIndex_z";
            if (System.IO.Directory.Exists(dirname) == false)
                create = true;
            var directory = FSDirectory.GetDirectory(dirname);
            var analyzer = new StandardAnalyzer();
            writer = new IndexWriter(directory, analyzer, create);
        }

    Read the article

  • Lucene.net create+lock errors in ASP.NET

    - by acidzombie24
    I have an issue with Lucene.net: it throws a lock exception. After poking around I noticed these things. My code below works in an app, but when calling it in Application_Start I get a NoSuchDirectoryException. If I do not close the writer (as my code below doesn't), I WILL get a LockObtainFailedException with the message "Lock obtain timed out: SimpleFSLock@<FULL_PATH>", from either the app or ASP.NET. These threads hinted that spawned threads get fewer permissions than I do (but my main thread has problems as well...) and that one solution is to impersonate IIS. I am using Visual Studio 2010; I am not sure how full-blown that edition is, but my attempt at impersonation failed. So my question is: how do I have Lucene create the directory, and not throw an exception if the writer wasn't closed for some reason (such as the power going out)?
    http://stackoverflow.com/questions/2341163/why-is-my-lucene-index-getting-locked/2499285#2499285
    http://stackoverflow.com/questions/1123517/lucene-net-and-i-o-threading-issue/1123981#1123981

        static IndexWriter writer = null;
        static void lucene_init()
        {
            bool create = false;
            // I now use a full path. I still get NoSuchDirectoryException
            //string dirname = "LuceneIndex_z";
            if (System.IO.Directory.Exists(dirname) == false)
                create = true;
            var directory = FSDirectory.GetDirectory(dirname);
            var analyzer = new StandardAnalyzer();
            writer = new IndexWriter(directory, analyzer, create);
        }

    Read the article

  • OpenGL Calls Lock/Freeze

    - by Necrolis
    I am using some Dell workstations (running WinXP Pro SP2 & DeepFreeze) for development, but something was recently loaded onto these machines that prevents any OpenGL call from completing (the call locks). I know the code works, as I have tested it on 'clean' machines; I also tested with simple OpenGL apps generated by Dev-C++, which also lock on the Dell machines. I have tried to debug my own apps to see where exactly the GL calls freeze, but there is some global system hook on ZwQueryInformationProcess that messes up calls to ZwQueryInformationThread (used by ExitThread), preventing me from debugging at all (it causes the debugger, OllyDbg, to go into an access violation reporting loop, or the program to crash if the exception is passed along).

    The hook:

        ntdll.ZwQueryInformationProcess
        7C90D7E0   B8 9A000000     MOV EAX,9A
        7C90D7E5   BA 0003FE7F     MOV EDX,7FFE0300
        7C90D7EA   FF12            CALL DWORD PTR DS:[EDX]
        7C90D7EC - E9 0F28448D     JMP 09D50000
        7C90D7F1   9B              WAIT
        7C90D7F2   0000            ADD BYTE PTR DS:[EAX],AL
        7C90D7F4   00BA 0003FE7F   ADD BYTE PTR DS:[EDX+7FFE0300],BH
        7C90D7FA   FF12            CALL DWORD PTR DS:[EDX]
        7C90D7FC   C2 1400         RETN 14
        7C90D7FF   90              NOP
        ntdll.ZwQueryInformationToken
        7C90D800   B8 9C000000     MOV EAX,9C

    The messed-up function + call:

        ntdll.ZwQueryInformationThread
        7C90D7F0   8D9B 000000BA   LEA EBX,DWORD PTR DS:[EBX+BA000000]
        7C90D7F6   0003            ADD BYTE PTR DS:[EBX],AL
        7C90D7F8   FE              ???             ; Unknown command
        7C90D7F9   7F FF           JG SHORT ntdll.7C90D7FA
        7C90D7FB   12C2            ADC AL,DL
        7C90D7FD   14 00           ADC AL,0
        7C90D7FF   90              NOP
        ntdll.ZwQueryInformationToken
        7C90D800   B8 9C000000     MOV EAX,9C

    So firstly, does anyone know what, if anything, would lead to OpenGL calls causing an infinite lock, and are there any ways around it? And what would be creating such a hook in kernel memory?

    Update: After some more fiddling, I have discovered a few more kernel hooks; a lot of them are used to nullify data returned by system information calls (such as the remote debugging port). I also managed to find out that whatever is doing this is using madchook.dll (by madshi), and this DLL is also injected into every running process (it seems to be some anti-debugging code). Also, on the OpenGL side, it seems DirectX is fine/unaffected (I ran one of the DX 9 demos without problems), so could one of these kernel hooks somehow affect OpenGL?

    Read the article

  • Using SQL DB column as a lock for concurrent operations in Entity Framework

    - by Sid
    We have a long-running user operation that is handled by a pool of worker processes. Data input and output is from Azure SQL. The master Azure SQL table structure columns are approximately:

        [UserId, col1, col2, ... , col N, beingProcessed, lastTimeProcessed]

    beingProcessed is a boolean and lastTimeProcessed is a DateTime. The logic in every worker role is:

        public void WorkerRoleMain()
        {
            while (true)
            {
                try
                {
                    dbContext db = new dbContext();
                    // Read
                    foreach (UserProfile user in db.UserProfile
                        .Where(u => DateTime.UtcNow.Subtract(u.lastTimeProcessed) > TimeSpan.FromHours(24)
                                 && u.beingProcessed == false))
                    {
                        user.beingProcessed = true; // Modify
                        db.SaveChanges();           // Write

                        // Do some long drawn-out processing here
                        // ...

                        user.lastTimeProcessed = DateTime.UtcNow;
                        user.beingProcessed = false;
                        db.SaveChanges();
                    }
                }
                catch (Exception ex)
                {
                    LogException(ex);
                    Sleep(TimeSpan.FromMinutes(5));
                }
            } // while ()
        }

    With multiple workers processing as above (each with their own Entity Framework layer), beingProcessed is in essence being used as a lock for mutual exclusion. Question: How can I deal with concurrency issues on the beingProcessed "lock" itself under the above load? I think the read-modify-write operation on beingProcessed needs to be atomic, but I'm open to other strategies. Open to other code refinements too.
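
    One way to make the claim atomic, sketched below as raw SQL rather than anything from the original post: push the test-and-set into a single UPDATE on Azure SQL / SQL Server, so only one worker can flip beingProcessed for a given row. Table and column names follow the question; the TOP(1)/OUTPUT shape is my assumption.

        -- Hedged sketch: atomically claim one stale, unclaimed row.
        UPDATE TOP (1) UserProfile
        SET    beingProcessed = 1
        OUTPUT inserted.UserId
        WHERE  beingProcessed = 0
          AND  lastTimeProcessed < DATEADD(HOUR, -24, SYSUTCDATETIME());

    A worker would run something like this (for example via a stored procedure or raw SQL from EF), process only the UserId it gets back, and skip the iteration when no row is returned because another worker claimed it first.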

    Read the article

  • JDBC transaction dead-lock solution required?

    - by user49767
    It's a scenario described by my friend, who challenged me to find a solution. He is using an Oracle database and a JDBC connection with read committed as the transaction isolation level. In one transaction, he updates a record, executes a select statement, and commits the transaction. When everything happens within a single thread, things are fine. But when multiple requests are handled, a deadlock happens: Thread A updates a record. Thread B updates another record. Thread A issues a select statement and waits for Thread B's transaction to complete the commit operation. Thread B issues a select statement and waits for Thread A's transaction to complete the commit operation. This causes the deadlock. Since they use the command pattern, the base framework allows commit to be issued only once (at the end of all the DB operations), so they are unable to issue a commit immediately after the select statement. My argument was that Thread A is supposed to select only records that are committed, and hence this should not be an issue. But he said that Thread A will surely wait until Thread B commits the record. Is that true? What are the ways to avoid the above issue? Is it possible to change the isolation level (without changing the underlying Java framework)? A little information about the base framework: it is something similar to a Struts action; each and every request is handled by one action, a transaction begins before execution and commits after execution.
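
    Worth noting: under Oracle's read committed isolation an ordinary SELECT reads a consistent snapshot and does not wait for another session's uncommitted UPDATE, so blocking like the above usually points at the statements actually being SELECT ... FOR UPDATE (or two UPDATEs touching the same rows). A hedged sketch of two common ways out, using a made-up orders table purely for illustration:

        -- Option 1: fail fast instead of waiting, then retry in application code.
        SELECT id, status
        FROM   orders
        WHERE  id = 42
        FOR UPDATE NOWAIT;          -- raises ORA-00054 if another session holds the row

        -- Option 2: have every transaction lock rows in the same order
        -- (e.g. ascending primary key), so a lock cycle cannot form.
        SELECT id
        FROM   orders
        WHERE  id IN (17, 42)
        ORDER BY id
        FOR UPDATE;

    If the framework only allows one commit at the end, acquiring the row locks in a fixed order (option 2) avoids the cycle without changing when the commit happens.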

    Read the article

  • MFC/CCriticalSection: Simple lock situation hangs

    - by raph.amiard
    I have to write a simple threaded program with MFC/C++ for a uni assignment. I have a simple scenario in which a worker thread executes a function along the lines of:

        UINT createSchedules(LPVOID param)
        {
            genProgThreadVal* v = (genProgThreadVal*) param;
            // v->searcherLock is of type CCriticalSection*
            while(1)
            {
                if(v->searcherLock->Lock())
                {
                    // do the stuff, access shared object, exit clause etc..
                    v->searcherLock->Unlock();
                }
            }
            PostMessage(v->hwnd, WM_USER_THREAD_FINISHED, 0, 0);
            delete v;
            return 0;
        }

    In my main UI class, I have a CListCtrl that I want to be able to access the shared object (of type std::list), hence the locking. So this list control has a handler function looking like this:

        void Ccreationprogramme::OnLvnItemchangedList5(NMHDR *pNMHDR, LRESULT *pResult)
        {
            LPNMLISTVIEW pNMLV = reinterpret_cast<LPNMLISTVIEW>(pNMHDR);
            if((pNMLV->uChanged & LVIF_STATE) && (pNMLV->uNewState & LVNI_SELECTED))
            {
                searcherLock.Lock();
                // do the stuff on shared object
                searcherLock.Unlock();
                // do some more stuff
            }
            *pResult = 0;
        }

    The searcherLock in both functions is the same object. The worker thread function is passed a pointer to the CCriticalSection object, which is a member of my dialog class. Everything works, but as soon as I click on my list, which triggers the handler function, the whole program hangs indefinitely. I tried using a CMutex. I tried using a CSingleLock wrapping the critical section object, and none of this has worked. What am I missing?

    Read the article

  • Rails Heroku Gemfile.lock error - push rejected (open source)

    - by KJ50
    I am trying to push my open source RoR application to Heroku but I'm having an issue making the initial push. I have read many similar questions, but none of those answers has helped to solve my problem. I have tried bundle update and bundle install various times. I have also tried removing and then re-committing my Gemfile.lock file; however, I still get this same error...

        $ git push heroku master
        Counting objects: 5199, done.
        Compressing objects: 100% (3086/3086), done.
        Writing objects: 100% (5199/5199), 4.57 MiB | 131 KiB/s, done.
        Total 5199 (delta 3418), reused 3152 (delta 1962)

        -----> Removing .DS_Store files
        -----> Ruby app detected
        -----> Compiling Ruby/NoLockfile
         !
         !     Gemfile.lock required. Please check it in.
         !
         !     Push rejected, failed to compile Ruby app

        To git@heroku.com:frozen-springs-4725.git
         ! [remote rejected] master -> master (pre-receive hook declined)
        error: failed to push some refs to 'git@heroku.com:frozen-springs-4725.git'

    Since my application uses MongoDB with MongoMapper, I suspect that I have some configuration incorrect. My code can be found here on Github (I'm currently working on the heroku branch). Feel free to clone our repository and try it yourself. If anyone has any insight which could help me resolve this issue I would be very thankful!

    Read the article

  • MySQL replication ignore data changes but not table structure changes

    - by Ed Manet
    Is there a way to set up MySQL replication so that CREATE TABLE and ALTER TABLE statements get replicated but INSERT, DELETE and UPDATE statements do not? I've got replication working fine and have several tables that are ignored as per the requirements. But we have a requirement that the slaves have an empty copy of each ignored table. We create those empty copies before we start replicating. Since the table is ignored, table structure changes don't get passed down from the master to the slave's empty copy. I know it's a strange requirement.
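
    One possible approach (my assumption, not something stated in the question): drop the replicate-ignore rule for that table so DDL flows normally, and instead suppress binary logging only in the master sessions that change the data. sql_log_bin is session-scoped and requires the SUPER privilege; ignored_table below is a placeholder name.

        -- Hedged sketch (MySQL): skip binary logging for the data changes only.
        -- CREATE/ALTER TABLE issued in other sessions (or with logging re-enabled)
        -- still go to the binlog, so structure changes still replicate.
        SET SESSION sql_log_bin = 0;
        INSERT INTO ignored_table (id, val) VALUES (1, 'not replicated');
        UPDATE ignored_table SET val = 'still not replicated' WHERE id = 1;
        SET SESSION sql_log_bin = 1;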

    Read the article

  • Does CHECK TABLE add read/write locks?

    - by Ztyx
    Hi, Yesterday I ran CHECK TABLE on a table that is read very frequently. I scanned the MySQL documentation for CHECK TABLE for any mention of "lock" (and found none) and also noticed that only the SELECT privilege is required to run the command. I therefore concluded that the command did not take any read lock and was safe to run even in production. Sadly, running the command took 1 minute and 37 seconds and seemed to block all read access. My question is therefore: does CHECK TABLE take any read lock? Is there any other reason why I experienced a read block on the table? Thanks
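
    If the check itself turns out to be the cost, CHECK TABLE accepts lighter options; whether they avoid the blocking seen above depends on the storage engine, so treat this as a sketch only (busy_table is a placeholder name):

        CHECK TABLE busy_table QUICK;   -- skip the full row scan for incorrect links
        CHECK TABLE busy_table FAST;    -- only check the table if it was not closed properly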

    Read the article

  • Transpose matrix-style table to 3 columns in Excel

    - by polarbear2k
    I have a matrix-style table in excel where B1:Z1 are column headings and A2:A99 are row headings. I would like to convert this table to a 3 column table (column heading, row heading, cell value). It does not matter in what order the new table is.

            A   B   C   D               A   B   C               A   B   C
        1       H1  H2  H3          1   H1  R1  V1          1   H1  R1  V1
        2   R1  V1  V2  V3    =>    2   H1  R2  V4    or    2   H2  R1  V2
        3   R2  V4  V5  V6          3   H1  R3  V7          3   H3  R1  V3
        4   R3  V7  V8  V9          4   H2  R1  V2          4   H1  R2  V4
                                    5   H2  R2  V5          5   H2  R2  V5
                                    6   H2  R3  V8          6   H3  R2  V6
                                    7   H3  R1  V3          7   H1  R3  V7
                                    8   H3  R2  V6          8   H2  R3  V8
                                    9   H3  R3  V9          9   H3  R3  V8

    I've been playing around with the OFFSET function to create the whole table but I feel like a combination of TRANSPOSE and V/HLOOKUP is required. Thanks

    Read the article

  • Truncate text to fit table cell without wrapping using css or jquery

    - by Tauren
    I want the text in one of the columns of a table to not wrap, but to just truncate so that it fits within the current size of the table cell. I don't want the table cell to change size, as I need the table to be exactly 100% the width of the container. This is because the table with 100% width is inside of a positioned div with overflow: auto (it's actually inside of a jQuery UI.Layout panel). I tried overflow: hidden and the text still wrapped. I tried white-space: nowrap, but it stretched the table wider than 100% and added a horizontal scroll bar.

        div.container {
            position: absolute;
            overflow: auto;
            /* user can slide resize bars to change the width & height */
            width: 600px;
            height: 300px;
        }
        table { width: 100% }
        td.nowrap {
            overflow: hidden;
            white-space: nowrap;
        }

        <div class="container">
          <table>
            <tr>
              <td>From</td>
              <td>Subject</td>
              <td>Date</td>
            </tr>
            <tr>
              <td>Bob Smith</td>
              <td class="nowrap">
                <strong>Message subject</strong>
                <span>This is a preview of the message body and could be long.</span>
              </td>
              <td>2010-03-30 02:18AM</td>
            </tr>
          </table>
        </div>

    Is there a way using CSS to solve this? If I had a fixed table cell size, then overflow: hidden would truncate anything that flows over, but I can't use a fixed size as I want the table to stretch with the UI.Layout panel size. If not, then how would I solve this with jQuery? My use case is similar to the Gmail interface, where an email subject is bolded and the beginning of the message body is shown, but then truncated to fit.

    Read the article

  • ODI 11g – Oracle Multi Table Insert

    - by David Allan
    With the IKM Oracle Multi Table Insert you can generate Oracle-specific DML for inserting into multiple target tables from a single query result, without reprocessing the query or staging its result. When designing this to exploit the IKM you must split the problem into the reusable parts: the select part goes in one interface (I named it SELECT_PART), then each target goes in a separate interface (INSERT_SPECIAL and INSERT_REGULAR). So for my statement below...

        /* INSERT_SPECIAL interface */
        insert all
          when 1=1 And (INCOME_LEVEL > 250000) then
            into SCOTT.CUSTOMERS_NEW
              (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT,
               EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
            values
              (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT,
               EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
        /* INSERT_REGULAR interface */
          when 1=1 then
            into SCOTT.CUSTOMERS_SPECIAL
              (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT,
               EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
            values
              (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT,
               EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
        /* SELECT_PART interface */
        select
            CUSTOMERS.EMAIL EMAIL,
            CUSTOMERS.CREDIT_LIMIT CREDIT_LIMIT,
            UPPER(CUSTOMERS.NAME) NAME,
            CUSTOMERS.USER_MODIFIED USER_MODIFIED,
            CUSTOMERS.DATE_MODIFIED DATE_MODIFIED,
            CUSTOMERS.BIRTH_DATE BIRTH_DATE,
            CUSTOMERS.MARITAL_STATUS MARITAL_STATUS,
            CUSTOMERS.ID ID,
            CUSTOMERS.USER_CREATED USER_CREATED,
            CUSTOMERS.GENDER GENDER,
            CUSTOMERS.DATE_CREATED DATE_CREATED,
            CUSTOMERS.INCOME_LEVEL INCOME_LEVEL
        from
            SCOTT.CUSTOMERS CUSTOMERS
        where
            (1=1)

    Firstly I create a SELECT_PART temporary interface for the query to be reused, and in the IKM assignment I state that it is defining the query, it is not a target, and it should not be executed. Then in my INSERT_SPECIAL interface, loading a target with a filter, I set define query to false, then set true for the target table and execute to false. This interface uses the SELECT_PART query definition interface as a source. Finally, in my final interface loading another target, I set define query to false again, set target table to true and execute to true: this is the "go run it" indicator! To coordinate the statement construction you will need to create a package with the select and insert statements. With 11g you can now execute the package in simulation mode and preview the generated code including the SQL statements. Hopefully this helps shed some light on how you can leverage the Oracle MTI statement. A similar IKM exists for Teradata. The ODI IKM Teradata Multi Statement supports this multi-statement request in 11g; here is an extract from the paper at www.teradata.com/white-papers/born-to-be-parallel-eb3053/

    "Teradata Database offers an SQL extension called a Multi-Statement Request that allows several distinct SQL statements to be bundled together and sent to the optimizer as if they were one. Teradata Database will attempt to execute these SQL statements in parallel. When this feature is used, any sub-expressions that the different SQL statements have in common will be executed once, and the results shared among them."

    It works in the same way as the ODI MTI IKM: multiple interfaces orchestrated in a package, each interface contributes some SQL, and the last interface in the chain executes the multi statement.

    Read the article

  • Converting table columns to key value pairs

    - by TomD1
    I am writing a PL/SQL procedure that loads some data from Schema A into Schema B. They are both very different schemas and I can't change the structure of Schema B. Columns in various tables in Schema A (joined together in a view) need to be inserted into Schema B as key=value pairs in 2 columns of a table, each on a separate row. For example, an employee's first name might be present as employee.firstname in Schema A, but would need to be entered in Schema B as:

        id=>1, key=>'A123', value=>'Smith'

    There are almost 100 keys, with the potential for more to be added in future. This means I don't really want to hardcode any of these keys. Sample code:

        create table schema_a_employees (
          emp_id    number(8,0),
          firstname varchar2(50),
          surname   varchar2(50)
        );

        insert into schema_a_employees values ( 1, 'James', 'Smith' );
        insert into schema_a_employees values ( 2, 'Fred', 'Jones' );

        create table schema_b_values (
          emp_id    number(8,0),
          the_key   varchar2(5),
          the_value varchar2(200)
        );

    I thought an elegant solution would most likely involve a lookup table to determine what value to insert for each key, and would not involve effectively hardcoding dozens of similar statements like...

        insert into schema_b_values values ( 1, 'A123', v_firstname );
        insert into schema_b_values values ( 1, 'B123', v_surname );

    What I'd like to be able to do is have a local lookup table in Schema A that lists all the keys from Schema B, along with a column that gives the name of the column in the table in Schema A that should be used to populate it, e.g. key "A123" in Schema B should be populated with the value of the column "firstname" in Schema A, e.g.

        create table schema_a_lookup (
          the_key              varchar2(5),
          the_local_field_name varchar2(50)
        );

        insert into schema_a_lookup values ( 'A123', 'firstname' );
        insert into schema_a_lookup values ( 'B123', 'surname' );

    But I'm not sure how I could dynamically use the values from the lookup table to tell Oracle which columns to use. So my question is: is there an elegant solution to populate the schema_b_values table with the data from schema_a_employees without hardcoding for every possible key (i.e. A123, B123, etc)? Cheers.
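
    Not from the original post, but a hedged sketch of one way to drive the load from the lookup table with a single INSERT ... SELECT, using only the sample tables above. Each key still needs a WHEN branch naming its source column, unless the statement is generated dynamically (for example built as a string and run with EXECUTE IMMEDIATE in PL/SQL):

        -- Sketch only: maps each lookup row to the matching column via CASE.
        insert into schema_b_values (emp_id, the_key, the_value)
        select e.emp_id,
               l.the_key,
               case l.the_local_field_name
                 when 'firstname' then e.firstname
                 when 'surname'   then e.surname
               end
        from   schema_a_employees e
               cross join schema_a_lookup l;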

    Read the article

  • How do you update table row background color in IE using Javascript/CSS ?

    - by giulio
    I have a table whose rows I want to "highlight" during onmouseover/onmouseout. I already know this is required in IE but not in other browsers. I have managed to detect the events triggering, and this TR tag effectively works (note that the originating class "contentTableRow" doesn't seem to be causing any issues):

        class="contentTableRow" onclick="openForm('SomeID');"
        onmouseover="highlight('someRowID', true);"
        onmouseout="highlight('someRowID', false);"
        id="someRowID"

    All is fine and dandy: the "highlight" function fires and actually sets the appropriate class. It's just that IE won't process the CSS class name change. Here is a snippet of the CSS I am using to make the change:

        .HighlightOn {
          cursor: pointer;
          background-color: #D1DFFF;
        }
        .HighlightOff {
          background-color: #E1EEFE;
        }

    I can see that the class names are getting updated when I debug it, and I have also checked it in Firebug. But it seems that IE doesn't like this usage of classes with a TR tag. Is it the way I am structuring the class for tables? Any advice?

    Read the article

  • Why do Firefox and Opera ignore max-width inside of display: table-cell?

    - by brad
    The following code displays correctly in Chrome or IE (the image is 200px wide). In Firefox and Opera the max-width style is ignored completely. Why does this happen and is there a good work around? Also, which way is most standards compliant?

    Note: One possible work around for this particular situation is to set max-width to 200px. However, this is a rather contrived example. I'm looking for a strategy for a variable width container.

        <!doctype html>
        <html>
        <head>
          <style>
            div {
              display: table-cell;
              padding: 15px;
              width: 200px;
            }
            div img {
              max-width: 100%;
            }
          </style>
        </head>
        <body>
          <div>
            <img src="http://farm4.static.flickr.com/3352/4644534211_b9c887b979.jpg" />
            <p>
              Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec facilisis ante,
              facilisis posuere ligula feugiat ut. Fusce hendrerit vehicula congue. at ligula
              dolor. Lorem ipsum dolor sit amet, consectetur adipiscing elit. leo metus,
              aliquam eget convallis eget, molestie at massa.
            </p>
          </div>
        </body>
        </html>

    [Update] As stated by mVChr below, the w3.org spec states that max-width does not apply to inline elements. I've tried using div img { max-width: 100%; display: block; }, but it does not seem to correct the issue.

    Read the article

  • Using JQuery to insert new cells in a table row?

    - by Michael Smith
    I know that jQuery is a very powerful library and was just wondering if it has the following capability, which I really need. Let's say I need to insert new cells into a table row. I know how to do this basic task, but I need to insert my cells in a highly unusual way due to some of the requirements for the new cells. I need to be able to insert cells a certain distance into the row. For example, if a row was 1000 pixels wide, is there a feature in jQuery that would allow me to insert a cell 250 pixels into the row with a cell width of 50 pixels, and insert another cell 500 pixels into the row with a cell width of 100 pixels? I know how to set a cell's width using jQuery, just not its distance into a row. The values won't ever be exactly the same as above, though, because they are actually read from a database. So, for example, one cell would have the following values:

        CELL_01  $start=100;  $finish=150;

    The above means a new cell is needed that is inserted 100 pixels into the row and has a width of 50 pixels. I just can't seem to find a way to implement this feature in my application. How could I accomplish this task? Sorry for such a strange question, but I just can't seem to get this working. Thanks!

    Read the article

  • Grouping Rows with Client Side HTML Table Sorting

    - by Alan Storm
    Are there any existing table-sorting libraries, or is there a way to configure tablesorter, to sort every two rows? Alternately, is there a better way to semantically express my table so that standard row sorting will work? I have an HTML table that looks something like this:

        <table>
          <thead>
            <tr>
              <th>Header 1</th>
              <th>Header 2</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>Some Data: 1</td>
              <td>Some More Data: 1</td>
            </tr>
            <tr>
              <td colspan="2">Some text about the above data that puts it in context and visually spans under both of the cells above so as not to create a weird looking table</td>
            </tr>
            <tr>
              <td>Some Data: 2</td>
              <td>Some More Data: 2</td>
            </tr>
            <tr>
              <td colspan="2">Some text about the above data 2 set that puts it in context and visually spans under both of the cells above so as not to create a weird looking table</td>
            </tr>
          </tbody>
        </table>

    I'm looking for a way to sort the table such that the table is sorted by the data rows, but the row with the colspan travels with its data row and is not sorted separately.

    Read the article

  • MySQL multidimensional arrays...

    - by jay
    What is the best way to store data that is dynamic in nature using MySQL? Let's say I have a table in which one item is "dynamic". For some entries I need to store one value, but for others it could be one hundred values. For example, let's say I have the following simple table:

        CREATE TABLE manager (
          name char(50),
          worker_1_name char(50),
          worker_2_name char(50),
          ...
          worker_N_name char(50)
        );

    Clearly, this is not an ideal way to set up a database. Because I have to accommodate the largest group that a manager could potentially have, I am wasting a lot of space in the database. What I would prefer is to have a table that I can use as a member of another table (like I would do in C++ through inheritance) that can be used by the "manager" table to handle the variable number of employees. It might look something like this:

        CREATE TABLE manager (
          name char(50),
          underlings WORKERS
        );

        CREATE TABLE WORKERS (
          name char(50)
        );

    I would like to be able to add a variable number of workers to each manager. Is this possible, or am I constrained to enumerating all the possible numbers of employees even though I will use the full complement only rarely?
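
    For what it's worth, the usual relational shape for this (a sketch, not code from the question; 'Alice' is a made-up manager name) is a separate worker table with a foreign key back to manager, so each manager can have any number of workers without empty columns:

        -- Sketch: one row per worker, linked to its manager (MySQL).
        CREATE TABLE manager (
          manager_id INT AUTO_INCREMENT PRIMARY KEY,
          name       CHAR(50)
        );

        CREATE TABLE worker (
          worker_id  INT AUTO_INCREMENT PRIMARY KEY,
          manager_id INT NOT NULL,
          name       CHAR(50),
          FOREIGN KEY (manager_id) REFERENCES manager (manager_id)
        );

        -- All underlings of one manager:
        SELECT w.name
        FROM   worker w
        JOIN   manager m ON m.manager_id = w.manager_id
        WHERE  m.name = 'Alice';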

    Read the article

  • How do I link one listview to another to create a football league table?

    - by Richard Nixon
    Hi, I am creating a football system in Windows Forms C#. I have one ListView where data is entered. I have 4 columns: 2 with team names linked to combo boxes and 2 with the scores linked to NumericUpDown controls. There are 3 buttons to add the result, remove and clear. The code is below:

        private void addButton_Click(object sender, EventArgs e)
        {
            ListViewItem item = new ListViewItem(comboBox1.SelectedItem.ToString());
            item.SubItems.Add(numericUpDown1.Value.ToString());
            item.SubItems.Add(numericUpDown2.Value.ToString());
            item.SubItems.Add(comboBox2.SelectedItem.ToString());
            listView1.Items.Add(item);
        }

        private void clearButton_Click(object sender, EventArgs e)
        {
            listView1.Items.Clear();
        }

        private void removeButton_Click(object sender, EventArgs e)
        {
            foreach (ListViewItem itemSelected in listView1.SelectedItems)
            {
                listView1.Items.Remove(itemSelected);
            }
        }

    I have another ListView that I want to link the first one to. The second one is a usual English football league table, and I want to use maths to add up the games played, the points, etc. Please help. Cheers

    Read the article

  • How can I use single-table inheritance and a single controller to make this more DRY?

    - by Angela
    I have three models, Calls, Emails, and Letters, and those are basically templates of what gets sent to individuals, modeled as Contacts. When a Call is made, a row in the model ContactCalls gets created. If an Email is sent, an entry in ContactEmails is made. Each has its own controller: contact_calls_controller.rb and contact_emails_controller.rb. I would like to create a single-table inheritance model called ContactEvents which has types Calls, Emails, and Letters. But I'm not clear how I pass the type information or how to consolidate the controllers. Here are the two controllers I have; as you can see, there's a lot of duplication, but some differences that need to be preserved. In the case of letters and postcards (another model), it's even more so.

        class ContactEmailsController < ApplicationController
          def new
            @contact_email = ContactEmail.new
            @contact_email.contact_id = params[:contact]
            @contact_email.email_id = params[:email]
            @contact = Contact.find(params[:contact])
            @company = Company.find(@contact.company_id)
            contacts = @company.contacts.collect(&:full_name)
            contacts.each do |contact|
              @colleagues = contacts.reject{ |c| c == @contact.full_name }
            end
            @email = Email.find(@contact_email.email_id)
            @contact_email.subject = @email.subject
            @contact_email.body = @email.message
            @email.message.gsub!("{FirstName}", @contact.first_name)
            @email.message.gsub!("{Company}", @contact.company_name)
            @email.message.gsub!("{Colleagues}", @colleagues.to_sentence)
            @email.message.gsub!("{NextWeek}", (Date.today + 7.days).strftime("%A, %B %d"))
            @contact_email.status = "sent"
          end

          def create
            @contact_email = ContactEmail.new(params[:contact_email])
            @contact = Contact.find_by_id(@contact_email.contact_id)
            @email = Email.find_by_id(@contact_email.email_id)
            if @contact_email.save
              flash[:notice] = "Successfully created contact email."
              # send email using class in outbound_mailer.rb
              OutboundMailer.deliver_campaign_email(@contact, @contact_email)
              redirect_to todo_url
            else
              render :action => 'new'
            end
          end
        end

    AND:

        class ContactCallsController < ApplicationController
          def new
            @contact_call = ContactCall.new
            @contact_call.contact_id = params[:contact]
            @contact_call.call_id = params[:call]
            @contact_call.status = params[:status]
            @contact = Contact.find(params[:contact])
            @company = Company.find(@contact.company_id)
            @contact = Contact.find(@contact_call.contact_id)
            @call = Call.find(@contact_call.call_id)
            @contact_call.title = @call.title
            contacts = @company.contacts.collect(&:full_name)
            contacts.each do |contact|
              @colleagues = contacts.reject{ |c| c == @contact.full_name }
            end
            @contact_call.script = @call.script
            @call.script.gsub!("{FirstName}", @contact.first_name)
            @call.script.gsub!("{Company}", @contact.company_name)
            @call.script.gsub!("{Colleagues}", @colleagues.to_sentence)
          end

          def create
            @contact_call = ContactCall.new(params[:contact_call])
            if @contact_call.save
              flash[:notice] = "Successfully created contact call."
              redirect_to contact_path(@contact_call.contact_id)
            else
              render :action => 'new'
            end
          end
        end

    Read the article

  • table subtraction challenge

    - by Valentin
    I have a challenge that I haven't overcome in the last two days using stored procedures and SQL 2008. I took several approaches but all fell short. One very interesting approach was using a table subtraction; it's really all about table subtraction. I was wondering if you could help me crack this one. Here is the challenge: two tables, 1Testdb and 2Testdb. My first step was to select ID relationships ([2Testdb].Acc_id) in table 2Testdb for one given individual ([2Testdb].Bus_id), then query table 1Testdb for records not matching my original selection from 2Testdb. But other approaches are welcome.

    Data and structures:

        USE [Challengedb]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[1Testdb](
            [Acc_id] [uniqueidentifier] NULL,
            [Name]   [varchar](10) NULL
        ) ON [PRIMARY]
        GO
        CREATE TABLE [dbo].[2Testdb](
            [Acc_id] [uniqueidentifier] NULL,
            [Bus_id] [uniqueidentifier] NULL
        ) ON [PRIMARY]
        GO

    Records in 1Testdb:

        34455F60-9474-4521-804E-66DB39A579F3, John
        C23523F6-2309-4F58-BB3F-EF7486C7AF8B, Pete
        DC711615-3BE4-4B31-9EF2-B1314185CA62, Dave
        E3AAB073-2398-476D-828B-92829F686A4C, Adam

    Records in 2Testdb (relationship table, e.g. friend relationships):

        Record #1: DC711615-3BE4-4B31-9EF2-B1314185CA62, 34455F60-9474-4521-804E-66DB39A579F3
        Record #2: E3AAB073-2398-476D-828B-92829F686A4C, 34455F60-9474-4521-804E-66DB39A579F3
        Record #3: DC711615-3BE4-4B31-9EF2-B1314185CA62, E3AAB073-2398-476D-828B-92829F686A4C
        Record #4: E3AAB073-2398-476D-828B-92829F686A4C, DC711615-3BE4-4B31-9EF2-B1314185CA62

    Challenge: select from table 1Testdb only those distinct records that do not have a relationship with John [34455F60-9474-4521-804E-66DB39A579F3] in table 2Testdb. The expected result (who does John not have a relationship with?) should be:

        C23523F6-2309-4F58-BB3F-EF7486C7AF8B, Pete

    Thank you, Valentin
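
    Not part of the original post, but a hedged sketch of one way to phrase the subtraction as an anti-join over the tables above, using John's Acc_id as given; it also excludes John himself, which matches the expected result:

        -- Sketch: rows in 1Testdb with no relationship row pairing them with John.
        DECLARE @john uniqueidentifier = '34455F60-9474-4521-804E-66DB39A579F3';

        SELECT DISTINCT t1.Acc_id, t1.Name
        FROM   [dbo].[1Testdb] t1
        WHERE  t1.Acc_id <> @john
          AND  NOT EXISTS (SELECT 1
                           FROM   [dbo].[2Testdb] t2
                           WHERE  (t2.Acc_id = t1.Acc_id AND t2.Bus_id = @john)
                              OR  (t2.Bus_id = t1.Acc_id AND t2.Acc_id = @john));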

    Read the article

  • Get count of rows in each table while having more than 1 table

    - by sneha khan
    I have more than one table on the same page and want to add a line showing the row count of each table, as below. I tried something, but it gives the sum of the row counts of all the tables.

        <table>
          <tr>
            <td>Some data</td>
            <td>More data</td>
          </tr>
          <tr>
            <td>Some data</td>
            <td>More data</td>
          </tr>
        </table>

        <table>
          <tr>
            <td>Some data</td>
            <td>More data</td>
          </tr>
          <tr>
            <td>Some data</td>
            <td>More data</td>
          </tr>
        </table>

        <script type="text/javascript" src="jquery.js"></script>
        <script type="text/javascript">
        $().ready(function(){
            // I want to add a line after each table showing each table's row count
            $("table").after(??? + " rows found.");
        });
        </script>

    Read the article

  • MEMORY(HEAP) vs. InnoDB in a Read and Write Environment

    - by Johannes
    I want to program a real-time application using MySQL. It needs a small table (less than 10000 rows) that will be under heavy read (scan) and write (update and some insert/delete) load. I am really speaking of 10000 updates or selects per second. These statements will be executed on only a few (less than 10) open mysql connections. The table is small and does not contain any data that needs to be stored on disk. So I ask which is faster: InnoDB or MEMORY (HEAP)? My thoughts are: Both engines will probably serve SELECTs directly from memory, as even InnoDB will cache the whole table. What about the UPDATEs? (innodb_flush_log_at_trx_commit?) My main concern is the locking behavior: InnoDB row lock vs. MEMORY table lock. Will this present the bottleneck in the MEMORY implementation? Thanks for your thoughts!
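
    Not part of the question, but one hedged way to answer the locking concern empirically is to watch MySQL's table-lock counters while the workload runs (the names below are standard status variables):

        SHOW GLOBAL STATUS LIKE 'Table_locks_waited';
        SHOW GLOBAL STATUS LIKE 'Table_locks_immediate';
        -- A steadily climbing Table_locks_waited under the MEMORY variant suggests
        -- its table-level lock is the bottleneck; InnoDB row-lock contention shows
        -- up in SHOW ENGINE INNODB STATUS instead.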

    Read the article

  • MEMORY(HEAP) vs. InnoDB in a Read and Write Environment

    - by Johannes
    I want to program a real-time application using MySQL. It needs a small table (less than 10000 rows) that will be under heavy read (scan) and write (update and some insert/delete) load. I am really speaking of 10000 updates or selects per second. These statements will be executed on only a few (less than 10) open mysql connections. The table is small and does not contain any data that needs to be stored on disk. So I ask which is faster: InnoDB or MEMORY (HEAP)? My thoughts are: Both engines will probably serve SELECTs directly from memory, as even InnoDB will cache the whole table. What about the UPDATEs? (innodb_flush_log_at_trx_commit?) My main concern is the locking behavior: InnoDB row lock vs. MEMORY table lock. Will this present the bottleneck in the MEMORY implementation? Thanks for your thoughts!

    Read the article

  • Use MySQL trigger to update another table when duplicate key found

    - by Jason
    I've been scratching my head on this one, hoping one of you kind people can direct me towards solving this problem. I have a MySQL table of customers; it contains a lot of data, but for the purpose of this question we only need to worry about 4 columns: 'ID', 'Firstname', 'Lastname', 'Postcode'. The problem is, the table contains a lot of duplicated customers. A new table is being created where each customer is unique, and for us a unique customer is based on 'Firstname', 'Lastname' and 'Postcode'. However (this is the important bit), we need to ensure each new "unique" customer record can also be matched to the original multiple entries for that customer in the original table. I believe the best way to do this is to have a third table that has 'NewUniqueID', 'OldCustomerID', so we can search this table for 'NewUniqueID' = '123' and it would return multiple 'OldCustomerID' values where appropriate. I am hoping to make this work using a trigger and the ON DUPLICATE KEY syntax. So what would happen is as follows: a query is run taking the old customer table and inserting it into the new unique table (a standard INSERT ... SELECT query). On duplicate key, continue adding records, but add one entry into the third table noting the 'NewUniqueID' that duped along with the 'OldCustomerID' of the record we were trying to insert. Hope this makes sense; my apologies if it isn't clear. I welcome and appreciate any thoughts on this one! Many thanks, Jason
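
    Not from the question, but a hedged sketch of a trigger-free alternative that produces the same mapping: insert the distinct customers first, then rebuild the old-to-new links with a join on the three uniqueness columns. The table and column names here are assumptions based on the description, and it assumes the new table has an auto-increment ID.

        -- Sketch only (MySQL), assumed table names.
        -- 1) One row per unique (Firstname, Lastname, Postcode).
        INSERT INTO customers_unique (Firstname, Lastname, Postcode)
        SELECT Firstname, Lastname, Postcode
        FROM   customers_old
        GROUP BY Firstname, Lastname, Postcode;

        -- 2) Map every old row to its new unique row.
        INSERT INTO customer_map (NewUniqueID, OldCustomerID)
        SELECT u.ID, o.ID
        FROM   customers_old o
        JOIN   customers_unique u
          ON   u.Firstname = o.Firstname
         AND   u.Lastname  = o.Lastname
         AND   u.Postcode  = o.Postcode;

    If the trigger/ON DUPLICATE KEY route is preferred, the same join can still be used afterwards to populate the mapping table.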

    Read the article
