Search Results

Search found 729 results on 30 pages for 't1'.

Page 4 of 30

  • How to search phrase queries in inverted index structure?

    - by Mehdi Amrollahi
    If we want to search for a phrase query like "t1 t2 t3" (where t1, t2, t3 must appear in that order, next to each other) in an inverted index structure, which approaches could we take? 1. First search for the term "t1" and find all documents that contain it, then do the same for "t2" and "t3", and finally keep only the documents in which the positions of "t1", "t2" and "t3" are adjacent. 2. First search for "t1" and find all documents that contain it, then search for "t2" only within those documents, and then, within that result, find the documents that contain "t3". Thanks.
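
    A minimal C# sketch of approach 1 (positional intersection), assuming each posting stores a document id plus the positions of the term in that document; the Posting type and method names are illustrative, not from the question:

      using System.Collections.Generic;
      using System.Linq;

      class Posting
      {
          public int DocId;
          public List<int> Positions = new List<int>();
      }

      static class PhraseSearch
      {
          // Keeps only documents where a position of `second` is exactly one past a
          // position of `first`; the surviving positions belong to `second`, so the
          // result can be fed straight into the next intersection for "t1 t2 t3".
          public static List<Posting> NextTo(List<Posting> first, List<Posting> second)
          {
              var byDoc = second.ToDictionary(p => p.DocId);
              var result = new List<Posting>();
              foreach (var p1 in first)
              {
                  Posting p2;
                  if (!byDoc.TryGetValue(p1.DocId, out p2)) continue;   // both terms must occur in the doc
                  var positions = p1.Positions
                      .Where(pos => p2.Positions.Contains(pos + 1))     // t2 immediately follows t1
                      .Select(pos => pos + 1)
                      .ToList();
                  if (positions.Count > 0)
                      result.Add(new Posting { DocId = p1.DocId, Positions = positions });
              }
              return result;
          }
      }

    Usage: NextTo(NextTo(postings["t1"], postings["t2"]), postings["t3"]) yields the documents containing the exact phrase "t1 t2 t3"; approach 2 is the same idea, except each term lookup is restricted to the documents that survived the previous step.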

    Read the article

  • Why does this static factory method involving implied generic types, work?

    - by Cheeso
    Consider public class Tuple<T1, T2> { public Tuple(T1 v1, T2 v2) { V1 = v1; V2 = v2; } public T1 V1 { get; set; } public T2 V2 { get; set; } } public static class Tuple { // MAGIC!! public static Tuple<T1, T2> New<T1, T2>(T1 v1, T2 v2) { return new Tuple<T1, T2>(v1, v2); } } Why does the part labeled "MAGIC" in the above work? It allows syntax like Tuple.New(1, "2") instead of new Tuple<int, string>(1, "2"), but ... how and why? Why do I not need Tuple.New<int,string>(1, "2") ??
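
    A likely explanation, sketched against the question's own Tuple types (the calls below are illustrative): C# applies type inference to generic methods, so the compiler deduces T1 and T2 from the arguments passed to New, whereas a constructor call gets no such inference, which is exactly what the non-generic helper class works around.

      var a = Tuple.New(1, "2");               // compiler infers Tuple.New<int, string>(1, "2") from the arguments
      var b = new Tuple<int, string>(1, "2");  // a constructor call must spell out the type arguments
      // var c = new Tuple(1, "2");            // does not compile: the non-generic Tuple class has no constructor to infer from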

    Read the article

  • Why a SELECT in Oracle may need to read UNDO

    - by Liu Maclean
    Oracle never returns dirty reads: as has been discussed many times on AskTom, an ordinary query builds read-consistent results by applying the before images kept in UNDO, which is why even a plain SELECT may need to read undo. Like other RDBMS products, however, Oracle does leave a few emergency back doors. Two of them are the hidden parameters _offline_rollback_segments and _corrupted_rollback_segments. They are normally set under the direction of Oracle Support to get past ORA-600[4XXX] errors or undo corruption, typically when force-opening a database (FORCE OPEN DATABASE), working around consistent read and delayed block cleanout problems, or dealing with damaged rollback segments. Warning: never set these two parameters on your own without guidance from Oracle Support! What the two parameters have in common: the undo segments (rollback segments) they list are not brought online at startup; those segments are marked OFFLINE in UNDO$; the instance will not assign new transactions to them; and any active transactions found in them are marked dead and left to SMON to recover (see the separate note on SMON recovering dead transactions). _OFFLINE_ROLLBACK_SEGMENTS (the offline undo segment list) works roughly like this: during startup and open, the undo segments it names are not onlined, a message about them goes to the alert.log and trace files, and the database still opens normally. When a block's ITL points into one of those undo segments, the segment's transaction table is still read to decide whether the transaction committed; if the status cannot be determined the transaction is treated as committed and a CR copy of the block is produced; if the undo segment is corrupted or missing, messages are written to the alert.log and processing continues. DML against the affected objects can burn a noticeable amount of CPU on this extra work. _CORRUPTED_ROLLBACK_SEGMENTS (the corrupted undo segment list) goes much further: during startup and open, the undo segments it names are treated as corrupt and are not read at all; every transaction stored in them is simply assumed to be committed, and the segments can even be dropped. That can leave the data logically inconsistent, and if bootstrap objects are involved it can end in ORA-00704: bootstrap process failure and a database that will not open (see the separate note on ORA-600[4000] / ORA-00704: bootstrap process failure). _CORRUPTED_ROLLBACK_SEGMENTS should therefore be used only as a last resort for data rescue, and Oracle recommends running the TXChecker scripts afterwards to look for problem transactions. Having introduced the two parameters, the following test uses _CORRUPTED_ROLLBACK_SEGMENTS to show that a SELECT normally does read UNDO: SQL> alter system set event= '10513 trace name context forever, level 2' scope=spfile; System altered. SQL> alter system set "_in_memory_undo"=false scope=spfile; System altered. The 10513 level 2 event stops SMON from rolling back dead transactions, and _in_memory_undo=false disables in-memory undo. SQL> startup force; ORACLE instance started. Total System Global Area 3140026368 bytes Fixed Size 2232472 bytes Variable Size 1795166056 bytes Database Buffers 1325400064 bytes Redo Buffers 17227776 bytes Database mounted. Database opened. Session A: SQL> conn maclean/maclean Connected. SQL> create table maclean tablespace users as select 1 t1 from dual connect by level<=501; Table created. SQL> exec dbms_stats.gather_table_stats('','MACLEAN'); PL/SQL procedure successfully completed.
SQL> set autotrace on; SQL> select sum(t1) from maclean; SUM(T1) ---------- 501 Execution Plan ---------------------------------------------------------- Plan hash value: 1679547536 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 1 | 3 | 3 (0)| 00:00:01 | | 1 | SORT AGGREGATE | | 1 | 3 | | | | 2 | TABLE ACCESS FULL| MACLEAN | 501 | 1503 | 3 (0)| 00:00:01 | ------------------------------------------------------------------------------ Statistics ---------------------------------------------------------- 1 recursive calls 0 db block gets 3 consistent gets 0 physical reads 0 redo size 515 bytes sent via SQL*Net to client 492 bytes received via SQL*Net from client 2 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 1 rows processed The table has just been created, so the query reads current blocks and needs no undo: consistent gets is only 3. SQL> update maclean set t1=0; 501 rows updated. SQL> alter system checkpoint; System altered. Session A does not commit. From a second session: SQL> conn maclean/maclean Connected. SQL> SQL> set autotrace on; SQL> select sum(t1) from maclean; SUM(T1) ---------- 501 Execution Plan ---------------------------------------------------------- Plan hash value: 1679547536 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 1 | 3 | 3 (0)| 00:00:01 | | 1 | SORT AGGREGATE | | 1 | 3 | | | | 2 | TABLE ACCESS FULL| MACLEAN | 501 | 1503 | 3 (0)| 00:00:01 | ------------------------------------------------------------------------------ Statistics ---------------------------------------------------------- 0 recursive calls 0 db block gets 505 consistent gets 0 physical reads 108 redo size 515 bytes sent via SQL*Net to client 492 bytes received via SQL*Net from client 2 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 1 rows processed Because this query has to apply undo to build consistent-read (CR) copies of the modified blocks, consistent gets jumps to 505. [oracle@vrh8 ~]$ ps -ef|grep LOCAL=YES |grep -v grep oracle 5841 5839 0 09:17 ? 00:00:00 oracleG10R25 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) [oracle@vrh8 ~]$ kill -9 5841 Killing session A's server process leaves its uncommitted transaction behind as a dead transaction, which SMON would normally roll back: select ktuxeusn, to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS') "Time", ktuxesiz, ktuxesta from x$ktuxe where ktuxecfl = 'DEAD'; KTUXEUSN Time KTUXESIZ KTUXESTA ---------- -------------------- ---------- ---------------- 2 06-AUG-2012 09:20:45 7 ACTIVE There is one rollback segment holding an active (dead) transaction. SQL> conn maclean/maclean Connected.
SQL> set autotrace on; SQL> select sum(t1) from maclean; SUM(T1) ---------- 501 Execution Plan ---------------------------------------------------------- Plan hash value: 1679547536 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 1 | 3 | 3 (0)| 00:00:01 | | 1 | SORT AGGREGATE | | 1 | 3 | | | | 2 | TABLE ACCESS FULL| MACLEAN | 501 | 1503 | 3 (0)| 00:00:01 | ------------------------------------------------------------------------------ Statistics ---------------------------------------------------------- 0 recursive calls 0 db block gets 411 consistent gets 0 physical reads 108 redo size 515 bytes sent via SQL*Net to client 492 bytes received via SQL*Net from client 2 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 1 rows processed Even though the server process was killed, SMON is not allowed (because of event 10513) to roll back the dead transaction, so the query still has to read undo to stay consistent. Now find the rollback segment that holds the active (dead) transaction: SQL> select segment_name from dba_rollback_segs where segment_id=2; SEGMENT_NAME ------------------------------ _SYSSMU2$ SQL> alter system set "_corrupted_rollback_segments"='_SYSSMU2$' scope=spfile; System altered. With rollback segment _SYSSMU2$ added to _corrupted_rollback_segments, the instance will no longer read undo from it. SQL> startup force; ORACLE instance started. Total System Global Area 3140026368 bytes Fixed Size 2232472 bytes Variable Size 1795166056 bytes Database Buffers 1325400064 bytes Redo Buffers 17227776 bytes Database mounted. Database opened. SQL> conn maclean/maclean Connected. SQL> set autotrace on; SQL> select sum(t1) from maclean; SUM(T1) ---------- 94 Execution Plan ---------------------------------------------------------- Plan hash value: 1679547536 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 1 | 3 | 3 (0)| 00:00:01 | | 1 | SORT AGGREGATE | | 1 | 3 | | | | 2 | TABLE ACCESS FULL| MACLEAN | 501 | 1503 | 3 (0)| 00:00:01 | ------------------------------------------------------------------------------ Statistics ---------------------------------------------------------- 228 recursive calls 0 db block gets 29 consistent gets 5 physical reads 116 redo size 514 bytes sent via SQL*Net to client 492 bytes received via SQL*Net from client 2 SQL*Net roundtrips to/from client 4 sorts (memory) 0 sorts (disk) 1 rows processed SQL> / SUM(T1) ---------- 94 Execution Plan ---------------------------------------------------------- Plan hash value: 1679547536 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 1 | 3 | 3 (0)| 00:00:01 | | 1 | SORT AGGREGATE | | 1 | 3 | | | | 2 | TABLE ACCESS FULL| MACLEAN | 501 | 1503 | 3 (0)| 00:00:01 | ------------------------------------------------------------------------------ Statistics ---------------------------------------------------------- 0 recursive calls 0 db block gets 3 consistent gets 0 physical reads 0 redo size 514 bytes sent via SQL*Net to client 492 bytes received via SQL*Net from client 2 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 1 rows processed
Consistent gets drops back to 3 and the query no longer reads any undo: because the undo segment referenced by the blocks' ITL entries appears in _corrupted_rollback_segments, the uncommitted transaction is simply treated as committed and no undo is applied. The cost is plain to see: the result (94 instead of 501) is inconsistent, effectively a dirty read, which is why these parameters are strictly a last-resort tool for data rescue, to be used under Oracle Support's guidance. It also demonstrates the original point: in normal operation a SELECT does read UNDO in order to return consistent results.

    Read the article

  • Correct configuration of multiple Analytics trackers per page, spanning domains and subdomains

    - by Eliot Shepard
    My company publishes sites on a somewhat convoluted domain structure, and we're having trouble getting accurate numbers in Analytics when we have multiple trackers on the page. We publish under two brands (A, B). Each brand has a "national" site at A.com, B.com, as well as per-city "local" sites at e.g. ny.A.com, la.A.com, sf.A.com, etc. Right now we're trying to track in these dimensions: Full network (A.com, ny.A.com, B.com, la.B.com, etc.) All sites in brand (A.com, ny.A.com, la.A.com, etc.) Individual site (ny.A.com) Here are the commands we're using on an individual site: _gaq.push( ['t0._setAccount', 'UA-XXXXXX-1'], // full network ['t0._setDomainName', 'none'], ['t0._setAllowLinker', true], ['t0._trackPageview'], ['t1._trackPageLoadTime'], ['t1._setAccount', 'UA-XXXXXX-2'], // brand ['t1._setDomainName', 'none'], ['t1._setAllowLinker', true], ['t1._trackPageview'], ['t1._trackPageLoadTime'], ['t2._setAccount', 'UA-XXXXXX-3'], // individual ['t2._setDomainName', 'none'], ['t2._setAllowLinker', true], ['t2._trackPageview'], ['t2._trackPageLoadTime'] ); We send the same commands to each account because we've had strange results when trackers were configured differently in the past. However, right now we're seeing inflated numbers for uniques on all three trackers. What is the correct way to configure this setup? Thanks for your time.

    Read the article

  • Java PHP posting using URLConnector, PHP file doesn't seem to receive parameters

    - by Emdiesse
    Hi there, I am trying to post some simple string data to my php script via a java application. My PHP script works fine when I enter the data myself using a web browser (newvector.php?x1=&y1=...) However using my java application the php file does not seem to pick up these parameters, in fact I don't even know if they are sending at all because if I comment out on of the parameters in the java code when I am writing to dta it doesn't actually return, you must enter 6 parameters. newvector.php if(!isset($_GET['x1']) || !isset($_GET['y1']) || !isset($_GET['t1']) || !isset($_GET['x2']) || !isset($_GET['y2']) || !isset($_GET['t2'])){ die("You must include 6 parameters within the URL: x1, y1, t1, x2, y2, t2"); } $x1 = $_GET['x1']; $x2 = $_GET['x2']; $y1 = $_GET['y1']; $y2 = $_GET['y2']; $t1 = $_GET['t1']; $t2 = $_GET['t2']; $insert = " INSERT INTO vectors( x1, x2, y1, y2, t1, t2 ) VALUES ( '$x1', '$x2', '$y1', '$y2', '$t1', '$t2' ) "; if(!mysql_query($insert, $conn)){ die('Error: ' . mysql_error()); } echo "Submitted Data x1=".$x1." y1=".$y1." t1=".$t1." x2=".$x2." y2=".$y2." t2=".$t2; include 'db_disconnect.php'; ?> The java code else if (action.equals("Play")) { for (int i = 0; i < 5; i++) { // data.size() String x1, y1, t1, x2, y2, t2 = ""; String date = "2010-04-03 "; String ms = ".0"; x1 = data.elementAt(i)[1]; y1 = data.elementAt(i)[0]; t1 = date + data.elementAt(i)[2] + ms; x2 = data.elementAt(i)[4]; y2 = data.elementAt(i)[3]; t2 = date + data.elementAt(i)[5] + ms; try { //Create Post String String dta = URLEncoder.encode("x1", "UTF-8") + "=" + URLEncoder.encode(x1, "UTF-8"); dta += "&" + URLEncoder.encode("y1", "UTF-8") + "=" + URLEncoder.encode(y1, "UTF-8"); dta += "&" + URLEncoder.encode("t1", "UTF-8") + "=" + URLEncoder.encode(t1, "UTF-8"); dta += "&" + URLEncoder.encode("x2", "UTF-8") + "=" + URLEncoder.encode(x2, "UTF-8"); dta += "&" + URLEncoder.encode("y2", "UTF-8") + "=" + URLEncoder.encode(y2, "UTF-8"); dta += "&" + URLEncoder.encode("t2", "UTF-8") + "=" + URLEncoder.encode(t2, "UTF-8"); System.out.println(dta); // Send Data To Page URL url = new URL("http://localhost/newvector.php"); URLConnection conn = url.openConnection(); conn.setDoOutput(true); OutputStreamWriter wr = new OutputStreamWriter(conn.getOutputStream()); wr.write(dta); wr.flush(); // Get The Response BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream())); String line; while ((line = rd.readLine()) != null) { System.out.println(line); //you Can Break The String Down Here } wr.close(); rd.close(); } catch (Exception exc) { System.out.println("Hmmm!!! " + exc.getMessage()); } }

    Read the article

  • error in C# code

    - by user318068
    hi all . I have a problem with my code in C# . if i click in compiler button , I get the following errors 'System.Collections.Generic.LinkedList' does not contain a definition for 'removeFirst' and no extension method 'removeFirst' accepting a first argument of type 'System.Collections.Generic.LinkedList' could be found (are you missing a using directive or an assembly reference?). and 'System.Collections.Generic.LinkedList' does not contain a definition for 'addLast' and no extension method 'addLast' accepting a first argument of type 'System.Collections.Generic.LinkedList' could be found (are you missing a using directive or an assembly reference?) This is part of a simple program using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace Hanoi { public class Sol { public LinkedList<int?> t1 = new LinkedList<int?>(); public LinkedList<int?> t2 =new LinkedList<int?>(); public LinkedList<int?> t3 =new LinkedList<int?>(); public int depth; public LinkedList<Sol> neighbors; public Sol(LinkedList<int?> t1, LinkedList<int?> t2, LinkedList<int?> t3) { this.t1 = t1; this.t2 = t2; this.t3 = t3; neighbors = new LinkedList<Sol>(); } public virtual void getneighbors() { Sol temp = this.copy(); Sol neighbor1 = this.copy(); Sol neighbor2 = this.copy(); Sol neighbor3 = this.copy(); Sol neighbor4 = this.copy(); Sol neighbor5 = this.copy(); Sol neighbor6 = this.copy(); if (temp.t1.Count != 0) { if (neighbor1.t2.Count != 0) { if (neighbor1.t1.First.Value < neighbor1.t2.First.Value) { neighbor1.t2.AddFirst(neighbor1.t1.RemoveFirst()); neighbors.AddLast(neighbor1); } } else { neighbor1.t2.AddFirst(neighbor1.t1.RemoveFirst()); neighbors.AddLast(neighbor1); } if (neighbor2.t3.Count != 0) { if (neighbor2.t1.First.Value < neighbor2.t3.First.Value) { neighbor2.t3.AddFirst(neighbor2.t1.RemoveFirst()); neighbors.AddLast(neighbor2); } } else I hope that you find someone to help me
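
    For reference, .NET's LinkedList<T> spells these members AddLast/AddFirst/RemoveFirst (Pascal case), and RemoveFirst() returns void rather than the removed element, so the Java-style pattern t2.AddFirst(t1.RemoveFirst()) needs a small rewrite; a sketch of the move operation (the MoveTop helper is illustrative, not from the original code):

      // Move the top disc from one peg to another and record the new state;
      // `from`, `to` and `neighbor` correspond to the fields used in the question.
      static void MoveTop(LinkedList<int?> from, LinkedList<int?> to,
                          LinkedList<Sol> neighbors, Sol neighbor)
      {
          int? top = from.First.Value;   // peek the value first...
          from.RemoveFirst();            // ...because RemoveFirst() returns void in .NET
          to.AddFirst(top);
          neighbors.AddLast(neighbor);   // AddLast exists; lower-case addLast does not
      }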

    Read the article

  • how to join 3 relational tables

    - by orioncabbar
    Hello there, how do I join 3 relational tables with this structure: t1 | id t2 | id | rating t3 | source_id | relation t3 stores the data of a field that both t1 and t2 use, so the source_id field can be either a t1 id or a t2 id. Input: a t1 id. Output: the t2 rating. An example: **t1** id | --------- 42 | **t2** id | rating ------------- 37 | 9.2 **t3** id | source_id -------------- 42 | 1 37 | 1 26 | 2 23 | 1 What I want is to get the output 9.2 for the input 42. Can you do that in one SQL query?

    Read the article

  • problem with join SQL Server 2000

    - by eyalb
    I have 3 tables - Items, Props, Items_To_Props. I need to return all items that match all of the properties that I send. Example: items 1 2 3 4 props T1 T2 T3 items_to_props 1 T1 1 T2 1 T3 2 T1 3 T1 When I send T1,T2 I need to get only item 1.

    Read the article

  • Does Oracle re-hash the driving table for each join on the same table columns?

    - by thecoop
    Say you've got the following query on 9i: SELECT /*+ USE_HASH(t2 t3) */ * FROM table1 t1 -- this has lots of rows LEFT JOIN table2 t2 ON t1.col1 = t2.col1 AND t1.col2 = t2.col2 LEFT JOIN table3 t3 ON t1.col1 = t3.col1 AND t1.col2 = t3.col2 Due to 9i not having RIGHT OUTER HASH JOIN, it needs to hash table1 for both joins. Does it re-hash table1 between joining t2 and t3 (even though it's using the same join columns), or does it keep the same hash information for both joins?

    Read the article

  • need help to construct query

    - by Learner
    I have the following result set and I would like to construct the select query from it in Java. Please help me with how to go about it. tablename columnname size order employee name 25 1 employee sex 25 2 employee contactNumber 50 3 employee salary 25 4 address street 25 5 address country 25 6 From this I would like to construct a query like: select T1.name, T1.sex, T1.contactNumber, T1.salary, T2.street, T2.country from tablename1[employee] T1, tablename2[address] T2 How do I construct the above query in Java, given that there can be N table names and the columns can also be N? Please help me to achieve the above. Thanks and regards.

    Read the article

  • Sorting linked lists in Pascal

    - by user3712174
    I'm doing my final project for Informatics class and I can't get my sorting procedure to work. Have a look at my program, specifically the bolded part (some things are in Croatian. - if you need something translated, let me know): type pokazivac=^slog; slog=record prezime_ime:string[30]; redni_broj:string[2]; fakultet:string[50]; bodovi:integer; sljedeci:pokazivac; end; var pocetni, trenutni, prethodni:pokazivac; i:integer; procedure racunaj; var i,a,c:integer; b,d,e,f,g,h,j:real; begin write('Postotak bodova (u decimalnom zapisu) koje ucenik ostvaruje na temelju prosjeka ocjena - '); readln(e); e:=e*1000/4; write('Prosjek ocjena u prvom razredu : '); readln(f); f:=f/5*e; write('Prosjek ocjena u drugom razredu : '); readln(g); g:=g/5*e; write('Prosjek ocjena u trecem razredu : '); readln(h); h:=h/5*e; write('Prosjek ocjena u cetvrtom razredu : '); readln(j); j:=j/5*e; d:=f+g+h+j; write('Broj predmeta (ne racunajuci hrvatski jezik, strani jezik i matematiku) koju je ucenik/ca polagao na maturi - '); readln(a); write('Postotak rijesnosti ispita iz hrvatskog jezika te zatim maksimum bodova koje je ucenik/ca mogao ostvariti - '); readln(b); readln(c); d:=d+b*c; write('Postotak rijesnosti ispita iz stranog jezika te zatim maksimum bodova koje je ucenik/ca mogao ostvariti - '); readln(b); readln(c); d:=d+(b*c); write('Postotak rijesnosti ispita iz matematike te zatim maksimum bodova koje je ucenik/ca mogao ostvariti - '); readln(b); readln(c); d:=d+(b*c); for i:=1 to a do begin writeln('Postotak rijesnosti dodatnog predmeta te zatim maksimum bodova koje je ucenik/ca mogao ostvariti - '); readln(b); readln(c); d:=d+(b*c); end; d:=round(d); writeln('Vas broj bodova je: ', d:4:2); write('Za nastavak pritisnite ENTER..'); readln; end; procedure unos; begin new(trenutni); write('Redni broj ucenika - ');readln(trenutni^.redni_broj); write('Prezime i ime - ');readln(trenutni^.prezime_ime); write('Naziv fakultet - ');readln(trenutni^.fakultet); write('Bodovi - ');readln(trenutni^.bodovi); trenutni^.sljedeci:=pocetni; pocetni:=trenutni; end; procedure ispis; begin writeln(); writeln('Lista popisanih ucenika:'); writeln(); trenutni:=pocetni; while trenutni<>NIL do begin with trenutni^do begin writeln('IME: ',prezime_ime); writeln('FAKULTET: ',fakultet); writeln('BODOVI: ',bodovi); writeln(); end; trenutni:=trenutni^.sljedeci; end; writeln(); write('Za nastavak pritisnite ENTER..'); readln; end; procedure brisi; var s:string; begin trenutni:= pocetni; prethodni:=pocetni; write('Redni broj ucenika kojeg zelite izbrisati - '); readln(s); while trenutni<>NIL do begin if trenutni^.redni_broj=s then begin prethodni^.sljedeci:=trenutni^.sljedeci; dispose(trenutni); break; end; trenutni:=trenutni^.sljedeci; end; end; procedure izmjeni; var s:string; begin trenutni:=pocetni; write('Redni broj ucenika cije podatke zelite izmijeniti - '); readln(s); while trenutni<> NIL do begin if trenutni^.redni_broj=s then begin write(trenutni^.prezime_ime, ' - '); readln(trenutni^.prezime_ime); write(trenutni^.fakultet, ' - '); readln(trenutni^.fakultet); write(trenutni^.bodovi, ' - '); readln(trenutni^.bodovi); break; end; trenutni:=trenutni^.sljedeci; end; end; **procedure sortiraj; var t1,t2,t:pokazivac; begin t1:=pocetni; while t1 <> NIL do begin t2:=t1^.sljedeci; while t2<>NIL do if t2^.bodovi<t1^.bodovi then begin new(t); t^.redni_broj:=t1^.redni_broj; t^.prezime_ime:=t1^.prezime_ime; t^.fakultet:=t1^.fakultet; t^.bodovi:=t1^.bodovi; t1^.redni_broj:=t2^.redni_broj; t1^.prezime_ime:=t2^.prezime_ime; 
t1^.fakultet:=t2^.fakultet; t1^.bodovi:=t2^.bodovi; t2^.redni_broj:=t^.redni_broj; t2^.prezime_ime:=t^.prezime_ime; t2^.fakultet:=t^.fakultet; t2^.bodovi:=t^.bodovi; dispose(t); end; t2:=t2^.sljedeci; end; t1:=t1^.sljedeci; write('Za nastavak pritisnite ENTER..'); readln; end;** begin pocetni:=NIL; trenutni:=NIL; writeln('******************************************'); writeln('**********DOBRODOSLI U FAX-O-MAT**********'); writeln('******************************************'); repeat writeln('1 - Racunaj broj bodova'); writeln('2 - Dodaj ucenika'); writeln('3 - Brisi ucenika'); writeln('4 - Ispis liste'); writeln('5 - Izmjeni podatke'); writeln('6 - Sortiraj listu prema broju bodova'); writeln('0 - Kraj'); readln(i); case i of 1:racunaj; 2:unos; 3:brisi; 4:ispis; 5:izmjeni; 6:sortiraj; end; until i=0; end. Either it crashes with a fatal error, or when I press the number 6, nothing happens. The pointer keeps blinking and I can't enter any more numbers.

    Read the article

  • Does it make sense to replace sub-queries by join?

    - by Roman
    For example, I have a query like this: select col1 from t1 where col2>0 and col1 in (select col1 from t2 where col2>0) As far as I understand, I can replace it with the following query: select t1.col1 from t1 join (select col1 from t2 where col2>0) as t2 on t1.col1=t2.col1 where t1.col2>0 ADDED: In some answers I see join, in others inner join. Are both right? Or are they actually identical?

    Read the article

  • Tuple in C# 4.0

    - by Jalpesh P. Vadgama
    The C# 4.0 release includes a new feature called Tuple. A Tuple gives us a way of grouping elements of different data types, which is handy in plenty of practical situations, for example storing the coordinates of a graph. In C# 4.0 we create a Tuple with the Create method. The Create method offers 8 overloads, so you can group up to 8 values in a single Tuple. The overloads are: Create(T1) - which represents a tuple of size 1 Create(T1,T2) - which represents a tuple of size 2 Create(T1,T2,T3) - which represents a tuple of size 3 Create(T1,T2,T3,T4) - which represents a tuple of size 4 Create(T1,T2,T3,T4,T5) - which represents a tuple of size 5 Create(T1,T2,T3,T4,T5,T6) - which represents a tuple of size 6 Create(T1,T2,T3,T4,T5,T6,T7) - which represents a tuple of size 7 Create(T1,T2,T3,T4,T5,T6,T7,T8) - which represents a tuple of size 8 Here is some example code for a tuple. using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace TupleExample { class Program { static void Main(string[] args) { var tuple = System.Tuple.Create<string, string, string>("Jalpesh", "P", "Vadgama"); Console.WriteLine(tuple); var t = System.Tuple.Create<int, string>(1, "Jalpesh"); Console.WriteLine(t); } } } The output is as expected. You can also access the values inside a Tuple through the ItemN properties, where N is the position of the item in the tuple. Here is an example. using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace TupleExample { class Program { static void Main(string[] args) { var tuple = System.Tuple.Create<string, string, string>("Jalpesh", "P", "Vadgama"); Console.WriteLine(tuple.Item1); Console.WriteLine(tuple.Item2); Console.WriteLine(tuple.Item3); } } } Here you can see I have printed the items with Item1, Item2 and Item3. Here is the output of the above code. We can even create a nested tuple; the following is code for a nested tuple. using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace TupleExample { class Program { static void Main(string[] args) { var tuple = System.Tuple.Create(1,"Jalpesh",new Tuple<string,string>("P","Vadgama")); Console.WriteLine(tuple.Item1); Console.WriteLine(tuple.Item2); Console.WriteLine(tuple.Item3); } } } The output of the above code is as expected. As you can see, there are plenty of possibilities and you can do a lot with Tuple. Hope you liked it. Stay tuned for more. Till then, Happy Programming!!
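
    As an additional illustration (not part of the original post), a tuple is a convenient way to return more than one value from a method without declaring a new type:

      using System;

      class TupleReturnExample
      {
          // Returns the quotient and the remainder together in one Tuple.
          static Tuple<int, int> DivMod(int a, int b)
          {
              return Tuple.Create(a / b, a % b);
          }

          static void Main()
          {
              var result = DivMod(17, 5);
              Console.WriteLine("Quotient: " + result.Item1);    // 3
              Console.WriteLine("Remainder: " + result.Item2);   // 2
          }
      }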

    Read the article

  • Commvault Oracle RMAN Restore to new host

    - by Glenn Stauffer
    We use Commvault Simpana 8 and I have a situation where I have backups of an Oracle database on tape that were taken from Host A. Host A suffered a disk failure (lost its raid configuration) and the sys admins are trying to restore it; in the meantime, I'd working to bring the database back up on another host - Host B. I'm running into problems and am trying to sort out the parameters that need to be passed to the Commvault media agent to get this to work. Unfortunately, I do not have access to Commvault support and the backup person is unavailable. Any one have a clue? The backups are there and the media agent reported a successful write when they ran last night. This is what fails: RMAN run { allocate channel t1 device type sbt_tape parms='SBT_LIBRARY=/usr/local/galaxy/Base/libobk.so,BLKSIZE=262144, ENV=(CvClientName=dbsrv2,CvInstanceName=Instance001, CVOraRacDBName=BBDB, CVOraRACDBClientName=BBDB)'; restore spfile to pfile '/tmp/bbdb.ora' from autobackup; }2 3 4 allocated channel: t1 channel t1: sid=34 devtype=SBT_TAPE channel t1: CommVault Systems for Oracle: Version 7.0.0(Build76) Starting restore at 09-MAY-10 channel t1: looking for autobackup on day: 20100509 channel t1: autobackup found: c-3941155360-20100509-01 released channel: t1 RMAN-00571: =========================================================== RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS =============== RMAN-00571: =========================================================== RMAN-03002: failure of restore command at 05/09/2010 18:01:35 ORA-19870: error reading backup piece c-3941155360-20100509-01 ORA-19507: failed to retrieve sequential file, handle="c-3941155360-20100509-01", parms="" ORA-27029: skgfrtrv: sbtrestore returned error ORA-19511: Error received from media manager layer, error text: sbtrestore: Job[0] thread[26316]: InitializeCLRestore() failed.

    Read the article

  • Delegate.CreateDelegate() and generics: Error binding to target method

    - by SDReyes
    I'm having problems creating a collection of delegates using reflection and generics. I'm trying to create a delegate collection from Classy's methods, which share a common method signature. public class Classy { public string FirstMethod<out T1, in T2>( string id, Func<T1, int, IEnumerable<T2>> del ); public string SecondMethod<out T1, in T2>( string id, Func<T1, int, IEnumerable<T2>> del ); public string ThirdMethod<out T1, in T2>( string id, Func<T1, int, IEnumerable<T2>> del ); // And so on... } And the generics cooking: // This is the Classy's shared method signature public delegate string classyDelegate<out T1, in T2>( string id, Func<T1, int, IEnumerable<T2>> filter ); // And the linq-way to get the collection of delegates from Classy ( from method in typeof( Classy ).GetMethods( BindingFlags.Instance | BindingFlags.DeclaredOnly | BindingFlags.NonPublic ) let delegateType = typeof( classyDelegate<,> ) select Delegate.CreateDelegate( delegateType, method ) ).ToList( ); But the Delegate.CreateDelegate( delegateType, method ) throws an ArgumentException saying Error binding to target method. : / What am I doing wrong?
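
    A sketch of one way around that exception (the int/string type arguments below are illustrative): Delegate.CreateDelegate cannot bind an open generic delegate type to an open generic method definition, and an instance method also needs a target object, so both the delegate type and the method have to be closed first.

      // Requires System.Linq and System.Reflection; Classy and classyDelegate are the
      // types from the question, everything else here is an assumed example.
      var classy = new Classy();
      var closedDelegateType = typeof(classyDelegate<,>).MakeGenericType(typeof(int), typeof(string));

      var delegates =
          ( from method in typeof(Classy).GetMethods(BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly)
            where method.IsGenericMethodDefinition
            let closedMethod = method.MakeGenericMethod(typeof(int), typeof(string))    // close T1/T2 with concrete types
            select Delegate.CreateDelegate(closedDelegateType, classy, closedMethod)    // bind to the instance
          ).ToList();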

    Read the article

  • Commvault Oracle RMAN Restore to new host

    - by Glenn Stauffer
    We use Commvault Simpana 8 and I have a situation where I have backups of an Oracle database on tape that were taken from Host A. Host A suffered a disk failure (lost its raid configuration) and the sys admins are trying to restore it; in the meantime, I'd working to bring the database back up on another host - Host B. I'm running into problems and am trying to sort out the parameters that need to be passed to the Commvault media agent to get this to work. Unfortunately, I do not have access to Commvault support and the backup person is unavailable. Any one have a clue? The backups are there and the media agent reported a successful write when they ran last night. This is what fails: run { allocate channel t1 device type sbt_tape parms='SBT_LIBRARY=/usr/local/galaxy/Base/libobk.so,BLKSIZE=262144, ENV=(CvClientName=dbsrv2,CvInstanceName=Instance001, CVOraSID=BBPROD)'; restore spfile to pfile '/tmp/bbdb.ora' from autobackup; } allocated channel: t1 channel t1: sid=34 devtype=SBT_TAPE channel t1: CommVault Systems for Oracle: Version 7.0.0(Build76) Starting restore at 09-MAY-10 channel t1: looking for autobackup on day: 20100509 channel t1: autobackup found: c-3941155360-20100509-01 released channel: t1 RMAN-00571: =========================================================== RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS =============== RMAN-00571: =========================================================== RMAN-03002: failure of restore command at 05/09/2010 18:01:35 ORA-19870: error reading backup piece c-3941155360-20100509-01 ORA-19507: failed to retrieve sequential file, handle="c-3941155360-20100509-01", parms="" ORA-27029: skgfrtrv: sbtrestore returned error ORA-19511: Error received from media manager layer, error text: sbtrestore: Job[0] thread[26316]: InitializeCLRestore() failed.

    Read the article

  • What would you like to correct and/or improve in this Java implementation of Chain of Responsibility?

    - by Maciek Kreft
    package design.pattern.behavioral; import design.pattern.behavioral.ChainOfResponsibility.*; public class ChainOfResponsibility { public static class Chain { private Request[] requests = null; private Handler[] handlers = null; public Chain(Handler[] handlers, Request[] requests){ this.handlers = handlers; this.requests = requests; } public void start() { for(Request r : requests) for (Handler h : handlers) if(h.handle(r)) break; } } public static class Request { private int value; public Request setValue(int value){ this.value = value; return this; } public int getValue() { return value; } } public static class Handler<T1> { private Lambda<T1> lambda = null; private Lambda<T1> command = null; public Handler(Lambda<T1> condition, Lambda<T1> command) { this.lambda = condition; this.command = command; } public boolean handle(T1 request) { if (lambda.lambda(request)) command.lambda(request); return lambda.lambda(request); } } public static abstract class Lambda<T1>{ public abstract Boolean lambda(T1 request); } } class TestChainOfResponsibility { public static void main(String[] args) { new TestChainOfResponsibility().test(); } private void test() { new Chain(new Handler[]{ // chain of responsibility new Handler<Request>( new Lambda<Request>(){ // command public Boolean lambda(Request condition) { return condition.getValue() >= 600; } }, new Lambda<Request>(){ public Boolean lambda(Request command) { System.out.println("You are rich: " + command.getValue() + " (id: " + command.hashCode() + ")"); return true; } } ), new Handler<Request>( new Lambda<Request>(){ public Boolean lambda(Request condition) { return condition.getValue() >= 100; } }, new Lambda<Request>(){ public Boolean lambda(Request command) { System.out.println("You are poor: " + command.getValue() + " (id: " + command.hashCode() + ")"); return true; } } ), }, new Request[]{ new Request().setValue(600), // chaining method new Request().setValue(100), } ).start(); } }
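
    For comparison, a C# sketch of the textbook form of the pattern, where each handler holds a reference to its successor instead of the client looping over an array of handlers (all names below are illustrative):

      using System;

      // Classic successor-linked Chain of Responsibility.
      abstract class Handler
      {
          private readonly Handler next;
          protected Handler(Handler next) { this.next = next; }

          public void Handle(int request)
          {
              if (!TryHandle(request) && next != null)
                  next.Handle(request);            // pass along until someone handles it
          }

          protected abstract bool TryHandle(int request);
      }

      class RichHandler : Handler
      {
          public RichHandler(Handler next) : base(next) { }
          protected override bool TryHandle(int request)
          {
              if (request < 600) return false;
              Console.WriteLine("You are rich: " + request);
              return true;
          }
      }

      class PoorHandler : Handler
      {
          public PoorHandler(Handler next) : base(next) { }
          protected override bool TryHandle(int request)
          {
              if (request < 100) return false;
              Console.WriteLine("You are poor: " + request);
              return true;
          }
      }

      class Demo
      {
          static void Main()
          {
              Handler chain = new RichHandler(new PoorHandler(null));
              chain.Handle(600);   // handled by RichHandler
              chain.Handle(100);   // falls through to PoorHandler
          }
      }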

    Read the article

  • "Order By" in LINQ-to-SQL Causes performance issues

    - by panamack
    I've set out to write a method in my C# application which can return an ordered subset of names from a table containing about 2000 names starting at the 100th name and returning the next 20 names. I'm doing this so I can populate a WPF DataGrid in my UI and do some custom paging. I've been using LINQ to SQL but hit a snag with this long executing query so I'm examining the SQL the LINQ query is using (Query B below). Query A runs well: SELECT TOP (20) [t0].[subject_id] AS [Subject_id], [t0].[session_id] AS [Session_id], [t0].[name] AS [Name] FROM [Subjects] AS [t0] WHERE (NOT (EXISTS( SELECT NULL AS [EMPTY] FROM ( SELECT TOP (100) [t1].[subject_id] FROM [Subjects] AS [t1] WHERE [t1].[session_id] = 1 ORDER BY [t1].[name] ) AS [t2] WHERE [t0].[subject_id] = [t2].[subject_id] ))) AND ([t0].[session_id] = 1) Query B takes 40 seconds: SELECT TOP (20) [t0].[subject_id] AS [Subject_id], [t0].[session_id] AS [Session_id], [t0].[name] AS [Name] FROM [Subjects] AS [t0] WHERE (NOT (EXISTS( SELECT NULL AS [EMPTY] FROM ( SELECT TOP (100) [t1].[subject_id] FROM [Subjects] AS [t1] WHERE [t1].[session_id] = 1 ORDER BY [t1].[name] ) AS [t2] WHERE [t0].[subject_id] = [t2].[subject_id] ))) AND ([t0].[session_id] = 1) ORDER BY [t0].[name] When I add the ORDER BY [t0].[name] to the outer query it slows down the query. How can I improve the second query? This was my LINQ stuff Nick int sessionId = 1; int start = 100; int count = 20; // Query subjects with the shoot's session id var subjects = cldb.Subjects.Where<Subject>(s => s.Session_id == sessionId); // Filter as per params var orderedSubjects = subjects .OrderBy<Subject, string>( s => s.Col_zero ); var filteredSubjects = orderedSubjects .Skip<Subject>(start) .Take<Subject>(count);

    Read the article

  • Why is only the first shown window focusable

    - by Miha Markic
    Imagine the code below. Only the first window appears on top; all subsequent windows won't, nor can they be programmatically focused for some reason (they appear in the background). Any idea how to work around this? BTW, static methods/properties are not allowed, nor is any global property. [STAThread] static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Thread t1 = new Thread(CreateForm); t1.SetApartmentState(ApartmentState.STA); t1.Start(); t1.Join(); t1 = new Thread(CreateForm); t1.SetApartmentState(ApartmentState.STA); t1.Start(); t1.Join(); } private static void CreateForm() { using (Form f = new Form()) { System.Windows.Forms.Timer t = new System.Windows.Forms.Timer { Enabled = true, Interval = 2000 }; t.Tick += (s, e) => { f.Close(); t.Enabled = false; }; f.TopMost = true; Application.Run(f); } }

    Read the article

  • Best way to update/insert into a table based on a remote table.

    - by martilyo
    I have two very large enterprise tables in an Oracle 10g database. One table keeps the historical information of the other table. The problem is, I'm getting to the point where the records are just too many that my insert update is taking too long and my session is getting killed by the governor. Here's a pseudocode of my update process: sqlsel := 'SELECT col1, col2, col3, sysdate FROM table2@remote_location dpi WHERE (col1, col2, col3) IN ( SELECT col1, col2, col3 FROM table2@remote_location MINUS SELECT DISTINCT col1, col2, col3 FROM table1 mpc WHERE facility = '''||load_facility||''' )'; EXECUTE IMMEDIATE sqlsel BULK COLLECT INTO table1; I've tried the MERGE statement: MERGE INTO table1 t1 USING ( SELECT col1, col2, col3 FROM table2@remote_location ) t2 ON ( t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.col3 = t2.col3 ) WHEN NOT MATCHED THEN INSERT (t1.col1, t1.col2, t1.col3, t1.update_dttm ) VALUES (t2.col1, t2.col2, t2.col3, sysdate ) But there seems to be a confirmed bug on versions prior to Oracle 10.2.0.4 on the merge statement when doing a merge using a remote database. The chance of getting an enterprise upgrade is slim so is there a way to further optimize my first query or write it in another way to have it run best performance wise? Thanks.

    Read the article

  • Is this method a good aproach to get SQL values from C#?

    - by MadBoy
    I have this little method that i use to get stuff from SQL. I either call it with varSearch = "" or varSearch = "something". I would like to know if having method written this way is best or would it be better to split it into two methods (by overloading), or maybe i could somehow parametrize whole WHERE clausule? private void sqlPobierzKontrahentDaneKlienta(ListView varListView, string varSearch) { varListView.BeginUpdate(); varListView.Items.Clear(); string preparedCommand; if (varSearch == "") { preparedCommand = @" SELECT t1.[KlienciID], CASE WHEN t2.[PodmiotRodzaj] = 'Firma' THEN t2.[PodmiotFirmaNazwa] ELSE t2.[PodmiotOsobaNazwisko] + ' ' + t2.[PodmiotOsobaImie] END AS 'Nazwa' FROM [BazaZarzadzanie].[dbo].[Klienci] t1 INNER JOIN [BazaZarzadzanie].[dbo].[Podmioty] t2 ON t1.[PodmiotID] = t2.[PodmiotID] ORDER BY t1.[KlienciID]"; } else { preparedCommand = @" SELECT t1.[KlienciID], CASE WHEN t2.[PodmiotRodzaj] = 'Firma' THEN t2.[PodmiotFirmaNazwa] ELSE t2.[PodmiotOsobaNazwisko] + ' ' + t2.[PodmiotOsobaImie] END AS 'Nazwa' FROM [BazaZarzadzanie].[dbo].[Klienci] t1 INNER JOIN [BazaZarzadzanie].[dbo].[Podmioty] t2 ON t1.[PodmiotID] = t2.[PodmiotID] WHERE t2.[PodmiotOsobaNazwisko] LIKE @searchValue OR t2.[PodmiotFirmaNazwa] LIKE @searchValue OR t2.[PodmiotOsobaImie] LIKE @searchValue ORDER BY t1.[KlienciID]"; } using (var varConnection = Locale.sqlConnectOneTime(Locale.sqlDataConnectionDetails)) using (SqlCommand sqlQuery = new SqlCommand(preparedCommand, varConnection)) { sqlQuery.Parameters.AddWithValue("@searchValue", "%" + varSearch + "%"); using (SqlDataReader sqlQueryResult = sqlQuery.ExecuteReader()) if (sqlQueryResult != null) { while (sqlQueryResult.Read()) { string varKontrahenciID = sqlQueryResult["KlienciID"].ToString(); string varKontrahent = sqlQueryResult["Nazwa"].ToString(); ListViewItem item = new ListViewItem(varKontrahenciID, 0); item.SubItems.Add(varKontrahent); varListView.Items.AddRange(new[] {item}); } } } varListView.EndUpdate(); }
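
    One possible way to avoid the duplicated SQL (a sketch, assuming an empty varSearch should simply return every row) is to keep a single statement and let the parameter switch the filter off:

      // When varSearch is empty the parameter value is "%%" and the first predicate is
      // true for every row, so the method behaves exactly like the no-filter branch.
      string preparedCommand = @"
          SELECT t1.[KlienciID],
                 CASE WHEN t2.[PodmiotRodzaj] = 'Firma'
                      THEN t2.[PodmiotFirmaNazwa]
                      ELSE t2.[PodmiotOsobaNazwisko] + ' ' + t2.[PodmiotOsobaImie]
                 END AS 'Nazwa'
          FROM [BazaZarzadzanie].[dbo].[Klienci] t1
          INNER JOIN [BazaZarzadzanie].[dbo].[Podmioty] t2 ON t1.[PodmiotID] = t2.[PodmiotID]
          WHERE @searchValue = '%%'
             OR t2.[PodmiotOsobaNazwisko] LIKE @searchValue
             OR t2.[PodmiotFirmaNazwa]    LIKE @searchValue
             OR t2.[PodmiotOsobaImie]     LIKE @searchValue
          ORDER BY t1.[KlienciID]";

      sqlQuery.Parameters.AddWithValue("@searchValue", "%" + varSearch + "%");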

    Read the article

  • Please help fix and optimize this query

    - by user607217
    I am working on a system to find potential duplicates in our customers table (SQL 2005). I am using the built-in SOUNDEX value that our software computes when customers are added/updated, but I also implemented the double metaphone algorithm for better matching. This is the most-nested query I have created, and I can't help but think there is a better way to do it and I'd like to learn. In the inner-most query I am joining the customer table to the metaphone table I created, then finding customers that have identical pKey (primary phonetic key). I take that, union that with customers that have matching soundex values, and then proceed to score those matches with various text similarity functions. This is currently working, but I would also like to add a union of customers whose aKey (alternate phonetic key) match. This would be identical to "QUERY A" except to substitute on (c1Akey = c2Akey) for the join. However, when I attempt to include that, I get errors when I try to execute my query. Here is the code: --Create aggregate ranking select c1Name, c2Name, nDiff, c1Addr, c2Addr, aDiff, c1SSN, c2SSN, sDiff, c1DOB, c2DOB, dDiff, nDiff+aDiff+dDiff+sDiff as Score ,(sDiff+dDiff)*1.5 + (nDiff+dDiff)*1.5 + (nDiff+sDiff)*1.5 + aDiff *.5 + nDiff *.5 as [Rank] FROM ( --Create match scores for different fields SELECT c1Name, c2Name, c1Addr, c2Addr, c1SSN, c2SSN, c1LTD, c2LTD, c1DOB, c2DOB, dbo.Jaro(c1name, c2name) AS nDiff, dbo.JaroWinkler(c1addr, c2addr) AS aDiff, CASE WHEN c1dob = '1901-01-01' OR c2dob = '1901-01-01' OR c1dob = '1800-01-01' OR c2dob = '1800-01-01' THEN .5 ELSE dbo.SmithWaterman(c1dob, c2dob) END AS dDiff, CASE WHEN c1ssn = '000-00-0000' OR c2ssn = '000-00-0000' THEN .5 ELSE dbo.Jaro(c1ssn, c2ssn) END AS sDiff FROM -- Generate list of possible matches based on multiple phonetic matching fields ( select * from -- List of similar names from pKey field of ##Metaphone table --QUERY A BEGIN (select customers.custno as c1Custno, name as c1Name, haddr as c1Addr, ssn as c1SSN, lasttripdate as c1LTD, dob as c1DOB, soundex as c1Soundex, pkey as c1Pkey, akey as c1Akey from Customers WITH (nolock) join ##Metaphone on customers.custno = ##Metaphone.custno) as c1 JOIN (select customers.custno as c2Custno, name as c2Name, haddr as c2Addr, ssn as c2SSN, lasttripdate as c2LTD, dob as c2DOB, soundex as c2Soundex, pkey as c2Pkey, akey as c2Akey from Customers with (nolock) join ##Metaphone on customers.custno = ##Metaphone.custno) as c2 on (c1Pkey = c2Pkey) and (c1Custno < c2Custno) WHERE (c1Name <> 'PARENT, GUARDIAN') and c1soundex != c2soundex --QUERY A END union --List of similar names from pregenerated SOUNDEX field (select t1.custno, t1.name, t1.haddr, t1.ssn, t1.lasttripdate, t1.dob, t1.[soundex], 0, 0, t2.custno, t2.name, t2.haddr, t2.ssn, t2.lasttripdate, t2.dob, t2.[soundex], 0, 0 from Customers t1 WITH (nolock) join customers t2 with (nolock) on t1.[soundex] = t2.[soundex] and t1.custno < t2.custno where (t1.name <> 'PARENT, GUARDIAN')) ) as a ) as b where (sDiff+dDiff)*1.5 + (nDiff+dDiff)*1.5 + (nDiff+sDiff)*1.5 + aDiff *.5 + nDiff *.5 >= 7.5 order by [rank] desc, score desc Previously, I was using joins such as on c1.pkey = c2.pkey or c1.akey = c2.akey or c1.soundex = c2.soundex but the performance was horrendous, and using unions seems to be working a lot better. Out of 103K customers, tt is currently generating a list of 8.5M potential matches (based on the phonetic codes) in 2.25 minutes, and then taking another 2 to score, rank and filter those down to about 3000. 
So I am happy with the performance, I just can't help but think there is a better way to structure this, and I need help adding the extra union condition. Thanks!

    Read the article

  • Minecraft Style Chunk building problem

    - by David Torrey
    I'm having some problems with speed in my chunk engine. I timed it out, and in its current state it takes a total ~5 seconds per chunk to fill each face's list. I have a check to see if each face of a block is visible and if it is not visible, it skips it and moves on. I'm using a dictionary (unordered map) because it makes sense memorywise to just not have an entry if there is no block. I've tracked my problem down to testing if there is an entry, and accessing an entry if it does exist. If I remove the tests to see if there is an entry in the dictionary for an adjacent block, or if the block type itself is seethrough, it runs within about 2-4 milliseconds. so here's my question: Is there a faster way to check for an entry in a dictionary than .ContainsKey()? As an aside, I tried TryGetValue() and it doesn't really help with the speed that much. If I remove the ContainsKey() and keep the test where it does the IsSeeThrough for each block, it halves the time, but it's still about 2-3 seconds. It only drops to 2-4ms if I remove BOTH checks. Here is my code: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Runtime.InteropServices; using OpenTK; using OpenTK.Graphics.OpenGL; using System.Drawing; namespace Anabelle_Lee { public enum BlockEnum { air = 0, dirt = 1, } [StructLayout(LayoutKind.Sequential,Pack=1)] public struct Coordinates<T1> { public T1 x; public T1 y; public T1 z; public override string ToString() { return "(" + x + "," + y + "," + z + ") : " + typeof(T1); } } public struct Sides<T1> { public T1 left; public T1 right; public T1 top; public T1 bottom; public T1 front; public T1 back; } public class Block { public int blockType; public bool SeeThrough() { switch (blockType) { case 0: return true; } return false ; } public override string ToString() { return ((BlockEnum)(blockType)).ToString(); } } class Chunk { private Dictionary<Coordinates<byte>, Block> mChunkData; //stores the block data private Sides<List<Coordinates<byte>>> mVBOVertexBuffer; private Sides<int> mVBOHandle; //private bool mIsChanged; private const byte mCHUNKSIZE = 16; public Chunk() { } public void InitializeChunk() { //create VBO references #if DEBUG Console.WriteLine ("Initializing Chunk"); #endif mChunkData = new Dictionary<Coordinates<byte> , Block>(); //mIsChanged = true; GL.GenBuffers(1, out mVBOHandle.left); GL.GenBuffers(1, out mVBOHandle.right); GL.GenBuffers(1, out mVBOHandle.top); GL.GenBuffers(1, out mVBOHandle.bottom); GL.GenBuffers(1, out mVBOHandle.front); GL.GenBuffers(1, out mVBOHandle.back); //make new list of vertexes for each face mVBOVertexBuffer.top = new List<Coordinates<byte>>(); mVBOVertexBuffer.bottom = new List<Coordinates<byte>>(); mVBOVertexBuffer.left = new List<Coordinates<byte>>(); mVBOVertexBuffer.right = new List<Coordinates<byte>>(); mVBOVertexBuffer.front = new List<Coordinates<byte>>(); mVBOVertexBuffer.back = new List<Coordinates<byte>>(); #if DEBUG Console.WriteLine("Chunk Initialized"); #endif } public void GenerateChunk() { #if DEBUG Console.WriteLine("Generating Chunk"); #endif for (byte i = 0; i < mCHUNKSIZE; i++) { for (byte j = 0; j < mCHUNKSIZE; j++) { for (byte k = 0; k < mCHUNKSIZE; k++) { Random blockLoc = new Random(); Coordinates<byte> randChunk = new Coordinates<byte> { x = i, y = j, z = k }; mChunkData.Add(randChunk, new Block()); mChunkData[randChunk].blockType = blockLoc.Next(0, 1); } } } #if DEBUG Console.WriteLine("Chunk Generated"); #endif } public void DeleteChunk() { 
//delete VBO references #if DEBUG Console.WriteLine("Deleting Chunk"); #endif GL.DeleteBuffers(1, ref mVBOHandle.left); GL.DeleteBuffers(1, ref mVBOHandle.right); GL.DeleteBuffers(1, ref mVBOHandle.top); GL.DeleteBuffers(1, ref mVBOHandle.bottom); GL.DeleteBuffers(1, ref mVBOHandle.front); GL.DeleteBuffers(1, ref mVBOHandle.back); //clear all vertex buffers ClearPolyLists(); #if DEBUG Console.WriteLine("Chunk Deleted"); #endif } public void UpdateChunk() { #if DEBUG Console.WriteLine("Updating Chunk"); #endif ClearPolyLists(); //prepare buffers //for every entry in mChunkData map foreach(KeyValuePair<Coordinates<byte>,Block> feBlockData in mChunkData) { Coordinates<byte> checkBlock = new Coordinates<byte> { x = feBlockData.Key.x, y = feBlockData.Key.y, z = feBlockData.Key.z }; //check for polygonson the left side of the cube if (checkBlock.x > 0) { //check to see if there is a key for current x - 1. if not, add the vector if (!IsVisible(checkBlock.x - 1, checkBlock.y, checkBlock.z)) { //add polygon AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.left); } } else { //polygon is far left and should be added AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.left); } //check for polygons on the right side of the cube if (checkBlock.x < mCHUNKSIZE - 1) { if (!IsVisible(checkBlock.x + 1, checkBlock.y, checkBlock.z)) { //add poly AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.right); } } else { //poly for right add AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.right); } if (checkBlock.y > 0) { //check to see if there is a key for current x - 1. if not, add the vector if (!IsVisible(checkBlock.x, checkBlock.y - 1, checkBlock.z)) { //add polygon AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.bottom); } } else { //polygon is far left and should be added AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.bottom); } //check for polygons on the right side of the cube if (checkBlock.y < mCHUNKSIZE - 1) { if (!IsVisible(checkBlock.x, checkBlock.y + 1, checkBlock.z)) { //add poly AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.top); } } else { //poly for right add AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.top); } if (checkBlock.z > 0) { //check to see if there is a key for current x - 1. 
if not, add the vector if (!IsVisible(checkBlock.x, checkBlock.y, checkBlock.z - 1)) { //add polygon AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.back); } } else { //polygon is far left and should be added AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.back); } //check for polygons on the right side of the cube if (checkBlock.z < mCHUNKSIZE - 1) { if (!IsVisible(checkBlock.x, checkBlock.y, checkBlock.z + 1)) { //add poly AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.front); } } else { //poly for right add AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.front); } } BuildBuffers(); #if DEBUG Console.WriteLine("Chunk Updated"); #endif } public void RenderChunk() { } public void LoadChunk() { #if DEBUG Console.WriteLine("Loading Chunk"); #endif #if DEBUG Console.WriteLine("Chunk Deleted"); #endif } public void SaveChunk() { #if DEBUG Console.WriteLine("Saving Chunk"); #endif #if DEBUG Console.WriteLine("Chunk Saved"); #endif } private bool IsVisible(int pX,int pY,int pZ) { Block testBlock; Coordinates<byte> checkBlock = new Coordinates<byte> { x = Convert.ToByte(pX), y = Convert.ToByte(pY), z = Convert.ToByte(pZ) }; if (mChunkData.TryGetValue(checkBlock,out testBlock )) //if data exists { if (testBlock.SeeThrough() == true) //if existing data is not seethrough { return true; } } return true; } private void AddPoly(byte pX, byte pY, byte pZ, int BufferSide) { //create temp array GL.BindBuffer(BufferTarget.ArrayBuffer, BufferSide); if (BufferSide == mVBOHandle.front) { //front face mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY + 1), z = (byte)(pZ + 1) }); mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY) , z = (byte)(pZ + 1) }); mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY) , z = (byte)(pZ + 1) }); mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY) , z = (byte)(pZ + 1) }); mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) }); mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY + 1), z = (byte)(pZ + 1) }); } else if (BufferSide == mVBOHandle.right) { //back face mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) }); mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY) , z = (byte)(pZ) }); mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY) , z = (byte)(pZ) }); mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY) , z = (byte)(pZ) }); mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY + 1), z = (byte)(pZ) }); mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) }); } else if (BufferSide == mVBOHandle.top) { //left face mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ) }); mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY) , z = (byte)(pZ) }); mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY) , z = (byte)(pZ + 1) }); mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY) , z = (byte)(pZ + 1) }); mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ + 1) }); mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = 
(byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ) }); } else if (BufferSide == mVBOHandle.bottom) { //right face mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) }); mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY) , z = (byte)(pZ + 1) }); mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY) , z = (byte)(pZ) }); mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY) , z = (byte)(pZ) }); mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) }); mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) }); } else if (BufferSide == mVBOHandle.front) { //top face mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY + 1), z = (byte)(pZ) }); mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY + 1), z = (byte)(pZ + 1) }); mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) }); mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) }); mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) }); mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY + 1), z = (byte)(pZ) }); } else if (BufferSide == mVBOHandle.back) { //bottom face mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY), z = (byte)(pZ + 1) }); mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY), z = (byte)(pZ) }); mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ) }); mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ) }); mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ + 1) }); mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY), z = (byte)(pZ + 1) }); } } private void BuildBuffers() { #if DEBUG Console.WriteLine("Building Chunk Buffers"); #endif GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.front); GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.front.Count), mVBOVertexBuffer.front.ToArray(), BufferUsageHint.StaticDraw); GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.back); GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.back.Count), mVBOVertexBuffer.back.ToArray(), BufferUsageHint.StaticDraw); GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.left); GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.left.Count), mVBOVertexBuffer.left.ToArray(), BufferUsageHint.StaticDraw); GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.right); GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.right.Count), mVBOVertexBuffer.right.ToArray(), BufferUsageHint.StaticDraw); GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.top); GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.top.Count), mVBOVertexBuffer.top.ToArray(), BufferUsageHint.StaticDraw); GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.bottom); 
GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.bottom.Count), mVBOVertexBuffer.bottom.ToArray(), BufferUsageHint.StaticDraw); GL.BindBuffer(BufferTarget.ArrayBuffer,0); #if DEBUG Console.WriteLine("Chunk Buffers Built"); #endif } private void ClearPolyLists() { #if DEBUG Console.WriteLine("Clearing Polygon Lists"); #endif mVBOVertexBuffer.top.Clear(); mVBOVertexBuffer.bottom.Clear(); mVBOVertexBuffer.left.Clear(); mVBOVertexBuffer.right.Clear(); mVBOVertexBuffer.front.Clear(); mVBOVertexBuffer.back.Clear(); #if DEBUG Console.WriteLine("Polygon Lists Cleared"); #endif } }//END CLASS }//END NAMESPACE
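
    One thing worth measuring here (a sketch, not a verified fix): Dictionary<TKey,TValue> compares keys through EqualityComparer<TKey>.Default, and a struct key that neither implements IEquatable<TKey> nor overrides GetHashCode falls back to boxing and ValueType's generic comparison on every ContainsKey/TryGetValue call. A dedicated key type along these lines keeps each lookup cheap (BlockPos is an illustrative name):

      using System;

      // Equality and hashing are explicit, so the dictionary never boxes the key
      // or falls back to ValueType's reflection-based comparison.
      public struct BlockPos : IEquatable<BlockPos>
      {
          public readonly byte X, Y, Z;

          public BlockPos(byte x, byte y, byte z) { X = x; Y = y; Z = z; }

          public bool Equals(BlockPos other)
          {
              return X == other.X && Y == other.Y && Z == other.Z;
          }

          public override bool Equals(object obj)
          {
              return obj is BlockPos && Equals((BlockPos)obj);
          }

          public override int GetHashCode()
          {
              return X | (Y << 8) | (Z << 16);   // unique per position within a 16x16x16 chunk
          }
      }

    With a fixed 16x16x16 chunk it may also be worth comparing against a plain Block[16,16,16] array (null meaning air), which removes the dictionary lookup from the hot path entirely.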

    Read the article

  • MySql multiple selects batching in .net

    - by Amith George
    I have a situation in my application. For each x-axis point in my chart, I am plotting 5 y-axis values. To calculate each of these 5 values, I need to make 4 different queries. Ie, for each x-axis point I need to fire 20 sql queries. Now, I need to plot 40 such points in the my chart. Its resulting in a pathetic performance where it takes close to a minute to get all the data back from the database. Each of 4 different queries consists of a join between 2 tables. One has only 6 rows. The other close to 10,000. Each of the 4 queries has different WHERE clauses, so they are different queries. For each point in the x-axis, only the values for the where clauses change. I have tried combining each of the 4 queries into one big string. Basically batch the four selects. These are again batched for each y-axis value. So, for each x-axis point, I am now firing one big command that consists of 20 different select statements. Technically, I should be experiencing a big performance boost, right? Instead of hitting the db 40x5x4 = 800 times, I am now hitting it just 40 times. But instead of taking 60 seconds, it taking 50-55 seconds... not much of a help. I am using MySql 5.1, and the 6.1 version of its .Net connector. What can I do to improve the performance? Edit: One of the 4 queries is as follows: SELECT SUM(TIME_TO_SEC(TIMEDIFF(T1.col2, T1.col1))* T2.col1 / (3600 *1000)) AS TotalTime FROM Table T1 JOIN Table T2 ON T1.col3 = T2.col3 WHERE T1.col4 = 'i' AND T1.col1 >= '2009-12-25 00:00:00' AND T1.col2 <= '2009-12-26 00:00:00'; The other 3 queries are similar, only the where clause changes slightly. This set of 4 queries is fired 5 times. The first 3 times against the join of table T1 and T2, passing in different values for col4. And the next two times against the join of table T3 and T2 passing in different values for col4. These 5 values are the y-axis values for a particular x-axis point. The data returned by all these queries is the same format. so, we tried doing a UNION ALL on all these queries. No substantial difference. One strange thing, however, after indexing the foreign key on the table T1 [while it contained over a lakh records], the queries were using the index, but they had become slower. At times, the queries would take double the time to return the data.

    Read the article
