Search Results

Search found 49435 results on 1978 pages for 'query string'.

  • 'Good' programming form in maintaining / updating / accessing files by entry

    - by zhermes
    Basic Question: If I'm storing/modifying data, should I access elements of a file by a hard-coded index, i.e. targetFile.getElement(5); by a hard-coded identifier (internally translated into an index), i.e. target.getElementWithID("Desired Element"); or with some intermediate constant, DESIRED_ELEMENT = 5; ... target.getElement(DESIRED_ELEMENT), etc.? Background: My program (C++) stores data in lots of different 'dataFile's. I also keep a list of all the data files in another file---a 'listFile'---which also stores some of each one's properties (see below; e.g. what its name is, how many lines of information it has, etc.). There is an object which manages the data files and the list file; call it a 'fileKeeper'. The entries of a listFile look something like: filename , contents name , number of lines , some more numbers ... It's definitely possible that I may add/remove fields from this list --- but in general they'll stay static. Right now I have a constant string array which holds the identifier of each element in each entry, something like: const string fileKeeper::idKeys[] = { "FileName" , "Contents" , "NumLines" ... }; const int fileKeeper::idKeysNum = 6; // 6 - for example I'm trying to manage this stuff in 'good' programmatic form. Thus, when I want to retrieve the number of lines in a file (for example), instead of having a method which just retrieves the 3rd element, I do something like: string desiredID = "NumLines"; int desiredIndex = indexForID(desiredID); string desiredElement = elementForIndex(desiredIndex); where the function indexForID() goes through the entries of idKeys until it finds desiredID, then returns the index it corresponds to, and elementForIndex(index) actually goes into the listFile to retrieve the index'th element of the comma-delimited string. Problem: This still seems pretty ugly / poor form. Is there a way I should be doing this? If not, what are some general ways in which this is usually done? Thanks!
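
    One common alternative, sketched below purely as an illustration (it assumes a fileKeeper class exposing getElement(int) as described above), is to replace the string-keyed lookup with a plain enum, so every field has a named, compile-time index and the raw positions are written down exactly once:

        // Hypothetical sketch: Field values must be kept in the same order as
        // the columns of a listFile entry.
        enum class Field : int { FileName = 0, Contents, NumLines /* ... */ };

        std::string fileKeeper::elementFor(Field f) const {
            return getElement(static_cast<int>(f));   // getElement(int) assumed from the question
        }

        // usage: int numLines = std::stoi(keeper.elementFor(Field::NumLines));

    This keeps call sites readable while avoiding the linear search through idKeys on every access.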

  • Java looping through array - Optimization

    - by oudouz
    I've got some Java code that runs quite the expected way, but it's taking some amount of time -some seconds- even if the job is just looping through an array. The input file is a Fasta file as shown in the image below. The file I'm using is 2.9Mo, and there are some other Fasta file that can take up to 20Mo. And in the code im trying to loop through it by bunches of threes, e.g: AGC TTT TCA ... etc The code has no functional sens for now but what I want is to append each Amino Acid to it's equivalent bunch of Bases. Example : AGC - Ser / CUG Leu / ... etc So what's wrong with the code ? and Is there any way to do it better ? Any optimization ? Looping through the whole String is taking some time, maybe just seconds, but need to find a better way to do it. import java.io.BufferedReader; import java.io.File; import java.io.FileNotFoundException; import java.io.FileReader; import java.io.IOException; public class fasta { public static void main(String[] args) throws IOException { File fastaFile; FileReader fastaReader; BufferedReader fastaBuffer = null; StringBuilder fastaString = new StringBuilder(); try { fastaFile = new File("res/NC_017108.fna"); fastaReader = new FileReader(fastaFile); fastaBuffer = new BufferedReader(fastaReader); String fastaDescription = fastaBuffer.readLine(); String line = fastaBuffer.readLine(); while (line != null) { fastaString.append(line); line = fastaBuffer.readLine(); } System.out.println(fastaDescription); System.out.println(); String currentFastaAcid; for (int i = 0; i < fastaString.length(); i+=3) { currentFastaAcid = fastaString.toString().substring(i, i + 3); System.out.println(currentFastaAcid); } } catch (NullPointerException e) { System.out.println(e.getMessage()); } catch (FileNotFoundException e) { System.out.println(e.getMessage()); } catch (IOException e) { System.out.println(e.getMessage()); } finally { fastaBuffer.close(); } } }
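
    The main cost in the loop above is fastaString.toString().substring(i, i + 3): toString() copies the entire sequence on every iteration, making the loop quadratic in the sequence length, and println() is called once per codon. A minimal reworking of just that loop (assuming fastaString has been built exactly as in the question) might look like this:

        // Copy the builder to a String once, then slice it; buffer the output so
        // there is a single print call instead of one per codon.
        String sequence = fastaString.toString();
        StringBuilder out = new StringBuilder(sequence.length() + sequence.length() / 3);
        for (int i = 0; i + 3 <= sequence.length(); i += 3) {
            out.append(sequence, i, i + 3).append('\n');
        }
        System.out.print(out);

    The i + 3 <= sequence.length() bound also avoids the StringIndexOutOfBoundsException the original risks when the sequence length is not a multiple of three.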

  • NUL symbol gets inserted anywhere in the XML

    - by Racs
    I am using Oracle 11g, and when I query the data from the database manually it looks OK. But when I view the produced XML using Notepad++, there is a special character that gets inserted at seemingly random locations. Please see the image below. I tried everything: deleted some parts of the XML, updated the DB and queried again, but the invalid character just differs in location. Any idea is much appreciated, thanks! Best regards, Racs

  • Searching strings C

    - by Skittles
    First time posting here so I'm sorry if I mess up. I need to search a string and return any strings containing the search data with the search data highlighted. If my string is Hi my name is and I searched name it would produce Hi my NAME is This is a quick code I wrote that works but it only works once. If I try and search again it seg faults. I was hoping someone could hint me at a better way to write this because this code is disgusting! void search(char * srcStr, int n){ int cnt = 0, pnt,i = 0; char tmpText[500]; char tmpName[500]; char *ptr, *ptr2, *ptrLast; int num; while(*(node->text+cnt) != '\0'){ //finds length of string cnt++; } for(pnt = 0; pnt < cnt; pnt++){ //copies node->text into a tmp string tmpText[pnt] = *(node->text+pnt); } tmpText[pnt+1] = '\0'; //prints up to first occurrence of srcStr ptr = strcasestr(tmpText, srcStr); for(num = 0; num < ptr-tmpText; num++){ printf("%c",tmpText[num]); } //prints first occurrence of srcStr in capitals for(num = 0; num < n; num++){ printf("%c",toupper(tmpText[ptr-tmpText+num])); } ptr2 = strcasestr((ptr+n),srcStr); for(num = (ptr-tmpText+n); num < (ptr2-tmpText); num++){ printf("%c",tmpText[num]); } while((ptr = strcasestr((ptr+n), srcStr)) != NULL){ ptr2 = strcasestr((ptr+n),srcStr); for(num = (ptr-tmpText+n); num < (ptr2-tmpText); num++){ printf("%c",tmpText[num]); } for(num = 0; num < n; num++){ printf("%c",toupper(tmpText[ptr-tmpText+num])); } ptrLast = ptr; } //prints remaining string after last occurrence for(num = (ptrLast-tmpText+n); num < cnt; num++){ printf("%c",tmpText[num]); } }
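
    Part of the crash risk in the code above is the manual offset bookkeeping: ptrLast is only assigned inside the while loop, so when the needle occurs exactly once it is used uninitialized in the final loop, which is a plausible source of the segfault (and if the needle never occurs at all, ptr itself is NULL). A simpler pattern, sketched here assuming the GNU strcasestr the question already uses, walks a single pointer through the string and needs no temporary buffers:

        /* Minimal sketch: prints text with every occurrence of needle upper-cased. */
        #define _GNU_SOURCE
        #include <stdio.h>
        #include <string.h>
        #include <ctype.h>

        static void print_highlighted(const char *text, const char *needle) {
            size_t n = strlen(needle);
            const char *p = text, *hit;
            while (n > 0 && (hit = strcasestr(p, needle)) != NULL) {
                fwrite(p, 1, (size_t)(hit - p), stdout);      /* text before the match */
                for (size_t i = 0; i < n; i++)
                    putchar(toupper((unsigned char)hit[i]));  /* the match, upper-cased */
                p = hit + n;
            }
            fputs(p, stdout);                                  /* remainder after the last match */
        }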

  • Better way to design a database

    - by cMinor
    I have a conceptual problem and I would like to get your ideas on how I'll be able to do what I am aiming. My goal is to create a database with information of persons who work at a place depending on their profession and skills,and keep control of salary and projects (how much would cost summing all the hours of work) I have 3 categories which can have subcategories: Outsourcing Technician welder turner assistant Administrative supervisor manager So each person has its information and the projects they are working on, also one person may do several jobs... I was thinking about having 5 tables (EMPLOYEE, SKILLS, PROYECTS, SALARY, PROFESSION) but I guess there is a better way of doing this. create table Employee ( PRIMARY KEY [Person_ID] int(10), [Name] varchar(30), [sex] varchar(10), [address] varchar(10), [profession] varchar(10), [Skills_ID] int(10), [Proyect_ID] int(10), [Salary_ID] int(10), [Salary] float ) create table Skills ( PRIMARY KEY [Skills_ID] int(10), FOREIGN KEY [Skills_name] varchar(10) REFERENCES Employee(Person_ID), [Skills_pay] float(10), [Comments] varchar(50) ) create table Proyects ( PRIMARY KEY [Proyect_ID] int(10), FOREIGN KEY [Skills_name] varchar(10) REFERENCES Employee(Person_ID) [Proyect_name] varchar(10), [working_Hours] float(10), [Comments] varchar(50) ) create table Salary ( PRIMARY KEY [Salary_ID] int(10), FOREIGN KEY [Skills_name] varchar(10) REFERENCES Employee(Person_ID) [Proyect_name] varchar(10), [working_Hours] float(10), [Comments] varchar(50) ) So to get the total amount of the cost of a project I would just sum the working hours of each employee envolved and sum some extra costs in an aggregate query. Is there a way to do this in a more efficient way? What to add or delete of this small model? I guess I am missing something in the salary - maybe I need another table for that?
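
    One direction worth considering, sketched below with illustrative table and column names rather than the ones from the question, is to pull the many-to-many relations out of Employee into junction tables, so one person can hold several skills and work on several projects without Skills_ID/Proyect_ID columns sitting on the employee row. The project cost then falls out of a single aggregate over the junction table:

        -- Rough sketch only; names are invented for illustration.
        CREATE TABLE Employee (
            employee_id INT PRIMARY KEY,
            name        VARCHAR(30),
            profession  VARCHAR(30)
        );

        CREATE TABLE Project (
            project_id   INT PRIMARY KEY,
            project_name VARCHAR(30)
        );

        CREATE TABLE Assignment (          -- who works on what, and for how long
            employee_id   INT REFERENCES Employee(employee_id),
            project_id    INT REFERENCES Project(project_id),
            working_hours DECIMAL(8,2),
            hourly_rate   DECIMAL(8,2),
            PRIMARY KEY (employee_id, project_id)
        );

        -- Total cost of each project:
        SELECT p.project_name, SUM(a.working_hours * a.hourly_rate) AS total_cost
        FROM Project p JOIN Assignment a ON a.project_id = p.project_id
        GROUP BY p.project_name;

    A Skill table plus an EmployeeSkill junction table would handle the "one person may do several jobs" requirement the same way.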

  • Trying to Set Up SSH Tunneling To MySQL Server for MySQL Query Browser

    - by Teno
    I'm trying to set up SSH tunneling on a remote web server to another MySQL server so that the database can be browsed easily with MySQL Query Browser. I'm following this page but cannot connect to the MySQL server: http://www.howtogeek.com/howto/ubuntu/access-your-mysql-server-remotely-over-ssh/ What I've done: logged in to the web server with PuTTY via SSH, then typed ssh -L 33060:[database]:3306 [myusername]@[webserver_address] where the [...]s are replaced by the actual information. I was asked for a password, typed it, and got the following message, so it seems the login was successful: socket: Protocol not supported Last login: .... 2012 from .... Copyright (c) 1980, 1983, 1986, 1988, 1990, 1991, 1993, 1994 The Regents of the University of California. All rights reserved. FreeBSD 7.1-RELEASE.... Welcome to FreeBSD! Then I opened MySQL Query Browser on Windows and entered Server Host: localhost Port: 33060 UserName: myusername PassWord: mypassword and it says: Could not connect to the specified instance. MySQL Error Number 2003 Can't connect to MySQL Server on 'localhost' (10061) Sorry if this is too basic. Thanks for your information.
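
    One detail worth checking, stated here as an assumption since the question does not say where the ssh -L command was run: that command was typed inside a PuTTY session on the web server, so the forwarded port 33060 is listening on the web server, not on the Windows machine where MySQL Query Browser runs, and 'localhost:33060' on Windows has nothing behind it. The tunnel has to start on the Windows side, either through PuTTY's Connection > SSH > Tunnels page or with its command-line companion plink:

        rem Placeholders for user and host names; forwards local port 33060 on the
        rem Windows machine to port 3306 on the database host, via the web server.
        plink -ssh myusername@webserver_address -L 33060:database_host:3306

    With that tunnel open, Server Host localhost (or 127.0.0.1) and port 33060 in MySQL Query Browser should reach the remote MySQL server.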

  • DNS: something is wrong?

    - by Nickolas R.
    Hello, I am configuring bind9 on a server with two network interfaces; one is connected to the LAN and the other is connected to the Internet through NAT, so bind does not face the Internet directly. Everything seems to work fine (clients can do both forward and reverse lookups), but something seems strange. On the server, if I try to ping www.google.com a single time, a great amount of network activity is generated, a lot more than one would expect, so I decided to sniff the traffic with tcpdump. When loading the dump into Wireshark I can see about 250 entries with "Standard query A" and "Standard query response". Here are some of the entries from the dump: DNS Standard query A www.google.com DNS Standard query A blackhole-1.iana.org DNS Standard query A blackhole-2.iana.org DNS Standard query response DNS Standard query A ns2.isc-sns.com DNS Standard query A ns1.isc-sns.net DNS Standard query A ns3.isc-sns.info DNS Standard query response PTR b.iana-servers.net RRSIG DNS Standard query A auth2.dns.cogentco.com DNS Standard query A ns1.crsnic.net DNS Standard query A ns2.nsiregistry.net DNS Standard query A ns3.verisign-grs.net DNS Standard query A ns4.verisign-grs.net DNS Standard query PTR 79.52.19.199.in-addr.arpa I do not have much experience with DNS yet, but I am pretty sure that something is wrong. Does anybody have an idea of what is going on?
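
    The dump is consistent with ordinary recursive resolution: a caching resolver that starts with an empty cache walks the root, TLD and authoritative servers itself and also looks up the NS host names it learns along the way, which easily produces a burst of "Standard query A" packets for a single ping. That is an interpretation of the capture, not a certain diagnosis. If the goal is to keep that iteration off this server, one option is to forward recursion to an upstream resolver in named.conf.options, for example:

        // Sketch only; 192.0.2.1 is a placeholder for the upstream resolver
        // (e.g. the NAT gateway or the ISP's resolver).
        options {
            forwarders { 192.0.2.1; };
            forward only;   // never iterate from the root servers ourselves
        };

    After restarting bind, a repeated capture should show one query to the forwarder per uncached name instead of the long chain of NS lookups.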

  • Find files containing a string on the whole filesystem

    - by Fabio
    I need to find all the instances of a given string in the whole filesystem, because I don't remember in which configuration files, scripts or other programs I put it, and I need to update that string with a new one. I tried the following command: grep -nr 'needle' / --exclude-dir=.svn | mail [email protected] -s 'References on xxx' If I run this command on a small directory it gives me the output I need in the form /path1/:nn:line containing needle /path2/:nn:line containing needle where /path1 is the full path of the file, nn is the row containing the needle and the last field is the content of the line. However, when I run the command on the root directory the grep process hangs after a while. I started this command about 8 hours ago and even on a small filesystem (less than 5GB) it doesn't end, and if I run top or ps the process seems to be sleeping: root 24909 0.0 0.1 3772 1520 pts/1 S+ Feb10 0:15 grep -nr needle / --exclude-dir=.svn Why doesn't it end? Is there any better way to do this? (It's a one-time job, I don't need to execute this more than once.) Thanks.
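
    A plausible explanation, though it cannot be confirmed from the ps output alone, is that grep descends into /proc, /sys and /dev while walking / and ends up reading a pipe, device node or endless pseudo-file, where the read blocks forever; the S+ state fits a process sleeping on such a read. Restricting the walk to regular files on the local filesystem avoids that:

        # Sketch: search regular files only, stay on one filesystem, skip .svn,
        # and silence permission-denied noise.
        find / -xdev -type f -not -path '*/.svn/*' -print0 \
          | xargs -0 grep -n 'needle' 2>/dev/null

    Because -xdev deliberately does not cross mount points, run one such find per mounted filesystem that matters (e.g. /home if it is a separate mount).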

  • How to rename everything matching a certain string in a folder

    - by lostiniceland
    Hello everyone, I am running Linux and I have some basic console knowledge, but my current problem is quite difficult and I don't know how to achieve this. I want/need to rename everything within a folder that matches a given string. By everything I mean: folders and files, content within a file, content in hidden files. Basically I want to refactor a Java project. Sure, I could use Eclipse to handle the replacing, but this leaves out the folders or resources outside of my workspace. I was thinking of a script that could do the job for me, but this seems rather tricky. For instance, when it comes to renaming folders and files I want to replace only the part of the name that matches my string; the rest should remain untouched. Maybe someone already has something like this in his/her script collection :-) Thanks in advance, Marc
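
    A rough starting point is sketched below; it assumes GNU grep/sed, uses OLD and NEW as placeholders for the real strings, and should be tried on a copy of the project first, since sed -i edits files in place. It rewrites file contents (hidden files included) and then renames matching files and directories, replacing only the matching part of each name:

        # 1) rewrite the contents of every file that mentions OLD
        grep -rlZ 'OLD' . | xargs -0 sed -i 's/OLD/NEW/g'

        # 2) rename files and directories, deepest entries first so parent paths stay valid
        find . -depth -name '*OLD*' | while IFS= read -r p; do
            mv "$p" "$(dirname "$p")/$(basename "$p" | sed 's/OLD/NEW/g')"
        done

    Note that sed treats OLD as a regular expression, so for a Java package name the dots would need escaping.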

  • Criteria strings, how many different criteria can be entered to retrieve specific data?

    - by Janet
    For our membership database we are currently using an old DOS program "Arclist". The program is old but the one feature we desperately need in a database program is to be able to enter multiple criteria at one time for more of a "one time" extraction of the data meeting all the various criteria entered in what I call a "criteria string". An example may be extracting only those records with zip codes matching (67893, 54235, 54323, 54201, 54302, 54303, 54301, 67894, 67895). Another set of criteria might be to omit records, not equal to, one type of criteria in one field and also extract records matching criteria in another field. So we would want records "not equal to" in one field, but whose information equals requested information in another field.
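
    In SQL terms (the table and column names below are purely illustrative, since the question does not name any), such a "criteria string" is just a WHERE clause combining IN lists, equality tests and not-equal tests, which any modern membership database can express directly:

        SELECT *
        FROM members
        WHERE zip IN ('67893','54235','54323','54201','54302','54303','54301','67894','67895')
          AND member_type <> 'inactive'     -- "not equal to" criterion in one field
          AND chapter = 'Topeka';           -- matching criterion in another field

    For lists of this size there is no practical limit on how many such criteria can be combined in one extraction.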

  • Highlight identical strings in vi(m)

    - by Boldewyn
    One feature of Notepad++ which I find really useful and haven't found elsewhere is the highlighting of other text that is identical to the text currently selected. Is something similar possible with vi(m)? (Of course there is. But how do I achieve it?) That is, any of these: if I am in Visual mode and have text selected, highlight identical text; if I have searched /foo, highlight all instances of foo; if I am at the beginning of a string (series of characters, numbers or underscores), highlight all other matching strings (preferred solution). The last one is similar to the matching of closing parentheses and IMHO the most useful. Edit: For my second use case, I found a solution (that is, Google found it...): :set hls However, the others remain.
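
    For the remaining two cases, stock Vim can get close without a plugin; the lines below are a sketch for a .vimrc (the visual-mode mapping is a widely used idiom, quoted from memory rather than tested in this exact setup):

        " case 2: /foo highlights every occurrence of foo (already found above)
        set hlsearch
        " case 3: * searches for the word under the cursor; the trailing N jumps
        " straight back, so the cursor stays put while all matches stay highlighted
        nnoremap <silent> * *N
        " case 1: in visual mode, // searches for the exact selection, escaping / and \
        vnoremap // y/\V<C-R>=escape(@", '/\')<CR><CR>

    Plugins such as mark.vim automate the "highlight whatever the cursor is on" behaviour if the manual trigger is not enough.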

  • Automatically send a string when opening a raw connection in putty

    - by MBraedley
    I have a program running on a server that accepts TCP/IP connections on a specified port. When a connection is made, this program waits for the user (i.e. me) to send a string which identifies the type of user. Once the user identifies themselves, the program will start sending data that is relevant to that user type. I want to automate the first step and have putty automatically send the user identification string once the connection is made. I've been through the settings, and can't seem to find anything like "send following commands on connection". Any help would be appreciated.
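
    As far as I know, PuTTY itself has no "send this string on connect" setting for raw connections, so the usual workaround goes through its command-line sibling plink (or a tool such as ncat), piping the identification string in on standard input. Untested sketch, with HOST, PORT and the string as placeholders:

        echo USER_ID_STRING | plink -raw -P PORT HOST

    plink then prints whatever the server sends back on the same console; wrapping the line in a batch file gives a one-click equivalent of the manual session.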

  • MySQL server under high traffic makes websites really slow or unable to load

    - by Holapress
    Lately we have been having a lot of problems with our MySQL server, from websites being really slow to being unable to load at all. The server is a dedicated machine that only runs our MySQL database. I have been running some tests using a profiler (JetProfiler) and a stress-testing tool (loadUI). If I use loadUI to open 50 simultaneous connections to one of our websites that runs quite a big query, the website already becomes unable to load. One of the things that worries me is that JetProfiler always shows Threads_connected at 1.00, and it seems that when it hits around 2.00 I'm unable to connect. The three big peaks are when I run a test with loadUI: the first was 15 simultaneous connections, which still let me load the website but really slowly; the second was 40 simultaneous connections, which already made it impossible to load; and the third was 100 connections, which also didn't load anymore. Another thing that worries me is that JetProfiler says all the queries that get used are full table scans; could this be the problem? The website I run as a test issues 3 queries: one for a menu that returns around 1000 rows, one for the ads that returns around 560 rows, and a big one to get posts that returns around 7000 rows (see screenshot below). I have also monitored the CPU of the server and there seems to be no problem there; even when I make a lot of connections with loadUI the CPU stays low. I can't figure out the main cause of the websites being unable to load under a high amount of traffic, so if anyone has suggestions for further testing or ideas about what might cause the problem, please let me know.
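
    Two things stand out from the description and can be checked directly; the sketch below uses invented table and column names, since the real schema is not shown. Full table scans on every page view mean each request reads roughly 8,500 rows across the three tables, which serializes quickly under concurrent load even while the CPU looks idle, and a Threads_connected value that never gets past 1-2 hints that the connection limit (on the server or in the web application's pool) is the bottleneck rather than the hardware:

        -- See why a query scans the whole table and which column needs an index:
        EXPLAIN SELECT * FROM posts WHERE category_id = 42 ORDER BY created_at DESC LIMIT 20;
        ALTER TABLE posts ADD INDEX idx_category_created (category_id, created_at);

        -- Check how many simultaneous connections the server will actually accept:
        SHOW VARIABLES LIKE 'max_connections';
        SHOW GLOBAL STATUS LIKE 'Max_used_connections';

    If EXPLAIN stops reporting type=ALL once the index exists and Max_used_connections stays well below max_connections during a loadUI run, the scans were the main culprit.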

  • Command output as string

    - by rik
    I want to get the output of the command "C:\Program Files (x86)\Java\jre7\bin\java.exe" -version as a string variable. I tried this way: $out = &"C:\Program Files (x86)\Java\jre7\bin\java.exe" -version but it gives an error message: java.exe : java version "1.7.0_05" At line:1 char:9 + $out = & <<<< "C:\Program Files (x86)\Java\jre7\bin\java.exe" -version + CategoryInfo : NotSpecified: (java version "1.7.0_05":String) [], RemoteException + FullyQualifiedErrorId : NativeCommandError Java(TM) SE Runtime Environment (build 1.7.0_05-b05) Java HotSpot(TM) Client VM (build 23.1-b03, mixed mode, sharing) The $out variable seems empty. What am I doing wrong?
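
    A likely explanation, offered as a sketch rather than a certain diagnosis: java -version writes its banner to stderr, not stdout, so PowerShell surfaces those stderr lines as the NativeCommandError shown above and nothing is assigned from stdout. Redirecting the error stream into the output stream and converting each record back to text captures the banner:

        # 2>&1 merges stderr into the pipeline; each stderr line arrives as an
        # ErrorRecord, so ToString() recovers the plain text.
        $java = "C:\Program Files (x86)\Java\jre7\bin\java.exe"
        $out  = & $java -version 2>&1 | ForEach-Object { $_.ToString() }
        $out -join [Environment]::NewLine   # one multi-line string, if preferred

    Without the ToString() step, $out holds ErrorRecord objects rather than plain strings.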

  • Managing records of bugs and notes

    - by Jim
    Hi. I want to create a knowledgebase for a piece of software. I'd also like to be able to track bugs and common points of failure in that application. Linking knowledgebase articles to bug records would be a real boon, as would the ability to do complex queries for particular articles and bugs on the basis of tags or metadata. I've never done anything like this before, and like to install as little as possible. I've been looking at creating a wiki with Wiki On A Stick, and it seems to offer a lot. But I can't make complex queries. I can create pages that list all 'articles' with a particular single tag, but I can't specify multiple tags or filters. Is there any software that can help? I don't want to spend money until I've tried something out thoroughly, and I'd ideally like something that demands little-to-no installation. Are there any tools that can help me? If something could easily export its data, or stored data in XML, that would be a real plus too. Otherwise, are there any simple apps that allow me to set up forms for bugs, store data as XML then query and process that XML on demand? Thanks in advance.

  • Passing integer lists in a sql query, best practices

    - by Artiom Chilaru
    I'm currently looking at ways to pass lists of integers in a SQL query, and try to decide which of them is best in which situation, what are the benefots of each, and what are the pitfalls, what should be avoided :) Right now I know of 3 ways that we currently use in our application. 1) Table valued parameter: Create a new Table Valued Parameter in sql server: CREATE TYPE [dbo].[TVP_INT] AS TABLE( [ID] [int] NOT NULL ) Then run the query against it: using (var conn = new SqlConnection(DataContext.GetDefaultConnectionString)) { var comm = conn.CreateCommand(); comm.CommandType = CommandType.Text; comm.CommandText = @" UPDATE DA SET [tsLastImportAttempt] = CURRENT_TIMESTAMP FROM [Account] DA JOIN @values IDs ON DA.ID = IDs.ID"; comm.Parameters.Add(new SqlParameter("values", downloadResults.Select(d => d.ID).ToDataTable()) { TypeName = "TVP_INT" }); conn.Open(); comm.ExecuteScalar(); } The major disadvantages of this method is the fact that Linq doesn't support table valued params (if you create an SP with a TVP param, linq won't be able to run it) :( 2) Convert the list to Binary and use it in Linq! This is a bit better.. Create an SP, and you can run it within linq :) To do this, the SP will have an IMAGE parameter, and we'll be using a user defined function (udf) to convert this to a table.. We currently have implementations of this function written in C++ and in assembly, both have pretty much the same performance :) Basically, each integer is represented by 4 bytes, and passed to the SP. In .NET we have an extension method that convers an IEnumerable to a byte array The extension method: public static Byte[] ToBinary(this IEnumerable intList) { return ToBinaryEnum(intList).ToArray(); } private static IEnumerable<Byte> ToBinaryEnum(IEnumerable<Int32> intList) { IEnumerator<Int32> marker = intList.GetEnumerator(); while (marker.MoveNext()) { Byte[] result = BitConverter.GetBytes(marker.Current); Array.Reverse(result); foreach (byte b in result) yield return b; } } The SP: CREATE PROCEDURE [Accounts-UpdateImportAttempts] @values IMAGE AS BEGIN UPDATE DA SET [tsLastImportAttempt] = CURRENT_TIMESTAMP FROM [Account] DA JOIN dbo.udfIntegerArray(@values, 4) IDs ON DA.ID = IDs.Value4 END And we can use it by running the SP directly, or in any linq query we need using (var db = new DataContext()) { db.Accounts_UpdateImportAttempts(downloadResults.Select(d => d.ID).ToBinary()); // or var accounts = db.Accounts .Where(a => db.udfIntegerArray(downloadResults.Select(d => d.ID).ToBinary(), 4) .Select(i => i.Value4) .Contains(a.ID)); } This method has the benefit of using compiled queries in linq (which will have the same sql definition, and query plan, so will also be cached), and can be used in SPs as well. Both these methods are theoretically unlimited, so you can pass millions of ints at a time :) 3) The simple linq .Contains() It's a more simple approach, and is perfect in simple scenarios. But is of course limited by this. using (var db = new DataContext()) { var accounts = db.Accounts .Where(a => downloadResults.Select(d => d.ID).Contains(a.ID)); } The biggest drawback of this method is that each integer in the downloadResults variable will be passed as a separate int.. In this case, the query is limited by sql (max allowed parameters in a sql query, which is a couple of thousand, if I remember right). So I'd like to ask.. What do you think is the best of these, and what other methods and approaches have I missed?

  • Query doesn't use a covering-index when applicable

    - by Dor
    I've downloaded the employees database and executed some queries for benchmarking purposes. Then I noticed that one query didn't use a covering index, although there was a corresponding index that I created earlier. Only when I added a FORCE INDEX clause to the query, it used a covering index. I've uploaded two files, one is the executed SQL queries and the other is the results. Can you tell why the query uses a covering-index only when a FORCE INDEX clause is added? The EXPLAIN shows that in both cases, the index dept_no_from_date_idx is being used anyway. To adapt myself to the standards of SO, I'm also writing the content of the two files here: The SQL queries: USE employees; /* Creating an index for an index-covered query */ CREATE INDEX dept_no_from_date_idx ON dept_emp (dept_no, from_date); /* Show `dept_emp` table structure, indexes and generic data */ SHOW TABLE STATUS LIKE "dept_emp"; DESCRIBE dept_emp; SHOW KEYS IN dept_emp; /* The EXPLAIN shows that the subquery doesn't use a covering-index */ EXPLAIN SELECT SQL_NO_CACHE * FROM dept_emp INNER JOIN ( /* The subquery should use a covering index, but isn't */ SELECT SQL_NO_CACHE emp_no, dept_no FROM dept_emp WHERE dept_no="d001" ORDER BY from_date DESC LIMIT 20000,50 ) AS `der` USING (`emp_no`, `dept_no`); /* The EXPLAIN shows that the subquery DOES use a covering-index, thanks to the FORCE INDEX clause */ EXPLAIN SELECT SQL_NO_CACHE * FROM dept_emp INNER JOIN ( /* The subquery use a covering index */ SELECT SQL_NO_CACHE emp_no, dept_no FROM dept_emp FORCE INDEX(dept_no_from_date_idx) WHERE dept_no="d001" ORDER BY from_date DESC LIMIT 20000,50 ) AS `der` USING (`emp_no`, `dept_no`); The results: -------------- /* Creating an index for an index-covered query */ CREATE INDEX dept_no_from_date_idx ON dept_emp (dept_no, from_date) -------------- Query OK, 331603 rows affected (33.95 sec) Records: 331603 Duplicates: 0 Warnings: 0 -------------- /* Show `dept_emp` table structure, indexes and generic data */ SHOW TABLE STATUS LIKE "dept_emp" -------------- +----------+--------+---------+------------+--------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------------+----------+----------------+---------+ | Name | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Collation | Checksum | Create_options | Comment | +----------+--------+---------+------------+--------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------------+----------+----------------+---------+ | dept_emp | InnoDB | 10 | Compact | 331883 | 36 | 12075008 | 0 | 21544960 | 29360128 | NULL | 2010-05-04 13:07:49 | NULL | NULL | utf8_general_ci | NULL | | | +----------+--------+---------+------------+--------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------------+----------+----------------+---------+ 1 row in set (0.47 sec) -------------- DESCRIBE dept_emp -------------- +-----------+---------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +-----------+---------+------+-----+---------+-------+ | emp_no | int(11) | NO | PRI | NULL | | | dept_no | char(4) | NO | PRI | NULL | | | from_date | date | NO | | NULL | | | 
to_date | date | NO | | NULL | | +-----------+---------+------+-----+---------+-------+ 4 rows in set (0.05 sec) -------------- SHOW KEYS IN dept_emp -------------- +----------+------------+-----------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+ | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | +----------+------------+-----------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+ | dept_emp | 0 | PRIMARY | 1 | emp_no | A | 331883 | NULL | NULL | | BTREE | | | dept_emp | 0 | PRIMARY | 2 | dept_no | A | 331883 | NULL | NULL | | BTREE | | | dept_emp | 1 | emp_no | 1 | emp_no | A | 331883 | NULL | NULL | | BTREE | | | dept_emp | 1 | dept_no | 1 | dept_no | A | 7 | NULL | NULL | | BTREE | | | dept_emp | 1 | dept_no_from_date_idx | 1 | dept_no | A | 13 | NULL | NULL | | BTREE | | | dept_emp | 1 | dept_no_from_date_idx | 2 | from_date | A | 165941 | NULL | NULL | | BTREE | | +----------+------------+-----------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+ 6 rows in set (0.23 sec) -------------- /* The EXPLAIN shows that the subquery doesn't use a covering-index */ EXPLAIN SELECT SQL_NO_CACHE * FROM dept_emp INNER JOIN ( /* The subquery should use a covering index, but isn't */ SELECT SQL_NO_CACHE emp_no, dept_no FROM dept_emp WHERE dept_no="d001" ORDER BY from_date DESC LIMIT 20000,50 ) AS `der` USING (`emp_no`, `dept_no`) -------------- +----+-------------+------------+--------+----------------------------------------------+-----------------------+---------+------------------------+-------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+------------+--------+----------------------------------------------+-----------------------+---------+------------------------+-------+-------------+ | 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 50 | | | 1 | PRIMARY | dept_emp | eq_ref | PRIMARY,emp_no,dept_no,dept_no_from_date_idx | PRIMARY | 16 | der.emp_no,der.dept_no | 1 | | | 2 | DERIVED | dept_emp | ref | dept_no,dept_no_from_date_idx | dept_no_from_date_idx | 12 | | 21402 | Using where | +----+-------------+------------+--------+----------------------------------------------+-----------------------+---------+------------------------+-------+-------------+ 3 rows in set (0.09 sec) -------------- /* The EXPLAIN shows that the subquery DOES use a covering-index, thanks to the FORCE INDEX clause */ EXPLAIN SELECT SQL_NO_CACHE * FROM dept_emp INNER JOIN ( /* The subquery use a covering index */ SELECT SQL_NO_CACHE emp_no, dept_no FROM dept_emp FORCE INDEX(dept_no_from_date_idx) WHERE dept_no="d001" ORDER BY from_date DESC LIMIT 20000,50 ) AS `der` USING (`emp_no`, `dept_no`) -------------- +----+-------------+------------+--------+----------------------------------------------+-----------------------+---------+------------------------+-------+--------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+------------+--------+----------------------------------------------+-----------------------+---------+------------------------+-------+--------------------------+ | 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 50 | | | 1 | PRIMARY 
| dept_emp | eq_ref | PRIMARY,emp_no,dept_no,dept_no_from_date_idx | PRIMARY | 16 | der.emp_no,der.dept_no | 1 | | | 2 | DERIVED | dept_emp | ref | dept_no_from_date_idx | dept_no_from_date_idx | 12 | | 37468 | Using where; Using index | +----+-------------+------------+--------+----------------------------------------------+-----------------------+---------+------------------------+-------+--------------------------+ 3 rows in set (0.05 sec) Bye
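
    A hedged observation on the two plans above: the visible difference is only the row estimate (21402 vs 37468) and the list of possible_keys, and that kind of choice is driven by the index statistics the optimizer has on hand; if those statistics are stale it may judge that reading full rows is cheaper than the covering read. Refreshing them is cheap and worth trying before reaching for FORCE INDEX:

        ANALYZE TABLE dept_emp;   -- rebuild index statistics, then re-run both EXPLAINs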

  • MySQL - help me optimize this query (improved question)

    - by sandeepan-nath
    About the system: - There are tutors who create classes and packs - A tags based search approach is being followed.Tag relations are created when new tutors register and when tutors create packs (this makes tutors and packs searcheable). For details please check the section How tags work in this system? below. Following is the concerned query SELECT SUM(DISTINCT( t.tag LIKE "%Dictatorship%" )) AS key_1_total_matches, SUM(DISTINCT( t.tag LIKE "%democracy%" )) AS key_2_total_matches, COUNT(DISTINCT( od.id_od )) AS tutor_popularity, CASE WHEN ( IF(( wc.id_wc > 0 ), ( wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1 AND ( wccp.country_code = 'IE' OR wccp.country_code IN ( 'INT' ) ) ), 0) ) THEN 1 ELSE 0 END AS 'classes_published', CASE WHEN ( IF(( lp.id_lp > 0 ), ( lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND ( lpcp.country_code = 'IE' OR lpcp.country_code IN ( 'INT' ) ) ), 0) ) THEN 1 ELSE 0 END AS 'packs_published', td . *, u . * FROM tutor_details AS td JOIN users AS u ON u.id_user = td.id_user LEFT JOIN learning_packs_tag_relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor LEFT JOIN learning_packs AS lp ON lptagrels.id_lp = lp.id_lp LEFT JOIN learning_packs_categories AS lpc ON lpc.id_lp_cat = lp.id_lp_cat LEFT JOIN learning_packs_categories AS lpcp ON lpcp.id_lp_cat = lpc.id_parent LEFT JOIN learning_pack_content AS lpct ON ( lp.id_lp = lpct.id_lp ) LEFT JOIN webclasses_tag_relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor LEFT JOIN webclasses AS wc ON wtagrels.id_wc = wc.id_wc LEFT JOIN learning_packs_categories AS wcc ON wcc.id_lp_cat = wc.id_wp_cat LEFT JOIN learning_packs_categories AS wccp ON wccp.id_lp_cat = wcc.id_parent LEFT JOIN order_details AS od ON td.id_tutor = od.id_author LEFT JOIN orders AS o ON od.id_order = o.id_order LEFT JOIN tutors_tag_relations AS ttagrels ON td.id_tutor = ttagrels.id_tutor JOIN tags AS t ON ( t.id_tag = ttagrels.id_tag ) OR ( t.id_tag = lptagrels.id_tag ) OR ( t.id_tag = wtagrels.id_tag ) WHERE ( u.country = 'IE' OR u.country IN ( 'INT' ) ) AND CASE WHEN ( ( t.id_tag = lptagrels.id_tag ) AND ( lp.id_lp 0 ) ) THEN lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND ( lpcp.country_code = 'IE' OR lpcp.country_code IN ( 'INT' ) ) ELSE 1 END AND CASE WHEN ( ( t.id_tag = wtagrels.id_tag ) AND ( wc.id_wc 0 ) ) THEN wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date '2010-06-01 22:00:56' AND wccp.status = 1 AND ( wccp.country_code = 'IE' OR wccp.country_code IN ( 'INT' ) ) ELSE 1 END AND CASE WHEN ( od.id_od 0 ) THEN od.id_author = td.id_tutor AND o.order_status = 'paid' AND CASE WHEN ( od.id_wc 0 ) THEN od.can_attend_class = 1 ELSE 1 END ELSE 1 END GROUP BY td.id_tutor HAVING key_1_total_matches = 1 AND key_2_total_matches = 1 ORDER BY tutor_popularity DESC, u.surname ASC, u.name ASC LIMIT 0, 20 The problem The results returned by the above query are correct (AND logic working as per expectation), but the time taken by the query rises alarmingly for heavier data and for the current data I have it is like 25 seconds as against normal query timings of the order of 0.005 - 0.0002 seconds, which makes it totally unusable. It is possible that some of the delay is being caused because all the possible fields have not yet been indexed. The tag field of tags table is indexed. Is there something faulty with the query? What can be the reason behind 20+ seconds of execution time? How tags work in this system? 
When a tutor registers, tags are entered and tag relations are created with respect to tutor's details like name, surname etc. When a Tutors create packs, again tags are entered and tag relations are created with respect to pack's details like pack name, description etc. tag relations for tutors stored in tutors_tag_relations and those for packs stored in learning_packs_tag_relations. All individual tags are stored in tags table. The explain query output:- Please see this screenshot - http://www.test.examvillage.com/Explain_query.jpg
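
    One hedged suggestion that follows from the query text alone (the EXPLAIN screenshot would be needed to confirm it): the SUM(DISTINCT(t.tag LIKE "%Dictatorship%")) tests use a leading wildcard, and a B-tree index on tags.tag cannot be used for LIKE '%word%', so every joined tag row is string-scanned. If tags are whole words, matching them exactly, or through a FULLTEXT index, lets the index do the work:

        -- Sketch (MyISAM-era FULLTEXT); each LIKE test can then become e.g.
        -- MATCH(t.tag) AGAINST ('Dictatorship' IN BOOLEAN MODE)
        ALTER TABLE tags ADD FULLTEXT INDEX ft_tag (tag);

    The other usual suspect in a join this wide is a missing composite index on the tag-relation tables, e.g. (id_tutor, id_tag), which the EXPLAIN output would confirm or rule out.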

  • Please help fix and optimize this query

    - by user607217
    I am working on a system to find potential duplicates in our customers table (SQL 2005). I am using the built-in SOUNDEX value that our software computes when customers are added/updated, but I also implemented the double metaphone algorithm for better matching. This is the most-nested query I have created, and I can't help but think there is a better way to do it and I'd like to learn. In the inner-most query I am joining the customer table to the metaphone table I created, then finding customers that have identical pKey (primary phonetic key). I take that, union that with customers that have matching soundex values, and then proceed to score those matches with various text similarity functions. This is currently working, but I would also like to add a union of customers whose aKey (alternate phonetic key) match. This would be identical to "QUERY A" except to substitute on (c1Akey = c2Akey) for the join. However, when I attempt to include that, I get errors when I try to execute my query. Here is the code: --Create aggregate ranking select c1Name, c2Name, nDiff, c1Addr, c2Addr, aDiff, c1SSN, c2SSN, sDiff, c1DOB, c2DOB, dDiff, nDiff+aDiff+dDiff+sDiff as Score ,(sDiff+dDiff)*1.5 + (nDiff+dDiff)*1.5 + (nDiff+sDiff)*1.5 + aDiff *.5 + nDiff *.5 as [Rank] FROM ( --Create match scores for different fields SELECT c1Name, c2Name, c1Addr, c2Addr, c1SSN, c2SSN, c1LTD, c2LTD, c1DOB, c2DOB, dbo.Jaro(c1name, c2name) AS nDiff, dbo.JaroWinkler(c1addr, c2addr) AS aDiff, CASE WHEN c1dob = '1901-01-01' OR c2dob = '1901-01-01' OR c1dob = '1800-01-01' OR c2dob = '1800-01-01' THEN .5 ELSE dbo.SmithWaterman(c1dob, c2dob) END AS dDiff, CASE WHEN c1ssn = '000-00-0000' OR c2ssn = '000-00-0000' THEN .5 ELSE dbo.Jaro(c1ssn, c2ssn) END AS sDiff FROM -- Generate list of possible matches based on multiple phonetic matching fields ( select * from -- List of similar names from pKey field of ##Metaphone table --QUERY A BEGIN (select customers.custno as c1Custno, name as c1Name, haddr as c1Addr, ssn as c1SSN, lasttripdate as c1LTD, dob as c1DOB, soundex as c1Soundex, pkey as c1Pkey, akey as c1Akey from Customers WITH (nolock) join ##Metaphone on customers.custno = ##Metaphone.custno) as c1 JOIN (select customers.custno as c2Custno, name as c2Name, haddr as c2Addr, ssn as c2SSN, lasttripdate as c2LTD, dob as c2DOB, soundex as c2Soundex, pkey as c2Pkey, akey as c2Akey from Customers with (nolock) join ##Metaphone on customers.custno = ##Metaphone.custno) as c2 on (c1Pkey = c2Pkey) and (c1Custno < c2Custno) WHERE (c1Name <> 'PARENT, GUARDIAN') and c1soundex != c2soundex --QUERY A END union --List of similar names from pregenerated SOUNDEX field (select t1.custno, t1.name, t1.haddr, t1.ssn, t1.lasttripdate, t1.dob, t1.[soundex], 0, 0, t2.custno, t2.name, t2.haddr, t2.ssn, t2.lasttripdate, t2.dob, t2.[soundex], 0, 0 from Customers t1 WITH (nolock) join customers t2 with (nolock) on t1.[soundex] = t2.[soundex] and t1.custno < t2.custno where (t1.name <> 'PARENT, GUARDIAN')) ) as a ) as b where (sDiff+dDiff)*1.5 + (nDiff+dDiff)*1.5 + (nDiff+sDiff)*1.5 + aDiff *.5 + nDiff *.5 >= 7.5 order by [rank] desc, score desc Previously, I was using joins such as on c1.pkey = c2.pkey or c1.akey = c2.akey or c1.soundex = c2.soundex but the performance was horrendous, and using unions seems to be working a lot better. Out of 103K customers, tt is currently generating a list of 8.5M potential matches (based on the phonetic codes) in 2.25 minutes, and then taking another 2 to score, rank and filter those down to about 3000. 
So I am happy with the performance, I just can't help but think there is a better way to structure this, and I need help adding the extra union condition. Thanks!

  • Projections.count() and Projections.countDistinct() both result in the same query

    - by Kim L
    EDIT: I've edited this post completely, so that the new description of my problem includes all the details and not only what I previously considered relevant. Maybe this new description will help to solve the problem I'm facing. I have two entity classes, Customer and CustomerGroup. The relation between customer and customer groups is ManyToMany. The customer groups are annotated in the following way in the Customer class. @Entity public class Customer { ... @ManyToMany(mappedBy = "customers", fetch = FetchType.LAZY) public Set<CustomerGroup> getCustomerGroups() { ... } ... public String getUuid() { return uuid; } ... } The customer reference in the customer groups class is annotated in the following way @Entity public class CustomerGroup { ... @ManyToMany public Set<Customer> getCustomers() { ... } ... public String getUuid() { return uuid; } ... } Note that both the CustomerGroup and Customer classes also have an UUID field. The UUID is a unique string (uniqueness is not forced in the datamodel, as you can see, it is handled as any other normal string). What I'm trying to do, is to fetch all customers which do not belong to any customer group OR the customer group is a "valid group". The validity of a customer group is defined with a list of valid UUIDs. I've created the following criteria query Criteria criteria = getSession().createCriteria(Customer.class); criteria.setProjection(Projections.countDistinct("uuid")); criteria = criteria.createCriteria("customerGroups", "groups", Criteria.LEFT_JOIN); List<String> uuids = getValidUUIDs(); Criterion criterion = Restrictions.isNull("groups.uuid"); if (uuids != null && uuids.size() > 0) { criterion = Restrictions.or(criterion, Restrictions.in( "groups.uuid", uuids)); } criteria.add(criterion); When executing the query, it will result in the following SQL query select count(*) as y0_ from Customer this_ left outer join CustomerGroup_Customer customergr3_ on this_.id=customergr3_.customers_id left outer join CustomerGroup groups1_ on customergr3_.customerGroups_id=groups1_.id where groups1_.uuid is null or groups1_.uuid in ( ?, ? ) The query is exactly what I wanted, but with one exception. Since a Customer can belong to multiple CustomerGroups, left joining the CustomerGroup will result in duplicated Customer objects. Hence the count(*) will give a false value, as it only counts how many results there are. I need to get the amount of unique customers and this I expected to achieve by using the Projections.countDistinct("uuid"); -projection. For some reason, as you can see, the projection will still result in a count(*) query instead of the expected count(distinct uuid). Replacing the projection countDistinct with just count("uuid") will result in the exactly same query. Am I doing something wrong or is this a bug? === "Problem" solved. Reason: PEBKAC (Problem Exists Between Keyboard And Chair). I had a branch in my code and didn't realize that the branch was executed. That branch used rowCount() instead of countDistinct().

  • Grails - ElasticSearch - QueryParsingException[[index] No query registered for [query]]; with elasticSearchHelper; JSON via curl works fine though

    - by v1p
    I have been working on a Grails project, clubbed with ElasticSearch ( v 20.6 ), with a custom build of elasticsearch-grails-plugin(to support geo_point indexing : v.20.6) have been trying to do a filtered Search, while using script_fields (to calculate distance). Following is Closure & the generated JSON from the GXContentBuilder : Closure records = Domain.search(searchType:'dfs_query_and_fetch'){ query { filtered = { query = { if(queryTxt){ query_string(query: queryTxt) }else{ match_all {} } } filter = { geo_distance = { distance = "${userDistance}km" "location"{ lat = latlon[0]?:0.00 lon = latlon[1]?:0.00 } } } } } script_fields = { distance = { script = "doc['location'].arcDistanceInKm($latlon)" } } fields = ["_source"] } GXContentBuilder generated query JSON : { "query": { "filtered": { "query": { "match_all": {} }, "filter": { "geo_distance": { "distance": "5km", "location": { "lat": "37.752258", "lon": "-121.949886" } } } } }, "script_fields": { "distance": { "script": "doc['location'].arcDistanceInKm(37.752258, -121.949886)" } }, "fields": ["_source"] } The JSON query, using curl-way, works perfectly fine. But when I try to execute it from Groovy Code, I mean with this : elasticSearchHelper.withElasticSearch { Client client -> def response = client.search(request).actionGet() } It throws following error : Failed to execute phase [dfs], total failure; shardFailures {[1][index][3]: SearchParseException[[index][3]: from[0],size[60]: Parse Failure [Failed to parse source [{"from":0,"size":60,"query_binary":"eyJxdWVyeSI6eyJmaWx0ZXJlZCI6eyJxdWVyeSI6eyJtYXRjaF9hbGwiOnt9fSwiZmlsdGVyIjp7Imdlb19kaXN0YW5jZSI6eyJkaXN0YW5jZSI6IjVrbSIsImNvbXBhbnkuYWRkcmVzcy5sb2NhdGlvbiI6eyJsYXQiOiIzNy43NTIyNTgiLCJsb24iOiItMTIxLjk0OTg4NiJ9fX19fSwic2NyaXB0X2ZpZWxkcyI6eyJkaXN0YW5jZSI6eyJzY3JpcHQiOiJkb2NbJ2NvbXBhbnkuYWRkcmVzcy5sb2NhdGlvbiddLmFyY0Rpc3RhbmNlSW5LbSgzNy43NTIyNTgsIC0xMjEuOTQ5ODg2KSJ9fSwiZmllbGRzIjpbIl9zb3VyY2UiXX0=","explain":true}]]]; nested: QueryParsingException[[index] No query registered for [query]]; } The above Closure works if I only use filtered = { ... } script_fields = { ... } but it doesn't return the calculated distance. Anyone had any similar problem ? Thanks in advance :) It's possible that I might have been dim to point out the obvious here :P

  • How to create a simple adf dashboard application with EJB 3.0

    - by Rodrigues, Raphael
    In this month's Oracle Magazine, Frank Nimphius wrote a very good article about an Oracle ADF Faces dashboard application to support persistent user personalization. You can read this entire article clicking here. The idea in this article is to extend the dashboard application. My idea here is to create a similar dashboard application, but instead ADF BC model layer, I'm intending to use EJB3.0. There are just a one small trick here and I'll show you. I'm using the HR usual oracle schema. The steps are: 1. Create a ADF Fusion Application with EJB as a layer model 2. Generate the entities from table (I'm using Department and Employees only) 3. Create a new Session Bean. I called it: HRSessionEJB 4. Create a new method like that: public List getAllDepartmentsHavingEmployees(){ JpaEntityManager jpaEntityManager = (JpaEntityManager)em.getDelegate(); Query query = jpaEntityManager.createNamedQuery("Departments.allDepartmentsHavingEmployees"); JavaBeanResult.setQueryResultClass(query, AggregatedDepartment.class); return query.getResultList(); } 5. In the Departments entity, create a new native query annotation: @Entity @NamedQueries( { @NamedQuery(name = "Departments.findAll", query = "select o from Departments o") }) @NamedNativeQueries({ @NamedNativeQuery(name="Departments.allDepartmentsHavingEmployees", query = "select e.department_id, d.department_name , sum(e.salary), avg(e.salary) , max(e.salary), min(e.salary) from departments d , employees e where d.department_id = e.department_id group by e.department_id, d.department_name")}) public class Departments implements Serializable {...} 6. Create a new POJO called AggregatedDepartment: package oramag.sample.dashboard.model; import java.io.Serializable; import java.math.BigDecimal; public class AggregatedDepartment implements Serializable{ @SuppressWarnings("compatibility:5167698678781240729") private static final long serialVersionUID = 1L; private BigDecimal departmentId; private String departmentName; private BigDecimal sum; private BigDecimal avg; private BigDecimal max; private BigDecimal min; public AggregatedDepartment() { super(); } public AggregatedDepartment(BigDecimal departmentId, String departmentName, BigDecimal sum, BigDecimal avg, BigDecimal max, BigDecimal min) { super(); this.departmentId = departmentId; this.departmentName = departmentName; this.sum = sum; this.avg = avg; this.max = max; this.min = min; } public void setDepartmentId(BigDecimal departmentId) { this.departmentId = departmentId; } public BigDecimal getDepartmentId() { return departmentId; } public void setDepartmentName(String departmentName) { this.departmentName = departmentName; } public String getDepartmentName() { return departmentName; } public void setSum(BigDecimal sum) { this.sum = sum; } public BigDecimal getSum() { return sum; } public void setAvg(BigDecimal avg) { this.avg = avg; } public BigDecimal getAvg() { return avg; } public void setMax(BigDecimal max) { this.max = max; } public BigDecimal getMax() { return max; } public void setMin(BigDecimal min) { this.min = min; } public BigDecimal getMin() { return min; } } 7. Create the util java class called JavaBeanResult. The function of this class is to configure a native SQL query to return POJOs in a single line of code using the utility class. Credits: http://onpersistence.blogspot.com.br/2010/07/eclipselink-jpa-native-constructor.html package oramag.sample.dashboard.model.util; /******************************************************************************* * Copyright (c) 2010 Oracle. 
All rights reserved. * This program and the accompanying materials are made available under the * terms of the Eclipse Public License v1.0 and Eclipse Distribution License v. 1.0 * which accompanies this distribution. * The Eclipse Public License is available at http://www.eclipse.org/legal/epl-v10.html * and the Eclipse Distribution License is available at * http://www.eclipse.org/org/documents/edl-v10.php. * * @author shsmith ******************************************************************************/ import java.lang.reflect.Constructor; import java.lang.reflect.InvocationTargetException; import java.util.ArrayList; import java.util.List; import javax.persistence.Query; import org.eclipse.persistence.exceptions.ConversionException; import org.eclipse.persistence.internal.helper.ConversionManager; import org.eclipse.persistence.internal.sessions.AbstractRecord; import org.eclipse.persistence.internal.sessions.AbstractSession; import org.eclipse.persistence.jpa.JpaHelper; import org.eclipse.persistence.queries.DatabaseQuery; import org.eclipse.persistence.queries.QueryRedirector; import org.eclipse.persistence.sessions.Record; import org.eclipse.persistence.sessions.Session; /*** * This class is a simple query redirector that intercepts the result of a * native query and builds an instance of the specified JavaBean class from each * result row. The order of the selected columns musts match the JavaBean class * constructor arguments order. * * To configure a JavaBeanResult on a native SQL query use: * JavaBeanResult.setQueryResultClass(query, SomeBeanClass.class); * where query is either a JPA SQL Query or native EclipseLink DatabaseQuery. * * @author shsmith * */ public final class JavaBeanResult implements QueryRedirector { private static final long serialVersionUID = 3025874987115503731L; protected Class resultClass; public static void setQueryResultClass(Query query, Class resultClass) { JavaBeanResult javaBeanResult = new JavaBeanResult(resultClass); DatabaseQuery databaseQuery = JpaHelper.getDatabaseQuery(query); databaseQuery.setRedirector(javaBeanResult); } public static void setQueryResultClass(DatabaseQuery query, Class resultClass) { JavaBeanResult javaBeanResult = new JavaBeanResult(resultClass); query.setRedirector(javaBeanResult); } protected JavaBeanResult(Class resultClass) { this.resultClass = resultClass; } @SuppressWarnings("unchecked") public Object invokeQuery(DatabaseQuery query, Record arguments, Session session) { List results = new ArrayList(); try { Constructor[] constructors = resultClass.getDeclaredConstructors(); Constructor javaBeanClassConstructor = null; // (Constructor) resultClass.getDeclaredConstructors()[0]; Class[] constructorParameterTypes = null; // javaBeanClassConstructor.getParameterTypes(); List rows = (List) query.execute( (AbstractSession) session, (AbstractRecord) arguments); for (Object[] columns : rows) { boolean found = false; for (Constructor constructor : constructors) { javaBeanClassConstructor = constructor; constructorParameterTypes = javaBeanClassConstructor.getParameterTypes(); if (columns.length == constructorParameterTypes.length) { found = true; break; } // if (columns.length != constructorParameterTypes.length) { // throw new ColumnParameterNumberMismatchException( // resultClass); // } } if (!found) throw new ColumnParameterNumberMismatchException( resultClass); Object[] constructorArgs = new Object[constructorParameterTypes.length]; for (int j = 0; j < columns.length; j++) { Object columnValue = columns[j]; Class 
parameterType = constructorParameterTypes[j]; // convert the column value to the correct type--if possible constructorArgs[j] = ConversionManager.getDefaultManager() .convertObject(columnValue, parameterType); } results.add(javaBeanClassConstructor.newInstance(constructorArgs)); } } catch (ConversionException e) { throw new ColumnParameterMismatchException(e); } catch (IllegalArgumentException e) { throw new ColumnParameterMismatchException(e); } catch (InstantiationException e) { throw new ColumnParameterMismatchException(e); } catch (IllegalAccessException e) { throw new ColumnParameterMismatchException(e); } catch (InvocationTargetException e) { throw new ColumnParameterMismatchException(e); } return results; } public final class ColumnParameterMismatchException extends RuntimeException { private static final long serialVersionUID = 4752000720859502868L; public ColumnParameterMismatchException(Throwable t) { super( "Exception while processing query results-ensure column order matches constructor parameter order", t); } } public final class ColumnParameterNumberMismatchException extends RuntimeException { private static final long serialVersionUID = 1776794744797667755L; public ColumnParameterNumberMismatchException(Class clazz) { super( "Number of selected columns does not match number of constructor arguments for: " + clazz.getName()); } } } 8. Create the DataControl and a jsf or jspx page 9. Drag allDepartmentsHavingEmployees from DataControl and drop in your page 10. Choose Graph > Type: Bar (Normal) > any layout 11. In the wizard screen, Bars label, adds: sum, avg, max, min. In the X Axis label, adds: departmentName, and click in OK button 12. Run the page, the result is showed below: You can download the workspace here . It was using the latest jdeveloper version 11.1.2.2.

  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. One of the most useful predefined compiler macros in C/C++ were __FILE__ and __LINE__ which do expand to the compilation units file name and line number where this value is encountered by the compiler. After 4.5 versions of .NET we are on par with C/C++ again. It is of course not a simple compiler expandable macro it is an attribute but it does serve exactly the same purpose. Now we do get CallerLineNumberAttribute  == __LINE__ CallerFilePathAttribute        == __FILE__ CallerMemberNameAttribute  == __FUNCTION__ (MSVC Extension)   The most important one is CallerMemberNameAttribute which is very useful to implement the INotifyPropertyChanged interface without the need to hard code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute and you get the property name as string directly inserted by the C# compiler at compile time.   public string UserName { get { return _userName; } set { _userName=value; RaisePropertyChanged(); // no more RaisePropertyChanged(“UserName”)! } } protected void RaisePropertyChanged([CallerMemberName] string member = "") { var copy = PropertyChanged; if(copy != null) { copy(new PropertyChangedEventArgs(this, member)); } } Nice and handy. This was obviously the prime reason to implement this feature in the C# 5.0 compiler. You can repurpose this feature for tracing to get your hands on the method name of your caller along other stuff very fast now. All infos are added during compile time which is much faster than other approaches like walking the stack. The example on MSDN shows the usage of this attribute with an example public static void TraceMessage(string message, [CallerMemberName] string memberName = "", [CallerFilePath] string sourceFilePath = "", [CallerLineNumber] int sourceLineNumber = 0) { Console.WriteLine("Hi {0} {1} {2}({3})", message, memberName, sourceFilePath, sourceLineNumber); }   When I do think of tracing I do usually want to have a API which allows me to Trace method enter and leave Trace messages with a severity like Info, Warning, Error When I do print a trace message it is very useful to print out method and type name as well. So your API must either be able to pass the method and type name as strings or extract it automatically via walking back one Stackframe and fetch the infos from there. The first glaring deficiency is that there is no CallerTypeAttribute yet because the C# compiler team was not satisfied with its performance.   
A usable Trace Api might therefore look like   enum TraceTypes { None = 0, EnterLeave = 1 << 0, Info = 1 << 1, Warn = 1 << 2, Error = 1 << 3 } class Tracer : IDisposable { string Type; string Method; public Tracer(string type, string method) { Type = type; Method = method; if (IsEnabled(TraceTypes.EnterLeave,Type, Method)) { } } private bool IsEnabled(TraceTypes traceTypes, string Type, string Method) { // Do checking here if tracing is enabled return false; } public void Info(string fmt, params object[] args) { } public void Warn(string fmt, params object[] args) { } public void Error(string fmt, params object[] args) { } public static void Info(string type, string method, string fmt, params object[] args) { } public static void Warn(string type, string method, string fmt, params object[] args) { } public static void Error(string type, string method, string fmt, params object[] args) { } public void Dispose() { // trace method leave } } This minimal trace API is very fast but hard to maintain since you need to pass in the type and method name as hard coded strings which can change from time to time. But now we have at least CallerMemberName to rid of the explicit method parameter right? Not really. Since any acceptable usable trace Api should have a method signature like Tracexxx(… string fmt, params [] object args) we not able to add additional optional parameters after the args array. If we would put it before the format string we would need to make it optional as well which would mean the compiler would need to figure out what our trace message and arguments are (not likely) or we would need to specify everything explicitly just like before . There are ways around this by providing a myriad of overloads which in the end are routed to the very same method but that is ugly. I am not sure if nobody inside MS agrees that the above API is reasonable to have or (more likely) that the whole talk about you can use this feature for diagnostic purposes was not a core feature at all but a simple byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow for variable argument arrays after the params keyword another set of optional arguments which are always filled by the compiler but I do not know if this is an easy one. The thing I am missing much more is the not provided CallerType attribute. But not in the way you would think of. In the API above I did add some filtering based on method and type to stay as fast as possible for types where tracing is not enabled at all. It should be no more expensive than an additional method call and a bool variable check if tracing for this type is enabled at all. The data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I do expect to happen I have come up with an alternative approach which allows me basically to attach run time data to any existing type object in super fast way. The key to success is the usage of generics.   class Tracer<T> : IDisposable { string Method; public Tracer(string method) { if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave)) { } } public void Dispose() { if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave)) { } } public static void Info(string fmt, params object[] args) { } /// <summary> /// Every type gets its own instance with a fresh set of variables to describe the /// current filter status. 
The thing I am missing much more is the CallerType attribute that was never provided. But not in the way you might think. In the API above I added some filtering based on method and type to stay as fast as possible for types where tracing is not enabled at all. It should cost no more than an additional method call and a bool check whether tracing for this type is enabled. That data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I expect to happen, I have come up with an alternative approach which basically allows me to attach run time data to any existing type in a super fast way. The key to success is the usage of generics.

class Tracer<T> : IDisposable
{
    string Method;

    public Tracer(string method)
    {
        Method = method;
        if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
        {
        }
    }

    public void Dispose()
    {
        if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
        {
        }
    }

    public static void Info(string fmt, params object[] args) { }

    /// <summary>
    /// Every type gets its own instance with a fresh set of variables to describe the
    /// current filter status.
    /// </summary>
    /// <typeparam name="UsingType"></typeparam>
    internal class TraceData<UsingType>
    {
        internal static TraceData<UsingType> Instance = new TraceData<UsingType>();

        public bool IsInitialized = false; // flag if we need to reinit the trace data in case of reconfigured trace settings at runtime
        public TraceTypes Enabled = TraceTypes.None; // enabled trace levels for this type
    }
}

We do not need to pass the type as a string or Type object to the trace API. Instead we define a generic API that accepts the using type as a generic parameter. Then we can create a static TraceData instance which, due to the nature of generics, is a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing, you do not want to bring the app down by simply enabling tracing for one special, rarely used type. The trace filter for types which are not enabled must therefore be the fastest code path. This approach has the nice side effect that if you store the TraceData instances in one global list you can safely reconfigure tracing at runtime by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type, TraceData> object, but big hash tables have random memory access semantics, which is bad for cache locality, and you always pay for the lookup, which involves hash code generation, an equality check and an indexed array access. The generic version is wicked fast and allows you to add more features to your tracing API with minimal performance overhead.

But it is cumbersome to always write the generic type argument explicitly, and worse, if you refactor code and move parts of it to other classes, tracing might no longer be configured correctly. I would therefore like to decorate my type with an attribute

[CallerType]
class Tracer<T> : IDisposable

to tell the compiler to fill in the generic type argument automatically:

class Program
{
    static void Main(string[] args)
    {
        using (var t = new Tracer()) // equivalent to new Tracer<Program>()
        {
            // ...
        }
    }
}

That would be really useful and super fast, since you do not need to pass any type object around but still have full type information at hand. This change would be breaking if another non-generic type of the same name existed in the same namespace, where now the generic counterpart would be preferred. But that is an acceptable risk in my opinion, since you can already get conflicts today if two generic types of the same name are defined in different namespaces; this would only be a variation of that issue.

When you think about this further you can add more features, such as tracing the exception with which a method is left in your Dispose method, with that little trick I wrote about some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can e.g.

- Reconfigure tracing at run time.
- Take a memory dump when a specific method is left with a specific exception.
- Throw an exception when a specific trace statement is hit (useful for testing error conditions).
- Execute a passed delegate which e.g. dumps additional state when enabled (see the sketch after this list).
- Write data to an in-memory ring buffer and dump it when specific events occur (e.g. a method is left with an exception, triggered from outside).
- Write data to an output device.
- ...
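As a sketch for the delegate bullet above, such an overload could be added inside the generic Tracer<T> class shown earlier; the overload, its Console output and the call site are my assumptions, not part of the original code:

// Hypothetical addition inside Tracer<T>: the expensive state dump is only
// built when Info tracing is actually enabled for T.
public static void Info(Func<string> expensiveStateDump)
{
    if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.Info))
    {
        Console.WriteLine(expensiveStateDump()); // pay for the dump only when enabled
    }
}

// Call site: the lambda is never evaluated while tracing for Program is off.
// Tracer<Program>.Info(() => BuildExpensiveCacheReport());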
This stuff is really useful to have when your code is in production on a mission critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (with .NET 4 no longer, unless you enable a compatibility flag) where you would like to have a minidump, or after two weeks of operation you have reached a state where you need a full memory dump at a specific point in time, in the middle of a transaction. On my older machine I get 50 million traces/s with this super fast approach when tracing is disabled. When I know that tracing is enabled for a type I can walk the stack by using StackFrameHelper.GetStackFramesInternal to check further whether a specific action or output device is configured for this method, which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s, so performance is not so important anymore since at that point I actually want to do something. The CallerMemberName feature of the C# 5 compiler is nice, but I would have preferred direct access to the MethodHandle rather than the stringified version of it. And I really would like to see a CallerType attribute implemented to fill in the generic type argument at the call site, to augment the static CLR type data with run time data.
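To close, a rough sketch of how such throughput numbers could be measured; it assumes the generic Tracer<T> from the earlier sketch with an Info method that checks the Enabled flags first, and the absolute numbers will of course vary with hardware and JIT settings:

using System;
using System.Diagnostics;

class TraceBenchmark
{
    const int Iterations = 10 * 1000 * 1000;

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            // Assuming Info checks TraceData<TraceBenchmark>.Instance.Enabled first,
            // a disabled trace costs little more than a static field read and a flag
            // check, plus the params array allocation at the call site.
            Tracer<TraceBenchmark>.Info("never formatted {0}", i);
        }
        sw.Stop();
        Console.WriteLine("{0:N0} trace calls/s", Iterations / sw.Elapsed.TotalSeconds);
    }
}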
