Search Results

Search found 19449 results on 778 pages for 'query builder'.


  • How To Implement The Query Side Of CQS in DDD?

    - by Laz
    I have implemented the command side of DDD using the domain model and repositories, but how do I implement the query side? Do I create an entirely new domain model for the UI, and where should it be kept in the project structure: the domain layer, the UI layer, etc.? Also, what do I use as my querying mechanism? Do I create new repositories specifically for the UI domain objects, something other than repositories, or something else?
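
    A common answer, sketched here for illustration rather than taken from the question: keep the command side's domain model and repositories as they are, and give the query side its own thin read layer that projects straight from the database into flat DTOs shaped for each screen. All names below (MemberSummaryDto, IMemberQueries, vw_MemberSummary) are hypothetical:

        using System.Collections.Generic;
        using System.Data.SqlClient;

        // Hypothetical read-side DTO: shaped for one screen, no domain behavior.
        public class MemberSummaryDto
        {
            public int Id { get; set; }
            public string UserName { get; set; }
        }

        // Hypothetical read-side contract: not a repository, just queries.
        public interface IMemberQueries
        {
            IList<MemberSummaryDto> GetActiveMembers();
        }

        // One possible implementation: plain ADO.NET projecting a view directly
        // into DTOs, bypassing the domain model entirely.
        public class SqlMemberQueries : IMemberQueries
        {
            private readonly string _connectionString;

            public SqlMemberQueries(string connectionString)
            {
                _connectionString = connectionString;
            }

            public IList<MemberSummaryDto> GetActiveMembers()
            {
                var members = new List<MemberSummaryDto>();
                using (var conn = new SqlConnection(_connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT Id, UserName FROM vw_MemberSummary WHERE IsActive = 1", conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            members.Add(new MemberSummaryDto
                            {
                                Id = reader.GetInt32(0),
                                UserName = reader.GetString(1)
                            });
                        }
                    }
                }
                return members;
            }
        }

    One common placement is to keep these DTOs and query classes outside the domain layer, in an application or read-model project, since they serve the UI rather than the ubiquitous language.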

    Read the article

  • Optimize date query for large child tables: GiST or GIN?

    - by Dave Jarvis
    Problem: 72 child tables, each having a year index and a station index, are defined as follows:

        CREATE TABLE climate.measurement_12_013
        (
          -- Inherited from table climate.measurement_12_013: id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass),
          -- Inherited from table climate.measurement_12_013: station_id integer NOT NULL,
          -- Inherited from table climate.measurement_12_013: taken date NOT NULL,
          -- Inherited from table climate.measurement_12_013: amount numeric(8,2) NOT NULL,
          -- Inherited from table climate.measurement_12_013: category_id smallint NOT NULL,
          -- Inherited from table climate.measurement_12_013: flag character varying(1) NOT NULL DEFAULT ' '::character varying,
          CONSTRAINT measurement_12_013_category_id_check CHECK (category_id = 7),
          CONSTRAINT measurement_12_013_taken_check CHECK (date_part('month'::text, taken)::integer = 12)
        )
        INHERITS (climate.measurement);

        CREATE INDEX measurement_12_013_s_idx
          ON climate.measurement_12_013
          USING btree (station_id);

        CREATE INDEX measurement_12_013_y_idx
          ON climate.measurement_12_013
          USING btree (date_part('year'::text, taken));

    (Foreign key constraints to be added later.) The following query runs abysmally slow due to a full table scan:

        SELECT
          count(1) AS measurements,
          avg(m.amount) AS amount
        FROM
          climate.measurement m
        WHERE
          m.station_id IN (
            SELECT
              s.id
            FROM
              climate.station s,
              climate.city c
            WHERE
              -- For one city ...
              --
              c.id = 5182 AND
              -- Where stations are within an elevation range ...
              --
              s.elevation BETWEEN 0 AND 3000 AND
              6371.009 * SQRT(
                POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +
                (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *
                 POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))
              ) <= 50
          ) AND
          --
          -- Begin extracting the data from the database.
          --
          -- The data before 1900 is shaky; insufficient after 2009.
          --
          extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND
          -- Whittled down by category ...
          --
          m.category_id = 1 AND
          m.taken BETWEEN
            -- Start date.
            (extract( YEAR FROM m.taken )||'-01-01')::date AND
            -- End date. Calculated by checking to see if the end date wraps
            -- into the next year. If it does, then add 1 to the current year.
            --
            (cast(extract( YEAR FROM m.taken ) + greatest( -1 *
              sign(
                (extract( YEAR FROM m.taken )||'-12-31')::date -
                (extract( YEAR FROM m.taken )||'-01-01')::date
              ), 0
            ) AS text)||'-12-31')::date
        GROUP BY
          extract( YEAR FROM m.taken )

    The sluggishness comes from this part of the query:

        m.taken BETWEEN
          /* Start date. */
          (extract( YEAR FROM m.taken )||'-01-01')::date AND
          /* End date. Calculated by checking to see if the end date wraps
             into the next year. If it does, then add 1 to the current year. */
          (cast(extract( YEAR FROM m.taken ) + greatest( -1 *
            sign(
              (extract( YEAR FROM m.taken )||'-12-31')::date -
              (extract( YEAR FROM m.taken )||'-01-01')::date
            ), 0
          ) AS text)||'-12-31')::date

    The HashAggregate from the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. There is a full table scan on the measurement table (itself having neither data nor indexes) being performed. The table aggregates 237 million rows from its child tables.

    Question: What is the proper way to index the dates to avoid full table scans?

    Options I have considered:

      • GIN
      • GiST
      • Rewrite the WHERE clause
      • Separate year_taken, month_taken, and day_taken columns to the tables

    What are your thoughts? Thank you!
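
    A sketch of the "rewrite the WHERE clause" option from the list above (an illustration, not the poster's accepted solution): since every row trivially satisfies taken BETWEEN the first and last day of its own year, the self-referential BETWEEN can be replaced with constant bounds, which a plain btree index on taken (one per child table) and constraint exclusion can then serve:

        -- Hypothetical index; every child table would need its own copy.
        CREATE INDEX measurement_12_013_t_idx
          ON climate.measurement_12_013
          USING btree (taken);

        -- Constant date bounds are sargable; with constraint_exclusion
        -- enabled, the planner can also skip child tables whose CHECK
        -- constraints rule them out. The station subquery is omitted
        -- here for brevity.
        SELECT
          extract( YEAR FROM m.taken ) AS year,
          count(1) AS measurements,
          avg(m.amount) AS amount
        FROM
          climate.measurement m
        WHERE
          m.category_id = 1 AND
          m.taken BETWEEN date '1900-01-01' AND date '2009-12-31'
        GROUP BY
          extract( YEAR FROM m.taken );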

    Read the article

  • LINQ to SQL: ExecuteQuery not working when performing a parameterized query.

    - by ajbeaven
    I have a weird problem with ExecuteQuery in that it isn't working when performing a parameterized query. The following returns 1 record:

        db.ExecuteQuery<Member>(@"SELECT * FROM Member
            INNER JOIN aspnet_Users ON Member.user_id = aspnet_Users.UserId
            WHERE [aspnet_Users].[UserName] = 'Marina2'");

    However, the parameterized version returns no results:

        db.ExecuteQuery<Member>(@"SELECT * FROM Member
            INNER JOIN aspnet_Users ON Member.user_id = aspnet_Users.UserId
            WHERE [aspnet_Users].[UserName] = '{0}'", "Marina2");

    What am I doing wrong?
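
    A likely fix (not part of the original post): ExecuteQuery turns each {0} placeholder into a real SQL parameter (such as @p0), so the placeholder must not be wrapped in quotes. Quoted, it becomes a string literal containing the parameter marker rather than the parameter itself:

        // {0} becomes a SQL parameter (e.g. @p0); no quotes around it.
        var members = db.ExecuteQuery<Member>(
            @"SELECT * FROM Member
              INNER JOIN aspnet_Users ON Member.user_id = aspnet_Users.UserId
              WHERE [aspnet_Users].[UserName] = {0}",
            "Marina2");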

    Read the article

  • How to construct this query? (Ordering by COUNT() and joining with users table)

    - by Andrew
    users table: id - user - other columns
    scores table: id - user_id - score - other columns

    There is more than one row for each user, but there are only two possible scores (0 or 1, i.e. win or loss). I want to output all the users ordered by the number of wins, and all the users ordered by the number of losses. I know how to do this by looping through each user, but I was wondering how to do it with one query. Any help is appreciated!
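
    A sketch of the single-query approach (table and column names are guessed from the description, and score = 1 is assumed to mean a win):

        -- One pass over scores, one row per user; flip the ORDER BY to
        -- losses DESC for the second listing.
        SELECT u.id,
               u.user,
               SUM(CASE WHEN s.score = 1 THEN 1 ELSE 0 END) AS wins,
               SUM(CASE WHEN s.score = 0 THEN 1 ELSE 0 END) AS losses
        FROM users u
        LEFT JOIN scores s ON s.user_id = u.id
        GROUP BY u.id, u.user
        ORDER BY wins DESC;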

    Read the article

  • Avoiding SQL Injection in SQL query with Like Operator using parameters?

    - by MikeJ
    Taking over some code from my predecessor, I found a query that uses the LIKE operator:

        SELECT * FROM suppliers WHERE supplier_name LIKE '%' + name + '%';

    I am trying to avoid the SQL injection problem and parameterize this, but I am not quite sure how this would be accomplished. Any suggestions? Note: I need a solution for classic ADO.NET - I don't really have the go-ahead to switch this code over to something like LINQ.
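
    A minimal classic ADO.NET sketch, assuming SQL Server: the wildcard concatenation moves into the SQL text, and only the raw search term travels as a parameter, so user input can never change the query's structure:

        using System.Data;
        using System.Data.SqlClient;

        public static class SupplierSearch
        {
            public static DataTable FindSuppliers(string connectionString, string name)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT * FROM suppliers WHERE supplier_name LIKE '%' + @name + '%'",
                    conn))
                {
                    cmd.Parameters.AddWithValue("@name", name);
                    var table = new DataTable();
                    using (var adapter = new SqlDataAdapter(cmd))
                    {
                        adapter.Fill(table); // Fill opens and closes the connection.
                    }
                    return table;
                }
            }
        }

    Putting the wildcards in the value instead (LIKE @name with "%" + name + "%" as the parameter value) works equally well; either way, note that % and _ inside the user's input still act as wildcards unless escaped.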

    Read the article

  • Can I make these two LINQ queries into one query?

    - by Holli
    From a List of builtAgents I need all items with OptimPriority == 1 and only 5 items with OptimPriority == 0. I do this with two separate queries, but I wonder if I could do it with only one query.

        IEnumerable<Agent> priorityAgents = from pri in builtAgents
                                            where pri.OptimPriority == 1
                                            select pri;

        IEnumerable<Agent> otherAgents = (from oth in builtAgents
                                          where oth.OptimPriority == 0
                                          select oth).Take(5);
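
    One way to fold them into a single expression (a sketch; it assumes the combined order, priority-1 agents first, is acceptable):

        // Requires: using System.Linq;
        // Concatenates the full priority-1 set with the first five
        // priority-0 items in one deferred query.
        IEnumerable<Agent> agents = builtAgents
            .Where(a => a.OptimPriority == 1)
            .Concat(builtAgents
                .Where(a => a.OptimPriority == 0)
                .Take(5));

    Concat keeps deferred execution and enumerates builtAgents twice; since the source is already a List, that second pass is cheap.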

    Read the article

  • How to handle when SSRS does not automatically update fields based on database query?

    - by badpanda
    So I am trying to change the number of fields in my dataset in SSRS, and the refresh button is not picking up the added field from the SQL server. The query is definitely returning the correct data, as I have double-checked in the server engine itself. Also, I have tried manually adding the field using the SSRS menu, but as soon as I execute the query, the field disappears. Any suggestions or similar experiences?

    Read the article

  • MySQL: How can I fetch the SUM() of all fields in one query?

    - by takpar
    Hi, I just want something like this:

        select SUM(*) from `mytable` group by `year`

    Any suggestions? (I am using Zend Framework; if you have a suggestion using ZF rather than a pure query, that would be great!) Update: I have a mass of columns in the table and I do not want to write their names out one by one. Any ideas?
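
    MySQL has no SUM(*), so one workaround (a sketch, not ZF-specific) is to let information_schema generate the column list, then run the statement it prints:

        -- Builds "SELECT `year`, SUM(`col1`) AS `col1`, ... FROM `mytable`
        -- GROUP BY `year`". For very wide tables, raise group_concat_max_len
        -- first so the generated text is not truncated.
        SELECT CONCAT(
                 'SELECT `year`, ',
                 GROUP_CONCAT(CONCAT('SUM(`', column_name, '`) AS `', column_name, '`')),
                 ' FROM `mytable` GROUP BY `year`'
               ) AS generated_query
        FROM information_schema.columns
        WHERE table_schema = DATABASE()
          AND table_name   = 'mytable'
          AND column_name <> 'year';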

    Read the article

  • How can an improvement to the query cache be tracked?

    - by Bill Paetzke
    I am parameterizing my web app's ad hoc SQL. As a result, I expect the query plan cache to shrink and to have a higher hit ratio. Perhaps other important metrics will improve as well. Could I use perfmon to track this? If so, which counters should I use? If not perfmon, how could I report on the impact of this change?
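
    Perfmon does cover this: the SQLServer:Plan Cache object (Cache Hit Ratio, Cache Pages) and SQLServer:SQL Statistics (SQL Compilations/sec, Batch Requests/sec) are the usual counters. A complementary approach, sketched here, is to snapshot the cache itself before and after the change and compare:

        -- Cached plans by type; after parameterization, expect fewer
        -- "Adhoc" plans, smaller total size, and higher reuse (usecounts).
        SELECT objtype,
               COUNT(*) AS cached_plans,
               SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb,
               AVG(usecounts) AS avg_use_count
        FROM sys.dm_exec_cached_plans
        GROUP BY objtype
        ORDER BY size_mb DESC;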

    Read the article

  • PHP - get MySQL query results as their native data type?

    - by redidas
    I've tried fetching MySQL query results using mysql_fetch_row() and mysql_result(), and numeric values are being returned as strings. Is there any way to fetch the data with the datatype it has in the table? The application will be running many different queries, so I will be unable to cast the values to the intended datatype on a one-by-one basis.
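
    The old mysql_* functions always return strings. If switching drivers is an option, the mysqli extension on the mysqlnd driver can return native PHP ints and floats; a sketch (MYSQLI_OPT_INT_AND_FLOAT_NATIVE requires mysqlnd, and DECIMAL columns still come back as strings to preserve precision):

        <?php
        // The option must be set before the connection is opened,
        // so use mysqli_init() + real_connect() rather than the constructor.
        $mysqli = mysqli_init();
        $mysqli->options(MYSQLI_OPT_INT_AND_FLOAT_NATIVE, 1);
        $mysqli->real_connect('localhost', 'user', 'password', 'mydb');

        $result = $mysqli->query('SELECT id, price FROM products');
        $row = $result->fetch_assoc();

        var_dump($row['id']);    // int(...), not string(...)
        var_dump($row['price']); // float(...) for FLOAT/DOUBLE columns
        ?>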

    Read the article

  • Running query from scratch with something like exec function?

    - by Steel Plume
    Hi, is it possible to do something like the following with PostgreSQL without using a function? Pseudo SQL code:

        select * from sometable
        where somecol = somevalue
        AND someothercol IN exec( 'select something from exclusionlist' )

    My primary intention is to build up a table with predefined queries to call inside a WHERE clause. Pseudo SQL code:

        select * from sometable
        where somecol = somevalue
        AND someothercol IN exec( select query from predefinedqueries where id=someid )
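
    Plain SQL cannot execute a string, so the conventional route does involve a small helper function; a sketch for reference, assuming each predefined query returns a single integer column and PostgreSQL 8.4+ (for RETURN QUERY EXECUTE), with names taken from the pseudo code above:

        -- Looks up the stored query text and executes it dynamically.
        CREATE OR REPLACE FUNCTION run_predefined_query(p_id integer)
        RETURNS SETOF integer AS $$
        DECLARE
            q text;
        BEGIN
            SELECT query INTO q FROM predefinedqueries WHERE id = p_id;
            RETURN QUERY EXECUTE q;
        END;
        $$ LANGUAGE plpgsql;

        -- Then the WHERE clause from the pseudo code becomes:
        SELECT *
        FROM sometable
        WHERE somecol = 'somevalue'
          AND someothercol IN (SELECT run_predefined_query(42));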

    Read the article

  • Understanding and Controlling Parallel Query Processing in SQL Server

    Data warehousing and general reporting applications tend to be CPU intensive because they need to read and process a large number of rows. To facilitate quick data processing for queries that touch a large amount of data, Microsoft SQL Server exploits the power of multiple logical processors to provide parallel query processing operations such as parallel scans. Through extensive testing, we have learned that, for most large queries that are executed in a parallel fashion, SQL Server can deliver linear or nearly linear response time speedup as the number of logical processors increases. However, some queries in high parallelism scenarios perform suboptimally. There are also some parallelism issues that can occur in a multi-user parallel query workload. This white paper describes parallel performance problems you might encounter when you run such queries and workloads, and it explains why these issues occur. In addition, it presents how data warehouse developers can detect these issues, and how they can work around them or mitigate them.

    Read the article

  • MySQL 5.5 - Lost connection to MySQL server during query

    - by bully
    I have an Ubuntu 12.04 LTS server running at a German hoster (virtualized system).

        # uname -a
        Linux ... 3.2.0-27-generic #43-Ubuntu SMP Fri Jul 6 14:25:57 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    I want to migrate a web CMS system called Contao. It's not my first migration, but it is my first migration with MySQL connection issues. The migration itself went successfully; I have the same Contao version running (it's more or less just copy/paste). For the database behind it, I did:

        apt-get install mysql-server phpmyadmin

    I set a root password and added a user for the CMS which has enough rights on its own database (and only its database) for doing the stuff it has to do. The data import via phpmyadmin worked just fine. I can access the backend of the CMS (which needs to deal with the database already). If I try to access the frontend now, I get the following error:

        Fatal error: Uncaught exception Exception with message Query error:
        Lost connection to MySQL server during query
        (<query statement here, nothing special, just a select>)
        thrown in /var/www/system/libraries/Database.php on line 686

    (Keep in mind: I can access MySQL with phpmyadmin and through the backend, working like a charm; it's just the frontend call causing errors.) If I spam F5 in my browser, I can sometimes even kill the mysql daemon. If I run

        # mysqld --log-warnings=2

    I get this:

        ...
        120921  7:57:31 [Note] mysqld: ready for connections.
        Version: '5.5.24-0ubuntu0.12.04.1'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)
        05:57:37 UTC - mysqld got signal 4 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help
        diagnose the problem, but since we have already crashed, something is
        definitely wrong and this may fail.

        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=1
        max_threads=151
        thread_count=1
        connection_count=1
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346679 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        Thread pointer: 0x7f1485db3b20
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 7f1480041e60 thread_stack 0x30000
        mysqld(my_print_stacktrace+0x29)[0x7f1483b96459]
        mysqld(handle_fatal_signal+0x483)[0x7f1483a5c1d3]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f1482797cb0]
        /lib/x86_64-linux-gnu/libm.so.6(+0x42e11)[0x7f14821cae11]
        mysqld(_ZN10SQL_SELECT17test_quick_selectEP3THD6BitmapILj64EEyyb+0x1368)[0x7f1483b26cb8]
        mysqld(+0x33116a)[0x7f148397916a]
        mysqld(_ZN4JOIN8optimizeEv+0x558)[0x7f148397d3e8]
        mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0xdd)[0x7f148397fd7d]
        mysqld(_Z13handle_selectP3THDP3LEXP13select_resultm+0x17c)[0x7f1483985d2c]
        mysqld(+0x2f4524)[0x7f148393c524]
        mysqld(_Z21mysql_execute_commandP3THD+0x293e)[0x7f14839451de]
        mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f1483948bef]
        mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1365)[0x7f148394a025]
        mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f14839ec7cd]
        mysqld(handle_one_connection+0x50)[0x7f14839ec830]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f148278fe9a]
        /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f1481eba4bd]

        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort.
        Query (7f1464004b60): is an invalid pointer
        Connection ID (thread ID): 1
        Status: NOT_KILLED

    From /var/log/syslog:

        Sep 21 07:17:01 s16477249 CRON[23855]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Sep 21 07:18:51 s16477249 kernel: [231923.349159] type=1400 audit(1348204731.333:70): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=23946 comm="apparmor_parser"
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23990]: Upgrading MySQL tables if necessary.
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: /usr/bin/mysql_upgrade: the '--basedir' option is always ignored
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: Looking for 'mysql' as: /usr/bin/mysql
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: This installation of MySQL is already upgraded to 5.5.24, use --force if you still need to run mysql_upgrade
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[24004]: Checking for insecure root accounts.
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[24009]: Triggering myisam-recover for all MyISAM tables

    I'm using MyISAM tables all over, nothing with InnoDB. Starting and stopping MySQL is done via:

        sudo service mysql start
        sudo service mysql stop

    After some googling, I experimented a little with timeouts and the correct socket path in the /etc/mysql/my.cnf file, but nothing helped. There are some old (from 2008) Gentoo bugs where recompiling just solved the problem. I already reinstalled MySQL via:

        sudo apt-get remove mysql-server mysql-common
        sudo apt-get autoremove
        sudo apt-get install mysql-server

    without any results. This is the first time I'm running into this problem, and I'm not very experienced with this kind of MySQL 'administration'. So mainly, I want to know if any of you could help me out please :) Is it a MySQL bug? Is something broken in the Ubuntu repositories? Is this one of those mysterious 'use-tcp-connection-instead-of-socket-stuff-because-there-are-problems-on-virtualized-machines-with-sockets' problems? Or am I completely on the wrong track and have just misconfigured something? Remember, phpmyadmin and access to the backend (which uses the database, too) work just fine. Maybe something with Apache? What can I do? Any help is appreciated, so thanks in advance :)

    Read the article

  • Collaborate10 – THEconference

    - by jean-pierre.dijcks
    After spending a few days in Mandalay Bay's THEHotel, I guess I now call everything THE... Seriously, they even tag their toilet paper with THEtp... I guess the brand builders in Vegas thought that once you are on to something you keep on doing it, and granted, it is a nice hotel with nice rooms.

    THEanalytics

    Most of my collab10 experience was in a room called Reef C, where the BIWA bootcamp was held. Two solid days of BI, Warehousing and Analytics organized by the BIWA SIG at IOUG. Didn't get to see all sessions, but what struck me was the high interest in Analytics. Marty Gubar's OLAP session was full, and he did some very nice things with the OLAP option. The cool bit was that he actually gets all the advanced calculations in OLAP to show up in OBI EE without any effort. It was nice to see that the idea from OWB, where you generate an RPD, is now also in AWM. I think it makes life so much simpler to generate these RPDs from your data model. Even if the end RPD needs some tweaking, it is all a lot less effort to get something going. You can see this stuff for yourself in this demo (click here). OBI EE uses just SQL to get to the calculations, and so, if you prefer APEX, you can build your application there and get the same nice calculations in an APEX application. Marty also showed the Simba MDX driver used with Excel. I guess we should call that THEcoolone... and it is very slick and wonderfully useful for all of you who actually know Excel. The nice thing is that you leverage pure Excel for all operations (no plug-ins). That means no new tools to learn, no new controls, all just pure Excel.

    THEdatabasemachine

    Got some very good questions in my "what makes Exadata fast" session, and overall, the interest in Exadata is overwhelming. One of the things that I did try to do in my session is to get people to think in new patterns rather than in patterns based on Oracle 9i running on some random hardware configuration. We talked a little bit about the common over-indexing and how everyone has to unlearn all of that on Exadata. The main thing, however, is that everyone needs to get used to the sheer size of some of the components in a Database Machine V2. 5TB of flash cache is a lot of very fast data storage, and half a TB of memory gets quite interesting as well. So what I did there was really focus on some of the content in these earlier posts on Upward ILM and In-Memory processing. In short, I do believe these newer media point out a trend. In-memory and other fast media will get cheaper and will see more use. Some of that we do automatically by adding new functionality, but in some cases I think the end user of the system needs to start thinking about how to leverage all this new hardware. I think most people got very excited about these new capabilities and opportunities.

    THEcoolkids

    One of the cool things about the BIWA track was the hands-on track. Very cool to see big crowds for both the OLAP and OWB hands-on sessions. Also quite nice to see that the folks at RittmanMead spent so much time preparing for that session. While all of them put down cool stuff, none was cooler than seeing Data Mining on an Apple iPad... it all just looks great on an iPad! Very disappointing to see that Mark Rittman still wasn't showing OWB on his iPad ;-)

    THEend

    All in all this was a great set of sessions in the BIWA track. Lots of value to our guests (we hope) and we hope they all come again next year!

    Read the article
