Search Results

Search found 4685 results on 188 pages for 'queries'.

Page 25 of 188

  • Is it necessary to write ROLLBACK if queries fail?

    - by Donator
    I call mysql_query("SET AUTOCOMMIT=0"); and mysql_query("START TRANSACTION"); before running my queries. Then I check whether all of them succeeded, and only if they did, I call mysql_query("COMMIT");. If one of the queries fails, I simply skip the COMMIT. So do I really need a ROLLBACK if one of the queries fails? It also works without ROLLBACK. Thanks.
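    Worth knowing: with InnoDB, an uncommitted transaction is rolled back implicitly when the connection closes (which is why skipping COMMIT appears to work), but until then it keeps holding its row locks; an explicit ROLLBACK releases them immediately. A minimal sketch of the pattern in plain SQL, using a hypothetical accounts table (MyISAM tables would ignore the transaction entirely):

        START TRANSACTION;

        UPDATE accounts SET balance = balance - 100 WHERE id = 1;
        UPDATE accounts SET balance = balance + 100 WHERE id = 2;

        -- Only if every statement succeeded:
        COMMIT;

        -- On any failure, instead of COMMIT:
        ROLLBACK;  -- undoes the partial work and releases row locks right away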

    Read the article

  • Append results from two queries and output them as a single table

    - by tHeSiD
    I have two queries that I have to run. I cannot join them, but their result sets have the same structure. For example: select * from products where producttype=magazine and select * from products where producttype = book. I have to combine the results of these two queries and then output them as one single result, inside a stored procedure. P.S. These are just examples I provided; I have a complex table structure. The main thing is that I cannot join them.
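    Since both result sets share the same structure, UNION ALL stacks them into one result; unlike plain UNION it also skips the duplicate-eliminating sort. A minimal sketch built from the example queries above, usable as-is inside a stored procedure:

        SELECT * FROM products WHERE producttype = 'magazine'
        UNION ALL
        SELECT * FROM products WHERE producttype = 'book';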

    Read the article

  • SQLAuthority News – We're sorry but your computer or network may be sending automated queries. To pro…

    I use multiple browsers when I am working on multiple projects simultaneously. Often I use Google Reader to read a few feeds. Recently I faced the following error, and the error would not go away. I even restarted my computer and rebooted my network. I am confident that my computer does not have viruses or [...]

    Read the article

  • Using ADO.NET Entities LINQ Provider to model complex SQL Queries?

    - by Ivan Zlatanov
    What I find really powerful in ADO.NET Entities or LINQ to SQL is the ability to model complex queries. I really don't need the mappings that Entities or LINQ to SQL do for me; I just need the ability to model complex expressions that can be translated into T-SQL. My question is: am I abusing the framework? Can I use the Entity Framework for modeling queries and just that? Should I? I know I could write my own custom LINQ to SQL provider, but that is just not possible in the time spans I have. What is the best approach to modeling complex T-SQL queries? How do you handle conditional GROUP BYs, ORDER BYs, JOINs, UNIONs, etc. in the OOP world? Using StringBuilders for this kind of job feels too ugly and is harder to maintain, given the possibilities we have with expression trees. When I use a StringBuilder to model a complex SQL query I feel kind of guilty, the same way I feel when I have to hard-code into my code any number other than 0 or 1. That feeling makes you ask yourself whether there is a better and cleaner way of doing it... I must mention that I am using C# 4.0, but I am not specifically looking for an answer in this language, rather in the domain of the CLR 4.

    Read the article

  • Why are .local and .arpa DNS queries showing up outside my network on OpenDNS?

    - by Baodad
    We have a Windows office network with a local Active Directory/DNS domain server. The server is set up with OpenDNS's servers as forwarders (see screenshot). However, when I look at my OpenDNS query statistics, I notice that the third most popular query is *.in-addr.arpa, and the twelfth is *.local, which is from our local domain. Should I prevent these local queries from going beyond my local network, and if so, how?

    Read the article

  • Is there a system monitoring tool that lets me write complex queries against the data?

    - by benhsu
    I am looking for a system stat collection tool that will let me write queries against the collected data. I am planning to answer questions like:

    - What was the average load on this machine between 9 AM and 5 PM over the last 30 days, as opposed to at night?
    - What was the average disk I/O on these 10 machines yesterday?
    - What was the average daytime memory usage on these 10 machines last week, as opposed to two weeks ago?

    Has anyone done this with, say, collectd or graphite?

    Read the article

  • SPARC T4-4 Delivers World Record Performance on Oracle OLAP Perf Version 2 Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered world record performance with subsecond response time on the Oracle OLAP Perf Version 2 benchmark, using Oracle Database 11g Release 2 running on Oracle Solaris 11. The SPARC T4-4 server achieved a throughput of 430,000 cube-queries/hour with an average response time of 0.85 seconds and a median response time of 0.43 seconds. This was achieved using only 60% of the available CPU resources, leaving plenty of headroom for future growth. The server operated on an Oracle OLAP cube with a 4 billion row fact table of sales data containing 4 dimensions. This represents as many as 90 quintillion aggregate rows (90 followed by 18 zeros).

    Performance Landscape

    Oracle OLAP Perf Version 2 benchmark, 4 billion fact table rows:

        System       Queries/hour   Users*   Avg Response (sec)   Median Response (sec)
        SPARC T4-4   430,000        7,300    0.85                 0.43

        * Users: the number of users supported at a given think time of 60 seconds.

    Configuration Summary

    Hardware configuration:
    - SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 1 TB memory
    - Data storage: 1 x Sun Fire X4275 (using COMSTAR) and 2 x Sun Storage F5100 Flash Array (each with 80 FMODs)
    - Redo storage: 1 x Sun Fire X4275 (using COMSTAR with 8 HDD)

    Software configuration:
    - Oracle Solaris 11 11/11
    - Oracle Database 11g Release 2 (11.2.0.3) with the Oracle OLAP option

    Benchmark Description

    The Oracle OLAP Perf Version 2 benchmark is a workload designed to demonstrate and stress the Oracle OLAP product's core features of fast query, fast update, and rich calculations on a multi-dimensional model in support of enhanced data warehousing. The bulk of the benchmark entails running a number of concurrent users, each issuing typical multidimensional queries against an Oracle OLAP cube consisting of a number of years of sales data with fully pre-computed aggregations. The cube has four dimensions: time, product, customer, and channel. Each query user issues approximately 150 different queries. One query chain may ask for total sales in a particular region (e.g. South America) for a particular time period (e.g. Q4 of 2010), followed by additional queries that drill down into sales for individual countries (e.g. Chile, Peru, etc.), with further queries drilling down into individual stores, etc. Another query chain may ask for yearly comparisons of total sales for some product category (e.g. major household appliances) and then issue further queries drilling down into particular products (e.g. refrigerators, stoves, etc.), particular regions, particular customers, etc. Results from version 2 of the benchmark are not comparable with version 1; the primary difference is the type of queries along with the query mix.

    Key Points and Best Practices

    Since typical BI users are likely to issue similar queries with different constants in the WHERE clauses, setting the init.ora parameter "cursor_sharing" to "force" provides additional query throughput and supports a larger number of potential users. Apart from this setting, together with making full use of the available memory, out-of-the-box performance for the OLAP Perf workload should produce results similar to those reported here.

    For a given number of query users with zero think time, the main measured metrics are the average query response time, the median query response time, and the query throughput. A derived metric is the maximum number of users the system can support while achieving the measured response time, assuming some non-zero think time. The calculation of the maximum number of users follows from the well-known response-time law

        N = (rt + tt) * tp

    where rt is the average response time, tt is the think time, and tp is the measured throughput. Setting tt to 60 seconds, rt to 0.85 seconds, and tp to 119.44 queries/sec (430,000 queries/hour) gives N = (0.85 + 60) * 119.44 ≈ 7,268, i.e. the SPARC T4-4 server will support roughly 7,300 concurrent users with a think time of 60 seconds and an average response time of 0.85 seconds. For more information see chapter 3 of the book "Quantitative System Performance", cited below.

    See Also
    - "Quantitative System Performance: Computer System Analysis Using Queueing Network Models" by Edward D. Lazowska, John Zahorjan, G. Scott Graham, and Kenneth C. Sevcik
    - SPARC T4-4 Server (oracle.com, OTN)
    - Oracle Solaris (oracle.com, OTN)
    - Oracle Database 11g Release 2 (oracle.com, OTN)
    - Oracle Database 11g – Oracle OLAP (oracle.com, OTN)

    Disclosure Statement

    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 11/2/2012.

    Read the article

  • Can the same CriteriaBuilder (JPA 2) instance be used to create multiple queries?

    - by pkainulainen
    This seems like a pretty simple question, but I have not managed to find a definitive answer yet. I have a DAO class which naturally queries the database using criteria queries. I would like to know whether it is safe to use the same CriteriaBuilder instance for the creation of different queries, or whether I have to obtain a new CriteriaBuilder instance for each query. The following code example illustrates what I would like to do:

        public class DAO {
            private CriteriaBuilder cb = null;

            public DAO() {
                cb = getEntityManager().getCriteriaBuilder();
            }

            public List<String> getNames() {
                CriteriaQuery<String> nameSearch = cb.createQuery(String.class);
                ...
            }

            public List<Address> getAddresses(String name) {
                CriteriaQuery<Address> addressSearch = cb.createQuery(Address.class);
                ...
            }
        }

    Is it OK to do this?

    Read the article

  • How to run a loop of queries in Access?

    - by tksy
    Hi. I have a database with a table which is full of conditions and error messages for checking another database. I want to run a loop such that each of these conditions is checked against all the tables in the second database, generating a report which lists the errors. Is this possible in MS Access? For example, the querycrit table:

        id   query                    error
        1    speed<25 and speed>56    speed above limit
        2    dist<56 or dist>78       dist within limit

    I have more than 400 queries like this on different variables. The table against which I am running the queries is the records table:

        id   speed   dist   accce   decele   aaa   bbb   ccc
        1    33      34     44      33       33    33    33
        2    45      44     55      55       55    22    23

    regards
    ttk
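    One way to sketch this: loop over the rows of querycrit (for example with a recordset in Access VBA) and build one SQL statement per row, substituting the condition text verbatim from the query column. Each generated statement would take this shape (shown for row 1; generic SQL, not Access-specific syntax):

        SELECT id, 'speed above limit' AS error
        FROM records
        WHERE speed<25 AND speed>56;

    Appending each statement's results to a report table (INSERT INTO ... SELECT) would collect all 400+ checks into a single error report.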

    Read the article

  • Possible to see what actual SQL queries Rails invokes when using console script?

    - by randombits
    Sometimes I like to pop open the console script that comes with Rails to test small excerpts of code. That code normally involves somewhat involved ActiveRecord queries. Although not an expert in ActiveRecord, I'm proficient with SQL and want to see what it translates to underneath the hood, for efficiency purposes. This will help me refactor or rethink how I'm writing my app if it looks inefficient. When the query runs in the actual application, it all shows up in the logs; ad-hoc ActiveRecord queries in the console do not, though. Any way to change that behavior?

    Read the article

  • How to run a set of SQL queries from a file, in PHP?

    - by Harish Kurup
    I have a set of SQL queries in a file (query.sql), and I want to run those queries from PHP. The code I wrote is not working:

        // database config...
        $file_name = "query.sql";
        $query = file($file_name);    // original had $query==file(...), a comparison, not an assignment
        $array_length = count($query);
        $data = '';
        for ($i = 0; $i < $array_length; $i++) {
            $data .= $query[$i];
        }
        echo $data;
        mysql_query($data);           // note: mysql_query() executes only ONE statement per call

    It echoes the SQL queries from the file but throws an error at the mysql_query() call.

    Read the article

  • How can I kill MySQL queries every 60 seconds in Windows?

    - by Ethan Allen
    I want to check my MySQL server every minute and kill queries that have run longer than 150 seconds. The main reason I want to do this is that I don't want queries from certain people to lock up the DB for everyone else. I know this is not the ultimate solution to the problem, but at least it's a fallback in case something goes wrong with a query. I don't have a slave DB (this is just an at-home project). I'd like to schedule a script that does this for me. I'm unfamiliar with Perl and Ruby, and I need it done on my Windows 2008 Server box. I've looked into creating a simple cmd-line script, but that doesn't seem to be possible. I know I can currently do something like this, but I have to do it manually:

        mysqladmin processlist
        mysqladmin kill

    Anyone have any ideas or examples of how I could do this?
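    One scriptable sketch, assuming MySQL 5.1 or later (where the information_schema.PROCESSLIST table exists): let SQL generate the KILL statements, then feed them back to the server. A Windows Task Scheduler job that runs this through the mysql command-line client every 60 seconds, piping the output into a second mysql invocation, gives the desired behavior:

        -- Emit one KILL QUERY statement per query running longer than 150 seconds.
        -- KILL QUERY aborts the statement but leaves the connection alive.
        SELECT CONCAT('KILL QUERY ', id, ';')
        FROM information_schema.PROCESSLIST
        WHERE command = 'Query'
          AND time > 150;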

    Read the article

  • Is there a way to force Report Builder to use "WITH (NOLOCK)" in the queries it generates?

    - by Joe Pineda
    Hi. At work, users are very happy to generate their own reports using Reporting Services' Report Builder. But, alas, the queries it generates are very inefficient, and they don't use "WITH (NOLOCK)", slowing things down for everyone. These are reports that really do need to be run against the latest data; they can't be offloaded to the reporting server. And since they query very specific, detailed data, hypercubes are of no use here. So the question is: is there a way to configure Report Builder's data models so the queries it generates always use "WITH (NOLOCK)" when querying a table?
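    As far as I know there is no Report Builder switch for injecting table hints, but two workarounds give equivalent read-uncommitted behavior (a sketch; the view and column names below are hypothetical):

        -- Session-level: equivalent to WITH (NOLOCK) on every table read.
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

        -- Model-level: point the report model at views that bake the hint in.
        CREATE VIEW dbo.Orders_NoLock AS
        SELECT OrderID, CustomerID, OrderDate
        FROM dbo.Orders WITH (NOLOCK);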

    Read the article

  • "Remember" last three MySql queries; Cookie, passed variable or other method?

    - by Camran
    I have a classifieds website with pretty sophisticated searching, and I am about to implement a feature where the last three queries are displayed to the user, so that the user can go back through them more easily. This is because each query requires a lot of input from the user. I have four questions for you:

    - How can I save the actual query (SELECT * FROM etc etc)...?
    - Do I need to add some form of encryption to be on the safe side?
    - How will this affect performance? (I don't like the fact that cookies slow websites down.)
    - Anything else to think about?

    If you need more input, let me know... Btw, the website is PHP based. Thanks
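    One common approach, sketched below: never round-trip raw SQL through a cookie, since the client can edit a cookie, which turns stored queries into an SQL injection vector. Instead, persist the user's search parameters server-side and rebuild the SELECT from them on demand. Table and column names here are hypothetical:

        -- Holds the most recent searches per user; trim to three rows per user.
        CREATE TABLE recent_searches (
            user_id    INT          NOT NULL,
            params     VARCHAR(255) NOT NULL,  -- serialized form input, never raw SQL
            created_at DATETIME     NOT NULL,
            KEY idx_user_created (user_id, created_at)
        );

    This also answers the encryption question: the query itself never leaves the server, so there is nothing sensitive to protect in the cookie beyond the usual session identifier.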

    Read the article

  • Why do these seemingly similar queries have such drastically different run times?

    - by Jherico
    I'm working with an Oracle DB, trying to tune some queries, and I'm having trouble understanding why writing a particular clause in a particular way has such a drastic impact on query performance. Here is a performant version of the query I'm running:

        select *
        from (
            select a.*, rownum rn
            from (
                select * from table_foo
            ) a
            where rownum < 3
        )
        where rn >= 2

    The same query with the last two filter lines replaced by

        ) a
        where rownum >= 2
          and rownum < 3
        )

    performs horribly, several orders of magnitude worse.

        ) a
        where rownum between 2 and 3
        )

    also performs horribly. I don't understand the magic in the first query or how to apply it to further similar queries.
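    For what it's worth, the underlying behavior: Oracle assigns ROWNUM to a row only as it passes the filter, so a predicate like rownum >= 2 can never be true; the first surviving row would be numbered 1, fail the test, and the next candidate is numbered 1 again. It also defeats the COUNT STOPKEY optimization that makes the rownum < 3 form fast. The nested version works because rn is materialized before the outer filter runs. An equivalent sketch using the ROW_NUMBER() analytic function (the ordering column foo_id is hypothetical; ROWNUM without an ORDER BY gives no defined row order):

        select *
        from (
            select a.*, row_number() over (order by foo_id) rn
            from table_foo a
        )
        where rn >= 2
          and rn < 3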

    Read the article

  • What kind of server do I need to handle 10 million requests and MySQL queries a day?

    - by Calvin
    I'm a newbie at server administration and I'm looking for a powerful hosting service to host my new website. This website is basically the back end of a mobile online game, and it will:

    - handle up to 10 million HTTPS requests and MySQL queries a day
    - store up to 2000 GB of files on the hard disk
    - transfer probably 5000 GB of data in and out per month
    - run on PHP and MySQL
    - have 10 million records in the MySQL database, each record with 5-10 fields of around 100 bytes each

    I really don't know what kind of server I need to handle these requirements. My questions are:

    - What CPU/RAM do I need for a dedicated server or VPS?
    - Which hosting companies are able to offer this kind of dedicated server or VPS?
    - What about cloud computing? I've researched Amazon EC2, but it seems complicated to me. I've also contacted Rackspace, but strangely they said Cloud Sites is not suitable for my requirements. I wonder if there are other cloud hosting companies.
    - Any other alternative methods?

    Thanks very much!
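    For a rough sense of scale, a back-of-the-envelope figure (assuming requests were spread evenly across the day, which they never are):

        10,000,000 requests/day ÷ 86,400 seconds/day ≈ 116 requests/second average

    Real traffic is bursty; a common rule of thumb is to provision for several times the average, so something on the order of 500-1,000 requests/second at peak.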

    Read the article

  • How to proxy SQL queries (INSERT, UPDATE, etc.)?

    - by XakRu
    I have installed a MySQL cluster (Galera with MariaDB). Apache is installed as the application server, and on the Apache server HAProxy is installed, which proxies requests from PHP; in this case it is set up for a Zabbix server cluster. But I am facing deadlocks, so now I want to proxy write queries (INSERT, UPDATE, etc.) to the second server, and SELECT queries to the second and third servers. I would be happy to see your suggestions. Please do not write "use mysql-proxy"; I want to see what other programs can proxy SQL requests. Scheme: http://www.gliffy.com/pubdoc/4474830/L.png

    Read the article

  • How do I do MongoDB console-style queries in PHP?

    - by Zoe Boles
    I'm trying to get a MongoDB query from the JavaScript console into my PHP app. What I'm trying to avoid is having to translate the query into the PHP native driver's format; I don't want to hand-build arrays and hand-chain functions any more than I would want to manually build an array of MySQL's internal query structure just to get data. I already have a string producing the exact content I want in the Mongo console:

        db.intake.find({"processed": {"$exists": "false"}}).sort({"insert_date": "1"}).limit(10);

    The question is: is there a way for me to hand this string, as is, to MongoDB and have it return a cursor with the dataset I requested? Right now I'm at the "write your own parser, because it's not valid JSON, to sort of turn a subset of valid Mongo queries into the format the PHP native driver wants" stage, which isn't very fun. I don't want an ORM or a massive wrapper library; I just want to give a function my query string as it exists in the console and get an Iterator back that I can work with. I know there are a couple of PHP-based Mongo manager applications that apparently take console-style queries and handle them, but from initial browsing through their code, I'm not sure how they handle the translation. I absolutely love working with Mongo in the console, but I'm rapidly starting to loathe the thought of converting every query into the format the native driver wants...

    Read the article

  • How to write these two queries for a simple data warehouse, using ANSI SQL?

    - by morpheous
    I am writing a simple data warehouse that will allow me to query the table to observe periodic (say weekly) changes in data, as well as changes in the change of the data (e.g. the week-to-week change in the weekly sale amount). For the purposes of simplicity, I will present very simplified (almost trivialized) versions of the tables I am using here. The sales data table is a view and has the following structure:

        CREATE TABLE sales_data (
            sales_time date   NOT NULL,
            sales_amt  double NOT NULL
        );

    For the purpose of this question I have left out other fields you would expect to see, like product_id, sales_person_id, etc., as they have no direct relevance to this question. AFAICT, the only fields that will be used in the query are sales_time and sales_amt (unless I am mistaken). I also have a date dimension table with the following structure, which partitions dates into reporting ranges:

        CREATE TABLE date_dimension (
            id         integer NOT NULL,
            datestamp  date    NOT NULL,
            day_part   integer NOT NULL,
            week_part  integer NOT NULL,
            month_part integer NOT NULL,
            qtr_part   integer NOT NULL,
            year_part  integer NOT NULL
        );

    I need to write queries that will allow me to do the following:

    1. Return the week-on-week change in sales_amt for a specified period. For example, the change between sales today and sales N days ago, where N is a positive integer (N == 7 in this case).
    2. Return the change in the change of sales_amt for a specified period. In (1) we calculated the week-on-week change; now we want to know how that change differs from the (week-on-week) change calculated last week.

    I am stuck at this point, as SQL is my weakest skill. I would be grateful if an SQL master could explain how I can write these queries in a DB-agnostic way (i.e. using ANSI SQL).
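    A minimal sketch for (1) in ANSI SQL, comparing the most recent seven days against the seven days before them (this assumes daily rows in sales_data; INTERVAL arithmetic is standard SQL, though support varies by engine):

        SELECT cur.total_amt - prev.total_amt AS week_on_week_change
        FROM (SELECT SUM(sales_amt) AS total_amt
              FROM sales_data
              WHERE sales_time >  CURRENT_DATE - INTERVAL '7' DAY) AS cur
        CROSS JOIN
             (SELECT SUM(sales_amt) AS total_amt
              FROM sales_data
              WHERE sales_time >  CURRENT_DATE - INTERVAL '14' DAY
                AND sales_time <= CURRENT_DATE - INTERVAL '7' DAY) AS prev;

    For (2), running the same query shifted back seven more days gives last week's week-on-week change; subtracting the two results yields the change in the change.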

    Read the article
