Search Results

Search found 4815 results on 193 pages for 'parameterized queries'.


  • Nginx, Apache, MySQL, Memcache on a server with 4G RAM. How to optimize to have enough memory?

    - by TomSawyer
    I have a dedicated server with Nginx proxying for Apache, plus Memcache and MySQL, and 4G of RAM. Lately traffic to my site hasn't increased, but the server always becomes overloaded at certain times of day (9AM - 3PM). RAM in use climbs second by second until it is full, and at that moment the server is overloaded. I have to kill the Apache and MySQL services and reboot to get free memory, and then it fills up again - a terrible circle. Here is RAM in use at the moment: 160M (nginx), 220M (apache), 512M (memcache), 924M (mysql). Process counts: 4 (nginx), 14 (apache), 5 (memcache), 20 (mysql). And here is my my.cnf config - can someone help me optimize it?

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        skip-locking
        skip-networking
        skip-name-resolve
        # enable log-slow-queries
        log-slow-queries = /var/log/mysql-slow-queries.log
        long_query_time=3
        max_connections=200
        wait_timeout=64
        connect_timeout = 10
        interactive_timeout = 25
        thread_stack = 512K
        max_allowed_packet=16M
        table_cache=1500
        read_buffer_size=4M
        join_buffer_size=4M
        sort_buffer_size=4M
        read_rnd_buffer_size = 4M
        max_heap_table_size=256M
        tmp_table_size=256M
        thread_cache=256
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=16M
        thread_concurrency=8
        myisam_sort_buffer_size=128M
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0

        [mysqldump]
        quick
        max_allowed_packet=16M

        [mysql]
        no-auto-rehash

        [isamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [myisamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [mysqlhotcopy]
        interactive-timeout

        [mysql.server]
        user=mysql
        basedir=/var/lib

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
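
    For context, one check a config like this invites is the worst-case memory footprint: global buffers plus the per-thread buffers multiplied by max_connections. A rough back-of-the-envelope sketch, not an exact accounting of how MySQL allocates:

        -- per-thread: read, join, sort and read_rnd buffers at 4M each,
        -- plus a 512K thread stack, across max_connections=200
        SELECT (4 + 4 + 4 + 4 + 0.5) * 200 AS per_thread_worst_mb;  -- ~3300 MB

    Add the query cache, table cache and in-memory temp tables (up to 256M each), plus the 512M pinned by memcache and Apache's workers, and a 4G box can run out of memory under load even though no single service looks oversized on its own.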

  • Fix N+1 query in "declarative_authorization" gem using gem "bullet"

    - by makaroni4
    Currently I am working on one big web application, and to make it work faster I decided to refactor all N+1 queries to decrease the number of requests to the database (http://rails-bestpractices.com/posts/29-fix-n-1-queries). So I installed the "bullet" gem, which doesn't work with Rails 3.1.1 out of the box (you can use the fork from https://github.com/flyerhzm/bullet). With the declarative_authorization gem I get the same alerts on every page:

        N+1 Query detected
          Role => [:permissions]
          Add to your finder: :include => [:permissions]
        N+1 Query detected
          Permission => [:permission_rules]
          Add to your finder: :include => [:permission_rules]

        CACHE (0.0ms) SELECT "roles".* FROM "roles"
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 1
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 2
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 3
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 4
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 6
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 7
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 8
        CACHE (0.0ms) SELECT "permission_rules".* FROM "permission_rules" INNER JOIN "permission_rules_permissions" ON "permission_rules"."id" = "permission_rules_permissions"."permission_rule_id" WHERE "permission_rules_permissions"."permission_id" = 30
        CACHE (0.0ms) SELECT "permission_rules".* FROM "permission_rules" INNER JOIN "permission_rules_permissions" ON "permission_rules"."id" = "permission_rules_permissions"."permission_rule_id" WHERE "permission_rules_permissions"."permission_id" = 31
        ...

    Could you please help me with that and make these queries faster?
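
    For reference, eager loading as bullet suggests changes the shape of the SQL rather than its number of rows: instead of one permissions query per role, Rails batches the lookup. Roughly (the exact SQL Rails 3.1 generates may differ):

        -- with :include => [:permissions], the eight per-role queries
        -- collapse into a single batched lookup:
        SELECT "roles".* FROM "roles";
        SELECT "permissions".* FROM "permissions"
         WHERE "permissions"."role_id" IN (1, 2, 3, 4, 6, 7, 8);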

  • How would you organize this in ASP.NET MVC?

    - by chobo
    I have an ASP.NET MVC 2.0 application that contains Areas/Modules like calendar, admin, etc. There may be cases where more than one area needs to access the same repository, so I am not sure where to put the Data Access Layers and Repositories.

    First option: should I create Data Access Layer files (LINQ to SQL in my case) with their accompanying Repositories for each area, so each area only contains the tables and Repositories needed by that area? The benefit is that everything needed to run that module is in one place, so it is more encapsulated (in my mind anyway). The downside is that I may have duplicate queries, because other modules may use the same query.

    Second option: or would it be better to place the DAL and Repositories outside the Areas and treat them as global? The advantage is I won't have any duplicate queries, but I may be loading a lot of unnecessary queries and DAL tables for certain modules. It is also more work to reuse or modify these modules for future projects (though the chance of reusing them is slim to none :)).

    Which option makes more sense? If someone has a better way I'd love to hear it. Thanks!

  • Why doesn't this CompiledQuery give a performance improvement?

    - by Grammarian
    I am trying to speed up an often-used query. Using a CompiledQuery seemed to be the answer. But when I tried the compiled version, there was no difference in performance between the compiled and non-compiled versions. Can someone please tell me why using Queries.FindTradeByTradeTagCompiled is not faster than using Queries.FindTradeByTradeTag?

        static class Queries
        {
            // Pre-compiled query, as per http://msdn.microsoft.com/en-us/library/bb896297
            private static readonly Func<MyEntities, int, IQueryable<Trade>> mCompiledFindTradeQuery =
                CompiledQuery.Compile<MyEntities, int, IQueryable<Trade>>(
                    (entities, tag) => from trade in entities.TradeSet
                                       where trade.trade_tag == tag
                                       select trade);

            public static Trade FindTradeByTradeTagCompiled(MyEntities entities, int tag)
            {
                IQueryable<Trade> tradeQuery = mCompiledFindTradeQuery(entities, tag);
                return tradeQuery.FirstOrDefault();
            }

            public static Trade FindTradeByTradeTag(MyEntities entities, int tag)
            {
                IQueryable<Trade> tradeQuery = from trade in entities.TradeSet
                                               where trade.trade_tag == tag
                                               select trade;
                return tradeQuery.FirstOrDefault();
            }
        }

  • Can't delete record via the DataContext it was retrieved from

    - by Antilogic
    I just upgraded one of my application's methods to use compiled queries (not sure if this is relevant). Now I'm getting contradictory error messages when I run the code. This is my method:

        MyClass existing = Queries.MyStaticCompiledQuery(MyRequestScopedDataContext, param1, param2).SingleOrDefault();
        if (existing != null)
        {
            MyRequestScopedDataContext.MyClasses.DeleteOnSubmit(existing);
        }

    When I run it I get this message:

        Cannot remove an entity that has not been attached.

    Note that the compiled query and the DeleteOnSubmit reference the same DataContext. Still, I figured I'd humor the application and add an Attach command before the DeleteOnSubmit, like so:

        MyClass existing = Queries.MyStaticCompiledQuery(MyRequestScopedDataContext, param1, param2).SingleOrDefault();
        if (existing != null)
        {
            MyRequestScopedDataContext.MyClasses.Attach(existing);
            MyRequestScopedDataContext.MyClasses.DeleteOnSubmit(existing);
        }

    BUT... when I run this code, I get a completely different, contradictory error message:

        An attempt has been made to Attach or Add an entity that is not new,
        perhaps having been loaded from another DataContext. This is not supported.

    I'm at a complete loss... Does anyone else have some insight as to why I can't delete a record via the same DataContext I retrieved it from?

  • MySQL running on an EC2 m1.small instance has high load but low memory usage, possible resolutions?

    - by Tosh
    I have a MySQL 5.0.75 server on Ubuntu, on an m1.small instance running on Amazon's EC2 as part of an application. During peak usage the server load rises very high while memory usage stays low, and the application server becomes unresponsive because it's waiting for query results. The application server has only 5-8 Apache processes running (mod_perl processes). The data directory uses only 140MB of data, so the MyISAM tables aren't very big. The queries are pretty complicated, with some big joins being performed, and the application makes a lot of queries. mysqltuner reports everything OK except "Maximum possible memory usage: 1.7G (99% of installed RAM)", but I'm nowhere close to using that. My question is: where should I be looking to fix this? Is this something that can be tuned away, or do I just need a larger instance/server? Googling suggests either, and also upgrading the MySQL server. Any pointers in the right direction would be greatly appreciated, thanks!

    EDIT: I just discovered this in my slow queries log:

        # Time: 101116 11:17:00
        # User@Host: user[pass] @ [host]
        # Query_time: 4063  Lock_time: 1035  Rows_sent: 0  Rows_examined: 19960174
        SELECT * FROM contacts
        WHERE contacts.contact_id IN
              (SELECT external_id FROM contact_relations
               WHERE external_table = 'contacts'
                 AND contact_id IN
                     (SELECT contact_id FROM contacts
                      WHERE (company_name like '%%butan%%%' OR country like '%%butan%%%'
                             OR city like '%%butan%%%' OR email1 like '%%butan%%%')
                        AND (company_name is not null and company_name != '')));

    Which actually brings up a different but related question: if I have a contact table containing

        John Smith,The Fun Factory,555-1212,[email protected]

    what's the best way to search for that record using "factory" as a search key? Fulltext rarely seems to find items in the middle of a word, for example "actor" should bring up "Factory".
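
    On the last point: a FULLTEXT index matches whole words rather than substrings, so it will not find "actor" inside "Factory", but it will find the word "factory" anywhere in the text, which a leading-wildcard LIKE '%...%' can only do with a full scan. A sketch against the columns mentioned above (MyISAM supports FULLTEXT in 5.0):

        ALTER TABLE contacts
          ADD FULLTEXT INDEX ft_contacts (company_name, country, city, email1);

        -- matches the word "factory" in any indexed column,
        -- e.g. company_name = 'The Fun Factory'
        SELECT * FROM contacts
         WHERE MATCH (company_name, country, city, email1)
               AGAINST ('factory' IN BOOLEAN MODE);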

  • Is it possible to create multi-tiered WHERE statements in MySQL?

    - by Brendan
    I'm currently developing a program that will generate reports based upon lead data. My issue is that I'm running 3 queries for something that I would like to only run one query for. For instance, I want to gather data for leads generated in the past day (submission_date > (NOW() - INTERVAL 1 DAY)), and I would like to find out how many total leads there were and how many sold leads there were in that timeframe (sold=1 / sold=0). The issue is that this is currently being done with 2 queries, one with WHERE sold = 1 and one with WHERE sold = 0. This is all well and good, but when I want to generate this data for the past day, week, month, year, and all time, I will have to run 10 queries to obtain it. I feel like there HAS to be a more efficient way of doing this. I know I can create a MySQL function for this, but I don't see how that would solve the problem. Thanks!!
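
    One common pattern that collapses all of these into a single pass is conditional aggregation: in MySQL a boolean comparison evaluates to 1 or 0, so SUM() over a condition counts the rows that match it. A sketch (the leads table name and columns are assumed from the description):

        SELECT
          SUM(submission_date > NOW() - INTERVAL 1 DAY)                AS day_total,
          SUM(sold = 1 AND submission_date > NOW() - INTERVAL 1 DAY)   AS day_sold,
          SUM(submission_date > NOW() - INTERVAL 1 WEEK)               AS week_total,
          SUM(sold = 1 AND submission_date > NOW() - INTERVAL 1 WEEK)  AS week_sold,
          COUNT(*)                                                     AS all_total,
          SUM(sold = 1)                                                AS all_sold
        FROM leads;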

  • Redirect web app results to my own application

    - by vbNewbie
    Is it possible to redirect a web app's results to a second application? I cannot parse the HTML source; it contains the JavaScript functions that execute the queries, but all the content is probably server-side. I hope this makes sense. The owner has made the script available, but I am not sure how this helps. Can I, using .NET, call the site and redirect the results, perhaps to a file or database? The app accesses one of Google's APIs, performs searches/queries, and returns results which are displayed on the site. Now, all the JavaScript functions that perform these queries are listed in the source, but I do not know JavaScript, so it does not make much sense to me. I have used the documentation, which uses the OAuth protocol to access the API, and have implemented that in my web app, but it took me nearly a week to get the request token right, and now when I send requests to the API, sometimes I get one result back and sometimes none. It is frustrating me. The owner of the web app has given me use of his script, but he says all that happens is that my browser interacts with the Google API and not his server. So I thought: why not have my web app call his, since his interacts with the API flawlessly, and have the results sent to my app to save in a database? I have very little experience here, so pardon my ignorance.

  • How to set an element value dynamically based on a for-each loop count

    - by user1515918
    Here is a snippet of an XSL file that I am trying to make work. I would like to change the value of the element request-tot-queries in the header based on the loop count in the body. Your help would be greatly appreciated!

        <HEADER>
          <request-tot-queries>$Counter</request-tot-queries>
        </HEADER>
        <Body>
          <xsl:for-each select="//Request/Responses/Pooled/ResidenceHistory/Residencies/Residency">
            <count><xsl:variable name="counter" select="position()"/></count>
            <xsl:if test="DateRange/To/Date[@Type!='Present']">
              <subject-query>
              .
              .
              .
              </subject-query>
            </xsl:if>
          </xsl:for-each>
        </Body>

  • Too many columns to index - use MySQL partitions?

    - by Christopher Padfield
    We have an application with a table with 20+ columns that are all searchable. Building indexes for all these columns would make write queries very slow, and any really useful index would often have to span multiple columns, increasing the number of indexes needed. However, for 95% of these searches, only a small subset of the rows needs to be searched, and quite a small number - say 50,000 rows.

    So we have considered using MySQL partitioned tables - having a column that is basically isActive, which is what we divide the two partitions by. Most search queries would be run with isActive=1, so they would hit the small 50,000-row partition and be quick without other indexes.

    The only issue is that the set of rows where isActive=1 is not fixed; i.e. it's not based on the date of the row or anything fixed like that; we will need to update isActive based on use of the data in that row. As I understand it that is no problem though; the data would just be moved from one partition to another during the UPDATE query.

    We do have a PK on id for the row, though, and I am not sure if this is a problem; the manual seemed to suggest the partition had to be based on any primary keys. This would be a huge problem for us, because the primary key id has no bearing on whether the row is active.
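
    For what it's worth, the rule in MySQL is that every unique key (including the primary key) must include the column(s) in the partitioning expression, so the usual workaround is a composite primary key rather than partitioning by id. A sketch, assuming isActive is a non-null TINYINT flag (the table name is illustrative):

        ALTER TABLE search_data
          DROP PRIMARY KEY,
          ADD PRIMARY KEY (id, isActive);

        ALTER TABLE search_data
          PARTITION BY LIST (isActive) (
            PARTITION p_active   VALUES IN (1),
            PARTITION p_inactive VALUES IN (0)
          );

        -- UPDATEs that flip isActive move the row between partitions,
        -- and isActive=1 searches only scan the small partition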

  • Chat app vs REST app - use a thread in an Activity or a thread in a Service?

    - by synic
    In Virgil Dobjanschi's talk, "Developing Android REST client applications" (link here), he said a few things that took me by surprise, including:

    - Don't run HTTP queries in threads spawned by your activities. Instead, communicate with a Service to do them, and store the information in a ContentProvider. Use a ContentObserver to be notified of changes.
    - Always perform long-running tasks in a Service, never in your Activity.
    - Stop your Service when you're done with it.

    I understand that he was talking about a REST API, but I'm trying to make it fit with some other ideas I've had for apps. One of the APIs I've been using uses long polling for its chat interface. There is a loop of HTTP queries, most of which will time out. This means that, as long as the app hasn't been killed by the OS and the user hasn't specifically turned off the chat feature, I'll never be done with the Service, and it will stay open forever. This seems less than optimal.

    Long question short: for a chat application that uses long polling to simulate push and immediate response, is it still best practice to use a Service to perform the HTTP queries and store the information in a ContentProvider?

  • MySQL Join/Comparison on a DATETIME column (<5.6.4 and > 5.6.4)

    - by Simon
    Suppose I have two tables like so:

        Events:   ID (PK int autoinc), Time (datetime), Caption (varchar)
        Position: ID (PK int autoinc), Time (datetime), Easting (float), Northing (float)

    Is it safe to, for example, list all the events and their position if I am using the Time field as my joining criterion? I.e.:

        SELECT E.*, P.* FROM Events E JOIN Position P ON E.Time = P.Time

    Or even just comparing a datetime value (taking into consideration that the parameterized value may contain a fractional-seconds part - which MySQL has always accepted), e.g.:

        SELECT E.* FROM Events E WHERE E.Time = @Time

    I understand MySQL (before version 5.6.4) only stores datetime fields WITHOUT milliseconds, so I would assume this query would work. However, as of version 5.6.4, I have read that MySQL can now store milliseconds with the datetime field. Assuming datetime values are inserted using functions such as NOW(), the milliseconds are truncated (<5.6.4), which I would assume allows the above query to work. However, with version 5.6.4 and later, this could potentially NOT work. I am, and only ever will be, interested in second accuracy. If anyone could answer the following, it would be greatly appreciated:

    1. In general, how does MySQL compare datetime fields against one another (consider the join above)?
    2. Is the above query fine, and does it make use of indexes on the time fields? (MySQL < 5.6.4)
    3. Is there any way to exclude milliseconds, i.e. when inserting and in conditional joins/selects etc.? (MySQL >= 5.6.4)
    4. Will the join query above work? (MySQL >= 5.6.4)

    EDIT: I know I can cast the datetimes - thanks to those who answered - but I'm trying to tackle the root of the problem here (the fact that the storage type/definition has been changed), and I DO NOT want to use functions in my queries. That negates all my work of optimizing queries and applying indexes, not to mention having to rewrite all my queries.

    EDIT 2: Can anyone out there suggest a reason NOT to join on a DATETIME field using second accuracy?
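
    On point 3, the 5.6.4+ behaviour is opt-in per column, as I understand the change: plain DATETIME means DATETIME(0), which keeps whole-second precision, and values carrying a fractional part are rounded to the column's precision on insert. So declaring the precision explicitly keeps both sides whole-second and leaves the equality join index-friendly. A sketch:

        CREATE TABLE Events (
          ID      INT AUTO_INCREMENT PRIMARY KEY,
          Time    DATETIME(0),          -- fsp 0: no fractional seconds stored
          Caption VARCHAR(100),
          KEY idx_time (Time)
        );

        CREATE TABLE Position (
          ID       INT AUTO_INCREMENT PRIMARY KEY,
          Time     DATETIME(0),
          Easting  FLOAT,
          Northing FLOAT,
          KEY idx_time (Time)
        );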

  • Questions about shifting from mysql to PDO

    - by Scarface
    Hey guys, I have recently decided to switch all my current plain MySQL queries performed with PHP's mysql_query to PDO-style queries, to improve performance, portability and security. I just have some quick questions for any experts in this database interaction tool:

    1. Will it prevent injection if all statements are prepared? (I noticed php.net writes "however, if other portions of the query are being built up with unescaped input, SQL injection is still possible", and I was not exactly sure what this means.) Does this just mean that if all variables are run through a prepare function it is safe, and if some are directly inserted then it is not?

    2. Currently I have a connection at the top of my page and queries performed during the rest of the page. I took a look at PDO in more detail and noticed that there is a try/catch procedure for every query, involving a connection and the closing of that connection. Is there a straightforward way of connecting and then reusing that connection, without having to put everything in a try or constantly repeat the procedure of connecting, querying and closing?

    3. Can anyone briefly explain, in layman's terms, what purpose set_exception_handler serves?

    I appreciate any advice from more experienced individuals.
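
    On the first question: that reading is right - values bound through placeholders are safe, while any fragment concatenated into the SQL string itself (a table name, a column in an ORDER BY, a raw LIMIT) is still injectable, because it becomes part of the statement text. A bound parameter corresponds to MySQL's own prepared-statement mechanism, which you can see in plain SQL (the table and column names here are illustrative):

        -- the statement is parsed once, with a placeholder;
        -- the value travels separately and is never parsed as SQL
        PREPARE find_user FROM 'SELECT * FROM users WHERE username = ?';
        SET @name := 'alice';
        EXECUTE find_user USING @name;
        DEALLOCATE PREPARE find_user;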

  • DNS query.log - Multiple queries for ripe.net

    - by Christopher Wilson
    Currently I run a DNS server (bind9) that handles queries from clients over the internet. Lately I have noticed hundreds of queries, from many different addresses, that look like this (server IP removed):

        client 216.59.33.210#53: query: ripe.net IN ANY +ED (0.0.0.0)
        client 216.59.33.204#53: query: ripe.net IN ANY +ED (0.0.0.0)
        client 208.64.127.5#53: query: ripe.net IN ANY +ED (0.0.0.0)
        client 184.107.255.202#53: query: ripe.net IN ANY +ED (0.0.0.0)
        client 208.64.127.5#53: query: ripe.net IN ANY +ED (0.0.0.0)
        client 208.64.127.5#53: query: ripe.net IN ANY +ED (0.0.0.0)
        client 205.204.65.83#53: query: ripe.net IN ANY +ED (0.0.0.0)
        client 69.162.110.106#53: query: ripe.net IN ANY +ED (0.0.0.0)
        client 216.59.33.210#53: query: ripe.net IN ANY +ED (0.0.0.0)
        client 69.162.110.106#53: query: ripe.net IN ANY +ED (0.0.0.0)
        client 216.59.33.204#53: query: ripe.net IN ANY +ED (0.0.0.0)
        client 208.64.127.5#53: query: ripe.net IN ANY +ED (0.0.0.0)

    Can someone please explain why there are so many clients querying for ripe.net?

  • How to enable WMI Provider MSCluster on MS Server 2008 R2

    - by Tobias Hertkorn
    I have successfully set up a failover cluster on Microsoft Server 2008 R2 Enterprise Edition. Now I want to talk to the MSCluster WMI provider on said server. WMI queries to e.g. CIMV2 succeed, but queries like

        select * from MSCluster_ResourceGroup where MSCluster_ResourceGroup.Name="testserver"

    fail with "Access denied". I am using a domain admin account. Do I have to enable the MSCluster WMI provider? What am I missing?

  • Microsoft SyncFramework - Sync different tables into one

    - by evnu
    Hello, we are trying to get the Microsoft SyncFramework running in our application to synchronize an Oracle db with a mobile device.

    Problem: The queries that we need to gather the data on the Oracle db take much time (and we haven't found a way to speed them up yet), so we try to split them up into as many portions as possible. One big part of the whole problem is that we need different information out of one big table, which bloats a query if combined. Unfortunately, the SyncFramework allows only one TableAdapter per SyncTable. Now this is a problem for our application: if we were able to use more than one TableAdapter per SyncTable, we could easily spread the queries in a more efficient way. Using one query per table which combines all the needed data takes way too much time.

    Ideas: I thought of creating different TableAdapters, one for each of the required queries, and then merging the resulting datasets afterwards (preferably on the server). This seems to work, but is a rather awkward solution. Does someone of you know a better solution? Thanks in advance, evnu

    EDIT: So, I implemented the merge solution. If you are interested, take a look at the following code. I'll give more details if there are questions.

        <WebMethod()> _
        Public Function GetChanges(ByVal groupMetadata As SyncGroupMetadata, ByVal syncSession As SyncSession) As SyncContext
            Dim stream As MemoryStream
            Dim format As BinaryFormatter = New BinaryFormatter
            Dim anchors As Dictionary(Of String, Byte())

            ' keep track of the tables that will be updated
            Dim addTables As Dictionary(Of String, List(Of SyncTableMetadata)) = New Dictionary(Of String, List(Of SyncTableMetadata))

            ' list of all present anchors
            Dim allAnchors As Dictionary(Of String, Byte()) = New Dictionary(Of String, Byte())

            ' fill allAnchors - deserialize all given anchors
            For Each Table As SyncTableMetadata In groupMetadata.TablesMetadata
                If Table.LastReceivedAnchor Is Nothing Or Table.LastReceivedAnchor.IsNull Then Continue For
                stream = New MemoryStream(Table.LastReceivedAnchor.Anchor)
                anchors = format.Deserialize(stream)
                For Each item As KeyValuePair(Of String, Byte()) In anchors
                    allAnchors.Add(item.Key, item.Value)
                Next
                stream.Dispose()
            Next

            For Each Table As SyncTableMetadata In groupMetadata.TablesMetadata
                If allAnchors.ContainsKey(Table.TableName) Then
                    Table.LastReceivedAnchor.Anchor = allAnchors(Table.TableName)
                End If

                Dim addSyncTables As List(Of SyncTableMetadata)
                If syncSession.SyncParameters.Contains(Table.TableName) Then
                    Dim tableNames() As String = syncSession.SyncParameters(Table.TableName).Value.ToString.Split(":")
                    addSyncTables = New List(Of SyncTableMetadata)
                    For Each tableName As String In tableNames
                        Dim newSynctable As SyncTableMetadata = New SyncTableMetadata
                        newSynctable.TableName = tableName
                        If allAnchors.ContainsKey(tableName) Then
                            Dim anker As SyncAnchor = New SyncAnchor(allAnchors(tableName))
                            newSynctable.LastReceivedAnchor = anker
                        Else
                            newSynctable.LastReceivedAnchor = Nothing
                        End If
                        newSynctable.SyncDirection = Table.SyncDirection
                        addSyncTables.Add(newSynctable)
                    Next
                    addTables.Add(Table.TableName, addSyncTables)
                End If
            Next

            ' add the newly created synctables
            For Each item As KeyValuePair(Of String, List(Of SyncTableMetadata)) In addTables
                For Each Table As SyncTableMetadata In item.Value
                    groupMetadata.TablesMetadata.Add(Table)
                Next
            Next

            ' fire queries
            Dim context As SyncContext = servSyncProvider.GetChanges(groupMetadata, syncSession)

            ' merge resulting datasets
            For Each item As KeyValuePair(Of String, List(Of SyncTableMetadata)) In addTables
                For Each Table As SyncTableMetadata In item.Value
                    If context.DataSet.Tables.Contains(Table.TableName) Then
                        If Not context.DataSet.Tables.Contains(item.Key) Then
                            Dim tmp As DataTable = context.DataSet.Tables(Table.TableName).Copy
                            tmp.TableName = item.Key
                            context.DataSet.Tables.Add(tmp)
                        Else
                            context.DataSet.Tables(item.Key).Merge(context.DataSet.Tables(Table.TableName))
                            context.DataSet.Tables.Remove(Table.TableName)
                        End If
                    End If
                Next
            Next

            ' create new anchors
            Dim allAnchorsDict As Dictionary(Of String, Byte()) = New Dictionary(Of String, Byte())
            For Each Table As SyncTableMetadata In groupMetadata.TablesMetadata
                allAnchorsDict.Add(Table.TableName, context.NewAnchor.Anchor)
            Next

            stream = New MemoryStream
            format.Serialize(stream, allAnchorsDict)
            context.NewAnchor.Anchor = stream.ToArray
            stream.Dispose()

            Return context
        End Function

  • SQL Server & Disk Space

    - by Dismissile
    I created a database on a local SQL server that we use for development. I created the log and data files on a second hard drive (E:\MSSQL\DATA). I am using this database to do some speed tests, so I created a lot of data (7 million rows). I started running some pretty intensive queries, and to get some test data I ran an update statement that updated all 7 million rows. Now it has taken up all of the space on my C:\, which I don't understand, since I put the data files on the E:\. Are there some files on the C:\ that would be growing because I am running queries on this other database, and if so, how do I stop it? I am done with this database, but I need to get my C:\ back in order. The database's filegroup was PRIMARY; is this relevant?
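
    A likely suspect when the data and log files live elsewhere is tempdb, which stays on whatever drive SQL Server was installed to (typically C:\) and grows during large updates and sorts. A quick check of which files actually grew and where they live:

        -- sizes are stored in 8 KB pages; physical_name shows the drive
        SELECT DB_NAME(database_id) AS database_name,
               name,
               physical_name,
               size * 8 / 1024 AS size_mb
        FROM sys.master_files
        ORDER BY size DESC;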

  • Slow query logging in MySQL server

    - by Vinayak Mahadevan
    Hi, I have installed MySQL Community Edition 5.1.41 on a Windows 2000 server. In the my.ini file I have enabled slow query logging and redirected the output to a table, and I have set long_query_time to 10 seconds. But after running some queries, I checked the slow query log table and found that all the queries which were executed had been logged, and a file called database-slow.log had also been created in the data folder. Can anybody please tell me where I am going wrong? I am using the built-in InnoDB, not the InnoDB plugin. Thanks
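
    One thing worth verifying is what the running server actually has in effect - a my.ini edit only applies after a restart, and log-queries-not-using-indexes (if set anywhere) logs fast queries too, regardless of long_query_time:

        SHOW VARIABLES LIKE 'long_query_time';
        SHOW VARIABLES LIKE 'log_queries_not_using_indexes';
        SHOW VARIABLES LIKE 'log_output';
        SHOW VARIABLES LIKE '%slow%';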

  • Wrong DNS query in Active Directory network with NetBIOS-enabled client

    - by koankoder
    The setup: Active Directory is enabled on the network (abcd.com); we have a single-character host name (1.abcd.com); one of the desktops is an old XP machine with NetBIOS enabled.

    The problem: whenever we query for any host name from the XP machine, the first character alone is taken for the DNS query (one.abcd.com will query for o.abcd.com, two.abcd.com will query for t.abcd.com). Even if we give an IP, the application queries with the numeric prefix (10.x.x.x will query for 1.abcd.com). Since we already have 1.abcd.com, all queries and traffic end up at 1.abcd.com. After discussion with the network guys, it seems NetBIOS does DNS queries using some prefix etc., but none of them is actually sure of what is happening. Are there any docs which can explain this behavior? Is this valid behavior in a NetBIOS environment?

  • Outlook DASL Filter - Custom Search

    - by Ryan B
    I'm trying to write a DASL filter to combine three queries:

    1. Get all mail with no category and no flag:

        ("urn:schemas-microsoft-com:office:office#Keywords" IS NULL AND "urn:schemas:httpmail:messageflag" IS NULL)

    2. Get all mail that is categorized as "Ryan" and flagged with a red "Today" flag. I don't know how to write this one.

    3. Get all mail that is uncategorized and flagged with a red "Today" flag. I don't know how to write this one.

    Once I have the individual queries, I will combine and OR them. I am stuck on how to filter the flag value.

  • Can / should I prevent my domain controller from doing forward lookups for remote users?

    - by markmnl
    I have a Windows Server 2003 server in the office. I VPN into the LAN remotely. My VPN connection has a virtual NIC with the Windows server as the primary DNS, since it is a domain controller. When connected to the VPN, if I do an nslookup or simply browse the web, my VPN's DNS (the office's Windows server) provides the DNS answers - I believe because it has DNS forwarders, so queries it can't answer it forwards and then relays the answers for. This is the desired behaviour for workstations in the office (they should query their domain controller first). However, for remote VPN users this is not desirable - I do not want my remote office's server to answer DNS queries it is not the authority for (its zone happens to be 192.168.x.x). Is there any way I can configure this?

  • MySQL 5.5 on Windows server is horribly slow

    - by Brad
    I have had no luck getting MySQL 5.5 to be as fast as 5.1 or MariaDB on the exact same hardware/database/environment under Windows Server 2003 R2 or 2008 R2. My benchmarks from our application:

        MySQL 5.5 + CentOS 5.2 (XenServer virtual)  =  28 seconds (box is "busy", not buried)
        MariaDB (5.1) + Windows 2003 (physical box) = 130 seconds (box is 2% busy)
        MySQL 5.1 + Windows 2003 (physical box)     = 170 seconds (box is 2% busy)
        MySQL 5.5 + Windows 2003 (physical box)     = 305 seconds (as high as 600 seconds; box is 2% busy)

    The only difference between these runs is the removal of skip-locking and the running of mysql_upgrade.exe to update some tables for stored procs on 5.5. Yes, I know it's a release candidate; I'm feeding that back to MySQL as well. No slow queries are logged; it doesn't think it's being slow, it just is. I'm going to start tearing into the queries themselves to see if the INSERT/SELECT plans have gone buggo on 5.5. Any help would be appreciated! Thanks
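
    One low-effort comparison before digging into query plans, since the same database behaves so differently across versions: diff the engine-related settings on each server, several of whose defaults changed between 5.1 and 5.5 (notably, InnoDB became the default storage engine in 5.5, with its own buffer pool and flush behaviour). A starting point, not a diagnosis:

        SHOW VARIABLES LIKE 'storage_engine';
        SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
        SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
        SHOW VARIABLES LIKE 'innodb_flush_method';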

  • Zabbix - Some of the monitored items don't refresh

    - by Niro
    I'm experiencing a strange issue with Zabbix monitoring a MySQL server. Most of the data from the server, such as MySQL queries per second, MySQL uptime, buffer memory, etc., update nicely, while some data, like CPU iowait time (avg1), host local time, MySQL number of threads, and other items which were monitored in the past, have a last-check time of about a week ago. I can't find any logic in this; for example, MySQL number of threads and MySQL queries per second are obtained in a similar way, so it does not make sense that one of them is monitored and the other is not. Please help - how can I fix this?

    Update: I used zabbix_get from the Zabbix server to check one of the items on the Zabbix client, and it works, so the problem must be on the Zabbix server side.

  • MySQL and user-level logging

    - by Adraen
    I have been looking at logging only certain users' activity in MySQL. I found that logging can be enabled or disabled for all users, but one of the services using the db does a lot of queries, and therefore I would like to log only specific users. Google told me that a flag can be SET to enable or disable logging; however, I cannot modify the service's DB connection code, and asking every single user to enable logging before they do anything might not be as reliable as I want. So, do you know if there is any way to log only a set of users' queries? Thanks!
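
    As far as I know, the general log is all-or-nothing per server, but since 5.1 it can be routed to a table, which at least lets you filter by user when reading (the noisy service user is still written, so the logging overhead remains). A sketch; the user name is illustrative:

        SET GLOBAL log_output  = 'TABLE';
        SET GLOBAL general_log = 'ON';

        -- read back only the users you care about
        SELECT event_time, user_host, argument
          FROM mysql.general_log
         WHERE user_host LIKE 'someuser%';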

  • Optimizing MySQL for small VPS

    - by Chris M
    I'm trying to optimize my MySQL config for a verrry small VPS. The VPS is also running NGINX/PHP-FPM and Magento, all with a limit of 250MB of RAM. This is the output of MySQLTuner:

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.41-3ubuntu12.8
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 1M (Tables: 14)
        [--] Data in InnoDB tables: 29M (Tables: 301)
        [--] Data in MEMORY tables: 1M (Tables: 17)
        [!!] Total fragmented tables: 301

        -------- Security Recommendations --------------------------------------------
        [OK] All database users have passwords assigned

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 2d 11h 14m 58s (1M q [8.038 qps], 33K conn, TX: 2B, RX: 618M)
        [--] Reads / Writes: 83% / 17%
        [--] Total buffers: 122.0M global + 8.6M per thread (100 max threads)
        [!!] Maximum possible memory usage: 978.2M (404% of installed RAM)
        [OK] Slow queries: 0% (37/1M)
        [OK] Highest usage of available connections: 6% (6/100)
        [OK] Key buffer size / total MyISAM indexes: 32.0M/282.0K
        [OK] Key buffer hit rate: 99.7% (358K cached / 1K reads)
        [OK] Query cache efficiency: 83.4% (1M cached / 1M selects)
        [!!] Query cache prunes per day: 48301
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 144K sorts)
        [OK] Temporary tables created on disk: 13% (27K on disk / 203K total)
        [OK] Thread cache hit rate: 99% (6 created / 33K connections)
        [!!] Table cache hit rate: 0% (32 open / 51K opened)
        [OK] Open file limit used: 1% (20/1K)
        [OK] Table locks acquired immediately: 99% (1M immediate / 1M locks)
        [!!] InnoDB data size / buffer pool: 29.2M/8.0M

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Reduce your overall MySQL memory footprint for system stability
            Enable the slow query log to troubleshoot bad queries
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
          *** MySQL's maximum memory usage is dangerously high ***
          *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 64M)
            table_cache (> 32)
            innodb_buffer_pool_size (>= 29M)

    and this is the config:

        #
        # The MySQL database server configuration file.
        #
        # You can copy this to one of:
        # - "/etc/mysql/my.cnf" to set global options,
        # - "~/.my.cnf" to set user-specific options.
        #
        # One can use all long options that the program supports.
        # Run program with --help to get a list of available options and with
        # --print-defaults to see which it would actually understand and use.
        #
        # For explanations see
        # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

        # This will be passed to all mysql clients
        # It has been reported that passwords should be enclosed with ticks/quotes
        # escpecially if they contain "#" chars...
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram

        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        #
        # * Basic Settings
        #
        #
        # * IMPORTANT
        # If you make changes to these settings and your system uses apparmor, you may
        # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
        #
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address = 127.0.0.1
        #
        # * Fine Tuning
        #
        key_buffer = 32M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 8
        sort_buffer_size = 4M
        read_buffer_size = 4M
        myisam_sort_buffer_size = 16M
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        max_connections = 100
        table_cache = 32
        tmp_table_size = 128M
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        #query_cache_limit = 1M
        query_cache_type = 1
        query_cache_size = 64M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        # As of 5.1 you can enable the log at runtime!
        #general_log_file = /var/log/mysql/mysql.log
        #general_log = 1
        log_error = /var/log/mysql/error.log
        # Here you can see queries with especially long duration
        #log_slow_queries = /var/log/mysql/mysql-slow.log
        #long_query_time = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        expire_logs_days = 10
        max_binlog_size = 100M
        #binlog_do_db = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 16M

        #
        # * IMPORTANT: Additional settings that can override those from this file!
        #   The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    The site contains one WordPress site (so lots of MyISAM, but mostly static content, as it's not changing all that often; a WordPress cache plugin deals with this) and the Magento site, which consists of a lot of InnoDB tables, some MyISAM and some MEMORY. The "read" side seems to be running pretty well with the mass of optimizations I've used on Magento, the NGINX setup and PHP-FPM + XCache. I'd love a kick in the right direction with the MySQL config, so I'm not blindly altering it based on MySQLTuner without understanding what I'm changing. Thanks
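
    A sanity check on the tuner's scariest line, using its own numbers: the worst case is the global buffers plus the per-thread buffers multiplied by max_connections, which is where the 404% figure comes from:

        SELECT 122 + 8.6 * 100 AS worst_case_mb;  -- ~982 MB against a 250 MB VPS

    With only 6 of 100 connections ever used and 29M of InnoDB data, the levers that shrink the worst case are max_connections and the 4M per-thread sort/read buffers; an innodb_buffer_pool_size around 32M would cover the current data. Treat these as starting points to test, not definitive settings.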
