Search Results

Search found 7116 results on 285 pages for 'nested queries'.


  • How do you handle huge if-conditions?

    - by Teifion
    It's something that's bugged me in every language I've used: I have an if statement whose conditional part has so many checks that I have to split it over multiple lines, use a nested if statement, or just accept that it's ugly and move on with my life. Are there any other methods you've found that might be of use to me and anybody else who's hit the same problem?

    Example, all on one line:

        if (var1 = true && var2 = true && var2 = true && var3 = true && var4 = true && var5 = true && var6 = true){

    Example, multi-line:

        if (var1 = true && var2 = true && var2 = true
            && var3 = true && var4 = true
            && var5 = true && var6 = true){

    Example, nested:

        if (var1 = true && var2 = true && var2 = true && var3 = true){
            if (var4 = true && var5 = true && var6 = true)
            {
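
    One common refactor, sketched here in Python with hypothetical names, is to give the whole condition a name so the if statement reads as a single check:

        # Each condition gets a descriptive name instead of a bare varN check.
        def request_is_valid(r):
            checks = (
                r.has_user,
                r.is_authenticated,
                r.within_rate_limit,
                r.payload_ok,
                r.target_exists,
            )
            return all(checks)

        if request_is_valid(request):
            handle(request)

    The same shape works in most languages: extract a well-named predicate function, and inside it combine the checks with an all()-style helper or early returns.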

    Read the article

  • Twitter Bootstrap: how to put unknown number of span* within a row-fluid?

    - by StackOverflowNewbie
    Assume I have the following nesting:

        <div class="cointainer-fluid">
          <div class="row-fluid">
            <div class="span3">
              <!-- left sidebar here -->
            </div>
            <div class="span9">
              <!-- main content here -->
            </div>
          </div>
        </div>

    I'd like to put an unknown number of <div class="span3"></div> in the main content area. (Each span3 is supposed to contain a product photo, name, price, etc.) Of course, my aim is for this to be responsive. So I might display 20 products: 5 products per "row" on a wide screen, then 4 products per "row" on a slightly less wide screen, then 3, then 2, then 1. For example (each X represents a product):

        Wide Screen
        row 1: X X X X X
        row 2: X X X X X
        row 3: X X X X X
        row 4: X X X X X

        Less Wide Screen
        row 1: X X X X
        row 2: X X X X
        row 3: X X X X
        row 4: X X X X
        row 5: X X X X

        Even Less Wide Screen
        row 1: X X X
        row 2: X X X
        row 3: X X X
        row 4: X X X
        row 5: X X X
        row 6: X X X
        row 7: X X

    It seems like I need to do nested rows. However, if I do that, I'll only be able to fit a certain number of products in each nested row. That'll cause problems as the screen width decreases, for example (each X represents a product):

        Wide Screen
        row 1: X X X X X
        row 2: X X X X X
        row 3: X X X X X

        Less Wide Screen
        row 1: X X X X X
        row 2: X X X X X
        row 3: X X X X X

    How do I do what I want to do in Twitter Bootstrap?

    Read the article

  • Concatenate an each loop inside another

    - by Lothar
    I want to concatenate the results of a jQuery each loop inside another but am not getting the results I expect.

        $.each(data, function () {
            counter++;
            var i = 0;
            var singlebar;
            var that = this;
            tableRow = '<tr>' +
                '<td>' + this.foo + '</td>' +
                $.each(this.bar, function(){
                    singlebar = '<td>' + that.bar[i].baz + '</td>';
                    tableRow + singlebar;
                });
                '</tr>';
            return tableRow;
        });

    The portion inside the nested each does not get added to the string that is returned. I can console.log(singlebar) and get the expected results in the console, but I cannot concatenate those results inside the primary each loop. I have also tried:

        $.each(this.bar, function(){
            tableRow += '<td>' + that.bar[i].baz + '</td>';
        });

    which also does not add the desired content. How do I iterate over this nested data and add it in the midst of the table that the primary each statement is building?
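
    Whatever the jQuery-specific fix turns out to be, the underlying pattern is plain nested accumulation: build the inner cells first, then splice them into the row. A rough sketch of that shape in Python, with the data layout assumed from the snippet above:

        # data is assumed to look like: [{"foo": ..., "bar": [{"baz": ...}, ...]}, ...]
        rows = []
        for record in data:
            # Build all inner cells before assembling the row they belong to.
            cells = "".join("<td>{}</td>".format(item["baz"]) for item in record["bar"])
            rows.append("<tr><td>{}</td>{}</tr>".format(record["foo"], cells))
        table_body = "".join(rows)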

    Read the article

  • can't create partial objects with accepts_nested_attributes_for

    - by Isaac Cambron
    I'm trying to build a form that allows users to update some records. They can't update every field, though, so I'm going to do some explicit processing (in the controller for now) to update the model vis-a-vis the form. Here's how I'm trying to do it.

    Family model:

        class Family < ActiveRecord::Base
          has_many :people, dependent: :destroy
          accepts_nested_attributes_for :people, allow_destroy: true, reject_if: ->(p){p[:name].blank?}
        end

    In the controller:

        def check
          edited_family = Family.new(params[:family])
          #compare to the one we have in the db
          #update each person as needed/allowed
          #save it
        end

    Form:

        = form_for current_family, url: check_rsvp_path, method: :post do |f|
          = f.fields_for :people do |person_fields|
            - if person_fields.object.user_editable
              = person_fields.text_field :name, class: "person-label"
            - else
              %p.person-label= person_fields.object.name

    The problem is, I guess, that Family.new(params[:family]) tries to pull the people out of the database, and I get this:

        ActiveRecord::RecordNotFound in RsvpsController#check
        Couldn't find Person with ID=7 for Family with ID=

    That's, I guess, because I'm not adding a field for the family id to the nested form, which I suppose I could do, but I don't actually need it to load anything from the database for this anyway, so I'd rather not. I could also hack around this by just digging through the params hash myself for the data I need, but that doesn't feel as slick. It seems nicest to just create an object out of the params hash and then work with it. Is there a better way? How can I just create the nested object?

    Read the article

  • Cleaner method for list comprehension clean-up

    - by Dan McGrath
    This relates to my previous question: Converting from nested lists to a delimited string.

    I have an external service that sends data to us in a delimited string format. It is lists of items, up to 3 levels deep. Level 1 is delimited by '|', level 2 is delimited by ';', and level 3 is delimited by ','. Each level or element can have 0 or more items. A simplified example is:

        a,b;c,d|e||f,g|h;;

    We have a function that converts this to nested lists, which is how it is manipulated in Python:

        def dyn_to_lists(dyn):
            return [[[c for c in b.split(',')] for b in a.split(';')] for a in dyn.split('|')]

    For the example above, this function results in the following:

        >>> dyn = "a,b;c,d|e||f,g|h;;"
        >>> print (dyn_to_lists(dyn))
        [[['a', 'b'], ['c', 'd']], [['e']], [['']], [['f', 'g']], [['h'], [''], ['']]]

    For lists, at any level, with only one item, we want the item as a scalar rather than a 1-item list. For lists that are empty, we want just an empty string. I've come up with this function, which does work:

        def dyn_to_min_lists(dyn):
            def compress(x):
                return "" if len(x) == 0 else x if len(x) != 1 else x[0]
            return compress([compress([compress([item for item in mv.split(',')]) for mv in attr.split(';')]) for attr in dyn.split('|')])

    Using this function with the example above, it returns:

        [[['a', 'b'], ['c', 'd']], 'e', '', ['f', 'g'], ['h', '', '']]

    Being new to Python, I'm not confident this is the best way to do it. Are there any cleaner ways to handle this? This will potentially have large amounts of data passing through it; are there any more efficient/scalable ways to achieve this?
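
    One cleaner shape (a sketch, not taken from the thread) is to treat the delimiters as data and recurse, so the compress logic appears once instead of once per level:

        def dyn_to_min_lists(dyn, delims=('|', ';', ',')):
            # Split on each delimiter in turn, compressing as we go: a 1-item
            # list collapses to its element, and since str.split never yields
            # an empty list, '' falls out of the 1-item case naturally.
            if not delims:
                return dyn
            parts = [dyn_to_min_lists(p, delims[1:]) for p in dyn.split(delims[0])]
            return parts[0] if len(parts) == 1 else parts

        print(dyn_to_min_lists("a,b;c,d|e||f,g|h;;"))
        # [[['a', 'b'], ['c', 'd']], 'e', '', ['f', 'g'], ['h', '', '']]

    This also extends to more (or fewer) levels just by changing the delims tuple.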

    Read the article

  • Convert Markdown text to RTF, using Ruby and Pandoc?

    - by niteshade
    Playing with Ruby and Ruby-Pandoc. Seems like a nice tool, if I can get it to work. I'd like to convert some Markdown text (with embedded lists and other fanciness) to Rich Text. Here's the text I'm converting:

        Title
        ===

        This is a paragraph. Hallelujah.

        Here comes a nested list.
        ---

        * List item 1
            * List item 1.1
            * List item 1.2
        * List item 2
            * List item 2.1

    Here's my Ruby code...

        require 'pandoc-ruby'
        input = File.read(test.md)
        converter = PandocRuby.new(input, from: :markdown, to: :rtf)
        puts converter.convert

    ...which (after saving the output to a file) produces a document without anything but a title. Here's the code of the RTF file:

        {\pard \ql \f0 \sa180 \li0 \fi0 \b \fs36 Title\par}
        {\pard \ql \f0 \sa180 \li0 \fi0 This is a paragraph. Hallelujah.\par}
        {\pard \ql \f0 \sa180 \li0 \fi0 \b \fs32 Here comes a nested list.\par}
        {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 1\par}
        {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 1.1\par}
        {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 1.2\par}
        {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 2\par}
        {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 2.1\sa180\par}

    In addition, even if it did show up in my RTF viewer (Mac TextEdit), the RTF code seems to have lost all list nesting. I don't know how to diagnose this, whether I have not stated necessary header information or it's something in Ruby-Pandoc. Thanks in advance!
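
    A likely suspect, though the excerpt doesn't confirm it: pandoc emits an RTF fragment unless you ask for a standalone document, and TextEdit wants the full {\rtf1 ...} wrapper, which would explain a file that opens nearly empty. A quick way to test that hypothesis outside Ruby, assuming the pandoc binary is on the PATH (sketched in Python):

        import subprocess

        markdown = open("test.md").read()

        # -s/--standalone wraps the body in a complete RTF document header;
        # without it pandoc emits only the bare {\pard ...} groups shown above.
        rtf = subprocess.run(
            ["pandoc", "-f", "markdown", "-t", "rtf", "-s"],
            input=markdown, capture_output=True, text=True, check=True,
        ).stdout

        open("test.rtf", "w").write(rtf)

    If that produces a document TextEdit can open, the fix on the Ruby side is to pass the equivalent standalone option through Ruby-Pandoc.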

    Read the article

  • jquery problem with toggle event only firing on the 2nd click...

    - by Ronedog
    Can anyone explain why the following jQuery only fires the 2nd toggle event, and how to fix it? Specifically, every time I click the nested <a> element it brings up the alert "2nd Click". I tested the selector to make sure it was selecting the element properly, and it does; at least it inserted a class without any problems. The selector is selecting the very last node in the unordered list that has an anchor tag.

        $("#nav li:not(:has(li)) a").toggle(function() {
            //1st click
            alert("1st Click");
        }, function() {
            //2nd click
            alert("2nd Click");
        });

    Nested HTML structure that fails:

        <ul id="nav">
          <li>
            <span>stuff</span>
            <a href="#">Cat 1</a>
            <ul>
              <li>
                <span>stuff</span>
                <a href="#">Subcat1</a>
                <ul>
                  <li>
                    <span>Stuff</span>
                    <a href="#">Subcat Details</a>
                  </li>
                </ul>
              </li>
            </ul>
          </li>
        </ul>

    However, this works right and fires both click events:

        <ul id="nav">
          <li>
            <span>stuff</span>
            <a href="#">Cat 1</a>
          </li>
        </ul>

    Read the article

  • Multiple collections tied to one base collection with filters and eventing

    - by damienc88
    I have a complex model served from my back end, which has a bunch of regular attributes, some nested models, and a couple of collections. My page has two tables, one for invalid items and one for valid items. The items in question are from one of the nested collections; let's call it baseModel.documentCollection, implementing DocumentsCollection. I don't want any filtration code in my Marionette.CompositeViews, so what I've done is the following (note, duplicated for the 'valid' case):

        var invalidDocsCollection = new DocumentsCollection(
            baseModel.documentCollection.filter(function(item) {
                return !item.isValidItem();
            })
        );
        var invalidTableView = new BookIn.PendingBookInRequestItemsCollectionView({
            collection: app.collections.invalidDocsCollection
        });
        layout.invalidDocsRegion.show(invalidTableView);

    This is fine for actually populating two tables independently from one base collection. But I'm not getting the whole event pipeline down to the base collection, obviously. This means that when a document's validity changes, there's no neat way for it to shift to the other collection, and therefore to the other view. What I'm after is a nice way of having a base collection that filtered collections can sit on top of. Any suggestions?

    Read the article

  • setsockopt EOPNOTSUPP (Operation not supported)

    - by brant
    When I strace my MySQL process, I keep finding the same error over and over:

        setsockopt(240, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported)
        futex(0x87ab944, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x87ab940, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
        futex(0x87ab260, FUTEX_WAKE_PRIVATE, 1) = 1
        select(13, [10 12], NULL, NULL, NULL) = 1 (in [12])
        fcntl64(12, F_SETFL, O_RDWR|O_NONBLOCK) = 0
        accept(12, {sa_family=AF_FILE, path="\246\32629iE"...}, [2]) = 803
        fcntl64(12, F_SETFL, O_RDWR) = 0
        getsockname(803, {sa_family=AF_FILE, path="/var/lib/mysql\1"...}, [28]) = 0
        fcntl64(803, F_SETFL, O_RDONLY) = 0
        fcntl64(803, F_GETFL) = 0x2 (flags O_RDWR)
        fcntl64(803, F_SETFL, O_RDWR|O_NONBLOCK) = 0
        setsockopt(803, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported)
        futex(0x87ab944, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x87ab940, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
        futex(0x87ab260, FUTEX_WAKE_PRIVATE, 1) = 1
        select(13, [10 12], NULL, NULL, NULL) = 1 (in [12])
        fcntl64(12, F_SETFL, O_RDWR|O_NONBLOCK) = 0
        accept(12, {sa_family=AF_FILE, path="\246\32629iE"...}, [2]) = 240
        fcntl64(12, F_SETFL, O_RDWR) = 0
        getsockname(240, {sa_family=AF_FILE, path="/var/lib/mysql\1"...}, [28]) = 0
        fcntl64(240, F_SETFL, O_RDONLY) = 0
        fcntl64(240, F_GETFL) = 0x2 (flags O_RDWR)
        fcntl64(240, F_SETFL, O_RDWR|O_NONBLOCK) = 0
        setsockopt(240, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported)

    When I look for running MySQL processes I don't see anything out of the ordinary. I figured it might be someplace in my code, so I modified .htaccess to spit out a 502 error to prevent it from loading anything. The error still shows up, just less frequently. There have been quite a few threads that talk about this error, but no real answer as to how to solve it.

    my.cnf, as per request:

        [mysqld]
        #skip-networking
        #log-slow-queries
        #safe-show-database
        #local-infile = 0
        log-slow-queries = /var/log/mysql-slow.log
        max_connections = 200
        query_cache_limit = 128643200
        key_buffer_size = 1200144000
        low_priority_updates = 1
        concurrent_insert = 2
        thread_cache_size = 7
        query_cache_size = 662144000
        table_cache = 1600
        table_definition_cache = 1024
        long_query_time = 2.5
        open_files_limit = 2647
        max_connect_errors=999999999
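
    One observation, offered as a hypothesis: the failing setsockopt calls come right after accepts on a Unix-domain socket (note sa_family=AF_FILE and the /var/lib/mysql path), and IP_TOS is an IP-level option, so the kernel rejecting it on a local socket is expected and harmless noise rather than a real fault. The errno is easy to reproduce (Linux assumed):

        import errno
        import socket

        # IP_TOS is only meaningful on IP sockets; a Unix-domain socket
        # refuses it with EOPNOTSUPP, matching the strace output above.
        a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            a.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 8)
        except OSError as e:
            print(e.errno == errno.EOPNOTSUPP, e)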

    Read the article

  • Alter charset and collation in all columns in all tables in MySQL

    - by The Disintegrator
    I need to execute these statements for all columns in all tables:

        alter table table_name charset=utf8;
        alter table table_name alter column column_name charset=utf8;

    Is it possible to automate this in any way inside MySQL? I would prefer to avoid mysqldump.

    Update: Richard Bronosky showed me the way :-)

    The query I needed to execute in every table:

        alter table DBname.DBfield CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

    Crazy query to generate all the other queries:

        SELECT distinct CONCAT('alter table ', TABLE_SCHEMA, '.', TABLE_NAME, ' CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;')
        FROM information_schema.COLUMNS
        WHERE TABLE_SCHEMA = 'DBname';

    I only wanted to execute it in one database, and it was taking too long to execute all in one pass. It turned out that it was generating one query per field per table, when only one query per table was necessary (distinct to the rescue). Getting the output into a file was how I realized it.

    How to generate the output to a file:

        mysql -B -N --user=user --password=secret -e "SELECT distinct CONCAT('alter table ', TABLE_SCHEMA, '.', TABLE_NAME, ' CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;') FROM information_schema.COLUMNS WHERE TABLE_SCHEMA = 'DBname';" > alter.sql

    And finally to execute all the queries:

        mysql --user=user --password=secret < alter.sql

    Thanks Richard. You're the man!
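
    The same generate-then-execute idea can also be done in one pass from a script. A sketch using the mysql-connector-python package (credentials and database name hypothetical, listing tables instead of columns so no DISTINCT is needed):

        import mysql.connector

        conn = mysql.connector.connect(user="user", password="secret", host="localhost")
        cur = conn.cursor()

        # One ALTER per table, straight from information_schema.
        cur.execute(
            "SELECT TABLE_NAME FROM information_schema.TABLES WHERE TABLE_SCHEMA = %s",
            ("DBname",),
        )
        tables = [row[0] for row in cur.fetchall()]

        for table in tables:
            cur.execute(
                "ALTER TABLE `DBname`.`{}` CONVERT TO CHARACTER SET utf8 "
                "COLLATE utf8_general_ci".format(table)
            )
        conn.close()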

    Read the article

  • Using dnsmasq for accessing multiple nameservers assigned by DHCP

    - by Ash
    At my work desktop running openSUSE 11.4, I have a local network which gets its address, domain (work.site), and nameservers (10.100.1.1, 10.100.1.2) through DHCP; these get written into /etc/resolv.conf. I access the internet through the work network, and these 2 nameservers end up answering any public domain name lookups.

    I also have a private VPN that I connect to. The nameserver (10.111.1.1) and domain (private.site) for this network rarely change, but currently they're pushed by the OpenVPN client into NetworkManager and also get merged into the existing /etc/resolv.conf. My resolv.conf ultimately ends up looking like this:

        search private.site work.site
        nameserver 127.0.0.1
        nameserver 10.111.1.1
        nameserver 10.100.1.1

    As you can see, the 2nd nameserver from my work network was pushed out because of the 3-entry limit. It is fine for now, but it would be a problem if the remaining nameserver goes down for maintenance or something.

    So I found out that dnsmasq could help me here, and I set up dnsmasq as a local DNS resolver without any DHCP support. This is my /etc/dnsmasq.conf:

        resolv-file=/etc/resolv.conf
        server=/private.site/10.111.1.1
        server=/1.111.10.in-addr.arpa/10.111.1.1
        listen-address=127.0.0.1
        bind-interfaces
        log-queries

    I've made dnsmasq get the list of nameservers from /etc/resolv.conf, since NetworkManager seems to be updating this list correctly (for a max of 3 nameservers). I'm able to resolve the host names in both networks correctly. So these are the questions I have:

    1. Is there a way I can make either NetworkManager or dhclient write out the list of nameservers somewhere else, which I can make dnsmasq use as its resolv-file?
    2. How do I make dnsmasq use certain nameservers as the default for all queries? Right now I notice that lookups for public domains on the internet are usually sent to both nameservers, the one on work.site as well as the one on private.site. It would be good if I can limit this to work.site only.
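
    For the second question, one possible shape (a sketch, with addresses taken from the question) is to drop resolv-file handling entirely: dnsmasq's no-resolv option ignores /etc/resolv.conf, explicit server= lines name the default upstreams, and the domain-scoped lines keep private.site pinned to the VPN nameserver:

        # /etc/dnsmasq.conf (sketch; addresses from the question above)
        # Ignore /etc/resolv.conf and name the upstreams explicitly.
        no-resolv
        # Default upstreams for everything else: the work nameservers only.
        server=10.100.1.1
        server=10.100.1.2
        # Queries for the VPN domain (and its reverse zone) go to the VPN DNS.
        server=/private.site/10.111.1.1
        server=/1.111.10.in-addr.arpa/10.111.1.1
        listen-address=127.0.0.1
        bind-interfaces

    The trade-off is that the work nameservers are then hard-coded rather than tracked from DHCP, which loops back to the first question.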

    Read the article

  • mysql startup, shutdown and logging on osx

    - by Joelio
    Hi, I am trying to troubleshoot some MySQL problems (I have a table I can't seem to delete or drop; it hangs forever). I have OS X 10.5.8, and I don't remember how/if I installed MySQL. Here is what I know: it automatically starts on boot, and the process looks like this:

        /usr/local/mysql/libexec/mysqld --basedir=/usr/local/mysql --datadir=/usr/local/mysql/var --pid-file=/usr/local/mysql/var/Joels-New-Pro.local.pid
        _mysql 96 0.0 0.0 75884 684 ?? Ss Sat06PM 0:00.02 /bin/sh /usr/local/mysql/bin/mysqld_safe

    When I run:

        /usr/local/mysql/libexec/mysqld --verbose --help

    it says:

        /usr/local/mysql/libexec/mysqld Ver 5.0.45 for apple-darwin9.1.0 on i686 (Source distribution)

    It seems to use my.cnf from /etc/my.cnf. Now here are my questions. I don't see anything in the startup items that remotely looks like MySQL:

        ls /Library/StartupItems/
        BRESINKx86Monitoring ChmodBPF HP IO HP Trap Monitor Parallels ParallelsTransporter

    1.) So how does it start up automatically?
    2.) How do I start and stop this type of installation?

    Also, looking at the config, the logs have no values:

        /usr/local/mysql/libexec/mysqld --verbose --help|grep '^log'
        log (No default value)
        log-bin (No default value)
        log-bin-index (No default value)
        log-bin-trust-function-creators FALSE
        log-bin-trust-routine-creators FALSE
        log-error
        log-isam myisam.log
        log-queries-not-using-indexes FALSE
        log-short-format FALSE
        log-slave-updates FALSE
        log-slow-admin-statements FALSE
        log-slow-queries (No default value)
        log-tc tc.log
        log-tc-size 24576
        log-update (No default value)
        log-warnings 1

    3.) Does that mean there is no logging enabled in my setup?

    Thanks in advance! Joel

    Read the article

  • Biztalk 2009 logshipping with SQL 2008

    - by Manjot
    Hi, I am setting up BizTalk log shipping for a BizTalk 2009 database. Following http://msdn.microsoft.com/en-us/library/aa560961.aspx, I am doing the following to set up BizTalk log shipping on the destination server:

    Enable ad-hoc queries by:

        sp_configure 'show advanced options',1
        go
        reconfigure
        go
        sp_configure 'Ad Hoc Distributed Queries',1
        go
        reconfigure
        go
        sp_configure 'show advanced options',0
        go
        reconfigure
        go

    Execute LogShipping_Destination_Schema & LogShipping_Destination_Logic in master on the destination server.

    Run:

        exec bts_ConfigureBizTalkLogShipping
            @nvcDescription = '',
            @nvcMgmtDatabaseName = '',
            @nvcMgmtServerName = '',
            @SourceServerName = null, -- null indicates that this destination server restores all databases
            @fLinkServers = 1 -- 1 automatically links the server to the management database

    When I run this I receive the following error:

        Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    After some research I found some info: usually this error means that the SQL Server Service Principal Name (SPN) was not configured, and NTLM was not being used as an authentication mechanism. The SQL services are running under different domain accounts. So I asked the domain admin to create SPNs for the servers and the SQL service accounts, for both source and destination, using the name and the FQDN, and to enable the computer names and service accounts for delegation. When I run the following:

        select * from sys.dm_exec_connections

    I get the same error: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. Any help please?

    Read the article

  • Migrating to Amazon AWS etc: What key statistics/questions should be analyzed and asked?

    - by cerd
    I searched SOverflow pretty extensively for something similar to this set of questions.

    BACKGROUND: We are a growing 'big(ish)' data chemical data company that is outgrowing our lab and our dedicated production workhorses. Make no mistake, we need to do some serious query optimization. Our data comes from a certain govt. agency, so the schema and lack of indexing are atrocious. So yes, I know AWS or EC2 is not a silver bullet when weighed against spending time to maybe rework your queries/code entirely 'out of the box'. With that said, I would appreciate any input on the following questions:

    1. We produce on CentOS and lab on Ubuntu LTS, which I prefer, especially with their growing cloud/AWS integration. If we are MySQL-centric, and our biggest problem is big cartesian products that produce slow queries, should we roll out what we know after more optimization with respect to Ubuntu/MySQL, with the added Amazon horsepower? Or is there some merit to the NoSQL and other technologies they offer?
    2. What are the key metrics I need to gather from Apache and MySQL, other than disk I/O operations, data up/down averages and trends, and special high-usage periods/scenarios? I've reviewed the AWS/EC2 fine print, but want 2nd opinions.
    3. What other services aside from the basic web/database have proven valuable to you? I know nothing of Hadoop or many other technologies they offer. Echoing my previous question: do you sometimes find it worth it (initially having it be a gamble, aside from basic homework) to dive into a whole new environment and end up finding a way of more efficiently producing your data/site product?
    4. Anything I should watch out for in projecting costs, or any other general advice when working with AWS folks, from anyone else whose company is very niche and very, very technical (scientifically, or anybody for that matter)?

    Thanks very much for your input - I think this thread could be valuable to others as well.

    Read the article

  • New Windows Server 2008 R2 WIMP running slower than Windows Server 2003

    - by starshine531
    We recently upgraded a WIMP server from Windows Server 2003 (32-bit) to Windows Server 2008 R2 (64-bit). The new server has significantly better hardware than the old server, yet many processes take much longer than on the old box. We have a rather complex web application process that normally takes about 7 seconds on the old box, but on the new one it takes 11-12 seconds. That's down from the 15.5 seconds it took before I disabled IPv6. This process involves some queries (some of them involve transactions with maybe 3 queries between the start and commit) and creating and emailing some PDFs. Windows updates are current on a more or less fresh machine. This happens consistently, even when we have almost no traffic on the site and memory and CPU aren't being hard pressed at all.

    The only differences between the servers other than the OS and hardware:

    1) When available, we used 64-bit versions of programs
    2) The new server uses MySQL 5.5 rather than MySQL 5.1 (I did run the mysql_upgrade program, and we use InnoDB for the engine)
    3) The new server uses PHP Version 5.3.18 rather than PHP Version 5.3.1
    4) With the new OS came IIS7 rather than IIS6, of course

    What could be causing better hardware to run so much slower? Let me know if you need more details. Thank you.

    Read the article

  • How to make a huge ram drive?

    - by Brandon Moore
    At my old job, when a report was needed, I could sit down with someone, pull up results, and get immediate feedback, then refine my queries and ultimately have the data we needed, in the format we needed, within 30-90 minutes. I just started working for a new company with a database containing millions of records, and I spent my whole 8 hours making a report that I feel I could have made in less than 2 hours if it were not for the massive amount of data the queries are working with, and the fact that I couldn't ask the person needing the data to sit down with me and give me feedback as I pulled up results, as I am used to.

    So I am trying to think of how we can make the server faster... much faster, so that I can have the same level of productivity I'm used to. One thought that just came to mind is that memory is so cheap these days, and by my calculations I could buy 10 8-gig RAM sticks for 1000 bucks. What I have never heard of, though, is a device that would let me combine these into a huge RAM drive. So I'd like to know if any such device exists, and if not, what is the largest RAM drive I could realistically make, and how would I go about doing so?

    EDIT: To you guys who are saying the database schema needs to be analyzed... you can't make a query such as "Select f1, f2, f3, etc from SomeTable" run any faster by normalizing or indexing the table. What I'm talking about IS ABSOLUTELY a need for improved performance at the hardware level. I am used to having results come back to me in a few seconds, not a few minutes, much less half an hour. Maybe that's what you guys are used to who have 100 billion record tables and you feel like that's fast, but I'm looking for results from tables with about 10 million records to come back to me within less than half a minute TOPS.

    Read the article

  • Google: "302 Moved" in Firefox

    - by Virtlink
    For some Google search queries executed from the Firefox search bar, or manually by typing in the URL, I get a ''302 Moved'' page. I did a quick virus scan, and have checked the hosts file and the plugins and add-ons that are installed in Firefox. Nothing is out of the ordinary. What could be the problem?

    These URLs (and any URL with google.com, empty and firefox-a in them) show me a 302 Moved page:

        https://www.google.com/search?q=c%23+empty+array&client=firefox-a
        https://www.google.com/search?q=empty&client=firefox-a

    Whereas these URLs work fine:

        https://www.google.com/search?q=c%23+empty+array (no firefox-a)
        https://www.google.com/search?q=empty (no firefox-a)
        https://www.google.com/search?q=c%23+array&client=firefox-a (no empty)
        https://www.google.nl/search?q=c%23+empty+array&client=firefox-a (no .com)

    By default Google redirects my queries to their .nl website and uses HTTPS. I am currently executing a full system virus scan using Security Essentials. My Firefox plugins were up-to-date. No unfamiliar Firefox plugins or add-ons were found. Restarting Firefox did not solve the issue. The issue does not occur in Internet Explorer. The hosts file did not contain any unfamiliar entries.

    Read the article

  • Mysql master-master not replicating

    - by frankil
    I'm setting up master-master MySQL replication on two servers (db1 and db2). I started by setting up db2 as a slave to db1, and that works fine. But when I set up db1 as a slave to db2, it isn't replicating. On the face of it everything looks fine, but the data isn't replicating. There are no errors in either of the error logs. The slave status is updating the binlog position. I have used mysqlbinlog to examine both the binlog on db2 and the relay log on db1, and all of the queries are going in there, but not being executed on db1.

    "show slave status" on both servers shows that both the slave IO and SQL threads are "Yes" and that the relay log position is updated by the SQL thread. Also on both servers:

        >echo "show processlist" | mysql | grep "system user"
        166819 system user NULL Connect 3655 Waiting for master to send event NULL
        166820 system user NULL Connect 3507 Has read all relay log; waiting for the slave I/O thread to update it NULL

    Relevant config for db1:

        server-id = 1
        log-slave-updates
        replicate-same-server-id = 0
        auto_increment_increment = 4
        auto_increment_offset = 1
        master-host = db2
        master-port = 3306
        master-user = slaveuser
        master-password = ***
        skip-slave-start
        sync_binlog = 1
        binlog-ignore-db=mysql

    Config for db2:

        server-id = 2
        log-slave-updates
        replicate-same-server-id = 0
        auto_increment_increment = 4
        auto_increment_offset = 2
        master-host = db1
        master-port = 3306
        master-user = slaveuser
        master-password = ***
        sync_binlog = 1
        relay-log=mysql-relay-bin
        binlog-ignore-db=mysql

    What else can I look for to make sure db1 executes the queries from db2?

    Read the article

  • Multicast hostname lookups on OSX

    - by KARASZI István
    I have a problem with hostname lookups on my OS X computer. According to Apple's HK3473 document, for v10.6:

        Host names that contain only one label in addition to local, for example "My-Computer.local", are resolved using Multicast DNS (Bonjour) by default. Host names that contain two or more labels in addition to local, for example "server.domain.local", are resolved using a DNS server by default.

    Which is not true in my testing. If I try to open a connection from my local computer to a remote port:

        telnet example.domain.local 22

    then it will look up the IP address with multicast DNS alongside the A and AAAA lookups. This causes a two-second lookup timeout on every lookup. Which is a lot! When I try with IPv4 only, it won't use the multicast queries to fetch the remote address, just the simple A queries:

        telnet -4 example.domain.local 22

    When I try with IPv6 only:

        telnet -6 example.domain.local 22

    then it will look up with multicast DNS and AAAA again, and the 2-second timeout delay occurs again. I've tried to create a resolver entry in /etc/resolver/domain.local and /etc/resolver/local.1, but neither of them was working. Is there any way to disable these multicast lookups for the "two or more labels in addition to local" domains, or simply to disable it for the selected subdomain (domain.local)? Thank you!

    Read the article

  • Question About mk-table-checksum Results

    - by stevenmusumeche
    Hello, I have 1 master and 2 slaves. I am using MySQL 5.1.42 on all servers. I am attempting to use mk-table-checksum to verify that their data is in sync, but I am getting unexpected results on one of the slaves.

    First, I generate the checksums on the master like this:

        mk-table-checksum h=localhost --databases MYDB --tables {$table_list} --replicate=MYDB.mk_checksum --chunk-size=10M

    My understanding is that this runs the checksum queries on the master, which then propagate via normal replication to the slaves. So no locking is needed, because the slaves will be at the same logical point in time when they run the checksum queries on themselves. Is this correct?

    Next, to verify that the checksums match, I run this on the master:

        mk-table-checksum --databases MYDB --replicate=IRC.mk_checksum --replicate-check 1 h=localhost,u=maatkit,p=xxxx

    If there are any differences, I repair the slaves like this:

        mk-table-sync --execute --verbose --replicate IRC.mk_checksum h=localhost,u=maatkit,p=xxxx

    After doing all of this, I repaired both slaves with mk-table-sync. However, every time I run this sequence (after everything has already been repaired), one slave is perfectly in sync but the other always has a few tables out of sync. I am 99.999% sure that the data on the slaves matches, since I repaired everything and the tables were not even updated on the master between runs of the checksum script. What would cause a few tables to always show as out of sync on only one of the slaves? I am stuck. Here is the output:

        Differences on h=x.x.x.x,p=...,u=maatkit
        DB  TBL               CHUNK CNT_DIFF CRC_DIFF BOUNDARIES
        IRC product           10    0        1        product_id = 147377 AND product_id < 162085
        IRC post_order_survey 0     0        1        1=1
        IRC mk_heartbeat      0     0        1        1=1
        IRC mailing_list      0     0        1        1=1
        IRC honey_pot_log     0     0        1        1=1
        IRC product           12    0        1        product_id = 176793 AND product_id < 191501
        IRC product           18    0        1        product_id = 265041
        IRC orders            26    0        1        order_id = 694472
        IRC orders_product    6     0        1        op_id = 935375

    Read the article

  • APC (php accelerator). What situations should I use this?

    - by matthewsteiner
    So I've just got a small VPS. I've installed APC, which sped up normal pages by 20%-30%. I was reading about memcached and came to the conclusion that I can use APC for the same thing (caching objects from database results) if I'm not distributing over other servers. Since I only have the one server, APC will be just as beneficial for caching things in memory.

    I'm still in development mode, and I'm sure it's hard to tell what would be best for production mode. The thing is, my database queries seem pretty fast (between .0008 and .02 seconds). None of my pages are very database-intensive. Would it be beneficial to me to cache results in memory? If the database is running well right now, is it going to have a hard time later? Also, is connecting to the database at all something that costs speed (even if I cache most of my queries, every page has to have a little database interaction for session data)?

    So, basically: given limited RAM and one machine, will using APC to cache results be much faster than just letting the database run uncached? Ideas?

    Read the article

  • why does mysql have so many more open and fragmented tables than tables in the DB?

    - by kswift
    I've been working on making our database run a little smoother and had good results over the past week. But there are still some things I don't understand. For one thing, the database has 25 tables, but mysql status shows 512 are open:

        mysqladmin status
        Uptime: 212854 Threads: 1 Questions: 43041 Slow queries: 7 Opens: 2605 Flush tables: 1 Open tables: 512 Queries per second avg: 0.202

    I've read that ISAM opens extra file descriptors, and a few other reasons why the number of open tables might be higher than 25, but I am guessing that 512 is not a good thing. Any suggestions on why this might be, or what I should be looking into?

    I've also been using mysqltuner, and it's been helpful. But it has consistently listed the number of fragmented tables at 207. In phpMyAdmin I've selected all the tables and optimized them several times, but that hasn't reduced the number of fragmented tables that mysqltuner reports. I think I am missing some important concept about how this all works. Does anyone have any suggestions to point me in the right direction, narrow down Google searches, or just generally help me be less clueless? Thanks!

    Read the article

  • Trying to connect simple VB6 ADO to SQL Server 2008

    - by Henry
    We have a VB6 app that, for current purposes, is very basic ADO: Dim a New ADODB.Recordset, set some basic properties like cursor location and lock type, set a connection string and a .Source like "Select * from CustomerMaster", and .Open - nothing fancy here!

    Yet, on a new SBS installation with SQL Server 2008 across 2 servers (one for apps, the other dedicated to SQL!), it dies/hangs/crashes if you try to run such a query from anything but the SQL Server box. Initially we were using the SQLOLEDB.1 driver, which would crash/hang the entire SQL Server after about 4 such queries (I built a simple 6-line app just for this purpose). Then we switched to the native SQL driver, which did allow us unlimited, happy queries - until you did the first change/update; THEN it would corrupt the SQL Server if you exited and tried to go back in.

    All this 'corruption' is happening from the 'app server' of the SBS pair, and I presume that the app server (also installed in tandem with the SQL server this week) has the latest MDACs, etc. And running it from a 'lowly XP workstation' is (obviously) no better. ANY ideas???

    -Henry

    Read the article

  • Nginx + php-fpm - recv() error

    - by Ilya Biryukov
    I get the following error in the nginx log:

        [error] 17734#0: *6643 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: [cut], server: [cut], request: "GET /venues HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "[cut]"

    I have a dedicated box with 8 GB RAM and a quad-core chip. A good server. Nginx, php-fpm & MySQL all run their latest versions under Ubuntu 10.04.

    I only get this when I stress test the server with siege. If I increase the number of concurrent connections to 100, I can get up to 20% of all requests to fail. Furthermore, I don't get this on pages that have no MySQL queries, and see only a few failures on pages with a moderate number of queries. But I'm not sure if that's got anything to do with it. I have a feeling this is something to do with PHP, but I can't figure it out. Any suggestions of where to even start looking?

    Update: and the PHP error log is silent. No record of anything going wrong.

    Read the article

  • mysql thread count

    - by Ryan M.
    We have a web application that uses Apache and MySQL. Generally (according to Munin) our MySQL thread count sits between 2 and 4 at all times. The other day, our server almost came to a halt: HTTP requests were slow or wouldn't go through at all, SSH would work but would take 30+ seconds to register keystrokes, etc. So we pulled up Munin, and the only thing out of normal boundaries was the MySQL thread count. CPU usage was under 1%, load was under 1.0, and there was plenty of available RAM. As mentioned before, the thread count floats around 2 to 4; at the time of our slowdown it had spiked to 14.

    So I started poking around the Internet, and I see that in most cases you'll start to see a higher thread count when you start running into slow queries. If I understand it correctly, a request comes in that takes a while to process; in the meantime other requests are coming in, so a new thread is created to work on each new request (yes?). But at the time of the slowdown, we had 0 slow queries.

    My question is: what else can cause MySQL to create additional threads? And would this sudden spike in threads possibly cause the server to slow down? To fix the issue, we restarted Apache and everything went back to its beautiful, normal self. Considering the server vitals (CPU, RAM, network, etc.) were all ideal, and the thread count was the only thing out of place, this seems like the most logical thing to pursue as the possible cause. If it matters, we're on MySQL 5.1.40. The OS is FreeBSD 7.2, and the server in question is inside a jail.

    Read the article
