Search Results

Search found 38640 results on 1546 pages for 'full table scan'.

Page 311/1546

  • Corrupt mysql system tables

    - by psynnott
    I am having issues with the columns_priv table in the mysql system database. I cannot add new users currently. I have tried repairing it using mysqlcheck --auto-repair --all-databases --password but I get the following output:

        mysql.columns_priv
        Error : Incorrect file format 'columns_priv'
        error : Corrupt

    Is there any other way to repair this table, or how do I go about replacing it with a blank table? What would I lose by doing that? Thank you

    Edit (additional info): mysqld is currently using 100% CPU constantly. Looking at show processlist, I get:

        mysql> show processlist;
        +-----+------------------+-----------+-------+---------+------+----------------+-----------------------------------------------------------------------------------------------+
        | Id  | User             | Host      | db    | Command | Time | State          | Info                                                                                          |
        +-----+------------------+-----------+-------+---------+------+----------------+-----------------------------------------------------------------------------------------------+
        |   5 | debian-sys-maint | localhost | mysql | Query   | 1589 | Opening tables | ALTER TABLE tables_priv MODIFY Column_priv set('Select','Insert','Update','References') COLL |
        | 752 | root             | localhost | NULL  | Query   |    0 | NULL           | show processlist                                                                              |
        +-----+------------------+-----------+-------+---------+------+----------------+-----------------------------------------------------------------------------------------------+
        2 rows in set (0.00 sec)
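
    For reference, a minimal sketch of a direct repair attempt from the mysql client, assuming the system table is MyISAM (the default for mysql.* tables in 5.x) and that the data directory has been backed up first; USE_FRM rebuilds the table from its .frm definition and can discard rows:

        -- Sketch only: back up the data directory before trying this.
        CHECK TABLE mysql.columns_priv;            -- confirm the reported corruption
        REPAIR TABLE mysql.columns_priv USE_FRM;   -- rebuild from the .frm definition (may lose rows)
        FLUSH PRIVILEGES;                          -- reload the grant tables afterwards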

    Read the article

  • route http and ssh traffic normally, everything else via vpn tunnel

    - by Normadize
    I've read quite a bit and am close, I feel, and I'm pulling my hair out ... please help! I have an OpenVPN client whose server sets local routes and also changes the default gw (I know I can prevent that with --route-nopull). I'd like to have all outgoing http and ssh traffic go via the local gw, and everything else via the vpn. Local IP is 192.168.1.6/24, gw 192.168.1.1. OpenVPN local IP is 10.102.1.6/32, gw 192.168.1.5. The OpenVPN server is at {OPENVPN_SERVER_IP}. Here's the route table after the openvpn connection:

        # ip route show table main
        0.0.0.0/1 via 10.102.1.5 dev tun0
        default via 192.168.1.1 dev eth0 proto static
        10.102.1.1 via 10.102.1.5 dev tun0
        10.102.1.5 dev tun0 proto kernel scope link src 10.102.1.6
        {OPENVPN_SERVER_IP} via 192.168.1.1 dev eth0
        128.0.0.0/1 via 10.102.1.5 dev tun0
        169.254.0.0/16 dev eth0 scope link metric 1000
        192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.6 metric 1

    This makes all packets go via the VPN tunnel except those destined for 192.168.1.0/24. Doing wget -qO- http://echoip.org shows the vpn server's address, as expected; the packets have 10.102.1.6 as source address (the vpn local ip) and are routed via tun0, as reported by tcpdump -i tun0 (tcpdump -i eth0 sees none of this traffic). What I tried was:

    - create a 2nd routing table holding the 192.168.1.6/24 routing info (copied from the main table above)
    - add an iptables -t mangle -I PREROUTING rule to mark packets destined for port 80
    - add an ip rule to match on the mangled packets and point them to the 2nd routing table
    - add an ip rule for "to 192.168.1.6" and "from 192.168.1.6" pointing to the 2nd routing table (though this is superfluous)
    - changed the ipv4 source validation to none with net.ipv4.conf.tun0.rp_filter=0 and net.ipv4.conf.eth0.rp_filter=0

    I also tried an iptables mangle OUTPUT rule and an iptables nat PREROUTING rule. It still fails and I'm not sure what I'm missing: with the mangle PREROUTING rule the packet still goes via the vpn; with the mangle OUTPUT rule the packet times out. Is it not the case that, to achieve what I want, when doing wget http://echoip.org I should change the packet's source address to 192.168.1.6 before routing it off? But if I do that, wouldn't the response from the http server be routed back to 192.168.1.6 and wget not see it, as it is still bound to tun0 (the vpn interface)? Can a kind soul please help? What commands would you execute after openvpn connects to achieve what I want? Looking forward to hair regrowth ...

    Read the article

  • How to get data from a bluetooth device that is not visible?

    - by jonobacon
    I just bought a Fitbit One which includes Bluetooth 4.0 to sync with mobile devices. Currently libfitbit does not include support for bluetooth syncing, so I would like to see how much data I can get out of the device that I can pass onto the libfitbit devs so that they can explore bluetooth support. I ran: hcitool scan which unfortunately did not return any devices. I also used blueman to scan for devices and nothing was found either. Therefore I am assuming that the bluetooth radio in the device is not visible by default. Can anyone recommend any ways to get data out of the device that could be helpful?

    Read the article

  • New Success Story: McGrath RentCorp Improves Business Reporting and Analytics Capabilities with Cloud-based Business Intelligence Solution

    - by LanaProut
    McGrath RentCorp worked with Jade Global, an Oracle Platinum Partner, to scope, design, and execute the deployment, using its Oracle Accelerate solution to jumpstart the process and accelerate the time to value. Click here to view the full story.

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and think I understand the basic concepts of good schema design pretty well. I recently was tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct - "WTF??!?" - is warranted, or is this guy such a genius that he's operating out of my realm? The DB in question is an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users and information on the request being made. I would design this like so:

        User table
            UserID (primary key, indexed, no dupes)
            FirstName
            LastName
            Department

        Request table
            RequestID (primary key, indexed, no dupes)
            <...> various data fields containing request details
            UserID -- foreign key associated with User table

    Simple, right? The consultant designed it like this (with sample data):

        UsersTable
            UserID  FirstName  LastName
            234     John       Doe
            516     Jane       Doe
            123     Foo        Bar

        DepartmentsTable
            DepartmentID  Name
            1             Sales
            2             HR
            3             IT

        UserDepartmentTable
            UserDepartmentID  UserID  Department
            1                 234     2
            2                 516     2
            3                 123     1

        RequestTable
            RequestID  UserID  <...>
            1          516     blah
            2          516     blah
            3          234     blah

    The entire database is constructed like this, with every piece of data encapsulated in its own table, with numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the 'speed of integer lookups'. He also has a large number of stored procedures to cross-reference all of these tables. Is this valid design for a small to mid-sized SQL DB? Thanks for comments/answers...
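
    For concreteness, a sketch of the simpler two-table design described above as DDL; the column types and lengths are assumptions, since the post doesn't specify any:

        -- Sketch only: types and lengths are guesses for illustration.
        CREATE TABLE User (
            UserID     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            FirstName  VARCHAR(50) NOT NULL,
            LastName   VARCHAR(50) NOT NULL,
            Department VARCHAR(50) NOT NULL
        );

        CREATE TABLE Request (
            RequestID INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            -- <...> various data fields containing request details
            UserID    INT UNSIGNED NOT NULL,
            FOREIGN KEY (UserID) REFERENCES User (UserID)
        );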

    Read the article

  • How to make Windows 7 use the internet connection that I specify

    - by user138957
    I have a LAN adapter and a USB wireless internet connection. When both are connected, Windows 7 always uses the USB. I tried changing the metric values but no luck. Let me explain the steps I took. Currently automatic metric is set on all adapters.

    LAN connected: ipconfig shows that it is connected to the correct ip/dns/gateway etc. The IPv4 route table shows metric 24. Then connected USB: ipconfig shows USB connectivity, then LAN, in that order. Internet is now through USB. The IPv4 route table shows metric 4249 for LAN and 41 for USB. The gateway for USB shows "on-link". netstat -rn shows the USB device on top.

    Changed the LAN metric to 5, and now the route table shows LAN as 9 (not sure why it added 4) and USB as 41. netstat shows LAN then USB. ipconfig shows LAN then USB. But the connection is still through USB. How do I know? Task Manager shows utilization only through USB, and the speed is around 1 Mbps rather than the LAN's 10 Mbps.

    How can I get Windows 7 to use the LAN while the USB is connected? I am just trying to use the USB as a backup in case I lose the LAN connection. Please help!! I thought I would set the USB metric manually to, say, 10, but it says I have to reconnect for it to be effective. Currently USB still shows below LAN and still has 9 and 41 in the table. Disconnected USB: the table shows the LAN metric as 24 (not sure why it got changed from 9 and the setting reverted to automatic). Reconnected USB: the setting still shows 10 and the route table shows 11 for USB, while LAN shows 4249 (the setting shows 4245, 4 less). For some reason reconnecting the USB resets the LAN setting. Thanks

    Read the article

  • Canon MP280 All in One prints but scanner not recognized

    - by Chris
    I recently installed Ubuntu 12.04 LTS after not using any Linux distro for several years. My Canon MP280 worked perfectly as far as printing goes, and I was even able to set it up on a Samba share and print from a Windows 7 machine. However, when I open Simple Scan it does not detect a scanner. I did find some instructions online, but the scanner is still not recognized in Simple Scan even after installing a couple of .deb packages from the Canon site. I also couldn't seem to find the ScanGear package after installing those .deb packages, but I haven't had a chance to delve fully into it. Thanks for your time!

    Read the article

  • external hard drive not detected after ubuntu crash [restarting machine]

    - by Netmoon
    Today I tried to watch a movie (with VLC media player) from my external hard drive (a 500 GB Expansion Portable), and it played fine; then I paused the movie, played it again, and my Ubuntu 12.04 crashed. I had to restart the machine, so I did, but since then Ubuntu can't recognize this hard drive. I changed the USB cable but it made no difference. This is my dmesg command result:

        [ 191.281630] usb 2-1.3: new full-speed USB device number 9 using ehci_hcd
        [ 191.353527] usb 2-1.3: device descriptor read/64, error -32
        [ 191.529115] usb 2-1.3: device descriptor read/64, error -32
        [ 191.704669] usb 2-1.3: new full-speed USB device number 10 using ehci_hcd
        [ 191.776524] usb 2-1.3: device descriptor read/64, error -32
        [ 191.952202] usb 2-1.3: device descriptor read/64, error -32
        [ 192.127772] usb 2-1.3: new full-speed USB device number 11 using ehci_hcd
        [ 192.534742] usb 2-1.3: device not accepting address 11, error -32
        [ 192.606749] usb 2-1.3: new full-speed USB device number 12 using ehci_hcd
        [ 193.013696] usb 2-1.3: device not accepting address 12, error -32
        [ 193.013906] hub 2-1:1.0: unable to enumerate USB device on port 3

    Note: I am able to hear the sound of the external drive's lens. When I attach the hard drive to a USB port, the status light goes on, but it is dim and I think its power is low. I also tried to mount it in Microsoft Windows 7, but nothing happened.

    Read the article

  • MySQL Unions/Subselects not utilizing keys from associated tables

    - by Brett
    I've noticed by doing EXPLAINs that when a MySQL union between two tables is used, MySQL creates a temporary table, but the temp table does not use keys, so queries are slowed considerably. Here is an example:

        SELECT * FROM (
            SELECT `part_number`, `part_manufacturer_clean`, `part_number_clean`,
                   `part_heci`, `part_manufacturer`, `part_description`
            FROM `new_products` AS `a`
            UNION
            SELECT `part` AS `part_number`, `manulower` AS `part_manufacturer_clean`,
                   `partdeluxe` AS `part_number_clean`, `heci` AS `part_heci`,
                   `manu` AS `part_manufacturer`, `description` AS `part_description`
            FROM `warehouse` AS `b`
        ) AS `c`
        WHERE `part_manufacturer_clean` = 'adc'

    EXPLAIN yields this:

        id  select_type   table        type  possible_keys  key     key_len  ref     rows    Extra
        1   PRIMARY       <derived2>   ALL   (NULL)         (NULL)  (NULL)   (NULL)  17206   Using where
        2   DERIVED       a            ALL   (NULL)         (NULL)  (NULL)   (NULL)  17743
        3   UNION         b            ALL   (NULL)         (NULL)  (NULL)   (NULL)  5757    (NULL)
            UNION RESULT  <union2,3>   ALL   (NULL)         (NULL)  (NULL)   (NULL)  (NULL)

    In this case, part_manufacturer_clean and manulower are keys in both tables. When I don't use the subselects and union, and just use one table, everything works fine. I'm not sure if the issue is with the union or with the subselects. Is there any way to union two tables and still use keys/indexes for performance?
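
    One commonly suggested rework, sketched here rather than taken from the post, is to push the filter into each branch of the union so MySQL can use the per-table indexes before any temporary table is built (UNION ALL is assumed to be acceptable; switch back to UNION if duplicates must be removed):

        -- Sketch: filter each table first so the indexes on
        -- part_manufacturer_clean and manulower can be used.
        SELECT `part_number`, `part_manufacturer_clean`, `part_number_clean`,
               `part_heci`, `part_manufacturer`, `part_description`
        FROM `new_products`
        WHERE `part_manufacturer_clean` = 'adc'
        UNION ALL
        SELECT `part` AS `part_number`, `manulower` AS `part_manufacturer_clean`,
               `partdeluxe` AS `part_number_clean`, `heci` AS `part_heci`,
               `manu` AS `part_manufacturer`, `description` AS `part_description`
        FROM `warehouse`
        WHERE `manulower` = 'adc';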

    Read the article

  • Program to swap files between drives?

    - by josi
    Has anyone built a program/script to transfer files between two hard drives when both are nearly full? So it would copy one file over, then the other drive copies a file back, and then each deletes the files that were copied. Kind of annoying: I have a 6 TB RAID at about 4 TB full, and a 4.5 TB drive that is basically full, so I can't really swap their contents easily without doing many copies and deletes of files. Anyone know a way to make them just swap? lol

    Read the article

  • Do cross reference database tables have a place in domain driven design?

    - by Mike Cellini
    First some background. Let's say we have a system where a customer is placing an order in a web interface. The items that customer is ordering can be priced in various ways, sometimes including the cost of delivery and sometimes not at all. That pricing effectively depends on a variety of factors, including the vendor's own pricing model, that vendor's individual contracts with customers, as well as that vendor's contracts with its own suppliers. Let's assume that once a customer places an order for a particular item and chooses a contract, if any, the method of delivery can be determined by variables on those contracts. Those delivery methods also live in their own table in the database and have various properties consumed downstream. It makes sense that a cross reference or lookup table would store that information. That table would be loaded into the domain and could then be used to apply the appropriate delivery method while processing the order. Does this make sense in the context of domain driven design? Or is my thinking too relational? Is this logic that should be built into its own class/method (I mean beyond applying the cross reference table data)?

    Read the article

  • Cloud Strategy for Partners Announced at OOW

    - by Cinzia Mascanzoni
    Oracle made a significant announcement about its Cloud Strategy for partners: Oracle has unveiled a comprehensive new set of Oracle Cloud partner programs and enablement resources that help partners to speed time to market with new cloud-based services and solutions and deliver increased value to customers. New Oracle PartnerNetwork Cloud offerings include an Oracle Cloud Referral Program, Oracle Cloud Specialization featuring RapidStart and Oracle Cloud Builder Specializations, Oracle Cloud Resale Program and Oracle Platform Services for Independent Software Vendors (ISVs).

    Read the article

  • How do you handle the task of changing the schema of a production MySQL database?

    - by Continuation
    One of the biggest complaints I have heard about MySQL is that it locks up a table if you try to change its schema, for example by adding a column or adding an index. By "locking up the table", does that mean I can neither read nor write to the table? Sometimes for hours? That seems like a pretty severe limitation. I was going to use MySQL for my new project, but this gives me pause. Is there a workaround for this? How do you handle the task of changing the schema of your production MySQL database? By the way, someone told me PostgreSQL doesn't have this problem. Is that true - can I both read and write to a PostgreSQL table while changing its schema? Is there any performance penalty incurred? Would love to hear your experiences.
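
    For context, the widely used manual workaround (essentially what tools like pt-online-schema-change automate) is a copy-and-swap rebuild; the sketch below uses made-up table and column names and glosses over capturing writes that happen during the copy:

        -- Sketch only: names are hypothetical, and a real migration must also
        -- capture changes made to the old table while the copy runs.
        CREATE TABLE orders_new LIKE orders;
        ALTER TABLE orders_new ADD COLUMN shipped_at DATETIME NULL;

        -- Copy existing rows (in practice, in batches).
        INSERT INTO orders_new SELECT *, NULL FROM orders;

        -- Atomically swap the tables, then drop the old copy.
        RENAME TABLE orders TO orders_old, orders_new TO orders;
        DROP TABLE orders_old;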

    Read the article

  • Strategy for Incremental Datasource fetchings in Excel

    - by user1352530
    I am in a scenario with a table that is refreshed by a third-party app every week. I need to keep accumulating all the data in Excel, using an ODBC connection to the database. I am wondering:

    Approach 1: Is there a way to force Excel to append the results of every update (the update would be triggered according to a parameter that indicates the week)? I tried to define the table the connection loads into using a dynamic reference, but once it is anchored the first time, the table position is never redefined.

    Approach 2: Use an ETL to accumulate all weekly results into a staging table and then connect Excel to it in real time. But I would need a mechanism for caching old data, as I cannot let the time Excel takes to open grow without bound. Imagine after 10 years: Excel would need to fetch 10 years of data at opening before showing anything. Is there a way to store already fetched data and increment it in real time (when the workbook is opened) by selecting only the new data (with a query/filter or something)? Thanks

    EDIT: Maybe it's better to ask it this way: what is the optimal strategy for a table that keeps growing and needs to be read in real time by Excel? I just don't want to fetch absolutely all the data after some months...
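
    A minimal sketch of the staging-table idea in approach 2, assuming a MySQL-style backend and made-up table names, where each weekly refresh is appended with a load date so the client can query only the slices it needs:

        -- Sketch: accumulate each weekly refresh instead of overwriting it.
        CREATE TABLE weekly_history LIKE weekly_source;
        ALTER TABLE weekly_history ADD COLUMN week_loaded DATE NOT NULL;

        -- Run once per weekly refresh.
        INSERT INTO weekly_history
        SELECT s.*, CURDATE()
        FROM weekly_source AS s;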

    Read the article

  • Website (X)HTML Code Change Detection [closed]

    - by 0pt1m1z3
    I am looking for an enterprise-grade service or a tool that can be used to scan / fingerprint websites and notify when major XHTML code changes are detected. The tool should be able to continuously scan thousands of websites and determine the percentage of HTML code that has been modified since the last run, and then either save the data where it can be easily accessed or send periodic notifications. I know of services like ChangeDetect.com, but they don't do markup-only changes and instead focus on everything, including content. We don't really care about the page content, because a lot of the sites we need to cover are updated frequently with content.

    Read the article

  • REST API wrapper - class design for 'lite' object responses

    - by sasfrog
    I am writing a class library to serve as a managed .NET wrapper over a REST API. I'm very new to OOP, and this task is an ideal opportunity for me to learn some OOP concepts in a real-life situation that makes sense to me. Some of the key resources/objects that the API returns are returned with different levels of detail depending on whether the request is for a single instance, a list, or part of a "search all resources" response. This is obviously a good design for the REST API itself, so that full objects aren't returned (thus increasing the size of the response and therefore the time taken to respond) unless they're needed. So, to be clear:

    - .../car/1234.json returns the full Car object for 1234, with all its properties like colour, make, model, year, engine_size, etc. Let's call this full.
    - .../cars.json returns a list of Car objects, but only with a subset of the properties returned by .../car/1234.json. Let's call this lite.
    - ...search.json returns, among other things, a list of Car objects, but with minimal properties (only ID, make and model). Let's call this lite-lite.

    I want to know what the pros and cons of each of the following possible designs are, and whether there is a better design that I haven't covered:

    - Create a Car class that models the lite-lite properties, and then have each of the more detailed responses inherit and extend this class.
    - Create separate CarFull, CarLite and CarLiteLite classes corresponding to each of the responses.
    - Create a single Car class that contains (nullable?) properties for the full response, and create constructors for each of the responses which populate it to the extent possible (and maybe include a property that returns the response type from which the instance was created).

    I expect among other things there will be use cases for consumers of the wrapper where they will want to iterate through lists of Cars, regardless of which response type they were created from, such that the three response types can contribute to the same list. Happy to be pointed to good resources on this sort of thing, and/or even told the name of the concept I'm describing so I can better target my research.

    Read the article

  • Bacula stops writing to disk volume after 2GB

    - by m.list
    Bacula version: 5.2.5. I have configured Bacula to write volumes to disk; however, Bacula stops writing to the volume as soon as it reaches 2 GB. The file system is not an issue, as I have stored files larger than 2 GB on it.

        06-Dec 17:22 backup-sd JobId 8421: End of Volume "Full-Monthly-0005" at 0:2147475577 on device "FileStorage" (/nfs/backup-pool). Write of 64512 bytes got 8069.
        06-Dec 17:22 backup-sd JobId 8421: End of medium on Volume "Full-Monthly-0005" Bytes=2,147,475,578 Blocks=33,288 at 06-Dec-2012 17:22.

        backup1@backup:/nfs/backup-pool$ ls -alh Full-Monthly-0005
        -rw-r----- 1 bacula tape 2.0G Dec 3 16:14 Full-Monthly-0005

    bacula-dir.conf:

        Pool {
          Name = Full-Monthly
          Pool Type = Backup
          Recycle = yes
          Volume Retention = 5 months
          Volume Use Duration = 1 day
          Maximum Volumes = 5
          Maximum Volume Bytes = 12gb
        }

    bacula-sd.conf:

        Device {
          Name = FileStorage
          Media Type = File
          Archive Device = /nfs/backup-pool
          LabelMedia = yes              # lets Bacula label unlabeled media
          Random Access = Yes
          RemovableMedia = no
          AlwaysOpen = no
          Label media = yes
          Maximum Volume Size = 12gb
        }

    In my original configuration, Maximum Volume Bytes and Maximum Volume Size were not set at all, so they should have defaulted to no maximum, but that did not work either.

    Read the article

  • Rebuild the index with REINDEX [closed]

    - by kuttyarif
    WARNING: index "pk_alarmid" contains 1363436 row versions, but table contains 26 row versions
        HINT: Rebuild the index with REINDEX.
        WARNING: index "alarm_uei_idx" contains 1363434 row versions, but table contains 26 row versions
        HINT: Rebuild the index with REINDEX.
        WARNING: index "alarm_nodeid_idx" contains 1363434 row versions, but table contains 26 row versions
        HINT: Rebuild the index with REINDEX.
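
    These are PostgreSQL warnings (typically emitted by VACUUM), and the hint is literal; a sketch of the rebuild it suggests, where the owning table is assumed from the index names to be an alarms table:

        -- Sketch: rebuild the bloated indexes named in the warnings,
        -- or every index on the (assumed) alarms table at once.
        REINDEX INDEX pk_alarmid;
        REINDEX INDEX alarm_uei_idx;
        REINDEX INDEX alarm_nodeid_idx;
        -- REINDEX TABLE alarms;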

    Read the article

  • InnoDb Overhead?

    - by Rimary
    I just converted several large tables from MyISAM to InnoDB. When I view the tables in phpMyAdmin, they show a significant amount of overhead (one table has 6.8 GB). Optimizing the tables (which isn't a supported command on InnoDB) has no effect like it does on MyISAM. Is this a result of InnoDB having the ever-growing data file that never returns space even after deletes? If that's the case, I've never seen overhead like this before from other InnoDB tables. Is there a way to clean this up?

    Edit: Here are the things I've tried (with no success): Optimize Table; reorder the table by primary key; defragment the table.

    Read the article

  • You cannot do cross joins in SQL Azure but there is a way around that....

    - by SeanBarlow
    So I was asked today how to do cross joins in SQL Azure using Linq. Well the simple answer is you cant do it. It is not supported but there are ways around that. The solution is actually very simple and easy to implement. So here is what I did and how I did it. I created two SQL Azure Databases. The first Database is called AccountDb and has a single table named Account, which has an ID, CompanyId and Name in it. The second database I called CompanyDb and it contains two tables. The first table I named Company and the second I named Address. The Company Table has an Id and Name column. The Address Table has an Id and CompanyId columns. Since we cannot do cross joins in Azure we have to have one of the models preloaded with data. I simply put the Accounts into a List of accounts and use that in my join.   var accounts = new AccountsModelContainer().Accounts.ToList(); var companies = new CompanyModelContainer().Companies; var query = from account in accounts             join company in                 (                       from c in companies                      select c                  ) on account.CompanyId equals company.Id             select new AccountView() {                                               AccountName = account.Name, CompanyName = company.Name,                                 Addresses = company.Addresses                         }; return query.ToList();   So as long as you have your data loaded from one of the contexts you can still execute your queries and get the data back that you want.

    Read the article

  • How compilers know about other classes and their properties?

    - by OnResolve
    I'm writing my first programming language that is object oriented, and so far so good with creating a single 'class'. But let's say I want to have two classes, say ClassA and ClassB. Provided these two have nothing to do with each other, then all is good. However, say ClassA creates a ClassB - this poses two related questions: how would the compiler know, when compiling ClassA, that ClassB even exists, and, if it does, how does it know its properties? My thoughts thus far had been: instead of compiling each "file" (not really a file, per se, but a "class") one at a time (i.e. scan, parse and generate code), do I need to scan and parse each first, then generate code for all?

    Read the article

  • Does "I securely erased my drive" really work with Truecrypt partitions?

    - by TheLQ
    When you look at Truecrypt's Plausible Deniability page, it says that one of the reasons a partition might hold solely random data is that you securely erased your drive. But what about the partition table with full disk encryption? How can you explain why the partition table says there's a partition of unknown type (with my limited knowledge of partition tables, I think they store each partition's filesystem type) that contains solely random data? It seems that if you're going to securely erase the drive, you would destroy everything, including the partition table. And even if you just wiped the partition, the partition table would still say that the partition was originally NTFS, which it isn't anymore. Does the "I securely erased my drive" excuse still work here? (Note: I know that there are hidden Truecrypt volumes, but I'm avoiding them due to the high risk of data loss.)

    Read the article

  • MySQL reclaim index space after large delete?

    - by cdunn
    After performing a large delete in MySQL, I understand you need to run a null ALTER to reclaim disk space. Is this also true for reclaiming index space? We have tables using 10 GB of index space; we have deleted/archived large chunks of this data and are unsure whether we need to rebuild the table in order to decrease the size of the index. Can anyone offer any advice? We are trying to avoid rebuilding the table, since it would take quite a while and lock the table. Thanks!
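
    For reference, a sketch of the null ALTER being asked about, with a hypothetical table name and assuming InnoDB; a rebuild like this recreates the secondary indexes along with the data, and the resulting sizes can be checked in information_schema:

        -- Sketch only: "products" is a made-up table name.
        ALTER TABLE products ENGINE=InnoDB;

        -- Data and index sizes as reported after the rebuild.
        SELECT table_name,
               data_length  / 1024 / 1024 / 1024 AS data_gb,
               index_length / 1024 / 1024 / 1024 AS index_gb
        FROM information_schema.tables
        WHERE table_schema = DATABASE()
          AND table_name = 'products';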

    Read the article

  • Mysql InnoDB and quickly applying large updates

    - by Tim
    Basically my problem is that I have a large table of about 17,000,000 products that I need to apply a bunch of updates to really quickly. The table has 30 columns, with the id set as int(10) AUTO_INCREMENT. I have another table in which all of the updates for this table are stored; these updates have to be pre-calculated, as they take a couple of days to calculate. That table is in the format of [ product_id int(10), update_value int(10) ].

    The strategy I'm taking to issue these 17 million updates quickly is to load all of the updates into memory in a Ruby script and group them in a hash of arrays, so that each update_value is a key and each array is a list of sorted product_ids:

        { 150 => [1,2,3,4,5,6], 160 => [7,8,9,10] }

    Updates are then issued in the format of:

        UPDATE product SET update_value = 150 WHERE product_id IN (1,2,3,4,5,6);
        UPDATE product SET update_value = 160 WHERE product_id IN (7,8,9,10);

    I'm pretty sure I'm doing this correctly, in the sense that issuing the updates on sorted batches of product_ids should be the optimal way to do it with MySQL / InnoDB. I'm hitting a weird issue, though: when I was testing with updating ~13 million records, this only took around 45 minutes. Now I'm testing with more data, ~17 million records, and the updates are taking closer to 120 minutes. I would have expected some sort of speed decrease here, but not to the degree that I'm seeing. Any advice on how I can speed this up, or what could be slowing me down with this larger record set? As far as server specs go, they're pretty good: heaps of memory / CPU, and the whole DB should fit into memory with plenty of room to grow.
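
    Since the pre-calculated values already live in a second table, one alternative worth noting (a sketch, not from the post; the updates table name is made up) is a single join-based UPDATE, optionally split into primary-key ranges to keep transactions small:

        -- Sketch: apply the pre-calculated values with a join instead of IN lists.
        -- "product_updates" stands in for the table holding (product_id, update_value).
        UPDATE product AS p
        JOIN product_updates AS u ON u.product_id = p.product_id
        SET p.update_value = u.update_value;

        -- Or batched by primary-key range:
        UPDATE product AS p
        JOIN product_updates AS u ON u.product_id = p.product_id
        SET p.update_value = u.update_value
        WHERE p.product_id BETWEEN 1 AND 1000000;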

    Read the article

  • How to create a link to a different part of a sheet ?

    - by ldigas
    Is there an Excel feature that enables you to create a link to a different part of a sheet, so you don't have to scroll down ... wherever, to get there? I have about 2000 tables in one sheet, and a "table of contents" listing all the tables. I'd like to create a link from the table of contents to the appropriate table (it's all within the same sheet). Is something like that possible?

    Read the article
