Search Results

Search found 9926 results on 398 pages for 'lookup tables'.

  • Contiguous Time Periods

    It is always better, and more efficient, to maintain referential integrity by using constraints rather than triggers. Sometimes it is not at all obvious how to do this, and the history table, and other temporal data tables, presented problems for checking data that were difficult to solve with constraints. Suddenly, Alex Kuznetsov came up with a good solution, and so now history tables can benefit from more effective integrity checking. Joe explains...
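
    The broad shape of a constraint-only pattern for contiguous periods is sketched below; this is a from-memory illustration of the general idea rather than the article's own code, and the table and column names are invented:

        CREATE TABLE dbo.EmploymentHistory (
            EmployeeID  int  NOT NULL,
            StartDate   date NOT NULL,
            EndDate     date NULL,      -- NULL = the current, still-open period
            PrevEndDate date NULL,      -- NULL only for an employee's first period
            CONSTRAINT PK_EmploymentHistory PRIMARY KEY (EmployeeID, StartDate),
            CONSTRAINT UNQ_EmploymentHistory_End UNIQUE (EmployeeID, EndDate),
            -- each period must begin exactly where the previous one ended...
            CONSTRAINT CHK_Contiguous CHECK (PrevEndDate IS NULL OR PrevEndDate = StartDate),
            -- ...and that previous period must actually exist
            CONSTRAINT FK_PreviousPeriod FOREIGN KEY (EmployeeID, PrevEndDate)
                REFERENCES dbo.EmploymentHistory (EmployeeID, EndDate)
        );

    With declarations like these, any insert or update that would leave a gap or an overlap fails a constraint check, so no trigger is needed to police the history table.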

    Read the article

  • Taking a Project's Development to the Next Level

    - by user1745022
    I have been looking for advice for a while on how to handle a project I am working on, but to no avail. I am pretty much on my fourth iteration of improving an "application" I am working on; the first two times were in Excel, the third time in Access, and now in Visual Studio. The field is manufacturing.

    The basic idea is that I am taking read-only data from a massive Sybase server, filtering it and creating much smaller tables in Access daily (using delete and append queries), and then doing a bunch of processing. More specifically, I use a series of queries to either combine data from multiple tables or group data in specific ways (aggregate functions), and then I place this data into a table (so I can sort and manipulate the data using DAO.Recordset and run multiple custom algorithms). This process is then repeated multiple times throughout the database until a set of relevant tables is created. Many times I will create a field in a query with a value such as 1.1, so that when I append it to a table I can store information from the algorithms in that field. So, as the process continues, the number of fields in the tables changes.

    The overall application consists of 4 "back-end" databases linked together on a shared drive, with various output (either front-end Access applications or Excel). Each back-end database is updated with fresh data daily; updating takes around 10 seconds for three of them and 2 minutes for the other one.

    So my question is: is this essentially how many data-driven applications that solve problems work?

    Project objectives: I want to (and am planning to) move to SQL Server soon. The front end will be a web application (I know basic web development and like the administration flexibility) and Visual Studio will be the IDE, with C#/.NET. Should these algorithms be run "inside the database", or as a series of C# functions on each server request? I know you're not supposed to store data in a database unless it is an actual data point, and in Access I have many columns that just hold calculations from algorithms in VBA.

    The truth is, I have seen multiple professional Access applications and have never seen one that has the complexity of mine or does even close to what mine does (for better or worse). But I know some professional software applications are 1000 times better than mine. So please, please, please give me a suggestion of some sort. I have been completely on my own and need some guidance on how to approach this project the right way.
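
    As a very rough illustration of the "inside the database" option, one of those Access aggregate-and-store steps could become a SQL Server view, so the calculated values are never stored as columns at all. This is only a sketch, and every table and column name in it is a made-up placeholder:

        CREATE VIEW dbo.vw_DailyMachineTotals
        AS
        SELECT MachineID,
               CAST(ReadingTime AS date) AS ReadingDate,
               SUM(UnitsProduced) AS TotalUnits,
               AVG(CycleSeconds)  AS AvgCycleSeconds
        FROM dbo.ProductionRaw            -- hypothetical staging table loaded from Sybase
        GROUP BY MachineID, CAST(ReadingTime AS date);

    Custom algorithms that a view cannot express could then live either in stored procedures or in C# code that reads from the view, which is essentially the trade-off the question is asking about.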

    Read the article

  • Visual View for Schema Based Editor

    - by Geertjan
    Starting from yesterday's blog entry, make the following change in the DataObject's constructor:

        registerEditor("text/x-sample+xml", true);

    I.e., the MultiDataObject.registerEditor method turns the editor into a multiview component. Now, again, within the DataObject, add the following, to register a source editor in the multiview component:

        @MultiViewElement.Registration(
                displayName = "#LBL_Sample_Source",
                mimeType = "text/x-sample+xml",
                persistenceType = TopComponent.PERSISTENCE_NEVER,
                preferredID = "ShipOrderSourceView",
                position = 1000)
        @NbBundle.Messages({
            "LBL_Sample_Source=Source"
        })
        public static MultiViewElement createEditor(Lookup lkp) {
            return new MultiViewEditorElement(lkp);
        }

    Result: (screenshot)

    Next, let's create a visual editor in the multiview component. This could be within the same module as the above or within a completely separate module. That makes it possible for external contributors to provide modules with new editors in an existing multiview component:

        @MultiViewElement.Registration(
                displayName = "#LBL_Sample_Visual",
                mimeType = "text/x-sample+xml",
                persistenceType = TopComponent.PERSISTENCE_NEVER,
                preferredID = "VisualEditorComponent",
                position = 500)
        @NbBundle.Messages({
            "LBL_Sample_Visual=Visual"
        })
        public class VisualEditorComponent extends JPanel implements MultiViewElement {

            public VisualEditorComponent() {
                initComponents();
            }

            @Override
            public String getName() {
                return "VisualEditorComponent";
            }

            @Override
            public JComponent getVisualRepresentation() {
                return this;
            }

            @Override
            public JComponent getToolbarRepresentation() {
                return new JToolBar();
            }

            @Override
            public Action[] getActions() {
                return new Action[0];
            }

            @Override
            public Lookup getLookup() {
                return Lookup.EMPTY;
            }

            @Override
            public void componentOpened() { }

            @Override
            public void componentClosed() { }

            @Override
            public void componentShowing() { }

            @Override
            public void componentHidden() { }

            @Override
            public void componentActivated() { }

            @Override
            public void componentDeactivated() { }

            @Override
            public UndoRedo getUndoRedo() {
                return UndoRedo.NONE;
            }

            @Override
            public void setMultiViewCallback(MultiViewElementCallback callback) { }

            @Override
            public CloseOperationState canCloseElement() {
                return CloseOperationState.STATE_OK;
            }
        }

    Result: (screenshot)

    Next, the DataObject is automatically available from the Lookup. Therefore, you can go back to your visual editor, add a LookupListener, listen for DataObjects, parse the underlying XML file, and display values in GUI components within the visual editor.

    Read the article

  • cannot find table or object

    - by jeni
    Hi all, I am running an ASP.NET/C# application with SQL Server 2005. I have some problems with database tables: I got inconsistency errors in some tables. I tried to run the DBCC command below to remove the inconsistent data:

        DBCC CHECKTABLE ('Customer', repair_allow_data_loss) WITH ALL_ERRORMSGS

    At first, running DBCC CHECKTABLE ('Customer') worked, but now it does not, and I get the error "Cannot find a table or object with the name 'Customer'. Check the system catalog." Are my commands wrong?
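
    One way to follow the error message's advice is to check the system catalog for the table and its schema before running DBCC again; a minimal sketch (the schema name Sales is only an example):

        -- Does a table named Customer exist, and in which schema?
        SELECT s.name AS schema_name, o.name AS object_name, o.type_desc
        FROM sys.objects o
        JOIN sys.schemas s ON s.schema_id = o.schema_id
        WHERE o.name = 'Customer';

        -- If it lives in a non-default schema, qualify it when running CHECKTABLE:
        DBCC CHECKTABLE ('Sales.Customer') WITH ALL_ERRORMSGS;

    If the table no longer appears in sys.objects at all, the problem is more than a missing schema qualifier and the table itself needs to be investigated.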

    Read the article

  • Designing Efficient SQL: A Visual Approach

    Sometimes it is a great idea to push away the keyboard when tackling the problems of an ill-performing, complex query, and take up pencil and paper instead. By drawing a diagram to show all of the tables involved, the joins, the volume of data involved, and the indexes, you'll see more easily the relative efficiency of the possible paths that your query could take through the tables.
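
    To put the "volume of data" figures on such a diagram, something like this sketch pulls approximate row counts for the tables in the query being tuned (SQL Server catalog views; the names in the IN list are placeholders):

        SELECT t.name AS table_name, SUM(p.rows) AS approx_rows
        FROM sys.tables t
        JOIN sys.partitions p
          ON p.object_id = t.object_id
         AND p.index_id IN (0, 1)        -- heap or clustered index only
        WHERE t.name IN ('Orders', 'OrderLines', 'Customers')
        GROUP BY t.name
        ORDER BY approx_rows DESC;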

    Read the article

  • Ubuntu 12.04 - PPTP VPN is the only Internet Access

    - by user212553
    I know this has been covered. I've read dozens of posts but still have questions. I have a work server whose traffic should never leave my house without encryption. The VPN is PPTP. Currently I have a cron job that checks the status of the ppp0 adapter each minute. If the connection drops, which it does fairly often, it shuts key components down. It's fairly easy to restart PPTP with "nmcli con up id 'myVPNServer'", but there's no assurance it will reconnect, and I need a better way to stop traffic (other than killing apps) when ppp0 is down.

    The two options I've seen discussed are the firewall (UFW, Firestarter, iptables) or the route tables. I could be easily swayed to consider the firewall option, but I focused on the route tables since no new function needs to be started. My questions involve the way the route tables change, and then specifics on rules.

    When I start the PPTP VPN, the route tables change. That suggests that if the VPN drops, the table will change back, defeating my stated intent of preventing external traffic. How can I make "sticky" changes to the route table that will persist even if the VPN connection drops? Perhaps the check boxes "Ignore automatically obtained routes" or "Use this connection only for resources on its network" (which are part of the VPN configuration options)? It would seem that, if I can force the active VPN route table to stay in effect even when the VPN drops, this will effectively kill any external traffic should the VPN drop. This will give me the latitude to run a routine to restart the VPN from the command line (assuming the route table rules don't prevent me re-establishing the connection).

    My route table, with the VPN active, is below (any comments on what 10.10.1.1 is?):

        $ ip route list
        default dev ppp0  proto static
        10.10.1.1 dev ppp0  proto kernel  scope link  src 10.10.1.11
        VPN_Server_IP_Address via 192.168.1.1 dev eth0  proto static
        VPN_Server_IP_Address via 192.168.1.1 dev eth0  src 192.168.1.60
        169.254.0.0/16 dev eth0  scope link  metric 1000
        192.168.1.0/24 dev eth0  proto kernel  scope link  src 192.168.1.60  metric 1

    Read the article

  • No Significant Fragmentation? Look Closer…

    If you rely on 'best-practice' percentage-based thresholds when creating an index maintenance plan for a SQL Server that checks the fragmentation in your pages, you may miss occasional 'edge' conditions on larger tables that will cause severe degradation in performance. It is worth being aware of the patterns of data access in particular tables when judging the best threshold figure to use.
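
    For reference, the percentage such maintenance plans usually test comes from the index physical stats DMV; a sketch, with the commonly quoted 5% and 1000-page cut-offs shown purely as examples rather than recommendations:

        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name AS index_name,
               ips.avg_fragmentation_in_percent,
               ips.page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE ips.avg_fragmentation_in_percent > 5
          AND ips.page_count > 1000;

    The article's point is that a single threshold like this can hide problem patterns on large tables, so the numbers deserve scrutiny per table rather than blind reuse.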

    Read the article

  • A Look At Style Sheet Languages

    Style sheet languages are computer languages which were introduced when the market demanded a new way of designing websites other than the use of tables and spacers. In the past, tables... [Author: Margarette Mcbride - Web Design and Development - May 17, 2010]

    Read the article

  • Stairway to T-SQL DML Level 11: How to Delete Rows from a Table

    You may have data in a database that was inserted into a table by mistake, or you may have data in your tables that is no longer of value. In either case, when you have unwanted data in a table you need a way to remove it. The DELETE statement can be used to eliminate data in a table that is no longer needed. In this article you will see the different ways to use the DELETE statement to identify and remove unwanted data from your SQL Server tables.
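
    As a small taste of the statements involved, here is a sketch against a hypothetical dbo.Orders table (not an example taken from the article itself):

        -- Remove rows that match a condition
        DELETE FROM dbo.Orders
        WHERE OrderDate < '2010-01-01';

        -- Delete in batches to keep the transaction log manageable on big tables
        WHILE 1 = 1
        BEGIN
            DELETE TOP (5000) FROM dbo.Orders
            WHERE OrderStatus = 'Cancelled';

            IF @@ROWCOUNT = 0 BREAK;
        END;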

    Read the article

  • JEditorPane Code Completion (Part 3)

    - by Geertjan
    The final step is to put an object into the Lookup, via key listening, in each of the JEditorPanes: e.g., a "City" object for the CityEditorPane and a "Country" object for the CountryEditorPane. Then, within the CompletionProviders, only add items to the CompletionResultSet if the object of interest is in the Lookup. The result is that you can then have different code completions in different JEditorPanes, as shown below (screenshot). I've also included the Tools | Options | Editor | Code Completion tab, so that the code completion can be customized. The full source code for the example is here: java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.2/misc/CustomerApp

    Read the article

  • Things to Look For in Finding the Best SEO Company

    Preparing to employ the best SEO company? Due to the impact of search engine optimisation, or SEO, on search engine rankings, finding the best SEO company for your business is more crucial than ever. In a way, it's like finding the right shoe that fits: easy to wear, yet resilient and long-lasting. When SEO services are properly handled, websites and blogs rank very high on major search engines like Yahoo, Google, and Bing through on-page and off-page SEO techniques, and the best SEO company can assist you in this area.

    Read the article

  • BeansBinding Across Modules in a NetBeans Platform Application

    - by Geertjan
    Here are two TopComponents, each in a different NetBeans module. Let's use BeansBinding to synchronize the JTextField in TC2TopComponent with the data published by TC1TopComponent and received in TC2TopComponent by listening to the Lookup.

    The key to getting to the solution is to have the following in TC2TopComponent, which implements LookupListener:

        private BindingGroup bindingGroup = null;
        private AutoBinding binding = null;

        @Override
        public void resultChanged(LookupEvent le) {
            if (bindingGroup != null && binding != null) {
                bindingGroup.getBinding("customerNameBinding").unbind();
            }
            if (!result.allInstances().isEmpty()) {
                Customer c = result.allInstances().iterator().next();
                // put the customer into the lookup of this topcomponent,
                // so that it will remain in the lookup when focus changes
                // to this topcomponent:
                ic.set(Collections.singleton(c), null);
                bindingGroup = new BindingGroup();
                binding = Bindings.createAutoBinding(
                        // a two-way binding, i.e., a change in
                        // one will cause a change in the other:
                        AutoBinding.UpdateStrategy.READ_WRITE,
                        // source:
                        c, BeanProperty.create("name"),
                        // target:
                        jTextField1, BeanProperty.create("text"),
                        // binding name:
                        "customerNameBinding");
                bindingGroup.addBinding(binding);
                bindingGroup.bind();
            }
        }

    I must say that this solution is preferable over what I've been doing prior to getting to this solution: I would get the customer from the resultChanged, set a class-level field to that customer, add a document listener (or action listener, which is invoked when Enter is pressed) on the text field and, when a change is detected, set the new value on the customer. All of that is not needed with the above bit of code.

    Then, in the node, make sure to use canRename, setName, and getDisplayName, so that when the user presses F2 on a node, the display name can be changed. In other words, when the user types something different in the node display name after pressing F2, the underlying customer name is changed, which happens, in the first place, because the customer name is bound to the text field's value, so that the text field's value will also change once Enter is pressed on the changed node display name. Also set a PropertyChangeListener on the node (which implies you need to add property change support to the customer object), so that when the customer object changes (which happens, in the second place, via a change in the value of the text field, as defined in the binding defined above), the node display name is updated.

    In other words, there's still a bit of plumbing you need to include. But less than before, and the nasty class-level field for storing the customer in the TC2TopComponent is no longer needed. And a listener on the text field, with a property change listener implemented on the TC2TopComponent, isn't needed either. On the other hand, it's more code than I was using before and I've had to include the BeansBinding JAR, which adds a bit of overhead to my application, without much additional functionality over what I was doing originally. I'd lean towards not doing things this way. It seems quite expensive for essentially replacing a listener on a text field and a property change listener implemented on the TC2TopComponent for being notified of changes to the customer so that the text field can be updated. On the other other hand, it's kind of nice that all this listening-related code is centralized in one place now.

    So, here's a nice improvement over the above. Instead of listening for a customer, listen for a node, from which the customer can be obtained. Then, bind the node display name to the text field's value, so that when the user types in the text field, the node display name is updated. That saves you from having to listen in the node for changes to the customer's name. In addition to that binding, keep the previous binding, because the previous binding connects the customer name to the text field, so that when the customer display name is changed via F2 on the node, the text field will be updated.

        private BindingGroup bindingGroup = null;
        private AutoBinding nodeUpdateBinding;
        private AutoBinding textFieldUpdateBinding;

        @Override
        public void resultChanged(LookupEvent le) {
            if (bindingGroup != null && textFieldUpdateBinding != null) {
                bindingGroup.getBinding("textFieldUpdateBinding").unbind();
            }
            if (bindingGroup != null && nodeUpdateBinding != null) {
                bindingGroup.getBinding("nodeUpdateBinding").unbind();
            }
            if (!result.allInstances().isEmpty()) {
                Node n = result.allInstances().iterator().next();
                Customer c = n.getLookup().lookup(Customer.class);
                ic.set(Collections.singleton(n), null);
                bindingGroup = new BindingGroup();
                nodeUpdateBinding = Bindings.createAutoBinding(
                        AutoBinding.UpdateStrategy.READ_WRITE,
                        n, BeanProperty.create("name"),
                        jTextField1, BeanProperty.create("text"),
                        "nodeUpdateBinding");
                bindingGroup.addBinding(nodeUpdateBinding);
                textFieldUpdateBinding = Bindings.createAutoBinding(
                        AutoBinding.UpdateStrategy.READ_WRITE,
                        c, BeanProperty.create("name"),
                        jTextField1, BeanProperty.create("text"),
                        "textFieldUpdateBinding");
                bindingGroup.addBinding(textFieldUpdateBinding);
                bindingGroup.bind();
            }
        }

    Now my node has no property change listener, while the customer has no property change support. As in the first bit of code, the text field doesn't have a listener either. All that listening is taken care of by the BeansBinding code.

    Thanks to Toni for help with this, though he can't be blamed for anything that is wrong with it, only thanked for anything that is right with it.

    Read the article

  • How to repair multiple KDC and Netlogon errors

    - by Keith Sirmons
    Howdy, I have several errors in the system event log of my single Windows 2003 SP2 domain controller. Multiple member computers on the domain are listed in these errors, and I am seeing two similar errors for each computer, one second apart, in the event log:

        Event ID 7, Source KDC: The Security Account Manager failed a KDC request in an unexpected way.
        The error is in the data field. The account name was [email protected] and lookup type 0x8.

    followed by

        Event ID 7, Source KDC: The Security Account Manager failed a KDC request in an unexpected way.
        The error is in the data field. The account name was MEMBERNAME$ and lookup type 0x8.

    The lookup types also vary; I have 0x8, 0x28, 0x0, 0x20. I am also receiving other authentication errors in the same time frame as all of the KDC errors:

        Event ID 5722, Source NETLOGON: The session setup from the computer MEMBERNAME failed to authenticate.
        The name(s) of the account(s) referenced in the security database is MEMBERNAME$.
        The following error occurred: Access is denied.

    I have run dcdiag /v to see if there was something wrong with Active Directory, but all tests passed. I also ran netdiag /v and it appears all of those tests passed. Any ideas on where to start with this issue? Thank you, Keith

    Read the article

  • Different routing rules for a particular user using firewall mark and ip rule

    - by Paul Crowley
    Running Ubuntu 12.10 on amd64. I'm trying to set up different routing rules for a particular user. I understand that the right way to do this is to create a firewall rule that marks the packets for that user, and add a routing rule for that mark. Just to get testing going, I've added a rule that discards all packets as unreachable:

        # ip rule
        0:      from all lookup local
        32765:  from all fwmark 0x1 unreachable
        32766:  from all lookup main
        32767:  from all lookup default

    With this rule in place and all firewall chains in all tables empty and policy ACCEPT, I can still ping remote hosts just fine as any user. If I then add a rule to mark all packets and try to ping Google, it fails as expected:

        # iptables -t mangle -F OUTPUT
        # iptables -t mangle -A OUTPUT -j MARK --set-mark 0x01
        # ping www.google.com
        ping: unknown host www.google.com

    If I restrict this rule to the VPN user, it seems to have no effect:

        # iptables -t mangle -F OUTPUT
        # iptables -t mangle -A OUTPUT -j MARK --set-mark 0x01 -m owner --uid-owner vpn
        # sudo -u vpn ping www.google.com
        PING www.google.com (173.194.78.103) 56(84) bytes of data.
        64 bytes from wg-in-f103.1e100.net (173.194.78.103): icmp_req=1 ttl=50 time=36.6 ms

    But it appears that the mark is being set, because if I add a rule to drop these packets in the firewall, it works:

        # iptables -t mangle -A OUTPUT -j DROP -m mark --mark 0x01
        # sudo -u vpn ping www.google.com
        ping: unknown host www.google.com

    What am I missing? Thanks!

    Read the article

  • Installation of Active Directory on separate VM from DNS does not entirely work - not sure why

    - by René Kåbis
    Not sure what I am doing wrong here. I have a moderately midrange server (16 cores, 2 GHz, 32 GB ECC REG RAM, 6 TB storage, nothing too extreme) where I am running Hyper-V (Server 2012 R2 Enterprise) in order to provision virtual machines.

    So why an AD separate from DNS? I want redundancy. I want to be able to move VMs and back them up individually and not have too many services on any one VM.

    I have already provisioned a VM with DNS, and have set it up right -- essentially, I have:

    1. Set up static IPs for everyone involved.
    2. Installed the DNS service on the DNS VM.
    3. Created a forward lookup zone and a reverse lookup zone (primary zone) xyz.ca.
    4. Configured the zones to use nonsecure and secure dynamic updates (I will change this to secure later, after the domain controller is online).
    5. Created an A record for the DC in the forward lookup zone (and a reverse PTR).
    6. Changed the DC's DNS server (network settings) to the new DNS server.
    7. Checked that I can ping the DNS server from the new DC by hostname.

    When I went ahead and did a dcpromo on the DC, and un-checked the "install DNS" option, everything seemed to go well (no error messages), but I saw no changes on the DNS server whatsoever (no additional settings). Plus, the DNS server seems to be unable to join the domain, as it claims that the domain is not discoverable.

    As a final note, I do run Symantec Endpoint Protection, which includes a firewall, with most settings left at their defaults. I have not yet tried turning this off, but my experience has been that if a service would open up a port on the Windows firewall, it would do the same through Symantec. There is pretty tight integration these days between corporate-class AV and Windows.

    I have a template vhdx fully set up (just short of any special roles and features) that I can use to replace the current AD VM with, so doing this all over again is not too much skin off of my nose.

    Read the article

  • Recover mysql database - mysqldump gives "table <tablename> doesn't exist (1146)"

    - by Matthew
    Backstory: Ubuntu died (wouldn't boot) and I couldn't fix it. I booted a live CD to recover the important stuff and saved it to my NAS. One of the things I backed up was /var/lib/mysql. I reinstalled with Linux Mint; because I was on Ubuntu 10.04, this was a good opportunity to try a new distro (and I don't like Unity). Now I want to recover my old MediaWiki, so I shut down the mysql daemon, ran cp -R /media/NAS/Backup/mysql/mediawiki@002d1_19_1 /var/lib/mysql/, set file ownership and permissions correctly, and started mysql back up.

    Problem: Now I'm trying to export the database so I can restore it, but when I execute mysqldump I get an error:

        $ mysqldump -u mediawikiuser -p mediawiki-1_19_1 -c | gzip -9 > wiki.2012-11-15.sql.gz
        Enter password:
        mysqldump: Got error: 1146: Table 'mediawiki-1_19_1.archive' doesn't exist when using LOCK TABLES

    Things I've tried: I tried using --skip-lock-tables, but I get this:

        Error: Couldn't read status information for table archive ()
        mysqldump: Couldn't execute 'show create table `archive`': Table 'mediawiki-1_19_1.archive' doesn't exist (1146)

    I tried logging in to mysql and I can list the tables that should be there, but trying to describe or select from them errors out the same way as the dump:

        mysql> show tables;
        +----------------------------+
        | Tables_in_mediawiki-1_19_1 |
        +----------------------------+
        | archive                    |
        | category                   |
        | categorylinks              |
        ...
        | user_properties            |
        | valid_tag                  |
        | watchlist                  |
        +----------------------------+
        49 rows in set (0.00 sec)

        mysql> describe archive;
        ERROR 1146 (42S02): Table 'mediawiki-1_19_1.archive' doesn't exist

    I believe MediaWiki was installed using InnoDB and binary data. Am I screwed, or is there a way to recover this?
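
    One thing worth checking is whether this is the well-known symptom of restoring InnoDB table files without the matching shared tablespace: the .frm files in the copied database directory make the tables show up in SHOW TABLES, but if ibdata1 from the old /var/lib/mysql was not carried over, InnoDB's data dictionary no longer knows them and every access fails with error 1146. A quick sketch of the comparison:

        -- The .frm files are enough to make the tables appear here...
        SELECT table_name, engine, table_rows
        FROM information_schema.tables
        WHERE table_schema = 'mediawiki-1_19_1';

        -- ...but for InnoDB tables, DESCRIBE/SELECT also needs the dictionary
        -- entries stored in the shared tablespace (ibdata1), so restoring the
        -- original ibdata1 and ib_logfile* alongside the database directory
        -- is usually the next thing to try.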

    Read the article

  • Gnome 3 gdm fails to start after preupgrade from fedora 14 to 15

    - by digital illusion
    I'm not able to boot Fedora 15 in runlevel 5. After all services start, when the login screen should appear, gdm just shows a mouse waiting cursor and keeps restarting itself. From /var/log/gdm/\:0-greeter.log:

        Gtk-Message: Failed to load module "pk-gtk-module"
        /usr/bin/gnome-session: symbol lookup error: /usr/lib/gtk-3.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type
        /usr/libexec/gnome-setting-daemon: symbol lookup error: /usr/lib/gtk-3.0modules/libatk-bridge.so: undefined symbol: atk_plug_get_type

    Where should atk_plug_get_type be defined?

    Edit: Here is a better description of the error:

        (system-config-network-gui:2643): Gnome-WARNING **: Accessibility: failed to find module 'libgail-gnome' which is needed to make this application accessible
        /usr/bin/python: symbol lookup error: /usr/lib/gtk-2.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type

    Why are there still references to gtk2? Did preupgrade fail? Attaching the upgrade log... it seems gdm was not added, but it is present in the users and groups list.

        May 26 11:25:52 sysimage sendmail[1076]: alias database /etc/aliases rebuilt by root
        May 26 11:25:52 sysimage sendmail[1076]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total
        May 26 11:46:09 sysimage useradd[1793]: failed adding user 'dbus', data deleted
        May 26 11:53:37 sysimage systemd-machine-id-setup[2443]: Initializing machine ID from D-Bus machine ID.
        May 26 11:55:28 sysimage useradd[2835]: failed adding user 'apache', data deleted
        May 26 11:55:38 sysimage useradd[2842]: failed adding user 'haldaemon', data deleted
        May 26 11:55:43 sysimage useradd[2848]: failed adding user 'smolt', data deleted
        May 26 11:57:32 sysimage sendmail[3032]: alias database /etc/aliases rebuilt by root
        May 26 11:57:32 sysimage sendmail[3032]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total
        May 26 11:57:46 sysimage groupadd[3066]: group added to /etc/group: name=cgred, GID=482
        May 26 11:57:47 sysimage groupadd[3066]: group added to /etc/gshadow: name=cgred
        May 26 11:57:47 sysimage groupadd[3066]: new group: name=cgred, GID=482
        May 26 11:58:42 sysimage useradd[3086]: failed adding user 'ntp', data deleted
        May 26 12:00:13 sysimage dbus: avc: received policyload notice (seqno=2)
        May 26 12:15:08 sysimage useradd[4950]: failed adding user 'gdm', data deleted
        May 26 12:24:39 sysimage dbus: avc: received policyload notice (seqno=3)
        May 26 12:25:24 sysimage useradd[5522]: failed adding user 'mysql', data deleted
        May 26 12:25:37 sysimage useradd[5533]: failed adding user 'rpcuser', data deleted
        May 26 12:26:31 sysimage useradd[5592]: failed adding user 'tcpdump', data deleted

    Any suggestions before I revert the installation to F14?

    Read the article

  • MySQL Error: FUNCTION LEVENSHTEIN already exists

    - by kgrote
    I've got an ExpressionEngine database and I exported a couple of tables from it, then dropped those tables. When I try to re-import the tables in phpMyAdmin, I get this error:

        SQL query:

        --
        -- Database: `my_db`
        --
        DELIMITER $$
        --
        -- Functions
        --
        CREATE DEFINER=`my_username`@`%` FUNCTION `LEVENSHTEIN`(s1 VARCHAR(255), s2 VARCHAR(255)) RETURNS int(11)
            DETERMINISTIC
        BEGIN
            DECLARE s1_len, s2_len, i, j, c, c_temp, cost INT;
            DECLARE s1_char CHAR;
            DECLARE cv0, cv1 VARBINARY(256);
            SET s1_len = CHAR_LENGTH(s1), s2_len = CHAR_LENGTH(s2), cv1 = 0x00, j = 1, i = 1, c = 0;
            IF s1 = s2 THEN
                RETURN 0;
            ELSEIF s1_len = 0 THEN
                RETURN s2_len;
            ELSEIF s2_len = 0 THEN
                RETURN s1_len;
            ELSE
                WHILE j <= s2_len DO
                    SET cv1 = CONCAT(cv1, UNHEX(HEX(j))), j = j + 1;
                END WHILE;
                WHILE i <= s1_len DO
                    SET s1_char = SUBSTRING(s1, i, 1), c = i, cv0 = UNHEX(HEX(i)), j = 1;
                    WHILE j <= s2_len DO
                        SET c = c + 1;
                        IF s1_char = SUBSTRING(s2, j, 1) THEN
                            SET cost = 0;
                        ELSE
                            SET cost = 1;
                        END IF;
                        SET c_temp = CONV(HEX(SUBSTRING(cv1, j, 1)), 16, 10) + cost;
                        IF c > c_temp THEN
                            SET c = [...]

        MySQL said: Documentation
        #1304 - FUNCTION LEVENSHTEIN already exists

    I get this error even if I drop all tables from the DB and try to import anything. The only way I can get the error to go away is to totally delete the database and re-create it. What's causing that error and how can I stop it from happening?
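
    Stored functions belong to the database itself, not to any table, so dropping tables never removes LEVENSHTEIN; the dump then tries to CREATE it again over the existing copy. A small sketch of checking for it and clearing it before re-running the import:

        -- See which stored routines the database already holds
        SELECT routine_name, routine_type
        FROM information_schema.routines
        WHERE routine_schema = 'my_db';

        -- Remove the existing copy so the CREATE FUNCTION in the dump can succeed
        DROP FUNCTION IF EXISTS `my_db`.`LEVENSHTEIN`;

    Alternatively, re-exporting without the routines (or editing the CREATE FUNCTION block out of the dump) avoids the collision from the other direction.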

    Read the article

  • Question About mk-table-checksum Results

    - by stevenmusumeche
    Hello, I have 1 master and 2 slaves. I am using MySQL 5.1.42 on all servers. I am attempting to use mk-table-checksum to verify that their data is in sync, but I am getting unexpected results on one of the slaves.

    First, I generate the checksums on the master like this:

        mk-table-checksum h=localhost --databases MYDB --tables {$table_list} --replicate=MYDB.mk_checksum --chunk-size=10M

    My understanding is that this runs the checksum queries on the master, which then propagate via normal replication to the slaves. So, no locking is needed because the slaves will be at the same logical point in time when they run the checksum queries on themselves. Is this correct?

    Next, to verify that the checksums match, I run this on the master:

        mk-table-checksum --databases MYDB --replicate=IRC.mk_checksum --replicate-check 1 h=localhost,u=maatkit,p=xxxx

    If there are any differences, I repair the slaves like this:

        mk-table-sync --execute --verbose --replicate IRC.mk_checksum h=localhost,u=maatkit,p=xxxx

    After doing all of this, I repaired both slaves with mk-table-sync. However, every time I run this sequence (after everything has already been repaired), one slave is perfectly in sync but one slave always has a few tables out of sync. I am 99.999% sure that the data on the slaves matches, since I repaired everything and the tables were not even updated on the master between runs of the checksum script.

    What would cause a few tables to always show out of sync on only one of the slaves? I am stuck. Here is the output:

        Differences on h=x.x.x.x,p=...,u=maatkit
        DB  TBL               CHUNK CNT_DIFF CRC_DIFF BOUNDARIES
        IRC product           10    0        1        product_id >= 147377 AND product_id < 162085
        IRC post_order_survey 0     0        1        1=1
        IRC mk_heartbeat      0     0        1        1=1
        IRC mailing_list      0     0        1        1=1
        IRC honey_pot_log     0     0        1        1=1
        IRC product           12    0        1        product_id >= 176793 AND product_id < 191501
        IRC product           18    0        1        product_id >= 265041
        IRC orders            26    0        1        order_id >= 694472
        IRC orders_product    6     0        1        op_id >= 935375
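
    One thing that can help narrow this down is to look at the replicated checksum table directly on the suspect slave; assuming the column layout maatkit normally creates for --replicate (db, tbl, chunk, boundaries, this_crc/this_cnt, master_crc/master_cnt, ts), a sketch like this shows exactly which chunks disagree and when they were last checksummed:

        SELECT db, tbl, chunk, boundaries,
               this_cnt, master_cnt,
               this_crc, master_crc, ts
        FROM IRC.mk_checksum
        WHERE master_crc <> this_crc
           OR master_cnt <> this_cnt
        ORDER BY db, tbl, chunk;

    The ts column also helps confirm whether the rows being compared on that slave even come from the same checksum run.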

    Read the article

  • SQuirreL Client: Opening up a table in a separate tab

    - by Dalin Seivewright
    I've switched to SQuirreL Client to view an Oracle database with a variety of tables. At any given time, I need to look at the contents of several related tables at the same time. The problem is that, at least by default, SQuirreL Client does not open multiple tables at a time: when you click on a different table object, the main view is refreshed with the newly selected table's data. Oracle's SQL Developer (which I am trying to move away from) did this by default too, if I recall, but there was an option to "freeze" panes. Does a similar option exist in SQuirreL Client? I do not require the ability to view the contents of two different tables on the same screen (i.e. a split view), but I would like the ability to have tabs for each table so I can quickly switch between each table view rather than having to find the table in the table list over and over again. Note: I am using this in a professional capacity, but if it does not "belong" here then I suppose it could be moved to Super User.

    Read the article

  • mysqldump skip one table

    - by danneth
    I'm running a cronjob to back up our system using mysqldump. The database contains 90 or so tables. One of these tables is HUGE and every once in a while causes the dump to fail. From the manual I see that you can specify specific tables to dump:

        shell> mysqldump [options] db_name [tbl_name ...]

    This got me thinking: what if I have two jobs, one for dumping the huge table, and one for all the others? To accomplish this it would be nice if I could do something like:

        shell> mysqldump -u backupuser -p database huge_table > db_huge_table.sql
        shell> mysqldump -u backupuser -p database --skip huge_table > db_rest.sql

    Unfortunately I'm not seeing such an option. I could of course explicitly state the 90 tables, but that just seems like a mess. Another option would be a script of some sort, but before checking that route I'll try this resource. MySQL is 5.1.61 on CentOS 6.2.
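
    For what it's worth, mysqldump does have an --ignore-table=db_name.tbl_name option (repeatable once per table) that covers exactly this case for the second job. If an explicit table list is still wanted, a sketch like this builds it from the data dictionary instead of typing 90 names:

        -- Space-separated list of every table except the huge one, ready to
        -- paste after: mysqldump -u backupuser -p database ...
        -- (may need SET SESSION group_concat_max_len = 8192; for long lists)
        SELECT GROUP_CONCAT(table_name SEPARATOR ' ')
        FROM information_schema.tables
        WHERE table_schema = 'database'
          AND table_name <> 'huge_table';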

    Read the article

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, so the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content and insertion date; and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids across the two tables. To sum it up:

    - 2 tables
    - lots of INSERT
    - no UPDATE
    - some DELETE, once a day at most
    - some user-generated SELECTs with JOIN
    - huge data set

    What would an optimal server configuration (software and hardware; I assume, for example, that RAID 10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that allows SELECT queries to take a reasonably small amount of time. I can provide more information about the current setup (like tables, indexes...) if needed.
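
    For concreteness, here is a sketch of the kind of schema and indexes this two-table, JOIN-on-id workload usually ends up with; every name below is invented rather than taken from the question:

        CREATE TABLE entries (
            id          bigserial    PRIMARY KEY,
            content     text         NOT NULL,
            inserted_at timestamptz  NOT NULL DEFAULT now()
        );

        CREATE TABLE entry_tags (
            entry_id bigint NOT NULL REFERENCES entries (id) ON DELETE CASCADE,
            tag      text   NOT NULL,
            value    text   NOT NULL
        );

        -- User searches filter on tag/value, then join back to entries
        CREATE INDEX entry_tags_tag_value_idx ON entry_tags (tag, value);
        CREATE INDEX entry_tags_entry_id_idx  ON entry_tags (entry_id);

        -- The daily purge of old rows benefits from an index on insertion date
        CREATE INDEX entries_inserted_at_idx  ON entries (inserted_at);

    Whatever the hardware ends up being, indexes shaped like these are what let the SELECT-with-JOIN queries stay fast as the row count climbs into the millions.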

    Read the article

  • Faster caching method

    - by pataroulis
    I have a service that provides HTML code which, at some point, is not updated anymore. The code is always generated dynamically from a database with 10 million entries, so each HTML page rendering searches there for, say, 60 or 70 of those entries and then renders the page.

    For those expired pages, I want to use a caching system which will be VERY simple (just enter a record with the rendered HTML and, if I need to, remove it). I tried to do it file-based, but the search for the existence of a file and then passing it through PHP to actually render it seems like too much for what I want to do. I was thinking of doing it in MySQL with a table of MEDIUMBLOBs (each page is around 100k). It would hold about 150,000 such records (for now, at least).

    My question is: would it be faster to let MySQL do the lookup of the page and pass it to PHP, or is the file-based approach faster? The lookup code for the file-based version looks like this:

        $page = @file_get_contents(getCacheFilename($pageId));
        if ($page != NULL) {
            echo $page;
        } else {
            renderAndCachePage($pageId);
        }

    which does one lookup whether it finds the file or not. The MySQL table would just have an ID (the page id) and the blob entry. The disk of the system is a simple SATA RAID 1, and the mysql daemon can grab up to 2.5 GB of memory (I have a proxy running too, eating the rest of the 16 GB of the machine). In general, the disk is quite busy already.

    The reason I am not using PEAR Cache is that I think (please feel free to correct me on this) it adds overhead I do not need, because the page rendering code is called about 2M times per day and I wouldn't want to go through the whole code each time (and yes, I have eAccelerator to cache the code too).

    Any pointer to which direction I should go would be greatly welcome. Thanks!
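
    If the MySQL route wins, the cache table really can stay as minimal as described; a sketch with placeholder names (the engine choice is just one reasonable option, not a requirement):

        CREATE TABLE page_cache (
            page_id   INT UNSIGNED NOT NULL PRIMARY KEY,
            html      MEDIUMBLOB   NOT NULL,
            cached_at TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP
        ) ENGINE=InnoDB;

        -- One primary-key lookup per request
        SELECT html FROM page_cache WHERE page_id = 12345;

        -- Insert or refresh a rendered page
        REPLACE INTO page_cache (page_id, html) VALUES (12345, @rendered_html);

    A primary-key point lookup like this is about as cheap as MySQL gets, so the comparison with the filesystem largely comes down to how busy that shared SATA RAID 1 already is.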

    Read the article
