Search Results

Search found 9926 results on 398 pages for 'lookup tables'.


  • How To Investigate/Restore MySQL Permissions? MySQL ERROR 1045 (28000): Access denied for user

    - by Recc
    ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

    Debian. mysqld is supposedly listening on 3306; telnet to 3306 works. Also tried binding it specifically to localhost and then 127.0.0.1, which made no difference. However:

        # netstat -ln | grep mysql
        unix 2 [ ACC ] STREAM LISTENING 78993 /var/run/mysqld/mysqld.sock
        # mysql -P3306 -ptest
        ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

    Things I've tried:

    - dpkg-reconfigure mysql-server-5.1: doesn't help.
    - http://www.debian-administration.org/articles/442: doesn't help.
    - This command (source):

          UPDATE mysql.user SET Password=PASSWORD('MyNewPass') WHERE User='root';
          FLUSH PRIVILEGES;

      Doesn't help; in fact:

          Query OK, 0 rows affected (0.00 sec)
          Rows matched: 0 Changed: 0 Warnings: 0

    So might the user be deleted? Extremely unlikely, as all this started after a package update a colleague did, and some separate services started screwing around, but my colleague said he removed the offenders. There's more: while

        # mysqld_safe --skip-grant-tables

    is running, one can access the data tables, but only with the valid passwords! So there are users, and some authentication takes place; hence the 0 rows affected above. Can the privilege tables be damaged somehow, and how can I recreate/restore them when my only way of getting a MySQL console is to skip them? Can I spare myself a reinstall of MySQL? Either way, I did get a dump of the DBs now that I could get in with the above mode.
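    A minimal recovery sketch, assuming the 'root'@'localhost' row really is missing from mysql.user ('MyNewPass' is a placeholder): while the server is up with --skip-grant-tables, a FLUSH PRIVILEGES loads the grant tables into memory, after which GRANT statements are accepted again and can recreate the account in one step.

        -- run in a mysql console while mysqld_safe --skip-grant-tables is active
        FLUSH PRIVILEGES;  -- re-enable the grant subsystem for this session
        GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost'
            IDENTIFIED BY 'MyNewPass' WITH GRANT OPTION;
        FLUSH PRIVILEGES;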


  • Maintenance window and recovery for a large database

    - by NYSystemsAnalyst
    One of our teams is developing a database that will be somewhat large (~500GB) and grow from there (I know 500 gigs may seem small to many of you, but it will be one of the larger databases in our shop). One of the issues they are grappling with is backing up and restoring the database. Basically, the database will have several "data" tables and one table used for storing images/documents. We need to accomplish the following:

    1. Be able to quickly back up and restore only the data tables (sans images) to our test server for debugging and testing purposes.
    2. In the event of a catastrophic database failure, restore the data tables only, to get most of the application up and running ASAP. Then restore the images table when possible.
    3. Back up the database within the allotted nightly time window (a few hours).

    My questions are:

    - Is it possible to accomplish the first two goals while still having the images stored in the same database? If so, would we use filegroups, FILESTREAM, or something else?
    - How do other shops back up their databases in a reasonable time window while maintaining high availability? Do you replicate to a second server and back up from there?
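    A hedged sketch of the filegroup route (database name, file paths, and the Images filegroup are made up; edition and recovery-model restrictions on piecemeal restore apply, so treat this as a starting point to test rather than a recipe):

        -- one-time setup: put the image/document table on its own filegroup
        ALTER DATABASE MyDB ADD FILEGROUP Images;
        ALTER DATABASE MyDB ADD FILE
            (NAME = MyDB_Images, FILENAME = 'D:\Data\MyDB_Images.ndf')
            TO FILEGROUP Images;

        -- nightly: back up data and images independently
        BACKUP DATABASE MyDB FILEGROUP = 'PRIMARY' TO DISK = 'E:\Bak\MyDB_Data.bak';
        BACKUP DATABASE MyDB FILEGROUP = 'Images'  TO DISK = 'E:\Bak\MyDB_Images.bak';

        -- piecemeal restore: bring the data online first, images later
        RESTORE DATABASE MyDB FILEGROUP = 'PRIMARY'
            FROM DISK = 'E:\Bak\MyDB_Data.bak' WITH PARTIAL, RECOVERY;
        RESTORE DATABASE MyDB FILEGROUP = 'Images'
            FROM DISK = 'E:\Bak\MyDB_Images.bak' WITH RECOVERY;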


  • Mail-Merge on Steroids: Can Word 2003 do this?

    - by richardtallent
    I have a huge report to put together, made up of over 1,000 smaller, nearly-identical reports. Each report includes:

    - General 1:1 information (basic mail-merge stuff).
    - Lots of text, some of which may need to be disabled or have alternate text based on a boolean field.
    - A few embedded images, preferably loaded via HTTP URL, but if they have to be on a file system I can do that. (Filenames will be provided as a field in the data source.) Fortunately, all images are roughly the same size/shape.
    - Several 1:m tables with a few fields apiece.

    The kicker is the master/child tables. I've seen examples for Word 2000 that do this by left-joining the master and child table and using some IF/THEN logic to know whether to jump to the next master record. But in my case, I have several of these subtables, so that approach won't really work. So, can Word 2003 handle arbitrary master/child tables? If so, how? If not, I considered InfoPath, but I haven't used it before, and it seems to be made for data entry, not long formatted reports. I'm a software developer, so I could always hack something together with a massive VBA macro, or generate the report in HTML on the web server (where the data is coming from anyway). But I'm hoping Word will work without such gymnastics, since it will give the ultimate users of the report template better control over formatting and making minor changes.


  • Tuning up a MySQL server

    - by NinjaCat
    I inherited a MySQL server, so I've started by running the MySQLTuner.pl script. I am not a MySQL expert, but I can see that there is definitely a mess here. I'm not looking to go after every single thing that needs fixing and tuning, but I do want to grab the major, low-hanging fruit. Total memory on the system is 512MB. Yes, I know it's low, but it's what we have for the time being. Here's what the script had to say:

        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            When making adjustments, make tmp_table_size/max_heap_table_size equal
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Increase table_cache gradually to avoid file descriptor limits
            Your applications are not closing MySQL connections properly
        Variables to adjust:
            query_cache_limit (> 1M, or use smaller result sets)
            tmp_table_size (> 16M)
            max_heap_table_size (> 16M)
            table_cache (> 64)
            innodb_buffer_pool_size (>= 326M)

    For the variables that it recommends I adjust, I don't even see most of them in the my.cnf file:

        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        innodb_buffer_pool_size = 220M
        innodb_flush_log_at_trx_commit = 2
        innodb_file_per_table = 1
        innodb_thread_concurrency = 32
        skip-locking
        big-tables
        max_connections = 50
        innodb_lock_wait_timeout = 600
        slave_transaction_retries = 10
        innodb_table_locks = 0
        innodb_additional_mem_pool_size = 20M
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        bind-address = localhost
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 4
        myisam-recover = BACKUP
        query_cache_limit = 1M
        query_cache_size = 16M
        log_error = /var/log/mysql/error.log
        expire_logs_days = 10
        max_binlog_size = 100M
        skip-locking
        innodb_file_per_table = 1
        big-tables

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]

        [isamchk]
        key_buffer = 16M

        !includedir /etc/mysql/conf.d/
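    If the script's advice is taken, the missing variables can simply be added under the existing [mysqld] section. A hedged sketch with conservative starting values for a 512MB box (the numbers below are my guesses, not MySQLTuner output; note that the suggested innodb_buffer_pool_size of 326M would likely push a 512MB machine into swap, which hurts far more than a small buffer pool helps):

        [mysqld]
        tmp_table_size      = 32M
        max_heap_table_size = 32M    # keep equal to tmp_table_size
        table_cache         = 128    # raise gradually, watching open-file limits
        query_cache_limit   = 2M
        # leave innodb_buffer_pool_size at 220M until more RAM is available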


  • SQL Server 2012 memory usage steadily growing

    - by pgmo
    I am very worried about the SQL Server 2012 Express instance on which my database is running: the SQL Server process memory usage is growing steadily (1.5GB after only 2 days of work). The database is made of seven tables, each having a bigint primary key (identity) and at least one non-unique index with some included columns to serve the majority of incoming queries. An external application calls, via Microsoft OLE DB, some stored procedures, each of which does some calculations using intermediate temporary tables and/or table variables and finally does an upsert (UPDATE ... IF @@ROWCOUNT=0 INSERT ...). I never DROP those temporary tables explicitly. The frequency of those calls is about 100 calls every 5 seconds (I saw that the DLL used by the external application opens a connection to SQL Server, does the call, and then closes the connection, for each and every call). The database files are organized in only one filegroup, and the recovery model is set to simple. Some questions to diagnose the problem:

    - Is that steadily growing memory usage normal?
    - Did I make any mistake in database design which probably leads to this behaviour (no explicit temp-table drop, filegroup organization, etc.)?
    - Can SQL Server manage such a stored procedure call rate (100 calls every 5 seconds, i.e. 100 upserts every 5 seconds, beyond the intermediate calculations)?
    - Does the continuous "open connection / do sp call / close connection" pattern disturb SQL Server?
    - Is it possible to diagnose what is causing such memory usage? Perhaps queues of waiting requests? (I ran sp_who2, but I didn't see a big number of orphan connections from the external application.)
    - If I restrict the amount of memory which SQL Server is allowed to use, may I sooner or later get into trouble?
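    For what it's worth, SQL Server grabbing memory and not giving it back is normal buffer-pool behaviour rather than a leak: it keeps caching pages until it nears its configured ceiling (Express also enforces its own buffer-pool cap, though some caches sit outside it). A sketch of capping it explicitly; the 512 is an arbitrary example value:

        -- cap the instance's memory target (value is in MB; pick to suit)
        EXEC sys.sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sys.sp_configure 'max server memory (MB)', 512;
        RECONFIGURE;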


  • MicroSD card getting corrupted for no good reason

    - by ChaosR
    I recently bought a MicroSD card online. It's a SanDisk 16GB class 2. However, it has a nasty problem: every time I fill it with my data, the FAT tables get corrupted. I've tried reformatting it and blanking it, which doesn't seem to solve the problem. I have tried Windows and Linux (Ubuntu); both have the problem. I've used my USB MicroSD readers, and even tried putting it in my phone and putting data on it from there. All have this problem. Now the really odd thing is, besides the corrupted file tables, no program can find anything wrong with the hardware. I've tried both chkdsk and "badblocks -w"; neither gives any type of error. Now I don't know if the actual data gets corrupted, or if it's just the filesystem tables. What happens is that one or more folders start showing a load of Chinese-looking characters (random UTF-8 symbols, I suppose) for folder and file names, and it is impossible to do anything with those. All the other data (outside the corrupted folders) seems fine. I've tried to test it, and the problem doesn't seem to show up until I fill the disk up to about 3~4GB. After that I can still access the data. But as soon as I eject/safely remove/unmount it, the bad things happen somehow. Next time I plug it in, the folders I most recently wrote to (but sometimes also the folders I wrote to the time before last) are all gibberish. Does anybody have any clue what might be going on here?


  • OpenLayers - redraw() layer. Add / remove layer.

    - by Ozaki
    TL;DR: I have an OpenLayers map with a layer called 'track'. I want to remove track and add track back in. I have an image 'imageFeature' on a layer that rotates on load to the direction being set. I want it to update this rotation that is set in 'styleMap' on a layer called 'tracking'.

    I set the var 'styleMap' to apply the external image and rotation. The 'imageFeature' is added to the layer at the coords specified. 'imageFeature' is removed. 'imageFeature' is added again in its new location. Rotation is not applied. As the 'styleMap' applies to the layer, I think I have to remove the layer and add it again, rather than just the 'imageFeature'.

    Layer:

        var tracking = new OpenLayers.Layer.GML("Tracking", "coordinates.json", {
            format: OpenLayers.Format.GeoJSON,
            styleMap: styleMap
        });

    styleMap:

        var styleMap = new OpenLayers.StyleMap({
            fillOpacity: 1,
            pointRadius: 10,
            rotation: heading,
        });

    Now, wrapped in a timed function, the imageFeature:

        map.layers[3].addFeatures(new OpenLayers.Feature.Vector(
            new OpenLayers.Geometry.Point(longitude, latitude),
            {rotation: heading, type: parseInt(Math.random() * 3)}
        ));

    Type refers to a lookup of 1 of 3 images:

        styleMap.addUniqueValueRules("default", "type", lookup);
        var lookup = {
            0: {externalGraphic: "Image1.png", rotation: heading},
            1: {externalGraphic: "Image2.png", rotation: heading},
            2: {externalGraphic: "Image3.png", rotation: heading}
        }

    I have tried the 'redraw()' function, but it returns "tracking is undefined" or "map.layers[2] is undefined":

        tracking.redraw(true);
        map.layers[2].redraw(true);

    Heading is a variable from a JSON feed:

        var heading = 13.542;

    But so far I can't get anything to work; it will only rotate the image on load. The image will move in coordinates as it should, though. So what am I doing wrong with the redraw function, and how can I get this image to rotate live? Thanks in advance. -Ozaki

    Add: I managed to get map.layers[2].redraw(true); to successfully redraw layer 2. But it still does not update the rotation, I am thinking because the stylemap is not updating. It runs through the style map every n sec, but there are no updates to the rotation, and the variable for heading is updating correctly if I put a watch on it in Firebug.


  • Best way to re-use the same django models and admin for multiple apps

    - by kepioo
    Given a reference app (called guide), how can I create additional apps that will reuse the same models/admin/views as guide? The motivation behind this is to be able to individually control each subapp.

    - guide
    - guideApp1: exact same models/admin/views as guide
    - guideApp2: exact same models/admin/views as guide

    In the admin site, I should have:

    - 1 section for guideApp1 with all the tables defined in guide, that applies to guideApp1
    - 1 section for guideApp2 with all the tables defined in guide, that applies to guideApp2


  • mysqldump table names prefix

    - by Bogdan Gusiev
    I have two MySQL databases that have almost the same structure and represent the data of the same web app, but one of them represents the current version and the second one was made a long time ago. How can I create a database with both dumps inside, but with an old_ prefix for tables from the first and a new_ prefix for tables from the second database? Is there any mysqldump option to set up the prefix, or some other solution?
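    As far as I know, mysqldump has no table-prefix option, so one hedged workaround is to load each dump into its own scratch schema and then move the tables into a merged database under new names. A sketch (all database and table names below are made up):

        -- after: mysql olddb < old_dump.sql  and  mysql newdb < new_dump.sql
        CREATE DATABASE merged;
        -- RENAME TABLE can move a table between databases on the same server
        RENAME TABLE olddb.users TO merged.old_users,
                     newdb.users TO merged.new_users;
        -- repeat per table, or generate the statements from
        -- information_schema.tables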


  • SSIS package randomly hangs during execution

    - by Adam MacLeod
    Hi guys, I am having an ongoing and painful problem with an SSIS package. The package runs every 5 minutes as a SQL Agent job, and every 2-10 days the package will start running and never stop (thus preventing further executions). If I stop the hung job manually, it will begin working perfectly again at the next 5-minute interval. The SSIS package is for moving data from an Oracle database to a MSSQL 2005 database. It has 7 steps:

    - Step 1 calls an Oracle stored procedure to prepare the temporary tables inside Oracle.
    - Steps 2-6 process the data from the Oracle tables to the MSSQL tables.
    - Step 7 calls an Oracle stored procedure to clear the Oracle temporary tables.

    I suspect that the issue is caused by a communications error between the MSSQL server and the Oracle server. Both the MSSQL database and the Agent/package run on one machine, with the Oracle database running over the network. I have enabled logging of the SQL package, and after more than 2GB of log file I have captured the instant where the package stops responding:

        OnPreValidate,ADV-SRV5,NT AUTHORITY\SYSTEM,CallistaIntegrationToMonashCRM_delta,{F88F6C45-CFA2-4801-A2F2-DDF03D458A48},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,0,0x,(null)
        OnPreValidate,ADV-SRV5,NT AUTHORITY\SYSTEM,Address,{c5907799-f918-43da-818a-d4bd7f188367},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,0,0x,(null)
        OnInformation,ADV-SRV5,NT AUTHORITY\SYSTEM,Address,{c5907799-f918-43da-818a-d4bd7f188367},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,1074016266,0x,Validation phase is beginning.
        OnProgress,ADV-SRV5,NT AUTHORITY\SYSTEM,Address,{c5907799-f918-43da-818a-d4bd7f188367},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,0,0x,Validating
        Diagnostic,ADV-SRV5,NT AUTHORITY\SYSTEM,Callista,{cb5d6fe3-3ea4-4453-8e5a-965818021df7},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,0,0x,ExternalRequest_pre: The object is ready to make the following external request: 'IDataInitialize::GetDataSource'.
        Diagnostic,ADV-SRV5,NT AUTHORITY\SYSTEM,Callista,{cb5d6fe3-3ea4-4453-8e5a-965818021df7},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,0,0x,ExternalRequest_post: 'IDataInitialize::GetDataSource succeeded'. The external request has completed.
        Diagnostic,ADV-SRV5,NT AUTHORITY\SYSTEM,Callista,{cb5d6fe3-3ea4-4453-8e5a-965818021df7},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,0,0x,ExternalRequest_pre: The object is ready to make the following external request: 'IDBInitialize::Initialize'.

    These messages show the entire log generated for the failed run; for a successful run the output is typically ~2500 lines. I can see that the package is hanging during the initialize operation on the Callista connection (the Oracle database). I have not been able to work out a way to either fix this issue or have the package die gracefully (an error in the log would be A-OK with me). Any help or advice would be greatly appreciated.


  • Master/Detail insert using BCP?

    - by Joseph
    I have a master table with identity on for the primary key, and I have some child tables. When I insert a record into the master table using BCP, how can I use the newly created ID from the master table to insert into the related child tables (maybe with BCP again)? Will it be possible? I may be able to turn identity off, but will it work? Thanks, Joseph
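    One hedged approach: don't let the target regenerate the IDs at all. If the master IDs already exist in the data files (or in a staging table), they can be loaded verbatim, so the child rows keep pointing at the right parents. A sketch with made-up table and file names:

        -- bcp's -E switch preserves identity values found in the data file:
        --   bcp MyDB.dbo.MasterTable in master.dat -c -E -T
        --   bcp MyDB.dbo.ChildTable  in child.dat  -c -T
        -- The T-SQL equivalent when loading from a staging table:
        SET IDENTITY_INSERT dbo.MasterTable ON;
        INSERT INTO dbo.MasterTable (MasterID, Name)
        SELECT MasterID, Name FROM dbo.MasterStaging;
        SET IDENTITY_INSERT dbo.MasterTable OFF;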


  • SQL Server "User-Schema Separation" and Entity Framework issues

    - by Ryan
    I have been fooling around with EF against a database that has implemented user-schema separation with a twist: there are multiple tables with the same name, separated via the schema. So, like:

    - admin.tasks
    - staff.tasks
    - contractor.tasks

    When I created my EF model, I noticed that there were 3 tasks tables:

    - tasks
    - tasks1
    - tasks2

    Is this by design? Also, is there a way to tell EF to add the schema to the name of the entity, or am I SOL and doing it myself?


  • Taking a snapshot (cloning) of a project using Linq 2 Sql

    - by JoeBlogs1999
    I have a Project entity with several child tables, e.g.:

    - ProjectAwards
    - ProjectTeamMember

    I would like to copy the data from Project (and its child tables) into a new Project record and update the project status, e.g.:

        var projectEntity = getProjectEntity(projectId);
        draftProjectEntity = projectEntity;
        draftProjectEntity.Status = NewStatus;
        context.SubmitChanges();

    I found this link from Marc Gravell. It's part of the way there, but it updates the child records to point at the new draft project, where I need it to copy them.


  • Ext JS: how to tell PagingToolbar to use the parent Grid's store?

    - by Nazariy
    I'm trying to build an application that uses a single config passed by the server as non-native JSON (it can contain functions). Everything works fine so far, but I'm curious why PagingToolbar does not have an option to use the parent grid's store. I have tried to set the store in my config like this, but without success:

        {...
            store: Ext.StoreMgr.lookup('unique_store_id')
        }

    Is there any way to do so without writing tons of JavaScript for each view defining the store, grid, and other items in my application, or at least to extend the functionality of PagingToolbar so that it takes options from the parent object?

    UPDATED. Here is a short example of the server response (minified):

        {
            "xtype":"viewport",
            "layout":"border",
            "renderTo":Ext.getBody(),
            "autoShow":true,
            "id":"mainFrame",
            "defaults":{"split":true,"useSplitTips":true},
            "items":[{
                "region":"center",
                "xtype":"panel",
                "layout":"fit",
                "id":"content-area",
                "items":{
                    "id":"manager-panel",
                    "region":"center",
                    "xtype":"tabpanel",
                    "activeItem":0,
                    "items":[{
                        "xtype":"grid",
                        "id":"domain-grid",
                        "title":"Manage Domains",
                        "store":{
                            "xtype":"arraystore",
                            "id":"domain-store",
                            "fields":[...],
                            "autoLoad":{"params":{"controller":"domain","view":"store"}},
                            "url":"index.php"
                        },
                        "tbar":[...],
                        "bbar":{
                            "xtype":"paging",
                            "id":"domain-paging-toolbar",
                            "store":Ext.StoreMgr.lookup('domain-store')
                        },
                        "columns":[...],
                        "selModel":new Ext.grid.RowSelectionModel({singleSelect:true}),
                        "stripeRows":true,
                        "height":350,
                        "loadMask":true,
                        "listeners":{"cellclick":activateDisabledButtons}
                    }]
                }
            }]
        }


  • A better way of converting Codepage-1251 in RTF to Unicode

    - by blue painted
    I am trying to parse RTF (via MSEDIT) in various languages, all in Delphi 2010, in order to produce HTML in Unicode. Taking Russian/Cyrillic as my starting point, I find that the overall document codepage is 1252 (Western) but the Russian parts of the text are identified by the charset of the font (RUSSIAN_CHARSET, 204). So far I:

    1. Use AnsiString (or RawByteString) when parsing the RTF.
    2. Determine the codepage by a lookup from the font charset (see http://msdn.microsoft.com/en-us/library/cc194829.aspx).
    3. Translate using a lookup table in my code (this table generated from http://msdn.microsoft.com/en-gb/goglobal/cc305144.aspx); I'm going to need one table per supported codepage!

    There MUST be a better way than this? Preferably something supplied by the OS, and so less brittle than tables of constants.


  • Assign table values to multiple variables using a single SELECT statement and CASE?

    - by Darth Continent
    I'm trying to assign values contained in a lookup table to multiple variables by using a single SELECT having multiple CASE statements. The table is a lookup table with two columns, like so:

        [GreekAlphabetastic]
        SystemID    Descriptor
        --------    ----------
        1           Alpha
        2           Beta
        3           Epsilon

    This is my syntax:

        SELECT
            @VariableTheFirst  = CASE WHEN myField = 'Alpha'   THEN tbl.SystemID END,
            @VariableTheSecond = CASE WHEN myField = 'Beta'    THEN tbl.SystemID END,
            @VariableTheThird  = CASE WHEN myField = 'Epsilon' THEN tbl.SystemID END
        FROM GreekAlphabetastic tbl

    However, when I check the variables after this statement executes, I expected each to be assigned the appropriate value, but instead only the last has a value assigned:

        SELECT
            @VariableTheFirst  AS First,
            @VariableTheSecond AS Second,
            @VariableTheThird  AS Third

    Results:

        First    Second    Third
        NULL     NULL      3

    What am I doing wrong?
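    A hedged explanation and sketch of one fix: a SELECT assignment runs once per row, so after the final row ('Epsilon') the first two variables have been overwritten with the CASE's implicit NULL. Collapsing the table to a single row with MAX() lets every assignment see all rows (this assumes the column is really Descriptor, as in the sample data, rather than myField):

        SELECT
            @VariableTheFirst  = MAX(CASE WHEN Descriptor = 'Alpha'   THEN SystemID END),
            @VariableTheSecond = MAX(CASE WHEN Descriptor = 'Beta'    THEN SystemID END),
            @VariableTheThird  = MAX(CASE WHEN Descriptor = 'Epsilon' THEN SystemID END)
        FROM GreekAlphabetastic;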


  • Database Replication

    - by tanthiamhuat
    I have tried to follow the steps outlined in http://www.howtoforge.com/mysql_database_replication for database replication. I have created the database exampledb, created some tables, and loaded them with values. But when I execute:

        USE exampledb;
        FLUSH TABLES WITH READ LOCK;
        SHOW MASTER STATUS;

    I do not get any output; it says 0 rows affected. Why is this so?
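    A hedged guess at what's happening: FLUSH TABLES WITH READ LOCK is a statement, not a query, so "0 rows affected" is its normal output, and SHOW MASTER STATUS returns an empty set whenever binary logging is off. The guide's my.cnf lines (paths and server-id are examples) have to be in place, and the server restarted, before SHOW MASTER STATUS reports anything:

        -- in /etc/mysql/my.cnf, [mysqld] section, then restart mysqld:
        --   server-id    = 1
        --   log-bin      = /var/log/mysql/mysql-bin.log
        --   binlog_do_db = exampledb
        SHOW MASTER STATUS;  -- should now report File and Position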


  • Can a Compound key be set as a primary key to another table?

    - by Nikos
    Can a compound key be set as a primary key to another table? For instance, I have the tables:

    - Books, with primary key Product_ID
    - Client, with primary key Client_ID
    - Clients_Books, with compound primary key Product_ID and Client_ID

    I want to set this compound primary key from Clients_Books as a primary key in another table named Order_Details. But I don't want to use the Product_ID and the Client_ID from the Books and Client tables. Does this make sense? All opinions more than welcome.
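    A sketch in plain DDL of how that usually looks (column types assumed): Order_Details carries both columns, and a single composite foreign key points at Clients_Books rather than at Books or Client individually, so the pair is validated as a unit.

        CREATE TABLE Order_Details (
            Product_ID INT NOT NULL,
            Client_ID  INT NOT NULL,
            -- the pair references Clients_Books, not the base tables:
            CONSTRAINT FK_OrderDetails_ClientsBooks
                FOREIGN KEY (Product_ID, Client_ID)
                REFERENCES Clients_Books (Product_ID, Client_ID),
            CONSTRAINT PK_OrderDetails
                PRIMARY KEY (Product_ID, Client_ID)
        );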


  • paste(1) in SQL

    - by pilcrow
    How, in SQL, could you "zip" together records in separate tables (à la the UNIX paste(1) utility)? For example, assuming two tables, A and B, like so:

        A           B
        ========    ====
        Harkness    unu
        Costello    du
        Sato        tri
        Harper
        Jones

    how could you produce a single result set?

        NAME     | NUM
        ===============
        Harkness | unu
        Costello | du
        Sato     | tri
        Harper   | NULL
        Jones    | NULL
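    A hedged sketch using window functions (assuming the columns are called name and num; since tables have no inherent order, the ORDER BY inside ROW_NUMBER() only fixes some deterministic pairing, not the insertion order shown above):

        SELECT a.name, b.num
        FROM (SELECT name, ROW_NUMBER() OVER (ORDER BY name) AS rn FROM A) AS a
        FULL OUTER JOIN
             (SELECT num,  ROW_NUMBER() OVER (ORDER BY num)  AS rn FROM B) AS b
          ON a.rn = b.rn
        ORDER BY COALESCE(a.rn, b.rn);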


  • How to differentiate the data when doing a UNION on 2 SELECTs?

    - by wiooz
    If I have the following two tables:

        Employes    Customers
        ========    =========
        Bob         Sandra
        Gina        Pete
        John        Mom

    I will do a UNION, giving:

        Everyone
        ========
        Bob
        Gina
        John
        Sandra
        Pete
        Mom

    The question is: in my result, how can I create a dummy column to differentiate the data from my two tables?

        Everyone
        ============
        Bob (Emp)
        Gina (Emp)
        John (Emp)
        Sandra (Cus)
        Pete (Cus)
        Mom (Cus)

    I want to know which table each entry comes from, without adding a new column in the database:

        SELECT Employes.name FROM Employes
        UNION
        SELECT Customers.name FROM Customers;
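    A hedged sketch (assuming the column really is called name): each branch can select a string literal as an extra tag column; UNION ALL also skips the duplicate-elimination pass that plain UNION would run over the combined rows.

        SELECT Employes.name,  'Emp' AS source FROM Employes
        UNION ALL
        SELECT Customers.name, 'Cus' AS source FROM Customers;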


  • VBNet2008 Transfer DATAREADER rows to CrystalReport DataSet1.xsd

    - by lennie
    Hi good guys, I need your help. Please help me, as I am a newbie using VB.NET. I have been asked to transfer DataReader rows into the Crystal Report dataset DataSet1.xsd. Here is the code (with the obvious typos fixed; note that a new DataRow has to be created inside the loop, not once before it):

        Private Sub BtnTransfer()
            Dim strSql As String = "SELECT OrderID, OrderDate FROM Orders"
            Dim DS As New DataSet1          ' <-- Crystal Report DataSet1.xsd
            Dim DR As SqlDataReader
            Dim sqlconn As New SqlConnection(ConnString)
            Dim sqlcmd As New SqlCommand(strSql, sqlconn)
            sqlcmd.Connection.Open()
            DR = sqlcmd.ExecuteReader(CommandBehavior.CloseConnection)
            Do While DR.Read()
                ' create a fresh row on every pass through the loop
                Dim RW As DataRow = DS.Tables(0).NewRow
                RW("OrderID") = DR("OrderID")
                RW("OrderDate") = DR("OrderDate")
                DS.Tables(0).Rows.Add(RW)
            Loop
        End Sub


  • Sparse O(1) array with indices being consecutive products

    - by Kos
    Hello, I'd like to pre-calculate an array of values of some unary function f. I know that I'll only need the values f(x) where x is of the form a*b, with both a and b integers in the range 0..N.

    The obvious time-optimized choice is just to make an array of size N*N and pre-calculate only the elements which I'm going to read later. For f(a*b), I'd just check and set tab[a*b]. This is the fastest method possible; however, it is going to take a lot of space, as there are lots of indices in this array (starting with N+1) which will never be touched.

    Another solution is to make a simple tree map... but this slows down the lookup itself very heavily by introducing lots of branches. No.

    I wonder: is there any solution to make such an array less sparse and smaller, but still quick, branchless, O(1) in lookup?


  • Data Structure Behind Amazon S3's Keys (Filtering Data Structure)

    - by dimo414
    I'd like to implement a data structure similar to the lookup functionality of Amazon S3. For those of you who don't know what I'm talking about: Amazon S3 stores all files at the root, but allows you to look up groups of files by common prefixes in their names, therefore replicating the power of a directory tree without the complexity of it. The catch is that both lookup and filter operations are O(1) (or close enough that, even on very large buckets (S3's disk equivalents), both operations might as well be O(1)).

    So in short, I'm looking for a data structure that functions like a hash map, with the added benefit of efficient (at the very least not O(n)) filtering. The best I can come up with is extending HashMap so that it also contains a (sorted) list of contents, doing a binary search for the range that matches the prefix, and returning that set. This seems slow to me, but I can't think of any other way to do it. Does anyone know either how Amazon does it, or a better way to implement this data structure?

