Search Results

Search found 4704 results on 189 pages for 'refactoring databases'.


  • Error when calling MySQL stored procedure

    - by devuser
    This is my stored procedure to search through all databases, tables and columns. The procedure was created without any error.

        DELIMITER $$

        DROP PROCEDURE IF EXISTS mydb.get_table$$
        CREATE DEFINER='root'@'%' PROCEDURE get_table(in_search varchar(50))
        READS SQL DATA
        BEGIN
          DECLARE trunc_cmd VARCHAR(50);
          DECLARE search_string VARCHAR(250);
          DECLARE db, tbl, clmn CHAR(50);
          DECLARE done INT DEFAULT 0;
          DECLARE COUNTER INT;
          DECLARE table_cur CURSOR FOR
            SELECT CONCAT('SELECT COUNT(*) INTO @CNT_VALUE FROM ', table_schema, '.', table_name,
                          ' WHERE ', column_name, ' REGEXP ''', in_search, ''''),
                   table_schema, table_name, column_name
            FROM information_schema.COLUMNS
            WHERE TABLE_SCHEMA NOT IN ('mydb', 'information_schema');
          DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

          # Truncating the table to refill the data for a new search.
          PREPARE trunc_cmd FROM 'TRUNCATE TABLE temp_details';
          EXECUTE trunc_cmd;

          OPEN table_cur;
          table_loop: LOOP
            FETCH table_cur INTO search_string, db, tbl, clmn;

            # Executing the search
            SET @search_string = search_string;
            SELECT search_string;
            PREPARE search_string FROM @search_string;
            EXECUTE search_string;
            SET COUNTER = @CNT_VALUE;
            SELECT COUNTER;

            IF COUNTER <> 0 THEN
              # Inserting required results from search to table
              INSERT INTO temp_details VALUES (db, tbl, clmn);
            END IF;

            IF done = 1 THEN
              LEAVE table_loop;
            END IF;
          END LOOP;
          CLOSE table_cur;

          # Finally show results
          SELECT * FROM temp_details;
        END$$

        DELIMITER ;

    But calling the procedure fails with the following error:

        call get_table('aaa')

        Error Code : 1064
        You have an error in your SQL syntax; check the manual that corresponds to your MySQL
        server version for the right syntax to use near 'delete REGEXP 'aaa'' at line 1 (0 ms taken)
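
    The error message hints at a likely cause (my reading, not stated in the post): some schema has a column whose name is a reserved word such as delete, so the dynamically built SELECT becomes invalid once that name is interpolated unquoted. A minimal sketch of the cursor query with backtick-quoted identifiers, which would sidestep that:

        DECLARE table_cur CURSOR FOR
          SELECT CONCAT('SELECT COUNT(*) INTO @CNT_VALUE FROM `', table_schema, '`.`', table_name,
                        '` WHERE `', column_name, '` REGEXP ''', in_search, ''''),
                 table_schema, table_name, column_name
          FROM information_schema.COLUMNS
          WHERE TABLE_SCHEMA NOT IN ('mydb', 'information_schema');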


  • The woes of (sometimes) storing "date only" in datetimes

    - by Heinzi
    We have two fields from and to (of type datetime), where the user can store the begin time and the end time of a business trip, e.g.:

        From: 2010-04-14 09:00
        To:   2010-04-16 16:30

    So, the duration of the trip is 2 days and 7.5 hours. Often, the exact times are not known in advance, so the user enters the dates without a time:

        From: 2010-04-14
        To:   2010-04-16

    Internally, this is stored as 2010-04-14 00:00 and 2010-04-16 00:00, since that's what most modern class libraries (e.g. .NET) and databases (e.g. SQL Server) do when you store a "date only" in a datetime structure. Usually, this makes perfect sense. However, when entering 2010-04-16 as the to date, the user clearly did not mean 2010-04-16 00:00. Instead, the user meant 2010-04-16 24:00, i.e., calculating the duration of the trip should output 3 days, not 2 days.

    I can think of a few (more or less ugly) workarounds for this problem, all having advantages and disadvantages: add "23:59" in the UI layer of the to field if the user did not enter a time component; add a special "dates are full days" Boolean field; store "2010-04-17 00:00" in the DB but display "2010-04-16 24:00" to the user if the time component is "00:00"; ...

    Since I assume that this is a fairly common problem, I was wondering: is there a "standard" best-practice way of solving it? If there isn't, have you experienced a similar requirement, how did you solve it, and what were the pros/cons of that solution?
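
    One of the listed workarounds, treating a time-less to date as an exclusive bound on the following midnight, falls out of the date math quite cleanly. A minimal sketch in SQL Server 2008 syntax (variable names are mine, purely illustrative):

        -- A "to" date entered without a time arrives as midnight
        DECLARE @FromDt datetime = '2010-04-14 00:00';
        DECLARE @ToDt   datetime = '2010-04-16 00:00';

        -- Treat the end as exclusive by moving it to the next midnight
        DECLARE @EndExclusive datetime = DATEADD(day, 1, @ToDt);

        -- Duration in days: 3, matching what the user meant
        SELECT DATEDIFF(day, @FromDt, @EndExclusive) AS DurationDays;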


  • MySQL Access denied error

    - by dancingbush
    I am trying to install MySQL on Mac OS 10.8 and set up a user account. NOTE: I am an absolute beginner when it comes to using the command line in the Terminal window. I used these instructions to install: http://www.macminivault.com/mysql-mountain-lion/

    I set my own password for all users here:

        GRANT ALL ON *.* TO 'root'@'localhost' IDENTIFIED BY 'mypass' WITH GRANT OPTION;
        quit

    Every time I try to execute mysql as the root user on the command line I get this:

        Ciarans-MacBook-Pro:~ callanmooneys$ mysql -u root
        ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)

    I read around on the net and tried various things, including this to change the password: mysqladmin -u root -pyourcurrentmysqlrootpassword password yournewmysqlrootpassword - it returns -> -> USE mysql ->

    If I simply type 'mysql' and launch the MySQL monitor, then try to create a user account:

        mysql> USE mysql
        ERROR 1044 (42000): Access denied for user ''@'localhost' to database 'mysql'
        mysql>

    Also tried answers on the forum "access is denied for user 'root'@localhost mysql error 1045", which returned '[email protected] command not found, and "MySQL - ERROR 1045 - Access denied":

        Ciarans-MacBook-Pro:~ callanmooneys$ mysqld_safe --skip-grant-tables
        131105 21:44:41 mysqld_safe Logging to '/usr/local/mysql/data/Ciarans-MacBook-Pro.local.err'.
        131105 21:44:41 mysqld_safe Starting mysqld daemon with databases from /usr/local/mysql/data
        /usr/local/mysql/bin/mysqld_safe: line 129: /usr/local/mysql/data/Ciarans-MacBook-Pro.local.err: Permission denied
        /usr/local/mysql/bin/mysqld_safe: line 166: /usr/local/mysql/data/Ciarans-MacBook-Pro.local.err: Permission denied
        131105 21:44:41 mysqld_safe mysqld from pid file /usr/local/mysql/data/Ciarans-MacBook-Pro.local.pid ended
        /usr/local/mysql/bin/mysqld_safe: line 129: /usr/local/mysql/data/Ciarans-MacBook-Pro.local.err: Permission denied

        Ciarans-MacBook-Pro:~ callanmooneys$ mysql -u root
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)

    Feedback appreciated.
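
    For the record, once a server is actually running with --skip-grant-tables (the Permission denied lines above suggest it never started), the usual next step on a MySQL 5.x-era install is to reset the root password over the unauthenticated connection. A sketch, assuming pre-5.7 grant tables with a Password column:

        -- Run from a plain `mysql` prompt while the server skips grant checks
        UPDATE mysql.user SET Password = PASSWORD('NewPass') WHERE User = 'root';
        FLUSH PRIVILEGES;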


  • Using a .MDF SQL Server Database with ASP.NET Versus Using SQL Server

    - by Maxim Z.
    I'm currently writing a website in ASP.NET MVC, and my database (which doesn't have any data in it yet, it only has the correct tables) uses SQL Server 2008, which I have installed on my development machine. I connect to the database from my application by using the Server Explorer, followed by LINQ to SQL mapping.

    Once I finish developing the site, I will move it over to my hosting service, which is a virtual hosting plan. I'm concerned about whether using the SQL Server setup that is currently working on my development machine will be hard to do on the production server, as I'll have to import all the database tables through the hosting control panel. I've noticed that it is possible to create a SQL Server database from inside Visual Studio. It is then stored in the App_Data directory. My questions are the following:

    1. Does it make sense to move my SQL Server DB out of SQL Server and into the App_Data directory as an .mdf file? If so, how can I move it? I believe this is called the Detach command, is it not?
    2. Are there any performance/security issues that can occur with an .mdf file like this?
    3. Would my intended setup work OK with a typical virtual hosting plan? I'm hoping that the .mdf database won't count against the limited number of SQL Server databases that can be created with my plan.

    I hope this question isn't too broad. Thanks in advance!

    Note: I'm just starting out with ASP.NET MVC and all this, so I might be completely misunderstanding how this is supposed to work.
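
    On the detach step, that is indeed the usual name for it. A minimal T-SQL sketch (the database name is a placeholder; detaching needs exclusive access, and the .mdf/.ldf files stay on disk afterwards):

        USE master;
        -- Kick out other connections, then detach
        ALTER DATABASE MySiteDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        EXEC sp_detach_db @dbname = 'MySiteDb';

    The detached .mdf can then be copied into App_Data and referenced via an AttachDbFilename connection string.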


  • Managing highly repetitive code and documentation in Java

    - by polygenelubricants
    Highly repetitive code is generally a bad thing, and there are design patterns that can help minimize it. However, sometimes it's simply inevitable due to the constraints of the language itself. Take the following example from java.util.Arrays:

        /**
         * Assigns the specified long value to each element of the specified
         * range of the specified array of longs. The range to be filled
         * extends from index <tt>fromIndex</tt>, inclusive, to index
         * <tt>toIndex</tt>, exclusive. (If <tt>fromIndex==toIndex</tt>, the
         * range to be filled is empty.)
         *
         * @param a the array to be filled
         * @param fromIndex the index of the first element (inclusive) to be
         *        filled with the specified value
         * @param toIndex the index of the last element (exclusive) to be
         *        filled with the specified value
         * @param val the value to be stored in all elements of the array
         * @throws IllegalArgumentException if <tt>fromIndex &gt; toIndex</tt>
         * @throws ArrayIndexOutOfBoundsException if <tt>fromIndex &lt; 0</tt> or
         *         <tt>toIndex &gt; a.length</tt>
         */
        public static void fill(long[] a, int fromIndex, int toIndex, long val) {
            rangeCheck(a.length, fromIndex, toIndex);
            for (int i = fromIndex; i < toIndex; i++)
                a[i] = val;
        }

    The above snippet appears in the source code 8 times, with very little variation in the documentation/method signature but exactly the same method body, one for each of the root array types int[], short[], char[], byte[], boolean[], double[], float[], and Object[]. I believe that unless one resorts to reflection (which is an entirely different subject in itself), this repetition is inevitable. I understand that, as a utility class, such a high concentration of repetitive Java code is highly atypical, but even with the best practice, repetition does happen! Refactoring doesn't always work, because it's not always possible (the obvious case is when the repetition is in the documentation).

    Obviously maintaining this source code is a nightmare. A slight typo in the documentation, or a minor bug in the implementation, is multiplied by however many repetitions were made. In fact, the best example happens to involve this exact class: Google Research Blog - Extra, Extra - Read All About It: Nearly All Binary Searches and Mergesorts are Broken (by Joshua Bloch, Software Engineer). The bug is a surprisingly subtle one, occurring in what many thought to be just a simple and straightforward algorithm.

        // int mid = (low + high) / 2;  // the bug
        int mid = (low + high) >>> 1;   // the fix

    The above line appears 11 times in the source code! So my questions are:

    1. How are these kinds of repetitive Java code/documentation handled in practice? How are they developed, maintained, and tested? Do you start with "the original", make it as mature as possible, and then copy and paste as necessary, hoping you didn't make a mistake? And if you did make a mistake in the original, then just fix it everywhere, unless you're comfortable with deleting the copies and repeating the whole replication process? And do you apply this same process to the testing code as well?
    2. Would Java benefit from some sort of limited-use source code preprocessing for this kind of thing? Perhaps Sun has their own preprocessor to help write, maintain, document and test this kind of repetitive library code?

    A comment requested another example, so I pulled this one from Google Collections: com.google.common.base.Predicates lines 276-310 (AndPredicate) vs lines 312-346 (OrPredicate). The source for these two classes is identical, except for:

    - AndPredicate vs OrPredicate (each appears 5 times in its class)
    - "And(" vs "Or(" (in the respective toString() methods)
    - #and vs #or (in the @see Javadoc comments)
    - true vs false (in apply; ! can be rewritten out of the expression)
    - -1 /* all bits on */ vs 0 /* all bits off */ in hashCode()
    - &= vs |= in hashCode()


  • How can I build something like Amazon S3 in Perl?

    - by Joel G
    I am looking to code a file storage application in Perl similar to Amazon S3. I already have an Amazon S3 clone that I found online called parkplace, but it's in Ruby, is old, and also isn't built for high loads. I am not really sure which modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple and then add more once I get it going):

    - Easy API implementation for client-side apps (maybe REST?)
    - Centralized database server for the USERDB (maybe PostgreSQL?)
    - Logging of all connections, bandwidth used - well, pretty much everything - to a centralized server (maybe PostgreSQL again?)
    - Easy server-side configuration (config file(s) stored on the servers)
    - Web-based control panel for admin(s) and user(s) to show logs (could work by just running queries from the databases)
    - Fast
    - High uptime
    - Low memory usage
    - Some sort of load distribution/load balancer (maybe DNS-based, or Pound, or Perlbal, or something else?)
    - Maybe a cache of some sort (memcached or Perlbal or something else?)

    Thanks in advance


  • Reporting System architecture for better performance

    - by pauloya
    Hi,

    We have a product that runs SQL Server Express 2005 and uses mainly ASP.NET. The database has around 200 tables, with a few (4 or 5) that can grow from 300 to 5000 rows per day and keep a history of 5 years, so they can grow to have 10 million rows.

    We have built a reporting platform that allows customers to build reports based on templates, fields and filters. We have faced performance problems almost since the beginning. We try to keep report display under 10 seconds, but some reports go up to 25 seconds (especially for those customers with a long history). We keep checking indexes and trying to improve the queries, but we get the feeling that there's only so much we can do. Of course, the fact that the queries are generated dynamically doesn't help with the optimization. We also added a few tables that keep redundant data, but then we have the added problem of keeping this data up to date, and SQL Express also has a limit on the size of databases.

    We are now facing a point where we have to decide if we want to give up real-time reports, or maybe cut the history, to be able to get better performance. I would like to ask what the recommended approach is for this kind of system. Also, should we start looking for third-party tools/platforms? I know OLAP can be an option, but can we make it work on SQL Server Express, or at least with a license that is cheap enough to distribute to thousands of deployments?

    Thanks
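
    A middle ground between real-time reports and cutting history (my suggestion, not from the post) is to pre-aggregate the big history tables into summary tables refreshed on a schedule, so most report queries never touch the raw rows. A sketch in SQL Server 2005 syntax; all table and column names here are hypothetical:

        -- Daily rollup of the large history table
        CREATE TABLE ReportDailySummary (
            ReportDay  datetime NOT NULL,
            CustomerId int      NOT NULL,
            EventCount int      NOT NULL,
            PRIMARY KEY (ReportDay, CustomerId)
        );

        -- Nightly refresh; the DATEADD/DATEDIFF pair truncates a datetime to midnight on 2005
        INSERT INTO ReportDailySummary (ReportDay, CustomerId, EventCount)
        SELECT DATEADD(day, DATEDIFF(day, 0, EventTime), 0), CustomerId, COUNT(*)
        FROM EventHistory
        GROUP BY DATEADD(day, DATEDIFF(day, 0, EventTime), 0), CustomerId;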


  • Where are tables in Mnesia located?

    - by Sanoj
    I am trying to compare Mnesia with more traditional databases. As I understand it, tables in Mnesia can be stored as:

    - ram_copies - tables are stored in RAM only, so there is no durability as in ACID.
    - disc_copies - tables are located on disc and a copy is located in RAM, so the table cannot be bigger than the available memory?
    - disc_only_copies - tables are located on disc only, so no caching in memory and worse performance? And the size of the table is limited to the size of dets, or the table has to be fragmented.

    So if I want the performance of doing reads from RAM and the durability of writes to disc, then the size of the tables is very limited compared to a traditional RDBMS like MySQL or PostgreSQL. I know that Mnesia isn't meant to replace traditional RDBMSs, but can it be used as a big RDBMS or do I have to look for another database? The server I will use is a VPS with a limited amount of memory, around 512 MB, but I want good database performance. Are disc_copies and the other types of tables in Mnesia as limited as I have understood?


  • Django Models / SQLAlchemy are bloated! Any truly Pythonic DB models out there?

    - by Luke Stanley
    "Make things as simple as possible, but no simpler." Can we find the solution/s that fix the Python database world? from someAmazingDB import * class Task (model): title = '' isDone = False db.taskList = [] #or db.taskList = expandableTypeCollection(Task) #not sure what this syntax would be db['taskList'].append(Task(title='Beat old sql interfaces',done=False)) db.taskList.append(Task('Illustrate different syntax modes',True)) #at this point it should autosave #we should be able to reload the console and access like: >> from someAmazingDB import * >> print 'Done tasks:' >> for task in db.taskList: >> if task.done: >> print task 'Illustrate different syntax modes' I'm a fan of Python, webPy and Cherry Py, and KISS in general. We're talking automatic Python to SQL type translation or NoSQL. We don't have to totally be SQL compatible! Just a scalable subset or ignore it! Re:model changes, it's ok to ask the developer when they try to change it or have a set of sensible defaults. Here is the challenge: The above code should work with very little modification or thinking required. Why must we put up with compromise when we know better? It's 2010, we should be able to code scalable, simple databases in our sleep. If you think this is important, please upvote!


  • .NET Membership with Repository Pattern

    - by Zac
    My team is in the process of designing a domain model which will hide various different data sources behind a unified repository abstraction. One of the main drivers for this approach is the very high probability that these data sources will undergo significant change in the near future, and we don't want to be re-writing business logic when this happens.

    One data source will be our membership database, which was originally implemented using the default ASP.NET Membership Provider. The membership provider is tied to the System.Web.Security namespace, but we have a design guideline requiring that our domain model layer is not dependent upon System.Web (or any other implementation/environment dependency), as it will be consumed in different environments - nor do we want our websites directly communicating with databases.

    I am considering what would be a good approach to reconciling the MembershipProvider approach with our abstracted n-tier architecture. My initial feeling is that we could create a "DomainMembershipProvider" which interacts with the domain model, and then implement objects in the model which deal with the repository and handle validation/business logic. The repository would then implement data access using our (as yet undecided) ORM/data access tool.

    Are there any glaring holes in this approach? I haven't worked closely with the MembershipProvider class, so I may well be missing something. Alternatively, is there an approach that you think will better serve the requirements I described above?

    Thanks in advance for your thoughts and advice.

    Regards, Zac


  • Ideal Multi-Developer Lamp Stack?

    - by devians
    I would like to build an 'ideal' LAMP development stack:

    - Dual server (virtualised, ESX): Apache/PHP on one, databases (MySQL, PgSQL, etc.) on the other.
    - User (developer) manageable mini environments, or instances.
    - Each developer instance shares the top-level config (available modules and default config, etc.).
    - A developer should have control over their Apache and PHP version for each project.
    - A developer might be able to change minor settings, i.e. magic_quotes on for legacy code.
    - Each project would determine its database provider in its code.

    The idea is that it is one administrate-able server that I can control, and provide globally configured things like APC, Memcached, XDebug, etc. Then by moving into subsets for each project, I can allow my users to quickly control their environments for various projects.

    Essentially I'm proposing the typical system of a developer running their own stack on their own machine, but centralised. In this way I'd hope to avoid problems like cross-OS code problems, database inconsistencies, slightly different installs producing bugs, etc.

    I'm happy to manage this in custom builds from source, but if at all possible it would be great to have a large portion of it managed with some sort of package management. We typically use CentOS, so yum? Has anyone ever built anything like this before? Is there something turnkey that is similar to what I have described? Are there any useful guides I should be reading in order to build something like this?


  • How do I solve an unresolved external when using C++ Builder packages?

    - by David M
    I'm experimenting with reconfiguring my application to make heavy use of packages. Both I and another developer running a similar experiment are running into a bit of trouble when linking using several different packages. We're probably both doing something wrong, but goodness knows what :)

    The situation is this: the first package, PackageA.bpl, contains C++ class FooA. The class is declared with the PACKAGE directive. The second package, PackageB.bpl, contains a class inheriting from FooA, called FooB. It includes FooB.h, and the package is built using runtime packages and links to PackageA by adding a reference to PackageA.bpi. When building PackageB, it compiles fine but linking fails with a number of unresolved externals, the first few of which are:

        [ILINK32 Error] Error: Unresolved external '__tpdsc__ FooA' referenced from C:\blah\FooB.OBJ
        [ILINK32 Error] Error: Unresolved external 'FooA::' referenced from C:\blah\FooB.OBJ
        [ILINK32 Error] Error: Unresolved external '__fastcall FooA::~FooA()' referenced from blah\FooB.OBJ
        etc.

    Running TDump on PackageA.bpl shows:

        Exports from PackageA.bpl
          14 exported name(s), 14 export addresse(s).  Ordinal base is 1.
          Sorted by Name:
            RVA      Ord. Hint Name
            -------- ---- ---- ----
            00002A0C    8 0000 __tpdsc__ FooA
            00002AD8   10 0001 __linkproc__ FooA::Finalize
            00002AC8    9 0002 __linkproc__ FooA::Initialize
            00002E4C   12 0003 __linkproc__ PackageA::Finalize
            00002E3C   11 0004 __linkproc__ PackageA::Initialize
            00006510   14 0007 FooA::
            00002860    5 0008 FooA::FooA(FooA&)
            000027E4    4 0009 FooA::FooA()
            00002770    3 000A __fastcall FooA::~FooA()
            000028DC    6 000B __fastcall FooA::Method1() const
            000028F4    7 000C __fastcall FooA::Method2() const
            00001375    2 000D Finalize
            00001368    1 000E Initialize
            0000610C   13 000F ___CPPdebugHook

    So the class definitely seems to be exported and available to link. I can see entries for the specific things ILink32 says it's looking for and not finding. Running TDump on the BPI file shows similar entries.

    Other info: the class does descend from TObject, though originally, before refactoring into packages, it was a normal C++ class. (More detail below. It seems "safer" using VCL-style classes when trying to solve problems with a very Delphi-ish thing like this anyway. Changing this only changes the order of unresolved externals, to first not finding Method1 and Method2, then the others.)

    Declaration for FooA:

        class PACKAGE FooA : public TObject
        {
        public:
            FooA();
            virtual __fastcall ~FooA();
            FooA(const FooA&);
            virtual __fastcall long Method1() const;
            virtual __fastcall long Method2() const;
        };

    and FooB:

        class FooB : public FooA
        {
        public:
            FooB();
            virtual __fastcall ~FooB();
            // ...other methods...
        };

    All methods definitely are implemented in the .cpp files, so it's not a case of not finding them because they don't exist! The .cpp files also contain #pragma package(smart_init) near the top, under the includes.

    Questions that might help:

    1. Are packages reliable using C++, or are they only useable with Delphi code?
    2. Is linking to the first package by adding a reference to its BPI correct - is that how you're supposed to do it? I could use a LIB, but it seems to make the second package much larger, and I suspect it's statically linking in the contents of the first.
    3. Can we use the PACKAGE directive only on TObject-derived classes? There is no compiler warning using it on standard C++ classes.
    4. Is splitting code into packages the best way to achieve the goal of isolating code and communicating through defined layers/interfaces? I've been investigating this path because it seems to be the C++Builder/Delphi way, and if it worked it looks attractive. But are there better alternatives?

    I'm very new to using packages and have only known about them through using components before. Any general words of advice would be great! We're using C++Builder 2010. I've fabricated the class and method names in the above code examples, but other than that the details are exactly what we're seeing.

    Cheers, David


  • SQL 2008 SQLDMO alternative

    - by alexdelpiero
    Hi! I previously was using SQLDMO to automatically generate scripts from the database. Now I upgraded to SQL Server 2008 and I don't want to use this feature anymore, since Microsoft will be dropping it. Is there any other alternative I can use to connect to a server and generate scripts automatically from a database? Any answer is welcome. Thanks in advance. This is the procedure I was previously using:

        CREATE PROC GenerateSP (
            @server   varchar(30)  = null,
            @uname    varchar(30)  = null,
            @pwd      varchar(30)  = null,
            @dbname   varchar(30)  = null,
            @filename varchar(200) = 'c:\script.sql'
        )
        AS
        DECLARE @object int
        DECLARE @hr int
        DECLARE @return varchar(200)
        DECLARE @exec_str varchar(2000)
        DECLARE @spname sysname
        SET NOCOUNT ON

        -- Sets the server to the local server
        IF @server is NULL SELECT @server = @@servername
        -- Sets the database to the current database
        IF @dbname is NULL SELECT @dbname = db_name()
        -- Sets the username to the current user name
        IF @uname is NULL SELECT @uname = SYSTEM_USER

        -- Create an object that points to the SQL Server
        EXEC @hr = sp_OACreate 'SQLDMO.SQLServer', @object OUT
        IF @hr <> 0
        BEGIN
            PRINT 'error create SQLOLE.SQLServer'
            RETURN
        END

        -- Connect to the SQL Server
        IF @pwd is NULL
        BEGIN
            EXEC @hr = sp_OAMethod @object, 'Connect', NULL, @server, @uname
            IF @hr <> 0
            BEGIN
                PRINT 'error Connect'
                RETURN
            END
        END
        ELSE
        BEGIN
            EXEC @hr = sp_OAMethod @object, 'Connect', NULL, @server, @uname, @pwd
            IF @hr <> 0
            BEGIN
                PRINT 'error Connect'
                RETURN
            END
        END

        -- Verify the connection
        EXEC @hr = sp_OAMethod @object, 'VerifyConnection', @return OUT
        IF @hr <> 0
        BEGIN
            PRINT 'error VerifyConnection'
            RETURN
        END

        SET @exec_str = 'DECLARE script_cursor CURSOR FOR SELECT name FROM ' + @dbname + '..sysobjects WHERE type = ''P'' ORDER BY Name'
        EXEC (@exec_str)
        OPEN script_cursor
        FETCH NEXT FROM script_cursor INTO @spname
        WHILE (@@fetch_status <> -1)
        BEGIN
            SET @exec_str = 'Databases("' + @dbname + '").StoredProcedures("' + RTRIM(UPPER(@spname)) + '").Script(74077,"' + @filename + '")'
            EXEC @hr = sp_OAMethod @object, @exec_str, @return OUT
            IF @hr <> 0
            BEGIN
                PRINT 'error Script'
                RETURN
            END
            FETCH NEXT FROM script_cursor INTO @spname
        END
        CLOSE script_cursor
        DEALLOCATE script_cursor

        -- Destroy the object
        EXEC @hr = sp_OADestroy @object
        IF @hr <> 0
        BEGIN
            PRINT 'error destroy object'
            RETURN
        END
        GO


  • Database migration pattern for Java?

    - by Eno
    I'm working on some database migration code in Java. I'm also using a factory pattern so I can use different kinds of databases, and each kind of database I use implements a common interface.

    What I would like to do is have a migration check that is internal to the class and runs some database schema update code automatically. The actual update is pretty straightforward (I check the schema version in a table and compare against a constant in my app to decide whether to migrate or not, and between which versions of the schema). To make this automatic, I was thinking the test should live inside (or be called from) the constructor. OK, fair enough, that's simple enough. My problem is that I don't want the test to run every single time I instantiate a database object (it runs a query, so having it run on every construction is not efficient). So maybe this should be a static class method?

    I guess my question is: what is a good design pattern for this type of problem? There ought to be a clean way to ensure the migration test runs only once OR is super-efficient.
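
    The version check itself is plain SQL, whichever pattern wraps it; a minimal sketch (table and column names are my own, not from the post):

        -- One small table recording which schema version the database is at
        CREATE TABLE schema_version (
            version    INT       NOT NULL,
            applied_at TIMESTAMP NOT NULL
        );

        -- On startup, compare this against the version constant compiled into the app
        SELECT MAX(version) FROM schema_version;

    Caching that result in a static field guarded by a once-only check (or doing the check lazily on first use) keeps the query off the per-construction path.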


  • Java errors on Lotus Domino Designer Client 8.5.1

    - by ajcooper
    I have a clean install of Lotus Notes 8.5.1 (now with FP3) and I'm getting the following errors in Designer. This is with a new database with a couple of forms and views. I'm finding this is typical across all databases. Is there something I need to install/configure etc.? I'm not new to Notes, but I'm new to 8.5. Thanks, Aidan

        Description                                                 Resource    Path           Location  Type
        Cannot resolve plug-in: org.eclipse.core.runtime            plugin.xml  TestAgent.nsf  line 9    Plug-in Problem
        Cannot resolve plug-in: org.eclipse.ui                      plugin.xml  TestAgent.nsf  line 8    Plug-in Problem
        Cannot resolve plug-in: com.ibm.commons                     plugin.xml  TestAgent.nsf  line 10   Plug-in Problem
        Cannot resolve plug-in: com.ibm.commons.vfs                 plugin.xml  TestAgent.nsf  line 12   Plug-in Problem
        Cannot resolve plug-in: com.ibm.commons.xml                 plugin.xml  TestAgent.nsf  line 11   Plug-in Problem
        Cannot resolve plug-in: com.ibm.designer.runtime            plugin.xml  TestAgent.nsf  line 15   Plug-in Problem
        Cannot resolve plug-in: com.ibm.designer.runtime.directory  plugin.xml  TestAgent.nsf  line 14   Plug-in Problem
        Cannot resolve plug-in: com.ibm.jscript                     plugin.xml  TestAgent.nsf  line 13   Plug-in Problem
        Cannot resolve plug-in: com.ibm.notes.java.api              plugin.xml  TestAgent.nsf  line 20   Plug-in Problem
        Cannot resolve plug-in: com.ibm.xsp.core                    plugin.xml  TestAgent.nsf  line 16   Plug-in Problem
        Cannot resolve plug-in: com.ibm.xsp.core                    plugin.xml  TestAgent.nsf  line 21   Plug-in Problem
        Cannot resolve plug-in: com.ibm.xsp.designer                plugin.xml  TestAgent.nsf  line 18   Plug-in Problem
        Cannot resolve plug-in: com.ibm.xsp.designer                plugin.xml  TestAgent.nsf  line 22   Plug-in Problem
        Cannot resolve plug-in: com.ibm.xsp.domino                  plugin.xml  TestAgent.nsf  line 19   Plug-in Problem
        Cannot resolve plug-in: com.ibm.xsp.domino                  plugin.xml  TestAgent.nsf  line 23   Plug-in Problem
        Cannot resolve plug-in: com.ibm.xsp.extsn                   plugin.xml  TestAgent.nsf  line 17   Plug-in Problem
        Cannot resolve plug-in: com.ibm.xsp.extsn                   plugin.xml  TestAgent.nsf  line 24   Plug-in Problem
        Cannot resolve plug-in: com.ibm.xsp.rcp                     plugin.xml  TestAgent.nsf  line 25   Plug-in Problem


  • Linq to SQL Problem System.Data.Linq.IdentityManager.StandardIdentityManager.MultiKeyManager

    - by luckyluke
    I have a really tricky thing going on here. My project has around 100 tables and they are all mapped by LINQ. Everything works fine in the dev and test environments. These environments are MS Win 2008 R2 servers with SQL 2008 SP1 databases; IIS and SQL are on different machines. Now, on the production environment, which is an MS Win 2003 x64 web farm + geoclustered SQL 2008, it DOES NOT work. All I get is this exception:

        System.IndexOutOfRangeException: Index was outside the bounds of the array.
          at System.Data.Linq.IdentityManager.StandardIdentityManager.MultiKeyManager`3.TryCreateKeyFromValues(Object[] values, MultiKey& k)
          at System.Data.Linq.IdentityManager.StandardIdentityManager.IdentityCache`2.Find(Object[] keyValues)
          at System.Data.Linq.ChangeProcessor.GetOtherItem(MetaAssociation assoc, Object instance)
          at System.Data.Linq.ChangeProcessor.BuildEdgeMaps()
          at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode)
          at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
          at ERS.IIMP.Services.ExposuresSrv.Update(Int32 ExpID, Int32 AssID)
        Services\ExposuresSrv.cs

    My question is: what the hell? They have precisely the same DBML, the DB has exactly THE SAME structure (when I take the DB from prod to TEST and mount it, everything works just great), and the binaries on the web server are the same. I seriously do not know what to do. Has anyone found that LINQ works on one environment and does not on a second? I am really lost here. I really hope you can help me :)


  • What type of webapp is the sweet spot for Scala's Lift framework?

    - by ajay
    What kind of applications are the sweet spot for Scala's Lift web framework? My requirements:

    - Ease of development and maintainability.
    - Ready for production purposes, i.e. a good active online community, regular patches and updates for security and performance fixes, etc.
    - The framework should survive a few years. I don't want to write an app in a framework for which no updates/patches are available after 1 year.
    - Has good UI templating engines.
    - Interoperation with Java. (Scala satisfies this already; just mentioning it here for completeness' sake.)
    - Good component-oriented development. Time required to develop should be proportional to the complexity of the web application.
    - Should not be totally configuration based. I hate it when code gets automatically generated for me and does all sorts of magic under the hood. That is a debugging nightmare.
    - The amount of Lift knowledge required to develop a webapp should be proportional to the complexity of the web application, i.e. I shouldn't have to spend 10+ hours learning Lift just to develop a simple TODO application. (I have knowledge of databases and Scala.)

    Does Lift satisfy these requirements?


  • Several Small, Specific, MySQL Query Cache Questions

    - by Robbie
    I've looked all over the web and in the questions asked here about MySQL caching, and most of them seem very non-specific about a couple of questions that I have about performance and MySQL query caching. Specifically I want answers to these questions; assume for all of them that I have the query cache enabled and that it is of type 2, or "DEMAND":

    1. Is the query cache per table, per database, or per server? Meaning: if I have the cache size set to X and have T tables and D databases, will I be caching TX, DX, or X amount of data?
    2. If I have table T1, which I regularly use the SQL_CACHE hint on for SELECT queries, and table T2, which I never do, when I query T2 with a SELECT query will it check through the cache first before performing the query? (Note: I don't want to use SQL_NO_CACHE for all T2 queries.)
    3. Assume the same situation as in question 2. If I alter (INSERT, DELETE) table T2, will any processing be done on the cache?
    4. For answers to 2 and 3, is this processing time negligible if T2 is constantly being altered and is the target of a majority of my SELECT queries?
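
    For what it's worth, the hint in question 2 is applied per statement, and the cache itself is a single server-wide buffer sized by query_cache_size. A quick sketch of the pattern (T1/T2 from the post, columns invented for illustration):

        -- With query_cache_type = 2 (DEMAND), only hinted queries enter the cache
        SELECT SQL_CACHE id, name FROM T1 WHERE id = 42;  -- cached
        SELECT id, name FROM T2 WHERE id = 42;            -- not cached, no SQL_NO_CACHE needed

        -- Cache-wide counters, handy for judging the overhead asked about in 3 and 4
        SHOW STATUS LIKE 'Qcache%';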


  • The "first past the post election" query problem

    - by MPelletier
    This problem may seem like school work, but it isn't. At best it is self-imposed school work. I encourage any teachers to take it as an example if they wish.

    "First past the post" elections are single-round, meaning that whoever gets the most votes wins; there are no second rounds. Suppose a table for an election:

        CREATE TABLE ElectionResults (
            DistrictHnd   INTEGER       NOT NULL,
            PartyHnd      INTEGER       NOT NULL,
            CandidateName VARCHAR2(100) NOT NULL,
            TotalVotes    INTEGER       NOT NULL,
            PRIMARY KEY (DistrictHnd, PartyHnd));

    The table has two foreign keys: DistrictHnd points to a District table (listing all the different electoral districts) and PartyHnd points to a Party table (listing all the different political parties). I won't bother with the other tables here; joining them is trivial. This is just a wee bit of context.

    The question: what SQL query will return a table listing the DistrictHnd, PartyHnd, CandidateName and TotalVotes of the winners (max votes) in each district? This does not suppose any particular database system. If you wish to stick to a particular implementation of SQL, go the way of SQLite and MySQL. If you can devise a better schema (or an easier one), that is acceptable too.

    Criteria: simplicity, portability to other databases.
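
    A sketch of one common shape for the answer (mine, not from the post): compute each district's maximum in a derived table and join back against it. It stays within SQLite/MySQL-portable SQL; note that a tied district returns one row per tied candidate:

        SELECT er.DistrictHnd, er.PartyHnd, er.CandidateName, er.TotalVotes
        FROM ElectionResults er
        JOIN (SELECT DistrictHnd, MAX(TotalVotes) AS MaxVotes
              FROM ElectionResults
              GROUP BY DistrictHnd) win
          ON  win.DistrictHnd = er.DistrictHnd
          AND win.MaxVotes    = er.TotalVotes;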


  • h2 (embedded mode ) database files problem

    - by aeter
    There is an H2 database file in my src directory (Java, Eclipse): h2test.db.

    The problem: starting h2.jar from the command line (and thus the H2 browser interface on port 8082), I created 2 tables, 'test1' and 'test2', in h2test.db and put some data in them. When trying to access them from Java code (JDBC), it throws a "table not found" exception. A "show tables" from the Java code shows a result set with 0 rows. Also, when creating a new table ('newtest') from the Java code (CREATE TABLE ... etc.), I cannot see it when starting the h2.jar browser interface afterwards; just the other two tables ('test1' and 'test2') are shown (but the newly created table 'newtest' is then accessible from the Java code).

    I'm inexperienced with embedded databases; I believe I'm doing something fundamentally wrong here. My assumption is that I'm accessing the same file - once from the Java app, and once from the H2 console/browser interface. I cannot seem to understand it; what am I doing wrong here?


  • SQL Server 2008 alternative for SQL-DMO

    - by alexdelpiero
    Hi! I previously was using SQL-DMO to automatically generate scripts from the database. Now I upgraded to SQL Server 2008 and I don't want to use this feature anymore since Microsoft will be dropping this feature off. Is there any other alternative I can use to connect to a server and generate scripts automatically from a database? Any answer is welcome. Thanks in advance. This is the procedure I was previously using:

        CREATE PROC GenerateSP (
            @server   varchar(30)  = null,
            @uname    varchar(30)  = null,
            @pwd      varchar(30)  = null,
            @dbname   varchar(30)  = null,
            @filename varchar(200) = 'c:\script.sql'
        )
        AS
        DECLARE @object int
        DECLARE @hr int
        DECLARE @return varchar(200)
        DECLARE @exec_str varchar(2000)
        DECLARE @spname sysname
        SET NOCOUNT ON

        -- Sets the server to the local server
        IF @server is NULL SELECT @server = @@servername
        -- Sets the database to the current database
        IF @dbname is NULL SELECT @dbname = db_name()
        -- Sets the username to the current user name
        IF @uname is NULL SELECT @uname = SYSTEM_USER

        -- Create an object that points to the SQL Server
        EXEC @hr = sp_OACreate 'SQLDMO.SQLServer', @object OUT
        IF @hr <> 0
        BEGIN
            PRINT 'error create SQLOLE.SQLServer'
            RETURN
        END

        -- Connect to the SQL Server
        IF @pwd is NULL
        BEGIN
            EXEC @hr = sp_OAMethod @object, 'Connect', NULL, @server, @uname
            IF @hr <> 0
            BEGIN
                PRINT 'error Connect'
                RETURN
            END
        END
        ELSE
        BEGIN
            EXEC @hr = sp_OAMethod @object, 'Connect', NULL, @server, @uname, @pwd
            IF @hr <> 0
            BEGIN
                PRINT 'error Connect'
                RETURN
            END
        END

        -- Verify the connection
        EXEC @hr = sp_OAMethod @object, 'VerifyConnection', @return OUT
        IF @hr <> 0
        BEGIN
            PRINT 'error VerifyConnection'
            RETURN
        END

        SET @exec_str = 'DECLARE script_cursor CURSOR FOR SELECT name FROM ' + @dbname + '..sysobjects WHERE type = ''P'' ORDER BY Name'
        EXEC (@exec_str)
        OPEN script_cursor
        FETCH NEXT FROM script_cursor INTO @spname
        WHILE (@@fetch_status <> -1)
        BEGIN
            SET @exec_str = 'Databases("' + @dbname + '").StoredProcedures("' + RTRIM(UPPER(@spname)) + '").Script(74077,"' + @filename + '")'
            EXEC @hr = sp_OAMethod @object, @exec_str, @return OUT
            IF @hr <> 0
            BEGIN
                PRINT 'error Script'
                RETURN
            END
            FETCH NEXT FROM script_cursor INTO @spname
        END
        CLOSE script_cursor
        DEALLOCATE script_cursor

        -- Destroy the object
        EXEC @hr = sp_OADestroy @object
        IF @hr <> 0
        BEGIN
            PRINT 'error destroy object'
            RETURN
        END
        GO


  • Handling dependencies with IoC that change within a single function call

    - by Jess
    We are trying to figure out how to set up Dependency Injection for situations where service classes can have different dependencies based on how they are used. In our specific case, we have a web app where 95% of the time the connection string is the same for the entire request, but sometimes it can change.

    For example, we might have 2 classes with the following dependencies (simplified version - the service actually has 4 dependencies):

        public LoginService(IUserRepository userRep) { }

        public UserRepository(IContext dbContext) { }

    In our IoC container, most of our dependencies are auto-wired, except the Context, for which I have something like this (not actual code, it's from memory... this is StructureMap):

        x.ForRequestedType().Use()
            .WithCtorArg("connectionString").EqualTo(Session["ConnString"]);

    For 95% of our web application, this works perfectly. However, we have some admin-type functions that must operate across thousands of databases (one per client). Basically, we'd want to do this:

        public CreateUserList(IList<string> connStrings)
        {
            foreach (connString in connStrings)
            {
                //first create dependency graph using new connection string
                ????

                //then call service method on new database
                _loginService.GetReportDataForAllUsers();
            }
        }

    My question is: how do we create that new dependency graph each time through the loop, while maintaining something that can easily be tested?


  • Parsing a blackberry .ipd file

    - by galaxywatcher
    I recently lost my BlackBerry. When I discovered it was gone very shortly afterwards and called it, the SIM card had already been removed. I ain't seeing that BlackBerry again. OK, I am out $300, but at least my data is backed up. Fortunately I had an older working BlackBerry, so I got a new SIM card and proceeded to restore my data using BlackBerry Desktop Manager. 7000+ emails, hundreds of AutoText entries, SMS messages, calendar events, all backing up. Looking good.

    Lo and behold! My address book contacts refuse to back up? I try Advanced, and it is greyed out as an option to restore. Far more frustrating than losing my BlackBerry in the first place is wrangling with software that defies human logic. OK, now I guess I will have to enter all 327 names by hand. That is, if I can read the .ipd file. I have tried the free version of ABC Amber BlackBerry Editor, but when I open the .ipd file, the contacts just do not show up. I am beginning to feel like the gods are conspiring against me.

    Then I found this: http://jabide.com/2009/03/parse-blackberry-ipd-files/ - he posted a Perl script that claims to extract the files. I copied and pasted the code, and it did list all the different databases in my .ipd file; I was elated that a cool solution like this was published. But when I followed the instructions, garbled data with some discernible ASCII was sent to standard output, not the .csv file he said it would produce. This is enough to make a grown man cry.

    Does anyone out there have a solution to extract my address book contacts from an .ipd file?


  • Beginner SQL question: querying gold and silver tag badges in Stack Exchange Data Explorer

    - by polygenelubricants
    I'm using the Stack Exchange Data Explorer to learn SQL, but I think the fundamentals of the question are applicable to other databases. I'm trying to query the Badges table, which according to Stexdex (that's what I'm going to call it from now on) has the following schema:

        Badges
            Id
            UserId
            Name
            Date

    This works well for badges like [Epic] and [Legendary], which have unique names, but the silver and gold tag-specific badges seem to be mixed in together by having the same exact name. Here's an example query I wrote for the [mysql] tag:

        SELECT UserId as [User Link], Date
        FROM Badges
        Where Name = 'mysql'
        Order By Date ASC

    The (slightly annotated) output, as seen on Stexdex:

        User Link        Date
        ---------------  -------------------  // all for silver except where noted
        Bill Karwin      2009-02-20 11:00:25
        Quassnoi         2009-06-01 10:00:16
        Greg             2009-10-22 10:00:25
        Quassnoi         2009-10-31 10:00:24  // for gold
        Bill Karwin      2009-11-23 11:00:30  // for gold
        cletus           2010-01-01 11:00:23
        OMG Ponies       2010-01-03 11:00:48
        Pascal MARTIN    2010-02-17 11:00:29
        Mark Byers       2010-04-07 10:00:35
        Daniel Vassallo  2010-05-14 10:00:38

    This is consistent with the current list of silver and gold earners at the moment of this writing, but to speak in more timeless terms: as of the end of May 2010, only 2 users have earned the gold [mysql] tag, Quassnoi and Bill Karwin, as evidenced in the above result by their names being the only ones that appear twice.

    So this is the way I understand it: the first time an Id appears (in chronological order) is for the silver badge; the second time is for the gold.

    Now, the above result mixes the silver and gold entries together. My questions are:

    1. Is this a typical design, or are there much friendlier schemas/normalizations/whatever you call it?
    2. In the current design, how would you query the silver and gold badges separately? GROUP BY Id and pick the min/max or first/second by the Date somehow?
    3. How can you write a query that lists all the silver badges first, then all the gold badges? Imagine also that the "real" query may be more complicated, i.e. not just listing by date. How would you write it so that it doesn't have too much repetition between the silver and gold subqueries? Is it perhaps more typical to do two totally separate queries instead?
    4. What is this idiom called? A row "partitioning" query to put them into "buckets" or something?
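
    Since Stexdex runs on SQL Server, ranking functions are one natural fit here. A sketch of my own (not from the post) that addresses questions 2 and 3 in one pass:

        -- Nth award of the same badge name to the same user: 1 = silver, 2 = gold
        SELECT UserId AS [User Link], Date,
               CASE rn WHEN 1 THEN 'silver' ELSE 'gold' END AS Grade
        FROM (SELECT UserId, Date,
                     ROW_NUMBER() OVER (PARTITION BY UserId ORDER BY Date) AS rn
              FROM Badges
              WHERE Name = 'mysql') AS t
        ORDER BY rn, Date;

    Ordering by rn first lists every silver badge before any gold one, without repeating the subquery.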


  • Setting up a multi-site CMS, collecting thoughts about the DB schema

    - by Ben Fransen
    Hello all,

    I'm collecting some thoughts about creating a multi-site CMS. In my opinion there are two major approaches:

    1. All data is stored in 1 database, giving me the advantage of a single point of updates.
    2. Separate databases, so each client has its own database, giving me the advantage of measuring bandwidth.

    Option 1 has the disadvantage of making bandwidth hard to measure, while option 2 has the disadvantage of losing the single-point-of-update structure. Are there any generic approaches for creating a sort of update system? So my clients can download a small package (maybe a zip with a conf file to tell the update script where to put all the files and how to extend the database?)

    Do you guys have some thoughts about the best solution for a situation like this? I have my own webserver, full access to all resources, and I'm developing in PHP with MySQL as the DBMS. I hope to hear from you, and I surely appreciate any effort you make to help me further!

    Greets from Holland, Ben Fransen
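
    For the single-database route, the usual shape (my sketch, all names purely illustrative) is to scope every content row to a site:

        -- One shared schema; every row belongs to a site
        CREATE TABLE sites (
            site_id INT          NOT NULL AUTO_INCREMENT PRIMARY KEY,
            domain  VARCHAR(255) NOT NULL UNIQUE
        );

        CREATE TABLE pages (
            page_id INT          NOT NULL AUTO_INCREMENT PRIMARY KEY,
            site_id INT          NOT NULL,
            title   VARCHAR(255) NOT NULL,
            body    TEXT,
            FOREIGN KEY (site_id) REFERENCES sites (site_id)
        );

        -- Every query is filtered by the requesting site
        SELECT title, body FROM pages WHERE site_id = 42;

    This keeps the single update point; per-client bandwidth can still be approximated by logging bytes served per site_id at the application layer.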

    Read the article
