Search Results

Search found 31483 results on 1260 pages for 'database migration'.

  • Is it possible to dynamically set a static string during *class* Initialisation?

    - by LonnieBest
    I'm trying to dynamically create a connection string at compile time:

        public static string ConnectionString
        {
            get
            {
                string connectionString = @"Data Source=" + myLibrary.common.GetExeDir() + @"\Database\db.sdf;";
                return connectionString;
            }
        }

    I keep running into type-initialisation errors. I was trying to avoid having to set the connection string in every application that uses my code library, since the location of the database is different for each project that uses it. I have code that can determine the correct string, but I wanted to run it at compile time. Is this even possible?
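
    A property getter already runs at runtime, so nothing in it can happen at compile time. A minimal sketch of the usual alternative, computing the string once in a static initialiser (myLibrary.common.GetExeDir() is the question's own helper):

        public static class DbConfig
        {
            // Evaluated once, when the class is first used at runtime -- the
            // closest .NET gets to "set during class initialisation". If
            // GetExeDir() throws here, callers see a TypeInitializationException,
            // which matches the errors described above.
            public static readonly string ConnectionString =
                @"Data Source=" + myLibrary.common.GetExeDir() + @"\Database\db.sdf;";
        }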

  • Gujarati Font Rendering

    - by Vishal Khakhkhar
    I have an SQLite database containing Gujarati words. The SQL for the database is:

        BEGIN TRANSACTION;
        CREATE TABLE eng_guj (_id INTEGER PRIMARY KEY, eng_word, guj_word);
        INSERT INTO eng_guj VALUES(1,'aardvark','??? ??????? ?????????? ?? ?????? ????? ??????.');
        COMMIT;

    I want to display the text in a TextView, but it is not rendering properly: a Gujarati word such as "???????" is displayed as "??????". I have already tried Typeface with several different TTF fonts.
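
    For reference, a minimal sketch of applying a bundled Gujarati font to a TextView (the asset path and font file name are hypothetical). Note that even with the right glyphs, Android versions before 4.0 lack complex-script shaping, which breaks conjunct rendering regardless of the font used:

        // Hypothetical font asset: any Gujarati-capable TTF under assets/fonts.
        Typeface gujarati = Typeface.createFromAsset(getAssets(), "fonts/gujarati.ttf");
        TextView wordView = (TextView) findViewById(R.id.guj_word);
        wordView.setTypeface(gujarati);
        wordView.setText(gujWord); // value read from the guj_word column of eng_guj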

  • Building objects from an XML file at runtime and initializing them, in one pass?

    - by KaluSingh Gabbar
    I have to parse an XML file, build an object representation from it, and then create entries in various databases for those objects. Currently I make two passes: in the first pass all I can do is create the asset records in the various databases, and in the second pass I fetch the values for all the data and put them in the database. I have a feeling this can be done in a single pass, but I just want to hear your opinions. As I am a student who has just started professional work, I would appreciate help from experienced people: if you have ideas or have done similar work, please shed some light on the topic so that I can think over the possibilities and get a prototype going based on your suggestions. Thanks a lot for your time, I honestly appreciate it.
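
    If the cross-references are the only reason for the second pass, one option is to run the second loop over the objects already in memory instead of over the file and the databases. A sketch in Python, with hypothetical element and attribute names:

        import xml.etree.ElementTree as ET

        def load_assets(path):
            """One pass over the XML file; cross-references resolved from memory."""
            assets = {}      # id -> asset dict
            deferred = []    # (asset, referenced id) pairs seen before their target
            for node in ET.parse(path).getroot().iter("asset"):   # hypothetical tag
                asset = {"id": node.get("id"), "value": node.get("value")}
                assets[asset["id"]] = asset
                ref = node.get("value-ref")                       # hypothetical attribute
                if ref is not None:
                    deferred.append((asset, ref))
            for asset, ref in deferred:   # second loop, but over memory only
                asset["value"] = assets[ref]["value"]
            return assets                 # now write everything to the databases once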

  • MySQL driver for Rails in Windows 7 x64

    - by Darth
    I've got a problem connecting to a MySQL database on my freshly installed Windows 7 machine. I get this error when I try to migrate my database:

        !!! The bundled mysql.rb driver has been removed from Rails 2.2.
        Please install the mysql gem and try again: gem install mysql.
        rake aborted!
        193: %1 is not a valid Win32 application -
        C:/Ruby/lib/ruby/gems/1.8/gems/mysql-2.8.1-x86-mswin32/lib/1.8/mysql_api.so

    I currently have installed:

        ruby 1.8.6 (2008-08-11 patchlevel 287) [i386-mswin32]
        mysql version 5.0.86 for Win64
        gem 1.3.1
        mysql-2.8.1-x86-mswin32
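
    Error 193 means a 32-bit process tried to load a 64-bit DLL: the i386 Ruby build is picking up a client library from the Win64 MySQL installation. A sketch of the usual workaround, putting a 32-bit MySQL client library where Ruby will find it (paths are illustrative):

        rem Copy a 32-bit libmysql.dll (e.g. from a 32-bit MySQL install or a
        rem 32-bit Connector/C package) into Ruby's bin directory:
        copy "C:\mysql-32bit\lib\libmysql.dll" "C:\Ruby\bin\"
        rem Then re-run the migration:
        rake db:migrate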

  • RODBC sqlSave and column names

    - by waanders
    I have a question about using sqlSave. How does RODBC map data frame columns to database table columns? If I have a table with columns X and Y and a data frame with columns X and Y, RODBC puts X into X and Y into Y (I found this out by trial and error). But can I explicitly tell R how to map data.frame columns to table columns, e.g. put A into X and B into Y? I'm rather new to R and find the RODBC manual a bit cryptic, nor can I find an example on the internet.
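
    sqlSave matches data frame columns to table columns by name, so the simplest way to control the mapping is to rename the data frame's columns before saving. A sketch (the DSN and table name are hypothetical):

        library(RODBC)
        ch <- odbcConnect("mydsn")                 # hypothetical DSN
        df <- data.frame(A = 1:3, B = c(10, 20, 30))
        names(df) <- c("X", "Y")                   # map A -> X, B -> Y
        sqlSave(ch, df, tablename = "mytable", append = TRUE, rownames = FALSE)
        odbcClose(ch)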

  • Which collation should I use to store these country names in MySQL?

    - by morpheous
    I am trying to store a list of countries in a MySQL database. I am having problems storing non-English names like these:

        São Tomé and Príncipe
        República de El Salvador

    They are stored with strange characters in the db (and are therefore output strangely in my HTML pages). I have tried different combinations of collations for the database and for the MySQL connection: the "obvious" setting was to use utf8_unicode_ci for both, but to my utter surprise that did not solve the problem. Does anyone know how to resolve this issue?
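
    Mojibake like this is usually a character-set problem rather than a collation problem: the table, the connection, and the HTML page all have to agree on UTF-8. A sketch of the usual fixes (database, table, and column names are illustrative):

        -- Convert the schema and table storage to UTF-8:
        ALTER DATABASE mydb CHARACTER SET utf8 COLLATE utf8_unicode_ci;
        ALTER TABLE countries CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
        -- Make each client connection speak UTF-8 too (or set the
        -- equivalent charset option in the client library):
        SET NAMES utf8;

    The HTML page should then declare the same encoding, e.g. a Content-Type of text/html; charset=utf-8.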

  • MySQL - Calculating fields on the fly vs storing calculated data

    - by Christian Varga
    Hi everyone, I apologise if this has been asked before, but I can't find an answer to a question I have about calculating values on the fly versus storing them in the database. I have read a few articles suggesting it is preferable to calculate when you can, but I would like to know whether that still applies to the following two examples.

    Example 1: say you are storing data about a car. You store the fuel tank size in litres and the consumption in litres per 100 km. You also want to know how many kilometres it can travel, which can be calculated from the tank size and the economy. I see two ways of doing this: (1) when a car is added or updated, calculate the range and store it as a field in the database; or (2) every time a car is accessed, calculate the range on the fly. Because the car's economy and tank size don't change (although they could be edited), the range is a fairly static value. I don't see why we would calculate it every single time the car is accessed; wouldn't that waste CPU time, as opposed to simply storing it in a separate field and calculating only when a car is added or updated?

    My next example, which is almost an entirely different question (but on the same topic), relates to counting children. Say we have an app with categories and items, and a view that displays all the categories with a count of the items inside each one. Again, which is better: running a MySQL query that counts the items in each category every single time the page is accessed, or storing the count in a field on the categories table and updating it when an item is added or deleted?

    I know it is redundant to store anything that can be calculated, but I worry that calculating fields or counting records might be slow compared with reading a stored value. If that's not the case, please let me know; I just want to learn when to use each method. On a small scale I guess it wouldn't matter either way, but for apps like Facebook, would they really count the number of friends you have every time someone views your profile, or would they store it in a field? I'd appreciate any responses to both of these scenarios, and any resource that explains the benefits of calculating versus storing. Thanks in advance, Christian
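
    For illustration, a sketch of both approaches in MySQL, with hypothetical table and column names: derive on read with a view, or denormalise and refresh on write:

        -- Derive on read: always consistent, computed per query.
        CREATE VIEW car_range AS
        SELECT id, tank_litres / litres_per_100km * 100 AS range_km
        FROM cars;

        -- Denormalise on write: fast reads, but the application must rerun
        -- this (or an incremental version) whenever an item is added or deleted.
        UPDATE categories c
        SET c.item_count = (SELECT COUNT(*) FROM items i WHERE i.category_id = c.id);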

  • Search relevance from XML docs (XQuery?) vs MySQL

    - by Marius
    Hello there, I have a website whose documents are saved as XML files, all with the same structure. I need a search engine that selects the documents with the highest relevance for the keywords a searching user enters. I thought it could (?) be a good idea to do this with XQuery, rather than storing the information twice (in the XML docs and in a MySQL database) and running relevance searches against the MySQL database. Is XQuery any good for this, and how would I do it? What speed can I expect on 1000+ documents of about 7 kB each? Thank you for your time. Kind regards
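
    For what it's worth, a sketch of a relevance-ranked query using the XQuery Full Text score clause. This assumes an engine that implements the W3C Full Text extension (BaseX and eXist-db are common choices), and the collection and element names are hypothetical:

        (: rank documents by full-text score for the user's keywords :)
        for $doc score $s in collection("articles")
            [.//body contains text { "keyword1", "keyword2" } any]
        order by $s descending
        return $doc//title

    On roughly 1000 documents of 7 kB each, an engine with a full-text index should handle such queries interactively.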

  • Web based KVM management for Ubuntu

    - by Tim
    We've got a single Ubuntu 9.10 root server on which we want to run multiple KVM virtual machines. To administer these virtual machines I'd like a web-based KVM management tool, but I don't know which one to choose from the list of tools mentioned on linux-kvm.org. I've used virsh & virt-manager on my desktop, but would like a web interface for the server. I tested ConVirt on my desktop, but it failed to pick up KVM machines from virsh/virt-manager, and I could not get KVM virtual machine import to work (only Xen). oVirt looks good, but I can't find out if and how I can install it on Ubuntu 9.10. (And I'd really rather not waste another few days testing stuff that might not work in the end.) Can anyone recommend a good web-based KVM management tool that is easy to install on Ubuntu 9.10? I'm looking for something that will also allow me to run other services like Apache and PostgreSQL besides hosting virtual machines, so preferably something fairly lightweight with no dedicated OS install. We don't need any professional clustering/migration or anything, just something that will let us create, start, inspect, administer & stop virtual machines from a web page. Best regards, Tim. Update: anyone have any suggestions? It's awfully quiet here...

  • No unique bean of type [javax.persistence.EntityManager] is defined

    - by sebajb
    I am using JUnit 4 to test DAO access with Spring (annotations) and JPA (Hibernate). The data source is configured through JNDI (WebLogic) with an Oracle backend. The persistence unit is configured with just a name and a RESOURCE_LOCAL transaction type. The application context file contains the declarations for annotations, JPA configuration, transactions, and the default package and configuration for annotation detection.

    Application context:

        <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
            <property name="persistenceUnitName" value="workRequest"/>
            <property name="dataSource" ref="dataSource" />
            <property name="jpaVendorAdapter">
                <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
                    <property name="databasePlatform" value="${database.target}"/>
                    <property name="showSql" value="${database.showSql}" />
                    <property name="generateDdl" value="${database.generateDdl}" />
                </bean>
            </property>
        </bean>

        <bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
            <property name="jndiName">
                <value>workRequest</value>
            </property>
            <property name="jndiEnvironment">
                <props>
                    <prop key="java.naming.factory.initial">weblogic.jndi.WLInitialContextFactory</prop>
                    <prop key="java.naming.provider.url">t3://localhost:7001</prop>
                </props>
            </property>
        </bean>

        <bean id="txManager" class="org.springframework.orm.jpa.JpaTransactionManager">
            <property name="entityManagerFactory" ref="entityManagerFactory" />
        </bean>

        <bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor"/>
        <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" />

    JUnit test case:

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations = { "classpath:applicationContext.xml" })
        public class AssignmentDaoTest {
            private AssignmentDao assignmentDao;

            @Test
            public void readAll() {
                assertNotNull("assignmentDao cannot be null", assignmentDao);
                List assignments = assignmentDao.findAll();
                assertNotNull("There are no assignments yet", assignments);
            }
        }

    Regardless of what changes I make, I get: No unique bean of type [javax.persistence.EntityManager] is defined. Any hint on what this could be? I am running the tests inside Eclipse.
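
    That error usually means something in the context is trying to autowire an EntityManager by type; Spring registers only the factory as a bean, not the EntityManager itself. A sketch of the usual fix, assuming the DAO currently injects the EntityManager with @Autowired (class and entity names follow the question):

        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;
        import org.springframework.stereotype.Repository;

        @Repository
        public class AssignmentDao {

            // @PersistenceContext, not @Autowired: Spring hands out a shared,
            // transaction-scoped EntityManager proxy instead of looking for an
            // EntityManager bean in the context (there is none, hence the error).
            @PersistenceContext
            private EntityManager em;

            public List findAll() {
                return em.createQuery("select a from Assignment a").getResultList();
            }
        }

    Separately, the assignmentDao field in the test carries no @Autowired annotation, so it would stay null even once the context loads; the first assertNotNull would fail for that reason alone.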

  • How to convert full outer join query to O-R query?

    - by Kugel
    I'm converting a relational database into an object-relational one in Oracle. I have a query that uses a full outer join in the old schema. Is it possible to write the same query for the O-R database without explicitly using a full outer join? For a normal inner join it's simple: I just use dot notation together with ref/deref. I'm interested in this in general, so let's say the relational query is:

        select a.attr, b.attr
        from a full outer join b on (a.fk = b.pk);

    I want to know if it's a good idea to do it this way:

        select a.attr, b.attr
        from a_obj a full outer join b_obj b on (a.b_ref = ref(b));
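
    If the aim is to avoid the explicit full outer join altogether, one alternative is the classic emulation with a union: navigate the ref for the rows on one side, then append the unmatched rows from the other. A sketch built on the question's own object tables:

        select a.attr, a.b_ref.attr as b_attr
          from a_obj a                  -- every a; the ref navigation yields
                                        -- null when b_ref is null
        union all
        select null, b.attr
          from b_obj b                  -- b rows that no a references
         where not exists (select 1 from a_obj a where a.b_ref = ref(b));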

  • Reusing slot numbers in Linux software RAID arrays

    - by thkala
    When a hard disk drive in one of my Linux machines failed, I took the opportunity to migrate from RAID5 to a 6-disk software RAID6 array. At the time of the migration I did not have all 6 drives - more specifically the fourth and fifth (slots 3 and 4) drives were already in use in the originating array, so I created the RAID6 array with a couple of missing devices. I now need to add those drives in those empty slots. Using mdadm --add does result in a proper RAID6 configuration, with one glitch - the new drives are placed in new slots, which results in this /proc/mdstat snippet:

        md0 : active raid6 sde1[7] sdd1[6] sda1[0] sdf1[5] sdc1[2] sdb1[1]
              25185536 blocks super 1.0 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]

    mdadm -E verifies that the actual slot numbers in the device superblocks are correct, yet the numbers shown in /proc/mdstat are still weird. I would like to fix this glitch, both to satisfy my inner perfectionist and to avoid any potential sources of future confusion in a crisis. Is there a way to specify which slot a new device should occupy in a RAID array?

    UPDATE: I have verified that the slot number persists in the component device superblock. For the version 1.0 superblocks that I am using, that would be the dev_number field as defined in include/linux/raid/md_p.h of the Linux kernel source. I am now considering direct modification of said field to change the slot number - I don't suppose there is some standard way to manipulate the RAID superblock?

  • Fluent NHibernate not caching queries in ASP.NET MVC

    - by AWC
    I'm using Fluent NHibernate with ASP.NET MVC and I'm not seeing anything being cached when making queries against the database. I'm not currently using an L2 cache implementation. Should I see queries being cached without configuring an out-of-process L2 cache? The mappings look like this:

        Table("ApplicationCategories");
        Not.LazyLoad();
        Cache.ReadWrite().IncludeAll();
        Id(x => x.Id);
        Map(x => x.Name).Not.Nullable();
        Map(x => x.Description).Nullable();

    Example criteria:

        return session
            .CreateCriteria<ApplicationCategory>()
            .Add(Restrictions.Eq("Name", _name))
            .SetCacheable(true);

    Every time I request an application category by name, it hits the database. Is this expected behaviour?
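
    It is expected: SetCacheable(true) uses NHibernate's query cache, which sits on top of the second-level cache, so with no L2 provider configured nothing is cached across sessions (the first-level cache only caches entities by id within a single session). A sketch of enabling it in Fluent NHibernate; the in-process HashtableCacheProvider shown is only suitable for testing, and ApplicationCategoryMap stands in for whatever the mapping class is called:

        var sessionFactory = Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2005
                .ConnectionString(connectionString)
                .Cache(c => c.UseQueryCache()   // the query cache rides on the L2 cache
                             .ProviderClass<NHibernate.Cache.HashtableCacheProvider>()))
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<ApplicationCategoryMap>())
            .BuildSessionFactory();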

  • Is a many-to-many relationship with extra fields the right tool for my job?

    - by whichhand
    Previously I had a go at asking a more specific version of this question, but had trouble articulating what my question was. On reflection that made me doubt whether my chosen solution was correct for the problem, so this time I will explain the problem and ask whether (a) I am on the right track and (b) there is a way around my current brick wall.

    I am currently building a web interface to enable an existing database to be interrogated by a small number of users. Sticking with the analogy from the docs, I have models that look something like this:

        class Musician(models.Model):
            first_name = models.CharField(max_length=50)
            last_name = models.CharField(max_length=50)
            dob = models.DateField()

        class Album(models.Model):
            artist = models.ForeignKey(Musician)
            name = models.CharField(max_length=100)

        class Instrument(models.Model):
            artist = models.ForeignKey(Musician)
            name = models.CharField(max_length=100)

    There is one central table (Musician) and several tables of associated data related to it by either ForeignKey or OneToOneFields. Users interact with the database by creating filtering criteria to select a subset of Musicians based on the data in the main or related tables. Likewise, the users can then select which piece of data is used to rank the results presented to them. The results are viewed as a two-dimensional table with a single row per Musician and selected data fields (or aggregates) in each column. To give you some idea of scale, the database has ~5,000 Musicians with around 20 fields of related data. Up to here everything is fine and I have a working implementation.

    However, it is important that a given user can upload their own annotation data sets (more than one) and then filter and order on these in the same way they can with the existing data. The way I tried to do this was to add the models:

        class UserDataSets(models.Model):
            user = models.ForeignKey(User)
            name = models.CharField(max_length=100)
            description = models.CharField(max_length=64)
            results = models.ManyToManyField(Musician, through='UserData')

        class UserData(models.Model):
            artist = models.ForeignKey(Musician)
            dataset = models.ForeignKey(UserDataSets)
            score = models.IntegerField()

            class Meta:
                unique_together = (("artist", "dataset"),)

    I have a simple upload mechanism enabling users to upload a data set file that consists of a 1:1 relationship between a Musician and their "score". Within a given user dataset each artist is unique, but different datasets are independent of each other and will often contain entries for the same musician. This works fine for displaying the data; starting from a given artist I can do something like this:

        artist = Musician.objects.get(pk=1)
        dataset = UserDataSets.objects.get(pk=5)
        print artist.userdata_set.get(dataset=dataset.pk)

    However, this approach fell over when I came to implement the filtering and ordering of a queryset of musicians based on the data contained in a single user data set. For example, I can easily order the queryset based on all of the data in the UserData table:

        artists = Musician.objects.all().order_by('userdata__score')

    But that does not help me order by the scores within one given user dataset. Likewise, I need to be able to filter the queryset based on the "scores" from different user data sets (e.g. find all musicians with a score > 5 in dataset1 and < 2 in dataset2). Is there a way of doing this, or am I going about the whole thing wrong?
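
    For what it's worth, the models as given can support this: chaining filter() calls on a multi-valued relation makes Django generate a separate join per call, which is exactly what per-dataset conditions need. A sketch (ds1 and ds2 stand for UserDataSets instances):

        # Order musicians by their score within one particular dataset:
        artists = (Musician.objects
                   .filter(userdata__dataset=ds1)
                   .order_by('-userdata__score'))

        # Score > 5 in ds1 AND score < 2 in ds2: each chained filter() on the
        # multi-valued userdata relation adds its own join against UserData.
        artists = (Musician.objects
                   .filter(userdata__dataset=ds1, userdata__score__gt=5)
                   .filter(userdata__dataset=ds2, userdata__score__lt=2))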

  • How to configure Remote BLOB Storage (RBS) with Microsoft Dynamics CRM 4.0?

    - by jk
    Hi, we have a working site for Dynamics CRM 4.0, and we are storing images in the database. The database is now growing very fast and the server is dying, so I want to enable Remote BLOB Storage (RBS) with Dynamics CRM 4.0. I tried to install RBS for testing, but every guide I find configures it with SharePoint 2010, not with Dynamics CRM. Does anybody know how to install and configure it with Dynamics CRM 4.0? Does RBS work with the Standard Edition of SQL Server 2008? I followed the steps at the link below to install it, but they are for SharePoint: http://technet.microsoft.com/en-us/library/ee663474.aspx Any help is appreciated. Thanks

  • How do I list all tables in all databases in SQL Server in a single result set?

    - by msorens
    I am looking for T-SQL code to list all tables in all databases in SQL Server (at least in SS2005 and SS2008; it would be nice if it also applied to SS2000). The catch, however, is that I would like a single result set. This precludes the otherwise excellent answer from Pinal Dave:

        sp_msforeachdb 'select "?" AS db, * from [?].sys.tables'

    The above call generates one result set per database, which is fine in an IDE like SSMS that can display multiple result sets. However, I want a single result set because I want a query that is essentially a "find" tool: if I add a clause like WHERE tablename LIKE '%accounts', it should tell me where to find my BillAccounts, ClientAccounts, and VendorAccounts tables regardless of which database they reside in.
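
    One way to get a single result set is to funnel each database's rows into a temp table first; a sketch:

        CREATE TABLE #alltables (db sysname, tablename sysname);

        -- Visit every database and collect its user tables:
        EXEC sp_msforeachdb
            'INSERT INTO #alltables SELECT ''?'', name FROM [?].sys.tables';

        -- Now the "find" tool is a single query:
        SELECT db, tablename
        FROM #alltables
        WHERE tablename LIKE '%accounts'
        ORDER BY db, tablename;

        DROP TABLE #alltables;

    Note that the sys.tables catalog view limits this to SS2005 and later; on SS2000 the sysobjects table (xtype = 'U') would be needed instead.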

  • System.Data.DataException thrown when launching application after install

    - by jwarzech
    I currently have a (Visual Studio 2008) installer with a 'Primary output...' custom action under 'Install', with 'InstallerClass' set to false. During the install I get a System.Data.DataException. When I run the debugger, the exception is thrown from a bit of database access code that runs when the application starts. The database access uses Windows Authentication and works correctly when the application is started manually. Is this something to do with permission settings not allowing DB access when the application is launched from the installer? (I'm out of ideas.)

  • Best approach to limit users to a single node of a given content type in Drupal

    - by Chaulky
    I need to limit users to a single node of a given content type, so a user can only ever create one node of TypeX. I've come up with two approaches; which would be better to use (a sketch of the first follows below)?

    1) Edit the node/add/typex menu item to check the database to see whether the user has already created a node of TypeX, as well as whether they have permission to create it.

    2) When a user creates a node of TypeX, assign them to a different role that does not have permission to create that type of node.

    With approach 1, I have to make an additional database call on every page load to decide whether to show "Create TypeX" (node/add/typex). With approach 2, I have to maintain two separate roles. Which approach would you use?
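
    A sketch of approach 1 (hypothetical module name, written against Drupal 7's hook_node_access(); on Drupal 6 the same check would live in a custom access callback installed via hook_menu_alter()):

        /**
         * Implements hook_node_access().
         *
         * Denies creation of a second 'typex' node by the same user.
         */
        function mymodule_node_access($node, $op, $account) {
          $type = is_string($node) ? $node : $node->type;
          if ($type == 'typex' && $op == 'create') {
            $exists = db_query(
              "SELECT 1 FROM {node} WHERE type = :type AND uid = :uid",
              array(':type' => 'typex', ':uid' => $account->uid)
            )->fetchField();
            return $exists ? NODE_ACCESS_DENY : NODE_ACCESS_IGNORE;
          }
          return NODE_ACCESS_IGNORE;
        }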

  • Hibernate/JPA DB Schema Generation Best Practices

    - by Bytecode Ninja
    I just wanted to hear the opinions of Hibernate experts about DB schema generation best practices for Hibernate/JPA based projects. In particular:

    What strategy should be used when the project has just started? Is it recommended to let Hibernate automatically generate the schema in this phase, or is it better to create the database tables manually from the earliest phases of the project?

    Assuming that throughout the project the schema has been generated using Hibernate, is it better to disable automatic schema generation and create the database schema manually just before the system is released into production?

    And after the system has been released into production, what is the best practice for maintaining the entity classes and the DB schema (e.g. adding/renaming/updating columns, renaming tables, etc.)? Thanks in advance.
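
    For context, the behaviour in question hangs off a single property. A sketch of the settings commonly paired with each phase (create and create-drop also exist), placed in hibernate.cfg.xml or among the persistence-unit properties:

        <!-- Early development: evolve the schema from the mappings on startup. -->
        <property name="hibernate.hbm2ddl.auto">update</property>

        <!-- Production: never touch the schema; only check the mappings against it.
             Schema changes are then applied with hand-written, versioned DDL scripts. -->
        <property name="hibernate.hbm2ddl.auto">validate</property>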

  • Script.PostDeployment.sql not executing in CI build VS 2010

    - by Suirtimed
    I have a database project in VS 2010 (SQL 2008). The local Deploy Solution action works and executes all of the SQL in the Script.PostDeployment.sql file. When I check changes in, the build definition for the continuous-integration server runs. The database is deployed into the CI environment, but the post-deployment script does not get executed. I wasn't able to find anything specific to this particular scenario, and I expect I'll need to provide additional information unless this is a trivial problem I missed somewhere. Additional information: the build deploys by executing vsdbcmd.exe; the deployment manifest references the PostDeployment.sql file, and the file is present in the path with the rest of the files. There is a thread on social.msdn.microsoft.com regarding this problem.
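
    For comparison, a sketch of a vsdbcmd invocation that actually deploys rather than only generating a script; the /dd:+ (DeployToDatabase) switch is easy to drop from a CI build definition and is worth checking first. The manifest path and connection string are illustrative:

        vsdbcmd.exe /a:Deploy /dd:+ /manifest:MyDatabase.deploymanifest
                    /cs:"Data Source=ci-sql;Integrated Security=True"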

  • Apache basic auth, mod_authn_dbd and password salt

    - by Cristian Vrabie
    Using Apache mod_auth_basic and mod_authn_dbd you can authenticate a user by looking up that user's password in a database. I can see that working if the password is stored in the clear, but what if we use a random string as a salt (also stored in the database) and store the hash of the concatenation? mod_authn_dbd requires you to specify a query that selects the password, not one that decides whether the user is authenticated, so the query cannot concatenate the user-supplied password with the salt and compare it with the stored hash:

        AuthDBDUserRealmQuery "SELECT password FROM authn WHERE user = %s AND realm = %s"

    Is there a way to make this work?
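
    One way out, sketched below: for basic auth the relevant directive is AuthDBDUserPWQuery, and the value it returns may be any htpasswd-style hash. In the crypt and Apache-MD5 ($apr1$) formats the salt is embedded in the stored string itself, so Apache can recompute the salted hash without a separate salt column:

        # Store hashes produced by e.g. `htpasswd -nbm user pass`
        # ($apr1$<salt>$<digest>) in the password column, then:
        AuthDBDUserPWQuery "SELECT password FROM authn WHERE user = %s"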

  • Method of documentation for SQL Stored Procedures

    - by Chapso
    I work in a location where a single person is responsible for creating and maintaining all stored procedures for the SQL servers, acting as the conduit between the software developers and the database. There are a lot of stored procedures in place, and with a database diagram it is simple enough 90% of the time to figure out what arguments a stored procedure needs and what it returns as output. For the other 10% of the time, however, it would be helpful to have a reference. Since the DBA is a busy guy (aren't we all), it would be good to have a program that documents the stored procedures to a file, so that the developers can see the information without being able to access the SPs themselves. The question is, does anyone know of a good program to accomplish this? Basically what we need is something that gives the name of each SP, its argument list, and its output, both with data types and a nullable flag.
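
    Failing a dedicated tool, much of this can be pulled straight from the catalog views. A sketch that lists every procedure with its parameters and data types; note that SQL Server does not catalogue a procedure's result sets, so the output side still needs documenting by hand:

        SELECT p.name                       AS procedure_name,
               prm.name                     AS parameter_name,
               TYPE_NAME(prm.user_type_id)  AS data_type,
               prm.max_length,
               prm.is_output
        FROM sys.procedures p
        LEFT JOIN sys.parameters prm
               ON prm.object_id = p.object_id
        ORDER BY p.name, prm.parameter_id;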

  • How to persist non-trivial fields in Play Framework

    - by AlexR
    I am trying to persist complex objects using Ebean in the Play Framework (2.0.3). In particular, I've created a class that contains a field of type weka.classifiers.Classifier (Weka is a popular machine-learning library; see http://weka.sourceforge.net/doc/weka/classifiers/Classifier.html). Classifier implements Serializable, so I hoped that I could get away with something like:

        @Entity
        @Table(name = "classifiers")
        public class ClassifierData extends Model {
            @Id
            public Long id;

            public Classifier classifier;
        }

    However, the Evolutions script suggests the following database structure:

        create table classifiers (
          id bigint auto_increment not null,
          constraint pk_classifiers primary key (id)
        );

    In other words, it ignores the field of type Classifier. (The database is MySQL, if it makes any difference.) What should I do to store complex serializable objects using Ebean/Evolutions/Play Framework?
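
    A sketch of the usual approach: annotate the field with @Lob so Ebean serialises the Serializable object into a BLOB column. Whether Ebean's default binary serialisation round-trips every Weka classifier cleanly is an assumption worth testing:

        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.Lob;
        import javax.persistence.Table;
        import play.db.ebean.Model;
        import weka.classifiers.Classifier;

        @Entity
        @Table(name = "classifiers")
        public class ClassifierData extends Model {

            @Id
            public Long id;

            // @Lob maps the Serializable field to a BLOB column, so Evolutions
            // generates a column for it instead of silently skipping the field.
            @Lob
            public Classifier classifier;
        }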
