Search Results

Search found 31483 results on 1260 pages for 'database migration'.

  • insert a BLOB via a SQL script?

    - by David Michel
    Hi, I have an H2 database (http://www.h2database.com) and I'd like to insert a file into a BLOB field via a plain SQL script (to populate a test database, for instance). I know how to do that from code, but I cannot find how to do it in the SQL script itself. I tried passing the path, i.e. INSERT INTO mytable (id,name,file) VALUES(1,'file.xml',/my/local/path/file.xml); but this fails. From code (Java, for instance) it's easy to create a File object and pass that in, but directly from an SQL script I'm stuck... Any idea? David
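
    A minimal sketch of one way this can work, using H2's built-in FILE_READ function (the table and path are the ones from the question; the file must be readable by the process running the database):

        -- FILE_READ returns the file contents as a BLOB
        INSERT INTO mytable (id, name, file)
        VALUES (1, 'file.xml', FILE_READ('/my/local/path/file.xml'));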

  • Tactics for using PHP in a high-load site

    - by Ross
    Before you answer this: I have never developed anything popular enough to attain high server loads. Treat me as (sigh) an alien that has just landed on the planet, albeit one that knows PHP and a few optimisation techniques. I'm developing a tool in PHP that could attain quite a lot of users if it works out right. However, while I'm fully capable of developing the program, I'm pretty much clueless when it comes to making something that can deal with huge traffic. So here are a few questions on it (feel free to turn this question into a resource thread as well).

    Databases: At the moment I plan to use the MySQLi features in PHP5. However, how should I set up the databases in relation to users and content? Do I actually need multiple databases? At the moment everything's jumbled into one database, although I've been considering spreading user data to one, actual content to another, and finally core site content (template masters etc.) to a third. My reasoning behind this is that sending queries to different databases will ease up the load on them, as one database = 3 load sources. Also, would this still be effective if they were all on the same server?

    Caching: I have a template system that is used to build the pages and swap out variables. Master templates are stored in the database, and each time a template is called its cached copy (an HTML document) is used. At the moment I have two types of variable in these templates - a static var and a dynamic var. Static vars are usually things like page names or the name of the site - things that don't change often; dynamic vars are things that change on each page load. My question on this: say I have comments on different articles. Which is the better solution: store the simple comment template and render the comments (from a DB call) each time the page is loaded, or store a cached copy of the comments page as an HTML page and recache it each time a comment is added/edited/deleted?

    Finally: Does anyone have any tips/pointers for running a high-load site on PHP? I'm pretty sure it's a workable language to use - Facebook and Yahoo! give it great precedence - but are there any experiences I should watch out for? Thanks, Ross
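
    For the caching question, a minimal sketch of the "recache on write" option using plain file-based fragment caching (the cache path and the render_comments_from_db() helper are hypothetical; APC or memcached would follow the same shape):

        <?php
        // Return the cached HTML fragment for an article's comments, rebuilding it
        // only when no cache file exists.
        function get_comments_html($articleId) {
            $cacheFile = "/var/cache/app/comments_" . (int)$articleId . ".html";
            if (is_file($cacheFile)) {
                return file_get_contents($cacheFile);
            }
            $html = render_comments_from_db($articleId);   // assumed: DB query + template render
            file_put_contents($cacheFile, $html, LOCK_EX);
            return $html;
        }

        // When a comment is added/edited/deleted, invalidate so the next view recaches.
        function invalidate_comments_cache($articleId) {
            @unlink("/var/cache/app/comments_" . (int)$articleId . ".html");
        }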

  • How do I create an Access 2003 MDE programmatically or by command line in Access 2007?

    - by Ned Ryerson
    I have a legacy Access 2003 database file that must remain in that format to preserve its menus and toolbars. I have recently moved to Access 2007 in my build environment and will be deploying the compiled Access 2003 program with the Access 2007 runtime. In Access 2003, I could script the process of creating an MDE with the Access Developer Extensions (WZADE.mde) using the command line and an .xml file of build preferences (without creating an install package). The Access 2007 developer extensions do not seem to offer a similar option. I can "Package a Solution", but it creates an .accdr and buries it in a CD installer. I've tried programmatic options like DoCmd.RunCommand acMakeMDEFILe and Syscmd(603, mdbpath, mdepath), but they no longer work in Access 2007. Of course, I can manually create an MDE using Database Tools > Make MDE, but that is not scriptable as far as I can tell.

  • How to set up hibernate to use Glassfish connection pool?

    - by jschoen
    I have set up a connection pool in Glassfish, with a JNDI resource for it also set up. I am stumped on how to configure Hibernate to go get it. I have come across a lot of write-ups on configuring it to use the C3P0 connection pool. Well, I am lost. I found that I need to set:

        hibernate.connection.datasource
        hibernate.jndi.url
        hibernate.jndi.class
        hibernate.connection.username
        hibernate.connection.password

    Would hibernate.connection.datasource be the same as the JNDI name set for the connection pool? What would hibernate.jndi.class be? Are hibernate.connection.username and hibernate.connection.password for the connection to the database or to the app server? I assume they are for the database, but why do I need them, since that is all set in the app server?
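
    A minimal sketch of the Hibernate side for a container-managed DataSource (the JNDI name jdbc/MyPool and the Postgres dialect are illustrative; no C3P0 settings are needed in this setup):

        # hibernate.properties (or the equivalent <property> entries in hibernate.cfg.xml)
        hibernate.connection.datasource=jdbc/MyPool
        hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
        # hibernate.jndi.url / hibernate.jndi.class are only needed when the JNDI
        # provider is remote; inside the same Glassfish instance they can be omitted.
        # Username/password stay in the Glassfish pool definition, not here.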

  • use of EntityManagerFactory causing duplicate primary key exceptions

    - by bradd
    Hey guys, my goal is to create an EntityManager using properties that depend on which database is in use. I've seen something like this done in all my Google searches (I made the code more basic for the purpose of this question):

        @PersistenceUnit
        private EntityManagerFactory emf;
        private EntityManager em;
        private Properties props;

        @PostConstruct
        public void createEntityManager() {
            // if Oracle, set Oracle properties; else set Postgres properties
            emf = Persistence.createEntityManagerFactory("app-x");
            em = emf.createEntityManager(props);
        }

    This works; I can load Oracle or Postgres properties successfully and I can SELECT from either database. HOWEVER, I am running into issues when doing INSERT statements. Whenever an INSERT is done I get a duplicate primary key exception.. every time! Can anyone shed some light on why this may be happening? Thanks -Brad

  • Fastest distance lookup given latitude/longitude?

    - by Ryan Detzel
    I currently have just under a million locations in a MySQL database, all with longitude and latitude information. With this I use another lat/lng pair to find the distance to certain places in the database, but it's not as fast as I want it to be, especially with 100+ hits a second. Is there a faster formula, or possibly a faster system than MySQL, for this? The query I'm using is this:

        SELECT name,
               ( 3959 * acos( cos( radians(42.290763) ) * cos( radians( locations.lat ) )
                            * cos( radians( locations.lng ) - radians(-71.35368) )
                            + sin( radians(42.290763) ) * sin( radians( locations.lat ) ) ) ) AS distance
        FROM locations
        WHERE active = 1
        HAVING distance < 10
        ORDER BY distance;
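
    One widely used speed-up is to pre-filter with a cheap bounding box so the trigonometry only runs on nearby rows; a sketch (the 10-mile box and the assumption of an index on (active, lat, lng) are illustrative):

        -- roughly 69 miles per degree of latitude, ~52 per degree of longitude at this latitude
        SELECT name,
               ( 3959 * acos( cos( radians(42.290763) ) * cos( radians( lat ) )
                            * cos( radians( lng ) - radians(-71.35368) )
                            + sin( radians(42.290763) ) * sin( radians( lat ) ) ) ) AS distance
        FROM locations
        WHERE active = 1
          AND lat BETWEEN 42.290763 - (10.0 / 69.0) AND 42.290763 + (10.0 / 69.0)
          AND lng BETWEEN -71.35368 - (10.0 / 52.0) AND -71.35368 + (10.0 / 52.0)
        HAVING distance < 10
        ORDER BY distance;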

  • How should one import large amounts of data for FIT/Fitnesse tests?

    - by Lachlan
    We have a scheduling engine with large amounts of test data to test all the scenarios, so test automation is critical. We're currently hoping to use FIT/Fitnesse. However a single test has quite a large table of test data, so it doesn't fit very well into the mould of "two or three inputs, one or more outputs" that Fitnesse uses in its examples. Hopefully the other functionality of Fitnesse makes it worth using it. I hear that there is a way to initialize an application for a FIT test with an Excel spreadsheet - not the Spreadsheet to Fitness function, mind you - but I haven't been able to find it so far. Once the whole spreadsheet is loaded into the application, and the application does its thing, we plan to compare either a number of output rows, or perhaps just the last row, to see if the test passes. The application is currently pulling test data from a database for manual tests, but writing to a database, then initializing from it, is not preferred because of the performance impact. The application is written in C#.

  • store multiple IDs from the first aspx page to the next aspx page

    - by poller
    I have my first aspx page that has data that the user fills in. It is in the form of textboxes, and at the end of it all the user clicks submit and all the data goes into the database. In the database each record gets an ID field. Now, when the user clicks submit and goes to the next page, I want the IDs (there could be 1 to 1000+) of the records he just inserted to be available on the second page. How can I take all the IDs from page 1 to page 2? Can I do it in Session, or something else? Please put some sample code so I can understand better.
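
    A minimal sketch of the Session approach (InsertRecordsAndGetIds is a hypothetical placeholder for your own data-access code):

        // Page 1 (code-behind), after the insert has produced the new IDs:
        List<int> newIds = InsertRecordsAndGetIds();   // assumed helper returning the inserted IDs
        Session["NewRecordIds"] = newIds;
        Response.Redirect("SecondPage.aspx");

        // Page 2 (code-behind), e.g. in Page_Load:
        var ids = Session["NewRecordIds"] as List<int>;
        if (ids != null)
        {
            foreach (int id in ids)
            {
                // use the id
            }
        }

    For very large sets it can be cheaper to store only a batch identifier in Session and re-query the IDs on the second page.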

  • Why does GetSqlDecimal throw when GetDecimal doesn't?

    - by I. J. Kennedy
    I have a database table that has a column of type money, allowing nulls. Using a SqlDataReader named reader, I can do decimal d = reader.GetDecimal(1); which works, unless of course we're reading a null. If I try using SqlDecimal instead--and I thought the whole point of the SqlTypes was to deal with nulls--then I get an invalid cast, whether or not the value is null. SqlDecimal s = reader.GetSqlDecimal(1); // throws an invalid cast exception What am I doing wrong? Do I really have to write a conditional statement to shepherd the value from the database to a SqlDecimal variable?
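
    A sketch of one likely explanation, assuming the column really is SQL Server's money type: money maps to the provider-specific SqlMoney, so GetSqlMoney is the matching accessor, while GetSqlDecimal fails the cast. Nulls are still most safely handled with IsDBNull:

        // requires: using System.Data.SqlTypes;
        SqlMoney m = reader.IsDBNull(1) ? SqlMoney.Null : reader.GetSqlMoney(1);
        decimal  d = m.IsNull ? 0m : m.Value;   // 0m is just an arbitrary fallback here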

  • Drupal node_save and special characters.

    - by Pierre
    Hello, I'm trying to create nodes and taxonomy terms through a custom PHP script using the node_save() function. I'm working on Drupal 6. It's working well (thanks to previous questions on Stack Overflow) except for accented letters. Indeed, when a title or a taxonomy term contains "é", "è" or "à", the sentence is cut before those special characters. For example, a title like "Bonjour les éléphants" will create a node with "Bonjour les " as the title. I don't know if it's linked to my database or if I have to use a special encoding in PHP (iconv() and the like). The fact is, for Drupal titles, I cannot use HTML encoding (for example, é as &eacute;) because Drupal will render &eacute; literally and not é... When I create a taxonomy term or a title manually, I have no problems and the accented letter is saved in the database as "é". So if you can help me create terms and titles with accented letters, that would be great :) Thank you!
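
    Truncation at the first accented character is often a sign that the script is feeding non-UTF-8 (e.g. Latin-1) strings into node_save(). A minimal sketch of converting before saving (the ISO-8859-1 source charset is an assumption; adjust it to whatever your script actually receives):

        <?php
        // Ensure the title is valid UTF-8 before handing it to node_save().
        if (!mb_check_encoding($title, 'UTF-8')) {
          $title = iconv('ISO-8859-1', 'UTF-8//TRANSLIT', $title);
        }
        $node->title = $title;
        node_save($node);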

  • Naming conventions and field naming question for CakePHP

    - by jphenow
    Okay, so two closely related questions:

    1) Does following the naming convention for classes, controllers, database fields, etc. affect the framework's ability to work the way it was intended? (I'm a little new to working with a framework from the beginning of app development.)

    2) This question is more important if 1 is a yes. Say I have a table, A, that has 2 foreign keys pointing at the same table, B, but at different entries (they're like the edges of a graph that point at two vertices). How would I follow the naming convention for their database fields? All I can think to do is something like vertex_1_id and vertex_2_id, but I don't know how the framework would handle that if the naming conventions are necessary for it to function correctly.
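
    For 2), a sketch of the usual CakePHP answer (CakePHP 1.x style; class, alias and field names are illustrative): only one key could ever follow the conventional vertex_id name, so you alias both associations and set foreignKey explicitly:

        <?php
        // app/models/edge.php - sketch only
        class Edge extends AppModel {
            var $belongsTo = array(
                'FromVertex' => array('className' => 'Vertex', 'foreignKey' => 'from_vertex_id'),
                'ToVertex'   => array('className' => 'Vertex', 'foreignKey' => 'to_vertex_id'),
            );
        }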

  • Invalid UTF-8 for Postgres, Perl thinks they're ok

    - by gorilla
    I'm running Perl 5.10.0 and Postgres 8.4.3, and I'm inserting strings into a database through DBIx::Class. These strings should be in UTF-8, and therefore my database is running in UTF-8. Unfortunately some of these strings are bad, containing malformed UTF-8, so when I run it I get an exception: DBI Exception: DBD::Pg::st execute failed: ERROR: invalid byte sequence for encoding "UTF8": 0xb5. I thought that I could simply ignore the invalid ones and worry about the malformed UTF-8 later, so using this code it should flag & ignore the bad titles:

        if (not utf8::valid($title)) {
            $title = "Invalid UTF-8";
        }
        $data->title($title);
        $data->update();

    However, Perl seems to think that the strings are valid, and it still throws the exceptions. How can I get Perl to detect the bad UTF-8?
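
    A sketch of one way to actually catch the bad bytes: utf8::valid() only checks Perl's internal representation for consistency; it does not say whether a byte string is well-formed UTF-8. Decoding a copy with Encode and FB_CROAK does (Encode is core Perl; the fallback title is the one from the question):

        use Encode qw(decode FB_CROAK);

        my $copy = $title;
        my $ok   = eval { decode('UTF-8', $copy, FB_CROAK); 1 };
        if (!$ok) {
            $title = "Invalid UTF-8";
        }
        $data->title($title);
        $data->update();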

  • iPhone SDK and MySQL

    - by Kevin
    Hello everybody, I'm starting on a new iPhone project, and this application mostly relies on MySQL. I have a MySQL database running on my computer, and I need this application to send queries to the server to get information. One example is creating and logging into an account. I have successfully done this in my Windows VB.NET application, but I know that Objective-C will be harder. This is basically getting text from the SQL database and displaying it in the iPhone application. If someone could simply help me with that, I'll easily get started. So I hope that you guys can help me out here :). Thanks, Kevin

  • SQL query optimization

    - by nvtthang
    I have a problem with my SQL query: it takes a long time to get all the records from the database. Can anybody help me? Below is a sample of the schema:

        order(order_id, order_nm)
        customer(customer_id, customer_nm)
        orderDetail(orderDetail_id, order_id, orderDate, customer_id, Comment)

    I want to get the latest customer and order detail information. Here is my solution: I've created a function, GetLatestOrderByCustomer(CusID), to get the latest information for a customer.

        CREATE FUNCTION [dbo].[GetLatestOrderByCustomer]
        (
            @cust_id int
        )
        RETURNS varchar(255)
        AS
        BEGIN
            DECLARE @ResultVar varchar(255)

            SELECT @ResultVar = tmp.comment
            FROM (
                SELECT TOP 1 orderDate, comment
                FROM orderDetail
                WHERE orderDetail.customer_id = @cust_id
                ORDER BY orderDate DESC
            ) tmp

            -- Return the result of the function
            RETURN @ResultVar
        END

    Below is my SQL query:

        SELECT customer.customer_id,
               customer.customer_nm,
               dbo.GetLatestOrderByCustomer(customer.customer_id)
        FROM Customer
        LEFT JOIN orderDetail ON orderDetail.customer_id = customer.customer_id

    It takes a long time to run the function. Could anybody suggest any solutions to make it better? Thanks in advance.
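
    A sketch of one common rewrite, assuming SQL Server 2005 or later: replace the scalar function with OUTER APPLY, so the "latest order detail" lookup stays set-based and can use an index on (customer_id, orderDate):

        SELECT c.customer_id,
               c.customer_nm,
               od.comment
        FROM customer AS c
        OUTER APPLY (
            SELECT TOP 1 orderDate, comment
            FROM orderDetail
            WHERE orderDetail.customer_id = c.customer_id
            ORDER BY orderDate DESC
        ) AS od;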

  • Simple question - XSB Prolog

    - by KP65
    Hello! I'm diving into the world of Prolog headfirst, but I seem to have hit shallow water! I'm looking at database manipulation in Prolog with regard to this tutorial: Learn Prolog Now! It states that I can see my database by entering listing. So I tried it; it should basically output everything in my .P file (facts, rules), but this is what I get. Here is my sequence of commands:

        ?- consult('D:\Prolog\testfile.P').
        [testfile.P loaded]
        ?- listing.
        library_directory(C:blahblahpathtoXSB)
        library_directory(C:blahblahXSBpath)
        (this is listed around 5 times)

    Shouldn't this command display what is in testfile.P, according to the tutorial? Also, after consulting testfile.P I should be able to use assert to add more facts, but it doesn't actually change anything in testfile.P..? Any ideas?
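
    A small sketch of the declaration that usually matters here (happy/1 is just an example predicate, and this assumes XSB's default of compiling consulted predicates as static code): predicates you want to assert to, and to see via listing, generally need to be declared dynamic. Note also that assert only changes the in-memory database; it never rewrites the .P file.

        % testfile.P
        :- dynamic happy/1.     % allows assert/retract and makes the predicate visible to listing
        happy(mia).
        happy(vincent).

        % then, at the XSB prompt:
        % ?- consult('D:\Prolog\testfile.P').
        % ?- assert(happy(marcellus)).
        % ?- listing(happy/1).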

  • suggestions for Membership in ASP.NET MVC application

    - by mare
    With this question I am mostly looking for answers from people who have implemented the out-of-the-box ASP.NET Membership in their own database - I've set up the tables inside my database and as far as I can see they contain mostly what I need, but not everything. I will have the notion of a Firm (Company) to which Users will belong, so I will have to associate aspnet_Users with my Firms table (each user will be a member of exactly one firm). If possible, provide some guidelines on how you did it and what I might run into if I have to modify the table design at some point in the future. Preferably I will be using the default Membership provider. I am having trouble deciding whether to start from scratch or use what ASP.NET already offers.
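
    A sketch of one common approach (table and column names are illustrative): leave the aspnet_* tables untouched and hang the firm relationship off aspnet_Users.UserId in your own tables:

        CREATE TABLE Firms (
            FirmId  INT IDENTITY(1,1) PRIMARY KEY,
            Name    NVARCHAR(256) NOT NULL
        );

        CREATE TABLE FirmUsers (
            UserId  UNIQUEIDENTIFIER NOT NULL
                    REFERENCES dbo.aspnet_Users (UserId),
            FirmId  INT NOT NULL
                    REFERENCES dbo.Firms (FirmId),
            CONSTRAINT PK_FirmUsers PRIMARY KEY (UserId)   -- one firm per user
        );

    Keeping the association in a separate table means later provider upgrades or schema regenerations do not touch your data.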

  • C# Design Questions

    - by guazz
    How to approach unit testing of private methods? I have a class that loads Employee data into a database. Here is a sample:

        public class EmployeeFacade
        {
            public Employees EmployeeRepository = new Employees();
            public TaxDatas TaxRepository = new TaxDatas();
            public Accounts AccountRepository = new Accounts();
            // and so on for about 20 more repositories etc.

            public bool LoadAllEmployeeData(Employee employee)
            {
                if (employee == null) throw new Exception("...");

                EmployeeRepository emps = new EmployeeRepository();
                bool exists = emps.FetchExisting(employee.Id);
                if (!exists)
                {
                    emps.AddNew();
                }

                try
                {
                    emps.Id = employee.Id;
                    emps.Name = employee.EmployeeDetails.PersonalDetails.Active.Names.FirstName;
                    // emps.SomeOtherAttribute = ...;
                }
                catch { }

                try { emps.Save(); } catch { }
                try { LoadorUpdateTaxData(employee.Id, employee.TaxData); } catch { }
                try { LoadorUpdateAccountData(employee.Id, employee.AccountData); } catch { }
                // ... etc. for about 20 more other employee objects
            }

            private bool LoadorUpdateTaxData(int employeeId, TaxData taxData)
            {
                if (taxData == null) throw new Exception("...");
                // ...same format as above but using TaxRepository
            }

            private bool LoadorUpdateAccountData(int employeeId, AccountData accountData)
            {
                // ...same format as above but using AccountRepository
            }
        }

    I am writing an application to take serialised objects (e.g. Employee above) and load the data into the database. I have a few design questions that I would like opinions on:

    A - I am calling this class "EmployeeFacade" because I am (attempting?) to use the facade pattern. Is it good practice to include the pattern name in the class name?

    B - Is it good to call the concrete entities of my DAL layer classes "Repositories", e.g. "EmployeeRepository"?

    C - Is using the repositories in this way sensible, or should I create a method on the repository itself to take, say, the Employee and then load the data from there, e.g. EmployeeRepository.LoadAllEmployeeData(Employee employee)? I am aiming for cohesive classes, but this would require the repository to have knowledge of the Employee object, which may not be good.

    D - Is there any nice way around not having to check whether an object is null at the beginning of each method?

    E - I have EmployeeRepository, TaxRepository and AccountRepository declared as public for unit testing purposes. These are really private entities, but I need to be able to substitute them with stubs so that they won't write to my database (I overload the Save() method to do nothing). Is there any way around this, or do I have to expose them?

    F - How can I test the private methods - or is this done at all (something tells me it's not)?

    G - "emps.Name = employee.EmployeeDetails.PersonalDetails.Active.Names.FirstName;" breaks the Law of Demeter, but how do I adjust my objects to abide by the law?

  • IEditableCollectionView.AddNew() Throwing ArgumentNullException

    - by Eugarps
    In the context of Silverlight RIA Services using a DomainContext, the following code:

        private void AddProductButton_Click(object sender, RoutedEventArgs e)
        {
            var target = (Web.LocatorProduct)((IEditableCollectionView)ProductSource.DataView).AddNew();
            target.Locator = LocatorID;
            target.Product = NewProduct.Text.ToUpper();
            ((IEditableCollectionView)ProductSource.DataView).CommitNew();
        }

    is throwing an ArgumentNullException in AddNew(), from CreateIdentity() further up the stack (a generated method), due to Product being null. Product and LocatorID are, in combination, the primary key. I'm guessing that EF is not allowing me to generate a new item without meeting the database constraints? How does this make sense if I need to obtain the primary key from the user? I have control over all tiers of the application, so suggestions on database design, if needed, are also welcome.

  • Tridion Installation

    - by Kevin Brydon
    I am currently upgrading an installation of Tridion from 5.3 to 2011, starting almost from scratch (aside from migrating the database), on brand new virtual servers. I just want to ask for some advice on my current server setup... a sanity check. All servers are running Windows Server 2008. The pages on our website are all classic ASP.

    Database: SQL Server cluster. The 5.3 database has been migrated using the DatabaseManager. This is pretty standard and works well (in test, anyway).

    Content Manager: A single server to run the Content Manager and the Publisher. There are around 10 people using it at any one time, so it is not under a particularly heavy load.

    Content Data Store: Filesystem, located somewhere on the network. One directory for live and one for staging.

    Content Delivery: Two servers (cd1 and cd2), each with the following server roles installed. cd1 writes to a filesystem content data store for the live website; cd2 writes to the content data store for the staging website.

    Presentation: Two public-facing web servers (web1 and web2) serving both the live and staging websites. The web servers read directly from the content data store as it's a filesystem. Each of the web servers has the Content Delivery Server installed so that I can use dynamic linking (and other features?).

    I've so far set up everything but the web servers. Any thoughts?

    Edit: Thanks to Ram S, who linked me to a decent walkthrough; upvoted. I suppose I should have posed some questions, as I didn't really ask one. I guess I'm a little confused over the Content Delivery aspect. I have Content Delivery split into two separate parts: cd1 and cd2 do the work of shifting information from the Content Manager to the staging/live web directories, while web1 and web2 should do the work of serving the web pages to the outside world and will interact with the content data store (filesystem). Is this a correct setup? I need some parts of Content Delivery on my web servers, right? Theoretically I could get rid of the cd1 and cd2 servers and use web1 and web2 to do the deployment, right? But I suspect this would put the web servers under unnecessary strain should there ever be a big publish. I've been reading the 2011 Installation Manual, Content Delivery section, and I'm finding it quite hard to get my head around!

  • Logging strategy vs. performance

    - by vtortola
    Hi, I'm developing a web application that has to support lots of simultaneous requests, and I'd like to keep it fast enough. I now have to implement a logging strategy; I'm going to use log4net, but... what and how should I log? I mean:

    How does logging impact performance? Is it possible/recommendable to log using async calls?
    Is it better to use a text file or a database? Is it possible to make it conditional? For example, log to the database by default, and if that fails, switch to a text file.
    What about multithreading? Should I care about synchronization when I use log4net, or is it thread-safe out of the box?

    In the requirements it says that the application should cache a couple of things per request, and I'm afraid of the performance impact of that. Cheers.
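
    As a reference point, a minimal log4net configuration sketch (file name, size and level are illustrative). log4net loggers are safe to call from multiple threads, but appenders write synchronously, so a rolling file appender like this is the usual low-overhead default; a database appender can be added behind a BufferingForwardingAppender if batching is needed:

        <log4net>
          <appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
            <file value="logs/app.log" />
            <appendToFile value="true" />
            <rollingStyle value="Size" />
            <maximumFileSize value="10MB" />
            <maxSizeRollBackups value="5" />
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
            </layout>
          </appender>
          <root>
            <level value="INFO" />
            <appender-ref ref="RollingFile" />
          </root>
        </log4net>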

  • Best practice to display POI in iPhone's MapKit?

    - by iamj4de
    Assume I have a database of POIs with their respective coordinates (longitude & latitude). What would be the "standard" way to display the POIs as annotations around the user's current location? To elaborate: given a zoom level, I guess I have to search the database for all POIs whose distance to the current location is below a certain threshold, then create annotations for them. Or is there a smarter way? If the user zooms in/out or moves the map... will I need to redo the whole thing again? It seems that MapKit has a mechanism to cache/reuse annotations. Should I create a lot of them right away and let MapKit decide what to render when the visible region changes? I guess this would make the transition smoother, but it also consumes more memory. What is your experience with this? Thanks.
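
    A minimal sketch of the basic approach described above, written in modern Swift for brevity (the POI type and the allPOIs array are hypothetical stand-ins for whatever your database query returns):

        import MapKit

        struct POI {
            let name: String
            let coordinate: CLLocationCoordinate2D
        }

        // Filter POIs to the currently visible region and add them as annotations.
        func showPOIs(on mapView: MKMapView, from allPOIs: [POI]) {
            let region = mapView.region
            let visible = allPOIs.filter { poi in
                abs(poi.coordinate.latitude - region.center.latitude) < region.span.latitudeDelta / 2 &&
                abs(poi.coordinate.longitude - region.center.longitude) < region.span.longitudeDelta / 2
            }
            mapView.removeAnnotations(mapView.annotations)   // drop stale pins when the region changes
            for poi in visible {
                let pin = MKPointAnnotation()
                pin.coordinate = poi.coordinate
                pin.title = poi.name
                mapView.addAnnotation(pin)
            }
        }

    The same filtering can of course be pushed into the database query (a lat/lng range condition) so only nearby rows are loaded in the first place.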

  • How to avoid "The name 'ConfigurationManager' does not exist in the current context" error?

    - by 5YrsLaterDBA
    I am using VS2008. I have a project that connects to a database, and the connection string is read from App.config via ConfigurationManager. We are using L2E. Now I have added a helper project, AndeDataViewer, to have a simple UI to display data from the database for testing/verification purposes. I don't want to create another Entity Data Model in the helper project, so I just added all the related files as links in the new helper project. When I compile, I get the following error:

        Error 15 The name 'ConfigurationManager' does not exist in the current context C:\workspace\SystemSoftware\SystemSoftware\src\systeminfo\RuntimeInfo.cs 24 40 AndeDataViewer

    I think I may need to add a link to another project/config-related file in the helper project from the main project? There is no App.config file in the new helper project, but it looks like I cannot add that file as a link to the helper project. Any ideas?
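
    A sketch of the usual fix, assuming the helper project simply lacks the assembly reference: ConfigurationManager lives in System.Configuration.dll, which projects do not reference by default.

        // 1) In AndeDataViewer, add a reference to the System.Configuration assembly
        //    (Project > Add Reference > .NET > System.Configuration).
        // 2) Then the using directive at the top of the linked file resolves:
        using System.Configuration;

        // "MyEntities" is an illustrative name; use the connection string name from App.config.
        string cs = ConfigurationManager.ConnectionStrings["MyEntities"].ConnectionString;

    Note that the helper project also needs its own App.config (or a copy of the main one) at run time, since ConfigurationManager reads the configuration of the executing application, not of the project the source file was linked from.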

  • Drobo FS vs. MacBook Pro: Finder works, Drobo Dashboard doesn't

    - by dash-tom-bang
    Does anyone have any experience with the new Drobo FS, specifically using it from a MacBook Pro? My experience thus far is this:

    Set up the Drobo Dashboard software (hereinafter called simply 'Dashboard') on my WinXP machine, which is hard-wired to the network, to do the data migration from my NAS-that's-being-replaced (a 250G SimpleShare which works well enough but I was always afraid of losing the one disk). The Dashboard seems to work ok, except that the DroboCopy function doesn't work at all. This is the backup solution, which I can configure, and if I launch it (e.g. to back up from the old NAS to the Drobo) it spins the NAS, seeking the drive all over hell and creation, until finally giving up an hour+ later with zero files copied. Selecting only a subset of the data yields the same effect, albeit more quickly.

    On my Mac I installed the Dashboard software too, since most of my fiddling with the device will be from my couch in the living room. Finder connects to the box just fine, fwiw, but Dashboard just sits there, "waiting for connection." This is considerably more bothersome than the above paragraph, but I figured I'd give whatever information I have.

    Drobo is insisting that I send them this "Debug Log" file that their software generates. Does anyone know what's in it? It's encrypted and they won't tell me, which spooks me just a bit; not like I'm terribly concerned about privacy but I don't want to be sending personal information out to every clown who says they "need" it in order to help me.

    thanks a ton, -tom!
