Search Results

Search found 31356 results on 1255 pages for 'database backups'.


  • Launch x64 Windows application in C# while the project is set to x86

    - by daydr3am3r
    Hi all, I'm trying to launch osk.exe and I keep getting a "Could not start osk" message. The problem is that my project is set to x86 (I'm using an MS Access database). If I switch to x64 or Any CPU everything works fine, but then the database no longer works. I tried this:

      using System.Diagnostics;

      private void btnOSK_Click(object sender, EventArgs e)
      {
          Process.Start("osk.exe");
          Process.Start(@"C:\windows\system32\osk.exe");
      }

    I also tried to run SysWOW64\osk.exe, but that didn't work either. Besides, my application should run on both x86 and x64 machines. Is there any way to bypass this? It's really frustrating.
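
    One workaround, sketched below in C#, relies on the fact that a 32-bit process asking for C:\Windows\System32 is silently redirected to SysWOW64, while the virtual "Sysnative" directory bypasses that redirection and reaches the real 64-bit osk.exe. This is a hedged sketch, not a guaranteed fix: it assumes a WinForms handler like the one above plus using System.IO, and if the 64-bit osk.exe still refuses to start, the usual fallback is temporarily disabling redirection with the kernel32 Wow64DisableWow64FsRedirection API.

      private void btnOSK_Click(object sender, EventArgs e)
      {
          // "Sysnative" only exists for 32-bit processes on 64-bit Windows; it maps to
          // the real 64-bit System32. On 32-bit Windows it is absent, so fall back to
          // the normal System32 path, which works there anyway.
          string winDir = Environment.GetEnvironmentVariable("windir");
          string sysnativeOsk = Path.Combine(winDir, @"Sysnative\osk.exe");
          string system32Osk = Path.Combine(winDir, @"System32\osk.exe");

          Process.Start(File.Exists(sysnativeOsk) ? sysnativeOsk : system32Osk);
      }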

    Read the article

  • Best way to code a web service in WebLogic?

    - by John
    I am new to WebLogic and J2EE. I need to build a web service that simply runs a query on the backend database (DB2 on z/OS) and returns the results. Being new to this I have a few questions. 1) What is the best way to build the web service? 2) How do I connect to the database from WebLogic? 3) Is there a way to cache the data returned so that the next request for the same data is pulled from cache? I've googled for this, but there seem to be many ways to handle it. I am looking for the best way that can handle a high volume of requests. Any links to sample code would be helpful. - Thanks

    Read the article

  • How to batch retrieve documents with mongoDB?

    - by edude05
    Hello everyone, I have an application that queries data from MongoDB using the MongoDB C# driver, something like this:

      public void main()
      {
          foreach (int i in listOfKeys)
          {
              list.Add(getObjFromDb(i));
          }
      }

      public myObject getObjFromDb(int primaryKey)
      {
          Document query = new Document();
          query["primKey"] = primaryKey;
          Document result = mongo["myDatabase"]["myCollection"].FindOne(query);
          return parseObject(result);
      }

    On my local (development) machine, getting 100 objects this way takes less than a second. However, I recently moved the database to a server on the internet, and this query takes about 30 seconds to execute for the same number of objects. Furthermore, looking at the MongoDB log, it seems to open about 8-10 connections to the DB to perform this query. So what I'd like to do is query the database for an array of primary keys, get them all back at once, then do the parsing in a loop afterwards, using one connection if possible. How could I optimize my query to do so? Thanks, --Michael
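
    One way to cut the round trips is to fetch the whole batch with a single $in query and parse the documents locally. The C# sketch below stays in the style of the code above (Document, mongo, myObject and parseObject come from the question); the Find(...).Documents cursor members are an assumption, since the exact names vary between versions of the old C# driver.

      // Fetch every object whose primKey appears in listOfKeys with one query and one
      // connection, instead of one findOne round trip per key.
      public List<myObject> getObjectsFromDb(List<int> listOfKeys)
      {
          Document inClause = new Document();
          inClause["$in"] = listOfKeys.ToArray();      // { $in: [k1, k2, ...] }

          Document query = new Document();
          query["primKey"] = inClause;                 // { primKey: { $in: [...] } }

          List<myObject> results = new List<myObject>();
          foreach (Document doc in mongo["myDatabase"]["myCollection"].Find(query).Documents)
          {
              results.Add(parseObject(doc));           // same per-document parsing as before
          }
          return results;
      }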

    Read the article

  • Simplest way to extend doctrine for MVC Models

    - by RobertPitt
    I'm developing my own framework that uses namespaces. Doctrine is already integrated into my autoloading system and I'm now at the stage where I'll be creating the model system for my application. Usually I would create a simple model like so:

      namespace Application\Models;

      class Users extends \Framework\Models\Database {}

    which would inherit all the default database model methods. But with Doctrine I'm still learning how it all works, as it's not just a simple DBAL. I need to understand what part of Doctrine my classes would extend so I can do the following:

      namespace Application\Models;

      class Users extends Doctrine\Something\Table
      {
          public $__table_name = "users";
      }

    And thus within the controller I would be able to do the following:

      public function Display($uid)
      {
          $User = $this->Model->Users->findOne(array("id" => (int)$uid));
      }

    Can anyone help me get my head around this?

    Read the article

  • Need Help with Page Life Cycle (I think it is screwing me up)

    - by chobo2
    Hi, I have dragged an empty ASP.NET Table onto my web form. I generate all the rows in the code-behind. My table gets filled up and has dropdown lists. When the user hits save, I go through all the rows and update the database with the values from the dropdown lists. This all works great. However, if 2 columns each have "Present" selected, then those 2 columns should no longer be shown and 2 new columns with other dropdown lists should be put in their place. This also works, except that you have to refresh the entire page for the 2 columns that should go away to actually go away. So what I tried to do, at the end of the button click event, is clear the whole table and then regenerate it. However, when I do this my values are no longer saved to the database, for whatever reason. This is what works:

      if (IsPostBack == false)
      {
          // check if dummy variables exist in db - if true, just generate the table with the values in the db; if not, generate them
      }
      else
      {
          // grab the values from the database
          // generate the table with the values
      }

      btn click event
      {
          // go through all rows in the table (foreach loop)
          // update each column in the database with the cells in each row, while in the foreach loop
          // done
      }

    So this is how it goes, and it works (all correct values are saved), except the table is just not updated for the user. This does not work:

      if (IsPostBack == false)
      {
          // same code as above
      }
      // if postback is true, do nothing

      btn click event
      {
          // same code
      }

    By the time it gets to the click event it says there are zero rows in the table, so nothing happens. This fails also:

      if (IsPostBack == false)
      {
          // same code as above
      }
      else
      {
          // same code as above but moved into its own method
          generateTable();
      }

      btn click event
      {
          // update all rows
          // once done, clear the table's rows
          // call generateTable()
      }

    This last one does nothing, as for some reason it does not update anything. I don't understand why. So what am I doing wrong with this life cycle? Something in my process is wrong. The code works, just not when I want the table to be updated right away.
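
    The usual culprit with this pattern is that dynamically generated controls have to be rebuilt on every request, not only when IsPostBack is false; otherwise the posted dropdown selections (and the rows themselves) are gone by the time the click event runs. A hedged C# sketch of that shape is below - generateTable, myTable, SaveSelection and the cell index are stand-ins for whatever the real code uses.

      protected void Page_Load(object sender, EventArgs e)
      {
          // Dynamic rows must be recreated on EVERY request, with the same control IDs,
          // or the posted values and the button click have nothing to bind back to.
          // Seed the dropdowns from the database only on the first visit; on a postback
          // the posted values restore the selections.
          generateTable(!IsPostBack);
      }

      protected void btnSave_Click(object sender, EventArgs e)
      {
          // The rows rebuilt in Page_Load now carry the user's selections.
          foreach (TableRow row in myTable.Rows)
          {
              DropDownList ddl = (DropDownList)row.Cells[1].Controls[0];
              SaveSelection(row, ddl.SelectedValue);   // write this row's value to the db
          }

          // The save may have changed which columns should appear, so rebuild the table
          // from the database and the user sees the new layout without a manual refresh.
          myTable.Rows.Clear();
          generateTable(true);
      }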

    Read the article

  • Can I Use ASP.NET Wizard Control to Insert Data into Multiple Tables?

    - by SidC
    Hello All, I have an ASP.NET 3.5 WebForms project written in VB that involves a multi-table SQL Server insert. That is, I want the customer to input all their contact information, order details, etc. into one control (thinking wizard control). Then, I want to call a stored procedure that does the insert into the respective database tables. I'm familiar and comfortable with the ASP.NET Wizard control. However, all the examples I've seen in my searches pertain to inserting data into one table. Questions: 1. Given a typical order process - customer information, order information, order details - should a wizard control be used to insert data into multiple database tables? If not, what controls/workflow do you suggest? 2. I've set primary keys and indexes on my order details, orders and customers tables. Is there special stored procedure syntax to use to ensure that referential integrity is maintained through the insert process? Thanks, Sid
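
    One common shape for this is to collect everything across the wizard steps and make a single database call from FinishButtonClick. The sketch below is C# rather than VB, and every name in it (the procedure, parameters, connection string and textboxes) is hypothetical; it assumes System.Data.SqlClient and System.Configuration are referenced. Inside the procedure you would insert the customer first, capture SCOPE_IDENTITY() into a variable, use it as the foreign key for the order row, and wrap all of the inserts in BEGIN TRAN / COMMIT with a rollback on error, so referential integrity is never left half-done.

      // Fires when the user clicks Finish on the Wizard control; all values gathered on
      // the earlier steps go to one stored procedure that inserts the customer, the
      // order and the order details inside a single transaction.
      protected void Wizard1_FinishButtonClick(object sender, WizardNavigationEventArgs e)
      {
          string connStr = ConfigurationManager.ConnectionStrings["OrdersDb"].ConnectionString;

          using (SqlConnection conn = new SqlConnection(connStr))
          using (SqlCommand cmd = new SqlCommand("usp_CreateOrder", conn))
          {
              cmd.CommandType = CommandType.StoredProcedure;
              cmd.Parameters.AddWithValue("@CustomerName", txtCustomerName.Text);
              cmd.Parameters.AddWithValue("@Email", txtEmail.Text);
              cmd.Parameters.AddWithValue("@OrderDate", DateTime.Now);
              // ...one parameter per remaining customer / order / order-detail field...

              conn.Open();
              cmd.ExecuteNonQuery();
          }
      }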

    Read the article

  • Is it a good idea to close and open hibernate sessions frequently?

    - by Gaurav
    Hi, I'm developing an application which requires that the state of entities be read from a database at frequent intervals or on triggers. However, once Hibernate reads the state, it doesn't re-read it unless I explicitly close the session and read the entity in a new session. Is it a good idea to open a session every time I want to read the entity and then close it afterwards? How much of an overhead does this put on the application and the database (we use a c3p0 connection pool as well)? Will it be enough to simply evict the entity from the session before reading it again?

    Read the article

  • Storing HTML in MySQL using Java

    - by mpcabd
    Hello there again. So, I'm working on a project where I should store web pages in a database. I'm using crawler4j to crawl and Proxool along with the MySQL Java Connector to connect to my database. When I tested the application I got: com.mysql.jdbc.MysqlDataTruncation: Data truncation: Data too long for column 'HTMLData'. The HTMLData column was TEXT. When I changed the HTMLData column to LONGTEXT the error was gone, but I'm afraid it might come back in the future. Any idea how to handle this properly so I don't have to worry about that error (or any similar error) in the future? Thanks :)

    Read the article

  • Human readable URL causes a problem in Ruby on Rails

    - by TK
    I have a basic CRUD with a "Company" model. To make the company name show up in the URL, I did:

      def to_param
        name.parameterize
      end

    Then I accessed http://localhost:3000/companies/american-express, which runs the show action in the companies controller. Obviously this doesn't work, because the show method is as follows:

      def show
        @company = Company.find_by_id(params[:id])
      end

    The params[:id] is american-express. This string is not stored anywhere. Do I need to store the short string (i.e., "american-express") in the database when I save the record? Or is there any way to retrieve the company data without saving the string in the database?

    Read the article

  • In MySQL, is it better to have one big table or many smaller tables?

    - by user307922
    Hi All, I am making a database of my clients' customers to send email promotions to. The database will include all of my clients (about 12), and each of them has an average of 2100 customers. I was wondering if it would be better to have a table in the db for each one of my clients that contains a list of their customers, or if I should just make one big table... The customers will be queried daily. I know it is a broad question but any advice would be appreciated. Cheers, Chuck

    Read the article

  • How to set utf8 in the auto-generated PHP code of Flash Builder 4?

    - by Mark
    Hi, PHP problem here (I think): I've just created a Flex (Flash Builder) project with a datagrid linked to a database - the database is all utf8. When I run the project using the auto-generated code in Flex 4, the non-English part comes out like ????? while the English part comes out fine. The auto-generated PHP code uses mysqli. I've tried: $this->connection->set_charset('utf8'); or mysqli_query($this->connection, "SET NAMES utf8"); I also tried writing the code myself (I'm not a PHP guy): mysql_query("set names utf8"); was fine - but that's mysql and not mysqli (that's an "i" after the mysql), and I want to use the auto-generated code... any help is appreciated.

    Read the article

  • Sql Server XML-type column duplicate entry detection

    - by aaaa bbbb
    In SQL Server I am using an XML-type column to store a message. I do not want to store duplicate messages. I will only have a few messages per user. I am currently querying the table for these messages and converting the XML to a string in my C# code. I then compare the strings with what I am about to insert. Unfortunately, SQL Server pretty-prints the data in XML-typed fields. What you store into the database is not necessarily exactly the same string as what you get back out later. It is functionally equivalent, but may have white space removed, etc. Is there an efficient way to compare an XML string that I am considering inserting with those that are already in the database? As an aside, if I detect a duplicate I need to delete the older message and then insert the replacement.
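
    One way to compare without fighting the formatting SQL Server applies is to parse both sides and compare the XML trees rather than the raw strings. A C# sketch using LINQ to XML is below; with the default parse options insignificant whitespace is dropped, so XNode.DeepEquals treats the reformatted stored value and the original as equal. A cheaper variation, if the message count ever grows, would be to store a hash of the parsed form next to the XML column and compare hashes first.

      using System.Collections.Generic;
      using System.Xml.Linq;

      static class XmlDuplicateCheck
      {
          // True if the candidate message is structurally identical to one already
          // stored, even though SQL Server may have reformatted the stored copy.
          public static bool IsDuplicate(string candidateXml, IEnumerable<string> storedXmlValues)
          {
              XDocument candidate = XDocument.Parse(candidateXml);

              foreach (string stored in storedXmlValues)
              {
                  XDocument existing = XDocument.Parse(stored);
                  if (XNode.DeepEquals(candidate.Root, existing.Root))
                  {
                      return true;   // caller can delete the older row and insert the new one
                  }
              }
              return false;
          }
      }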

    Read the article

  • Anonymous users support vs Google bot

    - by Andy
    I have a User class in my web app that represents the user currently logged in. Every time a user visits a page, a User instance is populated based on authentication data supplied in cookies. A User instance is created even for an anonymous user - and a corresponding new record is created in the User table in the database. This approach allows me to save some state info for the current user regardless of its type. The problem with this approach, however, is the Google bot and other non-human web organisms crawling my pages. Every time a bot starts to walk around the site, thousands of useless records are created in the database, each of them used for only a single page. Question: what is the best trade-off? How do I support anonymous users and save their state, without getting too much overhead because of cookieless bots?
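
    One common trade-off is to keep the cookie-based User object in memory for every visitor but create the database row lazily, and never for recognised crawlers. The C# sketch below assumes an ASP.NET app and invents the ShouldPersist/IsPersisted flags and the data-access calls; Request.Browser.Crawler is a real property but is only as good as the browser definition files, so a crude user-agent check backs it up.

      public User GetOrCreateUser(HttpRequest request)
      {
          User user = new User();                      // in-memory only, no INSERT yet

          bool looksLikeBot = request.Browser.Crawler ||
              (request.UserAgent != null &&
               request.UserAgent.IndexOf("bot", StringComparison.OrdinalIgnoreCase) >= 0);

          user.ShouldPersist = !looksLikeBot;          // defer the INSERT until state is written
          return user;
      }

      public void SaveState(User user)
      {
          if (!user.ShouldPersist)
          {
              return;                                  // crawlers keep their state in memory only
          }
          if (!user.IsPersisted)
          {
              InsertUserRow(user);                     // hypothetical data-access call
              user.IsPersisted = true;
          }
          UpdateUserState(user);                       // hypothetical data-access call
      }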

    Read the article

  • Custom attributes in Active Directory - determining usage/function and possible removal options?

    - by HopelessN00b
    I've bumped into a highly-customized Active Directory environment (2003 FL) that's got me wondering if there's any particularly easy way to figure out what a custom attribute's function is, and what, if anything, is "using" that particular attribute. And then what some good options for potentially removing custom attributes from the schema might be. Aside from a restore or starting from scratch. If such an option exists. For example, I think I can be fairly certain what the "isDumbass" attribute with a value of TRUE means, but not so much with "IRPextCONST", containing a value of 393684. Likewise, I'd think it should be pretty safe to delete the "isDumbass" attribute, but would like to a) be sure and b) find out what's querying or updating that value anyway, because I suspect that anything using that attribute might be next on the list of things to remove. Ideally, without having to run a search on the contents of every custom script and bit of source code I can get my hands on, of course. And finally, aside from rebuilding from scratch, or doing an authoritative AD restore from backups that don't exist... is there a way to delete a given custom attribute? (Not blank the value, but actually delete the attribute from the schema - some folks would rather not have attributes like "FaggotMeter" and "DouchebagCounter" hanging around.) I've been able to find and successfully test a method on Windows 2k, but it seems like Microsoft disabled this option in SP4, and the domain in question is a 2003 functional level.

    Read the article

  • Does normalization really hurt performance in high traffic sites?

    - by Luke101
    I am designing a database and I would like to normalize it. In one query I will be joining about 30-40 tables. Will this hurt the website's performance if it ever becomes extremely popular? This will be the main query, and it will be getting called 50% of the time. In the other queries I will be joining about 2 tables. I have a choice right now to normalize or not to normalize, but if the normalization becomes a problem in the future I may have to rewrite 40% of the software, and that may take me a long time. Does normalization really hurt in this case? Should I denormalize now while I have the time?

    Read the article

  • Entity Framework 4 - Delete Object

    - by GibboK
    I have 3 tables in my database: CmsMasterPages, CmsMasterPagesAdvSlots (pure junction table) and CmsAdvSlots. Here is a picture of my EDM. I need to find all CmsAdvSlot objects connected with a CmsMasterPage (this part works in my code posted below), and DELETE the result (the CmsAdvSlots) from the database. My problem is that I am not able to DELETE these objects once I have found them. Error: The object cannot be deleted because it was not found in the ObjectStateManager.

      int findMasterPageId = Convert.ToInt32(uxMasterPagesListSelector.SelectedValue);
      CmsMasterPage myMasterPage = context.CmsMasterPages.FirstOrDefault(x => x.MasterPageId == findMasterPageId);
      var resultAdvSlots = myMasterPage.CmsAdvSlots; // It is working until here

      foreach (var toDeleteAdv in resultAdvSlots)
      {
          context.DeleteObject(myMasterPage.CmsAdvSlots.Any()); // ERROR HERE!!
          context.SaveChanges();
      }

    Any idea how to solve it? Thanks for your time! :-)
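
    DeleteObject expects the entity instance itself, while Any() returns a bool that the context has never tracked, hence the ObjectStateManager error; the navigation collection also shouldn't be modified while it is being enumerated. A sketch of the corrected block, assuming the same context and entity names as in the question:

      int findMasterPageId = Convert.ToInt32(uxMasterPagesListSelector.SelectedValue);

      CmsMasterPage myMasterPage = context.CmsMasterPages
          .FirstOrDefault(x => x.MasterPageId == findMasterPageId);

      if (myMasterPage != null)
      {
          // Copy the related slots into a list first, so deleting does not invalidate
          // the enumerator over the navigation collection.
          List<CmsAdvSlot> slotsToDelete = myMasterPage.CmsAdvSlots.ToList();

          foreach (CmsAdvSlot slot in slotsToDelete)
          {
              context.DeleteObject(slot);   // pass the tracked entity, not Any()
          }

          context.SaveChanges();            // one round trip after all deletes are registered
      }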

    Read the article

  • Design ideas for a versioned db schema with related tables also versioned

    - by vfilby
    Here is the drill: I want to version a database. I have done this before using multiple rows, where the table's primary key becomes a combination of the row id and either a datestamp or a version #. Now I want to version a table that depends on many other small tables. Versioning each table will be a giant PITA, so I am looking for good options to version a schema where the data to be versioned spreads over multiple tables. All related tables are properly keyed with foreign key relationships. The database is currently on SQL Server 2005.

    Read the article

  • Turkish characters are not displayed correctly

    - by tfeseas
    The MySQL database uses utf-8 encoding and the data is stored correctly. I use a SET NAMES utf8 query to make sure the data returned is utf-8 encoded. All variables from the database work fine as long as the header charset is utf-8, but the static HTML characters do not display properly. When I set the header charset to ISO-8859-9, the variables are displayed differently while the static HTML characters work OK. Can anyone help me?

      <?php header('Content-Type: text/html; charset=ISO-8859-9'); ?>
      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml">
      <head><title>noname</title>
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    Read the article

  • Finding mySQL duplicates, then merging data

    - by Michael Pasqualone
    I have a MySQL database with a tad under 2 million rows. The database is non-interactive, so efficiency isn't key. The (simplified) structure I have is:

      `id` int(11) NOT NULL auto_increment
      `category` varchar(64) NOT NULL
      `productListing` varchar(256) NOT NULL

    Now the problem I would like to solve is: I want to find duplicates on the productListing field and merge the data in the category field into a single result, deleting the duplicates. So given the following data:

      +----+-----------+----------------+
      | id | category  | productListing |
      +----+-----------+----------------+
      |  1 | Category1 | productGroup1  |
      |  2 | Category2 | productGroup1  |
      |  3 | Category3 | anotherGroup9  |
      +----+-----------+----------------+

    What I want to end up with is:

      +----+---------------------+----------------+
      | id | category            | productListing |
      +----+---------------------+----------------+
      |  1 | Category1,Category2 | productGroup1  |
      |  3 | Category3           | anotherGroup9  |
      +----+---------------------+----------------+

    What's the most efficient way to do this, either in a pure MySQL query or in PHP?

    Read the article

  • How to configure Remote BLOB Storage (RBS) with Microsoft Dynamics CRM 4.0?

    - by jk
    Hi, we have a working site for Dynamics CRM 4.0, and in it we are storing images in the database. Now the database is growing very fast and the server is dying, so I want to enable Remote BLOB Storage (RBS) with Dynamics CRM 4.0. I tried to install RBS for testing, but everywhere it is configured with SharePoint 2010, not with Dynamics CRM. Does anybody know how to install and configure it with Dynamics CRM 4.0? Does RBS work with the Standard Edition of SQL Server 2008? I followed the steps at the link below to install, but they are for SharePoint: http://technet.microsoft.com/en-us/library/ee663474.aspx Any help is appreciated. Thanks

    Read the article

  • How to use LVM on Rackspace Cloud

    - by batrick
    Dear all, I am trying to set up a simple but effective solution to back up my Rackspace cloud servers. These servers each run Subversion, Trac, and some database-backed custom PHP applications. My idea is to set up LVM and mount a volume under, say, /srv. In this volume, I keep the data from all applications. Instead of caring about how to back up each app in a different way (svn hotcopy, trac-admin hotcopy, huge mess for mysql), I simply take an LVM snapshot and back this one up to Cloud Files using the excellent cloudcity script (http://github.com/jspringman/cloudcity/blob/master/cloudcity). The advantage of this solution is that it is quick and easy, and LVM allows me to make decent backups. As more apps are added, it should not be necessary to change the backup script much. The downside, and the main point of my question here, is that I am not sure how to get LVM working on the Rackspace cloud, because there is only one root volume and no service like Amazon's EBS. I was thinking it may be possible to create a large empty file and use this as a "physical volume". Has anybody done anything like this before? Or do you know why it can never work? It would be great to hear from you. Thanks, batrick

    Read the article

  • Efficient mirroring of directories using hardlinks

    - by zoqaeski
    I'm backing up my music collection onto a number of NTFS-formatted external hard drives; however, as I store my main collection in FLAC and have my library on my laptop as MP3s to save space, I want to be able to back up both sets, because mass conversion between formats is time-consuming. The "music" directory can contain any format; the "mp3s" directory contains only MP3s converted from files in the "music" directory. The music collection on the laptop contains only MP3s, but they come from both sources. When I back up my laptop's library to the "mp3s" directory, I want to copy across only those MP3 files that don't exist in the "music" directory; those that do should be hard-linked to the "music" directory. All directories have an identical hierarchy, sorted by artist, album, date, disc number if applicable, etc., and I use a tagging editor to ensure consistency across all these locations. I'm also using a Linux computer, but keeping the music collections on NTFS-formatted partitions so that they are readable by both Linux and Windows. At the moment, I use the following command to perform the backups, but this is time-consuming due to the expensive nature of finding hard links:

      rsync -avu --progress --relative --ignore-existing --link-dest=../music/ **/*.mp3 /media/ntfspocket/mp3s

    Is there a way to perform this backup more efficiently, taking advantage of the directory hierarchy?

    Read the article

  • Solution for distributing MANY simple network tasks?

    - by EmpireJones
    I would like to create some sort of a distributed setup for running a ton of small/simple REST web queries in a production environment. For each 5-10 related queries which are executed from a node, I will generate a very small amount of derived data, which will need to be stored in a standard relational database (such as PostgreSQL). What platforms are built for this type of problem set? The nature, data sizes, and quantities seem to contradict the mindset of Hadoop. There are also more grid based architectures such as Condor and Sun Grid Engine, which I have seen mentioned. I'm not sure if these platforms have any recovery from errors though (checking if a job succeeds). What I would really like is a FIFO type queue that I could add jobs to, with the end result of my database getting updated. Any suggestions on the best tool for the job?

    Read the article
