Search Results

Search found 31472 results on 1259 pages for 'offline database'.

  • Opera Mobile, offline web app development, and memory

    - by Jake Krohn
    I'm developing a data collection app for use on an HP iPAQ 211. I'm doing it as an offline web app (go with what you know) using Opera Mobile 9.7 and Google Gears. Because it is an offline app, it is very dependent on JavaScript for much of its behavior. I'm using the LocalServer, Database, and Geolocation components of Gears, as well as the jQuery core and a couple of plugins for form validation and other usability tweaks (no jQuery UI). I've tried to be conservative with my programming style and free up or close resources whenever possible, but Opera just slowly dies after about 10-20 minutes of use. The JavaScript engine stops responding, pages only half-load, and eventually stop loading completely. I'm guessing it's a resource issue. Quitting and relaunching the browser solves the problem, but only temporarily. The iPAQ ships with 128 MB of RAM, about 85-87 MB of which is available immediately after a reset. With only Opera running, about 50 MB still remains unused. My questions are thus: Is it possible to get Opera to address this unused RAM? Are there configuration settings in Opera or in the Windows Registry itself that will help improve performance? I know where to tweak, but the descriptions of the opera:config variables that I've found are less than helpful. Is it laughable to ask about memory management and jQuery in the same sentence? If not, does anyone have any suggestions? Finally, are my plans too ambitious, given the platform I have to work with? I know that Gears and Windows Mobile 6 are on their way out, but they (theoretically) suffice for what I need to do. I could ditch them in favor of an iPhone/iPod Touch, Mobile Safari, and HTML5, but I'd like to try to make this work first. I didn't think that Opera was a dog when it comes to JS performance, but perhaps it's worse than I thought. That this motley collection of technologies works at all is a minor miracle, but it needs to be faster and more stable. I appreciate any suggestions.

  • Offline product catalog

    - by Ben
    I am looking for a way to automate the production of an offline product catalog using product data contained in a SQL Server database. I have thought about using both Crystal Reports and SQL Server Reporting Services for this, but there may be something better suited for the job. There is also a requirement to display product images (currently stored in the database). I thought about perhaps doing a simple Word mail merge for this, but am not sure how I would handle the images. Suggestions appreciated. Thanks, Ben

  • Wondering how Facebook does the "Mutual friends" feature

    - by Pierre
    Hello, I'm currently developing an application to allow students to manage their courses, and I don't really know how to design the database for a specific feature. The client wants that, much like Facebook's mutual friends, when a student displays the list of people currently in a specific course, the people with the most mutual courses are displayed first. As an additional feature, I would like to add a search feature to allow students to look each other up, again displaying the people with the most mutual courses first in the results. I currently use MySQL, I plan to use Cassandra for some other features, and I also use Memcached for result caching. Thanks.
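
    A minimal sketch of one way to do the ranking in plain MySQL, assuming a hypothetical enrollments join table (student_id, course_id) that the question does not spell out; :viewer_id and :course_id stand in for bound parameters:

        -- For each classmate in the course, count how many courses they
        -- share with the viewing student, and sort by that count.
        SELECT roster.student_id,
               COUNT(mine.course_id) AS mutual_courses
        FROM enrollments roster                    -- who is in the displayed course
        JOIN enrollments theirs
             ON theirs.student_id = roster.student_id  -- every course each classmate takes
        LEFT JOIN enrollments mine
             ON mine.course_id = theirs.course_id
            AND mine.student_id = :viewer_id           -- matches only courses the viewer shares
        WHERE roster.course_id = :course_id
          AND roster.student_id <> :viewer_id
        GROUP BY roster.student_id
        ORDER BY mutual_courses DESC;

    The LEFT JOIN keeps classmates with zero mutual courses in the list, which matters when the viewer is browsing a course they are not enrolled in; the same shape works for the search feature with a different WHERE clause.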

  • Backup & recovery of multiple MySQL databases (InnoDB & MyISAM)

    - by Cymon
    I am working on nightly and hourly backups of MySQL databases. There are multiple MySQL databases, each of which is either InnoDB or MyISAM (note: each database is either InnoDB or MyISAM for a reason). With the two different types I want to make sure I am grabbing everything that is needed for backup and recovery. Here is my current plan:

    Nightly: mysqldump of each DB, stored locally and remotely.
    Hourly: flush the binary logs and store them locally and remotely.
    Weekly: expire binary logs older than a week.

    I feel like I am grabbing everything that is needed for the MyISAM databases, but I am concerned about the InnoDB databases and the log files (ib_logfile0, ib_logfile1, ibdata1) they create. Should I back up these files? Nightly? Hourly? Both? Do I really need them if I am already doing the above nightly and hourly backups?
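
    A minimal sketch of the hourly and weekly log steps as plain MySQL statements (the schedule itself would live in cron or similar; the seven-day window is the retention named above):

        -- Hourly: close the current binary log and start a new one,
        -- so the finished log file can be copied off-host.
        FLUSH BINARY LOGS;

        -- Weekly: drop binary logs older than the retention window.
        PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;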

  • Is it a good idea to cache data from web services into a database?

    - by Thierry Lam
    Let's assume that Stackoverflow offers web services where you can retrieve all the questions asked by a specific user. A request to get all questions from user A could result in the following JSON output:

        [
          {
            "question": "What is rest?",
            "date_created": "20/02/2010",
            "votes": 1
          },
          {
            "question": "Which database to use for ...",
            "date_created": "20/07/2009",
            "votes": 5
          }
        ]

    If I want to manipulate and present the data in any way that I want, would it be wise to dump it in a local database? At some point, I will also want to retrieve all answers for each question and store them in a local database. The workflow I'm thinking of is:

    1. User logs in.
    2. Web services retrieve all questions asked by the logged-in user and dump them in a local database.
    3. User wants all answers for a specific question; another web service does the retrieval and dumps them in a local database.
    4. After the user logs out, delete from the local database all questions and answers from that user.
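
    A minimal sketch of a local cache table for this workflow; the names are illustrative, not from the question:

        -- Per-user cache of fetched questions; wiped at logout (step 4).
        CREATE TABLE cached_questions (
            user_id      INT      NOT NULL,  -- the logged-in user the cache belongs to
            question_id  INT      NOT NULL,  -- id assigned by the web service
            question     TEXT     NOT NULL,
            date_created DATE     NOT NULL,
            votes        INT      NOT NULL,
            fetched_at   DATETIME NOT NULL,  -- lets stale rows be re-fetched rather than trusted
            PRIMARY KEY (user_id, question_id)
        );

        -- The logout step of the workflow above:
        DELETE FROM cached_questions WHERE user_id = :user_id;

    Keeping a fetched_at column also leaves room to expire stale rows on a timer instead of only at logout.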

  • How do I correctly model data in SQL-based databases that have some columns in common, but also have columns that differ?

    - by Brandon Weiss
    For instance, let's say I have a User model. Users have things like logins, passwords, e-mail addresses, avatars, etc. But there are two types of Users that will be using this site, let's say Parents and Businesses. I need to store some different information for the Parents (e.g. children's names, domestic partner, salaries, etc.) than for the Businesses (e.g. industry, number of employees, etc.), but some of it is the same, like logins and passwords. How do I correctly structure this in a SQL-based database? Thanks!
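
    A minimal sketch of the class-table-inheritance approach: one shared table plus one detail table per user type. All names here are illustrative, not from the question:

        CREATE TABLE users (
            id       INT PRIMARY KEY,
            login    VARCHAR(50)  NOT NULL UNIQUE,
            password VARCHAR(255) NOT NULL,  -- store a hash, never the raw password
            email    VARCHAR(255) NOT NULL,
            avatar   VARCHAR(255)
        );

        -- One row per user who is a parent; the shared pk keeps it 1:1 with users.
        CREATE TABLE parents (
            user_id          INT PRIMARY KEY,
            domestic_partner VARCHAR(100),
            salary           DECIMAL(12, 2),
            FOREIGN KEY (user_id) REFERENCES users (id)
        );

        CREATE TABLE businesses (
            user_id             INT PRIMARY KEY,
            industry            VARCHAR(100),
            number_of_employees INT,
            FOREIGN KEY (user_id) REFERENCES users (id)
        );

        -- A parent may list several children, so they get their own table.
        CREATE TABLE children (
            id        INT PRIMARY KEY,
            parent_id INT NOT NULL,
            name      VARCHAR(100) NOT NULL,
            FOREIGN KEY (parent_id) REFERENCES parents (user_id)
        );

    The alternative, single-table inheritance, keeps every column in users with a type discriminator and NULLs for the columns that do not apply; it saves a join at the cost of sparse rows.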

  • ER Diagram flaws

    - by spacker_lechuck
    I have the following ER diagram for a bank database: customers may have several accounts, accounts may be held jointly by several customers, each customer is associated with an account set, and accounts are members of one or more account sets. What design rules are violated? What modifications should be made, and why? So far, a few flaws I'm not sure about are: 1) the redundant owner-address attribute in the AcctSets entity; 2) this ER does not support accounts with multiple owners who have different addresses. My question is: how would I go about fixing these flaws and/or other flaws that I may be missing from my analysis? Thanks!
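
    A sketch of one way both flaws are usually fixed, assuming the customary bank-schema entities since the diagram itself is not shown here: move the address onto the customer and make ownership a many-to-many relationship.

        -- Address lives on the customer, not on the account set,
        -- so joint owners can each keep their own address.
        CREATE TABLE customer (
            customer_id INT PRIMARY KEY,
            name        VARCHAR(100) NOT NULL,
            address     VARCHAR(255) NOT NULL
        );

        CREATE TABLE account (
            account_id INT PRIMARY KEY,
            balance    DECIMAL(12, 2) NOT NULL
        );

        -- Many-to-many ownership replaces the single owner-address attribute.
        CREATE TABLE account_owner (
            customer_id INT NOT NULL,
            account_id  INT NOT NULL,
            PRIMARY KEY (customer_id, account_id),
            FOREIGN KEY (customer_id) REFERENCES customer (customer_id),
            FOREIGN KEY (account_id)  REFERENCES account (account_id)
        );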

  • How to organize asp.net mvc project (using entity framework) and a corresponding database project?

    - by Bernie
    I recently switched to VS2010 and am experimenting with ASP.NET MVC 2. I am building a simple website and use the Entity Framework to design the data model. From the .edmx file, I generate the database tables. After a few iterations, I decided that it would be nice to have version control of the database schema as well, so I added a database project into which I imported the script that is generated from the data model. As a result, I have to generate the SQL every time I change the model and redo the import. The database project automatically updates the database. Although the manual steps to generate the SQL and do the import are annoying, this works pretty well, until I want to add the standard tables for user accounts/authorization etc. I can use the framework tools to add the necessary tables, views, etc. to the database, but as I do not want to have them in the .edmx model, I end up with a third manual step. Is anybody facing similar issues?

  • Table with a lot of attributes

    - by Robert
    Hi, I'm planning to build a database project. One of the tables has a lot of attributes. My question is: what is better, to divide the class into 2 separate tables or to put all of the attributes into one table? Below is an example:

        create table User (id, name, surname, ..., show_name, show_photos, ...)

    or

        create table User (id, name, surname, ...)
        create table UserPrivacy (usr_id, show_name, show_photos, ...)

    The performance I suppose is similar, since I can use an index.
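
    A minimal runnable sketch of the two-table variant, with a shared primary key enforcing the one-to-one relationship (the column list is illustrative):

        CREATE TABLE User (
            id      INT PRIMARY KEY,
            name    VARCHAR(100) NOT NULL,
            surname VARCHAR(100) NOT NULL
        );

        -- Privacy flags split out; the pk doubles as the fk, so each user has
        -- at most one privacy row and the join is an indexed pk lookup.
        CREATE TABLE UserPrivacy (
            usr_id      INT PRIMARY KEY,
            show_name   BOOLEAN NOT NULL DEFAULT TRUE,
            show_photos BOOLEAN NOT NULL DEFAULT TRUE,
            FOREIGN KEY (usr_id) REFERENCES User (id)
        );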

  • SQL Complicated Group / Join by Category

    - by Mike Silvis
    I currently have a database structure with two important tables:

    1) Food types (Fruit, Vegetables, Meat)
    2) Specific foods (Apple, Oranges, Carrots, Lettuce, Steak, Pork)

    I am currently trying to build a SQL statement such that I can have the following:

        Fruit < Apple, Orange
        Vegetables < Carrots, Lettuce
        Meat < Steak, Pork

    I have tried using a statement like the following:

        Select * From Food_Type join (Select * From Foods) as Foods on Food_Type.Type_ID = Foods.Type_ID

    but this returns every specific food, while I only want the first 2 per category. So I basically need my subquery to have a limit statement in it, so that it finds only the first 2 per category. However, if I simply do the following:

        Select * From Food_Type join (Select * From Foods LIMIT 2) as Foods on Food_Type.Type_ID = Foods.Type_ID

    my statement only returns 2 results total.
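
    A minimal sketch of the usual "top N per group" workaround in plain MySQL, assuming Foods has its own id primary key to order by (any stable per-row ordering column works):

        -- Keep a food only if fewer than 2 foods in its category have a
        -- smaller id, i.e. it is one of the first two rows of its Type_ID.
        SELECT Food_Type.*, Foods.*
        FROM Food_Type
        JOIN Foods ON Foods.Type_ID = Food_Type.Type_ID
        WHERE (SELECT COUNT(*)
               FROM Foods f2
               WHERE f2.Type_ID = Foods.Type_ID
                 AND f2.id < Foods.id) < 2;

    The correlated subquery plays the role of the per-category LIMIT that a plain subquery cannot express.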

  • How do I save user specific data in an asp.net site?

    - by Greg McNulty
    I just set up user profiles using asp.net 3.5 using wvd. For each user, I would like to store data that they will be updating every day. For example, every time they go for a run they will update time and distance. I intend to allow them to also look up their history of distance and time from any past date. My question is: what does the database schema usually look like for such a setup? Currently asp.net set up a DB for me when I made user profiles. Do I just add an extra table for every user? Should there be one big table with all users' data? How do I relate a user id to their specific data? Etc. I have never done this before, so any ideas on how this is usually done would be very helpful. Thank you.
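
    The usual shape is one shared table with one row per run, keyed by user id, rather than a table per user. A sketch in T-SQL; the table and column names are illustrative, and it assumes the membership UserId (a uniqueidentifier) as the link:

        CREATE TABLE RunLog (
            RunLogId INT IDENTITY(1,1) PRIMARY KEY,
            UserId   UNIQUEIDENTIFIER NOT NULL,  -- aspnet_Users.UserId of the owner
            RunDate  DATE NOT NULL,
            Distance DECIMAL(6, 2) NOT NULL,     -- e.g. kilometers
            Duration INT NOT NULL                -- e.g. seconds
        );

        -- History lookup for one user over any past date range:
        SELECT RunDate, Distance, Duration
        FROM RunLog
        WHERE UserId = @UserId AND RunDate BETWEEN @From AND @To
        ORDER BY RunDate;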

  • Two Tables Serving as one Model in Rails

    - by matsko
    Is it possible in Rails to set up one model which depends on a join of two tables? This would mean that for a model record to be found/updated/destroyed, there would need to be records in both database tables, linked together in a join. The model would just be all the columns of both tables wrapped together, which could then be used for forms and so on. That way, when the model gets created/updated, it is just one form variable hash that gets applied to the model. Is this possible in Rails 2 or 3?
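
    One database-side way to get this shape (hedged: Rails-side answers such as delegation or nested attributes are the other route) is a view that presents the join as a single relation for the model to sit on. The table and column names here are hypothetical:

        -- A single relation over two joined tables; a row exists only
        -- when both underlying records exist, as the question requires.
        CREATE VIEW user_profiles AS
        SELECT u.id,
               u.login,
               p.first_name,
               p.last_name
        FROM users u
        JOIN profiles p ON p.user_id = u.id;

    Reads work like any table; writes would still need to reach both underlying tables, via INSTEAD OF triggers where the database supports them or via model callbacks.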

  • Top Questions and Answers for Plugging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted a comprehensive online forum that shared a path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions. Below are highlights of the most informative:

    Q: DBaaS requires a lengthy and careful design effort. What are the minimum requirements for setting up a scaled-down environment to test it out?
    A: You should have an OEM 12c environment for DBaaS administration, and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server, or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust, real-world configuration with Zones and Pools or DR capabilities, for example.

    Q: How does this benefit companies having their own data center?
    A: It allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility, and reduced risk. The benefits to the (internal) consumers of the services are much faster provisioning and quicker response to changes in business requirements.

    Q: From a deployment perspective, is DBaaS solely the DBA's job?
    A: The best deployment model enables the DBA (or end user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, or sysadmin teams). The service is created either via a self-service portal or by the DBA.

    Q: The purpose of self-service seems to be that the end user does not rely on the DBA. I just need to give him a template; he decides how much AMM he needs. Why should I set it one by one? That doesn't seem to be the purpose of self-service.
    A: Most customers we have worked with define a standardized service catalog, with a few (2 to 5) different classes of service. For each of these classes there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once. Each user then simply selects from the options offered in the catalog.

    Q: Looking at the DBaaS service definition, it seems to be no different from a service definition provided by a well-defined DBA team. Why do you attribute it to DBaaS?
    A: There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter and their Service Definition will evolve less, but they still might need to add capabilities such as Self Service and Metering/Chargeback. Other organizations are still operating in highly siloed environments with little automation, and their formal Service Definition (if they have one) will be a lot less mature today. Therefore their future-state DBaaS will look a lot different from their current state, as will their Service Definition.

    Q: How does Database as a Service impact or help with "Click to Compute" or deploying a database in cloud infrastructure?
    A: DBaaS enables Click to Compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database, and consolidation using virtualization in infrastructure cloud. As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant, and higher isolation using infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models.

    Q: How exactly is DBaaS different from a traditional database, with storage/OS/DB all together 'transparently' providing service to applications? Will there be cross-database access by applications/users?
    A: Some key differences are: 1) the services run on a shared platform; 2) the services can be rapidly provisioned (under 15 minutes); 3) the services are dynamic and can be relocated, grown, and shrunk as needed to meet business needs, rapidly and without disruption; 4) the user is able to provision the services directly from a standardized service catalog.

    Q: With 24x7x365 databases it is difficult to find off-peak hours for basic admin tasks such as gathering stats, running backups, and batch jobs. How do pluggable databases handle this, and the differing needs and patching downtime of the apps the databases might be serving?
    A: You can gather stats in Oracle Multitenant the same way you did in regular databases. Regarding patching and upgrading, Oracle Multitenant makes them very efficient: you can pre-provision a new patched version of the multitenant database in a different ORACLE_HOME, then unplug a PDB from its CDB and plug it into the newer, patched CDB in seconds.

    Thanks for all the great questions! If you'd like to learn more and missed the online forum, you can watch it on demand here.
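
    The unplug/plug flow mentioned in the patching answer looks roughly like this in Oracle Multitenant 12c. A sketch only; the PDB name and manifest path are illustrative:

        -- On the old, unpatched CDB: detach the pluggable database.
        ALTER PLUGGABLE DATABASE sales CLOSE IMMEDIATE;
        ALTER PLUGGABLE DATABASE sales UNPLUG INTO '/tmp/sales.xml';
        DROP PLUGGABLE DATABASE sales KEEP DATAFILES;

        -- On the new, already-patched CDB: plug it back in and open it.
        CREATE PLUGGABLE DATABASE sales USING '/tmp/sales.xml' NOCOPY TEMPFILE REUSE;
        ALTER PLUGGABLE DATABASE sales OPEN;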

  • Bulletproof way to DROP and CREATE a database under Continuous Integration.

    - by H. Abraham Chavez
    I am attempting to drop and recreate a database from my CI setup, but I'm finding it difficult to automate, which is to be expected given the complexities of a DB that is in use. Sometimes the process hangs, errors out with "db is currently in use", or just takes too long. I don't care if the db is in use; I want to kill it and create it again. Does someone have a straight-shot method to do this? Alternatively, does anyone have experience dropping all objects in the db instead of dropping the db itself?

        USE master;
        -- Drop the database if it exists, then recreate it
        IF EXISTS (SELECT name FROM sys.databases WHERE name = 'mydb')
        BEGIN
            -- SINGLE_USER (or RESTRICTED_USER) with ROLLBACK IMMEDIATE
            -- kills open connections so the drop cannot hang
            ALTER DATABASE mydb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
            DROP DATABASE mydb;
        END
        CREATE DATABASE mydb;

  • I've created a database table using Visual Studio for my C# program. Now what?

    - by Kevin
    Hi! I'm very new to C#, so please forgive me if I've overlooked something here. I've created a database using Visual Studio (Add New Item > Service-based Database) called LoadForecast.mdf. I then created a table called ForecastsDB and added some fields. My main question is this: I've created a console application with the intention of writing some data to the newly created database. I've added LoadForecast.mdf as a data source for my program, but is there anything else I should do? I saw an example where the next step was adding a data diagram, but that was for a visual application, not a console application. Do I still need to diagram the database for my console app? I just want to be able to write new records out to my database table, and I wasn't sure if there were any other things I needed to do for the VS environment to be "aware" of my database. Thanks for any advice!

  • Working with foreign keys - cannot insert

    - by Industrial
    Hi everyone! Doing my first tryouts with foreign keys in a MySQL database, and I'm trying to do an insert that fails for this reason:

        Integrity constraint violation: 1452 Cannot add or update a child row: a foreign key constraint fails

    Does this mean that foreign keys restrict INSERTs as well as DELETEs and/or UPDATEs on each table that is enforced with foreign key relations? Thanks!

    Updated description:

        Products
        ----------------------------
        id        | type
        ----------------------------
        0         | 0
        1         | 3

        ProductsToCategories
        ----------------------------
        productid | categoryid
        ----------------------------
        0         | 0
        1         | 1

    The products table has the following structure:

        CREATE TABLE IF NOT EXISTS `alpha`.`products` (
          `id` MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT,
          `type` TINYINT(2) UNSIGNED NOT NULL DEFAULT 0,
          PRIMARY KEY (`id`),
          CONSTRAINT `prodsku`
            FOREIGN KEY (`id`) REFERENCES `alpha`.`productsToSku` (`product`)
            ON DELETE CASCADE
            ON UPDATE CASCADE
        ) ENGINE = InnoDB;

  • How to deploy a Java Swing application with an embedded JavaDB database?

    - by Jonas
    I have implemented a Java Swing application that uses an embedded JavaDB database. The database needs to be stored somewhere, and the database tables need to be created on the first run. What is the preferred way to handle this? Should I always create the database in the local directory, first check whether the database file exists, and if it doesn't, let the user create the tables (or at least show a message that the tables will be created)? Or should I let the user choose a path? But then I have to save the path somewhere. Should I save it with Preferences.systemRoot(), and check if that variable is set on startup? If the user chooses a path and saves it in the Preferences, can I run into problems with user permissions, or should it be safe wherever the user stores the database? How do I handle this? Any other suggestions for this procedure?
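
    One common pattern is to let the embedded Derby driver create the database itself on first connect (a JDBC URL ending in ;create=true does this) and then probe Derby's system catalog to decide whether the application tables still need to be created. A sketch; the table name is hypothetical:

        -- Returns 1 if the application schema was already created on a previous run.
        -- Derby folds unquoted identifiers to upper case, hence 'MEASUREMENTS'.
        SELECT COUNT(*)
        FROM SYS.SYSTABLES
        WHERE TABLENAME = 'MEASUREMENTS';

    If the count is zero, run the CREATE TABLE script before continuing, which also covers the "show a message that the tables will be created" branch.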

  • How to optimize an SQL query with many thousands of WHERE clauses

    - by bugaboo
    I have a series of queries against a very large database, and I have hundreds of thousands of ORs in WHERE clauses. What is the best and easiest way to optimize such SQL queries? I found some articles about creating temporary tables and using joins, but I am unsure. I'm new to serious SQL, and have been cutting and pasting results from one query into the next.

        SELECT doc_id, language, author, title FROM doc_text WHERE language='fr' OR language='es'

        SELECT doc_id, ref_id FROM doc_ref WHERE doc_id=1234567 OR doc_id=1234570 OR doc_id=1234572 OR doc_id=1234596 OR ...

        SELECT ref_id, location_id FROM ref_master WHERE ref_id=098765 OR ref_id=987654 OR ref_id=876543 OR ...

        SELECT location_id, location_display_name FROM location

        SELECT doc_id, index_code FROM doc_index WHERE doc_id=1234567 OR doc_id=1234570 OR doc_id=1234572 OR doc_id=1234596 OR ... x100,000

    These unoptimized queries can take over 24 hours each. Cheers.
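
    A minimal sketch of the temp-table approach mentioned above, in MySQL syntax; the table and column names follow the queries in the question, and the id list would be bulk-loaded rather than pasted:

        -- Load the id list once instead of repeating it in every WHERE clause.
        CREATE TEMPORARY TABLE wanted_docs (doc_id INT PRIMARY KEY);
        INSERT INTO wanted_docs (doc_id)
        VALUES (1234567), (1234570), (1234572), (1234596);
        -- ... bulk-load the remaining ids, e.g. with LOAD DATA INFILE

        -- The giant OR list becomes an indexed join.
        SELECT d.doc_id, d.ref_id
        FROM doc_ref d
        JOIN wanted_docs w ON w.doc_id = d.doc_id;

    The same temp table can feed the later queries (doc_index, and a second table of ref_ids for ref_master), so each result set chains into the next without any cut-and-paste.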

  • SQL Server many-to-many design recommendation

    - by Jean-Philippe Brabant
    I have a SQL Server database with two tables: Users and Achievements. My users can have multiple achievements, so it's a many-to-many relation. At school we learned to create an associative table for that sort of relation. That means creating a table with a UserID and an AchievementID. But if I have 500 users and 50 achievements, that could lead to 25,000 rows. As an alternative, I could add a binary field to my Users table. For example, if that field contained 10010, that would mean that this user unlocked the first and the fourth achievements. Is there another way? And which one should I use?
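
    A minimal sketch of the associative-table option in T-SQL; the key column names of Users and Achievements are assumptions, not from the question:

        CREATE TABLE UserAchievements (
            UserId        INT NOT NULL,
            AchievementId INT NOT NULL,
            UnlockedAt    DATETIME NOT NULL DEFAULT GETDATE(),
            -- composite pk keeps each pair unique and indexed
            PRIMARY KEY (UserId, AchievementId),
            FOREIGN KEY (UserId)        REFERENCES Users (UserId),
            FOREIGN KEY (AchievementId) REFERENCES Achievements (AchievementId)
        );

        -- All achievements unlocked by one user:
        SELECT ua.AchievementId, ua.UnlockedAt
        FROM UserAchievements ua
        WHERE ua.UserId = @UserId;

    Unlike the bit-field, this stays queryable from both directions (per user and per achievement), and 25,000 narrow rows is a small table by SQL Server standards.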

  • Recommended Offline Form Format

    - by Blithe
    Hi, I would like to ask for a recommendation on an offline form format which users can bring around. Once done, they will upload the file to the server and the data will be extracted from there. We are currently looking at Word and Excel 2003, since all users have them on their machines, but it seems that using interop on a server is something to avoid. Suggestions?

  • choose append to existing backup instead of overwrite

    - by aron
    Hello, I have a database and I made its first backup 2 days ago. Then yesterday I spent an entire day adding new records. This morning I ran a backup, but I selected "append to existing backup set", as pictured below. I just ran a restore and found that it wiped out all my data from yesterday and restored the version from 2 days ago, not the version from this morning's backup. I zipped this backup file to be safe. I changed some data in the DB, then I ran the backup again, but this time I selected "overwrite all existing backup sets". Now when I restore the db, it seems to restore the data correctly. I think I learned a lesson here; correct me if I'm wrong. My question is: did I lose an entire day of work? I still have this morning's backup .bak file safe in a zip. Is there any way I can restore it with the right data?
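
    If the morning backup was appended to the same media set, it should still be selectable by its position within the file. A T-SQL sketch; the disk path and database name are illustrative:

        -- List every backup set inside the .bak, with its Position number.
        RESTORE HEADERONLY FROM DISK = 'C:\backups\mydb.bak';

        -- Restore the appended set (e.g. position 2) instead of the first one.
        RESTORE DATABASE mydb
        FROM DISK = 'C:\backups\mydb.bak'
        WITH FILE = 2, REPLACE;

    A restore that silently picks FILE = 1 (the default) would explain getting the two-day-old data back from an appended media set.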
