Search Results

Search found 31452 results on 1259 pages for 'database maintenance'.


  • Using jQuery and OData to Insert a Database Record

    - by Stephen Walther
    In my previous blog entry, I explored two ways of inserting a database record using jQuery. We added a new Movie to the Movie database table by using a generic handler and by using a WCF service. In this blog entry, I want to take a brief look at how you can insert a database record using OData. Introduction to OData The Open Data Protocol (OData) was developed by Microsoft to be an open standard for communicating data across the Internet. Because the protocol is compatible with standards such as REST and JSON, the protocol is particularly well suited for Ajax. OData has undergone several name changes. It was previously referred to as Astoria and ADO.NET Data Services. OData is used by Sharepoint Server 2010, Azure Storage Services, Excel 2010, SQL Server 2008, and project code name “Dallas.” Because OData is being adopted as the public interface of so many important Microsoft technologies, it is a good protocol to learn. You can learn more about OData by visiting the following websites: http://www.odata.org http://msdn.microsoft.com/en-us/data/bb931106.aspx When using the .NET framework, you can easily expose database data through the OData protocol by creating a WCF Data Service. In this blog entry, I will create a WCF Data Service that exposes the Movie database table. Create the Database and Data Model The MoviesDB database is a simple database that contains the following Movies table: You need to create a data model to represent the MoviesDB database. In this blog entry, I use the ADO.NET Entity Framework to create my data model. However, WCF Data Services and OData are not tied to any particular OR/M framework such as the ADO.NET Entity Framework. For details on creating the Entity Framework data model for the MoviesDB database, see the previous blog entry. Create a WCF Data Service You create a new WCF Service by selecting the menu option Project, Add New Item and selecting the WCF Data Service item template (see Figure 1). Name the new WCF Data Service MovieService.svc. Figure 1 – Adding a WCF Data Service Listing 1 contains the default code that you get when you create a new WCF Data Service. There are two things that you need to modify. Listing 1 – New WCF Data Service File using System; using System.Collections.Generic; using System.Data.Services; using System.Data.Services.Common; using System.Linq; using System.ServiceModel.Web; using System.Web; namespace WebApplication1 { public class MovieService : DataService< /* TODO: put your data source class name here */ > { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc. // Examples: // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead); // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } } First, you need to replace the comment /* TODO: put your data source class name here */ with a class that represents the data that you want to expose from the service. In our case, we need to replace the comment with a reference to the MoviesDBEntities class generated by the Entity Framework. Next, you need to configure the security for the WCF Data Service. By default, you cannot query or modify the movie data. We need to update the Entity Set Access Rule to enable us to insert a new database record. 
The updated MovieService.svc is contained in Listing 2: Listing 2 – MovieService.svc using System.Data.Services; using System.Data.Services.Common; namespace WebApplication1 { public class MovieService : DataService<MoviesDBEntities> { public static void InitializeService(DataServiceConfiguration config) { config.SetEntitySetAccessRule("Movies", EntitySetRights.AllWrite); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } } That’s all we have to do. We can now insert a new Movie into the Movies database table by posting a new Movie to the following URL: /MovieService.svc/Movies The request must be a POST request. The Movie must be represented as JSON. Using jQuery with OData The HTML page in Listing 3 illustrates how you can use jQuery to insert a new Movie into the Movies database table using the OData protocol. Listing 3 – Default.htm <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>jQuery OData Insert</title> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/json2.js" type="text/javascript"></script> </head> <body> <form> <label>Title:</label> <input id="title" /> <br /> <label>Director:</label> <input id="director" /> </form> <button id="btnAdd">Add Movie</button> <script type="text/javascript"> $("#btnAdd").click(function () { // Convert the form into an object var data = { Title: $("#title").val(), Director: $("#director").val() }; // JSONify the data var data = JSON.stringify(data); // Post it $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: "MovieService.svc/Movies", data: data, dataType: "json", success: insertCallback }); }); function insertCallback(result) { // unwrap result var newMovie = result["d"]; // Show primary key alert("Movie added with primary key " + newMovie.Id); } </script> </body> </html> jQuery does not include a JSON serializer. Therefore, we need to include the JSON2 library to serialize the new Movie that we wish to create. The Movie is serialized by calling the JSON.stringify() method: var data = JSON.stringify(data); You can download the JSON2 library from the following website: http://www.json.org/js.html The jQuery ajax() method is called to insert the new Movie. Notice that both the contentType and dataType are set to use JSON. The jQuery ajax() method is used to perform a POST operation against the URL MovieService.svc/Movies. Because the POST payload contains a JSON representation of a new Movie, a new Movie is added to the database table of Movies. When the POST completes successfully, the insertCallback() method is called. The new Movie is passed to this method. The method simply displays the primary key of the new Movie: Summary The OData protocol (and its enabling technology named WCF Data Services) works very nicely with Ajax. By creating a WCF Data Service, you can quickly expose your database data to an Ajax application by taking advantage of open standards such as REST, JSON, and OData. In the next blog entry, I want to take a closer look at how the OData protocol supports different methods of querying data.
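
    The Movies table itself appears to have been shown as an image in the original post and is not reproduced in this excerpt. Based on the fields used above (an Id primary key plus Title and Director), a minimal sketch of what that table presumably looks like is given below; all column types, lengths, and anything beyond those three columns are assumptions.

        -- Hypothetical sketch of the MoviesDB Movies table used in this walkthrough.
        -- Id, Title and Director come from the post; the types are assumed.
        CREATE TABLE dbo.Movies
        (
            Id INT IDENTITY(1,1) NOT NULL PRIMARY KEY,  -- surrogate key returned to insertCallback()
            Title NVARCHAR(200) NOT NULL,
            Director NVARCHAR(200) NOT NULL
        );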

    Read the article

  • SQL SERVER 2008 – Introduction to Snapshot Database – Restore From Snapshot

    Snapshot database is one of the most interesting concepts that I have used at some places recently. Here is a quick definition of the subject from Books Online: a Database Snapshot is a read-only, static view of a database (the source database). Multiple snapshots can exist on a source database and can always reside on the [...]
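
    As a rough illustration of the concept described above, a snapshot is created and then used as a restore source with T-SQL along the following lines; the database, logical file and path names here are invented for the example.

        -- Create a read-only, static snapshot of the source database.
        -- NAME must match the logical data file name of the source database.
        CREATE DATABASE SalesDB_Snap_20100101
        ON ( NAME = SalesDB_Data, FILENAME = 'C:\Data\SalesDB_Snap_20100101.ss' )
        AS SNAPSHOT OF SalesDB;

        -- Revert the source database to the state captured by the snapshot.
        RESTORE DATABASE SalesDB
        FROM DATABASE_SNAPSHOT = 'SalesDB_Snap_20100101';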

    Read the article

  • Important Note for Enablement Service Pack 1 for UPK 3.6.1

    - by marc.santosusso
    The following was originally posted to one of the UPK communities on LinkedIn. Since this post generated some feedback that this information was not well known, I thought it would be good to repost, which I've done with permission from Earl Sullivan. This is an FYI for those who have UPK 3.6.1 and applied Enablement Pack 1: there is a manual database update that needs to be run. Here is the information.
    To correct an issue with permissioning in the Library, this Service Pack, issued in March 2010, also contains scripts to update the database on the Oracle Database or Microsoft SQL Server. Once you have run the Setup.exe file for the Service Pack, the necessary script files can be found at the root of the folder where the Developer is installed. These scripts must be run manually according to the instructions below.
    To update a database located on an Oracle Database server manually:
    1. Run the Setup.exe to install the files for the Service Pack.
    2. Start SQL*Plus and log in with the system account.
    3. At the command prompt, enter the path to the AlterSchemaObjects.sql script located at the root of the folder where the Developer is installed, and append the following parameters: schema_owner (there is a limit of 20 characters on the schema owner name; you can find this information in the web.config file located in the Repository.WS folder in the folder where the server is installed) and password (the existing schema owner password). Statement with generic parameters: @C:\AlterSchemaObjects.sql schema_owner password
    4. Run the AlterSchemaObjects.sql script.
    To update a database located on a Microsoft SQL Server manually:
    1. Run the Setup.exe to install the files for the Service Pack.
    2. Log in to the database using the database administrator account.
    3. Open and edit the AlterDBObjects.sql file located at the root of the folder where the Developer is installed.
    4. Replace the ODServer text with the username used when the database was installed. You can find this information in the web.config file located in the Repository.WS folder in the folder where the server is installed.
    5. Change the database from master to the name of the existing Developer database and run the AlterDBObjects.sql script. Note: the database name is the initial catalog in the connection string in the web.config file.
    Editor's note: the database update fixes a problem with permissions where the permissions for a user would be incorrectly updated when a group that the user had been removed from had its permissions changed.

    Read the article

  • Oracle OpenWorld 2012: Focus On Database Security

    - by Troy Kitch
    Oracle OpenWorld 2012 is going to be the place to learn about Oracle Database Security solutions including Oracle Advanced Security with transparent data encryption, Database Vault, Audit Vault and Database Firewall, Label Security, and more. We've put together this Focus On Database Security document so you'll know when and where to attend the key database security sessions, and not miss a thing. 

    Read the article

  • SQL University: What and why of database refactoring

    - by Mladen Prajdic
    This is a post for a great idea called SQL University started by Jorge Segarra, also famously known as SQLChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine, and we'll be talking about database testing and refactoring. In three posts we'll cover: SQLU part 1 - What and why of database testing; SQLU part 2 - What and why of database refactoring; SQLU part 3 - Tools of the trade. This is the second part of the series, and in it we'll take a look at what database refactoring is and why to do it.
    Why refactor a database: To know why we refactor, we first have to know what refactoring actually is. Code refactoring is a process where we change module internals in a way that does not change that module's input/output behavior. For successful refactoring there is one crucial thing we absolutely must have: tests. Automated unit tests are the only guarantee we have that we haven't broken the input/output behavior before refactoring. If you haven't, go back and read my post on the matter. Then start writing them. The next thing you need is a code module. Those are views, UDFs and stored procedures. With direct table access we can kiss fast and sweet refactoring goodbye; that is one more point in favor of having a database abstraction layer. And no, ORMs don't fall into that category. But also know that refactoring is NOT adding new functionality to your code. Many have fallen into this trap. Don't be one of them, and resist the lure of the dark side. And it's a strong lure. We developers in general love to add new stuff to our code, but hate fixing our own mistakes or changing existing code for no apparent reason. To be a good refactorer one needs discipline and focus. Now we know that refactoring is all about changing the inner workings of existing code. This can be due to performance optimizations, changing internal code workflows or some other reason. This is a typical black-box scenario to the outside world. If we upgrade the car engine it still has to drive on the road (preferably faster) and not fly (no matter how cool that would be). Also be aware that white-box tests will break when we refactor.
    What to refactor in a database: Refactoring databases doesn't happen that often, but when it does it can include a lot of stuff. Let us look at a few common cases.
    Adding or removing database schema objects: adding, removing or changing table columns in any way, adding constraints, keys, etc. All of these can be counted as internal changes not visible to the data consumer, but each of them carries a potential input/output behavior change. Dropping a column can result in views not working anymore or stored procedure logic crashing. Adding a unique constraint exposes duplicated data that shouldn't exist. Foreign keys break a truncate table command executed from an application that runs once a month. All these scenarios are very real and can happen. With a proper database abstraction layer fully covered with black-box tests we can make sure something like that does not happen (hopefully at all).
    Changing physical structures: physical structures include heaps, indexes and partitions. We can pretty much add or remove those without changing the data returned by the database, but the performance can be affected. So here we use our performance tests. We do have them, right? Just by adding a single index we can achieve orders of magnitude performance improvement. Won't that make users happy?
    But what if that index causes our write operations to crawl to a stop? Again, we have to test this. There are a lot of things to think about and have tests for. Without tests we can't do successful refactoring!
    Fixing bad code: we all have some bad code in our systems. We usually refer to such code as code smells because it violates good coding practices. Examples of such code smells are SQL injection, use of SELECT *, scalar UDFs or cursors, etc. Each of those is a huge code smell and can result in major code changes. Take SELECT * for example. If we remove a column from a table, the client using that SELECT * statement won't have a clue about it until it runs. Then it will gracefully crash and burn. Not to mention the widely unknown SELECT * view refresh problem that Thomas LaRock (@SQLRockstar on Twitter) and Colin Stasiuk (@BenchmarkIT on Twitter) talk about in detail. Go read about it, it's informative. Refactoring this includes replacing the * with column names and most likely a change to the application using the database.
    Breaking apart huge stored procedures: have you ever seen a stored procedure that was 2000 lines long? I have. It's not pretty. It hurts the eyes and sucks away the will to live for the next 10 minutes. They are a maintenance nightmare and turn into things no one dares to touch. I'm willing to bet that 100% of the time they don't have a single test on them. Large stored procedures (and functions) are a clear sign that they contain business logic. General opinion on good database coding practices says that business logic has no business in the database; that's the application's part. Refactoring such behemoths requires writing lots of edge-case tests for the stored procedure's input/output behavior and then starting to refactor it. First we split the logic inside into smaller parts, like new stored procedures and UDFs. Those then get called from the master stored procedure. Once we've successfully modularized the database code, it's best to transfer that logic into the applications consuming it. This leaves the stored procedure with only common data-manipulation logic. Of course this isn't always possible, so having a plethora of performance and behavior unit tests is absolutely necessary to confirm we've actually improved the codebase in some way.
    Refactoring is not a popular chore amongst developers or managers. The former don't like fixing old code; the latter can't see the financial benefit. Remember how we talked about being lousy at estimating future costs in the previous post? But there comes a time when it must be done. Hopefully I've given you some ideas on how to get started. In the last post of the series we'll take a look at the tools to use and an example of testing and refactoring.
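
    For the SELECT * smell discussed above, a small T-SQL sketch of the kind of change involved (object and column names are invented for the example):

        -- Before: a view defined with SELECT * keeps its old column metadata until
        -- it is refreshed, which is the view refresh problem referred to above.
        -- CREATE VIEW dbo.vwCustomers AS SELECT * FROM dbo.Customers;

        -- Refactored: name the columns explicitly so that dropped or renamed columns
        -- surface as errors when the view is built, not as surprises at run time.
        ALTER VIEW dbo.vwCustomers
        AS
        SELECT CustomerId, FirstName, LastName, Email
        FROM dbo.Customers;

        -- If a SELECT * view has to stay around for a while, refresh its metadata
        -- after the underlying table changes.
        EXEC sp_refreshview N'dbo.vwCustomers';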

    Read the article

  • New White Paper about Upgrade to Oracle Database 12c

    - by Mike Dietrich
    With the release of Oracle Database 12c, a lot of new collateral is becoming available right now, including our new white paper: Upgrading to Oracle Database 12c. This white paper outlines the methods available for you to upgrade and migrate your database to Oracle Database 12c. Learn about different use cases and key factors to consider when choosing the method that best fits your requirements. And if you'd like to have a look into the new Oracle 12c documentation, please find it here: Oracle Database 12c Documentation. -Mike

    Read the article

  • New database security product: Audit Vault and Database Firewall, at a significantly lower price

    - by user645740
    Oracle has combined the Audit Vault and Database Firewall products, making a higher level of database security available to an even wider range of users. The new product, Oracle Audit Vault and Database Firewall (AVDF), is now available at a more favorable price. For viewing reports, it includes a restricted-use license of Oracle Business Intelligence Publisher. The data collection and management server components are highly protected, and the following products can be used under restricted-use licenses on the Audit Vault Server and Database Firewall servers: Oracle Database Enterprise Edition, Database Vault, Partitioning, Advanced Compression and Advanced Security.

    Read the article

  • Database unit testing is now available for SSDT

    - by jamiet
    Good news was announced yesterday for those who are using SSDT and want to write unit tests: unit testing functionality is now available. The announcement was made on the SSDT team blog in the post Available Today: SSDT—December 2012. Here are a few thoughts about this news. Firstly, there seems to be a general impression that database unit testing was not previously available for SSDT – that's not entirely true. Database unit testing was most recently delivered in Visual Studio 2010, and any database unit tests written therein work perfectly well against SQL Server databases created using SSDT (why wouldn't they – it's just a database after all). In other words, if you're running SSDT inside Visual Studio 2010 then you could carry on freely writing database unit tests; some of the tight integration between the two (e.g. right-click on an object in SQL Server Object Explorer and choose to create a unit test) was not there – but I've never found that to be a problem. I am currently working on a project that uses SSDT for database development and have been happily running VS2010 database unit tests for a few months now. All that being said, delivery of database unit testing for SSDT is now with us and that is good news, not least because we now have the ability to create unit tests in VS2012. We also get the tight integration with SSDT itself that I mentioned above. Having now had a look at the new features, I was delighted to find that one of my big complaints about database unit testing has been solved. As I reported here on Connect, a refactor operation would cause unit test code to get completely mangled. Here, for example, is what such an operation did to one of my test queries: SELECT * FROM bi.ProcessMessageLog pml INNER JOIN bi.[LogMessageType] lmt ON pml.[LogMessageTypeId] = lmt.[LogMessageTypeId] WHERE pml.[LogMessage] = 'Ski[LogMessageTypeName]of message: IApplicationCanceled' AND lmt.[LogMessageType] = 'Warning'; which is obviously not ideal. Thankfully that seems to have been solved with this latest release. One disappointment about this new release is that the process for running tests as part of a CI build has not changed from the horrendously complicated process required previously. Check out my blog post Setting up database unit testing as part of a Continuous Integration build process [VS2010 DB Tools - Datadude] for instructions on how to do it. In that blog post I describe it as “fiddly” – I was being kind when I said that! @Jamiet
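
    For anyone who has not seen one, the T-SQL that sits behind a database unit test is essentially a script that exercises the object under test and returns something for the test conditions to assert on. A minimal hand-written sketch of that shape, with an invented procedure name (this is not the designer-generated SSDT output, just the idea):

        -- Exercise the object under test, then return a value that a test
        -- condition (e.g. a scalar value condition expecting 0) can check.
        DECLARE @ErrorRows INT;

        EXEC dbo.ProcessMessages;   -- object under test (invented name)

        SELECT @ErrorRows = COUNT(*)
        FROM bi.ProcessMessageLog pml
        INNER JOIN bi.LogMessageType lmt
            ON pml.LogMessageTypeId = lmt.LogMessageTypeId
        WHERE lmt.LogMessageType = 'Error';

        SELECT @ErrorRows AS ErrorRowCount;  -- asserted against an expected value of 0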

    Read the article

  • OpenWorld - Database Security Demonstrations in Moscone South Left

    - by Troy Kitch
    All this week, Oracle security experts will be giving live product demos of Oracle Database Security solutions in Moscone South Left, in the Oracle DEMOgrounds for "database." Demonstrations include Oracle Database Defense-in-Depth Security, Database Application Data Redaction, Transparent Data Encryption, Oracle Audit Vault and Database Firewall, Data Masking and Data Subsetting. Don't miss it!

    Read the article

  • Simple Steps to Prepare Mirror Database for Mirroring in SQL Server

    To prepare a database for mirroring, you need to perform the following steps: Script the restore of the latest full database backup, script the restore of every transaction log backup that has been made after that full database backup, copy the full database backup and transaction log backups to the mirror server, and run the restore scripts on the mirror server. In this tip I will walk through these steps and provide sample scripts to prepare a database for mirroring.
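
    The restore scripts described above follow a standard pattern; a hedged T-SQL sketch of what runs on the mirror server (the database name and backup file paths are placeholders):

        -- On the mirror server: restore the full backup and then every subsequent
        -- transaction log backup WITH NORECOVERY, so the database is left in the
        -- restoring state that mirroring requires.
        RESTORE DATABASE SalesDB
        FROM DISK = N'\\backupshare\SalesDB_Full.bak'
        WITH NORECOVERY;

        RESTORE LOG SalesDB
        FROM DISK = N'\\backupshare\SalesDB_Log_01.trn'
        WITH NORECOVERY;

        RESTORE LOG SalesDB
        FROM DISK = N'\\backupshare\SalesDB_Log_02.trn'
        WITH NORECOVERY;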

    Read the article

  • Different Ways to Restore a SQL Server Database

    This tip describes the SQL Server database restore principles for full backups, differential backups and transaction log backups on a database that is using the FULL recovery model, and shows how to perform the restores to get to a particular point in time.
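
    The point-in-time sequence described here boils down to T-SQL along these lines; the database name, file paths and STOPAT time are placeholders for the example:

        -- Restore the full backup and the latest differential, leaving the
        -- database non-recovered so further log backups can be applied.
        RESTORE DATABASE SalesDB FROM DISK = N'C:\Backup\SalesDB_Full.bak' WITH NORECOVERY;
        RESTORE DATABASE SalesDB FROM DISK = N'C:\Backup\SalesDB_Diff.bak' WITH NORECOVERY;

        -- Apply log backups up to the desired point in time, then recover.
        RESTORE LOG SalesDB FROM DISK = N'C:\Backup\SalesDB_Log.trn'
        WITH STOPAT = '2013-01-15 14:30:00', RECOVERY;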

    Read the article

  • Redirect all access requests to a domain and subdomain(s) except from specific IP address? [closed]

    - by Christopher
    This is a self-answered question... After much wrangling I found the magic combination of mod_rewrite rules so I'm posting here. My scenario is that I have two domains - domain1.com and domain2.com - both of which are currently serving identical content (by way of a global 301 redirect from domain1 to domain2). Domain1 was then chosen to be repurposed to be a 'portal' domain - with a corporate CMS-based site leading off from the front page, and the existing 'retail' domain (domain2) left to serve the main web site. In addition, a staging subdomain was created on domain1 in order to prepare the new corporate site without impinging on the root domain's existing operation. I contemplated just rewriting all requests to domain2 and setting up the new corporate site 'behind the scenes' without using a staging domain, but I usually use subdomains when setting up new sites. Finally, I required access to the 'actual' contents of the domains and subdomains - i.e., to not be redirected like all other visitors - in order that I can develop the new site and test it in the staging environment on the live server, as I'm not using a separate development webserver in this case. I also have another test subdomain on domain1 which needed to be preserved. The way I eventually set it up was as follows: (10.2.2.1 would be my home WAN IP) .htaccess in root of domain1 RewriteEngine On RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1 RewriteCond %{HTTP_HOST} !^staging.domain1.com$ [NC] RewriteCond %{HTTP_HOST} !^staging2.domain1.com$ [NC] RewriteRule ^(.*)$ http://domain2.com/$1 [R=301] .htaccess in staging subdomain on domain1: RewriteEngine On RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1 RewriteCond %{HTTP_HOST} ^staging.revolver.coop$ [NC] RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L] The multiple .htaccess files and multiple rulesets require more processing overhead and longer iteration as the visitor is potentially redirected twice, however I find it to be a more granular method of control as I can selectively allow more than one IP address access to individual staging subdomain(s) without automatically granting them access to everything else. It also keeps the rulesets fairly simple and easy to read. (or re-interpret, because I'm always forgetting how I put rules together!) If anybody can suggest a more efficient way of merging all these rules and conditions into just one main ruleset in the root of domain1, please post! I'm always keen to learn, this post is more my attempt to preserve this information for those who are looking to redirect entire domains for all visitors except themselves (for design/testing purposes) and not just denying specific file access for maintenance mode (there are many good examples of simple mod_rewrite rules for 'maintenance mode' style operation easily findable via Google). You can also extend the IP address detection - firstly by using wildcards ^10\.2\.2\..*: the last octet's \..* denotes the usual "." and then "zero or more arbitrary characters", signified by the .* - so you can specify specific ranges of IPs in a subnet or entire subnets if you wish. You can also use square brackets: ^10\.2\.[1-255]\.[120-140]; ^10\.2\.[1-9]?[0-9]\.; ^10\.2\.1[0-1][0-9]\. etc. The third way, if you wish to specify multiple discrete IP addresses, is to bracket them in the style of ^(1.1.1.1|2.2.2.2|3.3.3.3)$, and you can of course use square brackets to substitute octets or single digits again. 
NB: if you're using individual RewriteCond lines to specify multiple IPs / ranges, make sure to put [OR] at the end of each one otherwise mod_rewrite will interpret as "if IP address matches 1.1.1.1 AND if IP address matches 2.2.2.2... which is of course impossible! However as far as I'm aware this isn't necessary if you're using the ! negator to specify "and is not...". Kudos also to SE: this older question also came in useful when I was verifying my own knowledge prior to my futzing around with code. This page was helpful, as were the various other links posted below (can't hyperlink them all due to spam protection... other regex checkers are available). The AddedBytes cheat sheet's useful to pin up on your wall. Other referenced URLs: internetofficer.com/seo-tool/regex-tester/ fantomaster.com/faarticles/rewritingurls.txt internetofficer.com/seo-tool/regex-tester/ addedbytes.com/cheat-sheets/mod_rewrite-cheat-sheet/

    Read the article

  • Does anyone have database, programming language/framework suggestions for a GUI point of sale system

    - by Jason Down
    Our company has a point of sale system with many extras, such as ordering and receiving functionality, sales and order history etc. Our main issue is that the system was not designed properly from the ground up, so it takes too long to make fixes and handle requests from our customers. Also, the current technology we are using (Progress database, Progress 4GL for the language) incurs quite a bit of licensing expenses on our customers due to mutli-user license fees for database connections etc. After a lot of discussion it is looking like we will probably start over from scratch (while maintaining the current product at least for the time being). We are looking for a couple of things: Create the system with a nice GUI front end (it is currently CHUI and the application was not built in a way that allows us to redesign the front end... no layering or separation of business logic and gui...shudder). Create the system with the ability to modularize different functionality so the product doesn't have to include all features. This would keep the cost down for our current customers that want basic functionality and a lower price tag. The bells and whistles would be available for those that would want them. Use proper design patterns to make the product easy to add or change any part at any time (i.e. change the database or change the front end without needing to rewrite the application or most of it). This is a problem today because the Progress 4GL code is directly compiled against the database. Small changes in the database requires lots of code recompiling. Our new system will be Linux based, with a possibility of a client application providing functionality from one or more windows boxes. So what I'm looking for is any suggestions on which database and/or framework or programming language(s) someone might recommend for this sort of product. Anyone that has experience in this field might be able to point us in the right direction or even have some ideas of what to avoid. We have considered .NET and SQL Express (we don't need an enterprise level DB), but that would limit us to windows (as far as I know anyway). I have heard of Mono for writing .NET code in a Linux environment, but I don't know much about it yet. We've also considered a Java and MySql based implementation. To summarize we are looking to do the following: Keep licensing costs down on the technology we will use to develop the product (Oracle, yikes! MySQL, nice.) Deliver a solution that is easily maintainable and supportable. A solution that has a component capable of running on "old" hardware through a CHUI front end. (some of our customers have 40+ terminals which would be a ton of cash in order to convert over to a PC). Suggestions would be appreciated. Thanks [UPDATE] I should note that we are currently performing a total cost analysis. This question is intended to give us a couple of "educated" options to look into to include in or analysis. Anyone who could share experiences/suggestions about client/server setups would be appreciated (not just those who have experience with point of sale systems... that would just be a bonus).

    Read the article

  • What is the correct term for - server/client database sync via API?

    - by Daniel
    Forgive the vague question title. I've been programming mobile apps for 3 years now, and I've gotten a little further away from web services and server-side code than I probably should have. Anyway, I'm doing a personal project now and I want to create a web API for it. One of my requirements is to check for updates from my app, so I would send a timestamp to the API. I've used many APIs that my clients prepared for me, and only now am I appreciating their work! What is the term or technique used to create an API backed by a database which tracks changes via dates/timestamps; basically, an effective way for me to query changes occurring since a timestamp? Simply put, I want my app to be able to call my API to sync new and changed data from the server to the app. The app would only have a timestamp of the last time it synced with the server. Would I have a log table for each data table in my database which adds a record for each change? Then I could query all changes with a timestamp later than the one passed to the API. Can anyone point me in the right direction on this?
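
    The log-table idea in the question maps directly onto SQL; a minimal sketch, with invented table and column names, of one change-log table and the "everything since my last sync" query the API handler would run:

        -- One row per insert/update/delete on the tracked table, stamped with
        -- the time of the change (schema is an assumption, MySQL-style syntax).
        CREATE TABLE item_changes (
            id          INT PRIMARY KEY AUTO_INCREMENT,
            item_id     INT NOT NULL,
            change_type VARCHAR(10) NOT NULL,      -- 'insert' | 'update' | 'delete'
            changed_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
        );

        -- The API answers "what changed since my last sync?" with a query like:
        SELECT item_id, change_type, changed_at
        FROM item_changes
        WHERE changed_at > ?          -- the timestamp supplied by the app
        ORDER BY changed_at;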

    Read the article

  • How to install Oracle Database 11g Express Edition on Ubuntu 12.10?

    - by Praneeth Pj
    I installed the Oracle database following the steps mentioned in this blog:
    1. Downloaded 11g Express Edition.
    2. Created a new user oracle under the group dba. The following steps are executed using this user.
    3. Unzipped oracle-xe-11.2.0-1.0.x86_64.rpm.zip and then converted the rpm to an Ubuntu package by running: sudo alien --scripts -d oracle-xe-11.2.0-1.0.x86_64.rpm
    4. Created the /sbin/chkconfig file and added the entries as specified there.
    5. Created /etc/sysctl.d/60-oracle.conf and added the entries as specified in the same link as above.
    6. Ran the commands: ln -s /usr/bin/awk /bin/awk; mkdir /var/lock/subsys; touch /var/lock/subsys/listener
    7. Installed the .deb generated in step 3: sudo dpkg --install oracle-xe_11.2.0-2_amd64.deb
    8. Left the default values as they are: sudo /etc/init.d/oracle-xe configure
    9. Set the following env variables in the ~/.bashrc file: export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe; export ORACLE_SID=XE; export NLS_LANG=`$ORACLE_HOME/bin/nls_lang.sh`; export ORACLE_BASE=/u01/app/oracle; export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH; export PATH=$ORACLE_HOME/bin:$PATH
    10. Ran the commands: chown -R oracle:dba /var/tmp/.oracle; chmod -R 755 /var/tmp/.oracle; chown -R oracle:dba /tmp/.oracle; chmod -R 755 /tmp/.oracle
    11. Started the Oracle Database 11g Express Edition instance: sudo service oracle-xe start
    12. Ran sqlplus / as sysdba and got the following:
        SQL*Plus: Release 11.2.0.2.0 Production on Thu Jan 3 09:41:58 2013
        Copyright (c) 1982, 2011, Oracle. All rights reserved.
        Connected to an idle instance.
    Now when executing any SQL statement in SQL*Plus, I end up with the following error:
        SQL> select * from dual;
        select * from dual
        *
        ERROR at line 1:
        ORA-01034: ORACLE not available
        Process ID: 0
        Session ID: 0 Serial number: 0
    I have increased the swap memory as specified here:
        $ free -m
                     total       used       free     shared    buffers     cached
        Mem:          3901       3428        473          0        182       1988
        -/+ buffers/cache:       1258       2643
        Swap:         5066          0       5066
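
    For what it is worth, "Connected to an idle instance" together with ORA-01034 usually just means the instance has not actually been started. A hedged sketch of checking that from SQL*Plus, assuming the environment variables above are in effect (this may not be the whole story for this particular install):

        -- Connect as SYSDBA and start the instance if it is down
        -- (the oracle-xe init script normally does this on 'start').
        CONNECT / AS SYSDBA
        STARTUP

        -- Once the instance is open, the earlier query should work:
        SELECT * FROM dual;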

    Read the article

  • How to use function to connect to database and how to work with queries?

    - by Abhilash Shukla
    I am using functions to work with database.. Now the way i have defined the functions are as follows:- /** * Database definations */ define ('db_type', 'MYSQL'); define ('db_host', 'localhost'); define ('db_port', '3306'); define ('db_name', 'database'); define ('db_user', 'root'); define ('db_pass', 'password'); define ('db_table_prefix', ''); /** * Database Connect */ function db_connect($host = db_host, $port = db_port, $username = db_user, $password = db_pass, $database = db_name) { if(!$db = @mysql_connect($host.':'.$port, $username, $password)) { return FALSE; } if((strlen($database) > 0) AND (!@mysql_select_db($database, $db))) { return FALSE; } // set the correct charset encoding mysql_query('SET NAMES \'utf8\''); mysql_query('SET CHARACTER_SET \'utf8\''); return $db; } /** * Database Close */ function db_close($identifier) { return mysql_close($identifier); } /** * Database Query */ function db_query($query, $identifier) { return mysql_query($query, $identifier); } Now i want to know whether it is a good way to do this or not..... Also, while database connect i am using $host = db_host Is it ok? Secondly how i can use these functions, these all code is in my FUNCTIONS.php The Database Definitions and also the Database Connect... will it do the needful for me... Using these functions how will i be able to connect to database and using the query function... how will i able to execute a query? VERY IMPORTANT: How can i make mysql to mysqli, is it can be done by just adding an 'i' to mysql....Like:- @mysql_connect @mysqli_connect

    Read the article

  • What is the best approach for database design with lots of columns?

    - by Pratyush
    I am writing a query-based financial application. It lets the user write complicated equations (much like the WHERE part of an SQL query) and find companies matching those criteria. For this, I currently have more than 500 columns in the database table (each column representing a financial field). Examples of columns are: company_name, sales_annual_00, sales_annual_01, sales_annual_02, sales_annual_03, sales_annual_04, profit_annual_00, profit_annual_01... (over 500 such columns). The number of rows is around 5000. Going forward, I would like to further increase the number of columns/financial fields. For the above I would like to get help regarding: 1) What is the best database design approach? Is it OK to have this many columns? 2) How can it be normalized? (The user can use any of these fields in the search criteria.) 3) Is it OK to stick with MySQL, or would a modern document-based database like MongoDB be better for this? P.S. (Update): I have been using MySQL till now and a running example of the usage is at: http://screener.in/companies/89/Formula-- In the above there are around 500 fields/columns to create your query on; however, I want to increase that number much further in the future.
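
    One common way to normalize a table this wide is to turn each financial field into data rather than schema, storing one row per company, field and period. A rough sketch with invented names follows; indexing and the typing of mixed fields would need more thought in a real design.

        CREATE TABLE company (
            company_id   INT PRIMARY KEY,
            company_name VARCHAR(200) NOT NULL
        );

        -- Adding a new financial field no longer means adding a column.
        CREATE TABLE financial_fact (
            company_id INT NOT NULL,
            field_name VARCHAR(100) NOT NULL,   -- e.g. 'sales_annual'
            period     CHAR(2) NOT NULL,        -- e.g. '00', '01', ...
            value      DECIMAL(18,2),
            PRIMARY KEY (company_id, field_name, period),
            FOREIGN KEY (company_id) REFERENCES company(company_id)
        );

        -- A screener condition such as "sales_annual_03 > 2 * sales_annual_00"
        -- then becomes a self-join or pivot over financial_fact.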

    Read the article

  • Unable to edit a database row from JSF

    - by user1924104
    Hi guys i have a data table in JSF which displays all of the contents of my database table, it displays it fine, i also have a delete function that can successfully delete from the database fine and updates the data table fine however when i try to update the database i get the error java.lang.IllegalArgumentException: Cannot convert richard.test.User@129d62a7 of type class richard.test.User to long below is the code that i have been using to delete the rows in the database that is working fine : public void delete(long userID) { PreparedStatement ps = null; Connection con = null; if (userID != 0) { try { Class.forName("com.mysql.jdbc.Driver"); con = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "root", "root"); String sql = "DELETE FROM user1 WHERE userId=" + userID; ps = con.prepareStatement(sql); int i = ps.executeUpdate(); if (i > 0) { System.out.println("Row deleted successfully"); } } catch (Exception e) { e.printStackTrace(); } finally { try { con.close(); ps.close(); } catch (Exception e) { e.printStackTrace(); } } } } i simply wanted to edit the above code so it would update the records instead of deleting them so i edited it to look like : public void editData(long userID) { PreparedStatement ps = null; Connection con = null; if (userID != 0) { try { Class.forName("com.mysql.jdbc.Driver"); con = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "root", "root"); String sql = "UPDATE user1 set name = '"+name+"', email = '"+ email +"', address = '"+address+"' WHERE userId=" + userID; ps = con.prepareStatement(sql); int i = ps.executeUpdate(); if (i > 0) { System.out.println("Row updated successfully"); } } catch (Exception e) { e.printStackTrace(); } finally { try { con.close(); ps.close(); } catch (Exception e) { e.printStackTrace(); } } } } and the xhmtl is : <p:dataTable id="dataTable" var="u" value="#{userBean.getUserList()}" paginator="true" rows="10" editable="true" paginatorTemplate="{CurrentPageReport} {FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}" rowsPerPageTemplate="5,10,15"> <p:column> <f:facet name="header"> User ID </f:facet> #{u.userID} </p:column> <p:column> <f:facet name="header"> Name </f:facet> #{u.name} </p:column> <p:column> <f:facet name="header"> Email </f:facet> #{u.email} </p:column> <p:column> <f:facet name="header"> Address </f:facet> #{u.address} </p:column> <p:column> <f:facet name="header"> Created Date </f:facet> #{u.created_date} </p:column> <p:column> <f:facet name="header"> Delete </f:facet> <h:commandButton value="Delete" action="#{user.delete(u.userID)}" /> </p:column> <p:column> <f:facet name="header"> Delete </f:facet> <h:commandButton value="Edit" action="#{user.editData(u)}" /> </p:column> currently when you press the edit button it will only update it with the same values as i haven't yet managed to get the datatable to be editable with the database, i have seen a few examples with an array list where the data table gets its values from but never a database so if you have any advice on this too it would be great thanks

    Read the article

  • Should I use a config file or database for storing business rules?

    - by foiseworth
    I have recently been reading The Pragmatic Programmer which states that: Details mess up our pristine code—especially if they change frequently. Every time we have to go in and change the code to accommodate some change in business logic, or in the law, or in management's personal tastes of the day, we run the risk of breaking the system—of introducing a new bug. Hunt, Andrew; Thomas, David (1999-10-20). The Pragmatic Programmer: From Journeyman to Master (Kindle Locations 2651-2653). Pearson Education (USA). Kindle Edition. I am currently programming a web app that has some models that have properties that can only be from a set of values, e.g. (not actual example as the web app data confidential): light-type = sphere / cube / cylinder The light type can only be the above three values but according to TPP I should always code as if they could change and place their values in a config file. As there are several incidents of this throughout the app, my question is: Should I store possibly values like these in: a config file: 'light-types' = array(sphere, cube, cylinder), 'other-type' = value, 'etc = etc-value a single table in a database with one line for each config item a database with a table for each config item (e.g. table: light_types; columns: id, name) some other way? Many thanks for any assistance / expertise offered.
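
    For the third option in the list above (a table for each config item), the schema is tiny; a sketch using the light-type example, with names invented to match it:

        -- Lookup-table version of the allowed light-type values; adding a new
        -- allowed value becomes an INSERT rather than a code change.
        CREATE TABLE light_types (
            id   INT PRIMARY KEY,
            name VARCHAR(50) NOT NULL UNIQUE
        );

        INSERT INTO light_types (id, name) VALUES
            (1, 'sphere'),
            (2, 'cube'),
            (3, 'cylinder');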

    Read the article

  • How do you go from a so so programmer to a great one? [closed]

    - by Cervo
    How do you go from being an okay programmer to being able to write maintainable clean code? For example David Hansson was writing Basecamp when in the process he created Rails as part of writing Basecamp in a clean/maintainable way. But how do you know when there is value in a side project like that? I have a bachelors in computer science, and I am about to get a masters and I will say that colleges teach you to write code to solve problems, not neatly or anything. Basically you think of a problem, come up with a solution, and write it down...not necessarily the most maintainable way in the world. Also my first job was in a startup, and now my third is in a small team in a large company where the attitude was/is get it done yesterday (also most of my jobs are mainly database development with SQL with a few ASP.NET web pages/.NET apps on the side). So of course cut/paste is more favored than making things more cleanly. And they would rather have something yesterday even if you have to rewrite it next month rather than to have something in a week that lasts for a year. Also spaghetti code turns up all over the place, and it takes very smart people to write/understand/maintain spaghetti code...However it would be better to do things so simple/clean that even a caveman/woman could do maintenance. Also I get very bored/unmotivated having to go modify the same things cut/pasted in a few locations. Is this the type of skill that you need to learn by working with a serious software organization that has an emphasis on maintenance and maybe even an architect who designs a system architecture and reviews code? Could you really learn it by volunteering on an open source project (it seems to me that a full time programmer job is way more practice than a few hours a week on an open source project)? Is there some course where you can learn this? I can attest that graduate school and undergraduate school do not really emphasize clean software at all. They just teach the structures/algorithms and then send you off into the world to solve problems. Overall I think the first thing is learning to write clean/maintainable code within the bounds of the project in order to become a good programmer. Then the next thing is learning when you need to do a side project (like a framework) to make things more maintainable/clean even while you still deliver things for the deadline in order to become a great programmer. For example, you are making an SQL report and someone gives you 100 calculations for individual columns. At what point does it make sense to construct a domain specific language to encode the rules in simply and then generate all the SQL as opposed to cut/pasting the query from the table a bunch of times and then adjusting each query to do the appropriate calculations. This is the type of thing I would say a great programmer would know. He/she would maybe even know ways to avoid the domain specific language and to still do all the calculations without creating an unmaintainable mess or a ton of repetitive code to cut/paste everywhere.

    Read the article

  • How do I develop database-utilizing application in an agile/test-driven-development way?

    - by user39019
    I want to add databases (traditional client/server RDBMSs like MySQL/PostgreSQL, as opposed to NoSQL or embedded databases) to my toolbox as a developer. I've been using SQLite for simpler projects with only one client, but now I want to do more complicated things (i.e., db-backed web development). I usually like following agile and/or test-driven-development principles. I generally code in Perl or Python. Questions: How do I test my code such that each run of the test suite starts with a 'pristine' state? Do I run a separate instance of the database server for every test? Do I use a temporary database? How do I design my tables/schema so that they are flexible with respect to changing requirements? Do I start with an ORM for my language, or do I stick to manually coding SQL? One thing I don't find appealing is having to change more than one thing (say, the CREATE TABLE statement and the associated CRUD statements) for one change, because that's error-prone. On the other hand, I expect ORMs to be slower and harder to debug than raw SQL. What is the general strategy for migrating data between one version of the program and a newer one? Do I carefully write ALTER TABLE statements between each version, or do I dump the data and import it fresh into the new version?
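
    On the migration question, a common pattern is a small version table plus one script per schema change, applied in order; a hedged sketch with invented names:

        -- Track which migrations have already been applied.
        CREATE TABLE schema_version (
            version    INT PRIMARY KEY,
            applied_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
        );

        -- Migration 2: an additive change, safe to run against existing data.
        ALTER TABLE customer ADD COLUMN preferred_language VARCHAR(10);
        INSERT INTO schema_version (version) VALUES (2);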

    Read the article

  • Keeping files or database records? Java and Python

    - by danpalmer
    My website will use a Neural Network to predict things based on user data. The user can select the data to be used in training the network and then use their trained network to predict things. I am using a framework to create, train and query the networks. This uses Java. The framework has persistence for saving a network to an XML file. What is the best way to store these files? I can see several potential ideas, but I need help choosing which is best:
    1. Save each network to a separate XML file with a name that is stored in the database. Load this each time.
    2. Save all the networks to the same XML file, with each network having a different name that is stored in the database.
    3. Somehow pass what would normally be written to an XML file to the Django site for writing to the database. This would need to be returned to the Java code when a prediction needs to be made.
    I am able to do 1 or 2, but I think their performance will be quite limited, and I am on shared hosting at the moment, so I don't know how pleased they would be with thousands of files. Also, after adding a few thousand records to one XML file, I was noticing a massive performance hit on saving to it. If I were able to implement option 3 somehow, I think it would be best: no issues with separate processes accessing the database, I think performance would be better, and there would be no files lying around. However, the part of the neural network framework I am using (Encog) that saves to a file needs access to a Java file object, not a string that could be saved to a database. Unless there is some Java magic I can do here (I know very little Java), the only way I can see of doing this would be with temporary files, but I don't know if this is the correct way to do it. I would appreciate any ideas on the best way to implement any of the above 3 ideas or any alternatives. Thanks!
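
    If option 3 works out, the storage side is simple: keep the serialized network as text in a row instead of as a file. A sketch of such a table follows (names and column sizes are guesses); getting Encog to write to a string or stream rather than a file is the part that remains open.

        -- Serialized network kept in the database instead of on disk (MySQL-style syntax).
        CREATE TABLE trained_network (
            network_id  INT PRIMARY KEY AUTO_INCREMENT,
            user_id     INT NOT NULL,
            name        VARCHAR(100) NOT NULL,
            network_xml LONGTEXT NOT NULL,
            updated_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
        );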

    Read the article

  • New exam prep seminar: “Upgrade to Oracle Database 12c”

    - by hamsun
    By Brandye Barrington, April 29, 2014. A new certification exam prep seminar, “Upgrade to Oracle Database 12c”, is now available, presented by Gwen Lazenby. It consists of 43 lessons covering the topics of exam 1Z0-060, including the multitenant architecture (CDBs and PDBs), RMAN backup and recovery, Unicode and EM Express, Direct NFS, SQL tuning and Oracle Data Pump. The seminar can be purchased on its own or as a value package bundled with Kaplan SelfTest practice exams, and is aimed at DBAs upgrading to the Oracle Database 12c Administrator Certified Professional credential. Related links: Certification Exam Prep Seminar: Upgrade to Oracle Database 12c; Exam Prep Seminar Value Package: Upgrade to Oracle Database 12c; Exam 1Z0-060: Upgrade to 12c - New Features of Oracle Database 12c; Certification track: Oracle Database 12c Administrator Certified Professional.

    Read the article

  • Copy Selective Data from Database to Invoice, Based on Certain Criteria

    - by Scott
    For starters, here is an example of a microsoft excel database I am working with: Month/Address/Name/Description/Amount January/123 Street/Fred/Painting/100 January/456 Avenue/Scott/Flooring/400 January/789 Road/Scott/Plumbing/100 February/123 Street/Fred/Flooring/600 February/246 Lane/Fred/Electrical/300 March/789 Road/Scott/Drywall/150 What I want to be able to do is selectively copy info from this databse to invoices (also excel). The invoice has three columns: Address/Description/Amount. I want to be able to automatically fill the invoices in as the database is filled in (either automatically, or if I have to actually manually run the macro to do it, that might be fine). Each name (Scott, Fred, etc.) will have their own set of 12 invoices for the year. So, e.g., I want to be able to produce a January invoice for all work done for Scott in January, showing the address, the description and the amount, line by line. So every time work on Scott's address(es) is done, the database is filled in, and i want it to "send" that information to the invoice on the next available line, filling in only the Address/Description/Amount columns from the database. Fred's invoice should fill in as any work is done on Fred's addresses. And once the month changes, the next invoice should start filling in. So first I need to filter the data by the month and the name (and there is actually one more column to filter by, but let's keep this example simpler). Then I need to list the remaining data on the invoice, but only certain cells from the rows that are now left. Help anyone?

    Read the article
