Search Results

Search found 34093 results on 1364 pages for 'database architecture'.


  • Separating and counting CSV entries from database (Access/ASP Classic)

    - by Katherine Perotta
    Hey, I could really use some help with this one. I have an FAQ with multiple "tags" that I would like to separate and count. They are currently stored in the database as follows (sample records):

        ID    TITLE           CONTENT           TAGS
        1     sampletitle 1   samplecontent     tag1,tag2,tag3
        2     sampletitle 2   moresamplestuff   tag3,tag4,tag5

    How could I go about counting the number of times each tag is used? In the end, would it be easier to just create a separate table called TAGS, with a single tag corresponding to a single ID in FAQ? The only reason I'd rather not do that is that I already have so much data it would take quite a while. However, if there's no alternative, or if it's easier than string parsing like this, I'm willing to do it. The goal is to display each unique tag and the number of times it is used. Would it be better to do the heavy lifting in the database or in ASP? I have gotten as far as getting a list of all tags and displaying them in an array (with each tag separated). So at this point I need to count each value and then remove the duplicates (while preserving the count somewhere). This is in ASP Classic with an Access database. Thanks!
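    For reference, a minimal sketch of the separate-table approach mentioned above, in generic SQL (table and column names are invented; Access syntax may differ slightly). Counting then becomes a single GROUP BY instead of string parsing:

        -- One row per (FAQ entry, tag) pair; populated once by parsing the CSV column.
        CREATE TABLE FaqTags (
            FaqID  INTEGER     NOT NULL,   -- matches FAQ.ID
            Tag    VARCHAR(50) NOT NULL,
            PRIMARY KEY (FaqID, Tag)
        );

        -- Each unique tag and the number of times it is used:
        SELECT Tag, COUNT(*) AS UsageCount
        FROM FaqTags
        GROUP BY Tag
        ORDER BY COUNT(*) DESC;

    The one-time migration can reuse the parsing loop already written: for each FAQ row, insert one FaqTags row per parsed tag.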

    Read the article

  • How to update a string property of an SQLite database item

    - by Thomas Joos
    Hi all, I'm trying to write an application that checks whether there is an active internet connection. If so, it reads an XML file and checks every 'lastupdated' item (a PHP-generated string). It compares it to the database items, and if there is a new value, that particular item needs to be updated. My code seems to work (it compiles, with no error messages and no failures), but I notice that the property in question does not change; it becomes (null). When I output the bound string value, it returns the correct string I want to write to the db. Any idea what I'm doing wrong?

        const char *sql = "update myTable Set last_updated=? Where node_id =?";
        sqlite3_stmt *statement;
        // Preparing a statement compiles the SQL query into a byte-code program in the SQLite library.
        // The third parameter is either the length of the SQL string or -1 to read up to the first null terminator.
        if (sqlite3_prepare_v2(database, sql, -1, &statement, NULL) == SQLITE_OK) {
            NSLog(@"last updated item: %@", [d lastupdated]);
            sqlite3_bind_text(statement, 1, [d lastupdated], -1, SQLITE_TRANSIENT);
            sqlite3_bind_int(statement, 2, [d node_id]);
        } else {
            NSLog(@"SQLite statement error!");
        }
        if (SQLITE_DONE != sqlite3_step(statement)) {
            NSAssert1(0, @"Error while updating. '%s'", sqlite3_errmsg(database));
        } else {
            NSLog(@"SQLite Update done!");
        }

    Read the article

  • Database solution for 200 million writes/day, monthly summarization queries

    - by sb
    Hello. I'm looking for help deciding on which database system to use. (I've been googling and reading for the past few hours; it now seems worthwhile to ask someone with firsthand knowledge.) I need to log around 200 million rows (or more) per 8-hour workday to a database, then perform weekly/monthly/yearly summary queries on that data. The summary queries would collect data for things like billing statements, e.g. "How many transactions of type A did each user run this month?" (it could be more complex, but that's the general idea). I can spread the database amongst several machines as necessary, but I don't think I can take old data offline: I'll definitely need to be able to query a month's worth of data, maybe a year. These queries would be for my own use and wouldn't need to be generated in real time for an end user (they could run overnight, if needed). Does anyone have any suggestions as to which databases would be a good fit? P.S. Cassandra looks like it would have no problem handling the writes, but what about the huge monthly table scans? Is anyone familiar with Cassandra/Hadoop MapReduce performance?
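    Whatever engine is chosen, the monthly-scan problem is usually tamed by incremental pre-aggregation: roll the raw rows up into a small summary table as they arrive (or nightly), so billing queries never scan the raw log. A hedged sketch in generic SQL, with invented table and column names:

        CREATE TABLE txn_log (                -- raw feed: ~200M inserts per workday
            user_id    BIGINT    NOT NULL,
            txn_type   CHAR(1)   NOT NULL,
            created_at TIMESTAMP NOT NULL
        );

        CREATE TABLE txn_daily_summary (      -- a few rows per user per day
            summary_date DATE    NOT NULL,
            user_id      BIGINT  NOT NULL,
            txn_type     CHAR(1) NOT NULL,
            txn_count    BIGINT  NOT NULL,
            PRIMARY KEY (summary_date, user_id, txn_type)
        );

        -- Nightly rollup over yesterday's window; the monthly billing query
        -- then reads txn_daily_summary instead of scanning billions of raw rows.
        INSERT INTO txn_daily_summary
        SELECT CAST(created_at AS DATE), user_id, txn_type, COUNT(*)
        FROM txn_log
        WHERE created_at >= ? AND created_at < ?
        GROUP BY CAST(created_at AS DATE), user_id, txn_type;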

    Read the article

  • Database Tutorial: The method open() is undefined for the type MainActivity.DBAdapter

    - by user2203633
    I am trying to follow this SQLite database tutorial in Eclipse: https://www.youtube.com/watch?v=j-IV87qQ00M but I get a few errors at the end. At db.open(); I get the error "The method open() is undefined for the type MainActivity.DBAdapter", and similar errors for insertRecord and close.

    MainActivity:

        package com.example.studentdatabase;

        import java.io.File;
        import java.io.FileNotFoundException;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import android.app.Activity;
        import android.app.ListActivity;
        import android.content.Intent;
        import android.database.Cursor;
        import android.os.Bundle;
        import android.util.Log;
        import android.view.LayoutInflater;
        import android.view.View;
        import android.view.ViewGroup;
        import android.widget.BaseAdapter;
        import android.widget.Button;
        import android.widget.Toast;

        public class MainActivity extends Activity {
            /** Called when the activity is first created. */
            //DBAdapter db = new DBAdapter(this);

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);

                Button addBtn = (Button) findViewById(R.id.add);
                addBtn.setOnClickListener(new View.OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        Intent i = new Intent(MainActivity.this, addassignment.class);
                        startActivity(i);
                    }
                });

                try {
                    String destPath = "/data/data/" + getPackageName() + "/databases/AssignmentDB";
                    File f = new File(destPath);
                    if (!f.exists()) {
                        CopyDB(getBaseContext().getAssets().open("mydb"),
                               new FileOutputStream(destPath));
                    }
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }

                DBAdapter db = new DBAdapter();

                //---add an assignment---
                db.open();
                long id = db.insertRecord("Hello World", "2/18/2012", "DPR 224", "First Android Project");
                id = db.insertRecord("Workbook Exercises", "3/1/2012", "MAT 100", "Do odd numbers");
                db.close();

                //---get all Records---
                /*
                db.open();
                Cursor c = db.getAllRecords();
                if (c.moveToFirst()) {
                    do {
                        DisplayRecord(c);
                    } while (c.moveToNext());
                }
                db.close();
                */

                //---get a Record---
                /*
                db.open();
                Cursor c = db.getRecord(2);
                if (c.moveToFirst())
                    DisplayRecord(c);
                else
                    Toast.makeText(this, "No Assignments found", Toast.LENGTH_LONG).show();
                db.close();
                */

                //---update Record---
                /*
                db.open();
                if (db.updateRecord(1, "Hello Android", "2/19/2012", "DPR 224", "First Android Project"))
                    Toast.makeText(this, "Update successful.", Toast.LENGTH_LONG).show();
                else
                    Toast.makeText(this, "Update failed.", Toast.LENGTH_LONG).show();
                db.close();
                */

                //---delete a Record---
                /*
                db.open();
                if (db.deleteRecord(1))
                    Toast.makeText(this, "Delete successful.", Toast.LENGTH_LONG).show();
                else
                    Toast.makeText(this, "Delete failed.", Toast.LENGTH_LONG).show();
                db.close();
                */
            }

            private class DBAdapter extends BaseAdapter {
                private LayoutInflater mInflater;
                //private ArrayList<>

                @Override
                public int getCount() { return 0; }

                @Override
                public Object getItem(int arg0) { return null; }

                @Override
                public long getItemId(int arg0) { return 0; }

                @Override
                public View getView(int arg0, View arg1, ViewGroup arg2) { return null; }
            }

            public void CopyDB(InputStream inputStream, OutputStream outputStream) throws IOException {
                //---copy 1K bytes at a time---
                byte[] buffer = new byte[1024];
                int length;
                while ((length = inputStream.read(buffer)) > 0) {
                    outputStream.write(buffer, 0, length);
                }
                inputStream.close();
                outputStream.close();
            }

            public void DisplayRecord(Cursor c) {
                Toast.makeText(this,
                    "id: " + c.getString(0) + "\n" +
                    "Title: " + c.getString(1) + "\n" +
                    "Due Date: " + c.getString(2),
                    Toast.LENGTH_SHORT).show();
            }

            public void addAssignment(View view) {
                Intent i = new Intent("com.pinchtapzoom.addassignment");
                startActivity(i);
                Log.d("TAG", "Clicked");
            }
        }

    DBAdapter code:

        package com.example.studentdatabase;

        public class DBAdapter {
            public static final String KEY_ROWID = "id";
            public static final String KEY_TITLE = "title";
            public static final String KEY_DUEDATE = "duedate";
            public static final String KEY_COURSE = "course";
            public static final String KEY_NOTES = "notes";
            private static final String TAG = "DBAdapter";

            private static final String DATABASE_NAME = "AssignmentsDB";
            private static final String DATABASE_TABLE = "assignments";
            private static final int DATABASE_VERSION = 2;

            private static final String DATABASE_CREATE =
                "create table if not exists assignments (id integer primary key autoincrement, "
                + "title VARCHAR not null, duedate date, course VARCHAR, notes VARCHAR );";

            private final Context context;
            private DatabaseHelper DBHelper;
            private SQLiteDatabase db;

            public DBAdapter(Context ctx) {
                this.context = ctx;
                DBHelper = new DatabaseHelper(context);
            }

            private static class DatabaseHelper extends SQLiteOpenHelper {
                DatabaseHelper(Context context) {
                    super(context, DATABASE_NAME, null, DATABASE_VERSION);
                }

                @Override
                public void onCreate(SQLiteDatabase db) {
                    try {
                        db.execSQL(DATABASE_CREATE);
                    } catch (SQLException e) {
                        e.printStackTrace();
                    }
                }

                @Override
                public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                    Log.w(TAG, "Upgrading database from version " + oldVersion + " to " + newVersion
                        + ", which will destroy all old data");
                    db.execSQL("DROP TABLE IF EXISTS contacts");
                    onCreate(db);
                }
            }

            //---opens the database---
            public DBAdapter open() throws SQLException {
                db = DBHelper.getWritableDatabase();
                return this;
            }

            //---closes the database---
            public void close() {
                DBHelper.close();
            }

            //---insert a record into the database---
            public long insertRecord(String title, String duedate, String course, String notes) {
                ContentValues initialValues = new ContentValues();
                initialValues.put(KEY_TITLE, title);
                initialValues.put(KEY_DUEDATE, duedate);
                initialValues.put(KEY_COURSE, course);
                initialValues.put(KEY_NOTES, notes);
                return db.insert(DATABASE_TABLE, null, initialValues);
            }

            //---deletes a particular record---
            public boolean deleteContact(long rowId) {
                return db.delete(DATABASE_TABLE, KEY_ROWID + "=" + rowId, null) > 0;
            }

            //---retrieves all the records---
            public Cursor getAllRecords() {
                return db.query(DATABASE_TABLE, new String[] {KEY_ROWID, KEY_TITLE, KEY_DUEDATE,
                    KEY_COURSE, KEY_NOTES}, null, null, null, null, null);
            }

            //---retrieves a particular record---
            public Cursor getRecord(long rowId) throws SQLException {
                Cursor mCursor = db.query(true, DATABASE_TABLE, new String[] {KEY_ROWID, KEY_TITLE,
                    KEY_DUEDATE, KEY_COURSE, KEY_NOTES}, KEY_ROWID + "=" + rowId,
                    null, null, null, null, null);
                if (mCursor != null) {
                    mCursor.moveToFirst();
                }
                return mCursor;
            }

            //---updates a record---
            public boolean updateRecord(long rowId, String title, String duedate, String course, String notes) {
                ContentValues args = new ContentValues();
                args.put(KEY_TITLE, title);
                args.put(KEY_DUEDATE, duedate);
                args.put(KEY_COURSE, course);
                args.put(KEY_NOTES, notes);
                return db.update(DATABASE_TABLE, args, KEY_ROWID + "=" + rowId, null) > 0;
            }
        }

    addassignment code:

        package com.example.studentdatabase;

        public class addassignment extends Activity {
            DBAdapter db = new DBAdapter(this);

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.add);
            }

            public void addAssignment(View v) {
                Log.d("test", "adding");
                //get data from form
                EditText nameTxt = (EditText) findViewById(R.id.editTitle);
                EditText dateTxt = (EditText) findViewById(R.id.editDuedate);
                EditText courseTxt = (EditText) findViewById(R.id.editCourse);
                EditText notesTxt = (EditText) findViewById(R.id.editNotes);

                db.open();
                long id = db.insertRecord(nameTxt.getText().toString(), dateTxt.getText().toString(),
                    courseTxt.getText().toString(), notesTxt.getText().toString());
                db.close();

                nameTxt.setText("");
                dateTxt.setText("");
                courseTxt.setText("");
                notesTxt.setText("");
                Toast.makeText(addassignment.this, "Assignment Added", Toast.LENGTH_LONG).show();
            }

            public void viewAssignments(View v) {
                Intent i = new Intent(this, MainActivity.class);
                startActivity(i);
            }
        }

    What is wrong here? Thanks in advance.

    Read the article

  • How can I build a voting system to support multiple types of objects to vote on?

    - by Kyle Hayes
    I'm really looking for something very similar to the way SO is set up, where a few different kinds of things can be voted on (questions AND answers). What kind of DB schema, generally, could I use to support voting on many different kinds of objects? Would I have a single Vote table that holds references to the other objects in the database? Or should I have a separate vote table for each type of object I would like to vote on?
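    A hedged sketch of the single-table option in generic SQL (names invented). The trade-off is that TargetType/TargetId cannot be a real foreign key, so referential integrity must be enforced in application code; one vote table per votable type keeps real foreign keys at the cost of more tables:

        CREATE TABLE Votes (
            VoteId     INTEGER   NOT NULL PRIMARY KEY,  -- IDENTITY/AUTO_INCREMENT per RDBMS
            TargetType SMALLINT  NOT NULL,              -- 1 = question, 2 = answer, ...
            TargetId   INTEGER   NOT NULL,              -- id within the target's own table
            UserId     INTEGER   NOT NULL,
            Value      SMALLINT  NOT NULL,              -- +1 or -1
            CreatedAt  TIMESTAMP NOT NULL,
            UNIQUE (TargetType, TargetId, UserId)       -- one vote per user per object
        );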

    Read the article

  • Should I partition my main table with 2 million rows?

    - by domribaut
    Hi, I am a developer and need some DBA advice. We are starting to have performance problems with an MSSQL 2005 database. The visible effect of the incidents is mainly CPU hog on the server, but operations reported that it was also draining resources from the SAN (not always). The main source of the issues is almost certainly in some application, but I am wondering if we should partition some of the main tables anyway in order to relieve the I/O pressure. The database is about 60GB in one file. The main table (Order) has 2.1 million rows and 215 columns (none of them huge). We have an integer as PK, so it should be OK to define a partition function. Will we gain anything from partitioning? Will partitioned indexes buy us anything? Here are some more facts about the DB and the table:

        database_name   database_size   unallocated space
        My_base         57173.06 MB     79.74 MB

        reserved        data            index_size     unused
        29 444 808 KB   26 577 320 KB   2 845 232 KB   22 256 KB

        name    rows        reserved       data           index_size     unused
        Order   2 097 626   4 403 832 KB   2 756 064 KB   1 646 080 KB   1688 KB

    Thanks for any advice. Dom
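    For what it's worth, a hedged T-SQL sketch of what partitioning the Order table could look like on SQL Server 2005, assuming the integer PK is the clustered index. The boundary values and names are invented, and spreading partitions over separate filegroups (rather than ALL TO [PRIMARY]) is what would actually relieve I/O pressure:

        CREATE PARTITION FUNCTION pfOrderId (int)
        AS RANGE RIGHT FOR VALUES (500000, 1000000, 1500000, 2000000);

        CREATE PARTITION SCHEME psOrderId
        AS PARTITION pfOrderId ALL TO ([PRIMARY]);

        -- Rebuilding the clustered index on the scheme moves the table onto the partitions.
        CREATE UNIQUE CLUSTERED INDEX PK_Order
        ON dbo.[Order] (OrderId)
        WITH (DROP_EXISTING = ON)
        ON psOrderId (OrderId);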

    Read the article

  • Entity Framework and multi-tenancy database design

    - by Junto
    I am looking at multi-tenancy database schema design for a SaaS concept. It will be ASP.NET MVC + EF, but that isn't so important. Below you can see an example database schema (the tenant being the Company). The CompanyId is replicated throughout the schema, and each primary key is a compound of the natural key plus the tenant id. Plugging this schema into Entity Framework gives the following errors when I add the tables to the Entity Model file (Model1.edmx):

    - The relationship 'FK_Order_Customer' uses the set of foreign keys '{CustomerId, CompanyId}' that are partially contained in the set of primary keys '{OrderId, CompanyId}' of the table 'Order'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model.
    - The relationship 'FK_OrderLine_Customer' uses the set of foreign keys '{CustomerId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. (Same message.)
    - The relationship 'FK_OrderLine_Order' uses the set of foreign keys '{OrderId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. (Same message.)
    - The relationship 'FK_OrderLine_Product' uses the set of foreign keys '{ProductId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. (Same message.)

    (Each of the first three errors is reported twice.) The question is in two parts: Is my database design incorrect? Should I refrain from these compound primary keys? I'm questioning my sanity regarding the fundamental schema design (frazzled brain syndrome). Please feel free to suggest the 'idealized' schema. Alternatively, if the database design is correct, is EF unable to match the keys because it incorrectly perceives these foreign keys as potentially mis-configured 1:1 relationships? In which case, is this an EF bug, and how can I work around it?
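    For context, the errors arise because each foreign key (e.g. {CustomerId, CompanyId}) shares only some of its columns with the child table's primary key (e.g. {OrderId, CompanyId}), and EF requires the two sets to be fully contained or fully disjoint, exactly as the message states. A common workaround is single-column surrogate keys with CompanyId kept as a plain tenant column. A hedged sketch (Company and Product tables assumed; tenant filtering then happens in queries, e.g. WHERE CompanyId = @tenant in a repository layer, instead of being enforced by the keys):

        CREATE TABLE Customer (
            CustomerId INT NOT NULL PRIMARY KEY,
            CompanyId  INT NOT NULL REFERENCES Company (CompanyId)
        );

        CREATE TABLE [Order] (
            OrderId    INT NOT NULL PRIMARY KEY,
            CompanyId  INT NOT NULL REFERENCES Company (CompanyId),
            CustomerId INT NOT NULL REFERENCES Customer (CustomerId)
        );

        CREATE TABLE OrderLine (
            OrderLineId INT NOT NULL PRIMARY KEY,
            CompanyId   INT NOT NULL REFERENCES Company (CompanyId),
            OrderId     INT NOT NULL REFERENCES [Order] (OrderId),
            ProductId   INT NOT NULL REFERENCES Product (ProductId)
        );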

    Read the article

  • Pulling record from MySQL database only working for userid and not email

    - by user2908467
    This function works because I search by userid:

        private void showList_Click(object sender, EventArgs e) {
            int id = 0;
            for (int i = 0; i <= sqlClient.Count("UserList"); i++) {
                Dictionary<string, string> dik = sqlClient.Select("UserList", "userid = " + id);
                var lines = dik.Select(kv => kv.Key + ": " + kv.Value.ToString());
                userList.AppendText(string.Join(Environment.NewLine, lines));
                userList.AppendText(Environment.NewLine);
                userList.AppendText("--------------------------------------");
                id++;
            }
        }

    This function does not work because I search by email:

        private void login_Click(object sender, EventArgs e) {
            string email = lemail.Text;
            Dictionary<string, string> dik = sqlClient.Select("UserList", "firstname = " + email);
            var lines = dik.Select(kv => kv.Key + ": " + kv.Value.ToString());
            logged.AppendText(string.Join(Environment.NewLine, lines));
        }

    This is the error message I receive when I click on the login button:

        You have an error in your SQL syntax; check the manual that corresponds to your MySQL
        server version for the right syntax to use near '@aol.com' at line 1

    The email I searched for in the database was "[email protected]", without quotes. The error message leads me to believe the @ sign is causing a conflict, as I know it is a special character, but I am having a hard time figuring out what phrase to search for to get help. Also, here is the function that is being called:

        public Dictionary<string, string> Select(string table, string WHERE) {
            //This method selects from the database; it retrieves data from it.
            //You must make a dictionary to use this, since it saves both the column
            //and the value, i.e. "age" and "33", so you can easily search for values.
            //Example: SELECT * FROM names WHERE name='John Smith'
            //This example would retrieve all data about the entry with the name "John Smith".
            //Code = Dictionary<string, string> myDictionary = Select("names", "name='John Smith'");
            //This code creates a dictionary and fills it with info from the database.
            string query = "SELECT * FROM " + table + " WHERE " + WHERE + "";
            Dictionary<string, string> selectResult = new Dictionary<string, string>();
            if (this.Open()) {
                MySqlCommand cmd = new MySqlCommand(query, conn);
                MySqlDataReader dataReader = cmd.ExecuteReader();
                try {
                    while (dataReader.Read()) {
                        for (int i = 0; i < dataReader.FieldCount; i++) {
                            selectResult.Add(dataReader.GetName(i).ToString(), dataReader.GetValue(i).ToString());
                        }
                    }
                    dataReader.Close();
                } catch { }
                this.Close();
                return selectResult;
            } else {
                return selectResult;
            }
        }

    My database table is called "UserList". The fields, in order, are: userid, email, password, lastname, firstname. Any help would be greatly appreciated. This site is amazing!
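    For reference, the failing call concatenates the email into the WHERE clause without quotes, so MySQL receives an unquoted string literal and chokes at the @ sign, exactly as the error says. The contrast below is a sketch (the lasting fix is a parameterized query rather than string concatenation; note also that the second function filters on firstname where email appears intended):

        -- What the code effectively sends (invalid: the string literal is unquoted):
        SELECT * FROM UserList WHERE firstname = [email protected];

        -- A syntactically valid literal, against the intended column:
        SELECT * FROM UserList WHERE email = '[email protected]';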

    Read the article

  • What are some arguments AGAINST using Entity Framework?

    - by Rachel
    The application I am currently building has been using stored procedures and hand-crafted class models to represent database objects. Some people have suggested using Entity Framework, and I am considering switching to it since I am not that far into the project. My problem is, I feel the people arguing for EF are only telling me the good side of things, not the bad side :)

    My main concerns are:

    - We want client-side validation using DataAnnotations, and it sounds like I have to create the client-side models anyway, so I am not sure EF would save that much coding time.
    - We would like to keep the classes as small as possible when going over the network, and I have read that using EF often includes extra data that is not needed.
    - We have a complex database layer which crosses multiple databases, and I am not sure EF can handle this. We have one Common database with things like Users, StatusCodes, Types, etc. and multiple instances of our main databases for different instances of the application. SELECT queries can and will query across all instances of the databases; however, users can only modify objects that are in the database they are currently working on, and they can switch databases without reloading the application.
    - Object models are very complex, and there are often quite a few joins involved.

    Arguments for EF are:

    - Concurrency: I wouldn't have to code in checks to see if the record was updated before each save.
    - Code generation: EF can generate partial class models and POCOs for me; however, I am not positive this would really save me much time, since I think we would still need to create the client-side models for validation and some custom parsing methods.
    - Speed of development, since we wouldn't need to create the CRUD stored procedures for every database object.

    Our current architecture consists of a WCF service which handles database calls via parameterized stored procedures, POCO objects that go to/from the WCF service and the WPF client, and the WPF client itself, which transforms POCOs into class models for the purpose of validation and data binding.

    Read the article

  • Using JSON as database with EF, how can I link EF and the JSON file during DbContext initialization?

    - by blacai
    For a personal testing project I am considering creating a SPA with the following technologies: ASP.NET MVC + EF + WebAPI + AngularJS. The project will use a small amount of data, so I was thinking I could use just a .json file as storage. But I am not sure how to proceed with the link between EF and the JSON file in the initialization of the DbContext. I found a related Stack Overflow question: http://stackoverflow.com/questions/13899342/can-we-use-json-as-a-database. I know the basics of editing files and storing data in them. What I tried is to get the data from the JSON file in the initializer method and create the objects one by one. This is more a doubt about how this works: if I save/update an object in the DbContext, do I need to go through all the elements and add/update it manually? Is it better to rewrite the complete file? According to http://stackoverflow.com/questions/7895335/append-data-to-a-json-file-with-php it is not good practice to use JSON/XML for data which will be manipulated. Does anyone have experience with anything similar? Is this a really bad idea, and should I use another kind of data storage?

    Read the article

  • In an online questionnaire, what is the best way to design a database for keeping track of all of a user's attempts?

    - by user1990525
    We have a web app where users can take online exams. An exam admin creates a questionnaire, and a questionnaire can have many questions. Each question is a multiple choice question (MCQ). Let's say an admin creates a questionnaire with 10 questions, and users attempt those questions. Now, unlike real exams, users can attempt a single questionnaire multiple times, and we have to keep track of all their attempts, e.g.:

        User_id  Questionnaire_id  question_id  answer  attempt_date  attempt_no
        1        1                 1            a       1 June 2013   1
        1        1                 2            b       1 June 2013   1
        1        1                 1            c       2 June 2013   2
        1        1                 2            d       2 June 2013   2

    It can also happen that, after a user has attempted the same questionnaire twice, the admin deletes a question from it. The user's attempt history should still reference that question, so that the user can see it in his attempt history in spite of the admin having deleted it. If the user now attempts the changed questionnaire, he should see only 1 question:

        User_id  Questionnaire_id  question_id  answer  attempt_date  attempt_no
        1        1                 1            a       3 June 2013   3

    Also, if the admin modifies some part of a question afterwards, the user's attempt history should show the question as it was before modification, while any new attempt should show the modified question. How do we manage this at the database level? My first gut feeling was: for deletes, do not physically delete, just mark the question inactive so that history can still track the user's attempts; for modifications, create versions of each question, with each new attempt referring to the latest version and history keeping a reference to the version that was current at attempt time.
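    A hedged sketch of that versioning idea in generic SQL (names invented): questions are soft-deleted via a flag, every edit creates a new immutable version row, and each recorded answer points at the exact version the user saw:

        CREATE TABLE question (
            question_id      INTEGER NOT NULL PRIMARY KEY,
            questionnaire_id INTEGER NOT NULL,
            is_active        BOOLEAN NOT NULL DEFAULT TRUE  -- admin "delete" clears this
        );

        CREATE TABLE question_version (
            question_version_id INTEGER   NOT NULL PRIMARY KEY,
            question_id         INTEGER   NOT NULL REFERENCES question (question_id),
            body_text           TEXT      NOT NULL,          -- wording at this version
            created_at          TIMESTAMP NOT NULL
        );

        CREATE TABLE attempt_answer (
            user_id             INTEGER NOT NULL,
            questionnaire_id    INTEGER NOT NULL,
            question_version_id INTEGER NOT NULL REFERENCES question_version (question_version_id),
            answer              CHAR(1) NOT NULL,
            attempt_no          INTEGER NOT NULL,
            attempt_date        DATE    NOT NULL,
            PRIMARY KEY (user_id, question_version_id, attempt_no)
        );

    New attempts join to the latest version of each active question; history simply reads attempt_answer as written.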

    Read the article

  • Would this data requirement suit a Document-Oriented database?

    - by codecowboy
    I have a requirement to allow users to fill in journal/diary entries per day. I want to provide a handful of known journal templates with x columns to fill in. An example might be a thought diary: a user has to record a thought in one column, describe the situation, rate how they felt, etc. The other requirement is that a user should be able to create their own diary templates. They might need a 10-column diary entry per day, and might need to rate some aspect out of 50 instead of 10. In an RDBMS, I can see this getting quite complicated. I could have individual tables for my known templates, as those fields are fixed. But for custom diary templates I imagine I would need a table storing custom_field_types (the diary columns), a table storing entries referencing their field types (custom_entries), and then a third custom_diary table which would match custom_entries to diaries. Leaving performance/scaling aside, would it be any simpler, or make more sense, to use a document-oriented database like MongoDB to store this data? This is for a web application which might later need an API for mobile devices.
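    For comparison, a hedged sketch of the relational shape described above (generic SQL, invented names); this three-table dance per entry is exactly what a document store would collapse into one document per day:

        CREATE TABLE diary_template (
            template_id   INTEGER NOT NULL PRIMARY KEY,
            owner_user_id INTEGER,                -- NULL for the built-in templates
            name          VARCHAR(100) NOT NULL
        );

        CREATE TABLE template_field (             -- the template's columns
            field_id    INTEGER NOT NULL PRIMARY KEY,
            template_id INTEGER NOT NULL REFERENCES diary_template (template_id),
            label       VARCHAR(100) NOT NULL,
            position    INTEGER NOT NULL,
            max_rating  INTEGER                   -- e.g. 10 or 50; NULL for free text
        );

        CREATE TABLE entry_value (                -- one value per field per day's entry
            entry_id INTEGER NOT NULL,
            field_id INTEGER NOT NULL REFERENCES template_field (field_id),
            value    TEXT,
            PRIMARY KEY (entry_id, field_id)
        );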

    Read the article

  • Database development/admin - What exactly should I be trying to learn? [closed]

    - by Sauron
    I've been a bit wary about approaching learning databases. I've dabbled in them before, and "DATA" in itself appeals to me a lot: maintaining, searching, moving; everything about it in the abstract sense I love. (This isn't a career question; this is a learning question.) But as RDBMSs become easier to maintain, with tools that "anyone" can use to manage data, I feared for database admin jobs (yet I see jobs for SQL everywhere!). I know "NoSQL" was a big deal, but it kind of has its niche. That leaves me unsure of what, exactly, to study. What tools should I have in my tool belt? I'm sure RDBMSs will be around, both in active use and in maintaining legacy code. But what else? Obviously data will always be around, but is this a secure field or a dying one? And what should I be concentrating on?

    Read the article

  • SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database

    - by Pinal Dave
    In yesterday's blog post we saw that it is extremely easy to install the NuoDB database on your local machine. Now that the application is properly set up, let us explore NuoDB a bit more and get familiar with how it works and which areas of NuoDB are important to learn. As we have already installed NuoDB, we will quickly start with two of its important areas: 1) Admin and 2) Explorer. In this blog post I will explore how the Admin section of the NuoDB Console works; in the next blog post we will learn how the Explorer section works.

    Let us go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/

    On the first screen you will see a big Start QuickStart button. Click on it, and the next screen shows very important information about Domain and Database Settings. It is a common habit not to read what is written on the screen and to keep clicking Continue; while we are familiar with most wizards, we can easily miss a very important message. Please note the Domain Settings and Database Settings from this screen before clicking Create Database:

        Domain Settings
        User: quickstart
        Password: quickstart

        Database Settings
        User: dba
        Password: goalie
        Database: test
        Schema: HOCKEY

    Once you click the Create Database button, it immediately starts creating the sample database. First it starts a Storage Manager, and right after that a Transaction Engine. Once the engine is up, it creates a schema and sample data, and on success it shows a confirmation screen.

    Now we can explore the NuoDB Admin or NuoDB Explorer. If you click on Admin, a login screen appears first. Enter "domain" for the username and "bird" for the password; alternatively, you can enter "quickstart" twice, for username and password, which works as well.

    Once you enter the Admin section, on the left side you can see information about NuoDB and the Admin Console, and on the right side the domain overview area. From this administrative section you can do any of the following tasks:

    - Create a view of the entire domain
    - Add and remove databases
    - Start and stop NuoDB Transaction Engines and Storage Managers
    - Monitor transactions across all the NuoDB databases

    On the right side of the Admin section we can see various information about a particular NuoDB domain. You can quickly view alerts, find out how many host machines are provisioned for the domain, and see the number of databases and processes running in it. If you click on the "1 host" link, you will be able to see the processes, CPU usage, and other information.

    In the Processes section you can see two different types of processes: the first (with the floppy-drive icon) represents a running Storage Manager process, and the second a running Transaction Engine process. You can click on the links for the Storage Manager and Transaction Engine to see further statistical details, right down to the last byte of the data. Various charts are available for analysis as well; the product is quite mature, and the user can add different monitoring charts to the Admin section. Additionally, the Admin section is where you can create and manage new databases.

    I hope today's tutorial gives you enough confidence to try out NuoDB and explore its various administrative activities. I am personally impressed with the dashboard and its various counters. For more information about how the NuoDB architecture works and what a Storage Manager or Transaction Engine does, check out this short video with NuoDB CTO Seth Proctor. In the next blog post, we will try out the Explorer section of NuoDB, which allows us to run SQL queries and write SQL code. Meanwhile, I strongly suggest you download and install NuoDB and get yourself familiar with the product.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • SQL SERVER – master Database Log File Grew Too Big

    - by pinaldave
    A couple of days ago I received the following email, which I found very interesting and worth sharing with all of you. Note: please read the whole email before offering your suggestions.

    "Hi Pinal,

    If you can share these details on your blog, it will help many. We understand the value of the master database and we take regular backups of it (every day at midnight). Yesterday we noticed that our master database log file had grown very large. This is the very first time we have encountered such an issue. The master database is in simple recovery mode, so we assumed it would never grow big; however, we now have a big log file. We ran the following command:

        USE [master]
        GO
        DBCC SHRINKFILE (N'mastlog', 0, TRUNCATEONLY)
        GO

    We know this command will break the chain of LSNs, but as per our understanding it should not matter, as we are in the simple recovery model. After running it, the log file became very small. Just to be cautious, we took a full backup of the master database right away. We totally understand that this is not normal practice, so if you are going to tell us the same, we are aware of it. However, here is the question for you: what operation in the master database could have caused our log file to grow so large?

    Thanks, [name and company name removed as per request]"

    Here was my response to them:

    "Hi [name removed],

    It is great that you are aware of all the right steps and methods. Taking a full backup when you are not sure is always a good practice. Regarding your question of what could have caused your master database log to grow so large, let me try to guess what happened. Do you have any user tables in the master database? If yes, this is not recommended and NOT a good practice. If you have user tables in the master database and you run any long operation on them (lots of inserts, updates, deletes, or index rebuilds), it can cause this situation. You have made me curious about your scenario; do revert back.

    Kind Regards, Pinal"

    Within a few minutes I received this reply:

    "That was it, Pinal. We had one of our maintenance-task log tables created in the master database, and it had many long transactions during the night. We moved it to a newly created database named 'maintenance', and we will keep you updated."

    I was very glad to receive that email. I do not suggest that any user table be created in the master database; it should be left free of user objects. Now here is the question for you: can you think of any other reason for master log file growth?

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
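    For anyone chasing the same symptom, two standard read-only checks help identify what is growing a log before shrinking anything (a diagnostic sketch, not part of the original exchange):

        DBCC SQLPERF(LOGSPACE);            -- log size and percent used, per database

        SELECT name, log_reuse_wait_desc   -- what, if anything, is blocking log truncation
        FROM sys.databases
        WHERE name = 'master';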

    Read the article

  • Migrating SQL Server Compact Edition (SQL CE) database to SQL Server using Web Matrix

    - by Harish Ranganathan
    One of the things keeping us busy is the Web Camps we are delivering across 5 cities. If you are a reader of this blog and also attended one of these web camps, there is a good chance you have seen me, since I have been at all of them so far. The topics we cover include Visual Studio 2010 SP1, SQL CE, ASP.NET MVC and HTML5. Whenever I talk about SQL CE, the immediate response is amazement that Microsoft has shipped a FREE compact edition database: an embedded database that can be x-copy deployed.

    You might think, well, didn't Microsoft ship SQL Express, which is also FREE? The difference is that SQL Express runs as a service on the machine (if you open SQL Configuration Manager, you can see SQL Express running as a service alongside your SQL Server engine, if you have installed it). This means that even if you are willing to use SQL Express when you deploy your application, it needs to be installed on the production machine (hosting provider) and run as a service there, and many hosters don't allow such services to run on their space.

    SQL CE, by contrast, ships as an x-copy deployable database with just a few DLLs required to run it, and they don't even need to be installed in the GAC on the production machine. In fact, if you have Visual Studio 2010 SP1 installed, you can use the "Add Deployable Dependencies" option in Project Properties, and it will detect that SQL CE is something you would probably want to add as a deployable dependency for your project. With that, it bundles the required DLLs into the "_bin_deployableAssemblies" folder, so your project can be x-copy deployed and just works.

    However, SQL CE has a limit of 4GB of storage space. Real-world applications often require more than 4GB of data storage, and it often turns out that people would like to use SQL CE for the development/ramp-up stages but migrate to full-fledged SQL Server after a while. So it's only natural that the question arises: "How do I move my SQL CE database to SQL Server?" And honestly, it doesn't come across as straightforward. Since I got this question in almost all the places where we talked about SQL CE, I raised it with Ambrish Mishra (PM in the SQL CE team, Hyderabad), and he was kind enough to demonstrate how it can be accomplished using WebMatrix.

    Open WebMatrix (it can be installed for free from www.microsoft.com/web) and click on "Site from Template". Click on the "Bakery" template (since by default it uses a SQL CE database and has all the required sample data) and click "Ok". In the project, navigate to the Database tab, and you will find that the Bakery site uses a SQL CE database, "bakery.sdf". Select "bakery.sdf" and you will see the "Migrate" button on the top right.

    Once you click the "Migrate" button, a popup wizard opens, configured for SQL Express by default. You can edit it to point to your local SQL Server instance or a remote server. Upon filling in the server name, username and password, when you click "Ok", a couple of things happen:

    1. The database is migrated to SQL Server (local or remote, subject to permissions on the remote server). You can open SQL Server Management Studio and connect to the server to verify that the "bakery" database exists under the "Databases" node.

    2. In WebMatrix, when you navigate to the "Files" tab and open the web.config file, the connection string now points to the SQL Server instance (yes, the Migrate button was smart enough to make this change too).

    And there it is: your SQL Server Compact Edition database, now migrated to SQL Server! In a future post, I will explain the steps involved when using Visual Studio. Cheers !!!

    Read the article

  • Blueprints for Oracle NoSQL Database

    - by dan.mcclary
    I think that some of the most interesting analytic problems are graph problems. I'm always interested in new ways to store and access graphs. As such, I really like the work being done by Tinkerpop to create Open Source Software to make property graphs more accessible over a wide variety of datastores. Since key-value stores like Oracle NoSQL Database are well-suited to storing property graphs, I decided to extend the Blueprints API to work with it. Below I'll discuss some of the implementation details, but you can check out the finished product here: http://github.com/dwmclary/blueprints-oracle-nosqldb

    What's in a Property Graph?

    In the most general sense, a graph is just a collection of vertices and edges. Vertices and edges can have properties: weights, names, or any number of other traits. In an undirected graph, edges connect vertices without direction. A directed graph specifies that all edges have a head and a tail --- a direction. A multi-graph allows multiple edges to connect two vertices. A "property graph" encompasses all of these traits.

    Key-Value Stores for Property Graphs

    Key-value stores like Oracle NoSQL Database tend to be ideal for implementing property graphs. First, if any vertex or edge can have any number of traits, we can treat it as a hash map. For example:

        Vertex["name"] = "Mary"
        Vertex["age"] = 28
        Vertex["ID"] = 12345

    and so on. This is a natural key-value relationship: the key "name" maps to the value "Mary." Moreover, if we maintain two hash maps, one for vertex objects and one for edge objects, we've essentially captured the graph. As such, any scalable key-value store is fertile ground for planting graphs.

    Oracle NoSQL Database as a Scalable Graph Database

    While Oracle NoSQL Database offers useful features like tunable consistency, what lends it to storing property graphs is the storage guarantees around its key structure. Keys in Oracle NoSQL Database are divided into two parts: a major key and a minor key. The storage guarantee is simple: major keys will be distributed across storage nodes, which could encompass a large number of servers. However, all minor keys which are children of a given major key are guaranteed to be stored on the same storage node. For example, the vertices

        /Personnel/Vertex/1 and /Personnel/Vertex/2

    may be stored on different servers, but

        /Personnel/Vertex/1-/name and /Personnel/Vertex/1-/age

    will always be on the same server. This means that we can structure our graph database such that retrieving all the properties for a vertex or edge requires I/O from only a single storage node. Moreover, Oracle NoSQL Database provides a storeIterator which allows us to store a huge number of vertices and edges in a scalable fashion. By storing the vertices and edges as major keys, we guarantee that they are distributed evenly across all storage nodes. At the same time we can use a partial major key to iterate over all the vertices or edges (e.g. we search over /Personnel/Vertex to iterate over all vertices).

    Fork It!

    The Blueprints API and Oracle NoSQL Database present a great way to get started using a scalable key-value database to store and access graph data. However, a graph store isn't useful without a good graph to work on. I encourage you to fork or pull the repository, store some data, and try using Gremlin or any other language to explore.

    Read the article

  • What is the process of planning software called? Or what is the job title of someone who does software planning?

    - by Ryan
    For example, let's say a non-technical person comes to me with their rough initial specification. And I sit down with them over a couple weeks and help them hone, formalize and better plan the application that they want built. What is this called? Information architecture, software architecture, specification writing, software planning, requirements analysis? What is the best, most recognizable term for this?

    Read the article

  • Populate Android Database From CSV file?

    - by MoMo
    Is it possible to take a CSV file stored in the res/raw resource directory and use it to populate a table in the sqlite3 database? My thought was that if there were a way to do a bulk import of the entire file into the table, that would be cleaner and faster than iterating over each line in the file and executing individual insert statements. I've found that there is an sqlite import command that allows this: http://stackoverflow.com/questions/1045910/how-can-i-import-load-a-sql-or-csv-file-into-sqlite ...but I'm having trouble applying those statements in my Android application. My first thought was to try something like the following, but no luck:

        db.execSQL("CREATE TABLE " + TABLE_NAME + "(id INTEGER PRIMARY KEY, name TEXT)");
        db.execSQL(".mode csv");
        db.execSQL(".import res/raw/MyFile.csv " + TABLE_NAME);

    Is this possible? Should I be trying a different approach to populate my database? Thanks for your answers!
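    For context, .mode and .import are dot-commands interpreted by the sqlite3 command-line shell, not SQL, which is why execSQL rejects them. The usual substitute on Android is to parse the file and run plain INSERTs inside one transaction (SQLiteDatabase.beginTransaction() / setTransactionSuccessful() / endTransaction()), which in SQL terms amounts to the batch below; the table and values are illustrative:

        BEGIN TRANSACTION;
        INSERT INTO MyTable (id, name) VALUES (1, 'first row from the CSV');
        INSERT INTO MyTable (id, name) VALUES (2, 'second row from the CSV');
        -- ... one INSERT per parsed CSV line ...
        COMMIT;

    Wrapping the loop in a single transaction avoids a disk sync per row, which is where the naive line-by-line approach loses most of its speed.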

    Read the article

  • Erlang: 2 databases on 2 webservers?

    - by xRobot
    I have created a blogging system with PHP + PostgreSQL. Now I want to add a web chat (in real time, for millions of simultaneous users) where every message is saved to a database. I am thinking of using Erlang + Mnesia on a different webserver for this. The message table will look like this:

        message_id, user_id, message, date

    user_id should be related to the users table in the PostgreSQL database on the other webserver. How can I do that without losing performance? If you have any other creative solutions, please tell me ;).

    Read the article

  • Delphi connection to OpenEdge Progress-4GL Database

    - by Cesar Marrero
    Folks: Has anyone had success connecting to a Progress-4GL database with Delphi? I've been unable to establish any connection with the ODBC driver provided by the vendor (Progress OpenEdge 10.1C Driver). I've entered (what I believe are) the right parameters, but I keep getting an error whenever I test the connection: "[DataDirect][ODBC Progress OpenEdge Wire Protocol driver] Socket closed." Background: I've been tasked with re-designing a 13-year-old application, but the original programmer did not provide any supporting documents, passwords, configuration setup, etc. (I'm on my own!) To make things worse, online help and useful documentation about Progress are scarce (I had never heard of this database until now). I want to examine the existing data, and maybe create an ERD to familiarize myself with the schema, but I can't even access the data outside of the OpenEdge code. Any help is appreciated!

    Read the article

  • Store and retrieve images from file system instead of database

    - by Karsten
    Hi, I need a place to store images. My first thought was to use the database, but many seem to recommend using the filesystem. This seems to fit my case, but how do I implement it? The filenames need to be unique; how do I ensure that? Should I use a GUID? How do I retrieve the files: should I go directly to the file using the filename stored in the database, or make an .aspx page and pass either the filename or the primary key as a querystring and then read the file? What about client-side caching: is that enabled when using a page like image.aspx?id=123? And how do I delete the files when the associated record is deleted? I guess there are many other things that I still haven't thought about. Links, samples and guidelines are very welcome!
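    A hedged sketch of the usual metadata table for the filesystem approach (T-SQL, invented names): the GUID primary key doubles as the unique file name on disk, and deleting the row and deleting the file happen in the same code path (or via an orphan-cleanup job):

        CREATE TABLE Images (
            ImageId     UNIQUEIDENTIFIER NOT NULL PRIMARY KEY DEFAULT NEWID(),
            OwnerId     INT         NOT NULL,  -- the record the image belongs to
            ContentType VARCHAR(50) NOT NULL,  -- e.g. 'image/jpeg'
            CreatedAt   DATETIME    NOT NULL
        );
        -- The file is stored as e.g. {ImageId}.jpg; an image.aspx?id=... handler
        -- can look up the row, stream the file, and set HTTP cache headers so
        -- the browser caches the response.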

    Read the article

  • Database entries whose existence depends on time / boolean field changed automatically

    - by lisak
    Hey, I have this situation: an auction system listing orders that are "active" (their deadline hasn't passed yet). There are a lot of orders, so it is better to have an "active" field instead of listing them with time-based queries. I'm not a database expert, just a user. What is the best way to implement this scenario? Do I have to manually check the "deadline" field and change the "active" status every once in a while? Is MySQL able to change the field automatically? How demanding are queries of the type "select orders where the deadline has passed"? Do I need to use TIMESTAMP (a long count of milliseconds since the UTC epoch) or DATETIME for the queries to be more efficient? Finally, I have to move old order entries to a different backup table.
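    Two hedged MySQL sketches of the options discussed above (names invented). With an index, the time-based query may be cheap enough that no flag is needed; if the flag stays, the event scheduler (MySQL 5.1+) can maintain it:

        -- Option 1: no flag; an index makes the deadline filter cheap.
        CREATE INDEX idx_orders_deadline ON orders (deadline);
        SELECT * FROM orders WHERE deadline > NOW();   -- the "active" orders

        -- Option 2: keep the flag and let MySQL flip it periodically.
        SET GLOBAL event_scheduler = ON;               -- requires SUPER privilege
        CREATE EVENT expire_orders
        ON SCHEDULE EVERY 1 MINUTE
        DO
          UPDATE orders SET active = 0
          WHERE active = 1 AND deadline < NOW();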

    Read the article

  • Database design - alternatives for Entity Attribute Value (EAV)

    - by Bob
    Hi, see http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters for a similar topic. My question: I want to design a database that will be used for a production facility making different types of products, where each product has its own (number of) parameters. Because I want the serial numbers to be in one table for overview purposes, I have a problem with these differing parameters. One solution could be EAV, but it has its downsides, certainly because we have roughly 5 products, each with roughly 20,000 serial numbers (records); it looks a bit like overkill to me. I just don't know how one could design a database so that you have an attribute in a master table that says: 'hey, you can find the details of this record in THAT detail table' (in a way that you could easily query the results). Currently I am using Visual Basic & Access 2007, but I'm moving to Visual Basic & MySQL. Thanks for your help. Bob
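    A hedged MySQL sketch of that master/detail idea (sometimes called class table inheritance); the product types and parameters are invented. At roughly 5 products x 20,000 serials, one detail table per product type stays perfectly manageable:

        CREATE TABLE serials (                    -- one row per serial number
            serial_id    INT PRIMARY KEY,
            product_type VARCHAR(20) NOT NULL,    -- says WHICH detail table to join
            serial_no    VARCHAR(50) NOT NULL,
            produced_at  DATETIME    NOT NULL
        );

        CREATE TABLE widget_details (             -- widget-specific parameters
            serial_id INT PRIMARY KEY,
            voltage   DECIMAL(6,2),
            color     VARCHAR(20),
            FOREIGN KEY (serial_id) REFERENCES serials (serial_id)
        );

        CREATE TABLE gadget_details (             -- gadget-specific parameters
            serial_id INT PRIMARY KEY,
            firmware  VARCHAR(20),
            weight_g  INT,
            FOREIGN KEY (serial_id) REFERENCES serials (serial_id)
        );

        -- The overview stays one query on the master table; parameters come
        -- from a type-specific join:
        SELECT s.*, w.voltage, w.color
        FROM serials s
        JOIN widget_details w ON w.serial_id = s.serial_id
        WHERE s.product_type = 'widget';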

    Read the article
