Search Results

Search found 33139 results on 1326 pages for 'embedded database'.

Page 177/1326 | < Previous Page | 173 174 175 176 177 178 179 180 181 182 183 184  | Next Page >

  • HFT strategy coding on hardware

    - by bsobaid
    Hardware acceleration and embedded programming have mostly been used so far to parse data feeds and/or to route orders to the exchange. Have there been attempts to write simpler HFT strategies, such as equity market-making, in hardware? Have they been successful? Which companies are doing this, and what kind of programming model is used?

    Read the article

  • How to determine the event of the linux devices?

    - by sasayins
    Hi, I am learning some embedded programming. I am using Linux as my platform, and I want to create a daemon program that will check whether a particular device (magstripe, keypad, etc.) is active. For example, while my daemon program is running in the background and I press a key, the daemon app should do something. How should I implement this app? And how can I check for events from the devices? Thanks.
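
    On Linux, one common way to do this is to read events from the kernel's input subsystem via the /dev/input/eventN character devices. Below is a minimal sketch, assuming the keypad shows up as /dev/input/event0 (a hypothetical path - the real node can be found in /proc/bus/input/devices or via udev); a daemonised version would fork, detach from the terminal, and poll() one descriptor per device instead of blocking on a single read:

        /* keywatch.c - sketch: block on a Linux input device and react to key presses */
        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <linux/input.h>

        int main(void)
        {
            /* hypothetical device node; discover the real one via /proc/bus/input/devices */
            int fd = open("/dev/input/event0", O_RDONLY);
            if (fd < 0) {
                perror("open");
                return 1;
            }

            struct input_event ev;
            for (;;) {
                ssize_t n = read(fd, &ev, sizeof ev);   /* blocks until the kernel has an event */
                if (n != (ssize_t)sizeof ev)
                    break;
                if (ev.type == EV_KEY && ev.value == 1) /* 1 = press, 0 = release, 2 = autorepeat */
                    printf("key code %d pressed\n", ev.code);
            }

            close(fd);
            return 0;
        }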

    Read the article

  • Concatenative language interpreter in Java

    - by Vojislav Stojkovic
    I'm interested in finding a concatenative language interpreter in Java. Ideally, it should satisfy the following conditions: It has an interpreter, not (only) a bytecode compiler for JVM. The language itself has decent documentation, not only a few examples and a "I'll document the rest someday" notice. The project is not completely abandoned. In short, I'm looking for a reasonably "alive" concatenative language that can be embedded into Java easily.

    Read the article

  • Is 0x9B (155 decimal) a special control character? Why is it missing from ASCII tables?

    - by Chris
    Hi, I'm working on an embedded system, and I'm having dramas getting it to send a certain chunk of data across the serial port. I narrowed it down and found that if a 0x9B byte is present in the message, it corrupts the message. So I then looked up 0x9B (155) on http://www.asciitable.com/, and it's missing! Isn't that a bizarre coincidence? Any ideas - is this a special character or something?
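
    One way to narrow this down is to hex-dump the exact bytes handed to the serial driver, so you can see whether the 0x9B survives that far or is being swallowed or translated somewhere in between. A minimal sketch (the uart_send() call is a hypothetical placeholder for whatever your platform actually uses):

        /* dump the outgoing buffer in hex before it goes to the UART */
        #include <stdio.h>
        #include <stddef.h>
        #include <stdint.h>

        static void dump_tx(const uint8_t *buf, size_t len)
        {
            for (size_t i = 0; i < len; i++) {
                printf("%02X ", buf[i]);          /* bytes >= 0x80 show up unambiguously here */
                if ((i + 1) % 16 == 0)
                    putchar('\n');
            }
            putchar('\n');
        }

        int main(void)
        {
            const uint8_t msg[] = { 0x01, 0x42, 0x9B, 0x7F };

            dump_tx(msg, sizeof msg);
            /* uart_send(msg, sizeof msg);  -- hypothetical platform call */
            return 0;
        }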

    Read the article

  • In .NET, how do I separate the XML from a compiled resource file?

    - by Brandon
    We have a customer who would like to modify the application user messages that we store in .resx files. I'm thinking this can't be done, since the XML behind the .resx is embedded in a compiled DLL. Am I correct? Or is there a way to keep the XML outside of the compiled DLL? I realize this can easily be done by other means, but I like the ease of the .resx file: the classes/properties are created for you.

    Read the article

  • Where do I put all these function-like #defines, in C?

    - by Tristan
    I'm working with an embedded system, and I'm ending up with a ton of HW-interfacing #define functions. I want to put all of these into a separate file (for OOP-ness), but I don't know the best way to #include that. Do I just put them all into a .c file, then include that? Seems silly to put these in a .h file.
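
    A common pattern is to keep the hardware-interfacing macros (or, often better, static inline functions) in a header of their own, since function-like #defines have to be visible at the point of use anyway; only things with real code or state need a matching .c file. A minimal sketch, assuming a memory-mapped GPIO output register at a made-up address:

        /* gpio_hw.h - sketch: hardware-interface helpers living in a header */
        #ifndef GPIO_HW_H
        #define GPIO_HW_H

        #include <stdint.h>

        #define GPIO_BASE   0x40020000u                               /* hypothetical base address */
        #define GPIO_ODR    (*(volatile uint32_t *)(GPIO_BASE + 0x14u))

        #define LED_PIN     5u
        #define LED_ON()    (GPIO_ODR |=  (1u << LED_PIN))
        #define LED_OFF()   (GPIO_ODR &= ~(1u << LED_PIN))

        /* static inline gives the same zero-cost result as a macro once the
           compiler optimises, but with type checking and easier debugging */
        static inline void led_set(int on)
        {
            if (on)
                LED_ON();
            else
                LED_OFF();
        }

        #endif /* GPIO_HW_H */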

    Read the article

  • Embedding Videos onto your website

    - by Bridget
    I have a website onto which I am embedding a video. I am just wondering if having an embedded video on the page would make the video load and buffer, and run less smoothly, than if the video were actually placed on the page? Thanks

    Read the article

  • Algorithm to rotate an image 90 degrees in place? (No extra memory)

    - by user9876
    In an embedded C app, I have a large image that I'd like to rotate by 90 degrees. Currently I use the well-known simple algorithm to do this. However, that algorithm requires me to make another copy of the image. I'd like to avoid allocating memory for a copy; I'd rather rotate it in place. Since the image isn't square, this is tricky. Does anyone know of a suitable algorithm?
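
    A 90-degree rotation of a non-square image is just a permutation of the pixel buffer, so one option is cycle-following: walk each cycle of the permutation, shifting pixels around it, and never allocate a second buffer. The sketch below (written for 8-bit pixels; swap the element type as needed) trades time for memory, because each cycle is re-walked once to make sure it is only processed from its smallest index:

        /* Rotate a rows x cols image 90 degrees clockwise in place.
           Afterwards the buffer holds a cols x rows image. */
        #include <stddef.h>
        #include <stdint.h>

        /* destination index for source index i: (r, c) -> (c, rows - 1 - r) */
        static size_t rot_dest(size_t i, size_t rows, size_t cols)
        {
            size_t r = i / cols, c = i % cols;
            return c * rows + (rows - 1 - r);
        }

        void rotate90_cw_inplace(uint8_t *buf, size_t rows, size_t cols)
        {
            size_t n = rows * cols;

            for (size_t start = 0; start < n; start++) {
                /* handle each cycle exactly once: only from its smallest index */
                size_t i = rot_dest(start, rows, cols);
                while (i > start)
                    i = rot_dest(i, rows, cols);
                if (i < start)
                    continue;

                /* shift the pixel values one step around the cycle */
                uint8_t prev = buf[start];
                i = rot_dest(start, rows, cols);
                while (i != start) {
                    uint8_t tmp = buf[i];
                    buf[i] = prev;
                    prev = tmp;
                    i = rot_dest(i, rows, cols);
                }
                buf[start] = prev;
            }
        }

    If a few spare bits per pixel or a small bitmap are available, the minimum-index re-walk can be replaced by a visited flag, bringing the cost back to roughly linear time.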

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process: putting your database in to a source control system, and running a continuous integration process each time changes are checked in. However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above: putting some actual integration tests in to the CI process (otherwise, they don’t really do much, do they!?), deploying the database changes with a managed, automated approach, and monitoring what you’ve just put live, to make sure you haven’t broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system.

    In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I’ll be mostly using Red Gate products only (other than tSQLt). I would do this firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!

    Background on Continuous Delivery

    For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage; and Versioning Databases – Branching and Merging, by Scott Allen. In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production. I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
    First of all I need to add some data in to my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right-click the table, select Design, then right-click on the first "id" row. Then click on "Set Primary Key". NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right-click on the table (dbo.Email) and select the following option: In the next screen, link the data in the Email table by selecting it from the list and clicking "save and close". We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click the database and select "Commit Changes to Source Control.."): the display gives a warning about possibly needing a migration script for the "Add Primary Key" step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.

    Creating and Running the Test

    As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t, of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really those of the Fab Four) – but just a very basic check of the format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select "SQL Test" from the Tools menu, then click on + New Test. In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example. Click "Create Test". After closing a couple of subsequent dialogues, you’ll see a dummy script for the test that needs filling in. We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below.
    This needs to be copied and pasted in to the query window, to replace the default given by tSQLt:

        -- Basic email check test
        ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
        AS
        BEGIN
            SET NOCOUNT ON

            Declare @Output VarChar(max)
            Set @Output = ''

            SELECT @Output = @Output + Email + Char(13) + Char(10)
            FROM dbo.Email
            WHERE email NOT LIKE '%_@__%.__%'

            If @Output > ''
            Begin
                Set @Output = Char(13) + Char(10) + @Output
                EXEC tSQLt.Fail @Output
            End

        END;

    Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it’s worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you’ll see the following: Great – we now know that our test is really doing something! You’ll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).

    Checking that the Tests are Running as Integration Tests

    After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI database, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn’t want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we’re continuously checking any changes we make (in this case, to the reference data emails), to ensure we’re not breaking a test that we’ve set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: - sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

        <!-- tSQLt tests -->
        <!-- Optional -->
        <!-- To run tSQLt tests in source control for the database, enter true. -->
        <enableTsqlt>true</enableTsqlt>

    Now, if we re-run the build in Bamboo (NB: I’ve moved to a new server here, hence the different address and build number): - superb, a broken build!! The error message isn’t great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:

        21-Jun-2013 11:35:19 Build FAILED.
        21-Jun-2013 11:35:19
        21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
        21-Jun-2013 11:35:19 (sqlCI target) ->
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

    As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build: This should have fixed the build: It worked!

    Summary

    This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?

    Read the article

  • Basics of Join Factorization

    - by Hong Su
    We continue our series on optimizer transformations with a post that describes the Join Factorization transformation. The Join Factorization transformation was introduced in Oracle 11g Release 2 and applies to UNION ALL queries. UNION ALL queries are commonly used in database applications, especially in data integration applications. In many scenarios the branches in a UNION ALL query share common processing, i.e., refer to the same tables. In the current Oracle execution strategy, each branch of a UNION ALL query is evaluated independently, which leads to repetitive processing, including data access and joins. The join factorization transformation offers an opportunity to share the common computations across the UNION ALL branches. Currently, join factorization factorizes common references to base tables only, i.e., not views. Consider a simple example, query Q1.

        Q1:
        select t1.c1, t2.c2
        from t1, t2, t3
        where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c2 = 2 and t2.c2 = t3.c2
        union all
        select t1.c1, t2.c2
        from t1, t2, t4
        where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c3 = t4.c3;

    Table t1 appears in both branches, as do the filter predicate on t1 (t1.c1 > 1) and the join predicate involving t1 (t1.c1 = t2.c1). Nevertheless, without any transformation, the scan (and the filtering) on t1 has to be done twice, once per branch. Such a query may benefit from join factorization, which can transform Q1 into Q2 as follows:

        Q2:
        select t1.c1, VW_JF_1.item_2
        from t1, (select t2.c1 item_1, t2.c2 item_2
                  from t2, t3
                  where t2.c2 = t3.c2 and t2.c2 = 2
                  union all
                  select t2.c1 item_1, t2.c2 item_2
                  from t2, t4
                  where t2.c3 = t4.c3) VW_JF_1
        where t1.c1 = VW_JF_1.item_1 and t1.c1 > 1;

    In Q2, t1 is "factorized" and thus the table scan and the filtering on t1 are done only once (they are shared). If t1 is large, then avoiding one extra scan of t1 can lead to a huge performance improvement. Another benefit of join factorization is that it can open up more join orders. Let's look at query Q3.

        Q3:
        select *
        from t5, (select t1.c1, t2.c2
                  from t1, t2, t3
                  where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c2 = 2 and t2.c2 = t3.c2
                  union all
                  select t1.c1, t2.c2
                  from t1, t2, t4
                  where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c3 = t4.c3) V
        where t5.c1 = V.c1;

    In Q3, view V is the same as Q1. Before join factorization, t1, t2 and t3 must be joined first before they can be joined with t5. But if join factorization factorizes t1 from view V, t1 can then be joined with t5. This opens up new join orders. That being said, join factorization imposes certain join orders. For example, in Q2, t2 and t3 appear in the first branch of the UNION ALL query in view VW_JF_1. T2 must be joined with t3 before it can be joined with t1, which is outside of the VW_JF_1 view. The imposed join order may not necessarily be the best join order. For this reason, join factorization is performed under the cost-based transformation framework; this means that we cost the plans with and without join factorization and choose the cheapest plan. Note that if the branches in the UNION ALL have DISTINCT clauses, join factorization is not valid. For example, Q4 is NOT semantically equivalent to Q5.
        Q4:
        select distinct t1.*
        from t1, t2
        where t1.c1 = t2.c1
        union all
        select distinct t1.*
        from t1, t2
        where t1.c1 = t2.c1

        Q5:
        select distinct t1.*
        from t1, (select t2.c1 item_1
                  from t2
                  union all
                  select t2.c1 item_1
                  from t2) VW_JF_1
        where t1.c1 = VW_JF_1.item_1

    Q4 might return more rows than Q5. Q5's results are guaranteed to be duplicate-free because of the DISTINCT keyword at the top level, while Q4's results might contain duplicates.

    The examples given so far involve inner joins only. Join factorization is also supported for outer joins, anti joins and semi joins, but only the right tables of outer joins, anti joins and semi joins can be factorized. It is not semantically correct to factorize the left table of an outer join, anti join or semi join. For example, Q6 is NOT semantically equivalent to Q7.

        Q6:
        select t1.c1, t2.c2
        from t1, t2
        where t1.c1 = t2.c1(+) and t2.c2 (+) = 2
        union all
        select t1.c1, t2.c2
        from t1, t2
        where t1.c1 = t2.c1(+) and t2.c2 (+) = 3

        Q7:
        select t1.c1, VW_JF_1.item_2
        from t1, (select t2.c1 item_1, t2.c2 item_2
                  from t2
                  where t2.c2 = 2
                  union all
                  select t2.c1 item_1, t2.c2 item_2
                  from t2
                  where t2.c2 = 3) VW_JF_1
        where t1.c1 = VW_JF_1.item_1(+)

    However, the right side of an outer join can be factorized. For example, join factorization can transform Q8 to Q9 by factorizing t2, which is the right table of an outer join.

        Q8:
        select t1.c2, t2.c2
        from t1, t2
        where t1.c1 = t2.c1 (+) and t1.c1 = 1
        union all
        select t1.c2, t2.c2
        from t1, t2
        where t1.c1 = t2.c1(+) and t1.c1 = 2

        Q9:
        select VW_JF_1.item_2, t2.c2
        from t2, (select t1.c1 item_1, t1.c2 item_2
                  from t1
                  where t1.c1 = 1
                  union all
                  select t1.c1 item_1, t1.c2 item_2
                  from t1
                  where t1.c1 = 2) VW_JF_1
        where VW_JF_1.item_1 = t2.c1(+)

    All of the examples in this blog show factorizing a single table from two branches. This is just for ease of illustration. Join factorization can factorize multiple tables, and from more than two UNION ALL branches.

    Summary

    Join factorization is a cost-based transformation. It can factorize common computations from branches in a UNION ALL query, which can lead to huge performance improvements.

    Read the article

  • 9/18 Live Webcast: Three Compelling Reasons to Upgrade to Oracle Database 11g - Still time to register

    - by jgelhaus
    If you or your organization is still working with Oracle Database 10g or an even older version, now is the time to upgrade. Oracle Database 11g offers a wide variety of advantages to enhance your operation. Join us at 10 am PT / 1 pm ET on September 18th for this live Webcast and learn about what you’re missing: the business, operational, and technical benefits. With Oracle Database 11g, you can: upgrade with zero downtime, improve application performance and database security, reduce the amount of storage required, and save time and money. Register today.

    Read the article

  • Delivery of JMS message before the transaction is committed

    - by ewernli
    Hi, I have a very simple scenario involving a database and JMS in an application server (GlassFish). The scenario is dead simple: 1. an EJB inserts a row in the database and sends a message. 2. when the message is delivered with an MDB, the row is read and updated. The problem is that sometimes the message is delivered before the insert has been committed in the database. This is actually understandable if we consider the two-phase commit protocol: 1. prepare JMS 2. prepare database 3. commit JMS 4. (tiny little gap where the message can be delivered before the insert has been committed) 5. commit database. I've discussed this problem with others, but the answer was always: "Strange, it should work out of the box". My questions are then: How could it work out of the box? My scenario sounds fairly simple; why aren't there more people with similar troubles? Am I doing something wrong? Is there a way to solve this issue correctly? Here are a few more details about my understanding of the problem: This timing issue exists only if the participants are treated in this order. If the 2PC treats the participants in the reverse order (database first, then the message broker) that should be fine. The problem was happening randomly but was completely reproducible. I found no way to control the order of the participants in the distributed transactions in the JTA, JCA and JPA specifications, nor in the GlassFish documentation. We could assume they will be enlisted in the distributed transaction according to the order in which they are used, but with an ORM such as JPA, it's difficult to know when the data are flushed and when the database connection is really used. Any idea?

    Read the article

  • Android: NullPointerException when getting data from the database

    - by Gil Viernes Marcelo
    This what happens in the system. 1. Admin login this is in other activity but i will not post it coz it has nothing to do with this (no problem) 2. Register user in system (using database no problem) 3. Click add user button (where existing user who register must display its name in ListView) Problem: When I click adduser to see if the system registered the user, it force close. CurrentUser.java package com.example.istronggyminstructor; import java.util.ArrayList; import android.os.Bundle; import android.app.Activity; import android.content.Context; import android.content.Intent; import android.database.Cursor; import android.view.Gravity; import android.view.LayoutInflater; import android.view.Menu; import android.view.View; import android.view.View.OnClickListener; import android.view.ViewGroup; import android.view.ViewGroup.LayoutParams; import android.view.WindowManager; import android.widget.ArrayAdapter; import android.widget.Button; import android.widget.EditText; import android.widget.FrameLayout; import android.widget.ListView; import android.widget.PopupWindow; import android.widget.TextView; import android.widget.Toast; import java.util.ArrayList; import java.util.List; import java.util.Random; import com.example.istronggyminstructor.registeredUserList.Users; import android.content.ContentValues; import android.database.Cursor; import android.database.SQLException; import android.database.sqlite.SQLiteDatabase; public class CurrentUsers extends Activity { private Button register; private Button adduser; EditText getusertext, getpass, getweight, textdisp; View popupview, popupview2; public static ArrayList<String> ArrayofName = new ArrayList<String>(); protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_current_users); register = (Button) findViewById(R.id.regbut); adduser = (Button) findViewById(R.id.addbut); register.setOnClickListener(new OnClickListener() { @Override public void onClick(View arg0) { LayoutInflater inflator = (LayoutInflater) getBaseContext() .getSystemService(LAYOUT_INFLATER_SERVICE); popupview = inflator.inflate(R.layout.popup, null); final PopupWindow popupWindow = new PopupWindow(popupview, LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT); popupWindow.showAtLocation(popupview, Gravity.CENTER, 0, 0); popupWindow.setFocusable(true); popupWindow.update(); Button dismissbtn = (Button) popupview.findViewById(R.id.close); dismissbtn.setOnClickListener(new OnClickListener() { @Override public void onClick(View arg0) { popupWindow.dismiss(); } }); popupWindow.showAsDropDown(register, 50, -30); } }); //Thread.setDefaultUncaughtExceptionHandler(new forceclose(this)); } public void registerUser(View v) { EditText username = (EditText) popupview.findViewById(R.id.usertext); EditText password = (EditText) popupview .findViewById(R.id.passwordtext); EditText weight = (EditText) popupview.findViewById(R.id.weight); String getUsername = username.getText().toString(); String getPassword = password.getText().toString(); String getWeight = weight.getText().toString(); dataHandler dbHandler = new dataHandler(this, null, null, 1); Users user = new Users(getUsername, getPassword, Integer.parseInt(getWeight)); dbHandler.addUsers(user); Toast.makeText(getApplicationContext(), "Registering...", Toast.LENGTH_SHORT).show(); } public void onClick_addUser(View v) { LayoutInflater inflator = (LayoutInflater) getBaseContext() .getSystemService(LAYOUT_INFLATER_SERVICE); popupview2 = 
inflator.inflate(R.layout.popup2, null); final PopupWindow popupWindow = new PopupWindow(popupview2, LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT); popupWindow.showAtLocation(popupview2, Gravity.CENTER, 0, -10); popupWindow.setFocusable(true); popupWindow.update(); Button dismissbtn = (Button) popupview2.findViewById(R.id.close2); dismissbtn.setOnClickListener(new OnClickListener() { @Override public void onClick(View arg0) { popupWindow.dismiss(); } }); popupWindow.showAsDropDown(register, 50, -30); dataHandler dbHandler = new dataHandler(this, null, null, 1); dbHandler.getAllUsers(); ListView list = (ListView)findViewById(R.layout.popup2); ArrayAdapter<String> adapter = new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, ArrayofName); list.setAdapter(adapter); } @Override public boolean onCreateOptionsMenu(Menu menu) { getMenuInflater().inflate(R.menu.current_users, menu); return true; } } registeredUserList.java package com.example.istronggyminstructor; public class registeredUserList { public static class Users { private static int _id; private static String _users; private static String _password; private static int _weight; private static String[] _workoutlists; private static int _score; public Users() { } public Users(String username, String password, int weight) { _users = username; _password = password; _weight = weight; } public int getId() { return _id; } public static void setId(int id) { _id = id; } public String getUsers() { return _users; } public static void setUsers(String users) { _users = users; } public String getPassword(){ return _password; } public void setPassword(String password){ _password = password; } public int getWeight(){ return _weight; } public static void setWeight(int weight){ _weight = weight; } public String[] getWorkoutLists(){ return _workoutlists; } public void setWorkoutLists(String[] workoutlists){ _workoutlists = workoutlists; } public int score(){ return _score; } public void score(int score){ _score = score; } } } dataHandler.java package com.example.istronggyminstructor; import java.util.ArrayList; import java.util.List; import com.example.istronggyminstructor.registeredUserList.Users; import android.content.ContentValues; import android.content.Context; import android.database.Cursor; import android.database.sqlite.SQLiteDatabase; import android.database.sqlite.SQLiteDatabase.CursorFactory; import android.database.sqlite.SQLiteOpenHelper; public class dataHandler extends SQLiteOpenHelper { private static final int DATABASE_VERSION = 1; private static final String DATABASE_NAME = "userInfo.db"; public static final String TABLE_USERINFO = "user"; public static final String COLUMN_ID = "_id"; public static final String COLUMN_USERNAME = "username"; public static final String COLUMN_PASSWORD = "password"; public static final String COLUMN_WEIGHT = "weight"; public dataHandler(Context context, String name, CursorFactory factory, int version) { super(context, DATABASE_NAME, factory, DATABASE_VERSION); } @Override public void onCreate(SQLiteDatabase db) { String CREATE_USER_TABLE = "CREATE TABLE " + TABLE_USERINFO + " (" + COLUMN_ID + " INTEGER PRIMARY KEY, " + COLUMN_USERNAME + " TEXT," + COLUMN_PASSWORD + " TEXT, " + COLUMN_WEIGHT + " INTEGER " + ");"; db.execSQL(CREATE_USER_TABLE); } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { db.execSQL("DROP TABLE IF EXISTS " + TABLE_USERINFO); onCreate(db); } public void addUsers(Users user) { ContentValues values = new ContentValues(); 
values.put(COLUMN_USERNAME, user.getUsers()); values.put(COLUMN_PASSWORD, user.getPassword()); values.put(COLUMN_WEIGHT, user.getWeight()); SQLiteDatabase db = this.getWritableDatabase(); db.insert(TABLE_USERINFO, null, values); db.close(); } public Users findUsers(String username) { String query = "Select * FROM " + TABLE_USERINFO + " WHERE " + COLUMN_USERNAME + " = \"" + username + "\""; SQLiteDatabase db = this.getWritableDatabase(); Cursor cursor = db.rawQuery(query, null); Users user = new Users(); if (cursor.moveToFirst()) { cursor.moveToFirst(); Users.setUsers(cursor.getString(1)); //Users.setWeight(Integer.parseInt(cursor.getString(3))); not yet needed cursor.close(); } else { user = null; } db.close(); return user; } public List<Users> getAllUsers(){ List<Users> user = new ArrayList(); String selectQuery = "SELECT * FROM " + TABLE_USERINFO; SQLiteDatabase db = this.getWritableDatabase(); Cursor cursor = db.rawQuery(selectQuery, null); if (cursor.moveToFirst()) { do { Users users = new Users(); users.setUsers(cursor.getString(1)); String name = cursor.getString(1); CurrentUsers.ArrayofName.add(name); // Adding contact to list user.add(users); } while (cursor.moveToNext()); } // return user list return user; } public boolean deleteUsers(String username) { boolean result = false; String query = "Select * FROM " + TABLE_USERINFO + " WHERE " + COLUMN_USERNAME + " = \"" + username + "\""; SQLiteDatabase db = this.getWritableDatabase(); Cursor cursor = db.rawQuery(query, null); Users user = new Users(); if (cursor.moveToFirst()) { Users.setId(Integer.parseInt(cursor.getString(0))); db.delete(TABLE_USERINFO, COLUMN_ID + " = ?", new String[] { String.valueOf(user.getId()) }); cursor.close(); result = true; } db.close(); return result; } } Logcat 08-20 03:23:23.293: E/AndroidRuntime(16363): FATAL EXCEPTION: main 08-20 03:23:23.293: E/AndroidRuntime(16363): java.lang.IllegalStateException: Could not execute method of the activity 08-20 03:23:23.293: E/AndroidRuntime(16363): at android.view.View$1.onClick(View.java:3599) 08-20 03:23:23.293: E/AndroidRuntime(16363): at android.view.View.performClick(View.java:4204) 08-20 03:23:23.293: E/AndroidRuntime(16363): at android.view.View$PerformClick.run(View.java:17355) 08-20 03:23:23.293: E/AndroidRuntime(16363): at android.os.Handler.handleCallback(Handler.java:725) 08-20 03:23:23.293: E/AndroidRuntime(16363): at android.os.Handler.dispatchMessage(Handler.java:92) 08-20 03:23:23.293: E/AndroidRuntime(16363): at android.os.Looper.loop(Looper.java:137) 08-20 03:23:23.293: E/AndroidRuntime(16363): at android.app.ActivityThread.main(ActivityThread.java:5041) 08-20 03:23:23.293: E/AndroidRuntime(16363): at java.lang.reflect.Method.invokeNative(Native Method) 08-20 03:23:23.293: E/AndroidRuntime(16363): at java.lang.reflect.Method.invoke(Method.java:511) 08-20 03:23:23.293: E/AndroidRuntime(16363): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:793) 08-20 03:23:23.293: E/AndroidRuntime(16363): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:560) 08-20 03:23:23.293: E/AndroidRuntime(16363): at dalvik.system.NativeStart.main(Native Method) 08-20 03:23:23.293: E/AndroidRuntime(16363): Caused by: java.lang.reflect.InvocationTargetException 08-20 03:23:23.293: E/AndroidRuntime(16363): at java.lang.reflect.Method.invokeNative(Native Method) 08-20 03:23:23.293: E/AndroidRuntime(16363): at java.lang.reflect.Method.invoke(Method.java:511) 08-20 03:23:23.293: E/AndroidRuntime(16363): at 
android.view.View$1.onClick(View.java:3594) 08-20 03:23:23.293: E/AndroidRuntime(16363): ... 11 more 08-20 03:23:23.293: E/AndroidRuntime(16363): Caused by: java.lang.NullPointerException 08-20 03:23:23.293: E/AndroidRuntime(16363): at com.example.istronggyminstructor.CurrentUsers.onClick_addUser(CurrentUsers.java:118) 08-20 03:23:23.293: E/AndroidRuntime(16363): ... 14 more

    Read the article

  • Transitioning from Domain Authentication to SQL Server Authentication

    - by Albert Perrien
    Greetings all, I've run into a problem that has me stumped. I've put together a database in SQL Server Express, and I'm having a strange permissions problem. The database is on my development machine with a domain user: DOMAIN\albertp. My development database server is set for "SQL Server and Windows Authentication" mode. I can edit and query my database without any problems when I log in using Windows Authentication. However, when I log in as any user that uses SQL Server authentication (including sa), I get this message when I run queries against my database. SELECT * FROM [Testing].[dbo].[AuditingReport] I get: Msg 18456, Level 14, State 1, Line 1 Login failed for user 'auditor'. I'm logged into the server from SQL Server Management Studio as 'auditor' and I don't see anything in the error log about the login failure. I've already run: Use Testing; Grant All to auditor; Go And I still get the same error. What permissions do I have to set for the database to be usable by others outside of my personal domain login? Or am I looking at the wrong problem? My ultimate goal is to have the database be accessible from a set of PHP pages, using either a common login (hence 'auditor') or a login specific to a set of individual users.

    Read the article

  • Can't view database on SQL Server 2008 with domain user

    - by abatishchev
    I created a login for a domain user (a domain admin) and added it to the serveradmin role, but after logging in I still can't list databases; I get the following error: The database MyDB is not accessible. (ObjectExplorer) Program Location: at Microsoft.SqlServer.Management.UI.VSIntegration.ObjectExplorer.DatabaseNavigableItem.get_CanGetChildren() at Microsoft.SqlServer.Management.UI.VSIntegration.ObjectExplorer.NavigableItem.GetChildren(IGetChildrenRequest request) at Microsoft.SqlServer.Management.UI.VSIntegration.ObjectExplorer.ExplorerHierarchyNode.BuildChildren(WaitHandle quitEvent) How can I fix that?

    Read the article

  • Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

    - by Dave Jarvis
    Storyline

    Trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how to migrate the structure. I have found very little documentation on migrating the data. The schema has 13 tables (which have been migrated successfully) and 9 GB of data. MySQL version: 5.1.x. PostgreSQL version: 8.4.x. I want to use the R programming language to analyze the data using SQL select statements; PostgreSQL has PL/R, but MySQL has nothing (as far as I can tell).

    A long time ago in a galaxy far, far away...

    Create the database location (/var has insufficient space; also dislike having the PostgreSQL version number everywhere -- upgrading would break scripts!):

        sudo mkdir -p /home/postgres/main
        sudo cp -Rp /var/lib/postgresql/8.4/main /home/postgres
        sudo chown -R postgres.postgres /home/postgres
        sudo chmod -R 700 /home/postgres
        sudo usermod -d /home/postgres/ postgres

    All good to here. Next, restart the server and configure the database using these installation instructions:

        sudo apt-get install postgresql pgadmin3
        sudo /etc/init.d/postgresql-8.4 stop
        sudo vi /etc/postgresql/8.4/main/postgresql.conf   (change data_directory to /home/postgres/main)
        sudo /etc/init.d/postgresql-8.4 start
        sudo -u postgres psql postgres
        \password postgres
        sudo -u postgres createdb climate
        pgadmin3

    Use pgadmin3 to configure the database and create a schema.

    A New Hope

    The episode began in a remote shell known as bash, with both databases running, and the installation of a command with a most unusual logo: SQL Fairy.

        perl Makefile.PL
        sudo make install
        sudo apt-get install perl-doc   (strangely, it is not called perldoc)
        perldoc SQL::Translator::Manual

    Extract a PostgreSQL-friendly DDL and all the MySQL data:

        sqlt -f DBI --dsn dbi:mysql:climate --db-user user --db-password password -t PostgreSQL > climate-pg-ddl.sql
        mysqldump --skip-add-locks --complete-insert --no-create-db --no-create-info --quick --result-file="climate-my.sql" --databases climate --skip-comments -u root -p

    The Database Strikes Back

    Recreate the structure in PostgreSQL as follows: pgadmin3 (switch to it). Click the Execute arbitrary SQL queries icon. Open climate-pg-ddl.sql. Search for TABLE " and replace with TABLE climate." (insert the schema name climate). Search for on " and replace with on climate." (insert the schema name climate). Press F5 to execute. This results in: Query returned successfully with no result in 122 ms.

    Replies of the Jedi

    At this point I am stumped. Where do I go from here (what are the steps) to convert climate-my.sql to climate-pg.sql so that they can be executed against PostgreSQL? How do I make sure the indexes are copied over correctly (to maintain referential integrity; I don't have constraints at the moment to ease the transition)? How do I ensure that adding new rows in PostgreSQL will start enumerating from the index of the last row inserted (and not conflict with an existing primary key from the sequence)?

    Resources

    A fair bit of information was needed to get this far:

        https://help.ubuntu.com/community/PostgreSQL
        http://articles.sitepoint.com/article/site-mysql-postgresql-1
        http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL
        http://pgfoundry.org/frs/shownotes.php?release_id=810
        http://sqlfairy.sourceforge.net/

    Thank you!

    Read the article

  • RT database scaling

    - by rplevy
    Recently I heard someone suggest that the RT request tracker may have scalability issues due to its non-normalized database (someone at a Perl meeting I went to referred to it in a positive light as hyper-normalized, but I think he may have misunderstood what normalization is all about). On the other hand, I know that large-scale enterprises such as Perl's CPAN use RT. Does this level of scale require special measures to be taken to handle what happens when the db grows too large? What have your experiences been?

    Read the article

  • Stand alone or free application to backup ADAM / AD LDS database files

    - by Darqer
    Do you know of any small, standalone, free tool that can be run from the console to back up / restore ADAM / AD LDS database files (like adamntds.dit, edbres00001.jrs, etc.)? I tried to stop the ADAM service and copy these files to another location, but afterwards I was unable to restore ADAM from them. I know that on WS 2003 I could use a backup tool provided by Microsoft, but it seems to be unavailable on WS 2008.

    Read the article
