Search Results

Search found 42428 results on 1698 pages for 'database query'.

Page 516/1698

  • How do I read Unicode characters from an MS Access 2007 database through Java?

    - by Peter
    In Java, I have written a program that reads a UTF-8 text file. The text file contains a SQL query of the SELECT kind. The program then executes the query on the Microsoft Access 2007 database and writes all fields of the first row to a UTF-8 text file. The problem I have is when a row is returned that contains Unicode characters, such as "?". These characters show up as "?" in the text file. I know that the text files are read and written correctly, because a dummy UTF-8 character ("?") is read from the text file containing the SQL query and written to the text file containing the resulting row, and the character looks correct when the written text file is opened in Notepad, so the reading and writing of the text files are not part of the problem. This is how I connect to the database and how I execute the SQL query:

        Connection c = DriverManager.getConnection(
            "jdbc:odbc:Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:/database.accdb;Pwd=temp");
        ResultSet r = c.createStatement().executeQuery(sql);

    I have tried adding a charSet property to the Connection, but it makes no difference:

        Properties p = new Properties();
        p.put("charSet", "utf-8");
        p.put("lc_ctype", "utf-8");
        p.put("encoding", "utf-8");
        Connection c = DriverManager.getConnection("...", p);

    I tried "utf8"/"UTF8"/"UTF-8", with no difference. If I enter "UTF-16" I get the following exception: "java.lang.IllegalArgumentException: Illegal replacement". I have been searching around for hours with no results and now turn my hope to you. Please help! I also accept workaround suggestions. =) What I want is to be able to make a Unicode query (for example one that searches for posts that contain the "?" character) and to have results with Unicode characters received and saved correctly. Thank you!

    Read the article

  • Is there a way to track data structure dependencies from the database, through the tiers, all the way out to a web page?

    - by Sean Mickey
    When we design applications, we generally end up with the same tiered sets of data structures:

    - A persistent data structure that is described using DDL and implemented as RDBMS tables and columns.
    - A set of domain objects that consist primarily of data structures, usually combined with business-rule level logic, implemented in a programming language such as Java.
    - A set of service layer interfaces that directly support use case implementations (which use the domain data structures as parameters), implemented as EJBs or something equivalent in another programming language.
    - UI screens that allow users to Create, Retrieve, Update, and (maybe) Delete all manner of data structures and graphs of data structures, with numerous screens and with multiple UI widgets, all structured to support the same data structures.

    But if you want to change the data structures in any of these tiers, it always seems extremely difficult to assess the impact(s) the change will have across the application. UML can help, but tracing through diagram after diagram is not a real solution to this problem. The best I have ever seen was a homespun data tracking spreadsheet document that listed all of the data structures and walked the relationships from tier to tier. Is there a tool or accepted approach that makes it easy to identify a data structure in any tier and easily obtain a list of all dependent:

    - database table and column data structures
    - domain object data structures
    - service layer interface methods and parameter data structures
    - screen & UI component data structures

    Read the article

  • Given a database table where multiple rows have the same values and only the most recent record is to be returned

    - by Jim Lahman
    I have a table where there are multiple records with the same value but varying creation dates. A sample of the data is shown here:

        select lot_num, to_char(creation_dts,'DD-MON-YYYY HH24:MI:SS') as creation_date
        from coil_setup
        order by lot_num

        LOT_NUM                        CREATION_DATE
        ------------------------------ --------------------
        1435718.002                    24-NOV-2010 11:45:54
        1440026.002                    17-NOV-2010 06:50:16
        1440026.002                    08-NOV-2010 23:28:24
        1526564.002                    01-DEC-2010 13:14:04
        1526564.002                    08-NOV-2010 22:39:01
        1526564.002                    01-NOV-2010 17:04:30
        1605920.003                    29-DEC-2010 10:01:24
        1945352.003                    14-DEC-2010 01:50:37
        1945352.003                    09-DEC-2010 04:44:22
        1952718.002                    25-OCT-2010 09:33:19
        1953866.002                    20-OCT-2010 18:38:31
        1953866.002                    18-OCT-2010 16:15:25

    Notice that there are multiple instances of the same lot number (for example, 1440026.002 and 1526564.002). To return only the most recent instance, issue this SQL statement:

        select lot_num, to_char(creation_date,'DD-MON-YYYY HH24:MI:SS') as creation_date
        from
        (
          select rownum r, lot_num, max(creation_dts) as creation_date
          from coil_setup group by rownum, lot_num
          order by lot_num
        )
        where r < 100

        LOT_NUM                        CREATION_DATE
        ------------------------------ --------------------
        2019416.002                    01-JUL-2010 00:01:24
        2022336.003                    06-OCT-2010 15:25:01
        2067230.002                    01-JUL-2010 00:36:48
        2093114.003                    02-JUL-2010 20:10:51
        2093982.002                    02-JUL-2010 14:46:11
        2093984.002                    02-JUL-2010 14:43:18
        2094466.003                    02-JUL-2010 20:04:48
        2101074.003                    11-JUL-2010 09:02:16
        2103746.002                    02-JUL-2010 15:07:48
        2103758.003                    11-JUL-2010 09:02:13
        2104636.002                    02-JUL-2010 15:11:25
        2106688.003                    02-JUL-2010 13:55:27
        2106882.003                    02-JUL-2010 13:48:47
        2107258.002                    02-JUL-2010 12:59:48
        2109372.003                    02-JUL-2010 20:49:12
        2110182.003                    02-JUL-2010 19:59:19
        2110184.003                    02-JUL-2010 20:01:03
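
    If all that is needed is the latest creation date for each lot number, a simpler aggregate (shown here as a sketch, using the same Oracle functions as the query above) avoids the inline view entirely:

        select lot_num,
               to_char(max(creation_dts), 'DD-MON-YYYY HH24:MI:SS') as creation_date
        from coil_setup
        group by lot_num
        order by lot_num;

    If other columns from the newest row are also required, an analytic ROW_NUMBER() OVER (PARTITION BY lot_num ORDER BY creation_dts DESC) filtered to 1 would be the usual alternative.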

    Read the article

  • flat files vs. RDBMS database, few read/writes, few changes

    - by Bob Lapique
    I have to handle data from long-term (years, decades) climate monitoring stations. The data flow usually starts with raw data (voltages, etc.) plus quality-check information (pressure, temperature, flow rate, etc.), generally recorded at 1 Hz. Then the data are assigned a quality flag (by a human and/or a program), processed (calibration curves applied) and flagged again. So we basically end up with 2 datasets: raw and processed data. New data are typically added once a day (~500 KB/day/instrument). Simultaneous queries are not likely to ever happen. I wanted to go for an RDBMS (we have a MySQL server) and have some experience in database design, but the IT guy keeps telling me that flat files will do the job just as well. I suspect he is trying to make his life easier when it comes to backing up and upgrading the MySQL server. There are not so many links between data, and they don't change much, but the quality flags will change. An RDBMS makes it easier to compare data from different instruments on a "many days" scale than daily text files do. Well, what would you advise? Thanks.
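
    For what it's worth, here is a minimal sketch of what the MySQL side of this could look like; the table and column names are hypothetical, just to make the comparison with daily flat files concrete:

        CREATE TABLE raw_measurement (
          instrument_id INT       NOT NULL,
          measured_at   DATETIME  NOT NULL,
          voltage       DOUBLE,
          pressure      DOUBLE,
          temperature   DOUBLE,
          flow_rate     DOUBLE,
          quality_flag  TINYINT   NULL,          -- assigned later by a person or a program
          PRIMARY KEY (instrument_id, measured_at)
        );

        -- a "many days" comparison across instruments becomes a single query
        SELECT instrument_id, DATE(measured_at) AS day, AVG(temperature) AS avg_temp
        FROM raw_measurement
        WHERE measured_at >= '2010-01-01' AND measured_at < '2010-02-01'
        GROUP BY instrument_id, DATE(measured_at);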

    Read the article

  • You may be tempted by IaaS, but you should PaaS on that or your database cloud journey will be a short one

    - by B R Clouse
    Before we examine Consolidation, the next step in the journey to cloud, let's take a short detour to address a critical choice you will face at the outset of your journey: whether to deploy your databases in virtual machines or not.

    A common misconception we've encountered is the belief that moving to cloud computing can be accomplished by simply hosting one's current operating environment as-is within virtual machines, and then stacking those VMs together in a consolidated environment. This solution is often described as "Infrastructure as a Service" (IaaS) because the building block for deployments is a VM, which behaves like a full complement of infrastructure. This approach is easy to understand and may feel like a good first step, but it won't take your databases very far in the journey to cloud computing. In fact, if you follow the IaaS fork in the road, your journey will end quickly, without realizing the full benefits of cloud computing.

    The better option is to rationalize the deployment stack so that VMs are needed only for exceptional cases. By settling on a standard operating system and patch level, you create an infrastructure that potentially all of your databases can share. Now the building block will be database instances or possibly schemas within databases. These components are the platforms on which you will deploy workloads, hence this is known as "Platform as a Service" (PaaS).

    PaaS opens the door to higher degrees of consolidation than IaaS, because with PaaS you will not need to accommodate the footprint (operating system, hypervisor, processes, ...) that each VM brings with it. You will also reduce your maintenance overhead if you move forward without the VMs and their O/Ses to patch and monitor. So while IaaS simply shuffles complex and varied environments into VMs, PaaS actually reduces complexity by rationalizing to the smallest possible set of components. Now we're ready to look at the consolidation options that PaaS provides -- in our next blog posting.

    Read the article

  • Am I missing something in these considerations about a leaderboard's database schema?

    - by misiMe
    I have just finished developing a mobile game, and now I want to implement an online leaderboard using MySQL. I'm wondering about the database schema, and I thought about some possibilities (I didn't go into detail with syntax because my question is just about the logic of it). The data is basically Name (string) and Score (integer). I thought to ask for the name just the first time; if you modify it in the future, that is just an update to the name associated with your id.

    - Leaderboard(ID, Name, Score), with ID an auto-increment integer primary key. With this kind of idea the table may grow fast, because if you choose a different name for every score, it will add a new entry.
    - Leaderboard(PhoneId, Name, Score), where PhoneId is the unique identifier of the phone and the primary key. A con of this choice is that if you want to play on your friend's phone, you can't enter a different name for the score.
    - Leaderboard(Name, Score), where Name is the primary key. With that, if you enter a name that already exists, you will be prompted to choose another one.

    Do you agree with these considerations? What would you do? Am I missing something?
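
    Not a recommendation, but for concreteness, here is a minimal MySQL sketch of the second option (device id as the key, name freely editable, best score kept); the table and column names, and the device id value, are hypothetical:

        CREATE TABLE leaderboard (
          phone_id VARCHAR(64) NOT NULL PRIMARY KEY,   -- unique device identifier
          name     VARCHAR(32) NOT NULL,
          score    INT         NOT NULL DEFAULT 0
        );

        -- one row per device: update the display name, keep the highest score
        INSERT INTO leaderboard (phone_id, name, score)
        VALUES ('hypothetical-device-id', 'Alice', 1200)
        ON DUPLICATE KEY UPDATE
          name  = VALUES(name),
          score = GREATEST(score, VALUES(score));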

    Read the article

  • Do cross reference database tables have a place in domain driven design?

    - by Mike Cellini
    First some background. Let's say we have a system where a customer is placing an order in a web interface. The items that the customer is ordering can be priced in various ways, sometimes including the cost of delivery and sometimes not at all. That pricing effectively depends on a variety of factors, including the vendor's own pricing model, that vendor's individual contracts with customers, as well as that vendor's contracts with its own suppliers. Let's assume that once a customer places an order for a particular item and chooses a contract (if any), the method of delivery can be determined by variables on those contracts. Those delivery methods also live in their own table in the database and have various properties consumed downstream. It makes sense that a cross-reference or lookup table would store that information. That table would be loaded into the domain and could then be used to apply the appropriate delivery method while processing the order. Does this make sense in the context of domain driven design? Or is my thinking too relational? Is this logic that should be built into its own class/method (I mean beyond applying the cross-reference table data)?
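
    For readers picturing the tables involved, the cross-reference being described might look roughly like this in the database (a sketch with hypothetical names, not the poster's actual schema):

        CREATE TABLE delivery_method (
          delivery_method_id INT PRIMARY KEY,
          name               VARCHAR(50) NOT NULL,
          max_weight_kg      DECIMAL(8,2)          -- example of a property consumed downstream
        );

        CREATE TABLE contract_delivery_method (
          contract_id        INT NOT NULL,
          delivery_method_id INT NOT NULL REFERENCES delivery_method (delivery_method_id),
          PRIMARY KEY (contract_id, delivery_method_id)
        );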

    Read the article

  • How do I correctly update an Entity Model after changes to the database structure?

    - by Slauma
    I've made some changes to the table structure and especially to the relationships between tables in my SQL Server database. Now I want to update my Entity model based on this new database structure. Right-clicking on the edmx file, I find the option "Update model from database". But when I do this I get kind of a 50% update: the new columns appear in the entity classes, but I am confused about a lot of navigation properties which are still there in the model although the corresponding foreign key relationships no longer exist in the database. Am I doing something wrong? Or is there another option to update the model, including deletion of navigation properties? Or do I have to delete those navigation properties manually in the model files? I am using Entity Framework Version 1 (VS 2008 SP1). Thanks in advance for your help!

    Read the article

  • Why is Reporting Services report vastly slower than its query?

    - by Telos
    I have a query that takes roughly 2 minutes to run. It's not terribly complex in terms of parameters or anything, and the report itself doesn't do any truly extensive processing. Basically it just spits the data straight out in a nice format. (Actually one of the reports doesn't format the data at all, just returns a flat table meant to be manipulated in Excel.) It's not returning a massive set of data either. Yet the report takes upwards of 30 minutes to run. What could cause this? This is SSRS 2005 against a SQL 2005 database, btw.

    EDIT: OK, I found that with the addition of WITH (NOLOCK) in the report, it takes the same time as the query does through SSMS. Why would the query be handled differently if it's coming from Reporting Services (or Visual Studio on my local machine) than if coming from SSMS on my local machine? I saw the query running in Activity Monitor a couple of times in SLEEP_WAIT mode, but not blocked by anything...

    EDIT2: The connection string is: Data Source=SERVERNAME;Initial Catalog=DBName

    Read the article

  • Connecting to a SQL Server database (.mdf file) without installing SQL Server on the client machine?

    - by Shantanu Gupta
    I am creating a Windows application that needs to use a SQL Server database. I want to install this application on the client machine without installing SQL Server, so that my application can still connect to a database (i.e., an .mdf file) that I will provide on the client system. How can I connect to a database (.mdf) on the client machine through my Windows application without installing SQL Server? I don't know whether it is possible or not. If it is possible, what would the connection string be in that case? The database does not need to be used over a network. The client machine shouldn't need any installation; everything needs to run from a pen drive.

    Read the article

  • Pyramid.security: Is getting user info from a database with unauthenticated_userid(request) really secure?

    - by yourfriendzak
    I'm trying to make an accessible cache of user data using the Pyramid docs' "Making A “User Object” Available as a Request Attribute" example. They're using this code to return a user object to set_request_property:

        from pyramid.security import unauthenticated_userid

        def get_user(request):
            # the below line is just an example, use your own method of
            # accessing a database connection here (this could even be another
            # request property such as request.db, implemented using this same
            # pattern).
            dbconn = request.registry.settings['dbconn']
            userid = unauthenticated_userid(request)
            if userid is not None:
                # this should return None if the user doesn't exist
                # in the database
                return dbconn['users'].query({'id':userid})

    I don't understand why they're using unauthenticated_userid(request) to look up user info from the database... isn't that insecure? That means the user might not be logged in, so why are you using that ID to get their private info from the database? Shouldn't userid = authenticated_userid(request) be used instead, to make sure the user is logged in? What's the advantage of using unauthenticated_userid(request)? Please help me understand what's going on here.

    Read the article

  • How to restore a file system level copy of a PostgreSQL database (not dump) to a different PC

    - by user782224
    I am new to PostgreSQL. I have to recover a database which was running on a Windows XP machine. I have a zip of that machine's postgres folder. I have extracted the postgres installation on a different PC, started it using initdb, and created a new database. I was able to log in, but I am not able to see any of the old tables. Would you please post the steps to start the server on another Windows XP machine and to recover the tables and data in the old data folder?

    Read the article

  • Is it possible to use ActiveObjects (or another ORM) with an embedded database & JWS technology?

    - by phmr
    I would like to embed a database in my JWS (Java Web Start) application. As a matter of fact I have to use HSQL or SQLite. Hibernate may support them (HSQL or SQLite, does it?), but the workflow is rather complex for my application; maybe it's still the way to go for my needs. In ActiveObjects the database should be "linked" to by a path because of JDBC, but is it possible to specify a database that is inside a JAR, and how?

    Read the article

  • How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

    - by Mahmoud Abdelkader
    Please bear with me as I explain the problem and how I tried to solve it; my question on how to improve it is at the end. I have a 100,000-line CSV file from an offline batch job and I needed to insert it into the database as its proper models. Ordinarily, if this is a fairly straightforward load, it can be trivially loaded by just munging the CSV file to fit a schema, but I had to do some external processing that requires querying, and it's just much more convenient to use SQLAlchemy to generate the data I want. The data I want here is 3 models that represent 3 pre-existing tables in the database, and each subsequent model depends on the previous model. For example:

        Model C --> Foreign Key --> Model B --> Foreign Key --> Model A

    So, the models must be inserted in the order A, B, and C. I came up with a producer/consumer approach:

    - instantiate a multiprocessing.Process which contains a thread pool of 50 persister threads that have a thread-local connection to a database
    - read a line from the file using the csv DictReader
    - enqueue the dictionary to the process, where each thread creates the appropriate models by querying the right values and each thread persists the models in the appropriate order

    This was faster than a non-threaded read/persist, but it is way slower than bulk-loading a file into the database. The job finished persisting after about 45 minutes. For fun, I decided to write it in SQL statements, and it took 5 minutes. Writing the SQL statements took me a couple of hours, though. So my question is, could I have used a faster method to insert rows using SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk insert operations, so this is less than ideal. This leads to my question: is there a way to generate the SQL statements using SQLAlchemy, throw them in a file, and then just use a bulk load into the database? I know about str(model_object) but it does not show the interpolated values. I would appreciate any guidance on how to do this faster. Thanks!

    Read the article

  • SQL server 2005 - user rights

    - by Paresh
    I have created a user named "tuser" with CREATE DATABASE rights in SQL Server 2005, and given "tuser" the 'db_owner' database role on the master and msdb databases. When I run the database-creation script from this user's login, it creates the new database. But "tuser" doesn't have access to that newly created database generated from the script. Anyone have any idea? I want to write the script so that "tuser" has access to the newly created database after creation and can add users to it. I want to give "tuser" the 'db_owner' database role on that newly created database in the same script that creates the database. The script runs under 'tuser'.
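
    For reference, here is a minimal T-SQL sketch of granting a login access to a freshly created database on SQL Server 2005 (the database name NewDb is a placeholder; this assumes 'tuser' is an existing SQL login and the script runs under a different, administrative account; if 'tuser' itself creates the database, it is already mapped as dbo there and needs no extra grant):

        CREATE DATABASE NewDb;
        GO
        USE NewDb;
        GO
        CREATE USER tuser FOR LOGIN tuser;           -- map the login into the new database
        EXEC sp_addrolemember 'db_owner', 'tuser';   -- SQL Server 2005 role-membership syntax
        GO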

    Read the article

  • How can I create a dynamic LINQ query in C# with possible multiple group by clauses?

    - by FordPrefect141
    I have been a programmer for some years now, but I am a newcomer to LINQ and C#, so forgive me if my question sounds particularly stupid. I hope someone may be able to point me in the right direction. My task is to come up with the ability to form a dynamic multiple-group-by LINQ query within a C# script using a generic list as a source. For example, say I have a list containing multiple items with the following structure:

        FieldChar1 - character
        FieldChar2 - character
        FieldChar3 - character
        FieldNum1  - numeric
        FieldNum2  - numeric

    In a nutshell, I want to be able to create a LINQ query that will sum FieldNum1 and FieldNum2 grouped by any one, two or all three of the FieldChar fields, decided at runtime depending on the user's requirements, as well as selecting the FieldChar fields in the same query. I have the dynamic.cs in my project, which includes a GroupByMany extension method, but I have to admit I am really not sure how to put these to use. I am able to get the desired results if I use a query with hard-wired group-by requests, but not dynamically. Apologies for any erroneous nomenclature; I am new to this language, but any advice would be most welcome. Many thanks, Alex

    Read the article

  • How does one convert from a Java resultset to ColdFusion query in Railo?

    - by Shawn Grigson
    The following works fine in CFMX 7 and CF8, and I'd assume CF9 as well:

        <!--- 'conn' is a JDBC connection --->
        <cfset stat = conn.createStatement() />
        <cfset rs = stat.executeQuery(trim(arguments.sql)) />
        <!--- convert this Java resultset to a CF query recordset --->
        <cfset queryTable = CreateObject("java", "coldfusion.sql.QueryTable")>
        <cfset queryTable.init(rs) >
        <cfset query = queryTable.FirstTable() />

    This creates a statement using a JDBC driver, executes a query against it, putting the results into a Java ResultSet; then coldfusion.sql.QueryTable is instantiated, passed the Java ResultSet object, and queryTable.FirstTable() is called, which returns an actual ColdFusion resultset (for cfloop and the like). The problem comes with a difference in Railo's implementation. Running this code in Railo returns the following error: "No matching Constructor for coldfusion.sql.QueryTable(org.sqlite.RS) found." I've dumped the Railo Java object and don't see init() among the methods. Am I missing something simple? I'd love to get this working in Railo as well. Please note: I am doing a DSN-less connection to a SQLite db. I understand how to set up a CF datasource. My only hiccup at this point is doing the translation from a Java ResultSet to a Railo query.

    Read the article

  • How do I switch to a SQL Server database that will exist after another command?

    - by Jason Young
    I can't get this script to run, because SQL Server Management Studio 2008 says the database "NewName" does not exist. However, the script's purpose is to rename an existing database, so that name does exist by the time execution reaches that line. Ideas?

        Use Master;
        ALTER DATABASE OldName SET SINGLE_USER WITH NO_WAIT;
        ALTER DATABASE OldName MODIFY NAME = NewName;
        ALTER DATABASE NewName SET MULTI_USER;
        Use NewName; --THIS LINE FAILS BEFORE THE SCRIPT EVEN RUNS!
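
    One detail worth noting (a sketch, assuming the script is run from Management Studio or sqlcmd, where GO is the batch separator): the error appears before anything runs because the whole batch is validated up front. Splitting the USE into its own batch defers that check until the rename has already executed:

        Use Master;
        GO
        ALTER DATABASE OldName SET SINGLE_USER WITH NO_WAIT;
        ALTER DATABASE OldName MODIFY NAME = NewName;
        ALTER DATABASE NewName SET MULTI_USER;
        GO
        Use NewName;   -- compiled as a separate batch, after the rename batch has run
        GO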

    Read the article

  • I'm trying to display records from a database and then when a new one is added automatically update

    - by Pete
    I'm trying to display records from a database and then, when a new one is added, automatically update the displayed records with the new one. I am doing this using PHP and JavaScript. I want to load a page and display tags under a video; then, when a user adds a new tag by entering it into a text box, add it to the database and refresh the part of the page which shows these tags so that it includes the one which has just been added, all without the page being reloaded. Thanks in advance for any help.

    Read the article

  • How to convert lots of database files from MSSQL 2000 to MSSQL 2005?

    - by Tech
    Hi all, I am moving from MSSQL 2000 to MSSQL 2005, and I found an article on the web like this: http://www.aspfree.com/c/a/MS-SQL-Server/Moving-Data-from-SQL-Server-2000-to-SQL-Server-2005/ It works, but the problem is that it only moves databases one by one. Because I have so many databases, is there any easier way to do this? Or is there any batch process / utility that would allow me to do so? Thank you.
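
    As a sketch of the "batch" idea (not a tested procedure, and the backup path is hypothetical): on the SQL 2000 side you can generate one BACKUP statement per user database and then restore the resulting .bak files on the 2005 server, rather than moving each database by hand:

        -- run on the SQL Server 2000 instance; copy the generated statements and execute them
        SELECT 'BACKUP DATABASE [' + name + '] TO DISK = ''C:\backups\' + name + '.bak'''
        FROM master.dbo.sysdatabases
        WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb');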

    Read the article

  • JSON Twitter List in C#.net

    - by James
    Hi, my code is below. I am not able to extract the 'name' and 'query' lists from the JSON via a DataContracted class (below). I have spent a long time trying to work this one out, and could really do with some help...

    My JSON string:

        {"as_of":1266853488,"trends":{"2010-02-22 15:44:48":[{"name":"#nowplaying","query":"#nowplaying"},{"name":"#musicmonday","query":"#musicmonday"},{"name":"#WeGoTogetherLike","query":"#WeGoTogetherLike"},{"name":"#imcurious","query":"#imcurious"},{"name":"#mm","query":"#mm"},{"name":"#HumanoidCityTour","query":"#HumanoidCityTour"},{"name":"#awesomeindianthings","query":"#awesomeindianthings"},{"name":"#officeformac","query":"#officeformac"},{"name":"Justin Bieber","query":"\"Justin Bieber\""},{"name":"National Margarita","query":"\"National Margarita\""}]}}

    My code:

        WebClient wc = new WebClient();
        wc.Credentials = new NetworkCredential(this.Auth.UserName, this.Auth.Password);
        string res = wc.DownloadString(new Uri(link)); //the download string gives me the above JSON string - no problems

        Trends trends = new Trends();
        Trends obj = Deserialise<Trends>(res);

        private T Deserialise<T>(string json)
        {
            T obj = Activator.CreateInstance<T>();
            using (MemoryStream ms = new MemoryStream(Encoding.Unicode.GetBytes(json)))
            {
                DataContractJsonSerializer serialiser = new DataContractJsonSerializer(obj.GetType());
                obj = (T)serialiser.ReadObject(ms);
                ms.Close();
                return obj;
            }
        }

        [DataContract]
        public class Trends
        {
            [DataMember(Name = "as_of")]
            public string AsOf { get; set; }

            //The As_OF value is returned - But how do I get the
            //multidimensional array of Names and Queries from the JSON here?
        }

    Read the article

  • How can I get back my privilege to create a new database in MySQL?

    - by Steven
    I cannot use MySQL normally. MySQL is on my local computer. Currently I have added skip-grant-tables to my.ini so that I can use MySQL at all, but I have no privilege to create a new database. My problem is tough; although I have asked related questions on SO, no answer has resolved it. I almost gave up, so I am lowering my expectations. I am developing a website, so I need to create databases and tables and operate on tables. You don't have to consider security. Is there a simple solution that can give me the privilege to create a new database? Maybe by adding some command to my.ini or something? You won't need to completely resolve my problem. Maybe after the development I will upload the database and tables to another server (the current database server is my personal computer, running Windows XP), so I can uninstall and reinstall MySQL then. The root of the problem is that I lack privileges.
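
    A sketch of the usual recovery sequence for this situation, assuming an older MySQL (5.x) where GRANT ... IDENTIFIED BY is still valid and assuming the goal is to restore full rights to root@localhost ('new_password' is a placeholder):

        -- while the server is still running with skip-grant-tables:
        FLUSH PRIVILEGES;   -- reload the grant tables so account-management statements work again
        GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'new_password' WITH GRANT OPTION;
        FLUSH PRIVILEGES;

    After that, skip-grant-tables can be removed from my.ini and the server restarted, and CREATE DATABASE should work when connected as root with the new password.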

    Read the article
