Search Results

Search found 30474 results on 1219 pages for 'relational database'.

Page 593 of 1219

  • Best way to code a webservice in weblogic?

    - by John
    I am new to WebLogic and J2EE. I need to build a web service that simply runs a query against the backend database (DB2 on z/OS) and returns the results. Being new to this I have a few questions:

    1) What is the best way to build the web service?
    2) How do I connect to the database from WebLogic?
    3) Is there a way to cache the data returned so that the next request for the same data is served from cache?

    I've googled for this but there seem to be many ways to handle it. I am looking for the approach that can handle a high volume of requests. Any links to sample code would be helpful. - Thanks

    Read the article

  • How to batch retrieve documents with mongoDB?

    - by edude05
    Hello everyone, I have an application that queries data from a mongoDB using the mongoDB C# driver, something like this:

        public void main()
        {
            foreach (int i in listOfKey)
            {
                list.add(getObjFromDb(i));
            }
        }

        public myObject getObjFromDb(int primaryKey)
        {
            document query = new document();
            query["primKey"] = primaryKey;
            document result = mongo["myDatabase"]["myCollection"].findOne(query);
            return parseObject(result);
        }

    On my local (development) machine, getting 100 objects this way takes less than a second. However, I recently moved the database to a server on the internet, and this query takes about 30 seconds to execute for the same number of objects. Furthermore, looking at the mongoDB log, it seems to open about 8-10 connections to the DB to perform this query. So what I'd like to do is query the database for an array of primaryKeys and get them all back at once, then do the parsing in a loop afterwards, using one connection if possible. How could I optimize my query to do so? Thanks, --Michael
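
    One way to cut this to a single round trip is to ask the server for all the keys at once with MongoDB's $in operator and iterate the resulting cursor. A minimal sketch in the style of the code above (hedged: the exact Find/Documents member names vary between versions of the C# driver, and listOfKey, myObject and parseObject are the question's own names):

        public List<myObject> getObjectsFromDb(int[] primaryKeys)
        {
            // One query: { primKey: { $in: [ ...keys... ] } }
            document inClause = new document();
            inClause["$in"] = primaryKeys;
            document query = new document();
            query["primKey"] = inClause;

            List<myObject> results = new List<myObject>();
            // A single cursor streams every match over one connection
            foreach (document doc in mongo["myDatabase"]["myCollection"].Find(query).Documents)
            {
                results.Add(parseObject(doc));
            }
            return results;
        }

    Parsing then happens client-side in the loop, and the connection pool is no longer hit once per key.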

    Read the article

  • PgJDBC: "no suitable driver found" when following tutorial, why?

    - by Celeritas
    I'm writing a Java program that queries a PostgreSQL database. I'm following this example and have trouble here:

        connection = DriverManager.getConnection(
            "jdbc:postgresql://127.0.0.1:5432/testdb", "mkyong", "123456");

    According to the JavaDoc for DriverManager, the first string is "a database url of the form jdbc:subprotocol:subname". When I connect to the server I type

        psql -h dataserv.abc.company.com -d app -U emp24

    and give the password qwe123 (for example's sake). What should the first argument of getConnection be? I've tried

        connection = DriverManager.getConnection(
            "jdbc:postgresql://dataserv.abc.company.com", "emp24", "qwe123");

    and get the runtime error: no suitable driver found. I've downloaded the JDBC4 PostgreSQL driver, version 9.2-1000.

    Read the article

  • Any active Bold for Delphi users?

    - by Roland Bengtsson
    What are you using as a persistence framework when programming in Delphi? If the application grows, it soon becomes really complicated to handle the model in SQL. Bold is a persistence framework for Delphi Win32 that really deserves more attention. I use it daily, and using OCL instead of SQL to get data from the database saves a lot of time and debugging. When the model is changed, Bold translates this to an SQL script and changes the database. EDIT: For those that are interested in Bold for Delphi, I have spent this evening creating a site on Google about it. I'm not a guru in HTML so the design is maybe not so exciting, but I want comments and reactions about the site. You can leave the comments in this thread or at the bottom of the subpage. And the address is... http://sites.google.com/site/boldfordelphi/

    Read the article

  • Can I Use ASP.NET Wizard Control to Insert Data into Multiple Tables?

    - by SidC
    Hello All, I have an ASP.NET 3.5 webforms project written in VB that involves a multi-table SQL Server insert. That is, I want the customer to input all their contact information, order details etc. into one control (thinking wizard control). Then, I want to call a stored procedure that does the insert into the respective database tables. I'm familiar and comfortable with the ASP.NET wizard control. However, all the examples I've seen in my searches pertain to inserting data into one table. Questions:

    1. Given a typical order process - customer information, order information, order details - should a wizard control be used to insert data into multiple database tables? If not, what controls/workflow do you suggest?
    2. I've set primary keys and indexes on my order details, orders and customers tables. Is there special stored procedure syntax to use to ensure that referential integrity is maintained through the insert process?

    Thanks, Sid
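
    The wizard control itself only collects the values; the multi-table work can live in one stored procedure called from the wizard's FinishButtonClick handler, which keeps the customer/order/order-detail inserts together on the SQL side. A hedged C# sketch of the calling side (the question's project is VB, and usp_InsertOrder, its parameters and the connection string are assumptions):

        using System;
        using System.Data;
        using System.Data.SqlClient;

        public static class OrderRepository
        {
            // Hypothetical sketch: dbo.usp_InsertOrder is assumed to insert the
            // customer, order and order-detail rows inside one transaction.
            public static void SaveOrder(string connStr, string customerName,
                                         string email, string orderDetailsXml)
            {
                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand("dbo.usp_InsertOrder", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@CustomerName", customerName);
                    cmd.Parameters.AddWithValue("@Email", email);
                    cmd.Parameters.AddWithValue("@OrderDate", DateTime.UtcNow);
                    cmd.Parameters.AddWithValue("@OrderDetails", orderDetailsXml);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }

    Inside the procedure, wrapping the INSERTs in BEGIN TRANSACTION / COMMIT and carrying SCOPE_IDENTITY() from the parent inserts into the child foreign keys is the usual way to keep referential integrity; no special syntax beyond that is needed.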

    Read the article

  • Entity Framework 4 - Delete Object

    - by GibboK
    I have 3 tables in my database: CmsMasterPages, CmsMasterPagesAdvSlots (pure junction table), CmsAdvSlots. Here is a picture of my EDM. I need to find all CmsAdvSlot objects connected with a CmsMasterPage (that part works in my code posted below), and DELETE the result (the CmsAdvSlots) from the database. My problem is that I am not able to DELETE these objects when I have found them. Error: The object cannot be deleted because it was not found in the ObjectStateManager.

        int findMasterPageId = Convert.ToInt32(uxMasterPagesListSelector.SelectedValue);
        CmsMasterPage myMasterPage = context.CmsMasterPages.FirstOrDefault(x => x.MasterPageId == findMasterPageId);
        var resultAdvSlots = myMasterPage.CmsAdvSlots; // It is working until here
        foreach (var toDeleteAdv in resultAdvSlots)
        {
            context.DeleteObject(myMasterPage.CmsAdvSlots.Any()); // ERROR HERE!!
            context.SaveChanges();
        }

    Any idea how to solve it? Thanks for your time! :-)
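
    DeleteObject expects the entity instance itself, but Any() returns a bool, which is why the ObjectStateManager can't find what it is being asked to delete; the collection also shouldn't be modified while it is being enumerated. A minimal sketch along those lines, reusing the context and navigation property names from the question:

        int findMasterPageId = Convert.ToInt32(uxMasterPagesListSelector.SelectedValue);
        CmsMasterPage myMasterPage = context.CmsMasterPages
            .FirstOrDefault(x => x.MasterPageId == findMasterPageId);

        if (myMasterPage != null)
        {
            // Copy the collection first so deleting entities doesn't
            // change it while it is being enumerated
            var slotsToDelete = myMasterPage.CmsAdvSlots.ToList();
            foreach (CmsAdvSlot slot in slotsToDelete)
            {
                context.DeleteObject(slot);   // pass the entity, not Any()
            }
            context.SaveChanges();            // one commit after the loop
        }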

    Read the article

  • Turkish characters are not displayed correctly

    - by tfeseas
    The MySQL database uses utf-8 encoding and the data are stored correctly. I use a SET NAMES utf8 query to make sure the data returned are utf-8 encoded. All variables from the database work fine as long as the header charset is utf-8, but the static HTML characters do not display properly. When I set the header charset to ISO-8859-9, the variables are displayed differently while the HTML characters work OK. Can anyone help me?

        <?php header('Content-Type: text/html; charset=ISO-8859-9'); ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head><title>noname</title>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    Read the article

  • What is the fastest way to get the persisted object after calling Hibernate's saveOrUpdate?

    - by Dave
    I'm using Hibernate 3.2.1.ga, Hibernate Annotations 3.2.1.ga, and hibernate-jpa-2.0-api. I can't upgrade at this time as I'm working with legacy code. I have this generic method for saving or updating objects:

        protected void saveOrUpdate(Object obj) {
            final Session session = sessionFactory.getCurrentSession();
            session.saveOrUpdate(obj);
        }

    You can assume that every argument "obj" will have a member field that is marked with the "@Id" annotation. I would like to change the return type to return an object that represents the persisted object in the database (meaning that if "obj" didn't contain an id before, what is returned is the database object with a populated id). What is the fastest way to do this given my versioning and generic constraints?

    Read the article

  • ODBC Storage Size

    - by dcp3450
    I'm pulling a lot of text from a MS SQL Server database. I'm not getting all the text (which includes some HTML). The text is stored perfectly in the database. However, when I run the query to get the data, it will only pull part of the text. I pull the data using odbc_exec and store it using $variable = odbc_result($runquery, "body"). If I display the content with odbc_result_all($runquery) I get part of the content. If I use echo $body; I get part of the content, then some garbage, and then part of the text from the beginning - a very strange response. Is there a size limit? Any ideas what I'm missing here?

    Read the article

  • Anonymous users support vs Google bot

    - by Andy
    I have a User class in my web app that represents a user currently logged in. Every time a user visits a page, a User instance is populated based on authentication data supplied in cookies. A User instance is created even if an anonymous user logs in - and a corresponding new record is created in the User table in the database. This approach allows me to save some state info for the current user regardless of its type. The problem with this approach, however, is the Google bot and other non-human web organisms crawling my pages. Every time a bot starts to walk around the site, thousands of useless records are created in the database, each of them used for only a single page. Question: what is the best trade-off? How do I support anonymous users, save their state, and not incur too much overhead because of cookieless bots?
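
    One common trade-off is to keep the anonymous User purely in memory until the visitor proves to be a cookie-capable, non-crawler client, and only then persist a row. A hedged ASP.NET-flavoured sketch in C# (the question doesn't name its framework, and the probe-cookie name is an assumption):

        using System.Web;

        public static class AnonymousUserPolicy
        {
            // Decide whether this request should get a persisted User row.
            public static bool ShouldPersist(HttpRequest request)
            {
                // ASP.NET's browser capabilities flag known crawlers
                if (request.Browser != null && request.Browser.Crawler)
                    return false;

                // First visit: no probe cookie yet, so stay in-memory and set
                // the cookie in the response; a real browser echoes it back
                // on the next request, a cookieless bot never does.
                if (request.Cookies["anon-probe"] == null)
                    return false;

                return true;
            }
        }

    With a check like this, bots still get a working (transient) User object for the duration of the request, but the User table only grows for clients that return cookies.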

    Read the article

  • In MYSQL is it better to have one big table or many smaller tables

    - by user307922
    Hi All, I am making a database of my clients' customers to send email promotions to. The database will cover about 12 of my clients, and each of them has an average of 2100 customers. I was wondering if it would be better to have a table in the db for each one of my clients that contains a list of their customers, or if I should just make one big table... The customers will be queried daily. I know it is a broad question but any advice would be appreciated. Cheers, Chuck

    Read the article

  • Sql Server XML-type column duplicate entry detection

    - by aaaa bbbb
    In SQL Server I am using an XML-type column to store a message. I do not want to store duplicate messages. I will only have a few messages per user. I am currently querying the table for these messages and converting the XML to a string in my C# code. I then compare the strings with what I am about to insert. Unfortunately, SQL Server pretty-prints the data in the XML-typed fields. What you store in the database is not necessarily exactly the same string as what you get back out later. It is functionally equivalent, but may have whitespace removed, etc. Is there an efficient way to compare an XML string that I am considering inserting with those that are already in the database? As an aside, if I detect a duplicate I need to delete the older message and then insert the replacement.
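
    Rather than comparing raw strings, one option is to parse both sides and compare them structurally, so the whitespace and formatting changes introduced by the XML column stop mattering. A minimal C# sketch using LINQ to XML (an assumption that .NET 3.5 or later is available; note that XNode.DeepEquals still treats attribute order as significant):

        using System.Xml.Linq;

        public static class XmlComparer
        {
            // True when the two fragments have the same element and attribute
            // content, regardless of pretty-printing or insignificant whitespace.
            public static bool AreEquivalent(string xmlA, string xmlB)
            {
                XDocument a = XDocument.Parse(xmlA);
                XDocument b = XDocument.Parse(xmlB);
                return XNode.DeepEquals(a.Root, b.Root);
            }
        }

    Another route, if round-tripping every stored message through C# proves too chatty, is to store a hash of a canonicalized form of the message alongside the XML column and compare (or uniquely constrain) the hashes in the database itself.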

    Read the article

  • SELECT Statement without duplicate rows on the multiple join tables

    - by theBo
    I have 4 tables built with JOINS and I would like to SELECT rows that are distinct on setsTbl.s_id, so the sets always show regardless of whether there is relational data against them or not. This is what I have at present; it displays data, but it does not return one distinct row per set:

        SELECT setsTbl.s_id, setsTbl.setName,
               userProfilesTbl.no + ' ' + userProfilesTbl.surname AS Name,
               trainingTbl.t_date, userAssessmentTbl.o_id
        FROM userProfilesTbl
        LEFT OUTER JOIN userAssessmentTbl ON userProfilesTbl.UserId = userAssessmentTbl.UserId
        FULL OUTER JOIN trainingTbl ON userAssessmentTbl.tt_id = trainingTbl.tt_id
        RIGHT OUTER JOIN setsTbl ON trainingTbl.s_id = setsTbl.s_id
        WHERE (userProfilesTbl.st_id = @st_id AND userProfilesTbl.sh_id = @sh_id)
          AND (DATEPART(yyyy, t_date) = @y_date)
           OR (userAssessmentTbl.o_id IS NULL)
        ORDER BY setName ASC, t_date ASC

    With this statement I get some of the rows (the ones with data against them), but as stated the s_id field does not come back distinct. The following inner-select statement works in part when used in SQL Query Analyzer and returns pretty much the data I require

        s_id  setName  Name       o_id
        ----- -------- ---------- -----
        1     100      Barnes     2
        2     100      Beardsley  3
        3     101      Aldridge   1
        4     102      Molby      2
        5     102      Whelan     3

    but not when used outside of that environment:

        select * from (
            SELECT userProfilesTbl.serviceNo + ' ' + userProfilesTbl.surname AS Name,
                   userProfilesTbl.st_id, userProfilesTbl.sh_id,
                   userAssessmentTbl.o_id, setsTbl.s_id, setsTbl.setName,
                   row_number() over (partition by setsTbl.s_id order by setsTbl.s_id) r
            FROM userProfilesTbl
            LEFT OUTER JOIN userAssessmentTbl ON userProfilesTbl.UserId = userAssessmentTbl.UserId
            FULL OUTER JOIN trainingTbl ON userAssessmentTbl.tt_id = trainingTbl.tt_id
            RIGHT OUTER JOIN setsTbl ON trainingTbl.s_id = setsTbl.s_id
        ) x
        where x.r = 1

    I'm not receiving any errors; it just doesn't display the data.

    Read the article

  • Does normalization really hurt performance in high traffic sites?

    - by Luke101
    I am designing a database and I would like to normalize it. In one query I will be joining about 30-40 tables. Will this hurt the website's performance if it ever becomes extremely popular? This will be the main query and it will be called 50% of the time. In the other queries I will be joining about 2 tables. I have a choice right now to normalize or not to normalize, but if the normalization becomes a problem in the future I may have to rewrite 40% of the software and it could take me a long time. Does normalization really hurt in this case? Should I denormalize now while I have the time?

    Read the article

  • How to configure Remote BLOB Storage (RBS) with Microsoft Dynamics CRM 4.0?

    - by jk
    Hi, we have a working site for Dynamics CRM 4.0 and in it we are storing images in the database. Now the database is growing very fast and the server is dying. I want to enable Remote BLOB Storage with Dynamics CRM 4.0. For that I tried to install RBS for testing, but everywhere it is configured with SharePoint 2010, not with Dynamics CRM. Does anybody know how to install and configure it with Dynamics CRM 4.0? Does RBS work with the Standard Edition of SQL Server 2008? I followed the following guide to install it, but it is for SharePoint: http://technet.microsoft.com/en-us/library/ee663474.aspx Any help is appreciated. Thanks

    Read the article

  • Filemaker XSL 20sec Query Latency

    - by Ian Wetherbee
    I have an ASP frontend that loads data from a Filemaker database using XSL to perform simple queries. The problem is that the first page load takes 20 seconds +/- 200ms, then the next few page refreshes within a minute of the first request take <200ms, then the cycle starts over again. Each page load makes only 2 XSL queries, and they execute fast after the first page load, so what is causing the delay on the first page load? I have caching turned up with a 100% hit rate, and number of connections at 100. I've tried with XSL database sessions on and off, and session time anywhere from 1 to 60 minutes without any changes. The XSL loads from ASP use a GET request and add a Basic Authorization header to authenticate each time. During fast page requests, the fmserver.exe and fmswpc.exe processes don't even flinch, but during a 20 second holdup I see fmserver jump to 30% CPU and a 3mb I/O read a few seconds into the request, and occasionally fmswpc jump to 60% CPU.

    Read the article

  • I want to retrieve some information based on Caller ID

    - by Hassan Al-Jeshi
    Hello, my friend has a real estate company that receives a lot of phone calls every day. He wants a solution such that when somebody calls his company, the operator sees all the information about the person who is calling, based on the database he has right now and the caller ID. Is there ready-made software or a solution that can do the job? Since I'm a software engineer myself, I wouldn't mind developing something from scratch or building on a ready-made system (with a team, of course), but I need some direction on how to start. Notes: 1) cost is not an issue; 2) the customer database is there, but we don't mind replacing it with one in a format that suits the new solution. Best Regards,
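
    Whatever telephony product ends up delivering the caller ID (a CTI/TAPI integration, a VoIP PBX event, or a boxed CRM screen-pop feature), the application side usually reduces to a keyed lookup of the customer record by the incoming number. A hedged C# sketch of that lookup (table, column and connection-string details are assumptions; numbers are normalized before matching):

        using System.Data;
        using System.Data.SqlClient;
        using System.Text.RegularExpressions;

        public static class CallerLookup
        {
            // Strip spaces, dashes and brackets so "+44 (0)20 1234" style input
            // can match the stored number format.
            private static string Normalize(string number)
            {
                return Regex.Replace(number ?? "", @"[^\d+]", "");
            }

            public static DataTable FindCustomerByCallerId(string connStr, string callerId)
            {
                var table = new DataTable();
                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(
                    "SELECT Name, Address, Notes FROM Customers WHERE PhoneNormalized = @phone", conn))
                {
                    cmd.Parameters.AddWithValue("@phone", Normalize(callerId));
                    using (var adapter = new SqlDataAdapter(cmd))
                    {
                        adapter.Fill(table);   // zero rows simply means an unknown caller
                    }
                }
                return table;
            }
        }

    The operator-facing screen pop then just binds this result to whatever UI the chosen call-handling product exposes.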

    Read the article

  • ASP.NET web services leak memory when (de)serializing disposable objects?

    - by Serilla
    In the following two cases, if Customer is disposable (implementing IDisposable), I believe it will not be disposed by ASP.NET, potentially being the cause of a memory leak:

        [WebMethod]
        public Customer FetchCustomer(int id)
        {
            return new Customer(id);
        }

        [WebMethod]
        public void SaveCustomer(Customer value)
        {
            // save it
        }

    This flaw applies to any IDisposable object. So returning a DataSet from an ASP.NET web service, for example, will also result in a memory leak - the DataSet will not be disposed. In my case, Customer opened a database connection which was cleaned up in Dispose - except Dispose was never called, resulting in loads of unclosed database connections. I realise there is a whole bunch of bad practices being followed here (it's only an example anyway), but the point is that ASP.NET - the (de)serializer - is responsible for disposing these objects, so why doesn't it? This is an issue I was aware of for a while but never got to the bottom of. I'm hoping somebody can confirm what I have found, and perhaps explain if there is a way of dealing with it.
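
    One common way to side-step the problem is to keep every disposable object inside the web method and hand the serializer a plain, non-disposable DTO, so nothing that owns a connection ever relies on the framework calling Dispose. A hedged sketch (the Customer stand-in and CustomerDto shape here are assumptions, not the question's actual types):

        using System;
        using System.Web.Services;

        // Stand-in for the question's disposable Customer (assumption).
        public class Customer : IDisposable
        {
            public int Id { get; private set; }
            public string Name { get; private set; }
            public Customer(int id) { Id = id; Name = "..."; /* open connection, load data */ }
            public void Dispose() { /* close the database connection */ }
        }

        // Plain data holder: nothing to dispose, safe for the serializer to own.
        public class CustomerDto
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class CustomerService : WebService
        {
            [WebMethod]
            public CustomerDto FetchCustomer(int id)
            {
                // The disposable Customer lives and dies inside the method,
                // so its connection is always released here.
                using (var customer = new Customer(id))
                {
                    return new CustomerDto { Id = customer.Id, Name = customer.Name };
                }
            }
        }

    The same pattern applies on the way in: accept a DTO parameter and construct/dispose the real Customer inside SaveCustomer.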

    Read the article

  • db2 jdbc driver does not release table locks

    - by as
    Situation: we have a web service running on Tomcat accessing a DB2 database on AS400; we are using the JTOpen drivers for JNDI connections handled by Tomcat. For handling transactions and access to the database we are using Spring. For each select, the system takes a JDBC connection from JNDI (i.e. from the connection pool), does the selection, and at the end closes the ResultSet and Statement and releases the Connection, in that order. That works fine; the shared lock on the table disappears. When we do an update the same way as we did the select (except for the ResultSet object, which we don't have in that situation), after releasing the Connection to JNDI the lock on the table stays. If we put maxIdle=0 for the number of connections in the JNDI configuration, this problem disappears, but this degrades performance; we have about 100 users online on that service and need a few connections to stay alive in the pool. What do you suggest?

    Read the article

  • How to write custom SQLite functions in Javascript inside a Webkit browser?

    - by Jay Godse
    I have just learned how to use the SQLite database for local storage in a Webkit web browser (e.g. Google Chrome or Apple Safari) using the Javascript API. For example the "Sticky Notes" application. However, I know that SQLite has a function called sqlite_create_function() that lets you add custom functions to your instance of SQLite on the fly which can then be used inside SQL queries. This function is described at sqlite.org. I also know that you can call an equivalent of this API in Ruby as described here. QUESTION: Can anybody show me how to do this in Javascript - i.e. write a custom function in Javascript that can be bound into the SQLite database at run time to be called by the SQLite engine, and all inside a Webkit browser?

    Read the article

  • SQL Server 2000 - Filter by String Length

    - by user208662
    Hello, I have a database on a SQL Server 2000 server. This database has a table called "Person" that has a field called "FullName", which is a VARCHAR(100). I am trying to write a query that will give me all records that have a name. Records that do not have a name have a FullName value of either null or an empty string. How do I get all of the Person records that have a FullName? In other words, I want to ignore the records that do not have a FullName. Currently I am trying the following:

        SELECT *
        FROM Person p
        WHERE p.FullName IS NOT NULL
          AND LEN(p.FullName) > 0

    Thank you

    Read the article

  • Entity Framework: This property descriptor does not support the SetValue method

    - by Gayan
    Hello guys, below are my entities, which I have created using Entity Framework.

        retailer: id, name, childs (navigation)
        generated database schema: [Id] [int] IDENTITY(1,1) NOT NULL, [Name] nvarchar NOT NULL

        childern: id, name, RETAILER (navigation)
        generated database schema: [Id] [int] IDENTITY(1,1) NOT NULL, [name] nvarchar NOT NULL, [Retailer_Id] [int] NOT NULL

    As you can see, in the above model the relationship is that 1 retailer can have 0 or 1 child. My problem is that when I create a new child and set its retailer navigation property to a retailer entity, it throws the following exception. How do I solve it?

        Error while setting property 'retailer': 'This property descriptor does not support the SetValue method.'
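
    It's hard to be definitive without the code that performs the assignment, but one way to avoid setting the navigation property through a property descriptor (for example via data binding) is to wire the relationship up directly on the entities in code and let the ObjectContext fix it up. A hedged sketch reusing the entity and property names above (the context variable, the Retailers entity set and the generated AddToChildern method are assumptions about the generated model):

        // Hedged sketch: entity set and AddTo* names depend on the generated model.
        var retailer = context.Retailers.FirstOrDefault(r => r.Id == retailerId);

        var child = new childern { name = "New child" };
        child.RETAILER = retailer;       // set the reference on the dependent side

        context.AddToChildern(child);    // assumption: default generated AddTo* method
        context.SaveChanges();

    Setting the reference in code populates Retailer_Id on save without going through the property descriptor at all.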

    Read the article

  • Entering Complex Data into Access

    - by DataMakesMeCrazy
    Fairly new to Access and trying to do something that seems simple but may be very complex. I want to create a database of projects; each project has several phases (i.e. proposal, marketing, etc.) and multiple employees can work on a single project, e.g. Bob and John are working on project number 102. From here, I would like to enter the forecasted start and end dates for each phase of the project, and enter the forecasted number of hours each employee will be allowed to work on that phase of that project, i.e.

        Project - Employee - Phase     - Start     - End       - (list weeks)
        102     - Bob      - Marketing - 12-May-10 - 21-May-10 - 3 - 5   (3 hours the first week, 5 hours the second)

    and so on. Basically, would all this data be in one table, or several? And can Access dynamically show the weeks between the start and end dates so that I can input the hours? I feel this database will become severely complicated :S Thanks, J

    Read the article

  • Does Postgresql varchar count using unicode character length or ASCII character length?

    - by bennylope
    I tried importing a database dump from a SQL file and the insert failed when inserting the string Mér into a field defined as varying(3). I didn't capture the exact error, but it pointed to that specific value with the constraint of varying(3). Given that I considered this unimportant to what I was doing at the time, I just changed the value to Mer, it worked, and I moved on. Does a varying field's length limit take the length of the byte string into account? What really boggles my mind is that this was dumped from another PostgreSQL database, so it doesn't make sense how a constraint could allow the value to be written initially.

    Read the article
