Search Results

Search found 31293 results on 1252 pages for 'database agnostic'.

Page 689/1252 | < Previous Page | 685 686 687 688 689 690 691 692 693 694 695 696  | Next Page >

  • Multithreading in Java

    - by Vijay Selvaraj
    I'm working with core Java and IBM WebSphere MQ 6.0. We have a standalone module, say DBComponent, that hits the database and fetches a result set based on a runtime query. The query is passed to the application via MQ messaging. We have a trigger configured for the queue which invokes the DBComponent whenever a message is available. The DBComponent consumes the message, constructs the query, and returns the result set to another queue. Throughout this process we use log4j to write statements to a log file for auditing, and connections to the database are pooled using Apache pool. I am trying to check whether the log messages are logged correctly using a sample program. The program places the input message on the queue and then checks for the entries in the log file. The trigger invocation is expected to complete before I check the log file, but every time, my check executes first, so the check fails. Even introducing a Thread.sleep(time) doesn't solve the problem. How can I make my check wait until the trigger operation completes? Any suggestion would be helpful.
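
    One way to avoid racing the asynchronous trigger is to poll the log file for the expected entry against a deadline instead of using a fixed sleep. A minimal sketch (the file path, expected text, and poll interval are placeholders):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        public class LogWaiter {
            // Polls the log file until a line containing the expected text
            // appears or the timeout elapses; true means it appeared in time.
            static boolean waitForLogMessage(String logPath, String expected, long timeoutMillis)
                    throws IOException, InterruptedException {
                long deadline = System.currentTimeMillis() + timeoutMillis;
                while (System.currentTimeMillis() < deadline) {
                    BufferedReader reader = new BufferedReader(new FileReader(logPath));
                    try {
                        String line;
                        while ((line = reader.readLine()) != null) {
                            if (line.indexOf(expected) >= 0) {
                                return true;
                            }
                        }
                    } finally {
                        reader.close();
                    }
                    Thread.sleep(250); // back off briefly between checks
                }
                return false;
            }
        }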

    Read the article

  • Why use INCLUDE in a SQL index

    - by StarLite
    I recently encountered an index in a database I maintain that was of the form: CREATE INDEX [IX_Foo] ON [Foo] ( Id ASC ) INCLUDE ( SubId ). In this particular case, the performance problem I was encountering (a slow SELECT filtering on both Id and SubId) could be fixed by simply moving the SubId column into the index proper rather than leaving it as an included column. This got me thinking, however, that I don't understand the reasoning behind included columns at all when, generally, they could simply be part of the index itself. Even if I don't particularly need a column in the index key, is there any downside to having it there rather than merely included? After some research, I am aware that there are a number of restrictions on what can go into an indexed column (the maximum width of the index, and some column types that can't be indexed, like 'image'). In those cases I can see that you would be forced to carry the column as included data in the index pages. The only other thing I can think of is that if there are updates on SubId, the row will not need to be relocated within the index if the column is only included (though the stored value would still need to be changed). Is there something else I'm missing? I'm considering going through the other indexes in the database and shifting included columns into the index proper where possible. Would this be a mistake? I'm primarily interested in MS SQL Server, but information on other DB engines is welcome too.
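
    For reference, a sketch of the two alternatives being compared (SQL Server syntax):

        -- SubId as part of the index key: rows are ordered by (Id, SubId),
        -- so the index can seek and range-scan on both columns, but key
        -- updates to SubId may move rows within the B-tree.
        CREATE INDEX IX_Foo_Key ON Foo (Id ASC, SubId ASC);

        -- SubId as an included column: stored only in the leaf pages and
        -- not sorted, so it can cover a query but not support a seek.
        CREATE INDEX IX_Foo_Include ON Foo (Id ASC) INCLUDE (SubId);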

    Read the article

  • MySQL: can't access root account or reset the password with mysqladmin

    - by glumptious
    So if I type mysql -u root I'm supposedly logged in, but upon trying to create or access a database I get this lovely error: ERROR 1044 (42000): Access denied for user ''@'localhost' to database 'test1'. I haven't the foggiest idea why, after logging in as root, it's trying to access DBs as ''@'localhost', and it's driving me a bit crazy right now. Possibly related: when I try to set the root password I get the error mysqladmin: Can't turn off logging; error: 'Access denied; you need (at least one of) the SUPER privilege(s) for this operation'. I've tried removing mysql-server by running apt-get purge mysql-server and then reinstalling, with no luck. This is running Ubuntu Server 12.10 64-bit, and MySQL is indeed running. --Edit-- I wonder if perhaps there is no root user. So I tried starting MySQL with --skip-grant-tables and then creating the root user, but I'm given this: ERROR 1290 (HY000): The MySQL server is running with the --skip-grant-tables option so it cannot execute this statement. Fun fun fun fun fun.
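
    The ''@'localhost' in the error suggests the root account really is missing, so MySQL fell back to the anonymous user. One commonly suggested recovery, sketched here with a placeholder password: while the server is running with --skip-grant-tables, issue FLUSH PRIVILEGES first so account-management statements become available, then recreate root:

        -- Run in a mysql client against a server started with --skip-grant-tables.
        FLUSH PRIVILEGES;  -- loads the grant tables, lifting the 1290 restriction

        CREATE USER 'root'@'localhost' IDENTIFIED BY 'new-password';
        GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
        FLUSH PRIVILEGES;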

    Read the article

  • Problem with MySQL character set & GWT

    - by Ehsan Khodarahmi
    Hi, I have a SmartGWT application which interacts with a MySQL database using RPC services. Think of it as a simple form with a textbox and two buttons, save and load. The collation of my database, tables, and all fields is utf8_persian_ci. All Java source files and the module HTML and XML files are saved with the UTF-8 character set, and the module HTML file containing my form has this meta tag: <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />. The application works correctly in Eclipse development mode and also on my local Tomcat server. Then I put it on a remote server (I compress it into a war file using jar.exe with the -cvf flag and upload it through my server's Plesk control panel). In this mode, when I load a record from any table, the data loads into my form with no problem, but when I save some data (in Persian), MySQL just writes question marks (?) into the corresponding table fields. Any idea?
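
    Question marks on the way in usually mean the JDBC connection itself is not talking UTF-8, so the server transliterates on insert. A sketch of forcing it on the connection URL (host, database name, and credentials are placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;

        public class Utf8Connect {
            public static Connection open() throws SQLException {
                // useUnicode/characterEncoding make Connector/J send UTF-8 on
                // the wire instead of falling back to the platform default.
                String url = "jdbc:mysql://localhost:3306/mydb"
                           + "?useUnicode=true&characterEncoding=UTF-8";
                return DriverManager.getConnection(url, "user", "password");
            }
        }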

    Read the article

  • JPA @ManyToMany on only one side?

    - by Ethan Leroy
    I am trying to refresh the @ManyToMany relation but it gets cleared instead... My Project class looks like this:

        @Entity
        public class Project {
            ...
            @ManyToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
            @JoinTable(name = "PROJECT_USER",
                joinColumns = @JoinColumn(name = "PROJECT_ID", referencedColumnName = "ID"),
                inverseJoinColumns = @JoinColumn(name = "USER_ID", referencedColumnName = "ID"))
            private Collection<User> users;
            ...
        }

    But I don't have - and I don't want - the collection of Projects in the User entity. When I look at the generated database tables, they look good: they contain all the columns and constraints (primary/foreign keys). But when I persist a Project that has a list of Users (users which already exist in the database), the mapping table gets updated - yet when I refresh the project afterwards, the list of Users is cleared. For better understanding:

        Project project = ...; // new project with users that are available in the db
        System.out.println(project.getUsers().size()); // prints 5
        em.persist(project);
        System.out.println(project.getUsers().size()); // prints 5
        em.refresh(project);
        System.out.println(project.getUsers().size()); // prints 0

    So, how can I refresh the relation between User and Project?
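
    One thing worth ruling out, sketched below under the assumption of a resource-local EntityManager em: persist() only schedules the join-table inserts, so if refresh() runs before they are flushed, the re-read finds no PROJECT_USER rows and empties the collection.

        em.getTransaction().begin();
        em.persist(project);
        em.flush();          // write the PROJECT_USER rows now
        em.refresh(project); // re-read: the users collection should survive
        em.getTransaction().commit();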

    Read the article

  • Efficient splitting of elements in a field

    - by Gary
    I have a field in a text file exported from a database. The field contains addresses, but sometimes they are quite long and the database allows them to span multiple lines. On export, each newline character gets replaced with a dollar sign, like this: first part of very long address$second part of very long address$third part of very long address. Not every address has multiple lines, and no address contains more than three; the length of each line is variable. I'm massaging the data for import into MS Access, which is used for a mail merge. I want to split the field on the $ sign if it's there, but if the field only contains one line, I want to set my two extra output fields to zero-length strings so that I don't wind up with blank lines in the address when it gets printed. I have an awk file that works correctly on all the other data in the text file, but I need to get this last bit working. I tried the code below (with gawk on Windows); the multi-statement if body needs the braces shown here, otherwise it produces a syntax error at the else. Aside from that, I'm not sure this is a good way to do what I want.

        BEGIN { FS = "|" }
        $1 != "HEADER" {
            if ($6 ~ /\$/) {
                split($6, arr, "$")
                address = arr[1]
                addresstwo = arr[2]
                addressthree = arr[3]
            } else {
                address = $6
                addresstwo = ""
                addressthree = ""
            }
            addressLength = length(address)
            addressTwoLength = length(addresstwo)
            addressThreeLength = length(addressthree)
            printf("%*s\t%*s\t%*s\n", addressLength, address, addressTwoLength, addresstwo, addressThreeLength, addressthree)
        }

    Read the article

  • PHP hashtable array optimisation

    - by hiprakhar
    I made a PHP app which was taking about ~0.0070 sec to execute. Then I added a hashtable array with about 2000 values, and suddenly the execution time has gone up to ~0.0700 sec - almost 10 times the previous value. I tried commenting out the part where I search inside the hashtable array (with the array still defined), and the execution time remains about ~0.0500 sec. The array is something like:

        $subjectinfo = array(
            'TPT753' => 'Industrial Training',
            'TPT801' => 'High Polymeric Engineering',
            'TPT802' => 'Corrosion Engineering',
            'TPT803' => 'Decorative, Industrial And High Performance Coatings',
            'TPT851' => 'Project');

    Is there any way to optimize this part? I cannot use a database because I am running this app on Google App Engine, which does not yet support a JDO database for PHP. Some more code from the app:

        function getsubjectinfo($name) {
            $subjectinfo = array(
                'TPT753' => 'Industrial Training',
                'TPT801' => 'High Polymeric Engineering',
                'TPT802' => 'Corrosion Engineering',
                'TPT803' => 'Decorative, Industrial And High Performance Coatings',
                'TPT851' => 'Project');
            $name = str_replace("-", "", $name);
            $name = str_replace(" ", "", $name);
            if (isset($subjectinfo["$name"]))
                return "(".$subjectinfo["$name"].")";
            else
                return "";
        }

    I then use the following statement 2-3 times in the app:

        echo $key." ".$this->getsubjectinfo($key)
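
    Since the function rebuilds the array literal on every call, one possible micro-optimization - a sketch, assuming the lookup table itself is the hot spot - is to make it a static local so it is initialized only once per request:

        function getsubjectinfo($name) {
            // Initialized on the first call only; later calls reuse it
            // instead of re-allocating a ~2000-entry array each time.
            static $subjectinfo = array(
                'TPT753' => 'Industrial Training',
                'TPT801' => 'High Polymeric Engineering',
                'TPT802' => 'Corrosion Engineering',
                'TPT803' => 'Decorative, Industrial And High Performance Coatings',
                'TPT851' => 'Project');
            $name = str_replace(array('-', ' '), '', $name);
            return isset($subjectinfo[$name]) ? '(' . $subjectinfo[$name] . ')' : '';
        }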

    Read the article

  • Alter Table Filestream runs even if IF statement is false

    - by Allison
    I am writing a script to update a database to add FILESTREAM capability. The script needs to be able to run multiple times without erroring. This is what I currently have:

        IF ((select count(*) from sys.columns a
             inner join sys.objects b on a.object_id = b.object_id
             inner join sys.default_constraints c on c.parent_object_id = a.object_id
             where a.name = 'evidence_data' and b.name = 'evidence'
               and c.name = 'DF__evidence_evidence_data') = 0)
        begin
            ALTER TABLE evidence SET ( FILESTREAM_ON = AnalysisFSGroup )
            ALTER TABLE evidence ALTER COLUMN id ADD ROWGUIDCOL;
        end
        GO

    The first time I run this against the database, it works fine. The second time, when the IF statement should be false, it throws an error saying "Cannot add FILESTREAM filegroup or partition scheme since table 'evidence' has a FILESTREAM filegroup or partition scheme already." If I put a simple SELECT inside the IF block and take out the ALTER TABLE ... FILESTREAM_ON line, it behaves correctly and does not enter the block. So essentially it is always running the ALTER TABLE ... FILESTREAM_ON statement, even when the IF condition is false. Any thoughts or suggestions would be great. Thanks.
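
    A common workaround for DDL that misbehaves inside a conditional is to push it into dynamic SQL, so it is compiled only when the branch actually executes - a sketch:

        IF ((select count(*) from sys.columns a
             inner join sys.objects b on a.object_id = b.object_id
             inner join sys.default_constraints c on c.parent_object_id = a.object_id
             where a.name = 'evidence_data' and b.name = 'evidence'
               and c.name = 'DF__evidence_evidence_data') = 0)
        begin
            -- Deferred compilation: the ALTER is parsed only if we get here.
            EXEC('ALTER TABLE evidence SET ( FILESTREAM_ON = AnalysisFSGroup )');
            EXEC('ALTER TABLE evidence ALTER COLUMN id ADD ROWGUIDCOL');
        end
        GO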

    Read the article

  • Does HttpListener work well on Mono?

    - by billpg
    Hi everyone. I'm looking to write a small web service to run on a small Linux box. I prefer to code in C#, so I'm looking to use Mono. I don't want the overhead of running a full web server or Mono's version of ASP.NET. I'm thinking of having a single process with a thread dealing with each client connection, and shared memory between threads instead of a database. I've read a little on Microsoft's version of HttpListener and how it works with the Http.sys driver. Alas, Mono's documentation on this class is just the automated class interface, with no discussion of how it works under the hood. (Linux doesn't have Http.sys, so I imagine it's implemented substantially differently.) Could anyone point me towards some resources discussing this module, please? Many thanks, Bill, billpg.com. (A little background to my question, for the interested: some time ago I asked this question, interested in keeping a long conversation open with lots of back-and-forth. I had settled on designing my own ad-hoc protocol, but the people I spoke to really wanted a REST interface, even at the cost of the "Okay, send your command now" signal. So I wondered about running ASP.NET on a Linux/Mono server, but stumbled upon HttpListener. This seemed ideal, as each "conversation" could run in a separate thread: the thread that calls HttpListener in a loop can look up which thread each incoming connection is for and pass the reference to that thread. The alternative, for an ASP.NET-driven service, would be to have the ASPX code pick up the state from a database and write back the new state when it finishes. Yes, it would work, but that's a lot of overhead.)
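
    For concreteness, a minimal sketch of the kind of accept loop being described - it compiles against the standard HttpListener API, though whether Mono's implementation handles it efficiently is exactly the open question (the prefix is a placeholder):

        using System;
        using System.Net;
        using System.Text;
        using System.Threading;

        class MiniServer {
            static void Main() {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://*:8080/");
                listener.Start();
                while (true) {
                    HttpListenerContext ctx = listener.GetContext(); // blocks
                    // Hand each connection to its own thread, as in the design above.
                    new Thread(() => Handle(ctx)).Start();
                }
            }

            static void Handle(HttpListenerContext ctx) {
                byte[] body = Encoding.UTF8.GetBytes("hello");
                ctx.Response.ContentLength64 = body.Length;
                ctx.Response.OutputStream.Write(body, 0, body.Length);
                ctx.Response.Close();
            }
        }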

    Read the article

  • When developing a Microsoft Office Add-In (for Word), is it possible to store hidden metadata information?

    - by leftend
    I am trying to store metadata (basically a unique id) along with each cell of a table in a Word document. Currently, for the add-in I'm developing, I query the database and build a table inside the Word document from the retrieved data. I want to be able to save any of the user's edits to the document and persist them back to the database. My initial thought was to store a unique id along with each cell in the table so that I could tell which records to update. I would also like to store some sort of "isChanged" flag within each cell so that I could tell which cells were changed. I found that I could add the needed information to the "ID" property of the cell - however, that information was not retained when the user saved the document, closed it, and re-opened it. I then tried storing the data by adding it to the "Fields" collection, but that did not work and threw a runtime error. Here is the code that I tried:

        object t1 = Word.WdFieldType.wdFieldEmpty;
        object val = "myValue: " + counter;
        object preserveFormatting = true;
        tbl.Cell(i, j).Range.Fields.Add(tbl.Cell(i, j).Range, ref t1, ref val, ref preserveFormatting);

    This compiles fine but throws the runtime error "This command is not available". So, is this possible at all? Or am I headed in the wrong direction? Thanks in advance.
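
    One approach sometimes suggested for Word 2007 and later is to wrap each cell's range in a content control and keep the id in its Tag, which does survive save/close/reopen. A sketch, with the exact Add overload depending on the interop version in use:

        // Attach a hidden identifier to a table cell via a content control.
        Word.ContentControl cc = tbl.Cell(i, j).Range.ContentControls.Add(
            Word.WdContentControlType.wdContentControlRichText);
        cc.Tag = "recordId:" + counter;  // Tag persists with the document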

    Read the article

  • Creating LINQ to SQL Data Models' Data Contexts with ASP.NET MVC

    - by Maxim Z.
    I'm just getting started with ASP.NET MVC, mostly by reading ScottGu's tutorial. To create my database connections, I followed the steps he outlined: create a LINQ-to-SQL .dbml model, add the database tables through the Server Explorer, and finally create a DataContext class. That last part is where I'm stuck. In this class, I'm trying to create methods that work with the exposed data. Following the example in the tutorial, I created this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;

        namespace MySite.Models
        {
            public partial class MyDataContext
            {
                public List<Post> GetPosts()
                {
                    return Posts.ToList();
                }

                public Post GetPostById(int id)
                {
                    return Posts.Single(p => p.ID == id);
                }
            }
        }

    As you can see, I'm trying to use my Post data table. However, it doesn't recognize the "Posts" part of my code. What am I doing wrong? I have a feeling that my problem is related to not adding the data tables correctly, but I'm not sure. Thanks in advance.
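
    For what it's worth, the Posts property only exists if the Post table was actually dragged onto the .dbml designer surface and the partial class above exactly matches the generated class's name and namespace. The generated designer file should contain something roughly like this sketch (names are illustrative):

        // In MyDataContext.designer.cs (generated):
        public partial class MyDataContext : System.Data.Linq.DataContext
        {
            // One Table<T> property per table added in the designer.
            public System.Data.Linq.Table<Post> Posts
            {
                get { return this.GetTable<Post>(); }
            }
        }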

    Read the article

  • Java connectivity with MySQL error

    - by abson
    I just started with connectivity and tried this example. I have installed the necessary software and also copied the jar file into the /ext folder, yet the code below fails:

        import java.sql.*;

        public class Jdbc00 {
            public static void main(String args[]) {
                try {
                    Statement stmt;
                    Class.forName("com.mysql.jdbc.Driver");
                    String url = "jdbc:mysql://localhost:3306/mysql";
                    Connection con = DriverManager.getConnection(url, "root", "root");
                    // Display URL and connection information
                    System.out.println("URL: " + url);
                    System.out.println("Connection: " + con);
                    // Get a Statement object
                    stmt = con.createStatement();
                    // Create the new database
                    stmt.executeUpdate("CREATE DATABASE JunkDB");
                    stmt.executeUpdate("GRANT SELECT,INSERT,UPDATE,DELETE," +
                        "CREATE,DROP " +
                        "ON JunkDB.* TO 'auser'@'localhost' " +
                        "IDENTIFIED BY 'drowssap';");
                    con.close();
                } catch (Exception e) {
                    e.printStackTrace();
                } // end catch
            } // end main
        } // end class Jdbc00

    But it gave the following error:

        D:\Java12\Explore>java Jdbc00
        java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
            at java.net.URLClassLoader$1.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at java.lang.Class.forName0(Native Method)
            at java.lang.Class.forName(Unknown Source)
            at Jdbc11.main(Jdbc00.java:11)

    Could anyone please guide me in correcting this?
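
    The ClassNotFoundException means the JVM that ran the program never saw the Connector/J jar; dropping jars into lib/ext is unreliable. A sketch of running with the jar on the classpath explicitly (the jar name and version are examples; use the file you actually downloaded):

        REM Windows: note the ; classpath separator and the . for the current dir
        javac Jdbc00.java
        java -cp .;mysql-connector-java-5.1.6-bin.jar Jdbc00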

    Read the article

  • Need help tuning a SQL query

    - by Viper
    Hello, I need some help to speed up this SQL statement. The execution time is around 125 ms, and during the runtime of my program this SQL (more precisely: equally structured SQLs against different tables) will be called 300,000 times. The average row count in the tables is around 10,000,000 rows, and new rows (updates/inserts) are added with a timestamp each day. The data of interest to this particular export program lies in the last 1-3 days; maybe that is helpful for choosing an index. The data I need is the currently valid row for a given id plus the forerunner row, to pick up the updates (if they exist). We use an Oracle 11g database and the .NET Framework 3.5. The SQL statement to tune:

        select ID_SOMETHING,     -- Number(12)
               ID_CONTRIBUTOR,   -- Char(4 Byte)
               DATE_VALID_FROM,  -- DATE
               DATE_VALID_TO     -- DATE
          from TBL_SOMETHING XID
         where ID_SOMETHING = :ID_INSTRUMENT
           and ID_CONTRIBUTOR = :ID_CONTRIBUTOR
           and DATE_VALID_FROM <= :EXPORT_DATE
           and DATE_VALID_TO >= :EXPORT_DATE
         order by DATE_VALID_FROM asc;

    I uploaded the current explain plan for this query here. I'm not a database expert, so I don't know which index type would fit this requirement best; I have seen that there are many possible index types. Maybe Oracle optimizer hints would help, too. Does anyone have a good idea for tuning this SQL, or can point me in the right direction?
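
    As a starting point - a sketch only, to be verified against the explain plan - a composite B-tree index with the equality columns first lets Oracle seek straight to the few candidate rows before narrowing the date range:

        -- Equality predicates first, then the range columns.
        CREATE INDEX IX_TBL_SOMETHING_LOOKUP
            ON TBL_SOMETHING (ID_SOMETHING, ID_CONTRIBUTOR, DATE_VALID_FROM, DATE_VALID_TO);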

    Read the article

  • In what order does Dojo handle ajax requests?

    - by mm2887
    Hi, using a form in a dialog box, I save the form to my database with Dojo from a JSP. After that request completes via dojo.xhrPost(), I send another request to update a dropdown box that should include the form object that was just saved - but for some reason the request to update the dropdown executes before the form is saved in the database, even though the form save is called first. Using Firebug I can see that the getLocations() request completes before the sendForm() request. This is the code (wired to the "Add Team" action):

        sendForm("ajaxResult1", "addTeamForm");
        dijit.byId("addTeamDialog").hide();
        getLocations("locationsTeam");

        function sendForm(target, form) {
            var targetNode = dojo.byId(target);
            var xhrArgs = {
                form: form,
                url: "ajax",
                handleAs: "text",
                load: function(data) {
                    targetNode.innerHTML = data;
                },
                error: function(error) {
                    targetNode.innerHTML = "An unexpected error occurred: " + error;
                }
            }
            // Call the asynchronous xhrPost
            var deferred = dojo.xhrPost(xhrArgs);
        }

        function getLocations(id) {
            var targetNode = dojo.byId(id);
            var xhrArgs = {
                url: "ajax",
                handleAs: "text",
                content: { location: "yes" },
                load: function(data) {
                    targetNode.innerHTML = data;
                },
                error: function(error) {
                    targetNode.innerHTML = "An unexpected error occurred: " + error;
                }
            }
            // Call the asynchronous xhrGet
            var deferred = dojo.xhrGet(xhrArgs);
        }

    Why is this happening? Is there a way to make the first request complete before the second executes? To narrow down the possibilities I tried setting the cache property in xhrGet to false, but the result is still the same. Please help!
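
    Both calls are asynchronous, so nothing orders them; the usual fix is to chain the second request off the first one's deferred, roughly like this sketch:

        function sendFormThenReload() {
            var deferred = dojo.xhrPost({
                form: "addTeamForm",
                url: "ajax",
                handleAs: "text"
            });
            deferred.addCallback(function(data) {
                dojo.byId("ajaxResult1").innerHTML = data;
                getLocations("locationsTeam"); // the server now has the new team
            });
            deferred.addErrback(function(error) {
                dojo.byId("ajaxResult1").innerHTML = "An unexpected error occurred: " + error;
            });
        }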

    Read the article

  • PHP MySQL error

    - by happyCoding25
    Hello, I'm new to PHP, so I decided to follow this tutorial for a simple login screen. I got the code set up, but when I try to log in I get this error: Warning: mysql_num_rows(): supplied argument is not a valid MySQL result resource in (a long file path to the script) on line 27. The code I got from the tutorial is:

        <?php
        ob_start();
        $host="thehost"; // Host name
        $username="myusername"; // Mysql username
        $password="mypass"; // Mysql password
        $db_name="test"; // Database name
        $tbl_name="members"; // Table name

        // Connect to server and select database.
        mysql_connect("$host", "$username", "$password")or die("cannot connect");
        mysql_select_db("$db_name")or die("cannot select DB");

        // Define $myusername and $mypassword
        $myusername=$_POST['myusername'];
        $mypassword=$_POST['mypassword'];

        // To protect against MySQL injection (more detail about MySQL injection)
        $myusername = stripslashes($myusername);
        $mypassword = stripslashes($mypassword);
        $myusername = mysql_real_escape_string($myusername);
        $mypassword = mysql_real_escape_string($mypassword);

        $sql="SELECT * FROM $tbl_name WHERE username='$myusername' and password='$mypassword'";
        $result=mysql_query($sql);

        // mysql_num_rows counts the matching rows
        $count=mysql_num_rows($result);

        // If result matched $myusername and $mypassword, table row must be 1 row
        if($count==1){
            // Register $myusername, $mypassword and redirect to "login_success.php"
            session_register("myusername");
            session_register("mypassword");
            header("location:login_success.php");
        } else {
            echo "Wrong Username or Password";
        }
        ob_end_flush();
        ?>

    (Note: all of the MySQL database info is filled in on my version.) Also, the author provides a PHP 5 version and a normal PHP version; I have tried both and gotten the same error. If anyone knows why this is happening, help is really appreciated.
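
    That warning almost always means mysql_query() returned false (failed query, wrong table/column name, or wrong database) rather than a result resource. A sketch of surfacing the real error before counting rows:

        $result = mysql_query($sql);
        if ($result === false) {
            // mysql_error() reports why the query failed, e.g. a missing table.
            die('Query failed: ' . mysql_error());
        }
        $count = mysql_num_rows($result);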

    Read the article

  • How to remove the error "Can't find PInvoke DLL SQLite.Interop.dll"

    - by Shailesh Jaiswal
    I am developing a Windows Mobile application using the SQLite database. I use the following code to connect to it:

        SQLiteConnection cn = new SQLiteConnection();
        SQLiteDataReader SQLiteDR;
        cn.ConnectionString = @"Data Source=F:\CompNetDB.db3";
        cn.Open();
        SQLiteCommand cmd = new SQLiteCommand();
        cmd.CommandText = "select * from CustomerInfo";
        cmd.CommandType = CommandType.Text;
        cmd.Connection = cn;
        SQLiteDR = cmd.ExecuteReader();

    In this case I am getting the error "Can't find PInvoke DLL SQLite.Interop.dll". I have added the System.Data.SQLite DLL from the \SQLite.NET\bin\compactframework folder, which is installed by default when I installed SQLite. In the same folder there is another DLL named SQLite.Interop.066.DLL, but when I try to add a reference to it, I get an error that the DLL cannot be added. Are the two DLLs SQLite.Interop.dll and SQLite.Interop.066.dll the same? In the above code, how do I solve the "Can't find PInvoke DLL SQLite.Interop.dll" error? Can you tell me whether there is a mistake in my code, or whether I am missing something in my application?

    Read the article

  • JDBC/OSGi and how to dynamically load drivers without explicitly stating dependencies in the bundle?

    - by Chris
    Hi, this is a biggie. I have a well-structured yet monolithic code base with a primitive modular architecture (all modules implement interfaces yet share the same classpath). I realize the folly of this approach and the problems it presents when I deploy on application servers that may have different, conflicting versions of my libraries. I depend on around 30 jars right now and am midway through wrapping them as bundles with bnd. Some of my modules are easy to declare versioned dependencies for, such as my networking components: they statically reference classes within the JRE and other bnd-wrapped libraries. But my JDBC-related components instantiate drivers via Class.forName(...) and can use any one of a number of drivers. I am breaking everything up into OSGi bundles by service area: my core classes/interfaces, reporting-related components, database access components (via JDBC), and so on. I wish for my code to still be usable without OSGi, via a single jar with all my dependencies bundled in (via JarJar), and also to be modular via the OSGi metadata and granular bundles with dependency information. How do I configure my bundle and my code so that it can dynamically use any driver on the classpath and/or within the OSGi container environment (Felix/Equinox/etc.)? Is there a runtime method to detect whether I am running in an OSGi container that is portable across containers (Felix/Equinox/etc.)? Do I need to use a different class-loading mechanism when I am in an OSGi container? Am I required to import OSGi classes into my project to be able to load an at-bundle-time-unknown JDBC driver from my database module? I also have a second method of obtaining a driver (via JNDI, which is really only applicable when running in an app server); do I need to change my JNDI access code for OSGi-aware app servers?
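
    On the "am I in OSGi?" question, one commonly used, container-agnostic check - a sketch; it does require the OSGi core API on the compile classpath - asks the framework which bundle loaded the current class:

        import org.osgi.framework.Bundle;
        import org.osgi.framework.FrameworkUtil;

        public final class OsgiDetector {
            public static boolean insideOsgi() {
                try {
                    // Non-null only when this class was loaded by an OSGi framework.
                    Bundle b = FrameworkUtil.getBundle(OsgiDetector.class);
                    return b != null;
                } catch (NoClassDefFoundError e) {
                    return false; // OSGi API absent: plain classpath run
                }
            }
        }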

    Read the article

  • SQL Server insert performance

    - by Jose
    I have an insert query that gets generated like this:

        INSERT INTO InvoiceDetail (LegacyId, InvoiceId, DetailTypeId, Fee, FeeTax,
            InvestigatorId, SalespersonId, CreateDate, CreatedById, IsChargeBack,
            Expense, RepoAgentId, PayeeName, ExpensePaymentId, AdjustDetailId)
        VALUES (1, 1, 2, 1500.0000, 0.0000, 163, 1002, '11/30/2001 12:00:00 AM',
            1116, 0, 550.0000, 850, NULL, @ExpensePay1, NULL);
        DECLARE @InvDetail1 INT;
        SET @InvDetail1 = (SELECT @@IDENTITY);

    This query is generated for only 110K rows, yet it takes 30 minutes for all of these queries to execute. I checked the query plan; the largest-cost nodes are a Clustered Index Insert at 57% query cost (which has a long XML that I don't want to post) and a Table Spool at 38% query cost:

        <RelOp AvgRowSize="35" EstimateCPU="5.01038E-05" EstimateIO="0"
               EstimateRebinds="0" EstimateRewinds="0" EstimateRows="1"
               LogicalOp="Eager Spool" NodeId="80" Parallel="false"
               PhysicalOp="Table Spool" EstimatedTotalSubtreeCost="0.0466109">
          <OutputList>
            <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvoiceId" />
            <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvestigatorId" />
            <ColumnReference Column="Expr1054" />
            <ColumnReference Column="Expr1055" />
          </OutputList>
          <Spool PrimaryNodeId="3" />
        </RelOp>

    So my question is: what can I do to improve the speed of this thing? I already run ALTER TABLE TABLENAME NOCHECK CONSTRAINTS ALL before the queries and ALTER TABLE TABLENAME CHECK CONSTRAINTS ALL after them, and that hardly shaved off any time. Now, I am running these queries in a .NET application that uses a SqlCommand object to send them. I also tried writing the SQL commands to a file and executing it with sqlcmd, but I wasn't getting any updates on how it was doing, so I gave up on that. Any ideas or hints or help?
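
    Two cheap things worth trying, sketched here on a simplified two-column version of the insert: capture the identity with SCOPE_IDENTITY() (unlike @@IDENTITY it is not perturbed by triggers), and batch many statements into one transaction so the log is not flushed per row. For pure bulk loads, SqlBulkCopy from the .NET side is usually far faster still, at the cost of not returning identities per row.

        BEGIN TRANSACTION;
        DECLARE @InvDetail1 INT;
        INSERT INTO InvoiceDetail (LegacyId, InvoiceId) VALUES (1, 1);
        SET @InvDetail1 = SCOPE_IDENTITY();
        -- ... repeat for the rest of the batch ...
        COMMIT TRANSACTION;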

    Read the article

  • Partial entity loading and management in Silverlight / WCF RIA

    - by Dan Wray
    I have a Silverlight 4 app which pulls entities down from a database using WCF RIA Services. These data objects are fairly simple - just a few fields - but one of those fields contains binary data of arbitrary size. The application needs access to this data essentially as soon as a user has logged in, to display in a list, enable selection, etc. My problem is that, because of the size of this data, the load times are unacceptable and can approach the default timeout of the RIA service. I'd like to somehow partially load the objects into my local data context, so that I have the IDs, names, etc. but not the binary data. I could then, at a later point (i.e. when it's actually needed), populate the binary fields of just those objects I need to display. Any suggestions on how to accomplish this would be welcome. Another approach occurred to me while writing this question (how often does that happen?!): I could move the binary data into a separate database table joined to the original record 1:1, which would let me make use of RIA's lazy loading on that binary data. Again, comments welcome! Thanks.

    Read the article

  • Out Of Memory error while executing mysqldump

    - by Nishaz Salam
    Hi, I am getting the following error when trying to back up a database using mysqldump from the command prompt:

        C:\Documents and Settings\bob>C:\Adobe\LiveCycle8.2\mysql\bin\mysqldump
            --quick --add-locks --lock-tables -c --default-character-set=utf8 --skip-opt
            -pxxxx -u adobe
            -r C:\Adobe\LiveCycle8.2\configurationManager\working\upgrade\mysql\adobe.sql
            -B adobe --port=3306 --host=localhost
        mysqldump: Out of memory (Needed 10380928 bytes)
        mysqldump: Got error: 2008: MySQL client ran out of memory when retrieving data from server

    As you can see, I am using --quick and --skip-opt too, and I cannot figure out what is causing the issue. The server log has the following messages:

        100420 15:16:39 InnoDB: Error: cannot allocate 4814100 bytes of memory for
        InnoDB: a BLOB with malloc! Total allocated memory
        InnoDB: by InnoDB 33427880 bytes. Operating system errno: 2
        InnoDB: Check if you should increase the swap file or
        InnoDB: ulimits of your operating system.
        InnoDB: On FreeBSD check you have compiled the OS with
        InnoDB: a big enough maximum process size.
        100420 15:16:40 InnoDB: Warning: could not allocate 3814100 + 1000000 bytes to retrieve
        InnoDB: a big column. Table name adobe/tb_form_data

    Any help on this would be highly appreciated. P.S.: The backup works fine without any issues when I use MySQL Administrator, but since an external app (the Adobe LiveCycle installer) uses the above command to back up the database during install, I need to get this working. Thanks, Nishaz Salam
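
    One commonly suggested first step - a sketch; the 64M value is an example - is to raise the client packet ceiling, since every large BLOB row must fit in a single packet even with --quick. The server-side max_allowed_packet (and, per the InnoDB message, available memory/ulimits) may need the same treatment:

        C:\Adobe\LiveCycle8.2\mysql\bin\mysqldump --quick --skip-opt ^
            --max_allowed_packet=64M --default-character-set=utf8 ^
            -u adobe -pxxxx -B adobe -r adobe.sql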

    Read the article

  • .NET architecture challenge: The change-prone Frankenstein model

    - by SDReyes
    Good morning SO! We've been scratching our heads over this interesting scenario at the office, and we're anxious to hear your ideas and approaches. We have a database whose schema is prone to changes - let's call it Prony. (It is used to store configuration parameters for embedded devices, so if the embedded-devices guy needs a new table, property, or relationship in the model, he should be able to adapt the schema in an easy way - which happens quite often.) Prony needs a web interface for creating and editing its data. We have another database containing data that also needs to be loaded onto the devices after some transformations - let's call this one Oddy (its data is generated by an already existing administrative web application). Finally we have Tracy, a server that connects our DBs and our embedded devices. She should auto-adapt to our DB schema changes and serialize the data to the devices. Nice puzzle, don't you think? :) Our current candidate - Rady, the fast one: create some views in Prony that perform the data transformation from Oddy, then use Dynamic Data (or some RAD tool) to create/update a simple web interface for Prony (so it can even consult the transformed data coming from Oddy :). As for Tracy, she will need to be recompiled to update her DB schema (Entity Framework should work) and use reflection to explore the schema recursively and serialize the data. Cons: we would have to recompile Tracy and Prony's web interface. What do you think of the candidate(s)? What would you do?

    Read the article

  • Is there a definitive list of the differences between the current version of SQL Azure and SQL Server 2008?

    - by Aim Kai
    I am a relative newbie when it comes to SQL Azure! I was wondering whether there is a definitive list somewhere of what is and is not supported by SQL Azure compared to SQL Server 2008. I have had a look through Google, but I've noticed that some of the blog posts are missing things which I have found through my own testing. For example, quite a lot is summarised in this blog entry, http://www.keepitsimpleandfast.com/2009/12/main-differences-between-sql-azure-and.html:

    - Common Language Runtime (CLR)
    - Database file placement
    - Database mirroring
    - Distributed queries
    - Distributed transactions
    - Filegroup management
    - Global temporary tables
    - Spatial data and indexes
    - SQL Server configuration options
    - SQL Server Service Broker
    - System tables
    - Trace flags

    which is a repeat of the MSDN page http://msdn.microsoft.com/en-us/library/ff394115.aspx. I've noticed from my own testing that the following seem to have issues when migrating from SQL Server 2008 to SQL Azure:

    - XML types (the MSDN page does mention large custom types - I guess it may include these, even when the data schema is really small?)
    - Multi-part views

    I've been using SQL Azure Migration Wizard v3.1.8 to migrate local databases into the cloud. I was wondering if anyone could point me to a list, or give me any information on when these features are likely to be included in SQL Azure.

    Read the article

  • Is communication between EJB3 instances (JEE inter-bean communication) possible?

    - by Hank
    I'm designing a part of a JEE6 application consisting of EJB3 beans. Part of the requirements are multiple parallel (say a few hundred) long-running (over days) database hunts. Individual hunts have different search parameters (start time, end time, query filter), and the parameters may change over time. Currently I'm thinking of the following:

    - SearchController (stateless session bean): formulates a set of search parameters and sends it off to a SearchListener via JMS.
    - SearchListener (message-driven bean): receives the search parameters and instantiates a SearchWorker with them.
    - SearchWorker (SLSB): hunts repeatedly through the database; when it finds something, the result is sent off via JMS and the search continues; when the given end time is reached, it ends.

    What I'm wondering now: Is there a problem with EJB3 instances running for days (other than that I need to be able to deal with container restarts)? How do I know how many, and which, EJB instances of SearchWorker are currently running? Is it possible to communicate with them individually (similar to sending a System V signal to a Unix process), e.g. to send new parameters, to end an instance, etc.?
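
    On the last two questions, one sketch (EJB 3.1 singletons are assumed, and SearchParams is a placeholder type): keep a singleton registry that each SearchWorker consults on every iteration, which yields both a live-instance list and a per-search signalling channel.

        import java.util.Set;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;
        import javax.ejb.Singleton;

        @Singleton
        public class SearchRegistry {
            private final ConcurrentMap<String, SearchParams> active =
                    new ConcurrentHashMap<String, SearchParams>();

            public void register(String searchId, SearchParams p) { active.put(searchId, p); }
            public void update(String searchId, SearchParams p)   { active.replace(searchId, p); }
            public void cancel(String searchId)                   { active.remove(searchId); }
            // Workers poll this each loop; null signals "stop".
            public SearchParams current(String searchId)          { return active.get(searchId); }
            public Set<String> liveSearches()                     { return active.keySet(); }
        }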

    Read the article

  • Hadoop Map/Reduce - simple use example to do the following...

    - by alexeypro
    I have a MySQL database where I store BLOBs (each containing a JSON object) together with an ID for each JSON object. A JSON object contains a lot of different information - say, "city:Los Angeles" and "state:California". There are about 500k such records for now, but they are growing, and each JSON object is quite big. My goal is to do real-time searches in the MySQL database: say I want to find all JSON objects which have "state" set to "California" and "city" set to "San Francisco". I want to utilize Hadoop for the task. My idea is a "job" that takes chunks of, say, 100 records (rows) from MySQL, checks them against the given search criteria, and returns the IDs of those which qualify. Pros/cons? I understand that one might think I should simply use the power of SQL for this, but the JSON object structure is pretty "heavy": laid out as SQL schemas there would be at least 3-5 table joins, which (I tried, really) creates quite a headache, and building all the right indexes eats RAM faster than one can think. ;-) And even then, every SQL query has to be analyzed to make sure it is using the indexes; otherwise, with a full scan, it is literally painful. With such a structure, the only way "up" is vertical scaling. But I am not sure that's the best option for me, as I see how the JSON objects (the data structure) will grow, and I see that their number will grow too. :-) Help? Can somebody point me to simple examples of how this can be done? Does it make sense at all? Am I missing something important? Thank you.

    Read the article

  • Design practice: retrieving multiple users - PHP

    - by pws5068
    Greetings all, I'm looking for a more efficient way to grab multiple users without too much code redundancy. I have a Users class; here's a highly condensed representation:

        Class Users {
            function __construct() {
                ...
            }

            private static function getNew($id) {
                // This is all only example code
                $sql = "SELECT * FROM Users WHERE User_ID=?";
                $user = new User();
                $user->setID($id);
                ...
                return $user;
            }

            public static function getNewestUsers($count) {
                $sql = "SELECT * FROM Users ORDER BY Join_Date LIMIT ?";
                $users = array();
                // Prepared statement data here
                while ($stmt->fetch()) {
                    $users[] = Users::getNew($id);
                    ...
                }
                return $users;
            }

            // These have more sanity/security checks; demonstration only.
            function setID($id) { $this->id = $id; }
            function setName($name) { $this->name = $name; }
            ...
        }

    So clearly calling getNew() in a loop is inefficient, because it makes $count calls to the database each time. I would much rather select all users in a single SQL statement. The problem is that I have probably a dozen or so methods for getting arrays of users, so all of these would now need to know the names of the database columns, which repeats a lot of structure that could change later. What are some other ways I could approach this problem? My goals include efficiency, flexibility, scalability, and good design practice.
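
    One common shape for this - a sketch using PDO, with illustrative column names - is to fetch everything in one query and funnel every row through a single private hydrator, so the column names live in exactly one place no matter how many finder methods exist:

        private static function fromRow(array $row) {
            $user = new User();
            $user->setID($row['User_ID']);   // column names are examples
            $user->setName($row['Name']);
            return $user;
        }

        public static function getNewestUsers(PDO $db, $count) {
            // One round trip; every finder method reuses fromRow().
            $stmt = $db->prepare("SELECT * FROM Users ORDER BY Join_Date DESC LIMIT ?");
            $stmt->bindValue(1, (int)$count, PDO::PARAM_INT);
            $stmt->execute();
            $users = array();
            foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
                $users[] = self::fromRow($row);
            }
            return $users;
        }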

    Read the article
