Search Results

Search found 20931 results on 838 pages for 'mysql insert'.


  • SqlBulkCopy causes Deadlock on SQL Server 2000.

    - by megatoast
    I have a custom data import executable in .NET 3.5 which uses SqlBulkCopy to do faster inserts on large amounts of data. The app takes an input file, massages the data, and bulk uploads it into a SQL Server 2000 database. It was written by a consultant who was building it against a SQL Server 2008 database environment. Could that environment difference be causing this? SQL 2000 does have the bcp utility, which is what bulk copy is based on. When we ran this, it triggered a deadlock error:

        Transaction (Process ID 58) was deadlocked on lock resources with another
        process and has been chosen as the deadlock victim. Rerun the transaction.

    I've tried numerous ways to resolve it, like temporarily setting the connection string variable MultipleActiveResultSets=true, which wasn't ideal, and it still gave a deadlock error. I also made sure it wasn't a connection timeout problem. Here's the function. Any advice?

        /// <summary>
        /// Bulk inserts the DataTable into the destination table.
        /// </summary>
        public void BulkInsert(string destinationTableName, DataTable dataTable)
        {
            SqlBulkCopy bulkCopy;
            if (this.Transaction != null)
            {
                bulkCopy = new SqlBulkCopy(
                    this.Connection,
                    SqlBulkCopyOptions.TableLock,
                    this.Transaction);
            }
            else
            {
                bulkCopy = new SqlBulkCopy(
                    this.Connection.ConnectionString,
                    SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.UseInternalTransaction);
            }

            bulkCopy.ColumnMappings.Add("FeeScheduleID", "FeeScheduleID");
            bulkCopy.ColumnMappings.Add("ProcedureID", "ProcedureID");
            bulkCopy.ColumnMappings.Add("AltCode", "AltCode");
            bulkCopy.ColumnMappings.Add("AltDescription", "AltDescription");
            bulkCopy.ColumnMappings.Add("Fee", "Fee");
            bulkCopy.ColumnMappings.Add("Discount", "Discount");
            bulkCopy.ColumnMappings.Add("Comment", "Comment");
            bulkCopy.ColumnMappings.Add("Description", "Description");

            bulkCopy.BatchSize = dataTable.Rows.Count;
            bulkCopy.DestinationTableName = destinationTableName;
            bulkCopy.WriteToServer(dataTable);
            bulkCopy = null;
        }
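
    A possible mitigation (a sketch, not from the original post): BatchSize = dataTable.Rows.Count sends the whole load as one batch under a full table lock, which makes lock contention on SQL Server 2000 more likely. Committing in smaller batches and disposing the copier deterministically may help; the 1000-row batch size is an assumption to tune, not a known fix:

        // Sketch only: smaller batches release locks sooner on SQL Server 2000.
        using (SqlBulkCopy bulkCopy = new SqlBulkCopy(
                   this.Connection.ConnectionString,
                   SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.UseInternalTransaction))
        {
            bulkCopy.DestinationTableName = destinationTableName;
            bulkCopy.BatchSize = 1000;       // commit every 1000 rows (tune this)
            bulkCopy.BulkCopyTimeout = 600;  // rule out timeouts while testing
            bulkCopy.WriteToServer(dataTable);
        }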

    Read the article

  • Identifying and Resolving Oracle ITL Deadlock

    - by Allan
    I have an Oracle DB package that is routinely causing what I believe is an ITL (Interested Transaction List) deadlock. The relevant portion of a trace file is below.

        Deadlock graph:
                              ---------Blocker(s)--------  ---------Waiter(s)---------
        Resource Name         process session holds waits  process session holds waits
        TM-0000cb52-00000000       22     131 S                 23     143       SS
        TM-0000ceec-00000000       23     143 SX                32     138 SX    SSX
        TM-0000cb52-00000000       30     138 SX                22     131       S

        session 131: DID 0001-0016-00000D1C   session 143: DID 0001-0017-000055D5
        session 143: DID 0001-0017-000055D5   session 138: DID 0001-001E-000067A0
        session 138: DID 0001-001E-000067A0   session 131: DID 0001-0016-00000D1C

        Rows waited on:
        Session 143: no row
        Session 138: no row
        Session 131: no row

    There are no bitmap indexes on this table, so that's not the cause. As far as I can tell, the lack of "Rows waited on" plus the "S" in the waiter's waits column likely indicates that this is an ITL deadlock. Also, the table is written to quite often (roughly 8 inserts or updates running concurrently, as often as 240 times a minute), so an ITL deadlock seems like a strong possibility. I've increased the INITRANS parameter of the table and its indexes to 100 and increased the PCTFREE on the table from 10 to 20 (then rebuilt the indexes), but the deadlocks are still occurring. The deadlock seems to happen most often during an update, but that could just be a coincidence, as I've only traced it a couple of times. My questions are two-fold:

    1) Is this actually an ITL deadlock?
    2) If it is an ITL deadlock, what else can be done to avoid it?
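
    For reference, a sketch of how INITRANS changes are usually applied so they take effect on existing blocks, not just newly formatted ones; the table and index names are placeholders:

        -- ALTER TABLE ... INITRANS only affects new blocks; MOVE rewrites
        -- the existing blocks with the new setting.
        ALTER TABLE my_table MOVE INITRANS 100;

        -- Moving the table leaves its indexes unusable, so rebuild them:
        ALTER INDEX my_table_pk REBUILD INITRANS 100;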

    Read the article

  • SQL Compact allows only one WCF client

    - by Andreas Hoffmann
    Hi, I'm writing a little chat application. To save some info like username and password, I store the data in a SQL Compact 3.5 SP1 database. Everything works fine, but if another client (the same .exe on the same machine) wants to access the service, the second client gets an EndpointNotFound exception from ServiceReference.Class.Open(). When I remove the CE data access code (with an if (false)) I get no error. Where is the problem? I googled for this, but no one seems to have the same error I get :(

    SOLUTION: I used the wrapper in http://csharponphone.blogspot.com/2007/01/keeping-sqlceconnection-open-and-thread.html for thread safety, and now it works :)

    Client code:

        public test()
        {
            var newCompositeType = new Client.ServiceReference1.CompositeType();
            newCompositeType.StringValue = "Hallo" + DateTime.Now.ToLongTimeString();
            newCompositeType.Save = (Console.ReadKey().Key == ConsoleKey.J);

            ServiceReference1.Service1Client sc = new Client.ServiceReference1.Service1Client();
            sc.Open();
            Console.WriteLine("Save " + newCompositeType.StringValue);
            sc.GetDataUsingDataContract(newCompositeType);
            sc.Close();
        }

    Server code:

        public CompositeType GetDataUsingDataContract(CompositeType composite)
        {
            if (composite.Save)
            {
                SqlCeConnection con = new SqlCeConnection(Properties.Settings.Default.Con);
                con.Open();
                var com = con.CreateCommand();
                com.CommandText = "SELECT * FROM TEST";
                SqlCeResultSet result = com.ExecuteResultSet(
                    ResultSetOptions.Scrollable | ResultSetOptions.Updatable);
                var rec = result.CreateRecord();
                rec["TextField"] = composite.StringValue;
                result.Insert(rec);
                result.Close();
                result.Dispose();
                com.Dispose();
                con.Close();
                con.Dispose();
            }
            return composite;
        }

    Read the article

  • Auto switching databases from a rails app gracefully from the ApplicationController?

    - by Zaqintosh
    I've seen this post a few times, but haven't really found the answer to this specific question. I'd like to run a Rails application that switches databases based on the detected request.host (imagine I have two subdomains pointing to the same Rails app and server IP address: myapp1.domain.com and myapp2.domain.com). I'm trying to have myapp1 use the default "production" database and have myapp2 requests always use the alternative remote database. Here is an example of what I tried to do in ApplicationController that did not work:

        class ApplicationController < ActionController::Base
          helper :all
          before_filter :use_alternate_db

          private

          def use_alternate_db
            if request.host == 'myapp1.domain.com'
              regular_db
            elsif request.host == 'myapp2.domain.com'
              alternate_db
            end
          end

          def regular_db
            ActiveRecord::Base.establish_connection :production
          end

          def alternate_db
            ActiveRecord::Base.establish_connection(
              :adapter  => 'mysql',
              :host     => '...',
              :username => '...',
              :password => '...',
              :database => 'alternatedb'
            )
          end
        end

    The problem is that when it switches databases using this method, all connections (including valid sessions across the different subdomains) get interrupted. All the examples online have people controlling database connectivity at the model level, but this would involve adding code all over my application. Is there some way to globally switch database connections on a per-request basis in the manner I'm suggesting above WITHOUT having to inject code all over my application? The added complexity here is that I'm using Heroku as a hosting provider, so I have no control at the Apache / Rails application server level. I have looked at solutions like dbcharmer and magicmodels, but none seem to show examples of doing it in the manner that I'm trying to. Thanks for any help!
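
    One hedged sketch (assuming a hypothetical alternate_production entry in database.yml, which is not from the original post): wrap the switch in an around_filter and restore the default connection afterwards. Note that establish_connection on ActiveRecord::Base swaps the connection pool for the whole process, so concurrent requests in the same process are still affected, which is the root of the interruption problem:

        class ApplicationController < ActionController::Base
          around_filter :switch_db

          private

          # Sketch only: 'alternate_production' is a placeholder database.yml entry.
          def switch_db
            if request.host == 'myapp2.domain.com'
              ActiveRecord::Base.establish_connection :alternate_production
            end
            yield
          ensure
            # Always put the default pool back for the next request.
            ActiveRecord::Base.establish_connection :production
          end
        end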

    Read the article

  • RichFaces and dataTable

    - by ortho
    Hi all :) I have a question regarding RichFaces and beans. I have a JSP page that uses RichFaces, and inside it a rich:extendedDataTable component that takes data from my MainBean as an ArrayList (this bean queries MySQL and puts the results in the ArrayList that later populates the dataTable). There are 4 columns in the dataTable; the first 3 are h:outputLabels and the last one is a checkbox. Now I have a question: how can I get information from the selected row? I mean, when the user clicks the checkbox, I want to get the id/name or whatever is associated with that particular row. Then, when the user clicks the "Apply changes" a4j button, I will update the database, and when the user logs back in he will see the updated info: e.g. the checkbox is now selected/not selected because the user checked it. I believe it's a simple query for someone who has worked with this. For me, an ex-Flash developer, it would be easy in AS3, but here I haven't found the solution yet; please help. Thank you in advance, kindest regards

    Read the article

  • Excel 2003 VBA - Method to duplicate this code that selects and colors rows

    - by Justin
    So this is a fragment of a procedure that exports a dataset from Access to Excel:

        Dim rs As Recordset
        Dim intMaxCol As Integer
        Dim intMaxRow As Integer
        Dim objxls As Excel.Application
        Dim objWkb As Excel.Workbook
        Dim objSht As Excel.Worksheet

        Set rs = CurrentDb.OpenRecordset("qryOutput", dbOpenSnapshot)
        intMaxCol = rs.Fields.Count
        If rs.RecordCount > 0 Then
            rs.MoveLast: rs.MoveFirst
            intMaxRow = rs.RecordCount
            Set objxls = New Excel.Application
            objxls.Visible = True
            With objxls
                Set objWkb = .Workbooks.Add
                Set objSht = objWkb.Worksheets(1)
                With objSht
                    On Error Resume Next
                    .Range(.Cells(1, 1), .Cells(intMaxRow, intMaxCol)).CopyFromRecordset rs
                    .Name = conSHT_NAME
                    .Cells.WrapText = False
                    .Cells.EntireColumn.AutoFit
                    .Cells.RowHeight = 17
                    .Cells.Select
                    With Selection.Font
                        .Name = "Calibri"
                        .Size = 10
                    End With
                    .Rows("1:1").Select
                    With Selection
                        .Insert Shift:=xlDown
                    End With
                    .Rows("1:1").Interior.ColorIndex = 15
                    .Rows("1:1").RowHeight = 30
                    .Rows("2:2").Select
                    With Selection.Interior
                        .ColorIndex = 40
                        .Pattern = xlSolid
                    End With
                    .Rows("4:4").Select
                    With Selection.Interior
                        .ColorIndex = 40
                        .Pattern = xlSolid
                    End With
                    .Rows("6:6").Select
                    With Selection.Interior
                        .ColorIndex = 40
                        .Pattern = xlSolid
                    End With
                    .Rows("1:1").Select
                    With Selection.Borders(xlEdgeBottom)
                        .LineStyle = xlContinuous
                        .Weight = xlMedium
                        .ColorIndex = xlAutomatic
                    End With
                End With
            End With
        End If

        Set objSht = Nothing
        Set objWkb = Nothing
        Set objxls = Nothing
        Set rs = Nothing
        Set DB = Nothing
        End Sub

    See where I am coloring the rows? I want to select and fill (with any color) every other row, kind of like some of those Access reports. I can do it manually, coding each and every row, but two problems: 1) it's a pain, and 2) I don't know the record count beforehand. How can I make the code more efficient in this respect, while incorporating the record count so I know how many rows to loop through?

    EDIT: Another question I have is about the selection methods I am using in this module. Is there better Excel syntax instead of these With Selection blocks?

        .Cells.Select
        With Selection.Font
            .Name = "Calibri"
            .Size = 10
        End With

    This is the only way I've figured out how to accomplish this piece, but literally every other time I run this code it fails: it says there is no object and points to the .Font. Every other time? Is this because the code is poor, or because I am not closing the Excel app in the code? If so, how do I do that? Thanks as always!
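
    A sketch of one way to do both things, not from the original post: loop with Step 2 over the known record count, and set properties on the range directly, which also avoids the Select/Selection calls that break when another window takes focus mid-run:

        ' Sketch: shade every other data row using the recordset count.
        Dim i As Long
        For i = 2 To intMaxRow + 1 Step 2    ' +1 to skip the inserted header row
            With objSht.Rows(i).Interior
                .ColorIndex = 40
                .Pattern = xlSolid
            End With
        Next i

        ' Replaces .Cells.Select / Selection.Font - no Selection object needed:
        With objSht.Cells.Font
            .Name = "Calibri"
            .Size = 10
        End With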

    Read the article

  • Read large file into sqlite table in objective-C on iPhone

    - by James Testa
    I have a 2 MB file, not too large, that I'd like to put into an sqlite database so that I can search it. There are about 30K entries in CSV format, with six fields per line. My understanding is that sqlite on the iPhone can handle a database of this size. I have taken a few approaches, but they have all been slow (more than 30 s). I've tried:

    1) Using C code to read the file and parse the fields into arrays.

    2) Using the following Objective-C code to parse the file and put it directly into the sqlite database:

        NSString *file_text = [NSString stringWithContentsOfFile:filePath
                                                    usedEncoding:NULL
                                                           error:NULL];
        NSArray *lineArray = [file_text componentsSeparatedByString:@"\n"];
        for (int k = 0; k < [lineArray count]; k++) {
            NSArray *parts = [[lineArray objectAtIndex:k]
                                 componentsSeparatedByString:@","];
            NSString *field0 = [parts objectAtIndex:0];
            NSString *field2 = [parts objectAtIndex:2];
            NSString *field3 = [parts objectAtIndex:3];
            NSString *loadSQLi = [[NSString alloc] initWithFormat:
                @"INSERT INTO TABLE (TABLE, FIELD0, FIELD2, FIELD3) VALUES ('%@', '%@', '%@');",
                field0, field2, field3];
            if (sqlite3_exec(db_table, [loadSQLi UTF8String], NULL, NULL, &errorMsg) != SQLITE_OK) {
                sqlite3_close(db_table);
                NSAssert1(0, @"Error loading table: %s", errorMsg);
            }
        }

    Am I missing something? Does anyone know of a fast way to get the file into a database? Or is it possible to translate the file into an sqlite format that can be read directly into sqlite? Or should I turn the file into a plist and load it into a dictionary? Unfortunately I need to search on two of the fields, and I think a dictionary can only have one key? Jim
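
    A common speed-up for this pattern (a sketch, not from the original post; table and column names are placeholders): sqlite wraps each sqlite3_exec INSERT in its own implicit transaction and re-parses the SQL every time. One explicit transaction plus a prepared, parameterised statement usually cuts a load like this from tens of seconds to well under one:

        char *errorMsg;
        sqlite3_exec(db_table, "BEGIN TRANSACTION", NULL, NULL, &errorMsg);

        sqlite3_stmt *stmt;
        const char *sql = "INSERT INTO mytable (field0, field2, field3) VALUES (?, ?, ?)";
        sqlite3_prepare_v2(db_table, sql, -1, &stmt, NULL);

        for (int k = 0; k < [lineArray count]; k++) {
            NSArray *parts = [[lineArray objectAtIndex:k] componentsSeparatedByString:@","];
            sqlite3_bind_text(stmt, 1, [[parts objectAtIndex:0] UTF8String], -1, SQLITE_TRANSIENT);
            sqlite3_bind_text(stmt, 2, [[parts objectAtIndex:2] UTF8String], -1, SQLITE_TRANSIENT);
            sqlite3_bind_text(stmt, 3, [[parts objectAtIndex:3] UTF8String], -1, SQLITE_TRANSIENT);
            sqlite3_step(stmt);    // execute the insert for this row
            sqlite3_reset(stmt);   // reuse the compiled statement
        }

        sqlite3_finalize(stmt);
        sqlite3_exec(db_table, "COMMIT", NULL, NULL, &errorMsg);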

    Read the article

  • SQLiteDataAdapter Fill exception C# ADO.NET

    - by Lirik
    I'm trying to use the OleDb CSV parser to load some data from a CSV file and insert it into a SQLite database, but I get an exception from the OleDbDataAdapter.Fill method and it's frustrating:

        An unhandled exception of type 'System.Data.ConstraintException'
        occurred in System.Data.dll

        Additional information: Failed to enable constraints. One or more rows
        contain values violating non-null, unique, or foreign-key constraints.

    Here is the source code:

        public void InsertData(String csvFileName, String tableName)
        {
            String dir = Path.GetDirectoryName(csvFileName);
            String name = Path.GetFileName(csvFileName);

            using (OleDbConnection conn = new OleDbConnection(
                "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + dir +
                @";Extended Properties=""Text;HDR=No;FMT=Delimited"""))
            {
                conn.Open();
                using (OleDbDataAdapter adapter =
                    new OleDbDataAdapter("SELECT * FROM " + name, conn))
                {
                    QuoteDataSet ds = new QuoteDataSet();
                    adapter.Fill(ds, tableName); // <-- Exception here
                    InsertData(ds, tableName);   // <-- Inserts the data into my SQLite db
                }
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                SQLiteDatabase target = new SQLiteDatabase();
                string csvFileName = "D:\\Innovations\\Finch\\dev\\DataFeed\\YahooTagsInfo.csv";
                string tableName = "Tags";
                target.InsertData(csvFileName, tableName);
                Console.ReadKey();
            }
        }

    The "YahooTagsInfo.csv" file looks like this:

        tagId,tagName,description,colName,dataType,realTime
        1,s,Symbol,symbol,VARCHAR,FALSE
        2,c8,After Hours Change,afterhours,DOUBLE,TRUE
        3,g3,Annualized Gain,annualizedGain,DOUBLE,FALSE
        4,a,Ask,ask,DOUBLE,FALSE
        5,a5,Ask Size,askSize,DOUBLE,FALSE
        6,a2,Average Daily Volume,avgDailyVolume,DOUBLE,FALSE
        7,b,Bid,bid,DOUBLE,FALSE
        8,b6,Bid Size,bidSize,DOUBLE,FALSE
        9,b4,Book Value,bookValue,DOUBLE,FALSE

    I've tried the following:

    1. Removing the first line in the CSV file so it doesn't confuse it for real data.
    2. Changing the TRUE/FALSE realTime flag to 1/0.
    3. 1 and 2 together (i.e. removed the first line and changed the flag).

    None of these things helped. One constraint is that the tagId is supposed to be unique. Here is what the table looks like in design view. Can anybody help me figure out what the problem is here?
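
    A hedged sketch of two diagnostics (assumptions, not a confirmed fix): with HDR=No the header line is read as a data row, which can break the typed dataset's column types and unique tagId constraint; and EnforceConstraints = false plus GetErrors() shows exactly which rows and constraints are involved:

        // 1. Tell the Jet text driver that the first row is a header:
        string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + dir +
                         @";Extended Properties=""Text;HDR=Yes;FMT=Delimited""";

        // 2. While debugging, relax constraints and inspect the offending rows:
        QuoteDataSet ds = new QuoteDataSet();
        ds.EnforceConstraints = false;
        adapter.Fill(ds, tableName);
        foreach (DataRow bad in ds.Tables[tableName].GetErrors())
            Console.WriteLine(bad.RowError);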

    Read the article

  • Debugging a release only flash problem

    - by Fire Lancer
    I've got an Adobe Flash 10 program that freezes in certain cases, but only when running under a release version of the Flash player. With the debug version, the application works fine. What are the best approaches to debugging such issues? I considered installing the release player on my computer and trying to set up some kind of non-graphical method of output (I guess there's some way to write a log file or similar?), however I see no way to have both the release and debug versions installed anyway :(

    EDIT: OK, I managed to replace my version of the Flash player with the release version, and no freeze... so what I know so far is:

                     Debug    Release
        Vista 32:    works    works
        XP Pro 32:   works*   freezes

        * I gave them the debug players I had to test this

    Hmm, seeming less and less like an error in my code and more like a bug in the player (10.0.45.2 in all cases)... At the very least I'd like to see the call stack at the point it freezes. Is there some way to do that without requiring them to install various bits and pieces, e.g. by letting Flash write out a log.txt or something with a "trace"-like function I can insert in the code in question?

    EDIT 2: I just gave the swf to another person with XP 32-bit: same results :(

    Read the article

  • CentOS - Convert Each WAV File to MP3/OGG

    - by Benny
    I am trying to build a script (I'm pretty new to Linux scripting) and I can't seem to figure out why I'm not able to run it. If I keep the header (#!/bin/sh) in, I get the following:

        -bash: /tmp/ConvertAndUpdate.sh: /bin/sh^M: bad interpreter: No such file or directory

    If I take it out, I get the following:

        /tmp/ConvertAndUpdate.sh: line 2: syntax error near unexpected token `do'
        /tmp/ConvertAndUpdate.sh: line 2: `do'

    Any ideas? Here is the full script:

        #!/bin/sh
        for file in *.wav; do
            mp3=$(basename "$file" .wav).mp3
            #echo $mp3
            nice lame -b 16 -m m -q 9 --resample 8 "$file" "$mp3"
            touch --reference "$file" "$mp3"
            chown apache.apache "$mp3"
            chmod 600 "$mp3"
            rm -f "$file"
            mv "$file" /converted
            sql="UPDATE recordings SET IsReady=1 WHERE Filename='${file%.*}'"
            echo $sql | mysql --user=me --password=pasword Recordings
            #echo $sql
        done
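
    The "/bin/sh^M" in the first error points at Windows (CRLF) line endings: the carriage return becomes part of the interpreter path, and without the shebang it corrupts the "do" token instead. A minimal fix sketch:

        # Strip the carriage returns, then re-run the script:
        sed -i 's/\r$//' /tmp/ConvertAndUpdate.sh
        # or, if installed:
        dos2unix /tmp/ConvertAndUpdate.sh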

    Read the article

  • How to code a URL shortener?

    - by marco92w
    I want to create a URL shortener service where you can write a long URL into an input field and the service shortens the URL to "http://www.example.org/abcdef". Instead of "abcdef" there can be any other string with six characters containing a-z, A-Z and 0-9. That makes about 57 billion possible strings. My approach: I have a database table with three columns:

    • id, integer, auto-increment
    • long, string, the long URL the user entered
    • short, string, the shortened URL (or just the six characters)

    I would then insert the long URL into the table, select the auto-increment value for "id", and build a hash of it. This hash should then be inserted as "short". But what sort of hash should I build? Hash algorithms like MD5 create strings that are too long, so I don't think I should use those. A self-built algorithm will work, too. My idea: for "http://www.google.de/" I get the auto-increment id 239472. Then I do the following steps:

        short = '';
        if divisible by 2, add "a" + the result to short
        if divisible by 3, add "b" + the result to short
        ... until I have divisors for a-z and A-Z.

    That could be repeated until the number isn't divisible any more. Do you think this is a good approach? Do you have a better idea?
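
    For comparison, a minimal base-62 sketch in PHP: encode the auto-increment id directly instead of hashing it. This is reversible and collision-free, and the string grows to six characters only as the ids grow:

        <?php
        // Sketch only: map an integer id to a short string, base-62 style.
        function base62_encode($id) {
            $alphabet = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
            $short = '';
            do {
                $short = $alphabet[$id % 62] . $short;
                $id = (int) ($id / 62);
            } while ($id > 0);
            return $short;
        }

        echo base62_encode(239472); // prints "basC"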

    Read the article

  • ASP.NET: aggregating validators in a user control

    - by orsogufo
    I am developing a web application where I would like to perform a set of validations on a certain field (an account name in this specific case). I need to check that the value is not empty, matches a certain pattern, and is not already used. I tried to create a UserControl that aggregates a RequiredFieldValidator, a RegexValidator and a CustomValidator, then I created a ControlToValidate property like this:

        public partial class AccountNameValidator : System.Web.UI.UserControl
        {
            public string ControlToValidate
            {
                get { return ViewState["ControlToValidate"] as string; }
                set
                {
                    ViewState["ControlToValidate"] = value;
                    AccountNameRequiredFieldValidator.ControlToValidate = value;
                    AccountNameRegexValidator.ControlToValidate = value;
                    AccountNameUniqueValidator.ControlToValidate = value;
                }
            }
        }

    However, if I insert the control on a page and set ControlToValidate to some control ID, when the page loads I get an error that says "Unable to find control id 'AccountName' referenced by the 'ControlToValidate' property of 'AccountNameRequiredFieldValidator'", which makes me think that the controls inside my UserControl cannot correctly resolve the controls in the parent page. So, I have two questions:

    1) Is it possible to have validator controls inside a UserControl validate a control in the parent page?
    2) Is it correct and good practice to "aggregate" multiple validator controls in a UserControl? If not, what is the standard way to proceed?

    Read the article

  • Problem counting item frequency in T-SQL

    - by Raúl Roa
    I'm trying to count the frequency of the numbers 1 to 100 in different fields of a table. Let's say I have the table "Results" with the following data:

        LottoId   Winner   Second   Third
        -------   ------   ------   -----
        1         1        2        3
        2         1        2        3

    I'd like to be able to get the frequency per number. For that I'm using the following code:

        -- Create a numbers temp table
        CREATE TABLE #Numbers (
            Number int
        )

        -- Insert the numbers 1 to 100 into the temp table
        DECLARE @counter int
        SET @counter = 0
        WHILE @counter < 100
        BEGIN
            SET @counter = @counter + 1
            INSERT INTO #Numbers (Number) VALUES (@counter)
        END

        SELECT #Numbers.Number,
               COUNT(Results.Winner) AS Winner,
               COUNT(Results.Second) AS Second,
               COUNT(Results.Third)  AS Third
        FROM #Numbers
        LEFT JOIN Results
               ON #Numbers.Number = Results.Winner
               OR #Numbers.Number = Results.Second
               OR #Numbers.Number = Results.Third
        GROUP BY #Numbers.Number

    The problem is that the counts repeat the same values for each number. In this particular case I'm getting the following result:

        Number   Winner   Second   Third
        ------   ------   ------   -----
        1        2        2        2
        2        2        2        2
        3        2        2        2
        ...

    When I should get this:

        Number   Winner   Second   Third
        ------   ------   ------   -----
        1        2        0        0
        2        0        2        0
        3        0        0        2
        ...

    What am I missing?
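
    A sketch of the usual fix: the OR join matches a row whenever the number appears in any of the three columns, and COUNT(column) then counts every non-null match. Conditional sums count only the position the number actually hit:

        SELECT n.Number,
               SUM(CASE WHEN r.Winner = n.Number THEN 1 ELSE 0 END) AS Winner,
               SUM(CASE WHEN r.Second = n.Number THEN 1 ELSE 0 END) AS Second,
               SUM(CASE WHEN r.Third  = n.Number THEN 1 ELSE 0 END) AS Third
        FROM #Numbers n
        LEFT JOIN Results r
               ON n.Number IN (r.Winner, r.Second, r.Third)
        GROUP BY n.Number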

    Read the article

  • DataGridView in VB.net will not allow me to update

    - by Marc
    I have a DataGridView with a DataTable as the DataSource. The user can add new rows to the DataGridView, but I don't display the primary key column (for obvious reasons) and set it to .Visible = False. When I need to push the information in the DataGridView to the database, I use the SqlClient.SqlCommandBuilder to update the underlying data source (the DataTable mentioned above). Now, because the hidden column is the primary key, I loop through the DataGridView and programmatically add the required primary key value to each new row that does not already contain one (user-added rows). This works great 95% of the time.

    The problem is when the user somehow gives focus at some point (any point) to the bottom row of the DataGridView, below their added rows, that is used to add new rows. The update command then gives me an error stating that it cannot insert null into the primary key field, even though, checking all the values in every row, it is definitely NOT null for any of them. I have tried to trap for Row.IsNewRow (as the field never shows null) and delete that row, but I get an error stating I cannot delete an uncommitted row. If focus is never given to that empty row beneath the existing and user-added rows, the update works fine. What is going on?!
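
    A hedged sketch of one workaround: skip the uncommitted "new row" placeholder instead of deleting it, and commit any pending edit before updating. "PrimaryKeyColumn" and GetNextKey() are placeholders, not names from the original post:

        ' Commit the cell the user may still be editing:
        DataGridView1.EndEdit()

        For Each row As DataGridViewRow In DataGridView1.Rows
            ' The template row has no underlying DataRow yet; leave it alone.
            If row.IsNewRow Then Continue For

            Dim value = row.Cells("PrimaryKeyColumn").Value
            If value Is Nothing OrElse IsDBNull(value) Then
                row.Cells("PrimaryKeyColumn").Value = GetNextKey()
            End If
        Next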

    Read the article

  • C# Problem with getPixel & setting RTF text colour accordingly

    - by m3n
    Heyo, I'm messing with converting images to ASCII art. For this I load the image, use GetPixel() on each pixel, then insert a character with that colour into a RichTextBox.

        Bitmap bmBild = new Bitmap(openFileDialog1.FileName.ToString()); // valid image
        int x = 0, y = 0;
        for (int i = 0; i <= (bmBild.Width * bmBild.Height - bmBild.Height); i++)
        {
            // Change the text here
            richTextBox1.Text += "x";
            richTextBox1.Select(i, 1);
            if (bmBild.GetPixel(x, y).IsKnownColor)
            {
                richTextBox1.SelectionColor = bmBild.GetPixel(x, y);
            }
            else
            {
                richTextBox1.SelectionColor = Color.Red;
            }
            if (x >= (bmBild.Width - 1))
            {
                x = 0;
                y++;
                richTextBox1.Text += "\n";
            }
            x++;
        }

    GetPixel does return the correct colour, but the text only ends up black. If I change this

        richTextBox1.SelectionColor = bmBild.GetPixel(x, y);

    to this

        richTextBox1.SelectionColor = Color.Red;

    it works fine. Why am I not getting the right colours? (I know it doesn't do the new lines properly, but I thought I'd get to the bottom of this issue first.) Thanks
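
    A hedged guess at the cause, with a sketch: assigning to richTextBox1.Text rebuilds the whole document and discards per-character formatting, so colours set on earlier iterations can be lost. Appending instead of reassigning preserves the formatting already applied:

        // Sketch only: append, then colour just the newly added character.
        int start = richTextBox1.TextLength;
        richTextBox1.AppendText("x");
        richTextBox1.Select(start, 1);
        richTextBox1.SelectionColor = bmBild.GetPixel(x, y);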

    Read the article

  • Doctrine unsigned validation error storing created_at

    - by Alex Dean
    Hi, I'm having problems with the Timestampable functionality in Doctrine 1.2.2. The error I get on trying to save() my record is:

        Uncaught exception 'Doctrine_Validator_Exception' with message
        'Validation failed in class XXX
        1 field had validation error:
        * 1 validator failed on created_at (unsigned)' in ...

    I've created the relevant field in the MySQL table as:

        created_at DATETIME NOT NULL,

    Then in setTableDefinition() I have:

        $this->hasColumn('created_at', 'timestamp', null, array(
            'type' => 'timestamp',
            'fixed' => false,
            'unsigned' => false,
            'primary' => false,
            'notnull' => true,
            'autoincrement' => false,
        ));

    which is taken straight from the output of generateModelsFromDb(). And finally my setUp() looks like:

        public function setUp()
        {
            parent::setUp();
            $this->actAs('Timestampable', array(
                'created' => array(
                    'name'     => 'created_at',
                    'type'     => 'timestamp',
                    'format'   => 'Y-m-d H:i:s',
                    'disabled' => false,
                    'options'  => array()
                ),
                'updated' => array(
                    'disabled' => true
                )));
        }

    (I've tried not defining all of those fields for 'created', but I get the same problem.) I'm a bit stumped as to what I'm doing wrong - for one thing, I can't see why Doctrine would be running any unsigned checks against a 'timestamp' datatype... Any help gratefully received! Alex

    Read the article

  • TinyMCE Image Alignment

    - by will.earp.co.uk
    TinyMCE has always made it a little difficult to align images. Either the align attribute or adding style="float: left;" has been the solution. Ideally I would just like to add class="left" or class="right" so that I can set the border and margins of the image. Up until now, the only way to do this without using the advimage plugin was to insert the image, select it, then select a style from the style menu. Ideally I should be able to use the align control in the image dialogue to set the alignment class, or use the alignment controls on the toolbar when in the main editing window.

    I have just started looking at a solution to this again. Now that IE6 is finally starting to die, I can use CSS attribute selectors, so

        IMG[style="float: left;"] {}

    works, but I would rather use a class in case there are any other inline style attributes that would cause the selector to fail. And it doesn't work in IE6, and you know some corporate clients will still be running the bloody thing! So I looked through the TinyMCE documentation and found the formats configuration option, which seems to let you specify how TinyMCE applies markup for various operations. There I can add the IMG tag as a selector and have classes: "left" for the alignleft function. This applies the class correctly when the alignment is selected from the toolbar, but it still writes an inline style when the alignment is selected through the image dialogue. Am I doing something wrong, or is there a better way of doing this that will allow my clients to select image alignment from both the image dialogue and the toolbar, whilst applying a class to the image?
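
    For reference, a sketch of the formats option described above (TinyMCE 3.x, assuming .left and .right classes exist in the content CSS). As the question notes, this covers the toolbar buttons only; the advimage dialogue has its own alignment handling:

        tinyMCE.init({
            // ...existing init options...
            formats: {
                alignleft:  { selector: 'img', classes: 'left' },
                alignright: { selector: 'img', classes: 'right' }
            }
        });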

    Read the article

  • Mapping issue with multi-field primary keys using hibernate/JPA annotations

    - by Derek Clarkson
    Hi all, I'm stuck with a database which uses multi-field primary keys. I have a situation with a master and details table, where the details table's primary key contains fields which are also the foreign keys to the master table. Like this:

        Master primary key fields:   master_pk_1
        Details primary key fields:  master_pk_1, details_pk_2, details_pk_3

    In the Master class we define the hibernate/JPA annotations like this:

        @Id
        @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "idGenerator")
        @Column(name = "master_pk_1")
        private long masterPk1;

        @OneToMany(cascade = CascadeType.ALL)
        @JoinColumn(name = "master_pk_1", referencedColumnName = "master_pk_1")
        private List<Details> details = new ArrayList<Details>();

    And in the Details class I have defined the id and the back reference like this:

        @EmbeddedId
        @AttributeOverrides({
            @AttributeOverride(name = "masterPk1",  column = @Column(name = "master_pk_1")),
            @AttributeOverride(name = "detailsPk2", column = @Column(name = "details_pk_2")),
            @AttributeOverride(name = "detailsPk3", column = @Column(name = "details_pk_3")) })
        private DetailsPrimaryKey detailsPrimaryKey = new DetailsPrimaryKey();

        @ManyToOne
        @JoinColumn(name = "master_pk_1", referencedColumnName = "master_pk_1", insertable = false)
        private Master master;

    The goal of all of this was that I could create a new master, add some details to it, and on save JPA/Hibernate would generate the new id for master in the masterPk1 field and automatically pass it down to the details records, storing it in the matching masterPk1 field of the DetailsPrimaryKey class. At least that's what the documentation I've been looking at implies. What actually happens is that Hibernate appears to correctly create and update the records in the database, but does not pass the key to the details classes in memory; instead I have to set it manually myself. I also found that without the insertable = false on the back reference to master, Hibernate would generate SQL with the master_pk_1 field listed twice in the insert statement, resulting in the database throwing an exception. My question is simply: is this arrangement of annotations correct, or is there a better way of doing it?

    Read the article

  • Problem parsing an atom feed using simplexml_load_file(), can't get an attribute.

    - by Craig Ward
    Hi, I am trying to create a social timeline. I pull in feeds from certain places so I have a timeline of things I have done. The problem I am having is with Google Reader shared items. I want to get the time at which I shared the item, which is contained in:

        <entry gr:crawl-timestamp-msec="1269088723811">

    Trying to get the attribute using

        $date = $xml->entry[$i]->link->attributes()->gr:crawl-timestamp-msec;

    fails because of the colon after gr, which causes a PHP parse error. I couldn't figure out how to get the attribute, so I thought I would change its name using the code below, but that throws the following error:

        Warning: simplexml_load_file() [function.simplexml-load-file]: I/O warning :
        failed to load external entity "<?xml version="1.0"?><feed
        xmlns:idx="urn:atom-extension:indexing" xmlns:media="http://search.yahoo.com/mrss/" xmlns

        <?php
        $get_feed = file_get_contents('http://www.google.com/reader/public/atom/user/03120403612393553979/state/com.google/broadcast');

        $old = "gr:crawl-timestamp-msec";
        $new = "timestamp";
        $xml_file = str_replace($old, $new, $get_feed);

        $xml = simplexml_load_file($xml_file);

        $i = 0;
        foreach ($xml->entry as $value) {
            $id     = $xml->entry[$i]->id;
            $date   = date('Y-m-d H:i:s', strtotime($xml->entry[$i]->attributes()->timestamp));
            $text   = $xml->entry[$i]->title;
            $link   = $xml->entry[$i]->link->attributes()->href;
            $source = "googleshared";
            echo "date = $date<br />";
            $sql = "INSERT IGNORE INTO timeline (id, date, text, link, source)
                    VALUES ('$id', '$date', '$text', '$link', '$source')";
            mysql_query($sql);
            $i++;
        }

    Could someone point me in the right direction please. Cheers, Craig
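
    Two hedged observations with a sketch: first, after the str_replace the variable holds XML text, not a filename, so simplexml_load_string() is needed rather than simplexml_load_file(), which is what the "failed to load external entity" warning is about. Second, the string rewriting isn't needed at all: SimpleXML can read namespaced attributes directly via attributes() with a prefix lookup:

        <?php
        // Sketch only: read the gr:crawl-timestamp-msec attribute directly.
        $xml = simplexml_load_string($get_feed);
        foreach ($xml->entry as $entry) {
            $attrs = $entry->attributes('gr', true); // true = treat 'gr' as a prefix
            $msec  = (string) $attrs['crawl-timestamp-msec'];
            $date  = date('Y-m-d H:i:s', (int) ($msec / 1000)); // milliseconds, not a date string
        }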

    Read the article

  • Trying to use authlogic-connect as a plugin in place of a gem - server doesn't start

    - by Arkid
    I am trying to use authlogic-connect as a plugin in Rails 3 in place of a gem. I have made an entry in the Gemfile as:

        gem "authlogic-connect", :require => "authlogic-connect", :path => "localgems"

    When I run bundle install, it runs fine. When I try to start the server I get the error:

        Could not find gem 'authlogic-connect (>= 0, runtime)' in source at localgems.
        Source does not contain any versions of 'authlogic-connect (>= 0, runtime)'
        Try running `bundle install`.

    I have placed the unzipped gem, renamed to authlogic-connect, in the localgems folder. What is the problem? Here is what I get on using rails plugin install:

        arkidmitra$ rails plugin install git://github.com/viatropos/authlogic-connect.git
        Usage:
          rails new APP_PATH [options]

        Options:
              [--skip-gemfile]           # Don't create a Gemfile
          -d, [--database=DATABASE]      # Preconfigure for selected database
                                         # (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db)
                                         # Default: sqlite3
          -O, [--skip-active-record]     # Skip Active Record files
              [--dev]                    # Setup the application with Gemfile pointing to your Rails checkout
          -J, [--skip-prototype]         # Skip Prototype files
          -T, [--skip-test-unit]         # Skip Test::Unit files
          -G, [--skip-git]               # Skip Git ignores and keeps
          -r, [--ruby=PATH]              # Path to the Ruby binary of your choice
                                         # Default: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
          -m, [--template=TEMPLATE]      # Path to an application template (can be a filesystem path or URL)
          -b, [--builder=BUILDER]        # Path to an application builder (can be a filesystem path or URL)
              [--edge]                   # Setup the application with Gemfile pointing to Rails repository

        Runtime options:
          -q, [--quiet]    # Supress status output
          -s, [--skip]     # Skip files that already exist
          -f, [--force]    # Overwrite files that already exist
          -p, [--pretend]  # Run but do not make any changes

        Rails options:
          -h, [--help]     # Show this help message and quit
          -v, [--version]  # Show Rails version number and quit

        Description:
            The 'rails new' command creates a new Rails application with a default
            directory structure and configuration at the path you specify.

        Example:
            rails new ~/Code/Ruby/weblog

            This generates a skeletal Rails installation in ~/Code/Ruby/weblog.
            See the README in the newly created application to get going.

    Read the article

  • Regarding Toplink Fetching Policy

    - by Chandu
    Hi, I'm working on a Swing project built in NetBeans with TopLink Essentials and MySQL. The problem I'm facing is that the entity object doesn't get updated after insertions take place, when calling a getter for the collection mapped on the foreign key property.

    Example: I have 2 tables, Table1 and Table2, with an sno/id column as the primary key in Table1 and a foreign key in Table2. Through the find method I get a particular sno object (existing in Table1), set some values, persist them to Table2, and commit the transaction. When I then look up the same sno object through find and get its Table2 collection through the bean's getTable2Collection() (generated in the bean by TopLink Essentials), I'm unable to see the newly added record, although all the other records are displayed. Only after I close the application and reopen it does the new record get reflected when fetching the same sno through the above process.

    I came to understand that this is a kind of lazy fetching, and that there should be some way the fetch policy can be changed to make the entity object pick up the changes. Please help me in this regard.

    Regards, Chandu
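
    A hedged sketch of one way to work around a stale cache (names taken from the question; whether this fits depends on how the EntityManager is scoped in the generated code):

        // Force a re-read from the database instead of the cached state:
        Table1 parent = em.find(Table1.class, sno);
        em.refresh(parent);
        Collection<Table2> children = parent.getTable2Collection();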

    Read the article

  • Is sqlite a must for merb?

    - by mayank
    Hello all, I have a doubt regarding merb's dependency on sqlite. I want to install merb on my machine, but I don't have sqlite installed. I tried "gem install merb" and got the following error. If there is any way to install merb with MySQL instead, please tell me. Thanks, Mayank

        Building native extensions.  This could take a while...
        ERROR:  Error installing merb:
                ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.8 extconf.rb
        checking for sqlite3.h... no
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary
        libraries and/or headers.  Check the mkmf.log file for more details.
        You may need configuration options.

        Provided configuration options:
                --with-opt-dir
                --without-opt-dir
                --with-opt-include
                --without-opt-include=${opt-dir}/include
                --with-opt-lib
                --without-opt-lib=${opt-dir}/lib
                --with-make-prog
                --without-make-prog
                --srcdir=.
                --curdir
                --ruby=/usr/bin/ruby1.8
                --with-sqlite3-dir
                --without-sqlite3-dir
                --with-sqlite3-include
                --without-sqlite3-include=${sqlite3-dir}/include
                --with-sqlite3-lib
                --without-sqlite3-lib=${sqlite3-dir}/lib

        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/do_sqlite3-0.10.2
        for inspection.
        Results logged to /usr/lib/ruby/gems/1.8/gems/do_sqlite3-0.10.2/ext/do_sqlite3/gem_make.out

    Read the article

  • Force full garbage collection when memory occupation goes beyond a certain threshold

    - by Silvio Donnini
    I have a server application that, on rare occasions, can allocate large chunks of memory. It's not a memory leak, as these chunks can be claimed back by the garbage collector by executing a full garbage collection. Normal garbage collection frees amounts of memory that are too small: it is not adequate in this context. The garbage collector executes these full GCs when it deems appropriate, namely when the memory footprint of the application nears the maximum specified with -Xmx.

    That would be OK, if it weren't for the fact that these problematic memory allocations come in bursts and can cause OutOfMemoryErrors, because the JVM is not able to perform a GC quickly enough to free the required memory. If I manually call System.gc() beforehand, I can prevent this situation. Anyway, I'd prefer not having to monitor my JVM's memory allocation myself (or insert memory management into my application's logic); it would be nice if there were a way to run the virtual machine with a memory threshold, over which full GCs would be executed automatically, in order to release the memory I'm going to need early enough.

    Long story short: I need a way (a command-line option?) to configure the JVM to release a good amount of memory early (i.e. perform a full GC) when memory occupation reaches a certain threshold. I don't care if this slows my application down every once in a while. All I've found so far are ways to modify the sizes of the generations, but that's not what I need (at least not directly). I'd appreciate your suggestions. Silvio

    P.S. I'm working on a way to avoid large allocations, but it could take a long time, and meanwhile my app needs a little stability.
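
    One hedged direction (HotSpot, JDK 6 era): the CMS collector can be told to start concurrent old-generation collections once occupancy crosses a fixed percentage, instead of waiting until the heap nears -Xmx. A sketch, with the 50% threshold as an assumption to tune:

        java -Xmx512m \
             -XX:+UseConcMarkSweepGC \
             -XX:CMSInitiatingOccupancyFraction=50 \
             -XX:+UseCMSInitiatingOccupancyOnly \
             -jar server.jar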

    Read the article

  • JSF ISO-8859-2 charset

    - by Vladimir
    Hi! I have a problem setting the proper charset on my JSF pages. I use a MySQL DB with the latin2 (ISO-8859-2) charset and latin2_croatian_ci collation, but I have problems setting values on backing managed bean properties. The page directive at the top of my page is:

        <%@ page language="java" pageEncoding="ISO-8859-2"
                 contentType="text/html; charset=ISO-8859-2" %>

    In the head I included:

        <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-2">

    And my form tag is:

        <h:form id="entityDetails" acceptcharset="ISO-8859-2">

    I've created and registered a Filter in web.xml with the following doFilter implementation:

        public void doFilter(ServletRequest request, ServletResponse response,
                             FilterChain chain) throws IOException, ServletException {
            request.setCharacterEncoding("ISO-8859-2");
            response.setCharacterEncoding("ISO-8859-2");
            chain.doFilter(request, response);
        }

    But when I set a managed bean property through, e.g., an inputText, all special (Unicode) characters are replaced with the '?' character. I really don't have any other ideas on how to set the charset so the pages work correctly. Any suggestions? Thanks in advance.

    Read the article

  • Where are tables in Mnesia located?

    - by Sanoj
    I'm trying to compare Mnesia with more traditional databases. As I understand it, tables in Mnesia can be stored as:

    • ram_copies - tables are stored in RAM only, so there is no durability as in ACID.
    • disc_copies - tables are stored on disc and a copy is kept in RAM, so a table cannot be bigger than the available memory?
    • disc_only_copies - tables are stored on disc only, so there is no caching in memory, and performance is worse. The size of the table is limited by the size of a dets table, or else the table has to be fragmented.

    So if I want the performance of reads from RAM and the durability of writes to disc, the size of the tables is very limited compared to a traditional RDBMS like MySQL or PostgreSQL. I know that Mnesia isn't meant to replace traditional RDBMSs, but can it be used as a big RDBMS, or do I have to look for another database? The server I will use is a VPS with a limited amount of memory, around 512 MB, but I want good database performance. Are disc_copies and the other types of Mnesia tables as limited as I have understood?

    Read the article
