Search Results

Search found 8267 results on 331 pages for 'insert into'.

  • XSLT: Transforming into non-xml content?

    - by Ian Boyd
    Is it possible to use XSLT to transform XML into something other than XML? For example, I want this final non-XML content:

        <HTML>
        <BODY>
        <IMG src="file1.png"><BR>
        <IMG src="file2.png"><BR>
        ...
        <IMG src="filen.png"><BR>
        </BODY>
        </HTML>

    You'll notice this document is HTML, because in HTML the IMG and BR tags are forbidden from having a closing tag. This contrasts with XHTML, the reformulation of HTML as XML, where every element is required to have a closing tag (because in XML every tag must be closed). Is it possible, using XSLT, to generate non-XML output? Another example of non-XML output might be:

        INSERT INTO Documents (Filename) VALUES ('file1.png')
        INSERT INTO Documents (Filename) VALUES ('file2.png')
        ...
        INSERT INTO Documents (Filename) VALUES ('file3.png')

    The reason I ask is that as soon as my XSLT contains an <IMG>, the stylesheet contains an error: no closing </IMG>.

  • What is the equivalent to Master Views from ASP.NET in PHP?

    - by KingNestor
    I'm used to working in ASP.NET / ASP.NET MVC, and now for class I have to make a PHP website. What is the equivalent to Master Views from ASP.NET in the PHP world? Ideally I would like to be able to define a page layout with something like:

        Master.php

        <html>
        <head>
            <title>My WebSite</title>
            <?php headcontent?>
        </head>
        <body>
            <?php bodycontent?>
        </body>
        </html>

    and then have my other PHP pages inherit from Master, so I can insert into those predefined places. Is this possible in PHP? Right now I have the top half of my page defined as "Header.html" and the bottom half as "footer.html", and I include_once both of them on each page I create. However, this isn't ideal when I want to be able to insert into multiple places on my master page, such as inserting content into the head. Can someone skilled in PHP point me in the right direction?

  • How to query MySQL for exact length and exact UTF-8 characters

    - by oskarae
    I have a table holding a dictionary of words in my language (Latvian):

        CREATE TABLE words (
            value varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    And let's say it has 3 words inside:

        INSERT INTO words (value) VALUES ('teja');
        INSERT INTO words (value) VALUES ('vejš');
        INSERT INTO words (value) VALUES ('feja');

    What I want to do is find all words that are exactly 4 characters long, where the second character is 'e' and the third character is 'j'. It feels to me that the correct query would be:

        SELECT * FROM words WHERE value LIKE '_ej_';

    But the problem with this query is that it returns not 2 entries ('teja', 'vejš') but all three. As I understand it, this is because internally MySQL converts strings to some ASCII representation? Then there is the BINARY addition possible for LIKE:

        SELECT * FROM words WHERE value LIKE BINARY '_ej_';

    But this also does not return 2 entries ('teja', 'vejš') but only one ('teja'). I believe this has something to do with UTF-8 using 2 bytes for non-ASCII characters? So the question: what MySQL query would return exactly my two words ('teja', 'vejš')? Thank you in advance.
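
    A hedged note on the mechanics (it explains the observed results, though it does not decide which of the three words ought to match): utf8_unicode_ci treats accent variants such as 'š' and 's' as equal, and LIKE BINARY matches bytes rather than characters, so the two-byte 'š' makes 'vejš' five bytes long and '_ej_' no longer fits. A character-aware test in MySQL looks like:

        -- CHAR_LENGTH counts characters (vejš = 4); LENGTH counts bytes (vejš = 5)
        SELECT value, CHAR_LENGTH(value) AS chars, LENGTH(value) AS bytes
        FROM words
        WHERE CHAR_LENGTH(value) = 4
          AND SUBSTRING(value, 2, 2) COLLATE utf8_bin = 'ej';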

  • How do I query SQLite Database in Android?

    - by kunjaan
    I successfully created the database and inserted a row, but I cannot query it for some reason. My Droid crashes every time.

        String DATABASE_NAME = "myDatabase.db";
        String DATABASE_TABLE = "mainTable";
        String DATABASE_CREATE = "create table if not exists " + DATABASE_TABLE
                + " ( value VARCHAR not null);";

        SQLiteDatabase myDatabase = null;
        myDatabase = openOrCreateDatabase(DATABASE_NAME, Context.MODE_PRIVATE, null);
        myDatabase.execSQL(DATABASE_CREATE);

        // Create a new row of values to insert.
        ContentValues newValues = new ContentValues();
        // Assign values for each row.
        newValues.put("value", "kunjan");
        // Insert the row into your table
        myDatabase.insert(DATABASE_TABLE, null, newValues);

        String[] result_columns = new String[] { "value" };
        Cursor allRows = myDatabase.query(true, DATABASE_TABLE, result_columns,
                null, null, null, null, null, null);
        if (allRows.moveToFirst()) {
            String value = allRows.getString(1);
            TextView foo = (TextView) findViewById(R.id.TextView01);
            foo.setText(value);
        }
        allRows.close();
        myDatabase.close();

  • Passing an Ajax variable to a Codeigniter function

    - by Matt
    Hello, I think this is a simple one. I have a CodeIgniter function which takes the inputs from a form and inserts them into a database. I want to Ajaxify the process. At the moment the first line of the function gets the id field from the form - I need to change this to get the id field from the Ajax post (which references a hidden field in the form containing the necessary value) instead. How do I do this, please? My CodeIgniter controller function:

        function add()
        {
            $product = $this->products_model->get($this->input->post('id'));
            $insert = array(
                'id'    => $this->input->post('id'),
                'qty'   => 1,
                'price' => $product->price,
                'size'  => $product->size,
                'name'  => $product->name
            );
            $this->cart->insert($insert);
            redirect('home');
        }

    And the jQuery Ajax function:

        $("#form").submit(function(){
            var dataString = $("input#id")
            //alert (dataString);return false;
            $.ajax({
                type: "POST",
                url: "/home/add",
                data: dataString,
                success: function() {
                }
            });
            return false;
        });

    As always, many thanks in advance.

  • DBD::Oracle and utf8 issue

    - by goe
    Hi all, I have a problem where my Perl code, using the latest DBD::Oracle on Perl v5.8.8, throws an exception when I try to insert characters like 'ñ'. The exception:

        DBD::Oracle::db do failed: ORA-01756: quoted string not properly terminated (DBD ERROR: OCIStmtPrepare)

    My $ENV{NLS_LANG} is set to 'AMERICAN_AMERICA.AL32UTF8'. These are the DB parameters based on "SELECT * FROM NLS_DATABASE_PARAMETERS":

        NLS_LANGUAGE             AMERICAN
        NLS_TERRITORY            AMERICA
        NLS_CURRENCY             $
        NLS_ISO_CURRENCY         AMERICA
        NLS_NUMERIC_CHARACTERS   .,
        NLS_CHARACTERSET         AL32UTF8
        NLS_CALENDAR             GREGORIAN
        NLS_DATE_FORMAT          DD-MON-RR
        NLS_DATE_LANGUAGE        AMERICAN
        NLS_SORT                 BINARY
        NLS_TIME_FORMAT          HH.MI.SSXFF AM
        NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
        NLS_TIME_TZ_FORMAT       HH.MI.SSXFF AM TZR
        NLS_TIMESTAMP_TZ_FORMAT  DD-MON-RR HH.MI.SSXFF AM TZR
        NLS_DUAL_CURRENCY        $
        NLS_COMP                 BINARY
        NLS_LENGTH_SEMANTICS     BYTE

    These are the Perl-side parameters based on "$db->ora_nls_parameters()":

        $VAR1 = {
            'NLS_LANGUAGE' => 'AMERICAN',
            'NLS_TIME_TZ_FORMAT' => 'HH.MI.SSXFF AM TZR',
            'NLS_SORT' => 'BINARY',
            'NLS_NUMERIC_CHARACTERS' => '.,',
            'NLS_TIME_FORMAT' => 'HH.MI.SSXFF AM',
            'NLS_ISO_CURRENCY' => 'AMERICA',
            'NLS_COMP' => 'BINARY',
            'NLS_CALENDAR' => 'GREGORIAN',
            'NLS_DATE_FORMAT' => 'DD-MON-RR',
            'NLS_DATE_LANGUAGE' => 'AMERICAN',
            'NLS_TIMESTAMP_FORMAT' => 'DD-MON-RR HH.MI.SSXFF AM',
            'NLS_TERRITORY' => 'AMERICA',
            'NLS_LENGTH_SEMANTICS' => 'BYTE',
            'NLS_NCHAR_CHARACTERSET' => 'AL16UTF16',
            'NLS_DUAL_CURRENCY' => '$',
            'NLS_TIMESTAMP_TZ_FORMAT' => 'DD-MON-RR HH.MI.SSXFF AM TZR',
            'NLS_NCHAR_CONV_EXCP' => 'FALSE',
            'NLS_CHARACTERSET' => 'AL32UTF8',
            'NLS_CURRENCY' => '$'
        };

    Here are some other strange facts: if I set NLS_LANG to 'AMERICAN_AMERICA.UTF8', the insert executes fine with the 'ñ' character. If I leave NLS_LANG as 'AMERICAN_AMERICA.AL32UTF8' but use 'Ñ', the insert will run fine as well.
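
    A hedged diagnostic that is often useful with client-side character-set mismatches like this (DUMP is a standard Oracle function; the table and column names below are placeholders, not from the post): inspect the bytes Oracle actually stored, so a mangled multi-byte sequence becomes visible directly:

        -- 1016 = hexadecimal byte dump plus the character set name
        SELECT some_column, DUMP(some_column, 1016) AS stored_bytes
        FROM some_table
        WHERE some_column LIKE '%ñ%';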

  • [iText] Inserting Image onCloseDocument

    - by David
    I'm trying to insert an image in the footer of my document using iText's onCloseDocument event. I have the following code:

        public void onCloseDocument(PdfWriter writer, Document document) {
            PdfContentByte pdfByte = writer.getDirectContent();
            try {
                // logo is a non-null global variable
                Image theImage = new Jpeg(logo);
                pdfByte.addImage(theImage, 400.0f, 0.0f, 0.0f, 400.0f, 0.0f, 0.0f);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

    The code throws no exceptions, but it also fails to insert the image. This identical code is used in onOpenDocument to insert the same logo. The only difference between the two methods is the coordinates in pdfByte.addImage. However, I've tried quite a few different coordinates in onCloseDocument and none of them appear anywhere in my document. Is there any troubleshooting technique for detecting content which is displayed off-page in a PDF? If not, can anyone see the problem with my onCloseDocument method? Edit: As a follow-up, it seems that onDocumentClose puts its content on page document.length() + 1 (according to its API). However, I don't know how to change the page number back to document.length() and place my logo on the last page.

  • Stored Procedure Does Not Fire Last Command

    - by jp2code
    On our SQL Server (version 10.0.1600), I have a stored procedure that I wrote. It is not throwing any errors, and it is returning the correct values after making the insert in the database. However, the last command, spSendEventNotificationEmail (which sends out email notifications), is not being run. I can run the spSendEventNotificationEmail script manually using the same data, and the notifications show up, so I know it works. Is there something wrong with how I call it in my stored procedure?

        [dbo].[spUpdateRequest](@packetID int, @statusID int output, @empID int, @mtf nVarChar(50))
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;
            DECLARE @id int
            SET @id = -1
            -- Insert statements for procedure here
            SELECT A.ID, PacketID, StatusID INTO #act
            FROM Action A JOIN Request R ON (R.ID = A.RequestID)
            WHERE (PacketID = @packetID) AND (StatusID = @statusID)
            IF ((SELECT COUNT(ID) FROM #act) = 0)
            BEGIN
                -- this statusID has not been entered. Continue
                SELECT ID, MTF INTO #req FROM Request WHERE PacketID = @packetID
                WHILE (0 < (SELECT COUNT(ID) FROM #req))
                BEGIN
                    SELECT TOP 1 @id = ID FROM #req
                    INSERT INTO Action (RequestID, StatusID, EmpID, DateStamp)
                    VALUES (@id, @statusID, @empID, GETDATE())
                    IF ((@mtf IS NOT NULL) AND (0 < LEN(RTRIM(@mtf))))
                    BEGIN
                        UPDATE Request SET MTF = @mtf WHERE ID = @id
                    END
                    DELETE #req WHERE ID = @id
                END
                DROP TABLE #req
                SELECT @id = @@IDENTITY, @statusID = StatusID FROM Action
                SELECT TOP 1 @statusID = ID FROM Status WHERE (@statusID < ID) AND (-1 < Sequence)
                EXEC spSendEventNotificationEmail @packetID, @statusID, 'http:\\cpweb:8100\NextStep.aspx'
            END
            ELSE
            BEGIN
                SET @statusID = -1
            END
            DROP TABLE #act
        END

    Idea of how the data tables are connected:
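
    One hedged observation about the tail of that procedure (a sketch of an alternative, not a confirmed diagnosis): the unfiltered SELECT @id = @@IDENTITY, @statusID = StatusID FROM Action assigns @statusID from whichever row SQL Server happens to read last, and @@IDENTITY also picks up identity values generated by triggers, so spSendEventNotificationEmail may be called with an unexpected @statusID rather than not called at all. A more deterministic version of those two lines:

        -- SCOPE_IDENTITY() returns the identity inserted in this scope only,
        -- and the WHERE clause pins StatusID to that specific row
        SELECT @id = SCOPE_IDENTITY();
        SELECT @statusID = StatusID FROM Action WHERE ID = @id;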

  • What are some methods to prevent double posting in a form? (PHP)

    - by jpjp
    I want to prevent users from accidentally posting a comment twice. I use the PRG (POST/redirect/GET) method, so I insert the data on another page and then redirect the user back to the page which shows the comment. This allows users to refresh as many times as they want. However, this doesn't work when the user goes back and clicks submit again, or when they click submit 100 times really fast. I don't want 100 of the same comments. I looked at related questions on SO and found that a token is best, but I am having trouble using it.

        //makerandomtoken(20) returns a random 20-length char.
        <form method="post" ... >
            <input type="text" id="comments" name="comments" class="commentbox" /><br/>
            <input type="hidden" name="_token" value="<?php echo $token=makerandomtoken(20); ?>" />
            <input type="submit" value="submit" name="submit" />
        </form>

        if (isset($_POST['submit']) && !empty($comments)) {
            $comments = mysqli_real_escape_string($dbc, trim($_POST['comments']));
            //how do I make the if-statement to check if the token has been already set once?
            if ( ____________ ) {
                //don't insert comment because already clicked submit
            } else {
                //insert the comment into the database
            }
        }

    So I have the token as a hidden value, but how do I use that to prevent multiple clicking of submit? Methods: someone suggested using sessions. I would set the random token to $_SESSION['_token'] and check if that session token is equal to $_POST['_token'], but how do I do that? When I tried, it still doesn't check.
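
    A hedged database-side complement to the session check (the comments table and its columns below are assumptions for illustration, not from the post): store the form token with the comment under a unique key, so a repeated submit with the same token cannot insert a second row no matter how fast the clicks arrive:

        ALTER TABLE comments ADD COLUMN form_token CHAR(20) NOT NULL;
        ALTER TABLE comments ADD UNIQUE KEY uq_comments_form_token (form_token);

        -- a second submit carrying the same token inserts zero rows
        INSERT IGNORE INTO comments (user_id, body, form_token)
        VALUES (42, 'example comment', 'abcdefghij0123456789');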

  • How to maintain an ordered table with Core Data (or SQL) with insertions/deletions?

    - by Jean-Denis Muys
    This question is in the context of Core Data, but if I am not mistaken, it applies equally well to a more general SQL case. I want to maintain an ordered table using Core Data, with the possibility for the user to: reorder rows, insert new lines anywhere, and delete any existing line. What's the best data model to do that? I can see two ways:

    1) Model it as an array: I add an int position property to my entity.

    2) Model it as a linked list: I add two one-to-one relations, next and previous, from my entity to itself.

    1) makes it easy to sort, but painful to insert or delete, as you then have to update the position of all objects that come after. 2) makes it easy to insert or delete, but very difficult to sort. In fact, I don't think I know how to express a Sort Descriptor (SQL ORDER BY clause) for that case. Now I can imagine a variation on 1):

    3) Add an int ordering property to the entity, but instead of having it count one by one, have it count 100 by 100 (for example). Then inserting is as simple as finding any number between the ordering of the previous and next existing objects. The expensive renumbering only has to occur when the 100 holes have been filled. Making that property a float rather than an int makes it even better: it's almost always possible to find a new float midway between two floats. Am I on the right track with solution 3), or is there something smarter?
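
    A minimal SQLite-flavoured sketch of approach 3), using a hypothetical items table (nothing here comes from the original post): ordering lives in a spaced-out position column, and an insert between two neighbours just takes the midpoint, so no other rows are touched:

        CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, position REAL);

        INSERT INTO items (name, position) VALUES ('A', 100), ('B', 200);

        -- insert C between A and B: midpoint of the two neighbouring positions
        INSERT INTO items (name, position) VALUES ('C', 150);

        SELECT name FROM items ORDER BY position;   -- A, C, B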

  • Heavy Mysql operation & Time Constraints [closed]

    - by Rahul Jha
    There is a performance issue where I am stuck with my application, which is based on PHP and MySQL. The application is for data migration: data has to be uploaded and, after various processes (cleaning of foreign characters, duplicate checking, ID generation), inserted into one central table and then into 5 different tables. There, an ID is generated, and that ID has to be updated back to the central table. There are different sets of records and validation rules. The problem I am facing is that when I insert, say, a 4K-row file (containing 20 columns), it works fine: within 15 minutes everything gets inserted. But when I insert the same records again, it takes one hour (ideally it should insert quickly, marking the earlier-inserted data as duplicate). After going through the log file, I noticed that there is a MySQL SELECT statement where I check for duplicates and get the IDs that are duplicated. Then I call a function inside a for loop, which basically inserts records into the 5 tables and updates the ID back to the central table. This function call accounts for most of the total processing time. P.S. The records have to be inserted record by record. Kindly suggest a solution.

        //This is that sample code
        $query = mysql_query("SELECT DISTINCT p1.ID
            FROM table1 p1, table2 p2, table3 a
            WHERE p2.datatype = 0
              AND (p1.datatype = 1 || p1.datatype = 2)
              AND p2.ID = 0
              AND p1.ID = a.ID
              AND p1.coulmn1 = p2.column1
              AND p1.coulmn2 = p2.coulmn2
              AND a.coulmn3 = p2.column3");
        $num = mysql_num_rows($query);
        for ($i = 0; $i < $num; $i++) {
            $f = mysql_result($query, $i, "ID");
            //calling function
            RecordInsert($f);
        }
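
    A hedged first thing to try, assuming the join columns are not yet indexed (the column names below are copied as they appear in the query above, spelling included): composite indexes covering the duplicate-check join typically turn that SELECT from repeated full scans into index lookups:

        ALTER TABLE table1 ADD INDEX idx_dupcheck (datatype, ID, coulmn1, coulmn2);
        ALTER TABLE table2 ADD INDEX idx_dupcheck (datatype, ID, column1, coulmn2, column3);
        ALTER TABLE table3 ADD INDEX idx_dupcheck (ID, coulmn3);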

  • Problems with string parameter insertion into prepared statement

    - by c0d3x
    Hi, I have a database running on an MS SQL Server. My application communicates with it via JDBC and ODBC. Now I am trying to use prepared statements. When I insert a numeric (Long) parameter, everything works fine. When I insert a string parameter, it does not work. There is no error message, just an empty result set.

        WHERE column LIKE ('%' + ? + '%')   --inserted "test" -> empty result set
        WHERE column LIKE ?                 --inserted "%test%" -> empty result set
        WHERE column = ?                    --inserted "test" -> works

    But I need the LIKE functionality. When I insert the same string directly into the query string (not as a prepared statement parameter), it runs fine:

        WHERE column LIKE '%test%'

    It looks a little bit like double quoting to me, but I have never used quotes inside a string. I use preparedStatement.setString(int index, String x) for insertion. What is causing this problem? How can I fix it? Thanks in advance.

  • SQL query to get latest record for all distinct items in a table

    - by David Buckley
    I have a table of all sales defined like:

        mysql> describe saledata;
        +----------+---------------------+------+-----+---------+-------+
        | Field    | Type                | Null | Key | Default | Extra |
        +----------+---------------------+------+-----+---------+-------+
        | SaleDate | datetime            | NO   |     | NULL    |       |
        | StoreID  | bigint(20) unsigned | NO   |     | NULL    |       |
        | Quantity | int(10) unsigned    | NO   |     | NULL    |       |
        | Price    | decimal(19,4)       | NO   |     | NULL    |       |
        | ItemID   | bigint(20) unsigned | NO   |     | NULL    |       |
        +----------+---------------------+------+-----+---------+-------+

    I need to get the last sale price for all items (as the price may change). I know I can run a query like:

        SELECT price FROM saledata
        WHERE itemID = 1234 AND storeID = 111
        ORDER BY saledate DESC LIMIT 1

    However, I want to be able to get the last sale price for all items (the ItemIDs are stored in a separate item table) and insert them into a separate table. How can I get this data? I've tried queries like this:

        SELECT storeID, itemID, price FROM saledata
        WHERE itemID IN (SELECT itemID FROM itemmap)
        ORDER BY saledate DESC LIMIT 1

    and then wrap that in an insert, but it's not getting the proper data. Is there one query I can run to get the last price for each item and insert that into a table defined like:

        mysql> describe lastsale;
        +---------+---------------------+------+-----+---------+-------+
        | Field   | Type                | Null | Key | Default | Extra |
        +---------+---------------------+------+-----+---------+-------+
        | StoreID | bigint(20) unsigned | NO   |     | NULL    |       |
        | Price   | decimal(19,4)       | NO   |     | NULL    |       |
        | ItemID  | bigint(20) unsigned | NO   |     | NULL    |       |
        +---------+---------------------+------+-----+---------+-------+
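
    A hedged sketch of the classic greatest-n-per-group approach in MySQL (assuming SaleDate is unique per store/item pair; ties would insert duplicate rows): first find the latest sale date per (StoreID, ItemID), then join back to pick up the price, feeding the result straight into the INSERT:

        INSERT INTO lastsale (StoreID, Price, ItemID)
        SELECT s.StoreID, s.Price, s.ItemID
        FROM saledata s
        JOIN (
            SELECT StoreID, ItemID, MAX(SaleDate) AS LastDate
            FROM saledata
            GROUP BY StoreID, ItemID
        ) latest
          ON  s.StoreID  = latest.StoreID
          AND s.ItemID   = latest.ItemID
          AND s.SaleDate = latest.LastDate;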

  • Spring + iBatis + Hessian caching

    - by ILya
    Hi. I have a Hessian service on Spring + iBatis working on Tomcat. I'm wondering how to cache results... I've made the following config in my sqlmap file:

        <sqlMap namespace="Account">
            <cacheModel id="accountCache" type="MEMORY" readOnly="true" serialize="false">
                <flushInterval hours="24"/>
                <flushOnExecute statement="Account.addAccount"/>
                <flushOnExecute statement="Account.deleteAccount"/>
                <property name="reference-type" value="STRONG" />
            </cacheModel>
            <typeAlias alias="Account" type="domain.Account" />
            <select id="getAccounts" resultClass="Account" cacheModel="accountCache">
                fix all; select id, name, pin from accounts;
            </select>
            <select id="getAccount" parameterClass="Long" resultClass="Account" cacheModel="accountCache">
                fix all; select id, name, pin from accounts where id=#id#;
            </select>
            <insert id="addAccount" parameterClass="Account">
                fix all; insert into accounts (id, name, pin) values (#id#, #name#, #pin#);
            </insert>
            <delete id="deleteAccount" parameterClass="Long">
                fix all; delete from accounts where id = #id#;
            </delete>
        </sqlMap>

    Then I did some tests... I have a Hessian client application. I'm calling getAccounts several times, and after each call there is a query to the DBMS. How do I make my service query the DBMS only the first time getAccounts is called (after server restart), and use the cache for the following calls?

  • Django install on a shared host, .htaccess help

    - by redconservatory
    I am trying to install Django on a shared host using the following instructions: docs.google.com/View?docid=dhhpr5xs_463522g My problem is with the following line in my root .htaccess:

        RewriteRule ^(.*)$ /cgi-bin/wcgi.py/$1 [QSA,L]

    When I include this line I get a 500 error with almost all of my domains on this account. My cgi-bin directory is home/my-username/public_html/cgi-bin/ and the wcgi.py file contains:

        #!/usr/local/bin/python
        import os, sys

        sys.path.insert(0, "/home/username/django/")
        sys.path.insert(0, "/home/username/django/projects")
        sys.path.insert(0, "/home/username/django/projects/newprojects")

        import django.core.handlers.wsgi

        os.chdir("/home/username/django/projects/newproject")  # optional
        os.environ['DJANGO_SETTINGS_MODULE'] = "newproject.settings"

        def runcgi():
            environ = dict(os.environ.items())
            environ['wsgi.input'] = sys.stdin
            environ['wsgi.errors'] = sys.stderr
            environ['wsgi.version'] = (1,0)
            environ['wsgi.multithread'] = False
            environ['wsgi.multiprocess'] = True
            environ['wsgi.run_once'] = True
            application = django.core.handlers.wsgi.WSGIHandler()
            if environ.get('HTTPS','off') in ('on','1'):
                environ['wsgi.url_scheme'] = 'https'
            else:
                environ['wsgi.url_scheme'] = 'http'
            headers_set = []
            headers_sent = []

            def write(data):
                if not headers_set:
                    raise AssertionError("write() before start_response()")
                elif not headers_sent:
                    # Before the first output, send the stored headers
                    status, response_headers = headers_sent[:] = headers_set
                    sys.stdout.write('Status: %s\r\n' % status)
                    for header in response_headers:
                        sys.stdout.write('%s: %s\r\n' % header)
                    sys.stdout.write('\r\n')
                sys.stdout.write(data)
                sys.stdout.flush()

            def start_response(status, response_headers, exc_info=None):
                if exc_info:
                    try:
                        if headers_sent:
                            # Re-raise original exception if headers sent
                            raise exc_info[0], exc_info[1], exc_info[2]
                    finally:
                        exc_info = None  # avoid dangling circular ref
                elif headers_set:
                    raise AssertionError("Headers already set!")
                headers_set[:] = [status, response_headers]
                return write

            result = application(environ, start_response)
            try:
                for data in result:
                    if data:  # don't send headers until body appears
                        write(data)
                if not headers_sent:
                    write('')  # send headers now if body was empty
            finally:
                if hasattr(result, 'close'):
                    result.close()

        runcgi()

    Only I changed the "username" parts to my actual username...

  • media.set_xx giving me grief!

    - by Firas
    New guy here. I asked a while back about a sprite-recolouring program that I was having difficulty with and got some great responses. Basically, I tried to write a program that would recolour the pixels of all the pictures in a given folder from one given colour to another. I believe I have it down, but now the program is telling me that I have an invalid value specified for the red component of my colour (ValueError: Invalid red value specified.), even though it's only being changed from 64 to 56. Any help on the matter would be appreciated! (Here's the code, in case I messed up somewhere else; it's in Python):

        import os
        import media
        import sys

        def recolour(old, new, folder):
            old_list = old.split(' ')
            new_list = new.split(' ')
            folder_location = os.path.join('C:\\', 'Users', 'Owner', 'Spriting', folder)
            for filename in os.listdir(folder):
                current_file = media.load_picture(folder_location + '\\' + filename)
                for pix in current_file:
                    if (media.get_red(pix) == int(old_list[0])) and \
                       (media.get_green(pix) == int(old_list[1])) and \
                       (media.get_blue(pix) == int(old_list[2])):
                        media.set_red(pix, new_list[0])
                        media.set_green(pix, new_list[1])
                        media.set_blue(pix, new_list[2])
            media.save(pic)

        if __name__ == '__main__':
            while 1:
                old = str(raw_input('Please insert the original RGB component, separated by a single space: '))
                if old == 'quit':
                    sys.exit(0)
                new = str(raw_input('Please insert the new RGB component, separated by a single space: '))
                if new == 'quit':
                    sys.exit(0)
                folder = str(raw_input('Please insert the name of the folder you wish to modify: '))
                if folder == 'quit':
                    sys.exit(0)
                else:
                    recolour(old, new, folder)

  • Hibernate saveOrUpdate method problem

    - by umesh awasthi
    Hi all, I am trying to work with Hibernate (on the database side for the first time) and am somewhat stuck choosing the best possible way to use saveOrUpdate versus separate save/update calls. I have a destination POJO class, and its other attributes need to be updated along with the Destination entity. I am getting an XML file to be imported, and I am using the following code to update/save the destination entity along with its attribute classes:

        try {
            getSessionFactory().getCurrentSession().beginTransaction();
            getSessionFactory().getCurrentSession().saveOrUpdate(entity);
            getSessionFactory().getCurrentSession().getTransaction().commit();
        } catch (Exception e) {
            getSessionFactory().getCurrentSession().getTransaction().rollback();
        } finally {
            getSessionFactory().close();
        }

    Everything works fine as long as I am using the same session instance, but later on, when I use the same XML file to update the destination PO for certain attributes, it gives me the following error:

        SEVERE: Duplicate entry 'MNLI' for key 'DESTINATIONID'
        9 Jan, 2011 4:58:11 PM org.hibernate.event.def.AbstractFlushingEventListener performExecutions
        SEVERE: Could not synchronize database state with session
        org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:94)
            at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
            at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275)
            at org.hibernate.jdbc.AbstractBatcher.prepareStatement(AbstractBatcher.java:114)
            at org.hibernate.jdbc.AbstractBatcher.prepareStatement(AbstractBatcher.java:109)
            at org.hibernate.jdbc.AbstractBatcher.prepareBatchStatement(AbstractBatcher.java:244)
            at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2242)
            at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2678)
            at org.hibernate.action.EntityInsertAction.execute(EntityInsertAction.java:79)

    I am using a UUID as the primary key for the Destination table, and in that table I have a destination ID which is unique. I understand that in the second case Hibernate is not able to detect that there is already an entry for the same destination in the DB, and tries to execute an INSERT statement rather than an UPDATE. One possible solution is to use the destination ID to check whether there is already a destination in place with the given ID, and based on the results issue a save or update command. My question is: can this be achieved in any other good way? Thanks in advance.

  • Protecting critical sections based on a condition in C#

    - by NAADEV
    Hello, I'm dealing with a curious scenario. I'm using Entity Framework to save (insert/update) into a SQL database in a multithreaded environment. The problem is I need to access the database to see whether a record with a particular key has already been created, in order to set a field value ("executing"), or whether it's new, to set a different value ("pending"). Those records are identified by a unique GUID. I've solved this problem by setting a lock, since I know the entity will not be present in any other process; in other words, I will not have the same GUID in different processes, and it seems to be working fine. It looks something like this:

        static readonly object LockableObject = new object();

        static void SaveElement(Entity e)
        {
            lock (LockableObject)
            {
                Entity e2 = Repository.FindByKey(e);
                if (e2 != null)
                {
                    Repository.Insert(e2);
                }
                else
                {
                    Repository.Update(e2);
                }
            }
        }

    But this implies that when I have a huge amount of requests to be saved, they will be queued. I wonder if there is something like this (please, take it just as an idea):

        static void SaveElement(Entity e)
        {
            using (ThisWouldBeAClassToProtectBasedOnACondition protector =
                   new ThisWouldBeAClassToProtectBasedOnACondition(e => e.UniqueId))
            {
                Entity e2 = Repository.FindByKey(e);
                if (e2 != null)
                {
                    Repository.Insert(e2);
                }
                else
                {
                    Repository.Update(e2);
                }
            }
        }

    The idea would be having a kind of protection that protects based on a condition, so each entity e would have its own lock based on its e.UniqueId property. Any idea?
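
    A hedged database-side alternative that avoids application locks entirely (the table and column names below are invented for illustration): put a unique key on the GUID and let one atomic statement decide between insert and update, so concurrent callers cannot race between the existence check and the write:

        ALTER TABLE Elements ADD CONSTRAINT UQ_Elements_UniqueId UNIQUE (UniqueId);

        -- T-SQL: the HOLDLOCK hint keeps the check-and-write atomic under concurrency
        MERGE Elements WITH (HOLDLOCK) AS t
        USING (SELECT @UniqueId AS UniqueId) AS s
            ON t.UniqueId = s.UniqueId
        WHEN MATCHED THEN
            UPDATE SET Status = 'executing'
        WHEN NOT MATCHED THEN
            INSERT (UniqueId, Status) VALUES (s.UniqueId, 'pending');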

  • strange segmentation fault during function return

    - by Kyle
    I am running a program on 2 different machines. On one it works fine without issue. On the other it results in a segmentation fault. Through debugging, I have figured out where the fault occurs, but I can't figure out a logical reason for it to happen. In one function I have the following code: pass_particles(particle_grid, particle_properties, input_data, coll_eros_track, collision_number_part, world, grid_rank_lookup, grid_locations); cout<<"done passing particles"<<endl; The function pass_particles looks like: void pass_particles(map<int,map<int,Particle> > & particle_grid, std::vector<Particle_props> & particle_properties, User_input& input_data, data_tracking & coll_eros_track, vector<int> & collision_number_part, mpi::communicator & world, std::map<int,int> & grid_rank_lookup, map<int,std::vector<double> > & grid_locations) { //cout<<"east-west"<<endl; //east-west exchange (x direction) map<int, vector<Particle> > particles_to_be_sent_east; map<int, vector<Particle> > particles_to_be_sent_west; vector<Particle> particles_received_east; vector<Particle> particles_received_west; int counter_x_sent=0; int counter_x_received=0; for(grid_iter=particle_grid.begin();grid_iter!=particle_grid.end();grid_iter++) { map<int,Particle>::iterator part_iter; for (part_iter=grid_iter->second.begin();part_iter!=grid_iter->second.end();) { if (particle_properties[part_iter->second.global_part_num()].particle_in_box()[grid_iter->first]) { //decide if a particle has left the box...need to consider whether particle was already outside the box if ((part_iter->second.position().x()<(grid_locations[grid_iter->first][0]) && part_iter->second.position().x()>(grid_locations[grid_iter->first-input_data.z_numboxes()][0])) || (input_data.periodic_walls_x() && (grid_iter->first-floor(grid_iter->first/(input_data.xz_numboxes()))*input_data.xz_numboxes()<input_data.z_numboxes()) && (part_iter->second.position().x()>(grid_locations[input_data.total_boxes()-1][0])))) { particles_to_be_sent_west[grid_iter->first].push_back(part_iter->second); particle_properties[particle_grid[grid_iter->first][part_iter->first].global_part_num()].particle_in_box()[grid_iter->first]=false; counter_sent++; counter_x_sent++; } else if ((part_iter->second.position().x()>(grid_locations[grid_iter->first][1]) && part_iter->second.position().x()<(grid_locations[grid_iter->first+input_data.z_numboxes()][1])) || (input_data.periodic_walls_x() && (grid_iter->first-floor(grid_iter->first/(input_data.xz_numboxes()))*input_data.xz_numboxes())>input_data.xz_numboxes()-input_data.z_numboxes()-1) && (part_iter->second.position().x()<(grid_locations[0][1]))) { particles_to_be_sent_east[grid_iter->first].push_back(part_iter->second); particle_properties[particle_grid[grid_iter->first][part_iter->first].global_part_num()].particle_in_box()[grid_iter->first]=false; counter_sent++; counter_x_sent++; } //select particles in overlap areas to send to neighboring cells else if ((part_iter->second.position().x()>(grid_locations[grid_iter->first][0]) && part_iter->second.position().x()<(grid_locations[grid_iter->first][0]+input_data.diam_large()))) { particles_to_be_sent_west[grid_iter->first].push_back(part_iter->second); counter_sent++; counter_x_sent++; } else if ((part_iter->second.position().x()<(grid_locations[grid_iter->first][1]) && part_iter->second.position().x()>(grid_locations[grid_iter->first][1]-input_data.diam_large()))) { particles_to_be_sent_east[grid_iter->first].push_back(part_iter->second); counter_sent++; counter_x_sent++; } ++part_iter; 
} else if (particles_received_current[grid_iter->first].find(part_iter->first)!=particles_received_current[grid_iter->first].end()) { if ((part_iter->second.position().x()>(grid_locations[grid_iter->first][0]) && part_iter->second.position().x()<(grid_locations[grid_iter->first][0]+input_data.diam_large()))) { particles_to_be_sent_west[grid_iter->first].push_back(part_iter->second); counter_sent++; counter_x_sent++; } else if ((part_iter->second.position().x()<(grid_locations[grid_iter->first][1]) && part_iter->second.position().x()>(grid_locations[grid_iter->first][1]-input_data.diam_large()))) { particles_to_be_sent_east[grid_iter->first].push_back(part_iter->second); counter_sent++; counter_x_sent++; } part_iter++; } else { particle_grid[grid_iter->first].erase(part_iter++); counter_removed++; } } }
world.barrier(); mpi::request reqs_x_send[particles_to_be_sent_west.size()+particles_to_be_sent_east.size()]; vector<multimap<int,int> > box_sent_x_info; box_sent_x_info.resize(world.size()); vector<multimap<int,int> > box_received_x_info; box_received_x_info.resize(world.size()); int counter_x_reqs=0;
//send particles for(grid_iter_vec=particles_to_be_sent_west.begin();grid_iter_vec!=particles_to_be_sent_west.end();grid_iter_vec++) { if (grid_iter_vec->second.size()!=0) { //send a particle. 50 will be "west" tag if (input_data.periodic_walls_x() && (grid_iter_vec->first-floor(grid_iter_vec->first/(input_data.xz_numboxes()))*input_data.xz_numboxes()<input_data.z_numboxes())) { reqs_x_send[counter_x_reqs++]=world.isend(grid_rank_lookup[grid_iter_vec->first + input_data.z_numboxes()*(input_data.x_numboxes()-1)], grid_iter_vec->first + input_data.z_numboxes()*(input_data.x_numboxes()-1), particles_to_be_sent_west[grid_iter_vec->first]); box_sent_x_info[grid_rank_lookup[grid_iter_vec->first + input_data.z_numboxes()*(input_data.x_numboxes()-1)]].insert(pair<int,int>(world.rank(), grid_iter_vec->first + input_data.z_numboxes()*(input_data.x_numboxes()-1))); } else if (!(grid_iter_vec->first-floor(grid_iter_vec->first/(input_data.xz_numboxes()))*input_data.xz_numboxes()<input_data.z_numboxes())) { reqs_x_send[counter_x_reqs++]=world.isend(grid_rank_lookup[grid_iter_vec->first - input_data.z_numboxes()], grid_iter_vec->first - input_data.z_numboxes(), particles_to_be_sent_west[grid_iter_vec->first]); box_sent_x_info[grid_rank_lookup[grid_iter_vec->first - input_data.z_numboxes()]].insert(pair<int,int>(world.rank(),grid_iter_vec->first - input_data.z_numboxes())); } } }
for(grid_iter_vec=particles_to_be_sent_east.begin();grid_iter_vec!=particles_to_be_sent_east.end();grid_iter_vec++) { if (grid_iter_vec->second.size()!=0) { //send a particle. 60 will be "east" tag if (input_data.periodic_walls_x() && (grid_iter_vec->first-floor(grid_iter_vec->first/(input_data.xz_numboxes())*input_data.xz_numboxes())>input_data.xz_numboxes()-input_data.z_numboxes()-1)) { reqs_x_send[counter_x_reqs++]=world.isend(grid_rank_lookup[grid_iter_vec->first - input_data.z_numboxes()*(input_data.x_numboxes()-1)], 2000000000-(grid_iter_vec->first - input_data.z_numboxes()*(input_data.x_numboxes()-1)), particles_to_be_sent_east[grid_iter_vec->first]); box_sent_x_info[grid_rank_lookup[grid_iter_vec->first - input_data.z_numboxes()*(input_data.x_numboxes()-1)]].insert(pair<int,int>(world.rank(),2000000000-(grid_iter_vec->first - input_data.z_numboxes()*(input_data.x_numboxes()-1)))); } else if (!(grid_iter_vec->first-floor(grid_iter_vec->first/(input_data.xz_numboxes())*input_data.xz_numboxes())>input_data.xz_numboxes()-input_data.z_numboxes()-1)) { reqs_x_send[counter_x_reqs++]=world.isend(grid_rank_lookup[grid_iter_vec->first + input_data.z_numboxes()], 2000000000-(grid_iter_vec->first + input_data.z_numboxes()), particles_to_be_sent_east[grid_iter_vec->first]); box_sent_x_info[grid_rank_lookup[grid_iter_vec->first + input_data.z_numboxes()]].insert(pair<int,int>(world.rank(), 2000000000-(grid_iter_vec->first + input_data.z_numboxes()))); } } }
counter=0; for (int i=0;i<world.size();i++) { //if (world.rank()!=i) //{ reqs[counter++]=world.isend(i,1000000000,box_sent_x_info[i]); reqs[counter++]=world.irecv(i,1000000000,box_received_x_info[i]); //} } mpi::wait_all(reqs, reqs + world.size()*2);
//receive particles //receive west particles for (int j=0;j<world.size();j++) { multimap<int,int>::iterator received_info_iter; for (received_info_iter=box_received_x_info[j].begin();received_info_iter!=box_received_x_info[j].end();received_info_iter++) { //receive the message if (received_info_iter->second<1000000000) { //receive the message world.recv(received_info_iter->first,received_info_iter->second,particles_received_west); //loop through all the received particles and add them to the particle_grid for this processor for (unsigned int i=0;i<particles_received_west.size();i++) { particle_grid[received_info_iter->second].insert(pair<int,Particle>(particles_received_west[i].global_part_num(),particles_received_west[i])); if(particles_received_west[i].position().x()>grid_locations[received_info_iter->second][0] && particles_received_west[i].position().x()<grid_locations[received_info_iter->second][1]) { particle_properties[particles_received_west[i].global_part_num()].particle_in_box()[received_info_iter->second]=true; } counter_received++; counter_x_received++; } } else { //receive the message world.recv(received_info_iter->first,received_info_iter->second,particles_received_east); //loop through all the received particles and add them to the particle_grid for this processor for (unsigned int i=0;i<particles_received_east.size();i++) { particle_grid[2000000000-received_info_iter->second].insert(pair<int,Particle>(particles_received_east[i].global_part_num(),particles_received_east[i])); if(particles_received_east[i].position().x()>grid_locations[2000000000-received_info_iter->second][0] && particles_received_east[i].position().x()<grid_locations[2000000000-received_info_iter->second][1]) { particle_properties[particles_received_east[i].global_part_num()].particle_in_box()[2000000000-received_info_iter->second]=true; } counter_received++; counter_x_received++; } } } }
mpi::wait_all(reqs_y_send, reqs_y_send + particles_to_be_sent_bottom.size()+particles_to_be_sent_top.size()); mpi::wait_all(reqs_z_send, reqs_z_send + particles_to_be_sent_south.size()+particles_to_be_sent_north.size()); mpi::wait_all(reqs_x_send, reqs_x_send + particles_to_be_sent_west.size()+particles_to_be_sent_east.size()); cout<<"x sent "<<counter_x_sent<<" and received "<<counter_x_received<<" from rank "<<world.rank()<<endl; cout<<"rank "<<world.rank()<<" sent "<<counter_sent<<" and received "<<counter_received<<" and removed "<<counter_removed<<endl; cout<<"done passing"<<endl; }

    I only posted some of the code (so ignore the fact that some variables may appear to be undefined; they are in a portion of the code I didn't post). When I run the code (on the machine on which it fails), I get "done passing" but not "done passing particles". I am lost as to what could possibly cause a segmentation fault between the end of the called function and the next line in the calling function, and why it would happen on one machine and not another.

  • Having trouble hiding keyboard using invisible button which sits on top of UIScrollView

    - by phil
    I have 3 items in play: 1) a UIView that sits at the base of the hierarchy and contains the UIScrollView; 2) a UIScrollView that is presenting a lengthy user form; 3) an invisible button on the UIScrollView that I'm using to provide "hide the keyboard" features. Notice in the code below that I'm registering to be notified when the keyboard is going to appear and again when it's going to disappear. These are working great. My problem is seemingly one of "layers". See below where I insert the button into the view atIndex:0. This causes the button to be activated and "stuffed" behind the scrollview, so that when you click on it, the scrollview grabs the touch and the button is unaware. There is no way to "reach" the button and suppress the keyboard. However, if I insert atIndex:1, the button gets superimposed on top of the text entry fields, so any touch at all is acted upon by the button, which immediately suppresses the keyboard and then disappears. How do I insert the button on top of the UIScrollView but behind the UITextFields that sit there? Other logistics: I have a -(void) hidekeyboard function that I have set up with the UIButton as an IBAction(), and I have the UIButton connected to File's Owner via a ctrl-drag/drop. (Do I need both of those conventions?) This code is in viewDidLoad():

        [[NSNotificationCenter defaultCenter] addObserverForName:UIKeyboardWillShowNotification
                                                          object:nil
                                                           queue:nil
                                                      usingBlock:^(NSNotification *notification){
            [self.view insertSubview:self.keyboardDismissalButton atIndex:0];
        }];

  • CFfile -- value is not set to the queried data

    - by user494901
    I have this add-user form, which also doubles as an edit-user form by querying the data and setting value="#query.xvalue#". If the user exists (e.g. you're editing a user), it loads the user's data from the database. When doing this on the <cffile> field, it does not load the data; then, when the insert runs, it overwrites the database values with a blank string (if a user does not input a new file). How do I avoid this? The form:

        <br/>Digital Copy<br/>
        <!--- If null, set a default; if not, set the default to the database value --->
        <cfif len(Trim(certificationsList.cprAdultImage)) EQ 0>
            <cfinput type="file" required="no" name="cprAdultImage" value="" >
        <cfelse>
            File Exists: <cfoutput><a href="#certificationsList.cprAdultImage#">View File</a></cfoutput>
            <cfinput type="file" required="no" name="cprAdultImage" value="#certificationsList.cprAdultImage#">
        </cfif>

    The form processor:

        <!--- Has a file been specified? --->
        <cfif not len(Trim(form.cprAdultImage)) EQ 0>
            <cffile action="upload" filefield="cprAdultImage" destination="#destination#" nameConflict="makeUnique">
            <cfinvokeargument name="cprAdultImage" value="#pathOfFile##cffile.serverFile#">
        <cfelse>
            <cfinvokeargument name="cprAdultImage" value="">
        </cfif>

    The CFC arguments:

        <cfargument name="cprAdultExp" required="NO">
        <cfargument name="cprAdultCompany" type="string" required="no">
        <cfargument name="cprAdultImage" type="string" required="no">
        <cfargument name="cprAdultOnFile" type="boolean" required="no">

    The query:

        UPDATE mod_StudentCertifications
        SET cprAdultExp='#DateFormat(ARGUMENTS.cprAdultExp, "mm/dd/yyyy")#',
            cprAdultCompany='#Trim(ARGUMENTS.cprAdultCompany)#',
            cprAdultImage='#Trim(ARGUMENTS.cprAdultImage)#',
            cprAdultOnFile='#Trim(ARGUMENTS.cprAdultOnFile)#'

        INSERT INTO mod_StudentCertifications(
            cprAdultExp,
            cprAdultcompany,
            cprAdultImage,
            cprAdultOnFile

  • Loading Dimension Tables - Methodologies

    - by Nev_Rahd
    Hello, recently I have been working on a project where Dim tables need to be populated from EDW tables. The EDW tables are Type II, meaning they maintain historical data. When it comes to loading a Dim table, the source may be multiple EDW tables, or a single table with multi-level pivoting (on attributes). Meaning: there would be 10 records - one for each attribute - which need to be pivoted on domain_code to make a single row in the Dim. Out of these 10 records, some attributes would share the same domain_code but have different sub_domain_code values, which need further pivoting on subdomain code. For example: domain codes 01, 02, 03 are a straight pivot on domain code, but I would also have domain code 10 with subdomain code / version values 2006, 2007, 2008, 2009. That means I need to split my source table on the above attributes into two - one for domain code and the other for domain_code + version. So far so good. When it comes to loading the Dim table, as per the design specs for dimensions (originally written by a third party), what they want is: for every single change in an EDW attribute, it should assemble all the related records (for that NK) - meaning the new one plus the other attribute values which are current - process them to create a new Dim record, and insert it. That means if a single extract contains 100 updated records (one for each NK), it should assemble 100 + (100*9) records to insert into / update the Dim table. How good is this approach? The other way I tried is to simply do a lookup into the Dim table for that NK, get the values of the recent record (the attributes which have not changed), insert the new record, and update the current one. Which would be the better approach: assembling records on the source side for a one-attribute change, or looking up the Dim table's recent record and processing it? If this doesn't make sense, I'd be happy to elaborate further. Thanks
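
    A hedged sketch of the lookup-based variant in T-SQL (every table and column name below is invented for illustration; the post names none): copy the still-current attribute values forward from the dimension itself, override only the changed attribute, then expire the old row - no reassembly on the source side:

        DECLARE @old_id INT;
        SELECT @old_id = dim_id FROM dim_example WHERE nk = @nk AND is_current = 1;

        -- new version: the changed attribute comes from the extract,
        -- everything else is copied from the expiring row
        INSERT INTO dim_example (nk, attr_changed, attr_other1, attr_other2, valid_from, is_current)
        SELECT nk, @new_value, attr_other1, attr_other2, GETDATE(), 1
        FROM dim_example
        WHERE dim_id = @old_id;

        -- expire the old version
        UPDATE dim_example
        SET is_current = 0, valid_to = GETDATE()
        WHERE dim_id = @old_id;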

  • Entity Framework autoincrement key

    - by Tommy Ong
    I'm facing an issue of duplicated values in an incremental field under concurrency. I'm using EF as the ORM tool, attempting to insert an entity with a field that acts as an incremental INT field. This field is called "SequenceNumber": for each new record, before insert, the code reads the database using MAX to get the last SequenceNumber and appends 1 to it, then saves the changes. Between getting the last SequenceNumber and saving is where the concurrency problem happens. I'm not using the ID for SequenceNumber, as it is not a unique constraint and may reset on certain conditions, such as monthly, yearly, etc.

        InvoiceNumber  | SequenceNumber | DateCreated
        INV00001_08_14 | 1              | 25/08/2014
        INV00001_08_14 | 1              | 25/08/2014  <= (concurrency is creating two SeqNo 1)
        INV00002_08_14 | 2              | 25/08/2014
        INV00003_08_14 | 3              | 26/08/2014
        INV00004_08_14 | 4              | 27/08/2014
        INV00005_08_14 | 5              | 29/08/2014
        INV00001_09_14 | 1              | 01/09/2014  <= (sequence number reset)

    The invoice number is formatted based on the SequenceNumber. After some research I've ended up with these possible solutions, but I want to know the best practice: 1) optimistic concurrency, locking the table from any reads until the current transaction is completed (I'm not fond of this idea, as I suspect the performance impact would be great); 2) a stored procedure solely for this purpose, doing the select and insert in a single statement, so that the concurrency window is minimal (I would prefer an EF-based approach if possible).
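
    A hedged T-SQL sketch of the single-statement route (the Invoices table, InvoicePeriod column, and @period variable are assumptions; this common pattern serializes writers for one period but keeps the sequence gapless): take an update lock while reading the current maximum, so two concurrent inserts for the same period cannot both see the same value:

        BEGIN TRANSACTION;

        DECLARE @next INT;

        -- UPDLOCK/HOLDLOCK keep the read locked until commit, so a concurrent
        -- caller blocks here instead of reading the same MAX
        SELECT @next = ISNULL(MAX(SequenceNumber), 0) + 1
        FROM Invoices WITH (UPDLOCK, HOLDLOCK)
        WHERE InvoicePeriod = @period;

        INSERT INTO Invoices (InvoicePeriod, SequenceNumber, DateCreated)
        VALUES (@period, @next, GETDATE());

        COMMIT TRANSACTION;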

  • Issue with creating index organized table

    - by mtim
    I'm having a weird problem with an index-organized table. I'm running Oracle 11g Standard. I have a table src_table:

        SQL> desc src_table;
         Name            Null?    Type
         --------------- -------- ----------------------------
         ID              NOT NULL NUMBER(16)
         HASH            NOT NULL NUMBER(3)
         ........

        SQL> select count(*) from src_table;

          COUNT(*)
        ----------
          21108244

    Now let's create another table and copy 2 columns from src_table:

        SQL> set timing on
        SQL> create table dest_table(id number(16), hash number(20), type number(1));

        Table created.
        Elapsed: 00:00:00.01

        SQL> insert /*+ APPEND */ into dest_table (id,hash,type) select id, hash, 1 from src_table;

        21108244 rows created.
        Elapsed: 00:00:15.25

        SQL> ALTER TABLE dest_table ADD ( CONSTRAINT dest_table_pk PRIMARY KEY (HASH, id, TYPE));

        Table altered.
        Elapsed: 00:01:17.35

    It took Oracle < 2 min. Now the same exercise, but with an IOT table:

        SQL> CREATE TABLE dest_table_iot (
               id   NUMBER(16) NOT NULL,
               hash NUMBER(20) NOT NULL,
               type NUMBER(1)  NOT NULL,
               CONSTRAINT dest_table_iot_PK PRIMARY KEY (HASH, id, TYPE)
             ) ORGANIZATION INDEX;

        Table created.
        Elapsed: 00:00:00.03

        SQL> INSERT /*+ APPEND */ INTO dest_table_iot (HASH,id,TYPE) SELECT HASH, id, 1 FROM src_table;

    The "insert" into the IOT takes 18 hours!!! I have tried it on 2 different instances of Oracle, running on Windows and Linux, and got the same results. What is going on here? Why is it taking so long?
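
    A hedged thing to try (a common explanation rather than a confirmed diagnosis: rows arriving in random key order force Oracle to split and rewrite index leaf blocks constantly, whereas the heap insert just appends): feed the IOT its rows already sorted by the primary key, e.g.:

        -- presorting by the IOT's primary key lets leaf blocks fill sequentially
        INSERT /*+ APPEND */ INTO dest_table_iot (hash, id, type)
        SELECT hash, id, 1
        FROM src_table
        ORDER BY hash, id;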

  • MS Access: Permission problems with views

    - by Keith Williams
    "I'll use an Access ADP" I said, "it's only a tiny project and I've got better things to do", I said, "I can build an interface really quickly in Access" I said. </sarcasm> Sorry for the rant, but it's Friday, I have a date in just under two hours, and I'm here late because this just isn't working - so, in despair, I turn to SO for help. Access ADP front-end, linked to a SQL Server 2008 database Using a SQL Server account to log into the database (for testing); this account is a member of the role, "Api"; this role has SELECT, EXECUTE, INSERT, UPDATE, DELETE access to the "Api" schema The "Api" schema is owned by "dbo" All tables have a corresponding view in the Api schema: e.g. dbo.Customer -- Api.Customers The rationale is that users don't have direct table access, but can deal with views as if they were tables I can log into SQL using my test login, and it works fine: no access to the tables, but I can select, insert, update and delete from the Api views. In Access, I see the views, I can open them, but whenever I try to insert or update, I get the following error: The SELECT permission was denied on the object '[Table name which the view is using]', database '[database name]', schema 'dbo' Crazy as it sounds, Access seems to be trying to access the underlying table rather than the view. Any ideas?
