Search Results

Search found 955 results on 39 pages for 'inserts'.


  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem concerns the write speed of the computers (10 × 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (with appropriate database indexes in place). I am trying to read it from a PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write with all of them in order to speed up my data handling.

    As for the PostgreSQL table, the overall record count is 140 million, and I have 5 primary/foreign key referring tables. I am not using joins, as they do not scale, so for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts into each of the tables and their corresponding arrays. The queries are really simple:

        select * from x.train where tr_id=1     (primary key & indexed)
        select q_t from x.qt where q_id=2       (non-primary key but indexed)

    (and similarly for the other five queries). Each computer writes two HDF5 files, so the total count comes to around 20 files.

    Some calculations and statistics: total number of records: 143,700,000; records per file: 143,700,000 / 20 = 7,185,000; total number of entries in each file: 7,185,000 × 5 = 35,925,000.

    Current PostgreSQL database config: my current machine has 8 GB RAM with an i7 2nd-generation processor. I made the following changes to the PostgreSQL configuration file: shared_buffers: 2 GB; effective_cache_size: 4 GB.

    Note on current performance: I have run it for about ten hours, and the total number of records written to each file so far is about 621,000 × 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed the job will take about 11 days, which is too long for my experiments. Please suggest how I can improve this. Questions: 1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? If so, what is suggested or preferable? 2. If I change my PostgreSQL configuration file and increase the RAM, will it speed up the process? 3. Should I use multithreading? In that case, any links or pointers would be of great help. Thanks, Sree Aurovindh V
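    One direction worth trying, sketched below under stated assumptions (table names, column types and the connection string are illustrative, not from the question): stream rows out of PostgreSQL with a server-side cursor and append them to PyTables in large chunks, since one bulk append per batch is far cheaper than row-at-a-time writes.

        import psycopg2
        import tables

        BATCH = 50000

        conn = psycopg2.connect("dbname=mydb user=me")   # hypothetical connection
        cur = conn.cursor(name="train_stream")           # named = server-side cursor
        cur.execute("SELECT tr_id, q_id, value FROM x.train ORDER BY tr_id")

        h5 = tables.open_file("train.h5", mode="w")
        desc = {"tr_id": tables.Int64Col(pos=0),
                "q_id": tables.Int64Col(pos=1),
                "value": tables.Float64Col(pos=2)}
        tbl = h5.create_table("/", "train", desc, expectedrows=7185000)

        while True:
            rows = cur.fetchmany(BATCH)      # pulls one batch over the wire
            if not rows:
                break
            tbl.append(rows)                 # one bulk HDF5 append per batch
        tbl.flush()
        h5.close()
        conn.close()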

    Read the article

  • Gotchas INSERTing into SQLite on Android?

    - by paul.meier
    Hi friends, I'm trying to set up a simple SQLite database in Android, handling the schema via a subclass of SQLiteOpenHelper. However, when I query my tables, the columns I think I've inserted are never present. Namely, in SQLiteOpenHelper's onCreate(SQLiteDatabase db) method, I use db.execSQL() to run CREATE TABLE commands, then have tried both db.execSQL() and db.insert() to run INSERT commands on the tables I've just created. This appears to run fine, but when I try to query them I always get 0 rows returned (for debugging, the queries I'm running are simply SELECT * FROM table, checking the Cursor's getCount()). Has anybody run into anything like this before? These commands seem to run fine on command-line sqlite3. Are there gotchas that I'm missing (e.g. INSERTs must/must not be semicolon-terminated, or some issue involving multiple tables)? I've attached some of the code below. Thanks for your time, and let me know if I can clarify further.

        @Override
        public void onCreate(SQLiteDatabase db) {
            db.execSQL("CREATE TABLE "+ LEVEL_TABLE +" (" +
                       " "+ _ID +" INTEGER PRIMARY KEY AUTOINCREMENT," +
                       " level TEXT NOT NULL,"+
                       " rows INTEGER NOT NULL,"+
                       " cols INTEGER NOT NULL);");
            db.execSQL("CREATE TABLE "+ DYNAMICS_TABLE +" (" +
                       " level_id INTEGER NOT NULL," +
                       " row INTEGER NOT NULL,"+
                       " col INTEGER NOT NULL,"+
                       " type INTEGER NOT NULL);");
            db.execSQL("CREATE TABLE "+ SCORE_TABLE +" (" +
                       " level_id INTEGER NOT NULL," +
                       " score INTEGER NOT NULL,"+
                       " date_achieved DATE NOT NULL,"+
                       " name TEXT NOT NULL);");
            this.enterFirstLevel(db);
        }

    And a sample of the insert code I'm currently using, which gets called in enterFirstLevel() (some values hard-coded just to get it running...):

        private void insertDynamic(SQLiteDatabase db, int row, int col, int type) {
            ContentValues values = new ContentValues();
            values.put("level_id", "1");
            values.put("row", Integer.toString(row));
            values.put("col", Integer.toString(col));
            values.put("type", Integer.toString(type));
            db.insertOrThrow(DYNAMICS_TABLE, "col", values);
        }

    Finally, the query code looks like this:

        private Cursor fetchLevelDynamics(int id) {
            SQLiteDatabase db = this.leveldata.getReadableDatabase();
            try {
                String fetchQuery = "SELECT * FROM " + DYNAMICS_TABLE;
                String[] queryArgs = new String[0];
                Cursor cursor = db.rawQuery(fetchQuery, queryArgs);
                Activity activity = (Activity) this.context;
                activity.startManagingCursor(cursor);
                return cursor;
            } finally {
                db.close();
            }
        }
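    Setting the Android specifics aside for a moment, one classic way inserts "disappear" in SQLite is a connection being closed with an uncommitted transaction still open. A minimal sketch of that gotcha in Python's sqlite3 (file and table names made up), shown here only as an analogy to check against:

        import sqlite3

        conn = sqlite3.connect("levels.db")
        conn.execute("CREATE TABLE IF NOT EXISTS dynamics "
                     "(level_id INTEGER, row INTEGER, col INTEGER, type INTEGER)")
        conn.execute("INSERT INTO dynamics VALUES (1, 0, 0, 2)")
        conn.commit()   # omit this and the insert is rolled back on close
        conn.close()

        conn = sqlite3.connect("levels.db")
        print(conn.execute("SELECT COUNT(*) FROM dynamics").fetchone()[0])  # -> 1
        conn.close()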

    Read the article

  • javascript works with mozilla but not with webkit based browsers

    - by GlassGhost
    I'm having problems with a CSS text variable in this JavaScript on WebKit-based browsers (Chrome & Safari), but it works in Firefox 3.6:

        importScript('User:Gerbrant/hidePane.js'); // Special thanks to Gerbrant for this wonderful script

        function addGlobalStyle(css) {
            var head = document.getElementsByTagName('head')[0];
            if (!head) { return; }
            var style = document.createElement('style');
            style.type = 'text/css';
            style.rel = 'stylesheet';
            style.media = 'screen';
            style.href = 'FireFox.css';
            style.innerHTML = css;
            head.appendChild(style);
        }

        // The page styling
        var NewSyleText =
            "h1, h2, h3, h4, h5 {font-family: 'Verdana','Helvetica',sans-serif; font-style: normal; font-weight:normal;}" +
            "body, b {background: #fbfbfb; font-style: normal; font-family: 'Cochin','GaramondNo8','Garamond','Big Caslon','Georgia','Times',serif;font-size: 11pt;}" +
            "p { margin: 0pt; text-indent:1.25em; margin-top: 0.3em; }" +
            "a { text-decoration: none; color: Navy; background: none;}" +
            "a:visited { color: #500050;}" +
            "a:active { color: #faa700;}" +
            "a:hover { text-decoration: underline;}" +
            "a.stub { color: #772233;}" +
            "a.new, #p-personal a.new { color: #ba0000;}" +
            "a.new:visited, #p-personal a.new:visited { color: #a55858;}" +
            "a.new, #quickbar a.new { color: #CC2200; }" +
            /* removes "From Wikipedia, the free encyclopedia" for those of you who actually know what site you are on */
            "#siteSub { display: none; }" +
            /* hides the speaker icon in some articles */
            "#spoken-icon .image { display:none;}" +
            /* KHTMLFix.css */
            "#column-content { margin-left: 0; }" +
            /* Remove contents of the footer, but not the footer itself */
            "#f-poweredbyico, #f-copyrightico { display:none;}" +
            /* Needed to show the star icon in a featured article correctly */
            "#featured-star div div { line-height: 10px;}" +
            /* And the content expands to top and left */
            "#content {margin: 0; padding: 0; background:none;}" +
            "#content div.thumb {border-color:white;}" +
            /* Hiding the bar under the entry header */
            "h1.firstHeading { border-bottom: none;}" +
            /* Used for US city entries */
            "#coordinates { top:1.2em;}";

        addGlobalStyle(NewSyleText); // inserts the page styling

    Read the article

  • What is causing this SQL 2005 Primary Key Deadlock between two real-time bulk upserts?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that has live updating stock prices. I've got one process called 'LiveFeed' which reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table, then insert/update to MDC table' approach (BulkUpsert). I've got another process which then reads this data, computes other data, and saves the results back into the same table, using a similar BulkUpsert stored proc. Thirdly, there is a multitude of users running a C# GUI polling the MDC table and reading updates from it.

    Now, during the day when the data is changing rapidly, things run pretty smoothly, but after market hours we've recently started seeing an increasing number of deadlock exceptions coming out of the database; nowadays we see 10-20 a day. The important thing to note here is that these happen when the values are NOT changing. Here's all the relevant info.

    Table def:

        CREATE TABLE [dbo].[MarketDataCurrent](
            [MDID] [int] NOT NULL,
            [LastUpdate] [datetime] NOT NULL,
            [Value] [float] NOT NULL,
            [Source] [varchar](20) NULL,
            CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED
            ( [MDID] ASC )
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    Stack Overflow won't let me post images until my reputation goes up to 10, so I'll add them as soon as you bump me up, hopefully as a result of this question. ![alt text][1] [1]: http://farm5.static.flickr.com/4049/4690759452_6b94ff7b34.jpg

    I've got a SQL Profiler trace running, catching the deadlocks, and here's what all the graphs look like. ![alt text][2] [2]: http://farm5.static.flickr.com/4035/4690125231_78d84c9e15_b.jpg

    Process 258 is calling the following 'BulkUpsert' stored proc repeatedly, while 73 is calling the next one:

        ALTER proc [dbo].[MarketDataCurrent_BulkUpload]
            @updateTime datetime,
            @source varchar(10)
        as
        begin transaction
        update c with (rowlock)
            set LastUpdate = getdate(), Value = t.Value, Source = @source
        from MarketDataCurrent c
        INNER JOIN #MDTUP t ON c.MDID = t.mdid
        where c.lastUpdate < @updateTime
          and c.mdid not in (select mdid from MarketData
                             where LiveFeedTicker is not null
                               and PriceSource like 'LiveFeed.%')
          and c.value <> t.value

        insert into MarketDataCurrent with (rowlock)
        select MDID, getdate(), Value, @source from #MDTUP
        where mdid not in (select mdid from MarketDataCurrent with (nolock))
          and mdid not in (select mdid from MarketData
                           where LiveFeedTicker is not null
                             and PriceSource like 'LiveFeed.%')
        commit

    And the other one:

        ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload]
        AS
        begin transaction
        -- Update existing mdid
        UPDATE c WITH (ROWLOCK)
            SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
        FROM MarketDataCurrent c
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;

        -- Insert new MDID
        INSERT INTO MarketDataCurrent with (ROWLOCK)
        SELECT * FROM #TEMPTABLE2
        WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK))

        -- Clean up the temp table
        DELETE #TEMPTABLE2
        commit

    To clarify, those temp tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a unique constraint instead, but that increased the number of deadlocks 10-fold. I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!
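    Independent of the root cause, one client-side mitigation is to treat deadlock victims (SQL Server error 1205) as retryable, since the upsert procs look idempotent. A hedged sketch in Python with pyodbc (proc name taken from the question, connection details assumed):

        import time
        import pyodbc

        def bulk_upsert_with_retry(conn, update_time, source, retries=3):
            # Retry when this session is chosen as the deadlock victim (1205).
            for attempt in range(retries):
                try:
                    cur = conn.cursor()
                    cur.execute("{CALL dbo.MarketDataCurrent_BulkUpload (?, ?)}",
                                update_time, source)
                    conn.commit()
                    return
                except pyodbc.Error as exc:
                    conn.rollback()
                    if "1205" in str(exc) and attempt < retries - 1:
                        time.sleep(0.1 * (attempt + 1))   # brief backoff, then retry
                    else:
                        raise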

    Read the article

  • Inserting checkbox values

    - by rabeea
    Hey, I have a registration form that has checkboxes along with other fields. I can't insert the selected checkbox values into the database. I have made one field in the database for storing all checked values. This is the code for the checkbox part of the form:

        Websites, IT and Software
        Writing and Content
        <pre><input type="checkbox" name="expertise[]" value="Design and Media"> Design and Media
        <input type="checkbox" name="expertise[]" value="Data entry and Admin"> Data entry and Admin
        </pre>
        <pre><input type="checkbox" name="expertise[]" value="Engineering and Skills"> Engineering and Science
        <input type="checkbox" name="expertise[]" value="Seles and Marketing"> Sales and Marketing
        </pre>
        <pre><input type="checkbox" name="expertise[]" value="Business and Accounting"> Business and Accounting
        <input type="checkbox" name="expertise[]" value="Others"> Others
        </pre>

    And this is the corresponding PHP code for inserting the data:

        $checkusername = mysql_query("SELECT * FROM freelancer WHERE fusername='{$_POST['username']}'");
        if (mysql_num_rows($checkusername) == 1) {
            echo "username already exist";
        } else {
            $query = "insert into freelancer(ffname,flname,fgender,femail,fusername,fpwd,fphone,fadd,facc,facc_name,fbank_details,fcity,fcountry,fexpertise,fprofile,fskills,fhourly_rate,fresume) values ('".$_POST['first_name']."','".$_POST['last_name']."','".$_POST['gender']."','".$_POST['email']."','".$_POST['username']."','".$_POST['password']."','".$_POST['phone']."','".$_POST['address']."','".$_POST['acc_num']."','".$_POST['acc_name']."','".$_POST['bank']."','".$_POST['city']."','".$_POST['country']."','".implode(',',$_POST['expertise'])."','".$_POST['profile']."','".$_POST['skills']."','".$_POST['rate']."','".$_POST['resume']."')";
            $result = ($query) or die (mysql_error());

    This code inserts data for all fields, but the checkbox value field remains empty. Why?
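    Two things stand out in the snippet above: $result = ($query) only assigns the string and never calls mysql_query(), and interpolating $_POST values straight into the SQL invites injection. The implode(',', $_POST['expertise']) idea itself is fine; here is the same "store the checked boxes as one comma-separated field" shape sketched in Python with a parameterized query (SQLite and the trimmed column list are stand-ins):

        import sqlite3

        expertise = ["Design and Media", "Others"]      # as posted from the form

        conn = sqlite3.connect("freelancer.db")
        conn.execute("CREATE TABLE IF NOT EXISTS freelancer "
                     "(fusername TEXT, fexpertise TEXT)")
        conn.execute("INSERT INTO freelancer (fusername, fexpertise) VALUES (?, ?)",
                     ("rabeea", ",".join(expertise)))   # one comma-separated field
        conn.commit()
        conn.close()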

    Read the article

  • How to salvage SQL server 2008 query from KILLED/ROLLBACK state without waiting half a day?

    - by littlegreen
    I have a stored procedure that inserts batches of millions of rows, emerging from a certain query, into an SQL database. It has one parameter selecting the batch; when this parameter is omitted, it gathers a list of batches and recursively calls itself in order to iterate over the batches. In (pseudo-)code, it looks something like this:

        CREATE PROCEDURE spProcedure AS
        BEGIN
            IF @code = 0
            BEGIN
                ...
                WHILE @@Fetch_Status = 0
                BEGIN
                    EXEC spProcedure @code
                    FETCH NEXT ... INTO @code
                END
            END
            ELSE
            BEGIN
                -- Disable indexes
                ...
                INSERT INTO table
                SELECT (...)
                -- Enable indexes
                ...

    Now it can happen that this procedure is slow, for whatever reason: it can't get a lock, or one of the indexes it uses is misdefined or disabled. In that case, I want to be able to kill the procedure, truncate and recreate the resulting table, and try again. However, when I try to kill the procedure, the process frequently oozes into a KILLED/ROLLBACK state from which there seems to be no return. From Google I have learned to do an sp_lock, find the spid, and then kill it with KILL <spid>. But when I try to kill it, it tells me:

        SPID 75: transaction rollback in progress. Estimated rollback completion: 0%. Estimated time remaining: 554 seconds.

    I did find a forum message hinting that another spid should be killed before the first one can start a rollback. But that didn't work for me either, plus I do not understand why that would be the case... could it be because I am recursively calling my own stored procedure? (But it should be running under the same spid, right?) In any case, my process is just sitting there, being dead, not responding to kills, and locking the table. This is very frustrating, as I want to go on developing my queries, not wait for hours while my server sits dead pretending to finish a supposed rollback. Is there some way in which I can tell the server not to store any rollback information for my query? Or not to allow any other queries to interfere with the rollback, so that it will not take so long? Or how to rewrite my query in a better way, or how to kill the process successfully without restarting the server?
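    One way to avoid the long rollback in the first place is to keep each transaction small, so that a KILL only has to undo the current batch rather than hours of work. A sketch of that driver loop in Python with pyodbc (assuming the batch proc is restartable and takes the batch code as a parameter):

        import pyodbc

        def load_batches(conn, codes):
            cur = conn.cursor()
            for code in codes:
                cur.execute("EXEC spProcedure @code = ?", code)
                conn.commit()   # small unit of work: a KILL now rolls back one batch only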

    Read the article

  • Problem creating a font using a custom Ant task that extends LWUIT's FontTask

    - by Smithy
    Hi. I am new to LWUIT and J2ME, and I am building a J2ME application for showing Japanese text vertically. The phonetic-symbol part of the text should be shown in a relatively small font size (about half the size of the text), small kanas need to be shown as normal ones, some 'vertical only' characters need to be put into the Private Use Area, etc. I tried to build this font into a bitmap font using the FontTask Ant task LWUIT provides, but found that it does not support the customizations mentioned above. So I decided to write my own task and add those. Below is what I have achieved:

    1. An Ant task extending the LWUITTask task to support a new nested element <verticalfont>:

        public class VerticalFontBuildTask extends LWUITTask {
            public void addVerticalfont(VerticalFontTask anVerticalFont) {
                super.addFont(anVerticalFont);
            }
        }

    2. The VerticalFontTask task, which extends the original FontTask. Instead of inserting an EditorFont object, it inserts a VerticalEditorFont object (derived from EditorFont) into the resource:

        public class VerticalFontTask extends FontTask {
            // some constants are omitted
            public VerticalFontTask() {
                StringBuilder sb = new StringBuilder();
                sb.append(UPPER_ALPHABET);
                sb.append(UPPER_ALPHABET.toLowerCase());
                sb.append(HALFWIDTH);
                sb.append(HIRAGANA);
                sb.append(HIRAGANA_SMALL);
                sb.append(KATAKANA);
                sb.append(KATAKANA_SMALL);
                sb.append(WIDE);
                this.setCharset(sb.toString());
            }

            @Override
            public void addToResources(EditableResources e) {
                log("Putting rigged font into resource...");
                super.addToResources(e);
                // antialias settings
                Object aa = this.isAntiAliasing()
                    ? RenderingHints.VALUE_TEXT_ANTIALIAS_ON
                    : RenderingHints.VALUE_TEXT_ANTIALIAS_OFF;
                VerticalEditorFont ft = new VerticalEditorFont(
                    Font.createSystemFont(this.systemFace, this.systemStyle, this.systemSize),
                    null, getLogicalName(), isCreateBitmap(), aa, getCharset());
                e.setFont(getName(), ft);
            }
        }

    VerticalEditorFont is just a bunch of methods that log to output and call super. I am still trying to figure out how to extend it. But things are not going well: none of the methods on the VerticalEditorFont object get called when executing this task. My questions are: 1. Where did I go wrong? 2. I want to embed a TrueType font to support larger screens. I only need a small part of the font inside my application, and I don't want it to carry a font resource weighing 1-2 MB. Is there a way to extract only the characters needed and pack them into LWUIT?

    Read the article

  • Parsing XML file using a for loop

    - by Johnny Spintel
    I have been working on this program, which inserts an XML file into a MySQL database. I'm new to the whole .jar idea of adding packages. I'm having an issue with parse(), select(), and children(). Can someone tell me how I could fix this issue? Here are my stack trace and my program below:

        Exception in thread "main" java.lang.Error: Unresolved compilation problems:
            The method select(String) is undefined for the type Document
            The method children() is undefined for the type Element
            The method children() is undefined for the type Element
            The method children() is undefined for the type Element
            The method children() is undefined for the type Element
            at jdbc.parseXML.main(parseXML.java:28)

        import java.io.*;
        import java.sql.*;
        import org.jsoup.Jsoup;
        import org.w3c.dom.*;
        import javax.xml.parsers.*;

        public class parseXML {
            public static void main(String xml) {
                try {
                    BufferedReader br = new BufferedReader(new FileReader(new File("C:\\staff.xml")));
                    String line;
                    StringBuilder sb = new StringBuilder();
                    while ((line = br.readLine()) != null) {
                        sb.append(line.trim());
                    }
                    Document doc = Jsoup.parse(line);
                    StringBuilder queryBuilder;
                    StringBuilder columnNames;
                    StringBuilder values;
                    for (Element row : doc.select("row")) {
                        // Start the query
                        queryBuilder = new StringBuilder("insert into customer(");
                        columnNames = new StringBuilder();
                        values = new StringBuilder();
                        for (int x = 0; x < row.children().size(); x++) {
                            // Append the column name and its value
                            columnNames.append(row.children().get(x).tagName());
                            values.append(row.children().get(x).text());
                            if (x != row.children().size() - 1) {
                                // If this is not the last item, append a comma
                                columnNames.append(",");
                                values.append(",");
                            } else {
                                // Otherwise, add the closing parenthesis
                                columnNames.append(")");
                                values.append(")");
                            }
                        }
                        // Add the column names and values to the query
                        queryBuilder.append(columnNames);
                        queryBuilder.append(" values(");
                        queryBuilder.append(values);
                        // Print the query
                        System.out.println(queryBuilder);
                    }
                } catch (Exception err) {
                    System.out.println(" " + err.getMessage());
                }
            }
        }
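    For what it's worth, the compile errors come from mixing parsers: org.w3c.dom.* also defines Document and Element, so those types shadow jsoup's org.jsoup.nodes versions, and the w3c Document has no select() while its Element has no children(). Note too that Jsoup.parse(line) parses line, which is null after the read loop, rather than sb.toString(), and jsoup is an HTML parser besides. The same rows-to-INSERT idea sketched with a plain XML parser in Python (file name, customer table and the parameterized query are illustrative):

        import sqlite3
        import xml.etree.ElementTree as ET

        conn = sqlite3.connect("customers.db")   # assumes a matching customer table
        tree = ET.parse("staff.xml")             # a real XML parser

        for row in tree.getroot().iter("row"):
            cols = [child.tag for child in row]
            vals = [child.text for child in row]
            sql = "INSERT INTO customer (%s) VALUES (%s)" % (
                ", ".join(cols), ", ".join("?" * len(vals)))
            conn.execute(sql, vals)              # values bound, not concatenated

        conn.commit()
        conn.close()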

    Read the article

  • PHP plugin to replace '->' with '.' as the member access operator? Or even better: alternative syntax?

    - by Gigi
    Present-day usable solution: note that if you use an IDE or an advanced editor, you can make a code template, or record a macro that inserts '->' when you press Ctrl and '.' or something. NetBeans has macros, and I have recorded a macro for this, and I like it a lot :) (Just click the red circle toolbar button (start recording a macro), then type -> into the editor (that's all the macro will do, insert the arrow into the editor), then click the gray square (stop recording) and assign the 'Ctrl dot' shortcut to it, or whatever shortcut you like.)

    The PHP plugin: the PHP plugin would also have to have a different string concatenation operator than the dot. Maybe a double dot? Yeah... why not. All it has to do is set an activation tag so that it doesn't replace/interpret '.' as '->' for old scripts and scripts that don't intend to use this. Something like this (notice the modified '<?php' tag as '<?php+'):

        <?php+ $obj.i = 5 ?>

    This way it wouldn't break old code. (And you can just add the '<?php+' code template to your editor, and then typing 'php tab' (for NetBeans) would insert '<?php+'.)

    With the alternative-syntax method you could even have old and new syntax cohabiting on the same page, like this (I am illustrating this to show the great compatibility of this method, not because you would want to do it):

        <?php+ $obj.i = 5; ?>
        <?php $obj->str = 'a' . 'b'; ?>

    You could change the tag to something more explanatory, in case somebody who doesn't know about the plugin reads the script and thinks it's a syntax error:

        <?php-dot.com $obj.i = 5; ?>

    This is easy because most editors have code templates, so it's easy to assign a shortcut to it. And whoever doesn't want the dot replacement doesn't have to use it. These are NOT ultimate solutions; they are ONLY examples to show that solutions exist, and that arguments against replacing '->' with '.' are only excuses. (Just admit you like the arrow, it's OK :) With this potential method, nobody who doesn't want to use it would have to use it, and it wouldn't break old code. And if other problems (ahem... excuses) arise, they could be fixed too. So who can, and who will, do such a thing?

    Read the article

  • What web platform is right for me?

    - by egervari
    I've been looking at web frameworks like Rails, Grails, etc. I'm used to doing applications in the Spring Framework with Hibernate... and I want something more productive. One of the things I realized is that while some of the things in Grails are sexy, there are some serious problems with it. Grails' controllers:

    1) are implemented awfully. They don't seem to be able to extend from superclasses at runtime. I tried this to add base actions and helper methods, and it seems to cause Grails to blow up.
    2) are based on an obsolete request-parameters model (rather than form-backing objects, which are much nicer).
    3) are hard to test. Command objects are treated totally differently... and it's actually MUCH harder to write the test than it is to write the controller code.
    4) Command objects operate totally differently. They are pre-validated and bound, which causes a lot of inconsistencies compared with the basic parameter model.
    5) Command objects are not reusable, and it's a pain in the rear to reuse most of the stuff from the domain classes, like constraints and fields. This is TRIVIAL to do in basic Spring. Why the hell is it not trivial to do in Grails?
    6) The scaffolding that is generated is pure crap. It doesn't generalize inserts and updates... and it actually copy/pastes a pile of code into two views: create.gsp and edit.gsp. The views themselves are gargantuan piles of doggie do-do. This is further compounded by the fact that it uses low-level parameters and not objects.

    Integration tests are 30x slower than a Spring integration test. It is disgusting. Some mocking tests are so hard to write, and aren't guaranteed to work when deployed, that I think it discourages fast TDD test cycles. Most things seem to screw up Grails while it's running, like adding a taglib, or anything really. The server-restart problem wasn't solved at all. I'm starting to think going with Spring/Hibernate/Java is the only way to go. While there is a pretty big cost at startup, I know it'll eventually smooth out. It sucks that I can't use a language like Scala... because idiomatically it is so incompatible with Hibernate. This app is also not a run-of-the-mill UI over a database. It's got some of that, but it's not going to be a slouch. I am deathly scared of Grails now because of how crap it is in the controller layer. Suggestions on what I can do?

    Read the article

  • Create attribute in existing XML

    - by user560411
    Hello. I have the following PHP code that inserts data into XML, and it works correctly. However, I want to add an ID number, like this:

        <CD id="xxxx">

    My question is: how can I add an ID to CD for this to work? I use a form to pass the ID in. The main code:

        <?php
        $CD = array(
            'TITLE' => $_POST['title'],
            'BAND'  => $_POST['band'],
            'YEAR'  => $_POST['year'],
        );

        $doc = new DOMDocument();
        $doc->load( 'insert.xml' );
        $doc->formatOutput = true;

        $r = $doc->getElementsByTagName("CATEGORIES")->item(0);
        $b = $doc->createElement("CD");

        $TITLE = $doc->createElement("TITLE");
        $TITLE->appendChild( $doc->createTextNode( $CD["TITLE"] ) );
        $b->appendChild( $TITLE );

        $BAND = $doc->createElement("BAND");
        $BAND->appendChild( $doc->createTextNode( $CD["BAND"] ) );
        $b->appendChild( $BAND );

        $YEAR = $doc->createElement("YEAR");
        $YEAR->appendChild( $doc->createTextNode( $CD["YEAR"] ) );
        $b->appendChild( $YEAR );

        $r->appendChild( $b );
        $doc->save("insert.xml");
        ?>

    The XML file:

        <?xml version="1.0" encoding="utf-8"?>
        <MY_CD>
          <CATEGORIES>
            <CD>
              <TITLE>NEVER MIND THE BOLLOCKS</TITLE>
              <BAND>SEX PISTOLS</BAND>
              <YEAR>1977</YEAR>
            </CD>
            <CD>
              <TITLE>NEVERMIND</TITLE>
              <BAND>NIRVANA</BAND>
              <YEAR>1991</YEAR>
            </CD>
          </CATEGORIES>
        </MY_CD>
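    With DOMDocument this is a one-liner on the element you already create: call $b->setAttribute('id', $id) right after $doc->createElement("CD"), with $id coming from the form. The same idea sketched in Python's ElementTree, with a made-up id value:

        import xml.etree.ElementTree as ET

        tree = ET.parse("insert.xml")
        categories = tree.getroot().find("CATEGORIES")

        cd = ET.SubElement(categories, "CD", attrib={"id": "1001"})  # <CD id="1001">
        ET.SubElement(cd, "TITLE").text = "LONDON CALLING"
        ET.SubElement(cd, "BAND").text = "THE CLASH"
        ET.SubElement(cd, "YEAR").text = "1979"

        tree.write("insert.xml", encoding="utf-8", xml_declaration=True)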

    Read the article

  • JSON datetime to SQL Server database via WCF

    - by moikey
    I have noticed a problem over the past couple of days where the dates submitted to a SQL Server database are wrong. I have a webpage where users can book facilities. This webpage takes a name, a date, a start time and an end time (BookingID is required for transactions but generated by the database), which I format as a JSON string as follows:

        {"BookingEnd":"\/Date(2012-26-03 09:00:00.000)\/","BookingID":1,"BookingName":"client test 1","BookingStart":"\/Date(2012-26-03 10:00:00.000)\/","RoomID":4}

    This is then passed to a WCF service, which handles the database insert as follows:

        [WebInvoke(Method = "POST", RequestFormat = WebMessageFormat.Json, UriTemplate = "createbooking")]
        void CreateBooking(Booking booking);

        [DataContract]
        public class Booking
        {
            [DataMember] public int BookingID { get; set; }
            [DataMember] public string BookingName { get; set; }
            [DataMember] public DateTime BookingStart { get; set; }
            [DataMember] public DateTime BookingEnd { get; set; }
            [DataMember] public int RoomID { get; set; }
        }

    Booking.svc:

        public void CreateBooking(Booking booking)
        {
            BookingEntity bookingEntity = new BookingEntity()
            {
                BookingName = booking.BookingName,
                BookingStart = booking.BookingStart,
                BookingEnd = booking.BookingEnd,
                RoomID = booking.RoomID
            };
            BookingsModel model = new BookingsModel();
            model.CreateBooking(bookingEntity);
        }

    Booking model:

        public void CreateBooking(BookingEntity booking)
        {
            using (var conn = new SqlConnection("Data Source=cpm;Initial Catalog=BookingDB;Integrated Security=True"))
            using (var cmd = conn.CreateCommand())
            {
                conn.Open();
                cmd.CommandText = @"IF NOT EXISTS
                    ( SELECT * FROM Bookings
                      WHERE BookingStart = @BookingStart
                        AND BookingEnd = @BookingEnd
                        AND RoomID = @RoomID )
                    INSERT INTO Bookings ( BookingName, BookingStart, BookingEnd, RoomID )
                    VALUES ( @BookingName, @BookingStart, @BookingEnd, @RoomID )";
                cmd.Parameters.AddWithValue("@BookingName", booking.BookingName);
                cmd.Parameters.AddWithValue("@BookingStart", booking.BookingStart);
                cmd.Parameters.AddWithValue("@BookingEnd", booking.BookingEnd);
                cmd.Parameters.AddWithValue("@RoomID", booking.RoomID);
                cmd.ExecuteNonQuery();
                conn.Close();
            }
        }

    This updates the database, but the time ends up as "1970-01-01 00:00:02.013" each time I submit a date in the above JSON format. However, when I do a query in SQL Server Management Studio with the above date format ("YYYY-MM-DD HH:MM:SS.mmm"), it inserts the correct values. Also, if I submit a millisecond datetime to the WCF service, the correct date is inserted. The problem seems to be the format I am submitting. I am a little lost with this problem and don't really see why it is doing this. Any help would be greatly appreciated. Thanks.
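    The 1970 result is the giveaway: WCF's DataContractJsonSerializer expects \/Date(n)\/ where n is milliseconds since the Unix epoch (UTC), not a formatted date string, so the unparseable string decodes to a value near the epoch. A sketch of producing the expected wire format from Python (field values copied from the question):

        import json
        from datetime import datetime, timezone

        def to_wcf_date(dt):
            # WCF JSON dates: milliseconds since 1970-01-01T00:00:00Z
            ms = int(dt.replace(tzinfo=timezone.utc).timestamp() * 1000)
            return "/Date(%d)/" % ms

        booking = {
            "BookingID": 1,
            "BookingName": "client test 1",
            "BookingStart": to_wcf_date(datetime(2012, 3, 26, 10, 0, 0)),
            "BookingEnd": to_wcf_date(datetime(2012, 3, 26, 9, 0, 0)),
            "RoomID": 4,
        }
        print(json.dumps(booking))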

    Read the article

  • How can I represent a line of music notes in a way that allows fast insertion at any index?

    - by chairbender
    For "fun", and to learn functional programming, I'm developing a program in Clojure that does algorithmic composition using ideas from this theory of music called "Westergaardian Theory". It generates lines of music (where a line is just a single staff consisting of a sequence of notes, each with pitches and durations). It basically works like this: Start with a line consisting of three notes (the specifics of how these are chosen are not important). Randomly perform one of several "operations" on this line. The operation picks randomly from all pairs of adjacent notes that meet a certain criteria (for each pair, the criteria only depends on the pair and is independent of the other notes in the line). It inserts 1 or several notes (depending on the operation) between the chosen pair. Each operation has its own unique criteria. Continue randomly performing these operations on the line until the line is the desired length. The issue I've run into is that my implementation of this is quite slow, and I suspect it could be made faster. I'm new to Clojure and functional programming in general (though I'm experienced with OO), so I'm hoping someone with more experience can point out if I'm not thinking in a functional paradigm or missing out on some FP technique. My current implementation is that each line is a vector containing maps. Each map has a :note and a :dur. :note's value is a keyword representing a musical note like :A4 or :C#3. :dur's value is a fraction, representing the duration of the note (1 is a whole note, 1/4 is a quarter note, etc...). So, for example, a line representing the C major scale starting on C3 would look like this: [ {:note :C3 :dur 1} {:note :D3 :dur 1} {:note :E3 :dur 1} {:note :F3 :dur 1} {:note :G3 :dur 1} {:note :A4 :dur 1} {:note :B4 :dur 1} ] This is a problematic representation because there's not really a quick way to insert into an arbitrary index of a vector. But insertion is the most frequently performed operation on these lines. My current terrible function for inserting notes into a line basically splits the vector using subvec at the point of insertion, uses conj to join the first part + notes + last part, then uses flatten and vec to make them all be in a one-dimensional vector. For example if I want to insert C3 and D3 into the the C major scale at index 3 (where the F3 is), it would do this (I'll use the note name in place of the :note and :dur maps): (conj [C3 D3 E3] [C3 D3] [F3 G3 A4 B4]), which creates [C3 D3 E3 [C3 D3] [F3 G3 A4 B4]] (vec (flatten previous-vector)) which gives [C3 D3 E3 C3 D3 F3 G3 A4 B4] The run time of that is O(n), AFAIK. I'm looking for a way to make this insertion faster. I've searched for information on Clojure data structures that have fast insertion but haven't found anything that would work. I found "finger trees" but they only allow fast insertion at the start or end of the list. Edit: I split this into two questions. The other part is here.

    Read the article

  • SQL Design Question regarding schema and if Name value pair is the best solution

    - by Aur
    I am having a small problem trying to decide on a database schema for a current project. I am by no means a DBA. The application parses through a file based on user input and enters that data in the database. The number of fields that can be parsed is between 1 and 42 at the current moment. The current design of the database is entirely flat, with 42 columns; some are repeated columns such as address1, address2, address3, etc. That says I should normalize the data. However, data integrity is not needed at this moment, and the way the data is shaped I'm looking at several joins. Not a bad thing, but the data is still in a 1-to-1 relationship, and I still see a lot of empty fields per row.

    So my concern is that this does not allow the database or the application to be very extensible. If they want to add more fields to be parsed (which they do), then I'd need to create another table and add another foreign key to the linking table. The third option is a table where the fields are defined and a table for each record. So what I was thinking is to make a table that stores the value and then links to those two tables. The problem is I can picture the size of that table growing large, depending on the input size. If someone gives me a file with 300,000 records, then 300,000 x 40 = 12 million rows, so I have some reservations. However, I think if I get to that point I should be happy it is being used. This option also allows for more customized display of information, albeit with a bit more work but little rework even if more fields are added.

    So the problem boils down to: 1. The current design is a flat file, which makes extending it hard, and it is not normalized. 2. Normalizing the tables brings no real benefit for the moment, but requirements change. 3. Normalize it down into name-value pairs and hope size doesn't hurt. There are a large number of inserts, updates, and selects against that table, so performance is a worry, but I believe the saying is "design now, performance-test later"? I'm probably just missing something practical, so any comments would be appreciated, even if it's a quick sanity check. Thank you for your time.
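    For concreteness, a minimal sketch of option 3, the name-value (EAV) layout, in SQLite DDL driven from Python; all names are illustrative:

        import sqlite3

        conn = sqlite3.connect("parsed.db")
        conn.executescript("""
        CREATE TABLE IF NOT EXISTS field  (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
        CREATE TABLE IF NOT EXISTS record (id INTEGER PRIMARY KEY);
        CREATE TABLE IF NOT EXISTS record_value (
            record_id INTEGER NOT NULL REFERENCES record(id),
            field_id  INTEGER NOT NULL REFERENCES field(id),
            value     TEXT,
            PRIMARY KEY (record_id, field_id)   -- one value per field per record
        );
        """)
        conn.commit()
        conn.close()

    With this shape, adding a 43rd field is an INSERT into field rather than an ALTER TABLE; the cost is that reading a record back means a join or pivot instead of one flat row.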

    Read the article

  • How would I implement code in a .h file into the main.cpp file?

    - by Lea
    I have a C++ project I am working on. I am a little stumped at the moment and need a little help. I need to implement code from the .h file in the main.cpp file, and I am not sure how to do that. For example, code from main.cpp:

        switch (choice) {
            case 1: // open an account
            {
                cout << "Please enter the opening balance: $ ";
                cin >> openBal;
                cout << endl;
                cout << "Please enter the account number: ";
                cin >> accountNum;
                cout << endl;
                break;
            }
            case 2: // check an account
            {
                cout << "Please enter the account number: ";
                cin >> accountNum;
                cout << endl;
                break;
            }

    And code from the .h file:

        void display(ostream& out) const;
        // displays every item in this list through out

        bool retrieve(elemType& item) const;
        // retrieves item from this list
        // returns true if item is present in this list and
        // element in this list is copied to item
        // false otherwise

        // transformers
        void insert(const elemType& item);
        // inserts item into this list
        // preconditions: list is not full and
        // item not present in this list
        // postcondition: item is in this list

    From the .h file, you would need to use the insert function (under "transformers") in main.cpp under case 1. How would you do that? Any help is appreciated. I hope I didn't confuse anyone about what I need to know how to do. Thanks.

    Read the article

  • Boxy Submit Form

    - by jornbjorndalen
    I am using the Boxy jQuery plugin in my page to display a form on a clickEvent for the fullCalendar plugin. It is working all right; the only problem I have is that the form in Boxy brings up the confirmation dialog the first time the dialog is opened, and when the user clicks "Ok" it submits the form a second time, which generates 2 events on my calendar and 2 entries in my database. My code looks like this inside fullCalendar:

        dayClick: function(date, allDay, jsEvent, view) {
            var day = "" + date.getDate();
            if (day.length == 1) {
                day = "0" + day;
            }
            var year = date.getFullYear();
            var month = date.getMonth() + 1;
            var month = "" + month;
            if (month.length == 1) {
                month = "0" + month;
            }
            var defaultdate = "" + year + "-" + month + "-" + day + " 00:00:00";
            var ele = document.getElementById("myform");
            new Boxy(ele, {title: "Add Task", modal: true});
            document.getElementById("title").value = "";
            document.getElementById("description").value = "";
            document.getElementById("startdate").value = "" + defaultdate;
            document.getElementById("enddate").value = "" + defaultdate;
        }

    I also use validators on the form's fields:

        $.validator.addMethod(
            "datetime",
            function(value, element) {
                // put your own logic here, this is just a (crappy) example
                return value.match(/^([0-9]{4})-([0-1][0-9])-([0-3][0-9])\s([0-1][0-9]|[2][0-3]):([0-5][0-9]):([0-5][0-9])$/);
            },
            "Please enter a date in the format YYYY-mm-dd hh:MM:ss"
        );

        var validator = $("#myform").validate({
            onsubmit: true,
            rules: {
                title: { required: true },
                startdate: { required: true, datetime: true },
                enddate: { required: true, datetime: true }
            },
            submitHandler: function(form) {
                // this function renders a new event and makes a call to a php script that inserts it into the db
                addTask(form);
            }
        });

    And the form looks like this:

        <form id='myform'>
            <table border='1' width='100%'>
                <tr><td align='right'>Title:</td><td align='left'><input id='title' name='title' size='30'/></td></tr>
                <tr><td align='right'>Description:</td><td align='left'><textarea id='description' name='description' rows='4' cols='30'></textarea></td></tr>
                <tr><td align='right'>Start Date:</td><td align='left'><input id='startdate' name='startdate' size='30'/></td></tr>
                <tr><td align='right'>End Date:</td><td align='left'><input id='enddate' name='enddate' size='30' /></td></tr>
                <tr><td colspan='2' align='right'><input type='submit' value='Add' /></td></tr>
            </table>
        </form>

    Read the article

  • MySQL trigger works after console insert, but not after script insert

    - by Marcos
    Hello, I have a strange, weird problem with a trigger: I set up a trigger to update other tables after an insert into a table. If I do an insert from the MySQL console, everything works fine, but if I do inserts from an external Python script, the trigger does nothing, as you can see below. When I insert from the console it works fine, but inserting THE SAME DATA from the Python script doesn't work. I tried changing the definer to 'user'@'%' and 'root'@'%', but it still does nothing. Any idea what can be wrong? Regards.

        mysql> select vid_visit,vid_money from videos where video_id=487;
        +-----------+-----------+
        | vid_visit | vid_money |
        +-----------+-----------+
        |        21 |     0.297 |
        +-----------+-----------+
        1 row in set (0,01 sec)

        mysql> INSERT INTO `prusland`.`validEvents` (
            `id`, `campaigns_id`, `video_id`, `date`, `producer_id`, `distributor_id`,
            `money_producer`, `money_distributor`, `type`
        ) VALUES (
            NULL, '30', '487', '2010-05-20 01:20:00', '1', '0', '0.009', '0.000', 'PRE'
        );
        Query OK, 1 row affected (0,00 sec)

        mysql> select vid_visit,vid_money from videos where video_id=487;
        +-----------+-----------+
        | vid_visit | vid_money |
        +-----------+-----------+
        |        22 |     0.306 |
        +-----------+-----------+

        DROP TRIGGER IF EXISTS `updateVisitAndMoney`//
        CREATE TRIGGER `updateVisitAndMoney` BEFORE INSERT ON `validEvents`
        FOR EACH ROW BEGIN
            if (NEW.type = 'PRE') THEN
                SET @eventcash = NEW.money_producer + NEW.money_distributor;
                UPDATE campaigns
                   SET cmp_visit_distributed = cmp_visit_distributed + 1,
                       cmp_money_distributed = cmp_money_distributed + NEW.money_producer + NEW.money_distributor
                 WHERE idcampaigns = NEW.campaigns_id;
                UPDATE offer_producer
                   SET ofp_visit_procesed = ofp_visit_procesed + 1,
                       ofp_money_procesed = ofp_money_procesed + NEW.money_producer
                 WHERE ofp_video_id = NEW.video_id
                   AND ofp_money_procesed = NEW.campaigns_id;
                UPDATE videos
                   SET vid_visit = vid_visit + 1,
                       vid_money = vid_money + @eventcash
                 WHERE video_id = NEW.video_id;
                if (NEW.distributor_id != '') then
                    UPDATE agreements
                       SET visit_procesed = visit_procesed + 1,
                           money_producer = money_producer + NEW.money_producer,
                           money_distributor = money_distributor + NEW.money_distributor
                     WHERE id_campaigns = NEW.campaigns_id
                       AND id_video = NEW.video_id
                       AND ag_distributor_id = NEW.distributor_id;
                    UPDATE eventForDay
                       SET visit = visit + 1, money = money + NEW.money_distributor
                     WHERE date = SYSDATE() AND campaign_id = NEW.campaigns_id AND user_id = NEW.distributor_id;
                    UPDATE eventForDay
                       SET visit = visit + 1, money = money + NEW.money_producer
                     WHERE date = SYSDATE() AND campaign_id = NEW.campaigns_id AND user_id = NEW.producer_id;
                ELSE
                    UPDATE eventForDay
                       SET visit = visit + 1, money = money + NEW.money_producer
                     WHERE date = SYSDATE() AND campaign_id = NEW.campaigns_id AND user_id = NEW.producer_id;
                END IF;
            END IF;
        END //
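    A frequent culprit for "works from the console, not from a script" is autocommit: the mysql console autocommits each statement, while Python DB-API connections start a transaction that is rolled back, trigger effects included, if the script never commits. A sketch of the same insert with an explicit commit, assuming the MySQLdb driver and made-up credentials:

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="user",
                               passwd="secret", db="prusland")
        cur = conn.cursor()
        cur.execute(
            "INSERT INTO validEvents (campaigns_id, video_id, date, producer_id,"
            " distributor_id, money_producer, money_distributor, type)"
            " VALUES (%s, %s, %s, %s, %s, %s, %s, %s)",
            (30, 487, "2010-05-20 01:20:00", 1, 0, 0.009, 0.0, "PRE"))
        conn.commit()   # without this, the insert and the trigger's updates roll back
        conn.close()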

    Read the article

  • Self referencing a table

    - by mue
    Hello, I'm new to NHibernate and have a problem; perhaps somebody can help me here. Given a User class with many, many properties:

        public class User
        {
            public virtual Int64 Id { get; private set; }
            public virtual string Firstname { get; set; }
            public virtual string Lastname { get; set; }
            public virtual string Username { get; set; }
            public virtual string Email { get; set; }
            ...
            public virtual string Comment { get; set; }
            public virtual UserInfo LastModifiedBy { get; set; }
        }

    Here is some DDL for the table:

        CREATE TABLE USERS (
            "ID" BIGINT NOT NULL,
            "FIRSTNAME" VARCHAR(50) NOT NULL,
            "LASTNAME" VARCHAR(50) NOT NULL,
            "USERNAME" VARCHAR(128) NOT NULL,
            "EMAIL" VARCHAR(128) NOT NULL,
            ...
            "LASTMODIFIEDBY" BIGINT NOT NULL
        ) IN "USERSPACE1";

    The database table field LASTMODIFIEDBY holds, for auditing purposes, the id of the user who acted in case of inserts or updates; normally this would be an admin. Because the UI shall display not this Int64 but the admin's name (in a pattern like 'Lastname, Firstname'), I need to retrieve these values by self-referencing the USERS table. Next, a whole object of type User would be overkill given the number of unwanted fields, so there is a class UserInfo with a much smaller footprint:

        public class UserInfo
        {
            public Int64 Id { get; set; }
            public string Firstname { get; set; }
            public string Lastname { get; set; }
            public string FullnameReverse
            {
                get { return string.Format("{0}, {1}", Lastname ?? string.Empty, Firstname ?? string.Empty); }
            }
        }

    So here starts the problem. Actually, I have no clue how to accomplish this task. I'm not sure whether I must also provide a mapping for the UserInfo class, not only for the User class. I'd like to integrate UserInfo as a composite-element within the mapping for the User class, but I don't know how to define the mapping between the USERS.ID and USERS.LASTMODIFIEDBY table fields. Hopefully I described my problem clearly enough to get some hints. Thanks a lot!

    Read the article

  • Class Template Instantiation: any way round this circular reference?

    - by TimYorke34
    I have two classes that I'm using to represent some hardware: a Button and an InputPin class, which represent a button that will change the value of an IC's input pin when it's pressed down. A simple example of them is:

        template <int pinNumber>
        class InputPin {
            static bool IsHigh() { return ( (*portAddress) & (1 << pinNumber) ); }
        };

        template <typename InputPin>
        class Button {
            static bool IsPressed() { return !InputPin::IsHigh(); }
        };

    This works beautifully, and by using class templates the condition below compiles as tightly as if I'd handwritten it in assembly (a single instruction):

        Button< InputPin<1> > powerButton;
        if (powerButton.IsPressed())
            ........;

    However, I am extending it to deal with interrupts and have got a problem with circular references. Compared to the original InputPin, a new InputPinIRQ class has an extra static member function that will be called automatically by the hardware when the pin value changes. I'd like it to be able to notify the Button class of this, so that the Button class can then notify the main application that it has been pressed/released. I am currently doing this with function pointers to callbacks. In order for the callback code to be inlined by the compiler, I need to pass the function pointers as template parameters. So now both of the new classes have an extra template parameter that is a pointer to a callback function. Unfortunately this gives me a circular reference, because to instantiate a ButtonIRQ class I now have to do something like this:

        ButtonIRQ< InputPinIRQ< A1, ButtonIRQ<....>::OnPinChange, OnButtonChange > pB;

    where the <...... represents the circular reference. Does anyone know how I can avoid this circular reference? I am new to templates, so I might be missing something really simple. It's important that the compiler knows exactly what code will run when the interrupt occurs, as it then does some very useful optimisation: it can inline the callback function and literally insert the callback function's code at the exact address that is called on a hardware interrupt.

    Read the article

  • MySQLi Insert prepared statement via PHP

    - by Jimmy
    Howdie do! This is my first time dealing with MySQLi inserts. I had always used mysql and directly ran the queries; apparently that's not as secure as MySQLi. Anywho, I'm attempting to pass two variables into the database. For some reason my prepared statement keeps erroring out. I'm not sure if it's a syntax error, but the insert just won't work. I've updated the code to make the variables easier to read. Also, the error is specifically "Error preparing statement". It's not a PHP error; it's a MySQL error, as the script runs but fails at the execution of:

        if($stmt = $mysqli->prepare("INSERT INTO subnets (id, subnet, mask, sectionId, description, vrfId, masterSubnetId, allowRequests, vlanId, showName, permissions, pingSubnet, isFolder, editDate) VALUES ('', ?, ?, '1', '', '0', '0', '0', '0', '0', '{"3":"1","2":"2"}', '0', '0', 'NULL')")) {

    I'm going to enable MySQL error checking; I honestly didn't know about that.

        <?php
        error_reporting(E_ALL);

        function InsertIPs($decimnal, $cidr)
        {
            error_reporting(E_ALL);
            $mysqli = new mysqli("localhost", "jeremysd_ips", "", "jeremysd_ips");
            if (mysqli_connect_errno()) {
                echo "Connection Failed: " . mysqli_connect_errno();
                exit();
            }

            if ($stmt = $mysqli->prepare("INSERT INTO subnets (id, subnet, mask, sectionId, description, vrfld, masterSubnetId, allowRequests, vlanId, showName, permissions, pingSubnet, isFolder, editDate) VALUES ('', ?, ?, '1', '', '0', '0', '0', '0', '0', '{'3':'1','2':'2'}', '0', '0', NULL)")) {
                $stmt->bind_param('ii', $decimnal, $cidr);
                if ($stmt->execute()) {
                    echo "Row added successfully\n";
                } else {
                    $stmt->error;
                }
                $stmt->close;
            }
        } else {
            echo 'Error preparing statement';
        }
        $mysqli->close();
        }

        $SUBNETS = array("2915483648 | 18");
        foreach ($SUBNETS as $Ip) {
            list($TempIP, $TempMask) = explode(' | ', $Ip);
            echo InsertIPs($Tempip, $Tempmask);
        }
        ?>

    This function isn't supposed to return anything; it's just supposed to perform the INSERT. Any help would be GREATLY appreciated. I'm just not sure what I'm missing here.

    Read the article

  • Jquery ajax using asp.net does not work on IE9 during the 2nd call of the function?

    - by randelramirez1
    I have a GridView that is loaded from another ASPX page after an AJAX call. The problem is that it works in Chrome/Firefox/Safari, but with IE9 the AJAX call works fine only on the first call; when I try to call the function again, it shows a 304 status on the network tab of the IE9 dev tools and the GridView is not refreshed. Here is the code:

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="LoadCoursesGridViewHere.aspx.cs" Inherits="CoursesGridView" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
            <script src="Scripts/jquery-1.8.2.js" type="text/javascript"></script>
        </head>
        <body>
            <form id="form1" runat="server">
                <div id="Gridview-container">
                    <asp:GridView ID="GridView1" runat="server">
                    </asp:GridView>
                </div>
                <asp:TextBox ID="TextBox1" runat="server" ViewStateMode="Disabled"></asp:TextBox>
                <%-- <asp:Button Text="text" ID="btn" OnClientClick=" __doPostBack('UpdatePanel1', '')" runat="server" />--%>
                <input type="button" id="btn" value="insert"/>
            </form>
            <script type="text/javascript">
                $("#btn").click(function () {
                    var a = $("#TextBox1").val();
                    $.ajax({
                        url: 'WebService.asmx/insert',
                        data: "{ 'name': '" + a + "' }",
                        contentType: "application/json; charset=utf-8",
                        type: "POST",
                        success: function () {
                            // alert('insert was performed.');
                            $("#Gridview-container").empty();
                            $("#Gridview-container").load("GridViewCourses.aspx #GridView1");
                        }
                    });
                });
            </script>
        </body>
        </html>

    What happens is that after clicking the button, it inserts the textbox value into the database through the web service's 'insert' method and then reloads the GridView that is placed on a different ASPX page. The problem is that in IE9, the first insert works properly, but the succeeding inserts do not reload the GridView, and I noticed that it says '304' on the network tab of the IE9 dev tools.

    Read the article

  • Improving performance for WRITE operation on Oracle DB in Java

    - by Lucky
    I have a typical scenario and need to understand the best possible way to handle it, so here it goes. I'm developing a solution that will retrieve data from a remote SOAP-based web service and will then push this data to an Oracle database on the network. This will be a scheduled task that executes every 15 minutes. I have event queues on the remote service that contain the INSERT/UPDATE/DELETE operations that have been done since the last retrieval, and once I retrieve the events for the last 15 minutes, it again adds events for the next retrieval.

    Now, it's just pushing data to Oracle, so all my interactions are INSERT and UPDATE statements. There are around 60 tables in Oracle, some of them having 100+ columns. Moreover, for every 15-minute cycle there would be around 60-70 inserts, 100+ updates and 10-20 deletes. This will be an executable jar file that terminates after the operation and starts again on the next 15-minute cycle. So, I need to understand how I should handle WRITE operations (best practices) to improve performance for this application as a whole.

    Current test code (on every cycle): 1. Connects to the remote service to get events. 2. Creates a connection with the DB (a single connection object). 3. Identifies the type of operation (INSERT/UPDATE/DELETE) and the table on which it is done. 4. Based on the above, calls the respective method for that type of operation and table. 5. Uses a PreparedStatement with positional parameters, retrieves each column value from the remote service, and assigns it to the statement parameters. 6. Commits the statement and returns to the event-retrieval class to process the next event. The above is repeated until all the retrieved events are processed, after which the program closes and then starts again on the next cycle, and everything repeats. Thanks for your help!
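    With only inserts and updates in a 15-minute window, the usual first win is to group events per table, bind them in batches, and commit once per cycle instead of per statement. A sketch of the insert side using cx_Oracle's executemany (table, columns and connect string are placeholders):

        import cx_Oracle

        conn = cx_Oracle.connect("user/password@dbhost/service")
        cur = conn.cursor()

        rows = [(1, "payload-a"), (2, "payload-b"), (3, "payload-c")]  # from the web service
        cur.executemany(
            "INSERT INTO target_table (id, payload) VALUES (:1, :2)",
            rows)             # one round trip for the whole batch
        conn.commit()         # one commit per cycle, not per event
        conn.close()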

    Read the article

  • Enhanced REST Support in Oracle Service Bus 11gR1

    - by jeff.x.davies
    In a previous entry on REST and Oracle Service Bus (see http://blogs.oracle.com/jeffdavies/2009/06/restful_services_with_oracle_s_1.html) I encoded the REST query string really as part of the relative URL. For example, consider the following URI: http://localhost:7001/SimpleREST/Products/id=1234 Now, technically there is nothing wrong with this approach. However, it is generally more common to encode the search parameters into the query string. Take a look at the following URI that shows this principle http://localhost:7001/SimpleREST/Products?id=1234 At first blush this appears to be a trivial change. However, this approach is more intuitive, especially if you are passing in multiple parameters. For example: http://localhost:7001/SimpleREST/Products?cat=electronics&subcat=television&mfg=sony The above URI is obviously used to retrieve a list of televisions made by Sony. In prior versions of OSB (before 11gR1PS3), parsing the query string of a URI was more difficult than in the current release. In 11gR1PS3 it is now much easier to parse the query strings, which in turn makes developing REST services in OSB even easier. In this blog entry, we will re-implement the REST-ful Products services using query strings for passing parameter information. Lets begin with the implementation of the Products REST service. This service is implemented in the Products.proxy file of the project. Lets begin with the overall structure of the service, as shown in the following screenshot. This is a common pattern for REST services in the Oracle Service Bus. You implement different flows for each of the HTTP verbs that you want your service to support. Lets take a look at how the GET verb is implemented. This is the path that is taken of you were to point your browser to: http://localhost:7001/SimpleREST/Products/id=1234 There is an Assign action in the request pipeline that shows how to extract a query parameter. Here is the expression that is used to extract the id parameter: $inbound/ctx:transport/ctx:request/http:query-parameters/http:parameter[@name="id"]/@value The Assign action that stores the value into an OSB variable named id. Using this type of XPath statement you can query for any variables by name, without regard to their order in the parameter list. The Log statement is there simply to provided some debugging info in the OSB server console. The response pipeline contains a Replace action that constructs the response document for our rest service. Most of the response data is static, but the ID field that is returned is set based upon the query-parameter that was passed into the REST proxy. Testing the REST service with a browser is very simple. Just point it to the URL I showed you earlier. However, the browser is really only good for testing simple GET services. The OSB Test Console provides a much more robust environment for testing REST services, no matter which HTTP verb is used. Lets see how to use the Test Console to test this GET service. Open the OSB we console (http://localhost:7001/sbconsole) and log in as the administrator. Click on the Test Console icon (the little "bug") next to the Products proxy service in the SimpleREST project. This will bring up the Test Console browser window. Unlike SOAP services, we don't need to do much work in the request document because all of our request information will be encoded into the URI of the service itself. Belore the Request Document section of the Test Console is the Transport section. 
Expand that section and modify the query-parameters and http-method fields as shown in the next screenshot. By default, the query-parameters field will have the tags already defined. You just need to add a tag for each parameter you want to pass into the service. For out purposes with this particular call, you'd set the quer-parameters field as follows: <tp:parameter name="id" value="1234" /> </tp:query-parameters> Now you are ready to push the Execute button to see the results of the call. That covers the process for parsing query parameters using OSB. However, what if you have an OSB proxy service that needs to consume a REST-ful service? How do you tell OSB to pass the query parameters to the external service? In the sample code you will see a 2nd proxy service called CallREST. It invokes the Products proxy service in exactly the same way it would invoke any REST service. Our CallREST proxy service is defined as a SOAP service. This help to demonstrate OSBs ability to mediate between service consumers and service providers, decreasing the level of coupling between them. If you examine the message flow for the CallREST proxy service, you'll see that it uses an Operational branch to isolate processing logic for each operation that is defined by the SOAP service. We will focus on the getProductDetail branch, that calls the Products REST service using the HTTP GET verb. Expand the getProduct pipeline and the stage node that it contains. There is a single Assign statement that simply extracts the productID from the SOA request and stores it in a local OSB variable. Nothing suprising here. The real work (and the real learning) occurs in the Route node below the pipeline. The first thing to learn is that you need to use a route node when calling REST services, not a Service Callout or a Publish action. That's because only the Routing action has access to the $oubound variable, especially when invoking a business service. The Routing action contains 3 Insert actions. The first Insert action shows how to specify the HTTP verb as a GET. The second insert action simply inserts the XML node into the request. This element does not exist in the request by default, so we need to add it manually. Now that we have the element defined in our outbound request, we can fill it with the parameters that we want to send to the REST service. In the following screenshot you can see how we define the id parameter based on the productID value we extracted earlier from the SOAP request document. That expression will look for the parameter that has the name id and extract its value. That's all there is to it. You now know how to take full advantage of the query parameter parsing capability of the Oracle Service Bus 11gR1PS2. Download the sample source code here: rest2_sbconfig.jar Ubuntu and the OSB Test Console You will get an error when you try to use the Test Console with the Oracle Service Bus, using Ubuntu (or likely a number of other Linux distros also). The error (shown below) will state that the Test Console service is not running. The fix for this problem is quite simple. Open up the WebLogic Server administrator console (usually running at http://localhost:7001/console). In the Domain Structure window on the left side of the console, select the Servers entry under the Environment heading. The select the Admin Server entry in the main window of the console. By default, you should be viewing the Configuration tabe and the General sub tab in the main window. Look for the Listen Address field. 
That's all there is to it. You now know how to take full advantage of the query parameter parsing capability of the Oracle Service Bus 11gR1PS3. Download the sample source code here: rest2_sbconfig.jar

Ubuntu and the OSB Test Console

You will get an error when you try to use the Test Console with the Oracle Service Bus on Ubuntu (and likely a number of other Linux distros also). The error (shown below) will state that the Test Console service is not running. The fix for this problem is quite simple. Open up the WebLogic Server administration console (usually running at http://localhost:7001/console). In the Domain Structure window on the left side of the console, select the Servers entry under the Environment heading, then select the Admin Server entry in the main window of the console. By default, you should be viewing the Configuration tab and the General sub-tab in the main window. Look for the Listen Address field. By default it is blank, which means the server is listening on all interfaces. For some reason Ubuntu doesn't like this, so enter a value like localhost or the specific IP address or DNS name for your server (usually it's just localhost in development environments). Save your changes and restart the server. Your Test Console will now work correctly.

    Read the article

  • Oracle performance problem

    - by jreid42
    We are using an Oracle 11G machine that is very powerful and has redundant storage. It's a beast, from what I have been told. We just got this DB for a tool that had around 20 users when I first came on as a co-op; now it's upwards of 150 people. I am the only one working on it :(

We currently have a system in place that distributes Perl scripts across our entire data center, essentially giving us a sort of "grid" computing power. The Perl scripts run a sort of simulation and report the results back to the database. They do selects / inserts. The load is not very high for each script, but it could be happening across 20-50 systems at the same time. We then have multiple data centers and users all hitting the same database with this same approach.

Our main problem with this is that our database is getting overloaded with connections and having to drop some. We sometimes have upwards of 500 connections. These are old Perl scripts and they do not handle this well: essentially they fail and the results are lost. I would rather avoid rewriting a lot of them, as they are poorly written and a headache to even look at. The database itself is not overloaded; just the connection overhead is too high. We open a connection, make a quick query, and then drop the connection. Very short connections, but many of them.

The database team has basically said we need to lower the number of connections or they are going to ignore us. Because this is distributed across our farm, we can't implement persistent connections. I do this with our webserver, but it's on a fixed system. The others are Perl scripts that get opened and closed by the distribution tool and thus aren't always running.

What would be my best approach to resolving this issue? The scripts themselves can wait for a connection to open; they do not need to act immediately. Some sort of queuing system? It has been suggested that I set up a few instances of a tool called "SQL Relay", maybe one in each data center. How reliable is this tool? How good is this approach? Would it work for what we need? We could have one for each data center and relay requests through it to our main database, keeping a pipeline of open persistent connections. Does this make sense? Are there any other suggestions you can make? Any ideas? Any help would be greatly appreciated.

Sadly I am just a co-op student working for a very big company and somehow all of this has landed on my shoulders (there is literally nobody to ask for help; it's a hardware company, everybody is a hardware engineer, and the database team is useless and in India), and I am quite lost as to what the best approach would be. I am extremely overworked and this problem is interfering with ongoing progress and basically needs to be resolved as quickly as possible, preferably without rewriting the whole system, purchasing hardware (not gonna happen), or shooting myself in the foot. HELP LOL!
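One small, hedged illustration of the direction this question points at: if the distribution tool could keep each worker process alive across runs, DBI's connect_cached would reuse a single handle instead of paying the connect/teardown cost on every query. This is only a sketch under that assumption (the DSN, table, and column names are made up), and it does not solve cross-process pooling, which is what something like SQL Relay addresses:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    sub report_result {
        my ($run_id, $score) = @_;
        # connect_cached returns the same live handle on repeated calls
        # with identical arguments, instead of opening a new connection
        my $dbh = DBI->connect_cached(
            'dbi:Oracle:host=dbhost;sid=ORCL',   # hypothetical DSN
            'sim_user', 'sim_pass',
            { RaiseError => 1, AutoCommit => 1 },
        );
        $dbh->do(
            'INSERT INTO sim_results (run_id, score) VALUES (?, ?)',  # hypothetical table
            undef, $run_id, $score,
        );
    }

    # many short calls now share one connection per process
    report_result($_, rand()) for 1 .. 100;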

    Read the article

  • SQL SERVER – A Quick Look at Logging and Ideas around Logging

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday post on Logging. When someone talks about logging, I personally get lots of ideas about it. I have seen logging used as a very generic term. Let me ask you this question before I continue writing about logging: what is the first thing that comes to your mind when you hear the word “Logging”? Now ask the same question to the person standing next to you. I am pretty confident that you will get a different answer from different people. I decided to try this activity and asked 5 SQL Server people the same question.

Question: What is the first thing that comes to your mind when you hear the word “Logging”?

Strangely enough, I got a different answer every single time. Let me just list the answers I got from my friends and go over them one by one.

Output Clause

The very first person replied "output clause". A pretty interesting answer to start with, and I can see exactly what he was thinking. SQL Server 2005 introduced the new OUTPUT clause. The OUTPUT clause has access to the inserted and deleted (virtual) tables, just like triggers. It can be used to return values to the client, and it can be used with INSERT, UPDATE, or DELETE to identify the actual rows affected by these statements. Here are some references for the OUTPUT clause: OUTPUT Clause Example and Explanation with INSERT, UPDATE, DELETE; Reasons for Using Output Clause – Quiz; Tips from the SQL Joes 2 Pros Development Series – Output Clause in Simple Examples

Error Logs

I was expecting someone to mention error logs when the subject is logging. The error log is the first place to look when there is any error, either with the application or with the operating system. I keep a policy of checking my server's error log every day. The reason is simple: often enough in my career, when looking at error logs I have found something I was not expecting. There have been cases when I noticed errors in the error log and fixed them before the end user noticed. Another common practice I always suggest to my DBA friends: when any error happens, find the relevant entries in the error logs and document them. It is quite possible that they will see the same error in the error log again and will be able to fix it based on the knowledge base they have created. There are many different kinds of error log files in SQL Server as well – 1) SQL Server Error Logs, 2) Windows Event Log, 3) SQL Server Agent Log, 4) SQL Server Profiler Log, 5) SQL Server Setup Log, etc. Here are some references for Error Logs: Recycle Error Log – Create New Log file without Server Restart; SQL Error Messages

Change Data Capture

This answer surprised me. In fact, more than the answer itself, I was surprised by the person who gave it. I always thought he was an expert in HTML and JavaScript, but I guess one should never make assumptions about others. Indeed, one of the cool logging features is Change Data Capture. Change Data Capture records INSERTs, UPDATEs, and DELETEs applied to SQL Server tables, and makes a record available of what changed, where, and when, in simple relational ‘change tables’ rather than in an esoteric chopped salad of XML. These change tables contain columns that reflect the column structure of the source table you have chosen to track, along with the metadata needed to understand the changes that have been made.
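As a quick illustration, here is a hedged sketch of what enabling CDC looks like in T-SQL. The database and table names are hypothetical, chosen only for the example:

    USE SalesDB;  -- hypothetical database
    GO
    -- step 1: enable CDC at the database level
    EXEC sys.sp_cdc_enable_db;
    GO
    -- step 2: start tracking one table; SQL Server generates the
    -- relational change table and query functions automatically
    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'Orders',   -- hypothetical source table
        @role_name     = NULL;        -- NULL = no gating role required
    GO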
Here are some references for Change Data Capture: Introduction to Change Data Capture (CDC) in SQL Server 2008; Tuning the Performance of Change Data Capture in SQL Server 2008; Download Script of Change Data Capture (CDC); CDC and TRUNCATE – Cannot truncate table because it is published for replication or enabled for Change Data Capture

Dynamic Management Views (DMVs)

I like this answer. If asked, I would not have come up with DMVs right away, but in the spirit of the original question, I think DMVs do log data. DMVs log, store, or record various data and activity on the SQL Server. Dynamic management views return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance. One can get a plethora of information from DMVs – high availability status, query execution details, SQL Server resource status, etc. Here are some references for Dynamic Management Views (DMVs): SQL SERVER – Denali – DMV Enhancement – sys.dm_exec_query_stats – New Columns; DMV – sys.dm_os_windows_info – Information about Operating System; DMV – sys.dm_os_wait_stats Explanation – Wait Type – Day 3 of 28; DMV sys.dm_exec_describe_first_result_set_for_object – Describes the First Result Metadata for the Module; Transaction Log Impact Detection Using DMV – dm_tran_database_transactions
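For instance, here is a minimal sketch of one common use of sys.dm_exec_query_stats: listing the cached statements that have consumed the most CPU (the exact columns you select are just one reasonable choice):

    -- top 5 cached statements by total worker (CPU) time
    SELECT TOP (5)
        qs.total_worker_time,
        qs.execution_count,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset
              END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;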
Log Files

I almost flipped at this final answer from my friend; this should probably have been the first answer. Yes, indeed, the log file logs the SQL Server activities. One can write infinite things about log files. SQL Server uses a log file with the extension .ldf to manage transactions and maintain database integrity. The log file ensures that valid data is written out to the database and the system is in a consistent state. Log files are extremely useful in case of database failures, as with the help of the full backup file the database can be brought to the desired state (point-in-time recovery is also possible). A SQL Server database has three recovery models – 1) Simple, 2) Full and 3) Bulk Logged. Each of the models uses the .ldf file for performing various activities. It is very important to take backups of the log files (along with full backups), as one never knows when a log file backup will come into action and save the day! Here are some references for Log Files: How to Stop Growing Log File Too Big; Reduce the Virtual Log Files (VLFs) from LDF file; Log File Growing for Model Database – model Database Log File Grew Too Big; master Database Log File Grew Too Big; SHRINKFILE and TRUNCATE Log File in SQL Server 2008

Can I just say I loved this month’s T-SQL Tuesday question. It really provoked very interesting conversation around me.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
