Search Results


  • JMS message. Model to include data or pointers to data?

    - by John
    I am trying to resolve a design difference of opinion where neither of us has experience with JMS. We want to use JMS to notify a stand-alone application when a new event occurs in a J2EE application, using a single point-to-point queue. Both sides are Java-based. The question is whether to send the event data itself in the JMS message body, or to send a pointer to the data so that the stand-alone program can retrieve it. Details below.

    I have a J2EE application that supports data entry of new and updated persons and related events. The person records and associated events are written to an Oracle database. There are also separate stand-alone programs that contribute new person and event records to the database. When a new event occurs through any of 5-10 different application functions, I need to notify remote systems through an outbound interface using an industry-specific standard messaging protocol. The outbound interface has been designed as a stand-alone application to support scalability, both through asynchronous operation and by allowing it to move to a separate server. The J2EE application currently has most of the data in memory at the time the event is entered. The data consists of approximately six objects: a person object plus related objects, some with multiple instances, for an average total size in the range of 3,000 to 20,000 bytes; some special cases could be many times this amount.

    From a performance and reliability perspective, should I model the JMS message to carry all the data needed to create the interface message, or model it to contain only the record keys and have the stand-alone Java application retrieve the data to create the interface message?
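
    For reference, a minimal sketch of the two message shapes under discussion (names are hypothetical; assumes an already-configured QueueSession and QueueSender):

        // Option A: carry the event payload itself (~3-20 KB serialized)
        ObjectMessage full = session.createObjectMessage(personEventDto); // DTO must be Serializable

        // Option B: carry only record keys; the consumer re-reads the rows from Oracle
        MapMessage keys = session.createMapMessage();
        keys.setLong("personId", person.getId());
        keys.setLong("eventId", event.getId());

        sender.send(keys); // or sender.send(full)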


  • loop in simplexml load file

    - by shantanuo
    I am using the following code to parse XML data into a MySQL table, and it works as expected:

        <?php
        $sxe = simplexml_load_file("$myfile");
        foreach ($sxe->AUTHAD as $sales) {
            $authad = "insert into test.authad values ('"
                . mysql_real_escape_string($sales->LOCALDATE) . "','"
                . mysql_real_escape_string($sales->LOCALTIME) . "','"
                . mysql_real_escape_string($sales->TLOGID) . "','" ...
        ?>

    The problem is that when I get a new XML file with a different format, I cannot use the insert statement above and have to change the parameters manually, e.g.:

        $authadv_new = "insert into test.authad values ('"
            . mysql_real_escape_string($sales->NEW1) . "','"
            . mysql_real_escape_string($sales->NEW2) . "','"
            . mysql_real_escape_string($sales->NEW3) . "','" ...

    Is there any way to automate this? I mean, the PHP code should be able to discover the parameters and generate the mysql_real_escape_string($sales->NEW1) to NEW10 values in a loop.
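
    A minimal sketch of one way to do that, iterating over whatever child elements each record happens to have (assumes the table columns line up with the element order in the XML, as they do in the hand-written statements):

        <?php
        $sxe = simplexml_load_file($myfile);
        foreach ($sxe->AUTHAD as $sales) {
            $values = array();
            foreach ($sales->children() as $field) {   // one entry per child element, in document order
                $values[] = "'" . mysql_real_escape_string((string) $field) . "'";
            }
            mysql_query("insert into test.authad values (" . implode(",", $values) . ")");
        }
        ?>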


  • Fast serialization/deserialization of structs

    - by user256890
    I have a huge amount of geographic data represented in a simple object structure consisting only of structs. All of my fields are of value type.

        public struct Child { readonly float X; readonly float Y; readonly int myField; }
        public struct Parent { readonly int id; readonly int field1; readonly int field2; readonly Child[] children; }

    The data is chunked up nicely into small portions of Parent[]-s. Each array contains a few thousand Parent instances. I have way too much data to keep it all in memory, so I need to swap these chunks to disk and back. (One file comes to approximately 200-300 KB.) What would be the most efficient way of serializing/deserializing a Parent[] to a byte[] for dumping to disk and reading back? Concerning speed, I am particularly interested in fast deserialization; write speed is not that critical. Would a simple BinarySerializer be good enough? Or should I hack around with StructLayout (see the accepted answer)? I am not sure that would work with the array field Parent.children. UPDATE: In response to comments - yes, the objects are immutable (code updated), and indeed the children field is not a value type. 300 KB does not sound like much, but I have zillions of files like that, so speed does matter.
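
    As a point of comparison, a minimal sketch of hand-rolled serialization with BinaryWriter/BinaryReader, which avoids the reflection cost of general-purpose serializers (assumes the structs expose suitable constructors and field access):

        static void WriteParent(BinaryWriter w, Parent p)
        {
            w.Write(p.id); w.Write(p.field1); w.Write(p.field2);
            w.Write(p.children.Length);
            foreach (Child c in p.children) { w.Write(c.X); w.Write(c.Y); w.Write(c.myField); }
        }

        static Parent ReadParent(BinaryReader r)
        {
            int id = r.ReadInt32(), f1 = r.ReadInt32(), f2 = r.ReadInt32();
            var children = new Child[r.ReadInt32()];
            for (int i = 0; i < children.Length; i++)
                children[i] = new Child(r.ReadSingle(), r.ReadSingle(), r.ReadInt32());
            return new Parent(id, f1, f2, children);
        }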


  • Improve SQL query performance

    - by Anax
    I have three tables where I store actual person data (person), teams (team) and entries (athlete); in each team there may be two or more athletes. I'm trying to write a query that produces the most frequent pairs, meaning people who play together in teams of two. I came up with the following:

        SELECT p1.surname, p1.name, p2.surname, p2.name, COUNT(*) AS freq
        FROM person p1, athlete a1, person p2, athlete a2
        WHERE p1.id = a1.person_id
          AND p2.id = a2.person_id
          AND a1.team_id = a2.team_id
          AND a1.team_id IN (
              SELECT id
              FROM team, athlete
              WHERE team.id = athlete.team_id
              GROUP BY team.id
              HAVING COUNT(*) = 2
          )
        GROUP BY p1.id
        ORDER BY freq DESC

    Obviously this is a resource-consuming query. Is there a way to improve it?
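
    One direction worth trying (a sketch, untested against this schema): use explicit joins, pair each athlete only with teammates that have a higher person_id so every pair is counted once, and join the two-member-team filter as a derived table instead of an IN subquery:

        SELECT p1.surname, p1.name, p2.surname, p2.name, COUNT(*) AS freq
        FROM athlete a1
        JOIN athlete a2
          ON a2.team_id = a1.team_id AND a2.person_id > a1.person_id
        JOIN (
            SELECT team_id FROM athlete GROUP BY team_id HAVING COUNT(*) = 2
        ) duos ON duos.team_id = a1.team_id
        JOIN person p1 ON p1.id = a1.person_id
        JOIN person p2 ON p2.id = a2.person_id
        GROUP BY p1.id, p2.id, p1.surname, p1.name, p2.surname, p2.name
        ORDER BY freq DESC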


  • jQuery DatePicker - 'fake' click on page load

    - by Danny
    Hey! I've got a quick question about the jQuery UI DatePicker. When I load the page, defaultDate: 0 works fine for selecting the current day's date. I would like to fire a 'fake' click on that date so it executes my JavaScript function and retrieves information from the database. I tried calling the function when the page loads, but that doesn't work.

        $(document).ready(function(){
            $("#datepicker").datepicker({
                gotoCurrent: false,
                onSelect: function(date, inst) { ajaxFunction(date); },
                dateFormat: 'dd-mm-yy',
                defaultDate: 0,
                changeMonth: true,
                changeYear: true
            });
        });

        // Browser support code
        function ajaxFunction(date){
            var ajaxRequest; // The variable that makes Ajax possible!
            try { // Opera 8.0+, Firefox, Safari
                ajaxRequest = new XMLHttpRequest();
            } catch (e) { // Internet Explorer browsers
                try {
                    ajaxRequest = new ActiveXObject("Msxml2.XMLHTTP");
                } catch (e) {
                    try {
                        ajaxRequest = new ActiveXObject("Microsoft.XMLHTTP");
                    } catch (e) { // Something went wrong
                        alert("Your browser broke!");
                        return false;
                    }
                }
            }
            // Receive the data sent from the server
            ajaxRequest.onreadystatechange = function(){
                if (ajaxRequest.readyState == 4){
                    var ajaxDisplay = document.getElementById('ajaxDiv');
                    ajaxDisplay.innerHTML = ajaxRequest.responseText;
                }
            }
            var queryString = "?date=" + date;
            ajaxRequest.open("GET", "getDiary.php" + queryString, true);
            ajaxRequest.send(null);
        }

        function ajaxAdd(){
            var ajaxRequest; // The variable that makes Ajax possible!
            try { // Opera 8.0+, Firefox, Safari
                ajaxRequest = new XMLHttpRequest();
            } catch (e) { // Internet Explorer browsers
                try {
                    ajaxRequest = new ActiveXObject("Msxml2.XMLHTTP");
                } catch (e) {
                    try {
                        ajaxRequest = new ActiveXObject("Microsoft.XMLHTTP");
                    } catch (e) { // Something went wrong
                        alert("Your browser broke!");
                        return false;
                    }
                }
            }
            var day1 = $("#datepicker").datepicker('getDate').getDate();
            var day2 = (day1 < 10) ? '0' + day1 : day1;
            var month1 = $("#datepicker").datepicker('getDate').getMonth() + 1;
            var month2 = (month1 < 10) ? '0' + month1 : month1;
            var year1 = $("#datepicker").datepicker('getDate').getFullYear();
            var year2 = (year1 < 1000) ? year1 + 1900 : year1;
            var fullDate = day2 + "-" + month2 + "-" + year2;
            var queryString = "?breakfast=" + diary1.breakfast.value;
            queryString = queryString + "&lunch=" + diary1.lunch.value;
            queryString = queryString + "&dinner=" + diary1.dinner.value;
            queryString = queryString + "&date=" + fullDate;
            ajaxRequest.open("GET", "addDiary.php" + queryString, true);
            ajaxRequest.send(null);
            alert("Added value to database!");
            diary1.breakfast.value = "";
            diary1.lunch.value = "";
            diary1.dinner.value = "";
            ajaxFunction(fullDate);
        }

    I have pasted my DatePicker setup and the two functions that are used (one retrieves information from the database, and one stores it). Basically I want to mirror the onSelect function of the DatePicker when the page first loads. Thanks!
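
    For what it's worth, a minimal sketch of invoking the same handler on page load without simulating a DOM click (uses jQuery UI's date formatter so the string matches the dd-mm-yy format onSelect would deliver):

        $(document).ready(function () {
            $("#datepicker").datepicker({ /* options exactly as above */ });
            // run the same logic onSelect would run, seeded with today's date
            var today = $.datepicker.formatDate('dd-mm-yy', new Date());
            ajaxFunction(today);
        });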


  • How do I load an image from the SD card into a cover flow

    - by Jolene
    I am building an application that includes a cover flow. Currently I am trying to make the cover flow images come from the SD card, but I am not exactly sure how I should display the images. This is the image adapter:

        public class ImageAdapter extends BaseAdapter {
            int mGalleryItemBackground;
            private Context mContext;
            private FileInputStream fis;
            private Integer[] mImageIds = {
                // Instead of using R.drawable, I need to load the images from my SD card
                R.drawable.futsing,
                R.drawable.futsing2
            };
            private ImageView[] mImages;

            public ImageAdapter(Context c) {
                mContext = c;
                mImages = new ImageView[mImageIds.length];
            }

            public int getCount() { return mImageIds.length; }
            public Object getItem(int position) { return position; }
            public long getItemId(int position) { return position; }

            @TargetApi(8)
            public View getView(int position, View convertView, ViewGroup parent) {
                int screenSize = getResources().getConfiguration().screenLayout
                        & Configuration.SCREENLAYOUT_SIZE_MASK;
                ImageView i = new ImageView(mContext);
                i.setImageResource(mImageIds[position]);
                switch (screenSize) {
                    case Configuration.SCREENLAYOUT_SIZE_XLARGE:
                        // Toast.makeText(this, "Small screen", Toast.LENGTH_LONG).show();
                        Display display4 = ((WindowManager) context
                                .getSystemService(Context.WINDOW_SERVICE)).getDefaultDisplay();
                        int rotation4 = display4.getRotation();
                        Log.d("XLarge:", String.valueOf(rotation4));
                        if (rotation4 == 0 || rotation4 == 2) { /* LANDSCAPE */
                            i.setLayoutParams(new CoverFlow.LayoutParams(300, 300));
                            i.setScaleType(ImageView.ScaleType.CENTER_INSIDE);
                            BitmapDrawable drawable = (BitmapDrawable) i.getDrawable();
                            drawable.setAntiAlias(true);
                            return i;
                        } else if (rotation4 == 1 || rotation4 == 3) { /* PORTRAIT */
                            i.setLayoutParams(new CoverFlow.LayoutParams(650, 650));
                            i.setScaleType(ImageView.ScaleType.CENTER_INSIDE);
                            BitmapDrawable drawable = (BitmapDrawable) i.getDrawable();
                            drawable.setAntiAlias(true);
                            return i;
                        }
                        break;
                    default:
                }
                return null;
            }

            /** Returns the size (0.0f to 1.0f) of the views depending on the 'offset' to the center. */
            public float getScale(boolean focused, int offset) {
                /* Formula: 1 / (2 ^ offset) */
                return Math.max(0, 1.0f / (float) Math.pow(2, Math.abs(offset)));
            }
        }

    And this is how I download and save the file; I need to use the saved image and load it into my cover flow:

        // Method to download the image, run in a background thread to avoid freezing the UI
        void downloadFile() {
            new Thread(new Runnable() {
                public void run() {
                    Bitmap bmImg;
                    URL myFileUrl = null;
                    try {
                        // for (int i = 0; i < urlList.size(); i++) { url = urlList.get(i); ... }
                        myFileUrl = new URL("http://static.adzerk.net/Advertisers/d18eea9d28f3490b8dcbfa9e38f8336e.jpg"); // retrieve image URL
                    } catch (MalformedURLException e) {
                        e.printStackTrace();
                    }
                    try {
                        HttpURLConnection conn = (HttpURLConnection) myFileUrl.openConnection();
                        conn.setDoInput(true);
                        conn.connect();
                        InputStream in = conn.getInputStream();
                        Log.i("im connected", "Download");
                        bmImg = BitmapFactory.decodeStream(in);
                        saveFile(bmImg);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }).start();
        }

        // Save the image as a JPG file on external storage
        private void saveFile(Bitmap bmImg) {
            File filename;
            try {
                File storagePath = new File(Environment.getExternalStorageDirectory(), "Covers");
                storagePath.mkdirs();
                filename = new File(storagePath + "/image.jpg");
                FileOutputStream out = new FileOutputStream(filename);
                bmImg.compress(Bitmap.CompressFormat.JPEG, 90, out);
                out.flush();
                out.close();
                MediaStore.Images.Media.insertImage(getContentResolver(),
                        filename.getAbsolutePath(), filename.getName(), filename.getName());
                // Once the download finishes, notify the user
                Toast.makeText(getApplicationContext(), "Image Saved!", Toast.LENGTH_SHORT).show();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
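
    A minimal sketch of the adapter change this seems to call for: keep file paths instead of resource ids and decode them in getView() (paths assumed to match the saveFile() code above):

        private final String[] mImagePaths = {
            Environment.getExternalStorageDirectory() + "/Covers/image.jpg",
            Environment.getExternalStorageDirectory() + "/Covers/image2.jpg"
        };

        public View getView(int position, View convertView, ViewGroup parent) {
            ImageView i = new ImageView(mContext);
            Bitmap bmp = BitmapFactory.decodeFile(mImagePaths[position]); // returns null if the file is missing
            if (bmp != null) {
                i.setImageBitmap(bmp);
            }
            // ...size/rotation handling as in the original getView()...
            return i;
        }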


  • Configuring Cruise Control Net with sourcesafe - Unable to load array item 'executable'

    - by albert
    Hi all, I'm trying to set up a continuous integration environment following the step-by-step guide at http://www.15seconds.com/issue/040621.htm, which builds a CI stack from CCNet, NAnt, NUnit, NDoc, FxCop and SourceSafe. I've been able to create my build from the command prompt (despite the different version issues). The problem came with the configuration of ccnet.config. I've made some changes because of the newer versions, but I'm still getting errors when starting the CCNet server. Can anyone help me fix this issue, or point me to a guide covering this scenario? The error that I'm getting:

        Unable to instantiate CruiseControl projects from configuration document.
        Configuration document is likely missing Xml nodes required for properly populating CruiseControl configuration.
        Unable to load array item 'executable' - Cannot convert from type System.String to ThoughtWorks.CruiseControl.Core.ITask
        for object with value: "\DevTools\nant\bin\NAnt.exe"
        Xml: E:\DevTools\nant\bin\NAnt.exe

    My ccnet.config file is below:

        <cruisecontrol>
          <project name="BuildingSolution">
            <webURL>http://localhost/ccnet</webURL>
            <modificationDelaySeconds>10</modificationDelaySeconds>
            <triggers>
              <intervaltrigger name="continuous" seconds="60" />
            </triggers>
            <sourcecontrol type="vss" autoGetSource="true">
              <ssdir>E:\VSS\</ssdir>
              <executable>C:\Program Files\Microsoft Visual SourceSafe\SS.EXE</executable>
              <project>$/CCNet/slnCCNet.root/slnCCNet</project>
              <username>Albert</username>
              <password></password>
            </sourcecontrol>
            <prebuild type="nant">
              <executable>E:\DevTools\nant\bin\NAnt.exe</executable>
              <buildFile>E:\Builds\buildingsolution\WebForm.build</buildFile>
              <logger>NAnt.Core.XmlLogger</logger>
              <buildTimeoutSeconds>300</buildTimeoutSeconds>
            </prebuild>
            <tasks>
              <nant>
                <executable>E:\DevTools\nant\bin\nant.exe</executable>
                <nologo>true</nologo>
                <buildFile>E:\Builds\buildingsolution\WebForm.build</buildFile>
                <logger>NAnt.Core.XmlLogger</logger>
                <targetList>
                  <target>build</target>
                </targetList>
                <buildTimeoutSeconds>6000</buildTimeoutSeconds>
              </nant>
            </tasks>
            <publishers>
              <merge>
                <files>
                  <file>E:\Builds\buildingsolution\latest\*-results.xml</file>
                </files>
              </merge>
              <xmllogger />
            </publishers>
          </project>
        </cruisecontrol>
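
    For reference (an assumption based on the error text, not a verified fix): in more recent CCNet versions <prebuild> is a list of tasks, just like <tasks>, so its children must be task elements rather than bare settings; the error reads as CCNet trying to turn the <executable> element itself into an ITask. A sketch of the expected shape:

        <prebuild>
          <nant>
            <executable>E:\DevTools\nant\bin\NAnt.exe</executable>
            <buildFile>E:\Builds\buildingsolution\WebForm.build</buildFile>
            <logger>NAnt.Core.XmlLogger</logger>
            <buildTimeoutSeconds>300</buildTimeoutSeconds>
          </nant>
        </prebuild>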


  • My Android app crashes before loading main.xml

    - by Saverio Puccia
    My Android app crashes before loading main.xml. This is the exception thrown:

        java.lang.RuntimeException: Unable to resume activity...: java.lang.NullPointerException

    What is happening? For completeness I enclose my manifest.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="saverio.puccia.nfcauth"
            android:versionCode="1"
            android:versionName="1.0" >

            <!-- permissions -->
            <uses-sdk android:minSdkVersion="10" />
            <uses-permission android:name="android.permission.NFC"></uses-permission>
            <uses-permission android:name="android.permission.READ_PHONE_STATE"></uses-permission>

            <!-- main declaration -->
            <application
                android:icon="@drawable/ic_launcher"
                android:label="@string/app_name" >
                <activity
                    android:name=".NFCAuthActivity"
                    android:label="@string/app_name" >
                    <!-- intent filter declarations -->
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                    <intent-filter>
                        <action android:name="android.intent.action.NDEF_DISCOVER" />
                        <category android:name="android.intent.category.DEFAULT" />
                    </intent-filter>
                </activity>
            </application>
        </manifest>

    LOGCAT:

        06-25 11:03:23.670: E/AndroidRuntime(4323): FATAL EXCEPTION: main
        06-25 11:03:23.670: E/AndroidRuntime(4323): java.lang.RuntimeException: Unable to resume activity {saverio.puccia.nfcauth/saverio.puccia.nfcauth.NFCAuthActivity}: java.lang.NullPointerException
        06-25 11:03:23.670: E/AndroidRuntime(4323): at android.app.ActivityThread.performResumeActivity(ActivityThread.java:2456)
        06-25 11:03:23.670: E/AndroidRuntime(4323): at android.app.ActivityThread.handleResumeActivity(ActivityThread.java:2484)
        06-25 11:03:23.670: E/AndroidRuntime(4323): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1998)
        06-25 11:03:23.670: E/AndroidRuntime(4323): at android.app.ActivityThread.access$600(ActivityThread.java:127)
        06-25 11:03:23.670: E/AndroidRuntime(4323): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1159)
        06-25 11:03:23.670: E/AndroidRuntime(4323): at android.os.Handler.dispatchMessage(Handler.java:99)
        06-25 11:03:23.670: E/AndroidRuntime(4323): at android.os.Looper.loop(Looper.java:137)
        06-25 11:03:23.670: E/AndroidRuntime(4323): at android.app.ActivityThread.main(ActivityThread.java:4507)
        06-25 11:03:23.670: E/AndroidRuntime(4323): at java.lang.reflect.Method.invokeNative(Native Method)
        06-25 11:03:23.670: E/AndroidRuntime(4323): at java.lang.reflect.Method.invoke(Method.java:511)
        06-25 11:03:23.670: E/AndroidRuntime(4323): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:790)
        06-25 11:03:23.670: E/AndroidRuntime(4323): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:557)
        06-25 11:03:23.670: E/AndroidRuntime(4323): at dalvik.system.NativeStart.main(Native Method)
        06-25 11:03:23.670: E/AndroidRuntime(4323): Caused by: java.lang.NullPointerException
        06-25 11:03:23.670: E/AndroidRuntime(4323): at saverio.puccia.nfcauth.NFCAuthActivity.onResume(NFCAuthActivity.java:103)
        06-25 11:03:23.670: E/AndroidRuntime(4323): at android.app.Instrumentation.callActivityOnResume(Instrumentation.java:1157)
        06-25 11:03:23.670: E/AndroidRuntime(4323): at android.app.Activity.performResume(Activity.java:4539)
        06-25 11:03:23.670: E/AndroidRuntime(4323): at android.app.ActivityThread.performResumeActivity(ActivityThread.java:2446)
        06-25 11:03:23.670: E/AndroidRuntime(4323): ... 12 more
        06-25 11:03:23.670: W/ActivityManager(1998): Force finishing activity r.intent.getComponent().flattenToShortString()
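
    The trace points at a NullPointerException on line 103 of NFCAuthActivity.onResume(), not at the layout. A common cause in NFC apps is calling through a null NfcAdapter, since devices and emulators without NFC hardware return null; a guarded sketch (the field names are illustrative):

        @Override
        protected void onResume() {
            super.onResume();
            NfcAdapter adapter = NfcAdapter.getDefaultAdapter(this); // null when the device has no NFC
            if (adapter != null) {
                adapter.enableForegroundDispatch(this, pendingIntent, intentFilters, techLists);
            }
        }

    (Unrelated to the crash, the manifest's action name android.intent.action.NDEF_DISCOVER also looks like a typo for android.nfc.action.NDEF_DISCOVERED.)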


  • Is SQL Server DRI (ON DELETE CASCADE) slow?

    - by Aaronaught
    I've been analyzing a recurring "bug report" (perf issue) in one of our systems related to a particularly slow delete operation. Long story short: it seems that the CASCADE DELETE keys were largely responsible, and I'd like to know (a) if this makes sense, and (b) why it's the case.

    We have a schema of, let's say, widgets, those being at the root of a large graph of related tables and related-to-related tables and so on. To be perfectly clear, deleting from this table is actively discouraged; it is the "nuclear option" and users are under no illusions to the contrary. Nevertheless, it sometimes just has to be done. The schema looks something like this:

        Widgets
        |
        +--- Anvils (1:1)
        |       |
        |       +--- AnvilTestData (1:N)
        |
        +--- WidgetHistory (1:N)
                |
                +--- WidgetHistoryDetails (1:N)

    Nothing too scary, really. A Widget can be different types, an Anvil is a special type, so that relationship is 1:1 (or more accurately 1:0..1). Then there's a large amount of data - perhaps thousands of rows of AnvilTestData per Anvil collected over time, dealing with hardness, corrosion, exact weight, hammer compatibility, usability issues, and impact tests with cartoon heads. Then every Widget has a long, boring history of various types of transactions - production, inventory moves, sales, defect investigations, RMAs, repairs, customer complaints, etc. There might be 10-20k details for a single widget, or none at all, depending on its age.

    So, unsurprisingly, there's a CASCADE DELETE relationship at every level here. If a Widget needs to be deleted, it means something's gone terribly wrong and we need to erase any records of that widget ever existing, including its history, test data, etc. Again, nuclear option. Relations are all indexed, statistics are up to date. Normal queries are fast. The system tends to hum along pretty smoothly for everything except deletes.

    Getting to the point here, finally: for various reasons we only allow deleting one widget at a time, so a delete statement would look like this:

        DELETE FROM Widgets
        WHERE WidgetID = @WidgetID

    Pretty simple, innocuous-looking delete... that takes over 2 minutes to run, for a widget with no data! After slogging through execution plans I was finally able to pick out the AnvilTestData and WidgetHistoryDetails deletes as the sub-operations with the highest cost. So I experimented with turning off the CASCADE (but keeping the actual FK, just setting it to NO ACTION) and rewriting the script as something very much like the following:

        DECLARE @AnvilID int
        SELECT @AnvilID = AnvilID
        FROM Anvils
        WHERE WidgetID = @WidgetID

        DELETE FROM AnvilTestData
        WHERE AnvilID = @AnvilID

        DELETE FROM WidgetHistory
        WHERE HistoryID IN (
            SELECT HistoryID
            FROM WidgetHistory
            WHERE WidgetID = @WidgetID)

        DELETE FROM Widgets
        WHERE WidgetID = @WidgetID

    Both of these "optimizations" resulted in significant speedups, each one shaving nearly a full minute off the execution time, so that the original 2-minute deletion now takes about 5-10 seconds - at least for new widgets, without much history or test data. Just to be absolutely clear, there is still a CASCADE from WidgetHistory to WidgetHistoryDetails, where the fanout is highest; I only removed the one originating from Widgets. Further "flattening" of the cascade relationships resulted in progressively less dramatic but still noticeable speedups, to the point where deleting a new widget was almost instantaneous once all of the cascade deletes to larger tables were removed and replaced with explicit deletes.

    I'm using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE before each test. I've disabled all triggers that might be causing further slowdowns (although those would show up in the execution plan anyway). And I'm testing against older widgets, too, and noticing a significant speedup there as well; deletes that used to take 5 minutes now take 20-40 seconds. Now I'm an ardent supporter of the "SELECT ain't broken" philosophy, but there just doesn't seem to be any logical explanation for this behaviour other than crushing, mind-boggling inefficiency of the CASCADE DELETE relationships.

    So, my questions are: Is this a known issue with DRI in SQL Server? (I couldn't seem to find any references to this sort of thing on Google or here in SO; I suspect the answer is no.) If not, is there another explanation for the behaviour I'm seeing? If it is a known issue, why is it an issue, and are there better workarounds I could be using?
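
    If the explicit-delete script is here to stay, one way to keep the single-statement interface for callers is an INSTEAD OF DELETE trigger that performs the flattened deletes itself (a sketch; key names are inferred from the description, and this only works while the cascades stay NO ACTION, since INSTEAD OF DELETE triggers cannot coexist with ON DELETE CASCADE):

        CREATE TRIGGER tr_Widgets_Delete ON Widgets INSTEAD OF DELETE AS
        BEGIN
            DELETE atd FROM AnvilTestData atd
                JOIN Anvils a ON a.AnvilID = atd.AnvilID
                JOIN deleted d ON d.WidgetID = a.WidgetID;
            DELETE a FROM Anvils a
                JOIN deleted d ON d.WidgetID = a.WidgetID;
            DELETE whd FROM WidgetHistoryDetails whd
                JOIN WidgetHistory wh ON wh.HistoryID = whd.HistoryID
                JOIN deleted d ON d.WidgetID = wh.WidgetID;
            DELETE wh FROM WidgetHistory wh
                JOIN deleted d ON d.WidgetID = wh.WidgetID;
            DELETE w FROM Widgets w
                JOIN deleted d ON d.WidgetID = w.WidgetID;
        END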


  • Using UUIDs for cheap equals() and hashCode()

    - by Tom McIntyre
    I have an immutable class, TokenList, which consists of a list of Token objects, which are also immutable:

        @Immutable
        public final class TokenList {
            private final List<Token> tokens;

            public TokenList(List<Token> tokens) {
                this.tokens = Collections.unmodifiableList(new ArrayList(tokens));
            }

            public List<Token> getTokens() {
                return tokens;
            }
        }

    I do several operations on these TokenLists that take multiple TokenLists as inputs and return a single TokenList as the output. There can be arbitrarily many TokenLists going in, and each can have arbitrarily many Tokens. These operations are expensive, and there is a good chance that the same operation (i.e. the same inputs) will be performed multiple times, so I would like to cache the outputs. However, performance is critical, and I am worried about the expense of performing hashCode() and equals() on these objects that may contain arbitrarily many elements (as they are immutable, hashCode could be cached, but equals will still be expensive). This led me to wonder whether I could use a UUID to provide equals() and hashCode() simply and cheaply by making the following updates to TokenList:

        @Immutable
        public final class TokenList {
            private final List<Token> tokens;
            private final UUID uuid;

            public TokenList(List<Token> tokens) {
                this.tokens = Collections.unmodifiableList(new ArrayList(tokens));
                this.uuid = UUID.randomUUID();
            }

            public List<Token> getTokens() {
                return tokens;
            }

            public UUID getUuid() {
                return uuid;
            }
        }

    And something like this to act as a cache key:

        @Immutable
        public final class TokenListCacheKey {
            private final UUID[] uuids;

            public TokenListCacheKey(TokenList... tokenLists) {
                uuids = new UUID[tokenLists.length];
                for (int i = 0; i < uuids.length; i++) {
                    uuids[i] = tokenLists[i].getUuid();
                }
            }

            @Override
            public int hashCode() {
                return Arrays.hashCode(uuids);
            }

            @Override
            public boolean equals(Object other) {
                if (other == this) return true;
                if (other instanceof TokenListCacheKey)
                    return Arrays.equals(uuids, ((TokenListCacheKey) other).uuids);
                return false;
            }
        }

    I figure that there are 2^128 different UUIDs, and I will probably have at most around 1,000,000 TokenList objects active in the application at any time. Given this, and the fact that the UUIDs are used combinatorially in cache keys, it seems that the chances of this producing the wrong result are vanishingly small. Nevertheless, I feel uneasy about going ahead with it as it just feels 'dirty'. Are there any reasons I should not use this system? Will the performance costs of the SecureRandom used by UUID.randomUUID() outweigh the gains (especially since I expect multiple threads to be doing this at the same time)? Are collisions going to be more likely than I think? Basically, is there anything wrong with doing it this way? Thanks.
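
    As an alternative that sidesteps the collision question, a sketch of caching a content-based hash in the immutable class; equals() stays content-based but can short-circuit on the cached hash (and, unlike random UUIDs, two lists built from the same tokens actually compare equal, which is what a cache key needs):

        @Immutable
        public final class TokenList {
            private final List<Token> tokens;
            private final int hash; // computed once; safe because the list never changes

            public TokenList(List<Token> tokens) {
                this.tokens = Collections.unmodifiableList(new ArrayList<Token>(tokens));
                this.hash = this.tokens.hashCode();
            }

            @Override public int hashCode() { return hash; }

            @Override public boolean equals(Object o) {
                if (o == this) return true;
                if (!(o instanceof TokenList)) return false;
                TokenList other = (TokenList) o;
                return hash == other.hash && tokens.equals(other.tokens); // cheap reject, then full check
            }
        }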


  • reuse div container to load images

    - by user295927
    Hi all. What I would like to do: use a single container to load images. As it is now, I have eleven containers in the HTML mark-up, each with its own div. Each container holds 4 images (2 side by side on top and 2 on the bottom). When the link in an anchor tag is clicked, the div with the images fades in.

        <!-- jQuery accordion is used for navigation; a typical example is below -->
        <li>
            <a class="head" href="#">commercial/hospitality</a>
            <ul>
                <li><a href="#" projectName="project1" projectType="hospitality1"
                       image1="images/testImage1.jpg" image2="images/testImage2.jpg"
                       image3="images/testImage3.jpg" image4="images/testImage4.jpg">hospitality project number 1</a></li>
                <li><a href="#" projectName="project2" projectType="hospitality2"
                       image1="images/testImage1.jpg" image2="images/testImage2.jpg"
                       image3="images/testImage3.jpg" image4="images/testImage4.jpg">hospitality project number 2</a></li>
                <li><a href="#" projectName="project3" projectType="hospitality3"
                       image1="images/testImage1.jpg" image2="images/testImage2.jpg"
                       image3="images/testImage3.jpg" image4="images/testImage4.jpg">hospitality project number 3</a></li>
            </ul>
        </li>

    Typical <div> container used for image insertion; currently there are 11 of them:

        <div id="hospitality1" class="current">
            <div id="image1"><img src="images/testImage.jpg"/></div>
            <div id="image2"><img src="images/testImage.jpg"/></div>
            <div id="image3"><img src="images/testImage.jpg"/></div>
            <div id="image4"><img src="images/testImage.jpg"/></div>
        </div>

    Here is the code I am using at this point. It works, but is there a better way to do this that re-uses only a single div container for loading the images?

        $(document).ready(function(){
            $('#navigation a').click(function (selected) {
                var projectType = $(this).attr("projectType"); // projectType
                var projectName = $(this).attr("projectName"); // projectName
                var image1 = $(this).attr("image1"); // anchor attribute for image number 1
                var image2 = $(this).attr("image2"); // anchor attribute for image number 2
                var image3 = $(this).attr("image3"); // anchor attribute for image number 3
                var image4 = $(this).attr("image4"); // anchor attribute for image number 4
                console.log(projectType); // returns type of project
                console.log(projectName); // returns name of project
                console.log(image1); // returns 1st image
                console.log(image2); // returns 2nd image
                console.log(image3); // returns 3rd image
                console.log(image4); // returns 4th image
                $(".current").hide(); // hides the previously selected image set
                $("#" + projectType).fadeIn("normal").addClass("current");
            });
        });

    As you can see, the mark-up is getting quite large. Any help is appreciated. ussteele
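
    A sketch of the single-container version (markup assumption: the 11 divs are replaced by one empty <div id="project"></div>; attribute names as in the anchors above):

        $(document).ready(function () {
            $('#navigation a').click(function () {
                var $link = $(this);
                var html = '';
                for (var n = 1; n <= 4; n++) {
                    html += '<div class="thumb"><img src="' + $link.attr('image' + n) + '"/></div>';
                }
                $('#project').hide().html(html).fadeIn('normal'); // one container, rebuilt on each click
                return false;
            });
        });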


  • Make C# matrix code faster

    - by Wam
    Hi all, working on some matrix code, I'm concerned about performance issues. Here's how it works: I have an IMatrix abstract class (with all the matrix operations etc.), implemented by a ColumnMatrix class.

        abstract class IMatrix
        {
            public int Rows { get; set; }
            public int Columns { get; set; }
            public abstract float At(int row, int column);
        }

        class ColumnMatrix : IMatrix
        {
            private float[] data;

            public override float At(int row, int column)
            {
                return data[row + column * this.Rows];
            }
        }

    This class is used a lot across my application, but I'm concerned with performance issues. Testing only reads for a 2000000x15 matrix against a jagged array of the same size, I get 1359ms for array access against 9234ms for matrix access:

        public void TestAccess()
        {
            int iterations = 10;
            int rows = 2000000;
            int columns = 15;

            ColumnMatrix matrix = new ColumnMatrix(rows, columns);
            for (int i = 0; i < rows; i++)
                for (int j = 0; j < columns; j++)
                    matrix[i, j] = i + j;
            float[][] equivalentArray = matrix.ToRowsArray();

            TimeSpan totalMatrix = new TimeSpan(0);
            TimeSpan totalArray = new TimeSpan(0);
            float total = 0f;
            for (int iteration = 0; iteration < iterations; iteration++)
            {
                total = 0f;
                DateTime start = DateTime.Now;
                for (int i = 0; i < rows; i++)
                    for (int j = 0; j < columns; j++)
                        total = matrix.At(i, j);
                totalMatrix += (DateTime.Now - start);

                total += 1f; // Ensure total is read at least once.
                total = total > 0 ? 0f : 0f;

                start = DateTime.Now;
                for (int i = 0; i < rows; i++)
                    for (int j = 0; j < columns; j++)
                        total = equivalentArray[i][j];
                totalArray += (DateTime.Now - start);
            }
            if (total < 0f)
                logger.Info("Nothing here, just make sure we read total at least once.");

            logger.InfoFormat("Average time for a {0}x{1} access, matrix : {2}ms", rows, columns, totalMatrix.TotalMilliseconds);
            logger.InfoFormat("Average time for a {0}x{1} access, array : {2}ms", rows, columns, totalArray.TotalMilliseconds);
            Assert.IsTrue(true);
        }

    So my question: how can I make this thing faster? Is there any way I can make ColumnMatrix.At faster? Cheers!
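
    One direction worth trying (a sketch, not a measured result): most of the gap is the virtual call on At(), which the JIT cannot inline through the abstract base. Sealing the concrete class, calling it through its concrete type, and walking each column contiguously removes most of the per-element dispatch:

        sealed class ColumnMatrix : IMatrix   // sealed helps the JIT avoid virtual dispatch
        {
            private readonly float[] data;

            public override float At(int row, int column) { return data[row + column * Rows]; }

            // non-virtual overload, callable without dispatch when the static type is ColumnMatrix
            public float AtFast(int row, int column) { return data[row + column * Rows]; }
        }

        // caller: use the concrete type and iterate column-major to match the storage layout
        ColumnMatrix m = matrix;
        for (int j = 0; j < columns; j++)
            for (int i = 0; i < rows; i++)
                total = m.AtFast(i, j);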


  • C++ Optimize if/else condition

    - by Heye
    I have a single line of code that consumes 25-30% of the runtime of my application. It is a less-than comparator for an std::set (the set is implemented with a red-black tree). It is called about 180 million times within 52 seconds.

        struct Entry {
            const float _cost;
            const long _id;
            // some other vars

            Entry(float cost, long id) : _cost(cost), _id(id) {}
        };

        template<class T>
        struct lt_entry : public binary_function<T, T, bool>
        {
            bool operator()(const T &l, const T &r) const
            {
                // Most readable shape
                if (l._cost != r._cost) {
                    return r._cost < l._cost;
                } else {
                    return l._id < r._id;
                }
            }
        };

    The entries should be sorted by cost and, if the cost is the same, by their id. I have many insertions for each extraction of the minimum. I thought about using Fibonacci heaps, but I have been told that they are theoretically nice but suffer from high constants and are pretty complicated to implement. And since insert is in O(log(n)), the runtime increase is nearly constant with large n. So I think it's okay to stick with the set.

    To improve performance I tried to express it in different shapes:

        return l._cost < r._cost || r._cost > l._cost || l._id < r._id;
        return l._cost < r._cost || (l._cost == r._cost && l._id < r._id);

    Even this:

        typedef union { float _f; int _i; } flint;
        //...
        flint diff;
        diff._f = (l._cost - r._cost);
        return (diff._i && diff._i >> 31) || l._id < r._id;

    But the compiler seems to be smart enough already, because I haven't been able to improve the runtime. I also thought about SSE, but this problem is really not very applicable to SSE... The assembly looks somewhat like this:

        movss   (%rbx),%xmm1
        mov     $0x1,%r8d
        movss   0x20(%rdx),%xmm0
        ucomiss %xmm1,%xmm0
        ja      0x410600 <_ZNSt8_Rb_tree[..]+96>
        ucomiss %xmm0,%xmm1
        jp      0x4105fd <_ZNSt8_Rb_[..]_+93>
        jne     0x4105fd <_ZNSt8_Rb_[..]_+93>
        mov     0x28(%rdx),%rax
        cmp     %rax,0x8(%rbx)
        jb      0x410600 <_ZNSt8_Rb_[..]_+96>
        xor     %r8d,%r8d

    I have a tiny bit of experience with assembly language, but not much. I thought it would be the best (only?) place to squeeze out some performance, but is it really worth the effort? Can you see any shortcuts that could save some cycles? The platform the code will run on is Ubuntu 12 with gcc 4.6 (-std=c++0x) on a many-core Intel machine. The only libraries available are boost, openmp and tbb. I am really stuck on this one; it seems so simple, but takes that much time. I have been crunching my head for days thinking about how I could improve this line... Can you give me a suggestion how to improve this part, or is it already at its best?
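
    For completeness, a compact equivalent using std::tie (available with -std=c++0x); the semantics match the "most readable shape" above - cost descending, then id ascending. It is unlikely to beat what the compiler already emits, but it is hard to get wrong:

        #include <tuple>

        bool operator()(const T &l, const T &r) const
        {
            // lexicographic: first r._cost < l._cost, then (on ties) l._id < r._id
            return std::tie(r._cost, l._id) < std::tie(l._cost, r._id);
        }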


  • load search results into a div that toggles with other divs

    - by Z. Edwards
    I am working with a page that has multiple divs that toggle; that function works. A search function was added, and this works too. The problem with the page as it currently exists: the search bar was placed on the "default" div, and the results load below the bar into another div that is invisible when empty. The results div sits inside this default div, so if you toggle to another div, you lose the default div and can't get back to it. For this reason, I moved the search bar to the left navigation, where the other toggle links are situated, and moved the search results div out of the default div to "stand on its own."

    What I am trying to do: make the search button show the div with the results as well as find the results - basically, integrate the search function into the array/toggle function. The search function is in one .js file and the toggle function is in a different .js file. I keep thinking there must be a way to get onclick to call into both .js files so that I don't have to do a bunch of extra work combining the two functions that already exist and work separately. I am a JavaScript newbie learning by example and haven't been able to figure this out; I have never seen a working example of this, and my searches haven't produced one. I would be very grateful for any help. I hope I explained the problem adequately.

    Edit: Here is the code I already have for the toggle function (see the sketch after it for the wiring).

        var ids = new Array('a', 'b', 'c' /* , ...and so on - search results not added here yet */);

        function switchid(id_array){
            hideallids();
            for (var i = 0, limit = id_array.length; i < limit; ++i)
                showdiv(id_array[i]);
        }

        function hideallids(){
            for (var i = 0; i < ids.length; i++){
                hidediv(ids[i]);
            }
        }

        function hidediv(id) { // safe function to hide an element with a specified id
            if (document.getElementById) { // DOM3 = IE5, NS6
                document.getElementById(id).style.display = 'none';
            } else {
                if (document.layers) { // Netscape 4
                    document.id.display = 'none';
                } else { // IE 4
                    document.all.id.style.display = 'none';
                }
            }
        }

        function showdiv(id) { // safe function to show an element with a specified id
            if (document.getElementById) { // DOM3 = IE5, NS6
                document.getElementById(id).style.display = 'block';
            } else {
                if (document.layers) { // Netscape 4
                    document.id.display = 'block';
                } else { // IE 4
                    document.all.id.style.display = 'block';
                }
            }
        }

        function initialize(){
            var t = gup("target");
            if (t) {
                switchid([t]);
            }
        }

        function gup(name) {
            name = name.replace(/[\[]/,"\\\[").replace(/[\]]/,"\\\]");
            var regexS = "[\\?&]" + name + "=([^&#]*)";
            var regex = new RegExp(regexS);
            var results = regex.exec(window.location.href);
            if (results == null){
                return "";
            } else {
                return results[1];
            }
        }

    Thanks in advance!
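
    A sketch of one way to wire the two together without merging the files ('searchresults' stands for the real id of the results div - remember to add it to the ids array - and doSearch() for the existing search entry point, whose real name isn't shown here):

        // called from the search button's onclick; both functions stay in their own .js files
        function searchAndShow(form) {
            doSearch(form);                 // existing search function (name assumed)
            switchid(['searchresults']);    // reveal the results div via the toggle machinery
            return false;                   // keep the form from submitting and reloading the page
        }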


  • A failed disk (Pay for professional service or SpinRite?)(new edit)

    - by huggie
    EDIT: After much negotiating and begging and seeing through promotional smoke screens, thanks to the nice representative who took my case, I now know that the engineer has already fixed my NTFS partition (I guess it might have been a bad block in the partition table?). She told me that the problem was considered minor and that I should be able to boot normally and just copy stuff out. Whew... I'm glad I didn't agree to the NTD $16,000 deal. New question (should this be in a new thread?): is it safer to use the Linux "dd" command, or is it better to boot normally into Windows XP and just copy stuff out?

    EDIT2: Thanks for all the help. I gave the best answer to Console as it is most directly related to my question, but many suggestions were helpful and informative.

    ---- ORIGINAL POST BELOW ----

    Hi, in my previous post (you don't need to read it, but it's at http://superuser.com/questions/48838/windows-xp-a-disk-read-error-occurred), I said that my hard disk was not booting and was showing "a disk read error occurred". I took it to a recovery professional. A representative told me today that the NTFS partitions have a "NTFS partition system crash". I have no idea what that means. The engineer handling my drive will not be available for contact until tomorrow.

    Now, the company charges NTD (New Taiwan Dollar) $16,000 to recover the lost data. That's kind of a lot, considering that my graduate student monthly stipend is currently NTD $32,000 (the maximum allowed by regulation; it may be lower and may change depending on funding). So I'm weighing the options:

    Option A: let the professionals recover it for half of my monthly stipend. If the files/directories I designated are not recovered, I don't pay a penny (other than the initial examination fee of NTD $1,000, which I've already paid).

    Option B: try SpinRite myself; if that fails, back to Option A. I spoke to the representative at the company and they recommended I not handle it on my own (yes, of course, that's what they all say, right?), but at that price tag the disk error is probably relatively minor and the data recoverable. The representative really did not have detailed information about the disk failure, though, so I didn't take her recommendation readily. One thing I did heed: she said that what they would do is duplicate the disk before attempting recovery, so there would be no data loss (is this true? couldn't duplicating cause further data loss?). That sounds very good to me.

    Or maybe a third option - Option C: negotiate with them to pay just for duplicating the disk, hopefully for a much smaller price tag. Then try SpinRite; if that fails, back to Option A.

    This is a difficult decision. Ultimately I want my data back, but if a cheaper way is available to achieve the same thing... Can operating with SpinRite also corrupt data in some way? I have no idea what happened to my drive. I'll attempt to contact the engineer, hope to get it clarified, and make an edit here.
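
    On the dd question: the usual Linux approach is to image the partition read-only first and copy files out of the image, so the original disk is touched as little as possible (a sketch; the device name /dev/sda1 and mount points are assumptions, and mounting the image needs an NTFS driver such as ntfs-3g):

        # clone the partition, skipping unreadable sectors instead of aborting
        dd if=/dev/sda1 of=/mnt/backup/ntfs.img bs=4096 conv=noerror,sync

        # mount the image read-only and copy files out of it
        mount -o loop,ro /mnt/backup/ntfs.img /mnt/rescue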


  • C# .Net 3.5 Asynchronous Socket Server Performance Problem

    - by iBrAaAa
    I'm developing an asynchronous game server using the .NET socket asynchronous model (BeginAccept/EndAccept, etc.). The problem I'm facing is this: when I have only one client connected, the server response time is very fast, but once a second client connects, the server response time increases too much. I've measured the time from a client sending a message to the server until it gets the reply in both cases. I found that the average time with one client is about 17ms, and with 2 clients about 280ms!!!

    What I really see is this: when 2 clients are connected and only one of them is moving (i.e. requesting service from the server), it is equivalent to the case when only one client is connected (i.e. fast response). However, when the 2 clients move at the same time (i.e. request service from the server at the same time), their motion becomes very slow (as if the server replies to each of them in order, i.e. not simultaneously).

    Basically, what I am doing is this: when a client requests permission to move and the server grants it, the server broadcasts the client's new position to all the players. So if two clients are moving at the same time, the server ends up broadcasting the new position of each of them to both clients at the same time. For example:

        Client1 asks to go to position (2,2)
        Client2 asks to go to position (5,5)
        Server sends to each of Client1 & Client2 the same two messages:
        message1: "Client1 at (2,2)"
        message2: "Client2 at (5,5)"

    I believe the problem comes from the fact that the Socket class is thread safe according to the MSDN documentation, http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.aspx (NOT SURE THAT IT IS THE PROBLEM). Below is the code for the server:

        /// <summary>This class is responsible for handling packet receiving and sending.</summary>
        public class NetworkManager
        {
            /// <summary>The server port number to be used for the connections. Default 5000.</summary>
            private readonly int port = 5000;

            /// <summary>All clients connected to the server. Key: player id; value: socket.</summary>
            private readonly Hashtable connectedClients = new Hashtable();

            /// <summary>An event to hold the thread while waiting for a new client.</summary>
            private readonly ManualResetEvent resetEvent = new ManualResetEvent(false);

            /// <summary>Keeps track of the number of connected clients.</summary>
            private int clientCount;

            /// <summary>The socket on which the server accepts clients.</summary>
            private readonly Socket mainSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

            /// <summary>The socket error code that indicates a client disconnected.</summary>
            private const int ClientDisconnectedErrorCode = 10054;

            /// <summary>The only instance of this class.</summary>
            private static readonly NetworkManager networkManagerInstance = new NetworkManager();

            /// <summary>A delegate for the new-client-connected event.</summary>
            public delegate void NewClientConnected(Object sender, SystemEventArgs e);

            /// <summary>A delegate for position update message reception.</summary>
            public delegate void PositionUpdateMessageRecieved(Object sender, PositionUpdateEventArgs e);

            /// <summary>The event which fires when a client sends a position message.</summary>
            public PositionUpdateMessageRecieved PositionUpdateMessageEvent { get; set; }

            /// <summary>Keeps track of the number of connected clients.</summary>
            public int ClientCount { get { return clientCount; } }

            /// <summary>A getter for this class instance.</summary>
            public static NetworkManager NetworkManagerInstance { get { return networkManagerInstance; } }

            private NetworkManager() {}

            /// <summary>Starts the game server and holds this thread alive.</summary>
            public void StartServer()
            {
                // Bind the mainSocket to the server IP address and port
                mainSocket.Bind(new IPEndPoint(IPAddress.Any, port));

                // Listen on the bound socket with a max connection queue of 1024
                mainSocket.Listen(1024);

                // Start accepting clients asynchronously
                mainSocket.BeginAccept(OnClientConnected, null);

                // Wait until there is a client that wants to connect
                resetEvent.WaitOne();
            }

            /// <summary>Receives connections of new clients and fires the NewClientConnected event.</summary>
            private void OnClientConnected(IAsyncResult asyncResult)
            {
                Interlocked.Increment(ref clientCount);
                ClientInfo newClient = new ClientInfo
                {
                    WorkerSocket = mainSocket.EndAccept(asyncResult),
                    PlayerId = clientCount
                };

                // Add the new client to the hashtable and increment the number of clients
                connectedClients.Add(newClient.PlayerId, newClient);

                // Fire the new client event, informing that a new client is connected to the server
                if (NewClientEvent != null)
                {
                    NewClientEvent(this, System.EventArgs.Empty);
                }

                newClient.WorkerSocket.BeginReceive(newClient.Buffer, 0, BasePacket.GetMaxPacketSize(),
                    SocketFlags.None, new AsyncCallback(WaitForData), newClient);

                // Start accepting clients asynchronously again
                mainSocket.BeginAccept(OnClientConnected, null);
            }

            /// <summary>Waits for upcoming messages from clients and fires the proper event per packet type.</summary>
            private void WaitForData(IAsyncResult asyncResult)
            {
                ClientInfo sendingClient = null;
                try
                {
                    // Take the client information from the asynchronous result of BeginReceive
                    sendingClient = asyncResult.AsyncState as ClientInfo;

                    // If the client is disconnected, throw a socket exception with the correct error code.
                    if (!IsConnected(sendingClient.WorkerSocket))
                    {
                        throw new SocketException(ClientDisconnectedErrorCode);
                    }

                    // End the pending receive request
                    sendingClient.WorkerSocket.EndReceive(asyncResult);

                    // Fire the appropriate event
                    FireMessageTypeEvent(sendingClient.ConvertBytesToPacket() as BasePacket);

                    // Begin receiving data from this client again
                    sendingClient.WorkerSocket.BeginReceive(sendingClient.Buffer, 0, BasePacket.GetMaxPacketSize(),
                        SocketFlags.None, new AsyncCallback(WaitForData), sendingClient);
                }
                catch (SocketException e)
                {
                    if (e.ErrorCode == ClientDisconnectedErrorCode)
                    {
                        // Close the socket.
                        if (sendingClient.WorkerSocket != null)
                        {
                            sendingClient.WorkerSocket.Close();
                            sendingClient.WorkerSocket = null;
                        }

                        // Remove it from the hash table.
                        connectedClients.Remove(sendingClient.PlayerId);

                        if (ClientDisconnectedEvent != null)
                        {
                            ClientDisconnectedEvent(this, new ClientDisconnectedEventArgs(sendingClient.PlayerId));
                        }
                    }
                }
                catch (Exception e)
                {
                    // Begin receiving data from this client again
                    sendingClient.WorkerSocket.BeginReceive(sendingClient.Buffer, 0, BasePacket.GetMaxPacketSize(),
                        SocketFlags.None, new AsyncCallback(WaitForData), sendingClient);
                }
            }

            /// <summary>Broadcasts the input message to all the connected clients.</summary>
            public void BroadcastMessage(BasePacket message)
            {
                byte[] bytes = message.ConvertToBytes();
                foreach (ClientInfo client in connectedClients.Values)
                {
                    client.WorkerSocket.BeginSend(bytes, 0, bytes.Length, SocketFlags.None, SendAsync, client);
                }
            }

            /// <summary>Sends the input message to the client specified by his ID.</summary>
            public void SendToClient(BasePacket message, int id)
            {
                byte[] bytes = message.ConvertToBytes();
                (connectedClients[id] as ClientInfo).WorkerSocket.BeginSend(bytes, 0, bytes.Length,
                    SocketFlags.None, SendAsync, connectedClients[id]);
            }

            private void SendAsync(IAsyncResult asyncResult)
            {
                ClientInfo currentClient = (ClientInfo)asyncResult.AsyncState;
                currentClient.WorkerSocket.EndSend(asyncResult);
            }

            /// <summary>Fires the event matching the type of the received packet.</summary>
            void FireMessageTypeEvent(BasePacket packet)
            {
                switch (packet.MessageType)
                {
                    case MessageType.PositionUpdateMessage:
                        if (PositionUpdateMessageEvent != null)
                        {
                            PositionUpdateMessageEvent(this, new PositionUpdateEventArgs(packet as PositionUpdatePacket));
                        }
                        break;
                }
            }
        }

    The events fired are handled in a different class; here is the handling code for PositionUpdateMessage (other handlers are irrelevant):

        private readonly Hashtable onlinePlayers = new Hashtable();

        /// <summary>Constructor that creates a new instance of the GameController class.</summary>
        private GameController()
        {
            // Start the server
            server = new Thread(networkManager.StartServer);
            server.Start();

            // Create an event handler for the PositionUpdateMessageEvent of networkManager
            networkManager.PositionUpdateMessageEvent += OnPositionUpdateMessageReceived;
        }

        /// <summary>This event handler is called when a client asks for movement.</summary>
        private void OnPositionUpdateMessageReceived(object sender, PositionUpdateEventArgs e)
        {
            Point currentLocation = ((PlayerData)onlinePlayers[e.PositionUpdatePacket.PlayerId]).Position;
            Point locationRequested = e.PositionUpdatePacket.Position;
            ((PlayerData)onlinePlayers[e.PositionUpdatePacket.PlayerId]).Position = locationRequested;

            // Broadcast the new position
            networkManager.BroadcastMessage(new PositionUpdatePacket
            {
                Position = locationRequested,
                PlayerId = e.PositionUpdatePacket.PlayerId
            });
        }
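
    One thing worth ruling out before redesigning anything (a guess from the symptoms, not from the code): a jump to roughly 200-300ms exactly when small packets start interleaving is the classic signature of Nagle's algorithm interacting with delayed ACKs. Disabling it on each accepted socket is one line:

        ClientInfo newClient = new ClientInfo
        {
            WorkerSocket = mainSocket.EndAccept(asyncResult),
            PlayerId = clientCount
        };
        newClient.WorkerSocket.NoDelay = true; // send small packets immediately instead of coalescing them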


  • XP deployment issues due to msvcr90.dll trying to load FlsAlloc

    - by Sorin Sbarnea
    I have an application built with VS2008 SP1a (9.0.30729.4148) on Windows 7 x64 that does not want to start under XP. The message is "The application failed to initialize properly (0x80000003). Click on OK to terminate the application." I checked with depends.exe and found that msvcr90.dll does try to load FlsAlloc from KERNEL32.dll - and FlsAlloc is available only starting with Vista. I'm sure it is not used by the application. How do I solve this issue?

    The SxS package is already installed on the target machine - in fact I have all 3 versions of the 9.0 SxS runtime (initial release, SP1, and SP1 + security patch). The application is compiled with _BIND_TO_CURRENT_VCLIBS_VERSION=1. I also defined the target Windows version in stdafx.h:

        #define WINVER 0x0500
        #define _WIN32_WINNT 0x0500

    Manifest file:

        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
            <security>
              <requestedPrivileges>
                <requestedExecutionLevel level="asInvoker" uiAccess="false" />
              </requestedPrivileges>
            </security>
          </trustInfo>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type="win32" name="Microsoft.VC90.CRT" version="9.0.30729.4148"
                  processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b" />
            </dependentAssembly>
          </dependency>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type="win32" name="Microsoft.VC90.MFC" version="9.0.30729.4148"
                  processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b" />
            </dependentAssembly>
          </dependency>
        </assembly>

    Result from depends:

        Started "c:\program files\app\app.EXE" (process 0xA0) at address 0x00400000. Successfully hooked module.
        Loaded "c:\windows\system32\NTDLL.DLL" at address 0x7C900000. Successfully hooked module.
        Loaded "c:\windows\system32\KERNEL32.DLL" at address 0x7C800000. Successfully hooked module.
        Loaded "c:\program files\app\MFC90.DLL" at address 0x785E0000. Successfully hooked module.
        Loaded "c:\program files\app\MSVCR90.DLL" at address 0x78520000. Successfully hooked module.
        Loaded "c:\windows\system32\USER32.DLL" at address 0x7E410000. Successfully hooked module.
        Loaded "c:\windows\system32\GDI32.DLL" at address 0x77F10000. Successfully hooked module.
        Loaded "c:\windows\system32\SHLWAPI.DLL" at address 0x77F60000. Successfully hooked module.
        Loaded "c:\windows\system32\ADVAPI32.DLL" at address 0x77DD0000. Successfully hooked module.
        Loaded "c:\windows\system32\RPCRT4.DLL" at address 0x77E70000. Successfully hooked module.
        Loaded "c:\windows\system32\SECUR32.DLL" at address 0x77FE0000. Successfully hooked module.
        Loaded "c:\windows\system32\MSVCRT.DLL" at address 0x77C10000. Successfully hooked module.
        Loaded "c:\windows\system32\COMCTL32.DLL" at address 0x5D090000. Successfully hooked module.
        Loaded "c:\windows\system32\MSIMG32.DLL" at address 0x76380000. Successfully hooked module.
        Loaded "c:\windows\system32\SHELL32.DLL" at address 0x7C9C0000. Successfully hooked module.
        Loaded "c:\windows\system32\OLEAUT32.DLL" at address 0x77120000. Successfully hooked module.
        Loaded "c:\windows\system32\OLE32.DLL" at address 0x774E0000. Successfully hooked module.
        Entrypoint reached. All implicit modules have been loaded.
        DllMain(0x78520000, DLL_PROCESS_ATTACH, 0x0012FD30) in "c:\program files\app\MSVCR90.DLL" called.
        GetProcAddress(0x7C800000 [c:\windows\system32\KERNEL32.DLL], "FlsAlloc") called from "c:\program files\app\MSVCR90.DLL" at address 0x78543ACC and returned NULL. Error: The specified procedure could not be found (127).
        GetProcAddress(0x7C800000 [c:\windows\system32\KERNEL32.DLL], "FlsGetValue") called from "c:\program files\app\MSVCR90.DLL" at address 0x78543AD9 and returned NULL. Error: The specified procedure could not be found (127).
        GetProcAddress(0x7C800000 [c:\windows\system32\KERNEL32.DLL], "FlsSetValue") called from "c:\program files\app\MSVCR90.DLL" at address 0x78543AE6 and returned NULL. Error: The specified procedure could not be found (127).
        GetProcAddress(0x7C800000 [c:\windows\system32\KERNEL32.DLL], "FlsFree") called from "c:\program files\app\MSVCR90.DLL" at address 0x78543AF3 and returned NULL. Error: The specified procedure could not be found (127).


  • Implementing an async "read all currently available data from stream" operation

    - by Jon
    I recently provided an answer to this question: C# - Realtime console output redirection. As often happens, explaining stuff (here "stuff" was how I tackled a similar problem) leads you to greater understanding and/or, as is the case here, "oops" moments. I realized that my solution, as implemented, has a bug. The bug has little practical importance, but it has an extremely large importance to me as a developer: I can't rest easy knowing that my code has the potential to blow up. Squashing the bug is the purpose of this question. I apologize for the long intro, so let's get dirty. I wanted to build a class that allows me to receive input from a console's standard output Stream. Console output streams are of type FileStream; the implementation can cast to that, if needed. There is also an associated StreamReader already present to leverage. There is only one thing I need to implement in this class to achieve my desired functionality: an async "read all the data available this moment" operation. Reading to the end of the stream is not viable because the stream will not end unless the process closes the console output handle, and it will not do that because it is interactive and expecting input before continuing. I will be using that hypothetical async operation to implement event-based notification, which will be more convenient for my callers. The public interface of the class is this: public class ConsoleAutomator { public event EventHandler<ConsoleOutputReadEventArgs> StandardOutputRead; public void StartSendingEvents(); public void StopSendingEvents(); } StartSendingEvents and StopSendingEvents do what they advertise; for the purposes of this discussion, we can assume that events are always being sent without loss of generality. The class uses these two fields internally: protected readonly StringBuilder inputAccumulator = new StringBuilder(); protected readonly byte[] buffer = new byte[256]; The functionality of the class is implemented in the methods below. To get the ball rolling: public void StartSendingEvents(); { this.stopAutomation = false; this.BeginReadAsync(); } To read data out of the Stream without blocking, and also without requiring a carriage return char, BeginRead is called: protected void BeginReadAsync() { if (!this.stopAutomation) { this.StandardOutput.BaseStream.BeginRead( this.buffer, 0, this.buffer.Length, this.ReadHappened, null); } } The challenging part: BeginRead requires using a buffer. This means that when reading from the stream, it is possible that the bytes available to read ("incoming chunk") are larger than the buffer. Remember that the goal here is to read all of the chunk and call event subscribers exactly once for each chunk. To this end, if the buffer is full after EndRead, we don't send its contents to subscribers immediately but instead append them to a StringBuilder. The contents of the StringBuilder are only sent back whenever there is no more to read from the stream. 
private void ReadHappened(IAsyncResult asyncResult) { var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult); if (bytesRead == 0) { this.OnAutomationStopped(); return; } var input = this.StandardOutput.CurrentEncoding.GetString( this.buffer, 0, bytesRead); this.inputAccumulator.Append(input); if (bytesRead < this.buffer.Length) { this.OnInputRead(); // only send back if we 're sure we got it all } this.BeginReadAsync(); // continue "looping" with BeginRead } After any read which is not enough to fill the buffer (in which case we know that there was no more data to be read during the last read operation), all accumulated data is sent to the subscribers: private void OnInputRead() { var handler = this.StandardOutputRead; if (handler == null) { return; } handler(this, new ConsoleOutputReadEventArgs(this.inputAccumulator.ToString())); this.inputAccumulator.Clear(); } (I know that as long as there are no subscribers the data gets accumulated forever. This is a deliberate decision). The good This scheme works almost perfectly: Async functionality without spawning any threads Very convenient to the calling code (just subscribe to an event) Never more than one event for each time data is available to be read Is almost agnostic to the buffer size The bad That last almost is a very big one. Consider what happens when there is an incoming chunk with length exactly equal to the size of the buffer. The chunk will be read and buffered, but the event will not be triggered. This will be followed up by a BeginRead that expects to find more data belonging to the current chunk in order to send it back all in one piece, but... there will be no more data in the stream. In fact, as long as data is put into the stream in chunks with length exactly equal to the buffer size, the data will be buffered and the event will never be triggered. This scenario may be highly unlikely to occur in practice, especially since we can pick any number for the buffer size, but the problem is there. Solution? Unfortunately, after checking the available methods on FileStream and StreamReader, I can't find anything which lets me peek into the stream while also allowing async methods to be used on it. One "solution" would be to have a thread wait on a ManualResetEvent after the "buffer filled" condition is detected. If the event is not signaled (by the async callback) in a small amount of time, then more data from the stream will not be forthcoming and the data accumulated so far should be sent to subscribers. However, this introduces the need for another thread, requires thread synchronization, and is plain inelegant. Specifying a timeout for BeginRead would also suffice (call back into my code every now and then so I can check if there's data to be sent back; most of the time there will not be anything to do, so I expect the performance hit to be negligible). But it looks like timeouts are not supported in FileStream. Since I imagine that async calls with timeouts are an option in bare Win32, another approach might be to PInvoke the hell out of the problem. But this is also undesirable as it will introduce complexity and simply be a pain to code. Is there an elegant way to get around the problem? Thanks for being patient enough to read all of this. Update: I definitely did not communicate the scenario well in my initial writeup. I have since revised the writeup quite a bit, but to be extra sure: The question is about how to implement an async "read all the data available this moment" operation. 
    My apologies to the people who took the time to read and answer before I had made my intent clear enough.
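    For what it's worth, the "timeout" idea can be approximated without an extra blocking thread or PInvoke by using a one-shot timer that flushes the accumulator if no further read completes within a short quiescence window. The following is only a sketch of that approach, not the author's implementation: the flushTimer field, the 50 ms window, and the call sites are assumptions, and OnInputRead would need a lock since it could now be invoked from both the timer thread and the read callback.

        private Timer flushTimer; // hypothetical System.Threading.Timer field on ConsoleAutomator

        // Called from ReadHappened when EndRead filled the buffer exactly and
        // we cannot tell whether more data is coming.
        private void ScheduleFlush()
        {
            this.CancelFlush();
            // One-shot timer: fire once after 50 ms, never repeat.
            this.flushTimer = new Timer(_ => this.OnInputRead(), null, 50, Timeout.Infinite);
        }

        // Called at the start of ReadHappened: more data did arrive, so the
        // pending flush is no longer needed.
        private void CancelFlush()
        {
            if (this.flushTimer != null)
            {
                this.flushTimer.Dispose();
                this.flushTimer = null;
            }
        }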

    Read the article

  • Cost of creating exception compared to cost of logging it

    - by Sebastien Lorber
    Hello, I just wonder how much it costs to raise a Java exception (or to call Throwable's native fillInStackTrace()) compared to what it costs to log it with log4j (to a file, on a production hard drive)... I'm asking myself: when exceptions are raised, is it worth logging them frequently even if they are not necessarily significant? (I work in a high-load environment.) Thanks
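    For reference, a crude way to get numbers for a specific environment is a timing loop like the sketch below. This is only an illustrative probe, not a rigorous benchmark (no JIT warm-up, and it assumes log4j 1.x is configured with a FileAppender):

        import org.apache.log4j.Logger;

        public class ExceptionCostProbe {
            private static final Logger logger = Logger.getLogger(ExceptionCostProbe.class);

            public static void main(String[] args) {
                long t0 = System.nanoTime();
                for (int i = 0; i < 100000; i++) {
                    new Exception("probe"); // the constructor calls fillInStackTrace()
                }
                long t1 = System.nanoTime();

                Exception e = new Exception("probe");
                for (int i = 0; i < 100000; i++) {
                    logger.error("probe", e); // writes message plus stack trace to the appender
                }
                long t2 = System.nanoTime();

                System.out.println("create: " + (t1 - t0) / 100000 + " ns/op");
                System.out.println("log:    " + (t2 - t1) / 100000 + " ns/op");
            }
        }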

    Read the article

  • Sensible Way to Pass Web Data in XML to a SQL Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now. First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change.

    The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) into a temp table. A stored procedure is then run which expects the temp table to exist, and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now, there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes. Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element, and an ID of 0 signifies "create", a.k.a. an insert of a new item.

        <Elements>
          <Element ID="1234">
            <Attr ID="221">Value</Attr>
            <Attr ID="225">287</Attr>
            <Attr ID="234">
              <Element ID="99825">
                <Attr ID="7">Value1</Attr>
                <Attr ID="8">Value2</Attr>
                <Attr ID="9" Action="delete" />
              </Element>
              <Element ID="99826" Action="delete" />
              <Element ID="0" Type="24">
                <Attr ID="7">Value4</Attr>
                <Attr ID="8">Value5</Attr>
                <Attr ID="9">Value6</Attr>
              </Element>
              <Element ID="0" Type="24">
                <Attr ID="7">Value7</Attr>
                <Attr ID="8">Value8</Attr>
                <Attr ID="9">Value9</Attr>
              </Element>
            </Attr>
            <Rel ID="3827" Action="delete" />
            <Rel ID="2284" Role="parent">
              <Element ID="3827" />
              <Element ID="3829" />
              <Attr ID="665">1</Attr>
            </Rel>
            <Rel ID="0" Type="23" Role="child">
              <Element ID="3830" />
              <Attr ID="67" />
            </Rel>
          </Element>
          <Element ID="0" Type="87">
            <Attr ID="221">Value</Attr>
            <Attr ID="225">569</Attr>
            <Attr ID="234">
              <Element ID="0" Type="24">
                <Attr ID="7">Value10</Attr>
                <Attr ID="8">Value11</Attr>
                <Attr ID="9">Value12</Attr>
              </Element>
            </Attr>
          </Element>
          <Element ID="1235" Action="delete" />
        </Elements>

    Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by javascript). And there may be an Action="Delete" on Attr elements as well, since NULLs are treated as "unselected"; sometimes it's very important to know if a Yes/No question has intentionally been answered No, or if no one's bothered to say Yes yet.

    There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as:

        ElementID   2284
        ElementID1  1234
        ElementID2  3827

    Having multiple children in one relationship isn't currently supported, but it would be nice later.
    Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.Net at some point. A persistence engine like Linq or nHibernate is probably not acceptable right now--I just want to get this already working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle unicode characters as well as very long strings (10k +).

    UPDATE

    I have had this working for some time, and I used the ADO Recordset Save-To-Stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code works to handle any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements:

    Existing data elements. Example: input name e12345_a678 (element 12345, attribute 678); the input value is the value of the attribute.

    New elements. Javascript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items.

        var newid = 0;

        function metadataAdd(reference, nameid, value) {
            var t = document.createElement('input');
            t.setAttribute('name', nameid);
            t.setAttribute('id', nameid);
            t.setAttribute('type', 'hidden');
            t.setAttribute('value', value);
            reference.appendChild(t);
        }

        function multiAdd(target, parentelementid, attrid, elementtypeid) {
            var proto = document.getElementById('a' + attrid + '_proto');
            var instance = document.createElement('p');
            target.parentNode.parentNode.insertBefore(instance, target.parentNode);
            var thisid = ++newid;
            instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_');
            instance.id = 'n' + thisid;
            instance.className += ' new';
            metadataAdd(instance, 'n' + thisid + '_p', parentelementid);
            metadataAdd(instance, 'n' + thisid + '_c', attrid);
            metadataAdd(instance, 'n' + thisid + '_t', elementtypeid);
            return false;
        }

    Example: template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). All attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created:

        n1_t, value is the elementtype of the element to be created
        n1_p, value is the parent id of the element (if it is a relationship)
        n1_c, value is the child id of the element (if it is a relationship)

    Deleting elements. A hidden input is created in the form e12345_t with value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete. With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly.
    When the form is posted, here's a sample of building one of the two recordsets used (classic ASP code):

        Set Data = Server.CreateObject("ADODB.Recordset")
        Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn
        Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn
        Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull
        Data.CursorLocation = adUseClient
        Data.CursorType = adOpenDynamic
        Data.Open

    This is the recordset for values; the other is for the elements themselves. I step through the posted form, and for the element recordset I use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added as negative numbers to distinguish them from regular elements (rather than requiring a separate column to indicate whether the row is new or addresses an existing element). I use a regular expression to tear apart the form keys:

        "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$"

    Then, adding an attribute looks like this:

        Data.AddNew
        ElementID.Value = DataID
        AttrID.Value = Integerize(Matches(0).SubMatches(3))
        AttrValue.Value = Request.Form(Key)
        Data.Update

    ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so:

        Set Cmd = Server.CreateObject("ADODB.Command")
        With Cmd
            Set .ActiveConnection = MyDBConn
            .CommandType = adCmdStoredProc
            .CommandText = "DataPost"
            .Prepared = False
            .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element))
            .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data))
        End With
        Result.Open Cmd ' previously created recordset object with options set

    Here's the function that does the XML conversion:

        Private Function XMLFromRecordset(Recordset)
            Dim Stream
            Set Stream = Server.CreateObject("ADODB.Stream")
            Stream.Open
            Recordset.Save Stream, adPersistXML
            Stream.Position = 0
            XMLFromRecordset = Stream.ReadText
        End Function

    Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346, for example). Here are some key snippets from the stored procedure.
    Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon:

        CREATE PROCEDURE [dbo].[DataPost]
            @ElementMetaData ntext,
            @ElementData ntext
        AS

        DECLARE @hdoc int
        --- snip ---

        EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData,
            '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />'

        INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2)
        SELECT *
        FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0)
        WITH (
            ElementID int,
            ElementTypeID int,
            ElementID1 int,
            ElementID2 int
        )
        ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation

        EXEC sp_xml_removedocument @hdoc
        --- snip ---

        UPDATE E
        SET E.ElementTypeID = M.ElementTypeID
        FROM Element E
        INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID
        WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1

    The following query does the correlation of the negative new element ids to the newly inserted ones:

        UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows
        SET NewElementID = Scope_Identity() - @@RowCount + DataID
        WHERE ElementID < 0

    Other set-based queries do all the other work of validating that the attributes are allowed and are the correct data type, and of inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me, as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with 2 inputs was also much easier than sticking to some ideal about having everything in a single XML stream.
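    In case it helps anyone following along, the @ElementData document can be shredded the same way as the metadata, since both come from the same adPersistXML format. The following is only a sketch, not code from the app above; the #ElementData table name and the nvarchar(4000) width are assumptions:

        -- Hypothetical companion to the metadata INSERT, shredding the values document
        INSERT #ElementData (ElementID, AttrID, Value)
        SELECT *
        FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0) -- attribute-centric, as above
        WITH (
            ElementID int,
            AttrID int,
            Value nvarchar(4000)
        )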

    Read the article

  • Tuning MySQL to take advantage of a 4GB VPS

    - by alistair.mp
    Hello, We're running a large site at the moment which has a dedicated VPS for its database server, running MySQL and nothing else. At the moment all four CPU cores are running at close to 100% all of the time, but memory usage sticks at around 268MB out of an available 4096MB. I'm wondering what we can do to better utilise the memory and reduce the CPU load by tweaking MySQL's settings. Here is what we currently have in my.cnf: http://pastie.org/private/hxeji9o8n3u9up9mvtinbq Thanks
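    For what it's worth, 268MB of usage on a 4GB box usually means the buffer settings are still near their defaults. A hypothetical starting point for a dedicated MySQL server with mostly InnoDB tables might look like the sketch below; every number here is an assumption to be tuned against the actual workload, not a drop-in value:

        # my.cnf sketch for a dedicated 4GB MySQL VPS (assumes mostly InnoDB tables)
        innodb_buffer_pool_size = 2G      # the single most important InnoDB setting
        innodb_log_file_size    = 256M    # note: changing this requires cleanly removing the old ib_logfile* files
        key_buffer_size         = 64M     # only matters for MyISAM indexes
        query_cache_size        = 64M     # can cut CPU for repeated identical SELECTs
        tmp_table_size          = 128M    # keep implicit temporary tables in memory
        max_heap_table_size     = 128M    # the smaller of this and tmp_table_size applies
        thread_cache_size       = 16      # avoids re-creating threads per connection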

    Read the article

  • MySQL fields terminated by tab

    - by Brian
    I am trying to upload a tab-delimited file with MySQL. I want a query something like this:

        LOAD DATA LOCAL INFILE 'file' INTO TABLE tbl
        FIELDS TERMINATED BY 'TAB'

    Is there something I can substitute for TAB to make this work?
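    For reference, MySQL's string syntax accepts the escape sequence '\t' for a tab character (tab is in fact the default field terminator for LOAD DATA), so a query along these lines should work:

        LOAD DATA LOCAL INFILE 'file' INTO TABLE tbl
        FIELDS TERMINATED BY '\t'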

    Read the article

  • .Net 3.5 Asynchronous Socket Server Performance Problem

    - by iBrAaAa
    I'm developing an asynchronous game server using the .NET asynchronous socket model (BeginAccept/EndAccept, etc.). The problem I'm facing is this: when I have only one client connected, the server response time is very fast, but once a second client connects, the server response time increases too much. I've measured the time from when a client sends a message to the server until it gets the reply, in both cases. I found that the average time with one client is about 17ms, and with 2 clients about 280ms!!!

    What I really see is this: when 2 clients are connected and only one of them is moving (i.e. requesting service from the server), it is equivalent to the case when only one client is connected (i.e. fast response). However, when the 2 clients move at the same time (i.e. request service from the server at the same time), their motion becomes very slow (as if the server replies to each of them in turn, i.e. not simultaneously).

    Basically, what I am doing is this: when a client requests permission to move and the server grants the request, the server then broadcasts the new position of the client to all the players. So if two clients are moving at the same time, the server ends up trying to broadcast the new position of each of them to both clients at the same time.

    EX:
    Client1 asks to go to position (2,2)
    Client2 asks to go to position (5,5)
    Server sends to each of Client1 & Client2 the same two messages:
    message1: "Client1 at (2,2)"
    message2: "Client2 at (5,5)"

    I believe that the problem comes from the fact that the Socket class is thread safe, according to the MSDN documentation http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.aspx (NOT SURE THAT IT IS THE PROBLEM). Below is the code for the server:

        /// <summary>
        /// This class is responsible for handling packet receiving and sending.
        /// </summary>
        public class NetworkManager
        {
            /// <summary>
            /// An integer to hold the server port number to be used for the connections. Its default value is 5000.
            /// </summary>
            private readonly int port = 5000;

            /// <summary>
            /// Hashtable containing all the clients connected to the server.
            /// key: player Id; value: socket
            /// </summary>
            private readonly Hashtable connectedClients = new Hashtable();

            /// <summary>
            /// An event to hold the thread to wait for a new client.
            /// </summary>
            private readonly ManualResetEvent resetEvent = new ManualResetEvent(false);

            /// <summary>
            /// Keeps track of the number of the connected clients.
            /// </summary>
            private int clientCount;

            /// <summary>
            /// The socket of the server at which the clients connect.
            /// </summary>
            private readonly Socket mainSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

            /// <summary>
            /// The socket exception error code that informs that a client is disconnected.
            /// </summary>
            private const int ClientDisconnectedErrorCode = 10054;

            /// <summary>
            /// The only instance of this class.
            /// </summary>
            private static readonly NetworkManager networkManagerInstance = new NetworkManager();

            /// <summary>
            /// A delegate for the new client connected event.
            /// </summary>
            /// <param name="sender">the sender object</param>
            /// <param name="e">the event args</param>
            public delegate void NewClientConnected(Object sender, System.EventArgs e);

            /// <summary>
            /// A delegate for the position update message reception.
            /// </summary>
            /// <param name="sender">the sender object</param>
            /// <param name="e">the event args</param>
            public delegate void PositionUpdateMessageRecieved(Object sender, PositionUpdateEventArgs e);

            /// <summary>
            /// The event which fires when a client sends a position message.
            /// </summary>
            public PositionUpdateMessageRecieved PositionUpdateMessageEvent { get; set; }

            /// <summary>
            /// Keeps track of the number of the connected clients.
            /// </summary>
            public int ClientCount
            {
                get { return clientCount; }
            }

            /// <summary>
            /// A getter for this class instance.
            /// </summary>
            /// <returns>only instance.</returns>
            public static NetworkManager NetworkManagerInstance
            {
                get { return networkManagerInstance; }
            }

            private NetworkManager() {}

            /// <summary>
            /// Starts the game server and holds this thread alive.
            /// </summary>
            public void StartServer()
            {
                // Bind the mainSocket to the server IP address and port
                mainSocket.Bind(new IPEndPoint(IPAddress.Any, port));

                // The server starts to listen on the bound socket with max connection queue 1024
                mainSocket.Listen(1024);

                // Start accepting clients asynchronously
                mainSocket.BeginAccept(OnClientConnected, null);

                // Wait until there is a client that wants to connect
                resetEvent.WaitOne();
            }

            /// <summary>
            /// Receives connections of new clients and fires the NewClientConnected event.
            /// </summary>
            private void OnClientConnected(IAsyncResult asyncResult)
            {
                Interlocked.Increment(ref clientCount);
                ClientInfo newClient = new ClientInfo
                {
                    WorkerSocket = mainSocket.EndAccept(asyncResult),
                    PlayerId = clientCount
                };

                // Add the new client to the hashtable and increment the number of clients
                connectedClients.Add(newClient.PlayerId, newClient);

                // Fire the new client event informing that a new client is connected to the server
                if (NewClientEvent != null)
                {
                    NewClientEvent(this, System.EventArgs.Empty);
                }

                newClient.WorkerSocket.BeginReceive(newClient.Buffer, 0, BasePacket.GetMaxPacketSize(),
                    SocketFlags.None, new AsyncCallback(WaitForData), newClient);

                // Start accepting clients asynchronously again
                mainSocket.BeginAccept(OnClientConnected, null);
            }

            /// <summary>
            /// Waits for the upcoming messages from different clients and fires the proper event according to the packet type.
            /// </summary>
            private void WaitForData(IAsyncResult asyncResult)
            {
                ClientInfo sendingClient = null;
                try
                {
                    // Take the client information from the asynchronous result of the BeginReceive
                    sendingClient = asyncResult.AsyncState as ClientInfo;

                    // If the client is disconnected, then throw a socket exception
                    // with the correct error code.
                    if (!IsConnected(sendingClient.WorkerSocket))
                    {
                        throw new SocketException(ClientDisconnectedErrorCode);
                    }

                    // End the pending receive request
                    sendingClient.WorkerSocket.EndReceive(asyncResult);

                    // Fire the appropriate event
                    FireMessageTypeEvent(sendingClient.ConvertBytesToPacket() as BasePacket);

                    // Begin receiving data from this client
                    sendingClient.WorkerSocket.BeginReceive(sendingClient.Buffer, 0, BasePacket.GetMaxPacketSize(),
                        SocketFlags.None, new AsyncCallback(WaitForData), sendingClient);
                }
                catch (SocketException e)
                {
                    if (e.ErrorCode == ClientDisconnectedErrorCode)
                    {
                        // Close the socket.
                        if (sendingClient.WorkerSocket != null)
                        {
                            sendingClient.WorkerSocket.Close();
                            sendingClient.WorkerSocket = null;
                        }

                        // Remove it from the hash table.
                        connectedClients.Remove(sendingClient.PlayerId);

                        if (ClientDisconnectedEvent != null)
                        {
                            ClientDisconnectedEvent(this, new ClientDisconnectedEventArgs(sendingClient.PlayerId));
                        }
                    }
                }
                catch (Exception e)
                {
                    // Begin receiving data from this client
                    sendingClient.WorkerSocket.BeginReceive(sendingClient.Buffer, 0, BasePacket.GetMaxPacketSize(),
                        SocketFlags.None, new AsyncCallback(WaitForData), sendingClient);
                }
            }

            /// <summary>
            /// Broadcasts the input message to all the connected clients.
            /// </summary>
            public void BroadcastMessage(BasePacket message)
            {
                byte[] bytes = message.ConvertToBytes();
                foreach (ClientInfo client in connectedClients.Values)
                {
                    client.WorkerSocket.BeginSend(bytes, 0, bytes.Length, SocketFlags.None, SendAsync, client);
                }
            }

            /// <summary>
            /// Sends the input message to the client specified by his ID.
            /// </summary>
            /// <param name="message">The message to be sent.</param>
            /// <param name="id">The id of the client to receive the message.</param>
            public void SendToClient(BasePacket message, int id)
            {
                byte[] bytes = message.ConvertToBytes();
                (connectedClients[id] as ClientInfo).WorkerSocket.BeginSend(bytes, 0, bytes.Length,
                    SocketFlags.None, SendAsync, connectedClients[id]);
            }

            private void SendAsync(IAsyncResult asyncResult)
            {
                ClientInfo currentClient = (ClientInfo)asyncResult.AsyncState;
                currentClient.WorkerSocket.EndSend(asyncResult);
            }

            /// <summary>
            /// Fires the event depending on the type of the received packet.
            /// </summary>
            /// <param name="packet">The received packet.</param>
            void FireMessageTypeEvent(BasePacket packet)
            {
                switch (packet.MessageType)
                {
                    case MessageType.PositionUpdateMessage:
                        if (PositionUpdateMessageEvent != null)
                        {
                            PositionUpdateMessageEvent(this, new PositionUpdateEventArgs(packet as PositionUpdatePacket));
                        }
                        break;
                }
            }
        }

    The events fired are handled in a different class. Here is the event handling code for the PositionUpdateMessage (other handlers are irrelevant):

        private readonly Hashtable onlinePlayers = new Hashtable();

        /// <summary>
        /// Constructor that creates a new instance of the GameController class.
        /// </summary>
        private GameController()
        {
            // Start the server
            server = new Thread(networkManager.StartServer);
            server.Start();

            // Create an event handler for the NewClientEvent of networkManager
            networkManager.PositionUpdateMessageEvent += OnPositionUpdateMessageReceived;
        }

        /// <summary>
        /// This event handler is called when a client asks for movement.
        /// </summary>
        private void OnPositionUpdateMessageReceived(object sender, PositionUpdateEventArgs e)
        {
            Point currentLocation = ((PlayerData)onlinePlayers[e.PositionUpdatePacket.PlayerId]).Position;
            Point locationRequested = e.PositionUpdatePacket.Position;
            ((PlayerData)onlinePlayers[e.PositionUpdatePacket.PlayerId]).Position = locationRequested;

            // Broadcast the new position
            networkManager.BroadcastMessage(new PositionUpdatePacket
            {
                Position = locationRequested,
                PlayerId = e.PositionUpdatePacket.PlayerId
            });
        }
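    One thing that may be worth ruling out before blaming the Socket class's thread safety: with small packets like these, Nagle's algorithm combined with TCP delayed ACKs can produce exactly this kind of ~200ms round-trip inflation. This is only a guess, not a diagnosis, but disabling Nagle on each accepted socket is a one-line experiment; Socket.NoDelay is a standard property, and the placement inside OnClientConnected is an assumption:

        // Hypothetical tweak inside OnClientConnected, right after EndAccept:
        // send small packets immediately instead of waiting to coalesce them.
        newClient.WorkerSocket.NoDelay = true;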

    Read the article

  • STA threads with SQLXMLBULKLOAD

    - by Christopher
    If I have N STA .NET Threads each performing an independent bulk load operation on a different database using the SQLXMLBulkLoad dll (which requires calling threads to be STA), is it possible for all bulk loads to be happening at the same time, or are they implicitly serialized due to the STA COM configuration? Thanks!
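    As a sketch of the scenario being asked about (hypothetical names; DoBulkLoad is assumed to create its own SQLXMLBulkLoad COM object and connection, so that no COM object is shared across apartments):

        // Each bulk load gets its own STA thread, so each COM object lives in
        // its own single-threaded apartment and calls to it are not funneled
        // through another thread's message pump.
        foreach (string connectionString in connectionStrings)
        {
            Thread t = new Thread(DoBulkLoad);
            t.SetApartmentState(ApartmentState.STA); // SQLXMLBulkLoad requires STA callers
            t.Start(connectionString);
        }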

    Read the article
