Search Results

Search found 6207 results on 249 pages for 'slow'.


  • Strange: Planner picks the plan with the lower cost, but (very) long query runtime

    - by S38
    Facts: PostgreSQL 8.4.2 on Linux. I make use of table inheritance. Each table contains 3 million rows. Indexes on the joining columns are set, and table statistics (analyze, vacuum analyze) are up to date. The only table used is "node", with various partitioned sub-tables, in a recursive query (pg >= 8.4). Now here is the explained query:

        WITH RECURSIVE rows AS (
            SELECT * FROM (
                SELECT r.id, r.set, r.parent, r.masterid
                FROM d_storage.node_dataset r
                WHERE masterid = 3533933
            ) q
            UNION ALL
            SELECT * FROM (
                SELECT c.id, c.set, c.parent, r.masterid
                FROM rows r
                JOIN a_storage.node c ON c.parent = r.id
            ) q
        )
        SELECT r.masterid, r.id AS nodeid FROM rows r

        QUERY PLAN
        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms
        (16 rows)

    So far so bad: the planner decides to choose hash joins (good) but no indexes (bad).
    Now, after doing the following: SET enable_hashjoin TO false; the explained query looks like this:

        QUERY PLAN
        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms
        (21 rows)

    Incredibly faster, because indexes were used. Notice: the cost of the second query is somewhat higher than that of the first query. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.
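    A side note on the "why", and a way to test it without flipping the setting globally: the planner expects 600,070 rows out of the recursive join (and 10 rows per worktable scan) where the actual counts are 1, and under that misestimate the hash-over-seq-scan plan really is cheaper on paper, so this is likely an estimation problem rather than a costing bug. SET LOCAL scopes the override to a single transaction, leaving other queries on the default behaviour. A minimal sketch (the query body is abbreviated; enable_hashjoin is the actual GUC name):

        BEGIN;
        -- reverts automatically at COMMIT/ROLLBACK
        SET LOCAL enable_hashjoin = off;
        WITH RECURSIVE rows AS (
            SELECT id, "set", parent, masterid
            FROM d_storage.node_dataset
            WHERE masterid = 3533933
            UNION ALL
            SELECT c.id, c."set", c.parent, r.masterid
            FROM rows r JOIN a_storage.node c ON c.parent = r.id
        )
        SELECT masterid, id AS nodeid FROM rows;
        COMMIT;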


  • improving conversions to binary and back in C#

    - by Saad Imran.
    I'm trying to write a general-purpose socket server for a game I'm working on. I know I could very well use already-built servers like SmartFox and Photon, but I want to go through the pain of creating one myself for learning purposes.

    I've come up with a BSON-inspired protocol to convert the basic data types, their arrays, and a special GSObject to binary, and arrange them in a way so that they can be put back together into object form on the client end. At the core, the conversion methods use the .NET BitConverter class to convert the basic data types to binary. Anyway, the problem is performance: if I loop 50,000 times and convert my GSObject to binary each time, it takes about 5500 ms (the resulting byte[] is just 192 bytes per conversion). I think this would be way too slow for an MMO that sends 5-10 position updates per second with 1,000 concurrent users. Yes, I know it's unlikely that a game will have 1,000 users on at the same time, but like I said earlier, this is supposed to be a learning process for me; I want to go out of my way and build something that scales well and can handle at least a few thousand users. So yeah, if anyone's aware of other conversion techniques or sees where I'm losing performance, I would appreciate the help.

    GSBitConverter.cs. This is the main conversion class; it adds extension methods to the main data types to convert them to the binary format, using the BitConverter class for the base types. I've shown only the code to convert integers and integer arrays; the rest of the methods are pretty much replicas of those two, they just overload the type.

        public static class GSBitConverter
        {
            public static byte[] ToGSBinary(this short value)
            {
                return BitConverter.GetBytes(value);
            }

            public static byte[] ToGSBinary(this IEnumerable<short> value)
            {
                List<byte> bytes = new List<byte>();
                short length = (short)value.Count();
                bytes.AddRange(length.ToGSBinary());
                for (int i = 0; i < length; i++)
                    bytes.AddRange(value.ElementAt(i).ToGSBinary());
                return bytes.ToArray();
            }

            public static byte[] ToGSBinary(this bool value);
            public static byte[] ToGSBinary(this IEnumerable<bool> value);
            public static byte[] ToGSBinary(this IEnumerable<byte> value);
            public static byte[] ToGSBinary(this int value);
            public static byte[] ToGSBinary(this IEnumerable<int> value);
            public static byte[] ToGSBinary(this long value);
            public static byte[] ToGSBinary(this IEnumerable<long> value);
            public static byte[] ToGSBinary(this float value);
            public static byte[] ToGSBinary(this IEnumerable<float> value);
            public static byte[] ToGSBinary(this double value);
            public static byte[] ToGSBinary(this IEnumerable<double> value);
            public static byte[] ToGSBinary(this string value);
            public static byte[] ToGSBinary(this IEnumerable<string> value);
            public static string GetHexDump(this IEnumerable<byte> value);
        }

    Program.cs. Here's the object that I'm converting to binary in a loop.
        class Program
        {
            static void Main(string[] args)
            {
                GSObject obj = new GSObject();
                obj.AttachShort("smallInt", 15);
                obj.AttachInt("medInt", 120700);
                obj.AttachLong("bigInt", 10900800700);
                obj.AttachDouble("doubleVal", Math.PI);
                obj.AttachStringArray("muppetNames",
                    new string[] { "Kermit", "Fozzy", "Piggy", "Animal", "Gonzo" });

                GSObject apple = new GSObject();
                apple.AttachString("name", "Apple");
                apple.AttachString("color", "red");
                apple.AttachBool("inStock", true);
                apple.AttachFloat("price", (float)1.5);

                GSObject lemon = new GSObject();
                lemon.AttachString("name", "Lemon");
                lemon.AttachString("color", "yellow");
                lemon.AttachBool("inStock", false);
                lemon.AttachFloat("price", (float)0.8);

                GSObject apricoat = new GSObject();
                apricoat.AttachString("name", "Apricoat");
                apricoat.AttachString("color", "orange");
                apricoat.AttachBool("inStock", true);
                apricoat.AttachFloat("price", (float)1.9);

                GSObject kiwi = new GSObject();
                kiwi.AttachString("name", "Kiwi");
                kiwi.AttachString("color", "green");
                kiwi.AttachBool("inStock", true);
                kiwi.AttachFloat("price", (float)2.3);

                GSArray fruits = new GSArray();
                fruits.AddGSObject(apple);
                fruits.AddGSObject(lemon);
                fruits.AddGSObject(apricoat);
                fruits.AddGSObject(kiwi);
                obj.AttachGSArray("fruits", fruits);

                Stopwatch w1 = Stopwatch.StartNew();
                for (int i = 0; i < 50000; i++)
                {
                    byte[] b = obj.ToGSBinary();
                }
                w1.Stop();

                Console.WriteLine(BitConverter.IsLittleEndian ? "Little Endian" : "Big Endian");
                Console.WriteLine(w1.ElapsedMilliseconds + "ms");
            }
        }

    Here's the code for some of my other classes that are used in the code above. Most of it is repetitive: GSObject, GSArray, GSWrappedObject.
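    Two costs in the converter above multiply quickly: value.ElementAt(i) restarts enumeration for sources that aren't an IList, turning the array loop quadratic, and every element allocates a small temporary byte[] that is then copied into a List<byte>. A hedged sketch of the same length-prefixed layout written through a BinaryWriter over one MemoryStream (this is not the GSObject API, just the pattern):

        using System;
        using System.IO;

        static class FastBinary
        {
            // Length-prefixed short[] with no per-element allocations
            // and no ElementAt() walks.
            public static void WriteShorts(BinaryWriter w, short[] values)
            {
                w.Write((short)values.Length);   // length prefix, as in the original format
                foreach (short v in values)
                    w.Write(v);                  // appended straight to the stream
            }

            public static byte[] Example()
            {
                using (var ms = new MemoryStream())
                using (var w = new BinaryWriter(ms))
                {
                    WriteShorts(w, new short[] { 1, 2, 3 });
                    return ms.ToArray();         // one final allocation
                }
            }
        }

    Passing a single writer down through the GSObject tree, instead of returning a byte[] from every node and concatenating, removes most of the per-iteration overhead.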


  • C# Memoization of functions with arbitrary number of arguments

    - by Lirik
    I'm trying to create a memoization interface for functions with arbitrary number of arguments, but I'm failing miserably. The first thing I tried is to define an interface for a function which gets memoized automatically upon execution: class EMAFunction:IFunction { Dictionary<List<object>, List<object>> map; class EMAComparer : IEqualityComparer<List<object>> { private int _multiplier = 97; public bool Equals(List<object> a, List<object> b) { List<object> aVals = (List<object>)a[0]; int aPeriod = (int)a[1]; List<object> bVals = (List<object>)b[0]; int bPeriod = (int)b[1]; return (aVals.Count == bVals.Count) && (aPeriod == bPeriod); } public int GetHashCode(List<object> obj) { // Don't compute hash code on null object. if (obj == null) { return 0; } // Get length. int length = obj.Count; List<object> vals = (List<object>) obj[0]; int period = (int) obj[1]; return (_multiplier * vals.GetHashCode() * period.GetHashCode()) + length;; } } public EMAFunction() { NumParams = 2; Name = "EMA"; map = new Dictionary<List<object>, List<object>>(new EMAComparer()); } #region IFunction Members public int NumParams { get; set; } public string Name { get; set; } public object Execute(List<object> parameters) { if (parameters.Count != NumParams) throw new ArgumentException("The num params doesn't match!"); if (!map.ContainsKey(parameters)) { //map.Add(parameters, List<double> values = new List<double>(); List<object> asObj = (List<object>)parameters[0]; foreach (object val in asObj) { values.Add((double)val); } int period = (int)parameters[1]; asObj.Clear(); List<double> ema = TechFunctions.ExponentialMovingAverage(values, period); foreach (double val in ema) { asObj.Add(val); } map.Add(parameters, asObj); } return map[parameters]; } public void ClearMap() { map.Clear(); } #endregion } Here are my tests of the function: private void MemoizeTest() { DataSet dataSet = DataLoader.LoadData(DataLoader.DataSource.FROM_WEB, 1024); List<String> labels = dataSet.DataLabels; Stopwatch sw = new Stopwatch(); IFunction emaFunc = new EMAFunction(); List<object> parameters = new List<object>(); int numRuns = 1000; long sumTicks = 0; parameters.Add(dataSet.GetValues("open")); parameters.Add(12); // First call for(int i = 0; i < numRuns; ++i) { emaFunc.ClearMap();// remove any memoization mappings sw.Start(); emaFunc.Execute(parameters); sw.Stop(); sumTicks += sw.ElapsedTicks; } Console.WriteLine("Average ticks not-memoized " + (sumTicks/numRuns)); sumTicks = 0; // Repeat call for (int i = 0; i < numRuns; ++i) { sw.Start(); emaFunc.Execute(parameters); sw.Stop(); sumTicks += sw.ElapsedTicks; } Console.WriteLine("Average ticks memoized " + (sumTicks/numRuns)); } The performance is confusing me... I expected the memoized function to be faster, but it didn't work out that way: Average ticks not-memoized 106,182 Average ticks memoized 198,854 I tried doubling the data instances to 2048, but the results were about the same: Average ticks not-memoized 232,579 Average ticks memoized 446,280 I did notice that it was correctly finding the parameters in the map and it going directly to the map, but the performance was still slow... I'm either open for troubleshooting help with this example, or if you have a better solution to the problem then please let me know what it is.
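    Before judging the cache, note the harness: Stopwatch.Start resumes rather than restarts, and ElapsedTicks is cumulative, so each iteration adds the total elapsed so far, and the "memoized" loop inherits everything the first loop accumulated. Resetting per iteration changes the numbers completely. Separately, a dictionary keyed by List<object> hashes the inner list by reference, which makes the cache fragile. A hedged sketch of both points, not a drop-in replacement for the IFunction interface:

        using System;
        using System.Collections.Generic;

        static class Memoizer
        {
            // Generic two-argument memoizer. Tuple gives structural
            // Equals/GetHashCode, though reference-typed components
            // still compare by reference, so prefer immutable keys.
            public static Func<TA, TB, TR> Memoize<TA, TB, TR>(Func<TA, TB, TR> f)
            {
                var cache = new Dictionary<Tuple<TA, TB>, TR>();
                return (a, b) =>
                {
                    var key = Tuple.Create(a, b);
                    TR result;
                    if (!cache.TryGetValue(key, out result))  // one probe, not ContainsKey + indexer
                    {
                        result = f(a, b);
                        cache[key] = result;
                    }
                    return result;
                };
            }
        }

        // Timing one call correctly: restart the stopwatch each iteration.
        // sw.Reset(); sw.Start(); emaFunc.Execute(parameters); sw.Stop();
        // sumTicks += sw.ElapsedTicks;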


  • Hard to append a table with many records into another without generating duplicates

    - by Bill Mudry
    I may seem a bit wordy at first, but I hope that makes it easier for all of you to understand what I am doing in the first place. I have an uncommon but enjoyable activity of collecting as many species of wood from around the world as I can (over 2,900 so far). Ok, that is the real world. Meanwhile I have spent over 8 years compiling over 5.8 MB of text data on all the woods of the world. That got so large that learning some basic PHP and MySQL was most welcome, so I could build a new database-driven home for all this research. I am still slow at it but getting there.

    The original premise was to find evidence of as many species of woods in the world as I can. The more names identified, the more successful the project. I have named the project TAXA for ease of conversation (short for Taxonomy). You are most welcome to take a look at what I have so far at www.prowebcanada.com/taxa. It is 95% dynamically driven. So far I am reporting about 6,500 botanical wood names and, as said above, the more I can report, the more successful the project.

    I have a file of all the woods in the second largest wood collection in the world, the Tervuren wood collection in the Netherlands, with over 11,300 wood names even after cleaning out all duplicates. That is almost twice the number I am reporting now, so porting all the new wood names from Tervuren to the 'species' table where I keep the reported data would be a major desirable advancement in the project. At one point I was able to add all the Tervuren records to the species table, but over 3,000 duplicates also formed. They were not in the Tervuren file in the first place, but represent the same wood names common to both files. It is common sense that there would be woods common to both that, when merged, would create new duplicates.

    At one point, and with the help of others from another forum, I may very well have finally got the proper SQL statement. When I ran it, though, the system said (semi-amusingly at first) ----- that it had gone away! After looking up on the net what could have done this, one reason is that the MySQL timeout lapses, probably because of the large size of the files I am running. I am running this on a rented account on GoDaddy, so I cannot go about trying to adjust any config file.

    For safety, I copied the tervuren.sql file as tervuren_target.sql and the species.sql file as species_master.sql to use as working files, just to make sure I protect the original files from destruction or damage. Later I can rename species_master back to just species.sql once I am happy all worked well. The species file has about 18 columns in it, but only 5 columns match the columns in the Tervuren file (name for name, and collation also). The rest of the columns are just along for the ride, so to speak. The common key is the 'species_name' column in both. I am not sure it is at all proper to call one a primary key and the other a foreign key, since there really is no relational connection between them. One is just more data for the other and can disappear afterwards, never to be referred to by the working code in the application.

    I have been very surprised and flabbergasted at how hard it can be to append records from one large table into another (with the same column names plus others) without generating NEW duplicates in the first place.
    Watch out thinking that a SELECT DISTINCT statement may do the job, because absolutely NO records in the species table must get destroyed in the process, and there is no way (well, that I know of) to tell the DISTINCT command this. Yes, the original 'species' table has duplicates in it even before all this, but, trust me ---- they have to be removed the long hard way, manually, record by record, or I will lose precious information. It is more important to just make sure no NEW duplicates form through bringing new names from tervuren_target.species_name into species.species_name.

    I am hoping and thinking that a straight SQL solution should work, except for that nasty timeout. How do I get past that? Could it mean that I may have to turn to a PHP-plus-SQL method? Or would I have to break up the Tervuren file into a few smaller ones and run them independently (I hope not...)? So far, what seems like it should be easy has proven to be unexpectedly tricky.

    I appreciate any help you can give, but start from the assumption that this may be harder to do right than it may seem on the surface. By the way, I am running a quad-core 64-bit system with Windows 7, so at least I have some fairly hefty power on the client end, and I have a direct Ethernet cable feeding a cable connection to the internet. Once I get an algorithm and code working for this, I also have many other lists to process that could make the 'species' table grow even more. It could be equivalent to (ahem) lighting a rocket under my project (especially compared to doing this record by record manually)!

    This is my first time in this forum, so I do not know how I will receive any replies. Do I have to come back here periodically, or are replies emailed out also? It would be great if you CC'd copies to me at billmudry at rogers.com :-)

    Much thanks for your patience and help, Bill Mudry, Mississauga, Ontario, Canada (next to Toronto).
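    On the SQL side, a duplicate-safe append that does not require cleaning the existing duplicates first is an INSERT ... SELECT that anti-joins the target on the shared name column; only Tervuren names with no match in the species table get inserted. A hedged sketch with invented column names (col_b ... col_e stand in for the four matching columns besides species_name), sliced by alphabet range as one crude way around the shared-host timeout:

        INSERT INTO species_master (species_name, col_b, col_c, col_d, col_e)
        SELECT t.species_name, t.col_b, t.col_c, t.col_d, t.col_e
        FROM tervuren_target AS t
        LEFT JOIN species_master AS s ON s.species_name = t.species_name
        WHERE s.species_name IS NULL
          AND t.species_name >= 'A' AND t.species_name < 'D';  -- one slice; repeat per range

    Each slice is a short statement, so no single run has to survive the full 11,300-row join, and re-running a slice is harmless because the anti-join keeps it idempotent.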


  • migrating from Prototype to jQuery in Rails, having trouble with a duplicate GET request

    - by aressidi
    I'm in the process of migrating from Prototype to jQuery and moving all JS outside of the view files. All is going fairly well with one exception. Here's what I'm trying to do, and the problem I'm having. I have a diary where users can update records in-line in the page like so: user clicks 'edit' link to edit an entry in the diary a get request is performed via jQuery and an edit form is displayed allowing the user to modify the record user updates the record, the form disappears and the updated record is shown in place of the form All of that works so far. The problem arises when: user updates a record user clicks 'edit' to update another record in this case, the edit form is shown twice! In firebug I get a status code 200 when the form shows, and then moments later, another edit form shows again with a status code of 304 I only want the form to show once, not twice. The form shows twice only after I update a record, otherwise everything works fine. Here's the code, any ideas? I think this might have to do with the fact that in food_item_update.js I call the editDiaryEntry() after a record is updated, but if I don't call that function and try and update the record after it's been modified, then it just spits up the .js.erb response on the screen. That's also why I have the editDiaryEntry() in the add_food.js.erb file. Any help would be greatly appreciated. diary.js jQuery(document).ready(function() { postFoodEntry(); editDiaryEntry(); initDatePicker(); }); function postFoodEntry() { jQuery('form#add_entry').submit(function(e) { e.preventDefault(); jQuery.post(this.action, jQuery(this).serialize(), null, "script"); // return this }); } function editDiaryEntry() { jQuery('.edit_link').click(function(e) { e.preventDefault(); // This should look to see if one version of this is open... if (jQuery('#edit_container_' + this.id).length == 0 ) { jQuery.get('/diary/entry/edit', {id: this.id}, null, "script"); } }); } function closeEdit () { jQuery('.close_edit').click(function(e) { e.preventDefault(); jQuery('.entry_edit_container').remove(); jQuery("#entry_" + this.id).show(); }); } function updateDiaryEntry() { jQuery('.edit_entry_form').submit(function(e) { e.preventDefault(); jQuery.post(this.action, $(this).serialize(), null, "script"); }); } function initDatePicker() { jQuery("#date, #edit_date").datepicker(); }; add_food.js.erb jQuery("#entry_alert").show(); jQuery('#add_entry')[ 0 ].reset(); jQuery('#diary_entries').html("<%= escape_javascript(render :partial => 'members/diary/diary_entries', :object => @diary, :locals => {:record_counter => 0, :date_header => 0, :edit_mode => @diary_edit}, :layout => false ) %>"); jQuery('#entry_alert').html("<%= escape_javascript(render :partial => 'members/diary/entry_alert', :locals => {:type => @type, :message => @alert_message}) %>"); jQuery('#entry_alert').show(); setTimeout(function() { jQuery('#entry_alert').fadeOut('slow'); }, 5000); editDiaryEntry(); food_item_edit.js.erb jQuery("#entry_<%= @entry.id %>").hide(); jQuery("#entry_<%= @entry.id %>").after("<%= escape_javascript(render :partial => 'members/diary/food_item_edit', :locals => {:user_food_profile => @entry}) %>"); closeEdit(); updateDiaryEntry(); initDatePicker(); food_item_update.js jQuery("#entry_<%= @entry.id %>").replaceWith("<%= escape_javascript(render :partial => 'members/diary/food_item', :locals => {:entry => @entry, :total_calories => 0}) %>"); jQuery('.entry_edit_container').remove(); editDiaryEntry();
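    The symptom is consistent with handler stacking rather than caching: food_item_update.js calls editDiaryEntry() again, which binds a second click handler to .edit_link elements that already have one, so the next click fires two GET requests (the 200, then the 304 served from cache) and two forms. For the jQuery 1.4 era, binding once with live() covers rows that are replaced later, so the .js.erb responses no longer need to re-bind anything. A hedged sketch, untested against the app:

        function editDiaryEntry() {
          // die() clears any previously bound copies; live() attaches one
          // delegated handler that also covers freshly inserted rows.
          jQuery('.edit_link').die('click').live('click', function (e) {
            e.preventDefault();
            if (jQuery('#edit_container_' + this.id).length == 0) {
              jQuery.get('/diary/entry/edit', { id: this.id }, null, "script");
            }
          });
        }

    The same treatment applies to updateDiaryEntry() and closeEdit(), which are also re-run after every update.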


  • jQuery fade and swap an element when clicked, which will also relate to an accordion menu

    - by Nik
    You will notice when you click posture 1 the description drops down and images appear on the right. Now when you click posture 2 or posture 3 the images and description change as they should. What I need to do now is - If posture 1 has been clicked and then posture 2 is clicked the posture 1 menu needs to close so that there is only one posture description visible at one time. If I could also make it so that if the current open posture item is clicked so that it closes and there are no open posture descriptions that there also no images displayed on the right. Finally is there a way to make sure only one set of animation images is running, because just say the user goes through all 26 options and they continue to run in the background it may get sluggish (thanks to Nick Craver for bringing that up). At this stage only posture 1, 2 and 3 are available. Ok finally some code - //Description drop-down boxes $(document).ready(function(){ //Hide (Collapse) the toggle containers on load $(".toggle_container").hide(); //Switch the "Open" and "Close" state per click $("h5.trigger").toggle(function(){ $(this).addClass("active"); }, function () { $(this).removeClass("active"); }); //Slide up and down on click $("h5.trigger").click(function(){ $(this).next(".toggle_container").slideToggle("slow"); }); }); //Images on the right fade in and out thanks to aSeptik $(document).ready(function(){ $('#section_Q_01,#section_Q_02,#section_Q_03').hide(); $(function() { $('h5.trigger a').click( function(e) { e.preventDefault(); var trigger_id = $(this).parent().attr('id'); //get id Q_## $('.current').removeClass('current').hide(); //add a class for easy access & hide $('#section_' + trigger_id).addClass('current').fadeIn(5000); //show clicked one }); }); }); //Fading pics $(document).ready(function(){ $('.pics').cycle({ fx: 'fade', speed: 2500 }); }); Description boxes - <h5 class="trigger" id="Q_01" ><a href="#">Posture 1 : Standing Deep Breathing :</a></h5> <div class="toggle_container" > <div class="block"> <span class="sc">Pranayama Series</span> <p class="bold">Benefits:</p> </div> </div> <h5 class="trigger" id="Q_02" ><a href="#">Posture 2 : Half Moon Pose With Hands To Feet Pose :</a></h5> <div class="toggle_container"> <div class="block"> <span class="sc">Ardha Chandrasana with Pada-Hastasana</span> <p class="bold">Benefits:</p> </div> </div> <h5 class="trigger" id="Q_03" ><a href="#">Posture 3 : Awkward Pose :</a></h5> <div class="toggle_container"> <div class="block"> <span class="sc">Utkatasana</span> <p class="bold">Benefits:</p> </div> </div> and the images on the right - <div id="section_Q_01" class="01"> <div class="pics"> <img src="../images/multi/poses/pose1/Pranayama._01.jpg"/> <img src="../images/multi/poses/pose1/Pranayama._02.jpg"/> <img src="../images/multi/poses/pose1/Pranayama._03.jpg"/> </div> </div> <div id="section_Q_02" class="02"> <div class="pics"> <img src="../images/multi/poses/pose2/Half_Moon_Pose_04.jpg" /> <img src="../images/multi/poses/pose2/Backward_Bending_05.jpg" /> <img src="../images/multi/poses/pose2/Hands_to_Feet_Pose_06.jpg" /> </div> </div> <div id="section_Q_03" class="03"> <div class="pics"> <img src="../images/multi/poses/pose3/Awkward_01.jpg" /> <img src="../images/multi/poses/pose3/Awkward_02.jpg" /> <img src="../images/multi/poses/pose3/Awkward_03.jpg" /> </div> </div> It would be a bonus if images faded out when another element is clicked... but not a big deal. Thanks for having a look
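    One hedged way to get the one-open-at-a-time behaviour is to fold the accordion and the image logic into a single click handler: close everything first, then reopen only if the clicked posture was closed. The sketch below reuses the ids and classes from the post (and the Cycle plugin's 'pause'/'resume' commands) but hasn't been run against the real page:

        jQuery("h5.trigger").click(function () {
            var container = jQuery(this).next(".toggle_container");
            var wasOpen = container.is(":visible");

            // collapse every posture and hide/pause every image set,
            // so at most one slideshow keeps animating
            jQuery(".toggle_container").slideUp("slow");
            jQuery("h5.trigger").removeClass("active");
            jQuery(".current .pics").cycle("pause");
            jQuery(".current").removeClass("current").hide();

            if (!wasOpen) {   // clicked a closed posture: open it
                jQuery(this).addClass("active");
                container.slideDown("slow");
                jQuery("#section_" + this.id).addClass("current")
                    .fadeIn("slow").find(".pics").cycle("resume");
            }
            // clicked the open posture: everything stays closed,
            // and no images remain on the right
        });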


  • Duplex communication using NetTcpBinding - ContractFilter mismatch?

    - by Shaul
    I'm making slow and steady progress towards having a duplex communication channel open between a client and a server, using NetTcpBinding. (FYI, you can observe my newbie progress here and here!) I'm now at the stage where I have successfully connected to my server, through the server's firewall, and the client can make requests of the server. In the other direction, however, things aren't quite so happy. It works fine when testing on my own machine, but when testing over the internet, when I try to initiate a callback from the server side, I get an error: The message with Action 'http://MyWebService/IWebService/HelloWorld' cannot be processed at the receiver, due to a ContractFilter mismatch at the EndpointDispatcher. This may be because of either a contract mismatch (mismatched Actions between sender and receiver) or a binding/security mismatch between the sender and the receiver. Check that sender and receiver have the same contract and the same binding (including security requirements, e.g. Message, Transport, None). Here are some of the key bits of code. First, the web interface: [ServiceContract(Namespace = "http://MyWebService", SessionMode = SessionMode.Required, CallbackContract = typeof(ISiteServiceExternal))] public interface IWebService { [OperationContract] void Register(long customerID); } public interface ISiteServiceExternal { [OperationContract] string HelloWorld(); } Then, on the client side (I was fiddling with these attributes without really knowing what I'm doing): [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession, Namespace="http://MyWebService")] class SiteServer : IWebServiceCallback { string IWebServiceCallback.HelloWorld() { return "Hello World!"; } ... } So what am I doing wrong here? EDIT: Adding app.config code. 
From server: <system.serviceModel> <diagnostics> <messageLogging logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" logEntireMessage="true" maxMessagesToLog="1000" maxSizeOfMessageToLog="524288" /> </diagnostics> <behaviors> <serviceBehaviors> <behavior name="mex"> <serviceDebug includeExceptionDetailInFaults="true"/> <serviceMetadata/> </behavior> </serviceBehaviors> </behaviors> <services> <service name ="MyWebService.WebService" behaviorConfiguration="mex"> <endpoint address="net.tcp://localhost:8000" binding="netTcpBinding" contract="MyWebService.IWebService" bindingConfiguration="TestBinding" name="MyEndPoint"></endpoint> <endpoint address ="mex" binding="mexTcpBinding" name="MEX" contract="IMetadataExchange"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:8000"/> </baseAddresses> </host> </service> </services> <bindings> <netTcpBinding> <binding name="TestBinding" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" portSharingEnabled="false"> <readerQuotas maxDepth="32" maxStringContentLength ="8192" maxArrayLength ="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384"/> <security mode="None"/> </binding> </netTcpBinding> </bindings> </system.serviceModel> and on the client side: <system.serviceModel> <bindings> <netTcpBinding> <binding name="MyEndPoint" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" transactionFlow="false" transferMode="Buffered" transactionProtocol="OleTransactions" hostNameComparisonMode="StrongWildcard" listenBacklog="10" maxBufferPoolSize="524288" maxBufferSize="65536" maxConnections="10" maxReceivedMessageSize="65536"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="None"> <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign"> <extendedProtectionPolicy policyEnforcement="Never" /> </transport> <message clientCredentialType="Windows" /> </security> </binding> </netTcpBinding> </bindings> <client> <endpoint address="net.tcp://mydomain.gotdns.com:8000/" binding="netTcpBinding" bindingConfiguration="MyEndPoint" contract="IWebService" name="MyEndPoint" /> </client> </system.serviceModel>
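    The Action in the fault names the callback operation (HelloWorld), so the dispatcher rejecting the message is the client-side callback endpoint. The usual culprits are a hand-written callback class whose contract has drifted from the server's CallbackContract (operation name or namespace), or binding/security settings that differ between the two sides; note the server binding says security mode None while the client config still carries Windows transport/message credential settings. Generating the proxy from metadata keeps the two contracts in sync. A hedged client-side sketch using the types from the post:

        using System.ServiceModel;

        // SiteServer implements the callback contract exactly as generated
        // from ISiteServiceExternal (surfaced here as IWebServiceCallback).
        var callback = new SiteServer();
        var context = new InstanceContext(callback);

        var binding = new NetTcpBinding();
        binding.Security.Mode = SecurityMode.None;   // must match "TestBinding" on the server

        var factory = new DuplexChannelFactory<IWebService>(
            context, binding,
            new EndpointAddress("net.tcp://mydomain.gotdns.com:8000/"));

        IWebService proxy = factory.CreateChannel();
        proxy.Register(42);   // some customer id; the server can now call HelloWorld() back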


  • SSL confirmation dialog popup auto closes in IE8 when re-accessing a JNLP file

    - by haylem
    I'm having this very annoying problem to troubleshoot, and I have been going at it for way too many days now, so have a go at it.

    The Environment

    We have 2 app servers, which can be located on either the same machine or 2 different machines, use the same signing certificate, and host 2 different web apps. Let's say, for the sake of our study case here, that they are on the same physical machine. So we have:

        https://company.com/webapp1/
        https://company.com/webapp2/

    webapp1 is a GWT-based rich client which contains, on one of its screens, a menu with an item that is used to invoke a Java Web Start client located on webapp2. It does so by performing a simple window.open call via this GWT call:

        Window.open("https://company.com/webapp2/app.jnlp", "_blank", null);

    Expected Behavior

    - User merrily goes to webapp1.
    - User navigates to the menu entry to start the Web Start app and clicks on it.
    - The browser fires off a separate window/dialog which, depending on the browser and its security settings, will: request confirmation to navigate to this secure site, directly download the file, and possibly auto-execute a javaws process if there's a file association; otherwise the user can simply click on the file and start the app (or go about doing whatever it takes here).
    - If you close the app, close the dialog, and re-click the menu entry, the same thing should happen again.

    Actual Behavior

    On anything but God-forsaken IE 8 (though I admit there's also all the God-forsaken pre-IE8 stuff, but the Requirements Lords being merciful, we have already recently managed to make them drop those suckers. That was close. Let's hold hands and say a prayer of gratitude.): stuff just works. The JNLP gets downloaded, the app executes just fine; you can close the app, redo all the steps, and it will restart happily. People rejoice. Puppies are safe and play on green hills in the sunshine. Developers can go grab a coffee and move on to more meaningful and rewarding tasks, like checking out SO questions. Chrome doesn't want to execute the JNLP, but who cares? Customers won't get RSI from clicking a file every other week.

    On God-forsaken IE8: on the first visit, the dialog opens and requests confirmation for the user to continue to webapp2, though it could be unsafe (here be dragons, I tell you). The JNLP downloads and auto-opens, the app starts. Your breathing is steady and slow. You close the app, close that SSL confirmation dialog, and re-click the menu entry. The dialog opens and auto-closes. Nothing starts, the file wasn't downloaded to any known location, and Fiddler just reports the connection was closed. If you close IE and reach that menu item to click it again, it is now back to working correctly. Until you try again during the same session, of course. Your heart rate goes up, you get some more coffee to make matters worse, and start looking for plane tickets online and a cheap but heavy golf club on an online auction site to go clubbing baby polar seals to avenge your bloodthirst, as the gates to the IE team in Redmond are probably more secured than an ice block, as one would assume they get death threats often. Plus, the IE9 and IE10 teams are already hard at work fixing the crap left by their predecessors, so maybe you don't want to be too hard on them, and you don't have money to waste on a PI to track down the former devs responsible for this mess.

    Added Details

    I have come across many problems with IE8 not downloading files over SSL when it uses a no-cache header.
    This was indeed one of our problems, which seems to be worked out now. It downloads files fine; webapp2 uses the following headers to serve the JNLP file:

        response.setHeader("Cache-Control", "private, must-revalidate"); // IE8 happy
        response.setHeader("Pragma", "private");                         // IE8 happy
        response.setHeader("Expires", "0");                              // IE8 happy
        response.setHeader("Access-Control-Allow-Origin", "*");          // allow requests via cross-origin AJAX
        response.setContentType("application/x-java-jnlp-file");         // please exec me

    As you might have inferred, we get a confirmation dialog because there's something odd with the SSL certificate. Unfortunately I have no control over that. Assume it's only temporary and for development purposes, as we usually don't get our hands on the production certs. So the SSL cert is expired and doesn't specify the server, hence the confirmation dialog. It wouldn't be that bad if it weren't for IE, as other browsers don't care: they just ask for confirmation and execute as expected, consistently. Please, pretty please, help me, or I might consider sacrificial killings as an option. And I think I just found a decently priced stainless steel golf club, so I'm right on the edge of gore.

    Side Notes

    This might actually be related to "IE8 window.open SSL Certificate issue". Though it doesn't explain why the dialog would auto-close (that really is beyond me...), it could help to not have the confirmation dialog and not need the dialog at all. For instance, I was thinking that just having a simple URL in that menu, instead of having it entirely managed by GWT code invoking Window.open, would solve the problem. But I don't have control over that menu, and I'm also very curious how this could be fixed otherwise and why the hell it happens in the first place...


  • Fastest way to parse XML files in C#?

    - by LifeH2O
    I have to load many XML files from the internet, but for testing with better speed I downloaded all of them (more than 500 files) in the following format:

        <player-profile>
          <personal-information>
            <id>36</id>
            <fullname>Adam Gilchrist</fullname>
            <majorteam>Australia</majorteam>
            <nickname>Gilchrist</nickname>
            <shortName>A Gilchrist</shortName>
            <dateofbirth>Nov 14, 1971</dateofbirth>
            <battingstyle>Left-hand bat</battingstyle>
            <bowlingstyle>Right-arm offbreak</bowlingstyle>
            <role>Wicket-Keeper</role>
            <teams-played-for>Western Australia, New South Wales, ICC World XI, Deccan Chargers, Australia</teams-played-for>
            <iplteam>Deccan Chargers</iplteam>
          </personal-information>
          <batting-statistics>
            <odi-stats>
              <matchtype>ODI</matchtype>
              <matches>287</matches>
              <innings>279</innings>
              <notouts>11</notouts>
              <runsscored>9619</runsscored>
              <highestscore>172</highestscore>
              <ballstaken>9922</ballstaken>
              <sixes>149</sixes>
              <fours>1000+</fours>
              <ducks>0</ducks>
              <fifties>55</fifties>
              <catches>417</catches>
              <stumpings>55</stumpings>
              <hundreds>16</hundreds>
              <strikerate>96.95</strikerate>
              <average>35.89</average>
            </odi-stats>
            <test-stats> . . . </test-stats>
            <t20-stats> . . . </t20-stats>
            <ipl-stats> . . . </ipl-stats>
          </batting-statistics>
          <bowling-statistics>
            <odi-stats> . . . </odi-stats>
            <test-stats> . . . </test-stats>
            <t20-stats> . . . </t20-stats>
            <ipl-stats> . . . </ipl-stats>
          </bowling-statistics>
        </player-profile>

    I am using:

        XmlNodeList list = _document.SelectNodes("/player-profile/batting-statistics/odi-stats");

    and then loop over this list with foreach:

        foreach (XmlNode stats in list)
        {
            _btMatchType = GetInnerString(stats, "matchtype"); // returns a null string if the node isn't available
            . . .
            _btAvg = Convert.ToDouble(stats["average"].InnerText);
        }

    Even though I am loading all the files offline, parsing is very slow. Is there a faster way to parse them? Or is it a problem with SQL? I am saving all the extracted data from the XML to the database using DataSets and TableAdapters with an insert command.
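    Two directions worth separating: if the database writes happen row by row through TableAdapter insert commands, wrapping them in one transaction (or batching the inserts) usually dwarfs any parsing gain; and on the parsing side, deserializing each file straight into typed objects avoids the repeated SelectNodes/InnerText lookups. A hedged sketch covering just two fields, assuming the element names shown above:

        using System;
        using System.IO;
        using System.Xml.Serialization;

        [XmlRoot("player-profile")]
        public class PlayerProfile
        {
            [XmlElement("personal-information")]
            public PersonalInformation Personal { get; set; }
        }

        public class PersonalInformation
        {
            [XmlElement("id")]
            public int Id { get; set; }

            [XmlElement("fullname")]
            public string FullName { get; set; }
        }

        static class ProfileLoader
        {
            // XmlSerializer is expensive to construct: build it once, reuse it.
            static readonly XmlSerializer Serializer = new XmlSerializer(typeof(PlayerProfile));

            public static PlayerProfile Load(string path)
            {
                using (var stream = File.OpenRead(path))
                    return (PlayerProfile)Serializer.Deserialize(stream);
            }
        }

    Elements not mapped to a property (the stats blocks, until you add classes for them) are simply skipped during deserialization.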


  • Trying to reduce the speed overhead of an almost-but-not-quite-int number class

    - by Fumiyo Eda
    I have implemented a C++ class which behaves very similarly to the standard int type. The difference is that it has an additional concept of "epsilon" which represents some tiny value that is much less than 1, but greater than 0. One way to think of it is as a very wide fixed point number with 32 MSBs (the integer parts), 32 LSBs (the epsilon parts) and a huge sea of zeros in between. The following class works, but introduces a ~2x speed penalty in the overall program. (The program includes code that has nothing to do with this class, so the actual speed penalty of this class is probably much greater than 2x.) I can't paste the code that is using this class, but I can say the following: +, -, +=, <, > and >= are the only heavily used operators. Use of setEpsilon() and getInt() is extremely rare. * is also rare, and does not even need to consider the epsilon values at all. Here is the class: #include <limits> struct int32Uepsilon { typedef int32Uepsilon Self; int32Uepsilon () { _value = 0; _eps = 0; } int32Uepsilon (const int &i) { _value = i; _eps = 0; } void setEpsilon() { _eps = 1; } Self operator+(const Self &rhs) const { Self result = *this; result._value += rhs._value; result._eps += rhs._eps; return result; } Self operator-(const Self &rhs) const { Self result = *this; result._value -= rhs._value; result._eps -= rhs._eps; return result; } Self operator-( ) const { Self result = *this; result._value = -result._value; result._eps = -result._eps; return result; } Self operator*(const Self &rhs) const { return this->getInt() * rhs.getInt(); } // XXX: discards epsilon bool operator<(const Self &rhs) const { return (_value < rhs._value) || (_value == rhs._value && _eps < rhs._eps); } bool operator>(const Self &rhs) const { return (_value > rhs._value) || (_value == rhs._value && _eps > rhs._eps); } bool operator>=(const Self &rhs) const { return (_value >= rhs._value) || (_value == rhs._value && _eps >= rhs._eps); } Self &operator+=(const Self &rhs) { this->_value += rhs._value; this->_eps += rhs._eps; return *this; } Self &operator-=(const Self &rhs) { this->_value -= rhs._value; this->_eps -= rhs._eps; return *this; } int getInt() const { return(_value); } private: int _value; int _eps; }; namespace std { template<> struct numeric_limits<int32Uepsilon> { static const bool is_signed = true; static int max() { return 2147483647; } } }; The code above works, but it is quite slow. Does anyone have any ideas on how to improve performance? There are a few hints/details I can give that might be helpful: 32 bits are definitely insufficient to hold both _value and _eps. In practice, up to 24 ~ 28 bits of _value are used and up to 20 bits of _eps are used. I could not measure a significant performance difference between using int32_t and int64_t, so memory overhead itself is probably not the problem here. Saturating addition/subtraction on _eps would be cool, but isn't really necessary. Note that the signs of _value and _eps are not necessarily the same! This broke my first attempt at speeding this class up. Inline assembly is no problem, so long as it works with GCC on a Core i7 system running Linux!
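    Given the stated bit budgets (~28 bits of _value, ~20 of _eps), both fields fit in one signed 64-bit word, and the class's compare-value-then-eps ordering coincides with plain integer ordering of the packed form, so the hot operators (+, -, <, >, >=) each collapse to a single ALU instruction. A sketch under those assumptions (accumulated |_eps| stays below 2^20, and GCC's arithmetic right shift on signed values, which matches the stated target):

        #include <cstdint>

        // Pack (_value, _eps) into one signed 64-bit word: value in the high
        // bits, eps in the low SHIFT bits. SHIFT = 21 leaves headroom so that
        // |eps| < 2^20 never borrows into the value field under + and -.
        struct int32UepsilonPacked {
            static const int SHIFT = 21;
            int64_t p;

            int32UepsilonPacked(int i = 0) : p(int64_t(i) << SHIFT) {}

            void setEpsilon() { p += 1; }

            // Round to nearest: a negative eps packs just below value << SHIFT,
            // so plain truncation would be off by one.
            int getInt() const { return int((p + (int64_t(1) << (SHIFT - 1))) >> SHIFT); }

            int32UepsilonPacked operator-() const { int32UepsilonPacked r; r.p = -p; return r; }
            int32UepsilonPacked& operator+=(const int32UepsilonPacked& r) { p += r.p; return *this; }
            int32UepsilonPacked& operator-=(const int32UepsilonPacked& r) { p -= r.p; return *this; }
            friend int32UepsilonPacked operator+(int32UepsilonPacked a, const int32UepsilonPacked& b) { a.p += b.p; return a; }
            friend int32UepsilonPacked operator-(int32UepsilonPacked a, const int32UepsilonPacked& b) { a.p -= b.p; return a; }
            friend bool operator<(const int32UepsilonPacked& a, const int32UepsilonPacked& b) { return a.p < b.p; }
            friend bool operator>(const int32UepsilonPacked& a, const int32UepsilonPacked& b) { return a.p > b.p; }
            friend bool operator>=(const int32UepsilonPacked& a, const int32UepsilonPacked& b) { return a.p >= b.p; }
        };

    Since eps can carry the opposite sign from value (the detail that broke the first attempt), correctness rests on packed arithmetic being plain two's-complement addition: signs never need to be handled separately.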

    Read the article

  • Update table rows in a non-sequential way using the output of a php script

    - by moviemaniac
    Good evening everybody, this is my very first question and I hope I've searched Stack's archive as well as I could! I need to monitor several devices by querying their MySQL databases and gathering some information, which is then presented to the operator in an HTML table. I have written a PHP script which loads the devices from a multidimensional array, loops through the array, gathers the data, and creates the table. The table structure is the following:

        <table id="monitoring" class="rt cf">
          <thead class="cf">
            <tr>
              <th>Device</th> <th>Company</th> <th>Data1</th> <th>Data2</th> <th>Data3</th> <th>Data4</th>
              <th>Data5</th> <th>Data6</th> <th>Data7</th> <th>Data8</th> <th>Data9</th>
            </tr>
          </thead>
          <tbody>
            <tr id="Device1">
              <td>Device 1 name</td> <td>xx</td>
              <td><img src="/path_to_images/ajax_loader.gif" width="24px" /></td>
              <td>&nbsp;</td> <td>&nbsp;</td> <td>&nbsp;</td> <td>&nbsp;</td>
              <td>&nbsp;</td> <td>&nbsp;</td> <td>&nbsp;</td> <td>&nbsp;</td>
            </tr>
            <tr id="Device2">
              <td>Device 2 name</td> <td>xx</td>
              <td><img src="/path_to_images/ajax_loader.gif" width="24px" /></td>
              <td>&nbsp;</td> <td>&nbsp;</td> <td>&nbsp;</td> <td>&nbsp;</td>
              <td>&nbsp;</td> <td>&nbsp;</td> <td>&nbsp;</td> <td>&nbsp;</td>
            </tr>
            <tr id="DeviceN">
              <td>Device N name</td> <td>xx</td>
              <td><img src="/path_to_images/ajax_loader.gif" width="24px" /></td>
              <td>&nbsp;</td> <td>&nbsp;</td> <td>&nbsp;</td> <td>&nbsp;</td>
              <td>&nbsp;</td> <td>&nbsp;</td> <td>&nbsp;</td> <td>&nbsp;</td>
            </tr>
          </tbody>
        </table>

    The above table is directly populated when I first load the page; then, with a very simple function, I update this table every minute without reloading the page:

        <script>
        var auto_refresh = setInterval(function () {
            jQuery("#monitoring").load('/overview.php').fadeIn("slow");
            var UpdateTime = new Date();
            var StrUpdateTime;
            StrUpdateTime = ('0' + UpdateTime.getHours()).slice(-2) + ':'
                          + ('0' + UpdateTime.getMinutes()).slice(-2) + ':'
                          + ('0' + UpdateTime.getSeconds()).slice(-2);
            jQuery("#progress").text("Updated on: " + StrUpdateTime);
        }, 60000);
        </script>

    The above code runs in a WordPress environment. It turns out that when there are too many devices and the internet connection is not that fast, the script times out, even if I dramatically increase the timeout period. So it is impossible even to load the page the first time...

    Therefore I would like to change my code so that I can handle each row as a single entity, with its own refresh period. So when the user first loads the page, he sees n rows (one per device) with the ajax loader image... then an update cycle should start independently for each row, so that the user sees the data gathered from each database... then the ajax loader while the script is trying to retrieve data, then the gathered data once it has been collected, or an error message stating that it has not been possible to gather data since hour xx:yy:zz. Row updates should be somewhat independent of each other, as if updating each row were a single closed process, so that rows are not updated sequentially from the first to the last.

    I hope I've sufficiently detailed my problem. Currently I feel like I am at a dead end. Could someone please show me somewhere to start from?
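    One way to start: keep the server-side table generation for the first paint, then give every row its own timer and its own small request, so a slow device only delays its own cells. The endpoint name below (row_status.php, returning the <td> set for one device) is hypothetical; any per-device URL works. A sketch:

        jQuery(document).ready(function () {
            // one independent refresh loop per device row
            jQuery('#monitoring tbody tr').each(function () {
                var row = jQuery(this);
                function refresh() {
                    jQuery.ajax({
                        url: 'row_status.php',                 // hypothetical per-device endpoint
                        data: { device: row.attr('id') },
                        success: function (html) { row.html(html); },
                        error: function () {
                            var t = new Date().toLocaleTimeString();
                            row.children().eq(2).text('no data since ' + t);
                        }
                    });
                }
                refresh();                    // first fill replaces the loader
                setInterval(refresh, 60000);  // each row then refreshes on its own timer
            });
        });

    Staggering the first calls (e.g. wrapping refresh() in a setTimeout with a per-row delay before starting the intervals) avoids hammering the server with all devices at once.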


  • Slide navigation problem multiple div movement

    - by littleMan
    I have almost figured it out but still doesn't quite work the way i want it to. my problem is this part. It scrolls the first element to the left but the second element just appears and doesn't scroll does anyone know what todo here. var i = jQuery('.wikiform .wizard .view').size(); jQuery('.wikiform .navigation input[name^=Next]').click(function () { jQuery('.wikiform .wizard .view').each(function (j) { jQuery(this).animate({ marginLeft: -(current.next().width() * (i - j)) }, 750); /*line above Im having problems with ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ */ current = current.next(); j++; }); }); my complete code down below test it out and see what Im doing wrong (function ($) { $.fn.WikiForm = function (options) { this.Mode = options.mode || 'CancelOk' || 'Ok' || 'Wizard'; var current = jQuery('.wikiform .view:first'); function positionForm() { jQuery('body') .css('overflow-y', 'hidden'); jQuery('<div id="overlay"></div>') .insertBefore('.wikiform') .css('top', jQuery(document).scrollTop()) .animate({ 'opacity': '0.8' }, 'slow'); jQuery('.wikiform') .css('height', jQuery('.wikiform .wizard .view:first').height() + jQuery('.wikiform .navigation').height()) .css('top', window.screen.availHeight / 2 - jQuery('.wikiform').height() / 2) .css('width', jQuery('.wikiform .wizard .view:first').width()) .css('left', -jQuery('.wikiform').width()) .animate({ marginLeft: jQuery(document).width() / 2 + jQuery('.wikiform').width() / 2 }, 750); jQuery('.wikiform .wizard') .css('overflow', 'hidden') .css('height', jQuery('.wikiform .wizard .view:first').height()); } if (this.Mode == "Wizard") { return this.each(function () { positionForm(); //alert(current.next().width()); var i = jQuery('.wikiform .wizard .view').size(); jQuery('.wikiform .navigation input[name^=Next]').click(function () { jQuery('.wikiform .wizard .view').each(function (j) { jQuery(this).animate({ marginLeft: -(current.next().width() * (i - j)) }, 750); current = current.next(); j++; }); }); jQuery('.wikiform .navigation input[name^=Back]').click(function () { }); }); } else if (this.Mode == "CancelOk") { return this.each(function () { }); } else { return this.each(function () { }); } }; })(jQuery); $(document).ready(function () { jQuery(window).bind("load", function () { jQuery(".wikiform").WikiForm({ mode: 'Wizard', speed:750, ease:"expoinout" }); }); }); <style type="text/css"> body { margin:0px; } #overlay { background-color:Black; position:absolute; top:0; left:0; height:100%; width:100%; } .wikiform { background-color:Green; position:absolute; } .wikiform .wizard { clear: both; } .wizard { position: relative; left: 0; top: 0; width: 100%; list-style-type: none; } .wizard .view { float:left; } .view .form { } .navigation { float:right; clear:left } #view1 { background-color:Aqua; width:300px; height:300px; } #view2 { background-color:Fuchsia; width:300px; height:300px; } </style> <title></title></head><body><form action="" method=""><div id="layout"><div id="header"> Header </div> <div id="content" style="height:2000px"> Content </div> <div id="footer"> Footer </div> </div> <div id="formView1" class="wikiform"> <div class="wizard"> <div id="view1" class="view"> <div class="form"> Content 1 </div> </div> <div id="view2" class="view"> <div class="form"> Content 2 </div> </div> </div> <div class="navigation"> <input type="button" name="Back" value=" Back " /> <input type="button" name="Next " class="Next" value=" Next " /> <input type="button" name="Cancel" value="Cancel" /> </div> </div>
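    For the scrolling itself, the usual carousel pattern avoids animating every .view: put the views inside one wide strip, clip it with an overflow: hidden window, and animate only the strip's margin. The .strip wrapper below is hypothetical (an extra div around the views inside .wizard); the rest reuses the post's markup. A sketch:

        // assumes <div class="wizard"><div class="strip"> ...views... </div></div>
        // with .wizard { overflow: hidden } and .strip wide enough for all views
        var views = jQuery('.wikiform .wizard .view');
        var pageW = views.first().outerWidth();
        jQuery('.wikiform .strip').css('width', pageW * views.length);

        var page = 0;
        jQuery('.wikiform .navigation input[name^=Next]').click(function () {
            if (page < views.length - 1) {
                page++;
                jQuery('.wikiform .strip').animate({ marginLeft: -page * pageW }, 750);
            }
        });
        jQuery('.wikiform .navigation input[name^=Back]').click(function () {
            if (page > 0) {
                page--;
                jQuery('.wikiform .strip').animate({ marginLeft: -page * pageW }, 750);
            }
        });

    Because only one element animates, the second and later pages scroll exactly like the first, instead of popping into place.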


  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped CSV into a Mongo collection, but there is a catch: every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like so:

        def read_csv_zip(path, timezones):
            with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
                csv_rows = csv.reader(input)
                header = csv_rows.next()
                check, converters = get_aux_stuff(header)
                for csv_row in csv_rows:
                    if check(csv_row):
                        row = {
                            converter[0]: converter[1](value)
                            for converter, value in zip(converters, csv_row)
                            if allow_field(converter)
                        }
                        ts = row['ts']
                        lng, lat = row['loc']
                        found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [
                            [lng - tz_lookup_radius, lat - tz_lookup_radius],
                            [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
                        if found_tz_entry:
                            tz_name = found_tz_entry['tz']
                            local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                            row['tz'] = tz_name
                        else:
                            local_ts = (ts.astimezone(utc) + timedelta(hours=int(lng / 15))).replace(tzinfo=None)
                        row['local_ts'] = local_ts
                        yield row

        def insert_documents(collection, source, batch_size):
            while True:
                items = list(itertools.islice(source, batch_size))
                if len(items) == 0:
                    break
                try:
                    collection.insert(items)
                except:
                    for item in items:
                        try:
                            collection.insert(item)
                        except Exception as exc:
                            print("Failed to insert record {0} - {1}".format(item['_id'], exc))

        def main(zip_path):
            with Connection() as connection:
                data = connection.mydb.data
                timezones = connection.timezones.data
                insert_documents(data, read_csv_zip(zip_path, timezones), 1000)

    The code proceeds as follows:

    - Every record read from the CSV is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the CSV header), and some values converted (to datetime, to integers, to floats, etc.).
    - For each record read from the CSV, a lookup is made into the timezones collection to map the record location to the respective time zone.
    - If the mapping is successful, that time zone is used to convert the record timestamp (Pacific Time) to the respective local timestamp. If no mapping is found, a rough approximation is calculated.

    The timezones collection is appropriately indexed, of course; calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.

    EDIT

    The timezones collection contains 8176040 records, each containing four values:

        > db.data.findOne()
        { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }

    EDIT2

    OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I created an rtree dat/idx pair of files corresponding to my timezones collection. So, instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, it now works even more slowly! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.
    EDIT3

    Profile output when using collection.find_one:

        >>> p.sort_stats('cumulative').print_stats(10)
        Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

        64549590 function calls (64549180 primitive calls) in 1231.257 seconds
        Ordered by: cumulative time
        List reduced from 730 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall filename:lineno(function)
             1    0.012    0.012 1231.257 1231.257 ImportDataIntoMongo.py:1(<module>)
             1    0.001    0.001 1230.959 1230.959 ImportDataIntoMongo.py:187(main)
             1  853.558  853.558  853.558  853.558 {raw_input}
             1    0.598    0.598  370.510  370.510 ImportDataIntoMongo.py:165(insert_documents)
        343407    9.965    0.000  359.034    0.001 ImportDataIntoMongo.py:137(read_csv_zip)
        343408    2.927    0.000  287.035    0.001 c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
        343408    1.842    0.000  274.803    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
        343408    2.542    0.000  271.212    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
        343408    4.512    0.000  253.673    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
        343408    0.971    0.000  242.078    0.001 c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

        >>> p.sort_stats('cumulative').print_stats(10)
        Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

        41542960 function calls (41542536 primitive calls) in 2889.164 seconds
        Ordered by: cumulative time
        List reduced from 778 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall filename:lineno(function)
             1    0.028    0.028 2889.164 2889.164 ImportDataIntoMongo.py:1(<module>)
             1    0.017    0.017 2888.679 2888.679 ImportDataIntoMongo.py:202(main)
             1 2365.526 2365.526 2365.526 2365.526 {raw_input}
             1    0.766    0.766  502.817  502.817 ImportDataIntoMongo.py:180(insert_documents)
        343407    9.147    0.000  491.433    0.001 ImportDataIntoMongo.py:152(read_csv_zip)
        343406    0.571    0.000  391.394    0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
        343406  379.957    0.001  390.824    0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
        686513   22.616    0.000   38.705    0.000 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
        343406    6.134    0.000   33.326    0.000 ImportDataIntoMongo.py:162(<dictcomp>)
           346    0.396    0.001   30.665    0.089 c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT4

    I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
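    Complementary to parallelizing: consecutive records often repeat locations, so memoizing the lookup on a coarse grid lets points in the same cell share one find_one round trip, at the price of possible misattribution near cell borders. A hedged sketch (the cell size is a knob to tune, and tz_lookup_radius comes from the original code):

        from math import floor
        from bson.son import SON

        _tz_cache = {}

        def lookup_tz(timezones, lng, lat, cell=0.5):
            # points in the same grid cell reuse a single MongoDB query
            key = (floor(lng / cell), floor(lat / cell))
            if key not in _tz_cache:
                _tz_cache[key] = timezones.find_one(SON({'loc': {'$within': {'$box': [
                    [lng - tz_lookup_radius, lat - tz_lookup_radius],
                    [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
            return _tz_cache[key]

    With time-zone boundaries this stays exact only when a cell sits well inside one zone, so it trades a little accuracy for skipping most of the 343,407 queries the profile shows.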


  • Image/"most resembling pixel" search optimization?

    - by SigTerm
    The situation: Let's say I have an image A, say, 512x512 pixels, and image B, 5x5 or 7x7 pixels. Both images are 24bit rgb, and B have 1bit alpha mask (so each pixel is either completely transparent or completely solid). I need to find within image A a pixel which (with its' neighbors) most closely resembles image B, OR the pixel that probably most closely resembles image B. Resemblance is calculated as "distance" which is sum of "distances" between non-transparent B's pixels and A's pixels divided by number of non-transparent B's pixels. Here is a sample SDL code for explanation: struct Pixel{ unsigned char b, g, r, a; }; void fillPixel(int x, int y, SDL_Surface* dst, SDL_Surface* src, int dstMaskX, int dstMaskY){ Pixel& dstPix = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*x + dst->pitch*y)); int xMin = x + texWidth - searchWidth; int xMax = xMin + searchWidth*2; int yMin = y + texHeight - searchHeight; int yMax = yMin + searchHeight*2; int numFilled = 0; for (int curY = yMin; curY < yMax; curY++) for (int curX = xMin; curX < xMax; curX++){ Pixel& cur = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*(curX & texMaskX) + dst->pitch*(curY & texMaskY))); if (cur.a != 0) numFilled++; } if (numFilled == 0){ int srcX = rand() % src->w; int srcY = rand() % src->h; dstPix = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*srcX + src->pitch*srcY)); dstPix.a = 0xFF; return; } int storedSrcX = rand() % src->w; int storedSrcY = rand() % src->h; float lastDifference = 3.40282347e+37F; //unsigned char mask = for (int srcY = searchHeight; srcY < (src->h - searchHeight); srcY++) for (int srcX = searchWidth; srcX < (src->w - searchWidth); srcX++){ float curDifference = 0; int numPixels = 0; for (int tmpY = -searchHeight; tmpY < searchHeight; tmpY++) for(int tmpX = -searchWidth; tmpX < searchWidth; tmpX++){ Pixel& tmpSrc = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*(srcX+tmpX) + src->pitch*(srcY+tmpY))); Pixel& tmpDst = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*((x + dst->w + tmpX) & dstMaskX) + dst->pitch*((y + dst->h + tmpY) & dstMaskY))); if (tmpDst.a){ numPixels++; int dr = tmpSrc.r - tmpDst.r; int dg = tmpSrc.g - tmpDst.g; int db = tmpSrc.g - tmpDst.g; curDifference += dr*dr + dg*dg + db*db; } } if (numPixels) curDifference /= (float)numPixels; if (curDifference < lastDifference){ lastDifference = curDifference; storedSrcX = srcX; storedSrcY = srcY; } } dstPix = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*storedSrcX + src->pitch*storedSrcY)); dstPix.a = 0xFF; } This thing is supposed to be used for texture generation. Now, the question: The easiest way to do this is brute force search (which is used in example routine). But it is slow - even using GPU acceleration and dual core cpu won't make it much faster. It looks like I can't use modified binary search because of B's mask. So, how can I find desired pixel faster? Additional Info: It is allowed to use 2 cores, GPU acceleration, CUDA, and 1.5..2 gigabytes of RAM for the task. I would prefer to avoid some kind of lengthy preprocessing phase that will take 30 minutes to finish. Ideas?
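    One cheap, exact speedup before any preprocessing: numPixels depends only on the destination neighbourhood, so it is identical for every candidate (srcX, srcY), which means comparing raw sums is equivalent to comparing the averaged distances. A candidate can therefore be abandoned the moment its partial sum exceeds the best sum so far. A generic sketch of the pattern (dist would wrap the masked squared distance at one window offset):

        #include <limits>

        // Finds the candidate window with the smallest summed distance,
        // abandoning each candidate as soon as its running sum can no longer win.
        template <class DistFn>
        float bestCandidate(int numCandidates, int windowSize, DistFn dist, int &bestIdx)
        {
            float best = std::numeric_limits<float>::max();
            bestIdx = -1;
            for (int c = 0; c < numCandidates; ++c) {
                float sum = 0.0f;
                for (int i = 0; i < windowSize; ++i) {
                    sum += dist(c, i);        // squared distance at offset i
                    if (sum >= best) break;   // cannot beat the best: bail out early
                }
                if (sum < best) { best = sum; bestIdx = c; }
            }
            return best;
        }

    With a good early match, most candidates die within a few pixels, which routinely cuts brute force by an order of magnitude; combining it with a coarse-to-fine scan (test every 4th candidate, then refine around the best) compounds the saving, though the coarse pass is then heuristic rather than exact.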

    Read the article

  • Javascript: selfmade methods not working correctly

    - by hdr
    Hi everyone, I tried to figure this out for some days now, I tried to use my own object to sort of replace the global object to reduce problems with other scripts (userscripts, chrome extensions... that kind of stuff). However I can't get things to work for some reason. I tried some debugging with JSLint, the developer tools included in Google Chrome, Firebug and the integrated schript debugger in IE8 but there is no error that explains why it doesn't work at all in any browser I tried. I tried IE 8, Google Chrome 10.0.612.3 dev, Firefox 3.6.13, Safari 5.0.3 and Opera 11. So... here is the code: HTML: <!DOCTYPE HTML> <html manifest="c.manifest"> <head> <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"> <meta charset="utf-8"> <!--[if IE]> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/chrome-frame/1/CFInstall.min.js"></script> <script src="https://ie7-js.googlecode.com/svn/version/2.1(beta4)/IE9.js">IE7_PNG_SUFFIX=".png";</script> <![endif]--> <!--[if lt IE 9]> <script src="js/lib/excanvas.js"></script> <script src="https://html5shiv.googlecode.com/svn/trunk/html5.js"></script> <![endif]--> <script src="js/data.js"></script> </head> <body> <div id="controls"> <button onclick="MYOBJECTis.next()">Next</button> </div> <div id="textfield"></div> <canvas id="game"></canvas> </body> </html> Javascript: var that = window, it = document, k = Math.floor; var MYOBJECTis = { aresizer: function(){ // This method looks like it doesn't work. // It should automatically resize all the div elements and the body. // JSLint: no error (execpt for "'window' is not defined." which is normal since // JSLint does nor recognize references to the global object like "window" or "self" // even if you assume a browser) // Firebug: no error // Chrome dev tools: no error // IE8: that.documentElement.clientWidth is null or not an object "use strict"; var a = that.innerWidth || that.documentElement.clientWidth, d = that.innerHeight || that.documentElement.clientHeight; (function() { for(var b = 0, c = it.getElementsByTagName("div");b < c.length;b++) { c.style.width = k(c.offsetWidth) / 100 * k(a); c.style.height = k(c.offsetHight) / 100 * k(d); } }()); (function() { var b = it.getElementsByTagName("body"); b.width = a; b.height = d; }()); }, next: function(){ // This method looks like it doesn't work. // It should change the text inside a div element // JSLint: no error (execpt for "'window' is not defined.") // Firebug: no error // Chrome dev tools: no error // IE8: no error (execpt for being painfully slow) "use strict"; var b = it.getElementById("textfield"), a = [], c; switch(c !== typeof Number){ case true: a[1] = ["HI"]; c = 0; break; case false: return Error; default: b.innerHtml = a[c]; c+=1; } } }; // auto events (function(){ "use strict"; that.onresize = MYOBJECTis.aresizer(); }()); If anyone can help me out with this I would very much appreciate it. EDIT: To answer the question what's not working I can just say that no method I showed here is working at all and I don't know the cause of the problem. I also tried to clean up some of the code that has most likely nothing to do with it. Additional information is in the comments inside the code.

    Read the article

  • Pagination links do not work after first page

    - by TheStack
    Hello, I am trying to fix this pagination script. It seems when I click on the pagination links [1][2][3][4]or[5] , it doesn't work. It just shows the first page and when clicking on the next numbers nothing happens. I hoping someone can see something in the script that I can not see. The main page looks like this (pagination.php): <?php include_once('generate_pagination.php'); ?> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.1/jquery.min.js"></script> <script type="text/javascript" src="jquery_pagination.js"></script> <div id="loading" ></div> <div id="content" data-page="1"></div> <ul id="pagination"> <?php generate_pagination() ?> </ul> <br /> <br /> <a href="#" class="category" id="marketing">Marketing</a> <a href="#" class="category" id="automotive">Automotive</a> <a href="#" class="category" id="sports">Sports</a> Then, generate_pagination.php: <?php function generate_pagination($sql) { include_once('config.php'); $per_page = 3; //Calculating no of pages $result = mysql_query($sql); $count = mysql_fetch_row($result); $pages = ceil($count[0]/$per_page); //Pagination Numbers for($i=1; $i<=$pages; $i++) { echo '<li class="page_numbers" id="'.$i.'">'.$i.'</li>'; } } $ids=$_GET['ids']; generate_pagination("SELECT COUNT(*) FROM explore WHERE category='$ids'"); ?> Here is the jquery file (jquery_pagination.js): $(document).ready(function(){ //Display Loading Image function Display_Load() { $("#loading").fadeIn(900,0); $("#loading").html("<img src='bigLoader.gif' />"); } //Hide Loading Image function Hide_Load() { $("#loading").fadeOut('slow'); }; //Default Starting Page Results $("#pagination li:first").css({'color' : '#FF0084'}).css({'border' : 'none'}); Display_Load(); $("#content").load("pagination_data.php?page=1", Hide_Load()); //Pagination Click $("#pagination li").click(function(){ Display_Load(); //CSS Styles $("#pagination li") .css({'border' : 'solid #dddddd 1px'}) .css({'color' : '#0063DC'}); $(this) .css({'color' : '#FF0084'}) .css({'border' : 'none'}); //Loading Data var pageNum = this.id; $("#content").load("pagination_data.php?page=" + pageNum, function(){ Hide_Load(); $(this).attr('data-page', pageNum); }); }); // Editing below. // Sort content Marketing $("a.category").click(function() { Display_Load(); var this_id = $(this).attr('id'); $.get("pagination.php", { category: this.id }, function(data){ //Load your results into the page var pageNum = $('#content').attr('data-page'); $("#pagination").load('generate_pagination.php?category=' + pageNum +'&ids='+ this_id ); $("#content").load("filter_marketing.php?page=" + pageNum +'&id='+ this_id, Hide_Load()); }); }); }); Lastly, filter_marketing.php (when a user clicks the filter link buttons): <?php include('config.php'); $per_page = 3; if(count($_GET)>0) { if($_GET['page']!=''){ $page=$_GET['page']; } if($_GET['id']!=''){ $id=$_GET['id']; } } $page= ($_GET['page']!='') ? $_GET['page']: false; $id= ($_GET['id']!='') ? $_GET['id']: false; $start = ($page-1)*$per_page; if($page && $id){ $sql = "SELECT * FROM explore WHERE category='$id' ORDER BY category LIMIT $start,$per_page"; } else { die('Error: missing parameters. Id= '.$id.' 
and page= '.$page); } $result = mysql_query($sql); ?> <table width="800px"> <?php while($row = mysql_fetch_array($result)) { $msg_id=$row['id']; $message=$row['site_description']; $site_price=$row['site_price']; ?> <tr> <td><?php echo $msg_id; ?></td> <td><?php echo $message; ?></td> <td><?php echo $site_price; ?></td> </tr> <?php } ?> </table> So, if anyone sees where the problem is occurring and can help rid of the problem, that would be great, Thank you.

    Read the article

  • How should I store Dynamically Changing Data into Server Cache?

    - by Scott
    Hey all, EDIT: Purpose of this Website: Its called Utopiapimp.com. It is a third party utility for a game called utopia-game.com. The site currently has over 12k users to it an I run the site. The game is fully text based and will always remain that. Users copy and paste full pages of text from the game and paste the copied information into my site. I run a series of regular expressions against the pasted data and break it down. I then insert anywhere from 5 values to over 30 values into the DB based on that one paste. I then take those values and run queries against them to display the information back in a VERY simple and easy to understand way. The game is team based and each team has 25 users to it. So each team is a group and each row is ONE users information. The users can update all 25 rows or just one row at a time. I require storing things into cache because the site is very slow doing over 1,000 queries almost every minute. So here is the deal. Imagine I have an excel spreadsheet with 100 columns and 5000 rows. Each row has two unique identifiers. One for the row it self and one to group together 25 rows a piece. There are about 10 columns in the row that will almost never change and the other 90 columns will always be changing. We can say some will even change in a matter of seconds depending on how fast the row is updated. Rows can also be added and deleted from the group, but not from the database. The rows are taken from about 4 queries from the database to show the most recent and updated data from the database. So every time something in the database is updated, I would also like the row to be updated. If a row or a group has not been updated in 12 or so hours, it will be taken out of Cache. Once the user calls the group again via the DB queries. They will be placed into Cache. The above is what I would like. That is the wish. In Reality, I still have all the rows, but the way I store them in Cache is currently broken. I store each row in a class and the class is stored in the Server Cache via a HUGE list. When I go to update/Delete/Insert items in the list or rows, most the time it works, but sometimes it throws errors because the cache has changed. I want to be able to lock down the cache like the database throws a lock on a row more or less. I have DateTime stamps to remove things after 12 hours, but this almost always breaks because other users are updating the same 25 rows in the group or just the cache has changed. This is an example of how I add items to Cache, this one shows I only pull the 10 or so columns that very rarely change. This example all removes rows not updated after 12 hours: DateTime dt = DateTime.UtcNow; if (HttpContext.Current.Cache["GetRows"] != null) { List<RowIdentifiers> pis = (List<RowIdentifiers>)HttpContext.Current.Cache["GetRows"]; var ch = (from xx in pis where xx.groupID == groupID where xx.rowID== rowID select xx).ToList(); if (ch.Count() == 0) { var ck = GetInGroupNotCached(rowID, groupID, dt); //Pulling the group from the DB for (int i = 0; i < ck.Count(); i++) pis.Add(ck[i]); pis.RemoveAll((x) => x.updateDateTime < dt.AddHours(-12)); HttpContext.Current.Cache["GetRows"] = pis; return ck; } else return ch; } else { var pis = GetInGroupNotCached(rowID, groupID, dt);//Pulling the group from the DB HttpContext.Current.Cache["GetRows"] = pis; return pis; } On the last point, I remove items from the cache, so the cache doesn't actually get huge. To re-post the question, Whats a better way of doing this? 
Maybe and how to put locks on the cache? Can I get better than this? I just want it to stop breaking when removing or adding rows.

    Read the article

  • $.post is not working

    - by BEBO
    i am trying to post data to Mysql using jquery $.post and php page. my code is not running and nothing is added to the mysql table. I am not sure if the path i am creating is wrong but any help would be appreciated. Jquery location: f_js/tasks/TaskTest.js <script type="text/javascript"> $(document).ready(function(){ $("#AddTask").click(function(){ var acct = $('#acct').val(); var quicktask = $('#quicktask').val(); var user = $('#user').val(); $.post('addTask.php',{acct:acct,quicktask:quicktask,user:user}, function(data){ $('#result').fadeIn('slow').html(data); }); }); }); </script> addTask.php (runs the jqeury code) <?php include 'dbconnect.php'; include 'sessions.php'; $acct = $_POST['acct']; $task = $_POST['quicktask']; $taskstatus = 'Active'; //get task Creator $user = $_POST['user']; //query task creator from users table $allusers = mysql_query("SELECT * FROM users WHERE username = '$user'"); while ($rows = mysql_fetch_array($allusers)) { //get first and last name for task creator $taskOwner = $rows['user_firstname']; $taskOwnerLast = $rows['user_lastname']; $taskOwnerFull = $taskOwner." ".$taskOwnerLast; mysql_query("INSERT INTO tasks (taskresource, tasktitle, taskdetail, taskstatus, taskowner, taskOwnerFullName) VALUES ('$acct', '$task', '$task', '$taskstatus', '$user', '$taskOwnerFull' )"); echo "inserted"; } ?> Accountview.php finally the front page <html> <div class="input-cont "> <input type="text" class="form-control col-lg-12" placeholder="Add a quick Task..." name ="quicktask" id="quicktask"> </div> <div class="form-group"> <div class="pull-right chat-features"> <a href="javascript:;"> <i class="icon-camera"></i> </a> <a href="javascript:;"> <i class="icon-link"></i> </a> <input type="button" class="btn btn-danger" name="AddTask" id="AddTask" value="Add" /> <input type="hidden" name="acct" id="acct" value="<?php echo $_REQUEST['acctname']?>"/> <input type="hidden" name="user" id="user" value="<?php $username = $_SESSION['username']; echo $username?>"/> <div id="result">result</div> </div> </div> <!-- js placed at the end of the document so the pages load faster --> <script src="js/jquery.js"></script> <script src="f_js/tasks/TaskTest.js"></script> <!--common script for all pages--> <script src="js/common-scripts.js"></script> <script type="text/javascript" src="assets/gritter/js/jquery.gritter.js"></script> <script src="js/gritter.js" type="text/javascript"></script> <script> </html> Firebug reponse: Response Headers Connection Keep-Alive Content-Length 0 Content-Type text/html Date Fri, 08 Nov 2013 21:48:50 GMT Keep-Alive timeout=5, max=100 Server Apache/2.4.4 (Win32) OpenSSL/0.9.8y PHP/5.4.16 X-Powered-By PHP/5.4.16 refresh 5; URL=index.php Request Headers Accept */* Accept-Encoding gzip, deflate Accept-Language en-US,en;q=0.5 Content-Length 13 Content-Type application/x-www-form-urlencoded; charset=UTF-8 Cookie PHPSESSID=6gufl3guiiddreg8cdlc0htnc6 Host localhost Referer http://localhost/betahtml/AccountView.php?acctname=client%201 User-Agent Mozilla/5.0 (Windows NT 6.3; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0 X-Requested-With XMLHttpRequest

    Read the article

  • How to optimize method's that track metrics in 3rd party application?

    - by WulfgarPro
    Hi, I have two listboxes that keep updated lists of various objects roaming in a 3rd party application. When a user selects an object from a listbox, an event handler is fired, calling a method that gathers various metrics belonging to that object from the 3rd party application for displaying in a set of textboxes. This is slow! I am not sure how to optimize this functionality to facilitate greater speeds.. private void lsbUavs_SelectedIndexChanged(object sender, EventArgs e) { if (_ourSelectedUavFromListBox != null) { UtilStkScenario.ChangeUavColourOnScenario(_ourSelectedUavFromListBox.UavName, false); } if (lsbUavs.SelectedItem != null) { Uav ourUav = UtilStkScenario.FindUavFromScenarioBasedOnName(lsbUavs.SelectedItem.ToString()); hsbThrottle.Value = (int)ourUav.ThrottleValue; UtilStkScenario.ChangeUavColourOnScenario(ourUav.UavName, true); _ourSelectedUavFromListBox = ourUav; // we don't want this thread spawning many times if (tUpdateMetricInformationInTabControl != null) { if (tUpdateMetricInformationInTabControl.IsAlive) { tUpdateMetricInformationInTabControl.Abort(); } } tUpdateMetricInformationInTabControl = new Thread(UpdateMetricInformationInTabControl); tUpdateMetricInformationInTabControl.Name = "UpdateMetricInformationInTabControlUavs"; tUpdateMetricInformationInTabControl.IsBackground = true; tUpdateMetricInformationInTabControl.Start(lsbUavs); } } delegate string GetNameOfListItem(ListBox listboxId); delegate void SetTextBoxValue(TextBox textBoxId, string valueToSet); private void UpdateMetricInformationInTabControl(object listBoxToUpdate) { ListBox theListBoxToUpdate = (ListBox)listBoxToUpdate; GetNameOfListItem dGetNameOfListItem = new GetNameOfListItem(GetNameOfSelectedListItem); SetTextBoxValue dSetTextBoxValue = new SetTextBoxValue(SetNamedTextBoxValue); try { foreach (KeyValuePair<string, IAgStkObject> entity in UtilStkScenario._totalListOfAllStkObjects) { if (entity.Key.ToString() == (string)theListBoxToUpdate.Invoke(dGetNameOfListItem, theListBoxToUpdate)) { while ((string)theListBoxToUpdate.Invoke(dGetNameOfListItem, theListBoxToUpdate) == entity.Key.ToString()) { if (theListBoxToUpdate.Name == "lsbEntities") { double[] latLonAndAltOfEntity = UtilStkScenario.FindMetricsOfStkObjectOnScenario(UtilStkScenario._stkObjectRoot.CurrentTime, entity.Value); SetEntityOrUavMetricValuesInTextBoxes(dSetTextBoxValue, "Entity", entity.Key, "", "", "", "", latLonAndAltOfEntity[4].ToString(), latLonAndAltOfEntity[3].ToString()); } else if (theListBoxToUpdate.Name == "lsbUavs") { double[] latLonAndAltOfEntity = UtilStkScenario.FindMetricsOfStkObjectOnScenario(UtilStkScenario._stkObjectRoot.CurrentTime, entity.Value); SetEntityOrUavMetricValuesInTextBoxes(dSetTextBoxValue, "UAV", entity.Key, entity.Value.ClassName.ToString(), latLonAndAltOfEntity[0].ToString(), latLonAndAltOfEntity[1].ToString(), latLonAndAltOfEntity[2].ToString(), latLonAndAltOfEntity[4].ToString(), latLonAndAltOfEntity[3].ToString()); } } } } } catch (Exception e) { // selected entity was deleted(end-of-life) in STK - remove LLA information from GUI if (theListBoxToUpdate.Name == "lsbEntities") { SetEntityOrUavMetricValuesInTextBoxes(dSetTextBoxValue, "Entity", "", "", "", "", "", "", ""); UtilLog.Log(e.Message.ToString(), e.GetType().ToString(), "UpdateMetricInformationInTabControl", UtilLog.logWriter); } else if (theListBoxToUpdate.Name == "lsbUavs") { SetEntityOrUavMetricValuesInTextBoxes(dSetTextBoxValue, "UAV", "", "", "", "", "", "", ""); UtilLog.Log(e.Message.ToString(), e.GetType().ToString(), 
"UpdateMetricInformationInTabControl", UtilLog.logWriter); } } } internal static double[] FindMetricsOfStkObjectOnScenario(object timeToFindMetricState, IAgStkObject stkObject) { double[] stkObjectMetrics = null; try { stkObjectMetrics = new double[5]; object latOfStkObject, lonOfStkObject; double altOfStkObject, headingOfStkObject, velocityOfStkObject; IAgProvideSpatialInfo spatial = stkObject as IAgProvideSpatialInfo; IAgVeSpatialInfo spatialInfo = spatial.GetSpatialInfo(false); IAgSpatialState spatialState = spatialInfo.GetState(timeToFindMetricState); spatialState.FixedPosition.QueryPlanetodetic(out latOfStkObject, out lonOfStkObject, out altOfStkObject); double[] stkObjectheadingAndVelocity = FindHeadingAndVelocityOfStkObjectFromScenario(stkObject.InstanceName); headingOfStkObject = stkObjectheadingAndVelocity[0]; velocityOfStkObject = stkObjectheadingAndVelocity[1]; stkObjectMetrics[0] = (double)latOfStkObject; stkObjectMetrics[1] = (double)lonOfStkObject; stkObjectMetrics[2] = altOfStkObject; stkObjectMetrics[3] = headingOfStkObject; stkObjectMetrics[4] = velocityOfStkObject; } catch (Exception e) { UtilLog.Log(e.Message.ToString(), e.GetType().ToString(), "FindMetricsOfStkObjectOnScenario", UtilLog.logWriter); } return stkObjectMetrics; } private static double[] FindHeadingAndVelocityOfStkObjectFromScenario(string stkObjectName) { double[] stkObjectHeadingAndVelocity = new double[2]; IAgStkObject stkUavObject = null; try { string typeOfObject = CheckIfStkObjectIsEntityOrUav(stkObjectName); if (typeOfObject == "UAV") { stkUavObject = _stkObjectRootToIsolateForUavs.CurrentScenario.Children[stkObjectName]; IAgDataProviderGroup group = (IAgDataProviderGroup)stkUavObject.DataProviders["Heading"]; IAgDataProvider provider = (IAgDataProvider)group.Group["Fixed"]; IAgDrResult result = ((IAgDataPrvTimeVar)provider).ExecSingle(_stkObjectRootToIsolateForUavs.CurrentTime); stkObjectHeadingAndVelocity[0] = (double)result.DataSets[1].GetValues().GetValue(0); stkObjectHeadingAndVelocity[1] = (double)result.DataSets[4].GetValues().GetValue(0); } else if (typeOfObject == "Entity") { IAgStkObject stkEntityObject = _stkObjectRootToIsolateForEntities.CurrentScenario.Children[stkObjectName]; IAgDataProviderGroup group = (IAgDataProviderGroup)stkEntityObject.DataProviders["Heading"]; IAgDataProvider provider = (IAgDataProvider)group.Group["Fixed"]; IAgDrResult result = ((IAgDataPrvTimeVar)provider).ExecSingle(_stkObjectRootToIsolateForEntities.CurrentTime); stkObjectHeadingAndVelocity[0] = (double)result.DataSets[1].GetValues().GetValue(0); stkObjectHeadingAndVelocity[1] = (double)result.DataSets[4].GetValues().GetValue(0); } } catch (Exception e) { UtilLog.Log(e.Message.ToString(), e.GetType().ToString(), "FindHeadingAndVelocityOfStkObjectFromScenario", UtilLog.logWriter); } return stkObjectHeadingAndVelocity; } Any help would be really appreciated. From my knowledge, I cant really see any issues with the C#. Maybe it has to do with the methodology I'm using.. maybe some kind of caching mechanism is required - this is not natively available. WulfgarPro

    Read the article

  • mysql: Bind on unix socket: Permission denied

    - by Alex
    Can't start mysql with: sudo /usr/bin/mysqld_safe --datadir=/srv/mysql/myDB --log-error=/srv/mysql/logs/mysqld-myDB.log --pid-file=/srv/mysql/pids/mysqld-myDB.pid --user=mysql --socket=/srv/mysql/sockets/mysql-myDB.sock --port=3700 120222 13:40:48 mysqld_safe Starting mysqld daemon with databases from /srv/mysql/myDB 120222 13:40:54 mysqld_safe mysqld from pid file /srv/mysql/pids/mysqld-myDB.pid ended /srv/mysql/logs/mysqld-myDB.log: 120222 13:43:53 mysqld_safe Starting mysqld daemon with databases from /srv/mysql/myDB 120222 13:43:53 [Note] Plugin 'FEDERATED' is disabled. /usr/sbin/mysqld: Table 'plugin' is read only 120222 13:43:53 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 120222 13:43:53 InnoDB: Completed initialization of buffer pool 120222 13:43:53 InnoDB: Started; log sequence number 32 4232720908 120222 13:43:53 [ERROR] Can't start server : Bind on unix socket: Permission denied 120222 13:43:53 [ERROR] Do you already have another mysqld server running on socket: /srv/mysql/sockets/mysql-myDB.sock ? 120222 13:43:53 [ERROR] Aborting 120222 13:43:53 InnoDB: Starting shutdown... One instance mysqld is running: $ ps aux | grep mysql mysql 1093 0.0 0.2 169972 18700 ? Ssl 11:50 0:02 /usr/sbin/mysqld $ Port 3700 is available: $ netstat -a | grep 3700 $ Directory with sockets is empty: $ ls /srv/mysql/sockets/ $ There are all permissions: $ ls -l /srv/mysql/ total 20 drwxrwxrwx 2 mysql mysql 4096 2012-02-22 13:28 logs drwxrwxrwx 13 mysql mysql 4096 2012-02-22 13:44 myDB drwxrwxrwx 2 mysql mysql 4096 2012-02-22 12:55 pids drwxrwxrwx 2 mysql mysql 4096 2012-02-22 12:55 sockets drwxrwxrwx 2 mysql mysql 4096 2012-02-22 13:25 version Apparmor config: $cat /etc/apparmor.d/usr.sbin.mysqld # vim:syntax=apparmor # Last Modified: Tue Jun 19 17:37:30 2007 #include <tunables/global> /usr/sbin/mysqld flags=(complain) { #include <abstractions/base> #include <abstractions/nameservice> #include <abstractions/user-tmp> #include <abstractions/mysql> #include <abstractions/winbind> capability dac_override, capability sys_resource, capability setgid, capability setuid, network tcp, /etc/hosts.allow r, /etc/hosts.deny r, /etc/mysql/*.pem r, /etc/mysql/conf.d/ r, /etc/mysql/conf.d/* r, /etc/mysql/*.cnf r, /usr/lib/mysql/plugin/ r, /usr/lib/mysql/plugin/*.so* mr, /usr/sbin/mysqld mr, /usr/share/mysql/** r, /var/log/mysql.log rw, /var/log/mysql.err rw, /var/lib/mysql/ r, /var/lib/mysql/** rwk, /var/log/mysql/ r, /var/log/mysql/* rw, /{,var/}run/mysqld/mysqld.pid w, /{,var/}run/mysqld/mysqld.sock w, /srv/mysql/ r, /srv/mysql/** rwk, /sys/devices/system/cpu/ r, # Site-specific additions and overrides. See local/README for details. #include <local/usr.sbin.mysqld> } Any suggestions? UPD1: $ touch /srv/mysql/sockets/mysql-myDB.sock $ sudo chown mysql:mysql /srv/mysql/sockets/mysql-myDB.sock $ ls -l /srv/mysql/sockets/mysql-myDB.sock -rw-rw-r-- 1 mysql mysql 0 2012-02-22 14:29 /srv/mysql/sockets/mysql-myDB.sock $ sudo /usr/bin/mysqld_safe --datadir=/srv/mysql/myDB --log-error=/srv/mysql/logs/mysqld-myDB.log --pid-file=/srv/mysql/pids/mysqld-myDB.pid --user=mysql --socket=/srv/mysql/sockets/mysql-myDB.sock --port=3700 120222 14:30:18 mysqld_safe Can't log to error log and syslog at the same time. Remove all --log-error configuration options for --syslog to take effect. 120222 14:30:18 mysqld_safe Logging to '/srv/mysql/logs/mysqld-myDB.log'. 
120222 14:30:18 mysqld_safe Starting mysqld daemon with databases from /srv/mysqlmyDB 120222 14:30:24 mysqld_safe mysqld from pid file /srv/mysql/pids/mysqld-myDB.pid ended $ ls -l /srv/mysql/sockets/mysql-myDB.sock ls: cannot access /srv/mysql/sockets/mysql-myDB.sock: No such file or directory $ UPD2: $ sudo netstat -lnp | grep mysql tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 1093/mysqld unix 2 [ ACC ] STREAM LISTENING 5912 1093/mysqld /var/run/mysqld/mysqld.sock $ sudo lsof | grep /srv/mysql/sockets/mysql-myDB.sock lsof: WARNING: can't stat() fuse.gvfs-fuse-daemon file system /home/sears/.gvfs Output information may be incomplete. UPD3: $ cat /etc/mysql/my.cnf # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. #bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 16M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! 
# # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/

    Read the article

  • Network Logon Issues with Group Policy and Network

    - by bobloki
    I am gravely in need of your help and assistance. We have a problem with our logon and startup to our Windows 7 Enterprise system. We have more than 3000 Windows Desktops situated in roughly 20+ buildings around campus. Almost every computer on campus has the problem that I will be describing. I have spent over one month peering over etl files from Windows Performance Analyzer (A great product) and hundreds of thousands of event logs. I come to you today humbled that I could not figure this out. The problem as simply put our logon times are extremely long. An average first time logon is roughly 2-10 minutes depending on the software installed. All computers are Windows 7, the oldest computers being 5 years old. Startup times on various computers range from good (1-2 minutes) to very bad (5-60). Our second time logons range from 30 seconds to 4 minutes. We have a gigabit connection between each computer on the network. We have 5 domain controllers which also double as our DNS servers. Initial testing led us to believe that this was a software problem. So I spent a few days testing machines only to find inconsistent results from the etl files from xperfview. Each subset of computers on campus had a different subset of software issues, none seeming to interfere with logon just startup. So I started looking at our group policy and located some very interesting event ID’s. Group Policy 1129: The processing of Group Policy failed because of lack of network connectivity to a domain controller. Group Policy 1055: The processing of Group Policy failed. Windows could not resolve the computer name. This could be caused by one of more of the following: a) Name Resolution failure on the current domain controller. b) Active Directory Replication Latency (an account created on another domain controller has not replicated to the current domain controller). NETLOGON 5719 : This computer was not able to set up a secure session with a domain controller in domain OURDOMAIN due to the following: There are currently no logon servers available to service the logon request. This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator. E1kexpress 27: Intel®82567LM-3 Gigabit Network Connection – Network link is disconnected. NetBT 4300 – The driver could not be created. WMI 10 - Event filter with query "SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA "Win32_Processor" AND TargetInstance.LoadPercentage 99" could not be reactivated in namespace "//./root/CIMV2" because of error 0x80041003. Events cannot be delivered through this filter until the problem is corrected. More or less with timestamps it becomes apparent that the network maybe the issue. 1:25:57 - Group Policy is trying to discover the domain controller information 1:25:57 - The network link has been disconnected 1:25:58 - The processing of Group Policy failed because of lack of network connectivity to a domain controller. This may be a transient condition. A success message would be generated once the machine gets connected to the domain controller and Group Policy has successfully processed. If you do not see a success message for several hours, then contact your administrator. 1:25:58 - Making LDAP calls to connect and bind to active directory. DC1.ourdomain.edu 1:25:58 - Call failed after 0 milliseconds. 1:25:58 - Forcing rediscovery of domain controller details. 
1:25:58 - Group policy failed to discover the domain controller in 1030 milliseconds 1:25:58 - Periodic policy processing failed for computer OURDOMAIN\%name%$ in 1 seconds. 1:25:59 - A network link has been established at 1Gbps at full duplex 1:26:00 - The network link has been disconnected 1:26:02 - NtpClient was unable to set a domain peer to use as a time source because of discovery error. NtpClient will try again in 3473457 minutes and DOUBLE THE REATTEMPT INTERVAL thereafter. 1:26:05 - A network link has been established at 1Gbps at full duplex 1:26:08 - Name resolution for the name %Name% timed out after none of the configured DNS servers responded. 1:26:10 – The TCP/IP NetBIOS Helper service entered the running state. 1:26:11 - The time provider NtpClient is currently receiving valid time data at dc4.ourdomain.edu 1:26:14 – User Logon Notification for Customer Experience Improvement Program 1:26:15 - Group Policy received the notification Logon from Winlogon for session 1. 1:26:15 - Making LDAP calls to connect and bind to Active Directory. dc4.ourdomain.edu 1:26:18 - The LDAP call to connect and bind to Active Directory completed. dc4. ourdomain.edu. The call completed in 2309 milliseconds. 1:26:18 - Group Policy successfully discovered the Domain Controller in 2918 milliseconds. 1:26:18 - Computer details: Computer role : 2 Network name : (Blank) 1:26:18 - The LDAP call to connect and bind to Active Directory completed. dc4.ourdomain.edu. The call completed in 2309 milliseconds. 1:26:18 - Group Policy successfully discovered the Domain Controller in 2918 milliseconds. 1:26:19 - The WinHTTP Web Proxy Auto-Discovery Service service entered the running state. 1:26:46 - The Network Connections service entered the running state. 1:27:10 – Retrieved account information 1:27:10 – The system call to get account information completed. 1:27:10 - Starting policy processing due to network state change for computer OURDOMAIN\%name%$ 1:27:10 – Network state change detected 1:27:10 - Making system call to get account information. 1:27:11 - Making LDAP calls to connect and bind to Active Directory. dc4.ourdomain.edu 1:27:13 - Computer details: Computer role : 2 Network name : ourdomain.edu (Now not blank) 1:27:13 - Group Policy successfully discovered the Domain Controller in 2886 milliseconds. 1:27:13 - The LDAP call to connect and bind to Active Directory completed. dc4.ourdomain.edu The call completed in 2371 milliseconds. 1:27:15 - Estimated network bandwidth on one of the connections: 0 kbps. 1:27:15 - Estimated network bandwidth on one of the connections: 8545 kbps. 1:27:15 - A fast link was detected. The Estimated bandwidth is 8545 kbps. The slow link threshold is 500 kbps. 1:27:17 – Powershell - Engine state is changed from Available to Stopped. 1:27:20 - Completed Group Policy Local Users and Groups Extension Processing in 4539 milliseconds. 1:27:25 - Completed Group Policy Scheduled Tasks Extension Processing in 5210 milliseconds. 1:27:27 - Completed Group Policy Registry Extension Processing in 1529 milliseconds. 1:27:27 - Completed policy processing due to network state change for computer OURDOMAIN\%name%$ in 16 seconds. 1:27:27 – The Group Policy settings for the computer were processed successfully. There were no changes detected since the last successful processing of Group Policy. Any help would be appreciated. Please ask for any relevant information and it will be provided as soon as possible.

    Read the article

  • Network Logon Issues with Group Policy and Network

    - by bobloki
    I am gravely in need of your help and assistance. We have a problem with our logon and startup to our Windows 7 Enterprise system. We have more than 3000 Windows Desktops situated in roughly 20+ buildings around campus. Almost every computer on campus has the problem that I will be describing. I have spent over one month peering over etl files from Windows Performance Analyzer (A great product) and hundreds of thousands of event logs. I come to you today humbled that I could not figure this out. The problem as simply put our logon times are extremely long. An average first time logon is roughly 2-10 minutes depending on the software installed. All computers are Windows 7, the oldest computers being 5 years old. Startup times on various computers range from good (1-2 minutes) to very bad (5-60). Our second time logons range from 30 seconds to 4 minutes. We have a gigabit connection between each computer on the network. We have 5 domain controllers which also double as our DNS servers. Initial testing led us to believe that this was a software problem. So I spent a few days testing machines only to find inconsistent results from the etl files from xperfview. Each subset of computers on campus had a different subset of software issues, none seeming to interfere with logon just startup. So I started looking at our group policy and located some very interesting event ID’s. Group Policy 1129: The processing of Group Policy failed because of lack of network connectivity to a domain controller. Group Policy 1055: The processing of Group Policy failed. Windows could not resolve the computer name. This could be caused by one of more of the following: a) Name Resolution failure on the current domain controller. b) Active Directory Replication Latency (an account created on another domain controller has not replicated to the current domain controller). NETLOGON 5719 : This computer was not able to set up a secure session with a domain controller in domain OURDOMAIN due to the following: There are currently no logon servers available to service the logon request. This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator. E1kexpress 27: Intel®82567LM-3 Gigabit Network Connection – Network link is disconnected. NetBT 4300 – The driver could not be created. WMI 10 - Event filter with query "SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA "Win32_Processor" AND TargetInstance.LoadPercentage 99" could not be reactivated in namespace "//./root/CIMV2" because of error 0x80041003. Events cannot be delivered through this filter until the problem is corrected. More or less with timestamps it becomes apparent that the network maybe the issue. 1:25:57 - Group Policy is trying to discover the domain controller information 1:25:57 - The network link has been disconnected 1:25:58 - The processing of Group Policy failed because of lack of network connectivity to a domain controller. This may be a transient condition. A success message would be generated once the machine gets connected to the domain controller and Group Policy has successfully processed. If you do not see a success message for several hours, then contact your administrator. 1:25:58 - Making LDAP calls to connect and bind to active directory. DC1.ourdomain.edu 1:25:58 - Call failed after 0 milliseconds. 1:25:58 - Forcing rediscovery of domain controller details. 
1:25:58 - Group policy failed to discover the domain controller in 1030 milliseconds 1:25:58 - Periodic policy processing failed for computer OURDOMAIN\%name%$ in 1 seconds. 1:25:59 - A network link has been established at 1Gbps at full duplex 1:26:00 - The network link has been disconnected 1:26:02 - NtpClient was unable to set a domain peer to use as a time source because of discovery error. NtpClient will try again in 3473457 minutes and DOUBLE THE REATTEMPT INTERVAL thereafter. 1:26:05 - A network link has been established at 1Gbps at full duplex 1:26:08 - Name resolution for the name %Name% timed out after none of the configured DNS servers responded. 1:26:10 – The TCP/IP NetBIOS Helper service entered the running state. 1:26:11 - The time provider NtpClient is currently receiving valid time data at dc4.ourdomain.edu 1:26:14 – User Logon Notification for Customer Experience Improvement Program 1:26:15 - Group Policy received the notification Logon from Winlogon for session 1. 1:26:15 - Making LDAP calls to connect and bind to Active Directory. dc4.ourdomain.edu 1:26:18 - The LDAP call to connect and bind to Active Directory completed. dc4. ourdomain.edu. The call completed in 2309 milliseconds. 1:26:18 - Group Policy successfully discovered the Domain Controller in 2918 milliseconds. 1:26:18 - Computer details: Computer role : 2 Network name : (Blank) 1:26:18 - The LDAP call to connect and bind to Active Directory completed. dc4.ourdomain.edu. The call completed in 2309 milliseconds. 1:26:18 - Group Policy successfully discovered the Domain Controller in 2918 milliseconds. 1:26:19 - The WinHTTP Web Proxy Auto-Discovery Service service entered the running state. 1:26:46 - The Network Connections service entered the running state. 1:27:10 – Retrieved account information 1:27:10 – The system call to get account information completed. 1:27:10 - Starting policy processing due to network state change for computer OURDOMAIN\%name%$ 1:27:10 – Network state change detected 1:27:10 - Making system call to get account information. 1:27:11 - Making LDAP calls to connect and bind to Active Directory. dc4.ourdomain.edu 1:27:13 - Computer details: Computer role : 2 Network name : ourdomain.edu (Now not blank) 1:27:13 - Group Policy successfully discovered the Domain Controller in 2886 milliseconds. 1:27:13 - The LDAP call to connect and bind to Active Directory completed. dc4.ourdomain.edu The call completed in 2371 milliseconds. 1:27:15 - Estimated network bandwidth on one of the connections: 0 kbps. 1:27:15 - Estimated network bandwidth on one of the connections: 8545 kbps. 1:27:15 - A fast link was detected. The Estimated bandwidth is 8545 kbps. The slow link threshold is 500 kbps. 1:27:17 – Powershell - Engine state is changed from Available to Stopped. 1:27:20 - Completed Group Policy Local Users and Groups Extension Processing in 4539 milliseconds. 1:27:25 - Completed Group Policy Scheduled Tasks Extension Processing in 5210 milliseconds. 1:27:27 - Completed Group Policy Registry Extension Processing in 1529 milliseconds. 1:27:27 - Completed policy processing due to network state change for computer OURDOMAIN\%name%$ in 16 seconds. 1:27:27 – The Group Policy settings for the computer were processed successfully. There were no changes detected since the last successful processing of Group Policy. Any help would be appreciated. Please ask for any relevant information and it will be provided as soon as possible.

    Read the article

  • File Access problems with SLES 10 SP2 OES2 SP1

    - by Blackhawk131
    We have identified a couple of repeatable, demonstrable scenarios with unexplained rejected folder access on our servers for Mac users. Hopefully, this can be presented to Novell for a solution. What we did to demonstrate scenario 1; 1. setup a PC and Mac side-by-side 2. login to our server and open up to a central location on both Mac and PC 3. on the PC in that central location create a folder 4. on the Mac in that central location drag the created folder to the Mac desktop, this should work fine, no problem 5. on the PC rename that folder 6. on the Mac drag a file to that renamed folder, this should error with the following message; a. You cannot copy some of these items to the destination because their names are too long for the destination. Do you want to skip copying these items and continue copying the other items? b. Select skip, response is the filename is copied to the location with zero or small byte size. Try opening it and you get file is corrupted error message. What we did to demonstrate scenario 2; 1. setup a PC and Mac side-by-side 2. login to our server and open up to a central location on both Mac and PC 3. on the PC in that central location create a folder then create a subfolder 4. copy some content into the subfolder 5. on the Mac in that central location drag the created top level folder to the Mac desktop, this should work fine, no problem 6. on the PC rename that subfolder 7. on the Mac drag that top level folder to the Mac desktop, this should error on the Mac with the following; a. The operation cannot be completed because you do not have sufficient privileges for b. The operation cannot be completed because you do not have sufficient privileges for 8. on the Mac, if you open that subfolder you can see the file copied in step 4 above but, you can not open that file, you get the following message if you try; a. There was an error opening this document. You do not have permission to open this file. 9. on the PC drag some content into the top level folder 10. on the Mac you can open that file directly from the server or copy it locally, no problem, however-the subfolder is still corrupted or locked, whichever 11. on the PC rename the top level folder 12. on the Mac that same file just opened in step 10 above is now not accessible, get the following message; a. The document could not be opened. I have observed some variances in the above. For instance, a change on the PC side may take a moment before you can observer or act on the Mac side - kind of like the server is slow to respond. Also, the error message may vary. However, the key is once a folder, or subfolder, gets renamed by a PC, Mac problems commence. The solution is to create a new folder from a PC and copy the contents of the corrupted folder to the new folder and not rename the folder name. This has to be done on a PC because the corrupted folder is not accessible by a Mac user. Another problem that dovetails with the above is that we know certain characters are not allowed for PC folder or filenames. If a Mac user creates a folder with a slash in the file name, from the PC the user does not see that slash in the name. As soon as the PC user copies a file to that folder, the Mac user is locked from that folder. Will get the following error message; - Sorry, the operation could not be completed because an unexpected error occurred. - (Error code - 50) In addition to the above mentioned character issue with folders, the problem is more evil with filenames. 
If, for example, you create a file with a slash in the filename on a Mac and copy it to the server you will get the following error message; - You cannot copy some of these items to the destination because their names are too long for the destination. Do you want to skip copying these items and continue copying the other items? Select either Stop or Skip buttons. It does not matter which button is selected. The file name gets copied to the destination location at a reduced size. Depending on the file type, the icon associated with the file may or may not be present. Furthermore, if you open that file on the server you will get the following message; - Couldnt open the file. It may be corrupt or a file format that doesnt recognize. From the users perspective, if they are not observant of the icon or file size, they may disregard the error message and think their file has copied as intended. Only later do they discover the file is corrupt if they open that file. I want to make a note on this problem. It is the PC causing the issue. You can change folder and file names all day on a MAC and you don't have a problem as long as a character is not the issue. Once you change the file name or folder name from a PC the entire folder structure from that level down is corrupted. But it has to be resolved from a PC by creating a new folder and copying the contents to the new folder like stated above. Is something not configured correctly? SUSE Linux Enterprise Server 10 (x86_64) VERSION = 10 PATCHLEVEL = 2 LSB_VERSION="core-2.0-noarch:core-3.0-noarch:core-2.0-x86_64:core-3.0-x86_64" Novell Open Enterprise Server 2.0.1 (x86_64) VERSION = 2.0.1 PATCHLEVEL = 1 BUILD Note: We use Novell clients on all windows systems to connect to the servers for file access and network storage. We use AFP to allow OSx systems to connect to servers.

    Read the article

  • Hard Disk DRDY error: is it a crash

    - by pranjal
    I am using IBM Thinkpad, 1.7GHz, 512 RAM with Linux Mint 9 installed. I have two partitions in addition to root. One of the partitions became read-only yesterday, after which I rebooted my system. It is extremely slow along with DRDY Error : Is my Hard disk crashed ? Error Log while booting. Differences between boot sector and its backup. failed command : READ DMA BMDMA : stat 0X25 ata 1.00 : status : { DRDY ERR } ata 1.00 : status :{ UNC } Buffer I/O error on logical device, logical block 65467 smartctl output for the partition: mint mint # smartctl -a /dev/sda1 smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF INFORMATION SECTION === Device Model: TOSHIBA MK4026GAX RoHS Serial Number: X5LY1623T Firmware Version: PA107E User Capacity: 40,007,761,920 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 6 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Thu Feb 17 06:48:25 2011 UTC SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x84) Offline data collection activity was suspended by an interrupting command from host. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 153) seconds. Offline data collection capabilities: (0x1b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. No Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. No General Purpose Logging support. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 30) minutes. 
SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0 2 Throughput_Performance 0x0005 100 100 050 Pre-fail Offline - 0 3 Spin_Up_Time 0x0027 100 100 001 Pre-fail Always - 310 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 3968 5 Reallocated_Sector_Ct 0x0033 100 100 050 Pre-fail Always - 40 7 Seek_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0 8 Seek_Time_Performance 0x0005 100 100 050 Pre-fail Offline - 0 9 Power_On_Hours 0x0032 082 082 000 Old_age Always - 7257 10 Spin_Retry_Count 0x0033 179 100 030 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 3484 192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 489 193 Load_Cycle_Count 0x0032 064 064 000 Old_age Always - 367150 194 Temperature_Celsius 0x0022 100 100 000 Old_age Always - 36 (Lifetime Min/Max 14/57) 196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 33 197 Current_Pending_Sector 0x0032 100 100 000 Old_age Always - 82 198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 1 199 UDMA_CRC_Error_Count 0x0032 200 253 000 Old_age Always - 0 220 Disk_Shift 0x0002 100 100 000 Old_age Always - 101 222 Loaded_Hours 0x0032 085 085 000 Old_age Always - 6146 223 Load_Retry_Count 0x0032 100 100 000 Old_age Always - 0 224 Load_Friction 0x0022 100 100 000 Old_age Always - 0 226 Load-in_Time 0x0026 100 100 000 Old_age Always - 227 240 Head_Flying_Hours 0x0001 100 100 001 Pre-fail Offline - 0 SMART Error Log Version: 1 ATA Error Count: 2371 (device log contains only the most recent five errors) CR = Command Register [HEX] FR = Features Register [HEX] SC = Sector Count Register [HEX] SN = Sector Number Register [HEX] CL = Cylinder Low Register [HEX] CH = Cylinder High Register [HEX] DH = Device/Head Register [HEX] DC = Device Command Register [HEX] ER = Error register [HEX] ST = Status register [HEX] Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days. Error 2371 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:03:10.061 READ DMA f8 00 00 00 00 00 e0 00 00:03:10.061 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:03:10.053 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:03:10.053 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:03:10.053 READ NATIVE MAX ADDRESS Error 2370 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:03:03.328 READ DMA f8 00 00 00 00 00 e0 00 00:03:03.327 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:03:03.320 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:03:03.319 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:03:03.319 READ NATIVE MAX ADDRESS Error 2369 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:02:56.582 READ DMA f8 00 00 00 00 00 e0 00 00:02:56.582 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:02:56.574 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:02:56.574 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:02:56.574 READ NATIVE MAX ADDRESS Error 2368 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:02:49.809 READ DMA f8 00 00 00 00 00 e0 00 00:02:49.809 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:02:49.801 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:02:49.801 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:02:49.801 READ NATIVE MAX ADDRESS Error 2367 occurred at disk power-on lifetime: 7256 hours (302 days + 8 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 05 1a 1b 00 e0 Error: UNC 5 sectors at LBA = 0x00001b1a = 6938 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 05 1a 1b 00 e0 00 00:02:43.056 READ DMA f8 00 00 00 00 00 e0 00 00:02:43.056 READ NATIVE MAX ADDRESS ec 00 00 00 00 00 a0 02 00:02:43.048 IDENTIFY DEVICE ef 03 45 00 00 00 a0 02 00:02:43.048 SET FEATURES [Set transfer mode] f8 00 00 00 00 00 e0 00 00:02:43.047 READ NATIVE MAX ADDRESS SMART Self-test log structure revision number 1 No self-tests have been logged. [To run self-tests, use: smartctl -t] Device does not support Selective Self Tests/Logging Do I need to get a new Hard Disk my PC ?

    Read the article

  • OWA, Outlook Anywhere, RPCPing Inconsistencies

    - by pk.
    I'm troubleshooting an Outlook Anywhere issue with a new Exchange 2010 server. The server in question, MS2010, is behind a SonicWALL NSA 2400 device and works wonderfully except for Outlook Anywhere. Outlook Anywhere works internally and I've verified (through Ctrl-Right Click --> Connection Status) that I'm able to connect to MS2010 over HTTPS. When trying to connect to the server using HTTPS from outside the firewall, I'm unable to do so. A Wireshark trace shows 30 or so successful HTTPS packet transmissions, and then it fails with 3 straight transmissions to a destination port of 135. I have no idea why my computer is attempting to access anything on port 135 since I've setup my profile to use HTTPS on both slow and fast connections. I'm 99% certain that the firewall is configured correctly. I run Outlook Web Access (also HTTPS) on the same server and there are no issues with access. EDIT: My Autodiscover settings are correct (as far as I can tell). My server passes the Outlook Anywhere and Autodiscover tests at https://www.testexchangeconnectivity.com/. I've been using the RPCPing utility to troubleshoot and have come across the following results: Internally- >rpcping -t ncacn_http -s mail.mydomain.com -o RpcProxy=mail.mydomain.com -P "pk,mydomain,*" -I "pk,mydomain,*" -H 1 -u 10 -a connect -F 3 -v 3 -E -R none RPCPing v2.12. Copyright (C) Microsoft Corporation, 2002 OS Version is: 6.1, Service Pack 1 RPCPinging proxy server mail.mydomain.com with Echo Request Packet Sending ping to server Response from server received: 200 Pinging successfully completed in 93 ms Externally- >rpcping -t ncacn_http -s mail.mydomain.com -o RpcProxy=mail.mydomain.com -P "pk,mydomain,*" -I "pk,mydomain,*" -H 1 -u 10 -a connect -F 3 -v 3 -E -R none RPCPing v6.0. Copyright (C) Microsoft Corporation, 2002-2006 Enter password for RPC/HTTP proxy: RPCPing set Activity ID: {fc8411ba-2987-4175-b37b-801dc69d5ff9} RPCPinging proxy server mail.mydomain.com with Echo Request Packet Setting autologon policy to high WinHttpSetCredentials for target server called Error 87 : The parameter is incorrect. returned in WinHttpSetCredentials Ping failed What should I be checking in order to troubleshoot my Outlook Anywhere issues? I'm using Windows 7 SP1 for internal and external access. 
    EDIT: Autodiscover.xml content

<?xml version="1.0"?>
<Autodiscover xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/exchange/autodiscover/responseschema/2006">
  <Response xmlns="http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a">
    <User>
      <DisplayName>John Doe</DisplayName>
      <LegacyDN>/o=MYDOMAIN/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=pk</LegacyDN>
      <DeploymentId>d35170cc-f3a7-42c5-9427-1f554a469126</DeploymentId>
    </User>
    <Account>
      <AccountType>email</AccountType>
      <Action>settings</Action>
      <Protocol>
        <Type>EXCH</Type>
        <Server>MS2010.MYDOMAIN.local</Server>
        <ServerDN>/o=MYDOMAIN/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=MS2010</ServerDN>
        <ServerVersion>738180DA</ServerVersion>
        <MdbDN>/o=MYDOMAIN/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=MS2010/cn=Microsoft Private MDB</MdbDN>
        <ASUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</ASUrl>
        <OOFUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</OOFUrl>
        <OABUrl>http://MS2010.MYDOMAIN.local/OAB/2c34c9f5-5521-4c8c-b684-538df815052a/</OABUrl>
        <UMUrl>https://MS2010.MYDOMAIN.local/EWS/UM2007Legacy.asmx</UMUrl>
        <Port>0</Port>
        <DirectoryPort>0</DirectoryPort>
        <ReferralPort>0</ReferralPort>
        <PublicFolderServer>MS2007.MYDOMAIN.local</PublicFolderServer>
        <AD>dc1.MYDOMAIN.local</AD>
        <EwsUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</EwsUrl>
        <EcpUrl>https://MS2010.MYDOMAIN.local/ecp/</EcpUrl>
        <EcpUrl-um>?p=customize/voicemail.aspx&amp;exsvurl=1</EcpUrl-um>
        <EcpUrl-aggr>?p=personalsettings/EmailSubscriptions.slab&amp;exsvurl=1</EcpUrl-aggr>
        <EcpUrl-mt>PersonalSettings/DeliveryReport.aspx?exsvurl=1&amp;IsOWA=&lt;IsOWA&gt;&amp;MsgID=&lt;MsgID&gt;&amp;Mbx=&lt;Mbx&gt;</EcpUrl-mt>
        <EcpUrl-ret>?p=organize/retentionpolicytags.slab&amp;exsvurl=1</EcpUrl-ret>
        <EcpUrl-sms>?p=sms/textmessaging.slab&amp;exsvurl=1</EcpUrl-sms>
      </Protocol>
      <Protocol>
        <Type>EXPR</Type>
        <Server>mail.mycompany.com</Server>
        <ASUrl>https://mail.mycompany.com/ews/exchange.asmx</ASUrl>
        <OOFUrl>https://mail.mycompany.com/ews/exchange.asmx</OOFUrl>
        <OABUrl>https://mail.mycompany.com/OAB/2c34c9f5-5521-4c8c-b684-538df815052a/</OABUrl>
        <UMUrl>https://mail.mycompany.com/ews/UM2007Legacy.asmx</UMUrl>
        <Port>0</Port>
        <DirectoryPort>0</DirectoryPort>
        <ReferralPort>0</ReferralPort>
        <SSL>On</SSL>
        <AuthPackage>Basic</AuthPackage>
        <CertPrincipalName>msstd:mail.mycompany.com</CertPrincipalName>
        <EwsUrl>https://mail.mycompany.com/ews/exchange.asmx</EwsUrl>
        <EcpUrl>https://mail.mycompany.com/owa/</EcpUrl>
        <EcpUrl-um>?p=customize/voicemail.aspx&amp;exsvurl=1</EcpUrl-um>
        <EcpUrl-aggr>?p=personalsettings/EmailSubscriptions.slab&amp;exsvurl=1</EcpUrl-aggr>
        <EcpUrl-mt>PersonalSettings/DeliveryReport.aspx?exsvurl=1&amp;IsOWA=&lt;IsOWA&gt;&amp;MsgID=&lt;MsgID&gt;&amp;Mbx=&lt;Mbx&gt;</EcpUrl-mt>
        <EcpUrl-ret>?p=organize/retentionpolicytags.slab&amp;exsvurl=1</EcpUrl-ret>
        <EcpUrl-sms>?p=sms/textmessaging.slab&amp;exsvurl=1</EcpUrl-sms>
      </Protocol>
      <Protocol>
        <Type>WEB</Type>
        <Port>0</Port>
        <DirectoryPort>0</DirectoryPort>
        <ReferralPort>0</ReferralPort>
        <Internal>
          <OWAUrl AuthenticationMethod="Basic, Fba">https://MS2010.MYDOMAIN.local/owa/</OWAUrl>
          <Protocol>
            <Type>EXCH</Type>
            <ASUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</ASUrl>
          </Protocol>
        </Internal>
        <External>
          <OWAUrl AuthenticationMethod="Fba">https://mail.mycompany.com/owa/</OWAUrl>
          <Protocol>
            <Type>EXPR</Type>
            <ASUrl>https://mail.mycompany.com/ews/exchange.asmx</ASUrl>
          </Protocol>
        </External>
      </Protocol>
    </Account>
  </Response>
</Autodiscover>
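
    Since the proxy echo ping (-E) succeeds internally but fails externally before it ever reaches Exchange, one way to narrow things down is to ping through the proxy to an actual RPC endpoint instead of stopping at the proxy. A hedged sketch: the endpoint number 6001 (the store endpoint in typical Exchange RPC-over-HTTP deployments) and the -B certificate-subject switch are illustrative assumptions, not values taken from the post:

>rpcping -t ncacn_http -s MS2010.MYDOMAIN.local -o RpcProxy=mail.mydomain.com -P "pk,mydomain,*" -I "pk,mydomain,*" -H 1 -u 10 -a connect -F 3 -v 3 -e 6001 -B msstd:mail.mydomain.com

    If this fails with the same Error 87 from WinHttpSetCredentials, it may be worth re-running with -H 2 (NTLM) instead of -H 1 (Basic), since the newer rpcping rejects credential/authentication-scheme combinations it considers invalid before any traffic is sent.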
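
    Separately, an internal-works/external-fails pattern with Outlook Anywhere is often caused by the EXPR provider's certificate principal name not matching the certificate actually presented outside the firewall. A sketch using standard Exchange 2010 Management Shell cmdlets (the msstd: value shown is simply the one from the Autodiscover response above; verify it against the real certificate subject before changing anything):

Get-OutlookProvider EXPR | Format-List Name,CertPrincipalName
Get-OutlookAnywhere -Server MS2010 | Format-List ExternalHostname,IISAuthenticationMethods

# Only if CertPrincipalName is empty or does not match the certificate:
Set-OutlookProvider EXPR -CertPrincipalName "msstd:mail.mycompany.com"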

    Read the article

< Previous Page | 242 243 244 245 246 247 248 249  | Next Page >