Search Results


  • .NET SerialPort DataReceived event thread interference with main thread

    - by Kiran
    I am writing a serial communication program in C# that uses the SerialPort class to talk to a strip machine connected over an RS232 cable. When I send a command to the machine, it responds with a number of bytes that depends on the command. For example, when I send the "\D" command I expect to download the machine's program data as a continuous string of 180 bytes. The machine's manual suggests, as a best practice, sending an unrecognized character such as a comma (,) to make sure the machine is initialized before sending the first command of the cycle. My serial communication code is as follows:

        public class SerialHelper
        {
            SerialPort commPort = null;
            string currentReceived = string.Empty;
            string receivedStr = string.Empty;

            private bool CommInitialized()
            {
                try
                {
                    commPort = new SerialPort();
                    commPort.PortName = "COM1";
                    if (!commPort.IsOpen) commPort.Open();
                    commPort.BaudRate = 9600;
                    commPort.Parity = System.IO.Ports.Parity.None;
                    commPort.StopBits = StopBits.One;
                    commPort.DataBits = 8;
                    commPort.RtsEnable = true;
                    commPort.DtrEnable = true;
                    commPort.DataReceived += new SerialDataReceivedEventHandler(commPort_DataReceived);
                    return true;
                }
                catch (Exception ex) { return false; }
            }

            void commPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
            {
                SerialPort currentPort = (SerialPort)sender;
                currentReceived = currentPort.ReadExisting();
                receivedStr += currentReceived;
            }

            internal int CommIO(string outString, int outLen, ref string inBuffer, int inLen)
            {
                receivedStr = string.Empty;
                inBuffer = string.Empty;
                if (CommInitialized()) { commPort.Write(outString); }
                System.Threading.Thread.Sleep(1500);
                int i = 0;
                while ((receivedStr.Length < inLen) && i < 10)
                {
                    System.Threading.Thread.Sleep(500);
                    i += 1;
                }
                if (!string.IsNullOrEmpty(receivedStr)) { inBuffer = receivedStr; }
                commPort.Close();
                return inBuffer.Length;
            }
        }

    I call this code from a Windows Form as follows:

        len = SerialHelperObj.CommIO(",", 1, ref inBuffer, 4);
        len = SerialHelperObj.CommIO(",", 1, ref inBuffer, 4);
        if (inBuffer == "!?*O")
        {
            len = SerialHelperObj.CommIO("\D", 2, ref inBuffer, 180);
        }

    A valid return value from the serial port looks like this:

        \D00000010000000000010 550 3250 0000256000   ... and so on

    Instead, I am getting something like this:

        \D00000010D,, 000 550 D,,   ... and so on

    It seems my comm calls are interfering with one another when I send commands. I do check the result of the comma command before initiating the actual command, but the DataReceived handler is inserting bytes from the previous communication cycle. Can anyone please shed some light on this? I have lost quite some hair just trying to get this to work, and I am not sure where I am going wrong.
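
    The behaviour described, where bytes from the previous command cycle show up in the next response, usually comes from data that is still sitting in the receive buffer (or still arriving) when the next command is written. A minimal sketch of one way to isolate each exchange is below; it is not the poster's code, and the port settings, timeouts and expected-length values are placeholders. It opens the port once, clears anything already buffered before each write, and waits on an event instead of fixed sleeps.

    ```csharp
    using System;
    using System.IO.Ports;
    using System.Text;
    using System.Threading;

    // Illustrative request/response helper: one port opened once, the receive
    // buffer cleared before every command, and an event used instead of polling
    // with Thread.Sleep. Port name, baud rate and timeouts are example values.
    class SerialExchange : IDisposable
    {
        private readonly SerialPort port;
        private readonly StringBuilder received = new StringBuilder();
        private readonly AutoResetEvent dataArrived = new AutoResetEvent(false);
        private readonly object sync = new object();

        public SerialExchange(string portName)
        {
            port = new SerialPort(portName, 9600, Parity.None, 8, StopBits.One)
            {
                RtsEnable = true,
                DtrEnable = true
            };
            port.DataReceived += OnDataReceived;
            port.Open();                                 // open once, reuse for every command
        }

        private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            // DataReceived fires on a thread-pool thread, so guard the shared buffer.
            lock (sync) { received.Append(port.ReadExisting()); }
            dataArrived.Set();
        }

        public string SendCommand(string command, int expectedLength, int timeoutMs)
        {
            lock (sync) { received.Length = 0; }         // forget the previous cycle
            port.DiscardInBuffer();                      // drop bytes the device already sent
            port.Write(command);

            // Keep waiting while data keeps arriving, until we have enough bytes
            // or nothing new shows up within the timeout.
            while (dataArrived.WaitOne(timeoutMs))
            {
                lock (sync)
                {
                    if (received.Length >= expectedLength) break;
                }
            }
            lock (sync) { return received.ToString(); }
        }

        public void Dispose()
        {
            port.DataReceived -= OnDataReceived;
            port.Dispose();
        }
    }
    ```

    With something along these lines, the "," initialization command and the "\D" download command can no longer share leftover bytes, because every exchange starts from an empty buffer on a port that stays open for the whole cycle.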


  • Good Email Notification Sending Service

    - by Philibert Perusse
    I need to send a few, but important, email notifications to individual users. For instance, when they register their software I send them a confirmation email. Right now I am using 'sendmail' from my Perl CGI script to do the job, and most of my automated email is lost or marked as junk. Unfortunately I am on shared hosting and don't have very good control over the SPF and SenderID DNS records. Worse still, another user of that shared server has been infected with some kind of spam bot, and the IP is now blacklisted until further notice! Anyway, I just don't want to deal with this kind of headache.

    I am looking for an online service I can subscribe to that charges something like $0.10 per email I send, with no monthly fees. I just need an API so I can send the email from the PHP or Perl code I will have to write. I have been looking around at all those "email sending services", but they are all built around creating campaigns and managing lists for bulk email marketing and newsletters. Remember, I want to send an email notification to a single recipient. So far I have looked at MailChimp, SocketLabs, iContact, ConstantContact, StreamSend and so many others, to no avail. I have seen one comment on Hacker News saying that MailChimp has an API for transactional emails (i.e. ad hoc ones, to welcome a user for example), so you're not restricted to using them only for bulk email. But I cannot find this in the API documentation supplied; maybe it was removed.

    Any suggestions out there? Here is a summary of my requirements:

    - Allows ad hoc sending of email to a single recipient.
    - Throughput may well be throttled; I don't care, I am sending something like 2-5 emails a day.
    - API available in PHP or Perl to connect to the web service.
    - Ideally I can send HTML-formatted emails; otherwise I will live with text only.
    - Not too expensive: between $0.01 and $0.25 per email would be acceptable.
    - No recurring monthly fees.


  • Safari/Chrome problem with ajaxsubmit?

    - by Jan
    Hi I'm currently having some weird issues with ajaxsubmit (http://jquery.malsup.com/form/#ajaxSubmit), which I'm currently using in a project. I have a flow where I need to open a form into a modal window. I'm using fancybox for that and it works like a charm. When the form has been forced to open in the fancybox window there can happen two things. 1) If the user who is about to submit the form is logged in she should see a confirmation in the modal box, that her input was succesfully submitted 2)If the user is not logged in there should be loaded a login form once she hits the submit button 2.1) When the user has logged in she should receive a confirmation in the modal box. This is also working like a charm in Firefox, IE8 and IE7 but not in Safari or Chrome. The weird part is that it seems like safari and chrome are completely ignoring my ajaxsubmit form. To force the first form to be opened I use the follwoing script - this part is working in both Safari and Chrome. $(".klikEnPrisForm").ajaxForm({ success: function(data){ $.fancybox({'content':data}); } }); My ajaxsubmit form scrip looks like this var options = { url: '/?altTemplate=XmlProxyKlikEnPris', dataType: 'xml', data: $(this).serializeArray(), success: function(data) { if ($(data).find('loggetind').text() == 'true') { $("#klikenpris").hide(); $('<div id="fancybox-inner-klik"></div>').appendTo('#fancybox-inner'); $('#fancybox-inner-klik').load('/KlikEnPrisAccept?tilKvittering=1&sagsno=' + $(data).find('sagsnummer').text() + '&pris=' + $(data).find('pris').text() + '&klik-comment=' + $(data).find('kommentar').text() + '&klik-telefon=' + $(data).find('tlf').text() + '&klik-maeglerkontakt=' + $(data).find('maakontakte').text()).stop(true, true); } else { $("#klikenpris").hide(); $("#fancybox-wrap").css({ 'width': '480px', 'height': '220px' }); $("#fancybox-inner").css({ 'width': '460px', 'height': '220px' }); $('<div id="fancybox-inner-klik"></div>').appendTo('#fancybox-inner'); $('#fancybox-inner-klik').load('/login.aspx?loginklikpris=0&klikpris=1&sagsno=' + $(data).find('sagsnummer').text() + '&pris=' + $(data).find('pris').text() + '&klik-comment=' + $(data).find('kommentar').text() + '&klik-telefon=' + $(data).find('tlf').text() + '&klik-maeglerkontakt=' + $(data).find('maakontakte').text()).stop(true, true); } } }; // bind to the form's submit event $('#klikenprisform').submit(function() { // inside event callbacks 'this' is the DOM element so we first // wrap it in a jQuery object and then invoke ajaxSubmit $(this).ajaxSubmit(options); // !!! Important !!! // always return false to prevent standard browser submit and page navigation return false; }); I have tried inserting an alert in the succes callback function but it's never being called it seems. It seems like the default action is not being overruled by the link written in the "url" in ajaxsubmit. I'm really puzzled about this, since it's working nicely in other browsers and I'm completely lost on how I should approach the debugging in safari/chrome. I hope all the above makes sense and I'm looking forward to hear any suggestions. Cheers!


  • Wrapping a pure virtual method with multiple arguments with Boost.Python

    - by fallino
    Hello, I followed the "official" tutorial and others but still don't manage to expose this pure virtual method (getPeptide) : ms_mascotresults.hpp class ms_mascotresults { public: ms_mascotresults(ms_mascotresfile &resfile, const unsigned int flags, double minProbability, int maxHitsToReport, const char * unigeneIndexFile, const char * singleHit = 0); ... virtual ms_peptide getPeptide(const int q, const int p) const = 0; } ms_mascotresults.cpp #include <boost/python.hpp> using namespace boost::python; #include "msparser.hpp" // which includes "ms_mascotresults.hpp" using namespace matrix_science; #include <iostream> #include <sstream> struct ms_mascotresults_wrapper : ms_mascotresults, wrapper<ms_mascotresults> { ms_peptide getPeptide(const int q, const int p) { this->get_override("getPeptide")(q); this->get_override("getPeptide")(p); } }; BOOST_PYTHON_MODULE(ms_mascotresults) { class_<ms_mascotresults_wrapper, boost::noncopyable>("ms_mascotresults") .def("getPeptide", pure_virtual(&ms_mascotresults::getPeptide) ) ; } Here are the bjam's errors : /usr/local/boost_1_42_0/boost/python/object/value_holder.hpp:66: error: cannot declare field ‘boost::python::objects::value_holder<ms_mascotresults_wrapper>::m_held’ to be of abstract type ‘ms_mascotresults_wrapper’ ms_mascotresults.cpp:12: note: because the following virtual functions are pure within ‘ms_mascotresults_wrapper’: ... include/ms_mascotresults.hpp:334: note: virtual matrix_science::ms_peptide matrix_science::ms_mascotresults::getPeptide(int, int) const ms_mascotresults.cpp: In constructor ‘ms_mascotresults_wrapper::ms_mascotresults_wrapper()’: ms_mascotresults.cpp:12: error: no matching function for call to ‘matrix_science::ms_mascotresults::ms_mascotresults()’ include/ms_mascotresults.hpp:284: note: candidates are: matrix_science::ms_mascotresults::ms_mascotresults(matrix_science::ms_mascotresfile&, unsigned int, double, int, const char*, const char*) include/ms_mascotresults.hpp:109: note: matrix_science::ms_mascotresults::ms_mascotresults(const matrix_science::ms_mascotresults&) ... 
/usr/local/boost_1_42_0/boost/python/object/value_holder.hpp: In constructor ‘boost::python::objects::value_holder<Value>::value_holder(PyObject*) [with Value = ms_mascotresults_wrapper]’: /usr/local/boost_1_42_0/boost/python/object/value_holder.hpp:137: note: synthesized method ‘ms_mascotresults_wrapper::ms_mascotresults_wrapper()’ first required here /usr/local/boost_1_42_0/boost/python/object/value_holder.hpp:137: error: cannot allocate an object of abstract type ‘ms_mascotresults_wrapper’ ms_mascotresults.cpp:12: note: since type ‘ms_mascotresults_wrapper’ has pure virtual functions So I tried to change the constructor's signature by : BOOST_PYTHON_MODULE(ms_mascotresults) { //class_<ms_mascotresults_wrapper, boost::noncopyable>("ms_mascotresults") class_<ms_mascotresults_wrapper, boost::noncopyable>("ms_mascotresults", init<ms_mascotresfile &, const unsigned int, double, int, const char *,const char *>()) .def("getPeptide", pure_virtual(&ms_mascotresults::getPeptide) ) Giving these errors : /usr/local/boost_1_42_0/boost/python/object/value_holder.hpp:66: error: cannot declare field ‘boost::python::objects::value_holder<ms_mascotresults_wrapper>::m_held’ to be of abstract type ‘ms_mascotresults_wrapper’ ms_mascotresults.cpp:12: note: because the following virtual functions are pure within ‘ms_mascotresults_wrapper’: include/ms_mascotresults.hpp:334: note: virtual matrix_science::ms_peptide matrix_science::ms_mascotresults::getPeptide(int, int) const ... ms_mascotresults.cpp:24: instantiated from here /usr/local/boost_1_42_0/boost/python/object/value_holder.hpp:137: error: no matching function for call to ‘ms_mascotresults_wrapper::ms_mascotresults_wrapper(matrix_science::ms_mascotresfile&, const unsigned int&, const double&, const int&, const char* const&, const char* const&)’ ms_mascotresults.cpp:12: note: candidates are: ms_mascotresults_wrapper::ms_mascotresults_wrapper(const ms_mascotresults_wrapper&) ms_mascotresults.cpp:12: note: ms_mascotresults_wrapper::ms_mascotresults_wrapper() If I comment the virtual function getPeptide in the .hpp, it builds perfectly with this constructor : class_<ms_mascotresults>("ms_mascotresults", init<ms_mascotresfile &, const unsigned int, double, int, const char *,const char *>() ) So I'm a bit lost...


  • Need help manipulating WAV (RIFF) Files at a byte level

    - by Eric
    I'm writing an application in C# that will record audio files (*.wav) and automatically tag and name them. Wave files are RIFF files (like AVI), which can contain metadata chunks in addition to the waveform data chunks. So now I'm trying to figure out how to read and write the RIFF metadata to and from recorded wave files.

    I'm using NAudio for recording the files, and I asked on their forums as well as on SO for a way to read and write RIFF tags. While I received a number of good answers, none of the solutions allowed reading and writing RIFF chunks as easily as I would like. More importantly, I have very little experience dealing with files at a byte level, and I think this could be a good opportunity to learn. So now I want to try writing my own class(es) that can read in a RIFF file and allow metadata to be read from, and written to, the file.

    I've used streams in C#, but always with the entire stream at once, so now I'm a little lost having to consider a file byte by byte. Specifically, how would I go about removing or inserting bytes in the middle of a file? I've tried reading a file through a FileStream into a byte array (byte[]) as shown in the code below.

        System.IO.FileStream waveFileStream = System.IO.File.OpenRead(@"C:\sound.wav");
        byte[] waveBytes = new byte[waveFileStream.Length];
        waveFileStream.Read(waveBytes, 0, waveBytes.Length);

    I could see through the Visual Studio debugger that the first four bytes are the RIFF header of the file. But arrays are a pain to deal with when performing actions that change their size, like inserting or removing values. So I was thinking I could then turn the byte[] into a List like this.

        List<byte> list = waveBytes.ToList<byte>();

    That would make any manipulation of the file byte by byte a whole lot easier, but I'm worried I might be missing something, like a class in the System.IO namespace that would make all this even easier. Am I on the right track, or is there a better way to do this? I should also mention that I'm not hugely concerned with performance, and would prefer not to deal with pointers or unsafe code blocks like this guy. If it helps at all, here is a good article on the RIFF/WAV file format.
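
    Because the RIFF container is just a sequence of (id, size, data) chunks, one way to avoid inserting and removing bytes in a List<byte> altogether is to walk the chunk headers and rewrite the file chunk by chunk. The sketch below (the file path is only an example) enumerates the top-level chunks with a BinaryReader; writing metadata would then be a matter of copying the existing chunks to a new file, adding or replacing a LIST/INFO chunk, and fixing up the RIFF size field.

    ```csharp
    using System;
    using System.IO;
    using System.Text;

    // Sketch: enumerate the top-level chunks of a RIFF/WAVE file. Each chunk is a
    // 4-byte ASCII id, a 4-byte little-endian size, the data, and a pad byte when
    // the size is odd.
    class RiffWalker
    {
        static void Main()
        {
            using (var reader = new BinaryReader(File.OpenRead(@"C:\sound.wav")))
            {
                string riff = Encoding.ASCII.GetString(reader.ReadBytes(4));   // "RIFF"
                uint riffSize = reader.ReadUInt32();                           // bytes following this field
                string wave = Encoding.ASCII.GetString(reader.ReadBytes(4));   // "WAVE"
                if (riff != "RIFF" || wave != "WAVE")
                    throw new InvalidDataException("Not a RIFF/WAVE file.");

                while (reader.BaseStream.Position + 8 <= reader.BaseStream.Length)
                {
                    string chunkId = Encoding.ASCII.GetString(reader.ReadBytes(4)); // "fmt ", "data", "LIST", ...
                    uint chunkSize = reader.ReadUInt32();
                    long dataStart = reader.BaseStream.Position;

                    Console.WriteLine("{0} chunk: {1} bytes at offset {2}", chunkId, chunkSize, dataStart);

                    // Jump over the payload (plus one pad byte when the size is odd).
                    reader.BaseStream.Position = dataStart + chunkSize + (chunkSize % 2);
                }
            }
        }
    }
    ```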


  • Using embedded resources in Silverlight (4) - other cultures not being compiled

    - by Andrei Rinea
    I am having a bit of a hard time providing localized strings for the UI in a small Silverlight 4 application. Basically I've put a folder "Resources" and placed two resource files in it : Statuses.resx Statuses.ro.resx I do have an enum Statuses : public enum Statuses { None, Working } and a convertor : public class StatusToMessage : IValueConverter { public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { if (!Enum.IsDefined(typeof(Status), value)) { throw new ArgumentOutOfRangeException("value"); } var x = Statuses.None; return Statuses.ResourceManager.GetString(((Status)value).ToString(), Thread.CurrentThread.CurrentUICulture); } public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { throw new NotImplementedException(); } } in the view I have a textblock : <TextBlock Grid.Column="3" Text="{Binding Status, Converter={StaticResource StatusToMessage}}" /> Upon view rendering the converter is called but no matter what the Thread.CurrentThread.CurrentUICulture is set it always returns the default culture value. Upon further inspection I took apart the XAP resulted file, taken the resulted DLL file to Reflector and inspected the embedded resources. It only contains the default resource!! Going back to the two resource files I am now inspecting their properties : Build action : Embedded Resource Copy to output directory : Do not copy Custom tool : ResXFileCodeGenerator Custom tool namespace : [empty] Both resource (.resx) files have these settings. The .Designer.cs resulted files are as follows : Statuses.Designer.cs : //------------------------------------------------------------------------------ // <auto-generated> // This code was generated by a tool. // Runtime Version:4.0.30319.1 // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ namespace SilverlightApplication5.Resources { using System; /// <summary> /// A strongly-typed resource class, for looking up localized strings, etc. /// </summary> // This class was auto-generated by the StronglyTypedResourceBuilder // class via a tool like ResGen or Visual Studio. // To add or remove a member, edit your .ResX file then rerun ResGen // with the /str option, or rebuild your VS project. [global::System.CodeDom.Compiler.GeneratedCodeAttribute("System.Resources.Tools.StronglyTypedResourceBuilder", "4.0.0.0")] [global::System.Diagnostics.DebuggerNonUserCodeAttribute()] [global::System.Runtime.CompilerServices.CompilerGeneratedAttribute()] internal class Statuses { // ... yadda-yadda Statuses.ro.Designer.cs [empty] I've taken both files and put them in a console application and they behave as expected in it, not like in this silverlight application. What is wrong?
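
    Before suspecting the converter, it may be worth confirming whether the Romanian resources make it into the XAP at all. In Silverlight, satellite resource assemblies are only packaged for cultures listed in the project file's <SupportedCultures> element, so adding Statuses.ro.resx on its own is not always enough; this is offered as a likely cause, not a certainty. The small diagnostic below (a hypothetical helper, not part of the app, and relying on the generated Statuses class from the question) just resolves the same keys under the default and the "ro" culture so the fallback becomes visible in the debug output.

    ```csharp
    using System.Diagnostics;
    using System.Globalization;

    // Hypothetical diagnostic: if the "ro" lookups print the same strings as the
    // invariant ones, the Romanian resources were never loaded, which points at
    // packaging (e.g. <SupportedCultures> in the .csproj) rather than at the
    // value converter.
    internal static class ResourceCheck
    {
        public static void Report()
        {
            var romanian = new CultureInfo("ro");
            foreach (var key in new[] { "None", "Working" })
            {
                string fallback = Statuses.ResourceManager.GetString(key, CultureInfo.InvariantCulture);
                string ro = Statuses.ResourceManager.GetString(key, romanian);
                Debug.WriteLine(string.Format("{0}: default='{1}', ro='{2}'", key, fallback, ro));
            }
        }
    }
    ```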


  • Optimizing multiple dispatch notification algorithm in C#?

    - by Robert Fraser
    Sorry about the title, I couldn't think of a better way to describe the problem. Basically, I'm trying to implement a collision system in a game. I want to be able to register a "collision handler" that handles any collision of two objects (given in either order) that can be cast to particular types. So if Player : Ship : Entity and Laser : Particle : Entity, and handlers for (Ship, Particle) and (Laser, Entity) are registered than for a collision of (Laser, Player), both handlers should be notified, with the arguments in the correct order, and a collision of (Laser, Laser) should notify only the second handler. A code snippet says a thousand words, so here's what I'm doing right now (naieve method): public IObservable<Collision<T1, T2>> onCollisionsOf<T1, T2>() where T1 : Entity where T2 : Entity { Type t1 = typeof(T1); Type t2 = typeof(T2); Subject<Collision<T1, T2>> obs = new Subject<Collision<T1, T2>>(); _onCollisionInternal += delegate(Entity obj1, Entity obj2) { if (t1.IsAssignableFrom(obj1.GetType()) && t2.IsAssignableFrom(obj2.GetType())) obs.OnNext(new Collision<T1, T2>((T1) obj1, (T2) obj2)); else if (t1.IsAssignableFrom(obj2.GetType()) && t2.IsAssignableFrom(obj1.GetType())) obs.OnNext(new Collision<T1, T2>((T1) obj2, (T2) obj1)); }; return obs; } However, this method is quite slow (measurable; I lost ~2 FPS after implementing this), so I'm looking for a way to shave a couple cycles/allocation off this. I thought about (as in, spent an hour implementing then slammed my head against a wall for being such an idiot) a method that put the types in an order based on their hash code, then put them into a dictionary, with each entry being a linked list of handlers for pairs of that type with a boolean indication whether the handler wanted the order of arguments reversed. Unfortunately, this doesn't work for derived types, since if a derived type is passed in, it won't notify a subscriber for the base type. Can anyone think of a way better than checking every type pair (twice) to see if it matches? Thanks, Robert
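
    One middle ground between the naive scan and a dictionary keyed on exact types is to keep the IsAssignableFrom checks but pay for them only once per concrete pair of runtime types, caching the matched (and order-corrected) handlers. The sketch below is illustrative rather than drop-in: Entity stands in for the game's entity base class, the Collision<T1,T2>/IObservable plumbing is left out, and handlers are plain delegates.

    ```csharp
    using System;
    using System.Collections.Generic;

    class Entity { }

    // Dispatcher that resolves which registered handlers apply to a pair of
    // runtime types once, then reuses that answer for every later collision of
    // the same pair.
    class CollisionDispatcher
    {
        private sealed class Registration
        {
            public Type T1, T2;
            public Action<Entity, Entity> Invoke;   // takes the entities in declared order
        }

        private readonly List<Registration> registrations = new List<Registration>();

        // Cache: concrete (typeA, typeB) pair -> delegates already bound in the right order.
        private readonly Dictionary<Tuple<Type, Type>, List<Action<Entity, Entity>>> cache =
            new Dictionary<Tuple<Type, Type>, List<Action<Entity, Entity>>>();

        public void Register<T1, T2>(Action<T1, T2> handler)
            where T1 : Entity where T2 : Entity
        {
            registrations.Add(new Registration
            {
                T1 = typeof(T1),
                T2 = typeof(T2),
                Invoke = (a, b) => handler((T1)a, (T2)b)
            });
            cache.Clear();                          // a new registration invalidates cached lookups
        }

        public void Collide(Entity a, Entity b)
        {
            var key = Tuple.Create(a.GetType(), b.GetType());
            List<Action<Entity, Entity>> handlers;
            if (!cache.TryGetValue(key, out handlers))
            {
                handlers = new List<Action<Entity, Entity>>();
                foreach (var reg in registrations)
                {
                    if (reg.T1.IsAssignableFrom(key.Item1) && reg.T2.IsAssignableFrom(key.Item2))
                    {
                        handlers.Add(reg.Invoke);                       // forward order
                    }
                    else if (reg.T1.IsAssignableFrom(key.Item2) && reg.T2.IsAssignableFrom(key.Item1))
                    {
                        var invoke = reg.Invoke;
                        handlers.Add((x, y) => invoke(y, x));           // reversed order
                    }
                }
                cache[key] = handlers;              // reflection cost paid once per type pair
            }
            foreach (var h in handlers) h(a, b);
        }
    }
    ```

    Derived types still work because the assignability test runs against the concrete runtime types the first time a pair is seen; after that, dispatch is a dictionary lookup and a loop over prebound delegates, with no per-collision reflection or allocation.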


  • Inline-editing: onBlur prevents onClick from being triggered (jQuery)

    - by codethief
    Hello StackOverflow community! I'm currently working on my own jQuery plugin for inline-editing as those that already exist don't fit my needs. Anyway, I'd like to give the user the following (boolean) options concerning the way editing is supposed to work: submit_button reset_on_blur Let's say the user would like to have a submit button (submit_button = true) and wants the inline input element to be removed as soon as it loses focus (reset_on_blur = true). This leads to an onClick handler being registered for the button and an onBlur handler being registered for the input element. Every time the user clicks the button, however, the onBlur handler is also triggered and results in the edit mode being left, i.e. before the onClick event occurs. This makes submitting impossible. FYI, the HTML in edit mode looks like this: <td><input type="text" class="ie-input" value="Current value" /><div class="ie-content-backup" style="display: none;">Backup Value</div><input type="submit" class="ie-button-submit" value="Save" /></td> So, is there any way I could check in the onBlur handler if the focus was lost while activating the submit button, so that edit mode isn't left before the submit button's onClick event is triggered? I've also tried to register a $('body').click() handler to simulate blur and to be able to trace back which element has been clicked, but that didn't work either and resulted in rather strange bugs: $('html').click(function(e) { // body doesn't span over full page height, use html instead // Don't reset if the submit button, the input element itself or the element to be edited inline was clicked. if(!$(e.target).hasClass('ie-button-submit') && !$(e.target).hasClass('ie-input') && $(e.target).get(0) != element.get(0)) { cancel(element); } }); jEditable uses the following piece of code. I don't like this approach, though, because of the delay. Let alone I don't even understand why this works. ;) input.blur(function(e) { /* prevent canceling if submit was clicked */ t = setTimeout(function() { reset.apply(form, [settings, self]); }, 500); }); Thanks in advance!


  • user generated / user specific functions

    - by pedalpete
    I'm looking for the most elegant and secure method to do the following. I have a calendar, and groups of users. Users can add events to specific days on the calendar, and specify how long each event lasts for. I've had a few requests from users to add the ability for them to define that events of a specific length include a break, of a certain amount of time, or require that a specific amount of time be left between events. For example, if event is 2 hours, include a 20min break. for each event, require 30 minutes before start of next event. The same group that has asked for an event of 2 hours to include a 20 min break, could also require that an event 3 hours include a 30 minute break. In the end, what the users are trying to get is an elapsed time excluding breaks calculated for them. Currently I provide them a total elapsed time, but they are looking for a running time. However, each of these requests is different for each group. Where one group may want a 30 minute break during a 2 hour event, and another may want only 10 minutes for each 3 hour event. I was kinda thinking I could write the functions into a php file per group, and then include that file and do the calculations via php and then return a calculated total to the user, but something about that doesn't sit right with me. Another option is to output the groups functions to javascript, and have it run client-side, as I'm already returning the duration of the event, but where the user is part of more than one group with different rules, this seems like it could get rather messy. I currently store the start and end time in the database, but no 'durations', and I don't think I should be storing the calculated totals in the db, because if a group decides to change their calculations, I'd need to change it throughout the db. Is there a better way of doing this? I would just store the variables in mysql, but I don't see how I can then say to mysql to calculate based on those variables. I'm REALLY lost here. Any suggestions? I'm hoping somebody has done something similar and can provide some insight into the best direction. If it helps, my table contains eventid, user, group, startDate, startTime, endDate, endTime, type The json for the event which I return to the user is {"eventid":"'.$eventId.'", "user":"'.$userId.'","group":"'.$groupId.'","type":"'.$type.'","startDate":".$startDate.'","startTime":"'.$startTime.'","endDate":"'.$endDate.'","endTime":"'.$endTime.'","durationLength":"'.$duration.'", "durationHrs":"'.$durationHrs.'"} where for example, duration length is 2.5 and duration hours is 2:30.


  • Using FluentNHibernate + SQLite with .Net4?

    - by stiank81
    I have a WPF application running with .Net3.5 using FluentNHibernate, and all works fine. When I upgraded to VS2010 I ran into some odd problems, but after changing to use x64 variant of SQLite it worked fine again. Now I want to change to use .Net4, but this has turned into a more painful experience then I expected.. When calling FluentConfiguration.BuildConfiguration I get an exception thrown: FluentConfigurationException unhandled An invalid or incomplete configuration was used while creating a SessionFactory. Check PotentialReasons collection, and InnerException for more detail The inner exception gives us more information: Message = "Could not create the driver from NHibernate.Driver.SQLite20Driver, NHibernate, Version=2.1.2.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4." It has an InnerException again: Exception has been thrown by the target of an invocation. Which again has an InnerException: The IDbCommand and IDbConnection implementation in the assembly System.Data.SQLite could not be found. Ensure that the assembly System.Data.SQLite is located in the application directory or in the Global Assembly Cache. If the assembly is in the GAC, use element in the application configuration file to specify the full name of the assembly. Now - to me it sounds like it doesn't find System.Data.SQLite.dll, but I can't understand this. Everywhere this is referenced I have "Copy Local", and I have verified that it is in every build folder for projects using SQLite. I have also copied it manually to every Debug folder of the solution - without luck. Notes: This exact same code worked just fine before I upgraded to .Net4. I did see some x64 x86 mismatch problems earlier, but I have switched to use x86 as the target platform and for all referenced dlls. I have verified that all files in the Debug-folder are x86. I have tried the precompiled Fluent dlls, I have tried compiling myself, and I have compiled my own version of Fluent using .Net4. I see that there are also others that have seen this problem, but I haven't really seen any solution yet. After @devio's answer I tried adding a reference to the SQLite dll. This didn't change anything, but I hope I made it right though.. This is what I added to the root node of the app.config file: <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <qualifyAssembly partialName="System.Data.SQLite" fullName="System.Data.SQLite, Version=1.0.60.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139" /> </assemblyBinding> </runtime> Anyone out there using Fluent with .Net4 and SQLite successfully? Help! I'm lost...


  • iPhone UIImage upload to web service

    - by user347635
    Hi all, I worked on this for several hours today and I'm pretty close to a solution but clearly need some help from someone who's pulled this off. I'm trying to post an image to a web service from the iPhone. I'll post the code first then explain everything I've tried: NSData *imageData = UIImageJPEGRepresentation(barCodePic, .9); NSString *soapMsg = [NSString stringWithFormat: @"<?xml version=\"1.0\" encoding=\"utf-8\"?><soap:Envelope xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"><soap:Body><WriteImage xmlns=\"http://myserver/imagewebservice/\"><ImgIn>%@</ImgIn></WriteImage></soap:Body></soap:Envelope>", [NSData dataWithData:imageData] ]; NSURL *url = [NSURL URLWithString:@"http://myserver/imagewebservice/service1.asmx"]; NSMutableURLRequest *req = [NSMutableURLRequest requestWithURL:url]; NSString *msgLength = [NSString stringWithFormat:@"%d", [soapMsg length]]; [req addValue:@"text/xml; charset=utf-8" forHTTPHeaderField:@"Content-Type"]; [req addValue:@"http://myserver/imagewebservice/WriteImage" forHTTPHeaderField:@"SOAPAction"]; [req addValue:msgLength forHTTPHeaderField:@"Content-Length"]; [req setHTTPMethod:@"POST"]; [req setHTTPBody: [soapMsg dataUsingEncoding:NSUTF8StringEncoding]]; conn = [[NSURLConnection alloc] initWithRequest:req delegate:self]; if (conn) { webData = [[NSMutableData data] retain]; } First thing, this code works fine for anything but an image. The web service is running on my local network and I can change the source code at will and if I change the "ImgIn" parameter to a string and pass a string in, everything works fine, I get a return value no problem. So there are no connectivity issues at all, I'm able to call and get data from this web service on this server no problems. But I need to upload an image to this web service via the ImgIn parameter, so the above code is my best shot so far. I also have didReceiveResponse, didReceiveData, didFailWithError, etc all being handled. The above code fires off didRecieveResponse every time. However didReceiveData is never fired and it's like the web service itself never even runs. When I debug the web service itself, it runs and debugs fine when I use a string parameter, but with the image parameter, it never even debugs when I call it. It's almost like the ImgIn parameter is too long (it's huge when I output it to the screen) and the web service just chokes on it. I've read about having to encode to Base64 when using this method, but I can't find any good links on how that's done. If that's what I'm doing wrong, can you PLEASE provide code as to how to do this, not just "you need to use Base64", I'd really appreciate it as I can find almost nothing on how to implement this with an example. Other than that, I'm kind of lost, it seems like I'm doing everything else right. Please help! Thanks


  • Multi-part question about multi-threading, locks and multi-core processors (multi ^ 3)

    - by MusiGenesis
    I have a program with two methods. The first method takes two arrays as parameters, and performs an operation in which values from one array are conditionally written into the other, like so: void Blend(int[] dest, int[] src, int offset) { for (int i = 0; i < src.Length; i++) { int rdr = dest[i + offset]; dest[i + offset] = src[i] > rdr? src[i] : rdr; } } The second method creates two separate sets of int arrays and iterates through them such that each array of one set is Blended with each array from the other set, like so: void CrossBlend() { int[][] set1 = new int[150][75000]; // we'll pretend this actually compiles int[][] set2 = new int[25][10000]; // we'll pretend this actually compiles for (int i1 = 0; i1 < set1.Length; i1++) { for (int i2 = 0; i2 < set2.Length; i2++) { Blend(set1[i1], set2[i2], 0); // or any offset, doesn't matter } } } First question: Since this apporoach is an obvious candidate for parallelization, is it intrinsically thread-safe? It seems like no, since I can conceive a scenario (unlikely, I think) where one thread's changes are lost because a different threads ~simultaneous operation. If no, would this: void Blend(int[] dest, int[] src, int offset) { lock (dest) { for (int i = 0; i < src.Length; i++) { int rdr = dest[i + offset]; dest[i + offset] = src[i] > rdr? src[i] : rdr; } } } be an effective fix? Second question: If so, what would be the likely performance cost of using locks like this? I assume that with something like this, if a thread attempts to lock a destination array that is currently locked by another thread, the first thread would block until the lock was released instead of continuing to process something. Also, how much time does it actually take to acquire a lock? Nanosecond scale, or worse than that? Would this be a major issue in something like this? Third question: How would I best approach this problem in a multi-threaded way that would take advantage of multi-core processors (and this is based on the potentially wrong assumption that a multi-threaded solution would not speed up this operation on a single core processor)? I'm guessing that I would want to have one thread running per core, but I don't know if that's true.
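
    For the third question, one way to get multi-core scaling without any locks at all is to parallelise over the destination arrays: each outer iteration is the only writer of its set1[i1], and set2 is only ever read, so the iterations are independent by construction. A sketch using .NET 4's Parallel.For is below (assuming that API is available; the same partitioning can be done with manually created threads otherwise).

    ```csharp
    using System.Threading.Tasks;

    static class CrossBlender
    {
        // Same conditional-max blend as in the question, unchanged.
        static void Blend(int[] dest, int[] src, int offset)
        {
            for (int i = 0; i < src.Length; i++)
            {
                int rdr = dest[i + offset];
                dest[i + offset] = src[i] > rdr ? src[i] : rdr;
            }
        }

        // Each parallel iteration owns exactly one destination array, so no two
        // threads ever write to the same memory and no lock is required.
        public static void CrossBlend(int[][] set1, int[][] set2)
        {
            Parallel.For(0, set1.Length, i1 =>
            {
                int[] dest = set1[i1];            // sole writer of this array
                for (int i2 = 0; i2 < set2.Length; i2++)
                {
                    Blend(dest, set2[i2], 0);     // set2 is read-only here
                }
            });
        }
    }
    ```

    As for the second question, acquiring an uncontended Monitor is typically on the order of tens of nanoseconds; it is contention (one thread blocking while another holds the same dest) that hurts, and the partitioning above avoids that entirely.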


  • Visual Studio Express 2012 debug mode doesn't work

    - by user2350086
    I have a project in Visual Studio that I have been working on for a while, and I have used the debugger extensively. Recently I changed some settings and I have lost the ability to stop the program and step through code. I can't figure out what I had changed that might have affected this. If I put a breakpoint in my code and try to have the program stop there, it doesn't. The break point shows up white with a red outline. If I hover the mouse over it, it says "The breakpoint will not currently be hit. No executable code of the debugger's target code type is associated with this line. Possible causes include: conditional compilation, compiler optimizations, or the target architecture of this line is not supported by the current debugger code type." I know for a fact that the program executes the code where the breakpoint is because I put the breakpoint in the beginning of the InitializeComponent method. The program displays the window fine, but does not stop at the breakpoint. Yes, I am running in debug mode. It seems as though there is a disconnect between the compiled code and the source code displayed. Does anyone know what that would be, or know which compiler settings I should check to re-enable debugging? Here are the compiler options: /GS /analyze- /W3 /Zc:wchar_t /I"D:\dev\libcurl-7.19.3-win32-ssl-msvc\include" /Zi /Od /sdl /Fd"Debug\vc110.pdb" /fp:precise /D "WIN32" /D "_DEBUG" /D "_UNICODE" /D "UNICODE" /errorReport:prompt /WX- /Zc:forScope /Oy- /clr /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\mscorlib.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Data.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Drawing.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Windows.Forms.DataVisualization.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Windows.Forms.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Xml.dll" /MDd /Fa"Debug\" /EHa /nologo /Fo"Debug\" /Fp"Debug\Prog.pch" The linker options are: /OUT:"D:\dev\Prog\Debug\Prog.exe" /MANIFEST /NXCOMPAT /PDB:"D:\dev\Prog\Debug\Prog.pdb" /DYNAMICBASE "curllib.lib" "winmm.lib" "kernel32.lib" "user32.lib" "gdi32.lib" "winspool.lib" "comdlg32.lib" "advapi32.lib" "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "odbc32.lib" "odbccp32.lib" /FIXED:NO /DEBUG /MACHINE:X86 /ENTRY:"Main" /INCREMENTAL /PGD:"D:\dev\Prog\Debug\Prog.pgd" /SUBSYSTEM:WINDOWS /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /ManifestFile:"Debug\Prog.exe.intermediate.manifest" /ERRORREPORT:PROMPT /NOLOGO /LIBPATH:"D:\dev\libcurl-7.19.3-win32-ssl-msvc\lib\Debug" /ASSEMBLYDEBUG /TLBID:1


  • Securing a license key with RSA key.

    - by Jesse Knott
    Hello, it's late, I'm tired, and probably being quite dense.... I have written an application that I need to secure so it will only run on machines that I generate a key for. What I am doing for now is getting the BIOS serial number and generating a hash from that, I then am encrypting it using a XML RSA private key. I then sign the XML to ensure that it is not tampered with. I am trying to package the public key to decrypt and verify the signature with, but every time I try to execute the code as a different user than the one that generated the signature I get a failure on the signature. Most of my code is modified from sample code I have found since I am not as familiar with RSA encryption as I would like to be. Below is the code I was using and the code I thought I needed to use to get this working right... Any feedback would be greatly appreciated as I am quite lost at this point the original code I was working with was this, this code works fine as long as the user launching the program is the same one that signed the document originally... CspParameters cspParams = new CspParameters(); cspParams.KeyContainerName = "XML_DSIG_RSA_KEY"; cspParams.Flags = CspProviderFlags.UseMachineKeyStore; // Create a new RSA signing key and save it in the container. RSACryptoServiceProvider rsaKey = new RSACryptoServiceProvider(cspParams) { PersistKeyInCsp = true, }; This code is what I believe I should be doing but it's failing to verify the signature no matter what I do, regardless if it's the same user or a different one... RSACryptoServiceProvider rsaKey = new RSACryptoServiceProvider(); //Load the private key from xml file XmlDocument xmlPrivateKey = new XmlDocument(); xmlPrivateKey.Load("KeyPriv.xml"); rsaKey.FromXmlString(xmlPrivateKey.InnerXml); I believe this to have something to do with the key container name (Being a real dumbass here please excuse me) I am quite certain that this is the line that is both causing it to work in the first case and preventing it from working in the second case.... cspParams.KeyContainerName = "XML_DSIG_RSA_KEY"; Is there a way for me to sign/encrypt the XML with a private key when the application license is generated and then drop the public key in the app directory and use that to verify/decrypt the code? I can drop the encryption part if I can get the signature part working right. I was using it as a backup to obfuscate the origin of the license code I am keying from. Does any of this make sense? Am I a total dunce? Thanks for any help anyone can give me in this..
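
    The failure when a different Windows account runs the verification suggests the two sides are not seeing the same key material; key containers (even the machine store) add per-account access-control wrinkles that are easy to trip over. One commonly used arrangement avoids containers entirely and is sketched below; it is not the poster's code, and the fingerprint string and key-storage details are placeholders. The private key stays with the vendor as XML, the application ships only the public XML, and verification therefore behaves the same for every user.

    ```csharp
    using System;
    using System.Security.Cryptography;
    using System.Text;

    // Container-free sign/verify: keys are carried as XML strings rather than
    // stored in a CSP key container.
    static class LicenseSignature
    {
        // Run once, offline: create a key pair and keep privateKeyXml secret.
        public static void GenerateKeyPair(out string privateKeyXml, out string publicKeyXml)
        {
            using (var rsa = new RSACryptoServiceProvider(2048))
            {
                rsa.PersistKeyInCsp = false;            // keep the key out of any container
                privateKeyXml = rsa.ToXmlString(true);  // includes private parameters
                publicKeyXml = rsa.ToXmlString(false);  // public only - ship this with the app
            }
        }

        // Vendor side: sign the machine fingerprint (e.g. the hashed BIOS serial).
        public static byte[] Sign(string fingerprint, string privateKeyXml)
        {
            using (var rsa = new RSACryptoServiceProvider())
            {
                rsa.PersistKeyInCsp = false;
                rsa.FromXmlString(privateKeyXml);
                return rsa.SignData(Encoding.UTF8.GetBytes(fingerprint), new SHA1CryptoServiceProvider());
            }
        }

        // Application side: verify using only the public key.
        public static bool Verify(string fingerprint, byte[] signature, string publicKeyXml)
        {
            using (var rsa = new RSACryptoServiceProvider())
            {
                rsa.PersistKeyInCsp = false;
                rsa.FromXmlString(publicKeyXml);
                return rsa.VerifyData(Encoding.UTF8.GetBytes(fingerprint), new SHA1CryptoServiceProvider(), signature);
            }
        }
    }
    ```

    If the goal is only to prove the license came from the vendor and was not tampered with, the signature alone is enough; encrypting the payload is a separate concern that can be dropped, as suggested at the end of the question.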


  • Splitting a test to a set of smaller tests

    - by mkorpela
    I want to be able to split a big test to smaller tests so that when the smaller tests pass they imply that the big test would also pass (so there is no reason to run the original big test). I want to do this because smaller tests usually take less time, less effort and are less fragile. I would like to know if there are test design patterns or verification tools that can help me to achieve this test splitting in a robust way. I fear that the connection between the smaller tests and the original test is lost when someone changes something in the set of smaller tests. Another fear is that the set of smaller tests doesn't really cover the big test. An example of what I am aiming at: //Class under test class A { public void setB(B b){ this.b = b; } public Output process(Input i){ return b.process(doMyProcessing(i)); } private InputFromA doMyProcessing(Input i){ .. } .. } //Another class under test class B { public Output process(InputFromA i){ .. } .. } //The Big Test @Test public void theBigTest(){ A systemUnderTest = createSystemUnderTest(); // <-- expect that this is expensive Input i = createInput(); Output o = systemUnderTest.process(i); // <-- .. or expect that this is expensive assertEquals(o, expectedOutput()); } //The splitted tests @PartlyDefines("theBigTest") // <-- so something like this should come from the tool.. @Test public void smallerTest1(){ // this method is a bit too long but its just an example.. Input i = createInput(); InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow Output expected = expectedOutput(); // this should be the same in both tests and it should be ensured somehow B b = mock(B.class); when(b.process(x)).thenReturn(expected); A classUnderTest = createInstanceOfClassA(); classUnderTest.setB(b); Output o = classUnderTest.process(i); assertEquals(o, expected); verify(b).process(x); verifyNoMoreInteractions(b); } @PartlyDefines("theBigTest") // <-- so something like this should come from the tool.. @Test public void smallerTest2(){ InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow Output expected = expectedOutput(); // this should be the same in both tests and it should be ensured somehow B classUnderTest = createInstanceOfClassB(); Output o = classUnderTest.process(x); assertEquals(o, expected); }


  • Offline backup synchronization

    - by Pavan Kumar
    There is a Central Server running Windows Server 2003 and SQL Server 2005 and there are 7 client machines situated in various places and has XP Pro & SQL Server 2005 installed in all of them. They are not interconnected so they are physically seperate. One person goes to each of these centers maybe twice a month and takes the backup (Full database consisting of mdf and ldf files) with a pen drive and brings it to the Central server which contains the central database holding same schema as all the other client databases. I need to synchronize each backup database (belonging to different centers) one by one to update the existing data or inserting new data in the central database . The solution i got was Replication. The pendrive is brought to central server consisting of 7 instances of the databases and then the databases is attached to the central server one by one to the same SQL Server where the central database exists. Then my idea was to replicate the backup database one by one i.e using single subscription (Central Database) and multiple publication ( i.e 7 instances of databases in my case) toplogy by performing replication locally (i.e in the same machine). So i tried to develop a UI in C# .Net to programatically run the Transactional Replication with push subscription using RMO Programming (which is incomplete as of now because there is no point in developing when you already know it is not the solution). Transactional Replication can either be set to initialize with a snapshot or without a snapshot. If i go for the first option i.e with a snapshot , the data whatever is present in Central Database is overwritten by the new data . So the data present initially in the central database is lost. If i try to initialize without snapshot , no data (the data already has the updated and new data) will be sent from the backup database to server. The replication will work in a scenario where any incremental changes is done only after you set the replication . So the initial data whatever was present in the backup database when setting up the replication will not be replicated when running the snapshot agent for the first time to synchronize. Only changes in the backup database thereafter will be reflected to the central database .(Remember I am not going to insert new data or make any changes to the backup database after i attach it to the Central Server. ) So this solution is not feasible. I want a solution for synchronizing from one client database to central database present in the same machine using C#.NET. If you can provide me small example maybe with two databases(with same schema) DB1(Client) to DB2(Server) consisting of one or two tables it will be very helpful. The synchronization is not bidirectional.I want to only update existing data or insert new data from DB1 to DB2 (DB2 may contain some data initially). Thanks and Regards Pavan
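
    Since replication initialisation is the sticking point, a simpler pattern for this one-way, low-frequency scenario is to attach each centre's backup and run a plain set-based upsert per table from the attached database into the central one, inside a transaction. The sketch below is only an illustration: the Patient table, its columns and the two database names are invented, and it assumes the primary keys are unique across centres (for example GUIDs or keys that include a centre code); identity keys that restart at 1 in every centre would collide and need a composite key instead. SQL Server 2005 has no MERGE statement, so the copy is an UPDATE followed by an INSERT of the missing rows.

    ```csharp
    using System.Data.SqlClient;

    // One-way, table-by-table upsert from an attached client database (ClientDb)
    // into the central database (CentralDb) on the same SQL Server instance.
    static class CenterSync
    {
        public static void SyncPatients(string connectionString)
        {
            const string sql = @"
                BEGIN TRANSACTION;

                -- 1) Update rows that already exist in the central database.
                UPDATE central
                SET    central.Name      = client.Name,
                       central.UpdatedOn = client.UpdatedOn
                FROM   CentralDb.dbo.Patient AS central
                JOIN   ClientDb.dbo.Patient  AS client
                       ON client.PatientId = central.PatientId;

                -- 2) Insert rows the central database has never seen.
                INSERT INTO CentralDb.dbo.Patient (PatientId, Name, UpdatedOn)
                SELECT client.PatientId, client.Name, client.UpdatedOn
                FROM   ClientDb.dbo.Patient AS client
                WHERE  NOT EXISTS (SELECT 1 FROM CentralDb.dbo.Patient AS central
                                   WHERE central.PatientId = client.PatientId);

                COMMIT TRANSACTION;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                command.CommandTimeout = 0;   // large monthly batches may take a while
                command.ExecuteNonQuery();
            }
        }
    }
    ```

    A small UI could then call one such method per table for each attached backup, detach the database, and move on to the next centre.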


  • Use native HBitmap in C# while preserving alpha channel/transparency. Please check this code, it works on my computer...

    - by David
    Let's say I get a HBITMAP object/handle from a native Windows function. I can convert it to a managed bitmap using Bitmap.FromHbitmap(nativeHBitmap), but if the native image has transparency information (alpha channel), it is lost by this conversion. There are a few questions on Stack Overflow regarding this issue. Using information from the first answer of this question (How to draw ARGB bitmap using GDI+?), I wrote a piece of code that I've tried and it works. It basically gets the native HBitmap width, height and the pointer to the location of the pixel data using GetObject and the BITMAP structure, and then calls the managed Bitmap constructor: Bitmap managedBitmap = new Bitmap(bitmapStruct.bmWidth, bitmapStruct.bmHeight, bitmapStruct.bmWidth * 4, PixelFormat.Format32bppArgb, bitmapStruct.bmBits); As I understand (please correct me if I'm wrong), this does not copy the actual pixel data from the native HBitmap to the managed bitmap, it simply points the managed bitmap to the pixel data from the native HBitmap. And I don't draw the bitmap here on another Graphics (DC) or on another bitmap, to avoid unnecessary memory copying, especially for large bitmaps. I can simply assign this bitmap to a PictureBox control or the the Form BackgroundImage property. And it works, the bitmap is displayed correctly, using transparency. When I no longer use the bitmap, I make sure the BackgroundImage property is no longer pointing to the bitmap, and I dispose both the managed bitmap and the native HBitmap. The Question: Can you tell me if this reasoning and code seems correct. I hope I will not get some unexpected behaviors or errors. And I hope I'm freeing all the memory and objects correctly. private void Example() { IntPtr nativeHBitmap = IntPtr.Zero; /* Get the native HBitmap object from a Windows function here */ // Create the BITMAP structure and get info from our nativeHBitmap NativeMethods.BITMAP bitmapStruct = new NativeMethods.BITMAP(); NativeMethods.GetObjectBitmap(nativeHBitmap, Marshal.SizeOf(bitmapStruct), ref bitmapStruct); // Create the managed bitmap using the pointer to the pixel data of the native HBitmap Bitmap managedBitmap = new Bitmap( bitmapStruct.bmWidth, bitmapStruct.bmHeight, bitmapStruct.bmWidth * 4, PixelFormat.Format32bppArgb, bitmapStruct.bmBits); // Show the bitmap this.BackgroundImage = managedBitmap; /* Run the program, use the image */ MessageBox.Show("running..."); // When the image is no longer needed, dispose both the managed Bitmap object and the native HBitmap this.BackgroundImage = null; managedBitmap.Dispose(); NativeMethods.DeleteObject(nativeHBitmap); } internal static class NativeMethods { [StructLayout(LayoutKind.Sequential)] public struct BITMAP { public int bmType; public int bmWidth; public int bmHeight; public int bmWidthBytes; public ushort bmPlanes; public ushort bmBitsPixel; public IntPtr bmBits; } [DllImport("gdi32", CharSet = CharSet.Auto, EntryPoint = "GetObject")] public static extern int GetObjectBitmap(IntPtr hObject, int nCount, ref BITMAP lpObject); [DllImport("gdi32.dll")] internal static extern bool DeleteObject(IntPtr hObject); }
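
    The reasoning above matches the documented contract for this Bitmap constructor: it does not copy the pixels, so the native memory must stay alive for as long as the managed Bitmap is in use, which is exactly what the code does. If that coupling ever becomes awkward, one alternative, sketched below, is to copy the pixels once and release the HBITMAP immediately. It reuses the NativeMethods declarations from the question, uses bmWidthBytes as the stride, and assumes the HBITMAP is a DIB section (for a device-dependent bitmap, bmBits is IntPtr.Zero and this approach does not apply).

    ```csharp
    using System;
    using System.Drawing;
    using System.Drawing.Drawing2D;
    using System.Drawing.Imaging;
    using System.Runtime.InteropServices;

    static class HBitmapConverter
    {
        // Copies the pixel data out of a 32bpp DIB-section HBITMAP so the caller
        // can DeleteObject() it straight away. Relies on the NativeMethods class
        // (BITMAP struct, GetObjectBitmap) shown in the question.
        public static Bitmap CopyToManagedBitmap(IntPtr nativeHBitmap)
        {
            var info = new NativeMethods.BITMAP();
            NativeMethods.GetObjectBitmap(nativeHBitmap, Marshal.SizeOf(info), ref info);
            if (info.bmBits == IntPtr.Zero)
                throw new ArgumentException("Not a DIB section; bmBits is null.", "nativeHBitmap");

            // Wrap the native bits without copying them yet...
            using (var wrapper = new Bitmap(info.bmWidth, info.bmHeight, info.bmWidthBytes,
                                            PixelFormat.Format32bppArgb, info.bmBits))
            {
                // ...then clone them into a bitmap that owns its own memory.
                var copy = new Bitmap(info.bmWidth, info.bmHeight, PixelFormat.Format32bppArgb);
                using (var g = Graphics.FromImage(copy))
                {
                    g.CompositingMode = CompositingMode.SourceCopy;   // raw copy, keeps the alpha channel
                    g.DrawImageUnscaled(wrapper, 0, 0);
                }
                return copy;   // the caller may DeleteObject(nativeHBitmap) immediately after this
            }
        }
    }
    ```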


  • Which mobile operating system should I code for?

    - by samgoody
    It seems as though mobile computing has fully arrived. I would like to rewrite two of our programs for mobile devices, but am a bit lost as to which platform to target. Complicating this decision:

    - I would need to learn the relevant languages and IDEs - my coding to date has been almost all web based (PHP, JS, ActionScript, etc., and some ASPX).
    - Most users seem to be religious about their mobile decision, so oral conversations leave me more confused than enlightened.
    - I do not yet own a smartphone - I will have to buy one once I know which platform to be aiming for.
    - Both of my programs are more for business users (one is only useful for C.P.A.s).
    - I am a single developer, and cannot develop for more than one platform at a time. Getting it right is important.

    Based on what I've found on the web, I would have expected RIM to be a shoo-in, and the general order to be as follows:

    1. RIM BlackBerry - More of them than any other brand. Despite naysayers, they've had double the sales (or perhaps 5X the sales) of any other smartphone, and have continued to grow. And they have business users.
    2. Android - According to Schmidt, they have outsold everyone else except RIM (though I can't find where I read that now), and they are just getting started. According to comScore, they are already at 8% of the market and expected to hit Schmidt's claims within six months.
    3. Nokia - The largest worldwide. If they would just make up their mind between Maemo and Symbian, I would be far less confused.
    4. iPhone - Much more competition from other apps, fewer sales to be had, and an overlord that can delay or cancel my app at any time. Is Cocoa hard to learn?
    5. Windows Mobile - Word is that version 7 will not be backwards compatible, and it is losing market share.
    6. Palm WebOS - Perhaps this should go first, as it is the only one that offers tools to make my life easy as a web application developer. No competition in the marketplace - but not very many users either.

    However, a search on Stack Overflow shows a hugely disproportionate number of iPhone questions versus BlackBerry. Likewise, there are clearly more apps on the iPhone, so it must be getting developer love. What is the one platform I should develop for? Please back up your answer with the logic.


  • C#: BackgroundWorker cloning resources?

    - by Dav
    The problem I've been struggling with this partiular problem for two days now and just run out of ideas. A little... background: we have a WinForms app that needs to access a database, construct a list of related in-memory objects from that data, and then display on a DataGridView. Important point is that we first populate an app-wide cache (List), and then create a mirror of the cache local to the form on which the DGV lives (using List constructor param). Because fetching the data takes a good few seconds (DB sits on a LAN server) to load, we decided to use a BackgroundWorker, and only refresh the DGV once the data is loaded. However, it seems that doing the loading via a BGW results in some memory leak... or an error on my part. When loaded using a blocking method call, the app consumes about 30MB of RAM; with a BGW this jumps to 80MB! While it may not seem as much anyway, our clients are not too happy about it. Relevant code Form private void MyForm_Load(object sender, EventArgs e) { MyRepository.Instance.FinishedEvent += RefreshCache; } private void RefreshCache(object sender, EventArgs e) { dgvProducts.DataSource = new List<MyDataObj>(MyRepository.Products); } Repository private static List<MyDataObj> Products { get; set; } public event EventHandler ProductsLoaded; public void GetProductsSync() { List<MyDataObj> p; using (MyL2SDb db = new MyL2SDb(MyConfig.ConnectionString)) { p = db.PRODUCTS .Select(p => new MyDataObj {Id = p.ID, Description = p.DESCR}) .ToList(); } Products = p; // tell the form to refresh UI if (ProductsLoaded != null) ProductsLoaded(this, null); } public void GetProductsAsync() { using (BackgroundWorker myWorker = new BackgroundWorker()) { myWorker.DoWork += delegate { List<MyDataObj> p; using (MyL2SDb db = new MyL2SDb(MyConfig.ConnectionString)) { p = db.PRODUCTS .Select(p => new MyDataObj {Id = p.ID, Description = p.DESCR}) .ToList(); } Products = p; }; // tell the form to refresh UI when finished myWorker.RunWorkerCompleted += GetProductsCompleted; myWorker.RunWorkerAsync(); } } private void GetProductsCompleted(object sender, RunWorkerCompletedEventArgs e) { if (ProductsLoaded != null) ProductsLoaded(this, null); } End! GetProductsSync or GetProductsAsync are called on the main thread, not shown above. Could it be that the GarbageCollector just gets lost with two threads? Or is it the task manager that shows incorrect values? Will be greateful for any responses, suggestions, criticism.
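
    Before treating the extra 50 MB as a leak, it may help to check whether it is live managed data at all: Task Manager reports working set, which includes heap the GC has grown but not yet returned to the OS, and a second thread plus the extra List created for the grid can inflate that temporarily. A small diagnostic such as the sketch below (a hypothetical helper, not part of the app) called after each loading strategy makes the comparison concrete. As an aside, the using block around the BackgroundWorker disposes it as soon as RunWorkerAsync returns, while DoWork is still running; it is probably harmless here, but it is not doing what it appears to.

    ```csharp
    using System;
    using System.Diagnostics;

    // Hypothetical diagnostic: log how much memory the managed heap actually
    // holds versus what the OS has committed to the process. If the managed
    // figure is similar for the sync and async paths, the difference seen in
    // Task Manager is not a second live copy of the product list.
    static class MemoryReport
    {
        public static void Dump(string label)
        {
            long managedBytes = GC.GetTotalMemory(true);     // true = force a full collection first
            long workingSet = Process.GetCurrentProcess().WorkingSet64;
            Debug.WriteLine(string.Format("{0}: managed = {1:N0} bytes, working set = {2:N0} bytes",
                                          label, managedBytes, workingSet));
        }
    }
    ```

    Calling MemoryReport.Dump("sync") and MemoryReport.Dump("async") right after the ProductsLoaded event fires in each variant shows whether the 50 MB is actually reachable managed data or just process overhead.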


  • $.ajax ColdFusion cfc JSON Hello World

    - by cf_PhillipSenn
    I've simplified this example as much as I can. I have a remote function: <cfcomponent output="false"> <cffunction name="Read" access="remote" output="false"> <cfset var local = {}> <cfquery name="local.qry" datasource="myDatasource"> SELECT PersonID,FirstName,LastName FROM Person </cfquery> <cfreturn local.qry> </cffunction> </cfcomponent> And using the jQuery $.ajax method, I would like to make an unordered list of everyone. <!DOCTYPE HTML> <html> <head> <script src="http://www.google.com/jsapi"></script> <script type="text/javascript"> google.load("jquery", "1"); </script> <script type="text/javascript"> jQuery(function($){ $.ajax({ url: "Remote/Person.cfc?method=Read&ReturnFormat=json", success: function(data){ var str = '<ul>'; // This is where I need help: for (var I=0; I<data.length; I++) { str += '<li>' + I + data[I][1]+ '</li>' } str += '</ul>'; $('body').html(str); }, error: function(ErrorMsg){ console.log("Error"); } }); }); </script> </head> <body> </body> </html> The part where I'm lost is where I'm looping over the data. I prefer to the use jQuery $.ajax method because I understand that $.get and $.post don't have error trapping. I don't know how to handle JSON returned from the cfc.


  • Speed Problem with Wireless Connectivity on Cisco 877w

    - by Carl Crawley
    Having a bit of a weird one with my local LAN setup. I recently installed a Cisco 877W router on my DSL2+ connection and all is working really well.. Upgraded the IOS to 12.4 and my wired clients are streaming connectivity superfast at 1.3mb/s. However, there seems to be an issue with my wireless clients - I can't seem to stream any data across the local wireless connection (LAN) and using the Internet, whilst responsive enough isn't really comparable with the wired connection speed. For example, all devices are connected to an 8 Port Gb switch on FE0 from the Router with a NAS disk and on my wired clients, I can transfer/stream etc absolutely fine - however, transferring a local 700Mb file on my local LAN estimates 7-8 hours to transfer :( The Wireless config is as follows : interface Dot11Radio0 description WIRELESS INTERFACE no ip address ! encryption mode ciphers tkip ! ssid [MySSID] ! speed basic-1.0 basic-2.0 basic-5.5 6.0 9.0 basic-11.0 channel 2462 station-role root rts threshold 2312 world-mode dot11d country GB indoor bridge-group 1 bridge-group 1 subscriber-loop-control bridge-group 1 spanning-disabled bridge-group 1 block-unknown-source no bridge-group 1 source-learning no bridge-group 1 unicast-flooding All devices are connected to the Gb Switch which is connected to FE0 with the following: Hardware is Fast Ethernet, address is 0021.a03e.6519 (bia 0021.a03e.6519) Description: Uplink to Switch MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation ARPA, loopback not set Keepalive set (10 sec) Full-duplex, 100Mb/s ARP type: ARPA, ARP Timeout 04:00:00 Last input never, output never, output hang never Last clearing of "show interface" counters never Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0 Queueing strategy: fifo Output queue: 0/40 (size/max) 5 minute input rate 14000 bits/sec, 19 packets/sec 5 minute output rate 167000 bits/sec, 23 packets/sec 177365 packets input, 52089562 bytes, 0 no buffer Received 919 broadcasts, 0 runts, 0 giants, 0 throttles 260 input errors, 260 CRC, 0 frame, 0 overrun, 0 ignored 0 input packets with dribble condition detected 156673 packets output, 106218222 bytes, 0 underruns 0 output errors, 0 collisions, 2 interface resets 0 babbles, 0 late collision, 0 deferred 0 lost carrier, 0 no carrier 0 output buffer failures, 0 output buffers swapped out Not sure why I'm having problems on the wireless and I've reached the end of my Cisco knowledge... Thanks for any pointers! Carl


  • Calculate # of Rowspans and Colspans based on keys in a Multi-Array

    - by sologhost
    Ok, I have these 2 types of layouts, basically, it can be any layout really. I have decided to use tables for this, since using div tags cause undesirable results in some possible layout types. Here are 2 pics that describe the returned results of row and column: This would return the $layout array like so: $layout[0][0] $layout[0][1] $layout[1][1] In this layout type: $layout[1][0] is NOT SET, or doesn't exist. Row 1, Column 0 doesn't exist in here. So how can we use this to help us determine the rowspans...? Ok, this layout type would now return the following: $layout[0][0] $layout[0][1] $layout[1][0] $layout[2][0] $layout[2][1] $layout[3][1] Again, there are some that are NOT SET in here: $layout[1][1] $layout[3][0] Ok, I have an array called $layout that does a foreach on the row and column, but it doesn't grab the rows and columns that are NOT SET. So I created a for loop (with the correct counts of how many rows there are and how many columns there are). Here's what I got so far: // $not_set = array(); for($x = 0; $x < $cols; $x++) { $f = 0; for($p = 0; $p < $rows; $p++) { // $f = count($layout[$p]); if(!isset($layout[$p][$x])) { $f++; // It could be a rowspan or a Colspan... // We need to figure out which 1 it is! /* $not_set[] = array( 'row' => $p, 'column' => $x, ); */ } // if ($rows - count($layout[$p])) } } Ok, the $layout array has 2 keys. The first 1 is [ row ] and the 2nd key is [ column ]. Now looping through them all and determining whether it's NOT SET, tells me that either a rowspan or a colspan needs to be put into something somewhere. I'm completely lost here. Basically, I would like to have an array returned here, something like this: $spans['row'][ row # ][ column # ] = Number of rowspans for that <td> element. $spans['column'][ row # ][ column # ] = Number of colspans for that <td> element. It's either going to need a colspan or a rowspan, it will definitely never need both for the same element. Am I going about this whole thing the wrong way? Any help at all would be greatly appreciated!! I've been driving myself crazy with this for days! Pllleaase...

    Read the article

  • Return pre-UPDATE column values in PostgreSQL without using triggers, functions or other "magic"

    - by Python Larry
    I have a related question, but this is another part of my puzzle. I would like to get the OLD VALUE of a column from a row that was UPDATEd, WITHOUT using triggers (nor stored procedures, nor any other extra, non-SQL/-query entities). The query I have is like this:

        UPDATE my_table
        SET processing_by = our_id_info -- unique to this instance
        WHERE trans_nbr IN (
            SELECT trans_nbr
            FROM my_table
            GROUP BY trans_nbr
            HAVING COUNT(trans_nbr) > 1
            LIMIT our_limit_to_have_single_process_grab
        )
        RETURNING row_id

    If I could do "FOR UPDATE ON my_table" at the end of the subquery, that'd be divine (and would fix my other question/problem). But that won't work: I can't have it AND a "GROUP BY" (which is necessary for figuring out the COUNT of trans_nbr's). Then I could just take those trans_nbr's and run a query first to get the (soon-to-be-)former processing_by values. I've tried doing it like this:

        UPDATE my_table
        SET processing_by = our_id_info -- unique to this instance
        FROM my_table old_my_table
        JOIN (
            SELECT trans_nbr
            FROM my_table
            GROUP BY trans_nbr
            HAVING COUNT(trans_nbr) > 1
            LIMIT our_limit_to_have_single_process_grab
        ) sub_my_table
            ON old_my_table.trans_nbr = sub_my_table.trans_nbr
        WHERE my_table.trans_nbr = sub_my_table.trans_nbr
          AND my_table.processing_by = old_my_table.processing_by
        RETURNING my_table.row_id, my_table.processing_by, old_my_table.processing_by

    But that can't work; "old_my_table" is not visible outside of the join, so the RETURNING clause is blind to it. I've long since lost count of all the attempts I've made; I have been researching this for literally hours. If I could just find a bullet-proof way to lock the rows in my subquery - and ONLY those rows, and WHEN the subquery happens - all the concurrency issues I'm trying to avoid would disappear.

    UPDATE: [WIPES EGG OFF FACE] Okay, so I had a typo in the non-generic version of the code above that I said "doesn't work"; it does work, thanks to Erwin Brandstetter, below, who stated it would. I re-did it (after a night's sleep, fresh eyes, and a banana for breakfast). Since it took me so long and was so hard to find this sort of solution, perhaps my embarrassment is worth it? At least it is on SO for posterity now. What I now have (and which works) is this:

        UPDATE my_table
        SET processing_by = our_id_info -- unique to this instance
        FROM my_table AS old_my_table
        WHERE trans_nbr IN (
            SELECT trans_nbr
            FROM my_table
            GROUP BY trans_nbr
            HAVING COUNT(*) > 1
            LIMIT our_limit_to_have_single_process_grab
        )
        AND my_table.row_id = old_my_table.row_id
        RETURNING my_table.row_id, my_table.processing_by, old_my_table.processing_by AS old_processing_by

    The COUNT(*) is per a suggestion from Flimzy in a comment on my other (linked above) question. (I was more specific than necessary, in this instance.)
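
    As a small aside, a minimal sketch of consuming those RETURNING rows from Python with psycopg2 might look like the following. The DSN, the qualified my_table.trans_nbr, and the literal parameter values are illustrative assumptions here, not part of the original query.

        import psycopg2

        UPDATE_SQL = """
            UPDATE my_table
            SET    processing_by = %(our_id)s
            FROM   my_table AS old_my_table
            WHERE  my_table.trans_nbr IN (
                       SELECT trans_nbr
                       FROM   my_table
                       GROUP  BY trans_nbr
                       HAVING COUNT(*) > 1
                       LIMIT  %(grab)s
                   )
              AND  my_table.row_id = old_my_table.row_id
            RETURNING my_table.row_id,
                      my_table.processing_by,
                      old_my_table.processing_by AS old_processing_by
        """

        conn = psycopg2.connect("dbname=mydb")      # placeholder DSN
        with conn, conn.cursor() as cur:            # commits on success, rolls back on error
            cur.execute(UPDATE_SQL, {"our_id": 42, "grab": 50})
            for row_id, new_by, old_by in cur.fetchall():
                print(row_id, old_by, "->", new_by)
        conn.close()

    The RETURNING clause makes the UPDATE behave like a SELECT as far as the driver is concerned, so fetchall() hands back the pre-update processing_by alongside the new value in one round trip.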

    Read the article

  • Auto update the content in ASP.NET

    - by Zerotoinfinite
    I have to design a website where users can update their status, just like Facebook, Twitter and other social networking sites. My requirement is to refresh the feed with new user updates. For example, when a new status comes in, Facebook automatically adds it to the top of the feed; Twitter, on the other hand, shows the number of updates that are ready to be loaded. Both approaches are acceptable to me.

    Now I have to decide the best way to achieve this functionality. I am open to using ASP.NET, so I am unsure whether to use a regular Repeater control with a timer and auto refresh, or some other way. (I am worried that if I set the Repeater to auto update while the user is performing some action on a status, their input will be lost.) Or do I need to change my framework from ASP.NET Web Forms to ASP.NET MVC? (I am a little afraid of MVC, as I have very little knowledge of it and I know there is a learning curve to master the Ajax/jQuery side of things.)

    Any suggestion on how I can achieve this in a better and more feasible way?

    EDIT 1: I am not looking for code, but advice on how to achieve this. Supporting URLs would be appreciated.

    EDIT 2: I am open to jQuery that regularly checks the database and fills the section. My concern is that if a user is typing a comment when the feed is automatically refreshed, the text in their textbox shouldn't disappear (just like Facebook, Twitter or LinkedIn).

    EDIT 3: I have seen that on Stack Overflow, when another user has modified a question/answer, I get a notification that it has been modified, and when I click on that notification only that section is reloaded. I am curious how to achieve this functionality, so that when a user is commenting on a status/post and someone else has updated the content in the meantime, the other user's comment is shown.

    EDIT 4: Could someone please recommend an example in ASP.NET MVC 3+ that does this kind of thing (i.e. one input box, and once the user enters text it adds the item to the list with jQuery)?

    Read the article

  • Team matchups for Dota Bot

    - by Dan
    I have a Ghost++ bot that hosts games of DotA (a Warcraft 3 map that is played 5 players versus 5 players), and I'm trying to come up with good formulas to balance the players going into a match based on their records (I have game history for several thousand games). I'm familiar with some of the concepts required to match up players, like confidence based on the sample size of the number of games they have played, and also parameter approximation and degrees of freedom, and thus throwing out any variables that don't contribute enough to the r^2.

    My bot collects quite a few variables for each player from each game.

    The important ones:

        Win / Lose / Game did not finish
        # of player kills
        # of player deaths
        # of kills the player assisted

    The not-so-important ones:

        # of enemy creep kills
        # of creep sneak attacks
        # of neutral creep kills
        # of tower kills
        # of rax kills
        # of courier kills

    Quick explanation: the kills/deaths don't determine who wins, but the gold gained and lost from them is usually enough to tilt the game. Tower/rax kills are what the goal of the game is (once a team loses all their towers/rax, their throne can be attacked; if that is destroyed, they lose), but I don't really count these as important, because it is pretty random who gets the credit for a tower kill, and chances are that if you destroy a tower it is only because some other player is doing well and distracting the other team elsewhere on the map.

    I'm getting a bit confused when trying to deal with the fact that 5 players are on a team, so ultimately each individual isn't that responsible for the team winning or losing. Take a player who is really good at killing and has 40 kills and only 10 deaths, but has won only 1 of their 5 games. Should I give them extra credit for such a high kill score despite losing? (When losing, it is hard to keep a positive kill/death ratio.) Or should I dock them for losing, assuming that despite the nice kill/death ratio they probably play in a really greedy way, only looking out for themselves and not helping the team?

    Ultimately I don't think I have to guess at questions like this, because I have so much data... but I don't really know how to look at the data to answer them. Can anyone help me come up with formulas to balance the teams and predict the outcome? Thanks, Dan
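
    Nothing in the post pins down a single formula, but a minimal sketch of one common approach is: shrink each player's win rate toward 50% when their sample is small, nudge it by their kill/death/assist performance, then brute-force the 252 possible 5v5 splits and keep the one whose team totals are closest. The Python below assumes hypothetical stat keys (wins, losses, kills, deaths, assists); the 10-game prior, the 0.02 weight, and the "KDA of 2.0 is neutral" baseline are made-up starting points to be tuned against the real game history, not fitted values.

        from itertools import combinations

        def rating(player, prior_games=10):
            # Bayesian-style shrinkage: with few games, the rating stays near 0.5;
            # with many games, it converges to the observed win rate.
            games = player["wins"] + player["losses"]
            win_rate = (player["wins"] + 0.5 * prior_games) / (games + prior_games)
            kda = (player["kills"] + player["assists"]) / max(player["deaths"], 1)
            return win_rate + 0.02 * (kda - 2.0)   # treat a KDA around 2.0 as neutral

        def best_split(players):
            # players: dict of name -> stats dict for the ten people in the lobby.
            # Try every 5-player subset and keep the split with the smallest
            # difference between the two teams' rating totals.
            names = list(players)
            total = sum(rating(p) for p in players.values())
            best_team, best_diff = None, float("inf")
            for team_a in combinations(names, 5):
                a_sum = sum(rating(players[n]) for n in team_a)
                diff = abs(total - 2 * a_sum)      # equals |sum(A) - sum(B)|
                if diff < best_diff:
                    best_team, best_diff = team_a, diff
            team_b = [n for n in names if n not in best_team]
            return list(best_team), team_b, best_diff

    From there, predicting the outcome could be as simple as a logistic function of the rating difference between the two teams, with its scale parameter fitted against the several thousand recorded games; whether the individual kill/death term deserves more or less weight than the win rate is exactly the kind of question that fit would answer.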

    Read the article
