Search Results



  • grep 5 seconds of input from the serial port inside a shell-script

    - by pica
    I've got a device that I'm operating next to my PC, and as it runs it spits log lines out of its serial port. I have this wired to my PC, and I can see the log lines fine using either minicom or something like: ttylog -b 115200 -d /dev/ttyS0. I want to write 5 seconds of the device's serial output to a temp file (or assign it to a variable) and then later grep that file for keywords that will let me know how the device is operating. I've already tried redirecting the output to a file while running the command in the background, then sleeping 5 seconds and killing the process, but the log lines never get written to my temp file. Example:

        touch tempFile
        ttylog -b 115200 -d /dev/ttyS0 >> tempFile &
        serialPID=$!
        sleep 5
        #kill ${serialPID} #does not work, gets wrong PID
        killall ttylog
        cat tempFile

    The file gets created but never filled with any data. I can also replace the ttylog line with:

        ttylog -b 115200 -d /dev/ttyS0 | tee -a tempFile &

    In neither case do I ever see any log lines logged to stdout or the log file, unless I have multiple copies of ttylog running by mistake (see the commented-out kill line, d'oh). I have no idea what's going on here. It seems to be a failure of redirection within my script. Am I on the right track? Is there a better way to sample 5 seconds of the serial port?
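
    One way to sidestep ttylog entirely is to read the device node directly for a fixed interval. A minimal sketch, assuming a Linux box with GNU coreutils (for timeout) and that the device needs nothing beyond the baud rate; the keyword READY is a placeholder for whatever the device actually logs:

        # configure the port, then capture raw output for 5 seconds
        stty -F /dev/ttyS0 115200 raw -echo
        timeout 5 cat /dev/ttyS0 > tempFile
        # look for a keyword afterwards
        grep -q "READY" tempFile && echo "device is up"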


  • Most efficient way to check for DBNull and then assign to a variable?

    - by ilitirit
    This question comes up occasionally but I haven't seen a satisfactory answer. A typical pattern is (row is a DataRow):

        if (row["value"] != DBNull.Value)
        {
            someObject.Member = row["value"];
        }

    My first question is which is more efficient (I've flipped the condition):

        row["value"] == DBNull.Value; // Or
        row["value"] is DBNull; // Or
        row["value"].GetType() == typeof(DBNull) // Or... any suggestions?

    This indicates that .GetType() should be faster, but maybe the compiler knows a few tricks I don't? Second question: is it worth caching the value of row["value"], or does the compiler optimize the indexer away anyway? e.g.

        object valueHolder;
        if (DBNull.Value == (valueHolder = row["value"])) {}

    Disclaimers: row["value"] exists; I don't know the column index of the column (hence the column name lookup); and I'm asking specifically about checking for DBNull and then assignment (not about premature optimization etc.).

    Edit: I benchmarked a few scenarios (time in seconds, 10000000 trials):

        row["value"] == DBNull.Value:             00:00:01.5478995
        row["value"] is DBNull:                   00:00:01.6306578
        row["value"].GetType() == typeof(DBNull): 00:00:02.0138757

    Object.ReferenceEquals has the same performance as "==". The most interesting result? If you mismatch the name of the column by case (e.g. "Value" instead of "value"), it takes roughly ten times longer (for a string):

        row["Value"] == DBNull.Value: 00:00:12.2792374

    The moral of the story seems to be that if you can't look up a column by its index, then ensure that the column name you feed to the indexer matches the DataColumn's name exactly. Caching the value also appears to be nearly twice as fast:

        No Caching:   00:00:03.0996622
        With Caching: 00:00:01.5659920

    So the most efficient method seems to be:

        object temp;
        string variable;
        if (DBNull.Value != (temp = row["value"]))
        {
            variable = temp.ToString();
        }

    This was a good learning experience.
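
    The winning pattern generalizes into a small helper. A sketch (the method name and fallback semantics are my own, not from the benchmark):

        // One indexer lookup, cached in a local; DBNull maps to a caller-supplied fallback
        static T GetValueOrDefault<T>(DataRow row, string column, T fallback)
        {
            object temp = row[column];
            return temp == DBNull.Value ? fallback : (T)temp;
        }

        // usage: string variable = GetValueOrDefault(row, "value", string.Empty);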


  • Reformat SQLGeography polygons to JSON

    - by James
    I am building a web service that serves geographic boundary data in JSON format. The geographic data is stored in an SQL Server 2008 R2 database using the geography type in a table. I use the [ColumnName].ToString() method to return the polygon data as text. Example output:

        POLYGON ((-6.1646509904325884 56.435153006374627, ... -6.1606079906751 56.4338050060666))
        MULTIPOLYGON (((-6.1646509904325884 56.435153006374627 0 0, ... -6.1606079906751 56.4338050060666 0 0)))

    Geographic definitions can take the form of either an array of lat/long pairs defining a polygon or, in the case of multiple definitions, an array of polygons (multipolygon). I have the following regex that converts the output to JSON objects contained in multi-dimensional arrays, depending on the output:

        Regex latlngMatch = new Regex(@"(-?[0-9]{1}\.\d*)\s(\d{2}.\d*)(?:\s0\s0,?)?", RegexOptions.Compiled);

        private string ConvertPolysToJson(string polysIn)
        {
            return this.latlngMatch.Replace(polysIn.Remove(0, polysIn.IndexOf("(")) // remove POLYGON or MULTIPOLYGON
                                                   .Replace("(", "[")  // convert to JSON array syntax
                                                   .Replace(")", "]"), // same as above
                                            "{lng:$1,lat:$2},");      // reformat lat/lng pairs to JSON objects
        }

    This is actually working pretty well and converts the DB output to JSON on the fly in response to an operation call. However, I am no regex master, and the calls to String.Replace() also seem inefficient to me. Does anyone have any suggestions/comments about the performance of this?
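
    For what it's worth, the chained Replace calls can be folded into the regex itself by matching the parentheses as alternatives and switching on them in a MatchEvaluator. A sketch, not benchmarked, assuming the same input shape as above (it mirrors the original's trailing-comma behaviour):

        private static readonly Regex Token = new Regex(
            @"(-?\d+\.\d+)\s(\d+\.\d+)(?:\s0\s0)?,?\s*|[()]", RegexOptions.Compiled);

        private string ConvertPolysToJsonSinglePass(string polysIn)
        {
            // single pass: parens become brackets, coordinate pairs become objects
            string body = polysIn.Substring(polysIn.IndexOf("("));
            return Token.Replace(body, m =>
                m.Value == "(" ? "[" :
                m.Value == ")" ? "]" :
                "{lng:" + m.Groups[1].Value + ",lat:" + m.Groups[2].Value + "},");
        }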


  • Navigating to nodes using xpath in flat structure

    - by James Berry
    I have an xml file in a flat structure. We do not control the format of this xml file; we just have to deal with it. I've renamed the fields because they are highly domain-specific and don't really make any difference to the problem.

        <attribute name="Title">Book A</attribute>
        <attribute name="Code">1</attribute>
        <attribute name="Author">
            <value>James Berry</value>
            <value>John Smith</value>
        </attribute>
        <attribute name="Title">Book B</attribute>
        <attribute name="Code">2</attribute>
        <attribute name="Title">Book C</attribute>
        <attribute name="Code">3</attribute>
        <attribute name="Author">
            <value>James Berry</value>
        </attribute>

    Key things to note: the file is not particularly hierarchical. Books are delimited by an occurrence of an attribute element with name='Title', but the name='Author' attribute node is optional. Is there a simple xpath statement I can use to find the authors of book 'n'? It is easy to identify the title of book 'n', but the authors value is optional, and you can't just take the following author because in the case of book 2 this would give the author for book 3. I have written a state machine to parse this as a series of elements, but I can't help thinking there would have been a way to directly get the results that I want.
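
    For the record, one XPath 1.0 approach (a sketch; $n stands for the book number, substituted into the expression, and all attribute elements are assumed to be siblings): take book n's Title, step to the first following Author, and keep it only if no further Title intervenes:

        (//attribute[@name='Title'])[$n]
            /following-sibling::attribute[@name='Author'][1]
            [count(preceding-sibling::attribute[@name='Title']) = $n]
            /value

    For book 2 the count predicate fails (the nearest following Author already has three preceding Titles), so the result is empty, as it should be.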


  • Why aren't google api clients built on top of Apache's Abdera project?

    - by lisak
    Hey, could anybody please explain that to me? As far as I can see, the developers of the Java google api client library are reinventing the wheel. It's like writing a new JDK for a Java project. I'm aware of the fact that the google data protocol is a little specific with regard to Atom publishing, but if one needs to use some of the fancy extensions and features that the Apache Abdera project offers for this protocol, it is better not to use the google api client library and to implement the client from scratch with Abdera... And I'm sure that in a lot of cases features such as Abdera's JCR adapter would come in very handy for google docs, google translator toolkit and others. Now it's great that there is a google api client library to be used for google docs, but what am I going to do with the documents? I believe that in more than half the cases there is also a repository or database on the other side, and in that case Abdera is needed, not the simple google api clients that are only marshalling/unmarshalling the feeds... In fact, there is something to persist in all of the google APIs. It would make sense if google decided to invest the effort into Abdera enhancement... This doesn't... Also, to make the question more specific: how are you developing google api clients that need entry persistence (JCR for instance)? What would be the best way to integrate a google api client library with Apache Abdera?


  • MyMessage<T> throws an exception when calling XmlSerializer

    - by Arthis
    I am very new to nservicebus. I am using version 3.0.1, the latest to date, and I wonder if my case is a normal limitation of NSB that I am not aware of. I have an asp.net MVC application I am trying to set up, and in my global.asax I have the following:

        var configure = Configure.WithWeb()
            .DefaultBuilder()
            .ForMvc()
            .XmlSerializer();

    But I get an error from the XmlSerializer when it deals with one of my objects:

        [Serializable]
        public class MyMessage<T> : IMessage
        {
            public T myobject { get; set; }
        }

    I pass through:

        XmlSerializer()
        instance.Initialize(types);
        this.InitType(type, moduleBuilder);
        this.InitType(info2.PropertyType, moduleBuilder);

    and then, when dealing with T:

        string typeName = GetTypeName(t);

    typeName is null, and the following instruction:

        if (!nameToType.ContainsKey(typeName))

    ends in error: null value not allowed. Is this a limitation of NServiceBus, or am I messing something up?
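
    If the open generic parameter is indeed what the serializer cannot name, the usual workaround is to close the type over a concrete payload. A sketch (the type name is mine, not from the question):

        // One concrete message per payload type avoids the unresolved T
        public class MyStringMessage : IMessage
        {
            public string MyObject { get; set; }
        }

    That trades generic convenience for message types the serializer can always resolve by name.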


  • Why don't file type filters work properly with nsIFilePicker on Mac OSX?

    - by Eric Strom
    I am running a chrome app in firefox (started with -app) with the following code to open a filepicker:

        var nsIFilePicker = Components.interfaces.nsIFilePicker;
        var fp = Components.classes["@mozilla.org/filepicker;1"]
                           .createInstance(nsIFilePicker);
        fp.init(window, "Select Files", nsIFilePicker.modeOpenMultiple);
        fp.appendFilter("video", "*.mov; *.mpg; *.mpeg; *.avi; *.flv; *.m4v; *.mp4");
        fp.appendFilter("all", "*.*");
        var res = fp.show();
        if (res == nsIFilePicker.returnCancel) return;
        var files = fp.files;
        var paths = [];
        while (files.hasMoreElements()) {
            var arg = files.getNext().QueryInterface(
                Components.interfaces.nsILocalFile
            ).path;
            paths.push(arg);
        }

    Everything seems to work fine on Windows, and the file picker itself works on OSX, but the dropdown menu to select between file types only displays on Windows. The first filter (video in this case) is in effect, but the dropdown to select the other type never shows. Is there something extra that is needed to get this working on OSX? I have tried the latest firefox (3.6) and an older one (3.0.13), and neither shows the file type dropdown on OSX.


  • How does browser know when to prompt user to save password?

    - by Eric
    This is related to the question I asked here: http://stackoverflow.com/questions/2382329/how-can-i-get-browser-to-prompt-to-save-password

    This is the problem: I CAN'T get my browser to prompt me to save the password for the site I'm developing. (I'm talking about the bar that appears sometimes when you submit a form on Firefox, that says "Remember the password for yoursite.com? Yes / Not now / Never".) This is super frustrating because this feature of Firefox (and most other modern browsers, which I hope work in a similar fashion) seems to be a mystery. It's like a magic trick the browser does, where it looks at your code, or what you submit, or something, and if it "looks" like a login form with a username (or email address) field and a password field, it offers to save. Except in this case, where it's not offering my users that option after they use my login form, and it's making me nuts. :-) (I checked my Firefox settings: I have NOT told the browser "never" for this site. It should be prompting.) My question: exactly what are the heuristics that Firefox (or any other modern browser) uses to know when it should prompt the user to save? This shouldn't be too difficult to answer, since it's right there in the Mozilla source (I just don't know where to look, or else I'd try to dig it out myself). You'd think there would be a blog post or some similar developer note from the Mozilla developers about this, but I can't find that either. (Note that if your answer has anything to do with cookies, encryption, or anything else about how I'm storing the user's passwords in the database, you've probably misread my question. :-)
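
    By way of illustration, the heuristic keys off the form's fields, not anything server-side: a form submitted with a visible password input (usually preceded by a text or email field) is what ordinarily triggers the offer to save. A minimal sketch of such a form (the action URL and names are placeholders):

        <form action="/login" method="post">
            <input type="text" name="username">
            <input type="password" name="password">
            <button type="submit">Log in</button>
        </form>

    Forms that submit via script and cancel the native submission, or that clear the password field before submitting, commonly defeat the detection.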


  • Convert UCS-2 characters to UTF-8 Using C#

    - by quanticle
    I'm pulling some internationalized text from a MS SQL Server 2005 database. As per the defaults for that DB, the characters are stored as UCS-2. However, I need to output the data in UTF-8 format, as I'm sending it out over the web. Currently, I have the following code to convert:

        SqlString dbString = resultReader.GetSqlString(0);
        byte[] dbBytes = dbString.GetUnicodeBytes();
        byte[] utf8Bytes = System.Text.Encoding.Convert(System.Text.Encoding.Unicode,
                                                        System.Text.Encoding.UTF8, dbBytes);
        System.Text.UTF8Encoding encoder = new System.Text.UTF8Encoding();
        string outputString = encoder.GetString(utf8Bytes);

    However, when I examine the output in the browser, it appears to be garbage, no matter what I set the encoding to. What am I missing?

    EDIT: In response to the answers below, the reason I thought I had to perform a conversion is that I can output literal multibyte strings just fine. For example:

        OutputControl.Text = "????????????????????????????????????????????????????????????????";

    works. Here, OutputControl is an ASP.Net Literal. However,

        OutputControl.Text = outputString; // Output from above snippet

    results in mangled output as described above. My hypothesis was that the database's output was somehow getting mangled by ASP.Net. If that's not the case, then what are some other possibilities?
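
    For context, a sketch of the usual no-conversion approach: .NET strings are UTF-16 internally, so the UCS-2 column data needs no byte-level massaging; the UTF-8 step belongs to the HTTP response, not the string (assumes code running inside a page or handler):

        // read the string as-is; let ASP.NET encode the response as UTF-8
        string outputString = resultReader.GetString(0);
        Response.ContentType = "text/html";
        Response.ContentEncoding = System.Text.Encoding.UTF8;
        OutputControl.Text = outputString;

    Note also that the Convert/GetString round-trip in the snippet above actually reproduces the original string unchanged, which is a hint that the mangling happens elsewhere (response encoding or page charset).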


  • T-SQL Table Joins - Unique Situation

    - by Dimitri
    Hello everyone. This is my first time encountering a case like this and I don't quite know how to handle it. Situation: I have one table, tblSettingDefinition, with fields ID, GroupID, Name, TypeID, DefaultValue. Then I have tblSettingTypes with fields TypeID, Name. And I have a final table, tblUserSettings, with fields SettingID, SettingDefinitionID, UserID, Value. The whole point of this is to have customizable settings. A setting can be defined for a group or as a global setting (if GroupID is NULL). It will have a default value, but if the user modifies the setting, an entry is added to tblUserSettings that stores the new value. I want a query that grabs user settings by first looking at tblUserSettings and, if it has records for the given user, grabs them; if not, retrieves the default settings. But the trick is that whether or not the user has settings, I need the fields from the other two tables retrieved, to know the setting's Type, Name etc. (which are stored in those other tables). I'm writing a query something like this:

        SELECT *
        FROM tblSettingDefinition SD
        LEFT JOIN tblUserSettings US ON SD.SettingID = US.SettingDefinitionID
        JOIN tblSettingTypes ST ON SD.TypeID = ST.ID
        WHERE US.UserID = @UserID
           OR ((SD.GroupID IS NULL)
            OR (SD.GroupID = (SELECT GroupID FROM tblUser WHERE ID = @UserID)))

    but it retrieves settings for all users from tblUserSettings instead of just the ones that match the current @UserID. And if @UserID has no records in tblUserSettings, all user settings are still retrieved instead of the defaults from tblSettingDefinition. Hope I made myself clear. Any help would be highly appreciated. Thank you.
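
    A sketch of the usual fix (column names assumed from the description above): conditions that restrict the outer table belong in the ON clause, not the WHERE clause, so that defaults with no matching user row survive the LEFT JOIN:

        SELECT SD.*, ST.Name AS TypeName,
               COALESCE(US.Value, SD.DefaultValue) AS EffectiveValue
        FROM tblSettingDefinition SD
        JOIN tblSettingTypes ST ON SD.TypeID = ST.TypeID
        LEFT JOIN tblUserSettings US
               ON US.SettingDefinitionID = SD.ID
              AND US.UserID = @UserID
        WHERE SD.GroupID IS NULL
           OR SD.GroupID = (SELECT GroupID FROM tblUser WHERE ID = @UserID);

    Putting US.UserID = @UserID in the WHERE clause instead silently turns the LEFT JOIN back into an INNER JOIN for every row where US is NULL.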


  • Splitting MS Access Database - Front End Part Location

    - by kristof
    One of the best practices specified by Microsoft for Access development is splitting an Access application into two parts: a Front End that holds all the objects except tables, and a Back End that holds the tables. The msdn page links to the article Splitting Microsoft Access Databases to Improve Performance and Simplify Maintainability, which describes the process in detail. It is recommended that in a multi-user environment the Back End be stored on the server/shared folder while the Front End is distributed to each user. That implies that each time any changes are made to the front end, they need to be deployed to every user machine. My question is: assuming that the users themselves do not have rights to modify the Front End part of the application, what would be the drawbacks/dangers of leaving it on the server as well, next to the Back End copy? I can see the performance issues here, but are there any dangers, like possible corruption etc.?

    Thank you

    EDIT: Just to clarify, the scenario in the question assumes one Front End stored on the server and shared by users. I understand that the recommendation is to have the FE deployed to each user machine, but my question is more about the dangers if that is not done. E.g. when you are given an existing solution that uses the approach of both FE and BE on the server. Assuming the performance is acceptable and the customer is reluctant to change the approach, would you still push the change? And why exactly? For example, the danger of possible data corruption would definitely be a strong enough argument, but is that the case? This is a follow-up to my previous question, From SQL Server to MS Access 2007.


  • Troubleshoot JavaScript Function in IE

    - by CreativeNotice
    So this function works fine in Gecko and WebKit browsers, but not IE7. I've busted my brain trying to spot the issue. Does anything stick out to you? The basic premise is that you pass in a data object (in this case a response from jQuery's $.getJSON), we check for a response code, set the notification's class, append a layer and show it to the user, then reverse the process after a time limit.

        function userNotice(data){
            // change class based on error code returned
            var myClass = '';
            if(data.code == 200){ myClass='success'; }
            else if(data.code == 400){ myClass='error'; }
            else{ myClass='notice'; }

            // create message html, add to DOM, FadeIn
            var myNotice = '<div id="notice" class="ajaxMsg '+myClass+'">'+data.msg+'</div>';
            $("body").append(myNotice);
            $("#notice").fadeIn('fast');

            // fadeout and remove from DOM after delay
            var t = setTimeout(function(){
                $("#notice").fadeOut('slow',function(){
                    $(this).remove();
                });
            },5000);
        }


  • Thread Message Loop Hangs in Delphi

    - by erikjw
    Hello all. I have a simple Delphi program that I'm working on, in which I am attempting to use threading to separate the functionality of the program from its GUI, and to keep the GUI responsive during more lengthy tasks, etc. Basically, I have a 'controller' TThread and a 'view' TForm. The view knows the controller's handle, which it uses to send the controller messages via PostThreadMessage. I have had no problem in the past using this sort of model for forms which are not the main form, but for some reason, when I attempt to use this model for the main form, the message loop of the thread just quits. Here is the code for the thread's message loop:

        procedure TController.Execute;
        var
          Msg: TMsg;
        begin
          while not Terminated do
          begin
            if (Integer(GetMessage(Msg, hwnd(0), 0, 0)) = -1) then
            begin
              Synchronize(Terminate);
            end;
            TranslateMessage(Msg);
            DispatchMessage(Msg);
            case Msg.message of
              // ...call different methods based on message
            end;

    To set up the controller, I do this:

        Controller := TController.Create(true); // Create suspended
        Controller.FreeOnTerminate := True;
        Controller.Resume;

    For processing the main form's messages, I have tried using both Application.Run and the following loop (immediately after Controller.Resume):

        while not Application.Terminated do
        begin
          Application.ProcessMessages;
        end;

    I've run stuck here -- any help would be greatly appreciated.
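
    For comparison, a sketch of a loop variant that can notice Terminated even when no message arrives (GetMessage blocks indefinitely, so a Terminate request that posts no message never wakes it):

        while not Terminated do
        begin
          if PeekMessage(Msg, 0, 0, 0, PM_REMOVE) then
          begin
            TranslateMessage(Msg);
            DispatchMessage(Msg);
          end
          else
            Sleep(10); // crude; MsgWaitForMultipleObjects would be cleaner
        end;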


  • HTTP crawler in Erlang

    - by ctp
    I'm coding a simple HTTP crawler, but I have an issue running the code at the bottom. I'm requesting 50 URLs and get the content of 20+ back. I've generated a few files, each 150kB in size, to test the crawler. So I think the 20+ responses are limited by the bandwidth? BUT: how do I tell the Erlang snippet not to quit until the last file is fetched? The test data server is online, so plz try the code out -- any hints are welcome :)

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/1]).

        start() ->
            ibrowse:start(),
            proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
            ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
            lists:seq(1, 50).

        send_reqs() ->
            spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
            lists:foreach(fun do_spawn/1, Ids).

        do_spawn(Id) ->
            proc_lib:spawn_link(?MODULE, do_send_req, [Id]).

        do_send_req(Id) ->
            io:format("Requesting ID ~p ... ~n", [Id]),
            Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
            case Result of
                {ok, Status, _H, B} ->
                    io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n",
                              [Id, Status, length(B)]);
                Err ->
                    io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err])
            end.

    That's the output: Requesting ID 1 ... Requesting ID 2 ... Requesting ID 3 ... Requesting ID 4 ... Requesting ID 5 ... Requesting ID 6 ... Requesting ID 7 ... Requesting ID 8 ... Requesting ID 9 ... Requesting ID 10 ... Requesting ID 11 ... Requesting ID 12 ... Requesting ID 13 ... Requesting ID 14 ... Requesting ID 15 ... Requesting ID 16 ... Requesting ID 17 ... Requesting ID 18 ... Requesting ID 19 ... Requesting ID 20 ... Requesting ID 21 ... Requesting ID 22 ... Requesting ID 23 ... Requesting ID 24 ... Requesting ID 25 ... Requesting ID 26 ... Requesting ID 27 ... Requesting ID 28 ... Requesting ID 29 ... Requesting ID 30 ... Requesting ID 31 ... Requesting ID 32 ... Requesting ID 33 ... Requesting ID 34 ... Requesting ID 35 ... Requesting ID 36 ... Requesting ID 37 ... Requesting ID 38 ... Requesting ID 39 ... Requesting ID 40 ... Requesting ID 41 ... Requesting ID 42 ... Requesting ID 43 ... Requesting ID 44 ... Requesting ID 45 ... Requesting ID 46 ... Requesting ID 47 ... Requesting ID 48 ... Requesting ID 49 ... Requesting ID 50 ...
        OK -- ID: 49 -- Status: "200" -- Content length: 150000
        OK -- ID: 47 -- Status: "200" -- Content length: 150000
        OK -- ID: 50 -- Status: "200" -- Content length: 150000
        OK -- ID: 17 -- Status: "200" -- Content length: 150000
        OK -- ID: 48 -- Status: "200" -- Content length: 150000
        OK -- ID: 45 -- Status: "200" -- Content length: 150000
        OK -- ID: 46 -- Status: "200" -- Content length: 150000
        OK -- ID: 10 -- Status: "200" -- Content length: 150000
        OK -- ID: 09 -- Status: "200" -- Content length: 150000
        OK -- ID: 19 -- Status: "200" -- Content length: 150000
        OK -- ID: 13 -- Status: "200" -- Content length: 150000
        OK -- ID: 21 -- Status: "200" -- Content length: 150000
        OK -- ID: 16 -- Status: "200" -- Content length: 150000
        OK -- ID: 27 -- Status: "200" -- Content length: 150000
        OK -- ID: 03 -- Status: "200" -- Content length: 150000
        OK -- ID: 23 -- Status: "200" -- Content length: 150000
        OK -- ID: 29 -- Status: "200" -- Content length: 150000
        OK -- ID: 14 -- Status: "200" -- Content length: 150000
        OK -- ID: 18 -- Status: "200" -- Content length: 150000
        OK -- ID: 01 -- Status: "200" -- Content length: 150000
        OK -- ID: 30 -- Status: "200" -- Content length: 150000
        OK -- ID: 40 -- Status: "200" -- Content length: 150000
        OK -- ID: 05 -- Status: "200" -- Content length: 150000

    Update: thanks stemm for the hint with the wait_workers. I've combined your code and mine, but same behaviour :(

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/2]).

        start() ->
            ibrowse:start(),
            proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
            ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
            lists:seq(1, 50).

        send_reqs() ->
            spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
            %% collect reference to each worker
            Refs = [ do_spawn(Id) || Id <- Ids ],
            %% wait for response from each worker
            wait_workers(Refs).

        wait_workers(Refs) ->
            lists:foreach(fun receive_by_ref/1, Refs).

        receive_by_ref(Ref) ->
            %% receive message only from worker with specific reference
            receive
                {Ref, done} -> done
            end.

        do_spawn(Id) ->
            Ref = make_ref(),
            proc_lib:spawn_link(?MODULE, do_send_req, [Id, {self(), Ref}]),
            Ref.

        do_send_req(Id, {Pid, Ref}) ->
            io:format("Requesting ID ~p ... ~n", [Id]),
            Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
            case Result of
                {ok, Status, _H, B} ->
                    io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n",
                              [Id, Status, length(B)]),
                    %% send message that work is done
                    Pid ! {Ref, done};
                Err ->
                    io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err]),
                    %% repeat request if there was an error while fetching a page,
                    do_send_req(Id, {Pid, Ref})
                    %% or - if you don't want to repeat the request, put there:
                    %% Pid ! {Ref, done}
            end.
    Running the crawler works fine for a handful of files, but then it doesn't even fetch entire files (each file is 150000 bytes): the crawler fetches some files only partially, see the following web server log :(

        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /10 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /1 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /3 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /8 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /39 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /7 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /6 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /2 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /5 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /50 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /9 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /44 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /38 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /47 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /49 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /43 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /37 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /46 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /48 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /36 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /42 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /41 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /45 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /17 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /35 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /16 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /15 HTTP/1.1" 200 17020 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /21 HTTP/1.1" 200 120360 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /40 HTTP/1.1" 200 117600 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /34 HTTP/1.1" 200 60660 "-" "-"

    Any hints are welcome. I have no clue what's going wrong there :(
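
    One thing worth checking: ibrowse caps connections and pipelining per host, so a burst of 50 concurrent requests can queue or time out behind those limits. A sketch of raising them per request (the option names come from ibrowse's options list; the values are guesses, not tuned):

        Result = (catch ibrowse:send_req(to_url(Id), [], get, [],
                                         [{max_sessions, 50},
                                          {max_pipeline_size, 50}],
                                         10000)),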


  • How to nest shapes in a DSL Tools diagram?

    - by Paul Lalonde
    I have a DSL containing two main domain classes: Area and Entity. Areas are represented visually by a GeometryShape, whereas entities are represented by a CompartmentShape. Entities can be embedded in an Area, or not (in this case they are embedded in the root object, which is a kind of Area). There may be relationships between entities, including between entities in different areas. Areas cannot be embedded inside of other areas, nor entities embedded inside of other entities. My problem is that I cannot get the behavior I want from the diagram. The embedding of entities in areas works perfectly well at the model level, but the visual representation behaves erratically. For example, if I drag an entity that was created in an area outside of that area, it no longer responds to mouse clicks (I have code that performs the re-parenting, but somehow the diagram side of things is broken). I have searched high and low for samples of how to do this, and come up empty. Every example I've found on the web simulates nesting via "references" relationships, whereas I am performing true embedding of the domain classes (and therefore of their associated shape classes). Does anyone have an example of how to do this? While I'm venting, am I the only one who thinks the diagram/shape classes are massively under-documented?


  • Making two Windows using CreateWindowEx() - C

    - by Jamie Keeling
    Hello, I have a windows form that has a simple menu and performs a simple operation. I want to be able to create another windows form, with all the functionality of a menu bar, message pump etc., as a separate thread, so I can then share the results of the operation with the second window. I.e.:

        1) Form A opens; Form B opens as a separate thread
        2) Form A performs the operation
        3) Form A passes the results via memory to Form B
        4) Form B displays the results

    I'm confused as to how to go about it. The main app runs fine, but I'm not sure how to add a second window if the first one already exists. I think that using CreateWindow will allow me to make another window, but again I'm not sure how to access the message pump so I can respond to certain events like WM_CREATE on the second window. I hope this makes sense. Thanks!

    Edit: I've attempted to make a second window, and although this compiles, no window shows at all when it runs:

        //////////////////////
        // WINDOWS FUNCTION //
        //////////////////////
        LRESULT CALLBACK WindowFunc(HWND hMainWindow, UINT message, WPARAM wParam, LPARAM lParam)
        {
            // Fields
            WCHAR buffer[256];
            struct DiceData storage;
            HWND hwnd;

            // Act on current message
            switch (message)
            {
            case WM_CREATE:
                AddMenus(hMainWindow);
                // note: the child window is created with zero width and height here
                hwnd = CreateWindowEx(
                    0, "ChildWClass", (LPCTSTR) NULL,
                    WS_CHILD | WS_BORDER | WS_VISIBLE,
                    0, 0, 0, 0,
                    hMainWindow, NULL, NULL, NULL);
                ShowWindow(hwnd, SW_SHOW);
                break;

    Any suggestions as to why this happens?
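
    As a point of comparison, a sketch of creating a second top-level window instead of a zero-sized child (the class name here is hypothetical; it must be a class you registered, and it can reuse the same WindowFunc):

        HWND hSecond = CreateWindowEx(
            0, "MainWClass", "Results",
            WS_OVERLAPPEDWINDOW | WS_VISIBLE,
            CW_USEDEFAULT, CW_USEDEFAULT, 400, 300,
            NULL, NULL, GetModuleHandle(NULL), NULL);

    One GetMessage loop dispatches to every window created by its thread, so a second thread is not required just to have a second window.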


  • Using custom coordinates with QGraphicsScene

    - by Rob
    I am experimenting with a WYSIWYG editor that allows a user to draw shapes on a page, and the Qt graphics scene support seems perfect for this. However, instead of working in pixels I want all my QGraphicsItem objects to work in tenths of a millimetre, but I don't know how to achieve this. For example:

        // Create a scene that is the size of an A4 page (2100 = 21cm, 2970 = 29.7cm)
        QGraphicsScene* scene = new QGraphicsScene(0, 0, 2100, 2970);

        // Add a rectangle located 1cm across, 1cm down, 5cm wide and 2cm high
        QGraphicsItem* item = scene->addRect(100, 100, 500, 200);
        ...
        QGraphicsView* view = new QGraphicsView(scene);
        setCentralWidget(view);

    Now, when I display the scene above I want the shapes to appear at the correct size for the screen DPI. Is this simply a case of using QGraphicsView::scale, or do I have to do something more complicated? Note that if I were using a custom QWidget instead, I would use QPainter::setWindow and QPainter::setViewport to create a custom mapping mode, but I can't see how to do this using the graphics scene support.
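
    To make the mapping concrete, a sketch of what the scale call might look like, assuming one scene unit = 0.1 mm and taking the DPI from the view's own paint device (96 is typical for desktop screens):

        // pixels per scene unit = dots-per-inch / scene-units-per-inch
        // 1 inch = 25.4 mm = 254 scene units at 0.1 mm per unit
        const qreal unitsPerInch = 254.0;
        view->scale(view->logicalDpiX() / unitsPerInch,
                    view->logicalDpiY() / unitsPerInch);

    With that transform in place the A4 scene renders at roughly physical size, so QGraphicsView::scale alone appears sufficient; the view's transform plays the role that setWindow/setViewport play for a custom QWidget.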


  • Progressive MP4 video issues in Flash- Video stops rendering

    - by Conor
    I'm currently working on a flash project that has an intro video that plays before heading into the main app. This video is an H.264 .mp4, 1550x540, and around 10MB. The problem that's currently driving me insane is that when I test it, occasionally the video will begin playing and then suddenly stop rendering the video frames, leaving the audio playing in the background with nothing on screen. Once the file has played through fully (based on listening to the audio), my playback-complete event fires like it should, but I can't find any info on people having similar issues. Attached is a trace of the .mp4 metadata in case that helps:

        videoframerate : 24
        audiochannels : 2
        audiocodecid : mp4a
        audiosamplerate : 48000
        trackinfo:
            0:  length : 608000
                timescale : 24000
                language : eng
                sampledescription:
                    0: sampletype : avc1
            1:  length : 1218560
                timescale : 48000
                language : eng
                sampledescription:
                    0: sampletype : mp4a
        duration : 25.386666666666667
        width : 1540
        videocodecid : avc1
        seekpoints:
            0:  time : 0      offset : 13964
            1:  time : 0.333  offset : 16893
            2:  time : 0.667  offset : 34212
            ...
            73: time : 24.333 offset : 9770329
            74: time : 24.667 offset : 9845709
            75: time : 25     offset : 9895215
        moovposition : 32
        height : 540
        avcprofile : 77
        avclevel : 51
        aacaot : 2

    This has been driving me absolutely insane... any help would be much appreciated!


  • Why do IOExceptions occur in ReadableByteChannel.read()

    - by Steffen Heil
    Hi. The specification of ReadableByteChannel.read() gives -1 as the result value for end-of-stream. Moreover, it specifies ClosedByInterruptException as a possible result if the thread is interrupted. Now I thought that would be all -- and it is, most of the time. However, now and then I get the following:

        java.io.IOException: Eine vorhandene Verbindung wurde vom Remotehost geschlossen
            (An existing connection was closed by the remote host)
            at sun.nio.ch.SocketDispatcher.read0(Native Method)
            at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:25)
            at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:233)
            at sun.nio.ch.IOUtil.read(IOUtil.java:206)
            at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
            at ...

    I do not understand why I don't get -1 in this case. Also, this is not a clean exception, as I cannot catch it without catching every possible IOException. So here are my questions: Why is this exception thrown in the first place? Is it safe to assume that ANY exception thrown by read is about the socket being closed? Is all this the same for write()? And by the way: if I call SocketChannel.close(), do I have to call SocketChannel.socket().close() as well, or is that implied by the former? Thanks, Steffen
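
    Background on the read() question, with a sketch: -1 signals an orderly shutdown (the peer sent a FIN), while an abortive close (the peer sent a RST, which Windows reports with exactly the message above) surfaces as a generic IOException. Code that wants to treat both the same typically looks like:

        int n;
        try {
            n = channel.read(buffer);      // -1 on orderly close (FIN)
        } catch (IOException e) {
            n = -1;                        // abortive close (RST) treated as EOF
        }
        if (n == -1) {
            channel.close();               // also closes the wrapped socket
        }

    The last comment reflects that SocketChannel.close() closes the associated socket as well, so a separate socket().close() is not required.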


  • rails HABTM versus view (formtastic)

    - by VP
    I have two models. The NetworkObject model tries to describe "hosts". I want to have a rule with a source and a destination, using objects from the same class for both, since it doesn't make sense to create two different classes.

        class NetworkObject < ActiveRecord::Base
          attr_accessible :ip, :netmask, :name
          has_many :statements
          has_many :rules, :through => :statements
        end

        class Rule < ActiveRecord::Base
          attr_accessible :active, :destination_ids, :source_ids
          has_many :statements
          has_many :sources, :through => :statements, :source => :network_object
          has_many :destinations, :through => :statements, :source => :network_object
        end

    To build the HABTM I chose the join-model approach, so I created a model named Statement:

        class Statement < ActiveRecord::Base
          attr_accessible :source_id, :rule_id, :destination_id
          belongs_to :network_object, :foreign_key => :source_id
          belongs_to :network_object, :foreign_key => :destination_id
          belongs_to :rule
        end

    The problem is: is it right to add two belongs_to to the same class using different foreign keys? I tried all combinations, like:

        belongs_to :sources, :class_name => :network_object, :foreign_key => :source_id

    but with no success. Is there anything I am doing wrong?
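
    For reference, a sketch of the conventional pattern (association names adjusted; note that :class_name takes the class name as a string, not a table-style symbol):

        class Statement < ActiveRecord::Base
          belongs_to :source,      :class_name => 'NetworkObject', :foreign_key => :source_id
          belongs_to :destination, :class_name => 'NetworkObject', :foreign_key => :destination_id
          belongs_to :rule
        end

        class Rule < ActiveRecord::Base
          has_many :statements
          has_many :sources,      :through => :statements, :source => :source
          has_many :destinations, :through => :statements, :source => :destination
        end

    Two belongs_to on the same class are fine; what breaks the original is giving both the same name (:network_object), since the second definition overwrites the first.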


  • Why use INCLUDE in a SQL index

    - by StarLite
    I recently encountered an index, in a database I maintain, of the form:

        CREATE INDEX [IX_Foo] ON [Foo]
        (
            Id ASC
        )
        INCLUDE ( SubId )

    In this particular case, the performance problem I was encountering (a slow SELECT filtering on both Id and SubId) could be fixed by simply moving the SubId column into the index proper rather than leaving it as an included column. This got me thinking, however, that I don't understand the reasoning behind included columns at all, when generally they could simply be part of the index itself. Even if I don't particularly care about the items being in the index itself, is there any downside to having a column in the index rather than simply included? After some research, I am aware that there are a number of restrictions on what can go into an indexed column (the maximum width of the index, and some column types that can't be indexed, like 'image'). In those cases I can see that you would be forced to include the column in the index page data. The only thing I can think of is that if there are updates on SubId, the row will not need to be relocated if the column is included (though the value in the index would still need to be changed). Is there something else that I'm missing? I'm considering going through the other indexes in the database and shifting included columns into the index proper where possible. Would this be a mistake? I'm primarily interested in MS SQL Server, but information on other DB engines is welcome too.
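
    For concreteness, the two shapes being compared side by side (the second statement is the hypothetical alternative):

        -- SubId stored only in leaf pages: not sorted, and not counted
        -- against SQL Server's 900-byte / 16-key-column limits
        CREATE INDEX [IX_Foo] ON [Foo] (Id ASC) INCLUDE (SubId);

        -- SubId part of the B-tree key: sorted and usable for seeks and
        -- ordering, but it widens every non-leaf level of the index
        CREATE INDEX [IX_Foo2] ON [Foo] (Id ASC, SubId ASC);

    Both cover the slow SELECT; the trade-off is key width at the non-leaf levels versus the key-size restrictions already mentioned.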


  • Multithreaded IOCP Client Issue

    - by Carl
    I am writing a multithreaded client that uses an IO Completion Port. I create and connect the socket, which has the WSA_FLAG_OVERLAPPED attribute set:

        if ((m_socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)) == INVALID_SOCKET)
        {
            throw std::exception("Failed to create socket.");
        }

        if (WSAConnectByName(m_socket, L"server.com", L"80", &localAddressLength,
                             reinterpret_cast<sockaddr*>(&localAddress),
                             &remoteAddressLength, &remoteAddress, NULL, NULL) == FALSE)
        {
            throw std::exception("Failed to connect.");
        }

    I associate the IO Completion Port with the socket:

        if ((m_hIOCP = CreateIoCompletionPort(reinterpret_cast<HANDLE>(m_socket),
                                              m_hIOCP, NULL, 8)) == NULL)
        {
            throw std::exception("Failed to create IOCP object.");
        }

    All appears to go well until I try to send some data over the socket:

        SocketData* socketData = new SocketData;
        socketData->hEvent = 0;

        DWORD bytesSent = 0;
        if (WSASend(m_socket, socketData->SetBuffer(socketData->GenerateLoginRequestHeader()),
                    1, &bytesSent, NULL, reinterpret_cast<OVERLAPPED*>(socketData),
                    NULL) == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING)
        {
            throw std::exception("Failed to send data.");
        }

    Instead of returning SOCKET_ERROR with the last error set to WSA_IO_PENDING, WSASend returns immediately. I need the IO to pend and its completion to be handled in my thread function, which is also my worker thread:

        unsigned int __stdcall MyClass::WorkerThread(void* lpThis)
        {
        }

    I've done this before, but I don't know what is going wrong in this case. I'd greatly appreciate any help in fixing this problem.
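
    One behavioural detail worth noting with a sketch: on a socket associated with an IOCP, a WSASend that completes inline (returns 0 immediately) still queues a completion packet to the port; only SOCKET_ERROR without WSA_IO_PENDING is a real failure. So code along these lines still gets its completion in the worker thread either way (wsaBuf stands in for the WSABUF returned by SetBuffer):

        int rc = WSASend(m_socket, &wsaBuf, 1, NULL, 0,
                         reinterpret_cast<OVERLAPPED*>(socketData), NULL);
        if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING)
        {
            // genuine failure; otherwise the worker thread will receive a
            // completion packet whether the call pended or finished inline
            throw std::exception("Failed to send data.");
        }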


  • MFC Combo-Box Control is not showing the full list of items when I click the drop-down menu...

    - by shan23
    I'm coding an app in MSVS 2008 which has a ComboBox control that I initialize through the code as below:

        static char* OptionString[4] = {"Opt1", "Opt2", "Opt3", "Opt4"};

        BOOL CMyAppDlg::OnInitDialog()
        {
            CDialog::OnInitDialog();

            // Set the icon for this dialog. The framework does this automatically
            // when the application's main window is not a dialog
            SetIcon(m_hIcon, TRUE);  // Set big icon
            SetIcon(m_hIcon, FALSE); // Set small icon

            // TODO: Add extra initialization here
            m_Option.AddString(OptionString[0]);
            m_Option.AddString(OptionString[1]);
            m_Option.AddString(OptionString[2]);
            m_Option.AddString(OptionString[3]);
            m_Option.SetCurSel(0);

            return TRUE; // return TRUE unless you set the focus to a control
        }

    Now, when I build the app and click the down-arrow, the drop-down box shows the first option ONLY (since I've selected that through my code). But if I press the down-arrow key on the keyboard, it cycles through the options in the order I've inserted them, yet it never shows more than one option in the box. So in case a user wants to select option 3, he has to cycle through options 1 and 2!! Though once I select any option using the keyboard, the appropriate event handlers are fired, I'm miffed by this behaviour, as is understandable. I'm listing the properties of the combo-box control as well (only the properties that are true; the rest are set to false):

        Type - Dropdown
        Vertical Scrollbar
        Visible
        Tabstop

    This has bugged me for weeks now. Can anyone please enlighten me?


  • auto m3u creation

    - by newbie69
    Hi, I am looking for a solution that automatically creates a .m3u playlist for each music folder on my sdcard, so that the music player can play music by folders. I had written a simple VB.Net app in the past that does exactly this, but apparently it has to be run from Windows. Since I have no Java nor Android development experience, I found it quite hard to try to write a similar app that can be run directly from the phone. In a few words, the app does the following:

        1) Searches the SD card and lists all folders that contain 2 or more .mp3 files (just for user verification)
        2) Creates, in every folder listed above, a .m3u file that simply lists, line by line, all the .mp3 files that exist in that folder

    Is there such an app, or could someone spare some time and give me some rough instructions on how to create it in an Eclipse 3.5.2 environment? (Device used: Motorola Droid/Milestone, Android 2.1.) I don't care about any graphics or complex UI, just a script to execute the above procedure; that would give every playlist-supporting music player on Android the precious ability to play music by folders. I know it is too much to ask, but just in case! Thanx in advance.
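
    The core of step 2 is small in plain Java (an .m3u is just a text file of track names). A sketch, file handling only, with no Android UI; walking the sdcard would be a recursive directory scan calling this per folder:

        import java.io.*;

        public class PlaylistWriter {
            // Write <folder name>.m3u listing the folder's .mp3 files, one per line
            public static void writePlaylist(File dir) throws IOException {
                File[] mp3s = dir.listFiles(new FilenameFilter() {
                    public boolean accept(File d, String name) {
                        return name.toLowerCase().endsWith(".mp3");
                    }
                });
                if (mp3s == null || mp3s.length < 2) return; // rule: 2+ tracks
                PrintWriter out = new PrintWriter(
                        new FileWriter(new File(dir, dir.getName() + ".m3u")));
                try {
                    for (File f : mp3s) out.println(f.getName()); // relative paths suffice
                } finally {
                    out.close();
                }
            }
        }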


  • How do I locate a particular word in a text file using .NET

    - by cmrhema
    I am sending mails (in asp.net, c#) using a template in a text file (.txt) like below:

        User Name : <User Name>
        Address : <Address>.

    I used to replace the words within the angle brackets in the text file using the code below:

        StreamReader sr;
        sr = File.OpenText(HttpContext.Current.Server.MapPath(txt));
        copy = sr.ReadToEnd();
        sr.Close(); // close the reader

        copy = copy.Replace(word.ToUpper(), "#" + word.ToUpper()); // remove the word specified UC

        // save new copy into existing text file
        FileInfo newText = new FileInfo(HttpContext.Current.Server.MapPath(txt));
        StreamWriter newCopy = newText.CreateText();
        newCopy.WriteLine(copy);
        newCopy.Write(newCopy.NewLine);
        newCopy.Close();

    Now I have a new problem: the user will be adding new words within angle brackets, say for example <Salary>. In that case I have to read through and find the word <Salary>. In other words, I have to find all the words that are located within angle brackets. How do I do that?
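
    A regular expression handles placeholders whose names aren't known in advance. A minimal sketch (the pattern assumes placeholder names never contain angle brackets; GetValueFor is a hypothetical lookup supplied by the caller):

        using System.Text.RegularExpressions;

        foreach (Match m in Regex.Matches(copy, @"<([^<>]+)>"))
        {
            string placeholder = m.Groups[1].Value; // e.g. "Salary"
            // look up the value for this placeholder and substitute it, e.g.:
            // copy = copy.Replace(m.Value, GetValueFor(placeholder));
        }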

