Search Results

Search found 24117 results on 965 pages for 'write through'.


  • WCF Windows Service Monitor and process emails

    - by acadia
    Hello, I need your suggestions for solving this issue. Here is the requirement. We have a Microsoft Exchange server and a service email account, [email protected]. We have scanners all over the company; when a user scans a document, an email is sent to [email protected] with the scan as an attachment. Now I need to write a Windows service that monitors that email account and, whenever an email is received, reads the attachment and stores it in the database. My question is: is it possible to do something of this sort? Any suggestions greatly appreciated. Thanks
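
    Monitoring a mailbox from a Windows service is certainly possible. One common approach (an assumption on my part, not something stated in the question) is to poll the mailbox on a timer using the Exchange Web Services (EWS) Managed API, pull down any attachments and hand them to your data-access code. A minimal sketch, where the server URL, credentials and the SaveToDatabase helper are placeholders:

        // Sketch only: poll the service mailbox with the EWS Managed API.
        // The URL, credentials and SaveToDatabase() are placeholders for your environment.
        using System;
        using Microsoft.Exchange.WebServices.Data;

        static class MailboxPoller
        {
            public static void PollOnce()
            {
                var service = new ExchangeService(ExchangeVersion.Exchange2010_SP2)
                {
                    Credentials = new WebCredentials("serviceacct", "password", "yourdomain"),
                    Url = new Uri("https://mail.example.com/EWS/Exchange.asmx")
                };

                // Find recent Inbox messages that carry attachments.
                FindItemsResults<Item> results = service.FindItems(
                    WellKnownFolderName.Inbox,
                    new SearchFilter.IsEqualTo(ItemSchema.HasAttachments, true),
                    new ItemView(50));

                foreach (Item item in results)
                {
                    EmailMessage message = EmailMessage.Bind(service, item.Id,
                        new PropertySet(BasePropertySet.FirstClassProperties, ItemSchema.Attachments));

                    foreach (Attachment attachment in message.Attachments)
                    {
                        var file = attachment as FileAttachment;
                        if (file == null) continue;
                        file.Load();                                 // pulls the bytes into file.Content
                        SaveToDatabase(file.Name, file.Content);     // your own data-access code
                    }

                    message.IsRead = true;                           // avoid re-processing on the next poll
                    message.Update(ConflictResolutionMode.AutoResolve);
                }
            }

            static void SaveToDatabase(string name, byte[] content) { /* insert into your table */ }
        }

    Depending on the Exchange version, IMAP or POP3 access to the mailbox would be an alternative; the polling approach above is usually the simplest thing to host inside a standard Windows service (call PollOnce from a System.Timers.Timer callback).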

    Read the article

  • How does one check if a table exists in an Android SQLite database?

    - by camperdave
    I have an Android app that needs to check if there's already a record in the database; if not, it processes some things and eventually inserts it, and it simply reads the data from the database if the data does exist. I'm using a subclass of SQLiteOpenHelper to create and get a writable instance of SQLiteDatabase, which I thought automatically took care of creating the table if it didn't already exist (since the code to do that is in the onCreate(...) method). However, when the table does NOT yet exist, and the first method run on the SQLiteDatabase object I have is a call to query(...), my logcat shows an error of "I/Database(26434): sqlite returned: error code = 1, msg = no such table: appdata", and sure enough, the appdata table isn't being created. Any ideas why? I'm looking for either a method to test whether the table exists (because if it doesn't, the data's certainly not in it, and I don't need to read it until I write to it, which seems to create the table properly), or a way to make sure that it gets created, and is just empty, in time for that first call to query(...).
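
    One thing worth checking (a guess from the description): SQLiteOpenHelper.onCreate() only runs when the database file itself is first created, so if the database already existed on the device before the table was added to onCreate(), that table will never be created for existing installs; bumping the schema version and creating it in onUpgrade(), or clearing the app's data, is the usual remedy. For the literal question, a minimal sketch of testing for a table by querying sqlite_master:

        import android.database.Cursor;
        import android.database.sqlite.SQLiteDatabase;

        final class DbUtils {
            // Sketch: true if a table with the given name exists in this SQLite database.
            static boolean tableExists(SQLiteDatabase db, String tableName) {
                Cursor cursor = db.rawQuery(
                        "SELECT name FROM sqlite_master WHERE type='table' AND name=?",
                        new String[] { tableName });
                try {
                    return cursor.moveToFirst();   // any row back means the table exists
                } finally {
                    cursor.close();
                }
            }
        }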

    Read the article

  • Cannot complete a basic task in QT4

    - by Harun Baris Bulut
    Hi all, I cannot open a new window in Qt. I am new to Qt, so I think I am missing something. I write only the code below, and the settings window just shows itself and closes. I have commented out the destructor but the problem persists. SettingsWindow s; s.show(); What am I doing wrong? By the way, I cannot debug it either; the debugger does not stop when it reaches the first line, for example. Thanks
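
    The symptom described (a window that appears and immediately disappears) is usually caused by SettingsWindow s; being a local variable: the object is destroyed as soon as the enclosing function returns. A sketch of two common fixes, using the class name from the question:

        // Option 1: make the window a member of the enclosing class so it outlives the function.
        // class MainWindow : public QMainWindow { ... SettingsWindow settings; ... };
        // settings.show();

        // Option 2: allocate it on the heap and let Qt delete it when it is closed.
        SettingsWindow *s = new SettingsWindow(this);   // pass a parent if one is available
        s->setAttribute(Qt::WA_DeleteOnClose);          // widget deletes itself on close
        s->show();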

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process: putting your database in to a source control system, and running a continuous integration process each time changes are checked in. However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above: putting some actual integration tests in to the CI process (otherwise, they don't really do much, do they!?), deploying the database changes with a managed, automated approach, and monitoring what you've just put live, to make sure you haven't broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There'll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I'll mostly be using Red Gate products only (other than tSQLt). I do this firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn't have any choice!
    Background on Continuous Delivery
    For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage, and Versioning Databases – Branching and Merging, by Scott Allen. In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production. I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
    That is possible (and what Martin calls Continuous Deployment). However, again, that's more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!
    Integration Testing
    Back to something practical. The next stage, building on our setup from the previous article, is to add some integration tests to the process. As I say, the CI process, though interesting, isn't enormously useful without some sort of test process running. For this we'll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate's SQL Test, found at http://www.red-gate.com/products/sql-development/sql-test/, or it can be downloaded separately from www.tsqlt.org – though I'll provide a step-by-step guide below for setting this up.
    Getting tSQLt set up via SQL Test
    Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS; you should now see SQL Test under the Tools menu, and clicking this link will give you the basic SQL Test dialogue. Though we've installed the SQL Test product, we haven't yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link. In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the "Add SQL Cop tests" option. SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database, but we won't be using them in this particular simple example. Once you've clicked on the OK button, the changes described in the dialogue will be made to your database. We've now installed the framework. However, we haven't actually created any tests, so this will be the next step. But, before we proceed, we've made an update to our database, so we should again check this in to source control, adding comments as required. It's also worth a quick check that your build still runs with the new additions (and a quick check of the RedGateAppCI database shows that the changes have been made).
    Creating and Testing a Unit Test
    There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I'm going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever, but it illustrates the process and hopefully shows the method by which more interesting tests could be set up.
    Adding Reference Data to our Database
    To start, I want to add some reference data to my database, and have this source controlled (as well as the schema).
    First of all I need to add some data in to my solitary table – this can be done a number of ways, but I'll do it in SSMS for simplicity, adding a few rows of reference data to the table. Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table: right-click the table, select Design, right-click on the first "id" row, then click on "Set Primary Key". NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right-click on the table (dbo.Email) and select the option to link the table's (static) data to source control. In the next screen, link the data in the Email table by selecting it from the list and clicking "save and close". We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won't show screenshots for the GitHub side of things – it's the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from). An interesting point to note here: when these changes are committed in SQL Source Control (right-click the database and select "Commit Changes to Source Control.."), the display gives a warning about possibly needing a migration script for the "Add Primary Key" step of the changes. This isn't actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.
    Creating and Running the Test
    As I mention, the test I'm going to use here is a very simple one – are the email addresses in my reference table valid? This isn't, of course, a full test of email validation (I expect the email addresses I've chosen here aren't really those of the Fab Four) – but just a very basic check of the format used. I've taken the relevant SQL from this Stack Overflow article. In SSMS, select "SQL Test" from the Tools menu, then click on + New Test. In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example. Click "Create Test". After closing a couple of subsequent dialogues, you'll see a dummy script for the test that needs filling in. We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I'm going to use here is as below.
    This needs to be copied and pasted in to the query window, to replace the default given by tSQLt:

        -- Basic email check test
        ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
        AS
        BEGIN
            SET NOCOUNT ON

            Declare @Output VarChar(max)
            Set @Output = ''

            SELECT @Output = @Output + Email + Char(13) + Char(10)
            FROM dbo.Email
            WHERE email NOT LIKE '%_@__%.__%'

            If @Output > ''
            Begin
                Set @Output = Char(13) + Char(10) + @Output
                EXEC tSQLt.Fail @Output
            End
        END;

    Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format, then re-run the same SQL Test as before and it will fail. Great – we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).
    Checking that the Tests are Running as Integration Tests
    After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for RedGateAppCI shows that the SPROC for the new test has been moved over to that database. However this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process, i.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows – sadly – a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

        <!-- tSQLt tests -->
        <!-- Optional -->
        <!-- To run tSQLt tests in source control for the database, enter true. -->
        <enableTsqlt>true</enableTsqlt>

    Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number), we get – superb – a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part: 21-Jun-2013 11:35:19 Build FAILED.
21-Jun-2013 11:35:19 21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) -> 21-Jun-2013 11:35:19 (sqlCI target) -> 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]   As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build:   This should have fixed the build: It worked! Summary This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?  

    Read the article

  • How to draw a better looking Graph (A4 size) in Dot?

    - by Nazgulled
    Hi, I have this project that is due in a few hours and I still have a report to write... The project has nothing to do with Dot, but we were asked to draw a Graph with Dot, which I did. It looks something like this: http://img683.imageshack.us/img683/9735/dotj.jpg The longer arrows represent smaller weights and the shorter arrows represent bigger weights. There isn't any problem in submitting my project like this; it does what it's supposed to do, and this Dot thing is just an extra. But I would like to make it pretty, I just don't have time to learn about Dot right now. Basically, all I want is to make it pretty. Perhaps a bigger height for the page, like A4 paper size, and have the graph extend more toward the bottom instead of everything going out to the side. What should I put in my .dot file to make it look better?
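
    As a starting point, a hedged sketch of graph-level attributes that often help fit a Dot drawing to an A4 page and push the layout downwards rather than sideways (the values are only suggestions):

        digraph G {
            size="8.3,11.7";      // A4 in inches; Graphviz scales the drawing down to fit
            ratio=compress;       // or ratio=fill to stretch the drawing to the given size
            rankdir=TB;           // lay ranks out top-to-bottom
            nodesep=0.3;          // horizontal spacing between nodes
            ranksep=0.75;         // vertical spacing between ranks

            /* ... existing nodes and edges ... */
        }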

    Read the article

  • jquery on localhost

    - by ace
    I am using Linux Mint (= Ubuntu Linux 9.10). I installed a LAMP server, which has Apache, PHP and MySQL, and now I am trying to write jQuery code. I made a file and it worked perfectly with this link: file:///var/www/jquery/jquery.html, but when I use this link it doesn't work anymore: http://localhost/jquery/jquery.html. The file jquery.min.js is in the same folder, and I already changed its src in the source code to <script type="text/javascript" src="/var/www/jquery/jquery.min.js"></script> and <script type="text/javascript" src="jquery.min.js"></script> but neither of them works (with the localhost link). Using Firebug I saw these errors: The requested URL /var/www/jquery/jquery.min.js was not found on this server. You don't have permission to access /jquery/jquery.min.js on this server. Apache/2.2.12 (Ubuntu) Server at localhost Port 80. So what do I have to do to make it run?
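
    When the page is served by Apache, the src must be a URL relative to the web root (http://localhost/ maps to /var/www/), not a filesystem path, and the file must be readable by the Apache user. A sketch, assuming the default /var/www document root:

        <!-- served as http://localhost/jquery/jquery.html -->
        <script type="text/javascript" src="/jquery/jquery.min.js"></script>
        <!-- or, since jquery.min.js sits in the same folder as jquery.html -->
        <script type="text/javascript" src="jquery.min.js"></script>

    The second error quoted above ("You don't have permission to access /jquery/jquery.min.js") points to a file-permission problem as well; something like chmod 644 /var/www/jquery/jquery.min.js (plus execute permission on the directories) would likely be needed.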

    Read the article

  • Loading FireMonkey style resources with RTTI

    - by HeMet
    I am trying to write a class that inherits from the FMX TStyledControl. When the style is updated, it loads the style resource objects into a cache. I created a project group with a package for the custom controls and a test FMX HD project, as described in the Delphi help. After installing the package and placing a TsgSlideHost on the test form, I run the test app. It works well, but when I close it and try to rebuild the package, RAD Studio says "Error in rtl160.bpl" or "invalid pointer operation". It seems that the problem is in the LoadToCacheIfNeeded procedure of TsgStyledControl, but I don't understand why. Is there any restriction on using RTTI with FMX styles or anything? TsgStyledControl sources: unit SlideGUI.TsgStyledControl; interface uses System.SysUtils, System.Classes, System.Types, FMX.Types, FMX.Layouts, FMX.Objects, FMX.Effects, System.UITypes, FMX.Ani, System.Rtti, System.TypInfo; type TCachedAttribute = class(TCustomAttribute) private fStyleName: string; public constructor Create(const aStyleName: string); property StyleName: string read fStyleName; end; TsgStyledControl = class(TStyledControl) private procedure CacheStyleObjects; procedure LoadToCacheIfNeeded(aField: TRttiField); protected function FindStyleResourceAs<T: class>(const AStyleLookup: string): T; function GetStyleName: string; virtual; abstract; function GetStyleObject: TControl; override; public procedure ApplyStyle; override; published { Published declarations } end; implementation { TsgStyledControl } procedure TsgStyledControl.ApplyStyle; begin inherited; CacheStyleObjects; end; procedure TsgStyledControl.CacheStyleObjects; var ctx: TRttiContext; typ: TRttiType; fld: TRttiField; begin ctx := TRttiContext.Create; try typ := ctx.GetType(Self.ClassType); for fld in typ.GetFields do LoadToCacheIfNeeded(fld); finally ctx.Free end; end; function TsgStyledControl.FindStyleResourceAs<T>(const AStyleLookup: string): T; var fmxObj: TFmxObject; begin fmxObj := FindStyleResource(AStyleLookup); if Assigned(fmxObj) and (fmxObj is T) then Result := fmxObj as T else Result := nil; end; function TsgStyledControl.GetStyleObject: TControl; var S: TResourceStream; begin if (FStyleLookup = '') then begin if FindRCData(HInstance, GetStyleName) then begin S := TResourceStream.Create(HInstance, GetStyleName, RT_RCDATA); try Result := TControl(CreateObjectFromStream(nil, S)); Exit; finally S.Free; end; end; end; Result := inherited GetStyleObject; end; procedure TsgStyledControl.LoadToCacheIfNeeded(aField: TRttiField); var attr: TCustomAttribute; styleName: string; styleObj: TFmxObject; val: TValue; begin for attr in aField.GetAttributes do begin if attr is TCachedAttribute then begin styleName := TCachedAttribute(attr).StyleName; if styleName <> '' then begin styleObj := FindStyleResource(styleName); val := TValue.From<TFmxObject>(styleObj); aField.SetValue(Self, val); end; end; end; end; { TCachedAttribute } constructor TCachedAttribute.Create(const aStyleName: string); begin fStyleName := aStyleName; end; end.
Using of TsgStyledControl: type TsgSlideHost = class(TsgStyledControl) private [TCached('SlideHost')] fSlideHost: TLayout; [TCached('SideMenu')] fSideMenuLyt: TLayout; [TCached('SlideContainer')] fSlideContainer: TLayout; fSideMenu: IsgSideMenu; procedure ReapplyProps; procedure SetSideMenu(const Value: IsgSideMenu); protected function GetStyleName: string; override; function GetStyleObject: TControl; override; procedure UpdateSideMenuLyt; public constructor Create(AOwner: TComponent); override; procedure ApplyStyle; override; published property SideMenu: IsgSideMenu read fSideMenu write SetSideMenu; end;

    Read the article

  • writing NSDictionary to plist

    - by ADude
    Hi I'm trying to write an NSDictionary to a plist but when I open the plist no data has been written to it. From the log my path looks correct and my code is pretty standard. Any ideas? NSArray *keys = [NSArray arrayWithObjects:@"key1", @"key2", @"key3", nil]; NSArray *objects = [NSArray arrayWithObjects:@"value1", @"value2", @"value3", nil]; NSDictionary *dictionary = [NSDictionary dictionaryWithObjects:objects forKeys:keys]; for (id key in dictionary) { NSLog(@"key: %@, value: %@", key, [dictionary objectForKey:key]); } NSString *path = [[NSBundle mainBundle] pathForResource:@"FormData" ofType:@"plist"]; NSLog(@"path:%@", path); [dictionary writeToFile:path atomically:YES];
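
    One common cause (an assumption, since the actual path printed in the log isn't shown): the application bundle is read-only on a device, so writeToFile: against a bundle path fails and simply returns NO. A minimal sketch of writing to the Documents directory instead, and checking the return value; the file name is just an example:

        // Sketch: write the dictionary to the app's Documents directory instead of the bundle.
        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *docsDir = [paths objectAtIndex:0];
        NSString *path = [docsDir stringByAppendingPathComponent:@"FormData.plist"];

        BOOL ok = [dictionary writeToFile:path atomically:YES];   // check the result instead of ignoring it
        NSLog(@"wrote plist to %@: %d", path, ok);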

    Read the article

  • Retrieving inline images from Lotus Notes using LotusScript

    - by Nazrul
    I have some NotesDocuments where some RichText fields contain both text and inline images. I can get the text part of those items, but I can't retrieve the inline images using LotusScript. Could anyone please suggest a way to retrieve the inline images from those documents? LotusScript code: Sub Click(Source As Button) Dim session As New NotesSession Dim db As NotesDatabase Dim mainDoc As NotesDocument Dim v As NotesView Set db = session.CurrentDatabase Dim fileName As String Dim fileNum As Integer fileNum% = Freefile() fileName$ = "D:\data.txt" Open FileName$ For Append As fileNum% Set v = db.GetView("MyView") Set mainDoc = v.GetFirstDocument While Not ( mainDoc Is Nothing ) Forall i In mainDoc.Items If i.Type = RICHTEXT Then Write #fileNum% , i.Name & ":" & i.text 'how the images?? End If End Forall Set mainDoc = v.GetNextDocument( mainDoc ) Wend End Sub Thanks.

    Read the article

  • insert and modify a record in an entity using Core Data

    - by aminfar
    I tried to find the answer to my question on the internet, but I could not. I have a simple entity in Core Data that has a Value attribute (an integer) and a Date attribute. I want to define two methods in my .m file. The first method is the ADD method. It takes two arguments: an integer value (entered by the user in the UI) and a date (the current date by default), and it inserts a record into the entity based on those arguments. The second method is like an increment method. It uses the Date as a key to find a record and then increments the integer value of that record. I don't know how to write these methods. (Assume that we have an Array Controller for the table in the xib file.)
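
    A hedged sketch of what the two methods could look like, assuming the entity is called "Record" with attributes "value" and "date", and that a managedObjectContext property is available (all of those names are assumptions, not taken from the question):

        // Sketch: insert a new Record with the given value and date (current date by default).
        - (void)addValue:(NSInteger)value forDate:(NSDate *)date {
            NSManagedObject *record = [NSEntityDescription insertNewObjectForEntityForName:@"Record"
                                                                    inManagedObjectContext:self.managedObjectContext];
            [record setValue:[NSNumber numberWithInteger:value] forKey:@"value"];
            [record setValue:(date ?: [NSDate date]) forKey:@"date"];
            [self.managedObjectContext save:NULL];   // handle the NSError properly in real code
        }

        // Sketch: find the record for a given date and increment its value.
        - (void)incrementValueForDate:(NSDate *)date {
            NSFetchRequest *request = [[NSFetchRequest alloc] init];
            request.entity = [NSEntityDescription entityForName:@"Record"
                                         inManagedObjectContext:self.managedObjectContext];
            request.predicate = [NSPredicate predicateWithFormat:@"date == %@", date];

            NSManagedObject *record = [[self.managedObjectContext executeFetchRequest:request error:NULL] lastObject];
            if (record) {
                NSInteger current = [[record valueForKey:@"value"] integerValue];
                [record setValue:[NSNumber numberWithInteger:current + 1] forKey:@"value"];
                [self.managedObjectContext save:NULL];
            }
            [request release];   // under manual reference counting
        }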

    Read the article

  • How to send native texture ptr from Unity web player to a browser plug-in?

    - by user2928039
    I have written an NPAPI browser plug-in (using FireBreath) that Unity uses to access the Kinect camera. I can easily retrieve skeleton data from Unity through JavaScript since it isn't too big, but the problem is in retrieving the color image data. Is it possible to send a native texture pointer (GetNativeTexturePtr) from Unity through JavaScript into the C++ plug-in so that it can write the texture data directly? (Tested in the standalone version and it works.) Any other suggestions on how to transfer image data from browser plug-ins to the Unity web player are very welcome. Thanks.

    Read the article

  • Reserve RAM in C

    - by petersmith221
    Hi, I need ideas on how to write a C program that reserves a specified amount of RAM, in MB, until a key [e.g. any key] is pressed, on a Linux 2.6 32-bit system. ./eat_ram.out 200 # If free -m is executed at this time, it should report 200 MB more in the used section than before running the program. [Any key is pressed] # Now all the reserved RAM should be released and the program exits. It is the core functionality of the program [reserving the RAM] that I do not know how to do; getting arguments from the command line, printing [Any key is pressed] and so on is not a problem for me. Any ideas on how to do this?
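
    A minimal sketch of the core idea: allocate the requested number of MB, then touch every page (without this, Linux overcommit means the memory never shows up as used in free -m), then block until a key is pressed:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(int argc, char *argv[])
        {
            if (argc < 2) {
                fprintf(stderr, "usage: %s <megabytes>\n", argv[0]);
                return 1;
            }

            size_t mb = (size_t)atoi(argv[1]);
            size_t bytes = mb * 1024 * 1024;

            char *block = malloc(bytes);
            if (block == NULL) {
                perror("malloc");
                return 1;
            }

            /* Touch every byte so the kernel actually backs the allocation with RAM;
               otherwise free -m will not report the memory as used. */
            memset(block, 1, bytes);

            printf("Holding %zu MB, press Enter to release...\n", mb);
            getchar();

            free(block);
            return 0;
        }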

    Read the article

  • What is the best resizable circular byte buffer available in Java?

    - by Wouter Lievens
    I need a byte buffer class in Java for single-threaded use. I should be able to insert data at the back of the buffer and read data at the front, with an amortized cost of O(1). The buffer should resize when it's full, rather than throw an exception or something. I could write one myself, but I'd be very surprised if this didn't exist yet in a standard Java package, and if it doesn't, I'd expect it to exist in some well-tested public library. What would you recommend?
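
    For what it's worth, the JDK has no growable byte deque of its own (java.util.ArrayDeque<Byte> works but boxes every byte), so a hand-rolled ring buffer is common. A minimal sketch; growth doubles the backing array, so add and remove are amortized O(1):

        // Sketch: a growable ring buffer of bytes for single-threaded use.
        final class ByteRingBuffer {
            private byte[] buf = new byte[16];
            private int head = 0;   // index of the next byte to read
            private int size = 0;   // number of bytes currently stored

            void add(byte b) {
                if (size == buf.length) grow();
                buf[(head + size) % buf.length] = b;
                size++;
            }

            byte remove() {
                if (size == 0) throw new IllegalStateException("buffer is empty");
                byte b = buf[head];
                head = (head + 1) % buf.length;
                size--;
                return b;
            }

            int size() { return size; }

            private void grow() {
                byte[] bigger = new byte[buf.length * 2];
                for (int i = 0; i < size; i++) {
                    bigger[i] = buf[(head + i) % buf.length];   // unwrap into the new array
                }
                buf = bigger;
                head = 0;
            }
        }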

    Read the article

  • threaded serial port IOException when writing

    - by John McDonald
    Hi, I'm trying to write a small application that simply reads data from a socket, extracts some information (two integers) from the data and sends the extracted information off on a serial port. The idea is that it should start and just keep going. In short, it works, but not for long. After a consistently short period I start to receive IOExceptions and socket receive buffer is swamped. The thread framework has been taken from the MSDN serial port example. The delay in send(), readThread.Join(), is an effort to delay read() in order to allow serial port interrupt processing a chance to occur, but I think I've misinterpreted the join function. I either need to sync the processes more effectively or throw some data away as it comes in off the socket, which would be fine. The integer data is controlling a pan tilt unit and I'm sure four times a second would be acceptable, but not sure on how to best acheive either, any ideas would be greatly appreciated, cheers. using System; using System.Collections.Generic; using System.Text; using System.IO.Ports; using System.Threading; using System.Net; using System.Net.Sockets; using System.IO; namespace ConsoleApplication1 { class Program { static bool _continue; static SerialPort _serialPort; static Thread readThread; static Thread sendThread; static String sendString; static Socket s; static int byteCount; static Byte[] bytesReceived; // synchronise send and receive threads static bool dataReceived; const int FIONREAD = 0x4004667F; static void Main(string[] args) { dataReceived = false; readThread = new Thread(Read); sendThread = new Thread(Send); bytesReceived = new Byte[16384]; // Create a new SerialPort object with default settings. _serialPort = new SerialPort("COM4", 38400, Parity.None, 8, StopBits.One); // Set the read/write timeouts _serialPort.WriteTimeout = 500; _serialPort.Open(); string moveMode = "CV "; _serialPort.WriteLine(moveMode); s = null; IPHostEntry hostEntry = Dns.GetHostEntry("localhost"); foreach (IPAddress address in hostEntry.AddressList) { IPEndPoint ipe = new IPEndPoint(address, 10001); Socket tempSocket = new Socket(ipe.AddressFamily, SocketType.Stream, ProtocolType.Tcp); tempSocket.Connect(ipe); if (tempSocket.Connected) { s = tempSocket; s.ReceiveBufferSize = 16384; break; } else { continue; } } readThread.Start(); sendThread.Start(); while (_continue) { Thread.Sleep(10); ;// Console.WriteLine("main..."); } readThread.Join(); _serialPort.Close(); s.Close(); } public static void Read() { while (_continue) { try { //Console.WriteLine("Read"); if (!dataReceived) { byte[] outValue = BitConverter.GetBytes(0); // Check how many bytes have been received. s.IOControl(FIONREAD, null, outValue); uint bytesAvailable = BitConverter.ToUInt32(outValue, 0); if (bytesAvailable > 0) { Console.WriteLine("Read thread..." + bytesAvailable); byteCount = s.Receive(bytesReceived); string str = Encoding.ASCII.GetString(bytesReceived); //str = Encoding::UTF8->GetString( bytesReceived ); string[] split = str.Split(new Char[] { '\t', '\r', '\n' }); string filteredX = (split.GetValue(7)).ToString(); string filteredY = (split.GetValue(8)).ToString(); string[] AzSplit = filteredX.Split(new Char[] { '.' }); filteredX = (AzSplit.GetValue(0)).ToString(); string[] ElSplit = filteredY.Split(new Char[] { '.' 
}); filteredY = (ElSplit.GetValue(0)).ToString(); // scale values int x = (int)(Convert.ToInt32(filteredX) * 1.9); string scaledAz = x.ToString(); int y = (int)(Convert.ToInt32(filteredY) * 1.9); string scaledEl = y.ToString(); String moveAz = "PS" + scaledAz + " "; String moveEl = "TS" + scaledEl + " "; sendString = moveAz + moveEl; dataReceived = true; } } } catch (TimeoutException) {Console.WriteLine("timeout exception");} catch (NullReferenceException) {Console.WriteLine("Read NULL reference exception");} } } public static void Send() { while (_continue) { try { if (dataReceived) { // sleep Read() thread to allow serial port interrupt processing readThread.Join(100); // send command to PTU dataReceived = false; Console.WriteLine(sendString); _serialPort.WriteLine(sendString); } } catch (TimeoutException) { Console.WriteLine("Timeout exception"); } catch (IOException) { Console.WriteLine("IOException exception"); } catch (NullReferenceException) { Console.WriteLine("Send NULL reference exception"); } } } } }

    Read the article

  • Optimizations employed by ORMs

    - by Kartoch
    I'm teaching JEE, especially JPA, Spring and Spring MVC. As I do not have much experience with large projects, it is difficult to know what to present to students about optimisation of ORMs. At present, I cover some classic optimisation tricks: prepared statements (most ORMs use them implicitly by default), first and second-level caches, "write first, optimize later", and the fact that it is possible to switch off the ORM and send SQL commands directly to the database for very frequent, specialized and costly requests. Are there other points, or other ways to optimize ORM usage, that the community would suggest? I'm especially interested in DAO patterns...

    Read the article

  • Rails Catch All Route URL Helpers?

    - by viatropos
    If I have a catch-all-route like this: match '*request_path' => "pages#show", :as => :page ...and the pages can be arbitrarily nested, how do I make it so I can use the url helper methods? If I have a page structure like this: /about /about/people /about/story /about/story/in-depth Then I want to be able to write page_path(@page) and get /about/story/in-depth for the hypothetical "In Depth Story" page. But instead I'm just getting /in-depth. If I override Page#to_param, and do something like this: def to_param result = "" if parent result << parent.to_param result << "/" end result << super end ... it returns an encoded string like this: /about%2Fstory%2Fin-depth Is there a way to make this work?

    Read the article

  • ODBC Connection Setup in Java

    - by Jessica
    I want to write a Java program that automates the work that the ODBC Data Source Administrator does in Windows. That is, given an ODBC connection name and a path to the database on the hard drive, I want it to create the connection. I really have no idea where to even begin with this. I looked at this but it said it was for C and I don't think that's very helpful. If anyone could point me in the right direction for this at all, I would appreciate it. (I realize this question is REALLY vague, but that's all the information I was given.)
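
    Two directions worth knowing about (neither is stated in the question, so treat this as background): an ODBC user DSN is ultimately just registry data under HKEY_CURRENT_USER\Software\ODBC\ODBC.INI, normally written by SQLConfigDataSource in odbccp32.dll, which pure Java can only reach via JNI/JNA; alternatively, many cases can skip the DSN entirely by passing a driver-specified connection string through the JDBC-ODBC bridge. A sketch of the DSN-less route, assuming a Microsoft Access database (the path and driver name are examples):

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class DsnLessExample {
            public static void main(String[] args) throws Exception {
                // JDBC-ODBC bridge (shipped with JDKs up to Java 7); the connection string
                // names the ODBC driver directly, so no pre-configured DSN is needed.
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                String url = "jdbc:odbc:Driver={Microsoft Access Driver (*.mdb)};DBQ=C:\\data\\mydb.mdb";
                try (Connection conn = DriverManager.getConnection(url)) {
                    System.out.println("Connected: " + !conn.isClosed());
                }
            }
        }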

    Read the article

  • Maven Plugin - Restart Jetty with new WAR?

    - by Walter White
    Hi all, what I would like to do is automatically test against several different Maven build profiles. I want to write a Maven plugin that iterates through each profile so I don't have to list them manually for the CI process. I just want to verify that the code works in development, testing, staging, and production once deployed there, and I want the tests to run automatically against those profiles so I can keep it all part of the same Maven build. How would I best set that up, and log the results in Sonar or another tool? Walter

    Read the article

  • execCommand("SaveAs", null, "file.csv") is not working in IE8

    - by anbu
    var doc = w.document; doc.open('application/CSV','replace'); doc.charset = "utf-8"; doc.write("all,hello"); doc.close(); if(doc.execCommand("SaveAs", null, "file.csv")) { window.alert("saved"); } else { window.alert("cannot be saved"); } This is not working in IE 8 but works in IE 6. What is the problem? It always alerts "cannot be saved". Help me! Thanks in advance.

    Read the article

  • What is the correct Qt idiom for exposing signals/slots of contained widgets?

    - by Tyler McHenry
    Suppose I have a MyWidget which contains a MySubWidget, e.g. a custom widget that contains a text field or something. I want other classes to be able to connect to signals and slots exposed by the contained MySubWidget instance. Is the conventional way to do this: Expose a pointer to the MySubWidget instance through a subWidget() method in MyWidget Duplicate the signals and slots of MySubWidget in the MyWidget class and write "forwarding" code Something else? Choice 1 seems like the least code, but it also sort of breaks encapsulation, since now other classes know what the contained widgets of MyWidget are and might become dependent on their functionality. Choice 2 seems like it keeps encapsulation, but it's a lot of seemingly redundant and potentially convoluted code that kind of messes up the elegance of the whole signals and slots system. What is normally done in this situation?
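
    For what it's worth, option 2 is usually written with Qt's signal-to-signal connections, so "forwarding" costs one connect call per signal rather than a hand-written slot body. A sketch with a made-up signal name (textChanged is an assumption, not from the question):

        // Sketch: MyWidget re-emits the sub-widget's signal under its own name,
        // so outside code only ever connects to MyWidget.
        class MyWidget : public QWidget
        {
            Q_OBJECT
        public:
            explicit MyWidget(QWidget *parent = 0)
                : QWidget(parent), m_sub(new MySubWidget(this))
            {
                // Signal-to-signal connection: no forwarding slot body is needed.
                connect(m_sub, SIGNAL(textChanged(QString)),
                        this,  SIGNAL(textChanged(QString)));
            }

        signals:
            void textChanged(const QString &text);   // MyWidget's public signal

        private:
            MySubWidget *m_sub;
        };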

    Read the article

  • Multi-statement Table Valued Function vs Inline Table Valued Function

    - by AndyC
    ie: CREATE FUNCTION MyNS.GetUnshippedOrders() RETURNS TABLE AS RETURN SELECT a.SaleId, a.CustomerID, b.Qty FROM Sales.Sales a INNER JOIN Sales.SaleDetail b ON a.SaleId = b.SaleId INNER JOIN Production.Product c ON b.ProductID = c.ProductID WHERE a.ShipDate IS NULL GO versus: CREATE FUNCTION MyNS.GetLastShipped(@CustomerID INT) RETURNS @CustomerOrder TABLE (SaleOrderID INT NOT NULL, CustomerID INT NOT NULL, OrderDate DATETIME NOT NULL, OrderQty INT NOT NULL) AS BEGIN DECLARE @MaxDate DATETIME SELECT @MaxDate = MAX(OrderDate) FROM Sales.SalesOrderHeader WHERE CustomerID = @CustomerID INSERT @CustomerOrder SELECT a.SalesOrderID, a.CustomerID, a.OrderDate, b.OrderQty FROM Sales.SalesOrderHeader a INNER JOIN Sales.SalesOrderHeader b ON a.SalesOrderID = b.SalesOrderID INNER JOIN Production.Product c ON b.ProductID = c.ProductID WHERE a.OrderDate = @MaxDate AND a.CustomerID = @CustomerID RETURN END GO Is there an advantage to using one over the other? Are there certain scenarios where one is better than the other, or are the differences purely syntactical? I realise the two example queries are doing different things, but is there a reason I would write them in that way? I've been reading about them, but the advantages/differences haven't really been explained. Thanks

    Read the article

  • commands & creating pointer [closed]

    - by gcc
    Input: 23 3 4 4 42 n 23 0 9 9 n n n 3 9 9 x. According to the input, I should create arrays of int pointers, numbered starting from 1 (that is, the initial array is arrays[1]); when the program sees n, it must jump to the next array. The first int of the input, 23, is num_arrays, which is used in malloc(sizeof(int)*num_arrays). Expected output: elements of arrays[1]: 3 4 5 42; elements of arrays[2]: 23 0 9 9; elements of arrays[5]: 3 9 9. Another input: 12 2 3 4 n n 2 3 4 n 12 3 x. Expected output: elements of arrays[1]: 2 3 4; elements of arrays[3]: 2 3 4; elements of arrays[4]: 12 3. Specification: x is the stopper; n is the command to create a new pointer array. I am new to this site; can anyone help me with how to write this?

    Read the article

  • jQuery in SharePoint returning Object Expected

    - by Simon Thompson
    When I add jQuery to SharePoint 2007 (MOSS) and try to use it on a page, no matter what I write on the client I get an "object expected" error at the line/column where the "$" appears. I have used Fiddler to check that the client is downloading the jQuery JS (which it is), but it's as if it is being ignored, and therefore the "$" is not understood. Searching Google, everybody says it's the selector not finding the elements, but looking at the code below I do not see how it could fail to find my very simple example. In the master page, in the header: <script type="text/javascript" src="jquery.min.js"></script> version 1.4.2 On a page: <a href="javascript:abc();">Testing</a> <script> function abc(){ $("#simon").css("border","3px solid red"); } </script> <div id="simon">

    Read the article

  • Android: Easiest way to make a WebView display a Bitmap?

    - by legr3c
    I have some images that I loaded from a remote source stored in Bitmap variables and I want to display them. In addition to switching between these images the user should also be able to zoom and pan them. My first idea was to somehow pass them via an intent to the built-in gallery application but this doesn't seem to be possible. A solution that is suggested in several places is using a WebView since it already supports zooming and panning. My question is how does my Bitmap data get into the WebView? Do I have to write it to a file first, which I would have to remove again later, or is there an easier way? Or are there even better ways to accomplish my main goal, which is displaying Bitmap data as zoomable and panable images?
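
    A hedged sketch of one way to get a Bitmap into a WebView without an intermediate file: encode it as a base64 data URI and load a tiny HTML page around it (the zoom settings shown are standard WebSettings calls). For very large images this can be slow and memory-hungry, in which case writing the bitmap to a cache file and loading it via a file:// URL is the usual fallback:

        import android.graphics.Bitmap;
        import android.util.Base64;
        import android.webkit.WebView;

        import java.io.ByteArrayOutputStream;

        final class BitmapWebView {
            // Sketch: show a Bitmap in a WebView with zoom controls enabled.
            static void show(WebView webView, Bitmap bitmap) {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
                String base64 = Base64.encodeToString(out.toByteArray(), Base64.NO_WRAP);

                String html = "<html><body style=\"margin:0\">"
                        + "<img src=\"data:image/png;base64," + base64 + "\"/>"
                        + "</body></html>";

                webView.getSettings().setSupportZoom(true);
                webView.getSettings().setBuiltInZoomControls(true);   // pinch/zoom and pan
                webView.loadDataWithBaseURL(null, html, "text/html", "utf-8", null);
            }
        }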

    Read the article
