Search Results

Search found 501 results on 21 pages for 'sequential'.

Page 14/21 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • Data Driven MSTest: DataRow is always null

    - by David Back
    I am having a problem using Visual Studio data driven testing. I have tried to deconstruct this to the simplest example. I am using Visual Studio 2012. I create a new unit test project and reference System.Data. My code looks like this:

        namespace UnitTestProject1
        {
            [TestClass]
            public class UnitTest1
            {
                [DeploymentItem(@"OrderService.csv")]
                [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "OrderService.csv", "OrderService#csv", DataAccessMethod.Sequential)]
                [TestMethod]
                public void TestMethod1()
                {
                    try
                    {
                        Debug.WriteLine(TestContext.DataRow["ID"]);
                    }
                    catch (Exception ex)
                    {
                        Assert.Fail();
                    }
                }

                public TestContext TestContext { get; set; }
            }
        }

    I have a very small CSV file whose Build Action I have set to 'Content' and Copy to Output Directory to 'Copy Always'. I have added a .testsettings file to the solution, enabled deployment, and added the CSV file. I have tried this with and without |DataDirectory|, and with/without a full path specified (the same path that I get with Environment.CurrentDirectory). I've tried variations of "../" and "../../" just in case. Right now the CSV is at the project root level, same as the .cs test code file. I have tried variations with XML as well as CSV. TestContext is not null, but DataRow always is. I have not gotten this to work despite a lot of fiddling with it. I'm not sure what I'm doing wrong. Does MSTest create a log anywhere that would tell me if it is failing to find the CSV file, or what specific error might be causing DataRow to fail to populate? I have tried the following CSV files:

        ID
        1
        2
        3
        4

    and

        ID, Whatever
        1,0
        2,1
        3,2
        4,3

    So far, no dice.
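
    A commonly reported cause of DataRow staying null while TestContext is populated is that the test is not actually executed as a data-driven test, e.g. when it is run through a runner that ignores the DataSource attribute. For reference, a minimal sketch of the attribute combination documented for CSV sources, assuming the .testsettings file is actually selected and deployment is enabled (names mirror the question):

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class UnitTest1
        {
            // MSTest assigns this property before each data-driven iteration.
            public TestContext TestContext { get; set; }

            [DeploymentItem("OrderService.csv")]
            [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                        "|DataDirectory|\\OrderService.csv",  // path relative to the deployment directory
                        "OrderService#csv",                   // "table" name: file name with '.' replaced by '#'
                        DataAccessMethod.Sequential)]
            [TestMethod]
            public void TestMethod1()
            {
                Assert.IsNotNull(TestContext.DataRow["ID"]);
            }
        }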

    Read the article

  • How to pass a SAFEARRAY of UDTs to unmanaged code from C#

    - by themilan
    I have also used VT_RECORD, but did not have any success passing a SAFEARRAY of UDTs.

        [ComVisible(true)]
        [StructLayout(LayoutKind.Sequential)]
        public class MY_CLASS
        {
            [MarshalAs(UnmanagedType.U4)] public Int32 width;
            [MarshalAs(UnmanagedType.U4)] public Int32 height;
        };

        [DllImport("mydll.dll")]
        public static extern Int32 GetTypes(
            [In, Out][MarshalAs(UnmanagedType.SafeArray, SafeArraySubType = VarEnum.VT_RECORD, SafeArrayUserDefinedSubType = typeof(MY_CLASS))] MY_CLASS[] myClass,
            [In, Out][MarshalAs(UnmanagedType.SafeArray, SafeArraySubType = VarEnum.VT_RECORD, SafeArrayUserDefinedSubType = typeof(Guid))] Guid[] guids);

    If I call my unmanaged code without the first parameter, there is no error passing the "guids" parameter, and I am able to cast the elements of the received SAFEARRAY to GUID on the unmanaged side. But if I try to pass my UDT class MY_CLASS to unmanaged code in a SAFEARRAY (as in the code snippet above), it fails on the managed side with the exception: "An unhandled exception of type 'System.Runtime.InteropServices.SafeArrayTypeMismatchException' occurred in myapp.exe. Additional information: Specified array was not of the expected type." Please help me pass a SAFEARRAY of UDTs to unmanaged code in this situation.
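
    The SafeArrayTypeMismatchException here is usually attributed to the UDT being declared as a reference type: a SAFEARRAY of VT_RECORD generally expects a value type whose type information can be looked up (by GUID) in a registered type library, since the interop marshaler needs an IRecordInfo for the record. A sketch of the struct-based declaration, with a placeholder GUID and on the assumption that the assembly is exported and registered (e.g. with regasm /tlb):

        using System;
        using System.Runtime.InteropServices;

        [ComVisible(true)]
        [Guid("11111111-2222-3333-4444-555555555555")]  // placeholder; must match the GUID in the registered type library
        [StructLayout(LayoutKind.Sequential)]
        public struct MY_CLASS                          // a value type rather than a class
        {
            [MarshalAs(UnmanagedType.U4)] public Int32 width;
            [MarshalAs(UnmanagedType.U4)] public Int32 height;
        }

        public static class NativeMethods
        {
            [DllImport("mydll.dll")]
            public static extern Int32 GetTypes(
                [In, Out, MarshalAs(UnmanagedType.SafeArray,
                    SafeArraySubType = VarEnum.VT_RECORD,
                    SafeArrayUserDefinedSubType = typeof(MY_CLASS))] MY_CLASS[] myClass,
                [In, Out, MarshalAs(UnmanagedType.SafeArray,
                    SafeArraySubType = VarEnum.VT_RECORD,
                    SafeArrayUserDefinedSubType = typeof(Guid))] Guid[] guids);
        }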

    Read the article

  • C# wrapper and Callbacks

    - by fergs
    I'm in the process of writing a C# wrapper for the Dallmeier Common API light (camera & surveillance systems). I've never written a wrapper before, but I have used the Canon EDSDK C# wrapper, so I'm using the Canon wrapper as a guide to writing the Dallmeier one. I'm currently having issues with wrapping a callback. The API manual has the following:

        int dlm_connect(unsigned long ulWindowHandle, const char* strIP,
                        const char* strUser1, const char* strPwd1,
                        const char* strUser2, const char* strPwd2,
                        void (*callback)(void *pParameters, void *pResult, void *pInput),
                        void *pInput)

    Arguments:
    - ulWindowHandle - handle of the window that is passed to the ViewerSDK to display video and messages there
    - strUser1/2 - names of the users to log in. If only single user login is used, strUser2 is NULL
    - strPwd1/2 - passwords of both users. If strUser2 is NULL, strPwd2 is ignored.

    Return: this function creates a SessionHandle that has to be passed on.

    The callback's pParameters will be structured as:
    - unsigned long ulFunctionID
    - unsigned long ulSocketHandle (handle to the socket of the established connection)
    - unsigned long ulWindowHandle
    - int SessionHandle (session handle of the session created)
    - const char* strIP
    - const char* strUser1
    - const char* strPwd1
    - const char* strUser2
    - const char* strPWD2

    pResult is a pointer to an integer representing the result of the operation: zero on success, negative values are error codes.

    From what I've read on the net and Stack Overflow, C# uses delegates for callbacks. So I create my callback delegate:

        public delegate uint DallmeierCallback(DallPParameters pParameters, IntPtr pResult, IntPtr pInput);

    I create the connection function:

        [DllImport("davidapidis.dll")]
        public extern static int dlm_connect(ulong ulWindowHandle, string strIP,
                                             string strUser1, string strPwd1,
                                             string strUser2, string strPwd2,
                                             DallmeierCallback inDallmeierFunc

    And (I think) the DallPParameters as a struct:

        [StructLayout(LayoutKind.Sequential)]
        public struct DallPParameters
        {
            public ulong ulfunctionID;
            public ulong ulsocketHandle;
            public ulong ulWindowHandle;
            ...
        }

    All of this is in my wrapper class. Am I heading in the right direction, or is this completely wrong?
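
    A few points a sketch might help with, with the caveat that the SDK header would need to be checked: the native callback returns void (not uint), C 'unsigned long' is 32 bits on Windows (so uint rather than C#'s 64-bit ulong), and it is usually safer to take the callback parameters as IntPtr and read them with Marshal.PtrToStructure once the layout is confirmed. The delegate instance also has to be kept alive (e.g. in a field) for as long as native code may invoke it, or the garbage collector can collect it.

        using System;
        using System.Runtime.InteropServices;

        public static class DallmeierNative
        {
            // Assumption: cdecl calling convention; switch to StdCall if the header says otherwise.
            [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
            public delegate void DallmeierCallback(IntPtr pParameters, IntPtr pResult, IntPtr pInput);

            [DllImport("davidapidis.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
            public static extern int dlm_connect(
                uint ulWindowHandle,              // C 'unsigned long' maps to uint on Windows
                string strIP,
                string strUser1, string strPwd1,
                string strUser2, string strPwd2,
                DallmeierCallback callback,
                IntPtr pInput);
        }

        // Somewhere in the wrapper: keep a field referencing the delegate so it is not collected
        // while the native side still holds the function pointer, e.g.:
        //   private DallmeierNative.DallmeierCallback _connectCallback;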

    Read the article

  • Unmanaged DLL in C# Web Service

    - by Telis
    Hi guys, please help me, as I am new to accessing an unmanaged DLL from C#. I have a large unmanaged C++ DLL and I am trying to access its classes and functions from a C# web service. I have seen many examples of how to use DllImport, but for some reason I am stuck on my very first wrapper method, having spent many hours with no luck. What should I do to return an object from my marshaled [DllImport] function? I would like to do something like this:

        [DllImport("unmanaged.dll")]
        public static extern MyClass MyFunction();

    Here is the definition of my C++ class and the function that I want to access:

        class __declspec(dllexport) TPDate
        {
        public:
            TPDate();
            TPDate(const TPDate& rhs);
            ...
            // today's date.
            static TPDate AsOfDate(void);
            ...
        }

    In my web service I have declared the following StructLayout:

        [StructLayout(LayoutKind.Sequential)]
        public class TPDate
        {
            public TPDate(TPDate d) { _tpDate = d; }
            public TPDate _tpDate;
        }

    and here is where I think I'm not doing something right:

        class WrapperTPDate
        {
            [DllImport("TPTools.dll", ExactSpelling = false,
                       EntryPoint = "?AsOfDate@TPDate@@SA?AV1@XZ",
                       CallingConvention = CallingConvention.StdCall)]
            [return: MarshalAs(UnmanagedType.Struct)]
            public static extern TPDate AsOfDate();   // HERE THERE IS A PROBLEM
        };

    I am calling the wrapper as follows from my WebMethod:

        [WebMethod]
        public void ConstructModel()
        {
            TPDate date1 = WrapperTPDate.AsOfDate();  // Here I get the exception
            TPDate date = new TPDate(date1);
        }

    The exception I am getting is: "System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Runtime.InteropServices.MarshalDirectiveException: Cannot marshal 'return value': Invalid managed/unmanaged type combination (this type must be paired with LPStruct or Interface)." If I change it to LPStruct, I get another exception: "System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt." Could you please tell me where I'm going wrong here? Thanks.
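
    One thing worth noting: the entry point ?AsOfDate@TPDate@@SA?AV1@XZ is a C++ static member function that returns a TPDate object by value, and the P/Invoke marshaler has no way to construct or copy a C++ class, which is consistent with both exceptions above. The usual workaround is a small extern "C" wrapper (or a C++/CLI bridge) that exposes plain data. A sketch of the managed side, assuming such a hypothetical flat export could be added on the native side:

        using System;
        using System.Runtime.InteropServices;

        public static class TPDateWrapper
        {
            // Hypothetical native export added to TPTools.dll or a small shim DLL, e.g.:
            //   extern "C" __declspec(dllexport) void TPDate_AsOfDate(int* year, int* month, int* day);
            [DllImport("TPToolsShim.dll", CallingConvention = CallingConvention.Cdecl)]
            private static extern void TPDate_AsOfDate(out int year, out int month, out int day);

            public static DateTime AsOfDate()
            {
                int year, month, day;
                TPDate_AsOfDate(out year, out month, out day);
                return new DateTime(year, month, day);
            }
        }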

    Read the article

  • Extra fulltext ordering criteria beyond default relevance

    - by Jeremy Warne
    I'm implementing an ingredient text search, for adding ingredients to a recipe. I've currently got a full text index on the ingredient name, which is stored in a single text field, like so: "Sauce, tomato, lite, Heinz" I've found that because there are a lot of ingredients with very similar names in the database, simply sorting by relevance doesn't work that well a lot of the time. So, I've found myself sorting by a bunch of my own rules of thumb, which probably duplicates a lot of the full-text search algorithm which spits out a numerical relevance. For instance (abridged): ORDER BY [ingredient name is exactly search term], [ingredient name starts with search term], [ingredient name starts with any word from the search and contains all search terms in some order], [ingredient name contains all search terms in some order], ...and so on. Each of these is defined in the SELECT specification as an expression returning either 1 or 0, and so I order by those in sequential order. I would love to hear suggestions for: A better way to define complicated order-by criteria in one place, say perhaps in a view or stored procedure that you can pass just the search term to and get back a set of results without having to worry about how they're ordered? A better tool for this than MySQL's fulltext engine -- perhaps if I was using Sphinx or something [which I've heard of but not used before], would I find some sort of complicated config option designed to solve problems like this? Some google search terms which might turn up discussion on how to order text items within a specific domain like this? I haven't found much that's of use. Thanks for reading!

    Read the article

  • Getting Vars to bind properly across multiple files

    - by Alex Baranosky
    I am just learning Clojure and am having trouble moving my code into different files. I keep getting this error from apprunner.clj: Exception in thread "main" java.lang.Exception: Unable to resolve symbol: -run-application in this context. It seems to be finding the namespaces fine, but then not seeing the Vars as being bound... Any idea how to fix this? Here's my code:

    Application runner:

        (ns src/apprunner
          (:use src/functions))

        (def input-files [(resource-path "a.txt") (resource-path "b.txt") (resource-path "c.txt")])
        (def output-file (resource-path "output.txt"))

        (defn run-application []
          (sort-files input-files output-file))

        (-run-application)

    Application functions:

        (ns src/functions
          (:use clojure.contrib.duck-streams))

        (defn flatten [x]
          (let [s? #(instance? clojure.lang.Sequential %)]
            (filter (complement s?) (tree-seq s? seq x))))

        (defn resource-path [file]
          (str "C:/Users/Alex and Paula/Documents/SoftwareProjects/MyClojureApp/resources/" file))

        (defn split2 [str delim] (seq (.split str delim)))

        (defstruct person :first-name :last-name)

        (defn read-file-content [file] (apply str (interpose "\n" (read-lines file))))

        (defn person-from-line [line]
          (let [sections (split2 line " ")]
            (struct person (first sections) (second sections))))

        (defn formatted-for-display [person]
          (str (:first-name person) (.toUpperCase " ") (:last-name person)))

        (defn sort-by-keys [struct-map keys] (sort-by #(vec (map % [keys])) struct-map))

        (defn formatted-output [persons output-number]
          (let [heading (str "Output #" output-number "\n")
                sorted-persons-for-output (apply str (interpose "\n" (map formatted-for-display (sort-by-keys persons (:first-name :last-name)))))]
            (str heading sorted-persons-for-output)))

        (defn read-persons-from [file]
          (let [lines (read-lines file)]
            (map person-from-line lines)))

        (defn write-persons-to [file persons]
          (dotimes [i 3]
            (append-spit file (formatted-output persons (+ 1 i)))))

        (defn sort-files [input-files output-file]
          (let [persons (flatten (map read-persons-from input-files))]
            (write-persons-to output-file persons)))

    Read the article

  • How should I go about implementing a points-to analysis in Maude?

    - by reprogrammer
    I'm going to implement a points-to analysis algorithm, mainly based on the algorithm by Whaley and Lam. Whaley and Lam use a BDD-based implementation of Datalog to represent and compute the points-to analysis relations. The following lists some of the relations that are used in a typical points-to analysis. Note that D(w, z) :- A(w, x), B(x, y), C(y, z) means D(w, z) is true if A(w, x), B(x, y), and C(y, z) are all true. BDDs are the data structure used to represent these relations.

    Relations:

        input  vP0    (variable : V, heap : H)
        input  store  (base : V, field : F, source : V)
        input  load   (base : V, field : F, dest : V)
        input  assign (dest : V, source : V)
        output vP     (variable : V, heap : H)
        output hP     (base : H, field : F, target : H)

    Rules:

        vP(v, h)      :- vP0(v, h)
        vP(v1, h)     :- assign(v1, v2), vP(v2, h)
        hP(h1, f, h2) :- store(v1, f, v2), vP(v1, h1), vP(v2, h2)
        vP(v2, h2)    :- load(v1, f, v2), vP(v1, h1), hP(h1, f, h2)

    I need to understand whether Maude is a good environment for implementing points-to analysis. I noticed that Maude uses a BDD library called BuDDy, but it looks like Maude uses BDDs for a different purpose, i.e. unification. So I thought I might be able to use Maude instead of a Datalog engine to compute the relations of my points-to analysis. I assume Maude propagates independent information concurrently, and this concurrency could potentially make my points-to analysis faster than sequential processing of rules. But I don't know the best way to represent my relations in Maude. Should I implement BDDs in Maude myself, or does Maude's internal BDD-based unification have the same effect?

    Read the article

  • Sqlite3 "chained" query

    - by Arrieta
    I need to create a configuration file from a data file that looks as follows:

        MAN1_TIME  '01-JAN-2010 00:00:00.0000 UTC'
        MAN1_RX    123.45
        MAN1_RY    123.45
        MAN1_RZ    123.45
        MAN1_NEXT  'MAN2'
        MAN2_TIME  '01-MAR-2010 00:00:00.0000 UTC'
        MAN2_RX    123.45
        [...]
        MAN2_NEXT  'MANX'
        [...]
        MANX_TIME  [...]

    This file describes different "legs" of a trajectory. In this case, MAN1 is chained to MAN2, and MAN2 to MANX. In the original file, the chains are not as obvious (i.e., they are non-sequential). I've managed to read the file and store it in an Sqlite3 database (I'm using the Python interface). The table is stored with three columns: Id, Par, and Val; for instance, Id='MAN1', Par='RX', and Val='123.45'. I'm interested in querying this database to obtain the information related to 'n' legs. In English, that would be: "Select RX, RY, RZ for the next five legs starting on MAN1". So the query would go to MAN1, retrieve RX, RY, RZ, then read the parameter NEXT and go to that Id, retrieve RX, RY, RZ; read the parameter NEXT; go to that one ... like this, five times. How can I express such a query with "dynamic parameters"? Thank you.

    Read the article

  • Trouble with a deprecated constructor (Visual Basic, Visual Studio 2010)

    - by VBPRIML
    My goal is to print labels with barcodes and a date stamp from an entry to a Zebra TLP 2844 when the user clicks the OK button / hits Enter. I found what I think might be the code for this on Zebra's site and have been integrating it into my program, but part of it is deprecated and I can't quite figure out how to update it. Below is what I have so far. The printer is attached via USB, and the program will also store the entered numbers in a database, but I have that part done. Any help would be greatly appreciated.

        Public Class ScanForm
            Inherits System.Windows.Forms.Form

            Public Const GENERIC_WRITE = &H40000000
            Public Const OPEN_EXISTING = 3
            Public Const FILE_SHARE_WRITE = &H2

            Dim LPTPORT As String
            Dim hPort As Integer

            Public Declare Function CreateFile Lib "kernel32" Alias "CreateFileA" (ByVal lpFileName As String, _
                ByVal dwDesiredAccess As Integer, ByVal dwShareMode As Integer, _
                <MarshalAs(UnmanagedType.Struct)> ByRef lpSecurityAttributes As SECURITY_ATTRIBUTES, _
                ByVal dwCreationDisposition As Integer, ByVal dwFlagsAndAttributes As Integer, _
                ByVal hTemplateFile As Integer) As Integer

            Public Declare Function CloseHandle Lib "kernel32" Alias "CloseHandle" (ByVal hObject As Integer) As Integer

            Dim retval As Integer

            <StructLayout(LayoutKind.Sequential)> Public Structure SECURITY_ATTRIBUTES
                Private nLength As Integer
                Private lpSecurityDescriptor As Integer
                Private bInheritHandle As Integer
            End Structure

            Private Sub OKButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles OKButton.Click

                Dim TrNum
                Dim TrDate
                Dim SA As SECURITY_ATTRIBUTES
                Dim outFile As FileStream, hPortP As IntPtr

                LPTPORT = "USB001"
                TrNum = Me.ScannedBarcodeText.Text()
                TrDate = Now()

                hPort = CreateFile(LPTPORT, GENERIC_WRITE, FILE_SHARE_WRITE, SA, OPEN_EXISTING, 0, 0)

                hPortP = New IntPtr(hPort) 'convert Integer to IntPtr
                outFile = New FileStream(hPortP, FileAccess.Write) 'Create FileStream using Handle
                Dim fileWriter As New StreamWriter(outFile)

                fileWriter.WriteLine(" ")
                fileWriter.WriteLine("N")
                fileWriter.Write("A50,50,0,4,1,1,N,")
                fileWriter.Write(Chr(34))
                fileWriter.Write(TrNum) 'prints the tracking number variable
                fileWriter.Write(Chr(34))
                fileWriter.Write(Chr(13))
                fileWriter.Write(Chr(10))
                fileWriter.Write("A50,100,0,4,1,1,N,")
                fileWriter.Write(Chr(34))
                fileWriter.Write(TrDate) 'prints the date variable
                fileWriter.Write(Chr(34))
                fileWriter.Write(Chr(13))
                fileWriter.Write(Chr(10))
                fileWriter.WriteLine("P1")
                fileWriter.Flush()
                fileWriter.Close()
                outFile.Close()
                retval = CloseHandle(hPort)

                'Add entry to database
                Using connection As New SqlClient.SqlConnection("Data Source=MNGD-LABS-APP02;Initial Catalog=ScannedDB;Integrated Security=True;Pooling=False;Encrypt=False"), _
                    cmd As New SqlClient.SqlCommand("INSERT INTO [ScannedDBTable] (TrackingNumber, Date) VALUES (@TrackingNumber, @Date)", connection)

                    cmd.Parameters.Add("@TrackingNumber", SqlDbType.VarChar, 50).Value = TrNum
                    cmd.Parameters.Add("@Date", SqlDbType.DateTime, 8).Value = TrDate
                    connection.Open()
                    cmd.ExecuteNonQuery()
                    connection.Close()
                End Using

                'Prepare data for next entry
                ScannedBarcodeText.Clear()
                Me.ScannedBarcodeText.Focus()

            End Sub

    Read the article

  • Multiple connections in a single SSH SOCKS 5 Proxy

    - by Elie Zedeck
    Hey guys, my first question here on Stack Overflow: what do I need to do so that the SSH SOCKS 5 proxy (SSH2) will allow multiple connections? What I have noticed is that when I load a page in Firefox (already configured to use the SOCKS 5 proxy), it loads everything one by one. This can be seen with the naked eye, and I have also confirmed it through Firebug's NET tab, which logs the connections that have been made. I have already configured some of the directives on the about:config page, like pipelining, persistent proxy connections, and a few other things, but I still get this kind of sequential load of resources, which is noticeably slow.

        network.http.pipelining;true
        network.http.pipelining.maxrequests;8
        network.http.pipelining.ssl;true
        network.http.proxy.pipelining;true
        network.http.max-persistent-connections-per-proxy;100
        network.proxy.socks_remote_dns;true

    My ISP sucks because during the day it intentionally breaks connections on a random basis, so it is impossible to get meaningful work done without a lot of browser refreshes or hitting the F5 key. That is why I started looking for solutions. SSH's dynamic port forwarding is the best solution I have found to date, because it has pretty good compression, which saves a lot of useless traffic, and it is also secure. The only thing remaining is to get multiple connections running through it. Thanks for all the input.

    Read the article

  • My OpenCL kernel is slower on faster hardware.. But why?

    - by matdumsa
    Hi folks, as I was finishing coding my project for a multicore programming class, I came upon something really weird I wanted to discuss with you. We were asked to create any program that would show significant improvement when programmed for a multi-core platform. I decided to try coding something on the GPU to try out OpenCL. I chose the matrix convolution problem since I'm quite familiar with it (I've parallelized it before with open_mpi with great speedup for large images). So here it is: I select a large GIF file (2.5 MB) [2816x2112], run the sequential version (original code), and get an average of 15.3 seconds. I then run the new OpenCL version I just wrote on my MBP's integrated GeForce 9400M and I get timings of 1.26 s on average. So far so good, a speedup of 12x! But now I go into my Energy Saver panel to turn on "Graphic Performance Mode". That mode turns off the GeForce 9400M and turns on the GeForce 9600M GT my system has. Apple says this card is twice as fast as the integrated one. Guess what: my timings using the kick-ass graphics card are 3.2 seconds on average... My 9600M GT seems to be more than two times slower than the 9400M. For those of you that are OpenCL inclined: I copy all data to remote buffers before starting, so the actual computation doesn't require round trips to main RAM. Also, I let OpenCL determine the optimal local work size, as I've read they've done a pretty good job at figuring that parameter out. Anyone have a clue? Edit: full source code with makefiles is at http://www.mathieusavard.info/convolution.zip

        cd gimage
        make
        cd ../clconvolute
        make

    Put a large input.gif in clconvolute and run it to see the results.

    Read the article

  • Converting a PHP associative array to a JSON associative array

    - by Extrakun
    I am converting a look-up table in PHP which looks like this to JavaScript using json_encode: AbilitiesLookup Object ( [abilities:private] => Array ( [1] => Ability_MeleeAttack Object ( [abilityid:protected] => [range:protected] => 1 [name:protected] => MeleeAttack [ability_identifier:protected] => MeleeAttack [aoe_row:protected] => 1 [aoe_col:protected] => 1 [aoe_shape:protected] => [cooldown:protected] => 0 [focusCost:protected] => 0 [possibleFactions:protected] => 2 [abilityDesc:protected] => Basic Attack ) .....snipped... And in JSON, it is: {"1":{"name":"MeleeAttack","fof":"2","range":"1","aoe":[null,"1","1"],"fp":"0","image":"dummy.jpg"},.... The problem is I get a JS object, not an array, and the identifier is a number. I see 2 ways around this problem - either find a way to access the JSON using a number (which I do not know how) or make it such that json_encode (or some other custom encoding functions) can give a JavaScript associative array. (Yes, I am rather lacking in my JavaScript department). Note: The JSON output doesn't match the array - this is because I do a manual json encoding for each element in the subscript, before pushing it onto an array (with the index as the key), then using json_encode on it. To be clear, the number are not sequential because it's an associative array (which is why the JSON output is not an array).

    Read the article

  • PHP - Database schema: version control, branching, migrations.

    - by Billiam
    I'm trying to come up with (or find) a reusable system for database schema versioning in php projects. There are a number of Rails-style migration projects available for php. http://code.google.com/p/mysql-php-migrations/ is a good example. It uses timestamps for migration files, which helps with conflicts between branches. General problem with this kind of system: When development branch A is checked out, and you want to check out branch B instead, B may have new migration files. This is fine, migrating to newer content is straight forward. If branch A has newer migration files, you would need to migrate downwards to the nearest shared patch. If branch A and B have significantly different code bases, you may have to migrate down even further. This may mean: Check out B, determine shared patch number, check out A, migrate downwards to this patch. This must be done from A since the actual applied patches are not available in B. Then, checkout branch B, and migrate to newest B patch. Reverse process again when going from B to A. Proposed system: When migrating upwards, instead of just storing the patch version, serialize the whole patch in database for later use, though I'd probably only need the down() method. When changing branches, compare patches that have been run to patches that are available in the destination branch. Determine nearest shared patch (or oldest difference, maybe) between db table of run patches and patches in destination branch by ID or hash. Could also look for new or missing patches that are buried under a number of shared patches between the two branches. Automatically merge down to the nearest shared patch, using the db table stored down() methods, and then merge up to the branche's latest patch. My question is: Is this system too crazy and/or fraught with consequences to bother developing? My experience with database schema versioning is limited to PHP autopatch, which is an up()-only system requiring filenames with sequential IDs.

    Read the article

  • Unit Testing Error - The unit test adapter failed to connect to the data source or to read the data

    - by michael.lukatchik
    I'm using VSTS 2K8 and I've set up a unit test project. In it, I have a test class with a method that does a simple assertion. I'm using an Excel 2007 spreadsheet as my data source. My test method looks like this:

        [DataSource("System.Data.Odbc",
                    "Dsn=Excel Files;dbq=|DataDirectory|\\MyTestData.xlsx;defaultdir=C:\\TestData;driverid=1046;maxbuffersize=2048;pagetimeout=5",
                    "Sheet1", DataAccessMethod.Sequential)]
        [DeploymentItem("MyTestData.xlsx")]
        [TestMethod()]
        public void State_Value_Is_Set()
        {
            string expected = "MD";
            string actual = TestContext.DataRow["State"] as string;
            Assert.AreEqual(expected, actual);
        }

    As indicated in the method decoration attributes, my Excel spreadsheet is on my local C: drive. In it, the sheet where all of my data is located is named "Sheet1". I've copied the Excel spreadsheet into my project, set its Build Action to "Content", and set its Copy to Output Directory to "Copy if newer". When trying to run this simple unit test, I receive the following error:

        The unit test adapter failed to connect to the data source or to read the data. For more
        information on troubleshooting this error, see "Troubleshooting Data-Driven Unit Tests"
        (http://go.microsoft.com/fwlink/?LinkId=62412) in the MSDN Library. Error details:
        ERROR [42S02] [Microsoft][ODBC Excel Driver] The Microsoft Office Access database engine
        could not find the object 'Sheet1'. Make sure the object exists and that you spell its
        name and the path name correctly.

    I've verified that the sheet name is spelled correctly (i.e. Sheet1) and I've verified that my data sources are set correctly. Web searches haven't turned up much at all, and I'm totally stumped. All help or input is appreciated!
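
    One detail that often explains exactly this 42S02 error: the Excel ODBC driver exposes each worksheet as a table named with a trailing dollar sign, so the table argument is usually "Sheet1$" rather than "Sheet1". A sketch of the adjusted attributes, with the connection string otherwise as in the question:

        [DeploymentItem("MyTestData.xlsx")]
        [DataSource("System.Data.Odbc",
                    "Dsn=Excel Files;dbq=|DataDirectory|\\MyTestData.xlsx;defaultdir=C:\\TestData;driverid=1046;maxbuffersize=2048;pagetimeout=5",
                    "Sheet1$",                    // worksheet names appear as tables with a trailing '$'
                    DataAccessMethod.Sequential)]
        [TestMethod]
        public void State_Value_Is_Set()
        {
            Assert.AreEqual("MD", TestContext.DataRow["State"] as string);
        }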

    Read the article

  • log4js ConsoleAppender initialization

    - by perrierism
    I'm wondering if anyone happens to have some experience using Log4js? It seems its normal ConsoleAppender isn't always ready to use immediately after it's added to a logger object... If I have two sequential script tags in a document like: //Initialize logger <script type="text/javascript"> var logger = new Log4js.getLogger("JSLOG"); logger.addAppender(new Log4js.ConsoleAppender(logger, false)); logger.setLevel(Log4js.Level.INFO); </script> //Use logger <script type="text/javascript"> logger.info('Test test'); </script> ... It causes the console pop-up (pop-up window) to appear with an error message on page load: 12:58:23 PM WARN Log4js - Could not run the listener function () { return fn.apply(object, arguments); }. TypeError: this.outputElement is null The console is still initialised, it's there afterward, but for just that first logger call it doesn't seem to be there fully. If I make the first logger call setTimeout("logger.info('test test')", 1000), it doesn't have the error. So it's like it's not ready immediately. Anyone see this before or know what a workaround might be? Cheers

    Read the article

  • Is there a java library / package analogous to <stdio.h>?

    - by Roboprog
    I have been doing Java on and off for about 14 years, and almost nothing else the last 6 years or so. I really hate the java.io package -- its legion of subclasses and adapters. I do like exceptions, rather than having to always poll "errno" and the like, but I could surely live without declared exceptions. Is there anything that functions like the Unix/ANSI stdio.h routines in C? I know we will never be rid of java.io and its conventions until java itself is retired, as they have metastasized throughout the many frameworks that have accreted to java. That said, I would like something that works kind of like this (let's call it package javax.stdio): Have a main utility class, perhaps FileStar, that can read and write files (or pipes), either text or binary, either sequentially or random access, with constructors that mimic fopen() and popen(). This class should have a load of useful methods that do things like fread(), fwrite(), fgets(), fputs(), fseek(), and whatever else (fprintf()?). Methods that are incompatible with the open/construct mode simply throw up (just like some of the collections classes/methods do when restricted). Then, have a bunch of interfaces that suggest how you intend to use the stream once you have created it: Sequential, RandomAccess, ReadOnly, WriteOnly, Text, Binary, plus combinations of these that make sense. Perhaps even have methods to return the appropriate type-cast (interface), throwing up if you have asked for something incompatible. For extra flavor, skip the declared exceptions -- e.g. - javax.stdio.IOException extends RuntimeException. Is there an open source project like this floating around?

    Read the article

  • How to handle "Remember me" in the Asp.Net Membership Provider

    - by RemotecUk
    I've written a custom membership provider for my ASP.NET website. I'm using the default FormsAuthentication redirect, where you simply pass true to the method to tell it to "remember me" for the current user. I presume that this function simply writes a cookie to the local machine containing some login credential for the user. What does ASP.NET put in this cookie? If the format of my usernames were known (e.g. sequential numbering), could someone easily copy this cookie and, by putting it on their own machine, access the site as another user? Additionally, I need to be able to intercept the authentication of the user who has the cookie. Since the last time they logged in, their account may have been cancelled, they may need to change their password, etc., so I need the option to intercept the authentication and, if everything is still OK, allow them to continue, or otherwise redirect them to the proper login page. I would be grateful for guidance on both of these points. I gather that for the second I can possibly put something in Global.asax to intercept the authentication? Thanks in advance.
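
    For the first point: the cookie does not contain the raw user name in a guessable form; it holds an encrypted, tamper-protected FormsAuthenticationTicket (keyed off the machineKey), so knowing the username format alone should not let someone forge it, though a cookie copied from a machine stays valid until it expires unless the account is re-checked server-side. For the second point, Global.asax is indeed a common place for that re-check; a minimal sketch, where AccountService stands in for whatever lookup the application already has:

        // In Global.asax.cs (requires System.Web.Security).
        protected void Application_PostAuthenticateRequest(object sender, EventArgs e)
        {
            if (!Request.IsAuthenticated) return;

            // Hypothetical application-specific check: is the account still active, password still valid, etc.?
            if (!AccountService.IsStillValid(User.Identity.Name))
            {
                FormsAuthentication.SignOut();
                Response.Redirect(FormsAuthentication.LoginUrl);
            }
        }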

    Read the article

  • Optimize CUDA with Thrust in a loop

    - by macs
    Given the following piece of code, generating a kind of code dictionary with CUDA using Thrust (a C++ template library for CUDA):

        thrust::device_vector<float> dCodes(codes->begin(), codes->end());
        thrust::device_vector<int> dCounts(counts->begin(), counts->end());
        thrust::device_vector<int> newCounts(counts->size());

        for (int i = 0; i < dCodes.size(); i++) {
          float code = dCodes[i];
          int count = thrust::count(dCodes.begin(), dCodes.end(), code);
          newCounts[i] = dCounts[i] + count;

          // Had we already a count in one of the last runs?
          if (dCounts[i] > 0) {
            newCounts[i]--;
          }

          // Remove
          thrust::detail::normal_iterator<thrust::device_ptr<float> > newEnd = thrust::remove(dCodes.begin()+i+1, dCodes.end(), code);
          int dist = thrust::distance(dCodes.begin(), newEnd);
          dCodes.resize(dist);
          newCounts.resize(dist);
        }

        codes->resize(dCodes.size());
        counts->resize(newCounts.size());
        thrust::copy(dCodes.begin(), dCodes.end(), codes->begin());
        thrust::copy(newCounts.begin(), newCounts.end(), counts->begin());

    The problem is that I've noticed multiple copies of 4 bytes, using the CUDA Visual Profiler. IMO these are generated by:
    - the loop counter i,
    - float code, int count, and dist,
    - every access to i and the variables noted above.
    This seems to slow everything down (sequential copying of 4 bytes is no fun...). So, how do I tell Thrust that these variables should be handled on the device? Or are they already? Using thrust::device_ptr does not seem sufficient, because I'm not sure whether the surrounding for loop runs on the host or on the device (which could also be another reason for the slowness).

    Read the article

  • check whether mmap'ed address is correct

    - by reddot
    I'm writing a heavily loaded daemon that should run on FreeBSD 8.0 and on Linux as well. The main purpose of the daemon is to pass files that are requested by their identifier. The identifier is converted into a local filename/file size via a request to a db, and then I use sequential mmap() calls to pass the file blocks with send(). However, sometimes there is a mismatch between the file size in the db and the file size on the filesystem (real size < size in db). In this situation I've sent all the real data blocks, and when the next data block is mapped, mmap returns no error, just the usual address (I've also checked the errno variable; it's equal to zero after mmap). But when the daemon tries to send this block it gets a segmentation fault. (This behaviour is reliably triggered on FreeBSD 8.0 amd64.) I was using a safety check before open to verify the size with a stat() call, but real life has shown me that the segfault can still be raised in rare situations. So, my question is: is there a way to check whether a pointer is accessible before dereferencing it? When I opened the core in gdb, gdb says the given address is out of bounds. Perhaps there is another solution somebody can propose.

    Read the article

  • How to macro-ify ant targets?

    - by Jonas Byström
    I want to be able to have different targets doing nearly the same thing, as so: ant build <- this would be a normal (default) build ant safari <- building the safari target. The targets look like this: <target name="build" depends="javac" description="GWT compile to JavaScript"> <java failonerror="true" fork="true" classname="com.google.gwt.dev.Compiler"> <classpath> <pathelement location="src"/> <path refid="project.class.path"/> </classpath> <jvmarg value="-Xmx256M"/> <arg value="${lhs.target}"/> </java> </target> <target name="safari" depends="javac" description="GWT compile to Safari/JavaScript"> <java failonerror="true" fork="true" classname="com.google.gwt.dev.Compiler"> <classpath> <pathelement location="src"/> <path refid="project.class.path"/> </classpath> <jvmarg value="-Xmx256M"/> <arg value="${lhs.safari.target}"/> </java> </target> (Nevermind the first thought that strikes: throw out ant! That's not an option just yet.) I tried using macrodef, but got a strange error message (even though the message didn't imply it, it think it had to do with putting a target in sequential). I don't want to do ant -Dwhatever=nevermind. Any ideas?

    Read the article

  • updating system's time using .Net

    - by user62958
    I am trying to update my system time using the following:

        [StructLayout(LayoutKind.Sequential)]
        private struct SYSTEMTIME
        {
            public ushort wYear;
            public ushort wMonth;
            public ushort wDayOfWeek;
            public ushort wDay;
            public ushort wHour;
            public ushort wMinute;
            public ushort wSecond;
            public ushort wMilliseconds;
        }

        [DllImport("kernel32.dll", EntryPoint = "GetSystemTime", SetLastError = true)]
        private extern static void Win32GetSystemTime(ref SYSTEMTIME lpSystemTime);

        [DllImport("kernel32.dll", EntryPoint = "SetSystemTime", SetLastError = true)]
        private extern static bool Win32SetSystemTime(ref SYSTEMTIME lpSystemTime);

        public void SetTime()
        {
            TimeSystem correctTime = new TimeSystem();
            DateTime sysTime = correctTime.GetSystemTime();

            // Call the native GetSystemTime method with the defined structure.
            SYSTEMTIME systime = new SYSTEMTIME();
            Win32GetSystemTime(ref systime);

            // Set the system clock to the corrected time.
            systime.wYear = (ushort)sysTime.Year;
            systime.wMonth = (ushort)sysTime.Month;
            systime.wDayOfWeek = (ushort)sysTime.DayOfWeek;
            systime.wDay = (ushort)sysTime.Day;
            systime.wHour = (ushort)sysTime.Hour;
            systime.wMinute = (ushort)sysTime.Minute;
            systime.wSecond = (ushort)sysTime.Second;
            systime.wMilliseconds = (ushort)sysTime.Millisecond;

            Win32SetSystemTime(ref systime);
        }

    When I debug, everything looks good and all the values are correct, but when it calls Win32SetSystemTime(ref systime) the actual time of the system (display time) doesn't change and stays the same. The strange part is that when I call Win32GetSystemTime(ref systime) it gives me the new updated time. Can someone give me some help on this?
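
    Two things worth checking here, sketched below: SetSystemTime interprets the SYSTEMTIME as UTC, so it should be filled from the UTC form of the correct time (otherwise the clock ends up offset by the time zone), and on Vista/Windows 7 the call usually fails without elevation because it needs the system-time privilege. Since SetLastError = true is already declared, the return value plus Marshal.GetLastWin32Error will say which of these is happening:

        bool ok = Win32SetSystemTime(ref systime);
        if (!ok)
        {
            // Error 1314 (ERROR_PRIVILEGE_NOT_HELD) typically means the process is not
            // running elevated / lacks SeSystemtimePrivilege.
            int error = System.Runtime.InteropServices.Marshal.GetLastWin32Error();
            throw new System.ComponentModel.Win32Exception(error);
        }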

    Read the article

  • Data Flow Object Graph and An Execution Engine for it

    - by M Dotnet
    I would like to build a custom workflow engine from scratch. The input to this workflow is a data flow diagram, which is composed of a series of activities connected by lines, each representing the data flow. Each activity can export multiple outputs. Activities are complex math functions, but the logic is hidden from the user. My workflow engine's job is to execute a given data flow diagram. Each activity within the diagram is a custom activity, and each activity can produce different outputs. How do you suggest modelling the data flow diagram object? I need to be able to construct the diagram programmatically (no need for drag and drop), but I need to display the final result graphically (for display and debugging purposes). Are there any libraries out there that I could use? Should I keep the workflow representable as XML? I know that there are many projects out there essentially doing a similar thing by building such workflow engines, but I need something lightweight and open source. I do not need a state-machine execution engine; mine is primarily a sequential workflow with fork and join capabilities. My activities are wrappable as basic C# classes, and I do NOT want to use anything as heavy as .NET Workflow Foundation.
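
    For the graph model itself, a small adjacency representation plus a topological walk is often enough to get started, and fork/join comes out naturally because independent branches have no edges between them; the same two collections can be serialized to XML for persistence and handed to any graph-drawing layer for display. A sketch (illustrative names only, not any particular library):

        using System.Collections.Generic;
        using System.Linq;

        public abstract class Activity
        {
            public string Name { get; set; }
            // Consumes named inputs, produces named outputs.
            public abstract IDictionary<string, object> Execute(IDictionary<string, object> inputs);
        }

        public sealed class Edge
        {
            public Activity Source; public string OutputName;
            public Activity Target; public string InputName;
        }

        public sealed class DataFlowGraph
        {
            public List<Activity> Activities = new List<Activity>();
            public List<Edge> Edges = new List<Edge>();

            public void Run()
            {
                var results = new Dictionary<Activity, IDictionary<string, object>>();
                foreach (var activity in TopologicalOrder())
                {
                    var inputs = new Dictionary<string, object>();
                    foreach (var edge in Edges.Where(e => e.Target == activity))
                        inputs[edge.InputName] = results[edge.Source][edge.OutputName];
                    results[activity] = activity.Execute(inputs);
                }
            }

            private IEnumerable<Activity> TopologicalOrder()
            {
                // Kahn-style ordering: repeatedly emit an activity whose upstream activities have all run.
                var remaining = new List<Activity>(Activities);
                var done = new HashSet<Activity>();
                while (remaining.Count > 0)
                {
                    var ready = remaining.First(a => Edges.Where(e => e.Target == a).All(e => done.Contains(e.Source)));
                    remaining.Remove(ready);
                    done.Add(ready);
                    yield return ready;
                }
            }
        }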

    Read the article

  • Start-job to call script from main

    - by Naveen
    I have three scripts. From the main script (script 1) I call the other two scripts so that I can execute them in parallel, because running them sequentially takes too much time; only the variables differ between the two. How can I merge scripts 2 and 3 into a single script that I can still call from the main script and have run in parallel?

        1 CompareCtrlM... Completed False localhost ######################...
        3 CompareCtrlM... Completed True  localhost ######################...

    Main script (1):

        Start-Job -Name "LoopComparectrlMasterModel" -filepath D:\tmp\naveen\Script\CompareCtrlMasterCtrlModel.ps1
        Start-Job -Name "LoopCompareProdMasterModel" -filepath D:\idv\CA\rcm_data\tmp\work\CompareCtrlMasterProdModel.ps1
        Wait-Job -Name "LoopComparectrlMasterModel"
        Receive-Job "LoopComparectrlMasterModel"
        Wait-Job -Name "LoopCompareProdMasterModel"
        Receive-Job "LoopCompareProdMasterModel"

    Script 2:

        for ($i = 1 ; $i -lt 3; $i++){
            $jobName = 'CompareCtrlMasterProdModelESS$i'
            echolog $THISSCRIPT $RCM_UPDATE_LOG_FILE $LLINFO ("Starting Ctrl Master-Prod Model comparison #" + $i + ", create SBT")
            $rc = CreateSbtFile $sbtCompareCtrlMasterProdModel[$i-1] $cfgProdModel $cfgCtrlMaster "" "" $SBT_MODE_COMPARE_CFGS_FULL $workDir
            Start-Job -Name "$jobName" -filepath $ExecuteSbtWithRcmClientTool -ArgumentList $sbtCompareCtrlMasterProdModel[$i-1],"",$true,$false | Out-Null
            Wait-Job -Name "$jobName"
            $results = Receive-Job -Name $jobName
        }

    Script 3:

        for ($i = 1 ; $i -lt 3; $i++){
            $jobName = 'CompareCtrlMasterCtrlModelESS$i'
            echolog $THISSCRIPT $RCM_UPDATE_LOG_FILE $LLINFO ("Starting Ctrl Master-Ctrl Model comparison #" + $i + ", create SBT")
            $rc = CreateSbtFile $sbtCompareCtrlMasterCtrlModel[$i-1] $cfgCtrlModel $cfgCtrlMaster "" "" $SBT_MODE_COMPARE_CFGS_FULL $workDir
            Start-Job -Name "$jobName" -filepath $ExecuteSbtWithRcmClientTool -ArgumentList $sbtCompareCtrlMasterCtrlModel[$i-1],"",$true,$false | Out-Null
            Wait-Job -Name "$jobName"
            $results = Receive-Job -Name $jobName
        }
        write-output $results

    Thanks a lot for the help. Regards, Naveen.

    Read the article

  • Suggestions on implementing an iPad magazine app

    - by alku83
    I've been tasked with creating a magazine style app for iPad. Ideally it would look a little something like the Zinio app: http://www.zinio.com/ipad/ . This app is effectively a shell, allowing you to sample magazines (eg. read the first few pages) and select them for download. The magazines appear to have some sort of overlay, allowing you to interact with some things (eg. tap to watch a video). A few questions that come to mind: How would I go about delivering content to the user? In-app purchases aren't really an option, as some content will need to be delivered for free. Is it possible to download a package and make this available within the application? What format would be suitable for displaying the magazine? Sequential images, PDF, ebook? I'll need to have some form of interactivity. I guess I could have some form of lookup table, which would include information such as if the user taps on this page, within these coordinates, then launch this item. Anyone dealt with any similar issues?

    Read the article
