Search Results

Search found 2157 results on 87 pages for 'sequential workflow'.

Page 70 of 87

  • Logging raw HTTP request/response in ASP.NET MVC & IIS7

    - by Greg Beech
    I'm writing a web service (using ASP.NET MVC) and for support purposes we'd like to be able to log the requests and response in as close as possible to the raw, on-the-wire format (i.e including HTTP method, path, all headers, and the body) into a database. What I'm not sure of is how to get hold of this data in the least 'mangled' way. I can re-constitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response) but I'd really like to get hold of the actual request/response data that's sent on the wire. I'm happy to use any interception mechanism such as filters, modules, etc. and the solution can be specific to IIS7. However, I'd prefer to keep it in managed code only. Any recommendations? Edit: I note that HttpRequest has a SaveAs method which can save the request to disk but this reconstructs the request from the internal state using a load of internal helper methods that cannot be accessed publicly (quite why this doesn't allow saving to a user-provided stream I don't know). So it's starting to look like I'll have to do my best to reconstruct the request/response text from the objects... groan. Edit 2: Please note that I said the whole request including method, path, headers etc. The current responses only look at the body streams which does not include this information. Edit 3: Does nobody read questions around here? Five answers so far and yet not one even hints at a way to get the whole raw on-the-wire request. Yes, I know I can capture the output streams and the headers and the URL and all that stuff from the request object. I already said that in the question, see: I can re-constitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response) but I'd really like to get hold of the actual request/response data that's sent on the wire. If you know the complete raw data (including headers, url, http method, etc.) simply cannot be retrieved then that would be useful to know. Similarly if you know how to get it all in the raw format (yes, I still mean including headers, url, http method, etc.) without having to reconstruct it, which is what I asked, then that would be very useful. But telling me that I can reconstruct it from the HttpRequest/HttpResponse objects is not useful. I know that. I already said it. Please note: Before anybody starts saying this is a bad idea, or will limit scalability, etc., we'll also be implementing throttling, sequential delivery, and anti-replay mechanisms in a distributed environment, so database logging is required anyway. I'm not looking for a discussion of whether this is a good idea, I'm looking for how it can be done.
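
    A minimal sketch of the reconstruction approach is shown below. In the IIS7 integrated pipeline, purely managed code never sees the literal on-the-wire bytes (HTTP.sys consumes them first), so rebuilding the text from HttpRequest is about as close as a managed-only module can get. The HTTP version is assumed rather than read from the wire, and Trace.WriteLine stands in for the real database write.

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Text;
        using System.Web;

        // Sketch only: rebuilds the request line, headers and body from HttpRequest.
        public class RequestLoggingModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += delegate
                {
                    HttpRequest request = app.Context.Request;
                    StringBuilder sb = new StringBuilder();

                    // Request line (the HTTP version is an assumption).
                    sb.AppendFormat("{0} {1} HTTP/1.1\r\n", request.HttpMethod, request.RawUrl);

                    // Headers as ASP.NET exposes them.
                    foreach (string name in request.Headers)
                        sb.AppendFormat("{0}: {1}\r\n", name, request.Headers[name]);
                    sb.Append("\r\n");

                    // Body; rewind afterwards so the rest of the pipeline can still read it.
                    request.InputStream.Position = 0;
                    sb.Append(new StreamReader(request.InputStream).ReadToEnd());
                    request.InputStream.Position = 0;

                    Trace.WriteLine(sb.ToString()); // replace with the database logging call
                };
            }

            public void Dispose() { }
        }

    The response side can be approximated the same way by installing a tee stream in Response.Filter during BeginRequest and dumping its captured bytes, together with Response.Status and Response.Headers, at EndRequest.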

    Read the article

  • BASH, multiple arrays and a loop.

    - by S1syphus
    At work we have 7 or 8 hard drives that we dispatch around the country, each with a unique label; the labels are not sequential. Ideally a drive is plugged into our desktop and then gets the folders from the server that correspond to the drive name. Sometimes only one hard drive gets plugged in, sometimes several, and possibly more will be added in the future. Each mounts to /Volumes/ under its identifier; so for example /Volumes/f00, where f00 is the identifier. What I want to happen: scan the volumes, see if any of the drives are plugged in, then check the server to see if the corresponding folder exists, and if it does, copy the folder and its subfolders recursively. Here is what I have so far; it checks whether the drive exists in /Volumes: #!/bin/sh #Declare drives in the array ARRAY=( foo bar long ) #Get the drives from the array DRIVES=${#ARRAY[@]} #Define base dir to check BaseDir="/Volumes" #Define shared server fold on local mount points #I plan to use AFP eventually, but for the sake of ease #using a local mount. ServerMount="BigBlue" #Define folder name for where files are to come from Dispatch="File-Dispatch" dir="$BaseDir/${ARRAY[${i}]}" #Loop through each item in the array and check if exists on /Volumes for (( i=0;i<$DRIVES;i++)); do dir="$BaseDir/${ARRAY[${i}]}" if [ -d "$dir" ]; then echo "$dir exists, you win." else echo "$dir is not attached." fi done What I can't figure out is how to check the server folders while looping through the hard drive mount points. So I could do something like: #!/bin/sh #Declare drives, and folder location in arrays ARRAY=( foo bar long ) ARRAY1=($(ls ""$BaseDir"/"$ServerMount"/"$Dispatch"")) #Get the drives from the array DRIVES=${#ARRAY[@]} SERVERFOLDER=${#ARRAY1[@]} #Define base dir to check BaseDir="/Volumes" #Define shared server fold on local mount points ServerMount="BigBlue #Define folder name for where files are to come from Dispatch="File-Dispatch" dir="$BaseDir/${ARRAY[${i}]}" #List the contents from server directory into array ARRAY1=($(ls ""$BaseDir"/"$ServerMount"/"$Dispatch"")) echo ${list[@]} for (( i=0;i<$DRIVES;i++)); (( i=0;i<$SERVERFOLDER;i++)); do dir="$BaseDir/${ARRAY[${i}]}" ser="${ARRAY1[${i}]}" if [ "$dir" =~ "$sir" ]; then cp "$sir" "$dir" else echo "$dir is not attached." fi done I know that is pretty wrong... well, very wrong, but I hope it gives you the idea of what I am trying to achieve. Any ideas or suggestions?

    Read the article

  • architecture python question

    - by tom smith
    hi. creating a distributed crawling python app. it consists of a master server, and associated client apps that will run on client servers. the purpose of the client app is to run across a targeted site, to extract specific data. the clients need to go "deep" within the site, behind multiple levels of forms, so each client is specifically geared towards a given site. each client app looks something like main: parse initial url call function level1 (data1) function level1 (data) parse the url, for data1 use the required xpath to get the dom elements call the next function call level2 (data) function level2 (data2) parse the url, for data2 use the required xpath to get the dom elements call the next function call level3 function level3 (dat3) parse the url, for data3 use the required xpath to get the dom elements call the next function call level4 function level4 (data) parse the url, for data4 use the required xpath to get the dom elements at the final function.. --all the data output, and eventually returned to the server --at this point the data has elements from each function... my question: given that the number of calls that is made to the child function by the current function varies, i'm trying to figure out the best approach. each function essentialy fetches a page of content, and then parses the page using a number of different XPath expressions, combined with different regex expressions depending on the site/page. if i run a client on a single box, as a sequential process, it'll take awhile, but the load on the box is rather small. i've thought of attempting to implement the child functions as threads from the current function, but that could be a nightmare, as well as quickly bring the "box" to its knees! i've thought of breaking the app up in a manner that would allow the master to essentially pass packets to the client boxes, in a way to allow each client/function to be run directly from the master. this process requires a bit of rewrite, but it has a number of advantages. a bunch of redundancy, and speed. it would detect if a section of the process was crashing and restart from that point. but not sure if it would be any faster... i'm writing the parsing scripts in python.. so... any thoughts/comments would be appreciated... i can get into a great deal more detail, but didn't want to bore anyone!! thanks! tom

    Read the article

  • Problem getting correct parameters for C# P/Invoke call to C++ dll

    - by Jim Jones
    Trying to interop some functionality from the Outside In API from Oracle. I have the following function: SCCERR EXOpenExport(VTHDOC hDoc, VTDWORD dwOutputId, VTDWORD dwSpecType, VTLPVOID pSpec, VTDWORD dwFlags, VTSYSPARAM dwReserved, VTLPVOID pCallbackFunc, VTSYSPARAM dwCallbackData, VTLPHEXPORT phExport); From the header files I reduced the parameters to: typedef VTSYSPARAM VTHDOC, VTLPHDOC * typedef DWORD_PTR VTSYSPARAM typedef unsigned long DWORD_PTR typedef unsigned long VTDWORD typedef VTVOID* VTLPVOID #define VTVOID void typedef VTHDOC VTHEXPORT, *VTLPEXPORT These are for 32-bit Windows. Going through the header files, the example programs, and the documentation I found: 1. That pSpec could be a pointer to a buffer or NULL, so I set it to IntPtr.Zero (documentation). 2. That dwFlags and dwReserved, according to the documentation, "Must be set by the developer to 0". 3. That pCallbackFunc can be set to NULL if I don't want to handle callbacks. 4. That the last two are based on structs that I wrote C# wrappers for using [StructLayout(LayoutKind.Sequential)]. I then instantiated an instance and generated the parameters by first creating an IntPtr with Marshal.AllocHGlobal(Marshal.SizeOf(instance)), then getting the address value, which is passed as a uint for dwCallbackData and an IntPtr for phExport. The final parameter list is as follows: 1. phDoc as an IntPtr which was loaded with an address by the DAOpenDocument function called before. 2. dwOutputId as uint set to 1535, which represents FI_JPEGFIF. 3. dwSpecType as int set to 2, which represents IOTYPE_ANSIPATH. 4. pSpec as IntPtr.Zero, where the output will be written. 5. dwFlags as uint set to 0 as directed. 6. dwReserved as uint set to 0 as directed. 7. pCallbackFunc as IntPtr set to NULL as I will handle results. 8. dwCallbackData as uint, the address of a buffer for a struct. 9. phExport as IntPtr to another struct buffer. I still get an undefined error from the API, meaning that the call returns a 961 which is not defined in any of the header files. In the past I have gotten this when my choice of parameter types is incorrect. I started out using Interop Assistant, which was helpful in learning how many of the parameter types get translated. It is however limited by how well I am able to glean the correct native type from the header files. For example, the hDoc parameter used in the preceding function was defined as a non-filesystem handle, so I attempted to use Marshal to create a handle, then used an IntPtr, and finally it turned out to be an int (actually it was &phDoc used here). So is there a more scientific way of doing this, other than trial and error? Jim
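
    For illustration, a hedged sketch of one way the declaration could look, based only on the typedefs quoted above: VTDWORD (unsigned long) maps to a 32-bit uint, while VTSYSPARAM/VTHDOC (DWORD_PTR) is pointer-sized, so IntPtr is the safer mapping. The DLL name and calling convention below are assumptions rather than facts from the API, and because dwSpecType is IOTYPE_ANSIPATH it may be that pSpec has to point at an ANSI output path rather than being NULL.

        using System;
        using System.Runtime.InteropServices;

        internal static class OutsideInExport
        {
            // DLL name and calling convention are placeholders - check the export headers.
            [DllImport("sccex.dll", CallingConvention = CallingConvention.Cdecl)]
            internal static extern int EXOpenExport(   // SCCERR treated as a 32-bit status code
                IntPtr hDoc,            // VTHDOC      -> DWORD_PTR, pointer-sized
                uint dwOutputId,        // VTDWORD     -> e.g. 1535 (FI_JPEGFIF)
                uint dwSpecType,        // VTDWORD     -> e.g. 2 (IOTYPE_ANSIPATH)
                IntPtr pSpec,           // VTLPVOID    -> spec buffer or IntPtr.Zero
                uint dwFlags,           // VTDWORD     -> 0
                IntPtr dwReserved,      // VTSYSPARAM  -> IntPtr.Zero
                IntPtr pCallbackFunc,   // VTLPVOID    -> IntPtr.Zero when no callbacks
                IntPtr dwCallbackData,  // VTSYSPARAM  -> opaque value echoed to the callback
                out IntPtr phExport);   // VTLPHEXPORT -> receives the export handle

            // Hypothetical usage passing an ANSI output path as the spec
            // (assumes the library copies the spec during the call).
            internal static int OpenJpegExport(IntPtr hDoc, string outputPath, out IntPtr hExport)
            {
                IntPtr pSpec = Marshal.StringToHGlobalAnsi(outputPath);
                try
                {
                    return EXOpenExport(hDoc, 1535, 2, pSpec, 0,
                        IntPtr.Zero, IntPtr.Zero, IntPtr.Zero, out hExport);
                }
                finally
                {
                    Marshal.FreeHGlobal(pSpec);
                }
            }
        }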

    Read the article

  • Event sourcing: Write event before or after updating the model

    - by Magnus
    I'm reasoning about event sourcing and often I arrive at a chicken and egg problem. Would be grateful for some hints on how to reason around this. If I execute all I/O-bound processing async (ie writing to the event log) then how do I handle, or sometimes even detect, failures? I'm using Akka Actors so processing is sequential for each event/message. I do not have any database at this time, instead I would persist all the events in an event log and then keep an aggregated state of all the events in a model stored in memory. Queries are all against this model, you can consider it to be a cache. Example Creating a new user: Validate that the user does not exist in model Persist event to journal Update model (in memory) If step 3 breaks I still have persisted my event so I can replay it at a later date. If step 2 breaks I can handle that as well gracefully. This is fine, but since step 2 is I/O-bound I figured that I should do I/O in a separate actor to free up the first actor for queries: Updating a user while allowing queries (A0 = Front end/GUI actor, A1 = Processor Actor, A2 = IO-actor, E = event bus). (A0-E-A1) Event is published to update user 'U1'. Validate that the user 'U1' exists in model (A1-A2) Persist event to journal (separate actor) (A0-E-A1-A0) Query for user 'U1' profile (A2-A1) Event is now persisted continue to update model (A0-E-A1-A0) Query for user 'U1' profile (now returns fresh data) This is appealing since queries can be processed while I/O-is churning along at it's own pace. But now I can cause myself all kinds of problems where I could have two incompatible commands (delete and then update) be persisted to the event log and crash on me when replayed up at a later date, since I do the validation before persisting the event and then update the model. My aim is to have a simple reasoning around my model (since Actor processes messages sequentially single threaded) but not be waiting for I/O-bound updates when Querying. I get the feeling I'm modeling a database which in itself is might be a problem. If things are unclear please write a comment.

    Read the article

  • Multiprogramming in Django, writing to the Database

    - by Marcus Whybrow
    Introduction I have the following code which checks to see if a similar model exists in the database, and if it does not it creates the new model: class BookProfile(): # ... def save(self, *args, **kwargs): uniqueConstraint = {'book_instance': self.book_instance, 'collection': self.collection} # Test for other objects with identical values profiles = BookProfile.objects.filter(Q(**uniqueConstraint) & ~Q(pk=self.pk)) # If none are found create the object, else fail. if len(profiles) == 0: super(BookProfile, self).save(*args, **kwargs) else: raise ValidationError('A Book Profile for that book instance in that collection already exists') I first build my constraints, then search for a model with those values which I am enforcing must be unique Q(**uniqueConstraint). In addition I ensure that if the save method is updating and not inserting, that we do not find this object when looking for other similar objects ~Q(pk=self.pk). I should mention that I ham implementing soft delete (with a modified objects manager which only shows non-deleted objects) which is why I must check for myself rather then relying on unique_together errors. Problem Right thats the introduction out of the way. My problem is that when multiple identical objects are saved in quick (or as near as simultaneous) succession, sometimes both get added even though the first being added should prevent the second. I have tested the code in the shell and it succeeds every time I run it. Thus my assumption is if say we have two objects being added Object A and Object B. Object A runs its check upon save() being called. Then the process saving Object B gets some time on the processor. Object B runs that same test, but Object A has not yet been added so Object B is added to the database. Then Object A regains control of the processor, and has allready run its test, even though identical Object B is in the database, it adds it regardless. My Thoughts The reason I fear multiprogramming could be involved is that each Object A and Object is being added through an API save view, so a request to the view is made for each save, thus not a single request with multiple sequential saves on objects. It might be the case that Apache is creating a process for each request, and thus causing the problems I think I am seeing. As you would expect, the problem only occurs sometimes, which is characteristic of multiprogramming or multiprocessing errors. If this is the case, is there a way to make the test and set parts of the save() method a critical section, so that a process switch cannot happen between the test and the set?

    Read the article

  • java.util.zip - ZipInputStream v.s. ZipFile

    - by lucho
    Hello, community! I have some general questions regarding the java.util.zip library. What we basically do is an import and an export of many small components. Previously these components were imported and exported using a single big file, e.g.: <component-type-a id="1"/> <component-type-a id="2"/> <component-type-a id="N"/> <component-type-b id="1"/> <component-type-b id="2"/> <component-type-b id="N"/> Please note that the order of the components during import is relevant. Now every component should occupy its own file which should be externally versioned, QA-ed, bla, bla. We decided that the output of our export should be a zip file (with all these files in) and the input of our import should be a similar zip file. We do not want to explode the zip in our system. We do not want opening separate streams for each of the small files. My current questions: Q1. May the ZipInputStream guarantee that the zip entries (the little files) will be read in the same order in which they were inserted by our export that uses ZipOutputStream? I assume reading is something like: ZipInputStream zis = new ZipInputStream(new BufferedInputStream(fis)); ZipEntry entry; while((entry = zis.getNextEntry()) != null) { //read from zis until available } I know that the central zip directory is put at the end of the zip file but nevertheless the file entries inside have sequential order. I also know that relying on the order is an ugly idea but I just want to have all the facts in mind. Q2. If I use ZipFile (which I prefer) what is the performance impact of calling getInputStream() hundreds of times? Will it be much slower than the ZipInputStream solution? The zip is opened only once and ZipFile is backed by RandomAccessFile - is this correct? I assume reading is something like: ZipFile zipfile = new ZipFile(argv[0]); Enumeration e = zipfile.entries();//TODO: assure the order of the entries while(e.hasMoreElements()) { entry = (ZipEntry) e.nextElement(); is = zipfile.getInputStream(entry)); } Q3. Are the input streams retrieved from the same ZipFile thread safe (e.g. may I read different entries in different threads simultaneously)? Any performance penalties? Thanks for your answers!

    Read the article

  • PInvokeStackImbalance C# call to unmanaged C++ function

    - by user287498
    After switching to VS2010, the managed debug assistant is displaying an error about an unbalanced stack from a call to an unmanaged C++ function from a C# application. The usuals suspects don't seem to be causing the issue. Is there something else I should check? The VS2008 built C++ dll and C# application never had a problem, no weird or mysterious bugs - yeah, I know that doesn't mean much. Here are the things that were checked: The dll name is correct. The entry point name is correct and has been verified with depends.exe - the code has to use the mangled name and it does. The calling convention is correct. The sizes and types all seem to be correct. The character set is correct. There doesn't seem to be any issues after ignoring the error and there isn't an issue when running outside the debugger. C#: [DllImport("Correct.dll", EntryPoint = "SuperSpecialOpenFileFunc", CallingConvention = CallingConvention.StdCall, CharSet = CharSet.Ansi, ExactSpelling = true)] public static extern short SuperSpecialOpenFileFunc(ref SuperSpecialStruct stuff); [StructLayout(LayoutKind.Sequential, Pack = 1, CharSet = CharSet.Ansi)] public struct SuperSpecialStruct { public int field1; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 256)] public string field2; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 20)] public string field3; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 10)] public string field4; public ushort field5; public ushort field6; public ushort field7; public short field8; public short field9; public uint field10; public short field11; }; C++: short SuperSpecialOpenFileFunc(SuperSpecialStruct * stuff); struct SuperSpecialStruct { int field1; char field2[256]; char field3[20]; char field4[10]; unsigned short field5; unsigned short field6; unsigned short field7; short field8; short field9; unsigned int field10; short field11; }; Here is the error: Managed Debugging Assistant 'PInvokeStackImbalance' has detected a problem in 'Managed application path'. Additional Information: A call to PInvoke function 'SuperSpecialOpenFileFunc' has unbalanced the stack. This is likely because the managed PInvoke signature does not match the unmanaged target signature. Check that the calling convention and parameters of the PInvoke signature match the target unmanaged signature.
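
    One mismatch that produces exactly these symptoms (works when the MDA is ignored, works outside the debugger, worked silently under VS2008) is __cdecl on the native side versus StdCall on the managed side: a mangled, non extern "C" C++ export usually defaults to __cdecl, and the CLR repairs the stack after the call, so nothing visibly breaks. Even though the convention has already been checked, it may be worth trying the declaration below; this is a sketch, not a confirmed fix, the struct stays exactly as in the question, and the entry point should remain the real mangled name verified with depends.exe.

        // Same import as above with only the calling convention changed.
        [DllImport("Correct.dll", EntryPoint = "SuperSpecialOpenFileFunc",
            CallingConvention = CallingConvention.Cdecl,
            CharSet = CharSet.Ansi, ExactSpelling = true)]
        public static extern short SuperSpecialOpenFileFunc(ref SuperSpecialStruct stuff);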

    Read the article

  • how to create a system-wide independent universal counter object primarily for Database keys?

    - by andora
    I would like to create/use a system-wide independent universal 'counter object' that can be called via COM in a thread-safe manner. The counter object will be passed an ID to identify which counter to return, handle the counting, 'persist' the count (occasionally), have reasonable performance (as fast as possible) perhaps capable of 1000 counts per second or better (1mS) and be accessible cross-process/out-of-process. The current count status must be persisted between object restarts/shutdowns. The counter object is likely to be a 'singleton' type object implemented in some form of free-threaded dictionary, containing maybe 10 counters (perhaps 50 max). The count needs to be monotonic and consistent (ie: guaranteed unique sequential values). Each counter should have a few methods, like reset, inc, dec, set, clear, remove. As a luxury, I would like to have a variable increment (ie: 'step by' value). To support thread-safety, perhaps some form of critical-section or mutex call. It just needs to return a long/4-byte signed integer. I really want something that can be called from anywhere, including VBScript, so I figure COM is my preferred solution. The primary use of this is for database keys. I am unable to use autoinc or guid type keys and have ruled out database-generated counting systems at this point. I've spent days researching this and I have really struggled to find a solution. The best I can find is a free-threaded dictionary object that can be instantiated using COM+ from Motobit - it seems to offer all the 'basics' and I guess I could create some form of wrapper for this. So, here are my questions: Does such a 'general purpose counter object' already exist? Can you direct me to it? (MS did do an IIS/ASP object called 'MSWC.Counter' but this isn't a cross-process/out-of-process component and isn't thread-safe. (but if it was, it would do!) What is the best way of creating such a Component? (I'd prefer VB6 right now, [don't ask!] but can do it in VB.NET 2005 if I had to). I don't have the skills/knowledge/tools to use anything else. I am desperate for a workable solution. I need specific guidance! If anybody can code something up for me I am prepared to pay for it.
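
    If a small .NET class registered for COM interop (ComVisible plus regasm) would be acceptable, the core of such a counter is not much code. Below is a rough sketch under stated assumptions: the storage folder and mutex prefix are arbitrary placeholders, and writing the value to disk on every increment keeps it simple and durable, though the writes would need batching to get close to 1000 counts per second.

        using System;
        using System.IO;
        using System.Runtime.InteropServices;
        using System.Threading;

        // Sketch of a cross-process counter: a named (Global\) mutex serialises access
        // from any process on the machine, and the current value is persisted to one
        // small file per counter ID. Folder and mutex prefix are placeholders.
        [ComVisible(true)]
        public class UniversalCounter
        {
            private const string StorageFolder = @"C:\CounterStore";

            public int Increment(string counterId) { return Add(counterId, 1); }
            public int Decrement(string counterId) { return Add(counterId, -1); }

            public int Add(string counterId, int step)
            {
                using (Mutex mutex = new Mutex(false, @"Global\UniversalCounter_" + counterId))
                {
                    mutex.WaitOne();
                    try
                    {
                        Directory.CreateDirectory(StorageFolder);
                        string file = Path.Combine(StorageFolder, counterId + ".txt");
                        int value = File.Exists(file) ? int.Parse(File.ReadAllText(file)) : 0;
                        value += step;
                        File.WriteAllText(file, value.ToString());
                        return value;
                    }
                    finally
                    {
                        mutex.ReleaseMutex();
                    }
                }
            }
        }

    Once the assembly is registered with regasm, VBScript could reach a class like this through CreateObject using whatever ProgId it is given.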

    Read the article

  • How to define and work with an array of bits in C?

    - by Eddy
    I want to create a very large array on which I write '0's and '1's. I'm trying to simulate a physical process called random sequential adsorption, where units of length 2, dimers, are deposited onto an n-dimensional lattice at a random location, without overlapping each other. The process stops when there is no more room left on the lattice for depositing more dimers (lattice is jammed). Initially I start with a lattice of zeroes, and the dimers are represented by a pair of '1's. As each dimer is deposited, the site on the left of the dimer is blocked, due to the fact that the dimers cannot overlap. So I simulate this process by depositing a triple of '1's on the lattice. I need to repeat the entire simulation a large number of times and then work out the average coverage %. I've already done this using an array of chars for 1D and 2D lattices. At the moment I'm trying to make the code as efficient as possible, before working on the 3D problem and more complicated generalisations. This is basically what the code looks like in 1D, simplified: int main() { /* Define lattice */ array = (char*)malloc(N * sizeof(char)); total_c = 0; /* Carry out RSA multiple times */ for (i = 0; i < 1000; i++) rand_seq_ads(); /* Calculate average coverage efficiency at jamming */ printf("coverage efficiency = %lf", total_c/1000); return 0; } void rand_seq_ads() { /* Initialise array, initial conditions */ memset(a, 0, N * sizeof(char)); available_sites = N; count = 0; /* While the lattice still has enough room... */ while(available_sites != 0) { /* Generate random site location */ x = rand(); /* Deposit dimer (if site is available) */ if(array[x] == 0) { array[x] = 1; array[x+1] = 1; count += 1; available_sites += -2; } /* Mark site left of dimer as unavailable (if its empty) */ if(array[x-1] == 0) { array[x-1] = 1; available_sites += -1; } } /* Calculate coverage %, and add to total */ c = count/N total_c += c; } For the actual project I'm doing, it involves not just dimers but trimers, quadrimers, and all sorts of shapes and sizes (for 2D and 3D). I was hoping that I would be able to work with individual bits instead of bytes, but I've been reading around and as far as I can tell you can only change 1 byte at a time, so either I need to do some complicated indexing or there is a simpler way to do it? Thanks for your answers

    Read the article

  • Writing a managed wrapper for unmanaged (C++) code - custom types/structs

    - by Bobby
    faacEncConfigurationPtr FAACAPI faacEncGetCurrentConfiguration( faacEncHandle hEncoder); I'm trying to come up with a simple wrapper for this C++ library; I've never done more than very simple p/invoke interop before - like one function call with primitive arguments. So, given the above C++ function, for example, what should I do to deal with the return type, and parameter? FAACAPI is defined as: #define FAACAPI __stdcall faacEncConfigurationPtr is defined: typedef struct faacEncConfiguration { int version; char *name; char *copyright; unsigned int mpegVersion; unsigned long bitRate; unsigned int inputFormat; int shortctl; psymodellist_t *psymodellist; int channel_map[64]; } faacEncConfiguration, *faacEncConfigurationPtr; AFAIK this means that the return type of the function is a reference to this struct? And faacEncHandle is: typedef struct { unsigned int numChannels; unsigned long sampleRate; ... SR_INFO *srInfo; double *sampleBuff[MAX_CHANNELS]; ... double *freqBuff[MAX_CHANNELS]; double *overlapBuff[MAX_CHANNELS]; double *msSpectrum[MAX_CHANNELS]; CoderInfo coderInfo[MAX_CHANNELS]; ChannelInfo channelInfo[MAX_CHANNELS]; PsyInfo psyInfo[MAX_CHANNELS]; GlobalPsyInfo gpsyInfo; faacEncConfiguration config; psymodel_t *psymodel; /* quantizer specific config */ AACQuantCfg aacquantCfg; /* FFT Tables */ FFT_Tables fft_tables; int bitDiff; } faacEncStruct, *faacEncHandle; So within that struct we see a lot of other types... hmm. Essentially, I'm trying to figure out how to deal with these types in my managed wrapper? Do I need to create versions of these types/structs, in C#? Something like this: [StructLayout(LayoutKind.Sequential)] struct faacEncConfiguration { uint useTns; ulong bitRate; ... } If so then can the runtime automatically "map" these objects onto eachother? And, would I have to create these "mapped" types for all the types in these return types/parameter type hierarchies, all the way down until I get to all primitives? I know this is a broad topic, any advice on getting up-to-speed quickly on what I need to learn to make this happen would be very much appreciated! Thanks!
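
    Not an authoritative answer, but one workable pattern is to keep faacEncHandle completely opaque (just an IntPtr that is passed around, never mirrored in C#) and to mirror only faacEncConfiguration, leaving its pointer-typed fields as IntPtr so the types behind them never need managed declarations. A sketch along those lines, with the DLL name as a placeholder:

        using System;
        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential)]
        public struct FaacEncConfiguration
        {
            public int version;
            public IntPtr name;          // char*  -> read with Marshal.PtrToStringAnsi if needed
            public IntPtr copyright;     // char*
            public uint mpegVersion;     // unsigned int
            public uint bitRate;         // unsigned long is 32-bit on Windows
            public uint inputFormat;
            public int shortctl;
            public IntPtr psymodellist;  // psymodellist_t* left opaque
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 64)]
            public int[] channel_map;
        }

        internal static class FaacNative
        {
            // DLL name is a placeholder; FAACAPI is __stdcall per the header quoted above.
            [DllImport("libfaac.dll", CallingConvention = CallingConvention.StdCall)]
            internal static extern IntPtr faacEncGetCurrentConfiguration(IntPtr hEncoder);

            // The returned pointer is assumed to be owned by the encoder, so it is only
            // read into a managed copy, never freed here.
            internal static FaacEncConfiguration GetConfiguration(IntPtr hEncoder)
            {
                IntPtr pConfig = faacEncGetCurrentConfiguration(hEncoder);
                return (FaacEncConfiguration)Marshal.PtrToStructure(
                    pConfig, typeof(FaacEncConfiguration));
            }
        }

    The main benefit of this shape is that faacEncHandle, with all its nested types, never has to be redeclared in managed code at all.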

    Read the article

  • Calling CryptUIWizDigitalSign from .NET on x64

    - by Joe Kuemerle
    I am trying to digitally sign files using the CryptUIWizDigitalSign function from a .NET 2.0 application compiled to AnyCPU. The call works fine when running on x86 but fails on x64, it also works on an x64 OS when compiled to x86. Any idea on how to better marshall or call from x64? The Win32exception returned is "Error encountered during digital signing of the file ..." with a native error code of -2146762749. The relevant portion of the code are: [StructLayout(LayoutKind.Sequential)] public struct CRYPTUI_WIZ_DIGITAL_SIGN_INFO { public Int32 dwSize; public Int32 dwSubjectChoice; [MarshalAs(UnmanagedType.LPWStr)] public string pwszFileName; public Int32 dwSigningCertChoice; public IntPtr pSigningCertContext; [MarshalAs(UnmanagedType.LPWStr)] public string pwszTimestampURL; public Int32 dwAdditionalCertChoice; public IntPtr pSignExtInfo; } [DllImport("Cryptui.dll", CharSet=CharSet.Unicode, SetLastError=true)] public static extern bool CryptUIWizDigitalSign(int dwFlags, IntPtr hwndParent, string pwszWizardTitle, ref CRYPTUI_WIZ_DIGITAL_SIGN_INFO pDigitalSignInfo, ref IntPtr ppSignContext); CRYPTUI_WIZ_DIGITAL_SIGN_INFO digitalSignInfo = new CRYPTUI_WIZ_DIGITAL_SIGN_INFO(); digitalSignInfo = new CRYPTUI_WIZ_DIGITAL_SIGN_INFO(); digitalSignInfo.dwSize = Marshal.SizeOf(digitalSignInfo); digitalSignInfo.dwSubjectChoice = 1; digitalSignInfo.dwSigningCertChoice = 1; digitalSignInfo.pSigningCertContext = pSigningCertContext; digitalSignInfo.pwszTimestampURL = timestampUrl; digitalSignInfo.dwAdditionalCertChoice = 0; digitalSignInfo.pSignExtInfo = IntPtr.Zero; digitalSignInfo.pwszFileName = filepath; CryptUIWizDigitalSign(1, IntPtr.Zero, null, ref digitalSignInfo, ref pSignContext)); And here is how the SigningCertContext is retrieved (minus various error handling) public IntPtr GetCertContext(String pfxfilename, String pswd) IntPtr hMemStore = IntPtr.Zero; IntPtr hCertCntxt = IntPtr.Zero; IntPtr pProvInfo = IntPtr.Zero; uint provinfosize = 0; try { byte[] pfxdata = PfxUtility.GetFileBytes(pfxfilename); CRYPT_DATA_BLOB ppfx = new CRYPT_DATA_BLOB(); ppfx.cbData = pfxdata.Length; ppfx.pbData = Marshal.AllocHGlobal(pfxdata.Length); Marshal.Copy(pfxdata, 0, ppfx.pbData, pfxdata.Length); hMemStore = Win32.PFXImportCertStore(ref ppfx, pswd, CRYPT_USER_KEYSET); pswd = null; if (hMemStore != IntPtr.Zero) { Marshal.FreeHGlobal(ppfx.pbData); while ((hCertCntxt = Win32.CertEnumCertificatesInStore(hMemStore, hCertCntxt)) != IntPtr.Zero) { if (Win32.CertGetCertificateContextProperty(hCertCntxt, CERT_KEY_PROV_INFO_PROP_ID, IntPtr.Zero, ref provinfosize)) pProvInfo = Marshal.AllocHGlobal((int)provinfosize); else continue; if (Win32.CertGetCertificateContextProperty(hCertCntxt, CERT_KEY_PROV_INFO_PROP_ID, pProvInfo, ref provinfosize)) break; } } finally { if (pProvInfo != IntPtr.Zero) Marshal.FreeHGlobal(pProvInfo); if (hMemStore != IntPtr.Zero) Win32.CertCloseStore(hMemStore, 0); } return hCertCntxt; }
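
    Before reworking the marshalling, it may help to decode the native error code itself: -2146762749 is the HRESULT 0x800B0003, which appears to correspond to TRUST_E_SUBJECT_FORM_UNKNOWN ("the form specified for the subject is not one supported or known by the specified trust provider"), i.e. the wizard rejecting the subject/certificate input rather than a struct-layout failure. A quick, hedged way to check codes like this:

        using System;
        using System.Runtime.InteropServices;

        class DecodeSigningError
        {
            static void Main()
            {
                int nativeError = -2146762749;              // value reported by the signing call
                Console.WriteLine("0x{0:X8}", nativeError); // prints 0x800B0003
                Exception ex = Marshal.GetExceptionForHR(nativeError);
                Console.WriteLine(ex.Message);              // OS description of the HRESULT
            }
        }

    If the HRESULT really does point at the subject, the first thing to confirm on x64 would be that GetCertContext returned a non-zero context (for example, that PFXImportCertStore succeeded), since a null pSigningCertContext would surface as a subject error rather than a marshalling one.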

    Read the article

  • Marshalling non-Blittable Structs from C# to C++

    - by Greggo
    I'm in the process of rewriting an overengineered and unmaintainable chunk of my company's library code that interfaces between C# and C++. I've started looking into P/Invoke, but it seems like there's not much in the way of accessible help. We're passing a struct that contains various parameters and settings down to unmanaged codes, so we're defining identical structs. We don't need to change any of those parameters on the C++ side, but we do need to access them after the P/Invoked function has returned. My questions are: What is the best way to pass strings? Some are short (device id's which can be set by us), and some are file paths (which may contain Asian characters) Should I pass an IntPtr to the C# struct or should I just let the Marshaller take care of it by putting the struct type in the function signature? Should I be worried about any non-pointer datatypes like bools or enums (in other, related structs)? We have the treat warnings as errors flag set in C++ so we can't use the Microsoft extension for enums to force a datatype. Is P/Invoke actually the way to go? There was some Microsoft documentation about Implicit P/Invoke that said it was more type-safe and performant. For reference, here is one of the pairs of structs I've written so far: C++ /** Struct used for marshalling Scan parameters from managed to unmanaged code. */ struct ScanParameters { LPSTR deviceID; LPSTR spdClock; LPSTR spdStartTrigger; double spinRpm; double startRadius; double endRadius; double trackSpacing; UINT64 numTracks; UINT32 nominalSampleCount; double gainLimit; double sampleRate; double scanHeight; LPWSTR qmoPath; //includes filename LPWSTR qzpPath; //includes filename }; C# /// <summary> /// Struct used for marshalling scan parameters between managed and unmanaged code. /// </summary> [StructLayout(LayoutKind.Sequential)] public struct ScanParameters { [MarshalAs(UnmanagedType.LPStr)] public string deviceID; [MarshalAs(UnmanagedType.LPStr)] public string spdClock; [MarshalAs(UnmanagedType.LPStr)] public string spdStartTrigger; public Double spinRpm; public Double startRadius; public Double endRadius; public Double trackSpacing; public UInt64 numTracks; public UInt32 nominalSampleCount; public Double gainLimit; public Double sampleRate; public Double scanHeight; [MarshalAs(UnmanagedType.LPWStr)] public string qmoPath; [MarshalAs(UnmanagedType.LPWStr)] public string qzpPath; }
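
    A few of those questions can be illustrated with a short sketch rather than prose, using the ScanParameters struct already defined above. Letting the marshaller handle the struct (passing it by ref in the signature) is usually enough when the native side does not keep the pointer after the call returns; manual AllocHGlobal/IntPtr work is only needed if it does. The DLL, function and field names below are placeholders.

        using System;
        using System.Runtime.InteropServices;

        internal static class ScannerNative
        {
            // The marshaller builds the unmanaged ScanParameters copy for the call and
            // tears it down afterwards; no manual allocation is needed as long as the
            // C++ side does not hold on to the pointer. Calling convention is assumed.
            [DllImport("ScannerNative.dll", CallingConvention = CallingConvention.Cdecl)]
            internal static extern int StartScan(ref ScanParameters parameters);
        }

        // For the non-pointer types mentioned: a C++ 'bool' is one byte, Win32 BOOL is
        // four bytes, and an unscoped C++ enum normally has int as its underlying type.
        [StructLayout(LayoutKind.Sequential)]
        public struct DeviceOptions
        {
            [MarshalAs(UnmanagedType.I1)]
            public bool enableLogging;   // matches C++ 'bool'
            [MarshalAs(UnmanagedType.Bool)]
            public bool legacyMode;      // matches Win32 'BOOL'
            public int triggerMode;      // matches an int-backed C++ enum
        }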

    Read the article

  • Technical non-terminating condition in a loop

    - by Snarfblam
    Most of us know that a loop should not have a non-terminating condition. For example, this C# loop has a non-terminating condition: any even value of i. This is an obvious logic error. void CountByTwosStartingAt(byte i) { // If i is even, it never exceeds 254 for(; i < 255; i += 2) { Console.WriteLine(i); } } Sometimes there are edge cases that are extremely unlikeley, but technically constitute non-exiting conditions (stack overflows and out-of-memory errors aside). Suppose you have a function that counts the number of sequential zeros in a stream: int CountZeros(Stream s) { int total = 0; while(s.ReadByte() == 0) total++; return total; } Now, suppose you feed it this thing: class InfiniteEmptyStream:Stream { // ... Other members ... public override int Read(byte[] buffer, int offset, int count) { Array.Clear(buffer, offset, count); // Output zeros return count; // Never returns -1 (end of stream) } } Or more realistically, maybe a stream that returns data from external hardware, which in certain cases might return lots of zeros (such as a game controller sitting on your desk). Either way we have an infinite loop. This particular non-terminating condition stands out, but sometimes they don't. A completely real-world example as in an app I'm writing. An endless stream of zeros will be deserialized into infinite "empty" objects (until the collection class or GC throws an exception because I've exceeded two billion items). But this would be a completely unexpected circumstance (considering my data source). How important is it to have absolutely no non-terminating conditions? How much does this affect "robustness?" Does it matter if they are only "theoretically" non-terminating (is it okay if an exception represents an implicit terminating condition)? Does it matter whether the app is commercial? If it is publicly distributed? Does it matter if the problematic code is in no way accessible through a public interface/API? Edit: One of the primary concerns I have is unforseen logic errors that can create the non-terminating condition. If, as a rule, you ensure there are no non-terminating conditions, you can identify or handle these logic errors more gracefully, but is it worth it? And when? This is a concern orthogonal to trust.
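
    For the stream example, one middle ground is to make the theoretical bound explicit: keep the simple loop, but turn pathological input into a loud, documented failure instead of an effectively infinite loop. A sketch (the limit value is arbitrary, and System.IO is assumed to be imported):

        // Caps the count so a degenerate stream fails fast instead of spinning forever.
        int CountZeros(Stream s, int maxZeros)
        {
            int total = 0;
            while (s.ReadByte() == 0)
            {
                total++;
                if (total == maxZeros)
                    throw new InvalidDataException(
                        "More than " + maxZeros + " consecutive zeros; refusing to read further.");
            }
            return total;
        }

    The exception then acts as the explicit terminating condition, which makes the unforeseen logic error visible instead of silently consuming memory or time.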

    Read the article

  • jQuery image loop not displaying any images

    - by user1871097
    I'm trying to create a very basic image gallery in jQuery. The goal is to have 3 images fade in and out in a sequential order. So image 1 is displayed, fades to image 2 etc. then the whole thing loops again. My HTML code so far is as follows: <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Slider</title> <style type="text/css"> .slider{ width: 2848px; height: 2136px; overflow: hidden; margin: 30px auto; } .slider img{ width:2848px; height:2136px; display:none; } </style> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script> <script src="//ajax.googleapis.com/ajax/libs/jqueryui/1.9.2/jquery-ui.min.js"></script> <script src="Slider2.js"></script> </head> <body onload="Slider2"();> <div class="slider"> <img id="1" src="31.jpg" border="0" alt="city"/> <img id="2" src="2vrtigo2.jpg" border="0" alt="roof"/> <img id="3" src="3.jpg" border="0" alt="sea"/> </div> </body> And the jQuery code looks like this: function Slider2() { var total = $(".slider img").size(); for (i=1; i<=total; i+=1) { $(".slider #"+i).fadeIn(600); $(".slider #"+i).delay(2000).hide; }} A quick syntactical note, I've also tried using i++ in the last argument of the For Loop. The result of this code is a blank, white page. I know some of the HTML is being compiled because the enormous 2848x2136 div creates scroll bars on the browser. If anyone could help me out, that would be greatly appreciated. Obviously I'm relatively new to web programming and would love some insight into why this isn't working. Thanks!

    Read the article

  • Links to my “Best of 2010” Posts

    - by ScottGu
    I hope everyone is having a Happy New Years! 2010 has been a busy blogging year for me (this is the 100th blog post I’ve done in 2010).  Several people this week suggested I put together a summary post listing/organizing my favorite posts from the year.  Below is a quick listing of some of my favorite posts organized by topic area: VS 2010 and .NET 4 Below is a series of posts I wrote (some in late 2009) about the VS 2010 and .NET 4 (including ASP.NET 4 and WPF 4) release we shipped in April: Visual Studio 2010 and .NET 4 Released Clean Web.Config Files Starter Project Templates Multi-targeting Multiple Monitor Support New Code Focused Web Profile Option HTML / ASP.NET / JavaScript Code Snippets Auto-Start ASP.NET Applications URL Routing with ASP.NET 4 Web Forms Searching and Navigating Code in VS 2010 VS 2010 Code Intellisense Improvements WPF 4 Add Reference Dialog Improvements SEO Improvements with ASP.NET 4 Output Cache Extensibility with ASP.NET 4 Built-in Charting Controls for ASP.NET and Windows Forms Cleaner HTML Markup with ASP.NET 4 - Client IDs Optional Parameters and Named Arguments in C# 4 - and a cool scenarios with ASP.NET MVC 2 Automatic Properties, Collection Initializers and Implicit Line Continuation Support with VB 2010 New <%: %> Syntax for HTML Encoding Output using ASP.NET 4 JavaScript Intellisense Improvements with VS 2010 VS 2010 Debugger Improvements (DataTips, BreakPoints, Import/Export) Box Selection and Multi-line Editing Support with VS 2010 VS 2010 Extension Manager (and the cool new PowerCommands Extension) Pinning Projects and Solutions VS 2010 Web Deployment Debugging Tips/Tricks with Visual Studio Search and Navigation Tips/Tricks with Visual Studio Visual Studio Below are some additional Visual Studio posts I’ve done (not in the first series above) that I thought were nice: Download and Share Visual Studio Color Schemes Visual Studio 2010 Keyboard Shortcuts VS 2010 Productivity Power Tools Fun Visual Studio 2010 Wallpapers Silverlight We shipped Silverlight 4 in April, and announced Silverlight 5 the beginning of December: Silverlight 4 Released Silverlight 4 Tools for VS 2010 and WCF RIA Services Released Silverlight 4 Training Kit Silverlight PivotViewer Now Available Silverlight Questions Announcing Silverlight 5 Silverlight for Windows Phone 7 We shipped Windows Phone 7 this fall and shipped free Visual Studio development tools with great Silverlight and XNA support in September: Windows Phone 7 Developer Tools Released Building a Windows Phone 7 Twitter Application using Silverlight ASP.NET MVC We shipped ASP.NET MVC 2 in March, and started previewing ASP.NET MVC 3 this summer.  
ASP.NET MVC 3 will RTM in less than 2 weeks from today: ASP.NET MVC 2: Strongly Typed Html Helpers ASP.NET MVC 2: Model Validation Introducing ASP.NET MVC 3 (Preview 1) Announcing ASP.NET MVC 3 Beta and NuGet (nee NuPack) Announcing ASP.NET MVC 3 Release Candidate 1  Announcing ASP.NET MVC 3 Release Candidate 2 Introducing Razor – A New View Engine for ASP.NET ASP.NET MVC 3: Layouts with Razor ASP.NET MVC 3: New @model keyword in Razor ASP.NET MVC 3: Server-Side Comments with Razor ASP.NET MVC 3: Razor’s @: and <text> syntax ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor ASP.NET MVC 3: Layouts and Sections with Razor IIS and Web Server Stack The IIS and Web Stack teams have made a bunch of great improvements to the core web server this year: Fix Common SEO Problems using the URL Rewrite Extension Introducing the Microsoft Web Farm Framework Automating Deployment with Microsoft Web Deploy Introducing IIS Express SQL CE 4 (New Embedded Database Support with ASP.NET) Introducing Web Matrix EF Code First EF Code First is a really nice new data option that enables a very clean code-oriented data workflow: Announcing Entity Framework Code-First CTP5 Release Class-Level Model Validation with EF Code First and ASP.NET MVC 3 Code-First Development with Entity Framework 4 EF 4 Code First: Custom Database Schema Mapping Using EF Code First with an Existing Database jQuery and AJAX Contributions My team began making some significant source code contributions to the jQuery project this year: jQuery Templates, Data Link and Globalization Accepted as Official jQuery Plugins jQuery Templates and Data Linking (and Microsoft contributing to jQuery) jQuery Globalization Plugin from Microsoft Patches and Hot Fixes Some useful fixes you can download prior to VS 2010 SP1: Patch for Cut/Copy “Insufficient Memory” issue with VS 2010 Patch for VS 2010 Find and Replace Dialog Growing Patch for VS 2010 Scrolling Context Menu Videos of My Talks Some recordings of technical talks I’ve done this year: ASP.NET 4, ASP.NET MVC, and Silverlight 4 Talks I did in Europe VS 2010 and ASP.NET 4 Web Forms Talk in Arizona Other About Technical Debates (and ASP.NET Web Forms and ASP.NET MVC debates in particular) ASP.NET Security Fix Now on Windows Update Upcoming Web Camps I’d like to say a big thank you to everyone who follows my blog – I really appreciate you reading it (the comments you post help encourage me to write it).  See you in the New Year! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Visual Studio App.config XML Transformation

    - by João Angelo
    Visual Studio 2010 introduced a much-anticipated feature, Web configuration transformations. This feature allows you to configure a web application project to transform the web.config file during deployment based on the current build configuration (Debug, Release, etc). If you haven't already tried it there is a nice step-by-step introduction post to XML transformations on the Visual Web Developer Team Blog and for a quick reference on the supported syntax you have this MSDN entry. Unfortunately there is some bad news: this new feature is specific to web application projects since it resides in the Web Publishing Pipeline (WPP) and therefore is not officially supported in other project types such as Windows applications. The keyword here is officially because Vishal Joshi has a nice blog post on how to extend its support to app.config transformations. However, the proposed workaround requires that the build action for the app.config file be changed to Content instead of the default None. Also, from the comments on that post it seems that the workaround will not work for a ClickOnce deployment. Working around this I tried to remove the build action change requirement and at the same time add ClickOnce support. This effort resulted in a single MSBuild project file (AppConfig.Transformation.targets) available for download from GitHub. It integrates itself in the build process so in order to add app.config transformation support to an existing Windows Application Project you just need to import this targets file after all the other import directives that already exist in the *.csproj file. Before – Without App.config transformation support ... <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" /> <Target Name="BeforeBuild"> </Target> <Target Name="AfterBuild"> </Target> </Project> After – With App.config transformation support ... <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" /> <Import Project="C:\MyExtensions\AppConfig.Transformation.targets" /> <Target Name="BeforeBuild"> </Target> <Target Name="AfterBuild"> </Target> </Project> As a final disclaimer, the testing time was limited, so if you find any problems let me know. The MSBuild project invokes the mage tool so the Framework SDK must be installed. Update: I finally had some spare time and was able to check the problem reported by Geoff Smith and believe the problem is solved. The Publish command inside Visual Studio triggers a build workflow different from the one used through the MSBuild command line and this was causing problems. I posted a new version in GitHub that should now support ClickOnce deployment with app.config transformation from within Visual Studio and MSBuild command line. Also here is a link for the sample application used to test the new version using the Publish command with the install location set to be from a CD-ROM or DVD-ROM and selected that the application will not check for updates. Thanks to Geoff for spotting the problem.
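
    For completeness, the transform files themselves use the same xdt syntax as web.config transformations, so an App.Release.config sitting next to App.config might look like the snippet below (the connection string is a made-up example):

        <!-- App.Release.config: same xdt transform syntax as web.config transforms. -->
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <connectionStrings>
            <add name="Main"
                 connectionString="Server=prodserver;Database=MyApp;Integrated Security=SSPI"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
          </connectionStrings>
        </configuration>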

    Read the article

  • Parallelism in .NET – Part 20, Using Task with Existing APIs

    - by Reed
    Although the Task class provides a huge amount of flexibility for handling asynchronous actions, the .NET Framework still contains a large number of APIs that are based on the previous asynchronous programming model.  While Task and Task<T> provide a much nicer syntax as well as extending the flexibility, allowing features such as continuations based on multiple tasks, the existing APIs don’t directly support this workflow. There is a method in the TaskFactory class which can be used to adapt the existing APIs to the new Task class: TaskFactory.FromAsync.  This method provides a way to convert from the BeginOperation/EndOperation method pair syntax common throughout the .NET Framework directly to a Task<T> containing the results of the operation in the task’s Result parameter. While this method does exist, it unfortunately comes at a cost – the method overloads are far from simple to decipher, and the resulting code is not always as easily understood as newer code based directly on the Task class.  For example, a single call to handle WebRequest.BeginGetResponse/EndGetResponse, one of the easiest “pairs” of methods to use, looks like the following: var task = Task.Factory.FromAsync<WebResponse>( request.BeginGetResponse, request.EndGetResponse, null); The compiler is unfortunately unable to infer the correct type, and, as a result, the WebResponse must be explicitly mentioned in the method call.  As a result, I typically recommend wrapping this into an extension method to ease use.  For example, I would place the above in an extension method like: public static class WebRequestExtensions { public static Task<WebResponse> GetResponseAsync(this WebRequest request) { return Task.Factory.FromAsync<WebResponse>( request.BeginGetResponse, request.EndGetResponse, null); } } This dramatically simplifies usage.  For example, if we wanted to asynchronously check to see if this blog supported XHTML 1.0, and report that in a text box to the user, we could do: var webRequest = WebRequest.Create("http://www.reedcopsey.com"); webRequest.GetResponseAsync().ContinueWith(t => { using (var sr = new StreamReader(t.Result.GetResponseStream())) { string str = sr.ReadLine(); this.textBox1.Text = string.Format("Page at {0} supports XHTML 1.0: {1}", t.Result.ResponseUri, str.Contains("XHTML 1.0")); } }, TaskScheduler.FromCurrentSynchronizationContext());   By using a continuation with a TaskScheduler based on the current synchronization context, we can keep this request asynchronous, check based on the first line of the response string, and report the results back on our UI directly.
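
    The same extension-method trick applies to any Begin/End pair with up to three arguments. As a further, hypothetical example, Stream.BeginRead/EndRead can be wrapped so that the byte count comes back as a Task<int>:

        using System.IO;
        using System.Threading.Tasks;

        public static class StreamExtensions
        {
            // BeginRead's (byte[], int, int, AsyncCallback, object) shape matches the
            // three-argument FromAsync overload directly, so no lambda plumbing is needed.
            public static Task<int> ReadAsyncTask(this Stream stream,
                byte[] buffer, int offset, int count)
            {
                return Task<int>.Factory.FromAsync(
                    stream.BeginRead, stream.EndRead, buffer, offset, count, null);
            }
        }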

    Read the article

  • Implementing Release Notes in TFS Team Build 2010

    - by Jakob Ehn
    In TFS Team Build (all versions), each build is associated with changesets and work items. To determine which changesets should be associated with the current build, Team Build finds the label of the “Last Good Build” and then aggregates all changesets up until the label for the current build. Basically this means that if your build is failing, every changeset that is checked in will be accumulated in this list until the build is successful. All well, but there is a dimension missing here, regarding releases. Often you can run several release builds until you actually deploy the result of the build to a test or production system. When you do this, wouldn’t it be nice to be able to send the customer a nice release note that contains all work items and changesets since the previously deployed version? At our company, we have developed a Release Repository, which basically is a simple web site with a SQL database as storage. Every time we run a Release Build, the resulting installers, zip files, SQL scripts etc. are pushed into the release repository together with the relevant build information. This information contains things such as start time, who triggered the build etc. Also, it contains the associated changesets and work items. When deploying the MSIs for a new version, we mark the build as Deployed in the release repository. The deployed status is stored in the release repository database, but it could also have been implemented by setting the Build Quality for that build to Deployed. When generating the release notes, the web site simply runs through each release build back to the previous build that was marked as Deployed, and aggregates the work items and changesets: Here is a sample screenshot of how this looks for a sample build/application. The web site is available both to us and to the customers and testers, which means that they can easily get the latest version of a particular application and at the same time see what changes are included in this version. There is a lot going on in the Release Build Process that drives this in our TFS 2010 server, but in this post I will show how you can access and read the changeset and work item information in a custom activity. Since Team Build associates changesets and work items for each build, this information is (partially) available inside the build process template. The Associate Changesets and Work Items for non-Shelveset Builds activity (located inside the Try Compile, Test, and Associate Changesets and Work Items activity) defines and populates a variable called associatedWorkItems.   You can see that this variable is an IList containing instances of the Changeset class (from the Microsoft.TeamFoundation.VersionControl.Client namespace). Now, if you want to access this variable later on in the build process template, you need to declare a new variable in the corresponding scope and then assign the value to this variable. In this sample, I declared a variable called assocChangesets in the RunAgent sequence, which basically covers the whole compile, test and drop part of the build process.   Now, you need to assign the value from AssociatedChangesets to this variable. This is done using the Assign workflow activity.   Now you can add a custom activity anywhere inside the RunAgent sequence and use this variable. NB: Of course your activity must be placed somewhere after the variable has been populated. 
To finish off, here is a code snippet that shows how you can read the changeset and work item information from the variable.   First you add an InArgument on your activity where you can pass in the variable that we defined. [RequiredArgument] public InArgument<IList<Changeset>> AssociatedChangesets { get; set; } Then you can traverse all the changesets in the list, and for each changeset use the WorkItems property to get the work items that were associated in that changeset: foreach (Changeset ch in associatedChangesets) { // Add change theChangesets.Add( new AssociatedChangeset(ch.ChangesetId, ch.ArtifactUri, ch.Committer, ch.Comment, ch.ChangesetId)); foreach (var wi in ch.WorkItems) { theWorkItems.Add( new AssociatedWorkItem(wi["System.AssignedTo"].ToString(), wi.Id, wi["System.State"].ToString(), wi.Title, wi.Type.Name, wi.Id, wi.Uri)); } } NB: AssociatedChangeset and AssociatedWorkItem are custom classes that we use internally for storing this information that is eventually pushed to the release repository.
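
    Putting the pieces together, a hedged sketch of how that snippet could sit inside a complete custom activity (the class name is arbitrary, and AssociatedChangeset/AssociatedWorkItem are the internal classes mentioned above):

        using System.Activities;
        using System.Collections.Generic;
        using Microsoft.TeamFoundation.Build.Client;
        using Microsoft.TeamFoundation.VersionControl.Client;

        [BuildActivity(HostEnvironmentOption.All)]
        public sealed class PublishReleaseInformation : CodeActivity
        {
            [RequiredArgument]
            public InArgument<IList<Changeset>> AssociatedChangesets { get; set; }

            protected override void Execute(CodeActivityContext context)
            {
                IList<Changeset> associatedChangesets = AssociatedChangesets.Get(context);

                // Iterate the changesets and their WorkItems exactly as shown above,
                // then push the collected data to the release repository.
            }
        }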

    Read the article

  • Application Demos in UPK

    - by [email protected]
    Over the years, User Productivity Kit has expanded to include solutions to many project challenges. As of UPK 3.6.1, solutions are provided for pre and post application go-live learning, application testing, system documentation, presentation output, and more. New in UPK 3.6.1 are additional features that can be used effectively for application demo purposes. This can come in handy when you need to do a demo but don't want to show or can't show the live application. Maybe you're doing a presentation for a group of project stakeholders and want to focus on the business workflow implemented by the application rather than the mechanics of using it. Or possibly, you need to show the application but you're disconnected from any network preventing you from running the live application. In any of these cases, a presentation aid that represents the live application is what's needed. Previous versions of the UPK topic player would allow you to do this but would always show those UPK user interface elements that help a user learn the application. When you're presenting the narrative live, the UPK bubbles can be a distraction. UPK 3.6.1 provides some new features that allow you to control whether the bubbles display. There are two ways to hide bubbles in a topic. The first is a topic property that allows you to control bubbles across the entire topic. There are 3 settings for the Show Bubbles topic property. The default setting is Use frame settings which allows you to control whether bubbles display on a frame by frame basis. When you choose Always, the bubbles will always display regardless of the frame setting. The final choice is Never. Choosing Never will hide every bubble in your topic with one setting change. As with Always, choosing Never will ignore the frame setting. The second way to control the bubbles is at the frame level. First ensure that the topic's Show Bubbles property is set to Use frame settings. Navigate to the frame on which you want to turn off the bubble and click the Display bubble for this frame button to turn off the bubble. When you play the topic, the bubble will no longer be displayed. Depending on your needs, you might also use another longstanding UPK feature that allows you to control whether the action area displays on a frame. Just click the Action area on/off button to toggle its display. I've found the frame properties to be useful beyond creating presentation aids. When creating "See It!" only topics for more advanced users, I may hide the bubbles on some of the more straightforward frames. For example, if I have a form where one needs to fill out an address, I may display the first bubble in the sequence and explain what the subsequent steps are doing. I then hide bubbles on the remaining frames which are the more mechanical steps of entering the address. We'd like to hear your thoughts on this new UPK feature. Use the comments below to tell us how you've used it. John Zaums Senior Director, Product Development Oracle User Productivity Kit

    Read the article

  • Framework 4 Features: Support for Timed Jobs

    - by Anthony Shorten
    One of the new features of the Oracle Utilities Application Framework V4 is the ability for the batch framework to support Timed Batch. Traditionally batch is associated with set processing in the background in a fixed time frame. For example, billing customers. Over the last few versions, the products have required functionality better suited to a more monitoring-style batch process. The monitor is a batch process that looks for specific business events based upon record status or other pieces of data. For example, the framework contains a fact monitor (F1-FCTRN) that can be configured to look for specific statuses or other conditions. The batch process then uses the instructions on the object to determine what to do. To support monitor-style processing, you need to run the process regularly a number of times a day (for example, every ten minutes). Traditional batch could support this but it was not as optimal as expected (if you are a site using the old Workflow subsystem, you understand what I mean). The batch framework was extended to add additional facilities to support timed batch (and continuous batch, which is another new feature for another blog entry). The new facilities include: The batch control now defines the job as Timed or Not Timed. Non-Timed batch jobs are traditional batch jobs. The timer interval (the interval between executions) can be specified. The timer can be made active or inactive. Only active timers are executed. Setting the Timer Active to inactive will stop the job at the next time interval. Setting the Timer Active to Active will start the execution of the timed job. You can specify the credentials, the language to view the messages in, and an email address to send a summary of the execution to. The email address is optional and requires an email server to be specified in the relevant feature configuration. You can specify the thread limits and commit intervals to be used for the multiple executions. Once a timer job is defined it will be executed automatically by the Business Application Server process if the DEFAULT threadpool is active. This threadpool can be started using the online batch daemon (for non-production) or externally using the threadpoolworker utility. At that time any batch process with the Timer Active set to Active and Batch Control Type of Timed will begin executing. As Timed jobs are executed automatically, they do not appear in any external schedule and are not managed by an external scheduler (except via the DEFAULT threadpool itself of course). Now, if the job has no work to do when the timer interval is reached, then that instance of the job is stopped and the next instance started at the timer interval. If there is still work to complete when the timer interval is reached, the instance will continue processing till the work is complete, then the instance will be stopped and the next instance scheduled for the next timer interval. One of the key ways of optimizing this processing is to set the timer interval correctly for the expected workload. This is an interesting new feature of the batch framework and we anticipate it will come in handy for specific business situations with the monitor processes.

    Read the article

  • How To Clear An Alert - Part 2

    - by werner.de.gruyter
    There were some interesting comments and remarks on the original posting, so I decided to do a follow-up and address some of the issues that got raised... Handling metric errors: First of all, there is a significant difference between an 'error' and an 'alert'. An 'alert' is the violation of a condition (a threshold) specified for a given metric. That means the Agent is collecting and gathering the data for the metric, but there is a situation that requires the attention of an administrator. An 'error', on the other hand, is a failure to collect metric data: the Agent throws the error because it cannot determine the value for the metric. Whereas an 'alert' guarantees continuity of the metric data, an 'error' signals a big unknown, and that unknown aspect is what makes an error a lot more serious than a regular alert: if you don't know what the current state of affairs is, there could be some serious issues brewing that nobody is aware of... The life-cycle of a metric error: Clearing a metric error follows pretty much the same workflow as a metric 'alert'. The Agent signals the error after it fails to execute the metric. The error is uploaded to the OMS/repository, where it becomes visible in the Console. The error will remain active until the Agent is able to execute the metric successfully; even though the metric is still scheduled and executed on a regular basis, the error remains outstanding as long as the Agent is not capable of executing the metric correctly. Knowing this, the way to fix the metric error should be obvious: take the problem away, and as soon as the metric is executed again (based on the frequency of the metric), the error will go away. The same tricks used to clear alerts can be used here too: Wait for the next scheduled execution. For those metrics that are executed regularly (like every 15 minutes or so), it's just a matter of waiting those minutes to see the updates. The 'Reevaluate Alert' button can be used to force a re-execution of the metric; if a metric is executed only once a day, this is a better way to make sure that the underlying problem has been solved. If it has been, the metric error will be removed and the regular data points will be uploaded to the repository. And just in case you have to 'force' the issue a little: if you disable and re-enable a metric, it will get re-scheduled, and that means a new metric execution and an update of the (hopefully) fixed problem. Database server-generated alerts and problem checkers: There are various ways the Agent can collect metric data: via a script or a SQL statement, by reading a log file, by getting a value from an SNMP OID, by listening for SNMP traps, or via the DBMS_SERVER_ALERT mechanism of an Oracle database. For those alerts which are generated by the database (like tablespace metrics for 10g and above databases), the Agent just 'waits' for the database to report any new findings. If the Agent has lost the current state of the server-side metrics (due to an incomplete recovery after a disaster, or after improper use of the 'emctl clearstate' command), the Agent might still be aware of an alert that the database no longer has (or vice versa). The same goes for 'problem checker' alerts: those metrics that only report data if there is a problem (like the 'invalid objects' metric) will also have a problem if the Agent state has been tampered with (again, an incomplete recovery and improper use of 'emctl clearstate' are the two main causes of this).
    The best way to deal with these kinds of mismatches is to simply disable and re-enable the metric: disabling clears the state of the metric, and re-enabling forces a re-execution, so the new and updated results can be uploaded to the repository. Starting with 10gR5, the Agent performs additional checks and verifications after each restart of the Agent and/or each state change of the database (shutdown/startup, or failover in the case of Data Guard) to catch these kinds of mismatches.
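    For readers who like to see the life-cycle spelled out, here is a small Python sketch that models the behaviour described above. It is a toy model, not Enterprise Manager internals; the class and method names are made up for illustration.

    class MonitoredMetric:
        # Toy model: an error stays outstanding until a collection succeeds;
        # disabling and re-enabling clears the state and forces a fresh run.
        def __init__(self, collect):
            self.collect = collect   # callable: returns a value or raises
            self.enabled = True
            self.error = None        # outstanding metric error, if any

        def scheduled_execution(self):
            if not self.enabled:
                return None
            try:
                value = self.collect()
            except Exception as exc:
                self.error = str(exc)    # error remains until a success
                return None
            self.error = None            # a successful collection clears it
            return value

        def disable_then_enable(self):
            self.enabled = False         # clears the metric's state
            self.error = None
            self.enabled = True          # re-enabling re-schedules it
            return self.scheduled_execution()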

    Read the article

  • Introducing .NET 4.0 with Visual Studio 2010 by Alex Mackey - Book review

    - by Malisa L. Ncube
    Alex (http://simpleisbest.co.uk/) does a very good job of covering the new features of .NET 4.0 and Visual Studio 2010. His focus is on developers who have experience with previous versions of Visual Studio, more specifically Visual Studio 2008. The following are my views on his book. 1. Scope / Coverage: Even though the book is labeled an introduction, it covers a broad spectrum of technologies, features and references, all focused on helping a developer quickly decide what to use in the new .NET Framework. a. Content: The content covers, as far as possible, the new additions included in .NET 4.0. He shows the new Visual Studio 2010 features and quickly shows how to extend the IDE using the Managed Extensibility Framework. Some of my favorites are the parallel debugging enhancements. The author also delves into jQuery, which Microsoft has decided to support. Some of the most interesting content is on the out-of-band releases, including ASP.NET MVC, Windows Azure, Silverlight 3 and WCF Data Services. b. What is not included? Windows Phone 7 Series (this was only talked about at MIX10, so the material may not have been available at the time of writing), Microsoft Pinpoint (Microsoft code name "Dallas") and Windows Embedded development. c. Table of Contents: Chapter 1: Introduction; Chapter 2: Visual Studio IDE and MEF; Chapter 3: Language and Dynamic Changes; Chapter 4: CLR and BCL Changes; Chapter 5: Parallelization and Threading Enhancements; Chapter 6: Windows Workflow Foundation 4; Chapter 7: Windows Communication Foundation; Chapter 8: Entity Framework; Chapter 9: WCF Data Services; Chapter 10: ASP.NET; Chapter 11: Microsoft AJAX Library; Chapter 12: jQuery; Chapter 13: ASP.NET MVC; Chapter 14: Silverlight Introduction; Chapter 15: WPF 4.0 and Silverlight 3.0; Chapter 16: Windows Azure. 2. Depth: The book avoids going into great depth on the topics presented, introducing the new concepts on the assumption of the developer's existing knowledge. Code samples appear in the book mostly as snippets and are very easy to follow; there are no downloadable examples. 3. Complexity: The book is written in a very simple way and is easy to follow, with no irrelevant, intimidating details, so it's a book you can grab and not put down until you've finished reading it. 4. References: The author includes reference links to blogs, wikis and a lot of online resources, including the MSDN documentation, which is a convenient strategy to avoid flooding the reader with details that may not be of interest to them. (Most of the referenced sites do not use URL routing, though, which is really not nice.) There are also notes from interviews between the author and the people behind the new technologies, in which they clarify specific areas and give their views on the future of the features they are working on. 5. Target: The author targets experts who want to make the transition from .NET 3.5 to 4.0; some obvious 3.5 features have been purposely excluded from the text. 6. Overall: It is evident that the author has researched extensively the breadth of what Microsoft is working on in relation to .NET and Visual Studio, and has also been watching the online community. What I would like to see in the next edition are some details on the OData protocol, Expression Blend 4, embedded development and Windows Phone development. I should say I'm one of the beneficiaries of this book. Excellent work, Alex.   Technorati Tags: .NET,Book-Review,Visual Studio

    Read the article

  • Collation errors in business

    - by Rob Farley
    At the PASS Summit last month, I did a set (Lightning Talk) about collation, and in particular, the difference between the “English” spoken by people from the US, Australia and the UK. One of the examples I gave was that in the US drivers might stop for gas, whereas in Australia, they just open the window a little. This is what’s known as a paraprosdokian, where you suddenly realise you misunderstood the first part of the sentence, based on what was said in the second. My current favourite is Emo Philips’ line “I like to play chess with old men in the park, but it can be hard to find thirty-two of them.” Essentially, this is a collation error, one that good comedians can get mileage from. Unfortunately, collation is at its worst when we have a computer comparing two things in different collations. They might look the same, and sound the same, but if one of the things is in SQL English, and the other one is in Windows English, the poor database server (with no sense of humour) will get suspicious of developers (who all have senses of humour, obviously), and declare a collation error, worried that it might not realise some nuance of the language. One example is the common scenario of a case-sensitive collation and a case-insensitive one. One may think that “Rob” and “rob” are the same, but the other might not. Clearly one of them is my name, and the other is a verb which means to steal (people called “Nick” have the same problem, of course), but I have no idea whether “Rob” and “rob” should be considered the same or not – it depends on the collation. I told a lie before – collation isn’t at its worst in the computer world, because the computer has the sense to complain about the collation issue. People don’t. People will say something, with their own understanding of what they mean. Other people will listen, and apply their own collation to it. I remember when someone was asking me about a situation which had annoyed me. They asked if I was ‘pissed’, and I said yes. I meant that I was annoyed, but they were asking if I’d been drinking. It took a moment for us to realise the misunderstanding. In business, the problem is escalated. A business user may explain something in a particular way, using terminology that they understand, but using words that mean something else to a technical person. I remember a situation with a checkbox on a form (back in VB6 days from memory). It was used to indicate that something was approved, and indicated whether a particular database field should store True or False – nothing more. However, the client understood it to mean that an entire workflow system would be implemented, with different users having permission to approve items, and more. The project manager I’d just taken over from clearly hadn’t appreciated that, and I faced a situation of explaining the misunderstanding to the client. Lots of fun... Collation errors aren’t just a database setting that you can ignore. You need to remember that Americans speak a different type of English to Aussies and Poms, and techies speak a different language to their clients.
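    To make the case-sensitivity example concrete, here is a tiny Python sketch of two "collations" disagreeing about the same pair of values (in SQL Server you would express this with different COLLATE clauses; the function below is just an illustration).

    def same(a, b, case_sensitive):
        # Compare two strings under a collation that is either
        # case-sensitive or case-insensitive.
        return a == b if case_sensitive else a.casefold() == b.casefold()

    print(same("Rob", "rob", case_sensitive=True))    # False: a name vs. a verb
    print(same("Rob", "rob", case_sensitive=False))   # True: treated as equal

    When one side of a comparison follows the first rule and the other side follows the second, the server cannot decide which interpretation you meant, and that is exactly when it reports a collation conflict.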

    Read the article

  • Oracle Enterprise Manager 11g is Here!

    - by chung.wu
    We hope that you enjoyed the launch event. If you missed it, you may still watch it via our on-demand webcast, which is being produced and will be posted very shortly. 11gR1 is a major release of Oracle Enterprise Manager, and as one would expect from a big release, there are many new capabilities that appeal to a broad audience. Before going into the laundry list of new features, let's talk about the key themes for this release to put things in perspective. First, this release is about Business Driven Application Management. The traditional paradigm of component-centric systems management simply cannot satisfy the management needs of modern distributed applications, as it does not provide adequate visibility into whether these applications are truly meeting the service level expectations of the business users. Business Driven Application Management helps IT manage applications according to the needs of the business users so that valuable IT resources can be better focused to help deliver better business results. To support Business Driven Application Management, 11gR1 builds on the work that we started in 10g to provide better support for user experience management. This capability helps IT better understand how users use applications and the experience that the applications provide, so that IT can take actions to help end users get their work done more effectively. In addition, this release also delivers improved business transaction management capabilities to make it faster and easier to understand and troubleshoot transaction problems that impact end user experience. Second, this release includes strengthened Integrated Application-to-Disk Management. Every component of an application environment, from the application logic to the application server, to the database, host machines and storage devices, can affect end user experience. After user experience improvement needs are identified, IT needs tools that can be used to do deep-dive diagnostics for each application environment component, analyze configurations and deploy changes. Enterprise Manager 11gR1 extends coverage of key application environment components to include full support for Oracle Database 11gR2, Exadata V2, and Fusion Middleware 11g. For composite and Java application management, two key pieces of technology, JVM Diagnostics and Composite Application Monitoring and Modeler, are now fully integrated into Enterprise Manager, so there is no need to install and maintain separate tools. In addition, we have delivered the first set of integrations between Enterprise Manager Grid Control and Enterprise Manager Ops Center, so that hardware-level events can be centrally monitored via Grid Control. Finally, this release delivers Integrated Systems Management and Support for customers of Oracle technologies. Traditionally, systems management tools and tech support were separate silos. When problems occurred, administrators used internally deployed tools to try to solve the problems themselves. If they couldn't fix the problems, they would use some sort of support website to get help from the vendor's support staff. Oracle Enterprise Manager 11g integrates the problem diagnostic and remediation workflow. Administrators can use Oracle Enterprise Manager's various diagnostic tools to begin the troubleshooting process. They can also use the integrated access to My Oracle Support to look up solutions and download software patches.
    If further help is needed, administrators can open service requests from right within Oracle Enterprise Manager and track status updates. Oracle's support staff, using Enterprise Manager's configuration management capabilities, can collect important configuration information about customer environments in order to expedite problem resolution. This tight integration between Oracle Enterprise Manager and My Oracle Support helps Oracle customers achieve a Superior Ownership Experience for their Oracle products. So there you have it. This is a brief 50,000-foot overview of Oracle Enterprise Manager 11g. We know you are hungry for the details, and we are going to write about them in the coming days and weeks. For those of you who absolutely can't wait to find out more, you may download our software and try it out today. In fact, for the first time ever, the initial release of Oracle Enterprise Manager is available for both 32- and 64-bit Linux. Additional O/S ports will arrive in the coming weeks. Please stay tuned to the Oracle Enterprise Manager blog for additional updates.

    Read the article

< Previous Page | 66 67 68 69 70 71 72 73 74 75 76 77  | Next Page >