Search Results

Search found 3618 results on 145 pages for 'huge'.


  • zlib gzgets extremely slow?

    - by monkeyking
    I'm parsing huge globs of text files and was testing which input method to use. There is not much of a difference between C++ std::ifstream and C FILE. According to the documentation of zlib, it supports uncompressed files and will read the file without decompression. I'm seeing a difference from 12 seconds using non-zlib to more than 4 minutes using zlib.h. I've tested this over multiple runs, so it's not a disk-cache issue. Am I using zlib in some wrong way? Thanks.

    ```cpp
    #include <zlib.h>
    #include <cstdio>
    #include <cstdlib>
    #include <fstream>

    #define LENS 1000000

    size_t fg(const char *fname){
      fprintf(stderr,"\t-> using fgets\n");
      FILE *fp = fopen(fname,"r");
      size_t nLines = 0;
      char *buffer = new char[LENS];
      while(NULL != fgets(buffer,LENS,fp))
        nLines++;
      fprintf(stderr,"%lu\n",nLines);
      return nLines;
    }

    size_t is(const char *fname){
      fprintf(stderr,"\t-> using ifstream\n");
      std::ifstream is(fname,std::ios::in);
      size_t nLines = 0;
      char *buffer = new char[LENS];
      while(is.getline(buffer,LENS))
        nLines++;
      fprintf(stderr,"%lu\n",nLines);
      return nLines;
    }

    size_t iz(const char *fname){
      fprintf(stderr,"\t-> using zlib\n");
      gzFile fp = gzopen(fname,"r");
      size_t nLines = 0;
      char *buffer = new char[LENS];
      while(NULL != gzgets(fp,buffer,LENS))
        nLines++;
      fprintf(stderr,"%lu\n",nLines);
      return nLines;
    }

    int main(int argc, char **argv){
      if(atoi(argv[2])==0) fg(argv[1]);
      if(atoi(argv[2])==1) is(argv[1]);
      if(atoi(argv[2])==2) iz(argv[1]);
    }
    ```
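
    One workaround often suggested for slow gzgets() (an editorial sketch, not from the original post) is to read large chunks with gzread() and scan for newlines yourself, since gzgets() can end up pulling the stream through in very small reads:

    ```cpp
    #include <zlib.h>
    #include <cstdio>

    // Buffered line counting via gzread(): read 1 MB chunks and count '\n'.
    size_t iz_buffered(const char *fname){
      gzFile fp = gzopen(fname, "r");
      if(fp == NULL) return 0;
      static char chunk[1 << 20];          // 1 MB read buffer
      size_t nLines = 0;
      int nRead;
      while((nRead = gzread(fp, chunk, sizeof(chunk))) > 0){
        for(int i = 0; i < nRead; i++)     // scan the chunk for line endings
          if(chunk[i] == '\n') nLines++;
      }
      gzclose(fp);
      fprintf(stderr, "%lu\n", nLines);
      return nLines;
    }
    ```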


  • Delphi Unit local variables - how to make each instance unique?

    - by Justin
    OK, this, I'm sure, is something simple that is easy to do. The problem: I've inherited scary spaghetti code and am slowly trying to better it when new features need adding - generally when a refactor makes adding the new feature neater. I've got a bunch of code packed into a single unit which, in different places in the application, controls the same physical thing in the outside world. The control appears in several places in the application and operates slightly differently in each instance. What I've done is to create a unit with all of the features I need, which I can simply drop, as a frame, into each form that requires it. Each form then uses the unit's interface methods to customise the behaviour of each instance. The problem within the problem: in the unit in question (the frame) I have a variable declared in the implementation section - local to the unit. I also have a procedure, declared in the type section, which takes an argument and assigns that argument to the local variable in question - each form passes a unique value to each instance of the frame/unit. What I want is for each instance of the frame to keep its own version of that variable, different from the others, and use that to define how it operates. What seems to be happening, however, is that all instances are using the same value, even if I explicitly pass each instance a different variable. i.e.:

    ```delphi
    unit FlexibleUnit;

    interface

    uses
      //the uses stuff

    type
      TFlexibleUnit = class(TFrame)
        //declarations including
        procedure makeThisInstanceX(passMeTheVar: integer);
      private
        //
      public
        //
      end;

    implementation

    uses
      //the uses

    var
      myLocalVar: integer;

    procedure makeThisInstanceX(passMeTheVar: integer);
    begin
      myLocalVar := passMeTheVar;
    end;

    //other procedures using myLocalVar
    //etc. to the end

    end.
    ```

    Now somewhere in another form I've dropped this frame onto the Design pane (sometimes two of these frames on one form) and have it declared in the proper places, etc. Each is unique in that:

    ```delphi
    ThisFlexibleUnit : TFlexibleUnit;
    ThatFlexibleUnit : TFlexibleUnit;
    ```

    and when I do:

    ```delphi
    ThisFlexibleUnit.makeThisInstanceX(var1); //want it to behave in way "var1"
    ThatFlexibleUnit.makeThisInstanceX(var2); //want it to behave in way "var2"
    ```

    it seems that they both share the same variable "myLocalVar". Am I doing this wrong in principle? If this is the correct method, then it's a matter of debugging what I have (which is too huge to post), but if this is not correct in principle, is there a way to do what I am suggesting? Thanks in advance, Stack Overflow - you guys (and gals!) are legendary.
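
    For reference, a minimal sketch of the usual fix (an editorial addition, not part of the original question): a unit-level variable in the implementation section is shared by every instance of the unit's classes, so the value has to live on the class itself as a private field:

    ```delphi
    type
      TFlexibleUnit = class(TFrame)
      private
        FMyVar: Integer;  // one copy per frame instance, not per unit
      public
        procedure MakeThisInstanceX(passMeTheVar: Integer);
      end;

    implementation

    procedure TFlexibleUnit.MakeThisInstanceX(passMeTheVar: Integer);
    begin
      FMyVar := passMeTheVar;  // stored on the instance that was called
    end;
    ```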


  • how to use window.onload?

    - by Patrick
    I'm refactoring a website using MVC. What was a set of huge pages with JavaScript, PHP, HTML, etc. is becoming a series of controllers and views. I'm trying to do it in a modular way, so views are split into 'modules' that I can reuse in other pages when needed, e.g. "view/searchform" displays only one div with the search form, "view/display_events" displays a list of events, and so on. One of the old pages was supposed to load a Google map with a marker on it. Amongst the rest of the code, I can identify the relevant bits as follows:

    ```html
    <head>
    <script src="http://maps.google.com/maps?file=api&amp;v=2&amp;key=blablabla" type="text/javascript"></script>
    <script type="text/javascript">
    //<![CDATA[
    function load() {
      if (GBrowserIsCompatible()) {
        var map = new GMap2(document.getElementById("map"));
        var point = new GLatLng(<?php echo ($info->lat && $info->lng) ? $info->lat .",". $info->lng : "51.502759,-0.126171"; ?>);
        map.setCenter(new GLatLng(<?php echo ($info->lat && $info->lng) ? $info->lat .",". $info->lng : "51.502759,-0.126171"; ?>), 15);
        map.addControl(new GLargeMapControl());
        map.addControl(new GScaleControl());
        map.addOverlay(new GMarker(point));
        var marker = createMarker(point, GIcon(), "CIAO");
        map.addOverlay(marker);
      }
    }
    //]]>
    </script>
    </head>
    ```

    ...then

    ```html
    <body onload="load()" onunload="GUnload()">
    ```

    ...and finally this div where the map should be displayed:

    ```html
    <div id="map" style="width: 440px; height: 300px"></div>
    ```

    I don't know much about JS, but my understanding is that a) I have to include the scripts in the view module I'm writing (directly in the HTML? I would prefer to load a separate script) and b) I have to trigger that function using the equivalent of body onload (obviously there's no body tag in my view; in my ignorance I've tried div onload=... but that didn't seem to work). What do you suggest I do? I've read about window.onload but don't know the correct syntax for it. Please keep in mind that other parts of the page include other JS functions (e.g. Google AdSense) that are called after the footer.
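
    A minimal sketch of the window.onload syntax the poster asks about (an editorial addition; the external script path is hypothetical):

    ```html
    <script type="text/javascript" src="/js/map-init.js"></script>
    <script type="text/javascript">
    // Option 1 (simplest): runs load() once the page has fully loaded, but
    // overwrites any handler another script may already have assigned:
    //   window.onload = load;

    // Option 2: register alongside other handlers (AdSense etc.) instead.
    if (window.addEventListener) {
      window.addEventListener("load", load, false); // standards browsers
    } else if (window.attachEvent) {
      window.attachEvent("onload", load);           // older IE
    }
    </script>
    ```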


  • How do you select form elements in JQuery based upon an html table?

    - by Swoop
    I am working on some ASP.NET web forms which involve some dynamic generation, and I need to add some onClick helpers on the client side. I have a basic outline of something working, except for one huge problem. There are multiple HTML tables, each generated by a different ASP.NET web control. Each table can contain overlapping field names, which is causing a problem with my jQuery click event handlers: the click event handler is wired to unintended form fields in addition to the intended form field. I have provided a simplified sample version of the code below. This code tries to set the value of textbox box1 when a particular radio button is selected in the table with id="thing1". Obviously, the jQuery code will be triggered for the form fields in both tables. The tables are dynamically added to the webpage based upon different conditions: it is possible that no tables will be loaded, only one table, or both tables might load. In the future, other tables could be added. Each table comes from a different .NET web control. Other than renaming the form fields to make sure they are unique across all user controls, is there a way to have jQuery act only on the intended form fields? In other words, could the table ID be incorporated into the jQuery code in a manner that does not become a nightmare to maintain later?

    ```html
    <script>
    $(document).ready(function() {
      $("[id$=radio1_0]").click(function() {
        $("[id$=box1]").attr("value", "");
      });
      $("[id$=radio1_1]").click(function() {
        $("[id$=box1]").attr("value", "N/A");
      });
    });
    </script>

    <table id="thing1">
      <tr><td>
        <radiobuttonlist id="radio1"/>
        <listitem>yes</listitem>
        <listitem>no</listitem>
      </td></tr>
      <tr><td>
        <textbox id="box1"/>
      </td></tr>
    </table>

    <table id="thing2">
      <tr><td>
        <radiobuttonlist id="radio1"/>
        <listitem>yes</listitem>
        <listitem>no</listitem>
      </td></tr>
      <tr><td>
        <textbox id="box1"/>
      </td></tr>
    </table>
    ```
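
    One way to scope the handlers by table (an editorial sketch, not from the original question): look everything up relative to the owning table's id, so identical field names in other tables are never matched:

    ```html
    <script>
    $(document).ready(function() {
      // Everything is found relative to #thing1, so the duplicate
      // radio1/box1 ids inside #thing2 are never touched.
      var thing1 = $("#thing1");
      thing1.find("[id$=radio1_0]").click(function() {
        thing1.find("[id$=box1]").val("");
      });
      thing1.find("[id$=radio1_1]").click(function() {
        thing1.find("[id$=box1]").val("N/A");
      });
    });
    </script>
    ```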


  • A case-insensitive related implementation problem

    - by Robert
    Hi all, I am going through a final refinement posted by the client, which needs me to do a case-insensitive query. I will walk through how this simple program works. First of all, in my Java class, I do a fairly simple webpage parse:

    ```java
    title = (String) results.get("title");
    doc = docBuilder.parse("http://" + server + ":" + port +
            "/exist/rest/db/wb/xql/media_lookup.xql?" + "&title=" + title);
    ```

    This Java statement references an XQuery file, media_lookup.xql, which is stored on localhost, and the only parameter we pass is the string "title". Secondly, let's take a look at that XQuery file:

    ```xquery
    $title := request:get-parameter('title', ""),
    $mediaNodes := doc('/db/wb/portfolio/media_data.xml'),
    $query := $mediaNodes//media[contains(title, $title)],
    ```

    and then it evaluates that query. This XQuery takes the "title" parameter passed from our Java class and queries the media_data.xml file stored in the database, which contains a bunch of media nodes, each with a 'title' element node. As you may expect, this simple query matches those media nodes whose 'title' element contains a substring of the value of the 'title' string: so if our 'title' is "Chi", it will return media nodes whose title may be "Chicago" or "Chicken". The refinement requested by the client is that there should be NO case-sensitivity. The intuitive way is to modify the XQuery statement with a lower-case function, like:

    ```xquery
    $query := $mediaNodes//media[contains(lower-case(title/text()), lower-case($title))],
    ```

    However, here is the problem: this modified query runs my machine into memory overflow. Since my media_data.xml is quite huge and contains thousands of millions of media nodes, I assume the lower-case() function runs on each of the entries, causing the machine to crash. I've talked with some experienced XQuery programmers, and they think I should use an index to solve this problem; I will definitely research that. But before that, I am posting the problem here to get other ideas or suggestions. Do you think any other way may help? For example, could I tweak the Java parse statement to achieve the case-insensitivity? I think I saw people do some string concatenation using "contains." in Java before passing it to the server. Any idea or help is welcome, thanks in advance.
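
    As a small editorial aside (not in the original question): whatever happens on the XQuery side, the Java caller should URL-encode the parameter in any case, and lower-casing it there as well leaves only one side of the comparison for the server to normalize. A sketch using the variable names above:

    ```java
    import java.net.URLEncoder;

    // Lower-case once on the client, then encode for safe inclusion in the URL.
    // (URLEncoder.encode can throw UnsupportedEncodingException, so the caller
    // needs to handle or declare it.)
    String title = ((String) results.get("title")).toLowerCase();
    String url = "http://" + server + ":" + port
            + "/exist/rest/db/wb/xql/media_lookup.xql?title="
            + URLEncoder.encode(title, "UTF-8");
    doc = docBuilder.parse(url);
    ```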


  • Loading datasets from the datastore and merging them into a single dictionary. Resource problem.

    - by fredrik
    Hi, I have a product database that contains products, parts, and labels for each part based on language codes. The problem I'm having, and haven't gotten around, is the huge amount of resources used to get the different datasets and merge them into a dict to suit my needs. The products in the database are based on a number of parts, each of a certain type (i.e. color, size), and each part has a label for each language. I created four models for this: Products, ProductParts, ProductPartTypes and ProductPartLabels. I've narrowed it down to about 10 lines of code that seem to generate the problem. Currently I have 3 products, 3 types, 3 parts for each type, and 2 languages, and the request takes a whopping 5,500 ms to generate.

    ```python
    for product in productData:
        productDict = {}
        typeDict = {}
        productDict['productName'] = product.name

        cache_key = 'productparts_%s' % (slugify(product.key()))
        partData = memcache.get(cache_key)

        if not partData:
            for type in typeData:
                typeDict[type.typeId] = { 'default' : '', 'optional' : [] }

            ## Start of problem lines ##
            for defaultPart in product.defaultPartsData:
                for label in labelsForLangCode:
                    if label.key() in defaultPart.partLabelList:
                        typeDict[defaultPart.type.typeId]['default'] = label.partLangLabel

            for optionalPart in product.optionalPartsData:
                for label in labelsForLangCode:
                    if label.key() in optionalPart.partLabelList:
                        typeDict[optionalPart.type.typeId]['optional'].append(label.partLangLabel)
            ## End of problem lines ##

            memcache.add(cache_key, typeDict, 500)
            partData = memcache.get(cache_key)

        productDict['parts'] = partData
        productList.append(productDict)
    ```

    I guess the problem lies in the number of for loops being too many, having to iterate over the same data over and over again. labelsForLangCode gets all labels from ProductPartLabels that match the current langCode. All parts for a product are stored in a db.ListProperty(db.Key), and the same goes for all labels of a part. The reason I need the somewhat complex dict is that I want to display all data for a product with its default parts and show a selector for the optional ones. The defaultPartsData and optionalPartsData are properties in the Product model that look like this:

    ```python
    @property
    def defaultPartsData(self):
        return ProductParts.gql('WHERE __key__ IN :key', key = self.defaultParts)

    @property
    def optionalPartsData(self):
        return ProductParts.gql('WHERE __key__ IN :key', key = self.optionalParts)
    ```

    When the completed dict is in the memcache it works smoothly, but isn't the memcache reset if the application goes into hibernation? Also, I would like to show the page for a first-time user (memcache empty) without the enormous delay. And, as I said above, this is only a small number of parts/products: what will the result be when it's 30 products with 100 parts? Is one solution to create a scheduled task that caches it in the memcache every hour? Is that efficient? I know this is a lot to take in, but I'm stuck. I've been at this for about 12 hours straight and can't figure out a solution. ..fredrik
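
    An editorial sketch (not from the original post) of one way to collapse the problem loops: index the language's labels by key once, then replace each inner loop with O(1) dictionary lookups:

    ```python
    # Build the lookup once per request instead of re-scanning labelsForLangCode
    # for every part. Keys are datastore Keys, values are the display labels.
    label_by_key = dict((label.key(), label.partLangLabel)
                        for label in labelsForLangCode)

    for defaultPart in product.defaultPartsData:
        for key in defaultPart.partLabelList:
            if key in label_by_key:
                typeDict[defaultPart.type.typeId]['default'] = label_by_key[key]

    for optionalPart in product.optionalPartsData:
        for key in optionalPart.partLabelList:
            if key in label_by_key:
                typeDict[optionalPart.type.typeId]['optional'].append(label_by_key[key])
    ```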


  • Are there compelling reasons not to use Groovy?

    - by Leonard H Martin
    I'm developing a LoB application in Java after a long absence from the platform (having spent the last 8 years or so entrenched in Fortran, C, a smidgin of C++ and latterly .NET). Java, the language, is not much changed from how I remember it. I like its strengths and I can work around its weaknesses; the platform has grown, and deciding upon the myriad of different frameworks which appear to do much the same thing as one another is a different story, but that can wait for another day. All in all, I'm comfortable with Java. However, over the last couple of weeks I've become enamoured with Groovy, purely from a selfish point of view: not just because it makes development against the JVM a more succinct and entertaining (and, well, "groovy") proposition than Java the language. What strikes me most about Groovy is its inherent maintainability. We all (I hope!) strive to write well-documented, easy-to-understand code. However, sometimes the languages we use themselves defeat us. An example: in 2001 I wrote a library in C to translate EDIFACT EDI messages into ANSI X12 messages. This is not a particularly complicated process, if slightly involved, and I thought at the time I had documented the code properly - and I probably had - but some six years later, when I revisited the project (and after becoming acclimatised to C#), I found myself lost in so much C boilerplate (mallocs, pointers, etc.) that it took three days of thoughtful analysis before I finally understood what I'd been doing six years previously. This evening I've written about 2000 lines of Java (it is the day of rest, after all!). I've documented it as best I know how, but of those 2000 lines of Java a significant proportion is boilerplate. This is where I see Groovy and other dynamic languages winning through: maintainability and later comprehension. Groovy lets you concentrate on your intent without getting bogged down in platform-specific implementation; it's almost, but not quite, self-documenting. I see this as a huge boon to me when I revisit my current project (which I'll port to Groovy asap) in several years' time, and to my successors who will inherit it and carry on the good work. So, are there any reasons not to use Groovy?


  • .NET Free memory usage (how to prevent overallocation / release memory to the OS)

    - by Ronan Thibaudau
    I'm currently working on a website that makes large use of cached data to avoid roundtrips. At startup we get a "large" graph (hundreds of thousands of different kinds of objects). Those objects are retrieved over WCF and deserialized (we use protocol buffers for serialization). I'm using Red Gate's memory profiler to debug memory issues, since the memory use didn't seem to fit with how much memory we should need "after" we're done initializing, and I ended up with the report below (profiler screenshot omitted). What we can gather from this report is that:

    1) Most of the memory .NET allocated is free. It may have been rightfully allocated during deserialization, but now that it's free, I'd like it to return to the OS.

    2) Memory is fragmented, which is bad, as every time I refresh the cache I need to redo the memory-hungry deserialization process, and this in turn creates large objects that may throw an OutOfMemoryException due to fragmentation.

    3) I have no clue why the space is fragmented, because when I look at the large object heap there are only 30 instances: 15 object[] are directly attached to the GC and totally unrelated to me, 1 is a char array also attached directly to the GC heap, and the remaining 15 are mine but are not the cause of this, as I get the same report if I comment them out in code.

    So my question is, what can I do to go further with this? I'm not really sure what to look for in debugging/tools, as it seems my memory is fragmented, but not by me, and huge amounts of free space are allocated by .NET which I can't release. Also, please make sure you understand the question well before answering: I'm not looking for a way to free memory within .NET (GC.Collect), but to free memory that is already free in .NET to the system, as well as to defragment said memory. Note that a slow solution is fine; if it's possible to manually defragment the large heap, I'd be all for it, as I can call it at the end of RefreshCache and it's OK if it takes a second or two to run. Thanks for your help! A few notes I forgot:

    1) The project is a .NET 2.0 website; I get the same results running it in a .NET 4 pool, and the same if I run it in a .NET 4 pool, convert it to .NET 4, and recompile.

    2) These are the results of a release build, so a debug build cannot be the issue.

    3) And this is probably quite important: I do not get these issues at all in the webdev server, only in IIS. In webdev I get memory consumption rather close to my actual consumption (well, more, but not 5-10X more!).


  • ANSI C as core of a C# project? Is this possible?

    - by Nektarios
    I'm writing a NON-GUI app which I want to be cross-platform between OS X and Windows. I'm looking at the following architecture, but I don't know if it will work on the Windows side:

        (Platform-specific entry point)
          -> ANSI C main loop
             -> ANSI C model code doing data processing / logic
             -> (Platform-specific helpers)

    So the core stuff I'm planning to write in regular ANSI C, because a) it should be platform-independent, b) I'm extremely comfortable with C, and c) it can do the job and do it well. The platform-specific entry point can be written in whatever is necessary to get the job done; this is a small amount of code and doesn't matter to me. The platform-specific helpers are the sticky thing. This is stuff like parsing XML, accessing databases, graphics toolkit stuff, whatever: things that aren't easy in C, and that modern languages/frameworks will give you for free. On OS X this code will be written in Objective-C interfacing with Cocoa. On Windows I'm thinking my best bet is to use C#, so on Windows my architecture (simplified) looks like:

        (C# or C?) -> ANSI C -> C#

    Is this possible? Some thoughts/suggestions so far:

    1) Compile my C core as a .dll. This is fine, but there seems to be no way to call my C# helpers unless I can somehow get function pointers and pass them to my core, and that seems unlikely.

    2) Compile a C .exe and a C# .exe and have them talk via shared memory or some kind of IPC. I'm not entirely opposed to this, but it obviously introduces a lot of complexity, so it doesn't seem ideal.

    3) Instead of C#, use C++: it gets me some nice data management stuff and nice helper code, and I can mix it pretty easily, and the work I do could probably easily port to Linux. But I really don't like C++, and I don't want this to turn into a third-party-library-fest. Not that it's a huge deal, but it's 2010.. anything for basic data management should be built in. And targeting Linux is really not a priority.

    Note that no "total" alternatives are OK, as suggested in other similar questions on SO I've seen (Java, REALbasic, Mono): this is an extremely performance-intensive application doing soft realtime work for game/simulation purposes. I need C and friends here to do it right (maybe you don't, but I do).
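
    On point 1, an editorial sketch (not from the original question): C# can hand a native DLL real function pointers by marshalling delegates, so the "seems unlikely" part is actually supported. All names here (core.dll, set_xml_helper, process) are hypothetical:

    ```c
    /* core.c - hypothetical ANSI C core built as core.dll */
    typedef int (*xml_helper_fn)(const char *xml);

    static xml_helper_fn g_xml_helper = 0;

    __declspec(dllexport) void set_xml_helper(xml_helper_fn fn) {
        g_xml_helper = fn;                   /* stash the managed callback */
    }

    __declspec(dllexport) int process(const char *doc) {
        return g_xml_helper ? g_xml_helper(doc) : -1;
    }
    ```

    ```csharp
    // C# host: exposes a helper to the C core via a delegate-backed pointer.
    using System;
    using System.Runtime.InteropServices;

    class Host
    {
        [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
        delegate int XmlHelperFn(string xml);

        [DllImport("core.dll", CallingConvention = CallingConvention.Cdecl)]
        static extern void set_xml_helper(XmlHelperFn fn);

        [DllImport("core.dll", CallingConvention = CallingConvention.Cdecl)]
        static extern int process(string doc);

        static XmlHelperFn keepAlive; // keep a reference so the GC can't collect it

        static void Main()
        {
            keepAlive = delegate(string xml) { Console.WriteLine("parse: " + xml); return 0; };
            set_xml_helper(keepAlive);
            process("<doc/>");
        }
    }
    ```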


  • How do I determine if a property was overridden?

    - by Benjamin
    Hello, I am doing a project where I need to register all the properties; because the system is so huge, it would require a lot of work to register all the properties that I want to be dependency properties for the purposes of XAML. The goal is to find all properties that are at the top of the tree. So, basically:

    ```csharp
    public class A
    {
        public int Property1 { get; set; }
    }

    public class B : A
    {
        public int Property2 { get; set; }
        public virtual int Property3 { get; set; }
    }

    public class C : B
    {
        public override int Property3 { get; set; }
        public int Property4 { get; set; }
        public int Property5 { get; set; }
    }
    ```

    The end result would be something like this:

        A.Property1
        B.Property2
        B.Property3
        C.Property4
        C.Property5

    If you notice, I don't want to accept overridden properties, because of the way I search for the properties: if I look up C.Property3, for example, and it cannot be found, the search checks C's base type, where it will find it. This is what I have so far:

    ```csharp
    public static void RegisterType(Type type)
    {
        PropertyInfo[] properties = type.GetProperties(
            BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly |
            BindingFlags.GetProperty | BindingFlags.SetProperty);

        if (properties != null && properties.Length > 0)
        {
            foreach (PropertyInfo property in properties)
            {
                // if the property is an indexer then we ignore it
                if (property.Name == "Item" && property.GetIndexParameters().Length > 0)
                    continue;

                // we don't want arrays or generic property types
                if (property.PropertyType.IsArray || property.PropertyType.IsGenericType)
                    continue;

                // Register Property
            }
        }
    }
    ```

    What I want are the following: public properties that are not overridden, not static, not private; either get or set accessors are allowed; they are not an array or a generic type; they are at the top of the tree, i.e. class C in the example is the highest (the property list example is exactly what I am looking for); and they are not an indexer property (this[index]). Any help will be much appreciated =).
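
    A minimal sketch of the override check (an editorial addition, not from the original question): a property is overridden when its accessor's base definition is declared on a different type, which MethodInfo.GetBaseDefinition exposes:

    ```csharp
    // Returns true when the property overrides a base-class declaration,
    // e.g. C.Property3 in the example above; C.Property4 returns false.
    static bool IsOverridden(PropertyInfo property)
    {
        MethodInfo accessor = property.GetGetMethod(false) ?? property.GetSetMethod(false);
        if (accessor == null || !accessor.IsVirtual)
            return false;
        return accessor.GetBaseDefinition().DeclaringType != accessor.DeclaringType;
    }
    ```

    Calling IsOverridden(property) inside the RegisterType loop and continuing on a match would drop C.Property3 while keeping B.Property3, matching the desired list.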


  • Are there any modern GUI toolkits which implement a hierarchical menu buffer zone?

    - by scomar
    In Bruce Tognazzini's quiz on Fitts's Law, the question discussing the bottleneck in the hierarchical menu (as used in almost every modern desktop UI) talks about his design for the original Mac:

    "The bottleneck is the passage between the first-level menu and the second-level menu. Users first slide the mouse pointer down to the category menu item. Then, they must carefully slide the mouse directly across (horizontally) in order to move the pointer into the secondary menu. The engineer who originally designed hierarchicals apparently had his forearm mounted on a track so that he could move it perfectly in a horizontal direction without any vertical component. Most of us, however, have our forearms mounted on a pivot we like to call our elbow. That means that moving our hand describes an arc, rather than a straight line. Demanding that pivoted people move a mouse pointer along in a straight line horizontally is just wrong. We are naturally going to slip downward even as we try to slide sideways. When we are not allowed to slip downward, the menu we're after is going to slam shut just before we get there. The Windows folks tried to overcome the pivot problem with a hack: if they see the user move down into range of the next item on the primary menu, they don't instantly close the second-level menu. Instead, they leave it open for around a half second, so, if users are really quick, they can be inaccurate but still get into the second-level menu before it slams shut. Unfortunately, people's reaction to a heightened chance of error is to slow down, rather than speed up, a well-established phenomenon. Therefore, few users will ever figure out that moving faster could solve their problem. Microsoft's solution is exactly wrong. When I specified the Mac hierarchical menu algorithm in the mid-'80s, I called for a buffer zone shaped like a <, so that users could make an increasingly greater error as they neared the hierarchical without fear of jumping to an unwanted menu. As long as the user's pointer was moving a few pixels over for every one down, on average, the menu stayed open, no matter how slowly they moved. (Cancelling was still really easy: just deliberately move up or down.)"

    This just blew me away! Such a simple idea which would result in a huge improvement in usability. I'm sure I'm not the only one who regularly has the next level of a menu slam shut because I don't move the mouse pointer in a perfectly horizontal line. So my question is: are there any modern UI toolkits which implement this brilliant idea of a <-shaped buffer zone in hierarchical menus? And if not, why not?!


  • What do I need to distribute (keys, certs) for Python w/ SSL-socket connection?

    - by fandingo
    I'm trying to write a generic server-client application that will be able to exchange data amongst servers. I've read over quite a few OpenSSL documents, and I have successfully set up my own CA and created a cert (and private key) for testing purposes. I'm stuck with Python 2.3, so I can't use the standard "ssl" library. Instead, I'm stuck with PyOpenSSL, which doesn't seem bad, but there aren't many documents out there about it. My question isn't really about getting it working; I'm more confused about the certificates and where they need to go. Here are my two programs that do work:

    Server:

    ```python
    #!/bin/env python
    from OpenSSL import SSL
    import socket
    import pickle

    def verify_cb(conn, cert, errnum, depth, ok):
        print('Got cert: %s' % cert.get_subject())
        return ok

    ctx = SSL.Context(SSL.TLSv1_METHOD)
    ctx.set_verify(SSL.VERIFY_PEER|SSL.VERIFY_FAIL_IF_NO_PEER_CERT, verify_cb)
    # ??????
    ctx.use_privatekey_file('./Dmgr-key.pem')
    ctx.use_certificate_file('Dmgr-cert.pem')
    # ??????
    ctx.load_verify_locations('./CAcert.pem')

    server = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
    server.bind(('', 50000))
    server.listen(3)

    a, b = server.accept()
    c = a.recv(1024)
    print(c)
    ```

    Client:

    ```python
    from OpenSSL import SSL
    import socket
    import pickle

    def verify_cb(conn, cert, errnum, depth, ok):
        print('Got cert: %s' % cert.get_subject())
        return ok

    ctx = SSL.Context(SSL.TLSv1_METHOD)
    ctx.set_verify(SSL.VERIFY_PEER, verify_cb)
    # ??????????
    ctx.use_privatekey_file('/home/justin/code/work/CA/private/Dmgr-key.pem')
    ctx.use_certificate_file('/home/justin/code/work/CA/Dmgr-cert.pem')
    # ?????????
    ctx.load_verify_locations('/home/justin/code/work/CA/CAcert.pem')

    sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
    sock.connect(('10.0.0.3', 50000))

    a = Tester(2, 2)
    b = pickle.dumps(a)
    sock.send("Hello, world")
    sock.flush()
    sock.send(b)
    sock.shutdown()
    sock.close()
    ```

    I found this information from ftp://ftp.pbone.net/mirror/ftp.pld-linux.org/dists/2.0/PLD/i586/PLD/RPMS/python-pyOpenSSL-examples-0.6-2.i586.rpm which contains some example scripts. As you might gather, I don't fully understand the sections between the "# ??????" comments. I don't get why the certificate and private key are needed on both the client and server. I'm not sure where each should go, but shouldn't I only need to distribute one part of the key (probably the public part)? It undermines the purpose of having asymmetric keys if you still need both on each server, right? I tried alternately removing either the pkey or cert on either box, and I get the following error no matter which I remove:

        OpenSSL.SSL.Error: [('SSL routines', 'SSL3_READ_BYTES', 'sslv3 alert handshake failure'), ('SSL routines', 'SSL3_WRITE_BYTES', 'ssl handshake failure')]

    Could someone explain whether this is the expected behavior for SSL? Do I really need to distribute the private key and public cert to all my clients? I'm trying to avoid any huge security problems, and leaking private keys would tend to be a big one... Thanks for the help!
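
    An editorial note, not from the original question: each endpoint is supposed to load its OWN key pair plus the CA certificate used to verify the peer. Because the server sets VERIFY_FAIL_IF_NO_PEER_CERT, the client must present some certificate signed by the trusted CA, but it does not have to be the server's. A sketch with hypothetical Client-key.pem/Client-cert.pem files issued by the same CA:

    ```python
    from OpenSSL import SSL

    def make_server_context():
        # The server's OWN key pair, plus the CA used to verify client certs.
        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.use_privatekey_file('./Dmgr-key.pem')
        ctx.use_certificate_file('./Dmgr-cert.pem')
        ctx.load_verify_locations('./CAcert.pem')
        return ctx

    def make_client_context():
        # A SEPARATE key pair issued by the same CA (hypothetical file names),
        # so the server's private key never leaves the server machine.
        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.use_privatekey_file('./Client-key.pem')
        ctx.use_certificate_file('./Client-cert.pem')
        ctx.load_verify_locations('./CAcert.pem')
        return ctx
    ```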


  • VBScript Issue Help Required.

    - by MalsiaPro
    I need a script that can run against any drive on a Windows operating system (Windows Server 2003) and list all files and folders, recording the following fields for each. The server is quite big and is within our domain. The required information is:

        Full file path (e.g. C:\Documents and Settings\user\My Documents\testPage.doc)
        File type (e.g. Word document, spreadsheet, database, etc.)
        Size
        When created
        When last modified
        When last accessed

    The script will also need to write that data to a CSV file, which later on I can modify and process in Excel. I can imagine that this data will be huge, but I still need it. I am logged in as an administrator on the server, and the script will need to process protected files too. In previous posts I have read that the script will stop if such files are processed; I need to make sure that not a single file is skipped. Please note I have asked this question before but still have not got a working script. This is the script I have so far, in Test.vbs:

    ```vb
    Set objFS = CreateObject("Scripting.FileSystemObject")

    WScript.Echo Chr(34) & "Full Path" & _
        Chr(34) & "," & Chr(34) & "File Size" & _
        Chr(34) & "," & Chr(34) & "File Date modified" & _
        Chr(34) & "," & Chr(34) & "File Date Created" & _
        Chr(34) & "," & Chr(34) & "File Date Accessed" & Chr(34)

    Set objArgs = WScript.Arguments
    strFolder = objArgs(0)
    Set objFolder = objFS.GetFolder(strFolder)
    Go (objFolder)

    Sub Go(objDIR)
        If objDIR <> "\System Volume Information" Then
            For Each eFolder In objDIR.SubFolders
                Go eFolder
            Next
        End If
        For Each strFile In objDIR.Files
            WScript.Echo Chr(34) & strFile.Path & Chr(34) & "," & _
                Chr(34) & strFile.Size & Chr(34) & "," & _
                Chr(34) & strFile.DateLastModified & Chr(34) & "," & _
                Chr(34) & strFile.DateCreated & Chr(34) & "," & _
                Chr(34) & strFile.DateLastAccessed & Chr(34)
        Next
    End Sub
    ```

    I am currently running it from the command line:

        c:\test> cscript //nologo Test.vbs "c:\" > "C:\test\Output.csv"

    The script is not working, and I don't know why.
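
    An editorial sketch (not part of the original post) of the usual fixes: compare the folder's Name rather than the Folder object against a path string, skip folders that deny access instead of dying, and emit the missing file-type column via the FileSystemObject Type property:

    ```vb
    Sub Go(objDIR)
        If objDIR.Name <> "System Volume Information" Then
            For Each eFolder In objDIR.SubFolders
                On Error Resume Next      ' protected folder? skip it, keep going
                Go eFolder
                On Error GoTo 0
            Next
        End If
        For Each strFile In objDIR.Files
            WScript.Echo Chr(34) & strFile.Path & Chr(34) & "," & _
                Chr(34) & strFile.Type & Chr(34) & "," & _
                Chr(34) & strFile.Size & Chr(34) & "," & _
                Chr(34) & strFile.DateLastModified & Chr(34) & "," & _
                Chr(34) & strFile.DateCreated & Chr(34) & "," & _
                Chr(34) & strFile.DateLastAccessed & Chr(34)
        Next
    End Sub
    ```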


  • Problem with Precision floating point operation in C

    - by Microkernel
    Hi guys, for one of my course projects I started implementing a naive Bayesian classifier in C. My project is to implement a document classifier application (especially for spam) using huge training data. Now I have a problem implementing the algorithm because of the limitations of C's data types. (The algorithm I am using is given here: http://en.wikipedia.org/wiki/Bayesian_spam_filtering)

    PROBLEM STATEMENT: The algorithm involves taking each word in a document and calculating the probability of it being a spam word. If p1, p2, p3, ..., pn are the probabilities of words 1, 2, 3, ..., n, the probability of the doc being spam is calculated using the combining rule from the linked article,

        p = (p1 p2 ... pn) / (p1 p2 ... pn + (1 - p1)(1 - p2) ... (1 - pn))

    which is what the code below computes. Here, a probability value can very easily be around 0.01, so even if I use the "double" data type my calculation goes for a toss. To confirm this I wrote the sample code given below:

    ```c
    #define PROBABILITY_OF_UNLIKELY_SPAM_WORD (0.01)
    #define PROBABILITY_OF_MOSTLY_SPAM_WORD   (0.99)

    int main()
    {
        int index;
        long double numerator = 1.0;
        long double denom1 = 1.0, denom2 = 1.0;
        long double doc_spam_prob;

        /* Simulating FEW unlikely spam words */
        for(index = 0; index < 162; index++)
        {
            numerator = numerator*(long double)PROBABILITY_OF_UNLIKELY_SPAM_WORD;
            denom2 = denom2*(long double)PROBABILITY_OF_UNLIKELY_SPAM_WORD;
            denom1 = denom1*(long double)(1 - PROBABILITY_OF_UNLIKELY_SPAM_WORD);
        }

        /* Simulating lot of mostly definite spam words */
        for (index = 0; index < 1000; index++)
        {
            numerator = numerator*(long double)PROBABILITY_OF_MOSTLY_SPAM_WORD;
            denom2 = denom2*(long double)PROBABILITY_OF_MOSTLY_SPAM_WORD;
            denom1 = denom1*(long double)(1 - PROBABILITY_OF_MOSTLY_SPAM_WORD);
        }

        doc_spam_prob = (numerator/(denom1+denom2));
        return 0;
    }
    ```

    I tried float, double, and even long double data types, but still the same problem. Hence, say in a 100K-word document I am analyzing, if just 162 words have a 1% spam probability and the remaining 99,838 are conspicuously spam words, my app will still say it is not a spam doc, because of the precision error (as the numerator easily goes to ZERO)!!! This is the first time I am hitting such an issue. So how exactly should this problem be tackled?
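
    A sketch of the standard remedy (an editorial addition, not from the original post): do the arithmetic in the log domain, where the products become sums that never underflow. With eta = sum(ln(1 - pi) - ln(pi)), the combined probability is exactly 1/(1 + exp(eta)):

    ```c
    #include <math.h>
    #include <stdio.h>

    /* Combine per-word spam probabilities without underflow: equivalent to
       prod(p) / (prod(p) + prod(1-p)) but computed as 1 / (1 + exp(eta)). */
    double spam_probability(const double *p, int n)
    {
        double eta = 0.0;
        int i;
        for (i = 0; i < n; i++)
            eta += log(1.0 - p[i]) - log(p[i]);  /* sums stay in a safe range */
        return 1.0 / (1.0 + exp(eta));           /* exp overflow -> 0.0, still sane */
    }

    int main(void)
    {
        double p[1162];
        int i;
        for (i = 0; i < 162; i++)  p[i] = 0.01;  /* the unlikely spam words  */
        for (; i < 1162; i++)      p[i] = 0.99;  /* the definite spam words  */
        printf("%f\n", spam_probability(p, 1162)); /* ~1.0: correctly spam   */
        return 0;
    }
    ```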


  • Performance - FunctionCall vs Event vs Action vs Delegate

    - by hwcverwe
    Currently I am using Microsoft Sync Framework to synchronize databases. I need to gather information for each record that is inserted/updated/deleted by the Sync Framework and do something with that information. The sync speed can go over 50,000 records per minute, so my additional code needs to be very lightweight; otherwise it will be a huge performance penalty. Microsoft Sync Framework raises a SyncProgress event for each record. I am subscribed to it like this:

    ```csharp
    // Assembly1
    SyncProvider.SyncProgress += OnSyncProgress;

    // ....

    private void OnSyncProgress(object sender, DbSyncProgressEventArgs e)
    {
        switch (args.Stage)
        {
            case DbSyncStage.ApplyingInserts:
                // MethodCall/Delegate/Action<>/EventHandler<> => HandleInsertedRecordInformation
                // Do something with inserted record info
                break;
            case DbSyncStage.ApplyingUpdates:
                // MethodCall/Delegate/Action<>/EventHandler<> => HandleUpdatedRecordInformation
                // Do something with updated record info
                break;
            case DbSyncStage.ApplyingDeletes:
                // MethodCall/Delegate/Action<>/EventHandler<> => HandleDeletedRecordInformation
                // Do something with deleted record info
                break;
        }
    }
    ```

    Somewhere else, in another assembly, I have three methods:

    ```csharp
    // Assembly2
    public class SyncInformation
    {
        public void HandleInsertedRecordInformation(...) {...}
        public void HandleUpdatedRecordInformation(...) {...}
        public void HandleDeletedRecordInformation(...) {...}
    }
    ```

    Assembly2 has a reference to Assembly1, so Assembly1 does not know anything about the existence of the SyncInformation class that needs to handle the gathered information. This gives me the following options for triggering that code:

    1. Use events and subscribe to them in Assembly2, via 1.1 EventHandler<>, 1.2 Action<>, or 1.3 delegates.

    2. Use dependency injection: public class Assembly2.SyncInformation : Assembly1.ISyncInformation.

    3. Other?

    I know the performance depends on: the OnSyncProgress switch; whether I use a method call, delegate, Action<>, or EventHandler<>; and the implementation of the SyncInformation class. I currently don't care about the implementation of the SyncInformation class; I am mainly focused on the OnSyncProgress method and how to call the SyncInformation methods. So my questions are: What is the most efficient approach? What is the least efficient approach? Is there a better way than using a switch in OnSyncProgress?
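
    For scale, a crude editorial benchmark sketch (not from the original question): direct calls, delegates, and events all cost on the order of nanoseconds per invocation, so at 50,000 records per minute the dispatch mechanism is noise next to the handler body:

    ```csharp
    using System;
    using System.Diagnostics;

    class CallBench
    {
        static event Action<int> Progress;
        static void Handle(int i) { }

        static void Time(string label, Action body)
        {
            Stopwatch sw = Stopwatch.StartNew();
            body();
            Console.WriteLine("{0}: {1} ms", label, sw.ElapsedMilliseconds);
        }

        static void Main()
        {
            const int N = 10000000;
            Action<int> del = Handle;
            Progress += Handle;

            Time("direct call", delegate { for (int i = 0; i < N; i++) Handle(i); });
            Time("delegate   ", delegate { for (int i = 0; i < N; i++) del(i); });
            Time("event      ", delegate { for (int i = 0; i < N; i++) Progress(i); });
        }
    }
    ```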


  • Using a large list of terms, search through page text and replace words with links

    - by dunc
    A while ago I posted this question asking if it's possible to convert text to HTML links if it matches a list of terms from my database. I have a fairly huge list of terms: around 6,000. The accepted answer on that question was superb, but having never used XPath, I was at a loss when problems started occurring. At one point, after fiddling with code, I somehow managed to add over 40,000 random characters to our database, the majority of which required manual removal. Since then I've lost faith in that idea, and the simpler PHP solutions simply weren't efficient enough to deal with the amount of data and the quantity of terms. My next attempt at a solution is to write a JS script which, once the page has loaded, retrieves the terms and matches them against the text on a page. This answer has an idea which I'd like to attempt. I would use AJAX to retrieve the terms from the database, to build an object such as this:

    ```javascript
    var words = [
        { word: 'Something', link: 'http://www.something.com' },
        { word: 'Something Else', link: 'http://www.something.com/else' }
    ];
    ```

    When the object has been built, I'd use this kind of code:

    ```javascript
    //for each array element
    $.each(words, function() {
        //store it ("this" is gonna become the dom element in the next function)
        var search = this;
        $('.message').each(function() {
            //if it's exactly the same
            if ($(this).text() === search.word) {
                //do your magic tricks
                $(this).html('<a href="' + search.link + '">' + search.link + '</a>');
            }
        });
    });
    ```

    Now, at first sight, there is a major issue here: with 6,000 terms, will this code be in any way efficient enough to do what I'm trying to do? One option would be to perform some of the overhead within the PHP script that the AJAX call hits. For instance, I could send the ID of the post, and the PHP script could use SQL statements to retrieve all of the information from the post and match it against all 6,000 terms; the return call to the JavaScript would then simply be the matching terms, which would significantly reduce the number of matches the above jQuery has to make (around 50 at most). I have no problem with the script taking a few seconds to "load" in the user's browser, as long as it isn't hammering their CPU or anything like that. So, two questions in one: Can I make this work? What steps can I take to make it as efficient as possible? Thanks in advance,
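
    An editorial sketch (not from the original question) of the client side under that plan: once the server returns only the terms that actually occur in the post, build one combined regex and make a single replacement pass per message instead of one scan per term:

    ```javascript
    // Escape regex metacharacters in a term before it joins the pattern.
    function escapeRegExp(s) {
        return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    }

    // words: the ~50 server-filtered {word, link} pairs.
    function linkTerms(words) {
        var lookup = {}, parts = [];
        $.each(words, function() {
            lookup[this.word.toLowerCase()] = this.link;
            parts.push(escapeRegExp(this.word));
        });
        var re = new RegExp("\\b(" + parts.join("|") + ")\\b", "gi");

        $('.message').each(function() {
            var html = $(this).text().replace(re, function(match) {
                return '<a href="' + lookup[match.toLowerCase()] + '">' + match + '</a>';
            });
            $(this).html(html);
        });
    }
    ```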


  • sys.path() and PYTHONPATH issues

    - by Justin
    I've been learning Python; I'm working in 2.7.3, and I'm trying to understand import statements. The documentation says that when you attempt to import a module, the interpreter will first search for one of the built-in modules. What is meant by a built-in module? Then, the documentation says that the interpreter searches in the directories listed by sys.path, and that sys.path is initialized from these sources:

    1. the directory containing the input script (or the current directory);
    2. PYTHONPATH (a list of directory names, with the same syntax as the shell variable PATH);
    3. the installation-dependent default.

    Here is a sample output of sys.path from my computer, using Python in command-line mode (I deleted a few entries so that it wouldn't be huge):

    ```python
    ['', '/usr/lib/python2.7', '/usr/lib/python2.7/lib-old',
     '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages',
     '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PIL',
     '/usr/lib/python2.7/dist-packages/gst-0.10',
     '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7',
     '/usr/lib/python2.7/dist-packages/ubuntuone-couch',
     '/usr/lib/python2.7/dist-packages/ubuntuone-storage-protocol']
    ```

    Now, I'm assuming that the '' path refers to the directory containing the script, so I figured the rest of them would come from my PYTHONPATH environment variable. However, when I go to the terminal and type env, PYTHONPATH doesn't exist as an environment variable. I also tried import os then os.environ, but I get the same output. Do I really not have a PYTHONPATH environment variable? I don't believe I ever specifically defined one, but I assumed that when I installed new packages they automatically altered that environment variable. If I don't have a PYTHONPATH, how is my sys.path getting populated? If I download new packages, how does Python know where to look for them without this PYTHONPATH variable? How do environment variables work? From what I understand, environment variables are specific to the process for which they are set; however, if I open multiple terminal windows and run env, they all display a number of identical variables, for example PATH. I know there are file locations for persistent environment variables, for example /etc/environment, which contains my PATH variable. Is it possible to tell where a persistent environment variable is stored? What is the recommended location for storing new persistent environment variables? And how do environment variables actually work with, say, the Python interpreter? The Python interpreter looks for PYTHONPATH, but how does it work at the nitty-gritty level?
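
    An editorial sketch (not from the original question) for poking at this yourself: the installation-dependent entries are appended at startup by the standard-library site module, which is why an unset PYTHONPATH still leaves a well-populated sys.path:

    ```python
    import os
    import site
    import sys

    print(os.environ.get('PYTHONPATH'))  # None when the variable is unset
    print(sys.prefix)                    # root of the installation-dependent defaults

    # site.py adds the site-packages/dist-packages directories at startup;
    # getsitepackages() is available in CPython 2.7's site module.
    if hasattr(site, 'getsitepackages'):
        print(site.getsitepackages())

    print(sys.path)
    ```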


  • Toggling between states on 3 containers in jQuery

    - by Saif Bechan
    Hello, I have a PHP/MySQL/AJAX auction application. There is a section that can have three states: in state one, the user can place a bid (the bid button is shown); in state two, the user has to wait a number of seconds before he can place a new bid (the number of seconds is shown); in state three, the user has an auto-bidding system enabled (the maximum price and amount are shown). I get these values through jQuery AJAX, and I want to present the user with the right section. The code I have is as follows:

    ```javascript
    if (data[i].bidwait >= 0 && data[i].amount == '') {
        // In this section the user has no autobiddings, and can place a bid
        $(productContainer+' .bid-wrapper').removeClass('none');
        $(productContainer+' .bidwait-wrapper').addClass('none');
        $(productContainer+' .autobid-wrapper').addClass('none');
    } else if (data[i].bidwait < 0 && data[i].amount == '') {
        // In this section the user has to wait some seconds to place a new bid
        $(productContainer+' .bid-wrapper').addClass('none');
        $(productContainer+' .bidwait-wrapper').removeClass('none');
        $(productContainer+' .autobid-wrapper').addClass('none');
        $(productContainer+' .bidwait-text').text(data[i].bidwait * -1);
    } else {
        // In the last section the user has the auto bidder enabled, so that is shown
        $(productContainer+' .bid-wrapper').addClass('none');
        $(productContainer+' .bidwait-wrapper').addClass('none');
        $(productContainer+' .autobid-wrapper').removeClass('none');
        $(productContainer+' .amount').html(data[i].amount == 0 ? '&#8734;' : data[i].amount);
        $(productContainer+' .maxprice').html(data[i].maxprice == 0 ? '&#8734;' : data[i].maxprice);
    }
    ```

    This looks like an awful lot of code for something so small. I was wondering if there is an easier method to accomplish such a thing. Speed is a huge issue for me, because this has to run every second in the user's browser. If there is no other option, I am just going to remove this feature and go with a non-AJAX approach. If you have been, thank you for reading!
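
    An editorial sketch of a tidier shape (not from the original question): hide all three wrappers with one selector, then reveal only the wrapper for the current state; caching the container lookup also avoids re-querying the DOM every second:

    ```javascript
    function showSection(productContainer, item) {
        var container = $(productContainer);
        // hide every state wrapper in one pass, then reveal exactly one
        container.find('.bid-wrapper, .bidwait-wrapper, .autobid-wrapper').addClass('none');

        if (item.bidwait >= 0 && item.amount == '') {
            container.find('.bid-wrapper').removeClass('none');
        } else if (item.bidwait < 0 && item.amount == '') {
            container.find('.bidwait-wrapper').removeClass('none');
            container.find('.bidwait-text').text(item.bidwait * -1);
        } else {
            container.find('.autobid-wrapper').removeClass('none');
            container.find('.amount').html(item.amount == 0 ? '&#8734;' : item.amount);
            container.find('.maxprice').html(item.maxprice == 0 ? '&#8734;' : item.maxprice);
        }
    }
    ```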


  • The dynamic Type in C# Simplifies COM Member Access from Visual FoxPro

    - by Rick Strahl
    I’ve written quite a bit about Visual FoxPro interoperating with .NET in the past both for ASP.NET interacting with Visual FoxPro COM objects as well as Visual FoxPro calling into .NET code via COM Interop. COM Interop with Visual FoxPro has a number of problems but one of them at least got a lot easier with the introduction of dynamic type support in .NET. One of the biggest problems with COM interop has been that it’s been really difficult to pass dynamic objects from FoxPro to .NET and get them properly typed. The only way that any strong typing can occur in .NET for FoxPro components is via COM type library exports of Visual FoxPro components. Due to limitations in Visual FoxPro’s type library support as well as the dynamic nature of the Visual FoxPro language where few things are or can be described in the form of a COM type library, a lot of useful interaction between FoxPro and .NET required the use of messy Reflection code in .NET. Reflection is .NET’s base interface to runtime type discovery and dynamic execution of code without requiring strong typing. In FoxPro terms it’s similar to EVALUATE() functionality albeit with a much more complex API and corresponiding syntax. The Reflection APIs are fairly powerful, but they are rather awkward to use and require a lot of code. Even with the creation of wrapper utility classes for common EVAL() style Reflection functionality dynamically access COM objects passed to .NET often is pretty tedious and ugly. Let’s look at a simple example. In the following code I use some FoxPro code to dynamically create an object in code and then pass this object to .NET. An alternative to this might also be to create a new object on the fly by using SCATTER NAME on a database record. How the object is created is inconsequential, other than the fact that it’s not defined as a COM object – it’s a pure FoxPro object that is passed to .NET. Here’s the code: *** Create .NET COM InstanceloNet = CREATEOBJECT('DotNetCom.DotNetComPublisher') *** Create a Customer Object Instance (factory method) loCustomer = GetCustomer() loCustomer.Name = "Rick Strahl" loCustomer.Company = "West Wind Technologies" loCustomer.creditLimit = 9999999999.99 loCustomer.Address.StreetAddress = "32 Kaiea Place" loCustomer.Address.Phone = "808 579-8342" loCustomer.Address.Email = "[email protected]" *** Pass Fox Object and echo back values ? 
loNet.PassRecordObject(loObject) RETURN FUNCTION GetCustomer LOCAL loCustomer, loAddress loCustomer = CREATEOBJECT("EMPTY") ADDPROPERTY(loCustomer,"Name","") ADDPROPERTY(loCustomer,"Company","") ADDPROPERTY(loCUstomer,"CreditLimit",0.00) ADDPROPERTY(loCustomer,"Entered",DATETIME()) loAddress = CREATEOBJECT("Empty") ADDPROPERTY(loAddress,"StreetAddress","") ADDPROPERTY(loAddress,"Phone","") ADDPROPERTY(loAddress,"Email","") ADDPROPERTY(loCustomer,"Address",loAddress) RETURN loCustomer ENDFUNC Now prior to .NET 4.0 you’d have to access this object passed to .NET via Reflection and the method code to do this would looks something like this in the .NET component: public string PassRecordObject(object FoxObject) { // *** using raw Reflection string Company = (string) FoxObject.GetType().InvokeMember( "Company", BindingFlags.GetProperty,null, FoxObject,null); // using the easier ComUtils wrappers string Name = (string) ComUtils.GetProperty(FoxObject,"Name"); // Getting Address object – then getting child properties object Address = ComUtils.GetProperty(FoxObject,"Address");    string Street = (string) ComUtils.GetProperty(FoxObject,"StreetAddress"); // using ComUtils 'Ex' functions you can use . Syntax     string StreetAddress = (string) ComUtils.GetPropertyEx(FoxObject,"AddressStreetAddress"); return Name + Environment.NewLine + Company + Environment.NewLine + StreetAddress + Environment.NewLine + " FOX"; } Note that the FoxObject is passed in as type object which has no specific type. Since the object doesn’t exist in .NET as a type signature the object is passed without any specific type information as plain non-descript object. To retrieve a property the Reflection APIs like Type.InvokeMember or Type.GetProperty().GetValue() etc. need to be used. I made this code a little simpler by using the Reflection Wrappers I mentioned earlier but even with those ComUtils calls the code is pretty ugly requiring passing the objects for each call and casting each element. Using .NET 4.0 Dynamic Typing makes this Code a lot cleaner Enter .NET 4.0 and the dynamic type. Replacing the input parameter to the .NET method from type object to dynamic makes the code to access the FoxPro component inside of .NET much more natural: public string PassRecordObjectDynamic(dynamic FoxObject) { // *** using raw Reflection string Company = FoxObject.Company; // *** using the easier ComUtils class string Name = FoxObject.Name; // *** using ComUtils 'ex' functions to use . Syntax string Address = FoxObject.Address.StreetAddress; return Name + Environment.NewLine + Company + Environment.NewLine + Address + Environment.NewLine + " FOX"; } As you can see the parameter is of type dynamic which as the name implies performs Reflection lookups and evaluation on the fly so all the Reflection code in the last example goes away. The code can use regular object ‘.’ syntax to reference each of the members of the object. You can access properties and call methods this way using natural object language. Also note that all the type casts that were required in the Reflection code go away – dynamic types like var can infer the type to cast to based on the target assignment. As long as the type can be inferred by the compiler at compile time (ie. the left side of the expression is strongly typed) no explicit casts are required. Note that although you get to use plain object syntax in the code above you don’t get Intellisense in Visual Studio because the type is dynamic and thus has no hard type definition in .NET . 
The above example calls a .NET Component from VFP, but it also works the other way around. Another frequent scenario is an .NET code calling into a FoxPro COM object that returns a dynamic result. Assume you have a FoxPro COM object returns a FoxPro Cursor Record as an object: DEFINE CLASS FoxData AS SESSION OlePublic cAppStartPath = "" FUNCTION INIT THIS.cAppStartPath = ADDBS( JustPath(Application.ServerName) ) SET PATH TO ( THIS.cAppStartpath ) ENDFUNC FUNCTION GetRecord(lnPk) LOCAL loCustomer SELECT * FROM tt_Cust WHERE pk = lnPk ; INTO CURSOR TCustomer IF _TALLY < 1 RETURN NULL ENDIF SCATTER NAME loCustomer MEMO RETURN loCustomer ENDFUNC ENDDEFINE If you call this from a .NET application you can now retrieve this data via COM Interop and cast the result as dynamic to simplify the data access of the dynamic FoxPro type that was created on the fly: int pk = 0; int.TryParse(Request.QueryString["id"],out pk); // Create Fox COM Object with Com Callable Wrapper FoxData foxData = new FoxData(); dynamic foxRecord = foxData.GetRecord(pk); string company = foxRecord.Company; DateTime entered = foxRecord.Entered; This code looks simple and natural as it should be – heck you could write code like this in days long gone by in scripting languages like ASP classic for example. Compared to the Reflection code that previously was necessary to run similar code this is much easier to write, understand and maintain. For COM interop and Visual FoxPro operation dynamic type support in .NET 4.0 is a huge improvement and certainly makes it much easier to deal with FoxPro code that calls into .NET. Regardless of whether you’re using COM for calling Visual FoxPro objects from .NET (ASP.NET calling a COM component and getting a dynamic result returned) or whether FoxPro code is calling into a .NET COM component from a FoxPro desktop application. At one point or another FoxPro likely ends up passing complex dynamic data to .NET and for this the dynamic typing makes coding much cleaner and more readable without having to create custom Reflection wrappers. As a bonus the dynamic runtime that underlies the dynamic type is fairly efficient in terms of making Reflection calls especially if members are repeatedly accessed. © Rick Strahl, West Wind Technologies, 2005-2010Posted in COM  FoxPro  .NET  CSharp  


  • Using jQuery and SPServices to Display List Items

    - by Bil Simser
    I had an interesting challenge recently that I turned to Marc Anderson’s wonderful SPServices project for. If you haven’t already seen or used SPServices, please do. It’s a jQuery library that does primarily two things. First, it wraps up all of the SharePoint web services in a nice little AJAX wrapper for use in JavaScript. Second, it enhances the form editing of items in SharePoint so you’re not hacking up your List Form pages. My challenge was simple but interesting. The user wanted to display a SharePoint item page (DispForm.aspx, which already had some customization on it to display related items via this blog post from Codeless Solutions for SharePoint) but launch from an external application using the value of one of the fields in the SharePoint list. For simplicity let’s say my list is a list of customers and the related list is a list of orders for that customer. It would look something like this (click on the item to see the full image): Your first thought might be, that’s easy! Display the customer information using a DataView Web Part and filter the item using a query string to match the customer number. However there are a few problems with this idea: You’ll need to build a custom page and then attach that related orders view to it. This is a bit of a problem because the solution from Codeless Solutions relies on the Title field on the page to be displayed. On a custom page you would have to recreate all of the elements found on the DispForm.aspx page so the related view would work. The DataView Web Part doesn’t look *exactly* like what the out of the box display form page does. Not a huge problem and can be overcome with some CSS style overrides but still, more work. A DVWP showing a single record doesn’t have the same toolbar that you would using the DispForm.aspx. Not a show-stopper and you can rebuild the toolbar but it’s going to potentially require code and then there’s the security trimming, etc. that you have to get right. DVWPs are not automatically updated if you add a column to the list like DispForm.aspx is. Work, work, work. For these reasons I thought it would be easier to take the already existing (modified) DispForm.aspx page and just add some jQuery magic to the page to find the item. Why do we need to find it? DispForm.aspx relies on a querystring parameter called “ID” which then displays whatever that item ID number is in the list. Trouble is, when you’re coming in from an external app via a link, you don’t know what that internal ID is (and frankly shouldn’t). I don’t like exposing internal SharePoint IDs to the outside world for the same reason I don’t do it with database IDs. They’re internal and while it’s find to use on the site itself you don’t want external links using it. It’s volatile and can change (delete one item then re-add it back with the same data and watch any ID references break). The next thought might be to call a SharePoint web service with a CAML query to get the item ID number using some criteria (in this case, the customer number). That’s great if you have that ability but again we had an existing application we were just adding a link to. The last thing I wanted to do was to crack open the code on that sucker and start calling web services (primarily because it’s Java, but really I’m a lazy geek). However if you’re doing this and have access to call a web service that would be an option. 
Back to this problem: how do I a) find a SharePoint list item based on some field value other than ID and b) make it low impact so I can just construct a URL to it? That’s where jQuery and SPServices came to the rescue. After spending a few hours of emails back and forth with Marc and a couple of phone calls (and updating jQuery to the latest version, duh!) it was a simple answer. First we need a reference to a) jQuery, b) SPServices and c) our script. I just dropped a Content Editor Web Part, the Swiss Army Knife of Web Parts, onto the DispForm.aspx page and added these lines:

<script type="text/javascript" src="http://intranet/JavaScript/jquery-1.4.2.min.js"></script>
<script type="text/javascript" src="http://intranet/JavaScript/jquery.SPServices-0.5.3.min.js"></script>
<script type="text/javascript" src="http://intranet/JavaScript/RedirectToID.js"></script>

Update it to point to where you keep your scripts located. I prefer to keep them all in Document Libraries as I can make changes to them without having to remote into the server (and on a multiple web front end, that’s just a PITA), it provides me with version control of sorts, and it’s quick to add new plugins and scripts. Now we can look at our RedirectToID.js script. This invokes the SPServices library to call the GetListItems method of the Lists web service and then rewrites the URL to DispForm.aspx to use the correct SharePoint ID (the internal one).

$(document).ready(function() {

    var queryStringValues = $().SPServices.SPGetQueryString();
    var id = queryStringValues["ID"];

    if (id == "0") {

        var customer = queryStringValues["CustomerNumber"];
        var query = "<Query><Where><Eq><FieldRef Name='CustomerNumber'/><Value Type='Text'>" + customer + "</Value></Eq></Where></Query>";
        var url = window.location;
        $().SPServices({
            operation: "GetListItems",
            listName: "Customers",
            async: false,
            CAMLQuery: query,
            completefunc: function (xData, Status) {
                $(xData.responseXML).find("[nodeName=z:row]").each(function() {
                    id = $(this).attr("ows_ID");
                    url = $().SPServices.SPGetCurrentSite() + "/Lists/Customers/DispForm.aspx?ID=" + id;
                    window.location = url;
                });
            }
        });
    }
});

What’s happening here?

Line 3: We call SPServices.SPGetQueryString to get an array of query string values (a handy function in the library, as I had 15 lines of code to do this which is now gone).
Line 4: Extract the ID value from the query string.
Line 6: If we pass in “0” it means we’re looking up a field value. This allows DispForm.aspx to work like normal with SharePoint lists but look up our values when invoked. Why ID at all? DispForm.aspx doesn’t work unless you pass in something, and “0” is a *magic* number that will invoke the page but not look up a value in the database.
Lines 8-15: Extract the CustomerNumber query string value, build a CAML query to find it, then call the GetListItems method using SPServices.
Line 16: Process the results in our completefunc to iterate over all the rows (there should only be one) and extract the real ID of the item.
Lines 17-20: Build a new URL based on the site (using a call to SPGetCurrentSite) and append our real ID to redirect to the DispForm.aspx page.

As you can see, it dynamically creates a CAML query for the call to the web service using the passed-in value. You could even make this generic to take in different query strings, one for the field name to search for and the other for the value to find. That way it could be used for any field you want.
For example, you could bring up the correct item on the DispForm.aspx page based on customer name with something like this:

http://myserver/Lists/Customers/DispForm.aspx?ID=0&FilterId=CustomerName&FilterValue=Sony

Use your imagination. Some people would opt for building a custom page with a DVWP, but if you want to leverage all the functionality of DispForm.aspx this might come in handy if you don’t want to rely on internal SharePoint IDs.

    Read the article

  • This task is currently locked by a running workflow and cannot be edited. Limitation to both Nintex and SPD workflow

    - by ybbest
Note: this post is from the Nintex Forum here. These limitations apply to both SharePoint Designer workflows and Nintex workflows, as Nintex uses the SharePoint workflow engine. The common cause that I experience is that the ‘parent’ workflow generates more than one task at once. This is common, as you can have multiple approvers for a given approval process. You could also have a workflow running when the task is created; one common scenario is wanting to set a custom column value in your approval task. For me this is a huge limitation; as a Nintex lover I really hope Nintex can solve this problem with Microsoft going forward.

Introduction

“This task is currently locked by a running workflow and cannot be edited” is a common message that is seen when an error occurs while the SharePoint workflow engine is processing a task item associated with a workflow. When a workflow processes a task normally, the following sequence of events is expected to occur:

1. The process begins.
2. The workflow places a ‘lock’ on the task so nothing else can change the values while the workflow is processing.
3. The workflow processes the task.
4. The lock is released when the task processing is finished.

When the message is encountered, it usually indicates that an error occurred between steps 2 and 4. As a result, the lock is never released. Therefore, the ‘task locked’ message is not an error itself, rather a symptom of another error – the ‘task locked’ message does not indicate what went wrong. In most cases, once this message is encountered, the workflow cannot be made to continue and must be terminated and started again. The following is a guide that can help troubleshoot the cause of these messages. Some initial observations to narrow down the potential causes are:

- Is the error consistent or intermittent? When the error is consistent, it will happen every time the workflow is run. When it is intermittent, it may happen regularly, but not every time.
- Does the error occur the first time the user tries to respond to a task, or do they respond, notice the workflow does not continue, and see the error when they respond again? If the message is present when the user first responds to the task, the issue would have occurred when the task was created. Otherwise, it would have occurred when the user attempted to respond to the task.

Causes

Modifying the task list

A cause of this error appearing consistently the first time a user tries to respond to a task is a modification to the default task list schema. For example, changing the ‘Assigned to’ field in a task list to be a multiple selection will cause the behaviour.

Deleting the workflow task then restoring it from the Recycle Bin

If you start a workflow, delete the workflow task, then restore it from the Recycle Bin in SharePoint, the workflow will fail with the ‘task locked’ error. This is confirmed behaviour whether using a SharePoint Designer or a Nintex workflow. You will need to terminate the workflow and start it again.

Parallel simultaneous responses

A cause of this error appearing inconsistently is multiple users responding to tasks in parallel at the same time. In this scenario, one task will complete correctly and the other will not process. When the user tries again, the ‘task locked’ message will display. Nintex included a workaround for this issue in build 11000.
In build 11000 and later, one of the users will receive a message on the task form when they attempt to respond, stating that they need to try again in a few moments.

Additional processing on the task

A cause of this error appearing both consistently and inconsistently is having an additional system running on the items in the task list. Some examples include: a workflow running on the task list, an event receiver running on the task list, or another automated process querying and updating workflow tasks. Note: this Microsoft help article (http://office.microsoft.com/en-us/sharepointdesigner/HA102376561033.aspx#5) explains creating a workflow that runs on the task list to update a field on the task. Our experience shows that this causes the ‘task locked’ issues when the ‘parent’ workflow is generating more than one task at once.

Isolated system error

If the error is a rare or one-off event, then an isolated system error may have occurred. For example, if there is a database connectivity issue while the workflow is processing the task response, the task will lock. In this case, the user will respond to a task but the workflow will not continue. When they respond again, the ‘task locked’ message will display. In this case, there will be an error in the SharePoint ULS logs at the time that the user originally responded.

Temporary delay while the workflow processes

If the workflow is taking a long time to process after a user submits a task, they may notice and try to respond to the task again. They will see the task locked error, but after a number of attempts (or after waiting some time) the task response page eventually indicates the task has been responded to. In this case, nothing actually went wrong, and the error message gives an accurate indication of what is happening – the workflow temporarily locked the task while it was processing. This scenario may occur in a very large workflow, or after the SharePoint application pool has just started.

Modifying the task via a web service with an invalid URL

If the Nintex Workflow web service is used to respond to or delegate a task, the site context part of the URL must be a valid alternate access mapping (AAM) URL. For example, if you access the web service via the IP address of the SharePoint server, and the IP address is not a valid AAM, the task can become locked.

The workflow has become stuck without any apparent errors

This behaviour can occur as a result of a bug in the SharePoint 2010 workflow engine. If you do not have the August 2010 Cumulative Update (or later) for SharePoint, and your workflow uses delays, “Flexi-task”, “State machine” or “Task Reminder” actions, or variables, you could be affected. Check the SharePoint 2010 Updates site here: http://technet.microsoft.com/en-us/sharepoint/ff800847. The October CU is recommended: http://support.microsoft.com/kb/2553031. The fix is described as: “Consider the following scenario. You add a Delay activity to a workflow. Then, you set the duration for the Delay activity. You deploy the workflow in SharePoint Foundation 2010. In this scenario, the workflow is not resumed after the duration of the Delay activity”. If you find this is occurring in your environment, install the October CU, terminate all the running workflows affected and run them afresh.

Investigative steps

The first step to isolate the issue is to create a new task list on the site and configure the workflow to use it. Any customizations that were made to the original task list should not be made to the new task list.
If the new task list eliminates the issue, then the cause can be attributed to the original task list or a change that was made to it. To change the task list that the workflow uses:

1. In Workflow Designer select Settings -> Startup Options.
2. Configure the task list as required.

If none of the scenarios above help, check the SharePoint logs for any messages with a category of ‘Workflow Infrastructure’.

Conclusion

The information in this article has been gathered from observations and investigations by Nintex. The sources of these issues are in the underlying SharePoint workflow engine. This article will be updated if further causes are discovered.

From <http://connect.nintex.com/forums/thread/6503.aspx>

    Read the article

  • ActiveX component can't create Object Error? Check 64 bit Status

    - by Rick Strahl
If you're running on IIS 7 and a 64 bit operating system you might run into the following error using ASP classic or ASP.NET with COM interop. In classic ASP applications the error will show up as:

ActiveX component can't create object (Error 429)

(actually, without error handling the error just shows up as a 500 error page). In my case the code that's been giving me problems has been a FoxPro COM object I'd been using to serve banner ads to some of my pages. The code basically looks up banners from a database table and displays them at random. The ASP classic code that uses it looks like this:

<%
Set banner = Server.CreateObject("wwBanner.aspBanner")
banner.BannerFile = "wwsitebanners"
Response.Write(banner.GetBanner(-1))
%>

Originally this code had no specific error checking as above, so the ASP pages just failed with 500 error pages from the Web server. To find out what the problem is, this code is more useful, at least for debugging:

<%
ON ERROR RESUME NEXT
Set banner = Server.CreateObject("wwBanner.aspBanner")
Response.Write(err.Number & " - " & err.Description)
banner.BannerFile = "wwsitebanners"
Response.Write(banner.GetBanner(-1))
%>

which results in:

429 - ActiveX component can't create object

which at least gives you a slight clue. In ASP.NET, invoking the same COM object with code like this:

<%
dynamic banner = wwUtils.CreateComInstance("wwBanner.aspBanner") as dynamic;
banner.cBANNERFILE = "wwsitebanners";
Response.Write(banner.getBanner(-1));
%>

results in:

Retrieving the COM class factory for component with CLSID {B5DCBB81-D5F5-11D2-B85E-00600889F23B} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).

The class is in fact registered though, and the COM server loads fine from a command prompt or other COM client. This error can be caused by a COM server that doesn't load. It looks like a COM registration error. There are a number of traditional reasons why this error can crop up, of course:

- The server isn't registered (run regsvr32 to register a DLL server, or /regserver on an EXE server)
- Access permissions aren't set on the COM server (the Web account has to be able to read the DLL, ie. Network Service)
- The COM server fails to load during initialization, ie. failing during startup

One thing I always do to check for COM errors is fire up the server in a COM client outside of IIS and ensure that it works there first - it's almost always easier to debug a server outside of the Web environment. In my case I tried the server in Visual FoxPro on the server with:

loBanners = CREATEOBJECT("wwBanner.aspBanner")
loBanners.cBannerFile = "wwsitebanners"
? loBanners.GetBanner(-1)

and it worked just fine. If you don't have a full dev environment on the server you can also use VBScript to do the same thing and run the .vbs file from the command prompt (note that outside of ASP there is no Server object, so plain CreateObject is used):

Set banner = CreateObject("wwBanner.aspBanner")
banner.BannerFile = "wwsitebanners"
MsgBox(banner.getBanner(-1))

Since both of these work, it tells me the server is registered and working properly. This leaves startup failures or permissions as the problem. I double-checked permissions for the Application Pool and the permissions of the folder where the DLL lives, and both are properly set to allow access by the Application Pool impersonated user. Just to be sure I assigned an Admin user to the Application Pool, but still no go. So now what?

64 bit Servers Ahoy

A couple of weeks back I had set up a few of my Application Pools to 64 bit mode.
My server is Server 2008 64 bit, and by default Application Pools run 64 bit. Originally when I installed the server I set up most of my Application Pools in 32 bit mode, mainly for backwards compatibility. But as more of my code migrates to 64 bit OSs I figured it'd be a good idea to see how well my code runs in 64 bit mode. The transition has been mostly painless - until today, when I noticed the problem with the code above while scrolling through my IIS logs and noticing a lot of 500 errors on many of my ASP classic pages. The code in question in most of these pages deals with this single simple COM object. It took a while to figure out that the problem is caused by the Application Pool running in 64 bit mode. The issue is that 32 bit COM objects (ie. my old Visual FoxPro COM component) cannot be loaded in a 64 bit Application Pool. The ASP pages using this COM component broke on the day I switched my main Application Pool into 64 bit mode, but I didn't find the problem until I searched my logs for errors by pure chance. The fix is easy enough once you know what the problem is: switch on the Application Pool's 'Enable 32-bit Applications' setting. Once this is done the COM objects started working correctly again.
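Incidentally, a quick way to confirm which mode an Application Pool is actually running your code in - this is a sketch of mine, not from the original post - is to check the pointer size at runtime from a test page:

// IntPtr.Size is 8 in a 64 bit process and 4 in a 32 bit process,
// so this reports what the Application Pool is actually doing:
Response.Write("This process is running " + (IntPtr.Size == 8 ? "64 bit" : "32 bit"));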
64 bit ASP and ASP.NET with DCOM Servers

This is kind of off topic, but incidentally it's possible to load 32 bit DCOM (out of process) servers from ASP.NET and ASP classic even if those applications run in 64 bit Application Pools. In fact, in West Wind Web Connection I use this capability to run a 64 bit ASP.NET handler that talks to a 32 bit FoxPro COM server, which allows West Wind Web Connection to run in native 64 bit mode without custom configuration (which is actually quite useful). It's probably not a common usage scenario but it's good to know that you can actually access 32 bit COM objects this way from ASP.NET. For West Wind Web Connection this works out well, as the DCOM interface only makes one non-chatty call to the backend server that handles all the rest of the request processing.

Application Pool Isolation is your Friend

For me the recent incident of failure in the classic ASP pages has just been another reminder to be very careful with moving applications to 64 bit operation. There are many little traps when switching to 64 bit that are very difficult to track and test for. I described one issue I had a couple of months ago where one of the default ASP.NET filters was loading the wrong version (32 bit instead of 64 bit), which was extremely difficult to track down and was caused by a very sneaky configuration switch error (basically 3 different entries for the same ISAPI filter, all with different bitness settings). It took me almost a full day to track this down. Recently I've taken to isolating individual applications into separate Application Pools rather than my past practice of combining many apps into shared App Pools. This is a good practice assuming you have enough memory to make this work. Application Pool isolation provides more modularity and allows me to selectively move applications to 64 bit. The error above came about precisely because I moved one of my most populous app pools to 64 bit and forgot about the minimal COM object use in some of my old pages. It's easy to forget.

To 64 bit or Not

Is it worth it to move to 64 bit? Currently I'd say: not really. In my - admittedly limited - testing I don't see any significant performance increases. In fact, 64 bit apps just seem to consume considerably more memory (30-50% more in my pools on average) and performance is minimally improved (less than 5% at the very best) in the load testing I've performed on a couple of sites in both modes. The only real incentive for 64 bit would be applications that require huge data spaces that exceed the 32 bit 4 gigabyte memory limit. However I have a hard time imagining an application that needs 4 gigs of memory in a single Application Pool :-). Curious to hear other opinions on the benefits of 64 bit operation.

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in COM  ASP.NET  FoxPro

    Read the article

  • Week in Geek: US Govt E-card Scam Siphons Confidential Data Edition

    - by Asian Angel
This week we learned how to “back up photos to Flickr, automate repetitive tasks, & normalize MP3 volume”, enable “stereo mix” in Windows 7 to record audio, create custom papercraft toys, read up on three alternatives to Apple’s flaky iOS alarm clock, decorated our desktops & app docks with Google icon packs, and more. Photo by alexschlegel.

Random Geek Links

It has been a busy week on the security & malware fronts and we have a roundup of the latest news to help keep you updated. Photo by TopTechWriter.US.

US govt e-card scam hits confidential data – A fake U.S. government Christmas e-card has managed to siphon off gigabytes of sensitive data from a number of law enforcement and military staff who work on cybersecurity matters, many of whom are involved in computer crime investigations.

Security tool uncovers multiple bugs in every browser – Michal Zalewski reports that he discovered the vulnerability in Internet Explorer a while ago using his cross_fuzz fuzzing tool and reported it to Microsoft in July 2010. Zalewski also used cross_fuzz to discover bugs in other browsers, which he also reported to the relevant organisations.

Microsoft to fix Windows holes, but not ones in IE – Microsoft said that it will release two security bulletins next week fixing three holes in Windows, but it is still investigating or working on fixing holes in Internet Explorer that have been reportedly exploited in attacks.

Microsoft warns of Windows flaw affecting image rendering – Microsoft has warned of a Windows vulnerability that could allow an attacker to take control of a computer if the user is logged on with administrative rights.

Windows 7 Not Affected by Critical 0-Day in the Windows Graphics Rendering Engine – While confirming that details on a critical zero-day vulnerability have made their way into the wild, Microsoft noted that customers running the latest iteration of Windows client and server platforms are not exposed to any risks.

Microsoft warns of Office-related malware – Microsoft’s Malware Protection Center issued a warning this week that it has spotted malicious code on the Internet that can take advantage of a flaw in Word and infect computers after a user does nothing more than read an e-mail. *Refers to a flaw that was addressed in the November security patch releases. Make sure you have all of the latest security updates installed.

Unpatched hole in ImgBurn disk burning application – According to security specialist Secunia, a highly critical vulnerability in ImgBurn, a lightweight disk burning application, can be used to remotely compromise a user’s system.

Hole in VLC Media Player – Virtual Security Research (VSR) has identified a vulnerability in VLC Media Player, in versions up to and including 1.1.5.

Flash Player sandbox can be bypassed – Flash applications run locally can read local files and send them to an online server – something which the sandbox is supposed to prevent.

Chinese auction site touts hacked iTunes accounts – Tens of thousands of reportedly hacked iTunes accounts have been found on Chinese auction site Taobao, but the company claims it is unable to take action unless there are direct complaints.

What happened in the recent Hotmail outage – Mike Schackwitz explains the cause of the recent Hotmail outage.

DOJ sends order to Twitter for Wikileaks-related account info – The U.S. Justice Department has obtained a court order directing Twitter to turn over information about the accounts of activists with ties to Wikileaks, including an Icelandic politician, a legendary Dutch hacker, and a U.S. computer programmer.

Google gets court to block Microsoft Interior Department e-mail win – The U.S. Federal Claims Court has temporarily blocked Microsoft from proceeding with the $49.3 million, five-year DOI contract that it won this past November.

Google Apps customers get email lockdown – Companies and organisations using Google Apps are now able to restrict the email access of selected users.

LibreOffice Is the Default Office Suite for Ubuntu 11.04 – Matthias Klose has announced some details regarding the replacement of the old OpenOffice.org 3.2.1 packages with the new LibreOffice 3.3 ones, starting with the upcoming Ubuntu 11.04 (Natty Narwhal) Alpha 2 release.

Sysadmin Geek Tips

Photo by Filomena Scalise.

How to Setup Software RAID for a Simple File Server on Ubuntu – Do you need a file server that is cheap and easy to set up, “rock solid” reliable, and has e-mail alerting? This tutorial shows you how to use Ubuntu, software RAID, and SaMBa to accomplish just that.

How to Control the Order of Startup Programs in Windows – While you can specify the applications you want to launch when Windows starts, the ability to control the order in which they start is not available. However, there are a couple of ways you can easily overcome this limitation and control the startup order of applications.

Random TinyHacker Links

Using Opera Unite to Send Large Files – A tutorial on using Opera Unite to easily send huge files from your computer.

WorkFlowy is a Useful To-do List Tool – A cool to-do list tool that lets you integrate multiple tasks in one single list easily.

Playing Flash Videos on iOS Devices – Yes, you can play Flash videos on jailbroken iPhones. Here’s a tutorial.

Clear Safari History and Cookies On iPhone – A tutorial on clearing your browser history on iPhone and other iOS devices.

Monitor Your Internet Usage – Here’s a cool, cross-platform tool to monitor your internet bandwidth.

Super User Questions

See what the community had to say on these popular questions from Super User this week.

- Why is my upload speed much less than my download speed?
- Where should I find drivers for my laptop if it didn’t come with a driver disk?
- OEM Office 2010 without media – how to reinstall?
- Is there a point to using theft tracking software like Prey on my laptop, if you have login security?
- Moving an “all-in-one” PC when turned on/off

How-To Geek Weekly Article Recap

Get caught up on your HTG reading with our hottest articles from this past week.

- How to Combine Rescue Disks to Create the Ultimate Windows Repair Disk
- How To Boot 10 Different Live CDs From 1 USB Flash Drive
- What is Camera Raw, and Why Would a Professional Prefer it to JPG?
- Did You Know Facebook Has Built-In Shortcut Keys?
- The How-To Geek Guide to Audio Editing: The Basics

One Year Ago on How-To Geek

Enjoy looking through our latest gathering of retro article goodness.

- Learning Windows 7: Create a Homegroup & Join a New Computer To It
- How To Disconnect a Machine from a Homegroup
- Use Remote Desktop To Access Other Computers On a Small Office or Home Network
- How To Share Files and Printers Between Windows 7 and Vista
- Allow Users To Run Only Specified Programs in Windows 7

The Geek Note

That is all we have for you this week, and we hope your first week back at work or school has gone very well now that the holidays are over. Know a great tip?
Send it in to us at [email protected]. Photo by Pamela Machado.

    Read the article

  • Parallelism in .NET – Part 14, The Different Forms of Task

    - by Reed
Before discussing Task creation and actual usage in concurrent environments, I will briefly expand upon my introduction of the Task class and provide a short explanation of the distinct forms of Task.  The Task Parallel Library includes four distinct, though related, variations on the Task class. In my introduction to the Task class, I focused on the most basic version of Task.  This version of Task, the standard Task class, is most often used with an Action delegate.  This allows you to implement each task within the task decomposition as a single delegate.

Typically, when using the new threading constructs in .NET 4 and the Task Parallel Library, we use lambda expressions to define anonymous methods.  The advantage of using a lambda expression is that it allows the Action delegate to directly use variables in the calling scope.  This eliminates the need to make separate Task classes for Action<T>, Action<T1,T2>, and all of the other Action<…> delegate types.  As an example, suppose we wanted to make a Task to handle the “Show Splash” task from our earlier decomposition.  Even if this task required parameters, such as a message to display, we could still use an Action delegate specified via a lambda:

// Store this as a local variable
string messageForSplashScreen = GetSplashScreenMessage();

// Create our task
Task showSplashTask = new Task(() =>
{
    // We can use variables in our outer scope,
    // as well as methods scoped to our class!
    this.DisplaySplashScreen(messageForSplashScreen);
});

This provides a huge amount of flexibility.  We can use this single form of task for any task which performs an operation, provided the only information we need to track is whether the task has completed successfully or not.  This leads to my first observation:

Use a Task with a System.Action delegate for any task for which no result is generated.

This observation leads to an obvious corollary: we also need a way to define a task which generates a result.  The Task Parallel Library provides this via the Task<TResult> class.

Task<TResult> subclasses the standard Task class, providing one additional feature – the ability to return a value back to the user of the task.  This is done by switching from providing an Action delegate to providing a Func<TResult> delegate.  If we decompose our problem, and we realize we have one task where its result is required by a future operation, this can be handled via Task<TResult>.  For example, suppose we want to make a task for our “Check for Update” task; we could do:

Task<bool> checkForUpdateTask = new Task<bool>(() =>
{
    return this.CheckWebsiteForUpdate();
});

Later, we would start this task, and perform some other work.
At any point in the future, we could get the value from the Task<TResult>.Result property, which will cause our thread to block until the task has finished processing:

// This uses the Task<bool> checkForUpdateTask generated above...

// Start the task, typically on a background thread
checkForUpdateTask.Start();

// Do some other work on our current thread
this.DoSomeWork();

// Discover, from our background task, whether an update is available
// This will block until our task completes
bool updateAvailable = checkForUpdateTask.Result;

This leads me to my second observation:

Use a Task<TResult> with a System.Func<TResult> delegate for any task which generates a result.

Task and Task<TResult> provide a much cleaner alternative to the previous Asynchronous Programming design patterns in the .NET framework.  Instead of trying to implement IAsyncResult, and providing BeginXXX() and EndXXX() methods, implementing an asynchronous programming API can be as simple as creating a method that returns a Task or Task<TResult>.  The client side of the pattern is also dramatically simplified – the client can call a method, then either choose to call task.Wait() or use task.Result when it needs to wait for the operation’s completion.

While this provides a much cleaner model for future APIs, there is quite a bit of infrastructure built around the current Asynchronous Programming design patterns.  In order to provide a model to work with existing APIs, two other forms of Task exist.  There is a constructor for Task which takes an Action<Object> and a state parameter.  In addition, there is a constructor for creating a Task<TResult> which takes a Func<Object, TResult> as well as a state parameter.  When using these constructors, the state parameter is stored in the Task.AsyncState property.  While these two overloads exist, and are usable directly, I strongly recommend avoiding them for new development.  The two forms of Task which take an object state parameter exist primarily for interoperability with traditional .NET Asynchronous Programming methodologies.  Using lambda expressions to capture variables from the scope of the creator is a much cleaner approach than using the untyped state parameters, since lambda expressions provide full type safety without introducing new variables.
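To make the interop-oriented overloads concrete, here is a small sketch of mine (not from the original article) showing the Action<Object> constructor and the Task.AsyncState property in use:

using System;
using System.Threading.Tasks;

class StateParameterExample
{
    static void Main()
    {
        // State is passed as an untyped object rather than captured
        // by a lambda, so a cast is required inside the delegate.
        Task task = new Task(state =>
        {
            string message = (string)state; // no compile-time type safety
            Console.WriteLine(message);
        }, "passed via the state parameter");

        task.Start();
        task.Wait();

        // The same object is also exposed on the task itself:
        Console.WriteLine(task.AsyncState);
    }
}

Contrast this with the earlier lambda-based examples, where the captured messageForSplashScreen variable kept its static type and no casting was required.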

    Read the article

< Previous Page | 130 131 132 133 134 135 136 137 138 139 140 141  | Next Page >