Search Results

Search found 14319 results on 573 pages for 'biztalk 2010'.

Page 350/573

  • first of all nice work, how to redirect the URL of old modified directories

    - by kath
    I really appreciate your hard work. After searching a lot of the web I cannot find the answer, so if you get time please help. The problem: it is a website for classified ads, and I edited a category name, but the old name is still showing up in the Google index and even in the browser URL. Is there any way to bypass it or redirect it to the new one? I tried .htaccess but can't get the result. Here are both URLs. Before modification: http://adsbuz.com/vehicles-cars/other-vehicles/selling-my-2010-toyota-sequioa-19500-9585.htm After editing the category name: http://adsbuz.com/vehicles-cars-for-sale/other-vehicles/selling-my-2010-toyota-sequioa-19500-9585.htm (the category name was "vehicles-cars" before and "vehicles-cars-for-sale" after). As you can see, both URLs open, which is not good for SEO. Is there any way that when someone opens the wrong URL, the page automatically opens with the correct URL only, just like on your site? Consider me new to this market; I would like a little help here (the website is in PHP). Thanks. I really appreciate your quick response. Thanks, kath

    Read the article

  • Unable to list contents/remove directory (linux ext3)

    - by RedKrieg
    System is CentOS5 x86_64, completely up to date. I've got a folder that can't be listed (ls just hangs, eating memory until it is killed). The directory size is nearly 500k: root@server [/home/user/public_html/domain.com/wp-content/uploads/2010/03]# stat . File: `.' Size: 458752 Blocks: 904 IO Block: 4096 directory Device: 812h/2066d Inode: 44499071 Links: 2 Access: (0755/drwxr-xr-x) Uid: ( 3292/ user) Gid: ( 3287/ user) Access: 2012-06-29 17:31:47.000000000 -0400 Modify: 2012-10-23 14:41:58.000000000 -0400 Change: 2012-10-23 14:41:58.000000000 -0400 I can see the file names if I use ls -1f, but it just repeats the same 48 files ad infinitum, all of which have non-ascii characters somewhere in the file name: La-critic\363-al-servicio-la-privacidad-300x160.jpg When I try to access the files (say to copy them or remove them) I get messages like the following: lstat("/home/user/public_html/domain.com/wp-content/uploads/2010/03/Sebast\355an-Pi\361era-el-balc\363n-150x120.jpg", 0x7fff364c52c0) = -1 ENOENT (No such file or directory) I tried altering the code found on this man page and modified the code to call unlink for each file. I get the same ENOENT error from the unlink call: unlink("/home/user/public_html/domain.com/wp-content/uploads/2010/03/Marca-naci\363n-Madrid-150x120.jpg") = -1 ENOENT (No such file or directory) I also straced a "touch", grabbed the syscalls it makes and replicated them, then tried to unlink the resulting file by name. This works fine, but the folder still contains an entry by the same name after the operation completes and the program runs for an arbitrarily long time (strace output ended up at 20GB after 5 minutes and I stopped the process). I'm stumped on this one, I'd really prefer not to have to take this production machine (hundreds of customers) offline to fsck the filesystem, but I'm leaning toward that being the only option at this point. If anyone's had success using other methods for removing files (by inode number, I can get those with the getdents code) I'd love to hear them. (Yes, I've tried find . -inum <inode> -exec rm -fv {} \; and it still has the problem with unlink returning ENOENT) For those interested, here's the diff between that man page's code and mine. I didn't bother with error checking on mallocs, etc because I'm lazy and this is a one-off: root@server [~]# diff -u listdir-orig.c listdir.c --- listdir-orig.c 2012-10-23 15:10:02.000000000 -0400 +++ listdir.c 2012-10-23 14:59:47.000000000 -0400 @@ -6,6 +6,7 @@ #include <stdlib.h> #include <sys/stat.h> #include <sys/syscall.h> +#include <string.h> #define handle_error(msg) \ do { perror(msg); exit(EXIT_FAILURE); } while (0) @@ -17,7 +18,7 @@ char d_name[]; }; -#define BUF_SIZE 1024 +#define BUF_SIZE 1024*1024*5 int main(int argc, char *argv[]) { @@ -26,11 +27,16 @@ struct linux_dirent *d; int bpos; char d_type; + int deleted; + int file_descriptor; fd = open(argc > 1 ? argv[1] : ".", O_RDONLY | O_DIRECTORY); if (fd == -1) handle_error("open"); + char* full_path; + char* fd_path; + for ( ; ; ) { nread = syscall(SYS_getdents, fd, buf, BUF_SIZE); if (nread == -1) @@ -55,7 +61,24 @@ printf("%4d %10lld %s\n", d->d_reclen, (long long) d->d_off, (char *) d->d_name); bpos += d->d_reclen; + if ( d_type == DT_REG ) + { + full_path = malloc(strlen((char *) d->d_name) + strlen(argv[1]) + 2); //One for the /, one for the \0 + strcpy(full_path, argv[1]); + strcat(full_path, (char *) d->d_name); + + //We're going to try to "touch" the file. 
+ //file_descriptor = open(full_path, O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666); + //fd_path = malloc(32); //Lazy, only really needs 16 + //sprintf(fd_path, "/proc/self/fd/%d", file_descriptor); + //utimes(fd_path, NULL); + //close(file_descriptor); + deleted = unlink(full_path); + if ( deleted == -1 ) printf("Error unlinking file\n"); + break; //Break on first try + } } + break; //Break on first try } exit(EXIT_SUCCESS);

    Read the article

  • Curl / Twitter OAuth problem

    - by Darxval
    Does anyone see a problem with the following Curl call / how the Oauth request is built? (i am trying to get a correctly setup request so i can finish my app) So i am calling the following CURL call: C:\>curl -v -k --data-urlencode "status=Testing2" -H "Authorization: OAuth realm='', oauth_nonce=1276107867blah, oauth_timestamp=1276107867, oauth_consumer_key=yJDLH7BDdVi1OKIINSV7Q, oauth_signature_method=HMAC-SHA1, oauth_version=1.0, oauth_signature=NWU4MDdlNjk0OGIxYWQ1YTkyNmU5YjU1NGYyOTczMmU5ZDg5 YWNkNA==, staus=Testing2 " http://twitter.com/statuses/update.xml?status=Testing2 and i recieve this: * About to connect() to twitter.com port 80 (#0) * Trying 168.143.162.68... connected * Connected to twitter.com (168.143.162.68) port 80 (#0) > POST /statuses/update.xml?status=Testing2 HTTP/1.1 > User-Agent: curl/7.20.1 (i386-pc-win32) libcurl/7.20.1 OpenSSL/0.9.8n zlib/1.2.5 libidn/1.18 libssh2/1.2.5 > Host: twitter.com > Accept: */* > Authorization: OAuth realm='', oauth_nonce=1276106370blah, oauth_timestamp=1276106370, oauth_consumer_key=yJDLH7BDdVi1OKIINSV7Q, oauth_signature_method=HMAC-SHA1, oauth_version=1.0, oauth_signature=MjQzNDA1MGU4NGRmMWVjMzUwZmQ4YzE5NzMzY2I1ZDJlOTRkNmQ2Zg==, staus=Testing2 > Content-Length: 15 > Content-Type: application/x-www-form-urlencoded > < HTTP/1.1 401 Unauthorized < Date: Wed, 09 Jun 2010 18:00:22 GMT < Server: hi < Status: 401 Unauthorized < WWW-Authenticate: Basic realm="Twitter API" < X-Runtime: 0.00548 < Content-Type: application/xml; charset=utf-8 < Content-Length: 164 < Cache-Control: no-cache, max-age=1800 < Set-Cookie: k=209.234.229.21.1276106420885412; path=/; expires=Wed, 16-Jun-10 18:00:20 GMT; domain=.twitter.com < Set-Cookie: guest_id=127610642214871948; path=/; expires=Fri, 09 Jul 2010 18:00:22 GMT < Set-Cookie: _twitter_sess=BAh7CDoPY3JlYXRlZF9hdGwrCIm33h0pAToHaWQiJTkyMjllODE0NTdiYWE1%250AMWU1MzBmNjgwMTFiMDhkYjdlIgpmbGFzaElDOidBY3Rpb25Db250cm9sbGVy%250AOjpGbGFzaDo6Rmxhc2hIYXNoewAGOgpAdXNlZHsA--8ebb3c62d461d28f8fda7b8adab642af66969f7e; domain=.twitter.com; path=/ < Expires: Wed, 09 Jun 2010 18:30:20 GMT < Vary: Accept-Encoding < Connection: close < <?xml version="1.0" encoding="UTF-8"?> <hash> <request>/statuses/update.xml?status=Testing2</request> <error>Could not authenticate with OAuth.</error> </hash> * Closing connection #0 my Parameters are setup like so: var parameters = [encodeURIComponent("status="+status),encodeURIComponent("oauth_token="+ac_token),encodeURIComponent("oauth_consumer_key="+"yJDLH7BDdVi1OKIINSV7Q"),encodeURIComponent("oauth_nonce="+nonce,"oauth_signature_method=HMAC-SHA1"),encodeURIComponent("oauth_timestamp="+timestamp),encodeURIComponent("oauth_version=1.0")] var join = parameters.join("&"); var eparamjoin =encodeURIComponent(join); The key is like so: var key=con_secret+"&"+ac_secret; Signature base string is: var signaturebs = "POST&"+encodeURIComponent(url)+"&"+eparamjoin; giving this: POST&http%3A%2F%2Ftwitter.com%2Fstatuses%2Fupdate.xml&status%253DTesting2%26oauth_token%253D142715285-yi2ch324S3zfyKyJby6WDUZOhCsiQuKNUtc3nAGe%26oauth_consumer_key%253DyJDLH7BDdVi1OKIINSV7Q%26oauth_nonce%253D1276107867blah%26oauth_timestamp%253D1276107867%26oauth_version%253D1.0 and signature built like so: var hmac = Crypto.HMAC(Crypto.SHA1, signaturebs,key ); var signature=Base64.encode(hmac); making the signature: NWU4MDdlNjk0OGIxYWQ1YTkyNmU5YjU1NGYyOTczMmU5ZDg5YWNkNA== Any help would be appreciated thank you!
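
    For comparison, here is a hedged C# sketch of OAuth 1.0 HMAC-SHA1 signing (not the poster's JavaScript; the token and secret values are placeholders). Two details it illustrates: the oauth_token parameter has to be part of the sorted, individually percent-encoded parameter string, and the signature is the Base64 of the raw HMAC-SHA1 bytes, not of a hex-encoded digest.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Security.Cryptography;
    using System.Text;

    class OAuthSigningSketch
    {
        static string Encode(string s) { return Uri.EscapeDataString(s); }

        static void Main()
        {
            var parameters = new SortedDictionary<string, string>
            {
                { "oauth_consumer_key", "yJDLH7BDdVi1OKIINSV7Q" },
                { "oauth_nonce", "1276107867blah" },
                { "oauth_signature_method", "HMAC-SHA1" },
                { "oauth_timestamp", "1276107867" },
                { "oauth_token", "ACCESS_TOKEN" },          // placeholder value
                { "oauth_version", "1.0" },
                { "status", "Testing2" }
            };

            // Each key and value is percent-encoded, pairs are sorted and joined with '&'.
            string paramString = string.Join("&",
                parameters.Select(p => Encode(p.Key) + "=" + Encode(p.Value)).ToArray());

            string baseString = "POST&"
                + Encode("http://twitter.com/statuses/update.xml")
                + "&" + Encode(paramString);

            string signingKey = Encode("CONSUMER_SECRET") + "&" + Encode("ACCESS_SECRET");

            using (var hmac = new HMACSHA1(Encoding.ASCII.GetBytes(signingKey)))
            {
                // Base64 of the raw hash bytes; percent-encode it when placing it in the header.
                string signature = Convert.ToBase64String(
                    hmac.ComputeHash(Encoding.ASCII.GetBytes(baseString)));
                Console.WriteLine(signature);
            }
        }
    }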

    Read the article

  • WPF Binding Error reported when Binding appears to work fine

    - by Noldorin
    I am trying to create a custom TabItem template/style in my WPF 4.0 application (using VS 2010 Pro RTM), but inspite of everything seeming to work correctly, I am noticing a binding error in the trace window. The resource dictionary XAML I use to style the TabItems of a TabControl is given in full here. (Just create a simple TabControl with several items and apply the given ResourceDictionary to test it out.) Specifically, the error is occurring due to the following line (discovered through trial and error testing, since Visual Studio isn't actually reporting it at design tim <TranslateTransform X="{Binding ActualWidth, ElementName=leftSideBorderPath}"/> The full error given in the trace (Ouput window) is the following: System.Windows.Data Error: 2 : Cannot find governing FrameworkElement or FrameworkContentElement for target element. BindingExpression:Path=ActualWidth; DataItem=null; target element is 'TranslateTransform' (HashCode=35345840); target property is 'X' (type 'Double') The error occurs on load and is repeated 5 times then (note that I have 3 tab items in my example). It also occurs consistently and repeatedly whenever for the Window is resized, for example - filling the Output window. Perhaps every time the TabItem layout is updated? And again, though it is not reported, the error very much seems to be due to the fact that I am binding to any element at all, not specifically leftSideBorderPath or the the ActualWidth propertry. For example, changing this line to the following fixes things. <TranslateTransform X="25"/> Unfortunately, hard-coding the value isn't really an option. This issue seems very strange to me in that the binding does appear to be giving the correct results. (Inspecting the X value of the TranslateTransform at runtime clearly shows the correct bound value, and the ClipGeometry when viewed is exactly what it hsould be.) Neither Visual Studio nor WPF seems to be giving me any more information on the cause of the error perhaps (setting PresentationTraceSources.TraceLevel to High doesn't help), yet the fact that things are working despite the error being reported inclines me to think that this is some fringe-case WPF bug. As a side note, the Visual Studio WPF designer and XAML editor are giving me a problem with the following line: <PathGeometry Figures="{Binding Source={StaticResource TabSideFillFigures}}"/> Although WPF (at runtime) is perfectly happy binding Figures to the TabSideFillFigures string, with the Binding enforcing the use of the TypeConverter, the XAML editor and WPF designer are complaining. All the XAML code for the ControlTemplate is underlined and I get the following errors in the Error List: Error 9 '{DependencyProperty.UnsetValue}' is not a valid value for the 'System.Windows.Controls.Control.Template' property on a Setter. C:\Users\Alex\Documents\Visual Studio 2010\Projects\Ircsil\devel\Ircsil\MainWindow.xaml 1 1 Ircsil Error 10 Object reference not set to an instance of an object. C:\Users\Alex\Documents\Visual Studio 2010\Projects\Ircsil\devel\Ircsil\Skins\Default\MainSkin.xaml 58 17 Ircsil Again, to repeat, everything works perfectly well at runtime, which is what makes this particularly odd... Could someone perhaps shed some light on these issues, in particular the first (which seems to be a potential WPF bug), and the latter (which seems to be a Visual Studio bug). Any sort of feedback or suggestions would be much appreciated!
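
    If the bound value really is correct at runtime and only the repeated trace output is the problem, one hedged stopgap (it hides the noise rather than fixing the namescope lookup) is to lower the WPF data-binding trace level at application startup:

    using System.Diagnostics;          // PresentationTraceSources (WindowsBase)
    using System.Windows;

    public partial class App : Application
    {
        protected override void OnStartup(StartupEventArgs e)
        {
            base.OnStartup(e);
            // Caution: this suppresses genuine binding errors too, so only use it
            // once you are satisfied the binding behaves as intended.
            PresentationTraceSources.DataBindingSource.Switch.Level = SourceLevels.Critical;
        }
    }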

    Read the article

  • How to get the child elements' value, when the parent element contains a different number of child Elem

    - by Subhen
    Hi, I have the following XML Structure: <DIDL-Lite xmlns="urn:schemas-upnp-org:metadata-1-0/DIDL-Lite/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:upnp="urn:schemas-upnp-org:metadata-1-0/upnp/" xmlns:dlna="urn:schemas-dlna-org:metadata-1-0/"> <item id="1268" parentID="20" restricted="1"> <upnp:class>object.item.audioItem.musicTrack</upnp:class> <dc:title>Hey</dc:title> <dc:date>2010-03-17T12:12:26</dc:date> <upnp:albumArtURI dlna:profileID="PNG_TN" xmlns:dlna="urn:schemas-dlna-org:metadata-1-0">URL/albumart/22.png</upnp:albumArtURI> <upnp:icon>URL/albumart/22.png</upnp:icon> <dc:creator>land</dc:creator> <upnp:artist>sland</upnp:artist> <upnp:album>Change</upnp:album> <upnp:genre>Rock</upnp:genre> <res protocolInfo="http-get:*:audio/mpeg:DLNA.ORG_PN=MP3;DLNA.ORG_OP=01;DLNA.ORG_CI=0;DLNA.ORG_FLAGS=01700000000000000000000000000000" size="9527987" duration="0:03:58">URL/1268.mp3</res> <res protocolInfo="http-get:*:image/png:DLNA.ORG_PN=PNG_TN;DLNA.ORG_CI=01;DLNA.ORG_FLAGS=00f00000000000000000000000000000" colorDepth="24" resolution="160x160">URL/albumart/22.png</res> </item> <item id="1269" parentID="20" restricted="1"> <upnp:class>object.item.audioItem.musicTrack</upnp:class> <dc:title>Indian </dc:title> <dc:date>2010-03-17T12:06:32</dc:date> <upnp:albumArtURI dlna:profileID="PNG_TN" xmlns:dlna="urn:schemas-dlna-org:metadata-1-0">URL/albumart/13.png</upnp:albumArtURI> <upnp:icon>URL/albumart/13.png</upnp:icon> <dc:creator>MC</dc:creator> <upnp:artist>MC</upnp:artist> <upnp:album>manimal</upnp:album> <upnp:genre>Rap</upnp:genre> <res protocolInfo="http-get:*:audio/mpeg:DLNA.ORG_PN=MP3;DLNA.ORG_OP=01;DLNA.ORG_CI=0;DLNA.ORG_FLAGS=01700000000000000000000000000000" size="8166707" duration="0:03:24">URL/1269.mp3</res> <res protocolInfo="http-get:*:image/png:DLNA.ORG_PN=PNG_TN;DLNA.ORG_CI=01;DLNA.ORG_FLAGS=00f00000000000000000000000000000" colorDepth="24" resolution="160x160">URL/albumart/13.png</res> </item> <item id="1277" parentID="20" restricted="1"> <upnp:class>object.item.videoItem.movie</upnp:class> <dc:title>IronMan_TeaserTrailer-full_NEW.mpg</dc:title> <dc:date>2010-03-17T12:50:24</dc:date> <upnp:genre>Unknown</upnp:genre> <res protocolInfo="http-get:*:video/mpeg:DLNA.ORG_PN=(NULL);DLNA.ORG_OP=01;DLNA.ORG_CI=0;DLNA.ORG_FLAGS=01700000000000000000000000000000" size="98926592" resolution="1920x1080" duration="0:02:30">URL/1277.mpg</res> </item> </DIDL-Lite> Here in the last Elemnt Item , there are few missing elements like icon,albumArtURi ..etc. Now whhile Ity to access the Values by following LINQ to XML query, var vAudioData = from xAudioinfo in xResponse.Descendants(ns + "DIDL-Lite").Elements(ns + "item").Where(x=>!x.Elements(upnp+"album").l orderby ((string)xAudioinfo.Element(upnp + "artist")).Trim() select new RMSMedia { strAudioTitle = ((string)xAudioinfo.Element(dc + "title")).Trim(),//((string)xAudioinfo.Attribute("audioAlbumcoverImage")).Trim()=="" ? ((string)xAudioinfo.Element("Song")).Trim():"" }; It show object reference is not set to reference of object. I do undersrand this is because of missing elements, is there any way I can pre-check if the elements exist , or check the number of elemnts inside item or any other way. Any help is deeply appreciated. Thanks, Subhen
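
    One hedged way to pre-check, reusing the question's ns/dc/upnp namespace variables and RMSMedia type: cast each XElement to string (a missing element then yields null instead of throwing) and guard before calling Trim, roughly like this:

    var vAudioData =
        from xAudioinfo in xResponse.Descendants(ns + "DIDL-Lite").Elements(ns + "item")
        let title = (string)xAudioinfo.Element(dc + "title")     // null if the element is absent
        let artist = (string)xAudioinfo.Element(upnp + "artist")
        where title != null                                      // skip items without a title
        orderby (artist ?? string.Empty).Trim()
        select new RMSMedia
        {
            strAudioTitle = title.Trim()
        };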

    Read the article

  • Trac FastCGI + Python on Dreamhost leaves zombie Python running

    - by Katsuke
    Hello there, Recently I installed trac in one of my dreamhost domains. I followed the instructions of http://trac.mlalonde.net/wiki/CreamyTrac and everything worked perfectly. At least i thought that was the case. After a few days, i started to notice that i was getting random 500 pages. I quickly checked the error log, and found a bunch of: [Fri Apr 16 14:35:34 2010] [error] [client *.*.*.*] Premature end of script headers: dispatch.fcgi [Fri Apr 16 14:35:54 2010] [error] [client *.*.*.*] Premature end of script headers: dispatch.fcgi, referer: http://www.trac.****.com/login [Fri Apr 16 16:05:58 2010] [error] [client *.*.*.*] Premature end of script headers: dispatch.fcgi, referer: http://www.trac.****.com/timeline The trac instance has very low traffic so it is possible that this might have been triggered faster if there was more use of it. So next step i went to look at top and am amazed by what i see: (this is NOT exactly what i saw, i just reproduced this from memory) 23730 rl_inst 20 0 44516 15m 4012 S 0.0 0.4 0:00.17 python2.4 23731 rl_inst 20 0 44616 15m 4012 S 0.0 0.4 0:03.17 python2.4 23732 rl_inst 20 0 44116 15m 4012 S 0.0 0.4 0:01.17 python2.4 23733 rl_inst 20 0 44826 15m 4012 S 0.0 0.4 0:04.17 python2.4 23734 rl_inst 20 0 44216 15m 4012 S 0.0 0.4 0:02.17 python2.4 23735 rl_inst 20 0 44416 15m 4012 S 0.0 0.4 0:01.17 python2.4 I opened the trac site again and that changed to this: 23730 rl_inst 20 0 44516 15m 4012 S 0.0 0.4 0:00.17 python2.4 23731 rl_inst 20 0 44616 15m 4012 S 0.0 0.4 0:03.17 python2.4 23732 rl_inst 20 0 44116 15m 4012 S 0.0 0.4 0:01.17 python2.4 23733 rl_inst 20 0 44826 15m 4012 S 0.0 0.4 0:04.17 python2.4 23734 rl_inst 20 0 44216 15m 4012 S 0.0 0.4 0:02.17 python2.4 23735 rl_inst 20 0 44416 15m 4012 S 0.0 0.4 0:01.17 python2.4 28378 rl_inst 20 0 2608 1208 1008 S 0.0 0.0 0:00.00 dispatch.fcgi 28382 rl_inst 20 0 44248 15m 4012 S 0.0 0.4 0:02.19 python2.4 So the dispatch.fcgi was being called correctly and was spawning correctly other python2.4 process. After the IDLE time passed this is the results: 23730 rl_inst 20 0 44516 15m 4012 S 0.0 0.4 0:00.17 python2.4 23731 rl_inst 20 0 44616 15m 4012 S 0.0 0.4 0:03.17 python2.4 23732 rl_inst 20 0 44116 15m 4012 S 0.0 0.4 0:01.17 python2.4 23733 rl_inst 20 0 44826 15m 4012 S 0.0 0.4 0:04.17 python2.4 23734 rl_inst 20 0 44216 15m 4012 S 0.0 0.4 0:02.17 python2.4 23735 rl_inst 20 0 44416 15m 4012 S 0.0 0.4 0:01.17 python2.4 28382 rl_inst 20 0 44248 15m 4012 S 0.0 0.4 0:02.19 python2.4 dispatch.fcgi was gone but the corresponding python2.4 was still there o_O. I had started with 5 sleeping processes of python and after idle time i ended up with 6. I repeated this and found that it kept spawning more and more python2.4, this was indeed what was causing my 500s indirectly. Let me explain my findings. Am on a shared hosting so my processes get killed if they are way too many. So everytime i opened trac 2 new processes spawned and one remained, to the point that dreamhost was killing a random process when i reached my limit. Sometimes killing the python2.4 that actually was rendering the current page. Hence header premature ending, python is gone the .fcgi doesnt know what to do and throws an 500. I found a dirty solution to this. I changed my dispatch.fcgi to contain a line that killed any currently running python2.4 process and then spawn a new one. Since then i dont get any rogues what so ever. But i dont think this is the best solution, calling killall in every fcgi just seems wrong. 
Has anyone run into this issue and found a cleaner solution? Is there anything that I have overlooked?

    Read the article

  • Logback and third-party code writing to stdout: how to stop them getting interleaved.

    - by David Roussel
    First some background. I have a batch-type java process run from a DOS batch script. All the java logging goes to stdout, and the batch script redirects the stdout to a file. (This is good for me because I can ECHO from the script and it gets into the log file, so I can see all the java JVM command line args, which is great for debugging.) I may not I use slf4j API, and for the backend I used to use log4j, but recently switched to logback-classic. Although all my application code uses slf4j, I have a third party library that does it's own logging (not using a standard API) which also gets written to stdout. The problem is that sometimes log lines get mixed up and don't cleanly appear on separate lines. Here is an example of some messed up output: 2010-05-28 18:00:44.783 [thread-1 ] INFO CreditCorrelationElementBuilderImpl - Bump parameters exist for scenario, now attempting bumping. [indexDisplayName=STANDARD_S1_v300] 2010-05-28 18:01:43.517 [thread-1 ] INFO CreditCorrelationElementBuilderImpl - Found adjusted point in data, now applying bump. [point=0.144040000000000] 2010-05-28 18:01:58.642 [thread-1 ] DEBUG com.company.request.Request - Generated request for [dealName=XXX_20050225_01[5],dealType=GENERIC_XXX,correlationType=2,copulaType=1] in 73.8 s, Simon Stopwatch: [sys1.batchpricer.reqgen.gen INHERIT] total 1049 s, counter 24, max 74.1 s, min 212 ms 2010-05-28 18:05/28/10 18:02:20.236 INFO: [ServiceEvent] SubmittedTask:BC-STRESS_04_FZBC-2010-05-21-545024448189310126-23 01:58.658 [req-writer-2b ] INFO .c.g.r.o.OptionalFileDocumentOutput - Writing request XML to \\filserver\dir\file1.xml - write time: 21.4 ms - Simon Stopwatch: [sys1.batchpricer.reqgen.writeinputfile INHERIT] total 905 ms, counter 24, max 109 ms, min 10.8 ms 2010-05-28 18:02:33.626 [ResponseCallbacks-1: DriverJobSpace$TakeJobRunner$1] ERROR c.c.s.s.D.CalculatorCallback - Id:23 no deal found !! 2010-0505/28/10 18:02:50.267 INFO: [ServiceEvent] CompletedTask:BC-STRESS_04_FZBC-2010-05-21-545024448189310126-23:Total:24 Now comparing back to older log files, it seems the problem didn't occur when using log4j as the logging backend. So logback must be doing something different. The problem seems to be that although PrintStream.write(byte buf[], int off, int len) is synchronized, however I can see in ch.qos.logback.core.joran.spi.ConsoleTarget that System.out.write(int b) is the only write method called. So inbetween logback outputting each byte, the thirdparty library is managing to write a whole string to the stdout. (Not only is this cause me a problem, but it must also be a little inefficient?) Is there any other fix to this interleaving problem than patching the code to ConsoleTarget so it implments the other write methods? Any nice work arounds. Or should I just file a bug report? Here is my logback.xml: <configuration> <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender"> <encoder> <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%-16thread] %-5level %-35.35logger{30} - %msg%n</pattern> </encoder> </appender> <root level="DEBUG"> <appender-ref ref="STDOUT" /> </root> </configuration> I'm using logback 0.9.20 with java 1.6.0_07.

    Read the article

  • null reference exception in the code

    - by LifeH2O
    I am getting NullReferenceException error on "_attr.Append(xmlNode.Attributes["name"]);". namespace SMAS { class Profiles { private XmlTextReader _profReader; private XmlDocument _profDoc; private const string Url = "http://localhost/teamprofiles.xml"; private const string XPath = "/teams/team-profile"; public XmlNodeList Teams{ get; private set; } private XmlAttributeCollection _attr; public ArrayList Team { get; private set; } public void GetTeams() { _profReader = new XmlTextReader(Url); _profDoc = new XmlDocument(); _profDoc.Load(_profReader); Teams = _profDoc.SelectNodes(XPath); foreach (XmlNode xmlNode in Teams) { _attr.Append(xmlNode.Attributes["name"]); } } } } the teamprofiles.xml file looks like <teams> <team-profile name="Australia"> <stats type="Test"> <span>1877-2010</span> <matches>721</matches> <won>339</won> <lost>186</lost> <tied>2</tied> <draw>194</draw> <percentage>47.01</percentage> </stats> <stats type="Twenty20"> <span>2005-2010</span> <matches>32</matches> <won>18</won> <lost>12</lost> <tied>1</tied> <draw>1</draw> <percentage>59.67</percentage> </stats> </team-profile> <team-profile name="Bangladesh"> <stats type="Test"> <span>2000-2010</span> <matches>66</matches> <won>3</won> <lost>57</lost> <tied>0</tied> <draw>6</draw> <percentage>4.54</percentage> </stats> </team-profile> </teams> I am trying to extract names of all teams in an ArrayList. Then i'll extract all stats of all teams and display them in my application. Can you please help me about that null reference exception?
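
    A likely cause visible in the snippet: _attr is declared but never assigned, so _attr.Append(...) dereferences null (and XmlAttributeCollection has no public constructor anyway). A hedged alternative sketch that collects the team names into a list instead:

    using System.Collections.Generic;
    using System.Xml;

    class TeamProfileReader
    {
        public List<string> GetTeamNames(string url)
        {
            var doc = new XmlDocument();
            doc.Load(url);                       // e.g. "http://localhost/teamprofiles.xml"

            var names = new List<string>();
            foreach (XmlNode node in doc.SelectNodes("/teams/team-profile"))
            {
                XmlAttribute name = node.Attributes["name"];
                if (name != null)                // guard against a profile without a name attribute
                    names.Add(name.Value);
            }
            return names;
        }
    }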

    Read the article

  • Why does sending post data with WebRequest take so long?

    - by Paramiliar
    I am currently creating a C# application to tie into a php / MySQL online system. The application needs to send post data to scripts and get the response. When I send the following data username=test&password=test I get the following responses... Starting request at 22/04/2010 12:15:42 Finished creating request : took 00:00:00.0570057 Transmitting data at 22/04/2010 12:15:42 Transmitted the data : took 00:00:06.9316931 <<-- Getting the response at 22/04/2010 12:15:49 Getting response 00:00:00.0360036 Finished response 00:00:00.0360036 Entire call took 00:00:07.0247024 As you can see it is taking 6 seconds to actually send the data to the script, I have done further testing bye sending data from telnet and by sending post data from a local file to the url and they dont even take a second so this is not a problem with the hosted script on the site. Why is it taking 6 seconds to transmit the data when it is two simple strings? I use a custom class to send the data class httppostdata { WebRequest request; WebResponse response; public string senddata(string url, string postdata) { var start = DateTime.Now; Console.WriteLine("Starting request at " + start.ToString()); // create the request to the url passed in the paramaters request = (WebRequest)WebRequest.Create(url); // set the method to post request.Method = "POST"; // set the content type and the content length request.ContentType = "application/x-www-form-urlencoded"; request.ContentLength = postdata.Length; // convert the post data into a byte array byte[] byteData = Encoding.UTF8.GetBytes(postdata); var end1 = DateTime.Now; Console.WriteLine("Finished creating request : took " + (end1 - start)); var start2 = DateTime.Now; Console.WriteLine("Transmitting data at " + start2.ToString()); // get the request stream and write the data to it Stream dataStream = request.GetRequestStream(); dataStream.Write(byteData, 0, byteData.Length); dataStream.Close(); var end2 = DateTime.Now; Console.WriteLine("Transmitted the data : took " + (end2 - start2)); // get the response var start3 = DateTime.Now; Console.WriteLine("Getting the response at " + start3.ToString()); response = request.GetResponse(); //Console.WriteLine(((WebResponse)response).StatusDescription); dataStream = response.GetResponseStream(); StreamReader reader = new StreamReader(dataStream); var end3 = DateTime.Now; Console.WriteLine("Getting response " + (end3 - start3)); // read the response string serverresponse = reader.ReadToEnd(); var end3a = DateTime.Now; Console.WriteLine("Finished response " + (end3a - start3)); Console.WriteLine("Entire call took " + (end3a - start)); //Console.WriteLine(serverresponse); reader.Close(); dataStream.Close(); response.Close(); return serverresponse; } } And to call it I use private void btnLogin_Click(object sender, EventArgs e) { // string postdata; if (txtUsername.Text.Length < 3 || txtPassword.Text.Length < 3) { MessageBox.Show("Missing your username or password."); } else { string postdata = "username=" + txtUsername.Text + "&password=" + txtPassword.Text; httppostdata myPost = new httppostdata(); string response = myPost.senddata("http://www.domainname.com/scriptname.php", postdata); MessageBox.Show(response); } }
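
    Two hedged things worth ruling out, since the stall happens while getting the request stream: automatic proxy discovery and the Expect: 100-continue handshake, both of which HttpWebRequest performs by default. The sketch below (reusing the url and postdata variables from the question; it needs System.IO, System.Net and System.Text) switches both off as a diagnostic experiment, not a guaranteed fix:

    var request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = "POST";
    request.ContentType = "application/x-www-form-urlencoded";
    request.Proxy = null;                               // skip automatic proxy discovery
    request.ServicePoint.Expect100Continue = false;     // send the body without waiting for 100-continue

    byte[] byteData = Encoding.UTF8.GetBytes(postdata);
    request.ContentLength = byteData.Length;            // byte length, not string length

    using (Stream dataStream = request.GetRequestStream())
    {
        dataStream.Write(byteData, 0, byteData.Length);
    }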

    Read the article

  • Is it possible to create the following XML using XDocument and append the relevant portion in subseq

    - by Newbie
    <?xml version='1.0' encoding='UTF-8'?> <StockMarket> <StockDate Day = "02" Month="06" Year="2010"> <Stock> <Symbol>ABC</Symbol> <Amount>110.45</Amount> </Stock> <Stock> <Symbol>XYZ</Symbol> <Amount>366.25</Amount> </Stock> </StockDate> <StockDate Day = "03" Month="06" Year="2010"> <Stock> <Symbol>ABC</Symbol> <Amount>110.35</Amount> </Stock> <Stock> <Symbol>XYZ</Symbol> <Amount>369.70</Amount> </Stock> </StockDate> </StockMarket> My approach so far is XDocument doc = new XDocument( new XElement("StockMarket", new XElement("StockDate", new XAttribute("Day", "02"),new XAttribute("Month","06"),new XAttribute("Year","2010")), new XElement("Stock", new XElement("Symbol", "ABC"), new XElement("Amount", "110.45")) ) ); Since I am new to Linq to XML, I am presently struggling a lot and henceforth seeking for help. What I need is to generate the above XML programatically(I mean to say that some kind of looping...) The values will be passed from textbox and hence will be filled at runtime. At present the one which I have done is a kind of static one. This entire stuff has to be done at runtime. One more problem that I am facing is at the time of saving the record for the second time. Suppose first time I executed the code and saved it(Using the program I have done). Next time when I will try to save then only <StockDate Day = "xx" Month="xx" Year="xxxx"> <Stock> <Symbol>ABC</Symbol> <Amount>110.45</Amount> </Stock> <Stock> <Symbol>XYZ</Symbol> <Amount>366.25</Amount> </Stock> </StockDate> should get save(or better appended) and not <StockMarket> .... </StockMarket>.. Because that will be created only at the first time when the XML will be generated or created. I hope I am able to convey my problem properly. If you people is having difficulty in understanding my situation, kindly let me know. Using C#3.0 . Thanks
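
    A hedged sketch of one way to handle both parts (the method and variable names are made up for illustration): create the StockMarket root only on the first save, otherwise load the existing file and append, with the day/month/year and symbol/amount values passed in from the textboxes at runtime.

    using System.IO;
    using System.Linq;
    using System.Xml.Linq;

    static class StockXmlWriter
    {
        public static void SaveStock(string path, string day, string month, string year,
                                     string symbol, string amount)
        {
            // Create the document on the first save; afterwards load and append to it.
            XDocument doc = File.Exists(path)
                ? XDocument.Load(path)
                : new XDocument(new XElement("StockMarket"));

            // Reuse the StockDate element for this date if it is already present.
            XElement stockDate = doc.Root.Elements("StockDate")
                .FirstOrDefault(e => (string)e.Attribute("Day") == day
                                  && (string)e.Attribute("Month") == month
                                  && (string)e.Attribute("Year") == year);
            if (stockDate == null)
            {
                stockDate = new XElement("StockDate",
                    new XAttribute("Day", day),
                    new XAttribute("Month", month),
                    new XAttribute("Year", year));
                doc.Root.Add(stockDate);
            }

            stockDate.Add(new XElement("Stock",
                new XElement("Symbol", symbol),
                new XElement("Amount", amount)));

            doc.Save(path);
        }
    }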

    Read the article

  • Need help with regex parsing (in perl)

    - by Charlie
    Hi all, need some help parsing an html file in perl. I used the LWP module to retrieve a webpage into $_ with $/ undefined so there are no newline issues. Then I'm trying to find all strings matching a pattern. How do I do that? I know how to find 1 instance of it, but how do I match all instances? and what data structure would the results go to? a multi dimensional array? my text (excerpt) looks like the following: <TR> <TD BGCOLOR=EEEEEE><A HREF="/program.cgi?pid=1233"><FONT FACE="ARIAL,HELVETICA,SANS-SERIF" SIZE=2>Title 1</A></FONT></TD> <TD BGCOLOR=EEEEEE nowrap><FONT FACE="ARIAL,HELVETICA" SIZE=2>Jun 27 2010 3:00PM</FONT></TD> <TD BGCOLOR=EEEEEE>&nbsp;</TD> </TR> <TR><TD BGCOLOR=EEEEEE COLSPAN=3><IMG SRC="http://images.domain.com/images/spacer.gif" WIDTH=1 HEIGHT=2><BR></TD></TR> <TR><TD COLSPAN=3 BGCOLOR=999999><IMG SRC="http://images.domain.com/images/spacer.gif" HEIGHT=1 WIDTH=1></TD></TR> <TR><TD COLSPAN=3 ><IMG SRC="http://images.domain.com/images/spacer.gif" WIDTH=1 HEIGHT=2><BR></TD></TR> <TR> <TD><A HREF="/program.cgi?pid=1234"><FONT FACE="ARIAL,HELVETICA,SANS-SERIF" SIZE=2>Title 2</A></FONT></TD> <TD nowrap><FONT FACE="ARIAL,HELVETICA" SIZE=2>Jun 29 2010 7:00PM</FONT></TD> <TD>&nbsp;</TD> </TR> <TR><TD COLSPAN=3><IMG SRC="http://images.domain.com/images/spacer.gif" WIDTH=1 HEIGHT=2><BR></TD></TR> <TR><TD COLSPAN=3 BGCOLOR=999999><IMG SRC="http://images.domain.com/images/spacer.gif" HEIGHT=1 WIDTH=1></TD></TR> <TR><TD COLSPAN=3 BGCOLOR=EEEEEE><IMG SRC="http://images.domain.com/images/spacer.gif" WIDTH=1 HEIGHT=2><BR></TD></TR> <TR> <TD BGCOLOR=EEEEEE><A HREF="/program.cgi?pid=1235"><FONT FACE="ARIAL,HELVETICA,SANS-SERIF" SIZE=2>Title 3</A></FONT></TD> <TD BGCOLOR=EEEEEE nowrap><FONT FACE="ARIAL,HELVETICA" SIZE=2>Jul 3 2010 7:00PM</FONT></TD> <TD BGCOLOR=EEEEEE>&nbsp;</TD> </TR> I want to get the following into an array (or any structure): { ["/program.cgi?pdi=1233", "Title 1"], ["/program.cgi?pdi=1234", "Title 2"], ["/program.cgi?pdi=1235", "Title 3"] } Thanks

    Read the article

  • What is the recommended way of parsing an XML feed with multiple namespaces with ActionScript 3.0?

    - by dafko
    I have seen the following methods to be used in several online examples, but haven't found any documentation on the recommended way of parsing an XML feed. Method 1: protected function xmlResponseHandler(event:ResultEvent):void { var atom:Namespace = new Namespace("http://www.w3.org/2005/Atom"); var microsoftData:Namespace = new Namespace("http://schemas.microsoft.com/ado/2007/08/dataservices"); var microsoftMetadata:Namespace = new Namespace("http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"); var ac:ArrayCollection = new ArrayCollection(); var keyValuePairs:KeyValuePair; var propertyList:XMLList = (event.result as XML)..atom::entry.atom::content.microsoftMetadata::properties; for each (var properties:XML in propertyList) { keyValuePairs = new KeyValuePair(properties.microsoftData::FieldLocation, properties.microsoftData::Locationid); ac.addItem(keyValuePairs); } cb.dataProvider = ac; } Method 2: protected function xmlResponseHandler(event:ResultEvent):void { namespace atom = "http://www.w3.org/2005/Atom"; namespace d = "http://schemas.microsoft.com/ado/2007/08/dataservices"; namespace m = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"; use namespace d; use namespace m; use namespace atom; var ac:ArrayCollection = new ArrayCollection(); var keyValuePairs:KeyValuePair; var propertyList:XMLList = (event.result as XML)..entry.content.properties; for each (var properties:XML in propertyList) { keyValuePairs = new KeyValuePair(properties.FieldLocation, properties.Locationid); ac.addItem(keyValuePairs); } cb.dataProvider = ac; } Sample XML feed: <?xml version="1.0" encoding="iso-8859-1" standalone="yes"?> <feed xml:base="http://www.test.com/Test/my.svc/" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom"> <title type="text">Test_Locations</title> <id>http://www.test.com/test/my.svc/Test_Locations</id> <updated>2010-04-27T20:41:23Z</updated> <link rel="self" title="Test_Locations" href="Test_Locations" /> <entry> <id>1</id> <title type="text"></title> <updated>2010-04-27T20:41:23Z</updated> <author> <name /> </author> <link rel="edit" title="Test_Locations" href="http://www.test.com/id=1" /> <category term="MySQLModel.Test_Locations" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> <content type="application/xml"> <m:properties> <d:FieldLocation>Test Location</d:FieldLocation> <d:Locationid>test0129</d:Locationid> </m:properties> </content> </entry> <entry> <id>2</id> <title type="text"></title> <updated>2010-04-27T20:41:23Z</updated> <author> <name /> </author> <link rel="edit" title="Test_Locations" href="http://www.test.com/id=2" /> <category term="MySQLModel.Test_Locations" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> <content type="application/xml"> <m:properties> <d:FieldLocation>Yet Another Test Location</d:FieldLocation> <d:Locationid>test25</d:Locationid> </m:properties> </content> </entry> </feed>

    Read the article

  • Newbie jQuery question: Need slideshow to rotate automatically, not just when clicking navigation.

    - by Justin
    Hi everyone, This is my first post, so please forgive me if this question has been asked a million times. I'm a self professed jQuery hack and I need a little guidance on taking this script I found and adapting it to my needs. Anyway, what I'm making is an image slide show with navigation. The script I found does this, but does not automatically cycle through the images. I'm using jQuery 1.3.2 and would rather stick with that than using the newer library. I would also prefer to edit what is already here rather than start from scratch. Anywho, here's the html: <div id="myslide"> <div class="cover"> <div class="mystuff"> <img alt="&nbsp;" src="http://www.mfhc.com/wp/wp-content/uploads/2010/05/current_Denver-skyline.jpg" /> </div> <div class="mystuff"> <img alt="&nbsp;" src="http://www.mfhc.com/wp/wp-content/uploads/2010/05/pepsi_center-IS42RF-0D111C.jpg" /> </div> <div class="mystuff"> <img alt="&nbsp;" src="http://www.mfhc.com/wp/wp-content/uploads/2010/05/columbine-2689820469_D1104.jpg" /> </div> <div class="mystuff"> <img alt="&nbsp;" src="http://www.mfhc.com/wp/wp-content/uploads/2010/05/ist2_10460354-RedRocks.jpg" /> </div> </div> <!-- end of div cover --> </div> <!-- end of div myslide --> And here's the jQuery: <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"></script> <script type="text/JavaScript"> $(document).ready(function (){ $('#button a').click(function(){ var integer = $(this).attr('rel'); $('#myslide .cover').css({left:-820*(parseInt(integer)-1)}).hide().fadeIn(); /*----- Width of div #mystuff (here 820) ------ */ $('#button a').each(function(){ $(this).removeClass('active'); if($(this).hasClass('button'+integer)){ $(this).addClass('active')} }); }); }); </script> Here's where I got the script: http://www.webdeveloperjuice.com/2010/04/07/create-lightweight-jquery-fade-manual-slideshow/ Again, if this question is too basic for this site please let me know and possibly provide a reference link or two. Thanks a ton!

    Read the article

  • A basic T4 template for generating Model Metadata in ASP.NET MVC2

    - by rajbk
    I have been learning about T4 templates recently by looking at the awesome ADO.NET POCO entity generator. By using the POCO entity generator template as a base, I created a T4 template which generates metadata classes for a given Entity Data Model. This speeds coding by reducing the amount of typing required when creating view specific model and its metadata. To use this template, Download the template provided at the bottom. Set two values in the template file. The first one should point to the EDM you wish to generate metadata for. The second is used to suffix the namespace and classes that get generated. string inputFile = @"Northwind.edmx"; string suffix = "AutoMetadata"; Add the template to your MVC 2 Visual Studio 2010 project. Once you add it, a number of classes will get added to your project based on the number of entities you have.    One of these classes is shown below. Note that the DisplayName, Required and StringLength attributes have been added by the t4 template. //------------------------------------------------------------------------------ // <auto-generated> // This code was generated from a template. // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------   using System; using System.ComponentModel; using System.ComponentModel.DataAnnotations;   namespace NorthwindSales.ModelsAutoMetadata { public partial class CustomerAutoMetadata { [DisplayName("Customer ID")] [Required] [StringLength(5)] public string CustomerID { get; set; } [DisplayName("Company Name")] [Required] [StringLength(40)] public string CompanyName { get; set; } [DisplayName("Contact Name")] [StringLength(30)] public string ContactName { get; set; } [DisplayName("Contact Title")] [StringLength(30)] public string ContactTitle { get; set; } [DisplayName("Address")] [StringLength(60)] public string Address { get; set; } [DisplayName("City")] [StringLength(15)] public string City { get; set; } [DisplayName("Region")] [StringLength(15)] public string Region { get; set; } [DisplayName("Postal Code")] [StringLength(10)] public string PostalCode { get; set; } [DisplayName("Country")] [StringLength(15)] public string Country { get; set; } [DisplayName("Phone")] [StringLength(24)] public string Phone { get; set; } [DisplayName("Fax")] [StringLength(24)] public string Fax { get; set; } } } The gen’d class can be used from your project by creating a partial class with the entity name and setting the MetadataType attribute.namespace MyProject.Models{ [MetadataType(typeof(CustomerAutoMetadata))] public partial class Customer { }} You can also copy the code in the metadata class generated and create your own ViewModel class. Note that the template is super basic  and does not take into account complex properties. I have tested it with the Northwind database. This is a work in progress. Feel free to modify the template to suite your requirements. Standard disclaimer follows: Use At Your Own Risk, Works on my machine running VS 2010 RTM/ASP.NET MVC 2 AutoMetaData.zip Mr. Incredible: Of course I have a secret identity. I don't know a single superhero who doesn't. Who wants the pressure of being super all the time?
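
    As a small, hypothetical usage sketch (controller and action names are made up): once the generated metadata class is attached to the Customer partial class, MVC 2 model binding applies the Required/StringLength rules automatically, so a postback action only needs to check ModelState:

    using System.Web.Mvc;
    using MyProject.Models;                  // the partial Customer class shown above

    public class CustomersController : Controller
    {
        [HttpPost]
        public ActionResult Edit(Customer customer)
        {
            if (!ModelState.IsValid)
            {
                // Error messages come from the DataAnnotations emitted by the T4 template.
                return View(customer);
            }
            // ...persist the customer here...
            return RedirectToAction("Index");
        }
    }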

    Read the article

  • Boot From a USB Drive Even if your BIOS Won’t Let You

    - by Trevor Bekolay
    You’ve always got a trusty bootable USB flash drive with you to solve computer problems, but what if a PC’s BIOS won’t let you boot from USB? We’ll show you how to make a CD or floppy disk that will let you boot from your USB drive. This boot menu, like many created before USB drives became cheap and commonplace, does not include an option to boot from a USB drive. A piece of freeware called PLoP Boot Manager solves this problem, offering an image that can burned to a CD or put on a floppy disk, and enables you to boot to a variety of devices, including USB drives. Put PLoP on a CD PLoP comes as a zip file, which includes a variety of files. To put PLoP on a CD, you will need either plpbt.iso or plpbtnoemul.iso from that zip file. Either disc image should work on most computers, though if in doubt plpbtnoemul.iso should work “everywhere,” according to the readme included with PLoP Boot Manager. Burn plpbtnoemul.iso or plpbt.iso to a CD and then skip to the “booting PLoP Boot Manager” section. Put PLoP on a Floppy Disk If your computer is old enough to still have a floppy drive, then you will need to put the contents of the plpbt.img image file found in PLoP’s zip file on a floppy disk. To do this, we’ll use a freeware utility called RawWrite for Windows. We aren’t fortunate enough to have a floppy drive installed, but if you do it should be listed in the Floppy drive drop-down box. Select your floppy drive, then click on the “…” button and browse to plpbt.img. Press the Write button to write PLoP boot manager to your floppy disk. Booting PLoP Boot Manager To boot PLoP, you will need to have your CD or floppy drive boot with higher precedence than your hard drive. In many cases, especially with floppy disks, this is done by default. If the CD or floppy drive is not set to boot first, then you will need to access your BIOS’s boot menu, or the setup menu. The exact steps to do this vary depending on your BIOS – to get a detailed description of the process, search for your motherboard’s manual (or your laptop’s manual if you’re working with a laptop). In general, however, as the computer boots up, some important keyboard strokes are noted somewhere prominent on the screen. In our case, they are at the bottom of the screen. Press Escape to bring up the Boot Menu. Previously, we burned a CD with PLoP Boot Manager on it, so we will select the CD-ROM Drive option and hit Enter. If your BIOS does not have a Boot Menu, then you will need to access the Setup menu and change the boot order to give the floppy disk or CD-ROM Drive higher precedence than the hard drive. Usually this setting is found in the “Boot” or “Advanced” section of the Setup menu. If done correctly, PLoP Boot Manager will load up, giving a number of boot options. Highlight USB and press Enter. PLoP begins loading from the USB drive. Despite our BIOS not having the option, we’re now booting using the USB drive, which in our case holds an Ubuntu Live CD! This is a pretty geeky way to get your PC to boot from a USB…provided your computer still has a floppy drive. Of course if your BIOS won’t boot from a USB it probably has one…or you really need to update it. 
Download PLoP Boot Manager
Download RawWrite for Windows

    Read the article

  • The Beginner’s Guide to Greasemonkey User Scripts in Firefox

    - by Asian Angel
    Everybody knows that Firefox has add-ons for virtually everything, but if you don’t want to bloat your installation you’ve always got the option of Greasemonkey scripts instead. Here’s a quick primer on how to use them. Getting Started with User Scripts Once you have Greasemonkey installed, managing the extension is really easy. Left click on the status bar icon to turn the extension on/off and right click to access the context menu shown here. Whether you use the Options button in the Add-ons Manager Window or the context menu shown above, both will bring up the Manage User Scripts dialog. At the moment you have a nice clean slate to work with… time to get some scripts added in. The majority of user scripts can be found at two different sites, the first being appropriately named userscripts.org, and you can either browse by tag or search for a script. As you can see here your search for a particular type of script can be quickly narrowed down based on category. There is definitely a lot to choose from. For our example we focused on the “textarea” tag. There were 62 scripts available but we quickly found what we were looking for on the first page. Installing, Managing, & Using Your Scripts When you find a script that you want to install visit the script’s homepage and click on the “Install” button. Note: Link for this script provided below. Once you have clicked on the Install button, Greasemonkey will open up the following installation window. You will be able to view: A summary of what the script does A list of websites that the script is supposed to function on (our example is set for all) View the script source if desired Make a final decision on whether to install the script or cancel the process Right-clicking on our status bar icon shows our new script listed and active. Reopening the Manage User Scripts window shows: Our new script listed in the column on the left The websites/pages included An option to disable the script (can also be done in the context menu) The ability to edit the script The ability to uninstall the script If you choose to edit the script you will be asked to browse for and select a default text editor of your choice (first time only). Once you have selected a text editor you can make any changes desired to the script. We decided to test our new user script on the site. Going to the comment box at the bottom we could easily resize the window as desired. The Comment box definitely got a lot bigger. Conclusion If you prefer to keep the number of extensions to a minimum in your Firefox installation then Greasemonkey and the Userscripts website can easily provide that extra functionality without the bloat. For added auto website script detection goodness see our article on Greasefire. Note: See our article here for specialized How-To Geek User Style Scripts that can be added to Greasemonkey. 
Links
Download the Greasemonkey Extension (Mozilla Add-ons)
Install the Textarea & Input Resize User Script
Visit the Userscripts.org Website
Visit the Userstyles.org Website

    Read the article

  • SharePoint.DesignFactory.ContentFiles–building WCM sites

    - by svdoever
    One of the use cases where we use the SharePoint.DesignFactory.ContentFiles tooling is in building SharePoint Publishing (WCM) solutions for SharePoint 2007, SharePoint 2010 and Office365. Publishing solutions are often solutions that have one instance, the publishing site (possibly with subsites), that in most cases need to go through DTAP. If you dissect a publishing site, in most case you have the following findings: The publishing site spans a site collection The branding of the site is specified in the root site, because: Master pages live in the root site (/_catalogs/masterpage) Page layouts live in the root site (/_catalogs/masterpage) The style library lives in the root site ( /Style Library) and contains images, css, javascript, xslt transformations for your CQWP’s, … Preconfigured web parts live in the root site (/_catalogs/wp) The root site and subsites contains a document library called Pages (or your language-specific version of it) containing publishing pages using the page layouts and master pages The site collection contains content types, fields and lists When using the SharePoint.DesignFactory.ContentFiles tooling it is very easy to create, test, package and deploy the artifacts that can be uploaded to the SharePoint content database. This can be done in a fast and simple way without the need to create and deploy WSP packages. If we look at the above list of artifacts we can use SharePoint.DesignFactory.ContentFiles for master pages, page layouts, the style library, web part configurations, and initial publishing pages (these are normally made through the SharePoint web UI). Some artifacts like content types, fields and lists in the above list can NOT be handled by SharePoint.DesignFactory.ContentFiles, because they can’t be uploaded to the SharePoint content database. The good thing is that these artifacts are the artifacts that don’t change that much in the development of a SharePoint Publishing solution. There are however multiple ways to create these artifacts: Use paper script: create them manually in each of the environments based on documentation Automate the creation of the artifacts using (PowerShell) script Develop a WSP package to create these artifacts I’m not a big fan of the third option (see my blog post Thoughts on building deployable and updatable SharePoint solutions). It is a lot of work to create content types, fields and list definitions using all kind of XML files, and it is not allowed to modify these artifacts when in use. I know… SharePoint 2010 has some content type upgrade possibilities, but I think it is just too cumbersome. The first option has the problem that content types and fields get ID’s, and that these ID’s must be used by the metadata on for example page layouts. No problem for SharePoint.DesignFactory.ContentFiles, because it supports deploy-time resolving of these ID’s using PowerShell. For example consider the following metadata definition for the page layout contactpage-wcm.aspx.properties.ps1: Metadata page layout # This script must return a hashtable @{ name=value; ... } of field name-value pairs # for the content file that this script applies to. # On deployment to SharePoint, these values are written as fields in the corresponding list item (if any) # Note that fields must exist; they can be updated but not created or deleted. # This script is called right after the file is deployed to SharePoint.   # You can use the script parameters and arbitrary PowerShell code to interact with SharePoint. # e.g. 
to calculate properties and values at deployment time.   param([string]$SourcePath, [string]$RelativeUrl, $Context) @{     "ContentTypeId" = $Context.GetContentTypeID('GeneralPage');     "MasterPageDescription" = "Cloud Aviator Contact pagelayout (wcm - don't use)";     "PublishingHidden" = "1";     "PublishingAssociatedContentType" = $Context.GetAssociatedContentTypeInfo('GeneralPage') } The PowerShell functions GetContentTypeID and GetAssociatedContentTypeInfo can at deploy-time resolve the required information from the server we are deploying to. I personally prefer the second option: automate creation through PowerShell, because there are PowerShell scripts available to export content types and fields. An example project structure for a typical SharePoint WCM site looks like: Note that this project uses DualLayout. So if you build Publishing sites using SharePoint, checkout out the completely free SharePoint.DesignFactory.ContentFiles tooling and start flying!

    Read the article

  • If You Could Cut Your Meeting Times in ½ Would You?

    - by [email protected]
    By Brian Dayton on April 22, 2010 2:02 PM
    I know it sounds like a big promise. And what I'm thinking about may not cut a :60 minute meeting into :30 minutes, but it could make meetings and interactions up to 2X more productive. How? Social Media for the Enterprise, Not Social Media In the Enterprise. Bear with me. I'm not talking about whether or not workers should have access to Facebook on corporate networks. That topic has been discussed at length. I'm also not talking about the direct benefits of social networking tools like Presence (the ability to see someone online and ask a question in real time), blogs, RSS feeds or external tools like Twitter.
    The Un-Measurable Benefits
    Would you do something that you believe will have a positive effect but can't be measured? It's impossible to quantify the effectiveness of a meeting. However, what I am talking about would be more of a byproduct of all of the social networking tools above. Here's the hypothesis: As I've gotten more and more busy with work, family, travel and kids, and the same has happened to my friends and family, I'm less and less connected. But by introducing Facebook to my life I've not only made connections with longtime friends whom I haven't spoken to in years, I've also increased the pace and quality of interactions, on and offline, with close friends who I see and speak to every week. In some cases it even enhances the connections and interactions with those I see or speak to every day. The same holds true in an organization, especially a larger one with highly matrixed organizational structures. You work with people on a project, new people come in with each different project, and a disproportionate amount of time is spent getting oriented and staying current. Going back to the initial value proposition of making meetings shorter and more effective, a large amount of time is spent:
    - At project kick-off: meeting and understanding team members' histories, goals and roles
    - Ongoing: summarizing events since the last meeting or update email
    In my personal Facebook life today I know that:
    - My best friend from college has been stranded in India for 5 days because of the volcano in Iceland and is now only 250 miles from home
    - One of my co-workers started conference calls at 6:30 this morning
    - My wife wasn't terribly pleased with my painting skills in our new bathroom (disclosure: she told me this face to face too)
    Strengthening Weak Links
    A recent article in CIO Magazine, Three Dangerous Social Media Misconceptions (Kristen Burnham, March 12, 2010), calls out the #1 misconception as follows: 1. "Face-to-face relationships are far more valuable than virtual ones." While some level of physical interaction will always add value to relationships, Gartner says that come 2020, most relationships and teams will be based on "weak links"--that is, you may not have personally met a contact, but you'll know of or may have interacted with him via social sites like Facebook, LinkedIn and Twitter. The sooner your enterprise adopts these tools, the sooner your employees will learn them, and the sooner you'll begin to cultivate these relationships-of-the-future. I personally believe that it's not an either/or choice between face-to-face and virtual interactions. In fact, I'll be as bold as to say it doesn't matter.
    I can point to two extremely valuable work relationships that I've had over the past 5 years:
    - I shared an office with one of them
    - I met the other person, face-to-face, only once
    Both relationships were very productive. The dynamics were similar. The communication tactics differed immensely. What does matter is the quality, frequency and relevance of interactions. Still sound like too much? An over-promise? Stay tuned for my next post, The Gap Between Facebook and LinkedIn. I'll also connect some of the dots with where Oracle Applications and technologies are headed.

    Read the article

  • IntelliTrace As a Learning Tool for MVC2 in a VS2010 Project

    - by Sam Abraham
    IntelliTrace is a new feature in Visual Studio 2010 Ultimate Edition. I see this valuable tool as a “Program Execution Recorder” that captures information about events and calls taking place as soon as we hit the VS2010 play (Start Debugging) button or the F5 key. Many online resources already discuss IntelliTrace and the benefit it brings to developers and testers alike, so I see no value in simply repeating that information.  In this brief blog entry, I would like to share with you how I will be using IntelliTrace in my upcoming talk at the Ft Lauderdale ArcSig .Net User Group Meeting on April 20th 2010 (check http://www.fladotnet.com for more information), as a learning tool to demonstrate the internals of the lifecycle of an MVC2 application.  I will also be providing some helpful links that cover IntelliTrace in more detail at the end of my article for reference.
    IntelliTrace is set up by default to capture only execution events. Microsoft did such a great job optimizing its recording process that I haven't felt the slightest performance hit with IntelliTrace running as I was debugging my solutions and projects.  For my purposes here, however, I needed to capture more information beyond execution events, so I turned on the option for capturing calls in addition to events, as shown in Figures 1 and 2. Changing capture options requires us to stop our debugging session and start over for the new settings to take effect.
    Figure 1 – Access IntelliTrace options via the Tools->Options menu items
    Figure 2 – Change IntelliTrace options to capture call information as well as events
    Notice the warning about potentially degrading performance when selecting to capture call information in addition to the default events-only setting. I have found this warning to hold true: my subsequent tests showed slower page load times compared to rendering those same exact pages with the events-only option selected. Execution recording is auto-started along with the new debugging session of our project. At this point, we can simply interact with the application and continue executing normally until we decide to play back the code we have executed so far.  For code replay, the first step is to “break” the current execution, as shown in Figure 3.
    Figure 3 – Break to replay recording
    A few tries later, I found a good process to quickly find and demonstrate the MVC2 page lifecycle. First off, we start with the events view, as shown in Figure 4, until we find an interesting event that needs further study.
    Figure 4 – Going through IntelliTrace's events and picking a specific entry of interest
    We can now, for instance, study how the highlighted HTTP GET request is being handled by clicking on the “Calls View” for that particular event. Notice that IntelliTrace shows us all calls that took place in servicing that GET request. Double-clicking any call takes us to a more granular view of the call stack within that call, down to a specific line of code from which we can replay the execution line by line using F10 or F11, just as good old VS2008 debugging let us do.
    Figure 5 – Switching to Calls View on an event of interest
    Figure 6 – Double-clicking a call shows a more granular view of the call stack
    In conclusion, the introduction of IntelliTrace as a new addition to the VS developers' tool arsenal enhances the development and debugging experience and effectively tackles the “no-repro” problem.
    It will also hopefully enhance my audience's experience listening to me speak about the MVC2 page lifecycle, which I can now easily demonstrate visually, thereby improving the probability of keeping everybody awake a little longer. (A minimal, hypothetical controller that can serve as a tracing target is sketched after the reference links below.)
    IntelliTrace References:
    http://msdn.microsoft.com/en-us/magazine/ee336126.aspx
    http://msdn.microsoft.com/en-us/library/dd264944(VS.100).aspx
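    To make that walkthrough concrete, a trivial controller is all that is needed as a tracing target. The sketch below is my own hypothetical example rather than code from the talk; hitting its action with an HTTP GET produces exactly the kind of request-handling calls that appear in the Calls View described above.
    using System.Web.Mvc;

    // Hypothetical minimal ASP.NET MVC 2 controller used purely as an IntelliTrace target.
    public class ProductsController : Controller
    {
        // GET /Products/Index
        public ActionResult Index()
        {
            // Locate this call in the Calls View, then replay it line by line with F10/F11.
            ViewData["Message"] = "Hello from the IntelliTrace demo";
            return View();
        }
    }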

    Read the article

  • Error Handling Examples (C#)

    “The purpose of reviewing the Error Handling code is to assure that the application fails safely under all possible error conditions, expected and unexpected. No sensitive information is presented to the user when an error occurs.” (OWASP, 2011)
    No Error Handling
    The absence of error handling is still a form of error handling. Based on the code in Figure 1, if an error occurred and was not handled within either the ReadXml or BuildRequest methods, the error would bubble up to the Search method. Since this method does not handle any exceptions, the error will then bubble up the stack trace. If this continues and the error is never handled within the application, the environment in which the application is running will notify the user that an error occurred, in a manner that depends on the type of application.
    Figure 1: No Error Handling
    public DataSet Search(string searchTerm, int resultCount)
    {
        DataSet dt = new DataSet();
        dt.ReadXml(BuildRequest(searchTerm, resultCount));
        return dt;
    }
    Generic Error Handling
    One simple way to add error handling is to catch all errors by default. If you examine the code in Figure 2, you will see a try-catch block. On April 6th, 2010, Louis Lazaris clearly described a try-catch statement by defining both the try and catch aspects of the statement: “The try portion is where you would put any code that might throw an error. In other words, all significant code should go in the try section. The catch section will also hold code, but that section is not vital to the running of the application. So, if you removed the try-catch statement altogether, the section of code inside the try part would still be the same, but all the code inside the catch would be removed.” (Lazaris, 2010) He also states that any error occurring in the try section stops the execution of the try section and redirects all execution to the catch section. The catch section receives an object containing information about the error that occurred, so that the system can handle the error gracefully. When errors occur, they are commonly logged in some form. That form could be an email, a database entry, a web service call, a log file, or just an error message displayed to the user. Depending on the error, sometimes an application can recover, while other errors force it to close.
    Figure 2: Generic Error Handling
    public DataSet Search(string searchTerm, int resultCount)
    {
        DataSet dt = new DataSet();
        try
        {
            dt.ReadXml(BuildRequest(searchTerm, resultCount));
        }
        catch (Exception ex)
        {
            // Handle all Exceptions
        }
        return dt;
    }
    Error-Specific Error Handling
    Like generic error handling, error-specific error handling allows for the catching of specific known errors that may occur. For example, wrapping a try-catch statement around a SOAP web service call would allow the application to handle any error generated by the SOAP web service. Now, suppose the system wanted to send a message to the web service provider every time a SOAP error occurred, but did not want to notify them if any other type of error occurred, such as a network timeout. This would be very tedious to accomplish using the generic error handling methodology, which brings us to the use case for error-specific error handling: it allows the try-catch statement to catch various types of errors and handle each one differently, depending on the type of error that occurred.
    In Figure 3, the code handles TimeoutException differently from how it handles all other errors. This allows for specific handling of each type of known error, while still handling any unknown error that may occur during the execution of the try-catch statement.
    Figure 3: Error-Specific Error Handling
    public DataSet Search(string searchTerm, int resultCount)
    {
        DataSet dt = new DataSet();
        try
        {
            dt.ReadXml(BuildRequest(searchTerm, resultCount));
        }
        catch (TimeoutException ex)
        {
            // Handle TimeoutException only
        }
        catch (Exception)
        {
            // Handle all other Exceptions
        }
        return dt;
    }
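    The figures above stop at empty catch blocks, so here is one additional hedged sketch of my own (not from the original article) showing how the logging idea mentioned earlier might look in practice. The Logger.Write helper is a hypothetical placeholder for whatever logging mechanism (email, database entry, web service call, or log file) the application actually uses.
    public DataSet Search(string searchTerm, int resultCount)
    {
        DataSet dt = new DataSet();
        try
        {
            dt.ReadXml(BuildRequest(searchTerm, resultCount));
        }
        catch (TimeoutException ex)
        {
            // Known, recoverable condition: record it and fall through to return the empty result set.
            Logger.Write("Search timed out: " + ex.Message); // Logger is a hypothetical placeholder
        }
        catch (Exception ex)
        {
            // Unknown condition: log it, then rethrow so the application can fail safely upstream.
            Logger.Write("Search failed: " + ex.Message); // Logger is a hypothetical placeholder
            throw;
        }
        return dt;
    }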

    Read the article

  • Integrate Google Docs with Outlook the Easy Way

    - by Matthew Guay
    Want to use Google Docs and Microsoft Office together?  Here's how you can use Harmony for Google Docs to integrate them seamlessly with Outlook. Harmony for Google Docs is an exciting new plugin for Outlook 2007 (a version for Outlook 2010 is in the works).  It lets you integrate your Google Docs account with Outlook via a sidebar.  From this sidebar, you can find any of your Google Docs or upload a new document, and then open the document to view or edit it in Outlook.
    Getting Started
    Download Harmony for Google Docs (link below), and install as normal.  Make sure Outlook is closed before you run the install. Next time you open Outlook, the new Harmony sidebar will automatically open.  Enter your Google Account info, and click Sign In. Now, all of your Google Docs will show up in the sidebar. Double-click any file to open it in Outlook.  You may have to sign in to Google Docs the first time you open a document. Here's a Google Doc open in Outlook.  Notice that everything works, including full editing. All Google Docs features worked great in our tests except for the new drawings tool.  When we tried to insert a drawing, Outlook had a script error.  This was the only problem we had with Harmony, and it could be due to an interaction between Google Drawings and Outlook's rendering engine. Harmony makes it easy to find any file in your Google Docs account.  You can search for a file, or sort your files by type, recency, and more. You can also easily add any document to Google Docs directly from Harmony.  You can drag and drop any document, including one attached to an email, onto the Harmony sidebar, and it will upload directly to your Google Docs account. And, when you're writing a new email or reply, click the Show Documents button to open the Harmony sidebar.  From here, you can add documents as usual and share them with the email recipients.
    Conclusion
    We previously covered a similar app, OffiSync, which combines Google Docs features with MS Office. However, Harmony makes it much easier to use Google Apps along with Outlook.  This gives you an easy and efficient way to collaborate on documents with coworkers, all without leaving Outlook.  And, if your company uses SharePoint instead of Google Docs, Harmony offers a SharePoint edition that integrates with Outlook just as easily!
    Link
    Download Harmony for Google Docs

    Read the article

  • HTML Presence Controls for Communications Server 14 CodePlex Project

    Showing Presence on the Web
    If you're running Office Communications Server 2007 R2, you know that your only out-of-the-box option for showing presence on the web is to use the NameControl ActiveX control that ships as part of Office.  Being an ActiveX control, this obviously means that you're limited to Internet Explorer.  Also, nobody likes ActiveX controls. What if you want to show the presence of users in a pure ASP.NET or HTML application and can't assume that the user has Communicator installed? You need an ASP.NET or HTML presence control.
    HTML Presence Controls for Microsoft Communications Server 14
    We recently worked with the UC team at Microsoft on a keynote demo for TechEd 2010 in New Orleans.  The demo was for a fictitious airline, Fabrikam Airlines, that wanted to show the presence of customer service and reservations agents on its website.  Customers could also start an instant message conversation with the agents using a Silverlight web chat window that used WCF to communicate with the backend UCMA application. We built HTML Presence Controls that use AJAX to poll a REST-based WCF service running in IIS and hosting a UCMA 3.0 presence subscription application.  Microsoft has graciously allowed us to publish these on CodePlex so that the development community can benefit from them: http://htmlpresencecontrols.codeplex.com/ We will be maintaining the CodePlex project as new builds of UCMA 3.0 become available.  Check out the project home page on CodePlex for more in-depth details on how the controls are implemented.
    ASP.NET Server Control Implementation
    We're providing an ASP.NET server control implementation that you can use stand-alone or in a GridView or Repeater (or other layout control).  The control has properties that allow you to control its appearance; for example, you can choose whether or not to show the contact's name or availability text. You can also use the server control in a layout control such as a GridView by putting it in a TemplateColumn and binding to the SIP URI in the data source.
    Disclaimer
    Once we started working on these, we realized why Microsoft hasn't shipped such controls as part of the product.  There are some tradeoffs you have to be aware of when using these controls; here's the high-level view.
    Privacy
    The backend UCMA 3.0 application that subscribes to the presence of contacts runs as a trusted application and can thus retrieve the presence of any user in the organization.  There's currently no good way in UCMA to apply privacy rules to ensure that the consumer of the presence controls has permission to see the presence of the contacts that the controls are bound to.  Just to be absolutely crystal clear: these controls provide a way to query the presence of any user in the organization, regardless of the privacy relationship between the person consuming the controls and the contacts whose presence is being displayed. We're exploring options for a design pattern that would allow you to inject some privacy controls.  Keep in mind, though, that you would most likely be responsible for implementing this logic, as there is currently no functionality in UCMA that allows you to do that.
    Polling the WCF REST Service
    The controls poll the backend WCF service to retrieve the presence of contacts; you can control the refresh interval so that they poll less often. We implemented a caching layer so that the WCF service always communicates with a presence cache; it never communicates directly with Communications Server.
    For example, if your web page is showing the presence of sip:[email protected] and 500 people have the page open, the presence cache only contains one instance of the subscription; Communications Server is not being polled 500 times for the presence of that contact. Once the presence of a contact changes, it is updated in the cache.  There are some server-based push mechanisms that would work nicely here, such as the one that Outlook Web Access 2010 uses.  Unfortunately we didn't have time to explore these options.
    Community Contribution
    Take a look at the project Issue Tracker; there are a couple of things we could use some help with.  Shoot me a note if you're interested in contributing to the project.
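    To illustrate the caching layer described above, here is a minimal C# sketch of the single-subscription idea. It is an illustration only, not the actual CodePlex implementation; the PresenceCache and PresenceSnapshot type names and the subscription callback are hypothetical stand-ins for the real UCMA 3.0 plumbing.
    using System;
    using System.Collections.Concurrent;

    // Hypothetical snapshot of a contact's availability, as returned to the AJAX pollers.
    public sealed class PresenceSnapshot
    {
        public string SipUri { get; set; }
        public string Availability { get; set; }
        public DateTime UpdatedUtc { get; set; }
    }

    // Sketch of a presence cache: however many web clients poll for the same contact,
    // only one upstream subscription per SIP URI is ever created.
    public sealed class PresenceCache
    {
        private readonly ConcurrentDictionary<string, PresenceSnapshot> _snapshots =
            new ConcurrentDictionary<string, PresenceSnapshot>(StringComparer.OrdinalIgnoreCase);

        // Stand-in for "subscribe via UCMA and invoke the callback on every presence change".
        private readonly Action<string, Action<PresenceSnapshot>> _subscribe;

        public PresenceCache(Action<string, Action<PresenceSnapshot>> subscribe)
        {
            _subscribe = subscribe;
        }

        // Called by the REST service for every poll request.
        public PresenceSnapshot Get(string sipUri)
        {
            return _snapshots.GetOrAdd(sipUri, uri =>
            {
                // First request for this contact: create the single upstream subscription.
                _subscribe(uri, updated => _snapshots[uri] = updated);
                return new PresenceSnapshot { SipUri = uri, Availability = "Unknown", UpdatedUtc = DateTime.UtcNow };
            });
        }
    }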

    Read the article

  • Regular Expressions Cookbook Is in The Money—Win a Copy

    - by Jan Goyvaerts
    You may have heard some people say that most book authors never get any royalties. That's not true, because most authors get an advance royalty that is paid before the book is published. That's the author's main incentive for writing the book, at least as far as money is concerned. (If money is your main concern, don't write books.) What is true is that most authors never see any money beyond the advance royalty. Royalty rates are very low. A 10% royalty of the publisher's price is considered normal. The publisher's price is usually 45% of the retail price. So if you pay full price in a bookstore, the author gets 4.5% of your money. If there's more than one author, they split the royalty. It doesn't take a math degree to figure out that a book needs to sell quite a few copies for the royalty to add up to a meaningful amount of money. But Steven and I must have done something right. Regular Expressions Cookbook is in the money. My royalty statement for the 3rd quarter of 2009, which is the 2nd quarter that the book was on the market, came with a check. I actually received it last month but didn't get around to blogging about it. The amount of the check is insignificant. The point is that the balance is no longer negative. I'm taking this opportunity to pat myself and my co-author on the back. To celebrate the occasion, O'Reilly has offered to sponsor a give-away of five (5) copies of Regular Expressions Cookbook. These are the rules of the game:
    - You must post a comment to this blog article including your actual name and actual email address. Names are published, email addresses are not.
    - Comments are moderated by myself (Jan Goyvaerts). If I consider a comment to be offensive or spam, it will not be published and will not be eligible for any prize.
    - If you don't know what to say in the comment, just wish me a happy 100000nd birthday, so I don't have to feel so bad about entering the 6-bit era.
    - Each person commenting has only one chance to win, regardless of the number of comments posted.
    - O'Reilly will be provided with the names and email addresses of the winners (and those email addresses only) in order to arrange delivery.
    - Each winner can choose to receive a printed copy or an ebook (DRM-free PDF). If you choose the printed book, O'Reilly pays for shipping to anywhere in the world but not for any duties or taxes your country may impose on books imported from the USA. If you choose the ebook, you'll need to create an O'Reilly account that is then granted access to the PDF download. You can make your choice after you've won, so it doesn't influence your chance of winning.
    - Contest ends 28 February 2010, GMT+7 (Thai time).
    Chosen by five calls to Random(78)+1 in Delphi 2010, the winners are:
    - 48: Xiaozu
    - 45: David Chisholm
    - 19: Miquel Burns
    - 33: Aaron Rice
    - 17: David Laing
    Thanks to everybody who participated. The winners have been notified by email on how to collect their prize.

    Read the article

  • Five Years of Bonn-to-Code.Net – Time to Celebrate!

    - by WeigeltRo
    When I founded the .NET user group "Bonn-to-Code.Net" on January 1, 2006 (my colleague Jens Schaller came up with the brilliant name, playing on the motto of my blog), I had no idea how quickly everything would develop. After a little promotion through various channels, the first meeting took place as early as February 14, 2006, and a few days later Bonn-to-Code.Net was officially accepted into the circle of INETA user groups. That is now a little over five years ago, and it will be celebrated properly on March 22, 2011 at 19:00 (doors open at 18:30) as part of our March meeting. The evening offers talks on "Flow Design und seine Umsetzung mit Event Based Components" (Flow Design and its implementation with Event-Based Components) and "WCF Services mal anders" (WCF services done differently); more detailed information on the talks is available here. Afterwards there will be a big raffle with books as well as high-profile software prizes to win. In addition to licenses for JetBrains ReSharper and the Telerik Ultimate Collection, this time (with the kind support of Microsoft Deutschland) one copy each of Windows 7 Ultimate and Office 2010 Professional Plus awaits its lucky winner. And anyone who does not arrive too late can also grab one of many small goodies without needing any raffle luck. Registration is not required; directions can be found on the Bonn-to-Code.Net website.
    I am particularly pleased that for this occasion we have on board, among others, a speaker who was already there at the founding meeting: Stefan Lieser. Now known through the Clean Code Developer initiative, for example, Stefan is just one of a whole series of speakers at the various developer conferences who gained some of their first speaking experience at Bonn-to-Code.Net.
    …and what has happened in those five years? Quite a lot! A Community Launch Event in 2007, two Microsoft TechTalks (2007, 2008), guest speakers from all over Germany and from abroad (JP Boodhoo, Harry Pierson). But nothing has shaped these five years as much as the cooperation with "the neighbors from Cologne". At the time Bonn-to-Code.Net was founded, there was no .NET user group in the entire Cologne/Bonn area. So it was not unusual that the first interested person to respond to my blog post of January 4, 2006 came from Cologne: Albert Weinert. Shortly after the Bonn group, the .NET User Group Köln was finally founded, initiated by Angelika Wöpking and Stefan Lange. Stefan, for his part, had already attended Bonn meetings before the Cologne founding meeting at the end of April; all in all, a lot of personal overlap between Cologne and Bonn. When Albert and Stefan eventually took over the leadership after a somewhat bumpy start for the Cologne group, it was clear that Cologne and Bonn would work closely together in many respects, be it by coordinating topics and dates or by promoting each other's meetings. The next step came with the involvement of the Cologne and Bonn groups in organizing the "AfterLaunch" in April 2008. The great success of that event was the incentive to open a new chapter in the cooperation. At the beginning of 2009 the dotnet Köln/Bonn e.V. was founded to create a solid foundation for large events of our own. In May 2009 the first "dotnet Cologne" followed, a complete success. And with "dotnet Cologne 2010" this conference established itself as the big .NET community event in Germany. On May 6, 2011 the "dotnet Cologne 2011" will take place; behind the scenes, preparations have been running at full speed for months. All in all, a very exciting five years in which a lot has happened. Let's see what the next five years will bring…

    Read the article

< Previous Page | 346 347 348 349 350 351 352 353 354 355 356 357  | Next Page >