Search Results

Search found 10033 results on 402 pages for 'execution speed'.


  • Perl kill(0, $pid) in Windows always returning 1

    - by banshee_walk_sly
    I'm trying to make a Perl script that will run a set of other programs in Windows. I need to be able to capture the stdout, stderr, and exit code of the process, and I need to be able to see if a process exceeds its allotted execution time. Right now, the pertinent part of my code looks like:

        ...
        $pid = open3($wtr, $stdout, $stderr, $command);
        if($time < 0){
            waitpid($pid, 0);
            $return = $? >> 8;
            $death_sig = $? & 127;
            $core_dump = $? & 128;
        }
        else{
            # Do timeout stuff, currently not working as planned
            print "pid: $pid\n";
            my $elapsed = 0;
            # THIS LOOP ONLY TERMINATES WHEN $time > $elapsed ...?
            while(kill 0, $pid and $time > $elapsed){
                Time::HiRes::usleep(1000); # sleep for milliseconds
                $elapsed += 1;
                $return = $? >> 8;
                $death_sig = $? & 127;
                $core_dump = $? & 128;
            }
            if($elapsed >= $time){
                $status = "FAIL";
                print $log "TIME LIMIT EXCEEDED\n";
            }
        }
        # these lines are needed to grab the stdout and stderr in arrays so
        # I may reuse them in multiple logs
        if(fileno $stdout){ @stdout = <$stdout>; }
        if(fileno $stderr){ @stderr = <$stderr>; }
        ...

    Everything is working correctly if $time = -1 (no timeout is needed), but kill 0, $pid always returns 1, which makes my loop run for the entirety of the time allowed. Some extra details just for clarity: this is being run on Windows, and I know my process does terminate because I get all the expected output. Perl version:

        This is perl, v5.10.1 built for MSWin32-x86-multi-thread
        (with 2 registered patches, see perl -V for more detail)
        Copyright 1987-2009, Larry Wall
        Binary build 1007 [291969] provided by ActiveState http://www.ActiveState.com
        Built Jan 26 2010 23:15:11

    I appreciate your help :D

    For that future person who may have a similar issue: I got the code to work. Here are the modified code sections:

        $pid = open3($wtr, $stdout, $stderr, $command);
        close($wtr);
        if($time < 0){
            waitpid($pid, 0);
        }
        else{
            print "pid: $pid\n";
            my $elapsed = 0;
            while(waitpid($pid, WNOHANG) <= 0 and $time > $elapsed){
                Time::HiRes::usleep(1000); # sleep for milliseconds
                $elapsed += 1;
            }
            if($elapsed >= $time){
                $status = "FAIL";
                print $log "TIME LIMIT EXCEEDED\n";
            }
        }
        $return = $? >> 8;
        $death_sig = $? & 127;
        $core_dump = $? & 128;
        if(fileno $stdout){ @stdout = <$stdout>; }
        if(fileno $stderr){ @stderr = <$stderr>; }
        close($stdout);
        close($stderr);
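    (The same start-then-wait-with-a-deadline pattern, sketched in TypeScript on Node.js for readers coming from a different stack; the command name and timeout below are placeholders, not part of the original post.)

        // Minimal sketch, assuming Node.js: spawn a child process, capture stdout/stderr,
        // and treat it as failed if it has not exited within the allotted time.
        import { spawn } from "child_process";

        interface RunResult { code: number | null; stdout: string; stderr: string; timedOut: boolean; }

        function runWithTimeout(cmd: string, args: string[], timeoutMs: number): Promise<RunResult> {
            return new Promise((resolve) => {
                const child = spawn(cmd, args);
                let stdout = "";
                let stderr = "";
                let timedOut = false;

                child.stdout.on("data", (chunk) => { stdout += chunk; });
                child.stderr.on("data", (chunk) => { stderr += chunk; });

                // Plays the role of the waitpid(..., WNOHANG) polling loop above.
                const timer = setTimeout(() => { timedOut = true; child.kill(); }, timeoutMs);

                child.on("close", (code) => {
                    clearTimeout(timer);
                    resolve({ code, stdout, stderr, timedOut });
                });
            });
        }

        // Hypothetical usage:
        // runWithTimeout("some_test.exe", [], 5000)
        //     .then((r) => console.log(r.timedOut ? "TIME LIMIT EXCEEDED" : `exit code ${r.code}`));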

    Read the article

  • How to optimize this javascript code?

    - by Andrija
    I have a jsp which uses a lot of javascript and it's just not fast enough. I would like to optimize it so first, here's a part of the code: In the jsp I have the initialization: window.onload = function () { formCollection.pageSize.value = "<%= pagingSize%>"; elemCollection = iDom3.Table.all["spis"].XML.DOM; <% if (resultList != null) { %> elementsNumber = <%= resultList.size() %>; <%} else { %> elementsNumber = 0; <% } %> contextPath = "<%= request.getContextPath() %>"; } In my js file I have two types of js functions: // gets the first element and sets it's value to all the other; //the selectSingleNode function is used because I use XSLT transformation //to generate the table _setTehJed = function(){ var resultId = formCollection.elements["idTehJedinice_spis_1"].value; var resultText = formCollection.elements["tehnicka_spis_1"].value; if (resultId != ""){ var counter = 1; while (counter<elementsNumber){ counter++; if(formCollection.elements["idTehJedinice_spis_"+counter] != null){ formCollection.elements["idTehJedinice_spis_"+counter].value=resultId; formCollection.elements["tehnicka_spis_"+counter].value=resultText; } var node=elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+counter+"']/data[@col = 'tehnicka']/title"); node.text=resultText; var node2=elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+counter+"']/data[@col = 'idTehJedinice']/title"); node2.text=resultId; } } } // sets the elements checkbox to checked or unchecked _SelectCheckRokCuvanja = { all : [], Item : function (oItem, sId) { this.all["spis_"+sId] = oItem.value; if (oItem.checked) { elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+sId+"']/data[@col = 'rokCheck']").setAttribute("default", "true"); }else{ elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+sId+"']/data[@col = 'rokCheck']").setAttribute("default", "false"); } } } I've used these tips: http://blogs.msdn.com/b/ie/archive/2006/08/28/728654.aspx http://code.google.com/speed/articles/optimizing-javascript.html but I still think something could be done like defining the functions like this: In the jsp: window.onload = function () { iDom3.DigitalnaArhivaPrihvat.formCollection=document.forms["controller"]; iDom3.DigitalnaArhivaPrihvat.formCollection.pageSize.value = "<%= pagingSize%>"; iDom3.DigitalnaArhivaPrihvat.elemCollection = iDom3.Table.all["spis"].XML.DOM; <% if (resultList != null) { %> iDom3.DigitalnaArhivaPrihvat.elementsNumber = <%= resultList.size() %> <%} else { %> iDom3.DigitalnaArhivaPrihvat.elementsNumber = 0; <% } %> } in the js: iDom3.DigitalnaArhivaPrihvat = { formCollection:null, elemCollection:null, elementsNumber:null, _setTehJed : function(){ var resultId = this.formCollection.elements.idTehJedinice_spis_1.value; var resultText = this.formCollection.elements.tehnicka_spis_1.value; if (resultId != ""){ var counter = 1; while (counter<this.elementsNumber){ counter++; if(this.formCollection.elements["idTehJedinice_spis_"+counter] !== null){ this.formCollection.elements["idTehJedinice_spis_"+counter].value=resultId; this.formCollection.elements["tehnicka_spis_"+counter].value=resultText; } var node=this.elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+counter+"']/data[@col = 'tehnicka']/title"); node.text=resultText; var node2=this.elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+counter+"']/data[@col = 'idTehJedinice']/title"); node2.text=resultId; } } }, _SelectCheckRokCuvanja = { all : [], Item : function (oItem, sId) { 
this.all["spis_"+sId] = oItem.value; if (oItem.checked) { this.elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+sId+"']/data[@col = 'rokCheck']").setAttribute("default", "true"); }else{ this.elemCollection.selectSingleNode("/suite/table/rows/row[@id = 'spis_"+sId+"']/data[@col = 'rokCheck']").setAttribute("default", "false"); } } } but the problem is scoping (if I do it like this, the second function does not execute properly). Any suggestions?

    Read the article

  • C# slowdown while creating a bitmap - calculating distances from a large List of places for each pixel

    - by user576849
    I'm creating a graphic of the glow of lights above a geographic location based upon Walker's Law: Skyglow = 0.01 * Population * DistanceFromCenter^-2.5. I have a CSV file of places with 66,000 records using 5 fields (id, name, population, latitude, longitude), parsed on the FormLoad event and stored in: List<string[]> placeDataList

    Then I set up nested loops to fill in a bitmap using SetPixel. For each pixel on the bitmap, which represents a coordinate on a map (latitude and longitude), the program loops through placeDataList – calculating the distance from that coordinate (pixel) to each place record. The distance (along with population) is used in a calculation to find how much cumulative sky glow is contributed to the coordinate from each place record. So, for every pixel, 66,000 distance calculations must be made. The problem is, this is predictably EXTREMELY slow – on the order of one line of pixels per 30 seconds or so on a 320 pixel wide image. This is unrelated to SetPixel, which I know is also slow, because the speed is similarly slow when adding the distance calculation results to an array.

    I don't actually need to test all 66,000 records for every pixel, only the records within 150 miles (i.e. no skyglow is contributed to a coordinate from a small town 3000 miles away). But to find which records are within 150 miles of my coordinate I would still need to loop through all the records for each pixel. I can't use a smaller number of records because all 66,000 places contribute to skyglow for SOME coordinate in my map as it loops. This seems like a Catch-22, so I know there must be a better method out there. Like I mentioned, the slowdown is related to how many calculations I'm making per pixel, not anything to do with the bitmap. Any suggestions?

        private void fillPixels(int width)
        {
            Color pixelColor;
            int pixel_w = width;
            int pixel_h = (int)Math.Floor((width * 0.424088664));
            Bitmap bmp = new Bitmap(pixel_w, pixel_h);
            for (int i = 0; i < pixel_h; i++)
                for (int j = 0; j < pixel_w; j++)
                {
                    pixelColor = getPixelColor(i, j);
                    bmp.SetPixel(j, i, pixelColor);
                }
            bmp.Save("Nightfall", System.Drawing.Imaging.ImageFormat.Jpeg);
            pictureBox1.Image = bmp;
            MessageBox.Show("Done");
        }

        private Color getPixelColor(int height, int width)
        {
            int c;
            double glow, d, cityLat, cityLon, cityPop;
            double testLat, testLon;
            int size_h = (int)Math.Floor((size_w * 0.424088664));
            testLat = (height * (24.443136 / size_h)) + 24.548874;
            testLon = (width * (57.636853 / size_w)) - 124.640767;
            glow = 0;
            for (int i = 0; i < placeDataList.Count; i++)
            {
                cityPop = Convert.ToDouble(placeDataList[i][2]);
                cityLat = Convert.ToDouble(placeDataList[i][3]);
                cityLon = Convert.ToDouble(placeDataList[i][4]);
                d = distance(testLat, testLon, cityLat, cityLon, "M");
                if (d < 150)
                    glow = glow + (0.01 * cityPop * Math.Pow(d, -2.5));
            }
            if (glow >= 1)
                glow = 1;
            c = (int)Math.Ceiling(glow * 255);
            return Color.FromArgb(c, c, c);
        }
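    (A common way out of this catch-22 is a coarse spatial index: bucket the 66,000 places into a lat/lon grid once, then for each pixel scan only the neighbouring cells, which is all that can possibly lie within 150 miles. A rough TypeScript sketch of the idea follows; the cell size, record shape and distance helper are assumptions for illustration, not the poster's code.)

        // Sketch of a grid index. The cell is chosen wider (in degrees) than 150 miles
        // anywhere on the map (~3.3 degrees of longitude at 49 N), so scanning the 3x3
        // block of cells around a pixel is guaranteed to see every place within range.
        type Place = { pop: number; lat: number; lon: number };

        const CELL_DEG = 4;
        const grid = new Map<string, Place[]>();

        function cellKey(lat: number, lon: number): string {
            return `${Math.floor(lat / CELL_DEG)}:${Math.floor(lon / CELL_DEG)}`;
        }

        // Build once, right after parsing the CSV.
        function buildIndex(places: Place[]): void {
            for (const p of places) {
                const key = cellKey(p.lat, p.lon);
                const bucket = grid.get(key);
                if (bucket) { bucket.push(p); } else { grid.set(key, [p]); }
            }
        }

        // Per pixel: only the places in the surrounding cells are examined.
        function skyglowAt(lat: number, lon: number,
                           distMiles: (p: Place, lat: number, lon: number) => number): number {
            let glow = 0;
            const ci = Math.floor(lat / CELL_DEG);
            const cj = Math.floor(lon / CELL_DEG);
            for (let di = -1; di <= 1; di++) {
                for (let dj = -1; dj <= 1; dj++) {
                    for (const p of grid.get(`${ci + di}:${cj + dj}`) ?? []) {
                        const d = distMiles(p, lat, lon);
                        if (d < 150) { glow += 0.01 * p.pop * Math.pow(d, -2.5); }
                    }
                }
            }
            return Math.min(glow, 1);
        }

    Each pixel then touches only the records in nine cells instead of all 66,000; the cell size can be tuned smaller if the map's latitude range allows it.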

    Read the article

  • CRM 2011 Plugin for CREATE (post-operational): Why is the value of "baseamount" zero in post entity image and target?

    - by Olli
    REFORMULATED QUESTION (Apr 24): I am using the CRM Developer Toolkit for VS2012 to create a CRM2011 plugin. The plugin is registered for the CREATE message of the "Invoice Product" entity. Pipeline-Stage is post-operational, execution is synchronous. I register for a post image that contains baseamount. The toolkit creates an execute function that looks like this: protected void ExecutePostInvoiceProductCreate(LocalPluginContext localContext) { if (localContext == null) { throw new ArgumentNullException("localContext"); } IPluginExecutionContext context = localContext.PluginExecutionContext; Entity postImageEntity = (context.PostEntityImages != null && context.PostEntityImages.Contains(this.postImageAlias)) ? context.PostEntityImages[this.postImageAlias] : null; } Since we are in post operational stage, the value of baseamount in postImageEntity should already be calculated from the user input, right? However, the value of baseamountin the postImageEntity is zero. The same holds true for the value of baseamount in the target entity that I get using the following code: Entity targetEntity = (context.InputParameters != null && context.InputParameters.Contains("Target")) ? (Entity)context.InputParameters["Target"] : null; Using a retrieve request like the one below, I am getting the correct value of baseamount: Entity newlyCreated = service.Retrieve("invoicedetail", targetEntity.Id, new ColumnSet(true)); decimal baseAmount = newlyCreated.GetAttributeValue<Money>("baseamount").Value; The issue does not appear in post operational stage of an update event. I'd be glad to hear your ideas/explanations/suggestions on why this is the case... (Further information: Remote debugging, no isolation mode, plugin stored in database) Original Question: I am working on a plugin for CRM 2011 that is supposed to calculate the amount of tax to be paid when an invoice detail is created. To this end I am trying to get the baseamount of the newly created invoicedetail entity from the post entity image in post operational stage. As far as I understood it, the post entity image is a snapshot of the entity in the database after the new invoice detail has been created. Thus it should contain all properties of the newly created invoice detail. I am getting a "postentityimages" property of the IPluginExecutionContext that contains an entity with the alias I registered ("postImage"). This "postImage" entity contains a key for "baseamount" but its value is 0. Can anybody help me understand why this is the case and what I can do about it? (I also noticed that the postImage does not contain all but only a subset of the entities I registered for.) Here is what the code looks like: protected void ExecutePostInvoiceProductCreate(LocalPluginContext localContext) { if (localContext == null) { throw new ArgumentNullException("localContext"); } // Get PluginExecutionContext to obtain PostEntityImages IPluginExecutionContext context = localContext.PluginExecutionContext; // This works: I get a postImage that is not null. Entity postImage = (context.PostEntityImages != null && context.PostEntityImages.Contains(this.postImageAlias)) ? context.PostEntityImages[this.postImageAlias] : null; // Here is the problem: There is a "baseamount" key in the postImage // but its value is zero! decimal baseAmount = ((Money)postImage["baseamount"]).Value; } ADDITION: Pre and post images for post operational update contain non-zero values for baseamount.

    Read the article

  • Is this an SEO-safe anchor link?

    - by Mayhem
    so... Is this a safe way to use internal links on your site.. By doing this i have the index page generating the usual php content section and handing it to the div element. THE MAIN QUESTION: Will google still index the pages using this method? Common sense tells me it does.. But just double checking and leaving this here as a base example as well if it is. As in. EXAMPLE ONLY PEOPLE The Server Side if (isset($_REQUEST['page'])) {$pageID=$_REQUEST['page'];} else {$pageID="home";} if (isset($_REQUEST['pageMode']) && $_REQUEST['pageMode']=="js") { require "content/".$pageID.".php"; exit; } // ELSE - REST OF WEBSITE WILL BE GENERATED USING THE page VARIABLE The Links <a class='btnMenu' href='?page=home'>Home Page</a> <a class='btnMenu' href='?page=about'>About</a> <a class='btnMenu' href='?page=Services'>Services</a> <a class='btnMenu' href='?page=contact'>Contact</a> The Javascript $(function() { $(".btnMenu").click(function(){return doNav(this);}); }); function doNav(objCaller) { var sPage = $(objCaller).attr("href").substring(6,255); $.get("index.php", { page: sPage, pageMode: 'js'}, function(data) { ("#siteContent").html(data).scrollTop(0); }); return false; } Forgive me if there are any errors, as just copied and pasted from my script then removed a bunch of junk to simplify it as still prototyping/white boarding the project its in. So yes it does look a little nasty at the moment. REASONS WHY: The main reason is bandwidth and speed, This will allow other scripts to run and control the site/application a little better and yes it will need to be locked down with some coding. -- FURTHER EXAMPLE-- INSERT PHP AT TOP <?php // PHP CODE HERE ?> <html> <head> <link rel="stylesheet" type="text/css" href="style.css" /> <script type="text/javascript" src="jquery.js"></script> <script type="text/javascript" src="scripts.js"></script> </head> <body> <div class='siteBody'> <div class='siteHeader'> <?php foreach ($pageList as $key => $value) { if ($pageID == $key) {$btnClass="btnMenuSel";} else {$btnClass="btnMenu";} echo "<a class='$btnClass' href='?page=".$key."'>".$pageList[$key]."</a>"; } ?> </div><div id="siteContent" style='margin-top:10px;'> <?php require "content/".$pageID.".php"; ?> </div><div class='siteFooter'> </div> </div> </body> </html>
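    (The usual way to keep this crawlable is progressive enhancement: leave real href="?page=..." links in the markup so crawlers and no-JS visitors get full pages, and only intercept the click in script to swap in the fragment. A rough TypeScript sketch against the same index.php / pageMode=js endpoint described above; fetch and history.pushState stand in for the jQuery $.get call.)

        // Sketch: enhance the existing links without hiding them from crawlers.
        // A plain request to index.php?page=about still renders the complete page;
        // pageMode=js returns just the content fragment, exactly as in the PHP above.
        document.querySelectorAll<HTMLAnchorElement>("a.btnMenu").forEach((link) => {
            link.addEventListener("click", async (ev) => {
                ev.preventDefault();
                const page = new URL(link.href, location.href).searchParams.get("page") ?? "home";
                const resp = await fetch(`index.php?page=${encodeURIComponent(page)}&pageMode=js`);
                document.querySelector("#siteContent")!.innerHTML = await resp.text();
                history.pushState({ page }, "", link.href);   // address bar keeps a crawlable URL
                window.scrollTo(0, 0);
            });
        });

    Since every link still resolves to a complete page on its own, the pages remain indexable whether or not the script runs.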

    Read the article

  • Why would Linux VM in vSphere ESXi 5.5 show dramatically increased disk i/o latency?

    - by mhucka
    I'm stumped and I hope someone else will recognize the symptoms of this problem. Hardware: new Dell T110 II, dual-core Pentium G860 2.9 GHz, onboard SATA controller, one new 500 GB 7200 RPM cabled hard drive inside the box, other drives inside but not mounted yet. No RAID. Software: fresh CentOS 6.5 virtual machine under VMware ESXi 5.5.0 (build 174 + vSphere Client). 2.5 GB RAM allocated. The disk is how CentOS offered to set it up, namely as a volume inside an LVM Volume Group, except that I skipped having a separate /home and simply have / and /boot. CentOS is patched up, ESXi patched up, latest VMware tools installed in the VM. No users on the system, no services running, no files on the disk but the OS installation. I'm interacting with the VM via the VM virtual console in vSphere Client. Before going further, I wanted to check that I configured things more or less reasonably. I ran the following command as root in a shell on the VM: for i in 1 2 3 4 5 6 7 8 9 10; do dd if=/dev/zero of=/test.img bs=8k count=256k conv=fdatasync done I.e., just repeat the dd command 10 times, which results in printing the transfer rate each time. The results are disturbing. It starts off well: 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 20.451 s, 105 MB/s 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 20.4202 s, 105 MB/s ... but after 7-8 of these, it then prints 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GG) copied, 82.9779 s, 25.9 MB/s 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 84.0396 s, 25.6 MB/s 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 103.42 s, 20.8 MB/s If I wait a significant amount of time, say 30-45 minutes, and run it again, it again goes back to 105 MB/s, and after several rounds (sometimes a few, sometimes 10+), it drops to ~20-25 MB/s again. Plotting the disk latency in vSphere's interface, it shows periods of high disk latency hitting 1.2-1.5 seconds during the times that dd reports the low throughput. (And yes, things get pretty unresponsive while that's happening.) What could be causing this? I'm comfortable that it is not due to the disk failing, because I also had configured two other disks as an additional volume in the same system. At first I thought I did something wrong with that volume, but after commenting the volume out from /etc/fstab and rebooting, and trying the tests on / as shown above, it became clear that the problem is elsewhere. It is probably an ESXi configuration problem, but I'm not very experienced with ESXi. It's probably something stupid, but after trying to figure this out for many hours over multiple days, I can't find the problem, so I hope someone can point me in the right direction. (P.S.: yes, I know this hardware combo won't win any speed awards as a server, and I have reasons for using this low-end hardware and running a single VM, but I think that's besides the point for this question [unless it's actually a hardware problem].) ADDENDUM #1: Reading other answers such as this one made me try adding oflag=direct to dd. However, it makes no difference in the pattern of results: initially the numbers are higher for many rounds, then they drop to 20-25 MB/s. (The initial absolute numbers are in the 50 MB/s range.) ADDENDUM #2: Adding sync ; echo 3 > /proc/sys/vm/drop_caches into the loop does not make a difference at all. 
    ADDENDUM #3: To take out further variables, I now run dd such that the file it creates is larger than the amount of RAM on the system. The new command is dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct. Initial throughput numbers with this version of the command are ~50 MB/s. They drop to 20-25 MB/s when things go south.

    ADDENDUM #4: Here is the output of iostat -d -m -x 1 running in another terminal window while performance is "good" and then again when it's "bad". (While this is going on, I'm running dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct.) First, when things are "good", it shows this: [iostat output while performance is good; not included in this excerpt]. When things go "bad", iostat -d -m -x 1 shows this: [iostat output while performance is bad; not included in this excerpt].

    Read the article

  • HTML 5 <video> tag vs Flash video. What are the pros and cons?

    - by Vilx-
    Seems like the new <video> tag is all the hype these days, especially since Firefox now supports it. News of this are popping up in blogs all over the place, and everyone seems to be excited. But what about? As much as I searched I could not find anything that would make it better than the good old Flash video. In fact, I see only problems with it: It will still be some time before all the browsers start supporting it, and much more time before most people upgrade; Flash is available already and everyone has it; You can couple Flash with whatever fancy UI you want for controlling the playback. I gather that the tag will be controllable as well (via JavaScript probably), but will it be able to go fullscreen? The only two pros for a <video> tag that I can see are: It is more "semantic" - which probably holds no importance to a whole lot of people, including me; It is not dependent on a single commercial 3rd party entity (Adobe) - which I also don't see as a compelling reason to switch, because free players and video converters are already available, and Adobe is not hindering the whole process in any way (it's not in their interests even). So... what's the big deal? Added: OK, so there is one more Pro... maybe. Support for mobile devices. Hard to say though. A number of thoughts race through my head about the subject: How many mobile devices are actually able to decode video at a decent speed anyway, Flash or otherwise? How long until mainstream mobile devices get the <video> support? Even if it is available through updates, how many people actually do that? How many people watch videos on web pages on their mobile phones at all? As for the semantics part - I understand that search engines might be able to detect videos better now, but... what will they do with them anyway? OK, so they know that there is a video in the page. And? They can't index a video! I'd like some more arguments here. Added: Just thought of another Cons. This opens up a whole new area of cross-browser incompatibility. HTML and CSS is quite messy already in this aspect. Flash at least is the same everywhere. But it's enough for at least one major browser vendor to decide against the <video> tag (can anyone say "Internet Explorer"?) and we have a nice new area of hell to explore. Added: A Pro just came in. More competition = more innovation. That's true. Giving Adobe more competition will probably force them to improve Flash in areas it has been lacking so far. Linux seems to be a weak spot for it, cited by many.

    Read the article

  • Normalizing a table

    - by Alex
    I have a legacy table, which I can't change. The values in it can be modified from a legacy application (the application also can't be changed). Due to a lot of access to the table from a new application (new requirement), I'd like to create a temporary table, which would hopefully speed up the queries. The actual requirement is to calculate the number of business days from X to Y. For example, give me all business days from Jan 1st 2001 until Dec 24th 2004. (The table is used to mark which days are off, as different companies may have different days off - it isn't just Saturday + Sunday.) The temporary table would be created from a .NET program each time the user enters the screen for this query (the user may run the query multiple times with different values; the table is created once), so I'd like it to be as fast as possible. The approach below runs in under a second, but I only tested it with a small dataset, and still it takes probably close to half a second, which isn't great for UI - even though it's just the overhead for the first query. The legacy table looks like this: CREATE TABLE [business_days]( [country_code] [char](3) , [state_code] [varchar](4) , [calendar_year] [int] , [calendar_month] [varchar](31) , [calendar_month2] [varchar](31) , [calendar_month3] [varchar](31) , [calendar_month4] [varchar](31) , [calendar_month5] [varchar](31) , [calendar_month6] [varchar](31) , [calendar_month7] [varchar](31) , [calendar_month8] [varchar](31) , [calendar_month9] [varchar](31) , [calendar_month10] [varchar](31) , [calendar_month11] [varchar](31) , [calendar_month12] [varchar](31) , misc. ) Each month has 31 characters, and any day off (Saturday + Sunday + holiday) is marked with X. Each half day is marked with an 'H'. For example, if a month starts on a Thursday, then it will look like (Thursday+Friday workdays, Saturday+Sunday marked with X): '  XX   XX ..' I'd like the new table to look like so: create table #Temp (country varchar(3), state varchar(4), date datetime, hours int) And I'd like to only have rows for days which are off (marked with X or H from the previous query). What I ended up doing so far is this: create a temporary intermediate table that looks like this: create table #Temp_2 (country_code varchar(3), state_code varchar(4), calendar_year int, calendar_month varchar(31), month_code int) To populate it, I have a union which basically unions calendar_month, calendar_month2, calendar_month3, etc. Then I have a loop which loops through all the rows in #Temp_2; after each row is processed, it is removed from #Temp_2. To process the row there is a loop from 1 to 31, and substring(calendar_month, counter, 1) is checked for either X or H, in which case there is an insert into the #Temp table.
[edit added code] Declare @country_code char(3) Declare @state_code varchar(4) Declare @calendar_year int Declare @calendar_month varchar(31) Declare @month_code int Declare @calendar_date datetime Declare @day_code int WHILE EXISTS(SELECT * From #Temp_2) -- where processed = 0) BEGIN Select Top 1 @country_code = t2.country_code, @state_code = t2.state_code, @calendar_year = t2.calendar_year, @calendar_month = t2.calendar_month, @month_code = t2.month_code From #Temp_2 t2 -- where processed = 0 set @day_code = 1 while @day_code <= 31 begin if substring(@calendar_month, @day_code, 1) = 'X' begin set @calendar_date = convert(datetime, (cast(@month_code as varchar) + '/' + cast(@day_code as varchar) + '/' + cast(@calendar_year as varchar))) insert into #Temp (country, state, date, hours) values (@country_code, @state_code, @calendar_date, 8) end if substring(@calendar_month, @day_code, 1) = 'H' begin set @calendar_date = convert(datetime, (cast(@month_code as varchar) + '/' + cast(@day_code as varchar) + '/' + cast(@calendar_year as varchar))) insert into #Temp (country, state, date, hours) values (@country_code, @state_code, @calendar_date, 4) end set @day_code = @day_code + 1 end delete from #Temp_2 where @country_code = country_code AND @state_code = state_code AND @calendar_year = calendar_year AND @calendar_month = calendar_month AND @month_code = month_code --update #Temp_2 set processed = 1 where @country_code = country_code AND @state_code = state_code AND @calendar_year = calendar_year AND @calendar_month = calendar_month AND @month_code = month_code END I am not an expert in SQL, so I'd like to get some input on my approach, and maybe even a much better approach suggestion. After having the temp table, I'm planning to do (dates would be coming from a table): select cast(convert(datetime, ('01/31/2012'), 101) -convert(datetime, ('01/17/2012'), 101) as int) - ((select sum(hours) from #Temp where date between convert(datetime, ('01/17/2012'), 101) and convert(datetime, ('01/31/2012'), 101)) / 8) Besides the solution of normalizing the table, the other solution I implemented for now, is a function which does all this logic of getting the business days by scanning the current table. It runs pretty fast, but I'm hesitant to call a function, if I can instead add a simpler query to get result. (I'm currently trying this on MSSQL, but I would need to do same for Sybase ASE and Oracle)
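    (The unpivot-and-parse step can also be done entirely in the calling program: read each legacy row once, walk the 31-character month strings, and emit one (country, state, date, hours) record per X or H, then bulk-insert those rows into the temp table. A rough TypeScript sketch of the parsing step; the field names follow the legacy table above and the half-day/full-day hour values follow the SQL loop.)

        // Sketch: expand one month column of one legacy row into "day off" records.
        // 'X' marks a full day off (8 hours), 'H' a half day (4 hours); anything else
        // is a working day.  monthNo is 1..12.
        type DayOff = { country: string; state: string; date: Date; hours: number };

        function expandMonth(country: string, state: string, year: number,
                             monthNo: number, monthStr: string): DayOff[] {
            const records: DayOff[] = [];
            for (let day = 1; day <= 31; day++) {
                const mark = monthStr.charAt(day - 1);
                if (mark !== "X" && mark !== "H") { continue; }
                const date = new Date(year, monthNo - 1, day);
                if (date.getMonth() !== monthNo - 1) { continue; }   // skips Feb 30, Apr 31, ...
                records.push({ country, state, date, hours: mark === "X" ? 8 : 4 });
            }
            return records;
        }

    Counting business days between two dates then reduces to the final query in the post: the calendar-day difference minus the summed hours in the range divided by 8.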

    Read the article

  • C++ - getline() keeps reading the same line over and over again for some reason

    - by Jammanuser
    I am wondering WTF my while loop which calls istream& getline ( istream& is, string& str ); keeps reading the same line again. I have the following while loop (nested down several levels of other while loops and if statements) which calls getline, but my output statement which is the first code line in the while loop's block of code tells me it is reading the same line over and over again, which explains why my output file doesn't contain the right data when my program is finished. while (getline(file_handle, buffer_str)) { cout<< buffer_str <<endl; cin.get(); if ((buffer_str.find(';', 0) != string::npos) && (buffer_str.find('\"', 0) != string::npos)) { //we're now at the end of the 'exc' initialiation statement buffer_str.erase(buffer_str.size() - 2, 1); buffer_str += '\n'; for (size_t i = 0; i < pos; i++) { buffer_str += ' '; } buffer_str += "throw(exc);\n"; for (size_t i = 0; i < (pos - 3); i++) { buffer_str += ' '; } buffer_str += '}'; } else if (buffer_str.find(search_str6, 0) != string::npos) { //we're now at the second problem line of the first case buffer_str += " {\n"; output_str += buffer_str; output_str += '\n'; getline(file_handle, buffer_str); //We're now at the beginning of the 'exc' initialiation statement output_str += buffer_str; output_str += '\n'; while (getline(file_handle, buffer_str)) { if ((buffer_str.find(';', 0) != string::npos) && (buffer_str.find('\"', 0) != string::npos)) { //we're now at the end of the 'exc' initialiation statement buffer_str.erase(buffer_str.size() - 2, 1); buffer_str += '\n'; for (size_t i = 0; i < pos; i++) { buffer_str += ' '; } buffer_str += "throw(exc);\n"; for (size_t i = 0; i < (pos - 3); i++) { buffer_str += ' '; } buffer_str += '}'; } output_str += buffer_str; output_str += '\n'; if (buffer_str.find("return", 0) != string::npos) { getline(file_handle, buffer_str); output_str += buffer_str; output_str += '\n'; about_to_break = true; break; //out of this while loop } } } if (about_to_break) { break; //out of the level 3 while loop (execution then goes back up to beginning of level 2 while loop) } output_str += buffer_str; output_str += '\n'; } Because of this problem, my if statement and then my else statement in my loop are not functioning as they should, and it doesn't break out of that loop when it should (though it eventually does break out of it, but I don't know exactly how yet). Anyone have any idea what could be causing this problem?? Thanks in advance.

    Read the article

  • Accessing a broken mdadm raid

    - by CarstenCarsten
    Hi! I used a western digital mybookworld (SOHO NAS storage using Linux) as backup for my Linux box. Suddenly, the mybookworld does not boot up any more. So I opened the box, removed the hard disk and put the hard disk into an external USB HDD case, and connected it to my Linux box. [ 530.640301] usb 2-1: new high speed USB device using ehci_hcd and address 3 [ 530.797630] scsi7 : usb-storage 2-1:1.0 [ 531.794844] scsi 7:0:0:0: Direct-Access WDC WD75 00AAKS-00RBA0 PQ: 0 ANSI: 2 [ 531.796490] sd 7:0:0:0: Attached scsi generic sg3 type 0 [ 531.797966] sd 7:0:0:0: [sdc] 1465149168 512-byte logical blocks: (750 GB/698 GiB) [ 531.800317] sd 7:0:0:0: [sdc] Write Protect is off [ 531.800327] sd 7:0:0:0: [sdc] Mode Sense: 38 00 00 00 [ 531.800333] sd 7:0:0:0: [sdc] Assuming drive cache: write through [ 531.803821] sd 7:0:0:0: [sdc] Assuming drive cache: write through [ 531.803836] sdc: sdc1 sdc2 sdc3 sdc4 [ 531.815831] sd 7:0:0:0: [sdc] Assuming drive cache: write through [ 531.815842] sd 7:0:0:0: [sdc] Attached SCSI disk The dmesg output looks normal, but I was wondering why the hardisk was not mounted at all. And why there are 4 different partitions on it. fdisk showed the following: root@ubuntu:/home/ubuntu# fdisk /dev/sdc WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u'). Command (m for help): p Disk /dev/sdc: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00007c00 Device Boot Start End Blocks Id System /dev/sdc1 4 369 2939895 fd Linux raid autodetect /dev/sdc2 370 382 104422+ fd Linux raid autodetect /dev/sdc3 383 505 987997+ fd Linux raid autodetect /dev/sdc4 506 91201 728515620 fd Linux raid autodetect Oh no! Everything seems to be created as a mdadm software raid. Calling mdadm --examine with the different partitions seems to affirm that. I think the only partition I am interested in, is /dev/sdc4 (because it is the largest). But nevertheless I called mdadm --examine with every partition. 
root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc1 /dev/sdc1: Magic : a92b4efc Version : 00.90.00 UUID : 5626a2d8:070ad992:ef1c8d24:cd8e13e4 Creation Time : Wed Feb 20 00:57:49 2002 Raid Level : raid1 Used Dev Size : 2939776 (2.80 GiB 3.01 GB) Array Size : 2939776 (2.80 GiB 3.01 GB) Raid Devices : 2 Total Devices : 1 Preferred Minor : 1 Update Time : Sun Nov 21 11:05:27 2010 State : clean Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Checksum : 4c90bc55 - correct Events : 16682 Number Major Minor RaidDevice State this 0 8 1 0 active sync /dev/sda1 0 0 8 1 0 active sync /dev/sda1 1 1 0 0 1 faulty removed root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc2 /dev/sdc2: Magic : a92b4efc Version : 00.90.00 UUID : 9734b3ee:2d5af206:05fe3413:585f7f26 Creation Time : Wed Feb 20 00:57:54 2002 Raid Level : raid1 Used Dev Size : 104320 (101.89 MiB 106.82 MB) Array Size : 104320 (101.89 MiB 106.82 MB) Raid Devices : 2 Total Devices : 1 Preferred Minor : 2 Update Time : Wed Oct 27 20:19:08 2010 State : clean Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Checksum : 55560b40 - correct Events : 9884 Number Major Minor RaidDevice State this 0 8 2 0 active sync /dev/sda2 0 0 8 2 0 active sync /dev/sda2 1 1 0 0 1 faulty removed root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc3 /dev/sdc3: Magic : a92b4efc Version : 00.90.00 UUID : 08f30b4f:91cca15d:2332bfef:48e67824 Creation Time : Wed Feb 20 00:57:54 2002 Raid Level : raid1 Used Dev Size : 987904 (964.91 MiB 1011.61 MB) Array Size : 987904 (964.91 MiB 1011.61 MB) Raid Devices : 2 Total Devices : 1 Preferred Minor : 3 Update Time : Sun Nov 21 11:05:27 2010 State : clean Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Checksum : 39717874 - correct Events : 73678 Number Major Minor RaidDevice State this 0 8 3 0 active sync 0 0 8 3 0 active sync 1 1 0 0 1 faulty removed root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc4 /dev/sdc4: Magic : a92b4efc Version : 00.90.00 UUID : febb75ca:e9d1ce18:f14cc006:f759419a Creation Time : Wed Feb 20 00:57:55 2002 Raid Level : raid1 Used Dev Size : 728515520 (694.77 GiB 746.00 GB) Array Size : 728515520 (694.77 GiB 746.00 GB) Raid Devices : 2 Total Devices : 1 Preferred Minor : 4 Update Time : Sun Nov 21 11:05:27 2010 State : clean Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Checksum : 2f36a392 - correct Events : 519320 Number Major Minor RaidDevice State this 0 8 4 0 active sync 0 0 8 4 0 active sync 1 1 0 0 1 faulty removed If I read the output correctly everything was removed, because it was faulty. Is there ANY way to see the contents of the largest partition? Or seeing somehow which files are broken? I see that everything is raid1 which is only mirroring, so this should be a normal partition. I am anxious to do anything with mdadm, in fear that I destroy the data on the hard disk. I would be very thankful for any help.

    Read the article

  • Generating strongly biased random numbers for tests

    - by nobody
    I want to run tests with randomized inputs and need to generate 'sensible' random numbers, that is, numbers that match well enough to pass the tested function's preconditions, but hopefully wreak havoc deeper inside its code. math.random() (I'm using Lua) produces uniformly distributed random numbers. Scaling these up will give far more big numbers than small numbers, and there will be very few integers. I would like to skew the random numbers (or generate new ones using the old function as a randomness source) in a way that strongly favors 'simple' numbers, but will still cover the whole range, i.e. extending up to positive/negative infinity (or ±1e309 for double). This means: numbers up to, say, ten should be most common; integers should be more common than fractions; numbers ending in 0.5 should be the most common fractions, followed by 0.25 and 0.75; then 0.125, and so on.

    A different description: fix a base probability x such that the probabilities sum to one, and define the probability of a number n as x^k, where k is the generation in which n is constructed as a surreal number [1]. That assigns x to 0, x^2 to -1 and +1, x^3 to -2, -1/2, +1/2 and +2, and so on. This gives a nice description of something close to what I want (it skews a bit too much), but is near-unusable for computing random numbers. The resulting distribution is nowhere continuous (it's fractal!), I'm not sure how to determine the base probability x (I think for infinite precision it would be zero), and computing numbers based on this by iteration is awfully slow (spending near-infinite time to construct large numbers).

    Does anyone know of a simple approximation that, given a uniformly distributed randomness source, produces random numbers very roughly distributed as described above? I would like to run thousands of randomized tests; quantity/speed is more important than quality. Still, better numbers mean fewer inputs get rejected. Lua has a JIT, so performance can't be reasonably predicted. Jumps based on randomness will break every prediction, and many calls to math.random() will be slow, too. This means a closed formula will be better than an iterative or recursive one.

    [1] Wikipedia has an article on surreal numbers, with a nice picture. A surreal number is a pair of two surreal numbers, i.e. x := {n|m}, and its value is the number in the middle of the pair, i.e. (for finite numbers) {n|m} = (n+m)/2 (as rational). If one side of the pair is empty, that's interpreted as increment (or decrement, if the right is empty) by one. If both sides are empty, that's zero. Initially, there are no numbers, so the only number one can build is 0 := { | }. In generation two one can build the numbers {0| } =: 1 and { |0} =: -1; in three we get {1| } =: 2, {|1} =: -2, {0|1} =: 1/2 and {-1|0} =: -1/2 (plus some more complex representations of known numbers, e.g. {-1|1} = 0). Note that e.g. 1/3 is never generated by finite numbers because it is an infinite fraction – the same goes for floats, 1/3 is never represented exactly.
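    (One cheap approximation, sketched below in TypeScript since it is only a few lines to port back to Lua: draw a "generation" k from a geometric distribution, then spend those k bits on a small integer part and a few dyadic fraction bits. Small integers come out most often, then halves, then quarters, and large or finely-fractional values become exponentially rarer. The decay constant p and the even split of bits are tuning choices, not part of the question.)

        // Minimal sketch: numbers needing k "bits of complexity" appear with
        // probability roughly (1 - p) * p^k, echoing the surreal-generation idea above.
        function biasedRandom(p: number = 0.85): number {
            // 1. geometric draw: how much complexity this number is allowed
            let k = 0;
            while (Math.random() < p) { k += 1; }

            // 2. split the budget between magnitude and fractional precision
            const intBits = Math.floor(k / 2);
            const fracBits = k - intBits;

            const intPart = Math.floor(Math.random() * Math.pow(2, intBits)); // 0 .. 2^intBits - 1
            let frac = 0;
            for (let b = 1; b <= fracBits; b += 1) {
                if (Math.random() < 0.5) { frac += Math.pow(2, -b); }          // maybe add 1/2, 1/4, ...
            }

            return (Math.random() < 0.5 ? -1 : 1) * (intPart + frac);
        }

        // Typical draws: 0, 1, -2, 0.5, -3.25, with occasional larger or more
        // finely-fractional outliers as the geometric draw runs long.

    It is still a loop rather than the closed formula asked for, but the expected number of random() calls is small and bounded (roughly 1/(1-p) plus a few), and the tail weight, i.e. how often genuinely huge magnitudes appear, can be pushed up by raising p or by squaring k before splitting it.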

    Read the article

  • Unable to get my master & details gridview to work.

    - by Javier
    I'm unable to get this to work. I'm very new at programming and would appreciate any help on this. <%@ Page Language="C#" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <script runat="server"> protected void Page_Load(object sender, EventArgs e) { } protected void DataGridSqlDataSource_Selecting(object sender, SqlDataSourceSelectingEventArgs e) { } </script> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Untitled Page</title> </head> <body> <form id="form1" runat="server"> <div> <asp:SqlDataSource ID="DataGrid2SqlDataSource" runat="server" ConnectionString="<%$ ConnectionStrings:JobPostings1ConnectionString %>" SelectCommand="SELECT [Jobs_PK], [Position_Title], [Educ_Level], [Grade], [JP_Description], [Job_Status], [Position_ID] FROM [Jobs]" FilterExpression="Jobs_PK='@Jobs_PK'"> <filterparameters> <asp:ControlParameter Name="Jobs_PK" ControlId="GridView1" PropertyName="SelectedValue" /> </filterparameters> </asp:SqlDataSource> <asp:SqlDataSource ID="DataGridSqlDataSource" runat="server" ConnectionString="<%$ ConnectionStrings:JobPostings1ConnectionString %>" SelectCommand="SELECT [Position_Title], [Jobs_PK] FROM [Jobs]" onselecting="DataGridSqlDataSource_Selecting"> </asp:SqlDataSource> <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataKeyNames="Jobs_PK" DataSourceID="DataGridSqlDataSource" AllowPaging="True" AutoGenerateSelectButton="True" SelectedIndex="0" Width="100px"> <Columns> <asp:BoundField DataField="Position_Title" HeaderText="Position_Title" SortExpression="Position_Title" /> <asp:BoundField DataField="Jobs_PK" HeaderText="Jobs_PK" InsertVisible="False" ReadOnly="True" SortExpression="Jobs_PK" /> </Columns> </asp:GridView> <br /> <asp:DetailsView ID="DetailsView1" runat="server" AutoGenerateRows="False" DataKeyNames="Jobs_PK" DataSourceID="DataGrid2SqlDataSource" Height="50px" Width="125px"> <Fields> <asp:BoundField DataField="Jobs_PK" HeaderText="Jobs_PK" InsertVisible="False" ReadOnly="True" SortExpression="Jobs_PK" /> <asp:BoundField DataField="Position_Title" HeaderText="Position_Title" SortExpression="Position_Title" /> <asp:BoundField DataField="Educ_Level" HeaderText="Educ_Level" SortExpression="Educ_Level" /> <asp:BoundField DataField="Grade" HeaderText="Grade" SortExpression="Grade" /> <asp:BoundField DataField="JP_Description" HeaderText="JP_Description" SortExpression="JP_Description" /> <asp:BoundField DataField="Job_Status" HeaderText="Job_Status" SortExpression="Job_Status" /> <asp:BoundField DataField="Position_ID" HeaderText="Position_ID" SortExpression="Position_ID" /> </Fields> </asp:DetailsView> </div> </form> </body> error message: Cannot perform '=' operation on System.Int32 and System.String. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.EvaluateException: Cannot perform '=' operation on System.Int32 and System.String.

    Read the article

  • How to prevent DIVs from sliding over each other

    - by Haghpanah
    I’m going to use DIV-based layout instead of table-based to reduce amount of markups and speed up page loading, however I’ve found it too much tricky as I’m not CSS guru. I use following CSS class to simulate rows of a table containing one column for label and one for textbox. .FormItem { margin-left: auto; margin-right: auto; width: 604px; min-height: 36px; } .ItemLabel { float: left; width: 120px; padding: 3px 1px 1px 1px; text-align: right; } .ItemTextBox { float: right; width: 480px; padding: 1px 1px 1px 1px; text-align: left; } , <div class="FormItem"> <div class="ItemLabel"> <asp:Label ID="lblName" runat="server" Text="Name :"></asp:Label> </div> <div class="ItemTextBox"> <asp:TextBox ID="txtName" runat="server"></asp:TextBox> <p><span>User Name</span></p> </div> </div> <div class="FormItem"> <div class="ItemLabel"> <asp:Label ID="lblComments" runat="server" Text="Comments :"></asp:Label> </div> <div class="ItemTextBox"> <asp:TextBox ID="txtComments" runat="server"></asp:TextBox> <p><span>(optional)Comments</span></p> </div> </div> These styles work fine if the height of ItemData DIVs are less than or equal to FormItem DIVs min-height. If ItemData DIVs height gets more than FormItem height then ItemText DIVs start sliding over FormItem DIVs to and ItemText and ItemData are no longer aligned. For example the following markups… <div class="FormItem"> <div class="ItemLabel"> <asp:Label ID="lblName" runat="server" Text="Name :"></asp:Label> </div> <div class="ItemTextBox"> <asp:TextBox ID="txtName" runat="server"></asp:TextBox> <p><span>User Name</span></p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> </div> </div> <div class="FormLabel"> <div class="ItemText"> <asp:Label ID="lblComments" runat="server" Text="Comments :"></asp:Label> </div> <div class="ItemTextBox"> <asp:TextBox ID="txtComments" runat="server"></asp:TextBox> <p><span>(optional)Comments</span></p> </div> </div> I've tried several CSS attributes such as; position, float, clear… but I can not correct the problem. I’ll be appreciated for any help.

    Read the article

  • Replace HTML element with data in JavaScript

    - by Ultimate
    I trying to auto increment the serial number when row increases and automatically readjust the numbering order when a row gets deleted in javascript. For that I am using the following clone method for doing my task. Rest of the thing is working correct except its not increasing the srno because its creating the clone of it. Following is my code for this: function addCloneRow(obj) { if(obj) { var tBody = obj.parentNode.parentNode.parentNode; var trTable = tBody.getElementsByTagName("tr")[1]; var trClone = trTable.cloneNode(true); if(trClone) { var txt = trClone.getElementsByTagName("input"); var srno = trClone.getElementsByTagName("span"); var dd = trClone.getElementsByTagName("select"); text = tBody.getElementsByTagName("tr").length; alert(text) //here i am getting the srno in increasing order //I tried something like following but not working //var ele = srno.replace(document.createElement("h1"), srno); //alert(ele); for(var i=0; i<dd.length; i++) { dd[i].options[0].selected=true; var nm = dd[i].name; var nNm = nm.substring((nm.indexOf("_")+1),nm.indexOf("[")); dd[i].name = nNm+"[]"; } for(var j=0; j<txt.length; j++) { var nm = txt[j].name; var nNm = nm.substring((nm.indexOf("_")+1),nm.indexOf("[")); txt[j].name = nNm+"[]"; if(txt[j].type == "hidden"){ txt[j].value = "0"; }else if(txt[j].type == "text") { txt[j].value = ""; }else if(txt[j].type == "checkbox") { txt[j].checked = false; } } for(var j=0; j<txt.length; j++) { var nm = txt[j].name; var nNm = nm.substring((nm.indexOf("_")+1),nm.indexOf("[")); txt[j].name = nNm+"[]"; if(txt[j].type == "hidden"){ txt[j].value = "0"; }else if(txt[j].type == "text") { txt[j].value = ""; }else if(txt[j].type == "checkbox") { txt[j].checked = false; } } tBody.insertBefore(trClone,tBody.childNodes[1]); } } } Following is my html : <table id="step_details" style="display:none;"> <tr> <th width="5">#</th> <th width="45%">Step details</th> <th>Expected Results</th> <th width="25">Execution</th> <th><img src="gui/themes/default/images/ico_add.gif" onclick="addCloneRow(this);"/></th> </tr> <tr> <td><span>1</span></td> <td><textArea name="step_details[]"></textArea></td> <td><textArea name="expected_results[]"></textArea></td> <td><select onchange="content_modified = true" name="exec_type[]"> <option selected="selected" value="1" label="Manual">Manual</option> <option value="2" label="Automated">Automated</option> </select> </td> <td><img src="gui/themes/default/images/ico_del.gif" onclick="removeCloneRow(this);"/></td> </tr> </table> I want to change the srno. of span element dynamically after increment and decrement on it. Need help thanks
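    (One way to keep the numbering consistent is to not touch the clone's number at all and instead renumber every row's span after each add or delete. A rough sketch in TypeScript / plain DOM; it assumes the step_details table and the first-cell span shown in the markup above.)

        // Sketch: walk the table's rows and rewrite the serial-number span in each one.
        // Call this at the end of addCloneRow(...) and removeCloneRow(...).
        function renumberRows(): void {
            const rows = document.querySelectorAll<HTMLTableRowElement>("#step_details tr");
            let srno = 0;
            rows.forEach((row) => {
                const span = row.querySelector<HTMLSpanElement>("td:first-child span");
                if (!span) { return; }          // skip the header row, which has no span
                srno += 1;
                span.textContent = String(srno);
            });
        }

    Because the numbers are recomputed from the current DOM order, cloning and deleting rows can no longer leave gaps or duplicates, which is exactly the problem the clone-based approach runs into.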

    Read the article

  • Objective-C vs JavaScript loop performance

    - by micadelli
    I have a PhoneGap mobile application that I need to generate an array of match combinations. In JavaScript side, the code hanged pretty soon when the array of which the combinations are generated from got a bit bigger. So, I thought I'll make a plugin to generate the combinations, passing the array of javascript objects to native side and loop it there. To my surprise the following codes executes in 150 ms (JavaScript) whereas in native side (Objective-C) it takes ~1000 ms. Does anyone know any tips for speeding up those executing times? When players exceeds 10, i.e. the length of the array of teams equals 252 it really gets slow. Those execution times mentioned above are for 10 players / 252 teams. Here's the JavaScript code: for (i = 0; i < GAME.teams.length; i += 1) { for (j = i + 1; j < GAME.teams.length; j += 1) { t1 = GAME.teams[i]; t2 = GAME.teams[j]; if ((t1.mask & t2.mask) === 0) { GAME.matches.push({ Team1: t1, Team2: t2 }); } } } ... and here's the native code: NSArray *teams = [[NSArray alloc] initWithArray: [options objectForKey:@"teams"]]; NSMutableArray *t = [[NSMutableArray alloc] init]; int mask_t1; int mask_t2; for (NSInteger i = 0; i < [teams count]; i++) { for (NSInteger j = i + 1; j < [teams count]; j++) { mask_t1 = [[[teams objectAtIndex:i] objectForKey:@"mask"] intValue]; mask_t2 = [[[teams objectAtIndex:j] objectForKey:@"mask"] intValue]; if ((mask_t1 & mask_t2) == 0) { [t insertObject:[teams objectAtIndex:i] atIndex:0]; [t insertObject:[teams objectAtIndex:j] atIndex:1]; /* NSArray *newCombination = [[NSArray alloc] initWithObjects: [teams objectAtIndex:i], [teams objectAtIndex:j], nil]; */ [combinations addObject:t]; } } } ... the array in question (GAME.teams) looks like this: { count = 2; full = 1; list = ( { index = 0; mask = 1; name = A; score = 0; }, { index = 1; mask = 2; name = B; score = 0; } ); mask = 3; name = A; }, { count = 2; full = 1; list = ( { index = 0; mask = 1; name = A; score = 0; }, { index = 2; mask = 4; name = C; score = 0; } ); mask = 5; name = A; },
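    (Independent of the language, most of the cost in the inner loop above is repeated per-pair lookups: objectForKey: / intValue on every iteration in the Objective-C version, and property lookups in the JavaScript one. Pulling the masks out into a flat array once makes the pairwise test a plain integer AND. A rough TypeScript sketch of that shape; the Team type is an assumption.)

        // Sketch: hoist the per-team mask lookups out of the O(n^2) inner loop.
        type Team = { mask: number; name: string };

        function findMatches(teams: Team[]): Array<[Team, Team]> {
            const n = teams.length;
            const masks = new Int32Array(n);            // one lookup per team, done once
            for (let i = 0; i < n; i++) { masks[i] = teams[i].mask; }

            const matches: Array<[Team, Team]> = [];
            for (let i = 0; i < n; i++) {
                for (let j = i + 1; j < n; j++) {
                    if ((masks[i] & masks[j]) === 0) {  // disjoint teams form a valid match
                        matches.push([teams[i], teams[j]]);
                    }
                }
            }
            return matches;
        }

    In the Objective-C listing there is also a correctness issue worth noting: the same NSMutableArray t is reused and re-filled for every pair before being added to combinations, so each accepted pair needs its own freshly allocated array (as in the commented-out newCombination lines).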

    Read the article

  • Get backreference values and modify these values

    - by roasted
    Could you please explain why im not able to get values of backreferences from a matched regex result and apply it some modification before effective replacement? The expected result is replacing for example string ".coord('X','Y')" by "X * Y". But if X to some value, divide this value by 2 and then use this new value in replacement. Here the code im currently testing: See /*>>1<<*/ & /*>>2<<*/ & /*>>3<<*/, this is where im stuck! I would like to be able to apply modification on backrefrences before replacement depending of backreferences values. Difference between /*>>2<<*/ & /*>>3<<*/ is just the self call anonymous function param The method /*>>2<<*/ is the expected working solution as i can understand it. But strangely, the replacement is not working correctly, replacing by alias $1 * $2 and not by value...? You can test the jsfiddle //string to test ".coord('125','255')" //array of regex pattern and replacement //just one for the example //for this example, pattern matching alphanumerics is not necessary (only decimal in coord) but keep it as it var regexes = [ //FORMAT is array of [PATTERN,REPLACEMENT] /*.coord("X","Y")*/ [/\.coord\(['"]([\w]+)['"],['"]?([\w:\.\\]+)['"]?\)/g, '$1 * $2'] ]; function testReg(inputText, $output) { //using regex for (var i = 0; i < regexes.length; i++) { /*==>**1**/ //this one works as usual but dont let me get backreferences values $output.val(inputText.replace(regexes[i][0], regexes[i][2])); /*==>**2**/ //this one should works as i understand it $output.val(inputText.replace(regexes[i][0], function(match, $1, $2, $3, $4) { $1 = checkReplace(match, $1, $2, $3, $4); //here want using $1 modified value in replacement return regexes[i][3]; })); /*==>**3**/ //this one is just a test by self call anonymous function $output.val(inputText.replace(regexes[i][0], function(match, $1, $2, $3, $4) { $1 = checkReplace(match, $1, $2, $3, $4); //here want using $1 modified value in replacement return regexes[i][4]; }())); inputText = $output.val(); } } function checkReplace(match, $1, $2, $3, $4) { console.log(match + ':::' + $1 + ':::' + $2 + ':::' + $3 + ':::' + $4); //HERE i should be able if lets say $1 > 200 divide it by 2 //then returning $1 value if($1 > 200) $1 = parseInt($1 / 2); return $1; }? Sure I'm missing something, but cannot get it! Thanks for your help, regards. EDIT WORKING METHOD: Finally get it, as mentionned by Eric: The key thing is that the function returns the literal text to substitute, not a string which is parsed for backreferences.?? 
JSFIDDLE So complete working code: (please note as pattern replacement will change for each matched pattern and optimisation of speed code is not an issue here, i will keep it like that) $('#btn').click(function() { testReg($('#input').val(), $('#output')); }); //array of regex pattern and replacement //just one for the example var regexes = [ //FORMAT is array of [PATTERN,REPLACEMENT] /*.coord("X","Y")*/ [/\.coord\(['"]([\w]+)['"],['"]?([\w:\.\\]+)['"]?\)/g, '$1 * $2'] ]; function testReg(inputText, $output) { //using regex for (var i = 0; i < regexes.length; i++) { $output.val(inputText.replace(regexes[i][0], function(match, $1, $2, $3, $4) { var checkedValues = checkReplace(match, $1, $2, $3, $4); $1 = checkedValues[0]; $2 = checkedValues[1]; regexes[i][1] = regexes[i][1].replace('$1', $1).replace('$2', $2); return regexes[i][1]; })); inputText = $output.val(); } } function checkReplace(match, $1, $2, $3, $4) { console.log(match + ':::' + $1 + ':::' + $2 + ':::' + $3 + ':::' + $4); if ($1 > 200) $1 = parseInt($1 / 2); if ($2 > 200) $2 = parseInt($2 / 2); return [$1,$2]; }
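    (The same point in miniature: when String.replace is given a function, the captured groups arrive as arguments and whatever the function returns is inserted literally, so there is no need to patch '$1' / '$2' into the replacement string by hand. A short TypeScript sketch against the same .coord pattern; the halving rule mirrors checkReplace above.)

        // Sketch: a function replacer receives the backreferences and returns the
        // literal replacement text, so the adjusted values can be formatted directly.
        const coordPattern = /\.coord\(['"](\w+)['"],\s*['"]?([\w:.\\]+)['"]?\)/g;

        function rewriteCoords(input: string): string {
            return input.replace(coordPattern, (_match, x: string, y: string) => {
                let xv = parseInt(x, 10);
                let yv = parseInt(y, 10);
                if (xv > 200) { xv = Math.floor(xv / 2); }   // same rule as checkReplace()
                if (yv > 200) { yv = Math.floor(yv / 2); }
                return `${xv} * ${yv}`;
            });
        }

        // rewriteCoords(".coord('125','255')")  ->  "125 * 127"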

    Read the article

  • Having trouble with multiple jQuery libraries

    - by user3716971
    I've seen the posts about the no conflict but I'm not very code savvy and can't figure it out alone. I'm having trouble making two libraries work together. At the top I have the 1.9.1 library which controls a news ticker, and a carousel. Near the bottom there is a library 1.6.1, which controls a Dribbble feed. If I remove 1.6.1 everything but the dribbble feed works, and if I remove the 1.9.1 the dribbble feed is the only thing that works. I uploaded the website for you guys to check out. If you could edit my code to make it work that would be amazing, I don't have much knowledge of jquery. This version has a working dribbble feed at the very bottom http://michaelcullenbenson.com/MichaelCullenBenson.com/index.html and this version has a broken feed and everything else works. http://michaelcullenbenson.com/MichaelCullenBenson.com/index2.html Help would be AMAZING as the dribbble feed is the last element I'm trying to finish on my homepage and I'll be able to move on. <script type="text/javascript" src="js/jquery-1.9.1.min.js"></script> <script type="text/javascript" src="js/jquery.innerfade.js"></script> <script type="text/javascript"> $(document).ready( function(){ $('#news').innerfade({ animationtype: 'slide', speed: 600, timeout: 6000, type: 'random', containerheight: '1em' }); }); </script> <!-- Include all compiled plugins (below), or include individual files as needed --> <script src="utilcarousel-files/utilcarousel/jquery.utilcarousel.min.js"></script> <script src="utilcarousel-files/magnific-popup/jquery.magnific-popup.js"></script> <script src="js/responsive-nav.js"></script> <script> $(function() { $('#fullwidth').utilCarousel({ breakPoints : [[600, 1], [800, 2], [1000, 3], [1300, 4],], mouseWheel : false, rewind : true, autoPlay : true, pagination : false }); $('#fullwidth2').utilCarousel({ breakPoints : [[600, 1], [800, 2], [1000, 3], [1300, 4],], mouseWheel : false, rewind : true, autoPlay : true, pagination : false }); }); </script> <script> $(document).ready(function() { var movementStrength = 25; var height = movementStrength / $(window).height(); var width = movementStrength / $(window).width(); $("#aboutarea").mousemove(function(e){ var pageX = e.pageX - ($(window).width() / 2); var pageY = e.pageY - ($(window).height() / 2); var newvalueX = width * pageX * -1 - 25; var newvalueY = height * pageY * -1 - 50; $('#aboutarea').css("background-position", newvalueX+"px "+newvalueY+"px"); }); }); </script> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.6.1/jquery.min.js" type="text/javascript"></script> <script type="text/javascript" src="dribbble.js"></script> <script type="text/javascript"> $(function () { $('#user').dribbble({ player: 'MCBDesign', total: 1 }); }); </script>
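    (The standard tool for this is jQuery.noConflict(true): load the old 1.6.1 copy and the plugin that needs it first, park that copy under its own name, and then let 1.9.1 take over $ and jQuery for everything else. A sketch of the load order and the glue code follows; it assumes dribbble.js really does require the old version rather than simply needing a small update.)

        // Load order sketch (script tags):
        //   1. //ajax.googleapis.com/ajax/libs/jquery/1.6.1/jquery.min.js
        //   2. dribbble.js                      (registers its plugin on the 1.6.1 copy)
        //   3. the snippet below                (hands $ / jQuery back for 1.9.1 to claim)
        //   4. js/jquery-1.9.1.min.js
        //   5. jquery.innerfade.js, utilcarousel, etc.  (register on 1.9.1 as before)
        declare const jQuery: any;

        // Step 3: keep a private handle on the old version and free the globals.
        const jq161 = jQuery.noConflict(true);

        // Later, after everything has loaded: the feed uses the old copy explicitly...
        jq161(function () {
            jq161("#user").dribbble({ player: "MCBDesign", total: 1 });
        });
        // ...while the ticker, carousel and parallax code keep using $(...) from 1.9.1.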

    Read the article

  • ASP.NET MVC 3 - New Features

    - by imran_ku07
    Introduction:          ASP.NET MVC 3 just released by ASP.NET MVC team which includes some new features, some changes, some improvements and bug fixes. In this article, I will show you the new features of ASP.NET MVC 3. This will help you to get started using the new features of ASP.NET MVC 3. Full details of this announcement is available at Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix.   Description:       New Razor View Engine:              Razor view engine is one of the most coolest new feature in ASP.NET MVC 3. Razor is speeding things up just a little bit more. It is much smaller and lighter in size. Also it is very easy to learn. You can say ' write less, do more '. You can get start and learn more about Razor at Introducing “Razor” – a new view engine for ASP.NET.         Granular Request Validation:             Another biggest new feature in ASP.NET MVC 3 is Granular Request Validation. Default request validator will throw an exception when he see < followed by an exclamation(like <!) or < followed by the letters a through z(like <s) or & followed by a pound sign(like &#123) as a part of querystring, posted form, headers and cookie collection. In previous versions of ASP.NET MVC, you can control request validation using ValidateInputAttriubte. In ASP.NET MVC 3 you can control request validation at Model level by annotating your model properties with a new attribute called AllowHtmlAttribute. For details see Granular Request Validation in ASP.NET MVC 3.       Sessionless Controller Support:             Sessionless Controller is another great new feature in ASP.NET MVC 3. With Sessionless Controller you can easily control your session behavior for controllers. For example, you can make your HomeController's Session as Disabled or ReadOnly, allowing concurrent request execution for single user. For details see Concurrent Requests In ASP.NET MVC and HowTo: Sessionless Controller in MVC3 – what & and why?.       Unobtrusive Ajax and  Unobtrusive Client Side Validation is Supported:             Another cool new feature in ASP.NET MVC 3 is support for Unobtrusive Ajax and Unobtrusive Client Side Validation.  This feature allows separation of responsibilities within your web application by separating your html with your script. For details see Unobtrusive Ajax in ASP.NET MVC 3 and Unobtrusive Client Validation in ASP.NET MVC 3.       Dependency Resolver:             Dependency Resolver is another great feature of ASP.NET MVC 3. It allows you to register a dependency resolver that will be used by the framework. With this approach your application will not become tightly coupled and the dependency will be injected at run time. For details see ASP.NET MVC 3 Service Location.       New Helper Methods:             ASP.NET MVC 3 includes some helper methods of ASP.NET Web Pages technology that are used for common functionality. These helper methods includes: Chart, Crypto, WebGrid, WebImage and WebMail. For details of these helper methods, please see ASP.NET MVC 3 Release Notes. For using other helper methods of ASP.NET Web Pages see Using ASP.NET Web Pages Helpers in ASP.NET MVC.       Child Action Output Caching:             ASP.NET MVC 3 also includes another feature called Child Action Output Caching. This allows you to cache only a portion of the response when you are using Html.RenderAction or Html.Action. This cache can be varied by action name, action method signature and action method parameter values. For details see this.     
RemoteAttribute: ASP.NET MVC 3 allows you to validate a form field by making a remote server call through Ajax. This makes it very easy to perform remote validation on the client side and quickly give feedback to the user. For details see How to: Implement Remote Validation in ASP.NET MVC.

CompareAttribute: ASP.NET MVC 3 includes a new validation attribute called CompareAttribute. CompareAttribute allows you to compare the values of two different properties of a model. For details see CompareAttribute in ASP.NET MVC 3.

Miscellaneous New Features:

ASP.NET MVC 2 includes FormValueProvider, QueryStringValueProvider, RouteDataValueProvider and HttpFileCollectionValueProvider. ASP.NET MVC 3 adds two additional value providers, ChildActionValueProvider and JsonValueProvider (JsonValueProvider does not exist as a physical class). ChildActionValueProvider is used when you issue a child request using the Html.Action and/or Html.RenderAction methods, so that your explicit parameter values in Html.Action and/or Html.RenderAction always take precedence over other value providers. JsonValueProvider is used to model-bind JSON data. For details see Sending JSON to an ASP.NET MVC Action Method Argument.

In ASP.NET MVC 3, a new property named FileExtensions was added to the VirtualPathProviderViewEngine class. This property is used when looking up a view by path (and not by name), so that only views with a file extension contained in the list specified by this property are considered. For details see the VirtualPathProviderViewEngine.FileExtensions property.

The ASP.NET MVC 3 installation package also includes the NuGet Package Manager, which is automatically installed when you install ASP.NET MVC 3. NuGet makes it easy to install and update open source libraries and tools in Visual Studio. See this for details.

In ASP.NET MVC 2, client-side validation does not trigger for overridden model properties. For example, if you have a model that contains some overridden properties, client-side validation will not trigger for those properties in ASP.NET MVC 2, but it will work in ASP.NET MVC 3.

Client-side validation is not supported for the StringLengthAttribute.MinimumLength property in ASP.NET MVC 2. In ASP.NET MVC 3, client-side validation works for the StringLengthAttribute.MinimumLength property.

ASP.NET MVC 3 includes new action results like HttpUnauthorizedResult, HttpNotFoundResult and HttpStatusCodeResult.

ASP.NET MVC 3 includes some new overloads of the LabelFor and LabelForModel methods. For details see LabelExtensions.LabelForModel and LabelExtensions.LabelFor.

In ASP.NET MVC 3, IControllerFactory includes a new method, GetControllerSessionBehavior, which is used to get a controller's session behavior. For details see the IControllerFactory.GetControllerSessionBehavior method.

In ASP.NET MVC 3, the Controller class includes a new property, ViewBag, which is of type dynamic. This property allows you to access the ViewData dictionary using C# 4.0 dynamic features. For details see the ControllerBase.ViewBag property.

ModelMetadata includes a property, AdditionalValues, which is of type Dictionary. In ASP.NET MVC 3 you can populate this property using AdditionalMetadataAttribute. For details see the AdditionalMetadataAttribute class.

In ASP.NET MVC 3 you can also use MvcScaffolding to scaffold your views and controllers.
For details see Scaffold your ASP.NET MVC 3 project with the MvcScaffolding package.

If you want to convert your application from ASP.NET MVC 2 to ASP.NET MVC 3, there is an excellent tool that automatically converts an ASP.NET MVC 2 application to an ASP.NET MVC 3 application. For details see the MVC 3 Project Upgrade Tool.

In ASP.NET MVC 2 DisplayAttribute is not supported, but in ASP.NET MVC 3 DisplayAttribute works properly.

ASP.NET MVC 3 also supports model-level validation via the new IValidatableObject interface.

ASP.NET MVC 3 includes a new helper method, Html.Raw, which allows you to display unencoded HTML.

Summary: In this article I showed you the new features of ASP.NET MVC 3, which will help you a lot when you start using ASP.NET MVC 3. I have also provided links where you can find further details. Hopefully you will enjoy this article too.

    Read the article

  • Solution: Testing Web Services with MSTest on Team Build

    - by Martin Hinshelwood
    Guess what. About 20 minutes after I fixed the build, Allan broke it again! Update: 4th March 2010 – After having huge problems getting this working I read Billy Wang’s post which showed me the light. The problem here is that even though the test passes locally it will not during an Automated Build. When you send your tests to the build server it does not understand that you want to spin up the web site and run tests against that! When you run the test in Visual Studio it spins up the web site anyway, but would you expect your test to pass if you told the website not to spin up? Of course not. So, when you send the code to the build server you need to tell it what to spin up. First, the best way to get the parameters you need is to right click on the method you want to test and select “Create Unit Test”. This will detect wither you are running in IIS or ASP.NET Development Server or None, and create the relevant tags. Figure: Right clicking on “SaveDefaultProjectFile” will produce a context menu with “Create Unit tests…” on it. If you use this option it will AutoDetect most of the Attributes that are required. /// <summary> ///A test for SSW.SQLDeploy.SilverlightUI.Web.Services.IProfileService.SaveDefaultProjectFile ///</summary> // TODO: Ensure that the UrlToTest attribute specifies a URL to an ASP.NET page (for example, // http://.../Default.aspx). This is necessary for the unit test to be executed on the web server, // whether you are testing a page, web service, or a WCF service. [TestMethod()] [HostType("ASP.NET")] [AspNetDevelopmentServerHost("D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web", "/")] [UrlToTest("http://localhost:3100/")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] public void SaveDefaultProjectFileTest() { IProfileService target = new ProfileService(); // TODO: Initialize to an appropriate value string strComputerName = string.Empty; // TODO: Initialize to an appropriate value bool expected = false; // TODO: Initialize to an appropriate value bool actual; actual = target.SaveDefaultProjectFile(strComputerName); Assert.AreEqual(expected, actual); Assert.Inconclusive("Verify the correctness of this test method."); } Figure: Auto created code that shows the attributes required to run correctly in IIS or in this case ASP.NET Development Server If you are a purist and don’t like creating unit tests like this then you just need to add the three attributes manually. HostType – This attribute specified what host to use. Its an extensibility point, so you could write your own. Or you could just use “ASP.NET”. UrlToTest – This specifies the start URL. For most tests it does not matter which page you call, as long as it is a valid page otherwise your test may not run on the server, but may pass anyway. AspNetDevelopmentServerHost – This is a nasty one, it is only used if you are using ASP.NET Development Host and is unnecessary if you are using IIS. This sets the host settings and the first value MUST be the physical path to the root of your web application. OK, so all that was rubbish and I could not get anything working using the MSDN documentation. Google provided very little help until I ran into Billy Wang’s post  and I heard that heavenly music that all developers hear when understanding dawns that what they have been doing up until now is just plain stupid. I am sure that the above will work when I am doing Web Unit Tests, but there is a much easier way when doing web services. 
You need to add the AspNetDevelopmentServer attribute to your code. This will tell MSTest to spin up an ASP.NET Development server to host the service. Specify the path to the web application you want to use. [AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] [TestMethod] public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True() { ProfileServiceClient target = new ProfileServiceClient(); bool isTrue = target.SaveDefaultProjectFile("Mav"); Assert.AreEqual(true, isTrue); } Figure: This AspNetDevelopmentServer will make sure that the specified web application is launched. Now we can run the test and have it pass, but if the dynamically assigned ASP.NET Development server port changes what happens to the details in your app.config that was generated when creating a reference to the web service? Well, it would be wrong and the test would fail. This is where Billy’s helper method comes in. Once you have created an instance of your service call, and it has loaded the config, but before you make any calls to it you need to go in and dynamically set the Endpoint address to the same address as your dynamically hosted Web Application. using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.VisualStudio.TestTools.UnitTesting; using System.Reflection; using System.ServiceModel.Description; using System.ServiceModel; namespace SSW.SQLDeploy.Test { class WcfWebServiceHelper { public static bool TryUrlRedirection(object client, TestContext context, string identifier) { bool result = true; try { PropertyInfo property = client.GetType().GetProperty("Endpoint"); string webServer = context.Properties[string.Format("AspNetDevelopmentServer.{0}", identifier)].ToString(); Uri webServerUri = new Uri(webServer); ServiceEndpoint endpoint = (ServiceEndpoint)property.GetValue(client, null); EndpointAddressBuilder builder = new EndpointAddressBuilder(endpoint.Address); builder.Uri = new Uri(endpoint.Address.Uri.OriginalString.Replace(endpoint.Address.Uri.Authority, webServerUri.Authority)); endpoint.Address = builder.ToEndpointAddress(); } catch (Exception e) { context.WriteLine(e.Message); result = false; } return result; } } } Figure: This fixes a problem with the URL in your web.config not being the same as the dynamically hosted ASP.NET Development server port. We can now add a call to this method after we created the Proxy object and change the Endpoint for the Service to the correct one. This process is wrapped in an assert as if it fails there is no point in continuing. [AspNetDevelopmentServer("WebApp1", D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] [TestMethod] public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True() { ProfileServiceClient target = new ProfileServiceClient(); Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1")); bool isTrue = target.SaveDefaultProjectFile("Mav"); Assert.AreEqual(true, isTrue); } Figure: Editing the Endpoint from the app.config on the fly to match the dynamically hosted ASP.NET Development Server URL and port is now easy. As you can imagine AspNetDevelopmentServer poses some problems of you have multiple developers. What are the chances of everyone using the same location to store the source? 
What about if you are using a build server: how do you tell MSTest where to look for the files? To the rescue is a property called "%PathToWebRoot%", which is always right on the build server. It will always point to your build drop folder for your solution's web sites, which will be "\\tfs.ssw.com.au\BuildDrop\[BuildName]\Debug\_PrecompiledWeb\" or whatever your build drop location is. So let's change the code above to add this. [AspNetDevelopmentServer("WebApp1", "%PathToWebRoot%\\SSW.SQLDeploy.SilverlightUI.Web")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] [TestMethod] public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True() { ProfileServiceClient target = new ProfileServiceClient(); Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1")); bool isTrue = target.SaveDefaultProjectFile("Mav"); Assert.AreEqual(true, isTrue); } Figure: Adding %PathToWebRoot% to the AspNetDevelopmentServer path makes it work everywhere. Now we have another problem… this will ONLY run on the build server and will fail locally, as %PathToWebRoot%'s default value is "C:\Users\[profile]\Documents\Visual Studio 2010\Projects". Well, this sucks… How do we get the test to run on any build server and any developer laptop? Open "Tools | Options | Test Tools | Test Execution" in Visual Studio and you will see a field called "Web application root directory". This is where you override that default. Figure: You can override the default website location for tests. In my case I would put in "D:\Workspaces\SSW\SSW\SqlDeploy\DEV\Main", and all the developers working with this branch would put in the folder that they have mapped. Can you see a problem? What if I create a "$/SSW/SqlDeploy/DEV/34567" branch from Main and want to run tests in there? Well… I would have to change the value above. This is not ideal, but as you can put your projects anywhere on a computer, it has to be done. Conclusion: Although this looks convoluted and complicated, there are real problems being solved here that mean you have a test ANYWHERE solution. Any build server, any developer workstation. Resources: http://billwg.blogspot.com/2009/06/testing-wcf-web-services.html http://tough-to-find.blogspot.com/2008/04/testing-asmx-web-services-in-visual.html http://msdn.microsoft.com/en-us/library/ms243399(VS.100).aspx http://blogs.msdn.com/dscruggs/archive/2008/09/29/web-tests-unit-tests-the-asp-net-development-server-and-code-coverage.aspx http://www.5z5.com/News/?543f8bc8b36b174f Technorati Tags: VS2010,MSTest,Team Build 2010,Team Build,Visual Studio,Visual Studio 2010,Visual Studio ALM,Team Test,Team Test 2010

    Read the article

  • Installing UCMA 3.0 and Creating a Communications Server "14"Trusted Application Pool

A lot of setup and administration tasks have gotten a lot easier in Communications Server 14; one of them is building an application server to develop and run your UCMA 3.0 applications on. In this post, I'll walk you through installing the UCMA 3.0 Core SDK and creating a Trusted Application Pool on the server, thus adding it to the Communications Server 14 topology and allowing you to host and run UCMA 3.0 applications on it. Note: These instructions will change slightly as the bits get updated for the eventual Beta release; I will update this post as soon as I get a chance to run this setup on a more recent build. I'm doing the install on a simple Communications Server 14 topology consisting of the following Windows Server 2008 R2 Hyper-V images: DC – Domain Controller; ExchangeUM – Exchange Server 2010; CS-SE – Microsoft Communications Server 2010 Standard Edition; TS – Development machine. I'll walk through setting up UCMA 3.0 on the TS VM, which is a fully patched Windows Server 2008 R2 machine that is joined to the Fabrikam domain. I'm also running Visual Studio 2010 on this VM because I intend to use it as a development machine. In a future post, I'll walk through installing just the UCMA 3.0 run time to build a true production UCMA application server. I'm making a couple of assumptions here: you have an existing CS 2010 site and cluster configured (we'll look at this in a future post); you're starting with a fully patched Windows Server 2008 R2 machine; and the machine is joined to your domain. This walkthrough was done in my Fabrikam VM environment but can easily be modified for your own environment. Installing the UCMA 3.0 SDK: Let's start by installing the UCMA 3.0 SDK. Run UcmaSdkWebDownload.msi to kick off the SDK installer package extract process. The installed package is extracted to C: >> Program Files >> Microsoft UCMA 3.0 >> SDK Installer Package. Browse there and run setup.exe. Click Install to install the UCMA 3.0 Core SDK and Workflow SDK. Install Communications Server Core Components: UCMA 3.0 introduces a new concept called auto-provisioning, which is most easily explained from the developer point of view. Remember what your app.config looked like in UCMA 2.0? You had to store the application GRUU, the trusted contact SIP Uri, the port for your application, and the name of the certificate authority. That's all gone with auto-provisioning: all you need in your app.config is your ApplicationId, e.g. urn:application:MyApplication. How does CS 2010 do this? All of the application's configuration data is associated with the application's id. UCMA also queries a replicated copy of the Central Management Database to retrieve the application's configuration data and also the configuration data for any endpoints. In this step, we'll run Bootstrapper.exe to install the CS Core components; this checks for the following components and installs them if they are not already present: VcRedist, Sqlexpress, Sqlnativeclient, Sqlbackcompat, Ucmaredist, OcsCore.msi. Open a command window at C: >> Program Files >> Microsoft Communications Server 2010 >> Deployment and run the following command: Bootstrapper.exe /BootstrapReplica /MinCache /SourceDirectory:"%ProgramFiles%\Microsoft UCMA 3.0\SDK Installer Package\Prereq\BootstrapperCache" Create a New Trusted Application Pool: The next step is to create a new trusted application pool for the new server.
Fire up the Communications Server Management Shell from Start >> Microsoft Communications Server 2010 >> Communications Server Management Shell and enter the following PowerShell command: New-CsTrustedApplicationPool -Identity <FQDN of Server> -Registrar <FQDN of CS Server> -Site <CS Site Name> Verify that the new server was added to the CS topology by running the following PowerShell command: (Get-CsTopology -AsXml).ToString() > Topology.xml This created a file called Topology.xml in the directory that you ran the command from.  Open the file and find the Clusters section and look for a node for the new server. The Cluster Fqdn is the name of your server, and note the name of the Site that this Cluster is a part of. <Cluster Fqdn="appsrv.fabrikam.com" RequiresReplication="true" RequiresSetup="true"> <ClusterId SiteId="UcMarketing2" Number="5" /> <Machine OrdinalInCluster="1" Fqdn="appsrv.fabrikam.com"> <NetInterface InterfaceSide="Primary" InterfaceNumber="1" IPAddress="0.0.0.0" /> </Machine> </Cluster> Configure CS Management Store Replication At this point, we have the CS Core components installed and the server configured as a trusted application pool.  We now need to set up replication so that the Central Management Store replicates down to the new server. From the Communications Server Management Shell, run the following PowerShell command to enable the Replica service on the new server: Enable-CSReplica The Replica service is enabled, but hasn't done anything yet. This can be verified by running the following PowerShell command to check the replication status for the various servers in the topology: Get-CSManagementStoreReplicationStatus You can see in the screenshot below that the UpToDate property of the new server is still False Run the following PowerShell command to force the replication to run: Invoke-CSManagementStoreReplicationStatus Run Get-CSManagementStoreReplicationStatus again to verify that the new service is now up to date Request and Set a New Certificate The last step in the process is to request a new certificate from the certificate authority on the domain and assign it to the new server. From the Communications Server Management Shell, run the following PowerShell command to request a new certificate: Request-CSCertificate -Action new -Type default -CA <Domain Controller FQDN>\<Certificate Authority> Setting the -Verbose switch on the cmdlet creates an Xml file with its output. Open the Xml file and copy the thumbprint of the generated certificate. 
<?xml version="1.0" encoding="utf-8"?> <Action Name="Request-CsCertificate" Time="20100512T212258"> <Action Name="Request-CsCertificate" Time="20100512T212258"> <Info Title="Connection" Time="20100512T212258">Data Source=(local)\rtclocal;Initial Catalog=xds;Integrated Security=True</Info> <Action Time="20100512T212258"> <Info Title="Certificate use" Time="20100512T212258">urn:certref:default</Info> <Info Title="Subject distinguished name" Time="20100512T212258">CN="appsrv2.fabrikam.com"</Info> <Info Time="20100512T212259">The certificate request is submitted to the Certification Authority dc.fabrikam.com\FabrikamCA.</Info> <Info Time="20100512T212259">The certificate was issued.</Info> <Info Time="20100512T212259">The certificate was imported with thumbprint AFC3C46E459C1A39AD06247676F3555826DBF705.</Info> <Complete Time="20100512T212259" /> </Action> <Info Title="command status" Time="20100512T212259">Command execution processing completed</Info> <Action Name="DeploymentXdsCmdlet.SaveCachedItems" Time="20100512T212259"> <Info Time="20100512T212259">0 updates</Info> <Complete Time="20100512T212259" /> </Action> <Info Title="command status" Time="20100512T212259">Command has completed</Info> </Action> </Action> Run the following PowerShell command to set the certificate: Set-CsCertificate -Type Default -Thumbprint <Thumbprint> Wrapping Up You now have a new UCMA 3.0 application server in your Communications Server 2010 server topology.  You can provision trusted applications and trusted application endpoints on the new server using the Communications Server 2010 Management Shell.  Well take a look at how to do that in another post. Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.
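As a footnote to the walkthrough above, here is a rough sketch of what the auto-provisioning model looks like from the application side once the pool, replication and certificate are in place. Treat every name here as an assumption: the application id and user agent are placeholders, the trusted application itself would still need to be provisioned (for example with the New-CsTrustedApplication cmdlet), and the exact UCMA 3.0 type and member names should be verified against the SDK documentation.

// Hedged sketch only: assumes a trusted application has already been provisioned.
// Type and member names are from memory of the UCMA 3.0 SDK and should be verified.
using System;
using Microsoft.Rtc.Collaboration;

class AutoProvisionedHost
{
    static void Main()
    {
        // With auto-provisioning, the application id is the only configuration
        // the code needs; ports, GRUU and certificates come from the replicated
        // Central Management Store.
        var settings = new ProvisionedApplicationPlatformSettings(
            "MyUcmaApp",                       // user agent string (placeholder)
            "urn:application:MyApplication");  // application id (placeholder)

        var platform = new CollaborationPlatform(settings);
        platform.EndStartup(platform.BeginStartup(null, null));
        Console.WriteLine("Platform started; provisioned endpoints will be discovered.");

        platform.EndShutdown(platform.BeginShutdown(null, null));
    }
}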

    Read the article

  • How to Upgrade Your Netbook to Windows 7 Home Premium

    - by Matthew Guay
    Would you like more features and flash in Windows on your netbook?  Here’s how you can easily upgrade your netbook to Windows 7 Home Premium the easy way. Most new netbooks today ship with Windows 7 Starter, which is the cheapest edition of Windows 7.  It is fine for many computing tasks, and will run all your favorite programs great, but it lacks many customization, multimedia, and business features found in higher editions.  Here we’ll show you how you can quickly upgrade your netbook to more full-featured edition of Windows 7 using Windows Anytime Upgrade.  Also, if you want to upgrade your laptop or desktop to another edition of Windows 7, say Professional, you can follow these same steps to upgrade it, too. Please note: This is only for computers already running Windows 7.  If your netbook is running XP or Vista, you will have to run a traditional upgrade to install Windows 7. Upgrade Advisor First, let’s make sure your netbook can support the extra features, such as Aero Glass, in Windows 7 Home Premium.  Most modern netbooks that ship with Windows 7 Starter can run the advanced features in Windows 7 Home Premium, but let’s check just in case.  Download the Windows 7 Upgrade Advisor (link below), and install as normal. Once it’s installed, run it and click Start Check.   Make sure you’re connected to the internet before you run the check, or otherwise you may see this error message.  If you see it, click Ok and then connect to the internet and start the check again. It will now scan all of your programs and hardware to make sure they’re compatible with Windows 7.  Since you’re already running Windows 7 Starter, it will also tell you if your computer will support the features in other editions of Windows 7. After a few moments, the Upgrade Advisor will show you want it found.  Here we see that our netbook, a Samsung N150, can be upgraded to Windows 7 Home Premium, Professional, or Ultimate. We also see that we had one issue, but this was because a driver we had installed was not recognized.  Click “See all system requirements” to see what your netbook can do with the new edition. This shows you which of the requirements, including support for Windows Aero, your netbook meets.  Here our netbook supports Aero, so we’re ready to go upgrade. For more, check out our article on how to make sure your computer can run Windows 7 with Upgrade Advisor. Upgrade with Anytime Upgrade Now, we’re ready to upgrade our netbook to Windows 7 Home Premium.  Enter “Anytime Upgrade” in the Start menu search,and select Windows Anytime Upgrade. Windows Anytime Upgrade lets you upgrade using product key you already have or one you purchase during the upgrade process.  And, it installs without any downloads or Windows disks, so it works great even for netbooks without DVD drives. Anytime Upgrades are cheaper than a standard upgrade, and for a limited time, select retailers in the US are offering Anytime Upgrades to Windows 7 Home Premium for only $49.99 if purchased with a new netbook.  If you already have a netbook running Windows 7 Starter, you can either purchase an Anytime Upgrade package at a retail store or purchase a key online during the upgrade process for $79.95.  Or, if you have a standard Windows 7 product key (full or upgrade), you can use it in Anytime upgrade.  This is especially nice if you can purchase Windows 7 cheaper through your school, university, or office. 
Purchase an upgrade online To purchase an upgrade online, click “Go online to choose the edition of Windows 7 that’s best for you”.   Here you can see a comparison of the features of each edition of Windows 7.  Note that you can upgrade to either Home Premium, Professional, or Ultimate.  We chose home Premium because it has most of the features that home users want, including Media Center and Aero Glass effects.  Also note that the price of each upgrade is cheaper than the respective upgrade from Windows XP or Vista.  Click buy under the edition you want.   Enter your billing information, then your payment information.  Once you confirm your purchase, you will directly be taken to the Upgrade screen.  Make sure to save your receipt, as you will need the product key if you ever need to reinstall Windows on your computer. Upgrade with an existing product key If you purchased an Anytime Upgrade kit from a retailer, or already have a Full or Upgrade key for another edition of Windows 7, choose “Enter an upgrade key”. Enter your product key, and click Next.  If you purchased an Anytime Upgrade kit, the product key will be located on the inside of the case on a yellow sticker. The key will be verified as a valid key, and Anytime Upgrade will automatically choose the correct edition of Windows 7 based on your product key.  Click Next when this is finished. Continuing the Upgrade process Whether you entered a key or purchased a key online, the process is the same from here on.  Click “I accept” to accept the license agreement. Now, you’re ready to install your upgrade.  Make sure to save all open files and close any programs, and then click Upgrade. The upgrade only takes about 10 minutes in our experience but your mileage may vary.  Any available Microsoft updates, including ones for Office, Security Essentials, and other products, will be installed before the upgrade takes place. After a couple minutes, your computer will automatically reboot and finish the installation.  It will then reboot once more, and your computer will be ready to use!  Welcome to your new edition of Windows 7! Here’s a before and after shot of our desktop.  When you do an Anytime Upgrade, all of your programs, files, and settings will be just as they were before you upgraded.  The only change we noticed was that our pinned taskbar icons were slightly rearranged to the default order of Internet Explorer, Explorer, and Media Player.  Here’s a shot of our desktop before the upgrade.  Notice that all of our pinned programs and desktop icons are still there, as well as our taskbar customization (we are using small icons on the taskbar instead of the default large icons). Before, with the Windows 7 Starter background and the Aero Basic theme: And after, with Aero Glass and the more colorful default Windows 7 background.   All of the features of Windows 7 Home Premium are now ready to use.  The Aero theme was activate by default, but you can now customize your netbook theme, background, and more with the Personalization pane.  To open it, right-click on your desktop and select Personalize. You can also now use Windows Media Center, and can play-back DVD movies using an external drive. One of our favorite tools, the Snipping Tool, is also now available for easy screenshots and clips. Activating you new edition of Windows 7 You will still need to activate your new edition of Windows 7.  To do this right away, open the start menu, right-click on Computer, and select Properties.   
Scroll to the bottom, and click “Activate Windows Now”. Make sure you’re connected to the internet, and then select “Activate Windows online now”. Activation may take a few minutes, depending on your internet connection speed. When it is done, the Activation wizard will let you know that Windows is activated and genuine.  Your upgrade is all finished! Conclusion Windows Anytime Upgrade makes it easy, and somewhat cheaper, to upgrade to another edition of Windows 7.  It’s useful for desktop and laptop owners who want to upgrade to Professional or Ultimate, but many more netbook owners will want to upgrade from Starter to Home Premium or another edition.  Links Download the Windows 7 Upgrade Advisor Windows Team Blog: Anytime Upgrade Special with new PC purchase Similar Articles Productive Geek Tips How To Upgrade from Vista to Windows 7 Home Premium EditionAnother Blog You Should Subscribe ToMysticgeek Blog: Turn Vista Home Premium Into Ultimate (Part 3) – Shadow CopyUpgrade Ubuntu from Breezy to DapperHow to Upgrade the Windows 7 RC to RTM (Final Release) TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips DVDFab 6 Revo Uninstaller Pro Registry Mechanic 9 for Windows PC Tools Internet Security Suite 2010 Get Your Delicious Bookmarks In Firefox’s Awesome Bar Manage Photos Across Different Social Sites With Dropico Test Drive Windows 7 Online Download Wallpapers From National Geographic Site Spyware Blaster v4.3 Yes, it’s Patch Tuesday

    Read the article

  • Parallelism in .NET – Part 9, Configuration in PLINQ and TPL

    - by Reed
Parallel LINQ and the Task Parallel Library contain many options for configuration.  Although the default configuration options are often ideal, there are times when customizing the behavior is desirable.  Both frameworks provide full configuration support. When working with Data Parallelism, there is one primary configuration option we often need to control – the number of threads we want the system to use when parallelizing our routine.  By default, PLINQ and the TPL both use the ThreadPool to schedule tasks.  Given the major improvements in the ThreadPool in CLR 4, this default behavior is often ideal.  However, there are times that the default behavior is not appropriate.  For example, if you are working on multiple threads simultaneously, and want to schedule parallel operations from within both threads, you might want to consider restricting each parallel operation to using a subset of the processing cores of the system.  Not doing this might over-parallelize your routine, which leads to inefficiencies from having too many context switches. In the Task Parallel Library, configuration is handled via the ParallelOptions class.  All of the methods of the Parallel class have an overload which accepts a ParallelOptions argument. We configure the Parallel class by setting the ParallelOptions.MaxDegreeOfParallelism property.  For example, let's revisit one of the simple data parallel examples from Part 2: Parallel.For(0, pixelData.GetUpperBound(0), row => { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } }); Here, we're looping through an image, and calling a method on each pixel in the image.  If this was being done on a separate thread, and we knew another thread within our system was going to be doing a similar operation, we likely would want to restrict this to using half of the cores on the system.  This could be accomplished easily by doing: var options = new ParallelOptions(); options.MaxDegreeOfParallelism = Math.Max(Environment.ProcessorCount / 2, 1); Parallel.For(0, pixelData.GetUpperBound(0), options, row => { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } }); Now, we're restricting this routine to using no more than half the cores in our system.  Note that I included a check to prevent a single core system from supplying zero; without this check, we'd potentially cause an exception.  I also did not hard code a specific value for the MaxDegreeOfParallelism property.  One of our goals when parallelizing a routine is allowing it to scale on better hardware.  Specifying a hard-coded value would contradict that goal. Parallel LINQ also supports configuration, and in fact, has quite a few more options for configuring the system.
The main configuration option we most often need is the same as our TPL option: we need to supply the maximum number of processing threads.  In PLINQ, this is done via a new extension method on ParallelQuery<T>: ParallelEnumerable.WithDegreeOfParallelism. Let’s revisit our declarative data parallelism sample from Part 6: double min = collection.AsParallel().Min(item => item.PerformComputation()); Here, we’re performing a computation on each element in the collection, and saving the minimum value of this operation.  If we wanted to restrict this to a limited number of threads, we would add our new extension method: int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1); double min = collection .AsParallel() .WithDegreeOfParallelism(maxThreads) .Min(item => item.PerformComputation()); This automatically restricts the PLINQ query to half of the threads on the system. PLINQ provides some additional configuration options.  By default, PLINQ will occasionally revert to processing a query in parallel.  This occurs because many queries, if parallelized, typically actually cause an overall slowdown compared to a serial processing equivalent.  By analyzing the “shape” of the query, PLINQ often decides to run a query serially instead of in parallel.  This can occur for (taken from MSDN): Queries that contain a Select, indexed Where, indexed SelectMany, or ElementAt clause after an ordering or filtering operator that has removed or rearranged original indices. Queries that contain a Take, TakeWhile, Skip, SkipWhile operator and where indices in the source sequence are not in the original order. Queries that contain Zip or SequenceEquals, unless one of the data sources has an originally ordered index and the other data source is indexable (i.e. an array or IList(T)). Queries that contain Concat, unless it is applied to indexable data sources. Queries that contain Reverse, unless applied to an indexable data source. If the specific query follows these rules, PLINQ will run the query on a single thread.  However, none of these rules look at the specific work being done in the delegates, only at the “shape” of the query.  There are cases where running in parallel may still be beneficial, even if the shape is one where it typically parallelizes poorly.  In these cases, you can override the default behavior by using the WithExecutionMode extension method.  This would be done like so: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .Select(i => i.PerformComputation()) .Reverse(); Here, the default behavior would be to not parallelize the query unless collection implemented IList<T>.  We can force this to run in parallel by adding the WithExecutionMode extension method in the method chain. Finally, PLINQ has the ability to configure how results are returned.  When a query is filtering or selecting an input collection, the results will need to be streamed back into a single IEnumerable<T> result.  For example, the method above returns a new, reversed collection.  In this case, the processing of the collection will be done in parallel, but the results need to be streamed back to the caller serially, so they can be enumerated on a single thread. This streaming introduces overhead.  IEnumerable<T> isn’t designed with thread safety in mind, so the system needs to handle merging the parallel processes back into a single stream, which introduces synchronization issues.  
There are two extremes of how this could be accomplished, but both extremes have disadvantages. The system could watch each thread, and whenever a thread produces a result, take that result and send it back to the caller.  This would mean that the calling thread would have access to the data as soon as data is available, which is the benefit of this approach.  However, it also means that every item is introducing synchronization overhead, since each item needs to be merged individually. On the other extreme, the system could wait until all of the results from all of the threads were ready, then push all of the results back to the calling thread in one shot.  The advantage here is that the least amount of synchronization is added to the system, which means the query will, on a whole, run the fastest.  However, the calling thread will have to wait for all elements to be processed, so this could introduce a long delay between when a parallel query begins and when results are returned. The default behavior in PLINQ is actually between these two extremes.  By default, PLINQ maintains an internal buffer, and chooses an optimal buffer size to maintain.  Query results are accumulated into the buffer, then returned in the IEnumerable<T> result in chunks.  This provides reasonably fast access to the results, as well as good overall throughput, in most scenarios. However, if we know the nature of our algorithm, we may decide we would prefer one of the other extremes.  This can be done by using the WithMergeOptions extension method.  For example, if we know that our PerformComputation() routine is very slow, but also variable in runtime, we may want to retrieve results as they are available, with no bufferring.  This can be done by changing our above routine to: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .WithMergeOptions(ParallelMergeOptions.NotBuffered) .Select(i => i.PerformComputation()) .Reverse(); On the other hand, if are already on a background thread, and we want to allow the system to maximize its speed, we might want to allow the system to fully buffer the results: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .WithMergeOptions(ParallelMergeOptions.FullyBuffered) .Select(i => i.PerformComputation()) .Reverse(); Notice, also, that you can specify multiple configuration options in a parallel query.  By chaining these extension methods together, we generate a query that will always run in parallel, and will always complete before making the results available in our IEnumerable<T>.
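One thing the samples above do not show is combining several of these options, or making a configured parallel operation cancellable. The following is a hedged sketch only: collection and PerformComputation() are the same placeholders used in the earlier examples, and the cancellation wiring uses the standard ParallelOptions.CancellationToken property and PLINQ's WithCancellation extension method.

// Sketch: combining configuration options and adding cancellation.
// "collection" and PerformComputation() are placeholders from the earlier examples.
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

var cts = new CancellationTokenSource();
int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1);

// TPL: ParallelOptions carries both the thread limit and the cancellation token.
var options = new ParallelOptions
{
    MaxDegreeOfParallelism = maxThreads,
    CancellationToken = cts.Token
};

try
{
    Parallel.ForEach(collection, options, item => item.PerformComputation());

    // PLINQ: the same concerns are expressed as chained extension methods.
    var results = collection
        .AsParallel()
        .WithCancellation(cts.Token)
        .WithDegreeOfParallelism(maxThreads)
        .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
        .WithMergeOptions(ParallelMergeOptions.NotBuffered)
        .Select(i => i.PerformComputation())
        .ToList();
}
catch (OperationCanceledException)
{
    // Raised if cts.Cancel() is called while either operation is running.
}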

    Read the article

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel.  Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation. Simple examples of this might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria.  This can be done using the techniques described in simple data parallelism; however, special care needs to be taken to synchronize the shared data appropriately.  The Task Parallel Library has tools to assist in this synchronization. The main issue with aggregation when parallelizing a routine is that you need to handle synchronization of data, since multiple threads will need to write to a shared portion of data.  Suppose, for example, that we wanted to parallelize a simple loop that looked for the minimum value within a dataset: double min = double.MaxValue; foreach(var item in collection) { double value = item.PerformComputation(); min = System.Math.Min(min, value); } This seems like a good candidate for parallelization, but there is a problem here.  If we just wrap this into a call to Parallel.ForEach, we'll introduce a critical race condition, and get the wrong answer.  Let's look at what happens here: // Buggy code! Do not use! double min = double.MaxValue; Parallel.ForEach(collection, item => { double value = item.PerformComputation(); min = System.Math.Min(min, value); }); This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously.  Two threads may perform the check at the same time, and set the wrong value for min.  Say we get a value of 1 in thread 1, and a value of 2 in thread 2, and these two elements are the first two to run.  If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively.  If element 1 happens to set the variable first, then element 2 sets the min variable, we'll detect a min value of 2 instead of 1.  This can lead to wrong answers. Unfortunately, fixing this with the Parallel.ForEach call we're using would require adding locking.  We would need to rewrite this like: // Safe, but slow double min = double.MaxValue; // Make a "lock" object object syncObject = new object(); Parallel.ForEach(collection, item => { double value = item.PerformComputation(); lock(syncObject) min = System.Math.Min(min, value); }); This will potentially add a huge amount of overhead to our calculation.  Since we can potentially block while waiting on the lock for every single iteration, we will most likely slow this down to where it is actually quite a bit slower than our serial implementation.  The problem is the lock statement – any time you use lock(object), you're almost assuring reduced performance in a parallel situation.
This leads to two observations I’ll make: When parallelizing a routine, try to avoid locks. That being said: Always add any and all required synchronization to avoid race conditions. These two observations tend to be opposing forces – we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible.  Looking at our routine, there is no way to directly avoid this lock, since each element is potentially being run on a separate thread, and this lock is necessary in order for our routine to function correctly every time. However, this isn’t the only way to design this routine to implement this algorithm.  Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PE).  Processing Element is the standard term for a hardware element which can process and execute instructions.  This typically is a core in your processor, but many modern systems have multiple hardware execution threads per core.  The Task Parallel Library will not execute the work for each item in the collection as a separate work item. Instead, when Parallel.ForEach executes, it will partition the collection into larger “chunks” which get processed on different threads via the ThreadPool.  This helps reduce the threading overhead, and help the overall speed.  In general, the Parallel class will only use one thread per PE in the system. Given the fact that there are typically fewer threads than work items, we can rethink our algorithm design.  We can parallelize our algorithm more effectively by approaching it differently.  Because the basic aggregation we are doing here (Min) is communitive, we do not need to perform this in a given order.  We knew this to be true already – otherwise, we wouldn’t have been able to parallelize this routine in the first place.  With this in mind, we can treat each thread’s work independently, allowing each thread to serially process many elements with no locking, then, after all the threads are complete, “merge” together the results. This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource,TLocal>.  The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal).  The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes an individual item.  Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results. To rewriting our routine using Parallel.ForEach<TSource,TLocal>, we need to provide three delegates instead of one.  The most basic version of this function is declared as: public static ParallelLoopResult ForEach<TSource, TLocal>( IEnumerable<TSource> source, Func<TLocal> localInit, Func<TSource, ParallelLoopState, TLocal, TLocal> body, Action<TLocal> localFinally ) The first delegate (the localInit argument) is defined as Func<TLocal>.  This delegate initializes our local state.  It should return some object we can use to track the results of a single thread’s operations. The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate.  
This delegate will receive three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of our local state we created (TLocal).  It should do whatever processing you wish to occur per element, then return the value of the local state after processing is completed. The third delegate (the localFinally argument) is defined as Action<TLocal>.  This delegate is passed our local state after it’s been processed by all of the elements this thread will handle.  This is where you can merge your final results together.  This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you’ll only have to synchronize once per thread, which is an ideal situation. Now that I’ve explained how this works, lets look at the code: // Safe, and fast! double min = double.MaxValue; // Make a "lock" object object syncObject = new object(); Parallel.ForEach( collection, // First, we provide a local state initialization delegate. () => double.MaxValue, // Next, we supply the body, which takes the original item, loop state, // and local state, and returns a new local state (item, loopState, localState) => { double value = item.PerformComputation(); return System.Math.Min(localState, value); }, // Finally, we provide an Action<TLocal>, to "merge" results together localState => { // This requires locking, but it's only once per used thread lock(syncObj) min = System.Math.Min(min, localState); } ); Although this is a bit more complicated than the previous version, it is now both thread-safe, and has minimal locking.  This same approach can be used by Parallel.For, although now, it’s Parallel.For<TLocal>.  When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results. Also, many times, you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation.  The MSDN example demonstrating this same technique using Parallel.For uses the Interlocked class instead of a lock, since they are doing a sum operation on a long variable, which is possible via Interlocked.Add. By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation, which, at first, may seem like poor candidates for parallelization.  Doing so requires careful consideration, and often requires a slight redesign of the algorithm, but the performance gains can be significant if handled in a way to avoid excessive synchronization.
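To round out the Min example above, here is a sketch in the spirit of the MSDN sample mentioned in the last paragraph: a sum over a long, using the Parallel.For overload with thread-local state and a lock-free merge via Interlocked.Add. The values array is a placeholder; only the shape of the three delegates matters.

// Sketch: summing with thread-local state and an atomic, lock-free merge.
// "values" is placeholder data; substitute your own per-element computation.
using System;
using System.Threading;
using System.Threading.Tasks;

long[] values = new long[1000000];   // placeholder input
long total = 0;

Parallel.For(
    0, values.Length,
    // Local state initializer: each thread starts its own partial sum at zero.
    () => 0L,
    // Body: add this element to the thread-local partial sum; no locking here.
    (i, loopState, localSum) => localSum + values[i],
    // Local finally: merge each thread's partial sum exactly once, atomically.
    localSum => Interlocked.Add(ref total, localSum));

Console.WriteLine(total);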

    Read the article

  • The How-To Geek Holiday Gift Guide (Geeky Stuff We Like)

    - by The Geek
Welcome to the very first How-To Geek Holiday Gift Guide, where we’ve put together a list of our absolute favorites to help you weed through all of the junk out there to pick the perfect gift for anybody. Though really, it’s just a list of the geeky stuff we want. We’ve got a whole range of items on the list, from cheaper gifts that most anybody can afford, to the really expensive stuff that we’re pretty sure nobody is giving us.

Stocking Stuffers

Here are a couple of ideas for items that won’t break the bank.

LED Keychain Micro-Light
Best little LED keychain light around. If they don’t need the penknife of the above item, this is the perfect gift. I give them out by the handful and nobody ever says anything but good things about them. I’ve got ones that are years old and still running on the same battery. Price: $8

Magcraft 1/8-Inch Rare Earth Cube Magnets
Geeks cannot resist magnets. Jason bought this pack for his fridge because he was sick of big clunky magnets… these things are amazing. One tiny magnet, smaller than an Altoid mint, can practically hold a clipboard right to the fridge. Amazing. I spend more time playing with them on the counter than I do actually hanging stuff. Price: $10

Lots of Geeky Mugs
There are loads of fun, geeky mugs you can find on Amazon or anywhere else—and they are great choices for the geek who loves their coffee. You can get the Caffeine mug pictured here, or go with an Atari one, a Canon Lens one, or the Aperture mug based on Portal. Your choice. Price: $7

Astronomy Powerful Green Laser Pointer
No, it’s not a light saber, but it’s nearly bright enough to be one—you can illuminate low flying clouds at night or just blind some aliens on your day off. All that for an extremely low price. Loads of fun. Price: $15

Geeky TV Shows and Books

Sometimes you just want to relax and enjoy some TV or a good book. Here are a few choices.

The IT Crowd Fourth Season
Ridiculous, funny show about nerds in the IT department, loved by almost all the geeks here at HTG. Justin even makes this required watching for new hires in his office so they’ll get his jokes. You can pre-order the fourth season, or pick up seasons one, two, or three for even cheaper. Price: $13

Doctor Who, Complete Fifth Series
It doesn’t get any more nerdy than Eric’s pick, the fifth all-new series of Doctor Who, where the Daleks are hatching a new master plan from the heart of war-torn London. There are also alien vampires, humanoid reptiles, and a lot more. Price: $52

Battlestar Galactica Complete Series
Watch the epic fight to save the human race by finding the fabled planet Earth while being hunted by the robotic Cylons. You can grab the entire series on DVD or Blu-ray, or get the seasons individually. This isn’t your average sci-fi TV show. Price: $150 for Blu-ray.

MAKE: Electronics: Learning Through Discovery
Want to learn the fundamentals of electronics in a fun, hands-on way? The Make: Electronics book helps you build the circuits and learn how it all works—as if you had any more time between all that registry hacking and loading software on your new PC. Price: $21

Geeky Gadgets for the Gadget-Loving Geek

Here are a few of the items on our gadget list, though let’s be honest: geeks are going to love almost any gadget, especially shiny new ones.

Klipsch Image S4i Premium Noise-Isolating Headset with 3-Button Apple Control
If you’re a real music geek looking for some serious quality in the headset for your iPhone or iPod, this is the pair that Alex recommends. They aren’t terribly cheap, but you can get the less expensive S3 earphones instead if you prefer. Price: $50-100

GP2X Caanoo MAME/Console Emulator
Eric says: “As an owner of an older version, I can say the GP2X is one of my favorite gadgets ever. Touted as a ‘Retro Emulation Juggernaut,’ GP2X runs Linux and may be the only open source software console available. Sounds too good to be true, but isn’t.” Price: $150

Roku XDS Streaming Player 1080p
If you do a lot of streaming over Netflix, Hulu Plus, Amazon’s Video on Demand, Pandora, and others, the Roku box is a great choice to get your content on your TV without paying a lot of money. It’s also got Wireless-N built in, and it supports full 1080p HD. Price: $99

Western Digital WD TV Live Plus HD Media Player
If you’ve got a home media collection sitting on a hard drive or a network server, the Western Digital box is probably the cheapest way to get that content on your TV, and it even supports Netflix streaming too. It’ll play loads of formats in full HD quality. Price: $99

Fujitsu ScanSnap S300 Color Mobile Scanner
Trevor said: “This wonderful little scanner has become absolutely essential to me. My desk used to just be a gigantic pile of papers that I didn’t need at the moment, but couldn’t throw away ‘just in case.’ Now, every few weeks, I’ll run that paper pile through this and then happily shred the originals!” Price: $300

Doxie, the amazing scanner for documents
If you don’t scan quite as often and are looking for a budget scanner you can throw into your bag, or toss into a drawer in your desk, the Doxie scanner is a great alternative that I’ve been using for a while. It’s half the price, and while it’s not as full-featured as the Fujitsu, it might be a better choice for the very casual user. Price: $150

(Expensive) Gadgets Almost Anybody Will Love

If you’re not sure that one of the more geeky presents is gonna work, here are some gadgets that just about anybody is going to love, especially if they don’t have one already. Of course, some of these are a bit on the expensive side—but it’s a wish list, right?

Amazon Kindle
The Kindle weighs less than a paperback book, the screen is amazing and easy on the eyes, and get ready for the kicker: the battery lasts at least a month. We aren’t kidding, either—it really lasts that long. If you don’t feel like spending money for books, you can use it to read PDFs, and if you want to get really geeky, you can hack it for custom screensavers. Price: $139

iPod Touch or iPad
You can’t go wrong with either of these presents—the iPod Touch can do almost everything the iPhone can do, including games, apps, and music, and it has the same Retina display as the iPhone, HD video recording, and a front-facing camera so you can use FaceTime. Price: $229+, depending on model. The iPad is a great tablet for playing games, browsing the web, or just using on your coffee table for guests. It’s well worth buying one—but if you’re buying for yourself, keep in mind that the iPad 2 is probably coming out in 3 months. Price: $500+

MacBook Air
The MacBook Air comes in 11” or 13” versions, and it’s an amazing little machine. It’s lightweight, the battery lasts nearly forever, and it resumes from sleep almost instantly. Since it uses an SSD drive instead of a hard drive, you’re barely going to notice any speed problems for general use. So if you’ve got a lot of money to blow, this is a killer gift. Price: $999 and up.

Stuck with No Idea for a Present? Gift Cards!

Yeah, you’re not going to win any “thoughtful present” awards with these, but you might just give somebody what they really want—the new Angry Birds HD for their iPad, Cut the Rope, or anything else they want.

iTunes Gift Card
Somebody in your circle getting a new iPod, iPhone, or iPad? You can get them an iTunes gift card, which they can use to buy music, games or apps. Yep, this way you can gift them a copy of Angry Birds if they don’t already have it. Or even Cut the Rope.

Amazon.com Gift Card
No clue what to get somebody on your list? Amazon gift cards let them buy pretty much anything they want, from organic weirdberries to big screen TVs. Yeah, it’s not as thoughtful as getting them a nice present, but look at the bright side: maybe they’ll get you an Amazon gift card and it’ll balance out.

Those are the highlights from our lists—got anything else to add? Share your geeky gift ideas in the comments.

    Read the article

  • Quartz.Net Writing your first Hello World Job

    - by Tarun Arora
In this blog post I’ll be covering:

01: A few things to consider before you should schedule a job using Quartz.Net
02: Setting up your solution to use the Quartz.Net API
03: Quartz.Net configuration
04: Writing & scheduling a hello world job with Quartz.Net

If you are new to Quartz.Net I would recommend going through:

- A brief introduction to Quartz.net
- Walkthrough of Installing & Testing Quartz.Net as a Windows Service

A few things to consider before you should schedule a job using Quartz.Net

- An instance of the scheduler service
- A trigger
- And last but not least, a job

For example, if I wanted to schedule a script to run on the server, I should be jotting down answers to the questions below:

a. Considering there are multiple machines set up with the Quartz.Net windows service, how can I choose the instance of Quartz.Net where I want my script to be run?
b. What will trigger the execution of the job?
c. How often do I want the job to run?
d. Do I want the job to run right away, start after a delay, or maybe start at a specific time?
e. What will happen to my job if the Quartz.Net windows service is reset?
f. Do I want multiple instances of this job to run concurrently?
g. Can I pass parameters to the job being executed by the Quartz.Net windows service?

Setting up your solution to use the Quartz.Net API

1. Create a new C# Console Application project, call it “HelloWorldQuartzDotNet” and add a reference to Quartz.Net.dll. I use the NuGet Package Manager to add the reference: right-click References and choose Manage NuGet Packages, choose Online from the left panel, and in the search box on the right search for Quartz.Net. Click Install on the package “Quartz” (screenshot below).

2. Right-click the project and choose Add New Item. Add a new interface and call it ‘IScheduledJob.cs’. Mark the interface public and add the signature for Run. Your interface should look like below.

namespace HelloWorldQuartzDotNet
{
    public interface IScheduledJob
    {
        void Run();
    }
}

3. Right-click the project and choose Add New Item. Add a class and call it ‘ScheduledJob.cs’. Use this class to implement the interface ‘IScheduledJob.cs’. Look at the pseudo code in the implementation of the Run method.

using System;

namespace HelloWorldQuartzDotNet
{
    class ScheduledJob : IScheduledJob
    {
        public void Run()
        {
            // Get an instance of the Quartz.Net scheduler
            // Define the Job to be scheduled
            // Associate a trigger with the Job
            // Assign the Job to the scheduler
            throw new NotImplementedException();
        }
    }
}
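As an aside, if you prefer the Package Manager Console over the Manage NuGet Packages dialog in step 1, the same reference can be added with a single command (the package id is “Quartz”, as shown in the dialog):

PM> Install-Package Quartz

Either way, the project ends up with a reference to the Quartz assembly.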
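One of the considerations listed above, (f) concurrent execution, is worth a quick note while we are setting things up. Quartz.Net lets a job class opt out of concurrent execution declaratively with the [DisallowConcurrentExecution] attribute. The sketch below uses a hypothetical LongRunningJob class purely for illustration; the same attribute could be applied to the job class we implement later in this post:

using Quartz;

namespace HelloWorldQuartzDotNet
{
    // LongRunningJob is a hypothetical example, not part of the walkthrough.
    // With this attribute, Quartz.Net will not start a new run of this job
    // definition while a previous run is still executing.
    [DisallowConcurrentExecution]
    class LongRunningJob : IJob
    {
        public void Execute(IJobExecutionContext context)
        {
            // long-running work goes here
        }
    }
}

This is optional for a hello world job, but useful to know about when a job can take longer than the interval between its triggers.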
I’ll get into the implementation in more detail, but first let’s look at the minimal configuration needed for the Quartz.Net service to work.

Quartz.Net configuration

In the App.config file copy the below configuration:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="quartz" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
  </configSections>
  <quartz>
    <add key="quartz.scheduler.instanceName" value="ServerScheduler" />
    <add key="quartz.threadPool.type" value="Quartz.Simpl.SimpleThreadPool, Quartz" />
    <add key="quartz.threadPool.threadCount" value="10" />
    <add key="quartz.threadPool.threadPriority" value="2" />
    <add key="quartz.jobStore.misfireThreshold" value="60000" />
    <add key="quartz.jobStore.type" value="Quartz.Simpl.RAMJobStore, Quartz" />
  </quartz>
</configuration>

As you can see in the configuration above, I have included the instance name of the Quartz scheduler, the thread pool type, count and priority, and the job store type, which has been defined as RAM. You also have the option of configuring an ADO.NET job store instead. More details here.
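If you need your scheduled jobs and triggers to survive a restart of the Quartz.Net windows service, that ADO.NET job store is the option to pick instead of the RAM store above. The snippet below is only a rough sketch for SQL Server; the key names follow the quartz.* convention used above, but the delegate type, provider value and connection string shown here are assumptions that should be verified against the Quartz.Net documentation for your installed version:

<add key="quartz.jobStore.type" value="Quartz.Impl.AdoJobStore.JobStoreTX, Quartz" />
<add key="quartz.jobStore.driverDelegateType" value="Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz" />
<add key="quartz.jobStore.tablePrefix" value="QRTZ_" />
<add key="quartz.jobStore.dataSource" value="default" />
<add key="quartz.dataSource.default.connectionString" value="Server=.;Database=Quartz;Integrated Security=SSPI;" />
<add key="quartz.dataSource.default.provider" value="SqlServer-20" />

The Quartz tables also need to exist in that database first; the SQL scripts to create them ship with the Quartz.Net download.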
*") // visit http://www.cronmaker.com/ Queues the job every minute .WithPriority(1) .Build(); // Assign the Job to the scheduler var schedule = schd.ScheduleJob(job, trigger); Console.WriteLine("Job '{0}' scheduled for '{1}'", "", schedule.ToString("r")); } // Get an instance of the Quartz.Net scheduler private static IScheduler GetScheduler() { try { var properties = new NameValueCollection(); properties["quartz.scheduler.instanceName"] = "ServerScheduler"; // set remoting expoter properties["quartz.scheduler.proxy"] = "true"; properties["quartz.scheduler.proxy.address"] = string.Format("tcp://{0}:{1}/{2}", "localhost", "555", "QuartzScheduler"); // Get a reference to the scheduler var sf = new StdSchedulerFactory(properties); return sf.GetScheduler(); } catch (Exception ex) { Console.WriteLine("Scheduler not available: '{0}'", ex.Message); throw; } } } }   The above highlighted values have been taken from the Quartz.config file, this file is available in the Quartz.net server installation directory. Implementation of my HelloWorldJob Class below. The HelloWorldJob class gets called to execute the job “WriteHelloToConsole” using the once every minute trigger set up for this job. The HelloWorldJob is a class that implements the interface IJob. I’ll walk you through the details of the implementation… - context is passed to the method execute by the quartz.net scheduler service. This has everything you need to pull out the job, trigger specific information. - for example. I have pulled out the value of the jobKey name, the fire time and next fire time. using Quartz; using System; namespace HelloWorldQuartzDotNet { class HelloWorldJob : IJob { public void Execute(IJobExecutionContext context) { try { Console.WriteLine("Job {0} fired @ {1} next scheduled for {2}", context.JobDetail.Key, context.FireTimeUtc.Value.ToString("r"), context.NextFireTimeUtc.Value.ToString("r")); Console.WriteLine("Hello World!"); } catch (Exception ex) { Console.WriteLine("Failed: {0}", ex.Message); } } } }   I’ll add a call to call the scheduler in the Main method in Program.cs using System; using System.Threading; namespace HelloWorldQuartzDotNet { class Program { static void Main(string[] args) { try { var sj = new ScheduledJob(); sj.Run(); Thread.Sleep(10000 * 10000); } catch (Exception ex) { Console.WriteLine("Failed: {0}", ex.Message); } } } }   This was third in the series of posts on enterprise scheduling using Quartz.net, in the next post I’ll be covering how to pass parameters to the scheduled task scheduled on Quartz.net windows service. Thank you for taking the time out and reading this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!

    Read the article
