Search Results

Search found 6048 results on 242 pages for 'linq to dataset'.


  • Improving Manageability of Virtual Environments

    - by Jeff Victor
    Boot Environments for Solaris 10 Branded Zones

    Until recently, Solaris 10 Branded Zones on Solaris 11 suffered one notable regression: Live Upgrade did not work. The individual packaging and patching tools work correctly, but the ability to upgrade Solaris while the production workload continued running did not exist. A recent Solaris 11 SRU (Solaris 11.1 SRU 6.4) restored most of that functionality, although with a slightly different concept, different commands, and without all of the feature details. This new method gives you the ability to create and manage multiple boot environments (BEs) for a Solaris 10 Branded Zone, to modify the active or any inactive BE, and to do so while the production workload continues to run.

    Background

    In case you are new to Solaris: Solaris includes a set of features that enables you to create a bootable Solaris image, called a Boot Environment (BE). This newly created image can be modified while the original BE is still running your workload(s). There are many benefits, including improved uptime and the ability to reboot into (or downgrade to) an older BE if a newer one has a problem. In Solaris 10 this set of features was named Live Upgrade. Solaris 11 applies the same basic concepts to the new packaging system (IPS) but there isn't a specific name for the feature set; the features are simply part of IPS. Solaris 11 Boot Environments are not discussed in this blog entry.

    Although a Solaris 10 system can have multiple BEs, until recently a Solaris 10 Branded Zone (BZ) in a Solaris 11 system did not have this ability. This limitation was addressed recently, and that enhancement is the subject of this blog entry. This new implementation uses two concepts. The first is the use of a ZFS clone for each BE. This makes it very easy to create a BE, or many BEs. This is a distinct advantage over the Live Upgrade feature set in Solaris 10, which had a practical limitation of two BEs on a system when using UFS. The second new concept is a very simple mechanism to indicate the BE that should be booted: a ZFS property. The new ZFS property is named com.oracle.zones.solaris10:activebe (isn't that creative?). It's important to note that the property is inherited from the original BE's file system by any BEs you create; in other words, all BEs in one zone have the same value for that property. When the (Solaris 11) global zone boots the Solaris 10 BZ, it boots the BE whose name is stored in the activebe property.

    Here is a quick summary of the actions you can use to manage these BEs:
    - To create a BE: create a ZFS clone of the zone's root dataset.
    - To activate a BE: set the ZFS property of the root dataset to indicate the BE.
    - To add a package or patch to an inactive BE: mount the inactive BE, add packages or patches to it, then unmount it.
    - To list the available BEs: use the "zfs list" command.
    - To destroy a BE: use the "zfs destroy" command.

    Preparation

    Before you can use the new features, you will need a Solaris 10 BZ on a Solaris 11 system. You can use these three steps - on a real Solaris 11.1 server or in a VirtualBox guest running Solaris 11.1 - to create a Solaris 10 BZ. The Solaris 11.1 environment must be at SRU 6.4 or newer.

    Create a flash archive on the Solaris 10 system:
    s10# flarcreate -n s10-system /net/zones/archives/s10-system.flar

    Configure the Solaris 10 BZ on the Solaris 11 system:
    s11# zonecfg -z s10z
    Use 'create' to begin configuring a new zone.
    zonecfg:s10z create -t SYSsolaris10
    zonecfg:s10z set zonepath=/zones/s10z
    zonecfg:s10z exit
    s11# zoneadm list -cv
    ID NAME    STATUS      PATH         BRAND      IP
    0  global  running     /            solaris    shared
    -  s10z    configured  /zones/s10z  solaris10  excl

    Install the zone from the flash archive:
    s11# zoneadm -z s10z install -a /net/zones/archives/s10-system.flar -p

    You can find more information about the migration of Solaris 10 environments to Solaris 10 Branded Zones in the documentation. The rest of this blog entry demonstrates the commands you can use to accomplish the aforementioned actions related to BEs.

    New features in action

    Note that the demonstration of the commands occurs in the Solaris 10 BZ, as indicated by the shell prompt "s10z# ". Many of these commands can be performed in the global zone instead, if you prefer. If you perform them in the global zone, you must change the ZFS file system names.

    Create

    The only complicated action is the creation of a BE. In the Solaris 10 BZ, create a new "boot environment" - a ZFS clone. You can assign any name to the final portion of the clone's name, as long as it meets the requirements for a ZFS file system name.

    s10z# zfs snapshot rpool/ROOT/zbe-0@snap
    s10z# zfs clone -o mountpoint=/ -o canmount=noauto rpool/ROOT/zbe-0@snap rpool/ROOT/newBE
    cannot mount 'rpool/ROOT/newBE' on '/': directory is not empty
    filesystem successfully created, but not mounted

    You can safely ignore that message: we already know that / is not empty! We have merely told ZFS that the default mountpoint for the clone is the root directory.

    List the available BEs and active BE

    Because each BE is represented by a clone of the rpool/ROOT dataset, listing the BEs is as simple as listing the clones.

    s10z# zfs list -r rpool/ROOT
    NAME              USED   AVAIL  REFER  MOUNTPOINT
    rpool/ROOT        3.55G  42.9G  31K    legacy
    rpool/ROOT/zbe-0  1K     42.9G  3.55G  /
    rpool/ROOT/newBE  3.55G  42.9G  3.55G  /

    The output shows that two BEs exist. Their names are "zbe-0" and "newBE". You can tell Solaris that one particular BE should be used when the zone next boots by using a ZFS property. Its name is com.oracle.zones.solaris10:activebe. The value of that property is the name of the clone that contains the BE that should be booted.

    s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
    NAME        PROPERTY                             VALUE  SOURCE
    rpool/ROOT  com.oracle.zones.solaris10:activebe  zbe-0  local

    Change the active BE

    When you want to change the BE that will be booted next time, you can just change the activebe property on the rpool/ROOT dataset.

    s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
    NAME        PROPERTY                             VALUE  SOURCE
    rpool/ROOT  com.oracle.zones.solaris10:activebe  zbe-0  local
    s10z# zfs set com.oracle.zones.solaris10:activebe=newBE rpool/ROOT
    s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
    NAME        PROPERTY                             VALUE  SOURCE
    rpool/ROOT  com.oracle.zones.solaris10:activebe  newBE  local
    s10z# shutdown -y -g0 -i6

    After the zone has rebooted:

    s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT
    rpool/ROOT  com.oracle.zones.solaris10:activebe  newBE  local
    s10z# zfs mount
    rpool/ROOT/newBE   /
    rpool/export       /export
    rpool/export/home  /export/home
    rpool              /rpool

    Mount the original BE to see that it's still there.

    s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0
    s10z# ls /mnt
    Desktop    export                          platform
    Documents  export.backup.20130607T214951Z  proc
    S10Flar    home                            rpool
    TT_DB      kernel                          sbin
    bin        lib                             system
    boot       lost+found                      tmp
    cdrom      mnt                             usr
    dev        net                             var
    etc        opt

    Patch an inactive BE

    At this point, you can modify the original BE. If you would prefer to modify the new BE, you can restore the original value to the activebe property and reboot, and then mount the new BE to /mnt (or another empty directory) and modify it. Let's mount the original BE so we can modify it. (The first command is only needed if you haven't already mounted that BE.)

    s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0
    s10z# patchadd -R /mnt -M /var/sadm/spool 104945-02

    Note that the typical usage will be:
    - Create a BE
    - Mount the new (inactive) BE
    - Use the package and patch tools to update the new BE
    - Unmount the new BE
    - Reboot

    Delete an inactive BE

    ZFS clones are children of their parent file systems. In order to destroy the parent, you must first "promote" the child. This reverses the parent-child relationship. (For more information on this, see the documentation.) The original rpool/ROOT file system is the parent of the clones that you create as BEs. In order to destroy an earlier BE that is the parent of other BEs, you must first promote one of the child BEs to be the ZFS parent. Only then can you destroy the original BE. Fortunately, this is easier to do than to explain:

    s10z# zfs promote rpool/ROOT/newBE
    s10z# zfs destroy rpool/ROOT/zbe-0
    s10z# zfs list -r rpool/ROOT
    NAME              USED   AVAIL  REFER  MOUNTPOINT
    rpool/ROOT        3.56G  269G   31K    legacy
    rpool/ROOT/newBE  3.56G  269G   3.55G  /

    Documentation

    This feature is so new, it is not yet described in the Solaris 11 documentation. However, MOS note 1558773.1 offers some details.

    Conclusion

    With this new feature, you can add packages and patches to the boot environments of a Solaris 10 Branded Zone. This ability improves the manageability of these zones, and makes their use more practical. It also means that you can use the existing P2V tools with earlier Solaris 10 updates, and modify the environments after they become Solaris 10 Branded Zones.
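    For quick reference, here is the whole BE lifecycle collected into one minimal sketch, assembled from the article's own commands. The dataset names (rpool/ROOT/zbe-0, rpool/ROOT/newBE) follow the examples above and should be adjusted to your zone's layout; the final unmount step is implied rather than shown in the article:

        # Create a BE: snapshot the active BE, then clone it
        s10z# zfs snapshot rpool/ROOT/zbe-0@snap
        s10z# zfs clone -o mountpoint=/ -o canmount=noauto rpool/ROOT/zbe-0@snap rpool/ROOT/newBE

        # Activate it: point the activebe property at the clone and reboot
        s10z# zfs set com.oracle.zones.solaris10:activebe=newBE rpool/ROOT
        s10z# shutdown -y -g0 -i6

        # Patch the now-inactive original BE: mount, patch, unmount
        s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0
        s10z# patchadd -R /mnt -M /var/sadm/spool 104945-02
        s10z# zfs unmount rpool/ROOT/zbe-0

        # Destroy the old BE: promote the surviving clone first, then destroy
        s10z# zfs promote rpool/ROOT/newBE
        s10z# zfs destroy rpool/ROOT/zbe-0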

    Read the article

  • Inconsistent results in R with RNetCDF - why?

    - by sarcozona
    I am having trouble extracting data from NetCDF data files using RNetCDF. The data files each have 3 dimensions (longitude, latitude, and a date) and 3 variables (latitude, longitude, and a climate variable). There are four datasets, each with a different climate variable. Here is some of the output from print.nc(p8m.tmax) for clarity. The other datasets are identical except for the specific climate variable. dimensions: month = UNLIMITED ; // (1368 currently) lat = 3105 ; lon = 7025 ; variables: float lat(lat) ; lat:long_name = "latitude" ; lat:standard_name = "latitude" ; lat:units = "degrees_north" ; float lon(lon) ; lon:long_name = "longitude" ; lon:standard_name = "longitude" ; lon:units = "degrees_east" ; short tmax(lon, lat, month) ; tmax:missing_value = -9999 ; tmax:_FillValue = -9999 ; tmax:units = "degree_celsius" ; tmax:scale_factor = 0.01 ; tmax:valid_min = -5000 ; tmax:valid_max = 6000 ; I am getting behavior I don't understand when I use the var.get.nc function from the RNetCDF package. For example, when I attempt to extract 82 values beginning at stval from the maximum temperature data (p8m.tmax <- open.nc(tmaxdataset.nc)) with > var.get.nc(p8m.tmax,'tmax', start=c(lon_val, lat_val, stval),count=c(1,1,82)) (where lon_val and lat_val specify the location in the dataset of the coordinates I'm interested in, and stval is set to which(time_vec==200201), which in this case equaled 1285) I get Error: Invalid argument But after successfully extracting 80 and 81 values > var.get.nc(p8m.tmax,'tmax', start=c(lon_val, lat_val, stval),count=c(1,1,80)) > var.get.nc(p8m.tmax,'tmax', start=c(lon_val, lat_val, stval),count=c(1,1,81)) the command with 82 works: > var.get.nc(p8m.tmax,'tmax', start=c(lon_val, lat_val, stval),count=c(1,1,82)) [1] 444 866 1063 ... [output snipped] The same problem occurs in the identically structured tmin file, but at 36 instead of 82: > var.get.nc(p8m.tmin,'tmin', start=c(lon_val, lat_val, stval),count=c(1,1,36)) produces Error: Invalid argument But after repeating with counts of 30, 31, etc., > var.get.nc(p8m.tmin,'tmin', start=c(lon_val, lat_val, stval), count=c(1,1,36)) works. These examples make it seem like the function is failing at the last count, but that actually isn't the case. In the first example, var.get.nc gave Error: Invalid argument after I asked for 84 values. I then narrowed the failure down to the 82nd count by varying the starting point in the dataset and asking for only 1 value at a time. The particular number the problem occurs at also varies. I can close and reopen the dataset and have the problem occur at a different location. In the particular examples above, lon_val and lat_val are 1595 and 1751, respectively, identifying the location in the dataset along the lat and lon dimensions for the latitude and longitude I'm interested in. The 1595th latitude and 1751st longitude are not the problem, however; the problem occurs with all other latitudes and longitudes I've tried. Varying the starting location in the dataset along the climate variable dimension (stval) and/or specifying it differently (as a number in the command instead of the object stval) also does not fix the problem. This problem doesn't always occur. I can run identical code three times in a row (clearing all objects in between runs) and get a different outcome each time. The first run may choke on the 7th entry I'm trying to get, the second might work fine, and the third run might choke on the 83rd entry. I'm absolutely baffled by such inconsistent behavior.
    The open.nc function has also started to fail with the same Error: Invalid argument. Like the var.get.nc problems, it also occurs inconsistently. Does anyone know what causes the initial failure to extract the variable? And how I might prevent it? Could it have to do with the size of the data files (~60GB each) and/or the fact that I'm accessing them through networked drives? This was also asked here: https://stat.ethz.ch/pipermail/r-help/2011-June/281233.html > sessionInfo() R version 2.13.0 (2011-04-13) Platform: i386-pc-mingw32/i386 (32-bit) locale: [1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252 [3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C [5] LC_TIME=English_United States.1252 attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] reshape_0.8.4 plyr_1.5.2 RNetCDF_1.5.2-2 loaded via a namespace (and not attached): [1] tools_2.13.0

    Read the article

  • TFS 2010 - TF14040 The Folder may not be checked out.

    - by Patricker
    I have a .NET 4 website in VS2010 stored in a TFS 2010 team project. I need to add a reference to System.Data.Linq.dll to the website. I am referencing a LINQ DataContext that is defined in another project and I get build errors saying that I need the reference to System.Data.Linq. I go up to the "Add Reference" menu option and add it like I would any normal reference, and it even shows up in the Web.config and in the Properties pages for the website... BUT if I build I still get the same error. So I found a place in my code where I was referencing the LINQ count function and it told me it was invalid because I was missing a reference and it offered to add the reference automatically. I told it to add the reference automatically and it is at this point that I get the error mentioned in the subject: TF14040: The folder $/Folder/Subfolder may not be checked out. No items were checked out I've done some research online but I haven't been able to find much. I saw on a blog that making the folder not readonly fixed the issue for him, but it didn't seem to work for me unless I misunderstood something. I tried loading up the project from source control onto a fresh computer where that project had never been loaded before and I can reproduce the issue the same way. Help would be greatly appreciated.

    Read the article

  • should I use Entity Framework instead of raw ADO.NET

    - by user110182
    I am new to CSLA and Entity Framework. I am creating a new CSLA / Silverlight application that will replace a 12-year-old Win32 C++ system. The old system uses a custom DCOM business object library and uses ODBC to get to SQL Server. The new system will not immediately replace the old system -- they must coexist against the same database for years to come. At first I thought EF was the way to go since it is the latest and greatest. After making a small EF model and only 2 CSLA editable root objects (I will eventually have hundreds of objects, as my DB has 800+ tables) I am seriously questioning the use of EF. In the current system I have the need many times to do fine-detail performance tuning of the queries, which I can do because of 100% control of the generated SQL. But it seems in EF that so much happens behind the scenes that I lose that control. Articles like http://toomanylayers.blogspot.com/2009/01/entity-framework-and-linq-to-sql.html don't help my impression of EF. People seem to like EF because of LINQ to EF, but since my criteria are passed between client and server as a criteria object, it seems like I could build queries just as easily without LINQ. I understand in WCF RIA that there is query projection (or something like that) where I can do client-side LINQ which does move to the server before translation into actual SQL, so in that case I can see the benefit of EF, but not in CSLA. If I use raw ADO.NET, will I regret my decision 5 years from now? Has anyone else made this choice recently, and which way did you go?
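    For readers weighing the same choice, the control trade-off described here can be made concrete. The following is an illustrative sketch only (the Customers table, Region column, and MyEntities context are hypothetical names, not from the question): with raw ADO.NET you write the exact SQL that runs; with LINQ to Entities the provider generates it for you.

        using System;
        using System.Data.SqlClient;

        class CustomerQueries
        {
            // Raw ADO.NET: the SQL sent to the server is exactly what you wrote,
            // so it can be hand-tuned (joins, hints, indexes) when profiling demands it.
            static void PrintCustomersByRegion(string connectionString, string region)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT Id, Name FROM Customers WHERE Region = @region", conn))
                {
                    cmd.Parameters.AddWithValue("@region", region);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
                    }
                }
            }

            // The LINQ to Entities equivalent (illustrative only -- MyEntities is a
            // hypothetical generated context, so this fragment won't compile standalone):
            //
            //   using (var context = new MyEntities())
            //   {
            //       var customers = context.Customers
            //                              .Where(c => c.Region == region)
            //                              .ToList();
            //   }
            //
            // Here the provider translates the expression tree to SQL for you;
            // convenient, but the generated SQL is out of your hands.
        }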

    Read the article

  • Error in ASP.NET MVC 2 View after Upgrading from ASP.NET 4.0 RC to RTM

    - by Chris
    In my View, I am trying to loop through a list in a LINQ object that is part of my View Model. This worked fine earlier today with the VS2010 RC and the .NET 4.0 RC. <% if (Model.User.RoleList.Count > 0 ) { %> <% foreach (var role in Model.User.RoleList) { %> <%: role.Name %><br /> <% } %> <% } else { %> <em>None</em><br /> <% } %> It used to happily spew out a list of the role names. No data or code has changed -- just the software upgrade from RC to RTM. The error I am getting is this: \Views\Users\Details.aspx(67): error CS0012: The type 'System.Data.Linq.EntitySet`1' is defined in an assembly that is not referenced. You must add a reference to assembly 'System.Data.Linq, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'. But System.Data.Linq IS referenced. I see it there in the references list. I tried deleting it and re-adding it but I get the same error. Any ideas?
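    A common cause of this class of error in views, offered here as a likely fix rather than the poster's confirmed resolution: ASPX views are compiled at runtime by ASP.NET, separately from the project, so the assembly must also be listed in the Web.config compilation section, not just in the project's references:

        <system.web>
          <compilation>
            <assemblies>
              <add assembly="System.Data.Linq, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
            </assemblies>
          </compilation>
        </system.web>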

    Read the article

  • C# using namespace directive in nested namespaces

    - by MoSlo
    Right, I've usually used 'using' directives as follows: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace AwesomeLib { //awesome award winning class declarations making use of Linq } I've recently seen examples such as: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace AwesomeLib { //awesome award winning class declarations making use of Linq namespace DataLibrary { using System.Data; //Data access layers and whatnot } } Granted, I understand that I can put USING inside of my namespace declaration. Such a thing makes sense to me if your namespaces are in the same root (they stay organized): using System; namespace 1 {} namespace 2 { using System.Data; } But what of nested namespaces? Personally, I would leave all USING declarations at the top where you can find them easily. Instead, it looks like they're being spread all over the source file. Is there benefit to the USING directives being used this way in nested namespaces? Such as memory management or the JIT compiler?
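    To make the resolution behavior concrete, here is a small self-contained sketch: a using directive placed inside a namespace is visible only within that namespace, so it can scope or disambiguate names more tightly than a file-level directive. It has no effect on memory management or the JIT compiler - using directives are purely compile-time name resolution.

        using System;

        namespace AwesomeLib
        {
            namespace DataLibrary
            {
                using System.Text; // in scope only inside DataLibrary

                internal static class Greeter
                {
                    internal static string Greet()
                    {
                        var sb = new StringBuilder(); // resolves here
                        sb.Append("hello from DataLibrary");
                        return sb.ToString();
                    }
                }
            }

            internal static class Other
            {
                internal static void Demo()
                {
                    // var sb = new StringBuilder(); // error: System.Text isn't in scope here
                    Console.WriteLine(DataLibrary.Greeter.Greet());
                }
            }
        }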

    Read the article

  • Constructing a WeakReference<T> throws COMException

    - by ChaseMedallion
    The following code: IDisposable d = ... new WeakReference<IDisposable>(d); has started throwing the following exception on SOME machines. What could cause this? System.Runtime.InteropServices.COMException: Unspecified error (Exception from HRESULT: 0x80004005 (E_FAIL)) EDIT: the machines that experience the error are running Windows Server 2008 R2. Windows Server 2012 and desktop machines running Windows 7 work fine. (This is true, but I now think a different issue is the relevant difference... see below.) EDIT: as an additional note, this occurred right after updating our codebase to Entity Framework 6.1.1-beta1. In the above code, the IDisposable is a class which wraps an EF DbContext. EDIT: why the votes to close? EDIT: the stack trace of the failure ends at the WeakReference<T> constructor called in the above code: at System.WeakReference`1..ctor(T target, Boolean trackResurrection) // from here on down it's code we wrote/simple LINQ. None of this code has changed recently; // we just upgraded to EF6 and saw this failure start happening at Core.Data.EntityFrameworkDataContext.RegisterDependentDisposable(IDisposable child) at Core.Data.ServiceFactory.GetConstructorParameter[TService](Type parameterType) at System.Linq.Enumerable.WhereSelectListIterator`2.MoveNext() at System.Linq.Buffer`1..ctor(IEnumerable`1 source) at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source) at Core.Data.ServiceFactory.CreateService[TService]() at MVC controller action method EDIT: it turns out that the machines having issues with this were running AppDynamics. Uninstalling that seems to have removed the issue.

    Read the article

  • 10 Essential Tools for building ASP.NET Websites

    - by Stephen Walther
    I recently put together a simple public website created with ASP.NET for my company at Superexpert.com. I was surprised by the number of free tools that I ended up using to put together the website. Therefore, I thought it would be interesting to create a list of essential tools for building ASP.NET websites. These tools work equally well with both ASP.NET Web Forms and ASP.NET MVC.

    Performance Tools

    After reading Steve Souders' two (very excellent) books on front-end website performance, High Performance Web Sites and Even Faster Web Sites, I have been super sensitive to front-end website performance. According to Souders' Performance Golden Rule: "Optimize front-end performance first, that's where 80% or more of the end-user response time is spent." You can use the tools below to reduce the size of the images, JavaScript files, and CSS files used by an ASP.NET application.

    1. Sprite and Image Optimization Framework

    CSS sprites were first described in an article written for A List Apart entitled CSS sprites: Image Slicing's Kiss of Death. When you use sprites, you combine multiple images used by a website into a single image. Next, you use CSS trickery to display particular sub-images from the combined image in a webpage. The primary advantage of sprites is that they reduce the number of requests required to display a webpage. Requesting a single large image is faster than requesting multiple small images. In general, the more resources - images, JavaScript files, CSS files - that must be moved across the wire, the slower your website. However, most people avoid using sprites because they require a lot of work. You need to combine all of the images and write just the right CSS rules to display the sub-images. The Microsoft Sprite and Image Optimization Framework enables you to avoid all of this work. The framework combines the images for you automatically. Furthermore, the framework includes an ASP.NET Web Forms control and an ASP.NET MVC helper that makes it easy to display the sub-images. You can download the Sprite and Image Optimization Framework from CodePlex at http://aspnet.codeplex.com/releases/view/50869.

    The Sprite and Image Optimization Framework was written by Morgan McClean, who worked in the office next to mine at Microsoft. Morgan was a scary-smart intern from Canada and we discussed the Framework while he was building it (I was really excited to learn that he was working on it). Morgan added some great advanced features to this framework. For example, the Sprite and Image Optimization Framework supports something called image inlining. When you use image inlining, the actual image is stored in the CSS file. Here's an example of what image inlining looks like:

    .Home_StephenWalther_small-jpg { width:75px; height:100px; background: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEsAAABkCAIAAABB1lpeAAAABGdBTUEAALGOfPtRkwAAACBjSFJNAACHDwAAjA8AAP1SAACBQAAAfXkAAOmLAAA85QAAGcxzPIV3AAAKLs+zNfREAAAAASUVORK5CYII=) no-repeat 0% 0%; }

    The actual image (in this case a picture of me that is displayed on the home page of the Superexpert.com website) is stored in the CSS file. If you visit the Superexpert.com website then very few separate images are downloaded. For example, all of the images with a red border in the screenshot below take advantage of CSS sprites. Unfortunately, there are some significant Gotchas that you need to be aware of when using the Sprite and Image Optimization Framework. There are workarounds for these Gotchas. I plan to write about these Gotchas and workarounds in a future blog entry.

    2. Microsoft Ajax Minifier

    Whenever possible you should combine, minify, compress, and cache with a far-future header all of your JavaScript and CSS files. The Microsoft Ajax Minifier makes it easy to minify JavaScript and CSS files. Don't confuse minification and compression; you need to do both. According to Souders, you can reduce the size of a JavaScript file by an additional 20% (on average) by minifying a JavaScript file after you compress the file. When you minify a JavaScript or CSS file, you use various tricks to reduce the size of the file before you compress the file. For example, you can minify a JavaScript file by replacing long JavaScript variable names with short variable names and removing unnecessary white space and comments. You can minify a CSS file by doing such things as replacing long color names such as #ffffff with shorter equivalents such as #fff.

    The Microsoft Ajax Minifier was created by Microsoft employee Ron Logan. Internally, this tool was being used by several large Microsoft websites. We also used the tool heavily on the ASP.NET team. I convinced Ron to publish the tool on CodePlex so that everyone in the world could take advantage of it. You can download the tool from the ASP.NET Ajax website and read documentation for the tool here. I created the installer for the Microsoft Ajax Minifier. When creating the installer, I also created a Visual Studio build task to make it easy to minify all of your JavaScript and CSS files automatically whenever you do a build within Visual Studio. Read the Ajax Minifier Quick Start to learn how to configure the build task.

    3. ySlow

    The ySlow tool is a free add-on for Firefox created by Yahoo that enables you to test the front-end of your website. For example, here are the current test results for the Superexpert.com website: the Superexpert.com website has an overall score of B (not perfect but not bad). The ySlow tool is not perfect. For example, the Superexpert.com website received a failing grade of F for not using a Content Delivery Network even though the website uses the Microsoft Ajax Content Delivery Network for JavaScript files such as jQuery.

    Uptime

    After publishing a website live to the world, you want to ensure that the website does not encounter any issues and that it stays live. I use the following tools to monitor the Superexpert.com website now that it is live.

    4. ELMAH

    ELMAH stands for Error Logging Modules and Handlers for ASP.NET. ELMAH enables you to record any errors that happen at your website so you can review them in the future. You can download ELMAH for free from the ELMAH project website. ELMAH works great with both ASP.NET Web Forms and ASP.NET MVC. You can configure ELMAH to store errors in a number of different stores including XML files, the Event Log, an Access database, a SQL database, an Oracle database, or in computer RAM. You also can configure ELMAH to email error messages to you when they happen. By default, you can access ELMAH by requesting the elmah.axd page from a website with ELMAH installed (this page is password-protected because secret information can be revealed in an error message). If you click on a particular error message, you can view the original Yellow Screen ASP.NET error message (even when the error message was never displayed to the actual user). I installed ELMAH by taking advantage of the new package manager for ASP.NET named NuGet (originally named NuPack). You can read the details about NuGet in the following blog entry by Scott Guthrie. You can download NuGet from CodePlex.

    5. Pingdom

    I use Pingdom to verify that the Superexpert.com website is always up. You can sign up for Pingdom by visiting Pingdom.com. You can use Pingdom to monitor a single website for free. At the Pingdom website, you configure the frequency that your website gets pinged. I verify that the Superexpert.com website is up every 5 minutes. I have the Pingdom service verify that it can retrieve the string "Contact Us" from the website homepage. If your website goes down, you can configure Pingdom so that it sends an email, Twitter, SMS, or iPhone alert. I use the Pingdom iPhone app.

    6. Host Tracker

    If your website does go down then you need some way of determining whether it is a problem with your local network or if your website is down for everyone. I use a website named Host-Tracker.com to check how badly a website is down. Here's what the Host-Tracker website displays for the Superexpert.com website when the website can be successfully pinged from everywhere in the world: notice that Host-Tracker pinged the Superexpert.com website from 68 locations including Roubaix, France and Scranton, PA.

    Debugging

    I mean debugging in the broadest possible sense. I use the following tools when building a website to verify that I have not made a mistake.

    7. HTML Spell Checker

    Why doesn't Visual Studio have a built-in spell checker? Don't know - I've always found this mysterious. Fortunately, however, a former member of the ASP.NET team wrote a free spell checker that you can use with your ASP.NET pages. I find a spell checker indispensable. It is easy to delude yourself that you are capable of perfect spelling. I'm always super embarrassed when I actually run the spell checking tool and discover all of my spelling mistakes. The fastest way to add the HTML Spell Checker extension to Visual Studio is to select the menu option Tools, Extension Manager within Visual Studio. Click on Online Gallery and search for HTML Spell Checker.

    8. IIS SEO Toolkit

    If people cannot find your website through Google then you should not even bother to create it. Microsoft has a great extension for IIS named the IIS Search Engine Optimization Toolkit that you can use to identify issues with your website that would hurt its page rank. You also can use this tool to quickly create a sitemap for your website that you can submit to Google or Bing. You can even generate the sitemap for an ASP.NET MVC website. Here's what the report overview for the Superexpert.com website looks like: notice that the Superexpert.com website had plenty of violations. For example, there are 65 cases in which a page has a broken hyperlink. You can drill into these violations to identify the exact page and location where these violations occur.

    9. LinqPad

    If your ASP.NET website accesses a database then you should be using LINQ to Entities with the Entity Framework. Using LINQ involves some magic. LINQ queries written in C# get converted into SQL queries for you. If you are not careful about how you write your LINQ queries, you could unintentionally build a really badly performing website. LinqPad is a free tool that enables you to experiment with your LINQ queries. It even works with Microsoft SQL CE 4 and Azure. You can use LinqPad to execute a LINQ to Entities query and see the results. You also can use it to see the resulting SQL that gets executed against the database.

    10. .NET Reflector

    I use .NET Reflector daily. The .NET Reflector tool enables you to take any assembly and disassemble the assembly into C# or VB.NET code. You can use .NET Reflector to see the "Source Code" of an assembly even when you do not have the actual source code. You can download a free version of .NET Reflector from the Redgate website. I use .NET Reflector primarily to help me understand what code is doing internally. For example, I used .NET Reflector with the Sprite and Image Optimization Framework to better understand how the MVC Image helper works.

    Summary

    In this blog entry, I've discussed several of the tools that I used to create the Superexpert.com website. These are tools that I use to improve the performance, improve the SEO, verify the uptime, or debug the Superexpert.com website. All of the tools discussed in this blog entry are free. Furthermore, all of these tools work with both ASP.NET Web Forms and ASP.NET MVC. Let me know if there are any tools that you use daily when building ASP.NET websites.
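    As a footnote to tool #1: the image-inlining trick is just a base64 data URI, which the framework generates for you. Here is a rough, self-contained illustration of the mechanics (the file name is made up for the example, and this is not how the framework itself is implemented):

        using System;
        using System.IO;

        class InlineImageDemo
        {
            static void Main()
            {
                // Read the image bytes and emit a CSS rule with an embedded data URI.
                byte[] bytes = File.ReadAllBytes("StephenWalther_small.jpg");
                string dataUri = "data:image/jpeg;base64," + Convert.ToBase64String(bytes);
                Console.WriteLine(
                    ".Home_StephenWalther_small-jpg {{ background: url({0}) no-repeat 0% 0%; }}",
                    dataUri);
            }
        }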

    Read the article

  • C#/.NET Little Wonders: The Joy of Anonymous Types

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. In the .NET 3 Framework, Microsoft introduced the concept of anonymous types, which provide a way to create quick, compiler-generated types at the point of instantiation. These may seem trivial, but are very handy for concisely creating lightweight, strongly-typed objects containing only read-only properties that can be used within a given scope.

    Creating an Anonymous Type

    In short, an anonymous type is a reference type that derives directly from object and is defined by its set of properties based on their names, number, types, and order given at initialization. In addition to just holding these properties, it is also given appropriate overridden implementations for Equals() and GetHashCode() that take into account all of the properties to correctly perform property comparisons and hashing. Also overridden is an implementation of ToString() which makes it easy to display the contents of an anonymous type instance in a fairly concise manner. To construct an anonymous type instance, you use basically the same initialization syntax as with a regular type. So, for example, if we wanted to create an anonymous type to represent a particular point, we could do this: 1: var point = new { X = 13, Y = 7 }; Note the similarity between anonymous type initialization and regular initialization. The main difference is that the compiler generates the type name and the properties (as readonly) based on the names and order provided, inferring their types from the expressions they are assigned to. It is key to remember that all of those factors (number, names, types, order of properties) determine the anonymous type. This is important, because while these two instances share the same anonymous type: 1: // same names, types, and order 2: var point1 = new { X = 13, Y = 7 }; 3: var point2 = new { X = 5, Y = 0 }; These similar ones do not: 1: var point3 = new { Y = 3, X = 5 }; // different order 2: var point4 = new { X = 3, Y = 5.0 }; // different type for Y 3: var point5 = new { MyX = 3, MyY = 5 }; // different names 4: var point6 = new { X = 1, Y = 2, Z = 3 }; // different count

    Limitations on Property Initialization Expressions

    The expression for a property in an anonymous type initialization cannot be null (though it can evaluate to null) or an anonymous function. For example, the following are illegal: 1: // Null can't be used directly. Null reference of what type? 2: var cantUseNull = new { Value = null }; 3: 4: // Anonymous methods cannot be used. 5: var cantUseAnonymousFxn = new { Value = () => Console.WriteLine("Can't.") }; Note that the restriction on null is just that you can't use it directly as the expression, because otherwise how would it be able to determine the type? You can, however, use it indirectly by assigning a null expression such as a typed variable with the value null, or by casting null to a specific type: 1: string str = null; 2: var fineIndirectly = new { Value = str }; 3: var fineCast = new { Value = (string)null }; All of the examples above name the properties explicitly, but you can also implicitly name properties if they are being set from a property, field, or variable. In these cases, when a field, property, or variable is used alone, and you don't specify a property name assigned to it, the new property will have the same name. For example: 1: int variable = 42; 2: 3: // creates two properties named variable and Now 4: var implicitProperties = new { variable, DateTime.Now }; Is the same type as: 1: var explicitProperties = new { variable = variable, Now = DateTime.Now }; But this only works if you are using an existing field, variable, or property directly as the expression. If you use a more complex expression then the name cannot be inferred: 1: // can't infer the name variable from variable * 2, must name explicitly 2: var wontWork = new { variable * 2, DateTime.Now }; In the example above, since we typed variable * 2, it is no longer just a variable and thus we would have to assign the property a name explicitly.

    ToString() on Anonymous Types

    One of the more trivial overrides that an anonymous type provides you is a ToString() method that prints the value of the anonymous type instance in much the same format as it was initialized (except with actual values instead of expressions, as appropriate of course). For example, if you had: 1: var point = new { X = 13, Y = 42 }; And then print it out: 1: Console.WriteLine(point.ToString()); You will get: 1: { X = 13, Y = 42 } While this isn't necessarily the most stunning feature of anonymous types, it can be handy for debugging or logging values in a fairly easy to read format.

    Comparing Anonymous Type Instances

    Because anonymous types automatically create appropriate overrides of Equals() and GetHashCode() based on the underlying properties, we can reliably compare two instances or get hash codes. For example, if we had the following 3 points: 1: var point1 = new { X = 1, Y = 2 }; 2: var point2 = new { X = 1, Y = 2 }; 3: var point3 = new { Y = 2, X = 1 }; If we compare point1 and point2 we'll see that Equals() returns true because the overridden version of Equals() sees that the types are the same (same number, names, types, and order of properties) and that the values are the same. In addition, because all equal objects should have the same hash code, we'll see that the hash codes evaluate to the same as well: 1: // true, same type, same values 2: Console.WriteLine(point1.Equals(point2)); 3: 4: // true, equal anonymous type instances always have same hash code 5: Console.WriteLine(point1.GetHashCode() == point2.GetHashCode()); However, if we compare point2 and point3 we get false. Even though the names, types, and values of the properties are the same, the order is not, thus they are two different types and cannot be compared (and thus return false). And, since they are not equal objects (even though they have the same value) there is a good chance their hash codes are different as well (though not guaranteed): 1: // false, different types 2: Console.WriteLine(point2.Equals(point3)); 3: 4: // quite possibly false (was false on my machine) 5: Console.WriteLine(point2.GetHashCode() == point3.GetHashCode());

    Using Anonymous Types

    Now that we've created instances of anonymous types, let's actually use them. The property names (whether implicit or explicit) are used to access the individual properties of the anonymous type. The main thing, once again, to keep in mind is that the properties are readonly, so you cannot assign the properties a new value (note: this does not mean that instances referred to by a property are immutable - for more information check out C#/.NET Fundamentals: Returning Data Immutably in a Mutable World). Thus, if we have the following anonymous type instance: 1: var point = new { X = 13, Y = 42 }; We can get the properties as you'd expect: 1: Console.WriteLine("The point is: ({0},{1})", point.X, point.Y); But we cannot alter the property values: 1: // compiler error, properties are readonly 2: point.X = 99; Further, since the anonymous type name is only known by the compiler, there is no easy way to pass anonymous type instances outside of a given scope. The only real choices are to pass them as object or dynamic. But really that is not the intention of using anonymous types. If you find yourself needing to pass an anonymous type outside of a given scope, you should really consider making a POCO (Plain Old CLR Type - i.e. a class that contains just properties to hold data with little/no business logic) instead. Given that, why use them at all? Couldn't you always just create a POCO to represent every anonymous type you needed? Sure you could, but then you might litter your solution with many small POCO classes that have very localized uses. It turns out this is the key to when to use anonymous types to your advantage: when you just need a lightweight type in a local context to store intermediate results, consider an anonymous type - but when that result is more long-lived and used outside of the current scope, consider a POCO instead. So what do we mean by intermediate results in a local context? Well, a classic example would be filtering down results from a LINQ expression. For example, let's say we had a List<Transaction>, where Transaction is defined something like: 1: public class Transaction 2: { 3: public string UserId { get; set; } 4: public DateTime At { get; set; } 5: public decimal Amount { get; set; } 6: // ... 7: } And let's say we had this data in our List<Transaction>: 1: var transactions = new List<Transaction> 2: { 3: new Transaction { UserId = "Jim", At = DateTime.Now, Amount = 2200.00m }, 4: new Transaction { UserId = "Jim", At = DateTime.Now, Amount = -1100.00m }, 5: new Transaction { UserId = "Jim", At = DateTime.Now.AddDays(-1), Amount = 900.00m }, 6: new Transaction { UserId = "John", At = DateTime.Now.AddDays(-2), Amount = 300.00m }, 7: new Transaction { UserId = "John", At = DateTime.Now, Amount = -10.00m }, 8: new Transaction { UserId = "Jane", At = DateTime.Now, Amount = 200.00m }, 9: new Transaction { UserId = "Jane", At = DateTime.Now, Amount = -50.00m }, 10: new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = -100.00m }, 11: new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = 300.00m }, 12: }; So let's say we wanted to get the transactions for each day for each user. That is, for each day we'd want to see the transactions each user performed. We could do this very simply with a nice LINQ expression, without the need of creating any POCOs: 1: // group the transactions based on an anonymous type with properties UserId and Date: 2: byUserAndDay = transactions 3: .GroupBy(tx => new { tx.UserId, tx.At.Date }) 4: .OrderBy(grp => grp.Key.Date) 5: .ThenBy(grp => grp.Key.UserId); Now, those of you who have attempted to use custom classes as a grouping type before (such as GroupBy(), Distinct(), etc.) may have discovered the hard way that LINQ gets a lot of its speed by utilizing not only Equals(), but also GetHashCode() on the type you are grouping by. Thus, when you use custom types for these purposes, you generally end up having to write custom Equals() and GetHashCode() implementations or you won't get the results you were expecting (the default implementations of Equals() and GetHashCode() are reference equality and reference identity based, respectively). As we said before, it turns out that anonymous types already do these critical overrides for you. This makes them even more convenient to use! Instead of creating a small POCO to handle this grouping, and then having to implement a custom Equals() and GetHashCode() every time, we can just take advantage of the fact that anonymous types automatically override these methods with appropriate implementations that take into account the values of all of the properties. Now, we can look at our results: 1: foreach (var group in byUserAndDay) 2: { 3: // the group's Key is an instance of our anonymous type 4: Console.WriteLine("{0} on {1:MM/dd/yyyy} did:", group.Key.UserId, group.Key.Date); 5: 6: // each grouping contains a sequence of the items. 7: foreach (var tx in group) 8: { 9: Console.WriteLine("\t{0}", tx.Amount); 10: } 11: } And see: 1: Jaime on 06/18/2012 did: 2: -100.00 3: 300.00 4: 5: John on 06/19/2012 did: 6: 300.00 7: 8: Jim on 06/20/2012 did: 9: 900.00 10: 11: Jane on 06/21/2012 did: 12: 200.00 13: -50.00 14: 15: Jim on 06/21/2012 did: 16: 2200.00 17: -1100.00 18: 19: John on 06/21/2012 did: 20: -10.00 Again, sure we could have just built a POCO to do this, given it an appropriate Equals() and GetHashCode() method, but that would have bloated our code with so many extra lines and been more difficult to maintain if the properties change.

    Summary

    Anonymous types are one of those Little Wonders of the .NET language that are perfect at exactly that time when you need a temporary type to hold a set of properties together for an intermediate result. While they are not very useful beyond the scope in which they are defined, they are excellent in LINQ expressions as a way to create and use intermediary values for further expressions and analysis. Anonymous types are defined by the compiler based on the number, type, names, and order of properties created, and they automatically implement appropriate Equals() and GetHashCode() overrides (as well as ToString()), which makes them ideal for LINQ expressions where you need to create a set of properties to group, evaluate, etc.

    Technorati Tags: C#,CSharp,.NET,Little Wonders,Anonymous Types,LINQ

    Read the article

  • To sample or not to sample...

    - by [email protected]
    Ideally, we would know the exact answer to every question. How many people support presidential candidate A vs. B? How many people suffer from H1N1 in a given state? Does this batch of manufactured widgets have any defective parts? Knowing exact answers is expensive in terms of time and money and, in most cases, is impractical if not impossible. Consider asking every person in a region for their candidate preference, testing every person with flu symptoms for H1N1 (assuming every person reported when they had flu symptoms), or destructively testing widgets to determine if they are "good" (leaving no product to sell). Knowing exact answers, fortunately, isn't necessary or even useful in many situations. Understanding the direction of a trend or statistically significant results may be sufficient to answer the underlying question: who is likely to win the election, have we likely reached a critical threshold for flu, or is this batch of widgets good enough to ship? Statistics help us to answer these questions with a certain degree of confidence; the focus there is on how we collect data. In data mining, we focus on the use of data, that is, data that has already been collected. In some cases, we may have all the data (all purchases made by all customers); in others, the data may have been collected using sampling (voters, their demographics and candidate choice). Building data mining models on all of your data can be expensive in terms of time and hardware resources. Consider a company with 40 million customers. Do we need to mine all 40 million customers to get useful data mining models? The quality of models built on all data may be no better than models built on a relatively small sample. Determining how much is a reasonable amount of data involves experimentation. When starting the model building process on large datasets, it is often more efficient to begin with a small sample, perhaps 1000 - 10,000 cases (records) depending on the algorithm, source data, and hardware. This allows you to see quickly what issues might arise with choice of algorithm, algorithm settings, data quality, and need for further data preparation. Instead of waiting for a model on a large dataset to build, only to find that the results don't meet expectations, start small. Once you are satisfied with the results on the initial sample, you can take a larger sample to see if model quality improves, and to get a sense of how the algorithm scales to the particular dataset. If model accuracy or quality continues to improve, consider increasing the sample size. Sampling in data mining is also used to produce a held-aside or test dataset for assessing classification and regression model accuracy. Here, we reserve some of the build data (data that includes known target values) to be used for an honest estimate of model error using data the model has not seen before. This sampling transformation is often called a split because the build data is split into two randomly selected sets, often with 60% of the records being used for model building and 40% for testing. Sampling must be performed with care, as it can adversely affect model quality and usability. Even a truly random sample doesn't guarantee that all values are represented in a given attribute. This is particularly troublesome when the attribute with omitted values is the target. A predictive model that has not seen any examples for a particular target value can never predict that target value!
For other attributes, values may consist of a single value (a constant attribute) or all unique values (an identifier attribute), each of which may be excluded during mining. Values from categorical predictor attributes that didn't appear in the training data are not used when testing or scoring datasets. In subsequent posts, we'll talk about three sampling techniques using Oracle Database: simple random sampling without replacement, stratified sampling, and simple random sampling with replacement.
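    As a rough sketch of the 60/40 split described above (the table and column names below are illustrative assumptions, not from this post, which defers sampling techniques to later entries), one deterministic way to split build data in Oracle is to hash each case identifier into buckets:

        -- ORA_HASH maps each case id into buckets 0..99;
        -- buckets 0-59 (~60%) build the model, 60-99 (~40%) are held aside for testing.
        CREATE VIEW build_set AS
          SELECT * FROM build_data WHERE ORA_HASH(case_id, 99) < 60;

        CREATE VIEW test_set AS
          SELECT * FROM build_data WHERE ORA_HASH(case_id, 99) >= 60;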

    Read the article

  • How do I make Master/Detail subreports in ReportBuilder come out right?

    - by Mason Wheeler
    I've got a report in ReportBuilder that's supposed to report on two objects. I didn't create this report, and I can't ask the person who did about how it works. Before running the report, we have some code that goes through, finds all the properties on the objects, and loads them into a memory dataset that looks like this: OBJECT_ID: TStringField PROP_NAME: TStringField PROP_VALUE: TStringField The report engine then creates a line on the report for each property in this dataset. This is implemented in a sub-report, whose parent only contains an OBJECT_ID, which is a human-readable name. Everything was going great until we had to display a "comment" of arbitrary size in the report. I made a second sub-report with a TMemoField so it could hold the text, and set the report up in the report designer. What I expect when I run the report is something that looks like this: HEADER Object 1 properties Object 1 comment Object 2 properties Object 2 comment I've managed to get just about everything but that. I used the MasterDataPipeline and MasterFieldLinks properties of the sub-report's pipelines to try to link the OBJECT_IDs of the sub-reports to the OBJECT_ID of the header, and that's the closest I've managed to come, but now what I see is: HEADER Object 1 properties Object 1 comment Object 2 comment The "Object 2 properties" section is nowhere to be seen, even though I've manually verified that the data is making it into the dataset correctly. This is driving me nuts. Any ReportBuilder gurus out there know what's going on and how to fix it?

    Read the article

  • SSRS for Sharepoint, Images in a report from a Sharepoint List URL?

    - by James Polhemus
    Greetings Sabios, I have several reports I run successfully where the data comes from a Sharepoint list in the form of an XML dataset. I am however having trouble with one. I have a report that pulls an image file onto the main body of the report. This data too comes from a Sharepoint list in the form of an XML dataset which sends me the URL to the jpeg or bmp or gif... whatever the case may be. I can successfully pull this off in my own Visual Studio IDE. My Local Report Server will render it as well. It won't run on my Sharepoint Report Server (my MOSS runs through https while my Sharepoint Report Server is http; might this matter?). When I upload it to Sharepoint and run it through the Sharepoint Report Server, I get back EVERYTHING in the report Header and Footer (dataset text and embedded Images) but just a big RED X where the Main Image should be. I have done everything the boards say: A. I made sure the Unattended Execution Account is running on the Reports Server B. I have ensured the URL comes back in clean format (else the images wouldn't render locally either and they do) The report logs throw this exception: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ContainerTypeNotSupportedException: The target location you specified is not supported by the report server. A report definition (.rdl), report model (.smdl), resource, or shared data source (.rsds) file must be located within a library or a folder within it., ; Info: Microsoft.ReportingServices.Diagnostics.Utilities.ContainerTypeNotSupportedException: The target location you specified is not supported by the report server. A report definition (.rdl), report model (.smdl), resource, or shared data source (.rsds) file must be located within a library or a folder within it. Any takers? Even my Sharepoint Administrator can't help me:) James

    Read the article

  • Reading XML outer tables

    - by Sathish
    I have an XML file as shown below. If I convert this to a DataSet, the dataset will contain 3 tables with table names Table Name1, Table Name2 and Table Name3, but I want to get this information without converting it to a DataSet. Basically I want to get all the outer table names out of my XML. Please help me with a piece of code. <Main Table> <Table Name1> <Something> <Something> <Something> <Something> </Table Name1> <Table Name2> <Something> <Something> <Something> <Something> </Table Name2> <Table Name3> <Something> <Something> <Something> </Table Name3> </Main Table>
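    A sketch of one way to read those names without a DataSet, using LINQ to XML. Note that the sample above is not well-formed XML (element names cannot contain spaces), so this assumes corrected names like TableName1; the file name is illustrative:

        using System;
        using System.Xml.Linq;

        class OuterTableNames
        {
            static void Main()
            {
                // The direct children of the root element are the "outer tables"
                // that DataSet.ReadXml would have turned into DataTables.
                XDocument doc = XDocument.Load("data.xml");
                foreach (XElement table in doc.Root.Elements())
                    Console.WriteLine(table.Name.LocalName);
            }
        }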

    Read the article

  • Can I indicate where my MySQL parameter should go more meaningfully than just having a ? to mark the position?

    - by Paul H
    I've got a chunk of code where I can pass info into a MySQL command using parameters through an ODBC connection. Example code showing surname passed in using string surnameToLookFor: using (OdbcConnection DbConn = new OdbcConnection( connectToDB )) { OdbcDataAdapter cmd = new OdbcDataAdapter( "SELECT Firstname, Address, Postcode FROM customers WHERE Surname = ?", DbConn); OdbcParameter odbcParam = new OdbcParameter("surname", surnameToLookFor); cmd.SelectCommand.Parameters.Add(odbcParam); cmd.Fill(dsCustomers, "customers"); } What I'd like to know is whether I can indicate where my parameter should go more meaningfully than just having a ? to mark the position - as I could see this getting quite hard to debug if there are multiple parameters being replaced. I'd like to provide a name to the parameter in a manner something like this: SELECT Firstname, Address, Postcode FROM customers WHERE Surname = ?surname When I try this it just chokes. The following code public System.Data.DataSet Customer_Open(string sConnString, long ld) { using (MySqlConnection oConn = new MySqlConnection(sConnString)) { oConn.Open(); MySqlCommand oCommand = oConn.CreateCommand(); oCommand.CommandText = "select * from cust_customer where id=?id"; MySqlParameter oParam = oCommand.Parameters.Add("?id", MySqlDbType.Int32); oParam.Value = ld; oCommand.Connection = oConn; DataSet oDataSet = new DataSet(); MySqlDataAdapter oAdapter = new MySqlDataAdapter(); oAdapter.SelectCommand = oCommand; oAdapter.Fill(oDataSet); oConn.Close(); return oDataSet; } } is from http://www.programmingado.net/a-389/MySQL-NET-parameters-in-query.aspx and includes the fragment where id=?id, which would be ideal. Is this only available through the .NET connector rather than ODBC? If it is possible to do using ODBC, how would I need to change my code fragment to enable this?

    Read the article

  • Export Multiple Crystal Reports ASP.NET

    - by AProgrammer
    Hey all, I want to export 2 different reports when I click an Export button. The problem is the routine only fires once and I only get one report to print out. Am I doing something wrong? I think it has something to do with the HttpResponse, but I'm not sure. Here's my code:

        Dim badgeSize As Integer = 0            'Drop Down selection
        Dim badgeData As New DataSet            'Visitor Badge Data
        Dim badgeEmployeeData As New DataSet    'Employee Badge Data
        Dim badgeTotals As Integer = 0          'Totals for both

        badgeSize = ddlBadgeSize.SelectedValue

        ' Get Visitor Data
        badgeData = _DatabaseAccess.GetProjectReportData(sessionInfo.myEventID, sessionInfo.EventCreator)
        ' Get Employee Data
        badgeEmployeeData = _DatabaseAccess.GetProjectReportEmployeeData(sessionInfo.myEventID, sessionInfo.EventCreator)
        'Obtain Totals
        badgeTotals = badgeData.Tables(0).Rows.Count + badgeEmployeeData.Tables(0).Rows.Count

        If badgeTotals = 0 Then
            ShowMessage("There are no badges to print.")
            Exit Sub
        End If

        If badgeSize.Equals(0) Then 'Small
            If badgeEmployeeData.Tables(0).Rows.Count > 0 Then
                If badgeEmployeeData.Tables(0).Rows.Count >= 6 Then
                    PrintProjectBadges(badgeEmployeeData, "Employee", badgeSize)
                Else
                    PrintStandardDymo(badgeEmployeeData, "Employee", 1)
                End If
            End If
            If badgeData.Tables(0).Rows.Count > 0 Then
                If badgeData.Tables(0).Rows.Count >= 6 Then
                    PrintProjectBadges(badgeData, "Visitor", badgeSize)
                Else
                    PrintStandardDymo(badgeData, "Visitor", 1)
                End If
            End If
        Else
            'do something else
        End If

    And the report code:

        Private Sub PrintProjectBadges(ByVal theData As DataSet, ByVal badgeType As String, ByVal badgeSize As Integer)
            Dim ourReport As New ReportDocument
            Dim crConnectionInfo As New ConnectionInfo(SetCrystalConnection)
            If badgeSize = 0 Then
                Try
                    If badgeType = "Visitor" Then
                        ourReport.Load(Server.MapPath("SmallProjectBadge.rpt"), OpenReportMethod.OpenReportByDefault) 'LIVE SERVER USE
                    Else
                        ourReport.Load(Server.MapPath("SmallProjectEmployeeBadge.rpt"), OpenReportMethod.OpenReportByDefault) 'LIVE SERVER USE
                    End If
                Catch ex As Exception
                    Dim TraceList As New ArrayList
                    TraceList.Add("DBLog")
                    DatabaseAccess.WriteToErrorLog("Visitor Registration", "Printing Project Badges", ex.Message, TraceEventType.Information, 1, TraceList)
                    Exit Sub
                End Try
                ourReport.SetDataSource(theData.Tables("Project"))
            Else
                'Do something else...
            End If

            Response.Buffer = True
            'Clear the response content and headers
            Response.ClearContent()
            Response.ClearHeaders()
            SetLogon(ourReport, crConnectionInfo)
            'Export the report to the Response stream in PDF format with file name Visitor_Badges
            ourReport.ExportToHttpResponse(ExportFormatType.PortableDocFormat, Response, True, "Visitor_Badges")
            Response.End()
            'Response.Close()
        End Sub

    Any help would be much appreciated.
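
    One thing in the snippet is decisive: ExportToHttpResponse followed by Response.End() terminates the HTTP response after the first report, and a single response can only deliver one document anyway. A common workaround is to export each report to a stream and send one combined file. The sketch below is hedged: it is written in C# like most of this page, the report variables are hypothetical, already-loaded ReportDocument instances, and System.IO.Compression's ZipArchive assumes .NET 4.5 or later (any zip library would do on older frameworks):

        using System.IO;
        using System.IO.Compression;
        using CrystalDecisions.CrystalReports.Engine;
        using CrystalDecisions.Shared;

        protected void ExportBothReports(ReportDocument visitorReport, ReportDocument employeeReport)
        {
            using (MemoryStream zipStream = new MemoryStream())
            {
                using (ZipArchive zip = new ZipArchive(zipStream, ZipArchiveMode.Create, leaveOpen: true))
                {
                    AddPdf(zip, visitorReport, "Visitor_Badges.pdf");
                    AddPdf(zip, employeeReport, "Employee_Badges.pdf");
                }
                Response.ClearContent();
                Response.ClearHeaders();
                Response.ContentType = "application/zip";
                Response.AddHeader("Content-Disposition", "attachment; filename=Badges.zip");
                zipStream.Position = 0;
                zipStream.CopyTo(Response.OutputStream);
                Response.End(); // end the response once, after everything is written
            }
        }

        private static void AddPdf(ZipArchive zip, ReportDocument report, string name)
        {
            // ExportToStream renders the report without touching the Response,
            // which is what lets two reports share one HTTP round trip.
            using (Stream pdf = report.ExportToStream(ExportFormatType.PortableDocFormat))
            using (Stream entry = zip.CreateEntry(name).Open())
            {
                pdf.CopyTo(entry);
            }
        }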

    Read the article

  • Create a unique ID by fuzzy matching of names (via agrep using R)

    - by tbrambor
    Using R, I am trying to match on people's names in a dataset structured by year and city. Due to some spelling mistakes, exact matching is not possible, so I am trying to use agrep() to fuzzy match names. A sample chunk of the dataset is structured as follows:

        df <- data.frame(matrix(
          c("1200013","1200013","1200013","1200013","1200013","1200013","1200013","1200013",
            "1996","1996","1996","1996","2000","2000","2004","2004",
            "AGUSTINHO FORTUNATO FILHO","ANTONIO PEREIRA NETO","FERNANDO JOSE DA COSTA",
            "PAULO CEZAR FERREIRA DE ARAUJO","PAULO CESAR FERREIRA DE ARAUJO",
            "SEBASTIAO BOCALOM RODRIGUES","JOAO DE ALMEIDA","PAULO CESAR FERREIRA DE ARAUJO"),
          ncol=3, dimnames=list(seq(1:8), c("citycode","year","candidate"))
        ))

    The neat version:

          citycode year                      candidate
        1  1200013 1996      AGUSTINHO FORTUNATO FILHO
        2  1200013 1996           ANTONIO PEREIRA NETO
        3  1200013 1996         FERNANDO JOSE DA COSTA
        4  1200013 1996 PAULO CEZAR FERREIRA DE ARAUJO
        5  1200013 2000 PAULO CESAR FERREIRA DE ARAUJO
        6  1200013 2000    SEBASTIAO BOCALOM RODRIGUES
        7  1200013 2004                JOAO DE ALMEIDA
        8  1200013 2004 PAULO CESAR FERREIRA DE ARAUJO

    I'd like to check in each city separately whether there are candidates appearing in several years. E.g. in the example, PAULO CEZAR FERREIRA DE ARAUJO / PAULO CESAR FERREIRA DE ARAUJO appears twice (with a spelling mistake). Each candidate across the entire data set should be assigned a unique numeric candidate ID. The dataset is fairly large (5500 cities, approx. 100K entries), so somewhat efficient coding would be helpful. Any suggestions as to how to implement this?

    Read the article

  • Neural Network with softmax activation

    - by Cambium
    This is more or less a research project for a course, and my understanding of NNs is very/fairly limited, so please be patient :) I am currently in the process of building a neural network that attempts to examine an input dataset and output the probability/likelihood of each classification (there are 5 different classifications). Naturally, the sum of all output nodes should add up to 1. Currently, I have two layers, and I set the hidden layer to contain 10 nodes. I came up with two different types of implementations: 1) logistic sigmoid for hidden layer activation, softmax for output activation; 2) softmax for both hidden layer and output activation. I am using gradient descent to find local maxima in order to adjust the hidden nodes' weights and the output nodes' weights. I am certain that I have this correct for sigmoid. I am less certain with softmax (or whether I can use gradient descent at all); after a bit of researching, I couldn't find the answer and decided to compute the derivative myself, obtaining softmax'(x) = softmax(x) - softmax(x)^2 (this returns a column vector of size n). I have also looked into the MATLAB NN toolkit; the derivative of softmax provided by the toolkit returned a square matrix of size nxn, where the diagonal coincides with the softmax'(x) that I calculated by hand, and I am not sure how to interpret the output matrix. I ran each implementation with a learning rate of 0.001 and 1000 iterations of back propagation. However, my NN returns 0.2 (an even distribution) for all five output nodes, for any subset of the input dataset. My conclusions:
    o I am fairly certain that my gradient descent is done incorrectly, but I have no idea how to fix this.
    o Perhaps I am not using enough hidden nodes.
    o Perhaps I should increase the number of layers.
    Any help would be greatly appreciated! The dataset I am working with can be found here (processed Cleveland): http://archive.ics.uci.edu/ml/datasets/Heart+Disease
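
    For reference, one detail here is a standard identity rather than anything specific to this code: the derivative of softmax is a full Jacobian matrix, not a vector. With

        \[
        s_i = \frac{e^{x_i}}{\sum_k e^{x_k}}, \qquad
        \frac{\partial s_i}{\partial x_j} = s_i(\delta_{ij} - s_j),
        \]

    the diagonal entries (i = j) are s_i - s_i^2, which is exactly the hand-computed softmax'(x) and the diagonal of the MATLAB toolkit's n-by-n matrix; the off-diagonal entries -s_i s_j are the cross-terms a column vector discards. Backpropagation through a softmax layer needs the whole Jacobian, although with a cross-entropy loss the output-layer error collapses to the well-known s - t form.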

    Read the article

  • MS Dynamics CRM trapping .NET error before I can handle it

    - by clifgriffin
    This is a fun one. I have written a custom search page that provides faster, more user-friendly searches than the default Contacts view and also allows searching of Leads and Contacts simultaneously. It uses GridViews bound to SqlDataSources that query filtered views. I'm sure someone will complain that I'm not using the web services for this, but this is just the design decision we made. These GridViews live in UpdatePanels to enable very slick AJAX updates upon search. It's all working great. Nearly ready to be deployed, except for one thing: some long-running searches are triggering an uncatchable SQL timeout exception.

        [SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.]
           at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
           at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
           at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
           at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
           at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()
           at System.Data.SqlClient.SqlDataReader.get_MetaData()
           at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
           at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
           at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
           at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)
           at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)
           at System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior)
           at System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior)
           at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
           at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
           at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable)
           at System.Web.UI.WebControls.SqlDataSourceView.ExecuteSelect(DataSourceSelectArguments arguments)
           at System.Web.UI.DataSourceView.Select(DataSourceSelectArguments arguments, DataSourceViewSelectCallback callback)
           at System.Web.UI.WebControls.DataBoundControl.PerformSelect()
           at System.Web.UI.WebControls.BaseDataBoundControl.DataBind()
           at System.Web.UI.WebControls.GridView.DataBind()
           at System.Web.UI.WebControls.BaseDataBoundControl.EnsureDataBound()
           at System.Web.UI.WebControls.CompositeDataBoundControl.CreateChildControls()
           at System.Web.UI.Control.EnsureChildControls()
           at System.Web.UI.Control.PreRenderRecursiveInternal()
           at System.Web.UI.Control.PreRenderRecursiveInternal()
           at System.Web.UI.Control.PreRenderRecursiveInternal()
           at System.Web.UI.Control.PreRenderRecursiveInternal()
           at System.Web.UI.Control.PreRenderRecursiveInternal()
           at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

    I found that CRM is doing a Server.Transfer to capture this error, because my UpdatePanels started throwing JavaScript errors when this error would occur. I was only able to get the full error message by using the JavaScript debugger in IE. Having found this error, I thought the solution would be simple: I just needed to wrap my databind calls in try/catch blocks to capture any errors. Unfortunately, it seems CRM's IIS configuration has the magic ability to capture this error before it ever gets back to my code. Using the debugger I never see it. It never gets to my catch blocks, but it's clearly happening in the SqlDataSource, which is clearly (by the stack trace) being triggered by my GridView bind. Any ideas on this? It's driving me crazy. Code behind (with some irrelevant functions omitted):

        protected void Page_Load(object sender, EventArgs e)
        {
            //Initialize some stuff
            this.bannerOracle = new OdbcConnection(ConfigurationManager.ConnectionStrings["OracleConnectionString"].ConnectionString);
            //Prospect default
            HideProspects();
            HideProspectAddressColumn();
            //Contacts default
            HideContactAddressColumn();
            //Default error messages
            gvContacts.EmptyDataText = "Sad day. Your search returned no contacts.";
            gvProspects.EmptyDataText = "Sad day. Your search returned no prospects.";
            //New search
            try
            {
                SearchContact(null, -1);
            }
            catch
            {
                gvContacts.EmptyDataText = "Oops! An error occurred. This may have been a timeout. Please try your search again.";
                gvContacts.DataSource = null;
                gvContacts.DataBind();
            }
        }

        protected void txtSearchString_TextChanged(object sender, EventArgs e)
        {
            if (!String.IsNullOrEmpty(txtSearchString.Text))
            {
                try
                {
                    SearchContact(txtSearchString.Text, Convert.ToInt16(lstSearchType.SelectedValue));
                }
                catch
                {
                    gvContacts.EmptyDataText = "Oops! An error occurred. This may have been a timeout. Please try your search again.";
                    gvContacts.DataSource = null;
                    gvContacts.DataBind();
                }
                if (chkProspects.Checked == true)
                {
                    try
                    {
                        SearchProspect(txtSearchString.Text, Convert.ToInt16(lstSearchType.SelectedValue));
                    }
                    catch
                    {
                        gvProspects.EmptyDataText = "Oops! An error occurred. This may have been a timeout. Please try your search again.";
                        gvProspects.DataSource = null;
                        gvProspects.DataBind();
                    }
                    finally
                    {
                        ShowProspects();
                    }
                }
                else
                {
                    HideProspects();
                }
            }
        }

        protected void SearchContact(string search, int type)
        {
            SqlCRM_Contact.ConnectionString = ConfigurationManager.ConnectionStrings["MSSQLConnectionString"].ConnectionString;
            gvContacts.DataSourceID = "SqlCRM_Contact";
            string strQuery = "";
            string baseQuery = @"SELECT filteredcontact.contactid, filteredcontact.new_libertyid, filteredcontact.fullname,
                'none' AS line1, filteredcontact.emailaddress1, filteredcontact.telephone1,
                filteredcontact.birthdateutc AS birthdate, filteredcontact.gendercodename
                FROM filteredcontact ";
            switch (type)
            {
                case LASTFIRST:
                    strQuery = baseQuery + "WHERE fullname LIKE @value AND filteredcontact.statecode = 0";
                    SqlCRM_Contact.SelectCommand = strQuery;
                    SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                    break;
                case LAST:
                    strQuery = baseQuery + "WHERE lastname LIKE @value AND filteredcontact.statecode = 0";
                    SqlCRM_Contact.SelectCommand = strQuery;
                    SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                    break;
                case FIRST:
                    strQuery = baseQuery + "WHERE firstname LIKE @value AND filteredcontact.statecode = 0";
                    SqlCRM_Contact.SelectCommand = strQuery;
                    SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                    break;
                case LIBERTYID:
                    strQuery = baseQuery + "WHERE new_libertyid LIKE @value AND filteredcontact.statecode = 0";
                    SqlCRM_Contact.SelectCommand = strQuery;
                    SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                    break;
                case EMAIL:
                    strQuery = baseQuery + "WHERE emailaddress1 LIKE @value AND filteredcontact.statecode = 0";
                    SqlCRM_Contact.SelectCommand = strQuery;
                    SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                    break;
                case TELEPHONE:
                    strQuery = baseQuery + "WHERE telephone1 LIKE @value AND filteredcontact.statecode = 0";
                    SqlCRM_Contact.SelectCommand = strQuery;
                    SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                    break;
                case BIRTHDAY:
                    strQuery = baseQuery + "WHERE filteredcontact.birthdateutc BETWEEN @dateStart AND @dateEnd AND filteredcontact.statecode = 0";
                    try
                    {
                        DateTime temp = DateTime.Parse(search);
                        if (temp.Year < 1753 || temp.Year > 9999) { search = string.Empty; }
                        else { search = temp.ToString("yyyy-MM-dd"); }
                    }
                    catch { search = string.Empty; }
                    SqlCRM_Contact.SelectCommand = strQuery;
                    SqlCRM_Contact.SelectParameters.Add("dateStart", DbType.String, search.Trim() + " 00:00:00.000");
                    SqlCRM_Contact.SelectParameters.Add("dateEnd", DbType.String, search.Trim() + " 23:59:59.999");
                    break;
                case SSN:
                    //Do something
                    break;
                case ADDRESS:
                    strQuery = @"SELECT contactid, new_libertyid, fullname, line1, emailaddress1, telephone1, birthdate, gendercodename
                        FROM (SELECT FC.contactid, FC.new_libertyid, FC.fullname, FA.line1, FC.emailaddress1, FC.telephone1,
                                     FC.birthdateutc AS birthdate, FC.gendercodename,
                                     ROW_NUMBER() OVER(PARTITION BY FC.contactid ORDER BY FC.contactid DESC) AS rn
                              FROM filteredcontact FC
                              INNER JOIN FilteredCustomerAddress FA ON FC.contactid = FA.parentid
                              WHERE FA.line1 LIKE @value AND FA.addressnumber <> 1 AND FC.statecode = 0) AS RESULTS
                        WHERE rn = 1";
                    SqlCRM_Contact.SelectCommand = strQuery;
                    SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                    ShowContactAddressColumn();
                    break;
                default:
                    strQuery = @"SELECT TOP 500 filteredcontact.contactid, filteredcontact.new_libertyid, filteredcontact.fullname,
                        'none' AS line1, filteredcontact.emailaddress1, filteredcontact.telephone1,
                        filteredcontact.birthdateutc AS birthdate, filteredcontact.gendercodename
                        FROM filteredcontact WHERE filteredcontact.statecode = 0";
                    SqlCRM_Contact.SelectCommand = strQuery;
                    break;
            }
            if (type != ADDRESS) { HideContactAddressColumn(); }
            gvContacts.PageIndex = 0;
            //try
            //{
            //    SqlCRM_Contact.DataBind();
            //}
            //catch
            //{
            //    SqlCRM_Contact.DataBind();
            //}
            gvContacts.DataBind();
        }

        protected void SearchProspect(string search, int type)
        {
            SqlCRM_Prospect.ConnectionString = ConfigurationManager.ConnectionStrings["MSSQLConnectionString"].ConnectionString;
            gvProspects.DataSourceID = "SqlCRM_Prospect";
            string strQuery = "";
            string baseQuery = @"SELECT filteredlead.leadid, filteredlead.fullname, 'none' AS address1_line1,
                filteredlead.emailaddress1, filteredlead.telephone1,
                filteredlead.lu_dateofbirthutc AS lu_dateofbirth, filteredlead.lu_gendername
                FROM filteredlead ";
            switch (type)
            {
                case LASTFIRST:
                    strQuery = baseQuery + "WHERE fullname LIKE @value AND filteredlead.statecode = 0";
                    SqlCRM_Prospect.SelectCommand = strQuery;
                    SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                    break;
                case LAST:
                    strQuery = baseQuery + "WHERE lastname LIKE @value AND filteredlead.statecode = 0";
                    SqlCRM_Prospect.SelectCommand = strQuery;
                    SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                    break;
                case FIRST:
                    strQuery = baseQuery + "WHERE firstname LIKE @value AND filteredlead.statecode = 0";
                    SqlCRM_Prospect.SelectCommand = strQuery;
                    SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                    break;
                case LIBERTYID:
                    strQuery = baseQuery + "WHERE new_libertyid LIKE @value AND filteredlead.statecode = 0";
                    SqlCRM_Prospect.SelectCommand = strQuery;
                    SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                    break;
                case EMAIL:
                    strQuery = baseQuery + "WHERE emailaddress1 LIKE @value AND filteredlead.statecode = 0";
                    SqlCRM_Prospect.SelectCommand = strQuery;
                    SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                    break;
                case TELEPHONE:
                    strQuery = baseQuery + "WHERE telephone1 LIKE @value AND filteredlead.statecode = 0";
                    SqlCRM_Prospect.SelectCommand = strQuery;
                    SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                    break;
                case BIRTHDAY:
                    strQuery = baseQuery + "WHERE filteredlead.lu_dateofbirth BETWEEN @dateStart AND @dateEnd AND filteredlead.statecode = 0";
                    try
                    {
                        DateTime temp = DateTime.Parse(search);
                        if (temp.Year < 1753 || temp.Year > 9999) { search = string.Empty; }
                        else { search = temp.ToString("yyyy-MM-dd"); }
                    }
                    catch { search = string.Empty; }
                    SqlCRM_Prospect.SelectCommand = strQuery;
                    SqlCRM_Prospect.SelectParameters.Add("dateStart", DbType.String, search.Trim() + " 00:00:00.000");
                    SqlCRM_Prospect.SelectParameters.Add("dateEnd", DbType.String, search.Trim() + " 23:59:59.999");
                    break;
                case SSN:
                    //Do nothing
                    break;
                case ADDRESS:
                    strQuery = @"SELECT filteredlead.leadid, filteredlead.fullname, filteredlead.address1_line1,
                        filteredlead.emailaddress1, filteredlead.telephone1,
                        filteredlead.lu_dateofbirthutc AS lu_dateofbirth, filteredlead.lu_gendername
                        FROM filteredlead WHERE address1_line1 LIKE @value AND filteredlead.statecode = 0";
                    SqlCRM_Prospect.SelectCommand = strQuery;
                    SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%");
                    ShowProspectAddressColumn();
                    break;
                default:
                    strQuery = @"SELECT TOP 500 filteredlead.leadid, filteredlead.fullname, 'none' AS address1_line1,
                        filteredlead.emailaddress1, filteredlead.telephone1,
                        filteredlead.lu_dateofbirthutc AS lu_dateofbirth, filteredlead.lu_gendername
                        FROM filteredlead WHERE filteredlead.statecode = 0";
                    SqlCRM_Prospect.SelectCommand = strQuery;
                    break;
            }
            if (type != ADDRESS) { HideProspectAddressColumn(); }
            gvProspects.PageIndex = 0;
            //try
            //{
            //    SqlCRM_Prospect.DataBind();
            //}
            //catch (Exception ex)
            //{
            //    SqlCRM_Prospect.DataBind();
            //}
            gvProspects.DataBind();
        }
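
    The stack trace itself shows why the try/catch blocks never fire: SqlDataSourceView.ExecuteSelect runs from EnsureDataBound during PreRender, not from the explicit DataBind() calls inside the handlers. A hedged sketch of the two standard SqlDataSource hooks that do run at the right moment (the handler names are hypothetical; wire them to the data source's Selecting and Selected events):

        // Wired to SqlCRM_Contact's Selecting event: fires just before the
        // query executes, so the default 30-second command timeout can be raised.
        protected void SqlCRM_Contact_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
        {
            e.Command.CommandTimeout = 120;
        }

        // Wired to SqlCRM_Contact's Selected event: fires after the query,
        // whether it succeeded or threw, so the timeout can be swallowed
        // before any outer error handling sees it.
        protected void SqlCRM_Contact_Selected(object sender, SqlDataSourceStatusEventArgs e)
        {
            if (e.Exception != null)
            {
                gvContacts.EmptyDataText =
                    "Oops! An error occurred. This may have been a timeout. Please try your search again.";
                e.ExceptionHandled = true;
            }
        }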

    Read the article

  • Having issues deserializing an array from an XML string

    - by LeeHull
    I'm having issues trying to deserialize my XML string that came from a DataSet. Here is the XML layout:

        <DataSet>
          <User>
            <UserName>Test</UserName>
            <Email>[email protected]</Email>
            <Details>
              <ID>1</ID>
              <Name>TestDetails</Name>
              <Value>1</Value>
            </Details>
            <Details>
              <ID>2</ID>
              <Name>Testing</Name>
              <Value>3</Value>
            </Details>
          </User>
        </DataSet>

    Now I am able to deserialize the UserName and Email when doing:

        public class User
        {
            public string UserName { get; set; }
            public string Email { get; set; }
            public Details[] Details { get; set; }
        }

        public class Details
        {
            public int ID { get; set; }
            public string Name { get; set; }
            public string Value { get; set; }
        }

    This deserializes fine when I just get the User node, but the Details array isn't null and has no items in it. I know I am supposed to have a wrapper element around all the Details entries, but I'd rather not modify the XML. Is there any way to get this to deserialize properly without recreating the XML after I get it?
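
    Since the <Details> elements repeat directly under <User> with no wrapper element, the usual XmlSerializer fix is XmlElementAttribute on the array property, which tells the serializer to expect unwrapped, repeated siblings. A minimal sketch assuming the layout above:

        using System.Xml.Serialization;

        public class User
        {
            public string UserName { get; set; }
            public string Email { get; set; }

            // XmlElement (instead of the default wrapped-array behavior)
            // maps each repeated <Details> sibling straight into the array,
            // no wrapper element required.
            [XmlElement("Details")]
            public Details[] Details { get; set; }
        }

        public class Details
        {
            public int ID { get; set; }
            public string Name { get; set; }
            public string Value { get; set; }
        }

    Deserializing the extracted <User> fragment then works unchanged, e.g. (User)new XmlSerializer(typeof(User)).Deserialize(new StringReader(userXml)).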

    Read the article

  • How to automatically read in calculated values with PHPExcel?

    - by Edward Tanguay
    I have the following Excel file: I read it in by looping over every cell and getting the value with getCell(...)->getValue():

        $highestColumnAsLetters = $this->objPHPExcel->setActiveSheetIndex(0)->getHighestColumn(); //e.g. 'AK'
        $highestRowNumber = $this->objPHPExcel->setActiveSheetIndex(0)->getHighestRow();
        $highestColumnAsLetters++;
        for ($row = 1; $row < $highestRowNumber + 1; $row++) {
            $dataset = array();
            for ($columnAsLetters = 'A'; $columnAsLetters != $highestColumnAsLetters; $columnAsLetters++) {
                $dataset[] = $this->objPHPExcel->setActiveSheetIndex(0)->getCell($columnAsLetters.$row)->getValue();
                if ($row == 1) {
                    $this->column_names[] = $columnAsLetters;
                }
            }
            $this->datasets[] = $dataset;
        }

    However, although it reads in the data fine, it reads in the calculations literally. I understand from discussions like this one that I can use getCalculatedValue() for calculated cells. The problem is that in the Excel sheets I am importing, I do not know beforehand which cells are calculated and which are not. Is there a way for me to read in the value of a cell in a way that automatically gets the value if it has a simple value and gets the result of the calculation if it is a calculation?

    Answer: It turns out that getCalculatedValue() works for all cells, which makes me wonder why this isn't the default for getValue(), since I would think one would usually want the value of the calculations instead of the equations themselves. In any case, this works:

        ...->getCell($columnAsLetters.$row)->getCalculatedValue();

    Read the article

  • Windows Mobile : How to bind dropdown's selectedvalue to a column in table A and the list data to another table

    - by Rob
    Hi, I am trying to learn the basics of Windows Mobile development against SQL CE and have come across a basic problem. I have two tables. One called Customers that stores customer info and has an identity column called ID as the primary key. The other table is called Orders which has a column called CustomerID (the FK constraint is present). I have added a DataSet to the project that contains both tables and have autogenerated the edit/view forms. This has produced a text control for the CustomerID column in the Order table for the new/edit form and I deleted it and replaced it with a dropdown list. Then, using the 'Advanced' databinding options (in Properties) I set the datasource of the list to the Customers table setting the value to the ID field and the text to the CustomerName field. I then set the SelectedValue of the list box to the CustomerID field of the Orders dataset. So far so good. When I run the app in the emulator and view the 'New' form for Orders the Customer dropdown is indeed populated with a list of customer names and I can select one and happily create a new order successfully. This is confirmed when I see the order appear in the Orders Grid form. However, when I then click on the order in the grid and then select 'Edit' the order loads but the dropdown always shows the first customer in the list and doesn't seem to bind the SelectedValue to the Orders dataset CustomerID field. Now I am an ASP.NET guy and normally hand craft the DAL and it's binding to the UI so I'm not entirely sure where to look to investigate what is going wrong here as this is all generated code. I am sure it is something very trivial but any pointers would be appreciated. My gut feeling is that the SelectedValue and the Customers.CustomerID values do not match for some reason? Many thanks, Rob.
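
    A frequent cause of exactly this symptom (correct save, first-item display on edit) is binding order: if the SelectedValue binding is applied before the list's DataSource and ValueMember are in place, the lookup fails and the control falls back to the first row. A hedged sketch of the order that works, with hypothetical control and binding-source names standing in for the designer-generated ones:

        // Populate the lookup list first.
        customerComboBox.DisplayMember = "CustomerName";
        customerComboBox.ValueMember = "ID";
        customerComboBox.DataSource = dataSet.Customers;

        // Only then bind the selection to the Orders row; bound earlier,
        // CustomerID cannot be matched against an empty list and the
        // combo silently shows the first item.
        customerComboBox.DataBindings.Clear();
        customerComboBox.DataBindings.Add("SelectedValue", ordersBindingSource, "CustomerID");

    It is also worth confirming that Customers.ID and Orders.CustomerID have the same data type, since SelectedValue matching is type-sensitive.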

    Read the article

  • more problems with the LAG function in SAS

    - by SAS_learner
    The following bit of SAS code is supposed to read from a dataset which contains a numeric variable called 'Radvalue'. Radvalue is the temperature of a radiator; if a radiator is switched off but then its temperature increases by 2 or more, it's a sign that it has come on, and if it is on but its temperature decreases by 2 or more, it's a sign that it's gone off. Radstate is a new variable in the dataset which indicates for every observation whether the radiator is on or off, and it's this I'm trying to fill in automatically for the whole dataset. So I'm trying to use the LAG function, trying to initialise the first row, which doesn't have a dif_radvalue, and then trying to apply the algorithm I just described from row 2 onwards. Any idea why the columns Radstate and l_radstate come out completely blank? Thanks ever so much! Let me know if I haven't explained the problem clearly.

        Data work.heating_algorithm_b;
          Input ID Radvalue;
          Datalines;
        1 15.38
        2 15.38
        3 20.79
        4 33.47
        5 37.03
        ;

        DATA temp.heating_algorithm_c;
          SET temp.heating_algorithm_b;
          DIF_Radvalue = Radvalue - lag(Radvalue);
          l_Radstate = lag(Radstate);
          if missing(dif_radvalue) then do;
            dif_radvalue = 0;
            radstate = "off";
          end;
          else if l_Radstate = "off" & DIF_Radvalue > 2 then Radstate = "on";
          else if l_Radstate = "on" & DIF_Radvalue < -2 then Radstate = "off";
          else Radstate = l_Radstate;
        run;

    Read the article

  • Why isn't the Cache invalidated after table update using the SqlCacheDependency?

    - by Jason
    I have been trying to get SqlCacheDependency working. I think I have everything set up correctly, but when I update the table, the item in the Cache isn't invalidated. Can you look at my code and see if I am missing anything? I enabled the Service Broker for the Sandbox database. I have placed the following code in the Global.asax file, and I restart IIS to make sure it is called:

        void Application_Start(object sender, EventArgs e)
        {
            SqlDependency.Start(ConfigurationManager.ConnectionStrings["SandboxConnectionString"].ConnectionString);
        }

    I have placed this entry in the web.config file:

        <system.web>
          <caching>
            <sqlCacheDependency enabled="true" pollTime="10000">
              <databases>
                <add name="Sandbox" connectionStringName="SandboxConnectionString"/>
              </databases>
            </sqlCacheDependency>
          </caching>
        </system.web>

    I call this code to put the item into the cache:

        protected void CacheDataSetButton_Click(object sender, EventArgs e)
        {
            using (SqlConnection sqlConnection = new SqlConnection(ConfigurationManager.ConnectionStrings["SandboxConnectionString"].ConnectionString))
            {
                using (SqlCommand sqlCommand = new SqlCommand("SELECT PetID, Name, Breed, Age, Sex, Fixed, Microchipped FROM dbo.Pets", sqlConnection))
                {
                    using (SqlDataAdapter sqlDataAdapter = new SqlDataAdapter(sqlCommand))
                    {
                        DataSet petsDataSet = new DataSet();
                        sqlDataAdapter.Fill(petsDataSet, "Pets");
                        SqlCacheDependency petsSqlCacheDependency = new SqlCacheDependency(sqlCommand);
                        Cache.Insert("Pets", petsDataSet, petsSqlCacheDependency, DateTime.Now.AddSeconds(10), Cache.NoSlidingExpiration);
                    }
                }
            }
        }

    Then I bind the GridView with this code:

        protected void BindGridViewButton_Click(object sender, EventArgs e)
        {
            if (Cache["Pets"] != null)
            {
                GridView1.DataSource = Cache["Pets"] as DataSet;
                GridView1.DataBind();
            }
        }

    Between attempts to databind the GridView, I change the table's values, expecting that to invalidate the Cache["Pets"] item, but it seems to stay in the Cache indefinitely.
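
    Two things stand out in the snippet above, stated with the usual hedging since the full setup isn't visible. First, the absolute expiration of DateTime.Now.AddSeconds(10) evicts the item after ten seconds no matter what, which makes dependency-driven invalidation hard to observe; Cache.NoAbsoluteExpiration isolates the dependency. Second, the <sqlCacheDependency> web.config section belongs to the polling mechanism, which pairs with the (databaseEntryName, tableName) constructor and requires the table to be registered with aspnet_regsql, whereas the new SqlCacheDependency(sqlCommand) constructor uses Service Broker query notifications and needs only SqlDependency.Start plus a notification-safe query (two-part table name and an explicit column list, both already satisfied here). A sketch of the polling-based variant that matches the config:

        // Register the database and table once with the standard .NET SDK tool:
        //   aspnet_regsql.exe -S (local) -d Sandbox -E -ed
        //   aspnet_regsql.exe -S (local) -d Sandbox -E -et -t Pets

        // Polling-based dependency: "Sandbox" must match the <databases>
        // entry in web.config, "Pets" the registered table.
        SqlCacheDependency petsSqlCacheDependency = new SqlCacheDependency("Sandbox", "Pets");
        Cache.Insert("Pets", petsDataSet, petsSqlCacheDependency,
                     Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration);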

    Read the article

< Previous Page | 144 145 146 147 148 149 150 151 152 153 154 155  | Next Page >