Search Results

Search found 108959 results on 4359 pages for 'ado net data services'.

Page 202 of 4359

  • Cross-OS data recovery question, USB drive involved.

    - by Moshe
    Here's the story: A MacBook had OS X 10.4 and Windows XP dual booting using rEFIt. Then the Windows partition got corrupted and it wouldn't boot. Presumably a virus. There were sensitive files there and those were successfully copied to a USB drive, and then 10.5 was installed on the hard drive, formatting the drive in the process. The USB drive's contacts cracked and the data is lost from there, unless it can be resoldered. The issue is that there is too much solder there already. So, how can the data in question be recovered? The files were Microsoft Money (not the latest version) files for the Windows version of the program. Right now, only OS X is installed on the MacBook. Is there a Mac-based program that can recover the Windows data, or am I better off trying to resolder the drive? Does anyone know how best to resolder a USB drive more than once, where the first solder is there, but detached from the silicon? Also, what format (extension) are Microsoft Money files? In need of help!

    Read the article

  • Recover NTFS data from a ZFS pool that was exposed as an iSCSI target

    - by David
    This was me being stupid; the data is by no means critical, so this is a learning experience first, time saver second. I set up a 100GB iSCSI target via the bare-bones instructions in napp-it. It's a volume LU. I then had my Windows 7 machine connect to the iSCSI target, formatted it to NTFS, and tested its performance with some large ISO file transfers. I then unmapped the drive, reconnected to the target, and was forced to format to NTFS again. It was then I realized the files I had transferred only existed on the iSCSI target. I threw a little fit and then went about my business. When I was cleaning up my experiment I noticed this screen: http://imgur.com/1xlcu.jpg That is my experimental target tank/iSCSI and it still has a lot of data in it. Assuming my ISOs are still in this pool, how would I go about recovering them? While writing this I used GetDataBackup for NTFS from www.runtime.org, and while it found two previous NTFS partitions, there was no data.

    Read the article

  • Share Firefox/Thunderbird data between W7 and Linux Mint 12 on a dual-boot computer

    - by Albert
    I've just set up my laptop (where I previously had only W7 running) with a dual boot to run Linux Mint 12 as well. I have a "Data" partition (apart from the required partitions for W7 and Linux) where I store pretty much everything that isn't software installations (music, videos, project files, etc). I seem to be able to access that NTFS partition totally fine from Mint (like I've always done with W7), which is cool because I can access all that stuff regardless of which OS I'm using. I would like to know if it's possible (and how) to go one step further and share program data between the two OSes. One example would be my Firefox and Thunderbird data. For example, in Firefox, share my bookmarks (and if I could share history, autocomplete and all that stuff, that would be awesome). In Thunderbird, be able to share my mail and configuration, seeing the same inbox, folders, message rules, etc... So if I receive/send an email from W7 and later switch to Mint, I can see that email as if it had been received/sent from Mint, and vice versa. Is this even possible? Or am I asking for too much convenience? If it's possible, any clues on how to set it all up?

    Read the article

  • Recover data from Dynamic Disk (MBR) bigger than 2TB

    - by Helder
    Here is the situation: Promise Array FastTrak TX4310 with 3 disks (750 GB each) in RAID5. This comes to around 1500 GB of data. Last week I had the idea of expanding the RAID with an additional 750 GB disk. This would bring the volume to around 2250 GB. I plugged in the disk and used the Webpam software to do the RAID expansion. However, I didn't account for the MBR 2TB limit, as I didn't remember that the disk was using MBR instead of GPT and I didn't check it prior to the expansion. After a couple of days of expansion, today when I got home, the disk in Windows disk manager showed the message "Invalid disk" and when I try to activate it, it says "The operation is not allowed on the Invalid pack". From what I figured, the logical volume on the RAID expanded, passed that info to the Windows layer, and I ended up with a "larger than 2TB" MBR disk. I'm hoping that somehow I can still recover some data from this, and I was wondering if I can "rewrite" the MBR structure back to the 1500 GB partition size, so I can access the partition in Windows. Right now I'm doing an "Analyse" with TestDisk, as I hope the program will pick up the old 1500 GB structure and allow me to somehow revert back to it. I think that even though the Logical Drive in the RAID is bigger than 2TB, I can somehow correct the MBR to show the 1500 GB partition again. I had a similar problem once, and I was able to recover the data using a similar method. What do you guys think? Is it a dead end? Am I totally screwed because of the extra RAID layer that I'm not accounting for? Or is there another way to go about this? Thanks all!

    Read the article

  • Validate MVC 2 form using Data annotations and Linq-to-SQL, before the model binder kicks in (with D

    - by Stefanvds
    I'm using LINQ to SQL and MVC2 with data annotations, and I'm having some problems with validation of some types. For example: [DisplayName("Geplande sessies")] [PositiefGeheelGetal(ErrorMessage = "Ongeldige ingave. Positief geheel getal verwacht")] public string Proj_GeplandeSessies { get; set; } This is an integer, and I'm validating to get a positive number from the form. public class PositiefGeheelGetalAttribute : RegularExpressionAttribute { public PositiefGeheelGetalAttribute() : base(@"\d{1,7}") { } } Now the problem is that when I write text in the input, I don't get to see THIS error, but I get the error message from the model binder saying "The value 'Tomorrow' is not valid for Geplande sessies." The code in the controller: [HttpPost] public ActionResult Create(Projecten p) { if (ModelState.IsValid) { _db.Projectens.InsertOnSubmit(p); _db.SubmitChanges(); return RedirectToAction("Index"); } else { SelectList s = new SelectList(_db.Verbonds, "Verb_ID", "Verb_Naam"); ViewData["Verbonden"] = s; } return View(); } What I want is to be able to run the data annotations before the model binder, but that sounds pretty much impossible. What I really want is for my self-written error messages to show up on the screen. I have the same problem with a DateTime, which I want the users to write in the specific form 'dd/MM/yyyy', and I have a regex for that. But again, by the time the data annotations do their job, all I get is a DateTime object, and not the original string. So if the input is not a date, the regex does not even run, because the data annotations just get a null, because the model binder couldn't convert the value to a DateTime. Does anyone have an idea how to make this work?
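
    One workaround worth noting here (a sketch only, with hypothetical property names; it is not the article's own solution) is to bind the posted value to a string property on the model, so the RegularExpression annotation runs against the raw text before any type conversion, and to convert it yourself only after validation has passed:

        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;

        // Hypothetical edit model: the raw posted text is validated by the
        // regex annotation first; conversion happens only after validation.
        public class ProjectenEditModel
        {
            [DisplayName("Geplande sessies")]
            [RegularExpression(@"^\d{1,7}$",
                ErrorMessage = "Ongeldige ingave. Positief geheel getal verwacht")]
            public string GeplandeSessies { get; set; }

            // Safe to call once ModelState.IsValid has been checked in the controller.
            public int GeplandeSessiesAsInt()
            {
                return int.Parse(GeplandeSessies);
            }
        }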

    Read the article

  • Subset a data.frame by list and apply function on each part, by rows

    - by aL3xa
    This may seem like a typical plyr problem, but I have something different in mind. Here's the function that I want to optimize (skip the for loop). # dummy data set.seed(1985) lst <- list(a=1:10, b=11:15, c=16:20) m <- matrix(round(runif(200, 1, 7)), 10) m <- as.data.frame(m) dfsub <- function(dt, lst, fun) { # check whether dt is `data.frame` stopifnot (is.data.frame(dt)) # check if vectors in lst are "whole" / integer # vector elements should be column indexes is.wholenumber <- function(x, tol = .Machine$double.eps^0.5) abs(x - round(x)) < tol # fail if any non-integers in list idx <- rapply(lst, is.wholenumber) stopifnot(idx) # check for list length stopifnot(ncol(dt) == length(idx)) # subset the data subs <- list() for (i in 1:length(lst)) { # apply function on each part, by row subs[[i]] <- apply(dt[ , lst[[i]]], 1, fun) } # preserve names names(subs) <- names(lst) # convert to data.frame subs <- as.data.frame(subs) # guess what =) return(subs) } And now a short demonstration... actually, I'm about to explain what I primarily intended to do. I wanted to subset a data.frame by vectors gathered in a list object. Since this is part of the code from a function that accompanies data manipulation in psychological research, you can consider m as results from a personality questionnaire (10 subjects, 20 vars). Vectors in the list hold column indexes that define questionnaire subscales (e.g. personality traits). Each subscale is defined by several items (columns in the data.frame). If we presuppose that the score on each subscale is nothing more than the sum (or some other function) of row values (results on that part of the questionnaire for each subject), you could run: > dfsub(m, lst, sum) a b c 1 46 20 24 2 41 24 21 3 41 13 12 4 37 14 18 5 57 18 25 6 27 18 18 7 28 17 20 8 31 18 23 9 38 14 15 10 41 14 22 I took a glance at this function and I must admit that this little loop isn't spoiling the code at all... BUT, if there's an easier/more efficient way of doing this, please let me know!

    Read the article

  • Core Data migration of to-one relationship to to-many relationship

    - by westsider
    I have a deployed app that samples measurements from sensors (e.g., Temp °C, Pressure kPa). The user can create Experiments and collect samples. Each sample is stored as a Run, such that there is a one-to-many relationship from Experiment to Run. In the interest of performance, Run has a to-one relationship with the Data entity (which is where the actual raw data is stored); this allows some Run attributes to be loaded without necessarily loading lots of data. Most of our sensors have multiple measurements, so it would be nice to store all the data that is actually being sampled. But this means that the Run-to-Data relationship needs to change from to-one to to-many. I am faced with trying to migrate data from the old to-one Run/Data model to the new to-many Run/Data model. Can this be done using Mapping Models? If so, does anyone have any pointers to examples? If not, does anyone have any pointers to examples of how to do that? Thanks for any pointers or advice.

    Read the article

  • Reduce ERP Consolidation Risks with Oracle Master Data Management

    - by Dain C. Hansen
    Reducing the Risk of ERP Consolidation starts first and foremost with your Data. This is nothing new; companies with multiple misaligned ERP systems are often putting inordinate risk on their business. It can translate to too much inventory, long lead times, and shipping issues from poorly organized and specified goods. And don't forget the finance side! When goods are shipped and promises are kept or not kept, there's the issue of accounts. No single chart of accounts translates to no accountability. So – I've decided. I need to consolidate! Well, you can't consolidate ERP applications [for that matter, any of your applications] without first considering your data. This means looking at how your data is being integrated by these ERP systems, how it is being synchronized, and what information is being shared, or not being shared. Most importantly, it means making sure that the data is mastered. What is the best way to do this? In the recent webcast, Reduce ERP Consolidation Risks with Oracle Master Data Management, we outlined 3 key guidelines:
    #1: Consolidate your Product Data
    #2: Consolidate your Customer and Supplier (Party) Data
    #3: Consolidate your Financial Data
    Together these help customers achieve reduced risk, better customer intimacy, lower inventory levels, elimination of product variations, and finally a single master chart of accounts. In the case of Oracle's customer Zebra Technologies, they were able to consolidate over 140 applications by mastering their data. Ultimately this gave them 60% cost savings for the year on IT spend.
    Oracle's Solution for ERP Consolidation: Master Data Management
    Oracle's enterprise master data management (MDM) can play a big role in ERP consolidation. It includes a set of products that consolidates and maintains complete, accurate, and authoritative master data across the enterprise and distributes this master information to all operational and analytical applications as a shared service. It's optimized to work with any application source (not just Oracle's) and can integrate using technology from Oracle Fusion Middleware (e.g. GoldenGate for data synchronization and real-time replication, or ODI with its E-LT optimized bulk data and transformation capability). In addition, especially for ERP consolidation use cases, it's important to leverage the AIA and SOA capabilities as part of Fusion Middleware to connect these multiple applications together and relay the data into the correct hub. Oracle's MDM strategy is a unique offering in the industry, one that has common elements across the top and bottom in Middleware, BI/DW, and Engineered Systems, combined with Enterprise Data Quality to enable comprehensive Data Governance at all levels. In addition, Oracle MDM provides best-in-class capabilities to master all variations of data, including customer, supplier, product, and financial data.
But ultimately at the center of Oracle MDM is your data: making it more trusted, making it secure and accessible as part of a role-based approach, and getting it to make sense to you in any situation, whether it's a specific ERP process like we talked about or something that is custom to your organization. To learn more about these techniques in ERP consolidation, watch our webcast or go to our Oracle MDM website at www.oracle.com/goto/mdm

    Read the article

  • documentFragment.cloneNode(true) doesn't clone jQuery data

    - by taber
    I have a documentFragment with several child nodes containing some .data() added like so: myDocumentFragment = document.createDocumentFragment(); for(...) { myDocumentFragment.appendChild( $('<a/>').addClass('button') .attr('href', 'javascript:void(0)') .html('click me') .data('rowData', { 'id': 103, 'test': 'testy' }) .get(0) ); } When I try to append the documentFragment to a div on the page: $('#div').append( myDocumentFragment ); I can access the data just fine: alert( $('#div a:first').data('rowData').id ); // alerts '103' But if I clone the node with cloneNode(true), I can't access the node's data. :( $('#div').append( myDocumentFragment.cloneNode(true) ); ... alert( $('#div a:first').data('rowData').id ); // alerts undefined Has anyone else run into this, or does anyone know of a workaround? I guess I could store the row's data in jQuery.data('#some_random_parent_div', 'rows', [array of ids]), but that kinda defeats the purpose of making the data immediately/easily available to each row. I've also read that jQuery uses documentFragments, but I'm not sure exactly how, or in what methods. Does anyone have any more details there? Thanks!

    Read the article

  • "Pattern matching" of algebraic type data constructors

    - by jetxee
    Let's consider a data type with many constructors: data T = Alpha Int | Beta Int | Gamma Int Int | Delta Int I want to write a function to check if two values are produced with the same constructor: sameK (Alpha _) (Alpha _) = True sameK (Beta _) (Beta _) = True sameK (Gamma _ _) (Gamma _ _) = True sameK _ _ = False Maintaining sameK is not much fun, and it is potentially buggy. For example, when new constructors are added to T, it's easy to forget to update sameK. I omitted one line to give an example: -- it's easy to forget: -- sameK (Delta _) (Delta _) = True The question is: how can I avoid boilerplate in sameK? Or how can I make sure it checks all T constructors? The workaround I found is to use separate data types for each of the constructors, deriving Data.Typeable, and declaring a common type class, but I don't like this solution, because it is much less readable, and otherwise a simple algebraic type works fine for me: {-# LANGUAGE DeriveDataTypeable #-} import Data.Typeable class Tlike t where value :: t -> t value = id data Alpha = Alpha Int deriving Typeable data Beta = Beta Int deriving Typeable data Gamma = Gamma Int Int deriving Typeable data Delta = Delta Int deriving Typeable instance Tlike Alpha instance Tlike Beta instance Tlike Gamma instance Tlike Delta sameK :: (Tlike t, Typeable t, Tlike t', Typeable t') => t -> t' -> Bool sameK a b = typeOf a == typeOf b

    Read the article

  • CascadingDropDown ViewState Problem

    - by Steven
    I have two Ajax CascadingDropDown extenders on my page. After a postback, the value of the first dropdown is set (presumably) triggering an event for the second dropdown to refresh. Question: How do I maintain both the contents (from queries) and selected value of both dropdowns after postback? C# answers also welcome. Default.aspx Active States<br /><asp:DropDownList ID="StatesDrop" runat="server" /><br /> Active Cities<br /><asp:DropDownList ID="CitiesDrop" runat="server" /><br /> <ajax:CascadingDropDown ID="StatesCasc" TargetControlID="StatesDrop" ServicePath="WebService1.asmx" ServiceMethod="GetActiveStates" Category="States" runat="server" PromptText="Select a State" PromptValue="?" /> <ajax:CascadingDropDown ID="CitiesCasc" TargetControlID="CitiesDrop" ServicePath="WebService1.asmx" ServiceMethod="GetActiveCities" Category="Cities" runat="server" ParentControlID="StatesDrop" PromptText="Select a City" PromptValue="?" /> WebService1.asmx.vb Imports System.Web.Services Imports System.Web.Services.Protocols Imports System.ComponentModel Imports System.Web.Script.Services Imports AjaxControlToolkit <System.Web.Script.Services.ScriptService()> _ <System.Web.Services.WebService(Namespace:="http://tempuri.org/")> _ <System.Web.Services.WebServiceBinding _ (ConformsTo:=WsiProfiles.BasicProfile1_1)> _ <ToolboxItem(False)> _ Public Class WebService1: Inherits System.Web.Services.WebService <WebMethod()> _ Public Function GetActiveStates (ByVal knownCategoryValues As String, _ ByVal category As String) As CascadingDropDownNameValue() Dim values As New List(Of CascadingDropDownNameValue)() 'Populate values with query' Return values.ToArray() End Function <WebMethod()> _ Public Function GetActiveCities (ByVal knownCategoryValues As String, _ ByVal category As String) As CascadingDropDownNameValue() Dim kv As StringDictionary = _ CascadingDropDown.ParseKnownCategoryValuesString(knownCategoryValues) Dim SelState As String = "" If kv.ContainsKey("State") Then SelState = kv("State") Dim values As New List(Of CascadingDropDownNameValue)() ' Populate values with query.' Return values.ToArray() End Function End Class

    Read the article

  • Passing data between ViewControllers versus doing local Fetch in each VC

    - by Tofrizer
    Hi All, I'm developing an iPhone app using Core Data and I'm looking for some general advice and recommendations on whether it's acceptable to pass data between ViewControllers versus doing a local fetch in each ViewController as you navigate to it. Ordinarily I would say it all depends on various factors (e.g. performance, etc.), but the passing-data approach is so prevalent in my app, and I'm spooked by all the stories about Apple rejecting apps for not conforming to their standard guidelines. So let me put it another way -- is it non-standard to pass data between VCs? The reason I pass data so much is because each ViewController is just another view onto data present in my object model / graph. Once I have a handle on my first object in the first view controller (which I of course do have to fetch), I can use the existing object composition / relationships to drill down into the next level of detail in the data, so I just pass these objects to the next VC. Separately, one possible downside of this passing-data-to-each-VC approach is that I don't benefit from (what I perceive to be) the optimisation/benefits that NSFetchedResultsController provides in terms of efficient memory usage and section handling. My app is read-only, but I do have one table with 5000 rows and I'm curious if I am missing out on NSFetchedResultsController benefits. Any thoughts on this as well? Can I somehow still benefit from NSFetchedResultsController goodness without having to do a full fetch (as I would have already passed in the data from my previous VC)? Thanks a lot.

    Read the article

  • Faster Trip to Innovation with Simplified Data Integration: Sabre Holdings Case Study

    - by Tanu Sood
    Author: Irem Radzik, Director of Product Marketing, Data Integration, Oracle
    In today's fast-paced, competitive environment, IT teams are under pressure to deliver technology solutions for many critical business initiatives as fast as possible. When the focus is on speed, it can be easy to continue to use old-style, point-to-point custom scripts that grow organically to the point where they are unmanageable and too costly to maintain. As data volumes, data sources, and end users grow, uncoordinated data integration efforts create significant inefficiencies for both IT and business users. In addition to losing IT productivity due to maintaining spaghetti architecture, data integrity becomes a concern as well. Errors caused by inconsistent data and manual data entry can prove very costly for companies and disrupt business activities. Many industry leaders now recognize that data should be moved in an automated and reliable manner across all platforms to have one version of the truth. By simplifying their data integration architecture and standardizing on a centralized approach, IT teams now accelerate time to market. In particular, using a centralized, shared-service approach brings agility, increases IT productivity, and frees up resources for innovation. One such industry leader that simplified its data integration architecture is Sabre Holdings. Sabre Holdings provides distribution and technology solutions for the travel industry, and is a winner of Oracle Excellence Awards for Fusion Middleware in 2011 in the data integration category. I had the pleasure of hosting Sabre Holdings on a public webcast and discussing their data integration best practices for data warehousing. In this webcast, Sabre's Amjad Saeed presented how the company reduced complexity by consolidating systems and standardizing development on Oracle Data Integrator and Oracle GoldenGate for its global data warehouse development team. With Oracle's complete real-time data integration solution, Sabre also streamlined support and maintenance operations, achieved a real-time view of the execution of the integration processes, and can manage the data warehouse and business intelligence solution performance on demand. By reducing complexity and leveraging timely market insights, the company was able to decrease time to market by 40%. You can now listen to the webcast on demand: Sabre Holdings Case Study: Accelerating Innovation using Oracle Data Integration I invite you to hear directly from Sabre how to use advanced data integration capabilities to enable accelerated innovation. To learn more about Oracle's data integration offering, you can download our free resources.

    Read the article

  • java: libraries for immutable functional-style data structures

    - by Jason S
    This is very similar to another question (Functional Data Structures in Java) but the answers there are not particularly useful. I need to use immutable versions of the standard Java collections (e.g. HashMap / TreeMap / ArrayList / LinkedList / HashSet / TreeSet). By "immutable" I mean immutable in the functional sense (e.g. purely functional data structures), where updating operations on the data structure do not change the original data, but instead return a new instance of the same kind of data structure. Also, typically, new and old instances of the data structure will share immutable data to be efficient in time and space. From what I can tell, my options include Functional Java, Scala, and Clojure, but I'm not sure whether any of these are particularly appealing to me. I have a few requirements/desirements: The collections in question should be usable directly in Java (with the appropriate libraries in the classpath). FJ would work for me; I'm not sure if I can use Scala's or Clojure's data structures in Java w/o having to use the compilers/interpreters from those languages and w/o having to write Scala or Clojure code. Core operations on lists/maps/sets should be possible w/o having to create function objects with confusing syntaxes (FJ looks slightly iffy). They should be efficient in time and space. I'm looking for a library which ideally has done some performance testing. FJ's TreeMap is based on a red-black tree, not sure how that rates. Documentation / tutorials should be good enough so someone can get started quickly using the data structures. FJ fails on that front. Any suggestions?

    Read the article

  • Fastest way to develop data entry screens for a .NET backend?

    - by jay23
    I am a .NET / C# back-end guy. I am working on an app that will have about 200 different data entry screens. For me, exposing DTOs as a collection for CRUD (IUpdatable and IQueryable) is the easy part; I can do it in my sleep :-). What I am trying to decide is what type of front-end technology will allow me to develop these data entry screens fast. They don't have to be fancy, but they are not just a plain grid either; on average they have about 15 form fields and some client-side data validation (no DB lookup). The options I am looking at are: ExtJS on the front and REST / JSON on the back; ASP.NET RIA, but I do not know SL (well, XAML); or plain ASP.NET / MVC. One idea I had was that the DTO would contain the metadata about the form (as attributes) and the form could be dynamically generated, but I do not want to reinvent the wheel if there is an easy way. I have looked at RAD software, but all of them look at the DB and generate screens. I would rather have something that can look at my DTO and generate screens. Jay
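
    A minimal sketch of the attribute-driven idea mentioned in the question (hypothetical type and member names, using the standard System.ComponentModel annotations; it is not a ready-made framework) would read display metadata off the DTO via reflection and emit form fields from it:

        using System;
        using System.ComponentModel;
        using System.Linq;
        using System.Reflection;
        using System.Text;

        // Hypothetical DTO: display metadata lives on the properties as attributes.
        public class CustomerDto
        {
            [DisplayName("Customer name")]
            public string Name { get; set; }

            [DisplayName("Credit limit")]
            public decimal CreditLimit { get; set; }
        }

        public static class FormBuilder
        {
            // Very rough field generator: one label and one input per public property,
            // using the DisplayName attribute when present and the property name otherwise.
            public static string Render(Type dtoType)
            {
                var html = new StringBuilder();
                foreach (PropertyInfo p in dtoType.GetProperties())
                {
                    var display = p.GetCustomAttributes(typeof(DisplayNameAttribute), true)
                                   .Cast<DisplayNameAttribute>()
                                   .FirstOrDefault();
                    string label = display != null ? display.DisplayName : p.Name;
                    html.AppendFormat("<label>{0}</label><input name=\"{1}\" /><br />", label, p.Name);
                    html.AppendLine();
                }
                return html.ToString();
            }
        }

    Calling FormBuilder.Render(typeof(CustomerDto)) would produce markup for the two fields above; a real version would also need to choose input types, bind values, and emit validation hooks.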

    Read the article

  • .NET: Avoidance of custom exceptions by utilising existing types, but which?

    - by Mr. Disappointment
    Consider the following code (ASP.NET/C#): private void Application_Start(object sender, EventArgs e) { if (!SetupHelper.SetUp()) { throw new ShitHitFanException(); } } I've never been too hesitant to simply roll my own exception type, basically because I have found (bad practice or not) that a reasonably descriptive type name mostly gives us enough, as developers, to know what happened and why it might have happened. Sometimes the existing .NET exception types even accommodate these needs - regardless of the message. In this particular scenario, for demonstration purposes only, the application should die a horrible, disgraceful death should SetUp not complete properly (as dictated by its return value), but I can't find an existing exception type in .NET which would seem to suffice; though I'm sure one is there and I simply don't know about it. Brad Abrams posted this article that lists some of the available exception types. I say some because the article is from 2005, and, although I try to keep up to date, it's more than plausible that more have been added in later framework versions that I am still unaware of. Of course, Visual Studio gives you a nicely formatted, scrollable list of exceptions via Intellisense - but even on analysing those, I find none which would seem to suffice for this situation... ApplicationException: ...when a non-fatal application error occurs. The name seems reasonable, but the error is very definitely fatal - the app is dead. ExecutionEngineException: ...when there is an internal error in the execution engine of the CLR. Again, sounds reasonable, superficially; but this has a very definite purpose, and helping me out here certainly isn't it. HttpApplicationException: ...when there is an error processing an HTTP request. Well, we're running an ASP.NET application! But we're also just grasping at straws here. InvalidOperationException: ...when a call is invalid for the current state of an instance. This isn't right, but I'm adding it to the list of 'possible should you put a gun to my head, yes'. OperationCanceledException: ...upon cancellation of an operation the thread was executing. Maybe I wouldn't feel so bad using this one, but I'd still be hijacking the damn thing with little right. You might even ask why on earth I would want to raise an exception here, but the idea is to find out: if I were to do so, do you know of an appropriate exception for such a scenario? And basically, to what extent can we piggy-back on .NET while keeping in line with rationality?
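
    For what it's worth (an illustrative sketch only, not a recommendation from the question and not a definitive answer), the pragmatic fallback in this situation is often InvalidOperationException with a descriptive message, since a failed SetUp leaves the application in a state where continuing is not valid:

        using System;
        using System.Web;

        // Stub standing in for the question's SetupHelper.
        public static class SetupHelper
        {
            public static bool SetUp() { return false; }
        }

        public class Global : HttpApplication
        {
            private void Application_Start(object sender, EventArgs e)
            {
                // If start-up configuration fails, abort with a descriptive
                // message instead of defining a custom exception type.
                if (!SetupHelper.SetUp())
                {
                    throw new InvalidOperationException(
                        "Application start-up failed: SetupHelper.SetUp() returned false.");
                }
            }
        }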

    Read the article

  • Long time to load first SQL connection in .NET

    - by Shrage Smilowitz
    For some reason it takes 7 seconds to open a connection to a SQL Server database for the first time; subsequent connections take a second. Any idea what could be the reason? I'm using C# and ASP.NET. It happens after compilation, in essence every time I restart the site, which means every time it needs to actually create the "first" connection. I understand that setting up connection pooling has overhead, but I have never seen it take 7 seconds to set up.
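
    One common mitigation worth noting (a sketch only, assuming a connection string named "Default" in web.config; it reduces the impact on the first visitor rather than explaining the 7-second delay itself) is to open and close a connection once at application start so the pool is warm before the first real request:

        using System;
        using System.Configuration;
        using System.Data.SqlClient;
        using System.Web;

        public class Global : HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                // Warm the connection pool so the first real request does not pay
                // the cost of establishing the initial SQL Server connection.
                string cs = ConfigurationManager.ConnectionStrings["Default"].ConnectionString;
                using (var conn = new SqlConnection(cs))
                {
                    conn.Open(); // Dispose returns the connection to the pool.
                }
            }
        }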

    Read the article

  • 10 Essential Tools for building ASP.NET Websites

    - by Stephen Walther
    I recently put together a simple public website created with ASP.NET for my company at Superexpert.com. I was surprised by the number of free tools that I ended up using to put together the website. Therefore, I thought it would be interesting to create a list of essential tools for building ASP.NET websites. These tools work equally well with both ASP.NET Web Forms and ASP.NET MVC. Performance Tools After reading Steve Souders two (very excellent) books on front-end website performance High Performance Web Sites and Even Faster Web Sites, I have been super sensitive to front-end website performance. According to Souders’ Performance Golden Rule: “Optimize front-end performance first, that's where 80% or more of the end-user response time is spent” You can use the tools below to reduce the size of the images, JavaScript files, and CSS files used by an ASP.NET application. 1. Sprite and Image Optimization Framework CSS sprites were first described in an article written for A List Apart entitled CSS sprites: Image Slicing’s Kiss of Death. When you use sprites, you combine multiple images used by a website into a single image. Next, you use CSS trickery to display particular sub-images from the combined image in a webpage. The primary advantage of sprites is that they reduce the number of requests required to display a webpage. Requesting a single large image is faster than requesting multiple small images. In general, the more resources – images, JavaScript files, CSS files – that must be moved across the wire, the slower your website. However, most people avoid using sprites because they require a lot of work. You need to combine all of the images and write just the right CSS rules to display the sub-images. The Microsoft Sprite and Image Optimization Framework enables you to avoid all of this work. The framework combines the images for you automatically. Furthermore, the framework includes an ASP.NET Web Forms control and an ASP.NET MVC helper that makes it easy to display the sub-images. You can download the Sprite and Image Optimization Framework from CodePlex at http://aspnet.codeplex.com/releases/view/50869. The Sprite and Image Optimization Framework was written by Morgan McClean who worked in the office next to mine at Microsoft. Morgan was a scary smart Intern from Canada and we discussed the Framework while he was building it (I was really excited to learn that he was working on it). Morgan added some great advanced features to this framework. For example, the Sprite and Image Optimization Framework supports something called image inlining. When you use image inlining, the actual image is stored in the CSS file. Here’s an example of what image inlining looks like: .Home_StephenWalther_small-jpg { width:75px; height:100px; background: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEsAAABkCAIAAABB1lpeAAAAB GdBTUEAALGOfPtRkwAAACBjSFJNAACHDwAAjA8AAP1SAACBQAAAfXkAAOmLAAA85QAAGcxzPIV3AAAKL s+zNfREAAAAASUVORK5CYII=) no-repeat 0% 0%; } The actual image (in this case a picture of me that is displayed on the home page of the Superexpert.com website) is stored in the CSS file. If you visit the Superexpert.com website then very few separate images are downloaded. For example, all of the images with a red border in the screenshot below take advantage of CSS sprites: Unfortunately, there are some significant Gotchas that you need to be aware of when using the Sprite and Image Optimization Framework. There are workarounds for these Gotchas. 
I plan to write about these Gotchas and workarounds in a future blog entry. 2. Microsoft Ajax Minifier Whenever possible you should combine, minify, compress, and cache with a far future header all of your JavaScript and CSS files. The Microsoft Ajax Minifier makes it easy to minify JavaScript and CSS files. Don't confuse minification and compression. You need to do both. According to Souders, you can reduce the size of a JavaScript file by an additional 20% (on average) by minifying a JavaScript file after you compress the file. When you minify a JavaScript or CSS file, you use various tricks to reduce the size of the file before you compress the file. For example, you can minify a JavaScript file by replacing long JavaScript variable names with short variable names and removing unnecessary white space and comments. You can minify a CSS file by doing such things as replacing long color names such as #ffffff with shorter equivalents such as #fff. The Microsoft Ajax Minifier was created by Microsoft employee Ron Logan. Internally, this tool was being used by several large Microsoft websites. We also used the tool heavily on the ASP.NET team. I convinced Ron to publish the tool on CodePlex so that everyone in the world could take advantage of it. You can download the tool from the ASP.NET Ajax website and read documentation for the tool here. I created the installer for the Microsoft Ajax Minifier. When creating the installer, I also created a Visual Studio build task to make it easy to automatically minify all of your JavaScript and CSS files whenever you do a build within Visual Studio. Read the Ajax Minifier Quick Start to learn how to configure the build task. 3. ySlow The ySlow tool is a free add-on for Firefox created by Yahoo that enables you to test the front-end of your website. For example, here are the current test results for the Superexpert.com website: The Superexpert.com website has an overall score of B (not perfect but not bad). The ySlow tool is not perfect. For example, the Superexpert.com website received a failing grade of F for not using a Content Delivery Network even though the website uses the Microsoft Ajax Content Delivery Network for JavaScript files such as jQuery. Uptime After publishing a website live to the world, you want to ensure that the website does not encounter any issues and that it stays live. I use the following tools to monitor the Superexpert.com website now that it is live. 4. ELMAH ELMAH stands for Error Logging Modules and Handlers for ASP.NET. ELMAH enables you to record any errors that happen at your website so you can review them in the future. You can download ELMAH for free from the ELMAH project website. ELMAH works great with both ASP.NET Web Forms and ASP.NET MVC. You can configure ELMAH to store errors in a number of different stores including XML files, the Event Log, an Access database, a SQL database, an Oracle database, or in computer RAM. You also can configure ELMAH to email error messages to you when they happen. By default, you can access ELMAH by requesting the elmah.axd page from a website with ELMAH installed. Here's what the elmah.axd page looks like from the Superexpert.com website (this page is password-protected because secret information can be revealed in an error message): If you click on a particular error message, you can view the original Yellow Screen ASP.NET error message (even when the error message was never displayed to the actual user).
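
As a side note (a small sketch under the assumption that the Elmah assembly is referenced; this snippet is not part of the original article), ELMAH can also record exceptions you have already caught and handled, by signalling them explicitly:

    using System;
    using Elmah;

    public class OrderService
    {
        public void PlaceOrder(int orderId)
        {
            try
            {
                // ... business logic that might throw ...
            }
            catch (Exception ex)
            {
                // Record the handled exception in ELMAH's error log, then fall
                // through to whatever fallback behaviour makes sense here.
                ErrorSignal.FromCurrentContext().Raise(ex);
            }
        }
    }
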
I installed ELMAH by taking advantage of the new package manager for ASP.NET named NuGet (originally named NuPack). You can read the details about NuGet in the following blog entry by Scott Guthrie. You can download NuGet from CodePlex. 5. Pingdom I use Pingdom to verify that the Superexpert.com website is always up. You can sign up for Pingdom by visiting Pingdom.com. You can use Pingdom to monitor a single website for free. At the Pingdom website, you configure the frequency that your website gets pinged. I verify that the Superexpert.com website is up every 5 minutes. I have the Pingdom service verify that it can retrieve the string "Contact Us" from the website homepage. If your website goes down, you can configure Pingdom so that it sends an email, Twitter, SMS, or iPhone alert. I use the Pingdom iPhone app which looks like this: 6. Host Tracker If your website does go down then you need some way of determining whether it is a problem with your local network or if your website is down for everyone. I use a website named Host-Tracker.com to check how badly a website is down. Here's what the Host-Tracker website displays for the Superexpert.com website when the website can be successfully pinged from everywhere in the world: Notice that Host-Tracker pinged the Superexpert.com website from 68 locations including Roubaix, France and Scranton, PA. Debugging I mean debugging in the broadest possible sense. I use the following tools when building a website to verify that I have not made a mistake. 7. HTML Spell Checker Why doesn't Visual Studio have a built-in spell checker? Don't know – I've always found this mysterious. Fortunately, however, a former member of the ASP.NET team wrote a free spell checker that you can use with your ASP.NET pages. I find a spell checker indispensable. It is easy to delude yourself that you are capable of perfect spelling. I'm always super embarrassed when I actually run the spell checking tool and discover all of my spelling mistakes. The fastest way to add the HTML Spell Checker extension to Visual Studio is to select the menu option Tools, Extension Manager within Visual Studio. Click on Online Gallery and search for HTML Spell Checker: 8. IIS SEO Toolkit If people cannot find your website through Google then you should not even bother to create it. Microsoft has a great extension for IIS named the IIS Search Engine Optimization Toolkit that you can use to identify issues with your website that would hurt its page rank. You also can use this tool to quickly create a sitemap for your website that you can submit to Google or Bing. You can even generate the sitemap for an ASP.NET MVC website. Here's what the report overview for the Superexpert.com website looks like: Notice that the Superexpert.com website had plenty of violations. For example, there are 65 cases in which a page has a broken hyperlink. You can drill into these violations to identify the exact page and location where these violations occur. 9. LinqPad If your ASP.NET website accesses a database then you should be using LINQ to Entities with the Entity Framework. Using LINQ involves some magic. LINQ queries written in C# get converted into SQL queries for you. If you are not careful about how you write your LINQ queries, you could unintentionally build a really badly performing website. LinqPad is a free tool that enables you to experiment with your LINQ queries. It even works with Microsoft SQL CE 4 and Azure. You can use LinqPad to execute a LINQ to Entities query and see the results.
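
For illustration (hypothetical entity and context names, not code from the article), this is the kind of LINQ to Entities query you might paste into LinqPad to inspect the results:

    // Hypothetical LINQ to Entities query of the kind you would run in LinqPad
    // (as a C# Statements query, with your Entity Framework assembly referenced).
    using (var db = new StoreEntities())
    {
        var expensiveProducts =
            from p in db.Products
            where p.ListPrice > 100m
            orderby p.ListPrice descending
            select new { p.Name, p.ListPrice };

        foreach (var product in expensiveProducts)
        {
            Console.WriteLine("{0}: {1:C}", product.Name, product.ListPrice);
        }
    }
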
You also can use it to see the resulting SQL that gets executed against the database: 10. .NET Reflector I use .NET Reflector daily. The .NET Reflector tool enables you to take any assembly and disassemble the assembly into C# or VB.NET code. You can use .NET Reflector to see the “Source Code” of an assembly even when you do not have the actual source code. You can download a free version of .NET Reflector from the Redgate website. I use .NET Reflector primarily to help me understand what code is doing internally. For example, I used .NET Reflector with the Sprite and Image Optimization Framework to better understand how the MVC Image helper works. Here’s part of the disassembled code from the Image helper class: Summary In this blog entry, I’ve discussed several of the tools that I used to create the Superexpert.com website. These are tools that I use to improve the performance, improve the SEO, verify the uptime, or debug the Superexpert.com website. All of the tools discussed in this blog entry are free. Furthermore, all of these tools work with both ASP.NET Web Forms and ASP.NET MVC. Let me know if there are any tools that you use daily when building ASP.NET websites.

    Read the article

  • Code Camp 2011 – Summary

    - by hajan
    Waiting a whole twelve months for this year's Code Camp 2011 event was something all Microsoft technology (and even non-Microsoft) developers were doing over the past year. Last year's success was big enough to be heard and to influence everything around our developer community and beyond. Code Camp 2011 was nothing less than an undeniable success which will remain in our memory for a long time to come. Darko Milevski (president of MKDOT.NET UG and SharePoint MVP) said something interesting at the event keynote: up to now we were looking at the past by saying what we did… now we will focus on the future and how to develop our community more and more in the coming days, weeks, and months and, I hope, for many years… Even though it was held only two days ago (26th of November 2011), I already feel nostalgia for everything that happened there and for the excellent time we spent all together. ORGANIZED BY ENTHUSIASTS AND EXPERTS Code Camp 2011 was organized by a number of community enthusiasts and experts who unselfishly contributed all their free time to make the best of this event. The event was organized by a well-known community group called MKDOT.NET User Group, a user group known not only in Macedonia but also in many countries abroad. The organization mainly consists of software developers, technical leaders, and team leaders in several known companies in Macedonia, as well as Microsoft MVPs. SPEAKERS There were 24 speakers across five parallel tracks. At Code Camp 2011 we had two groups of speakers: professional experts in various technologies and student speakers. The interesting new thing here was the student speakers, which drew a lot of attention, especially from other students who were interested to see what their colleagues were going to speak about and how they use Microsoft technologies in different coding scenarios, practices, and topics. Among the professional speakers, there were 7 Microsoft MVPs: two ASP.NET/IIS MVPs, two C# MVPs, and one MVP each in SharePoint, SQL Server, and Exchange Server. I must say that besides the MVP speakers, who definitely did a great job as always… there were other excellent speakers as well, who spoke on various technologies, such as: web development, Windows Phone development, XNA, Windows 8, games development, Entity Framework, event-driven programming, SOLID, SQLCLR, T-SQL, etc. SESSIONS There were 25 sessions, mainly all related to Microsoft technologies, but ranging from Windows 8, WP7, and ASP.NET to games development, XNA, and event-driven programming. Sessions ran in five parallel tracks named Red, Yellow, Green, Blue, and Student track, with five presentations in each track, each at level 300 or 400. More info MY SESSION (ASP.NET MVC Best Practices) I must say that of the many speaking engagements I have had, this was one of my best performances, and I definitely set new records of attendees at my sessions and probably overall. I spoke on the topic ASP.NET MVC Best Practices, where I showed tips, tricks, guidelines, and best practices on what to use and what to avoid when developing with one of the best web development frameworks nowadays, ASP.NET MVC. I had approximately 350+ attendees; the hall was so full that there was no room even to stand.
Besides .NET developers, there were a lot of other technology-oriented developers, who also received the presentation very well, and I really hope I gave them reason to think about ASP.NET as one of the best options for web development nowadays (if you ask me, it's the best one ;-)). I included 10 tips on using ASP.NET MVC, each of them followed by a demo. Besides these 10 tips, I briefly introduced the concept of ASP.NET MVC for those that haven't been working with the framework, and at the end some bonus tips. I must say there was a lot of laughter at some funny sentences I stated, like "If you code ASP.NET MVC, girls will love you more" – the same goes for girls, only replace girls with boys :). [LINK TO SESSION WILL GO HERE, ONCE SESSIONS ARE AVAILABLE ON MK CODECAMP WEBSITE] VOLUNTEERS Without dedicated organization and volunteers, such events wouldn't be able to gather hundreds of attendees in one place and still stay perfectly organized down to the smallest details. I would like to dedicate this space in my blog to them and to say one big THANK YOU for supporting us before the event and during the whole day of the event. With such young and dedicated volunteers, we couldn't achieve anything but great results. THANK YOU EVERYONE FOR YOUR CONTRIBUTION! NETWORKING One of the main reasons why we do such events is to gather all professionals in one place. Networking is what everyone wants, because through it we can meet incredible people in one place. It is an amazing feeling to share your knowledge with others and exchange thoughts on various topics. Meet and talk to interesting people. I had very special moments with many attendees, especially after my presentation. A special Thank You to all of them who came to meet me in person, whether to ask a question, say congrats for my session, or simply meet me and just smile :)… everything counts! Thank You! TWITTER During the event, Twitter was one of the most useful event-wide communication tools, where everyone could tweet with hash tag #mkcodecamp or #mkdotnet and say what he/she wanted to say about the current state and happenings at that moment… In my next blog post I will list the top craziest tweets that were posted at this event… FUTURE OF MKDOT.NET Having such a strong community around MKDOT.NET, the future seems very bright. The initial plans are to have sub-groups in several technologies; however, all these sub-groups will belong to the MKDOT.NET UG, which will be, somehow, the HEAD of these sub-groups. We are doing this to provide better divisions by technologies and organize ourselves better, since our community is very big, around 500 members in MKDOT.NET. We will have five sub-groups:
- Web User Group (Lead: Hajan Selmani - me)
- Mobile User Group (Lead: Filip Kerazovski)
- Visual C# User Group (Lead: Vekoslav Stefanovski)
- SharePoint User Group (Lead: Darko Milevski)
- Dynamics User Group (Lead: Vladimir Senih)
SUMMARY
Online registered attendees: ~1,200
Event attendees: ~800
Number of members in organization: 40+
Organized by: MKDOT.NET User Group
Number of tracks: 5
Number of speakers: 24
Number of sessions: 25
Event official website: http://codecamp.mkdot.net
Total number of sponsors: 20
Platinum Sponsors: Microsoft, INETA, Telerik
Place held: FON University
City and Country: Skopje, Macedonia
THANK YOU FOR BEING PART OF THE BEST EVENT IN MACEDONIA, CODE CAMP 2011. Regards, Hajan

    Read the article

  • Moving away from .NET to Ruby and coping without IntelliSense

    - by user460667
    I am in the process of trying to learn Ruby; however, after spending nearly 10 years in the MS stack I am struggling to get by without IntelliSense. I've given RubyMine a try, which does help; however, ideally I would like to go free, which would mean no RubyMine. How have other people learnt to cope with remembering everything instead of relying on Ctrl-Space? Any advice is appreciated as at the moment I am feeling very stupid (no jokes about MS devs please ;)). Thanks

    Read the article

  • Oracle Consulting North America is now live on PeopleSoft Services Procurement and PeopleSoft Resource Management

    - by Howard Shaw
    Last month, Oracle's own internal consulting group (OCS North America) went live on PeopleSoft Services Procurement and PeopleSoft Resource Management to manage all aspects of identifying, recruiting, and deploying billable subcontractors on North America Applications customer consulting projects. The primary goals were to enhance the subcontractor staffing process, improve operational and informational processes, and improve collaboration between the Oracle NA Consulting Subcontractor Program and subcontractor suppliers. Over 200 registered external suppliers access the tool, review open needs and competitively bid their resources to work on NA Applications projects. This implementation highlights the usage of Oracle’s own solutions to streamline and enhance business operations, as the PeopleSoft 9.1 applications (Services Procurement and Resource Management) were deployed using Sun hardware, Oracle Enterprise Linux, and Oracle Virtual Machines.For more information, please navigate to the following web pages: PeopleSoft Services Procurement PeopleSoft Resource Management

    Read the article

  • Working with Reporting Services Filters–Part 1

    - by smisner
    There are two ways that you can filter data in Reporting Services. The first way, which usually provides faster performance, is to use query parameters to apply a filter using the WHERE clause in a SQL statement. In that case, the structure of the filter depends upon the syntax recognized by the source database. Another way to filter data in Reporting Services is to apply a filter to a dataset, data region, or a group. Using this latter method, you can even apply multiple filters. However, the use of filter operators or the setup of multiple filters is not always obvious, so in this series of posts, I'll provide some more information about the configuration of filters.
    First, why not use query parameters exclusively for filtering? Here are a few reasons:
    - You might want to apply a filter to part of the report, but not all of the report.
    - Your dataset might retrieve data from a stored procedure, and doesn't allow you to pass a query parameter for filtering purposes.
    - Your report might be set up as a snapshot on the report server and, in that case, cannot be dynamically filtered based on a query parameter.
    Next, let's look at how to set up a report filter in general. The process is the same whether you are applying the filter to a dataset, data region, or a group. When you go to the Filters page in the Properties dialog box for whichever of these items you selected (dataset, data region, group), you click the Add button to create a new filter. The interface looks like this: The Expression field is usually a field in the dataset, so to make it easier for you to make a selection, the drop-down list displays all of the current dataset fields. But notice the expression button to the right, which means that you can set up any type of expression, not just a dataset field. To the right of the expression button, you'll find a data type drop-down list. It's important to specify the correct data type for the field or expression you're using.
    Now for the operators. Here's a list of the options that you have:
    This Operator              Performs This Action
    =, <>, >, >=, <, <=, Like  Compares expression to value
    Top N, Bottom N            Compares expression to Top (Bottom) set of N values (N = integer)
    Top %, Bottom %            Compares expression to Top (Bottom) N percent of values (N = integer or float)
    Between                    Determines whether expression is between two values, inclusive
    In                         Determines whether expression is found in list of values
    Last, the Value is what you're comparing to the expression using the operator. The construction of a filter using some operators (=, <>, >, etc.) is fairly simple. If my dataset (for AdventureWorks data) has a Category field, and I have a parameter that prompts the user for a single category, I can set up a filter like this:
    Expression    Data Type    Operator    Value
    [Category]    Text         =           [@Category]
    But if I set the parameter to accept multiple values, I need to change the operator from = to In, just as I would have to do if I were using a query parameter. The parameter expression, [@Category], which translates to =Parameters!Category.Value, doesn't need to change because it represents an array as soon as I change the parameter to allow multiple values. The "In" operator requires an array. With that in mind, let's consider a variation on Value. Let's say that I have a parameter that prompts the user for a particular year – and for simplicity's sake, this parameter only allows a single value – and I have an expression that evaluates the previous year based on the user's selection.
Then I want to use these two values in two separate filters with an OR condition. That is, I want to filter either by the year selected OR by the year that was computed. If I create two filters, one for each year (as shown below), then the report will only display results if BOTH filter conditions are met – which would never be true.
Expression        Data Type    Operator    Value
[CalendarYear]    Integer      =           [@Year]
[CalendarYear]    Integer      =           =Parameters!Year.Value-1
To handle this scenario, we need to create a single filter that uses the "In" operator, and then set up the Value expression as an array. To create an array, we use the Split function after creating a string that concatenates the two values (highlighted in yellow) as shown below.
Expression                          Data Type    Operator    Value
=Cstr(Fields!CalendarYear.Value)    Text         In          =Split( CStr(Parameters!Year.Value) + "," + CStr(Parameters!Year.Value-1) , ",")
Note that in this case, I had to apply a string conversion on the year integer so that I could concatenate the parameter selection with the calculated year. Pay attention to the second argument of the Split function – you must use a comma delimiter for the result to work correctly with the In operator. I also had to change the Expression value from [CalendarYear] (or =Fields!CalendarYear.Value) so that the expression would return a string that I could compare with the values in the string array. More fun with filter expressions in future posts!

    Read the article

  • More details on America's Cup use of Oracle Data Mining

    - by charlie.berger
    BMW Oracle Racing's America's Cup: A Victory for Database Technology
    BMW Oracle Racing's victory in the 33rd America's Cup yacht race in February showcased the crew's extraordinary sailing expertise. But to hear them talk, the real stars weren't actually human. "The story of this race is in the technology," says Ian Burns, design coordinator for BMW Oracle Racing.
    Gathering and Mining Sailing Data
    From the drag-resistant hull to its 23-story wing sail, the BMW Oracle USA trimaran is a technological marvel. But to learn to sail it well, the crew needed to review enormous amounts of reliable data every time they took the boat for a test run. Burns and his team collected performance data from 250 sensors throughout the trimaran at the rate of 10 times per second. An hour of sailing alone generates 90 million data points. BMW Oracle Racing turned to Oracle Data Mining in Oracle Database 11g to extract maximum value from the data. Burns and his team reviewed and shared raw data with crew members daily using a Web application built in Oracle Application Express (Oracle APEX). "Someone would say, 'Wouldn't it be great if we could look at some new combination of numbers?' We could quickly build an Oracle Application Express application and share the information during the same meeting," says Burns.
    Analyzing Wind and Other Environmental Conditions
    Burns then streamed the data to the Oracle Austin Data Center, where a dedicated team tackled deeper analysis. Because the data was collected in an Oracle Database, the Data Center team could dive straight into the analytics problems without having to do any extract, transform, and load processes or data conversion. And the many advanced data mining algorithms in Oracle Data Mining allowed the analytics team to build vital performance analytics. For example, the technology team could remove masking elements such as environmental conditions to give accurate data on the best mast rotation for certain wind conditions. Without the data mining, Burns says the boat wouldn't have run as fast. "The design of the boat was important, but once you've got it designed, the whole race is down to how the guys can use it," he says. "With Oracle database technology we could compare the incremental improvements in our performance from the first day of sailing to the very last day. With data mining we could check data against the things we saw, and we could find things that weren't otherwise easily observable and findable."

    Read the article
