Search Results

Search found 6232 results on 250 pages for 'excel 2013'.

  • Accounts in Work Items after migration to TFS 2010 and to new domain

    - by Clara Oscura
    Lately I’ve been doing some tests on migrating our TFS 2008 installation to TFS 2010, coupled with a machine and domain change. One particularly tricky topic was user accounts. We first installed a new machine with TFS 2010 and then migrated the projects from the old server. The work items were migrated with the projects. Great, but if I try to edit one of the old work items I can no longer save it, because some fields contain old user names (e.g. OLDDOMAIN\user) which are not known in the new domain (they should be NEWDOMAIN\user). When I correct the ‘Assigned To’ field value, I get another error regarding another field. Before TFS 2010, we had the TFSUsers power tool, which allowed you to map an old user name to a new one. It is not available anymore because WI fields with user accounts are now synchronized with AD display name changes (explained here). The correct way to go about this in TFS 2010 is to run TFSConfig Identities before adding the new domain accounts into the TFS groups (documented here). So, too late for us. I’ve found a (tedious) workaround to change those old accounts in work items so that people can keep working with them:

    1. Install the TFS 2010 power tools.
    2. Export the WIT from your project (VS | Tools | Process Editor | Work Item Types). Save the definition, for example: Original_MyProject_Task.xml
    3. Copy the xml (NoReadOnly_MyProject_Task.xml) and edit it. From the field definitions of ‘Activated By’, ‘Closed By’ and ‘Resolved By’, remove the following:

           <WHENNOTCHANGED field="System.State">
             <READONLY />
           </WHENNOTCHANGED>

    4. Import the WIT in VS. Choose the new file (NoReadOnly_MyProject_Task.xml) and import it into MyProject.
    5. Open all tasks in Excel (flat list). Display the following columns: Assigned To, Activated By, Closed By, Resolved By. Change the user accounts to the new ones (I usually sort each column alphabetically to make it easier; a scripted alternative is sketched at the end of this post).
    6. Publish. If you get a conflict on a field, tough luck: you will have to manually choose “Local version” for each work item. I told you it was a tedious process.
    7. Import the original WIT (Original_MyProject_Task.xml) into MyProject. We only changed the WI definition so that we could change some fields; the original definition should be put back.

    And what about the other fields, Created By and Authorized As? These fields are not editable by definition (VS | Tools | Process Editor | Work Item Fields Explorer), even if they are not marked as read-only in the WIT. You can leave the old values; it doesn’t seem to matter to TFS. The other four fields are editable by definition, so only the WIT read-only rule prevents us from changing them. Technorati Tags: TFS,Team Foundation Server 2010,Work Item,Domain change
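
    As an aside, the Excel pass in step 5 could in principle be scripted against the TFS 2010 client object model. Below is a hypothetical sketch, not part of the original workaround; the server URL, project name, field set and account mapping are illustrative assumptions:

        using System;
        using System.Collections.Generic;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        class FixAccounts
        {
            static void Main()
            {
                // Assumed mapping of OLDDOMAIN display names to NEWDOMAIN display names.
                var accountMap = new Dictionary<string, string>
                {
                    { "Old User", "New User" }
                };

                var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                    new Uri("http://newserver:8080/tfs/DefaultCollection"));
                var store = tfs.GetService<WorkItemStore>();
                var items = store.Query(
                    "SELECT [System.Id] FROM WorkItems WHERE [System.TeamProject] = 'MyProject'");

                foreach (WorkItem wi in items)
                {
                    wi.Open();
                    // Only works once the NoReadOnly WIT from step 4 has been imported.
                    foreach (var field in new[] { "Assigned To", "Activated By", "Closed By", "Resolved By" })
                    {
                        var value = wi.Fields[field].Value as string;
                        string mapped;
                        if (value != null && accountMap.TryGetValue(value, out mapped))
                            wi.Fields[field].Value = mapped;
                    }
                    if (wi.IsDirty)
                        wi.Save();
                }
            }
        }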

    Read the article

  • Perm SSIS Developer Urgently Required

    - by blakmk
      Job Role: To provide dedicated data services support to the company by designing, creating, maintaining and enhancing database objects, ensuring data quality, consistency and integrity. Migrating data from various sources to a central SQL 2008 data warehouse will be the primary function, including migration of data from bespoke legacy databases to the SQL 2008 data warehouse. Understand key business requirements, liaising with various parts of the company. Create advanced transformations of data, with a focus on data cleansing, redundant data and duplication. Create complex business rules regarding data services, migration, integrity and support (best practices).

      Experience:
      • Minimum 3 years' SSIS experience, in a project or BI development role, and involvement in at least 3 full ETL project life cycles, using the following methodologies and tools:
        o Excellent knowledge of ETL concepts including data migration & integrity, focusing on SSIS.
        o Extensive experience with SQL 2005 products; SQL 2008 desirable.
        o Working knowledge of SSRS and its integration with other BI products.
        o Extensive knowledge of T-SQL, stored procedures, triggers (table/database), views and functions, in particular coding and querying.
        o Data cleansing and harmonisation.
        o Understanding and knowledge of indexes, statistics and table structure.
        o SQL Agent – scheduling jobs, optimisation, multiple jobs, DTS.
        o Troubleshoot, diagnose and tune database and physical server performance.
        o Knowledge and understanding of locking, blocks, table and index design and SQL configuration.
      • Demonstrable ability to understand and analyse business processes.
      • Experience in creating business rules on best practices for data services.
      • Experience in working with, supporting and troubleshooting MS SQL servers running enterprise applications.
      • Proven ability to work well within a team and liaise with other technical support staff such as networking administrators, system administrators and support engineers.
      • Ability to create formal documentation, work procedures, and service level agreements.
      • Ability to communicate technical issues at all levels, including to a non-technical audience.
      • Good working knowledge of MS Word, Excel, PowerPoint, Visio and Project.

      Location: Based in Crawley with the possibility of some remote working. Contact me for more info: http://sqlblogcasts.com/blogs/blakmk/contact.aspx

    Read the article

  • C#/.NET Little Wonders: Getting Caller Information

    - by James Michael Hare
    Originally posted on: http://geekswithblogs.net/BlackRabbitCoder/archive/2013/07/25/c.net-little-wonders-getting-caller-information.aspx

    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. There are times when it is desirable to know who called the method or property you are currently executing. Some applications of this could include logging libraries, or possibly even something more advanced that may serve up different objects depending on who called the method. In the past, we mostly relied on the System.Diagnostics namespace and its classes such as StackTrace and StackFrame to see who our caller was, but now in C# 5, we can also get much of this data at compile-time.

    Determining the caller using the stack

    One of the ways of doing this is to examine the call stack. The classes that allow you to examine the call stack have been around for a long time and can give you a very deep view of the calling chain, all the way back to the beginning for the thread that called you. You can get caller information by either instantiating the StackTrace class (which will give you the complete stack trace, much like you see when an exception is generated), or by using StackFrame, which gets a single frame of the stack trace. Both involve examining the call stack, which is a non-trivial task, so care should be taken not to do this in a performance-intensive situation.

    For our simple example, let's say we are going to reinvent the wheel and construct our own logging framework. Perhaps we wish to create a simple method Log which will log the string-ified form of an object and some information about the caller. We could easily do this as follows:

        static void Log(object message)
        {
            // frame 1, true for source info
            StackFrame frame = new StackFrame(1, true);
            var method = frame.GetMethod();
            var fileName = frame.GetFileName();
            var lineNumber = frame.GetFileLineNumber();

            // we'll just use a simple Console write for now
            Console.WriteLine("{0}({1}):{2} - {3}",
                fileName, lineNumber, method.Name, message);
        }

    So, what we are doing here is grabbing the 2nd stack frame (the 1st is our current method) using a 2nd argument of true to specify we want source information (if available), and then taking the information from the frame. This works fine, and if we tested it out by calling from a file such as this:

        // File c:\projects\test\CallerInfo\CallerInfo.cs

        public class CallerInfo
        {
            static void Main()
            {
                Log("Hello Logger!");
            }
        }

    We'd see this:

        c:\projects\test\CallerInfo\CallerInfo.cs(7):Main - Hello Logger!

    This works well, and in fact StackTrace and StackFrame are still the best ways to examine deeper into the call stack. But if you only want to get information on the caller of your method, there is another option…

    Determining the caller at compile-time

    In C# 5 (.NET 4.5), attributes were added that can be supplied on optional parameters of a method to receive caller information. These attributes can only be applied to optional parameters with explicit defaults. Then, as the compiler determines who is calling your method with these attributes, it will fill in the values at compile-time. These are the currently supported attributes, available in the System.Runtime.CompilerServices namespace:

    • CallerFilePathAttribute – The path and name of the file that is calling your method.
    • CallerLineNumberAttribute – The line number in the file where your method is being called.
    • CallerMemberNameAttribute – The member that is calling your method.

    So let's take a look at how our Log method would look using these attributes instead:

        static void Log(object message,
            [CallerMemberName] string memberName = "",
            [CallerFilePath] string fileName = "",
            [CallerLineNumber] int lineNumber = 0)
        {
            // we'll just use a simple Console write for now
            Console.WriteLine("{0}({1}):{2} - {3}",
                fileName, lineNumber, memberName, message);
        }

    Again, calling this from our sample Main would give us the same result:

        c:\projects\test\CallerInfo\CallerInfo.cs(7):Main - Hello Logger!

    However, though this seems the same, there are a few key differences. First of all, there are only 3 supported attributes (at this time) that give you the file path, line number, and calling member. Thus, it does not give you as rich a set of detail as a StackFrame (which can give you the calling type as well, and deeper frames, for example). Also, these are supported through optional parameters, which means we could call our new Log method like this:

        // They're defaults, why not fill 'em in
        Log("My message.", "Some member", "Some file", -13);

    In addition, since these attributes require optional parameters, they cannot be used in properties, only in methods.

    These caveats aside, they do let you get similar information inside of methods at a much greater speed! How much greater? Well, let's crank through 1,000,000 iterations of each. Instead of logging to console, I'll return the formatted string length of each. Doing this, we get:

        Time for 1,000,000 iterations with StackTrace: 5096 ms
        Time for 1,000,000 iterations with Attributes: 196 ms

    So you see, using the attributes is much, much faster! Roughly 26x faster, in fact (a sketch of such a timing harness appears at the end of this post).

    Summary

    There are a few ways to get caller information for a method. The StackFrame allows you to get a comprehensive set of information spanning the whole call stack, but at a heavier cost. On the other hand, the attributes allow you to quickly get at caller information baked in at compile-time, but to do so you need to create optional parameters in your methods to support it.

    Technorati Tags: Little Wonders,CSharp,C#,.NET,StackFrame,CallStack,CallerFilePathAttribute,CallerLineNumberAttribute,CallerMemberName
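
    For reference, a timing harness along these lines might look like the sketch below. This is my own minimal version, not the author's benchmark code; LogViaStackFrame and LogViaAttributes are assumed variants of the two Log methods above, rewritten to return the formatted string's length instead of writing to the console:

        using System;
        using System.Diagnostics;
        using System.Runtime.CompilerServices;

        class Benchmark
        {
            static void Main()
            {
                const int Iterations = 1000000;

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < Iterations; i++)
                    LogViaStackFrame("Hello Logger!");
                sw.Stop();
                Console.WriteLine("Time for 1,000,000 iterations with StackTrace: {0} ms",
                    sw.ElapsedMilliseconds);

                sw = Stopwatch.StartNew();
                for (int i = 0; i < Iterations; i++)
                    LogViaAttributes("Hello Logger!");
                sw.Stop();
                Console.WriteLine("Time for 1,000,000 iterations with Attributes: {0} ms",
                    sw.ElapsedMilliseconds);
            }

            // Variant that walks the stack each call.
            static int LogViaStackFrame(object message)
            {
                var frame = new StackFrame(1, true);
                return string.Format("{0}({1}):{2} - {3}",
                    frame.GetFileName(), frame.GetFileLineNumber(),
                    frame.GetMethod().Name, message).Length;
            }

            // Variant that relies on the compile-time caller attributes.
            static int LogViaAttributes(object message,
                [CallerMemberName] string memberName = "",
                [CallerFilePath] string fileName = "",
                [CallerLineNumber] int lineNumber = 0)
            {
                return string.Format("{0}({1}):{2} - {3}",
                    fileName, lineNumber, memberName, message).Length;
            }
        }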

    Read the article

  • Know your Data Lineage

    - by Simon Elliston Ball
    An academic paper without the footnotes isn’t an academic paper. Journalists wouldn’t base a news article on facts they can’t verify. So why would anyone publish reports without being able to say where the data has come from and be confident of its quality; in other words, without knowing its lineage (sometimes referred to as ‘provenance’ or ‘pedigree’)? The number and variety of data sources, both traditional and new, increases inexorably. Data comes clean or dirty, processed or raw, unimpeachable or entirely fabricated. On its journey from its source to our report, the data can travel through a network of interconnected pipes, passing through numerous distinct systems, each managed by different people. At each point along the pipeline, it can be changed, filtered, aggregated and combined. When the data finally emerges, how can we be sure that it is right? How can we be certain that no part of the data collection was based on incorrect assumptions, that key data points haven’t been left out, or that the sources are good? Even when we’re using data science to give us an approximate or probable answer, we cannot have any confidence in the results without confidence in the data from which they came. You need to know what has been done to your data, where it came from, and who is responsible for each stage of the analysis. This information represents your data lineage; it is your stack-trace. If you’re an analyst, suspicious of a number, it tells you why the number is there and how it got there. If you’re a developer, working on a pipeline, it provides the context you need to track down the bug. If you’re a manager, or an auditor, it lets you know the right things are being done. Lineage tracking is part of good data governance. Most audit and lineage systems require you to buy into their whole structure. If you are using Hadoop for your data storage and processing, then tools like Falcon allow you to track lineage, as long as you are using Falcon to write and run the pipeline. It can mean learning a new way of running your jobs (or using some sort of proxy), and even a distinct way of writing your queries. Other Hadoop tools provide a lot of operational and audit information, spread throughout the many logs produced by Hive, Sqoop, MapReduce and all the various moving parts that make up the ecosystem. To get a full picture of what’s going on in your Hadoop system you need to capture both the Falcon lineage and the data-exhaust of the other tools that Falcon can’t orchestrate. However, the problem is bigger even than that. Often, Hadoop is just one piece in a larger processing workflow. The next step of the challenge is how you bind together the lineage metadata describing what happened before and after Hadoop, where ‘after’ could be a data analysis environment like R, an application, or even an end-user tool such as Tableau or Excel. One possibility is to push as much as you can of your key analytics into Hadoop, but would you give up the power and familiarity of your existing tools in return for a reliable way of tracking lineage? Lineage and auditing should work consistently, automatically and quietly, allowing users to access their data with whatever tool they need.
    The real solution, therefore, is to create a consistent method by which to bring lineage data from these various disparate sources into the data analysis platform that you use, rather than being forced to use the tool that manages the pipeline for the lineage and a different tool for the data analysis. The key is to keep your logs and your audit data, from every source, bring them together, and use the data analysis tools to trace the paths from raw data to the answer that data analysis provides.

    Read the article

  • Data Binding to Attached Properties

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2013/06/14/data-binding-to-attached-properties.aspx

    When I was working on my C#/XAML game framework, I discovered I wanted to try to data bind my sprites to background objects. That way, I could update my objects and the draw functionality would take care of the work for me. After a little experimenting and web searching, it appeared this concept was an impossible dream. Of course, when has that ever stopped me? In my typical way, I started to massively dive down the rabbit hole. I created a sprite on a canvas, and I bound it to a background object.

        <Canvas Name="GameField" Background="Black">
            <Image Name="PlayerSprite" Source="Assets/Ship.png" Width="50" Height="50"
                   Canvas.Left="{Binding X}" Canvas.Top="{Binding Y}"/>
        </Canvas>

    Now, we wire the UI item to the background item.

        public MainPage()
        {
            this.InitializeComponent();
            this.Loaded += StartGame;
        }

        void StartGame( object sender, RoutedEventArgs e )
        {
            BindingPlayer _Player = new BindingPlayer();
            DataContext = _Player; // bind the page to the player object so the {Binding} paths resolve
            _Player.Y = Window.Current.Bounds.Height - PlayerSprite.Height;
            _Player.X = ( Window.Current.Bounds.Width - PlayerSprite.Width ) / 2.0;
        }

    Of course, now we need to actually have our background object.

        public class BindingPlayer : INotifyPropertyChanged
        {
            private double m_X;
            public double X
            {
                get { return m_X; }
                set { m_X = value; NotifyPropertyChanged(); }
            }

            private double m_Y;
            public double Y
            {
                get { return m_Y; }
                set { m_Y = value; NotifyPropertyChanged(); }
            }

            public event PropertyChangedEventHandler PropertyChanged;
            protected void NotifyPropertyChanged( [CallerMemberName] string p_PropertyName = null )
            {
                if( PropertyChanged != null )
                    PropertyChanged( this, new PropertyChangedEventArgs( p_PropertyName ) );
            }
        }

    I fired this baby up, and my sprite was correctly positioned on the screen. Maybe the sky wasn't falling after all. Wouldn't it be great if that was the case? I created some code to allow me to move the sprite, but nothing happened. This seemed odd. So, I started debugging the application and stepping through code. Everything appeared to be working. Time to dig a little deeper. After much profanity was spewed, I stumbled upon a breakthrough. The code only looked like it was working. What was really happening is that there was an exception being thrown in the background thread that I never saw. Apparently, the key call was the one to PropertyChanged. If PropertyChanged is not raised on the UI thread, the UI thread ignores the call. Actually, it throws an exception and the background thread silently crashes. Of course, you'll never see this unless you're looking REALLY carefully. This seemed to be a simple problem: I just need to marshal this to the UI thread. Unfortunately, this object has no knowledge of this mythical UI thread of which we speak. So, I had to pull the UI thread out of thin air. Let's change our PropertyChanged call to look like this.

        public event PropertyChangedEventHandler PropertyChanged;
        protected void NotifyPropertyChanged( [CallerMemberName] string p_PropertyName = null )
        {
            if( PropertyChanged != null )
                Windows.ApplicationModel.Core.CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
                    Windows.UI.Core.CoreDispatcherPriority.Normal,
                    new Windows.UI.Core.DispatchedHandler( () =>
                    {
                        PropertyChanged( this, new PropertyChangedEventArgs( p_PropertyName ) );
                    } ) );
        }

    Now, we raise our notification on the UI thread. Everything is fine, people are happy, and the world moves on. You may have noticed that I didn't await my call to the dispatcher. This was intentional. If I am trying to update a slew of sprites, I don't want the thread hanging while I wait my turn. Thus, I send the message and move on. It is worth noting that this is NOT the most efficient way to do this for game programming. We'll get to that in another blog post. However, it is perfectly acceptable for a business app that is running a background task that would like to notify the UI thread of progress on a periodic basis. It is worth noting that this code was written for a Windows Store app. You can do the same thing with WP8 and WPF. The call to the marshaler changes, but it is the same idea.
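
    For the WPF case mentioned in that last sentence, the marshaling call might look something like this sketch (standard WPF dispatcher API; this snippet is mine, not from the original post):

        // WPF variant: marshal the notification through the application's dispatcher.
        protected void NotifyPropertyChanged( [CallerMemberName] string p_PropertyName = null )
        {
            if( PropertyChanged != null )
                System.Windows.Application.Current.Dispatcher.BeginInvoke( new Action( () =>
                    PropertyChanged( this, new PropertyChangedEventArgs( p_PropertyName ) ) ) );
        }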

    Read the article

  • Sass interface in HTML6 for upload files.

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/sass-interface-in-html6-for-upload-files.aspx

    [This post is about experiment & imagination.]

    Since Windows XP (the earliest OS I tried), there has been a feature for sending a file to a pen drive or making a shortcut on the Desktop. In XP and Win7 (Win8 has this too; it was not removed), you just select the file, right click > Send to, and you can send the file to many places. My menu shows me Skype because I have installed it. Skype's entry confirms that an app can register itself here, making it easier for the user to send a file into that app. Nowadays many people use the cloud or an online site to store their files. With HTML5 drag and drop, you need to have the site open, on the page that handles the file upload. You select everything, drag and drop, and the file is simply uploaded to the server and shown in a list on the site (if no error happens). But this kind of file upload is not really convenient, since I have to open the site every time I do this operation.

    Through this post I want to describe a feature that could make this better. The API is simply called the SASS FILE UPLOAD API. With this API, when you surf a site and come to its file upload page, the page can tell you that it also has SASS FILE API support; enable it for a better experience.

    How this works: the API feature is activated on 2 conditions.
    1. The feature is disabled by default on the site (or you can change that if it's not).
    2. The API allows a specific site to upload files. File uploads may have rules, for example the minimum or maximum size of the file to be uploaded, and which formats the site allows you to upload. On a resume site you would be allowed to use .doc (according to the code of the site).

    How the browser recognizes that the site has the SASS service: in the HTML source of the site, the code has a meta tag similar to this:

        <meta name="sass-upload-api" path="/upload.json"/>

    Remember that upload.json is a file that defines the values of many settings:

        {
          "cookie_name": "ck_file",
          "maximum_allowed_perday": 24,
          "allowed_file_extensions": "*.png,*.jpg,*.jpeg,*.gif",
          "method": [
            { "get": "file/get", "routing": "/file/get/{fileName}" },
            { "post": "file/post", "routing": "/file/post/{fileName}" },
            { "delete": "file/delete", "routing": "/file/delete/{fileName}" },
            { "put": "file/put", "routing": "/file/put/{fileName}" },
            { "all": "file/all", "routing": "/file/all/{fileName}" }
          ]
        }

    cookie_name is simply a cookie which should be stored in the browser and defined in the json. We define the cookie_name so we can easily share it with the service in the Windows system. This cookie will be accessible to the service, so it's safe security-wise; other cookies will not be shared. The cookie will be sent with the post, put and get calls to these locations. The "all" location simply returns the whole list of files, as a treeview-like json showing the directories on the server.

    For example, if you have activated the API with example.com, you will see a Send to option in your explorer.exe. When you send, you will get a window asking which folder you want to use for the file. The window will also describe the limits and how much you can upload. This never requires the site to be opened. When you upload the file, it will be uploaded through the FTP protocol; FTP is better for performance.

    How this API makes things faster: suppose you want to ask a question and post an image. You just send the file ahead of time, and when you open stackoverflow.com, Stack Overflow only asks which of your uploaded files you want to attach to the question you are asking. A second use is for people using cloud apps. There is no need for drag and drop anymore; we don't even need to open the site. This is still at the experiment level. I will update this post when I make some progress on this API.

    Read the article

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the “Data Hub” – a codename for a project that, ironically, actually does what the codename says. In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases, made changes to one of the copies. It’s difficult to know which location or version of the data is authoritative. Then there’s the problem of accessing the data. It’s fairly straightforward to publish a database, share or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, bringing up the security, connection-string and discovery issues all over again. Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it’s available to you - and only you and your organization, if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then create data areas and import data into them. From there you make them discoverable, and then you have multiple options for how you or your users access that data. You’re then able, if you wish, to combine that data with other data in one location. So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) that know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I’ll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you’ll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage. After the data is entered in, and you’ve assigned meta-data to describe it, your users have multiple options to access it. They can simply use the portal – which actually has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924 (a rough sketch of such a call follows below). You can make HTTP calls instead of code, and the data can even be exposed as an OData feed. As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
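
    As a rough sketch of the kind of call involved (the endpoint URL, feed name and credentials here are placeholders, not the Data Hub's documented interface; see the MSDN link above for the real API):

        // Hypothetical sketch: pull an OData feed exposed by the hub into a string.
        using System;
        using System.Net;

        class DataHubClient
        {
            static void Main()
            {
                var client = new WebClient();
                client.Credentials = new NetworkCredential("user", "password");
                // An OData feed returns Atom/XML that any OData-aware tool can consume.
                string feed = client.DownloadString(
                    "https://yourcompany.example.com/data/SalesFigures?$top=10");
                Console.WriteLine(feed);
            }
        }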

    Read the article

  • Styling ASP.NET MVC Error Messages

    - by MightyZot
    Originally posted on: http://geekswithblogs.net/MightyZot/archive/2013/11/11/styling-asp.net-mvc-error-messages.aspx

    Off the cuff, it may look like you’re stuck with the presentation of your error messages (model errors) in ASP.NET MVC. That’s not the case, though. You actually have quite a number of options with regard to styling those boogers. Like many of the helpers in MVC, the Html.ValidationMessageFor helper has multiple prototypes. One of those prototypes lets you pass a dictionary, or anonymous object, representing attribute values for the resulting markup.

        @Html.ValidationMessageFor( m => Model.Whatever, null, new { @class = "my-error" })

    By passing the htmlAttributes parameter, which is the last parameter in the call to the prototype of Html.ValidationMessageFor shown above, I can style the resulting markup by associating styles with the my-error css class. When you run your MVC project and view the source, you’ll notice that MVC adds the class field-validation-valid or field-validation-error to a span created by the helper. You could actually just style those classes instead of adding your own… it’s really up to you.

    Now, what if you wanted to move that error message around? Maybe you want to put that error message in a box or a callout. How do you do that? When I first started using MVC, it didn’t occur to me that the Html.ValidationMessageFor helper just spits out a little bit of markup. I wanted to put the error messages in boxes with white backgrounds (our site originally had a black background) and show a little nib on the side to make them look like callouts or conversation bubbles. Not realizing how much freedom there is in the styling and markup, and after reading someone else’s post, I created my own version of the ValidationMessageFor helper that took out the span and replaced it with divs. I styled the divs to produce the effect of a popup box and had a lot of trouble with sizing and such. That’s a really silly and unnecessary way to solve this problem. If you want to move your error messages around, all you have to do is move the helper. MVC doesn’t appear to care where you put it, which makes total sense when you think about it. Html.ValidationMessageFor is just spitting out a little markup using a little bit of reflection on the name you’re passing it. All you’ve got to do to style it the way you want it is to put it in whatever markup you desire. Take a look at this, for example…

        <div class="my-anchor">@Html.ValidationMessageFor( m => Model.Whatever )</div>
        @Html.TextBoxFor(m => Model.Whatever)

    Now, given that bit of HTML, consider the following CSS…

        <style>
            .my-anchor { position: relative; }
            .field-validation-error {
                background-color: white;
                border-radius: 4px;
                border: solid 1px #333;
                display: block;
                position: absolute;
                top: 0; right: 0; left: 0;
                text-align: right;
            }
        </style>

    The my-anchor class establishes an anchor for the absolutely positioned error message. Now you can move the error message wherever you want it relative to the anchor. Using css3, there are some other tricks. For example, you can use the :not(:empty) selector to select the span and apply styles based upon whether or not the span has text in it. Keep it simple, though. Moving your elements around using absolute positioning may cause you issues on devices with screens smaller than your standard laptop or PC.

    While looking for something else recently, I saw someone asking how to style the output of Html.ValidationSummary. Html.ValidationSummary is the helper that will spit out a list of property errors, general model errors, or both. It spits out fairly simple markup as well, so you can use the techniques described above with it too. The resulting markup is a <ul><li></li></ul> unordered list of error messages that carries the class validation-summary-errors. In the forum question, the user was asking how to hide the error summary when there are no errors. Their errors were in a red box and they didn’t want to show an empty red box when there aren’t any errors. Obviously, you can use the css3 selectors to apply different styles to the list when it’s empty and when it’s not empty; however, that’s not supported in all browsers. Well, it just so happens that the unordered list carries the class validation-summary-valid when the list is empty. While Html.ValidationSummary renders a visible div containing one invisible list item, you can always just style the whole div with “display:none” when the validation-summary-valid class is applied and make it visible when the validation-summary-errors class is applied. Or, if you don’t like that solution, which I like quite well, you can also check the model state for errors with something like this…

        int errors = ViewData.ModelState.Sum(ms => ms.Value.Errors.Count);

    That’ll give you a count of the errors that have been added to ModelState. You can check that and conditionally include markup in your page if you want to (a minimal sketch follows at the end of this post). The choice is yours. Obviously, doing most everything you can with styles increases the flexibility of the presentation of your solution, so I recommend going that route when you can. That picture of the fat guy jumping has nothing to do with the article. That’s just a picture of me on the roof and I thought it was funny. Doesn’t every post need a picture?
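
    A minimal sketch of that conditional-markup idea (my example; the error-box CSS class is hypothetical):

        @{
            // Only render the summary box when the model actually has errors.
            int errorCount = ViewData.ModelState.Sum(ms => ms.Value.Errors.Count);
        }
        @if (errorCount > 0)
        {
            <div class="error-box">
                @Html.ValidationSummary()
            </div>
        }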

    Read the article

  • Is changing my job now a wise decision? [closed]

    - by FlaminPhoenix
    First, a little background about myself. I am a JavaScript programmer with 3.8 years of experience. I joined my current company a year and 3 months ago, and I was recruited as a JavaScript programmer. I was under the impression I was a programmer in a programming team, but this was not the case. No one else in my team, except me and my manager, knows anything about programming. The other two teammates copy-paste stuff from websites into Excel sheets. I was told I was being recruited for a new project, and it was true. The only problem was that the server side language they were using was PHP. They were using a popular library with PHP, and I had never worked with PHP before. Nevertheless, I learnt it well enough to get things working, and received high praise from my boss's boss on whichever project I worked on. Words like "wow" and "This looks great, the client's gonna be impressed with this" were sprinkled every now and then on reviewing my work. They even managed to sell my work to a couple of clients, and as I understand, both of my projects are going to fetch them a pretty buck.

    The problem: I was asked to move onto a project which my manager was handling. I asked them for training on the project, which never came, and sure enough I couldn't complete my first task on the new project without shortcomings. I told my manager there were things I didn't know how to get done in the new project due to lack of training. His project had zero documentation. I was told he would "take care" of everything relating to those shortcomings. In the meantime, I was asked to switch to another project. My manager made the necessary changes and later told me that the build had "broken" on the production server and that I needed to "test" my changes before saying things were done. I never deployed it on the production server. He did. I never saw, or had the opportunity to see, the final build before it went to production. He called me for a separate meeting and started pointing fingers at me, but I took full responsibility even if I didn't have to. He later got on a call with his boss, in my presence, and gave him the impression that it was all my fault. I have not confronted him about this so far. I have worked late / done overtime without them asking a lot, but last week, I had just got home from work when I got calls asking me to solve an issue which till then they had kept quiet about, even though they were informed about it. I asked my manager why I hadn't been tasked with this when I was in the office. He started telling me which statements to put where, as if to mock me, saying that this "is hardly an overtime issue", and this pissed me off. Also, during the previous meeting, he was constantly talking highly about his own work while trying to demean mine. In the meantime, I have attended an interview with another MNC, and the interviewers there were fully respectful of my decision to leave my current company. It's a software company, so I can expect my colleagues to know a lot more than me. I'm told I can expect their offer any time this week.

    My questions:
    1. Is my anger towards my manager justified?
    2. While leaving, do I tell him that it's because of his actions that I'm leaving?
    3. Do I erupt in anger and tell him that he shouldn't have put the blame on me, since he was the one doing the deployment?

    This is going to be my second resignation to this company. The first time I wanted to resign, I was asked to stay back, and my manager promised a lot of changes, a couple of which were made. How do I keep myself from getting into such situations with my employers in the future?

    Read the article

  • Exception converting Office files to PDF using ABCpdf.NET on Windows Server 2008

    - by drivendevelopment
    Has anyone dealt with this exception from ABCpdf? We're running on Server 2008 and only have issues converting Office files (Word and Excel). This all worked well on Server 2003. Because we're only having issues with Office files, I wonder if it's related to the XPS support on Server 2008? The code that calls into this function is running as a Windows Service.

        Private Overloads Function ConvertMicrosoftOfficeDocToPdf(ByVal inputFile As Byte(), ByVal fileExt As String) As Byte()
            Dim abcDoc As WebSupergoo.ABCpdf7.Doc = Nothing
            Try
                abcDoc = New WebSupergoo.ABCpdf7.Doc()
                Dim xro As New WebSupergoo.ABCpdf7.XReadOptions()
                xro.FileExtension = fileExt
                Try
                    abcDoc.Read(inputFile, xro)
                Catch ex As Exception
                    System.Diagnostics.Trace.Write(ex.ToString())
                    Throw ex
                End Try
                Dim fileBytes As Byte() = abcDoc.GetData()
                Return fileBytes
            Finally
                If Not abcDoc Is Nothing Then
                    abcDoc.Clear()
                    abcDoc.Dispose()
                End If
            End Try
        End Function

    The exception:

        WebSupergoo.ABCpdf7.Internal.PDFException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
         ---> System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
           at WebSupergoo.ABCpdf7.Internal.NDoc._InvokeMethod(IntPtr inDoc, Int32 inMethod, Int32 inIndex, Int32 inFlags, String inParams, String& outErr)
           at WebSupergoo.ABCpdf7.Internal.NDoc.InvokeMethod(IntPtr inDoc, Int32 inMethod, Int32 inIndex, Int32 inFlags, String inParams, String& outErr)
           at WebSupergoo.ABCpdf7.Doc.PrintToXps(String inputFile, String outputFile, Int32 timeout, String printerName)
           at WebSupergoo.ABCpdf7.Operations.XpsImportOperation.ImportAny(Doc doc, String path, Int32 timeout)
           at WebSupergoo.ABCpdf7.XReadOptions.ImportXpsAny(Doc doc, String path, Boolean clear)
           at WebSupergoo.ABCpdf7.XReadOptions.Read(Doc doc, Byte[] data, ReadModuleType module)
           at WebSupergoo.ABCpdf7.XReadOptions.Read(Doc doc, Byte[] data)

    Read the article

  • WPF Toolkit DataGridCell Style DataTrigger

    - by KrisTrip
    I am trying to change the color of a cell to Yellow if the value has been updated in the DataGrid. My XAML:

        <toolkit:DataGrid x:Name="TheGrid" ItemsSource="{Binding}" IsReadOnly="False"
                          CanUserAddRows="False" CanUserResizeRows="False" AutoGenerateColumns="False"
                          CanUserSortColumns="False" SelectionUnit="CellOrRowHeader"
                          EnableColumnVirtualization="True" VerticalScrollBarVisibility="Auto"
                          HorizontalScrollBarVisibility="Auto">
            <toolkit:DataGrid.CellStyle>
                <Style TargetType="{x:Type toolkit:DataGridCell}">
                    <Style.Triggers>
                        <DataTrigger Binding="{Binding IsDirty}" Value="True">
                            <Setter Property="Background" Value="Yellow"/>
                        </DataTrigger>
                    </Style.Triggers>
                </Style>
            </toolkit:DataGrid.CellStyle>
        </toolkit:DataGrid>

    The grid is bound to a List of arrays (displaying a table of values, kind of like Excel would). Each value in the array is a custom object that contains an IsDirty dependency property. The IsDirty property gets set when the value is changed. When I run this:

    • change a value in column 1 = whole row goes yellow
    • change a value in any other column = nothing happens

    I want only the changed cell to go yellow, no matter what column it's in. Do you see anything wrong with my XAML?

    Read the article

  • Algorithm design, "randomising" timetable schedule in Python although open to other languages.

    - by S1syphus
    Before I start I should add that I am a musician, not a native programmer; this was undertaken to make my life easier. Here is the situation: at work I'm given a new csv file, which contains a list of sound files, their lengths, and the minimum total amount of time each must be played. I create a playlist of exactly 60 minutes from this csv file. Each sample is played at least its minimum number of instances, but spread out from the others, so there will never be a period where one sound is played twice in a row or in close proximity to itself. Secondly, if the minimum instances of each sound have been used and there is still time within the 60 minutes, it needs to fill the remaining time with sounds until 60 minutes is reached, while still adhering to the above. The smallest duration possible is 15 seconds, and then multiples of 15 seconds. Here is what I came up with in Python, and the problems I'm having with it; as one user said, it's buggy due to the random library used in it. So I'm guessing a total rethink is on the table, and here is where I need your help. What is the best way to solve this? I have had a brief look at things like knapsack and bin packing algorithms; while both are relevant, neither is appropriate, and maybe a bit beyond me.

    Read the article

  • how many types of code signing certificates do I need?

    - by gerryLowry
    in Canada, website SSL certificates can be had for as low as US$10. Unfortunately, code signing certificates cost about 10 times as much. One website mentions Vista compatibility... this seems strange, because my assumption is they must support XP, Vista, Windows 7, Server 2003, and Server 2008 or they would be useless. https://secure.ksoftware.net/code_signing.html US$99

    Supported Platforms:
    • Microsoft Authenticode. Sign any Microsoft executable format (32 and 64 bit EXE, DLL, OCX, or any ActiveX control). Signing hardware drivers is not currently supported.
    • Adobe AIR. Sign any Adobe AIR application.
    • Java. Sign any JAR applet.
    • Microsoft Office. Sign any MS Office Macro or VBA (Visual Basic for Applications) file.
    • Mozilla. Sign any Mozilla Object file.

    The implication is that a single code signing certificate can do ALL of the above. ksoftware actually discounts Comodo certificates, and the Comodo website is unclear. QUESTION: Will ONE code signing certificate be enough, or do I need one for Microsoft executables and a second for things like Word and Excel macros? My main goal is to sign things like vs2008 code snippets so that I can export them securely; however, I would like to be able to use the same code signing certificate for signing other items too. Thank you ~~ regards, Gerry (Lowry)

    Read the article

  • Using the Salesforce PHP API to generate a User Profile Report

    - by Phill Pafford
    Hi All, Looking to do a security audit of all user permissions. I think I can use the Salesforce PHP Toolkit 11 API to generate the report, but I'm new to Salesforce and a little confused about where to start. In Salesforce Setup, under Administration Setup -> Manage Users -> Profiles -> Profile Names, if you click on each user name you can see the permission set and the actions the user is allowed to perform. I want a way to generate an Excel report for all users with all the permissions for each user. Example:

        User Name | Can view Case | Can edit case | Can delete case | etc...
        phill     | yes           | no            | no              | x...

    and so on. I see that in Salesforce I can run a high-level report on the Profile, but I need to drill down for each user. Has anyone ever done this type of reporting before? Any help on this would be great. Thanks in advance, --Phill

    Read the article

  • input type file alternative and file upload best practice

    - by Ioxp
    Background: I am working on a file upload page that will extend an existing web portal. This page will allow an end user to upload files from their local computer to our network (the files will not be stored on the web server, but rather on a remote workstation). The end user will have the ability to view the data that they have submitted by hyperlinking the files that have been uploaded on this page. Question 1: Is there an ASP.NET alternative to the <input type="file" runat="server" /> HTML tag? The reason for asking is that I would rather use an image button and display the file as an asp label on the portal, to keep a consistent style. Question 2: I understand that giving the end user the ability to upload files to the server, and then turning around to show them the data that they posted, poses a security threat. So far I am using id.PostedFile.ContentType and the file extension to reject the data if it's not an accepted format (i.e. "text/plain", "application/pdf", "application/vnd.ms-excel", or "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"). Also, the location where the files are uploaded to has a sufficient amount of virus and malware protection, so this is not a concern. From the C# point of view, what additional steps should I take to ensure that the end user can't take advantage of and compromise the system through the files they are allowed to upload?
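
    For question 2, a minimal sketch of the kind of whitelist check described above (my illustration; the helper name and the extension list matching those content types are assumptions):

        using System.Collections.Generic;
        using System.IO;
        using System.Web;

        // Hypothetical helper: accept a posted file only when both its reported
        // content type and its extension are on the allowed lists.
        public static class UploadValidation
        {
            private static readonly HashSet<string> AllowedTypes = new HashSet<string>
            {
                "text/plain",
                "application/pdf",
                "application/vnd.ms-excel",
                "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
            };

            private static readonly HashSet<string> AllowedExtensions = new HashSet<string>
            {
                ".txt", ".pdf", ".xls", ".xlsx"  // assumed to match the types above
            };

            public static bool IsAcceptable(HttpPostedFile file)
            {
                string ext = Path.GetExtension(file.FileName).ToLowerInvariant();
                return AllowedTypes.Contains(file.ContentType)
                    && AllowedExtensions.Contains(ext);
            }
        }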

    Read the article

  • MySQL performance - 100Mb ethernet vs 1Gb ethernet

    - by Rob Penridge
    Hi All, I've just started a new job and noticed that the analysts' computers are connected to the network at 100Mbps. The ODBC queries we run against the MySQL server can easily return 500MB+, and it seems at times when the servers are under high load the DBAs kill low priority jobs as they are taking too long to run. My question is this... How much of this server time is spent executing the request, and how much time is spent returning the data to the client? Could the query speeds be improved by upgrading the network connections to 1Gbps? (Updated for the why): The database in question was built to accommodate reporting needs and contains massive amounts of data. We usually work with subsets of this data at a granular level in external applications such as SAS or Excel, hence the large amounts of data being transmitted. The queries are not poorly structured - they are very simple and the appropriate joins/indexes etc. are being used. I've removed 'query' from the title of the post as I realised this question is more to do with general MySQL performance than query-related performance. I was kind of hoping that someone with a Gigabit connection may be able to actually quantify some results for me here by running a query that returns a decent amount of data, then limiting their connection speed to 100Mb and rerunning the same query. Hopefully this could be done in an environment where loads are reasonably stable so as not to skew the results. If ethernet speed can improve the situation, I wanted some quantifiable results to help argue my case for upgrading the network connections (a rough calculation follows below). Thanks, Rob
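
    As a back-of-the-envelope check of the transfer-time component only (assuming ideal throughput and ignoring protocol overhead): a 500 MB result set is about 4,000 megabits, so the wire-transfer floor is roughly 4,000 Mb / 100 Mbps = 40 seconds on the 100Mbps link, versus roughly 4 seconds at 1Gbps. Real TCP and ODBC overhead will inflate both numbers, but that 10x best-case ratio is the upper bound on what the upgrade alone can buy.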

    Read the article

  • VB.Net plugin using Matlab COM Automation Server...Error: 'Could not load Interop.MLApp'

    - by Ben
    My Problem: I am using the Matlab COM Automation Server to call and execute Matlab .m files from a VB.Net plugin for a CAD program called Rhino 3D. The code works flawlessly when set up as a simple Windows Application in Visual Studio, but when I insert it (and make the requisite reference) into my .Net plugin and test it in the CAD program, I get the following error: "Could not load file or assembly 'Interop.MLApp, Version 1.0.0.0, culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified." What I've Tried: I am baffled as to why this occurs, but I was able to contact the CAD program's technical support staff, and they suggested that it has something to do with their DotNet SDK having trouble with references that are located far outside the CAD program directory. They didn't have any solutions, so I tried playing around with CopyLocal, and this made no difference. I tried using other COM libraries, and the Open Office automation server works fine, although it uses URLs instead of requiring a reference. I also tested Excel, which does require a reference, and it returned the error: "Retrieving the COM class factory for component with CLSID {...} failed due to the following error: 80040154." This may or may not be related to the issue with the Matlab COM reference, but I thought it was worthwhile to share. Perhaps there is another way to reference Interop.MLApp? I would appreciate any suggestions or thoughts on how I might make the Matlab Interop.MLApp reference work. Best regards, Ben

    Read the article

  • What is the worst programming language you ever worked with? [closed]

    - by Ludwig Weinzierl
    If you have an interesting story to share, please post an answer, but do not abuse this question for bashing a language. We are programmers, and our primary tool is the programming language we use. While there is a lot of discussion about the best one, I'd like to hear your stories about the worst programming languages you ever worked with, and I'd like to know exactly what annoyed you. I'd like to collect these stories partly to avoid common pitfalls while designing a language (especially a DSL) and partly to avoid quirky languages in the future in general. This question is not subjective. If a language supports only single character identifiers (see my own answer), this is bad in a non-debatable way. EDIT Some people have raised concerns that this question attracts trolls. Wading through all your answers made one thing clear: the large majority of answers is appropriate, useful and well written. UPDATE 2009-07-01 19:15 GMT The language overview is now complete, covering 103 different languages from 102 answers. I decided to be lax about what counts as a programming language and included anything reasonable. Thank you David for your comments on this. Here are all programming languages covered so far (alphabetical order, linked with answer, new entries in bold): ABAP, all 20th century languages, all drag and drop languages, all proprietary languages, APF, APL (1), AS400, Authorware, Autohotkey, BancaStar, BASIC, Bourne Shell, Brainfuck, C++, Centura Team Developer, Cobol (1), Cold Fusion, Coldfusion, CRM114, Crystal Syntax, CSS, Dataflex 2.3, DB/c DX, dbase II, DCL, Delphi IDE, Doors DXL, DOS batch (1), Excel Macro language, FileMaker, FOCUS, Forth, FORTRAN, FORTRAN 77, HTML, Illustra web blade, Informix 4th Generation Language, Informix Universal Server web blade, INTERCAL, Java, JavaScript (1), JCL (1), karol, LabTalk, Labview, Lingo, LISP, Logo, LOLCODE, LotusScript, m4, Magic II, Makefiles, MapBasic, MaxScript, Meditech Magic, MEL, mIRC Script, MS Access, MUMPS, Oberon, object extensions to C, Objective-C, OPS5, Oz, Perl (1), PHP, PL/SQL, PowerDynamo, PROGRESS 4GL, prova, PS-FOCUS, Python, Regular Expressions, RPG, RPG II, Scheme, ScriptMaker, sendmail.conf, Smalltalk, SNOBOL, SpeedScript, Sybase PowerBuilder, Symbian C++, System RPL, TCL, TECO, The Visual Software Environment, Tiny praat, TransCAD, troff, uBasic, VB6 (1), VBScript (1), VDF4, Vimscript, Visual Basic (1), Visual C++, Visual Foxpro, VSE, Webspeed, XSLT. The answers covering 80386 assembler, VB6 and VBScript have been removed.

    Read the article

  • Remove mailmerge data source via OpenXML

    - by Dan
    I have some code that uses OpenXML to open a docx file, find all mail merge fields, and replace them with data (ignoring any data source that may have been provided). I initially tested this against a document created in Office 2007 and it seemed to work great. We then created one in Office 2003 based on an Excel spreadsheet data source and saved it to the 2007 docx format. When we open the file produced by my code, Word warns the user that it is going to execute some SQL, specifically "SELECT * from 'Sheet1$'", with Yes/No options. Selecting Yes requires me to locate the data source; selecting No brings me to the document, which appears to be correct. I'm not sure why this prompt now appears; perhaps it's due to the different data source of the 2003 document? My hope was that there was a way to delete all references to any data sources so that the pop-up wouldn't show. I found this, but it doesn't seem to work. Any suggestions?
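    For what it's worth, the data source reference lives in the <w:mailMerge> element of word/settings.xml inside the docx package, and deleting that element (and, if present, the matching relationship in word/_rels/settings.xml.rels) is the usual way to silence the SQL prompt. The sketch below does this with plain zip/XML handling in Python rather than the OpenXML SDK, so take it as an illustration of which part to remove, not as drop-in code:

        # Rewrite a docx package with the <w:mailMerge> settings element removed.
        import zipfile
        from lxml import etree

        W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

        def strip_mailmerge(src_path, dst_path):
            with zipfile.ZipFile(src_path) as zin, \
                 zipfile.ZipFile(dst_path, "w", zipfile.ZIP_DEFLATED) as zout:
                for item in zin.infolist():
                    data = zin.read(item.filename)
                    if item.filename == "word/settings.xml":
                        root = etree.fromstring(data)
                        for el in root.findall(W + "mailMerge"):
                            root.remove(el)  # drops the data source reference
                        data = etree.tostring(root, xml_declaration=True,
                                              encoding="UTF-8", standalone=True)
                    zout.writestr(item, data)

        strip_mailmerge("merged.docx", "clean.docx")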

    Read the article

  • Change find method in database search so that it isn't case sensitive in Rails app

    - by Ryan
    Hello, I am learning Rails and have created a work-in-progress app that does one-word searches on a database of shortcut keys for various programs (http://keyboardcuts.heroku.com/shortcuts/home). The search method in the Shortcut model is the following:

        def self.search(search)
          search_condition = "%" + search + "%"
          find(:all, :conditions => ['action LIKE ? OR application LIKE ?', search_condition, search_condition])
        end

    ...where 'action' and 'application' are columns in a SQLite table (source: https://we.riseup.net/rails/simple-search-tutorial). For some reason, the search seems to be case sensitive (you can see this by searching 'Paste' vs. 'paste'). Can anyone help me figure out why, and what I can do to make it case insensitive? If not, can you at least point me in the right direction?

    Database creation: I first copied shortcuts from various websites into Excel and saved it as a CSV. Then I migrated the database and filled it with the data using db:seed and a small script I wrote (I viewed the database and it looked fine). To get the SQLite database to the server, I used Taps as outlined by the Heroku website (http://blog.heroku.com/archives/2009/3/18/push_and_pull_databases_to_and_from_heroku/). I am using Ubuntu. Please let me know if you need more information. Thanks in advance for your help, very much appreciated! Ryan

    Read the article

  • Crosstab/Cube/Pivot Components for Delphi

    - by Anagoge
    I'm looking for a Delphi VCL crosstab/cube/pivot-cube/OLAP grid component for Delphi 2009, 2010, or XE. I'm willing to sacrifice advanced features to get something open/free (or very cheap if I must) to make it easier to collaborate with any future developers, without anyone having to purchase more components than I already use, since this will just be used in one screen. If there isn't anything appropriate out there, I may try to implement something simple on my own. I can live with some fairly basic features: drag and drop to configure dimensions, sort by a column, totals/min/max for a column, and (optionally) expand/collapse or drill down to sub-categories. Blazing performance and enterprise scalability are not required, since there should be fewer than 2000 source rows.

    There appear to be several decent options in the commercial space (ExpressPivotCube, FastCube, HierCube), but they all cost a few hundred dollars. This project already uses existing installations of Excel 2007 and SQL Server 2005/2008, so I might consider leveraging those, though I'd prefer a native Delphi component if possible. There are also the very old Decision Cube components included in Delphi's Source\xtab directory, but they apparently no longer support Unicode compilers (Delphi 2009+): I got dozens of Unicode-related compilation errors when test-compiling that source in Delphi XE, and those components also still link to the long-deprecated BDE! Has anyone modified Decision Cube to support Unicode and pure TDataSet? The online tutorials I found were incomplete and silent on the dozens of BDE/Unicode compilation errors I see, so I might have to tackle that on my own. Does anyone have suggestions on where to start for a free/cheap basic crosstab/pivot grid component?

    Read the article

  • User (MS-Office) generated content - how?

    - by Avi
    How can I allow users to share Microsoft Office generated content on an ASP.Net site? For example, imagine a site similar to StackOverflow. George, writing a question, uses Word, Excel or OneNote to create content, and then inserts the content into the question area (probably copying it to the clipboard and then using some "paste from Office" widget). Harry, who doesn't have MS-Office on his computer, can still see in his browser the content George has generated. If Harry wants to add content, he can use the built-in editor, same as on StackOverflow, and has to be satisfied with lesser functionality. Sue, who has MS-Office installed, can of course see the content in the browser just like Harry; in addition, she can "export" this content and process it in the application George used to generate it. So, how do I do it? Would the Save/Export to HTML feature work? Any tools? Samples? Articles? Office 2007 or later is OK.

    Read the article

  • Programmatically creating vector arrows in KML

    - by mettadore
    Does anyone have any practical examples of programmatically drawing icons as vectors in KML? Specifically, I have data with a magnitude and an azimuth at given coordinates, and I would like to have icons (or another graphical element) generated based on these values. Some thoughts on how I might approach it:

    Image directory (the brute-force way): Make an image directory of 360 different image files (probably batch-rotating a single image), each pointing at a corresponding azimuth. I've seen things like "Excel to KML," but am looking for code that I can use within a program, rather than a web utility. Issue: the arrow does not convey magnitude, so that would have to be a label; I'd rather dynamically lengthen the arrow.

    Line creation in KML: Perhaps write a routine that creates a line with its origin at the coordinate point, its length proportional to the magnitude, and its angle set by the azimuth, plus two more lines extending from its end at perhaps 30 degrees to form the arrowhead (a rough sketch of this approach follows below). Issues: not a separate image icon, so I'm not sure how it would work in KML, and not sure how easy the algorithm would be to write.

    Separate image generation: Perhaps create a PHP file that uses ImageMagick or something similar to dynamically generate a .png in a similar manner to the above, then link to the icon using the URI "domain.tld/imagegen.php?magnitude=magvalue&azimuth=azmvalue". Issue: this still leaves the problem of actually writing the image-generation algorithm.

    So, the question: has anyone else come up with solutions for programmatic vector (rather than merely arrow) generation?
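    As a rough sketch of the second approach (the flat-earth coordinate math and the magnitude-to-degrees scale factor are assumptions to tune, valid only for short arrows away from the poles), here is Python that turns a (lat, lon, magnitude, azimuth) tuple into a KML Placemark containing a LineString shaft plus two arrowhead barbs:

        import math

        def arrow_kml(lat, lon, magnitude, azimuth_deg, scale=0.01):
            # Shaft length in degrees of latitude; 'scale' maps magnitude
            # to distance and is a tuning assumption.
            d = magnitude * scale
            az = math.radians(azimuth_deg)

            def offset(lat0, lon0, dist, bearing):
                # Flat-earth approximation, fine for short arrows.
                dlat = dist * math.cos(bearing)
                dlon = dist * math.sin(bearing) / math.cos(math.radians(lat0))
                return lat0 + dlat, lon0 + dlon

            tip = offset(lat, lon, d, az)
            # Barbs swept back 150 degrees on either side of the shaft.
            barbs = [offset(tip[0], tip[1], d * 0.25, az + math.radians(b))
                     for b in (150, -150)]

            def line(p, q):
                # KML wants lon,lat[,alt] coordinate ordering.
                coords = "%f,%f,0 %f,%f,0" % (p[1], p[0], q[1], q[0])
                return ("<LineString><coordinates>%s</coordinates>"
                        "</LineString>" % coords)

            segments = [line((lat, lon), tip)] + [line(tip, b) for b in barbs]
            # Returns a Placemark fragment to drop inside a KML <Document>.
            return ("<Placemark><MultiGeometry>%s</MultiGeometry></Placemark>"
                    % "".join(segments))

        print(arrow_kml(40.0, -105.0, 5.0, 45.0))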

    Read the article
