Search Results

Search found 97411 results on 3897 pages for 'code analysis tool'.


  • The Google Prediction API

    The Google Prediction API The Prediction API enables you to make your smart apps even smarter. The API gives you access to Google's machine learning algorithms to analyze your historical data and predict likely future outcomes. Using the Google Prediction API, you can build this kind of intelligence into your applications. Read more at code.google.com From: GoogleDevelopers Views: 15834 113 ratings Time: 01:37 More in Science & Technology

    Read the article

  • Problem with javascript [migrated]

    - by Andres Orozco
    Well, I have a problem, and I made this test code to show it. HTML code -- http://i.stack.imgur.com/7qlZx.png JavaScript code -- http://i.stack.imgur.com/DYvuq.png What I want to do in this example is change the styles of the element with that id, but for some reason it doesn't work. The bad news is that I have trouble spotting problems in my own code, so I don't know what's happening and I really need your help. Thanks. Andres.

    Read the article

  • Google Chrome Extensions: Identity, Signing and Auto Update

    Google Chrome Extensions: Identity, Signing and Auto Update Antony Sargent, a software engineer at Google, discusses topics related to IDs, packaging, and distribution of extensions in the Google Chrome Extension system. To get more information, visit code.google.com/chrome/extensions From: GoogleDevelopers Views: 27337 54 ratings Time: 04:08 More in Science & Technology

    Read the article

  • Expression Blend 4 available and training resources

    - by pluginbaby
    As you may know, Expression Blend 4 has shipped! It is still part of Expression Studio, which now comes in 2 “flavors”:

    Expression Studio 4 Ultimate
      - Expression Blend + SketchFlow
      - Expression Web + SuperPreview
      - Expression Encoder
      - Expression Design

    Expression Studio 4 Web Professional
      - Expression Web + SuperPreview
      - Expression Encoder
      - Expression Design

    So the version you want for Silverlight is Expression Studio 4 Ultimate (because you can’t buy Expression Blend alone). Expression Blend is an awesome tool but might be difficult to approach at first, especially for people coming from Visual Studio… this tool targets designers, so it can take time for a developer to get comfortable with it. Good news is the availability of a free “Blend Fundamentals Training” which contains plenty of resources to help you master Expression Blend in 5 days: http://www.microsoft.com/expression/resources/BlendTraining/   Also don’t forget the .toolbox: http://www.microsoft.com/design/toolbox/ This Microsoft website contains courses and tutorials to help you learn UI Design for Silverlight with Expression Blend.

    Read the article

  • Google Python Class Day 2 Part 1

    Google Python Class Day 2 Part 1 Google Python Class Day 2 Part 1: Regular Expressions. By Nick Parlante. Support materials and exercises: code.google.com From: GoogleDevelopers Views: 18 0 ratings Time: 42:00 More in Science & Technology

    Read the article

  • EF 4 Pluralization Update

    - by Ken Cox [MVP]
    I previously wrote about playing with EF 4’s PluralizationService class. Now that OrcsWeb is running ASP.NET 4, you can play with my little pluralization page and its WCF service online. The source code (such as it is!) can be downloaded from the MSDN Code Gallery here: http://code.msdn.microsoft.com/PluralizationService BTW, one annoyance is that the WSDL still includes the default namespace:  namespace="http://tempuri.org/" I swatted a couple of these instances, but if you know...(read more)
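    For context, here is a minimal sketch of what the PluralizationService class does (my own illustration, not Ken's source code; it assumes .NET 4 with a reference to System.Data.Entity.Design.dll, and the service only supports the English culture):

        using System;
        using System.Data.Entity.Design.PluralizationServices;
        using System.Globalization;

        class PluralizationDemo
        {
            static void Main()
            {
                // Only the English culture is supported by the service.
                PluralizationService service =
                    PluralizationService.CreateService(new CultureInfo("en-US"));

                Console.WriteLine(service.Pluralize("Person"));   // People
                Console.WriteLine(service.Singularize("Mice"));   // Mouse
                Console.WriteLine(service.IsPlural("Geese"));     // True
            }
        }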

    Read the article

  • Using the RSSBus Salesforce Excel Add-In From Excel Macros (VBA)

    - by dataintegration
    The RSSBus Salesforce Excel Add-In makes it easy to retrieve and update data from Salesforce from within Microsoft Excel. In addition to the built-in wizards that make data manipulation possible without code, the full functionality of the RSSBus Excel Add-Ins is available programmatically with Excel Macros (VBA) and Excel Functions. This article shows how to write an Excel macro that can be used to perform bulk inserts into Salesforce. Although this article uses the Salesforce Excel Add-In as an example, the same process can be applied to any of the Excel Add-Ins available on our website.

    Step 1: Download and install the RSSBus Excel Add-In available on our website.

    Step 2: Open Excel and create placeholder cells for the connection details that are needed from the macro. In this article, a spreadsheet will be created for batch inserts; these cells will store the connection details and will be used to report the job Id, the batch Id, and the batch status.

    Step 3: Switch to the Developer tab in Excel. Add a new button on the spreadsheet, and create a new macro associated with it. This macro will contain the code needed to insert a batch of rows into Salesforce.

    Step 4: Add a reference to the Excel Add-In by selecting Tools --> References --> RSSBus Excel Add-In. The macro functions of the Excel Add-In will be available once the reference has been added. The following code shows how to call a stored procedure. In this example, a job is created to insert Leads by calling the CreateJob stored procedure. CreateJob returns a jobId that can be used to upload a large number of Leads in one transaction. Note the use of cells B1, B2, B3, and B4 that were created in Step 2 to read the connection settings from the Excel spreadsheet and to write out the status of the procedure.

        methodName = "CreateJob"
        module.SetProviderName ("Salesforce")
        nameArray = Array("ObjectName", "Action", "ConcurrencyMode")
        valueArray = Array("Lead", "insert", "Serial")
        user = Range("B1").value
        pass = Range("B2").value
        atoken = Range("B3").value
        If (Not user = "" And Not pass = "" And Not atoken = "") Then
            module.SetConnectionString ("User=" + user + ";Password=" + pass + ";Access Token=" + atoken + ";")
            If module.CallSP(methodName, nameArray, valueArray) Then
                Dim ColumnCount As Integer
                ColumnCount = module.GetColumnCount
                Dim idIndex As Integer
                For Count = 0 To ColumnCount - 1
                    Dim colName As String
                    colName = module.GetColumnName(Count)
                    If module.GetColumnName(Count) = "id" Then
                        idIndex = Count
                    End If
                Next
                While (Not module.EOF)
                    Range("B4").value = module.GetValue(idIndex)
                    module.MoveNext
                Wend
            Else
                MsgBox "The CreateJob query failed."
            End If
            Exit Sub
        Else
            MsgBox "Please specify the connection details."
            Exit Sub
        End If
    Error:
        MsgBox "ERROR: " & Err.Description

    Step 5: Add the code to your macro. If you use the code above, you can check the results at Salesforce.com. They can be seen at Administration Setup -> Monitoring -> Bulk Data Load Jobs. Download the attached sample file for a more complete demo.

    Distributing an Excel File With Macros: An Excel file with macros is saved using the .xlsm extension. The code for the macro remains in the Excel file, and you can distribute your Excel file to any machine where the RSSBus Salesforce Excel Add-In is already installed.

    Macro Sample File: Please download the fully functional sample Excel file that includes the code referenced here. You will also need the RSSBus Excel Add-In to make the connection. You can download a free trial here.

    Note: You may get an error message stating "Can't find project or library." in Excel 2007, since this example was made using Excel 2010. To resolve this, navigate to Tools -> References and uncheck the "MISSING: RSSBus Excel Add-In", then scroll down and check the "RSSBus Excel Add-In" listed below it.

    Read the article

  • ASP.NET ViewState Tips and Tricks #2

    - by João Angelo
    If you need to store complex types in ViewState DO implement IStateManager to control view state persistence and reduce its size. By default a serializable object will be fully stored in view state using BinaryFormatter. A quick comparison for a complex type with two integers and one string property produces the following results, measured using ASP.NET tracing:

    - BinaryFormatter: 328 bytes in view state
    - IStateManager: 28 bytes in view state

    BinaryFormatter sample code:

        // DO NOT
        [Serializable]
        public class Info
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public int Age { get; set; }
        }

        public class ExampleControl : WebControl
        {
            protected override void OnLoad(EventArgs e)
            {
                base.OnLoad(e);
                if (!this.Page.IsPostBack)
                {
                    this.User = new Info { Id = 1, Name = "John Doe", Age = 27 };
                }
            }

            public Info User
            {
                get
                {
                    object o = this.ViewState["Example_User"];
                    if (o == null)
                        return null;
                    return (Info)o;
                }
                set { this.ViewState["Example_User"] = value; }
            }
        }

    IStateManager sample code:

        // DO
        public class Info : IStateManager
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public int Age { get; set; }

            private bool isTrackingViewState;

            bool IStateManager.IsTrackingViewState
            {
                get { return this.isTrackingViewState; }
            }

            void IStateManager.LoadViewState(object state)
            {
                var triplet = (Triplet)state;
                this.Id = (int)triplet.First;
                this.Name = (string)triplet.Second;
                this.Age = (int)triplet.Third;
            }

            object IStateManager.SaveViewState()
            {
                return new Triplet(this.Id, this.Name, this.Age);
            }

            void IStateManager.TrackViewState()
            {
                this.isTrackingViewState = true;
            }
        }

        public class ExampleControl : WebControl
        {
            protected override void OnLoad(EventArgs e)
            {
                base.OnLoad(e);
                if (!this.Page.IsPostBack)
                {
                    this.User = new Info { Id = 1, Name = "John Doe", Age = 27 };
                }
            }

            public Info User { get; set; }

            protected override object SaveViewState()
            {
                return new Pair(
                    ((IStateManager)this.User).SaveViewState(),
                    base.SaveViewState());
            }

            protected override void LoadViewState(object savedState)
            {
                if (savedState != null)
                {
                    var pair = (Pair)savedState;
                    this.User = new Info();
                    ((IStateManager)this.User).LoadViewState(pair.First);
                    base.LoadViewState(pair.Second);
                }
            }
        }

    Read the article

  • Lua & Javascript documentation generation

    - by Tiddo
    I am in the beginning phase of creating a mobile MMO with my team. The server software will be written in JavaScript using NodeJS, and the client software in Lua using Corona. We need a tool to auto-generate documentation for both the server-side and client-side code. Are there any tools which can generate documentation for both Lua and JavaScript? And as a bonus: we are hosting our project on Bitbucket, and the Bitbucket wiki uses the Creole markup language, so if possible I'd like the tool to export to Creole.

    Read the article

  • Should I migrate to MVC3?

    - by eestein
    Hi everyone. I have an MVC 2 project; my question is: should I migrate to MVC 3? Why? I'd like the opinion of someone who has already migrated, or at least used both MVC 3 and MVC 2. I've already read http://weblogs.asp.net/scottgu/archive/2011/01/13/announcing-release-of-asp-net-mvc-3-iis-express-sql-ce-4-web-farm-framework-orchard-webmatrix.aspx and I already know about the migration tool described here: http://blogs.msdn.com/b/marcinon/archive/2011/01/13/mvc-3-project-upgrade-tool.aspx What I'd really appreciate is your valuable insight. Best regards.

    Read the article

  • Migrating Databases from SQL Server 2008 to SqlAzure

    - by kaleidoscope
    1. Connect to SQL Azure through SSMS. (It will connect only if you have port 1433 open.)
    2. Create the required databases on SQL Azure.
    3. Create and execute schema scripts for the databases. (Make sure you have written scripts which are 100% compatible with SQL Azure. A tool called SQL Azure Migration Wizard, hosted on CodePlex, helps with this.)
    4. Create and execute insert scripts for the databases. (One can use SSMS for generating insert scripts.)

    For further details refer to the Windows Azure Platform Kit Nov 2009.   Anish
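    Once the schema and data are across, a quick connectivity check from code can confirm the migrated database is reachable. Here is a minimal sketch of mine (not part of the original note; the server, database, and credentials are placeholders) using plain ADO.NET, which works against SQL Azure as long as port 1433 is open and encryption is enabled:

        using System;
        using System.Data.SqlClient;

        class AzureConnectivityCheck
        {
            static void Main()
            {
                // Placeholder values - substitute your own server and credentials.
                string connectionString =
                    "Server=tcp:yourserver.database.windows.net,1433;" +
                    "Database=MyMigratedDb;" +
                    "User ID=myuser@yourserver;" +
                    "Password={your-password};" +
                    "Encrypt=True;";

                using (SqlConnection connection = new SqlConnection(connectionString))
                {
                    connection.Open();   // throws if the firewall or credentials are wrong
                    Console.WriteLine("Connected. Server version: " + connection.ServerVersion);
                }
            }
        }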

    Read the article

  • Google I/O 2011: High-performance GWT: best practices for writing smaller, faster apps

    Google I/O 2011: High-performance GWT: best practices for writing smaller, faster apps. Speaker: David Chandler. The GWT compiler isn't just a Java-to-JavaScript transliterator. In this session, we'll show you compiler optimizations to shrink your app and make it compile and run faster. Learn common performance pitfalls, how to use lightweight cell widgets, how to use code splitting with Activities and Places, and compiler options to reduce your app's size and compile time. From: GoogleDevelopers Views: 4791 21 ratings Time: 01:01:32 More in Science & Technology

    Read the article

  • In C what is the difference between null and a new line character? Guys help please [migrated]

    - by Siddhartha Gurjala
    What is the conceptual difference and similarity between NULL and a newline character, i.e. between '\0' and '\n'? Explain their relevance for both integer and character data type variables and arrays. For reference, here are example snippets of a program to read and write a 2D char array.

    PROGRAM CODE 1:

        int main()
        {
            char sort(), stuname(), swap(), (*p)(), (*q)();
            int n;
            p = stuname;
            q = swap;
            printf("Let the number of students in the class be \n");
            scanf("%d", &n);
            fflush(stdin);
            sort(p, q, n);
            return 0;
        }

        char sort(p1, q1, n1)
        char (*p1)(), (*q1)();
        int n1;
        {
            (*p1)(n1);
            (*q1)();
        }

        char stuname(int nos)   /* number of students */
        {
            char name[nos][256];
            int i, j;
            printf("Reading names of %d students started--->\n\n", nos);
            name[0][0] = 'k';   /* initialising as non-NULL character */
            for (i = 0; i < nos; i++)   /* nos = number of students */
            {
                printf("Give name of student %d\n", i);
                for (j = 0; j < 256; j++)
                {
                    scanf("%c", &name[i][j]);
                    if (name[i][j] == '\n')
                    {
                        name[i][j] = '\0';
                        j = 257;
                    }
                }
            }
            printf("\n\nWriting student names:\n\n");
            for (i = 0; i < nos; i++)
            {
                for (j = 0; j < 256 && name[i][j] != '\0'; j++)
                {
                    printf("%c", name[i][j]);
                }
                printf("\n");
            }
        }

        char swap()
        {
            printf("Will swap shortly after getting clarity on scanf and %c");
        }

    The above code is working well, whereas the same logic with a slight difference is not giving appropriate output.

    PROGRAM CODE 2 is identical (plus #include <stdio.h>) except that the reading loop in stuname becomes the following, with the newline check commented out:

        for (j = 0; j < 256 && name[i][j] != '\0'; j++)
        {
            scanf("%c", &name[i][j]);
        }

    PROGRAM CODE 3 is likewise identical except that the reading loop tests for '\n' instead, and a terminator is written after the loop:

        for (j = 0; j < 256 && name[i][j] != '\n'; j++)
        {
            scanf("%c", &name[i][j]);
        }
        name[i][i] = '\0';

    Why are program codes 2 and 3 not working as expected, the way code 1 does?

    Read the article

  • Finding an alert in the middle of your javascript

    - by Ariel Popovsky
    I was debugging a script injection issue the other day using some sample code with an alert in it. The alert was popping up, meaning the code got executed, leaving open the possibility for a hacker to put some nasty malicious code there. I knew my alert was being executed but didn't know how. So I tried something that worked perfectly for this problem: I replaced the native alert function with my own. All I had to do in Chrome was open the JavaScript console and type:

        alert = function(msg) {
            console.log(msg);
            console.trace();
        };

    The next time the malicious code was executed, instead of the regular alert I got something similar to this:

        alert("testing")
        testing
        console.trace()
            alert:2
            (anonymous function):2
            InjectedScript._evaluateOn:312
            InjectedScript._evaluateAndWrap:294
            InjectedScript.evaluate:288
        undefined

    In my case I was able to see what was going on and find the offending function. This was also tested with Firebug in Firefox, and it works as well.

    Read the article

  • Switching from Java/Java EE career path to C POS path?

    - by Muhammad
    I am a Java/Java EE developer with about 3 years in this field. I like low-level programming very much... I favor back-end code over front-end. I have knowledge of C and know a little C++. I got an offer to work with C on point-of-sale (POS) payment terminals. I don't know much about how POS development works (IDE/toolsets, etc.), although I have payment experience (ISO8583, etc.). I need your opinion on switching from Java's high-level world to the POS low-level world. Although I love the low-level world, I am afraid of not finding what I seek there. I know programmers are not measured by the tools they use (including programming languages) but by their minds. I need your opinions on: Is programming POS terminals in C an interesting thing, or will I find myself doing a usual code-writing job? (Especially since I am about to switch my whole career path.) I find myself writing elegant code in Java (like Sobat: http://code.google.com/p/sobat/), code I can find myself in... so will I find the same thing in POS C, or will it all be about libraries that I call to finish my work? Lastly, will this move hurt my current career (stability, conferences, etc.)? (I currently don't plan to move to a new job.) Thanks.

    Read the article

  • ct.sym steals the ASM class

    - by Geertjan
    Some mild consternation on the Twittersphere yesterday: Marcus Lagergren not being able to find the ASM classes in JDK 8 in NetBeans IDE. And there's no such problem in Eclipse (and apparently in IntelliJ IDEA). Help, does NetBeans (despite being incredibly awesome) suck, after all?

    The truth of the matter is that there's something called "ct.sym" in the JDK. When javac is compiling code, it doesn't link against rt.jar. Instead, it uses a special symbol file lib/ct.sym with class stubs. Internal JDK classes are not put in that symbol file, since those are internal classes. You shouldn't want to use them, at all. However, what if you're Marcus Lagergren, who DOES need these classes? I.e., he's working on the internal JDK classes and hence needs to have access to them. Fair enough that the general Java population can't access those classes, since they're internal implementation classes that could be changed anytime, and one wouldn't want all unknown clients of those classes to start breaking once changes are made to the implementation. I.e., this is rt.jar's internal class protection mechanism. But, again, we're now Marcus Lagergren and not the general Java population.

    For the solution, read Jan Lahoda, NetBeans Java Editor guru, here: https://netbeans.org/bugzilla/show_bug.cgi?id=186120 In particular, take note of this:

        AFAIK, the ct.sym is new in JDK6. It contains stubs for all classes that existed in JDK5 (for compatibility with existing programs that would use private JDK classes), but does not contain implementation classes that were introduced in JDK6 (only API classes). This is to prevent application developers to accidentally use JDK's private classes (as such applications would be unportable and may not run on future versions of JDK). Note that this is not really a NB thing - this is the behavior of javac from the JDK. I do not know about any way to disable this except deleting ct.sym or the option mentioned above.

        Regarding loading the classes: JVM uses two classpath's: classpath and bootclasspath. rt.jar is on the bootclasspath and has precedence over anything on the "custom" classpath, which is used by the application. The usual way to override classes on bootclasspath is to start the JVM with "-Xbootclasspath/p:" option, which prepends the given jars (and presumably also directories) to bootclasspath.

    Hence, let's take the first option, the simpler one, and simply delete the "ct.sym" file. Again, only because we need to work with those internal classes as developers of the JDK, not because we want to hack our way around "ct.sym", which would mean you'd not have portable code at the end of the day. Go to the JDK 8 lib folder and you'll find the file. Delete it. Start NetBeans IDE again, either on JDK 7 or JDK 8 (it doesn't make a difference for these purposes), create a new Java application (or use an existing one), make sure you have set the JDK above as the JDK of the application, and hey presto: the classes are found. The above obviously assumes you have a build of JDK 8 that actually includes the ASM package. Not only are the classes found, but my build succeeded, even though I'm using internal JDK classes. The yellow markings in the sidebar mean that the classes are imported but not used in the code, where normally, if I hadn't removed "ct.sym", I would have seen red error markings instead, and the code wouldn't have compiled. Note: I've tried setting "-XDignore.symbol.file" in "netbeans.conf" and in other places, but so far haven't got that to work.

    Simply deleting the "ct.sym" file (or backing it up somewhere and putting it back when needed) is quite clearly the most straightforward solution. Ultimately, if you want to be able to use those internal classes while still having portable code, do you know what you need to do? You need to create a JDK bug report stating that you need an internal class to be added to "ct.sym". Probably you'll get a motivation back stating WHY that internal class isn't supposed to be used externally. There must be a reason why those classes aren't available for external usage, otherwise they would have been added to "ct.sym".

    So, now the only remaining question is why the Eclipse compiler doesn't hide the internal JDK classes. Apparently the Eclipse compiler ignores the "ct.sym" file. In other words, at the end of the day, far from being a bug in NetBeans... we have now found a (pretty enormous, I reckon) bug in Eclipse. The Eclipse compiler does not protect you from using internal JDK classes, and the code that you create in Eclipse may not work with future releases of the JDK, since the JDK team is simply going to be changing those classes that are not found in the "ct.sym" file while assuming (correctly, thanks to the presence of the "ct.sym" mechanism) that no code in the world, other than JDK code, is tied to those classes.

    Read the article

  • Windows 7 Virtualized on Ubuntu Server

    - by garbagecollector
    I have an issue: we are moving to a production build server now, and I need a virtual machine up and running on my Ubuntu 10.10 Server Edition host. I also have to set up and install various tools and plugins on this Windows 7 virtual machine. The problem I am facing is how to install Windows 7 on this machine (Ubuntu 10.10 Server), and also how I am supposed to gain access to it in order to install the tools that are required on it. I would prefer VirtualBox as my tool of choice. Please and thank you.

    Read the article

  • Using SQL Source Control with Fortress or Vault &ndash; Part 1

    - by AjarnMark
    I am fanatical when it comes to managing the source code for my company.  Everything that we build (in source form) gets put into our source control management system.  And I’m not just talking about the UI and middle-tier code written in C# and ASP.NET, but also the back-end database stuff, which at times has been a pain.  We even script out our Scheduled Jobs and keep a copy of those under source control.

    The UI and middle-tier stuff has long been easy to manage as we mostly use Visual Studio, which has integration with source control systems built in.  But the SQL code has been a little harder to deal with.  I have been doing this for many years, well before Microsoft came up with Data Dude, so I had already established a methodology that, while not as smooth as VS, nonetheless let me keep things well controlled and allowed me to do my database development in my tool of choice: Query Analyzer in days gone by, and now SQL Server Management Studio.  It just makes sense to me that if I’m going to do database development, let’s use the database tool set.  (Although, I have to admit I was pretty impressed with the demo of Juneau that Don Box did at the PASS Summit this year.)  So as I was saying, I had developed a methodology that worked well for us (and which I’ll probably outline in a future post) but it could use some improvement.

    When Solutions and Projects were first introduced in SQL Management Studio, I thought we were finally going to get the same experience that we have in Visual Studio.  Well, let’s say I was underwhelmed by Version 1 in SQL 2005, and apparently so were enough other people that by the time SQL 2008 came out, Microsoft decided that Solutions and Projects would be deprecated and completely removed from a future version.  So much for that idea.

    Then I came across SQL Source Control from Red-Gate.  I have used several tools from Red-Gate in the past, including my favorites SQL Compare, SQL Prompt, and SQL Refactor.  SQL Prompt is worth its weight in gold, and the others are great, too.  Earlier this year, we upgraded from our earlier product bundles to the new Developer Bundle, and in the process added SQL Source Control to our collection.  I thought this might really be the golden ticket I was looking for.  But my hopes were quickly dashed when I discovered that it only integrated with Microsoft Team Foundation Server and Subversion as the source code repositories.  We have been using SourceGear’s Vault and Fortress products for years, and I wholeheartedly endorse them.  So I was out of luck for the time being, although there were a number of people voting for Vault/Fortress support on their feedback forum (as did I) so I had hope that maybe next year I could look at it again.

    But just a couple of weeks ago, I was pleasantly surprised to receive notice in my email that Red-Gate had an Early Access version of SQL Source Control that worked with Vault and Fortress, so I quickly downloaded it and have been putting it through its paces.  So far, I really like what I see, and I have been quite impressed with Red-Gate’s responsiveness when I have contacted them with any issues or concerns that I have had.  I have had several communications with Gyorgy Pocsi at Red-Gate and he has been immensely helpful and responsive.

    I must say that development with SQL Source Control is very different from what I have been used to.
    This post is getting long enough, so I’ll save some of the details for a separate write-up, but the short story is that in my regular mode, it’s all about the script files.  Script files are King and you dare not make a change to the database other than by way of a script file, or you are in deep trouble.  With SQL Source Control, you make your changes to your development database however you like.  I still prefer writing most of my changes in T-SQL, but you can also use any of the GUI functionality of SSMS to make your changes, and SQL Source Control “manages” the script for you.  Basically, when you first link your database to source control, the tool generates scripts for every primary object (tables and their indexes are together in one script, not broken out into separate scripts like DB Projects do) and those scripts are checked into your source control.  So, if you needed to, you could still do a GET from your source control repository and build the database from scratch.  But for the day-to-day work, SQL Source Control uses the same technique as SQL Compare to determine what changes have been made to your development database and how to represent those in your repository scripts.  I think that once I retrain myself to just work in the database and quit worrying about having to find and open the right script file, this will actually make us more efficient.

    And for deployment purposes, SQL Source Control integrates with the full SQL Compare utility to produce a synchronization script (or do a live sync).  This is similar in concept to Microsoft’s DACPAC, if you’re familiar with that.

    If you are not currently keeping your database development efforts under source control, definitely examine this tool.  If you already have a methodology that is working for you, then I still think this is worth a review and comparison to your current approach.  You may find it more efficient.  But remember that the version which integrates with Vault/Fortress is still in pre-release mode, so treat it with a little caution.  I have found it to be fairly stable, but there was one bug that I found which had inconvenient side-effects and could have really been frustrating if I had been running this on my normal active development machine.  However, I can verify that that bug has been fixed in a more recent build version (did I mention Red-Gate’s responsiveness?).

    Read the article

  • Integrating Google Apps with Salesforce using Google Apps Script

    Integrating Google Apps with Salesforce using Google Apps Script A very special Google Developers Live episode, in which Arun Nagarajan talks about using Google Apps Script with Salesforce to show how easily developers can integrate Salesforce with Google Sheets, Gmail, Google Docs and other Google Apps products. Download the full source code of these demo scripts here: github.com From: GoogleDevelopers Views: 52 6 ratings Time: 41:23 More in Science & Technology

    Read the article

  • Abstracting functionality

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/22/abstracting-functionality.aspx

    What is more important than data? Functionality. Yes, I strongly believe we should switch to a functionality-over-data mindset in programming. Or actually switch back to it.

    Focus on functionality

    Functionality once was at the core of software development. Back when algorithms were the first thing you heard about in CS classes. Sure, data structures, too, were important - but always from the point of view of algorithms. (Niklaus Wirth gave one of his books the title "Algorithms + Data Structures" instead of "Data Structures + Algorithms" for a reason.) The reason for the focus on functionality? Firstly, because software was and is about doing stuff. Secondly because sufficient performance was hard to achieve, and only thirdly memory efficiency.

    But then hardware became more powerful. That gave rise to a new mindset: object orientation. And with it functionality was devalued. Data took over its place as the most important aspect. Now discussions revolved around structures motivated by data relationships. (John Beidler gave his book the title "Data Structures and Algorithms: An Object Oriented Approach" instead of the other way around for a reason.) Sure, this data could be embellished with functionality. But nevertheless functionality was second. When you look at (domain) object models, what you mostly find is (domain) data object models. The common object oriented approach is: data aka structure over functionality. This is true even for the most modern modeling approaches like Domain Driven Design. Look at the literature and what you find is recommendations on how to get data structures right: aggregates, entities, value objects. I'm not saying this is what object orientation was invented for. But I'm saying that's what I happen to see across many teams now, some 25 years after object orientation became mainstream through C++, Delphi, and Java.

    But why should we switch back? Because software development cannot become truly agile with a data focus. The reason for that lies in what customers need first: functionality, behavior, operations. To be clear, that's not why software is built. The purpose of software is to be more efficient than the alternative. Money mainly is spent to get a certain level of quality (e.g. performance, scalability, security etc.). But without functionality being present, there is nothing to work on the quality of. What customers want is functionality of a certain quality. ASAP. And tomorrow new functionality needs to be added, existing functionality needs to be changed, and quality needs to be increased.

    No customer ever wanted data or structures. Of course data should be processed. Data is there, data gets generated, transformed, stored. But how the data is structured for this to happen efficiently is of no concern to the customer. Ask a customer (or user) whether she likes the data structured this way or that way. She'll say, "I don't care." But ask a customer (or user) whether he likes the functionality and its quality this way or that way. He'll say, "I like it" (or "I don't like it").

    Build software incrementally

    From this very natural focus of customers and users on functionality and its quality it follows that we should develop software incrementally. That's what Agility is about. Deliver small increments quickly and often to get frequent feedback.
    That way less waste is produced, and learning can take place much more easily (on the side of the customer as well as on the side of developers). An increment is some added functionality or quality of functionality.[1]

    So as it turns out, Agility is about functionality over whatever. But software developers' thinking is still stuck in the object oriented mindset of whatever over functionality. Bummer. I guess that (at least partly) explains why Agility always hits a glass ceiling in projects. It's a clash of mindsets, of cultures. Driving software development by demanding small increases in functionality runs against thinking about software as growing (data) structures sprinkled with functionality. (Excuse me if this sounds a bit broad-brush. But you get my point.)

    The need for abstraction

    In the end there need to be data structures. Of course. Small and large ones. The phrase functionality over data does not deny that. It's not functionality instead of data or something. It's just over, i.e. functionality should be thought of first. It's a tad more important. It's what the customer wants. That's why we need a way to design functionality. Small and large. We need to be able to think about functionality before implementing it. We need to be able to reason about it among team members. We need to be able to communicate our mental models of functionality not just by speaking about them, but also on paper. Otherwise reasoning about it does not scale.

    We learned thinking about functionality in the small using flow charts, Nassi-Shneiderman diagrams, pseudo code, or UML sequence diagrams. That's nice and well. But it does not scale. You can use these tools to describe manageable algorithms. But it does not work for the functionality triggered by pressing the "1-Click Order" button on an Amazon product page, for example. There are several reasons for that, I'd say.

    Firstly, the level of abstraction over code is negligible. It's essentially non-existent. Drawing a flow chart or writing pseudo code or writing actual code is very, very much alike. All these tools are about control flow like code is.[2] In addition all these tools are computationally complete. They are about logic, which is expressions and especially control statements. Whatever you code in Java you can fully (!) describe using a flow chart.

    And then there is no data. They are about control flow and leave out the data altogether. Thus data mostly is assumed to be global. That's shooting yourself in the foot, as I hope you agree. Even if it's functionality over data, that does not mean "don't think about data". Right to the contrary! Functionality only makes sense with regard to data. So data needs to be in the picture right from the start - but it must not dominate the thinking. The above tools fail on this.

    Bottom line: So far we're unable to reason in a scalable and abstract manner about functionality. That's why programmers are so driven to start coding once they are presented with a problem. Programming languages are the only tool they've learned to use to reason about functional solutions. Or, well, there might be exceptions. Mathematical notation and SQL may have come to your mind already. Indeed they are tools on a higher level of abstraction than flow charts etc. That's because they are declarative and not computationally complete. They leave out details - in order to deliver higher efficiency in devising overall solutions. We can easily reason about functionality using mathematics and SQL. That's great.
    Except that they are domain-specific languages. They are not general purpose. (And they don't scale either, I'd say.) Bummer. So to be more precise, we need a scalable general purpose tool on a higher-than-code level of abstraction, one not neglecting data. Enter: Flow Design.

    Abstracting functionality using data flows

    I believe the solution to the problem of abstracting functionality lies in switching from control flow to data flow. Data flow very naturally is not about logic details anymore. There are no expressions and no control statements anymore. There are not even statements anymore. Data flow is declarative by nature. With data flow we get rid of all the limiting traits of former approaches to modeling functionality. In addition, nomen est omen, data flows include data in the functionality picture. With data flows, data is visibly flowing from processing step to processing step. Control is not flowing. Control is wherever it's needed to process data coming in. That's a crucial difference and needs some rewiring in your head to be fully appreciated.[2]

    Since data flows are declarative, they are not the right tool to describe algorithms, though, I'd say. With them you don't design functionality on a low level. During design, data flow processing steps are black boxes. They get fleshed out during coding. Data flow design thus is more coarse-grained than flow chart design. It starts on a higher level of abstraction - but then is not limited. By nesting data flows indefinitely you can design functionality of any size, without losing sight of your data. Data flows scale very well during design. They can be used on any level of granularity. And they can easily be depicted. Communicating designs using data flows is easy and scales well, too.

    The result of functional design using data flows is not algorithms (too low level), but processes. Think of data flows as descriptions of industrial production lines. Data as material runs through a number of processing steps to be analyzed, enhanced, transformed. On the top level of a data flow design might be just one processing step, e.g. "execute 1-click order". But below that are arbitrary levels of flows with smaller and smaller steps. That's not layering as in "layered architecture", though. Rather it's a stratified design à la Abelson/Sussman. Refining data flows is not your grandpa's functional decomposition. That was rooted in control flows. Refining data flows does not suffer from the limits of functional decomposition against which object orientation was supposed to be an antidote.

    Summary

    I've been working exclusively with data flows for functional design for the past 4 years. It has changed my life as a programmer. What once was difficult is now easy. And, no, I'm not using Clojure or F#. And I'm not an async/parallel execution buff. Designing the functionality of increments using data flows works great with teams. It produces design documentation which can easily be translated into code - in which then the smallest data flow processing steps have to be fleshed out - which is comparatively easy. Using a systematic translation approach, code can mirror the data flow design. That way, later on, the design can easily be reproduced from the code if need be. And finally, data flow designs play well with object orientation. They are a great starting point for class design. But that's a story for another day. To me data flow design simply is one of the missing links of systematic lightweight software design.
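    To make the idea concrete, here is a minimal sketch of a nested data flow translated into plain code (my illustration, not the author's; C# is an assumption, and the "execute 1-click order" steps are invented placeholders):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class OneClickOrderFlow
        {
            static void Main()
            {
                Console.WriteLine(ExecuteOneClickOrder("B00EXAMPLE"));
            }

            // Top-level flow: "execute 1-click order" is one processing step...
            public static string ExecuteOneClickOrder(string productId)
            {
                // ...refined here into a flow of smaller steps.
                // Data flows from step to step; control stays inside each step.
                IEnumerable<decimal> items = LoadItems(productId);
                decimal total = CalculateTotal(items);
                return FormatConfirmation(total);
            }

            static IEnumerable<decimal> LoadItems(string productId)
            {
                // Stand-in for a repository lookup.
                return new[] { 9.99m, 4.50m };
            }

            static decimal CalculateTotal(IEnumerable<decimal> prices)
            {
                return prices.Sum();
            }

            static string FormatConfirmation(decimal total)
            {
                return string.Format("Order placed, total {0:C}", total);
            }
        }

    Each step can itself be refined into a lower-level flow without its callers changing, which is the nesting the essay describes.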
    [1] There are also other artifacts software development can produce to get feedback, e.g. process descriptions, test cases. But customers can be delighted more easily with code-based increments in functionality.

    [2] No, I'm not talking about the endless possibilities this opens for parallel processing. Data flows are useful independently of multi-core processors and Actor-based designs. That's my whole point here. Data flows are good for reasoning and evolvability. So forget about any special frameworks you might need to reap benefits from data flows. None are necessary. Translating data flow designs even into plain old Java is possible.

    Read the article

  • Programmatically Starting and Stopping FTP Sites in IIS 7 and IIS 8

    - by The Official Microsoft IIS Site
    I was recently contacted by someone who was trying to use Windows Management Instrumentation (WMI) code to stop and restart FTP websites by using code that he had written for IIS 6.0; his code was something similar to the following:

        Option Explicit
        On Error Resume Next
        Dim objWMIService, colItems, objItem
        ' Attach to the IIS service.
        Set objWMIService = GetObject( "winmgmts:\root\microsoftiisv2" )
        ' Retrieve the collection of FTP sites.
        Set colItems = objWMIService.ExecQuery( "Select...(read more)
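    The snippet is cut off by the excerpt; the full post covers the IIS 7 and IIS 8 replacements. As a rough sketch of the managed-code route on IIS 7+ (my illustration, not code from the post; the site name is a placeholder), Microsoft.Web.Administration can stop and start an FTP site directly:

        using System;
        using Microsoft.Web.Administration; // reference Microsoft.Web.Administration.dll

        class FtpSiteControl
        {
            static void Main()
            {
                // "Default FTP Site" is a placeholder; substitute your own site name.
                using (ServerManager manager = new ServerManager())
                {
                    Site site = manager.Sites["Default FTP Site"];
                    if (site == null)
                        throw new InvalidOperationException("FTP site not found.");

                    site.Stop();   // stop the site...
                    site.Start();  // ...and start it again
                }
            }
        }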

    Read the article

  • Unable to cast transparent proxy to type &lt;type&gt;

    - by Rick Strahl
    This is not the first time I've run into this wonderful error while creating new AppDomains in .NET and then trying to load types and access them across AppDomains. In almost all cases the problem with this error comes from the two AppDomains involved loading different copies of the same type. Unless the types match exactly and come from exactly the same assembly, the typecast will fail. The most common scenario is that the types are loaded from different assemblies - as unlikely as that sounds.

    An Example of Failure

    To give some context, I'm working on some old code in Html Help Builder that creates a new AppDomain in order to parse assembly information for documentation purposes. I create a new AppDomain in order to load up an assembly, process it, and then immediately unload it along with the AppDomain. The AppDomain allows for unloading that otherwise wouldn't be possible, as well as isolating my code from the assembly being loaded. The process to accomplish this is fairly established and I use it for lots of applications that use add-in like functionality - basically anywhere where code needs to be isolated and have the ability to be unloaded. My pattern for this is: create a new AppDomain, load a factory class into the AppDomain, then use the factory class to load additional types from the remote domain.

    Here's the relevant code from my TypeParserFactory that creates a domain and then loads a specific type - TypeParser - that is accessed cross-AppDomain in the parent domain:

        public class TypeParserFactory : System.MarshalByRefObject, IDisposable
        {
            …

            /// <summary>
            /// TypeParser Factory method that loads the TypeParser
            /// object into a new AppDomain so it can be unloaded.
            /// Creates AppDomain and creates type.
            /// </summary>
            /// <returns></returns>
            public TypeParser CreateTypeParser()
            {
                if (!CreateAppDomain(null))
                    return null;

                // Create the instance inside of the new AppDomain
                // Note: remote domain uses local EXE's AppBasePath!!!
                TypeParser parser = null;
                try
                {
                    Assembly assembly = Assembly.GetExecutingAssembly();
                    string assemblyPath = Assembly.GetExecutingAssembly().Location;
                    parser = (TypeParser)this.LocalAppDomain.CreateInstanceFrom(
                        assemblyPath,
                        typeof(TypeParser).FullName).Unwrap();
                }
                catch (Exception ex)
                {
                    this.ErrorMessage = ex.GetBaseException().Message;
                    return null;
                }
                return parser;
            }

            private bool CreateAppDomain(string lcAppDomain)
            {
                if (lcAppDomain == null)
                    lcAppDomain = "wwReflection" + Guid.NewGuid().ToString().GetHashCode().ToString("x");

                AppDomainSetup setup = new AppDomainSetup();

                // *** Point at current directory
                setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory;
                //setup.PrivateBinPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin");

                this.LocalAppDomain = AppDomain.CreateDomain(lcAppDomain, null, setup);

                // Need a custom resolver so we can load assembly from non current path
                AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve);

                return true;
            }

            …
        }

    Note that the classes must be either [Serializable] (by value) or inherit from MarshalByRefObject in order to be accessible remotely. Here I need to call methods on the remote object, so all classes are MarshalByRefObject. The specific problem code is loading up a new type which points at an assembly that is visible both in the current domain and the remote domain, and then instantiating a type from it.
    This is the code in question:

        Assembly assembly = Assembly.GetExecutingAssembly();
        string assemblyPath = Assembly.GetExecutingAssembly().Location;
        parser = (TypeParser)this.LocalAppDomain.CreateInstanceFrom(
            assemblyPath,
            typeof(TypeParser).FullName).Unwrap();

    The last line of code is what blows up with the Unable to cast transparent proxy to type <type> error. Without the cast the code actually returns a TransparentProxy instance, but the cast is what blows up. In other words I AM in fact getting a TypeParser instance back, but it can't be cast to the TypeParser type that is loaded in the current AppDomain.

    Finding the Problem

    To see what's going on I tried using the .NET 4.0 dynamic type on the result, and lo and behold, it worked with dynamic - the value returned is actually a TypeParser instance:

        Assembly assembly = Assembly.GetExecutingAssembly();
        string assemblyPath = Assembly.GetExecutingAssembly().Location;
        object objparser = this.LocalAppDomain.CreateInstanceFrom(
            assemblyPath,
            typeof(TypeParser).FullName).Unwrap();

        // dynamic works
        dynamic dynParser = objparser;
        string info = dynParser.GetVersionInfo();  // method call works

        // casting fails
        parser = (TypeParser)objparser;

    So clearly a TypeParser type is coming back, but nevertheless it's not the right one. Hmmm… mysterious. Another couple of tries reveal the problem, however:

        // works
        dynamic dynParser = objparser;
        string info = dynParser.GetVersionInfo();  // method call works

        // c:\wwapps\wwhelp\wwReflection20.dll (current execution folder)
        string info3 = typeof(TypeParser).Assembly.CodeBase;

        // c:\program files\vfp9\wwReflection20.dll (my COM client EXE's folder)
        string info4 = dynParser.GetType().Assembly.CodeBase;

        // fails
        parser = (TypeParser)objparser;

    As you can see, the second value is coming from a totally different assembly. Note that this is even though I EXPLICITLY SPECIFIED an assembly path to load the assembly from! Instead .NET decided to load the assembly from the original ApplicationBase folder. Ouch!

    How I actually tracked this down was a little more tedious: I added a method like this to both the factory and the instance types and then compared notes:

        public string GetVersionInfo()
        {
            return ".NET Version: " + Environment.Version.ToString() + "\r\n" +
                   "wwReflection Assembly: " + typeof(TypeParserFactory).Assembly.CodeBase.Replace("file:///", "").Replace("/", "\\") + "\r\n" +
                   "Assembly Cur Dir: " + Directory.GetCurrentDirectory() + "\r\n" +
                   "ApplicationBase: " + AppDomain.CurrentDomain.SetupInformation.ApplicationBase + "\r\n" +
                   "App Domain: " + AppDomain.CurrentDomain.FriendlyName + "\r\n";
        }

    For the factory I got:

        .NET Version: 4.0.30319.239
        wwReflection Assembly: c:\wwapps\wwhelp\bin\wwreflection20.dll
        Assembly Cur Dir: c:\wwapps\wwhelp
        ApplicationBase: C:\Programs\vfp9\
        App Domain: wwReflection534cfa1f

    For the instance type I got:

        .NET Version: 4.0.30319.239
        wwReflection Assembly: C:\Programs\vfp9\wwreflection20.dll
        Assembly Cur Dir: c:\wwapps\wwhelp
        ApplicationBase: C:\Programs\vfp9\
        App Domain: wwDotNetBridge_56006605

    which clearly shows the problem. You can see that the two types live in different AppDomains, and each is loading the assembly from a different location.

    Probably a better solution yet (for ANY kind of assembly loading problem) is to use the .NET Fusion Log Viewer to trace assembly loads. The Fusion viewer will show a load trace for each assembly loaded and where it's looking to find it.
    The last trace I found for the second wwReflection20 load (the one that is wonky) looks like this:

        *** Assembly Binder Log Entry (1/13/2012 @ 3:06:49 AM) ***
        The operation was successful.
        Bind result: hr = 0x0. The operation completed successfully.
        Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\V4.0.30319\clr.dll
        Running under executable c:\programs\vfp9\vfp9.exe
        --- A detailed error log follows.
        === Pre-bind state information ===
        LOG: User = Ras\ricks
        LOG: DisplayName = wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null (Fully-specified)
        LOG: Appbase = file:///C:/Programs/vfp9/
        LOG: Initial PrivatePath = NULL
        LOG: Dynamic Base = NULL
        LOG: Cache Base = NULL
        LOG: AppName = vfp9.exe
        Calling assembly : (Unknown).
        ===
        LOG: This bind starts in default load context.
        LOG: Using application configuration file: C:\Programs\vfp9\vfp9.exe.Config
        LOG: Using host configuration file:
        LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\V4.0.30319\config\machine.config.
        LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind).
        LOG: Attempting download of new URL file:///C:/Programs/vfp9/wwReflection20.DLL.
        LOG: Assembly download was successful. Attempting setup of file: C:\Programs\vfp9\wwReflection20.dll
        LOG: Entering run-from-source setup phase.
        LOG: Assembly Name is: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
        LOG: Binding succeeds. Returns assembly from C:\Programs\vfp9\wwReflection20.dll.
        LOG: Assembly is loaded in default load context.
        WRN: The same assembly was loaded into multiple contexts of an application domain:
        WRN: Context: Default | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
        WRN: Context: LoadFrom | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
        WRN: This might lead to runtime failures.
        WRN: It is recommended to inspect your application on whether this is intentional or not.
        WRN: See whitepaper http://go.microsoft.com/fwlink/?LinkId=109270 for more information and common solutions to this issue.

    Notice that the fusion log clearly shows that the .NET loader makes no attempt to even load the assembly from the path I explicitly specified.

    Remember your Assembly Locations

    As mentioned earlier, all failures I've seen like this ultimately resulted from different versions of the same type being available in the two AppDomains. At first sight that seems ridiculous - how could the types be different and why would you have multiple assemblies - but there are actually a number of scenarios where it's quite possible to have multiple copies of the same assembly floating around in multiple places. If you're hosting different environments (like hosting the Razor Engine, or the ASP.NET Runtime, for example) it's common to create a private BIN folder, and it's important to make sure that there's no overlap of assemblies.

    In my case of Html Help Builder, the problem started because I'm using COM interop to access the .NET assembly and the above code. COM interop has very specific requirements on where assemblies can be found, and because I was mucking around with the loader code today, I ended up moving assemblies around to a new location for explicit loading. The explicit load works in the main AppDomain, but failed in the remote domain as I showed.
    The solution here was simple enough: delete the extraneous assembly which was left around by accident.

    Not a common problem, but one that when it bites is pretty nasty to figure out, because it seems so unlikely that types wouldn't match. I know I've run into this a few times, and writing this down hopefully will make me remember in the future rather than poking around again for an hour trying to debug the issue as I did today. Hopefully it'll save some of you some time as well in the future.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, COM.

    Read the article

  • Google I/O 2010 - Data pipelines with Google App Engine

    Google I/O 2010 - Data pipelines with Google App Engine. Full title: Google I/O 2010 - Building high-throughput data pipelines with Google App Engine (App Engine 301). Speaker: Brett Slatkin. This session will cover how to build, test, and maintain large-scale data pipelines on Google App Engine. It will cover maximizing efficiency, productionization, and how to deal with changing requirements. For all I/O 2010 sessions, please go to code.google.com From: GoogleDevelopers Views: 5 0 ratings Time: 01:01:52 More in Science & Technology

    Read the article

  • Is sticking to one language on a particular project a good practice?

    - by Ans
    I'm developing a pipeline for processing text that will go into production. The question I keep asking myself is: should I stick to one language for the project when I'm looking for a tool to do a particular task (e.g. NLTK, PDFMiner, CLD, CRFsuite, etc.)? Or is it OK to mix and match languages on the project, so that I pick the best tool regardless of what language it's written in (e.g. OpenNLP, ParsCit, poppler, CRF++, etc.) and wrap my code around it? Note: I am not asking whether a developer should stick to just one language for their career.

    Read the article
