Search Results

Search found 1829 results on 74 pages for 'automated'.

Page 62/74

  • Experience migrating legacy Cobol/PL1 to Java

    - by MadMurf
    ORIGINAL Q: I'm wondering if anyone has had experience of migrating a large Cobol/PL1 codebase to Java? How automated was the process and how maintainable was the output? How did the move from transactional to OO work out? Any lessons learned along the way, or resources/white papers that may be of benefit, would be appreciated.

    EDIT 7/7: Certainly the NACA approach is interesting; the ability to continue making your BAU changes to the COBOL code right up to the point of releasing the Java version has merit for any organisation. The argument for procedural Java in the same layout as the COBOL, to give the coders a sense of comfort while familiarising themselves with the Java language, is a valid one for a large organisation with a large code base. As @Didier points out, the $3mil annual saving gives scope for generous padding on any BAU changes going forward, to refactor the code on an ongoing basis. As he puts it, if you care about your people you find a way to keep them happy while gradually challenging them.

    The problem I see with @duffymo's suggestion to "try and really understand the problem at its roots and re-express it as an object-oriented system" is that if you have any BAU changes ongoing, then during the long project lifetime of coding your new OO system you end up coding and testing changes twice. That is a major benefit of the NACA approach. I've had some experience of migrating client-server applications to a web implementation, and this was one of the major issues we encountered: constantly shifting requirements due to BAU changes. It made PM and scheduling a real challenge.

    Thanks to @hhafez, whose experience is nicely put as "similar but slightly different" and who has had a reasonably satisfactory experience of an automatic code migration from Ada to Java. Thanks @Didier for contributing; I'm still studying your approach and if I have any Qs I'll drop you a line.

    Read the article

  • Video-codec rater by image comparison algorithm?

    - by Andreas Hornig
    Hi, perhaps someone knows if this is possible. Comparing image quality is almost impossible to describe without subjective influences: when someone rates an image's quality as good, there is at least one person who doesn't think so. Human preferences are always different. So I would like to know if there is a way to "rate" image quality with an algorithm that compares the original image to the produced one on the following points:

    * colour change (pixel-by-pixel difference)
    * blur rate
    * artifacts and macroblocking

    The first one would be the easiest, because you could just check the difference in colours and give three values as a +/- of each hex value. For the last two I don't know if this is possible, but the blocking could be detected by edge-finding. And the king's quest would be to do that for more than just one image, because video is made of many frames. Perhaps you expert programmers could tell me whether such an automated algorithm can be written, to bring some objective measurement device into rating image quality. This could perhaps calm down some of the "h.264 is better than x264 and better than vp8 and blaaah" people :)

    Andreas

    1st posted here: http://www.hdtvtotal.com/index.php?name=PNphpBB2&file=viewtopic&p=9705
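    As a rough illustration of the objective, full-reference kind of metric being asked about, here is a minimal C# sketch that computes the mean squared error and PSNR between an original frame and an encoded frame. It is only a starting point and assumes both frames are the same size; established metrics such as PSNR and SSIM are what codec comparisons normally report, and GetPixel is used here purely for clarity, not speed.

        using System;
        using System.Drawing;

        static class FrameCompare
        {
            // Mean squared error and PSNR over the RGB channels of two same-sized frames.
            // A higher PSNR means the encoded frame is closer to the original.
            public static double Psnr(Bitmap original, Bitmap encoded)
            {
                double mse = 0;
                for (int y = 0; y < original.Height; y++)
                {
                    for (int x = 0; x < original.Width; x++)
                    {
                        Color a = original.GetPixel(x, y);
                        Color b = encoded.GetPixel(x, y);
                        mse += Sq(a.R - b.R) + Sq(a.G - b.G) + Sq(a.B - b.B);
                    }
                }
                mse /= original.Width * original.Height * 3.0;
                return 10.0 * Math.Log10(255.0 * 255.0 / mse); // infinite for identical frames
            }

            static double Sq(double d) { return d * d; }
        }

    Averaging the per-frame PSNR over a clip gives one crude number per encoder; blur and blocking need different measures, and SSIM tracks perceived quality more closely than raw PSNR.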

    Read the article

  • Extend base type and automatically update audit information on Entity

    - by Nix
    I have an entity model that has audit information on every table (50+ tables): CreateDate, CreateUser, UpdateDate, UpdateUser. Currently we are programmatically updating the audit information, e.g.:

        if (changed) {
            entity.UpdatedOn = DateTime.Now;
            entity.UpdatedBy = Environment.UserName;
            context.SaveChanges();
        }

    But I am looking for a more automated solution. During SaveChanges, if an entity has been created or updated, I would like to automatically update these fields before sending them to the database for storage. Any suggestion on how I could do this? I would prefer not to do any reflection, so using a text template is not out of the question. A solution has been proposed to override SaveChanges and do it there, but to achieve this I would either have to use reflection (which I don't want to do) or derive from a base class. Assuming I go down this route, how would I achieve it? For example, given a table:

        EXAMPLE_DB_TABLE
            CODE
            NAME
            -- Audit columns
            CREATE_DATE
            CREATE_USER
            UPDATE_DATE
            UPDATE_USER

    And if I create a base class:

        public abstract class IUpdatable {
            public virtual DateTime CreateDate { get; set; }
            public virtual string CreateUser { get; set; }
            public virtual DateTime UpdateDate { get; set; }
            public virtual string UpdateUser { get; set; }
        }

    The end goal is to be able to do something like:

        public override void SaveChanges() {
            // Go through the state manager and update the audit information:
            // FOREACH changed entity in the state manager
            //   if (entity is IUpdatable) {
            //     if the state is Added, update the create audit
            //     if the state is Modified, update the update audit
            //   }
        }

    But I am not sure how I go about generating the code that would extend the interface.
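    For what it's worth, here is a minimal sketch of the SaveChanges-override route, assuming an EF4-style ObjectContext; the IAuditable interface and the MyEntities context name are illustrative, not from any real generated model. Since the generated entity classes are partial, a T4 template could emit a one-line "partial class X : IAuditable" declaration per audited table, so no reflection is needed at runtime.

        using System;
        using System.Data;           // EntityState
        using System.Data.Objects;   // ObjectContext, SaveOptions

        public interface IAuditable
        {
            DateTime CreateDate { get; set; }
            string CreateUser { get; set; }
            DateTime UpdateDate { get; set; }
            string UpdateUser { get; set; }
        }

        public partial class MyEntities // hypothetical generated ObjectContext
        {
            public override int SaveChanges(SaveOptions options)
            {
                var entries = ObjectStateManager.GetObjectStateEntries(
                    EntityState.Added | EntityState.Modified);

                foreach (var entry in entries)
                {
                    var auditable = entry.Entity as IAuditable; // null for relationship entries
                    if (auditable == null)
                        continue;

                    if (entry.State == EntityState.Added)
                    {
                        auditable.CreateDate = DateTime.Now;
                        auditable.CreateUser = Environment.UserName;
                    }
                    auditable.UpdateDate = DateTime.Now;
                    auditable.UpdateUser = Environment.UserName;
                }

                return base.SaveChanges(options);
            }
        }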

    Read the article

  • Problem with sfRemember cookie / sfGuard Remember me

    - by Tom
    I'm using Symfony 1.4 with Doctrine. Sorry if this is a silly question, but what exactly does one need to build on top of the sfDoctrineGuardPlugin to get the "remember me" functionality working? When I log a user in, the sfRemember cookie is created with the default 15-day lifetime, and the remember key is saved in the plugin's sf_guard_remember_key table. Without any tweaks to the plugin, the sfGuardSecurityUser signIn() method creates the cookie, but the signOut() method erases it, leaving no cookie unless you're logged in!

    signIn():

        sfContext::getInstance()->getResponse()->setCookie($remember_cookie, $key, time() + $expiration_age);

    signOut():

        sfContext::getInstance()->getResponse()->setCookie($remember_cookie, '', time() - $expiration_age);

    I can see that the database table saves the cookie as a relation of sf_guard_user, but that's not much good if the cookie is gone... I'd be grateful if someone could tell me what I'm missing here, and ideally, if I prevent the signOut() method from removing the cookie, do I need to write code to read the cookie myself or is this automated somewhere/somehow? I've got bog-standard Symfony 1.4 and sfDoctrineGuardPlugin installations. It all just seems totally wrong, and the documentation on this is non-existent. Any help would be appreciated.

    Read the article

  • Adobe AIR non-Administrator application installation/upgrade on Windows

    - by bzlm
    Is there any way to allow non-Administrator users to install, upgrade or uninstall an Adobe AIR application on Windows?

    I've made an Adobe AIR application and packaged it as a .air package using the standard AIR mechanism for creating deployment packages. If a normal or Power user tries to install this AIR application, the Application Event Log shows an error saying administrative rights are required. And even if the user elevates during installation, administrative rights are still required for an upgrade using the automated AIR upgrade system (since an upgrade is essentially, behind the scenes, an uninstallation of a .msi package followed by an installation of another .msi package). Is there any way around this?

    What I've tried so far:

    * Using the Group Policy editor, setting Windows Installer to elevate during installations. Doesn't work, since AIR attempts a "for all users" installation.
    * Specifying My Documents as the installation directory. Doesn't work, since AIR attempts a "for all users" installation.
    * Giving the user Modify access to the Program Files folder where the application would usually reside. Doesn't work, since this isn't a file permissions issue.
    * Making the user a Power User. Doesn't work, since AIR attempts a "for all users" installation.

    I'm guessing that both installing and upgrading would work fine for a user if the AIR installer attempted an "only for me" application installation instead of a "for all users" installation, the user was a Power User, and possibly the application was installed to My Documents. I'm also guessing that this problem doesn't exist on OSX and Linux, since they have more intuitive concepts for per-user application installations.

    Read the article

  • Switching from SourceSafe - What to look for in a product

    - by asp316
    We're looking to move off of SourceSafe and on to a more robust source control system for our .NET apps. We're also looking for scripted/automated deployments. I'm a .NET developer (web and WinForms). However, most of our development staff are RPG developers for the IBM iSeries, and they use Aldon's LMI for source control and deployment. Our manager would prefer to stick with Aldon so all of our products are in the same system. However, I don't have experience with Aldon's products on the .NET side. I've used TFS and Subversion with Tortoise a bit, but not enough to recommend one or the other, especially in comparison to Aldon's product.

    Does anybody have experience with Aldon's products? If so, thoughts please? Also, other than the obvious things source control systems do, are there things I should avoid, or are there must-haves? I'm open to any system.

    A bit of background: I'm the only .NET dev in our company, but I let operations do the deployments. I do want the ability to support concurrent checkouts if we hire a new dev.

    Read the article

  • Best Way to automatically compress and minimize JavaScript files in an ASP.NET MVC app

    - by wgpubs
    So I have an ASP.NET MVC app that references a number of JavaScript files in various places (in the site master, and additional references in several views as well). I'd like to know if there is an automated way, and if so what the recommended approach is, for compressing and minimizing such references into a single .js file where possible. Such that this...

        <script src="<%= ResolveUrl("~") %>Content/ExtJS/Ext.ux.grid.GridSummary/Ext.ux.grid.GridSummary.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext.ux.rating/ext.ux.ratingplugin.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext-starslider/ext-starslider.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext.ux.dollarfield.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext.ux.combobox.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext.ux.datepickerplus/ext.ux.datepickerplus-min.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/SessionProvider.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ExtJS/TabCloseMenu.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ActivityViewer/ActivityForm.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ActivityViewer/UserForm.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ActivityViewer/SwappedGrid.js" type="text/javascript"></script>
        <script src="<%= ResolveUrl("~") %>Content/ActivityViewer/Tree.js" type="text/javascript"></script>

    ...could be reduced to something like this...

        <script src="<%= ResolveUrl("~") %>Content/MyViewPage-min.js" type="text/javascript"></script>

    Thanks
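    In the absence of a built-in bundler in this version of ASP.NET MVC, one common approach is a small controller action (or HTTP handler) that concatenates the scripts server-side and serves them as a single cached response; minification can then be layered on with a tool such as the YUI Compressor or the Microsoft Ajax Minifier as a build step. A rough sketch, with the file list, controller name and cache duration purely illustrative:

        using System.Text;
        using System.Web.Mvc;

        public class ScriptsController : Controller
        {
            // Hypothetical list of the scripts a page needs, in dependency order.
            private static readonly string[] Files =
            {
                "~/Content/ExtJS/SessionProvider.js",
                "~/Content/ExtJS/TabCloseMenu.js",
                "~/Content/ActivityViewer/ActivityForm.js",
                "~/Content/ActivityViewer/UserForm.js"
            };

            [OutputCache(Duration = 3600, VaryByParam = "none")]
            public ActionResult Combined()
            {
                var sb = new StringBuilder();
                foreach (var virtualPath in Files)
                {
                    sb.AppendLine(System.IO.File.ReadAllText(Server.MapPath(virtualPath)));
                    sb.AppendLine(";"); // guard against scripts that omit a trailing semicolon
                }
                return Content(sb.ToString(), "text/javascript");
            }
        }

    The views then reference the single combined script, e.g. <script src="<%= Url.Action("Combined", "Scripts") %>" type="text/javascript"></script>.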

    Read the article

  • Yahoo Account Has Been Closed

    - by VIVEK MISHRA
    My domain www.manumachu.com has been closed by Yahoo due to non-payment. I want to back-order it. Is there any way to do that? This is the mail I received from Yahoo:

        This is an automated notice. Replies to this address will not be received. If you have questions, please contact Yahoo! Customer Care. For your protection, Yahoo! will never ask you to provide your billing information via email.

        Dear Ravi Reddy manumachu,

        This notice is to inform you that your Yahoo! GeoCities Pro account has been closed due to nonpayment.

        The Yahoo! ID associated with this account: ravi_manumachu
        The domain name for this account: manumachu.com

        For questions, please visit our online help center or call our toll-free number at (800) 318-0870 between 6 a.m. and 6 p.m. PT, Monday through Friday, excluding holidays.

        Best regards,
        The Yahoo! Billing team

        This is a service email related to your use of Yahoo! Small Business. To learn more about Yahoo!'s use of personal information, including the use of web beacons in HTML-based email, please read our privacy policy. Yahoo! is located at 701 First Avenue, Sunnyvale, CA 94089. Copyright Policy - Terms of Service - Additional Terms - Help

    Read the article

  • Way to automate setting of MergeOptions

    - by Nix
    I am looking for an automated way to iterate over all ObjectQueries and set the merge option to NoTracking (a read-only context). Once I find out how to do it, I will be able to generate a default read-only context using a T4 template. Is this possible?

    For example, let's say I have these tables in my object context:

        SampleContext
            TableA
            TableB
            TableC

    I would have to go through and do the below:

        SampleContext sc = new SampleContext();
        sc.TableA.MergeOption = MergeOption.NoTracking;
        sc.TableB.MergeOption = MergeOption.NoTracking;
        sc.TableC.MergeOption = MergeOption.NoTracking;

    I am trying to find a way to generalize this using the object context. I want to get it down to something like:

        foreach (var objectQuery in sc) {
            objectQuery.MergeOption = MergeOption.NoTracking;
        }

    Preferably I would like to do it using the base class (ObjectContext):

        ObjectContext baseClass = sc as ObjectContext;
        var objectQueries = sc.MetadataWorkspace.GetItem("Magic Object Query Option");

    But I am not sure I can even get access to the queries. Any help would be appreciated.
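    One workable sketch, assuming an EF4-style context whose generated entity-set properties derive from ObjectQuery, is to walk the context's properties once and flip the MergeOption on each. The same loop could just as easily be emitted by a T4 template as explicit property assignments if runtime reflection is unwanted; the extension-method shape below is only an illustration.

        using System.Data.Objects;   // ObjectContext, ObjectQuery, MergeOption
        using System.Reflection;

        public static class ObjectContextExtensions
        {
            // Sets every ObjectQuery-typed property on the context to NoTracking.
            public static void MakeReadOnly(this ObjectContext context)
            {
                foreach (PropertyInfo prop in context.GetType().GetProperties())
                {
                    if (!typeof(ObjectQuery).IsAssignableFrom(prop.PropertyType))
                        continue;

                    var query = (ObjectQuery)prop.GetValue(context, null);
                    if (query != null)
                        query.MergeOption = MergeOption.NoTracking;
                }
            }
        }

        // Usage:
        //   var sc = new SampleContext();
        //   sc.MakeReadOnly();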

    Read the article

  • Rapid application development tools for very basic GUI apps

    - by Jurij
    I know there are many RAD platforms out there. In fact there are so many that I'm having a hard time finding out which one fits me best. What I want is a RAD tool that would allow me to define a database data model (make DB tables) and then create (view and edit) forms for the various tables. Data input, updating and various queries should be easy, and the GUI should be generated automatically. I'd like to add some additional functionality by coding (such as various complex calculations on the data). I'm a programmer, so I'm willing to learn to use a more complete, full-blown RAD solution if you can point me to it (NetBeans and Ruby on Rails being two such frameworks that would probably be high on the list). I'm currently doing Windows Forms logistics apps in .NET. I've actually developed a very crude and basic version of what I need, but I just know that there are solutions out there that are much better and I'd benefit from knowing how to use them.

    So in short, the basic requirements:

    * database-based data storage (SQLite if possible)
    * very automated GUI creation
    * desktop based (as in: not a web app)
    * extendable by coding
    * used for creating simple data entry, view & query apps

    So basically something like Oracle Forms or DotNetMushroom Rapid Application Developer, but for .NET and SQLite if possible.

    Read the article

  • Where do you take mocking - immediate dependencies, or do you grow the boundaries...?

    - by Peter Mounce
    So, I'm reasonably new to both unit testing and mocking in C# and .NET; I'm using xUnit.net and Rhino Mocks respectively. I'm a convert, and I'm focussing on writing behaviour specifications, I guess, instead of being purely TDD. Bah, semantics; I want an automated safety net to work above, essentially. A thought struck me though. I get programming against interfaces, and the benefits as far as breaking apart dependencies goes there. Sold. However, in my behaviour verification suite (aka unit tests ;-) ), I'm asserting behaviour one interface at a time. As in, one implementation of an interface at a time, with all of its dependencies mocked out and expectations set up. The approach seems to be that if we verify that a class behaves as it should against its collaborating dependencies, and in turn relies on each of those collaborating dependencies to have signed that same quality contract, we're golden. Seems reasonable enough. Back to the thought, though. Is there any value in semi-integration tests, where a test-fixture is asserting against a unit of concrete implementations that are wired together, and we're testing its internal behaviour against mocked dependencies? I just re-read that and I think I could probably have worded it better. Obviously, there's going to be a certain amount of "well, if it adds value for you, keep doing it", I suppose - but has anyone else thought about doing that, and reaped benefits from it outweighing the costs?
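    For concreteness, the "one interface at a time" style described above looks something like the following minimal xUnit.net + Rhino Mocks sketch; the OrderProcessor and IPaymentGateway names are invented purely for illustration.

        using Rhino.Mocks;
        using Xunit;

        public interface IPaymentGateway
        {
            bool Charge(decimal amount);
        }

        // Unit under test: its behaviour is asserted only against the interface it collaborates with.
        public class OrderProcessor
        {
            private readonly IPaymentGateway _gateway;
            public OrderProcessor(IPaymentGateway gateway) { _gateway = gateway; }

            public bool Process(decimal amount)
            {
                if (amount <= 0m) return false;
                return _gateway.Charge(amount);
            }
        }

        public class OrderProcessorSpecs
        {
            [Fact]
            public void Charges_the_gateway_for_a_positive_amount()
            {
                var gateway = MockRepository.GenerateMock<IPaymentGateway>();
                gateway.Stub(g => g.Charge(10m)).Return(true);

                bool result = new OrderProcessor(gateway).Process(10m);

                Assert.True(result);
                gateway.AssertWasCalled(g => g.Charge(10m));
            }
        }

    A semi-integration fixture, by contrast, would new up a small graph of concrete collaborators and only mock at the edge of that graph; the trade-off is broader coverage per test against noisier failures when something deep in the graph changes.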

    Read the article

  • Field specific errors for ETL

    - by AaronLS
    I am creating an ETL process in MS SQL Server and I would like to have errors specific to a particular column of a particular row. For example, the data is initially loaded from Excel files into a table (we'll call it the Initial table) where all columns are varchar(2000), and then I stage the data to another table (the DataTyped table) that contains more specific data types (datetime, int, etc.) or more tightly constrained varchar lengths. I need to be able to create error messages for a specific field, such as:

        "Jan. 13th" is not a valid date format for the submission date. Please use a format of MM/DD/YYYY.

    These error messages would need to be stored in some way such that, later in the process, an automated process can create reports with the error messages, with each message referencing a specific row and field (someone will need to go back and correct the data in the source system and resubmit the Excel file). So ideally they would be inserted into a Failures table of some sort and contain the primary key of the failed row, the column name, and the error message.

    Question: I am wondering if this can be accomplished with SSIS, or some open source tool like Talend, and if so, what would be your general approach? Or what hand-coded approach would you take? A couple of approaches I've thought of using SQL (up until now I have done ETL by hand in SQL procs, but I want to consider other approaches; possibly even C#):

    1. Use a cursor to read through the Initial table, and for each row insert a blank record with only the primary key into the DataTyped table, then use a single update statement for each column, such that if that update fails I can insert a very specific error message, specific to that column, into the error messages table.
    2. Insert all the data as-is into the DataTyped table, but have duplicate columns like SubmissionDate and SubmissionDateOld. After the initial insert the *Old columns have data and the rest are blank, and I have a single update for each column that sets the SubmissionDate based on the SubmissionDateOld.

    In addition to suggesting an approach, I'd like to know if you are using that approach or something similar already in the work you do.
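    As a small illustration of the hand-coded C# route mentioned above (every name here is invented for the sketch), each staged column can be validated independently and a row/column/message record collected per failure, which is exactly the shape of the Failures table being described:

        using System;
        using System.Collections.Generic;
        using System.Globalization;

        public class FieldError
        {
            public int RowId;        // primary key of the failing row in the Initial table
            public string Column;    // offending column
            public string Message;   // correction hint to report back to the submitter
        }

        public static class RowValidator
        {
            public static List<FieldError> Validate(int rowId, string submissionDateRaw, string quantityRaw)
            {
                var errors = new List<FieldError>();

                DateTime submissionDate;
                if (!DateTime.TryParseExact(submissionDateRaw, "MM/dd/yyyy",
                        CultureInfo.InvariantCulture, DateTimeStyles.None, out submissionDate))
                {
                    errors.Add(new FieldError
                    {
                        RowId = rowId,
                        Column = "SubmissionDate",
                        Message = "\"" + submissionDateRaw + "\" is not a valid date for the submission date. Please use a format of MM/DD/YYYY."
                    });
                }

                int quantity;
                if (!int.TryParse(quantityRaw, out quantity))
                {
                    errors.Add(new FieldError
                    {
                        RowId = rowId,
                        Column = "Quantity",
                        Message = "\"" + quantityRaw + "\" is not a whole number."
                    });
                }

                return errors;
            }
        }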

    Read the article

  • Are there any ASP.NET MVC subscription-based starter kits or examples?

    - by Wayne M
    Basically something that handles the low-level "plumbing" code for a subscription-based service. I see a lot of things dealing with basic membership, but nothing that handles the subscription aspect (recurring billing, automated jobs for setting up billing, notifications about billing, etc.). This might be the one thing that keeps me from using ASP.NET MVC for my SaaS idea, since it would take a fair amount of development time to write my own; if I go with my other option, Ruby on Rails, I can buy a kit that does all of this for $250. I haven't found anything even remotely close to this for .NET - all of the SaaS sample apps I've seen are more like Stack Overflow et al., where you have one site that multiple people log on to, not the web application model where you have subscribers who are billed monthly, each of whom has users and other entities (e.g. Customers, Tasks, etc.) for their own site. Is there anything similar for ASP.NET, or some kind of guidelines for writing my own if I have to, so I don't waste too much time? As a startup, that means I'm doing all the coding myself. I've found this, but it seems to only be for billing, and didn't seem to have much (any?) documentation on exactly how to set it up.
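    I can't vouch for an off-the-shelf .NET equivalent, but the plumbing being described reduces to something like the sketch below (all type and member names invented): a per-tenant subscription with a next-billing date, plus a recurring job that charges due subscriptions and advances or flags them.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Subscription
        {
            public int TenantId;
            public decimal MonthlyPrice;
            public DateTime NextBillingDate;
            public bool Active = true;
        }

        public interface IBillingGateway
        {
            // Charges the tenant's stored payment method; returns false on failure.
            bool Charge(int tenantId, decimal amount);
        }

        public class RecurringBillingJob
        {
            private readonly IBillingGateway _gateway;
            public RecurringBillingJob(IBillingGateway gateway) { _gateway = gateway; }

            // Intended to run daily from a scheduled task or Windows service.
            public void Run(IEnumerable<Subscription> subscriptions, DateTime today)
            {
                foreach (var sub in subscriptions.Where(s => s.Active && s.NextBillingDate <= today))
                {
                    if (_gateway.Charge(sub.TenantId, sub.MonthlyPrice))
                        sub.NextBillingDate = sub.NextBillingDate.AddMonths(1);
                    else
                        sub.Active = false;   // or flag for dunning / notification email
                }
            }
        }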

    Read the article

  • Is there an event that is raised after a View/PartialView executes in ASP.NET MVC 2 RC2?

    - by sabanito
    I have the following problem: we have an ASP.NET MVC 2 RC2 application that programmatically impersonates an AD account that the user specifies at logon. This account is used to access the DB. At first we had the impersonation code in BeginRequest and we were undoing the impersonation at EndRequest, but when we tried to use IIS 7.5 in integrated mode we learned that it's not possible to impersonate in Global.asax, so we tried different things. We have successfully moved our code from the BeginRequest event to ActionExecuting and from EndRequest to ResultExecuted, and now about 80% of our code works.

    We've just discovered that since we're passing the Entity Framework objects as models for our views, the remaining 20% won't work because some navigation properties are not loaded when the view begins its execution, so we're getting connection exceptions from SQL Server. Is there any event or method that executes AFTER the view, so we can undo the impersonation in it? We thought ResultExecuted would do just that, but it doesn't.

    We've been told that passing the plain entities to the views as models is not a good idea, but we have A LOT of views that may have this problem and there's no automated way to know which ones. If some of you could explain why it's not a good idea, maybe we can convince the team to fix it!
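    One way to sidestep the loads that bite during view rendering is to make sure everything the view touches is already in memory before the result executes, while the impersonated identity still holds the connection. A minimal sketch using eager loading (the MyEntities context and the entity/navigation names are purely illustrative); projecting into a plain view model class would achieve the same thing even more safely:

        using System.Linq;
        using System.Web.Mvc;

        public class CustomersController : Controller
        {
            private readonly MyEntities _context = new MyEntities();   // hypothetical ObjectContext

            public ActionResult Detail(int id)
            {
                // Eager-load the navigation properties the view will render, so the view
                // never has to touch the database after impersonation has been undone.
                var customer = _context.Customers
                                       .Include("Orders")
                                       .Include("Orders.Lines")
                                       .Where(c => c.Id == id)
                                       .First();

                return View(customer);
            }
        }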

    Read the article

  • Does HttpListener work well on Mono?

    - by billpg
    Hi everyone. I'm looking to write a small web service to run on a small Linux box. I prefer to code in C#, so I'm looking to use Mono. I don't want the overhead of running a full web server or Mono's version of ASP.NET. I'm thinking of having a single process with a thread dealing with each client connection, and shared memory between threads instead of a database.

    I've read a little on Microsoft's version of HttpListener and how it works with the Http.sys driver. Alas, Mono's documentation on this class is just the automated class interface with no discussion of how it works under the hood. (Linux doesn't have Http.sys, so I imagine it's implemented substantially differently.) Could anyone point me towards some resources discussing this module please?

    Many thanks, Bill, billpg.com

    (A little background to my question for the interested.) Some time ago, I asked this question, interested in keeping a long conversation open with lots of back-and-forth. I had settled on designing my own ad-hoc protocol, but the people I spoke to really wanted a REST interface, even at the cost of the "Okay, send your command now" signal. So I wondered about running ASP.NET on a Linux/Mono server, but stumbled upon HttpListener. This seemed ideal, as each "conversation" could run in a separate thread. The thread that calls HttpListener in a loop can look up which thread each incoming connection is for and pass the reference to that thread. The alternative, for an ASP.NET-driven service, would be to have the ASPX code pick up the state from a database and write back the new state when it finishes. Yes, it would work, but that's a lot of overhead.
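    For reference, the dispatch pattern being described with HttpListener looks roughly like the sketch below: a single accept loop that hands each incoming request off to a worker. Mono implements the class in managed code rather than on top of Http.sys, but how well that implementation holds up under load is exactly the open question here; the port and response body are placeholders.

        using System;
        using System.Net;
        using System.Text;
        using System.Threading;

        class MiniService
        {
            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://*:8080/");   // example prefix
                listener.Start();
                Console.WriteLine("Listening on port 8080...");

                while (true)
                {
                    // GetContext blocks until a request arrives; hand it to a worker thread
                    // so the accept loop can immediately wait for the next connection.
                    HttpListenerContext context = listener.GetContext();
                    ThreadPool.QueueUserWorkItem(_ => Handle(context));
                }
            }

            static void Handle(HttpListenerContext context)
            {
                byte[] body = Encoding.UTF8.GetBytes("Hello from HttpListener");
                context.Response.ContentType = "text/plain";
                context.Response.ContentLength64 = body.Length;
                context.Response.OutputStream.Write(body, 0, body.Length);
                context.Response.OutputStream.Close();
            }
        }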

    Read the article

  • Caching sitemaps in django

    - by michuk
    I implemented a simple sitemap class using Django's default sitemap app. As it was taking a long time to execute, I added manual caching:

        class ShortReviewsSitemap(Sitemap):
            changefreq = "hourly"
            priority = 0.7

            def items(self):
                # try to retrieve from cache
                result = get_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews")
                if result != None:
                    return result

                result = ShortReview.objects.all().order_by("-created_at")

                # store in cache
                set_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews", result)
                return result

            def lastmod(self, obj):
                return obj.updated_at

    The problem is that memcache allows only a 1MB maximum object size. This one was bigger than 1MB, so storing it in the cache failed:

        >7 SERVER_ERROR object too large for cache

    The problem is that Django has an automated way of deciding when it should divide the sitemap file into smaller ones. According to the docs (http://docs.djangoproject.com/en/dev/ref/contrib/sitemaps/):

        You should create an index file if one of your sitemaps has more than 50,000 URLs. In this case, Django will automatically paginate the sitemap, and the index will reflect that.

    What do you think would be the best way to enable caching of sitemaps?

    * Hacking into the Django sitemaps framework to restrict a single sitemap size to, let's say, 10,000 records seems like the best idea. Why was 50,000 chosen in the first place? Google advice? A random number?
    * Or maybe there is a way to allow memcached to store bigger files?
    * Or perhaps, once saved, the sitemaps should be made available as static files? This would mean that instead of caching with memcached I'd have to manually store the results in the filesystem and retrieve them from there the next time the sitemap is requested (perhaps cleaning the directory daily in a cron job).

    All those seem very low level and I'm wondering if an obvious solution exists...

    Read the article

  • TFS Solution build cascading to several other builds even when common components were not modified

    - by Bob Palmer
    Hey all, here is the issue I am currently trying to work through. We are using Team Foundation Server 2008 and utilizing the automated build support out of the box. We have one very large project that encompasses a number of interrelated components and web sites, each of which is set up as a Visual Studio solution file. Many of these solutions are highly interrelated since they may contain applications, or contain common libraries or shared components. We have roughly 20 or so applications, three large web sites, and about 20 components. Each solution may include projects from other solutions. For example, a solution for a console app would also include the project files for all of the components it utilizes, since we need to ensure that when someone changes a component and rebuilds it, the change is reflected in all of the projects that consume that component, and we can make sure nothing was broken.

    We have build projects for each solution, whether that's an application, component, or web site. For this example, we will call them solutions 01, 02, and 03. These reference multiple projects (both their own core project and test projects, plus the projects relating to various components). Solution 01 has projects A, B, and C. Solution 02 has projects C, D, and E. Solution 03 has projects E, F, and G.

    Now, for the problem. If I modify project A, the system will end up rebuilding all three solutions. Worse, all thirty solutions reference common projects used for data access (let's call it project H). Because they all share one project in common, if I modify any solution in my stack, even if it does not touch project H, I still end up kicking off every single build script. Any thoughts on how to address this? Ideally I would only want to kick off builds whose constituent projects were directly modified - i.e. in the example above, if I modified project C, I would only rebuild solutions 01 and 02. Thanks!

    Read the article

  • What's the compelling reason to upgrade to Visual Studio 2010 from VS2008?

    - by Cheeso
    Are there new features in Visual Studio 2010 that are must-haves? If so, which ones? For me, the big draws for VS2008 as compared to VS2005 were LINQ, .NET Framework multitargeting, WCF (REST + Syndication), and general devenv.exe reliability. Granted, some of these features are framework things, and not tool things. For the purposes of this discussion, I'm willing to combine them into one bucket. What is the list of must-have features for VS2010 versus VS2008? Are there any? I am particularly interested in C#.

    Update: I know how to google, so I can get the official list from Microsoft. I guess what I really wanted was the assessment from people using it, as to which things are really notable. Microsoft went on for 3 pages about 2008/3.5 features, and many people sort of boiled it down to LINQ and a few other things. What is that short list for VS2010?

    Summary so far, what people think is cool or compelling:

    * Visual Studio engine
        - multi-monitor support
        - new extensibility model based on WPF, prettier and more usable
        - new TFS stuff, incl automated test tools
        - parallel debugging
    * .NET Framework
        - parallel extensions for .NET
    * C# 4.0
        - generic variance
        - optional and named params
        - easier interop with non-managed environments, like COM or Javascript
    * VB 10.0
        - collection and array literals / initializers
        - automatic properties
        - anonymous methods / statement lambdas

    I read up on these at Zander's blog. He described these and other features. Nobody on this list said anything about:

    * Visual Studio engine
        - F# support
        - Javascript code-completion
        - JQuery is now included
        - UML
        - better Sharepoint capabilities
    * C++
        - moves to msbuild project files

    Read the article

  • Packaging reference documentation with jar file

    - by soren.enemaerke
    We are porting our .NET library to a Java equivalent and are now looking at how to distribute this port. Packaging the classes into a jar file seems like best practice, and we would then ship this jar file in a zip along with some license terms. But what about the documentation? In .NET land it seems like best practice to distribute the XML file that can be consumed by tooling (Visual Studio), but we can't seem to find such best practices for Java. We have javadoc comments on our public classes and interfaces, so we are just looking for a way to generate and distribute these comments in a way that is developer friendly (we're thinking easily consumed from various IDEs). What are developers expecting, and how do you best deliver this? We would really prefer to bundle the documentation along with the jar file and not have to host the documentation on our website.

    EDIT: We would like our documentation to appear inside the Java IDEs, so we want to provide the documentation in a way that integrates into the IDEs as gracefully as possible. In .NET land this is an XML file placed next to the .dll file, but is there a similar concept for jar files that enables the integration into tooling?

    PS: We are developing in Eclipse and have an ant task doing the building and jar-file packaging in our automated build.

    Read the article

  • What's the Matlab equivalent of NULL, when it's calling COM/ActiveX methods?

    - by David M
    Hi, I maintain a program which can be automated via COM. Generally customers use VBS to do their scripting, but we have a couple of customers who use Matlab's ActiveX support and are having trouble calling COM object methods with a NULL parameter. They've asked how they do this in Matlab - and I've been scouring Mathworks' COM/ActiveX documentation for a day or so now and can't figure it out. Their example code might look something like this: function do_something() OurAppInstance = actxserver('Foo.Application'); OurAppInstance.Method('Hello', NULL) end where NULL is where in another language, we'd write NULL or nil or Nothing, or, of course, pass in an object. The problem is this is optional (and these are implemented as optional parameters in most, but not all, cases) - these methods expect to get NULL quite often. They tell me they've tried [] (which from my reading seemed the most likely) as well as '', Nothing, 'Nothing', None, Null, and 0. I have no idea how many of those are even valid Matlab keywords - certainly none work in this case. Can anyone help? What's Matlab's syntax for a null pointer / object for use as a COM method parameter? Update: Thanks for all the replies so far! Unfortunately, none of the answers seem to work, not even libpointer. The error is the same in all cases: Error: Type mismatch, argument 2 This parameter in the COM type library is described in RIDL as: HRESULT _stdcall OurMethod([in] BSTR strParamOne, [in, optional] OurCoClass* oParamTwo, [out, retval] VARIANT_BOOL* bResult); The coclass in question implements a single interface descending from IDispatch.

    Read the article

  • Alternative to MS Project 2007 for production scheduling?

    - by john c
    OK... Im coming to grips with the fact that MS Project 2007 may not be the correct tool for my production scheduling. We serve 120 to 150 projects a year with durations from 6 weeks to 12 months. The task are simple (6 to 8 per project) and the resource pool is stable (15 to 20 people). It's really an assembly line product but with extremely varied durations. I need to be able to prioritize the projects for production and run projects concurrently to fully utilize my resources. What are the feelings of the stackoverflow community. Am I using the wrong program? I was really hoping to make this simple for non-programer types to input project data into a form and have the schedule software automated enough to make most of the decisions. Is there a better solution available commercially? I'd like to hold on writing a custom spreadsheet as a last resort but if that's the best route then so be it. Thank you so much for your input.

    Read the article

  • latin1/unicode conversion problem with ajax request and special characters

    - by mfn
    The server is PHP5 and the HTML charset is latin1 (iso-8859-1). With regular form POST requests, there's no problem with "special" characters like the em dash (–), for example. Although I don't know for sure why, it works - probably because there exists a representable character for the browser at char code 150 (which is what I see in PHP on the server, via ord, for a literal em dash).

    Now our application also provides some kind of preview mechanism via ajax: the text is sent to the server and a complete HTML preview is sent back. However, the ordinary char code 150 em dash character, when sent via ajax (tested with GET and POST), mutates into something more: %E2%80%93. I see this already in the Apache log. According to various sources I found, e.g. http://www.tachyonsoft.com/uc0020.htm , this is the UTF-8 byte representation of the em dash, and my current understanding is that JavaScript handles everything in Unicode.

    However, within my app I need everything in latin1. Simply said: just like a regular POST request would have given me that em dash as char code 150, I need that for the UTF-8 representation too. That's where I'm failing: in PHP on the server, when I try to decode it with either utf8_decode(...) or iconv('UTF-8', 'iso-8859-1', ...), in both cases I get a regular ? representing this character (and iconv also throws me a notice: Detected an illegal character in input string).

    My goal is to find an automated solution, but maybe I'm trying to be überclever in this case? I've found other people simply doing manual replacing with a predefined input/output set, but that would always give me the feeling I could lose characters. The observant reader will note that I'm behind on understanding the full impact/complexity of Unicode and character conversion, and I definitely prefer to understand the thing as a whole rather than do a simple manual mapping. Thanks.

    Read the article

  • How can I convert a bunch of files from ISO-8859-1 to UTF-8 using Perl?

    - by tau
    I have several documents I need to convert from ISO-8859-1 to UTF-8 (without the BOM, of course). This is the issue though: I have so many of these documents (it is actually a mix of documents, some UTF-8 and some ISO-8859-1) that I need an automated way of converting them. Unfortunately I only have ActivePerl installed and don't know much about encoding in that language. I may be able to install PHP, but I am not sure, as this is not my personal computer.

    Just so you know, I use SciTE and Notepad++, but neither converts correctly. For example, if I open a document in Czech that contains the character "ž" and go to the "Convert to UTF-8" option in Notepad++, it incorrectly converts it to an unreadable character.

    There is a way I CAN convert them, but it is tedious: if I open a document with the special characters and copy it to the Windows clipboard, then paste it into a UTF-8 document and save it, it is okay. This is too tedious (opening every file and copying/pasting into a new document) for the number of documents I have. Any ideas? Thanks!!!

    Read the article

  • How to document an accessor/mutator method in phpDoc/javaDoc?

    - by nickf
    Given a function which behaves as either a mutator or accessor depending on the arguments passed to it, like this:

        // in PHP, you can pass any number of arguments to a function...
        function cache($cacheName) {
            $arguments = func_get_args();

            if (count($arguments) >= 2) {
                // two arguments passed. MUTATOR.
                $value = $arguments[1];             // use the second as the value
                $this->cache[$cacheName] = $value;  // *change* the stored value
            } else {
                // 1 argument passed, ACCESSOR
                return $this->cache[$cacheName];    // *get* the stored value
            }
        }

        cache('foo', 'bar'); // nothing returned
        cache('foo');        // 'bar' returned

    How do you document this in phpDoc or a similar automated documentation creator? I had originally just written it like this:

        /**
         * Blah blah blah, you can use this as both a mutator and an accessor:
         *
         * As an accessor:
         * @param $cacheName name of the variable to GET
         * @return string the value...
         *
         * As a mutator:
         * @param $cacheName name of the variable to SET
         * @param $value the value to set
         * @return void
         */

    However, when this is run through phpDoc, it complains because there are 2 return tags, and the first @param $cacheName description is overwritten by the second. Is there a way around this?

    Read the article

  • Allowing Google to bypass CAPTCHA verification - sensible or not?

    - by edanfalls
    My web site has a database lookup; filling out a CAPTCHA gives you 5 minutes of lookup time. There is also some custom code to detect any automated scripts. I do this as I don't want someone data mining my site. The problem is that Google does not see the lookup results when it crawls my site. If someone is searching for a string that is present in the result of a lookup, I would like them to find this page by Googling it. The obvious solution to me is to use the PHP variable $_SERVER['HTTP_USER_AGENT'] to bypass the CAPTCHA and custom security code for the Google bots. My question is whether this is sensible or not. People could then use Google's cache to view the lookup results without having to fill out the CAPTCHA, but would Google's own script detection methods prevent them from data mining these pages? Or would there be some way for people to make $_SERVER['HTTP_USER_AGENT'] appear as Google to bypass the security measures? Thanks in advance.

    Read the article
