Search Results

Search found 3022 results on 121 pages for 'loose typing'.


  • How to detect collisions in AS3?

    - by Gabriel Meono
    I'm trying to make a simple game: when the ball falls onto a certain block, you win. Mechanics: the ball falls through several obstacles; at the end there are two blocks. If the ball touches the left block you win, and the next level will contain more blocks and less space between them. Test the movie (click on the screen to drop the ball): http://gabrielmeono.com/downloads/Lucky_Hit_Alpha.swf

    These are the main variables:

        var winBox:QuickObject;   // you win
        var looseBox:QuickObject; // you lose
        var gameBall:QuickObject; // ball dropped

    Question: How do I trigger a collision function if the ball hits the winBox? (Win message/next level.) Thanks, here is the full code:

        package {
            import flash.display.MovieClip;
            import com.actionsnippet.qbox.*;
            import flash.events.MouseEvent;

            [SWF(width = 600, height = 600, frameRate = 60)]
            public class LuckyHit extends MovieClip {

                public var sim:QuickBox2D;
                var winBox:QuickObject;
                var looseBox:QuickObject;
                var gameBall:QuickObject;

                /**
                 * Constructor
                 */
                public function LuckyHit() {
                    sim = new QuickBox2D(this);
                    //sim.createStageWalls();
                    winBox = sim.addBox({x:5, y:600/30, width:300/30, height:10/30, density:0});
                    looseBox = sim.addBox({x:15, y:600/30, width:300/30, height:10/30, density:0});

                    // make obstacles
                    for (var i:int = 0; i < (stage.stageWidth/50); i++) {
                        // End
                        sim.addCircle({x:1 + i * 1.5, y:16, radius:0.1, density:0});
                        sim.addCircle({x:2 + i * 1.5, y:15, radius:0.1, density:0});
                        // Mid End
                        sim.addCircle({x:0 + i * 2, y:14, radius:0.1, density:0});
                        sim.addCircle({x:0 + i * 2, y:13, radius:0.1, density:0});
                        sim.addCircle({x:0 + i * 2, y:12, radius:0.1, density:0});
                        sim.addCircle({x:0 + i * 2, y:11, radius:0.1, density:0});
                        sim.addCircle({x:0 + i * 2, y:10, radius:0.1, density:0});
                        // Middle Start
                        sim.addCircle({x:0 + i * 1.5, y:09, radius:0.1, density:0});
                        sim.addCircle({x:1 + i * 1.5, y:08, radius:0.1, density:0});
                        sim.addCircle({x:0 + i * 1.5, y:07, radius:0.1, density:0});
                        sim.addCircle({x:1 + i * 1.5, y:06, radius:0.1, density:0});
                    }

                    sim.start();
                    stage.addEventListener(MouseEvent.CLICK, _clicked);
                }

                /**
                 * ..
                 * @param e MouseEvent.CLICK
                 */
                private function _clicked(e:MouseEvent) {
                    gameBall = sim.addCircle({x:(mouseX/30), y:(1), radius:0.25, density:5});
                }
            }
        }
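
    One simple approach that sidesteps QuickBox2D's physics-contact API entirely: poll for display-object overlap once per frame with Flash's built-in hitTestObject. The sketch below is mine, not part of the original post, and it assumes QuickObject exposes its on-stage DisplayObject through its userData property (as the QuickBox2D samples do); if your build differs, substitute whatever reference you have to the ball's and boxes' display objects.

        // Add inside LuckyHit (requires: import flash.events.Event;).
        // Register in the constructor, after sim.start():
        //     stage.addEventListener(Event.ENTER_FRAME, checkCollisions);

        private function checkCollisions(e:Event):void {
            if (gameBall == null) return; // nothing dropped yet

            // ASSUMPTION: QuickObject.userData is the rendered DisplayObject.
            if (gameBall.userData.hitTestObject(winBox.userData)) {
                stage.removeEventListener(Event.ENTER_FRAME, checkCollisions);
                trace("You win!"); // show win message / load next level here
            } else if (gameBall.userData.hitTestObject(looseBox.userData)) {
                stage.removeEventListener(Event.ENTER_FRAME, checkCollisions);
                trace("You lose!");
            }
        }

    hitTestObject compares bounding boxes, which is crude but adequate here since both targets are flat blocks; for exact physics contacts you would hook QuickBox2D's contact events instead.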

    Read the article

  • Is commented out code really always bad?

    - by nikie
    Practically every text on code quality I've read agrees that commented-out code is a bad thing. The usual example is that someone changed a line of code and left the old line there as a comment, apparently to confuse people who read the code later on. Of course, that's a bad thing. But I often find myself leaving commented-out code in another situation: I write a computational-geometry or image-processing algorithm. To understand this kind of code, and to find potential bugs in it, it's often very helpful to display intermediate results (e.g. draw a set of points to the screen or save a bitmap file). Looking at these values in the debugger usually means looking at a wall of numbers (coordinates, raw pixel values). Not very helpful. Writing a debugger visualizer every time would be overkill. I don't want to leave the visualization code in the final product (it hurts performance, and usually just confuses the end user), but I don't want to lose it, either. In C++, I can use #ifdef to conditionally compile that code, but I don't see much difference between this:

        /* // Debug visualization: draw set of found interest points
        for (int i = 0; i < count; i++)
            DrawBox(pts[i].X, pts[i].Y, 5, 5);
        */

    and this:

        #ifdef DEBUG_VISUALIZATION_DRAW_INTEREST_POINTS
        for (int i = 0; i < count; i++)
            DrawBox(pts[i].X, pts[i].Y, 5, 5);
        #endif

    So, most of the time, I just leave the visualization code commented out, with a comment saying what is being visualized. When I read the code a year later, I'm usually happy I can just uncomment the visualization code and literally "see what's going on". Should I feel bad about that? Why? Is there a superior solution?

    Update: S. Lott asks in a comment: "Are you somehow 'over-generalizing' all commented code to include debugging as well as senseless, obsolete code? Why are you making that overly-generalized conclusion?" I recently read Robert Martin's "Clean Code", which says: "Few practices are as odious as commenting-out code. Don't do this!" I've looked at the paragraph in the book again (p. 68); there's no qualification, no distinction made between different reasons for commenting out code. So I wondered if this rule is over-generalizing (or if I misunderstood the book), or if what I do is bad practice for some reason I didn't know.
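
    A third option the question doesn't weigh, sketched here on my own initiative: wrap the visualization in a macro that expands to nothing unless a debug symbol is defined. Unlike a /* ... */ block, the call site stays a single readable line, and in builds where the symbol is defined the compiler still checks the code, so it is less likely to rot. DrawBox and the point array are stand-ins for the question's own names.

        #include <cstdio>

        // Stand-ins for the question's visualization helpers.
        struct Point { int X, Y; };
        void DrawBox(int x, int y, int w, int h) { std::printf("box at %d,%d\n", x, y); }

        // DBG_VIS(...) compiles to nothing unless DEBUG_VIS is defined,
        // so release builds pay no cost and need no /* */ blocks.
        #ifdef DEBUG_VIS
        #define DBG_VIS(stmt) do { stmt; } while (0)
        #else
        #define DBG_VIS(stmt) do { } while (0)
        #endif

        void FindInterestPoints(const Point* pts, int count) {
            // ... the real algorithm works here ...

            // Debug visualization: draw the set of found interest points.
            DBG_VIS(for (int i = 0; i < count; i++) DrawBox(pts[i].X, pts[i].Y, 5, 5));
        }

    The trade-off is the usual one with function-like macros: anything with a top-level comma needs extra parentheses, and in release builds the enclosed code is not compiled at all, exactly as with #ifdef.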

    Read the article

  • What Is Nuclear Meltdown?

    - by Gopinath
    Japan was first hit by a massive earthquake, then a ruthless tsunami washed away thousands of homes, and now they fear the worst: meltdown of nuclear power stations in the quake-hit area. Nuclear meltdowns are horrifying. Remember the Chernobyl incident in Ukraine (then part of the Soviet Union)? The Chernobyl reactor meltdown released 400 times more radioactive material than the atomic bombing of Hiroshima. The effects of nuclear meltdowns are beyond the imagination of the common man: thousands of people lose their lives, and many lakhs more suffer from radiation-related diseases for many years. Nuclear meltdowns are dangerous, but how do they happen? What causes a nuclear meltdown? In simple terms, a nuclear meltdown is an accident that happens due to severe overheating of a nuclear reactor and results in the release of nuclear radiation into the environment.

    How A Nuclear Meltdown Happens? According to Wikipedia: "A meltdown occurs when a severe failure of a nuclear power plant system prevents proper cooling of the reactor core, to the extent that the nuclear fuel assemblies overheat and melt. A meltdown is considered very serious because of the potential that radioactive materials could be released into the environment. The fuel assemblies in a reactor core can melt if heat is not removed. A nuclear reactor does not have to remain critical for a core damage incident to occur, because decay heat continues to heat the reactor fuel assemblies after the reactor has shut down, though this heat decreases with time. A core damage accident is caused by the loss of sufficient cooling for the nuclear fuel within the reactor core. The reason may be one of several factors, including a loss of pressure control accident, a loss of coolant accident (LOCA), an uncontrolled power excursion or, in some types, a fire within the reactor core. Failures in control systems may cause a series of events resulting in loss of cooling. Contemporary safety principles of defense in depth ensure that multiple layers of safety systems are always present to make such accidents unlikely."

    Video – What Causes Nuclear Meltdown: Al Jazeera news has a good analysis of the feared meltdown at Japan's nuclear plants, and also an animation of what causes a nuclear meltdown. cc image credit: flickr/jtjdt. This article, "What Is Nuclear Meltdown?", was originally published at Tech Dreams.
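
    To put a number on "this heat decreases with time": a rough engineering rule of thumb, the Way–Wigner approximation (quoted here from memory, so treat the constant as indicative only), estimates decay heat as a fraction of the pre-shutdown thermal power:

        \frac{P(t)}{P_0} \approx 0.0622 \left[ t^{-0.2} - (t_s + t)^{-0.2} \right]

    where P_0 is the operating power, t is the time in seconds since shutdown, and t_s is how long the reactor ran before shutdown. Even a full day after shutdown this still works out to roughly half a percent of full power, and for a gigawatt-scale plant that is megawatts of heat that must continuously be carried away, which is why losing cooling after shutdown is still catastrophic.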

    Read the article

  • A few questions about integrating AudioKinetic Wwise and Unity

    - by SaldaVonSchwartz
    I'm new to Wwise and to using it with Unity, and though I have gotten the integration to work, I'm still dealing with some loose ends and have a few questions. (I'm on Unity 4.3 as of now, but I think it shouldn't make any difference.)

    The base path: The Wwise documentation implies you set this in the AkGlobalSoundEngineInitializer basePath public ivar, which is exposed to the editor. However, I found that this variable is not really used. Instead, the path is hardcoded to /Audio/GeneratedSoundBanks in AkBankPath. I had to modify both scripts to actually look in the path that I set in the editor property. What's the deal with this? Just sloppiness, or am I missing something?

    Also about paths: since I'm on Mac, I'm using Unity natively under OS X and, in tandem, the Wwise authoring tool via VMware, and I share the OS X Unity project folder so I can generate the soundbanks into the assets folder. However, the authoring tool (I downloaded the latest one for Windows) doesn't automatically generate any "platform-specific" subfolders for my Wwise files. That is, again, the Unity integration scripts assume the path to be /Audio/GeneratedSoundBanks/<my-platform>/, which in my case would be Mac (I set the authoring tool to generate for Mac). The documentation says Wwise will automatically generate the platform-specific folders, but it just dumps all the stuff in GeneratedSoundBanks. Am I missing some setting? Because right now I just manually create the /Mac folder.

    The C# methods: AkSoundEngine.PostEvent and AkSoundEngine.LoadBank, for instance, have a few overloads, including ones where I can refer to the soundbanks or events by their ID. However, if I try to use these, for instance AkSoundEngine.LoadBank(, AkSoundEngine.AK_DEFAULT_POOL_ID), where the int I got from the .h header, I get AK_Fail. If I use the overloads that reference the objects by string name, then it works. What gives?

    Converting the ID header to C#: The integration comes with a C# script that seems to fork a process to call Python, in turn, to convert the C++ header into a C# script. This always fails unless I manually execute the Python script myself from outside Unity. Might be a permissions thing, but has anyone experienced this?

    The Profiler: I set up the Unity player to run in the background and am using the "Profile" version of the plugin. However, when I start the Unity OS X standalone app, the profiler in VMware does not see it. This, I'm thinking, might just be that I'm trying to see a running instance of the sound engine inside an OS X binary from a Windows virtual machine. But I'm just wondering if anyone has gotten the Windows profiler to see an OS X Unity binary.

    Different versions of the integration plugin: It's not clear to me from the documentation whether I have to manually (or write a script to do it) remove the "Profile" version and install the "Release" version when I'm going to do a release build, or if I should install both versions in Unity and it'll select the right one.

    Thanks!
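
    For reference, the string-based calls that do work look roughly like the sketch below. This is my illustration, not the poster's code, and the signatures are assumptions: the generated AkSoundEngine.cs bindings differ between Wwise versions, so check yours. One plausible cause of the AK_Fail above is that bank and event IDs are hashes of the object names, so an ID copied from a stale Wwise_IDs.h header will not match banks generated later.

        // Assumed signatures -- verify against your generated AkSoundEngine.cs:
        //   AKRESULT LoadBank(string name, int memPoolId, out uint bankID)
        //   uint     PostEvent(string eventName, UnityEngine.GameObject go)
        uint bankID;
        AKRESULT result = AkSoundEngine.LoadBank("Main", AkSoundEngine.AK_DEFAULT_POOL_ID, out bankID);
        if (result == AKRESULT.AK_Success)
        {
            AkSoundEngine.PostEvent("Play_Music", gameObject);
        }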

    Read the article

  • Diagnosing the problem when Canon printer fails to print under Ubuntu

    - by MMA
    I understand that the issue of Canon printers under Linux has a number of posts. Actually, one of these was started by me. After inputs from others, I have successfully printed using a Canon LBP6000 from my Ubuntu machine for around a year. If it failed to print, restarting the daemon using this homemade script coaxed the printer to print:

        #!/bin/bash
        pkill -9 -x ccpd
        pkill -9 -x captmoncnabc
        /etc/init.d/ccpd start
        /etc/init.d/ccpd status

    Recently, I am not successful any more, or successful only on a limited and sporadic basis. Sometimes it prints when turned on after logging in, sometimes when the driver is reinstalled. I keep trying random steps (call them abracadabras) until I get success. Again, success does not always come. I fret for hours only to get a single page printed. I lose precious time on the issue of printing.

    I have read and read all the documents available over the Internet. However, if you please notice, none of the guides, articles, or tutorials (these are too many to list here) seem to deal with diagnosing the problem when the printer fails to print. They tell you where to find the drivers, how to install them, or which script to run to make the installation process automatic. Yes, some of the articles or comments suggest a step to try, without any systematic order. But these mostly fail to suggest a step based on symptoms.

    This morning, my Canon LBP6000 failed to print. After some time, there was a message about a system error, the details of which were something like this: c3pldrv crashed with SIGSEGV in write (). When I search for this error, I find a number of articles, including this one. None of these are actually helpful. Mostly, these are 'me too' and 'tell me if you find anything'. Running captstatusui -P LBP6000 produced an error dialog. Yes, the printer is connected and actually turned on.

    I believe that there are a number of frustrated Canon printer users out there like me. But there is no step-by-step definitive guide to systematically diagnosing a non-printing printer. Do you think that you can provide your diagnosing inputs so that a systematic document can be built? Maybe we will want to tell Ubuntu users to stay away from Canon printers. But I believe, as a Linux user of more than fifteen years, that such a scenario is not acceptable any more. Maybe this was acceptable in the infancy days of Linux, but not today. I am using Ubuntu 12.04, by the way; I prefer LTS versions.
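
    As a starting point for the systematic guide the question asks for, here is a first-pass symptom-gathering script of my own. Nothing Canon-specific is assumed beyond the ccpd daemon the question already uses; every command is stock Ubuntu.

        #!/bin/bash
        # First-pass printer diagnosis: collect symptoms before trying fixes.

        echo "== 1. Is the printer visible on the USB bus? =="
        lsusb | grep -i canon || echo "printer NOT seen by lsusb"

        echo "== 2. Is the Canon daemon running? =="
        pgrep -x ccpd > /dev/null && echo "ccpd running" || echo "ccpd NOT running"

        echo "== 3. What does CUPS think? =="
        lpstat -p -d          # printer states and the default destination

        echo "== 4. Anything stuck in the queue? =="
        lpq 2>/dev/null

        echo "== 5. Recent CUPS errors =="
        sudo tail -n 20 /var/log/cups/error_log

        echo "== 6. Recent kernel/USB messages =="
        dmesg | tail -n 20

    Which of steps 1, 2, and 3 fails already narrows the fault to cable/power, the Canon driver stack, or CUPS configuration respectively, which is exactly the symptom-based branching the existing guides lack.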

    Read the article

  • Advice on designing a robust program to handle a large library of meta-information & programs

    - by Sam Bryant
    So this might be overly vague, but here it is anyway. I'm not really looking for a specific answer, but rather general design principles or direction towards resources that deal with problems like this. It's one of my first large-scale applications, and I would like to do it right.

    Brief explanation: My basic problem is that I have to write an application that handles a large library of meta-data, can easily modify the meta-data on the fly, is robust with respect to crashing, and is very efficient. (Sorta like the design parameters of iTunes, although sometimes iTunes performs more poorly than I would like.) If you don't want to read the details, you can skip the rest.

    Long explanation: Specifically, I am writing a program that creates a library of image files and meta-data about these files. There is a list of tags that may or may not apply to each image. The program needs to be able to add new images, add new tags, assign tags to images, and detect duplicate images, all while operating. The program contains an image Viewer which has tagging operations. The idea is that if a given image A is viewed while the library has tags T1, T2, and T3, then that image will have boolean flags for each of those tags (depending on whether the user tagged that image while it was open in the Viewer). However, prior to being viewed in the Viewer, image A would have no value for tags T1, T2, and T3. Instead, it would have a "dirty" flag indicating that it is unknown whether or not A has these tags. The program can introduce new tags at any time (which would automatically set all images to "dirty" with respect to the new tag).

    This program must be fast. It must easily be able to pull up a list of images with or without a certain tag, as well as images which are "dirty" with respect to a tag. It has to be crash-safe, in that if it suddenly crashes, all of the tagging information done in that session is not lost (though perhaps it's okay to lose some of it). Finally, it has to work with a lot of images (10,000).

    I am a fairly experienced programmer, but I have never tried to write a program with such demanding needs, and I have never worked with databases. With respect to the meta-data storage, there seem to be a few design choices.

    Choice 1: Individual meta-data vs. centralized meta-data. Individual meta-data: have a separate meta-data file for each image. This way, as soon as you change the meta-data for an image, it can be written to the hard disk without having to rewrite the information for all of the other images. Centralized meta-data: have a single file to hold the meta-data for every file. This would probably require meta-data writes in intervals, as opposed to after every change. The benefit here is that you could keep a centralized list of all images with a given tag, etc., making the task of pulling up all images with a given tag very efficient.
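
    Since the poster mentions never having worked with databases: the tri-state tag ("tagged", "not tagged", "dirty") maps naturally onto a relational join table, and an embedded database gives crash safety for free via its journal. A sketch in SQLite-flavoured SQL; all names are invented for illustration:

        -- Images and tags as entities; the tag state lives on the join table.
        CREATE TABLE images (
            id   INTEGER PRIMARY KEY,
            path TEXT NOT NULL UNIQUE,
            hash TEXT NOT NULL            -- content hash, for duplicate detection
        );

        CREATE TABLE tags (
            id   INTEGER PRIMARY KEY,
            name TEXT NOT NULL UNIQUE
        );

        -- state: 0 = not tagged, 1 = tagged, 2 = dirty. Reading a *missing*
        -- row as "dirty" makes introducing a new tag free: no rows need to
        -- be written, and every image is implicitly unknown for it.
        CREATE TABLE image_tags (
            image_id INTEGER NOT NULL REFERENCES images(id),
            tag_id   INTEGER NOT NULL REFERENCES tags(id),
            state    INTEGER NOT NULL DEFAULT 2,
            PRIMARY KEY (image_id, tag_id)
        );

        -- Keeps "all images dirty with respect to tag X" fast at 10,000 images.
        CREATE INDEX idx_image_tags_tag_state ON image_tags (tag_id, state);

    Wrapping each tagging action in a transaction means a crash rolls back at most the in-flight action, which matches the "okay to lose a little, but not the session" requirement.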

    Read the article

  • How To Build An Enterprise Application - Introduction

    - by Tuan Nguyen
    An enterprise application is software which fulfills four core quality attributes: reliability, flexibility, reusability, and maintainability.

    Reliability is the ability of a system or component to perform its required functions under stated conditions for a specific period of time. Because there is no way other than testing to make sure a system is reliable, we can exchange the term reliability with the term testability.

    Flexibility is the ability to change a system's core features without violating unrelated features or components. Flexibility can help us achieve interoperability easily, but the opposite is not true. For example, a program might run on multiple platforms and contain logic for many scenarios, but that wouldn't make it flexible if it forced us to rewrite code in all components when we just want to change some aspects of a feature.

    Reusability is the ability to share one or more of a system's components with another system. We should scope a component's reusability to the context in which it is used. For example, we write classes that implement UI logic and deliver them only to classes which implement UI.

    Maintainability is the ability to add or remove features in a system after it has been released. Maintainability consists of many factors, such as readability, analyzability, and extensibility, of which extensibility is critical. Maintainability requires us to write code that is longer and more complex than normal, but that doesn't mean we introduce unnecessarily complex code. We always try to make our code clear and transparent to everyone.

    An enterprise application is built on an enterprise design which consists of two parts: low-level design and high-level design.

    At the low level, the design focuses on building loosely coupled classes and components. Particularly, it recommends: each class or component undertakes only a single responsibility (design based on unit tests); classes and components implement and work through interfaces (design based on contracts); dependency relationships between classes and components can be injected at run time (design based on dependencies). A sketch of these three recommendations in code follows this excerpt.

    At the high level, the design focuses on architecting the system into tiers and layers. Particularly, it recommends: divide the system into subsystems for deployment, where each subsystem is called a tier (typically, an enterprise application has 3 tiers); arrange classes and components into logical containers called layers (typically, an enterprise application has 5 layers).
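
    The three low-level recommendations fit in one small illustration. A hedged C# sketch of my own, with invented names: one responsibility per class, a contract in the middle, and the dependency injected through the constructor so a test can substitute a fake.

        // The contract: callers depend on this interface, never on a concrete class.
        public interface IOrderRepository
        {
            void Save(Order order);
        }

        // One responsibility: persistence. Swappable behind the interface.
        public class SqlOrderRepository : IOrderRepository
        {
            public void Save(Order order) { /* write to the database */ }
        }

        // One responsibility: business rules. Its dependency arrives at run time.
        public class OrderService
        {
            private readonly IOrderRepository _repository;

            public OrderService(IOrderRepository repository)
            {
                _repository = repository;
            }

            public void PlaceOrder(Order order)
            {
                // ... validate, apply business rules ...
                _repository.Save(order);
            }
        }

        public class Order { /* fields elided */ }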

    Read the article

  • Silverlight Cream for December 08, 2010 -- #1005

    - by Dave Campbell
    In this Issue: Peter Kuhn, David Anson, Jesse Liberty, Mike Taulty(-2-, -3-), Kunal Chowdhury, Jeremy Likness, Martin Krüger, Beth Massi(-2-, -3-)

    Above the Fold:
    Silverlight: "Rebuilding the PDC 2010 Silverlight Application (Part 1)" Mike Taulty
    WP7: "WP7: Glossy text block custom control" Martin Krüger
    Lightswitch: "How to Create a Screen with Multiple Search Parameters in LightSwitch" Beth Massi

    From SilverlightCream.com:

    Requirements of and pitfalls in Windows Phone 7 serialization: Peter Kuhn discusses Data Contract Serializer issues on WP7 and how to work around them.

    Managed implementation of CRC32 and MD5 algorithms updated; new release of ComputeFileHashes for Silverlight, WPF, and the command-line! David Anson ties up some loose ends from a prior post on hash functions, and updates his CRC32 and MD5 algorithms.

    Windows Phone From Scratch #9 – Visual State: Jesse Liberty's latest Windows Phone From Scratch tutorial is up... and is on the Visual State... he extends a Button and codes up the state transitions.

    Rebuilding the PDC 2010 Silverlight Application (Part 1): Mike Taulty has taken the time to rebuild the PDC 2010 Silverlight app that folks wanted the source for... and he's taking multiple posts to explain the heck out of it. This first one is mostly infrastructure.

    Rebuilding the PDC 2010 Silverlight Application (Part 2): In the 2nd post of the series, Mike Taulty is handling the in/out-of-browser business because he eventually is going to go OOB.

    Rebuilding the PDC 2010 Silverlight Application (Part 3): Part 3 finds Mike Taulty delving into WCF Data Services and getting some data on the screen.

    Paginating Records in Silverlight DataGrid using PagedCollectionView: Kunal Chowdhury continues his investigation of the PagedCollectionView with this post on pagination of your data.

    Old School Silverlight Effects: If you haven't seen Jeremy Likness' 'Old School' effects page yet, go just for the entertainment... you'll find yourself hanging around for the code :)

    WP7: Glossy text block custom control: Martin Krüger's latest post is a very cool custom control for WP7 that displays glossy text... it ain't Metro, but it looks pretty nice... some of it almost like etched text.

    How to Create a Screen with Multiple Search Parameters in LightSwitch: Looks like Beth Massi got a few LightSwitch posts in while I wasn't looking. First up is this one on a multiple-parameter search screen.

    Adding Static Images and Logos to LightSwitch Applications: In the 2nd post, Beth Massi shows how to add your own static images and logos to LightSwitch apps... in response to reader questions.

    Getting the Most out of LightSwitch Summary Properties: In her latest post, Beth Massi explains what Summary Properties are in LightSwitch and how to use them to get the best results for your users.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight Silverlight 3 Silverlight 4 Windows Phone MIX10

    Read the article

  • MS Tech Ed 2011 Coming Soon

    - by sonam
    Microsoft Tech Ed 2010 was a great success. In fact, most such conferences provide a great place to meet other technology enthusiasts and, of course, to hear what's in the pipeline for a company's or a field's future products. And yet again, MS Tech Ed India is coming on 23-25 March in Bangalore, India. The place is of course well suited for any IT/computing conference; after all, it's the Silicon Valley of India.

    From last year, I remember a session by Harish about "Building pure client side apps with jQuery and Microsoft Ajax." Here's the video: http://live.viasilverlight.com/TechEdOnDemand/Breakouts/TheWebSimplified1/Session4/AjaxClientSideApps.wmv Only at that time did I learn that jQuery is so easy to use for Ajax or client-side templating. I prefer jQuery over Microsoft Ajax many times over; UpdatePanel is dead for sure, in my view. I believe Web Forms will be dead sooner or later, with ASP.NET MVC gaining share rapidly. (TODO: learn MVC.) The new standard is surely jQuery.

    By the way, last year's videos and PPTs are available to browse and download: http://microsoftteched.in/2010/downloads.aspx

    After going through the Tech Ed 2011 session agendas (http://www.microsoft.com/india/teched2011/agenda.aspx), a few of my personal choices to watch would be:

    Day 1: a) Identity and Access Control in the Cloud; b) Windows 7 at Home: Digitizing Your Home (sounds cool); c) and of course, jQuery and MS Ajax (let's see if MS can do something that's not already happening with their version of Ajax).

    Day 2: a) Lap Around Silverlight 5, and HTML 5, as I have heard some hot talk that HTML 5 will kill Silverlight (I don't see it in the near future, though); b) HTML 5 more than "HTML 5"... Google will be watching this one.

    Day 3: a) Cross-Browser Applications in Azure; b) VS 2010 sessions on automated testing of Azure apps, etc.

    Windows Phone 7 sessions will surely be of more interest now after the MS-Nokia deal. Though, personally, I would want at least some worthwhile sessions on MS's future in robotics and AI. Perhaps I am looking in the wrong place. (When is PDC?) And since Bill Gates considers robotics the next big thing (refer to this one: http://www.cs.virginia.edu/robins/A_Robot_in_Every_Home.pdf), I am sure they won't lose this new hot spot to competitors, like how Google rules in online search now. Robotics and AI will surely provide a big battlefield for the future. See what IBM is doing with IBM Watson. Or see this: http://www.sciencedaily.com/releases/2011/02/110218083711.htm; this is cool only if you can control your mind. At least, I'll prefer regular driving (I would devote my mind to seeing the people and places we pass on the road); that's what makes a journey "cool". :P

    Read the article

  • Strengthening code with possibly useless exception handling

    - by rdurand
    Is it good practice to implement useless exception handling, just in case another part of the code is not coded correctly?

    Basic example: A simple one, so I don't lose everybody :). Let's say I'm writing an app that will display a person's information (name, address, etc.), the data being extracted from a database. Let's say I'm the one coding the UI part, and someone else is writing the DB query code. Now imagine that the specifications of your app say that if the person's information is incomplete (let's say the name is missing in the database), the person coding the query should handle this by returning "NA" for the missing field. What if the query is poorly coded and doesn't handle this case? What if the guy who wrote the query hands you an incomplete result, and when you try to display the information, everything crashes because your code isn't prepared to display empty values? This example is very basic. I believe most of you will say "it's not your problem, you're not responsible for this crash". But it's still your part of the code which is crashing.

    Another example: Let's say now I'm the one writing the query. The specifications don't say the same as above, but rather that the guy writing the "insert" query should make sure all the fields are complete when adding a person to the database, to avoid inserting incomplete information. Should I protect my "select" query to make sure I give the UI guy complete information?

    The questions: What if the specifications don't explicitly say "this guy is the one in charge of handling this situation"? What if a third person implements another query (similar to the first one, but on another DB) and uses your UI code to display it, but doesn't handle this case in his code? Should I do what's necessary to prevent a possible crash, even if I'm not the one supposed to handle the bad case? I'm not looking for an answer like "(s)he's the one responsible for the crash", as I'm not solving a conflict here. I'd like to know: should I protect my code against situations it's not my responsibility to handle? Here, a simple "if empty, do something" would suffice. In general, this question tackles redundant exception handling. I'm asking it because when I work alone on a project, I may code similar exception handling two or three times in successive functions, "just in case" I did something wrong and let a bad case come through.
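
    For the concrete case in the question, the cheap middle ground is one guard at the seam between the query layer and the UI, rather than the same try/catch repeated in successive functions. A hedged C# sketch with invented type and field names:

        using System;

        public class PersonInfo
        {
            public string Name { get; set; }
            public string Address { get; set; }
        }

        public static class PersonInfoGuard
        {
            // Normalize once, at the boundary: whatever the query layer failed
            // to do, the UI code after this point never sees null or empty
            // fields, matching the spec's "NA" convention.
            public static PersonInfo Normalize(PersonInfo person)
            {
                if (person == null)
                    throw new ArgumentNullException("person"); // a bug upstream, not bad data

                if (string.IsNullOrEmpty(person.Name)) person.Name = "NA";
                if (string.IsNullOrEmpty(person.Address)) person.Address = "NA";
                return person;
            }
        }

    The design point: bad data is normalized so the show can go on, while a contract violation like a null result still throws, keeping the upstream bug visible instead of silently absorbing it three layers in a row.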

    Read the article

  • 302: this blog will be closed

    - by preishuber
    After nearly 7 years, I will discontinue blogging on this site. My resources are limited. You can reach my German blog, which is used to support my customers. Looking back on a long and interesting journey:

    ASP.NET by ScottGu: That was the reason to join this site and support Microsoft as much as I could. For that I was honored as an ASP.NET MVP. Thanks again. Met Scott several times. Great guy!

    Forums: I left the NNTP forums a few years ago, and now Microsoft has closed them. It was my idea ;-)

    AJAX: Was the wrong way. jQuery won the game.

    IIS7: That is really a great platform, and the IIS team rules. I am sad that it is so quiet around that topic.

    ASP.NET after 2.0: Is no longer my world. I love ASP.NET and ASP.NET server controls. I hate the discussion about how to follow the holy rules of MVC. Microsoft has dropped the goal of bringing ASP.NET to #1 and accepted that PHP is it.

    Facebook & twittering: Microblogging has taken over a part of the blogging business. Shorter, faster, cheaper; or, as SteveB mentioned, do more with less.

    Google: Google is taking over the web. I use Bing whenever I can, but Google has more options. Sorry, Microsoft, you will lose that game.

    Apple: That is not the biggest problem for Microsoft. The iXxx devices take over a small part of the market but big money, and the customers are not strongly linked. New wave, new hype. Game over, Apple.

    Silverlight: My new home. I can reuse a lot of my skills, and I love the possibilities. Silverlight will surpass WPF and strike at Flash.

    Windows Phone 7: Also fits my skills. I will just use it for fun. I am not really satisfied with what I have heard from MIX. Guys from Redmond, I am sad to say: you had the best smartphone OS and lost everything.

    The ADO vNext story: That will be the next mystic point. WCF, REST, JSON, ATOM, and now OData. Nothing about SQL commands. LINQ and ORM are also not the final solution for multilayered, disconnected, async scenarios. Personally, I prefer the OData idea and dislike the Swiss Army knife (German: eierlegende Wollmilchsau) that is WCF.

    I am still on the INETA speakers board, and I am glad to come to your user group. In all other cases you can hire me via ppedv AG. Goodbye, and have a good life.

    Read the article

  • ApiChange Corporate Edition

    - by Alois Kraus
    In my initial announcement I could only cover a small subset of what ApiChange can do for you. Let's look at how ApiChange can help you fix bugs due to wrong usage of an API in a fraction of the time it would normally take.

    It happens that software is tested and some bugs show up. One bug could be: we get way too many log messages during our test run. Now you have the task of finding the most frequent messages and eliminating those log calls from the source code. But what about the myriad other log calls? How can we check that the distribution of log calls is nearly equal across all developers? And if not, how can we contact the developer to check his code? ApiChange can help you to connect these loose ends. It combines several information silos into one cohesive view.

    The public version does currently "only" parse the binaries and pdbs to produce the result columns of a -whousesmethod query. If it happens that you have Rational ClearCase (a source control system) in your development shop and an Active Directory in place, then ApiChange will try to determine, from the source file located via the pdb, the last check-in user, who should be present in your Active Directory. From there it is only a small hop to an LDAP query against your AD domain or the GC (Global Catalog) to get, from the user name, his full name, email, phone number, department, and so on. ApiChange will append this additional data to all of your query results which contain source files, if you add the -fileinfo option. As I said, this is currently not enabled by default, since the AD domain needs to be configured, which is currently only some hard-coded values in the SiteConstants.cs source file of ApiChange.Api.dll.

    Once you have this data you can generate metrics based on source file, developer, assembly, and so on, and add additional data by drag and drop directly into pivot tables inside Excel. This allows you, for example, to generate a report which lists the source files with the most log calls in descending order, along with the developer name and email, in the pivot table. Armed with this knowledge you can take meaningful measures, e.g. ask the developer whether the huge number of log calls in this source file can be optimized. I am aware that this is a very specific scenario, but it is a huge time saver when you are able to fill in the missing pieces of information. ApiChange does this in an extensible way:

        namespace ApiChange.ExternalData
        {
            public interface IFileInformationProvider
            {
                UserInfo GetInformationFromFile(string fileName);
            }
        }

    It defines an interface where you can implement your custom information provider to close the gap between the source control system and the real person I have to email to ask if his code needs a closer inspection.
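
    To make the extension point concrete, here is what a custom provider might look like. This sketch is mine, not shipped ApiChange code: UserInfo's property names, the way the check-in user is obtained, and the AD attributes used are all assumptions for illustration; System.DirectoryServices is the stock .NET route to LDAP.

        using System.DirectoryServices;
        using ApiChange.ExternalData;

        public class AdFileInformationProvider : IFileInformationProvider
        {
            public UserInfo GetInformationFromFile(string fileName)
            {
                // Ask the source control system (ClearCase here) who last
                // checked in this file.
                string account = GetLastCheckInUser(fileName);

                using (var searcher = new DirectorySearcher())
                {
                    searcher.Filter = "(&(objectClass=user)(sAMAccountName=" + account + "))";
                    SearchResult hit = searcher.FindOne();
                    if (hit == null)
                        return null;

                    // Property names on UserInfo are assumed for this sketch.
                    return new UserInfo
                    {
                        FullName = (string)hit.Properties["displayName"][0],
                        Email    = (string)hit.Properties["mail"][0],
                        Phone    = (string)hit.Properties["telephoneNumber"][0]
                    };
                }
            }

            private string GetLastCheckInUser(string fileName)
            {
                return "jdoe"; // stub: e.g. parse "cleartool describe" output
            }
        }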

    Read the article

  • How can a solo programmer become a good team player?

    - by Nick
    I've been programming (obsessively) since I was 12. I am fairly knowledgeable across the spectrum of languages out there, from assembly, to C++, to Javascript, to Haskell, Lisp, and Qi. But all of my projects have been by myself. I got my degree in chemical engineering, not CS or computer engineering, but for the first time this fall I'll be working on a large programming project with other people, and I have no clue how to prepare. I've been using Windows all of my life, but this project is going to be very unix-y, so I purchased a Mac recently in the hopes of familiarizing myself with the environment. I was fortunate to participate in a hackathon with some friends this past year -- both CS majors -- and excitingly enough, we won. But I realized as I worked with them that their workflow was very different from mine. They used Git for version control. I had never used it at the time, but I've since learned all that I can about it. They also used a lot of frameworks and libraries. I had to learn what Rails was pretty much overnight for the hackathon (on the other hand, they didn't know what lexical scoping or closures were). All of our code worked well, but they didn't understand mine, and I didn't understand theirs. I hear references to things that real programmers do on a daily basis -- unit testing, code reviews, but I only have the vaguest sense of what these are. I normally don't have many bugs in my little projects, so I have never needed a bug tracking system or tests for them. And the last thing is that it takes me a long time to understand other people's code. Variable naming conventions (that vary with each new language) are difficult (__mzkwpSomRidicAbbrev), and I find the loose coupling difficult. That's not to say I don't loosely couple things -- I think I'm quite good at it for my own work, but when I download something like the Linux kernel or the Chromium source code to look at it, I spend hours trying to figure out how all of these oddly named directories and files connect. It's a programming sin to reinvent the wheel, but I often find it's just quicker to write up the functionality myself than to spend hours dissecting some library. Obviously, people who do this for a living don't have these problems, and I'll need to get to that point myself. Question: What are some steps that I can take to begin "integrating" with everyone else? Thanks!

    Read the article

  • Enterprise Service Bus (ESB): Important architectural piece to a SOA or is it just vendor hype?

    Is an Enterprise Service Bus (ESB) an important architectural piece of a Service-Oriented Architecture (SOA), or is it just vendor hype in order to sell a particular product, such as SOA-in-a-box? According to IBM.com, an ESB is a flexible connectivity infrastructure for integrating applications and services; it offers a flexible and manageable approach to service-oriented architecture implementation. With this being said, it is my personal belief that ESBs are an important architectural piece of any SOA. Additionally, generic design patterns have been created around the integration of web services into an ESB, regardless of vendor. ESB design patterns, according to Philip Hartman, can be classified into the following categories:

    Interaction patterns: enable service interaction points to send and/or receive messages from the bus. Mediation patterns: enable the altering of message exchanges. Deployment patterns: support solution deployment into a federated infrastructure.

    Examples of interaction patterns: One-Way Message; Synchronous Interaction; Asynchronous Interaction; Asynchronous Interaction with Timeout; Asynchronous Interaction with a Notification Timer; One Request, Multiple Responses; One Request, One of Two Possible Responses; One Request, a Mandatory Response, and an Optional Response; Partial Processing; Multiple Application Interactions.

    Benefits of the mediation pattern: the mediator promotes loose coupling by keeping objects from referring to each other explicitly, and it lets you vary their interaction independently; design an intermediary to decouple many peers; promote the many-to-many relationships between interacting peers to "full object status".

    Examples of deployment patterns: Global ESB (services share a single namespace and all service providers are visible to every service requester across an entire network); Directly Connected ESB (a global service registry that enables independent ESB installations to be visible); Brokered ESB (bridges services that are reluctant to expose requesters or providers to ESBs in other domains); Federated ESB (service consumers and providers connect to the master or to a dependent ESB to access services throughout the network).

    References:
    Mediator Design Pattern. (2011). Retrieved 2011 from SourceMaking.com: http://sourcemaking.com/design_patterns/mediator
    Hartman, P. (2006, 24 1). ESB Patterns that "Click". Retrieved 2011 from The Art and Science of Being an IT Architect: http://artsciita.blogspot.com/2006/01/esb-patterns-that-click.html
    IBM. (2011). WebSphere DataPower XC10 Appliance Version 2.0. Retrieved 2011 from IBM.com: http://publib.boulder.ibm.com/infocenter/wdpxc/v2r0/index.jsp?topic=%2Fcom.ibm.websphere.help.glossary.doc%2Ftopics%2Fglossary.html
    Oracle. (2005). 12 Interaction Patterns. Retrieved 2011 from Oracle® BPEL Process Manager Developer's Guide: http://docs.oracle.com/cd/B31017_01/integrate.1013/b28981/interact.htm#BABHHEHD

    Read the article

  • Can't save data for a member in a data form

    - by RahulS
    Implied sharing is an old topic; everyone knows the reasons for it and the workarounds. Still, a little theory first. With Essbase implied sharing, some members are shared even if you do not explicitly set them as shared. These members are implied shared members. When an implied share relationship is created, each implied member assumes the other member's value. Essbase assumes (or implies) a shared member relationship in these situations: (1) a parent has only one child, or (2) a parent has only one child that consolidates to the parent.

    In a Planning form that contains members with an implied sharing relationship, when a value is added for the parent, the child assumes the same value after the form is saved. Likewise, if a value is added for the child, the parent usually assumes the same value after the form is saved. For example, when a calculation script or load rule populates an implied share member, the other implied share member assumes the value of the member populated by the calculation script or load rule. The last value calculated or imported takes precedence. The result is the same whether you refer to the parent or the child as a variable in a calculation script. For more information have a look at: http://docs.oracle.com/cd/E17236_01/epm.1112/hp_admin_11122/ch14s11.html

    Now the issue we are going to talk about: we lose data on save, even when the parent is a dynamic calc member with a single child. Suppose we design the form with a dynamic calc parent of a single child, selecting the members with a command that picks everything below the parent. In the data form the parent appears below its child, and this is by design: whenever you make a selection using commands to select all members below a parent, the children will always appear before the parent. Enter data and save: the value disappears. Change the way the members are selected, so that the parent no longer follows the child, and the value is retained.

    Now the question again: why this behavior?

    1. Data from a Planning data form passes to Essbase row by row.
    2. Because in the data form the child member appears before the parent,
    3. the data first goes to Essbase for the child (SingleStoreChild).
    4. Then, when Planning passes the data for the parent, there was #Missing or no data for it,
    5. so it overwrites the child's value with #Missing.

    PS: As we know, dynamic calc members are calculated on the fly and are not allocated any memory in Essbase. Here the parent was dynamic calc and was pointing to the same storage as the child in the background, so when Planning passed the data for the second row, it updated the child with missing data. (A little confusing; let me know if you need more explanation.)

    6. As one solution, just change the order of appearance of the parent and child.

    Cheers..!!! Rahul S. https://www.facebook.com/pages/HyperionPlanning/117320818374228

    Read the article

  • CES 2011–Microsoft Keynote Impressions

    - by guybarrette
    Microsoft has been kicking off CES for a number of years by giving a keynote on the evening of the event's first day. This year, SteveB talked about Xbox, Kinect, new Windows 7 laptops, Surface 2, and Windows vNext running on the ARM architecture.

    Some of the designs of the new laptops shown are quite amazing. One has dual screens and no physical keyboard: the image is split between both screens, and a software keyboard appears when you place your 10 fingers on the lower screen. Another, from Samsung, has a sliding keyboard somewhat like numerous cell phones have. What I found most amazing is that Intel was able to miniaturize a full Intel architecture (CPU, motherboard, memory) into a tiny form factor. Imagine having the power of a full PC running .NET apps in a Zune/iPod form factor!

    They also showed V2 of the Surface device. This one is called the Samsung SUR40 for Surface PC. It's much sleeker, and it will likely lose the BAT (Big Ass Table) moniker. More info here.

    SteveB announced that Windows vNext will run on ARM chips. I'm intrigued by this announcement (you can read about it here) and I have many questions: In the past, ARM devices were slow; what now makes the ARM architecture able to run Windows? ARM is 32-bit only, I think. Does this mean that Intel wasn't able to provide such a lightweight architecture, or simply that they weren't interested? From what I understand, apps would need to be recompiled for ARM. Will we need to do that from an ARM PC, or could it be done natively on Intel, or on an Intel PC running an ARM VM? VS 2012? Ahhhh, smells like a cool PDC is coming up.

    Clearly it looks like PCs have enough power for most of us right now, and the race is now about miniaturization, power consumption, and battery life. You can watch the Microsoft CES 2011 keynote here.

    Read the article

  • Live-Ubuntu 12.04 ran fine, now stopped booting!

    - by user89743
    I've seen similar problems to this several times in the forum, but mine is a bit different, so the other posts I saw were no help to me. When I boot Ubuntu 12.04 64-bit from a live SD card (3 GB persistence), I suddenly get this error:

        (initramfs) mount: mounting /dev/loop0 on //filesystem.squashfs failed: Invalid argument
        Can not mount /dev/loop/0 (/cdrom/casper/filesystem.squashfs) on //filesystem.squashfs

    (It says I can type "help" for commands, but I don't know how to go on from there; I'm totally new to Linux.)

    The reason I say my case is different is that my Ubuntu worked fine for over a week, even pretty fast, and then this problem happened. Before that I used to run my live Ubuntu from USB sticks, but that was slower (especially booting, which took 15 minutes from a USB stick!). Also, I kept getting the same problem after a while when booting, and had to re-create a live USB Linux several times. Installing on the hard drive is not an option, because my hard drive has physical damage and getting a replacement will take a while; therefore I can only use live-USB or live-SD-card Ubuntu.

    As I said, I used Ubuntu without problems for more than a week, and before that for several weeks on USB sticks, but the above problem occurred sooner or later. This time I paid attention to when it happened: I was rebooting my computer (HP 620 laptop, 4 GB RAM, 64-bit system) from the SD card, and while booting I pressed F6 and selected the first option, "no acpi" or something like that. I had used it before and noticed it affected how long Linux took to boot. This time it caused this error. Now I get the error even when I boot normally with the defaults.

    Right now I'm accessing Ubuntu from my USB stick without a persistence file. When I check my SD card, all the files mentioned in the error message are there, and filesystem.squashfs is 691.2 MB, so nothing seems to have been deleted by accident. (I have already made many changes and downloaded programs to my SD-card persistent Ubuntu and would hate to lose them, since downloading is expensive for me and since the problem seems to recur...)

    Can anyone help me, preferably without having to create another startup disk on my SD card? I'm totally new to this. Sorry for the long post; I just didn't know what info is relevant and what isn't! Kon
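
    Before re-creating anything, it's worth checking from the working USB session whether the squashfs on the SD card is actually intact. A sketch of the checks I'd run; the mount point is a placeholder for wherever the card shows up, and md5sum.txt is the manifest Ubuntu ships at the root of its live media:

        #!/bin/bash
        SD=/media/sdcard      # adjust to where the SD card is mounted

        # 1. Verify the live files against the shipped manifest. Any file
        #    reported here as failed is corrupted; filesystem.squashfs failing
        #    would explain the "Invalid argument" mount error exactly.
        cd "$SD" && md5sum -c md5sum.txt | grep -v ': OK$'

        # 2. Try mounting the squashfs directly. If this fails, the image
        #    itself is damaged and the card must be re-imaged. The persistence
        #    file (casper-rw) can be copied off first to save your changes.
        sudo mkdir -p /mnt/squash
        sudo mount -o loop,ro "$SD/casper/filesystem.squashfs" /mnt/squash \
            && echo "squashfs mounts fine" \
            && sudo umount /mnt/squash

    If the squashfs checks out but the boot still fails, the fault is more likely the card's filesystem or the persistence file than the image, which is good news: re-imaging the card and copying casper-rw back would keep the downloaded programs.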

    Read the article

  • ecommerce item deleted by user, 301 redirect to HOME PAGE or 404 not found?

    - by Marco Demaio
    I know this question is somewhat similar to this one, where they recommend using 404, but after reading this other one, where they suggest using 301 when changing a site's URLs (in the specific case it was due to redesign/refactoring), I got a bit confused, and I hope someone could clarify this specific example:

    Let's say I have an ecommerce site, and the end user inserted some interesting items into the site. The ecommerce webapp created the item pages at the URLs http://...?id=20, http://...?id=30, etc. Now let's say some of these interesting items got many external links toward them from many other sites, because some people found those items very interesting and linked to them. After some years the end user deletes those items, so obviously the pages/URLs http://...?id=20, http://...?id=30, etc. do not exist anymore, but many pages on the web still link to them.

    What should the ecommerce site do now? Just show a 404 page for those items? But, I'm confused: wouldn't this lose all the Google PR passed by the external links to the item pages? So isn't it better to use a 301 redirect to the HOME PAGE, which at least passes the PR to the HOME PAGE? Thanks.

    EDIT: Well, according to the answers, the best thing to do so far is a 404/410. In order to make this question more complete, I would like to talk about a special case, just to make sure I understood properly. Let's say the user creates those items again (the ones he previously deleted above), maybe changing their names and descriptions a bit, but they are basically the same items. The webapp has no way to know these newly added items are the old items, so it obviously creates them as new items with new URLs http://...?id=100, http://...?id=101. Does it make sense at this point to 301-redirect the old URLs to the new ones?

    MORE EDIT (it would be VERY IMPORTANT TO UNDERSTAND): Well, according to the clever answers received so far, it seems that for the special case explained in my last EDIT I could use 301, since it's not deceptive, because basically the new page is a replacement for the old page in terms of content. This is basically done to keep the PR passed from external links and also for a better user experience. But besides the user experience, which is debatable (*1), in order to preserve PR from external broken links, why not just always use 301? In my understanding Google dislikes duplicated content, but are we sure that a 301 redirect to the HOME PAGE is seen as duplicated content by Google?! Google itself suggests 301-redirecting index.html to the document root, so if they considered 301 duplicated content, wouldn't that be considered duplicated content too?! Why do they suggest it? Let me provoke you: "why not just add a 301 to the HOME PAGE for every not-found page?"

    (*1) As a user, when I follow a broken URL from some external link to a website's page, I would stick around more on that website if I got redirected to the HOME PAGE rather than seeing a 404 page, where I would think the website doesn't even exist anymore and might not even try to go to its HOME PAGE.
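
    In code, the consensus answer and the special case are each about a one-liner in most stacks. A hedged ASP.NET MVC sketch of my own (the repository and its successor lookup are invented for illustration; HttpStatusCodeResult and RedirectPermanent themselves are standard MVC 3+ APIs):

        using System.Web.Mvc;

        public class ItemController : Controller
        {
            private readonly IItemRepository _repository;
            public ItemController(IItemRepository repository) { _repository = repository; }

            public ActionResult Details(int id)
            {
                var item = _repository.FindById(id);
                if (item != null)
                    return View(item);

                // Special case: the owner re-created essentially the same item
                // under a new id; a permanent redirect preserves links and PR.
                int? successorId = _repository.FindSuccessorId(id);
                if (successorId != null)
                    return RedirectPermanent(Url.Action("Details", new { id = successorId }));

                // Deleted for good: 410 Gone tells crawlers the removal is
                // deliberate and final (a plain 404 also works, just slower
                // to fall out of the index).
                return new HttpStatusCodeResult(410);
            }
        }

        public interface IItemRepository
        {
            Item FindById(int id);
            int? FindSuccessorId(int id);
        }
        public class Item { /* fields elided */ }

    The structure makes the policy explicit: redirect only when a genuine replacement exists, and answer honestly with 410 otherwise, which is exactly why a blanket "301 everything to the home page" is considered a soft error rather than a clever PR hack.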

    Read the article

  • Are they asking too much of me?

    - by Tesserex
    Or am I just whining?

    Background: I work for a "startup," which I put in air quotes because the company has been around for 4 years. We have about 40 employees in three offices, 9 here, plus some part-time. We have a good amount of investment and bring in about 75% of what we spend (so not profitable just yet). The standard work week is supposed to be about 60 hours, which they justify because we have to be online when our international (Taiwan and Vietnam) offices are awake. When I started the job 6 months ago, I spent about a month prototyping an iPhone app and did really well on my own. They also found out about my Facebook applications and how many users they got. Putting 2 and 2 together (and winding up at -7), they realized 1. I'm independent and innovative (because I was able to use Stack Overflow to answer my iOS questions instead of bugging my superiors) and 2. I must have an eye for marketing (since my FB apps grew totally organically without me doing any advertising), and assigned me to a project optimizing AdWords campaigns.

    Today I was reviewed, and then chewed out, by our CEO for not totally rocking this project. Now, I thought I was doing OK, but the CEO said the project is stagnant and they're expecting more from me. But since it's a startup, they play loose with job roles, and I've had plenty of other things to do in the past three months. Every time I ask what's most important, I get conflicting responses depending on who I ask, and the end result is that almost everything has equal priority: high. I could go on about how I don't think AdWords is worthwhile for us since our profit margin is so slim, and how we should be trying to improve our website first, but that's not the point. I have also explained to the office director (who originally assigned me the project, not the CEO) that I don't actually know anything about marketing; I'm just a decent programmer. But they think my general smarts will prove capable of tackling this challenge.

    The CEO also clarified that he wants a more technical and algorithmic approach to the problem. So is there something I can do to address this? Combined with my existing and confusing workload, should I be raising an issue? Or should I do the grown-up thing and give it my all, asking for help when I need it and hoping for the best? Sorry if this is very rant-ish.

    Read the article

  • Documenting sp_ssiscatalog

    - by jamiet
    What is the best way to document an API? Moreover, what is the best way to document a T-SQL API? Before I try to answer those questions, I should explain what I mean by "a T-SQL API". I think of an API as a collection of well-defined, known code modules that provide some notion of a service to whoever uses it; in T-SQL terms, I tend to think of a collection of stored procedures and functions as a form of API. It's a loose definition, I admit, and in SQL Server circles we don't tend to think of stored procedures collectively as an API, but if you think about it, that's exactly what they are.

    The question of how to document a T-SQL API came to my mind as I worked on sp_ssiscatalog. How could I make it easy for people to learn about the capabilities of sp_ssiscatalog without forcing them to dig through the code and find out for themselves? My opening gambit was to write documentation pages on the wiki at http://ssisreportingpack.codeplex.com. That's kinda useful, but it does suffer the disadvantage that someone using sp_ssiscatalog needs to go visit a webpage to read it; I want the documentation to be available wherever the user is using sp_ssiscatalog. Moreover, maintaining the wiki is a real PITA. IntelliSense works up to a point, I guess, but it only shows whatever SQL Server knows about the various parameters, which isn't all that much!

    I wanted a better way for my API users to learn about its capabilities, and so I hit upon the idea of simply using PRINT statements within the code itself to inform the user what options are available; hence I added such PRINT statements in the latest check-in. Now when you execute (for example):

        EXEC sp_ssiscatalog @operation_type='execs'

    you can hit F6 a few times to view the Messages pane, where you shall see the new output. Notice that I'm returning information about all the parameters that can be used to affect the results that just got returned. I really do think this will be very useful to anyone using sp_ssiscatalog; I myself am always forgetting what the parameters are, and I wrote the damn thing, so I can't really expect anyone else to remember them.

    I have not yet made available a release that has these changes in it, but when I do I'll blog about it right here. At the time of writing, the latest available release of sp_ssiscatalog is DB v1.0.1.0, but if you want the latest and greatest, simply download it straight from source. Feedback is welcome as always.

    @Jamiet
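
    The pattern itself is nothing more than PRINT calls at the top of the procedure; they cost essentially nothing and land in the Messages pane right next to the results. A minimal sketch of the idea (the parameter list here is abbreviated and partly invented; consult the real sp_ssiscatalog source for the actual parameters):

        CREATE PROCEDURE dbo.sp_ssiscatalog_demo
            @operation_type NVARCHAR(20) = N'execs'
        AS
        BEGIN
            /* Self-documentation: tell the caller what else they could ask for. */
            PRINT 'sp_ssiscatalog_demo: @operation_type=''' + @operation_type + '''';
            PRINT 'Parameters that affect these results:';
            PRINT '  @operation_type      : execs | exec | configure ...';
            PRINT '  @execs_folder_filter : only show executions from one folder';
            PRINT '  @execs_status_filter : running | success | failure';

            /* ... the SELECTs that return the result sets go here ... */
        END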

    Read the article

  • Storing a looong lookup table

    - by inquisitive
    Background: The product I am working on has a very long lookup table. The table contains static data and cannot be auto-generated. There are about 500 rows and 10 columns, and the columns hold mostly integers and strings. To complicate matters, there are actually two such tables: every row in table 1 maps to zero or more rows in table 2. We use an SQLite database with the two tables. The product installer places the SQLite file in the installation directory. The application is written in .NET, and we use ADO to load the data once on startup. Now the lookup table grows: in each monthly release we add about 10 new entries, and existing entries are adjusted and fine-tuned.

    The problem: A team of (10) developers works on the lookup table. Code goes into SVN, but the little devil SQLite does not. This prevents multiple developers from working on it. We do take regular backups of the file, but proper versioning is not possible; we never know who made a breaking change. The worse thing is, we don't even know if there was any change at all: diffing databases is tedious, if not impossible. The tables are expected to grow quite large in the years to come, and we would need developers to work on them in parallel. The data is business-critical, and we need to be able to audit changes made to it.

    Question: What would be a solution for the problems outlined above? One idea was to transform the whole thing to XML and treat it like just another source file; that way SVN can do the versioning and we can work in parallel. But the data shows relational behavior: with XML we lose the unique and foreign-key constraints, and we can't query it with SQL-like ease. Any help here will be appreciated.
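
    One concrete middle path, sketched on my own initiative: keep the canonical data as a plain-SQL dump under SVN and rebuild the binary SQLite file from it as a build step. The file names are invented; the sqlite3 command-line tool and its .dump command are standard.

        # One-time bootstrap: emit the current database as plain SQL
        # (CREATE TABLEs with their constraints, plus one INSERT per row).
        sqlite3 lookup.sqlite .dump > lookup.sql

        # From then on, developers edit and commit lookup.sql: it diffs,
        # merges, and blames like any other source file in SVN.

        # Build step: regenerate the shipped database from the versioned SQL.
        rm -f lookup.sqlite
        sqlite3 lookup.sqlite < lookup.sql

    Unlike the XML idea, nothing is lost in translation: the dump is real SQL, so the unique and foreign-key constraints ride along, the installer still ships a normal SQLite file, and the ADO loading code does not change at all.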

    Read the article

  • Cloud Computing Business Benefits

    - by workflowman
    If you have been living under a rock for the past year, you might not have heard about cloud computing. Cloud computing is a loose term that describes anything that is hosted in data centers and accessed via the internet. It is normally associated with developers, who draw clouds in diagrams to indicate where services live or how systems communicate with each other. Cloud computing also incorporates such well-known trends as Web 2.0 and Software as a Service (SaaS), and more recently Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). Its aim is to change the way we compute, moving from traditional desktops and on-premises servers to services and resources that are hosted in the cloud.

    Benefits of cloud computing: There are clearly benefits in building applications using cloud computing, some of which are listed here.

    Zero up-front investment: Delivering a large-scale system costs a fortune in both time and money. Often IT departments are split into hardware/network and software services. The hardware team provisions servers and so forth according to the requirements of the software team. Often the hardware team has a separate budget that requires approval. Although hardware and software management are two separate disciplines, sometimes what happens is that developers are given the task of estimating CPU cycles, disk space, and so forth, which ends up in underutilized servers.

    Usage-based costing: You pay for what you use, no more, no less, because you never actually own the server. This is similar to car leasing, where in the long run you get a new car every three years and maintenance is never a worry.

    Potential for shrinking the processing time: If a process can be split over multiple machines, it can run in parallel, which decreases the processing time.

    More office space: Walk into most offices, and guaranteed you will find a medium-sized room dedicated to servers.

    Efficient resource utilization: Resource utilization is handled by a centralized cloud administrator, who is in charge of deciding exactly the right amount of resources for a system. This takes the task away from local administrators, who would otherwise have to monitor these servers regularly.

    Just-in-time infrastructure: If your system is a success and needs to scale to meet demand, provisioning used to mean further time delays or a slow-performing service. Cloud computing solves this because you can add more resources at any time.

    Lower environmental impact: If servers are centralized, an environmental initiative is more likely to succeed. As an example, if servers are placed in sunny or windy parts of the world, why not use those natural resources to power the servers?

    Lower costs: Unfortunately, this is one point that administrators will not like. If you have people administrating your e-mail server and network, along with support staff doing other now-cloud-based tasks, this workforce can be reduced. This saves costs, though it also reduces jobs.

    Read the article

  • Environment variable ORACLE_UNQNAME not defined. Please set ORACLE_UNQNAME to database unique name

    - by Tapas Bose
    I have a batch file which starts the Oracle services:

        net start OracleOraDb11g_home1TNSListener
        net start OracleServiceORCL
        call C:\app\Edifixio\product\11.2.0\dbhome_1\BIN\emctl.bat start dbconsole
        pause

    But on executing the script I am getting:

        C:\windows\system32>net start OracleOraDb11g_home1TNSListener
        The requested service has already been started.
        More help is available by typing NET HELPMSG 2182.

        C:\windows\system32>net start OracleServiceORCL
        The OracleServiceORCL service is starting.........
        The OracleServiceORCL service was started successfully.

        C:\windows\system32>call C:\app\Edifixio\product\11.2.0\dbhome_1\BIN\emctl.bat start dbconsole
        Environment variable ORACLE_UNQNAME not defined. Please set ORACLE_UNQNAME to database unique name.
        Press any key to continue . . .

    I am using Windows 7 64-bit with Oracle 11gR2 64-bit. Any information will be very helpful. Thanks and regards.
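
    emctl reads ORACLE_UNQNAME (along with ORACLE_SID and ORACLE_HOME) from the environment, so the usual fix is to set them in the batch file before the emctl call. A hedged sketch: ORCL is assumed as the database unique name here only because the service is named OracleServiceORCL; verify yours with "select db_unique_name from v$database;".

        @echo off
        rem Assumed values -- verify against your own instance.
        set ORACLE_HOME=C:\app\Edifixio\product\11.2.0\dbhome_1
        set ORACLE_SID=ORCL
        set ORACLE_UNQNAME=ORCL

        net start OracleOraDb11g_home1TNSListener
        net start OracleServiceORCL
        call %ORACLE_HOME%\BIN\emctl.bat start dbconsole
        pause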

    Read the article

  • Office 2013 OCT unhandled exception when saving .RSP

    - by user52874
    I'm trying to prepare a deployment of Office 2013 Pro Plus. If I deploy an existing .MSP customization file that was left behind by the old analyst (typing from the client):

        PS C:\> \\deploybox\software\Office2013\setup.exe /adminfile \\deploybox\software\Office2013\SWKS.MSP

    things seem to deploy just fine. If I make any changes to the .MSP file by doing (all from the client):

        PS C:\> \\deploybox\software\Office2013\setup.exe /admin

    then opening SWKS.MSP, making changes, and saving under a different name, SWKS1.MSP, I get the following error box:

        Unhandled Exception: MsiGetSummaryInformation call failed.

    And if I try to deploy the new SWKS1.MSP,

        PS C:\> \\deploybox\software\Office2013\setup.exe /adminfile \\deploybox\software\Office2013\SWKS1.MSP

    it fails with the message:

        Path or file specified with /adminfile did not contain any customization patches that apply to this product or platform.

    The same thing happens even if I open the old known-good SWKS.MSP and immediately save it under the new name SWKS1.MSP, making no changes. So what stupid newbie mistake am I making here? Thanks!

    Read the article
