Search Results

Search found 40998 results on 1640 pages for 'setup project'.


  • Media Center setup won't complete for watching TV

    - by Robert
    I have a problem watching TV in Media Center. The TV constantly pauses 1/2 second, then plays 1 second, pauses 1/2 second, plays 1 second - it is constant and does not vary. This problem occurs on all channels, live or recorded. The bottom fifth of the screen is solid green. I know the problem is Media Center because I can use Pinnacle's TVCenterPro to watch TV and there is no skipping/pausing (and no green at the bottom). I was using cable, and switched to DirecTV (satellite). Trying to do "Set up TV signal" in Media Center seems to be what broke it. I get an error "IR Hardware not detected." I can use the remote to "try again" - so the IR hardware works fine (Media Center's remote/sensor). I tried plugging the IR blaster into both ports, and I tried a different USB port for the IR receiver. I can't complete the setup. Media Center was playing TV okay (with the new DirecTV) before I tried to run setup. (I ran setup to try to do recording with Media Center.) Hardware/Software: Pinnacle PCTV 800i HD PCI card (coax cable from DirecTV tuner), ATI Radeon HD 3200 Graphics, Windows XP SP3 Media Center Edition, AMD Athlon Dual Core 2.5 GHz, 1.75 GB RAM.

    Read the article

  • NAnt errors when generating assembly info after project is upgraded to VS2010

    - by Grant Palin
    I have a project I recently upgraded to VS2010 - the project/solution files are updated, but I'm still targeting .NET 3.5. Until now, my standard NAnt build script has not given me any trouble. However, it appears that after updating the project, and updating the NAnt config to be aware of the new tooling, I am now receiving an error when autogenerating assembly information, which fails the build. The relevant build task is below:

        <asminfo output="${dir.src}\${file.commonAssemblyInfo}" language="${project.codeLanguage}">
          <imports>
            <import namespace="System.Reflection" />
          </imports>
          <attributes>
            <attribute type="AssemblyVersionAttribute" value="${project.fullversion}" />
            <attribute type="AssemblyFileVersionAttribute" value="${project.fullversion}" />
            <attribute type="AssemblyInformationalVersionAttribute" value="${project.fullversion}" />
            <attribute type="AssemblyCopyrightAttribute" value="${assembly.copyright}" />
            <attribute type="AssemblyCompanyAttribute" value="${assembly.company}" />
            <attribute type="AssemblyConfigurationAttribute" value="${project.config}" />
            <attribute type="AssemblyTrademarkAttribute" value="${assembly.trademark}" />
            <attribute type="AssemblyProductAttribute" value="${assembly.product}" />
          </attributes>
        </asminfo>

    The error is highlighted for the first line of the asminfo task. It reads:

        AssemblyInfo file 'C:\Users\Grant\Projects\VisualStudio\Checklist\src\CommonAssemblyInfo.cs' could not be generated.
        This method implicitly uses CAS policy, which has been obsoleted by the .NET Framework. In order to enable CAS policy
        for compatibility reasons, please use the NetFx40_LegacySecurityPolicy configuration switch. Please see
        http://go.microsoft.com/fwlink/?LinkID=155570 for more information.

    I've gathered so far that this is something new to .NET 4. Has anyone had to address this error before? Does anyone know what it is about asminfo that may be triggering the error?
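
    A commonly suggested workaround (not verified against this particular setup) is to opt the NAnt process back into legacy CAS policy when it runs on the .NET 4.0 runtime, using the very switch the error message names. A minimal sketch of a NAnt.exe.config fragment; only the NetFx40_LegacySecurityPolicy element comes from the error text, the surrounding layout is standard .exe.config boilerplate:

        <configuration>
          <startup>
            <supportedRuntime version="v4.0" />
          </startup>
          <runtime>
            <!-- re-enable legacy CAS policy so asminfo can run on the 4.0 runtime -->
            <NetFx40_LegacySecurityPolicy enabled="true" />
          </runtime>
        </configuration>

    Alternatively, keeping NAnt itself on the 2.0/3.5 runtime (no v4.0 supportedRuntime entry) usually sidesteps the CAS change entirely, since the project still targets .NET 3.5.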

    Read the article

  • I'm starting a new project in Perl, how should I begin?

    - by Brad Gilbert
    The question is about how to start a new Perl project. How should I create the skeleton of the project? What should the directory layout look like? How do I start testing? What build system should I use? Should I even use a build system? I have been writing Perl programs for a while now. I only started to run tests on my recent programs. I know Perl the language fairly well; now it is time to learn the way to build full-blown Perl projects. I already add these to the beginning of every Perl file:

        use strict;
        use warnings;
        # and occasionally
        use autodie;

    I have also used Moose.
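
    For what it's worth, a minimal CPAN-style layout (the kind module-starter from Module::Starter generates) tends to look like the sketch below; the name My::Project and the Moose prerequisite are placeholders, not a recommendation:

        my-project/
            Makefile.PL
            MANIFEST
            lib/My/Project.pm      # package My::Project; defines $VERSION
            t/00-load.t            # use Test::More; require_ok('My::Project');

        # Makefile.PL - a minimal ExtUtils::MakeMaker sketch
        use strict;
        use warnings;
        use ExtUtils::MakeMaker;

        WriteMakefile(
            NAME         => 'My::Project',
            VERSION_FROM => 'lib/My/Project.pm',   # picks up $VERSION from the module
            PREREQ_PM    => { 'Moose' => 0 },      # declared runtime dependencies
            test         => { TESTS => 't/*.t' },  # everything under t/ runs via make test
        );

    With that in place, perl Makefile.PL && make && make test (or simply prove -l t/) exercises the test suite; Module::Build and Dist::Zilla are the usual alternatives to ExtUtils::MakeMaker.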

    Read the article

  • Exclude filetypes in a Textmate project

    - by cwd
    I know in TextMate I can go to Preferences - Advanced - Folder References and play with the regex pattern to remove certain types of files and folders by default. I heard that if you have an existing project, however, and change these, the project is not affected. From a similar but different angle, I am interested to know if I can exclude certain types of files, say anything named "index.html", from an existing project while not changing the global scope. Thanks!

    Read the article

  • How do you go from an abstract project description to actual code?

    - by Jason
    Maybe it's because I've been coding around two semesters now, but the major stumbling block that I'm having at this point is converting the professor's project description and requirements to actual code. Since I'm currently in Algorithms 101, I basically do a bottom-up process, starting with a blank whiteboard and drawing out the object and method interactions, then translating that into classes and code. But now the prof has tossed interfaces and abstract classes into the mix. Intellectually, I can recognize how they work, but I am stubbing my toes figuring out how to use these new tools with the current project (simulating a web server). In my professor's own words, mapping the abstract description to Java code is the real trick. So what steps are best used to go from English (or whatever your language is) to computer code? How do you decide where and when to create an interface, or use an abstract class?
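
    One hypothetical way to map a "simulate a web server" description onto these tools (all names below are invented for illustration, not taken from the assignment): put the behaviour every handler must promise into an interface, and the code they all share into an abstract class, leaving only the varying part abstract.

        // Illustrative sketch only.
        interface RequestHandler {
            String handle(String request);          // every handler must be able to answer a request
        }

        abstract class AbstractLoggingHandler implements RequestHandler {
            public final String handleAndLog(String request) {
                String response = handle(request);  // defer the actual work to the subclass
                System.out.println("handled: " + request);
                return response;
            }
        }

        class StaticFileHandler extends AbstractLoggingHandler {
            @Override
            public String handle(String request) {
                return "contents of " + request;    // stand-in for reading a file from disk
            }
        }

    The rough rule of thumb: an interface captures a role named in the description ("something that handles requests"), while an abstract class captures plumbing shared by several such implementations.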

    Read the article

  • Windows Server 2008 R2 Virtual Network Setup

    - by jpearl01
    Hi all, Some background: I'm very much new to networking in general, and virtualization in particular. I'm trying to set up a series of VMs as we are transitioning to a thin client setup. I have been supplied a limited number of static ip addresses. The server is located in an offsite building which houses the network we use to connect to the internet, share folders etc. The setup I've been trying to go for is this: The host OS (Windows Server 2008 R2) is bound to one nic using one of the static ips (say, Nic1 and ip 10.255.6.61). I've set up another external virtual network attached to another physical nic , and a virtual private network attached to no nic. There is one VM running the same os (as the host). This VM is connected to both the external virtual network (and uses another static ip say Nic2 and ip 10.255.6.62) and also to the virtual private network (I gave it a static random ip 192.168.88.1 subnet mask 255.255.255.0). This virtual private network is connected to all the other VMs. I'd like to share the internet connection with all the other VMs on the private virtual network, and so I installed the RRAS role on the server connected to Nic2, and selected the option to share the internet over the vpn. I've run through the RRAS wizard a few times, trying different configurations, but none of them seem to be letting the other vms connect to the 'net. The vms seem to connect to the virtual private network fine, they are assigned an ip address and everything, but no internet, and no rest of the network either. The other problem is in general I connect to the vms with RDP. Will that be possible with a setup like this? i.e. will the vms show up as computers on the network? If not, what are my other options? Thanks! ~josh

    Read the article

  • DHCP Relay setup in ubuntu server

    - by jerichorivera
    I have a network appliance (QNO) that works as a traffic load balancer and DHCP server. I would like to add a Linux server in between the network appliance and the client computers. The Linux server will be used to monitor bandwidth usage. My problem is that I still want DHCP to be served by the network appliance so that load balancing will still work efficiently. We are afraid that if we set up the Linux server as the DHCP server, the network appliance will not be able to load balance the traffic if it only sees the Linux server as a single client connecting to it. I've been searching all over for a tutorial on how to set up DHCP relay but have not found any. How do I set up DHCP relay on my Linux server, given there are two NICs attached to it - one connects the Linux server to the network appliance and the other connects the Linux server to the client computers?

    EDIT:

        Router (DHCP) ---- [eth0] Linux Server (Relay agent) [eth1] ----- PC (network)

    Router IP is 192.168.0.100. eth0 is on DHCP; eth1 is static 192.168.2.11 (if I need to change this I can). I tried dhcrelay -i eth1 192.168.0.100, but the PC was not getting any DHCP lease from the DHCP router. I might be missing something here.
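
    On Ubuntu the ISC relay agent (package isc-dhcp-relay) is the usual tool for this. A sketch matching the diagram above - with the caveats that the relay normally has to listen on both the client-facing and the server-facing interface, the box needs IP forwarding enabled, and the QNO router must have a scope (and a return route) for the 192.168.2.0/24 client subnet:

        # /etc/default/isc-dhcp-relay  (sketch; adjust interface names to your box)
        SERVERS="192.168.0.100"        # the QNO appliance acting as DHCP server
        INTERFACES="eth0 eth1"         # listen on both the server-facing and client-facing NICs
        OPTIONS=""

        # equivalent one-off invocation for testing
        dhcrelay -i eth0 -i eth1 192.168.0.100

        # the relay box must also forward traffic between the two NICs
        sysctl -w net.ipv4.ip_forward=1

    If the appliance cannot be given a scope for a second subnet, the alternative is to bridge eth0 and eth1 instead of routing between them, in which case no relay is needed at all and the clients stay on the appliance's own subnet.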

    Read the article

  • Typical Hadoop setup for remote job submission

    - by Artii
    So I am still a bit new to Hadoop and am currently in the process of setting up a small test cluster on Amazon AWS. My question relates to some tips on structuring the cluster so that it is possible to submit jobs from remote machines. Currently I have 5 machines. 4 are basically the Hadoop cluster with the NameNodes, YARN etc. One machine is used as a manager machine (Cloudera Manager). I am going to describe my thinking on the setup, and if anyone can chime in on the points I am not clear about, that would be great. I was thinking about the best setup for a small cluster, so I decided to expose only one manager machine and probably use that to submit all the jobs. The other machines will see each other etc., but not be accessible from the outside world. I have a conceptual idea of how to do this, but I am not sure how to properly go about it; if anyone could point me in the right direction that would be great. Another big point is that I want to be able to submit jobs to the cluster through the exposed machine from a client machine (which might be Windows). I am not so clear on this part either. Do I need to have Hadoop installed on the client machine in order to use the normal hadoop commands, and to write/submit jobs, say from Eclipse or something similar? So to sum up, my questions are:
    1. Is this an OK setup for a small test cluster?
    2. How can I use one exposed machine to submit/route jobs to the cluster, without having any of the Hadoop nodes on it?
    3. How do I set up a client machine to submit jobs to a remote cluster, and is there an example of how to do it on Windows? Also, are there any reasons not to use Windows as a client machine in this setup?
    Thanks - I would greatly appreciate any advice or help on this.
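
    On the third question: the usual pattern is to install just the Hadoop client binaries and configuration on the submitting machine and point them at the exposed gateway. A rough sketch of the client-side settings (Hadoop 2.x/YARN property names; the host name is a placeholder, and this assumes the exposed machine either hosts or forwards to the NameNode and ResourceManager):

        <!-- core-site.xml on the client (sketch) -->
        <property>
          <name>fs.defaultFS</name>
          <value>hdfs://gateway.example.com:8020</value>
        </property>

        <!-- yarn-site.xml on the client (sketch) -->
        <property>
          <name>yarn.resourcemanager.address</name>
          <value>gateway.example.com:8032</value>
        </property>

        # then, from the client shell (the same jar can be built in Eclipse)
        hadoop jar my-job.jar com.example.MyDriver /input /output

    Windows clients generally work, but historically needed either Cygwin or the winutils binaries for the Hadoop command-line tools, which is one practical argument for doing the actual submission from a Linux gateway instead.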

    Read the article

  • Source folders for a maven project in eclipse

    - by 4NDR01D3
    Hello all, I have a project that uses Maven, and I want to put it in my working environment with Eclipse (Galileo). The project is in an SVN server, and I can check out the project and everything looks OK. I can even run the unit tests and everything is working there. However, now that everything is there I wanted to work on the code, and - oh surprise - there are no packages in my project. I mean, all the source code is in the src folder, and browsing through it I can see all my files, but if I open the files from there, they are opened as text files with no coloring and, worse, no help at all about compilation errors. I don't know what I am doing wrong now, because I had the same project on another machine and it was working well. So here is what I did; please let me know if you notice that I did something wrong, missed any steps, or anything that can help me. In the SVN Repository view (using Subclipse 1.6.10):
    1. I added my SVN repository.
    2. Browsed to the folder where I have the pom file.
    3. Right-click > Check out as a Maven project... (using m2eclipse 0.10.020100209).
    4. Used the default options and Finish.
    The projects were created with no problem. I say projects because this Maven project has modules, and each module became a project in Eclipse. Back in the Java perspective, right-click on the project, Run as Maven test (using JWebUnitTest, because I am testing a servlet): BUILD SUCCESS!! But as I said, there are no packages, so I can't really develop in this environment. Any help? Thanks!
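
    A hedged guess: when m2eclipse leaves src/ as a plain folder like this, forcing it to rebuild the Eclipse metadata from the pom usually brings the source folders and packages back - either Right-click the project > Maven > Update Project Configuration, or (only if you are not relying on m2eclipse managing the .classpath for you) the Maven Eclipse plugin from the command line:

        # run inside the module directory; regenerates .project and .classpath from the pom
        mvn eclipse:clean eclipse:eclipse

    It is also worth checking that the modules were imported as separate Eclipse projects (Import > Existing Maven Projects) rather than only the parent, since the parent pom itself has no source folders of its own.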

    Read the article

  • How is the Trac Project List page customised?

    - by Completenutter2
    We've been using Trac for a while now for our developers only. However we are now opening it up for our (internal) clients. We have a project listing page (based on the default one that comes with Trac). What we'd like to do, is display more information about the project than what is currently available. I have searched google and here, to see if I can find how to get more information. There seems to be a variable called $project which has .name, .description and .href as attributes. Is there somewhere, a list of the attributes available? Or perhaps a different solution altogether that will allow us to display more information on the project list page. Such as the number of open tickets etc.
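
    I don't have the complete attribute list either, but assuming a Trac 0.11+ setup (Genshi templates), the project listing comes from a template you can override, and a sketch that uses only the attributes already mentioned in the question would look something like this (the variable name projects is an assumption):

        <dl xmlns:py="http://genshi.edgewall.org/">
          <py:for each="project in projects">
            <dt><a href="$project.href">$project.name</a></dt>
            <dd>$project.description</dd>
          </py:for>
        </dl>

    Anything beyond name, description and href (open ticket counts, for example) would likely have to be computed separately and passed into the template, since the stock project index only exposes basic environment information.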

    Read the article

  • how to merge changes from original project -- GitHub in Windows

    - by user62046
    I created an account at https://github.com/, forked someone's project so I have my own repository, installed the GitHub client for Windows, and cloned my repository to my local drive. I will work on my local drive. But during the development of the project, I would like to merge in the changes from the official, original project. I didn't find how to do this. Before, I used the TortoiseSVN client for Windows, and there is an option "SVN Update" which can update the project to the latest revision. But I am new to GitHub and its client, and don't know how to do it.
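
    As far as I know, the GitHub for Windows client does not expose this directly, but from its Git Shell (or any Git prompt) the standard fork workflow is to register the original repository as a second remote and merge from it. The URL and branch name below are placeholders:

        # one-time: point a remote named "upstream" at the original project
        git remote add upstream https://github.com/original-owner/project.git

        # each time you want to pick up their latest changes
        git fetch upstream
        git merge upstream/master      # or: git rebase upstream/master

        # publish the merged result to your fork on GitHub
        git push origin master

    This is roughly the Git equivalent of "SVN Update", except that the merge happens locally and you then push the result back to your own fork.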

    Read the article

  • Can you pass parameters for OnAction in MS Project VBA?

    - by Anne Schuessler
    The way I can define a method to be executed with OnAction in VBA with Microsoft Project is as follows (and works correctly): .OnAction = "Macro ""DoSomething""" ... where DoSomething is the method to execute. I would like to pass a parameter to that method but can't find a way to pass it with this syntax. Does anybody have an idea how to do this? I'm getting the feeling that this is an impossible task, but maybe there's some VBA secret I'm not aware of. Please note that VBA in MS Project seems to have its quirks and is slightly different than VBA for Excel or Access. This seems to be the case for the OnAction property which needs the extra Macro keyword to work correctly. If I'm wrong here please enlighten me.

    Read the article

  • Visual Studio 2010 Database Project does not understand Schema Names anymore?

    - by Xenan
    I just tried to upgrade a Visual Studio 2008 database project to VS2010 and actually it is quite a mess. Hundreds of warnings, all unresolved references. It seems to boil down to Visual Studio no longer understanding schema names (aka ownership). For example, the standard dbo schema: [$(MyDataBase)].dbo.MyTable is fine, but: [$(MyDataBase)].myschema.MyTable gives an unresolved reference. It did work in VS2008. Also the abbreviation for dbo, the double dot: [$(MyDataBase)]..MyTable doesn't work anymore. In the project property windows I restored the references to the correct servers (which were lost after the conversion), but that didn't help. This seems pretty basic, but I don't have a clue how to solve it. Any help is appreciated.
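
    One thing worth checking - a guess, since the VS2008-to-VS2010 converter sometimes drops them - is whether the upgraded project still contains a schema object for myschema; without it, every reference to myschema.MyTable resolves against nothing and shows up as a warning. A minimal definition (the file name is arbitrary) would be:

        -- Schemas\myschema.sql (sketch; "myschema" is the schema from the question)
        CREATE SCHEMA [myschema] AUTHORIZATION [dbo];

    The double-dot form [$(MyDataBase)]..MyTable, on the other hand, may simply need the dbo schema spelled out, as the VS2010 parser appears to be stricter about it.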

    Read the article

  • Follow through - How to setup equivalent USVIDEO.ORG DNS-Proxy on Linux

    - by DNSDC
    I'm quite keen to set up a similar service (but free), and it seems you know how to do this.

    "you need to run your own private dns with artificial records for example pandora.com you also need a real dns to fall back on. now that all requests for these sites are going to your US located box you can open up port 80 on squid and listen for the traffic. your cache_peer settings should allow you to map each domain to their real ip. The trafic now flows initially from your US located box to the service but then the server responds it responds directly to the host. no magic here. I won't share the fine details as it probably best serves all to not over exploit this."

    Did you mean we need to:
    1. Set up a forward-only DNS on a US-based server/IP?
    2. Set up cache_peer and cache_peer_domain in Squid - I got this.
    3. Are any iptables rules (prerouting, postrouting) needed to accomplish this?
    Appreciate your expert advice. Cheers, Don
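
    If I read the quoted description correctly, the two halves are (a) a zone on your private DNS that answers, say, pandora.com with the US box's address, and (b) Squid on that box acting as an accelerator that hands the request on to the real site. A rough Squid 3.x sketch; the IP address is a placeholder and none of this has been tested against that particular service:

        # squid.conf fragment (sketch)
        http_port 80 accel vhost                      # answer like an origin web server
        cache_peer pandora.com parent 80 0 no-query originserver name=pandora_origin
        cache_peer_domain pandora_origin pandora.com  # only this domain goes to that peer
        acl pandora_site dstdomain pandora.com
        http_access allow pandora_site

        # on the private DNS, an A record along the lines of:
        #   pandora.com.   IN   A   203.0.113.10      ; public IP of the US box (placeholder)

    Since Squid simply proxies the HTTP connection, no special iptables prerouting/postrouting rules should be needed beyond allowing port 80 in; the US box just needs working (real) DNS of its own so it can resolve the origin servers directly.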

    Read the article

  • Tips for using Subversion and XCode in a team project

    - by FelipeUY
    Hi to all. I've been working on an Xcode (iPhone) project with three different people. We have the project in a Subversion repository, but we still don't completely understand some aspects of the Subversion + Xcode methodology: 1) Each time someone commits a single file, it may or may not appear in the project of the other developers - even when the same person who creates the new files adds them to the repository and then commits them. Why does that happen? Any suggestions? 2) No one involved in the project can do a "Commit entire project" without causing a considerable headache to the rest of the developers... any idea how this should be done? The working methodology we are trying to implement is that only one developer (generally the leader of the project) can commit the entire project, but he must inform the rest of the team, so everybody can be prepared to receive a message asking them to discard their changes and read the new files from the repository. I need suggestions or advice on how to handle a project with multiple developers using Subversion. We have read the Subversion handbook, and many other messages on StackOverflow, but I still can't find any useful advice. Thanks for any tip!
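
    On point 1, a likely culprit (an assumption based on how Xcode projects are stored, not on your specific setup) is that the new source files get committed but the project file that references them does not: the .xcodeproj bundle's project.pbxproj has to travel with the new files, otherwise they exist in the repository but not in anyone else's project tree. File and project names below are hypothetical:

        # after adding NewClass.h/.m to the project inside Xcode
        svn add Classes/NewClass.h Classes/NewClass.m
        svn commit -m "Add NewClass" \
            Classes/NewClass.h Classes/NewClass.m \
            MyApp.xcodeproj/project.pbxproj

        # the rest of the team then picks the change up with
        svn update

    Committing explicit paths like this, rather than "Commit entire project", also goes a long way toward the point 2 problem, since nobody accidentally commits their user-specific .mode1v3/.pbxuser state along with the shared project file.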

    Read the article

  • OSX server setup suggestions

    - by Tom
    I am looking into the possibility of setting up an OS X server for my employees, and would like some input on the best approach to meet my needs, and perhaps some suggestions if I am moving in the wrong direction. I am thinking of a Mac Mini OS X server, and am not sure whether my needs will be met and what possibilities are out there. I want these capabilities:
    - Groups/users managed on the server
    - Shared folders and private folders for users/groups
    - Access to activated services
    - Server hosting software for the users (developing tools ++)
    - Similar to Windows Terminal Server
    - Virtual desktop environment (both local and over internet/VPN)
    - Possible to access through Mac and Windows
    The reason I am looking at OS X server is that my employees almost only work in an OS X environment, and I want to offer the ability to log on to the server through some kind of terminal software and have full access to their work OS X environment and software from their Mac or PC, wherever they might be - instead of having multiple setups and spending a lot of time installing and setting up the needed software on every client. This is a small business, where some work on the local network and others from the internet, preferably through VPN. A terminal server solution that is fast and easy to manage would be perfect for our needs. So if anyone has any experience with a similar setup, please let me know what you did, and your experiences with it.

    Read the article

  • VSDB to SSDT Part 2 : SQL Server 2008 Server Project … with SSDT

    - by Etienne Giust
    With Visual Studio 2012 and the use of SSDT technology, there is only one type of database project: SQL Server Database Project. With Visual Studio 2010, we used to have the SQL Server 2008 Server Project, which we used to define server-level objects, mostly logins and linked servers. A convenient wizard allowed for the creation of this type of project. It does not exist anymore. Here is how to create an equivalent of the SQL Server 2008 Server Project with Visual Studio 2012:
    1. Create a new SQL Server Database Project: it will be created empty.
    2. Create a new SQL Schema Compare (SQL menu item > Schema Compare > New Schema Comparison).
    3. As a source, select any database on the SQL server you want to mimic.
    4. Set the target to be your new Database Project.
    5. In the Schema Compare options (cog-like icon), Object Types pane, set the options as below. You might want to tweak those and select only the object types you want.
    6. Then run the comparison, review and select your changes, and apply them to the project.
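
    Once the comparison has been applied, the server-level objects simply live in the project as .sql files. Purely as an illustration of what ends up there (the name and password are placeholders, not from the original post), a login carried over this way looks like:

        -- Security\AppLogin.sql (illustrative only)
        CREATE LOGIN [AppLogin]
            WITH PASSWORD = N'ChangeMe-123',
            CHECK_POLICY = ON;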

    Read the article

  • Windows installation repair option not showing up

    - by Carl
    I'm trying to repair an existing Windows XP installation. Following the instructions from http://www.microsoft.com/windowsxp/using/helpandsupport/learnmore/tips/doug92.mspx this should work: When the Press any key to boot from CD message is displayed on your screen, press a key to start your computer from the Windows XP CD. Press ENTER when you see the message To setup Windows XP now, and then press ENTER displayed on the Welcome to Setup screen. Do not choose the option to press R to use the Recovery Console. In the Windows XP Licensing Agreement, press F8 to agree to the license agreement. Make sure that your current installation of Windows XP is selected in the box, and then press R to repair Windows XP. Follow the instructions on the screen to complete Setup. On step 5 pressing R does nothing and there is nothing on the screen saying it would. When I just select to install I get a message that a previous installation is there and proceeding will destroy it and installed applications, I can optionally select a directory other than c:\windows, and I can optionally format before continuing. I had tried to go from SP2-SP3. It failed, and then I couldn't get to Safe Mode. I put the SP1 disk back in to do a repair, and I don't see that option. (I don't have an SP2 boot/install disk, I just have the non-boot upgrade package.) UPDATE: Upon loading the Recovery Console, I get a message saying The system registry does not appear to have an active ControlSet key. The system registry may be damaged. You can try restarting it with the Last Known Good configuration or you can try repairing the installation of Windows using the setup program's repair and recovery options. I then did bootcfg /scan - "successful" ... Total installs: 1 ... [1] c:\windows - with the c:\windows command prompt below it. bootcfg /list gives [1] Windows XP Pro; OS Load Options /noexecute=optin /fastdetect; OS Location: c:\windows I followed the instructions at http://michaelstevenstech.com/XPrepairinstall.htm - "Warning 2" link copy E:\i386\ntldr C:\ copy E:\i386\ntdetect.com C:\ attrib -h -r -s C:\boot.ini del C:\boot.ini BootCfg /Rebuild I added /fastdetect when it asked for options. I re-ran Windows setup - no change - no repair option. UPDATE: I followed the procedure at http://support.microsoft.com/default.aspx?scid=kb;en-us;307545 I rebooted. I now get a quick message on bootup to select the boot - 1: [blank] ; Windows XP Professional ; Windows Recover Console. The "1: " is new. The rest is the way it was when all was okay. Selecting 1: and the next one gives the same result - I get to a login icon, and then it asks for a password, with the blinking cursor, but I can't type anything. I reboot with the Windows CD. Now I see a repair option for installation "1: " I selected R on that, and it did "Setup is copying files..." and rebooted when it was done. Then it booted, and I got a window saying "Setup will complete in approximately 39 minutes." That's where I am now. I wasn't expecting this last part - I did a repair several months ago and I don't recall that. UPDATE: Booted up. Asked if I wanted to register Windows online. All my icons are there, and the old desktop documents. Good. All the applications I tried from the Start Menu work (tested a few), except Corel Photopaint - I get registry entry not found errors. Windows ran for a while, then froze. The mouse and keyboard don't work. Pressing the power button got Windows to shut down. 
    I probably need to put SP2 on it, and then all the updates for my laptop for XP Pro SP2 (drivers); there's a bunch. The mouse and keyboard quit working again. That wasn't a problem when I first set up this laptop. I've run it 4 times now: two of the mouse/keyboard hangs came from pressing Ctrl-C (to copy text from a Notepad document), and two from selecting Start-Run (I wasn't able to type anything in the box).

    Read the article

  • IntelliTrace As a Learning Tool for MVC2 in a VS2010 Project

    - by Sam Abraham
    IntelliTrace is a new feature in Visual Studio 2010 Ultimate Edition. I see this valuable tool as a “Program Execution Recorder” that captures information about events and calls taking place as soon as we hit the VS2010 play (Start Debugging) button or the F5 key. Many online resources already discuss IntelliTrace and the benefit it brings to both developers and testers alike so I see no value of just repeating this information.  In this brief blog entry, I would like to share with you how I will be using IntelliTrace in my upcoming talk at the Ft Lauderdale ArcSig .Net User Group Meeting on April 20th 2010 (check http://www.fladotnet.com for more information), as a learning tool to demonstrate the internals of the lifecycle of an MVC2 application.  I will also be providing some helpful links that cover IntelliTrace in more detail at the end of my article for reference. IntelliTrace is setup by default to only capture execution events. Microsoft did such a great job on optimizing its recording process that I haven’t even felt the slightest performance hit with IntelliTrace running as I was debugging my solutions and projects.  For my purposes here however, I needed to capture more information beyond execution events, so I turned on the option for capturing calls in addition to events as shown in Figures 1 and 2. Changing capture options will require us to stop our debugging session and start over for the new settings to take place. Figure 1 – Access IntelliTrace options via the Tools->Options menu items Figure 2 – Change IntelliTrace Options to capture call information as well as events Notice the warning with regards to potentially degrading performance when selecting to capture call information in addition to the default events-only setting. I have found this warning to be sure true. My subsequent tests showed slowness in page load times compared to rendering those same exact pages with the “event-only” option selected. Execution recording is auto-started along with the new debugging session of our project. At this point, we can simply interact with the application and continue executing normally until we decide to “playback” the code we have executed so far.  For code replay, first step is to “break” the current execution as show in Figure 3.   Figure 3 – Break to replay recording A few tries later, I found a good process to quickly find and demonstrate the MVC2 page lifecycle. First-off, we start with the event view as shown in Figure 4 until we find an interesting event that needs further studying.  Figure 4 – Going through IntelliTrace’s events and picking as specific entry of interest We now can, for instance, study how the highlighted HTTP GET request is being handled, by clicking on the “Calls View” for that particular event. Notice that IntelliTrace shows us all calls that took place in servicing that GET request. Double clicking on any call takes us to a more granular view of the call stack within that clicked call, up until getting to a specific line of code where we can do a line-by-line replay of the execution from that point onwards using F10 or F11 just like our typical good old VS2008 debugging helped us accomplish. Figure 5 – switching to call view on an event of interest Figure 6 – Double clicking on call shows a more granular view of the call stack. In conclusion, the introduction of IntelliTrace as a new addition to the VS developers’ tool arsenal enhances development and debugging experience and effectively tackles the “no-repro” problem. 
It will also hopefully enhance my audience’s experience listening to me speaking about  an MVC2 page lifecycle which I can now easily visually demonstrate, thereby improving the probability of keeping everybody awake a little longer. IntelliTrace References: http://msdn.microsoft.com/en-us/magazine/ee336126.aspx http://msdn.microsoft.com/en-us/library/dd264944(VS.100).aspx

    Read the article

  • Solving Euler Project Problem Number 1 with Microsoft Axum

    - by Jeff Ferguson
    Note: The code below applies to version 0.3 of Microsoft Axum. If you are not using this version of Axum, then your code may differ from that shown here.

    I have just solved Problem 1 of Project Euler using Microsoft Axum. The problem statement is as follows: If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000.

    My Axum-based solution is as follows:

        namespace EulerProjectProblem1
        {
          // http://projecteuler.net/index.php?section=problems&id=1
          //
          // If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9.
          // The sum of these multiples is 23.
          // Find the sum of all the multiples of 3 or 5 below 1000.

          channel SumOfMultiples
          {
            input int Multiple1;
            input int Multiple2;
            input int UpperBound;
            output int Sum;
          }

          agent SumOfMultiplesAgent : channel SumOfMultiples
          {
            public SumOfMultiplesAgent()
            {
              int Multiple1 = receive(PrimaryChannel::Multiple1);
              int Multiple2 = receive(PrimaryChannel::Multiple2);
              int UpperBound = receive(PrimaryChannel::UpperBound);
              int Sum = 0;

              for(int Index = 1; Index < UpperBound; Index++)
              {
                if((Index % Multiple1 == 0) || (Index % Multiple2 == 0))
                  Sum += Index;
              }

              PrimaryChannel::Sum <-- Sum;
            }
          }

          agent MainAgent : channel Microsoft.Axum.Application
          {
            public MainAgent()
            {
              var SumOfMultiples = SumOfMultiplesAgent.CreateInNewDomain();

              SumOfMultiples::Multiple1 <-- 3;
              SumOfMultiples::Multiple2 <-- 5;
              SumOfMultiples::UpperBound <-- 1000;

              var Sum = receive(SumOfMultiples::Sum);

              System.Console.WriteLine(Sum);
              System.Console.ReadLine();

              PrimaryChannel::ExitCode <-- 0;
            }
          }
        }

    Let's take a look at the various parts of the code. I begin by setting up a channel called SumOfMultiples that accepts three inputs and one output. The first two of the three inputs will represent the two possible multiples, which are three and five in this case. The third input will represent the upper bound of the problem scope, which is 1000 in this case. The lone output of the channel represents the sum of all of the matching multiples:

        channel SumOfMultiples
        {
          input int Multiple1;
          input int Multiple2;
          input int UpperBound;
          output int Sum;
        }

    I then set up an agent that uses the channel. The agent, called SumOfMultiplesAgent, receives the three inputs from the channel, stores the results in local variables, and performs the for loop that iterates from 1 to the received upper bound. The agent keeps track of the sum in a local variable and stores the sum in the output portion of the channel:

        agent SumOfMultiplesAgent : channel SumOfMultiples
        {
          public SumOfMultiplesAgent()
          {
            int Multiple1 = receive(PrimaryChannel::Multiple1);
            int Multiple2 = receive(PrimaryChannel::Multiple2);
            int UpperBound = receive(PrimaryChannel::UpperBound);
            int Sum = 0;

            for(int Index = 1; Index < UpperBound; Index++)
            {
              if((Index % Multiple1 == 0) || (Index % Multiple2 == 0))
                Sum += Index;
            }

            PrimaryChannel::Sum <-- Sum;
          }
        }

    The application's main agent, therefore, simply creates a new SumOfMultiplesAgent in a new domain, prepares the channel with the inputs that we need, and then receives the Sum from the output portion of the channel:

        agent MainAgent : channel Microsoft.Axum.Application
        {
          public MainAgent()
          {
            var SumOfMultiples = SumOfMultiplesAgent.CreateInNewDomain();

            SumOfMultiples::Multiple1 <-- 3;
            SumOfMultiples::Multiple2 <-- 5;
            SumOfMultiples::UpperBound <-- 1000;

            var Sum = receive(SumOfMultiples::Sum);

            System.Console.WriteLine(Sum);
            System.Console.ReadLine();

            PrimaryChannel::ExitCode <-- 0;
          }
        }

    The result of the calculation (which, by the way, is 233,168) is sent to the console using good ol' Console.WriteLine().

    Read the article

  • In the Firing Line: The impact of project and portfolio performance on the CEO

    - by Melissa Centurio Lopes
    What are the primary measurements for rating CEO performance? For corporate boards, business analysts, investors, and the trade press, the metrics they deploy are relatively binary in nature: what is being done to generate earnings, and what is being done to build and sustain high performance? As for the market, interest is primarily aroused when operational and financial performance falls outside planned commitments for the year. When organizations announce better than predicted results, they usually experience an immediate increase in share price. Likewise, poor results have an obviously negative impact on the share price and impact the role and tenure of the incumbent CEO. The danger for the CEO is that the risk of failure is ever present, ranging from manufacturing delays and supply chain issues to labor shortages and scope creep. This risk is enhanced by the involvement of secondary suppliers providing services critical to overall work schedules, and magnified further across a portfolio of programs and projects underway at any one time - and all set within a global context. All can impact planned return on investment and have an inevitable impact on the share price - the primary empirical measure of day-to-day performance. Read the complete complimentary report, In the Firing Line, and explore the direct link between the health of the portfolio and CEO performance. This report will provide an overview of the responsibility the CEO has for implementing and maintaining a culture of accountability, offer examples of some of the higher profile project failings in recent years, and detail the capabilities available to the CEO to mitigate the risks residing in their own portfolios.

    Read the article

  • Is it illegal to rewrite every line of an open source project in a slightly different way, and use it in a closed source project?

    - by Chris Barry
    There is some code which is GPL or LGPL that I am considering using for an iPhone project. If I took that code (JavaScript) and rewrote it in a different language for use on the iPhone would that be a legal issue? In theory the process that has happened is that I have gone through each line of the project, learnt what it is doing, and then reimplemented the ideas in a new language. To me it seems this is like learning how to implement something, but then reimplementing it separately from the original licence. Therefore you have only copied the algorithm, which arguably you could have learnt from somewhere else other than the original project. Does the licence cover the specific implementation or the algorithm as well? EDIT------ Really glad to see this topic create a good conversation. To give a bit more backing to the project, the code involved does some kind of audio analysis. I believe it is non-trivial to learn or implement, although I was prepared to embark on this task (I'm at the level where I can implement an FFT algorithm, and this was going to go beyond that.) It is a fairly low LOC script, so I didn't think it would be too hard to do a straight port. I really like the idea of rereleasing my port as well as using it in the application. I don't see any problem with that, and it would be a great way to give something back to the community. I was going to add a line about not wanting to discuss the moral issues, but I'm quite glad I didn't as it seems to have fired the debate a bit. I still feel a bit odd about using open source code to learn from. Does this mean that anything one learns from an open source project is not allowed to be used in a closed source project? And how long after or different does an implementation have to be to not be considered violation of the licence? Murky! EDIT 2 -------- Follow up question

    Read the article

  • Migrate existing Maven Project into an OSGI Bundle

    - by user1706291
    I am new to the whole OSGi stuff, and my task is to create an OSGi bundle out of an existing Maven project. To get started I decided to pick the smallest part and start with it. Here is the pom.xml:

        <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
          <modelVersion>4.0.0</modelVersion>
          <parent>
            <artifactId>cross</artifactId> <groupId>net.sf.maltcms</groupId> <version>1.2.12-SNAPSHOT</version>
          </parent>
          <artifactId>cross-main</artifactId>
          <packaging>jar</packaging>
          <name>cross-main</name>
          <dependencies>
            <dependency> <groupId>${project.groupId}</groupId> <artifactId>cross-annotations</artifactId> <version>${project.version}</version> </dependency>
            <dependency> <groupId>${project.groupId}</groupId> <artifactId>cross-event</artifactId> <version>${project.version}</version> </dependency>
            <dependency> <groupId>${project.groupId}</groupId> <artifactId>cross-tools</artifactId> <version>${project.version}</version> </dependency>
            <dependency> <groupId>${project.groupId}</groupId> <artifactId>cross-exception</artifactId> <version>${project.version}</version> </dependency>
            <dependency> <groupId>commons-codec</groupId> <artifactId>commons-codec</artifactId> <version>1.4</version> </dependency>
            <dependency>
              <groupId>${project.groupId}</groupId> <artifactId>cross-main-api</artifactId> <version>${project.version}</version>
              <exclusions> <exclusion> <artifactId>commons-logging</artifactId> <groupId>commons-logging</groupId> </exclusion> </exclusions>
            </dependency>
            <dependency> <groupId>org.springframework</groupId> <artifactId>spring-aop</artifactId> <version>3.0.6.RELEASE</version> </dependency>
            <dependency> <groupId>org.springframework</groupId> <artifactId>spring-asm</artifactId> <version>3.0.6.RELEASE</version> </dependency>
            <dependency> <groupId>org.springframework</groupId> <artifactId>spring-beans</artifactId> <version>3.0.6.RELEASE</version> </dependency>
            <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>3.0.6.RELEASE</version> </dependency>
            <dependency>
              <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> <version>3.0.6.RELEASE</version>
              <exclusions> <exclusion> <artifactId>commons-logging</artifactId> <groupId>commons-logging</groupId> </exclusion> </exclusions>
            </dependency>
            <dependency> <groupId>org.springframework</groupId> <artifactId>spring-expression</artifactId> <version>3.0.6.RELEASE</version> </dependency>
            <dependency> <groupId>commons-io</groupId> <artifactId>commons-io</artifactId> <version>2.1</version> </dependency>
            <dependency> <groupId>net.sf.ehcache</groupId> <artifactId>ehcache-core</artifactId> <version>2.4.6</version> </dependency>
            <dependency> <groupId>${project.groupId}</groupId> <artifactId>cross-math</artifactId> <version>${project.version}</version> </dependency>
            <dependency> <groupId>com.db4o</groupId> <artifactId>db4o-all</artifactId> <version>8.0.249</version> </dependency>
            <dependency> <groupId>net.sf.mpaxs</groupId> <artifactId>mpaxs-spi</artifactId> <version>1.6.10</version> </dependency>
            <dependency> <groupId>net.sf.mpaxs</groupId> <artifactId>mpaxs-server</artifactId> <version>1.6.10</version> </dependency>
          </dependencies>

    I did some research and found the Apache Felix bundle plugin for Maven, changed the packaging to <packaging>bundle</packaging>, and added:

        <build>
          <plugins>
            <plugin>
              <groupId>org.apache.felix</groupId>
              <artifactId>maven-bundle-plugin</artifactId>
              <extensions>true</extensions>
              <configuration>
                <instructions>
                  <Bundle-SymbolicName>${pom.artifactId}</Bundle-SymbolicName>
                </instructions>
              </configuration>
            </plugin>
          </plugins>
        </build>

    mvn clean install went fine and I got a jar file containing the manifest, but of course the bundle could not be resolved:

        BundleException: The bundle "cross-main_1.2.12.SNAPSHOT [30]" could not be resolved.
        Reason: Missing Constraint: Import-Package: com.db4o; version="[8.0.0,9.0.0)

    To make a long story short: what are the possibilities for migrating a Maven application into an OSGi bundle? Especially, how do I manage the dependencies?
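
    Regarding the unresolved com.db4o import: the manifest that maven-bundle-plugin generates is only half the story; every Import-Package in it still has to be satisfied by some other bundle installed in the OSGi container. If db4o is not available to you as a bundle, one option is to embed the jar inside your own bundle. A hedged sketch of the extra instructions (the artifact id db4o-all is taken from the pom above; the exact Embed/Import tuning usually needs some iteration):

        <configuration>
          <instructions>
            <Bundle-SymbolicName>${pom.artifactId}</Bundle-SymbolicName>
            <!-- copy the db4o jar onto the bundle classpath instead of importing its packages -->
            <Embed-Dependency>db4o-all;inline=false</Embed-Dependency>
            <Import-Package>!com.db4o.*,*</Import-Package>
          </instructions>
        </configuration>

    The cleaner long-term answer is to install db4o (and the Spring, commons-*, ehcache and mpaxs jars) as bundles of their own - many of them already ship OSGi manifests, or can be wrapped - and let the container resolve the imports normally.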

    Read the article
