
  • Monitoring almost anything with BizTalk 360

    - by Michael Stephenson
When you work in an integration environment, it is common to find yourself integrating with unusual applications or depending on unusual external systems; that is the nature of integration. When you work with BizTalk, one of the common problems is that BizTalk is often the place where problems with the applications you integrate with are highlighted, and those external applications may have poor monitoring solutions of their own.

Fortunately, if you are working with a customer who uses BizTalk 360, it contains a feature called the "Web Endpoint Manager". Typically the Web Endpoint Manager is used to monitor web services that you integrate with: it pings them at appropriate times and makes sure they return the expected HTTP status code. When you have an unusual situation where you want to monitor something that is key to the success of your solution, and you find yourself considering a significant custom solution just to monitor the external dependency, the Web Endpoint Manager could be your friend. The endpoint manager monitors a URL and checks for a certain status code. This means that you can create your own ASPX web page and have BizTalk 360 monitor that page. Behind the web page you can write any code you wish. An example of this architecture is shown in the diagram below.

In the custom web page you would implement some custom code to do whatever it is that you want to monitor. In the code snippet below you can see how the default Page_Load method performs some kind of check and then, depending on the result of the check, returns a particular HTTP status code.

protected void Page_Load(object sender, EventArgs e)
{
    var result = CheckSomething();

    if (result == "Success")
        Response.StatusCode = 202;
    else if (result == "DatabaseError")
        Response.StatusCode = 510;
    else if (result == "SystemError")
        Response.StatusCode = 512;
    else
        Response.StatusCode = 513;
}

In BizTalk 360 you would go into the Monitor and Notify tab and then to BizTalk Environment, which gives you access to the Web Endpoint Manager. You need an alarm set up which configures how the endpoint will be checked. I'm not going to go through the details of creating the alarm as this is already documented in the BizTalk 360 documentation. One point to note is that in my example I set up a threshold alarm, which means that the URL is checked about every minute and, if an error persists for a period of time, the alarm raises the alert notification. In my example I configured the alarm to fire if the error persisted for 3 minutes. The picture below shows accessing the endpoint manager.

In the Web Endpoint Manager you would then configure the endpoint to monitor and the HTTP response code which indicates all is working fine, as the next picture shows. I now have my endpoint monitoring set up, and BizTalk 360 should be checking my custom endpoint to see that it is available. If I want to manually sanity-check that the endpoints I have registered are working fine, clicking the Refresh button will show whether they are all good. If the custom ASP.NET page which checks my dependency hits a problem, the endpoint manager will show that the status code does not match the expected return code: the endpoint is displayed in red and you can see the problem, as the picture below shows. If I use specific HTTP response codes for the different errors the custom ASP.NET page might encounter, I can easily interpret them to know what the problem is.
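For illustration, here is a minimal sketch of what a CheckSomething method behind such a page might look like. This is not from the original post: the connection string name and the idea of using a database connectivity test are assumptions made for the example.

private string CheckSomething()
{
    try
    {
        // Hypothetical dependency check: verify that a database the
        // solution depends on will accept a connection.
        var connectionString = System.Configuration.ConfigurationManager
            .ConnectionStrings["DependencyDb"].ConnectionString;

        using (var connection = new System.Data.SqlClient.SqlConnection(connectionString))
        {
            connection.Open();
        }

        return "Success";
    }
    catch (System.Data.SqlClient.SqlException)
    {
        // The database was unreachable or refused the connection.
        return "DatabaseError";
    }
    catch (Exception)
    {
        // Anything else (missing configuration, network stack down, etc.).
        return "SystemError";
    }
}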
Using the alarms and notifications in BizTalk 360 means that when your endpoint goes into an error state you can easily configure email or SMS notifications from BizTalk 360 to tell you that your endpoint is having problems, and you can use BizTalk 360 to help correlate what the problem is so you can investigate further. Below you can see the email which tells me my endpoint is not working.

When everything returns to normal you will see that the status is fixed, and you will see a situation like the one below where the web endpoints are green and the return code matches what is expected.

Conclusion

As you can see, it is really easy to plug your own custom ASP.NET page into the BizTalk 360 web endpoint monitoring feature. This extension gives you the power to extend the monitoring to almost anything you want, as long as you can write some .NET code to check that the dependency is available and working. It would be interesting to hear of any ideas people have around things they would monitor with this extension. More details on the endpoint monitor can be found at the following link: http://www.biztalk360.com/tour/monitoring_notifications


  • Write your Tests in RSpec with IronRuby

    - by kazimanzurrashid
[Note: this is not a continuation of my previous post; treat it as an experiment out in the wild.]

Let's consider the following class, a fictitious Fund Transfer Service:

public class FundTransferService : IFundTransferService
{
    private readonly ICurrencyConvertionService currencyConvertionService;

    public FundTransferService(ICurrencyConvertionService currencyConvertionService)
    {
        this.currencyConvertionService = currencyConvertionService;
    }

    public void Transfer(Account fromAccount, Account toAccount, decimal amount)
    {
        decimal convertionRate = currencyConvertionService.GetConvertionRate(fromAccount.Currency, toAccount.Currency);
        decimal convertedAmount = convertionRate * amount;

        fromAccount.Withdraw(amount);
        toAccount.Deposit(convertedAmount);
    }
}

public class Account
{
    public Account(string currency, decimal balance)
    {
        Currency = currency;
        Balance = balance;
    }

    public string Currency { get; private set; }
    public decimal Balance { get; private set; }

    public void Deposit(decimal amount)
    {
        Balance += amount;
    }

    public void Withdraw(decimal amount)
    {
        Balance -= amount;
    }
}

We can write the spec with MSpec + Moq like the following:

public class When_fund_is_transferred
{
    const decimal ConvertionRate = 1.029m;
    const decimal TransferAmount = 10.0m;
    const decimal InitialBalance = 100.0m;

    static Account fromAccount;
    static Account toAccount;
    static FundTransferService fundTransferService;

    Establish context = () =>
    {
        fromAccount = new Account("USD", InitialBalance);
        toAccount = new Account("CAD", InitialBalance);

        var currencyConvertionService = new Moq.Mock<ICurrencyConvertionService>();
        currencyConvertionService.Setup(ccv => ccv.GetConvertionRate(Moq.It.IsAny<string>(), Moq.It.IsAny<string>())).Returns(ConvertionRate);

        fundTransferService = new FundTransferService(currencyConvertionService.Object);
    };

    Because of = () =>
    {
        fundTransferService.Transfer(fromAccount, toAccount, TransferAmount);
    };

    It should_decrease_from_account_balance = () =>
    {
        fromAccount.Balance.ShouldBeLessThan(InitialBalance);
    };

    It should_increase_to_account_balance = () =>
    {
        toAccount.Balance.ShouldBeGreaterThan(InitialBalance);
    };
}

If you run the spec it will give you a nice little output like the following:

When fund is transferred
» should decrease from account balance
» should increase to account balance

2 passed, 0 failed, 0 skipped, took 1.14 seconds (MSpec).

Now, let's see how we can write the exact same spec in RSpec.
require File.dirname(__FILE__) + "/../FundTransfer/bin/Debug/FundTransfer"
require "spec"
require "caricature"

describe "When fund is transferred" do
  Convertion_Rate = 1.029
  Transfer_Amount = 10.0
  Initial_Balance = 100.0

  before(:all) do
    @from_account = FundTransfer::Account.new("USD", Initial_Balance)
    @to_account = FundTransfer::Account.new("CAD", Initial_Balance)

    currency_convertion_service = Caricature::Isolation.for(FundTransfer::ICurrencyConvertionService)
    currency_convertion_service.when_receiving(:get_convertion_rate).with(:any, :any).return(Convertion_Rate)

    fund_transfer_service = FundTransfer::FundTransferService.new(currency_convertion_service)
    fund_transfer_service.transfer(@from_account, @to_account, Transfer_Amount)
  end

  it "should decrease from account balance" do
    @from_account.balance.should be < Initial_Balance
  end

  it "should increase to account balance" do
    @to_account.balance.should be > Initial_Balance
  end
end

I think the above code is self-explanatory. Treat the require statements (lines 1-4) as the equivalent of adding references in a Visual Studio project; we are adding all the required libraries with these statements. Next comes describe, which is an RSpec keyword. The before block does exactly the same job as NUnit's Setup or MSTest's TestInitialize attribute, but above we are using before(:all), which acts like MSTest's ClassInitialize: it is executed only once, before all the test methods. In before(:all) we first instantiate the from and to accounts; this is the same as creating them with the full name (including namespace), as in fromAccount = new FundTransfer.Account(.., ..). Next, we create a mock object of ICurrencyConvertionService. Note that we are not using Moq to create the mock as in the MSpec version. This is a somewhat interesting issue with IronRuby, or maybe the DLR: it seems that it is not possible in IronRuby to use the lambda expressions that most mocking tools use in the arrange phase, like:

currencyConvertionService.Setup(ccv => ccv.GetConvertionRate(Moq.It.IsAny<string>(), Moq.It.IsAny<string>())).Returns(ConvertionRate);

But the good news is that there is already an excellent mocking tool called Caricature, written completely in IronRuby, which we can use to mock .NET classes. Maybe all the mocking tool providers should give some thought to adding support for the DLR, so that we can use the tools we are already familiar with. The rest of the code is simple enough that I will skip the explanation.

Now, the last thing: how do we run it with RSpec? Let's first install the required gems. Open your command prompt and type the following:

igem sources -a http://gems.github.com

This will add GitHub as a gem source. Next type:

igem install uuidtools caricature rspec

Finally, we have to create a batch file so that we can execute it from Notepad++. Create a batch file in the IronRuby bin directory, as in my previous post, and put the following in it:

@echo off
cls
call spec %1 --format specdoc
pause

Next, add a run menu entry and shortcut in Notepad++ as in my previous post. Now when we run it, it will show the following output:

When fund is transferred
- should decrease from account balance
- should increase to account balance

Finished in 0.332042 seconds

2 examples, 0 failures
Press any key to continue . . .

You will find the complete code for this post at the bottom. That's it for today. Download: RSpecIntegration.zip


  • GlassFish Clustering with DCOM on Windows

    - by ByronNevins
DCOM - Distributed COM, a Microsoft protocol for communicating with Windows machines.

Why use DCOM?

In GlassFish 3.1, SSH is the standard way to run commands on remote nodes for clustering. It is very difficult for users to get SSH configured properly on Windows: SSH does not come with Windows, so we have to depend on third party tools, and the user is then forced to install and configure these tools -- which can be tricky. DCOM, on the other hand, is available on all supported Windows platforms; it is built into Windows. The idea is to use DCOM to communicate with remote Windows nodes. This has the huge advantage that the user has to do minimal, if any, configuration on the Windows nodes.

Implementation Highlights

Two open source libraries have been added to GlassFish:

Jcifs – a SAMBA implementation in Java
J-interop – a Java implementation for making DCOM calls to remote Windows computers.

Note that any supported platform can use DCOM to work with Windows nodes -- not just Windows. E.g. you can have a Linux DAS work with Windows remote instances. All existing SSH commands now have a corresponding DCOM command – except for setup-ssh, which isn't needed for DCOM. validate-dcom is an all-new command.

New DCOM Commands

create-node-dcom
delete-node-dcom
install-node-dcom
list-nodes-dcom
ping-node-dcom
uninstall-node-dcom
update-node-dcom
validate-dcom
setup-local-dcom (only available via Update Center for GlassFish 3.1.2)

These commands are in place in the trunk (4.0) and in the branch (3.1.2).

Windows Configuration Challenges

There are an infinite number of possible configurations of Windows if you look at it as a combination of main release, service pack, special drivers, software, configuration and so on. Later versions of Windows err on the side of tightening security by default. This means that the Windows host may need to have configuration changes made, and these changes mostly need to be made by the user. setup-local-dcom will assist you in making the required changes to the Windows Registry. See the reference blogs for details.

The validate-dcom Command

validate-dcom is a crucial command. It should be run before any other commands; if it does not run successfully then there is no point in running the others. The validate-dcom command must be used from a DAS machine to test a different Windows machine. If validate-dcom runs successfully you can be confident that all the DCOM commands will work. Conversely, the opposite is also true: if validate-dcom fails, then no DCOM commands will work.

What validate-dcom does

Verifies that the remote host is not the local machine.
Resolves the remote host name.
Checks that the remote DCOM ports are being listened on (135, 139).
Checks that the remote host's File Sharing is enabled (port 445).
Copies a file (a script) to the remote host to verify that SAMBA is working and authorization is correct.
Runs the script that it copied on-the-fly to the remote host.

Tips and Tricks

The bread-and-butter commands that use DCOM are existing commands like create-instance, start-instance etc. All of the commands that have dcom in their name are for dealing with the actual nodes. The way the software works is to call asadmin.bat on the remote machine and run a command. This means that you can track these commands easily on the remote machine with the usual tools, e.g. using AS_LOGFILE, looking at log files, etc. It's easy to attach a debugger to the remote asadmin process, "just in time", if necessary.
How to debug the remote commands: edit the asadmin.bat file in the glassfish/bin folder (use glassfish/lib/nadmin.bat in GlassFish 4.0+) and add these options to the java call:

-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1234

Now if you run, say, start-instance on the DAS, you can attach your debugger, at your leisure, to the remote machine's port 1234. It will be running start-local-instance and patiently waiting for you to attach.
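To make the workflow concrete, here is a hedged sketch of how the pieces described above might fit together from a DAS machine. The host name, node name, instance name and option spellings are placeholders for illustration; check the asadmin help for each command for the exact options in your release:

# Validate that DCOM communication with the Windows host works at all.
asadmin validate-dcom windows-host.example.com

# Create a DCOM node for that host, then an instance on it, then start it.
asadmin create-node-dcom --nodehost windows-host.example.com winnode1
asadmin create-instance --node winnode1 instance1
asadmin start-instance instance1

If validate-dcom fails, fix the Windows configuration first (see setup-local-dcom and the reference blogs) before going any further.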


  • using isight camera in macbookpro(8,2) on ubuntu 12.04 virtualbox VM

    - by Kurt Spindler
I'm having a lot of trouble using the built-in iSight camera on my macbookpro8,2 (early 2011) from an Ubuntu 12.04 virtual machine run inside VirtualBox. The following is the log I get when I try to run guvcview:

ubuntu@ubuntu:~$ guvcview
guvcview 1.5.3
ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.surround71
ALSA lib setup.c:565:(add_elem) Cannot obtain info for CTL elem (MIXER,'IEC958 Playback Default',0,0,0): No such file or directory
ALSA lib setup.c:565:(add_elem) Cannot obtain info for CTL elem (MIXER,'IEC958 Playback Default',0,0,0): No such file or directory
ALSA lib setup.c:565:(add_elem) Cannot obtain info for CTL elem (MIXER,'IEC958 Playback Default',0,0,0): No such file or directory
ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.hdmi
ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.hdmi
ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.modem
ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.modem
ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.phoneline
ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.phoneline
ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
ALSA lib pcm_dmix.c:957:(snd_pcm_dmix_open) The dmix plugin supports only playback stream
Cannot connect to server socket err = No such file or directory
Cannot connect to server socket
jack server is not running or cannot be started
video device: /dev/video0
Init.
FaceTime HD Camera (Built-in) (location: usb-0000:00:0b.0-1)
{ pixelformat = 'YUYV', description = 'YUV 4:2:2 (YUYV)' }
{ discrete: width = 160, height = 120 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 176, height = 144 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 320, height = 240 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 352, height = 288 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 640, height = 480 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 1280, height = 720 } Time interval between frame: 1/10,
{ pixelformat = 'MJPG', description = 'MJPEG' }
{ discrete: width = 960, height = 540 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 1024, height = 576 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 1280, height = 720 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ pixelformat = 'RGB3', description = 'RGB3' }
{ discrete: width = 160, height = 120 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 176, height = 144 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 320, height = 240 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 352, height = 288 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 640, height = 480 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 1280, height = 720 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 960, height = 540 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 1024, height = 576 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ pixelformat = 'BGR3', description = 'BGR3' }
{ discrete: width = 160, height = 120 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 176, height = 144 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 320, height = 240 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 352, height = 288 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 640, height = 480 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 1280, height = 720 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 960, height = 540 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 1024, height = 576 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ pixelformat = 'YU12', description = 'YU12' }
{ discrete: width = 160, height = 120 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 176, height = 144 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 320, height = 240 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 352, height = 288 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 640, height = 480 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 1280, height = 720 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 960, height = 540 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 1024, height = 576 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ pixelformat = 'YV12', description = 'YV12' }
{ discrete: width = 160, height = 120 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 176, height = 144 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 320, height = 240 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 352, height = 288 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 640, height = 480 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 1280, height = 720 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 960, height = 540 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
{ discrete: width = 1024, height = 576 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15,
vid:05ac pid:8509 driver:uvcvideo
checking format: 1196444237
VIDIOC_G_COMP:: Invalid argument
compression control not supported
fps is set to 1/25
drawing controls
no codec detected for H264
no codec detected for MP3 - (lavc)
Checking video mode 960x540@32bpp : OK
Could not grab image (select timeout): Resource temporarily unavailable
Could not grab image (select timeout): Resource temporarily unavailable
Could not grab image (select timeout): Resource temporarily unavailable
Could not grab image (select timeout): Resource temporarily unavailable
Could not grab image (select timeout): Resource temporarily unavailable
Could not grab image (select timeout): Resource temporarily unavailable
Could not grab image (select timeout): Resource temporarily unavailable
Could not grab image (select timeout): Resource temporarily unavailable
Could not grab image (select timeout): Resource temporarily unavailable
write /home/ubuntu/.guvcviewrc OK
free controls
cleaned allocations - 100%
Closing portaudio ...OK
Closing GTK... OK
ubuntu@ubuntu:~$

Any help would be greatly appreciated. The only clue I have is that I initially had problems and tried the old method of fixing iSights (which involves installing isight-firmware-tools) before realizing that I just hadn't turned on the VM setting that allows the VM to access the webcam. :) Anyway, I wonder if installing that messed something up. However, I think this is a red herring, because I have shut down and turned back on the Mac, restarted the VM, tried a different VM (on which I never installed isight-firmware-tools), and created an entirely new Ubuntu VM. All instances have had this problem. Similarly, other viewers, such as cheese, avplay and avconv, have all had various kinds of errors.


  • The Linux powered LAN Gaming House

    - by sachinghalot
LAN parties offer the enjoyment of head-to-head gaming in a real-life social environment. In general they are in decline, thanks to the convenience of Internet gaming, but Kenton Varda is a man who takes his LAN gaming very seriously. His LAN gaming house is a fascinating project, and best of all, Linux plays a part in making it all work. Varda has done his own write-ups (short, long), so I'm only going to give an overview here.

The setup is a large house with 12 gaming stations and a single server computer. The client computers themselves are rack-mounted in a server room and linked to the gaming stations on the floor above via extension cables (HDMI for video and audio, USB for mouse and keyboard). Each client computer, built into a 3U rack-mount case, is a well-specced gaming rig in its own right, sporting an Intel Core i5 processor, 4GB of RAM and an Nvidia GeForce 560, along with a 60GB SSD drive.

Originally the client computers ran Ubuntu Linux rather than Windows, and the games executed under WINE, but Varda had to abandon this scheme. As he explains on his site:

"Amazingly, a majority of games worked fine, although many had minor bugs (e.g. flickering mouse cursor, minor rendering artifacts, etc.). Some games, however, did not work, or had bad bugs that made them annoying to play."

Subsequently, the gaming computers have been moved onto a more conventional gaming choice, Windows 7. It's a shame that WINE couldn't be made to work, but I can sympathize, as it's rare to find modern games that work perfectly and at full native speed. Another problem with WINE is that it tends to suffer from regressions, which is hardly surprising when you consider the difficulty of constantly improving the emulation of the Windows API. Varda points out that he preferred working with Linux clients, as they were easier to modify and came with less licensing baggage.

Linux still runs the server, and all of the tools used are open source software. The hardware here is an Intel Xeon E3-1230 with 4GB of RAM. The storage hanging off this machine is a bit more complex than the clients': in addition to a 60GB SSD, it also has 2x1TB drives and a 240GB SSD.

When the clients were running Linux, they booted over PXE using a toolchain that will be familiar to anyone who has set up Linux network booting. DHCP pointed the clients to the server, which then supplied PXELINUX using TFTP. Once booted, file access was accomplished through the network block device (NBD). This is a very easy-to-use system that allows you to serve the contents of a file as a block device over the network: the client computer runs a user-mode device driver, and the device can be mounted within the file system using the mount command.

One snag with offering file access via NBD is that it's difficult to impose any security restrictions on different areas of the file system, as the server only sees a single file. The advantage is performance, as the client operating system simply sees a block device; besides, these security issues aren't relevant in this setup.

Unfortunately, Windows 7 can't use NBD, so Varda had to switch to iSCSI (which works in both server and client mode under Linux). His network cards are not compliant with this standard when doing a netboot, but fortunately gPXE came to the rescue, and he bootstraps it over PXE. gPXE is also available as an ISO image and is worth knowing about if you encounter an awkward machine that can't manage a network boot.
It can also optionally boot from an HTTP server rather than the more traditional TFTP server.

According to Varda, booting all 12 machines over the Gigabit Ethernet network is surprisingly fast, and once booted, the machines don't seem noticeably slower than if they were using local storage. Once loaded, most games attempt to load in as much data as possible, filling the RAM, and the disk and network bandwidth required after that is small. It's worth noting that these are aspects of this project that might differ from some other thin client scenarios.

At the time of writing, it doesn't seem as though the local storage of the client machines is being utilized. Instead, the clients boot into Windows from an image on the server that contains the operating system and the games themselves. It uses the copy-on-write feature of LVM, so that any writes from a client are added to a differencing image allocated to that client (see the sketch after the summary below). As the administrator, Varda can log into the Linux server and authorize changes to the master image for updates etc.

Summary

Overall, Varda estimates the total cost of the project at about $40,000, and of course, he needed a property that offered a large physical space in order to house the computers and the gaming workstations. Obviously, this project has stark differences from most thin client projects: the balance between storage, network usage, GPU power and security would not be typical of an office installation, for example. The only letdown is that WINE proved to be insufficiently compatible to run a wide variety of modern games, but that is, perhaps, asking too much of it, and hats off to Varda for trying to make it work.
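As promised, here is a hedged sketch of the copy-on-write and network block device mechanisms described above. The volume group, volume and host names are hypothetical, not taken from Varda's write-ups:

# On the server: the master image lives in an LVM logical volume, and each
# client gets a writable snapshot. Writes go to the snapshot (the
# "differencing image"); reads of unchanged blocks fall through to the master.
lvcreate --snapshot --name client01 --size 20G /dev/vg0/win7-master

# On a Linux client, an NBD export behaves like a local disk: attach it
# with the user-mode client and mount it as usual.
nbd-client server.lan 10809 /dev/nbd0
mount /dev/nbd0 /mnt/games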


  • What's My Problem? What's Your Problem?

    - by Jacek Ziabicki
Software installers are not made for building demo environments. I can say this much after 12 years (on and off) of supporting my fellow sales consultants with environments for software demonstrations. When we release software, we include installation programs and procedures that are designed for use by our clients – to build a production environment and a limited number of testing, training and development environments.

Different Objectives

Your priorities when building an environment for client use and when building a demo environment are very different. In a production environment, security, stability and performance concerns are paramount. These environments are built on a specific server and rarely, if ever, moved to a different server or different network address. There is typically just one application running on a particular server (physical or virtual). Once built, the environment will be used for months or years at a time. Because of security considerations, the installation program wants to make these environments very specific to the organization using the software and the use case, encoding a fully qualified name of the server, or even the IP address on the network, in the configuration. So you either go through the installation procedure for each environment, or learn how to clone and reconfigure the software as a separate instance to build all your non-production environments.

This may not matter much if the installation is as simple as clicking on the Setup program. But for enterprise applications, you have a number of configuration settings that you need to get just right – so whether you are installing from scratch or reconfiguring an existing installation, this requires both time and expertise in the particular piece of software. If you need a setup of several applications that are integrated to talk to one another, it is a whole new level of complexity. Now you need expertise in all of the applications involved (plus the supporting technology products), and in addition to making each application work, you also have to configure the integration endpoints. Each application needs the URLs and credentials to call the integration layer, and the integration must be able to call each application. Then you have to make sure that each application has the right data, so a business process initiated in one application can continue in the next. And you will need to check that each application has the correct version and patch level for the integration to work.

When building demo environments, your #1 concern is agility. If you can get away with a small number of long-running environments, you are lucky. More likely, the requests look like these:

- You may get a request for a dedicated environment for a demonstration that is two weeks away: how quickly can you make this available so we still have the time to build the client-specific data?
- We are running a hands-on workshop next month, and we'll need 15 instances of the application X environment so each student can have a separate server for the exercises.
- We cannot connect to our data center from the client site – the client's security policy won't allow our VPN to go through – so we need a portable environment that we can bring with us.
- Our consultants need to be able to work at the hotel, the airport and on the airplane, so we really want an environment that can run on a laptop.
- The client will need two playpen environments running in the cloud, accessible from their network, for a series of workshops that start two weeks from now.
We have seen all of these scenarios and more. Here you would be much better served by a generic installation that is easy to clone.

Welcome to the Wonder Machine

The reason I started this blog is to share a particular design of demo environment – a special way to install software – that can address the above requirements, even for integrated setups. This design was created by a team at the Oracle Utilities Global Business Unit, and we are using this setup for most of our demo environments. In a bout of modesty we called it the Wonder Machine. Over the next few posts – think of it as a novel in parts – I will tell you about the big idea, how it was implemented and what you can do with it. After we have laid down the groundwork, I would like to share some tips and tricks for users of our Wonder Machine implementation, as well as things I am learning about building portable, cloneable environments. The Wonder Machine is by no means a closed specification; it is under active development!

I am hoping this blog will be of interest to two groups of readers: the users of the Wonder Machine we have built at Oracle Utilities, who want to get the most out of their demo environments and be able to reconfigure them to their needs, and people who need to build environments for demonstration, testing, training or development and would like to make them cloneable and portable to maximize the reuse of their effort. Surely we are not the only ones facing this problem? If you can think of a better way to solve it, or if you can help us improve on our concept, I will appreciate your comments!


  • Neo4j Windows Plugin Installation desktop V2.M06

    - by user2904850
I have downloaded and installed the latest Windows V2 Community M06 build of Neo4j on a Windows 7 64-bit machine (I have tried both the 32-bit and 64-bit installs). Installation proceeds without any problems and the system runs normally, but there is no /plugins directory (in both versions):

\Program Files (x86)\Neo4j Community\
\Program Files (x86)\Neo4j Community\.install4j
\Program Files (x86)\Neo4j Community\bin

I am trying to install the Spatial plugin, so I tried creating the \plugins directory. I extracted the zip file and left the zip file in the directory, but the plugins are not found:

C:\Users\WFN44217>curl localhost:7474/db/data/
{
  "extensions" : { },
  "node" : "http://localhost:7474/db/data/node",
  "reference_node" : "http://localhost:7474/db/data/node/0",
  "node_index" : "http://localhost:7474/db/data/index/node",
  "relationship_index" : "http://localhost:7474/db/data/index/relationship",
  "extensions_info" : "http://localhost:7474/db/data/ext",
  "relationship_types" : "http://localhost:7474/db/data/relationship/types",
  "batch" : "http://localhost:7474/db/data/batch",
  "cypher" : "http://localhost:7474/db/data/cypher",
  "transaction" : "http://localhost:7474/db/data/transaction",
  "neo4j_version" : "2.0.0-M06"
}

I have tried some other plugins, but these are also not found. Any idea what might be missing? Extract from the log files:

2013-10-17 11:19:31.881+0000 INFO [o.n.k.i.DiagnosticsManager]: VM Arguments: [-Dexe4j.semaphoreName=Local\c:_program_files_neo4j_community_bin_neo4j-community.exe, -Dexe4j.isInstall4j=true, -Dexe4j.moduleName=C:\Program Files\Neo4j Community\bin\neo4j-community.exe, -Dexe4j.processCommFile=C:\Users\WFN44217\AppData\Local\Temp\e4j_p6384.tmp, -Dexe4j.tempDir=, -Dexe4j.unextractedPosition=0, -Djava.library.path=C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Microsoft\Web Platform Installer\;C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\;C:\Program Files (x86)\Windows Kits\8.0\Windows Performance Toolkit\;C:\Program Files\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files\Microsoft SQL Server\110\DTS\Binn\;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\;C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn\;C:\apache-maven-3.1.1-bin\apache-maven-3.1.1\bin\;C:\Program Files\Java\jdk1.7.0_40\bin\;C:\Program Files (x86)\Git\cmd;C:\Program Files (x86)\Git\bin;c:\ikvm-7.2.4630.5\bin;C:\Program Files\nodejs\;C:\Users\WFN44217\AppData\Roaming\npm;c:\program files\neo4j community\jre\bin, -Dexe4j.consoleCodepage=cp0, -Dinstall4j.launcherId=24, -Dinstall4j.swt=false]
2013-10-17 11:19:31.881+0000 INFO [o.n.k.i.DiagnosticsManager]: Java classpath:
2013-10-17 11:19:31.883+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.0] file:/C:/Program%20Files/Neo4j%20Community/bin/neo4j-desktop-2.0.0-M06.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\charsets.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [classpath] C:\Program Files\Neo4j Community\.install4j\i4jruntime.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/jaccess.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/zipfs.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\resources.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\jfr.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\jsse.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [classpath] C:\Program Files\Neo4j Community\bin\neo4j-desktop-2.0.0-M06.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/sunec.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\classes
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\rt.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.0] file:/C:/Program%20Files/Neo4j%20Community/bin/
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/sunmscapi.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/dns_sd.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/dnsns.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/sunjce_provider.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/localedata.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/access-bridge-64.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.0] file:/C:/Program%20Files/Neo4j%20Community/.install4j/i4jruntime.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\sunrsasign.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\jce.jar
2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: Library path:
2013-10-17 11:19:31.885+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Windows\System32
2013-10-17 11:19:31.885+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Windows
2013-10-17 11:19:31.885+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Windows\System32\wbem
2013-10-17 11:19:31.885+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Windows\System32\WindowsPowerShell\v1.0
2013-10-17 11:19:31.886+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Microsoft\Web Platform Installer
2013-10-17 11:19:31.886+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0
2013-10-17 11:19:31.886+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Windows Kits\8.0\Windows Performance Toolkit
2013-10-17 11:19:31.886+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Microsoft SQL Server\110\Tools\Binn
2013-10-17 11:19:31.887+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Microsoft SQL Server\110\DTS\Binn
2013-10-17 11:19:31.887+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn
2013-10-17 11:19:31.887+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio
2013-10-17 11:19:31.888+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn
2013-10-17 11:19:31.888+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\apache-maven-3.1.1-bin\apache-maven-3.1.1\bin
2013-10-17 11:19:31.888+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Java\jdk1.7.0_40\bin
2013-10-17 11:19:31.888+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Git\cmd
2013-10-17 11:19:31.889+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Git\bin
2013-10-17 11:19:31.889+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\ikvm-7.2.4630.5\bin
2013-10-17 11:19:31.889+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\nodejs
2013-10-17 11:19:31.889+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Users\WFN44217\AppData\Roaming\npm
2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Neo4j Community\jre\bin
2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: System.properties:
2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: exe4j.moduleName = C:\Program Files\Neo4j Community\bin\neo4j-community.exe
2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: exe4j.processCommFile = C:\Users\WFN44217\AppData\Local\Temp\e4j_p6384.tmp
2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: exe4j.semaphoreName = Local\c:_program_files_neo4j_community_bin_neo4j-community.exe
2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: sun.boot.library.path = c:\program files\neo4j community\jre\bin
2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: exe4j.consoleCodepage = cp0
2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: path.separator = ;
2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: file.encoding.pkg = sun.io
2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: user.country = GB
2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: user.script =
2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: sun.os.patch.level = Service Pack 1
2013-10-17 11:19:31.891+0000 INFO [o.n.k.i.DiagnosticsManager]: install4j.exeDir = C:\Program Files\Neo4j Community\bin\
2013-10-17 11:19:31.891+0000 INFO [o.n.k.i.DiagnosticsManager]: user.dir = C:\Program Files\Neo4j Community\bin


  • CCNet 1.6 Conditional Plugin Help Needed!

    - by Mike M
Hi all, I cannot get the conditional plugin that was added to CCNet as of version 1.6 to work - clicky. I am running the latest version of CCNet (1.6.7258.1) and have the following code in my ccnet.config:

<project name="9iCompile">
  <sourcecontrol type="svn">
    <trunkUrl>http://bis-build:81/svn/Oracle/oas_forms/COPEN</trunkUrl>
    <workingDirectory>C:\OAS\COPEN</workingDirectory>
    <username>*</username>
    <password>*</password>
    <executable>C:\Program Files\VisualSVN\bin\svn.exe</executable>
  </sourcecontrol>
  <conditional>
    <conditions>
      <compareCondition>
        <value1>$[ProjectType]</value1>
        <value2>copen</value2>
        <evaluation>equal</evaluation>
        <ignoreCase>true</ignoreCase>
      </compareCondition>
    </conditions>
    <tasks>
      <nant>
        <executable>C:\Program Files\nant-0.85\bin\nant.exe</executable>
        <baseDirectory>C:\OAS</baseDirectory>
        <buildFile>Oracle9i_Automation_v2.build</buildFile>
        <targetList>
          <target>build</target>
        </targetList>
      </nant>
    </tasks>
  </conditional>
  <!-- more conditional statements would be here for different project types if I can get it to work -->
  <parameters>
    <selectParameter name="ProjectType">
      <description>The type of project to operate on.</description>
      <allowedValues>
        <value name="COPEN">copen</value>
        <value name="BCS">bcs</value>
        <value name="FCDD">fcdd</value>
      </allowedValues>
    </selectParameter>
  </parameters>
  <security type="defaultProjectSecurity" defaultRight="Deny">
    <permissions>
      <rolePermission name="Developers" ref="Developers"/>
      <rolePermission name="Accepters" ref="Accepters"/>
      <rolePermission name="Releasers" ref="Releasers"/>
      <rolePermission name="Administrators" ref="Administrators"/>
    </permissions>
  </security>
</project>

The CCNet server crashes with the following output whenever I try to run this config:

[14:ERROR] Exception: Unused node detected:
<conditional>
  <conditions>
    <compareCondition>
      <value1>$[ProjectType]</value1>
      <value2>copen</value2>
      <evaluation>equal</evaluation>
      <ignoreCase>true</ignoreCase>
    </compareCondition>
  </conditions>
  <tasks>
    <nant>
      <executable>C:\Program Files\nant-0.85\bin\nant.exe</executable>
      <baseDirectory>C:\OAS</baseDirectory>
      <buildFile>Oracle9i_Automation_v2.build</buildFile>
      <targetList>
        <target>build</target>
      </targetList>
    </nant>
  </tasks>
</conditional>
----------
ThoughtWorks.CruiseControl.Core.Config.ConfigurationException: Unused node detected:
<conditional>
  <conditions>
    <compareCondition>
      <value1>$[ProjectType]</value1>
      <value2>copen</value2>
      <evaluation>equal</evaluation>
      <ignoreCase>true</ignoreCase>
    </compareCondition>
  </conditions>
  <tasks>
    <nant>
      <executable>C:\Program Files\nant-0.85\bin\nant.exe</executable>
      <baseDirectory>C:\OAS</baseDirectory>
      <buildFile>Oracle9i_Automation_v2.build</buildFile>
      <targetList>
        <target>build</target>
      </targetList>
    </nant>
  </tasks>
</conditional>
at ThoughtWorks.CruiseControl.Core.Config.NetReflectorConfigurationReader.DefaultErrorProcesser.ProcessError(String message)
at ThoughtWorks.CruiseControl.Core.Config.NetReflectorConfigurationReader.<>c__DisplayClass1.<Read>b__0(InvalidNodeEventArgs args)
at Exortech.NetReflector.InvalidNodeEventHandler.Invoke(InvalidNodeEventArgs args)
at Exortech.NetReflector.NetReflectorTypeTable.OnInvalidNode(InvalidNodeEventArgs args)
at Exortech.NetReflector.XmlTypeSerialiser.HandleUnusedNode(NetReflectorTypeTable table, XmlNode orphan)
at Exortech.NetReflector.XmlTypeSerialiser.ReadMembers(XmlNode node, Object instance, NetReflectorTypeTable table)
at Exortech.NetReflector.XmlTypeSerialiser.Read(XmlNode node, NetReflectorTypeTable table)
at Exortech.NetReflector.NetReflectorReader.Read(XmlNode node)
at ThoughtWorks.CruiseControl.Core.Config.NetReflectorConfigurationReader.Read(XmlDocument document, IConfigurationErrorProcesser errorProcesser)
at ThoughtWorks.CruiseControl.Core.Config.DefaultConfigurationFileLoader.Load(FileInfo configFile)
at ThoughtWorks.CruiseControl.Core.Config.FileConfigurationService.Load()
at ThoughtWorks.CruiseControl.Core.Config.FileWatcherConfigurationService.Load()
at ThoughtWorks.CruiseControl.Core.Config.CachingConfigurationService.Load()
at ThoughtWorks.CruiseControl.Core.CruiseServer.Restart()
at ThoughtWorks.CruiseControl.Core.Config.ConfigurationUpdateHandler.Invoke()
at ThoughtWorks.CruiseControl.Core.Config.FileWatcherConfigurationService.HandleConfigurationFileChanged(Object source, FileSystemEventArgs args)
----------

Can someone please help? I have no idea what I'm doing wrong here, or whether this is a bug :( I also posted on the ccnet-user group several days ago but have not received any response :(


  • 7u10: JavaFX packaging tools update

    - by igor
The last weeks were very busy here at Oracle. JavaOne 2012 is next week. Come see us there! Meanwhile, I'd like to quickly update you on recent developments in the area of packaging tools. This is an area of ongoing development for the team, and we are continuing to refine and improve both the tools and the process. Thanks to everyone who shared experiences and suggestions with us. We are listening and have fixed many of the reported issues. Please keep them coming as comments on the blog or (even better) file issues directly in the JIRA.

In this post I'll focus on several new packaging features added in JDK 7 update 10:

Self-Contained Applications: Select Java Runtime to bundle
Self-Contained Applications: Create Package without Java Runtime
Self-Contained Applications: Package non-JavaFX application
Option to disable proxy setup in the JavaFX launcher
Ability to specify codebase for WebStart application
Option to update existing jar file
Self-Contained Applications: Specify application icon
Self-Contained Applications: Pass parameters on the command line

All these features, and a number of other important bug fixes, are available in the developer preview builds of JDK 7 update 10 (build 8 or later). Please give them a try and share your feedback!

Self-Contained Applications: Select Java Runtime to bundle

The packager tools in 7u6 assume the current JDK (based on the java.home property) is the source for the embedded runtime. This is a useful simplification for many scenarios, but there are cases where the ability to specify what to embed explicitly is handy. For example, your IDE may be using a fixed JDK to build the project, and this is not the version you want to bundle into your application. To make this more flexible, we now allow you to specify the location of the base JDK explicitly. It is optional, and if you do not specify it then the current JDK will be used (i.e. this change is fully backward compatible).

A new 'basedir' attribute was added to the <fx:platform> tag. Its value is the location of the JDK to be used. It is OK to point to either the JRE inside the JDK or the JDK top-level folder. However, it must be a JDK and not a JRE, as we need other JDK tools for proper packaging, and it must be a recent version of the JDK that is bundled with JavaFX (i.e. Java 7 update 6 or later). Here are examples (<fx:platform> is part of the <fx:deploy> task):

<fx:platform basedir="${java.home}"/>
<fx:platform basedir="c:\tools\jdk7"/>

Hint: this feature enables you to use the packaging tools from JDK 7 update 10 (and benefit from the bug fixes and other features described below) to create an application package with a bundled FCS version of JRE 7 update 6.

Self-Contained Applications: Create Package without Java Runtime

This may sound a bit puzzling at first glance. A package without an embedded Java Runtime is not really self-contained, and obviously will not help with:

Deployment on fresh systems. The JRE needs to be installed separately (and this step will require admin permissions).
Possible compatibility issues due to updates of the system runtime.

However, these packages are much, much smaller in size. If download size matters and you are confident that users have the recommended system JRE installed, then this may be a good option to consider if you want to improve the user experience for install and launch.

Technically, this is implemented as an extension of the previous feature: pass an empty string as the value of the 'basedir' attribute and this will be treated as a request not to bundle the Java runtime, e.g.
<fx:platform basedir=""/> Self-Contained Applications: Package non-JavaFX application One of popular questions people ask about self-contained applications - can i package my Java application as self-contained application? Absolutely. This is true even for tools shipped with JDK 7 update 6. Simply follow steps for creating package for Swing application with integrated JavaFX content and they will work even if your application does not use JavaFX. What's wrong with it? Well, there are few caveats: bundle size is larger because JavaFX is bundled whilst it is not really needed main application jar needs to be packaged to comply to JavaFX packaging requirements(and this may be not trivial to achieve in your existing build scripts) javafx application launcher may not work well with startup logic of your application (for example launcher will initialize networking stack and this may void custom networking settings in your application code) In JDK 7 update 6 <fx:deploy> was updated to accept arbitrary executable jar as an input. Self-contained application package will be created preserving input jar as-is, i.e. no JavaFX launcher will be embedded. This does not help with first point above but resolves other two. More formally following assertions must be true for packaging to succeed: application can be launched as "java -jar YourApp.jar" from the command line  mainClass attribute of <fx:application> refers to application main class <fx:resources> lists all resources needed for the application To give you an example lets assume we need to create a bundle for application consisting of 3 jars:     dist/javamain.jar     dist/lib/somelib.jar    dist/morelibs/anotherlib.jar where javamain.jar has manifest with      Main-Class: app.Main     Class-Path: lib/somelib.jar morelibs/anotherlib.jar Here is sample ant code to package it: <target name="package-bundle"> <taskdef resource="com/sun/javafx/tools/ant/antlib.xml" uri="javafx:com.sun.javafx.tools.ant" classpath="${javafx.tools.ant.jar}"/> <fx:deploy nativeBundles="all" width="100" height="100" outdir="native-packages/" outfile="MyJavaApp"> <info title="Sample project" vendor="Me" description="Test built from Java executable jar"/> <fx:application id="myapp" version="1.0" mainClass="app.Main" name="MyJavaApp"/> <fx:resources> <fx:fileset dir="dist"> <include name="javamain.jar"/> <include name="lib/somelib.jar"/> <include name="morelibs/anotherlib.jar"/> </fx:fileset> </fx:resources> </fx:deploy> </target> Option to disable proxy setup in the JavaFX launcher Since JavaFX 2.2 (part of JDK 7u6) properly packaged JavaFX applications  have proxy settings initialized according to Java Runtime configuration settings. This is handy for most of the application accessing network with one exception. If your application explicitly sets networking properties (e.g. socksProxyHost) then they must be set before networking stack is initialized. Proxy detection will initialize networking stack and therefore your custom settings will be ignored. One way to disable proxy setup by the embedded JavaFX launcher is to pass "-Djavafx.autoproxy.disable=true" on the command line. This is good for troubleshooting (proxy detection may cause significant startup time increases if network is misconfigured) but not really user friendly. Now proxy setup will be disabled if manifest of main application jar has "JavaFX-Feature-Proxy" entry with value "None". 
Here is a simple example of adding this entry using the <fx:jar> task:

<fx:jar destfile="dist/sampleapp.jar">
  <fx:application refid="myapp"/>
  <fx:resources refid="myresources"/>
  <fileset dir="build/classes"/>
  <manifest>
    <attribute name="JavaFX-Feature-Proxy" value="None"/>
  </manifest>
</fx:jar>

Ability to specify codebase for WebStart application

JavaFX applications do not need to specify a codebase (i.e. the absolute location where the application code will be deployed) for most real-world deployment scenarios. This is convenient, as the application does not need to be modified when it is moved from the development to the deployment environment. However, some developers want to ensure that copies of their application's JNLP file will redirect to the master location. This is where a codebase is needed. To avoid the need to edit the JNLP file manually, the <fx:deploy> task now accepts an optional codebase attribute. If the attribute is not specified, the packager will generate the same no-codebase files as before. If a codebase value is explicitly specified, then the generated JNLP files (including the JNLP content embedded into the web page) will use it. Here is an example:

<fx:deploy width="600" height="400"
           outdir="Samples" codebase="http://localhost/codebaseTest"
           outfile="TestApp">
  ....
</fx:deploy>

Option to update existing jar file

JavaFX packaging procedures are optimized for new applications that can use ant or the command-line javafxpackager utility. This may lead to some redundant steps when you add them to your existing build process. One typical situation is that you might already have a build procedure that produces an executable jar file with a custom manifest. To properly package it as a JavaFX executable jar, you would need to unpack it and then use javafxpackager or <fx:jar> to create the jar again (making sure you pass along all the important details from your custom manifest).

We have added an option to pass a jar file as input to javafxpackager and <fx:jar>. This simplifies the integration of the JavaFX packaging tools into an existing build process as a postprocessing step. By the way, we are looking for ways to simplify this further. Please share your suggestions!

On the technical side this works as follows. Both <fx:jar> and javafxpackager will attempt to update an existing jar file if it is the only input file. The update process will add the JavaFX launcher classes and update the jar manifest with the JavaFX attributes; your custom attributes will be preserved. The update can be performed in place, or the result can be saved to a different file. The Main-Class and Class-Path elements (if present) of the manifest of the input jar file will be used for the JavaFX application unless they are explicitly overridden in the packaging command you use, e.g. the mainClass attribute of <fx:application> (or -appclass in the javafxpackager case) overrides an existing Main-Class in the jar manifest. Note that the class specified in the Main-Class attribute can either extend JavaFX Application or provide a static main() method.
Here are examples of updating a jar file using javafxpackager.

Create a new JavaFX executable jar as a copy of a given jar file:

javafxpackager -createjar -srcdir dist -srcfiles fish_proto.jar -outdir dist -outfile fish.jar

Update an existing jar file to be a JavaFX executable jar, using test.Fish as the main application class:

javafxpackager -createjar -srcdir dist -appclass test.Fish -srcfiles fish.jar -outdir dist -outfile fish.jar

And here is an example of using <fx:jar> to create a new JavaFX executable jar from the existing fish_proto.jar:

<fx:jar destfile="dist/fish.jar">
    <fileset dir="dist">
        <include name="fish_proto.jar"/>
    </fileset>
</fx:jar>

Self-Contained Applications: Specify an application icon

The only way to specify an application icon for a self-contained application using the tools in JDK 7 update 6 is to use drop-in resources. Now this bug is resolved, and you can also specify the icon using the <fx:icon> tag. Here is an example:

<fx:deploy ...>
    <fx:info>
        <fx:icon href="default.png"/>
    </fx:info>
    ...
</fx:deploy>

A few things to keep in mind:

- only the default kind of icon is applicable to self-contained applications (as of now)
- the icon should follow platform-specific rules for sizes and image format (e.g. .ico on Windows and .icns on Mac)

Self-Contained Applications: Pass parameters on the command line

JavaFX applications support two types of application parameters: named and unnamed (see the API for Application.Parameters). Static named parameters can be added to the application package using <fx:param>, and unnamed parameters can be added using <fx:argument>. They are applicable to all execution modes, including standalone applications. It is also possible to pass parameters to a JavaFX application from a web page that hosts it, using <fx:htmlParam>. Prior to JavaFX 2.2, this was only supported for embedded applications. Starting from JavaFX 2.2, <fx:htmlParam> is applicable to Web Start applications too. See the JavaFX deployment guide for more details on this.

However, there was no way to pass dynamic parameters to a self-contained application. This has been improved, and now the native launchers will delegate parameters from the command line to the application code. I.e. to pass a parameter to the application, you simply need to run it as "myapp.exe somevalue" and then use getParameters().getUnnamed().get(0) to get "somevalue", as sketched below.
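To see what that looks like end to end, here is a minimal sketch of an application reading the first unnamed parameter; the class name and window title are placeholders for illustration only:

import java.util.List;

import javafx.application.Application;
import javafx.stage.Stage;

public class MyApp extends Application {

    @Override
    public void start(Stage stage) {
        // With a native launcher, running "myapp.exe somevalue" delivers
        // "somevalue" as the first unnamed parameter.
        List<String> unnamed = getParameters().getUnnamed();
        String first = unnamed.isEmpty() ? "(no parameter)" : unnamed.get(0);

        stage.setTitle("First unnamed parameter: " + first);
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}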

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation is complete, I play the animation and delete the Azure deployment. The standing joke with the audience was that it was a "$2 demo", as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it's a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective.

The "$2 demo" was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a "$2 demo" to a "$30 demo". The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

Ray Tracing

Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90's with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

Pin-Board Toys

Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I've always liked the pin-board desktop toys that became popular in the 80's, and when I was working as a 3D animator back in the 90's I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

PolyRay

Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
background Midnight_Blue

static define matte surface { ambient 0.1 diffuse 0.7 }
define matte_white texture { matte { color white } }
define matte_black texture { matte { color dark_slate_gray } }
define position_cylindrical 3
define lookup_sawtooth 1
define light_wood <0.6, 0.24, 0.1>
define median_wood <0.3, 0.12, 0.03>
define dark_wood <0.05, 0.01, 0.005>

define wooden texture { noise surface { ambient 0.2 diffuse 0.7 specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1 lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } }
define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }}
define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
define steely_blue texture { shiny { color black } }
define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }

viewpoint {
    from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60
    resolution 640, 480 aspect 1.6 image_format 0
}

light <-10, 30, 20>
light <-10, 30, -20>

object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }
object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }

After setting up the background and defining colors and textures, the viewpoint is specified. The "camera" is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin-board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct; this will be fixed later.

Modeling the Pin Board

The frame of the pin-board is made up of three boxes and six cylinders; the front box is modeled using a clear, slightly reflective solid with the same refractive index as glass. The other shapes are modeled as metal.

object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }

In order to create the matrix of pins that make up the pin board, I used a basic console application with a few nested loops to create two intersecting matrices of pins, which models the layout used in the pin boards (a sketch of the approach is shown below). The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.
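The console application itself is not included in the post; the following is a minimal sketch of how such a generator might emit the two offset pin matrices as PolyRay object lines. The spacing, counts, and radii are invented for illustration and are not the values used for the real board:

using System;
using System.IO;

class PinBoardGenerator
{
    static void Main()
    {
        // Illustrative values only: 2 grids x 76 x 76 = 11,552 pins,
        // close to the 11,481 pins mentioned in the article.
        const double spacing = 0.14;
        const int rows = 76, cols = 76;

        using (StreamWriter writer = new StreamWriter("pins.pi"))
        {
            for (int grid = 0; grid < 2; grid++)
            {
                // The second matrix is offset by half the pin spacing so the
                // two grids intersect, as on the real desktop toy.
                double offset = grid * spacing / 2;

                for (int row = 0; row < rows; row++)
                {
                    for (int col = 0; col < cols; col++)
                    {
                        double x = -5.2 + col * spacing + offset;
                        double y = -1.2 + row * spacing + offset;
                        double z = 0.0; // later driven by a bitmap pixel to "press" the pin out

                        // A pin is a sphere for the head plus a cylinder for the shaft.
                        writer.WriteLine("object {{ sphere <{0:F3}, {1:F3}, {2:F3}>, 0.07 chrome }}",
                            x, y, z);
                        writer.WriteLine("object {{ cylinder <{0:F3}, {1:F3}, {2:F3}>, <{0:F3}, {1:F3}, {3:F3}>, 0.035 chrome }}",
                            x, y, z, z - 0.6);
                    }
                }
            }
        }
    }
}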
For the complete animation, 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated.

The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not correspond to the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

Windows Kinect

The Kinect controllers for the Xbox 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the Xbox 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the Xbox Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

Creating a Depth Field Animation

The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below.

The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below.

Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

Rendering a Test Frame

In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used, the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

Windows Azure Worker Roles

The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

    Number of Servers | Cost
    1                 | $500
    16                | $8,000
    256               | $128,000

As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling.

The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

    Number of Worker Roles | Render Time | Cost
    1                      | 256 hours   | $30.72
    16                     | 16 hours    | $30.72
    256                    | 1 hour      | $30.72

Using worker roles in Windows Azure gives the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

Creating a Render Farm in Windows Azure

The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

On-Premise
- Windows Kinect – used, combined with the Kinect Explorer, to create a stream of depth images.
- Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
- Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
- Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete.

Windows Azure
- Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
The service definition for the worker role, with the local storage configuration highlighted, is shown below.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="CloudRay" >
  <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
    <Imports>
    </Imports>
    <ConfigurationSettings>
      <Setting name="DataConnectionString" />
    </ConfigurationSettings>
    <LocalResources>
      <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
    </LocalResources>
  </WorkerRole>
</ServiceDefinition>

The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90's, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90's.

Each worker role will use the following process to render the animation frames:

1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
4. The JPG file is uploaded from local storage to the images blob container.
5. A message is placed on the images queue to indicate a new image is available for download.
6. The job message is deleted from the job queue.
7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

The code for this is shown below.

public override void Run()
{
    // Set environment variables
    string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
    string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

    LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
    string localStorageRootPath = rayStorage.RootPath;

    JobQueue jobQueue = new JobQueue("renderjobs");
    JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
    CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
    CloudRayBlob imageBlob = new CloudRayBlob("images");
    RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

    Frames = 0;

    while (true)
    {
        // Get the render job from the queue
        CloudQueueMessage jobMsg = jobQueue.Get();

        if (jobMsg != null)
        {
            // Get the file details
            string sceneFile = jobMsg.AsString;
            string tgaFile = sceneFile.Replace(".pi", ".tga");
            string jpgFile = sceneFile.Replace(".pi", ".jpg");

            string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
            string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
            string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

            // Copy the scene file to local storage
            sceneBlob.DownloadFile(sceneFilePath);

            // Run the ray tracer.
            string polyrayArguments =
                string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
            Process polyRayProcess = new Process();
            polyRayProcess.StartInfo.FileName =
                Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
            polyRayProcess.StartInfo.Arguments = polyrayArguments;
            polyRayProcess.Start();
            polyRayProcess.WaitForExit();

            // Convert the image
            string dtaArguments =
                string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
            Process dtaProcess = new Process();
            dtaProcess.StartInfo.FileName =
                Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
            dtaProcess.StartInfo.Arguments = dtaArguments;
            dtaProcess.Start();
            dtaProcess.WaitForExit();

            // Upload the image to blob storage
            imageBlob.UploadFile(jpgFilePath);

            // Add a download job.
            downloadQueue.Add(jpgFile);

            // Delete the render job message
            jobQueue.Delete(jobMsg);

            Frames++;
        }
        else
        {
            Thread.Sleep(1000);
        }

        // Log the worker role activity.
        roleLifecycleDataSource.Alive
            ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
    }
}

Monitoring Worker Role Instance Lifecycle

In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

public class RoleLifecycle : TableServiceEntity
{
    public string ServerName { get; set; }
    public string Status { get; set; }
    public DateTime StartTime { get; set; }
    public DateTime EndTime { get; set; }
    public long SecondsRunning { get; set; }
    public DateTime LastActiveTime { get; set; }
    public int Frames { get; set; }
    public string Comment { get; set; }

    public RoleLifecycle()
    {
    }

    public RoleLifecycle(string roleName)
    {
        PartitionKey = roleName;
        RowKey = Utils.GetAscendingRowKey();
        Status = "Started";
        StartTime = DateTime.UtcNow;
        LastActiveTime = StartTime;
        EndTime = StartTime;
        SecondsRunning = 0;
        Frames = 0;
    }
}

A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine the effectiveness of the use of resources in the render farm.

Rendering the Animation

The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
    osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="16" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

About six minutes after deploying the application, the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.

Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.

With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required.

Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
    osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="256" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Six minutes after the new configuration has been applied, 75 new worker roles have activated and are processing their first frames. Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.

We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.

The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

Completed Animation

I've uploaded the completed animation to YouTube; a low resolution preview is shown below.

Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles

The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

Effective Use of Resources

According to the CloudRay monitor statistics, the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application.

The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

Grid Computing Scenarios

Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

- Windows Azure can provide massive compute power, on demand, in a matter of minutes.
- The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
- Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
- No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

Tips for using Windows Azure for Grid Computing Scenarios

I found the implementation of a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

- Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances. (A minimal sketch of a queue wrapper like the one used by the worker roles follows this list.)
- Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
- Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
- Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle.
- If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
- Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible reboot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
- Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!
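The JobQueue helper class used in the worker role code above is not shown in the post. As a rough idea of what it might look like with the StorageClient library of that era (the class and setting names match the post; the implementation itself is my assumption):

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class JobQueue
{
    private readonly CloudQueue queue;

    public JobQueue(string queueName)
    {
        // DataConnectionString is the setting defined in the service configuration above.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        queue = account.CreateCloudQueueClient().GetQueueReference(queueName);
        queue.CreateIfNotExist();
    }

    // Add a unit of work, e.g. the name of a scene file to render.
    public void Add(string content)
    {
        queue.AddMessage(new CloudQueueMessage(content));
    }

    // Returns null when no work is available, which the worker treats as "sleep and retry".
    public CloudQueueMessage Get()
    {
        return queue.GetMessage();
    }

    // Deleting only after the frame is uploaded means a failed worker's
    // message becomes visible again and another instance can pick it up.
    public void Delete(CloudQueueMessage message)
    {
        queue.DeleteMessage(message);
    }
}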

    Read the article

  • CodePlex Daily Summary for Thursday, June 10, 2010

CodePlex Daily Summary for Thursday, June 10, 2010

New Projects
- cab mgt: j mmmjk kjkj
- CAML Generator: CAMLBuilder makes it easier for generating CAML Query from code. It can be extensively used while Sharepoint Customization thru code. You no longer...
- Cloud Business Services: ISV Application Accelerator for business management of Cloud Applications build on Windows Azure or any hosting platform. Cloud Business Services ...
- Community Server 2.1 to WordPress WXL Exporter: This is a simple program to export all CS 2.1 posts and tags by a particular user to WordPress WXL format.
- DbIdiom for ADO.NET Core: DbIdiom is a set of idioms to use ADO.NET Core (without Dataset) easily.
- dotsoftRAID: This project is a software for using RAID on windows without special hardware ("Software RAID").
- DTSRun Job Runner: DTSJobRun makes it easier for SQL Server Developer to control DTS Job through 3rd party execution manager or process control or monitoring control ...
- Easy Share: Folder sharing is an indispensable part of our professional life. 'EasyShare' is a folder share creation, deletion and editing tool with integrated...
- elmah2: A project inspired by elmah (http://code.google.com/p/elmah/) The primary goals of this project are -> A plugin style architecture for logging ...
- Entropy: Entropy is a component for implementing undo/redo for object models. Entropy implements undo/redo with the memento pattern at the object level so t...
- eXtremecode Generator: eXtremecode generator is a code generator which makes it easier for asp.net developers to generate a well formed asp.net application by giving it j...
- GreenBean Script: GreenBean Script is a .NET port of the game-focussed scripting language, GameMonkey Script (http://www.somedude.net/gamemonkey) The first release ...
- IMAP POP3 SMTP Component for .NET C#, VB.NET and ASP.NET: The Ultimate Mail Component offers a comprehensive interface for sending, receiving e-mail messages from a server and managing your mailbox remotel...
- PRISM LayoutManager / WindowManager: A layout manager for PRISM
- SharePoint 2010 Taxonomy Import Utility: Build SharePoint 2010 taxonomies from XML. The SharePoint 2010 Taxonomy Import Utility allows taxonomy authors to define complete taxonomies in XML...
- SharePoint Geographic Data Visualizer: SharePoint Geographic Data Visualizer includes an Asp.Net server control and Microsoft Sharepoint web part which your users can use to visualize an...
- Silverlight Reporting: Silverlight Reporting is a simple report writer test bed for Silverlight 4+. The intent is to provide the basics of report writing while being flex...
- SSIS Expression Editor & Tester: An expression editing and testing tool for SQL Server Integration Services (SSIS). It also offers a reusable editor control for custom tasks or oth...
- STS Federation Metadata Editor: This is a federation metadata editor for Security Token Services (STS). STSs can be created on any platform (as long as it's based on the oasis sta...
- WebShopDiploma: WebShopDiploma is a sample application.
- WEI Share: WEI Share is an application for sharing your Windows Experience Index (WEI) scores from Windows 7 with others in the community. The data can be exp...

New Releases
- .NET Transactional File Manager: 1.1.25: Initial CodePlex release. Code from Chinh's blog entry, plus bug fixes including one from "Mark".
- Active Directory Utils: Repldiag 2.0.3812.27900: Addressed a bug where lingering objects could not be cleaned due to unstable replication topologies resulting from the reanimation of lingering obj...
- Ajax ASP.Net Forum: developer.insecla.com-forum_v0.1.3: VERSION: 0.1.3 FEATURES Same as 0.1.2 with some bugs fixed: - Now the language/cultures DropDownList selectors works showing the related lang/cult...
- BigfootMVC: Development Environment Setup DNN 5.4.2: This is DNN development environment setup including the dabase. Does not include the TimeMaster / BigfootMVC code. BigfootMVC is in the source cont...
- CAML Generator: Version 1.0: Version 1.0 has been released. It is available for download. Version is stable and you can use it. If you have any concerns then please post issue...
- Community Forums NNTP bridge: Community Forums NNTP Bridge V35: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...
- Community Server 2.1 to WordPress WXL Exporter: CStoWXL_v1.0: First (and probably only) release. Exports posts and tags bases on supplied user name. Does not do comments or attachments. To run, do the followin...
- DbIdiom for ADO.NET Core: DBIdiom for ADO.NET Core 1.0: DBIdiom for ADO.NET Core 1.0
- DocX: DocX v1.0.0.9: Important: A bug was found in Table.SetDirection() by Ehsan Shahshahani. I have fixed this bug and updated version 1.0.0.9 of DocX. I did not want to...
- Easy Share: EasyShare ver 1.0: Version 1.0 of Easy Folder Share tool
- Enterprise Library Extensions: Release 1.2: This release contains Windows Communication Foundation service behavior which makes it possible to resolve services using Unity either by applying ...
- Excel-Dna: Excel-Dna Version 0.26: This version adds initial support for the following: Ribbon support for Excel 2007 and 2010 and hierarchical CommandBars for pre-2007 versions. D...
- Exchange 2010 RBAC Editor (RBAC GUI) - updated on 6/09/2010: RBAC Editor 0.9.4.2: only small GUI fixes; rest of the code is almost same with version 0.9.4.1 Please use email address in About menu of the tool for any feedback and...
- eXtremecode Generator: eXtremecode Generator 10.6: Download eXtremecode Generator. Open Connections.config file from eXDG folder. Define required connections in Connections.config file. (multi...
- FAST for Sharepoint MOSS 2010 Query Tool: Version 1.1: Added Search String options to FQL Added options for Managed Property query
- Giving a Presentation: RC 1.2: This new release includes the following bug fixes and improvements: Bug fix: programs not running when presentation starts are not started when pre...
- HKGolden Express: HKGoldenExpress (Build 201006100300): New features: User can add emotion icons when posting new message or reply to message. Bug fix: (None) Improvements: (None) Other changes: C...
- IMAP POP3 SMTP Component for .NET C#, VB.NET and ASP.NET: Build 519: Contains source code for IMAP WinForms Client, POP3 WinForms Client, and SMTP Send Mail Client. Setup package for the lib is also included.
- Liekhus ADO.NET Entity Data Model XAF Extensions: Version 1.1.1: Compiled the latest bits that took care of some of the open issues and bugs we have found thus far.
- manx: manx data 1.1: manx data 1.1 Updated manx data. Includes language and mirror table data.
- MEDILIG - MEDICAL LIFE GUARD: MEDILIG 20100325: Download latest release from Sourceforge at http://sourceforge.net/projects/medilig
- MiniCalendar Web Part: MiniCalendar WebPart v1.8.1: A small web part to display links to events stored in a list (or document library) in a mini calendar (in month view mode). It shows tooltips for t...
- Mytrip.Mvc: Mytrip.Mvc 1.0.43.0 beta: Mytrip.Mvc 1.0.43.0 beta web Web for install hosting System Requirements: NET 4.0, MSSQL 2008 or MySql (auto create table to database) Mytrip.Mv...
- NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.126: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's New: This version fixes...
- Oxygen Smil Player: OxygenSmilPlayer 1.0.0.1a: Second Alpha Upload
- PowerAuras: PowerAuras V3.0.0K Beta1: New Auras: Item Name Equipment Slot Tracking
- PowerExt: v1.0 (Alpha 2): v1.0 (Alpha 2). PowerExt can display information such as assembly name, assembly version, public key etc in Explorer's File Properties dialog.
- Powershell Scripts for Admins: PowerBizTalk 1.0: BizTalk PowerShell Module allows you to control: Action Component Start Stop Get Enlist Unenlist Remove Applications x...
- Refix - .NET dependency management: Refix v0.1.0.75 ALPHA: Latest updated version now supports a remote repository, an implementation of which is supplied as an ASP.NET MVC website.
- Resonance: TrainNode Service Beta: Train Node Service binary setup package
- SCSM Incident SLA Management: SCSM Incident SLA Management Version 0.2: This is the first release of the SCSM SLA Management solution. It is a 'beta' release and has only been tested by the developers on the project. T...
- secs4net: Release 1.01: Remove System.Threading.dll(Rx included) dependence. SML releated function was move out.
- SharePoint 2010 Managed Metadata WebPart: Taxonomy WebPart 0.0.2: Applied fix to support Managed Metadata fields that allow multiple values.
- SharePoint 2010 Taxonomy Import Utility: TaxonomyBuilder Version 1: Initial Release. This release includes full XML to Term Store import capabilities. See roadmap for more information. Please read release license prio...
- SharePoint Geographic Data Visualizer: Source Code: Source Code
- SOAPI - StackOverflow API Parser/Wrapper Generator: SOAPI Beta 1: Beta 1 release. API parser/generator, JavaScript, C#/Silverlight wrapper libraries. Up to the hour current generated files can be found @ http://s...
- SSIS Expression Editor & Tester: Expression Editor and Tester: Initial release of expression editor tool and editor control. Download and extract the files to get started, no install required.
- STS Federation Metadata Editor: Version 0.1 - Initial release: This is the initial release of the editor. It contains all the basic functionallity but doesn't support multiple contact persons and multiple langu...
- SuperSocket: SuperSocket(0.0.1.53867): This release fixed some bugs and added a new easy sample. The source code of this release include: Source code of SuperSocket A remote process c...
- thinktecture Starter STS (Community Edition): StarterRP v1.1: Cleaned up version with identity delegation sample (in sync with StarterSTS v1.1)
- thinktecture Starter STS (Community Edition): StarterSTS v1.1: New stable version. Includes identity delegation support.
- VCC: Latest build, v2.1.30609.0: Automatic drop of latest build
- Watermarker.NET: 0.1.3812: Stability fix
- Wouter's SharePoint Demo Land: Navigation Service with WCF Proxy: A SharePoint 2010 Service Application that uses WCF service proxies to relay commands to the actual service.
- Xna.Extend: Xna.Extend V1.0: This is the first stable release of the Xna.Extend Library. Source code and Dynamic Link Libraries (DLLs) are available with documentation. This ve...
- Yet Another GPS: YaGPS Beta 1: Beta 1 Release Fix Installer Default Folder Fix Sound Language Folder Problem Fix SIP Keyboard Focus Error Add Arabic Sound Language Add Fr...

Most Popular Projects
- WBFS Manager
- Rawr
- AJAX Control Toolkit
- Microsoft SQL Server Product Samples: Database
- Silverlight Toolkit
- Windows Presentation Foundation (WPF)
- patterns & practices – Enterprise Library
- PHPExcel
- Microsoft SQL Server Community & Samples
- ASP.NET

Most Active Projects
- Community Forums NNTP bridge
- patterns & practices – Enterprise Library
- jQuery Library for SharePoint Web Services
- Rhyduino - Arduino and Managed Code
- BlogEngine.NET
- NB_Store - Free DotNetNuke Ecommerce Catalog Module
- MediaCoder.NET
- Andrew's XNA Helpers
- smark C# Library
- Rawr

    Read the article

  • Configure PERL DBI and DBD in Linux

    - by Balualways
I am new to Perl and I work on a Linux OEL 5.x server. I am trying to configure the Perl DB modules for Oracle connectivity (the DBD and DBI modules). Can anyone help me out with the installation procedure? I had tried CPAN, but it didn't really work out. Any help would be appreciated. I am not quite sure whether I need to initialize any variables other than $LD_LIBRARY_PATH and $ORACLE_HOME. These are my observations:

ISSUE: I am getting the following error while using the DBI module to connect to Oracle:

install_driver(Oracle) failed: Can't locate loadable object for module DBD::Oracle in @INC (@INC contains: /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.8 /usr/lib/perl5/vendor_perl /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/5.8.8 .) at (eval 3) line 3
Compilation failed in require at (eval 3) line 3.
Perhaps a module that DBD::Oracle requires hasn't been fully installed at connectdb.pl line 57

I had installed the DBD for Oracle from /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/DBD/DBD-Oracle-1.50. Could you please take a look at the steps and correct me if I am wrong?

Observations:

$ echo $LD_LIBRARY_PATH
/opt/CA/UnicenterAutoSysJM/autosys/lib:/opt/CA/SharedComponents/Csam/SockAdapter/lib:/opt/CA/SharedComponents/ETPKI/lib:/opt/CA/CAlib

$ echo $ORACLE_HOME
/usr/local/oracle/ORA

This is how I tried to install the DBD module:

1. Download the file DBD 1.50 for Oracle
2. Copy to /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/DBD
3. Untar and run Makefile.PL

Message:

Using DBI 1.52 (for perl 5.008008 on x86_64-linux-thread-multi) installed in /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi/auto/DBI/
Configuring DBD::Oracle for perl 5.008008 on linux (x86_64-linux-thread-multi)
Remember to actually *READ* the README file! Especially if you have any problems.
Installing on a linux, Ver#2.6
Using Oracle in /opt/oracle/product/10.2
DEFINE _SQLPLUS_RELEASE = "1002000400" (CHAR)
Oracle version 10.2.0.4 (10.2)
Found /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk
Found /opt/oracle/product/10.2/rdbms/demo/demo_rdbms64.mk
Found /opt/oracle/product/10.2/rdbms/lib/ins_rdbms.mk
Using /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk
Your LD_LIBRARY_PATH env var is set to '/usr/local/oracle/ORA/lib:/usr/dt/lib:/usr/openwin/lib:/usr/local/oracle/ORA/ows/cartx/wodbc/1.0/util/lib:/usr/local/oracle/ORA/lib:/usr/local/sybase/OCS-12_0/lib:/usr/local/sybase/lib:/home/oracle/jdbc/jdbcoci73/lib:./'
WARNING: Your LD_LIBRARY_PATH env var doesn't include '/opt/oracle/product/10.2/lib' but probably needs to.
Reading /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk
Reading /usr/local/oracle/ORA/rdbms/lib/env_rdbms.mk
Attempting to discover Oracle OCI build rules
sh: make: command not found
by executing: [make -f /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk build ECHODO=echo ECHO=echo GENCLNTSH='echo genclntsh' CC=true OPTIMIZE= CCFLAGS= EXE=DBD_ORA_EXE OBJS=DBD_ORA_OBJ.o]
WARNING: Oracle build rule discovery failed (32512)
Add path to make command into your PATH environment variable.
Oracle oci build prolog: [sh: make: command not found]
Oracle oci build command: []
WARNING: Unable to interpret Oracle build commands from /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk.
(Will continue by using fallback approach.)
Please report this to [email protected]. See README for what to include.
Found header files in /opt/oracle/product/10.2/rdbms/public.
client_version=10.2
DEFINE= -Wall -Wno-comment -DUTF8_SUPPORT -DORA_OCI_VERSION=\"10.2.0.4\" -DORA_OCI_102
Checking for functioning wait.ph
System: perl5.008008 linux ca-build9.us.oracle.com 2.6.20-1.3002.fc6xen #1 smp thu apr 30 18:08:39 pdt 2009 x86_64 x86_64 x86_64 gnulinux
Compiler: gcc -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_REENTRANT -D_GNU_SOURCE -fno-strict-aliasing -pipe -Wdeclaration-after-statement -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm
Linker: not found
Sysliblist: -ldl -lm -lpthread -lnsl -lirc
Oracle makefiles would have used these definitions but we override them:
  CC: cc
  CFLAGS: $(GFLAG) $(OPTIMIZE) $(CDEBUG) $(CCFLAGS) $(PFLAGS)\ $(SHARED_CFLAG) $(USRFLAGS)
    [$(GFLAG) -O3 $(CDEBUG) -m32 $(TRIGRAPHS_CCFLAGS) -fPIC -I/usr/local/oracle/ORA/rdbms/demo -I/usr/local/oracle/ORA/rdbms/public -I/usr/local/oracle/ORA/plsql/public -I/usr/local/oracle/ORA/network/public -DLINUX -D_GNU_SOURCE -D_LARGEFILE64_SOURCE=1 -D_LARGEFILE_SOURCE=1 -DSLTS_ENABLE -DSLMXMX_ENABLE -D_REENTRANT -DNS_THREADS -fno-strict-aliasing $(LPFLAGS) $(USRFLAGS)]
  build: $(CC) $(ORALIBPATH) -o $(EXE) $(OBJS) $(OCISHAREDLIBS)
    [ cc -L$(LIBHOME) -L/usr/local/oracle/ORA/rdbms/lib/ -o $(EXE) $(OBJS) -lclntsh $(EXPDLIBS) $(EXOSLIBS) -ldl -lm -lpthread -lnsl -lirc -ldl -lm $(USRLIBS) -lpthread]
  LDFLAGS: $(LDFLAGS32)
    [-m32 -o $@ -L/usr/local/oracle/ORA/rdbms//lib32/ -L/usr/local/oracle/ORA/lib32/ -L/usr/local/oracle/ORA/lib32/stubs/]
Linking with /usr/local/oracle/ORA/rdbms/lib/defopt.o -lclntsh -ldl -lm -lpthread -lnsl -lirc -ldl -lm -lpthread [from $(DEF_OPT) $(OCISHAREDLIBS)]
Checking if your kit is complete...
Looks good
LD_RUN_PATH=/usr/local/oracle/ORA/lib
Using DBD::Oracle 1.50.
Using DBD::Oracle 1.50.
Using DBI 1.52 (for perl 5.008008 on x86_64-linux-thread-multi) installed in /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi/auto/DBI/
Writing Makefile for DBD::Oracle
Writing MYMETA.yml and MYMETA.json
*** If you have problems... read all the log printed above, and the README and README.help.txt files. (Of course, you have read README by now anyway, haven't you?)
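Update: based on the "make: command not found" and LD_LIBRARY_PATH warnings in the output above, I am guessing the build wants an environment along these lines before running Makefile.PL (the paths are my guesses from the observations above, not verified):

export ORACLE_HOME=/usr/local/oracle/ORA
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
export PATH=/usr/bin:$PATH   # directory containing make

cd DBD-Oracle-1.50
perl Makefile.PL
make
make test
make install

Is this the right set of variables, or am I missing something?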

    Read the article

  • Remote Desktop Services Gateway Issue

    - by AVandelay05
Alright fellow techies, here's the rundown. I have installed Server 2008 R2 Remote Desktop Services on a VM in my network. I installed the following RD role services: RD Session Host, Licensing, Connection Broker, Gateway, and Web Access. When I set things up originally, the gateway server and RD Web worked as they should locally. After getting things running locally (remoteserver.domainname.local) I wanted to test things externally. From the outside, I couldn't get things running (meaning I could connect to RD Web Access externally, but when I tried to run an app I would get the message "can't connect/find computer").

Here's my setup for external access:

- The VM has every RD Services role service installed on it, meaning it acts as gateway, RD Web Access, session host, licensing, the whole bit.
- I made a self-signed certificate on the gateway server (gateway.domainname.net is the cert name).
- Internally, I have a secondary forward-lookup zone called domainname.net with an A record gateway pointing to the local IP of the gateway server.
- On our public DNS (domainname.net) I have an A record gateway. This is to access RD Web externally.
- In IIS I have the following authentication settings:
  - RDWeb: all disabled except for anonymous authentication
  - Rpc: all disabled except for basic and windows
  - RpcWithCert: all disabled except for windows authentication
- I have the necessary web access config in our SonicWall TZ210 (https and rdp, external IP pointing to local IP of the RDS server).
- RAP and CAP have the correct user and computer groups, authentication, and allowed devices.

After all of this, here's what happens accessing externally. I can log in correctly to RD Web Access (I've tried a bogus login; I can't log in with it, so that's working properly). I see the apps for use. I click on an app, click connect, the credential window opens, I put in the correct user creds, it tries to connect to the gateway server, but then the cred window comes back into view. I tried to reach a limit of failed logins, but never reached one, haha. So from the same external client machine I try to connect to the gateway through a Remote Desktop connection. I put in the correct gateway settings in the RD window, try to connect, and get the same results as I did in RD Web Access.

I checked the event logs on the RD Services machine and saw the following event IDs around the time I tried to log in externally:

- ID 6037, with the message "The program svchost.exe, with the assigned process ID 2168, could not authenticate locally by using the target name host/gateway.domainname.net. The target name used is not valid. A target name should refer to one of the local computer names, for example, the DNS host name. Try a different target name."
- ID 10 RADWebAccess: "RD Web Access was unable to access gateway.domainname.net, which is the server that is specified as running the RemoteApp and Desktop Connection Management service. Ensure that the computer account of the RD Web Access server is a member of the TS Web Access Computers security group on gateway.domainname.net"
- ID 4625: "An account failed to log on. Subject: Security ID: NULL SID Account Name: - Account Domain: - Logon ID: 0x0 Logon Type: 3 Account For Which Logon Failed: Security ID: NULL SID Account Name: Administrator Account Domain: gateway.domainname.net Failure Information: Failure Reason: Unknown user name or bad password. Status: 0xc000006d Sub Status: 0xc000006a Process Information: Caller Process ID: 0x0 Caller Process Name: - Network Information: Workstation Name: USER-LAPTOP Source Network Address: External IP Source Port: 63125 Detailed Authentication Information: Logon Process: NtLmSsp Authentication Package: NTLM Transited Services: - Package Name (NTLM only): - Key Length: 0 This event is generated when a logon request fails. It is generated on the computer where access was attempted. The Subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe. The Logon Type field indicates the kind of logon that was requested. The most common types are 2 (interactive) and 3 (network). The Process Information fields indicate which account and process on the system requested the logon. The Network Information fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases. The authentication information fields provide detailed information about this specific logon request. - Transited services indicate which intermediate services have participated in this logon request. - Package name indicates which sub-protocol was used among the NTLM protocols."

I don't think the VM has a null SID. The SID of the VM and its physical host have different SIDs. I can access the blank page for rpc externally using the external gateway name. It seems like authentication is the problem. Also, is it a problem that the external name of the gateway server doesn't match the local name? The external name (which the cert is based on) is gateway.domainname.net and the internal name is remoteserver.domainname.local. That's the only thing I can think of that would be the problem, but the external name has to be different from the local one, right? Internally, I ping gateway.domainname.net and it gives me the correct local IP of the server. Now, there isn't an actual computer name in AD for it, but I don't know how I would achieve that? I hope I've been clear... any help would be appreciated. I think I'm close to achieving this. :)
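Update: one idea I haven't tried yet, based on event ID 6037 complaining that the target name host/gateway.domainname.net is not valid: registering the external name as a service principal name on the gateway's computer account. Something like the following (untested on my setup, and I'm assuming remoteserver is the gateway's AD computer account name):

setspn -A host/gateway.domainname.net remoteserver

Would that be the right way to give AD a "computer name" for the external gateway name?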

    Read the article

  • Software: Launching League of Legends spectator mode from Command Line (Mac)

    - by Alex Popov
    Background: tl;dr at the end League of Legends has a spectator mode, in which you can watch someone else's game (essentially a replay) with a 3 minute delay. Popular LoL website OP.GG has figured out a clever way of hosting these spectator games on their own servers, thereby making them replayable, as opposed to only being available while the game is on (as Riot does it). If you request a replay from OP.GG, it sends a batch file which looks for where the League is situated and then the magic happens: @start "" "League of Legends.exe" "8394" "LoLLauncher.exe" "" "spectator fspectate.op.gg:4081 tjJbtRLQ/HMV7HuAxWV0XsXoRB4OmFBr 1391881421 NA1" This works fine on Windows. I'm trying to get it to work on Mac (which has an official client). First I tried running the same command by hand, (split for convenience) /Applications/ ... /LeagueOfLegends.app/ ... /LeagueofLegends 8393 LoLLauncher \ /Applications/ ... /LolClient spectator fspectate.op.gg:4081 tjJbtRLQ/HMV7HuAxWV0XsXoRB4OmFBr 1391881421 NA1 Running this, however, just starts the LoLLauncher, which closes all the active League processes. The exactly same thing happens if I just call /Applications/ ... /LeagueOfLegends.app/ ... /LeagueofLegends Next I tried seeing what actually happens when Spectator mode is initiated so I ran $ ps -axf | grep -i lol which showed UID PID PPID C STIME TTY TIME CMD 503 3085 1 0 Wed02pm ?? 0:00.00 (LolClient) 503 24607 1 0 9:19am ?? 0:00.98 /Applications/League of Legends.app/Contents/LOL/RADS/system/UserKernel.app/Contents/MacOS/UserKernel updateandrun lol_launcher LoLLauncher.app 503 24610 24607 0 9:19am ?? 1:08.76 /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_launcher/releases/0.0.0.122/deploy/LoLLauncher.app/Contents/MacOS/LoLLauncher 503 24611 24610 0 9:19am ?? 1:23.02 /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_air_client/releases/0.0.0.127/deploy/bin/LolClient -runtime .\ -nodebug META-INF\AIR\application.xml .\ -- 8393 503 24927 24610 0 9:44am ?? 0:03.37 /Applications/League of Legends.app/Contents/LoL/RADS/solutions/lol_game_client_sln/releases/0.0.0.117/deploy/LeagueOfLegends.app/Contents/MacOS/LeagueofLegends 8394 LoLLauncher /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_air_client/releases/0.0.0.127/deploy/bin/LolClient spectator 216.133.234.17:8088 Yn1oMX/n3LpXNebibzUa1i3Z+s2HV0ul 1400781241 NA1 Of Interest: there is (LolClient) which I cannot kill by it's PID. UserKernel updateandrun lol_launcher LoLLauncher.app is launched first. LoLLauncher is launched by the UserKernel (as we can see from the PPID) The very long command (PID: 24927) is how Spectator mode is launched, and is also launched by UserKernel. Spectator mode is launched in exactly the same way that the OP.GG .bat wanted to, with the only difference that Spectator mode connects to Riot instead of OP.GG's spectate server. I tried attaching GDB to the LolClient, but I couldn't get anything meaningful from it since it's an Adobe AIR application (and I've never used GDB with code other than mine own). Next I ran dtruss -a -b 100m -f -p $PID on everything I could think of: the LolClient, the LolLauncher and the UserKernel and skimmed the half a million lines produced. I found stuff like the GET request used to get the information of the game to spectate, but I could not see any launch of the equivalent of League of Legends.exe with spectator options. 
Finally, I ran lsof | grep -i lol to see if anything else was opened in the process, but didn't find anything that seemed appropriate. Open were UserKernel, LolLauncher, LolClient, Adobe AIR, LeagueofLegends and then Bugsplat, all of which are expected. None of this seemed especially relevant to figuring out how LeagueofLegends was opened into spectator mode. It obviously can be done, since Spectator mode is accessible from within the client. It seems likely that it can be done from the CLI, since Windows can do it and the clients are supposed to be equivalent. Unless I'm missing something in the difference between how UNIX and Windows handle CLI application launches. My question is whether there are any other things I can try to figure out how to launch Spectator mode myself. tl;dr: Trying to get into spectator mode from the CLI. It's possible on Windows (see first code block) but it just restarts League on Mac. What else can I try to find what call is made, and how to reproduce it? PS: Please let me know how I can improve this question or its formatting, I'd love to use StackOverflow/SuperUser, but as the guys said on the podcast this week (Ep. 59) it's very intimidating. Sorry for posting this on StackOverflow the first time :(
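One more avenue worth trying (a sketch, assuming stock macOS dtrace and root access; note that pr_psargs is truncated to roughly 80 characters, so very long argument lists may still need dtruss): log every successful exec system-wide while clicking Spectate in the client, which should print the exact command line UserKernel issues:

    sudo dtrace -n 'proc:::exec-success { trace(curpsinfo->pr_psargs); }'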

    Read the article

  • MSVC Compiler options with mojo-native in Maven

    - by graham.reeds
    I'm trying to set up a test environment with Maven to build VC++ and I am way behind on this. I have three source files that build a DLL (once I get this fixed it should be a simple matter to add the unit tests): hook.cpp hook.h hook.def This is compiled, on the command line, with the following: C:\Develop\hook\src\main\msvc>cl -o hook.dll Hook.cpp /D HOOK_DLLEXPORT /link /DLL /DEF:"Hook.def" Which produces the expected obj, dll, lib and exp files. So now to get this working in Maven2 with the Mojo-native plugin. With no options Maven w/Mojo gives me this (truncated) output: [INFO] [native:initialize {execution: default-initialize}] [INFO] [native:unzipinc {execution: default-unzipinc}] [INFO] [native:javah {execution: default-javah}] [INFO] [native:compile {execution: default-compile}] [INFO] cmd.exe /X /C "cl -IC:\Develop\hook\src\main\msvc /FoC:\Develop\hook\targ et\objs\Hook.obj -c C:\Develop\hook\src\main\msvc\Hook.cpp" Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 15.00.30729.01 for 80x86 Copyright (C) Microsoft Corporation. All rights reserved. Hook.cpp [INFO] [native:link {execution: default-link}] [INFO] cmd.exe /X /C "link.exe /out:C:\Develop\hook\target\hook.dll target\objs\ Hook.obj" Microsoft (R) Incremental Linker Version 9.00.30729.01 Copyright (C) Microsoft Corporation. All rights reserved. LINK : fatal error LNK1561: entry point must be defined [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Error executing command line. Exit code:1561 mojo-native gives options for manipulating the compiler/linker options but gives no example of usage. No matter what I tweak in these settings I get the error of: [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Failed to configure plugin parameters for: org.codehaus.mojo:native-maven -plugin:1.0-alpha-4 (found static expression: '-o hook.dll Hook.cpp /D HOOK_DLLEXPORT /link /DLL /DEF:"Hook.def"' which may act as a default value). Cause: Cannot assign configuration entry 'compilerStartOptions' to 'interface ja va.util.List' from '-o hook.dll Hook.cpp /D HOOK_DLLEXPORT /link /DLL /DEF:"Hook .def"', which is of type class java.lang.String The relevant part of my pom.xml looks like this: <configuration> <sources> <source> <directory>src/main/msvc</directory> <includes> <include>**/*.cpp</include> </includes> </source> </sources> <compilerProvider>msvc</compilerProvider> <compilerExecutable>cl</compilerExecutable> <!-- cl -o hook.dll Hook.cpp /D HOOK_DLLEXPORT /link /DLL /DEF:"Hook.def" --> <compilerStartOptions>-o hook.dll Hook.cpp /D HOOK_DLLEXPORT /link /DLL /DEF:"Hook.def"</compilerStartOptions> <!-- <compilerMiddleOptions></compilerMiddleOptions> <compilerEndOptions></compilerEndOptions> <linkerStartOptions></linkerStartOptions> <linkerMiddleOptions></linkerMiddleOptions> <linkerEndOptions></linkerEndOptions> --> </configuration> How do I manipulate the compiler to produce a DLL like the command line version does? Or should I give up and just use exec?
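    That Cause line suggests the problem: compilerStartOptions is list-typed, but the pom feeds it a single string. Maven's configurator fills list parameters from repeated child elements named after the singular form, so a sketch of the fix might look like this (element names assumed from the plugin's parameter names, with the flags split between the compiler and linker option lists; untested):

    <compilerStartOptions>
      <compilerStartOption>/D HOOK_DLLEXPORT</compilerStartOption>
    </compilerStartOptions>
    <linkerStartOptions>
      <linkerStartOption>/DLL</linkerStartOption>
      <linkerStartOption>/DEF:"Hook.def"</linkerStartOption>
    </linkerStartOptions>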

    Read the article

  • Why is my spawned process still causing IntelliJ to wait?

    - by itsadok
    I'm trying to start a server as part of an Ant artifact. Here are the relevant lines: <exec dir="." executable="cmd.exe" spawn="true"> <arg line="/c c:\Java\james-2.3.2\bin\debug.bat" /> </exec> If I start it with ant from the command line, a process is spawned and I get a command prompt and everything seems fine. However, if I start it from IntelliJ 6, my IDE, the build stays alive until I kill the server. Here's the line IntelliJ uses to start ant: C:\Java\jdk1.6.0_02\bin\java -Xmx128m -Dant.home=C:\Java\apache-ant-1.7.1 -Dfile.encoding=UTF-8 -classpath "C:\Java\apache-ant-1.7.1\lib\ant-antlr.jar;C:\Java\apache-ant-1.7.1\lib\ant-apache-bcel.jar;C:\Java\apache-ant-1.7.1\lib\ant-apache-bsf.jar;C:\Java\apache-ant-1.7.1\lib\ant-apache-log4j.jar;C:\Java\apache-ant-1.7.1\lib\ant-apache-oro.jar;C:\Java\apache-ant-1.7.1\lib\ant-apache-regexp.jar;C:\Java\apache-ant-1.7.1\lib\ant-apache-resolver.jar;C:\Java\apache-ant-1.7.1\lib\ant-commons-logging.jar;C:\Java\apache-ant-1.7.1\lib\ant-commons-net.jar;C:\Java\apache-ant-1.7.1\lib\ant-jai.jar;C:\Java\apache-ant-1.7.1\lib\ant-javamail.jar;C:\Java\apache-ant-1.7.1\lib\ant-jdepend.jar;C:\Java\apache-ant-1.7.1\lib\ant-jmf.jar;C:\Java\apache-ant-1.7.1\lib\ant-jsch.jar;C:\Java\apache-ant-1.7.1\lib\ant-junit.jar;C:\Java\apache-ant-1.7.1\lib\ant-launcher.jar;C:\Java\apache-ant-1.7.1\lib\ant-netrexx.jar;C:\Java\apache-ant-1.7.1\lib\ant-nodeps.jar;C:\Java\apache-ant-1.7.1\lib\ant-starteam.jar;C:\Java\apache-ant-1.7.1\lib\ant-stylebook.jar;C:\Java\apache-ant-1.7.1\lib\ant-swing.jar;C:\Java\apache-ant-1.7.1\lib\ant-testutil.jar;C:\Java\apache-ant-1.7.1\lib\ant-trax.jar;C:\Java\apache-ant-1.7.1\lib\ant-weblogic.jar;C:\Java\apache-ant-1.7.1\lib\ant.jar;C:\Java\apache-ant-1.7.1\lib\xercesImpl.jar;C:\Java\apache-ant-1.7.1\lib\xml-apis.jar;C:\Java\jdk1.6.0_02\lib\tools.jar;C:\Program Files\JetBrains\IntelliJ IDEA 6.0\lib\idea_rt.jar" com.intellij.rt.ant.execution.AntMain2 -logger com.intellij.rt.ant.execution.IdeaAntLogger2 -inputhandler com.intellij.rt.ant.execution.IdeaInputHandler -buildfile C:\Java\Projects\CcMailer\ccmailer.xml jar I suspect the inputhandler parameter has something to do with the problem, but if I run it myself the problem is not reproduced. Either way, I have only limited control over what IntelliJ does. My question is: how does IntelliJ even know the process is running? The Ant process is long gone. Is there a way to start a subprocess in a more sneaky way, so that IntelliJ won't even know there's anything to wait around for? Here's what I've tried so far: I tried using the start command, like this: <exec dir="." executable="cmd.exe" spawn="true"> <arg line="/c start cmd /c c:\Java\james-2.3.2\bin\debug.bat" /> </exec> I also tried using python, with code like this: import os.path import subprocess subprocess.Popen(["cmd.exe", "/c", "debug.bat"], stdin=open(os.path.devnull), stdout=open(os.path.devnull, "w"), stderr=subprocess.STDOUT) To no avail. The build window always stays up until I kill the server. Any ideas?
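    One hedged guess at the cause, with a sketch: the spawned cmd.exe inherits the Ant JVM's standard stream handles, and the IDE may be waiting for those streams to hit EOF rather than for the Ant process itself. Explicitly redirecting all three standard handles of the server process should leave nothing holding the pipes open (paths taken from the snippets above; untested):

    <exec dir="." executable="cmd.exe" spawn="true">
      <arg line='/c start "" /b cmd /c c:\Java\james-2.3.2\bin\debug.bat &lt;NUL &gt;NUL 2&gt;&amp;1' />
    </exec>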

    Read the article

  • Using the 'ar' tool in Alchemy

    - by paleozogt
    I've found that if you specify a path to Alchemy's 'ar' tool, it won't create the 'l.bc' file necessary to link the library. For example, here is the case when I don't specify a path (it works): asimmons-mac:test asimmons$ echo 'int main() { return 42; }' > testmain.cpp asimmons-mac:test asimmons$ echo 'int test1() { return -1; }' > test1.cpp asimmons-mac:test asimmons$ echo 'int test2() { return 1; }' > test2.cpp asimmons-mac:test asimmons$ g++ -c testmain.cpp asimmons-mac:test asimmons$ g++ -c test1.cpp asimmons-mac:test asimmons$ g++ -c test2.cpp asimmons-mac:test asimmons$ ar cr libtest.a test1.o test2.o asimmons-mac:test asimmons$ g++ testmain.cpp libtest.a llvm-ld, "-o=".(98237.achacks.o = "98237.achacks.exe"), -disable-opt -internalize-public-api-list=_start,malloc,free,__adddi3,__anddi3,__ashldi3,__ashrdi3,__cmpdi2,__divdi3,__fixdfdi,__fixsfdi,__fixunsdfdi,__fixunssfdi,__floatdidf,__floatdisf,__floatunsdidf,__iordi3,__lshldi3,__lshrdi3,__moddi3,__muldi3,__negdi2,__one_cmpldi2,__qdivrem,__adddi3,__anddi3,__ashldi3,__ashrdi3,__cmpdi2,__divdi3,__qdivrem,__fixdfdi,__fixsfdi,__fixunsdfdi,__fixunssfdi,__floatdidf,__floatdisf,__floatunsdidf,__iordi3,__lshldi3,__lshrdi3,__moddi3,__muldi3,__negdi2,__one_cmpldi2,__subdi3,__ucmpdi2,__udivdi3,__umoddi3,__xordi3,__subdi3,__ucmpdi2,__udivdi3,__umoddi3,__xordi3,__error /Users/asimmons/Development/alchemy-darwin-v0.5a/avm2-libc/lib/avm2-libc.l.bc /Users/asimmons/Development/alchemy-darwin-v0.5a/avm2-libc/lib/avm2-libstdc++.l.bc, test.l.bc 98237.achacks.o 98237.achacks.swf, 5593510 bytes written asimmons-mac:test asimmons$ ls -l total 10992 -rwxr-xr-x 1 asimmons staff 5593575 Apr 9 17:44 a.exe -rw------- 1 asimmons staff 1284 Apr 9 17:43 libtest.a -rw-r--r-- 1 asimmons staff 672 Apr 9 17:43 test.l.bc -rw-r--r-- 1 asimmons staff 27 Apr 9 17:43 test1.cpp -rwxr-xr-x 1 asimmons staff 536 Apr 9 17:43 test1.o -rw-r--r-- 1 asimmons staff 26 Apr 9 17:43 test2.cpp -rwxr-xr-x 1 asimmons staff 536 Apr 9 17:43 test2.o -rw-r--r-- 1 asimmons staff 26 Apr 9 17:43 testmain.cpp -rwxr-xr-x 1 asimmons staff 552 Apr 9 17:43 testmain.o asimmons-mac:test asimmons$ And here is an example where I do specify a path (it doesn't work). 
I try to tell 'ar' to put the library under 'lib' and then link to lib/libtest.a: asimmons-mac:test asimmons$ mkdir lib asimmons-mac:test asimmons$ echo 'int main() { return 42; }' > testmain.cpp asimmons-mac:test asimmons$ echo 'int test1() { return -1; }' > test1.cpp asimmons-mac:test asimmons$ echo 'int test2() { return 1; }' > test2.cpp asimmons-mac:test asimmons$ g++ -c testmain.cpp asimmons-mac:test asimmons$ g++ -c test1.cpp asimmons-mac:test asimmons$ g++ -c test2.cpp asimmons-mac:test asimmons$ ar cr lib/libtest.a test1.o test2.o asimmons-mac:test asimmons$ g++ testmain.cpp lib/libtest.a llvm-ld, "-o=".(98638.achacks.o = "98638.achacks.exe"), -disable-opt -internalize-public-api-list=_start,malloc,free,__adddi3,__anddi3,__ashldi3,__ashrdi3,__cmpdi2,__divdi3,__fixdfdi,__fixsfdi,__fixunsdfdi,__fixunssfdi,__floatdidf,__floatdisf,__floatunsdidf,__iordi3,__lshldi3,__lshrdi3,__moddi3,__muldi3,__negdi2,__one_cmpldi2,__qdivrem,__adddi3,__anddi3,__ashldi3,__ashrdi3,__cmpdi2,__divdi3,__qdivrem,__fixdfdi,__fixsfdi,__fixunsdfdi,__fixunssfdi,__floatdidf,__floatdisf,__floatunsdidf,__iordi3,__lshldi3,__lshrdi3,__moddi3,__muldi3,__negdi2,__one_cmpldi2,__subdi3,__ucmpdi2,__udivdi3,__umoddi3,__xordi3,__subdi3,__ucmpdi2,__udivdi3,__umoddi3,__xordi3,__error /Users/asimmons/Development/alchemy-darwin-v0.5a/avm2-libc/lib/avm2-libc.l.bc /Users/asimmons/Development/alchemy-darwin-v0.5a/avm2-libc/lib/avm2-libstdc++.l.bc, lib/test.l.bc 98638.achacks.o llvm-ld: error: Cannot find linker input 'lib/test.l.bc' asimmons-mac:test asimmons$ ls -l total 56 -rw-r--r-- 1 asimmons staff 552 Apr 9 17:46 98638.achacks.o drwxr-xr-x 3 asimmons staff 102 Apr 9 17:46 lib -rw-r--r-- 1 asimmons staff 27 Apr 9 17:45 test1.cpp -rwxr-xr-x 1 asimmons staff 536 Apr 9 17:46 test1.o -rw-r--r-- 1 asimmons staff 26 Apr 9 17:45 test2.cpp -rwxr-xr-x 1 asimmons staff 536 Apr 9 17:46 test2.o -rw-r--r-- 1 asimmons staff 26 Apr 9 17:45 testmain.cpp -rwxr-xr-x 1 asimmons staff 552 Apr 9 17:45 testmain.o asimmons-mac:test asimmons$ ls -l lib/ total 8 -rw------- 1 asimmons staff 1284 Apr 9 17:46 libtest.a asimmons-mac:test asimmons$ but the linker errors out because it can't find lib/test.l.bc. Notice how in the first example, 'test.l.bc' was generated alongside libtest.a. But in the second example test.l.bc was not generated. Where did it go? This is a contrived example, but in the project I'm trying to build with Alchemy the make scripts generate libraries in full paths and then refer to them that way. It seems that Alchemy's 'ar' tool is broken if you try to generate a library anywhere other than '.'. Has anyone else seen this? Is there a workaround? FYI, I've also posted this question on the Alchemy forums.
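A hedged workaround sketch, on the assumption that the side file is always written to the current working directory: create the archive in '.' so test.l.bc is generated, then move both files into lib/ together so the linker finds them side by side (untested):

    mkdir -p lib
    ar cr libtest.a test1.o test2.o   # generates libtest.a and test.l.bc in '.'
    mv libtest.a test.l.bc lib/       # keep the pair together
    g++ testmain.cpp lib/libtest.a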

    Read the article

  • Why is my code returning a Null Object Reference error when using WatiN?

    - by Fuzz Evans
    I keep getting a Null Object Reference error, but can't tell why. I have a CSV file that contains 100 URLs. The file is read into an array called "lines". public partial class Form1 : System.Windows.Forms.Form { string[] lines; public Form1() ... private void ReadLinksIntoMemory() { //this reads the chosen csv file into our "lines" array //and splits on commas and new lines to create new objects within the array using (StreamReader sr = new StreamReader(@"C:\temp.csv")) { //reads everything in our csv into 1 long line string fileContents = sr.ReadToEnd(); //splits the 1 long line read in into multiple objects of the lines array lines = fileContents.Split(new string[] { ",", Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries); sr.Dispose(); } } The next part is where I get the null object error. When I try to use WatiN to go to the first item in the lines array it says I'm referencing a null object. private void GoToEditLinks() { for (int i = 0; i < lines.Length; i++) { //go to each link sequentially myIE.GoTo(lines[i].ToString()); //sleep so we can make sure the page loads System.Threading.Thread.Sleep(5000); } } When I debug the code it says that the GoTo request calls lines, which is null. It seems like I need to declare the array, but don't I need to tell it an exact size to do that? Example: lines = new string[10] I thought I could use lines.Length to tell it how big to make the array but that didn't work. What is weird to me is I can use the following code without problem: //returns the accurate number of urls that were in the CSV we read in earlier txtbx1.text = lines.Length; //or //this returns the last entry in the csv, as well as the last entry in the array TextBox2.Text = lines[lines.Length - 1]; I am confused: the array clearly has items in it, and they can be called to fill a text box, but when I try to call them in my for loop it says it's a null reference? UPDATE: By placing my cursor on both calls to lines and pressing F12 I find they both go to the same instance. The next thought is that I am not calling ReadLinksIntoMemory in time; below is my code: private void button1_Click(object sender, EventArgs e) { button1.Enabled = false; ReadLinksIntoMemory(); GoToEditLinks(); button1.Enabled = true; } Unless I'm mistaken the code says that the ReadLinksIntoMemory method must complete before GoToEditLinks can be called? UPDATE: Stepping into the method GoToEditLinks() I see that lines is null before it calls: myIE.GoTo(lines[i]); but when it hits the goto command the value changes from null to the url it is supposed to go to, but at that same time it gives me the null object error? UPDATE: I added an IsNullOrEmpty check and the lines array passes it without any issue. I'm beginning to think it is an issue with WatiN and the myIE.GoTo command. I think this is the stack trace/call stack? Program.exe!Program.Form1.GoToEditLinks() Line 284 C# Program.exe!Program.Form1.button1_Click(object sender, System.EventArgs e) Line 191 + 0x8 bytes C# [External Code] Program.exe!Program.Program.Main() Line 18 + 0x1d bytes C# [External Code]
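    Given that the text boxes prove lines is populated on the same instance, one hedged suspicion is that the null reference is actually the myIE field rather than the array. A guard sketch for the top of GoToEditLinks (this assumes myIE is a WatiN.Core.IE field that may never have been instantiated; the guard itself is hypothetical, not from the original code):

    // Hypothetical guard: create the browser if it was never instantiated.
    if (myIE == null)
    {
        myIE = new IE(); // WatiN.Core.IE opens a new Internet Explorer window
    }
    // Fail loudly if the CSV was never loaded.
    if (lines == null || lines.Length == 0)
    {
        throw new InvalidOperationException("Call ReadLinksIntoMemory() first.");
    }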

    Read the article

  • C++/msvc6 application crashes due to heap corruption, any hints?

    - by David Alfonso
    Hello all, let me say first that I'm writing this question after months of trying to find out the root of a crash happening in our application. I'll try to detail as much as possible what I've already found out about it. About the application It runs on Windows XP Professional SP2. It's built with Microsoft Visual C++ 6.0 with Service Pack 6. It's MFC based. It uses several external DLLs (e.g. Xerces, ZLib or ACE). It has high performance requirements. It does a lot of network and hard disk I/O, but it's also CPU intensive. It has an exception handling mechanism which generates a minidump when an unhandled exception occurs. Facts about the crash It only happens on multiprocessor/multicore machines and under heavy loads of work. It happens at random (neither we nor our client have found a pattern yet). We cannot reproduce the crash in our testing lab. It only happens on some production systems (but always on multicore machines). It always ends up crashing at the same point, although the complete stack is not always the same. Let me add the stack of the crashing thread (obtained using WinDbg, sorry we don't have symbols) ChildEBP RetAddr Args to Child WARNING: Stack unwind information not available. Following frames may be wrong. 030af6c8 7c9206eb 77bfc3c9 01a80000 00224bc3 MyApplication+0x2a85b9 030af960 7c91e9c0 7c92901b 00000ab4 00000000 ntdll!RtlAllocateHeap+0xeac (FPO: [Non-Fpo]) 030af98c 7c9205c8 00000001 00000000 00000000 ntdll!ZwWaitForSingleObject+0xc (FPO: [3,0,0]) 030af9c0 7c920551 01a80898 7c92056d 313adfb0 ntdll!RtlpFreeToHeapLookaside+0x22 (FPO: [2,0,4]) 030afa8c 4ba3ae96 000307da 00130005 00040012 ntdll!RtlFreeHeap+0x1e9 (FPO: [Non-Fpo]) 030afacc 77bfc2e3 0214e384 3087c8d8 02151030 0x4ba3ae96 030afb00 7c91e306 7c80bfc1 00000948 00000001 msvcrt!free+0xc8 (FPO: [Non-Fpo]) 030afb20 0042965b 030afcc0 0214d780 02151218 ntdll!ZwReleaseSemaphore+0xc (FPO: [3,0,0]) 030afb7c 7c9206eb 02e6c471 02ea0000 00000008 MyApplication+0x2965b 030afe60 7c9205c8 02151248 030aff38 7c920551 ntdll!RtlAllocateHeap+0xeac (FPO: [Non-Fpo]) 030afe74 7c92056d 0210bfb8 02151250 02151250 ntdll!RtlpFreeToHeapLookaside+0x22 (FPO: [2,0,4]) 030aff38 77bfc2de 01a80000 00000000 77bfc2e3 ntdll!RtlFreeHeap+0x647 (FPO: [Non-Fpo]) 7c92056d c5ffffff ce7c94be ff7c94be 00ffffff msvcrt!free+0xc3 (FPO: [Non-Fpo]) 7c920575 ff7c94be 00ffffff 12000000 907c94be 0xc5ffffff 7c920579 00ffffff 12000000 907c94be 90909090 0xff7c94be *** WARNING: Unable to verify checksum for xerces-c_2_7.dll *** ERROR: Symbol file could not be found. Defaulted to export symbols for xerces-c_2_7.dll - 7c92057d 12000000 907c94be 90909090 8b55ff8b MyApplication+0xbfffff 7c920581 907c94be 90909090 8b55ff8b 08458bec xerces_c_2_7 7c920585 90909090 8b55ff8b 08458bec 04408b66 0x907c94be 7c920589 8b55ff8b 08458bec 04408b66 0004c25d 0x90909090 7c92058d 08458bec 04408b66 0004c25d 90909090 0x8b55ff8b The address MyApplication+0x2a85b9 corresponds to a call to erase() of a std::list. What I have tried so far Reviewing all the code related to the point where the crash ends up happening. Trying to enable pageheap in our testing lab, though nothing useful has been found so far. We have substituted the std::list for a C array and then it crashes in another part of the code (although it is related code, it's not in the code where the old list resided). Coincidentally, now it crashes in another erase, though this time of a std::multiset.
Let me copy the stack contained in the dump: ntdll.dll!_RtlpCoalesceFreeBlocks@16() + 0x124e bytes ntdll.dll!_RtlFreeHeap@12() + 0x91f bytes msvcrt.dll!_free() + 0xc3 bytes MyApplication.exe!006a4fda() [Frames below may be incorrect and/or missing, no symbols loaded for MyApplication.exe] MyApplication.exe!0069f305() ntdll.dll!_NtFreeVirtualMemory@16() + 0xc bytes ntdll.dll!_RtlpSecMemFreeVirtualMemory@16() + 0x1b bytes ntdll.dll!_ZwWaitForSingleObject@12() + 0xc bytes ntdll.dll!_RtlpFreeToHeapLookaside@8() + 0x26 bytes ntdll.dll!_RtlFreeHeap@12() + 0x114 bytes msvcrt.dll!_free() + 0xc3 bytes c5ffffff() Possible solutions (that I'm aware of) which cannot be applied "Migrate the application to a newer compiler": We are working on this but it's not a solution at the moment. "Enable pageheap (normal or full)": We can't enable pageheap on production machines as this affects performance heavily. I think that's all I remember now; if I have forgotten something I'll add it ASAP. If you can give me some hint or propose some possible solution, don't hesitate to answer! Thank you in advance for your time and advice.
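One more option that sits between the two rejected ones (a sketch, assuming the Debugging Tools for Windows are installed; measure the overhead before deploying to production): gflags' user-mode stack trace database records the allocating call stack for every heap block at a fraction of full pageheap's cost, and WinDbg can then map a corrupted block back to whoever allocated it:

    gflags /i MyApplication.exe +ust
    ... reproduce the crash, open the dump in WinDbg, then:
    !heap -p -a <address-near-the-corruption>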

    Read the article

  • jqgrid with asp.net webmethod and json working with sorting, paging, searching and LINQ

    - by aimlessWonderer
    THIS WORKS! Most topics covering jqGrid and ASP.NET seem to relate to just receiving JSON, or working in the MVC framework, or utilizing other handlers or web services... but not many deal with actually passing parameters back to an actual webmethod in the codebehind. Furthermore, scarce are the examples that contain successful implementations of AJAX paging, sorting, or searching along with LINQ to SQL for an ASP.NET jqGrid. Below is a working example that may help others who need to pass parameters to jqGrid in order to have correct paging, sorting, and filtering. It uses pieces from here and there... ================================================== First, THE JAVASCRIPT <script type="text/javascript"> $(document).ready(function() { var grid = $("#list"); $("#list").jqGrid({ // setup custom parameter names to pass to server prmNames: { search: "isSearch", nd: null, rows: "numRows", page: "page", sort: "sortField", order: "sortOrder" }, // add by default to avoid webmethod parameter conflicts postData: { searchString: '', searchField: '', searchOper: '' }, // setup ajax call to webmethod datatype: function(postdata) { $.ajax({ url: 'PageName.aspx/getGridData', type: "POST", contentType: "application/json; charset=utf-8", data: JSON.stringify(postdata), dataType: "json", success: function(data, st) { if (st == "success") { var grid = jQuery("#list")[0]; grid.addJSONData(JSON.parse(data.d)); } }, error: function() { alert("Error with AJAX callback"); } }); }, // this is what jqGrid is looking for in json callback jsonReader: { root: "rows", page: "page", total: "totalpages", records: "totalrecords", cell: "cell", id: "id", //index of the column with the PK in it userdata: "userdata", repeatitems: true }, colNames: ['Id', 'First Name', 'Last Name'], colModel: [ { name: 'id', index: 'id', width: 55, search: false }, { name: 'fname', index: 'fname', width: 200, searchoptions: { sopt: ['eq', 'ne', 'cn']} }, { name: 'lname', index: 'lname', width: 200, searchoptions: { sopt: ['eq', 'ne', 'cn']} } ], rowNum: 10, rowList: [10, 20, 30], pager: jQuery("#pager"), sortname: "fname", sortorder: "asc", viewrecords: true, caption: "Grid Title Here" }).jqGrid('navGrid', '#pager', { edit: false, add: false, del: false }, {}, // default settings for edit {}, // add {}, // delete { closeOnEscape: true, closeAfterSearch: true}, //search {} ) }); </script> ================================================== Second, THE C# WEBMETHOD [WebMethod] public static string getGridData(int? numRows, int? page, string sortField, string sortOrder, bool isSearch, string searchField, string searchString, string searchOper) { string result = null; MyDataContext db = null; try { //--- retrieve the data db = new MyDataContext("my connection string path"); var query = from u in db.TBL_USERs select u; //--- determine if this is a search filter if (isSearch) { searchOper = getOperator(searchOper); // need to associate correct operator to value sent from jqGrid string whereClause = String.Format("{0} {1} {2}", searchField, searchOper, "@" + searchField); //--- associate value to field parameter Dictionary<string, object> param = new Dictionary<string, object>(); param.Add("@" + searchField, searchString); query = query.Where(whereClause, new object[1] { param }); } //--- setup calculations int pageIndex = page ?? 1; //--- current page int pageSize = numRows ?? 
10; //--- number of rows to show per page int totalRecords = query.Count(); //--- number of total items from query int totalPages = (int)Math.Ceiling((decimal)totalRecords / (decimal)pageSize); //--- number of pages //--- filter dataset for paging and sorting IQueryable<TBL_USER> orderedRecords = query.OrderBy(sortField); IEnumerable<TBL_USER> sortedRecords = orderedRecords.ToList(); if (sortOrder == "desc") sortedRecords = sortedRecords.Reverse(); sortedRecords = sortedRecords .Skip((pageIndex - 1) * pageSize) //--- page the data .Take(pageSize); //--- format json var jsonData = new { totalpages = totalPages, //--- number of pages page = pageIndex, //--- current page totalrecords = totalRecords, //--- total items rows = ( from row in sortedRecords select new { i = row.USER_ID, cell = new string[] { row.USER_ID.ToString(), row.FNAME.ToString(), row.LNAME } } ).ToArray() }; result = Newtonsoft.Json.JsonConvert.SerializeObject(jsonData); } catch (Exception ex) { Debug.WriteLine(ex); } finally { if (db != null) db.Dispose(); } return result; } ================================================== Third, NECESSITIES In order to have dynamic OrderBy clauses in the LINQ, I had to pull in a class to my AppCode folder called 'Dynamic.cs'. You can retrieve the file from the download here. You will find the file in the "DynamicQuery" folder. That file will give you the ability to use a dynamic OrderBy clause, since we don't know which column we're sorting by except on the initial load. To serialize the JSON back from the C# to the JS, I incorporated the James Newton-King JSON.net DLL found here: http://json.codeplex.com/releases/view/37810. After downloading, there is a "Newtonsoft.Json.Compact.dll" which you can add to your Bin folder as a reference. Here's my usings block: using System; using System.Collections; using System.Collections.Generic; using System.Linq; using System.Web.UI.WebControls; using System.Web.Services; using System.Linq.Dynamic; For the Javascript references, I'm using the following scripts in respective order in case that helps some folks: 1) jquery-1.3.2.min.js ... 2) jquery-ui-1.7.2.custom.min.js ... 3) json.min.js ... 4) i18n/grid.locale-en.js ... 5) jquery.jqGrid.min.js For the CSS, I'm using jqGrid's necessities as well as the jQuery UI Theme: 1) jquery_theme/jquery-ui-1.7.2.custom.css ... 2) ui.jqgrid.css The key to getting the parameters from the JS to the WebMethod, without having to parse an unserialized string on the backend or having to set up some JS logic to switch methods for different numbers of parameters, was this block: postData: { searchString: '', searchField: '', searchOper: '' }, Those parameters will still be set correctly when you actually do a search, and reset to empty when you "reset" or want the grid to not do any filtering. Hope this helps some others!!!! Please reply if you find major issues or ways of refactoring or doing better that I haven't considered.

    Read the article

  • WiX 3 Tutorial: Understanding main WXS and WXI file

    - by Mladen Prajdic
    In the previous post we've taken a look at the WiX solution/project structure and project properties. We're still playing with our super SuperForm application and today we'll take a look at the general parts of the main wxs file, SuperForm.wxs, and the wxi include file. For the wxs file we'll just go over a general description of what each part does in the code comments. The more detailed descriptions will be in future posts about the features themselves. WXI include file Include files are exactly what their name implies. To include a wxi file into a wxs file you have to put the wxi at the beginning of each .wxs file you wish to include it in. If you've ever worked with C++ you can think of the include files as .h files. For example if you include SuperFormVariables.wxi into SuperForm.wxs, the variables in the wxi won't be seen in FilesFragment.wxs or RegistryFragment.wxs. You'd have to include it manually in those two wxs files too. For a preprocessor variable $(var.VariableName) to be seen by every file in the project you have to include it in the WiX project properties->Build->"Define preprocessor variables" textbox. I've chosen not to go this route because in multi-developer teams not everyone has the same directory structure, and having a single variable would mean each developer would have to check out the wixproj file to edit the variable. This is pretty much unacceptable by my standards. This is why we've added a System Environment variable named SuperFormFilesDir, as shown in the previous WiX Tutorial post. Because FilesFragment.wxs is autogenerated on every project build we don't want to change it manually each time by adding the wxi include at the beginning of the file; this way we couldn't recreate it in each pre-build event. <?xml version="1.0" encoding="utf-8"?><Include> <!-- Versioning. These have to be changed for upgrades. It's not enough to just include newer files. --> <?define MajorVersion="1" ?> <?define MinorVersion="0" ?> <?define BuildVersion="0" ?> <!-- Revision is NOT used by WiX in the upgrade procedure --> <?define Revision="0" ?> <!-- Full version number to display --> <?define VersionNumber="$(var.MajorVersion).$(var.MinorVersion).$(var.BuildVersion).$(var.Revision)" ?> <!-- Upgrade code HAS to be the same for all updates. Once you've chosen it don't change it. --> <?define UpgradeCode="YOUR-GUID-HERE" ?> <!-- Path to the resources directory. Resources don't really need to be included in the project structure but I like to include them for clarity --> <?define ResourcesDir="$(var.ProjectDir)\Resources" ?> <!-- The name of your application exe file. This will be used to kill the process when updating and creating the desktop shortcut --> <?define ExeProcessName="SuperForm.MainApp.exe" ?></Include> For now there's no way to tell WiX in Visual Studio to have a wxi include file available to the whole project, so you have to include it in each file separately. Only variables set in "Define preprocessor variables" or System Environment variables are accessible to the whole project for now. The main WXS file: SuperForm.wxs We'll only take a look at the general structure of the main SuperForm.wxs and not its details. We'll cover the details in future posts. The code comments should provide plenty of info about what each part does in general. Basically there are five major parts: the update part, the conditions and actions part, the UI install sequence, the directory structure, and the features we want to include.
<?xml version="1.0" encoding="UTF-8"?><!-- Add xmlns:util namespace definition to be able to use stuff from WixUtilExtension dll--><Wix xmlns="http://schemas.microsoft.com/wix/2006/wi" xmlns:util="http://schemas.microsoft.com/wix/UtilExtension"> <!-- This is how we include wxi files --> <?include $(sys.CURRENTDIR)Includes\SuperFormVariables.wxi ?> <!-- Id="*" is to enable upgrading. * means that the product ID will be autogenerated on each build. Name is made of localized product name and version number. --> <Product Id="*" Name="!(loc.ProductName) $(var.VersionNumber)" Language="!(loc.LANG)" Version="$(var.VersionNumber)" Manufacturer="!(loc.ManufacturerName)" UpgradeCode="$(var.UpgradeCode)"> <!-- Define the minimum supported installer version (3.0) and that the install should be done for the whole machine not just the current user --> <Package InstallerVersion="300" Compressed="yes" InstallScope="perMachine"/> <Media Id="1" Cabinet="media1.cab" EmbedCab="yes" /> <!-- Upgrade settings. This will be explained in more detail in a future post --> <Upgrade Id="$(var.UpgradeCode)"> <UpgradeVersion OnlyDetect="yes" Minimum="$(var.VersionNumber)" IncludeMinimum="no" Property="NEWER_VERSION_FOUND" /> <UpgradeVersion Minimum="0.0.0.0" IncludeMinimum="yes" Maximum="$(var.VersionNumber)" IncludeMaximum="no" Property="OLDER_VERSION_FOUND" /> </Upgrade> <!-- Reference the global NETFRAMEWORK35 property to check if it exists --> <PropertyRef Id="NETFRAMEWORK35"/> <!-- Startup conditions that checks if .Net Framework 3.5 is installed or if we're running the OS higher than Windows XP SP2. If not the installation is aborted. By doing the (Installed OR ...) property means that this condition will only be evaluated if the app is being installed and not on uninstall or changing --> <Condition Message="!(loc.DotNetFrameworkNeeded)"> <![CDATA[Installed OR NETFRAMEWORK35]]> </Condition> <Condition Message="!(loc.AppNotSupported)"> <![CDATA[Installed OR ((VersionNT >= 501 AND ServicePackLevel >= 2) OR (VersionNT >= 502))]]> </Condition> <!-- This custom action in the InstallExecuteSequence is needed to stop silent install (passing /qb to msiexec) from going around it. 
--> <CustomAction Id="NewerVersionFound" Error="!(loc.SuperFormNewerVersionInstalled)" /> <InstallExecuteSequence> <!-- Check for newer versions with FindRelatedProducts and execute the custom action after it --> <Custom Action="NewerVersionFound" After="FindRelatedProducts"> <![CDATA[NEWER_VERSION_FOUND]]> </Custom> <!-- Remove the previous versions of the product --> <RemoveExistingProducts After="InstallInitialize"/> <!-- WixCloseApplications is a built-in custom action that uses util:CloseApplication below --> <Custom Action="WixCloseApplications" Before="InstallInitialize" /> </InstallExecuteSequence> <!-- This will ask the user to close the SuperForm app if it's running while upgrading --> <util:CloseApplication Id="CloseSuperForm" CloseMessage="no" Description="!(loc.MustCloseSuperForm)" ElevatedCloseMessage="no" RebootPrompt="no" Target="$(var.ExeProcessName)" /> <!-- Use the built-in WixUI_InstallDir GUI --> <UIRef Id="WixUI_InstallDir" /> <UI> <!-- These dialog references are needed for CloseApplication above to work correctly --> <DialogRef Id="FilesInUse" /> <DialogRef Id="MsiRMFilesInUse" /> <!-- Here we'll add the GUI logic for installation and updating in a future post--> </UI> <!-- Set the icon to show next to the program name in Add/Remove programs --> <Icon Id="SuperFormIcon.ico" SourceFile="$(var.ResourcesDir)\Exclam.ico" /> <Property Id="ARPPRODUCTICON" Value="SuperFormIcon.ico" /> <!-- Installer UI custom pictures. File names are made up. Add path to your pics. --> <!-- <WixVariable Id="WixUIDialogBmp" Value="MyAppLogo.jpg" /> <WixVariable Id="WixUIBannerBmp" Value="installBanner.jpg" /> --> <!-- the default directory structure --> <Directory Id="TARGETDIR" Name="SourceDir"> <Directory Id="ProgramFilesFolder"> <Directory Id="INSTALLLOCATION" Name="!(loc.ProductName)" /> </Directory> </Directory> <!-- Set the default install location to the value of INSTALLLOCATION (usually c:\Program Files\YourProductName) --> <Property Id="WIXUI_INSTALLDIR" Value="INSTALLLOCATION" /> <!-- Set the components defined in our fragment files that will be used for our feature --> <Feature Id="SuperFormFeature" Title="!(loc.ProductName)" Level="1"> <ComponentGroupRef Id="SuperFormFiles" /> <ComponentRef Id="cmpVersionInRegistry" /> <ComponentRef Id="cmpIsThisUpdateInRegistry" /> </Feature> </Product></Wix> For more info on what certain attributes mean you should look into the WiX Documentation.   WiX 3 tutorial by Mladen Prajdic navigation WiX 3 Tutorial: Solution/Project structure and Dev resources WiX 3 Tutorial: Understanding main wxs and wxi file WiX 3 Tutorial: Generating file/directory fragments with Heat.exe
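Since FilesFragment.wxs is regenerated on every build, a sketch of that pre-build harvest command might look like the following (the flag set is assumed from Heat's documented options, and the component group name matches the ComponentGroupRef above; the details are covered in the Heat.exe post linked above):

    "%WIX%bin\heat.exe" dir "%SuperFormFilesDir%" -cg SuperFormFiles -gg -scom -sreg -sfrag -srd -dr INSTALLLOCATION -var env.SuperFormFilesDir -out FilesFragment.wxs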

    Read the article

  • Installing UCMA 3.0 and Creating a Communications Server "14" Trusted Application Pool

    A lot of setup and administration tasks have gotten a lot easier in Communications Server 14; one of them is building an application server to develop and run your UCMA 3.0 applications on. In this post, I'll walk you through installing the UCMA 3.0 Core SDK and creating a Trusted Application Pool on the server, thus adding it to the Communications Server 14 topology and allowing you to host and run UCMA 3.0 applications on it. Note: These instructions will change slightly as the bits get updated for the eventual Beta release; I will update this post as soon as I get a chance to run this setup on a more recent build. I'm doing the install on a simple Communications Server 14 topology consisting of the following Windows Server 2008 R2 Hyper-V images: DC Domain Controller ExchangeUM Exchange Server 2010 CS-SE Microsoft Communications Server 2010 Standard Edition TS Development machine I'll walk through setting up UCMA 3.0 on the TS VM, which is a fully patched Windows Server 2008 R2 machine that is joined to the Fabrikam domain.   I'm also running Visual Studio 2010 on this VM because I intend to use it as a development machine.  In a future post, I'll walk through installing just the UCMA 3.0 runtime to build a true production UCMA application server. I'm making a couple of assumptions here: You have an existing CS 2010 site and cluster configured (we'll look at this in a future post) You're starting with a fully patched Windows Server 2008 R2 machine The machine is joined to your domain This walkthrough was done in my Fabrikam VM environment but can easily be modified for your own environment. Installing the UCMA 3.0 SDK Let's start by installing the UCMA 3.0 SDK.  Run UcmaSdkWebDownload.msi to kick off the SDK installer package extract process. The installed package is extracted to C: >> Program Files >> Microsoft UCMA 3.0 >> SDK Installer Package.  Browse there and run setup.exe. Click Install to install the UCMA 3.0 Core SDK and Workflow SDK. Install Communications Server Core Components UCMA 3.0 introduces a new concept called auto-provisioning, which is most easily explained from the developer's point of view.  Remember what your app.config looked like in UCMA 2.0?  You had to store the application GRUU, the trusted contact SIP URI, the port for your application, and the name of the certificate authority. That's all gone with auto-provisioning; all you need in your app.config is your ApplicationId, e.g.: urn:application:MyApplication. How does CS 2010 do this? All of the application's configuration data is associated with the application's id.  UCMA also queries a replicated copy of the Central Management Database to retrieve the application's configuration data and the configuration data for any endpoints. In this step, we'll run Bootstrapper.exe to install the CS Core components; this checks for the following components and installs them if they are not already present: VcRedist Sqlexpress Sqlnativeclient Sqlbackcompat Ucmaredist OcsCore.msi Open a command window at C: >> Program Files >> Microsoft Communications Server 2010 >> Deployment and run the following command: Bootstrapper.exe /BootstrapReplica /MinCache /SourceDirectory:"%ProgramFiles%\Microsoft UCMA 3.0\SDK Installer Package\Prereq\BootstrapperCache" Create a New Trusted Application Pool The next step is to create a new trusted application pool for the new server.  
Fire up the Communications Server Management Shell from Start >> Microsoft Communications Server 2010 >> Communications Server Management Shell and enter the following PowerShell command: New-CsTrustedApplicationPool -Identity <FQDN of Server> -Registrar <FQDN of CS Server> -Site <CS Site Name> Verify that the new server was added to the CS topology by running the following PowerShell command: (Get-CsTopology -AsXml).ToString() > Topology.xml This creates a file called Topology.xml in the directory that you ran the command from.  Open the file, find the Clusters section, and look for a node for the new server. The Cluster Fqdn is the name of your server; note the name of the Site that this Cluster is a part of. <Cluster Fqdn="appsrv.fabrikam.com" RequiresReplication="true" RequiresSetup="true"> <ClusterId SiteId="UcMarketing2" Number="5" /> <Machine OrdinalInCluster="1" Fqdn="appsrv.fabrikam.com"> <NetInterface InterfaceSide="Primary" InterfaceNumber="1" IPAddress="0.0.0.0" /> </Machine> </Cluster> Configure CS Management Store Replication At this point, we have the CS Core components installed and the server configured as a trusted application pool.  We now need to set up replication so that the Central Management Store replicates down to the new server. From the Communications Server Management Shell, run the following PowerShell command to enable the Replica service on the new server: Enable-CSReplica The Replica service is enabled, but hasn't done anything yet. This can be verified by running the following PowerShell command to check the replication status for the various servers in the topology: Get-CSManagementStoreReplicationStatus You can see in the screenshot below that the UpToDate property of the new server is still False Run the following PowerShell command to force the replication to run: Invoke-CSManagementStoreReplication Run Get-CSManagementStoreReplicationStatus again to verify that the new server is now up to date Request and Set a New Certificate The last step in the process is to request a new certificate from the certificate authority on the domain and assign it to the new server. From the Communications Server Management Shell, run the following PowerShell command to request a new certificate: Request-CSCertificate -Action new -Type default -CA <Domain Controller FQDN>\<Certificate Authority> Setting the -Verbose switch on the cmdlet creates an XML file with its output. Open the XML file and copy the thumbprint of the generated certificate. 
<?xml version="1.0" encoding="utf-8"?> <Action Name="Request-CsCertificate" Time="20100512T212258"> <Action Name="Request-CsCertificate" Time="20100512T212258"> <Info Title="Connection" Time="20100512T212258">Data Source=(local)\rtclocal;Initial Catalog=xds;Integrated Security=True</Info> <Action Time="20100512T212258"> <Info Title="Certificate use" Time="20100512T212258">urn:certref:default</Info> <Info Title="Subject distinguished name" Time="20100512T212258">CN="appsrv2.fabrikam.com"</Info> <Info Time="20100512T212259">The certificate request is submitted to the Certification Authority dc.fabrikam.com\FabrikamCA.</Info> <Info Time="20100512T212259">The certificate was issued.</Info> <Info Time="20100512T212259">The certificate was imported with thumbprint AFC3C46E459C1A39AD06247676F3555826DBF705.</Info> <Complete Time="20100512T212259" /> </Action> <Info Title="command status" Time="20100512T212259">Command execution processing completed</Info> <Action Name="DeploymentXdsCmdlet.SaveCachedItems" Time="20100512T212259"> <Info Time="20100512T212259">0 updates</Info> <Complete Time="20100512T212259" /> </Action> <Info Title="command status" Time="20100512T212259">Command has completed</Info> </Action> </Action> Run the following PowerShell command to set the certificate: Set-CsCertificate -Type Default -Thumbprint <Thumbprint> Wrapping Up You now have a new UCMA 3.0 application server in your Communications Server 2010 topology.  You can provision trusted applications and trusted application endpoints on the new server using the Communications Server 2010 Management Shell.  We'll take a look at how to do that in another post.
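Until that post arrives, a hedged sketch of what the provisioning might look like from the Management Shell (cmdlet names from the CS 2010 shell; the application id, port, and SIP address are made-up examples to adapt):

    New-CsTrustedApplication -ApplicationId urn:application:MyUcmaApp -TrustedApplicationPoolFqdn appsrv.fabrikam.com -Port 6000
    New-CsTrustedApplicationEndpoint -ApplicationId urn:application:MyUcmaApp -TrustedApplicationPoolFqdn appsrv.fabrikam.com -SipAddress "sip:myucmaapp@fabrikam.com"
    Enable-CsTopology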

    Read the article

  • NoSQL with MongoDB, NoRM and ASP.NET MVC

    - by shiju
     In this post, I will give an introduction to working with NoSQL and document databases using MongoDB, NoRM and ASP.NET MVC 2. NoSQL and Document Database The NoSQL movement is getting big attention this year and people are widely talking about document databases and NoSQL along with web application scalability. According to Wikipedia, "NoSQL is a movement promoting a loosely defined class of non-relational data stores that break with a long history of relational databases. These data stores may not require fixed table schemas, usually avoid join operations and typically scale horizontally. Academics and papers typically refer to these databases as structured storage". Document databases are schema free, so you can focus on the problem domain and don't have to worry about updating the schema when your domain is evolving. This enables truly domain-driven development. One key pain point of relational databases is the synchronization of the database schema with your domain entities when your domain is evolving. There are lots of NoSQL implementations available, and both CouchDB and MongoDB got my attention. While evaluating CouchDB and MongoDB, I found that CouchDB can't perform dynamic queries, so I picked MongoDB over CouchDB. There are many .NET drivers available for the MongoDB document database. MongoDB MongoDB is an open source, scalable, high-performance, schema-free, document-oriented database written in the C++ programming language. It has been developed since October 2007 by 10gen. MongoDB stores your data in binary JSON (BSON) format. MongoDB has been getting a lot of attention, and you can see some of its production deployments at http://www.mongodb.org/display/DOCS/Production+Deployments NoRM – C# driver for MongoDB NoRM is a C# driver for MongoDB with LINQ support. The NoRM project is available on GitHub at http://github.com/atheken/NoRM. Demo with ASP.NET MVC I will show a simple demo with MongoDB, NoRM and ASP.NET MVC 2. To work with MongoDB and NoRM, do the following steps: Download the MongoDB database. For Windows 32 bit, download from http://downloads.mongodb.org/win32/mongodb-win32-i386-1.4.1.zip and for Windows 64 bit, download from http://downloads.mongodb.org/win32/mongodb-win32-x86_64-1.4.1.zip. The zip contains mongod.exe to run the server and mongo.exe for the client. Download the NoRM driver for MongoDB at http://github.com/atheken/NoRM Create a directory called C:\data\db. This is the default location of the MongoDB database; you can override this behavior. Run C:\Mongo\bin\mongod.exe. This will start the MongoDB server. Now I am going to demonstrate how to program with MongoDB and NoRM in an ASP.NET MVC application. Let's write a domain class: public class Category {            [MongoIdentifier]public ObjectId Id { get; set; } [Required(ErrorMessage = "Name Required")][StringLength(25, ErrorMessage = "Must be less than 25 characters")]public string Name { get; set;}public string Description { get; set; }}  ObjectId is a NoRM type that represents a MongoDB ObjectId. NoRM will automatically update the Id because it is decorated with the MongoIdentifier attribute. The next step is to create a MongoSession class. This will handle all interaction with MongoDB. 
internal class MongoSession<TEntity> : IDisposable{    private readonly MongoQueryProvider provider;     public MongoSession()    {        this.provider = new MongoQueryProvider("Expense");    }     public IQueryable<TEntity> Queryable    {        get { return new MongoQuery<TEntity>(this.provider); }    }     public MongoQueryProvider Provider    {        get { return this.provider; }    }     public void Add<T>(T item) where T : class, new()    {        this.provider.DB.GetCollection<T>().Insert(item);    }     public void Dispose()    {        this.provider.Server.Dispose();     }    public void Delete<T>(T item) where T : class, new()    {        this.provider.DB.GetCollection<T>().Delete(item);    }     public void Drop<T>()    {        this.provider.DB.DropCollection(typeof(T).Name);    }     public void Save<T>(T item) where T : class,new()    {        this.provider.DB.GetCollection<T>().Save(item);                }  }    The MongoSession constructor creates an instance of MongoQueryProvider, which supports LINQ expressions, against a database named "Expense". If the database exists it will be used; otherwise a new database named "Expense" will be created. The Save method can be used for both insert and update operations: if the object is a new one, a new document is created; otherwise the document with the given ObjectId is updated.  Let's create ASP.NET MVC controller actions for CRUD operations on the domain class Category: public class CategoryController : Controller{ //Index - Get the category listpublic ActionResult Index(){    using (var session = new MongoSession<Category>())    {        var categories = session.Queryable.AsEnumerable<Category>();        return View(categories);    }} //edit a single category[HttpGet]public ActionResult Edit(ObjectId id) {     using (var session = new MongoSession<Category>())    {        var category = session.Queryable              .Where(c => c.Id == id)              .FirstOrDefault();         return View("Save",category);    } }// GET: /Category/Create[HttpGet]public ActionResult Create(){    var category = new Category();    return View("Save", category);}//insert or update a category[HttpPost]public ActionResult Save(Category category){    if (!ModelState.IsValid)    {        return View("Save", category);    }    using (var session = new MongoSession<Category>())    {        session.Save(category);        return RedirectToAction("Index");    } }//Delete category[HttpPost]public ActionResult Delete(ObjectId Id){    using (var session = new MongoSession<Category>())    {        var category = session.Queryable              .Where(c => c.Id == Id)              .FirstOrDefault();        session.Delete(category);        var categories = session.Queryable.AsEnumerable<Category>();        return PartialView("CategoryList", categories);    } }        }  You can easily work with MongoDB through NoRM in ASP.NET MVC applications. I have created a repository on CodePlex at http://mongomvc.codeplex.com and you can download the source code of the ASP.NET MVC application from there.

    Read the article

  • Installing PIL (Python Imaging Library) in Win7 64 bits, Python 2.6.4

    - by Rafael Almeida
    I'm trying to install said library for use with Python. I tried downloading the executable installer for Windows, which runs, but says it doesn't find a Python installation. Then I tried registering Python (http://effbot.org/zone/python-register.htm), but the script says it can't register (although the keys appear in my registry). Then I tried downloading the source package: I ran setup.py build and it works, but when I run setup.py install it says the following: running install running build running build_py running build_ext building '_imaging' extension error: Unable to find vcvarsall.bat What can I do?
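    The vcvarsall.bat error means distutils went looking for the Visual C++ 2008 compiler that Python 2.6 itself was built with and didn't find it; the failed registration is a separate 64-bit wrinkle, since the register script writes to a registry view that a 32-bit installer may not read. If installing Visual C++ 2008 (or the Windows SDK) is not an option, one common workaround is to point distutils at MinGW instead. A sketch, assuming MinGW is installed and on PATH, and noting that mingw32 targets 32-bit Python builds:

    ; C:\Python26\Lib\distutils\distutils.cfg
    [build]
    compiler=mingw32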

    Read the article

  • Django/Mod_WSGI error: UnboundLocalError: local variable 'resolver' referenced before assignment

    - by ycseattle
    Hello, I've set up Django with mod_wsgi and run into this error. I thought maybe the sys.path was not set up correctly, but I tried everything I could think of with no luck. Any suggestions? The following is the apache2 log for the error: mod_wsgi (pid=2579): Exception occurred processing WSGI script '/home/myapp/myapp.wsgi'. Traceback (most recent call last): File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/wsgi.py", line 241, in __call__ response = self.get_response(request) File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py", line 142, in get_response return self.handle_uncaught_exception(request, resolver, exc_info) UnboundLocalError: local variable 'resolver' referenced before assignment The following is the content of myapp.wsgi: import os import sys # put the Django project on sys.path sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../"))) os.environ["DJANGO_SETTINGS_MODULE"] = "photopier.settings" #os.environ["PYTHONPATH"]="/home" from django.core.handlers.wsgi import WSGIHandler application = WSGIHandler()
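    The UnboundLocalError is likely only a symptom: get_response assigns resolver only after the settings and middleware have imported cleanly, so any exception raised during that import (a wrong DJANGO_SETTINGS_MODULE, a package missing from sys.path) surfaces as "resolver referenced before assignment". A debugging sketch, added to myapp.wsgi right after the existing os.environ line, that fails fast with the real traceback in the Apache error log:

    # Debugging aid: import the settings eagerly so a bad module path or a
    # failing settings import raises the real error here, instead of the
    # masked UnboundLocalError inside get_response().
    from django.conf import settings
    _ = settings.INSTALLED_APPS  # forces the lazy settings object to load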

    Read the article
