Search Results

Search found 12072 results on 483 pages for 'x86 servers'.


  • Setting up an efficient and effective development process

    - by christopher-mccann
    I am in the midst of setting up the development environment (PHP/MySQL) for my start-up. We use three sets of servers: LIVE (the servers which provide the actual application), TEST (a testing version before anything is actually released) and DEV (the development servers). The development servers run SVN, with each developer checking out their local copy. At the end of each day completed fixes are checked in, then we use Hudson to automate our build process and transfer the build over to TEST. We then check that the application still functions correctly using a tester and, if everything is fine, move it to LIVE. I am happy with this process but I do have two questions. How would you recommend we do local testing? As each developer adds new pages or changes functionality, I want them to be able to test what they are doing; would you just set up a local Apache and a local database and have them test locally on their own machines? And how would you recommend dealing with data-layer changes? Is there anything else you would recommend doing to really make our development process as easy and efficient as possible? Thanks in advance
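
    A hedged sketch of one common answer to the local-testing question: give each developer a local Apache/MySQL stack and select database credentials per environment from a single include, so nothing else has to change between a DEV checkout and TEST/LIVE. The file name, environment variable and credentials below are assumptions, not part of the original question.

        <?php
        // config.php - pick database credentials based on an environment variable,
        // so each developer can point the app at their own local MySQL instance.
        $env = getenv('APP_ENV') ? getenv('APP_ENV') : 'dev';   // 'dev', 'test' or 'live'

        $settings = array(
            'dev'  => array('host' => 'localhost',  'user' => 'dev',  'pass' => 'dev',    'name' => 'app_dev'),
            'test' => array('host' => 'test-db-01', 'user' => 'test', 'pass' => 'secret', 'name' => 'app_test'),
            'live' => array('host' => 'live-db-01', 'user' => 'app',  'pass' => 'secret', 'name' => 'app_live'),
        );

        $db = mysql_connect($settings[$env]['host'], $settings[$env]['user'], $settings[$env]['pass'])
            or die('cannot connect: ' . mysql_error());
        mysql_select_db($settings[$env]['name'], $db);
        ?>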


  • VS 2008 Service Pack 1 problem

    - by Compiler
    Hi, my OS is XP with Service Pack 3 installed. I can't install VS 2008 Service Pack 1; in the log file I see that 'Visual C++ 2008 SP1 Design-Time Components for x86 - KB947888' can't be installed. The error code is 1603. The last part of the installation log is here:

        Returning IDOK. INSTALLMESSAGE_ERROR [Error 1335. The cabinet file 'patch.cab' required for this installation is corrupt and cannot be used. This could indicate a network error, an error reading from the CD-ROM, or a problem with this package.]
        [1/12/2009, 10:14:50] (IronSpigot::MsiExternalUiHandler::UiHandler) Returning IDOK. INSTALLMESSAGE_ACTIONSTART [Action 10:14:50: Rollback. Rolling back action:]
        [1/12/2009, 10:17:29] (IronSpigot::MspInstallerT<class ATL::CStringT<unsigned short,class ATL::StrTraitATL<unsigned short,class ATL::ChTraitsCRT<unsigned short> > > >::PerformMsiOperation) Patch (C:\DOCUME~1\Cem\LOCALS~1\Temp\Microsoft Visual Studio 2008 SP1\VS90sp1-KB945140-X86-ENU.msp; C:\DOCUME~1\Cem\LOCALS~1\Temp\Microsoft Visual Studio 2008 SP1\VC90sp1-KB947888-x86-enu.msp) install failed on product (Microsoft Visual Studio 2008 Professional Edition - ENU). Msi Log: Microsoft Visual Studio 2008 SP1_20090112_100005671-Microsoft Visual Studio 2008 Professional Edition - ENU-MSP0.txt
        [1/12/2009, 10:17:29] (IronSpigot::MspInstallerT<class ATL::CStringT<unsigned short,class ATL::StrTraitATL<unsigned short,class ATL::ChTraitsCRT<unsigned short> > > >::PerformMsiOperation) MsiApplyMultiplePatches returned 0x643


  • Convert.ToDateTime causes FormatException on afternoon date/time values

    - by Sam
    We have an application parsing date/time values in the following format:

        2009-10-10 09:19:12.124
        2009-10-10 12:13:14.852
        2009-10-10 13:00:00
        2009-10-10 15:23:32.022

    One particular server all of a sudden (today) started failing to parse any times 13:00:00 or later. This particular client has five servers and only one has the problem. We have dozens of other clients with a total of hundreds of servers without the problem.

        System.FormatException: String was not recognized as a valid DateTime.
           at System.DateTimeParse.Parse(String s, DateTimeFormatInfo dtfi, DateTimeStyles styles)
           at System.DateTime.Parse(String s, IFormatProvider provider)
           at System.Convert.ToDateTime(String value, IFormatProvider provider)
           at System.String.System.IConvertible.ToDateTime(IFormatProvider provider)
           at System.Convert.ToDateTime(Object value)

    I ran a test using DateTime.Parse(s, CultureInfo.CurrentCulture) compared to DateTime.Parse(s, CultureInfo.InvariantCulture), and the problem only shows up with CurrentCulture. However, CurrentCulture is "en-US" just like all the other servers, and there's nothing different that I can find in the regional or language settings. Has anyone seen this before? Suggestions on what I can look into? EDIT: Thank you for the answers so far. However, I'm looking for suggestions on what configuration could have caused this to suddenly change behavior and stop working when it has worked for years and works on hundreds of other servers. I've already changed the code for the next version, but I'm looking for a configuration change to fix this in the interim for the current installation.
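
    For what it's worth, a minimal sketch of the test described above, extended with a culture-independent ParseExact call. The format strings are assumptions based on the sample values, and this is a code-side workaround rather than the configuration fix being asked for.

        using System;
        using System.Globalization;

        class ParseRepro
        {
            static void Main()
            {
                string s = "2009-10-10 15:23:32.022";

                // What Convert.ToDateTime effectively does: parse with the server's
                // current culture. This is what fails on the one affected machine
                // for times of 13:00:00 or later.
                try
                {
                    Console.WriteLine(DateTime.Parse(s, CultureInfo.CurrentCulture));
                }
                catch (FormatException ex)
                {
                    Console.WriteLine("CurrentCulture parse failed: " + ex.Message);
                }

                // Culture-independent parse with explicit formats; immune to whatever
                // regional setting has drifted on that server.
                DateTime d = DateTime.ParseExact(
                    s,
                    new[] { "yyyy-MM-dd HH:mm:ss.fff", "yyyy-MM-dd HH:mm:ss" },
                    CultureInfo.InvariantCulture,
                    DateTimeStyles.None);
                Console.WriteLine(d.ToString("o"));
            }
        }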


  • Problem with non blocking fifo in bash

    - by timdel
    Hi! I'm running a few Team Fortress 2 servers and I want to write a little management script. Basically the TF2 server is a foreground process which provides a server console, so I can start the server, type status and get an answer from it:

        ***@purple:~/tf2$ ./start_server_testing
        Auto detecting CPU
        Using AMD Optimised binary.
        Server will auto-restart if there is a crash.
        Console initialized.
        [bla bla bla]
        Connection to Steam servers successful.
        VAC secure mode is activated.
        status
        hostname: Team Fortress
        version : 1.0.6.1/15 3883 secure
        udp/ip  : ***.***.133.31:27600
        map     : ctf_2fort at: 0 x, 0 y, 0 z
        players : 0 (2 max)
        # userid name uniqueid connected ping loss state adr

    Great, now I want to create a script which sends the command sm_reloadadmins to all my servers. The best way I found to do this is using a named pipe (fifo). What I want is to have this pipe read-only and non-blocking to the server process, so I can write into the pipe and the server executes it, but I still want to be able to type into the console on the server: if I switch back to the foreground process of the server and type status, I want an answer printed. I tried this (assuming serverfifo was created with mkfifo serverfifo):

        ./start_server_testing < serverfifo

    Not working; the server won't start until something is written to the pipe.

        ./start_server_testing <> serverfifo

    That's actually working pretty well: I can see the console output of the server, and I can write to the fifo and the server executes the commands, but I can't write to the server via the console anymore. Also, if I write 'exit' to the pipe (which should end the server) and I'm running it in a screen, the screen window gets killed for some reason (wtf, why?). I only need the server to read the fifo without blocking AND all my keyboard input on the server itself should be sent to the server AND all server output should be written to the console. Is that possible? If yes, how?
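
    Not an answer to the fifo question itself, but a commonly used alternative for exactly this kind of game-server management is to run the server inside GNU screen and inject console commands with screen's stuff command; the console stays fully interactive when you attach. A hedged sketch (the session name is an assumption):

        #!/bin/bash
        # Start the server detached in a named screen session; attach any time
        # with `screen -r tf2_testing` and type status as usual.
        screen -dmS tf2_testing ./start_server_testing

        # From the management script, push a console command into that session
        # without attaching; the trailing newline "presses Enter".
        screen -S tf2_testing -p 0 -X stuff $'sm_reloadadmins\n'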


  • Removing padding from structure in kernel module

    - by dexkid
    I am compiling a kernel module, containing a structure of size 34, using the standard command. make -C /lib/modules/$(KVERSION)/build M=$(PWD) modules The sizeof(some_structure) is coming as 36 instead of 34 i.e. the compiler is padding the structure. How do I remove this padding? Running make V=1 shows the gcc compiler options passed as make -I../inc -C /lib/modules/2.6.29.4-167.fc11.i686.PAE/build M=/home/vishal/20100426_eth_vishal/organised_eth/src modules make[1]: Entering directory `/usr/src/kernels/2.6.29.4-167.fc11.i686.PAE' test -e include/linux/autoconf.h -a -e include/config/auto.conf || ( \ echo; \ echo " ERROR: Kernel configuration is invalid."; \ echo " include/linux/autoconf.h or include/config/auto.conf are missing."; \ echo " Run 'make oldconfig && make prepare' on kernel src to fix it."; \ echo; \ /bin/false) mkdir -p /home/vishal/20100426_eth_vishal/organised_eth/src/.tmp_versions ; rm -f /home/vishal/20100426_eth_vishal/organised_eth/src/.tmp_versions/* make -f scripts/Makefile.build obj=/home/vishal/20100426_eth_vishal/organised_eth/src gcc -Wp,-MD,/home/vishal/20100426_eth_vishal/organised_eth/src/.eth_main.o.d -nostdinc -isystem /usr/lib/gcc/i586-redhat-linux/4.4.0/include -Iinclude -I/usr/src/kernels/2.6.29.4-167.fc11.i686.PAE/arch/x86/include -include include/linux/autoconf.h -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Os -m32 -msoft-float -mregparm=3 -freg-struct-return -mpreferred-stack-boundary=2 -march=i686 -mtune=generic -Wa,-mtune=generic32 -ffreestanding -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -Iarch/x86/include/asm/mach-generic -Iarch/x86/include/asm/mach-default -Wframe-larger-than=1024 -fno-stack-protector -fno-omit-frame-pointer -fno-optimize-sibling-calls -g -pg -Wdeclaration-after-statement -Wno-pointer-sign -fwrapv -fno-dwarf2-cfi-asm -DTX_DESCRIPTOR_IN_SYSTEM_MEMORY -DRX_DESCRIPTOR_IN_SYSTEM_MEMORY -DTX_BUFFER_IN_SYSTEM_MEMORY -DRX_BUFFER_IN_SYSTEM_MEMORY -DALTERNATE_DESCRIPTORS -DEXT_8_BYTE_DESCRIPTOR -O0 -Wall -DT_ETH_1588_051 -DALTERNATE_DESCRIPTORS -DEXT_8_BYTE_DESCRIPTOR -DNETHERNET_INTERRUPTS -DETH_IEEE1588_TESTS -DSNAPTYPSEL_TMSTRENA_TEVENTENA_TESTS -DT_ETH_1588_140_147 -DLOW_DEBUG_PRINTS -DMEDIUM_DEBUG_PRINTS -DHIGH_DEBUG_PRINTS -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(eth_main)" -D"KBUILD_MODNAME=KBUILD_STR(conxt_eth)" -c -o /home/vishal/20100426_eth_vishal/organised_eth/src/eth_main.o /home/vishal/20100426_eth_vishal/organised_eth/src/eth_main.c
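
    The usual way to stop GCC from padding a structure is to declare it packed; inside the kernel tree the __packed macro (from the compiler headers) expands to the same attribute. A user-space illustration with a made-up 34-byte layout, since the real structure isn't shown:

        #include <stdio.h>
        #include <stdint.h>

        /* Hypothetical 34-byte layout (6 + 6 + 2 + 20 bytes of members). */
        struct demo_default {
            uint8_t  dest[6];
            uint8_t  src[6];
            uint16_t type;
            uint32_t words[5];
        };                              /* sizeof == 36: 2 bytes of padding are
                                           inserted before words[] so it starts
                                           on a 4-byte boundary. */

        struct demo_packed {
            uint8_t  dest[6];
            uint8_t  src[6];
            uint16_t type;
            uint32_t words[5];
        } __attribute__((packed));      /* sizeof == 34: no padding, but words[]
                                           may now be misaligned, which matters
                                           on some targets. */

        int main(void)
        {
            printf("default: %zu\n", sizeof(struct demo_default));
            printf("packed : %zu\n", sizeof(struct demo_packed));
            return 0;
        }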


  • Build failed question - maven - jre or jdk problem

    - by Gandalf StormCrow
    Hi all, I have my JAVA_HOME set to C:\Program Files (x86)\Java\jdk1.6.0_18. After I run maven install I get this message from Eclipse:

        Reason: Unable to locate the Javac Compiler in:
          C:\Program Files (x86)\Java\jre6\..\lib\tools.jar
        Please ensure you are using JDK 1.4 or above and not a JRE (the com.sun.tools.javac.Main class is required).
        In most cases you can change the location of your Java installation by setting the JAVA_HOME environment variable.

    I'm certain that this is the tricky part: "Please ensure you are using JDK 1.4 or above and not a JRE". When I look at the run configuration it is set to JRE 6; how do I change it to the JDK 1.6 which I have already installed? EDIT: I even tried to modify the plugin:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>2.0.2</version>
          <configuration>
            <source>1.6</source>
            <target>1.6</target>
            <executable>C:\Program Files (x86)\Java\jdk1.6.0_18\bin</executable>
          </configuration>
        </plugin>

    Still I get the same error. Maybe I forgot to say I use the Eclipse Maven plugin... how can I change from JRE to JDK in Eclipse?
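
    Two things usually fix this. First, the embedded Maven compiles with the JVM that Eclipse itself runs on, so either add a -vm line pointing at the JDK's javaw.exe in eclipse.ini (above -vmargs) or register the JDK under Window > Preferences > Java > Installed JREs, so tools.jar becomes visible. Second, if you want the compiler plugin to use an explicit javac, it needs <fork>true</fork> and a path to the executable itself, not the bin directory. A hedged sketch based on the paths in the question:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>2.0.2</version>
          <configuration>
            <source>1.6</source>
            <target>1.6</target>
            <!-- fork is required, otherwise <executable> is ignored -->
            <fork>true</fork>
            <executable>C:\Program Files (x86)\Java\jdk1.6.0_18\bin\javac.exe</executable>
          </configuration>
        </plugin>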


  • Transitive SQL query on same table

    - by MiKu
    Hey, consider the following table and data...

        in_timestamp | out_timestamp | name  | in_id | out_id | in_server      | out_server     | status
        timestamp1   | timestamp2    | data1 | id1   | id2    | others-server1 | my-server1     | success
        timestamp2   | timestamp3    | data1 | id2   | id3    | my-server1     | my-server2     | success
        timestamp3   | timestamp4    | data1 | id3   | id4    | my-server2     | my-server3     | success
        timestamp4   | timestamp5    | data1 | id4   | id5    | my-server3     | others-server2 | success

    The above data represent a log of the execution flow of some data across servers, e.g. some data has flowed from 'others-server1' through a bunch of 'my-servers' and finally to the destined 'others-server2'. Questions: 1) I need to present this log to the client in a readable form, where he doesn't need to know anything about the bunch of 'my-servers'. All I am supposed to give is the timestamp at which the data entered my infrastructure and when it left, drilling down to the following info: in_timestamp (of 'others-server1' to 'my-server1'), out_timestamp (of 'my-server3' to 'others-server2'), name, status. I want to write SQL for this; can someone help? NOTE: there might not be 3 'my-servers' all the time. It differs from situation to situation, e.g. there might be 4 'my-servers' involved for, say, data2! 2) Are there any other alternatives to plain SQL? I mean stored procs, etc.? 3) Optimizations? (The records are huge in number: as of now, around 5 million a day, and we are supposed to show records that are up to a week old.) In advance, THANKS FOR THE HELP! :)
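
    A hedged sketch of one way to write the query for question 1, assuming the log table is called flow_log and that your own machines can be recognised by the 'my-' prefix (both assumptions): the entry row is the one whose in_server is not yours, the exit row is the one whose out_server is not yours, and the two are matched on name. This works regardless of how many 'my-servers' sit in between.

        SELECT f_in.in_timestamp,
               f_out.out_timestamp,
               f_in.name,
               f_out.status
        FROM   flow_log AS f_in
        JOIN   flow_log AS f_out
               ON f_out.name = f_in.name
        WHERE  f_in.in_server   NOT LIKE 'my-%'   -- hop that entered our infrastructure
          AND  f_out.out_server NOT LIKE 'my-%';  -- hop that left it

    With millions of rows a day, indexes on name and on the in_server/out_server columns (or a precomputed flag marking entry and exit rows) will matter more than the exact join syntax.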


  • DNS-Based Environment Determination

    - by zvolkov
    Found the following here. The question is: where can I find more details on how exactly to implement this on Windows? Any guide or how-to, anybody? Or maybe you can provide your invaluable suggestions? Specifically, how do I make it so that "all QA servers would first resolve entries in qa.example.com first and then if that lookup failed they would try example.com"? (I'm a dev, not a DNS specialist, but our IT Support has refused to help on this :()

    Use DNS Based Environment Determination for your servers. Do this by initially splitting your top level domain into a number of sub domains depending on their function, and then creating DNS Service Names in each of the sub domains pointing to the relevant server for that service. Based on the list above we would then have:

    * clientdb.prod.example.com for Production
    * clientdb.perf.example.com for Performance Testing
    * clientdb.qa.example.com for QA
    * clientdb.dev.example.com for Development

    Servers then resolve entries in their relevant sub domain by function. That is, all QA servers would first resolve entries in qa.example.com and then, if that lookup failed, they would try example.com. This allows you to have a single configuration entry for your client database hostname (clientdb) that resolves correctly in all environments. This technique has the added advantage of still having global services defined in a common top level domain.

    Here's one related (but not equivalent) SO question: http://stackoverflow.com/questions/774490/dns-resolving-based-on-client-ip. This seems to be related to providing "split horizon" DNS service. Reading that, I see that I will probably need a separate DNS server for each environment. Is this true, or does Windows support some form of "tagging" the records to be visible depending on the requestor's IP? Also, cross-posted on ServerFault.
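
    What the quoted text describes is just the DNS suffix search list on each server, which is a client-side setting, so no extra DNS servers or record "tagging" are needed for that part. It can be set per machine in the Advanced TCP/IP Settings dialog, via Group Policy, or with something like the hedged sketch below (domain names taken from the example above):

        # Windows 8 / Server 2012 and later (DnsClient module):
        Set-DnsClientGlobalSetting -SuffixSearchList @("qa.example.com", "example.com")

        # Older versions: the same list lives in the TCP/IP SearchList registry value,
        # which the dialog and Group Policy also write to.
        Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" -Name SearchList -Value "qa.example.com,example.com"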


  • Squid handling of concurrent cache misses

    - by Oliver H-H
    We're using a Squid cache to off-load traffic from our web servers, i.e. it's set up as a reverse proxy responding to inbound requests before they hit our web servers. When we get blitzed with concurrent requests for the same resource that's not in the cache, Squid proxies all the requests through to our web ("origin") servers. For us, this behavior isn't ideal: our origin servers get bogged down trying to fulfill N identical requests concurrently. Instead, we'd like the first request to proxy through to the origin server, the rest of the requests to queue at the Squid layer, and then all be fulfilled by Squid when the origin server has responded to that first request. Does anyone know how to configure Squid to do this? We've read through the documentation multiple times and thoroughly web-searched the topic, but can't figure out how to do it. We use Akamai too and, interestingly, this is its default behavior. (However, Akamai has so many nodes that we still see lots of concurrent requests in certain traffic-spike scenarios, even with Akamai's super-node feature enabled.) This behavior is clearly configurable for some other caches; e.g. the Ehcache documentation offers the option "Concurrent Cache Misses: A cache miss will cause the filter chain, upstream of the caching filter to be processed. To avoid threads requesting the same key to do useless duplicate work, these threads block behind the first thread." Some folks call this behavior a "blocking cache," since the subsequent concurrent requests block behind the first request until it's fulfilled or timed out. Thx for looking over my noob question! Oliver
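
    For reference, Squid does have a directive aimed at exactly this behaviour, collapsed_forwarding; it exists in the 2.6/2.7 line, was dropped from the early 3.x releases and reintroduced later, so whether it is available depends on the Squid version in use. A minimal squid.conf fragment:

        # Hold concurrent misses for the same URL and let the first request
        # fetch from the origin; the rest are served from that single reply.
        collapsed_forwarding on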


  • Capistrano fails for multiple host deployments

    - by morris082
    I be at a loss here, and after scouring the seas (read: internet) for solutions I am left with none other than to hit up the stack. Any help appreciated. I have Capistrano running locally for deployments onto several different environments (I'm on Windows 7, fwiw). All was well until I needed to deploy to multiple :app servers during a single deployment. Usually I'm prompted for my SSH passphrase once when I call 'cap deploy'. I have ssh-agent running (git never pesters for my pass), but despite this Capistrano has always bugged me once each deployment. Regardless, it always worked when deploying to ONE host. Now, when I attempt to deploy to multiple servers at once, it asks for my passphrase what appears to be multiple times (IPs removed by me):

        servers: ["redacted", "redacted"]
        Enter passphrase for ~/.ssh/id_rsa: Enter passphrase for ~/.ssh/id_rsa:

    So with the above I enter my passphrase, but this doesn't work. It waits a little while, then spits out this error:

        connection failed for: <one of the server ips> (NoMethodError: undefined method `overwrite' for nil:NilClass)

    And that's the end of that. I can SSH "passwordless" into the servers I'm deploying to just fine. I'm pretty certain ssh-agent is running since I can hit Git without entering my passphrase every time. Using the 'forward_agent' setting in cap deploy did not work. This is my role:

        role :app, "ip 1 removed", "ip 2 removed"

    If I set default_run_options[:max_hosts] = 1, it works OK, but it asks for my passphrase for every single connection to each host I'm deploying to, which ends up being a lot. Essentially I'm looking for any of the below (but not limited to):

    - "You're never going to fix that on Windows"
    - "This is how you get REAL passwordless deployment in Capistrano"
    - "Have you overlooked this setting/feature?"
    - "I have a rock that can fix anything, you may borrow it"

    Thanks!
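
    One thing worth trying (a hedged sketch, not a guaranteed fix for the NoMethodError): point Capistrano explicitly at the key and the agent and give every connection a pty, so it never has to prompt at all. This is Capistrano 2 syntax and the host names are placeholders.

        # config/deploy.rb
        ssh_options[:forward_agent] = true
        ssh_options[:keys]          = [File.join(ENV["HOME"] || ENV["USERPROFILE"], ".ssh", "id_rsa")]
        default_run_options[:pty]   = true

        role :app, "app1.example.com", "app2.example.com"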


  • Eclipse CDT setup for remote build

    - by Posco Grubb
    Is there a better way to setup Eclipse CDT for local editing and remote building? I am working on a C++ project that uses GNU make in Linux. The code is under CVS on a Linux server. When I'm in the lab, I use Eclipse CDT on a Linux-x64 PC. The project is built on a Linux-x86 PC. All the computers in the lab (including the CVS server) have NFS mounts. When I'm at home, I use Eclipse CDT on a Windows 7 PC. The Windows PC connects to the Linux CVS server via SSH tunnel. To edit source, I rsync the C++ project under the Linux Eclipse workspace back to my Windows Eclipse workspace. (I can also do a remote CVS checkout on the Windows PC.) To build from home, I use a custom build command that SSH's to the Linux-x86 PC, rsync's the C++ project from my Windows Eclipse workspace to my Linux Eclipse workspace, and then runs make on the Liunx-x86 PC, specifying the correct path for the Makefile. In order to go back and forth between lab and home without committing my changes to CVS every time, I use rsync. When I transition from lab to home, I rsync sources to my Windows Eclipse workspace. When I build from home, the sources get rsync'd back to the Linux Eclipse workspace. Is there a better, less wonky way to do this? (I'm NOT interested in remote debugging.)
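
    For what it's worth, a sketch of the kind of wrapper the "custom build command" above boils down to, in case it helps anyone comparing notes; the host names and paths are invented, and on the Windows side it assumes rsync/ssh from Cygwin or similar.

        #!/bin/sh
        # remote-build.sh - sync the local workspace to the Linux build box,
        # then run make there and stream the output back to Eclipse's console.
        SRC="/cygdrive/c/workspace/myproj/"
        DEST="builduser@linux-x86-build:/home/builduser/workspace/myproj/"

        rsync -avz --delete --exclude ".metadata" "$SRC" "$DEST" || exit 1
        ssh builduser@linux-x86-build "make -C /home/builduser/workspace/myproj all"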


  • Problems running XNA game on 64-bit Windows 7

    - by Tesserex
    I'm having problems getting my game engine to run on my brother's machine, which is running 64-bit Windows 7. I'm developing on 32-bit XP SP2. My app uses XNA, FMOD.NET, and another dll I wrote entirely in C#. Everything is targeted to x86, not AnyCPU. I've read that this is required for XNA to work because there is no 64-bit xna framework. I recompiled FMOD.NET as x86 as well and made sure to be using the 32-bit version of the native dll. So I don't see any problems there. However when he tries to run my app, it gives an error which is mysterious, but not unheard of. A FileNotFoundException with an empty file name, and the top of the stack trace is in my main form constructor. The message is The specified module could not be found. (Exception from HRESULT: 0x8007007E) I found some threads online about this error, all with very vague, mixed, and fuzzy responses that don't really help me. Most remind people to target x86. Some say check that they have all the dlls necessary. I gave my brother Microsoft.Xna.Framework.dll, but does he need to install the entire XNA redistributable package? When I take everything I sent him and stick it in a random directory, it still runs fine for me. I developed the game in VS2008, not in game studio, using XNA 3.0 and a Windows Forms control that uses XNA drawing which I found in an msdn tutorial. I would also like to avoid requiring a full installer if possible. Any insight? Please?


  • Flash/uploadify filereference.upload not working, unknown reason

    - by BotskoNet
    I've been using the uploadify jQuery plugin for a long time - this tool has been working on several different servers for quite a while. Recently, on Rackspace Cloud Sites accounts, the uploads stopped working. There's been no change to our code, and our code works perfectly on other servers. I'm having trouble finding the issue; here's what happens:

    - FileReference.upload is called and we provide all of the proper info: url, filename, etc.
    - The URL is http and is correct; when visited through a web browser it works.
    - All debugging and tracing show that our code is running correctly.
    - The DataEvent.UPLOAD_COMPLETE_DATA events never trigger.
    - The ProgressEvent.PROGRESS events never trigger.
    - Server access logs and application traffic logs never show any requests coming to the upload script.

    Based on all of this info, it seems that our script works fine, file.upload is called properly and Event.OPEN is fired, but traffic never reaches the server and no io/http errors are raised. file.upload does return false, but it also returns false on servers where the upload works and the complete events are called. We're not using any firewalls or proxies on our end, and these tools work perfectly on other servers. But the sites that are on Rackspace Cloud Sites all share this problem. However, a site that was in development a few months ago worked just fine on the same Rackspace Cloud Sites account, and trying the Flash build from that version still won't work now. How can I go about finding out what's breaking between file.upload and the request reaching the server?


  • Trouble using the term_extraction gem

    - by mathee
    I'm trying to install this gem: http://github.com/alexrabarts/term_extraction. It requires nokogiri, which I tried installing. I'm getting this as the output:

        >gem install nokogiri
        Successfully installed nokogiri-1.4.2.1-x86-mswin32
        1 gem installed
        Installing ri documentation for nokogiri-1.4.2.1-x86-mswin32...
        No definition for parse_memory
        No definition for parse_file
        No definition for parse_with
        No definition for get_options
        No definition for set_options
        Installing RDoc documentation for nokogiri-1.4.2.1-x86-mswin32...
        No definition for parse_memory
        No definition for parse_file
        No definition for parse_with
        No definition for get_options
        No definition for set_options

    I was able to install the term_extraction gem (per the README in the git repo):

        >gem install alexrabarts-term_extraction -s http://gems.github.com
        Successfully installed alexrabarts-term_extraction-0.1.4
        1 gem installed
        Installing ri documentation for alexrabarts-term_extraction-0.1.4...
        Installing RDoc documentation for alexrabarts-term_extraction-0.1.4...

    The issue is that I'm trying to test it out, but I'm getting an "uninitialized constant" error when I use it:

        ActionView::TemplateError (uninitialized constant ApplicationHelper::TermExtraction) on line #3 of app/views/questions/new.haml:
        1: %h1 New question
        2: -msg = "testing this context thing let's see what it gives me"
        3: -getTerms(msg)
        4: #new-question-form
        5: .box-background
        6: -form_for(@question) do |f|
        app/helpers/application_helper.rb:42:in `getTerms'
        app/views/questions/new.haml:3:in `_run_haml_app47views47questions47new46haml'
        haml (2.2.23) lib/haml/helpers/action_view_mods.rb:13:in `render'
        haml (2.2.23) lib/haml/helpers/action_view_mods.rb:13:in `render'
        app/controllers/questions_controller.rb:91:in `new'
        haml (2.2.23) lib/sass/plugin/rails.rb:20:in `process'

    Here is application_helper.rb:

        module ApplicationHelper
          def getTerms(context)
            yahoo = TermExtraction::Yahoo.new(:api_key => 'myAPIkey', :context => context)
          end
        end

    I'm not sure what the issue is. Any insight would be greatly appreciated!
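
    The uninitialized constant usually just means the gem was never loaded in the Rails process. A hedged sketch for Rails 2.x (the :lib name is taken from the gem's README and is an assumption):

        # config/environment.rb
        Rails::Initializer.run do |config|
          config.gem "alexrabarts-term_extraction",
                     :lib    => "term_extraction",
                     :source => "http://gems.github.com"
        end

        # or, for a quick check outside Rails:
        require "rubygems"
        require "term_extraction"
        yahoo = TermExtraction::Yahoo.new(:api_key => "myAPIkey", :context => "some text")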


  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application, running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GB of RAM. The database is multi-tenant: each customer has his own file, around 5 to 10 GB each, and I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight, and the database is read-only. C# is my favourite language and I want to use ASP.NET MVC. I thought about the following options:

    - Use two big SQL servers running SQL Server 2012 to serve the 32 servers with data; the 32 servers run IIS and provide REST services.
    - Denormalize the database and use Redis on each web server, with Booksleeve as the Redis client.
    - Use a combination of SQL Server and Redis.
    - Use SQL Server 2012 together with Hadoop.
    - Use Hadoop without SQL Server.

    What is the best way for a read-only database to get the best performance without losing maintainability? Does Map-Reduce make sense at all in such a scenario? The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and the interfaces are old-fashioned and not nice to use from a website, especially when using Ajax. The app uses a relational data model with many tables, but it is possible to write one accelerator table on which all queries can be performed, with all other information from the other tables reachable by a simple key lookup.


  • What are the best practices for storing PHP session data in a database?

    - by undefined
    I have developed a web application that uses a web server and database hosted by a web host (on the ground) and a server running on Amazon Web Services EC2. Both servers may be used by a user during a session, and both will need to know some session information about that user. I don't want to POST the information that is needed by both servers because I don't want it to be visible to browsers, Firebug, etc. So I need my session data to persist across servers, and I think that means the best option is to store all or some of the data that I need in the database rather than in a session. The easiest thing to do seems to be to keep the sessions but to POST the session_id between servers and use this as the key to look up the data I need from a 'user_session_data' table in the database. I have looked at Tony Marston's article "Saving PHP Session Data to a database": should I use this, or will a table with the session data that I need and session_id as the key suffice? What would be the downside of creating my own table and set of methods for storing the data I need in the database?
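
    A minimal sketch of the custom-handler route, assuming a sessions table (id CHAR(32) PRIMARY KEY, data TEXT, updated_at INT) reachable from both servers and a $db mysqli connection; Tony Marston's class does essentially the same job in class form. Table and column names are assumptions.

        <?php
        function sess_open($path, $name) { return true; }
        function sess_close()            { return true; }

        function sess_read($id) {
            global $db;
            $stmt = $db->prepare("SELECT data FROM sessions WHERE id = ?");
            $stmt->bind_param("s", $id);
            $stmt->execute();
            $stmt->bind_result($data);
            return $stmt->fetch() ? $data : "";
        }

        function sess_write($id, $data) {
            global $db;
            $stmt = $db->prepare("REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, ?)");
            $now = time();
            $stmt->bind_param("ssi", $id, $data, $now);
            return $stmt->execute();
        }

        function sess_destroy($id) {
            global $db;
            $stmt = $db->prepare("DELETE FROM sessions WHERE id = ?");
            $stmt->bind_param("s", $id);
            return $stmt->execute();
        }

        function sess_gc($max) {
            global $db;
            $db->query("DELETE FROM sessions WHERE updated_at < " . (time() - $max));
            return true;
        }

        // Register the handlers before starting the session.
        session_set_save_handler('sess_open', 'sess_close', 'sess_read',
                                 'sess_write', 'sess_destroy', 'sess_gc');
        session_start();
        ?>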


  • Executing an exe with arguments using Powershell

    - by Chris Charge
    This is what I want to execute:

        c:\Program Files (x86)\SEQUEL ViewPoint\viewpoint.exe /Setvar((POSTSTR $POSTSTR)(POSTEND $POSTEND)) /G:C:\viewpointfile.vpt /D:C:($BEGDATE to $TODDATE).xls

    This is what I have tried:

        $a = "/Setvar((POSTSTR $POSTSTR)(POSTEND $POSTEND))"
        $b = "/G:C:\viewpointfile.vpt"
        $c = "/D:C:($BEGDATE to $TODDATE).xls"
        $Viewpoint = "c:\Program Files (x86)\SEQUEL ViewPoint\viewpoint.exe"
        &$Viewpoint $a $b $c

    When I execute this I receive an error stating:

        File C:\viewpointfile.vpt "/D:C:($BEGDATE to $TODDATE).xls" not found!

    I'm not sure where it gets the extra quotes from. If I run the command with just $a and $b it runs fine. Any help would be greatly appreciated. Thanks! :)

    Update: manojlds suggested echoargs, so here is the output from it:

        &./echoargs.exe $viewpoint $a $b $c
        Arg 0 is C:\Program Files (x86)\SEQUEL ViewPoint\viewpoint.exe
        Arg 1 is /Setvar((POSTSTR 20101123)(POSTEND 20111123))
        Arg 2 is /G:C:\viewpointfile.vpt
        Arg 3 is /D:C:(2010-11-23 to 2011-11-23 PM).xls

    It appears that all the arguments are being passed properly. When I run this as a command in cmd.exe it executes perfectly, so something on PowerShell's end must be messing up the call. Is there any other way to go about executing this command using PowerShell?
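
    Since echoargs shows the individual arguments arriving intact, the remaining suspect is how PowerShell re-quotes the argument containing spaces when it builds the command line for viewpoint.exe. One way to take full control of that (a hedged sketch; the exact quoting viewpoint.exe expects may still need tweaking) is to hand it a single pre-built argument string:

        $exe = "c:\Program Files (x86)\SEQUEL ViewPoint\viewpoint.exe"

        # Build the command line exactly as it would be typed in cmd.exe.
        $argString = "/Setvar((POSTSTR $POSTSTR)(POSTEND $POSTEND)) " +
                     "/G:C:\viewpointfile.vpt " +
                     "`"/D:C:($BEGDATE to $TODDATE).xls`""

        Start-Process -FilePath $exe -ArgumentList $argString -Wait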


  • Trouble connecting to MySQL DB (PHP)

    - by user332817
    Hi, I have the following PHP code to connect to my db:

        <?php
        ob_start();
        $host="localhost";   // Host name
        $username="root";    // Mysql username
        $password="";        // Mysql password
        $db_name="test";     // Database name
        $tbl_name="members"; // Table name

        // Connect to server and select database.
        mysql_connect("$host", "$username", "$password") or die("cannot connect");
        ?>

    However, I get the following error:

        Warning: mysql_connect() [function.mysql-connect]: [2002] A connection attempt failed because the connected party did not (trying to connect via tcp://localhost:3306) in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11
        Warning: mysql_connect() [function.mysql-connect]: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11
        Fatal error: Maximum execution time of 30 seconds exceeded in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11

    I am able to add a db/tables via phpMyAdmin, but I can't connect using PHP. Here is a screenshot of my phpMyAdmin page: http://img294.imageshack.us/img294/1589/sqls.jpg. Any help would be appreciated, thanks in advance.
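
    A hedged check worth trying: the symptoms often come down to the hostname or port the bundled MySQL is actually listening on, so connect to the IPv4 loopback explicitly. The port and credentials below are assumptions; compare them with EasyPHP's MySQL configuration.

        <?php
        // On some Windows setups "localhost" resolves to ::1 first, which MySQL
        // may not be listening on; 127.0.0.1 forces IPv4.
        $host = "127.0.0.1";   // append ":3306" (or EasyPHP's actual port) if needed
        $link = mysql_connect($host, "root", "");
        if (!$link) {
            die("cannot connect: " . mysql_error());
        }
        mysql_select_db("test", $link) or die("cannot select db: " . mysql_error());
        echo "connected";
        ?>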


  • How to configure Server Topology for exposing an internal application for external access?

    - by ronaldwidha
    Hi all, I guess this question is bordering on a Server Fault question. I'd like to know the best configuration for exposing an internal application (in this case a load-balanced ASP.NET MVC application) for external access. More details about the situation:

    - The ASP.NET MVC application is currently running on 2 servers.
    - The 2 servers are behind a Windows Network Load Balancer.
    - All the servers are on-premise, on the internal network.

    I'm thinking of introducing an F5 load balancer in an off-premise DMZ to replace the Windows Network Load Balancer. The F5 will act as the public traffic gateway and load balancer for the 2 servers. However, I'd like internal users to not have to go through the Internet to access the app. The idea that I have so far is to keep both the Windows Network Load Balancer and the F5: each appliance will have its own IP and its own domain name, so external users use the public domain name, which hits the F5, whereas internal users use the internal domain name, which hits the Windows Network Load Balancer. Is this a good idea? Or is there a better way of doing this?


  • Web services or shared database for (game) server communication?

    - by jaaronfarr
    We have 2 server clusters: the first is made up of typical web applications backed by SQL databases. The second are highly optimized multiplayer game servers which keep all data in memory. Both clusters communicate with clients via HTTP (Ajax with JSON). There are a few cases in which we need to share data between the two server types, for example, reporting back and storing the results of a game (which should ultimately end up in the database). We're considering several approaches for inter-server communication:

    - Just share the MySQL databases between clusters (introduce SQL to the game servers)
    - Share data in a distributed key-value store like Memcache, Redis, etc.
    - Use an RPC technology like Google ProtoBufs or Apache Thrift
    - Use RESTful web services (the game server would POST back to the web servers, for example)

    At the moment, we're leaning towards web services or just sharing the database. Sharing the database seems easy, but we're concerned this adds extra memory and a new dependency into the game servers. Web services provide good separation of concerns and fit with the existing Ajax we use, but add complexity, overhead and many more ways for communication to fail. Are there any other good reasons not to use one or the other approach? Which would be easier to scale?


  • Help Installing phpMyAdmin on Linux Server

    - by Joe Majewski
    I constantly get this error when attempting to view index.php in the phpMyAdmin folder I have set up:

        Cannot start session without errors, please check errors given in your PHP and/or webserver log file and configure your PHP installation properly.

    I am using Subversion with three co-workers and I am trying to install this in my repository, which is at /svni3/myusername/intranet/ on our Linux development server. I do have shell access, so I would be able to install it somewhere else if that may be causing the problem. My config.inc.php file looks like this (only the things I changed):

        $cfg['blowfish_secret'] = 'blah';
        $cfg['Servers'][$i]['auth_type'] = 'cookie';
        //$cfg['Servers'][$i]['user'] = 'username';
        //$cfg['Servers'][$i]['password'] = 'password';
        $cfg['Servers'][$i]['host'] = 'localhost';

    Other helpful information: when I ping localhost it works, and "ls -l /var/lib/php" shows:

        drwxrwx--- 2 root apache 4096 2009-04-17 02:31 session

    If there's anything else that may be helpful, let me know; this has been plaguing me for a few hours now.
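
    That error almost always means PHP cannot write its session files. Two hedged things to check; which user your web server runs PHP as is an assumption here ("apache"), and the alternative path is just an example under your own account:

        # A) If you have root: make sure the session directory is writable by the
        #    user/group the web server runs PHP as.
        ls -ld /var/lib/php/session
        chgrp apache /var/lib/php/session
        chmod 770 /var/lib/php/session

        # B) Without root: give phpMyAdmin its own session directory you own,
        #    then point PHP at it from the top of config.inc.php:
        mkdir -p ~/phpmyadmin-sessions
        chmod 700 ~/phpmyadmin-sessions
        #    ini_set('session.save_path', '/svni3/myusername/phpmyadmin-sessions');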


  • A pseudo-listener for AlwaysOn Availability Groups for SQL Server virtual machines running in Azure

    - by MikeD
    I am involved in a project that is implementing SharePoint 2013 on virtual machines hosted in Azure. The back end data tier consists of two Azure VMs running SQL Server 2012, with the SharePoint databases contained in an AlwaysOn Availability Group. I used this "Tutorial: AlwaysOn Availability Groups in Windows Azure (GUI)" to help me implement this setup.Because Azure DHCP will not assign multiple unique IP addresses to the same VM, having an AG Listener in Azure is not currently supported.  I wanted to figure out another mechanism to support a "pseudo listener" of some sort. First, I created a CNAME (alias) record in the DNS zone with a short TTL (time to live) of 5 minutes (I may yet make this even shorter). The record represents a logical name (let's say the alias is SPSQL) of the server to connect to for the databases in the availability group (AG). When Server1 was hosting the primary replica of the AG, I would set the CNAME of SPSQL to be SERVER1. When the AG failed over to Server1, I wanted to set the CNAME to SERVER2. Seemed simple enough.(It's important to point out that the connection strings for my SharePoint services should use the CNAME alias, and not the actual server name. This whole thing falls apart otherwise.)To accomplish this, I created identical SQL Agent Jobs on Server1 and Server2, with two steps:1. Step 1: Determine if this server is hosting the primary replica.This is a TSQL step using this script:declare @agName sysname = 'AGTest'set nocount on declare @primaryReplica sysnameselect @primaryReplica = agState.primary_replicafrom sys.dm_hadr_availability_group_states agState   join sys.availability_groups ag on agstate.group_id = ag.group_id   where ag.name = @AGname if not exists(   select *    from sys.dm_hadr_availability_group_states agState   join sys.availability_groups ag on agstate.group_id = ag.group_id   where @@Servername = agstate.primary_replica    and ag.name = @AGname)begin   raiserror ('Primary replica of %s is not hosted on %s, it is hosted on %s',17,1,@Agname, @@Servername, @primaryReplica) endThis script determines if the primary replica value of the AG group is the same as the server name, which means that our server is hosting the current AG (you should update the value of the @AgName variable to the name of your AG). If this is true, I want the DNS alias to point to this server. If the current server is not hosting the primary replica, then the script raises an error. Also, if the script can't be executed because it cannot connect to the server, that also will generate an error. For the job step settings, I set the On Failure option to "Quit the job reporting success". The next step in the job will set the DNS alias to this server name, and I only want to do that if I know that it is the current primary replica, otherwise I don't want to do anything. I also include the step output in the job history so I can see the error message.Job Step 2: Update the CNAME entry in DNS with this server's name.I used a PowerShell script to accomplish this:$cname = "SPSQL.contoso.com"$query = "Select * from MicrosoftDNS_CNAMEType"$dns1 = "dc01.contoso.com"$dns2 = "dc02.contoso.com"if ((Test-Connection -ComputerName $dns1 -Count 1 -Quiet) -eq $true){    $dnsServer = $dns1}elseif ((Test-Connection -ComputerName $dns2 -Count 1 -Quiet) -eq $true) {   $dnsServer = $dns2}else{  $msg = "Unable to connect to DNS servers: " + $dns1 + ", " + $dns2   Throw $msg}$record = Get-WmiObject -Namespace "root\microsoftdns" -Query $query -ComputerName $dnsServer  | ? 
{ $_.Ownername -match $cname }$thisServer = [System.Net.Dns]::GetHostEntry("LocalHost").HostName + "."$currentServer = $record.RecordData if ($currentServer -eq $thisServer ) {     $cname + " CNAME is up to date: " + $currentServer}else{    $cname + " CNAME is being updated to " + $thisServer + ". It was " + $currentServer    $record.RecordData = $thisServer    $record.put()}This script does a few things:finds a responsive domain controller (Test-Connection does a ping and returns a Boolean value if you specify the -Quiet parameter)makes a WMI call to the domain controller to get the current CNAME record value (Get-WmiObject)gets the FQDN of this server (GetHostEntry)checks if the CNAME record is correct and updates it if necessary(You should update the values of the variables $cname, $dns1 and $dns2 for your environment.)Since my domain controllers are also hosted in Azure VMs, either one of them could be down at any point in time, so I need to find a DC that is responsive before attempting the DNS call. The other little thing here is that the CNAME record contains the FQDN of a machine, plus it ends with a period. So the comparison of the CNAME record has to take the trailing period into account. When I tested this step, I was getting ACCESS DENIED responses from PowerShell for the Get-WmiObject cmdlet that does a remote lookup on the DC. This occurred because the SQL Agent service account was not a member of the Domain Admins group, so I decided to create a SQL Credential to store the credentials for a domain administrator account and use it as a PowerShell proxy (rather than give the service account Domain Admins membership).In SQL Management Studio, right click on the Credentials node (under the server's Security node), and choose New Credential...Then, under SQL Agent-->Proxies, right click on the PowerShell node and choose New Proxy...Finally, in the job step properties for the PowerShell step, select the new proxy in the Run As drop down.I created this two step Job on both nodes of the Availability Group, but if you had more than two nodes, just create the same job on all the servers. I set the schedule for the job to execute every minute.When the server that is hosting the primary replica is running the job, the job history looks like this:The job history on the secondary server looks like this: When a failover occurs, the SQL Agent job on the new primary replica will detect that the CNAME needs to be updated within a minute. Based on the TTL of the CNAME (which I said at the beginning was 5 minutes), the SharePoint servers will get the new alias within five minutes and should be able to reconnect. I may want to shorten up the TTL to reduce the time it takes for the client connections to use the new alias. Using a DNS CNAME and a SQL Agent Job on all servers hosting AG replicas, I was able to create a pseudo-listener to automatically change the name of the server that was hosting the primary replica, for a scenario where I cannot use a regular AG listener (in this case, because the servers are all hosted in Azure).    


  • SPARC T3-1 Record Results Running JD Edwards EnterpriseOne Day in the Life Benchmark with Added Batch Component

    - by Brian
    Using Oracle's SPARC T3-1 server for the application tier and Oracle's SPARC Enterprise M3000 server for the database tier, a world record result was produced running the Oracle's JD Edwards EnterpriseOne applications Day in the Life benchmark run concurrently with a batch workload. The SPARC T3-1 server based result has 25% better performance than the IBM Power 750 POWER7 server even though the IBM result did not include running a batch component. The SPARC T3-1 server based result has 25% better space/performance than the IBM Power 750 POWER7 server as measured by the online component. The SPARC T3-1 server based result is 5x faster than the x86-based IBM x3650 M2 server system when executing the online component of the JD Edwards EnterpriseOne 9.0.1 Day in the Life benchmark. The IBM result did not include a batch component. The SPARC T3-1 server based result has 2.5x better space/performance than the x86-based IBM x3650 M2 server as measured by the online component. The combination of SPARC T3-1 and SPARC Enterprise M3000 servers delivered a Day in the Life benchmark result of 5000 online users with 0.875 seconds of average transaction response time running concurrently with 19 Universal Batch Engine (UBE) processes at 10 UBEs/minute. The solution exercises various JD Edwards EnterpriseOne applications while running Oracle WebLogic Server 11g Release 1 and Oracle Web Tier Utilities 11g HTTP server in Oracle Solaris Containers, together with the Oracle Database 11g Release 2. The SPARC T3-1 server showed that it could handle the additional workload of batch processing while maintaining the same number of online users for the JD Edwards EnterpriseOne Day in the Life benchmark. This was accomplished with minimal loss in response time. JD Edwards EnterpriseOne 9.0.1 takes advantage of the large number of compute threads available in the SPARC T3-1 server at the application tier and achieves excellent response times. The SPARC T3-1 server consolidates the application/web tier of the JD Edwards EnterpriseOne 9.0.1 application using Oracle Solaris Containers. Containers provide flexibility, easier maintenance and better CPU utilization of the server leaving processing capacity for additional growth. A number of Oracle advanced technology and features were used to obtain this result: Oracle Solaris 10, Oracle Solaris Containers, Oracle Java Hotspot Server VM, Oracle WebLogic Server 11g Release 1, Oracle Web Tier Utilities 11g, Oracle Database 11g Release 2, the SPARC T3 and SPARC64 VII+ based servers. This is the first published result running both online and batch workload concurrently on the JD Enterprise Application server. No published results are available from IBM running the online component together with a batch workload. The 9.0.1 version of the benchmark saw some minor performance improvements relative to 9.0. When comparing between 9.0.1 and 9.0 results, the reader should take this into account when the difference between results is small. Performance Landscape JD Edwards EnterpriseOne Day in the Life Benchmark Online with Batch Workload This is the first publication on the Day in the Life benchmark run concurrently with batch jobs. The batch workload was provided by Oracle's Universal Batch Engine. 
System RackUnits Online Users Resp Time (sec) BatchConcur(# of UBEs) BatchRate(UBEs/m) Version SPARC T3-1, 1xSPARC T3 (1.65 GHz), Solaris 10 M3000, 1xSPARC64 VII+ (2.86 GHz), Solaris 10 4 5000 0.88 19 10 9.0.1 Resp Time (sec) — Response time of online jobs reported in seconds Batch Concur (# of UBEs) — Batch concurrency presented in the number of UBEs Batch Rate (UBEs/m) — Batch transaction rate in UBEs/minute. JD Edwards EnterpriseOne Day in the Life Benchmark Online Workload Only These results are for the Day in the Life benchmark. They are run without any batch workload. System RackUnits Online Users ResponseTime (sec) Version SPARC T3-1, 1xSPARC T3 (1.65 GHz), Solaris 10 M3000, 1xSPARC64 VII (2.75 GHz), Solaris 10 4 5000 0.52 9.0.1 IBM Power 750, 1xPOWER7 (3.55 GHz), IBM i7.1 4 4000 0.61 9.0 IBM x3650M2, 2xIntel X5570 (2.93 GHz), OVM 2 1000 0.29 9.0 IBM result from http://www-03.ibm.com/systems/i/advantages/oracle/, IBM used WebSphere Configuration Summary Hardware Configuration: 1 x SPARC T3-1 server 1 x 1.65 GHz SPARC T3 128 GB memory 16 x 300 GB 10000 RPM SAS 1 x Sun Flash Accelerator F20 PCIe Card, 92 GB 1 x 10 GbE NIC 1 x SPARC Enterprise M3000 server 1 x 2.86 SPARC64 VII+ 64 GB memory 1 x 10 GbE NIC 2 x StorageTek 2540 + 2501 Software Configuration: JD Edwards EnterpriseOne 9.0.1 with Tools 8.98.3.3 Oracle Database 11g Release 2 Oracle 11g WebLogic server 11g Release 1 version 10.3.2 Oracle Web Tier Utilities 11g Oracle Solaris 10 9/10 Mercury LoadRunner 9.10 with Oracle Day in the Life kit for JD Edwards EnterpriseOne 9.0.1 Oracle’s Universal Batch Engine - Short UBEs and Long UBEs Benchmark Description JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations. Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and other manufacturing processes, such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company. The workload consists of online transactions and the UBE workload of 15 short and 4 long UBEs. LoadRunner runs the DIL workload, collects the user’s transactions response times and reports the key metric of Combined Weighted Average Transaction Response time. The UBE processes workload runs from the JD Enterprise Application server. Oracle's UBE processes come as three flavors: Short UBEs < 1 minute engage in Business Report and Summary Analysis, Mid UBEs > 1 minute create a large report of Account, Balance, and Full Address, Long UBEs > 2 minutes simulate Payroll, Sales Order, night only jobs. The UBE workload generates large numbers of PDF files reports and log files. The UBE Queues are categorized as the QBATCHD, a single threaded queue for large UBEs, and the QPROCESS queue for short UBEs run concurrently. One of the Oracle Solaris Containers ran 4 Long UBEs, while another Container ran 15 short UBEs concurrently. The mixed size UBEs ran concurrently from the SPARC T3-1 server with the 5000 online users driven by the LoadRunner. Oracle’s UBE process performance metric is Number of Maximum Concurrent UBE processes at transaction rate, UBEs/minute. 
Key Points and Best Practices Two JD Edwards EnterpriseOne Application Servers and two Oracle Fusion Middleware WebLogic Servers 11g R1 coupled with two Oracle Fusion Middleware 11g Web Tier HTTP Server instances on the SPARC T3-1 server were hosted in four separate Oracle Solaris Containers to demonstrate consolidation of multiple application and web servers. See Also SPARC T3-1 oracle.com SPARC Enterprise M3000 oracle.com Oracle Solaris oracle.com JD Edwards EnterpriseOne oracle.com Oracle Database 11g Release 2 Enterprise Edition oracle.com Disclosure Statement Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 6/27/2011.


  • SQL Sentry First Impressions

    - by AjarnMark
    After struggling to defend my SQL Servers from a political attack recently, I realized that I needed better tools to back me up, and SQL Sentry is the leading candidate. A couple of weeks ago, seemingly from out of nowhere, complaints from the business users started coming in that one of the core internal applications was running dramatically slower than normal, and fingers were being pointed at the SQL Server.  Unfortunately, we don’t have a production DBA whose entire job is to monitor and maintain our SQL Servers.  The responsibility falls to me to do the best I can, investing only a small portion of my time, because there are so many other responsibilities to take care of, and our industry is still deep in recession.  I inherited these SQL Servers and have made significant improvements in process and procedure, but I had not yet made the time to take real baseline measurements or keep a really close eye on the performance.  Like many DBAs, I wrote several of my own tools and used the “built-in tools” like Profiler, PerfMon, and sp_who2 (did I mention most of our instances are SQL Server 2000?).  These have all served me well for in-the-moment troubleshooting and maintenance, but they really fell down on the job when I was called upon to “prove” that SQL Server performance was acceptable and more importantly had not degraded recently (i.e. historical comparisons).  I really didn’t have anything from a historical comparison perspective, but I was able to show that current performance was acceptable, and deflect attention back onto other components (which in fact turned out to be the real culprit). That experience dramatically illustrated the need for better monitoring tools.  Coincidentally, I had been talking recently to my boss about the mini nightmare of monitoring several critical and interdependent overnight jobs that operate on separate instances of SQL Server.  Among other tools, I had been using Idera’s SQL Job Manager which is a free tool and did a nice job of showing me job schedules and histories in a nice calendar view.  This worked fairly well, and for the money (did I mention it was free?) it couldn’t be beat.  But it is based on the stored job history in MSDB, and there were other performance problems that we ran into when we started changing the settings for how much job history to retain, in order to be able to look back a month or more in the calendar view.  Another coincidence (if you believe in such things) was that when we had some of those performance challenges, I posted a couple of questions to the #sqlhelp hashtag on Twitter and Greg Gonzalez (@SQLSensei) suggested I check out SQL Sentry’s Event Manager.  At the time, I just thought he worked there, but later found out that he founded the company.  When I took a quick look at the features & benefits, the one that really jumped out at me is Chaining and Queueing which sounded like it would really help with our “interdependent jobs on different servers” issue. I know that is a lot of background story and coincidences, but hopefully you have stuck with me so far, and now we have arrived at the point where last week I downloaded and installed the 30-day trial of the SQL Sentry Power Suite, which is Event Manager plus Performance Advisor.  And I must say that I really like what I see so far.  Here are a few highlights: Great Support.  I had two issues getting the trial setup and monitoring a handful of our servers.  
One of which was entirely my fault (missed a security setting in SQL 2008) and the other was mostly my fault (late change to some config settings that were apparently cached and did not get refreshed properly).  In both cases, the support staff at SQL Sentry were very responsive and rather quickly figured out what the cause and fix was for each of them.  This left me with a great impression of the company.  Kudos to them! Chaining and Queueing.  While I have not yet activated this feature, I am very excited about the possibilities.  We have jobs on three different instances of SQL Server that have to be run in a certain order, and each has to finish before the next can successfully begin, and I believe this feature will ensure just that.  It has been a real pain in the backside when one of those jobs runs just a little too long and does not finish before the job on another instance starts, thus triggering a chain reaction of either outright job failures, or worse, successful completion of completely invalid processing. Calendar View.  I really, really like the Event Manager calendar view where I can see all jobs and events across all instances and identify potential resource contention as well as windows of opportunity for maintenance activity.  Very well done, and based on Event Manager’s own database of accumulated historical information rather than querying the source instances every time. Performance Advisor Dashboard History View.  This view let’s me quickly select a date and time range and it displays graphs of key SQL Server and Windows metrics.  This is exactly the thing I needed to answer the “has performance changed recently” question at the beginning of this post. Reporting Services Subscription Jobs with Report Name.  This was a big and VERY pleasant surprise.  If you have ever looked at the list of SQL Server jobs that SQL Server Reporting Services creates when you make a Subscription, you will notice that they all have some sort of GUID as the name of the job.  This is really ugly, and really annoying because when you are just looking at the SQL Agent and Job Activity Monitor, if you see that Job X failed, you really do not have any indication in the name or the properties of the Job itself, as to what Report that was for.  But with SQL Sentry Event Manager you do.  The Jobs list in the Navigator pane in SQL Sentry, amazingly, displays the name of the Report that the Subscription Job is for.  And when you open it to see more details, it shows you the full Reporting Services path to that Report, so you can immediately track it down in the Report Manager in case you want to identify/notify the owner or edit the Subscription information.  I did not expect this at all, but I sure do like it.  HOORAY! That is just my first impressions from using the tools for a few days.  And I haven’t even gotten into how it showed me where I was completely mistaken about one aspect of my SQL Server disk configurations.  I’ll share that lesson in another blog entry.  But I have to say it again, the combination of Event Manager and Performance Advisor working together have really made me a fan.


  • How to create a virtual network with Azure Connect

    - by Herve Roggero
    If you are trying to establish a virtual network between machines located in disparate networks, you can either use VPN, Virtual Network or Azure Connect. If you want to establish a connection between machines located in Windows Azure, you should consider using the Virtual Network service. If you want to establish a connection between local machines and Virtual Machines in Windows Azure, you may be able to use your existing VPN device (assuming you have one), as long as the device is supported by Microsoft. If the VPN device you are using isn’t supported, or if you are trying to create a virtual network between machines from disparate networks (such as machines located in another cloud provider), you can use Azure Connect. This blog post explains how Azure Connect can help you create virtual networks between multiple servers in the cloud, various servers in different cloud environments, and on-premise. Note: Azure Connect is currently in Technical Preview. About Azure Connect Let’s do a quick review of Azure Connect. This technology implements an IPSec tunnel from machines to to a relay service located in the Microsoft cloud (Azure). So in essence, Azure Connect doesn’t provide a point-to-point connection between machines; the network communication is tunneled through the relay service. The relay service in turn offers a mechanism to enforce basic communication rules that you define through Groups. We will review this later. You could network two or more VMs in the Azure cloud (although you should consider using a Virtual Network if you go this route), or servers in the Azure cloud and other machines in the Amazon cloud for example, or even two or more on-premise servers located in different locations for which a direct network connection is not an option. You can place any number of machines in your topology. Azure Connect gives you great flexibility on how you want to build your virtual network across various environments. So Azure Connect makes sense when you want to: Connect machines located in different cloud providers Connect on-premise machines running in different locations Connect Azure VMs with on-premise (if you do not have a VPN device, or if your device is not supported) Connect Azure Roles (Worker Roles, Web Roles) with on-premise servers or in other cloud providers The diagram below shows you a high level network topology that involves machines in the Windows Azure cloud, other cloud providers and on-premise. You should note that the only required component in this diagram is the Relay itself. The other machines are optional (although your network is useful only if you have two or more machines involved). Relay agents are currently available in three geographic areas: US, Europe and Asia. You can change which region you want to use in the Windows Azure management portal. High Level Network Topology With Azure Connect Azure Connect Agent Azure Connect establishes a virtual network and creates virtual adapters on your machines; these virtual adapters communicate through the Relay using IPSec. This is achieved by installing an agent (the Azure Connect Agent) on all the machines you want in your network topology. However, you do not need to install the agent on Worker Roles and Web Roles; that’s because the agent is already installed for you. Any other machine, including Virtual Machines in Windows Azure, needs the agent installed.  To install the agent, simply go to your Windows Azure portal (http://windows.azure.com) and click on Networks on the bottom left panel. 
You will see a list of subscriptions under Connect. If you select a subscription, you will be able to click on the Install Local Endpoint icon on top. Clicking on this icon will begin the download and installation process for the agent. Activating Roles for Azure Connect As previously mentioned, you do not need to install the Azure Connect Agent on Worker Roles and Web Roles because it is already loaded. However, you do need to activate them if you want the roles to participate in your network topology. To do this, you will need to click on the Get Activation Token icon. The activation token must then be copied and placed in the configuration file of your roles. For more information on how to perform this step, visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg432964.aspx. Firewall Rules Note that specific firewall rules must exist to allow the agent to communicate through the Relay. You will need to allow TCP 443 and ICMPv6. For additional information, please visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg433061.aspx. CA Certificates You can optionally require agents to sign their activation request with the Relay using a trusted certificate issued by a Certificate Authority (CA). Click on Activation Options to learn more. Groups To create your network topology you must first create a group. A group represents a logical container of endpoints (or machines) that can communicate through the Relay. You can create multiple groups allowing you to manage network communication differently. For example you could create a DEVELOPMENT group and a PRODUCTION group. To add an endpoint you must first install an agent that will create a virtual adapter on the machine on which it is installed (as discussed in the previous section). Once you have created a group and installed the agents, the machines will appear in the Windows Azure management portal and you can start assigning machines to groups. The next figure shows you that I created a group called LocalGroup and assigned two machines (both on-premise) to that group. Groups and Computers in Azure Connect As I mentioned previously you can allow these machines to establish a network connection. To do this, you must enable the Interconnected option in the group. The following diagram shows you the definition of the group. In this topology I chose to include local machines only, but I could also add worker roles and web roles in the Azure Roles section (you must first activate your roles, as discussed previously). You could also add other Groups, allowing you to manage inter-group communication. Defining a Group in Azure Connect Testing the Connection Now that my agents have been installed on my two machines, the group defined and the Interconnected option checked, I can test the connection between my machines. The next screenshot shows you that I sent a PING request to DEVLAP02 from DEVDSK02. The PING request was successful. Note however that the time is in the hundreds of milliseconds on average. That is to be expected because the machines are connecting through the Relay located in the cloud. Going through the Relay introduces an extra hop in the communication chain, so if your systems rely on high performance, you may want to conduct some basic performance tests. Sending a PING Request Through The Relay Conclusion As you can see, creating a network topology between machines using the Azure Connect service is simple. 
It took me less than five minutes to create the above configuration, including the time it took to install the Azure Connect agents on the two machines. The flexibility of Azure Connect allows you to create a virtual network between disparate environments, as long as your operating systems are supported by the agent. For more information on Azure Connect, visit the MSDN website at http://msdn.microsoft.com/en-us/library/windowsazure/gg432997.aspx. About Herve Roggero Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net. Special Thanks I would like thank those that helped me figure out how Azure Connect works: Marcel Meijer - http://blogs.msmvps.com/marcelmeijer/ Michael Wood - Http://www.mvwood.com Glenn Block - http://www.codebetter.com/glennblock Yves Goeleven - http://cloudshaper.wordpress.com/ Sandrino Di Mattia - http://fabriccontroller.net/ Mike Martin - http://techmike2kx.wordpress.com

