Search Results

Search found 18460 results on 739 pages for 'terminal services'.

Page 255/739

  • (GUIDE) How to install and configure MariaDB on Ubuntu 12.10+

    - by Myh Yazid
    First of all, open a terminal and type: sudo apt-get install python-software-properties
    I recommend MariaDB version 10.0.4 Alpha because when I installed it I got no errors, compared with the 5.5 version.
    2: Put these commands in the terminal:
    sudo apt-get install software-properties-common
    sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
    sudo add-apt-repository 'deb http://download.nus.edu.sg/mirror/mariadb/repo/10.0/ubuntu quantal main'
    If you're using another version, change "quantal" to your Ubuntu version codename, e.g.: 13.10 saucy, 13.04 raring, 12.10 quantal (the version I'm using), 12.04 precise.
    3: Type in these commands:
    sudo apt-get update
    sudo apt-get install mariadb-server
    4: After MariaDB has finished installing, run:
    sudo mysql_install_db
    sudo /usr/bin/mysql_secure_installation
    If you have a problem, see the end of this post for solutions.
    5: You're done!!
    Problem solving:
    In step 3, if you get a problem like unmet dependencies, go to /etc/apt/preferences.d and create a new file called "mariadb". Then put the code below in the mariadb file you just created:
    Package: *
    Pin: origin <mirror-domain>
    Pin-Priority: 1000
    In step 4 you may get 2 errors. The first occurs when you run sudo mysql_install_db. Solution: open another terminal and run killall mysqld. The second error may occur when you run the sudo /usr/bin/mysql_secure_installation command. Solution: cd to /etc/init.d and try running ./mysqld start; if ./mysqld doesn't exist, use ./mysql start, as in my case.
    That's all, thank you for reading. I wrote this based on my experience installing it. If you get any errors, the community here can help you if I can't.

    Read the article

  • Removed Java and replaced it with the newest "Sun Java"; disc won't boot, and won't let me re-install GRUB using the boot repair disc

    - by Al Rowe
    Had a minor problem with my stock market platform: the set-up screen would freeze the program. Called their tech support and got their "Linux guy", who advised removing all Java and replacing it, not with the Synaptic version, but with the newest Sun Java. After removing it, the computer auto-rebooted and went to a blue mem-test screen. It showed no errors, but I couldn't get back in. Tried two versions of the boot repair disc from ISO (checked md5sum, showed good), but the fix aborted, reporting an apt error detected. Opened a terminal and typed (or copy/pasted): sudo chroot "/mnt/boot-sav/sda1" apt-get -f install. My system is Ubuntu 12.04. Had a few very minor issues from install, all fixed. Also added some of my favorite GNOME tricks just to make life easier, but none that could have caused this: a script to add shortcuts to the desktop, open a terminal from inside any menu, access a root terminal, etc. The system was firewalled and using Avast antivirus (OK, I'm paranoid; used to do Windows sys-op and security), but I'm a relative newbie to Linux.

    Read the article

  • Issues with Cinnamon?

    - by Corrodie
    I just recently switched my system over to Ubuntu 12.10 and decided on Cinnamon as my environment. It all worked fine at first, but I was poorly educated and started using Compiz and Emerald along with it, setting both as replacements in the startup processes. I now know that's a big, big mistake. Now when loading Cinnamon I am greeted by my background image, and only that; my only option seems to be to open a terminal. I was advised to attempt muffin --replace and mutter --replace, neither to any avail: the terminal closes, and I cannot load another one unless I completely reload. I went back to Unity, purged and autoremoved Cinnamon, Emerald, and compizconfig, and attempted to reinstall Cinnamon, thinking that would solve the problem. No, it came back just as broken as before. So I reinstalled Ubuntu, then Cinnamon, and it is still broken. I'm assuming I must find a way to remove the replace commands, but as I have no menu, I'm not positive I can do that. Is there any way I can access the startup processes via the terminal? I'd think, though, that if I completely removed Cinnamon, all configurations would be gone too, so it's just not making much sense. Is there some kind of reset I could possibly do? I've been browsing forums and questions here, all leading to things I'd already done, so it can't hurt to ask for myself. I apologize if you would rather I had posted this over at Mint. Next time I will definitely check compatibility instead of assuming something just has to work. Any help is greatly appreciated, thanks! EDIT: It seems that although it didn't allow me to do it before, I was now able to access the settings and startup processes for Cinnamon via Unity, and after quickly removing the aforementioned processes, I'm up and running again.

    Read the article

  • running adb via wifi on ubuntu 13.04

    - by Blaze Tama
    First, I'm new to Linux, so please bear with me. I have a rooted Android phone, and I was able to run adb over the wifi network on Windows, where I just needed to go to adb's directory and type adb connect. However, I can't just do that in Ubuntu. Every time I enter adb connect, the terminal says that I don't have adb and must install it. When I check the ADT Bundle (I downloaded the bundled one from here), I can see adb is there inside the platform-tools folder. I already tried changing directory to platform-tools and running adb connect from there, but it's still not working. Do I need to install adb again via the terminal? Or did I miss something? You may be wondering why I don't just download adb (again) via the terminal and do trial and error. The answer is that I don't have a good internet connection, so I want to avoid unnecessary downloads. Thanks for your time :D

    Read the article

  • Massive Xubuntu desktop malfunction

    - by viktiglemma
    I'm using Xubuntu 11.04. Everything worked fine before, but when booting today the following happened:
    1) Window focus does not leave the first-opened program. This means that if I keep Firefox open and open a terminal, window focus will never be transferred to the terminal. (EDIT: if I open a terminal first and then open Firefox, Firefox steals focus.)
    2) Window menus have disappeared. The maximize, minimize, etc. buttons and the menu are gone.
    3) In the Xfce Settings Manager, the "Window Manager" settings window is empty. There is just a gray screen there, so I cannot modify any window settings.
    4) The keyboard shortcuts I had previously defined using the Settings Manager no longer work. Further, ALT-TAB no longer works for cycling between windows.
    5) The mouse pointer does not show when I first log in. I have to log out and log in again (with an invisible pointer) before the mouse shows itself.
    EDIT: 6) I cannot resize or move the Thunderbird window, but I can move the Firefox window.
    What can I do to troubleshoot this?

    Read the article

  • receiving "command not found" error messages after fresh reinstall of Lubuntu 14.04

    - by user236378
    Lubuntu 14.04 was working really great... until I messed up and had to do a complete fresh reinstall. Now I receive error messages when I input commands into the terminal, even immediately after completing the fresh install. For example, if I type sudo leafpad /etc/default/ or sudo leafpad /etc/default/grub I get: sudo: leafpad: command not found. If I type sudo update-initramfs -u or sudo update-grub I get: sudo: update-initramfs: command not found or sudo: update-grub: command not found. If I use the command mkdir I get: mkdir: command not found. I also get this same exact error message, command not found, with sudo apt-get and wget. In other words, I can't do anything that I was able to do before when inputting commands into the terminal, so I cannot add any repositories or update anything at all. I am not really sure what is causing the problem(s). It appeared to me that Lubuntu installed and booted up OK; however, as soon as I enter anything into the terminal I immediately get the above error messages. I have tried the reinstall three times, with the same error messages. If anyone can suggest any fixes I would really appreciate it very much. Thank you!

    Read the article

  • I can't get grub menu to show up during boot

    - by wim
    After trying (and failing) to install better ATI drivers in 11.10, I've somehow lost my GRUB menu at boot time. The screen does change to the familiar purple colour, but instead of a list of boot options it's just a blank solid colour, which then disappears quickly and boots into the default entry normally. How can I get the bootloader back? I've tried sudo update-grub and also various different combinations of resolutions and colour depths in the startupmanager application with no success (640x480, 1024x768, 1600x1200, 16 bits, 8 bits, 10 second delay, 7 second delay, 2 second delay...). Edit: I have already tried holding down Shift during bootup and it does not seem to change the behaviour. I get the message "GRUB Loading" in the terminal, but then in the place where the GRUB menu normally appears I get a solid blank magenta screen for a while. Here are the contents of /etc/default/grub:
    # If you change this file, run 'update-grub' afterwards to update
    # /boot/grub/grub.cfg.
    # For full documentation of the options in this file, see:
    # info -f grub -n 'Simple configuration'
    GRUB_DEFAULT=0
    GRUB_HIDDEN_TIMEOUT=0
    GRUB_HIDDEN_TIMEOUT_QUIET=true
    GRUB_TIMEOUT=10
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
    GRUB_CMDLINE_LINUX=" vga=798 splash"
    # Uncomment to enable BadRAM filtering, modify to suit your needs
    # This works with Linux (no patch required) and with any kernel that obtains
    # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
    #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
    # Uncomment to disable graphical terminal (grub-pc only)
    #GRUB_TERMINAL=console
    # The resolution used on graphical terminal
    # note that you can use only modes which your graphic card supports via VBE
    # you can see them in real GRUB with the command `vbeinfo'
    #GRUB_GFXMODE=640x480
    # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
    #GRUB_DISABLE_LINUX_UUID=true
    # Uncomment to disable generation of recovery mode menu entries
    #GRUB_DISABLE_RECOVERY="true"
    # Uncomment to get a beep at grub start
    #GRUB_INIT_TUNE="480 440 1"

    Read the article

  • how do I uninstall old kernel options listed in Grub2? [closed]

    - by user12809
    Possible Duplicate: Is there a way to remove/hide old kernel versions? I installed Ubuntu Tweak in Ubuntu 11.10, went to Janitor, and selected and removed the old kernels that appeared there (3.0.0-12). Now the only linux-image that appears as 'Installed' in SPM is the most recent one (3.0.0-13), which is the one I want. It did not, however, eliminate the kernel listing in GRUB 2. At boot, the following options still appear: 3.0.0-13-generic, 3.0.0-13-generic (recovery mode), 3.0.0-12-generic (on /dev/sde5), 3.0.0-12-generic (recovery mode) (on /dev/sde5). And in the terminal, when I change directory (cd) to /boot and then list (ls), I get the following kernels: 3.0.0-13, 2.6.38-12, 2.6.38-8. There is no change when I run sudo update-grub in the terminal. 1) What is /dev/sde5, and where is it located in the file system, so I can delete it? 2) Why the differences between what appears as installed in SPM, what appears at boot in GRUB 2, and what shows when I list the contents of /boot in the terminal? Ultimately, I simply want to remove the 3.0.0-12 kernel options at boot in GRUB 2. What is the best and simplest way to do that? Thanks again.

    Read the article

  • When trying to install Wine on 12.10, the 'sudo' command will not let me type in a password.

    - by Nocturnus
    As the title explains, I have been attempting to install Wine on my laptop, which is running 12.10. When I accessed the command terminal and entered "sudo add-apt-repository ppa:ubuntu-wine/ppa" I was of course met by a password prompt. When I attempted to enter my password, it flat out wouldn't let me type anything; the only key that got a response from the terminal was Enter, which was met by "incorrect password". To bypass this issue I backed out and used the 'gksudo' command, and this new dialogue box seemed to give me access to sudo commands. I then entered "sudo apt-get update" and "sudo apt-get install wine1.5". Up until the installation everything went fine, but after entering the final command (still using gksudo) the terminal read "the following packages have unmet dependencies" and proceeded to list a bunch of "recommends". So my guess is that Wine hasn't been updated to run on 12.10... Is this true, and is there any other way to open .exe's? Also, what was with that funky password mishap? I'm totally new to Ubuntu so I've just been using support pages and tutorials; sorry if I'm a bit naive in these matters...

    Read the article

  • How can I disable the purple bootloader splash at boot?

    - by wim
    This question has been answered before, but neither of the methods in the accepted answer worked for me on 11.10. First I tried editing /etc/default/grub and then running sudo update-grub, but after that I still got a blank, plain, purple screen while the kernel is loading. The screen has no boot options, and it obscures the dmesg output that I want to see going by in the terminal. Next I tried removing plymouth-theme-*, but that just broke the look of my gnome-shell theme, and the purple screen still remains. I have also tried configuring it with the startupmanager package, but nothing seems to get rid of that darned purple splash. Here are the contents of my /etc/default/grub file:
    # If you change this file, run 'update-grub' afterwards to update
    # /boot/grub/grub.cfg.
    # For full documentation of the options in this file, see:
    # info -f grub -n 'Simple configuration'
    GRUB_DEFAULT="0"
    GRUB_HIDDEN_TIMEOUT="0"
    GRUB_HIDDEN_TIMEOUT_QUIET="true"
    GRUB_TIMEOUT="0"
    GRUB_DISTRIBUTOR="`lsb_release -i -s 2> /dev/null || echo Debian`"
    #GRUB_CMDLINE_LINUX_DEFAULT=""
    GRUB_CMDLINE_LINUX_DEFAULT=""
    GRUB_CMDLINE_LINUX=""
    # Uncomment to enable BadRAM filtering, modify to suit your needs
    # This works with Linux (no patch required) and with any kernel that obtains
    # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
    #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
    # Uncomment to disable graphical terminal (grub-pc only)
    # GRUB_TERMINAL="console"
    # The resolution used on graphical terminal
    # note that you can use only modes which your graphic card supports via VBE
    # you can see them in real GRUB with the command `vbeinfo'
    #GRUB_GFXMODE="640x480"
    # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
    #GRUB_DISABLE_LINUX_UUID="true"
    # Uncomment to disable generation of recovery mode menu entries
    #GRUB_DISABLE_RECOVERY="true"
    # Uncomment to get a beep at grub start
    #GRUB_INIT_TUNE="480 440 1"
    GRUB_DISABLE_OS_PROBER="true"

    Read the article

  • How do I create the "Gnome-Desktop-Item-Edit" program's launch icon with root privileges and more?

    - by GanZ
    I personally don't prefer running commands in the terminal to achieve a task, and prefer apps to do the job. Creating launchers for apps is one such task, where I prefer the gnome-desktop-item-edit application. If the gnome package is installed, just searching for "create launcher" opens the app. But it doesn't serve any purpose, because for starters the application cannot create launchers for various apps with root permission, nor choose the location where the launchers have to be created. Usually launchers can be created with root permission at /usr/share/applications and without root permission at ~/.local/share/applications. I don't prefer the latter location as it is vulnerable to deletion. Hence, in order to create launchers through GNOME with root, every time I am forced to open this through the terminal using the command below: $ sudo gnome-desktop-item-edit ~/.local/share/applications --create-new I don't want to open a terminal every time I want to create an application launcher on Unity! I am able to lock the "Create Launcher" app in the Launcher, but not with root privileges. So I want to be able to create the "Create Launcher" app shortcut on Unity with root privileges by default, and for the app to create the launchers at /usr/share/applications by default! Please help! P.S. I don't have enough rep points to add screenshots to help with the question!

    Read the article

  • Building a better mouse-trap – Improving the creation of XML Message Requests using Reflection, XML & XSLT

    - by paulschapman
    Introduction The way I previously created messages to send to the GovTalk service I used the XMLDocument to create the request. While this worked it left a number of problems; not least that for every message a special function would need to created. This is OK for the short term but the biggest cost in any software project is maintenance and this would be a headache to maintain. So the following is a somewhat better way of achieving the same thing. For the purposes of this article I am going to be using the CompanyNumberSearch request of the GovTalk service – although this technique would work for any service that accepted XML. The C# functions which send and receive the messages remain the same. The magic sauce in this is the XSLT which defines the structure of the request, and the use of objects in conjunction with reflection to provide the content. It is a bit like Sweet Chilli Sauce added to Chicken on a bed of rice. So on to the Sweet Chilli Sauce The Sweet Chilli Sauce The request to search for a company based on it’s number is as follows; <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" > <EnvelopeVersion>1.0</EnvelopeVersion> <Header> <MessageDetails> <Class>NumberSearch</Class> <Qualifier>request</Qualifier> <TransactionID>1</TransactionID> </MessageDetails> <SenderDetails> <IDAuthentication> <SenderID>????????????????????????????????</SenderID> <Authentication> <Method>CHMD5</Method> <Value>????????????????????????????????</Value> </Authentication> </IDAuthentication> </SenderDetails> </Header> <GovTalkDetails> <Keys/> </GovTalkDetails> <Body> <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd"> <PartialCompanyNumber>99999999</PartialCompanyNumber> <DataSet>LIVE</DataSet> <SearchRows>1</SearchRows> </NumberSearchRequest> </Body> </GovTalkMessage> This is the XML that we send to the GovTalk Service and we get back a list of companies that match the criteria passed A message is structured in two parts; The envelope which identifies the person sending the request, with the name of the request, and the body which gives the detail of the company we are looking for. The Chilli What makes it possible is the use of XSLT to define the message – and serialization to convert each request object into XML. To start we need to create an object which will represent the contents of the message we are sending. However there is a common properties in all the messages that we send to Companies House. These properties are as follows SenderId – the id of the person sending the message SenderPassword – the password associated with Id TransactionId – Unique identifier for the message AuthenticationValue – authenticates the request Because these properties are unique to the Companies House message, and because they are shared with all messages they are perfect candidates for a base class. 
The class is as follows; using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Security.Cryptography; using System.Text; using System.Text.RegularExpressions; using Microsoft.WindowsAzure.ServiceRuntime; namespace CompanyHub.Services { public class GovTalkRequest { public GovTalkRequest() { try { SenderID = RoleEnvironment.GetConfigurationSettingValue("SenderId"); SenderPassword = RoleEnvironment.GetConfigurationSettingValue("SenderPassword"); TransactionId = DateTime.Now.Ticks.ToString(); AuthenticationValue = EncodePassword(String.Format("{0}{1}{2}", SenderID, SenderPassword, TransactionId)); } catch (System.Exception ex) { throw ex; } } /// <summary> /// returns the Sender ID to be used when communicating with the GovTalk Service /// </summary> public String SenderID { get; set; } /// <summary> /// return the password to be used when communicating with the GovTalk Service /// </summary> public String SenderPassword { get; set; } // end SenderPassword /// <summary> /// Transaction Id - uses the Time and Date converted to Ticks /// </summary> public String TransactionId { get; set; } // end TransactionId /// <summary> /// calculate the authentication value that will be used when /// communicating with /// </summary> public String AuthenticationValue { get; set; } // end AuthenticationValue property /// <summary> /// encodes password(s) using MD5 /// </summary> /// <param name="clearPassword"></param> /// <returns></returns> public static String EncodePassword(String clearPassword) { MD5CryptoServiceProvider md5Hasher = new MD5CryptoServiceProvider(); byte[] hashedBytes; UTF32Encoding encoder = new UTF32Encoding(); hashedBytes = md5Hasher.ComputeHash(ASCIIEncoding.Default.GetBytes(clearPassword)); String result = Regex.Replace(BitConverter.ToString(hashedBytes), "-", "").ToLower(); return result; } } } There is nothing particularly clever here, except for the EncodePassword method which hashes the value made up of the SenderId, Password and Transaction id. Each message inherits from this object. So for the Company Number Search in addition to the properties above we need a partial number, which dataset to search – for the purposes of the project we only need to search the LIVE set so this can be set in the constructor and the SearchRows. Again all are set as properties. With the SearchRows and DataSet initialized in the constructor. public class CompanyNumberSearchRequest : GovTalkRequest, IDisposable { /// <summary> /// /// </summary> public CompanyNumberSearchRequest() : base() { DataSet = "LIVE"; SearchRows = 1; } /// <summary> /// Company Number to search against /// </summary> public String PartialCompanyNumber { get; set; } /// <summary> /// What DataSet should be searched for the company /// </summary> public String DataSet { get; set; } /// <summary> /// How many rows should be returned /// </summary> public int SearchRows { get; set; } public void Dispose() { DataSet = String.Empty; PartialCompanyNumber = String.Empty; DataSet = "LIVE"; SearchRows = 1; } } As well as inheriting from our base class, I have also inherited from IDisposable – not just because it is just plain good practice to dispose of objects when coding, but it gives also gives us more versatility when using the object. 
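    As a quick illustration of how these request objects are meant to be consumed, here is a hedged usage sketch rather than code from the post: the using block leans on the IDisposable implementation just described, Toolbox.CreateRequest is the helper that appears later in the article, and the company number and class name are placeholders.
    // Usage sketch only - CompanyNumberSearchRequest comes from the post above,
    // Toolbox.CreateRequest appears later in the article, "99999999" is a placeholder.
    using CompanyHub.Services;
    public static class NumberSearchExample
    {
        public static string BuildRequest()
        {
            // The using block relies on the IDisposable implementation described above,
            // so the request object is cleaned up once the XML has been built.
            using (var searchRequest = new CompanyNumberSearchRequest())
            {
                // DataSet and SearchRows are already defaulted in the constructor.
                searchRequest.PartialCompanyNumber = "99999999";
                return Toolbox.CreateRequest(searchRequest, "CompanyNumberSearch.xsl");
            }
        }
    }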
There are four stages in making a request and this is reflected in the four methods we execute in making a call to the Companies House service; Create a request Send a request Check the status If OK then get the results of the request I’ve implemented each of these stages within a static class called Toolbox – which also means I don’t need to create an instance of the class to use it. When making a request there are three stages; Get the template for the message Serialize the object representing the message Transform the serialized object using a predefined XSLT file. Each of my templates I have defined as an embedded resource. When retrieving a resource of this kind we have to include the full namespace to the resource. In making the code re-usable as much as possible I defined the full ‘path’ within the GetRequest method. requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile); So we now have the full path of the file within the assembly. Now all we need do is retrieve the assembly and get the resource. asm = Assembly.GetExecutingAssembly(); sr = asm.GetManifestResourceStream(requestFile); Once retrieved  So this can be returned to the calling function and we now have a stream of XSLT to define the message. Time now to serialize the request to create the other side of this message. // Serialize object containing Request, Load into XML Document t = Obj.GetType(); ms = new MemoryStream(); serializer = new XmlSerializer(t); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); serializer.Serialize(xmlTextWriter, Obj); ms = (MemoryStream)xmlTextWriter.BaseStream; GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray()); First off we need the type of the object so we make a call to the GetType method of the object containing the Message properties. Next we need a MemoryStream, XmlSerializer and an XMLTextWriter so these can be initialized. The object is serialized by making the call to the Serialize method of the serializer object. The result of that is then converted into a MemoryStream. That MemoryStream is then converted into a string. ConvertByteArrayToString This is a fairly simple function which uses an ASCIIEncoding object found within the System.Text namespace to convert an array of bytes into a string. public static String ConvertByteArrayToString(byte[] bytes) { System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding(); return enc.GetString(bytes); } I only put it into a function because I will be using this in various places. The Sauce When adding support for other messages outside of creating a new object to store the properties of the message, the C# components do not need to change. It is in the XSLT file that the versatility of the technique lies. The XSLT file determines the format of the message. 
For the CompanyNumberSearch the XSLT file is as follows; <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" > <EnvelopeVersion>1.0</EnvelopeVersion> <Header> <MessageDetails> <Class>NumberSearch</Class> <Qualifier>request</Qualifier> <TransactionID> <xsl:value-of select="CompanyNumberSearchRequest/TransactionId"/> </TransactionID> </MessageDetails> <SenderDetails> <IDAuthentication> <SenderID><xsl:value-of select="CompanyNumberSearchRequest/SenderID"/></SenderID> <Authentication> <Method>CHMD5</Method> <Value> <xsl:value-of select="CompanyNumberSearchRequest/AuthenticationValue"/> </Value> </Authentication> </IDAuthentication> </SenderDetails> </Header> <GovTalkDetails> <Keys/> </GovTalkDetails> <Body> <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd"> <PartialCompanyNumber> <xsl:value-of select="CompanyNumberSearchRequest/PartialCompanyNumber"/> </PartialCompanyNumber> <DataSet> <xsl:value-of select="CompanyNumberSearchRequest/DataSet"/> </DataSet> <SearchRows> <xsl:value-of select="CompanyNumberSearchRequest/SearchRows"/> </SearchRows> </NumberSearchRequest> </Body> </GovTalkMessage> </xsl:template> </xsl:stylesheet> The outer two tags define that this is a XSLT stylesheet and the root tag from which the nodes are searched for. The GovTalkMessage is the format of the message that will be sent to Companies House. We first set up the XslCompiledTransform object which will transform the XSLT template and the serialized object into the request to Companies House. xslt = new XslCompiledTransform(); resultStream = new MemoryStream(); writer = new XmlTextWriter(resultStream, Encoding.ASCII); doc = new XmlDocument(); The Serialize method require XmlTextWriter to write the XML (writer) and a stream to place the transferred object into (writer). The XML will be loaded into an XMLDocument object (doc) prior to the transformation. // create XSLT Template xslTemplate = Toolbox.GetRequest(Template); xslTemplate.Seek(0, SeekOrigin.Begin); templateReader = XmlReader.Create(xslTemplate); xslt.Load(templateReader); I have stored all the templates as a series of Embedded Resources and the GetRequestCall takes the name of the template and extracts the relevent XSLT file. /// <summary> /// Gets the framwork XML which makes the request /// </summary> /// <param name="RequestFile"></param> /// <returns></returns> public static Stream GetRequest(String RequestFile) { String requestFile = String.Empty; Stream sr = null; Assembly asm = null; try { requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile); asm = Assembly.GetExecutingAssembly(); sr = asm.GetManifestResourceStream(requestFile); } catch (Exception) { throw; } finally { asm = null; } return sr; } // end private static stream GetRequest We first take the template name and expand it to include the full namespace to the Embedded Resource I like to keep all my schemas in the same directory and so the namespace reflects this. The rest is the default namespace for the project. 
Then we get the currently executing assembly (which will contain the resources with the call to GetExecutingAssembly() ) Finally we get a stream which contains the XSLT file. We use this stream and then load an XmlReader with the contents of the template, and that is in turn loaded into the XslCompiledTransform object. We convert the object containing the message properties into Xml by serializing it; calling the Serialize() method of the XmlSerializer object. To set up the object we do the following; t = Obj.GetType(); ms = new MemoryStream(); serializer = new XmlSerializer(t); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); We first determine the type of the object being transferred by calling GetType() We create an XmlSerializer object by passing the type of the object being serialized. The serializer writes to a memory stream and that is linked to an XmlTextWriter. Next job is to serialize the object and load it into an XmlDocument. serializer.Serialize(xmlTextWriter, Obj); ms = (MemoryStream)xmlTextWriter.BaseStream; xmlRequest = new XmlTextReader(ms); GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray()); doc.LoadXml(GovTalkRequest); Time to transform the XML to construct the full request. xslt.Transform(doc, writer); resultStream.Seek(0, SeekOrigin.Begin); request = Toolbox.ConvertByteArrayToString(resultStream.ToArray()); So that creates the full request to be sent  to Companies House. Sending the request So far we have a string with a request for the Companies House service. Now we need to send the request to the Companies House Service. Configuration within an Azure project There are entire blog entries written about configuration within an Azure project – most of this is out of scope for this article but the following is a summary. Configuration is defined in two files within the parent project *.csdef which contains the definition of configuration setting. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WebRole name="CompanyHub.Host"> <InputEndpoints> <InputEndpoint name="HttpIn" protocol="http" port="80" /> </InputEndpoints> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> <Setting name="DataConnectionString" /> </ConfigurationSettings> </WebRole> <WebRole name="CompanyHub.Services"> <InputEndpoints> <InputEndpoint name="HttpIn" protocol="http" port="8080" /> </InputEndpoints> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> <Setting name="SenderId"/> <Setting name="SenderPassword" /> <Setting name="GovTalkUrl"/> </ConfigurationSettings> </WebRole> <WorkerRole name="CompanyHub.Worker"> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> </ConfigurationSettings> </WorkerRole> </ServiceDefinition>   Above is the configuration definition from the project. What we are interested in however is the ConfigurationSettings tag of the CompanyHub.Services WebRole. 
There are four configuration settings here, but at the moment we are interested in the second to forth settings; SenderId, SenderPassword and GovTalkUrl The value of these settings are defined in the ServiceDefinition.cscfg file; <?xml version="1.0"?> <ServiceConfiguration serviceName="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"> <Role name="CompanyHub.Host"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" /> </ConfigurationSettings> </Role> <Role name="CompanyHub.Services"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="SenderId" value="UserID"/> <Setting name="SenderPassword" value="Password"/> <Setting name="GovTalkUrl" value="http://xmlgw.companieshouse.gov.uk/v1-0/xmlgw/Gateway"/> </ConfigurationSettings> </Role> <Role name="CompanyHub.Worker"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> </ConfigurationSettings> </Role> </ServiceConfiguration>   Look for the Role tag that contains our project name (CompanyHub.Services). Having configured the parameters we can now transmit the request. This is done by ‘POST’ing a stream of XML to the Companies House servers. govTalkUrl = RoleEnvironment.GetConfigurationSettingValue("GovTalkUrl"); request = WebRequest.Create(govTalkUrl); request.Method = "POST"; request.ContentType = "text/xml"; writer = new StreamWriter(request.GetRequestStream()); writer.WriteLine(RequestMessage); writer.Close(); We use the WebRequest object to send the object. Set the method of sending to ‘POST’ and the type of data as text/xml. Once set up all we do is write the request to the writer – this sends the request to Companies House. Did the Request Work Part I – Getting the response Having sent a request – we now need the result of that request. response = request.GetResponse(); reader = response.GetResponseStream(); result = Toolbox.ConvertByteArrayToString(Toolbox.ReadFully(reader));   The WebRequest object has a GetResponse() method which allows us to get the response sent back. Like many of these calls the results come in the form of a stream which we convert into a string. Did the Request Work Part II – Translating the Response Much like XSLT and XML were used to create the original request, so it can be used to extract the response and by deserializing the result we create an object that contains the response. Did it work? It would be really great if everything worked all the time. 
Of course if it did then I don’t suppose people would pay me and others the big bucks so that our programmes do not a) Collapse in a heap (this is an area of memory) b) Blow every fuse in the place in a shower of sparks (this will probably not happen this being real life and not a Hollywood movie, but it was possible to blow the sound system of a BBC Model B with a poorly coded setting) c) Go nuts and trap everyone outside the airlock (this was from a movie, and unless NASA get a manned moon/mars mission set up unlikely to happen) d) Go nuts and take over the world (this was also from a movie, but please note life has a habit of being of exceeding the wildest imaginations of Hollywood writers (note writers – Hollywood executives have no imagination and judging by recent output of that town have turned plagiarism into an art form). e) Freeze in total confusion because the cleaner pulled the plug to the internet router (this has happened) So anyway – we need to check to see if our request actually worked. Within the GovTalk response there is a section that details the status of the message and a description of what went wrong (if anything did). I have defined an XSLT template which will extract these into an XML document. <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <xsl:template match="/"> <GovTalkStatus xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Status> <xsl:value-of select="ev:GovTalkMessage/ev:Header/ev:MessageDetails/ev:Qualifier"/> </Status> <Text> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Text"/> </Text> <Location> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Location"/> </Location> <Number> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Number"/> </Number> <Type> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Type"/> </Type> </GovTalkStatus> </xsl:template> </xsl:stylesheet>   Only thing different about previous XSL files is the references to two namespaces ev & gt. These are defined in the GovTalk response at the top of the response; xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" If we do not put these references into the XSLT template then  the XslCompiledTransform object will not be able to find the relevant tags. Deserialization is a fairly simple activity. encoder = new ASCIIEncoding(); ms = new MemoryStream(encoder.GetBytes(statusXML)); serializer = new XmlSerializer(typeof(GovTalkStatus)); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); messageStatus = (GovTalkStatus)serializer.Deserialize(ms);   We set up a serialization object using the object type containing the error state and pass to it the results of a transformation between the XSLT above and the GovTalk response. Now we have an object containing any error state, and the error message. All we need to do is check the status. If there is an error then we can flag an error. 
If not then  we extract the results and pass that as an object back to the calling function. We go this by guess what – defining an XSLT template for the result and using that to create an Xml Stream which can be deserialized into a .Net object. In this instance the XSLT to create the result of a Company Number Search is; <?xml version="1.0" encoding="us-ascii"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:sch="http://xmlgw.companieshouse.gov.uk/v1-0/schema" exclude-result-prefixes="ev"> <xsl:template match="/"> <CompanySearchResult xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <CompanyNumber> <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyNumber"/> </CompanyNumber> <CompanyName> <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyName"/> </CompanyName> </CompanySearchResult> </xsl:template> </xsl:stylesheet> and the object definition is; using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace CompanyHub.Services { public class CompanySearchResult { public CompanySearchResult() { CompanyNumber = String.Empty; CompanyName = String.Empty; } public String CompanyNumber { get; set; } public String CompanyName { get; set; } } } Our entire code to make calls to send a request, and interpret the results are; String request = String.Empty; String response = String.Empty; GovTalkStatus status = null; fault = null; try { using (CompanyNumberSearchRequest requestObj = new CompanyNumberSearchRequest()) { requestObj.PartialCompanyNumber = CompanyNumber; request = Toolbox.CreateRequest(requestObj, "CompanyNumberSearch.xsl"); response = Toolbox.SendGovTalkRequest(request); status = Toolbox.GetMessageStatus(response); if (status.Status.ToLower() == "error") { fault = new HubFault() { Message = status.Text }; } else { Object obj = Toolbox.GetGovTalkResponse(response, "CompanyNumberSearchResult.xsl", typeof(CompanySearchResult)); } } } catch (FaultException<ArgumentException> ex) { fault = new HubFault() { FaultType = ex.Detail.GetType().FullName, Message = ex.Detail.Message }; } catch (System.Exception ex) { fault = new HubFault() { FaultType = ex.GetType().FullName, Message = ex.Message }; } finally { } Wrap up So there we have it – a reusable set of functions to send and interpret XML results from an internet based service. The code is reusable with a little change with any service which uses XML as a transport mechanism – and as for the Companies House GovTalk service all I need to do is create various objects for the result and message sent and the relevent XSLT files. I might need minor changes for other services but something like 70-90% will be exactly the same.
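    To make the central pattern easier to follow, here is a compact, self-contained sketch of the serialize-then-transform idea described above. It is an illustrative reconstruction rather than the post's exact Toolbox code; RequestBuilder is a placeholder name, and the XSLT stream would come from an embedded resource as in the GetRequest method.
    using System.IO;
    using System.Xml;
    using System.Xml.Serialization;
    using System.Xml.Xsl;
    public static class RequestBuilder
    {
        // Serialize a request object to XML, then shape it with an XSLT template.
        public static string Build(object requestObj, Stream xsltTemplate)
        {
            // 1. Serialize the request object into an in-memory XML document.
            var serialized = new MemoryStream();
            new XmlSerializer(requestObj.GetType()).Serialize(serialized, requestObj);
            serialized.Position = 0;
            var doc = new XmlDocument();
            doc.Load(serialized);

            // 2. Load the XSLT template that defines the message layout.
            var xslt = new XslCompiledTransform();
            using (var templateReader = XmlReader.Create(xsltTemplate))
            {
                xslt.Load(templateReader);
            }

            // 3. Transform the serialized object into the full request message.
            var result = new StringWriter();
            using (var writer = XmlWriter.Create(result, xslt.OutputSettings))
            {
                xslt.Transform(doc, writer);
            }
            return result.ToString();
        }
    }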

    Read the article

  • build error with boost spirit grammar (boost 1.43 and g++ 4.4.1)

    - by lurscher
    I'm having issues getting a small spirit/qi grammar to compile. The build stack trace is fugly enought to not make any sense to me (despite some assertion_failed i could notice in there but that didn't brought much information) the input grammar header: inputGrammar.h #include <boost/config/warning_disable.hpp> #include <boost/spirit/include/qi.hpp> #include <boost/spirit/include/phoenix_core.hpp> #include <boost/spirit/include/phoenix_operator.hpp> #include <boost/spirit/include/phoenix_fusion.hpp> #include <boost/spirit/include/phoenix_stl.hpp> #include <boost/fusion/include/adapt_struct.hpp> #include <boost/variant/recursive_variant.hpp> #include <boost/foreach.hpp> #include <iostream> #include <fstream> #include <string> #include <vector> namespace sp = boost::spirit; namespace qi = boost::spirit::qi; using namespace boost::spirit::ascii; //using namespace boost::spirit::arg_names; namespace fusion = boost::fusion; namespace phoenix = boost::phoenix; using phoenix::at_c; using phoenix::push_back; template< typename Iterator , typename ExpressionAST > struct InputGrammar : qi::grammar<Iterator, ExpressionAST(), space_type> { InputGrammar() : InputGrammar::base_type( block ) { tag = sp::lexeme[+(alpha) [sp::_val += sp::_1]];//[+(char_ - '<') [_val += _1]]; block = sp::lit("block") [ at_c<0>(sp::_val) = sp::_1] >> "(" >> *instruction[ push_back( at_c<1>(sp::_val) , sp::_1 ) ] >> ")"; command = tag [ at_c<0>(sp::_val) = sp::_1] >> "(" >> *instruction [ push_back( at_c<1>(sp::_val) , sp::_1 )] >> ")"; instruction = ( command | tag ) [sp::_val = sp::_1]; } qi::rule< Iterator , std::string() , space_type > tag; qi::rule< Iterator , ExpressionAST() , space_type > block; qi::rule< Iterator , ExpressionAST() , space_type > function_def; qi::rule< Iterator , ExpressionAST() , space_type > command; qi::rule< Iterator , ExpressionAST() , space_type > instruction; }; the test build program: i seems the build fails at qi::phrase_parse, i am using boost 1.43 and g++ 4.4.1 #include <iostream> #include <string> #include <vector> using namespace std; //my grammar #include <InputGrammar.h> struct MockExpressionNode { std::string name; std::vector< MockExpressionNode > operands; typedef std::vector< MockExpressionNode >::iterator iterator; typedef std::vector< MockExpressionNode >::const_iterator const_iterator; iterator begin() { return operands.begin(); } const_iterator begin() const { return operands.begin(); } iterator end() { return operands.end(); } const_iterator end() const { return operands.end(); } bool is_leaf() const { return ( operands.begin() == operands.end() ); } }; BOOST_FUSION_ADAPT_STRUCT( MockExpressionNode, (std::string, name) (std::vector<MockExpressionNode>, operands) ) int const tabsize = 4; void tab(int indent) { for (int i = 0; i < indent; ++i) std::cout << ' '; } template< typename ExpressionNode > struct ExpressionNodePrinter { ExpressionNodePrinter(int indent = 0) : indent(indent) { } void operator()(ExpressionNode const& node) const { cout << " tag: " << node.name << endl; for (int i=0 ; i < node.operands.size() ; i++ ) { tab( indent ); cout << " arg "<<i<<": "; ExpressionNodePrinter(indent + 2)( node.operands[i]); cout << endl; } } int indent; }; int test() { MockExpressionNode root; InputGrammar< string::const_iterator , MockExpressionNode > g(); std::string litA = "litA"; std::string litB = "litB"; std::string litC = "litC"; std::string litD = "litD"; std::string litE = "litE"; std::string litF = "litF"; std::string source = litA+"( "+litB+" ,"+litC+" , "+ litD+" ( 
"+litE+", "+litF+" ) "+ " )"; string::const_iterator iter = source.begin(); string::const_iterator end = source.end(); bool r = qi::phrase_parse( iter , end , g , root , space ); ExpressionNodePrinter< MockExpressionNode > np; np( root ); }; int main() { test(); } finally, the build error is the following: /usr/bin/make -f nbproject/Makefile-linux_amd64_devel.mk SUBPROJECTS= .build-conf make[1]: se ingresa al directorio `/home/mineq/NetBeansProjects/InputParserTests' /usr/bin/make -f nbproject/Makefile-linux_amd64_devel.mk dist/linux_amd64_devel/GNU-Linux-x86/vpuinputparsertests make[2]: se ingresa al directorio `/home/mineq/NetBeansProjects/InputParserTests' mkdir -p build/linux_amd64_devel/GNU-Linux-x86 rm -f build/linux_amd64_devel/GNU-Linux-x86/tests_main.o.d g++ `llvm-config --cxxflags` `pkg-config --cflags unittest-cpp` `pkg-config --cflags boost-1.43` `pkg-config --cflags boost-coroutines` -c -g -I../InputParser -MMD -MP -MF build/linux_amd64_devel/GNU-Linux-x86/tests_main.o.d -o build/linux_amd64_devel/GNU-Linux-x86/tests_main.o tests_main.cpp from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/auto.hpp:16, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi.hpp:15, from /home/mineq/third_party/boost_1_43_0/boost/spirit/include/qi.hpp:16, from ../InputParser/InputGrammar.h:12, from tests_main.cpp:14: /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp: In function ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]’: In file included from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/detail/parse_auto.hpp:14, /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:125: instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::spirit::ascii::space_type]’ tests_main.cpp:206: instantiated from here /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:99: error: no matching function for call to ‘assertion_failed(mpl_::failed************ (boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal, 
boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]::error_invalid_expression::************)(InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode> (*)()))’ /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:125: instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::spirit::ascii::space_type]’ tests_main.cpp:206: instantiated from here /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:100: error: no matching function for call to ‘assertion_failed(mpl_::failed************ (boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]::error_invalid_expression::************)(MockExpressionNode))’ from /home/mineq/third_party/boost_1_43_0/boost/proto/proto.hpp:12, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/support/meta_compiler.hpp:17, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/meta_compiler.hpp:14, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/action/action.hpp:14, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/action.hpp:14, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi.hpp:14, from /home/mineq/third_party/boost_1_43_0/boost/spirit/include/qi.hpp:16, from ../InputParser/InputGrammar.h:12, from tests_main.cpp:14: /home/mineq/third_party/boost_1_43_0/boost/proto/detail/expr0.hpp: At global scope: /home/mineq/third_party/boost_1_43_0/boost/proto/proto_fwd.hpp: In instantiation of ‘boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>()>, 0l>’: In file included from /home/mineq/third_party/boost_1_43_0/boost/proto/core.hpp:13, /home/mineq/third_party/boost_1_43_0/boost/utility/enable_if.hpp:59: instantiated from ‘boost::disable_if<boost::proto::result_of::is_expr<boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>()>, 0l>, void>, void>’ /home/mineq/third_party/boost_1_43_0/boost/spirit/home/support/meta_compiler.hpp:200: instantiated from ‘boost::spirit::result_of::compile<boost::spirit::qi::domain, InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, 
std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), boost::fusion::unused_type, void>’ /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:107: instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]’ /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:125: instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::spirit::ascii::space_type]’ tests_main.cpp:206: instantiated from here /home/mineq/third_party/boost_1_43_0/boost/proto/detail/expr0.hpp:64: error: field ‘boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>()>, 0l>::child0’ invalidly declared function type from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/auto.hpp:16, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi.hpp:15, from /home/mineq/third_party/boost_1_43_0/boost/spirit/include/qi.hpp:16, from ../InputParser/InputGrammar.h:12, from tests_main.cpp:14: /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp: In function ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]’: In file included from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/detail/parse_auto.hpp:14, /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:125: instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::spirit::ascii::space_type]’ tests_main.cpp:206: instantiated from here 
/home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:107: error: request for member ‘parse’ in ‘boost::spirit::compile [with Domain = boost::spirit::qi::domain, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>()](((InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode> (&)())((InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode> (*)())expr)))’, which is of non-class type ‘InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>()’ from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/auto.hpp:15, from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi.hpp:15, from /home/mineq/third_party/boost_1_43_0/boost/spirit/include/qi.hpp:16, from ../InputParser/InputGrammar.h:12, from tests_main.cpp:14: /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/skip_over.hpp: In function ‘void boost::spirit::qi::skip_over(Iterator&, const Iterator&, const T&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, T = boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]::skipper_type]’: In file included from /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/auto/auto.hpp:19, /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:112: instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, boost::spirit::qi::skip_flag::enum_type, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::proto::exprns_::expr<boost::proto::tag::terminal, boost::proto::argsns_::term<boost::spirit::tag::char_code<boost::spirit::tag::space, boost::spirit::char_encoding::ascii> >, 0l>]’ /home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/parse.hpp:125: instantiated from ‘bool boost::spirit::qi::phrase_parse(Iterator&, Iterator, const Expr&, const Skipper&, Attr&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Expr = InputGrammar<__gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, MockExpressionNode>(), Skipper = MockExpressionNode, Attr = const boost::spirit::ascii::space_type]’ tests_main.cpp:206: instantiated from here 
/home/mineq/third_party/boost_1_43_0/boost/spirit/home/qi/skip_over.hpp:27: error: ‘const struct MockExpressionNode’ has no member named ‘parse’ make[2]: *** [build/linux_amd64_devel/GNU-Linux-x86/tests_main.o] Error 1 make[2]: Leaving directory `/home/mineq/NetBeansProjects/InputParserTests' make[1]: *** [.build-conf] Error 2 make[1]: Leaving directory `/home/mineq/NetBeansProjects/InputParserTests' make: *** [.build-impl] Error 2 BUILD FAILED (exit value 2, total time: 1m 48s)

    Read the article

  • How can I close a Window using the OS-X ScriptingBridge framework, from Perl?

    - by Gavin Brock
    Problem... Since MacPerl is no longer supported on 64bit perl, I am trying alternative frameworks to control Terminal.app. I am trying the ScriptingBridge, but have run into a problem passing an enumerated string to the closeSaving method using the PerlObjCBridge. I want to call: typedef enum { TerminalSaveOptionsYes = 'yes ' /* Save the file. */, TerminalSaveOptionsNo = 'no ' /* Do not save the file. */, TerminalSaveOptionsAsk = 'ask ' /* Ask the user whether or not to save the file. */ } TerminalSaveOptions; - (void) closeSaving:(TerminalSaveOptions)saving savingIn:(NSURL *)savingIn; // Close a document. Attempted Solution... I have tried: #!/usr/bin/perl use strict; use warnings; use Foundation; # Load the ScriptingBridge framework NSBundle->bundleWithPath_('/System/Library/Frameworks/ScriptingBridge.framework')->load; @SBApplication::ISA = qw(PerlObjCBridge); # Set up scripting bridge for Terminal.app my $terminal = SBApplication->applicationWithBundleIdentifier_("com.apple.terminal"); # Open a new window, get back the tab my $tab = $terminal->doScript_in_('exec sleep 60', undef); warn "Opened tty: ".$tab->tty->UTF8String; # Yes, it is a tab # Now try to close it # Simple idea eval { $tab->closeSaving_savingIn_('no ', undef) }; warn $@ if $@; # Try passing a string ref my $no = 'no '; eval { $tab->closeSaving_savingIn_(\$no, undef) }; warn $@ if $@; # Ok - get a pointer to the string my $pointer = pack("P4", $no); eval { $tab->closeSaving_savingIn_($pointer, undef) }; warn $@ if $@; eval { $tab->closeSaving_savingIn_(\$pointer, undef) }; warn $@ if $@; # Try a pointer decodes as an int, like PerlObjCBridge uses my $int_pointer = unpack("L!", $pointer); eval { $tab->closeSaving_savingIn_($int_pointer, undef) }; warn $@ if $@; eval { $tab->closeSaving_savingIn_(\$int_pointer, undef) }; warn $@ if $@; # Aaarrgghhhh.... As you can see, all my guesses at how to pass the enumerated string fail. Before you flame me... I know that I could use another language (ruby, python, cocoa) to do this but that would require translating the rest of the code. I might be able to use CamelBones, but I don't want to assume my users have it installed. I could also use the NSAppleScript framework (assuming I went to the trouble of finding the Tab and Window IDs) but it seems odd to have to resort to it for just this one call.

    Read the article

  • Dependency Injection in ASP.NET Web API using Autofac

    - by shiju
    In this post, I will demonstrate how to use Dependency Injection in ASP.NET Web API using Autofac in an ASP.NET MVC 4 app. The new ASP.NET Web API is a great framework for building HTTP services. The Autofac IoC container provides better integration with ASP.NET Web API for applying dependency injection. The NuGet package Autofac.WebApi provides the Dependency Injection support for ASP.NET Web API services. Using Autofac in ASP.NET Web API The following command in the Package Manager console will install the Autofac.WebApi package into your ASP.NET Web API application:
    PM > Install-Package Autofac.WebApi
    The following code block imports the necessary namespaces for using Autofac.WebApi:
    using Autofac;
    using Autofac.Integration.WebApi;
    The following code in the Bootstrapper class configures Autofac:
    public static class Bootstrapper
    {
        public static void Run()
        {
            SetAutofacWebAPI();
        }
        private static void SetAutofacWebAPI()
        {
            var configuration = GlobalConfiguration.Configuration;
            var builder = new ContainerBuilder();
            // Configure the container
            builder.ConfigureWebApi(configuration);
            // Register API controllers using assembly scanning.
            builder.RegisterApiControllers(Assembly.GetExecutingAssembly());
            builder.RegisterType<DefaultCommandBus>().As<ICommandBus>()
                .InstancePerApiRequest();
            builder.RegisterType<UnitOfWork>().As<IUnitOfWork>()
                .InstancePerApiRequest();
            builder.RegisterType<DatabaseFactory>().As<IDatabaseFactory>()
                .InstancePerApiRequest();
            builder.RegisterAssemblyTypes(typeof(CategoryRepository)
                .Assembly).Where(t => t.Name.EndsWith("Repository"))
                .AsImplementedInterfaces().InstancePerApiRequest();
            var services = Assembly.Load("EFMVC.Domain");
            builder.RegisterAssemblyTypes(services)
                .AsClosedTypesOf(typeof(ICommandHandler<>))
                .InstancePerApiRequest();
            builder.RegisterAssemblyTypes(services)
                .AsClosedTypesOf(typeof(IValidationHandler<>))
                .InstancePerApiRequest();
            var container = builder.Build();
            // Set the WebApi dependency resolver.
            var resolver = new AutofacWebApiDependencyResolver(container);
            configuration.ServiceResolver.SetResolver(resolver);
        }
    }
    The RegisterApiControllers method will scan the given assembly and register all ApiController classes. It looks for types that derive from IHttpController whose names end with “Controller”. The InstancePerApiRequest method specifies the lifetime of the component as once per API controller invocation. GlobalConfiguration.Configuration provides a ServiceResolver which can be used to set the dependency resolver for ASP.NET Web API. In our example, we are using the AutofacWebApiDependencyResolver class provided by Autofac.WebApi to set the dependency resolver. The Run method of the Bootstrapper class is called from the Application_Start method of Global.asax.cs:
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RegisterGlobalFilters(GlobalFilters.Filters);
        RegisterRoutes(RouteTable.Routes);
        BundleTable.Bundles.RegisterTemplateBundles();
        //Call Autofac DI configurations
        Bootstrapper.Run();
    }
    Autofac.Mvc4 The Autofac framework's integration with ASP.NET MVC has been updated for ASP.NET MVC 4. The NuGet package Autofac.Mvc4 provides the dependency injection support for ASP.NET MVC 4. There is no syntax change between Autofac.Mvc3 and Autofac.Mvc4. Source Code I have updated my EFMVC app with Autofac.WebApi, applying dependency injection for its ASP.NET Web API services. The EFMVC app has also been updated to Autofac.Mvc4 for its ASP.NET MVC 4 web app. The above code sample is taken from the EFMVC app. You can download the source code of the EFMVC app from http://efmvc.codeplex.com/
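    As a quick illustration of what this wiring enables, here is a minimal sketch of an ApiController that receives its dependency through its constructor. The ProductsController and IProductRepository names are hypothetical and not part of the EFMVC sample; the point is only that, once the resolver is set, Autofac supplies constructor arguments for API controllers.
    using System.Collections.Generic;
    using System.Web.Http;

    // Hypothetical abstraction; an implementation would be registered with Autofac elsewhere,
    // e.g. builder.RegisterType<ProductRepository>().As<IProductRepository>().InstancePerApiRequest();
    public interface IProductRepository
    {
        IEnumerable<string> GetAll();
    }

    public class ProductsController : ApiController
    {
        private readonly IProductRepository _repository;

        // Autofac resolves this argument per API request because the controller
        // was registered via RegisterApiControllers and the dependency resolver
        // was set to AutofacWebApiDependencyResolver.
        public ProductsController(IProductRepository repository)
        {
            _repository = repository;
        }

        // GET api/products
        public IEnumerable<string> Get()
        {
            return _repository.GetAll();
        }
    }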

    Read the article

  • Where did I hide my TSQL mojo?

    - by fatherjack
    LiveJournal Tags: How To,SQL Server,Tips and Tricks,TSQL,Reporting Services A little while ago I wrote a piece about finding database objects that rely on other objects that no longer exist - OK, I have my database ready, now what's missing? This is linked to that sort of process. Many SQL Server installations are associated in some way with a Reporting Services installation; it's a very logical way to distribute your database contents to system users so they can work effectively. Databases,...(read more)

    Read the article

  • Silverlight Cream for April 03, 2010 -- #829

    - by Dave Campbell
    In this Issue: Scott Marlowe, Nokola, SilverLaw, Brad Abrams, Jeff Wilcox, Jesse Liberty, Alexey Zakharov, ondrejsv, Ward Bell, and David Anson. Shoutouts: Bart Czernicki has a post up about the latest with HTML5: HTML 5 is Born Old - Quake in HTML 5 I was sent a link to shoebox360 a while back and had to sign up to see the Silverlight use, but it does work very nicely. I like the panoramic carousel in the viewer: shoebox360 Jeff Handley has a post up on RIA Services - Documentation Guidance and Community Samples... the team is looking for feedback from all of us Shawn Wildermuth posted his My MIX Talks' Source Code Laurent Bugnion posted his Sample code and slides for my TechDays10 (Belgium) talks From SilverlightCream.com: Silverlight to WCF Cross Domain SecurityException Scott Marlowe wrote an article about an often-encountered security exception having to do with cross-domain policies. He details the problem, the response, the solution, and yet another problem/solution associated... good stuff, Scott! Simple Functions for HTML Interop You've seen Nokola's graphic work... how about some HTML Interop from him? He's exposing the code he uses in his work. New Video: ChildWindow Styling - Silverlight 3 SilverLaw has a new video tutorial on Silverlight 3 ChildWindow Styling up - in German - but the video is language-agnostic :) Silverlight 4 + RIA Services - Ready for Business: Exposing WCF (SOAP\WSDL) Services Brad Abrams' continuation in his RIA series is this one demonstrating exposing RIA Services as a Soap\WSDL service Silverlight 4: New parser implementation. New parser features. Jeff Wilcox has a post up highlighting some of the new features in Silverlight 4 such as a new parser implementation with new XAML features. New Video Series – Getting Started With Silverlight Jesse Liberty is starting a new video tutorial series that's going to build out to be a "complete survey of Silverlight programming". The first two are in this post and are Getting Started and Adding Controls to a Silverlight App... looks like good material, Jesse, and all the source is there for the taking as well. Silverlight layout hack: Centered content with fixed maxwidth Alexey Zakharov has a quick tip up on creating centered content with fixed maxwidth. He calls it a dirty trick... looks like code to me :) Silverlight DataForm's autogenerated fields send empty strings to database ondrejsv points up a problem he had with the Toolkit's DataForm, and his solution to it... with code for all of us following along behind :) DevForce Extensibility With MEF InheritedExport Ward Bell has a post up describing how they got DevForce MEF'd up, and it looks like a good post to get you all excited about MEF as well... lots of external links and good info. Tip: Read-only custom DependencyProperties don't exist in Silverlight, but can be closely approximated David Anson's latest Tip is about Read-only custom DependencyProperties in Silverlight -- which strictly is not possible, but he has a code example up that gets close. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • BUILD 2013 - Microsoft Set to Unveil It&rsquo;s Reinvention

    - by D'Arcy Lussier
    Originally posted on: http://geekswithblogs.net/dlussier/archive/2013/06/24/153211.aspxSome thoughts as we head into BUILD this week… This week in San Francisco Microsoft will be hosting the BUILD conference. They’ll be talking up Windows 8.1 (Windows Blue), more Azure, some Windows Phone, XBox, Office 365… actually, they told us on the original BUILD announcement site what we’d be seeing:           While looking at this, consider a recent article from The Verge that talks about the speculation of a huge shake up at Microsoft . From the article: All Things D quotes one insider as saying they're "titanic" changes, noting they might be attached to Ballmer's legacy at the company. "It’s the first time in a long time that it feels like that there will be some major shifts, including some departures," says the alleged insider. Considering Ballmer let Sinofsky go right after the Windows 8 launch, the idea of Microsoft cutting loose some executives doesn’t seem to be big news. But the next piece of the article frames things more interestingly: Ballmer is reportedly considering a new structure that would create four separate divisions: enterprise business, hardware, applications and services, and an operating systems group. This statement got me thinking…what would this new structure look like? Below is one possibility: At a recent (this year or last year, I can’t recall which) Microsoft shareholder’s meeting, Ballmer made the statement that Microsoft is now a products and services company. At the time I don’t think I really let that statement sink in. Partially because I really liked the Microsoft of my professional youth – the one that was a software and platform company. In Canada, Microsoft has been pushing three platform areas: Lync, Azure, and SQL Server. I would expect those to change moving forward as Microsoft continues to look for Partners that will help them increase their Services revenue through solutions that incorporate/are based on Azure, Office 365, Lync, and Dynamics. I also wonder if we’re not seeing a culling of partners through changes to the Microsoft Partner Program. In addition to the changing certification requirements that align more to Microsoft’s goals (i.e. There is no desktop development based MCSD, only Windows 8 Store Apps), competencies that partners can qualify for are being merged, requirements changed, and licenses provided reduced. Ballmer warned as much at the last WPC though that they were looking for partners who were “all in” with Microsoft, and these programs seem to support that sentiment. Heading into BUILD this week, I’ll be looking to answer one question – what does it mean to be a Microsoft developer here in the 2010’s? What is the future of the Microsoft development platform? Sure, Visual Studio is still alive and well and Microsoft realizes that there’s a huge install base of .NET developers actively working on solutions. But they’ve ratcheted down the messaging around their development stack and instead focussed on promoting development for their platforms and services. Last year at BUILD with the release of Windows 8, Microsoft just breached the walls of its cocoon. After this BUILD and the organizational change announcements in July, we’ll see what Microsoft looks like fully emerged from its metamorphosis.

    Read the article

  • SOA, Cloud and Service Technology Symposium a super success!

    - by JuergenKress
    The SOA, Cloud and Service Technology Symposium in London was a huge success. More than 600 international attendees participated in it. Our SOA & BPM Community had a great presence there. At a joint booth with the Specialized partners link consulting, eProseed and Griffiths Waite, we presented the latest product updates and had many interesting discussions with customers and speakers. Special thanks to our HQ product management team Demed, Tim, Manas for coming over right before OOW. Also a very big Thank you to Matthias Ziegler from Accenture for presenting our joint presentation individually! If you missed the conference, here are the key presentation links for your reference: Big Data and its impact on SOA Demed L'Her [View PDF] Building 21st Century Service-Oriented Airports Shyam Kumar [View PDF] Building Cloudy Services Anne Thomas Manes [View PDF] Community Management: The Next Wave of SOA Governance and API Management Tim E. Hall [View PDF] Elastic SOA in the Cloud Steve Millidge [View PDF] Governing Shared Services: On-Premise & In the Cloud Thomas Erl [View Video] Introducing the Cloud Computing Design Patterns Catalogue Thomas Erl and Amin Naserpour [CloudPatterns.org] Lost in Translation - Common Mistakes Interpreting Patterns Mark Simpson [View PDF] Moving Applications to the Cloud: Migration Options Anne Thomas Manes [View PDF] New Paradigms for Application Architecture: From Applications to IT Services Anne Thomas Manes [View PDF] NoSQL for Data Services, Data Virtualization & Big Data Guido Schmutz [View PDF] A Pragmatic Approach to Cloud Computing Andrea Morena [View PDF] The Successful Execution of the SOA and BPM Vision Using a Business Capability Framework: Concepts and Examples Clemens Utschig and Manas Deb [View PDF] Service Modeling & BPM Business Value Patterns Matthias Ziegler [View PDF] [Podcast] SOA Adoption in the Brazilian Ministry of Health - Case Study Ricardo Puttini and Andre Toffanello [PDF Coming Soon] SOA Environments are a Big Data Problem Markus Zirn, Splunk and Maciej Barcz [View PDF] SOA Governance at EDP: A Global Energy Company Manuel Rosa [View PDF] For all presentations please visit the SOA, Cloud and Service Technology Symposium Website SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Symposium,Thomas Erl,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • links for 2010-04-28

    - by Bob Rhubart
    Guido Schmutz: Oracle BPM11g available! Oracle ACE Director Guido Schmutz shares his impressions after attending a hands-on workshop conducted by Masons of SOA member Clemens Utschig-Utschig. (tags: oracle otn oracleace bpm soa soasuite) Elena Zannoni: 2010 Collaboration Summit Impressions Elena Zannoni has collected her thoughts on #C10 and shares them in this great blog post. (tags: oracle otn linux architecture collaborate2010) Hajo Normann: BPMN 2.0 in Oracle BPM Suite: The future of BPM starts now "The BPM Studio sets itself apart from pure play BPMN 2.0 tools by being seamlessly integrated inside a holistic SOA / BPM toolset: BPMN models are placed in SCA-Composites in SOA Suite 11g. This allows to abstract away the complexities of SOA integration aspects from business process aspects. For UIs in BPMN tasks, you have the richness of ADF 11g based Frontends." -- Oracle ACE Director and Masons of SOA member Hajo Normann (tags: oracle otn oracleace bpm soa sca) Brian Dirking: AIIM Best Practice Awards to Two Oracle Customers Brian Dirking's great write-up of the AIIM Awards Banquet, at which the Bureau of Indian Affairs and the Charles Town Police Department were among the winners of the 2010 Carl E. Nelson Best Practices Awards. (tags: oracle otn aiim bpm ecm enterprise2.0) Mark Wilcox: Upcoming Directory Services Live Webcast - Improve Time-to-Market and Reduce Cost with Oracle Directory Services Live Webcast: Improve Time-to-Market and Reduce Cost with Oracle Directory Services Event Date: Thursday, May 27, 2010 Event Time: 10:00 AM Pacific Standard Time / 1:00 Eastern Standard Time (tags: oracle otn webcast security identitymanagement) Celine Beck: Introducing AutoVue Document Print Service Celine Beck offers a detailed overview of Oracle AutoVue. (tags: oracle otn enatarch visualization printing) Vikas Jain: What's new in OWSM 11gR1 PS2 (11.1.1.3.0)? Vikas Jain shares links to resources relevant to the recently released patch set for Oracle Web Services Manager 11gR1. (tags: oracle otn soa webservices oswm) @theovanarem: Oracle SOA Suite 11g Release 1 Patch Set 2 Theo Van Arem shares links to several resources relevant to the release of the latest patch set for Oracle SOA Suite 11g. (tags: oracle otn soa soasuite middleware) @vambenepe: Analyzing the VMforce announcement "The new thing is that force.com now supports an additional runtime, in addition to Apex. That new runtime uses the Java language, with the constraint that it is used via the Spring framework. Which is familiar territory to many developers. That's it." -- William Vambenepe (tags: oracle otn cloud paas)

    Read the article

  • MySQL – Scalability on Amazon RDS: Scale out to multiple RDS instances

    - by Pinal Dave
    Today, I’d like to discuss getting better MySQL scalability on Amazon RDS. The question of the day: “What can you do when a MySQL database needs to scale write-intensive workloads beyond the capabilities of the largest available machine on Amazon RDS?” Let’s take a look. In a typical EC2/RDS set-up, users connect to app servers from their mobile devices and tablets, computers, browsers, etc.  Then app servers connect to an RDS instance (web/cloud services) and in some cases they might leverage some read-only replicas.   Figure 1. A typical RDS instance is a single-instance database, with read replicas.  This is not very good at handling high write-based throughput. As your application becomes more popular you can expect an increasing number of users, more transactions, and more accumulated data.  User interactions can become more challenging as the application adds more sophisticated capabilities. The result of all this positive activity: your MySQL database will inevitably begin to experience scalability pressures. What can you do? Broadly speaking, there are four options available to improve MySQL scalability on RDS. 1. Larger RDS Instances – If you’re not already using the maximum available RDS instance, you can always scale up – to larger hardware.  Bigger CPUs, more compute power, more memory et cetera. But the largest available RDS instance is still limited.  And they get expensive. “High-Memory Quadruple Extra Large DB Instance”: 68 GB of memory 26 ECUs (8 virtual cores with 3.25 ECUs each) 64-bit platform High I/O Capacity Provisioned IOPS Optimized: 1000Mbps 2. Provisioned IOPs – You can get provisioned IOPs and higher throughput on the I/O level. However, there is a hard limit with a maximum instance size and maximum number of provisioned IOPs you can buy from Amazon and you simply cannot scale beyond these hardware specifications. 3. Leverage Read Replicas – If your application permits, you can leverage read replicas to offload some reads from the master databases. But there are a limited number of replicas you can utilize and Amazon generally requires some modifications to your existing application. And read-replicas don’t help with write-intensive applications. 4. Multiple Database Instances – Amazon offers a fourth option: “You can implement partitioning,thereby spreading your data across multiple database Instances” (Link) However, Amazon does not offer any guidance or facilities to help you with this. “Multiple database instances” is not an RDS feature.  And Amazon doesn’t explain how to implement this idea. In fact, when asked, this is the response on an Amazon forum: Q: Is there any documents that describe the partition DB across multiple RDS? I need to use DB with more 1TB but exist a limitation during the create process, but I read in the any FAQ that you need to partition database, but I don’t find any documents that describe it. A: “DB partitioning/sharding is not an official feature of Amazon RDS or MySQL, but a technique to scale out database by using multiple database instances. The appropriate way to split data depends on the characteristics of the application or data set. Therefore, there is no concrete and specific guidance.” So now what? The answer is to scale out with ScaleBase. Amazon RDS with ScaleBase: What you get – MySQL Scalability! ScaleBase is specifically designed to scale out a single MySQL RDS instance into multiple MySQL instances. Critically, this is accomplished with no changes to your application code.  
Your application continues to “see” one database. ScaleBase does all the work of managing and enforcing an optimized data distribution policy to create multiple MySQL instances. With ScaleBase, data distribution, transactions, concurrency control, and two-phase commit are all 100% transparent and 100% ACID-compliant, so applications, services and tooling continue to interact with your distributed RDS as if it were a single MySQL instance. The result: now you can cost-effectively leverage multiple MySQL RDS instances to scale out write-intensive workloads to an unlimited number of users, transactions, and data. Amazon RDS with ScaleBase: What you keep – Everything! And how does this change your Amazon environment? 1. Keep your application, unchanged – There is no change to your application development life-cycle at all. You still use your existing development tools, frameworks and libraries. Application quality assurance and testing cycles stay the same. And, critically, you stay with an ACID-compliant MySQL environment. 2. Keep your RDS value-added services – The value-added services that you rely on are all still available. Amazon will continue to handle database maintenance and updates for you. You can still leverage High Availability via Multi A-Z. And, if it benefits your application throughput, you can still use read replicas. 3. Keep your RDS administration – Finally, the RDS monitoring and provisioning tools you rely on still work as they did before. With your one large MySQL instance now split into multiple instances, you can actually use less expensive, smaller available RDS hardware and continue to see better database performance. Conclusion Amazon RDS is a tremendous service, but it doesn’t offer solutions to scale beyond a single MySQL instance. Larger RDS instances get more expensive. And when you max out on the available hardware, you’re stuck. Amazon recommends scaling out your single instance into multiple instances for transaction-intensive apps, but offers no services or guidance to help you. This is where ScaleBase comes in to save the day. It gives you a simple and effective way to create multiple MySQL RDS instances, while removing all the complexities typically caused by “DIY” sharding and with no changes to your applications. With ScaleBase you continue to leverage the AWS/RDS ecosystem: commodity hardware and value added services like read replicas, multi A-Z, maintenance/updates and administration with monitoring tools and provisioning. SCALEBASE ON AMAZON If you’re curious to try ScaleBase on Amazon, it can be found here – Download NOW. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • links for 2010-04-27

    - by Bob Rhubart
    @oracletechnet: Oracle Technology Network Newsletters Revisited "You may find this hard to believe, but some analysts contend that email newsletters are still among the most preferred methods of "information awareness" by developers today. And in our experience, the numbers back it up: subscriptions to Oracle Technology Network newsletters grow organically by 15% every year, even after you take continual list cleanup into account. " -- Justin Kestelyn (tags: oracle otn newsletters developers architects) Sylvain Duloutre: Directory Services as a Web Service Sylvain Duloutre shares a WSDL file he created to deal with issues involved in XML binding generation. (tags: oracle sun wsdl webservices DSEE netbeans jdeveloper) Nick Wooler: Iron-Clad Cloud: Secure Cloud Computing "One solution to the security problem with cloud services can be overcome using Service Oriented Security. The Oracle approach to using Service Oriented Security allows developers to pull from a centralized, authoritative source of identity services. This allows developers to build security into every application from the inside-out. This is critical to ensuring this is done in a standardized manner and most importantly it allows developers to develop without being security experts." -- Nick Wooler (tags: oracle sun security cloud saas) Andy Mulholland: A week of visits; Cisco, HP, Oracle, SAP and VMware (in alphabetical order!) "I now am considering that we should be thinking about ‘clouds’ in virtual way, by which I mean that a succession of virtual ‘clouds’ will need to exist, each possessing specific characteristics that suit certain types of services. Really it’s no different to what we see with servers today. Adding a hypervisor to a server adds new flexibility, but creating a virtualised environment means much more. What I suspect will happen is that we will start to use vendor specific approaches to building what I will term a physical cloud solution using their technology and approach to supporting a specific objective, but with time we will find these physical clouds will interoperate as a fully virtualised cloud environment." -- Andy Mulholland (tags: entarch enterprisearchitecture cloudcomputing virtualization) @fteter: Highlights From The Bright Lights - Tuesday #c10 Oracle Ace Director Floyd Teter of JPL with one last wrap-up of Collaborate 10. (tags: oracle otn collaborate2010 las vegas) Rittman Mead India – Call for very good Oracle BI Developers/Architects "Now that we have an office in India and if you are interested in joining us, do drop us a line at [email protected], and we will be glad to have technical discussions with you. If you are also an Oracle BI, DW or EPM customer looking for help on projects in the Asia-Pacific region, again we’ll be pleased to hear from you and to let you know how we can help." -- Venkatakrishnan J (tags: otn oracle jobs india developers architects software)

    Read the article

  • Silverlight Cream for April 17, 2010 -- #839

    - by Dave Campbell
    In this Issue: ITLackey, SilverLaw, Max Paulousky, Alex Yakhnin, Paul Sheriff, Douglas, Jeremy Likness, Tomasz Janczuk, Anoop Madhusudanan, Adam Kinney, and Ashish Shetty. Shoutout: If you haven't already seen it, CrocusGirl did a great job of summarizing Day 2 of DevConnections with her Silverlight 4 Launch Notes From SilverlightCream.com: RIA Services - IIS6 Virtual Directory Deployment ITLackey has a post up building on his previous post on Windows Authentication with RIA Services and discusses deploying to an IIS Virtual Directory. How To: Determine ChildWindow Position At Runtime - Silverlight 3 SilverLaw has a post up about determining the position of a ChildWindow at run-time, for example after the user moves it. Modularity in Silverlight Applications - An Issue With ModuleInitializeException – Part 2 Max Paulousky has part 2 of his series up on Modularity in Silverlight... he discusses using XAML as a catalog and registering modules at runtime, and compares to WPF. Creating LINQ Data Provider for WP7 (Part 1) Alex Yakhnin has a first cut at a LINQ Data Provider for WP7 ... I was expecting this to hit pretty soon, because we're all going to want it... check out the code and d/l the project. Synchronize Data between a Silverlight ListBox and a User Control Paul Sheriff demonstrates databinding in XAML between local data in a ListBox and a UserControl. The beginnings of Silverlight development with Expression Blend Douglas has a good post up on beginning your Silverlight development with Expression Blend. He covers a lot of ground in this post. Converting Silverlight 3 to Silverlight 4 Jeremy Likness has a video up demonstrating converting Silverlight 3 to Silverlight 4 with download links and also using commanding on buttons. Debugging WCF RIA Services with WCF traces Tomasz Janczuk has a post up discussing the use of WCF RIA Services traces to help diagnose and debug problems in a deployed service. Bing Maps + oData + Windows Phone 7 - Nerd Dinner Client For Windows Phone 7 Check out what Anoop Madhusudanan has provided... Nerd Dinner for WP7, including OData and BingMaps... just very cool! A few cool new features added in Expression Blend 4 RC Adam Kinney announced the availability of the new Expression Blend and highlights some of the new features... like MakeLayoutPath... FTW! Of Crashing and Sometimes Burning Ashish Shetty has a discourse posted about where the causes of errors might come from, what to expect from the platform, where to find crash dumps, and links to more reading. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Adding AjaxOnly Filter in ASP.NET Web API

    - by imran_ku07
    Introduction: Currently, ASP.NET MVC 4, ASP.NET Web API and ASP.NET Single Page Application are the hottest topics in the ASP.NET community. Specifically, a lot of developers love the inclusion of ASP.NET Web API in ASP.NET MVC. ASP.NET Web API makes it very simple to build HTTP RESTful services, which can be easily consumed from desktop/mobile browsers, silverlight/flash applications and many different types of clients. Client side Ajax may be a very important consumer for various service providers. Sometimes, HTTP service providers may need some (or all) of their services to be accessible only from Ajax. In this article, I will show you how to implement an AjaxOnly filter in an ASP.NET Web API application. Description: First of all you need to create a new ASP.NET MVC 4 (Web API) application. Then, create a new AjaxOnly.cs file and add the following lines in this file:
    public class AjaxOnlyAttribute : System.Web.Http.Filters.ActionFilterAttribute
    {
        public override void OnActionExecuting(System.Web.Http.Controllers.HttpActionContext actionContext)
        {
            var request = actionContext.Request;
            var headers = request.Headers;
            if (!headers.Contains("X-Requested-With") || headers.GetValues("X-Requested-With").FirstOrDefault() != "XMLHttpRequest")
                actionContext.Response = request.CreateResponse(HttpStatusCode.NotFound);
        }
    }
    This is an action filter which simply checks the X-Requested-With header of the request for the value XMLHttpRequest. If the X-Requested-With header is not present in the request, or its value is not XMLHttpRequest, the filter returns a 404 (NotFound) response to the client. Now just register this filter:
    [AjaxOnly]
    public string GET(string input)
    You can also register this filter globally if your Web API application is only targeted at Ajax consumers. Summary: ASP.NET Web API provides a framework for building RESTful services. Sometimes, you may need certain API services to be accessible only from Ajax. In this article, I showed you how to add an AjaxOnly action filter in ASP.NET Web API. Hopefully you will enjoy this article too.
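    For reference, here is a rough sketch of the global registration mentioned above, assuming the AjaxOnlyAttribute shown earlier is in scope; the WebApiFilterConfig class name is illustrative and not from the original article.
    using System.Web.Http;

    public static class WebApiFilterConfig
    {
        // Applies the Ajax-only check to every ApiController action in the application.
        public static void RegisterGlobalFilters(HttpConfiguration config)
        {
            config.Filters.Add(new AjaxOnlyAttribute());
        }
    }

    // Usage, for example from Application_Start in Global.asax.cs:
    // WebApiFilterConfig.RegisterGlobalFilters(GlobalConfiguration.Configuration);
    With global registration in place, the [AjaxOnly] attribute no longer needs to be applied to individual actions.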

    Read the article

  • Edinburgh this Thurs (25th) - Rob Carrol talks about how to build a high performance, scalable repor

    - by tonyrogerson
    Scottish Area SQL Server User Group Meeting, Edinburgh - Thursday 25th March An evening of SQL Server 2008 Reporting Services Scalability and Performance with Rob Carrol: see how to build a high-performance, scalable reporting platform and learn the tuning techniques required to ensure that report performance remains optimal as your platform grows. Pizza and drinks will be provided! Register at http://www.sqlserverfaq.com/events/221/SQL-Server-2008-Reporting-Services-Scalability-and-Performance.aspx...(read more)

    Read the article

< Previous Page | 251 252 253 254 255 256 257 258 259 260 261 262  | Next Page >