Search Results

Search found 5922 results on 237 pages for 'boost ptr container'.


  • Event-based interaction between two custom classes

    - by Antenka
    Hello everybody. I have the following problem: I have 2 custom components, each with its own nesting hierarchy ... One is a container for the other. I have to "familiarize them" with each other. The way I'm trying to achieve that is with global events (one side fires and the other one catches): Application.application.addEventListener("Hello", function (data:Event):void{ // .. some actions }); //and Application.application.dispatchEvent(new Event("Hello")); Everything is pretty good, but there's one thingy .. when I'm trying to catch the event, I can't access the class that is catching it. E.g.: the Container fires the event. The Child catches it. Then the connection between the container and its child should be created. BUT, the only thing I could achieve is passing a reference to the Container in the DynamicEvent. Is there any chance I could access the child in the event-handler function? Or maybe there's a more elegant way to solve this problem ... Any help would be greatly appreciated :)

    Read the article

  • Can anyone help with this (Javascript arrays)?

    - by Rich
    Hi, I am new to NetUI and JavaScript so go easy on me please. I have a form that is populated with container.item data returned from a database. I am adding a checkbox beside each repeater item returned, and I want to add the container item data to an array when one of the checkboxes is checked, for future processing. The old code used an Anchor tag to capture the data, but that does not work for me. <!--netui:parameter name="lineupNo" value="{container.item.lineupIdent.lineupNo}" /> Here is my checkbox, which is a repeater item: <netui:checkBox dataSource="{pageFlow.checkIsSelected}" onClick="checkBoxClicked()" tagId="pceChecked"/> This is my JavaScript function so far, but I need a way to store the container.item.lineupIdent.lineupNo in the array. function checkBoxClicked() { var checkedPce = []; var elem = document.getElementById("PceList").elements; for (var i = 0; i < elem.length; i ++) { if (elem[i].name == netui_names.pceChecked) { if (elem[i].checked == true) { //do some code. } } } } I hope this is enough info for someone to help me. I have searched the web but could not find any examples. Thanks.

    Read the article

  • Basic CSS question regarding background images for divs

    - by Mike
    I'm a programmer trying to learn some css and I've already run into a stumbling block. I have the following HTML: <div class="container"> <div class="span-24 last"> Header </div> <div class="span-4"> Left sidebar </div> <div class="span-16"> <div class="span-8"> Box1 </div> <div class="span-4"> Box2 </div> <div class="span-4 last"> Box3 </div> <div class="span-16 last"> Main content </div> </div> <div class="span-4 last"> Right sidebar </div> <div class="span-24 last"> Footer </div> </div> In my css I have the following: body { background-color:#FFFFFF; } div.container { background:url(/images/bck.jpg); } I just want to display an image for the background area for the container div but nothing shows up. If I remove the background section from the css and add background-color:#000000; then I see a black background for the container div. What am I overlooking?

    Read the article

  • Opera bug with JS autoselecting text (if more than 1 div)

    - by E L
    Here is the HTML code. It is supposed to select all the text in the "Container" div: <B onclick="SelectText(document.getElementById('Container'));">select all text</B> <Div id="Container"> <Div>123456</Div> <Div>123456</Div> <Div onclick="SelectText();">123456</Div> </Div> Here is the JS code of the SelectText() function: function SelectText(target){ if(target==null){ var e = window.event || e; if (!e) var e = window.event; var target=e.target || e.srcElement; } var rng, sel; if ( document.createRange ) { rng = document.createRange(); rng.selectNode( target ); sel = window.getSelection(); sel.removeAllRanges(); sel.addRange( rng ); } else { var rng = document.body.createTextRange(); rng.moveToElementText( target ); rng.select(); } } The problem is that in Opera 12.02, when "select all text" is clicked, all the text looks selected, but it is not actually selected (I can't right-click it and copy). (Terrific, but IE works fine with it.) Why not in Opera?!!! And what can I do to make Opera 12.02 believe that all the text in "Container" is selected?

    Read the article

  • for loop vs std::for_each with lambda

    - by Andrey
    Let's consider a template function written in C++11 which iterates over a container. Please exclude from consideration the range-based for loop syntax, because it is not yet supported by the compiler I'm working with. template <typename Container> void DoSomething(const Container& i_container) { // Option #1 for (auto it = std::begin(i_container); it != std::end(i_container); ++it) { // do something with *it } // Option #2 std::for_each(std::begin(i_container), std::end(i_container), [] (typename Container::const_reference element) { // do something with element }); } What are the pros/cons of the for loop vs std::for_each in terms of: a) performance? (I don't expect any difference) b) readability and maintainability? Here I see many disadvantages of for_each. It wouldn't accept a C-style array while the loop would. The declaration of the lambda's formal parameter is verbose, and it is not possible to use auto there. It is not possible to break out of for_each. In pre-C++11 days the arguments against for were the need to spell out the iterator type (which no longer holds) and the ease of mistyping the loop condition (I've never made that mistake in 10 years). In conclusion, my thoughts about for_each contradict the common opinion. What am I missing here?
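
    Regarding the point about not being able to break out of for_each: a minimal C++11 sketch (an editorial addition, not from the question) of the usual workaround, which is to pick an algorithm whose contract already includes early termination, such as std::find_if:

        #include <algorithm>
        #include <iostream>
        #include <vector>

        int main() {
            std::vector<int> v = {1, 2, -3, 4};

            // std::for_each has no way to stop early; std::find_if stops at the
            // first match, which is the usual algorithm-level substitute for "break".
            auto it = std::find_if(v.begin(), v.end(),
                                   [](int x) { return x < 0; });

            if (it != v.end())
                std::cout << "first negative: " << *it << '\n';
            return 0;
        }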

    Read the article

  • Embedding ADF UI Components into OAF regions

    - by Juan Camilo Ruiz
    Having finished the 2 Webcast on ADF integration with Oracle E-Business Suite, Sara Woodhull, Principal Product Manager on the Oracle E-Business Suite Applications Technology team and I are going to continue adding entries to the series on this topic, trying to cover as many use cases as possible. In this entry, Sara created an overview on how Oracle ADF pages can be embedded into an Oracle Application Framework region. This is a very interesting approach that will enable those of you who are exploring ADF as a technology stack to enhanced some of the Oracle E-Business Suite flows and leverage your skill on Oracle Applications Framework (OAF). In upcoming entries we will start unveiling the internals needed to achieve session sharing between the regions. Stay tuned for more entries and enjoy this new post.   Document Scope This document only covers information that is specific to embedding an Oracle ADF page in an Oracle Application Framework–based page. It assumes knowledge of Oracle ADF and Oracle Application Framework development. It also assumes knowledge of the material in My Oracle Support Note 974949.1, “Oracle E-Business Suite SDK for Java” and My Oracle Support Note 1296491.1, "FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications". Prerequisite Patch Download Patch 12726556:R12.FND.B from My Oracle Support and install it. The implementation described below requires Patch 12726556:R12.FND.B to provide the accessors for the ADF page. This patch is required in addition to the Oracle E-Business Suite SDK for Java patch described in My Oracle Support Note 974949.1. Development Environments You need two different JDeveloper environments: Oracle ADF and OA Framework. Oracle ADF Development Environment You build your Oracle ADF page using JDeveloper 11g. You should use JDeveloper 11g R1 (the latest is 11.1.1.6.0) if you need to use other products in the Oracle Fusion Middleware Stack, such as Oracle WebCenter, Oracle SOA Suite, or BI. You should use JDeveloper 11g R2 (the latest is 11.1.2.3.0) if you do not need other Oracle Fusion Middleware products. JDeveloper 11g R2 is an Oracle ADF-specific release that supports the latest Java EE standards and has various core improvements. Oracle Application Framework Development Environment Build your OA Framework page using a development environment corresponding to your Oracle E-Business Suite version. You must use Release 12.1.2 or later because the rich content container was introduced in Release 12.1.2. See “OA Framework - How to find the correct version of JDeveloper to use with eBusiness Suite 11i or Release 12.x” (My Oracle Support Doc ID 416708.1). Building your Oracle ADF Page Typically you build your ADF page using the session management feature of the Oracle E-Business Suite SDK for Java as described in My Oracle Support Note 974949.1. Also see My Oracle Support Note 1296491.1, "FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications". Building an ADF Page with the Hierarchy Viewer If you are using the ADF hierarchy viewer, you should set up the structure and settings of the ADF page as follows or the hierarchy viewer may not fill the entire area it is supposed to fill (especially a problem in Firefox). Create a stretchable component as the parent component for the hierarchy viewer, such as af:panelStretchLayout (underneath the af:form component in the structure). 
Use af:panelStretchLayout for Oracle ADF 11.1.1.6 and earlier. For later versions of Oracle ADF, use af:panelGridLayout. Create your hierarchy viewer component inside the stretchable component. Create Function in Oracle E-Business Suite Instance In your Oracle E-Business Suite instance, create a function for your ADF page with the following parameters. You can use either the Functions window in the System Administrator responsibility or the Functions page in the Functional Administrator responsibility. Function Function Name Type=External ADF Function (ADFX) HTML Call=GWY.jsp?targetPage=faces/<your ADF page> ">You must also add your function to an Oracle E-Business Suite menu or permission set and set up function security or role-based access control (RBAC) so that the user has authorization to access the function. If you do not want the function to appear on the navigation menu, add the function without a menu prompt. See the Oracle E-Business Suite System Administrator's Guide Documentation Set for more information. Testing the Function from the Oracle E-Business Suite Home Page It’s a good idea to test launching your ADF page from the Oracle E-Business Suite Home Page. Add your function to the navigation menu for your responsibility with a prompt and try launching it. If your ADF page expects parameters from the surrounding page, those might not be available, however. Setting up the Oracle Application Framework Rich Container Once you have built your Oracle ADF 11g page, you need to embed it in your Oracle Application Framework page. Create Rich Content Container in your OA Framework JDeveloper environment In the OA Extension Structure pane for your OAF page, select the region where you want to add the rich content, and add a richContainer item to the region. Set the following properties on the richContainer item: id Content Type=Others (for Release 12.1.3. This property value may change in a future release.) Destination Function=[function code] Width (in pixels or percent, such as 100%) Height (in pixels) Parameters=[any parameters your Oracle ADF page is expecting to receive from the Oracle Application Framework page] Parameters In the Parameters property, specify parameters that will be passed to the embedded content as a list of comma-separated, name-value pairs. Dynamic parameters may be specified as paramName={@viewAttr}. Dynamic Rich Content Container Properties If you want your rich content container to display a different Oracle ADF page depending on other information, you would set up a different function for each different Oracle ADF page. You would then set the Destination Function and Parameters properties programmatically, instead of setting them in the Property Inspector. 
In the processRequest() method of your Oracle Application Framework page controller, where OAFRichContentPage is the ID of your richContainer item and the parameters are whatever parameters your ADF page expects, your code might look similar to this code fragment: OARichContainerBean richBean = (OARichContainerBean) webBean.findChildRecursive("OAFRichContentPage"); if(richBean != null){ if(isFirstCondition){ richBean.setFunctionName("ADF_EXAMPLE_EMBEDDED"); richBean.setParameters("ParamLoginPersonId="+loginPersonId +"&ParamPersonId="+personId+"&ParamUserId="+userId +"&ParamRespId="+respId+"&ParamRespApplId="+respApplId +"&ParamFromOA=Y"+"&ParamSecurityGroupId="+securityGroupId); } else if(isSecondCondition){ richBean.setFunctionName("ADF_EXAMPLE_OTHER_FUNCTION"); richBean.setParameters("ParamLoginPersonId=" +loginPersonId+"&ParamPersonId="+personId +"&ParamUserId="+userId+"&ParamRespId="+respId +"&ParamRespApplId="+respApplId +"&ParamFromOA=Y" +"&ParamSecurityGroupId="+securityGroupId); } }

    Read the article

  • WatiN screenshot saver

    - by Brian Schroer
    In addition to my automated unit, system and integration tests for ASP.NET projects, I like to give my customers something pretty that they can look at and visually see that the web site is behaving properly. I use the Gallio test runner to produce a pretty HTML report, and WatiN (Web Application Testing In .NET) to test the UI and create screenshots. I have a couple of issues with WatiN’s “CaptureWebPageToFile” method, though: It blew up the first (and only) time I tried it, possibly because… It scrolls down to capture the entire web page (I tried it on a very long page), and I usually don’t need that Also, sometimes I don’t need a picture of the whole browser window - I just want a picture of the element that I'm testing (for example, proving that a button has the correct caption). I wrote a WatiN screenshot saver helper class with these methods: SaveBrowserWindowScreenshot(Watin.Core.IE ie)  / SaveBrowserWindowScreenshot(Watin.Core.Element element) saves a screenshot of the browser window SaveBrowserWindowScreenshotWithHighlight(Watin.Core.Element element) saves a screenshot of the browser window, with the specified element scrolled into view and highlighted SaveElementScreenshot(Watin.Core.Element element) saves a picture of only the specified element The element highlighting improves on the built-in WatiN method (which just gives the element a yellow background, and makes the element pretty much unreadable when you have a light foreground color) by adding the ability to specify a HighlightCssClassName that points to a style in your site’s stylesheet. This code is specifically for testing with Internet Explorer (‘cause that’s what I have to test with at work), but you’re welcome to take it and do with it what you want… using System; using System.Drawing; using System.Drawing.Imaging; using System.IO; using System.Reflection; using System.Runtime.InteropServices; using System.Text; using System.Threading; using SHDocVw; using WatiN.Core; using mshtml; namespace BrianSchroer.TestHelpers { public static class WatinScreenshotSaver { public static void SaveBrowserWindowScreenshotWithHighlight (Element element, string screenshotName) { HighlightElement(element, true); SaveBrowserWindowScreenshot(element, screenshotName); HighlightElement(element, false); } public static void SaveBrowserWindowScreenshotWithHighlight(Element element) { HighlightElement(element, true); SaveBrowserWindowScreenshot(element); HighlightElement(element, false); } public static void SaveBrowserWindowScreenshot(Element element, string screenshotName) { SaveScreenshot(GetIe(element), screenshotName, SaveBitmapForCallbackArgs); } public static void SaveBrowserWindowScreenshot(Element element) { SaveScreenshot(GetIe(element), null, SaveBitmapForCallbackArgs); } public static void SaveBrowserWindowScreenshot(IE ie, string screenshotName) { SaveScreenshot(ie, screenshotName, SaveBitmapForCallbackArgs); } public static void SaveBrowserWindowScreenshot(IE ie) { SaveScreenshot(ie, null, SaveBitmapForCallbackArgs); } public static void SaveElementScreenshot(Element element, string screenshotName) { // TODO: Figure out how to get browser window "chrome" size and not have to go to full screen: var iex = (InternetExplorerClass) GetIe(element).InternetExplorer; bool fullScreen = iex.FullScreen; if (!fullScreen) iex.FullScreen = true; ScrollIntoView(element); SaveScreenshot(GetIe(element), screenshotName, args => SaveElementBitmapForCallbackArgs(element, args)); iex.FullScreen = fullScreen; } public static void 
SaveElementScreenshot(Element element) { SaveElementScreenshot(element, null); } private static void SaveScreenshot(IE browser, string screenshotName, Action<ScreenshotCallbackArgs> screenshotCallback) { string fileName = string.Format("{0:000}{1}{2}.jpg", ++_screenshotCount, (string.IsNullOrEmpty(screenshotName)) ? "" : " ", screenshotName); string path = Path.Combine(ScreenshotDirectoryName, fileName); Console.WriteLine(); // Gallio HTML-encodes the following display, but I have a utility program to // remove the "HTML===" and "===HTML" and un-encode the rest to show images in the Gallio report: Console.WriteLine("HTML===<div><b>{0}:</br></b><img src=\"{1}\" /></div>===HTML", screenshotName, new Uri(path).AbsoluteUri); MakeBrowserWindowTopmost(browser); try { var args = new ScreenshotCallbackArgs { InternetExplorerClass = (InternetExplorerClass)browser.InternetExplorer, ScreenshotPath = path }; Thread.Sleep(100); screenshotCallback(args); } catch (Exception ex) { Console.WriteLine(ex.Message); } } public static void HighlightElement(Element element, bool doHighlight) { if (!element.Exists) return; if (string.IsNullOrEmpty(HighlightCssClassName)) { element.Highlight(doHighlight); return; } string jsRef = element.GetJavascriptElementReference(); if (string.IsNullOrEmpty(jsRef)) return; var sb = new StringBuilder("try { "); sb.AppendFormat(" {0}.scrollIntoView(false);", jsRef); string format = (doHighlight) ? "{0}.className += ' {1}'" : "{0}.className = {0}.className.replace(' {1}', '')"; sb.AppendFormat(" " + format + ";", jsRef, HighlightCssClassName); sb.Append("} catch(e) {}"); string script = sb.ToString(); GetIe(element).RunScript(script); } public static void ScrollIntoView(Element element) { string jsRef = element.GetJavascriptElementReference(); if (string.IsNullOrEmpty(jsRef)) return; var sb = new StringBuilder("try { "); sb.AppendFormat(" {0}.scrollIntoView(false);", jsRef); sb.Append("} catch(e) {}"); string script = sb.ToString(); GetIe(element).RunScript(script); } public static void MakeBrowserWindowTopmost(IE ie) { ie.BringToFront(); SetWindowPos(ie.hWnd, HWND_TOPMOST, 0, 0, 0, 0, TOPMOST_FLAGS); } public static string HighlightCssClassName { get; set; } private static int _screenshotCount; private static string _screenshotDirectoryName; public static string ScreenshotDirectoryName { get { if (_screenshotDirectoryName == null) { var asm = Assembly.GetAssembly(typeof(WatinScreenshotSaver)); var uri = new Uri(asm.CodeBase); var fileInfo = new FileInfo(uri.LocalPath); string directoryName = fileInfo.DirectoryName; _screenshotDirectoryName = Path.Combine( directoryName, string.Format("Screenshots_{0:yyyyMMddHHmm}", DateTime.Now)); Console.WriteLine("Screenshot folder: {0}", _screenshotDirectoryName); Directory.CreateDirectory(_screenshotDirectoryName); } return _screenshotDirectoryName; } set { _screenshotDirectoryName = value; _screenshotCount = 0; } } [DllImport("user32.dll")] [return: MarshalAs(UnmanagedType.Bool)] private static extern bool SetWindowPos(IntPtr hWnd, IntPtr hWndInsertAfter, int X, int Y, int cx, int cy, uint uFlags); private static readonly IntPtr HWND_TOPMOST = new IntPtr(-1); private const UInt32 SWP_NOSIZE = 0x0001; private const UInt32 SWP_NOMOVE = 0x0002; private const UInt32 TOPMOST_FLAGS = SWP_NOMOVE | SWP_NOSIZE; private static IE GetIe(Element element) { if (element == null) return null; var container = element.DomContainer; while (container as IE == null) container = container.DomContainer; return (IE)container; } private static void 
SaveBitmapForCallbackArgs(ScreenshotCallbackArgs args) { InternetExplorerClass iex = args.InternetExplorerClass; SaveBitmap(args.ScreenshotPath, iex.Left, iex.Top, iex.Width, iex.Height); } private static void SaveElementBitmapForCallbackArgs(Element element, ScreenshotCallbackArgs args) { InternetExplorerClass iex = args.InternetExplorerClass; Rectangle bounds = GetElementBounds(element); SaveBitmap(args.ScreenshotPath, iex.Left + bounds.Left, iex.Top + bounds.Top, bounds.Width, bounds.Height); } /// <summary> /// This method is used instead of element.NativeElement.GetElementBounds because that /// method has a bug (http://sourceforge.net/tracker/?func=detail&aid=2994660&group_id=167632&atid=843727). /// </summary> private static Rectangle GetElementBounds(Element element) { var ieElem = element.NativeElement as WatiN.Core.Native.InternetExplorer.IEElement; IHTMLElement elem = ieElem.AsHtmlElement; int left = elem.offsetLeft; int top = elem.offsetTop; for (IHTMLElement parent = elem.offsetParent; parent != null; parent = parent.offsetParent) { left += parent.offsetLeft; top += parent.offsetTop; } return new Rectangle(left, top, elem.offsetWidth, elem.offsetHeight); } private static void SaveBitmap(string path, int left, int top, int width, int height) { using (var bitmap = new Bitmap(width, height)) { using (Graphics g = Graphics.FromImage(bitmap)) { g.CopyFromScreen( new Point(left, top), Point.Empty, new Size(width, height) ); } bitmap.Save(path, ImageFormat.Jpeg); } } private class ScreenshotCallbackArgs { public InternetExplorerClass InternetExplorerClass { get; set; } public string ScreenshotPath { get; set; } } } }

    Read the article

  • Neo4J and Azure and VS2012 and Windows 8

    - by Chris Skardon
    Now, I know that this has been written about, but both of the main places (http://www.richard-banks.org/2011/02/running-neo4j-on-azure.html and http://blog.neo4j.org/2011/02/announcing-neo4j-on-windows-azure.html) utilise VS2010, and well, I’m on VS2012 and Windows 8. Not that I think Win 8 had anything to do with it really, anyhews! I’m going to begin from the beginning, this is my first foray into running something on Azure, so it’s been a bit of a learning curve. But luckily the Neo4J guys have got us started, so let’s download the VS2010 solution: http://neo4j.org/get?file=Neo4j.Azure.Server.zip OK, the other thing we’ll need is the VS2012 Azure SDK, so let’s get that as well: http://www.windowsazure.com/en-us/develop/downloads/ (I just did the full install). Now, unzip the VS2010 solution and let’s open it in VS2012: <your location>\Neo4j.Azure.Server\Neo4j.Azure.Server.sln One-way-upgrade? Yer! Ignore the migration report – we don’t care! Let’s build that sucker… Ahhh 14 errors… WindowsAzure does not exist in the namespace ‘Microsoft’ Not a problem right? We’ve installed the SDK, just need to update the references: We can ignore the Test projects, they don’t use Azure, we’re interested in the other projects, so what we’ll do is remove the broken references, and add the correct ones, so expand the references bit of each project: hunt out those yellow exclamation marks, and delete them! You’ll need to add the right ones back in (listed below), when you go to the ‘Add Reference’ dialog make sure you have ‘Assemblies’ and ‘Framework’ selected before you seach (and search for ‘microsoft.win’ to narrow it down) So the references you need for each project are: CollectDiagnosticsData Microsoft.WindowsAzure.Diagnostics Microsoft.WindowsAzure.StorageClient Diversify.WindowsAzure.ServiceRuntime Microsoft.WindowsAzure.CloudDrive Microsoft.WindowsAzure.ServiceRuntime Microsoft.WindowsAzure.StorageClient Right, so let’s build again… Sweet! No errors.   Now we need to setup our Blobs, I’m assuming you are using the most up-to-date Java you happened to have downloaded :) in my case that’s JRE7, and that is located in: C:\Program Files (x86)\Java\jre7 So, zip up that folder into whatever you want to call it, I went with jre7.zip, and stuck it in a temp folder for now. In that same temp folder I also copied the neo4j zip I was using: neo4j-community-1.7.2-windows.zip OK, now, we need to get these into our Blob storage, this is where a lot of stuff becomes unstuck - I didn’t find any applications that helped me use the blob storage, one would crash (because my internet speed is so slow) and the other just didn’t work – sure it looked like it had worked, but when push came to shove it didn’t. So this is how I got my files into Blob (local first): 1. Run the ‘Storage Emulator’ (just search for that in the start menu) 2. That takes a little while to start up so fire up another instance of Visual Studio in the mean time, and create a new Console Application. 3. 
Manage Nuget Packages for that solution and add ‘Windows Azure Storage’ Now you’re set up to add the code: public static void Main() { CloudStorageAccount cloudStorageAccount = CloudStorageAccount.DevelopmentStorageAccount; CloudBlobClient client = cloudStorageAccount.CreateCloudBlobClient(); client.Timeout = TimeSpan.FromMinutes(30); CloudBlobContainer container = client.GetContainerReference("neo4j"); //This will create it as well   UploadBlob(container, "jre7.zip", "c:\\temp\\jre7.zip"); UploadBlob(container, "neo4j-community-1.7.2-windows.zip", "c:\\temp\\neo4j-community-1.7.2-windows.zip"); }   private static void UploadBlob(CloudBlobContainer container, string blobName, string filename) { CloudBlob blob = container.GetBlobReference(blobName);   using (FileStream fileStream = File.OpenRead(filename)) blob.UploadFromStream(fileStream); } This will upload the files to your local storage account (to switch to an Azure one, you’ll need to create a storage account, and use those credentials when you make your CloudStorageAccount above) To test you’ve got them uploaded correctly, go to: http://localhost:10000/devstoreaccount1/neo4j/jre7.zip and you will hopefully download the zip file you just uploaded. Now that those files are there, we are ready for some final configuration… Right click on the Neo4jServerHost role in the Neo4j.Azure.Server cloud project: Click on the ‘Settings’ tab and we’ll need to do some changes – by default, the 1.7.2 edition of neo4J unzips to: neo4j-community-1.7.2 So, we need to update all the ‘neo4j-1.3.M02’ directories to be ‘neo4j-community-1.7.2’, we also need to update the Java runtime location, so we start with this: and end with this: Now, I also changed the Endpoints settings, to be HTTP (from TCP) and to have a port of 7410 (mainly because that’s straight down on the numpad) The last ‘gotcha’ is some hard coded consts, which had me looking for ages, they are in the ‘ConfigSettings’ class of the ‘Neo4jServerHost’ project, and the ones we’re interested in are: Neo4jFileName JavaZipFileName Change those both to what that should be. OK Nearly there (I promise)! Run the ‘Compute Emulator’ (same deal with the Start menu), in your system tray you should have an Azure icon, when the compute emulator is up and running, right click on the icon and select ‘Show Compute Emulator UI’ The last steps! Make sure the ‘Neo4j.Azure.Server’ cloud project is set up as the start project and let’s hit F5 tension mounts, the build takes place (you need to accept the UAC warning) and VS does it’s stuff. If you look at the Compute Emulator UI you’ll see some log stuff (which you’ll need if this goes awry – but it won’t don’t worry!) In a bit, the console and a Java window will pop up: Then the console will bog off, leaving just the Java one, and if we switch back to the Compute Emulator UI and scroll up we should be able to see a line telling us the port number we’ve been assigned (in my case 7411): (If you can’t see it, don’t worry.. press CTRL+A on the emulator, then CTRL+C, copy all the text and paste it into something like Notepad, then just do a Find for ‘port’ you’ll soon see it) Go to your favourite browser, and head to: http://localhost:YOURPORT/ and you should see the WebAdmin! See you on the cloud side hopefully! Chris PS Other gotchas! 
OK, I’ve been caught out a couple of times: I had an instance of Neo4J running as a service on my machine, the Azure instance wanted to run the https version of the server on the same port as the Service was running on, and so Java would complain that the port was already in use.. The first time I converted the project, it didn’t update the version of the Azure library to load, in the App.Config of the Neo4jServerHost project, and VS would throw an exception saying it couldn’t find the Azure dll version 1.0.0.0.

    Read the article

  • Why do we (really) program to interfaces?

    - by Kyle Burns
    One of the earliest lessons I was taught in Enterprise development was "always program against an interface".  This was back in the VB6 days and I quickly learned that no code would be allowed to move to the QA server unless my business objects and data access objects each are defined as an interface and have a matching implementation class.  Why?  "It's more reusable" was one answer.  "It doesn't tie you to a specific implementation" a slightly more knowing answer.  And let's not forget the discussion ending "it's a standard".  The problem with these responses was that senior people didn't really understand the reason we were doing the things we were doing and because of that, we were entirely unable to realize the intent behind the practice - we simply used interfaces and had a bunch of extra code to maintain to show for it. It wasn't until a few years later that I finally heard the term "Inversion of Control".  Simply put, "Inversion of Control" takes the creation of objects that used to be within the control (and therefore a responsibility of) of your component and moves it to some outside force.  For example, consider the following code which follows the old "always program against an interface" rule in the manner of many corporate development shops: 1: ICatalog catalog = new Catalog(); 2: Category[] categories = catalog.GetCategories(); In this example, I met the requirement of the rule by declaring the variable as ICatalog, but I didn't hit "it doesn't tie you to a specific implementation" because I explicitly created an instance of the concrete Catalog object.  If I want to test the functionality of the code I just wrote I have to have an environment in which Catalog can be created along with any of the resources upon which it depends (e.g. configuration files, database connections, etc) in order to test my functionality.  That's a lot of setup work and one of the things that I think ultimately discourages real buy-in of unit testing in many development shops. So how do I test my code without needing Catalog to work?  A very primitive approach I've seen is to change the line the instantiates catalog to read: 1: ICatalog catalog = new FakeCatalog();   once the test is run and passes, the code is switched back to the real thing.  This obviously poses a huge risk for introducing test code into production and in my opinion is worse than just keeping the dependency and its associated setup work.  Another popular approach is to make use of Factory methods which use an object whose "job" is to know how to obtain a valid instance of the object.  Using this approach, the code may look something like this: 1: ICatalog catalog = CatalogFactory.GetCatalog();   The code inside the factory is responsible for deciding "what kind" of catalog is needed.  This is a far better approach than the previous one, but it does make projects grow considerably because now in addition to the interface, the real implementation, and the fake implementation(s) for testing you have added a minimum of one factory (or at least a factory method) for each of your interfaces.  Once again, developers say "that's too complicated and has me writing a bunch of useless code" and quietly slip back into just creating a new Catalog and chalking any test failures up to "it will probably work on the server". This is where software intended specifically to facilitate Inversion of Control comes into play.  
There are many libraries that take on the Inversion of Control responsibilities in .Net and most of them have many pros and cons.  From this point forward I'll discuss concepts from the standpoint of the Unity framework produced by Microsoft's Patterns and Practices team.  I'm primarily focusing on this library because it questions about it inspired this posting. At Unity's core and that of most any IoC framework is a catalog or registry of components.  This registry can be configured either through code or using the application's configuration file and in the most simple terms says "interface X maps to concrete implementation Y".  It can get much more complicated, but I want to keep things at the "what does it do" level instead of "how does it do it".  The object that exposes most of the Unity functionality is the UnityContainer.  This object exposes methods to configure the catalog as well as the Resolve<T> method which is used to obtain an instance of the type represented by T.  When using the Resolve<T> method, Unity does not necessarily have to just "new up" the requested object, but also can track dependencies of that object and ensure that the entire dependency chain is satisfied. There are three basic ways that I have seen Unity used within projects.  Those are through classes directly using the Unity container, classes requiring injection of dependencies, and classes making use of the Service Locator pattern. The first usage of Unity is when classes are aware of the Unity container and directly call its Resolve method whenever they need the services advertised by an interface.  The up side of this approach is that IoC is utilized, but the down side is that every class has to be aware that Unity is being used and tied directly to that implementation. Many developers don't like the idea of as close a tie to specific IoC implementation as is represented by using Unity within all of your classes and for the most part I agree that this isn't a good idea.  As an alternative, classes can be designed for Dependency Injection.  Dependency Injection is where a force outside the class itself manipulates the object to provide implementations of the interfaces that the class needs to interact with the outside world.  This is typically done either through constructor injection where the object has a constructor that accepts an instance of each interface it requires or through property setters accepting the service providers.  When using dependency, I lean toward the use of constructor injection because I view the constructor as being a much better way to "discover" what is required for the instance to be ready for use.  During resolution, Unity looks for an injection constructor and will attempt to resolve instances of each interface required by the constructor, throwing an exception of unable to meet the advertised needs of the class.  The up side of this approach is that the needs of the class are very clearly advertised and the class is unaware of which IoC container (if any) is being used.  The down side of this approach is that you're required to maintain the objects passed to the constructor as instance variables throughout the life of your object and that objects which coordinate with many external services require a lot of additional constructor arguments (this gets ugly and may indicate a need for refactoring). 
The final way that I've seen and used Unity is to make use of the ServiceLocator pattern, of which the Patterns and Practices team has also provided a Unity-compatible implementation.  When using the ServiceLocator, your class calls ServiceLocator.Retrieve in places where it would have called Resolve on the Unity container.  Like using Unity directly, it does tie you directly to the ServiceLocator implementation and makes your code aware that dependency injection is taking place, but it does have the up side of giving you the freedom to swap out the underlying IoC container if necessary.  I'm not hugely concerned with hiding IoC entirely from the class (I view this as a "nice to have"), so the single biggest problem that I see with the ServiceLocator approach is that it provides no way to proactively advertise needs in the way that constructor injection does, allowing more opportunity for difficult to track runtime errors. This blog entry has not been intended in any way to be a definitive work on IoC, but rather as something to spur thought about why we program to interfaces and some ways to reach the intended value of the practice instead of having it just complicate your code.  I hope that it helps somebody begin or continue a journey away from being a "Cargo Cult Programmer".

    Read the article

  • How to Control Screen Layouts in LightSwitch

    - by ChrisD
    Visual Studio LightSwitch has a bunch of screen templates that you can use to quickly generate screens. They give you good starting points that you can customize further. When you add a new screen to your project you see a set of screen templates that you can choose from. These templates lay out all the related data you choose to put on a screen automatically for you. And don’t under estimate them; they do a great job of laying out controls in a smart way. For instance, a tab control will be used when you select more than one related set of data to display on a screen. However, you’re not limited to taking the layout as is. In fact, the screen designer is pretty flexible and allows you to create stacks of controls in a variety of configurations. You just need to visualize your screen as a series of containers that you can lay out in rows and columns. You then place controls or stacks of controls into these areas to align the screen exactly how you want. If you’re new in Visual Studio LightSwitch, you can see this tutorial. OK, Let’s start with a simple example. I have already designed my data entities for a simple order tracking system similar to the Northwind database. I also have added a Search Data  Screen to search my Products already. Now I will add a new Details Screen for my Products and make it the default screen via the “Add New Screen” dialog: The screen designer picks a simple layout for me based on the single entity I chose, in this case Product. Hit F5 to run the application, select a Product on the search screen to open the Product Details Screen. Notice that it’s pretty simple because my entity is simple. Click the “Customize” button in the top right of the screen so we can start tweaking it. The left side of the screen shows the containership of controls and data bindings (called the content tree) and the right side shows the live preview with data. Notice that we have a simple layout of two rows but only one row is populated (with a vertical stack of controls in this case). The bottom row is empty. You can envision the screen like this: Each container will display a group of data that you select. For instance in the above screen, the top row is set to a vertical stack control and the group of data to display is coming from Product. So when laying out screens you need to think in terms of containers of controls bound to groups of data. To change the data to which a container is bound, select the data item next to the container: You can select the “New Group” item in order to create more containers (or controls) within the current container. For instance to totally control the layout, select the Product in the top row and hit the delete key. This will delete the vertical stack and therefore all the controls on the screen. The content tree will still have two rows, but the rows are now both empty. If you want a layout of four containers (two rows and two columns) then select “New Group” for the data item and then change the vertical stack control to “Two Columns” for both of the rows as shown here: You can keep going on and on by selecting new groups and choosing between rows or columns. 
Here’s a layout with 8 containers, 4 rows and 2 columns: And here is a layout with 7 content areas; one row across the top of the screen and three rows with two columns below that: When you select Choose Content and select a data item like Product it will populate all the controls within the container (row or column in a vertical stack) however you have complete control on what to display within each group. You can delete fields you don’t want to display and/or change their controls. You can also change the size of controls and how they display by changing the settings in the properties window. If you are in the Screen Designer (and not the customization mode like we are here) you can also drag-drop data items from the left-hand side of the screen to the content tree. Note, however, that not all areas of the tree will allow you to drop a data item if there is a binding already set to a different set of data. For instance you can’t drop a Customer ID into the same group as a Product if they originate from different entities. To get around this, all you need to do is create a new group and content area as shown above. Let’s take a more complex example that deals with more than just product. I want to design a complex screen that displays Products and their Category, as well as all the OrderDetails for which that product is selected. This time I will create a new screen and select List and Details, select the Products screen data, and include the related OrderDetails. However I’m going to totally change the layout so that a Product grid is at the top left and below that is the selected Product detail. Below that will be the Category text fields and image in two columns below. On the right side I want the OrderDetails grid to take up the whole right side of the screen. All this can be done in customization mode while you’re debugging the application. To do this, I first deleted all the content items in the tree and then re-created the content tree as shown in the image below. I also set the image to be larger and the description textbox to be 5 rows using the property window below the live preview. I added the green lines to indicate the containers and show how it maps to the content tree (click to enlarge): I hope this demystifies the screen designer a little bit. Remember that screen templates are excellent starting points – you can take them as-is or customize them further. It takes a little fooling around with customizing screens to get them to do exactly what you want but there are a ton of possibilities once you get the hang of it. Stay tuned for more information on how to create your own screen templates that show up in the “Add New Screen” dialog. Enjoy! The tutorial that might be interested: Adding Custom Control In LightSwitch

    Read the article

  • java.lang.ArrayIndexOutOfBoundsException

    - by thefonso
    Here is the code. import java.applet.Applet; import java.awt.Button; import java.awt.Color; import java.awt.Graphics; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; public class GuessingGame extends Applet{ /** * */ private static final long serialVersionUID = 1L; private final int START_X = 20; private final int START_Y = 40; private final int ROWS = 4; private final int COLS = 4; private final int BOX_WIDTH = 20; private final int BOX_HEIGHT = 20; //this is used to keep track of boxes that have been matched. private boolean matchedBoxes[][]; //this is used to keep track of two boxes that have been clicked. private MaskableBox chosenBoxes[]; private MaskableBox boxes[][]; private Color boxColors[][]; private Button resetButton; public void init() { boxes = new MaskableBox[ROWS][COLS]; boxColors = new Color[ROWS][COLS]; resetButton = new Button("Reset Colors"); resetButton.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { randomizeColors(); buildBoxes(); repaint(); } }); add(resetButton); //separate building colors so we can add a button later //to re-randomize them. randomizeColors(); buildBoxes(); } public void paint(Graphics g) { for (int row =0; row < boxes.length; row ++) { for (int col = 0; col < boxes[row].length; col++) { if(boxes[row][col].isClicked()) { //boxes[row][col].setMaskColor(Color.black); //boxes[row][col].setMask(!boxes[row][col].isMask()); //boxes[row][col].setClicked(false); //} if (!matchedBoxes[row][col]) { gameLogic(boxes[row][col]); //boxes[row][col].draw(g); } } } } //loop through the boxes and draw them. for (int row = 0; row < boxes.length; row++) { for (int col = 0; col < boxes[row].length; col++) { boxes[row][col].draw(g); } } } public void gameLogic(MaskableBox box) { if ((chosenBoxes[0] != null)&&(chosenBoxes[1] != null)) { if(chosenBoxes[0].getBackColor() == chosenBoxes[1].getBackColor()) { for (int i=0; 0 <= chosenBoxes.length; ++i ) { for(int row = 0; row < boxes.length; row++) { for(int col = 0; col < boxes[row].length; col++) { if( boxes[row][col] == chosenBoxes[i] ) { System.out.println("boxes [row][col] == chosenBoxes[] at index: " + i ); matchedBoxes[row][col] = true; break; } } } } }else { chosenBoxes[0].setMask(true); chosenBoxes[1].setMask(true); } chosenBoxes = new MaskableBox[2]; }else { if (chosenBoxes[0] == null) { chosenBoxes[0] = box; chosenBoxes[0].setMask(false); return; }else{ if (chosenBoxes[1] == null) { chosenBoxes[1] = box; chosenBoxes[1].setMask(false); } } } } private void removeMouseListeners() { for(int row = 0; row < boxes.length; row ++) { for(int col = 0; col < boxes[row].length; col++) { removeMouseListener(boxes[row][col]); } } } private void buildBoxes() { // need to clear any chosen boxes when building new array. 
chosenBoxes = new MaskableBox[2]; // create a new matchedBoxes array matchedBoxes = new boolean [ROWS][COLS]; removeMouseListeners(); for(int row = 0; row < boxes.length; row++) { for(int col = 0; col < boxes[row].length; col++) { boxes[row][col] = new MaskableBox(START_X + col * BOX_WIDTH, START_Y + row * BOX_HEIGHT, BOX_WIDTH, BOX_HEIGHT, Color.gray, boxColors[row][col], true, true, this); addMouseListener(boxes[row][col]); } } } private void randomizeColors() { int[] chosenColors = {0,0,0,0,0,0,0,0}; Color[] availableColors = {Color.red, Color.blue, Color.green, Color.yellow, Color.cyan, Color.magenta, Color.pink, Color.orange }; for(int row = 0; row < boxes.length; row++) { for (int col = 0; col < boxes[row].length; col++) { for (;;) { int rnd = (int) (Math.random() * 8); if (chosenColors[rnd]< 2) { chosenColors[rnd]++; boxColors[row][col] = availableColors[rnd]; break; } } } } } } here is the second batch of code containing maskablebox import java.awt.Color; import java.awt.Container; import java.awt.Graphics; public class MaskableBox extends ClickableBox { private boolean mask; private Color maskColor; Container parent; public MaskableBox(int x, int y, int width, int height, Color borderColor, Color backColor, boolean drawBorder, boolean mask, Container parent ) { super(x, y, width, height, borderColor, backColor, drawBorder, parent); this.parent = parent; this.mask = mask; } public void draw(Graphics g) { if(mask=false) { super.draw(g); // setOldColor(g.getColor()); // g.setColor(maskColor); // g.fillRect(getX(),getY(),getWidth(), getHeight()); // if(isDrawBorder()) { // g.setColor(getBorderColor()); // g.drawRect(getX(),getY(),getWidth(),getHeight()); // } // g.setColor(getOldColor()); }else { if(mask=true) { //super.draw(g); setOldColor(g.getColor()); g.setColor(maskColor); g.fillRect(getX(),getY(),getWidth(), getHeight()); if(isDrawBorder()) { g.setColor(getBorderColor()); g.drawRect(getX(),getY(),getWidth(),getHeight()); } g.setColor(getOldColor()); } } } public boolean isMask() { return mask; } public void setMask(boolean mask) { this.mask = mask; } public Color getMaskColor() { return maskColor; } public void setMaskColor(Color maskColor) { this.maskColor = maskColor; } } I keep getting these error messages. I'm going nuts trying to figure this out. can anyone tell me what I'm doing wrong? boxes [row][col] == chosenBoxes[] at index: 0 boxes [row][col] == chosenBoxes[] at index: 1 Exception in thread "AWT-EventQueue-1" java.lang.ArrayIndexOutOfBoundsException: 2 at GuessingGame.gameLogic(GuessingGame.java:77) at GuessingGame.paint(GuessingGame.java:55) at java.awt.Container.update(Container.java:1801) at sun.awt.RepaintArea.updateComponent(RepaintArea.java:239) at sun.awt.RepaintArea.paint(RepaintArea.java:216) at sun.awt.windows.WComponentPeer.handleEvent(WComponentPeer.java:306) at java.awt.Component.dispatchEventImpl(Component.java:4706) at java.awt.Container.dispatchEventImpl(Container.java:2099) at java.awt.Component.dispatchEvent(Component.java:4460) at java.awt.EventQueue.dispatchEvent(EventQueue.java:599) at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269) at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184) at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161) at java.awt.EventDispatchThread.run(EventDispatchThread.java:122)

    Read the article

  • Use of for_each on map elements

    - by Antonio
    I have a map where I'd like to call a member function on every mapped object. I already know how to do this on any sequence, but is it possible to do it on an associative container? The closest answer I could find was this: Boost.Bind to access std::map elements in std::for_each. But I cannot use Boost in my project, so is there an STL alternative to boost::bind that I'm missing? If it's not possible, I thought of creating a temporary sequence of pointers to the data objects and then calling for_each on it, something like this: class MyClass { public: void Method() const; }; std::map<int, MyClass> Map; //... std::vector<MyClass*> Vector; std::transform(Map.begin(), Map.end(), std::back_inserter(Vector), std::mem_fun_ref(&std::map<int, MyClass>::value_type::second)); std::for_each(Vector.begin(), Vector.end(), std::mem_fun(&MyClass::Method)); It looks too obfuscated and I don't really like it. Any suggestions?
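
    A possible Boost-free alternative, sketched here as an editorial addition (CallMethod is a made-up name): a small hand-written functor passed straight to std::for_each avoids the temporary vector entirely.

        #include <algorithm>
        #include <iostream>
        #include <map>
        #include <utility>

        class MyClass {
        public:
            void Method() const { std::cout << "Method called\n"; }
        };

        // Plain C++03 functor: no Boost, no temporary vector of pointers.
        // Note that the map's value_type is std::pair<const int, MyClass>.
        struct CallMethod {
            void operator()(const std::pair<const int, MyClass>& p) const {
                p.second.Method();
            }
        };

        int main() {
            std::map<int, MyClass> Map;
            Map[1] = MyClass();
            Map[2] = MyClass();

            std::for_each(Map.begin(), Map.end(), CallMethod());

            // With C++11 a lambda removes even the functor boilerplate:
            // std::for_each(Map.begin(), Map.end(),
            //     [](const std::pair<const int, MyClass>& p) { p.second.Method(); });
            return 0;
        }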

    Read the article

  • Find max integer size that a floating point type can handle without loss of precision

    - by Checkers
    Double has a greater range than a 64-bit integer, but its precision is lower due to its representation (since a double is 64 bits as well, it can't fit more actual values). So, when representing larger integers, you start to lose precision in the integer part. #include <boost/cstdint.hpp> #include <iostream> #include <limits> template<typename T, typename TFloat> void maxint_to_double() { T i = std::numeric_limits<T>::max(); TFloat d = i; std::cout << std::fixed << i << std::endl << d << std::endl; } int main() { maxint_to_double<int, double>(); maxint_to_double<boost::intmax_t, double>(); maxint_to_double<int, float>(); return 0; } This prints: 2147483647 2147483647.000000 9223372036854775807 9223372036854775800.000000 2147483647 2147483648.000000 Note how max int can fit into a double without loss of precision while boost::intmax_t (64-bit in this case) cannot. float can't even hold an int. Now, the question: is there a way in C++ to check whether the entire range of a given integer type can fit into a floating-point type without loss of precision? Preferably, it would be a compile-time check that can be used in a static assertion, and it would not involve enumerating constants the compiler should know or can compute.
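
    One way to express such a check, added here as a hedged sketch rather than a definitive answer, is to compare std::numeric_limits<>::digits of the two types (mantissa bits for the floating-point type, value bits for the integer type):

        #include <limits>

        #include <boost/static_assert.hpp>

        // Sketch of a compile-time check: a floating-point type can represent every
        // value of an integer type exactly when its mantissa has at least as many
        // bits (numeric_limits<>::digits) as the integer type has value bits.
        template <typename TInt, typename TFloat>
        struct fits_without_loss {
            static const bool value =
                std::numeric_limits<TFloat>::digits >= std::numeric_limits<TInt>::digits;
        };

        int main() {
            // int (31 value bits on typical platforms) fits into double (53 mantissa bits)...
            BOOST_STATIC_ASSERT((fits_without_loss<int, double>::value));
            // ...while float (24 mantissa bits) does not hold every int; uncommenting
            // the next line should make the build fail:
            // BOOST_STATIC_ASSERT((fits_without_loss<int, float>::value));
            return 0;
        }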

    Read the article

  • What is the rationale for not allowing overloading of the C++ conversion operator with non-member functions?

    - by Vicente Botet Escriba
    C++0x has added explicit conversion operators, but they must always be defined as members of the Source class. The same applies to the assignment operator: it must be defined in the Target class. When the Source and Target classes of the needed conversion are independent of each other, the Source cannot define a conversion operator, nor can the Target define a constructor from a Source. Usually we get around this by defining a specific function such as Target ConvertToTarget(Source& v); If C++0x allowed overloading the conversion operator with non-member functions, we could for example define the conversion, implicitly or explicitly, between unrelated types: template < typename To, typename From > operator To(const From& val); For example we could specialize the conversion from chrono::time_point to posix_time::ptime as follows: template < class Clock, class Duration > operator boost::posix_time::ptime( const boost::chrono::time_point< Clock, Duration >& from) { using namespace boost; typedef chrono::time_point< Clock, Duration > time_point_t; typedef chrono::nanoseconds duration_t; typedef duration_t::rep rep_t; rep_t d = chrono::duration_cast< duration_t >( from.time_since_epoch()).count(); rep_t sec = d/1000000000; rep_t nsec = d%1000000000; return posix_time::from_time_t(0)+ posix_time::seconds(static_cast< long >(sec))+ posix_time::nanoseconds(nsec); } And then use the conversion like any other conversion. So the question is: what is the rationale for not allowing overloading of the C++ conversion operator with non-member functions?
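
    Since the language does not allow the non-member operator shown above, the workaround the post mentions (Target ConvertToTarget(Source&)) is often generalized into a conversion function template. The following is only an illustrative sketch with a made-up name, convert_to, not an established library facility:

        #include <sstream>
        #include <string>

        // Hypothetical free-function stand-in for a non-member conversion operator:
        // the primary template delegates to whatever conversion already exists, and
        // specializations can be added for pairs of otherwise unrelated types.
        template <typename To, typename From>
        To convert_to(const From& from) {
            return To(from);
        }

        // Full specialization for one unrelated pair (int -> std::string), purely illustrative.
        template <>
        std::string convert_to<std::string, int>(const int& from) {
            std::ostringstream os;
            os << from;
            return os.str();
        }

        int main() {
            double d = convert_to<double>(42);            // primary template
            std::string s = convert_to<std::string>(42);  // picks the specialization
            (void)d;
            (void)s;
            return 0;
        }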

    Read the article

  • Lucene setboost doesn't work

    - by Keven
    Hi all, Our team just upgraded Lucene from 2.3 to 3.0 and we are confused about the setBoost and getBoost of Document. What we want is simply to set a boost for each document when adding them to the index; then, when searching, the documents in the response should be ordered differently according to the boost I set. But it seems the order is not changed at all, and even the boost of each document in the search response is still 1.0. Could someone give me a hint? Following is our code: String[] a = new String[] { "schindler", "spielberg", "shawshank", "solace", "sorcerer", "stone", "soap", "salesman", "save" }; List strings = Arrays.asList(a); AutoCompleteIndex index = new Index(); IndexWriter writer = new IndexWriter(index.getDirectory(), AnalyzerFactory.createAnalyzer("en_US"), true, MaxFieldLength.LIMITED); float i = 1f; for (String string : strings) { Document doc = new Document(); Field f = new Field(AutoCompleteIndexFactory.QUERYTEXTFIELD, string, Field.Store.YES, Field.Index.NOT_ANALYZED); doc.setBoost(i); doc.add(f); writer.addDocument(doc); i += 2f; } writer.close(); IndexReader reader2 = IndexReader.open(index.getDirectory()); for (int j = 0; j < reader2.maxDoc(); j++) { if (reader2.isDeleted(j)) { continue; } Document doc = reader2.document(j); Field f = doc.getField(AutoCompleteIndexFactory.QUERYTEXTFIELD); System.out.println(f.stringValue() + ":" + f.getBoost() + ", docBoost:" + doc.getBoost()); doc.setBoost(j); }

    Read the article

  • noncopyable static const member class in template class

    - by Dukales
    I have a non-copyable class (inherited from boost::noncopyable) that I use as a custom namespace. Also, I have another class that uses the previous one, as shown here: #include <boost/utility.hpp> #include <cmath> template< typename F > struct custom_namespace : boost::noncopyable { F sqrt_of_half(F const & x) const { using std::sqrt; return sqrt(x / F(2.0L)); } // ... maybe others are not so dummy const/constexpr methods }; template< typename F > class custom_namespace_user { static ::custom_namespace< F > const custom_namespace_; public : F poisson() const { return custom_namespace_.sqrt_of_half(M_PI); } static F square_diagonal(F const & a) { return a * custom_namespace_.sqrt_of_half(1.0L); } }; template< typename F > ::custom_namespace< F > const custom_namespace_user< F >::custom_namespace_(); This code leads to the following error (even without instantiation): error: no 'const custom_namespace custom_namespace_user::custom_namespace_()' member function declared in class 'custom_namespace_user' The following way is not legitimate either: template< typename F > ::custom_namespace< F > const custom_namespace_user< F >::custom_namespace_ = ::custom_namespace< F >(); What should I do to declare these two classes (the first as a noncopyable static const member of the second)? Is this feasible?
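
    For reference, a self-contained sketch (with invented names, deliberately not the noncopyable case above) of the out-of-class definition syntax that the quoted error is complaining about: the definition must not end in parentheses, or it is parsed as a member function declaration.

        #include <iostream>

        struct Helper {
            Helper() {}                            // user-provided default constructor
            int answer() const { return 42; }
        };

        template <typename T>
        class User {
            static const Helper helper_;           // declaration inside the class template
        public:
            static int ask() { return helper_.answer(); }
        };

        // Out-of-class definition of the static data member: note there are no
        // parentheses after helper_; writing "helper_()" here would declare a
        // member function instead, which matches the compiler error quoted above.
        template <typename T>
        const Helper User<T>::helper_;

        int main() {
            std::cout << User<int>::ask() << std::endl;
            return 0;
        }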

    Read the article

  • What is the difference between Inversion of Control and Dependency injection in C++?

    - by rlbond
    I've been reading recently about DI and IoC in C++. I am a little confused (even after reading related questions here on SO) and was hoping for some clarification. It seems to me that being familiar with the STL and Boost leads to use of dependency injection quite a bit. For example, let's say I made a function that found the mean of a range of numbers: template <typename Iter> double mean(Iter first, Iter last) { double sum = 0; size_t number = 0; while (first != last) { sum += *(first++); ++number; } return sum/number; }; Is this dependency injection? Inversion of control? Neither? Let's look at another example. We have a class: class Dice { public: typedef boost::mt19937 Engine; Dice(int num_dice, Engine& rng) : n_(num_dice), eng_(rng) {} int roll() { int sum = 0; for (int i = 0; i < num_dice; ++i) sum += boost::uniform_int<>(1,6)(eng_); return sum; } private: Engine& eng_; int n_; }; This seems like dependency injection. But is it inversion of control? Also, if I'm missing something, can someone help me out?
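
    The Dice class above already injects its engine through the constructor. Below is a minimal sketch (NumberSource and FixedSource are invented names, not from the question) of taking that one step further to an abstract interface, which is what makes the dependency swappable in tests and is usually what people mean by dependency injection:

        #include <iostream>

        // A hypothetical abstract "engine" interface: Dice depends only on this
        // interface, and the concrete implementation is injected from outside
        // through the constructor (constructor injection).
        struct NumberSource {
            virtual ~NumberSource() {}
            virtual int next() = 0;
        };

        class Dice {
        public:
            Dice(int num_dice, NumberSource& src) : n_(num_dice), src_(src) {}
            int roll() {
                int sum = 0;
                for (int i = 0; i < n_; ++i)
                    sum += src_.next();
                return sum;
            }
        private:
            int n_;
            NumberSource& src_;
        };

        // A fake source makes Dice testable without any real randomness.
        struct FixedSource : NumberSource {
            int next() { return 4; }   // fixed value for a deterministic test
        };

        int main() {
            FixedSource fake;
            Dice dice(3, fake);
            std::cout << dice.roll() << std::endl;  // prints 12, deterministically
            return 0;
        }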

    Read the article

  • How do I enforce the order of qmake library dependencies?

    - by James Oltmans
I'm getting a lot of errors because qmake is improperly ordering the Boost libraries I'm using. Here's what my .pro file looks like:

QT += core gui
TARGET = MyTarget
TEMPLATE = app
CONFIG += no_keywords \
    link_pkgconfig
SOURCES += file1.cpp \
    file2.cpp \
    file3.cpp
PKGCONFIG += my_package \
    sqlite3
LIBS += -lsqlite3 \
    -lboost_signals \
    -lboost_date_time
HEADERS += file1.h \
    file2.h \
    file3.h
FORMS += mainwindow.ui
RESOURCES += Resources/resources.qrc

This produces the following command:

g++ -Wl,-O1 -o MyTarget file1.o file2.o file3.o moc_mainwindow.o -L/usr/lib/x86_64-linux-gnu -lboost_signals -lboost_date_time -L/usr/local/lib -lmylib1 -lmylib2 -lsqlite3 -lQtGui -lQtCore

Note: mylib1 and mylib2 are statically compiled by another project, placed in /usr/local/lib with an appropriate pkg-config .pc file pointing there. The .pro file references them via my_package in PKGCONFIG. The problem is not with pkg-config's output but with Qt's ordering. Here's the .pc file:

prefix=/usr/local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: my_package
Description: My component package
Version: 0.1
URL: http://example.com
Libs: -L${libdir} -lmylib1 -lmylib2
Cflags: -I${includedir}/my_package/

The linking stage fails spectacularly, as mylib1 and mylib2 come up with a lot of undefined references to the Boost libraries that both the app and mylib1/mylib2 use. We have another build method using SCons, and it orders things correctly for the linker; its build command is below:

g++ -o MyTarget file1.o file2.o file3.o moc_mainwindow.o -L/usr/local/lib -lmylib1 -lmylib2 -lsqlite3 -lboost_signals -lboost_date_time -lQtGui -lQtCore

Note that the principal difference is the order of the Boost libs: SCons puts them at the end, just before QtGui and QtCore, while qmake puts them first. The other differences in the compile commands are unimportant, as I have hand-modified the qmake-produced Makefile and the simple reordering fixed the problem. So my question is: how do I enforce the right order in my .pro file despite what qmake thinks it should be?

    Read the article

  • Why do you need "extern C" for C++ callbacks to C functions?

    - by Artyom
Hello, I find examples like this in Boost code:

namespace boost {

namespace {

extern "C" void *thread_proxy(void *f)
{
    ....
}

} // anonymous

void thread::thread_start(...)
{
    ...
    pthread_create(something, 0, &thread_proxy, something_else);
    ...
}

} // boost

Why do you actually need this extern "C"? It is clear that the thread_proxy function is private and internal, and it does not matter to me whether the symbol comes out as "thread_proxy", because I do not need it unmangled at all. In fact, in all the code I have written, which runs on many platforms, I never used extern "C" and this worked as-is with normal functions. Why is extern "C" added? My problem is that extern "C" functions pollute the global namespace and are not actually hidden as the author expects. This is not a duplicate! I'm not talking about mangling and external linkage; it is obvious in this code that external linkage is unwanted! Answer: The calling conventions of C and C++ functions are not necessarily the same, so you need to create a function with the C calling convention. See 7.5 (p4) of the C++ standard.
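A minimal self-contained sketch of the pattern being asked about (the names are mine, not Boost's): the trampoline handed to pthread_create is declared extern "C" so it has the C language linkage and calling convention that the C API formally expects, while the C++ work happens inside it.

// build with: g++ -pthread example.cpp
#include <pthread.h>
#include <cstdio>

namespace {

struct task { int id; };

// extern "C" gives the trampoline C language linkage, so its type matches
// what pthread_create (a C API) formally expects for its start routine.
extern "C" void* thread_trampoline(void* arg)
{
    task* t = static_cast<task*>(arg);
    std::printf("running task %d\n", t->id);
    return nullptr;
}

} // anonymous namespace

int main()
{
    task t{42};
    pthread_t tid;
    if (pthread_create(&tid, nullptr, &thread_trampoline, &t) != 0)
        return 1;
    pthread_join(tid, nullptr);
}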

    Read the article

  • How to negate a predicate function using operator ! in C++?

    - by Chan
Hi, I want to erase all the elements that do not satisfy a criterion. For example: delete all the characters in a string that are not digits. My solution using boost::is_digit worked well:

struct my_is_digit
{
    bool operator()( char c ) const
    {
        return c >= '0' && c <= '9';
    }
};

int main()
{
    string s( "1a2b3c4d" );
    s.erase( remove_if( s.begin(), s.end(), !boost::is_digit() ), s.end() );
    s.erase( remove_if( s.begin(), s.end(), !my_is_digit() ), s.end() );
    cout << s << endl;
    return 0;
}

Then I tried my own version and the compiler complained: error C2675: unary '!' : 'my_is_digit' does not define this operator or a conversion to a type acceptable to the predefined operator. I could use the not1() adapter, but I still think operator ! is more meaningful in my current context. How could I implement such a ! like boost::is_digit()? Any ideas? Thanks, Chan Nguyen
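A sketch of one way to get the ! syntax on your own predicate (this mirrors what Boost does conceptually, but the helper type name here is made up): give the predicate an operator! that returns a small negating wrapper.

#include <algorithm>
#include <iostream>
#include <string>

// Wrapper returned by operator!: forwards the call and negates the result.
struct my_is_not_digit
{
    bool operator()( char c ) const;
};

struct my_is_digit
{
    bool operator()( char c ) const { return c >= '0' && c <= '9'; }
    my_is_not_digit operator!() const { return my_is_not_digit(); }
};

inline bool my_is_not_digit::operator()( char c ) const
{
    return !my_is_digit()( c );
}

int main()
{
    std::string s( "1a2b3c4d" );
    // remove_if with the negated predicate drops the non-digits.
    s.erase( std::remove_if( s.begin(), s.end(), !my_is_digit() ), s.end() );
    std::cout << s << std::endl;  // prints 1234
    return 0;
}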

    Read the article

  • Including typedef of child in parent class

    - by Baz
I have a class which looks something like this. I'd prefer to have the typedef of ParentMember inside the Parent class and rename it Member. How might this be possible? The only way I can see is to have the std::vector as a public member instead of using inheritance.

typedef std::pair<std::string, boost::any> ParentMember;

class Parent: public std::vector<ParentMember>
{
public:
    template <typename T>
    std::vector<T>& getMember(std::string& s)
    {
        MemberFinder finder(s);
        std::vector<ParentMember>::iterator member = std::find_if(begin(), end(), finder);
        boost::any& container = member->second;
        return boost::any_cast<std::vector<T>&>(container);
    }

private:
    class Finder { ... };
};
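A sketch of the composition approach hinted at in the question (my own illustration, not the poster's final code; it uses a C++11 lambda for brevity and omits not-found handling): hold the vector as a private member, declare the typedef inside the class under the preferred name Member, and expose only what is needed.

#include <boost/any.hpp>
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

class Parent
{
public:
    // The typedef now lives inside the class, under the preferred name.
    typedef std::pair<std::string, boost::any> Member;

    void add(const Member& m) { members_.push_back(m); }

    template <typename T>
    std::vector<T>& getMember(const std::string& s)
    {
        std::vector<Member>::iterator it =
            std::find_if(members_.begin(), members_.end(),
                         [&s](const Member& m) { return m.first == s; });
        return boost::any_cast<std::vector<T>&>(it->second);  // no not-found handling in this sketch
    }

private:
    std::vector<Member> members_;  // composition instead of inheriting from std::vector
};

// Usage sketch:
//   Parent p;
//   p.add(Parent::Member("ints", std::vector<int>(3, 7)));
//   std::vector<int>& ints = p.getMember<int>("ints");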

    Read the article

  • In plesk 9.3.0, which php.ini is in use?

    - by Gaia
I have 3 (actually 4, but the 4th one is for Installatron) php.ini files in my Virtuozzo container running RHEL 5.x:

/vz/root/1003/usr/local/psa/admin/conf/php.ini
/vz/root/1003/etc/php.ini
/vz/root/1003/etc/etc/php.ini

Which one do I use to change the MEMORY_LIMIT for a WordPress app running in container 1003? Thanks!

    Read the article

  • High I/O latency with software RAID, LUKS encrypted and LVM partitioned KVM setup

    - by aef
I found out that the performance problems with a Mumble server, which I described in a previous question, are caused by an I/O latency problem of unknown origin. As I have no idea what is causing this or how to debug it further, I'm asking for your ideas on the topic.

I'm running a Hetzner EX4S root server as KVM hypervisor. The server is running Debian Wheezy Beta 4 and KVM virtualisation is utilized through LibVirt. The server has two different 3TB hard drives, as one of the drives was replaced after S.M.A.R.T. errors were reported. The first hard disk is a Seagate Barracuda XT ST33000651AS (512 bytes logical, 4096 bytes physical sector size), the other one a Seagate Barracuda 7200.14 (AF) ST3000DM001-9YN166 (512 bytes logical and physical sector size).

There are two Linux software RAID1 devices: one for the unencrypted boot partition and one as a container for the encrypted rest, using both hard drives. Inside the latter RAID device lies an AES-encrypted LUKS container. Inside the LUKS container there is an LVM physical volume. The hypervisor's VFS is split over three logical volumes on the described LVM physical volume: one for /, one for /home and one for swap. Here is a diagram of the block device configuration stack:

sda (Physical HDD) - md0 (RAID1)
                   - md1 (RAID1)
sdb (Physical HDD) - md0 (RAID1)
                   - md1 (RAID1)
md0 (Boot RAID) - ext4 (/boot)
md1 (Data RAID) - LUKS container
                  - LVM Physical volume
                    - LVM volume hypervisor-root
                    - LVM volume hypervisor-home
                    - LVM volume hypervisor-swap
                    - … (Virtual machine volumes)

The guest systems (virtual machines) are mostly running Debian Wheezy Beta 4 too; we have one additional Ubuntu Precise instance. They get their block devices from the LVM physical volume as well. The volumes are accessed through Virtio drivers in native writethrough mode. The I/O scheduler (elevator) on both the hypervisor and the guest systems is set to deadline instead of the default cfq, as that happened to be the most performant setup according to our bonnie++ test series.

The I/O latency problem is experienced not only inside the guest systems but also affects services running on the hypervisor itself. The setup seems complex, but I'm sure the basic structure is not what causes the latency problems, as my previous server ran for four years with almost the same basic setup without any of the performance problems. On the old setup the following things were different:

    Debian Lenny was the OS for both hypervisor and almost all guests
    Xen software virtualisation (therefore no Virtio, either)
    no LibVirt management
    Different hard drives, each 1.5TB in size (one of them was a Seagate Barracuda 7200.11 ST31500341AS, the other one I can't tell anymore)
    We had no IPv6 connectivity
    Neither in the hypervisor nor in the guests did we have noticeable I/O latency problems

According to the datasheets, the current hard drives and the ones in the old machine have an average latency of 4.12 ms.

    Read the article

  • Identify OpenVZ virtual machine from inside

    - by Alfred Godoy
    Is there any way for me to identify which OpenVZ container I am in, from inside the container? I am working on a setup where OpenVZ machines shall boot the same (read-only) disk image, so I can not configure them individually in the file system. I need a unique identification for each of the virtual servers, to be used by scripts running inside the OpenVZ containers. (I'm running Debian Lenny, BTW.)

    Read the article

< Previous Page | 79 80 81 82 83 84 85 86 87 88 89 90  | Next Page >