Search Results

Search found 12224 results on 489 pages for 'map editor'.


  • WCF Endpoints & Binding Configuration Issues

    - by CodeAbundance
    I am running into a very strange issue here folks. For simplicity I created a project for the sole purpose of testing the issue outside the framework of a larger application and still encountered what is either a bug in WCF within Visual Studio 2010 or something related to my WCF newbie skill set : ) Here is the issue: I have a WCF endpoint I created running inside of an MVC3 project called "SimpleMethod". The method runs inside of a .svc file on the root of the application and it returns a bool. Using the "WCF Service Configuration Editor" I have added the endpoint to my Web.Config along with a called "LargeImageBinding". Here is the service: [OperationContract] public bool SimpleMethod() { return true; } And the Web.Config generated by the Config Tool: <system.serviceModel> <bindings> <wsHttpBinding> <binding name="LargeImageBinding" closeTimeout="00:10:00" /> </wsHttpBinding> </bindings> <services> <service name="WCFEndpoints.ServiceTestOne"> <endpoint address="/ServiceTestOne.svc" binding="wsHttpBinding" bindingConfiguration="LargeImageBinding" contract="WCFEndpoints.IServiceTestOne" /> </service> </services> <behaviors> <serviceBehaviors> <behavior name=""> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> The service renders fine and you can see the endpoint when you navigate to: http://localhost:57364/ServiceTestOne.svc - Now the issue occurs when I create a separate project to consume the service. I add a service reference to a running instance of the above project, point it to: http://localhost:57364/ServiceTestOne.svc Here is the weird part. The service automatically generates just fine but In the Web.Config the endpoint that is generated looks like this: <client> <endpoint address="http://localhost:57364/ServiceTestOne.svc/ServiceTestOne.svc" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IServiceTestOne" contract="ServiceTestOne.IServiceTestOne" name="WSHttpBinding_IServiceTestOne"> As you can see it lists the "ServiceTestOne.svc" portion of the address twice! When I make a call to the the service I get the following error: The remote server returned an error: (404) Not Found. I tried removing the extra "/ServiceTestOne.svc" at the end of the endpoint address in the above config, and I get the same exact error. Now what DOES work is if I go back to the WCF application and remove the custom endpoint and binding references in the Web.Config (everything in the "services" and "bindings" tags) then go back to the consumer application, update the reference to the service and make the call to SimpleMethod()....BOOM works like a charm and I get back a bool set to true. The thing is, I need to make custom binding configurations in order to allow for access to the service outside of the defaults, and from what I can tell, any attempt to create custom bindings makes the endpoints seem to run fine, but fail when an actual method call is made. Can anyone see any flaw in how I am putting this together? Thank you for your time - I have been running in circles with this for about a week!
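
    One thing worth checking (an editor's sketch, not a confirmed fix): when a WCF service is hosted in IIS, endpoint addresses are treated as relative to the .svc file's own base address, so address="/ServiceTestOne.svc" produces exactly the doubled .../ServiceTestOne.svc/ServiceTestOne.svc that the generated client config shows. Leaving the address empty keeps the endpoint at the .svc itself while still using the custom binding:

      <service name="WCFEndpoints.ServiceTestOne">
        <!-- empty relative address: the endpoint lives at ServiceTestOne.svc itself -->
        <endpoint address=""
                  binding="wsHttpBinding"
                  bindingConfiguration="LargeImageBinding"
                  contract="WCFEndpoints.IServiceTestOne" />
      </service>

    After changing it, the service reference in the consumer would need updating so the client endpoint is regenerated without the doubled path.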

    Read the article

  • Simplest way to flatten document to a view in RavenDB

    - by degorolls
    Given the following classes: public class Lookup { public string Code { get; set; } public string Name { get; set; } } public class DocA { public string Id { get; set; } public string Name { get; set; } public Lookup Currency { get; set; } } public class ViewA // Simply a flattened version of the doc { public string Id { get; set; } public string Name { get; set; } public string CurrencyName { get; set; } // View just gets the name of the currency } I can create an index that allows client to query the view as follows: public class A_View : AbstractIndexCreationTask<DocA, ViewA> { public A_View() { Map = docs => from doc in docs select new ViewA { Id = doc.Id, Name = doc.Name, CurrencyName = doc.Currency.Name }; Reduce = results => from result in results group on new ViewA { Id = result.Id, Name = result.Name, CurrencyName = result.CurrencyName } into g select new ViewA { Id = g.Key.Id, Name = g.Key.Name, CurrencyName = g.Key.CurrencyName }; } } This certainly works and produces the desired result of a view with the data transformed to the structure required at the client application. However, it is unworkably verbose, will be a maintenance nightmare and is probably fairly inefficient with all the redundant object construction. Is there a simpler way of creating an index with the required structure (ViewA) given a collection of documents (DocA)? FURTHER INFORMATION The issue appears to be that in order to have the index hold the data in the transformed structure (ViewA), we have to do a Reduce. It appears that a Reduce must have both a GROUP ON and a SELECT in order to work as expected so the following are not valid: INVALID REDUCE CLAUSE 1: Reduce = results => from result in results group on new ViewA { Id = result.Id, Name = result.Name, CurrencyName = result.CurrencyName } into g select g.Key; This produces: System.InvalidOperationException: Variable initializer select must have a lambda expression with an object create expression Clearly we need to have the 'select new'. INVALID REDUCE CLAUSE 2: Reduce = results => from result in results select new ViewA { Id = result.Id, Name = result.Name, CurrencyName = result.CurrencyName }; This prduces: System.InvalidCastException: Unable to cast object of type 'ICSharpCode.NRefactory.Ast.IdentifierExpression' to type 'ICSharpCode.NRefactory.Ast.InvocationExpression'. Clearly, we also need to have the 'group on new'. Thanks for any assistance you can provide. (Note: removing the type (ViewA) from the constructor calls has no effect on the above)

    Read the article

  • Launching a file using ACTION_VIEW Intent Action

    - by Sneha
    I have the following code to launch a file : try { path = fileJsonObject.getString("filePath"); if (path.indexOf("/") == 0) { path = path.substring(1, path.length()); } path = root + path; final File fileToOpen = new File(path); if (fileToOpen.exists()) { if (fileToOpen.isFile()) { Intent myIntent = new Intent(android.content.Intent.ACTION_VIEW); myIntent.setData(Uri.parse(path)); final String pathToCheck = new String(path); pathToCheck.toLowerCase(); if (pathToCheck.endsWith(".wav") || pathToCheck.endsWith(".ogg") || pathToCheck.endsWith(".mp3") || pathToCheck.endsWith(".mid") || pathToCheck.endsWith(".midi") || pathToCheck.endsWith(".amr")) { myIntent.setType("audio/*"); } else if (pathToCheck.endsWith(".mpg") || pathToCheck.endsWith(".mpeg") || pathToCheck.endsWith(".3gp") || pathToCheck.endsWith(".mp4")) { myIntent.setType("video/*"); } else if (pathToCheck.endsWith(".jpg") || pathToCheck.endsWith(".jpeg") || pathToCheck.endsWith(".gif") || pathToCheck.endsWith(".png") || pathToCheck.endsWith(".bmp")) { myIntent.setType("image/*"); } else if (pathToCheck.endsWith(".txt") || pathToCheck.endsWith(".csv") || pathToCheck.endsWith(".xml")) { Log.i("txt","Text fileeeeeeeeeeeeeeeeeeeeeeeeee"); myIntent.setType("text/*"); } else if (pathToCheck.endsWith(".gz") || pathToCheck.endsWith(".rar") || pathToCheck.endsWith(".zip")) { myIntent.setType("package/*"); } else if (pathToCheck.endsWith(".apk")) { myIntent.setType("application/vnd.android.package-archive"); } ((Activity) context).startActivityForResult(myIntent, RequestCodes.LAUNCH_FILE_CODE); } else { errUrl = resMsgHandler.errMsgResponse(fileJsonObject, "Incorrect path provided. please give correct path of file"); return errUrl; } } else { errUrl = resMsgHandler.errMsgResponse(fileJsonObject,"Incorrect path provided. please give correct path of file"); return errUrl; } } catch (Exception e) { e.printStackTrace(); Log.i("err","Unable to launch file" + " " + e.getMessage()); errUrl = resMsgHandler.errMsgResponse(fileJsonObject, "Unable to launch file" + " " + e.getMessage()); return errUrl; } @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { // TODO Auto-generated method stub super.onActivityResult(requestCode, resultCode, data); try { if (requestCode == RequestCodes.LAUNCH_FILE_CODE) { if (resultCode == RESULT_CANCELED) { Log.i("err","errrrrrrrrrrrrrrrrrrrrrrrrrrrrrr"); String errUrl = responseMsgHandler.errMsgResponse(FileHandler.fileJsonObject, "Unable to launch file"); mWebView.loadUrl(errUrl); } else if (resultCode == RESULT_OK) { String successUrl = responseMsgHandler.launchfileResponse(FileHandler.fileJsonObject); mWebView.loadUrl(successUrl); } Amd the result ctrl is at "if (resultCode == RESULT_CANCELED)". So how to successfully launch this? 
May be in short i am doing this: final File fileToOpen = new File(path); if (fileToOpen.exists()) { if (fileToOpen.isFile()) { Intent myIntent = new Intent(android.content.Intent.ACTION_VIEW); myIntent.setData(Uri.parse(path)); if (pathToCheck.endsWith(".txt") || pathToCheck.endsWith(".csv") || pathToCheck.endsWith(".xml")) { Log.i("txt","Text fileeeeeeeeeeeeeeeeeeeeeeeeee"); myIntent.setType("text/*"); startActivityForResult(myIntent, RequestCodes.LAUNCH_FILE_CODE); and @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { // TODO Auto-generated method stub super.onActivityResult(requestCode, resultCode, data); if (requestCode == RequestCodes.LAUNCH_FILE_CODE) { if (resultCode == RESULT_CANCELED) { Log.i ("err","errrrrrrrrrrrrrrrrrrrrrrrrrrrrrr"); String errUrl = responseMsgHandler.errMsgResponse(FileHandler.fileJsonObject, "Unable to launch file"); mWebView.loadUrl(errUrl); } else if (resultCode == RESULT_OK) { String successUrl = responseMsgHandler.launchfileResponse(FileHandler.fileJsonObject); mWebView.loadUrl(successUrl); } My err log: 04-04 10:53:57.077: ERROR/AndroidRuntime(6861): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.tf.thinkdroid.sstablet/com.tf.thinkdroid.write.editor.WriteEditorActivity}: java.lang.NullPointerException 04-04 10:53:57.077: ERROR/AndroidRuntime(6861): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2663) 04-04 10:53:57.077: ERROR/AndroidRuntime(6861): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2679) ..... 04-04 10:53:57.077: ERROR/AndroidRuntime(6861): Caused by: java.lang.NullPointerException 04-04 10:53:57.077: ERROR/AndroidRuntime(6861): at com.tf.thinkdroid.common.app.TFActivity.storeDataToFileIfNecessary(Unknown Source) 04-04 10:53:57.077: ERROR/AndroidRuntime(6861): at com.tf.thinkdroid.common.app.TFActivity.onPostCreate(Unknown Source) ... Thanks Sneha
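
    Two details in the posted code stand out (editorial sketch, not from the original thread): String.toLowerCase() returns a new string rather than changing pathToCheck in place, and Intent.setType() clears any URI previously set with setData(), so the intent ends up with a MIME type but no data. setDataAndType() keeps both; the names below come from the post, and the MIME choices are illustrative:

      String lowerPath = path.toLowerCase();               // toLowerCase() does not modify in place
      Intent myIntent = new Intent(Intent.ACTION_VIEW);
      String mimeType = "*/*";                              // fallback
      if (lowerPath.endsWith(".txt") || lowerPath.endsWith(".csv") || lowerPath.endsWith(".xml")) {
          mimeType = "text/plain";
      } else if (lowerPath.endsWith(".mp3") || lowerPath.endsWith(".wav") || lowerPath.endsWith(".ogg")) {
          mimeType = "audio/*";
      }
      // setType() would wipe the data URI, so set both in one call:
      myIntent.setDataAndType(Uri.fromFile(fileToOpen), mimeType);
      ((Activity) context).startActivityForResult(myIntent, RequestCodes.LAUNCH_FILE_CODE);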

    Read the article

  • Windows splash screen using GDI+

    - by Luther
    The eventual aim of this is to have a splash screen in windows that uses transparency but that's not what I'm stuck on at the moment. In order to create a transparent window, I'm first trying to composite the splash screen and text on an off screen buffer using GDI+. At the moment I'm just trying to composite the buffer and display it in response to a 'WM_PAINT' message. This isn't working out at the moment; all I see is a black window. I imagine I've misunderstood something with regards to setting up render targets in GDI+ and then rendering them (I'm trying to render the screen using straight forward GDI blit) Anyway, here's the code so far: //my window initialisation code void MyWindow::create_hwnd(HINSTANCE instance, const SIZE &dim) { DWORD ex_style = WS_EX_LAYERED ; //eventually I'll be making use of this layerd flag m_hwnd = CreateWindowEx( ex_style, szFloatingWindowClass , L"", WS_POPUP , 0, 0, dim.cx, dim.cy, null, null, instance, null); SetWindowLongPtr(m_hwnd ,0, (__int3264)(LONG_PTR)this); m_display_dc = GetDC(NULL); //This was sanity check test code - just loading a standard HBITMAP and displaying it in WM_PAINT. It worked fine //HANDLE handle= LoadImage(NULL , L"c:\\test_image2.bmp", IMAGE_BITMAP, 0, 0, LR_LOADFROMFILE); m_gdip_offscreen_bm = new Gdiplus::Bitmap(dim.cx, dim.cy); m_gdi_dc = Gdiplus::Graphics::FromImage(m_gdip_offscreen_bm);//new Gdiplus::Graphics(m_splash_dc );//window_dc ;m_splash_dc //this draws the conents of my splash screen - this works if I create a GDI+ context for the window, rather than for an offscreen bitmap. //For all I know, it might actually be working but when I try to display the contents on screen, it shows a black image draw_all(); //this is just to show that drawing something simple on the offscreen bit map seems to have no effect Gdiplus::Pen pen(Gdiplus::Color(255, 0, 0, 255)); m_gdi_dc->DrawLine(&pen, 0,0,100,100); DWORD last_error = GetLastError(); //returns '0' at this stage } And here's the snipit that handles the WM_PAINT message: ---8<----------------------- //Paint message snippit case WM_PAINT: { BITMAP bm; PAINTSTRUCT ps; HDC hdc = BeginPaint(vg->m_hwnd, &ps); //get the HWNDs DC HDC hdcMem = vg->m_gdi_dc->GetHDC(); //get the HDC from our offscreen GDI+ object unsigned int width = vg->m_gdip_offscreen_bm->GetWidth(); //width and height seem fine at this point unsigned int height = vg->m_gdip_offscreen_bm->GetHeight(); BitBlt(hdc, 0, 0, width, height, hdcMem, 0, 0, SRCCOPY); //this blits a black rectangle DWORD last_error = GetLastError(); //this was '0' vg->m_gdi_dc->ReleaseHDC(hdcMem); EndPaint(vg->m_hwnd, &ps); //end paint return 1; } ---8<----------------------- My apologies for the long post. Does anybody know what I'm not quite understanding regarding how you write to an offscreen buffer using GDI+ (or GDI for that matter)and then display this on screen? Thank you for reading.
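
    A guess at the black-window cause (sketch only, not verified against the full project): the HDC returned by Graphics::GetHDC() on a bitmap-backed Graphics is not a DC with the off-screen bitmap selected into it, so BitBlt copies from an empty surface. Letting GDI+ draw the off-screen Bitmap straight onto the window DC sidesteps that:

      case WM_PAINT:
      {
          PAINTSTRUCT ps;
          HDC hdc = BeginPaint(vg->m_hwnd, &ps);
          Gdiplus::Graphics screen(hdc);                    // wrap the window DC
          screen.DrawImage(vg->m_gdip_offscreen_bm, 0, 0);  // draw the composited bitmap
          EndPaint(vg->m_hwnd, &ps);
          return 0;
      }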

    Read the article

  • dijit/form/Select broken in Internet Explorer using Esri Javascript 3.7

    - by disuse
    After developing a web map app in Firefox, I tested my code in Internet Explorer (company standard) to discover that the dijit/form/Select is misbehaving using the latest Esri JavaScript v3.7. The issue I am seeing is that the Select will not update/change from the first option in the list when using v3.7. If I bump the version down to 3.6, it works as expected. I've tried IE browser modes from 7 to 10 and am experiencing the same behavior between all of them. Can someone confirm they are experiencing the same thing? Example in 3.7 - http://jsbin.com/aVIsApO/1/edit Example in 3.6 - http://jsbin.com/odIxETu/7/edit Codeblock var url = "http://services.arcgis.com/V6ZHFr6zdgNZuVG0/ArcGIS/rest/services/Street_Trees/FeatureServer/0"; var frmTrees; require([ "esri/tasks/query", "esri/tasks/QueryTask", "dojo/dom-construct", "dijit/form/Select", "dojo/parser", "dijit/registry", "dojo/on", "dojo/ready", "dojo/_base/connect", "dojo/domReady!" ], function( Query, QueryTask, domConstruct, Select, parser, registry, on, ready, connect ) { ready(function() { frmTrees = registry.byId("trees"); var qt = new QueryTask(url); var query = new Query(); query.where = "FID < 25"; query.orderByFields = ["qSpecies"]; query.returnGeometry = false; query.outFields = ["qSpecies", "TreeID"]; query.groupByFieldsForStatistics = ["qSpecies"]; //query.returnDistinctValues = true; qt.execute(query, function(results) { //var frm_domain_area = dom.byId("domain_area"); var testVals = {}; for (var i = 0; i < results.features.length; i++) { var id = results.features[i].attributes.TreeID; var desc = results.features[i].attributes.qSpecies; if (!testVals[id]) { testVals[id] = true; var selectElem = domConstruct.create("option",{ label: desc + " (" + id + ")", value: id }); frmTrees.addOption(selectElem); } } }); frmTrees.on("change", function() { console.debug(frmTrees.get("value")); }); }); });

    Read the article

  • When to call glEnable(GL_FRAMEBUFFER_SRGB)?

    - by Steven Lu
    I have a rendering system where I draw to an FBO with a multisampled renderbuffer, then blit it to another FBO with a texture in order to resolve the samples in order to read off the texture to perform post-processing shading while drawing to the backbuffer (FBO index 0). Now I'd like to get some correct sRGB output... The problem is the behavior of the program is rather inconsistent between when I run it on OS X and Windows and this also changes depending on the machine: On Windows with the Intel HD 3000 it will not apply the sRGB nonlinearity but on my other machine with a Nvidia GTX 670 it does. On the Intel HD 3000 in OS X it will also apply it. So this probably means that I'm not setting my GL_FRAMEBUFFER_SRGB enable state at the right points in the program. However I can't seem to find any tutorials that actually tell me when I ought to enable it, they only ever mention that it's dead easy and comes at no performance cost. I am currently not loading in any textures so I haven't had a need to deal with linearizing their colors yet. To force the program to not simply spit back out the linear color values, what I have tried is simply comment out my glDisable(GL_FRAMEBUFFER_SRGB) line, which effectively means this setting is enabled for the entire pipeline, and I actually redundantly force it back on every frame. I don't know if this is correct or not. It certainly does apply a nonlinearization to the colors but I can't tell if this is getting applied twice (which would be bad). It could apply the gamma as I render to my first FBO. It could do it when I blit the first FBO to the second FBO. Why not? I've gone so far as to take screen shots of my final frame and compare raw pixel color values to the colors I set them to in the program: I set the input color to RGB(1,2,3) and the output is RGB(13,22,28). That seems like quite a lot of color compression at the low end and leads me to question if the gamma is getting applied multiple times. I have just now gone through the sRGB equation and I can verify that the conversion seems to be only applied once as linear 1/255, 2/255, and 3/255 do indeed map to sRGB 13/255, 22/255, and 28/255 using the equation 1.055*C^(1/2.4)+0.055. Given that the expansion is so large for these low color values it really should be obvious if the sRGB color transform is getting applied more than once. So, I still haven't determined what the right thing to do is. does glEnable(GL_FRAMEBUFFER_SRGB) only apply to the final framebuffer values, in which case I can just set this during my GL init routine and forget about it hereafter?
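
    For what it's worth, a common arrangement (a sketch of the usual rule, not taken from the poster's code): GL_FRAMEBUFFER_SRGB only has an effect when the bound draw framebuffer has an sRGB-capable format, which is part of why behaviour differs per driver on the default framebuffer. Keeping the intermediate FBOs in linear formats and enabling the conversion only for the final pass applies the encoding exactly once:

      /* intermediate FBOs use linear formats (e.g. GL_RGBA16F), so nothing is encoded there */
      glBindFramebuffer(GL_FRAMEBUFFER, 0);   /* default framebuffer */
      glEnable(GL_FRAMEBUFFER_SRGB);          /* encode the post-process output once */
      drawPostProcessPass();                  /* hypothetical final full-screen pass */
      glDisable(GL_FRAMEBUFFER_SRGB);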

    Read the article

  • using LoadControl with object initializer to create properties

    - by lloydphillips
    In the past I've used UserControls to create email templates which I can fill properties on and then use LoadControl and then RenderControl to get the html to use for the body text of my email. This was within ASP.NET WebForms. I'm in the throes of building an MVC website and wanted to do something similar. I've actually considered putting this functionality in a separate class library and am looking into how I can do this so that in my web layer I can just call EmailTemplate.SubscriptionEmail() which will then generate the html from my template with properties in relevant places (obviously there needs to be parameters for email address etc in there). I wanted to create a single render-control method to which I can pass a string with the path of the UserControl which is my template. I've come across this on the web that kind of suits my needs: public static string RenderUserControl(string path, string propertyName, object propertyValue) { Page pageHolder = new Page(); UserControl viewControl = (UserControl)pageHolder.LoadControl(path); if (propertyValue != null) { Type viewControlType = viewControl.GetType(); PropertyInfo property = viewControlType.GetProperty(propertyName); if (property != null) property.SetValue(viewControl, propertyValue, null); else { throw new Exception(string.Format( "UserControl: {0} does not have a public {1} property.", path, propertyName)); } } pageHolder.Controls.Add(viewControl); StringWriter output = new StringWriter(); HttpContext.Current.Server.Execute(pageHolder, output, false); return output.ToString(); } My issue is that my UserControl(s) may have multiple and differing properties. So SubscribeEmail may require FirstName and EmailAddress where another email template UserControl (let's call it DummyEmail) would require FirstName, EmailAddress and DateOfBirth. The method above only appears to carry one parameter for propertyName and propertyValue. I considered an array of strings that I could put the varying properties into, but then I thought it'd be cool to have an object initialiser so I could call the method like this: RenderUserControl("EmailTemplates/SubscribeEmail.ascx", new object() { Firstname="Lloyd", Email="[email protected]" }) Does that make sense? I was just wondering if this is at all possible in the first place and how I'd implement it? I'm not sure if it would be possible to map the properties set on 'object' to properties on the loaded user control and, if it is possible, where to start in doing this? Has anyone done something like this before? Can anyone help? Lloyd
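
    A sketch of one way to do the mapping (adapted from the method quoted above; the .ascx path is a placeholder and the email value is elided): reflect over the public properties of the passed object and copy each one onto a matching property of the loaded control. Note that C# anonymous objects are written new { ... }, not new object() { ... }:

      public static string RenderUserControl(string path, object properties)
      {
          Page pageHolder = new Page();
          UserControl viewControl = (UserControl)pageHolder.LoadControl(path);
          if (properties != null)
          {
              Type controlType = viewControl.GetType();
              foreach (PropertyInfo source in properties.GetType().GetProperties())
              {
                  PropertyInfo target = controlType.GetProperty(source.Name);
                  if (target == null)
                      throw new Exception(string.Format(
                          "UserControl: {0} does not have a public {1} property.", path, source.Name));
                  target.SetValue(viewControl, source.GetValue(properties, null), null);
              }
          }
          pageHolder.Controls.Add(viewControl);
          StringWriter output = new StringWriter();
          HttpContext.Current.Server.Execute(pageHolder, output, false);
          return output.ToString();
      }

      // usage: RenderUserControl("EmailTemplates/SubscribeEmail.ascx", new { Firstname = "Lloyd", Email = "..." });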

    Read the article

  • Winforms controls and "generic" events handlers. How can I do this?

    - by Yanko Hernández Alvarez
    In the demo of the ObjectListView control there is this code (in the "Complex Example" tab page) to allow for a custom editor (a ComboBox) (Adapted to my case and edited for clarity): EventHandler CurrentEH; private void ObjectListView_CellEditStarting(object sender, CellEditEventArgs e) { if (e.Column == SomeCol) { ISomeInterface M = (e.RowObject as ObjectListView1Row).SomeObject; //(1) ComboBox cb = new ComboBox(); cb.Bounds = e.CellBounds; cb.DropDownStyle = ComboBoxStyle.DropDownList; cb.DataSource = ISomeOtherObjectCollection; cb.DisplayMember = "propertyName"; cb.DataBindings.Add("SelectedItem", M, "ISomeOtherObject", false, DataSourceUpdateMode.Never); e.Control = cb; cb.SelectedIndexChanged += CurrentEH = (object sender2, EventArgs e2) => M.ISomeOtherObject = (ISomeOtherObject)((ComboBox)sender2).SelectedValue; //(2) } } private void ObjectListView_CellEditFinishing(object sender, CellEditEventArgs e) { if (e.Column == SomeCol) { // Stop listening for change events ((ComboBox)e.Control).SelectedIndexChanged -= CurrentEH; // Any updating will have been down in the SelectedIndexChanged // event handler. // Here we simply make the list redraw the involved ListViewItem ((ObjectListView)sender).RefreshItem(e.ListViewItem); // We have updated the model object, so we cancel the auto update e.Cancel = true; } } I have too many other columns with combo editors inside objectlistviews to use a copy& paste strategy (besides, copy&paste is a serious source of bugs), so I tried to parameterize the code to keep the code duplication to a minimum. ObjectListView_CellEditFinishing is a piece of cake: HashSet<OLVColumn> cbColumns = new HashSet<OLVColumn> (new OLVColumn[] { SomeCol, SomeCol2, ...}; private void ObjectListView_CellEditFinishing(object sender, CellEditEventArgs e) { if (cbColumns.Contains(e.Column)) ... but ObjectListView_CellEditStarting is the problematic. I guess in CellEditStarting I will have to discriminate each case separately: private void ObjectListView_CellEditStarting(object sender, CellEditEventArgs e) { if (e.Column == SomeCol) // code to create the combo, put the correct list as the datasource, etc. else if (e.Column == SomeOtherCol) // code to create the combo, put the correct list as the datasource, etc. And so on. But how can I parameterize the "code to create the combo, put the correct list as the datasource, etc."? Problem lines are (1) Get SomeObject. the property NAME varies. (2) Set ISomeOtherObject, the property name varies too. The types vary too, but I can cover those cases with a generic method combined with a not so "typesafe" API (for instance, the cb.DataBindings.Add and cb.DataSource both use an object) Reflection? more lambdas? Any ideas? Any other way to do the same? PS: I want to be able to do something like this: private void ObjectListView_CellEditStarting(object sender, CellEditEventArgs e) { if (e.Column == SomeCol) SetUpCombo<ISomeInterface>(ISomeOtherObjectCollection, "propertyName", SomeObject, ISomeOtherObject); else if (e.Column == SomeOtherCol) SetUpCombo<ISomeInterface2>(ISomeOtherObject2Collection, "propertyName2", SomeObject2 ISomeOtherObject2); and so on. Or something like that. I know, parameters SomeObject and ISomeOtherObject are not real parameters per see, but you get the idea of what I want. I want not to repeat the same code skeleton again and again and again. One solution would be "preprocessor generics" like C's DEFINE, but I don't thing c# has something like that. So, does anyone have some alternate ideas to solve this?
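
    A sketch of one way to parameterise it (types and names below are taken from the snippets above; the property copy-back uses reflection, and the CurrentEH unhook bookkeeping from CellEditFinishing is left out for brevity):

      private void SetUpCombo<TModel>(CellEditEventArgs e, object dataSource,
                                      string displayMember, string boundProperty,
                                      Func<object, TModel> getModel)
      {
          TModel model = getModel(e.RowObject);
          ComboBox cb = new ComboBox();
          cb.Bounds = e.CellBounds;
          cb.DropDownStyle = ComboBoxStyle.DropDownList;
          cb.DataSource = dataSource;
          cb.DisplayMember = displayMember;
          cb.DataBindings.Add("SelectedItem", model, boundProperty, false, DataSourceUpdateMode.Never);
          e.Control = cb;
          cb.SelectedIndexChanged += (s, args) =>
              typeof(TModel).GetProperty(boundProperty)
                            .SetValue(model, ((ComboBox)s).SelectedItem, null);
      }

      // if (e.Column == SomeCol)
      //     SetUpCombo<ISomeInterface>(e, ISomeOtherObjectCollection, "propertyName", "ISomeOtherObject",
      //                                row => ((ObjectListView1Row)row).SomeObject);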

    Read the article

  • Creating AST for shared and local variables

    - by Rizwan Abbasi
    Here is my grammar: grammar simulator; options { language = Java; output = AST; ASTLabelType=CommonTree; } //imaginary tokens tokens{ SHARED; LOCALS; BOOL; RANGE; ARRAY; } parse : declaration+ ; declaration :variables ; variables : locals ; locals : (bool | range | array) ; bool :ID 'in' '[' ID ',' ID ']' ('init' ID)? -> ^(BOOL ID ID ID ID?) ; range : ID 'in' '[' INT '..' INT ']' ('init' INT)? -> ^(RANGE ID INT INT INT?) ; array :ID 'in' 'array' 'of' '[' INT '..' INT ']' ('init' INT)? -> ^(ARRAY ID INT INT INT?) ; ID : (('a'..'z' | 'A'..'Z'|'_')('a'..'z' | 'A'..'Z'|'0'..'9'|'_'))* ; INT : ('0'..'9')+ ; WHITESPACE : ('\t' | ' ' | '\r' | '\n' | '\u000C')+ {$channel = HIDDEN;} ; INPUT flag in [down, up] init down pc in [0..7] init 0 CA in array of [0..5] init 0 AST It has a small problem. Variables (bool, range or array) can be of two abstract types: 1. locals (each object will have its own variable) 2. shared (think of static in Java, same for all objects) Now the requirements have changed. I want the user to input like this NEW INPUT domains: upDown [up,down] possibleStates [0-7] booleans [true,false] locals: pc in possibleStates init 0 flag in upDown init down flag1 in upDown init down map in array of booleans init false shared: pcs in possibleStates init 0 flag in upDown init down flag1 in upDown init down maps in array of booleans init false Again, all the variables can be of two types (of any domain specified): 1. Local 2. Shared In Domains: upDown [up,down] possibleStates [0-7] upDown, up, down and possibleStates are of type ID (ID is defined in my above grammar), 0 and 7 are of type INT Can anybody help me convert my current grammar to meet the new specifications?

    Read the article

  • Why is CDATA needed and not working everywhere the same way?

    - by baptx
    In Firefox's and Chrome's consoles, this works (alerts the script content): var script = document.createElement("script"); script.textContent = ( function test() { var a = 1; } ); document.getElementsByTagName("head")[0].appendChild(script); alert(document.getElementsByTagName("head")[0].lastChild.textContent); Using this code as a Greasemonkey script for Firefox works too. Now, if I want to add a "private method" do() to test(), it no longer works, in either the Firefox/Chrome console or a Greasemonkey script: var script = document.createElement("script"); script.textContent = ( function test() { var a = 1; var do = function () { var b = 2; }; } ); document.getElementsByTagName("head")[0].appendChild(script); alert(document.getElementsByTagName("head")[0].lastChild.textContent); To make this work in a Greasemonkey script, I have to put all the code in a CDATA tag block: var script = document.createElement("script"); script.textContent = (<![CDATA[ function test() { var a = 1; var do = function() { var b = 2; }; } ]]>); document.getElementsByTagName("head")[0].appendChild(script); alert(document.getElementsByTagName("head")[0].lastChild.textContent); This only works in a Greasemonkey script; it throws an error from the Firefox/Chrome console. I don't understand why I should use a CDATA tag; I have no XML rules to respect here because I'm not using XHTML. To make it work in the Firefox console (or Firebug), I need to put the CDATA into tags like <> and </>: var script = document.createElement("script"); script.textContent = (<><![CDATA[ function test() { var a = 1; var do = function() { var b = 2; }; } ]]></>); document.getElementsByTagName("head")[0].appendChild(script); alert(document.getElementsByTagName("head")[0].lastChild.textContent); This doesn't work from the Chrome console. I've tried adding .toString() at the end like many people are doing (]]></>).toString();), but it's useless. I tried to replace <> and </> with a tag name <foo> </foo> but that didn't work either. Why doesn't my first code snippet work if I define var do = function(){} inside another function? Why should I use CDATA as a workaround even if I'm not using XHTML? And why should I add <> </> for the Firefox console if it works without them in a Greasemonkey script? Finally, what is the solution for Chrome and other browsers? EDIT: My bad, I've never used do-while in JS and I created this example in a simple text editor, so I didn't see that "do" was a reserved keyword :p But the problem is still here: I hadn't initialized the JavaScript class in my examples. With this new example, CDATA is needed for Greasemonkey, Firefox needs the CDATA between E4X <> and </>, and Chrome fails: var script = document.createElement("script"); script.textContent = ( <><![CDATA[var aClass = new aClass(); function aClass() { var a = 1; var aPrivateMethod = function() { var b = 2; alert(b); }; this.aPublicMethod = function() { var c = 3; alert(c); }; } aClass.aPublicMethod();]]></> ); document.getElementsByTagName("head")[0].appendChild(script); Question: why?
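
    For reference, a CDATA-free sketch (editorial, not from the post): do is a reserved word, so the second snippet is a syntax error everywhere; wrapping it in <![CDATA[...]]> "works" in Greasemonkey only because E4X turns the code into an XML literal instead of parsing it as JavaScript, and E4X is a Mozilla-only extension, which is why Chrome rejects it. Building the script text from a real function's toString() needs no CDATA at all:

      function payload() {
          function test() {
              var a = 1;
              var inner = function () {   // "do" is reserved, so pick another name
                  var b = 2;
              };
          }
          test();
      }

      var script = document.createElement("script");
      script.textContent = "(" + payload.toString() + ")();";
      document.getElementsByTagName("head")[0].appendChild(script);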

    Read the article

  • How can I provide values for non-grouped columns in NHibernate?

    - by ddc0660
    I have a criteria query: Session.CreateCriteria<Sell043Report>() .SetProjection(.ProjectionList() .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.location)) .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.agent)) .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.cusip)) .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.SettlementDate)) .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.salePrice)) .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.foreignFx)) .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.batchNumber)) .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.origSaleDate)) .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.planName)) .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.dateTimeAdded)) .Add(LambdaProjection.Sum<Sell043Report>(r => r.shares)) .Add(LambdaProjection.Sum<Sell043Report>(r => r.netMoney)) .Add(LambdaProjection.Sum<Sell043Report>(r => r.grossMoney)) .Add(LambdaProjection.Sum<Sell043Report>(r => r.taxWithheld)) .Add(LambdaProjection.Sum<Sell043Report>(r => r.fees))) .List<Sell043Report>(); that generates the following SQL: SELECT this_.location as y0_, this_.agent as y1_, this_.cusip as y2_, this_.SettlementDate as y3_, this_.salePrice as y4_, this_.foreignFx as y5_, this_.batchNumber as y6_, this_.origSaleDate as y7_, this_.planName as y8_, this_.dateTimeAdded as y9_, sum(this_.shares) as y10_, sum(this_.netMoney) as y11_, sum(this_.grossMoney) as y12_, sum(this_.taxWithheld) as y13_, sum(this_.fees) as y14_ FROM MIS_IPS_Sell043Report this_ GROUP BY this_.location, this_.agent, this_.cusip, this_.SettlementDate, this_.salePrice, this_.foreignFx, this_.batchNumber, this_.origSaleDate, this_.planName, this_.dateTimeAdded however the Sell043Report table has additional columns than those listed in the SELECT statement so I'm receiving this error when attempting to get a list of Sell043Reports: System.ArgumentException: The value "System.Object[]" is not of type "xyz.Sell043Report" and cannot be used in this generic collection. I suspect the problem is that I'm not selecting all of the columns for a Sell043Report and so it doesn't know how to map the dataset to the object. I'm trying to achieve something like this: SELECT this_.location as y0_, this_.agent as y1_, this_.cusip as y2_, this_.SettlementDate as y3_, this_.salePrice as y4_, this_.foreignFx as y5_, this_.batchNumber as y6_, this_.origSaleDate as y7_, this_.planName as y8_, this_.dateTimeAdded as y9_, sum(this_.shares) as y10_, sum(this_.netMoney) as y11_, sum(this_.grossMoney) as y12_, sum(this_.taxWithheld) as y13_, sum(this_.fees) as y14_, '' as Address1, '' as Address2 // etc FROM MIS_IPS_Sell043Report this_ GROUP BY this_.location, this_.agent, this_.cusip, this_.SettlementDate, this_.salePrice, this_.foreignFx, this_.batchNumber, this_.origSaleDate, this_.planName, this_.dateTimeAdded How can I do this using NHibernate?
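
    One direction worth trying (a sketch using the plain Projections API rather than the LambdaProjection helpers shown above): give each projection an alias that matches the property name and attach an AliasToBean transformer, so NHibernate builds Sell043Report instances from the projected columns and leaves the unprojected ones at their defaults instead of returning object[] rows:

      Session.CreateCriteria<Sell043Report>()
          .SetProjection(Projections.ProjectionList()
              .Add(Projections.GroupProperty("location"), "location")
              .Add(Projections.GroupProperty("agent"), "agent")
              // ... the remaining grouped properties, each with an alias ...
              .Add(Projections.Sum("shares"), "shares")
              .Add(Projections.Sum("netMoney"), "netMoney"))
          .SetResultTransformer(NHibernate.Transform.Transformers.AliasToBean<Sell043Report>())
          .List<Sell043Report>();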

    Read the article

  • FluentNHibernate - AutoMappings producing incorrect one-to-many column key

    - by Alberto
    Hi, I'm new to NHibernate and FNH and am trying to map these simple classes using FluentNHibernate's AutoMappings feature: public class TVShow : Entity { public virtual string Title { get; set;} public virtual ICollection<Season> Seasons { get; protected set; } public TVShow() { Seasons = new HashedSet<Season>(); } public virtual void AddSeason(Season season) { season.TVShow = this; Seasons.Add(season); } public virtual void RemoveSeason(Season season) { if (!Seasons.Contains(season)) { throw new InvalidOperationException("This TV Show does not contain the given season"); } season.TVShow = null; Seasons.Remove(season); } } public class Season : Entity { public virtual TVShow TVShow { get; set; } public virtual int Number { get; set; } public virtual IList<Episode> Episodes { get; set; } public Season() { Episodes = new List<Episode>(); } public virtual void AddEpisode(Episode episode) { episode.Season = this; Episodes.Add(episode); } public virtual void RemoveEpisode(Episode episode) { if (!Episodes.Contains(episode)) { throw new InvalidOperationException("Episode not found on this season"); } episode.Season = null; Episodes.Remove(episode); } } I'm also using a couple of conventions: public class MyForeignKeyConvention : IReferenceConvention { #region IConvention<IManyToOneInspector,IManyToOneInstance> Members public void Apply(FluentNHibernate.Conventions.Instances.IManyToOneInstance instance) { instance.Column("fk_" + instance.Property.Name); } #endregion } The problem is that FNH is generating the section below for the Seasons property mapping: <bag name="Seasons"> <key> <column name="TVShow_Id" /> </key> <one-to-many class="TVShowsManager.Domain.Season, TVShowsManager.Domain, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> </bag> The column name above should be fk_TVShow rather than TVShow_Id. If I amend the hbm files produced by FNH then the code works. Does anyone know what's wrong? Thanks in advance.
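
    IReferenceConvention only runs for the many-to-one side; the <key> column of the Seasons bag comes from the collection mapping, so it needs its own convention. A rough sketch (member names on the convention API are from memory, so check them against your FluentNHibernate version):

      public class MyHasManyConvention : IHasManyConvention
      {
          public void Apply(IOneToManyCollectionInstance instance)
          {
              // EntityType is the owning entity (TVShow), matching the "fk_" + property-name
              // column that MyForeignKeyConvention generates on the Season side.
              instance.Key.Column("fk_" + instance.EntityType.Name);
          }
      }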

    Read the article

  • Writing a managed wrapper for unmanaged (C++) code - custom types/structs

    - by Bobby
    faacEncConfigurationPtr FAACAPI faacEncGetCurrentConfiguration( faacEncHandle hEncoder); I'm trying to come up with a simple wrapper for this C++ library; I've never done more than very simple p/invoke interop before - like one function call with primitive arguments. So, given the above C++ function, for example, what should I do to deal with the return type, and parameter? FAACAPI is defined as: #define FAACAPI __stdcall faacEncConfigurationPtr is defined: typedef struct faacEncConfiguration { int version; char *name; char *copyright; unsigned int mpegVersion; unsigned long bitRate; unsigned int inputFormat; int shortctl; psymodellist_t *psymodellist; int channel_map[64]; } faacEncConfiguration, *faacEncConfigurationPtr; AFAIK this means that the return type of the function is a reference to this struct? And faacEncHandle is: typedef struct { unsigned int numChannels; unsigned long sampleRate; ... SR_INFO *srInfo; double *sampleBuff[MAX_CHANNELS]; ... double *freqBuff[MAX_CHANNELS]; double *overlapBuff[MAX_CHANNELS]; double *msSpectrum[MAX_CHANNELS]; CoderInfo coderInfo[MAX_CHANNELS]; ChannelInfo channelInfo[MAX_CHANNELS]; PsyInfo psyInfo[MAX_CHANNELS]; GlobalPsyInfo gpsyInfo; faacEncConfiguration config; psymodel_t *psymodel; /* quantizer specific config */ AACQuantCfg aacquantCfg; /* FFT Tables */ FFT_Tables fft_tables; int bitDiff; } faacEncStruct, *faacEncHandle; So within that struct we see a lot of other types... hmm. Essentially, I'm trying to figure out how to deal with these types in my managed wrapper? Do I need to create versions of these types/structs, in C#? Something like this: [StructLayout(LayoutKind.Sequential)] struct faacEncConfiguration { uint useTns; ulong bitRate; ... } If so then can the runtime automatically "map" these objects onto eachother? And, would I have to create these "mapped" types for all the types in these return types/parameter type hierarchies, all the way down until I get to all primitives? I know this is a broad topic, any advice on getting up-to-speed quickly on what I need to learn to make this happen would be very much appreciated! Thanks!
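
    A rough starting point (editor's sketch; the DLL name and calling details are assumptions, and only the fields listed above are mirrored): treat faacEncHandle as an opaque IntPtr, mirror faacEncConfiguration with a sequential struct, and marshal the returned pointer with Marshal.PtrToStructure:

      [StructLayout(LayoutKind.Sequential)]
      struct FaacEncConfiguration
      {
          public int version;
          public IntPtr name;          // char* -> Marshal.PtrToStringAnsi when needed
          public IntPtr copyright;     // char*
          public uint mpegVersion;
          public uint bitRate;         // unsigned long is 32-bit on Windows
          public uint inputFormat;
          public int shortctl;
          public IntPtr psymodellist;
          [MarshalAs(UnmanagedType.ByValArray, SizeConst = 64)]
          public int[] channel_map;
      }

      static class Faac
      {
          [DllImport("libfaac.dll", CallingConvention = CallingConvention.StdCall)]
          public static extern IntPtr faacEncGetCurrentConfiguration(IntPtr hEncoder);
      }

      // IntPtr cfgPtr = Faac.faacEncGetCurrentConfiguration(encoderHandle);
      // var cfg = (FaacEncConfiguration)Marshal.PtrToStructure(cfgPtr, typeof(FaacEncConfiguration));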

    Read the article

  • Hibernate3: Self-Referencing Objects

    - by monojohnny
    Need some help on understanding how to do this; I'm going to be running recursive 'find' on a file system and I want to keep the information in a single DB table - with a self-referencing hierarchial structure: This is my DB Table structure I want to populate. DirObject Table: id int NOT NULL, name varchar(255) NOT NULL, parentid int NOT NULL); Here is the proposed Java Class I want to map (Fields only shown): public DirObject { int id; String name; DirObject parent; ... For the 'root' directory was going to use parentid=0; real ids will start at 1, and ideally I want hibernate to autogenerate the ids. Can somebody provide a suggested mapping file for this please; as a secondary question I thought about doing the Java Class like this instead: public DirObject { int id; String name; List<DirObject> subdirs; Could I use the same data model for either of these two methods ? (With a different mapping file of course). --- UPDATE: so I tried the mapping file suggested below (thanks!), repeated here for reference: <hibernate-mapping> <class name="my.proj.DirObject" table="category"> ... <set name="subDirs" lazy="true" inverse="true"> <key column="parentId"/> <one-to-many class="my.proj.DirObject"/> </set> <many-to-one name="parent" class="my.proj.DirObject" column="parentId" cascade="all" /> </class> ...and altered my Java class to have BOTH 'parentid' and 'getSubDirs' [returning a 'HashSet']. This appears to work - thanks, but this is the test code I used to drive this - I think I'm not doing something right here, because I thought Hibernate would take care of saving the subordinate objects in the Set without me having to do this explicitly ? DirObject dirobject=new DirObject(); dirobject.setName("/files"); dirobject.setParent(dirobject); DirObject d1, d2; d1=new DirObject(); d1.setName("subdir1"); d1.setParent(dirobject); d2=new DirObject(); d2.setName("subdir2"); d2.setParent(dirobject); HashSet<DirObject> subdirs=new HashSet<DirObject>(); subdirs.add(d1); subdirs.add(d2); dirobject.setSubdirs(subdirs); session.save(dirobject); session.save(d1); session.save(d2);

    Read the article

  • [N]Hibernate: view-like fetching properties of associated class

    - by chiccodoro
    (Felt quite helpless in formulating an appropriate title...) In my C# app I display a list of "A" objects, along with some properties of their associated "B" objects and properties of B's associated "C" objects: A.Name B.Name B.SomeValue C.Name Foo Bar 123 HelloWorld Bar Hello 432 World ... To clarify: A has an FK to B, B has an FK to C. (Such as, e.g. BankAccount - Person - Company). I have tried two approaches to load these properties from the database (using NHibernate): A fast approach and a clean approach. My eventual question is how to do a fast & clean approach. Fast approach: Define a view in the database which joins A, B, C and provides all these fields. In the A class, define properties "BName", "BSomeValue", "CName" Define a hibernate mapping between A and the View, whereas the needed B and C properties are mapped with update="false" insert="false" and do actually stem from B and C tables, but Hibernate is not aware of that since it uses the view. This way, the listing only loads one object per "A" record, which is quite fast. If the code tries to access the actual associated property, "A.B", I issue another HQL query to get B, set the property and update the faked BName and BSomeValue properties as well. Clean approach: There is no view. Class A is mapped to table A, B to B, C to C. When loading the list of A, I do a double left-join-fetch to get B and C as well: from A a left join fetch a.B left join fetch a.B.C B.Name, B.SomeValue and C.Name are accessed through the eagerly loaded associations. The disadvantage of this approach is that it gets slower and takes more memory, since it needs to created and map 3 objects per "A" record: An A, B, and C object each. Fast and clean approach: I feel somehow uncomfortable using a database view that hides a join and treat that in NHibernate as if it was a table. So I would like to do something like: Have no views in the database. Declare properties "BName", "BSomeValue", "CName" in class "A". Define the mapping for A such that NHibernate fetches A and these properties together using a join SQL query as a database view would do. The mapping should still allow for defining lazy many-to-one associations for getting A.B.C My questions: Is this possible? Is it [un]artful? Is there a better way?

    Read the article

  • Efficient algorithm to distribute work?

    - by Zwei Steinen
    It's a bit complicated to explain but here we go. We have problems like this (code is pseudo-code, and is only for illustrating the problem. Sorry it's in java. If you don't understand, I'd be glad to explain.). class Problem { final Set<Integer> allSectionIds = { 1,2,4,6,7,8,10 }; final Data data = //Some data } And a subproblem is: class SubProblem { final Set<Integer> targetedSectionIds; final Data data; SubProblem(Set<Integer> targetedSectionsIds, Data data){ this.targetedSectionIds = targetedSectionIds; this.data = data; } } Work will look like this, then. class Work implements Runnable { final Set<Section> subSections; final Data data; final Result result; Work(Set<Section> subSections, Data data) { this.sections = SubSections; this.data = data; } @Override public void run(){ for(Section section : subSections){ result.addUp(compute(data, section)); } } } Now we have instances of 'Worker', that have their own state sections I have. class Worker implements ExecutorService { final Map<Integer,Section> sectionsIHave; { sectionsIHave = {1:section1, 5:section5, 8:section8 }; } final ExecutorService executor = //some executor. @Override public void execute(SubProblem problem){ Set<Section> sectionsNeeded = fetchSections(problem.targetedSectionIds); super.execute(new Work(sectionsNeeded, problem.data); } } phew. So, we have a lot of Problems and Workers are constantly asking for more SubProblems. My task is to break up Problems into SubProblem and give it to them. The difficulty is however, that I have to later collect all the results for the SubProblems and merge (reduce) them into a Result for the whole Problem. This is however, costly, so I want to give the workers "chunks" that are as big as possible (has as many targetedSections as possible). It doesn't have to be perfect (mathematically as efficient as possible or something). I mean, I guess that it is impossible to have a perfect solution, because you can't predict how long each computation will take, etc.. But is there a good heuristic solution for this? Or maybe some resources I can read up before I go into designing? Any advice is highly appreciated!

    Read the article

  • Endianness conversion and g++ warnings

    - by SuperBloup
    I've got the following C++ code: template <int isBigEndian, typename val> struct EndiannessConv { inline static val fromLittleEndianToHost( val v ) { union { val outVal __attribute__ ((used)); uint8_t bytes[ sizeof( val ) ] __attribute__ ((used)); } ; outVal = v; std::reverse( &bytes[0], &bytes[ sizeof(val) ] ); return outVal; } inline static void convertArray( val v[], uint32_t size ) { // TODO : find a way to map the array for (uint32_t i = 0; i < size; i++) for (uint32_t i = 0; i < size; i++) v[i] = fromLittleEndianToHost( v[i] ); } }; This works and has been tested (without the used attributes). When compiling I get the following warnings from g++ (version 4.4.1): || g++ -Wall -Wextra -O3 -o t t.cc || t.cc: In static member function 'static val EndiannessConv<isBigEndian, val>::fromLittleEndianToHost(val)': t.cc|98| warning: 'used' attribute ignored t.cc|99| warning: 'used' attribute ignored || t.cc: In static member function 'static val EndiannessConv<isBigEndian, val>::fromLittleEndianToHost(val) [with int isBigEndian = 1, val = double]': t.cc|148| instantiated from here t.cc|100| warning: unused variable 'outVal' t.cc|100| warning: unused variable 'bytes' I've tried to use the following code: template <int size, typename valType> struct EndianInverser { /* should not compile */ }; template <typename valType> struct EndianInverser<4, valType> { static inline valType reverseEndianness( const valType &val ) { uint32_t castedVal = *reinterpret_cast<const uint32_t*>( &val ); castedVal = (castedVal & 0x000000FF << (3 * 8)) | (castedVal & 0x0000FF00 << (1 * 8)) | (castedVal & 0x00FF0000 >> (1 * 8)) | (castedVal & 0xFF000000 >> (3 * 8)); return *reinterpret_cast<valType*>( &castedVal ); } }; but it breaks when optimizations are enabled, due to the type punning. So, why does my used attribute get ignored? Is there a workaround to convert endianness (I rely on the union to avoid type punning) in templates?
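
    For what it's worth, the warnings are expected: __attribute__((used)) applies to functions and static/global variables, not to the members of a block-scope anonymous union, so g++ ignores it and then reports the members as unused. A memcpy-based sketch avoids the union, the attribute, and the strict-aliasing trouble at -O3 (same behaviour as the original body; names are illustrative):

      #include <algorithm>
      #include <cstring>

      template <typename val>
      inline val byteSwap( val v )
      {
          unsigned char bytes[ sizeof(val) ];
          std::memcpy( bytes, &v, sizeof(val) );        // copy the object representation out
          std::reverse( bytes, bytes + sizeof(val) );   // reverse the byte order
          std::memcpy( &v, bytes, sizeof(val) );        // copy it back
          return v;
      }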

    Read the article

  • BizTalk - generating schema from Oracle stored proc with table variable argument

    - by Ron Savage
    I'm trying to set up a simple example project in BizTalk that gets changes made to a table in a SQL Server db and updates a copy of that table in an Oracle db. On the SQL Server side, I have a stored proc named GetItemChanges() that returns a variable number of records. On the Oracle side, I have a stored proc named Update_Item_Region_Table() designed to take a table of records as a parameter so that it can process all the records returned from GetItemChanges() in one call. It is defined like this: create or replace type itemrec is OBJECT ( UPC VARCHAR2(15), REGION VARCHAR2(5), LONG_DESCRIPTION VARCHAR2(50), POS_DESCRIPTION VARCHAR2(30), POS_DEPT VARCHAR2(5), ITEM_SIZE VARCHAR2(10), ITEM_UOM VARCHAR2(5), BRAND VARCHAR2(10), ITEM_STATUS VARCHAR2(5), TIME_STAMP VARCHAR2(20), COSTEDBYWEIGHT INTEGER ); create or replace type tbl_of_rec is table of itemrec; create or replace PROCEDURE Update_Item_Region_table ( Item_Data tbl_of_rec ) IS errcode integer; errmsg varchar2(4000); BEGIN for recIndex in 1 .. Item_Data.COUNT loop update FL_ITEM_REGION_TEST set Region = Item_Data(recIndex).Region, Long_description = Item_Data(recIndex).Long_description, Pos_Description = Item_Data(recIndex).Pos_description, Pos_Dept = Item_Data(recIndex).Pos_dept, Item_Size = Item_Data(recIndex).Item_Size, Item_Uom = Item_Data(recIndex).Item_Uom, Brand = Item_Data(recIndex).Brand, Item_Status = Item_Data(recIndex).Item_Status, Timestamp = to_date(Item_Data(recIndex).Time_stamp, 'yyyy-mm-dd HH24:mi:ss'), CostedByWeight = Item_Data(recIndex).CostedByWeight where UPC = Item_Data(recIndex).UPC; log_message(Item_Data(recIndex).Region, '', 'Updated item ' || Item_Data(recIndex).UPC || '.'); end loop; EXCEPTION WHEN OTHERS THEN errcode := SQLCODE(); errmsg := SQLERRM(); log_message('CE', '', 'Error in Update_Item_Region_table(): Code [' || errcode || '], Msg [' || errmsg || '] ...'); END; In my BizTalk project I generate the schemas and binding information for both stored procedures. For the Oracle procedure, I specified a path for the GeneratedUserTypesAssemblyFilePath parameter to generate a DLL to contain the definition of the data types. In the Send Port on the server, I put the path to that Types DLL in the UserAssembliesLoadPath parameter. I created a map to translate the GetItemChanges() schema to the Update_Item_Region_Table() schema. When I run it the data is extracted and transformed fine but causes an exception trying to pass the data to the Oracle proc: *The adapter failed to transmit message going to send port "WcfSendPort_OracleDBBinding_HOST_DATA_Procedure_Custom" with URL "oracledb://dvotst/". It will be retransmitted after the retry interval specified for this Send Port. Details:"System.InvalidOperationException: Custom type mapping for 'HOST_DATA.TBL_OF_REC' is not specified or is invalid.* So it is apparently not getting the information about the custom data type TBL_OF_REC into the Types DLL. Any tips on how to make this work?

    Read the article

  • jQuery: resizing element cuts off parent's background

    - by Justine
    Hi, I've been trying to recreate an effect from this tutorial: http://jqueryfordesigners.com/jquery-look-tim-van-damme/ Unfortunately, I want a background image underneath and because of the resize going on in JavaScript, it gets resized and cut off as well, like so: http://dev.gentlecode.net/dotme/index-sample.html - you can view source there to check the HTML, but basic structure looks like this: <div class="page"> <div class="container"> div.header ul.nav div.main </div> </div> Here is my jQuery code: $('ul.nav').each(function() { var $links = $(this).find('a'), panelIds = $links.map(function() { return this.hash; }).get().join(","), $panels = $(panelIds), $panelWrapper = $panels.filter(':first').parent(), delay = 500; $panels.hide(); $links.click(function() { var $link = $(this), link = (this); if ($link.is('.current')) { return; } $links.removeClass('current'); $link.addClass('current'); $panels.animate({ opacity : 0 }, delay); $panelWrapper.animate({ height: 0 }, delay, function() { var height = $panels.hide().filter(link.hash).show().css('opacity', 1).outerHeight(); $panelWrapper.animate({ height: height }, delay); }); }); var showtab = window.location.hash ? '[hash=' + window.location.hash + ']' : ':first'; $links.filter(showtab).click(); }); In this example, panelWrapper is a div.main and it gets resized to fit the content of tabs. The background is applied to the div.page but because its child is getting resized, it resizes as well, cutting off the background image. It's hard to explain so please look at the link above to see what I mean. I guess what I'm trying to ask is: is there a way to resize an element without resizing its parent? I tried setting height and min-height of .page to 100% and 101% but that didn't work. I tried making the background image fixed, but nada. It also happens if I add the background to the body or even html. Help?

    Read the article

  • GitLab on a fresh Ubuntu 13 EC2 instance

    - by Polly
    I've spun up a fresh Amazon EC2 instance for a micro Ubuntu 13 server to be used as a GitLab server. I know the specs are a little low, but it should serve well for my purposes. It has an elastic (static) IP address that I have created an A record for git.mydomain.com. The first thing I did to the instance was add 1GB of swap to keep it happy from a memory perspective. I then set the hostname of the box to be git.mydomain.com and followed https://github.com/gitlabhq/gitlabhq/blob/6-2-stable/doc/install/installation.md to the letter. Everything seems to have worked, except for the web server side of things. Doing a gitlab:check shows the following: Checking Environment ... Git configured for git user? ... yes Has python2? ... yes python2 is supported version? ... yes Checking Environment ... Finished Checking GitLab Shell ... GitLab Shell version >= 1.7.4 ? ... OK (1.7.4) Repo base directory exists? ... yes Repo base directory is a symlink? ... no Repo base owned by git:git? ... yes Repo base access is drwxrws---? ... yes update hook up-to-date? ... yes update hooks in repos are links: ... can't check, you have no projects Running /home/git/gitlab-shell/bin/check Check GitLab API access: /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED) from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `open' from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `block in connect' from /usr/local/lib/ruby/2.0.0/timeout.rb:52:in `timeout' from /usr/local/lib/ruby/2.0.0/net/http.rb:877:in `connect' from /usr/local/lib/ruby/2.0.0/net/http.rb:862:in `do_start' from /usr/local/lib/ruby/2.0.0/net/http.rb:851:in `start' from /home/git/gitlab-shell/lib/gitlab_net.rb:62:in `get' from /home/git/gitlab-shell/lib/gitlab_net.rb:29:in `check' from /home/git/gitlab-shell/bin/check:11:in `<main>' gitlab-shell self-check failed Try fixing it: Make sure GitLab is running; Check the gitlab-shell configuration file: sudo -u git -H editor /home/git/gitlab-shell/config.yml Please fix the error above and rerun the checks. Checking GitLab Shell ... Finished Checking Sidekiq ... Running? ... yes Number of Sidekiq processes ... 1 Checking Sidekiq ... Finished Checking GitLab ... Database config exists? ... yes Database is SQLite ... no All migrations up? ... yes GitLab config exists? ... yes GitLab config outdated? ... no Log directory writable? ... yes Tmp directory writable? ... yes Init script exists? ... yes Init script up-to-date? ... yes projects have namespace: ... can't check, you have no projects Projects have satellites? ... can't check, you have no projects Redis version >= 2.0.0? ... yes Your git bin path is "/usr/bin/git" Git version >= 1.7.10 ? ... yes (1.8.3) Checking GitLab ... Finished It seems like I'm very nearly there. Searching on this error I have only found advice that unfortunately hasn't helped. I'm not using any kind of SSL setup, which a lot of the posts I found were about. I have tried appending 127.0.0.1 git.mydomain.com to /etc/hosts and giving the instance a reboot but there was no change. My config/gitlab.yml file has host: git.mydomain.com in it, and my gitlab-shell/config.yml has gitlab_url: "http://git.mydomain.com/" in it. I'm sure I'm missing something simple, but I've been through every relevant link I can find and have had no positive results; thank you in advance for any help!
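
    A couple of things worth checking (editorial suggestions, not from the post): the failing step is gitlab-shell calling the GitLab API over HTTP, so it fails if Unicorn/Nginx aren't actually serving yet, or if gitlab_url points at a public name the EC2 instance cannot reach from inside itself. Roughly:

      sudo service gitlab status                                # is Unicorn/Sidekiq running?
      sudo service gitlab start                                 # start it if not
      sudo -u git -H editor /home/git/gitlab-shell/config.yml   # try gitlab_url: "http://localhost/"
      sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production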

    Read the article

  • How to change the JSON output format and how to support chinese character?

    - by sky
    Currently I'm using the following code to get my JSON output from MySQL. <?php $session = mysql_connect('localhost','name','pass'); mysql_select_db('dbname', $session); $result= mysql_query('SELECT message FROM posts', $session); $somethings = array(); while ($row = mysql_fetch_assoc($result)) { $somethings[] = $row; } ?> <script type="text/javascript"> var somethings= <?php echo json_encode($somethings); ?>; </script> And the output is: <script type="text/javascript"> var somethings= [{"message":"Welcome to Yo~ :)"},{"message":"Try iPhone post!"},{"message":"????"}]; </script> Here is the question: how can I change my output into a format like: <script type="text/javascript"> userAge = new Array('21','36','20'), userMid = new Array('liuple','anhu','jacksen'); </script> which I'll be using later with the following code: var html = ' <table class="map-overlay"> <tr> <td class="user">' + '<a class="username" href="/' + **userMid[index]** + '" target="_blank"><img alt="" src="' + getAvatar(signImgList[index], '72x72') + '"></a><br> <a class="username" href="/' + **userMid[index]** + '" target="_blank">' + userNameList[index] + '</a><br> <span class="info">' + **userSex[index]** + ' ' + **userAge[index]** + '?<br> ' + cityList[index] + '</span>' + '</td> <td class="content">' + picString + somethings[index] + '<br> <span class="time">' + timeList[index] + picTips + '</span></td> </tr> </table> '; Thanks for helping and reading!
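
    A sketch of one way to get that shape (the column names are made up, since the posted query only selects message): collect each column into its own PHP array and json_encode each one, which emits a valid JS array literal. mysql_set_charset keeps Chinese text intact, and JSON_UNESCAPED_UNICODE (PHP 5.4+) stops json_encode from escaping it to \uXXXX:

      <?php
      $session = mysql_connect('localhost', 'name', 'pass');
      mysql_set_charset('utf8', $session);
      mysql_select_db('dbname', $session);

      $result = mysql_query('SELECT age, mid FROM posts', $session);  // illustrative columns
      $userAge = array();
      $userMid = array();
      while ($row = mysql_fetch_assoc($result)) {
          $userAge[] = $row['age'];
          $userMid[] = $row['mid'];
      }
      ?>
      <script type="text/javascript">
      var userAge = <?php echo json_encode($userAge, JSON_UNESCAPED_UNICODE); ?>;
      var userMid = <?php echo json_encode($userMid, JSON_UNESCAPED_UNICODE); ?>;
      </script>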

    Read the article

  • Problems with validates_inclusion_of, acts_as_tree and rspec

    - by Jens Fahnenbruck
    I have problems to get rspec running properly to test validates_inclusion_of my migration looks like this: class CreateCategories < ActiveRecord::Migration def self.up create_table :categories do |t| t.string :name t.integer :parent_id t.timestamps end end def self.down drop_table :categories end end my model looks like this: class Category < ActiveRecord::Base acts_as_tree validates_presence_of :name validates_uniqueness_of :name validates_inclusion_of :parent_id, :in => Category.all.map(&:id), :unless => Proc.new { |c| c.parent_id.blank? } end my factories: Factory.define :category do |c| c.name "Category One" end Factory.define :category_2, :class => Category do |c| c.name "Category Two" end my model spec looks like this: require 'spec_helper' describe Category do before(:each) do @valid_attributes = { :name => "Category" } end it "should create a new instance given valid attributes" do Category.create!(@valid_attributes) end it "should have a name and it shouldn't be empty" do c = Category.new :name => nil c.should be_invalid c.name = "" c.should be_invalid end it "should not create a duplicate names" do Category.create!(@valid_attributes) Category.new(@valid_attributes).should be_invalid end it "should not save with invalid parent" do parent = Factory(:category) child = Category.new @valid_attributes child.parent_id = parent.id + 100 child.should be_invalid end it "should save with valid parent" do child = Factory.build(:category_2) child.parent = Factory(:category) # FIXME: make it pass, it works on cosole, but I don't know why the test is failing child.should be_valid end end I get the following error: 'Category should save with valid parent' FAILED Expected #<Category id: nil, name: "Category Two", parent_id: 5, created_at: nil, updated_at: nil to be valid, but it was not Errors: Parent is missing On console everything seems to be fine and work as expected: c1 = Category.new :name => "Parent Category" c1.valid? #=> true c1.save #=> true c1.id #=> 1 c2 = Category.new :name => "Child Category" c2.valid? #=> true c2.parent_id = 100 c2.valid? #=> false c2.parent_id = 1 c2.valid? #=> true I'm running rails 2.3.5, rspec 1.3.0 and rspec-rails 1.3.2 Anybody, any idea?
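
    A likely cause (editorial note): the :in list Category.all.map(&:id) is evaluated once, when the Category class is loaded, so any category created afterwards (such as the factory-built parent in the failing spec) is never in the list; the console only looks right because those rows already existed when the class loaded. Checking the parent at validation time avoids freezing the list, for example:

      class Category < ActiveRecord::Base
        acts_as_tree
        validates_presence_of :name
        validates_uniqueness_of :name
        validate :parent_must_exist

        private

        def parent_must_exist
          return if parent_id.blank?
          errors.add(:parent_id, "does not exist") unless Category.exists?(parent_id)
        end
      end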

    Read the article

  • Java - is this an idiom or pattern, behavior classes with no state

    - by Berlin Brown
    I am trying to incorporate more functional programming idioms into my Java development. One pattern that I like most, and that avoids side effects, is building classes that have behavior but don't necessarily have any state. The behavior is locked into the methods, but they only act on the parameters passed in. The code below is code I am trying to avoid: public class BadObject { private Map<String, String> data = new HashMap<String, String>(); public BadObject() { data.put("data", "data"); } /** * Act on the data class. But this is bad because we can't * rely on the integrity of the object's state. */ public void execute() { data.get("data").toString(); } } The code below is nothing special, but I am acting on the parameters and the state is contained within that class. We may still run into issues with this class, but that is an issue with the method and the state of the data; we can address issues in the routine as opposed to not trusting the entire object. Is this some form of idiom? Is this similar to any pattern that you use? public class SemiStatefulOOP { /** * Private class implies that I can access the members of the <code>Data</code> class * within the <code>SemiStatefulOOP</code> class and I can also access * the getData method from some other class. * * @see Test1 * */ class Data { protected int counter = 0; public int getData() { return counter; } public String toString() { return Integer.toString(counter); } } /** * Act on the data class. */ public void execute(final Data data) { data.counter++; } /** * Act on the data class. */ public void updateStateWithCallToService(final Data data) { data.counter++; } /** * Similar to CLOS (Common Lisp Object System) make instance. */ public Data makeInstance() { return new Data(); } } // End of Class // Issues with the code above: I wanted to declare the Data class private, but then I can't really reference it outside of the class: I can't override the SemiStateful class and access the private members. Usage: final SemiStatefulOOP someObject = new SemiStatefulOOP(); final SemiStatefulOOP.Data data = someObject.makeInstance(); someObject.execute(data); someObject.updateStateWithCallToService(data);
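
    For comparison, a small sketch of the same idea pushed one step further: the nested data type is immutable and the stateless behavior class returns new values instead of mutating its argument, which removes the remaining side effect. The class and method names are illustrative, not part of the original code:

        public final class CounterBehavior {

            // Immutable value object: state travels through parameters, never through fields.
            public static final class Data {
                private final int counter;

                public Data(int counter) {
                    this.counter = counter;
                }

                public int getCounter() {
                    return counter;
                }

                @Override
                public String toString() {
                    return Integer.toString(counter);
                }
            }

            // Behavior methods act only on their parameters and return new values,
            // so there is no object state whose integrity callers need to trust.
            public Data increment(Data data) {
                return new Data(data.getCounter() + 1);
            }

            public Data makeInstance() {
                return new Data(0);
            }
        }

    Usage then becomes data = behavior.increment(data); rather than mutating data in place.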

    Read the article

  • Multiple layouts in rails [Newbie Q]

    - by BriteLite
    Hi. As a newbie, I decided to build a "home inventory" application. I am now stuck on how to programmatically select a layout based on what type of item is being viewed in the browser. According to my planning so far, I should have created a few models to represent the types of items I can find in my home: Furniture, Electronics and Books. class Book < ActiveRecord::Base end class Furniture < ActiveRecord::Base end class Electronic < ActiveRecord::Base end Now the Book model has things like isbn, pages, address, and category. The Furniture model has things like color, price, address, and category. Electronic has things like name, voltage, address, and category. Here is where I got confused. I know the address property is going to be the same for all of them. I also know that I will need to create multiple "layouts" for the 3 different types of items to show the different properties of said items with appropriate graphics and stylesheets. But how will I go about deciding which category an item is in, so I can determine which layout to render? This is how I plan to do it: class DisplayController < ApplicationController def display @item = params[:item] if @item.category == "electronics" render :layout => 'electronics' end end end In my routes.rb: map.display ':item', :controller => 'display', :action => 'display' I only have one concern with this: I will probably add a lot of categories later on, and I think there should be a more DRY-esque way of dealing with them rather than hardcoding each one. I understand that I need to add HTML tags into my layout to display the relevant information for that particular category. ----Questions---- Is this the right way to approach this type of problem? Will this approach be compatible when I decide to add a gem like *thinking_sphinx* to run search? What issues do you see with my approach, and how can I make it better? I was reading something about "Polymorphic Assoc"; does that apply in this case, since category exists for all items? Also, I was trying to get a route to render a URL like "http://localhost/living-room-tv"
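
    One possible way to avoid a hard-coded branch per category is to let Rails ask the controller for the layout name at render time. This is only a sketch: it assumes layout files named after the categories (app/views/layouts/electronics.html.erb, furniture.html.erb, books.html.erb), that @item responds to category, and a hypothetical find_item lookup, since how the item is actually fetched is still an open question above:

        class DisplayController < ApplicationController
          # Passing a symbol makes Rails call this method when rendering, after the action has run.
          layout :layout_for_category

          def display
            @item = find_item(params[:item]) # hypothetical lookup helper
          end

          private

          def layout_for_category
            candidate = @item && @item.category
            layout_file = File.join(RAILS_ROOT, 'app', 'views', 'layouts', "#{candidate}.html.erb")
            # Fall back to the default layout when no category-specific file exists.
            File.exist?(layout_file) ? candidate : 'application'
          end
        end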

    Read the article

  • Haskell data serialization of some data implementing a common type class

    - by Evan
    Let's start with the following: data A = A String deriving Show data B = B String deriving Show class X a where spooge :: a -> Q [ Some implementations of X for A and B ] Now let's say we have custom implementations of show and read, named show' and read' respectively, which utilize Show as a serialization mechanism. I want show' and read' to have the types show' :: X a => a -> String read' :: X a => String -> a so I can do things like f :: [String] -> [Q] f d = map (\x -> spooge $ read' x) d where d could have been [show' (A "foo"), show' (B "bar")] In summary, I want to serialize values of various types which share a common typeclass, so I can call their separate implementations on the deserialized values automatically. Now, I realize you could write some Template Haskell which would generate a wrapper type, like data XWrap = AWrap A | BWrap B deriving (Show) and serialize the wrapped type, which would guarantee that the type info is stored with it and that we'd be able to get ourselves back at least an XWrap... but is there a better way using Haskell ninja-ery? EDIT: Okay, I need to be more application-specific. This is an API. Users will define their As, Bs and fs as they see fit. I don't ever want them hacking through the rest of the code updating their XWraps, or switches, or anything. The most I'm willing to compromise on is one list somewhere of all the A, B, etc. in some format. Why? Here's the application. A is "download a file from an FTP server." B is "convert from flac to mp3". A contains username, password, port, etc. B contains file path information. A and B are Xs, and Xs shall be called "Tickets." Q is IO (). spooge is runTicket. I want to read the tickets back into their respective data types and then write generic code that will run runTicket on whatever is read' back from disk. At some point I have to jam type information into the serialized data.
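
    A sketch of one wrapper-free approach, with illustrative names rather than the poster's real API: since read' :: X a => String -> a cannot choose its result type from the string alone, each serialized value carries a type tag and a single registry maps tags to "read it and run it" functions. That registry is the "one list somewhere" the poster is willing to maintain:

        data A = A String deriving (Show, Read)
        data B = B String deriving (Show, Read)

        newtype Q = Q String deriving Show        -- stand-in for the real Q / IO ()

        class X a where
          spooge :: a -> Q
          tag    :: a -> String                   -- one extra method vs. the original class

        instance X A where
          spooge (A s) = Q ("download "  ++ s)
          tag _        = "A"

        instance X B where
          spooge (B s) = Q ("flac->mp3 " ++ s)
          tag _        = "B"

        -- Serialize with a leading type tag.
        show' :: (X a, Show a) => a -> String
        show' x = tag x ++ " " ++ show x

        -- The single registry users extend when they add a new ticket type.
        runners :: [(String, String -> Q)]
        runners =
          [ ("A", spooge . (read :: String -> A))
          , ("B", spooge . (read :: String -> B))
          ]

        runTicket :: String -> Q
        runTicket line =
          case break (== ' ') line of
            (t, ' ' : payload) | Just run <- lookup t runners -> run payload
            _ -> error ("unrecognised ticket: " ++ line)

        main :: IO ()
        main = mapM_ (print . runTicket)
                     [show' (A "ftp://host/file"), show' (B "song.flac")]

    Adding a new ticket type then means a new data declaration, an X instance, and one entry in runners, with no wrapper type to regenerate and no switches to update elsewhere.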

    Read the article
