Search Results

Search found 27142 results on 1086 pages for 'control structure'.


  • How to get apache to look for files in different subfolders folders?

    - by prb
    I am definitely new to mod_rewrite. Note: the URL is common to everyone, and all the folders and subfolders are on the same host. The URL a user uses to access their page is http://myurl.com/1234/filename.jpg. The name of the subfolder is an integer that is unique and generated dynamically by another application; the subfolder stores images specific to an individual user. The folder structure is as follows (main1 is the document root, and main2 is another folder within main1, i.e. within the document root):

        /main1/1234/filename.jpg
        /main1/5678/filename.jpg
        /main1/2345/filename.jpg
        /main1/1212/filename.jpg
        /main1/main2/2367/filename.jpg
        /main1/main2/8790/filename.jpg
        /main1/main2/9966/filename.jpg

    I want to write a rewrite rule so that when a user requests http://myurl.com/1234/filename.jpg, the rule figures out where the file actually lives and serves it: for the request http://myurl.com/1234/filename.jpg the actual file is located at /main1/1234/filename.jpg, so it should be served from that folder. If another user requests http://myurl.com/9966/filename.jpg, it should be served from /main1/main2/9966/filename.jpg. Please let me know if the question is still not clear. This is what I have done so far, and it does not work at all:

        RewriteCond {DOCUMENT_ROOT}/%{REQUEST_FILENAME} -f
        RewriteRule ^(.*)$ {DOCUMENT_ROOT}/$1 [L]
        RewriteCond {DOCUMENT_ROOT}/main2/%{REQUEST_FILENAME} -f
        RewriteRule ^(.*)$ {DOCUMENT_ROOT}/main2/$1 [L]

    Any help is greatly appreciated.
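
    A minimal sketch of one way this is commonly handled, assuming main1 really is the document root and the rules live in an .htaccess file there; the second condition back-references $1 from the rule below it. This is untested against this exact setup:

        RewriteEngine On

        # If the request already maps to an existing file under the document root, serve it as-is
        RewriteCond %{REQUEST_FILENAME} -f
        RewriteRule ^ - [L]

        # Otherwise, if the same path exists under main2/, serve it from there
        RewriteCond %{DOCUMENT_ROOT}/main2/$1 -f
        RewriteRule ^(.*)$ /main2/$1 [L]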

    Read the article

  • Integrating external computer into a domain - some recommendations please

    - by TomTom
    Given: a multi-location company. Every office has local routers that connect to a central VPN-capable router in a data center. All fine so far. We now need to move a computer off-site into a hosting center across the globe, to get it closer to some supplier computers we work with. It will run limited logic, but latency is important, and our latency so far is too large. This computer will be in a data center and does not require incoming connections except for administrative purposes, although it needs outgoing connections. Sadly, I have no real chance to put one of my VPN routers there - otherwise I would have no problem. Use of RRAS is not recommended (we have had various problems with it over time), though I could deal with it. The computer MUST integrate into the corporate structure via VPN, join the domain, and be fully "tracked" (monitored for performance). What is the best suggestion? So far it looks like my best bet would be to log in via RRAS, deal with whatever issues arise there, and use the local firewall to limit incoming connections to this computer to what is needed (which comes down to allowing an emergency RDP connection). Anyone have a better idea?

    Read the article

  • Finding Font Issues in Acrobat

    - by Jayme
    OK, let me try this again; sorry for not being clear. We create our PDFs through Quark, then send them to print. I usually create outlines on my EPS files before I load them in Quark, but forgot this time. We bypassed the font error that Quark gave us by accident, found out too late that our PDF was bad, and it cost a lot of money to fix. We are trying to find a way to check our PDF for font problems before we send it to print, in case this problem happens again. We just want to be extra sure that we have tried everything. What I see in Quark is what the font is supposed to look like. When I view my PDF, the text is mixed up. It's readable, but it doesn't look like it's supposed to, and the spacing within the text is all off. My boss told me about the preflight in Quark and the Internal Structure for the fonts. She was asking me if this would help and what the lingo all meant (which is where my first question started). The image on the left is my EPS, which is correct; the image on the right is from the PDF. The white text in the top right and the website at the bottom left are what is messed up. I am running Mac OS X 10.5.8, Quark 7.5 and Acrobat 8.3.1. Thanks, Jayme

    Read the article

  • Is there any equivalence of `--depth immediates` in `git`?

    - by ???
    Currently, I'm trying to set up a git front-end to our Subversion repository. My Subversion repository is a single large repository which consists of several related projects:

        svn-root
        |-- project1
        |   |-- branches
        |   |-- tags
        |   `-- trunk
        |-- project2
        |   |-- branches
        |   |-- tags
        |   `-- trunk
        `-- project3
            |-- branches
            |-- tags
            `-- trunk

    Because files sometimes need to be moved between projects, I don't want to break the repository up into separate ones. I'm going to use git-svn to set up a git front-end, but I don't see how to map the svn structure onto git exactly. The two systems treat branches and tags very differently, and I doubt it is possible. To simplify the problem, I would just git svn clone the whole root directory and let the branches/tags/trunk directories sit there, but this will definitely result in too many files in the branches and tags directories. In Subversion, it's easy to set the checkout depth to immediates, which checks out only the branch/tag names without the directory contents, but I don't know whether this can be done in git. git-svn has me confused; I hope there's a more elegant solution.
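
    For what it's worth, git-svn can at least map each project's standard layout onto git branches and tags, so branch and tag contents exist only as refs rather than as checked-out directories. A hedged sketch, where the URL and project name are placeholders:

        # Clone one project, mapping trunk/branches/tags to git refs
        git svn clone https://svn.example.com/repo/project1 \
            --trunk=trunk --branches=branches --tags=tags project1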

    Read the article

  • Server.transfer causing HttpException

    - by salvationishere
    I am developing a C#/SQL ASP.NET web application in VS 2008. Currently I am using the Server.Transfer method to transfer control from one ASPX.CS file to another ASPX file. The first time through, this works. But after control is transferred to this new file it encounters a condition: if (restart == false) { where "restart" is a boolean variable. After this statement it immediately transfers control back to the same ASPX.CS file and tries to reexecute the Server.Transfer method. This time it gives me the following exception and stack trace. Do you know what is causing this? I tried to read this but it didn't make much sense to me. System.Web.HttpException was unhandled by user code Message="Error executing child request for DataMatch.aspx." Source="System.Web" ErrorCode=-2147467259 StackTrace: at System.Web.HttpServerUtility.Execute(String path, TextWriter writer, Boolean preserveForm) at System.Web.HttpServerUtility.Transfer(String path, Boolean preserveForm) at System.Web.HttpServerUtility.Transfer(String path) at AddFileToSQL._Default.btnAppend_Click(Object sender, EventArgs e) in C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\Default.aspx.cs:line 109 at System.Web.UI.HtmlControls.HtmlInputButton.OnServerClick(EventArgs e) at System.Web.UI.HtmlControls.HtmlInputButton.RaisePostBackEvent(String eventArgument) at System.Web.UI.HtmlControls.HtmlInputButton.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) InnerException: System.Web.HttpCompileException Message="c:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx(14): error CS1502: The best overloaded method match for 'System.Web.UI.HtmlControls.HtmlTableRowCollection.Add(System.Web.UI.HtmlControls.HtmlTableRow)' has some invalid arguments" Source="System.Web" ErrorCode=-2147467259 SourceCode="#pragma checksum \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\" \"{406ea660-64cf-4c82-b6f0-42d48172a799}\" \"76750ABD913CF678D216C1E9CFB62BDF\"\r\n//------------------------------------------------------------------------------\r\n// \r\n// This code was generated by a tool.\r\n// Runtime Version:2.0.50727.3603\r\n//\r\n// Changes to this file may cause incorrect behavior and will be lost if\r\n// the code is regenerated.\r\n// \r\n//------------------------------------------------------------------------------\r\n\r\nnamespace ASP {\r\n \r\n #line 285 \"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Config\web.config\"\r\n using System.Web.Profile;\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 280 \"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Config\web.config\"\r\n using System.Text.RegularExpressions;\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 282 \"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Config\web.config\"\r\n using System.Web.Caching;\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 278 \"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Config\web.config\"\r\n using System.Configuration;\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 277 \"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Config\web.config\"\r\n 
using System.Collections.Specialized;\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 19 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n using System.Web.UI.WebControls.WebParts;\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 289 \"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Config\web.config\"\r\n using System.Web.UI.HtmlControls;\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 19 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n using System.Web.UI.WebControls;\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 19 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n using System.Web.UI;\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 276 \"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Config\web.config\"\r\n using System.Collections;\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 275 \"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Config\web.config\"\r\n using System;\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 284 \"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Config\web.config\"\r\n using System.Web.Security;\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 281 \"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Config\web.config\"\r\n using System.Web;\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 283 \"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Config\web.config\"\r\n using System.Web.SessionState;\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 279 \"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Config\web.config\"\r\n using System.Text;\r\n \r\n #line default\r\n #line hidden\r\n \r\n \r\n [System.Runtime.CompilerServices.CompilerGlobalScopeAttribute()]\r\n public class datamatch_aspx : global::AddFileToSQL.DataMatch, System.Web.SessionState.IRequiresSessionState, System.Web.IHttpHandler {\r\n \r\n private static bool @_initialized;\r\n \r\n private static object @_fileDependencies;\r\n \r\n [System.Diagnostics.DebuggerNonUserCodeAttribute()]\r\n public datamatch_aspx() {\r\n string[] dependencies;\r\n ((global::AddFileToSQL.DataMatch)(this)).AppRelativeVirtualPath = \"~/DataMatch.aspx\";\r\n if ((global::ASP.datamatch_aspx.@__initialized == false)) {\r\n dependencies = new string[1];\r\n dependencies[0] = \"~/DataMatch.aspx\";\r\n global::ASP.datamatch_aspx.@__fileDependencies = this.GetWrappedFileDependencies(dependencies);\r\n global::ASP.datamatch_aspx.@__initialized = true;\r\n }\r\n this.Server.ScriptTimeout = 30000000;\r\n }\r\n \r\n protected System.Web.Profile.DefaultProfile Profile {\r\n get {\r\n return ((System.Web.Profile.DefaultProfile)(this.Context.Profile));\r\n }\r\n }\r\n \r\n protected ASP.global_asax ApplicationInstance {\r\n get {\r\n return ((ASP.global_asax)(this.Context.ApplicationInstance));\r\n }\r\n }\r\n \r\n [System.Diagnostics.DebuggerNonUserCodeAttribute()]\r\n private global::System.Web.UI.HtmlControls.HtmlTitle @_BuildControl_control3() {\r\n global::System.Web.UI.HtmlControls.HtmlTitle @_ctrl;\r\n \r\n #line 6 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl = new global::System.Web.UI.HtmlControls.HtmlTitle();\r\n \r\n #line default\r\n #line hidden\r\n return @_ctrl;\r\n }\r\n \r\n [System.Diagnostics.DebuggerNonUserCodeAttribute()]\r\n private 
global::System.Web.UI.HtmlControls.HtmlHead @_BuildControl_control2() {\r\n global::System.Web.UI.HtmlControls.HtmlHead @_ctrl;\r\n \r\n #line 5 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl = new global::System.Web.UI.HtmlControls.HtmlHead(\"head\");\r\n \r\n #line default\r\n #line hidden\r\n global::System.Web.UI.HtmlControls.HtmlTitle @_ctrl1;\r\n \r\n #line 5 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl1 = this.@_BuildControl_control3();\r\n \r\n #line default\r\n #line hidden\r\n System.Web.UI.IParserAccessor @_parser = ((System.Web.UI.IParserAccessor)(@_ctrl));\r\n \r\n #line 5 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_parser.AddParsedSubObject(@_ctrl1);\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 5 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_parser.AddParsedSubObject(new System.Web.UI.LiteralControl(\"\r\n \r\n \r\n \r\n \r\n\"));\r\n \r\n #line default\r\n #line hidden\r\n return @_ctrl;\r\n }\r\n \r\n [System.Diagnostics.DebuggerNonUserCodeAttribute()]\r\n private global::System.Web.UI.HtmlControls.HtmlTableRow @_BuildControl_control5() {\r\n global::System.Web.UI.HtmlControls.HtmlTableRow @_ctrl;\r\n \r\n #line 15 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl = new global::System.Web.UI.HtmlControls.HtmlTableRow();\r\n \r\n #line default\r\n #line hidden\r\n return @_ctrl;\r\n }\r\n \r\n [System.Diagnostics.DebuggerNonUserCodeAttribute()]\r\n private global::System.Web.UI.WebControls.PlaceHolder @_BuildControlphTextBoxes() {\r\n global::System.Web.UI.WebControls.PlaceHolder @_ctrl;\r\n \r\n #line 19 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl = new global::System.Web.UI.WebControls.PlaceHolder();\r\n \r\n #line default\r\n #line hidden\r\n this.phTextBoxes = @_ctrl;\r\n \r\n #line 19 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl.ID = \"phTextBoxes\";\r\n \r\n #line default\r\n #line hidden\r\n return @_ctrl;\r\n }\r\n \r\n [System.Diagnostics.DebuggerNonUserCodeAttribute()]\r\n private global::System.Web.UI.HtmlControls.HtmlTableCell @_BuildControl_control8() {\r\n global::System.Web.UI.HtmlControls.HtmlTableCell @_ctrl;\r\n \r\n #line 18 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl = new global::System.Web.UI.HtmlControls.HtmlTableCell(\"td\");\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 18 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl.Align = \"center\";\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 18 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl.VAlign = \"top\";\r\n \r\n #line default\r\n #line hidden\r\n System.Web.UI.IParserAccessor @_parser = ((System.Web.UI.IParserAccessor)(@_ctrl));\r\n \r\n #line 18 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n 
@_parser.AddParsedSubObject(new System.Web.UI.LiteralControl(\"\r\n \"));\r\n \r\n #line default\r\n #line hidden\r\n global::System.Web.UI.WebControls.PlaceHolder @_ctrl1;\r\n \r\n #line 18 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl1 = this.@_BuildControlphTextBoxes();\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 18 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_parser.AddParsedSubObject(@_ctrl1);\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 18 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_parser.AddParsedSubObject(new System.Web.UI.LiteralControl(\"\r\n \"));\r\n \r\n #line default\r\n #line hidden\r\n return @_ctrl;\r\n }\r\n \r\n [System.Diagnostics.DebuggerNonUserCodeAttribute()]\r\n private global::System.Web.UI.WebControls.Label @_BuildControlInstructions() {\r\n global::System.Web.UI.WebControls.Label @_ctrl;\r\n \r\n #line 22 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl = new global::System.Web.UI.WebControls.Label();\r\n \r\n #line default\r\n #line hidden\r\n this.Instructions = @_ctrl;\r\n @_ctrl.ApplyStyleSheetSkin(this);\r\n \r\n #line 22 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl.ID = \"Instructions\";\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 22 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl.Font.Italic = true;\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 22 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl.Text = \"Now select from the dropdownlists which table columns from my database you want t\" +\r\n \"o map these fields to\";\r\n \r\n #line default\r\n #line hidden\r\n return @_ctrl;\r\n }\r\n \r\n [System.Diagnostics.DebuggerNonUserCodeAttribute()]\r\n private global::System.Web.UI.HtmlControls.HtmlTableCell @_BuildControl_control9() {\r\n global::System.Web.UI.HtmlControls.HtmlTableCell @_ctrl;\r\n \r\n #line 21 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl = new global::System.Web.UI.HtmlControls.HtmlTableCell(\"td\");\r\n \r\n #line default\r\n #line hidden\r\n System.Web.UI.IParserAccessor @_parser = ((System.Web.UI.IParserAccessor)(@_ctrl));\r\n \r\n #line 21 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_parser.AddParsedSubObject(new System.Web.UI.LiteralControl(\"\r\n \"));\r\n \r\n #line default\r\n #line hidden\r\n global::System.Web.UI.WebControls.Label @_ctrl1;\r\n \r\n #line 21 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl1 = this.@_BuildControlInstructions();\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 21 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_parser.AddParsedSubObject(@_ctrl1);\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 21 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 
2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_parser.AddParsedSubObject(new System.Web.UI.LiteralControl(\"\r\n \"));\r\n \r\n #line default\r\n #line hidden\r\n return @_ctrl;\r\n }\r\n \r\n [System.Diagnostics.DebuggerNonUserCodeAttribute()]\r\n private global::System.Web.UI.WebControls.Button @_BuildControlbtnSubmit() {\r\n global::System.Web.UI.WebControls.Button @_ctrl;\r\n \r\n #line 26 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl = new global::System.Web.UI.WebControls.Button();\r\n \r\n #line default\r\n #line hidden\r\n this.btnSubmit = @_ctrl;\r\n @_ctrl.ApplyStyleSheetSkin(this);\r\n \r\n #line 26 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl.ID = \"btnSubmit\";\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 26 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl.Text = \"Submit\";\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 26 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl.Width = new System.Web.UI.WebControls.Unit(150, System.Web.UI.WebControls.UnitType.Pixel);\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 26 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n ((System.Web.UI.IAttributeAccessor)(@_ctrl)).SetAttribute(\"style\", \"top:auto; left:auto\");\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 26 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n ((System.Web.UI.IAttributeAccessor)(@_ctrl)).SetAttribute(\"top\", \"100px\");\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 26 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl.Click -= new System.EventHandler(this.btnSubmit_Click);\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 26 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @__ctrl.Click += new System.EventHandler(this.btnSubmit_Click);\r\n \r\n #line default\r\n #line hidden\r\n return @_ctrl;\r\n }\r\n \r\n [System.Diagnostics.DebuggerNonUserCodeAttribute()]\r\n private global::System.Web.UI.HtmlControls.HtmlTableCell @_BuildControl_control10() {\r\n global::System.Web.UI.HtmlControls.HtmlTableCell @_ctrl;\r\n \r\n #line 25 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl = new global::System.Web.UI.HtmlControls.HtmlTableCell(\"td\");\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 25 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl.Align = \"center\";\r\n \r\n #line default\r\n #line hidden\r\n System.Web.UI.IParserAccessor @_parser = ((System.Web.UI.IParserAccessor)(@_ctrl));\r\n \r\n #line 25 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_parser.AddParsedSubObject(new System.Web.UI.LiteralControl(\"\r\n \"));\r\n \r\n #line default\r\n #line hidden\r\n global::System.Web.UI.WebControls.Button @_ctrl1;\r\n \r\n #line 25 \"C:\Documents and 
Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl1 = this.@_BuildControlbtnSubmit();\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 25 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_parser.AddParsedSubObject(@_ctrl1);\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 25 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_parser.AddParsedSubObject(new System.Web.UI.LiteralControl(\"\r\n  \r\n \"));\r\n \r\n #line default\r\n #line hidden\r\n return @_ctrl;\r\n }\r\n \r\n [System.Diagnostics.DebuggerNonUserCodeAttribute()]\r\n private void @_BuildControl_control7(System.Web.UI.HtmlControls.HtmlTableCellCollection @_ctrl) {\r\n global::System.Web.UI.HtmlControls.HtmlTableCell @_ctrl1;\r\n \r\n #line 17 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl1 = this.@_BuildControl_control8();\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 17 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl.Add(@_ctrl1);\r\n \r\n #line default\r\n #line hidden\r\n global::System.Web.UI.HtmlControls.HtmlTableCell @_ctrl2;\r\n \r\n #line 17 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl2 = this.@_BuildControl_control9();\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 17 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl.Add(@_ctrl2);\r\n \r\n #line default\r\n #line hidden\r\n global::System.Web.UI.HtmlControls.HtmlTableCell @_ctrl3;\r\n \r\n #line 17 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl3 = this.@_BuildControl_control10();\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 17 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl.Add(@_ctrl3);\r\n \r\n #line default\r\n #line hidden\r\n }\r\n \r\n [System.Diagnostics.DebuggerNonUserCodeAttribute()]\r\n private global::System.Web.UI.HtmlControls.HtmlTableRow @_BuildControl_control6() {\r\n global::System.Web.UI.HtmlControls.HtmlTableRow @_ctrl;\r\n \r\n #line 17 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl = new global::System.Web.UI.HtmlControls.HtmlTableRow();\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 17 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl.Align = \"center\";\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 17 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n this.@_BuildControl_control7(@_ctrl.Cells);\r\n \r\n #line default\r\n #line hidden\r\n return @_ctrl;\r\n }\r\n \r\n [System.Diagnostics.DebuggerNonUserCodeAttribute()]\r\n private global::System.Web.UI.WebControls.Literal @_BuildControllTextData() {\r\n global::System.Web.UI.WebControls.Literal @_ctrl;\r\n \r\n #line 34 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 
2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl = new global::System.Web.UI.WebControls.Literal();\r\n \r\n #line default\r\n #line hidden\r\n this.lTextData = @_ctrl;\r\n \r\n #line 34 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl.ID = \"lTextData\";\r\n \r\n #line default\r\n #line hidden\r\n return @_ctrl;\r\n }\r\n \r\n [System.Diagnostics.DebuggerNonUserCodeAttribute()]\r\n private global::System.Web.UI.WebControls.Panel @_BuildControlpnlDisplayData() {\r\n global::System.Web.UI.WebControls.Panel @_ctrl;\r\n \r\n #line 31 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl = new global::System.Web.UI.WebControls.Panel();\r\n \r\n #line default\r\n #line hidden\r\n this.pnlDisplayData = @_ctrl;\r\n @_ctrl.ApplyStyleSheetSkin(this);\r\n \r\n #line 31 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl.ID = \"pnlDisplayData\";\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 31 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl.Visible = false;\r\n \r\n #line default\r\n #line hidden\r\n System.Web.UI.IParserAccessor @_parser = ((System.Web.UI.IParserAccessor)(@_ctrl));\r\n \r\n #line 31 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_parser.AddParsedSubObject(new System.Web.UI.LiteralControl(\"\r\n \r\n \r\n \" +\r\n \" \"));\r\n \r\n #line default\r\n #line hidden\r\n global::System.Web.UI.WebControls.Literal @_ctrl1;\r\n \r\n #line 31 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_ctrl1 = this.@_BuildControllTextData();\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 31 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_parser.AddParsedSubObject(@_ctrl1);\r\n \r\n #line default\r\n #line hidden\r\n \r\n #line 31 \"C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\DataMatch.aspx\"\r\n @_parser.AddParsedSubObject(new System.Web.UI.LiteralCont
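
    For reference, the generic "Error executing child request" message hides the real failure, which is the CS1502 compile error in DataMatch.aspx shown above. A minimal, illustrative sketch of surfacing it - names follow the post, and this is not a fix for the compile error itself:

        try
        {
            Server.Transfer("DataMatch.aspx");
        }
        catch (HttpException ex)
        {
            // Server.Transfer wraps the child page's compilation failure;
            // the underlying compiler message is on InnerException.
            string detail = (ex.InnerException != null) ? ex.InnerException.Message : ex.Message;
            System.Diagnostics.Trace.WriteLine(detail);
            throw;
        }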

    Read the article

  • How to print PCL Directly to the printer? Vb.net VS 2010

    - by Justin
    Good day, I've been trying to upgrade an old Print Program that prints raw pcl directly to the printer. The print program takes a PCL Mask and a PCL Batch Spool File (just pcl with page turn commands, I think) merges them and sends them off to the printer. I'm able to send to the Printer a file stream of PCL but I get mixed results and i do not understand why printing is so difficult under .net. I mean yes there is the PrintDocument class, but to print PCL... Let's just say I'm ready to detach my printer from the network and burn it a live. Here is my class PrintDirect (rather a hybrid class) Imports System Imports System.Text Imports System.Runtime.InteropServices <StructLayout(LayoutKind.Sequential)> _ Public Structure DOCINFO <MarshalAs(UnmanagedType.LPWStr)> _ Public pDocName As String <MarshalAs(UnmanagedType.LPWStr)> _ Public pOutputFile As String <MarshalAs(UnmanagedType.LPWStr)> _ Public pDataType As String End Structure 'DOCINFO Public Class PrintDirect <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=False, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function OpenPrinter(ByVal pPrinterName As String, ByRef phPrinter As IntPtr, ByVal pDefault As Integer) As Long End Function <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=False, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function StartDocPrinter(ByVal hPrinter As IntPtr, ByVal Level As Integer, ByRef pDocInfo As DOCINFO) As Long End Function <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function StartPagePrinter(ByVal hPrinter As IntPtr) As Long End Function <DllImport("winspool.drv", CharSet:=CharSet.Ansi, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function WritePrinter(ByVal hPrinter As IntPtr, ByVal data As String, ByVal buf As Integer, ByRef pcWritten As Integer) As Long End Function <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function EndPagePrinter(ByVal hPrinter As IntPtr) As Long End Function <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function EndDocPrinter(ByVal hPrinter As IntPtr) As Long End Function <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function ClosePrinter(ByVal hPrinter As IntPtr) As Long End Function 'Helps uses to work with the printer Public Class PrintJob ''' <summary> ''' The address of the printer to print to. ''' </summary> ''' <remarks></remarks> Public PrinterName As String = "" Dim lhPrinter As New System.IntPtr() ''' <summary> ''' The object deriving from the Win32 API, Winspool.drv. ''' Use this object to overide the settings defined in the PrintJob Class. ''' </summary> ''' <remarks>Only use this when absolutly nessacary.</remarks> Public OverideDocInfo As New DOCINFO() ''' <summary> ''' The PCL Control Character or the ascii escape character. \x1b ''' </summary> ''' <remarks>ChrW(27)</remarks> Public Const PCL_Control_Character As Char = ChrW(27) ''' <summary> ''' Opens a connection to a printer, if false the connection could not be established. 
''' </summary> ''' <returns></returns> ''' <remarks></remarks> Public Function OpenPrint() As Boolean Dim rtn_val As Boolean = False PrintDirect.OpenPrinter(PrinterName, lhPrinter, 0) If Not lhPrinter = 0 Then rtn_val = True End If Return rtn_val End Function ''' <summary> ''' The name of the Print Document. ''' </summary> ''' <value></value> ''' <returns></returns> ''' <remarks></remarks> Public Property DocumentName As String Get Return OverideDocInfo.pDocName End Get Set(ByVal value As String) OverideDocInfo.pDocName = value End Set End Property ''' <summary> ''' The type of Data that will be sent to the printer. ''' </summary> ''' <value>Send the print data type, usually "RAW" or "LPR". To overide use the OverideDocInfo Object</value> ''' <returns>The DataType as it was provided, if it is not part of the enumeration it is set to "UNKNOWN".</returns> ''' <remarks></remarks> Public Property DocumentDataType As PrintDataType Get Select Case OverideDocInfo.pDataType.ToUpper Case "RAW" Return PrintDataType.RAW Case "LPR" Return PrintDataType.LPR Case Else Return PrintDataType.UNKOWN End Select End Get Set(ByVal value As PrintDataType) OverideDocInfo.pDataType = [Enum].GetName(GetType(PrintDataType), value) End Set End Property Public Property DocumentOutputFile As String Get Return OverideDocInfo.pOutputFile End Get Set(ByVal value As String) OverideDocInfo.pOutputFile = value End Set End Property Enum PrintDataType UNKOWN = 0 RAW = 1 LPR = 2 End Enum ''' <summary> ''' Initializes the printing matrix ''' </summary> ''' <param name="PrintLevel">I have no idea what the hack this is...</param> ''' <remarks></remarks> Public Sub OpenDocument(Optional ByVal PrintLevel As Integer = 1) PrintDirect.StartDocPrinter(lhPrinter, PrintLevel, OverideDocInfo) End Sub ''' <summary> ''' Starts the next page. ''' </summary> ''' <remarks></remarks> Public Sub StartPage() PrintDirect.StartPagePrinter(lhPrinter) End Sub ''' <summary> ''' Writes a string to the printer, can be used to write out a section of the document at a time. ''' </summary> ''' <param name="data">The String to Send out to the Printer.</param> ''' <returns>The pcWritten as an integer, 0 may mean the writter did not write out anything.</returns> ''' <remarks>Warning the buffer is automatically created by the lenegth of the string.</remarks> Public Function WriteToPrinter(ByVal data As String) As Integer Dim pcWritten As Integer = 0 PrintDirect.WritePrinter(lhPrinter, data, data.Length - 1, pcWritten) Return pcWritten End Function Public Sub EndPage() PrintDirect.EndPagePrinter(lhPrinter) End Sub Public Sub CloseDocument() PrintDirect.EndDocPrinter(lhPrinter) End Sub Public Sub ClosePrint() PrintDirect.ClosePrinter(lhPrinter) End Sub ''' <summary> ''' Opens a connection to a printer and starts a new document. ''' </summary> ''' <returns>If false the connection could not be established. </returns> ''' <remarks></remarks> Public Function Open() As Boolean Dim rtn_val As Boolean = False rtn_val = OpenPrint() If rtn_val Then OpenDocument() End If Return rtn_val End Function ''' <summary> ''' Closes the document and closes the connection to the printer. ''' </summary> ''' <remarks></remarks> Public Sub Close() CloseDocument() ClosePrint() End Sub End Class End Class 'PrintDirect Here is how I print my file. I'm simple printing the PCL Masks, to show proof of concept. But I can't even do that. I can effectively create PCL and send it the printer without reading the file and it works fine... 
Plus I get mixed results with different stream reader text encoding as well. Dim pJob As New PrintDirect.PrintJob pJob.DocumentName = " test doc" pJob.DocumentDataType = PrintDirect.PrintJob.PrintDataType.RAW pJob.PrinterName = sPrinter.GetDevicePath '//This is where you'd stick your device name, sPrinter stands for Selected Printer from a dropdown (combobox). 'pJob.DocumentOutputFile = "C:\temp\test-spool.txt" If Not pJob.OpenPrint() Then MsgBox("Unable to connect to " & pJob.PrinterName, MsgBoxStyle.OkOnly, "Print Error") Exit Sub End If pJob.OpenDocument() pJob.StartPage() Dim sr As New IO.StreamReader(Me.txtFile.Text, Text.Encoding.ASCII) '//I've got best results with ASCII before, but only mized. Dim line As String = sr.ReadLine 'Fix for fly code on first run 'If line = 27 Then line = PrintDirect.PrintJob.PCL_Control_Character Do While (Not line Is Nothing) pJob.WriteToPrinter(line) line = sr.ReadLine Loop 'pJob.WriteToPrinter(ControlChars.FormFeed) pJob.EndPage() pJob.CloseDocument() sr.Close() sr.Dispose() sr = Nothing pJob.Close() I was able to get to print the mask, in the morning... now I'm getting strange printer characters scattered through 5 pages... I'm totally clueless. I must have changed something or the printer is at fault. You can access my test Check Mask, here http://kscserver.com/so/chk_mask.zip .
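
    One detail in the code above worth flagging as a possible culprit (an observation, not a confirmed fix): WriteToPrinter passes data.Length - 1 as the buffer size, which drops the last character of every chunk, and reading the spool file with StreamReader.ReadLine strips the line terminators out of the PCL stream. A hedged sketch of an alternative write path, reusing the PrintJob class from the post:

        ' Illustrative only: send the whole spool file as one raw ANSI string
        ' instead of line by line, so no terminators or escape bytes are lost.
        Dim raw As String = System.Text.Encoding.Default.GetString(
            System.IO.File.ReadAllBytes(Me.txtFile.Text))
        pJob.WriteToPrinter(raw)

        ' Inside PrintJob.WriteToPrinter, the third argument to WritePrinter would
        ' then be data.Length rather than data.Length - 1 so the final byte is kept.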

    Read the article

  • client problems - misaligned expectations & not following SDLC protocols

    - by louism
    hi guys, im having some serious problems with a client on a project - i could use some advice please the short version i have been working with this client now for almost 6 months without any problems (a classified website project in the range of 500 hours) over the last few days things have drastically deteriorated to the point where ive had to place the project on-hold whilst i work-out what to do (this has pissed the client off even more) to be simplistic, the root cause of the issue is this: the client doesnt read the specs i make for him, i code the feature, he than wants to change things, i tell him its not to the agreed spec and that that change will have to be postponed and possibly charged for, he gets upset and rants saying 'hes paid for the feature' and im not keeping to the agreement (<- misalignment of expectations) i think the root cause of the root cause is my clients failure to take my SDLC protocols seriously. i have a bug tracking system in place which he practically refuses to use (he still emails me bugs), he doesnt seem to care to much for the protocols i use for dealing with scope creep and change control the whole situation came to a head recently where he 'cracked it' (an aussie term for being fed-up). the more terms like 'postponed for post-launch implementation', 'costed feature addition', and 'not to agreed spec' i kept using, the worse it got finally, he began to bully me - basically insisting i shut-up and do the work im being paid for. i wrote a long-winded email explaining how wrong he was on all these different points, and explaining what all the SDLC protocols do to protect the success of the project. than i deleted that email and wrote a new one in the new email, i suggested as a solution i write up a list of grievances we both had. we than review the list and compromise on different points: he gets some things he wants, i get some things i want. sometimes youve got to give ground to get ground his response to this suggestion was flat-out refusal, and a restatement that i should just get on with the work ive been paid to do so there you have the very subjective short version. if you have the time and inclination, the long version may be a little less bias as it has the email communiques between me and my client the long version (with background) the long version works by me showing you the email communiques which lead to the situation coming to a head. so here it is, judge for yourself where the trouble started... 1. client asked me why something was missing from a feature i just uploaded, my response was to show him what was in the spec: it basically said the item he was looking for was never going to be included 2. [clients response...] Memo Louis, We are following your own title fields and keeping a consistent layout. Why the big fuss about not adding "Part". It simply replaces "model" and is consistent with your current title fields. 3. [my response...] hi [client], the 'part' field appeared to me as a redundancy / mistake. 
i requested clarification but never received any in a timely manner (about 2 weeks ago) the specification for this feature also indicated it wasnt going to be included: RE: "Why the big fuss about not adding "Part" " it may not appear so, but it would actually be a lot of work for me to now add a 'Part' field it could take me up to 15-20 minutes to properly explain why its such a big undertaking to do this, but i would prefer to use that time instead to work on completing your v1.1 features as a simplistic explanation - it connects to the change in paradigm from a 'generic classified ad' model to a 'specific attributes for specific categories' model basically, i am saying it is a big fuss, but i understand that it doesnt look that way - after all, it is just one ity-bitty field :) if you require a fuller explanation, please let me know and i will commit the time needed to write that out also, if you recall when we first started on the project, i said that with the effort/time required for features, you would likely not know off the top of your head. you may think something is really complex, but in reality its quite simple, you might think something is easy - but it could actually be a massive trauma to code (which is the case here with the 'Part' field). if you also recalled, i said the best course of action is to just ask, and i would let you know on a case-by-case basis 4. [email from me to client...] hi [client], the online catalogue page is now up live (see my email from a few days ago for information on how it works) note: the window of opportunity for input/revisions on what data the catalogue stores has now closed (as i have put the code up live now) RE: the UI/layout of the online catalogue page you may still do visual/ui tweaks to the page at the moment (this window for input/revisions will close in a couple of days time) 5. [email from client to me...] *(note: i had put up the feature & asked the client to review it, never heard back from them for a few days)* Memo Louis, Here you go again. CLOSED without a word of input from the customer. I don't think so. I will reply tomorrow regarding the content and functionality we require from this feature. 5. [from me to client...] hi [client]: RE: from my understanding, you are saying that the mini-sale yard control would change itself based on the fact someone was viewing for parts & accessories <- is that correct? this change is outside the scope of the v1.1 mini-spec and therefore will need to wait 'til post launch for costing/implementation 6. [email from client to me...] Memo Louis, Following your v1.1 mini-spec and all your time paid in full for the work selected. We need to make the situation clear. There will be no further items held for post-launch. Do not expect us to pay for any further items other than those we have agreed upon. You have undertaken to complete the Parts and accessories feature as follows. Obviously, as part of this process the "mini search" will be effected, and will require "adaption to make sense". 7. [email from me to client...] hi [client], RE: "There will be no further items held for post-launch. Do not expect us to pay for any further items other than those we have agreed upon." a few points to consider: 1) the specification for the 'parts & accessories' feature was as follows: (i.e. [what] "...we have agreed upon.") 2) you have received the 'parts & accessories' feature free of charge (you have paid $0 for it). 
ive spent two days coding that feature as a gesture of good will i would request that you please consider these two facts carefully and sincerely 8. [email from client to me...] Memo Louis, I don't see how you are giving us anything for free. From your original fee proposal you have deleted more than 30 hours of included features. Your title "shelved features". Further you have charged us twice by adding back into the site, at an addition cost, some of those "shelved features" features. See v1.1 mini-spec. Did include in your original fee proposal a change request budget but then charge without discussion items included in v1.1 mini-spec. Included a further Features test plan for a regression test, a fee of 10 hours that would not have been required if the "shelved features" were not left out of the agreed fee proposal. I have made every attempt to satisfy your your uneven business sense by offering you everything your heart desired, in the v1.1 mini-spec, to be left once again with your attitude of "its too hard, lets leave it for post launch". I am no longer accepting anything less than what we have contracted you to do. That is clearly defined in v1.1 mini-spec, and you are paid in advance for delivering those items as an acceptable function. a few notes about the above email... i had to cull features from the original spec because it didnt fit into the budget. i explained this to the client at the start of the project (he wanted more features than he had budget hours to do them all) nothing has been charged for twice, i didnt charge the client for culled features. im charging him to now do those culled features the draft version of the project schedule included a change request budget of 10 hours, but i had to remove that to meet the budget (the client may not have been aware of this to be fair to them) what the client refers to as my attitude of 'too hard/leave it for post-launch', i called a change request protocol and a method for keeping scope creep under control 9. [email from me to client...] hi [client], RE: "...all your grievances..." i had originally written out a long email response; it was fantastic, it had all these great points of how 'you were wrong' and 'i was right', you would of loved it (and by 'loved it', i mean it would of just infuriated you more) so, i decided to deleted it start over, for two reasons: 1) a long email is being disrespectful of your time (youre a busy businessman with things to do) 2) whos wrong or right gets us no closer to fixing the problems we are experiencing what i propose is this... i prepare a bullet point list of your grievances and my grievances (yes, im unhappy too about how things are going - and it has little to do with money) i submit this list to you for you to add to as necessary we then both take a good hard look at this list, and we decide which areas we are willing to give ground on as an example, the list may look something like this: "louis, you keep taking away features you said you would do" [your grievance 2] [your grievance 3] [your grievance ...] "[client], i feel you dont properly read the specs i prepare for you..." [my grievance 2] [my grievance 3] [my grievance ...] if you are willing to give this a try, let me know will it work? who knows. but if it doesnt, we can always go back to arguing some more :) obviously, this will only work if you are willing to give it a genuine try, and you can accept that you may have to 'give some ground to get some ground' what do you think? 10. [email from client to me ...] 
Memo Louis, Instead of wasting your time listing grievances, I would prefer you complete the items in v1.1 mini-spec, to a satisfactory conclusion. We almost had the website ready for launch until you brought the v1.1 mini-spec into the frame. Obviously I expected you could complete the v1.1 mini-spec in a two-week time frame as you indicated and give the site a more profession presentation. Most of the problems have been caused by you not following our instructions, but deciding to do what you feel like at the time. And then arguing with us how the missing information is not necessary. For instance "Parts and Accessories". Why on earth would you leave out the parts heading, when it ties-in with the fields you have already developed. It replaces "model" and is just as important in the context of information that appears in the "Details" panel. We are at a stage where the the v1.1 mini-spec needs to be completed without further time wasting and the site is complete (subject to all features working). We are on standby at this end to do just that. Let me know when you are back, working on the site and we will process and complete each v1.1 mini-spec, item by item, until the job is complete. 11. [last email from me to client...] hi [client], based on this reply, and your demonstrated unwillingness to compromise/give any ground on issues at hand, i have decided to place your project on-hold for the moment i will be considering further options on how to over-come our challenges over the next few days i will contact you by monday 17/may to discuss any new options i have come up with, and if i believe it is appropriate to restart work on your project at that point or not told you it was long... what do you think?

    Read the article

  • c++ stl priority queue insert bad_alloc exception

    - by bsg
    Hi, I am working on a query processor that reads in long lists of document id's from memory and looks for matching id's. When it finds one, it creates a DOC struct containing the docid (an int) and the document's rank (a double) and pushes it on to a priority queue. My problem is that when the word(s) searched for has a long list, when I try to push the DOC on to the queue, I get the following exception: Unhandled exception at 0x7c812afb in QueryProcessor.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x0012ee88.. When the word has a short list, it works fine. I tried pushing DOC's onto the queue in several places in my code, and they all work until a certain line; after that, I get the above error. I am completely at a loss as to what is wrong because the longest list read in is less than 1 MB and I free all memory that I allocate. Why should there suddenly be a bad_alloc exception when I try to push a DOC onto a queue that has a capacity to hold it (I used a vector with enough space reserved as the underlying data structure for the priority queue)? I know that questions like this are almost impossible to answer without seeing all the code, but it's too long to post here. I'm putting as much as I can and am anxiously hoping that someone can give me an answer, because I am at my wits' end. The NextGEQ function is too long to put here, but it reads a list of compressed blocks of docids block by block. That is, if it sees that the lastdocid in the block (in a separate list) is larger than the docid passed in, it decompresses the block and searches until it finds the right one. If it sees that it was already decompressed, it just searches. Below, when I call the function the first time, it decompresses a block and finds the docid; the push onto the queue after that works. The second time, it doesn't even need to decompress; that is, no new memory is allocated, but after that time, pushing on to the queue gives a bad_alloc error. 
struct DOC{ long int docid; long double rank; public: DOC() { docid = 0; rank = 0.0; } DOC(int num, double ranking) { docid = num; rank = ranking; } bool operator>( const DOC & d ) const { return rank > d.rank; } bool operator<( const DOC & d ) const { return rank < d.rank; } }; struct listnode{ int* metapointer; int* blockpointer; int docposition; int frequency; int numberdocs; int* iquery; listnode* nextnode; }; void QUERYMANAGER::SubmitQuery(char *query){ vector<DOC> docvec; docvec.reserve(20); DOC doct; //create a priority queue to use as a min-heap to store the documents and rankings; //although the priority queue uses the heap as its underlying data structure, //I found it easier to use the STL priority queue implementation priority_queue<DOC, vector<DOC>,std::greater<DOC>> q(docvec.begin(), docvec.end()); q.push(doct); //do some processing here; startlist is a pointer to a listnode struct that starts the //linked list cout << "Opening lists:" << endl; //point the linked list start pointer to the node returned by the OpenList method startlist = &OpenList(value); listnode* minpointer; q.push(doct); //more processing here; else{ //start by finding the first docid in the shortest list int i = 0; q.push(doct); num = NextGEQ(0, *startlist); q.push(doct); while(num != -1) cout << "finding nextGEQ from shortest list" << endl; q.push(doct); //the is where the problem starts - every previous q.push(doct) works; the one after //NextGEQ(num +1, *startlist) gives the bad_alloc error num = NextGEQ(num + 1, *startlist); q.push(doct); //if you didn't break out of the loop; i.e., all lists contain a matching docid, //calculate the document's rank; if it's one of the top 20, create a struct //containing the docid and the rank and add it to the priority queue if(!loop) { cout << "found match" << endl; if(num < 0) { cout << "reached end of list" << endl; //reached the end of the shortest list; close the list CloseList(startlist); break; } rank = calculateRanking(table, num); try{ //if the heap is not full, create a DOC struct with the docid and //rank and add it to the heap if(q.size() < 20) { doc.docid = num; doc.rank = rank; q.push(doct); q.push(doc); } } catch (exception& e) { cout << e.what() << endl; } } } Thank you very much, bsg.
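
    As a point of comparison, the "keep only the top 20 documents by rank" pattern described above can be exercised in isolation; if a self-contained version like the sketch below runs cleanly, the bad_alloc is more likely coming from memory corruption elsewhere (for example in the block decompression) than from the priority_queue itself. The DOC struct follows the post; everything else is illustrative:

        #include <queue>
        #include <vector>
        #include <functional>

        struct DOC {
            long int docid;
            long double rank;
            DOC() : docid(0), rank(0.0) {}
            DOC(int num, double ranking) : docid(num), rank(ranking) {}
            bool operator>(const DOC& d) const { return rank > d.rank; }
            bool operator<(const DOC& d) const { return rank < d.rank; }
        };

        int main() {
            // min-heap ordered by rank, capped at 20 entries
            std::priority_queue<DOC, std::vector<DOC>, std::greater<DOC> > q;
            for (int i = 0; i < 1000; ++i) {
                DOC doc(i, (i * 37) % 101);
                if (q.size() < 20) {
                    q.push(doc);
                } else if (doc.rank > q.top().rank) {
                    q.pop();       // drop the current lowest-ranked of the top 20
                    q.push(doc);
                }
            }
            return 0;
        }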

    Read the article

  • Marshalling to a native library in C#

    - by Daniel Baulig
    I'm having trouble calling functions of a native library from within managed C# code. I am developing for the 3.5 compact framework (Windows Mobile 6.x) just in case this would make any difference. I am working with the waveIn* functions from coredll.dll (these are in winmm.dll in regular Windows I believe). This is what I came up with: // namespace winmm; class winmm [StructLayout(LayoutKind.Sequential)] public struct WAVEFORMAT { public ushort wFormatTag; public ushort nChannels; public uint nSamplesPerSec; public uint nAvgBytesPerSec; public ushort nBlockAlign; public ushort wBitsPerSample; public ushort cbSize; } [StructLayout(LayoutKind.Sequential)] public struct WAVEHDR { public IntPtr lpData; public uint dwBufferLength; public uint dwBytesRecorded; public IntPtr dwUser; public uint dwFlags; public uint dwLoops; public IntPtr lpNext; public IntPtr reserved; } public delegate void AudioRecordingDelegate(IntPtr deviceHandle, uint message, IntPtr instance, ref WAVEHDR wavehdr, IntPtr reserved2); [DllImport("coredll.dll")] public static extern int waveInAddBuffer(IntPtr hWaveIn, ref WAVEHDR lpWaveHdr, uint cWaveHdrSize); [DllImport("coredll.dll")] public static extern int waveInPrepareHeader(IntPtr hWaveIn, ref WAVEHDR lpWaveHdr, uint Size); [DllImport("coredll.dll")] public static extern int waveInStart(IntPtr hWaveIn); // some other class private WinMM.WinMM.AudioRecordingDelegate waveIn; private IntPtr handle; private uint bufferLength; private void setupBuffer() { byte[] buffer = new byte[bufferLength]; GCHandle bufferPin = GCHandle.Alloc(buffer, GCHandleType.Pinned); WinMM.WinMM.WAVEHDR hdr = new WinMM.WinMM.WAVEHDR(); hdr.lpData = bufferPin.AddrOfPinnedObject(); hdr.dwBufferLength = this.bufferLength; hdr.dwFlags = 0; int i = WinMM.WinMM.waveInPrepareHeader(this.handle, ref hdr, Convert.ToUInt32(Marshal.SizeOf(hdr))); if (i != WinMM.WinMM.MMSYSERR_NOERROR) { this.Text = "Error: waveInPrepare"; return; } i = WinMM.WinMM.waveInAddBuffer(this.handle, ref hdr, Convert.ToUInt32(Marshal.SizeOf(hdr))); if (i != WinMM.WinMM.MMSYSERR_NOERROR) { this.Text = "Error: waveInAddrBuffer"; return; } } private void setupWaveIn() { WinMM.WinMM.WAVEFORMAT format = new WinMM.WinMM.WAVEFORMAT(); format.wFormatTag = WinMM.WinMM.WAVE_FORMAT_PCM; format.nChannels = 1; format.nSamplesPerSec = 8000; format.wBitsPerSample = 8; format.nBlockAlign = Convert.ToUInt16(format.nChannels * format.wBitsPerSample); format.nAvgBytesPerSec = format.nSamplesPerSec * format.nBlockAlign; this.bufferLength = format.nAvgBytesPerSec; format.cbSize = 0; int i = WinMM.WinMM.waveInOpen(out this.handle, WinMM.WinMM.WAVE_MAPPER, ref format, Marshal.GetFunctionPointerForDelegate(waveIn), 0, WinMM.WinMM.CALLBACK_FUNCTION); if (i != WinMM.WinMM.MMSYSERR_NOERROR) { this.Text = "Error: waveInOpen"; return; } setupBuffer(); WinMM.WinMM.waveInStart(this.handle); } I read alot about marshalling the last few days, nevertheless I do not get this code working. When my callback function is called (waveIn) when the buffer is full, the hdr structure passed back in wavehdr is obviously corrupted. Here is an examlpe of how the structure looks like at that point: - wavehdr {WinMM.WinMM.WAVEHDR} WinMM.WinMM.WAVEHDR dwBufferLength 0x19904c00 uint dwBytesRecorded 0x0000fa00 uint dwFlags 0x00000003 uint dwLoops 0x1990f6a4 uint + dwUser 0x00000000 System.IntPtr + lpData 0x00000000 System.IntPtr + lpNext 0x00000000 System.IntPtr + reserved 0x7c07c9a0 System.IntPtr This obiously is not what I expected to get passed. 
I am clearly concerned about the order of the fields in the view. I do not know if Visual Studio .NET cares about actual memory order when displaying the record in the "local"-view, but they are obviously not displayed in the order I speciefied in the struct. Then theres no data pointer and the bufferLength field is far to high. Interestingly the bytesRecorded field is exactly 64000 - bufferLength and bytesRecorded I'd expect both to be 64000 though. I do not know what exactly is going wrong, maybe someone can help me out on this. I'm an absolute noob to managed code programming and marshalling so please don't be too harsh to me for all the stupid things I've propably done. Oh here's the C code definition for WAVEHDR which I found here, I believe I might have done something wrong in the C# struct definition: /* wave data block header */ typedef struct wavehdr_tag { LPSTR lpData; /* pointer to locked data buffer */ DWORD dwBufferLength; /* length of data buffer */ DWORD dwBytesRecorded; /* used for input only */ DWORD_PTR dwUser; /* for client's use */ DWORD dwFlags; /* assorted flags (see defines) */ DWORD dwLoops; /* loop control counter */ struct wavehdr_tag FAR *lpNext; /* reserved for driver */ DWORD_PTR reserved; /* reserved for driver */ } WAVEHDR, *PWAVEHDR, NEAR *NPWAVEHDR, FAR *LPWAVEHDR; If you are used to work with all those low level tools like pointer-arithmetic, casts, etc starting writing managed code is a pain in the ass. It's like trying to learn how to swim with your hands tied on your back. Some things I tried (to no effect): .NET compact framework does not seem to support the Pack = 2^x directive in [StructLayout]. I tried [StructLayout(LayoutKind.Explicit)] and used 4 bytes and 8 bytes alignment. 4 bytes alignmentgave me the same result as the above code and 8 bytes alignment only made things worse - but that's what I expected. Interestingly if I move the code from setupBuffer into the setupWaveIn and do not declare the GCHandle in the context of the class but in a local context of setupWaveIn the struct returned by the callback function does not seem to be corrupted. I am not sure however why this is the case and how I can use this knowledge to fix my code. I'd really appreciate any good links on marshalling, calling unmanaged code from C#, etc. Then I'd be very happy if someone could point out my mistakes. What am I doing wrong? Why do I not get what I'd expect.
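    One way to narrow this down (a hedged suggestion, not from the original post): dump the native WAVEHDR layout and compare it field by field with Marshal.SizeOf and Marshal.OffsetOf on the managed side. The sketch below assumes a 32-bit build so that pointer-sized fields match the Windows Mobile target; any disagreement in size or offsets points straight at the struct declaration.

    // Hedged sketch: print the native WAVEHDR layout for comparison with the
    // managed declaration (Marshal.SizeOf / Marshal.OffsetOf on the C# side).
    // Build as 32-bit so pointer-sized fields match the Windows Mobile target.
    #include <windows.h>   // pulls in mmsystem.h, which defines WAVEHDR
    #include <cstddef>     // offsetof
    #include <cstdio>

    int main()
    {
        std::printf("sizeof(WAVEHDR)          = %u\n", (unsigned)sizeof(WAVEHDR));
        std::printf("offsetof lpData          = %u\n", (unsigned)offsetof(WAVEHDR, lpData));
        std::printf("offsetof dwBufferLength  = %u\n", (unsigned)offsetof(WAVEHDR, dwBufferLength));
        std::printf("offsetof dwBytesRecorded = %u\n", (unsigned)offsetof(WAVEHDR, dwBytesRecorded));
        std::printf("offsetof dwUser          = %u\n", (unsigned)offsetof(WAVEHDR, dwUser));
        std::printf("offsetof dwFlags         = %u\n", (unsigned)offsetof(WAVEHDR, dwFlags));
        std::printf("offsetof dwLoops         = %u\n", (unsigned)offsetof(WAVEHDR, dwLoops));
        std::printf("offsetof lpNext          = %u\n", (unsigned)offsetof(WAVEHDR, lpNext));
        std::printf("offsetof reserved        = %u\n", (unsigned)offsetof(WAVEHDR, reserved));
        return 0;
    }

    If the numbers agree with the managed side, the next thing to look at would be lifetimes rather than layout: the native driver keeps using the WAVEHDR and the pinned buffer after waveInAddBuffer returns, so both need to stay referenced (and ideally pinned) until the callback fires, which fits the observation that moving the GCHandle changed the behaviour.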

    Read the article

  • Implementing a robust async stream reader for a console

    - by Jon
    I recently provided an answer to this question: C# - Realtime console output redirection. As often happens, explaining stuff (here "stuff" was how I tackled a similar problem) leads you to greater understanding and/or, as is the case here, "oops" moments. I realized that my solution, as implemented, has a bug. The bug has little practical importance, but it has an extremely large importance to me as a developer: I can't rest easy knowing that my code has the potential to blow up. Squashing the bug is the purpose of this question. I apologize for the long intro, so let's get dirty. I wanted to build a class that allows me to receive input from a Stream in an event-based manner. The stream, in my scenario, is guaranteed to be a FileStream and there is also an associated StreamReader already present to leverage. The public interface of the class is this: public class MyStreamManager { public event EventHandler<ConsoleOutputReadEventArgs> StandardOutputRead; public void StartSendingEvents(); public void StopSendingEvents(); } Obviously this specific scenario has to do with a console's standard output. StartSendingEvents and StopSendingEvents do what they advertise; for the purposes of this discussion, we can assume that events are always being sent without loss of generality. The class uses these two fields internally: protected readonly StringBuilder inputAccumulator = new StringBuilder(); protected readonly byte[] buffer = new byte[256]; The functionality of the class is implemented in the methods below. To get the ball rolling: public void StartSendingEvents(); { this.stopAutomation = false; this.BeginReadAsync(); } To read data out of the Stream without blocking, and also without requiring a carriage return char, BeginRead is called: protected void BeginReadAsync() { if (!this.stopAutomation) { this.StandardOutput.BaseStream.BeginRead( this.buffer, 0, this.buffer.Length, this.ReadHappened, null); } } The challenging part: BeginRead requires using a buffer. This means that when reading from the stream, it is possible that the bytes available to read ("incoming chunk") are larger than the buffer. Since we are only handing off data from the stream to a consumer, and that consumer may well have inside knowledge about the size and/or format of these chunks, I want to call event subscribers exactly once for each chunk. Otherwise the abstraction breaks down and the subscribers have to buffer the incoming data and reconstruct the chunks themselves using said knowledge. This is much less convenient to the calling code, and detracts from the usefulness of my class. Edit: There are comments below correctly stating that since the data is coming from a stream, there is absolutely nothing that the receiver can infer about the structure of the data unless it is fully prepared to parse it. What I am trying to do here is leverage the "flush the output" "structure" that the owner of the console imparts while writing on it. I am prepared to assume (better: allow my caller to have the option to assume) that the OS will pass me the data written between two flushes of the stream in exactly one piece. To this end, if the buffer is full after EndRead, we don't send its contents to subscribers immediately but instead append them to a StringBuilder. The contents of the StringBuilder are only sent back whenever there is no more to read from the stream (thus preserving the chunks). 
private void ReadHappened(IAsyncResult asyncResult) { var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult); if (bytesRead == 0) { this.OnAutomationStopped(); return; } var input = this.StandardOutput.CurrentEncoding.GetString( this.buffer, 0, bytesRead); this.inputAccumulator.Append(input); if (bytesRead < this.buffer.Length) { this.OnInputRead(); // only send back if we 're sure we got it all } this.BeginReadAsync(); // continue "looping" with BeginRead } After any read which is not enough to fill the buffer, all accumulated data is sent to the subscribers: private void OnInputRead() { var handler = this.StandardOutputRead; if (handler == null) { return; } handler(this, new ConsoleOutputReadEventArgs(this.inputAccumulator.ToString())); this.inputAccumulator.Clear(); } (I know that as long as there are no subscribers the data gets accumulated forever. This is a deliberate decision). The good This scheme works almost perfectly: Async functionality without spawning any threads Very convenient to the calling code (just subscribe to an event) Maintains the "chunkiness" of the data; this allows the calling code to use inside knowledge of the data without doing any extra work Is almost agnostic to the buffer size (it will work correctly with any size buffer irrespective of the data being read) The bad That last almost is a very big one. Consider what happens when there is an incoming chunk with length exactly equal to the size of the buffer. The chunk will be read and buffered, but the event will not be triggered. This will be followed up by a BeginRead that expects to find more data belonging to the current chunk in order to send it back all in one piece, but... there will be no more data in the stream. In fact, as long as data is put into the stream in chunks with length exactly equal to the buffer size, the data will be buffered and the event will never be triggered. This scenario may be highly unlikely to occur in practice, especially since we can pick any number for the buffer size, but the problem is there. Solution? Unfortunately, after checking the available methods on FileStream and StreamReader, I can't find anything which lets me peek into the stream while also allowing async methods to be used on it. One "solution" would be to have a thread wait on a ManualResetEvent after the "buffer filled" condition is detected. If the event is not signaled (by the async callback) in a small amount of time, then more data from the stream will not be forthcoming and the data accumulated so far should be sent to subscribers. However, this introduces the need for another thread, requires thread synchronization, and is plain inelegant. Specifying a timeout for BeginRead would also suffice (call back into my code every now and then so I can check if there's data to be sent back; most of the time there will not be anything to do, so I expect the performance hit to be negligible). But it looks like timeouts are not supported in FileStream. Since I imagine that async calls with timeouts are an option in bare Win32, another approach might be to PInvoke the hell out of the problem. But this is also undesirable as it will introduce complexity and simply be a pain to code. Is there an elegant way to get around the problem? Thanks for being patient enough to read all of this.

    Read the article

  • Cumulative +1/-1 Cointoss crashes on 1000 iterations. Please advise; c++ boost random libraries

    - by user1731972
    following some former advice Multithreaded application, am I doing it right? I think I have a threadsafe number generator using boost, but my program crashes when I input 1000 iterations. The output .csv file when graphed looks right, but I'm not sure why it's crashing. It's using _beginthread, and everyone is telling me I should use the more (convoluted) _beingthreadex, which I'm not familiar with. If someone could recommend an example, I would greatly appreciate it. Also... someone pointed out I should be applying a second parameter to my _beginthread for the array counting start positions, but I have no idea how to pass more than one parameter, other than attempting to use a structure, and I've read structure's and _beginthread don't get along (although, I could just use the boost threads...) #include <process.h> #include <windows.h> #include <iostream> #include <fstream> #include <time.h> #include <random> #include <boost/random.hpp> //for srand48_r(time(NULL), &randBuffer); which doesn't work #include <stdio.h> #include <stdlib.h> //#include <thread> using namespace std; using namespace boost; using namespace boost::random; void myThread0 (void *dummy ); void myThread1 (void *dummy ); void myThread2 (void *dummy ); void myThread3 (void *dummy ); //for random seeds void initialize(); //from http://stackoverflow.com/questions/7114043/random-number-generation-in-c11-how-to-generate-how-do-they-work uniform_int_distribution<> two(1,2); typedef std::mt19937 MyRNG; // the Mersenne Twister with a popular choice of parameters uint32_t seed_val; // populate somehow MyRNG rng1; // e.g. keep one global instance (per thread) MyRNG rng2; // e.g. keep one global instance (per thread) MyRNG rng3; // e.g. keep one global instance (per thread) MyRNG rng4; // e.g. keep one global instance (per thread) //only needed for shared variables //CRITICAL_SECTION cs1,cs2,cs3,cs4; // global int main() { ofstream myfile; myfile.open ("coinToss.csv"); int rNum; long numRuns; long count = 0; int divisor = 1; float fHolder = 0; long counter = 0; float percent = 0.0; //? //unsigned threadID; //HANDLE hThread; initialize(); HANDLE hThread[4]; const int size = 100000; int array[size]; printf ("Runs (uses multiple of 100,000) "); cin >> numRuns; for (int a = 0; a < numRuns; a++) { hThread[0] = (HANDLE)_beginthread( myThread0, 0, (void*)(array) ); hThread[1] = (HANDLE)_beginthread( myThread1, 0, (void*)(array) ); hThread[2] = (HANDLE)_beginthread( myThread2, 0, (void*)(array) ); hThread[3] = (HANDLE)_beginthread( myThread3, 0, (void*)(array) ); //waits for threads to finish before continuing WaitForMultipleObjects(4, hThread, TRUE, INFINITE); //closes handles I guess? CloseHandle( hThread[0] ); CloseHandle( hThread[1] ); CloseHandle( hThread[2] ); CloseHandle( hThread[3] ); //dump array into calculations //average array into fHolder //this could be split into threads as well for (int p = 0; p < size; p++) { counter += array[p] == 2 ? 
1 : -1; //cout << array[p] << endl; //cout << counter << endl; } //this fHolder calculation didn't work //fHolder = counter / size; //so I had to use this cout << counter << endl; fHolder = counter; fHolder = fHolder / size; myfile << fHolder << endl; } } void initialize() { //seed value needs to be supplied //rng1.seed(seed_val*1); rng1.seed((unsigned int)time(NULL)); rng2.seed(((unsigned int)time(NULL))*2); rng3.seed(((unsigned int)time(NULL))*3); rng4.seed(((unsigned int)time(NULL))*4); }; void myThread0 (void *param) { //EnterCriticalSection(&cs1); //aquire the critical section object int *i = (int *)param; for (int x = 0; x < 25000; x++) { //doesn't work, part of merssene twister //i[x] = next(); i[x] = two(rng1); //original srand //i[x] = rand() % 2 + 1; //doesn't work for some reason. //uint_dist2(rng); //i[x] = qrand() % 2 + 1; //cout << i[x] << endl; } //LeaveCriticalSection(&cs1); // release the critical section object } void myThread1 (void *param) { //EnterCriticalSection(&cs2); //aquire the critical section object int *i = (int *)param; for (int x = 25000; x < 50000; x++) { //param[x] = rand() % 2 + 1; i[x] = two(rng2); //i[x] = rand() % 2 + 1; //cout << i[x] << endl; } //LeaveCriticalSection(&cs2); // release the critical section object } void myThread2 (void *param) { //EnterCriticalSection(&cs3); //aquire the critical section object int *i = (int *)param; for (int x = 50000; x < 75000; x++) { i[x] = two(rng3); //i[x] = rand() % 2 + 1; //cout << i[x] << endl; } //LeaveCriticalSection(&cs3); // release the critical section object } void myThread3 (void *param) { //EnterCriticalSection(&cs4); //aquire the critical section object int *i = (int *)param; for (int x = 75000; x < 100000; x++) { i[x] = two(rng4); //i[x] = rand() % 2 + 1; //cout << i[x] << endl; } //LeaveCriticalSection(&cs4); // release the critical section object }
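    Since the question explicitly asks for an example of _beginthreadex and of passing more than one value into a thread, here is a minimal, self-contained sketch. The names ThreadArgs and coinWorker are made up for illustration, and the Boost generator call is replaced by a trivial placeholder; the idea is simply to bundle the array pointer and the index range into a struct and hand the thread its address.

    // Hedged sketch: _beginthreadex with a struct carrying several parameters.
    #include <windows.h>
    #include <process.h>
    #include <cstdio>

    struct ThreadArgs {
        int* data;   // shared results array
        int  begin;  // first index this thread fills
        int  end;    // one past the last index it fills
    };

    // _beginthreadex requires exactly this shape: unsigned __stdcall fn(void*)
    unsigned __stdcall coinWorker(void* param)
    {
        ThreadArgs* args = static_cast<ThreadArgs*>(param);
        for (int i = args->begin; i < args->end; ++i)
            args->data[i] = (i % 2) + 1;   // placeholder for two(rngN)
        return 0;                          // becomes the thread's exit code
    }

    int main()
    {
        const int size = 100000;
        static int results[size];          // static so it is not on main's stack

        ThreadArgs args[4];
        HANDLE     threads[4];
        for (int t = 0; t < 4; ++t) {
            args[t].data  = results;
            args[t].begin = t * (size / 4);
            args[t].end   = (t + 1) * (size / 4);
            // args[t] must outlive the thread; it does here because we wait below.
            threads[t] = reinterpret_cast<HANDLE>(
                _beginthreadex(NULL, 0, coinWorker, &args[t], 0, NULL));
        }

        WaitForMultipleObjects(4, threads, TRUE, INFINITE);
        for (int t = 0; t < 4; ++t)
            CloseHandle(threads[t]);       // unlike _beginthread, we own these handles

        std::printf("first value: %d\n", results[0]);
        return 0;
    }

    Unlike _beginthread, _beginthreadex returns a handle that the CRT does not close automatically when the thread exits, so waiting on it and then calling CloseHandle is well defined; with _beginthread the handle may already be invalid by the time WaitForMultipleObjects sees it, which is one plausible contributor to intermittent crashes at higher iteration counts.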

    Read the article

  • PHP / MYSQL: Database empties when I use a variable in the WHERE condition of the last mysql_query

    - by Christian Cugnet
    <?php require 'connect.php'; $search = $_POST["search"]; These two queries work fine. So I used their format for the one below. $result = mysql_query("SELECT * FROM `subjects` WHERE $search = `student_id`"); $result2 = mysql_query("SELECT * FROM `grades` WHERE $search = `student_id`"); while($row = mysql_fetch_array($result)) { $row2 = mysql_fetch_array($result2); echo"<table border='1'>"; echo "<tr>"; echo "<th>Subjects:</th>"; echo "<th>Current Mark:</th>"; echo "<th>Edit Mark:</th>"; echo"</tr>"; echo"<tr>"; echo "<td>". $row['c1'] ."</td>"; echo "<td>". $row2['m1'] ."</td>"; echo "<td><input type='text' name='m1'></td>"; echo "</tr>"; echo "<tr>"; echo "<td>". $row['c2'] ."</td>"; echo "<td>". $row2['m2'] ."</td>"; echo "<td><input type='text' name='m2'></td>"; echo "</tr>"; echo "<tr>"; echo "<td>". $row['c3'] ."</td>"; echo "<td>". $row2['m3'] ."</td>"; echo "<td><input type='text' name='m3'></td>"; echo "</tr>"; echo "<tr>"; echo "<td>". $row['c4'] ."</td>"; echo "<td>". $row2['m4'] ."</td>"; echo "<td><input type='text' name='m4'></td>"; echo "</tr>"; echo "<tr>"; echo "<td>". $row['c5'] ."</td>"; echo "<td>". $row2['m5'] ."</td>"; echo "<td><input type='text' name='m5'></td>"; echo "</tr>"; echo "<tr>"; echo "<td>". $row['c6'] ."</td>"; echo "<td>". $row2['m6'] ."</td>"; echo "<td><input type='text' name='m6'></td>"; echo "</tr>"; echo "<tr>"; echo "<td>". $row['c7'] ."</td>"; echo "<td>". $row2['m7'] ."</td>"; echo "<td><input type='text' name='m7'></td>"; echo "</tr>"; echo "</table>"; echo "<input type='submit' name='submit' value='Submit'>"; echo "</form>"; } $M1 = $_POST["m1"]; $M2 = $_POST["m2"]; $M3 = $_POST["m3"]; $M4 = $_POST["m4"]; $M5 = $_POST["m5"]; $M6 = $_POST["m6"]; $M7 = $_POST["m7"]; It works if I put numbers e.x. 11111 Otherwise it just enters blank spaces into the table. I've tried '".$search."' I've tried ".$search." mysql_query("UPDATE grades SET m1 = '$M1', m2 = '$M2',m3 = '$M3',m4 = '$M4',m5 = '$M5',m6 = '$M6',m7 = '$M7' WHERE $search = `student_id`"); ?> Table +------------+---+---+---+---+---+---+---+ |student_id|m1|m2|m3|m4|m5|m6|m7| +------------+---+---+---+---+---+---+---+ ===Database d1 == Table structure for table grades |------ |Column|Type|Null|Default |------ |//student_id//|int(5)|No| |m1|text|No| |m2|text|No| |m3|text|No| |m4|text|No| |m5|text|No| |m6|text|No| |m7|text|No| == Dumping data for table grades |11111| | | | | | | |11112|fg|fd|f|f|fd|f|f ===Database d1 == Table structure for table subjects |------ |Column|Type|Null|Default |------ |//student_id//|int(11)|No| |c1|text|No| |c2|text|No| |c3|text|No| |c4|text|No| |c5|text|No| |c6|text|No| |c7|text|No| == Dumping data for table subjects |11111|English|Math|Science|Sport|IT|Art|History |11112|grdgg|vsbvbbb|bdbbrfd|bdbrb|dbrbfbf|fbdfbdbf|dbfbdfb

    Read the article

  • Need help with copy constructor for very basic implementation of singly linked lists

    - by Jesus
    Last week, we created a program that manages sets of strings, using classes and vectors. I was able to complete this 100%. This week, we have to replace the vector we used to store strings in our class with simple singly linked lists. The function basically allows users to declare sets of strings that are empty, and sets with only one element. In the main file, there is a vector whose elements are a struct that contain setName and strSet (class). HERE IS MY PROBLEM: It deals with the copy constructor of the class. When I remove/comment out the copy constructor, I can declare as many empty or single sets as I want, and output their values without a problem. But I know I will obviously need the copy constructor for when I implement the rest of the program. When I leave the copy constructor in, I can declare one set, either single or empty, and output its value. But if I declare a 2nd set, and i try to output either of the first two sets, i get a Segmentation Fault. Moreover, if i try to declare more then 2 sets, I get a Segmentation Fault. Any help would be appreciated!! Here is my code for a very basic implementation of everything: Here is the setcalc.cpp: (main file) #include <iostream> #include <cctype> #include <cstring> #include <string> #include "help.h" #include "strset2.h" using namespace std; // Declares of structure to hold all the sets defined struct setsOfStr { string nameOfSet; strSet stringSet; }; // Checks if the set name inputted is unique bool isSetNameUnique( vector<setsOfStr> strSetArr, string setName) { for(unsigned int i = 0; i < strSetArr.size(); i++) { if( strSetArr[i].nameOfSet == setName ) { return false; } } return true; } int main(int argc, char *argv[]) { char commandChoice; // Declares a vector with our declared structure as the type vector<setsOfStr> strSetVec; string setName; string singleEle; // Sets a loop that will constantly ask for a command until 'q' is typed while (1) { // declaring a set to be empty if(commandChoice == 'd') { cin >> setName; // Check that the set name inputted is unique if (isSetNameUnique(strSetVec, setName) == true) { strSet emptyStrSet; setsOfStr set1; set1.nameOfSet = setName; set1.stringSet = emptyStrSet; strSetVec.push_back(set1); } else { cerr << "ERROR: Re-declaration of set '" << setName << "'\n"; } } // declaring a set to be a singleton else if(commandChoice == 's') { cin >> setName; cin >> singleEle; // Check that the set name inputted is unique if (isSetNameUnique(strSetVec, setName) == true) { strSet singleStrSet(singleEle); setsOfStr set2; set2.nameOfSet = setName; set2.stringSet = singleStrSet; strSetVec.push_back(set2); } else { cerr << "ERROR: Re-declaration of set '" << setName << "'\n"; } } // using the output function else if(commandChoice == 'o') { cin >> setName; if(isSetNameUnique(strSetVec, setName) == false) { // loop through until the set name is matched and call output on its strSet for(unsigned int k = 0; k < strSetVec.size(); k++) { if( strSetVec[k].nameOfSet == setName ) { (strSetVec[k].stringSet).output(); } } } else { cerr << "ERROR: No such set '" << setName << "'\n"; } } // quitting else if(commandChoice == 'q') { break; } else { cerr << "ERROR: Ignoring bad command: '" << commandChoice << "'\n"; } } return 0; } Here is the strSet2.h: #ifndef _STRSET_ #define _STRSET_ #include <iostream> #include <vector> #include <string> struct node { std::string s1; node * next; }; class strSet { private: node * first; public: strSet (); // Create empty set strSet (std::string s); // Create singleton set strSet 
(const strSet &copy); // Copy constructor // will implement destructor later void output() const; strSet& operator = (const strSet& rtSide); // Assignment }; // End of strSet class #endif // _STRSET_ And here is the strSet2.cpp (implementation of class) #include <iostream> #include <vector> #include <string> #include "strset2.h" using namespace std; strSet::strSet() { first = NULL; } strSet::strSet(string s) { node *temp; temp = new node; temp->s1 = s; temp->next = NULL; first = temp; } strSet::strSet(const strSet& copy) { cout << "copy-cst\n"; node *n = copy.first; node *prev = NULL; while (n) { node *newNode = new node; newNode->s1 = n->s1; newNode->next = NULL; if (prev) { prev->next = newNode; } else { first = newNode; } prev = newNode; n = n->next; } } void strSet::output() const { if(first == NULL) { cout << "Empty set\n"; } else { node *temp; temp = first; while(1) { cout << temp->s1 << endl; if(temp->next == NULL) break; temp = temp->next; } } } strSet& strSet::operator = (const strSet& rtSide) { first = rtSide.first; return *this; }

    Read the article

  • Using a mounted NTFS share with nginx

    - by Hoff
    I have set up a local testing VM with Ubuntu Server 12.04 LTS and the LEMP stack. It's kind of an unconventional setup because instead of having all my PHP scripts on the local machine, I've mounted an NTFS share as the document root because I do my development on Windows. I had everything working perfectly up until this morning, now I keep getting a dreaded 'File not found.' error. I am almost certain this must be somehow permission related, because if I copy my site over to /var/www, nginx and php-fpm have no problems serving my PHP scripts. What I can't figure out is why all of a sudden (after a reboot of the server), no PHP files will be served but instead just the 'File not found.' error. Static files work fine, so I think it's PHP that is causing the headache. Both nginx and php-fpm are configured to run as the user www-data: root@ubuntu-server:~# ps aux | grep 'nginx\|php-fpm' root 1095 0.0 0.0 5816 792 ? Ss 11:11 0:00 nginx: master process /opt/nginx/sbin/nginx -c /etc/nginx/nginx.conf www-data 1096 0.0 0.1 6016 1172 ? S 11:11 0:00 nginx: worker process www-data 1098 0.0 0.1 6016 1172 ? S 11:11 0:00 nginx: worker process root 1130 0.0 0.4 175560 4212 ? Ss 11:11 0:00 php-fpm: master process (/etc/php5/php-fpm.conf) www-data 1131 0.0 0.3 175560 3216 ? S 11:11 0:00 php-fpm: pool www www-data 1132 0.0 0.3 175560 3216 ? S 11:11 0:00 php-fpm: pool www www-data 1133 0.0 0.3 175560 3216 ? S 11:11 0:00 php-fpm: pool www root 1686 0.0 0.0 4368 816 pts/1 S+ 11:11 0:00 grep --color=auto nginx\|php-fpm I have mounted the NTFS share at /mnt/webfiles by editing /etc/fstab and adding the following line: //192.168.0.199/c$/Websites/ /mnt/webfiles cifs username=Jordan,password=mypasswordhere,gid=33,uid=33 0 0 Where gid 33 is the www-data group and uid 33 is the user www-data. If I list the contents of one of my sites you can in fact see that they belong to the user www-data: root@ubuntu-server:~# ls -l /mnt/webfiles/nTv5-2.0 total 8 drwxr-xr-x 0 www-data www-data 0 Jun 6 19:12 app drwxr-xr-x 0 www-data www-data 0 Aug 22 19:00 assets -rwxr-xr-x 0 www-data www-data 1150 Jan 4 2012 favicon.ico -rwxr-xr-x 0 www-data www-data 1412 Dec 28 2011 index.php drwxr-xr-x 0 www-data www-data 0 Jun 3 16:44 lib drwxr-xr-x 0 www-data www-data 0 Jan 3 2012 plugins drwxr-xr-x 0 www-data www-data 0 Jun 3 16:45 vendors If I switch to the www-data user, I have no problem creating a new file on the share: root@ubuntu-server:~# su www-data $ > /mnt/webfiles/test.txt $ ls -l /mnt/webfiles | grep test\.txt -rwxr-xr-x 0 www-data www-data 0 Sep 8 11:19 test.txt There should be no problem reading or writing to the share with php-fpm running as the user www-data. When I examine the error log of nginx, it's filled with a bunch of lines that look like the following: 2012/09/08 11:22:36 [error] 1096#0: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.0.199, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.0.123" 2012/09/08 11:22:39 [error] 1096#0: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.0.199, server: , request: "GET /apc.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.0.123" It's bizarre that this was working previously and now all of sudden PHP is complaining that it can't "find" the scripts on the share. Does anybody know why this is happening? 
EDIT I tried editing php-fpm.conf and changing chdir to the following: chdir = /mnt/webfiles When I try and restart the php-fpm service, I get the error: Starting php-fpm [08-Sep-2012 14:20:55] ERROR: [pool www] the chdir path '/mnt/webfiles' does not exist or is not a directory This is a total load of bullshit because this directory DOES exist and is mounted! Any ls commands to list that directory work perfectly. Why the hell can't PHP-FPM see this directory?! Here are my configuration files for reference: nginx.conf user www-data; worker_processes 2; error_log /var/log/nginx/nginx.log info; pid /var/run/nginx.pid; events { worker_connections 1024; multi_accept on; } http { include fastcgi.conf; include mime.types; default_type application/octet-stream; set_real_ip_from 127.0.0.1; real_ip_header X-Forwarded-For; ## Proxy proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 32m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffers 32 4k; ## Compression gzip on; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; ### TCP options tcp_nodelay on; tcp_nopush on; keepalive_timeout 65; sendfile on; include /etc/nginx/sites-enabled/*; } my site config server { listen 80; access_log /var/log/nginx/$host.access.log; error_log /var/log/nginx/error.log; root /mnt/webfiles/nTv5-2.0/app/webroot; index index.php; ## Block bad bots if ($http_user_agent ~* (HTTrack|HTMLParser|libcurl|discobot|Exabot|Casper|kmccrew|plaNETWORK|RPT-HTTPClient)) { return 444; } ## Block certain Referers (case insensitive) if ($http_referer ~* (sex|vigra|viagra) ) { return 444; } ## Deny dot files: location ~ /\. { deny all; } ## Favicon Not Found location = /favicon.ico { access_log off; log_not_found off; } ## Robots.txt Not Found location = /robots.txt { access_log off; log_not_found off; } if (-f $document_root/maintenance.html) { rewrite ^(.*)$ /maintenance.html last; } location ~* \.(?:ico|css|js|gif|jpe?g|png)$ { # Some basic cache-control for static files to be sent to the browser expires max; add_header Pragma public; add_header Cache-Control "max-age=2678400, public, must-revalidate"; } location / { try_files $uri $uri/ index.php; if (-f $request_filename) { break; } rewrite ^(.+)$ /index.php?url=$1 last; } location ~ \.php$ { include /etc/nginx/fastcgi.conf; fastcgi_pass unix:/var/run/php5-fpm.sock; } } php-fpm.conf ;;;;;;;;;;;;;;;;;;;;; ; FPM Configuration ; ;;;;;;;;;;;;;;;;;;;;; ; All relative paths in this configuration file are relative to PHP's install ; prefix (/opt/php5). This prefix can be dynamicaly changed by using the ; '-p' argument from the command line. ; Include one or more files. If glob(3) exists, it is used to include a bunch of ; files from a glob(3) pattern. This directive can be used everywhere in the ; file. ; Relative path can also be used. 
They will be prefixed by: ; - the global prefix if it's been set (-p arguement) ; - /opt/php5 otherwise ;include=etc/fpm.d/*.conf ;;;;;;;;;;;;;;;;;; ; Global Options ; ;;;;;;;;;;;;;;;;;; [global] ; Pid file ; Note: the default prefix is /opt/php5/var ; Default Value: none pid = /var/run/php-fpm.pid ; Error log file ; Note: the default prefix is /opt/php5/var ; Default Value: log/php-fpm.log error_log = /var/log/php5-fpm/php-fpm.log ; Log level ; Possible Values: alert, error, warning, notice, debug ; Default Value: notice ;log_level = notice ; If this number of child processes exit with SIGSEGV or SIGBUS within the time ; interval set by emergency_restart_interval then FPM will restart. A value ; of '0' means 'Off'. ; Default Value: 0 ;emergency_restart_threshold = 0 ; Interval of time used by emergency_restart_interval to determine when ; a graceful restart will be initiated. This can be useful to work around ; accidental corruptions in an accelerator's shared memory. ; Available Units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;emergency_restart_interval = 0 ; Time limit for child processes to wait for a reaction on signals from master. ; Available units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;process_control_timeout = 0 ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging. ; Default Value: yes ;daemonize = yes ;;;;;;;;;;;;;;;;;;;; ; Pool Definitions ; ;;;;;;;;;;;;;;;;;;;; ; Multiple pools of child processes may be started with different listening ; ports and different management options. The name of the pool will be ; used in logs and stats. There is no limitation on the number of pools which ; FPM can handle. Your system will tell you anyway :) ; Start a new pool named 'www'. ; the variable $pool can we used in any directive and will be replaced by the ; pool name ('www' here) [www] ; Per pool prefix ; It only applies on the following directives: ; - 'slowlog' ; - 'listen' (unixsocket) ; - 'chroot' ; - 'chdir' ; - 'php_values' ; - 'php_admin_values' ; When not set, the global prefix (or /opt/php5) applies instead. ; Note: This directive can also be relative to the global prefix. ; Default Value: none ;prefix = /path/to/pools/$pool ; The address on which to accept FastCGI requests. ; Valid syntaxes are: ; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific address on ; a specific port; ; 'port' - to listen on a TCP socket to all addresses on a ; specific port; ; '/path/to/unix/socket' - to listen on a unix socket. ; Note: This value is mandatory. ;listen = 127.0.0.1:9000 listen = /var/run/php5-fpm.sock ; Set listen(2) backlog. A value of '-1' means unlimited. ; Default Value: 128 (-1 on FreeBSD and OpenBSD) ;listen.backlog = -1 ; List of ipv4 addresses of FastCGI clients which are allowed to connect. ; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original ; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address ; must be separated by a comma. If this value is left blank, connections will be ; accepted from any ip address. ; Default Value: any ;listen.allowed_clients = 127.0.0.1 ; Set permissions for unix socket, if one is used. In Linux, read/write ; permissions must be set in order to allow connections from a web server. Many ; BSD-derived systems allow connections regardless of permissions. 
; Default Values: user and group are set as the running user ; mode is set to 0666 ;listen.owner = www-data ;listen.group = www-data ;listen.mode = 0666 ; Unix user/group of processes ; Note: The user is mandatory. If the group is not set, the default user's group ; will be used. user = www-data group = www-data ; Choose how the process manager will control the number of child processes. ; Possible Values: ; static - a fixed number (pm.max_children) of child processes; ; dynamic - the number of child processes are set dynamically based on the ; following directives: ; pm.max_children - the maximum number of children that can ; be alive at the same time. ; pm.start_servers - the number of children created on startup. ; pm.min_spare_servers - the minimum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is less than this ; number then some children will be created. ; pm.max_spare_servers - the maximum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is greater than this ; number then some children will be killed. ; Note: This value is mandatory. pm = dynamic ; The number of child processes to be created when pm is set to 'static' and the ; maximum number of child processes to be created when pm is set to 'dynamic'. ; This value sets the limit on the number of simultaneous requests that will be ; served. Equivalent to the ApacheMaxClients directive with mpm_prefork. ; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP ; CGI. ; Note: Used when pm is set to either 'static' or 'dynamic' ; Note: This value is mandatory. pm.max_children = 50 ; The number of child processes created on startup. ; Note: Used only when pm is set to 'dynamic' ; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2 pm.start_servers = 20 ; The desired minimum number of idle server processes. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' pm.min_spare_servers = 5 ; The desired maximum number of idle server processes. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' pm.max_spare_servers = 35 ; The number of requests each child process should execute before respawning. ; This can be useful to work around memory leaks in 3rd party libraries. For ; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS. ; Default Value: 0 pm.max_requests = 500 ; The URI to view the FPM status page. If this value is not set, no URI will be ; recognized as a status page. By default, the status page shows the following ; information: ; accepted conn - the number of request accepted by the pool; ; pool - the name of the pool; ; process manager - static or dynamic; ; idle processes - the number of idle processes; ; active processes - the number of active processes; ; total processes - the number of idle + active processes. ; max children reached - number of times, the process limit has been reached, ; when pm tries to start more children (works only for ; pm 'dynamic') ; The values of 'idle processes', 'active processes' and 'total processes' are ; updated each second. The value of 'accepted conn' is updated in real time. ; Example output: ; accepted conn: 12073 ; pool: www ; process manager: static ; idle processes: 35 ; active processes: 65 ; total processes: 100 ; max children reached: 1 ; By default the status page output is formatted as text/plain. 
Passing either ; 'html' or 'json' as a query string will return the corresponding output ; syntax. Example: ; http://www.foo.bar/status ; http://www.foo.bar/status?json ; http://www.foo.bar/status?html ; Note: The value must start with a leading slash (/). The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. ; Default Value: not set pm.status_path = /status ; The ping URI to call the monitoring page of FPM. If this value is not set, no ; URI will be recognized as a ping page. This could be used to test from outside ; that FPM is alive and responding, or to ; - create a graph of FPM availability (rrd or such); ; - remove a server from a group if it is not responding (load balancing); ; - trigger alerts for the operating team (24/7). ; Note: The value must start with a leading slash (/). The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. ; Default Value: not set ping.path = /ping ; This directive may be used to customize the response of a ping request. The ; response is formatted as text/plain with a 200 response code. ; Default Value: pong ping.response = pong ; The timeout for serving a single request after which the worker process will ; be killed. This option should be used when the 'max_execution_time' ini option ; does not stop script execution for some reason. A value of '0' means 'off'. ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_terminate_timeout = 0 ; The timeout for serving a single request after which a PHP backtrace will be ; dumped to the 'slowlog' file. A value of '0s' means 'off'. ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_slowlog_timeout = 0 ; The log file for slow requests ; Default Value: not set ; Note: slowlog is mandatory if request_slowlog_timeout is set ;slowlog = log/$pool.log.slow ; Set open file descriptor rlimit. ; Default Value: system defined value ;rlimit_files = 1024 ; Set max core size rlimit. ; Possible Values: 'unlimited' or an integer greater or equal to 0 ; Default Value: system defined value ;rlimit_core = 0 ; Chroot to this directory at the start. This value must be defined as an ; absolute path. When this value is not set, chroot is not used. ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one ; of its subdirectories. If the pool prefix is not set, the global prefix ; will be used instead. ; Note: chrooting is a great security feature and should be used whenever ; possible. However, all PHP paths will be relative to the chroot ; (error_log, sessions.save_path, ...). ; Default Value: not set ;chroot = ; Chdir to this directory at the start. ; Note: relative path can be used. ; Default Value: current directory or / when chroot ;chdir = /var/www ; Redirect worker stdout and stderr into main error log. If not set, stdout and ; stderr will be redirected to /dev/null according to FastCGI specs. ; Note: on highloaded environement, this can cause some delay in the page ; process time (several ms). ; Default Value: no ;catch_workers_output = yes ; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken from ; the current environment. ; Default Value: clean env ;env[HOSTNAME] = $HOSTNAME ;env[PATH] = /usr/local/bin:/usr/bin:/bin ;env[TMP] = /tmp ;env[TMPDIR] = /tmp ;env[TEMP] = /tmp ; Additional php.ini defines, specific to this pool of workers. 
These settings ; overwrite the values previously defined in the php.ini. The directives are the ; same as the PHP SAPI: ; php_value/php_flag - you can set classic ini defines which can ; be overwritten from PHP call 'ini_set'. ; php_admin_value/php_admin_flag - these directives won't be overwritten by ; PHP call 'ini_set' ; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no. ; Defining 'extension' will load the corresponding shared extension from ; extension_dir. Defining 'disable_functions' or 'disable_classes' will not ; overwrite previously defined php.ini values, but will append the new value ; instead. ; Note: path INI options can be relative and will be expanded with the prefix ; (pool, global or /opt/php5) ; Default Value: nothing is defined by default except the values in php.ini and ; specified at startup with the -d argument ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected] ;php_flag[display_errors] = off ;php_admin_value[error_log] = /var/log/fpm-php.www.log ;php_admin_flag[log_errors] = on ;php_admin_value[memory_limit] = 32M php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i
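    A hedged troubleshooting aside rather than a fix: since everything worked until the reboot, it is worth confirming that the cifs mount is actually present and traversable by www-data at the moment php-fpm starts; a network mount that comes up after the service does would produce exactly the "chdir path ... does not exist" complaint. Something along these lines, run as root, with the restart command adjusted to however php-fpm was installed:

    # is the share mounted at all right now?
    mount | grep /mnt/webfiles
    # can the php-fpm/nginx user traverse and read it?
    su -s /bin/sh www-data -c "ls -ld /mnt/webfiles"
    su -s /bin/sh www-data -c "ls /mnt/webfiles/nTv5-2.0/app/webroot/index.php"
    # restart php-fpm only once the mount is confirmed, then retry the page

    If the mount ordering turns out to be the culprit, marking the entry with the _netdev option in /etc/fstab, or delaying the php-fpm start until network filesystems are mounted, is the usual direction to look.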

    Read the article

  • Categorized Document Management System

    - by cmptrgeekken
    At the company I work for, we have an intranet that provides employees with access to a wide variety of documents. These documents fall into several categories and subcategories, and each category has its own web page. Below is one such page (each of the links shown leads to a similar view for that category). We currently store each document as a file on the web server and hand-code links to these documents whenever we need to add a new one. This is tedious and error-prone, and it also means we lack any sort of security for accessing these documents. I began looking into document management systems (like KnowledgeTree and OpenKM); however, none of them seems to provide a categorized view like the one in the preview above. My question is: does anyone know of a document management system that allows the kind of flexibility we currently get from hand-coding links to our documents into various web pages (major and minor categories), while also providing security, ease of use, and (less important) version control? Or do you think I'd be better off developing such a system from scratch?

    Read the article

  • How to set java_home on Windows 7?

    - by Derek
    I went to Environment Variables under 'System' in the Control Panel and made two new variables, one in user variables and one in system variables, both named JAVA_HOME and both pointing to C:\Sun\SDK\jdk\bin. For some reason, though, I still get the error below when running a Java command:
    BUILD FAILED
    C:\Users\Derek\Desktop\eclipse\eclipse\glassfish\setup.xml:161: The following error occurred while executing this line:
    C:\Users\Derek\Desktop\eclipse\eclipse\glassfish\setup.xml:141: The following error occurred while executing this line:
    C:\Users\Derek\Desktop\eclipse\eclipse\glassfish\setup.xml:137: Please set java.home to a JDK installation
    Total time: 1 second
    C:\Users\Derek\Desktop\eclipse\eclipse\glassfish>lib\ant\bin\ant -f setup.xml
    Unable to locate tools.jar. Expected to find it in C:\Program Files\Java\jre6\lib\tools.jar
    Buildfile: setup.xml
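    For what it's worth (a hedged aside, assuming the JDK really is installed under C:\Sun\SDK\jdk): JAVA_HOME is conventionally set to the JDK root rather than its bin subfolder, and the tools.jar the build complains about ships in the JDK's lib directory, not with the standalone JRE shown in the error. A quick test from a fresh Command Prompt might look like the lines below; the Environment Variables dialog remains the place to make the change permanent, just without the trailing \bin.

    rem point JAVA_HOME at the JDK root (illustrative path taken from the question)
    set JAVA_HOME=C:\Sun\SDK\jdk
    rem make sure the JDK's java.exe is found before any JRE copy on the PATH
    set PATH=%JAVA_HOME%\bin;%PATH%
    rem sanity checks
    echo %JAVA_HOME%
    dir "%JAVA_HOME%\lib\tools.jar"
    lib\ant\bin\ant -f setup.xml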

    Read the article

  • Visual Studio - Project with that name already opened in the solution

    - by Mike K.
    I recently migrated a VSS database to TFS 2008. Using Source Control Explorer, I got the latest version of a solution with 12 projects. When I opened the solution in VS 2005, two of the projects were not found. I'm not sure why, but I thought it easiest to just delete and re-add them to the solution. When I do this, VS gives me the error "A project with that name is already open in the solution." The project doesn't appear in Solution Explorer and is not listed in the .sln file. Any ideas?

    Read the article

  • How to dock CPaneDialog to MainFrm and.. ?

    - by JongAm Park
    Hello, I have a problem with CPaneDialog. I tested with the SetPaneSize sample project from the MFC feature pack. What is weird is that a CPaneDialog can't be docked to the MainFrm while a CDockablePane can be, even though CPaneDialog is itself a child class of CDockablePane. Only DockToWindow( &other_CPaneDialog_instance... ) works; if I call DockToPane(), the content of the CPaneDialog is not drawn or refreshed correctly. How can a CPaneDialog be docked to the MainFrm window? Another problem concerns drawing. If I remove the code for the tree control in the SetPaneSize sample, the content of view1 (an instance of CDockablePane) is not redrawn properly. After some experimenting, I concluded that something needs to be done in its OnSize and OnPaint methods (OnSize being the more critical one). Is this expected behaviour?

    Read the article

  • How Can I Bypass the X-Frame-Options: SAMEORIGIN HTTP Header?

    - by Daniel Coffman
    I am developing a web page that needs to display, in an iframe, a report served by another company's SharePoint server. They are fine with this. The page we're trying to render in the iframe is sent with an X-Frame-Options: SAMEORIGIN header, which causes the browser (at least IE8) to refuse to render the content in a frame. First, is this something they can control, or is it something SharePoint just does by default? If I ask them to turn it off, could they even do it? Second, can I do something to tell the browser to ignore this HTTP header and just render the frame?

    Read the article

  • Using telerik radGrid - how to set the Date format for autogenerated column in edit mode

    - by Mark Breen
    Hello All, I am using VS2008 and Telerik RadGrid version 2010.1.519.35. I have about 50 DNN modules using the Telerik RadGrid and I need to display my dates in dd/mm/yy format. This is easy to do in view mode, but when I switch to edit mode it is more of a struggle. I can write a snippet of code to reformat the displayed date values to dd/mm/yy, but for inserts the user must still enter mm/dd/yy. In other words, I need to change the culture of the form to en-GB. In my DotNetNuke app I have made a change to the web.config, but it still assumes the en-US format. I am not sure whether I need to set this at the web.config level, the page level, or on the column within the control. I have been struggling with this for a month or more and any help would be appreciated, thanks Mark Breen Ireland BMW R80GS 1987
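    A hedged sketch of the web.config route, assuming the aim is simply to force en-GB for every request in the portal (DotNetNuke and the RadGrid may still apply their own formatting on top of this, so treat it as a starting point rather than a complete answer):

    <!-- web.config: make ASP.NET parse and display dates with the en-GB culture -->
    <system.web>
      <globalization culture="en-GB" uiCulture="en-GB" />
    </system.web>

    The same pair of settings can be applied per page with the Culture and UICulture attributes of the @ Page directive if a site-wide change is too broad.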

    Read the article

  • Quick guide to Oracle IRM 11g: Configuring SSL

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index So far in this guide we have an IRM Server up and running, however I skipped over SSL configuration in the previous article because I wanted to focus in more detail now. You can, if you wish, not bother with setting up SSL, but considering this is a security technology it is worthwhile doing. Contents Setting up a one way, self signed SSL certificate in WebLogic Setting up an official SSL certificate in Apache 2.x Configuring Apache to proxy traffic to the IRM server There are two common scenarios in which an Oracle IRM server is configured. For a development or evaluation system, people usually communicate directly to the WebLogic Server running the IRM service. However in a production environment and for some proof of concept evaluations that require a setup reflecting a production system, the traffic to the IRM server travels via a web server proxy, commonly Apache. In this guide we are building an Oracle Enterprise Linux based IRM service and this article will go over the configuration of SSL in WebLogic and also in Apache. Like in the past articles, we are going to use two host names in the configuration below,irm.company.com will refer to the public Apache server irm.company.internal will refer to the internal WebLogic IRM server Setting up a one way, self signed SSL certificate in WebLogic First lets look at creating just a simple self signed SSL certificate to be used in WebLogic. This is a quick and easy way to get SSL working in your environment, however the downside is that no browsers are going to trust this certificate you create and you'll need to manually install the certificate onto any machine's communicating with the server. This is fine for development or when you have only a few users evaluating the system, but for any significant use it's usually better to have a fully trusted certificate in use and I explain that in the next section. But for now lets go through creating, installing and testing a self signed certificate. We use a library in Java to create the certificates, open a console and running the following commands. Note you should choose your own secure passwords whenever you see password below. [oracle@irm /] source /oracle/middleware/wlserver_10.3/server/bin/setWLSEnv.sh [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/ [oracle@irm /] java utils.CertGen -selfsigned -certfile MyOwnSelfCA.cer -keyfile MyOwnSelfKey.key -keyfilepass password -cn "irm.oracle.demo" [oracle@irm /] java utils.ImportPrivateKey -keystore MyOwnIdentityStore.jks -storepass password -keypass password -alias trustself -certfile MyOwnSelfCA.cer.pem -keyfile MyOwnSelfKey.key.pem -keyfilepass password [oracle@irm /] keytool -import -trustcacerts -alias trustself -keystore TrustMyOwnSelf.jks -file MyOwnSelfCA.cer.der -keyalg RSA We now have two Java Key Stores, MyOwnIdentityStore.jks and TrustMyOwnSelf.jks. These contain keys and certificates which we will use in WebLogic Server. Now we need to tell the IRM server to use these stores when setting up SSL connections for incoming requests. Make sure the Admin server is running and login into the WebLogic Console at http://irm.company.intranet:7001/console and do the following; In the menu on the left, select the + next to Environment to expose the submenu, then click on Servers. You will see two servers in the list, AdminServer(admin) and IRM_server1. 
If the IRM server is running, shut it down either by hitting CONTROL + C in the console window it was started from, or you can switch to the CONTROL tab, select IRM_server1 and then select the Shutdown menu and then Force Shutdown Now. In the Configuration tab select IRM_server1 and switch to the Keystores tab. By default WebLogic Server uses it's own demo identity and trust. We are now going to switch to the self signed one's we've just created. So select the Change button and switch to Custom Identity and Custom Trust and hit save. Now we have to complete the resulting fields, the setting's i've used in my evaluation server are below. IdentityCustom Identity Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/MyOwnIdentityStore.jks Custom Identity Keystore Type: JKS Custom Identity Keystore Passphrase: password Confirm Custom Identity Keystore Passphrase: password TrustCustom Trust Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/TrustMyOwnSelf.jks Custom Trust Keystore Type: JKS Custom Trust Keystore Passphrase: password Confirm Custom Trust Keystore Passphrase: password Now click on the SSL tab for the IRM_server1 and enter in the alias and passphrase, in my demo here the details are; IdentityPrivate Key Alias: trustself Private Key Passphrase: password Confirm Private Key Passphrase: password And hit save. Now lets test a connection to the IRM server over HTTPS using SSL. Go back to a console window and start the IRM server, a quick reminder on how to do this is... [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/bin [oracle@irm /] ./startManagedWeblogic IRM_server1 Once running, open a browser and head to the SSL port of the server. By default the IRM server will be listening on the URL https://irm.company.intranet:16101/irm_rights. Note in the example image on the right the port is 7002 because it's a system that has the IRM services installed on the Admin server, this isn't typical (or advisable). Your system is going to have a separate managed server which will be listening on port 16101. Once you open this address you will notice that your browser is going to complain that the server certificate is untrusted. The images on the right show how Firefox displays this error. You are going to be prompted every time you create a new SSL session with the server, both from the browser and more annoyingly from the IRM Desktop. If you plan on always using a self signed certificate, it is worth adding it to the Windows certificate store so that when you are accessing sealed content you do not keep being informed this certificate is not trusted. Follow these instructions (which are for Internet Explorer 8, they may vary for your version of IE.) Start Internet Explorer and open the URL to your IRM server over SSL, e.g. https://irm.company.intranet:16101/irm_rights. IE will complain that about the certificate, click on Continue to this website (not recommended). From the IE Tools menu select Internet Options and from the resulting dialog select Security and then click on Trusted Sites and then the Sites button. Add to the list of trusted sites a URL which mates the server you are accessing, e.g. https://irm.company.intranet/ and select OK. Now refresh the page you were accessing and next to the URL you should see a red cross and the words Certificate Error. Click on this button and select View Certificates. You will now see a dialog with the details of the self signed certificate and the Install Certificate... 
button should be enabled. Click on this to start the wizard. Click next and you'll be asked where you should install the certificate. Change the option to Place all certificates in the following store. Select browse and choose the Trusted Root Certification Authorities location and hit OK. You'll then be prompted to install the certificate and answer yes. You also need to import the root signed certificate into the same location, so once again select the red Certificate Error option and this time when viewing the certificate, switch to the Certification Path tab and you should see a CertGenCAB certificate. Select this and then click on View Certificate and go through the same process as above to import the certificate into the store. Finally close all instances of the IE browser and re-access the IRM server URL again, this time you should not receive any errors. Setting up an official SSL certificate in Apache 2.x At this point we now have an IRM server that you can communicate with over SSL. However this certificate isn't trusted by any browser because it's path of trust doesn't end in a recognized certificate authority (CA). Also you are communicating directly to the WebLogic Server over a non standard SSL port, 16101. In a production environment it is common to have another device handle the initial public internet traffic and then proxy this to the WebLogic server. The diagram below shows a very simplified view of this type of deployment. What i'm going to walk through next is configuring Apache to proxy traffic to a WebLogic server and also to use a real SSL certificate from an official CA. First step is to configure Apache to handle incoming requests over SSL. In this guide I am configuring the IRM service in Oracle Enterprise Linux 5 update 3 and Apache 2.2.3 which came with OpenSSL and mod_ssl components. Before I purchase an SSL certificate, I need to generate a certificate request from the server. Oracle.com uses Verisign and for my own personal needs I use cheaper certificates from GoDaddy. The following instructions are specific to Apache, but there are many references out there for other web servers. For Apache I have OpenSSL and the commands are; [oracle@irm /] cd /usr/bin [oracle@irm bin] openssl genrsa -des3 -out irm-apache-server.key 2048 Generating RSA private key, 2048 bit long modulus ............................+++ .........+++ e is 65537 (0x10001) Enter pass phrase for irm-apache-server.key: Verifying - Enter pass phrase for irm-apache-server.key: [oracle@irm bin] openssl req -new -key irm-apache-server.key -out irm-apache-server.csr Enter pass phrase for irm-apache-server.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
----- Country Name (2 letter code) [GB]:US State or Province Name (full name) [Berkshire]:CA Locality Name (eg, city) [Newbury]:San Francisco Organization Name (eg, company) [My Company Ltd]:Oracle Organizational Unit Name (eg, section) []:Security Common Name (eg, your name or your server's hostname) []:irm.company.com Email Address []:[email protected] Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []:testing An optional company name []: You must make sure to remember the pass phrase you used in the initial key generation, you will need this when later configuring Apache. In the /usr/bin directory there are now two new files. The irm-apache-server.csr contains our certificate request and is what you cut and paste, or upload, to your certificate authority when you purchase and validate your SSL certificate. In response you will typically get two files. Your server certificate and another certificate file that will likely contain a set of certificates from your CA which validate your certificate's trust. Next we need to configure Apache to use these files. Typically there is an ssl.conf file which is where all the SSL configuration is done. On my Oracle Enterprise Linux server this file is located in /etc/httpd/conf.d/ssl.conf and i've added the following lines. <VirtualHost irm.company.com> # Setup SSL for irm.company.com ServerName irm.company.com SSLEngine On SSLCertificateFile /oracle/secure/irm.company.com.crt SSLCertificateKeyFile /oracle/secure/irm.company.com.key SSLCertificateChainFile /oracle/secure/gd_bundle.crt </VirtualHost> Restarting Apache (apachectl restart) and I can now attempt to connect to the Apache server in a web browser, https://irm.company.com/. If all is configured correctly I should now see an Apache test page delivered to me over HTTPS. Configuring Apache to proxy traffic to the IRM server Final piece in setting up SSL is to have Apache proxy requests for the IRM server but do so securely. So the requests to Apache will be over HTTPS using a legitimate certificate, but we can also configure Apache to proxy these requests internally across to the IRM server using SSL with the self signed certificate we generated at the start of this article. To do this proxying we use the WebLogic Web Server plugin for Apache which you can download here from Oracle. Download the zip file and extract onto the server. The file extraction reveals a set of zip files, each one specific to a supported web server. In my instance I am using Apache 2.2 32bit on an Oracle Enterprise Linux, 64 bit server. If you are not sure what version your Apache server is, run the command /usr/sbin/httpd -V and you'll see version and it its 32 or 64 bit. Mine is a 32bit server so I need to extract the file WLSPlugin1.1-Apache2.2-linux32-x86.zip. The from the resulting lib folder copy the file mod_wl.so into /usr/lib/httpd/modules/. First we want to test that the plug in will work for regular HTTP traffic. Edit the httpd.conf for Apache and add the following section at the bottom. 
First we want to test that the plug-in will work for regular HTTP traffic. Edit the httpd.conf for Apache and add the following section at the bottom.

LoadModule weblogic_module modules/mod_wl.so

<IfModule mod_weblogic.c>
   WebLogicHost irm.company.internal
   WebLogicPort 16100
   WLLogFile /tmp/wl-proxy.log
</IfModule>

<Location /irm_rights>
   SetHandler weblogic-handler
</Location>

<Location /irm_desktop>
   SetHandler weblogic-handler
</Location>

<Location /irm_sealing>
   SetHandler weblogic-handler
</Location>

<Location /irm_services>
   SetHandler weblogic-handler
</Location>

Now restart Apache again (apachectl restart) and open a browser to http://irm.company.com/irm_rights. Apache will proxy the HTTP traffic from port 80 of your Apache server to the IRM service listening on port 16100 of the WebLogic managed server.

Note that I have included all four of the Locations you might wish to proxy. http://irm.company.internal/irm_rights is the URL to the management website, /irm_desktop is the URL the IRM Desktop uses to communicate, /irm_sealing is for web services based document sealing and /irm_services is for IRM server web services. The last two are typically only used when you have the IRM server integrated with another application, and it is unlikely you'd be accessing these resources from the public facing Apache server. However, just in case, I've mentioned them above.

Now let's enable SSL communication from Apache to WebLogic. The ZIP file we extracted contains some more modules we need to copy into the Apache folder. Looking back in the lib folder we extracted, copy the following files into /usr/lib/httpd/modules/.

libwlssl.so
libnnz11.so
libclntsh.so.11.1

The documentation states that this should be all you need to do, but I found that I also had to create an environment variable called LD_LIBRARY_PATH and point it to the folder /usr/lib/httpd/modules/. If I didn't do this, starting Apache with the WebLogic module configured for SSL would throw the error:

[crit] (20014)Internal error: WL SSL Init failed for server: (null) on 0

So I had to edit the file /etc/profile and add the following lines at the bottom. You may already have the LD_LIBRARY_PATH variable defined; if so, simply add this path to it.

LD_LIBRARY_PATH=/usr/lib/httpd/modules/
export LD_LIBRARY_PATH

The WebLogic plug-in uses an Oracle Wallet to store the required certificates, so you'll need to copy the self signed certificate from the IRM server over to the Apache server. Copy the MyOwnSelfCA.cer.der file into the same folder where you are storing your public certificates, in my example /oracle/secure. It's worth mentioning that these files should ONLY be readable by root (the user Apache runs as). Now let's create an Oracle Wallet and import the self signed certificate from the IRM server. The orapki tool was included in the bin folder of the Apache 1.1 plugin zip you extracted.

orapki wallet create -wallet /oracle/secure/my-wallet -auto_login_only
orapki wallet add -wallet /oracle/secure/my-wallet -trusted_cert -cert MyOwnSelfCA.cer.der -auto_login_only

Finally, change the httpd.conf to reflect that we want the WebLogic Apache plug-in to use HTTPS/SSL and not just plain HTTP.

<IfModule mod_weblogic.c>
   WebLogicHost irm.company.internal
   WebLogicPort 16101
   SecureProxy ON
   WLSSLWallet /oracle/secure/my-wallet
   WLLogFile /tmp/wl-proxy.log
</IfModule>

Then restart Apache once more and go back to the browser to test the communication. Opening the URL https://irm.company.com/irm_rights will proxy your request to the WebLogic server at https://irm.company.internal:16101/irm_rights.
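As a final sanity check from the command line, the two commands below exercise the whole chain from the outside. They are my addition and assume the public host name used throughout this article:

[oracle@irm /] curl -I https://irm.company.com/irm_rights
[oracle@irm /] openssl s_client -connect irm.company.com:443 < /dev/null

The curl call should come back with an HTTP response and no certificate complaint, and the s_client output should show a certificate chain ending in your purchased CA certificate rather than the self signed CertGenCAB one.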
At this point you have a fully functional Oracle IRM service; the next step is to create a sealed document and test the entire system.

    Read the article

  • ScriptResource.axd Access is denied. Cross-Domain iFrame

    - by EtienneT
    We have a web page containing an iframe, and the framed page shares an authentication cookie with its parent page. For example, the iframe page is on the domain foo.domain.com and the page containing the iframe is on foo2.domain.com; both share a cookie from domain.com. Authentication works great, but with ASP.NET in IE7 we always get a javascript error from ScriptResource.axd: Access is denied. We are using ASP.NET 3.5 and also the Ajax Control Toolkit (latest version, 3.0.30930.0). The problem doesn't occur in IE8, and there is no problem in Firefox or Chrome either. Has anyone encountered this problem before?
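    The workaround most often suggested for IE's cross-subdomain "Access is denied" script errors is to relax both pages to the common parent domain with document.domain. The snippet below is only a sketch of that idea, not something taken from the post above; the host names are the example ones from the question:

    <script type="text/javascript">
        // Run this on BOTH pages, the outer page on foo2.domain.com and the
        // framed page on foo.domain.com, before any cross-frame script access.
        // IE then treats both documents as belonging to domain.com.
        document.domain = "domain.com";
    </script>

    Whether this also silences the ScriptResource.axd error depends on how the Ajax Control Toolkit scripts are loaded, so treat it as a starting point rather than a confirmed fix.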

    Read the article

  • The project file has been moved, renamed or is not on your computer

    - by sineas
    I get this error when I try to load a VS 2008 project from TFS (source control): "The project file has been moved, renamed or is not on your computer." After I click OK the project says "unavailable". What is the problem and how do I resolve it? I never had this problem before. Some blogs said to delete the .suo file, but I can't locate the .suo file. I deleted the entire project on my local computer so that the next time it opens it will create a new one, but I still get the same error. Any ideas?
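    One practical detail not mentioned in the post: the .suo file sits next to the .sln file and is marked hidden, which is usually why it can't be found in Explorer. A quick way to locate it from a command prompt, sketched here with a hypothetical workspace path:

    REM The path below is hypothetical; replace it with your local TFS workspace folder.
    cd /d C:\TFS\MySolution
    dir /a:h /s *.suo

    Deleting the listed .suo (with Visual Studio closed) forces VS to rebuild it. If the error persists after that, it usually means the .sln still references the project at a path that no longer exists locally.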

    Read the article

  • DotNetOpenAuth OpenID Provider "Sequence contains more than one element"

    - by Matthew Johnson
    Hello, all, I'm having trouble implementing my OpenID provider with DNOA 3.4.3. Everything was going absolutely peachy until I needed AX support as well. I set AXFetchAsSregTransform in the web config, as recommended by Andrew at http://groups.google.com/group/dotnetopenid/browse_thread/thread/5629a24c0a7e8d99. Doing this caused me to get the exception "Sequence Contains More Than One Element" on my decide.aspx page, however, and I haven't been able to get past it. The following line is throwing the exception: Edit: Strangely enough, this is not the line throwing the error anymore. The SendResponse() is now triggering the exception ClaimsRequest requestedFields = ProviderEndpoint.PendingRequest.GetExtension(); ProviderEndpoint.SendResponse() Any thoughts on why this may be? Any help would be greatly appreciated! The logs leading up to the error are as follows: 2010-04-28 12:38:20,247 (GMT-7) [5] INFO DotNetOpenAuth.Messaging.Channel - Scanning incoming request for messages: https://myprovider/provider.ashx?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.mode=checkid_setup&openid.ns.ext1=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ext1.mode=fetch_request&openid.ext1.type.email=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ext1.type.fullname=http%3A%2F%2Faxschema.org%2FnamePerson&openid.ext1.type.language=http%3A%2F%2Faxschema.org%2Fpref%2Flanguage&openid.ext1.required=email&openid.return_to=http%3A%2F%2Fmyrelyingparty%2Flogin.jsp%3Foidreturn%3D%252Fhome&openid.assoc_handle=%7B634080802953194640%7D%7BHxjFNw==%7D%7B20%7D&openid.realm=http%3A%2F%2Fmyrelyingparty 2010-04-28 12:38:20,285 (GMT-7) [5] INFO DotNetOpenAuth.Messaging.Channel - Processing incoming CheckIdRequest (2.0) message: openid.claimed_id: http://specs.openid.net/auth/2.0/identifier_select openid.identity: http://specs.openid.net/auth/2.0/identifier_select openid.assoc_handle: {634080802953194640}{HxjFNw==}{20} openid.return_to: http://myrelyingparty/login.jsp?oidreturn=%2Fhome openid.realm: http://myrelyingparty/ openid.mode: checkid_setup openid.ns: http://specs.openid.net/auth/2.0 openid.ns.ext1: http://openid.net/srv/ax/1.0 openid.ext1.mode: fetch_request openid.ext1.type.email: http://axschema.org/contact/email openid.ext1.type.fullname: http://axschema.org/namePerson openid.ext1.type.language: http://axschema.org/pref/language openid.ext1.required: email 2010-04-28 12:38:22,773 (GMT-7) [14] INFO DotNetOpenAuth.Messaging.Channel - Scanning incoming request for messages: https://myprovider/login.aspx?ReturnUrl=%2fdecide.aspx 2010-04-28 12:38:36,167 (GMT-7) [5] INFO DotNetOpenAuth.Messaging.Channel - Scanning incoming request for messages: https://myprovider/login.aspx?ReturnUrl=%2fdecide.aspx 2010-04-28 12:38:38,147 (GMT-7) [14] ERROR DotNetOpenAuth.Messaging - Protocol error: An HTTP request to the realm URL (http://myrelyingparty/) resulted in a redirect, which is not allowed during relying party discovery. 
at DotNetOpenAuth.Messaging.ErrorUtilities.VerifyProtocol(Boolean condition, String message, Object[] args) at DotNetOpenAuth.OpenId.Realm.Discover(IDirectWebRequestHandler requestHandler, Boolean allowRedirects) at DotNetOpenAuth.OpenId.Realm.DiscoverReturnToEndpoints(IDirectWebRequestHandler requestHandler, Boolean allowRedirects) at DotNetOpenAuth.OpenId.Provider.HostProcessedRequest.IsReturnUrlDiscoverableCore(OpenIdProvider provider) at DotNetOpenAuth.OpenId.Provider.HostProcessedRequest.IsReturnUrlDiscoverable(OpenIdProvider provider) at OpenIdProviderWebForms.decide.Page_Load(Object src, EventArgs e) at System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) at System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) at System.Web.UI.Control.OnLoad(EventArgs e) at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequest(HttpContext context) at ASP.decide_aspx.ProcessRequest(HttpContext context) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) at System.Web.HttpApplication.PipelineStepManager.ResumeSteps(Exception error) at System.Web.HttpApplication.BeginProcessRequestNotification(HttpContext context, AsyncCallback cb) at System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) 2010-04-28 12:38:38,149 (GMT-7) [14] INFO DotNetOpenAuth.Yadis - Relying party discovery at URL http://myrelyingparty/ failed. DotNetOpenAuth.Messaging.ProtocolException: An HTTP request to the realm URL (http://myrelyingparty/) resulted in a redirect, which is not allowed during relying party discovery. 
at DotNetOpenAuth.Messaging.ErrorUtilities.VerifyProtocol(Boolean condition, String message, Object[] args) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\Messaging\ErrorUtilities.cs:line 235 at DotNetOpenAuth.OpenId.Realm.Discover(IDirectWebRequestHandler requestHandler, Boolean allowRedirects) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Realm.cs:line 446 at DotNetOpenAuth.OpenId.Realm.DiscoverReturnToEndpoints(IDirectWebRequestHandler requestHandler, Boolean allowRedirects) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Realm.cs:line 424 at DotNetOpenAuth.OpenId.Provider.HostProcessedRequest.IsReturnUrlDiscoverableCore(OpenIdProvider provider) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Provider\HostProcessedRequest.cs:line 142 2010-04-28 12:38:42,076 (GMT-7) [8] ERROR OpenIdProviderWebForms.Global - An unhandled exception was raised. Details follow: System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- System.InvalidOperationException: Sequence contains more than one element at System.Linq.Enumerable.SingleOrDefault[TSource](IEnumerable`1 source) at DotNetOpenAuth.OpenId.Provider.Request.GetExtension[T]() in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Provider\Request.cs:line 176 at DotNetOpenAuth.OpenId.Extensions.ExtensionsInteropHelper.ConvertSregToMatchRequest(IHostProcessedRequest request) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Extensions\ExtensionsInteropHelper.cs:line 180 at DotNetOpenAuth.OpenId.Behaviors.AXFetchAsSregTransform.DotNetOpenAuth.OpenId.Provider.IProviderBehavior.OnOutgoingResponse(IAuthenticationRequest request) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Behaviors\AXFetchAsSregTransform.cs:line 139 at DotNetOpenAuth.OpenId.Provider.OpenIdProvider.ApplyBehaviorsToResponse(IRequest request) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Provider\OpenIdProvider.cs:line 482 at DotNetOpenAuth.OpenId.Provider.OpenIdProvider.SendResponse(IRequest request) in c:\TeamCity\buildAgent\work\bf9e2ca68b75a334\src\DotNetOpenAuth\OpenId\Provider\OpenIdProvider.cs:line 325 at OpenIdProviderWebForms.decide.Yes_Click(Object sender, EventArgs e) in C:\Projects\OpenIdProviderWebForms\decide.aspx.cs:line 130 at System.Web.UI.WebControls.Button.OnClick(EventArgs e) at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) --- End of inner exception stack trace --- at System.Web.UI.Page.HandleError(Exception e) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequest(HttpContext context) at ASP.decide_aspx.ProcessRequest(HttpContext context) in c:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files\root\7f580b93\b3e4d917\App_Web_tulh9ymv.1.cs:line 0 at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, 
Boolean& completedSynchronously)
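    Coming back to the AXFetchAsSregTransform setting mentioned at the top of the question: that behavior is normally switched on in web.config rather than in code. The snippet below is only a sketch of how that registration looks in the DotNetOpenAuth 3.4.x sample projects; the exact section layout is an assumption and should be checked against your own sample web.config:

    <dotNetOpenAuth>
      <openid>
        <provider>
          <behaviors>
            <!-- Assumed section layout; the type name comes from the stack trace above. -->
            <add type="DotNetOpenAuth.OpenId.Behaviors.AXFetchAsSregTransform, DotNetOpenAuth" />
          </behaviors>
        </provider>
      </openid>
    </dotNetOpenAuth>

    Since "Sequence contains more than one element" is thrown by GetExtension calling SingleOrDefault, the request must have ended up carrying more than one extension of the same type, so it is worth checking that the transform behavior is registered in exactly one place (config or code, not both).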

    Read the article

  • WPF: Master-detail view with two datagrids in MVVM

    - by EV
    Hi, I'm trying to write a master-detail control that consists of a master datagrid and a detail datagrid. My scenario is the following: I bound the SelectedItem of the master grid to a property on the ViewModel. The problem is that the property in the ViewModel is never hit, so I can't tell which item is selected in the master datagrid and cannot fetch the data for that selection. The code is below:

    <toolkit:DataGrid ItemsSource="{Binding}"
                      RowDetailsVisibilityMode="VisibleWhenSelected"
                      SelectedItem="{Binding SelectedItemHandler, Mode=TwoWay}">
    </toolkit:DataGrid>

    And in the ViewModel:

    private CustomerObjects _selectedItem;
    public CustomerObjects SelectedItemHandler
    {
        get { return _selectedItem; }
        set { OnPropertyChanged("SelectedItem"); }
    }

    The code in SelectedItemHandler is never used. What could be the problem? Should I use another approach to create a master-detail view in MVVM?
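    For comparison, here is a minimal sketch of what a bindable selection property usually looks like. CustomerObjects and OnPropertyChanged are taken from the post above; LoadDetailsFor is only a hypothetical placeholder for whatever fetches the detail rows:

    private CustomerObjects _selectedItem;

    public CustomerObjects SelectedItemHandler
    {
        get { return _selectedItem; }
        set
        {
            _selectedItem = value;                      // keep the value the binding pushes in
            OnPropertyChanged("SelectedItemHandler");   // the notification name must match the property
            LoadDetailsFor(_selectedItem);              // hypothetical call that loads the detail grid's data
        }
    }

    Even when the binding does fire, the original setter discards the incoming value and raises the notification for "SelectedItem" rather than "SelectedItemHandler". If the setter is genuinely never hit at all, the DataContext of the master DataGrid is the first thing to check.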

    Read the article

< Previous Page | 753 754 755 756 757 758 759 760 761 762 763 764  | Next Page >