Search Results

Search found 6392 results on 256 pages for 'reduce duplicate'.


  • Resetting a PChar variable

    - by scott-thornton
    Hi, I don't know much about Delphi Win32 programming, but I hope someone can answer my question. I get duplicate l_sGetUniqueIdBuffer values saved into the database, which I want to avoid. The l_sGetUniqueIdBuffer is actually different between rows (the value of l_sAuthorisationContent is XML, and I can see a different value generated by the call to getUniqueId). The problem is intermittent (duplicates are rare), and there is only a few milliseconds' difference between the update dates of the affected rows. Given (unnecessary code cut out):

        l_sGetUniqueIdBuffer: PChar;
        FOutputBufferSize : integer;

        FOutputBufferSize := 1024;
        while( not dmAccomClaim.ADOQuClaimIdentification.Eof ) do
        begin
          // Get a unique id for the request
          l_sGetUniqueIdBuffer := AllocMem (FOutputBufferSize);
          l_returnCode := getUniqueId (m_APISessionId^, l_sGetUniqueIdBuffer, FOutputBufferSize);

          dmAccomClaim.ADOQuAddContent.Active := False;
          dmAccomClaim.ADOQuAddContent.Parameters.ParamByName('pContent').Value := (WideString(l_sAuthorisationContent));
          dmAccomClaim.ADOQuAddContent.Parameters.ParamByName('pClaimId').Value := dmAccomClaim.ADOQuClaimIdentification.FieldByName('SB_CLAIM_ID').AsString;
          dmAccomClaim.ADOQuAddContent.Parameters.ParamByName('pUniqueId').Value := string(l_sGetUniqueIdBuffer);
          dmAccomClaim.ADOQuAddContent.ExecSQL;

          FreeMem( l_sAuthorisationContent, l_iAuthoriseContentSize );
          FreeMem( l_sGetUniqueIdBuffer, FOutputBufferSize );
        end;

    I guess I need to know: is the value in l_sGetUniqueIdBuffer being reset for every row?

    Read the article

  • Best practice - When to evaluate conditionals of function execution

    - by Tesserex
    If I have a function called from a few places, and it requires some condition to be met for anything it does to execute, where should that condition be checked? In my case, it's for drawing - if the mouse button is held down, then execute the drawing logic (this is being done in the mouse movement handler for when you drag). Option one says put it in the function so that it's guaranteed to be checked. Abstracted, if you will.

        public function Foo() {
            DoThing();
        }

        private function DoThing() {
            if (!condition) return;
            // do stuff
        }

    The problem I have with this is that when reading the code of Foo, which may be far away from DoThing, it looks like a bug. The first thought is that the condition isn't being checked. Option two, then, is to check before calling.

        public function Foo() {
            if (condition) DoThing();
        }

    This reads better, but now you have to worry about checking from everywhere you call it. Option three is to rename the function to be more descriptive.

        public function Foo() {
            DoThingOnlyIfCondition();
        }

        private function DoThingOnlyIfCondition() {
            if (!condition) return;
            // do stuff
        }

    Is this the "correct" solution? Or is this going a bit too far? I feel like if everything were like this function names would start to duplicate their code. About this being subjective: of course it is, and there may not be a right answer, but I think it's still perfectly at home here. Getting advice from better programmers than I is the second best way to learn. Subjective questions are exactly the kind of thing Google can't answer.

    Read the article

  • How to organize live data integrity tests and code unit tests?

    - by karlthorwald
    I have several files of code that tests code (using a "unittest" class). Later I found it would be nice to also test database integrity (things like keys having the correct format, parent and child nodes pointing to each other correctly, and such), and I put this into a separate directory tree. I use the same unittest class for the integrity tests. Now I wonder if it really makes sense to keep this separate. To test the integrity of data I often duplicate parts of the code that I use to test the code that handles the data. But it is not the same: the code tests use test databases (that get deleted after each test), while the integrity tests connect to the live data and analyze it. I want to call the integrity tests from cron and send an alarm if something happens in the live database. How would you handle that? Are there standards for such a setup? What is your experience? My tendency is to put everything in the same file, which would result in the code tests also being executed by the cron on the production environment.

    Read the article

  • Horizontally align rows in multiple tables using web user control

    - by goku_da_master
    I need to align rows in different tables that are laid out horizontally. I'd prefer to put the HTML code in a single web user control so I can create as many instances of that control as I want and lay them out horizontally. The problem is, the text in the rows needs to wrap, so some rows may expand vertically and some may not (see the example below). When that happens, the rows in the other tables aren't aligned horizontally. I know I can accomplish all this by using a single table, but that would mean I'd have to duplicate the name, address and phone HTML code instead of dynamically creating new instances of my user control (in reality there are many more fields than this, but I'm keeping it simple). Is there any way to do this, whether with divs, tables or something else? Here's the problem: Mary Jane's address field expands to 2 lines, causing her phone field to not align properly with John's and Bob's.

        Name: John Doe          Name: Mary Jane                     Name: Bob Smith
        Address: 123 broadway   Address: Some really long address   Address: Short address
        Phone: 123-456          that takes up multiple lines
                                Phone: 111-2222                     Phone: 456-789

    I'm not restricted in any way how to do this (other than using ASP.NET), but I'd prefer to use a single web control that I instantiate X times at design time (in this example, it's 3 times). I'm using VS2008 and .NET 3.5.

    Read the article

  • CommandBuilder and SqlTransaction to insert/update a row

    - by Jesse
    I can get this to work, but I feel as though I'm not doing it properly. The first time this runs, it works as intended, and a new row is inserted where "thisField" contains "doesntExist". However, if I run it a subsequent time, I get a run-time error that I can't insert a duplicate key, as it violates the primary key "thisField".

        static void Main(string[] args)
        {
            using(var sqlConn = new SqlConnection(connString) )
            {
                sqlConn.Open();
                var dt = new DataTable();
                var sqlda = new SqlDataAdapter("SELECT * FROM table WHERE thisField ='doesntExist'", sqlConn);
                sqlda.Fill(dt);

                DataRow dr = dt.NewRow();
                dr["thisField"] = "doesntExist"; //Primary key
                dt.Rows.Add(dr);
                //dt.AcceptChanges(); //I thought this may fix the problem. It didn't.

                var sqlTrans = sqlConn.BeginTransaction();
                try
                {
                    sqlda.SelectCommand = new SqlCommand("SELECT * FROM table WITH (HOLDLOCK, ROWLOCK) WHERE thisField = 'doesntExist'", sqlConn, sqlTrans);
                    SqlCommandBuilder sqlCb = new SqlCommandBuilder(sqlda);
                    sqlda.InsertCommand = sqlCb.GetInsertCommand();
                    sqlda.InsertCommand.Transaction = sqlTrans;
                    sqlda.DeleteCommand = sqlCb.GetDeleteCommand();
                    sqlda.DeleteCommand.Transaction = sqlTrans;
                    sqlda.UpdateCommand = sqlCb.GetUpdateCommand();
                    sqlda.UpdateCommand.Transaction = sqlTrans;
                    sqlda.Update(dt);
                    sqlTrans.Commit();
                }
                catch (Exception)
                {
                    //...
                }
            }
        }

    Even when I can get that working through trial and error of moving AcceptChanges around, or encapsulating changes within Begin/EndEdit, I begin to experience a "Concurrency violation" in which it won't update the changes, but rather tells me it failed to update 0 of 1 affected rows. Is there something crazy obvious I'm missing?
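
    One detail worth noting in the snippet above: on the second run the Fill already brings back the existing row, yet a new DataRow with the same primary key is added regardless, so the command builder generates another INSERT. A minimal sketch of guarding against that (reusing the dt and sqlda from the question; "someOtherColumn" is an invented name, and the transaction wiring is left exactly as above):

        sqlda.Fill(dt);

        if (dt.Rows.Count == 0)
        {
            // Row not there yet: adding it makes the adapter generate an INSERT.
            DataRow dr = dt.NewRow();
            dr["thisField"] = "doesntExist";
            dt.Rows.Add(dr);
        }
        else
        {
            // Row already exists: modify it so the adapter generates an UPDATE instead.
            dt.Rows[0]["someOtherColumn"] = "new value";
        }

        sqlda.Update(dt);   // still executed inside the transaction, as in the original code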

    Read the article

  • Advantages/Disadvantages of different implementations for Comparing Objects using .NET

    - by Kevin Crowell
    This question involves 2 different implementations of essentially the same code. First, using delegate to create a Comparison method that can be used as a parameter when sorting a collection of objects:

        class Foo
        {
            public static Comparison<Foo> BarComparison = delegate(Foo foo1, Foo foo2)
            {
                return foo1.Bar.CompareTo(foo2.Bar);
            };
        }

    I use the above when I want to have a way of sorting a collection of Foo objects in a different way than my CompareTo function offers. For example:

        List<Foo> fooList = new List<Foo>();
        fooList.Sort(BarComparison);

    Second, using IComparer:

        public class BarComparer : IComparer<Foo>
        {
            public int Compare(Foo foo1, Foo foo2)
            {
                return foo1.Bar.CompareTo(foo2.Bar);
            }
        }

    I use the above when I want to do a binary search for a Foo object in a collection of Foo objects. For example:

        BarComparer comparer = new BarComparer();
        List<Foo> fooList = new List<Foo>();
        Foo foo = new Foo();
        int index = fooList.BinarySearch(foo, comparer);

    My questions are:

        - What are the advantages and disadvantages of each of these implementations?
        - What are some more ways to take advantage of each of these implementations?
        - Is there a way to combine these implementations in such a way that I do not need to duplicate the code?
        - Can I achieve both a binary search and an alternative collection sort using only 1 of these implementations?
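
    Worth noting when weighing the options above: List<T>.Sort also has an overload that takes an IComparer<T>, so the single BarComparer class from the question can drive both the sort and the binary search, and a Comparison<Foo> can be obtained from it without writing the comparison twice. A rough sketch (someFoo is a placeholder value):

        // Reuses the BarComparer class defined in the question.
        BarComparer comparer = new BarComparer();
        List<Foo> fooList = new List<Foo>();

        fooList.Sort(comparer);                                // Sort(IComparer<Foo>) overload
        int index = fooList.BinarySearch(someFoo, comparer);   // same comparer for searching

        // If an API really does want a Comparison<Foo>, wrap the comparer
        // rather than duplicating the logic:
        Comparison<Foo> barComparison = comparer.Compare;
        fooList.Sort(barComparison);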

    Read the article

  • RFC: Whitespace's Assembly Mnemonics

    - by Noctis Skytower
    Request For Comment regarding Whitespace's Assembly Mnemonics

    What follows is a first-generation attempt at creating mnemonics for a Whitespace assembly language.

        STACK
        =====
        push number
        copy
        copy number
        swap
        away
        away number

        MATH
        ====
        add
        sub
        mul
        div
        mod

        HEAP
        ====
        set
        get

        FLOW
        ====
        part label
        call label
        goto label
        zero label
        less label
        back
        exit

        I/O
        ===
        ochr
        oint
        ichr
        iint

    In the interest of making improvements to this small and simple instruction set, this is a second attempt.

        hold N       Push the number onto the stack
        copy         Duplicate the top item on the stack
        copy N       Copy the nth item on the stack (given by the argument) onto the top of the stack
        swap         Swap the top two items on the stack
        drop         Discard the top item on the stack
        drop N       Slide n items off the stack, keeping the top item
        add          Addition
        sub          Subtraction
        mul          Multiplication
        div          Integer Division
        mod          Modulo
        save         Store
        load         Retrieve
        L:           Mark a location in the program
        call L       Call a subroutine
        goto L       Jump unconditionally to a label
        if=0 L       Jump to a label if the top of the stack is zero
        if<0 L       Jump to a label if the top of the stack is negative
        return       End a subroutine and transfer control back to the caller
        exit         End the program
        print chr    Output the character at the top of the stack
        print int    Output the number at the top of the stack
        input chr    Read a character and place it in the location given by the top of the stack
        input int    Read a number and place it in the location given by the top of the stack

    What is the general consensus on the revised list above for Whitespace's assembly instructions? They definitely come from thinking outside of the box and trying to come up with a better mnemonic set than last time. When the previous Python interpreter was written, it was completed over two contiguous, rushed evenings. This rewrite deserves significantly more time now that it is the summer. Of course, the next version of Whitespace (0.4) may have its instructions revised even more, but this is just a redesign of what was originally done in a few hours. Hopefully, the instructions make more sense to those new to programming jargon.

    Read the article

  • SEO redirects for removed pages

    - by adam
    Hi, Apologies if SO is not the right place for this, but there are 700+ other SEO questions on here. I'm a senior developer for a travel site with 12k+ pages. We completely redeveloped the site and relaunched in January, and with the volatile nature of travel, there are many pages which are no longer on the site. Examples: /destinations/africa/senegal.aspx /destinations/africa/features.aspx Of course, we have a 404 page in place (and it's a hard 404 page rather than a 30x redirect to a 404). Our SEO advisor has asked us to 30x redirect all our 404 pages (as found in Webmaster Tools), his argument being that 404's are damaging to our pagerank. He'd want us to redirect our Senegal and features pages above to the Africa page (which doesn't contain the content previously found on Senegal.aspx or features.aspx). An equivalent for SO would be taking a url for a removed question and redirecting it to /questions rather than showing a 404 'Question/Page not found'. My argument is that, as these pages are no longer on the site, 404 is the correct status to return. I'd also argue that redirecting these to less relevant pages could damage our SEO (due to duplicate content perhaps)? It's also very time consuming redirecting all 404's when our site takes some content from our in-house system, which adds/removes content at will. Thanks for any advice, Adam

    Read the article

  • Retrieve property from classpath inside POM

    - by Jeroen
    For my current project I want to integrate a Maven plug-in for database migrations. For this plug-in to work, however, I have to obtain the database settings inside my POM. My database settings are currently placed inside a hibernate.properties file, positioned in a directory that is marked as a Maven resource. For a variety of reasons I do not want to duplicate my database configurations in both the POM and hibernate.properties. I'm aware that Maven offers a "filtering" ability which makes it possible to specify the database settings as a property inside my POM, and reference them inside my hibernate.properties as ${property_name}. But as I'm using multiple Maven profiles, with different property resources, this is not a suitable solution. Instead I'd like my database configuration to be loaded from a property file inside my classpath (e.g. classpath:hibernate.properties), and use these properties in my migration plug-in configuration. I have already tried the org.codehaus.mojo » properties-maven-plugin, but this plug-in only accepts absolute locations. Is there a plug-in which can scan all my Maven resources for a certain property?

    Read the article

  • DVCS with a Windows central repository

    - by Mikko Rantanen
    We are currently using VSS for version control. Quite a few of our developers are interested in a distributed model (and want to get rid of VSS). Our network is full of Windows machines, and while our IT department has experience maintaining Linux machines, they would prefer not to. What DVCS systems can host their central repository on Windows while providing:

        - Push access to the repository.
        - Basic authentication. Mostly just a way to allow or deny access to the whole repository; no need for fine-grained access.
        - A server process, so users don't need write rights to the repository, reducing the risk of accidentally messing with it.

    On the client side a GUI such as Tortoise would be more or less a requirement (sorry, Windows shell sucks :|). Ease of installation would be a huge plus, as our IT department is already quite low on resources. And using Windows credentials for authentication would be an advantage but not a requirement, as long as the client is able to store the credentials. I have had a (really) quick look at Git, Mercurial and Bazaar:

        - Git seemed to use ssh or simple WebDAV for repository access, requiring write permission for the users.
        - Mercurial had a built-in http server, but this seemed to be only for pull purposes. Update: Mercurial supports push as well.
        - Bazaar seemed to use sftp for repository access, again requiring write permission for the users.

    Are there Windows server processes for any DVCS systems, and has anyone managed to set one up in Windows land? And apologies if this is a duplicate question; I couldn't find one. Update: Got Mercurial working for push purposes! A detailed list of what was required can be found as an answer below.

    Read the article

  • Storing multiple inputs with the same name in a CodeIgniter session

    - by Joshua Cody
    I've posted this in the CodeIgniter forum and exhausted the forum search engine as well, so apologies if cross-posting is frowned upon. Essentially, I've got a single input, set up as <input type="text" name="goal">. At a user's request, they may add another goal, which throws a duplicate to the DOM. What I need to do is grab these values in my CodeIgniter controller and store them in a session variable. My controller is currently constructed thusly:

        function goalsAdd(){
            $meeting_title = $this->input->post('topic');
            $meeting_hours = $this->input->post('hours');
            $meeting_minutes = $this->input->post('minutes');
            $meeting_goals = $this->input->post('goal');

            $meeting_time = $meeting_hours . ":" . $meeting_minutes;

            $sessionData = array(
                'totaltime' => $meeting_time,
                'title' => $meeting_title,
                'goals' => $meeting_goals
            );

            $this->session->set_userdata($sessionData);

            $this->load->view('test', $sessionData);
        }

    Currently, obviously, my controller gets the value of each input, writing over previous values in its wake, leaving only a string of the final value. I need to store these, however, so I can print them on a subsequent page. What I imagine I'd love to do is extend the input class to be able to call $this->input->posts('goal'). Eventually I will need to store other arrays to session values. But I'm totally open to implementation suggestions. Thanks so much for any help you can give.

    Read the article

  • Remove "Flash" between pages while using Internet Explorer modal boxes

    - by AaronS
    I have an internal web application that is IE specific and uses a lot of IE-specific modal boxes (window.showModalDialog). I recently received a request to remove the "flash" when navigating between pages of the site. To accomplish this, I just added a meta transition tag to my master page:

        <meta http-equiv="Page-Enter" content="blendTrans(duration=0.0)" />

    This works perfectly except for the modal boxes. When you launch a modal box and then move it around, the web page behind it keeps a trail of the modal box instead of re-drawing the web page content. This prevents the user from moving the modal box to read anything that was behind it. Is there a way to prevent the "flash" between pages in an IE-specific site and have the site still work with modal boxes? Please note, this is a large and complex site, so re-architecting it to not use modal boxes isn't an option. This is an ASP.NET, C# web application, and all of my users are using IE 7 and IE 8, if it makes any difference.

    -Edit- To duplicate this, put the following into an HTML page and open it in Internet Explorer:

        <html>
        <head>
            <title>Test</title>
            <meta content="blendTrans(duration=0.0)" http-equiv="Page-Exit">
        </head>
        <body>
            <script language="javascript">
                window.showModalDialog('modal.htm', window);
            </script>
        </body>
        </html>

    Read the article

  • Replacing accented/umlauted characters with their unadorned counterparts in C# [closed]

    - by Andrew Rollings
    Duplicate of 249087

    I have a bunch of user generated addresses that may contain characters with diacritic marks. What is the most effective (i.e. generic) way (apart from a straightforward replace) to automatically convert any such characters to their closest English equivalent? E.g.:

        - any of àâãäå would become a
        - æ would become the two separate letters ae
        - ç would become c
        - any of èéêë would become e

    etc. for all possible letter variations (preferably without having to find and encode lookups for each diacritic form of the letter). (Note: I have to pass these addresses on to third party software that is incapable of printing anything other than English characters. I'd rather the software was capable of handling them, but I have no control over that.)

    EDIT: Never mind... Found the answer [here][2]. It showed up in the "Related" section to the right of the question after I posted, but not in my prior search or as a pre-post suggestion. Hmm. I added the 'diacritics' tag to the other question in any case.

    EDIT 2: Jeez! Who voted this -1 after I closed it?
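
    One common approach in .NET is to normalize the string to FormD and strip the combining marks, rather than hand-encoding lookups. This is only a sketch of that idea, not necessarily what the linked answer does; note that ligatures such as æ do not decompose this way, so they still need a small explicit mapping of their own:

        using System.Globalization;
        using System.Text;

        static string RemoveDiacritics(string input)
        {
            // Decompose characters like é into e followed by a combining accent mark...
            string decomposed = input.Normalize(NormalizationForm.FormD);
            var sb = new StringBuilder(decomposed.Length);
            foreach (char c in decomposed)
            {
                // ...then keep everything except the combining marks themselves.
                if (CharUnicodeInfo.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark)
                    sb.Append(c);
            }
            return sb.ToString().Normalize(NormalizationForm.FormC);
        }

        // RemoveDiacritics("àâãäå ç èéêë") yields "aaaaa c eeee";
        // "æ" passes through unchanged and would need its own replacement.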

    Read the article

  • How to insert records in master/detail relationship

    - by croceldon
    I have two tables:

        OutputPackages (master)
        |PackageID|

        OutputItems (detail)
        |ItemID|PackageID|

    OutputItems has an index called 'idxPackage' set on the PackageID column. ItemID is set to auto increment. Here's the code I'm using to insert masters/details into these tables:

        //fill packages table
        for i := 1 to 10 do
        begin
          Package := TfPackage(dlgSummary.fcPackageForms.Forms[i]);
          if Package.PackageLoaded then
          begin
            with tblOutputPackages do
            begin
              Insert;
              FieldByName('PackageID').AsInteger := Package.ourNum;
              FieldByName('Description').AsString := Package.Title;
              FieldByName('Total').AsCurrency := Package.Total;
              Post;
            end;

            //fill items table
            for ii := 1 to 10 do
            begin
              Item := TfPackagedItemEdit(Package.fc.Forms[ii]);
              if Item.Activated then
              begin
                with tblOutputItems do
                begin
                  Append;
                  FieldByName('PackageID').AsInteger := Package.ourNum;
                  FieldByName('Description').AsString := Item.Description;
                  FieldByName('Comment').AsString := Item.Comment;
                  FieldByName('Price').AsCurrency := Item.Price;
                  Post; //this causes the primary key exception
                end;
              end;
            end;
          end;
        end;

    This works fine as long as I don't mess with the MasterSource/MasterFields properties in the IDE. But once I set them and run this code, I get an error that says I've got a duplicate primary key 'ItemID'. I'm not sure what's going on - this is my first foray into master/detail, so something may be set up wrong. I'm using ComponentAce's Absolute Database for this project. How can I get this to insert properly?

    Update: OK, I removed the primary key constraint in my db, and I see that for some reason the autoincrement feature of the OutputItems table isn't working like I expected. Here's how the OutputItems table looks after running the above code:

        ItemID|PackageID|
        1     |1        |
        1     |1        |
        2     |2        |
        2     |2        |

    I still don't see why all the ItemID values aren't unique... Any ideas?

    Read the article

  • Using the Search API with Sharepoint Foundation 2010 - 0 results

    - by MB
    I am a SharePoint newbie and am having trouble getting any search results to return using the Search API in SharePoint 2010 Foundation. Here are the steps I have taken so far:

        - The service SharePoint Foundation Search v4 is running and logged in as Local Service.
        - Under Team Site - Site Settings - Search and Offline Availability, Indexing Site Content is enabled.
        - Running the PowerShell script Get-SPSearchServiceInstance returns:

        TypeName      : SharePoint Foundation Search
        Description   : Search index file on the search server
        Id            : 91e01ce1-016e-44e0-a938-035d37613b70
        Server        : SPServer Name=V-SP2010
        Service       : SPSearchService Name=SPSearch4
        IndexLocation : C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\Data\Applications
        ProxyType     : Default
        Status        : Online

    When I do a search using the search textbox on the team site I get results as I would expect. Now, when I try to duplicate the search results using the Search API, I either receive an error or 0 results. Here is some sample code:

        using Microsoft.SharePoint.Search.Query;

        using (var site = new SPSite(_sharepointUrl, token))
        {
            //
            FullTextSqlQuery fullTextSqlQuery = new FullTextSqlQuery(site)
            {
                QueryText = String.Format("SELECT Title, SiteName, Path FROM Scope() WHERE \"scope\"='All Sites' AND CONTAINS('\"{0}\"')", searchPhrase),
                //QueryText = String.Format("SELECT Title, SiteName, Path FROM Scope()", searchPhrase),
                TrimDuplicates = true,
                StartRow = 0,
                RowLimit = 200,
                ResultTypes = ResultType.RelevantResults
                //IgnoreAllNoiseQuery = false
            };

            ResultTableCollection resultTableCollection = fullTextSqlQuery.Execute();
            ResultTable result = resultTableCollection[ResultType.RelevantResults];

            DataTable tbl = new DataTable();
            tbl.Load(result, LoadOption.OverwriteChanges);
        }

    When the scope is set to All Sites I receive an error about the search scope not being available. Other searches just return 0 results. Any ideas about what I am doing wrong?

    Read the article

  • acts_as_xapian jobs table

    - by Grnbeagle
    Hi, can someone explain to me the inner workings of the acts_as_xapian_jobs table? I ran into an issue with the acts_as_xapian plugin recently, where I kept getting the following error when it creates an object with xapian indexed fields:

        Mysql::Error: Duplicate entry 'String-2147483647' for key 2: INSERT INTO `acts_as_xapian_jobs` (`action`, `model`, `model_id`) VALUES ('update', 'String', 23730251831560)

    It turns out the model_id exceeded the max int value of 2147483647. The workaround was to update model_id to use bigint. Why would the model_id be so huge? By looking at the content of acts_as_xapian_jobs, it seems it creates a row for every field that is being indexed. Understanding how a job gets created in the table would help a great deal. Here's a sampling of the table:

        mysql> select * from acts_as_xapian_jobs limit 5\G
        *************************** 1. row ***************************
              id: 19
           model: String
        model_id: 23804037900560
          action: update
        *************************** 2. row ***************************
              id: 49
           model: String
        model_id: 23804037191200
          action: update
        *************************** 3. row ***************************
              id: 79
           model: String
        model_id: 23804037932180
          action: update
        *************************** 4. row ***************************
              id: 109
           model: String
        model_id: 23804037101700
          action: update
        *************************** 5. row ***************************
              id: 139
           model: String
        model_id: 23804037722160
          action: update

    Thanks in advance, Amie

    Read the article

  • Assembly unavailable after Web.config change

    - by tags2k
    I'm using a custom framework that uses reflection to do a GetTypeByName(string fullName) on the fully-qualified type name that it gets from the database, to create an instance of said type and add it to the page, resulting in a standard modular kind of thing. GetTypeByName is a utility function of mine that simply iterates through Thread.GetDomain().GetAssemblies(), then performs an assembly.GetType(fullName) to find the relevant type. Obviously this result gets cached for future reference and speed. However, I'm experiencing some issues whereby if the web.config gets updated (and, in some scarier instances, if the application pool gets recycled) then it will lose all knowledge of certain assemblies, resulting in the inability to render an instance of the module type. Debugging shows that the missing assembly literally does not exist in the current thread's assemblies list. To get around this I added a second check which is a bit dirty but recurses through the /bin/ directory's DLLs and checks that each one exists in the assemblies list. If it doesn't, it loads it using Assembly.Load, fixing the context issue thanks to 'Solving the Assembly Load Context Problem'. This would work, only it seems that (and I'm aware this shouldn't be possible) some projects still have access to the missing assembly, for example my actual web project rather than the framework itself - and it then complains that duplicate references have been added! Has anyone ever heard of anything like this, or have any ideas why an assembly would simply drop out of existence on a config change? Short of a solution, what is the most elegant workaround to get all the assemblies in the bin to reload? It needs to be all in one "hit" so that the site visitors don't see any difference other than a small delay, so an app_offline.htm file is out of the question. Programmatically renaming a DLL in the bin and then naming it back does work, but requires "modify" permissions for the IIS user account, which is insane. Thanks for any pointers the community can gather!
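
    For reference, the bin-scanning fallback described above is usually only a few lines; a rough sketch (method and variable names are illustrative, not the poster's framework, and it assumes the site's bin path is passed in) looks like this:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Reflection;

        static void EnsureBinAssembliesLoaded(string binPath)
        {
            // Collect the simple names of everything already loaded in the AppDomain.
            var loaded = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
            foreach (Assembly a in AppDomain.CurrentDomain.GetAssemblies())
                loaded.Add(a.GetName().Name);

            foreach (string dll in Directory.GetFiles(binPath, "*.dll"))
            {
                AssemblyName name = AssemblyName.GetAssemblyName(dll);
                if (!loaded.Contains(name.Name))
                    Assembly.Load(name);   // load by identity so it lands in the default Load context
            }
        }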

    Read the article

  • Programmatically choosing image conversion format to JPEG or PNG for Silverlight display

    - by Otaku
    I have a project where I need to convert a large number of image types to be displayable in a Silverlight app - TIFF, GIF, WMF, EMF, BMP, DIB, etc. I can do these conversions on the server before hydrating the Silverlight app. However, I'm not sure when I should choose to convert to which format, either JPG or PNG. Is there some kind of standard out there like TIFF should always be a JPEG and GIF should always be a PNG. Or, if a BMP is 24 bit, it should be converted to a JPEG - any lower and it can be a PNG. Or everything is a PNG and why? What I usually see or see in response to this type of question is "Well, if the picture is a photograph, go with JPEG" or "If it has straight lines, PNG is better." Unfortunately, I won't have the luxury of viewing any of the image files at all and would like just a standard way to do this via code, even if that is a zillion if/then statements. Are there any standards or best practices around this subject? P.S. Please don't move to close this subject - it actually has no duplicate on SO because I'm not looking for subjectivity.
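
    Absent a formal standard, one workable rule of thumb is: keep anything indexed or transparency-capable (GIFs, palette BMPs, most line art) as PNG, and send full-colour photographic content (typical TIFF or 24-bit BMP scans) to JPEG. A hedged GDI+ sketch of that heuristic, assuming System.Drawing is usable on the server; the rule itself is a judgment call, not a standard:

        using System.Drawing;
        using System.Drawing.Imaging;

        // Rough heuristic: indexed or alpha-capable sources become PNG, the rest become JPEG.
        static ImageFormat ChooseTargetFormat(Image source)
        {
            bool isIndexed = (source.PixelFormat & PixelFormat.Indexed) != 0;
            bool hasAlpha = Image.IsAlphaPixelFormat(source.PixelFormat);

            return (isIndexed || hasAlpha) ? ImageFormat.Png : ImageFormat.Jpeg;
        }

        // Usage sketch:
        // using (var img = Image.FromFile(sourcePath))
        //     img.Save(targetPath, ChooseTargetFormat(img));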

    Read the article

  • Problems with ASP.NET, machine-level web.config, and the location element

    - by Daniel Schaffer
    I've got a server running Windows Web Server 2008 R2. The machine-level web.config has the following entries:

        <location path="Preview">
          <appSettings>
            <add key="Environment" value="Preview" />
          </appSettings>
        </location>
        <location path="Staging">
          <appSettings>
            <add key="Environment" value="Staging" />
          </appSettings>
        </location>
        <location path="Production">
          <appSettings>
            <add key="Environment" value="Production" />
          </appSettings>
        </location>

    I have a website that I'd set up in the directory D:\Sites\Preview\, so the full path would be D:\Sites\Preview\WebSite1. If I put a simple aspx file that just outputs the value of ConfigurationManager.AppSettings["Environment"], it displays the value Preview. I'm not clear on exactly how that works, but it does. I'd set this up several weeks ago, and just now tried to duplicate this - I put a second site in the D:\Sites\Preview\ directory, expecting that it would automatically pick up the appropriate appSettings entries, but for some reason it hasn't - the same aspx page doesn't show anything. Additionally, when I go into the IIS manager and open the Configuration Editor, there are no settings in there, whereas there are settings listed for the first site. Any ideas as to what I could be missing? Is the location element intended to work like this, or did I just find some magical fluke with my first site?

    Read the article

  • Prevent comment form re-submission

    - by Rob
    I've got a comment form for an article and I'd like to prevent re-submission. I notice that WordPress handles this very well (going back doesn't cause the browser to request a form re-submission), but I can't figure out how they do it, even though our methods are very similar.

    My script:

        1. User visits mydomain.com/article/1/article_title.html
        2. Fills in a form which posts to mydomain.com/addnewcomment/1.html
        3. I then do a 302 redirect back to mydomain.com/article/1/article_title.html

    Now if I press back from this position it doesn't request a re-submission. However, if I go to another page, e.g. mydomain.com/tag/1/my_tag.html, and press back, it does resubmit the form. Obviously I want to prevent this.

    What WordPress does:

        1. User visits mydomain.com/?p=1
        2. Fills in a form which posts to mydomain.com/wp-comments-post.php
        3. This then does a 302 redirect back to mydomain.com/?p=1

    Pressing back, or visiting another page and pressing back, doesn't cause a re-submission. I've had a look through the WP code but I can't see how they manage this. Obviously it's something I'd like to achieve. Does anyone have any thoughts on where I may be going wrong? (I'm only using WordPress as an example to prove that it's possible; obviously I'm not trying to exactly duplicate WP, that would be pointless.)

    Read the article

  • How to replace custom IDs in the order of their appearance with a shell script?

    - by Péter Török
    I have a pair of rather large log files with very similar content, except that some identifiers are different between the two. A couple of examples:

        UnifiedClassLoader3@19518cc | UnifiedClassLoader3@d0357a
        JBossRMIClassLoader@13c2d7f | JBossRMIClassLoader@191777e

    That is, wherever the first file contains UnifiedClassLoader3@19518cc, the second contains UnifiedClassLoader3@d0357a, and so on. I want to replace these with identical IDs so that I can spot the really important differences between the two files. I.e. I want to replace all occurrences of both UnifiedClassLoader3@19518cc in file1 and UnifiedClassLoader3@d0357a in file2 with UnifiedClassLoader3@1; all occurrences of both JBossRMIClassLoader@13c2d7f in file1 and JBossRMIClassLoader@191777e in file2 with JBossRMIClassLoader@2 etc. Using the Cygwin shell, so far I managed to list all different identifiers occurring in one of the files with

        grep -o -e 'ClassLoader[0-9]*@[0-9a-f][0-9a-f]*' file1.log | sort | uniq

    However, now the original order is lost, so I don't know which is the pair of which ID in the other file. With grep -n I can get the line number, so the sort would preserve the order of appearance, but then I can't weed out the duplicate occurrences. Unfortunately grep can not print only the first match of a pattern. I figured I could save the list of identifiers produced by the above command into a file, then iterate over the patterns in the file with grep -n | head -n 1, concatenate the results and sort them again. The result would be something like

        2 ClassLoader3@19518cc
        137 ClassLoader@13c2d7f
        563 ClassLoader3@1267649
        ...

    Then I could (either manually or with sed itself) massage this into a sed command like

        sed -e 's/ClassLoader3@19518cc/ClassLoader3@2/g' -e 's/ClassLoader@13c2d7f/ClassLoader@137/g' -e 's/ClassLoader3@1267649/ClassLoader3@563/g' file1.log > file1_processed.log

    and similarly for file2. However, before I start, I would like to verify that my plan is the simplest possible working solution to this. Is there any flaw in this approach? Is there a simpler way?
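
    For what it's worth, the renumber-by-first-appearance step is also only a few lines in a general-purpose language, which avoids generating the sed script at all. Purely as an illustration of the idea (in C#, not the Cygwin tools discussed above; NormalizeIds is an invented helper name):

        using System.Collections.Generic;
        using System.IO;
        using System.Text.RegularExpressions;

        static void NormalizeIds(string inputPath, string outputPath)
        {
            // Equivalent to the grep pattern above.
            var regex = new Regex(@"ClassLoader[0-9]*@[0-9a-f]+");
            var seen = new Dictionary<string, string>();

            string text = File.ReadAllText(inputPath);
            string result = regex.Replace(text, match =>
            {
                string replacement;
                if (!seen.TryGetValue(match.Value, out replacement))
                {
                    // First occurrence: assign the next sequential number, keeping the prefix.
                    string prefix = match.Value.Substring(0, match.Value.IndexOf('@'));
                    replacement = prefix + "@" + (seen.Count + 1);
                    seen[match.Value] = replacement;
                }
                return replacement;
            });

            File.WriteAllText(outputPath, result);
        }

    Run over both files, this produces matching IDs as long as the identifiers first appear in the same relative order in each file - the same assumption the sed approach makes.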

    Read the article

  • "pushModalScreen called by a non-event thread" thrown on event thread

    - by JGWeissman
    I am trying to get my Blackberry application to display a custom modal dialog, and have the opening thread wait until the user closes the dialog screen.

        final Screen dialog = new FullScreen();
        ...// Fields are added to dialog
        Application.getApplication().invokeAndWait(new Runnable() {
            public void run() {
                Application.getUiApplication().pushModalScreen(dialog);
            }
        });

    This is throwing an Exception which says "pushModalScreen called by a non-event thread" despite the fact that I am using invokeAndWait to call pushModalScreen from the event thread. Any ideas about what the real problem is? Here is the code to duplicate this problem:

        package com.test;

        import net.rim.device.api.ui.*;
        import net.rim.device.api.ui.component.*;
        import net.rim.device.api.ui.container.*;

        public class Application extends UiApplication {
            public static void main(String[] args) {
                new Application();
            }

            private Application() {
                new Thread() {
                    public void run() {
                        Application.this.enterEventDispatcher();
                    }
                }.start();

                final Screen dialog = new FullScreen();
                final ButtonField closeButton = new ButtonField("Close Dialog");
                closeButton.setChangeListener(new FieldChangeListener() {
                    public void fieldChanged(Field field, int context) {
                        Application.getUiApplication().popScreen(dialog);
                    }
                });
                dialog.add(closeButton);

                Application.getApplication().invokeAndWait(new Runnable() {
                    public void run() {
                        try {
                            Application.getUiApplication().pushModalScreen(dialog);
                        } catch (Exception e) {
                            // To see the Exception in the debugger
                            throw new RuntimeException(e.getMessage());
                        }
                    }
                });

                System.exit(0);
            }
        }

    I am using Component Package version 4.5.0.

    Read the article

  • How do I search a NTEXT column for XML attributes and update the values? MS SQL 2005

    - by Alan
    Duplicate: this exact question was asked by the same author in http://stackoverflow.com/questions/1221583/how-do-i-update-a-xml-string-in-an-ntext-column-in-sql-server. Please close this one and answer in the original question.

    I have a SQL table with 2 columns: ID (int) and Value (ntext). The Value rows have all sorts of XML strings in them.

        ID Value
        -- ------------------
        1  <ROOT><Type current="TypeA"/></ROOT>
        2  <XML><Name current="MyName"/><XML>
        3  <TYPE><Colour current="Yellow"/><TYPE>
        4  <TYPE><Colour current="Yellow" Size="Large"/><TYPE>
        5  <TYPE><Colour current="Blue" Size="Large"/><TYPE>
        6  <XML><Name current="Yellow"/><XML>

    How do I:

        A. List the rows where <TYPE><Colour current="Yellow", bearing in mind that there is an entry <XML><Name current="Yellow"/><XML>
        B. Modify the rows that contain <TYPE><Colour current="Yellow" to be <TYPE><Colour current="Purple"

    Thanks for your help!

    Read the article

  • Multiple Concurrent Postbacks when using UpdatePanels

    - by d4nt
    Here's an example app that I built to demonstrate my problem. A single aspx page with the following on it:

        <form id="form1" runat="server">
            <asp:ScriptManager runat="server" />
            <asp:Button runat="server" ID="btnGo" Text="Go" OnClick="btnGo_Click" />
            <asp:UpdatePanel runat="server">
                <ContentTemplate>
                    <asp:TextBox runat="server" ID="txtVal1" />
                </ContentTemplate>
            </asp:UpdatePanel>
        </form>

    Then, in code-behind, we have the following:

        protected void btnGo_Click(object sender, EventArgs e)
        {
            Thread.Sleep(5000);
            Debug.WriteLine(string.Format("{0}: {1}", DateTime.Now.ToString("HH:MM:ss.fffffff"), txtVal1.Text));
            txtVal1.Text = "";
        }

    If you run this and click on the "Go" button multiple times, you will see multiple debug statements in the "Output" window showing that multiple requests have been processed. This appears to contradict the documented behaviour of update panels (i.e. if you make a request while one is processing, the first request gets terminated and the current one is processed). Anyway, the point is I want to fix it. The obvious option would be to use JavaScript to disable the button after the first press, but that strikes me as hard to maintain; we potentially have the same issue on a lot of screens, and it could be easily broken if someone renames a button. Do you have any suggestions? Perhaps there is something I could do in BeginRequest in Global.asax to detect a duplicate request? Is there some setting or feature on the UpdatePanel to stop it doing this, or maybe something in the AjaxControlToolkit that will prevent it?
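
    One server-side angle (a sketch only, not a documented UpdatePanel setting) is the dedupe hinted at in the question, done with a one-shot token: the page carries a token in ViewState, the handler only runs when that token still matches the copy in Session, and the first postback to arrive consumes it. Because all of the queued clicks post the same ViewState, only one of them does the work. Names and the token scheme here are illustrative, and this relies on session state being enabled:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                // Issue a fresh token with the initial GET.
                string token = Guid.NewGuid().ToString();
                ViewState["RequestToken"] = token;
                Session["RequestToken"] = token;
            }
        }

        protected void btnGo_Click(object sender, EventArgs e)
        {
            string posted = (string)ViewState["RequestToken"];
            if (posted == null || !posted.Equals(Session["RequestToken"]))
                return;                                          // token already consumed: ignore the duplicate

            string next = Guid.NewGuid().ToString();             // consume the token and hand out a new one
            Session["RequestToken"] = next;
            ViewState["RequestToken"] = next;

            Thread.Sleep(5000);                                  // the original slow work
            Debug.WriteLine(string.Format("{0}: {1}", DateTime.Now.ToString("HH:mm:ss.fffffff"), txtVal1.Text));
            txtVal1.Text = "";
        }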

    Read the article

  • Database nesting layout confusion

    - by arzon
    I'm no expert in databases and a beginner in Rails, so here goes something which kinda confuses me... Assuming I have three classes as a sample (note that no effort has been made to address any possible Rails reserved words issue in the sample):

        class File < ActiveRecord::Base
          has_many :records, :dependent => :destroy
          accepts_nested_attributes_for :records, :allow_destroy => true
        end

        class Record < ActiveRecord::Base
          belongs_to :file
          has_many :users, :dependent => :destroy
          accepts_nested_attributes_for :users, :allow_destroy => true
        end

        class User < ActiveRecord::Base
          belongs_to :record
        end

    Upon entering records, the database contents will appear as shown below. My issue is that if there are a lot of Files for the same Record, there will be duplicate record names. This will also be true if there are multiple Records for the same user in the Users table. I was wondering if there is a better way than this, so as to have one or more Files point to a single Record entry and one or more Records point to a single User. BTW, the File names are unique.

        Files table:
        id  name
        1   name1
        2   name2
        3   name3
        4   name4

        Records table:
        id  file_id  record_name  record_type
        1   1        ForDaisy1    ...
        2   2        ForDonald1   ...
        3   3        ForDonald2   ...
        4   4        ForDaisy1    ...

        Users table:
        id  record_id  username
        1   1          Daisy
        2   2          Donald
        3   3          Donald
        4   4          Daisy

    Is there any way to optimize the database to prevent duplication of entries, or is this really the correct and proper behavior? I spread out the database into different tables to be able to easily add new columns in the future.

    Read the article
