Search Results

Search found 11409 results on 457 pages for 'large teams'.

Page 396/457 | < Previous Page | 392 393 394 395 396 397 398 399 400 401 402 403  | Next Page >

  • Variable not accessible within an if statement

    - by Chris
    I have a variable which holds a score for a game. My variable is accessible and correct outside of an if statement, but not inside it, as shown below:

        cout << score << endl; // works
        if(!justFinished){
            cout << score << endl; // doesn't work - prints a large negative number
            endTime = time(NULL);
            ifstream highscoreFile;
            highscoreFile.open("highscores.txt");
            if(highscoreFile.good()){
                highscoreFile.close();
            }else{
                std::ofstream outfile ("highscores.txt");
                cout << score << endl;
                outfile << score << std::endl;
                outfile.close();
            }
            justFinished = true;
        }
        cout << score << endl; // works

    Read the article

  • Starting a process synchronously, and "streaming" the output

    - by Benjol
    I'm looking at trying to start a process from F#, wait till it's finished, but also read its output progressively. Is this the right/best way to do it? (In my case I'm trying to execute git commands, but that is tangential to the question)

        let gitexecute (logger:string->unit) cmd =
            let procStartInfo = new ProcessStartInfo(@"C:\Program Files\Git\bin\git.exe", cmd)

            // Redirect to the Process.StandardOutput StreamReader.
            procStartInfo.RedirectStandardOutput <- true
            procStartInfo.UseShellExecute <- false

            // Do not create the black window.
            procStartInfo.CreateNoWindow <- true

            // Create a process, assign its ProcessStartInfo and start it
            let proc = new Process()
            proc.StartInfo <- procStartInfo
            proc.Start() |> ignore

            // Get the output into a string
            while not proc.StandardOutput.EndOfStream do
                proc.StandardOutput.ReadLine() |> logger

    What I don't understand is how the proc.Start() can return a boolean and also be asynchronous enough for me to get the output out of the while loop progressively. Unfortunately, I don't currently have a large enough repository - or slow enough machine - to be able to tell what order things are happening in...
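    A minimal usage sketch under the same assumptions (the git path hard-coded above, each output line logged straight to the console as it is read):

        // stream the output of `git status` line by line
        gitexecute (printfn "%s") "status"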

    Read the article

  • Getting a stream back from a .Net remoting service that is accessible with IP v4 and v6

    - by jon.ediger
    My company has an existing .Net Remoting service that listens on a port, fronting interfaces used by external systems. This all works great with IP v4 based communications. However, this service now needs to support both IP v4 communications and IP v6 communications. I have found info that the system.runtime.remoting section of the app.config should include two channels as follows:

        <channel ref="tcp" name="tcp6" port="9000" bindTo="[::]" />
        <channel ref="tcp" name="tcp4" port="9000" bindTo="0.0.0.0" />

    The above config file changes to the System.Runtime.Remoting config section will get the remoting service responding to non-stream functions on both IP v4 and IP v6. The issue comes only when attempting to get a stream back, used to upload or download large files. In this case, instead of getting a usable stream back, the following ArgumentException is thrown instead:

        IPv4 address 0.0.0.0 and IPv6 address ::0 are unspecified addresses that cannot be used as a target address.
        Parameter name: hostNameOrAddress

    Is there a way to modify the app.config (in the system.runtime.remoting section, or another section) so that the service will return a stream mapped to a real IP, so the client can actually upload/download files while maintaining the ability to use both IP v4 and IP v6?

    Read the article

  • Finding patterns of failure in a Unit Test

    - by Pekka
    I'm new to Unit Testing, and I'm only getting into the routine of building test suites. I have what is going to be a rather large project that I want to build tests for from the start. I'm trying to figure out general strategies and patterns for building test suites.

    When you look at a class, many tests come to you obviously due to the nature of the class. Say for a "user account" class with basic CRUD operations, being related to a database table, we will want to test - well, the CRUD:

        creating an object and seeing whether it exists
        query its properties
        change some properties
        change some properties to incorrect values
        and delete it again.

    As for how to break things, there are "fail" tests common to most CRUD classes like:

        Invalid input data types
        A number as the ID key that exceeds the range of the chosen data type
        Input in an incorrect character encoding
        Input that is too long

    And so on and so on. For a unit test concerned with file operations, the list of "breaking things" could be:

        Invalid characters in file name
        File name too long
        File name uses incorrect protocol or path

    I'm pretty sure similar patterns - applicable beyond the unit test one is currently working on - can be found for most units that are being tested. Now my question is: Am I correct in seeing such "breaking patterns"? Or am I getting something completely wrong about Unit testing, and if I did it right, this wouldn't be an issue at all? Is Unit Testing as a process of finding as many ways to break the unit as possible the right way to go?

    If I am correct: Are there existing definitions, lists, cheat sheets for such patterns? Are there any provisions (mainly in PHPUnit, as that's the framework I'm working in) to automate such patterns? Is there any assistance - in the form of check lists, or software - to aid in writing complete tests?
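    On the PHPUnit side, the facility that maps most directly onto such "breaking patterns" is a data provider: each new pattern becomes one more row of bad input fed to the same test. A sketch only - the UserAccount class and the exception it throws are illustrative assumptions, not part of the question:

        <?php
        class UserAccountBreakingTest extends PHPUnit_Framework_TestCase
        {
            // one row per "breaking pattern"; the keys label each case in failure output
            public function badIdProvider()
            {
                return array(
                    'non-numeric id'  => array('abc'),
                    'out-of-range id' => array(PHP_INT_MAX),
                    'negative id'     => array(-1),
                    'overlong value'  => array(str_repeat('x', 10000)),
                );
            }

            /**
             * @dataProvider badIdProvider
             * @expectedException InvalidArgumentException
             */
            public function testLoadRejectsInvalidIds($badId)
            {
                $account = new UserAccount();   // hypothetical class under test
                $account->load($badId);
            }
        }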

    Read the article

  • How can I prevent infinite recursion when using events to bind UI elements to fields?

    - by Billy ONeal
    The following seems to be a relatively common pattern (to me, not to the community at large) to bind a string variable to the contents of a TextBox.

        class MyBackEndClass
        {
            public event EventHandler DataChanged;

            string _Data;
            public string Data
            {
                get { return _Data; }
                set
                {
                    _Data = value;
                    //Fire the DataChanged event
                }
            }
        }

        class SomeForm : // Form stuff
        {
            MyBackEndClass mbe;
            TextBox someTextBox;

            SomeForm()
            {
                someTextBox.TextChanged += HandleTextBox;
                mbe.DataChanged += HandleData;
            }

            void HandleTextBox(Object sender, EventArgs e)
            {
                mbe.Data = ((TextBox)sender).Text;
            }

            void HandleData(Object sender, EventArgs e)
            {
                someTextBox.Text = ((MyBackEndClass) sender).Data;
            }
        }

    The problem is that changing the TextBox changes the data value in the backend, which causes the textbox to change, and so on. That runs forever. Is there a better design pattern (other than resorting to a nasty boolean flag) that handles this case correctly?

    EDIT: To be clear, in the real design the backend class is used to synchronize changes between multiple forms. Therefore I can't just use the SomeTextBox.Text property directly. Billy3
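    One commonly used way around this (a minimal sketch, not the poster's full design) is to make both sides no-ops when nothing actually changed, so the feedback loop dies out after one round trip instead of recursing:

        public string Data
        {
            get { return _Data; }
            set
            {
                if (_Data == value) return;        // nothing changed: don't re-fire the event
                _Data = value;
                var handler = DataChanged;
                if (handler != null) handler(this, EventArgs.Empty);
            }
        }

        void HandleData(Object sender, EventArgs e)
        {
            var data = ((MyBackEndClass)sender).Data;
            if (someTextBox.Text != data)          // guard on the UI side as well
                someTextBox.Text = data;
        }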

    Read the article

  • What is the fastest (to access) struct-like object in Python?

    - by DNS
    I'm optimizing some code whose main bottleneck is running through and accessing a very large list of struct-like objects. Currently I'm using namedtuples, for readability. But some quick benchmarking using 'timeit' shows that this is really the wrong way to go where performance is a factor:

    Named tuple with a, b, c:

        >>> timeit("z = a.c", "from __main__ import a")
        0.38655471766332994

    Class using __slots__, with a, b, c:

        >>> timeit("z = b.c", "from __main__ import b")
        0.14527461047146062

    Dictionary with keys a, b, c:

        >>> timeit("z = c['c']", "from __main__ import c")
        0.11588272541098377

    Tuple with three values, using a constant key:

        >>> timeit("z = d[2]", "from __main__ import d")
        0.11106188992948773

    List with three values, using a constant key:

        >>> timeit("z = e[2]", "from __main__ import e")
        0.086038238242508669

    Tuple with three values, using a local key:

        >>> timeit("z = d[key]", "from __main__ import d, key")
        0.11187358437882722

    List with three values, using a local key:

        >>> timeit("z = e[key]", "from __main__ import e, key")
        0.088604143037173344

    First of all, is there anything about these little timeit tests that would render them invalid? I ran each several times, to make sure no random system event had thrown them off, and the results were almost identical.

    It would appear that dictionaries offer the best balance between performance and readability, with classes coming in second. This is unfortunate, since, for my purposes, I also need the object to be sequence-like; hence my choice of namedtuple. Lists are substantially faster, but constant keys are unmaintainable; I'd have to create a bunch of index-constants, i.e. KEY_1 = 1, KEY_2 = 2, etc., which is also not ideal. Am I stuck with these choices, or is there an alternative that I've missed?
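    For anyone wanting to reproduce the comparison, a self-contained sketch of the setup implied by the timings above (the object layouts are assumptions, since the question only shows the timing calls):

        from collections import namedtuple
        from timeit import timeit

        Point = namedtuple('Point', 'a b c')

        class SlotPoint(object):
            __slots__ = ('a', 'b', 'c')
            def __init__(self, a, b, c):
                self.a, self.b, self.c = a, b, c

        a = Point(1, 2, 3)            # namedtuple
        b = SlotPoint(1, 2, 3)        # class using __slots__
        c = {'a': 1, 'b': 2, 'c': 3}  # dict
        d = (1, 2, 3)                 # tuple
        e = [1, 2, 3]                 # list
        key = 2

        tests = [('namedtuple ', 'z = a.c'),
                 ('slots class', 'z = b.c'),
                 ('dict       ', "z = c['c']"),
                 ('tuple[2]   ', 'z = d[2]'),
                 ('list[2]    ', 'z = e[2]'),
                 ('tuple[key] ', 'z = d[key]'),
                 ('list[key]  ', 'z = e[key]')]

        for label, stmt in tests:
            print("%s %.6f" % (label, timeit(stmt, "from __main__ import a, b, c, d, e, key")))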

    Read the article

  • Methods to see the result of a code change faster

    - by Can't Tell
    This question came to me when developing using Eclipse. I use JBoss Application Server and use hot code replacement. But this option requires the 'build automatically' option to be enabled. This makes Eclipse build the workspace automatically (periodically, or when a file is saved?) and for a large code base this takes too much time and processing, which makes the machine freeze for a while. Also, sometimes an error message is shown saying that hot code replacement failed.

    The question that I have is: is there a better way to see the result of a code change? Currently I have the following two suggestions:

        Have unit tests - this allows running a single test and seeing the result of a code change. (But for a Java EE application that uses EJBs, is it easy to set up unit tests?)
        Use OSGi - which allows adding jars to the running system without bringing down the JVM.

    Any ideas on the above suggestions, or any other suggestion or framework that allows doing this, are welcome.

    Read the article

  • CSS :hover problem with text that is hidden because of overflow:hidden?

    - by Michael Harringon
    In my application I have lots of divs containing text. All divs have overflow set to hidden so that the user does not see the text if the container is not large enough to contain the writing. If the user wants to see the hidden text they are supposed to mouse over the "box". The box then expands and shows the text. Sounds simple enough, right? Well, I am having problems and the solution I tried did not work. The problem is that when the user mouses over the box, the text does indeed appear, but it stays really narrow and comes out of the bottom of the box, the same way it would if overflow was set to visible.

    Below is the standard CSS applied to the div box:

        .newevent {
            overflow: hidden;
            z-index: 0;
        }

    I tried to fix this by setting a hover trigger; when it is activated the box widens. I thought that this would then mean there would be more space to display the text. Below is the hover effect:

        .newevent div:hover {
            width: 200px;
            padding: 50px;
            background-color: #D4D4D4;
            border: medium red dashed;
            overflow: visible;
            z-index: 1;
        }

    How do I go about "redrawing" the text when it is hovered over, so that the text can use the new widened area rather than behaving as if it is still in a narrow box?
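    One thing worth checking (a sketch of a possible fix, not a confirmed diagnosis): the hover rule above targets a div inside .newevent rather than the .newevent box itself, so the element that actually clips the text never changes on hover. Something along these lines puts the hover on the box and lets it grow with its content:

        .newevent:hover {
            width: 200px;
            height: auto;          /* let the expanded box grow with the revealed text */
            overflow: visible;
            background-color: #D4D4D4;
            border: medium dashed red;
            z-index: 1;
        }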

    Read the article

  • Applying PowerShell outside IT management

    - by Tormod
    Hi. We have a flexible process control system by which automation engineers configure large applications comprising thousands of small logical units that are parameterized and integrated into the control flow. There are many tasks that are repetitive at the granular level, and a multitude of proprietary productivity tools have been made to meet this demand. We have different business segments, and the automation engineers vary across the board in skill sets and interests. Fancy GUI and usability versus flexibility is a common discussion.

    At first glance, PowerShell seems to be a sensible platform on which to implement such tooling, and it would also be an advantageous cross-over skill for managing the IT aspects of the system setup and deployment as a whole. This should allow the script-savvy their desired flexibility (they are already a scripting crowd), and the GUI-dependent could still get their desired GUI underpinned by PowerShell. But I can't seem to find many people/groups who have tried to use the scriptability and object passing of PowerShell extensively to accommodate a heterogeneous user community outside the realm of IT management.

    Does anybody have any tips or words of caution? Am I missing something obvious as to why this shouldn't be done? Shouldn't PowerShell be taking over the world? ;-)

    Read the article

  • Display a web page from another site in an ASP.NET page

    - by Daniel
    Hi all, our customer has a requirement to extend the functionality of their existing large government project. It is an ASP.NET 3.5 (recently upgraded from 2.0) project. The existing solution is quite a behemoth that is almost unmaintainable, so they have decided that they want to provide the new functionality by hosting it on another website that is shown within the existing website. I'm not quite sure how this is best done, or whether there are any security issues preventing it or that need to be considered.

    Essentially the user would log on to the existing web site as normal, and when clicking on a certain link the page would load as normal with some kind of frame or control that has within it the contents of the page from the other site. I.e., they do not want to simply redirect to the other site; they want to show it embedded within the current one such that the existing menus etc. are still available. I believe if information needed to be passed to the embedded page it would be done using query strings, as I'm not sure if there is even another way to accomplish this.

    Can anyone give me some pointers on where to start looking to implement this, or any potential pitfalls I should be aware of? Thanks
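    The most direct version of "a frame or control showing the other site" is an iframe placed inside the page that already hosts the existing menus; a minimal sketch, where the URL, height and query-string parameter are purely illustrative:

        <%-- inside the existing page, below the site menus --%>
        <iframe src="https://newsite.example.gov/feature?userId=<%= Server.UrlEncode(currentUserId) %>"
                width="100%" height="800" frameborder="0"></iframe>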

    Read the article

  • How can I compare the performance of log() and fp division in C++?

    - by Ventzi Zhechev
    Hi, I'm using a log-based class in C++ to store very small floating-point values (as the values otherwise go beyond the range of double). As I'm performing a large number of multiplications, this has the added benefit of converting the multiplications to sums. However, at a certain point in my algorithm, I need to divide a standard double value by an integer value and then do a *= to a log-based value. I have overloaded the *= operator for my log-based class, and the right-hand side value is first converted to a log-based value by running log() and then added to the left-hand side value. Thus the operations actually performed are floating-point division, log() and floating-point summation.

    My question is whether it would be faster to first convert the denominator to a log-based value, which would replace the floating-point division with floating-point subtraction, yielding the following chain of operations: twice log(), floating-point subtraction, floating-point summation.

    In the end, this boils down to whether floating-point division is faster or slower than log(). I suspect that a common answer would be that this is compiler and architecture dependent, so I'll say that I use gcc 4.2 from Apple on darwin 10.3.0. Still, I hope to get an answer with a general remark on the speed of these two operators and/or an idea on how to measure the difference myself, as there might be more going on here, e.g. executing the constructors that do the type conversion etc. Cheers!
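    On measuring it yourself: a rough, self-contained sketch of timing the two operations in isolation. Results vary a lot with compiler flags, CPU and the actual operand values, so it is worth re-running against representative data from the real algorithm:

        #include <cmath>
        #include <cstdio>
        #include <ctime>

        int main() {
            const int N = 10000000;
            volatile double sink = 0.0;   // volatile so the loops are not optimized away
            double x = 1e-300;

            std::clock_t t0 = std::clock();
            for (int i = 0; i < N; ++i) sink = sink + x / (i + 1);
            std::clock_t t1 = std::clock();
            for (int i = 0; i < N; ++i) sink = sink + std::log(x * (i + 1));
            std::clock_t t2 = std::clock();

            std::printf("division: %.3fs   log(): %.3fs\n",
                        double(t1 - t0) / CLOCKS_PER_SEC,
                        double(t2 - t1) / CLOCKS_PER_SEC);
            return 0;
        }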

    Read the article

  • Fetching only the first record in the table (via the view) without changing the controller?

    - by bgadoci
    I am trying to fetch only the first record in my table for display. I am creating a site where a user can upload multiple images and attach them to a post, but I only want to display the first image for each post. For further clarification, posts belong_to projects. So when you are on the project's show page you see multiple posts. In this view I only want to display the first image for each post. Is there a way to do this in the view without affecting the controller (as later I want to allow users to browse all photos through the addition of a lightbox)? Here is my /views/posts/_post.html.erb code:

        <% div_for post do %>
          <% post.photos.each do | photo | %>
            <%= image_tag(photo.data.url(:large), :alt => '') %>
            <%= photo.description %>
          <% end unless post.photos.first.new_record? rescue nil %>
          <%= link_to h(post.link_title), post.link %>
          <%= h(post.description) %>
          <%= link_to 'Manage this post', edit_post_path(post) %>
        <% end %>

    UPDATE: I am using a photos model to attach multiple photos to each post and using paperclip here.
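    A sketch of the view-only change being asked about, assuming the same paperclip setup: swap the each loop for the first photo and leave the rest of the partial (and the controller) untouched.

        <% photo = post.photos.first %>
        <% if photo && !photo.new_record? %>
          <%= image_tag(photo.data.url(:large), :alt => '') %>
          <%= photo.description %>
        <% end %>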

    Read the article

  • Need help implementing this algorithm with MapReduce (Hadoop)

    - by Julia
    Hi all! I have an algorithm that will go through a large data set, read some text files and search for specific terms in those lines. I have it implemented in Java, but I didn't want to post code so that it doesn't look like I am searching for someone to implement it for me, but it is true I really need a lot of help!!! This was not planned for my project, but the data set turned out to be huge, so my teacher told me I have to do it like this. I was reading about MapReduce and thought that I would first do the standard implementation and then it would be more or less easy to do it with MapReduce. But that didn't happen, since the algorithm is quite stupid and nothing special, and map reduce... I can't wrap my mind around it.

    So here, shortly, is the pseudo code of my algorithm:

        LIST termList   (there is a method that creates this list from a lucene index)
        FOLDER topFolder

        INPUT topFolder
        IF it is folder and not empty
            list files (there are 30 sub folders inside)
            FOR EACH sub folder
                GET file "CheckedFile.txt"
                analyze(CheckedFile)
            ENDFOR
        END IF

        Method ANALYZE(CheckedFile)
            read CheckedFile
            WHILE CheckedFile has next line
                GET line
                FOR (loops through termList)
                    GET third word from line
                    IF third word = term from list
                        append whole line to string buffer
                    ENDIF
                ENDFOR
            END WHILE
            OUTPUT string buffer to file

    Also, as you can see, each time "analyze" is called, a new file has to be created. I understood that map reduce has difficulty writing to many outputs??? I understand the mapreduce intuition, and my example seems perfectly suited for mapreduce, but when it comes to doing this, obviously I do not know enough and I am STUCK! Please please help.
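    For orientation only, a minimal sketch of what the per-line matching could look like as a Hadoop mapper. The term-list loading helper is hypothetical, and producing one output file per input folder would additionally need something like MultipleOutputs on the reduce side:

        import java.io.IOException;
        import java.util.Set;

        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;

        // Emits (matched term, whole line) for every line whose third word appears in the term list.
        public class TermMatchMapper extends Mapper<LongWritable, Text, Text, Text> {

            private Set<String> terms;

            @Override
            protected void setup(Context context) {
                // hypothetical helper: load the lucene-derived term list, e.g. from the distributed cache
                terms = TermListLoader.load(context.getConfiguration());
            }

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String line = value.toString();
                String[] words = line.split("\\s+");
                if (words.length >= 3 && terms.contains(words[2])) {
                    context.write(new Text(words[2]), new Text(line));
                }
            }
        }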

    Read the article

  • php - using file() incrementally?

    - by NeedBeerStat
    I'm not sure if this is possible; I've been googling for a solution... But, essentially, I have a very large file, the lines of which I want to store in an array. Thus, I'm using file(), but is there a way to do that in batches? So that every, say, 100 lines it creates, it "pauses"? I think there's likely to be something I can do with a foreach loop or something, but I'm not sure that I'm thinking about it the right way... Like:

        $i=0;
        $j=0;
        $throttle=100;
        foreach($files as $k => $v) {
            if($i < $j+$throttle && $i > $j) {
                $lines[] = file($v);
                //Do some other stuff, like importing into a db
            }
            $i++;
            $j++;
        }

    But, I think that won't really work because $i & $j will always be equal... Anyway, feeling muddled... Can someone help me think a lil' clearer?
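    A minimal sketch of one way to get the batching effect without loading the whole file through file() at once - read line by line and flush every N lines (the file name and batch size are placeholders):

        <?php
        $batchSize = 100;
        $lines = array();

        $fh = fopen('bigfile.txt', 'r');
        while (($line = fgets($fh)) !== false) {
            $lines[] = rtrim($line, "\r\n");
            if (count($lines) >= $batchSize) {
                // do some other stuff, like importing $lines into a db
                $lines = array();          // start the next batch
            }
        }
        if ($lines) {
            // import the final, partial batch
        }
        fclose($fh);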

    Read the article

  • Authentication and Security in my website - need advice please.

    - by Ichirichi
    Hi, I am using a database with a list of usernames/passwords, and a simple web form that allows users to enter their username/password. When they submit the page, I simply do a stored procedure check to authenticate. If they are authorised, then their user details (e.g. username, dob, address, company address, other important info) are stored in a custom User object and then in a session. This custom User object that I created is used throughout the web application, and also in a sub-site (session sharing). My questions/problems are:

        1. Is my method of authentication the correct way to do things?
        2. I find users complaining that their session has expired although they "were not idle", possibly due to the app pool recycling? They type large amounts of text and find that their session has expired, and thus they lose all the text typed in. I am uncertain whether the session really does reset sporadically, but will Forms Authentication using cookies/cookieless resolve the issue?
        3. Alternatively, should I build and store the User object in a session, a cookie or something else instead, in order to be more "correct" and avoid cases like in point #2?
        4. If I go down the Forms Authentication route, I believe I cannot store my custom User object in a Forms Authentication cookie, so does that mean I would store the UserID and then recreate the user object on every page? Would this not be a huge increase on the server load?

    Advice and answers much appreciated. L

    Read the article

  • Logging in to Wordpress through CodeIgniter DX Authentication

    - by whobutsb
    Hello all, I'm about to start a very large project of rebuilding my company's intranet. The plan is to have most of the intranet live in a CI application. I chose to use CI because I'm very familiar with all the CI methods. Some sections of the intranet are going to be WordPress blogs. For example, the Human Resources Dept. and the Marketing Dept. will have their own WordPress blogs. Ideally my plan is to log on to the intranet with a CI authentication library like DXAuth, by querying the company's Active Directory. When I return the AD information for the user I will be saving their group memberships into a session. It would be fantastic if that session information could be used by WordPress to log the user in as an editor if they are a member of the Marketing group, and to allow users who are not members of the group to comment on that blog without logging into WordPress. My question is whether there are any CI classes, WordPress plugins, or tutorials out there for this sort of integration between the two systems. Thank you for your help!

    Read the article

  • TFS merging for users that are used to VSS

    - by JacksonD
    I just migrated a team of 7 developers from VSS to TFS. I migrated all of their code into a DEV folder which I then branched into a QA folder (which I branched into a PROD folder). The developers usually don't work on the same files, but there are some shared utility classes. All of the code is for a large ASP.NET web site. When the developers are ready to merge from DEV to QA, they only want to merge their changes. For example, let's say that Developer1 has been working on a project for the last 3 months and he's ready to merge all of his code into QA. However, Developer2 has been working on a different project for the last 2 months which is not ready to be merged. Developer1 and Developer2's changes are not in any way dependent on each other, but they are not separated into different folder structures and they each regularly do a get latest. There doesn't seem to be a way for developer1 to only merge his changes without also merging all of developer2's changes. Currently, developer1 is going through the Pending Changes window and 'Undoing Pending Changes' for all of Developer2's changes, but this is time consuming. They could merge each file individually, but this is also time consuming. Is there an easier way? I am going to have a coronary if I hear one more person explain how much easier it was to work in VSS.

    Read the article

  • Convert asp.net application to windows forms app

    - by rogdawg
    I have written and deployed an ASP.NET application that is pretty complex. It uses XSL transformations to create web forms for a large variety of data objects. The data comes from the database as XML via a web service. Now, I need to create a Windows desktop application that will provide a small subset of the web application's functionality to a user who may not have access to the web (working in remote areas). I will provide the data syncing using the MS Sync Framework, and I will have the desktop app use a local data store. I would like to use the same XSLT files in the desktop app that I use in the web app for the form creation, so that, if changes are made, the desktop app can update itself when it connects and syncs its data. But I am wondering how to replicate the ASP.NET code-behind logic of my web app in the Windows forms. If I use a browser control to render the XSL transformation result, then how could I handle click events, etc., in the form? Also, can I launch other windows as "dialog boxes" from my Windows forms (I do this in my web app using RadControls functionality)? Thanks for any advice you can give.
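    A rough sketch of one way this is sometimes wired up (the helpers TransformToHtml, HandleCommand and DetailsForm are hypothetical names, not an existing API): render the XSLT output in a WinForms WebBrowser control, have the XSLT emit links with a custom scheme, and intercept them in the Navigating event in place of the web code-behind. Modal child windows are then just ordinary forms shown with ShowDialog().

        webBrowser1.DocumentText = TransformToHtml(dataXml, xsltPath);   // hypothetical XslCompiledTransform wrapper

        webBrowser1.Navigating += (sender, e) =>
        {
            // links written by the XSLT as e.g. app://save?id=42 become commands instead of navigations
            if (e.Url.Scheme == "app")
            {
                e.Cancel = true;            // stop the WebBrowser from actually navigating
                HandleCommand(e.Url);       // hypothetical dispatcher replacing the web code-behind handlers
            }
        };

        // "dialog boxes" from the desktop app are plain modal forms:
        using (var details = new DetailsForm())
        {
            details.ShowDialog(this);
        }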

    Read the article

  • Looping the layout that was set up in Interface Builder

    - by Slavenko
    I just need someone to point me in the right direction as to how I should be doing things. I wanted to make an iOS news-like app with an interface resembling Windows Phone: large and small image tiles that each represent one news item. Now, I was thinking of setting up some basic layout in the storyboard that would consist of, for example, a title and 3 different-sized tiles/images (the gray part in the attached image). I would be getting the data as a JSON array that holds different news categories, so I was wondering if the layout I set up could somehow be reused in a for loop, since the layout only repeats itself (the red part in the attached image) and only the data would be different. Can this be done, should I even try doing something like this, or should I try to create the entire layout programmatically? I wouldn't mind doing it programmatically; it's just that I don't have much experience creating layouts that way, and I wanted to make sure that I don't do something that I might regret later. Thank you for any help and advice.

    Read the article

  • Always send certain values with an ajax request/post

    - by DZittersteyn
    I'm building a system that displays a list of users, and on selection of a user requests some form of password. These values are saved in a hidden field on the page, and need to be sent with every request as a form of authentication. (I'm aware of the MITM vulnerability that lies herein, but it's a very low-key system, so security is not a large concern.)

    Now I need to send these values with each and every request, to auth the currently 'logged in' user. I'd like to automate this, via ajaxSetup, however I'm running into some issues. My first try was:

        init_user_auth: function(){
            $.ajaxSetup({
                data: {
                    'user'    : site_user.selected_user_id(),
                    'passcode': site_user.selected_user_pc(),
                    'barcode' : site_user.selected_user_bc()
                }
            });
        },

    However, as I should have known, this reads the values once, at the time of the call to ajaxSetup, and never rereads them. What I need is a way to actually call the functions every time an ajax call is made. I'm currently trying to understand what is happening here: https://groups.google.com/forum/?fromgroups=#!topic/jquery-dev/OBcEfgvTJ9I, however, through the flamewar and very low-level stuff going on there, I'm not exactly sure I get what is going on. Is this the way to proceed, or should I just face facts and manually add login info to each ajax call?
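    One pattern that avoids touching every call site (a sketch, assuming jQuery 1.5 or later): register a prefilter, which runs for each request just before it is sent, so the three values are read fresh every time instead of once at setup.

        $.ajaxPrefilter(function (options) {
            var auth = $.param({
                user:     site_user.selected_user_id(),
                passcode: site_user.selected_user_pc(),
                barcode:  site_user.selected_user_bc()
            });
            options.data = options.data ? options.data + '&' + auth : auth;
        });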

    Read the article

  • eliminating duplicate Enum code

    - by Don
    Hi, I have a large number of Enums that implement this interface:

        /**
         * Interface for an enumeration, each element of which can be uniquely identified by its code
         */
        public interface CodableEnum {

            /**
             * Get the element with a particular code
             * @param code
             * @return
             */
            public CodableEnum getByCode(String code);

            /**
             * Get the code that identifies an element of the enum
             * @return
             */
            public String getCode();
        }

    A typical example is:

        public enum IMType implements CodableEnum {

            MSN_MESSENGER("msn_messenger"),
            GOOGLE_TALK("google_talk"),
            SKYPE("skype"),
            YAHOO_MESSENGER("yahoo_messenger");

            private final String code;

            IMType (String code) {
                this.code = code;
            }

            public String getCode() {
                return code;
            }

            public IMType getByCode(String code) {
                for (IMType e : IMType.values()) {
                    if (e.getCode().equalsIgnoreCase(code)) {
                        return e;
                    }
                }
                return null; // no match
            }
        }

    As you can imagine, these methods are virtually identical in all implementations of CodableEnum. I would like to eliminate this duplication, but frankly don't know how. I tried using a class such as the following:

        public abstract class DefaultCodableEnum implements CodableEnum {

            private final String code;

            DefaultCodableEnum(String code) {
                this.code = code;
            }

            public String getCode() {
                return this.code;
            }

            public abstract CodableEnum getByCode(String code);
        }

    But this turns out to be fairly useless because:

        An enum cannot extend a class
        Elements of an enum (SKYPE, GOOGLE_TALK, etc.) cannot extend a class
        I cannot provide a default implementation of getByCode(), because DefaultCodableEnum is not itself an Enum.

    I tried changing DefaultCodableEnum to extend java.lang.Enum, but this doesn't appear to be allowed. Any suggestions that do not rely on reflection? Thanks, Don
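    One way this kind of duplication is often reduced to a single line per enum is a small static helper shared by all CodableEnums; a sketch only (the class and method names are illustrative, and getEnumConstants() is still reflection-backed under the hood, so it only partially satisfies the no-reflection wish):

        public final class CodableEnums {

            private CodableEnums() { }

            /** Returns the constant of the given CodableEnum type whose code matches, or null if none does. */
            public static <T extends Enum<T> & CodableEnum> T getByCode(Class<T> type, String code) {
                for (T constant : type.getEnumConstants()) {
                    if (constant.getCode().equalsIgnoreCase(code)) {
                        return constant;
                    }
                }
                return null;
            }
        }

        // each enum's own method then shrinks to:
        //     public IMType getByCode(String code) { return CodableEnums.getByCode(IMType.class, code); }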

    Read the article

  • "for" loop from program 7.6 from Kochan's "Programming in Objective-C"

    - by Mr_Vlasov
    "The sigma notation is shorthand for a summation. Its use here means to add the values of 1/2^i, where i varies from 1 to n. That is, add 1/2 + 1/4 + 1/8 .... If you make the value of n large enough, the sum of this series should approach 1. Let’s experiment with different values for n to see how close we get." #import "Fraction.h" int main (int argc, char *argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; Fraction *aFraction = [[Fraction alloc] init]; Fraction *sum = [[Fraction alloc] init], *sum2; int i, n, pow2; [sum setTo: 0 over: 1]; // set 1st fraction to 0 NSLog (@"Enter your value for n:"); scanf ("%i", &n); pow2 = 2; for (i = 1; i <= n; ++i) { [aFraction setTo: 1 over: pow2]; sum2 = [sum add: aFraction]; [sum release]; // release previous sum sum = sum2; pow2 *= 2; } NSLog (@"After %i iterations, the sum is %g", n, [sum convertToNum]); [aFraction release]; [sum release]; [pool drain]; return 0; } Question: Why do we have to create additional variable sum2 that we are using in the "for" loop? Why do we need "release previous sum" here and then again give it a value that we just released? : sum2 = [sum add: aFraction]; [sum release]; // release previous sum sum = sum2; Is it just for the sake of avoiding memory leakage? (method "add" initializes a variable that is stored in sum2)

    Read the article

  • To NOLOCK or NOT to NOLOCK, that is the question

    - by Limey
    Hi all, this is really more of a discussion than a specific question about NOLOCK. I took over an app recently in which almost every query (and there are lots of them) has the NOLOCK option on it. Now, I am pretty new to SQL Server (used Oracle for 10 years), but I find this pretty disturbing. So this weekend I was talking with one of my friends who runs a rather large ecommerce site (name will be withheld to protect the guilty), and he says he has to do this with all of his SQL Servers because he will otherwise always end up in deadlocks.

    Is this just a huge shortfall of SQL Server? Is this just a failure in the DB design (mine is not 3rd normal form, but it's close)? Is anybody out there running a SQL Server app without NOLOCKs? These are issues that Oracle handles better with more granular record locks. Is SQL Server just not able to handle big loads? Is there some better workaround than reading uncommitted data? I would love to hear what people think. Thanks
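    For reference, the alternative most often brought up in this discussion (a sketch only - it has its own tempdb and semantics trade-offs) is row-versioning-based read committed, which moves SQL Server much closer to Oracle's readers-don't-block-writers behaviour without sprinkling NOLOCK on every query:

        -- database name is illustrative; the statement needs to be the only active connection in the database
        ALTER DATABASE MyShopDb SET READ_COMMITTED_SNAPSHOT ON;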

    Read the article

  • PHP – Slow String Manipulation

    - by Simon Roberts
    I have some very large data files, and for business reasons I have to do extensive string manipulation (replacing characters and strings). This is unavoidable. The number of replacements runs into hundreds of thousands. It's taking longer than I would like. PHP is generally very quick, but I'm doing so many of these string manipulations that it's slowing down and script execution is running into minutes. This is a pain because the script is run frequently. I've done some testing and found that str_replace is fastest, followed by strstr, followed by preg_replace. I've also tried individual str_replace statements as well as constructing arrays of patterns and replacements. I'm toying with the idea of isolating the string manipulation operations and writing them in a different language, but I don't want to invest time in that option only to find that the improvements are negligible. Plus, I only know Perl, PHP and COBOL, so for any other language I would have to learn it first. I'm wondering how other people have approached similar problems? I have searched, and I don't believe that this duplicates any existing questions.
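    One more candidate worth adding to the str_replace/strstr/preg_replace benchmark (a sketch only - whether it wins depends heavily on the size of the map and of the input): strtr() with an associative array performs all replacements in a single pass over the string and never re-scans text it has already replaced.

        <?php
        $input = file_get_contents('datafile.txt');   // placeholder source

        // illustrative pairs; the real map would hold the business replacements
        $map = array(
            'oldterm' => 'newterm',
            'x'       => 'y',
        );

        $output = strtr($input, $map);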

    Read the article

  • How to pass XML from C# to a stored procedure in SQL Server 2008?

    - by Geetha
    I want to pass an XML document to a SQL Server stored procedure such as this:

        CREATE PROCEDURE BookDetails_Insert (@xml xml)

    I want to compare some field data with data in another table, and if it matches, the record has to be inserted into the table. Requirements:

    How do I pass XML to the stored procedure? I tried this, but it doesn't work: [Working]

        command.Parameters.Add(
            new SqlParameter("@xml", SqlDbType.Xml)
            {
                Value = new SqlXml(new XmlTextReader(xmlToSave.InnerXml, XmlNodeType.Document, null))
            });

    How do I access the XML data within the stored procedure?

    Edit: [Working]

        String sql = "BookDetails_Insert";
        XmlDocument xmlToSave = new XmlDocument();
        xmlToSave.Load("C:\\Documents and Settings\\Desktop\\XML_Report\\Books_1.xml");

        SqlConnection sqlCon = new SqlConnection("...");
        using (DbCommand command = sqlCon.CreateCommand())
        {
            command.CommandType = CommandType.StoredProcedure;
            command.CommandText = sql;
            command.Parameters.Add(
                new SqlParameter("@xml", SqlDbType.Xml)
                {
                    Value = new SqlXml(new XmlTextReader(xmlToSave.InnerXml, XmlNodeType.Document, null))
                });

            sqlCon.Open();
            DbTransaction trans = sqlCon.BeginTransaction();
            command.Transaction = trans;
            try
            {
                command.ExecuteNonQuery();
                trans.Commit();
                sqlCon.Close();
            }
            catch (Exception)
            {
                trans.Rollback();
                sqlCon.Close();
                throw;
            }
        }

    Edit 2: How do I create a select query to select pages and description based on some conditions?

        <booksdetail>
          <isbn_13>700001048</isbn_13>
          <isbn_10>01048B</isbn_10>
          <Image_URL>http://www.landt.com/Books/large/00/7010000048.jpg</Image_URL>
          <title>QUICK AND FLUPKE</title>
          <Description> PRANKS AND JOKES QUICK AND FLUPKE </Description>
        </booksdetail>
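    On the "how do I access the XML within the stored procedure" part, a sketch using the xml type's nodes()/value() methods against the sample document above. The target table, column types and the matching condition are placeholders, not part of the question:

        CREATE PROCEDURE BookDetails_Insert (@xml xml)
        AS
        BEGIN
            INSERT INTO BookDetails (isbn_13, isbn_10, image_url, title, [description])
            SELECT  b.x.value('(isbn_13)[1]',     'varchar(20)'),
                    b.x.value('(isbn_10)[1]',     'varchar(20)'),
                    b.x.value('(Image_URL)[1]',   'varchar(500)'),
                    b.x.value('(title)[1]',       'nvarchar(200)'),
                    b.x.value('(Description)[1]', 'nvarchar(max)')
            FROM    @xml.nodes('/booksdetail') AS b(x)
            -- only insert rows that satisfy the comparison against the other table
            WHERE   NOT EXISTS (SELECT 1
                                FROM   ExistingBooks e
                                WHERE  e.isbn_13 = b.x.value('(isbn_13)[1]', 'varchar(20)'));
        END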

    Read the article
