Search Results

  • How to move the Windows 7 bootloader to the Windows 7 partition?

    - by pauldoo
    I recently installed Windows 7 in a triple-boot setup alongside XP and Linux. When I was finished and was in the process of restoring the bootloader for Linux, I discovered something strange about what Windows 7 had done: it had not installed a bootloader to its own partition, and had instead set up a bootloader on the pre-existing XP partition that offers a choice between 7 and XP. This behaviour has been noticed by others.

    Now my booting is slightly odd. I have GRUB on the MBR, which lets me choose between Linux and Windows. When I select Windows, GRUB boots to the XP partition, where I get a second choice between 7 and XP.

    Why doesn't the Windows 7 installer put the Windows 7 bootloader on the Windows 7 partition like all previous MS OSs? This is now going to be a real problem for me, as I want to wipe the XP partition and install something else there (probably another non-MS OS). How can I move the bootloader for Windows 7 onto the Windows 7 partition, thus making it bootable and allowing me to safely wipe the XP partition?

  • EF + UnitOfWork + SharePoint RunWithElevatedPrivileges

    - by Lorenzo
    In our SharePoint application we have used the UnitOfWork + Repository patterns together with Entity Framework. To avoid the use of passthrough authentication we have developed a piece of code that impersonates a single user before creating the ObjectContext instance, in a similar way to what is described in "Impersonating user with Entity Framework" on this site. The only difference between our code and the referred question is that, to do the impersonation, we are using RunWithElevatedPrivileges to impersonate the Application Pool identity, as in the following sample:

        SPSecurity.RunWithElevatedPrivileges(delegate()
        {
            using (SPSite site = new SPSite(url))
            {
                _context = new MyDataContext(ConfigSingleton.GetInstance().ConnectionString);
            }
        });

    We did it this way because we expected that creating the ObjectContext under impersonation, and handing the impersonated ObjectContext to the repositories, would satisfy our requirement. Unfortunately it's not so easy. In fact we found that, even if the ObjectContext is created beforehand and under impersonation, the real connection is made just before executing the query, and so it does not use impersonation, which breaks our requirement. I have checked the ObjectContext class to see if there is any event through which we can inject the impersonation, but unfortunately found nothing. Any help?
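
    One possibility (a minimal sketch, not verified against this exact setup) is to force the underlying connection open while still inside the elevated delegate; an ObjectContext that already holds an open connection reuses it instead of opening a new one lazily at query time:

        SPSecurity.RunWithElevatedPrivileges(delegate()
        {
            using (SPSite site = new SPSite(url))
            {
                _context = new MyDataContext(ConfigSingleton.GetInstance().ConnectionString);
                // Open the connection now, while still running under the
                // elevated (application pool) identity. Later queries should
                // reuse this connection rather than opening their own.
                _context.Connection.Open();
            }
        });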

  • Implementing WS-Security within a WCF proxy

    - by harrisonmeister
    Hi, I have imported an Axis-based WSDL into a VS 2008 project as a service reference. I need to be able to pass security details such as username/password and nonce values to call the Axis-based service. I have looked into doing it with WSE, which I understand the world hates (no issues there). I have very little experience of WCF, but have worked out how to physically call the endpoint now, thanks to SO, but have no idea how to set up the SoapHeaders as the schema below shows:

        <S:Envelope xmlns:S="http://www.w3.org/2001/12/soap-envelope"
                    xmlns:ws="http://schemas.xmlsoap.org/ws/2002/04/secext">
          <S:Header>
            <ws:Security>
              <ws:UsernameToken>
                <ws:Username>aarons</ws:Username>
                <ws:Password>snoraa</ws:Password>
              </ws:UsernameToken>
            </ws:Security>
            •••
          </S:Header>
          •••
        </S:Envelope>

    Any help much appreciated. Thanks, Mark
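
    For a plaintext UsernameToken (no nonce), one hedged starting point is WCF's built-in UserNameOverTransport support. This is a sketch: MyServiceClient and the endpoint URL stand in for the real generated proxy, and digest passwords/nonces are not supported out of the box in WCF (they need a custom token serializer):

        using System.ServiceModel;
        using System.ServiceModel.Channels;

        SecurityBindingElement security =
            SecurityBindingElement.CreateUserNameOverTransportBindingElement();
        security.IncludeTimestamp = false; // some Axis services reject wsu:Timestamp

        CustomBinding binding = new CustomBinding(
            security,
            new TextMessageEncodingBindingElement(),
            new HttpsTransportBindingElement());

        // MyServiceClient is the proxy generated by the service reference.
        MyServiceClient client = new MyServiceClient(
            binding, new EndpointAddress("https://example.com/axis/service"));
        client.ClientCredentials.UserName.UserName = "aarons";
        client.ClientCredentials.UserName.Password = "snoraa";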

  • WCF Callbacks often break

    - by cdecker
    I'm having quite some trouble with the WCF callback mechanism. It sometimes works, but most of the time it doesn't. I have a really simple interface for the callbacks to implement:

        public interface IClientCallback
        {
            [OperationContract]
            void Log(string content);
        }

    I then implement that interface with a class on the client:

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
        [ServiceContract]
        internal sealed class ClientCallback : IClientCallback
        {
            public void Log(string content)
            {
                Console.Write(content);
            }
        }

    And on the client I finally connect to the server:

        NetTcpBinding tcpbinding = new NetTcpBinding(SecurityMode.Transport);
        EndpointAddress endpoint = new EndpointAddress("net.tcp://127.0.0.1:1337");
        ClientCallback callback = new ClientCallback();
        DuplexChannelFactory<IServer> factory =
            new DuplexChannelFactory<IServer>(callback, tcpbinding, endpoint);
        factory.Open();
        _connection = factory.CreateChannel();
        ((ICommunicationObject)_connection).Faulted += new EventHandler(RecreateChannel);
        try
        {
            ((ICommunicationObject)_connection).Open();
        }
        catch (CommunicationException ce)
        {
            Console.Write(ce.ToString());
        }

    To invoke the callback I use the following:

        OperationContext.Current.GetCallbackChannel<IClientCallback>().Log("Hello World!");

    But it just hangs there, and after a while the client complains about timeouts. Is there a simple explanation as to why?
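
    For what it's worth, the classic cause of this symptom is a callback deadlock: with the default ConcurrencyMode.Single, the client cannot dispatch the incoming Log call while it is blocked inside an outgoing service call. A hedged sketch of the usual mitigation (note that WCF honors [CallbackBehavior], not [ServiceBehavior], on callback classes) is to make the callback reentrant and one-way:

        public interface IClientCallback
        {
            // One-way: the service does not block waiting on the client's logger.
            [OperationContract(IsOneWay = true)]
            void Log(string content);
        }

        [CallbackBehavior(UseSynchronizationContext = false,
                          ConcurrencyMode = ConcurrencyMode.Reentrant)]
        internal sealed class ClientCallback : IClientCallback
        {
            public void Log(string content)
            {
                Console.Write(content);
            }
        }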

  • Find actual value of PHP variable

    - by Simon S
    Hi all. I am having a real headache with reading in a tab-delimited text file and inserting it into a MySQL database. The tab-delimited text file was generated (I think) from a MS SQL database, and I have written a simple script to read in the file and insert it into an existing table in my MySQL database.

    However, there seems to be some problem with the data in the txt file. When my PHP script parses the file and I output the INSERT statements, the values in each of the fields are longer than they should be. For example, the first field should be a simple two-character alphanumeric value. If I echo out the INSERT statements, using Firebug (in Firefox), between each of the characters is a question mark in a black diamond. If I var_dump the values, I get the following:

        string(5) "A1"

    Now, this clearly shows a two-character string, but var_dump tells me it is five characters long!! If I trim() the value, all I get is the first character (in this case "A"). How can I get at the other characters, even if it is only to remove them? Additionally, this appears to be forcing MySQL to insert the value as a BLOB, not as a varchar as it should. Simon

  • Delegate OpenID to Google (NOT Google Apps)

    - by Rio
    OK, I searched this question on SO but found no good answer. After spending some time I figured out how to do it, and I'm going to answer it myself as a way to share it. Not sure if this is the correct way to use SO, but here it goes.

    It is now possible to delegate OpenID to your Google account (not Google Apps). No, this is not the demo OpenID provider running on App Engine. This is your REAL Google account!

    First you need to enable your Google Profile. Try to view your profile and edit it; there should be an option to set your profile URL. You have two choices there: either use your Gmail account name (without the @gmail.com part) as your profile id, or a random number assigned to you. It's up to you to decide which one to use. Either way, that id is your profile id below.

    Now add the following HTML code to your delegating page:

        <link rel="openid2.provider" href="https://www.google.com/accounts/o8/ud?source=profiles" >
        <link rel="openid2.local_id" href="http://www.google.com/profiles/[YOUR PROFILE ID]" >

    And it's done. Now try logging into SO with your custom URL!

  • Automatic multi-page multi-column flowing text with QtWebkit (HTML/CSS/JS -> PDF)

    - by Peter Boughton
    I have some HTML documents that are converted to PDF, using software that renders with QtWebkit (not sure which version). Currently, the documents contain specific tags to split the text into columns and pages, so whenever the wording changes, it is a manual, time-consuming process to move these tags so that the columns and pages fit.

    Can anyone provide a way to have text auto-wrapped into the next column/page (as appropriate) when it reaches the bottom of the current container? Any HTML, CSS or JS supported by QtWebkit is OK (assuming it works in the PDF converter). (I have tested the webkit-column-* properties in CSS3 and it appears QtWebkit does not support them.)

    To make things more exciting, it also needs to:

      - put a header at the top of each page, with "page X of Y" numbering;
      - if there is an odd number of pages, add a blank page at the end (with no header);
      - have the ability to say "don't break inside this block" or "don't break after this header".

    I have put some quick example initial markup and target markup together to help explain what I'm trying to do. (The actual documents are far more complicated than that, but I need a simple proof-of-concept before I attack the real ones.) Any suggestions?

    Update: I've got a partially working solution using Aaron's "filling up" suggestion - I'll post more details in a bit.

  • Behavior of retained property while holder is retained

    - by Aurélien Vallée
    Hello everyone, I am a beginner Objective-C programmer, coming from the C++ world. I find it very difficult to understand the memory management offered by NSObject :/ Say I have the following class:

        @interface User : NSObject
        {
            NSString* name;
        }
        @property (nonatomic,retain) NSString* name;
        - (id) initWithName: (NSString*) theName;
        - (void) release;
        @end

        @implementation User
        @synthesize name;

        - (id) initWithName: (NSString*) theName
        {
            if ( self = [super init] ) {
                [self setName:theName];
            }
            return self;
        }

        - (void) release
        {
            [name release];
            [super release];
        }
        @end

    Now, considering the following code, I can't understand the retain count results:

        NSString* name = [[NSString alloc] initWithCString:/*C string from sqlite3*/];
        // (1) name retainCount = 1
        User* user = [[User alloc] initWithName:name];
        // (2) name retainCount = 2
        [whateverMutableArray addObject:user];
        // (3) name retainCount = 2
        [user release];
        // (4) name retainCount = 1
        [name release];
        // (5) name retainCount = 0

    At (4), the retain count of name decreased from 2 to 1. But that's not correct: there is still the instance of user inside the array that points to name! The retain count of a variable should only decrease when the retain count of a referring variable is 0, that is, when it is dealloced, not released.

  • Combining data sets without losing observations in SAS

    - by John
    Hey guys, I know, another post, another problem :D :(. I took a screenshot to easily explain my problem: http://i39.tinypic.com/rhms0h.jpg

    As you can see, I want to merge two tables (again), the Base & Analyst table. What I want to achieve is displayed in the lower right corner table. I'm calculating the number of total analysts and female analysts for each month in the analyst table. In the base table I have different observations for one company (here company Alcoa with ticker AA). When I use the following command:

        data want;
            merge base analyst;
            by month;
        run;

    I get the problem shown in the upper right corner. The observations in my main table are being narrowed down to only 4 observations (one for each different year: 2001, 2002, 2005, 2006). What I want is for the observations not to be reduced, but for the same data to be placed on every row of the matching year, as shown in the lower right corner.

    What am I missing in my merge command? In both tables I have month as a time count variable (the observations in my base table are monthly) on which I need to merge. For clarity I added 2 screenshots of my real databases in SAS. The base table: http://i42.tinypic.com/dr5jky.jpg The analyst table: http://i40.tinypic.com/eqpmqq.jpg Here is what my merged table looks like: http://i43.tinypic.com/116i62s.jpg

    You can clearly see that the merged table only has four observations left for AA (one for each unique year) instead of the original 8. Anyone have an idea how to solve this?

  • How would I go about sharing variables in a class with Lua?

    - by Nicholas Flynt
    I'm fairly new to Lua. I've been working on implementing Lua scripting for logic in a game engine I'm putting together. I've had no trouble so far getting Lua up and running through the engine, and I'm able to call Lua functions from C and C functions from Lua.

    The way the engine works now, each Object class contains a set of variables that the engine can quickly iterate over to draw or process for physics. While game objects all need to access and manipulate these variables in order for the game engine itself to see any changes, they are free to create their own variables, and Lua is exceedingly flexible about this, so I don't foresee any issues.

    Anyway, currently the game-engine side of things is sitting in C land, and I really want it to stay there for performance reasons. So in an ideal world, when spawning a new game object, I'd need to be able to give Lua read/write access to this standard set of variables as part of the Lua object's base class, which its game logic could then proceed to run wild with.

    So far, I'm keeping two separate tables of objects in place: Lua spawns a new game object, which adds itself to a numerically indexed global table of objects, and then proceeds to call a C++ function, which creates a new GameObject class and registers the Lua index (an int) with the class. So far so good: C++ functions can now see the Lua object and easily perform operations or call functions in Lua land using dostring.

    What I need to do now is take the C++ variables, part of the GameObject class, and expose them to Lua, and this is where Google is failing me. I've encountered a very nice method here which details the process using tags, but I've read that this method is deprecated in favor of metatables.

    What is the ideal way to accomplish this? Is it worth the hassle of learning how to pass class definitions around using luabind or some equivalent method, or is there a simple way I can just register each variable (once, at spawn time) with the global Lua object? What's the "current" best way to do this, as of Lua 5.1.4?

  • Got a table of people, who I want to link to each other, many-to-many, with the links being bidirectional

    - by dflock
    Imagine you live in very simplified example land - and imagine that you've got a table of people in your MySQL database:

        create table person (
            person_id int,
            name text
        );

        select * from person;
        +-----------+-------+
        | person_id | name  |
        +-----------+-------+
        |         1 | Alice |
        |         2 | Bob   |
        |         3 | Carol |
        +-----------+-------+

    and these people need to collaborate/work together, so you've got a link table which links one person record to another:

        create table person__person (
            person__person_id int,
            person_id int,
            other_person_id int
        );

    This setup means that links between people are uni-directional - i.e. Alice can link to Bob without Bob linking to Alice and, even worse, Alice can link to Bob and Bob can link to Alice at the same time, in two separate link records. As these links represent working relationships, in the real world they're all two-way mutual relationships. The following are all possible in this setup:

        select * from person__person;
        +-------------------+-----------+-----------------+
        | person__person_id | person_id | other_person_id |
        +-------------------+-----------+-----------------+
        |                 1 |         1 |               2 |
        |                 2 |         2 |               1 |
        |                 3 |         2 |               2 |
        |                 4 |         3 |               1 |
        +-------------------+-----------+-----------------+

    For example, with person__person_id = 4 above, when you view Carol's (person_id = 3) profile, you should see a relationship with Alice (person_id = 1), and when you view Alice's profile, you should see a relationship with Carol, even though the link goes the other way.

    I realize that I can do union and distinct queries and whatnot to present the relationships as mutual in the UI, but is there a better way? I've got a feeling that there is a better way, one where this issue would neatly melt away by setting up the database properly, but I can't see it. Anyone got a better idea?

  • When *not* to use prepared statements?

    - by Ben Blank
    I'm re-engineering a PHP-driven web site which uses a minimal database. The original version used "pseudo-prepared-statements" (PHP functions which did quoting and parameter replacement) to prevent injection attacks and to separate database logic from page logic.

    It seemed natural to replace these ad-hoc functions with an object which uses PDO and real prepared statements, but after doing my reading on them, I'm not so sure. PDO still seems like a great idea, but one of the primary selling points of prepared statements is being able to reuse them… which I never will. Here's my setup:

      - The statements are all trivially simple. Most are in the form SELECT foo,bar FROM baz WHERE quux = ? ORDER BY bar LIMIT 1. The most complex statement in the lot is simply three such selects joined together with UNION ALLs.
      - Each page hit executes at most one statement and executes it only once.
      - I'm in a hosted environment and therefore leery of slamming their servers by doing any "stress tests" personally.

    Given that using prepared statements will, at minimum, double the number of database round-trips I'm making, am I better off avoiding them? Can I use PDO::MYSQL_ATTR_DIRECT_QUERY to avoid the overhead of multiple database trips while retaining the benefit of parametrization and injection defense? Or do the binary calls used by the prepared statement API perform well enough compared to executing non-prepared queries that I shouldn't worry about it?

    EDIT: Thanks for all the good advice, folks. This is one where I wish I could mark more than one answer as "accepted" — lots of different perspectives. Ultimately, though, I have to give rick his due… without his answer I would have blissfully gone off and done the completely Wrong Thing even after following everyone's advice. :-) Emulated prepared statements it is!

  • gdb reverse debugging error

    - by Werner
    Hi, I started to try reverse debugging with gdb 7, following the tutorial http://www.sourceware.org/gdb/wiki/ProcessRecord/Tutorial and I thought, great! Then I started to debug a real program which gives an error at the end. So I ran it with gdb, and I put a breakpoint just before the place I think the error appears. Then I typed "record" in order to start recording actions for future reverse debugging. But after some steps I get:

        Process record doesn't support instruction 0xf0d at address 0x2aaaab4c4b4e.
        Process record: failed to record execution log.

        Program received signal SIGTRAP, Trace/breakpoint trap.
        0x00002aaaab4c4b4e in memcpy () from /lib64/libc.so.6
        (gdb) n
        Single stepping until exit from function memcpy,
        which has no line number information.
        Process record doesn't support instruction 0xf0d at address 0x2aaaab4c4b4e.
        Process record: failed to record execution log.

        Program received signal SIGABRT, Aborted.
        0x00002aaaab4c4b4e in memcpy () from /lib64/libc.so.6

    Before I look at it in detail, I wonder if this feature is still buggy, or if I should start to record from the beginning. Where this "record" error happens, just an object is being created as a copy of another. Thanks

  • Pong: How does the paddle know where the ball will hit?

    - by Roflcoptr
    After implementing Pacman and Snake, I'm implementing the next very, very classic game: Pong. The implementation is really simple, but I have one little problem remaining: when one of the paddles (I'm not sure if it is called a paddle) is controlled by the computer, I have trouble positioning it correctly.

    The ball has a current position, a speed (which for now is constant) and a direction angle. So I could calculate the position where it will reach the side of the computer-controlled paddle, and then position the paddle right there. But in the real game, there is a probability that the computer's paddle will miss the ball. How can I implement this probability?

    If I only use a probability of, let's say, 0.5 that the computer's paddle will hit the ball, the problem is solved, but I think it isn't that simple. From the original game I think the probability depends on the distance between the current paddle position and the position where the ball will hit the border. Does anybody have any hints on how exactly this is calculated?
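
    One way to get that distance-dependent miss behaviour (a sketch of my own, in C#; the names and the error model are made up, not taken from the original game) is to predict the intercept exactly, then perturb it by an error that grows with how far the ball still has to travel:

        using System;

        static readonly Random rng = new Random();

        // Returns the y the AI paddle should steer toward. errorFactor tunes
        // difficulty: 0 = perfect play, larger values = more misses on long shots.
        static double TargetPaddleY(double ballX, double ballY, double angle,
                                    double paddleX, double fieldHeight, double errorFactor)
        {
            double dx = paddleX - ballX;
            double intercept = ballY + Math.Tan(angle) * dx;

            // Reflect off the top/bottom walls until the intercept is in the field.
            while (intercept < 0 || intercept > fieldHeight)
                intercept = intercept < 0 ? -intercept : 2 * fieldHeight - intercept;

            // Aiming error proportional to the remaining travel distance:
            // far shots are predicted badly, close shots almost perfectly.
            double error = (rng.NextDouble() * 2 - 1) * errorFactor * Math.Abs(dx);
            return intercept + error;
        }

    With this shape, the miss probability is never a hard-coded 0.5; it falls out of how often the error exceeds half the paddle's height, which naturally depends on distance.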

  • Makefile trickery using VPATH and include.

    - by roe
    Hi, I'm playing around with makefiles and the VPATH variable. Basically, I'm grabbing source files from a few different places (specified by VPATH) and compiling them into the current directory, using simply a list of the .o files that I want.

    So far so good. Now I'm generating dependency information into a file called '.depend' and including that. GNU make will attempt to use the rules defined so far to create the included file if it doesn't exist, so that's OK. Basically, my makefile looks like this:

        VPATH = A/source:B/source:C/source
        objects = first.o second.o third.o

        executable: $(objects)

        .depend: $(objects:.o=.c)
                $(CC) -MM $^ > $@

        include .depend

    Now for the real question: can I suppress the generation of the .depend file in any way? I'm currently working in a ClearCase environment - sloooow, so I'd prefer to have it a bit more under control when to update the dependency information.

    It's more or less an academic exercise, as I could just wrap the thing in a script which touches the .depend file before executing make (thus making it more recent than any source file), but it'd be interesting to know if I can somehow suppress it using 'pure' make. I cannot remove the dependency on the source files (i.e. simply using .depend:), as I'm depending on the $^ variable to do the VPATH resolution for me.

    If there'd be any way to only update dependencies as a result of updated #include directives, that'd be even better of course.. But I'm not holding my breath for that one.. :)

  • How to use the Request URL/URL Rewriting for Localization in ASP.NET - Using an HTTP Module or Global.asax

    - by LocalizedUrlDMan
    I wanted to see if there is a way to use the request URL/URL rewriting to set the language a page is rendered in, by examining a portion of the URL in ASP.NET.

    We have a site that already works with ASP.NET's resource localization, and users can change the language that they see pages/resources on the site in. However, the current mechanism is not very search-engine friendly, since the language variations for each language all appear as one page. It would be much better if we could have pages like www.site.com/en-mx/realfolder/realpage.aspx that allow linking to culture-specific versions of a page.

    I know lots of people have likely done localization through URL structures before, and I wanted to know if one of you could share how to do this in the Global.asax file or with an HTTP module (pointing to links to blog postings would be great too). We have a restriction that the site is based on ASP.NET 2.0 (so we can't use the 3.5+ features yet).

    Here is the example scenario: a real page exists at www.site.com/realfolder/realpage.aspx. The page has a mechanism for the user to change the language it is displayed in via a dropdown. There are search engine optimization and link-sharing benefits to doing this, since people can link directly to a page that has content applicable to a certain language (this could also include right-to-left layouts for languages like Japanese).

    I would like to use an HTTP module to see if the first part of the URL after www.site.com, site.com, subdomain.site.com, etc. contains a valid culture code (e.g. en-us, es-mx), and then use that value to set the localization culture of the page/resources based on that URL. So if the user accesses the URL www.site.com/en-MX/realfolder/realpage.aspx then the page will render in Mexico's variant of Spanish. If the user goes to www.site.com/realfolder/realpage.aspx directly, the page would just use their browser's language settings.
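
    One hedged sketch of the HTTP-module route (ASP.NET 2.0-compatible; the class name and regex are mine): peel a culture code off the front of the path, apply it to the current thread, and rewrite the URL so the real page still handles the request. A page's own culture settings can still override what the module sets:

        using System;
        using System.Globalization;
        using System.Text.RegularExpressions;
        using System.Threading;
        using System.Web;

        public class CultureUrlModule : IHttpModule
        {
            static readonly Regex CulturePrefix =
                new Regex(@"^/(?<culture>[a-z]{2}(-[a-zA-Z]{2})?)(?<rest>/.*)$",
                          RegexOptions.Compiled | RegexOptions.IgnoreCase);

            public void Init(HttpApplication context)
            {
                context.BeginRequest += OnBeginRequest;
            }

            public void Dispose() { }

            void OnBeginRequest(object sender, EventArgs e)
            {
                HttpContext ctx = ((HttpApplication)sender).Context;
                Match m = CulturePrefix.Match(ctx.Request.Path);
                if (!m.Success) return; // no culture in the URL: browser settings apply

                try
                {
                    CultureInfo culture = CultureInfo.GetCultureInfo(m.Groups["culture"].Value);
                    Thread.CurrentThread.CurrentCulture = culture;
                    Thread.CurrentThread.CurrentUICulture = culture;
                    // Serve /realfolder/realpage.aspx for /en-mx/realfolder/realpage.aspx.
                    ctx.RewritePath(m.Groups["rest"].Value);
                }
                catch (ArgumentException)
                {
                    // Not a real culture code; leave the URL alone.
                }
            }
        }

    It would still need the usual <httpModules> registration in web.config, and a folder whose name happens to look like a culture code (e.g. /it/) would be misread, so real sites may want a whitelist of supported cultures.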

  • Detecting what changed in an HTML Textfield

    - by teehoo
    For a major school project I am implementing a real-time collaborative editor. For a little background, basically what this means is that two (or more) users can type into a document at the same time, and their changes are automatically propagated to one another (similar to Etherpad).

    Now my problem is as follows. I want to be able to detect what changes a user carried out on an HTML textfield. They could:

      - insert a character
      - delete a character
      - paste a string of characters
      - cut a string of characters

    I want to be able to detect which of these changes happened and then notify other clients, similar to "insert character 'c' at position 2", etc.

    Anyway, I was hoping to get some advice on how I would go about implementing the detection of these changes. My first attempt was to consider the caret position before and after a change occurred, but this failed miserably. For my second attempt I was thinking about doing a diff on the entire contents of the textfield's old and new values. Am I missing anything obvious with this solution? Is there something simpler?
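
    For the diff idea, a single keystroke, paste, or cut always leaves exactly one changed window (old = prefix + removed + suffix, new = prefix + inserted + suffix), so a common-prefix/common-suffix scan is enough. A sketch, in C# for concreteness; the same logic ports directly to JavaScript:

        using System;

        static void DescribeChange(string oldText, string newText)
        {
            // Longest common prefix.
            int prefix = 0;
            while (prefix < oldText.Length && prefix < newText.Length
                   && oldText[prefix] == newText[prefix])
                prefix++;

            // Longest common suffix that doesn't overlap the prefix.
            int suffix = 0;
            while (suffix < oldText.Length - prefix && suffix < newText.Length - prefix
                   && oldText[oldText.Length - 1 - suffix] == newText[newText.Length - 1 - suffix])
                suffix++;

            string removed = oldText.Substring(prefix, oldText.Length - prefix - suffix);
            string inserted = newText.Substring(prefix, newText.Length - prefix - suffix);

            // A plain insert yields only the second message, a delete/cut only
            // the first, and a paste-over-selection yields both.
            if (removed.Length > 0)
                Console.WriteLine("delete \"{0}\" at position {1}", removed, prefix);
            if (inserted.Length > 0)
                Console.WriteLine("insert \"{0}\" at position {1}", inserted, prefix);
        }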

  • How do you replicate changes from one Excel sheet to another in two separate Excel apps?

    - by incognick
    This is all in C# .NET Excel interop automation for Office 2007. Say you create two Excel apps and open the same workbook in each application:

        app = new Excel.ApplicationClass();
        app2 = new Excel.ApplicationClass();
        string fileLocation = "myBook.xlsx";
        workbook = app.Workbooks.Open(fileLocation, Type.Missing, Type.Missing,
            Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing,
            Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing,
            Type.Missing, Type.Missing);
        workbook2 = app2.Workbooks.Open(fileLocation, Type.Missing, Type.Missing,
            Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing,
            Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing,
            Type.Missing, Type.Missing);

    Now, I want to replicate any changes that occur in workbook2 into workbook. I figured out that I can hook up the SheetChange event to capture cell changes:

        app.SheetChange += new Microsoft.Office.Interop.Excel.AppEvents_SheetChangeEventHandler(app_SheetChange);

        void app_SheetChange(object Sh, Microsoft.Office.Interop.Excel.Range Target)
        {
            Excel.Worksheet sheetReadOnly = (Excel.Worksheet)Sh;
            string changedRange = Target.get_Address(missing, missing,
                Excel.XlReferenceStyle.xlA1, missing, missing);
            Console.WriteLine("The value of " + sheetReadOnly.Name + ":" + changedRange
                + " was changed to = " + Target.Value2);
            Excel.Worksheet sheet = workbook.Worksheets[sheetReadOnly.Index] as Excel.Worksheet;
            Excel.Range range = sheet.get_Range(changedRange, missing);
            range.Value2 = Target.Value2;
        }

    How do you capture calculation changes? I can hook onto the Calculate event, but the only thing that is passed is the sheet, not the cells that were updated. I tried forcing an app.Calculate() or app.CalculateFullRebuild(), but nothing updates in the other application. The change event does not get fired when formulas change (i.e. a slider control causes a SheetCalculate event and not a SheetChange event). Is there a way to see what formulas were updated? Or is there an easier way to sync two workbooks programmatically in real time?
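
    Since the Calculate event only hands you the sheet, one workaround (a sketch of my own, not a verified interop recipe) is to give up on pinpointing the changed cells and bulk-copy the recalculated sheet's used range across. Writing the 2-D Value2 array in one assignment keeps the COM round-trips down:

        app2.SheetCalculate += new Excel.AppEvents_SheetCalculateEventHandler(app2_SheetCalculate);

        void app2_SheetCalculate(object Sh)
        {
            Excel.Worksheet src = (Excel.Worksheet)Sh;
            Excel.Worksheet dst = workbook.Worksheets[src.Index] as Excel.Worksheet;

            Excel.Range srcRange = src.UsedRange;
            string address = srcRange.get_Address(missing, missing,
                Excel.XlReferenceStyle.xlA1, missing, missing);

            // One bulk assignment of the value array instead of a COM call per cell.
            dst.get_Range(address, missing).Value2 = srcRange.Value2;
        }

    Copying values this way obviously flattens formulas in the target workbook, so it only fits the "mirror the results in real time" use case.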

  • What is the right approach to checksumming UDP packets

    - by mr.b
    I'm building a UDP server application in C#, and I've come across a packet checksum problem. As you probably know, each packet should carry some simple way of telling the receiver whether the packet data is intact. Now, UDP already has a 2-byte checksum as part of its header, which is optional, at least in the IPv4 world. The alternative method is to have a custom checksum as part of the data section of each packet, and to verify it on the receiver.

    My question boils down to: is it better to rely on the (optional) checksum in the UDP packet header, or to make a custom checksum implementation as part of the packet's data section?

    Perhaps the right answer depends on circumstances (as usual), so one circumstance here is that, even though the code is written and developed in .NET on Windows, it might have to run under the platform-independent Mono, so the eventual solution should be compatible with other platforms. I believe that a custom checksum algorithm would be easily portable, but I'm not so sure about the first option. Any thoughts? Also, thoughts about packet checksumming in general are welcome.
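
    If the custom-checksum route wins, a minimal sketch of the data-section option (my own; CRC-32 isn't in the 2.0-era BCL, hence the inline implementation) is to stamp four checksum bytes onto the front of every payload. Being fully managed, it should behave identically under Mono:

        using System;

        static uint Crc32(byte[] data, int offset, int count)
        {
            // Standard reflected CRC-32 (polynomial 0xEDB88320), bit-by-bit.
            uint crc = 0xFFFFFFFFu;
            for (int i = offset; i < offset + count; i++)
            {
                crc ^= data[i];
                for (int bit = 0; bit < 8; bit++)
                    crc = (crc & 1) != 0 ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
            }
            return ~crc;
        }

        static byte[] Wrap(byte[] payload)
        {
            byte[] packet = new byte[payload.Length + 4];
            Buffer.BlockCopy(payload, 0, packet, 4, payload.Length);
            // BitConverter follows host endianness; fine while both ends
            // run on little-endian hardware, otherwise fix the byte order.
            byte[] crc = BitConverter.GetBytes(Crc32(packet, 4, payload.Length));
            Buffer.BlockCopy(crc, 0, packet, 0, 4);
            return packet;
        }

        static bool Verify(byte[] packet)
        {
            if (packet.Length < 4) return false;
            uint stored = BitConverter.ToUInt32(packet, 0);
            return stored == Crc32(packet, 4, packet.Length - 4);
        }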

  • Polymorphic Numerics on .NET and in C#

    - by Bent Rasmussen
    It's a real shame that in .NET there is no polymorphism for numbers, i.e. no INumeric interface that unifies the different kinds of numerical types such as bool, byte, uint, int, etc. In the extreme, one would like a complete package of abstract algebra types. Joe Duffy has an article about the issue: http://www.bluebytesoftware.com/blog/CommentView,guid,14b37ade-3110-4596-9d6e-bacdcd75baa8.aspx

    How would you express this in C#, in order to retrofit it, without having influence over .NET or C#? I have one idea that involves first defining one or more abstract types (interfaces such as INumeric - or more abstract than that) and then defining structs that implement these and wrap types such as int, while providing operations that return the new type, e.g. Integer32 : INumeric, where addition would be defined as:

        public Integer32 Add(Integer32 other)
        {
            return Return(Value + other.Value);
        }

    I am somewhat afraid of the execution speed of this code, but at least it is abstract. No operator overloading goodness... Any other ideas? .NET doesn't look like a viable long-term platform if it cannot have this kind of abstraction, I think - and be efficient about it. Abstraction is reuse.
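
    To make the idea concrete, here is one hedged sketch of the wrapper approach (the names are mine, not from Joe Duffy's article). Constraining the generic parameter to the implementing struct lets the JIT specialize each instantiation, so the interface calls on the wrappers are not boxed:

        using System.Collections.Generic;

        public interface INumeric<T> where T : struct, INumeric<T>
        {
            T Add(T other);
            T Multiply(T other);
        }

        public struct Integer32 : INumeric<Integer32>
        {
            public readonly int Value;
            public Integer32(int value) { Value = value; }

            public Integer32 Add(Integer32 other) { return new Integer32(Value + other.Value); }
            public Integer32 Multiply(Integer32 other) { return new Integer32(Value * other.Value); }
        }

        public static class Numeric
        {
            // Works for any INumeric<T> wrapper without boxing; the caller
            // supplies the additive identity since the interface can't.
            public static T Sum<T>(IEnumerable<T> items, T zero) where T : struct, INumeric<T>
            {
                T total = zero;
                foreach (T item in items)
                    total = total.Add(item);
                return total;
            }
        }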

  • Why did this work with Visual C++, but not with gcc?

    - by Carlos Nunez
    I've been working on a senior project for the last several months now, and a major sticking point in our team's development process has been dealing with rifts between Visual C++ and gcc. (Yes, I know we all should have had the same development environment.) Things are about finished up at this point, but I ran into a moderate bug just today that had me wondering whether Visual C++ is easier on newbies (like me) by design.

    In one of my headers, there is a function that relies on strtok to chop up a string, do some comparisons and return a string with a similar format. It works a little something like the following:

        int main()
        {
            string a, b, c;
            //Do stuff with a and b.
            c = get_string(a, b);
        }

        string get_string(string a, string b)
        {
            const char *a_ch, *b_ch;
            a_ch = strtok(a.c_str(), ",");
            b_ch = strtok(b.c_str(), ",");
        }

    strtok is infamous for being great at tokenizing, but equally great at destroying the original string to be tokenized. Thus, when I compiled this with gcc and tried to do anything with a or b, I got unexpected behavior, since the separator used was completely removed in the string. Here's an example in case I'm unclear: if I set a = "Jim,Bob,Mary" and b = "Grace,Soo,Hyun", they would end up as a = "JimBobMary" and b = "GraceSooHyun" instead of staying the same like I wanted.

    However, when I compiled this under Visual C++, I got back the original strings and the program executed fine. I tried dynamically allocating memory to the strings and copying them the "standard" way, but the only way that worked was using malloc() and free(), which I hear is discouraged in C++.

    While I'm curious about that, the real question I have is this: why did the program work when compiled in VC++, but not with gcc? (This is one of many conflicts that I experienced while trying to make the code cross-platform.)

    Thanks in advance! -Carlos Nunez

  • How to terminate all [grand]child processes using C# on WXP (and newer MSWindows)

    - by NVRAM
    Question: How can I determine all processes in the child's process tree in order to kill them?

    I have an application, written in C#, that will:

      1. Get a set of data from the server,
      2. Spawn a 3rd-party utility to process the data, then
      3. Return the results to the server.

    This is working fine. But since a run consumes a lot of CPU and may take as long as an hour, I want to add the ability to have my app terminate its child processes. Some issues that rule out the simple solutions I've found elsewhere:

      - My app's child process "A" (an InstallAnywhere EXE, I think) spawns the real processing app "B" (a java.exe), which in turn spawns more children "C1".."Cn" (most of which are also written in Java).
      - There will likely be multiple copies of my application (and hence, multiple sets of its children) running on the same machine.
      - The child process is not in my control, so there might be some "D" processes in the future.
      - My application must run on 32-bit and 64-bit versions of MSWindows.

    On the plus side, there is no issue of data loss; a "clean" shutdown doesn't matter as long as the processes end fairly quickly.
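
    One candidate approach (a sketch, assuming a reference to System.Management and ignoring the PID-reuse race) is to walk Win32_Process by ParentProcessId and kill the tree depth-first; WMI works the same on 32- and 64-bit Windows:

        using System;
        using System.Diagnostics;
        using System.Management;

        static void KillTree(int pid)
        {
            // Find direct children via WMI, then recurse before killing the
            // parent, so no child is orphaned and missed mid-walk.
            using (ManagementObjectSearcher searcher = new ManagementObjectSearcher(
                "SELECT ProcessId FROM Win32_Process WHERE ParentProcessId = " + pid))
            {
                foreach (ManagementObject mo in searcher.Get())
                    KillTree(Convert.ToInt32(mo["ProcessId"]));
            }

            try
            {
                Process.GetProcessById(pid).Kill();
            }
            catch (ArgumentException)
            {
                // The process has already exited.
            }
        }

    Because each run's tree is rooted at its own "A" process, multiple copies of the application on one machine stay isolated: each only walks down from the child it spawned.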

  • Deployments and TFS, general questions

    - by Velika
    SOX requires that we have a separate group deploy our ASP.NET web applications to production. Currently, that group has access to our current code repository in VSS and uses VSS to deploy code that has been checked into VSS.

    How are deployments typically done for web applications? As a developer, I have used the Deploy function in Visual Studio to deploy code to a network share which corresponds to an IIS virtual folder, but I don't think we can expect that the deployment group will be purchasing a copy of Visual Studio just to do deployments. We could check the code into TFS, but what is the minimum software that that group would need to perform the deployment? Would a Team Explorer client access license suffice?

    I am aware that Team System has functionality to automate the building of an application. Do people typically deploy to production by copying aspx files and dlls from the QA environment to production, or do you normally deploy from TFS or even VS directly? It seems to me that the preferred approach would be to deploy from the QA environment, since that is the environment that must have been approved for release, or that those files should be checked into TFS and then deployed from TFS, assuming you can deploy from TFS.

    What confuses me is whether bin (binary) files that are local to the project should go into TFS. If so, doesn't this create problems for other developers, in that only one developer - the one with the binary checked out - can actually debug, because debugging requires write access to the binaries? Does this mean that the binaries shouldn't be checked into TFS? But eventually, if you deploy from TFS, the binaries HAVE to be added to TFS. Are they added as a separate (compiled) application node? If so, this sounds real ugly. I would assume not. How does one ensure that the binaries match the source code that we mark with a particular version number?

    Obviously, I'm clueless. Can someone give me a general idea of how you handle version control and deployments, in particular using TFS?

  • Should I go for Arrays or Objects in PHP in a CouchDB/Ajax app?

    - by karlthorwald
    I find myself converting between array and object all the time in a PHP application that uses CouchDB and Ajax. Of course I am also converting objects to JSON and back (sometimes for CouchDB but mostly for Ajax), but this is not disturbing my workflow so much.

    At present I have PHP objects that are returned by the CouchDB modules I use, and on the other hand I have the old habit of returning arrays like array("error" => "not found", "data" => $dataObj) from my functions. This leads to a mixed occurrence of real PHP objects and nested arrays, and I cast with (object) or (array) as necessary. The worst thing is that I know more or less by heart what a function returns, but not what type (array or object), so I often run into type errors.

    My plan is now to always cast arrays to objects before returning from a function. Of course this implies a lot of refactoring. Is this the right way to go? What about the conversion overhead? Other ideas or tips?

    Edit: Kenaniah's answer suggests I should go the other way, meaning I'd cast everything to arrays. For all the Ajax/JSON stuff, and also for CouchDB, I would then use:

        $myarray = json_decode($json_data, $assoc = true);

    Even more work to change all the CouchDB and Ajax functions, but in the end I'd have better code.

  • What should a developer know before building a public web site?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web site address before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well?

    I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports).

    Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web.

    Also: I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So going beyond that, which standards? In what circumstances, and why? Provide a link to the standard's specification.

    This question is community wiki, so please feel free to edit the answer to add links to good articles that will help explain or teach each particular point.
