Search Results

Search found 22641 results on 906 pages for 'use case'.


  • web.config + asp.net MVC + location > system.web > authorization + Integrated Security

    - by vdh_ant
    Hi guys, I have an ASP.NET MVC app using Integrated Security, and I need to be able to grant open access to a specific route. The route in question is '~/Agreements/Upload', and the config I have set up looks like this: <configuration> ... <location path="~/Agreements/Upload"> <system.web> <authorization> <allow users="*"/> </authorization> </system.web> </location> ... </configuration> I have tried a few things and nothing has worked thus far. In IIS, under Directory Security > Authentication Methods, I only have "Integrated Windows Authentication" selected. Now this could be part of my problem (as even though ASP.NET allows the above, IIS doesn't). But if that's the case, how do I configure it so that Integrated Security works but people who aren't authenticated can still access the given route? Cheers Anthony
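
    For reference, the path attribute of <location> conventionally takes an app-relative path without the '~/' prefix; a minimal sketch of that form (assuming the route corresponds to that relative path, which the post doesn't confirm):

        <configuration>
          <location path="Agreements/Upload">
            <system.web>
              <authorization>
                <allow users="*"/>  <!-- '*' means all users, anonymous included -->
              </authorization>
            </system.web>
          </location>
        </configuration>

    Even with this in place, IIS has to permit anonymous access to the resource before ASP.NET's authorization rules are ever consulted, which matches the suspicion voiced above.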


  • Plink SSH: '-m file' option not working

    - by Technext
    Hi, I am trying to use Plink to run commands on a remote server. Both the local and remote machines are Windows. Though I am able to connect to the remote machine using Plink, I am not able to use the '-m file' option. I tried the following three ways, but to no avail:
    Try 1: plink.exe -ssh -pw mypwd gchhabra@machine -m file.txt
    Could not chdir to home directory /home/gchhabra: No such file or directory
    dir: not found
    (file.txt contains only one command, i.e., dir)
    Try 2: plink.exe -ssh -pw mypwd gchhabra@machine dir
    Could not chdir to home directory /home/gchhabra: No such file or directory
    dir: not found
    Try 3: plink.exe -ssh -pw mypwd gchhabra@machine < file.txt
    In this case, I get the following output:
    Using username "gchhabra".
    ****USAGE WARNING****
    This is a private computer system. This computer system, including all ..... including personal information, placed or sent over this system may be monitored. Use of this computer system, authorized or unauthorized, constitutes consent ... constitutes consent to monitoring for these purposes.
    dirCould not chdir to home directory /home/gchhabra: No such file or directory
    Microsoft Windows [Version x.x.xxx] (C) Copyright 1985-2003 Microsoft Corp.
    C:\Program Files\OpenSSH
    After I get the above prompt, it hangs. Can anyone please help me with this? Regards, Gaurav
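
    One observation worth noting: the "dir: not found" and the /home/gchhabra chdir failure suggest the remote OpenSSH service hands commands to a POSIX-style shell rather than to cmd.exe, in which case dir is not a recognized command. A hedged sketch of a workaround (an assumption about this server, not a confirmed fix) is to invoke cmd explicitly from file.txt:

        cmd /c dir

    using the same invocation as Try 1, i.e. plink.exe -ssh -pw mypwd gchhabra@machine -m file.txt.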


  • Navigating to nodes using xpath in flat structure

    - by James Berry
    I have an xml file in a flat structure. We do not control the format of this xml file; we just have to deal with it. I've renamed the fields because they are highly domain-specific and don't really make any difference to the problem. <attribute name="Title">Book A</attribute> <attribute name="Code">1</attribute> <attribute name="Author"> <value>James Berry</value> <value>John Smith</value> </attribute> <attribute name="Title">Book B</attribute> <attribute name="Code">2</attribute> <attribute name="Title">Book C</attribute> <attribute name="Code">3</attribute> <attribute name="Author"> <value>James Berry</value> </attribute> Key things to note: the file is not particularly hierarchical. Books are delimited by an occurrence of an attribute element with name='Title'. But the name='Author' attribute node is optional. Is there a simple xpath statement I can use to find the authors of book 'n'? It is easy to identify the title of book 'n', but the authors value is optional. And you can't just take the following author, because in the case of book 2 this would give the author for book 3. I have written a state machine to parse this as a series of elements, but I can't help thinking there must be a way to get the results I want directly.
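
    For what it's worth, a sketch of one XPath 1.0 approach (untested, and assuming the attribute elements sit under a single root element, which the fragment above omits): select the Author whose nearest preceding Title sibling names the book in question, e.g. for 'Book B':

        /*/attribute[@name='Author'][preceding-sibling::attribute[@name='Title'][1] = 'Book B']/value

    Because the predicate looks backwards to the closest preceding Title, book 2 correctly yields no authors rather than borrowing book 3's.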


  • Flex List ItemRenderer with image loses BitmapData when scrolling

    - by Dominik
    Hi, I have an mx:List with a dataProvider. This data provider is an ArrayCollection of FotoItems: public class FotoItem extends EventDispatcher { [Bindable] public var data:Bitmap; [Bindable] public var id:int; [Bindable] public var duration:Number; public function FotoItem(data:Bitmap, id:int, duration:Number, target:IEventDispatcher=null) { super(target); this.data = data; this.id = id; this.duration = duration; } } my itemRenderer looks like this: <?xml version="1.0" encoding="utf-8"?> <mx:VBox xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" > <fx:Script> <![CDATA[ import mx.collections.ArrayCollection; ]]> </fx:Script> <s:Label text="index"/> <mx:Image source="{data.data}" maxHeight="100" maxWidth="100"/> <s:Label text="Duration: {data.duration}ms"/> <s:Label text="ID: {data.id}"/> </mx:VBox> Now when I scroll, all images that leave the screen disappear :( When I take a look at the ArrayCollection, every item's BitmapData is null. Why is this the case?


  • How does browser know when to prompt user to save password?

    - by Eric
    This is related to the question I asked here: http://stackoverflow.com/questions/2382329/how-can-i-get-browser-to-prompt-to-save-password This is the problem: I CAN'T get my browser to prompt me to save the password for the site I'm developing. (I'm talking about the bar that appears sometimes when you submit a form in Firefox, that says "Remember the password for yoursite.com? Yes / Not now / Never") This is super frustrating, because this feature of Firefox (and most other modern browsers, which I hope work in a similar fashion) seems to be a mystery. It's like a magic trick the browser does, where it looks at your code, or what you submit, or something, and if it "looks" like a login form with a username (or email address) field and a password field, it offers to save. Except in this case it's not offering my users that option after they use my login form, and it's driving me nuts. :-) (I checked my Firefox settings - I have NOT told the browser "never" for this site. It should be prompting.) My question: what exactly are the heuristics that Firefox (or any other modern browser) uses to decide when it should prompt the user to save? This shouldn't be too difficult to answer, since it's right there in the Mozilla source (I don't know where to look or else I'd try to dig it out myself). You'd think there would be a blog post or some other similar developer note from the Mozilla developers about this, but I can't find that either. (Note that if your answer to me has anything to do with cookies, encryption, or anything else about how I'm storing the users' passwords in the database, you've probably misread my question. :-)
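
    For background, the commonly described trigger (a sketch of the folk understanding, not a citation of the actual Mozilla source) is a real, native form submission where the submitted form contains a password field, roughly:

        <form action="/login" method="post">
          <input type="text" name="username">
          <input type="password" name="password">  <!-- the field the heuristic keys on -->
          <input type="submit" value="Log in">
        </form>

    Logins performed purely through XMLHttpRequest, or forms whose native submission is cancelled in JavaScript, typically defeat the detection, which is a common reason the bar never appears.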


  • JScrollPane without scrollbars

    - by Erik Itland
    I'm trying to use a JScrollPane to display a JPanel that might be too big for the containing JPanel. I don't want to show the scrollbars (yes, this is questionable UI design, but it is my best guess at what the customer wants. We use the same idea in other places in the application, and I feel this case has given me enough time to ponder whether I can do it in a better way, but if you have a better idea I might accept it as an answer.)
    First attempt: set verticalScrollBarPolicy to NEVER. Result: scrolling using the mouse wheel doesn't work.
    Second attempt: set the scrollbars to null. Result: scrolling using the mouse wheel doesn't work.
    Third attempt: set the scrollbars' visible property to false. Result: it is immediately set visible again by Swing.
    Fourth attempt: inject a scrollbar where setVisible is overridden to do nothing when called with true. Result: can't remember exactly, but I think it just didn't work.
    Fifth attempt: inject a scrollbar where setBounds is overridden. Result: just didn't look nice. (I might have missed something here, though.)
    Sixth attempt: ask stackoverflow. Result: pending.
    Scrolling works once the scrollbars are back.
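
    To frame the attempts above, a minimal sketch of a listener-based route (standard Swing API; the wrapper class and panel name are invented for illustration): keep the NEVER policies, but drive the hidden vertical scrollbar's model by hand from a mouse-wheel listener.

        import java.awt.event.MouseWheelEvent;
        import java.awt.event.MouseWheelListener;
        import javax.swing.*;

        public class NoBarScroll {
            public static JScrollPane wrap(JComponent bigPanel) {
                final JScrollPane scroll = new JScrollPane(bigPanel,
                        JScrollPane.VERTICAL_SCROLLBAR_NEVER,
                        JScrollPane.HORIZONTAL_SCROLLBAR_NEVER);
                scroll.addMouseWheelListener(new MouseWheelListener() {
                    public void mouseWheelMoved(MouseWheelEvent e) {
                        // The scrollbar object exists even though it is never shown.
                        JScrollBar bar = scroll.getVerticalScrollBar();
                        bar.setValue(bar.getValue()
                                + e.getWheelRotation() * bar.getUnitIncrement(1));
                    }
                });
                return scroll;
            }
        }

    The policy keeps the bars invisible while the scrollbar's BoundedRangeModel still does the actual scrolling.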


  • Reformat SQLGeography polygons to JSON

    - by James
    I am building a web service that serves geographic boundary data in JSON format. The geographic data is stored in an SQL Server 2008 R2 database using the geography type in a table. I use the [ColumnName].ToString() method to return the polygon data as text. Example output: POLYGON ((-6.1646509904325884 56.435153006374627, ... -6.1606079906751 56.4338050060666)) MULTIPOLYGON (((-6.1646509904325884 56.435153006374627 0 0, ... -6.1606079906751 56.4338050060666 0 0))) Geographic definitions can take the form of either an array of lat/long pairs defining a polygon or, in the case of multiple definitions, an array of polygons (multipolygon). I have the following regex that converts the output to JSON objects contained in multi-dimensional arrays, depending on the output. Regex latlngMatch = new Regex(@"(-?[0-9]{1}\.\d*)\s(\d{2}\.\d*)(?:\s0\s0,?)?", RegexOptions.Compiled); private string ConvertPolysToJson(string polysIn) { return this.latlngMatch.Replace(polysIn.Remove(0, polysIn.IndexOf("(")) // remove POLYGON or MULTIPOLYGON .Replace("(", "[") // convert to JSON array syntax .Replace(")", "]"), // same as above "{lng:$1,lat:$2},"); // reformat lat/lng pairs to JSON objects } This is actually working pretty well and converts the DB output to JSON on the fly in response to an operation call. However, I am no regex master and the calls to String.Replace() also seem inefficient to me. Does anyone have any suggestions/comments about the performance of this?


  • Why don't file type filters work properly with nsIFilePicker on Mac OSX?

    - by Eric Strom
    I am running a chrome app in Firefox (started with -app) with the following code to open a filepicker: var nsIFilePicker = Components.interfaces.nsIFilePicker; var fp = Components.classes["@mozilla.org/filepicker;1"] .createInstance(nsIFilePicker); fp.init(window, "Select Files", nsIFilePicker.modeOpenMultiple); fp.appendFilter("video", "*.mov; *.mpg; *.mpeg; *.avi; *.flv; *.m4v; *.mp4"); fp.appendFilter("all", "*.*"); var res = fp.show(); if (res == nsIFilePicker.returnCancel) return; var files = fp.files; var paths = []; while (files.hasMoreElements()) { var arg = files.getNext().QueryInterface( Components.interfaces.nsILocalFile ).path; paths.push(arg); } Everything seems to work fine on Windows, and the file picker itself works on OS X, but the dropdown menu for selecting between file types only displays on Windows. The first filter (video in this case) is in effect, but the dropdown to select the other type never shows. Is there something extra needed to get this working on OS X? I have tried the latest Firefox (3.6) and an older one (3.0.13), and neither shows the file type dropdown on OS X.


  • Display an input type=file over another input type=file

    - by Kevin Sedgley
    WARNING: lengthy description coming up! I have written an uploader based upon the APC progress uploader for PHP. This works fine and dandy, but the script as a whole (APC etc.) is intended to be used only by those with Javascript. To achieve this, I have searched for any input type=file and replaced these with an absolutely positioned form that appears over the original area where the old file input area was. The reasons for this are that the new uploader:
    - can submit to a hidden in-page IFrame;
    - has to be in a separate <form> in order to submit to the APC receiver and display the progress upload bar;
    - can be used within any form with an input type=file throughout the site.
    I have used jQuery to do this, with the following code. Original HTML form code: <div><input type="file" name="media" id="media" /></div> Find the position of the div block: // get the parent div, and properties thereof parentDiv = $(this).closest('div'); w = $(parentDiv).width(); h = $(parentDiv).height(); loc = $(parentDiv).offset(); Locate the new block over the old block: $('#_sender').appendTo('body').css({left:loc.left,top:loc.top,position:'absolute',zIndex:400,height:h,width:w}).show(); This works fine, and shows over the old block OK. The problem: when other elements in the DOM before or above it change (in this case a "tree view" selector is pushing the old block down), the new upload form gets moved over other elements. Is there a jQuery (or JS) method for reacting to this when the DOM changes? Some kind of .onchange for the page?! Or an .onmove for the original block? Thanks in advance, you lovely people. (Screenshots of before and after the DOM change omitted.)
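
    A sketch of the usual workaround (the function name is invented; jQuery of this era has no built-in "element moved" event, so the placement is re-run on known triggers and polled as a catch-all):

        function repositionUploader() {
            var loc = parentDiv.offset();
            $('#_sender').css({ left: loc.left, top: loc.top });
        }
        // Known trigger: window resizes.
        $(window).resize(repositionUploader);
        // Catch-all for layout shifts such as the tree view expanding.
        setInterval(repositionUploader, 250);

    Cheap to run, and it keeps the overlay glued to wherever the original input ends up.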


  • Is ASP.NET MVC really MVC? Or how to separate the model from the controller?

    - by Andrey
    Hi all, this question is a bit rhetorical. At some point I got the feeling that ASP.NET MVC is not that authentic an implementation of the MVC pattern. Or maybe I didn't understand it. Consider the following domain: an electric bulb, a switch and a motion detector. They are connected together, and when you enter the room the motion detector switches on the bulb. If I want to represent them as MVC: the switch is the model, because it holds the state and contains the logic; the bulb is the view, because it presents the state of the model to a human; the motion detector is the controller, because it converts user actions to generic model commands. The switch has one private field (On/Off) as its state and two methods (PressOn, PressOff). If you call PressOn when it is Off, it goes to On; if you call it again, the state doesn't change. The bulb can be replaced with a buzzer, the motion detector with a timer or a button, but the model still represents the same logic. Either way, the system will have the same behavior. This is how I understand classical MVC decomposition; please correct me if I am wrong. Now let's decompose it the ASP.NET MVC way. The bulb is still the view. The controller will be the switch + the motion detector. The model is some object that will just pass state to the bulb. So the logic that defines behavior moves to the controller. Question 1: is my understanding of MVC and ASP.NET MVC correct? Question 2: if yes, do you agree that ASP.NET MVC is not a 100% accurate implementation? And back to life. The final question is how to separate the model from the controller in the case of ASP.NET MVC. There can be two extremes. One: the controller does the basic stuff and calls the model to do all the logic. The other: the controller does all the logic, and the model is just something like a class with properties that is mapped to the DB. Question 3: where should I draw the line between these extremes? How do I balance them? Thanks, Andrey
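
    To make the first extreme concrete, a hypothetical sketch (all type names invented for illustration) of a thin controller delegating to a rich model:

        using System.Web.Mvc;

        // Domain model: holds the state and the rules, knows nothing of MVC.
        public class LightSwitch
        {
            public bool IsOn { get; private set; }
            public void PressOn()  { IsOn = true; }   // idempotent, as described above
            public void PressOff() { IsOn = false; }
        }

        // Controller: only translates a "user action" into a model command.
        public class RoomController : Controller
        {
            private readonly LightSwitch lightSwitch = new LightSwitch();

            public ActionResult MotionDetected()
            {
                lightSwitch.PressOn();
                return View(lightSwitch.IsOn); // the view just renders model state
            }
        }

    In this reading, the behavior stays in the model, and the ASP.NET MVC 'controller' is closer to classical MVC's input translation.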


  • Ruby gem install question + answer (on Windows Vista Home Basic environment)

    - by Vamsi
    Recently I have been having problems installing the rcov gem on my Windows (Vista Home Basic) environment, so after googling I found one solution, and that is: gem install rcov -v 0.8.1.1.0 #version that installs without errors gem update rcov #update to the latest version, in my case rcov-0.8.1.2.0-x86-mswin32 But this solution didn't work on my colleague's system (Windows XP), and after that we came to know about the RubyInstaller DevKit for Windows. But that DevKit is not working on my Vista; when I tried gem install rcov at my command prompt, it gave me this error:
    Building native extensions. This could take a while...
    ERROR: Error installing rcov:
    ERROR: Failed to build gem native extension.
    D:/Spritle/Programs/Ruby/bin/ruby.exe extconf.rb
    creating Makefile
    nmake
    'nmake' is not recognized as an internal or external command, operable program or batch file.
    Gem files will remain installed in D:/Spritle/Programs/Ruby/lib/ruby/gems/1.8/gems/rcov-0.9.8 for inspection.
    Results logged to D:/Spritle/Programs/Ruby/lib/ruby/gems/1.8/gems/rcov-0.9.8/ext/rcovrt/gem_make.out
    So after that my colleague tried to install nmake as well, but it was throwing some other error. Can someone suggest a better solution that works on all Windows environments? I am aware of Cygwin for Windows, but I am not sure that is a 100% solution either.


  • How to access application.xml file of an EAR deployed to IBM WebSphere 6.1

    - by Matt1776
    I am deploying an EAR file to the IBM WebSphere server 6.1 - I want to be able to access the EAR application name, which is stored in the deployment descriptor under 'display-name'. Looking through Stack Overflow posts on related subjects, I've been able to gather that this is possible via the Java MBean API - or IBM's WAS API - the problem is I cannot find a place where these APIs are summarized, i.e. I cannot figure out which one to begin looking at. I could hardcode the WAS install location and find the file by looking in the 'installedApps' directory, but this is not dynamic. Does anyone have any experience working with these APIs? Any other way to dynamically find the deployed EAR's display name? EDIT - I should add that the reason I would like this information is to dynamically load our properties files - which are named by the following convention, "EARAppName.properties" - so you see there IS a reasonable 'rationale' behind desiring this information in my application. EDIT 2 - I should also note that this app will always be deployed on a WAS - but in case it isn't, a generic non-proprietary solution would be preferred, though not necessary at this moment. EDIT 3 - What I want to accomplish: is there a way to dynamically find the deployed EAR's display name from within the application code?


  • T-SQL Table Joins - Unique Situation

    - by Dimitri
    Hello everyone. This is my first time encountering a case like this, and I don't quite know how to handle it. Situation: I have one table, tblSettingDefinition, with fields ID, GroupID, Name, TypeID, DefaultValue. Then I have tblSettingTypes with fields TypeID, Name. And I have a final table, tblUserSettings, with fields SettingID, SettingDefinitionID, UserID, Value. The whole point of this is to have customizable settings. A setting can be defined for a group or as a global setting (if GroupID is NULL). It will have a default value, but if a user modifies the setting, an entry is added to tblUserSettings that stores the new value. I want a query that grabs user settings by first looking at tblUserSettings and, if it has records for the given user, grabs them; if not, it retrieves the default settings. But the trick is that whether or not the user has settings, I need the fields from the other two tables retrieved, to know the setting's type, name, etc. (which are stored in those other tables). I'm writing a query something like this: SELECT * FROM tblSettingDefinition SD LEFT JOIN tblUserSettings US ON SD.SettingID = US.SettingDefinitionID JOIN tblSettingTypes ST ON SD.TypeID=ST.ID WHERE US.UserID=@UserID OR ((SD.GroupID IS NULL) OR (SD.GroupID=(SELECT GroupID FROM tblUser WHERE ID=@UserID))) but it retrieves settings for all users from tblUserSettings instead of just the ones that match the current @UserID. And if @UserID has no records in tblUserSettings, all user settings are still retrieved instead of the defaults from tblSettingDefinition. Hope I made myself clear. Any help would be highly appreciated. Thank you.
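
    A sketch of the usual fix for this shape of problem (column names taken from the table definitions above, so adjust where the snippet and the schema disagree): move the per-user filter out of the WHERE clause and into the LEFT JOIN condition, so definitions without a user row survive as NULLs instead of being filtered away.

        SELECT *
        FROM tblSettingDefinition SD
        JOIN tblSettingTypes ST
            ON SD.TypeID = ST.TypeID
        LEFT JOIN tblUserSettings US
            ON US.SettingDefinitionID = SD.ID
           AND US.UserID = @UserID              -- filter here, not in WHERE
        WHERE SD.GroupID IS NULL
           OR SD.GroupID = (SELECT GroupID FROM tblUser WHERE ID = @UserID);

    COALESCE(US.Value, SD.DefaultValue) then gives the effective value per setting in one pass.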


  • Bash: how to simply parallelize tasks?

    - by NoozNooz42
    I'm writing a tiny script that calls the "PNGOUT" util on a few hundred PNG files. I simply did this: find $BASEDIR -iname "*png" -exec pngout {} \; And then I looked at my CPU monitor and noticed only one of the cores was used, which is quite sad. In this day and age of dual-, quad-, octo- and hexa-core (?) desktops, how do I simply parallelize this task with Bash? (It's not the first time I've had such a need; quite a lot of these utils are single-threaded... I already hit this with MP3 encoders.) Would simply running all the pngout invocations in the background do? What would my find command look like then? (I'm not too sure how to mix find and the '&' character.) If I have three hundred pictures, this would mean swapping between three hundred processes, which doesn't seem great anyway!? Or should I copy my three hundred files or so into "nb dirs", where "nb dirs" would be the number of cores, then run "nb finds" concurrently? (Which would be close enough.) But how would I do this?
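
    A minimal sketch of the common answer, assuming GNU xargs (its -P flag caps the number of concurrent jobs; set it to the core count):

        # One pngout per file, at most 4 running at any moment.
        find "$BASEDIR" -iname '*.png' -print0 | xargs -0 -n 1 -P 4 pngout

    This sidesteps both the three-hundred-background-jobs explosion and the manual directory split.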


  • Why aren't Google API clients built on top of Apache's Abdera project?

    - by lisak
    Hey, could anybody please explain this to me? As far as I can see, the developers of Java's Google API client library are reinventing the wheel. It's like writing a new JDK for a Java project. I'm aware of the fact that the Google Data protocol is a little specific regarding Atom publishing, but if one needs to use some of the fancy extensions and features that the Apache Abdera project offers for this protocol, it is better not to use the Google API client library and to implement the client from scratch with Abdera... And I'm sure that in a lot of cases its features, such as Abdera's JCR adapter, would come in very handy for Google Docs, Google Translator Toolkit and others. Now it's great that there is a Google API client library to be used for Google Docs, but what am I going to do with the documents? I believe that in more than half the cases there is also a repository or database on the other side. And in that case, Abdera is needed, not the simple Google API clients that only marshal/unmarshal the feeds... In fact, there is something to persist in all of the Google APIs. It would make sense if Google decided to invest the effort into enhancing Abdera... This doesn't... Also, to make the question more specific: how are you developing Google API clients that need entry persistence (JCR for instance)? What would be the best way to integrate a Google API client library with Apache Abdera?


  • MyMessage<T> throws an exception when calling XmlSerializer

    - by Arthis
    I am very new to NServiceBus. I am using version 3.0.1, the latest to date. And I wonder if my case hits a normal limitation of NSB that I am not aware of. I have an ASP.NET MVC application that I am trying to set up, and in my global.asax I have the following: var configure = Configure.WithWeb() .DefaultBuilder() .ForMvc() .XmlSerializer(); But I get an error from the XmlSerializer when it deals with one of my objects: [Serializable] public class MyMessage<T> : IMessage { public T myobject { get; set; } } I pass through: XmlSerializer() instance.Initialize(types); this.InitType(type, moduleBuilder); this.InitType(info2.PropertyType, moduleBuilder); and then, when dealing with T, string typeName = GetTypeName(t); typeName is null, and the following instruction, if (!nameToType.ContainsKey(typeName)), fails because a null value is not allowed. Is this a limitation of NServiceBus, or am I messing something up?


  • Convert UCS-2 characters to UTF-8 Using C#

    - by quanticle
    I'm pulling some internationalized text from an MS SQL Server 2005 database. As per the defaults for that DB, the characters are stored as UCS-2. However, I need to output the data in UTF-8 format, as I'm sending it out over the web. Currently, I have the following code to convert: SqlString dbString = resultReader.GetSqlString(0); byte[] dbBytes = dbString.GetUnicodeBytes(); byte[] utf8Bytes = System.Text.Encoding.Convert(System.Text.Encoding.Unicode, System.Text.Encoding.UTF8, dbBytes); System.Text.UTF8Encoding encoder = new System.Text.UTF8Encoding(); string outputString = encoder.GetString(utf8Bytes); However, when I examine the output in the browser, it appears to be garbage, no matter what I set the encoding to. What am I missing? EDIT: In response to the answers below, the reason I thought I had to perform a conversion is that I can output literal multibyte strings just fine. For example: OutputControl.Text = "????????????????????????????????????????????????????????????????"; works. Here, OutputControl is an ASP.NET Literal. However, OutputControl.Text = outputString; //Output from above snippet results in mangled output as described above. My hypothesis was that the database's output was somehow getting mangled by ASP.NET. If that's not the case, then what are some other possibilities?
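
    One observation worth sketching: the Unicode-to-UTF-8-and-back round trip above decodes to the identical .NET string, so it neither helps nor hurts - .NET strings are UTF-16 in memory, and the bytes only need to change at the HTTP boundary. A minimal sketch that leaves the transcoding to the framework (the properties are standard ASP.NET; the surrounding page code is assumed):

        // The reader already returns a correct UTF-16 System.String.
        string outputString = resultReader.GetString(0);
        // Ask ASP.NET to emit UTF-8 when the response is written out.
        Response.ContentType = "text/html; charset=utf-8";
        Response.ContentEncoding = System.Text.Encoding.UTF8;
        OutputControl.Text = outputString;

    If the browser still shows garbage with this in place, suspicion shifts to the data having been stored mis-encoded in the first place.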


  • Most efficient way to check for DBNull and then assign to a variable?

    - by ilitirit
    This question comes up occasionally, but I haven't seen a satisfactory answer. A typical pattern is (row is a DataRow): if (row["value"] != DBNull.Value) { someObject.Member = row["value"]; } My first question is which is more efficient (I've flipped the condition): row["value"] == DBNull.Value; // Or row["value"] is DBNull; // Or row["value"].GetType() == typeof(DBNull) // Or... any suggestions? This indicates that .GetType() should be faster, but maybe the compiler knows a few tricks I don't? Second question: is it worth caching the value of row["value"], or does the compiler optimize the indexer away anyway? E.g. object valueHolder; if (DBNull.Value == (valueHolder = row["value"])) {} Disclaimers: row["value"] exists. I don't know the column index of the column (hence the column name lookup). I'm asking specifically about checking for DBNull and then assignment (not about premature optimization, etc.). Edit: I benchmarked a few scenarios (time in seconds, 10000000 trials):
    row["value"] == DBNull.Value: 00:00:01.5478995
    row["value"] is DBNull: 00:00:01.6306578
    row["value"].GetType() == typeof(DBNull): 00:00:02.0138757
    Object.ReferenceEquals has the same performance as "==". The most interesting result? If you mismatch the name of the column by case (e.g. "Value" instead of "value"), it takes roughly ten times longer (for a string):
    row["Value"] == DBNull.Value: 00:00:12.2792374
    The moral of the story seems to be that if you can't look up a column by its index, then ensure that the column name you feed to the indexer matches the DataColumn's name exactly. Caching the value also appears to be nearly twice as fast:
    No caching: 00:00:03.0996622
    With caching: 00:00:01.5659920
    So the most efficient method seems to be: object temp; string variable; if (DBNull.Value != (temp = row["value"])) { variable = temp.ToString(); } This was a good learning experience.


  • grep 5 seconds of input from the serial port inside a shell-script

    - by pica
    I've got a device that I'm operating next to my PC, and as it runs it's spitting log lines out of its serial port. I have this wired to my PC, and I can see the log lines fine if I'm using either minicom or something like: ttylog -b 115200 -d /dev/ttyS0 I want to write 5 seconds of the device's serial output to a temp file (or assign it to a variable) and then later grep that file for keywords that will tell me how the device is operating. I've already tried redirecting the output to a file while running the command in the background, then sleeping 5 seconds and killing the process, but the log lines never get written to my temp file. Example:
    touch tempFile
    ttylog -b 115200 -d /dev/ttyS0 >> tempFile &
    serialPID=$!
    sleep 5
    #kill ${serialPID} #does not work, gets wrong PID
    killall ttylog
    cat tempFile
    The file gets created but never filled with any data. I can also replace the ttylog line with: ttylog -b 115200 -d /dev/ttyS0 |tee -a tempFile & In neither case do I ever see any log lines logged to stdout or the log file, unless I have multiple versions of ttylog running by mistake (see commented-out line, d'oh). I have no idea what's going on here. It seems to be a failure of redirection within my script. Am I on the right track? Is there a better way to sample 5 seconds of the serial port?
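
    For comparison, a sketch that sidesteps ttylog entirely (assuming coreutils' timeout is available and the port parameters take effect via stty):

        # Configure the port, capture raw output for 5 seconds, then grep it.
        stty -F /dev/ttyS0 115200 raw
        timeout 5 cat /dev/ttyS0 > tempFile
        grep -q 'keyword' tempFile && echo "device looks healthy"

    If lines still never land in tempFile, output buffering in the capturing process is the next thing to rule out.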


  • Wanted to know in detail how shared libraries work vis-a-vis static libraries

    - by goldenmean
    Hello, I am working on creating and linking a shared library (.so). While working with it, many questions popped up for which I could not find satisfying answers when I searched, hence putting them here:
    1.) How is a shared library different from a static library? What are the key differences in the way they are created and executed?
    2.) In the case of a shared library, at what point are the addresses assigned at which a particular function in the shared library will be loaded and run? Who assigns those load/run addresses?
    3.) Will an application linked against a shared library be slower in execution compared to one linked with a static library?
    4.) Will the application executable size differ in these two cases?
    5.) Can one do source-level debugging by stepping into functions defined inside a shared library? Is anything extra needed to make these functions visible to the application?
    6.) What are the pros and cons of using either kind of library?
    Thanks. -AD
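
    As a concrete anchor for questions 1 and 4, a minimal GNU-toolchain sketch of building the same code both ways (file names invented for illustration):

        gcc -c -fPIC util.c -o util.o
        ar rcs libutil.a util.o            # static: the code is copied into every binary that links it
        gcc -shared -o libutil.so util.o   # shared: one copy, mapped in at load time
        gcc main.c -L. -lutil -o app       # the linker prefers the .so when both exist

    The load addresses for the .so are chosen at run time by the dynamic loader (ld.so), which is why shared code is compiled position-independent (-fPIC).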


  • Why did this work with Visual C++, but not with gcc?

    - by Carlos Nunez
    I've been working on a senior project for the last several months now, and a major sticking point in our team's development process has been dealing with rifts between Visual C++ and gcc. (Yes, I know we all should have had the same development environment.) Things are about finished up at this point, but I ran into a moderate bug just today that had me wondering whether Visual C++ is easier on newbies (like me) by design. In one of my headers, there is a function that relies on strtok to chop up a string, do some comparisons and return a string with a similar format. It works a little something like the following: int main() { string a, b, c; //Do stuff with a and b. c = get_string(a,b); } string get_string(string a, string b) { const char * a_ch, b_ch; a_ch = strtok(a.c_str(),","); b_ch = strtok(b.c_str(),","); } strtok is infamous for being great at tokenizing, but equally great at destroying the original string to be tokenized. Thus, when I compiled this with gcc and tried to do anything with a or b, I got unexpected behavior, since the separator used was completely removed from the string. Here's an example in case I'm unclear: if I set a = "Jim,Bob,Mary" and b = "Grace,Soo,Hyun", they would end up as a = "JimBobMary" and b = "GraceSooHyun" instead of staying the same like I wanted. However, when I compiled this under Visual C++, I got back the original strings and the program executed fine. I tried dynamically allocating memory to the strings and copying them the "standard" way, but the only way that worked was using malloc() and free(), which I hear is discouraged in C++. While I'm curious about that, the real question I have is this: why did the program work when compiled in VC++, but not with gcc? (This is one of many conflicts that I experienced while trying to make the code cross-platform.) Thanks in advance! -Carlos Nunez
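
    Whatever the two compilers are doing differently, a sketch of a strtok-free splitter that leaves its input untouched (standard C++ only, no assumptions beyond the headers shown):

        #include <string>
        #include <vector>

        // Split s on sep without modifying s; tokens are returned by value.
        std::vector<std::string> split(const std::string& s, char sep) {
            std::vector<std::string> out;
            std::string::size_type start = 0, pos;
            while ((pos = s.find(sep, start)) != std::string::npos) {
                out.push_back(s.substr(start, pos - start));
                start = pos + 1;
            }
            out.push_back(s.substr(start));
            return out;
        }

    Because nothing writes through c_str(), the undefined behaviour on which the two compilers diverge never arises.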


  • F#, Linux and makefiles

    - by rwallace
    I intend to distribute an F# program as both binary and source, so the user has the option of recompiling it if desired. On Windows, I understand how to do this: provide .fsproj and .sln files, which both Visual Studio and MSBuild can understand. On Linux, the traditional solution for C programs is a makefile. This depends on gcc being directly available, which it always is. The F# compiler can be installed on Linux and works under Mono, so that's fine so far. However, as far as I can tell, installing it doesn't create a scenario where fsc runs the compiler; instead, the command is mono ...path.../fsc.exe. This is also fine, except I don't know what the path is going to be. So the full command to run the compiler in my case could be mono ~/FSharp-2.0.0.0/bin/fsc.exe types.fs tptp.fs main.fs -r FSharp.PowerPack.dll except that I'm not sure where fsc.exe will actually be located on the user's machine. Is there a way to find that out within a makefile, or would it be better to fall back on just explaining the above in the documentation and relying on the user to modify the command according to his setup?
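
    One makefile sketch (GNU make assumed; FSC_HOME is an invented variable name the user can override on the command line, not a convention the F# installer defines):

        FSC_HOME ?= $(HOME)/FSharp-2.0.0.0
        FSC      ?= mono $(FSC_HOME)/bin/fsc.exe

        tptp.exe: types.fs tptp.fs main.fs
                $(FSC) --out:tptp.exe -r FSharp.PowerPack.dll types.fs tptp.fs main.fs

    (The recipe line must begin with a tab.) Running make FSC_HOME=/opt/fsharp then adapts to any install location without editing the file.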


  • Database Error Handling: What if You have to Call Outside service and the Transaction Fails?

    - by Ngu Soon Hui
    We all know that we can always wrap our database calls in a transaction (with or without a proper ORM), in a form like this: $con = Propel::getConnection(EventPeer::DATABASE_NAME); try { $con->begin(); // do your update, save, delete or whatever here. $con->commit(); } catch (PropelException $e) { $con->rollback(); throw $e; } This guarantees that if the transaction fails, the database is restored to the correct state. But the problem is this: let's say that when I do a transaction, in addition to that transaction, I need to update another database (an example would be: when I update an entry in a column in databaseA, another entry in a column in databaseB must be updated). How do I handle this case? Let's say this is my code; I have three databases that need to be updated (dbA, dbB, dbC): $con = Propel::getConnection("dbA"); try { $con->begin(); // update to dbA // update to dbB // update to dbC $con->commit(); } catch (PropelException $e) { $con->rollback(); throw $e; } If dbC fails, I can roll back dbA, but I can't roll back dbB. I think this problem should be database-independent. And since I am using an ORM, it should be ORM-independent as well. Update: some of the database transactions are wrapped in an ORM, some are using naked PDO, OLE DB (or whatever bare-minimum language-provided database calls). So my solution has to handle this. Any idea?
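
    Short of a real distributed-transaction coordinator (XA/MSDTC-style two-phase commit), the usual poor man's version is one open transaction per connection, committed only after every update has succeeded - a sketch in the Propel style used above, with the connection names assumed:

        $conA = Propel::getConnection('dbA');
        $conB = Propel::getConnection('dbB');
        $conC = Propel::getConnection('dbC');
        try {
            $conA->begin(); $conB->begin(); $conC->begin();
            // ... updates against dbA, dbB and dbC ...
            $conA->commit(); $conB->commit(); $conC->commit();
        } catch (Exception $e) {
            $conA->rollback(); $conB->rollback(); $conC->rollback();
            throw $e;
        }

    A window remains in which the second or third commit fails after the first has already committed; closing that gap is exactly what a two-phase-commit coordinator is for.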


  • Thread Message Loop Hangs in Delphi

    - by erikjw
    Hello all. I have a simple Delphi program that I'm working on, in which I am attempting to use threading to separate the functionality of the program from its GUI, and to keep the GUI responsive during lengthy tasks, etc. Basically, I have a 'controller' TThread and a 'view' TForm. The view knows the controller's handle, which it uses to send the controller messages via PostThreadMessage. I have had no problem in the past using this sort of model for forms other than the main form, but for some reason, when I attempt to use this model for the main form, the message loop of the thread just quits. Here is my code for the thread's message loop:
    procedure TController.Execute;
    var
      Msg : TMsg;
    begin
      while not Terminated do
      begin
        if (Integer(GetMessage(Msg, hwnd(0), 0, 0)) = -1) then
        begin
          Synchronize(Terminate);
        end;
        TranslateMessage(Msg);
        DispatchMessage(Msg);
        case Msg.message of
          // ...call different methods based on message
        end;
      end;
    end;
    To set up the controller, I do this:
    Controller := TController.Create(true); // Create suspended
    Controller.FreeOnTerminate := True;
    Controller.Resume;
    For processing the main form's messages, I have tried using both Application.Run and the following loop (immediately after Controller.Resume):
    while not Application.Terminated do
    begin
      Application.ProcessMessages;
    end;
    I've gotten stuck here - any help would be greatly appreciated.


  • Splitting MS Access Database - Front End Part Location

    - by kristof
    One of the best practices specified by Microsoft for Access development is splitting an Access application into 2 parts: a Front End that holds all the objects except tables, and a Back End that holds the tables. The MSDN page links to the article Splitting Microsoft Access Databases to Improve Performance and Simplify Maintainability, which describes the process in detail. It is recommended that in a multi-user environment the Back End is stored on the server/shared folder while the Front End is distributed to each user. That implies that each time any changes are made to the Front End, they need to be deployed to every user machine. My question is: assuming that the users themselves do not have rights to modify the Front End part of the application, what would be the drawbacks/dangers of leaving it on the server as well, next to the Back End copy? I can see the performance issues here, but are there any dangers, like possible corruption, etc.? Thank you. EDIT Just to clarify, the scenario specified in the question assumes one Front End stored on the server and shared by users. I understand that the recommendation is to have the FE deployed to each user machine, but my question is more about the dangers if that is not done. E.g. when you are given an existing solution that uses the approach of both FE and BE on the server. Assuming the performance is acceptable and the customer is reluctant to change the approach, would you still push the change? And why exactly? For example, the danger of possible data corruption would definitely be a strong enough argument, but is that the case? This is a partial follow-up to my previous question, From SQL Server to MS Access 2007.

