Search Results

Search found 7628 results on 306 pages for 'internal communications'.

  • C# Detect Localhost Port Usage

    - by ThaKidd
    In advance, thank you for your advice. I am currently working on a program which uses PuTTY to create an SSH connection, with local port forwarding, so that a client running my software can access the service behind the SSH server via localhost. I.e.: client:20100 - Internet - remote SSH server exposed via router/firewall - local intranet - intranet web/POP3 server:110. The command line is:

        putty -ssh -2 -P 22 -C -L 20100:intranetIP:110 -pw sshpassword sshusername@sshserver

    The client uses PuTTY to create an SSH connection with the SSH server, specifying in the connection string that it would like to tie port 110 of the intranet POP3 server to port 20100 on the client system. The client can therefore point a mail client at localhost:20100 and interact with the internal POP3 server over the SSH tunnel.

    The above is a general description. I already know that what I am trying to do will work without a problem, so I am not looking for debate on it. The question is this: how can I ensure the local port (I cannot use dynamic ports, so it must be static) on localhost is not being used or listened to by any other application? I am currently executing this code in my C# app:

        private bool checkPort(int port)
        {
            try
            {
                // Create a socket on the current IPv4 address
                Socket TestSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

                // Create an IP end point
                IPEndPoint localIP = new IPEndPoint(IPAddress.Parse("127.0.0.1"), port);

                // Bind that port
                TestSocket.Bind(localIP);

                // Cleanup
                TestSocket.Close();
                return false;
            }
            catch (Exception e)
            {
                // Exception occurred. Port is already bound.
                return true;
            }
        }

    I call this function in a for loop, starting with a specific port, and stop at the first 'false' return, i.e. the first available port. The first port I try is actually being listened to by uTorrent; the code above does not catch this, and my connection fails. What is the best method to ensure a port is truly free? I do understand that some other program may grab the port during or after my test. I just need something that ensures the port is not in use AT ALL when the test is executed. If there is a way to truly reserve the localhost port during the test, I would love to hear about it.
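    One possibly more reliable check, as a minimal sketch (IsPortInUse is a made-up name; the System.Net.NetworkInformation API is real, available from .NET 2.0 on): ask the OS for its list of active TCP listeners instead of test-binding, so ports held by other processes such as uTorrent show up too. It is still only a snapshot, not a reservation.

        using System.Net;
        using System.Net.NetworkInformation;

        private static bool IsPortInUse(int port)
        {
            // Enumerate every TCP endpoint the OS is currently listening on
            IPGlobalProperties props = IPGlobalProperties.GetIPGlobalProperties();
            foreach (IPEndPoint endPoint in props.GetActiveTcpListeners())
            {
                if (endPoint.Port == port)
                    return true;
            }
            return false;
        }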

  • How to handle exceptions: best practices

    - by b0x0rz
    I need to implement global error handling, so maybe you can help out with the following example. I have this code:

        public bool IsUserAuthorizedToSignIn(string userEMailAddress, string userPassword)
        {
            // get MD5 hash for use in the LINQ query
            string passwordSaltedHash = this.PasswordSaltedHash(userEMailAddress, userPassword);

            // check for email / password / validity
            using (UserManagementDataContext context = new UserManagementDataContext())
            {
                var users = from u in context.Users
                            where u.UserEMailAdresses.Any(e => e.EMailAddress == userEMailAddress)
                               && u.UserPasswords.Any(p => p.PasswordSaltedHash == passwordSaltedHash)
                               && u.IsActive == true
                            select u;

                // true if user found
                return (users.Count() == 1) ? true : false;
            }
        }

    and the MD5 helper as well:

        private string PasswordSaltedHash(string userEMailAddress, string userPassword)
        {
            MD5 hasher = MD5.Create();
            byte[] data = hasher.ComputeHash(Encoding.Default.GetBytes(userPassword + userEMailAddress));

            StringBuilder stringBuilder = new StringBuilder();
            for (int i = 0; i < data.Length; i++)
            {
                stringBuilder.Append(data[i].ToString("x2"));
            }

            Trace.WriteLine(String.Empty);
            Trace.WriteLine("hash: " + stringBuilder.ToString());

            return stringBuilder.ToString();
        }

    How would I go about handling exceptions from these functions? The first one is called from the Default.aspx page; the second one is only called from other functions in the class library. What is the best practice:

    - surround the code INSIDE each function with try-catch,
    - surround the FUNCTION CALL with try-catch, or
    - something else?

    And what should happen when an exception occurs? In this example, this is a user sign-in, so even if everything fails the user should get some meaningful feedback, along these lines: sign-in OK (just redirect); sign-in not OK (wrong user name / password); sign-in not possible due to internal problems, sorry (an exception happened). For the first function I am worried about problems with database access; I am not sure there is anything that needs to be handled in the second one.

    Thanks for the info. How would you do it? I need specific advice on this case (easier for me to understand), but also general advice on how to handle other tasks/functions. I looked around the internet, but everyone has different things to say, so I am unsure what to do. I will go with either the most votes here or the most logically explained answer :) Thank you.
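    For what it's worth, a minimal sketch of one common pattern (SignInResult, TrySignIn and Logger are made-up names; SqlException stands in for whatever the data provider actually throws): let the data-access function throw, catch at the page boundary that calls it, and map the outcome onto the three user-facing messages.

        // using System.Data.SqlClient; for SqlException
        public enum SignInResult { Success, InvalidCredentials, InternalError }

        public SignInResult TrySignIn(string email, string password)
        {
            try
            {
                return IsUserAuthorizedToSignIn(email, password)
                    ? SignInResult.Success
                    : SignInResult.InvalidCredentials;
            }
            catch (SqlException ex)
            {
                // database unreachable, timeout, etc.: log it, show a generic message
                Logger.Error(ex); // hypothetical logging helper
                return SignInResult.InternalError;
            }
        }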

  • Using boost::iterator

    - by Neil G
    I wrote a sparse vector class (see #1, #2). I would like to provide two kinds of iterators.

    The first set, the regular iterators, can point at any element, whether set or unset. If they are read from, they return either the set value or value_type(); if they are written to, they create the element and return the lvalue reference. Thus, they are: Random Access Traversal Iterator and Readable and Writable Iterator.

    The second set, the sparse iterators, iterate over only the set elements. Since they don't need to lazily create elements that are written to, they are: Random Access Traversal Iterator and Readable and Writable and Lvalue Iterator.

    I also need const versions of both, which are not writable. I can fill in the blanks, but I am not sure how to use boost::iterator_adaptor to start out. Here's what I have so far:

        template<typename T>
        class sparse_vector {
        public:
            typedef size_t size_type;
            typedef T value_type;

        private:
            typedef T& true_reference;
            typedef const T* const_pointer;
            typedef sparse_vector<T> self_type;

            struct ElementType {
                ElementType(size_type i, T const& t): index(i), value(t) {}
                ElementType(size_type i, T&& t): index(i), value(t) {}
                ElementType(size_type i): index(i) {}
                ElementType(ElementType const&) = default;
                size_type index;
                value_type value;
            };

            typedef vector<ElementType> array_type;

        public:
            typedef T* pointer;
            typedef T& reference;
            typedef const T& const_reference;

        private:
            size_type size_;
            mutable typename array_type::size_type sorted_filled_;
            mutable array_type data_;

            // lots of code for various algorithms...

        public:
            class sparse_iterator
                : public boost::iterator_adaptor<
                      sparse_iterator                       // Derived
                    , array_type::iterator                  // Base (the internal array) (this parameter does not compile! -- says expected a type, got 'std::vector::iterator'???)
                    , boost::use_default                    // Value
                    , boost::random_access_traversal_tag?   // CategoryOrTraversal
                >

            class iterator_proxy {
                ???
            };

            class iterator
                : public boost::iterator_facade<
                      iterator        // Derived
                    , ?????           // Base
                    , ?????           // Value
                    , boost::??????   // CategoryOrTraversal
                >
            {
            };
        };

  • Naming Selenium grid nodes. Spawning a specific node

    - by ???? ????
    I'm trying to implement a kind of default queue in the Selenium hub. There is a possibility to specify a node's name (actually its environment, something like "firefox on ubuntu" or "chrome on windows"). Selenium grid itself has a default queue that works according to the 'first in, first out' principle, but I want to prioritize some of my tasks given to the Selenium server. I have no way to introduce a custom queue (it seems there is no API for that), so I decided to separate the queue logic from the Selenium server and only call a specific node with a specific name (environment), for example "firefox important node" or something like that. So, I want to know how to directly tell Selenium which node to use for my task. And generally, am I thinking in the right way?

    Here are my configs.

    hubConfig.json.erb:

        {
          "host": null,
          "port": <%= node[:selenium][:server][:port] %>,
          "newSessionWaitTimeout": -1,
          "servlets" : [],
          "prioritizer": null,
          "capabilityMatcher": "org.openqa.grid.internal.utils.DefaultCapabilityMatcher",
          "throwOnCapabilityNotPresent": true,
          "nodePolling": <%= node[:selenium][:server][:node_polling] %>,
          "cleanUpCycle": <%= node[:selenium][:server][:cleanup_cycle] %>,
          "timeout": <%= node[:selenium][:server][:timeout] %>,
          "browserTimeout": 0,
          "maxSession": <%= node[:selenium][:server][:max_session] %>
        }

    nodeConfig.json.erb:

        {
          "capabilities": [
            { "browserName": "firefox",   "maxInstances": 5, "seleniumProtocol": "WebDriver" },
            { "browserName": "chrome",    "maxInstances": 5, "seleniumProtocol": "WebDriver" },
            { "browserName": "phantomjs", "maxInstances": 5, "seleniumProtocol": "WebDriver" }
          ],
          "configuration": {
            "proxy": "org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
            "maxSession": <%= node[:selenium][:node][:max_session] %>,
            "port": <%= node[:selenium][:node][:port] %>,
            "host": "<%= node[:fqdn] %>",
            "register": true,
            "registerCycle": <%= node[:selenium][:node][:register_cycle] %>,
            "hubPort": <%= node[:selenium][:server][:port] %>
          }
        }

    And my Driver class:

        ...
        def remote_driver
          @browser = Watir::Browser.new(:remote,
            :url => "http://myhub.com:4444/wd/hub",
            :http_client => client,
            :desired_capabilities => capabilities
          )
        end

        def capabilities
          Selenium::WebDriver::Remote::Capabilities.send(
            "firefox",
            :javascript_enabled => true,
            :css_selectors_enabled => true,
            :takes_screenshot => true
          )
        end

        def client
          client = Selenium::WebDriver::Remote::Http::Default.new
          client.timeout = 360
          client
        end
        ...

    I still don't know how to use a specified node for my task. If I try to start a driver adding :name => "firefox important node" and extend nodeConfig.json.erb's configuration with

        environments:
          - name: "firefox important node"
            browser: "*firefox"
          - name: "Firefox36 on Linux"
            browser: "*firefox"

    Selenium just starts a random Firefox browser on a random node. How can I control it?

  • User Control as container at design time

    - by Luca
    I'm designing a simple expander control. I've derived from UserControl, drawn the inner controls, built, run; all OK. Since one of the inner controls is a Panel, I'd like to use it as a container at design time. Indeed, I've used the attributes:

        [Designer(typeof(ExpanderControlDesigner))]
        [Designer("System.Windows.Forms.Design.ParentControlDesigner, System.Design", typeof(IDesigner))]

    Great, I say. But it isn't... The result is that I can use it as a container at design time, but:

    - The added controls go behind the inner controls already embedded in the user control.
    - Even if I push an added control to the top at design time, at runtime it ends up behind the embedded controls again.
    - I cannot restrict the container area at design time to the Panel's area.

    What am I missing? Here is the code for completeness... why is this snippet not working?

        [Designer(typeof(ExpanderControlDesigner))]
        [Designer("System.Windows.Forms.Design.ParentControlDesigner, System.Design", typeof(IDesigner))]
        public partial class ExpanderControl : UserControl
        {
            public ExpanderControl()
            {
                InitializeComponent();
                ....

        [System.Security.Permissions.PermissionSet(System.Security.Permissions.SecurityAction.Demand, Name = "FullTrust")]
        internal class ExpanderControlDesigner : ControlDesigner
        {
            private ExpanderControl MyControl;

            public override void Initialize(IComponent component)
            {
                base.Initialize(component);
                MyControl = (ExpanderControl)component;

                // Hook up events
                ISelectionService s = (ISelectionService)GetService(typeof(ISelectionService));
                IComponentChangeService c = (IComponentChangeService)GetService(typeof(IComponentChangeService));
                s.SelectionChanged += new EventHandler(OnSelectionChanged);
                c.ComponentRemoving += new ComponentEventHandler(OnComponentRemoving);
            }

            private void OnSelectionChanged(object sender, System.EventArgs e) { }
            private void OnComponentRemoving(object sender, ComponentEventArgs e) { }

            protected override void Dispose(bool disposing)
            {
                ISelectionService s = (ISelectionService)GetService(typeof(ISelectionService));
                IComponentChangeService c = (IComponentChangeService)GetService(typeof(IComponentChangeService));

                // Unhook events
                s.SelectionChanged -= new EventHandler(OnSelectionChanged);
                c.ComponentRemoving -= new ComponentEventHandler(OnComponentRemoving);

                base.Dispose(disposing);
            }

            public override System.ComponentModel.Design.DesignerVerbCollection Verbs
            {
                get
                {
                    DesignerVerbCollection v = new DesignerVerbCollection();
                    v.Add(new DesignerVerb("&asd", new EventHandler(null)));
                    return v;
                }
            }
        }

    I've found many resources (interaction, designers, limited areas), but none of them got me operational. There must be a trick, since the System.Windows.Forms classes can be designed (as usual) and still behave correctly at runtime (TabControl, for example).
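    A minimal sketch of one approach worth trying (ContentPanel is a made-up property name; EnableDesignMode is a real protected member of ControlDesigner): expose the embedded panel and enable design mode on it alone, so the panel, not the whole user control, becomes the design-time container and dropped children are serialized into it. Note this replaces the second [Designer] attribute rather than combining with it.

        [Designer(typeof(ExpanderControlDesigner))]
        public partial class ExpanderControl : UserControl
        {
            [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
            public Panel ContentPanel
            {
                get { return contentPanel; } // the embedded panel (assumed field name)
            }
        }

        internal class ExpanderControlDesigner : ControlDesigner
        {
            public override void Initialize(IComponent component)
            {
                base.Initialize(component);
                ExpanderControl control = (ExpanderControl)component;

                // Make only the inner panel a design-time container
                EnableDesignMode(control.ContentPanel, "ContentPanel");
            }
        }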

  • Send mail through a JSP page

    - by sourabhtaletiya
    Hi friends, I have tried a lot to send mail via a JSP page, but I have not succeeded. The error is:

        javax.servlet.ServletException: 530 5.7.0 Must issue a STARTTLS command first. x1sm5029316wbx.19

    Here is the page:

        <html>
        <head><title>JSP JavaMail Example</title></head>
        <body>
        <%@ page import="java.util.*" %>
        <%@ page import="javax.mail.*" %>
        <%@ page import="javax.mail.internet.*" %>
        <%@ page import="javax.activation.*" %>
        <%
            java.security.Security.addProvider(new com.sun.net.ssl.internal.ssl.Provider());
            Properties props = System.getProperties();
            props.put("mail.smtp.starttls.enable", "true");
            props.put("mail.smtp.starttls.required", "true");

            String host = "smtp.gmail.com";
            String to = request.getParameter("to");
            String from = request.getParameter("from");
            String subject = request.getParameter("subject");
            String messageText = request.getParameter("body");
            boolean sessionDebug = false;

            props.put("mail.smtp.host", "smtp.gmail.com");
            props.put("mail.transport.protocol", "smtp");
            props.put("mail.smtp.port", "25");
            props.put("mail.smtp.auth", "true");
            props.put("mail.debug", "true");
            props.put("mail.smtp.socketFactory.port", "25");

            Session mailSession = Session.getDefaultInstance(props, null);
            mailSession.setDebug(sessionDebug);

            Message msg = new MimeMessage(mailSession);
            msg.setFrom(new InternetAddress(from));
            InternetAddress[] address = {new InternetAddress(to)};
            msg.setRecipients(Message.RecipientType.TO, address);
            msg.setSubject(subject);
            msg.setSentDate(new Date());
            msg.setText(messageText);

            Transport tr = mailSession.getTransport("smtp");
            tr.connect(host, "sourabh.web7", "june251989");
            msg.saveChanges(); // don't forget this
            tr.sendMessage(msg, msg.getAllRecipients());
            tr.close();

            // Transport.send(msg);
            /* out.println("Mail was sent to " + to);
               out.println(" from " + from);
               out.println(" using host " + host + "."); */
        %>
        </body>
        </html>

    (The original page also set "mail.smtp.starttls.enable" several more times between the message-setup lines; the duplicates are collapsed above.)

  • Help with strange memory behavior. Looking for leaks both in my brain and in my code.

    - by BastiBechtold
    I spent the last few days trying to find memory leaks in a program we are developing. First of all, I tried using some leak detectors. After fixing a few issues, they do not find any leaks any more. However, I am also monitoring my application using perfmon.exe, and Performance Monitor reports that 'Private Bytes' and 'Working Set - Private' rise steadily while the app is used. To me, this suggests that the program uses more and more memory the longer it runs. Internal resources seem to be stable, however, so this sounds like leaking to me.

    The program loads a DLL at runtime. I suspect that these leaks, or whatever they are, occur in that library and get purged when the library is unloaded, hence they won't get picked up by the leak detectors. I used both DevPartner BoundsChecker and Visual Leak Detector to look for memory leaks; both supposedly catch leaks in DLLs.

    Also, the memory consumption increases in steps, and those steps roughly, but not exactly, coincide with certain GUI actions I perform in the application. If these were errors in our code, they should be triggered every single time the actions are performed, not just most of the time.

    Whenever I am confronted with this much strangeness, I begin to question my basic assumptions. So I turn to you, who know everything, for suggestions. Is there a flaw in my assumptions? Do you have an idea of how to go about troubleshooting a problem like this?

    Edit: I am currently using Microsoft Visual C++ (x86) on Windows 7 64-bit.

    Edit 2: I just used IBM Purify to hunt for leaks. First of all, it lists a full 30% of the program as leaked memory. This cannot be true; I guess it is identifying the whole DLL as leaked, or something like that. However, if I search for new leaks every few actions, it reports leaks that correspond with the size increase reported by Performance Monitor. This could be a lead. Sadly, I am only using the trial version of Purify, so it won't show me the actual location of those leaks. (These leaks only show up at runtime. When the program exits, no tool reports any leaks whatsoever.)

  • How and why do I set up a C# build machine?

    - by mmr
    Hi all, I'm working with a small (4 person) development team on a C# project. I've proposed setting up a build machine which will do nightly builds and tests of the project, because I understand that this is a Good Thing. Trouble is, we don't have a whole lot of budget here, so I have to justify the expense to the powers that be. So I want to know:

    - What kind of tools/licenses will I need? Right now, we use Visual Studio and Smart Assembly to build, and Perforce for source control. Will I need something else, or is there an equivalent of a cron job for running automated scripts?
    - What, exactly, will this get me, other than an indication of a broken build? Should I set up test projects in this solution (.sln file) that will be run by these scripts, so I can have particular functions tested? We have, at the moment, two such tests, because we haven't had the time (or, frankly, the experience) to make good unit tests.
    - What kind of hardware will I need for this?
    - Once a build has been finished and tested, is it a common practice to put that build up on an FTP site or to have some other way for internal access? The idea is that this machine makes the build, and we all go to it, but can make debug builds if we have to.
    - How often should we make this kind of build?
    - How is space managed? If we make nightly builds, should we keep around all the old builds, or start to ditch them after about a week or so?
    - Is there anything else I'm not seeing here?

    I realize that this is a very large topic, and I'm just starting out. I couldn't find a duplicate of this question here, and if there's a book out there I should just get, please let me know.

    EDIT: I finally got it to work! Hudson is completely fantastic, and FxCop is showing that some features we thought were implemented were actually incomplete. We also had to change the installer type from the old-and-busted vdproj to the new hotness, WiX. Basically, for those who are paying attention: if you can run your build from the command line, then you can put it into Hudson. Making the build run from the command line via MSBuild is a useful exercise in itself, because it forces your tools to be current.
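    A concrete footnote to the EDIT (the solution name below is a placeholder; the flags are standard MSBuild switches): Hudson only needs a build that runs from the command line, e.g.

        msbuild MySolution.sln /t:Rebuild /p:Configuration=Release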

  • Problem determining how to order F# types due to circular references

    - by James Black
    I have some types that extend a common type; these are my models. I then have DAO types for each model type for CRUD operations. I now need a function that will allow me to find an id given any model type, so I created a new type for some miscellaneous functions. The problem is that I don't know how to order these types. Currently I have the models before the DAOs, but I would somehow need DAOMisc before CityDAO and CityDAO before DAOMisc, which isn't possible. The simple approach would be to put this function in each DAO, referring to just the types that can come before it: State comes before City, as State has a foreign key relationship with City, so the miscellaneous function would be very short. But this just strikes me as wrong, so I am not certain how best to approach it.

    Here is my miscellaneous type, where BaseType is a common type for all my models:

        type DAOMisc =
            member internal self.FindIdByType item =
                match (item:BaseType) with
                | :? StateType as i ->
                    let a = (StateDAO()).Retrieve i
                    a.Head.Id
                | :? CityType as i ->
                    let a = (CityDAO()).Retrieve i
                    a.Head.Id
                | _ -> -1

    Here is one DAO type. CommonDAO actually has the code for the CRUD operations, but that is not important here.

        type CityDAO() =
            inherit CommonDAO<CityType>("city", ["name"; "state_id"],
                (fun (reader) ->
                    [ while reader.Read() do
                        let s = new CityType()
                        s.Id <- reader.GetInt32 0
                        s.Name <- reader.GetString 1
                        s.StateName <- reader.GetString 3 ]),
                list.Empty
            )

    This is my model type:

        type CityType() =
            inherit BaseType()
            let mutable name = ""
            let mutable stateName = ""
            member this.Name
                with get() = name
                and set restnameval = name <- restnameval
            member this.StateName
                with get() = stateName
                and set stateidval = stateName <- stateidval
            override this.ToSqlValuesList = [this.Name;]
            override this.ToFKValuesList = [StateType(Name=this.StateName);]

    The purpose of FindIdByType is to find the id for a foreign key relationship, so I can set the value in my model and then have the CRUD functions do their work with all the correct information. City needs the id for the state name, so I would take the state name, put it into a StateType, then call this function to get the id for that state, so my city insert also includes the id for the foreign key. This seems to be the best approach, in a very generic way, to handle inserts, which is the current problem I am trying to solve.

  • What to name a column in a database table that holds a versioning number

    - by rwmnau
    I'm trying to figure out what to call the column in my database table that holds an INT identifying a specific "record version". I'm currently using "RecordOrder", but I don't like that, because people assume higher = newer, whereas the way I'm using it lower = newer ("1" is the current record, "2" the second most current, "3" older still, and so on). I've considered "RecordVersion", but I'm afraid that would have the same problem. Any other suggestions? "RecordAge"?

    I'm doing this because when I insert into the table, instead of having to find out what version is next, and then run the risk of having that number stolen from me before I write, I just insert with a "RecordOrder" of 0. An AFTER INSERT trigger on the table increments all the "RecordOrder" numbers for that key by 1, so the record I just inserted becomes "1", and all others are increased by 1. That way, you can get a person's current record by selecting RecordOrder = 1, instead of getting MAX(RecordOrder) and then selecting by that.

    PS - I'm also open to criticism about why this is a terrible idea and I should be incrementing the index instead. It just seemed to make lookups much easier, but if it's a bad idea, please enlighten me!

    Some details about the data, as an example. I have the following database table:

        CREATE TABLE AmountDue (
            CustomerNumber INT,
            AmountDue      DECIMAL(14,2),
            RecordOrder    SMALLINT,
            RecordCreated  DATETIME
        )

    A subset of my data looks like this:

        CustomerNumber  AmountDue  RecordOrder  RecordCreated
        100             0          1            2009-12-19 05:10:10.123
        100             10.05      2            2009-12-15 06:12:10.123
        100             100.00     3            2009-12-14 14:19:10.123
        101             5.00       1            2009-11-14 05:16:10.123

    In this example, there are three rows for customer 100: they owed $100, then $10.05, and now they owe nothing. Let me know if I need to clarify further.

    UPDATE: The "RecordOrder" and "RecordCreated" columns are not available to the user; they're only there for internal use, to help figure out which is the current customer record. I could also use the column to return an appropriately ordered customer history, though I could just as easily do that with the date. I suppose I could accomplish the same thing as an incrementing "record version" with just the RecordCreated date, but that removes the convenience of knowing that RecordOrder = 1 is the current record, and I'm back to doing a subquery with MAX or MIN on the DATETIME to determine the most recent record.

  • XmlHttpRequest bug?

    - by valdo
    Hello all. I'm writing a program that, among other things, needs to download a file given its URL. I'm too lazy to implement the HTTP/HTTPS protocols manually, so I needed some library/object/function that'll do the job.

    Critical requirement: the download must be asynchronous. That is, the thread that issued the download must be able to do something else "while" downloading the file, and it must be possible to abort the download at any time without any barbaric side effects (such as an internal call to TerminateThread).

    Nice-to-have requirements: it should be able to download the file "into memory", meaning it reads the contents of the file as they arrive, not necessarily saving them into some file-system file. It'd also be nice to have some convenient Win32 progress-notification mechanism (waitable event, semaphore, completion port, etc.), rather than just periodically polling the download status.

    I've chosen the XmlHttpRequest COM object to do the work. It seemed to work well enough, plus it supports asynchronous mode. However, I noticed that after some period it just stops working: after several successful file downloads, it stops downloading anything. I periodically poll it to get its status; it reports "in progress", but nothing actually happens, and there's no network activity. Moreover, when the same process creates another instance of the XmlHttpRequest object to perform new downloads, the effect is the same. The object reports "in progress" while it doesn't even try to connect to the server (according to network sniffers and the system TCP state). The only way to make the object work again is to restart the process.

    This makes me suspect that there's a sort of a bug (sorry, I meant undocumented feature) in the object. And it's not a bug at the level of an individual object, since the problem persists when the object is destroyed and another one is created; it's probably some global state of the DLL that implements the object.

    Does anyone know something about this? Is this a known bug? I'm pretty sure there's no chance that I have another bug in my code because of which it only seems that the bug is in XmlHttpRequest. I've done enough tests and spent enough time with the debugger to conclude, beyond reasonable doubt, that it's the object that stops working.

    BTW, while the object should be working, I do all the waiting via MsgWaitXXXX API calls. So if this object needs the message loop to work properly (for instance, it may create a hidden notification window and bind it to a socket via WSAAsyncSelect), I give it the opportunity.

  • Is there a recommended approach to handle saving data in response to within-site navigation without

    - by Carvell Fenton
    Hello all. A preamble to scope my question: I have a web app (or site - this is an internal LAN site) that uses jQuery and AJAX extensively to dynamically load the content section of the UI in the browser. A user navigates the app using a navigation menu. Clicking an item in the navigation menu makes an AJAX call to PHP, and PHP then returns the content that is used to populate the central content section.

    One of the pages served back by PHP has a table form, set up like a spreadsheet, that the user enters values into. This table is always kept in sync with data in the database: when the table is created, it is populated with the relevant database data, and when the user makes a change in a "cell", that change is immediately written back to the database, so the table and database are always in sync. This approach was taken to reassure users that the data they entered has been saved (long story...), and to save them from having to click a save button of some kind.

    This always-in-sync idea is great, except that a user can enter a value in a cell, not take focus out of the cell, and then take any number of actions that would cause that last value to be lost: e.g. navigate to another section of the site via the navigation menu, log out of the app, close the browser, etc.

    End of preamble, on to the issue: I initially thought that wasn't a problem, because I would just track which data was "dirty" (not saved), and then, in the onunload event, do a final write to the database. Herein lies the rub: because of my clever (or not so clever, I'm not sure) use of AJAX and dynamically loaded content, the user never actually leaves the original URL or page when taking the actions above, with the exception of closing the browser. Therefore, the onunload event does not fire, and I am back to losing the last data again.

    My question: is there a recommended way to figure out that a person is navigating away from a "section" of your app when content is dynamically loaded this way? I can come up with a solution, I think, that involves globals and tracking the currently viewed page, but I thought I would check whether there might be a more elegant solution out there, or a change I could make in my design that would make this work. Thanks in advance, as always!

  • PDF Report generation

    - by IniTech
    EDIT: I completed this project using ABCpdf. For anyone interested, I love this product and their support is A+. Everything I listed as a 'con' for the HTML-to-PDF solution was easily doable in ABCpdf.

    I've been charged with creating a data-driven PDF report. After reviewing the plethora of options, I have narrowed it down to 2. I need you all to help me decide, or to offer alternatives I haven't considered. Here are the requirements:

    - 100% data driven
    - Eventually PDF (a stop in HTML is fine, so long as it is converted)
    - Can be run with multiple sets of data (the layout is always the same, the data is variable)
    - Contains normal analysis-style copy (saved in the DB with HTML markup)
    - Contains tables (data for the tables is generated at run time)
    - Header / page number on each page
    - Table of contents
    - .NET (VB or C#)
    - Done quickly

    Now, because the report is going to be generated with multiple sets of data, I don't think a stamped PDF template will work, since I won't know how long a certain piece of the report could be or how many pages it could require. So, I think my best options are:

    1. Programmatic creation using an iText-like solution.
    2. Generation in HTML, converted to PDF using a third-party application (ABCpdf is the tool I have played with so far).

    Both solutions have their pros and cons.

    Programmatic solution - pros:

    - Flexible
    - Easy page numbering / page headers / table of contents
    - Free

    Programmatic solution - cons:

    - Time consuming (writing a layer on top of iText to do what I need and keeping it maintainable)
    - Since the copy is already stored in the DB with HTML markup, I would have to parse the data before placing it into the PDF, breaking each paragraph into chunks so I can apply bold, italic, underline, etc. to specific phrases. This seems like a huge PITA, and I hope I am wrong about that assumption.

    HTML-to-PDF - pros:

    - Easy to generate from the DB (no parsing necessary)
    - Many tools for conversion
    - Uses technology I am already familiar with
    - Built-in "print preview" - not a requirement, but nice

    HTML-to-PDF - cons (edited after project completion; all of my assumptions were incorrect and ABCpdf is awesome):

    1. Almost impossible to generate page headers - Not True
    2. Very difficult to generate page numbers - Not True
    3. Nearly impossible to generate a table of contents - Not True
    4. (Cross-browser support isn't a con; since it's internal, I can dictate which browser to use)
    5. Conversion tool quirks - may not convert exactly as rendered in the browser - Not True
    6. Overall, I thought it would be very hard to format the HTML exactly as I would want it to appear/convert to PDF - Not True

    That's it - I need the community's help in deciding which way I should go. I might be wrong about some of my pro/con assumptions. If I am, please tell me. All thoughts and suggestions are welcome and appreciated. Thanks.
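    For a sense of what option 1 looks like in practice, here is a minimal sketch using iTextSharp, the .NET port of iText (the file name and cell contents are placeholders; a real report layer would still need page events for headers/numbers and a TOC on top of this):

        using System.IO;
        using iTextSharp.text;
        using iTextSharp.text.pdf;

        public static class ReportSketch
        {
            public static void Build(string path)
            {
                Document doc = new Document(PageSize.A4);
                PdfWriter.GetInstance(doc, new FileStream(path, FileMode.Create));
                doc.Open();

                // Analysis-style copy (in reality, pulled from the DB)
                doc.Add(new Paragraph("Analysis section goes here."));

                // A data table built at run time
                PdfPTable table = new PdfPTable(2); // two columns
                table.AddCell("Metric");
                table.AddCell("Value");
                table.AddCell("Revenue");
                table.AddCell("42");
                doc.Add(table);

                doc.Close();
            }
        }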

  • Simultaneous connections with PHP and SOAP?

    - by Dov
    I'm new to using SOAP and understand only the utmost basics of it. I create a client resource/connection, I then run some queries in a loop, and I'm done. The issue I am having is that when I increase the iterations of the loop, e.g. from 100 to 1000, it seems to run out of memory and throws an internal server error. How could I either (a) run multiple simultaneous connections, or (b) create a connection, run 100 iterations, close the connection, create a connection, etc.? Option (a) looks better, but I have no clue how to get it up and running while keeping memory usage (I assume from opening and closing connections) at a minimum. Thanks in advance!

    index.php:

        <?php
        // set loops to 0
        $loops = 0;

        // connection credentials and settings
        $location = 'https://theconsole.com/';
        $wsdl     = $location.'?wsdl';
        $username = 'user';
        $password = 'pass';

        // include the console and client classes
        include "class_console.php";
        include "class_client.php";

        // create a client resource / connection
        $client = new Client($location, $wsdl, $username, $password);

        while ($loops <= 100) {
            $dostuff;
        }
        ?>

    class_console.php:

        <?php
        class Console
        {
            // the connection resource
            private $connection = NULL;

            /**
             * When this object is instantiated a connection will be made to the console
             */
            public function __construct($location, $wsdl, $username, $password, $proxyHost = NULL, $proxyPort = NULL)
            {
                if (is_null($proxyHost) || is_null($proxyPort))
                    $connection = new SoapClient($wsdl, array('login' => $username, 'password' => $password));
                else
                    $connection = new SoapClient($wsdl, array('login' => $username, 'password' => $password, 'proxy_host' => $proxyHost, 'proxy_port' => $proxyPort));

                $connection->__setLocation($location);
                $this->connection = $connection;
                return $this->connection;
            }

            /**
             * Will print any type of data to screen, where supported by print_r
             *
             * @param $var - The data to print to screen
             * @return $this->connection - The connection resource
             */
            public function screen($var)
            {
                print '<pre>';
                print_r($var);
                print '</pre>';
                return $this->connection;
            }

            /**
             * Returns a server / connection resource
             *
             * @return $this->connection - The connection resource
             */
            public function srv()
            {
                return $this->connection;
            }
        }
        ?>

  • iPhone Debugger Message -- Weird

    - by Bill Shiff
    Hello, I have an iPhone app that I've been working on, and I have recently upgraded my version of Xcode. Since the upgrade, I can build and debug in the iPhone Simulator just fine, but when I try to debug on an attached device I get the following messages from Xcode 4:

        GNU gdb 6.3.50-20050815 (Apple version gdb-1510) (Fri Oct 22 04:12:10 UTC 2010)
        Copyright 2004 Free Software Foundation, Inc.
        GDB is free software, covered by the GNU General Public License, and you are
        welcome to change it and/or distribute copies of it under certain conditions.
        Type "show copying" to see the conditions.
        There is absolutely no warranty for GDB.  Type "show warranty" for details.
        This GDB was configured as "--host=i386-apple-darwin --target=arm-apple-darwin".
        tty /dev/ttys001
        sharedlibrary apply-load-rules all
        warning: Unable to read symbols from "dyld" (prefix __dyld_) (not yet mapped into memory).
        warning: Unable to read symbols for (null)/Library/Frameworks/MessageUI.framework/MessageUI (file not found).
        warning: Unable to read symbols from "MessageUI" (not yet mapped into memory).
        warning: Unable to read symbols for (null)/Library/Frameworks/MapKit.framework/MapKit (file not found).
        warning: Unable to read symbols from "MapKit" (not yet mapped into memory).
        warning: Unable to read symbols from "Foundation" (not yet mapped into memory).
        warning: Unable to read symbols for (null)/Library/Frameworks/UIKit.framework/UIKit (file not found).
        warning: Unable to read symbols from "UIKit" (not yet mapped into memory).
        warning: Unable to read symbols for (null)/Library/Frameworks/CoreGraphics.framework/CoreGraphics (file not found).
        warning: Unable to read symbols from "CoreGraphics" (not yet mapped into memory).
        warning: Unable to read symbols from "CoreData" (not yet mapped into memory).
        warning: Unable to read symbols from "QuartzCore" (not yet mapped into memory).
        warning: Unable to read symbols from "libgcc_s.1.dylib" (not yet mapped into memory).
        warning: Unable to read symbols from "libSystem.B.dylib" (not yet mapped into memory).
        warning: Unable to read symbols from "libobjc.A.dylib" (not yet mapped into memory).
        warning: Unable to read symbols from "CoreFoundation" (not yet mapped into memory).
        target remote-mobile /tmp/.XcodeGDBRemote-3836-28
        Switching to remote-macosx protocol
        mem 0x1000 0x3fffffff cache
        mem 0x40000000 0xffffffff none
        mem 0x00000000 0x0fff none
        [Switching to thread 11523]
        [Switching to thread 11523]
        gdb stack crawl at point of internal error:
        0   gdb-arm-apple-darwin  0x0013216e internal_vproblem + 316

  • How to convert arrays or SimpleXML objects into an XML string

    - by streetparade
    I want to create XML from a given string. I have a function for it, but I didn't write it, and it seems a bit cryptic too. Can someone please review it and give me some ideas on how it could be written more clearly for everybody?

        /**
         * Converts arrays or SimpleXML objects into an XML string
         * @param mixed   Accepts an array or XML string with data to post
         * @param integer DO NOT PROVIDE. Internal usage, for recursion only
         */
        private function mixedDataToXML($data, $level = 1)
        {
            if (!$data) {
                return FALSE;
            }
            if (is_array($data)) {
                $xml = '';
                if ($level == 1) {
                    $xml .= '<?xml version="1.0" encoding="ISO-8859-1"?>'."\n";
                }
                foreach ($data as $key => $value) {
                    $key = strtolower($key);
                    if (is_array($value)) {
                        $multi_tags = false;
                        foreach ($value as $key2 => $value2) {
                            if (is_array($value2)) {
                                $xml .= str_repeat("\t", $level)."<$key>\n";
                                $xml .= $this->mixedDataToXML($value2, $level + 1);
                                $xml .= str_repeat("\t", $level)."</$key>\n";
                                $multi_tags = true;
                            } else {
                                if (trim($value2) != '') {
                                    if (htmlspecialchars($value2) != $value2) {
                                        $xml .= str_repeat("\t", $level).
                                            "<$key><![CDATA[$value2]]></$key>\n";
                                    } else {
                                        $xml .= str_repeat("\t", $level).
                                            "<$key>$value2</$key>\n";
                                    }
                                }
                                $multi_tags = true;
                            }
                        }
                        if (!$multi_tags and count($value) > 0) {
                            $xml .= str_repeat("\t", $level)."<$key>\n";
                            $xml .= $this->mixedDataToXML($value, $level + 1);
                            $xml .= str_repeat("\t", $level)."</$key>\n";
                        }
                    } else {
                        if (trim($value) != '') {
                            if (htmlspecialchars($value) != $value) {
                                $xml .= str_repeat("\t", $level).
                                    "<$key><![CDATA[$value]]></$key>\n";
                            } else {
                                $xml .= str_repeat("\t", $level).
                                    "<$key>$value</$key>\n";
                            }
                        }
                    }
                }
                return $xml;
            } else {
                return (string)$data;
            }
        }

  • Memory Leak with Swing Drag and Drop

    - by tom
    I have a JFrame that accepts top-level drops of files. However, after a drop has occurred, references to the frame are held indefinitely inside some Swing internal classes. I believe that disposing of the frame should release all of its resources, so what am I doing wrong?

    Example:

        import java.awt.datatransfer.DataFlavor;
        import java.io.File;
        import java.util.List;
        import javax.swing.JFrame;
        import javax.swing.JLabel;
        import javax.swing.TransferHandler;

        public class DnDLeakTester extends JFrame {
            public static void main(String[] args) {
                new DnDLeakTester();
                // Prevent main from returning or the JVM will exit
                while (true) {
                    try {
                        Thread.sleep(10000);
                    } catch (InterruptedException e) {
                    }
                }
            }

            public DnDLeakTester() {
                super("I'm leaky");
                add(new JLabel("Drop stuff here"));
                setTransferHandler(new TransferHandler() {
                    @Override
                    public boolean canImport(final TransferSupport support) {
                        return (support.isDrop() && support
                                .isDataFlavorSupported(DataFlavor.javaFileListFlavor));
                    }

                    @Override
                    public boolean importData(final TransferSupport support) {
                        if (!canImport(support)) {
                            return false;
                        }
                        try {
                            final List<File> files = (List<File>) support.getTransferable()
                                    .getTransferData(DataFlavor.javaFileListFlavor);
                            for (final File f : files) {
                                System.out.println(f.getName());
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                        return true;
                    }
                });
                setDefaultCloseOperation(DISPOSE_ON_CLOSE);
                pack();
                setVisible(true);
            }
        }

    To reproduce, run the code and drop some files on the frame, then close the frame so it's disposed. To verify the leak, I take a heap dump using JConsole and analyse it with the Eclipse Memory Analysis tool. It shows that sun.awt.AppContext is holding a reference to the frame through its hashmap; it looks like TransferSupport is at fault.

    What am I doing wrong? Should I be asking the DnD support code to clean itself up somehow? I'm running JDK 1.6 update 19.

  • Hide public method used to help test a .NET assembly

    - by ChrisW
    I have a .NET assembly, to be released. Its release build includes:

    - a public, documented API of methods which people are supposed to use;
    - a public but undocumented API of other methods, which exist only to help test the assembly, and which people are not supposed to use.

    The assembly to be released is a custom control, not an application. To regression-test it, I run it in a testing framework/application, which uses (in addition to the public/documented API) some advanced/undocumented methods exported from the control.

    For the public methods which I don't want people to use, I excluded them from the documentation using the <exclude> tag (supported by the Sandcastle Help File Builder) and the [EditorBrowsable] attribute, for example like this:

        /// <summary>
        /// Gets a <see cref="IEditorTransaction"/> instance, which helps
        /// to combine several DOM edits into a single transaction, which
        /// can be undone and redone as if they were a single, atomic operation.
        /// </summary>
        /// <returns>A <see cref="IEditorTransaction"/> instance.</returns>
        IEditorTransaction createEditorTransaction();

        /// <exclude/>
        [EditorBrowsable(EditorBrowsableState.Never)]
        void debugDumpBlocks(TextWriter output);

    This successfully removes the method from the API documentation and from IntelliSense. However, if in a sample application I right-click on an instance of the interface to see its definition in the metadata, I can still see the method, and the [EditorBrowsable] attribute as well, for example:

        // Summary:
        //     Gets a ModelText.ModelDom.Nodes.IEditorTransaction instance, which helps
        //     to combine several DOM edits into a single transaction, which can be undone
        //     and redone as if they were a single, atomic operation.
        //
        // Returns:
        //     A ModelText.ModelDom.Nodes.IEditorTransaction instance.
        IEditorTransaction createEditorTransaction();
        //
        [EditorBrowsable(EditorBrowsableState.Never)]
        void debugDumpBlocks(TextWriter output);

    Questions:

    - Is there a way to hide a public method, even from the metadata?
    - If not, then for this scenario, would you recommend making the methods internal and using the InternalsVisibleTo attribute?
    - Or would you recommend some other way, and if so, what and why?

    Thank you.
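    A minimal sketch of the InternalsVisibleTo route from the second question ("MyControl.Tests" is a placeholder assembly name; a strong-named friend assembly would also need its public key in the attribute). One caveat: interface members are always public in C#, so the test hook has to move onto a class (or a separate internal interface) before it can be made internal.

        // In the control assembly, e.g. Properties/AssemblyInfo.cs
        using System.Runtime.CompilerServices;

        [assembly: InternalsVisibleTo("MyControl.Tests")]

        // Elsewhere in the control assembly:
        public class EditorControl
        {
            // internal: gone from the public metadata, but callable from MyControl.Tests
            internal void DebugDumpBlocks(System.IO.TextWriter output)
            {
                // ...
            }
        }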

  • NHibernate unintentional lazy property loading

    - by chiccodoro
    I introduced a mapping for a business object which has (among others) a property called "Name":

        public class Foo : BusinessObjectBase
        {
            ...
            public virtual string Name { get; set; }
        }

    For some reason, when I fetch "Foo" objects, NHibernate seems to apply lazy property loading (for simple properties, not associations): the following code generates n+1 SQL statements, of which the first fetches only the ids and the remaining n fetch the Name for each record.

        ISession session = ...
        IQuery query = session.CreateQuery(queryString);
        ITransaction tx = session.BeginTransaction();

        List<Foo> result = new List<Foo>();
        foreach (Foo foo in query.Enumerable())
        {
            result.Add(foo);
        }

        tx.Commit();
        session.Close();

    This produces:

        NHibernate: select foo0_.FOO_ID as col_0_0_ from V1_FOO foo0_
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 81
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36470
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36473

    Similarly, the following code leads to a LazyInitializationException after the session is closed:

        ISession session = ...
        ITransaction tx = session.BeginTransaction();

        Foo result = session.Load<Foo>(id);

        tx.Commit();
        session.Close();

        Console.WriteLine(result.Name);

    Following this post, "lazy properties ... is rarely an important feature to enable ... (and) in Hibernate 3, is disabled by default." So what am I doing wrong? I managed to work around the exception by calling NHibernateUtil.Initialize(foo), but the even worse part is the n+1 SQL statements, which bring my application to its knees.

    This is how the mapping looks:

        <class name="Foo" table="V1_FOO">
            ...
            <property name="Name" column="NAME"/>
        </class>

    BTW: the abstract "BusinessObjectBase" base class encapsulates the ID property, which serves as the internal identifier.
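    For reference, a minimal sketch of the workaround mentioned above (NHibernateUtil.Initialize is part of the NHibernate API; factory/id as you would expect): force initialization while the session is still open, so nothing lazy can fire after it closes. This addresses the exception, not the n+1 selects.

        using NHibernate;

        Foo LoadFooEagerly(ISessionFactory factory, int id)
        {
            using (ISession session = factory.OpenSession())
            using (ITransaction tx = session.BeginTransaction())
            {
                Foo foo = session.Load<Foo>(id);
                NHibernateUtil.Initialize(foo); // resolve the proxy before the session goes away
                tx.Commit();
                return foo;
            }
        }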

  • Switch between speakerphone and headset on Android

    - by user210504
    Hi! I wish to know if there is a way to switch between the speaker and the headset dynamically in an Android application. I am using this sample code, which I found online, for my experiments:

        final float frequency = 440;
        float increment = (float)(2*Math.PI) * frequency / 44100; // angular increment for each sample
        float angle = 0;
        AndroidAudioDevice device = new AndroidAudioDevice();
        AudioManager am = (AudioManager)getSystemService(AUDIO_SERVICE);
        am.setMode(AudioManager.MODE_IN_CALL);

        float samples[] = new float[1024];
        int count = 0;
        while (count < 10) {
            count++;
            for (int i = 0; i < samples.length; i++) {
                samples[i] = (float)Math.sin(angle);
                angle += increment;
            }
            device.writeSamples(samples);
        }

        device.stop();
        am.setMode(AudioManager.MODE_NORMAL);

    And the next class:

        public class AndroidAudioDevice {
            AudioTrack track;
            short[] buffer = new short[1024];

            public AndroidAudioDevice() {
                int minSize = AudioTrack.getMinBufferSize(44100,
                        AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_16BIT);
                track = new AudioTrack(AudioManager.STREAM_VOICE_CALL, 44100,
                        AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_16BIT,
                        minSize, AudioTrack.MODE_STREAM);
                track.play();
            }

            public void writeSamples(float[] samples) {
                fillBuffer(samples);
                track.write(buffer, 0, samples.length);
            }

            private void fillBuffer(float[] samples) {
                if (buffer.length < samples.length)
                    buffer = new short[samples.length];
                for (int i = 0; i < samples.length; i++)
                    buffer[i] = (short)(samples[i] * Short.MAX_VALUE);
            }

            public void stop() {
                track.stop();
            }
        }

    As per my understanding, this should play audio on the headset, because we have not enabled the speakerphone. However, the audio plays on the speakerphone.

    1. Am I doing something wrong here?
    2. What would be a way to switch between the internal speaker and the speakerphone dynamically for the same piece of code?

    Any help will be appreciated.

  • How to remove explicit dependencies on other projects' libraries in Eclipse launch configurations

    - by euluis
    In Eclipse it is possible to create launch configurations in a project, specifying runtime dependencies on other projects. A problem I found is that if you have a workspace with multiple projects, each possibly having its own libraries, it is easy to end up with explicit dependencies in a secondary project on libraries that belong to another project and are therefore subject to change. An example of this problem follows:

        proj1
        +-- src
        +-- lib
        |   +-- jar1-v1.0.jar
        |   +-- jar2-v1.0.jar
        proj2
        +-- src
        +-- proj2-tests.launch

    The code in proj2/src does not depend directly on the libraries in proj1/lib. Nevertheless, I do have a dependency from proj2/src to proj1/src, and since the code in proj1/src has an internal dependency on its libraries jar1-v1.0.jar and jar2-v1.0.jar, I have to add a dependency in proj2-tests.launch to the libraries in proj1/lib. This translates into the following ugly lines in proj2-tests.launch:

        <listEntry value="<?xml version="1.0" encoding="UTF-8" standalone="no"?>
            <runtimeClasspathEntry path="3" projectName="proj1" type="1"/>"/>
        <listEntry value="<?xml version="1.0" encoding="UTF-8" standalone="no"?>
            <runtimeClasspathEntry internalArchive="/proj1/lib/jar1-v1.0.jar" path="3" type="2"/>"/>
        <listEntry value="<?xml version="1.0" encoding="UTF-8" standalone="no"?>
            <runtimeClasspathEntry internalArchive="/proj1/lib/jar2-v1.0.jar" path="3" type="2"/>"/>

    This wouldn't be a big problem if there weren't the need, from time to time, to evolve the software, upgrade the libraries, etc. Consider the common need to upgrade the libraries jar1-v1.0.jar and jar2-v1.0.jar to their v1.1 versions. Consider that you have about 10 projects in one workspace, each having about 5 libraries and about 4 launch configurations. You get a maintenance overhead when doing a simple upgrade of a library, requiring changes to files that shouldn't normally need any. Or maybe I'm doing something wrong...

    What I would like to state is "proj2 depends on proj1 and on its libraries", and have this translated to simply that in the *.launch files. Is that possible?

  • How to Work Around Limitations in Generic Type Constraints in C#?

    - by Jose
    Okay, I'm looking for some input. I'm pretty sure this is not currently supported in .NET 3.5, but here goes. I want to require a generic type passed into my class to have a constructor like this:

        new(IDictionary<string,object>)

    so the class would look like this:

        public class MyClass<T> where T : new(IDictionary<string,object>)
        {
            T CreateObject(IDictionary<string,object> values)
            {
                return new T(values);
            }
        }

    But the compiler doesn't support this; it doesn't really know what I'm asking. Some of you might ask: why do you want to do this? Well, I'm working on a pet project of an ORM, so I get values from the DB and then create the object and load the values. I thought it would be cleaner to let the object create itself with the values I give it.

    As far as I can tell, I have two options:

    1. Use reflection (which I'm trying to avoid) to grab the PropertyInfo[] array and then use that to load the values.
    2. Require T to support an interface, like so:

        public interface ILoadValues
        {
            void LoadValues(IDictionary values);
        }

    and then do this:

        public class MyClass<T> where T : new(), ILoadValues
        {
            T CreateObject(IDictionary<string,object> values)
            {
                T obj = new T();
                obj.LoadValues(values);
                return obj;
            }
        }

    The problem I have with the interface is, I guess, philosophical: I don't really want to expose a public method for people to load the values. With the constructor, the idea was that if I had an object like this:

        namespace DataSource.Data
        {
            public class User
            {
                protected internal User(IDictionary<string,object> values)
                {
                    // Initialize
                }
            }
        }

    then as long as MyClass<T> was in the same assembly, the constructor would be available. I personally think that the type constraint should ask, "Do I have access to this constructor? I do? Great!" Anyway, any input is welcome.
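    A third option, as a minimal sketch (not from the original post): accept a factory delegate. The delegate is compiled in the assembly that can see the constructor, so a protected internal constructor keeps working and no public LoadValues method is exposed.

        using System;
        using System.Collections.Generic;

        public class MyClass<T>
        {
            private readonly Func<IDictionary<string, object>, T> factory;

            public MyClass(Func<IDictionary<string, object>, T> factory)
            {
                this.factory = factory;
            }

            public T CreateObject(IDictionary<string, object> values)
            {
                return factory(values); // defer construction to code that has access
            }
        }

        // Usage, from inside DataSource.Data (where the User constructor is visible):
        // var users = new MyClass<User>(values => new User(values));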

  • Strange C array behaviour

    - by LukeN
    After learning that strncmp is not what it seems to be and that strlcpy is not available on my operating system (Linux), I figured I could try to write it myself. I found a quote from Ulrich Drepper, the libc maintainer, who posted an alternative to strlcpy using mempcpy. I don't have mempcpy either, but its behaviour is easy to replicate.

    First off, this is the test case I have:

        #include <stdio.h>
        #include <string.h>

        #define BSIZE 10

        void insp(const char* s, int n)
        {
            int i;
            for (i = 0; i < n; i++) printf("%c ", s[i]);
            printf("\n");
            for (i = 0; i < n; i++) printf("%02X ", s[i]);
            printf("\n");
            return;
        }

        int copy_string(char *dest, const char *src, int n)
        {
            int r = strlen(memcpy(dest, src, n-1));
            dest[r] = 0;
            return r;
        }

        int main()
        {
            char b[BSIZE];
            memset(b, 0, BSIZE);
            printf("Buffer size is %d", BSIZE);
            insp(b, BSIZE);

            printf("\nFirst copy:\n");
            copy_string(b, "First", BSIZE);
            insp(b, BSIZE);
            printf("b = '%s'\n", b);

            printf("\nSecond copy:\n");
            copy_string(b, "Second", BSIZE);
            insp(b, BSIZE);
            printf("b = '%s'\n", b);

            return 0;
        }

    And this is its result:

        Buffer size is 10
        00 00 00 00 00 00 00 00 00 00

        First copy:
        F i r s t
        46 69 72 73 74 00 62 20 3D 00
        b = 'First'

        Second copy:
        S e c o n d
        53 65 63 6F 6E 64 00 00 01 00
        b = 'Second'

    You can see in the internal representation (the lines insp() created) that there's some noise mixed in, like the printf() format string in the inspection after the first copy, and a foreign 0x01 in the second copy. The strings are copied intact, and it correctly handles too-long source strings (let's ignore the possible issue with passing 0 as the length to copy_string for now; I'll fix that later). But why is there foreign content (from the format string) in my destination array? It's as if the destination was actually RESIZED to match the new length.

  • Converting C source to C++

    - by Barry Kelly
    How would you go about converting a reasonably large (300K), fairly mature C codebase to C++?

    The kind of C I have in mind is split into files roughly corresponding to modules (i.e. less granular than a typical OO class-based decomposition), using internal linkage in lieu of private functions and data, and external linkage for public functions and data. Global variables are used extensively for communication between the modules. There is a very extensive integration test suite available, but no unit (i.e. module) level tests.

    I have in mind a general strategy:

    1. Compile everything in C++'s C subset and get that working.
    2. Convert modules into huge classes, so that all the cross-references are scoped by a class name, but leaving all functions and data as static members, and get that working.
    3. Convert the huge classes into instances with appropriate constructors and initialized cross-references; replace static member accesses with indirect accesses as appropriate; and get that working.
    4. Now, approach the project as an ill-factored OO application: write unit tests where dependencies are tractable, and decompose into separate classes where they are not. The goal here is to move from one working program to another at each transformation.

    Obviously, this would be quite a bit of work. Are there any case studies / war stories out there on this kind of translation? Alternative strategies? Other useful advice?

    Note 1: the program is a compiler, and probably millions of other programs rely on its behaviour not changing, so wholesale rewriting is pretty much not an option.

    Note 2: the source is nearly 20 years old and has perhaps 30% code churn (lines modified + added / previous total lines) per year. It is heavily maintained and extended, in other words. Thus, one of the goals is to increase maintainability.

    [For the sake of the question, assume that translation into C++ is mandatory, and that leaving it in C is not an option. The point of adding this condition is to weed out the "leave it in C" answers.]
