Search Results

Search found 14265 results on 571 pages for 'technical favorite features'.


  • What IDE to use for Python

    - by husayt
    As a Python newbie, it is interesting to know what IDEs ("GUIs/editors") others use for Python coding. If you can just give the name (e.g. Textpad, Eclipse...) that will be enough. If it is already mentioned, you can just vote for it. But if you can also give some more comparative information, that will be much appreciated. Thanks.
    Update: Results so far
        PyDev with Eclipse (CP, F, AC, PD, EM, SI, MLS, UML, SC, UT, LN, CF, BM)
        Komodo (CP, C/F, MLS, PD, AC, SC, SI, BM, LN, CF, CT)
        Emacs (CP, F, AC, MLS, PD, EM, SC, SI, BM, LN, CF, CT, UT, UML)
        Vim (CP, F, AC, MLS, SI, BM, LN, CF)
        TextMate (Mac, CT, CF, MLS, SI, BM, LN)
        Gedit (Linux, F, AC, MLS, BM, LN, CT [sort of])
        Idle (CP, F, AC)
        PIDA (Linux, CP, F, AC, MLS, SI, BM, LN, CF) (VIM based)
        NotePad++ (Windows)
        BlueFish (Linux)
        JEdit (CP, F, BM, LN, CF, MLS)
        E-Texteditor (TextMate clone for Windows)
        WingIde (CP, C, AC, MLS (support for C), PD, EM, SC, SI, BM, LN, CF, CT, UT)
        Eric Ide (CP, F, AC, PD, EM, SI, LN, CF, UT)
        Pyscripter (Windows, F, AC, PD, EM, SI, LN, CT, UT)
        ConTEXT (Windows, C)
        SPE (F, AC, UML)
        SciTE (CP, F, MLS, EM, BM, LN, CF, CT, SH)
        Zeus (W, C, BM, LN, CF, SI, SC, CT)
        NetBeans (CP, F, PD, UML, AC, MLS, SC, SI, BM, LN, CF, CT, UT, RAD)
        DABO (CP)
        BlackAdder (C, CP, CF, SI)
        PythonWin (W, F, AC, PD, SI, BM, CF)
        Geany (CP, F, very limited AC, MLS, SI, BM, LN, CF)
        UliPad (CP, F, AC, PD, MLS, SI, LI, CT, UT, BM)
        Boa Constructor (CP, F, AC, PD, EM, SI, BM, LN, UML, CF, CT)
        ScriptDev (W, C, AC, MLS, PD, EM, SI, BM, LN, CF, CT)
        Spider (CP, F, AC)
        Editra (CP, F, AC, MLS, SC, SI, BM, LN, CF)
        Pfaide (Windows, C, AC, MLS, SI, BM, LN, CF, CT)
        KDevelop (CP, F, MLS, SC, SI, BM, LN, CF)
    Acronyms used:
        CP - Cross Platform
        C - Commercial
        F - Free
        AC - Automatic Code-completion
        MLS - Multi-Language Support
        PD - Integrated Python Debugging
        EM - ErrorMarkup
        SC - Source Control integration
        SI - Smart Indent
        BM - Bracket Matching
        LN - Line Numbering
        UML - UML editing / viewing
        CF - Code Folding
        CT - Code Templates
        UT - Unit Testing
        UID - GUI Designer (e.g. QT, Eric, ..)
        DB - integrated database support
        RAD - Rapid app development support
    I don't mention basics like syntax highlighting as I expect these by default. This is just a dry list reflecting your feedback and comments; I am not advocating any of these tools. I will keep updating this list as you keep posting your answers. PS. Can you help me to add features of the above editors to the list (like autocomplete, debugging, etc.)?

    Read the article

  • Insert a doctype into an XML document (Java/ SAX)

    - by Thom Nichols
    Imagine you have an XML document and imagine you have the DTD but the document itself doesn't actually specify a DOCTYPE ... How would you insert the DOCTYPE declaration, preferably by specifying it on the parser (similar to how you can set the schema for a document that will be parsed) or by inserting the necessary SAX events via an XMLFilter or the like? I've found many references to EntityResolver, but that is what's invoked once a DOCTYPE is found during parsing and it's used to point to a local DTD file. EntityResolver2 appears to have what I'm looking for but I haven't found any examples of usage. This is the closest I've come thus far: (code is Groovy, but close enough that you should be able to understand it...) import org.xml.sax.* import org.xml.sax.ext.* import org.xml.sax.helpers.* class XmlFilter extends XMLFilterImpl { public XmlFilter( XMLReader reader ) { super(reader) } @Override public void startDocument() { super.startDocument() super.resolveEntity( null, 'file:///./entity.dtd') println "filter startDocument" } } class MyHandler extends DefaultHandler2 { public InputSource resolveEntity(String name, String publicId, String baseURI, String systemId) { println "entity: $name, $publicId, $baseURI, $systemId" return new InputSource(new StringReader('<!ENTITY asdf "&#161;">')) } } def handler = new MyHandler() def parser = XMLReaderFactory.createXMLReader() parser.setFeature 'http://xml.org/sax/features/use-entity-resolver2', true def filter = new XmlFilter( parser ) filter.setContentHandler( handler ) filter.setEntityResolver( handler ) filter.parse( new InputSource(new StringReader('''<?xml version="1.0" ?> <test>one &asdf; two! &nbsp; &iexcl;&pound;&cent;</test>''')) ); I see resolveEntity called but still hit org.xml.sax.SAXParseException: The entity "asdf" was referenced, but not declared. at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1231) at org.xml.sax.helpers.XMLFilterImpl.parse(XMLFilterImpl.java:333) I guess this is because there's no way to add SAX events that the parser knows about, I can only add events via a filter that's upstream from the parser which are passed along to the ContentHandler. So the document has to be valid going into the XMLReader. Any way around this? I know I can modify the raw stream to add a doctype or possibly do a transform to set a DTD... Any other options?
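
    One of the fallbacks the poster already mentions - modifying the raw stream to add a DOCTYPE before parsing - is straightforward to sketch in plain Java. This is a minimal sketch rather than the EntityResolver2 route: it assumes the document can be buffered into a String, and it only declares the one entity shown in the question (anything else, like &nbsp;, would still need its own declaration). The DOCTYPE has to be spliced in after the XML declaration, not blindly prepended.

        import java.io.StringReader;
        import javax.xml.parsers.SAXParserFactory;
        import org.xml.sax.InputSource;
        import org.xml.sax.helpers.DefaultHandler;

        public class DoctypeInjector {
            // Splice a DOCTYPE with an internal subset into a document that lacks one,
            // then hand the patched text to an ordinary SAX parser.
            public static void parseWithDoctype(String xml, DefaultHandler handler) throws Exception {
                String doctype = "<!DOCTYPE test [ <!ENTITY asdf \"&#161;\"> ]>";
                int prologEnd = xml.indexOf("?>");
                String patched = (prologEnd >= 0)
                        ? xml.substring(0, prologEnd + 2) + doctype + xml.substring(prologEnd + 2)
                        : doctype + xml; // no XML declaration, so the DOCTYPE can lead
                SAXParserFactory factory = SAXParserFactory.newInstance();
                factory.newSAXParser().parse(new InputSource(new StringReader(patched)), handler);
            }
        }

    Because the patching happens before the parser ever sees the bytes, it works with any SAX or DOM front end; the cost is that the input has to be buffered (or wrapped in a filtering Reader) rather than streamed untouched.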

    Read the article

  • Tactics for using PHP in a high-load site

    - by Ross
    Before you answer this I have never developed anything popular enough to attain high server loads. Treat me as (sigh) an alien that has just landed on the planet, albeit one that knows PHP and a few optimisation techniques. I'm developing a tool in PHP that could attain quite a lot of users, if it works out right. However while I'm fully capable of developing the program I'm pretty much clueless when it comes to making something that can deal with huge traffic. So here's a few questions on it (feel free to turn this question into a resource thread as well). Databases At the moment I plan to use the MySQLi features in PHP5. However how should I setup the databases in relation to users and content? Do I actually need multiple databases? At the moment everything's jumbled into one database - although I've been considering spreading user data to one, actual content to another and finally core site content (template masters etc.) to another. My reasoning behind this is that sending queries to different databases will ease up the load on them as one database = 3 load sources. Also would this still be effective if they were all on the same server? Caching I have a template system that is used to build the pages and swap out variables. Master templates are stored in the database and each time a template is called it's cached copy (a html document) is called. At the moment I have two types of variable in these templates - a static var and a dynamic var. Static vars are usually things like page names, the name of the site - things that don't change often; dynamic vars are things that change on each page load. My question on this: Say I have comments on different articles. Which is a better solution: store the simple comment template and render comments (from a DB call) each time the page is loaded or store a cached copy of the comments page as a html page - each time a comment is added/edited/deleted the page is recached. Finally Does anyone have any tips/pointers for running a high load site on PHP. I'm pretty sure it's a workable language to use - Facebook and Yahoo! give it great precedence - but are there any experiences I should watch out for? Thanks, Ross

    Read the article

  • ASP.NET MVC 2 generation of the List/Index view

    - by Klas Mellbourn
    ASP.NET MVC 2 has powerful features for generating the model-dependent content of the Edit view (using EditorForModel) and Details view (using DisplayForModel) that automatically utilizes metadata and editor (or display) templates: <% using (Html.BeginForm()) {%> <%= Html.ValidationSummary(true) %> <fieldset> <legend><%= Html.LabelForModel() %></legend> <%= Html.EditorForModel() %> <p> <input type="submit" value="Save" /> </p> </fieldset> <% } %> However, I cannot find any comparable tools for the "last" step of generating the Index view (a.k.a. the List view). There I have to hard code the columns first in the row representing the headers and then inside the foreach loop: <h2>Index</h2> <table> <tr> <th></th> <th> ID </th> <th> Foo </th> <th> Bar </th> </tr> <% foreach (var item in Model) { %> <tr> <td> <%= Html.ActionLink("Edit", "Edit", new { id=item.ID }) %> | <%= Html.ActionLink("Details", "Details", new { id=item.ID })%> | <%= Html.ActionLink("Delete", "Delete", new { id=item.ID })%> </td> <td> <%= Html.Encode(item.ID) %> </td> <td> <%= Html.Encode(item.Foo) %> </td> <td> <%= Html.Encode(String.Format("{0:g}", item.Bar)) %> </td> </tr> <% } %> </table> What would be the best way to generate the columns (utlizing metadata such as HiddenInput), with the aim of making the Index view as free of model particulars as Edit and Details?

    Read the article

  • Which OAuth library do you find works best for Objective-C/iPhone?

    - by Brennan
    I have been looking to switch to OAuth for my Twitter integration code and now that there is a deadline in less than 7 weeks (see countdown link) it is even more important to make the jump to OAuth. I have been doing Basic Authentication which is extremely easy. Unfortunately OAuth does not appear to be something that I would whip together in a couple of hours. http://www.countdowntooauth.com/ So I am looking to use a library. I have put together the following list. MPOAuth MGTwitterEngine OAuthConsumer I see that MPOAuth has some great features with a good deal of testing code in place but there is one big problem. It does not work. The sample iPhone project that is supposed to authenticate with Twitter causes an error which others have identified and logged as a bug. http://code.google.com/p/mpoauthconnection/issues/detail?id=29 The last code change was March 11 and this bug was filed on March 30. It has been over a month and this critical bug has not been fixed yet. So I have moved on to MGTwitterEngine. I pulled down the source code and loaded it up in Xcode. Immediately I find that there are a few dependencies and the README file does not have a clear list of steps to fetch those dependencies and integrate them with the project so that it builds successfully. I see this as a sign that the project is not mature enough for prime time. I see also that the project references 2 libraries for JSON when one should be enough. One is TouchJSON which has worked well for me so I am again discouraged from relying on this project for my applications. I did find that MGTwitterEngine makes use of OAuthConsumer which is one of many OAuth projects hosted by an OAuth project on Google Code. http://code.google.com/p/oauth/ http://code.google.com/p/oauthconsumer/wiki/UsingOAuthConsumer It looks like OAuthConsumer is a good choice at first glance. It is hosted with other OAuth libraries and has some nice documentation with it. I pulled down the code and it builds without errors but it does have many warnings. And when I run the new Build and Analyze feature in Xcode 3.2 I see 50 analyzer results. Many are marked as potential memory leaks which would likely lead to instability in any app which uses this library. It seems there is no clear winner and I have to go with something before the big Twitter OAuth deadline. Any suggestions?

    Read the article

  • Why does my UIActivityIndicatorView only display once?

    - by Schwigg
    I'm developing an iPhone app that features a tab-bar based navigation with five tabs. Each tab contains a UITableView whose data is retrieved remotely. Ideally, I would like to use a single UIActivityIndicatorView (a subview of the window) that is started/stopped during this remote retrieval - once per tab. Here's how I set up the spinner in the AppDelegate: - (void)applicationDidFinishLaunching:(UIApplication *)application { [window addSubview:rootController.view]; activityIndicator = [[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleWhiteLarge]; [activityIndicator setCenter:CGPointMake(160, 200)]; [window addSubview:activityIndicator]; [window makeKeyAndVisible]; } Since my tabs were all performing a similiar function, I created a base class that all of my tabs' ViewControllers inherit from. Here is the method I'm using to do the remote retrieval: - (void)parseXMLFileAtURL:(NSString *)URL { NSAutoreleasePool *apool = [[NSAutoreleasePool alloc] init]; AppDelegate *appDelegate = (AppDelegate *)[[UIApplication sharedApplication] delegate]; NSLog(@"parseXMLFileAtURL started."); [appDelegate.activityIndicator startAnimating]; NSLog(@"appDelegate.activityIndicator: %@", appDelegate.activityIndicator); articles = [[NSMutableArray alloc] init]; NSURL *xmlURL = [NSURL URLWithString:URL]; rssParser = [[NSXMLParser alloc] initWithContentsOfURL:xmlURL]; [rssParser setDelegate:self]; [rssParser setShouldProcessNamespaces:NO]; [rssParser setShouldReportNamespacePrefixes:NO]; [rssParser setShouldResolveExternalEntities:NO]; [rssParser parse]; NSLog(@"parseXMLFileAtURL finished."); [appDelegate.activityIndicator stopAnimating]; [apool release]; } This method is being called by each view controller as follows: - (void)viewDidAppear:(BOOL)animated { [super viewDidAppear:animated]; if ([articles count] == 0) { NSString *path = @"http://www.myproject.com/rss1.xml"; [self performSelectorInBackground:@selector(parseXMLFileAtURL:) withObject:path]; } } This works great while the application loads the first tab's content. I'm presented with the empty table and the spinner. As soon as the content loads, the spinner goes away. Strangely, when I click the second tab, the NSLog messages from the -parseXMLFileAtURL: method show up in the log, but the screen hangs on the first tab's view and I do not see the spinner. As soon as the content is done downloading, the second tab's view appears. I suspect this has something to do with threading, with which I'm still becoming acquainted. Am I doing something obviously wrong here?

    Read the article

  • Subversion vision and roadmap

    - by gbjbaanb
    Recently C. Michael Pilato of the core Subversion team posted a mail to the Subversion dev mailing list suggesting a vision and roadmap for the future of Subversion. Naturally, he wanted as much feedback and response as possible, which is why I'm posting this here - to elicit some suggestions and contributions from you, the users of Subversion. Any comments are welcome, and I shall feed back a synopsis with a link to this question to the dev mailing list. Similarly, I've created a post on ServerFault to get feedback from the administrator side of things too. So, without further ado:
    Vision
    The first thing on his "vision statement" is: Subversion has no future as a DVCS tool. Let's just get that out there. At least two very successful such tools exist already, and to squeeze another horse into that race would be a poor investment of energy and talent. There's no need to suggest distributed features for Subversion. If you want a DVCS, there should be no ill-feeling if you migrate to Git, Mercurial or Bazaar. As he says, it's pointless trying to make SVN like them when they already exist, especially when there are different usage patterns that SVN should be targeting. The vision for Subversion is: Subversion exists to be universally recognized and adopted as an open-source, centralized version control system characterized by its reliability as a safe haven for valuable data; the simplicity of its model and usage; and its ability to support the needs of a wide variety of users and projects, from individuals to large-scale enterprise operations.
    Roadmap
    Several ideas were suggested as being "very nice to have" and are offered as the starting point of a future roadmap. These are:
        Obliterate
        Shelve/Checkpoint
        Repository-dictated Configuration
        Rename Tracking
        Improved Merging
        Improved Tree Conflict Handling
        Enterprise Authentication Mechanisms
        Forward History Searching
        Log Message Templates
    If anyone has suggestions to add, or comments on these, the Subversion community would welcome all of them.
    Community
    And lastly, there was a call for more people to become involved with Subversion development. As with most OSS projects it can be daunting to join, but there is now a push for more to be done to help. If you feel like you can contribute, please do so.

    Read the article

  • Tridion Installation

    - by Kevin Brydon
    I am currently upgrading an installation of Tridion from 5.3 to 2011 starting almost from scratch (aside from migrating the database), brand new virtual servers. I just want to ask for some advice on my current server setup... a sanity check. All servers are running Windows Server 2008. The pages on our website are all classic ASP. Database SQL Server cluster. The 5.3 database has been migrated using the DatabaseManager. This is pretty standard and works well (in test anyway). Content Manager A single server to run the Content Manager and the Publisher. There are around 10 people using it at any one time so not under a particularly heavy load. Content Data Store Filesystem located somewhere on the network. One directory for live and one for staging. Content Delivery Two servers (cd1 and cd2) each with the the following server roles installed. cd1 writes to a filesystem content data store for the live website, cd2 writes to the content data store for the staging website. Presentation Two public facing web servers (web1 and web2) serving both the live and staging websites. The web servers read directly from the content data store as its a filesystem. Each of the web servers have the Content Delivery Server installed so that I can use dynamic linking (and other features?). I've so far set up everything but the web servers. Any thoughts? edit Thanks to Ram S who linked me to a decent walkthrough, upvoted. I suppose I should have posed some questions as I didn't really ask a question. I guess I'm a little confused over the content deliver aspect. I have the Content Delivery split in two separate parts. cd1 and cd2 do the work of shifting information from the Content Manager to the Staging/Live web directories. web1 and web2 should do the work of serving the web pages to the outside world and will interact with the content data store (file system). Is this a correct setup? I need some parts of the Content Delivery on my web servers right? Theoretically I could get rid of the cd1 and cd2 servers and use web1 and web2 to do the deployment right? But I suspect this will put the web servers under unnecessary strain should there ever be a big publish. I've been reading the 2011 Installation Manual, Content Delivery section, and I'm finding it quite hard to get my head around!

    Read the article

  • SubSonic 2.x now supports TVP's - SqlDbType.Structure / DataTables for SQL Server 2008

    - by ElHaix
    For those interested, I have now modified the SubSonic 2.x code to recognize and support DataTable parameter types. You can read more about SQL Server 2008 features here: http://download.microsoft.com/download/4/9/0/4906f81b-eb1a-49c3-bb05-ff3bcbb5d5ae/SQL%20SERVER%202008-RDBMS/T-SQL%20Enhancements%20with%20SQL%20Server%202008%20-%20Praveen%20Srivatsav.pdf What this enhancement will now allow you to do is to create a partial StoredProcedures.cs class, with a method that overrides the stored procedure wrapper method. A bit about good form: My DAL has no direct table access, and my DB only has execute permissions for that user to my sprocs. As such, SubSonic only generates the AllStructs and StoredProcedures classes. The SPROC: ALTER PROCEDURE [dbo].[testInsertToTestTVP] @UserDetails TestTVP READONLY, @Result INT OUT AS BEGIN SET NOCOUNT ON; SET @Result = -1 --SET IDENTITY_INSERT [dbo].[tbl_TestTVP] ON INSERT INTO [dbo].[tbl_TestTVP] ( [GroupInsertID], [FirstName], [LastName] ) SELECT [GroupInsertID], [FirstName], [LastName] FROM @UserDetails IF @@ROWCOUNT > 0 BEGIN SET @Result = 1 SELECT @Result RETURN @Result END --SET IDENTITY_INSERT [dbo].[tbl_TestTVP] OFF END The TVP: CREATE TYPE [dbo].[TestTVP] AS TABLE( [GroupInsertID] [varchar](50) NOT NULL, [FirstName] [varchar](50) NOT NULL, [LastName] [varchar](50) NOT NULL ) GO When the auto-gen tool runs, it creates the following erroneous method: /// <summary> /// Creates an object wrapper for the testInsertToTestTVP Procedure /// </summary> public static StoredProcedure TestInsertToTestTVP(string UserDetails, int? Result) { SubSonic.StoredProcedure sp = new SubSonic.StoredProcedure("testInsertToTestTVP", DataService.GetInstance("MyDAL"), "dbo"); sp.Command.AddParameter("@UserDetails", UserDetails, DbType.AnsiString, null, null); sp.Command.AddOutputParameter("@Result", DbType.Int32, 0, 10); return sp; } It sets UserDetails as type string. As it's good form to have two folders for a SubSonic DAL - Custom and Generated, I created a StoredProcedures.cs partial class in Custom that looks like this: /// <summary> /// Creates an object wrapper for the testInsertToTestTVP Procedure /// </summary> public static StoredProcedure TestInsertToTestTVP(DataTable dt, int? Result) { DataSet ds = new DataSet(); SubSonic.StoredProcedure sp = new SubSonic.StoredProcedure("testInsertToTestTVP", DataService.GetInstance("MyDAL"), "dbo"); // TODO: Modify the SubSonic code base in sp.Command.AddParameter to accept // a parameter type of System.Data.SqlDbType.Structured, as it currently only accepts // System.Data.DbType. //sp.Command.AddParameter("@UserDetails", dt, System.Data.SqlDbType.Structured null, null); sp.Command.AddParameter("@UserDetails", dt, SqlDbType.Structured); sp.Command.AddOutputParameter("@Result", DbType.Int32, 0, 10); return sp; } As you can see, the method signature now contains a DataTable, and with my modification to the SubSonic framework, this now works perfectly. I'm wondering if the SubSonic guys can modify the auto-gen to recognize a TVP in a sproc signature, so as to avoid having to re-write the wrapper? Does SubSonic 3.x support Structured data types? Also, I'm sure many will be interested in using this code, so where can I upload the new code? Thanks.

    Read the article

  • Problem simulating HTTP POST using HttpClient

    - by user560904
    I am trying to programatically send a HTTP Post request using HttpClient to http://ojp.nationalrail.co.uk/en/s/planjourney/query but it is not liking the request I send it. I copied the headers and body from what Chrome browser sends so it is identical but it doesn't like what I send as the HTML mentions there's an error. <div class="padding"> <h1 class="sifr"><strong>Sorry</strong>, something went wrong</h1> <div class="error-message"> <div class="error-message-padding"> <h2>There is a problem with the page you are trying to access.</h2> <p>It is possible that it was either moved, it doesn't exist or we are experiencing some technical difficulties.</p> <p>We are sorry for the inconvenience.</p> </div> </div> </div> Here is my Java program which uses HttpClient: package com.tixsnif; import org.apache.http.*; import org.apache.http.client.HttpClient; import org.apache.http.client.entity.UrlEncodedFormEntity; import org.apache.http.client.methods.HttpPost; import org.apache.http.impl.client.DefaultHttpClient; import org.apache.http.message.BasicNameValuePair; import org.apache.http.protocol.HTTP; import java.io.*; import java.util.*; import java.util.zip.GZIPInputStream; public class WebScrapingTesting { public static void main(String[] args) throws Exception { String target = "http://ojp.nationalrail.co.uk/en/s/planjourney/query"; HttpClient client = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(target); BasicNameValuePair[] params = { new BasicNameValuePair("jpState", "single"), new BasicNameValuePair("commandName", "journeyPlannerCommand"), new BasicNameValuePair("from.searchTerm", "Basingstoke"), new BasicNameValuePair("to.searchTerm", "Reading"), new BasicNameValuePair("timeOfOutwardJourney.arrivalOrDeparture", "DEPART"), new BasicNameValuePair("timeOfOutwardJourney.monthDay", "Today"), new BasicNameValuePair("timeOfOutwardJourney.hour", "10"), new BasicNameValuePair("timeOfOutwardJourney.minute", "15"), new BasicNameValuePair("timeOfReturnJourney.arrivalOrDeparture", "DEPART"), new BasicNameValuePair("timeOfReturnJourney.monthDay", "Today"), new BasicNameValuePair("timeOfReturnJourney.hour", "18"), new BasicNameValuePair("timeOfReturnJourney.minute", "15"), new BasicNameValuePair("_includeOvertakenTrains", "on"), new BasicNameValuePair("viaMode", "VIA"), new BasicNameValuePair("via.searchTerm", "Station name / code"), new BasicNameValuePair("offSetOption", "0"), new BasicNameValuePair("_reduceTransfers", "on"), new BasicNameValuePair("operatorMode", "SHOW"), new BasicNameValuePair("operator.code", ""), new BasicNameValuePair("_lookForSleeper", "on"), new BasicNameValuePair("_directTrains", "on")}; httpPost.setHeader("Host", "ojp.nationalrail.co.uk"); httpPost.setHeader("User-Agent", "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_4; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.231 Safari/534.10"); httpPost.setHeader("Accept-Encoding", "gzip,deflate,sdch"); httpPost.setHeader("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,**/*//*;q=0.8"); httpPost.setHeader("Accept-Language", "en-us,en;q=0.8"); httpPost.setHeader("Accept-Charset", "ISO-8859-1,utf-8;q=0.7,*;q=0.7"); httpPost.setHeader("Origin", "http://www.nationalrail.co.uk/"); httpPost.setHeader("Referer", "http://www.nationalrail.co.uk/"); httpPost.setHeader("Content-Type", "application/x-www-form-urlencoded"); httpPost.setHeader("Cookie", "JSESSIONID=B2A3419B79C5D999CA4806B459675CCD.app201; Path=/"); UrlEncodedFormEntity urlEncodedFormEntity = new 
UrlEncodedFormEntity(Arrays.asList(params)); urlEncodedFormEntity.setContentEncoding(HTTP.UTF_8); httpPost.setEntity(urlEncodedFormEntity); HttpResponse response = client.execute(httpPost); InputStream input = response.getEntity().getContent(); GZIPInputStream gzip = new GZIPInputStream(input); InputStreamReader isr = new InputStreamReader(gzip); BufferedReader br = new BufferedReader(isr); String line = null; while((line = br.readLine()) != null) { System.out.printf("\n%s", line); } client.getConnectionManager().shutdown(); } } I keep the JSESSIONID updated if it expires but there seems to be another problem that I cannot see. Am I missing something rather obvious?
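
    Before digging into the parser side, one thing worth ruling out is the hard-coded Cookie header: a JSESSIONID copied out of a browser is usually stale by the time the program runs, and the journey planner may also expect cookies set when the site is first visited. Below is a minimal sketch of letting HttpClient manage the session itself - fetch a page with the same client first, then send the POST without any hand-written Cookie header. The GET URL is only a guess (any page on ojp.nationalrail.co.uk that issues the session cookie would do); the rest is stock HttpClient 4 usage.

        import org.apache.http.HttpResponse;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.cookie.Cookie;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.util.EntityUtils;

        public class SessionWarmup {
            // Visit the site once so the server plants a fresh JSESSIONID in this
            // client's cookie store; reuse the same client for the form POST.
            public static DefaultHttpClient warmUp() throws Exception {
                DefaultHttpClient client = new DefaultHttpClient();
                HttpGet get = new HttpGet("http://ojp.nationalrail.co.uk/en/s/planjourney/query"); // guessed entry page
                HttpResponse response = client.execute(get);
                EntityUtils.toString(response.getEntity()); // consume the body so the connection is released
                for (Cookie c : client.getCookieStore().getCookies()) {
                    System.out.println("cookie: " + c.getName() + "=" + c.getValue());
                }
                return client;
            }
        }

    If a fresh session still produces the error page, the next suspects are the Accept header (the value in the listing contains a mangled "**/*//*" token where "*/*" probably belongs) and any hidden form fields the real search page submits that the hand-built parameter list leaves out.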

    Read the article

  • Best practices for sending automated daily emails from web service

    - by Tauren
    I am running a web service that currently sends confirmation emails out to new users via the gmail smtp servers. As I'm only getting a few new users each day, this hasn't been a problem. I've recently added new features to the webapp that will require a customized message to be sent out to each user every day. Think of this as similar to the regular messages LinkedIn sends out that give you a status report on the activity in your network. Every user's message will be different. With thousands of users, this means thousands of unique messages will be sent each day. Edit: I've since found that these types of email are called "transactional or relationship messages". Spamtacular has a good article on differentiating between marketing and transactional email. I don't think using gmail's smtp servers will cut it anymore, but I don't know that for sure. I don't know what gmail's maximum outgoing messages per account is (it might be 100/day), but they limit outgoing mail to 500 recipients per message. I'm not sending a single message to 500 recipients, but I'm going to be sending 1000's of customized messages with each recipient getting one per day. I'm interested to learn any best practices for doing this (especially for Java-based webapps). Here are some of my thoughts and concerns on it: Should I set up my own outgoing mail server? If I do this, it seems like I'll have all sorts of other issues to worry about, such as preventing mail server abuse, monitoring bounces, allowing ways to opt-out of emails, etc. Are there any tools or services to help with this? Maybe something like OpenEMM or a services like MailChimp? But those seem focused more toward email marketing campaigns. I don't think I should have the webapp itself handle sending emails as it currently is for new user signups. I'm thinking I should setup a separate messaging server that can access the same backend/datastore as the webapp. Thoughts on this? Should I consider setting up some sort of message queueing service to help with this, such as JMS, RabbitMQ, ActiveMQ, etc.? Do I need to provide users a way to opt-out? Do I need to flag these as bulk messages? I don't really consider these email marketing messages, but I'm unsure what is considered appropriate or proper netiquette. Any advice is appreciated. I'm also very interested in open source tools or web services that simplify things and could help me to ramp up as quickly as possible. Thanks!
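
    On the Java side, the sending itself is the small part of the problem; a plain JavaMail loop over per-user digests covers it, and the real work is everything around it (deliverability, opt-out, queueing). A minimal sketch follows - the SMTP host, credentials, addresses and the UserDigest type are all stand-ins invented for the example, not anything prescribed by JavaMail.

        import java.util.List;
        import java.util.Properties;
        import javax.mail.Authenticator;
        import javax.mail.Message;
        import javax.mail.MessagingException;
        import javax.mail.PasswordAuthentication;
        import javax.mail.Session;
        import javax.mail.Transport;
        import javax.mail.internet.InternetAddress;
        import javax.mail.internet.MimeMessage;

        public class DigestMailer {

            // Stand-in for whatever per-user summary the webapp builds each day.
            public static final class UserDigest {
                final long userId; final String email; final String body;
                public UserDigest(long userId, String email, String body) {
                    this.userId = userId; this.email = email; this.body = body;
                }
            }

            public void sendDigests(List<UserDigest> digests) throws MessagingException {
                Properties props = new Properties();
                props.put("mail.smtp.host", "smtp.example.com"); // placeholder relay
                props.put("mail.smtp.auth", "true");
                Session session = Session.getInstance(props, new Authenticator() {
                    protected PasswordAuthentication getPasswordAuthentication() {
                        return new PasswordAuthentication("mailer", "secret"); // placeholder credentials
                    }
                });
                for (UserDigest digest : digests) {
                    MimeMessage msg = new MimeMessage(session);
                    msg.setFrom(new InternetAddress("digest@example.com"));
                    msg.addRecipient(Message.RecipientType.TO, new InternetAddress(digest.email));
                    msg.setSubject("Your daily activity summary");
                    msg.setText(digest.body); // one unique body per user
                    // A one-click opt-out helps both netiquette and spam scoring.
                    msg.setHeader("List-Unsubscribe", "<http://example.com/unsubscribe?u=" + digest.userId + ">");
                    Transport.send(msg);
                }
            }
        }

    Where a queue (JMS, RabbitMQ, or simply a database table drained by a scheduled job) earns its keep is throughput and retries: the webapp enqueues digests and a separate worker holds the SMTP conversation, so a slow mail server never blocks a request thread.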

    Read the article

  • Gwt-gdata authentication

    - by Raffo
    Hi guys, I'm writing an application with GWT and I found on the internet that there's a library to use easily gdata features. In particular I need to use the integration with Google Calendar. I followed the official guide on gwt-gdata site to do the authentication ( http://code.google.com/p/gwt-gdata/wiki/GettingStarted ) but unfortunately, I got an error. This is the error: 17:59:12.463 [ERROR] [testmappa] Unable to load module entry point class testMappa.client.TestMappa (see associated exception for details) com.google.gwt.core.client.JavaScriptException: (TypeError): $wnd.google.accounts is undefined fileName: http://127.0.0.1:8888 lineNumber: 29 stack: ("http://www.google.com/calendar/feeds/")@http://127.0.0.1:8888:29 connect("http://127.0.0.1:8888/TestMappa.html?gwt.codesvr=127.0.0.1:9997","`gL1<a3s4B&Y{(Ci","127.0.0.1:9997","testmappa","2.0")@:0 ((void 0),"testmappa","http://127.0.0.1:8888/testmappa/")@http://127.0.0.1:8888/testmappa/hosted.html?testmappa:264 z()@http://127.0.0.1:8888/testmappa/testmappa.nocache.js:2 (-2)@http://127.0.0.1:8888/testmappa/testmappa.nocache.js:8 at com.google.gwt.dev.shell.BrowserChannelServer.invokeJavascript(BrowserChannelServer.java:195) at com.google.gwt.dev.shell.ModuleSpaceOOPHM.doInvoke(ModuleSpaceOOPHM.java:120) at com.google.gwt.dev.shell.ModuleSpace.invokeNative(ModuleSpace.java:507) at com.google.gwt.dev.shell.ModuleSpace.invokeNativeObject(ModuleSpace.java:264) at com.google.gwt.dev.shell.JavaScriptHost.invokeNativeObject(JavaScriptHost.java:91) at com.google.gwt.accounts.client.User.login(User.java) at testMappa.client.TestMappa.onModuleLoad(TestMappa.java:68) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.gwt.dev.shell.ModuleSpace.onLoad(ModuleSpace.java:369) at com.google.gwt.dev.shell.OophmSessionHandler.loadModule(OophmSessionHandler.java:185) at com.google.gwt.dev.shell.BrowserChannelServer.processConnection(BrowserChannelServer.java:380) at com.google.gwt.dev.shell.BrowserChannelServer.run(BrowserChannelServer.java:222) at java.lang.Thread.run(Thread.java:637) I'm not able to understand the reason of this error. My code is simply: String scope = "http://www.google.com/calendar/feeds/"; User.login(scope); And as far as I know it should work as it is. I don't know what to do and I'm here to ask how to solve this problem and if I can directly use gdata native java library, but I believe this other thing is not possible to be done for client-side gwt code (since the code is going to be converted to javascript). Thank you.

    Read the article

  • Fulltext search for django : Mysql not so bad ? (vs sphinx, xapian)

    - by Eric
    I am studying fulltext search engines for django. It must be simple to install, fast indexing, fast index update, not blocking while indexing, fast search. After reading many web pages, I put in a short list: Mysql MYISAM fulltext, djapian/python-xapian, and django-sphinx. I did not choose lucene because it seems complex, nor haystack as it has fewer features than djapian/django-sphinx (like fields weighting). Then I made some benchmarks; to do so, I collected many free books on the net to generate a database table with 1 485 000 records (id, title, body), each record about 600 bytes long. From the database, I also generated a list of 100 000 existing words and shuffled them to create a search list. For the tests, I made 2 runs on my laptop (4 GB RAM, dual core 2.0 GHz): the first one just after a server reboot to clear all caches, the second done just after in order to test how good cached results are. Here are the "home made" benchmark results:

    1485000 records with Title (150 bytes) and body (450 bytes)

    Mysql 5.0.75/Ubuntu 9.04 Fulltext :
    ==========================================================================
    Full indexing : 7m14.146s
    1 thread, 1000 searches with single word randomly taken from database :
      First run : 0:01:11.553524   next run : 0:00:00.168508

    Mysql 5.5.4 m3/Ubuntu 9.04 Fulltext :
    ==========================================================================
    Full indexing : 6m08.154s
    1 thread, 1000 searches with single word randomly taken from database :
      First run : 0:01:11.553524   next run : 0:00:00.168508
    1 thread, 100000 searches with single word randomly taken from database :
      First run : 9m09s   next run : 5m38s
    1 thread, 10000 random strings (random strings should not be found in database) :
      just after the 100000 search test : 0:00:15.007353
    1 thread, boolean search : 1000 x (+word1 +word2)
      First run : 0:00:21.205404   next run : 0:00:00.145098

    Djapian Fulltext :
    ==========================================================================
    Full indexing : 84m7.601s
    1 thread, 1000 searches with single word randomly taken from database with prefetch :
      First run : 0:02:28.085680   next run : 0:00:14.300236

    python-xapian Fulltext :
    ==========================================================================
    1 thread, 1000 searches with single word randomly taken from database :
      First run : 0:01:26.402084   next run : 0:00:00.695092

    django-sphinx Fulltext :
    ==========================================================================
    Full indexing : 1m25.957s
    1 thread, 1000 searches with single word randomly taken from database :
      First run : 0:01:30.073001   next run : 0:00:05.203294
    1 thread, 100000 searches with single word randomly taken from database :
      First run : 12m48s   next run : 9m45s
    1 thread, 10000 random strings (random strings should not be found in database) :
      just after the 100000 search test : 0:00:23.535319
    1 thread, boolean search : 1000 x (word1 word2)
      First run : 0:00:20.856486   next run : 0:00:03.005416

    As you can see, Mysql is not so bad at all for fulltext search. In addition, its query cache is very efficient. Mysql seems to me a good choice as there is nothing to install (I need just to write a small script to synchronize an Innodb production table to a MyISAM search table) and as I do not really need advanced search features like stemming, etc. Here is the question: What do you think about the Mysql fulltext search engine vs sphinx and xapian?

    Read the article

  • Variables are "lost" somewhere, then reappear... all with no error thrown.

    - by rob - not a robber
    Hi All, I'm probably going to make myself look like a fool with my horrible scripting but here we go. I have a form that I am collecting a bunch of checkbox info from using a binary method. ON/SET=1 !ISSET=0 Anyway, all seems to be going as planned except for the query bit. When I run the script, it runs through and throws no errors, but it's not doing what I think I am telling it to. I've hard coded the values in the query and left them in the script and it DOES update the DB. I've also tried echoing all the needed variables after the script runs and exiting right after so I can audit them... and they're all there. Here's an example. ####FEATURES RECORD UPDATE ### HERE I DECIDE TO RUN THE SCRIPT BASED ON WHETHER AN IMAGE BUTTON WAS USED if (isset($_POST["button_x"])) { ### HERE I AM ASSIGNING 1 OR 0 TO A VAR BASED ON WHTER THE CHECKBOX WAS SET if (isset($_POST["pool"])) $pool=1; if (!isset($_POST["pool"])) $pool=0; if (isset($_POST["darts"])) $darts=1; if (!isset($_POST["darts"])) $darts=0; if (isset($_POST["karaoke"])) $karaoke=1; if (!isset($_POST["karaoke"])) $karaoke=0; if (isset($_POST["trivia"])) $trivia=1; if (!isset($_POST["trivia"])) $trivia=0; if (isset($_POST["wii"])) $wii=1; if (!isset($_POST["wii"])) $wii=0; if (isset($_POST["guitarhero"])) $guitarhero=1; if (!isset($_POST["guitarhero"])) $guitarhero=0; if (isset($_POST["megatouch"])) $megatouch=1; if (!isset($_POST["megatouch"])) $megatouch=0; if (isset($_POST["arcade"])) $arcade=1; if (!isset($_POST["arcade"])) $arcade=0; if (isset($_POST["jukebox"])) $jukebox=1; if (!isset($_POST["jukebox"])) $jukebox=0; if (isset($_POST["dancefloor"])) $dancefloor=1; if (!isset($_POST["dancefloor"])) $dancefloor=0; ### I'VE DONE LOADS OF PERMUTATIONS HERE... HARD SET THE 1/0 VARS AND LEFT THE $estab_id TO BE PICKED UP. SET THE $estab_id AND LEFT THE COLUMN DATA TO BE PICKED UP. ALL NO GOOD. IT _DOES_ WORK IF I HARD SET ALL VARS THOUGH mysql_query("UPDATE thedatabase SET pool_table='$pool', darts='$darts', karoke='$karaoke', trivia='$trivia', wii='$wii', megatouch='$megatouch', guitar_hero='$guitarhero', arcade_games='$arcade', dancefloor='$dancefloor' WHERE establishment_id='22'"); ###WEIRD THING HERE IS IF I ECHO THE VARS AT THIS POINT AND THEN EXIT(); they all show up as intended. header("location:theadminfilething.php"); exit(); THANKS ALL!!!

    Read the article

  • Efficiency of Java "Double Brace Initialization"?

    - by Jim Ferrans
    In Hidden Features of Java the top answer mentions Double Brace Initialization, with a very enticing syntax: Set<String> flavors = new HashSet<String>() {{ add("vanilla"); add("strawberry"); add("chocolate"); add("butter pecan"); }}; This idiom creates an anonymous inner class with just an instance initializer in it, which "can use any [...] methods in the containing scope". Main question: Is this as inefficient as it sounds? Should its use be limited to one-off initializations? (And of course showing off!) Second question: The new HashSet must be the "this" used in the instance initializer ... can anyone shed light on the mechanism? Third question: Is this idiom too obscure to use in production code? Summary: Very, very nice answers, thanks everyone. On question (3), people felt the syntax should be clear (though I'd recommend an occasional comment, especially if your code will pass on to developers who may not be familiar with it). On question (1), The generated code should run quickly. The extra .class files do cause jar file clutter, and slow program startup slightly (thanks to coobird for measuring that). Thilo pointed out that garbage collection can be affected, and the memory cost for the extra loaded classes may be a factor in some cases. Question (2) turned out to be most interesting to me. If I understand the answers, what's happening in DBI is that the anonymous inner class extends the class of the object being constructed by the new operator, and hence has a "this" value referencing the instance being constructed. Very neat. Overall, DBI strikes me as something of an intellectual curiousity. Coobird and others point out you can achieve the same effect with Arrays.asList, varargs methods, Google Collections, and the proposed Java 7 Collection literals. Newer JVM languages like Scala, JRuby, and Groovy also offer concise notations for list construction, and interoperate well with Java. Given that DBI clutters up the classpath, slows down class loading a bit, and makes the code a tad more obscure, I'd probably shy away from it. However, I plan to spring this on a friend who's just gotten his SCJP and loves good natured jousts about Java semantics! ;-) Thanks everyone!
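
    For readers who want the terse call sites without an extra anonymous class per use, the varargs route mentioned in the summary is easy to spell out. A small sketch of a helper along the lines of what Arrays.asList and Google Collections already offer (the class and method names here are just illustrative):

        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.Set;

        public final class Sets {
            private Sets() {}

            // One small static factory instead of one anonymous subclass per call site.
            public static <T> Set<T> newHashSet(T... items) {
                return new HashSet<T>(Arrays.asList(items));
            }
        }

    Usage reads almost as tightly as the double-brace version, with no extra .class files generated: Set<String> flavors = Sets.newHashSet("vanilla", "strawberry", "chocolate", "butter pecan");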

    Read the article

  • Updating / refreshing a live geojson layer | JSON & JS Variable

    - by Ozaki
    TLDR I am trying to get my geoJSON layer to update, currently it will 1. Create the vector mark, 2. Remove the vector mark, 3. Set the JS variable for lat and lon, 4. Unset the variable??? :S Hey S O. I have a geojson layer set up as follows: //GeoJSON Layer// var layer1 = new OpenLayers.Layer.GML("My GeoJSON Layer", "coordinates", {format: OpenLayers.Format.GeoJSON, styleMap: style_red}); My features are set up as follows: var latitude = 0.0; // as 0.0 it will draw the point in. var longitude = 0.0; // as 0.0 it will draw the point in. //var longitude = getlongitude; where getlongitude = JSON string of longitude. var point = new OpenLayers.Geometry.Point(longitude, latitude); pointFeature = new OpenLayers.Feature.Vector(point, null, style_red); var style_red = OpenLayers.Util.extend({}, layer_style); style_red.strokeColor = "red"; style_red.fillColor = "black"; style_red.fillOpacity = 0.5; style_red.graphicName = "circle"; style_red.pointRadius = 3.8; style_red.strokeWidth = 2; style_red.strokeLinecap = "butt"; and my layer updating function: function UpdateLayer(){ var p = new OpenLayers.Format.GeoJSON({ 'internalProjection': new OpenLayers.Projection("EPSG:900913"), 'externalProjection': new OpenLayers.Projection("EPSG:4326") }); var url = "coordinates"; OpenLayers.loadURL(url, {}, null, function(r) { var f = p.read(r.responseText); map.layers[2].destroyFeatures(); map.layers[2].addFeatures(pointFeature); }); setTimeout("UpdateLayer()",1000) } Any idea what I am doing wrong or what I am missing? Edit1 It now removes the feature (was map.layers[1]) previously... But will not add the new feature.. Edit2 I managed to get it to redraw a point but not with live data. It should draw the point at what (latitude) & (longitude) are equal to. I am trying to set latitude & longitude to some JSON string but every time straight after it sets the variable it changes it back to "undefined" as soon as it passes the line after var latitude? (using firebug & firequery to debug)

    Read the article

  • If Nvidia Shield can stream a game via WiFi (~150-300Mbps), where is the 1-10Gbps wired streaming?

    - by Enigma
    Facts: It is surprising and uncharacteristic that a wireless game streaming solution is the *first to hit the market when a 1000mbps+ Ethernet connection would accomplish the same feat with roughly 6x the available bandwidth. 150-300mbps WiFi is in no way superior to a 1000mbps+ LAN connection aside from well wireless mobility. Throughout time, (since the internet was created) wired services have **always come first yet in this particular case, the opposite seems to be true. We had wired internet first, wired audio streaming, and wired video streaming all before their wireless counterparts. Why? Largely because the wireless bandwidth was and is inferior. Even today despite being significantly better and capable of a lot more, it is still inferior to a wired connection. Situation: Chief among these is that NVIDIA’s Shield handheld game console will be getting a microconsole-like mode, dubbed “Shield Console Mode”, that will allow the handheld to be converted into a more traditional TV-connected console. In console mode Shield can be controlled with a Bluetooth controller, and in accordance with the higher resolution of TVs will accept 1080p game streaming from a suitably equipped PC, versus 720p in handheld mode. With that said 1080p streaming will require additional bandwidth, and while 720p can be done over WiFi NVIDIA will be requiring a hardline GigE connection for 1080p streaming (note that Shield doesn’t have Ethernet, so this is presumably being done over USB). Streaming aside, in console mode Shield will also support its traditional local gaming/application functionality. - http://www.anandtech.com/show/7435/nvidia-consolidates-game-streaming-tech-under-gamestream-brand-announces-shield-console-mode ^ This is not acceptable to me for a number of reasons not to mention the ridiculousness of having a little screen+controller unit sitting there while using a secondary controller and screen instead. That kind of redundant absurdity exemplifies how wrong of a solution that is. They need a second product for this solution without the screen or controller for it to make sense... at which point your just buying a little computer that does what most other larger computers do better. While this secondary project will provide a wired connection, it still shouldn't be necessary to purchase a Shield to have this benefit. Not only this but Intel's WiDi claims game streaming support as well - wirelessly. Where is the wired streaming? All that is required, by my understanding, is the ability to decode H.264 video compression and transmit control/feedback so by any logical comparison, one (Nvidia especially) should have no difficulty in creating an application for PC's (win32/64 environment) that does the exact same thing their android app does. I have 2 video cards capable of streaming (encoding) H.264 so by right they must be capable of decoding it I would think. I should be able to stream to my second desktop or my laptop both of which by hardware comparison are superior to the Shield. I haven't found anything stating plans to allow non-shield owners to do this. Can a third party create this software or does it hinge on some limitation that only Nvidia can overcome? 
    Reiteration of questions:
    Is there a technical reason (non-marketing) for why Nvidia opted to bottleneck the streaming service with a wireless connection, limiting the resolution to 720p and introducing intermittent video choppiness, when on a wired connection one could achieve, presumably, 1080p with significantly less or zero choppiness?
    Is there anything limiting developers from creating a PC/desktop application emulating the same H.264 decoding functionality that circumvents the need to get an Nvidia Shield altogether? (It is not a matter of being too cheap to support Nvidia - I have many Nvidia cards that aren't being used. One should not have to purchase specialty hardware when equivalent hardware already exists.)
    The same questions go for Intel WiDi also. I am just utterly perplexed that there are wireless live streaming solutions and yet no wired ones. How on earth can wireless be the go-to transmission medium? Is there another solution that takes advantage of H.264 video compression allowing live streaming over a wired connection?
    (*) - Perhaps this isn't the first but afaik it is the first complete package.
    (**) - I can't back that up with hard evidence/links but someone probably could.
    Edit: Maybe this will be the solution I am looking for, but I still find it hard to believe that they would be the first, and after wireless solutions already exist. In-home Streaming: You can play all your Windows and Mac games on your SteamOS machine, too. Just turn on your existing computer and run Steam as you always have - then your SteamOS machine can stream those games over your home network straight to your TV! - http://store.steampowered.com/livingroom/SteamOS/

    Read the article

  • How to optimize Core Data query for full text search

    - by dk
    Can I optimize a Core Data query when searching for matching words in a text? (This question also pertains to the wisdom of custom SQL versus Core Data on an iPhone.) I'm working on a new (iPhone) app that is a handheld reference tool for a scientific database. The main interface is a standard searchable table view and I want as-you-type response as the user types new words. Words matches must be prefixes of words in the text. The text is composed of 100,000s of words. In my prototype I coded SQL directly. I created a separate "words" table containing every word in the text fields of the main entity. I indexed words and performed searches along the lines of SELECT id, * FROM textTable JOIN (SELECT DISTINCT textTableId FROM words WHERE word BETWEEN 'foo' AND 'fooz' ) ON id=textTableId LIMIT 50 This runs very fast. Using an IN would probably work just as well, i.e. SELECT * FROM textTable WHERE id IN (SELECT textTableId FROM words WHERE word BETWEEN 'foo' AND 'fooz' ) LIMIT 50 The LIMIT is crucial and allows me to display results quickly. I notify the user that there are too many to display if the limit is reached. This is kludgy. I've spent the last several days pondering the advantages of moving to Core Data, but I worry about the lack of control in the schema, indexing, and querying for an important query. Theoretically an NSPredicate of textField MATCHES '.*\bfoo.*' would just work, but I'm sure it will be slow. This sort of text search seems so common that I wonder what is the usual attack? Would you create a words entity as I did above and use a predicate of "word BEGINSWITH 'foo'"? Will that work as fast as my prototype? Will Core Data automatically create the right indexes? I can't find any explicit means of advising the persistent store about indexes. I see some nice advantages of Core Data in my iPhone app. The faulting and other memory considerations allow for efficient database retrievals for tableview queries without setting arbitrary limits. The object graph management allows me to easily traverse entities without writing lots of SQL. Migration features will be nice in the future. On the other hand, in a limited resource environment (iPhone) I worry that an automatically generated database will be bloated with metadata, unnecessary inverse relationships, inefficient attribute datatypes, etc. Should I dive in or proceed with caution?

    Read the article

  • WinQual: Why would WER not accept code-signing certificates?

    - by Ian Boyd
    In 2005 i tried to establish a WinQual account with Microsoft, so i could pick up our (if any) crash dump files submitted automatically through Windows Error Reporting (WER). i was not allowed to have my crash dumps, because i don't have a Verisign certificate. Instead i have a cheaper one, generated by a Verisign subsidiary: Thawte. The method in which you join is: you digitally sign a sample exe they provide. This proves that you are the same signer that signed apps that they got crash dumps from in the wild. Cryptographically, the private key is needed to generate a digital signature on an executable. Only the holder of that private key can create a signature with for the matching public key. It doesn't matter who generated that private key. That includes certificates that are generated from: self-signing Wells Fargo DigiCert SecureTrust Trustware QuoVadis GoDaddy Entrust Cybertrust GeoTrust GlobalSign Comodo Thawte Verisign Yet Microsof's WinQual only accepts digital certificates generated by Verisign. Not even Verisign's subsidiaries are good enough (Thawte). Can anyone think of any technical, legal or ethical reason why Microsoft doesn't want to accept code-signing certificates? The WinQual site says: Why Is a Digital Certificate Required for Winqual Membership? A digital certificate helps protect your company from individuals who seek to impersonate members of your staff or who would otherwise commit acts of fraud against your company. Using a digital certificate enables proof of an identity for a user or an organization. Is somehow a Thawte digital certificate not secure? Two years later, i sent a reminder notice to WinQual that i've been waiting to be able to get at my crash dumps. The response from WinQual team was: Hello, Thanks for the reminder. We have notified the appropriate people that this is still a request. In 2008 i asked this question in a Microsoft support forum, and the response was: We are only setup to accept VeriSign Certificates at this point. We have not had an overwhelming demand to support other types of certificates. What can it possibly mean to not be "setup" to accept other kinds of certificates? If the thumbprint of the key that signed the WinQual.exe test app is the same as the thumbprint that signed the executable who's crash dump you got in the wild: it is proven - they are my crash dumps, give them to me. And it's not like there's a special API to check if a Verisign digital signature is valid, as opposed to all other digital signatures. A valid signature is valid no matter who generated the key. Microsoft is free to not trust the signer, but that's not the same as identity. So that is my question, can anyone think of any practical reason why WinQual isn't setup to support digital signatures? One person theorized that the answer is that they're just lazy: Not that I know but I would assume that the team running the winQual system is a live team and not a dev team - as in, personality and skillset geared towards maintenance of existing systems. I could be wrong though. They don't want to do work to change it. But can anyone think of anything that would need to be changed? It's the same logic no matter what generated the key: "does the thumbprint match". What am i missing?

    Read the article

  • Succinct introduction to C++/CLI for C#/Haskell/F#/JS/C++/... programmer

    - by Henrik
    Hello everybody, I'm trying to write integrations with the operating system and with things like active directory and Ocropus. I know a bunch of programming languages, including those listed in the title. I'm trying to learn exactly how C++/CLI works, but can't find succinct, exact and accurate descriptions online from the searching that I have done. So I ask here. Could you tell me the pitfalls and features of C++/CLI? Assume I know all of C# and start from there. I'm not an expert in C++, so some of my questions' answers might be "just like C++", but could say that I am at C#. I would like to know things like: Converting C++ pointers to CLI pointers, Any differences in passing by value/doubly indirect pointers/CLI pointers from C#/C++ and what is 'recommended'. How do gcnew, __gc, __nogc work with Polymorphism Structs Inner classes Interfaces The "fixed" keyword; does that exist? Compiling DLLs loaded into the kernel with C++/CLI possible? Loaded as device drivers? Invoked by the kernel? What does this mean anyway (i.e. to load something into the kernel exactly; how do I know if it is?)? L"my string" versus "my string"? wchar_t? How many types of chars are there? Are we safe in treating chars as uint32s or what should one treat them as to guarantee language indifference in code? Finalizers (~ClassName() {}) are discouraged in C# because there are no garantuees they will run deterministically, but since in C++ I have to use "delete" or use copy-c'tors as to stack allocate memory, what are the recommendations between C#/C++ interactions? What are the pitfalls when using reflection in C++/CLI? How well does C++/CLI work with the IDisposable pattern and with SafeHandle, SafeHandleZeroOrMinusOneIsInvalid? I've read briefly about asynchronous exceptions when doing DMA-operations, what are these? Are there limitations you impose upon yourself when using C++ with CLI integration rather than just doing plain C++? Attributes in C++ similar to Attributes in C#? Can I use the full meta-programming patterns available in C++ through templates now and still have it compile like ordinary C++? Have you tried writing C++/CLI with boost? What are the optimal ways of interfacing the boost library with C++/CLI; can you give me an example of passing a lambda expression to an iterator/foldr function? What is the preferred way of exception handling? Can C++/CLI catch managed exceptions now? How well does dynamic IL generation work with C++/CLI? Does it run on Mono? Any other things I ought to know about?

    Read the article

  • Challenge: Neater way of currying or partially applying C#4's string.Join

    - by Damian Powell
    Background I recently read that .NET 4's System.String class has a new overload of the Join method. This new overload takes a separator, and an IEnumerable<T> which allows arbitrary collections to be joined into a single string without the need to convert to an intermediate string array. Cool! That means I can now do this: var evenNums = Enumerable.Range(1, 100) .Where(i => i%2 == 0); var list = string.Join(",", evenNums); ...instead of this: var evenNums = Enumerable.Range(1, 100) .Where(i => i%2 == 0) .Select(i => i.ToString()) .ToArray(); var list = string.Join(",", evenNums); ...thus saving on a conversion of every item to a string, and then the allocation of an array. The Problem However, being a fan of the functional style of programming in general, and method chaining in C# in particular, I would prefer to be able to write something like this: var list = Enumerable.Range(1, 100) .Where(i => i%2 == 0) .string.Join(","); This is not legal C# though. The closest I've managed to get is this: var list = Enumerable.Range(1, 100) .Where(i => i%2 == 0) .ApplyTo( Functional.Curry<string, IEnumerable<object>, string> (string.Join)(",") ); ...using the following extension methods: public static class Functional { public static TRslt ApplyTo<TArg, TRslt>(this TArg arg, Func<TArg, TRslt> func) { return func(arg); } public static Func<T1, Func<T2, TResult>> Curry<T1, T2, TResult>(this Func<T1, T2, TResult> func) { Func<Func<T1, T2, TResult>, Func<T1, Func<T2, TResult>>> curried = f => x => y => f(x, y); return curried(func); } } This is quite verbose, requires explicit definition of the parameters and return type of the string.Join overload I want to use, and relies upon C#4's variance features because we are defining one of the arguments as IEnumerable<object> rather than IEnumerable<int>. The Challenge Can you find a neater way of achieving this using the method-chaining style of programming?
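
    One hedged sketch of a possible answer, assuming a plain extension method is acceptable (JoinWith is a made-up name): wrap the new string.Join overload so the separator stays inside the chain.

        using System.Collections.Generic;

        public static class EnumerableJoinExtensions
        {
            // Hypothetical helper: forwards to .NET 4's string.Join(string, IEnumerable<T>)
            // so the join reads left-to-right in a method chain.
            public static string JoinWith<T>(this IEnumerable<T> source, string separator)
            {
                return string.Join(separator, source);
            }
        }

        // Usage:
        // var list = Enumerable.Range(1, 100)
        //                      .Where(i => i % 2 == 0)
        //                      .JoinWith(",");

    This gives up the point-free currying the question is exploring, but keeps the chain readable, so it may or may not count as an answer to the challenge as stated.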

    Read the article

  • Recommended ASP.NET Shared Hosting (USA)

    - by coffeeaddict
    Ok, I have to admit I'm getting fed up with www.discountasp.net's pricing model, and this annoyance has built up over the past 8 years or so. I've been with them for years and absolutely love them on the technical side, however it's getting ridiculously expensive for so little that you get. I mean here's my scenario: 1) I am running 2 SQL Server databases which cost me $10 each per month, so that's $20/month for 2, and I only get 500 MB disk space, which is horrible. 2) I am paying $10/mo just for the hosting itself, for which I only get 1 gig of disk space! I mean come on! 3) I am simply running 2 small apps (Screwturn Wiki & Subtext Blog)...so I don't really care if it's up 99% or not, it's not worth paying a total of $300 just to keep these 2 apps running over discountasp.net. Anyone else feel the same? Yes, I know they have great support, probably have great servers running behind this, but in the end I really don't care as long as my site is up 95% or better. Yes, the hosting toolset rocks. But you know, I bet you I can find a similar set somewhere else. I like how I can totally control IIS 7 at discountasp and I can control my own app pool etc. That's very powerful and essential. But does anyone have any good alternatives to discountasp that give me close to the same at a much more reasonable cost point? I mean http://www.m6.net/prices.aspx gives you 10 SQL Databases for $7 and 200 gigs disk space! I don't know about their tools or support, but just looking at those numbers and some other hosts I've seen, I feel that discountasp.net is way out of line. They don't even offer any purchasing discounts; for example it would be nice if my 2nd SQL Server were only $5/month instead of $10...stuff like this, to make it much more realistic and fair. Opinions (people who do have discountasp.net, people who have left them, or people who have another host they like)??? But geez, $300 just to host a couple DBs and lightweight open source apps? Not worth the price they are charging. I'm almost at a price point that enables me to get a decent dedicated server! I really don't care about beta ASP.NET frameworks support. Not a big deal to me. If you have alternative suggestions rather than your experience with discountasp, I'd like to know how their toolset is. Do you have complete control over your DB in terms of adding users, and same goes for the web app pool, etc.? Discountasp.net's control panel rocks. I don't want to lose the ability to at least control and add virtual directories, recycle my dedicated app pool myself, and back up my SQL database myself, through tools, which is what discountasp does give you. I'd also want to know that the hoster at least gets the latest and greatest in terms of non-beta ASP.NET related frameworks available to its shared hosters.

    Read the article

  • Would Ruby on Rails be appropriate for this Foreign Language project?

    - by Lynne Overesch-Maister
    I'm a Spanish professor & computer groupie. About 15 years ago, I authored in HyperCard a series of verb conjugation programs that are now completely out of date with respect to Mac OS X. I would like to redo these programs myself because I had a lot of fun doing them last time (mostly I coded while my son played in Leaps and Bounds, you know, one of those places where parents take their kids & let them run wild through the tubes...). Colleagues have mentioned using Flash, Director, and various other solutions, but I saw a presentation on RoR at our SIDLIT conference today, and was inspired. I will be parsing and comparing strings (and there are other features on top of that, but that is the main one), "adding" strings relationally indexed in some kind of database(s). It will also have to handle various foreign characters (accents, upside-down question marks, etc.). On top of the main process of the program, it will have to provide a practice vs. test mode, keep track of specific answers as well as totals right/wrong, and print a report. Would this be easier and/or more efficiently done in RoR than in other languages? I am pretty sure that it will work on a Microsoft server, right? Because I think that is where most of our stuff is. I would be programming either on a Mac or a PC, whichever you think is easier. So, in summary, is RoR the way for me to go with this project? If I have some (little) experience programming in HyperCard and C, should I be able to pick RoR up fairly quickly? What things will I need to start (I already saw something called Redhills foreign key migration plugin, which I assume would be beneficial)? I still have my old scripts from HyperCard; however, what I would really like to do is to combine all six of my former tense-specific programs into one larger program. I figure that it wouldn't be too hard to reference the individual tenses in some way--could that be a class? Many thanks for any help you can give me on this forum.

    Read the article

  • Maven best practice for generating multiple jars with different/filtered classes?

    - by jaguard
    I developed a Java utility library (similar to Apache Commons) that I use in various projects. In addition to fat clients, I also use it for mobile clients (PDA with J9 Foundation profile). Over time the library, which started as a single project, spread over multiple packages. As a result I end up with a lot of functionality that is not really needed in all the projects. Since this library is also used inside some mobile/PDA projects, I need a way to collect just the used classes and generate the actual specialized jars. Currently, in the projects that are using this library, I have Ant jar tasks that generate (from the utility project) the specialized jar files (ex: my-util-1.0-pda.jar, my-util-1.0-rcp.jar) using include/exclude jar task features. This is mostly needed due to the generated jar size constraints for the mobile projects. Migrating now to Maven, I wonder if there are any best practices to arrive at something similar, so I consider the following scenarios: [1] - in addition to the main jar artifact (my-lib-1.0.jar), also generate inside the my-lib project the separate/specialized artifacts using classifiers (ex: my-lib-1.0-pda.jar) via Maven Jar Plugin or Maven Assembly Plugin filtering/includes ... I'm not very comfortable with this approach since it pollutes the library with library consumers' demands (filters). [2] - Create additional Maven projects for all the specialized clients/projects that will "wrap" "my-lib" and generate the filtered jar artifacts (ex: my-lib-wrapper-pda-1.0 ...etc). As a result, these wrapper projects will include the filtering (to generate the filtered artifact) and will depend just on the "my-lib" project, and the client projects will depend on my-lib-wrapper-xxx-1.0 instead of my-lib-1.0. This approach may look problematic since, even though it leaves the "my-lib" project intact (with no additional classifiers & artifacts), it basically doubles the number of projects, since for every client project I'll have one just to collect the needed classes from the "my-util" library (the "my-pda-app" project will need a "my-lib-wrapper-for-my-pda-app" project/dependency). [3] - In every client project that uses the library (ex: my-pda-app), add some specialized Maven plugins to trim out (when generating the final artifact/package) the unneeded classes (ex: maven-assembly-plugin, maven-jar-plugin, proguard-maven-plugin). What is the best practice for solving this kind of problem in the "Maven way"?
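
    For what it's worth, a minimal sketch of scenario [1] with the Maven Jar Plugin: an extra execution in my-lib's pom.xml that builds a classified jar from an include filter. The classifier name and package paths below are hypothetical, and this does put consumer-specific filtering inside my-lib, which is exactly the trade-off described above:

        <!-- inside my-lib's pom.xml, under <build><plugins> -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-jar-plugin</artifactId>
          <executions>
            <execution>
              <id>pda-jar</id>
              <phase>package</phase>
              <goals>
                <goal>jar</goal>
              </goals>
              <configuration>
                <classifier>pda</classifier>
                <!-- hypothetical packages: keep only what the PDA clients need -->
                <includes>
                  <include>com/myorg/util/core/**</include>
                  <include>com/myorg/util/io/**</include>
                </includes>
              </configuration>
            </execution>
          </executions>
        </plugin>

    A client project would then reference the trimmed jar by adding <classifier>pda</classifier> to its dependency on my-lib.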

    Read the article
